title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
page content not rendering properly - python / django? | 39,276,536 | <p>I am running Ubuntu 16.04 with the Apache2 web server.</p>
<p>I am trying to setup this site, downloaded from github: <a href="https://github.com/mozilla/http-observatory-website" rel="nofollow">https://github.com/mozilla/http-observatory-website</a></p>
<p>Unfortunately, there are no instructions to follow :(</p>
<p>If I put these files in the working/public directory and point to index.html from the browser, i.e. <a href="http://localhost/laravel/work/public/" rel="nofollow">http://localhost/laravel/work/public/</a></p>
<p>The page comes up, but what seems to be Django code shows up. There are also two Python files and a Makefile in the same directory, but I'm not sure what to do with those, or whether anything needs to be compiled, which I also don't have a clue how to do.</p>
<p>Here is a screenshot of what it looks like: <a href="https://snag.gy/4npmjY.jpg" rel="nofollow">https://snag.gy/4npmjY.jpg</a></p>
<p><a href="http://i.stack.imgur.com/DSaKR.png" rel="nofollow"><img src="http://i.stack.imgur.com/DSaKR.png" alt="enter image description here"></a></p>
<p>Any assistance would be much appreciated!</p>
| -1 | 2016-09-01T16:24:59Z | 39,278,131 | <p>Based on the source, you'd need the <a href="http://jinja.pocoo.org/" rel="nofollow">Jinja templating engine</a>. The easiest way to install it is by running <code>pip install jinja2</code>. </p>
<p>That said, I do recommend that you follow the virtualenv instructions on the DigitalOcean tutorial posted as a comment. It is going to be much cleaner and easier to maintain. </p>
<p>The idea behind the virtual environment is to install all the dependencies you could possibly need inside of it, instead of into your system-wide Python installation.</p>
| 1 | 2016-09-01T18:01:30Z | [
"python",
"jinja2"
] |
page content not rendering properly - python / django? | 39,276,536 | <p>I am running Ubuntu 16.04 with the Apache2 web server.</p>
<p>I am trying to setup this site, downloaded from github: <a href="https://github.com/mozilla/http-observatory-website" rel="nofollow">https://github.com/mozilla/http-observatory-website</a></p>
<p>Unfortunately, there are no instructions to follow :(</p>
<p>If I put these files in the working/public directory and point to index.html from the browser, i.e. <a href="http://localhost/laravel/work/public/" rel="nofollow">http://localhost/laravel/work/public/</a></p>
<p>The page comes up, but what seems to be Django code shows up. There are also two Python files and a Makefile in the same directory, but I'm not sure what to do with those, or whether anything needs to be compiled, which I also don't have a clue how to do.</p>
<p>Here is a screenshot of what it looks like: <a href="https://snag.gy/4npmjY.jpg" rel="nofollow">https://snag.gy/4npmjY.jpg</a></p>
<p><a href="http://i.stack.imgur.com/DSaKR.png" rel="nofollow"><img src="http://i.stack.imgur.com/DSaKR.png" alt="enter image description here"></a></p>
<p>Any assistance would be much appreciated!</p>
| -1 | 2016-09-01T16:24:59Z | 39,324,077 | <p>Cloning the repository this way solved the problem and gave me the right files to add to /var/www:</p>
<p>git clone -b gh-pages <a href="https://github.com/mozilla/http-observatory-website.git" rel="nofollow">https://github.com/mozilla/http-observatory-website.git</a></p>
| 0 | 2016-09-05T04:46:22Z | [
"python",
"jinja2"
] |
django frontend login: TypeError at /auth/login dict expected at most 1 arguments, got 2 | 39,276,645 | <p>In the project I'm working on, the admin site is working fine and I want to test the frontend. But I'm getting:</p>
<p>TypeError at /auth/login</p>
<p>dict expected at most 1 arguments, got 2</p>
<p>Here is the full traceback:</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/auth/login?next=/
Django Version: 1.10
Python Version: 2.7.10
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.gis',
'django_nose',
'widget_tweaks',
'rest_framework',
'rest_framework_gis',
'backbone_app',
'accounts',
'map')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Traceback:
File "D:\SHK\ElektroClean\lib\site-packages\django\core\handlers\exception.py" in inner
39. response = get_response(request)
File "D:\SHK\ElektroClean\lib\site-packages\django\core\handlers\base.py" in _legacy_get_response
249. response = self._get_response(request)
File "D:\SHK\ElektroClean\lib\site-packages\django\core\handlers\base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "D:\SHK\ElektroClean\lib\site-packages\django\core\handlers\base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "D:\SHK\ElektroClean\accounts\views.py" in login_view
47. return render(request, 'login.html', context)
File "D:\SHK\ElektroClean\lib\site-packages\django\shortcuts.py" in render
30. content = loader.render_to_string(template_name, context, request, using=using)
File "D:\SHK\ElektroClean\lib\site-packages\django\template\loader.py" in render_to_string
68. return template.render(context, request)
File "D:\SHK\ElektroClean\lib\site-packages\django\template\backends\django.py" in render
64. context = make_context(context, request, autoescape=self.backend.engine.autoescape)
File "D:\SHK\ElektroClean\lib\site-packages\django\template\context.py" in make_context
267. context.push(original_context)
File "D:\SHK\ElektroClean\lib\site-packages\django\template\context.py" in push
59. return ContextDict(self, *dicts, **kwargs)
File "D:\SHK\ElektroClean\lib\site-packages\django\template\context.py" in __init__
18. super(ContextDict, self).__init__(*args, **kwargs)
Exception Type: TypeError at /auth/login
Exception Value: dict expected at most 1 arguments, got 2
</code></pre>
<p>This is the urls.py:</p>
<pre><code>urlpatterns = [
    url(r'^map/', include('map.urls')),
    url(r'^admin/', include(admin.site.urls)),
    url(r'^auth/', include('accounts.urls')),
    url(r'^', include('backbone_app.urls')),
]
</code></pre>
<p>and the other:</p>
<pre><code>app_name = 'accounts'
urlpatterns = [
    url(r'^auth/', views.auth_view, name='auth_view'),
    url(r'login', views.login_view, name='login'),
    url(r'logout', views.logout_view, name='logout'),
    url(r'invalid', views.invalid_view, name='invalid'),
]
</code></pre>
<p>This is the views.py:</p>
<pre><code>def login_view(request):
    context = RequestContext(request)
    if request.method == "POST":
        username = request.POST['username']
        password = request.POST['password']
        user = authenticate(username=username, password=password)
        if user is not None:
            context['user'] = user
            if user.is_active:
                login(request, user)
                return HttpResponseRedirect('/')  # Redirect to a success page
            else:
                return HttpResponseRedirect('auth/invalid')
        else:
            return HttpResponse('Passwort oder Username falsch')
    return render(request, 'login.html', context)
</code></pre>
<p>Where's my problem? Please help.</p>
<p>Thanks</p>
| 0 | 2016-09-01T16:31:32Z | 39,276,760 | <p>The <code>render</code> function is expecting a dictionary, not a <code>RequestContext</code> object. Change the return line in your <code>login_view</code> to something like this:</p>
<pre><code>return render(request, 'login.html', {'context': context})
</code></pre>
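<p>To see where the message itself comes from, here is a minimal sketch of the plain Python behaviour, independent of Django (the dictionary contents are invented for illustration): the traceback ends in <code>ContextDict.__init__</code> calling <code>dict.__init__</code> with two positional mappings, and <code>dict</code> accepts at most one.</p>

```python
# dict() accepts at most one positional mapping; passing two reproduces the
# "dict expected at most 1 arguments, got 2" TypeError from the traceback.
try:
    dict({'user': 'alice'}, {'next': '/'})   # two positional mappings
    raised = False
except TypeError:
    raised = True

print(raised)  # True
```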
| 1 | 2016-09-01T16:39:16Z | [
"python",
"django"
] |
Python Pandas ValueError on simple query | 39,276,650 | <p>The following line causes a ValueError (Pandas 17.1), and I'm trying to understand why.</p>
<pre><code>x = (matchdf['ANPR Matched_x'] == 1)
</code></pre>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
<p>I'm trying to use it for the following conditional assignment:</p>
<pre><code>matchdf.loc[x, 'FullMatch'] = 1
</code></pre>
<p>But I can't get past the previous issue.</p>
<p>I'm sure I've done this kind of thing dozens of times before, and I can't see why it should matter what is in the dataframe, but perhaps it does? or more likely, I'm probably making a silly mistake I just can't see!</p>
<p>Thanks for any help.</p>
<p>EDIT: For more context here's some preceding code:</p>
<pre><code>inpairs = []
for m in inmatchedpairs:
    # more code
    p = {'Type In': mtype, 'Best In Time': besttime, 'Best G In Time': bestgtime,
         'Reg In': reg, 'ANPR Matched': anprmatch, 'ANPR Match Key': anprmatchkey}
    inpairs.append(p)

outpairs = []
for m in outmatchedpairs:
    # more code
    p = {'Type Out': mtype, 'Best Out Time': besttime, 'Best G Out Time': bestgtime,
         'Reg Out': reg, 'ANPR Matched': anprmatch, 'ANPR Match Key': anprmatchkey}
    outpairs.append(p)

indf = pd.DataFrame(inpairs)
outdf = pd.DataFrame(outpairs)
matchdf = pd.merge(indf, outdf, how='outer', on='ANPR Match Key')
matchdf['FullMatch'] = 0
x = (matchdf['ANPR Matched_x'] == 0)
</code></pre>
<p>I get the error on the last line.</p>
| 0 | 2016-09-01T16:32:06Z | 39,276,871 | <p>Use <code>loc</code> to set the values.</p>
<pre><code>matchdf.loc[matchdf['ANPR Matched_x'] == 1, 'FullMatch'] = 1
</code></pre>
<p><strong>Example</strong></p>
<pre><code>df = pd.DataFrame({'ANPR Matched_x': [0, 1, 1, 0], 'Full Match': [False] * 4})
>>> df
   ANPR Matched_x Full Match
0               0      False
1               1      False
2               1      False
3               0      False
df.loc[df['ANPR Matched_x'] == 1, 'FullMatch'] = 1
>>> df
   ANPR Matched_x Full Match  FullMatch
0               0      False        NaN
1               1      False          1
2               1      False          1
3               0      False        NaN
</code></pre>
| 2 | 2016-09-01T16:45:35Z | [
"python",
"pandas",
"numpy"
] |
Python Pandas ValueError on simple query | 39,276,650 | <p>The following line causes a ValueError (Pandas 17.1), and I'm trying to understand why.</p>
<pre><code>x = (matchdf['ANPR Matched_x'] == 1)
</code></pre>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
<p>I'm trying to use it for the following conditional assignment:</p>
<pre><code>matchdf.loc[x, 'FullMatch'] = 1
</code></pre>
<p>But I can't get past the previous issue.</p>
<p>I'm sure I've done this kind of thing dozens of times before, and I can't see why it should matter what is in the dataframe, but perhaps it does? or more likely, I'm probably making a silly mistake I just can't see!</p>
<p>Thanks for any help.</p>
<p>EDIT: For more context here's some preceding code:</p>
<pre><code>inpairs = []
for m in inmatchedpairs:
    # more code
    p = {'Type In': mtype, 'Best In Time': besttime, 'Best G In Time': bestgtime,
         'Reg In': reg, 'ANPR Matched': anprmatch, 'ANPR Match Key': anprmatchkey}
    inpairs.append(p)

outpairs = []
for m in outmatchedpairs:
    # more code
    p = {'Type Out': mtype, 'Best Out Time': besttime, 'Best G Out Time': bestgtime,
         'Reg Out': reg, 'ANPR Matched': anprmatch, 'ANPR Match Key': anprmatchkey}
    outpairs.append(p)

indf = pd.DataFrame(inpairs)
outdf = pd.DataFrame(outpairs)
matchdf = pd.merge(indf, outdf, how='outer', on='ANPR Match Key')
matchdf['FullMatch'] = 0
x = (matchdf['ANPR Matched_x'] == 0)
</code></pre>
<p>I get the error on the last line.</p>
| 0 | 2016-09-01T16:32:06Z | 39,278,338 | <p>If you have this kind of error, first check that your dataframe contains what you think it does.</p>
<p>I was stupidly ending up with some Series objects being added to one of the columns which should have contained ints!</p>
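<p>A hedged sketch of how that bug produces exactly this error (the column values below are invented to reproduce it, not taken from the question's data): once a cell accidentally holds a whole <code>Series</code>, the column gets object dtype, and the elementwise <code>==</code> comparison trips over the nested Series.</p>

```python
import pandas as pd

# One cell accidentally holds a Series instead of an int -> object dtype.
bad = pd.DataFrame({'ANPR Matched_x': [pd.Series([1, 2]), 0, 1]})

try:
    bad['ANPR Matched_x'] == 0   # comparison reaches the nested Series
    raised = False
except ValueError:               # "The truth value of a Series is ambiguous..."
    raised = True

print(raised)
```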
| 1 | 2016-09-01T18:14:12Z | [
"python",
"pandas",
"numpy"
] |
How to connect to MS SQL Server database remotely by IP in Python using mssql and pymssql | 39,276,703 | <p>How can I connect to a MS SQL Server database remotely by IP in Python using the mssql and pymssql modules?
To connect locally I use <code>link = mssql+pymssql://InstanceName/DataBaseName</code>.</p>
<p>I enabled TCP/IP Network Configurations.
But how can I get the connection link?</p>
<p>Thank you.</p>
| 0 | 2016-09-01T16:35:20Z | 39,276,739 | <p>You need to create a <code>Connection</code> object:</p>
<pre><code>import pymssql
ip = '127.0.0.1'
database_connection = pymssql.connect(host=ip, port=1433, user='foo', password='bar')  # the parameter is 'user', not 'username'
</code></pre>
<p>If you're using SQLAlchemy, or another ORM that supports connection strings, you can also use the following format for the connection string.</p>
<pre><code>'mssql+pymssql://{user}:{password}@{host}:{port}'
</code></pre>
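<p>As a concrete sketch of that format (all values below are placeholders; note that SQLAlchemy URLs typically also append the database name after the port, which the template above omits):</p>

```python
# Placeholder credentials/host for illustration only.
params = {'user': 'foo', 'password': 'bar',
          'host': '127.0.0.1', 'port': 1433, 'db': 'mydb'}

# The same format as above, extended with the database name at the end.
url = 'mssql+pymssql://{user}:{password}@{host}:{port}/{db}'.format(**params)
print(url)  # mssql+pymssql://foo:bar@127.0.0.1:1433/mydb
```

Such a URL can then be handed to SQLAlchemy's <code>create_engine</code>.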
| 4 | 2016-09-01T16:37:56Z | [
"python",
"sql-server",
"pymssql"
] |
Splitting a list into uneven tuples | 39,276,740 | <p>I'm trying to split a list of strings into a list of tuples of uneven length containing these strings, with each tuple containing strings initially separated with blank strings. Basically I'd need the parameterized split that I could apply to lists. If my initial list looks like:</p>
<pre><code>init = ['a', 'b', '', 'c', 'd e', 'fgh', '', 'ij', '', '', 'k', 'l', '']
</code></pre>
<p>The last element of this list is always a closing <code>''</code>. There can be consecutive <code>''</code>s which shall be considered as singles.
The result I need is:</p>
<pre><code>end = [('a', 'b'), ('c', 'd e', 'fgh'), ('ij',), ('k', 'l')]
</code></pre>
<p>I already have ugly code that does the job and gets out of range once the list is fully popped out:</p>
<pre><code>end = []
while init[-1] == u'':
    init.pop()
    l = []
    while init[-1] != u'':
        l.append(init.pop())
    end.append(tuple(l))
</code></pre>
<p>I'd like to use comprehensions, but having unsuccessfully tried unpacking argument lists, reversing self-referenced lists, using <code>deque</code> queues, and various code smells, I'm now doubting whether it makes sense to look for a (nested) comprehension solution.</p>
| 6 | 2016-09-01T16:38:00Z | 39,276,854 | <p>You can use the <code>itertools.groupby</code> function to group the elements based on whether they are empty, like this:</p>
<pre><code>>>> from itertools import groupby
>>> init = ['a', 'b', '', 'c', 'd e', 'fgh', '', 'ij', '', '', 'k', 'l', '']
>>> [tuple(g) for valid, g in groupby(init, key=lambda x: len(x) != 0) if valid]
[('a', 'b'), ('c', 'd e', 'fgh'), ('ij',), ('k', 'l')]
</code></pre>
<p>This basically groups the elements based on whether their length is zero. Items with nonzero length are put in the same group until an element from the other group is met. The <code>key</code> function returns <code>True</code> for the group of elements whose length is not equal to zero, and <code>False</code> otherwise. We ignore the groups with <code>False</code> (hence the check <code>if valid</code>).</p>
| 4 | 2016-09-01T16:44:37Z | [
"python",
"list",
"python-2.7",
"group",
"list-comprehension"
] |
Splitting a list into uneven tuples | 39,276,740 | <p>I'm trying to split a list of strings into a list of tuples of uneven length containing these strings, with each tuple containing strings initially separated with blank strings. Basically I'd need the parameterized split that I could apply to lists. If my initial list looks like:</p>
<pre><code>init = ['a', 'b', '', 'c', 'd e', 'fgh', '', 'ij', '', '', 'k', 'l', '']
</code></pre>
<p>The last element of this list is always a closing <code>''</code>. There can be consecutive <code>''</code>s which shall be considered as singles.
The result I need is:</p>
<pre><code>end = [('a', 'b'), ('c', 'd e', 'fgh'), ('ij',), ('k', 'l')]
</code></pre>
<p>I already have ugly code that does the job and gets out of range once the list is fully popped out:</p>
<pre><code>end = []
while init[-1] == u'':
    init.pop()
    l = []
    while init[-1] != u'':
        l.append(init.pop())
    end.append(tuple(l))
</code></pre>
<p>I'd like to use comprehensions, but having unsuccessfully tried unpacking argument lists, reversing self-referenced lists, using <code>deque</code> queues, and various code smells, I'm now doubting whether it makes sense to look for a (nested) comprehension solution.</p>
| 6 | 2016-09-01T16:38:00Z | 39,277,136 | <p>Here is a more concise and general approach with <code>groupby</code> if you want to split your list with a special delimiter:</p>
<pre><code>>>> delimiter = ''
>>> [tuple(g) for k, g in groupby(init, delimiter.__eq__) if not k]
[('a', 'b'), ('c', 'd e', 'fgh'), ('ij',), ('k', 'l')]
</code></pre>
| 4 | 2016-09-01T16:59:34Z | [
"python",
"list",
"python-2.7",
"group",
"list-comprehension"
] |
Pass user replies across different pages | 39,276,874 | <p>I'd like to create a small web application with Django.</p>
<p>It's about different questions: I have ~20 questions/statements that can be answered with "Agree"/"Don't agree"/"Neutral". Now I need a way to pass the data, because at the end, when the user has answered all questions, I have to analyse the input.</p>
<p>But I don't know what's the best way to save the data across the different pages/questions. I guess <a href="https://django-formtools.readthedocs.io/en/latest/wizard.html#how-it-works" rel="nofollow">Form wizard</a> could be a good idea. How many questions I have is saved in the database.</p>
<p>But I don't know how to create one <code>Form</code> class for every question dynamically. It would be nonsense to hardcode the form like <a href="https://django-formtools.readthedocs.io/en/latest/wizard.html#defining-form-classes" rel="nofollow">here</a>.</p>
<p>So, is there a way I can use it so that, depending on how many questions are in the database, more or fewer <code>Form</code> classes get created?</p>
<p>Or do you have a whole different solution for my problem?</p>
| 0 | 2016-09-01T16:46:09Z | 39,277,913 | <p>I've accomplished this by having all questions be part of the same form, but hiding all questions except for the current one using Javascript. It'd look something like this:
1) Render the form with a class that gives all questions "display: none" except for the first one.
2) Have "Previous" and "Next" buttons that don't submit, but rather hide the current question and move on to the next/previous.
3) If someone clicks "Next" on the second-to-last question, display the last question and also replace the "Next" button with a "Submit" button.</p>
<p>This approach has the advantage that you don't need to worry about pagination on the server side, and it also avoids multiple saves (in case form validation, database queries, or data-processing times are an issue).</p>
<p>If you need to process and re-process questions before giving out the next ones, the approach I've suggested might not be ideal. But otherwise, I think this approach might work nicely. </p>
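<p>The show/hide bookkeeping described above reduces to a small pure function. The answer implements it in JavaScript; the sketch below uses Python only to illustrate the state machine, and all names are invented for this example.</p>

```python
# Decide which question is visible and which buttons to render, given the
# index of the current question and the total number of questions.
def step_state(current, total):
    return {
        'show': current,                     # index of the visible question
        'has_prev': current > 0,             # render a "Previous" button?
        'has_next': current < total - 1,     # render a "Next" button?
        'show_submit': current == total - 1  # swap "Next" for "Submit" at the end
    }

print(step_state(19, 20)['show_submit'])  # True
```

Clicking "Next" would then hide question <code>current</code>, un-hide question <code>current + 1</code>, and re-render the buttons from the new state.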
| 0 | 2016-09-01T17:48:18Z | [
"python",
"django"
] |
How can I limit matplotlib contains to one object? | 39,276,925 | <p>I'm working on a simple GUI with draggable lines to allow a user to visually window some plotted data. </p>
<p>Working with <a href="http://matplotlib.org/users/event_handling.html" rel="nofollow">matplotlib's event handling documentation</a> I've been able to implement an initial version of the draggable window lines:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
class DraggableLine:
    def __init__(self, orientation, ax, position):
        if orientation.lower() == 'horizontal':
            self.myline, = ax.plot(ax.get_xlim(), np.array([1, 1])*position)
            self.orientation = orientation.lower()
        elif orientation.lower() == 'vertical':
            self.myline, = ax.plot(np.array([1, 1])*position, ax.get_ylim())
            self.orientation = orientation.lower()
        else:
            # throw an error
            pass

        self.parentfig = self.myline.figure.canvas
        self.parentax = ax
        self.clickpress = self.parentfig.mpl_connect('button_press_event', self.on_click)  # Execute on mouse click
        self.clicked = False

    def on_click(self, event):
        # Executed on mouse click
        if event.inaxes != self.parentax: return  # See if the mouse is over the parent axes object

        # See if the click is on top of this line object
        contains, attrs = self.myline.contains(event)
        if not contains: return

        self.mousemotion = self.parentfig.mpl_connect('motion_notify_event', self.on_motion)
        self.clickrelease = self.parentfig.mpl_connect('button_release_event', self.on_release)
        self.clicked = True

    def on_motion(self, event):
        # Executed on mouse motion
        if not self.clicked: return  # See if we've clicked yet
        if event.inaxes != self.parentax: return  # See if we're moving over the parent axes object

        if self.orientation == 'vertical':
            self.myline.set_xdata(np.array([1, 1])*event.xdata)
            self.myline.set_ydata(self.parentax.get_ylim())
        elif self.orientation == 'horizontal':
            self.myline.set_xdata(self.parentax.get_xlim())
            self.myline.set_ydata(np.array([1, 1])*event.ydata)

        self.parentfig.draw()

    def on_release(self, event):
        self.clicked = False
        self.parentfig.mpl_disconnect(self.mousemotion)
        self.parentfig.mpl_disconnect(self.clickrelease)
        self.parentfig.draw()
</code></pre>
<p>Which generates the lines that behave as expected:</p>
<pre><code>fig = plt.figure()
ax = fig.add_subplot(111)
vl1 = DraggableLine('vertical', ax, 3)
vl2 = DraggableLine('vertical', ax, 6)
ax.set_xlim([0, 10])
plt.show()
</code></pre>
<p>However, when the lines are stacked the ability to move a single line is lost because <code>matplotlib.lines.Line2D.contains()</code> does not know that one object is obscured by another. So we're left dragging a chunk of objects around until the plot is closed.</p>
<p>Is there an already implemented method to mitigate this issue? If not, I think one approach could be to query the children of the parent axes for instances of the <code>DraggableLine</code> class on mouse release, check their positions, and connect/disconnect the <code>'button_press_event'</code> where necessary. I'm not sure if that makes the most sense computation time wise.</p>
| 2 | 2016-09-01T16:48:47Z | 39,281,093 | <p>One approach could be to check the children of the axes for objects that will fire their respective "move" callbacks, see which one is rendered the topmost and only move that one.</p>
<p>For the above example I've defined an additional method:</p>
<pre><code>def shouldthismove(self, event):
    # Check to see if this object has been clicked on
    contains, attrs = self.myline.contains(event)
    if not contains:
        # We haven't been clicked
        timetomove = False
    else:
        # See how many draggable objects contain this event
        firingobjs = []
        for child in self.parentax.get_children():
            if child._label == 'dragobj':
                contains, attrs = child.contains(event)
                if contains:
                    firingobjs.append(child)

        # Assume the last child object is the topmost rendered object, only move if we're it
        if firingobjs[-1] == self.myline:
            timetomove = True
        else:
            timetomove = False

    return timetomove
</code></pre>
<p>Redefined my <code>on_click</code> method:</p>
<pre><code>def on_click(self, event):
    # Executed on mouse click
    if event.inaxes != self.parentax: return  # See if the mouse is over the parent axes object

    # Check for overlaps, make sure we only fire for one object per click
    timetomove = self.shouldthismove(event)
    if not timetomove: return

    # self.parentfig is already the figure canvas, so connect on it directly
    self.mousemotion = self.parentfig.mpl_connect('motion_notify_event', self.on_motion)
    self.clickrelease = self.parentfig.mpl_connect('button_release_event', self.on_release)
    self.clicked = True
</code></pre>
<p>And added a generic label to my line object in <code>__init__</code> to expedite filtering of the axes children later on:</p>
<pre><code>self.myline._label = 'dragobj'
</code></pre>
| 2 | 2016-09-01T21:24:36Z | [
"python",
"python-3.x",
"matplotlib"
] |
Python how to use "try:" not stop when it raises except | 39,276,998 | <p>I have this code and would like to shorten it. Is this possible in any way? It doesn't make much sense to repeat the same code so often:</p>
<pre><code>try:
    years = values['year']
except KeyError:
    pass

try:
    tracks = values['track']
except KeyError:
    pass

try:
    statuses = values['status']
except KeyError:
    pass
</code></pre>
| 0 | 2016-09-01T16:53:19Z | 39,277,034 | <p>How about avoiding the exceptions entirely?</p>
<p><code>.get()</code> allows you to provide a default value if the key doesn't exist already...</p>
<pre><code>years = values.get('year') # Implicitly default to None
tracks = values.get('track', None) # Explicitly default to None
statuses = values.get('status', 'Unknown') # Or use any custom value
</code></pre>
<p>As mentioned in comments by <a href="http://stackoverflow.com/users/487339/dsm">@DSM</a>, this differs from your code in that it guarantees all variables will be bound with <em>some</em> value. Otherwise, attempting to use any of the variables might result in a <code>NameError</code> at run time.</p>
<p>Less efficient, but you can also explicitly check if a key exists...</p>
<pre><code>if 'year' in values:
    # do something
</code></pre>
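<p>If the three lookups should stay together, the repetition can also be collapsed into one generator expression over <code>.get()</code> (the sample <code>values</code> dict below is invented for illustration):</p>

```python
values = {'year': 2016, 'status': 'active'}   # sample input; 'track' is missing

# One .get() per key, unpacked in a single statement; absent keys become None.
years, tracks, statuses = (values.get(key) for key in ('year', 'track', 'status'))
print(years, tracks, statuses)  # 2016 None active
```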
| 10 | 2016-09-01T16:55:21Z | [
"python",
"optimization"
] |
Python how to use "try:" not stop when it raises except | 39,276,998 | <p>I have this code and would like to shorten it. Is this possible in any way? It doesn't make much sense to repeat the same code so often:</p>
<pre><code>try:
    years = values['year']
except KeyError:
    pass

try:
    tracks = values['track']
except KeyError:
    pass

try:
    statuses = values['status']
except KeyError:
    pass
</code></pre>
| 0 | 2016-09-01T16:53:19Z | 39,277,052 | <p>Here you go. </p>
<pre><code>try:
    years = values['year']
    tracks = values['track']
    statuses = values['status']
except KeyError:
    pass
</code></pre>
| -2 | 2016-09-01T16:56:10Z | [
"python",
"optimization"
] |
PyGTK-2.24.0 Installation cannot find NumPy | 39,277,078 | <p>I am trying to build the PyGTK source from version 2.24.0 with a local (prefix=$HOME/.local) installation of python 3.5.2. Running the configure script produces:</p>
<pre><code>$: ./configure --prefix=$HOME/.local
....
configure: WARNING: Could not find a valid numpy installation, disabling.
....
The following modules will be built:
atk
pango
pangocairo
gtk with 2.18 API
gtk.glade
gtk.unixprint
Numpy support: no
</code></pre>
<p>Looking in <code>config.log</code>:</p>
<pre><code>....
configure:12393: checking for /home/me/.local/bin/python3.5 version
configure:12400: result: 3.5
configure:12412: checking for /home/me/.local/bin/python3.5 platform
configure:12419: result: linux
configure:12426: checking for /home/me/.local/bin/python3.5 script directory
configure:12455: result: ${prefix}/lib/python3.5/site-packages
configure:12464: checking for /home/me/.local/bin/python3.5 extension module directory
configure:12493: result: ${exec_prefix}/lib/python3.5/site-packages
....
ac_cv_env_PKG_CONFIG_PATH_value=/home/me/.local/lib/pkgconfig:/home/me/.local/bin/libwx/pkgconfig:/usr/lib/pkconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig
....
ac_cv_env_PYGOBJECT_LIBS_value=-L/home/me/.local/lib/python3.5/site-packages/gi
....
am_cv_python_platform=linux
am_cv_python_pyexecdir='${exec_prefix}/lib/python3.5/site-packages'
am_cv_python_pythondir='${prefix}/lib/python3.5/site-packages'
am_cv_python_version=3.5
....
PYTHON='/home/me/.local/bin/python3.5'
PYTHON_EXEC_PREFIX='${exec_prefix}'
PYTHON_INCLUDES='-I/home/me/.local/include/python3.5m -I/home/csmall02/.local/include/python3.5m'
PYTHON_PLATFORM='linux'
PYTHON_PREFIX='${prefix}'
PYTHON_VERSION='3.5'
....
pyexecdir='${exec_prefix}/lib/python3.5/site-packages'
pythondir='${prefix}/lib/python3.5/site-packages'
</code></pre>
<p>Why can't this configure find the NumPy packages? My <code>lib/python3.5</code> directory looks like:</p>
<pre><code>.local
`-- lib
    `-- python3.5
        `-- site-packages
            |-- numpy
            |   |-- compat
            |   |-- core
            |   |-- distutils
            |   |-- doc
            |   |-- f2py
            |   |-- fft
            |   |-- lib
            |   |-- linalg
            |   |-- ma
            |   |-- matrixlib
            |   |-- polynomial
            |   |-- __pycache__
            |   |-- random
            |   |-- testing
            |   `-- tests
            |-- numpy-1.11.1.dist-info
            `-- numpy-1.11.1-py3.5-linux-x86_64.egg
                |-- EGG-INFO
                `-- numpy
                    |-- compat
                    |-- core
                    |-- distutils
                    |-- doc
                    |-- f2py
                    |-- fft
                    |-- lib
                    |-- linalg
                    |-- ma
                    |-- matrixlib
                    |-- polynomial
                    |-- __pycache__
                    |-- random
                    |-- testing
                    `-- tests
</code></pre>
<p>The reason for the two numpy directories is I installed one using <code>pip install numpy</code> and the other I installed from source in the course of trying to fix this problem.</p>
<p>Also, I have no problem using <code>import numpy</code> and such in interactive python, so I <em>know</em> it's "there".</p>
<p>Does anyone know how to pass the location of NumPy directly?
Any other advice would also be appreciated.</p>
<p>Thanks!</p>
| 0 | 2016-09-01T16:57:07Z | 39,344,020 | <p>I'm afraid you have some mix-up.
Here is what I did:</p>
<pre><code>sudo apt-get dist-upgrade
sudo apt-get install python3
sudo apt-get install python3-numpy
sudo apt-get install python3-matplotlib
sudo apt-get install python3-scipy
sudo apt-get install python3-pyfits
</code></pre>
<p>One can also use <code>pip3</code> to install those libs, but using plain <code>pip</code> will install them for Python 2.7...</p>
<p>Also, pygtk for python3 seems not to be available, read <a href="http://askubuntu.com/questions/97023/why-cant-i-import-pygtk-with-python-3-2-from-pydev">the answer to this question</a></p>
<p>Hope this clears things up so that you can solve it.</p>
| 1 | 2016-09-06T08:21:06Z | [
"python",
"numpy",
"makefile",
"pygtk",
"configure"
] |
broadcasting a comparison of a column of 2d array with multiple columns | 39,277,113 | <p>What's the right numpy syntax to compare one column against others in a 2d ndarray? </p>
<p>After reading <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">some</a> <a href="http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc" rel="nofollow">docs</a> on array broadcasting, I am still not quite sure what the correct way to do this is.</p>
<p><strong>Example:</strong> Suppose I have a 2d array of goals scored by each player (row) in each game (column). </p>
<pre><code># goals = number of goals scored by ith player in jth game (NaN if player did not play)
# column = game
goals = np.array([[np.nan, 0, 1],       # row = player
                  [1,      2, 0],
                  [0,      0, np.nan],
                  [np.nan, 1, 1],
                  [0,      0, 1]])
</code></pre>
<p>I want to know if, in the final game, the player achieved a personal record by scoring more goals than she did in any previous game, ignoring games in which she did not appear (represented as <code>nan</code>). I expect <code>True</code> for only the first and last players in the array. </p>
<p>Just writing <code>goals[:,2] > goals[:,:2]</code> raises <code>ValueError: operands could not be broadcast together with shapes (5,) (5,2)</code>.</p>
<p><strong>What I tried:</strong> I know that I can manually stretch the <code>(5,)</code> into <code>(5,2)</code> with <code>np.newaxis</code>. So this works:</p>
<pre><code>with np.errstate(invalid='ignore'):
    personalBest = (np.isnan(goals[:, :2]) |
                    (goals[:, 2][:, np.newaxis] > goals[:, :2])
                    ).all(axis=1)
    print(personalBest)  # returns desired solution
</code></pre>
<p>Is there a less hacky, more idiomatically numpy way to write this?</p>
| 2 | 2016-09-01T16:58:38Z | 39,277,221 | <p>You could do something like this -</p>
<pre><code>np.flatnonzero((goals[:,None,-1] > goals[:,:-1]).any(1))
</code></pre>
<p>Let's go through it in steps.</p>
<p><strong>Step #1:</strong> We are introducing a new axis on the last-column sliced version to keep it as <code>2D</code> with the last axis being a singleton dimension/axis. The idea is to compare each of its element against all elements in that row except the element itself :</p>
<pre><code>In [3]: goals[:,None,-1]
Out[3]:
array([[ 1.],
[ 0.],
[ nan],
[ 1.],
[ 1.]])
In [4]: goals[:,None,-1].shape # Check the shapes for broadcasting alignment
Out[4]: (5, 1)
In [5]: goals.shape
Out[5]: (5, 3)
</code></pre>
<p><strong>Step #2:</strong> Next up, we perform the comparison against all the columns of the array except the last one, since that column is the sliced version obtained earlier -</p>
<pre><code>In [7]: goals[:,None,-1] > goals[:,:-1]
Out[7]:
array([[False, True],
[False, False],
[False, False],
[False, False],
[ True, True]], dtype=bool)
</code></pre>
<p><strong>Step #3:</strong> Then, we are checking if there's ANY match along each row -</p>
<pre><code>In [8]: (goals[:,None,-1] > goals[:,:-1]).any(axis=1)
Out[8]: array([ True, False, False, False, True], dtype=bool)
</code></pre>
<p><strong>Step #4:</strong> Finally, getting the matching indices with <code>np.flatnonzero</code> -</p>
<pre><code>In [9]: np.flatnonzero((goals[:,None,-1] > goals[:,:-1]).any(axis=1))
Out[9]: array([0, 4])
</code></pre>
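<p>For completeness, here is a hedged, self-contained sketch that combines this broadcasting pattern with the <code>np.isnan</code> mask from the question, so that games a player missed are ignored and a strict personal record is required (variable names follow the question; this is one possible reading, not the only one):</p>

```python
import numpy as np

goals = np.array([[np.nan, 0, 1],
                  [1, 2, 0],
                  [0, 0, np.nan],
                  [np.nan, 1, 1],
                  [0, 0, 1]])

# goals[:, None, -1] has shape (5, 1) and broadcasts against goals[:, :-1], shape (5, 2).
# A previous game is "beaten" if it was missed (nan) or scored strictly lower than the last game.
with np.errstate(invalid='ignore'):
    personal_best = (np.isnan(goals[:, :-1]) |
                     (goals[:, None, -1] > goals[:, :-1])).all(axis=1)

print(personal_best)                  # True only for players 0 and 4
print(np.flatnonzero(personal_best))  # their row indices
```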
| 2 | 2016-09-01T17:04:10Z | [
"python",
"arrays",
"numpy",
"numpy-broadcasting"
] |
broadcasting a comparison of a column of 2d array with multiple columns | 39,277,113 | <p>What's the right numpy syntax to compare one column against others in a 2d ndarray? </p>
<p>After reading <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">some</a> <a href="http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc" rel="nofollow">docs</a> on array broadcasting, I am still not quite sure what the correct way to do this is.</p>
<p><strong>Example:</strong> Suppose I have a 2d array of goals scored by each player (row) in each game (column). </p>
<pre><code># goals = number of goals scored by ith player in jth game (NaN if player did not play)
# column = game
goals = np.array([ [np.nan, 0, 1], # row = player
[ 1, 2, 0],
[ 0, 0, np.nan],
[np.nan, 1, 1],
[ 0, 0, 1] ])
</code></pre>
<p>I want to know if, in the final game, the player achieved a personal record by scoring more goals than she did in any previous game, ignoring games in which she did not appear (represented as <code>nan</code>). I expect <code>True</code> for only the first and last players in the array. </p>
<p>Just writing <code>goals[:,2] > goals[:,:2]</code> returns the <code>ValueError: operands could not be broadcast together with shapes (5,) (5,2)</code></p>
<p><strong>What I tried:</strong> I know that I can manually stretch the <code>(5,)</code> into <code>(5,2)</code> with <code>np.newaxis</code>. So this works:</p>
<pre><code>with np.errstate(invalid='ignore'):
personalBest= ( np.isnan(goals[:,:2]) |
(goals[:,2][:,np.newaxis] > goals[:,:2] )
).all(axis=1)
print(personalBest) # returns desired solution
</code></pre>
<p>Is there a less hacky, more idiomatically numpy way to write this?</p>
| 2 | 2016-09-01T16:58:38Z | 39,280,176 | <p>Just focusing on the <code>newaxis</code> bit:</p>
<pre><code>In [332]: goals = np.arange(12).reshape(3,4)
In [333]: goals[:,2]>goals[:,:2]
...
ValueError: operands could not be broadcast together with shapes (3,) (3,2)
</code></pre>
<p>So the goal is to make the 1st array of shape (3,1) so it can be broadcast against the (3,2):</p>
<p>We can index with a list or slice: <code>goals[:,2:3]</code> works as well</p>
<pre><code>In [334]: goals[:,[2]]>goals[:,:2]
Out[334]:
array([[ True, True],
[ True, True],
[ True, True]], dtype=bool)
</code></pre>
<p>we can explicitly add the <code>newaxis</code> (common)</p>
<pre><code>In [335]: goals[:,2][:,None]>goals[:,:2]
Out[335]:
array([[ True, True],
[ True, True],
[ True, True]], dtype=bool)
</code></pre>
<p>we can combine the two indexing operations (this isn't seen as frequently)</p>
<pre><code>In [336]: goals[:,2,None]>goals[:,:2]
Out[336]:
array([[ True, True],
[ True, True],
[ True, True]], dtype=bool)
</code></pre>
<p>we can explicitly reshape:</p>
<pre><code>In [339]: goals[:,2].reshape(-1,1)>goals[:,:2]
Out[339]:
array([[ True, True],
[ True, True],
[ True, True]], dtype=bool)
</code></pre>
<p>I don't think the execution times differ significantly. These are all good <code>numpy</code> code.</p>
<p>========</p>
<p>If the 2 arrays were (3,) and (2,3), we wouldn't need any of this. The numpy broadcasting automatically expands the first to (1,3). In effect <code>x[None,:]</code> is automatic, but <code>x[:,None]</code> is not.</p>
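<p>As a quick sanity check (a sketch using the same small array), all four spellings produce identical boolean arrays, and a trailing-axes match like (3,) against (2,3) broadcasts without any help:</p>

```python
import numpy as np

goals = np.arange(12).reshape(3, 4)

a = goals[:, [2]] > goals[:, :2]               # list index keeps the axis
b = goals[:, 2][:, None] > goals[:, :2]        # explicit newaxis
c = goals[:, 2, None] > goals[:, :2]           # combined indexing
d = goals[:, 2].reshape(-1, 1) > goals[:, :2]  # explicit reshape
assert all((x == a).all() for x in (b, c, d))

# (3,) vs (2, 3): the 1D operand is padded on the left to (1, 3) automatically.
print((np.arange(3) > np.zeros((2, 3))).shape)  # (2, 3)
```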
| 2 | 2016-09-01T20:13:23Z | [
"python",
"arrays",
"numpy",
"numpy-broadcasting"
] |
Concatenating column values into row values in Pandas | 39,277,163 | <p>I have a dataframe like the below- both columns are strings, with the ValCol being a string of comma separated integers. The index is a generic integer index with no meaning.</p>
<pre><code>NameCol ValCol
Name1 555, 333
Name2 433
Name1 999
Name3 123
Name2 533
</code></pre>
<p>What's the best way to aggregate it to</p>
<pre><code>NameCol ValCol
Name1 555, 333, 999
Name2 433, 533
Name3 123
</code></pre>
<p>I don't care about the order of the comma separated integers, but I do need to keep commas between them. It will likely be a very small dataframe, <100 records, so efficiency isn't critical.</p>
<p>I feel like there should be some groupby approach to this, but I haven't figured it out yet.</p>
| 1 | 2016-09-01T17:01:00Z | 39,277,222 | <p>Using a <code>groupby</code> approach:</p>
<pre><code>df = df.groupby('NameCol')['ValCol'].apply(', '.join).reset_index()
</code></pre>
<p>The resulting output:</p>
<pre><code> NameCol ValCol
0 Name1 555, 333, 999
1 Name2 433, 533
2 Name3 123
</code></pre>
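<p>A self-contained version of the above, for reference (building the question's frame inline rather than reading it from anywhere):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'NameCol': ['Name1', 'Name2', 'Name1', 'Name3', 'Name2'],
    'ValCol': ['555, 333', '433', '999', '123', '533'],
})

# join preserves the original row order within each group
out = df.groupby('NameCol')['ValCol'].apply(', '.join).reset_index()
print(out)
```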
| 4 | 2016-09-01T17:04:16Z | [
"python",
"pandas",
"dataframe"
] |
django-rest-framework : list parameters in URL | 39,277,196 | <p>I am pretty new to django and django-rest-framework, but I am trying to pass lists into url parameters to then filter my models by them.</p>
<p>Lets say the client application is sending a request that looks something like this... </p>
<pre><code> url: "api.com/?something=string,string2,string3&?subthings=sub,sub2,sub3&?year=2014,2015,2016/"
</code></pre>
<p>I want to pass in those parameters "things", "subthings", and "years" with their values.</p>
<p>Where the url looks something like this?</p>
<p>NOTE: Trick is that it won't be always an array of length 3 for each parameter.</p>
<p>Can someone point me in the right direction for how my url regex should be handing the lists and also retrieving the query lists in my views.</p>
<p>Thanks!</p>
| 2 | 2016-09-01T17:02:49Z | 39,277,432 | <p>Checkout this doc <a href="http://www.django-rest-framework.org/api-guide/filtering/" rel="nofollow">http://www.django-rest-framework.org/api-guide/filtering/</a></p>
<p>Query params are normally not validated by url regex</p>
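<p>For illustration, a minimal sketch of what that means in practice (plain standard library, no Django required; the parameter names follow the question). Note the corrected query string: a single <code>?</code> would precede it, with <code>&</code> between parameters:</p>

```python
from urllib.parse import parse_qs

# Corrected form of the question's URL query string.
query = 'something=string,string2,string3&subthings=sub,sub2,sub3&year=2014,2015,2016'
params = parse_qs(query)

# Each value is a comma separated list of arbitrary length.
years = params['year'][0].split(',')
print(years)  # ['2014', '2015', '2016']
```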
| 1 | 2016-09-01T17:17:28Z | [
"python",
"django",
"rest",
"django-rest-framework"
] |
django-rest-framework : list parameters in URL | 39,277,196 | <p>I am pretty new to django and django-rest-framework, but I am trying to pass lists into url parameters to then filter my models by them.</p>
<p>Lets say the client application is sending a request that looks something like this... </p>
<pre><code> url: "api.com/?something=string,string2,string3&?subthings=sub,sub2,sub3&?year=2014,2015,2016/"
</code></pre>
<p>I want to pass in those parameters "things", "subthings", and "years" with their values.</p>
<p>Where the url looks something like this?</p>
<p>NOTE: Trick is that it won't be always an array of length 3 for each parameter.</p>
<p>Can someone point me in the right direction for how my url regex should be handing the lists and also retrieving the query lists in my views.</p>
<p>Thanks!</p>
| 2 | 2016-09-01T17:02:49Z | 39,282,193 | <p>To show how I did this, thanks to the documentation links above.
Note: I used pipes ('|') as my URL delimiter instead of commas.</p>
<p>in my <code>urls.py</code></p>
<pre><code>url(r'^$', SomethingAPIView.as_view(), name='something'),
</code></pre>
<p>in my <code>views.py</code></p>
<pre><code>class SomethingAPIView(ListAPIView):
    # whatever serializer class

    def get_queryset(self):
        query_params = self.request.query_params
        somethings = query_params.get('something', None)
        subthings = query_params.get('subthing', None)
        years = query_params.get('year', None)
        # create an empty list for each parameter to filter by
        somethingParams = []
        subthingsParams = []
        yearParams = []
        # populate the lists based on the query parameters
        if somethings is not None:
            for something in somethings.split('|'):
                somethingParams.append(int(something))
        if subthings is not None:
            for subthing in subthings.split('|'):
                subthingsParams.append(int(subthing))
        if years is not None:
            for year in years.split('|'):
                yearParams.append(int(year))
        if somethings is not None and subthings is not None and years is not None:
            queryset_list = Model.objects.all()
            queryset_list = queryset_list.filter(something_id__in=somethingParams)
            queryset_list = queryset_list.filter(subthing_id__in=subthingsParams)
            queryset_list = queryset_list.filter(year__in=yearParams)
            return queryset_list
</code></pre>
<p>I do need to check for an empty result if the parameters are not valid. But here is a starting point for people looking to pass multiple values in query parameters.</p>
<p>A valid url here would be <code>/?something=1|2|3&subthing=4|5|6&year=2015|2016</code>.</p>
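<p>For what it's worth, the three per-parameter loops above could be collapsed into one small helper (a sketch in plain Python, independent of Django; the helper name is made up):</p>

```python
def parse_int_list(raw, sep='|'):
    """Split raw on sep and convert each piece to int; None means an empty list."""
    if raw is None:
        return []
    return [int(part) for part in raw.split(sep)]

print(parse_int_list('1|2|3'))  # [1, 2, 3]
print(parse_int_list(None))     # []
```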
| 1 | 2016-09-01T23:17:20Z | [
"python",
"django",
"rest",
"django-rest-framework"
] |
What package has this python function? | 39,277,332 | <p>I found this code on the web but it doesn't work.</p>
<pre><code>from numpy import *
from mayavi import *
N = 100
a = 0.
b = 1.
dt = b / N;
q = [1., -1., 1., -1.]
qpos = [[0.56, 0.56, 0.50],
[0.26, 0.76, 0.50],
[0.66, 0.16, 0.50],
[0.66, 0.86, 0.50]]
x,y,z = mgrid[a:b:dt, a:b:dt, 0.:1.:0.5]
Ex, Ey, Ez = mgrid[a:b:dt, a:b:dt, 0.:1.:0.5]
for i in range(N):
for j in range(N):
Ex[i,j] = 0.0
Ey[i,j] = 0.0
for num in range(len(q)):
rs = ((x[i,j] - qpos[num][0])**2 + (y[i,j] - qpos[num][1])**2)
r = sqrt(rs)
q1x = q[num] * (x[i,j] - qpos[num][0]) / (r * rs)
q1y = q[num] * (y[i,j] - qpos[num][1]) / (r * rs)
Ex[i,j] = q1x + Ex[i,j]
Ey[i,j] = q1y + Ey[i,j]
fig = figure(fgcolor=(0,0,0), bgcolor=(1,1,1))
streams = list()
for s in range(len(q)):
stream = flow(x,y,z,Ex, Ey, Ez, seed_scale=0.5, seed_resolution=1, seedtype='sphere')
streams.append(stream)
fig.scene.z_plus_view()
fig.scene.parallel_projection = True
</code></pre>
<p>I have installed numpy and mayavi, but when I try to run it, Python doesn't recognize the figure and flow functions. Do I need some other library?</p>
| -4 | 2016-09-01T17:11:02Z | 39,277,844 | <p>First, check that you actually have NumPy and Mayavi working. Just run <code>python</code> (or IDLE) and when you see the <code>>>></code> prompt type <code>import numpy</code> and then <code>import mayavi</code>. If you see any <code>ImportError</code> messages (or any other errors), then you don't. Normally, all you should see is another <code>>>></code> prompt.</p>
<p>Here's how it should look:</p>
<pre><code>$ python
Python 2.7.10 (default, May 23 2015, 09:40:32) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import mayavi
>>>
</code></pre>
<p>If you have it working, a wild guess is you can try adding this right below the <code>import</code> statements:</p>
<pre><code>from mayavi.mlab import flow
from mayavi.tools.figure import figure
</code></pre>
<p>And maybe it would work.</p>
<p>(I'm not actually familiar with the Mayavi library and don't know what your code does; I just found the suspect functions on Google.)</p>
| 1 | 2016-09-01T17:44:01Z | [
"python",
"numpy",
"mayavi"
] |
Categorize list in Python | 39,277,387 | <p>What is the best way to categorize a list in python?</p>
<p>for example:</p>
<pre><code>totalist is below
totalist[1] = ['A','B','C','D','E']
totalist[2] = ['A','B','X','Y','Z']
totalist[3] = ['A','F','T','U','V']
totalist[4] = ['A','F','M','N','O']
</code></pre>
<p>Say I want to get the lists where the first two items are <code>['A','B']</code>, basically <code>list[1]</code> and <code>list[2]</code>. Is there an easy way to get these without iterating one item at a time? Like something like this?</p>
<pre><code>if ['A','B'] in totalist
</code></pre>
<p>I know that doesn't work.</p>
| 4 | 2016-09-01T17:14:15Z | 39,277,413 | <p>You could check the first two elements of each list.</p>
<pre><code>for totalist in all_lists:
if totalist[:2] == ['A', 'B']:
# Do something.
</code></pre>
<p><strong>Note:</strong> The one-liner solutions suggested by Kasramvd are quite nice too. I found my solution more readable. Though I should say comprehensions are slightly faster than regular for loops. (Which I tested myself.)</p>
| 3 | 2016-09-01T17:16:11Z | [
"python",
"list"
] |
Categorize list in Python | 39,277,387 | <p>What is the best way to categorize a list in python?</p>
<p>for example:</p>
<pre><code>totalist is below
totalist[1] = ['A','B','C','D','E']
totalist[2] = ['A','B','X','Y','Z']
totalist[3] = ['A','F','T','U','V']
totalist[4] = ['A','F','M','N','O']
</code></pre>
<p>Say I want to get the lists where the first two items are <code>['A','B']</code>, basically <code>list[1]</code> and <code>list[2]</code>. Is there an easy way to get these without iterating one item at a time? Like something like this?</p>
<pre><code>if ['A','B'] in totalist
</code></pre>
<p>I know that doesn't work.</p>
| 4 | 2016-09-01T17:14:15Z | 39,277,486 | <p>Basically you can't do this in Python with a nested list. But if you are looking for an optimized approach, here are some ways:</p>
<p>Use a simple list comprehension, by comparing the intended list with only first two items of sub lists:</p>
<pre><code>>>> [sub for sub in totalist if sub[:2] == ['A', 'B']]
[['A', 'B', 'C', 'D', 'E'], ['A', 'B', 'X', 'Y', 'Z']]
</code></pre>
<p>If you want the indices use <code>enumerate</code>:</p>
<pre><code>>>> [ind for ind, sub in enumerate(totalist) if sub[:2] == ['A', 'B']]
[0, 1]
</code></pre>
<p>And here is an approach in NumPy which is well optimized when you are dealing with large data sets:</p>
<pre><code>>>> import numpy as np
>>>
>>> totalist = np.array([['A','B','C','D','E'],
... ['A','B','X','Y','Z'],
... ['A','F','T','U','V'],
... ['A','F','M','N','O']])
>>> totalist[(totalist[:,:2]==['A', 'B']).all(axis=1)]
array([['A', 'B', 'C', 'D', 'E'],
['A', 'B', 'X', 'Y', 'Z']],
dtype='|S1')
</code></pre>
<p>Also, as an alternative to a list comprehension in Python, if you don't want to use an explicit loop and are looking for a functional style, you can use the <code>filter</code> function, which is not as optimized as a list comprehension:</p>
<pre><code>>>> list(filter(lambda x: x[:2]==['A', 'B'], totalist))
[['A', 'B', 'C', 'D', 'E'], ['A', 'B', 'X', 'Y', 'Z']]
</code></pre>
| 1 | 2016-09-01T17:22:11Z | [
"python",
"list"
] |
Categorize list in Python | 39,277,387 | <p>What is the best way to categorize a list in python?</p>
<p>for example:</p>
<pre><code>totalist is below
totalist[1] = ['A','B','C','D','E']
totalist[2] = ['A','B','X','Y','Z']
totalist[3] = ['A','F','T','U','V']
totalist[4] = ['A','F','M','N','O']
</code></pre>
<p>Say I want to get the lists where the first two items are <code>['A','B']</code>, basically <code>list[1]</code> and <code>list[2]</code>. Is there an easy way to get these without iterating one item at a time? Like something like this?</p>
<pre><code>if ['A','B'] in totalist
</code></pre>
<p>I know that doesn't work.</p>
| 4 | 2016-09-01T17:14:15Z | 39,277,507 | <p>You could do this.</p>
<pre><code>>>> for i in totalist:
... if ['A','B']==i[:2]:
... print i
</code></pre>
| 1 | 2016-09-01T17:22:50Z | [
"python",
"list"
] |
Categorize list in Python | 39,277,387 | <p>What is the best way to categorize a list in python?</p>
<p>for example:</p>
<pre><code>totalist is below
totalist[1] = ['A','B','C','D','E']
totalist[2] = ['A','B','X','Y','Z']
totalist[3] = ['A','F','T','U','V']
totalist[4] = ['A','F','M','N','O']
</code></pre>
<p>Say I want to get the lists where the first two items are <code>['A','B']</code>, basically <code>list[1]</code> and <code>list[2]</code>. Is there an easy way to get these without iterating one item at a time? Like something like this?</p>
<pre><code>if ['A','B'] in totalist
</code></pre>
<p>I know that doesn't work.</p>
| 4 | 2016-09-01T17:14:15Z | 39,277,637 | <p>You imply that you are concerned about performance (cost). If you need to do this, and if you are worried about performance, you need a different data structure. This will add a little "cost" when you are making the lists, but save you time when filtering them. </p>
<p>If the need to filter based on the first two elements is fixed (it doesn't generalise to the first n elements) then I would add the lists, as they are made, to a dict where the key is a tuple of the first two elements, and the item is a list of lists. </p>
<p>Then you simply retrieve your list by doing a dict lookup. This is easy to do and will bring potentially large speed-ups, at almost no cost in memory and time while making the lists.</p>
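<p>A minimal sketch of that idea, using the question's data (names are illustrative):</p>

```python
from collections import defaultdict

totalist = [['A', 'B', 'C', 'D', 'E'],
            ['A', 'B', 'X', 'Y', 'Z'],
            ['A', 'F', 'T', 'U', 'V'],
            ['A', 'F', 'M', 'N', 'O']]

# Build the index once, as the lists are made.
by_prefix = defaultdict(list)
for sub in totalist:
    by_prefix[tuple(sub[:2])].append(sub)

# O(1) lookup instead of a linear scan.
print(by_prefix['A', 'B'])
```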
| 0 | 2016-09-01T17:30:48Z | [
"python",
"list"
] |
Categorize list in Python | 39,277,387 | <p>What is the best way to categorize a list in python?</p>
<p>for example:</p>
<pre><code>totalist is below
totalist[1] = ['A','B','C','D','E']
totalist[2] = ['A','B','X','Y','Z']
totalist[3] = ['A','F','T','U','V']
totalist[4] = ['A','F','M','N','O']
</code></pre>
<p>Say I want to get the lists where the first two items are <code>['A','B']</code>, basically <code>list[1]</code> and <code>list[2]</code>. Is there an easy way to get these without iterating one item at a time? Like something like this?</p>
<pre><code>if ['A','B'] in totalist
</code></pre>
<p>I know that doesn't work.</p>
| 4 | 2016-09-01T17:14:15Z | 39,277,683 | <p>Just for fun, an <code>itertools</code> solution to push the per-element work to the C layer:</p>
<pre><code>from future_builtins import map # Py2 only; not needed on Py3
from itertools import compress
from operator import itemgetter
# Generator
prefixes = map(itemgetter(slice(2)), totalist)
selectors = map(['A','B'].__eq__, prefixes)
# If you need them one at a time, just skip list wrapping and iterate
# compress output directly
matches = list(compress(totalist, selectors))
</code></pre>
<p>This could all be one-lined to:</p>
<pre><code>matches = list(compress(totalist, map(['A','B'].__eq__, map(itemgetter(slice(2)), totalist))))
</code></pre>
<p>but I wouldn't recommend it. Incidentally, if <code>totalist</code> might be a generator, not a re-iterable sequence, you'd want to use <code>itertools.tee</code> to double it, adding:</p>
<pre><code> totalist, forselection = itertools.tee(totalist, 2)
</code></pre>
<p>and changing the definition of <code>prefixes</code> to <code>map</code> over <code>forselection</code>, not <code>totalist</code>; since <code>compress</code> iterates both iterators in parallel, <code>tee</code> won't have meaningful memory overhead.</p>
<p>Of course, as others have noted, even moving to C, this is a linear algorithm. Ideally, you'd use something like a <code>collections.defaultdict(list)</code> to map from two element prefixes of each <code>list</code> (converted to <code>tuple</code> to make them legal <code>dict</code> keys) to a <code>list</code> of all <code>list</code>s with that prefix. Then, instead of linear search over N <code>list</code>s to find those with matching prefixes, you just do <code>totaldict['A', 'B']</code> and you get the results with <code>O(1)</code> lookup (and less fixed work too; no constant slicing).</p>
<p>Example precompute work:</p>
<pre><code>from collections import defaultdict
totaldict = defaultdict(list)
for x in totalist:
totaldict[tuple(x[:2])].append(x)
# Optionally, to prevent autovivification later:
totaldict = dict(totaldict)
</code></pre>
<p>Then you can get <code>matches</code> effectively instantly for any two element prefix with just:</p>
<pre><code>matches = totaldict['A', 'B']
</code></pre>
| 2 | 2016-09-01T17:33:41Z | [
"python",
"list"
] |
Python-Invalid syntax does not highligt error | 39,277,473 | <p>I've been working on this code and when I test it, it says there is a syntax error but it does not highlight my error in IDLE. Any ideas?</p>
<pre><code>import os
import sys
Start=True
while Start==True:
Operation=input("Please select from the following operations\n"
"Add\n"
"Subtract\n"
"Note: Please type the option exactly as on\nscreen or you will recieve an error message.")
if Operation==not(in("Add","Subtract")):
print("That is not a mathmatical operation.\nPlease try again.")
time.sleep(2)
os.sys("cls")
while Operation==str("Add"):
print("You have selected Addition!")
time.sleep(2)
os.sys("cls")
</code></pre>
| -1 | 2016-09-01T17:21:18Z | 39,277,538 | <p>The error is in the line:</p>
<pre><code>if Operation==not(in("Add","Subtract")):
</code></pre>
<p>The correct code should be:</p>
<pre><code>if Operation not in("Add","Subtract"):
</code></pre>
<p>You also should add <code>import time</code> at the beginning of your code for the <code>time.sleep</code> to work.</p>
| 1 | 2016-09-01T17:25:07Z | [
"python",
"syntax-error"
] |
Python-Invalid syntax does not highligt error | 39,277,473 | <p>I've been working on this code and when I test it, it says there is a syntax error but it does not highlight my error in IDLE. Any ideas?</p>
<pre><code>import os
import sys
Start=True
while Start==True:
Operation=input("Please select from the following operations\n"
"Add\n"
"Subtract\n"
"Note: Please type the option exactly as on\nscreen or you will recieve an error message.")
if Operation==not(in("Add","Subtract")):
print("That is not a mathmatical operation.\nPlease try again.")
time.sleep(2)
os.sys("cls")
while Operation==str("Add"):
print("You have selected Addition!")
time.sleep(2)
os.sys("cls")
</code></pre>
| -1 | 2016-09-01T17:21:18Z | 39,277,556 | <p>Wrong:</p>
<pre><code>if Operation==not(in("Add","Subtract")):
print("That is not a mathmatical operation.\nPlease try again.")
time.sleep(2)
os.sys("cls")
</code></pre>
<p>Should be:</p>
<pre><code>if Operation not in ("Add","Subtract"):
print("That is not a mathmatical operation.\nPlease try again.")
time.sleep(2)
os.sys("cls")
</code></pre>
| 2 | 2016-09-01T17:26:02Z | [
"python",
"syntax-error"
] |
PyMC3 Multinomial Model doesn't work with non-integer observe data | 39,277,474 | <p>I'm trying to use PyMC3 to solve a fairly simple multinomial distribution. It works perfectly if I have the 'noise' value set to 0.0. However when I change it to anything else, for example 0.01, I get an error in the find_MAP() function and it hangs if I don't use find_MAP(). Is there some reason that the multinomial has to be sparse?</p>
<pre><code>import numpy as np
from pymc3 import *
import pymc3 as mc
import pandas as pd
print 'pymc3 version: ' + mc.__version__

sample_size = 10
number_of_experiments = 1

true_probs = [0.2, 0.1, 0.3, 0.4]

k = len(true_probs)

noise = 0.0
y = np.random.multinomial(n=number_of_experiments, pvals=true_probs, size=sample_size)+noise
y_denominator = np.sum(y,axis=1)
y = y/y_denominator[:,None]

with Model() as multinom_test:
    probs = Dirichlet('probs', a = np.ones(k), shape = k)
    for i in range(sample_size):
        data = Multinomial('data_%d' % (i),
                           n = y[i].sum(),
                           p = probs,
                           observed = y[i])

with multinom_test:
    start = find_MAP()
    trace = sample(5000, Slice())
trace[probs].mean(0)
</code></pre>
<p>Error:</p>
<p><em>ValueError: Optimization error: max, logp or dlogp at max have non-finite values. Some values may be outside of distribution support. max: {'probs_stickbreaking_': array([ 0.00000000e+00, -4.47034834e-08, 0.00000000e+00])} logp: array(-inf) dlogp: array([ 0.00000000e+00, 2.98023221e-08, 0.00000000e+00])Check that 1) you don't have hierarchical parameters, these will lead to points with infinite density. 2) your distribution logp's are properly specified. Specific issues:</em> </p>
| 0 | 2016-09-01T17:21:21Z | 39,363,050 | <p>This works for me</p>
<pre class="lang-py prettyprint-override"><code>sample_size = 10
number_of_experiments = 100
true_probs = [0.2, 0.1, 0.3, 0.4]
k = len(true_probs)
noise = 0.01
y = np.random.multinomial(n=number_of_experiments, pvals=true_probs, size=sample_size)+noise
with pm.Model() as multinom_test:
a = pm.Dirichlet('a', a=np.ones(k))
for i in range(sample_size):
data_pred = pm.Multinomial('data_pred_%s'% i, n=number_of_experiments, p=a, observed=y[i])
trace = pm.sample(50000, pm.Metropolis())
#trace = pm.sample(1000) # also works with NUTS
pm.traceplot(trace[500:]);
</code></pre>
<p><a href="http://i.stack.imgur.com/rDgpY.png" rel="nofollow"><img src="http://i.stack.imgur.com/rDgpY.png" alt="traceplot"></a></p>
| 1 | 2016-09-07T06:53:12Z | [
"python",
"pymc",
"pymc3"
] |
Pandas - Finding Unique Entries in Daily Census Data | 39,277,501 | <p>I have census data that looks like this for a full month and I want to find out how many unique inmates there were for the month. The information is taken daily so there are multiples. </p>
<pre><code> _id,Date,Gender,Race,Age at Booking,Current Age
1,2016-06-01,M,W,32,33
2,2016-06-01,M,B,25,27
3,2016-06-01,M,W,31,33
</code></pre>
<p>My method now is to group them by day and then add the ones that are not accounted for into the DataFrame. My question is how to account for two people with the same info. Would one of them not get added to the new DataFrame because the other already exists? I'm trying to figure out how many people total were in the prison during this time. </p>
<p>_id is incremental, for example here is some data from the second day </p>
<pre><code>2323,2016-06-02,M,B,20,21
2324,2016-06-02,M,B,44,45
2325,2016-06-02,M,B,22,22
2326,2016-06-02,M,B,38,39
</code></pre>
<p>link to the dataset here: <a href="https://data.wprdc.org/dataset/allegheny-county-jail-daily-census" rel="nofollow">https://data.wprdc.org/dataset/allegheny-county-jail-daily-census</a></p>
| 3 | 2016-09-01T17:22:45Z | 39,277,721 | <p>You could use the <code>df.drop_duplicates()</code> which will return the DataFrame with only unique values, then count the entries.</p>
<p>Something like this should work:</p>
<pre><code>import pandas as pd
df = pd.read_csv('inmates_062016.csv', index_col=0, parse_dates=True)
uniqueDF = df.drop_duplicates()
countUniques = len(uniqueDF.index)
print(countUniques)
</code></pre>
<p>Result:</p>
<pre><code>>> 11845
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow">Pandas drop_duplicates Documentation</a></p>
<p><a href="https://data.wprdc.org/datastore/dump/869ca391-7508-4a64-8a27-09cc4799923f" rel="nofollow">Inmates June 2016 CSV</a></p>
<p>The problem with this approach / data is that there could be many individual inmates with the same age / gender / race who would be filtered out.</p>
| 1 | 2016-09-01T17:37:12Z | [
"python",
"pandas",
"dataframe",
"grouping",
"data-cleaning"
] |
Pandas - Finding Unique Entries in Daily Census Data | 39,277,501 | <p>I have census data that looks like this for a full month and I want to find out how many unique inmates there were for the month. The information is taken daily so there are multiples. </p>
<pre><code> _id,Date,Gender,Race,Age at Booking,Current Age
1,2016-06-01,M,W,32,33
2,2016-06-01,M,B,25,27
3,2016-06-01,M,W,31,33
</code></pre>
<p>My method now is to group them by day and then add the ones that are not accounted for into the DataFrame. My question is how to account for two people with the same info. Would one of them not get added to the new DataFrame because the other already exists? I'm trying to figure out how many people total were in the prison during this time. </p>
<p>_id is incremental, for example here is some data from the second day </p>
<pre><code>2323,2016-06-02,M,B,20,21
2324,2016-06-02,M,B,44,45
2325,2016-06-02,M,B,22,22
2326,2016-06-02,M,B,38,39
</code></pre>
<p>link to the dataset here: <a href="https://data.wprdc.org/dataset/allegheny-county-jail-daily-census" rel="nofollow">https://data.wprdc.org/dataset/allegheny-county-jail-daily-census</a></p>
| 3 | 2016-09-01T17:22:45Z | 39,280,402 | <p>I think the trick here is to groupby as much as possible and check the differences in those (small) groups through the month:</p>
<pre><code>inmates = pd.read_csv('inmates.csv')
# group by everything except _id and count number of entries
grouped = inmates.groupby(
['Gender', 'Race', 'Age at Booking', 'Current Age', 'Date']).count()
# pivot the dates out and transpose - this give us the number of each
# combination for each day
grouped = grouped.unstack().T.fillna(0)
# get the difference between each day of the month - the assumption here
# being that a negative number means someone left, 0 means that nothing
# has changed and positive means that someone new has come in. As you
# mentioned yourself, that isn't necessarily true
diffed = grouped.diff()
# replace the first day of the month with the grouped numbers to give
# the number in each group at the start of the month
diffed.iloc[0, :] = grouped.iloc[0, :]
# sum only the positive numbers in each row to count those that have
# arrived but ignore those that have left
diffed['total'] = diffed.apply(lambda x: x[x > 0].sum(), axis=1)
# sum total column
diffed['total'].sum() # 3393
</code></pre>
| 1 | 2016-09-01T20:30:59Z | [
"python",
"pandas",
"dataframe",
"grouping",
"data-cleaning"
] |
Solving/teaching Python to solve Number Series | 39,277,512 | <p>Is there a better way of solving Number Series questions using Python other than teaching it basic pattern rules and hoping the rules fit the question? For example. we would have a list of functions that has a rule for each of them and see if the series fits. Is there a Library that does this, and if not am I stuck writing functions every time a new pattern comes along?</p>
<pre><code>given_series = #random series
def maybe_fib(series):
#solve the fib
throw error if wrong
def add_iterating_numbers(series):
#solve the series
throw error if wrong
.
.
.
.
.
list_of_possible_match = [maybe_fib, add_iterating_numbers, . . . , #list like fib, adding prime, taking the first 3 numbers and doing somethign with it]
for each_method in list_of_possible_match:
try:
each_method(given_series)
catch error:
print("didn't work out try another one")
if all_fail:
#teach me new function/method?
</code></pre>
<p>Number Series is basically a series of numbers and you have to find the next number that follows up. They can range from simple math patterns, to counting the number of corners in a number like '7' would be 1. </p>
<p>Examples of Number Series:</p>
<p>1 1 2 3 5 8 13 21 34 55 89 (?) - The next number would be 144</p>
<p>1 2 4 7 11 16 (?) - The next number would be 22 </p>
| 0 | 2016-09-01T17:23:19Z | 39,278,096 | <p>Consider using the <a href="http://oeis.org" rel="nofollow">OEIS</a>. It is a huge database of sequences that you can check against. I would say, because it probably doesn't have every possible arithmetic progression etc., you would get the best coverage by writing a program that checks basic sequences (trying to find common differences or quadratic differences etc.) and, if nothing works, falls back to checking against the database. </p>
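<p>A tiny sketch of the 'basic sequences first' idea, checking for a constant first or second difference before giving up (a heuristic, not a general solver):</p>

```python
def next_term(seq):
    """Guess the next term if first or second differences are constant, else None."""
    d1 = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(d1)) == 1:                      # arithmetic progression
        return seq[-1] + d1[0]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    if len(set(d2)) == 1:                      # constant second difference
        return seq[-1] + d1[-1] + d2[0]
    return None

print(next_term([1, 2, 4, 7, 11, 16]))  # 22, the question's second example
print(next_term([3, 1, 4, 1, 5]))       # None -> fall back to the OEIS
```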
<p>I'm not sure whether the OEIS has any kind of api, but you may find <a href="http://stackoverflow.com/questions/5991756/programmatic-access-to-on-line-encyclopedia-of-integer-sequences">this</a> helpful.</p>
<p>Wolfram also has a <a href="http://www.wolframalpha.com/widgets/view.jsp?id=8e63b780a72059bb82ab69ee51cff12d" rel="nofollow">widget</a> that attempts this but it has very limited uses.</p>
<p>You should post a link to your code in a comment if you manage to finish it. It sounds like a cool program; I'd love to see it :)</p>
| 0 | 2016-09-01T17:59:01Z | [
"python"
] |
Storing the value that meets a requirement in variable and finding the next value that also meets that requirement | 39,277,566 | <p>I am trying to write code that iterates through a list and if i % 10 equals 1, then I store that <code>i</code> in a variable. Then, I want my code to keep iterating through the list, and if another i value meets the same requirement then it adds to a count. </p>
<p>This is what I have right now but it is just saving an i value and than adding to a count. This is because what I want to be my next i value is just the first i value that meets my requirement.</p>
<pre><code>count1_1 = 0
for i in B:
if some_reqirement:
if some_other_reqirement:
if (old == 1) and (i % 10 == 1):
count1_1 += 1
old = i % 10
</code></pre>
<p>Does anyone know how I could fix this?</p>
<p>EDIT:</p>
<p>This is the code with the requirements; I have tested the rest of the code and it works how I want it to.</p>
<p>A is a range with certain values replaced with the value 1, and B is just a range.</p>
<pre><code>count1_1 = 0
for i in B:
if B[i] % 10 == 1 and B[i + 2] % 10 == 3:
if A[i] * A[i + 2] == 1:
if (old == 1) and (i % 10 == 1):
count1_1 += 1
old = i % 10
</code></pre>
| -2 | 2016-09-01T17:26:35Z | 39,277,636 | <p>You're saving the old value too soon</p>
<pre><code> old = i % 10
if (old == 1) and (i % 10 == 1):
count1_1 += 1
</code></pre>
<p>should be</p>
<pre><code> if (old == 1) and (i % 10 == 1):
count1_1 += 1
old = i % 10
</code></pre>
<p>or <code>old</code> and <code>i % 10</code> are identical: not what you want (since both expressions of your <code>if</code> test the same thing)</p>
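<p>A minimal, self-contained demo of the corrected ordering (the list <code>B</code> below is made-up example data, and the placeholder requirements are dropped):</p>

```python
B = [1, 11, 3, 21, 31, 4, 41]  # made-up example data

count1_1 = 0
old = None  # no previous value seen yet
for i in B:
    if old == 1 and i % 10 == 1:
        count1_1 += 1
    old = i % 10  # saved AFTER the comparison, so next time it really is "old"

print(count1_1)  # 2  (the pairs 1,11 and 21,31)
```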
| 1 | 2016-09-01T17:30:41Z | [
"python"
] |
Storing the value that meets a requirement in variable and finding the next value that also meets that requirement | 39,277,566 | <p>I am trying to write code that iterates through a list and, if i % 10 equals 1, stores that <code>i</code> in a variable. Then, I want my code to keep iterating through the list, and if another i value meets the same requirement, then it adds to a count. </p>
<p>This is what I have right now, but it is just saving one i value and then adding to a count. This is because what I want to be my next i value is just the first i value that meets my requirement.</p>
<pre><code>count1_1 = 0
for i in B:
if some_reqirement:
if some_other_reqirement:
if (old == 1) and (i % 10 == 1):
count1_1 += 1
old = i % 10
</code></pre>
<p>Does anyone know how I could fix this?</p>
<p>EDIT:</p>
<p>This is the code with the requirements; I have tested the rest of the code and it works how I want it to.</p>
<p>A is a range with certain values replaced with the value 1, and B is just a range.</p>
<pre><code>count1_1 = 0
for i in B:
if B[i] % 10 == 1 and B[i + 2] % 10 == 3:
if A[i] * A[i + 2] == 1:
if (old == 1) and (i % 10 == 1):
count1_1 += 1
old = i % 10
</code></pre>
| -2 | 2016-09-01T17:26:35Z | 39,279,711 | <p>I still see <code>old</code> as essentially a boolean flag so let's make it one (called <code>previous_1</code>), but this time, instead of only being triggered True once, we'll change it's value based on every <code>i</code> that passes both requirements:</p>
<pre><code>count1_1 = 0
previous_1 = False
for i in B:
if B[i] % 10 == 1 and B[i + 2] % 10 == 3:
if some_other_reqirement:
current_1 = i % 10 == 1
if current_1 and previous_1:
count1_1 += 1
previous_1 = current_1
</code></pre>
| 0 | 2016-09-01T19:43:58Z | [
"python"
] |
Elegant way to rearrange dictionary on number in key string | 39,277,589 | <p>Say I have a dict with these objects:</p>
<pre><code><MultiValueDict: {
u'task-1-t_name': [u'T2'],
u'task-INITIAL_FORMS': [u'0'],
u'task-1-end_date': [u'1010-01-01'],
u'task-MAX_NUM_FORMS': [u'1000'],
u'order_wizard-current_step': [u'task'],
u'task-TOTAL_FORMS': [u'2'],
u'task-1-start_date': [u'1010-01-01'],
u'task-0-t_name': [u'T1'],
u'task-MIN_NUM_FORMS': [u'0'],
u'task-0-end_date': [u'1010-01-01'],
u'task-0-start_date': [u'1010-10-01']
}>
</code></pre>
<p>What would be the most pythonic and elegant way to create an array or dictionary that would give me <code>task[0].start_time</code> or <code>task[n]['key_name']</code></p>
<p>I feel like I am going to write a lot more code than necessary to do this</p>
| -1 | 2016-09-01T17:27:43Z | 39,277,711 | <p>This is the data for a Django formset. Rather than trying to manipulate it separately, you should use the methods available on the formset itself: once you have validated that formset, you can do <code>formset.forms[0].cleaned_data['start_time']</code> for example.</p>
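<p>For completeness, if you did want to regroup the raw keys yourself, the parsing can be sketched with a regex (the sample dict below is a trimmed copy of the one in the question; management-form keys like <code>task-TOTAL_FORMS</code> simply fail the pattern and are skipped):</p>

```python
import re
from collections import defaultdict

# Trimmed copy of the posted dict (values are single-item lists, as in MultiValueDict)
data = {
    u'task-1-t_name': [u'T2'],
    u'task-1-end_date': [u'1010-01-01'],
    u'task-1-start_date': [u'1010-01-01'],
    u'task-0-t_name': [u'T1'],
    u'task-0-end_date': [u'1010-01-01'],
    u'task-0-start_date': [u'1010-10-01'],
    u'task-TOTAL_FORMS': [u'2'],  # management-form key: skipped by the regex
}

tasks = defaultdict(dict)
pattern = re.compile(r'^task-(\d+)-(\w+)$')
for key, values in data.items():
    match = pattern.match(key)
    if match:
        tasks[int(match.group(1))][match.group(2)] = values[0]

print(tasks[0]['start_date'])  # u'1010-10-01'
```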
| 2 | 2016-09-01T17:36:16Z | [
"python",
"arrays"
] |
Element-wise minimum of multiple vectors in numpy | 39,277,638 | <p>I know that in numpy I can compute the element-wise minimum of two vectors with</p>
<pre><code>numpy.minimum(v1, v2)
</code></pre>
<p>What if I have a list of vectors of equal dimension, <code>V = [v1, v2, v3, v4]</code> (but a list, not an array)? Taking <code>numpy.minimum(*V)</code> doesn't work. What's the preferred thing to do instead?</p>
| 2 | 2016-09-01T17:30:49Z | 39,277,693 | <p>Convert to NumPy array and perform <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.min.html" rel="nofollow"><code>ndarray.min</code></a> along the first axis -</p>
<pre><code>np.asarray(V).min(0)
</code></pre>
<p>Or simply use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.amin.html" rel="nofollow"><code>np.amin</code></a> as under the hoods, it will convert the input to an array before finding the minimum along that axis -</p>
<pre><code>np.amin(V,axis=0)
</code></pre>
<p>Sample run -</p>
<pre><code>In [52]: v1 = [2,5]
In [53]: v2 = [4,5]
In [54]: v3 = [4,4]
In [55]: v4 = [1,4]
In [56]: V = [v1, v2, v3, v4]
In [57]: np.asarray(V).min(0)
Out[57]: array([1, 4])
In [58]: np.amin(V,axis=0)
Out[58]: array([1, 4])
</code></pre>
<p>If you need to final output as a list, append the output with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html" rel="nofollow"><code>.tolist()</code></a>.</p>
| 3 | 2016-09-01T17:34:22Z | [
"python",
"numpy"
] |
Element-wise minimum of multiple vectors in numpy | 39,277,638 | <p>I know that in numpy I can compute the element-wise minimum of two vectors with</p>
<pre><code>numpy.minimum(v1, v2)
</code></pre>
<p>What if I have a list of vectors of equal dimension, <code>V = [v1, v2, v3, v4]</code> (but a list, not an array)? Taking <code>numpy.minimum(*V)</code> doesn't work. What's the preferred thing to do instead?</p>
| 2 | 2016-09-01T17:30:49Z | 39,279,912 | <p><code>*V</code> works if <code>V</code> has only 2 arrays. <code>np.minimum</code> is a <code>ufunc</code> and takes 2 arguments.</p>
<p>As a <code>ufunc</code> it has a <code>.reduce</code> method, so it can apply repeated to a list inputs.</p>
<pre><code>In [321]: np.minimum.reduce([np.arange(3), np.arange(2,-1,-1), np.ones((3,))])
Out[321]: array([ 0., 1., 0.])
</code></pre>
<p>I suspect the <code>np.min</code> approach is faster, but that could depend on the array and list size.</p>
<pre><code>In [323]: np.array([np.arange(3), np.arange(2,-1,-1), np.ones((3,))]).min(axis=0)
Out[323]: array([ 0., 1., 0.])
</code></pre>
<p>The <code>ufunc</code> also has an <code>accumulate</code> which can show us the results of each stage of the reduction. Here's it's not to interesting, but I could tweak the inputs to change that.</p>
<pre><code>In [325]: np.minimum.accumulate([np.arange(3), np.arange(2,-1,-1), np.ones((3,))])
...:
Out[325]:
array([[ 0., 1., 2.],
[ 0., 1., 0.],
[ 0., 1., 0.]])
</code></pre>
| 2 | 2016-09-01T19:56:54Z | [
"python",
"numpy"
] |
Set datatype after converting null values while reading from csv to DataFrame with Pandas | 39,277,664 | <p>I have a .csv file with GPS data which looks like this:</p>
<pre><code>ID,GPS_LATITUDE,GPS_LONGITUDE
1,35.66727683,139.7591279
2,35.66727683,139.7591279
3,-1,-1
4,35.66750697,139.7589757
5,,139.7589757
</code></pre>
<p>The last row has a blank or "null" value. I would like to read the data into a dataframe and set the null value to -1 and also read the data in as type float. With my code the data type is set to string and the null value is not substituted.</p>
<p>How I'm trying to do it (wrong):</p>
<pre><code>data = r'c:\temp\gps.csv'
def conv(val):
if val == np.nan:
return -1
return val
df = pd.read_csv(data,converters={'GPS_LATITUDE':conv,'GPS_LONGITUDE':conv},dtype={'GPS_LATITUDE':np.float64,'GPS_LONGITUDE':np.float64})
</code></pre>
<p>Code to test output:</p>
<pre><code>lats = df['GPS_LATITUDE'].tolist()
for l in lats:
print(l,type(l))
df
</code></pre>
<p>Output:</p>
<pre><code>35.66727683 <class 'str'>
35.66727683 <class 'str'>
-1 <class 'str'>
35.66750697 <class 'str'>
<class 'str'>
Out[63]:
ID GPS_LATITUDE GPS_LONGITUDE
0 1 35.66727683 139.7591279
1 2 35.66727683 139.7591279
2 3 -1 -1
3 4 35.66750697 139.7589757
4 5 139.7589757
</code></pre>
| 1 | 2016-09-01T17:32:49Z | 39,277,982 | <p>First of all, you don't even need to use any conv function:</p>
<pre><code>$ cat /tmp/a.csv
ID,GPS_LATITUDE,GPS_LONGITUDE
1,35.66727683,139.7591279
2,35.66727683,139.7591279
3,-1,-1
4,35.66750697,139.7589757
5,,139.7589757
In [15]: df = pd.read_csv("/tmp/a.csv", dtype={'GPS_LATITUDE':np.float64,'GPS_LONGITUDE':np.float64})
In [16]: df
Out[16]:
ID GPS_LATITUDE GPS_LONGITUDE
0 1 35.667277 139.759128
1 2 35.667277 139.759128
2 3 -1.000000 -1.000000
3 4 35.667507 139.758976
4 5 NaN 139.758976
In [18]: df.dtypes
Out[18]:
ID int64
GPS_LATITUDE float64
GPS_LONGITUDE float64
dtype: object
In [19]: df.fillna(-1, inplace = True)
In [20]: df
Out[20]:
ID GPS_LATITUDE GPS_LONGITUDE
0 1 35.667277 139.759128
1 2 35.667277 139.759128
2 3 -1.000000 -1.000000
3 4 35.667507 139.758976
4 5 -1.000000 139.758976
</code></pre>
<p>Second, if you do want to use conv, change it to (also, if you are using conv for all columns, then no need to specify dtype):</p>
<pre><code>In [21]: def conv(val):
....: if not val:
....: return -1
....: return np.float64(val)
....:
In [22]: df = pd.read_csv("/tmp/a.csv", converters={'GPS_LATITUDE':conv,'GPS_LONGITUDE':conv})
In [23]: df
Out[23]:
ID GPS_LATITUDE GPS_LONGITUDE
0 1 35.667277 139.759128
1 2 35.667277 139.759128
2 3 -1.000000 -1.000000
3 4 35.667507 139.758976
4 5 -1.000000 139.758976
In [24]: df.dtypes
Out[24]:
ID int64
GPS_LATITUDE float64
GPS_LONGITUDE float64
dtype: object
</code></pre>
<p>In either case:</p>
<pre><code>In [26]: lats = df['GPS_LATITUDE'].tolist()
In [27]: for l in lats:
....: print(l,type(l))
....:
(35.667276829999999, <type 'numpy.float64'>)
(35.667276829999999, <type 'numpy.float64'>)
(-1.0, <type 'numpy.float64'>)
(35.667506969999998, <type 'numpy.float64'>)
(-1.0, <type 'numpy.float64'>)
</code></pre>
| 1 | 2016-09-01T17:52:37Z | [
"python",
"pandas"
] |
In Python how do I make a copy of a numpy Matrix column such that any further operations to the copy does not affect the original matrix? | 39,277,687 | <p>I would normally copy a whole matrix as follows:</p>
<pre><code>from copy import copy, deepcopy
b=np.array([[2,3],[1,2]])
a = np.empty_like (b)
a[:] = b
</code></pre>
<p>(Note a and b are not what I am using in my code and are just made up for this example). But how do I copy just the first column (or any selected column) of a matrix so that when operated on it does not affect the original column?</p>
<p>PS. I am new so sorry If I am making a really stupid error but I really have searched for the solution a long time</p>
| 3 | 2016-09-01T17:34:01Z | 39,277,771 | <p>Just use indexing to slice a column then use <code>copy()</code> attribute of array object to create a copy:</p>
<pre><code>>>> b=np.array([[2,3],[1,2]])
>>> b
array([[2, 3],
[1, 2]])
>>> a = b[:,0].copy()
>>> a
array([2, 1])
>>> a += 2
>>> a
array([4, 3])
>>> b
array([[2, 3],
[1, 2]])
</code></pre>
| 1 | 2016-09-01T17:40:13Z | [
"python",
"arrays",
"numpy",
"matrix",
"copy"
] |
Python Dictionary: compare 2 values in 1 key | 39,277,724 | <p>I have a dataset where there is a list of names in a column, and a response for each name in a separate column. Each name is listed twice, and I want to see if there is agreement between the two recorded responses.
i.e.</p>
<p>name a | response 1</p>
<p>name a | response 2</p>
<p>name b | response 1</p>
<p>name b | response 2</p>
<p>I created a dictionary where the key has two values. The dictionary creates the name as the key, and each response as a value. I want to create a list to see if response1 = response2, or if response1 != response2. Here is what I have so far:</p>
<pre><code>myDict = {}
if name not in myDict.keys():
myDict[name] = {'response1': answer}
else:
myDict[name]['response2'] = answer
match = True
for items in hospitalDict:
if hospitalDict[items] != hospitalDict[items]:
match = False
print match
</code></pre>
<p>I am stuck on this part...any advice on how to construct this? I would also like to output this data to a csv eventually.</p>
| 3 | 2016-09-01T17:37:19Z | 39,277,944 | <p>I assume what you want to achieve is a list of names that have matching answers.</p>
<p>given that you have the dictionary already created:</p>
<pre><code>myDict = {
'name 1' : {'response1':'A', 'response2':'B'},
'name 2' : {'response1':'C', 'response2':'C'},
... ,
'name N' : {'response1':'Z', 'response2':'Z'},
}
</code></pre>
<p>you could do the following:</p>
<pre><code>myList = []
for name, resp in myDict.items():
if resp['response1'] == resp['response2']:
myList.append(name)
print myList
</code></pre>
<p>the result should be something like:</p>
<pre><code>['name 2', ..., 'name N']
</code></pre>
<p>that list only includes names with matching answers</p>
| 0 | 2016-09-01T17:49:52Z | [
"python",
"python-2.7",
"dictionary",
"key"
] |
Python Dictionary: compare 2 values in 1 key | 39,277,724 | <p>I have a dataset where there is a list of names in a column, and a response for each name in a separate column. Each name is listed twice, and I want to see if there is agreement between the two recorded responses.
i.e.</p>
<p>name a | response 1</p>
<p>name a | response 2</p>
<p>name b | response 1</p>
<p>name b | response 2</p>
<p>I created a dictionary where the key has two values. The dictionary creates the name as the key, and each response as a value. I want to create a list to see if response1 = response2, or if response1 != response2. Here is what I have so far:</p>
<pre><code>myDict = {}
if name not in myDict.keys():
myDict[name] = {'response1': answer}
else:
myDict[name]['response2'] = answer
match = True
for items in hospitalDict:
if hospitalDict[items] != hospitalDict[items]:
match = False
print match
</code></pre>
<p>I am stuck on this part...any advice on how to construct this? I would also like to output this data to a csv eventually.</p>
| 3 | 2016-09-01T17:37:19Z | 39,278,455 | <p>I implemented NamesDict class, which could have 2 or more responses for each key (name), compare it and export. To export in csv you can easily use csv library for python: <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a></p>
<p>I put comparison and csv export functions as an example.</p>
<pre><code>import csv
class NamesDict(dict):
def __init__(self, *args, **kwargs):
super(NamesDict, self).__init__(*args, **kwargs)
def __setitem__(self, key, item):
if isinstance(item, dict):
self.__dict__[key] = item
else:
raise Exception('item has to be a dict')
def __getitem__(self, key):
return self.__dict__[key]
def responses_match(self, key):
# Here you can implement your own comparison method
match = False
for key_one, value_one in self.__dict__[key].items():
for key_two, value_two in self.__dict__[key].items():
if key_one != key_two and value_one == value_two:
match = True
return match
def export_csv(self, path):
# Here you can change csv export
with open(path, 'wb') as csv_file:
fieldnames = ['name', 'responses']
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
for key, value in self.__dict__.items():
responses_string = ''
for resp_key, resp in value.items():
responses_string += '%s=%s\n' % (str(resp_key), str(resp))
writer.writerow({
'name': key,
'responses': responses_string
})
if __name__ == '__main__':
namesDict = NamesDict()
namesDict['test'] = {
'response1': 1,
'response2': 2
}
namesDict['test2'] = {
'response1': 2,
'response2': 2
}
print(namesDict.responses_match('test')) # False
print(namesDict.responses_match('test2')) # True
namesDict.export_csv('test.csv')
</code></pre>
| 0 | 2016-09-01T18:22:01Z | [
"python",
"python-2.7",
"dictionary",
"key"
] |
Python Dictionary: compare 2 values in 1 key | 39,277,724 | <p>I have a dataset where there is a list of names in a column, and a response for each name in a separate column. Each name is listed twice, and I want to see if there is agreement between the two recorded responses.
i.e.</p>
<p>name a | response 1</p>
<p>name a | response 2</p>
<p>name b | response 1</p>
<p>name b | response 2</p>
<p>I created a dictionary where the key has two values. The dictionary creates the name as the key, and each response as a value. I want to create a list to see if response1 = response2, or if response1 != response2. Here is what I have so far:</p>
<pre><code>myDict = {}
if name not in myDict.keys():
myDict[name] = {'response1': answer}
else:
myDict[name]['response2'] = answer
match = True
for items in hospitalDict:
if hospitalDict[items] != hospitalDict[items]:
match = False
print match
</code></pre>
<p>I am stuck on this part...any advice on how to construct this? I would also like to output this data to a csv eventually.</p>
| 3 | 2016-09-01T17:37:19Z | 39,278,535 | <p>you can use groupby and customer key function to separate two groups </p>
<pre><code> from itertools import groupby
myDict = {
'name 1' : {'response1':'A', 'response2':'B'},
'name 2' : {'response1':'C', 'response2':'C'},
'name N' : {'response1':'Z', 'response2':'Z'},
}
for i in groupby(myDict.items(),key = lambda x: x[1]['response1'] == x[1]['response2']):
print i[0],list(i[1])
True [('name N', {'response2': 'Z', 'response1': 'Z'}), ('name 2', {'response2': 'C', 'response1': 'C'})]
False [('name 1', {'response2': 'B', 'response1': 'A'})]
</code></pre>
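<p>One caveat: <code>itertools.groupby</code> only merges <em>consecutive</em> items, so with an arbitrary dict ordering it is safer to sort by the same key first. A hedged sketch:</p>

```python
from itertools import groupby

myDict = {
    'name 1': {'response1': 'A', 'response2': 'B'},
    'name 2': {'response1': 'C', 'response2': 'C'},
    'name N': {'response1': 'Z', 'response2': 'Z'},
}

keyfunc = lambda item: item[1]['response1'] == item[1]['response2']
groups = {}
# Sort by the key first: groupby only merges runs of consecutive equal keys
for matches, items in groupby(sorted(myDict.items(), key=keyfunc), key=keyfunc):
    groups[matches] = [name for name, _ in items]

print(groups)  # {False: ['name 1'], True: ['name 2', 'name N']}
```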
| 0 | 2016-09-01T18:26:53Z | [
"python",
"python-2.7",
"dictionary",
"key"
] |
Python Dictionary: compare 2 values in 1 key | 39,277,724 | <p>I have a dataset where there is a list of names in a column, and a response for each name in a separate column. Each name is listed twice, and I want to see if there is agreement between the two recorded responses.
i.e.</p>
<p>name a | response 1</p>
<p>name a | response 2</p>
<p>name b | response 1</p>
<p>name b | response 2</p>
<p>I created a dictionary where the key has two values. The dictionary creates the name as the key, and each response as a value. I want to create a list to see if response1 = response2, or if response1 != response2. Here is what I have so far:</p>
<pre><code>myDict = {}
if name not in myDict.keys():
myDict[name] = {'response1': answer}
else:
myDict[name]['response2'] = answer
match = True
for items in hospitalDict:
if hospitalDict[items] != hospitalDict[items]:
match = False
print match
</code></pre>
<p>I am stuck on this part...any advice on how to construct this? I would also like to output this data to a csv eventually.</p>
| 3 | 2016-09-01T17:37:19Z | 39,278,580 | <p>Assuming myDict and hospitalDict are the same dictionary, you need just one simple change. Change the line:</p>
<pre><code>if hospitalDict[items] != hospitalDict[items]:
</code></pre>
<p>To the line:</p>
<pre><code>if hospitalDict[items]["response1"] != hospitalDict[items]["response2"]:
</code></pre>
<p>Then <strong>match</strong> will be True if all the responses match, and False if there are one or more different responses.</p>
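<p>A tiny runnable demo of that corrected comparison (the dictionary below is invented sample data):</p>

```python
hospitalDict = {
    'name a': {'response1': 'yes', 'response2': 'yes'},
    'name b': {'response1': 'yes', 'response2': 'no'},
}

match = True
for name in hospitalDict:
    if hospitalDict[name]['response1'] != hospitalDict[name]['response2']:
        match = False

print(match)  # False, because 'name b' disagrees
```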
| 0 | 2016-09-01T18:29:51Z | [
"python",
"python-2.7",
"dictionary",
"key"
] |
graphlab SFrame sum all values in a column | 39,277,805 | <p>How to sum all values in a column of SFrame graphlab. I tried looking into the official documentation and it is given only for SaArray(<a href="https://turi.com/products/create/docs/generated/graphlab.SArray.sum.html#graphlab.SArray.sum" rel="nofollow">doc</a>)
without any example.</p>
| 1 | 2016-09-01T17:41:58Z | 39,283,213 | <pre><code>>>> import graphlab as gl
>>> sf = gl.SFrame({'foo':[1,2,3], 'bar':[4,5,6]})
>>> sf
Columns:
bar int
foo int
Rows: 3
Data:
+-----+-----+
| bar | foo |
+-----+-----+
| 4 | 1 |
| 5 | 2 |
| 6 | 3 |
+-----+-----+
[3 rows x 2 columns]
>>> sf['foo'].sum()
6
</code></pre>
| 1 | 2016-09-02T01:52:35Z | [
"python",
"graphlab"
] |
graphlab SFrame sum all values in a column | 39,277,805 | <p>How to sum all values in a column of SFrame graphlab. I tried looking into the official documentation and it is given only for SaArray(<a href="https://turi.com/products/create/docs/generated/graphlab.SArray.sum.html#graphlab.SArray.sum" rel="nofollow">doc</a>)
without any example.</p>
| 1 | 2016-09-01T17:41:58Z | 39,320,324 | <p>I think the question from the op was more about how to do this across all (or a list of) columns at once. Here's the comparison between pandas and graphlab.</p>
<pre><code># imports
import graphlab as gl
import pandas as pd
import numpy as np
# generate data
data = np.random.randint(0,10,size=100).reshape(10,10)
col_names = list('ABCDEFGHIJ')
# make dataframe and sframe
df = pd.DataFrame(data, columns=col_names)
sf = gl.SFrame(df)
# get sum for all columns (pandas). Returns a series.
df.sum().sort_values(ascending=False)
D 65
A 61
J 59
B 50
H 46
G 46
I 45
F 43
C 37
E 36
# sf.sum() does not work
# get sum for each of the columns (graphlab)
for col in col_names:
print col, sf[col].sum()
A 61
B 50
C 37
D 65
E 36
F 43
G 46
H 46
I 45
J 59
</code></pre>
<p>I had the same question. Pandas provides an easy interface to apply an aggregating function across rows or columns of a dataframe. I could not find the same for an SFrame; the only way I could think to do it was to iterate over a list of columns.</p>
<p>Is there a better way?</p>
| 0 | 2016-09-04T18:44:02Z | [
"python",
"graphlab"
] |
Iteratively concatenate columns in pandas with NaN values | 39,277,838 | <p>I have a <code>pandas.DataFrame</code> data frame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"x": ["hello there you can go home now", "why should she care", "please sort me appropriately"],
"y": [np.nan, "finally we were able to go home", "but what about meeeeeeeeeee"],
"z": ["", "alright we are going home now", "ok fine shut up already"]})
cols = ["x", "y", "z"]
</code></pre>
<p>I want to iteratively concatenate these columns, as opposed to writing something like:</p>
<pre><code>df["concat"] = df["x"].str.cat(df["y"], sep = " ").str.cat(df["z"], sep = " ")
</code></pre>
<p>I know that three columns seems trivial to put together, but I actually have 30. so, I would like to do something like:</p>
<pre><code>df["concat"] = df[cols[0]]
for i in range(1, len(cols)):
df["concat"] = df["concat"].str.cat(df[cols[i]], sep = " ")
</code></pre>
<p>Right now, the initial <code>df["concat"] = df[cols[0]]</code> line works fine, but the <code>NaN</code> value in location <code>df.loc[1, "y"]</code> messes up the concatenation. Ultimately, the entire <code>1</code>st row ends up as <code>NaN</code> in <code>df["concat"]</code> due to this one null value. How can I get around this? Is there some option with <code>pd.Series.str.cat</code> I need to specify?</p>
| 4 | 2016-09-01T17:43:40Z | 39,278,015 | <p><strong><em>Option 1</em></strong></p>
<pre><code>pd.Series(df.fillna('').values.tolist()).str.join(' ')
0 hello there you can go home now
1 why should she care finally we were able to go...
2 please sort me appropriately but what about me...
dtype: object
</code></pre>
<p><strong><em>Option 2</em></strong></p>
<pre><code>df.fillna('').add(' ').sum(1).str.strip()
0 hello there you can go home now
1 why should she care finally we were able to go...
2 please sort me appropriately but what about me...
dtype: object
</code></pre>
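<p>A third option, assuming every selected column is string-typed once NaN is filled, is a plain row-wise join; <code>cols</code> here is the same three-column example from the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": ["hello there you can go home now", "why should she care", "please sort me appropriately"],
                   "y": [np.nan, "finally we were able to go home", "but what about meeeeeeeeeee"],
                   "z": ["", "alright we are going home now", "ok fine shut up already"]})
cols = ["x", "y", "z"]

# Join the chosen columns row-wise; fillna('') first so NaN doesn't poison the row
df["concat"] = df[cols].fillna('').apply(' '.join, axis=1).str.strip()
print(df["concat"].tolist())
```

If an empty cell can occur between non-empty ones, you may also want to collapse the resulting double spaces.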
| 3 | 2016-09-01T17:54:03Z | [
"python",
"pandas"
] |
How do I prevent sqlalchemy from creating a transaction on select? | 39,277,841 | <p>My problem:</p>
<p>I have a file with several rows of data in it. I want to <em>try</em> to insert every row into my database, but if <strong>any</strong> of the rows have problems I need to roll back the whole kit and kaboodle. But I want to track the actual errors so rather than just dying on the first record that has an error I can say something like this:</p>
<blockquote>
<p>This file has 42 errors in it.</p>
<pre><code>Line 1 is missing a whirlygig.
Line 2 is a duplicate.
Line 5 is right out.
</code></pre>
</blockquote>
<p>The way I'm trying to do this is with transactions, but I have a problem where SQLAlchemy creates implicit transactions on select, and apparently I don't really understand how sqlalchemy is using transactions because nothing I do seems to work the way I want. Here's some code that demonstrates my problem:</p>
<pre><code>import sqlalchemy as sa
import logging
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
l = logging.getLogger('sqlalchemy.engine')
l.setLevel(logging.INFO)
l.addHandler(logging.StreamHandler())
engine = sa.create_engine('YOUR PG CONNECTION HERE')
Session = sessionmaker(bind=engine)
session = Session()
temp_metadata = sa.MetaData(schema='pg_temp')
TempBase = declarative_base(metadata=temp_metadata)
class Whatever(TempBase):
__tablename__ = 'whatevs'
id = sa.Column('id', sa.Integer, primary_key=True, autoincrement=True)
fnord = sa.Column('fnord', sa.String, server_default=sa.schema.FetchedValue())
quux = sa.Column('quux', sa.String)
value = sa.Column('value', sa.String)
def insert_some_stuff(session, data):
value = session.query(Whatever.value).limit(1).scalar()
session.add(Whatever(quux=data, value='hi'))
try:
session.commit()
errors = 0
except sa.exc.IntegrityError:
session.rollback()
errors = 1
return errors
with session.begin_nested():
session.execute('''
CREATE TABLE pg_temp.whatevs (
id serial
, fnord text not null default 'fnord'
, quux text not null
, value text not null
, CONSTRAINT totally_unique UNIQUE (quux)
);
INSERT INTO pg_temp.whatevs (value, quux) VALUES ('something cool', 'fnord');
''')
w = Whatever(value='something cool', quux='herp')
session.add(w)
errors = 0
for q in ('foo', 'biz', 'bang', 'herp'):
with session.begin_nested():
errors += insert_some_stuff(session, q)
for row in session.query(Whatever).all():
print(row.id, row.fnord, row.value)
</code></pre>
<p>I've tried a variety of combinations where I do <code>session.begin()</code> or <code>.begin(subtransactions=True)</code>, but they all either don't work, or just seem really weird because I'm committing transactions I never (explicitly) started.</p>
<p>Can I prevent sqlalchemy from creating a transaction on select? Or am I missing something here? Is there a better way to accomplish what I want?</p>
| 1 | 2016-09-01T17:43:41Z | 39,280,270 | <p>It looks as if <code>begin_nested</code> and <code>with</code> blocks are the way to go.</p>
<blockquote>
<p><code>begin_nested()</code>, in the same manner as the less often used <code>begin()</code> method... - <a href="http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#using-savepoint" rel="nofollow">sqlalchemy docs</a></p>
</blockquote>
<p>That leads me to believe that <code>begin_nested</code> is the preferred option.</p>
<pre><code>def insert_some_stuff(session, data):
try:
with session.begin_nested():
value = session.query(Whatever.value).limit(1).scalar()
session.add(Whatever(quux=data, value='hi'))
errors = 0
except sa.exc.IntegrityError:
errors = 1
return errors
</code></pre>
<p>By using the <code>with</code> block, it Does The Right Thing™ when it comes to committing/rolling back, and not rolling back too far.</p>
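<p>The savepoint mechanics underneath <code>begin_nested</code> can also be illustrated with nothing but the stdlib <code>sqlite3</code> module; this is a hedged sketch of the per-row-savepoint / whole-file-transaction pattern from the question, with a simplified stand-in table and data:</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.isolation_level = None  # manage transactions by hand
conn.execute('CREATE TABLE whatevs (quux TEXT UNIQUE)')

rows = ['foo', 'biz', 'bang', 'foo']  # the last row is a duplicate
errors = []

conn.execute('BEGIN')                          # one transaction for the whole file
for lineno, quux in enumerate(rows, 1):
    conn.execute('SAVEPOINT per_row')          # analogous to session.begin_nested()
    try:
        conn.execute('INSERT INTO whatevs (quux) VALUES (?)', (quux,))
        conn.execute('RELEASE per_row')
    except sqlite3.IntegrityError:
        conn.execute('ROLLBACK TO per_row')    # undo just this row...
        conn.execute('RELEASE per_row')
        errors.append(lineno)                  # ...but remember which line failed

if errors:
    conn.execute('ROLLBACK')                   # any error voids the whole file
else:
    conn.execute('COMMIT')

count = conn.execute('SELECT COUNT(*) FROM whatevs').fetchone()[0]
print(errors, count)  # [4] 0
```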
| 1 | 2016-09-01T20:20:49Z | [
"python",
"transactions",
"sqlalchemy"
] |
SQLAlchemy emitting cross join for no reason | 39,277,957 | <p>I had a query set up in SQLAlchemy which was running a bit slow, tried to optimize it. The result, for some unknown reason, uses an implicit cross join, which is both significantly slower and comes up with entirely the wrong result. I've anonymized the table names and arguments but otherwise made no changes. Does anyone know where this is coming from?</p>
<p>To make it easier to find: The differences in new and old emitted SQL are that the new one has a longer SELECT and mentions all three tables in the WHERE before any JOINs.</p>
<p>Original code:</p>
<pre><code>cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'
query = (db.session.query(Item.name)
.join(Project, Customer)
.filter(Customer.name == cust_name,
Project.name == proj_name)
.distinct(Item.name))
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
</code></pre>
<p>Original emitted SQL as logged by flask_sqlalchemy.get_debug_queries:</p>
<pre><code>QUERY: SELECT DISTINCT ON (items.name) items.name AS items_name
FROM items JOIN projects ON projects.id = items._project_id JOIN customers ON customers.id = projects._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
Parameters: `{'name_2': u'job1', 'color_1': u'blue', 'name_1': u'Bob'}
</code></pre>
<p>New code:</p>
<pre><code>cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'
query = (db.session.query(Item)
.options(Load(Item).load_only('name', 'color'),
joinedload(Item.project, innerjoin=True).load_only('name').
joinedload(Project.customer, innerjoin=True).load_only('name'))
.filter(Customer.name == cust_name,
Project.name == proj_name)
.distinct(Item.name))
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
</code></pre>
<p>New emitted SQL as logged by flask_sqlalchemy.get_debug_queries:</p>
<pre><code>QUERY: SELECT DISTINCT ON (items.name) items.id AS items_id, items.name AS items_name, items.color AS items_color, items._project_id AS items__project_id, customers_1.id AS customers_1_id, customers_1.name AS customers_1_name, projects_1.id AS projects_1_id, projects_1.name AS projects_1_name
FROM customers, projects, items JOIN projects AS projects_1 ON projects_1.id = items._project_id JOIN customers AS customers_1 ON customers_1.id = projects_1._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
Parameters: `{'color_1': u'blue', 'name_2': u'job1', 'name_1': u'Bob'}
</code></pre>
<p>In case it matters, the underlying database is PostgreSQL.</p>
<p>The original intent of the query only needs <code>Item.name</code>. The optimization attempt is looking less likely to actually be helpful the longer I think about it, but I still want to know where that cross-join came from in case it happens again somewhere that adding <code>joinedload</code>, <code>load_only</code>, etc. would actually help.</p>
| 1 | 2016-09-01T17:50:49Z | 39,278,255 | <p>Not sure what you are trying to achieve but it looks like you're trying to do an inner join between the tables, and select only specific columns. </p>
<p>So I think you need to do something like:</p>
<pre><code>cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'
query = (db.session.query(Item.name)
.join(Project, Customer)
.filter(Customer.name == cust_name,
Project.name == proj_name)
.distinct(Item.name))
# Select the loaded columns
query = query.add_columns(Item.name, Item.color, Project.name, Customer.name)
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
</code></pre>
<p>FWIW I don't think that will bring any significant optimization to your query. </p>
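<p>If the original goal was to keep the eager load without the cartesian product, a commonly suggested alternative is to join explicitly and attach the loaded entities with <code>contains_eager</code>, so the filter and the eager load share the same JOIN. The sketch below runs against simplified in-memory SQLite models reconstructed from the question (the model definitions are assumptions, and the PostgreSQL-only <code>DISTINCT ON</code> is dropped):</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import contains_eager, declarative_base, relationship, sessionmaker

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customers'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Project(Base):
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    _customer_id = Column(Integer, ForeignKey('customers.id'))
    customer = relationship(Customer)

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    color = Column(String)
    _project_id = Column(Integer, ForeignKey('projects.id'))
    project = relationship(Project)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

cust = Customer(name=u'Bob')
proj = Project(name=u'job1', customer=cust)
session.add_all([cust, proj,
                 Item(name=u'widget', color=u'blue', project=proj),
                 Item(name=u'gadget', color=u'red', project=proj)])
session.commit()

# Explicit joins + contains_eager: the filter and the eager load share one JOIN,
# so no extra FROM entries (and no cross join) are emitted.
query = (session.query(Item)
         .join(Item.project)
         .join(Project.customer)
         .options(contains_eager(Item.project).contains_eager(Project.customer))
         .filter(Customer.name == u'Bob',
                 Project.name == u'job1',
                 Item.color == u'blue'))
result = query.all()
print([item.name for item in result])  # ['widget']
```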
| 1 | 2016-09-01T18:08:51Z | [
"python",
"postgresql",
"sqlalchemy"
] |
SQLAlchemy emitting cross join for no reason | 39,277,957 | <p>I had a query set up in SQLAlchemy which was running a bit slow, tried to optimize it. The result, for some unknown reason, uses an implicit cross join, which is both significantly slower and comes up with entirely the wrong result. I've anonymized the table names and arguments but otherwise made no changes. Does anyone know where this is coming from?</p>
<p>To make it easier to find: The differences in new and old emitted SQL are that the new one has a longer SELECT and mentions all three tables in the WHERE before any JOINs.</p>
<p>Original code:</p>
<pre><code>cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'
query = (db.session.query(Item.name)
.join(Project, Customer)
.filter(Customer.name == cust_name,
Project.name == proj_name)
.distinct(Item.name))
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
</code></pre>
<p>Original emitted SQL as logged by flask_sqlalchemy.get_debug_queries:</p>
<pre><code>QUERY: SELECT DISTINCT ON (items.name) items.name AS items_name
FROM items JOIN projects ON projects.id = items._project_id JOIN customers ON customers.id = projects._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
Parameters: {'name_2': u'job1', 'color_1': u'blue', 'name_1': u'Bob'}
</code></pre>
<p>New code:</p>
<pre><code>cust_name = u'Bob'
proj_name = u'job1'
item_color = u'blue'
query = (db.session.query(Item)
.options(Load(Item).load_only('name', 'color'),
joinedload(Item.project, innerjoin=True).load_only('name').
joinedload(Project.customer, innerjoin=True).load_only('name'))
.filter(Customer.name == cust_name,
Project.name == proj_name)
.distinct(Item.name))
# some conditionals determining last filter, resolving to this one:
query = query.filter(Item.color == item_color)
result = query.all()
</code></pre>
<p>New emitted SQL as logged by flask_sqlalchemy.get_debug_queries:</p>
<pre><code>QUERY: SELECT DISTINCT ON (items.nygc_id) items.id AS items_id, items.name AS items_name, items.color AS items_color, items._project_id AS items__project_id, customers_1.id AS customers_1_id, customers_1.name AS customers_1_name, projects_1.id AS projects_1_id, projects_1.name AS projects_1_name
FROM customers, projects, items JOIN projects AS projects_1 ON projects_1.id = items._project_id JOIN customers AS customers_1 ON customers_1.id = projects_1._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
Parameters: {'color_1': u'blue', 'name_2': u'job1', 'name_1': u'Bob'}
</code></pre>
<p>In case it matters, the underlying database is PostgreSQL.</p>
<p>The original intent of the query only needs <code>Item.name</code>. The optimization attempt is looking less likely to actually be helpful the longer I think about it, but I still want to know where that cross-join came from in case it happens again somewhere that adding <code>joinedload</code>, <code>load_only</code>, etc. would actually help.</p>
| 1 | 2016-09-01T17:50:49Z | 39,278,757 | <p>This is because a <code>joinedload</code> is different from a <code>join</code>. The <code>joinedload</code>ed entities are effectively anonymous, and the later filters you applied refer to different instances of the same tables, so <code>customers</code> and <code>projects</code> get joined in twice.</p>
<p>What you should do is a <code>join</code> as before, but use <a href="http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html#sqlalchemy.orm.contains_eager" rel="nofollow"><code>contains_eager</code></a> to make your join look like a <code>joinedload</code>.</p>
<pre><code>query = (session.query(Item)
.join(Item.project)
.join(Project.customer)
.options(Load(Item).load_only('name', 'color'),
Load(Item).contains_eager("project").load_only('name'),
Load(Item).contains_eager("project").contains_eager("customer").load_only('name'))
.filter(Customer.name == cust_name,
Project.name == proj_name)
.distinct(Item.name))
</code></pre>
<p>This gives you the query</p>
<pre><code>SELECT DISTINCT ON (items.name) customers.id AS customers_id, customers.name AS customers_name, projects.id AS projects_id, projects.name AS projects_name, items.id AS items_id, items.name AS items_name, items.color AS items_color
FROM items JOIN projects ON projects.id = items._project_id JOIN customers ON customers.id = projects._customer_id
WHERE customers.name = %(name_1)s AND projects.name = %(name_2)s AND items.color = %(color_1)s
</code></pre>
| 2 | 2016-09-01T18:43:30Z | [
"python",
"postgresql",
"sqlalchemy"
] |
Storing pure python datetime.datetime in pandas DataFrame | 39,278,042 | <p>Since <code>matplotlib</code> doesn't support <a href="https://github.com/pydata/pandas/issues/8113" rel="nofollow">either</a> <code>pandas.Timestamp</code> <a href="http://stackoverflow.com/questions/22048792/how-do-i-display-dates-when-plotting-in-matplotlib-pyplot">or</a> <code>numpy.datetime64</code>, and there are <a href="http://stackoverflow.com/questions/27472548/pandas-scatter-plotting-datetime">no simple workarounds</a>, I decided to convert a native pandas date column into a pure python <code>datetime.datetime</code> so that scatter plots are easier to make.</p>
<p>However:</p>
<pre><code>t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31')]})
t.dtypes # date datetime64[ns], as expected
pure_python_datetime_array = t.date.dt.to_pydatetime() # works fine
t['date'] = pure_python_datetime_array # doesn't do what I hoped
t.dtypes # date datetime64[ns] as before, no luck changing it
</code></pre>
<p>I'm guessing pandas auto-converts the pure python <code>datetime</code> produced by <code>to_pydatetime</code> into its native format. I guess it's convenient behavior in general, but is there a way to override it?</p>
| 3 | 2016-09-01T17:55:29Z | 39,278,421 | <p>The use of <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#converting-to-python-datetimes" rel="nofollow">to_pydatetime()</a> is correct.</p>
<pre><code>In [87]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')]})
In [88]: t.date.dt.to_pydatetime()
Out[88]:
array([datetime.datetime(2012, 12, 31, 0, 0),
datetime.datetime(2013, 12, 31, 0, 0)], dtype=object)
</code></pre>
<p>When you assign it back to <code>t.date</code>, it automatically converts it back to <code>datetime64</code></p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#overview" rel="nofollow">pandas.Timestamp</a> is a datetime subclass anyway :)</p>
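<p>If you actually need the pure-python objects stored in the frame, one workaround (a sketch, not an official recipe; the <code>date_py</code> column name is made up here) is to wrap them in an explicitly object-dtyped Series before assigning, which keeps pandas from converting back to <code>datetime64</code>:</p>

```python
import datetime
import pandas as pd

t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'),
                           pd.to_datetime('2013-12-31')]})

# wrap the pure-python datetimes in an object-dtype Series before assignment,
# so pandas stores them as-is instead of re-inferring datetime64
py_dates = pd.Series(t.date.dt.to_pydatetime(), dtype=object, index=t.index)
t['date_py'] = py_dates

assert t['date_py'].dtype == object
assert type(t['date_py'].iloc[0]) is datetime.datetime
```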
<p>One way to do the plot is to convert the datetime to int64:</p>
<pre><code>In [117]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')], 'sample_data': [1, 2]})
In [118]: t['date_int'] = t.date.astype(np.int64)
In [119]: t
Out[119]:
date sample_data date_int
0 2012-12-31 1 1356912000000000000
1 2013-12-31 2 1388448000000000000
In [120]: t.plot(kind='scatter', x='date_int', y='sample_data')
Out[120]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3c852662d0>
In [121]: plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/oOWCU.png" rel="nofollow"><img src="http://i.stack.imgur.com/oOWCU.png" alt="enter image description here"></a></p>
<p>Another workaround is (to not use scatter, but ...):</p>
<pre><code>In [126]: t.plot(x='date', y='sample_data', style='.')
Out[126]: <matplotlib.axes._subplots.AxesSubplot at 0x7f3c850f5750>
</code></pre>
<p>And, the last workaround:</p>
<pre><code>In [141]: import matplotlib.pyplot as plt
In [142]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')], 'sample_data': [100, 20000]})
In [143]: t
Out[143]:
date sample_data
0 2012-12-31 100
1 2013-12-31 20000
In [144]: plt.scatter(t.date.dt.to_pydatetime() , t.sample_data)
Out[144]: <matplotlib.collections.PathCollection at 0x7f3c84a10510>
In [145]: plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/Xh4ZE.png" rel="nofollow"><img src="http://i.stack.imgur.com/Xh4ZE.png" alt="enter image description here"></a></p>
<p>This has an issue at <a href="https://github.com/pydata/pandas/issues/8113" rel="nofollow">github</a>, which is open as of now.</p>
| 2 | 2016-09-01T18:19:19Z | [
"python",
"python-3.x",
"datetime",
"pandas"
] |
Storing pure python datetime.datetime in pandas DataFrame | 39,278,042 | <p>Since <code>matplotlib</code> doesn't support <a href="https://github.com/pydata/pandas/issues/8113" rel="nofollow">either</a> <code>pandas.Timestamp</code> <a href="http://stackoverflow.com/questions/22048792/how-do-i-display-dates-when-plotting-in-matplotlib-pyplot">or</a> <code>numpy.datetime64</code>, and there are <a href="http://stackoverflow.com/questions/27472548/pandas-scatter-plotting-datetime">no simple workarounds</a>, I decided to convert a native pandas date column into a pure python <code>datetime.datetime</code> so that scatter plots are easier to make.</p>
<p>However:</p>
<pre><code>t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31')]})
t.dtypes # date datetime64[ns], as expected
pure_python_datetime_array = t.date.dt.to_pydatetime() # works fine
t['date'] = pure_python_datetime_array # doesn't do what I hoped
t.dtypes # date datetime64[ns] as before, no luck changing it
</code></pre>
<p>I'm guessing pandas auto-converts the pure python <code>datetime</code> produced by <code>to_pydatetime</code> into its native format. I guess it's convenient behavior in general, but is there a way to override it?</p>
| 3 | 2016-09-01T17:55:29Z | 39,298,633 | <p>For me, the steps look like this:</p>
<ol>
<li>convert timezone with pytz </li>
<li>convert to_datetime with pandas and make that the index</li>
<li>plot and autoformat</li>
</ol>
<p>Starting df looks like this:</p>
<p><a href="http://i.stack.imgur.com/L2WPm.png" rel="nofollow"><img src="http://i.stack.imgur.com/L2WPm.png" alt="before converting timestamps"></a></p>
<ol>
<li><code>import pytz</code>
<code>ts['posTime'] = [x.astimezone(pytz.timezone('US/Pacific')) for x in ts['posTime']]</code></li>
</ol>
<p>I can see that it worked because the timestamps changed format:</p>
<p><a href="http://i.stack.imgur.com/KHk1U.png" rel="nofollow"><img src="http://i.stack.imgur.com/KHk1U.png" alt="after timezone conversion"></a></p>
<ol start="2">
<li><p><code>sample['posTime'] = pandas.to_datetime(sample['posTime'])</code></p>
<p><code>sample.index = sample['posTime']</code></p></li>
</ol>
<p>At this point, just plotting with pandas (which uses matplotlib under the hood) gives me a nice rotation and totally the wrong format:</p>
<p><a href="http://i.stack.imgur.com/0bPCU.png" rel="nofollow"><img src="http://i.stack.imgur.com/0bPCU.png" alt="after pandas datetime conversion"></a></p>
<ol start="3">
<li><p>However, there's nothing wrong with the format of the objects. I can now make a scatterplot with matplotlib and it autoformats the datetimes as you'd expect. </p>
<p><code>plt.scatter(sample['posTime'].values, sample['Altitude'].values)</code></p>
<p><code>fig = plt.gcf()</code></p>
<p><code>fig.set_size_inches(9.5, 3.5)</code></p></li>
</ol>
<p><a href="http://i.stack.imgur.com/Tqy0w.png" rel="nofollow"><img src="http://i.stack.imgur.com/Tqy0w.png" alt="formatted"></a> </p>
<ol start="4">
<li>If you use the auto format method, you can zoom in and it will continue to automatically choose the appropriate format (but you still have to set the scale manually). </li>
</ol>
<p><a href="http://i.stack.imgur.com/n5vP7.png" rel="nofollow"><img src="http://i.stack.imgur.com/n5vP7.png" alt="autoformatted"></a></p>
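<p>For reference, a text-only sketch of steps 2-3 with made-up sample data (step 1's pytz timezone conversion is omitted; the <code>posTime</code>/<code>Altitude</code> names mirror the screenshots):</p>

```python
import datetime
import pandas as pd

sample = pd.DataFrame({
    'posTime': ['2016-09-01 10:00:00', '2016-09-01 11:00:00'],
    'Altitude': [100.0, 150.0],
})

# step 2: parse the timestamps and make them the index
sample['posTime'] = pd.to_datetime(sample['posTime'])
sample.index = sample['posTime']

# step 3: matplotlib's scatter accepts the pure-python datetimes directly
py_times = sample['posTime'].dt.to_pydatetime()
assert all(type(ts) is datetime.datetime for ts in py_times)
```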
| 0 | 2016-09-02T18:22:05Z | [
"python",
"python-3.x",
"datetime",
"pandas"
] |
How do I add collision detection to the sides of the window in Tkinter? [Python 3] | 39,278,069 | <p>I'm making an (admittedly, my first) game in Python, and I've only really made the base of the game (like, moving around), and everything works <em>pretty</em> well, except for the fact that you can move right off the screen.
How would I add collision detection to the sides of the window?
Here's the code:</p>
<pre><code>from tkinter import *
import time
tk = Tk()
tk.title("Squarey")
c = colorchooser.askcolor(title='Choose Squarey\'s Color')
canvas = Canvas(tk, width=700, height=700)
canvas.pack()
class Squarey:
def __init__(self, canvas, color):
self.canvas = canvas
self.id = canvas.create_rectangle(10, 10, 50, 50, fill=color)
def draw(self, pos):
def move(event):
if event.keysym == 'Up':
self.canvas.move(self.id, 0, -5)
elif event.keysym == 'Down':
self.canvas.move(self.id, 0, 5)
elif event.keysym == 'Left':
self.canvas.move(self.id, -5, 0)
else:
self.canvas.move(self.id, 5, 0)
canvas.bind_all('<KeyPress-Up>', move)
canvas.bind_all('<KeyPress-Down>', move)
canvas.bind_all('<KeyPress-Left>', move)
canvas.bind_all('<KeyPress-Right>', move)
class Tele:
def __init__(self, canvas, pos):
self.canvas = canvas
self.id = canvas.create_text(10, 10, text=pos)
def draw(self, pos):
canvas.delete(self.id)
self.id = canvas.create_text(350, 350, text=pos)
squarey = Squarey(canvas, c[1])
pos = squarey.canvas.coords(squarey.id)
tele = Tele(canvas, pos)
while 1:
pos = squarey.canvas.coords(squarey.id)
squarey.draw(pos)
tele.draw(pos)
tk.update_idletasks()
tk.update()
time.sleep(0.01)
</code></pre>
<p>Hope everyone can help me out; thanks!</p>
| -1 | 2016-09-01T17:57:11Z | 39,278,300 | <p>I assume you want to detect the collision of a rectangle with the edges of the screen. I am a stranger to tkinter, but I have experience with pygame, so let me explain as well as I can.</p>
<p>In pygame, a rectangle's position is given as its top-left corner (x, y), plus a (one side) and b (another side), such as:</p>
<pre><code>(x, y) b
...............
. .
. .
a. .
. .
...............
</code></pre>
<p>If you want to detect a collision, you have to check all four sides using these values.</p>
<p>Let's say screen is <code>(width, height)</code></p>
<pre><code># Top side collision
if y < 0:
    print("Touched top")
# Right side collision
if x + b > width:
    print("Touched right")
# Bottom side collision
if y + a > height:
    print("Touched bottom")
# Left side collision
if x < 0:
    print("Touched left")
</code></pre>
<p>I am pretty sure quite similar logic is needed in tkinter as well.</p>
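<p>The checks above can also be wrapped in a small pure function (the name is made up here) so they are easy to test without a game loop; it follows the diagram's convention that (x, y) is the top-left corner, <code>a</code> the height, and <code>b</code> the width:</p>

```python
def edge_collisions(x, y, a, b, width, height):
    """Return the screen edges a rectangle currently touches."""
    hits = []
    if y < 0:
        hits.append('top')
    if x + b > width:
        hits.append('right')
    if y + a > height:
        hits.append('bottom')
    if x < 0:
        hits.append('left')
    return hits

# a 40x40 square poking out of the left edge of a 700x700 screen
assert edge_collisions(-5, 10, 40, 40, 700, 700) == ['left']
# fully inside: no collisions
assert edge_collisions(100, 100, 40, 40, 700, 700) == []
```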
| 0 | 2016-09-01T18:12:15Z | [
"python",
"tkinter",
"collision-detection",
"python-3.4",
"tkinter-canvas"
] |
How do I add collision detection to the sides of the window in Tkinter? [Python 3] | 39,278,069 | <p>I'm making an (admittedly, my first) game in Python, and I've only really made the base of the game (like, moving around), and everything works <em>pretty</em> well, except for the fact that you can move right off the screen.
How would I add collision detection to the sides of the window?
Here's the code:</p>
<pre><code>from tkinter import *
import time
tk = Tk()
tk.title("Squarey")
c = colorchooser.askcolor(title='Choose Squarey\'s Color')
canvas = Canvas(tk, width=700, height=700)
canvas.pack()
class Squarey:
def __init__(self, canvas, color):
self.canvas = canvas
self.id = canvas.create_rectangle(10, 10, 50, 50, fill=color)
def draw(self, pos):
def move(event):
if event.keysym == 'Up':
self.canvas.move(self.id, 0, -5)
elif event.keysym == 'Down':
self.canvas.move(self.id, 0, 5)
elif event.keysym == 'Left':
self.canvas.move(self.id, -5, 0)
else:
self.canvas.move(self.id, 5, 0)
canvas.bind_all('<KeyPress-Up>', move)
canvas.bind_all('<KeyPress-Down>', move)
canvas.bind_all('<KeyPress-Left>', move)
canvas.bind_all('<KeyPress-Right>', move)
class Tele:
def __init__(self, canvas, pos):
self.canvas = canvas
self.id = canvas.create_text(10, 10, text=pos)
def draw(self, pos):
canvas.delete(self.id)
self.id = canvas.create_text(350, 350, text=pos)
squarey = Squarey(canvas, c[1])
pos = squarey.canvas.coords(squarey.id)
tele = Tele(canvas, pos)
while 1:
pos = squarey.canvas.coords(squarey.id)
squarey.draw(pos)
tele.draw(pos)
tk.update_idletasks()
tk.update()
time.sleep(0.01)
</code></pre>
<p>Hope everyone can help me out; thanks!</p>
| -1 | 2016-09-01T17:57:11Z | 39,278,497 | <p>You can get the size of your canvas like this:</p>
<pre><code>size, _ = self.canvas.winfo_geometry().split('+', maxsplit=1)
w, h = (int(_) for _ in size.split('x'))
</code></pre>
<p>And the position of your <code>Squarey</code> like this:</p>
<pre><code>x, y, _, __ = self.canvas.coords(self.id)
</code></pre>
<p>(There may be better ways to do this, of course)</p>
<p>Then just adapt your movement function like this:</p>
<pre><code>if event.keysym == 'Up':
if y > 0:
self.canvas.move(self.id, 0, -5)
elif event.keysym == 'Down':
if y+50 < h:
self.canvas.move(self.id, 0, 5)
elif event.keysym == 'Left':
if x > 0:
self.canvas.move(self.id, -5, 0)
else:
if x+50 < w:
self.canvas.move(self.id, 5, 0)
</code></pre>
<p>That should work for you (at least it does for me). But you shouldn't stop here, there are some improvements that you can make.</p>
<p>The first one that I would do is something like this:</p>
<pre><code>def __init__(self, canvas, color, width=50, height=50):
self.canvas = canvas
self.width = width
self.height = height
self.id = canvas.create_rectangle(10, 10, width, height, fill=color)
</code></pre>
<p>Then you could change your move:</p>
<pre><code>left_edge = x
right_edge = left_edge + self.width
top_edge = y
bottom_edge = top_edge + self.height
if event.keysym == 'Up' and top_edge > 0:
...
elif event.keysym == 'Down' and bottom_edge < h:
...
elif event.keysym == 'Left' and left_edge > 0:
...
elif event.keysym == 'Right' and right_edge < w:
...
</code></pre>
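<p>Going one step further, the boundary test itself can be factored into a pure function (the names here are hypothetical), which makes the movement logic trivial to unit-test without a canvas:</p>

```python
def clamped_move(x, y, dx, dy, size, w, h):
    """Move a square of side `size` by (dx, dy), refusing any move that
    would push an edge outside a w x h canvas; returns the new position."""
    nx, ny = x + dx, y + dy
    if 0 <= nx and nx + size <= w and 0 <= ny and ny + size <= h:
        return nx, ny
    return x, y  # blocked: stay put

assert clamped_move(0, 0, -5, 0, 50, 700, 700) == (0, 0)     # blocked at left edge
assert clamped_move(0, 0, 5, 0, 50, 700, 700) == (5, 0)      # normal move
assert clamped_move(655, 0, 5, 0, 50, 700, 700) == (655, 0)  # blocked at right edge
```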
| 1 | 2016-09-01T18:24:35Z | [
"python",
"tkinter",
"collision-detection",
"python-3.4",
"tkinter-canvas"
] |
Python time module - clock_getres(clk_id),clock_gettime(clk_id), clock_settime(clk_id, time) | 39,278,092 | <p>According to the Python 3.5 documentation (time module), all three functions <code>clock_getres(clk_id), clock_gettime(clk_id)</code> and <code>clock_settime(clk_id, time)</code> are available for Unix systems. According to the documentation: </p>
<blockquote>
<p><code>clock_getres(clk_id)</code> Return the resolution (precision) of the
specified clock <code>clk_id</code> </p>
<p><code>clock_gettime(clk_id)</code> Return the time of the specified clock <code>clk_id</code></p>
<p><code>clock_settime(clk_id, time)</code> Set the time of the specified clock
<code>clk_id</code></p>
</blockquote>
<p>But the Python documentation doesn't say anything about <code>clk_id</code>.
Can someone explain how to get a <code>clk_id</code> using Python?</p>
| 0 | 2016-09-01T17:58:32Z | 39,278,959 | <p>Basically, <code>clk_id</code> is the integer id of a clock; the list of clocks can be found at: <a href="https://docs.python.org/3/library/time.html" rel="nofollow">https://docs.python.org/3/library/time.html</a></p>
<p>For example, <strong>time.CLOCK_REALTIME</strong> == 0 and <strong>time.CLOCK_MONOTONIC</strong> == 1, etc.
For each clock you can read the time (and, for some clocks, set it), and the time differs between clocks as well.</p>
<pre><code>import time
clock_id_realtime = time.CLOCK_REALTIME
clock_id_monotonic = time.CLOCK_MONOTONIC
print('clock_id_realtime = %s' % clock_id_realtime)
print('clock_id_monotonic = %s' % clock_id_monotonic)
clock_realtime_time = time.clock_gettime(clock_id_realtime)
clock_monotonic_time = time.clock_gettime(clock_id_monotonic)
print('clock_realtime_time = %s' % clock_realtime_time)
print('clock_monotonic_time = %s' % clock_monotonic_time)
try:
print(time.clock_settime(clock_id_realtime, clock_realtime_time))
except PermissionError:
print('No permissions to change clock time')
</code></pre>
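<p>Since any <code>time.CLOCK_*</code> constant is a valid <code>clk_id</code>, you can also enumerate whatever clocks your platform exposes (Unix only; which constants exist varies by OS):</p>

```python
import time

# collect every CLOCK_* attribute the time module exposes on this platform
clock_ids = {name: getattr(time, name)
             for name in dir(time) if name.startswith('CLOCK_')}

for name, clk_id in sorted(clock_ids.items()):
    try:
        print(name, clk_id, 'resolution:', time.clock_getres(clk_id))
    except OSError:
        print(name, clk_id, 'not usable on this system')
```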
| 1 | 2016-09-01T18:56:24Z | [
"python",
"python-3.5"
] |
What's the command to "reset" a bokeh plot? | 39,278,110 | <p>I have a bokeh figure that has a reset button in the toolbar. Basically, I want to "reset" the figure when I update the data that I'm plotting in the figure. How can I do that?</p>
| 4 | 2016-09-01T17:59:52Z | 39,278,173 | <p>As of Bokeh <code>0.12.1</code> there is no built-in function to do this. It would be possible to make a <a href="http://bokeh.pydata.org/en/latest/docs/user_guide/extensions.html" rel="nofollow">custom extension</a> that does this. However, that would take a little work and experimentation and dialogue. If you'd like to pursue that option, I'd encourage you to come to the <a href="https://groups.google.com/a/continuum.io/forum/?pli=1#!forum/bokeh" rel="nofollow">public mailing list</a> which is better suited to iterative collaboration and discussion than SO. Alternatively, please feel free to open a feature request on the <a href="https://github.com/bokeh/bokeh/issues" rel="nofollow">project issue tracker</a>.</p>
| 1 | 2016-09-01T18:04:21Z | [
"python",
"plot",
"jupyter",
"bokeh"
] |
What's the command to "reset" a bokeh plot? | 39,278,110 | <p>I have a bokeh figure that has a reset button in the toolbar. Basically, I want to "reset" the figure when I update the data that I'm plotting in the figure. How can I do that?</p>
| 4 | 2016-09-01T17:59:52Z | 39,352,678 | <p>Example with a radiogroup callback; that's the best way I found to reset while changing plots: just compute the min/max of the data and set them on the plot's x and y ranges:</p>
<pre><code>from bokeh.plotting import Figure
from bokeh.models import ColumnDataSource, CustomJS, RadioGroup
from bokeh.layouts import gridplot
from bokeh.resources import CDN
from bokeh.embed import file_html
x0 = range(10)
x1 = range(100)
y0 = [i for i in x0]
y1 = [i*2 for i in x1][::-1]
fig=Figure()
source1=ColumnDataSource(data={"x":[],"y":[]})
source2=ColumnDataSource(data={"x0":x0,"x1":x1,"y0":y0,"y1":y1})
p = fig.line(x='x',y='y',source=source1)
callback=CustomJS(args=dict(s1=source1,s2=source2,px=fig.x_range,py=fig.y_range), code="""
var d1 = s1.get("data");
var d2 = s2.get("data");
var val = cb_obj.active;
d1["y"] = [];
var y = d2["y"+val];
var x = d2["x"+val];
var min = Math.min( ...y );
var max = Math.max( ...y );
py.set("start",min);
py.set("end",max);
var min = Math.min( ...x );
var max = Math.max( ...x );
px.set("start",min);
px.set("end",max);
for(i=0;i<y.length;i++){
d1["y"].push(d2["y"+val][i]);
d1["x"].push(d2["x"+val][i]);
}
s1.trigger("change");
""")
radiogroup=RadioGroup(labels=['First plot','Second plot'],active=0,callback=callback)
grid = gridplot([[fig,radiogroup]])
outfile=open('TEST.html','w')
outfile.write(file_html(grid,CDN,'Reset'))
outfile.close()
</code></pre>
<p>The Bokeh website is seriously lacking in examples for different ways to set callbacks for the different widgets.</p>
| 1 | 2016-09-06T15:24:10Z | [
"python",
"plot",
"jupyter",
"bokeh"
] |
Trying to get a grip on CachedProperty in clang\cindex.py | 39,278,165 | <p>This is related to another <a href="http://stackoverflow.com/questions/39194326/libclang-with-python-binding-asserterror">question</a> I had, which was left with no answer...
I'm trying to understand what's going on under the hood of the <a href="https://github.com/llvm-mirror/clang/tree/master/bindings/python" rel="nofollow">Python binding</a> to <a href="http://clang.llvm.org/doxygen/group__CINDEX.html" rel="nofollow">libclang</a>, and I'm having a really hard time doing so.</p>
<p>I've read TONs of articles about both <code>decorators</code> and <code>descriptors</code> in Python, in order to understand how the <a href="https://github.com/llvm-mirror/clang/blob/master/bindings/python/clang/cindex.py#L122" rel="nofollow">CachedProperty class in clang/cindex.py</a> works, but still can't get all the pieces together.</p>
<p>The most relevant texts I've seen are <a href="http://stackoverflow.com/questions/6598534/explanation-of-a-decorator-class-in-python">one SO answer</a> and this <a href="http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/" rel="nofollow">code recipe</a> on ActiveState. These help me a bit, but - as I mentioned - I'm still not there.</p>
<p>So, let's cut to the chase:
I want to understand why I am getting an <code>AssertionError</code> on creating a CIndex. I will post here only the relevant code (cindex.py is 3646 lines long..), and I hope I don't miss anything that is relevant.
My code has only one relevant line, which is:</p>
<pre><code>index = clang.cindex.Index.create()
</code></pre>
<p>This refers to <a href="https://github.com/llvm-mirror/clang/blob/master/bindings/python/clang/cindex.py#L2291" rel="nofollow">line 2291 in cindex.py</a>, which yields:</p>
<pre><code>return Index(conf.lib.clang_createIndex(excludeDecls, 0))
</code></pre>
<p>From here on, there's a series of function calls which I can't explain and don't know where they came from. I'll list the code and <code>pdb</code> output along with the questions relevant to each part:</p>
<p>(An important thing to notice ahead: <code>conf.lib</code> is defined like this:)</p>
<pre><code>class Config:
...snip..
@CachedProperty
def lib(self):
lib = self.get_cindex_library()
...
return lib
</code></pre>
<p><strong>CachedProperty code:</strong></p>
<pre><code>class CachedProperty(object):
"""Decorator that lazy-loads the value of a property.
The first time the property is accessed, the original property function is
executed. The value it returns is set as the new value of that instance's
property, replacing the original method.
"""
def __init__(self, wrapped):
self.wrapped = wrapped
try:
self.__doc__ = wrapped.__doc__
except:
pass
def __get__(self, instance, instance_type=None):
if instance is None:
return self
value = self.wrapped(instance)
setattr(instance, self.wrapped.__name__, value)
return value
</code></pre>
<p><strong><code>Pdb</code> output:</strong></p>
<pre><code>-> return Index(conf.lib.clang_createIndex(excludeDecls, 0))
(Pdb) s
--Call--
> d:\project\clang\cindex.py(137)__get__()
-> def __get__(self, instance, instance_type=None):
(Pdb) p self
<clang.cindex.CachedProperty object at 0x00000000027982E8>
(Pdb) p self.wrapped
<function Config.lib at 0x0000000002793598>
</code></pre>
<ol>
<li>Why is the next call after
<code>Index(conf.lib.clang_createIndex(excludeDecls, 0))</code> to the
<code>CachedProperty.__get__</code> method? What about <code>__init__</code>?</li>
<li>If the <code>__init__</code> method isn't called, how come <code>self.wrapped</code> has a
value?</li>
</ol>
<p><strong><code>Pdb</code> output:</strong></p>
<pre><code>(Pdb) r
--Return--
> d:\project\clang\cindex.py(144)__get__()-><CDLL 'libcla... at 0x27a1cc0>
-> return value
(Pdb) n
--Call--
> c:\program files\python35\lib\ctypes\__init__.py(357)__getattr__()
-> def __getattr__(self, name):
(Pdb) r
--Return--
> c:\program files\python35\lib\ctypes\__init__.py(362)__getattr__()-><_FuncPtr obj...000000296B458>
-> return func
(Pdb)
</code></pre>
<ol start="3">
<li>Where should <code>CachedProperty.__get__</code> return its value to? Where does the call to the <code>CDLL.__getattr__</code> method come from?</li>
</ol>
<p><strong>MOST CRITICAL PART, for me</strong></p>
<pre><code>(Pdb) n
--Call--
> d:\project\clang\cindex.py(1970)__init__()
-> def __init__(self, obj):
(Pdb) p obj
40998256
</code></pre>
<p>This is the <a href="https://github.com/llvm-mirror/clang/blob/master/bindings/python/clang/cindex.py#L2044" rel="nofollow">creation of <code>ClangObject</code></a>, which class Index inherits from.</p>
<ol start="4">
<li>But - where is the call to <code>__init__</code> with one parameter? Is its argument the one that <code>conf.lib.clang_createIndex(excludeDecls, 0)</code> is returning?</li>
<li>Where is this number (40998256) coming from? I'm getting the same number over and over again. As far as I understand, it should be just a number, but it is a <code>clang.cindex.LP_c_void_p</code> object, and that's why the assertion failed.</li>
</ol>
<p>To sum it up, the best thing for me would be step-by-step guidance through the function invocations here, because I'm feeling a little lost in all this...</p>
<p><strong>SOLUTION to the last 2 questions:</strong>
The problem lies in the difference between Python 2 and 3 in the <code>map()</code> function. While Python 2 actually <strong>does</strong> the mapping, Python 3 only returns an iterator, which you can consume later on. This causes the <code>register_function</code> registration loop on <code>config.lib</code> to run without actually registering any function - hence the wrong translation of the return value.</p>
<p><strong>Fix:</strong> Change <a href="https://github.com/llvm-mirror/clang/blob/master/bindings/python/clang/cindex.py#L3657" rel="nofollow">map(register, functionList)</a> to list(map(register, functionList))</p>
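<p>A minimal demonstration of the difference (the <code>register</code> function here is a made-up stand-in for the registration code in cindex.py):</p>

```python
registered = []

def register(name):
    # stand-in for the code that registers each libclang function
    registered.append(name)

m = map(register, ['clang_createIndex', 'clang_disposeIndex'])
assert registered == []   # Python 3: map() is lazy, nothing has run yet

list(m)                   # consuming the iterator performs the registrations
assert registered == ['clang_createIndex', 'clang_disposeIndex']
```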
<p>Still, thanks <a href="http://stackoverflow.com/users/100297/martijn-pieters">@Martijn</a>, because of you I was able to move on from this <code>CachedProperty</code>...:)</p>
| 1 | 2016-09-01T18:03:57Z | 39,278,313 | <p>The <code>CachedProperty</code> object is a <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow">descriptor object</a>; the <code>__get__</code> method is called automatically whenever Python tries to access an attribute on an instance that is only available on the class <em>and</em> has a <code>__get__</code> method.</p>
<p>Using <code>CachedProperty</code> as a decorator means it is called and an instance of <code>CachedProperty</code> is created that replaces the original function object on the <code>Config</code> class. It is the <code>@CachedProperty</code> line that causes <code>CachedProperty.__init__</code> to be called, and the instance ends up on the <code>Config</code> class as <code>Config.lib</code>. Remember, the syntax</p>
<pre><code>@CachedProperty
def lib(self):
# ...
</code></pre>
<p>is essentially executed as</p>
<pre><code>def lib(self):
# ...
lib = CachedProperty(lib)
</code></pre>
<p>so this creates an instance of <code>CachedProperty()</code> with <code>lib</code> passed in as the <code>wrapped</code> argument, and then <code>Config.lib</code> is set to that object.</p>
<p>You can see this in the debugger; one step up you could inspect <code>type(config).lib</code>:</p>
<pre><code>(Pdb) type(config)
<class Config at 0x00000000027936E>
(Pdb) type(config).lib
<clang.cindex.CachedProperty object at 0x00000000027982E8>
</code></pre>
<p>In the rest of the codebase <code>config</code> is an instance of the <code>Config</code> class. At first, that instance has no <code>lib</code> name in the <code>__dict__</code> object, so the instance has no such attribute:</p>
<pre><code>(Pdb) 'lib' in config.__dict__
False
</code></pre>
<p>So trying to get <code>config.lib</code> has to fall back to the class, where Python finds the <code>Config.lib</code> attribute, and this is a descriptor object. Instead of using <code>Config.lib</code> directly, Python returns the result of calling <code>Config.lib.__get__(config, Config)</code> in that case.</p>
<p>The <code>__get__</code> method then executes the original function (referenced by <code>wrapped</code>) and stores that in <code>config.__dict__</code>. So <em>future</em> access to <code>config.lib</code> will find that result, and the descriptor on the class is not going to be used after that.</p>
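<p>To make the caching concrete, here is the same descriptor exercised with a fake <code>lib</code> (the string below just stands in for the CDLL that <code>cdll.LoadLibrary()</code> would return):</p>

```python
class CachedProperty(object):
    """Same descriptor as in cindex.py."""
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __get__(self, instance, instance_type=None):
        if instance is None:
            return self
        value = self.wrapped(instance)
        setattr(instance, self.wrapped.__name__, value)
        return value

calls = []

class Config(object):
    @CachedProperty
    def lib(self):
        calls.append('loaded')   # stands in for cdll.LoadLibrary(...)
        return 'fake-library'

config = Config()
assert 'lib' not in config.__dict__   # the descriptor lives on the class
assert config.lib == 'fake-library'   # first access runs the wrapped function...
assert 'lib' in config.__dict__       # ...and caches the result on the instance
assert config.lib == 'fake-library'   # second access reads the cached value
assert calls == ['loaded']            # the function ran exactly once
```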
<p>The <code>__getattr__</code> method is called to satisfy the <em>next</em> attribute in the <code>conf.lib.clang_createIndex(excludeDecls, 0)</code> expression; <code>config.lib</code> returns a dynamically loaded library from <code>cdll.LoadLibrary()</code> (via <code>CachedProperty.__get__()</code>), and that <a href="https://docs.python.org/3/library/ctypes.html#ctypes.CDLL" rel="nofollow">specific object type</a> is handled by the Python <a href="https://docs.python.org/3/library/ctypes.html" rel="nofollow">ctypes library</a>. It translates attributes to specific C calls for you; here that's the <code>clang_createIndex</code> method; see <a href="https://docs.python.org/3/library/ctypes.html#accessing-functions-from-loaded-dlls" rel="nofollow"><em>Accessing functions from loaded dlls</em></a>.</p>
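<p>The attribute-to-function translation can be seen with any <code>CDLL</code>-style object; for example (using CPython's own <code>pythonapi</code> handle rather than libclang):</p>

```python
import ctypes

# attribute access on a loaded library goes through __getattr__ and
# returns a _FuncPtr wrapping the C symbol of that name
func = ctypes.pythonapi.PyLong_FromLong
func.restype = ctypes.py_object  # tell ctypes the C function returns a PyObject*
assert func(42) == 42
```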
<p>Once the call to <code>conf.lib.clang_createIndex(excludeDecls, 0)</code> completes, that resulting object is indeed passed to <code>Index()</code>; the <a href="https://github.com/llvm-mirror/clang/blob/master/bindings/python/clang/cindex.py#L2283" rel="nofollow"><code>Index()</code> class</a> itself has no <code>__init__</code> method, but the base class <a href="https://github.com/llvm-mirror/clang/blob/master/bindings/python/clang/cindex.py#L2044" rel="nofollow"><code>ClangObject</code></a> does.</p>
<p>Whatever that return value is, it has a <em>representation</em> that looks like an integer number. However, it almost certainly is not an <code>int</code>. You can see what type of object that is by using <code>type()</code>, see what attributes it has with <code>dir()</code>, etc. I'm pretty certain it is a <a href="https://docs.python.org/3/library/ctypes.html#ctypes.c_void_p" rel="nofollow"><code>ctypes.c_void_p</code> data type</a> <em>representing</em> a <code>clang.cindex.LP_c_void_p</code> value (it is a Python object that proxies for the real C value in memory); it'll represent as an integer:</p>
<blockquote>
<p>Represents the C <code>void *</code> type. The value is represented as integer. The constructor accepts an optional integer initializer.</p>
</blockquote>
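<p>You can check this yourself; for instance (an illustration with a made-up address, not the actual clang value):</p>

```python
import ctypes

p = ctypes.c_void_p(0x1234)
print(type(p))             # <class 'ctypes.c_void_p'>
print(p.value)             # 4660: the integer it represents
print(isinstance(p, int))  # False: it only *prints* like an integer
```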
<p>The rest of the <code>clang</code> Python code will just pass this value back to more C calls proxied by <code>config.lib</code>.</p>
| 2 | 2016-09-01T18:12:39Z | [
"python",
"clang",
"python-decorators",
"libclang",
"python-descriptors"
] |
Limit data range without using xlim - python, matplotlib | 39,278,182 | <p>I want to put multiple lines on the same plot, but use only part of the data available for some of the lines. Each dataset contains data from 1925 until the present, and I'd like the x-axis to show that entire range, but I only want to show dataset A from 1925 until 1940, dataset B from 1941 to 1958, and so on. <strong>In other words, I want to set limits on the data itself, not the axis.</strong> </p>
<pre><code>fig, ax = plt.subplots(figsize=(15,10))
plt.plot_date(DF['date'], DF['data1'], '.')
plt.plot_date(DF['date'], DF['data2'], '.')
plt.plot_date(DF['date'], DF['data3'], '.')
plt.plot_date(DF['date'], DF['data4'], '.')
plt.ylabel('Mean Streambed Elevation (feet)')
plt.xlim('1925-01-01', '2020-01-01')
</code></pre>
<p>All of my searching has just found questions that are solved by using xlim(), but that's not what I'm looking for. I figure that the solution will be to add something to each <code>plt.plot_date</code> line, but I don't know what to add.</p>
| 0 | 2016-09-01T18:05:23Z | 39,278,299 | <p>There are many ways to select subsets of a pandas dataframe, assuming that's what you have.</p>
<p>For instance something like</p>
<p><code>data_for_plotting=DF.query("date>'1925-01-01' and date<'1940-01-01'")</code></p>
<p>Then pass that instead of DF to the rest of the plotting statements.</p>
<p>You can look at <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/indexing.html</a> for other ways to do the same thing.</p>
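<p>Equivalent to <code>query()</code>, you can build a boolean mask directly (assuming <code>DF['date']</code> is datetime-typed; the column values here are made up for illustration):</p>

```python
import pandas as pd

DF = pd.DataFrame({
    'date': pd.to_datetime(['1930-06-01', '1945-06-01', '1960-06-01']),
    'data1': [1.0, 2.0, 3.0],
})

# keep only the rows to draw for this particular line
mask = (DF['date'] > '1925-01-01') & (DF['date'] < '1940-01-01')
data_for_plotting = DF[mask]
print(data_for_plotting)
```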
| 1 | 2016-09-01T18:12:11Z | [
"python",
"matplotlib"
] |
Python: Plot a bar graph for a pandas data frame with x axis using a given column | 39,278,279 | <p>I want to plot a bar chart for the following pandas data frame on Jupyter Notebook. </p>
<pre><code> | Month | number
-------------------------
0 | Apr | 6.5
1 | May | 7.3
2 | Jun | 3.9
3 | Jul | 5.1
4 | Aug | 4.1
</code></pre>
<p>I did:</p>
<pre><code>%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
trend_df.plot(kind='bar')
</code></pre>
<p>How do I make sure x-axis is actually showing month here?</p>
 | 1 | 2016-09-01T18:10:45Z | 39,278,970 | <p>Store the data in a CSV file, for example <code>plot.csv</code>, in the following format:</p>
<pre><code>Month,number
Apr,6.5
May,7.3
Jun,3.9
Jul,5.1
Aug,4.1
</code></pre>
<p>Then read it into a data frame, set <code>Month</code> as the index so the months show up on the x-axis, and plot:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

# read the data
data = pd.read_csv('plot.csv')
print(data)

# create a data frame indexed by Month
df = data.set_index('Month')

# plot
df.plot(kind='bar')
plt.show()

# the same plot with the ggplot style
plt.style.use('ggplot')
df.plot(kind='bar')
plt.show()
</code></pre>
| 0 | 2016-09-01T18:56:50Z | [
"python",
"pandas",
"jupyter-notebook",
"python-ggplot"
] |
Python: Plot a bar graph for a pandas data frame with x axis using a given column | 39,278,279 | <p>I want to plot a bar chart for the following pandas data frame on Jupyter Notebook. </p>
<pre><code> | Month | number
-------------------------
0 | Apr | 6.5
1 | May | 7.3
2 | Jun | 3.9
3 | Jul | 5.1
4 | Aug | 4.1
</code></pre>
<p>I did:</p>
<pre><code>%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
trend_df.plot(kind='bar')
</code></pre>
<p>How do I make sure x-axis is actually showing month here?</p>
| 1 | 2016-09-01T18:10:45Z | 39,279,137 | <p>You can simply specify <code>x</code> and <code>y</code> in your call to <code>plot</code> to get the bar plot you want.</p>
<pre><code>trend_df.plot(x='Month', y='number', kind='bar')
</code></pre>
<p><a href="http://i.stack.imgur.com/KOJnc.png" rel="nofollow"><img src="http://i.stack.imgur.com/KOJnc.png" alt="enter image description here"></a></p>
<p>Given <code>trend_df</code> as</p>
<pre><code>In [20]: trend_df
Out[20]:
Month number
0 Apr 6.5
1 May 7.3
2 Jun 3.9
3 Jul 5.1
4 Aug 4.1
</code></pre>
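<p>As a variant (not part of the original answer), you can also set <code>Month</code> as the index once instead of passing <code>x=</code> to every plot call:</p>

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs anywhere

trend_df = pd.DataFrame({'Month': ['Apr', 'May', 'Jun', 'Jul', 'Aug'],
                         'number': [6.5, 7.3, 3.9, 5.1, 4.1]})

# the index labels become the x-axis tick labels of the bar plot
ax = trend_df.set_index('Month').plot(kind='bar')
labels = [t.get_text() for t in ax.get_xticklabels()]
print(labels)
```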
| 1 | 2016-09-01T19:07:18Z | [
"python",
"pandas",
"jupyter-notebook",
"python-ggplot"
] |
How to parse an HTML table with rowspans in Python? | 39,278,376 | <p><strong>The problem</strong></p>
<p>I'm trying to parse an HTML table with rowspans in it, as in, I'm trying to parse my college schedule.</p>
<p>I'm running into the problem that when a cell in a previous row has a <code>rowspan</code>, the next row is missing the TD that the spanning cell now occupies.</p>
<p>I have no clue how to account for this and I hope to be able to parse this schedule.</p>
<p><strong>What I tried</strong></p>
<p>Pretty much everything I can think of.</p>
<p><strong>The result I get</strong>
</p>
<pre><code>[
{
'blok_eind': 4,
'blok_start': 3,
'dag': 4, # Should be 5
'leraar': 'DOODF000',
'lokaal': 'ALK C212',
'vak': 'PROJ-T',
},
]
</code></pre>
<p>As you can see, there's a <code>vak</code> key with the value <code>PROJ-T</code> in the output snippet above, <code>dag</code> is <code>4</code> while it's supposed to be <code>5</code> (a.k.a Friday/Vrijdag), as seen here:</p>
<p><img src="http://i.imgur.com/068mi1E.png" alt="Table"></p>
<p><strong>The result I want</strong></p>
<p>A Python dict() that looks like the one posted above, but with the right value</p>
<p>Where:</p>
<ul>
<li><code>day</code>/<code>dag</code> is an int from 1~5 representing Monday~Friday</li>
<li><code>block_start</code>/<code>blok_start</code> is an int that represents when the course starts (Time block, left side of table)</li>
<li><code>block_end</code>/<code>blok_eind</code> is an int that represent in what block the course ends</li>
<li><code>classroom</code>/<code>lokaal</code> is the classroom's code the course is in</li>
<li><code>teacher</code>/<code>leraar</code> is the teacher's ID </li>
<li><code>course</code>/<code>vak</code> is the ID of the course</li>
</ul>
<p><strong>Basic HTML Structure for above data</strong>
</p>
<pre><code><center>
<table>
<tr>
<td>
<table>
<tbody>
<tr>
<td>
<font>
TEACHER-ID
</font>
</td>
<td>
<font>
<b>
CLASSROOM ID
</b>
</font>
</td>
</tr>
<tr>
<td>
<font>
COURSE ID
</font>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
</table>
</center>
</code></pre>
<p><strong>The code</strong></p>
<p><em>HTML</em></p>
<pre><code><CENTER><font size="3" face="Arial" color="#000000">
<BR></font>
<font size="6" face="Arial" color="#0000FF">
16AO4EIO1B
&nbsp;</font> <font size="4" face="Arial">
IO1B
</font>
<BR>
<TABLE border="3" rules="all" cellpadding="1" cellspacing="1">
<TR>
<TD align="center">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial" color="#000000">
Maandag 29-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Dinsdag 30-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Woensdag 31-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Donderdag 01-09
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Vrijdag 02-09
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>1</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
8:30
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
9:20
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
WEBD
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>2</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
9:20
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
10:10
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021B</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
WEBD
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>3</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
10:25
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
11:15
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
DOODF000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK C212</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
PROJ-T
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>4</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
11:15
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:05
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021B</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
MENT
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>5</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:05
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:55
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>6</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:55
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
13:45
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
JONGJ003
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B008</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
BURG
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>7</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
13:45
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
14:35
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
FLUIP000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B004</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
ICT algemeen Prakti
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>8</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
14:50
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
15:40
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
KOOLE000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B008</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
NED
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>9</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
15:40
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
16:30
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>10</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
16:30
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
17:20
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
</TABLE>
<TABLE cellspacing="1" cellpadding="1">
<TR>
<TD valign=bottom> <font size="4" face="Arial" color="#0000FF"></TR></TABLE><font size="3" face="Arial">
Periode1 29-08-2016 (35) - 04-09-2016 (35) G r u b e r &amp; P e t t e r s S o f t w a r e
</font></CENTER>
</code></pre>
<p><em>Python</em></p>
<pre><code>from pprint import pprint
from bs4 import BeautifulSoup
import requests
r = requests.get("http://rooster.horizoncollege.nl/rstr/ECO/AMR/400-ECO/Roosters/36"
"/c/c00025.htm")
daytable = {
1: "Maandag",
2: "Dinsdag",
3: "Woensdag",
4: "Donderdag",
5: "Vrijdag"
}
timetable = {
1: ("8:30", "9:20"),
2: ("9:20", "10:10"),
3: ("10:25", "11:15"),
4: ("11:15", "12:05"),
5: ("12:05", "12:55"),
6: ("12:55", "13:45"),
7: ("13:45", "14:35"),
8: ("14:50", "15:40"),
9: ("15:40", "16:30"),
10: ("16:30", "17:20"),
}
page = BeautifulSoup(r.content, "lxml")
roster = []
big_rows = 2
last_row_big = False
# There are 10 blocks, each made up out of 2 TR's, run through them
for block_count in range(2, 22, 2):
# There are 5 days, first column is not data we want
for day in range(2, 7):
dayroster = {
"dag": 0,
"blok_start": 0,
"blok_eind": 0,
"lokaal": "",
"leraar": "",
"vak": ""
}
# This selector provides the classroom
table_bold = page.select(
"html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
day) + ") > table > tr > td > font > b")
# This selector provides the teacher's code and the course ID
table = page.select(
"html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
day) + ") > table > tr > td > font")
# This gets the rowspan on the current row and column
rowspan = page.select(
"html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
day) + ")")
try:
if table or table_bold and rowspan[0].attrs.get("rowspan") == "4":
last_row_big = True
# Setting end of class
dayroster["blok_eind"] = (block_count // 2) + 1
else:
last_row_big = False
# Setting end of class
dayroster["blok_eind"] = (block_count // 2)
except IndexError:
pass
if table_bold:
x = table_bold[0]
# Classroom ID
dayroster["lokaal"] = x.contents[0]
if table:
iter = 0
for x in table:
content = x.contents[0].lstrip("\r\n").rstrip("\r\n")
# Cell has data
if content != "":
# Set start of class
dayroster["blok_start"] = block_count // 2
# Set day of class
dayroster["dag"] = day - 1
if iter == 0:
# Teacher ID
dayroster["leraar"] = content
elif iter == 1:
# Course ID
dayroster["vak"] = content
iter += 1
if table or table_bold:
# Store the data
roster.append(dayroster)
# Remove duplicates
seen = set()
new_l = []
for d in roster:
t = tuple(d.items())
if t not in seen:
seen.add(t)
new_l.append(d)
pprint(new_l)
</code></pre>
 | 23 | 2016-09-01T18:16:45Z | 39,335,472 | <p>It may be better to use a bs4 built-in function like "<strong>findAll</strong>" to parse your table.</p>
<p>You may use the following code:</p>
<pre><code>from pprint import pprint
from bs4 import BeautifulSoup
import requests
r = requests.get("http://rooster.horizoncollege.nl/rstr/ECO/AMR/400-ECO/Roosters/36"
"/c/c00025.htm")
content=r.content
page = BeautifulSoup(content, "html")
table=page.find('table')
trs=table.findAll("tr", {},recursive=False)
tr_count=0
trs.pop(0)
final_table={}
for tr in trs:
tds=tr.findAll("td", {},recursive=False)
if tds:
td_count=0
tds.pop(0)
for td in tds:
if td.has_attr('rowspan'):
final_table[str(tr_count)+"-"+str(td_count)]=td.text.strip()
if int(td.attrs['rowspan'])==4:
final_table[str(tr_count+1)+"-"+str(td_count)]=td.text.strip()
            if str(tr_count)+"-"+str(td_count+1) in final_table:
td_count=td_count+1
td_count=td_count+1
tr_count=tr_count+1
roster=[]
for i in range(0,10): #iterate over time
for j in range(0,5): #iterate over day
item=final_table[str(i)+"-"+str(j)]
if len(item)!=0:
block_eind=i+1
try:
if final_table[str(i+1)+"-"+str(j)]==final_table[str(i)+"-"+str(j)]:
block_eind=i+2
except:
pass
try:
lokaal=item.split('\r\n \n\n')[0]
leraar=item.split('\r\n \n\n')[1].split('\n \n\r\n')[0]
vak=item.split('\n \n\r\n')[1]
except:
lokaal=leraar=vak="---"
dayroster = {
"dag": j+1,
"blok_start": i+1,
"blok_eind": block_eind,
"lokaal": lokaal,
"leraar": leraar,
"vak": vak
}
dayroster_double = {
"dag": j+1,
"blok_start": i,
"blok_eind": block_eind,
"lokaal": lokaal,
"leraar": leraar,
"vak": vak
}
#use to prevent double dict for same event
if dayroster_double not in roster:
roster.append(dayroster)
print (roster)
</code></pre>
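<p>A more general way to handle the missing <code>TD</code>s (a sketch, not tied to this particular timetable) is to expand every <code>rowspan</code>/<code>colspan</code> into a full grid first, so that each (row, column) slot exists even when the HTML omits the cell:</p>

```python
from bs4 import BeautifulSoup

def table_to_grid(table):
    """Expand rowspan/colspan so grid[(r, c)] is defined for every slot."""
    grid = {}
    for r, tr in enumerate(table.find_all('tr', recursive=False)):
        c = 0
        for td in tr.find_all(['td', 'th'], recursive=False):
            while (r, c) in grid:  # skip slots filled by earlier spans
                c += 1
            rs = int(td.get('rowspan', 1))
            cs = int(td.get('colspan', 1))
            for dr in range(rs):
                for dc in range(cs):
                    grid[(r + dr, c + dc)] = td.get_text(strip=True)
            c += cs
    return grid

html = ("<table><tr><td rowspan='2'>A</td><td>B</td></tr>"
        "<tr><td>C</td></tr></table>")
table = BeautifulSoup(html, 'html.parser').find('table')
grid = table_to_grid(table)
print(grid[(1, 0)], grid[(1, 1)])  # A C -- the spanned cell fills row 1, col 0
```

<p>Once the grid is built, each schedule cell can be read off at a fixed (row, column) position regardless of how the spans were written.</p>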
| 2 | 2016-09-05T17:44:16Z | [
"python",
"html",
"python-3.x",
"beautifulsoup",
"html-table"
] |
How to parse an HTML table with rowspans in Python? | 39,278,376 | <p><strong>The problem</strong></p>
<p>I'm trying to parse an HTML table with rowspans in it, as in, I'm trying to parse my college schedule.</p>
<p>I'm running into the problem that when a cell in a previous row has a <code>rowspan</code>, the next row is missing the TD that the spanning cell now occupies.</p>
<p>I have no clue how to account for this and I hope to be able to parse this schedule.</p>
<p><strong>What I tried</strong></p>
<p>Pretty much everything I can think of.</p>
<p><strong>The result I get</strong>
</p>
<pre><code>[
{
'blok_eind': 4,
'blok_start': 3,
'dag': 4, # Should be 5
'leraar': 'DOODF000',
'lokaal': 'ALK C212',
'vak': 'PROJ-T',
},
]
</code></pre>
<p>As you can see, there's a <code>vak</code> key with the value <code>PROJ-T</code> in the output snippet above, <code>dag</code> is <code>4</code> while it's supposed to be <code>5</code> (a.k.a Friday/Vrijdag), as seen here:</p>
<p><img src="http://i.imgur.com/068mi1E.png" alt="Table"></p>
<p><strong>The result I want</strong></p>
<p>A Python dict() that looks like the one posted above, but with the right value</p>
<p>Where:</p>
<ul>
<li><code>day</code>/<code>dag</code> is an int from 1~5 representing Monday~Friday</li>
<li><code>block_start</code>/<code>blok_start</code> is an int that represents when the course starts (Time block, left side of table)</li>
<li><code>block_end</code>/<code>blok_eind</code> is an int that represent in what block the course ends</li>
<li><code>classroom</code>/<code>lokaal</code> is the classroom's code the course is in</li>
<li><code>teacher</code>/<code>leraar</code> is the teacher's ID </li>
<li><code>course</code>/<code>vak</code> is the ID of the course</li>
</ul>
<p><strong>Basic HTML Structure for above data</strong>
</p>
<pre><code><center>
<table>
<tr>
<td>
<table>
<tbody>
<tr>
<td>
<font>
TEACHER-ID
</font>
</td>
<td>
<font>
<b>
CLASSROOM ID
</b>
</font>
</td>
</tr>
<tr>
<td>
<font>
COURSE ID
</font>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
</table>
</center>
</code></pre>
<p><strong>The code</strong></p>
<p><em>HTML</em></p>
<pre><code><CENTER><font size="3" face="Arial" color="#000000">
<BR></font>
<font size="6" face="Arial" color="#0000FF">
16AO4EIO1B
&nbsp;</font> <font size="4" face="Arial">
IO1B
</font>
<BR>
<TABLE border="3" rules="all" cellpadding="1" cellspacing="1">
<TR>
<TD align="center">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial" color="#000000">
Maandag 29-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Dinsdag 30-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Woensdag 31-08
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Donderdag 01-09
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
Vrijdag 02-09
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>1</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
8:30
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
9:20
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
WEBD
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>2</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
9:20
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
10:10
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021B</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
WEBD
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>3</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
10:25
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
11:15
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
DOODF000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK C212</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
PROJ-T
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>4</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
11:15
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:05
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
BLEEJ002
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B021B</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
MENT
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>5</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:05
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:55
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>6</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
12:55
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
13:45
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
JONGJ003
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B008</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
BURG
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>7</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
13:45
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
14:35
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
FLUIP000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B004</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
ICT algemeen Prakti
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>8</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
14:50
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
15:40
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=4 align="center" nowrap="1">
<TABLE>
<TR>
<TD width="50%" nowrap=1><font size="2" face="Arial">
KOOLE000
</font> </TD>
<TD width="50%" nowrap=1><font size="2" face="Arial">
<B>ALK B008</B>
</font> </TD>
</TR>
<TR>
<TD colspan="2" width="50%" nowrap=1><font size="2" face="Arial">
NED
</font> </TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>9</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
15:40
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
16:30
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
<TR>
<TD rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD align="center" rowspan="2" nowrap=1><font size="3" face="Arial">
<B>10</B>
</font> </TD>
<TD align="center" nowrap=1><font size="2" face="Arial">
16:30
</font> </TD>
</TR>
<TR>
<TD align="center" nowrap=1><font size="2" face="Arial">
17:20
</font> </TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
<TD colspan=12 rowspan=2 align="center" nowrap="1">
<TABLE>
<TR>
<TD></TD>
</TR>
</TABLE>
</TD>
</TR>
<TR>
</TR>
</TABLE>
<TABLE cellspacing="1" cellpadding="1">
<TR>
<TD valign=bottom> <font size="4" face="Arial" color="#0000FF"></TR></TABLE><font size="3" face="Arial">
Periode1 29-08-2016 (35) - 04-09-2016 (35) G r u b e r &amp; P e t t e r s S o f t w a r e
</font></CENTER>
</code></pre>
<p><em>Python</em></p>
<pre><code>from pprint import pprint
from bs4 import BeautifulSoup
import requests

r = requests.get("http://rooster.horizoncollege.nl/rstr/ECO/AMR/400-ECO/Roosters/36"
                 "/c/c00025.htm")

daytable = {
    1: "Maandag",
    2: "Dinsdag",
    3: "Woensdag",
    4: "Donderdag",
    5: "Vrijdag"
}
timetable = {
    1: ("8:30", "9:20"),
    2: ("9:20", "10:10"),
    3: ("10:25", "11:15"),
    4: ("11:15", "12:05"),
    5: ("12:05", "12:55"),
    6: ("12:55", "13:45"),
    7: ("13:45", "14:35"),
    8: ("14:50", "15:40"),
    9: ("15:40", "16:30"),
    10: ("16:30", "17:20"),
}

page = BeautifulSoup(r.content, "lxml")
roster = []
big_rows = 2
last_row_big = False

# There are 10 blocks, each made up out of 2 TR's, run through them
for block_count in range(2, 22, 2):
    # There are 5 days, first column is not data we want
    for day in range(2, 7):
        dayroster = {
            "dag": 0,
            "blok_start": 0,
            "blok_eind": 0,
            "lokaal": "",
            "leraar": "",
            "vak": ""
        }
        # This selector provides the classroom
        table_bold = page.select(
            "html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
                day) + ") > table > tr > td > font > b")
        # This selector provides the teacher's code and the course ID
        table = page.select(
            "html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
                day) + ") > table > tr > td > font")
        # This gets the rowspan on the current row and column
        rowspan = page.select(
            "html > body > center > table > tr:nth-of-type(" + str(block_count) + ") > td:nth-of-type(" + str(
                day) + ")")
        try:
            if table or table_bold and rowspan[0].attrs.get("rowspan") == "4":
                last_row_big = True
                # Setting end of class
                dayroster["blok_eind"] = (block_count // 2) + 1
            else:
                last_row_big = False
                # Setting end of class
                dayroster["blok_eind"] = (block_count // 2)
        except IndexError:
            pass
        if table_bold:
            x = table_bold[0]
            # Classroom ID
            dayroster["lokaal"] = x.contents[0]
        if table:
            iter = 0
            for x in table:
                content = x.contents[0].lstrip("\r\n").rstrip("\r\n")
                # Cell has data
                if content != "":
                    # Set start of class
                    dayroster["blok_start"] = block_count // 2
                    # Set day of class
                    dayroster["dag"] = day - 1
                    if iter == 0:
                        # Teacher ID
                        dayroster["leraar"] = content
                    elif iter == 1:
                        # Course ID
                        dayroster["vak"] = content
                    iter += 1
        if table or table_bold:
            # Store the data
            roster.append(dayroster)

# Remove duplicates
seen = set()
new_l = []
for d in roster:
    t = tuple(d.items())
    if t not in seen:
        seen.add(t)
        new_l.append(d)

pprint(new_l)
</code></pre>
| 23 | 2016-09-01T18:16:45Z | 39,336,433 | <p>You'll have to track the rowspans on previous rows, one per column.</p>
<p>You could do this simply by copying the integer value of a rowspan into a dictionary, and subsequent rows decrement the rowspan value until it drops to <code>1</code> (or we could store the integer value minus 1 and drop to <code>0</code> for ease of coding). Then you can adjust subsequent table counts based on preceding rowspans.</p>
<p>Your table complicates this a little by using a default span of size 2, incrementing in steps of two, but that can easily be brought back to manageable numbers by dividing by 2.</p>
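<p>The bookkeeping described above can be sketched without BeautifulSoup at all. In this minimal, self-contained illustration (the input format and helper name are made up for the example), each cell is a <code>(value, rowspan)</code> pair and a dictionary maps a column index to the number of later rows an earlier cell still covers:</p>

```python
# Minimal sketch of rowspan bookkeeping: `rowspans` maps a column index to
# the number of later rows that an earlier cell still covers. The input
# format (value, rowspan) and the helper name are hypothetical.
def expand_rows(rows, width):
    rowspans = {}
    grid = []
    for row in rows:
        out = [None] * width
        cells = iter(row)
        for col in range(width):
            if rowspans.get(col, 0):
                # column still occupied by a cell from an earlier row
                rowspans[col] -= 1
                continue
            try:
                value, span = next(cells)
            except StopIteration:
                break
            out[col] = value
            if span > 1:
                rowspans[col] = span - 1  # reserve the column for later rows
        grid.append(out)
    return grid

rows = [
    [("a", 1), ("b", 2), ("c", 1)],  # "b" spans two rows
    [("d", 1), ("e", 1)],            # "b" still occupies column 1 here
]
print(expand_rows(rows, 3))  # [['a', 'b', 'c'], ['d', None, 'e']]
```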
<p>Rather than use massive CSS selectors, select just the table rows and we'll iterate over those:</p>
<pre><code>roster = []
rowspans = {}  # track rowspanning cells

# every second row in the table
rows = page.select('html > body > center > table > tr')[1:21:2]
for block, row in enumerate(rows, 1):
    # take direct child td cells, but skip the first cell:
    daycells = row.select('> td')[1:]
    rowspan_offset = 0
    for daynum, daycell in enumerate(daycells, 1):
        # rowspan handling; if there is a rowspan here, adjust to find correct position
        daynum += rowspan_offset
        while rowspans.get(daynum, 0):
            rowspan_offset += 1
            rowspans[daynum] -= 1
            daynum += 1
        # now we have a correct day number for this cell, adjusted for
        # rowspanning cells.
        # update the rowspan accounting for this cell
        rowspan = (int(daycell.get('rowspan', 2)) // 2) - 1
        if rowspan:
            rowspans[daynum] = rowspan
        texts = daycell.select("table > tr > td > font")
        if texts:
            # class info found
            teacher, classroom, course = (c.get_text(strip=True) for c in texts)
            roster.append({
                'blok_start': block,
                'blok_eind': block + rowspan,
                'dag': daynum,
                'leraar': teacher,
                'lokaal': classroom,
                'vak': course
            })
    # days that were skipped at the end due to a rowspan
    while daynum < 5:
        daynum += 1
        if rowspans.get(daynum, 0):
            rowspans[daynum] -= 1
</code></pre>
<p>This produces correct output:</p>
<pre><code>[{'blok_eind': 2,
  'blok_start': 1,
  'dag': 5,
  'leraar': u'BLEEJ002',
  'lokaal': u'ALK B021',
  'vak': u'WEBD'},
 {'blok_eind': 3,
  'blok_start': 2,
  'dag': 3,
  'leraar': u'BLEEJ002',
  'lokaal': u'ALK B021B',
  'vak': u'WEBD'},
 {'blok_eind': 4,
  'blok_start': 3,
  'dag': 5,
  'leraar': u'DOODF000',
  'lokaal': u'ALK C212',
  'vak': u'PROJ-T'},
 {'blok_eind': 5,
  'blok_start': 4,
  'dag': 3,
  'leraar': u'BLEEJ002',
  'lokaal': u'ALK B021B',
  'vak': u'MENT'},
 {'blok_eind': 7,
  'blok_start': 6,
  'dag': 5,
  'leraar': u'JONGJ003',
  'lokaal': u'ALK B008',
  'vak': u'BURG'},
 {'blok_eind': 8,
  'blok_start': 7,
  'dag': 3,
  'leraar': u'FLUIP000',
  'lokaal': u'ALK B004',
  'vak': u'ICT algemeen Prakti'},
 {'blok_eind': 9,
  'blok_start': 8,
  'dag': 5,
  'leraar': u'KOOLE000',
  'lokaal': u'ALK B008',
  'vak': u'NED'}]
</code></pre>
<p>Moreover, this code will continue to work even if courses span <em>more than 2 blocks</em>, or just one block; any rowspan size is supported.</p>
| 12 | 2016-09-05T19:11:14Z | [
"python",
"html",
"python-3.x",
"beautifulsoup",
"html-table"
] |
Add an arbitrary element to an xrange()? | 39,278,424 | <p>In Python, it's more memory-efficient to use <code>xrange()</code> instead of <code>range</code> when iterating.</p>
<p>The trouble I'm having is that I want to iterate over a large list -- such that I need to use <code>xrange()</code> and after that I want to check an arbitrary element.</p>
<p>With <code>range()</code>, it's easy: <code>x = range(...) + [arbitrary element]</code>.</p>
<p>But with <code>xrange()</code>, there doesn't seem to be a cleaner solution than this:</p>
<pre><code>for i in xrange(...):
    if foo(i):
        ...

if foo(arbitrary element):
    ...
</code></pre>
<p>Any suggestions for cleaner solutions? Is there a way to "append" an arbitrary element to a generator?</p>
| 2 | 2016-09-01T18:19:23Z | 39,278,445 | <p>I would recommend keeping the <code>arbitrary_element</code> check out of the loop, but if you want to make it part of the loop, you can use <a href="https://docs.python.org/2/library/itertools.html#itertools.chain"><code>itertools.chain</code></a>:</p>
<pre><code>for i in itertools.chain(xrange(...), [arbitrary_element]):
    ...
</code></pre>
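<p>A quick, concrete illustration of the same pattern (using Python 3's <code>range</code>, which behaves like Python 2's <code>xrange</code>; the values here are made up):</p>

```python
# chain() yields the range lazily, then the extra element -- no
# concatenated temporary list is ever built.
from itertools import chain

result = [i * 2 for i in chain(range(3), [99])]
print(result)  # [0, 2, 4, 198]
```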
| 7 | 2016-09-01T18:21:35Z | [
"python",
"generator",
"xrange"
] |
Add an arbitrary element to an xrange()? | 39,278,424 | <p>In Python, it's more memory-efficient to use <code>xrange()</code> instead of <code>range</code> when iterating.</p>
<p>The trouble I'm having is that I want to iterate over a large list -- such that I need to use <code>xrange()</code> and after that I want to check an arbitrary element.</p>
<p>With <code>range()</code>, it's easy: <code>x = range(...) + [arbitrary element]</code>.</p>
<p>But with <code>xrange()</code>, there doesn't seem to be a cleaner solution than this:</p>
<pre><code>for i in xrange(...):
    if foo(i):
        ...

if foo(arbitrary element):
    ...
</code></pre>
<p>Any suggestions for cleaner solutions? Is there a way to "append" an arbitrary element to a generator?</p>
| 2 | 2016-09-01T18:19:23Z | 39,278,459 | <p><a href="https://docs.python.org/3/library/itertools.html#itertools.chain"><code>itertools.chain</code></a> lets you make a combined iterator from multiple iterables without concatenating them (so no expensive temporaries):</p>
<pre><code>from itertools import chain

# Must wrap arbitrary element in one-element tuple (or list)
for i in chain(xrange(...), (arbitrary_element,)):
    if foo(i):
        ...
</code></pre>
| 8 | 2016-09-01T18:22:12Z | [
"python",
"generator",
"xrange"
] |
Installing PIP on Windows 10 python 3.5 | 39,278,499 | <p>I just started learning <strong>Python</strong>, and successfully downloaded <strong>Python 3.5</strong>. I attempted to download/upgrade PIP 8.1.2 multiple times using get-pip.py, which I ran (successfully I think) but when I attempted to execute <code>python get-pip.py</code>
I got the error code:</p>
<pre><code>  File "<stdin>", line 1
    python get-pip.py
           ^
SyntaxError: invalid syntax
</code></pre>
<p>I understand that pip is included with Python, but the pip website requires users to upgrade pip, which I don't think I can do, since any pip commands lead to syntax errors and do not produce the same output that most tutorial sites show. I have tried to find different ways to fix it, but I can't figure out what's wrong, aside from pip not being on the computer in the first place or being corrupted. Thank you for your assistance. </p>
| -1 | 2016-09-01T18:24:36Z | 39,278,586 | <p>Not sure what you are asking. If you want to run <code>python get-pip.py</code> do it in a windows command prompt, not in the python interpreter. But I do not know why you would want to do that.</p>
| 0 | 2016-09-01T18:30:13Z | [
"python",
"django",
"python-3.x",
"pip"
] |
Installing PIP on Windows 10 python 3.5 | 39,278,499 | <p>I just started learning <strong>Python</strong>, and successfully downloaded <strong>Python 3.5</strong>. I attempted to download/upgrade PIP 8.1.2 multiple times using get-pip.py, which I ran (successfully I think) but when I attempted to execute <code>python get-pip.py</code>
I got the error code:</p>
<pre><code>  File "<stdin>", line 1
    python get-pip.py
           ^
SyntaxError: invalid syntax
</code></pre>
<p>I understand that pip is included with Python, but the pip website requires users to upgrade pip, which I don't think I can do, since any pip commands lead to syntax errors and do not produce the same output that most tutorial sites show. I have tried to find different ways to fix it, but I can't figure out what's wrong, aside from pip not being on the computer in the first place or being corrupted. Thank you for your assistance. </p>
| -1 | 2016-09-01T18:24:36Z | 39,278,973 | <p>You already have pip; there is no need to run <code>get-pip</code>. Upgrading can be done by pip itself.</p>
<p>But the reason you are getting errors is that all these commands, including <code>pip</code> itself, should be run at the command line, not in the Python interpreter.</p>
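<p>For example, a typical upgrade looks like this (run at the Windows command prompt or PowerShell, not at the Python >>> prompt; exact commands may vary with your setup):</p>

```shell
# Run at the OS shell, not inside the Python interpreter.
# "python -m pip" invokes pip through the interpreter, avoiding PATH issues.
python -m pip --version
python -m pip install --upgrade pip
```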
| 0 | 2016-09-01T18:57:02Z | [
"python",
"django",
"python-3.x",
"pip"
] |
Installing PIP on Windows 10 python 3.5 | 39,278,499 | <p>I just started learning <strong>Python</strong>, and successfully downloaded <strong>Python 3.5</strong>. I attempted to download/upgrade PIP 8.1.2 multiple times using get-pip.py, which I ran (successfully I think) but when I attempted to execute <code>python get-pip.py</code>
I got the error code:</p>
<pre><code>  File "<stdin>", line 1
    python get-pip.py
           ^
SyntaxError: invalid syntax
</code></pre>
<p>I understand that pip is included with Python, but the pip website requires users to upgrade pip, which I don't think I can do, since any pip commands lead to syntax errors and do not produce the same output that most tutorial sites show. I have tried to find different ways to fix it, but I can't figure out what's wrong, aside from pip not being on the computer in the first place or being corrupted. Thank you for your assistance. </p>
| -1 | 2016-09-01T18:24:36Z | 39,279,027 | <p>You won't need to upgrade pip if you just downloaded Python 3.5. Go to the folder where Python 3.5 is installed and open the Scripts folder; you will find pip.exe there. Open PowerShell and use the cd command to move to the folder containing pip.exe. From there you can use pip install to get modules. </p>
<p>Open Windows Powershell</p>
<pre><code>PS C:\Users\you> cd C:\path\to\scripts\folder\containing\pip
PS C:\path\to\scripts\folder\containing\pip> pip install module
</code></pre>
| 0 | 2016-09-01T19:00:36Z | [
"python",
"django",
"python-3.x",
"pip"
] |
IntegrityError when creating relationships between objects in flask-sqlalchemy | 39,278,555 | <p>I'm struggling to build a geo-object hierarchy for my ridership-forecasting model.</p>
<p>I have the following hierarchy of objects:</p>
<pre><code>class Geoobject(m.db.Model):
    __tablename__ = 'geoobjects'

    id = m.db.Column(m.db.Integer, primary_key=True)
    name = m.db.Column(m.db.String)

    def __init__(self, code: int, name: str):
        self.code = code
        self.name = name


class Region(Geoobject):
    __tablename__ = 'regions'

    id = m.db.Column(m.db.Integer, m.db.ForeignKey('geoobjects.id'),
                     primary_key=True)
    code = m.db.Column(m.db.Integer)

    def __init__(self, name: str, code: int):
        self.name = name
        self.code = code
        self.region_id = self.id


class TransportArea(Geoobject):
    __tablename__ = 'transport_areas'

    id = m.db.Column(m.db.Integer, m.db.ForeignKey('geoobjects.id'),
                     primary_key=True, autoincrement=True)
    region = m.db.relationship('Region',
                               foreign_keys='Region.id', uselist=False)

    def __init__(self,
                 name: str,
                 region: Iterable[Region]):
        self.name = name
        self.region = region
</code></pre>
<p>When I try to add a TransportArea object to the session, I get the following error:</p>
<pre><code>IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: geoobjects.id
[SQL: 'UPDATE geoobjects SET id=? WHERE geoobjects.id = ?'] [parameters: (2, 1)]
</code></pre>
<p>Any suggestions, how to fix this?</p>
<p><code>m</code> is an instance of the model that contains the Flask app with db.</p>
| 0 | 2016-09-01T18:27:59Z | 39,283,708 | <p>I've finally understood how it should work.
When I use the <code>foreign_keys</code> keyword, I need to define a column for the foreign key and reference that column in the relationship. Working code is below:</p>
<pre><code>class TransportArea(Geoobject):
    __tablename__ = 'transport_areas'

    id = m.db.Column(m.db.Integer, m.db.ForeignKey('geoobjects.id'),
                     primary_key=True, autoincrement=True)
    region_id = m.db.Column(m.db.Integer, m.db.ForeignKey('regions.id'))
    region = m.db.relationship('Region', foreign_keys=region_id)

    def __init__(self,
                 name: str,
                 region: Region
                 ):
        self.name = name
        self.region = region
</code></pre>
| 0 | 2016-09-02T03:05:30Z | [
"python",
"sqlalchemy",
"flask-sqlalchemy"
] |
how to insert new value into max heap and apply max_delete to heap | 39,278,585 | <p><em>Problem 1</em></p>
<pre><code>          98
         /  \
        /    \
      67      89
     /  \    /  \
    /    \  /    \
   38    42 54    89
  /  \
 /    \
17    25
</code></pre>
<p>I want to insert 97 into the max heap [98,67,89,38,42,54,89,17,25] (represented as a list).</p>
<p>As I understand it, the resulting heap is [98,97,89,38,67,54,89,17,25,42]:</p>
<pre><code>          98
         /  \
        /    \
      97      89
     /  \    /  \
    /    \  /    \
   38    67 54    89
  /  \    |
 /    \   |
17    25  42
</code></pre>
<p><em>Problem 2</em> </p>
<p>I want to apply delete_max() twice to the heap [100,97,93,38,67,54,93,17,25,42].</p>
<pre><code>          100
         /   \
        /     \
      97       93
     /  \     /  \
    /    \   /    \
   38    67 54    93
  /  \    |
 /    \   |
17    25  42
</code></pre>
<p>As I understand it, the heap after two delete_max operations is [93,67,93,38,42,54,25,17]:</p>
<pre><code>          93
         /  \
        /    \
      67      93
     /  \    /  \
    /    \  /    \
   38    42 54    25
  /
 /
17
</code></pre>
<p><strong>I want to confirm: am I performing insertion and delete_max on the heap correctly, and are the results above correct?</strong>
If not, please guide me. </p>
| -3 | 2016-09-01T18:30:13Z | 39,281,244 | <p>Have a look at this link; you can access the solution through it: </p>
<p><a href="https://cstechwiki.blogspot.in/2016/09/python-week-6-quiz-assignment-nptel.html" rel="nofollow">https://cstechwiki.blogspot.in/2016/09/python-week-6-quiz-assignment-nptel.html</a></p>
| 0 | 2016-09-01T21:36:13Z | [
"python",
"python-3.x",
"heap",
"heapsort",
"binary-heap"
] |
how to insert new value into max heap and apply max_delete to heap | 39,278,585 | <p><em>Problem 1</em></p>
<pre><code>          98
         /  \
        /    \
      67      89
     /  \    /  \
    /    \  /    \
   38    42 54    89
  /  \
 /    \
17    25
</code></pre>
<p>I want to insert 97 into the max heap [98,67,89,38,42,54,89,17,25] (represented as a list).</p>
<p>As I understand it, the resulting heap is [98,97,89,38,67,54,89,17,25,42]:</p>
<pre><code>          98
         /  \
        /    \
      97      89
     /  \    /  \
    /    \  /    \
   38    67 54    89
  /  \    |
 /    \   |
17    25  42
</code></pre>
<p><em>Problem 2</em> </p>
<p>I want to apply delete_max() twice to the heap [100,97,93,38,67,54,93,17,25,42].</p>
<pre><code>          100
         /   \
        /     \
      97       93
     /  \     /  \
    /    \   /    \
   38    67 54    93
  /  \    |
 /    \   |
17    25  42
</code></pre>
<p>As I understand it, the heap after two delete_max operations is [93,67,93,38,42,54,25,17]:</p>
<pre><code>          93
         /  \
        /    \
      67      93
     /  \    /  \
    /    \  /    \
   38    42 54    25
  /
 /
17
</code></pre>
<p><strong>I want to confirm: am I performing insertion and delete_max on the heap correctly, and are the results above correct?</strong>
If not, please guide me. </p>
| -3 | 2016-09-01T18:30:13Z | 39,283,320 | <p>Your answers look correct. Let's take a closer look at why.</p>
<p>In the first case, you have the heap:</p>
<pre><code>[98,67,89,38,42,54,89,17,25]
</code></pre>
<p>You want to insert 97. So you add it to the end and then bubble it up:</p>
<pre><code>[98,67,89,38,42,54,89,17,25,97]
</code></pre>
<p>You compare 97 with its parent (42). Since 97 is greater, you swap them:</p>
<pre><code>[98,67,89,38,97,54,89,17,25,42]
</code></pre>
<p>Then compare 97 with its parent again. This time the parent is 67, so you have to swap again.</p>
<pre><code>[98,97,89,38,67,54,89,17,25,42]
</code></pre>
<p>Comparing one more time, you see that the parent (98) is greater than the item you inserted, so you're done.</p>
<p>Now, given the heap <code>[100,97,93,38,67,54,93,17,25,42]</code>, you want to remove the two highest items. The rule for delete_max is to replace the root with the last item on the heap and then sift it down. So you have:</p>
<pre><code> [42,97,93,38,67,54,93,17,25]
</code></pre>
<p>42 is smaller than its children, so you swap it with the largest child:</p>
<pre><code> [97,42,93,38,67,54,93,17,25]
</code></pre>
<p>It's greater than 38, but smaller than 67, so you swap again:</p>
<pre><code> [97,67,93,38,42,54,93,17,25]
</code></pre>
<p>And 42 is now at the leaf level so there's nothing more to do. That's the first item removed. Now to remove the second. Move 25 to the root:</p>
<pre><code> [25,67,93,38,42,54,93,17]
</code></pre>
<p>And sifting down:</p>
<pre><code>[93,67,25,38,42,54,93,17] // swapped with 93
[93,67,93,38,42,54,25,17] // swapped with 93 again
</code></pre>
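<p>The two walkthroughs above can be verified with a short list-based sketch (a minimal helper written only for this check, not production code):</p>

```python
# Minimal list-based max-heap helpers to verify the walkthrough above.
def heap_insert(h, val):
    h.append(val)
    i = len(h) - 1
    while i > 0 and h[(i - 1) // 2] < h[i]:   # bubble up while parent is smaller
        h[i], h[(i - 1) // 2] = h[(i - 1) // 2], h[i]
        i = (i - 1) // 2

def heap_delete_max(h):
    top = h[0]
    h[0] = h[-1]                              # move the last item to the root
    h.pop()
    i, n = 0, len(h)
    while True:                               # sift down
        largest = i
        for c in (2 * i + 1, 2 * i + 2):
            if c < n and h[c] > h[largest]:
                largest = c
        if largest == i:
            return top
        h[i], h[largest] = h[largest], h[i]
        i = largest

h = [98, 67, 89, 38, 42, 54, 89, 17, 25]
heap_insert(h, 97)
print(h)   # [98, 97, 89, 38, 67, 54, 89, 17, 25, 42]

h2 = [100, 97, 93, 38, 67, 54, 93, 17, 25, 42]
heap_delete_max(h2)
heap_delete_max(h2)
print(h2)  # [93, 67, 93, 38, 42, 54, 25, 17]
```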
| 0 | 2016-09-02T02:08:41Z | [
"python",
"python-3.x",
"heap",
"heapsort",
"binary-heap"
] |
How to Stop a Program Until a Specific Action Happens | 39,278,603 | <p>Using python and tkinter, is there a way I can run a part of the program, then stop it until the user clicks a specific button and then continue running?</p>
<p>I mean:</p>
<p>function - stop - click button - continue running.</p>
<p>Necessary code for this:</p>
<pre><code>def yellowClick():
    yellow.configure(activebackground="yellow3")
    yellow.after(500, lambda: yellow.configure(activebackground="yellow"))

yellow = Tkinter.Button(base, bd="0", highlightthickness="0",
                        width="7", height="5", activebackground="yellow",
                        bg="yellow3", command = yellowClick)
yellow.place(x = 30, y = 50)

def blueClick():
    blue.configure(activebackground="medium blue")
    blue.after(500, lambda: blue.configure(activebackground="blue"))

blue = Tkinter.Button(base, bd="0", highlightthickness="0",
                      width="7", height="5", activebackground="blue",
                      bg="medium blue", command = blueClick)
blue.place(x = 125, y = 50)

def redClick():
    red.configure(activebackground="red3")
    red.after(500, lambda: red.configure(activebackground="red"))

red = Tkinter.Button(base, bd="0", highlightthickness="0",
                     width="7", height="5", activebackground="red",
                     bg = "red3", command = redClick)
red.place(x = 30, y = 145)

def greenClick():
    green.configure(activebackground="dark green")
    green.after(500, lambda: green.configure(activebackground="green4"))

green = Tkinter.Button(base, bd="0", highlightthickness="0",
                       width="7", height="5", activebackground="green4",
                       bg="dark green", command = greenClick)
green.place(x = 125, y = 145)

def showSequence():
    r = random.randint(1, 4)
    if r == 1:
        yellow.configure(bg="yellow")
        yellow.after(1000, lambda: yellow.configure(bg="yellow3"))
    elif r == 2:
        blue.configure(bg="blue")
        blue.after(1000, lambda: blue.configure(bg="medium blue"))
    elif r == 3:
        red.configure(bg="red")
        red.after(1000, lambda: red.configure(bg="red3"))
    elif r == 4:
        green.configure(bg="green4")
        green.after(1000, lambda: green.configure(bg="dark green"))
</code></pre>
<p>This is for a Simon game. I need to run this function once, then make it stop until the player clicks a button, and then return to this function. This is for the first turn. I need to connect the showSequence function in a way that it pauses until a button is clicked, but I don't know how.</p>
<p>Stopping the program on a timer will not work here; I mean waiting for a specific action to happen.</p>
| 0 | 2016-09-01T18:31:20Z | 39,278,903 | <p>Have a look at this code sample. It works as is; copy/paste it and it will run. There is no loop. It waits for the button click. I think you could initialize the program by loading a random pattern of colors into the "answers" list, which could essentially be your levels.<br>
<a href="http://stackoverflow.com/a/14772550/2601293">make TK inter wait for user input</a></p>
| -1 | 2016-09-01T18:52:30Z | [
"python",
"tkinter"
] |
Merging Dicts that have Lists of Dicts in Python 2.7 | 39,278,641 | <p>I am trying to merge two <code>dict</code>s in Python that could be the same, or one could have far less info. </p>
<p>ex.</p>
<pre><code>master = {"a": 5564, "c": [{"d2":6}]}
daily = { "a": 795, "b": 1337, "c": [{"d1": 2,"d2": 2,"d3": [{"e1": 4,"e2": 4}]}]}
</code></pre>
<p>They need to be merged so the output is as such</p>
<pre><code>master = { "a": 6359, "b": 1337, "c": [{"d1": 2,"d2": 8,"d3": [{"e1": 4,"e2": 4}]}]}
</code></pre>
<p>I took a shot at it, though I only ever get <code>None</code> returned. I might be missing something or just be way off; I just can't figure it out. Any help would be amazing. Thank you. </p>
<pre><code>def merge(master, daily):
    for k, v in daily.items():
        if isinstance(daily[k], list):
            key_check = keyCheck(k, master)
            if key_check:
                merge(master[k], daily[k])
            else:
                master[k] = daily[k]
        else:
            if keyCheck(k, master):
                master[k] += daily[k]
            else:
                master[k] = daily[k]
</code></pre>
<p><code>keyCheck</code> only checks if a key is in the dictionary so it doesn't throw errors.</p>
| 3 | 2016-09-01T18:34:15Z | 39,278,882 | <p>Here is a one-liner using <code>collections.Counter()</code>:</p>
<pre><code>>>> from collections import Counter
>>> C2 = Counter(daily)
>>> C1 = Counter(master)
>>>
>>> {k:reduce(lambda x,y : Counter(x)+Counter(y), v) if isinstance(v, list) and k in (C1.viewkeys() & C2) else v for k, v in (C1 + C2).items()}
{'a': 6359, 'c': Counter({'d3': [{'e1': 4, 'e2': 4}], 'd2': 8, 'd1': 2}), 'b': 1337}
</code></pre>
<p>First off, you can convert your dictionaries to Counter objects: summing two Counters adds the values for common keys (that's how Counter's addition works). Then you can loop over the items, and for keys that exist in both counters and whose values are lists, use the <code>reduce()</code> function to apply the same algorithm to the list items too. </p>
<p>If your list contains another nested similar data structure, you can convert this code to a recursive function.</p>
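<p>For flat dictionaries, the core idea can be seen in isolation (a Python 3 sketch with made-up numbers; note that <code>viewkeys()</code> in the snippet above is Python 2 only):</p>

```python
# Counter addition sums the values of common keys and keeps the keys that
# exist on only one side -- exactly the flat part of the merge.
from collections import Counter

master = {"a": 5564, "b": 10}
daily = {"a": 795, "b": 1337, "c": 2}
merged = dict(Counter(master) + Counter(daily))
print(merged)  # {'a': 6359, 'b': 1347, 'c': 2}
```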
| 5 | 2016-09-01T18:51:11Z | [
"python",
"python-2.7",
"dictionary"
] |
Merging Dicts that have Lists of Dicts in Python 2.7 | 39,278,641 | <p>I am trying to merge two <code>dict</code>s in Python that could be the same, or one could have far less info. </p>
<p>ex.</p>
<pre><code>master = {"a": 5564, "c": [{"d2":6}]}
daily = { "a": 795, "b": 1337, "c": [{"d1": 2,"d2": 2,"d3": [{"e1": 4,"e2": 4}]}]}
</code></pre>
<p>They need to be merged so the output is as such</p>
<pre><code>master = { "a": 6359, "b": 1337, "c": [{"d1": 2,"d2": 8,"d3": [{"e1": 4,"e2": 4}]}]}
</code></pre>
<p>I took a shot at it, though I only ever get <code>None</code> returned. I might be missing something or just be way off; I just can't figure it out. Any help would be amazing. Thank you. </p>
<pre><code>def merge(master, daily):
    for k, v in daily.items():
        if isinstance(daily[k], list):
            key_check = keyCheck(k, master)
            if key_check:
                merge(master[k], daily[k])
            else:
                master[k] = daily[k]
        else:
            if keyCheck(k, master):
                master[k] += daily[k]
            else:
                master[k] = daily[k]
</code></pre>
<p><code>keyCheck</code> only checks if a key is in the dictionary so it doesn't throw errors.</p>
| 3 | 2016-09-01T18:34:15Z | 39,279,206 | <p>Here is a recursive solution. Though it cannot compete with Kasramvd's answer.</p>
<pre><code>def merge(dic1, dic2):
    # Merge dictionaries without adding values. Just exchanging them.
    # Similar to .update() but does not override subdicts.
    merged = dict(dic1, **dic2)
    for key in merged:
        if key in dic1 and key in dic2:
            if isinstance(dic1[key], list):
                merged[key] = merge(dic1[key][0], dic2[key][0])
            else:
                merged[key] = dic1[key] + dic2[key]
    return merged
</code></pre>
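<p>For example, with the data from the question (the function is repeated here so the snippet runs standalone; note that the nested <code>"c"</code> entry comes back as a plain dict rather than a one-element list of dicts):</p>

```python
# merge() repeated from the answer above so this snippet is self-contained.
def merge(dic1, dic2):
    merged = dict(dic1, **dic2)
    for key in merged:
        if key in dic1 and key in dic2:
            if isinstance(dic1[key], list):
                merged[key] = merge(dic1[key][0], dic2[key][0])
            else:
                merged[key] = dic1[key] + dic2[key]
    return merged

master = {"a": 5564, "c": [{"d2": 6}]}
daily = {"a": 795, "b": 1337, "c": [{"d1": 2, "d2": 2, "d3": [{"e1": 4, "e2": 4}]}]}
result = merge(master, daily)
print(result["a"])        # 6359
print(result["c"]["d2"])  # 8 -- "c" is now a plain dict, not a list
```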
| 1 | 2016-09-01T19:11:46Z | [
"python",
"python-2.7",
"dictionary"
] |
PyCharm / OS X El Capitan / Python 3.5.2 - matplotlib not working in script | 39,278,730 | <p>Python noob here, apologies if this is has an obvious answer I should know. I'm using Python 3.5.2 via PyCharm in OSX El Capitan and I'm trying to run the following simple script to practise with matplotlib:</p>
<pre><code>import matplotlib.pyplot as plt
year = [1950,1970,1990,2010]
pop = [2.159,3.692,5.263,6.972]
plt.plot(year,pop)
plt.show()
</code></pre>
<p>If I execute this line by line in PyCharm's Python console, it works fine. If I execute it as an entire script, I get this error:</p>
<pre><code>/Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5/Users/Cuckoo/Dropbox/Python/test.py
Traceback (most recent call last):
File "/Users/Cuckoo/Dropbox/Python/test.py", line 1, in <module>
import matplotlib.pyplot as plt
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/matplotlib/__init__.py", line 115, in <module>
import tempfile
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tempfile.py", line 45, in <module>
from random import Random as _Random
ImportError: cannot import name 'Random'
Process finished with exit code 1
</code></pre>
<p>Can anyone please explain what has gone wrong and better still, how I can fix it?</p>
| 0 | 2016-09-01T18:41:14Z | 39,278,934 | <p>This can be caused by having another Python script in your project named <code>random.py</code> that is overriding the original library named Random.</p>
<p>Try to rename or remove the <code>random.py</code> file and your script should work from within PyCharm and the command line.</p>
| 1 | 2016-09-01T18:54:25Z | [
"python",
"osx",
"matplotlib",
"pycharm"
] |
PySpark how to read file having string with multiple encoding | 39,278,774 | <p>I am writing a Python Spark utility to read files and do some transformations.
The file has a large amount of data (up to 12GB). I use sc.textFile to create an RDD, and the logic is to pass each line from the RDD to a map function, which in turn splits the line by "," and runs some data transformations (changing field values based on a mapping).</p>
<blockquote>
<p>Sample line from the file.
0014164,02,031270,09,1,,0,0,0000000000,134314,Mobile,ce87862158eb0dff3023e16850f0417a-cs31,584e2cd63057b7ed,Privé,Gossip</p>
</blockquote>
<p>Due to values like "Privé" I get a UnicodeDecodeError. I tried the following to parse this value:</p>
<pre><code>if isinstance(v[12], basestring):
    v[12] = v[12].encode('utf8')
else:
    v[12] = unicode(v[12]).encode('utf8')
</code></pre>
<p>But when I write the data back to a file, this field gets written as 'Priv�'.
On Linux, the source file type is shown as "ISO-8859 text, with very long lines, with CRLF line terminators".</p>
<p>Could someone let me know the right way in Spark to read/write files with mixed encodings, please?</p>
| 3 | 2016-09-01T18:44:37Z | 39,280,205 | <p>You can set <code>use_unicode</code> to <code>False</code> when calling <code>textFile</code>. It will give you an RDD of <code>str</code> objects (Python 2.x) or <code>bytes</code> objects (Python 3.x) which can be further processed using the desired encoding.</p>
<pre><code>sc.textFile(path, use_unicode=False).map(lambda x: x.decode("iso-8859-1"))
</code></pre>
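<p>The decoding step itself can be sanity-checked without Spark: in ISO-8859-1, the accented character of "Privé" is the single byte 0xE9, which is exactly what a UTF-8 decode chokes on:</p>

```python
# Spark-free check of the encoding logic: the file's bytes are ISO-8859-1,
# so decoding them as UTF-8 fails while latin-1 succeeds.
raw = b"Priv\xe9"  # "Privé" encoded as ISO-8859-1

try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    print("utf-8 decode fails, as in the question")

print(raw.decode("iso-8859-1"))  # Privé
```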
| 0 | 2016-09-01T20:15:18Z | [
"python",
"apache-spark",
"pyspark"
] |
replace certain number in DataFrame | 39,278,847 | <p>I am pretty new to Python programming and have a question about conditionally replacing certain numbers in a DataFrame.
For example, I have a DataFrame with 5 days of data in the columns day1, day2, day3, day4 and day5. For each day, I have 5 data points, some of which are larger than 5. Now I want to set the data which is larger than 5 to 1.
So how can I do that? Loop over each column, find the specific elements and change them, or is there a faster way to do it?
Thanks,</p>
| 1 | 2016-09-01T18:49:00Z | 39,281,551 | <p>This will iterate over the data in each column and change high values to 1. Iterating by rows instead of columns is an option with <code>iterrows</code> as discussed <a href="http://stackoverflow.com/a/11617194/6085135">here</a>, but it's generally slower.</p>
<pre><code>import pandas as pd

data = {'day1' : pd.Series([1, 2, 3]),
        'day2' : pd.Series([1, 4, 6]),
        'day3' : pd.Series([5, 4, 3]),
        'day4' : pd.Series([2, 4, 6]),
        'day5' : pd.Series([7, 3, 2])}
df = pd.DataFrame(data)
</code></pre>
<p><img src="http://i.stack.imgur.com/RiYSB.png" alt="enter image description here"></p>
<pre><code>for col in df.columns:
df[col] = [x if x <= 5 else 1 for x in df[col]]
</code></pre>
<p><img src="http://i.stack.imgur.com/oV0is.png" alt="enter image description here"></p>
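<p>The core of the column loop is the conditional expression in the list comprehension, and it works the same on a plain Python list:</p>

```python
# Same idea as the column loop above, on a plain list:
# values greater than 5 become 1, everything else is kept unchanged.
values = [1, 4, 6, 5, 7]
capped = [x if x <= 5 else 1 for x in values]
```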
| 0 | 2016-09-01T22:02:31Z | [
"python",
"pandas",
"replace",
"dataframe"
] |
replace certain number in DataFrame | 39,278,847 | <p>I am pretty new to Python programming and have a question about conditionally replacing certain numbers in a DataFrame.
For example, I have a dataframe with 5 days of data in each column: day1, day2, day3, day4 and day5. For each day, I have 5 data points, some of them larger than 5. Now I want to set any value larger than 5 to 1.
How can I do that? Should I loop over each column, find the specific elements and change them, or is there a faster way?
Thanks,</p>
| 1 | 2016-09-01T18:49:00Z | 39,282,216 | <p>To do this without looping (which is usually faster) you can do:</p>
<pre><code>df[df > 5] = 1
</code></pre>
| 1 | 2016-09-01T23:20:01Z | [
"python",
"pandas",
"replace",
"dataframe"
] |
Is it possible to run ubuntu terminal commands using DJango | 39,279,007 | <p>I am designing a simple website using Django and my database is HBase. In some parts I need to save files (for example video files) on HDFS and keep their URIs. My problem is that I couldn't find any API for accessing HDFS through Django, so I decided to use Ubuntu terminal commands to upload and download data on HDFS. Now I want to know: is there any way to run terminal commands using Django, or any other way to access the HDFS API through Django?</p>
 | 4 | 2016-09-01T18:59:09Z | 39,279,043 | <p>Have Django make a call to a subprocess like the one below. Each token of the command should be a separate string in the list.</p>
<pre><code>import subprocess
subprocess.call(["ls", "-l"])
</code></pre>
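<p>If you also need the command's output (for example to show an HDFS listing in a view), <code>subprocess.check_output</code> returns it as bytes. A minimal sketch, using the Python interpreter itself as a portable stand-in for a real command such as <code>["hdfs", "dfs", "-ls", "/some/path"]</code> (assuming the Hadoop CLI is on the PATH):</p>

```python
import subprocess
import sys

# Stand-in command: prints a fake filename, just like an HDFS listing
# might. Swap in the real hdfs/hadoop invocation in production.
cmd = [sys.executable, "-c", "print('file1.mp4')"]
out = subprocess.check_output(cmd).decode().strip()
```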
| 0 | 2016-09-01T19:01:43Z | [
"python",
"django",
"hadoop",
"hbase",
"hdfs"
] |
Is it possible to run ubuntu terminal commands using DJango | 39,279,007 | <p>I am designing a simple website using DJango and my database is HBase. In some Part I need to save some files on HDFS, for example video file, and have it's URI. But my problem is I couldn't find any API for accessing HDFS through DJango so I decided to use ubuntu terminal command to upload and download data on HDFS. Now I want to know is there any way to run terminal command using Django or any other way to access HDFS API through Django?</p>
| 4 | 2016-09-01T18:59:09Z | 39,284,318 | <p>You don't need to search for Django implemented libraries, Django is written in python and python is providing libraries for it. </p>
<p>An alternative solution</p>
<pre><code>import subprocess
subprocess.Popen(['python', 'manage.py', 'runserver'])
</code></pre>
<p>You can also execute shell commands using <code>subprocess.Popen</code>.
The difference between subprocess Popen and call and how to use them is described here <a href="http://stackoverflow.com/questions/7681715/whats-the-difference-between-subprocess-popen-and-call-how-can-i-use-them">What's the difference between subprocess Popen and call (how can I use them)?</a></p>
| 0 | 2016-09-02T04:28:17Z | [
"python",
"django",
"hadoop",
"hbase",
"hdfs"
] |
Iterating a loop using await or yield causes error | 39,279,064 | <p>I come from the land of <code>Twisted</code>/<a href="https://github.com/twisted/klein" rel="nofollow"><code>Klein</code></a>. I come in peace and to ask for <code>Tornado</code> help. I'm investigating Tornado and how its take on async differs from Twisted. Twisted has something similar to <code>gen.coroutine</code> which is <a href="http://twistedmatrix.com/documents/current/core/howto/defer-intro.html#inline-callbacks-using-yield" rel="nofollow"><code>defer.inlineCallbacks</code></a> and I'm able to write async code like this:</p>
<p><em>kleinsample.py</em></p>
<pre><code>@app.route('/endpoint/<int:n>')
@defer.inlineCallbacks
def myRoute(request, n):
jsonlist = []
for i in range(n):
yield jsonlist.append({'id': i})
return json.dumps(jsonlist)
</code></pre>
<p>curl cmd:</p>
<pre><code>curl localhost:9000/json/2000
</code></pre>
<p>This endpoint will create a JSON string with <code>n</code> number of elements. <code>n</code> can be small or very big. I'm able to break it up in Twisted such that the event loop won't block using <code>yield</code>. Now here's how I tried to convert this into Tornado:</p>
<p><em>tornadosample.py</em></p>
<pre><code>async def get(self, n):
jsonlist = []
for i in range(n):
await gen.Task(jsonlist.append, {'id': i}) # exception here
self.write(json.dumps(jsonlist))
</code></pre>
<p>The traceback:</p>
<pre><code> TypeError: append() takes no keyword arguments
</code></pre>
<p>I'm confused about what I'm supposed to do to properly iterate each element in the loop so that the event loop doesn't get blocked. Does anyone know the "Tornado" way of doing this?</p>
| 1 | 2016-09-01T19:02:49Z | 39,279,258 | <p>Let's have a look at <code>gen.Task</code> <a href="http://www.tornadoweb.org/en/stable/gen.html#tornado.gen.Task" rel="nofollow">docs</a>:</p>
<blockquote>
<p>Adapts a callback-based asynchronous function for use in coroutines.</p>
<p>Takes a function (and optional additional arguments) and runs it with those arguments <strong>plus a callback keyword argument</strong>. The argument passed to the callback is returned as the result of the yield expression.</p>
</blockquote>
<p>Since <code>append</code> doesn't accept a keyword argument it doesn't know what to do with that <code>callback</code> kwarg and spits that exception.</p>
<p>What you could do is wrap <code>append</code> with your own function that does accept a <code>callback</code> kwarg, or use the approach shown in <a href="http://stackoverflow.com/a/11683014/1453822">this</a> answer.</p>
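<p>To make the first option concrete: <code>gen.Task</code> needs a function whose last step is to invoke a <code>callback</code> keyword argument. A minimal, Tornado-free sketch of such a wrapper (names are illustrative):</p>

```python
# Hypothetical wrapper: gives list.append the callback-style signature
# that gen.Task expects. list.append itself accepts no keyword
# arguments, which is exactly what caused the TypeError above.
def append_async(lst, item, callback=None):
    lst.append(item)
    if callback is not None:
        callback(item)   # gen.Task supplies and waits on this callback

results, done = [], []
append_async(results, {'id': 0}, callback=done.append)
```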
| 0 | 2016-09-01T19:14:37Z | [
"python",
"python-3.x",
"asynchronous",
"tornado",
"coroutine"
] |
Iterating a loop using await or yield causes error | 39,279,064 | <p>I come from the land of <code>Twisted</code>/<a href="https://github.com/twisted/klein" rel="nofollow"><code>Klein</code></a>. I come in peace and to ask for <code>Tornado</code> help. I'm investigating Tornado and how its take on async differs from Twisted. Twisted has something similar to <code>gen.coroutine</code> which is <a href="http://twistedmatrix.com/documents/current/core/howto/defer-intro.html#inline-callbacks-using-yield" rel="nofollow"><code>defer.inlineCallbacks</code></a> and I'm able to write async code like this:</p>
<p><em>kleinsample.py</em></p>
<pre><code>@app.route('/endpoint/<int:n>')
@defer.inlineCallbacks
def myRoute(request, n):
jsonlist = []
for i in range(n):
yield jsonlist.append({'id': i})
return json.dumps(jsonlist)
</code></pre>
<p>curl cmd:</p>
<pre><code>curl localhost:9000/json/2000
</code></pre>
<p>This endpoint will create a JSON string with <code>n</code> number of elements. <code>n</code> can be small or very big. I'm able to break it up in Twisted such that the event loop won't block using <code>yield</code>. Now here's how I tried to convert this into Tornado:</p>
<p><em>tornadosample.py</em></p>
<pre><code>async def get(self, n):
jsonlist = []
for i in range(n):
await gen.Task(jsonlist.append, {'id': i}) # exception here
self.write(json.dumps(jsonlist))
</code></pre>
<p>The traceback:</p>
<pre><code> TypeError: append() takes no keyword arguments
</code></pre>
<p>I'm confused about what I'm supposed to do to properly iterate each element in the loop so that the event loop doesn't get blocked. Does anyone know the "Tornado" way of doing this?</p>
| 1 | 2016-09-01T19:02:49Z | 39,279,951 | <p>You cannot and must not await <code>append</code>, since it isn't a coroutine and doesn't return a Future. If you want to occasionally yield to allow other coroutines to proceed using Tornado's event loop, await <a href="http://www.tornadoweb.org/en/stable/gen.html#tornado.gen.moment" rel="nofollow">gen.moment</a>.</p>
<pre><code>from tornado import gen
async def get(self, n):
jsonlist = []
for i in range(n):
jsonlist.append({'id': i})
if not i % 1000: # Yield control for a moment every 1k ops
await gen.moment
return json.dumps(jsonlist)
</code></pre>
<p>That said, unless this function is <em>extremely</em> CPU-intensive and requires hundreds of milliseconds or more to complete, you're probably better off just doing all your computation at once instead of taking multiple trips through the event loop before your function returns.</p>
| 1 | 2016-09-01T19:59:33Z | [
"python",
"python-3.x",
"asynchronous",
"tornado",
"coroutine"
] |
Iterating a loop using await or yield causes error | 39,279,064 | <p>I come from the land of <code>Twisted</code>/<a href="https://github.com/twisted/klein" rel="nofollow"><code>Klein</code></a>. I come in peace and to ask for <code>Tornado</code> help. I'm investigating Tornado and how its take on async differs from Twisted. Twisted has something similar to <code>gen.coroutine</code> which is <a href="http://twistedmatrix.com/documents/current/core/howto/defer-intro.html#inline-callbacks-using-yield" rel="nofollow"><code>defer.inlineCallbacks</code></a> and I'm able to write async code like this:</p>
<p><em>kleinsample.py</em></p>
<pre><code>@app.route('/endpoint/<int:n>')
@defer.inlineCallbacks
def myRoute(request, n):
jsonlist = []
for i in range(n):
yield jsonlist.append({'id': i})
return json.dumps(jsonlist)
</code></pre>
<p>curl cmd:</p>
<pre><code>curl localhost:9000/json/2000
</code></pre>
<p>This endpoint will create a JSON string with <code>n</code> number of elements. <code>n</code> can be small or very big. I'm able to break it up in Twisted such that the event loop won't block using <code>yield</code>. Now here's how I tried to convert this into Tornado:</p>
<p><em>tornadosample.py</em></p>
<pre><code>async def get(self, n):
jsonlist = []
for i in range(n):
await gen.Task(jsonlist.append, {'id': i}) # exception here
self.write(json.dumps(jsonlist))
</code></pre>
<p>The traceback:</p>
<pre><code> TypeError: append() takes no keyword arguments
</code></pre>
<p>I'm confused about what I'm supposed to do to properly iterate each element in the loop so that the event loop doesn't get blocked. Does anyone know the "Tornado" way of doing this?</p>
| 1 | 2016-09-01T19:02:49Z | 39,283,598 | <p><code>list.append()</code> returns <code>None</code>, so it's a little misleading that your Klein sample looks like it's yielding some object. This is equivalent to <code>jsonlist.append(...); yield</code> as two separate statements. The tornado equivalent would be to do <code>await gen.moment</code> in place of the bare <code>yield</code>. </p>
<p>Also note that in Tornado, handlers produce their responses by calling <code>self.write()</code>, not by returning values, so the <code>return</code> statement should be <code>self.write(json.dumps(jsonlist))</code>.</p>
| 1 | 2016-09-02T02:47:36Z | [
"python",
"python-3.x",
"asynchronous",
"tornado",
"coroutine"
] |
How to run django runserver over TLS 1.2 | 39,279,079 | <p>I'm testing Stripe orders on my local Mac OS X machine. I am implementing this code:</p>
<pre><code>stripe.api_key = settings.STRIPE_SECRET
order = stripe.Order.create(
currency = 'usd',
email = 'j@awesomecom',
items = [
{
"type":'sku',
"parent":'sku_88F260aQ',
"quantity": 1,
}
],
shipping = {
"name":'Jenny Rosen',
"address":{
"line1":'1234 Main Street',
"city":'Anytown',
"country":'US',
"postal_code":'123456'
}
},
)
</code></pre>
<p>I receive an error:</p>
<blockquote>
<p>Stripe no longer supports API requests made with TLS 1.0. Please
initiate HTTPS connections with TLS 1.2 or later.</p>
</blockquote>
<p>I am using django 1.10 and python version 2.7.10</p>
<p>How can I force the use of TLS 1.2? Would I do this on the python or django side?</p>
 | 1 | 2016-09-01T19:03:33Z | 39,300,617 | <p>This is not a Django issue, but an operating system and language issue.</p>
<p>I'm using Mac OS X and and a brew version of python. I'm also using virtual env which has its own copy of python and open ssl.</p>
<p>I did the following:</p>
<p>I first downloaded the most recent version of XCode which updates OpenSSL.
I then uninstalled and reinstalled brew python.
I then updated virtualenv.</p>
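<p>You can verify which OpenSSL your Python actually links against from the interpreter; TLS 1.2 support requires OpenSSL 1.0.1 or later:</p>

```python
import ssl

# The version string of the SSL library this Python was built against.
version = ssl.OPENSSL_VERSION
print(version)
```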
| 0 | 2016-09-02T20:58:16Z | [
"python",
"django",
"python-2.7",
"ssl",
"tls1.2"
] |
How to run django runserver over TLS 1.2 | 39,279,079 | <p>I'm testing Stripe orders on my local Mac OS X machine. I am implementing this code:</p>
<pre><code>stripe.api_key = settings.STRIPE_SECRET
order = stripe.Order.create(
currency = 'usd',
email = 'j@awesomecom',
items = [
{
"type":'sku',
"parent":'sku_88F260aQ',
"quantity": 1,
}
],
shipping = {
"name":'Jenny Rosen',
"address":{
"line1":'1234 Main Street',
"city":'Anytown',
"country":'US',
"postal_code":'123456'
}
},
)
</code></pre>
<p>I receive an error:</p>
<blockquote>
<p>Stripe no longer supports API requests made with TLS 1.0. Please
initiate HTTPS connections with TLS 1.2 or later.</p>
</blockquote>
<p>I am using django 1.10 and python version 2.7.10</p>
<p>How can I force the use of TLS 1.2? Would I do this on the python or django side?</p>
 | 1 | 2016-09-01T19:03:33Z | 39,376,684 | <p>If you have already tried to update openssl and python (using brew) and it still does not work, make sure your settings have <code>DEBUG = False</code>. </p>
<p>Watch this thread for more information <a href="https://code.google.com/p/googleappengine/issues/detail?id=13207" rel="nofollow">https://code.google.com/p/googleappengine/issues/detail?id=13207</a></p>
| 0 | 2016-09-07T18:15:21Z | [
"python",
"django",
"python-2.7",
"ssl",
"tls1.2"
] |
How to run django runserver over TLS 1.2 | 39,279,079 | <p>I'm testing Stripe orders on my local Mac OS X machine. I am implementing this code:</p>
<pre><code>stripe.api_key = settings.STRIPE_SECRET
order = stripe.Order.create(
currency = 'usd',
email = 'j@awesomecom',
items = [
{
"type":'sku',
"parent":'sku_88F260aQ',
"quantity": 1,
}
],
shipping = {
"name":'Jenny Rosen',
"address":{
"line1":'1234 Main Street',
"city":'Anytown',
"country":'US',
"postal_code":'123456'
}
},
)
</code></pre>
<p>I receive an error:</p>
<blockquote>
<p>Stripe no longer supports API requests made with TLS 1.0. Please
initiate HTTPS connections with TLS 1.2 or later.</p>
</blockquote>
<p>I am using django 1.10 and python version 2.7.10</p>
<p>How can I force the use of TLS 1.2? Would I do this on the python or django side?</p>
| 1 | 2016-09-01T19:03:33Z | 39,936,720 | <p>I solved installing this libraries:</p>
<pre><code>pip install urllib3
pip install pyopenssl
pip install ndg-httpsclient
pip install pyasn1
</code></pre>
<p>solution from: </p>
<blockquote>
<p><a href="https://github.com/pinax/pinax-stripe/issues/267" rel="nofollow">https://github.com/pinax/pinax-stripe/issues/267</a></p>
</blockquote>
| 0 | 2016-10-08T19:52:18Z | [
"python",
"django",
"python-2.7",
"ssl",
"tls1.2"
] |
tarfile - Adding files from different drives | 39,279,090 | <p>I am trying to archive and compress multiple directories spread across multiple drives using the tarfile library.
The problem is that tarfile merges the paths even if two files are stored on different drives.
For example:</p>
<pre><code>import tarfile
with tarfile.open(r"D:\Temp\archive.tar.gz", "w:gz") as tf:
tf.add(r"C:\files\foo")
tf.add(r"D:\files\bar")
</code></pre>
<p>Will create an archive containing the following files:</p>
<pre><code>archive.tar.gz
|
files
|
foo
bar
</code></pre>
<p>Is there a way of creating this:</p>
<pre><code>archive.tar.gz
|
C
|
files
|
foo
D
|
files
|
bar
</code></pre>
<p>Thanks in advance.</p>
| 4 | 2016-09-01T19:04:11Z | 39,279,182 | <p>You'll need to use <code>tarfile.addfile()</code> instead of <code>tarfile.add()</code> :</p>
<p>With TarInfo you can specify the filename which will be used in the archive.</p>
<p>Exemple :</p>
<pre><code>with open(r"C:\files\foo", "rb") as ff:
ti = tf.gettarinfo(arcname="C/files/foo", fileobj=ff)
tf.addfile(ti, ff)
</code></pre>
<p>Or maybe a faster solution:</p>
<pre><code>tf.add('/path/to/dir/to/add/', arcname='C/files')
tf.add('/path/to/otherdir/to/add/', arcname='D/files')
</code></pre>
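<p>Here is a self-contained sketch of the <code>arcname</code> approach, using a temporary directory so it runs anywhere:</p>

```python
import os
import tarfile
import tempfile

# Create a throwaway file, archive it under a custom in-archive path,
# and confirm the member name stored inside the tar.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "foo.txt")
with open(src, "w") as f:
    f.write("hello")

archive = os.path.join(tmpdir, "archive.tar.gz")
with tarfile.open(archive, "w:gz") as tf:
    tf.add(src, arcname="C/files/foo.txt")   # stored path, not on-disk path

with tarfile.open(archive, "r:gz") as tf:
    names = tf.getnames()
```

<p>The member is stored as <code>C/files/foo.txt</code> regardless of where the source file lives on disk, which is exactly what lets you preserve drive letters in the archive layout.</p>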
| 3 | 2016-09-01T19:09:44Z | [
"python",
"gzip",
"tar",
"tarfile"
] |
tarfile - Adding files from different drives | 39,279,090 | <p>I am trying to archive and compress multiple directories spread across multiple drives using the tarfile library.
The problem is that tarfile merges the paths even if two files are stored on different drives.
For example:</p>
<pre><code>import tarfile
with tarfile.open(r"D:\Temp\archive.tar.gz", "w:gz") as tf:
tf.add(r"C:\files\foo")
tf.add(r"D:\files\bar")
</code></pre>
<p>Will create an archive containing the following files:</p>
<pre><code>archive.tar.gz
|
files
|
foo
bar
</code></pre>
<p>Is there a way of creating this:</p>
<pre><code>archive.tar.gz
|
C
|
files
|
foo
D
|
files
|
bar
</code></pre>
<p>Thanks in advance.</p>
 | 4 | 2016-09-01T19:04:11Z | 39,290,513 | <p>Thanks <strong>Loïc</strong> for your answer, it helped me find the actual answer I was looking for. (And also made me waste about an hour because I got really confused with the Linux and Windows style paths you mixed up in your answer)...</p>
<pre><code>import os
import tarfile
def create_archive(paths, arc_paths, archive_path):
with tarfile.open(archive_path, "w:gz") as tf:
for path, arc_path in zip(paths, arc_paths):
tf.add(path, arcname=arc_path)
def main():
archive = r"D:\Temp\archive.tar.gz"
paths = [r"C:\files\foo", r"D:\files\bar"]
# Making sure all the paths are absolute.
paths = [os.path.abspath(path) for path in paths]
# Creating arc-style paths
arc_paths = [path.replace(':', '') for path in paths]
# Create the archive including drive letters (if in windows)
create_archive(paths, arc_paths, archive)
</code></pre>
| 0 | 2016-09-02T10:46:04Z | [
"python",
"gzip",
"tar",
"tarfile"
] |
Setting the n_estimators argument using **kwargs (Scikit Learn) | 39,279,121 | <p>I am trying to follow <a href="http://blog.yhat.com/posts/predicting-customer-churn-with-sklearn.html" rel="nofollow">this</a> tutorial to learn machine-learning-based prediction, but I have two questions about it.</p>
<p>Ques1. How do I set <code>n_estimators</code> in the piece of code below? Otherwise it will always assume the default value.</p>
<pre><code>from sklearn.cross_validation import KFold
def run_cv(X,y,clf_class,**kwargs):
# Construct a kfolds object
kf = KFold(len(y),n_folds=5,shuffle=True)
y_pred = y.copy()
# Iterate through folds
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
# Initialize a classifier with key word arguments
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
y_pred[test_index] = clf.predict(X_test)
return y_pred
</code></pre>
<p>It is being called as:</p>
<pre><code>from sklearn.svm import SVC
print "%.3f" % accuracy(y, run_cv(X,y,SVC))
</code></pre>
<p>Ques2: How do I use the already trained model file (e.g. obtained from SVM) so that I can use it to predict more (test) data which I didn't use for training?</p>
| 2 | 2016-09-01T19:06:37Z | 39,282,189 | <p>For your first question, in the above code you would call <code>run_cv(X,y,SVC,n_classifiers=100)</code>, the <code>**kwargs</code> will pass this to the classifier initializer with the step <code>clf = clf_class(**kwargs)</code>.</p>
<p>For your second question, the cross validation in the code you've linked is just for model evaluation, i.e. comparing different types of models and hyperparameters, and determining the likely effectiveness of your model in production. Once you've decided on your model, you need to refit the model on the whole dataset: </p>
<pre><code>clf.fit(X,y)
</code></pre>
<p>Then you can get predictions with <code>clf.predict</code> or <code>clf.predict_proba</code>.</p>
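<p>For the second question, once refit you can also persist the trained model to disk and reload it later for new predictions. A minimal sketch using the standard-library <code>pickle</code> module (a stand-in object is used here in place of a fitted <code>SVC</code>; joblib is also commonly used for scikit-learn models):</p>

```python
import pickle

# Stand-in for a fitted classifier: any picklable object with .predict.
class FittedModel(object):
    def predict(self, X):
        return [0 for _ in X]   # dummy predictions

clf = FittedModel()
blob = pickle.dumps(clf)        # in practice: pickle.dump(clf, open(path, "wb"))
restored = pickle.loads(blob)   # later / in another process: pickle.load(...)
preds = restored.predict([[1.0], [2.0]])
```

<p>The restored object predicts exactly as the original did, so you can score new test data without retraining.</p>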
| 2 | 2016-09-01T23:16:55Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Curl error when downloading json object | 39,279,291 | <p>Getting the following error...</p>
<pre><code>curl: (56) GnuTLS recv error (-54): Error in the pull function.
</code></pre>
<p>...when using the following command to curl a json file</p>
<pre><code>curl -L -o commerce.json http://www.commerce.gov/data.json
</code></pre>
<p>Any advice? I'm not familiar with curl. Perhaps it's a timeout error. Is there any way I can prevent that? I really need the file, and I am unable to download it from a browser (presumably the file is too big).</p>
<p>I'm working from the command line on Ubuntu. As an alternative to curl, I would love a Python solution.</p>
| 1 | 2016-09-01T19:17:07Z | 39,279,400 | <p>the error code 56 means the following, as described here <a href="https://curl.haxx.se/docs/manpage.html" rel="nofollow">https://curl.haxx.se/docs/manpage.html</a></p>
<blockquote>
<p>56 Failure in receiving network data.</p>
</blockquote>
<p>You should use <strong>-v</strong> to see what is happening.</p>
<p>I don't think another tool will fix the network error.</p>
<p>Nevertheless, there is an example in plain Python:</p>
<p><a href="http://stackoverflow.com/questions/2667509/curl-alternative-in-python">CURL alternative in Python</a></p>
| 1 | 2016-09-01T19:24:12Z | [
"python",
"json",
"bash",
"curl"
] |
Curl error when downloading json object | 39,279,291 | <p>Getting the following error...</p>
<pre><code>curl: (56) GnuTLS recv error (-54): Error in the pull function.
</code></pre>
<p>...when using the following command to curl a json file</p>
<pre><code>curl -L -o commerce.json http://www.commerce.gov/data.json
</code></pre>
<p>Any advice? I'm not familiar with curl. Perhaps it's a timeout error. Is there any way I can prevent that? I really need the file, and I am unable to download it from a browser (presumably the file is too big).</p>
<p>I'm working from the command line on Ubuntu. As an alternative to curl, I would love a Python solution.</p>
| 1 | 2016-09-01T19:17:07Z | 39,279,784 | <p>In bash you can use:</p>
<pre><code>wget -O commerce.json http://www.commerce.gov/data.json
</code></pre>
<p>Otherwise, a Python solution to this would be:</p>
<p>First you will need to install the Python <code>wget</code> library, then you can use the code:</p>
<pre><code>import wget
url = 'http://www.commerce.gov/data.json'
commercejson = wget.download(url)
</code></pre>
<p>This will download the <code>data.json</code> file to your local Python project directory.
The <code>data.json</code> file is currently 198MB, so curl may not be able to handle it very well.</p>
<p><strong>UPDATE: Compressed JSON download:</strong></p>
<p>To enable gzip compression, you can use the following to download gzipped version, which ends up being 19MB instead, which would be much more friendly to download.</p>
<pre><code>wget -S --header="accept-encoding: gzip" -O commerce.json.gz http://www.commerce.gov/data.json
</code></pre>
<p>Then, once the gzipped json file is downloaded, run the below command to decompress it:</p>
<p><code>gzip -d commerce.json.gz</code></p>
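<p>The same decompression can also be done from Python with the standard-library <code>gzip</code> module; a compress/decompress round trip in memory, which works the same way as <code>gzip.open(path, "rb")</code> on a downloaded <code>commerce.json.gz</code>:</p>

```python
import gzip

# Compress and decompress in memory; the restored bytes match the
# original payload exactly.
payload = b'{"title": "commerce data"}'
compressed = gzip.compress(payload)
restored = gzip.decompress(compressed)
```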
| 1 | 2016-09-01T19:48:29Z | [
"python",
"json",
"bash",
"curl"
] |
Python function definition | 39,279,298 | <pre><code>def CsMatrix(X not None):
</code></pre>
<p>I came across this piece of code. I hadn't seen the <strong><em>X not None</em></strong> syntax before, so I wrote a test:</p>
<pre><code>def test(x not None):
pass
</code></pre>
<p>However, I got SyntaxError: invalid syntax.
Can anyone explain this syntax?</p>
 | -4 | 2016-09-01T19:17:31Z | 39,279,330 | <p>You cannot do that. What you can do is:</p>
<pre><code>def test(x):
if x is None:
return
... # All the further actions with x
</code></pre>
| 0 | 2016-09-01T19:19:18Z | [
"python"
] |
Python function definition | 39,279,298 | <pre><code>def CsMatrix(X not None):
</code></pre>
<p>I came across this piece of code. I hadn't seen the <strong><em>X not None</em></strong> syntax before, so I wrote a test:</p>
<pre><code>def test(x not None):
pass
</code></pre>
<p>However, I got SyntaxError: invalid syntax.
Can anyone explain this syntax?</p>
 | -4 | 2016-09-01T19:17:31Z | 39,279,514 | <p>That's not valid syntax in either <a href="https://docs.python.org/2.0/ref/function.html" rel="nofollow">python 2.x</a> or <a href="http://www.python-course.eu/python3_functions.php" rel="nofollow">python 3.x</a>; maybe you wanted to declare your function with a <code>None</code> default value, something like this:</p>
<pre><code>def CsMatrix(X = None):
if X is None:
print("Yeah, I'm None")
CsMatrix()
</code></pre>
| 0 | 2016-09-01T19:31:03Z | [
"python"
] |
Python function definition | 39,279,298 | <pre><code>def CsMatrix(X not None):
</code></pre>
<p>I came across this piece of code. I hadn't seen the <strong><em>X not None</em></strong> syntax before, so I wrote a test:</p>
<pre><code>def test(x not None):
pass
</code></pre>
<p>However, I got SyntaxError: invalid syntax.
Can anyone explain this syntax?</p>
| -4 | 2016-09-01T19:17:31Z | 39,279,797 | <pre><code>def CsMatrix(X not None):
pass
</code></pre>
<p>Is it possible that you mis-read it? Definitely not valid...
If you have a link to the site where you saw it, could you post it?</p>
<p>You also said "For x not None" implying it might be used in a for loop as well? This would also be bad syntax.
You can (as suggested) set a default value for "x", in this case 'None'</p>
<pre><code>def test(x = None):
if x is None:
print "Nothing was passed to this function!"
elif x is not None:
print "The function received: ", x
else:
print "There's no way this should ever print"
</code></pre>
| 0 | 2016-09-01T19:49:14Z | [
"python"
] |
Quickly count the number of objects in bson document | 39,279,323 | <p>I'd like to calculate the number of documents stored in a mongodb bson file without having to import the file into the db via mongorestore. </p>
<p>The best I've been able to come up with in python is </p>
<pre><code>bson_doc = open('./archive.bson','rb')
it = bson.decode_file_iter(bson_doc)
total = sum(1 for _ in it)
print(total)
</code></pre>
<p>This works in theory, but is slow in practice when bson documents are large. Anyone have a quicker approach to counting the number of documents in a bson document without doing a full decode?</p>
<p>I am currently using the python 2.7 and pymongo.
<a href="https://api.mongodb.com/python/current/api/bson/index.html" rel="nofollow">https://api.mongodb.com/python/current/api/bson/index.html</a></p>
| 2 | 2016-09-01T19:18:51Z | 39,279,826 | <p>I don't have a file at hand to try, but I believe there's a way - if you'll parse the data by hand.</p>
<p>The <a href="https://github.com/mongodb/mongo-python-driver/blob/0b34f9702ca8bed45792a53287d33a2292b99152/bson/__init__.py#L855" rel="nofollow">source for <code>bson.decode_file_iter</code></a> (sans the docstring) goes like this:</p>
<pre><code>_UNPACK_INT = struct.Struct("<i").unpack
def decode_file_iter(file_obj, codec_options=DEFAULT_CODEC_OPTIONS):
while True:
# Read size of next object.
size_data = file_obj.read(4)
if len(size_data) == 0:
break # Finished with file normaly.
elif len(size_data) != 4:
raise InvalidBSON("cut off in middle of objsize")
obj_size = _UNPACK_INT(size_data)[0] - 4
elements = size_data + file_obj.read(obj_size)
yield _bson_to_dict(elements, codec_options)
</code></pre>
<p>I presume, the time-consuming operation is <code>_bson_to_dict</code> call - and you don't need one.</p>
<p>So, all you need is to read the file - get the int32 value with the next document's size and skip it. Then count how many documents you've encountered doing this.</p>
<p>So, I believe, this function should do the trick:</p>
<pre><code>import struct
import os
from bson.errors import InvalidBSON
def count_file_documents(file_obj):
"""Counts how many documents provided BSON file contains"""
cnt = 0
while True:
# Read size of next object.
size_data = file_obj.read(4)
if len(size_data) == 0:
break # Finished with file normally.
elif len(size_data) != 4:
raise InvalidBSON("cut off in middle of objsize")
obj_size = struct.Struct("<i").unpack(size_data)[0] - 4
# Skip the next obj_size bytes
file_obj.seek(obj_size, os.SEEK_CUR)
cnt += 1
return cnt
</code></pre>
<p>(I haven't tested the code, though. Don't have MongoDB at hand.)</p>
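<p>The size-prefix logic is easy to sanity-check without MongoDB: each BSON document begins with a little-endian int32 giving its own total length (including those 4 bytes). A sketch that builds two fake length-prefixed documents and counts them with the same skip logic:</p>

```python
import io
import struct

def count_documents(file_obj):
    # Same approach as above: read the 4-byte size, then skip the body.
    cnt = 0
    while True:
        size_data = file_obj.read(4)
        if len(size_data) == 0:
            break
        obj_size = struct.unpack("<i", size_data)[0] - 4
        file_obj.seek(obj_size, 1)   # 1 == os.SEEK_CUR
        cnt += 1
    return cnt

def fake_doc(body):
    # Length-prefixed blob: int32 total size (prefix included) + body.
    return struct.pack("<i", len(body) + 4) + body

data = fake_doc(b"\x00" * 10) + fake_doc(b"\x00" * 3)
n = count_documents(io.BytesIO(data))
```

<p>These are not real BSON bodies, only length-prefixed placeholders, but the counting loop never inspects the body so the skip arithmetic is exercised exactly as it would be on a real <code>.bson</code> file.</p>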
| 1 | 2016-09-01T19:51:18Z | [
"python",
"mongodb",
"bson"
] |
Create RDD of arrays | 39,279,334 | <p><sub>As I live in agony for <a href="http://stackoverflow.com/questions/39260820/is-sparks-kmeans-unable-to-handle-bigdata">Is Spark's KMeans unable to handle bigdata?</a>, I want to create a minimal example to demonstrate the drama. For that, I want to create the data, rather than read it.</sub></p>
<p>Here is what my data looks like:</p>
<pre><code>In [22]: realData.take(2)
Out[22]:
[array([ 84.35778894, 190.61634731, 121.61911155, -42.2862512 ,
-39.33345881, 56.73534546, -15.59698061, -86.12075349,
85.48406906, 40.84118662, -1.00725942, -2.87201027,
-78.0677815 , -18.80891205, -92.39391945, -102.98860959,
-10.59249313, 30.80641523, 87.49634549, -78.3205944 ,
-15.99765437, 33.36382612, -14.10079251, 37.05977621,
-30.02787349, -46.48908886, 40.05820543, 12.34164613,
60.59778037, 32.86144792, -75.09426566, -29.71363055,
-24.45698228, -7.22987835, 35.51963431, 36.92236462,
84.71522683, -30.15837915, 1.30921589, 29.79845728,
7.77733962, 28.66041905, 6.55247136, 45.48181712,
-24.81799125, 12.20440078, -14.91224658, -36.80905193,
51.17004102, -18.4527695 , 12.35095124, -3.73548334,
-9.2651481 , 19.53993158, -0.28221419, 33.07089884,
7.89205558, -2.63194072, 13.32103665, 7.62146851,
-41.3406389 , 13.37658853, -36.09437786, -18.15283789]),
array([ 227.63800054, 89.63235623, -28.94679686, -171.95442583,
-157.36636545, -43.28729374, 97.31828944, -45.66335323,
-100.52371276, 16.04201854, 25.79787405, -43.55558296,
-23.43046377, -53.12619721, -10.16698475, -88.88129402,
77.19121455, 28.42062289, -0.30305782, -56.16625533,
-100.88774848, 38.65317047, 37.17211943, 38.16609239,
-50.05152587, -8.73759989, -49.98339921, -21.65102389,
13.39011805, 48.91359669, -22.98882211, -39.78551088,
-52.06830607, 44.4193014 , -30.76970509, -109.19968443,
-67.17202321, -38.17445022, -66.15981665, -12.53127828,
-29.50283995, -72.71269849, -85.92771623, 62.37326985,
-25.44451665, 30.67529111, 19.77880449, 24.68152321,
-62.80451881, 60.57287154, 22.31731031, 37.22992347,
41.42355257, -50.73447099, -9.21878036, -18.39200695,
-11.15764727, 44.76715383, -16.37372336, -4.55888474,
-4.26690754, 23.23691627, 0.25348381, -37.4707463 ])]
</code></pre>
<p>It seems to be a list of arrays.</p>
<p>How can I create this kind of data while importing as few packages as possible?</p>
<hr>
<p>Note: every element of the RDD is a 64-dimensional vector. I plan to create 100m vectors.</p>
<p>Random values are also welcome (for example within [-100, 100], I don't really care).</p>
| 1 | 2016-09-01T19:19:32Z | 39,279,755 | <p>Spark provides utilities for generating random RDDs out-of-the box. In PySpark these are located in <code>pyspark.mllib.random.RandomRDDs</code>. For example:</p>
<pre><code>from pyspark.mllib.random import RandomRDDs
rdd = RandomRDDs.uniformVectorRDD(sc, 100000000, 64)
type(rdd.first())
## numpy.ndarray
rdd.first().shape
# (64,)
</code></pre>
| 3 | 2016-09-01T19:46:46Z | [
"python",
"arrays",
"apache-spark",
"bigdata",
"distributed-computing"
] |
Adding labels to stacked bar chart | 39,279,404 | <p>I'm plotting a cross-tabulation of various offices within certain categories. I'd like to put together a horizontal stacked bar chart where each office and its value is labeled. </p>
<p>Here's some example code: </p>
<pre><code>df = pd.DataFrame({'office1': pd.Series([1,np.nan,np.nan], index=['catA', 'catB', 'catC']),
'office2': pd.Series([np.nan,8,np.nan], index=['catA', 'catB', 'catC']),
'office3': pd.Series([12,np.nan,np.nan], index=['catA', 'catB', 'catC']),
'office4': pd.Series([np.nan,np.nan,3], index=['catA', 'catB', 'catC']),
'office5': pd.Series([np.nan,5,np.nan], index=['catA', 'catB', 'catC']),
'office6': pd.Series([np.nan,np.nan,7], index=['catA', 'catB', 'catC']),
'office7': pd.Series([3,np.nan,np.nan], index=['catA', 'catB', 'catC']),
'office8': pd.Series([np.nan,np.nan,11], index=['catA', 'catB', 'catC']),
'office9': pd.Series([np.nan,6,np.nan], index=['catA', 'catB', 'catC']),
})
ax = df.plot.barh(title="Office Breakdown by Category", legend=False, figsize=(10,7), stacked=True)
</code></pre>
<p>This gives me a fine starting point: </p>
<p><a href="http://i.stack.imgur.com/d4Jeu.png" rel="nofollow"><img src="http://i.stack.imgur.com/d4Jeu.png" alt="Stacked Bar Chart Example"></a></p>
<p>However, what I'd like to have is this:
<a href="http://i.stack.imgur.com/a3FwK.png" rel="nofollow"><img src="http://i.stack.imgur.com/a3FwK.png" alt="Bar Chart with labels"></a></p>
<p>After some research, I came up with the following code that correctly lines up labels on the 'category' axis: </p>
<pre><code>def annotateBars(row, ax=ax):
for col in row.index:
value = row[col]
if (str(value) != 'nan'):
ax.text(value/2, labeltonum(row.name), col+","+str(value))
def labeltonum(label):
if label == 'catA':
return 0
elif label == 'catB':
return 1
elif label == 'catC':
return 2
df.apply(annotateBars, ax=ax, axis=1)
</code></pre>
<p>But this doesn't factor in the "stacking" of the bars. I've also tried iterating through the <code>patches</code> container returned by the plot command (which can let me retrieve x & y positions of each rectangle), but I then lose any connection to the office labels. </p>
| 1 | 2016-09-01T19:24:25Z | 39,280,239 | <p>Figured it out. If I iterate through the columns of each row of the dataframe I can build up a list of the labels I need that matches the progression of the rectangles in <code>ax.patches</code>. Solution below: </p>
<pre><code>labels = []
for j in df.columns:
for i in df.index:
label = str(j)+": " + str(df.loc[i][j])
labels.append(label)
patches = ax.patches
for label, rect in zip(labels, patches):
width = rect.get_width()
if width > 0:
x = rect.get_x()
y = rect.get_y()
height = rect.get_height()
ax.text(x + width/2., y + height/2., label, ha='center', va='center')
</code></pre>
<p>Which, when added to the code above, yields:</p>
<p><a href="http://i.stack.imgur.com/ihchS.png" rel="nofollow"><img src="http://i.stack.imgur.com/ihchS.png" alt="Properly annotated bar chart"></a></p>
<p>Now to just deal with re-arranging labels for bars that are too small...</p>
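The reason this pairing works is that `ax.patches` stores the rectangles column by column, which is exactly the nesting order of the two loops over `df.columns` and `df.index`. That ordering can be checked without matplotlib at all; the office/category names below are just placeholders mirroring the question's data:

```python
# stand-in for the DataFrame: column -> {index -> value}
data = {
    'office1': {'catA': 1,  'catB': None, 'catC': None},
    'office3': {'catA': 12, 'catB': None, 'catC': None},
}
columns = ['office1', 'office3']
index = ['catA', 'catB', 'catC']

labels = []
for j in columns:        # outer loop over columns ...
    for i in index:      # ... matches the column-major order of ax.patches
        labels.append('%s: %s' % (j, data[j][i]))
```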
| 1 | 2016-09-01T20:18:35Z | [
"python",
"pandas",
"matplotlib"
] |
Adding labels to stacked bar chart | 39,279,404 | <p>I'm plotting a cross-tabulation of various offices within certain categories. I'd like to put together a horizontal stacked bar chart where each office and its value is labeled. </p>
<p>Here's some example code: </p>
<pre><code>df = pd.DataFrame({'office1': pd.Series([1,np.nan,np.nan], index=['catA', 'catB', 'catC']),
'office2': pd.Series([np.nan,8,np.nan], index=['catA', 'catB', 'catC']),
'office3': pd.Series([12,np.nan,np.nan], index=['catA', 'catB', 'catC']),
'office4': pd.Series([np.nan,np.nan,3], index=['catA', 'catB', 'catC']),
'office5': pd.Series([np.nan,5,np.nan], index=['catA', 'catB', 'catC']),
'office6': pd.Series([np.nan,np.nan,7], index=['catA', 'catB', 'catC']),
'office7': pd.Series([3,np.nan,np.nan], index=['catA', 'catB', 'catC']),
'office8': pd.Series([np.nan,np.nan,11], index=['catA', 'catB', 'catC']),
'office9': pd.Series([np.nan,6,np.nan], index=['catA', 'catB', 'catC']),
})
ax = df.plot.barh(title="Office Breakdown by Category", legend=False, figsize=(10,7), stacked=True)
</code></pre>
<p>This gives me a fine starting point: </p>
<p><a href="http://i.stack.imgur.com/d4Jeu.png" rel="nofollow"><img src="http://i.stack.imgur.com/d4Jeu.png" alt="Stacked Bar Chart Example"></a></p>
<p>However, what I'd like to have is this:
<a href="http://i.stack.imgur.com/a3FwK.png" rel="nofollow"><img src="http://i.stack.imgur.com/a3FwK.png" alt="Bar Chart with labels"></a></p>
<p>After some research, I came up with the following code that correctly lines up labels on the 'category' axis: </p>
<pre><code>def annotateBars(row, ax=ax):
for col in row.index:
value = row[col]
if (str(value) != 'nan'):
ax.text(value/2, labeltonum(row.name), col+","+str(value))
def labeltonum(label):
if label == 'catA':
return 0
elif label == 'catB':
return 1
elif label == 'catC':
return 2
df.apply(annotateBars, ax=ax, axis=1)
</code></pre>
<p>But this doesn't factor in the "stacking" of the bars. I've also tried iterating through the <code>patches</code> container returned by the plot command (which can let me retrieve x & y positions of each rectangle), but I then lose any connection to the office labels. </p>
| 1 | 2016-09-01T19:24:25Z | 39,280,262 | <p>You could have also just changed the function <code>annotateBars()</code> to:</p>
<pre><code>def annotateBars(row, ax=ax):
curr_value = 0
for col in row.index:
value = row[col]
if (str(value) != 'nan'):
ax.text(curr_value + (value)/2, labeltonum(row.name), col+","+str(value), ha='center',va='center')
curr_value += value
</code></pre>
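The key change is the running `curr_value` offset, which places each label at the midpoint of its own segment rather than at the midpoint of the total. That arithmetic can be checked in isolation with a small helper (pure Python, no matplotlib; the function name is mine, not part of the answer's code):

```python
def stacked_centers(values):
    """Return the horizontal center of each segment of a stacked bar."""
    centers, left = [], 0.0
    for v in values:
        centers.append(left + v / 2.0)  # midpoint of this segment
        left += v                       # advance past it
    return centers
```

For a bar made of segments of width 4 and 2, the centers fall at 2.0 and 5.0.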
| 1 | 2016-09-01T20:20:20Z | [
"python",
"pandas",
"matplotlib"
] |
Append dataframe to dict | 39,279,439 | <p>I've created a dict of dicts structured in such a way that the key is the department ('ABC') then the date (01.08) is the key and values are { product name (A), Units (0), Revenue (0)}. This structure continues for several departments. See dict of dict printout below. </p>
<pre><code>'ABC': 01.08 \
A. Units 0
Revenue 0
B. Units 0
Revenue 0
C. Units 0
Revenue 0
D. Units 0
Revenue 0
</code></pre>
<p>Additionally, I've created a dataframe using groupby and an aggregation function (sum) to get the total of units and revenue per day per department (this is an aggregation of two levels as opposed to three in the dict - date , department, product).</p>
<p>Printing out df, which is an aggregation of number of units and total revenue, results in: </p>
<pre><code>print df.ix['ABC']
Total Overall Units \
dates
2016-08-01 2
2016-08-02 0
2016-08-03 2
2016-08-04 1
2016-08-22 2
Total Overall Revenue \
dates
2016-08-01 20
2016-08-02 500
2016-08-03 39
2016-08-04 50
</code></pre>
<p>I am currently ending up with two separate objects which I want to merge/append such that the total units and total revenue will be added to the end of the dict in the correct place (i.e. mapped to the correct department and date).
Currently I am printing the dict and then the dataframe (via <code>to_html</code>) separately by 'department', so I am left with two separate tables. Not only are they separate, but the table created from the df also has one fewer column, as they are grouped differently.</p>
<pre><code>'ABC':
01.08 | 02.08 | 03.08 | 04.08
A Total Units 0 0 0 0
Total Revenue 0 0 0 0
B Total Units 0 0 0 0
Total Revenue 0 0 0 0
C Total Units 0 0 0 0
Total Revenue 0 0 0 0
D Total Units 0 0 0 0
Total Revenue 0 0 0 0
Total Overall Units 0 0 0 0
Total Overall Revenue 0 0 0 0
</code></pre>
<ol>
<li>Can I add the dataframe to the dict by 'department name'?</li>
<li>Ultimate goal is to merge these two data objects into one unified data object or to literally align the objects for readability.</li>
</ol>
<p>Any ideas? </p>
| 0 | 2016-09-01T19:26:16Z | 39,279,677 | <p>Skipping to question #2: I'd recommend using a single dataframe to store all of your information. It will be much easier to work with than keeping columnar data in a dict of dicts. Set the date as the main index and either use a separate column for each field ('deptA-revenue') or use multi-indexing. You can then store the daily totals as columns in the same dataframe.</p>
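A minimal sketch of that single-dataframe layout, assuming pandas is available; the department and field names here are illustrative stand-ins for the question's data:

```python
import pandas as pd

# one row per date, one (dept, field) column pair per metric
data = {
    ('ABC', 'units'):   {'2016-08-01': 2,  '2016-08-02': 0},
    ('ABC', 'revenue'): {'2016-08-01': 20, '2016-08-02': 500},
}
df = pd.DataFrame(data)
df.columns = pd.MultiIndex.from_tuples(df.columns, names=['dept', 'field'])
```

With this shape, `df.loc[date, (dept, field)]` reads any cell, and per-day totals can live in the same frame as additional columns instead of a separate dict.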
| 0 | 2016-09-01T19:41:39Z | [
"python",
"pandas",
"dictionary"
] |
Append dataframe to dict | 39,279,439 | <p>I've created a dict of dicts structured in such a way that the key is the department ('ABC') then the date (01.08) is the key and values are { product name (A), Units (0), Revenue (0)}. This structure continues for several departments. See dict of dict printout below. </p>
<pre><code>'ABC': 01.08 \
A. Units 0
Revenue 0
B. Units 0
Revenue 0
C. Units 0
Revenue 0
D. Units 0
Revenue 0
</code></pre>
<p>Additionally, I've created a dataframe using groupby and an aggregation function (sum) to get the total of units and revenue per day per department (this is an aggregation of two levels as opposed to three in the dict - date , department, product).</p>
<p>Printing out df, which is an aggregation of number of units and total revenue, results in: </p>
<pre><code>print df.ix['ABC']
Total Overall Units \
dates
2016-08-01 2
2016-08-02 0
2016-08-03 2
2016-08-04 1
2016-08-22 2
Total Overall Revenue \
dates
2016-08-01 20
2016-08-02 500
2016-08-03 39
2016-08-04 50
</code></pre>
<p>I am currently ending up with two separate objects which I want to merge/append such that the total units and total revenue will be added to the end of the dict in the correct place (i.e. mapped to the correct department and date).
Currently I am printing the dict and then the dataframe (via <code>to_html</code>) separately by 'department', so I am left with two separate tables. Not only are they separate, but the table created from the df also has one fewer column, as they are grouped differently.</p>
<pre><code>'ABC':
01.08 | 02.08 | 03.08 | 04.08
A Total Units 0 0 0 0
Total Revenue 0 0 0 0
B Total Units 0 0 0 0
Total Revenue 0 0 0 0
C Total Units 0 0 0 0
Total Revenue 0 0 0 0
D Total Units 0 0 0 0
Total Revenue 0 0 0 0
Total Overall Units 0 0 0 0
Total Overall Revenue 0 0 0 0
</code></pre>
<ol>
<li>Can I add the dataframe to the dict by 'department name'?</li>
<li>Ultimate goal is to merge these two data objects into one unified data object or to literally align the objects for readability.</li>
</ol>
<p>Any ideas? </p>
| 0 | 2016-09-01T19:26:16Z | 39,280,902 | <p>To print in the desired order, you need to transpose the rows & columns in the dates dictionary. It is probably easiest to total the rows while doing that. This makes the second object you mentioned unnecessary. Except for the formatting, something like this should work:</p>
<pre><code>for dept, dates in df.items():
    # Transpose the rows and columns into two new dictionaries
    # called units and revenues. At the same time, total the
    # units and revenue into two new "zztotal" entries.
    units = {"zztotal": {}}
    revenues = {"zztotal": {}}
    for date, products in dates.items():
        for product, stats in products.items():
            name = stats["name"]
            if name not in units:
                units[name] = {}
                revenues[name] = {}
            units[name][date] = stats["units"]
            revenues[name][date] = stats["revenue"]
            if date not in units["zztotal"]:
                units["zztotal"][date] = 0
                revenues["zztotal"][date] = 0
            units["zztotal"][date] += stats["units"]
            revenues["zztotal"][date] += stats["revenue"]
    # At this point we are ready to print the transposed
    # dictionaries. Work is needed to line up the columns
    # so the printout is attractive.
    print dept
    print sorted(dates.keys())
    for name, by_date in sorted(units.items()):
        if name != "zztotal":
            print name, "Total Units", [
                by_date[date] for date in sorted(by_date)]
            print "Total Revenue", [
                revenues[name][date] for date in sorted(by_date)]
        else:
            print "Total Overall Units", [
                by_date[date] for date in sorted(by_date)]
            print "Total Overall Revenue", [
                revenues[name][date] for date in sorted(by_date)]
</code></pre>
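The transpose-and-total idea can be exercised on a tiny in-memory structure; the shape below mirrors the dept -> date -> product nesting described in the question, with illustrative names and values:

```python
# dept -> date -> product -> stats
df = {'ABC': {'01.08': {'A': {'units': 2, 'revenue': 20},
                        'B': {'units': 1, 'revenue': 5}}}}

units = {}   # product -> {date -> units}  (the transposed view)
totals = {}  # date -> overall units       (the "Total Overall" row)
for dept, dates in df.items():
    for date, products in dates.items():
        totals.setdefault(date, 0)
        for product, stats in products.items():
            units.setdefault(product, {})[date] = stats['units']
            totals[date] += stats['units']
```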
| 0 | 2016-09-01T21:09:57Z | [
"python",
"pandas",
"dictionary"
] |
ValueError: RDD is empty-- Pyspark (Windows Standalone) | 39,279,702 | <p>I am trying to create an RDD but spark not creating it, throwing back error, pasted below;</p>
<pre><code>data = records.map(lambda r: LabeledPoint(extract_label(r), extract_features(r)))
first_point = data.first()
Py4JJavaError Traceback (most recent call last)
<ipython-input-19-d713906000f8> in <module>()
----> 1 first_point = data.first()
2 print "Raw data: " + str(first[2:])
3 print "Label: " + str(first_point.label)
4 print "Linear Model feature vector:\n" + str(first_point.features)
5 print "Linear Model feature vector length: " + str(len (first_point.features))
C:\spark\python\pyspark\rdd.pyc in first(self)
1313 ValueError: RDD is empty
1314 """
-> 1315 rs = self.take(1)
1316 if rs:
1317 return rs[0]
C:\spark\python\pyspark\rdd.pyc in take(self, num)
1295
1296 p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1297 res = self.context.runJob(self, takeUpToNumLeft, p)..................
</code></pre>
<p>Any help will be greatly appreciated. </p>
<p>Thank you,
Innocent </p>
| 0 | 2016-09-01T19:43:14Z | 39,292,939 | <p>Your <code>records</code> is empty. You could verify by calling <code>records.first()</code>.</p>
<p>Calling <code>first</code> on an empty RDD raises error, but not <code>collect</code>. For example,</p>
<pre><code>records = sc.parallelize([])
records.map(lambda x: x).collect()
</code></pre>
<blockquote>
<p>[]</p>
</blockquote>
<pre><code>records.map(lambda x: x).first()
</code></pre>
<blockquote>
<p>ValueError: RDD is empty</p>
</blockquote>
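The same first-vs-collect distinction can be mimicked without Spark: `first` has nothing to return from an empty collection, while `collect` can legitimately return an empty list. This is a plain-Python sketch of the behavior, not Spark's actual implementation:

```python
def rdd_first(items):
    # like RDD.first(): return one element, or raise on empty input
    for x in items:
        return x
    raise ValueError("RDD is empty")

def rdd_collect(items):
    # like RDD.collect(): an empty result is perfectly valid
    return list(items)
```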
| 0 | 2016-09-02T12:53:10Z | [
"python",
"pyspark",
"rdd"
] |
scipy.spatial ValueError when trying to run kdtree | 39,279,781 | <p><strong>Background:</strong> I am trying to run a nearest neighbor using the <code>cKDtree</code> function on a shapefile that has 201 records with lat/lons against a time series dataset of 8760 hours (total hours in a year). I am getting an error, naturally I looked it up. I found this: <a href="http://stackoverflow.com/questions/14264682/scipy-spatial-valueerror-x-must-consist-of-vectors-of-length-d-but-has-shape">scipy.spatial ValueError: "x must consist of vectors of length %d but has shape %s"</a> which is the same error, but I am having trouble understanding how exactly this error was resolved.</p>
<p><strong>Workflow:</strong> I pulled the x & y coordinates out of the shapefile and stored them in separate arrays called <code>x_vector</code> and <code>y_vector</code>. The 8760 data is an hdf5 file. I pulled the coordinates out using <code>h5_coords = np.vstack([meta['latitude'], meta['longitude']]).T</code>. </p>
<p>Now I try to run the kdtree,</p>
<pre><code># Run the kdtree to match nearest values
tree = cKDTree(np.vstack([x_vector, y_vector]))
kdtree_indices = tree.query(h5_coords)[1]
</code></pre>
<p>but it results in this same traceback error.</p>
<p><strong>Traceback Error:</strong></p>
<pre><code>Traceback (most recent call last):
File "meera_extract.py", line 45, in <module>
kdtree_indices = tree.query(h5_coords)[1]
File "scipy/spatial/ckdtree.pyx", line 618, in scipy.spatial.ckdtree.cKDTree.query (scipy/spatial/ckdtree.cxx:6996)
ValueError: x must consist of vectors of length 201 but has shape (1, 389880)
</code></pre>
<p>Help me, stackoverflow. You're my only hope.</p>
| 1 | 2016-09-01T19:48:23Z | 39,280,874 | <p>So it seems I need to read up on the differences between <code>vstack</code> & <code>column_stack</code> and the use of transpose, i.e. <code>.T</code>. If anyone has the same issue, here is what I changed to make <code>cKDTree</code> work. Hopefully it will help if someone else runs into this issue. Many thanks to the comments from the community that helped solve this!</p>
<p>I changed how the <code>hdf5</code> coordinates were brought in, from <code>vstack</code> to <code>column_stack</code>, and removed the transpose <code>.T</code>.</p>
<pre><code># Get coordinates of HDF5 file
h5_coords = np.column_stack([meta['latitude'], meta['longitude']])
</code></pre>
<p>Instead of trying to add the points in the tree I made a new variable to hold them:</p>
<pre><code># combine x and y
vector_pnts = np.column_stack([x_vector, y_vector])
</code></pre>
<p>Then I ran the kdtree without any error.</p>
<pre><code># Run the kdtree to match nearest values
tree = cKDTree(vector_pnts)
kdtree_indices = tree.query(h5_coords)[1]
</code></pre>
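The shape difference that triggered the original error can be seen directly, assuming NumPy is available:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

rows = np.vstack([x, y])          # shape (2, 3): two rows of coordinates
points = np.column_stack([x, y])  # shape (3, 2): one (x, y) pair per point
```

`cKDTree` expects the `(n_points, n_dims)` layout that `column_stack` produces; the `vstack` result would have to be transposed with `.T` first to get the same thing.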
| 2 | 2016-09-01T21:06:59Z | [
"python",
"numpy",
"kdtree"
] |
Python dictionary update and refinement loop | 39,279,782 | <p>I am trying to create a loop that will further refine a user-defined dictionary after the original pass and parse of the user's string:</p>
<pre><code>product_info = {'make': [], 'year': [], 'color': []}
make = ['honda','bmw','acura']
year = ['2013','2014','2015','2016']
colors = ['black','grey','red','blue']
user_input = raw_input()
for m in make:
if m in user_input:
product_info['make'].append(m)
for y in year:
if y in user_input:
product_info['year'].append(y)
for c in colors:
if c in user_input:
product_info['color'].append(c)
</code></pre>
<p>Here is where I would like to check that dictionary and make sure that all values are filled, and if not, ask for more input to refine the existing dictionary:</p>
<p>example: <code>I am looking for a grey car</code></p>
<pre><code>product_info = {'make': [], 'year': [], 'color': ['grey']}
if not product_info['make']:
    print 'what make of car are you looking for?'
new_input = 'i am looking for a 2015 honda'
</code></pre>
<p>send the string again through the dictionary/parse process and update the product_info dictionary if there is a value to fill in and also look to see if they mentioned a year this time around, etcâ¦</p>
<p>updated dict:</p>
<pre><code>product_info = {'make': ['honda'], 'year': ['2015'], 'color': ['grey']}
</code></pre>
<p>How do I take in new user input requesting more information and parse through it, looking for the attributes, and update the existing dictionary without modifying old attributes?</p>
| 1 | 2016-09-01T19:48:24Z | 39,280,250 | <pre><code>while [] in product_info.values():
for key in product_info:
if product_info[key] == []:
print("What",key)
user_input = raw_input()
for each in user_input.split(' '):
if each in make:
key = 'make'
product_info[key].append(each)
elif each in year:
key = 'year'
product_info[key].append(each)
elif each in colors:
key = 'color'
product_info[key].append(each)
</code></pre>
<p>This is a very rough but working system; you could refine it further, but that's up to you.</p>
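One way to package the matching so repeated prompts only fill gaps and never clobber earlier answers is a small refine function. This is a sketch using the question's attribute lists; `CATALOG` and `refine` are names introduced here, not part of the question's code, and the substring matching mirrors the question's original `if m in user_input` approach:

```python
CATALOG = {
    'make': ['honda', 'bmw', 'acura'],
    'year': ['2013', '2014', '2015', '2016'],
    'color': ['black', 'grey', 'red', 'blue'],
}

def refine(product_info, text):
    """Append any known attribute values found in text, skipping duplicates."""
    for field, values in CATALOG.items():
        for v in values:
            if v in text and v not in product_info[field]:
                product_info[field].append(v)
    return product_info
```

Each new user reply is fed through `refine`, so mentioning a year while being asked about the make still updates the year slot.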
| 1 | 2016-09-01T20:19:45Z | [
"python",
"dictionary"
] |