title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Converting a list of lists into a dictionary based on common values in Python | 38,900,637 | <p>I have a list of lists in Python as follows:</p>
<pre><code>a = [['John', 24, 'teacher'],['Mary',23,'clerk'],['Vinny', 21, 'teacher'], ['Laura',32, 'clerk']]
</code></pre>
<p>The idea is to create a dict on the basis of their occupation as follows:</p>
<pre><code>b = {'teacher': {'John_24': 'true', 'Vinny_21' 'true'},
'clerk' : {'Mary_23': 'true', 'Laura_32' 'true'}}
</code></pre>
<p>What is the best way of achieving this?</p>
| 1 | 2016-08-11T15:32:51Z | 38,900,774 | <p>As others have said, you can just use a list instead of a dictionary.</p>
<p>Here is one way to implement this:</p>
<pre><code>a = [
    ['John', 24, 'teacher'],
    ['Mary', 23, 'clerk'],
    ['Vinny', 21, 'teacher'],
    ['Laura', 32, 'clerk'],
]
b = {}
for name, age, occupation in a:
    b.setdefault(occupation, []).append('{}_{}'.format(name, age))
print b  # {'clerk': ['Mary_23', 'Laura_32'], 'teacher': ['John_24', 'Vinny_21']}
</code></pre>
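<p>If the exact nested shape from the question is needed (an inner dictionary mapping each <code>name_age</code> key to <code>'true'</code>), a small variation of the same <code>setdefault</code> pattern produces it. This is a sketch, not part of the original answer:</p>

```python
a = [['John', 24, 'teacher'], ['Mary', 23, 'clerk'],
     ['Vinny', 21, 'teacher'], ['Laura', 32, 'clerk']]

b = {}
for name, age, occupation in a:
    # use an inner dict instead of a list to match the requested output shape
    b.setdefault(occupation, {})['{}_{}'.format(name, age)] = 'true'

print(b)
# {'teacher': {'John_24': 'true', 'Vinny_21': 'true'},
#  'clerk': {'Mary_23': 'true', 'Laura_32': 'true'}}
```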
| 1 | 2016-08-11T15:39:32Z | [
"python",
"list",
"dictionary"
] |
re.sub doesn't modify string in Python | 38,900,731 | <p>I have a small Python program where I expect the word "verified" (regardless of whether it is written in upper case, lower case, or a mix of the two) to be replaced with "Verified". How do I need to rewrite the code below?</p>
<pre><code> import re
text="verified, vERIFIED, VERIFIED"
text=re.sub(r'\verified', 'Verified', text, flags=re.IGNORECASE)
print text
Expected output: Verified, Verified, Verified
Actual output:verified, vERIFIED, VERIFIED
</code></pre>
 | 0 | 2016-08-11T15:37:31Z | 38,900,952 | <p>Simply remove the backslash before the <code>v</code>. In a regular expression, <code>\v</code> is an escape for a vertical-tab character, so the pattern <code>r'\verified'</code> matches a vertical tab followed by <code>erified</code> and therefore never matches the word "verified".</p>
<pre><code>import re
text="verified, vERIFIED, VERIFIED"
text=re.sub(r'verified', 'Verified', text, flags=re.IGNORECASE)
print text
</code></pre>
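<p>For illustration, a short sketch of the difference (the first pattern looks for a vertical-tab character followed by <code>erified</code>, so nothing is replaced):</p>

```python
import re

text = "verified, vERIFIED, VERIFIED"

# r'\verified' == vertical tab + 'erified', so no match is found:
print(re.sub(r'\verified', 'Verified', text, flags=re.IGNORECASE))
# verified, vERIFIED, VERIFIED

# without the backslash the substitution works as intended:
print(re.sub(r'verified', 'Verified', text, flags=re.IGNORECASE))
# Verified, Verified, Verified
```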
| 1 | 2016-08-11T15:48:27Z | [
"python"
] |
Python3 Cannot import function from my own module | 38,900,788 | <p>I have browsed the site for a while, and the only occasions where people had this error were circular imports, which (as far as I understand what circular imports are) I don't have. My imports are transitive, though.</p>
<p>I have 3 files in the same folder:</p>
<pre><code>packer.py
parser.py
statistics.py
</code></pre>
<p>packer.py</p>
<pre><code>class Conversation:
    ....
class Message:
    ....
</code></pre>
<p>parser.py (module works, called all functions from itself without a problem)</p>
<pre><code>from bs4 import BeautifulSoup
from packer import Conversation
from packer import Message
def writeFormatedLog():
    ....
def getConvs():
    ....
</code></pre>
<p>statistics.py</p>
<pre><code>from parser import getConvs #this on its own runs without problems
getConvs() #throws ImportError: cannot import name 'getConvs'
</code></pre>
| 0 | 2016-08-11T15:40:13Z | 38,900,871 | <p>ImportErrors may happen if there are duplicate module names. Try naming your <code>parser.py</code> something else, since it is likely conflicting with Python's built-in <a href="https://docs.python.org/2/library/parser.html" rel="nofollow"><code>parser</code></a> module.</p>
| 2 | 2016-08-11T15:44:04Z | [
"python",
"python-3.x"
] |
URLconf error on Django Heroku deployment | 38,900,803 | <p>I am getting the following error on my Django Heroku deployment and am not sure what is causing it:</p>
<pre><code>Page not found (404)
Request Method: GET
Request URL: https://blogpod.herokuapp.com/
Using the URLconf defined in blogpodapi.urls, Django tried these URL patterns, in this order:
^admin/
^api/
^favicon\.ico$
^media\/(?P<path>.*)$
The current URL, , didn't match any of these.
You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 404 page.
</code></pre>
<p>The log generated by Heroku is as follows:</p>
<pre><code>2016-08-11T12:00:13.231906+00:00 heroku[web.1]: Process exited with status 3
2016-08-11T12:00:13.085417+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-08-11T12:00:13.086164+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [3] [INFO] Using worker: sync
2016-08-11T12:00:13.086036+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [3] [INFO] Listening at: http://0.0.0.0:28935 (3)
2016-08-11T12:00:13.094899+00:00 app[web.1]: Traceback (most recent call last):
2016-08-11T12:00:13.094908+00:00 app[web.1]: worker.init_process()
2016-08-11T12:00:13.094885+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [9] [ERROR] Exception in worker process
2016-08-11T12:00:13.094908+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
2016-08-11T12:00:13.091098+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [9] [INFO] Booting worker with pid: 9
2016-08-11T12:00:13.094907+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
2016-08-11T12:00:13.094910+00:00 app[web.1]: self.wsgi = self.app.wsgi()
2016-08-11T12:00:13.094909+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
2016-08-11T12:00:13.094928+00:00 app[web.1]: __import__(module)
2016-08-11T12:00:13.094924+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
2016-08-11T12:00:13.094926+00:00 app[web.1]: return self.load_wsgiapp()
2016-08-11T12:00:13.094927+00:00 app[web.1]: return util.import_app(self.app_uri)
2016-08-11T12:00:13.094925+00:00 app[web.1]: self.callable = self.load()
2016-08-11T12:00:13.094909+00:00 app[web.1]: self.load_wsgi()
2016-08-11T12:00:13.094925+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
2016-08-11T12:00:13.094927+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
2016-08-11T12:00:13.094928+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/util.py", line 357, in import_app
2016-08-11T12:00:13.094929+00:00 app[web.1]: File "/app/blogpodapi/__init__.py", line 5, in <module>
2016-08-11T12:00:13.094930+00:00 app[web.1]: from .celery import app as celery_app
2016-08-11T12:00:13.094946+00:00 app[web.1]: from celery import Celery
2016-08-11T12:00:13.095036+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-08-11T12:00:13.116166+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [3] [INFO] Shutting down: Master
2016-08-11T12:00:13.094931+00:00 app[web.1]: from celery import Celery
2016-08-11T12:00:13.094945+00:00 app[web.1]: File "/app/blogpodapi/celery.py", line 3, in <module>
2016-08-11T12:00:13.094947+00:00 app[web.1]: ImportError: cannot import name Celery
2016-08-11T12:00:13.094930+00:00 app[web.1]: File "/app/blogpodapi/celery.py", line 3, in <module>
2016-08-11T12:00:13.116300+00:00 app[web.1]: [2016-08-11 12:00:13 +0000] [3] [INFO] Reason: Worker failed to boot.
2016-08-11T13:50:33.664042+00:00 heroku[slug-compiler]: Slug compilation finished
2016-08-11T13:50:33.664034+00:00 heroku[slug-compiler]: Slug compilation started
2016-08-11T13:50:33.427856+00:00 heroku[api]: Release v10 created by methuselah@hotmail.co.uk
2016-08-11T13:50:33.427821+00:00 heroku[api]: Deploy cda9e28 by methuselah@hotmail.co.uk
2016-08-11T13:50:33.977599+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-11T13:50:39.111777+00:00 heroku[web.1]: Starting process with command `gunicorn --pythonpath blogpodapi blogpodapi.wsgi`
2016-08-11T13:50:41.177637+00:00 app[web.1]: [2016-08-11 13:50:41 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-08-11T13:50:41.200609+00:00 app[web.1]: [2016-08-11 13:50:41 +0000] [12] [INFO] Booting worker with pid: 12
2016-08-11T13:50:41.178100+00:00 app[web.1]: [2016-08-11 13:50:41 +0000] [3] [INFO] Using worker: sync
2016-08-11T13:50:41.181609+00:00 app[web.1]: [2016-08-11 13:50:41 +0000] [9] [INFO] Booting worker with pid: 9
2016-08-11T13:50:41.178025+00:00 app[web.1]: [2016-08-11 13:50:41 +0000] [3] [INFO] Listening at: http://0.0.0.0:49918 (3)
2016-08-11T13:50:42.773771+00:00 heroku[web.1]: State changed from starting to up
2016-08-11T13:51:16.618289+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=a75e5e61-ea8f-4782-b620-cc5cc75b9f7b fwd="81.140.173.71" dyno=web.1 connect=0ms service=275ms status=404 bytes=2347
2016-08-11T13:51:16.572682+00:00 app[web.1]: Not Found: /
2016-08-11T13:51:16.998620+00:00 heroku[router]: at=info method=GET path="/favicon.ico" host=blogpod.herokuapp.com request_id=4d03a815-63c3-4af2-bc59-7961921d1fe6 fwd="81.140.173.71" dyno=web.1 connect=0ms service=6ms status=404 bytes=2380
2016-08-11T13:51:16.954410+00:00 app[web.1]: Not Found: /favicon.ico
2016-08-11T13:51:18.788369+00:00 app[web.1]: Not Found: /
2016-08-11T13:51:18.833864+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=1872905b-5428-4f86-84f9-004fc09f678f fwd="81.140.173.71" dyno=web.1 connect=1ms service=110ms status=404 bytes=2347
2016-08-11T13:51:28.890453+00:00 app[web.1]: Not Found: /
2016-08-11T13:51:29.083208+00:00 app[web.1]: Not Found: /favicon.ico
2016-08-11T13:51:28.901745+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=32407ba8-9172-4100-ab3e-ad94556a7ad8 fwd="81.140.173.71" dyno=web.1 connect=1ms service=9ms status=404 bytes=2348
2016-08-11T13:51:29.092764+00:00 heroku[router]: at=info method=GET path="/favicon.ico" host=blogpod.herokuapp.com request_id=8162cd11-0e2b-4b1e-b399-d1da4f54aaf2 fwd="81.140.173.71" dyno=web.1 connect=1ms service=8ms status=404 bytes=2381
2016-08-11T14:00:02.834694+00:00 app[web.1]: Not Found: /
2016-08-11T14:00:02.878192+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=1e9378d0-1da8-4043-9ed3-4de28596dcb8 fwd="81.140.173.71" dyno=web.1 connect=0ms service=5ms status=404 bytes=2347
2016-08-11T14:00:43.156178+00:00 heroku[api]: Deploy 9d0f8a9 by methuselah@hotmail.co.uk
2016-08-11T14:00:43.156178+00:00 heroku[api]: Release v11 created by methuselah@hotmail.co.uk
2016-08-11T14:00:43.420375+00:00 heroku[slug-compiler]: Slug compilation finished
2016-08-11T14:00:43.420369+00:00 heroku[slug-compiler]: Slug compilation started
2016-08-11T14:00:43.961889+00:00 heroku[web.1]: Restarting
2016-08-11T14:00:43.962570+00:00 heroku[web.1]: State changed from up to starting
2016-08-11T14:00:47.162710+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-08-11T14:00:47.870052+00:00 app[web.1]: [2016-08-11 14:00:47 +0000] [3] [INFO] Handling signal: term
2016-08-11T14:00:48.024476+00:00 heroku[web.1]: Process exited with status 0
2016-08-11T14:00:47.888983+00:00 app[web.1]: [2016-08-11 14:00:47 +0000] [3] [INFO] Shutting down: Master
2016-08-11T14:00:47.871014+00:00 app[web.1]: [2016-08-11 14:00:47 +0000] [12] [INFO] Worker exiting (pid: 12)
2016-08-11T14:00:47.869197+00:00 app[web.1]: [2016-08-11 14:00:47 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-08-11T14:00:53.606738+00:00 heroku[web.1]: Starting process with command `gunicorn --pythonpath blogpodapi blogpodapi.wsgi`
2016-08-11T14:00:56.402633+00:00 app[web.1]: [2016-08-11 14:00:56 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-08-11T14:00:56.409059+00:00 app[web.1]: [2016-08-11 14:00:56 +0000] [9] [INFO] Booting worker with pid: 9
2016-08-11T14:00:56.403334+00:00 app[web.1]: [2016-08-11 14:00:56 +0000] [3] [INFO] Using worker: sync
2016-08-11T14:00:56.403205+00:00 app[web.1]: [2016-08-11 14:00:56 +0000] [3] [INFO] Listening at: http://0.0.0.0:8099 (3)
2016-08-11T14:00:56.474461+00:00 app[web.1]: [2016-08-11 14:00:56 +0000] [12] [INFO] Booting worker with pid: 12
2016-08-11T14:00:57.424122+00:00 heroku[web.1]: State changed from starting to up
2016-08-11T14:00:59.703603+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=e434a2df-e8a7-4ba3-abb2-e18225935469 fwd="81.140.173.71" dyno=web.1 connect=4ms service=254ms status=404 bytes=2461
2016-08-11T14:00:59.674543+00:00 app[web.1]: Not Found: /
2016-08-11T14:01:37.434523+00:00 app[web.1]: Not Found: /
2016-08-11T14:01:37.469187+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=94ab0e10-894d-4b3a-a7c2-88d21d2adf88 fwd="81.140.173.71" dyno=web.1 connect=1ms service=16ms status=404 bytes=2461
2016-08-11T14:01:38.605473+00:00 app[web.1]: Not Found: /
2016-08-11T14:01:38.635562+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=106ca9e9-8699-48fe-b399-4fc85ad68ec5 fwd="81.140.173.71" dyno=web.1 connect=0ms service=113ms status=404 bytes=2461
2016-08-11T14:01:39.784904+00:00 app[web.1]: Not Found: /
2016-08-11T14:01:39.815800+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=a62737a0-47a9-4624-aa74-ada56a87c831 fwd="81.140.173.71" dyno=web.1 connect=0ms service=11ms status=404 bytes=2461
2016-08-11T14:36:35.901858+00:00 heroku[web.1]: State changed from up to down
2016-08-11T14:36:35.901149+00:00 heroku[web.1]: Idling
2016-08-11T14:36:40.049130+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2016-08-11T14:36:41.205660+00:00 heroku[web.1]: Process exited with status 0
2016-08-11T14:36:41.029945+00:00 app[web.1]: [2016-08-11 14:36:41 +0000] [12] [INFO] Worker exiting (pid: 12)
2016-08-11T14:36:41.029933+00:00 app[web.1]: [2016-08-11 14:36:41 +0000] [9] [INFO] Worker exiting (pid: 9)
2016-08-11T14:36:41.031588+00:00 app[web.1]: [2016-08-11 14:36:41 +0000] [3] [INFO] Handling signal: term
2016-08-11T14:36:41.068936+00:00 app[web.1]: [2016-08-11 14:36:41 +0000] [3] [INFO] Shutting down: Master
2016-08-11T15:12:03.453222+00:00 heroku[web.1]: Unidling
2016-08-11T15:12:03.453577+00:00 heroku[web.1]: State changed from down to starting
2016-08-11T15:12:08.706728+00:00 heroku[web.1]: Starting process with command `gunicorn --pythonpath blogpodapi blogpodapi.wsgi`
2016-08-11T15:12:11.131960+00:00 app[web.1]: [2016-08-11 15:12:11 +0000] [3] [INFO] Using worker: sync
2016-08-11T15:12:11.131854+00:00 app[web.1]: [2016-08-11 15:12:11 +0000] [3] [INFO] Listening at: http://0.0.0.0:4732 (3)
2016-08-11T15:12:11.131379+00:00 app[web.1]: [2016-08-11 15:12:11 +0000] [3] [INFO] Starting gunicorn 19.6.0
2016-08-11T15:12:11.220853+00:00 app[web.1]: [2016-08-11 15:12:11 +0000] [12] [INFO] Booting worker with pid: 12
2016-08-11T15:12:11.136374+00:00 app[web.1]: [2016-08-11 15:12:11 +0000] [9] [INFO] Booting worker with pid: 9
2016-08-11T15:12:12.474341+00:00 heroku[web.1]: State changed from starting to up
2016-08-11T15:12:13.158176+00:00 app[web.1]: Not Found: /
2016-08-11T15:12:13.178070+00:00 heroku[router]: at=info method=GET path="/" host=blogpod.herokuapp.com request_id=f1326563-c66c-4aa1-a75c-0ffb1a8796f1 fwd="81.140.173.71" dyno=web.1 connect=1ms service=273ms status=404 bytes=2462
</code></pre>
<p>My folder structure is as follows:</p>
<p><a href="http://i.stack.imgur.com/eziSn.png" rel="nofollow"><img src="http://i.stack.imgur.com/eziSn.png" alt="enter image description here"></a></p>
<p>I understand that I may need to post additional details on certain files. Please let me know which ones are of interest and I will update the question to include it.</p>
<p>My <strong>urls.py</strong> looks like this:</p>
<pre><code>from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
from django.views.generic.base import RedirectView
favicon_view = RedirectView.as_view(url='/static/favicon.ico', permanent=True)
urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^api/', include('api.urls')),
    url(r'^favicon\.ico$', favicon_view),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
 | 0 | 2016-08-11T15:40:49Z | 38,908,446 | <p>To define the root URL you have to add a pattern for <code>'^$'</code>:</p>
<pre><code>url(r'^$', view)
</code></pre>
<p>This pattern will then match your root URL, <a href="https://blogpod.herokuapp.com/" rel="nofollow">https://blogpod.herokuapp.com/</a>.</p>
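<p>Applied to the <code>urls.py</code> from the question, it might look as follows. Note that <code>home</code> is a hypothetical view for the landing page; substitute whatever view should serve <code>/</code>:</p>

```python
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
from myapp.views import home  # hypothetical landing-page view

urlpatterns = [
    url(r'^$', home),  # matches the bare root URL
    url(r'^admin/', admin.site.urls),
    url(r'^api/', include('api.urls')),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```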
| 1 | 2016-08-12T01:23:04Z | [
"python",
"django",
"heroku"
] |
How many iterations are needed to train TensorFlow with the entire MNIST data set (60,000 images)? | 38,900,885 | <p>The MNIST set consists of 60,000 images for the training set. While training my TensorFlow model, I want to run the train step to train the model with the entire training set. The deep learning example on the TensorFlow website uses 20,000 iterations with a batch size of 50 (totaling 1,000,000 training examples). When I try more than 30,000 iterations, my number predictions fail (the model predicts 0 for all handwritten numbers). My question is: how many iterations should I use with a batch size of 50 to train the TensorFlow model with the entire MNIST set?</p>
<pre><code>self.mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
for i in range(FLAGS.training_steps):
    batch = self.mnist.train.next_batch(50)
    self.train_step.run(feed_dict={self.x: batch[0], self.y_: batch[1], self.keep_prob: 0.5})
    if (i+1) % 1000 == 0:
        saver.save(self.sess, FLAGS.checkpoint_dir + 'model.ckpt', global_step = i)
</code></pre>
 | 0 | 2016-08-11T15:44:53Z | 38,905,101 | <p>I think that depends on your stopping criterion. You can stop training when the loss no longer improves, or you can hold out a validation data set and stop training when validation accuracy no longer improves.</p>
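<p>For reference, with a batch size of 50 one pass over the 60,000 training images is 1,200 iterations, so 20,000 iterations is roughly 16 to 17 epochs. Below is a minimal sketch of validation-based early stopping; the <code>train_step</code> and <code>evaluate</code> callables are placeholders standing in for the question's TensorFlow calls:</p>

```python
def train_with_early_stopping(train_step, evaluate,
                              max_steps=100000, eval_every=1000, patience=5):
    """Run train_step() repeatedly; stop once evaluate(), which should return
    validation accuracy, has not improved `patience` checks in a row."""
    best_acc = 0.0
    checks_without_improvement = 0
    for step in range(max_steps):
        train_step()
        if (step + 1) % eval_every == 0:   # evaluate periodically
            acc = evaluate()
            if acc > best_acc:
                best_acc = acc
                checks_without_improvement = 0
            else:
                checks_without_improvement += 1
                if checks_without_improvement >= patience:
                    break  # validation accuracy stopped improving
    return best_acc
```

<p>The point is the stopping rule rather than any particular step count: the loop decides for itself when further iterations stop helping.</p>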
| 0 | 2016-08-11T19:59:36Z | [
"python",
"tensorflow",
"mnist"
] |
How to include related resource with Django Rest Framework JSON API? | 38,900,889 | <p>I am using <a href="https://github.com/django-json-api/django-rest-framework-json-api" rel="nofollow">Django Rest Framework JSON API</a> to create a REST API. I am trying quite simply to include a related resource (2nd degree relation) but Django keeps responding with the error:</p>
<pre><code>This endpoint does not support the include parameter for path...
</code></pre>
<p>The structure is something like this:</p>
<pre><code># models:
class Household(models.Model):
    ...

class HouseholdMember(models.Model):
    household = models.ForeignKey(Household)
    ...

class Subscription(models.Model):
    subscriber = models.ForeignKey(HouseholdMember)
    ...

# serializers
from rest_framework_json_api import serializers

class SubscriptionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Subscription
</code></pre>
<p>I would like to be able to make a request like this: <code>http://example.com/api/subscriptions?include=subscriber.household</code> to be able to group subscriptions by household. However, I simply cannot find out how to do this. <a href="http://django-rest-framework-json-api.readthedocs.io/en/v2.0.1/usage.html#related-fields" rel="nofollow">I know</a> I need to play around with <code>ResourceRelatedField</code> but I'm missing something or too much of a newbie to understand how this works. Any help?</p>
| 0 | 2016-08-11T15:45:04Z | 38,934,973 | <p>Well, perhaps I was missing something obvious (because this wasn't mentioned in the documentation), but if you look at the <a href="https://github.com/django-json-api/django-rest-framework-json-api/blob/develop/example/serializers.py#L34" rel="nofollow"><code>serializers.py</code></a>file in the example directory of the source of Django Rest Framework JSON API, it looks like you need to have a variable called <code>included_serializers</code> to do what I wanted. For my example, here's what you would need:</p>
<pre><code># models:
class Household(models.Model):
    ...

class HouseholdMember(models.Model):
    household = models.ForeignKey(Household)
    ...

class Subscription(models.Model):
    subscriber = models.ForeignKey(HouseholdMember)
    ...

# serializers
from rest_framework_json_api import serializers

class HouseholdSerializer(serializers.ModelSerializer):
    class Meta:
        model = Household

class HouseholdMemberSerializer(serializers.ModelSerializer):
    included_serializers = {
        'household': HouseholdSerializer
    }
    class Meta:
        model = HouseholdMember

class SubscriptionSerializer(serializers.ModelSerializer):
    included_serializers = {
        'subscriber': HouseholdMemberSerializer
    }
    class Meta:
        model = Subscription
</code></pre>
| 0 | 2016-08-13T16:53:16Z | [
"python",
"django",
"django-rest-framework",
"json-api"
] |
How do I stack rows in a Pandas data frame to get one "long row"? | 38,900,981 | <p>Let's say I have a data frame with 4 rows, 3 columns. I'd like to stack the rows horizontally so that I get one row with 12 columns. How to do it and how to handle colliding column names?</p>
 | 2 | 2016-08-11T15:49:39Z | 38,902,076 | <p>You can achieve this by <code>stack</code>ing the frame to produce a series of all the values; we then convert this back to a df using <code>to_frame</code>, then <code>reset_index</code> to drop the index levels, and finally transpose using <code>.T</code>:</p>
<pre><code>In [2]:
df = pd.DataFrame(np.random.randn(4,3), columns=list('abc'))
df
Out[2]:
a b c
0 -1.744219 -2.475923 1.794151
1 0.952148 -0.783606 0.784224
2 0.386506 -0.242355 -0.799157
3 -0.547648 -0.139976 -0.717316
In [3]:
df.stack().to_frame().reset_index(drop=True).T
Out[3]:
0 1 2 3 4 5 6 \
0 -1.744219 -2.475923 1.794151 0.952148 -0.783606 0.784224 0.386506
7 8 9 10 11
0 -0.242355 -0.799157 -0.547648 -0.139976 -0.717316
</code></pre>
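<p>If you also want column names that record where each value came from (one way to deal with the colliding column names mentioned in the question), you can build them from the stacked MultiIndex. A sketch; the <code>column_row</code> naming scheme is just one possible choice:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(4, 3), columns=list('abc'))

stacked = df.stack()            # Series indexed by (row, column) pairs
wide = stacked.to_frame().T     # one row, 12 columns
# label each column "<column>_<row>" so the names stay unique
wide.columns = ['{}_{}'.format(col, row) for row, col in stacked.index]

print(wide.columns.tolist())
# ['a_0', 'b_0', 'c_0', 'a_1', 'b_1', 'c_1', 'a_2', 'b_2', 'c_2', 'a_3', 'b_3', 'c_3']
```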
| 1 | 2016-08-11T16:51:14Z | [
"python",
"pandas",
"dataframe"
] |
Combining multiple strings into one from text file | 38,900,985 | <p><a href="http://i.stack.imgur.com/22l2X.jpg" rel="nofollow">img4</a></p>
<p>I am using Python pandas/jupyter notebook, and I am having trouble getting the output of a text file to combine into one line. I imported a text file into Python, which I stored in variable AB. I then looped through the data within one of the columns in AB to retrieve the letters in that column, which I then saved in variable emailText.</p>
<p>My issue is that I cannot get the multiple lines of string that are stored in emailText to combine into one single-line string. I tried replacing line-breaks but that has not worked. Could anyone help me out? Screenshot attached</p>
<pre><code>for letter in AB["level_0"]:
emailText = letter
emailText = "".join(emailText.split())
emailText.replace("\n","")
print emailText
</code></pre>
| 0 | 2016-08-11T15:49:47Z | 38,901,634 | <pre><code>''.join( [ "".join(letter.split()) for letter in AB["level_0"] ] ).replace('\n','')
</code></pre>
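<p>The same logic written out step by step, shown here with a hypothetical stand-in list for the <code>AB["level_0"]</code> column since the original data isn't available:</p>

```python
# hypothetical stand-in for AB["level_0"]
level_0 = ['first\nline', 'second  line', 'third']

parts = []
for letter in level_0:
    # split() discards all whitespace, including newlines
    parts.append(''.join(letter.split()))

email_text = ''.join(parts)
print(email_text)
# firstlinesecondlinethird
```

<p>The key difference from the question's loop is that each cleaned piece is accumulated in a list and joined at the end, instead of <code>emailText</code> being overwritten on every iteration.</p>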
| 0 | 2016-08-11T16:23:11Z | [
"python"
] |
Read from list in content | 38,901,005 | <p>I'm very new to Python and I'm trying to do a basic API request using the requests library, but I'm having some trouble reading a list in the returned body.</p>
<p>The body of my response looks like this: </p>
<pre><code>{
    "files": [{
        "url": "http://someurl.json",
        "lastModified": 1470924180000
    }]
}
</code></pre>
<p>With my code I get the data contained in "files", but I can't figure out how to get the data contained in "url".<br>
My code: </p>
<pre><code>response = requests.get(url)
data = response.json()
print(data["files"])
</code></pre>
<p>This returns:</p>
<pre><code>[{'url': 'http://myurl.json', 'lastModified': 1470928985000}]
</code></pre>
<p>How can I store the url and lastModified in variables?</p>
| -1 | 2016-08-11T15:51:06Z | 38,901,035 | <p>Simply with:</p>
<pre><code>url = data['files'][0]['url']
last_modified = data['files'][0]['lastModified']
</code></pre>
<p>Your data is a dictionary that contains a <code>list</code> of dictionaries for the <code>"files"</code> key. To get the first entry of <code>files</code> you must index the list with <code>data['files'][0]</code>.</p>
<p>After that <code>data['files'][0]</code> is a dictionary which you can again access by key name as required, in this case <code>'url'</code> and <code>'lastModified'</code>.</p>
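<p>If <code>"files"</code> can contain more than one entry, loop over the list instead of indexing <code>[0]</code>. A small sketch using the body from the question:</p>

```python
data = {
    "files": [
        {"url": "http://someurl.json", "lastModified": 1470924180000},
    ]
}

# handle every entry, not just the first one
for f in data['files']:
    url = f['url']
    last_modified = f['lastModified']
    print(url, last_modified)
```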
| 1 | 2016-08-11T15:52:43Z | [
"python",
"python-3.x",
"python-requests"
] |
Python sqlite3 cannot rollback transaction due to 'SQL statements in progress' when the script is run from the Windows command line. | 38,901,055 | <p>I'm performing table merges on some sqlite3 databases in Python. When one of the merging functions spots an error due to a condition, it returns False. If False is returned, a rollback is performed. Something like this:</p>
<pre><code>con_out = sqlite3.connect('db', isolation_level=None)
out_cursor = con_out.cursor()

try:
    out_cursor.execute("begin")
    if not merge_table(in_cursor, out_cursor, verbose):
        logging.error("rolling back changes")
        out_cursor.execute("rollback")
        return False
except Exception as e:
    logging.error("rolling back changes: %s" % (str(e)))
    out_cursor.execute("rollback")
    return False
</code></pre>
<p>Now when this code is executed from PyCharm interpreter, and a False is returned, there is no problem. It rolls back normally as expected. However, when the script is run from Windows command line, it gives me this error:</p>
<pre><code>sqlite3.OperationalError: cannot rollback transaction - SQL statements in progress
</code></pre>
 | 0 | 2016-08-11T15:53:53Z | 38,915,923 | <p>Just in case anyone is wondering, I found the problem. Before I even set the <code>isolation_level</code> of the database to <code>None</code>, I was executing:</p>
<pre><code>out_cursor.execute("PRAGMA journal_mode = MEMORY")
db_con_out.commit()
</code></pre>
<p>For some reason this statement was not committing and was still in progress, blocking other transactions. When I wrapped it with the code below, it worked:</p>
<pre><code>db_con_out.isolation_level = None
db_cursor.execute("begin")
if not execute_sql(db_cursor, "PRAGMA journal_mode = MEMORY;"):
    db_cursor.execute("rollback")
    sys.exit()
else:
    db_cursor.execute("commit")
</code></pre>
<p>It commits just fine, judging from the debugger. I don't know the actual reason behind this, though.</p>
| 0 | 2016-08-12T10:37:15Z | [
"python",
"sqlite3"
] |
Create a DataFrame with a MultiIndex | 38,901,145 | <p>I would like to make my DataFrame like the one below and export it to Excel. I have all the data available for all the '-' that I have put. I want to know what data structure to pass to pd.DataFrame() to make a table like this.</p>
<p>I would also like to know how pandas reads these data structures to form a DataFrame.</p>
<p><a href="http://i.stack.imgur.com/eITUT.png" rel="nofollow"><img src="http://i.stack.imgur.com/eITUT.png" alt="enter image description here"></a></p>
| 1 | 2016-08-11T15:58:44Z | 38,901,309 | <pre><code>idx = pd.MultiIndex.from_product([['Zara', 'LV', 'Roots'],
['Orders', 'GMV', 'AOV']],
names=['Brand', 'Metric'])
col = ['Yesterday', 'Yesterday-1', 'Yesterday-7', 'Thirty day average']
df = pd.DataFrame('-', idx, col)
df
</code></pre>
<p><strong><em>Jupyter screen shot</em></strong></p>
<p><a href="http://i.stack.imgur.com/uunLd.png"><img src="http://i.stack.imgur.com/uunLd.png" alt="enter image description here"></a></p>
<pre><code>df.to_excel('test.xlsx')
</code></pre>
<p><strong><em>Mac Numbers screen shot</em></strong></p>
<p><a href="http://i.stack.imgur.com/67eVT.png"><img src="http://i.stack.imgur.com/67eVT.png" alt="enter image description here"></a></p>
| 6 | 2016-08-11T16:06:55Z | [
"python",
"pandas",
"multi-index"
] |
insert a variable into a URL | 38,901,302 | <p>I got a CSV file with numbers and I want to insert these numbers into a specific location in a URL: just after <code>"value":</code>.
Here is my code:</p>
<pre><code>with open('update_cases_id.csv') as p:
    for lines in p:
        uuid = lines.rstrip()
        url_POST = "www.example.com/"
        values = {}
        values['return_type'] = 'retrieval'
        values['format'] = 'TSV'
        values['size'] = '70'
        values['filters'] = '{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value": .format(uuid)}}]}'
        data = urllib.urlencode(values)
        url_final = url_POST + '?' + data
        req2 = urllib2.Request(url_final)
        req2.add_header('cookie', cookie)
        handle = urllib2.urlopen(req2)
</code></pre>
<p>( edited :
example input : 123456-123456-987654
example output : it s data text )</p>
 | 0 | 2016-08-11T16:06:22Z | 38,901,544 | <p>You can do this with string formatting; this should work for you:</p>
<pre><code># ...snip
values['filters'] = '{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value":%s}]}' % uuid
# snip...
</code></pre>
<p>The <code>%s</code> will be replaced by the uuid by the % replacement operator:</p>
<pre><code>>>> values = {}
>>> uuid = 1234
>>> values['filters'] = '{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value":%s}}]}' % uuid
>>> values
{'filters': '{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value":1234}}]}'}
</code></pre>
| 0 | 2016-08-11T16:18:34Z | [
"python",
"csv",
"parsing",
"urllib2"
] |
insert a variable into a URL | 38,901,302 | <p>I got a CSV file with numbers and I want to insert these numbers into a specific location in a URL: just after <code>"value":</code>.
Here is my code:</p>
<pre><code>with open('update_cases_id.csv') as p:
    for lines in p:
        uuid = lines.rstrip()
        url_POST = "www.example.com/"
        values = {}
        values['return_type'] = 'retrieval'
        values['format'] = 'TSV'
        values['size'] = '70'
        values['filters'] = '{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value": .format(uuid)}}]}'
        data = urllib.urlencode(values)
        url_final = url_POST + '?' + data
        req2 = urllib2.Request(url_final)
        req2.add_header('cookie', cookie)
        handle = urllib2.urlopen(req2)
</code></pre>
<p>( edited :
example input : 123456-123456-987654
example output : it s data text )</p>
| 0 | 2016-08-11T16:06:22Z | 38,902,043 | <p>Try to use <code>Template</code>.</p>
<pre><code>from string import Template
params = Template('{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value": ${your_value}}}]}')
params = params.safe_substitute(your_value=123)
# params is '{"op":"and","content":[{"op":"in","content":{"field":"cases.case_id","value": 123}}]}'
</code></pre>
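<p>Since the <code>filters</code> value is JSON, a more robust alternative (my suggestion, not from the original answers) is to build it as a Python dict and serialize it with <code>json.dumps</code>, which avoids brace-balancing and quoting mistakes entirely:</p>

```python
import json

uuid = '123456-123456-987654'
filters = {
    'op': 'and',
    'content': [
        {'op': 'in',
         'content': {'field': 'cases.case_id', 'value': uuid}},
    ],
}
# serialize once, then pass the string as the 'filters' query parameter
values = {'filters': json.dumps(filters)}
print(values['filters'])
```

<p>The resulting string can be URL-encoded and appended to the request exactly as in the question's code.</p>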
| 0 | 2016-08-11T16:49:11Z | [
"python",
"csv",
"parsing",
"urllib2"
] |
Pandas: using groupby to make a table | 38,901,307 | <p>I have dataframe</p>
<pre><code>i,Unnamed: 0,ID,active_seconds,subdomain,search_term,period,code,buy
0,56574,08cd0141663315ce71e0121e3cd8d91f,6,market.yandex.ru,None,515,100.0,1.0
1,56576,08cd0141663315ce71e0121e3cd8d91f,26,market.yandex.ru,None,515,100.0,1.0
2,56578,08cd0141663315ce71e0121e3cd8d91f,14,market.yandex.ru,None,515,100.0,1.0
3,56579,08cd0141663315ce71e0121e3cd8d91f,2,market.yandex.ru,None,515,100.0,1.0
4,56581,08cd0141663315ce71e0121e3cd8d91f,8,market.yandex.ru,None,515,100.0,1.0
5,56582,08cd0141663315ce71e0121e3cd8d91f,32,market.yandex.ru,None,515,100.0,1.0
6,56583,08cd0141663315ce71e0121e3cd8d91f,16,market.yandex.ru,None,515,100.0,1.0
7,56584,08cd0141663315ce71e0121e3cd8d91f,4,market.yandex.ru,None,515,100.0,1.0
8,56585,08cd0141663315ce71e0121e3cd8d91f,10,market.yandex.ru,None,515,100.0,1.0
9,56639,08cd0141663315ce71e0121e3cd8d91f,2,market.yandex.ru,None,516,100.0,1.0
</code></pre>
<p>I want to get a table with the sum of <code>active_seconds</code> and the number of distinct <code>period</code> values (each distinct number is one period). In this case the number of periods for this ID should be <code>2</code>.
I use</p>
<pre><code>df.groupby(['ID', 'buy']).agg({'period': len, 'active_seconds': sum}).rename(columns={'active_seconds': 'count_sec', 'period': 'sum_session'}).reset_index()
</code></pre>
<p>But it returns an incorrect value for the number of periods. How can I fix that?</p>
 | 3 | 2016-08-11T16:06:44Z | 38,901,484 | <p>Use <code>'nunique'</code> instead of <code>len</code>: <code>len</code> counts every row in the group, while <code>'nunique'</code> counts the distinct <code>period</code> values.</p>
<pre><code>df.groupby(['ID', 'buy']).agg({'period': 'nunique', 'active_seconds': sum}) \
    .rename(columns={'active_seconds': 'count_sec', 'period': 'sum_session'}).reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/OX7IZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/OX7IZ.png" alt="enter image description here"></a></p>
| 2 | 2016-08-11T16:15:46Z | [
"python",
"pandas"
] |
Convert an array directly to an image (each cell becomes one pixel) in python | 38,901,354 | <p>I want to convert a 64x64 cell array to a 64x64 pixel image. Using matplotlib and pylab I am ending up with images that are approximately 900x900 with the extra pixels being blended together. </p>
<pre><code>py.figure(1)
py.clf()
py.imshow( final_image , cmap='Greys_r' )
</code></pre>
<p>How do I convert from cell to pixel in a 1:1 ratio? (If you can't already tell, I'm fairly new to this).</p>
| 0 | 2016-08-11T16:09:02Z | 38,901,681 | <p>That is example of using PIL to create image 2x2. a is array of colors of size 4 (flat)</p>
<pre><code>from PIL import Image
a = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
# Create RGB image with size 2x2
img = Image.new("RGB", (2, 2))
# Fill the image with the pixel data
img.putdata(a)
# Save to the file
img.save('1.png')
</code></pre>
<p>You should adjust that to the format of your data if it is not flat, of course. That should be easy though. For example, this script flattens data from a two-dimensional list:</p>
<pre><code>a = [[[1, 2, 3], [2, 3, 4]], [[5, 6, 7], [8, 9, 10]]]
a = [tuple(color) for row in a for color in row]
print a
</code></pre>
<p>If you are dealing with numpy arrays rather than lists, you should use the function <code>fromarray</code> (in the following way):</p>
<pre><code># data is numpy array
img = Image.fromarray(data, 'RGB')
# Save to the file
img.save('1.png')
</code></pre>
<p>Please note that using numpy arrays is highly recommended, since they are just wrapped C arrays and thus way faster.</p>
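<p>If you want to stay in matplotlib instead, <code>plt.imsave</code> writes the array directly, one pixel per cell, while <code>imshow</code> renders through a figure (which is where the ~900x900 blended image comes from). A sketch, assuming <code>final_image</code> is a 64x64 numpy array:</p>

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt

final_image = np.random.rand(64, 64)  # stand-in for the 64x64 cell array
plt.imsave('cells.png', final_image, cmap='Greys_r')  # exactly 64x64 pixels
```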
| 0 | 2016-08-11T16:25:41Z | [
"python",
"data-representation"
] |
Flask view can't access model class even though it's defined in the same file | 38,901,397 | <p>I have a simple single file app to test using SQLAlchemy with Flask. The <code>getResult</code> view uses the <code>MainResult</code> model. However, navigating to <code>/getResult</code> raises <code>NameError: global name 'MainResult' is not defined</code>. That class is defined in the same file as the view, so why do I get that error?</p>
<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://root:@localhost/testDB'
db = SQLAlchemy(app)
@app.route('/getResult')
def getResult():
newEntry = MainResult(metadata_key='test')
db.session.add(newEntry)
db.session.commit()
return 'Hello World!'
if __name__ == '__main__':
app.run(debug=True)
class MainResult(db.Model):
metadata_key = db.Column(db.String(128), primary_key=True)
</code></pre>
| 0 | 2016-08-11T16:10:57Z | 38,902,217 | <p>Python executes files sequentially. <code>app.run</code> runs forever. So anything after <code>app.run</code> in the file will not have been executed and won't be until the server stops. <code>MainResult</code> isn't defined until after the app is run, so the view can't see it. Move it above the <code>__main__</code> block.</p>
<pre><code>class MainResult(db.Model):
...
# should be the last thing in the file
if __name__ == '__main__':
app.run()
</code></pre>
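<p>The name lookup itself is plain Python, not Flask-specific: a function body resolves global names at call time, so the view only fails because <code>app.run()</code> blocks before the <code>class</code> statement ever executes. A hypothetical sketch of the same effect, with no Flask involved:</p>

```python
def view():
    return MainResult()  # MainResult is looked up when view() is *called*

failed_before = False
try:
    view()  # called before the class statement has run
except NameError:
    failed_before = True  # same error the Flask view raises

class MainResult:
    pass

works_after = isinstance(view(), MainResult)  # fine once the class exists
print(failed_before, works_after)
```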
<p>Preferably, use the new <code>flask</code> command to run the dev server instead. The <code>__main__</code> block is no longer required in this case.</p>
<pre><code>FLASK_APP=my_file.py flask run
</code></pre>
| 0 | 2016-08-11T17:00:29Z | [
"python",
"flask"
] |
How do I migrate changes in Django when I'm uninheriting a model? | 38,901,428 | <p>If I have two models and one inherits from another and setup the database with migrate etc. like so:</p>
<pre><code>class TemplateProduct(models.Model):
type = models.CharField(max_length=20)
class Product(TemplateProduct):
name = models.CharField(max_length=80)
</code></pre>
<p>Then how would I migrate the db to make it so that Product does not inherit from TemplateProduct? Say I just want these models instead:</p>
<pre><code>class TemplateProduct(models.Model):
type = models.CharField(max_length=20)
class Product(models.Model):
name = models.CharField(max_length=80)
</code></pre>
<p>When I try to migrate this, I get the following error:</p>
<pre><code>django.db.utils.ProgrammingError: column "templateproduct_ptr_id" of relation "product_product" does not exist
</code></pre>
<p>And then when I remove the deletion of "templateproduct_ptr_id" from the migration, I get the following error:</p>
<pre><code>django.core.exceptions.FieldError: Local field u'id' in class 'Product' clashes with field of similar name from base class 'TemplateProduct'
</code></pre>
<p>As the title says: how do I migrate changes in Django when I'm uninheriting a model?</p>
| 0 | 2016-08-11T16:12:50Z | 38,951,708 | <p>So my solution was to delete both models. Then <code>python manage.py makemigrations --merge</code>, then I added the models back the way I wanted them, finally running <code>python manage.py makemigrations --merge</code> again to add the models back in. </p>
<p>This may not be the most elegant solution but it worked.</p>
| 0 | 2016-08-15T08:26:20Z | [
"python",
"django",
"django-models"
] |
Pandas: using groupby with condition | 38,901,472 | <p>I have df</p>
<pre><code>i,Unnamed: 0,ID,active_seconds,subdomain,search_term,period,code,buy
0,56574,08cd0141663315ce71e0121e3cd8d91f,6,market.yandex.ru,None,515,100.0,1.0
1,56576,08cd0141663315ce71e0121e3cd8d91f,26,market.yandex.ru,None,515,100.0,1.0
2,56578,08cd0141663315ce71e0121e3cd8d91f,14,market.yandex.ru,None,515,100.0,1.0
3,56579,08cd0141663315ce71e0121e3cd8d91f,2,market.yandex.ru,None,515,100.0,1.0
4,56581,08cd0141663315ce71e0121e3cd8d91f,8,market.yandex.ru,None,515,100.0,1.0
5,56582,08cd0141663315ce71e0121e3cd8d91f,32,market.yandex.ru,None,515,100.0,1.0
6,56583,08cd0141663315ce71e0121e3cd8d91f,16,market.yandex.ru,None,515,100.0,1.0
7,56584,7602962fb83ac2e2a0cb44158ca88464,4,market.yandex.ru,None,515,100.0,2.0
8,56585,7602962fb83ac2e2a0cb44158ca88464,10,market.yandex.ru,None,515,100.0,2.0
9,56639,7602962fb83ac2e2a0cb44158ca88464,2,market.yandex.ru,None,516,100.0,2.0
</code></pre>
<p>I need to compute the sum of <code>active_seconds</code> for every ID,</p>
<pre><code>df.groupby(['ID', 'buy']).agg({'active_seconds': sum}).rename(columns={'active_seconds': 'count_sec'}).reset_index()
</code></pre>
<p>But I only need to do that if <code>buy == 2 or buy == 3</code>; if <code>buy == 1</code>, I need to print the date from this df.</p>
<pre><code>ID date buy
7602962fb83ac2e2a0cb44158ca88464 01.01.2016 1
bc8a731e4c7e6f6b96e56ebe7f766bcd 10.02.2016 1
a703114aa8a03495c3e042647212fa63 20.02.2016 2
</code></pre>
<p>How can I do that?</p>
| 0 | 2016-08-11T16:15:23Z | 38,906,511 | <p>If I understand your question correctly, you want to join with a different data frame when buy == 1. Assuming the first data frame is named df and the second data frame that contains dates is named df2 then this is my proposed solution:</p>
<pre><code>(df.groupby(['ID', 'buy'])
   .agg({'active_seconds': sum})
   .rename(columns={'active_seconds': 'count_sec'})
   .reset_index()
   .merge(df2, how='left', on=['ID', 'buy'])
   .apply(lambda x: x['date'] if x['buy'] == 1 else x['count_sec'], axis=1))
</code></pre>
| 1 | 2016-08-11T21:35:12Z | [
"python",
"pandas"
] |
Pandas swap columns based on condition | 38,901,563 | <p>I have a pandas dataframe like the following:</p>
<pre><code> Col1 Col2 Col3
0 A 7 NaN
1 B 16 NaN
1 B 16 15
</code></pre>
<p>What I want to do is to swap Col2 with Col3 where the value of Col3 is <code>NaN</code>. Based on other posts and answers on SO, I have this code so far:</p>
<pre><code>df[['Col2', 'Col3']] = df[['Col3', 'Col2']].where(df[['Col3']].isnull())
</code></pre>
<p>But this does not seem to be working properly and gives me the following:</p>
<pre><code> Col1 Col2 Col3
0 A NaN NaN
1 B NaN NaN
1 B NaN NaN
</code></pre>
<p>Is there something that I might be missing here?</p>
<p><strong>Update:</strong> My desired output looks like:</p>
<pre><code> Col1 Col2 Col3
0 A NaN 7
1 B NaN 16
1 B 16 15
</code></pre>
<p>Thanks</p>
| 3 | 2016-08-11T16:19:23Z | 38,901,797 | <p>You can use <code>loc</code> to do the swap:</p>
<pre><code>df.loc[df['Col3'].isnull(), ['Col2', 'Col3']] = df.loc[df['Col3'].isnull(), ['Col3', 'Col2']].values
</code></pre>
<p>Note that <code>.values</code> is required to make sure the swap is done properly, otherwise Pandas would try to align based on index and column names, and no swap would occur.</p>
<p>You can also just reassign each row individually, if you feel the code is cleaner:</p>
<pre><code>null_idx = df['Col3'].isnull()
df.loc[null_idx, 'Col3'] = df['Col2']
df.loc[null_idx, 'Col2'] = np.nan
</code></pre>
<p>The resulting output:</p>
<pre><code> Col1 Col2 Col3
0 A NaN 7.0
1 B NaN 16.0
2 B 16.0 15.0
</code></pre>
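<p>A self-contained run of the <code>loc</code> version (frame rebuilt from the question, with float columns so assigning <code>NaN</code> does not force a dtype upcast):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['A', 'B', 'B'],
                   'Col2': [7.0, 16.0, 16.0],
                   'Col3': [np.nan, np.nan, 15.0]})

mask = df['Col3'].isnull()
# .values strips the column labels so pandas does a positional swap,
# instead of aligning Col2 to Col2 and Col3 to Col3 (which would be a no-op)
df.loc[mask, ['Col2', 'Col3']] = df.loc[mask, ['Col3', 'Col2']].values

print(df)
```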
| 5 | 2016-08-11T16:32:40Z | [
"python",
"pandas",
"swap"
] |
Pandas swap columns based on condition | 38,901,563 | <p>I have a pandas dataframe like the following:</p>
<pre><code> Col1 Col2 Col3
0 A 7 NaN
1 B 16 NaN
1 B 16 15
</code></pre>
<p>What I want to do is to swap Col2 with Col3 where the value of Col3 is <code>NaN</code>. Based on other posts and answers on SO, I have this code so far:</p>
<pre><code>df[['Col2', 'Col3']] = df[['Col3', 'Col2']].where(df[['Col3']].isnull())
</code></pre>
<p>But this does not seem to be working properly and gives me the following:</p>
<pre><code> Col1 Col2 Col3
0 A NaN NaN
1 B NaN NaN
1 B NaN NaN
</code></pre>
<p>Is there something that I might be missing here?</p>
<p><strong>Update:</strong> My desired output looks like:</p>
<pre><code> Col1 Col2 Col3
0 A NaN 7
1 B NaN 16
1 B 16 15
</code></pre>
<p>Thanks</p>
| 3 | 2016-08-11T16:19:23Z | 38,903,431 | <p>Try this: (its faster)</p>
<pre><code>df["Col3"], df["Col2"] = np.where(df['Col3'].isnull(), [df["Col2"], df["Col3"]], [df["Col3"], df["Col2"] ])
df
Col1 Col2 Col3
0 A NaN 7.0
1 B NaN 16.0
1 B 16.0 15.0
%timeit df.loc[df['Col3'].isnull(), ['Col2', 'Col3']] = df.loc[df['Col3'].isnull(), ['Col3', 'Col2']].values
100 loops, best of 3: 2.68 ms per loop
%timeit df["Col3"], df["Col2"] = np.where(df['Col3'].isnull(), [df["Col2"], df["Col3"]], [df["Col3"], df["Col2"] ])
1000 loops, best of 3: 592 µs per loop
</code></pre>
| 1 | 2016-08-11T18:14:01Z | [
"python",
"pandas",
"swap"
] |
Python: < not supported between str and int | 38,901,601 | <p>I'm learning Python through <em>Automate the Boring Stuff</em> and I'm running into a something I don't quite understand.</p>
<p>I'm trying to create a simple for loop that prints the elements of a list in this format: <code>W, X, Y, and Z</code>.</p>
<p>My code looks like the following: </p>
<pre><code>spam = ['apples', 'bananas', 'tofu', 'cats']
def printSpam(item):
for i in item:
if i < len(item)-1:
print (','.join(str(item[i])))
else:
print ("and ".join(str(item[len(item)-1])))
return
printSpam(spam)
</code></pre>
<p>I get this error in response: </p>
<pre><code>Traceback (most recent call last):
File "CH4_ListFunction.py", line 11, in <module>
printSpam(spam)
File "CH4_ListFunction.py", line 5, in printSpam
if i < len(item)-1:
TypeError: '<' not supported between instances of 'str' and 'int'
</code></pre>
<p>Any help is appreciated. Thanks for helping a newbie.</p>
| 0 | 2016-08-11T16:21:33Z | 38,902,006 | <p>Ah, but <code>for i in array</code> iterates over each <em>element</em>, so <code>if i < len(item)-1:</code> is comparing a <em>string</em> (the array element <code>item</code>) and an integer (<code>len(item)-1:</code>).</p>
<p>So, the problem is you misunderstood <a href="https://docs.python.org/3/tutorial/controlflow.html#for-statements" rel="nofollow">how <code>for</code> works</a> in Python.</p>
<p>The quick fix?</p>
<p>You can replace your <code>for</code> with <code>for i in range(len(array))</code>, as <code>range</code> works like this:</p>
<pre><code>>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>Thus obtaining:</p>
<pre><code>spam = ['apples', 'bananas', 'tofu', 'cats']
def printSpam(item):
for i in range(len(item)):
if i < len(item)-1:
print (','.join(str(item[i])))
else:
print ("and ".join(str(item[len(item)-1])))
return
printSpam(spam)
</code></pre>
<p>The output <em>probably</em> won't be what you expect, though, as <code>'c'.join(array)</code> uses 'c' as "glue" between the various elements of the array - and what is a string, if not an array of chars?</p>
<pre><code>>>> ','.join("bananas")
'b,a,n,a,n,a,s'
</code></pre>
<p>Thus, the output will be:</p>
<pre><code>a,p,p,l,e,s
b,a,n,a,n,a,s
t,o,f,u
cand aand tand s
</code></pre>
<p>We can do better anyway.</p>
<p>Python supports so-called <a href="http://stackoverflow.com/questions/509211/explain-pythons-slice-notation">slice notation</a> and negative indexes (that start at the end of the array).</p>
<p>Since</p>
<pre><code>>>> spam[0:-1]
['apples', 'bananas', 'tofu']
>>> spam[-1]
'cats'
</code></pre>
<p>We have that </p>
<pre><code>>>> ", ".join(spam[0:-1])
'apples, bananas, tofu'
</code></pre>
<p>And</p>
<pre><code>>>> ", ".join(spam[0:-1]) + " and " + spam[-1]
'apples, bananas, tofu and cats'
</code></pre>
<p>Therefore, you can write your function as simply</p>
<pre><code>def printSpam(item):
print ", ".join(item[0:-1]) + " and " + item[-1]
</code></pre>
<p>That's it.
It works.</p>
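<p>Running it on the question's data (written with <code>print()</code> as a call, so the same code also works on Python 3):</p>

```python
spam = ['apples', 'bananas', 'tofu', 'cats']

def print_spam(item):
    # join everything but the last element, then glue the last one on with "and"
    print(", ".join(item[:-1]) + " and " + item[-1])

print_spam(spam)  # apples, bananas, tofu and cats
```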
<p>P.S.: One last thing about Python and array notation:</p>
<pre><code>>>> "Python"[::-1]
'nohtyP'
</code></pre>
| 1 | 2016-08-11T16:46:25Z | [
"python",
"syntax"
] |
Saving model form not working in Python and Django | 38,901,603 | <p>I have the following models:</p>
<pre><code>class Equipment(models.Model):
asset_number = models.CharField(max_length = 200)
serial_number = models.CharField(max_length = 200)
description = models.TextField(max_length = 200)
def __str__(self):
return self.description
</code></pre>
<p>And Views:</p>
<pre><code>from django.shortcuts import render
from .models import EquipmentForm
from django.utils import timezone
from django.views import generic
from django.urls import reverse
from .models import Equipment
from django.http import HttpResponse, HttpResponseRedirect
def default (request):
return render(request, 'calbase/default.html')
def default_new (request):
if request.method == "POST":
form = EquipmentForm()
if form.is_valid():
post = form.save(commit=False)
post.author = request.user
post.published_date = timezone.now()
post.save()
return HttpResponseRedirect(reverse('calbase:default_detail', args=(pk)))
else:
form = EquipmentForm()
return render(request, 'calbase/default_edit.html', {'form':form})
class default_detail (generic.DetailView):
model = Equipment
template_name = 'calbase/default_detail.html'
</code></pre>
<p>And templates:</p>
<p>default.html:</p>
<pre><code>{% load staticfiles %}
<html>
<head>
<title>Equipment calibration database</title>
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap-theme.min.css">
<link href='//fonts.googleapis.com/css?family=Lobster&subset=latin,latin-ext' rel='stylesheet' type='text/css'>
</head>
<body>
<div class="page-header">
<a href="{% url 'default_new' %}" class="top-menu"><span class="glyphicon glyphicon-plus"></span></a>
<h1><a href="/">Equipment calibration database</a></h1>
</div>
<div class="content container">
<div class="row">
<div class="col-md-8">
{% block content %}
{% endblock %}
</div>
</div>
</div>
</body>
</html>
</code></pre>
<p>default_edit.html:</p>
<pre><code>{% extends 'calbase/default.html' %}
{% block content %}
<h1>New post</h1>
<form method="POST" class="post-form">{% csrf_token %}
{{ form.as_p }}
<button type="submit" class="save btn btn-default">Save</button>
</form>
{% endblock %}
</code></pre>
<p>default_detail.html:</p>
<pre><code><h1>{{ equipment.equipment.serial_number }}</h1>
{% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
<form action="{% url 'calbase:default_new' pk %}" method="post">
{% csrf_token %}
<input type="submit" value="Default_new" />
</form>
</code></pre>
<p>What I would like to do here is simply post user input to database by a ModelForm. But after inputting those two parameters for a new post and click save, nothing happens and there is just no Equipment created. In theory it should record this and jump to the detail page for this piece of equipment. Where did I go wrong?</p>
| 1 | 2016-08-11T16:21:36Z | 38,957,147 | <p>You have not passed the request data to the form.</p>
<pre><code>if request.method == "POST":
#form = EquipmentForm()
form = EquipmentForm(request.POST)
</code></pre>
| 0 | 2016-08-15T14:33:46Z | [
"python",
"django"
] |
Pandas add column to df based on values in a second df | 38,901,643 | <p>I have two separate dataframes <code>df1</code> and <code>df2</code>, both dataframes contain an <code>id</code> column which links rows between them. <code>df2</code> has a <code>group</code> column that <code>df1</code> does not contain. What I would like to do is go through each <code>id</code> in <code>df1</code> and check to see if it is in <code>df2</code> then if it is to take the <code>group</code> column value and put it in <code>df1</code> under a new column of the same name. Would it be easiest to write a function to loop through or is there a pandas trick I could utilize here?</p>
| 2 | 2016-08-11T16:23:32Z | 38,901,742 | <pre><code>df1 = pd.DataFrame([[1, 'a'],
[2, 'b'],
[3, 'c']], columns=['id', 'attr'])
df2 = pd.DataFrame([[2, 'd'],
[3, 'e'],
[4, 'f']], columns=['id', 'group'])
df1.merge(df2, how='left')
</code></pre>
<p><a href="http://i.stack.imgur.com/NL5TG.png" rel="nofollow"><img src="http://i.stack.imgur.com/NL5TG.png" alt="enter image description here"></a></p>
| 4 | 2016-08-11T16:29:09Z | [
"python",
"pandas"
] |
Pandas add column to df based on values in a second df | 38,901,643 | <p>I have two separate dataframes <code>df1</code> and <code>df2</code>, both dataframes contain an <code>id</code> column which links rows between them. <code>df2</code> has a <code>group</code> column that <code>df1</code> does not contain. What I would like to do is go through each <code>id</code> in <code>df1</code> and check to see if it is in <code>df2</code> then if it is to take the <code>group</code> column value and put it in <code>df1</code> under a new column of the same name. Would it be easiest to write a function to loop through or is there a pandas trick I could utilize here?</p>
| 2 | 2016-08-11T16:23:32Z | 38,901,806 | <p>You can merge the two dataframes in one by joining them on the id column and then keep only the columns that you need:</p>
<pre><code>df1 = pd.merge(df1, df2, how='left', on='id')
df1 = df1.drop('unwanted_column', axis=1)
</code></pre>
| 2 | 2016-08-11T16:33:18Z | [
"python",
"pandas"
] |
Python script can't find path | 38,901,653 | <p>When I execute the following line in my script:</p>
<pre><code>if os.path.exists('/home/jsc0606/Desktop/project/myfile.py')
</code></pre>
<p>I get <code>False</code>. However, when I execute the same line in the terminal in the same directory, I get <code>True</code>. Does anyone know why Python can't find the file when executing that line in the script?</p>
| 0 | 2016-08-11T16:24:14Z | 38,901,801 | <p>As stated in <a href="https://docs.python.org/2/library/os.path.html" rel="nofollow">the docs</a> about <code>os.path.exists</code>:</p>
<blockquote>
<p>On some platforms, this function may return <code>False</code> if <em>permission is not granted</em> to execute <code>os.stat()</code> on the requested file, even if the path physically exists.</p>
</blockquote>
<p>I think this may be the case: you're probably running the script with different privileges. </p>
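<p>One hedged way to narrow it down from inside the script is to surface the information that <code>os.path.exists</code> hides behind a single bool (the path below is the one from the question, purely illustrative):</p>

```python
import os

def diagnose(path):
    """Show why os.path.exists might return False for a path."""
    info = {
        'repr': repr(path),  # reveals stray whitespace or typos in the path
        'dir_searchable': os.access(os.path.dirname(path) or '.', os.X_OK),
    }
    try:
        os.stat(path)
        info['stat'] = 'ok'
    except OSError as e:
        info['stat'] = str(e)  # distinguishes "permission denied" from "no such file"
    return info

print(diagnose('/home/jsc0606/Desktop/project/myfile.py'))
```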
| 0 | 2016-08-11T16:33:09Z | [
"python"
] |
ReLU Function In A Recurrent Neural Network. Weights Become Infinity or Zero | 38,901,684 | <p>I am new to machine learning. I've read that the ReLU function is better than a sigmoid function for a recurrent neural network because of the vanishing gradient problem.</p>
<p>I'm trying to implement a very basic recurrent neural network with 3 input nodes, 10 hidden nodes and 3 output nodes.</p>
<p>There is the ReLU function at both input and hidden nodes and the softmax function for the output nodes.</p>
<p>However, when I'm using the ReLU function, after a few epochs (fewer than 10) the error either goes to 0 or to infinity, depending on whether the weight changes are added to or subtracted from the original weights.</p>
<pre><code>weight = weight + gradient_descent  # weights hit infinity
weight = weight - gradient_descent  # weights become 0
</code></pre>
<p>And also because it hits infinity it gives the following error,</p>
<pre><code>RuntimeWarning: invalid value encountered in maximum
return np.maximum(x, 0)
</code></pre>
<p>However when I implement the sigmoid function the error nicely comes down. But because this is a simple example that is fine but if I use it on a bigger problem I am afraid I will hit with the vanishing gradient problem.</p>
<p>Is this caused by the small number of hidden nodes, and how can I solve this issue? If you need the code sample please comment; I'm not posting the code because it's too long.</p>
<p>Thank you.</p>
| 0 | 2016-08-11T16:25:51Z | 38,907,274 | <p>I do not think the number of hidden nodes is the problem. </p>
<p>In the first case the weights are approaching infinity since the gradient descent update is wrong. The gradient of the loss with respect to a weight represents the direction in which you should update the weight in order to increase the loss. Since one (usually) wants to minimize it, if the weights are updated in the positive direction, they will increase the loss and very likely leading to divergence. </p>
<p>Despite of that, even if assuming the update is right, I would see it more as wrong initialization/hyperparameter setting rather than a strictly ReLU dependent problem (obviously ReLU explodes in its positive part, giving infinity, while sigmoid saturates, giving 1).</p>
<p>In the second case instead, what is happening is the <em>dead ReLU</em> problem, a saturated ReLU that gives always the same (zero) output and it is not able to recover itself. It could happen for many reasons (i.e. bad initialization, wrong bias learning) but the most probable is a too high update step. Try to decrease your learning rate and see what happens. </p>
<p>In case this does not solve the problem, consider using the <em>Leaky ReLU</em> version, even if just for debugging purposes. </p>
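<p>A minimal numpy sketch of the two activations (illustrative only, not the poster's network code) shows why the leaky variant can recover: negative inputs keep a small nonzero output and gradient instead of being clamped to zero:</p>

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):
    # alpha is the small slope used for x < 0; 0.01 is a common default
    return np.where(x > 0.0, x, alpha * x)

x = np.array([-5.0, -1.0, 0.0, 2.0])
print(relu(x))        # negative units are "dead": output (and gradient) is 0
print(leaky_relu(x))  # a small signal survives for negative inputs
```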
<p>More (and better explained) details about the <em>Leaky ReLU</em> and the <em>dead ReLU</em> can be found here: <a href="http://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks">http://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks</a></p>
| 1 | 2016-08-11T22:51:48Z | [
"python",
"machine-learning",
"neural-network",
"artificial-intelligence"
] |
What are the options for changing the behavior of a Django/python module? | 38,901,834 | <p>I want to change the behavior of a class method without re-implementing large portions of the module. What are my options for doing this?</p>
<p>Iâm using the wagtail_blog module in a current project. Separate models are created for Blog and BlogIndexPage. The BlogIndexPage returns a list of all related Blog pages. I want to introduce a method to limit the returned pages by Category, and this is where my question comes in-âwhat are the best options for doing this?</p>
<ul>
<li><p>I could create a new app (MyBlog), inherit from the original classes,
and override get_context(). The problem with this is that I would<br>
also need to re-implement other parts of the code as object names
would now be different.</p></li>
<li><p>I could fork the original repo, add in my changes, and use this in
my project. Again, lots of work for a small change.</p></li>
<li><p>Something more elegant that you clever people can suggest.</p></li>
</ul>
<p>How do others normally handle this situation?</p>
<p>For completeness, here's the code I'm working on:</p>
<pre><code>def get_blog_context(context):
""" Get context data useful on all blog related pages """
context['authors'] = get_user_model().objects.filter(
owned_pages__live=True,
owned_pages__content_type__model='blogpage'
).annotate(Count('owned_pages')).order_by('-owned_pages__count')
context['all_categories'] = BlogCategory.objects.all()
context['root_categories'] = BlogCategory.objects.filter(
parent=None,
).prefetch_related(
'children',
).annotate(
blog_count=Count('blogpage'),
)
return context
class BlogIndexPage(Page):
@property
def blogs(self):
# Get list of blog pages that are descendants of this page
blogs = BlogPage.objects.descendant_of(self).live()
blogs = blogs.order_by(
'-date'
).select_related('owner').prefetch_related(
'tagged_items__tag',
'categories',
'categories__category',
)
return blogs
def get_context(self, request, tag=None, category=None, author=None, *args,
**kwargs):
context = super(BlogIndexPage, self).get_context(
request, *args, **kwargs)
blogs = self.blogs
if tag is None:
tag = request.GET.get('tag')
if tag:
blogs = blogs.filter(tags__slug=tag)
if category is None: # Not coming from category_view in views.py
if request.GET.get('category'):
category = get_object_or_404(
BlogCategory, slug=request.GET.get('category'))
if category:
if not request.GET.get('category'):
category = get_object_or_404(BlogCategory, slug=category)
blogs = blogs.filter(categories__category__name=category)
if author:
if isinstance(author, str) and not author.isdigit():
blogs = blogs.filter(author__username=author)
else:
blogs = blogs.filter(author_id=author)
# Pagination
page = request.GET.get('page')
page_size = 10
if hasattr(settings, 'BLOG_PAGINATION_PER_PAGE'):
page_size = settings.BLOG_PAGINATION_PER_PAGE
if page_size is not None:
paginator = Paginator(blogs, page_size) # Show 10 blogs per page
try:
blogs = paginator.page(page)
except PageNotAnInteger:
blogs = paginator.page(1)
except EmptyPage:
blogs = paginator.page(paginator.num_pages)
context['blogs'] = blogs
context['category'] = category
context['tag'] = tag
context['author'] = author
context['COMMENTS_APP'] = COMMENTS_APP
context = get_blog_context(context)
return context
class Meta:
verbose_name = _('Blog index')
subpage_types = ['blog.BlogPage']
</code></pre>
| 0 | 2016-08-11T16:35:09Z | 38,906,453 | <p>I'm not familiar with the <code>wagtail</code> CMS, but I guess you can do something like:</p>
<pre><code>class MyBlogIndexPage(BlogIndexPage):
def get_context(self, *args, **kwargs):
context = super(MyBlogIndexPage, self).get_context(*args, **kwargs)
#do context changes
return context
</code></pre>
<p>And now override the the <code>view/views</code> which use the old <code>BlogIndexPage</code>. <em>(Again I'm not familiar with the wagtail system, just saw <a href="https://github.com/thelabnyc/wagtail_blog/blob/f84b2f3edd1110037b9a9b74136ff199b5d84329/blog/views.py#L8" rel="nofollow">how</a> the <code>BlogIndexPage</code> is being used.)</em></p>
<pre><code>def my_tag_view(request, tag):
index = MyBlogIndexPage.objects.first()
return index.serve(request, tag=tag)
</code></pre>
<p>Finally override the <code>url</code> to use your <code>view</code>.</p>
<pre><code>url(r'^tag/(?P<tag>[-\w]+)/', views.my_tag_view, name="tag"),
</code></pre>
| 0 | 2016-08-11T21:29:48Z | [
"python",
"django",
"django-models",
"wagtail"
] |
basic groupby operations in Dask | 38,901,845 | <p>I am attempting to use Dask to handle a large file (50 gb). Typically, I would load it in memory and use Pandas. I want to groupby two columns "A", and "B", and whenever column "C" starts with a value, I want to repeat that value in that column for that particular group.</p>
<p>In pandas, I would do the following:</p>
<pre><code>df['C'] = df.groupby(['A','B'])['C'].fillna(method = 'ffill')
</code></pre>
<p>What would be the equivalent in Dask?
Also, I am a little bit lost as to how to structure problems in Dask as opposed to in Pandas,</p>
<p>thank you,</p>
<p>My progress so far:</p>
<p>First set index:</p>
<pre><code>df1 = df.set_index(['A','B'])
</code></pre>
<p>Then groupby:</p>
<pre><code>df1.groupby(['A','B']).apply(lambda x: x.fillna(method='ffill')).compute()
</code></pre>
| 1 | 2016-08-11T16:35:53Z | 39,048,926 | <p>It appears dask does not currently implement the <code>fillna</code> method for <code>GroupBy</code> objects. I've tried PRing it some time ago and gave up quite quickly.</p>
<p>Also, dask doesn't support the <code>method</code> parameter (as it isn't always trivial to implement with delayed algorithms).</p>
<p>A workaround for this could be using <code>fillna</code> before grouping, like so:</p>
<p><code>df['C'] = df.fillna(0).groupby(['A','B'])['C']</code></p>
<p>Although this wasn't tested.</p>
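<p>For reference, the pandas semantics being reproduced (forward-fill within each (A, B) group only), on a small assumed frame:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['x', 'x', 'x', 'y'],
                   'B': [1, 1, 1, 2],
                   'C': [10.0, np.nan, np.nan, np.nan]})

# fill forward, but never across group boundaries
df['C'] = df.groupby(['A', 'B'])['C'].ffill()
print(df)  # group (x, 1) is filled; group (y, 2) has nothing to fill from
```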
<p>You can find my (failed) attempt here: <a href="https://github.com/nirizr/dask/tree/groupy_fillna" rel="nofollow">https://github.com/nirizr/dask/tree/groupy_fillna</a></p>
| 1 | 2016-08-19T23:24:37Z | [
"python",
"pandas",
"dask"
] |
Matplotlib with Google App Engine local development server | 38,901,858 | <p>I want to use matplotlib in my Google App Engine project. I followed the steps, described <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27" rel="nofollow">here</a> in the official docs. What I did:</p>
<p>1) Created a directory named lib in my application root directory.</p>
<p>2) Created a file appengine_config.py in my application root directory and added there these lines:</p>
<pre><code>from google.appengine.ext import vendor
vendor.add('lib')
</code></pre>
<p>3) Since the docs say, that the only version of matplotlib working is 1.2.0, I executed the following command in the Terminal:</p>
<pre><code>pip install -t lib matplotlib==1.2.0
</code></pre>
<p>There is also step 0 in the docs, which says </p>
<blockquote>
<p>Use pip to install the library and the vendor module to enable importing packages from the third-party library directory.</p>
</blockquote>
<p>But I don't understand what it actually means. If this is something essential, please explain what is meant here. I found <a href="http://stackoverflow.com/questions/34662595/google-app-engine-how-to-add-lib-folder/34668362">this</a> answer here on stackoverflow, and it seems there is nothing different from what I have done. </p>
<p>Also, I added </p>
<pre><code>libraries:
- name: matplotlib
version: "1.2.0"
</code></pre>
<p>to app.yaml.</p>
<p>So, after all these steps I add the line</p>
<pre><code>import matplotlib
</code></pre>
<p>to main.py and start a local server with</p>
<pre><code>python ~/path/google_appengine/dev_appserver.py app.yaml
</code></pre>
<p>But when I try to access <a href="http://localhost:8080/" rel="nofollow">http://localhost:8080/</a>, the error is raised:</p>
<pre><code>raise ImportError('No module named %s' % fullname)
ImportError: No module named _ctypes
</code></pre>
<p>The whole output, if needed, looks like this:</p>
<pre><code>ERROR 2016-08-11 16:26:51,621 wsgi.py:263]
Traceback (most recent call last):
File "/home/magnitofon/Загрузки/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/magnitofon/Загрузки/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/magnitofon/Загрузки/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/magnitofon/realec-inspector/main.py", line 20, in <module>
import matplotlib
File "/home/magnitofon/realec-inspector/lib/matplotlib/__init__.py", line 151, in <module>
from matplotlib.rcsetup import (defaultParams,
File "/home/magnitofon/realec-inspector/lib/matplotlib/rcsetup.py", line 20, in <module>
from matplotlib.colors import is_color_like
File "/home/magnitofon/realec-inspector/lib/matplotlib/colors.py", line 52, in <module>
import numpy as np
File "/home/magnitofon/Загрузки/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 705, in load_module
module = self._find_and_load_module(fullname, fullname, [module_path])
  File "/home/magnitofon/Загрузки/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 446, in _find_and_load_module
return imp.load_module(fullname, source_file, path_name, description)
File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in <module>
from . import add_newdocs
File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 22, in <module>
from . import _internal # for freeze programs
File "/usr/local/lib/python2.7/dist-packages/numpy/core/_internal.py", line 14, in <module>
import ctypes
File "/usr/lib/python2.7/ctypes/__init__.py", line 10, in <module>
from _ctypes import Union, Structure, Array
  File "/home/magnitofon/Загрузки/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 963, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named _ctypes
</code></pre>
<p>What am I doing wrong?</p>
| 2 | 2016-08-11T16:36:51Z | 38,902,275 | <p><code>matplotlib</code> is one of the <a href="https://cloud.google.com/appengine/docs/python/tools/built-in-libraries-27" rel="nofollow">Google-provided 3rd party libs</a>, so you should be following just the <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#requesting_a_library" rel="nofollow">Requesting a library</a> instructions and
<strong>not</strong> the <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">Installing a library</a> ones. </p>
<p>Sadly they're now both on the same documentation page, called <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27" rel="nofollow">Using Built-in Libraries in Python 2.7</a> - very confusing for the unaware as the vendoring technique should be used for libraries which are <strong>not</strong> GAE built-in/provided. Filed <a href="https://code.google.com/p/googleappengine/issues/detail?id=13202" rel="nofollow">Issue 13202</a>.</p>
<p><strong>Note</strong>: pay attention to the <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#local_development" rel="nofollow">Using libraries with the local development server</a> section, it applies to <code>matplotlib</code>. You may need to install some packages on your system, but <strong>not</strong> in the application itself (which could negatively affect your deployment on GAE) - they need to be accessible by the development server, not directly by your application.</p>
<p>Duh, I just noticed the <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#matplotlib" rel="nofollow">Using matplotlib</a> section, on the same page :)</p>
<p>It mentions:</p>
<blockquote>
<p><strong>Note</strong>: The experimental release of matplotlib is not supported on the development server. You can still add matplotlib to the
libraries list, but it will raise an ImportError exception when
imported.</p>
</blockquote>
| 0 | 2016-08-11T17:04:26Z | [
"python",
"google-app-engine",
"matplotlib"
] |
Determine series order in matplotlib chart | 38,901,901 | <p>I am banging my head against a matplotlib bar chart.
How do I change the order of the bars here so that women are shown before men?</p>
<pre><code>#!/usr/bin/env python
# a bar plot with errorbars
import numpy as np
import matplotlib.pyplot as plt
N = 5
menMeans = (20, 35, 30, 35, 27)
menStd = (2, 3, 4, 1, 2)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)
womenMeans = (25, 32, 34, 20, 25)
womenStd = (3, 5, 2, 3, 3)
rects2 = ax.bar(ind + width, womenMeans, width, color='y', yerr=womenStd)
# add some text for labels, title and axes ticks
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.set_xticks(ind + width)
ax.set_xticklabels(('G1', 'G2', 'G3', 'G4', 'G5'))
ax.legend((rects1[0], rects2[0]), ('Men', 'Women'))
def autolabel(rects):
# attach some text labels
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1)
autolabel(rects2)
plt.show()
</code></pre>
<p><img src="http://i.stack.imgur.com/IMyeA.png" alt="chart"></p>
<p>It seems to me that matplotlib insists on sorting the data alphabetically. Is there a way to avoid this?</p>
| 1 | 2016-08-11T16:39:23Z | 38,902,078 | <p>The solution is quite easy: the order is determined by the x location:</p>
<pre><code> rects2 = ax.bar(ind - width, womenMeans, width, color='y', yerr=womenStd)
</code></pre>
<p>This will plot the bars for the second series before the first one.</p>
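For what it's worth, here is a runnable sketch of the same idea in the other direction: instead of shifting the second series to <code>ind - width</code>, plot the women at <code>ind</code> and the men at <code>ind + width</code>. This is a hypothetical rearrangement of the question's code (the Agg backend is only there so it runs headless); note the legend order simply follows the order of the handles passed to <code>legend</code>:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import numpy as np
import matplotlib.pyplot as plt

N = 5
ind = np.arange(N)
width = 0.35
womenMeans = (25, 32, 34, 20, 25)
menMeans = (20, 35, 30, 35, 27)

fig, ax = plt.subplots()
# Women at the left x positions, men shifted right by one bar width
rects_w = ax.bar(ind, womenMeans, width, color='y')
rects_m = ax.bar(ind + width, menMeans, width, color='r')
ax.set_xticks(ind + width)
ax.set_xticklabels(('G1', 'G2', 'G3', 'G4', 'G5'))
# Legend order follows the order of the handles, not the plotting order
ax.legend((rects_w[0], rects_m[0]), ('Women', 'Men'))
```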
| 0 | 2016-08-11T16:51:28Z | [
"python",
"matplotlib"
] |
Unexpected colors in multiple scatterplots in matplotlib | 38,901,925 | <p>I'm sure I'm messing up something really simple here, but can't seem to figure it out. I'm simply trying to plot groups of data as scatterplots with different colors for each group by cycling through a dataframe and repeatedly calling <code>ax.scatter</code>. A minimal example is:</p>
<pre><code>import numpy as np; import pandas as pd; import matplotlib.pyplot as plt; import seaborn as sns
%matplotlib inline
df = pd.DataFrame({"Cat":list("AAABBBCCC"), "x":np.random.rand(9), "y":np.random.rand(9)})
fig, ax = plt.subplots()
for i,cat in enumerate(df.Cat.unique()):
print i, cat, sns.color_palette("husl",3)[i]
ax.scatter(df[df.Cat==cat].x.values, df[df.Cat==cat].y.values, marker="h",s=70,
label = cat, color=sns.color_palette("husl",3)[i])
ax.legend(loc=2)
</code></pre>
<p>I added the <code>print</code> statement for my own sanity to confirm that I am indeed cycling through the groups and choosing different colors. The output however looks as follows:</p>
<p><a href="http://i.stack.imgur.com/EZ0NR.png" rel="nofollow"><img src="http://i.stack.imgur.com/EZ0NR.png" alt="enter image description here"></a></p>
<p>(If this is slightly hard to see: the groups A, B, and C have three very similar blues according to the legend, however all scatterpoints have different and seemingly unrelated colors, which aren't even identical across groups)</p>
<p>What is going on here?</p>
| 3 | 2016-08-11T16:41:06Z | 38,902,894 | <p>You could use the <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#scatter-plot" rel="nofollow"><code>scatter()</code></a> method of <code>pandas</code> by specifying the target <code>ax</code> and repeating the plots, to plot multiple column groups in a single axes, <code>ax</code>.</p>
<pre><code># set random seed
np.random.seed(42)
fig, ax = plt.subplots()
for i,label in enumerate(df['Cat'].unique()):
# select subset of columns equal to a given label
df['X'] = df[df['Cat']==label]['x']
df['Y'] = df[df['Cat']==label]['y']
df.plot.scatter(x='X',y='Y',color=sns.color_palette("husl",3)[i],label=label,ax=ax)
ax.legend(loc=2)
</code></pre>
<p><a href="http://i.stack.imgur.com/lGbsW.png" rel="nofollow"><img src="http://i.stack.imgur.com/lGbsW.png" alt="enter image description here"></a></p>
| 1 | 2016-08-11T17:44:00Z | [
"python",
"pandas",
"matplotlib"
] |
Unexpected colors in multiple scatterplots in matplotlib | 38,901,925 | <p>I'm sure I'm messing up something really simple here, but can't seem to figure it out. I'm simply trying to plot groups of data as scatterplots with different colors for each group by cycling through a dataframe and repeatedly calling <code>ax.scatter</code>. A minimal example is:</p>
<pre><code>import numpy as np; import pandas as pd; import matplotlib.pyplot as plt; import seaborn as sns
%matplotlib inline
df = pd.DataFrame({"Cat":list("AAABBBCCC"), "x":np.random.rand(9), "y":np.random.rand(9)})
fig, ax = plt.subplots()
for i,cat in enumerate(df.Cat.unique()):
print i, cat, sns.color_palette("husl",3)[i]
ax.scatter(df[df.Cat==cat].x.values, df[df.Cat==cat].y.values, marker="h",s=70,
label = cat, color=sns.color_palette("husl",3)[i])
ax.legend(loc=2)
</code></pre>
<p>I added the <code>print</code> statement for my own sanity to confirm that I am indeed cycling through the groups and choosing different colors. The output however looks as follows:</p>
<p><a href="http://i.stack.imgur.com/EZ0NR.png" rel="nofollow"><img src="http://i.stack.imgur.com/EZ0NR.png" alt="enter image description here"></a></p>
<p>(If this is slightly hard to see: the groups A, B, and C have three very similar blues according to the legend, however all scatterpoints have different and seemingly unrelated colors, which aren't even identical across groups)</p>
<p>What is going on here?</p>
| 3 | 2016-08-11T16:41:06Z | 38,913,294 | <p>Should have spent a bit more time whittling down the minimum working example. Turns out the problem was with the call to <code>sns.color_palette</code>, which returns a <code>(float, float, float)</code> tuple that confuses <code>scatter</code>: a plain 3-tuple can also be interpreted as a sequence of values to be color-mapped when its length matches the number of points. </p>
<p>The problem is solved by replacing</p>
<pre><code>color = sns.color_palette("husl",3)[i]
</code></pre>
<p>with </p>
<pre><code>color = sns.color_palette("husl",3)[i] + (1.,)
</code></pre>
<p>to add an explicit value for alpha.</p>
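To make the fix concrete, a small sketch with an arbitrary RGB tuple standing in for the seaborn palette entry (the Agg backend is only there so it runs headless). With the explicit alpha appended, the 3-tuple becomes an unambiguous single RGBA color:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

rgb = (0.4, 0.6, 0.8)   # stand-in for sns.color_palette("husl", 3)[i]
rgba = rgb + (1.0,)     # explicit alpha removes the ambiguity

fig, ax = plt.subplots()
pts = ax.scatter([0.1, 0.5, 0.9], [0.2, 0.4, 0.8], marker="h", s=70, color=rgba)
# All markers now share one RGBA facecolor instead of three mapped colors
face = pts.get_facecolor()
```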
| 1 | 2016-08-12T08:25:40Z | [
"python",
"pandas",
"matplotlib"
] |
Python - urllib3 receives 403 'Forbidden' while crawling websites | 38,901,929 | <p>I am using <code>Python3</code> and <code>urllib3</code> for crawling and downloading websites. I crawled a list of 4000 different domains and at about 5 of them I got back <code>HttpErrorCode</code> - <code>403 - 'Forbidden'</code>.</p>
<p>In my browser these websites exist and respond correctly. They are probably suspecting me of being a crawler and are forbidding me from getting the data.</p>
<p>This is my code:</p>
<pre><code>from urllib3 import PoolManager, util, Retry
import certifi as certifi
from urllib3.exceptions import MaxRetryError
manager = PoolManager(cert_reqs='CERT_REQUIRED',
ca_certs=certifi.where(),
num_pools=15,
maxsize=6,
timeout=40.0,
retries=Retry(connect=2, read=2, redirect=10))
url_to_download = "https://www.uvision.co.il/"
headers = util.make_headers(accept_encoding='gzip, deflate',
keep_alive=True,
user_agent="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0")
headers['Accept-Language'] = "en-US,en;q=0.5"
headers['Connection'] = 'keep-alive'
headers['Accept'] = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
try:
response = manager.request('GET',
url_to_download,
preload_content=False,
headers=headers)
except MaxRetryError as ex:
raise FailedToDownload()
</code></pre>
<p>Examples of websites that have rejected me:
<a href="https://www.uvision.co.il/" rel="nofollow">https://www.uvision.co.il/</a> and <a href="http://www.medyummesut.net/" rel="nofollow">http://www.medyummesut.net/</a>.</p>
<p>Another website that doesn't work and throws <code>MaxRetryError</code> is:</p>
<p><a href="http://www.nytimes.com/2015/10/28/world/asia/south-china-sea-uss-lassen-spratly-islands.html?hp&action=click&pgtype=Homepage&module=first-column-region&region=top-news&WT.nav=top-news&_r=1" rel="nofollow">http://www.nytimes.com/2015/10/28/world/asia/south-china-sea-uss-lassen-spratly-islands.html?hp&action=click&pgtype=Homepage&module=first-column-region&region=top-news&WT.nav=top-news&_r=1</a></p>
<p>I've also tried to use the exact same headers that Firefox uses, and it didn't work either. Am I doing something wrong here?</p>
| 1 | 2016-08-11T16:41:18Z | 38,905,673 | <p>You specify <code>keep_alive=True</code>, which adds a header <code>connection: keep-alive</code></p>
<p>You then also add a header <code>Connection: keep-alive</code> (note the slight difference in case). And this seems to be causing the problem. To fix it just remove the redundant line </p>
<pre><code>headers['Connection'] = 'keep-alive'
</code></pre>
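The underlying issue can be illustrated without any network access. HTTP treats header names case-insensitively, but the plain Python dict that carries the headers does not, so both spellings end up being sent. A small sketch (a plain dict standing in for the result of <code>make_headers</code>):

```python
# Plain dict standing in for the headers built in the question
headers = {'connection': 'keep-alive'}   # what make_headers(keep_alive=True) contributes
headers['Connection'] = 'keep-alive'     # the redundant line from the question

# Python dicts are case-sensitive, so both keys survive and both get sent
dupes = [k for k in headers if k.lower() == 'connection']
print(dupes)  # ['connection', 'Connection']
```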
| 1 | 2016-08-11T20:35:57Z | [
"python",
"python-3.x",
"urllib",
"urllib3"
] |
ImportError: No module named jira | 38,901,974 | <p>Hi, I'm running a python script that transitions tickets from "pending build" to "in test" in Jira. I've run it on my local machine (Mac OS X) and it works perfectly, but when I try to include it as a build task in my bamboo deployment, I get the error </p>
<p>"from jira import JIRA</p>
<p>ImportError: No module named jira"</p>
<p>I'm calling the python file from a script task like the following "python myFile.py" and then I supply the location to the myFile.py in the working subdirectory field. I don't think that is a problem because the error shows that it is finding my script fine. I've checked multiple times and the jira package is in site-packages and is in the path. I installed using pip and am running python 2.7.8. The OS is SuSE on our server</p>
| 0 | 2016-08-11T16:44:24Z | 38,902,162 | <p>It is very hard to understand what your problem is. From what I understood, you are saying that when you run your module as a standalone file everything works, but when you import it you get an error. Here are some steps towards solving the problem.</p>
<ol>
<li>Make sure that your script is in a Python package. In order to do that, verify that there is a (usually empty) <code>__init__.py</code> file in the same directory where the script is located.</li>
<li>Make sure that your script does not import something else in the block that gets executed only when you run the file as script (<code>if __name__ == "__main__"</code>)</li>
<li>Make sure that the Python path includes your package and is visible to the script (you can check this by running <code>print os.environ['PYTHONPATH'].split(os.pathsep)</code>)</li>
</ol>
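When a module imports fine locally but not on the build agent, a common culprit is that Bamboo runs a different interpreter than the one pip installed into. A hypothetical diagnostic snippet (no project-specific names assumed) you could drop into the build task to see which Python is actually being used and what it can import from:

```python
import os
import sys

# Which interpreter is running this script?
print(sys.executable)
print(sys.version)

# Where will imports be resolved from?
for p in sys.path:
    print(p)
print(os.environ.get('PYTHONPATH', '(PYTHONPATH not set)'))
```

If the interpreter shown here differs from the one you used with pip, install the package with that interpreter's pip instead.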
| 0 | 2016-08-11T16:56:56Z | [
"python",
"bamboo",
"suse",
"python-jira"
] |
Python Pandas : How to return grouped lists in a column as a dict | 38,902,042 | <p><a href="http://stackoverflow.com/questions/38895856/python-pandas-how-to-compile-all-lists-in-a-column-into-one-unique-list/38896038#38896038">Python Pandas : How to compile all lists in a column into one unique list</a></p>
<p>Starting with data from previous question: </p>
<pre><code>df = pd.DataFrame({'id':['a','b', 'a'], 'val':[['val1','val2'],
['val33','val9','val6'],
['val2','val6','val7']]})
print (df)
id val
0 a [val1, val2]
1 b [val33, val9, val6]
2 a [val2, val6, val7]
</code></pre>
<p>How do I get the lists into Dict: </p>
<pre><code>pd.Series([a for b in df.val.tolist() for a in b]).value_counts().to_dict()
{'val1': 1, 'val2': 2, 'val33': 1, 'val6': 2, 'val7': 1, 'val9': 1}
</code></pre>
<p>How do I get the lists by groups:</p>
<p><code>df.groupby('id')["val"].apply(lambda x: list([a for b in x.tolist() for a in b]))</code></p>
<pre><code>id
a [val1, val2, val2, val6, val7]
b [val33, val9, val6]
Name: val, dtype: object
</code></pre>
<p>How do I get the lists by groups <strong>as dicts</strong>:</p>
<pre><code>df.groupby('id')["val"].apply(lambda x: pd.Series([a for b in x.tolist() for a in b]).value_counts().to_dict() )
</code></pre>
<p>Returns: </p>
<pre><code>id
a val1 1.0
val2 2.0
val6 1.0
val7 1.0
b val33 1.0
val6 1.0
val9 1.0
Name: val, dtype: float64
</code></pre>
<p>Desired output (what am I overlooking?): </p>
<pre><code> id
a {'val1': 1, 'val2': 2, 'val6': 1, 'val7': 1}
b {'val33': 1, 'val6': 1, 'val9': 1}
Name: val, dtype: object
</code></pre>
| 1 | 2016-08-11T16:49:10Z | 38,902,168 | <p>Edited using <code>agg</code> from @ayhan (much faster than apply).</p>
<pre><code>from collections import Counter
df.groupby("id")["val"].agg(lambda x: Counter([a for b in x for a in b]))
</code></pre>
<p>Out:</p>
<pre><code>id
a {'val2': 2, 'val6': 1, 'val7': 1, 'val1': 1}
b {'val9': 1, 'val33': 1, 'val6': 1}
Name: val, dtype: object
</code></pre>
<p>Time of this version:</p>
<pre><code>%timeit df.groupby("id")["val"].agg(lambda x: Counter([a for b in x for a in b]))
1000 loops, best of 3: 820 µs per loop
</code></pre>
<p>Time of @ayhan version:</p>
<pre><code>%timeit df.groupby('id')["val"].agg(lambda x: pd.Series([a for b in x.tolist() for a in b]).value_counts().to_dict() )
100 loops, best of 3: 1.91 ms per loop
</code></pre>
| 1 | 2016-08-11T16:57:09Z | [
"python",
"pandas",
"dictionary",
"group-by"
] |
Python Pandas : How to return grouped lists in a column as a dict | 38,902,042 | <p><a href="http://stackoverflow.com/questions/38895856/python-pandas-how-to-compile-all-lists-in-a-column-into-one-unique-list/38896038#38896038">Python Pandas : How to compile all lists in a column into one unique list</a></p>
<p>Starting with data from previous question: </p>
<pre><code>df = pd.DataFrame({'id':['a','b', 'a'], 'val':[['val1','val2'],
['val33','val9','val6'],
['val2','val6','val7']]})
print (df)
id val
0 a [val1, val2]
1 b [val33, val9, val6]
2 a [val2, val6, val7]
</code></pre>
<p>How do I get the lists into Dict: </p>
<pre><code>pd.Series([a for b in df.val.tolist() for a in b]).value_counts().to_dict()
{'val1': 1, 'val2': 2, 'val33': 1, 'val6': 2, 'val7': 1, 'val9': 1}
</code></pre>
<p>How do I get the lists by groups:</p>
<p><code>df.groupby('id')["val"].apply(lambda x: list([a for b in x.tolist() for a in b]))</code></p>
<pre><code>id
a [val1, val2, val2, val6, val7]
b [val33, val9, val6]
Name: val, dtype: object
</code></pre>
<p>How do I get the lists by groups <strong>as dicts</strong>:</p>
<pre><code>df.groupby('id')["val"].apply(lambda x: pd.Series([a for b in x.tolist() for a in b]).value_counts().to_dict() )
</code></pre>
<p>Returns: </p>
<pre><code>id
a val1 1.0
val2 2.0
val6 1.0
val7 1.0
b val33 1.0
val6 1.0
val9 1.0
Name: val, dtype: float64
</code></pre>
<p>Desired output (what am I overlooking?): </p>
<pre><code> id
a {'val1': 1, 'val2': 2, 'val6': 1, 'val7': 1}
b {'val33': 1, 'val6': 1, 'val9': 1}
Name: val, dtype: object
</code></pre>
| 1 | 2016-08-11T16:49:10Z | 38,902,188 | <p>Apply is flexible. Whenever possible, it converts the returning object to something that is more usable. From the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#flexible-apply" rel="nofollow">docs</a>:</p>
<blockquote>
<p>Some operations on the grouped data might not fit into either the
aggregate or transform categories. Or, you may simply want GroupBy to
infer how to combine the results. For these, use the apply function,
which can be substituted for both aggregate and transform in many
standard use cases.</p>
<p>Note: apply can act as a reducer, transformer, or filter function,
depending on exactly what is passed to apply. So depending on the path
taken, and exactly what you are grouping. Thus the grouped columns(s)
may be included in the output as well as set the indices.</p>
</blockquote>
<p>There can be cases, like this, that you want to avoid this behavior. If you are grouping, simply replace apply with agg:</p>
<pre><code>df.groupby('id')["val"].agg(lambda x: pd.Series([a for b in x.tolist() for a in b]).value_counts().to_dict() )
Out:
id
a {'val1': 1, 'val7': 1, 'val6': 1, 'val2': 2}
b {'val6': 1, 'val33': 1, 'val9': 1}
Name: val, dtype: object
</code></pre>
| 1 | 2016-08-11T16:58:14Z | [
"python",
"pandas",
"dictionary",
"group-by"
] |
numpy array flags information is not saved when dumping to pickle | 38,902,070 | <p>I'm writing a class that has an attribute of numpy array type.
Since I'd like it to be read-only, I set its WRITABLE flag to be false:</p>
<pre><code>import numpy as np
class MyClass:
def __init__(self):
self.my_array = np.zeros(5)
self.my_array.setflags(write=False)
</code></pre>
<p>After doing some other stuff, I dump MyClass into a pickle file:</p>
<pre><code>pickle.dump(self, writer)
</code></pre>
<p>Later, I load it using <code>x = pickle.load(reader)</code>, but then the WRITABLE flag is true.
How can I make the pickle dump preserve the numpy array's WRITABLE flag?</p>
| 0 | 2016-08-11T16:50:54Z | 38,904,943 | <p>For pickling arrays, numpy uses the <code>np.save</code> function. Details of this format are in the <code>np.lib.format</code> module. This format saves a header and a byte representation of the data buffer. The content of the header is a dictionary. </p>
<pre><code>In [1212]: np.lib.format.header_data_from_array_1_0(x)
Out[1212]: {'descr': '<i4', 'fortran_order': False, 'shape': (2, 3)}
In [1213]: np.lib.format.header_data_from_array_1_0(u)
Out[1213]: {'descr': '<c16', 'fortran_order': False, 'shape': (4,)}
</code></pre>
<p>As you can see it does not save the whole <code>FLAGS</code> attribute, just info on the order.</p>
<p>Class pickling can be customized with <code>__reduce__</code> and <code>__setstate__</code> methods.</p>
<p>See also</p>
<p><a href="http://stackoverflow.com/questions/38839174/how-can-i-make-np-save-work-for-an-ndarray-subclass">How can I make np.save work for an ndarray subclass?</a></p>
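A minimal sketch of the <code>__setstate__</code> customization mentioned above, applied to the class from the question. The assumption here is that simply re-applying the flag after unpickling is acceptable, since the flag itself is not stored in the pickle:

```python
import pickle
import numpy as np

class MyClass(object):
    def __init__(self):
        self.my_array = np.zeros(5)
        self.my_array.setflags(write=False)

    def __setstate__(self, state):
        # Default pickling stores __dict__; restore it, then re-apply the
        # flag, since numpy's pickle format does not save WRITEABLE.
        self.__dict__.update(state)
        self.my_array.setflags(write=False)

restored = pickle.loads(pickle.dumps(MyClass()))
print(restored.my_array.flags.writeable)  # False
```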
| 0 | 2016-08-11T19:48:39Z | [
"python",
"arrays",
"numpy",
"pickle"
] |
Performance issues with pandas and filtering on datetime column | 38,902,239 | <p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(self, clock_tick, previous_tick):
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
| 13 | 2016-08-11T17:01:57Z | 38,934,632 | <p>It's a bit of speculation because I can't test it but two ideas come to my mind.</p>
<ol>
<li><p>Using a lookup series to determine the start and end indices of a data frame to be returned:</p>
<pre><code>s = pd.Series(np.arange(len(df)), index=df.time)
start = s.asof(start)
end = s.asof(end)
ret = df.iloc[start + 1 : end]
</code></pre></li>
<li><p>Setting <code>df.time</code> column as an index and taking a slice. (It may or may not be a good solution for other reasons since this column contains duplicates.)</p>
<pre><code>df = df.set_index('time')
ret = df.loc[start : end]
</code></pre>
<p>You may need to add some small <code>Timedelta</code> to <code>start</code>.</p></li>
</ol>
<p>In both cases you can do the main step (construct the series or set the index) only once for each data frame.</p>
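To make the second idea concrete, here is a runnable sketch on made-up five-minute data. Note that <code>.loc</code> slicing on a sorted <code>DatetimeIndex</code> is inclusive at <em>both</em> ends, which is why a small <code>Timedelta</code> may need to be added to <code>start</code> to reproduce the original <code>(start, end]</code> semantics:

```python
import pandas as pd

df = pd.DataFrame({'time': pd.date_range('2016-08-07 21:00', periods=6, freq='5min'),
                   'volume': [9, 10, 10, 18, 4, 4]})

indexed = df.set_index('time')  # do this once per data frame
start = pd.Timestamp('2016-08-07 21:05')
end = pd.Timestamp('2016-08-07 21:15')

ret = indexed.loc[start:end]    # inclusive on both ends
print(list(ret['volume']))      # [10, 10, 18]
```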
| 0 | 2016-08-13T16:14:42Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Performance issues with pandas and filtering on datetime column | 38,902,239 | <p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(self, clock_tick, previous_tick):
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
| 13 | 2016-08-11T17:01:57Z | 38,964,404 | <p>Pandas query can use numexpr as engine to speed up evaluation:</p>
<pre><code>df.query('time > @start & time <= @end')
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html</a></p>
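A minimal sketch of <code>query</code> on made-up data shaped like the question's, with the local variables referenced via <code>@</code> (numexpr is picked up automatically as the engine when it is installed, otherwise pandas falls back to the pure-Python engine):

```python
import pandas as pd
from datetime import timedelta

df = pd.DataFrame({'time': pd.date_range('2016-08-07 21:00', periods=5, freq='5min'),
                   'volume': [9, 10, 10, 18, 4]})

start = pd.Timestamp('2016-08-07 21:00')
end = start + timedelta(minutes=10)

# @start and @end refer to the local Python variables above
out = df.query('time > @start & time <= @end')
print(list(out['volume']))  # [10, 10]
```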
| 0 | 2016-08-15T23:19:36Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Performance issues with pandas and filtering on datetime column | 38,902,239 | <p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(self, clock_tick, previous_tick):
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
| 13 | 2016-08-11T17:01:57Z | 38,974,448 | <p>It seems to me that using the <code>ix</code> locator is faster:</p>
<pre><code>df.sort_values(by='time',inplace=True)
df.ix[(df.time > start) & (df.time <= end),:]
</code></pre>
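<p>For illustration, here is a minimal runnable sketch of this pattern (note: <code>.ix</code> was later deprecated and removed from pandas; <code>.loc</code> with the same boolean mask is the equivalent spelling):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'time': pd.to_datetime(['2016-08-07 21:10', '2016-08-07 21:00', '2016-08-07 21:05']),
    'closeMid': [0.847795, 0.84788, 0.84788],
})
df.sort_values(by='time', inplace=True)

start = pd.Timestamp('2016-08-07 21:00')
end = pd.Timestamp('2016-08-07 21:10')

# Boolean-mask row selection on the sorted frame
subset = df.loc[(df.time > start) & (df.time <= end), :]
```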
| 0 | 2016-08-16T12:02:35Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Performance issues with pandas and filtering on datetime column | 38,902,239 | <p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(clock_tick, previous_tick)
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
| 13 | 2016-08-11T17:01:57Z | 39,027,509 | <p>I've learned that those datetime objects can become very memory-hungry and require more computational effort, <strong>especially</strong> if they are set to be the index (<code>DatetimeIndex</code> objects?)</p>
<p>I think your best bet is to just cast the df.time, start and end into UNIX timestamps (as ints, no longer the datetime dtypes), and do a simple integer comparison. </p>
<p>The UNIX timestamp will look like: 1471554233 (the time of this posting). More on that here: <a href="https://en.wikipedia.org/wiki/Unix_time" rel="nofollow">https://en.wikipedia.org/wiki/Unix_time</a></p>
<p>Some considerations when doing this (e.g. keep in mind timezones): <a href="http://stackoverflow.com/questions/19801727/convert-datetime-to-unix-timestamp-and-convert-it-back-in-python">Convert datetime to Unix timestamp and convert it back in python</a></p>
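<p>A sketch of the idea (timezone handling omitted for brevity; both <code>Timestamp.value</code> and <code>astype('int64')</code> on a <code>datetime64[ns]</code> column yield nanoseconds since the epoch, so the filter becomes a plain integer comparison):</p>

```python
import pandas as pd

df = pd.DataFrame({'time': pd.date_range('2016-08-07 21:00', periods=5, freq='5min')})

# Cast the datetime64 column to int64 epoch-nanoseconds once up front
times = df['time'].values.astype('int64')

start = pd.Timestamp('2016-08-07 21:05').value
end = pd.Timestamp('2016-08-07 21:15').value

# Integer comparisons instead of datetime comparisons
subset = df[(times > start) & (times <= end)]
```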
| 0 | 2016-08-18T21:06:03Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Performance issues with pandas and filtering on datetime column | 38,902,239 | <p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(clock_tick, previous_tick)
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
| 13 | 2016-08-11T17:01:57Z | 39,037,324 | <p>I guess you are already running things in a relatively efficient way.</p>
<p>When working with time series, it's usually best practice to use the column containing your timestamps as the <code>DataFrame</code> index. Using a <code>RangeIndex</code> as your index isn't of much use. However, I ran a couple of tests on a (2650069, 2) DataFrame containing 6 months of trade tick data from a given stock at a given exchange and it turns out your approach (creating a boolean array and using it to slice the DataFrame) seems to be 10x faster than regular <code>DatetimeIndex</code> slicing (which I thought was faster).</p>
<p>The data I tested looks like:</p>
<pre><code> Price Volume
time
2016-02-10 11:16:15.951403000 6197.0 200.0
2016-02-10 11:16:16.241380000 6197.0 100.0
2016-02-10 11:16:16.521871000 6197.0 900.0
2016-02-10 11:16:16.541253000 6197.0 100.0
2016-02-10 11:16:16.592049000 6196.0 200.0
</code></pre>
<p>Setting <code>start</code>/<code>end</code>:</p>
<pre><code>start = df.index[len(df)/4]
end = df.index[len(df)/4*3]
</code></pre>
<p>Test 1:</p>
<pre><code>%%time
_ = df[start:end] # Same for df.ix[start:end]
CPU times: user 413 ms, sys: 20 ms, total: 433 ms
Wall time: 430 ms
</code></pre>
<p>On the other hand, using your approach:</p>
<pre><code>df = df.reset_index()
df.columns = ['time', 'Price', 'Volume']
</code></pre>
<p>Test 2:</p>
<pre><code>%%time
u = (df['time'] > start) & (df['time'] <= end)
CPU times: user 21.2 ms, sys: 368 µs, total: 21.6 ms
Wall time: 20.4 ms
</code></pre>
<p>Test 3:</p>
<pre><code>%%time
_ = df[u]
CPU times: user 10.4 ms, sys: 27.6 ms, total: 38.1 ms
Wall time: 36.8 ms
</code></pre>
<p>Test 4:</p>
<pre><code>%%time
_ = df[(df['time'] > start) & (df['time'] <= end)]
CPU times: user 21.6 ms, sys: 24.3 ms, total: 45.9 ms
Wall time: 44.5 ms
</code></pre>
<p><em>Note: Each code block corresponds to a Jupyter notebook cell and its output. I am using the <code>%%time</code> magic because <code>%%timeit</code> usually yields some caching problems which make the code seem faster than it actually is. Also, the kernel has been restarted after each run.</em></p>
<p>I am not totally sure why that is the case (I thought slicing using a <code>DatetimeIndex</code> would make things faster), but I guess it probably has something to do with how things work under the hood with numpy (most likely the datetime slicing operation generates a boolean array which is then used internally by numpy to actually do the slicing - but don't quote me on that).</p>
| 0 | 2016-08-19T10:57:37Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Performance issues with pandas and filtering on datetime column | 38,902,239 | <p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(clock_tick, previous_tick)
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
| 13 | 2016-08-11T17:01:57Z | 39,045,224 | <p>If efficiency is your goal, I'd use numpy for just about everything</p>
<p>I rewrote <code>get_new_candles</code> as <code>get_new_candles2</code></p>
<pre><code>def get_new_candles2(clock_tick, previous_tick):
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
    ge_start = df.time.values >= start.to_datetime64()
    le_end = df.time.values <= end.to_datetime64()
    mask = ge_start & le_end
    return pd.DataFrame(df.values[mask], df.index[mask], df.columns)
</code></pre>
<h3>Setup of data</h3>
<pre><code>from StringIO import StringIO
import pandas as pd
text = """time,volume,complete,closeBid,closeAsk,openBid,openAsk,highBid,highAsk,lowBid,lowAsk,closeMid
2016-08-07 21:00:00+00:00,9,True,0.84734,0.84842,0.84706,0.84814,0.84734,0.84842,0.84706,0.84814,0.84788
2016-08-07 21:05:00+00:00,10,True,0.84735,0.84841,0.84752,0.84832,0.84752,0.84846,0.84712,0.8482,0.84788
2016-08-07 21:10:00+00:00,10,True,0.84742,0.84817,0.84739,0.84828,0.84757,0.84831,0.84735,0.84817,0.847795
2016-08-07 21:15:00+00:00,18,True,0.84732,0.84811,0.84737,0.84813,0.84737,0.84813,0.84721,0.8479,0.847715
2016-08-07 21:20:00+00:00,4,True,0.84755,0.84822,0.84739,0.84812,0.84755,0.84822,0.84739,0.84812,0.847885
2016-08-07 21:25:00+00:00,4,True,0.84769,0.84843,0.84758,0.84827,0.84769,0.84843,0.84758,0.84827,0.84806
2016-08-07 21:30:00+00:00,5,True,0.84764,0.84851,0.84768,0.84852,0.8478,0.84857,0.84764,0.84851,0.848075
2016-08-07 21:35:00+00:00,4,True,0.84755,0.84825,0.84762,0.84844,0.84765,0.84844,0.84755,0.84824,0.8479
2016-08-07 21:40:00+00:00,1,True,0.84759,0.84812,0.84759,0.84812,0.84759,0.84812,0.84759,0.84812,0.847855
2016-08-07 21:45:00+00:00,3,True,0.84727,0.84817,0.84743,0.8482,0.84743,0.84822,0.84727,0.84817,0.84772
"""
df = pd.read_csv(StringIO(text), parse_dates=[0])
</code></pre>
<h3>Test input variables</h3>
<pre><code>previous_tick = pd.to_datetime('2016-08-07 21:10:00')
clock_tick = pd.to_datetime('2016-08-07 21:45:00')
</code></pre>
<hr>
<pre><code>get_new_candles2(clock_tick, previous_tick)
</code></pre>
<p><a href="http://i.stack.imgur.com/Ni07K.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ni07K.png" alt="enter image description here"></a></p>
<hr>
<h3>Timing</h3>
<p><a href="http://i.stack.imgur.com/NE3OL.png" rel="nofollow"><img src="http://i.stack.imgur.com/NE3OL.png" alt="enter image description here"></a></p>
| 2 | 2016-08-19T18:02:10Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
How to update an array of exponential moving average in pandas? | 38,902,271 | <p>I'm using <code>pandas.ewma</code> to compute the exponential moving average of a time series. As new data are appended to the time series, I also need to update its exponential moving average array. How can I do this, without recomputing exponential moving average from the beginning of the time series.</p>
<p>Here's my example code</p>
<pre><code>ema = pd.ewma(ts, span = 10)
ts = np.append(ts, [25], 0)
# I would not like to do this to update ema since all of the ema except the last one have already been computed:
ema = pd.ewma(ts, span = 10)
</code></pre>
| -1 | 2016-08-11T17:04:20Z | 38,903,448 | <p>Algebraically, it is clear that this can be done. </p>
<p>The EWMA for time <em>m</em> <a href="https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average" rel="nofollow">can be shown to be</a></p>
<p><em>∑ <sub>i = 1</sub><sup>m</sup>[α<sup>m - i</sup> y<sub>i</sub>]</em>.</p>
<p>Suppose you have this for all values up to <em>m</em> (which you do). For any <em>n ≥ m</em>, the EWMA is </p>
<p><em>∑ <sub>i = 1</sub><sup>n</sup>[α<sup>n - i</sup> y<sub>i</sub>] = ∑ <sub>i = 1</sub><sup>m</sup>[α<sup>n - i</sup> y<sub>i</sub>] + ∑ <sub>i = m + 1</sub><sup>n</sup>[α<sup>n - i</sup> y<sub>i</sub>]</em>.</p>
<p>For the first term on the right hand side, we have </p>
<p><em>∑ <sub>i = 1</sub><sup>m</sup>[α<sup>n - i</sup> y<sub>i</sub>] = α<sup>n - m</sup> ∑ <sub>i = 1</sub><sup>m</sup>[α<sup>m - i</sup> y<sub>i</sub>]</em>,</p>
<p>which is exactly <em>α<sup>n - m</sup></em> times the last element of the EWMA on the first <em>m</em> elements. Let us call it <em>Y<sub>m</sub></em>. </p>
<p>For the second term on the right hand side, we have</p>
<p><em>∑ <sub>i = m + 1</sub><sup>n</sup>[α<sup>n - i</sup> y<sub>i</sub>] =
∑ <sub>i - m = 1</sub><sup>n - m</sup>[α<sup>n - m - (i - m)</sup> y<sub>i - m + m</sub>] =
∑ <sub>j = 1</sub><sup>n - m</sup>[α<sup>n - m - j</sup> y<sub>j + m</sub>]</em>.</p>
<p>Combining both simplifications, we have</p>
<p><em>∑ <sub>i = 1</sub><sup>n</sup>[α<sup>n - i</sup> y<sub>i</sub>] =
∑ <sub>j = 1</sub><sup>n - m</sup>[α<sup>n - m - j</sup> y<sub>j + m</sub>] + α<sup>n - m</sup> Y<sub>m</sub></em>.</p>
<p>This is exactly the calculation of an EWMA over the <em>n - m</em> new values, with the previous result <em>Y<sub>m</sub></em> acting as the starting element (it carries weight <em>α<sup>n - m</sup></em>, consistent with the first term derived above). Thus, it is unnecessary to calculate everything from the start.</p>
<p>I leave it to anyone else interested, the final technical task of adapting this to <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.ewma.html" rel="nofollow"><code>pd.ewma</code></a>, which, e.g., defines <em>α</em> indirectly through <code>halflife</code>. (Surely the downvoter of the answer has already solved this end to end, for example.)</p>
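<p>A numeric sketch of the idea (using the plain recursive EWMA with α taken from pandas' <code>span</code> convention; <code>pd.ewma</code>'s default <code>adjust=True</code> weighting normalizes slightly differently, but the continuation principle is the same):</p>

```python
import numpy as np

def ewma(x, alpha):
    # Full recomputation with the recursive (adjust=False) definition
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

span = 10
alpha = 2.0 / (span + 1)              # pandas' span -> alpha convention

ts = np.array([3.0, 7.0, 5.0, 9.0])
ema = ewma(ts, alpha)

# A new observation arrives: extend the EMA from its last value only
new_point = 25.0
ema_updated = np.append(ema, alpha * new_point + (1 - alpha) * ema[-1])

# Identical to recomputing from the beginning of the series
full = ewma(np.append(ts, new_point), alpha)
```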
| 0 | 2016-08-11T18:14:56Z | [
"python",
"pandas",
"numpy"
] |
I need a way to map multiple keys to the same value in a dictionary | 38,902,324 | <p>Admittedly, this question seems like it might be a popular one, but I couldn't really find it (perhaps I wasn't using the right search terms). Anyway, I need something of this sort:</p>
<pre><code>tel = {}
tel['1 12'] = 1729
tel['9 10'] = 1729
tel['1 2'] = 9
tel['2 1'] = 9
tel['1 1'] = 2
print(tel)
{['1 1'] : 2, ['1 2', '2 1'] : 9, ['1 12', '9 10'] : 1729}
</code></pre>
<p>So whenever a key's value is already in the dict, append the key to the list of keys mapping to that value; else, add the key value pair to the dict.</p>
<p><strong>EDIT</strong>
I'm sorry if I confused the lot of you, and I'm REALLY sorry if the following confuses you even more :)</p>
<p>This is the original problem I wanted to solve: Given the equation a^3 + b^3, produce a dictionary mapping all positive integer pair values for a, b less than 1000 to the value of the equation when evaluated. When two pairs evaluate to the same value, I want the two pairs to share the same value in the dictionary <em>and</em> be <strong>grouped</strong> together somehow. (I'm already aware that I can map different keys to the same value in a dict, but I need this grouping).</p>
<p>So a sample of my pseudocode would be given by:</p>
<pre><code>for a in range(1, 1000):
for b in range(1, 1000):
map pair (a, b) to a^3 + b^3
</code></pre>
<p>For some integer pairs (a, b) and (p, q) where a != p, and b != q, a^3 + b^3 == p^3 + q^3. I want these pairs to be grouped together in some way. So for example, [(1, 12), (9, 10)] maps to 1729. I hope this makes it more clear what I want.</p>
<p><strong>EDIT2</strong>
As many of you have pointed out, I shall switch the key value pairs if it means a faster lookup time. That would mean though that the values in the key:value pair need to be tuples.</p>
| 1 | 2016-08-11T17:07:02Z | 38,903,042 | <p>As many of the comments have already pointed out, you seem to have your <strong>key/value</strong> structure <strong>inverted</strong>. I would recommend factoring out your <code>int</code> values as <strong>keys</strong> instead. This way you achieve efficient dictionary look ups using the int value as a key, and implement more elegant simple design in your data - using a dictionary as <a href="https://docs.python.org/3/tutorial/datastructures.html#dictionaries" rel="nofollow">intended</a>.</p>
<p>Ex: <code>{9: ('1 2', '2 1'), 2: ('1 1',), 1729: ('9 10', '1 12')}</code></p>
<p>That being said the snippet below will do what you require. It first maps the data as shown above, then <em>inverts</em> the key/values essentially. </p>
<pre><code>tel = {}
tel['1 12'] = 1729
tel['9 10'] = 1729
tel['1 2'] = 9
tel['2 1'] = 9
tel['1 1'] = 2
#-----------------------------------------------------------
from collections import defaultdict
new_tel = defaultdict(list)
for key, value in tel.items():
new_tel[value].append(key)
# new_tel -> {9: ['1 2', '2 1'], 2: ['1 1'], 1729: ['9 10', '1 12']}
print {tuple(key):value for value, key in new_tel.items()}
>>> {('1 2', '2 1'): 9, ('1 1',): 2, ('9 10', '1 12'): 1729}
</code></pre>
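<p>Applying the same <code>defaultdict</code> pattern to the a^3 + b^3 problem from the question's edit (a smaller range here for brevity), pairs that evaluate to the same sum end up grouped under one key:</p>

```python
from collections import defaultdict

cubes = defaultdict(list)
for a in range(1, 20):
    for b in range(a, 20):        # b >= a avoids counting (a, b) and (b, a) twice
        cubes[a**3 + b**3].append((a, b))

# Keep only the sums reachable by more than one pair
collisions = {s: pairs for s, pairs in cubes.items() if len(pairs) > 1}
```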
| 2 | 2016-08-11T17:52:05Z | [
"python",
"python-3.x",
"dictionary"
] |
Ansible CSV lookup in Jinja2 template, how to handle missing key | 38,902,351 | <p>j2 template and here is a snippet:</p>
<pre><code>{% set denyGroups %}
{{lookup('csvfile', '{{ ansible_hostname }} file={{ csvFile }} delimiter=, col=5')}}
{% endset %}
{%- if denyGroups |trim() -%}
simple_deny_groups = {{ denyGroups |replace(";", ",") }}
{%- endif -%}
</code></pre>
<p>I am injecting values into the template based on csv values. However, if the key (ansible_hostname) is not found in the csv, I get this error: <code>AnsibleError: csvfile: list index out of range</code></p>
<p>How can I do this error handling? Check if it's in the csv first? I could inject the value in tasks instead, but that would be a bit more cumbersome, and I prefer this template way. Thanks</p>
| 0 | 2016-08-11T17:09:13Z | 38,903,019 | <p><code>list index out of range</code> means there is a problem with the number of columns.<br>
Columns are counted from zero, so for <code>col=5</code> there must be at least 6 columns in the file.<br>
If you want to return a default item when the key is missing, use the <code>default=myvalue</code> option.</p>
<p>Update: Let's look into <a href="https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/lookup/csvfile.py#L59" rel="nofollow">the code</a>:</p>
<pre><code>def read_csv(self, filename, key, delimiter, encoding='utf-8', dflt=None, col=1):
try:
f = open(filename, 'r')
creader = CSVReader(f, delimiter=to_bytes(delimiter), encoding=encoding)
for row in creader:
if row[0] == key:
return row[int(col)]
except Exception as e:
raise AnsibleError("csvfile: %s" % to_str(e))
return dflt
</code></pre>
<p>So in the csv like this:</p>
<pre><code>mykey,val_a,val_b.val_c
</code></pre>
<p>key (column 0) is mykey, column 1 - val_a, column 2 - val_b, column 3 - val_c.</p>
<p><strong>If there is a format error or your parameters are incorrect, an exception will be raised; you can't do anything about it.</strong> Trying to use <code>col=4</code> with my example csv file will raise a <code>list index out of range</code> exception.</p>
<p>If the key is not found in the file (<code>row[0] == key</code> was false for every line), the default value will be returned (the one you specify with the <code>default=</code> option). </p>
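<p>For example, the question's template could pass an empty default so that a missing hostname yields an empty string instead of an error (a sketch assuming the question's variables; note this does not help if <code>col=5</code> itself exceeds the number of columns in a matching row):</p>

```jinja
{% set denyGroups = lookup('csvfile', ansible_hostname ~ ' file=' ~ csvFile ~ ' delimiter=, col=5 default=') %}
{%- if denyGroups | trim -%}
simple_deny_groups = {{ denyGroups | replace(";", ",") }}
{%- endif -%}
```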
| 1 | 2016-08-11T17:50:23Z | [
"python",
"csv",
"ansible",
"jinja2"
] |
Tkinter/TTK: How to determine if a ProgressBar reaches the maximum value? | 38,902,376 | <p>I'm trying to make simple program that prints "done" to the console when a progress bar reaches the maximum value using ttk.</p>
<p>example:</p>
<pre><code>from tkinter import *
import tkinter.ttk
root = Tk()
pb = tkinter.ttk.Progressbar(root, orient=HORIZONTAL, length=200, mode='determinate')
pb.pack()
pb.start()
if pb['value'] == 100: #This isn't correct it's just an example.
pb.stop()
print("Done")
root.mainloop()
</code></pre>
<p>I'm currently using Python 3.5.2. Please try to avoid classes and objects; they're a bit hard for me to understand. </p>
| 0 | 2016-08-11T17:10:48Z | 38,905,144 | <p>You can update the value yourself by instructing a function to be called every 100ms or so, like this:</p>
<pre><code>from tkinter import *
import tkinter.ttk
root = Tk()
pb = tkinter.ttk.Progressbar(root, orient=HORIZONTAL, length=200, mode='determinate')
pb.pack()
def task():
pb['value'] += 1
if pb['value'] >= 99:
print("Done")
else:
root.after(100, task) # Tell the mainloop to run "task()" again after 100ms
# Tell the mainloop to run "task()" after 100ms
root.after(100, task)
root.mainloop()
</code></pre>
<p>You typically don't start() the progressbar in determinate mode because you're supposed to update the value yourself. In indeterminate mode, the bar bounces back and forth to imply something is happening, which is why you need to start() it.</p>
| 1 | 2016-08-11T20:02:34Z | [
"python",
"python-3.x",
"tkinter",
"ttk"
] |
Meeting room calendars missing - Office 365 API | 38,902,417 | <p>I am trying to get the list of all calendars for a user. This user has delegate permissions to view the calendar of all the meeting rooms (resources). If I log into the user's account and I am able to see the meeting room calendars in the "Other Calendars" section. I also created my own calendar called "Test" in the "Other Calendars" section.</p>
<p>When I get all the calendar groups first and then iterate through the calendar group list and get the calendars, the list for "Other Calendars" only has "Test" calendar. </p>
<p>Not sure why this is the case. The user is a global administrator as well.</p>
<pre><code>def get_access_info_from_authcode(auth_code, redirect_uri):
post_data = { 'grant_type': 'authorization_code',
'code': auth_code,
'redirect_uri': redirect_uri,
'scope': ' '.join(str(i) for i in scopes),
'client_id': client_registration.client_id(),
'client_secret': client_registration.client_secret()
}
r = requests.post(access_token_url, data = post_data, verify = verifySSL)
try:
return r.json()
except:
return 'Error retrieving token: {0} - {1}'.format(r.status_code, r.text)
def get_access_token_from_refresh_token(refresh_token, resource_id):
post_data = { 'grant_type' : 'refresh_token',
'client_id' : client_registration.client_id(),
'client_secret' : client_registration.client_secret(),
'refresh_token' : refresh_token,
'resource' : resource_id }
r = requests.post(access_token_url, data = post_data, verify = verifySSL)
# Return the token as a JSON object
return r.json()
def get_calendars_from_calendar_groups(calendar_endpoint, token, calendar_groups):
results = []
for group_id in calendar_groups:
get_calendars = '{0}/me/calendargroups/{1}/calendars'.format(calendar_endpoint, group_id)
r = make_api_call('GET', get_calendars, token)
if (r.status_code == requests.codes.unauthorized):
logger.debug('Response Headers: {0}'.format(r.headers))
logger.debug('Response: {0}'.format(r.json()))
results.append(None)
results.append(r.json())
return results
def get_calendars(calendar_endpoint, token, parameters=None):
if (not parameters is None):
logger.debug(' parameters: {0}'.format(parameters))
get_calendars = '{0}/me/calendars'.format(calendar_endpoint)
if (not parameters is None):
get_calendars = '{0}{1}'.format(get_calendars, parameters)
r = make_api_call('GET', get_calendars, token)
if(r.status_code == requests.codes.unauthorized):
logger.debug('Unauthorized request. Leaving get_calendars.')
return None
return r.json()
</code></pre>
<p>Logic + Code:
Step 1) Get the authorization URL:</p>
<pre><code>authority = "https://login.microsoftonline.com/common"
authorize_url = '{0}{1}'.format(authority, '/oauth2/authorize?client_id={0}&redirect_uri={1}&response_type=code&state={2}&prompt=consent')
</code></pre>
<p>Step 2) Opening the URL takes us to <a href="https://login.microsoftonline.com/common" rel="nofollow">https://login.microsoftonline.com/common</a> where I log in as the user:
<a href="http://i.stack.imgur.com/Jk5rw.png" rel="nofollow"><img src="http://i.stack.imgur.com/Jk5rw.png" alt="enter image description here"></a></p>
<p>Step 3) This redirects back to my localhost then the following:</p>
<pre><code>discovery_result = exchoauth.get_access_info_from_authcode(auth_code, Office365.redirect_uri)
refresh_token = discovery_result['refresh_token']
client_id = client_registration.client_id()
client_secret = client_registration.client_secret()
access_token_json = exchoauth.get_access_token_from_refresh_token(refresh_token, Office365.resource_id)
access_token = access_token_json['access_token']
calendar_groups_json = exchoauth.get_calendar_groups(Office365.api_endpoint, access_token)
cal_groups = {}
if calendar_groups_json is not None:
for entry in calendar_groups_json['value']:
cal_group_id = entry['Id']
cal_group_name = entry['Name']
cal_groups[cal_group_id] = cal_group_name
calendars_json_list = exchoauth.get_calendars_from_calendar_groups(Office365.api_endpoint,
access_token, cal_groups)
for calendars_json in calendars_json_list:
if calendars_json is not None:
for ent in calendars_json['value']:
cal_id = ent['Id']
cal_name = ent['Name']
calendar_ids[cal_id] = cal_name
</code></pre>
<p>Let me know if you need any other information</p>
| 0 | 2016-08-11T17:13:19Z | 38,911,002 | <p>The delegate token requested with the <strong>Auth code grant flow</strong> is only able to get the calendars of the signed-in user.</p>
<p>If you want to get the events from a specific room, you can use an app-only token request with the <strong>client credentials flow</strong>. </p>
<p><a href="http://graph.microsoft.io/en-us/docs/authorization/app_only" rel="nofollow">Here</a> is a helpful link for your reference.</p>
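<p>A minimal sketch of building the app-only (client credentials) token request, in the style of the question's helper functions (the tenant, IDs and resource URL below are placeholders, not real values):</p>

```python
def build_app_only_token_request(tenant, client_id, client_secret, resource):
    # Client credentials grant: the app authenticates as itself, no user sign-in
    url = 'https://login.microsoftonline.com/{0}/oauth2/token'.format(tenant)
    data = {
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
        'resource': resource,
    }
    return url, data

url, data = build_app_only_token_request(
    'contoso.onmicrosoft.com', 'my-client-id', 'my-client-secret',
    'https://outlook.office365.com/')
# The token would then come from:
#   requests.post(url, data=data).json()['access_token']
```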
| 0 | 2016-08-12T06:05:43Z | [
"python",
"office365",
"office365api"
] |
TensorFlow strings: what they are and how to work with them | 38,902,433 | <p>When I read file with <code>tf.read_file</code> I get something with type <code>tf.string</code>. Documentation says only that it is "Variable length byte arrays. Each element of a Tensor is a byte array." (<a href="https://www.tensorflow.org/versions/r0.10/resources/dims_types.html" rel="nofollow">https://www.tensorflow.org/versions/r0.10/resources/dims_types.html</a>). I have no idea how to interpret this.</p>
<p>I can do nothing with this type. In plain Python you can get elements by index, like <code>my_string[:4]</code>, but when I run the following code I get an error.</p>
<pre><code>import tensorflow as tf
import numpy as np
x = tf.constant("This is string")
y = x[:4]
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
result = sess.run(y)
print result
</code></pre>
<p>It says </p>
<pre> File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 621, in assert_has_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape () must have rank 1
</pre>
<p>Also I cannot convert my string to a <code>tf.float32</code> tensor. It is a <code>.flo</code> file and it has the magic header "PIEH". This numpy code successfully converts such a header into a number (see example here <a href="http://stackoverflow.com/a/28016469/4744283">http://stackoverflow.com/a/28016469/4744283</a>) but I can't do that with tensorflow. I tried <code>tf.string_to_number(string, out_type=tf.float32)</code> but it says </p>
<pre>tensorflow.python.framework.errors.InvalidArgumentError: StringToNumberOp could not correctly convert string: PIEH
</pre>
<p>So, what is this string? What is its shape? How can I at least get part of the string? I suppose that if I can get part of it, I can just skip the "PIEH" part.</p>
<p><strong>UPD</strong>: I forgot to say that <code>tf.slice(string, [0], [4])</code> also doesn't work, with the same error.</p>
| 0 | 2016-08-11T17:14:04Z | 38,902,908 | <p>Unlike Python, where a string can be treated as a list of characters for the purposes of slicing and such, TensorFlow's <code>tf.string</code>s are indivisible values. For instance, <code>x</code> below is a <code>Tensor</code> with shape <code>(2,)</code>, each element of which is a variable-length string.</p>
<pre><code>x = tf.constant(["This is a string", "This is another string"])
</code></pre>
<p>However, to achieve what you want, TensorFlow provides the <code>tf.decode_raw</code> operator. It takes a <code>tf.string</code> tensor as input, but can decode the string into any other primitive data type. For instance, to interpret the string as a tensor of characters, you can do the following:</p>
<pre><code>x = tf.constant("This is string")
x = tf.decode_raw(x, tf.uint8)
y = x[:4]
sess = tf.InteractiveSession()
print(y.eval())
# prints [ 84 104 105 115]
</code></pre>
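<p>As a plain-Python illustration (not TensorFlow-specific): <code>decode_raw</code> simply reinterprets the raw bytes of the string as numbers of the requested dtype, which you can reproduce with <code>struct</code>. In particular, the 4-byte <code>"PIEH"</code> magic header of a <code>.flo</code> file is not a printable number at all; reinterpreted as a little-endian float32 it is the sentinel constant 202021.25, which is why <code>tf.string_to_number</code> fails on it. After decoding the raw bytes you can check or skip the header instead of trying to parse it:</p>

```python
import struct

# The bytes of "This" are exactly the values tf.decode_raw(..., tf.uint8) produced:
s = b"This is string"
print(list(s[:4]))  # [84, 104, 105, 115]

# The .flo magic header "PIEH" is a binary float32, not a decimal string,
# so string-to-number parsing can never work on it. Reinterpreting the
# bytes (little-endian) gives the well-known sentinel value:
magic = struct.unpack("<f", b"PIEH")[0]
print(magic)  # 202021.25
```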
| 1 | 2016-08-11T17:44:58Z | [
"python",
"string",
"numpy",
"tensorflow"
] |
Creating runtime variable in python to fetch data from dictionary object | 38,902,444 | <p>I have created a dictionary object by parsing a json file in python... let's assume the data is as follows:</p>
<pre><code> plants = {}
# Add three key-value tuples to the dictionary.
plants["radish"] = {"color":"red", "length":4}
plants["apple"] = {"smell":"sweet", "season":"winter"}
plants["carrot"] = {"use":"medicine", "juice":"sour"}
</code></pre>
<p>This could be a very long dictionary object</p>
<p>But at runtime, I need only a few values to be stored in a comma-delimited csv file. The list of desired properties is in a file, e.g.:</p>
<pre><code>radish.color
carrot.juice
</code></pre>
<p>So, how would I create a Python app where I can create dynamic variables such as the ones below to get data from the json object and create a csv file?</p>
<p>at runtime i need variable</p>
<pre><code>plants[radish][color]
plants[carrot][juice]
</code></pre>
<p>Thank you to all who help</p>
<p>Sanjay</p>
| 0 | 2016-08-11T17:14:51Z | 38,905,654 | <p>Consider parsing the text file line by line to retrieve its contents. As you read, split each line on the period, which separates the keys of the nested dictionaries. From there, use that list of keys to retrieve the dictionary values. Then iteratively output the values to csv, conditioned on the number of keys:</p>
<p><strong>Txt</strong> file</p>
<pre><code>radish.color
carrot.juice
</code></pre>
<p><strong>Python</strong> code</p>
<pre><code>import csv
plants = {}
plants["radish"] = {"color":"red", "length":4}
plants["apple"] = {"smell":"sweet", "season":"winter"}
plants["carrot"] = {"use":"medicine", "juice":"sour"}
data = []
with open("Input.txt", "r") as f:
for line in f:
data.append(line.replace("\n", "").strip().split("."))
with open("Output.csv", "w") as w:
writer = csv.writer(w, lineterminator = '\n')
for item in data:
if len(item) == 2: # ONE-NEST DEEP
writer.writerow([item[0], item[1], plants[item[0]][item[1]]])
if len(item) == 3: # SECOND NEST DEEP
writer.writerow([item[0], item[1], item[2], plants[item[0]][item[1]][item[2]]])
</code></pre>
<p><strong>Output</strong> csv </p>
<pre><code>radish,color,red
carrot,juice,sour
</code></pre>
<p><em>(Note: the deeper the nest, the more columns are output, which conflicts with the key/value column layout; you may want to write differently structured csv files, e.g. one file for one-level entries and another for two-level entries)</em></p>
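<p>If the property file can nest to arbitrary depth, a generic lookup helper avoids adding a branch per nesting level. Below is a minimal sketch (the helper name is my own; the dotted-key format is taken from the question's example file):</p>

```python
from functools import reduce

def get_nested(d, dotted_key):
    """Follow a dotted key path such as 'radish.color' through nested dicts."""
    return reduce(lambda acc, key: acc[key], dotted_key.split("."), d)

plants = {
    "radish": {"color": "red", "length": 4},
    "carrot": {"use": "medicine", "juice": "sour"},
}
row = get_nested(plants, "radish.color")
print(row)  # red
```

<p>Each csv row could then be written as <code>item + [get_nested(plants, ".".join(item))]</code>, regardless of depth.</p>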
| 1 | 2016-08-11T20:34:34Z | [
"python",
"json",
"python-2.7"
] |
Python argparse to a game | 38,902,469 | <p>I'm trying to figure out argparse for my adventure game.
This is my code:</p>
<pre><code>parser = argparse.ArgumentParser(description='Adventure')
parser.add_argument('--cheat','-c', action='store_true', help='cheat')
parser.add_argument('--info','-i', action='store_true', help='Information' )
parser.add_argument('--version','-v', action='store_true', help='Version' )
parser.add_argument('--about', '-a', action='store_true', help='About' )
args = parser.parse_args()
if args.cheat or args.c:
print("Cheat")
sys.exit()
elif args.info or args.i:
print("Information")
sys.exit()
elif args.version or args.v:
print("Version")
sys.exit()
elif args.about or args.a:
print("About")
sys.exit()
else:
#Game code#
</code></pre>
<p>But I'm getting an error when I have all these arguments. It worked fine when I only had cheat to begin with, but when I added all the others, it messed up. Either I don't really know what I'm doing or I can't see what's wrong...</p>
<p>Here is my error:</p>
<pre><code>$ python3 adventure.py --info
Traceback (most recent call last):
File "adventure.py", line 12, in <module>
if args.cheat or args.c:
AttributeError: 'Namespace' object has no attribute 'c'
</code></pre>
<p>The reason I have it like this is that I want to type these options in the terminal without starting the game, so if you just type <code>python3 adventure.py</code> the game will start. But I can't even do that now :P.
Only <code>python3 adventure.py -c</code> and <code>python3 adventure.py --cheat</code> work.</p>
<p>Anyone know a solution to this?</p>
<p>Thanks.</p>
| 1 | 2016-08-11T17:16:34Z | 38,902,546 | <p>The error message is pretty clear: <code>'Namespace' object has no attribute 'c'</code>.</p>
<p>The <code>argparse</code> library automatically determines the name of the attribute used to store the value of a command line option <a href="https://docs.python.org/3/library/argparse.html#dest" rel="nofollow">in the following way</a>:</p>
<blockquote>
<p>For optional argument actions, the value of <code>dest</code> is normally inferred from the option strings. <code>ArgumentParser</code> generates the value of <code>dest</code> by taking the first long option string and stripping away the initial <code>--</code> string. If no long option strings were supplied, <code>dest</code> will be derived from the first short option string by stripping the initial <code>-</code> character.</p>
</blockquote>
<p>Since you have both long and short names for your option, the long name is used. In other words, replace <code>if args.cheat or args.c:</code> with <code>if args.cheat:</code>.</p>
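<p>A minimal sketch of the inference (passing an explicit argument list to <code>parse_args</code> so it runs outside the game's command line):</p>

```python
import argparse

parser = argparse.ArgumentParser(description="Adventure")
parser.add_argument("--cheat", "-c", action="store_true", help="cheat")
args = parser.parse_args(["-c"])

print(args.cheat)          # True  -- dest was inferred from the long option
print(hasattr(args, "c"))  # False -- no attribute is created for the short name
```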
| 2 | 2016-08-11T17:21:34Z | [
"python",
"python-3.x",
"argparse"
] |
How to run a .py file in Ipython? | 38,902,487 | <h1>Python and Ipython versions:</h1>
<ol>
<li>The version of the notebook server is <code>3.0.0-f75fda4</code>.</li>
<li>Python <code>2.7.9</code> |Anaconda <code>2.2.0</code> (64-bit)| </li>
</ol>
<h1>Problem description</h1>
<p>I am using the <code>undaqtools</code> module in Python. The package page is <a href="http://pythonhosted.org/undaqTools/" rel="nofollow">here</a>. This package contains the functions to convert a data acquisition file (DAQ) from a driving simulator output to HDF5 format. There are 2 ways to do so according to the package page. One way is to convert files one by one by using the functions <code>daq.read</code> and <code>daq.write_hd5</code>. I have used this several times and it works flawlessly. The second method is to use the script <code>undaq.py</code> to batch convert many DAQ files simultaneously. This script is located in <code>/Anaconda/Scripts/</code> in C drive (Windows 7). I have 3 DAQ files in the <code>DrivingSimulator/Data</code> folder, named: </p>
<ol>
<li>Cars_20160601_01</li>
<li>Cars_20160601_02</li>
<li>Cars_20160601_03 </li>
</ol>
<p>So, I first changed directory to <code>DrivingSimulator/Data</code>. Then according to the <a href="http://pythonhosted.org/undaqTools/gettingstarted.html" rel="nofollow">Getting Started</a> page of the package, tried the <code>undaq.py *</code> command in IPython, which gave error: </p>
<pre><code>%run C:/Users/durraniu/Anaconda/Scripts/undaq.py *
usage: undaq.py [-h] [-n NUMCPU] [-o OUTTYPE] [-e ELEMFILE] [-r] [-d] path
undaq.py: error: unrecognized arguments: Cars_20160601_02.daq Cars_20160601_03.daq
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
</code></pre>
<p>Here is the full traceback:</p>
<pre><code>%tb
---------------------------------------------------------------------------
SystemExit Traceback (most recent call last)
C:\Users\durraniu\Anaconda\lib\site-packages\IPython\utils\py3compat.pyc in execfile(fname, glob, loc, compiler)
205 filename = fname
206 compiler = compiler or compile
--> 207 exec(compiler(scripttext, filename, 'exec'), glob, loc)
208
209 else:
C:\Users\durraniu\Anaconda\lib\site-packages\undaqtools-0.2.3-py2.7.egg\EGG-INFO\scripts\undaq.py in <module>()
2 # EASY-INSTALL-SCRIPT: 'undaqtools==0.2.3','undaq.py'
3 __requires__ = 'undaqtools==0.2.3'
----> 4 __import__('pkg_resources').run_script('undaqtools==0.2.3', 'undaq.py')
C:\Users\durraniu\Anaconda\lib\site-packages\setuptools-18.4-py2.7.egg\pkg_resources\__init__.py in run_script(self, requires, script_name)
733 ns.clear()
734 ns['__name__'] = name
--> 735 self.require(requires)[0].run_script(script_name, ns)
736
737 def __iter__(self):
C:\Users\durraniu\Anaconda\lib\site-packages\setuptools-18.4-py2.7.egg\pkg_resources\__init__.py in run_script(self, script_name, namespace)
1657 )
1658 script_code = compile(script_text, script_filename,'exec')
-> 1659 exec(script_code, namespace, namespace)
1660
1661 def _has(self, path):
C:\Users\durraniu\Anaconda\lib\site-packages\undaqtools-0.2.3-py2.7.egg\EGG-INFO\scripts\undaq.py in <module>()
C:\Users\durraniu\Anaconda\lib\argparse.pyc in parse_args(self, args, namespace)
1702 if argv:
1703 msg = _('unrecognized arguments: %s')
-> 1704 self.error(msg % ' '.join(argv))
1705 return args
1706
C:\Users\durraniu\Anaconda\lib\argparse.pyc in error(self, message)
2372 """
2373 self.print_usage(_sys.stderr)
-> 2374 self.exit(2, _('%s: error: %s\n') % (self.prog, message))
C:\Users\durraniu\Anaconda\lib\argparse.pyc in exit(self, status, message)
2360 if message:
2361 self._print_message(message, _sys.stderr)
-> 2362 _sys.exit(status)
2363
2364 def error(self, message):
SystemExit: 2
</code></pre>
<p>I can't understand this error. Also, I tried using <code>undaq.py</code> in CMD but that opened a new window saying that Windows can't open this file: </p>
<p><a href="http://i.stack.imgur.com/I7C7p.png" rel="nofollow"><img src="http://i.stack.imgur.com/I7C7p.png" alt="enter image description here"></a></p>
<p>Please let me know what I am doing wrong. Also, please note that the paths to the <code>Scripts</code> folder and Python are already in the PATH variable of the system variables. </p>
<h1>UPDATE:</h1>
<p>Following the instructions of @hpaulj, I did following: </p>
<pre><code>## Changing to the directory containing DAQ files:
%cd C:/Users/durraniu/Documents/DrivingSimulator/Data
## Running the undaq.py script:
%run C:/Users/durraniu/Anaconda/Scripts/undaq.py -r -d \\*
</code></pre>
<p>This gave me following output:</p>
<pre><code>Glob Summary
--------------------------------------------------------------------
hdf5
daq size (KB) exists
--------------------------------------------------------------------
--------------------------------------------------------------------
debug = True
rebuild = True
Converting daqs with 1 cpus (this may take awhile)...
Debug Summary
Batch processing completed.
--------------------------------------------------------------------
Conversion Summary
--------------------------------------------------------------------
Total elapsed time: 0.1 s
Data converted: 0.000 MB
Data throughput: 0.0 MB/s
--------------------------------------------------------------------
</code></pre>
<p>It seems that the script can't 'see' any files in the Data directory. I tried the same in cmd with the prefix <code>python</code> and it gave the same result. How can I fix this?<br>
For your reference, I am pasting the contents of <code>undaq.py</code> file here: </p>
<pre><code>#!C:\Users\durraniu\Anaconda\python.exe
# EASY-INSTALL-SCRIPT: 'undaqtools==0.2.3','undaq.py'
__requires__ = 'undaqtools==0.2.3'
__import__('pkg_resources').run_script('undaqtools==0.2.3', 'undaq.py')
</code></pre>
<p>Please note that I have the version 0.2.3 of undaqtools installed. </p>
<h1>UPDATE 2</h1>
<p>I have also tried following in Ipython:</p>
<pre><code>%run -G C:/Users/durraniu/Anaconda/Scripts/undaq.py -r -d *
</code></pre>
<p>This gives a recurring error:<br>
<a href="http://i.stack.imgur.com/KAKxI.png" rel="nofollow"><img src="http://i.stack.imgur.com/KAKxI.png" alt="enter image description here"></a></p>
| 1 | 2016-08-11T17:17:26Z | 38,904,956 | <p>You should not be running <code>undaq</code> in ipython. It is a stand-alone script.</p>
<p>To run a python program you need to use the python interpreter. A typical installation of python associates .py files with the python interpreter. However, it seems that on your system this hasn't happened.</p>
<p>Check that you have python installed by running <code>python</code> at a command line prompt (you should get some information about the version of python, and the <code>>>></code> prompt). Quit from python and then type:</p>
<pre><code>python undaq.py *
</code></pre>
<p>This loads the python interpreter and tells it to read and execute the commands in the file <code>undaq.py</code>, passing all the files in the current directory as arguments. </p>
| 0 | 2016-08-11T19:49:29Z | [
"python",
"windows",
"ipython",
"argparse"
] |
How to run a .py file in Ipython? | 38,902,487 | <h1>Python and Ipython versions:</h1>
<ol>
<li>The version of the notebook server is <code>3.0.0-f75fda4</code>.</li>
<li>Python <code>2.7.9</code> |Anaconda <code>2.2.0</code> (64-bit)| </li>
</ol>
<h1>Problem description</h1>
<p>I am using the <code>undaqtools</code> module in Python. The package page is <a href="http://pythonhosted.org/undaqTools/" rel="nofollow">here</a>. This package contains the functions to convert a data acquisition file (DAQ) from a driving simulator output to HDF5 format. There are 2 ways to do so according to the package page. One way is to convert files one by one by using the functions <code>daq.read</code> and <code>daq.write_hd5</code>. I have used this several times and it works flawlessly. The second method is to use the script <code>undaq.py</code> to batch convert many DAQ files simultaneously. This script is located in <code>/Anaconda/Scripts/</code> in C drive (Windows 7). I have 3 DAQ files in the <code>DrivingSimulator/Data</code> folder, named: </p>
<ol>
<li>Cars_20160601_01</li>
<li>Cars_20160601_02</li>
<li>Cars_20160601_03 </li>
</ol>
<p>So, I first changed directory to <code>DrivingSimulator/Data</code>. Then according to the <a href="http://pythonhosted.org/undaqTools/gettingstarted.html" rel="nofollow">Getting Started</a> page of the package, tried the <code>undaq.py *</code> command in IPython, which gave error: </p>
<pre><code>%run C:/Users/durraniu/Anaconda/Scripts/undaq.py *
usage: undaq.py [-h] [-n NUMCPU] [-o OUTTYPE] [-e ELEMFILE] [-r] [-d] path
undaq.py: error: unrecognized arguments: Cars_20160601_02.daq Cars_20160601_03.daq
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
</code></pre>
<p>Here is the full traceback:</p>
<pre><code>%tb
---------------------------------------------------------------------------
SystemExit Traceback (most recent call last)
C:\Users\durraniu\Anaconda\lib\site-packages\IPython\utils\py3compat.pyc in execfile(fname, glob, loc, compiler)
205 filename = fname
206 compiler = compiler or compile
--> 207 exec(compiler(scripttext, filename, 'exec'), glob, loc)
208
209 else:
C:\Users\durraniu\Anaconda\lib\site-packages\undaqtools-0.2.3-py2.7.egg\EGG-INFO\scripts\undaq.py in <module>()
2 # EASY-INSTALL-SCRIPT: 'undaqtools==0.2.3','undaq.py'
3 __requires__ = 'undaqtools==0.2.3'
----> 4 __import__('pkg_resources').run_script('undaqtools==0.2.3', 'undaq.py')
C:\Users\durraniu\Anaconda\lib\site-packages\setuptools-18.4-py2.7.egg\pkg_resources\__init__.py in run_script(self, requires, script_name)
733 ns.clear()
734 ns['__name__'] = name
--> 735 self.require(requires)[0].run_script(script_name, ns)
736
737 def __iter__(self):
C:\Users\durraniu\Anaconda\lib\site-packages\setuptools-18.4-py2.7.egg\pkg_resources\__init__.py in run_script(self, script_name, namespace)
1657 )
1658 script_code = compile(script_text, script_filename,'exec')
-> 1659 exec(script_code, namespace, namespace)
1660
1661 def _has(self, path):
C:\Users\durraniu\Anaconda\lib\site-packages\undaqtools-0.2.3-py2.7.egg\EGG-INFO\scripts\undaq.py in <module>()
C:\Users\durraniu\Anaconda\lib\argparse.pyc in parse_args(self, args, namespace)
1702 if argv:
1703 msg = _('unrecognized arguments: %s')
-> 1704 self.error(msg % ' '.join(argv))
1705 return args
1706
C:\Users\durraniu\Anaconda\lib\argparse.pyc in error(self, message)
2372 """
2373 self.print_usage(_sys.stderr)
-> 2374 self.exit(2, _('%s: error: %s\n') % (self.prog, message))
C:\Users\durraniu\Anaconda\lib\argparse.pyc in exit(self, status, message)
2360 if message:
2361 self._print_message(message, _sys.stderr)
-> 2362 _sys.exit(status)
2363
2364 def error(self, message):
SystemExit: 2
</code></pre>
<p>I can't understand this error. Also, I tried using <code>undaq.py</code> in CMD but that opened a new window saying that Windows can't open this file: </p>
<p><a href="http://i.stack.imgur.com/I7C7p.png" rel="nofollow"><img src="http://i.stack.imgur.com/I7C7p.png" alt="enter image description here"></a></p>
<p>Please let me know what I am doing wrong. Also, please note that the paths to the <code>Scripts</code> folder and Python are already in the PATH variable of the system variables. </p>
<h1>UPDATE:</h1>
<p>Following the instructions of @hpaulj, I did following: </p>
<pre><code>## Changing to the directory containing DAQ files:
%cd C:/Users/durraniu/Documents/DrivingSimulator/Data
## Running the undaq.py script:
%run C:/Users/durraniu/Anaconda/Scripts/undaq.py -r -d \\*
</code></pre>
<p>This gave me following output:</p>
<pre><code>Glob Summary
--------------------------------------------------------------------
hdf5
daq size (KB) exists
--------------------------------------------------------------------
--------------------------------------------------------------------
debug = True
rebuild = True
Converting daqs with 1 cpus (this may take awhile)...
Debug Summary
Batch processing completed.
--------------------------------------------------------------------
Conversion Summary
--------------------------------------------------------------------
Total elapsed time: 0.1 s
Data converted: 0.000 MB
Data throughput: 0.0 MB/s
--------------------------------------------------------------------
</code></pre>
<p>It seems that the script can't 'see' any files in the Data directory. I tried the same in cmd with the prefix <code>python</code> and it gave the same result. How can I fix this?<br>
For your reference, I am pasting the contents of <code>undaq.py</code> file here: </p>
<pre><code>#!C:\Users\durraniu\Anaconda\python.exe
# EASY-INSTALL-SCRIPT: 'undaqtools==0.2.3','undaq.py'
__requires__ = 'undaqtools==0.2.3'
__import__('pkg_resources').run_script('undaqtools==0.2.3', 'undaq.py')
</code></pre>
<p>Please note that I have the version 0.2.3 of undaqtools installed. </p>
<h1>UPDATE 2</h1>
<p>I have also tried following in Ipython:</p>
<pre><code>%run -G C:/Users/durraniu/Anaconda/Scripts/undaq.py -r -d *
</code></pre>
<p>This gives a recurring error:<br>
<a href="http://i.stack.imgur.com/KAKxI.png" rel="nofollow"><img src="http://i.stack.imgur.com/KAKxI.png" alt="enter image description here"></a></p>
| 1 | 2016-08-11T17:17:26Z | 38,905,205 | <p>With a simple script, <code>echo_argv.py</code>:</p>
<pre><code>import sys
print(sys.argv)
</code></pre>
<p>Running it in Ipython with your commandline:</p>
<pre><code>In [1222]: %run echo_argv *
['echo_argv.py', 'stack38002264.py', 'stack37714032.py', 'test', 'stack37930737.py ...]
</code></pre>
<p>shows that it gets the full directory listing in <code>sys.argv</code>.</p>
<p>Your first error is produced by the commandline parser of <code>undaq.py</code> (hence the <code>argparse</code> error stack).</p>
<pre><code>usage: undaq.py [-h] [-n NUMCPU] [-o OUTTYPE] [-e ELEMFILE] [-r] [-d] path
undaq.py: error: unrecognized arguments: Cars_20160601_02.daq Cars_20160601_03.daq
</code></pre>
<p>This message tells me that <code>undaq</code> expects one <code>path</code> argument, which was met by the first file name, 'Cars_20160601_01.daq'. The other file names are thus superfluous, and it is complaining.</p>
<p>I suspect it will run better if you give it just one file name, or the current directory name, something like:</p>
<pre><code>%run undaq.py .
%run undaq.py Cars_20160601_01.daq
</code></pre>
<p>I expect you'd get the same error if in CMD you did</p>
<pre><code>python undaq.py *
</code></pre>
<p>I don't understand why the <code>getting started</code> page recommends using <code>*</code> when the commandline parser gives this error message. I'm wondering if there's a version difference - the documentation page is for a different version.</p>
<p>Consider doing </p>
<pre><code>python undaq.py -h
%run undaq.py -h
</code></pre>
<p>to get a fuller help message.</p>
<p><a href="https://github.com/rogerlew/undaqTools/blob/master/undaqTools/scripts/undaq.py" rel="nofollow">https://github.com/rogerlew/undaqTools/blob/master/undaqTools/scripts/undaq.py</a></p>
<p>Looking at this code, the <code>argparse</code> definition for the <code>path</code> argument is</p>
<pre><code>parser.add_argument('path', type=str, help='Path for glob ("*")')
</code></pre>
<p>That suggests that the correct call is to quote <code>'*'</code> so it is passed as is to the script, rather than being evaluated by the shell</p>
<pre><code>python undaq.py '*'
</code></pre>
<p>However I'm having trouble getting the same behavior with <code>%run</code>.</p>
<p><code>undaq.py</code> does its own <code>glob</code>, so it wants a path string, not the full list generated by the shell <code>glob</code>.</p>
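<p>To see the difference, compare what a glob pattern string expands to inside Python. This is a self-contained sketch with a scratch directory (not the actual <code>undaq</code> code):</p>

```python
import glob
import os
import tempfile

# Create a scratch directory holding a couple of fake .daq files.
scratch = tempfile.mkdtemp()
for name in ("Cars_01.daq", "Cars_02.daq"):
    open(os.path.join(scratch, name), "w").close()

# The script wants to receive the pattern itself and expand it internally:
pattern = os.path.join(scratch, "*.daq")
matches = sorted(os.path.basename(p) for p in glob.glob(pattern))
print(matches)  # ['Cars_01.daq', 'Cars_02.daq']
```

<p>If the shell (or <code>%run</code>) expands <code>*</code> first, the script instead receives a list of file names, so its single <code>path</code> argument overflows and argparse raises the <code>unrecognized arguments</code> error shown above.</p>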
<p>Here's what <code>%run</code> says about globing:</p>
<pre><code>Arguments are expanded using shell-like glob match. Patterns
'*', '?', '[seq]' and '[!seq]' can be used. Additionally,
tilde '~' will be expanded into user's home directory. Unlike
real shells, quotation does not suppress expansions. Use
*two* back slashes (e.g. ``\\*``) to suppress expansions.
To completely disable these expansions, you can use -G flag.
</code></pre>
<p>So I need to do:</p>
<pre><code>In [1236]: %run echo_argv \\*
['echo_argv.py', '*']
In [1237]: %run -G echo_argv *
['echo_argv.py', '*']
</code></pre>
| 0 | 2016-08-11T20:06:18Z | [
"python",
"windows",
"ipython",
"argparse"
] |
Python 3.5/Tkinter: Using Matplotlib in a multi-frame GUI (keeps crashing!) | 38,902,494 | <p>I am trying to achieve something very similar to this question (<a href="http://stackoverflow.com/questions/36928338/cannot-draw-matplotlib-chart-in-tkinter-gui-without-crashing">cannot draw matplotlib chart in tkinter GUI without crashing</a>). The only differences between the code shown in the link and my code are as follows: </p>
<p>-Instead of using <code>canvas.get_tk_widget().pack(...)</code>, I am using <code>canvas.get_tk_widget().grid(...)</code>. </p>
<p>-For each of my individual frames, I do a <code>columnconfigure</code> and <code>rowconfigure</code> because that's how I organize my GUI elements in each frame. </p>
<p>-My version of Spyder, Matplotlib, and Tkinter are all the latest versions (I'm using Anaconda ontop of it all). </p>
<p>-I don't recieve any sort of warning or error, my console window in Spyder simply stops, and I have to restart my kernel. </p>
<p>I've seen pretty much all tutorials regarding matplotlib and tkinter. At this point I'm ready to give up. </p>
<p>EDIT #2: When I copy and paste this code (<a href="https://pythonprogramming.net/how-to-embed-matplotlib-graph-tkinter-gui/" rel="nofollow">https://pythonprogramming.net/how-to-embed-matplotlib-graph-tkinter-gui/</a>), I get the exact same error. This leads me to think that maybe it's something to do with my installations. </p>
<p>EDIT: Skeleton Version of Code/Problem Code </p>
<p>Master App Class:</p>
<pre><code> class iSEEApp(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
tk.Tk.wm_title(self, "GUI V0.5")
container = tk.Frame(self)
container.winfo_toplevel().geometry("600x600")
container.pack(side="top", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (pixelAnnoPage, patchAnnoPage, heatMapPage):
page_name = F.__name__
frame = F(parent=container, controller=self)
self.frames[page_name] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame("pixelAnnoPage")
def show_frame(self, page_name):
frame = self.frames[page_name]
frame.tkraise()
frame.focus_set()
</code></pre>
<p>General Frame Class Construction/Convention:</p>
<pre><code> class heatMapPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
self.buildFrame()
def buildFrame(self):
print("test")
#self.pack(fill="both", expand=True)
self.columnconfigure(1, weight = 1)
self.columnconfigure(3, pad = 7)
self.rowconfigure(3, weight = 1)
self.rowconfigure(5, pad = 7)
</code></pre>
<p>Problem Code (in <code>buildFrame()</code>): </p>
<pre><code> self.heatFig = Figure(figsize=(5, 4), dpi=100)
self.heatPlot = self.heatFig.add_subplot(111)
self.heatPlot.plot([1,2,3,4,5,6,7,8],[5,6,7,8,1,2,2,1])
self.heatCanvas = FigureCanvasTkAgg(self.heatFig, master=self)
self.heatCanvas.get_tk_widget().grid(row=1, column=0, columnspan=2, rowspan=4, padx=5, sticky="nsew")
self.heatCanvas.show()
</code></pre>
| -1 | 2016-08-11T17:17:49Z | 38,904,578 | <p>Alright. So it turns out that the issue had nothing to do with my code. As I mentioned above, I'm using the Anaconda Python distribution, meaning my primary installer ISN'T Pip but Conda. Whenever I do <code>conda upgrade matplotlib</code>, I get a message claiming that everything is up to date. However, when I finally did <code>python -m pip install --upgrade matplotlib</code>, not only did it turn out the library wasn't up to date, it was a completely different file altogether! Basically what happened is that the matplotlib library provided by Anaconda and Anaconda Cloud is BROKEN! If you see this issue on your own project, USE PIP TO CHECK YOUR LIBRARIES. </p>
| -1 | 2016-08-11T19:25:26Z | [
"python",
"user-interface",
"canvas",
"matplotlib",
"tkinter"
] |
Why does the behavior of the patch library change depending on how values are imported? | 38,902,714 | <p>The <code>patch</code> function from the <code>mock</code> library is sensitive to how things are imported. Is there a deep reason why I can't just use the fully qualified name where the function was originally defined regardless of how it is imported in other modules?</p>
<p>using a "module import" works fine</p>
<p>patch_example.py:</p>
<pre><code># WORKS!
from mock import patch
import inner
def outer(x):
return ("outer", inner.inner(x))
@patch("inner.inner")
def test(mock_inner):
mock_inner.return_value = "MOCK"
assert outer(1) == ("outer", "MOCK")
return "SUCCESS"
if __name__ == "__main__":
print test()
</code></pre>
<p>inner.py:</p>
<pre><code>def inner(x):
return ("inner.inner", x)
</code></pre>
<p>Running <code>python patch_example.py</code> just outputs success.</p>
<p>However, changing the import can have pretty dramatic consequences</p>
<p>Using a module alias still works</p>
<pre><code># WORKS!
from mock import patch
import inner as inner2
def outer(x):
return ("outer", inner2.inner(x))
@patch("inner.inner")
def test(mock_inner):
mock_inner.return_value = "MOCK"
assert outer(1) == ("outer", "MOCK")
return "SUCCESS"
if __name__ == "__main__":
print test()
</code></pre>
<p>directly importing the symbol, however, requires you to change the fully qualified name.</p>
<p>direct import, <code>inner.inner</code> as fully qualified name.</p>
<pre><code># FAILS!
from mock import patch
from inner import inner
def outer(x):
return ("outer", inner(x))
@patch("inner.inner")
def test(mock_inner):
mock_inner.return_value = "MOCK"
assert outer(1) == ("outer", "MOCK")
return "SUCCESS"
if __name__ == "__main__":
print test()
</code></pre>
<p>produces</p>
<pre><code>% python patch_example.py
Traceback (most recent call last):
File "patch_example.py", line 14, in <module>
print test()
File "/usr/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
File "patch_example.py", line 10, in test
assert outer(1) == ("outer", "MOCK")
AssertionError
</code></pre>
<p>If I update the fully qualified path to <code>patch_example.inner</code>, the patch still fails.</p>
<pre><code># FAILS!
from mock import patch
from inner import inner
def outer(x):
return ("outer", inner(x))
@patch("patch_example.inner")
def test(mock_inner):
mock_inner.return_value = "MOCK"
assert outer(1) == ("outer", "MOCK")
return "SUCCESS"
if __name__ == "__main__":
print test()
% python patch_example.py
Traceback (most recent call last):
File "patch_example.py", line 14, in <module>
print test()
File "/usr/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
File "patch_example.py", line 10, in test
assert outer(1) == ("outer", "MOCK")
AssertionError
</code></pre>
<p>using <code>__main__.inner</code> as my fully qualified name patches the right thing though.</p>
<pre><code># WORKS!
from mock import patch
from inner import inner
def outer(x):
return ("outer", inner(x))
@patch("__main__.inner")
def test(mock_inner):
mock_inner.return_value = "MOCK"
assert outer(1) == ("outer", "MOCK")
return "SUCCESS"
if __name__ == "__main__":
print test()
</code></pre>
<p>prints "SUCCESS"</p>
<p>So, why can't I patch the value of inner when it's imported as <code>from inner import inner</code>, using either the fully qualified name of the original symbol (<code>inner.inner</code>) or the file name of the main Python module (<code>patch_example</code>) rather than <code>__main__</code>?</p>
<p>Tested with Python 2.7.12 on OS X.</p>
| 0 | 2016-08-11T17:32:20Z | 38,903,066 | <p>The problem is that once you directly import the symbol there is <strong>no link whatsoever</strong> between the binding you are using in the <code>__main__</code> module and the binding found in the <code>inner</code> module. So <code>patch</code>ing <em>the module</em> does <strong>not</strong> change the already imported symbols.</p>
<p>Importing a module using an alias doesn't matter because <code>patch</code> will look up the <code>sys.modules</code> dictionary, which still keeps track of the original name, so that's why this works (actually: when calling the mock the module is freshly imported, so it doesn't matter under which name you imported it when calling <code>patch</code>)</p>
<p>In other words: you have to patch <em>both</em> bindings because they are effectively unrelated. There is no way for patch to know where all references to <code>inner.inner</code> ended up and patch them.</p>
<p>In this situation the second argument of <code>patch</code> might be useful to specify an existing mock object that can be shared to patch all bindings.</p>
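<p>A runnable sketch of the "two bindings" point, using Python 3's stdlib <code>unittest.mock</code> and two modules fabricated in-process (the <code>inner_demo</code>/<code>consumer_demo</code> names are invented for the demo; the consumer's copied attribute stands in for <code>from inner import inner</code>):</p>

```python
import sys
import types
from unittest import mock

# Fabricate an "inner_demo" module so the example needs no files on disk.
inner_mod = types.ModuleType("inner_demo")
inner_mod.inner = lambda x: ("inner", x)
sys.modules["inner_demo"] = inner_mod

# Fabricate a consumer module that effectively did "from inner_demo import inner".
consumer = types.ModuleType("consumer_demo")
consumer.inner = inner_mod.inner          # the copied, now-independent binding
consumer.outer = lambda x: ("outer", consumer.inner(x))
sys.modules["consumer_demo"] = consumer

# Patching the original module does NOT touch the consumer's copy...
with mock.patch("inner_demo.inner", return_value="MOCK"):
    assert consumer.outer(1) == ("outer", ("inner", 1))  # still the real function

# ...but patching the binding in the consumer module is what outer() sees.
with mock.patch("consumer_demo.inner", return_value="MOCK"):
    assert consumer.outer(1) == ("outer", "MOCK")
```

<p>Patching <code>inner_demo.inner</code> alone leaves the consumer calling the real function; the copied name has to be patched where it lives.</p>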
| 1 | 2016-08-11T17:53:19Z | [
"python",
"magicmock"
] |
Variable Scope Issue in Python | 38,903,023 | <p>I am new to Python and I have been working with it for a bit, but I am stuck on a problem. Here is my code:</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
collatz(num/2,ctr)
else:
collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
<p>For any number I put in for <code>num</code>, let's say <code>9</code> for instance, and <code>0</code> for <code>ctr</code>, <code>ctr</code> always comes out as <code>1</code>. Am I using the <code>ctr</code> variable wrong?</p>
<p>EDIT:
I am trying to print out how many times the function is recursed. So <code>ctr</code> would be a counter for each recursion. </p>
| 3 | 2016-08-11T17:50:52Z | 38,903,067 | <p>I changed your recursive calls to set the value received back from the recursive calls into ctr. The way you wrote it, you were discarding the values you got back from recursing.</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
ctr=collatz(num/2,ctr)
else:
ctr=collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
| 3 | 2016-08-11T17:53:27Z | [
"python",
"recursion",
"call",
"callstack"
] |
Variable Scope Issue in Python | 38,903,023 | <p>I am new to Python and I have been working with it for a bit, but I am stuck on a problem. Here is my code:</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
collatz(num/2,ctr)
else:
collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
<p>For any number I put in for <code>num</code>, let's say <code>9</code> for instance, and <code>0</code> for <code>ctr</code>, <code>ctr</code> always comes out as <code>1</code>. Am I using the <code>ctr</code> variable wrong?</p>
<p>EDIT:
I am trying to print out how many times the function is recursed. So <code>ctr</code> would be a counter for each recursion. </p>
| 3 | 2016-08-11T17:50:52Z | 38,903,079 | <p>An example: </p>
<pre><code>def collatz(number):
if number % 2 == 0:
print(number // 2)
return number // 2
elif number % 2 == 1:
result = 3 * number + 1
print(result)
return result
n = input("Give me a number: ")
while n != 1:
n = collatz(int(n))
</code></pre>
| 0 | 2016-08-11T17:53:56Z | [
"python",
"recursion",
"call",
"callstack"
] |
Variable Scope Issue in Python | 38,903,023 | <p>I am new to Python and I have been working with it for a bit, but I am stuck on a problem. Here is my code:</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
collatz(num/2,ctr)
else:
collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
<p>For any number I put in for <code>num</code>, let's say <code>9</code> for instance, and <code>0</code> for <code>ctr</code>, <code>ctr</code> always comes out as <code>1</code>. Am I using the <code>ctr</code> variable wrong?</p>
<p>EDIT:
I am trying to print out how many times the function is recursed. So <code>ctr</code> would be a counter for each recursion. </p>
| 3 | 2016-08-11T17:50:52Z | 38,903,101 | <p>The variable will return the final number of the collatz sequence starting from whatever number you put in. The collatz conjecture says this will always be 1</p>
| -3 | 2016-08-11T17:55:22Z | [
"python",
"recursion",
"call",
"callstack"
] |
Variable Scope Issue in Python | 38,903,023 | <p>I am new to Python and I have been working with it for a bit, but I am stuck on a problem. Here is my code:</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
collatz(num/2,ctr)
else:
collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
<p>For any number I put in for <code>num</code>, let's say <code>9</code> for instance, and <code>0</code> for <code>ctr</code>, <code>ctr</code> always comes out as <code>1</code>. Am I using the <code>ctr</code> variable wrong?</p>
<p>EDIT:
I am trying to print out how many times the function is recursed. So <code>ctr</code> would be a counter for each recursion. </p>
| 3 | 2016-08-11T17:50:52Z | 38,903,235 | <p>The variable <code>ctr</code> in your example will always be <code>1</code> because of the order of the recursive call stack. As one value of <code>ctr</code> is returned, then the call stack will start returning the previous values of <code>ctr</code>. Basically, at the very last recursive call, the highest value of <code>ctr</code> will be returned. But since the method call at the bottom of the call stack returns the very last value aka the value that will be stored in <code>test</code>, <code>test</code> will always be <code>1</code>. Let's say I input parameters into <code>collatz</code> that would result in five total calls of the method. The call stack would look like this coming down,</p>
<pre><code>collatz returns ctr --> 5
collatz returns ctr --> 4
collatz returns ctr --> 3
collatz returns ctr --> 2
collatz returns ctr --> 1 //what matters because ctr is being returned with every method call
</code></pre>
<p>As you can see, no matter how many times <code>collatz</code> is called, <code>1</code> will always be returned because the call at the bottom of the call stack has <code>ctr</code> equaling <code>1</code>.</p>
<p>The solution can be a lot of things, but it really depends on the purpose of what you're trying to accomplish which isn't clearly stated in your question.</p>
<p>EDIT: If you want <code>ctr</code> to end up being the number of times a recursive call is made, then just assign <code>ctr</code> to the value of the method call. It should look like this,</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
ctr = collatz(num/2,ctr)
else:
            ctr = collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
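<p>A quick check of the fixed version (switched to floor division here so <code>num</code> stays an <code>int</code> on Python 3 as well):</p>

```python
def collatz(num, ctr):
    if num != 1:
        ctr += 1
        if num % 2 == 0:
            ctr = collatz(num // 2, ctr)   # keep the returned count
        else:
            ctr = collatz(num * 3 + 1, ctr)
    return ctr

print(collatz(9, 0))  # 19 -- the 3n+1 sequence from 9 takes 19 steps to reach 1
```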
| 3 | 2016-08-11T18:02:57Z | [
"python",
"recursion",
"call",
"callstack"
] |
Variable Scope Issue in Python | 38,903,023 | <p>I am new to Python and I have been working with it for a bit, but I am stuck on a problem. Here is my code:</p>
<pre><code>def collatz(num,ctr):
if(num != 1):
ctr+=1
if(num%2==0):
collatz(num/2,ctr)
else:
collatz(num*3+1,ctr)
return ctr
test=collatz(9,0)
</code></pre>
<p>For any number I put in for <code>num</code>, let's say <code>9</code> for instance, and <code>0</code> for <code>ctr</code>, <code>ctr</code> always comes out as <code>1</code>. Am I using the <code>ctr</code> variable wrong?</p>
<p>EDIT:
I am trying to print out how many times the function is recursed. So <code>ctr</code> would be a counter for each recursion. </p>
| 3 | 2016-08-11T17:50:52Z | 38,903,352 | <p>Function parameters in Python are passed by assignment: the parameter is just a new name bound to the object you pass in. If you pass a number to a function and the function rebinds its parameter (which is what <code>+=</code> does for an int), that change will not be visible outside the function:</p>
<pre><code>def foo(y):
y += 1
print("y=", y) # prints 11
x = 10
foo(x)
print("x=", x) # Still 10
</code></pre>
<p>In your case, the most direct fix is to make ctr into a global variable. It's very ugly because you need to reset the global back to 0 if you want to call the collatz function again, but I'm showing this alternative just to show that your logic is correct except for the parameter-rebinding bit. (Note that the collatz function doesn't return anything now; the answer is in the global variable).</p>
<pre><code>ctr = 0
def collatz(num):
global ctr
if(num != 1):
ctr+=1
if(num%2==0):
collatz(num/2)
else:
collatz(num*3+1)
ctr = 0
collatz(9)
print(ctr)
</code></pre>
<p>Since Python doesn't have tail-call-optimization, your current recursive code will crash with a stack overflow if the collatz sequence is longer than 1000 steps (this is Pythons default stack limit). You can avoid this problem by using a loop instead of recursion. This also lets use get rid of that troublesome global variable. The final result is a bit more idiomatic Python, in my opinion:</p>
<pre><code>def collatz(num):
ctr = 0
while num != 1:
ctr += 1
if num % 2 == 0:
num = num/2
else:
num = 3*num + 1
return ctr
print(collatz(9))
</code></pre>
<p>If you want to stick with using recursive functions, its usually cleaner to avoid using mutable assignment like you are trying to do. Instead of functions being "subroutines" that modify state, make them into something closer to mathematical functions, which receive a value and return a result that depends only on the inputs. It can be much easier to reason about recursion if you do this. I will leave this as an exercise but the typical "skeleton" of a recursive function is to have an if statement that checks for the base case and the recursive cases:</p>
<pre><code>def collatz(n):
if n == 1:
return 0
    elif n % 2 == 0:
# tip: something involving collatz(n/2)
return #???
else:
# tip: something involving collatz(3*n+1)
return #???
</code></pre>
| 0 | 2016-08-11T18:09:18Z | [
"python",
"recursion",
"call",
"callstack"
] |
Find the Number of Distinct Topics After LDA in Python/ R | 38,903,061 | <p>As far as I know, I need to fix the number of topics for LDA modeling in Python/ R. However, say I set <code>topic=10</code> while the results show that, for a document, nine topics are all about 'health' and the distinct number of topics for this document is <code>2</code> indeed. How can I spot it without examining the key words of each topic and manually count the real distinct topics?</p>
<p>P.S. I googled and learned that there are Vocabulary Word Lists (Word Banks) by Theme, and I could pair each topic with a theme according to the word lists. If several topics fall into the same theme, then I can combine them into one distinct topic. I guess it's an approach worth trying and I'm looking for smarter ideas, thanks.</p>
| 0 | 2016-08-11T17:53:03Z | 38,916,691 | <p>First, your question kind of assumes that topics identified by LDA correspond to real semantic topics - I'd be very careful about that assumption and take a look at the documents and words assigned to topics you want to interpret that way, as LDA topics often have random extra words assigned, can merge two or more actual topics into one (especially with few topics overall), and may not be meaningful at all ("junk" topics). </p>
<p>In answer to your question then: the idea of a "distinct number of topics" isn't clear-cut. Most of the work I've seen uses a simple threshold to decide if a document's topic proportion is "significant". </p>
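<p>A minimal sketch of that simple-threshold idea; the document-topic proportions below are made-up numbers, of the kind a topic model would give you per document:</p>

```python
# Invented topic proportions for a single document.
doc_topics = {0: 0.62, 1: 0.25, 2: 0.08, 3: 0.05}

threshold = 0.10  # arbitrary cut-off, as discussed above
significant = sorted(t for t, p in doc_topics.items() if p >= threshold)
print(significant)       # [0, 1]
print(len(significant))  # a crude "distinct number of topics" for this document
```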
<p>A more principled way is to look at the proportion of words assigned to that topic that appear in the document - if it's "significantly" higher than average, the topic is significant in the document, but again, this involves a somewhat arbitrary threshold. I don't think anything can beat close reading of some examples to make meaningful choices here.</p>
<p>I should note that, depending on how you set the document-topic prior (usually beta), you may not have each document focussed on just a few topics (as seems to be your case), but a much more even mix. In this case "distinct number of topics" starts to be less meaningful. </p>
<p>P.S. Using word lists that are meaningful in your application is not a bad way to identify candidate topics of interest. Especially useful if you have many topics in your model (:</p>
<p>P.P.S.: I hope you have a reasonable number of documents (at least some thousands), as LDA tends to be less meaningful with fewer, capturing chance word co-occurrences rather than meaningful ones.</p>
<p>P.P.P.S.: I'd go for a larger number of topics with parameter optimisation (as provided by the Mallet LDA implementation) - this effectively chooses a reasonable number of topics for your model, with very few words assigned to the "extra" topics.</p>
| 1 | 2016-08-12T11:17:53Z | [
"python",
"lda",
"topic-modeling",
"text-analysis"
] |
Python: Avoid using multiple nested for loops to iterate over strings | 38,903,075 | <p>Is there a simpler way to iterate over multiple strings than a massive amount of nested for loops? </p>
<pre><code>list = ['rst','uvw','xy']
for x in list[0]:
for y in list[1]:
for z in list[2]:
print x+y+z
rux
ruy
...
tvx
tvy
twx
twy
</code></pre>
<p>Example list I really want to avoid writing loops for:</p>
<pre><code>list = ['rst','uvw','xy','awfg22','xayx','1bbc1','thij','bob','thisistomuch']
</code></pre>
| 1 | 2016-08-11T17:53:49Z | 38,903,206 | <p>You need itertools.product:</p>
<pre><code>import itertools
list = ['rst','uvw','xy','awfg22','xayx','1bbc1','thij','bob','thisistomuch']
for x in itertools.product(*list):
print(''.join(x))
</code></pre>
<p>product returns all possible tuples of elements from iterators it gets. So</p>
<pre><code>itertools.product('ab', 'cd')
</code></pre>
<p>will return an iterator, yielding ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd').</p>
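<p>Quick check of that claim:</p>

```python
import itertools

pairs = list(itertools.product('ab', 'cd'))
print(pairs)  # [('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd')]
```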
| 2 | 2016-08-11T18:01:06Z | [
"python",
"for-loop"
] |
Python: Avoid using multiple nested for loops to iterate over strings | 38,903,075 | <p>Is there a simpler way to iterate over multiple strings than a massive amount of nested for loops? </p>
<pre><code>list = ['rst','uvw','xy']
for x in list[0]:
for y in list[1]:
for z in list[2]:
print x+y+z
rux
ruy
...
tvx
tvy
twx
twy
</code></pre>
<p>Example list I really want to avoid writing loops for:</p>
<pre><code>list = ['rst','uvw','xy','awfg22','xayx','1bbc1','thij','bob','thisistomuch']
</code></pre>
| 1 | 2016-08-11T17:53:49Z | 38,903,207 | <p>You are looking for the <code>product</code> function from <code>itertools</code>:</p>
<pre><code>import itertools
lst = ['rst','uvw','xy']
[''.join(s) for s in itertools.product(*lst)]
# ['rux',
# 'ruy',
# 'rvx',
# 'rvy',
# 'rwx',
# ...
# 'twx',
# 'twy']
</code></pre>
| 3 | 2016-08-11T18:01:07Z | [
"python",
"for-loop"
] |
Python: Avoid using multiple nested for loops to iterate over strings | 38,903,075 | <p>Is there a simpler way to iterate over multiple strings than a massive amount of nested for loops? </p>
<pre><code>list = ['rst','uvw','xy']
for x in list[0]:
for y in list[1]:
for z in list[2]:
print x+y+z
rux
ruy
...
tvx
tvy
twx
twy
</code></pre>
<p>Example list I really want to avoid writing loops for:</p>
<pre><code>list = ['rst','uvw','xy','awfg22','xayx','1bbc1','thij','bob','thisistomuch']
</code></pre>
| 1 | 2016-08-11T17:53:49Z | 38,903,436 | <p>Another way? Definitely. Simpler? Maybe not...</p>
<p>I'm guessing it's because you don't necessarily know how many strings you'll have in your list.</p>
<p>What about:</p>
<pre><code>sl = ['abc','mno','xyz']

def strCombo(l, s=''):
    if len(l) == 0:
        return [s]
    elif len(l) == 1:
        return [s + x for x in l[0]]
    else:
        # flatten the recursive results instead of nesting them
        combos = []
        for x in l[0]:
            combos += strCombo(l[1:], s + x)
        return combos

final = strCombo(sl)
</code></pre>
<p><code>final</code> now holds all 27 combinations for this <code>sl</code>.</p>
| 0 | 2016-08-11T18:14:22Z | [
"python",
"for-loop"
] |
Exception handling fail in Python | 38,903,175 | <p>I am very new to Python. I am trying to handle the exception in a file upload web API, but I am not able to catch it. If the upload succeeds, it shows the uploaded file.</p>
<p>app.py:</p>
<pre><code>from flask import Flask
from flask_cors import CORS, cross_origin
from flask import request,jsonify
import smtplib, os, cgi
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.text import MIMEText
from email import encoders
from werkzeug.utils import secure_filename
app = Flask(__name__)
CORS(app)
app.config['UPLOAD_FOLDER'] = 'upload/'
# These are the extension that we are accepting to be uploaded
app.config['ALLOWED_EXTENSIONS'] = set(['txt', 'pdf','docx','ods','xls'])
# For a given file, return whether it's an allowed type or not
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1] in app.config['ALLOWED_EXTENSIONS']
@app.route('/upload', methods=['POST'])
def upload():
file = request.files['file']
try:
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
return('file uploaded successfully')
except IOError:
return("fail file upload")
if __name__=="__main__":
app.run(debug=True)
</code></pre>
| -1 | 2016-08-11T17:59:28Z | 38,903,504 | <p>Try this to catch all exceptions:</p>
<pre><code>try:
raise ValueError('stuff')
except BaseException as e:
print 'Statement:', e.args
print 'Type:', type(e)
</code></pre>
<p>Outputs:</p>
<blockquote>
<p>Statement: ('stuff',) </p>
<p>Type: (class 'ValueError')</p>
</blockquote>
<p>This will let you see what's happening - using <code>BaseException</code> will capture ALL exception types, then you can query <code>e</code> to see what it is. You should obviously be careful capturing all exceptions, and it isn't generally considered very good practice, but it might be useful for debugging at least. When you see what the type is you can go back and narrow it down.</p>
| 1 | 2016-08-11T18:17:32Z | [
"python",
"flask"
] |
Convert a For loop to While loop in python | 38,903,208 | <p>I am trying to convert these two <strong>for</strong> loops to <strong>while</strong> loops:</p>
<pre><code>sum = 0
for i in range (10, 30):
for j in range(i, 10*i):
sum += j
</code></pre>
<p>Any ideas?</p>
| -4 | 2016-08-11T18:01:11Z | 38,903,315 | <p><code>for i in range(a,b)</code> runs the loop body for values of <code>i</code> starting from <code>a</code> until it reaches <code>b-1</code>, and you can replicate the same thing with a <code>while</code> loop. <strong>Before starting the loop we initialise <code>i</code> to be equal to <code>a</code></strong>, then keep increasing it by 1 after each iteration.</p>
<p>Before starting the next iteration we check that <code>i</code> is still less than <code>b</code>; if not, the loop ends.</p>
<p>Watch this <a href="https://www.youtube.com/watch?v=Q3T1yyGQd6o" rel="nofollow">tutorial</a> for more info.</p>
<pre><code>sum = 0
i = 10
while i < 30:
    j = i
    while j < 10 * i:
        sum += j
        j += 1
    i += 1
</code></pre>
| 1 | 2016-08-11T18:07:15Z | [
"python"
] |
How to write the if else condition inside the list python? | 38,903,297 | <p>I got the pixel size for the image , i need to write a if condition at the pixel </p>
<p>if jpg or tiff: do something
else '0,0'</p>
<p>How can i write in the below code ?</p>
<pre><code>def get(self, request, **response_kwargs):
main_request = MediaRequest.objects.get(request_unique_id=self.kwargs['request_unique_id'])
files = MediaFile.objects.filter(request=main_request)
files_list = []
for media_file in files:
files_list.append ({
'preview' : "/render/" + str(main_request.request_unique_id) + "/" + media_file.filename,
'name' : media_file.filename,
'status' : media_file.status,
'comment' : media_file.comment,
'id':media_file.id,
'pixel' :
if "jpg" not in media_file.filename:
Image.open(settings.MEDIA_ROOT + main_request.request_unique_id + "/"+ media_file.filename).size
else:
return '0, 0'
})
</code></pre>
| -3 | 2016-08-11T18:06:27Z | 38,903,389 | <p>See ternary operator (or conditional expression): <code>"X if C else Y"</code></p>
<p><a href="https://www.python.org/dev/peps/pep-0308/" rel="nofollow">https://www.python.org/dev/peps/pep-0308/</a></p>
<pre><code>for media_file in files:
files_list.append ({
'preview' : "/render/" + str(main_request.request_unique_id) + "/" + media_file.filename,
'name' : media_file.filename,
'status' : media_file.status,
'comment' : media_file.comment,
'id':media_file.id,
'pixel' : Image.open(settings.MEDIA_ROOT + main_request.request_unique_id + "/"+ media_file.filename).size if "jpg" not in media_file.filename else '0, 0'
})
</code></pre>
| 0 | 2016-08-11T18:11:16Z | [
"python",
"django",
"python-2.7",
"python-3.x",
"django-views"
] |
How to write the if else condition inside the list python? | 38,903,297 | <p>I got the pixel size for the image , i need to write a if condition at the pixel </p>
<p>if jpg or tiff: do something
else '0,0'</p>
<p>How can i write in the below code ?</p>
<pre><code>def get(self, request, **response_kwargs):
main_request = MediaRequest.objects.get(request_unique_id=self.kwargs['request_unique_id'])
files = MediaFile.objects.filter(request=main_request)
files_list = []
for media_file in files:
files_list.append ({
'preview' : "/render/" + str(main_request.request_unique_id) + "/" + media_file.filename,
'name' : media_file.filename,
'status' : media_file.status,
'comment' : media_file.comment,
'id':media_file.id,
'pixel' :
if "jpg" not in media_file.filename:
Image.open(settings.MEDIA_ROOT + main_request.request_unique_id + "/"+ media_file.filename).size
else:
return '0, 0'
})
</code></pre>
| -3 | 2016-08-11T18:06:27Z | 38,904,814 | <pre><code>'pixel': Image.open(settings.MEDIA_ROOT + main_request.request_unique_id
                    + "/" + media_file.filename).size
         if ("JPG" in media_file.filename.upper())
            or ("TIF" in media_file.filename.upper())
         else '0, 0'}
</code></pre>
| 0 | 2016-08-11T19:39:22Z | [
"python",
"django",
"python-2.7",
"python-3.x",
"django-views"
] |
Handling "flags" types in telegram's TL schema language | 38,903,321 | <p>I wrote a tl parser so can now use the latest layer (53). But I am unsure how to handle "flags" types. They are only mentioned in the tl docs but not defined (as far as I can tell) at the bottom of the page here: <a href="https://core.telegram.org/mtproto/TL-formal" rel="nofollow">link</a>. </p>
<p>For example, when a method returns a 'message' type it should look like this:
<code>message#c09be45f flags:# out:flags.1?true mentioned:flags.4?true media_unread:flags.5?true silent:flags.13?true post:flags.14?true id:int from_id:flags.8?int to_id:Peer fwd_from:flags.2?MessageFwdHeader via_bot_id:flags.11?int reply_to_msg_id:flags.3?int date:int message:string media:flags.9?MessageMedia reply_markup:flags.6?ReplyMarkup entities:flags.7?Vector<MessageEntity> views:flags.10?int edit_date:flags.15?int = Message;</code></p>
<p>If I understand correctly each flag is a bit set in some variable, right?</p>
<p>My parser breaks out the 'message' type like this:</p>
<blockquote>
<pre><code> id: -1063525281
params:
name: flags
type:
name: out
bit: 1
type: true
name: mentioned
bit: 4
type: true
name: media_unread
bit: 5
type: true
name: silent
bit: 13
type: true
name: post
bit: 14
type: true
name: from_id
bit: 8
type: int
name: fwd_from
bit: 2
type: MessageFwdHeader
name: via_bot_id
bit: 11
type: int
name: reply_to_msg_id
bit: 3
type: int
name: media
bit: 9
type: MessageMedia
name: reply_markup
bit: 6
type: ReplyMarkup
name: entities
bit: 7
type: Vector<MessageEntity>
name: views
bit: 10
type: int
name: edit_date
bit: 15
type: int
name: id
type: int
name: to_id
type: Peer
name: date
type: int
name: message
type: string
predicate: message
type: Message
</code></pre>
</blockquote>
<p>But if the flags are bits in some variable, which variable?</p>
<p>Related: is tl based on a formal, standardized language spec or was it created specifically for telegram? I ask because if it is a subset of a formal language (like yaml) it might be better to use an already known parser for tl instead of reinventing the wheel.</p>
| 1 | 2016-08-11T18:07:27Z | 38,904,456 | <blockquote>
<p>But if the flags are bits in some variable, which variable?</p>
</blockquote>
<pre><code>Message#c09be45f flags:# out:flags.1?true mentioned:flags.4?true media_unread:flags.5?true silent:flags.13?true post:flags.14?true id:int from_id:flags.8?int to_id:Peer fwd_from:flags.2?MessageFwdHeader via_bot_id:flags.11?int reply_to_msg_id:flags.3?int date:int message:string media:flags.9?MessageMedia reply_markup:flags.6?ReplyMarkup entities:flags.7?Vector<MessageEntity> views:flags.10?int edit_date:flags.15?int = Message;
</code></pre>
<p><strong>Example</strong>: <code>flags:# out:flags.1?true</code></p>
<p>Decoding Flags: BinaryAND(flags, 2^ix) === 2^ix --> this will help you determine if a field is included or not</p>
<p><strong>flags</strong> = The value of the <code>flags</code> field; this is usually the first field for objects with flags</p>
<p>ix == flag index, this is a number that indicates flag position, e.g. <code>out:flags.1?true</code> here out is the field in flag position 1, and the type is true</p>
<p>For the example above, <code>out</code> will have a value of <code>true</code> if <code>BAND(flags, 2^1) == 2^1</code>; otherwise the <code>out</code> field is absent and is ignored</p>
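<p>The same bit test written as a Python sketch (the <code>flags</code> value below is made up, with bits 1, 2 and 11 set):</p>

```python
flags = (1 << 1) | (1 << 2) | (1 << 11)  # invented example value: 2054

def has_flag(flags, ix):
    # bit ix is present iff BinaryAND(flags, 2^ix) == 2^ix
    return flags & (1 << ix) == (1 << ix)

print(has_flag(flags, 1))   # True  -> out:flags.1?true is present
print(has_flag(flags, 4))   # False -> mentioned:flags.4?true is absent
print(has_flag(flags, 2))   # True  -> fwd_from:flags.2?MessageFwdHeader follows
print(has_flag(flags, 11))  # True  -> via_bot_id:flags.11?int follows
```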
<p><strong>Code - Message Encoding (Elixir)</strong></p>
<pre><code>def encode(%Message{} = x), do: <<95, 228, 155, 192, encode(:Int, x.flags)::binary, enc_f(:True, x.out, x.flags, 2)::binary, enc_f(:True, x.mentioned, x.flags, 16)::binary, enc_f(:True, x.media_unread, x.flags, 32)::binary, enc_f(:True, x.silent, x.flags, 8192)::binary, enc_f(:True, x.post, x.flags, 16384)::binary, encode(:Int, x.id)::binary, enc_f(:Int, x.from_id, x.flags, 256)::binary, encode(x.to_id)::binary, enc_f(x.fwd_from, x.flags, 4)::binary, enc_f(:Int, x.via_bot_id, x.flags, 2048)::binary, enc_f(:Int, x.reply_to_msg_id, x.flags, 8)::binary, encode(:Int, x.date)::binary, encode(:String, x.message)::binary, enc_f(x.media, x.flags, 512)::binary, enc_f(x.reply_markup, x.flags, 64)::binary, enc_vf(x.entities, x.flags, 128)::binary, enc_f(:Int, x.views, x.flags, 1024)::binary, enc_f(:Int, x.edit_date, x.flags, 32768)::binary>>
</code></pre>
<p><strong>Code - Message Decode (Elixir)</strong></p>
<pre><code> def decode(<<95, 228, 155, 192, bin::binary>>) do
{flags, bin} = decode(:Int, bin)
{out, bin} = decode(:True, bin, flags, 2) # 1
{mentioned, bin} = decode(:True, bin, flags, 16) # 4
{media_unread, bin} = decode(:True, bin, flags, 32) # 5
{silent, bin} = decode(:True, bin, flags, 8192) # 13
{post, bin} = decode(:True, bin, flags, 16384) # 14
{id, bin} = decode(:Int, bin)
{from_id, bin} = decode(:Int, bin, flags, 256) # 8
{to_id, bin} = decode(bin)
{fwd_from, bin} = decode(bin, flags, 4) # 2
{via_bot_id, bin} = decode(:Int, bin, flags, 2048) # 11
{reply_to_msg_id, bin} = decode(:Int, bin, flags, 8) # 3
{date, bin} = decode(:Int, bin)
{message, bin} = decode(:String, bin)
{media, bin} = decode(bin, flags, 512) # 9
{reply_markup, bin} = decode(bin, flags, 64) # 6
{entities, bin} = decode([:MessageEntity], bin, flags, 128) # 7
{views, bin} = decode(:Int, bin, flags, 1024) # 10
{edit_date, bin} = decode(:Int, bin, flags, 32768) # 15
{%Message{flags: flags, out: out, mentioned: mentioned, media_unread: media_unread, silent: silent, post: post, id: id, from_id: from_id, to_id: to_id, fwd_from: fwd_from, via_bot_id: via_bot_id, reply_to_msg_id: reply_to_msg_id, date: date, message: message, media: media, reply_markup: reply_markup, entities: entities, views: views, edit_date: edit_date}, bin}
end
#5 == 2^5 == 32
#4 == 2^4 == 16
</code></pre>
<p>So the mask for a flag at index <code>ix</code> is simply <code>2^ix</code> (e.g. index 5 gives mask 32, index 4 gives mask 16, matching the comments in the code above)</p>
| 1 | 2016-08-11T19:16:55Z | [
"python",
"api",
"telegram"
] |
Using Regular expressions to match a portion of the string?(python) | 38,903,383 | <p>What regular expression can i use to match genes(<strong>in bold</strong>) in the gene list string:</p>
<p>GENE_LIST: <strong>F59A7.7</strong>; <strong>T25D3.3</strong>; <strong>F13B12.4</strong>; <strong>cysl-1</strong>; <strong>cysl-2</strong>; <strong>cysl-3</strong>; <strong>cysl-4</strong>; <strong>F01D4.8</strong></p>
<p>I tried : <em>GENE_List:((( \w+).(\w+));</em>)+* but it only captures the last gene</p>
| 0 | 2016-08-11T18:11:06Z | 38,903,507 | <p><strong>UPDATE</strong></p>
<p>It's in fact much simpler:</p>
<pre><code>[^\s;]+
</code></pre>
<p>However, first use a substring to take only the part you need (the genes, without the <code>GENE_LIST:</code> label).</p>
<p>demo: <a href="https://regex101.com/r/yW5tU1/1" rel="nofollow">regex demo</a></p>
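<p>In Python that might look like this (a sketch using the sample string from the question):</p>

```python
import re

s = "GENE_LIST: F59A7.7; T25D3.3; F13B12.4; cysl-1; cysl-2; cysl-3; cysl-4; F01D4.8"
genes_part = s.split(":", 1)[1]  # drop the "GENE_LIST:" label first
genes = re.findall(r"[^\s;]+", genes_part)
print(genes)
# ['F59A7.7', 'T25D3.3', 'F13B12.4', 'cysl-1', 'cysl-2', 'cysl-3', 'cysl-4', 'F01D4.8']
```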
| 0 | 2016-08-11T18:17:42Z | [
"python",
"regex"
] |
Using Regular expressions to match a portion of the string?(python) | 38,903,383 | <p>What regular expression can i use to match genes(<strong>in bold</strong>) in the gene list string:</p>
<p>GENE_LIST: <strong>F59A7.7</strong>; <strong>T25D3.3</strong>; <strong>F13B12.4</strong>; <strong>cysl-1</strong>; <strong>cysl-2</strong>; <strong>cysl-3</strong>; <strong>cysl-4</strong>; <strong>F01D4.8</strong></p>
<p>I tried : <em>GENE_List:((( \w+).(\w+));</em>)+* but it only captures the last gene</p>
| 0 | 2016-08-11T18:11:06Z | 38,903,564 | <p>Given:</p>
<pre><code>>>> s="GENE_LIST: F59A7.7; T25D3.3; F13B12.4; cysl-1; cysl-2; cysl-3; cysl-4; F01D4.8"
</code></pre>
<p>You can use Python string methods to do:</p>
<pre><code>>>> s.split(': ')[1].split('; ')
['F59A7.7', 'T25D3.3', 'F13B12.4', 'cysl-1', 'cysl-2', 'cysl-3', 'cysl-4', 'F01D4.8']
</code></pre>
<p>For a regex:</p>
<pre><code>(?<=[:;]\s)([^\s;]+)
</code></pre>
<p><a href="https://regex101.com/r/mZ5dB8/1" rel="nofollow">Demo</a></p>
<p>Or, in Python:</p>
<pre><code>>>> re.findall(r'(?<=[:;]\s)([^\s;]+)', s)
['F59A7.7', 'T25D3.3', 'F13B12.4', 'cysl-1', 'cysl-2', 'cysl-3', 'cysl-4', 'F01D4.8']
</code></pre>
| 1 | 2016-08-11T18:20:34Z | [
"python",
"regex"
] |
Using Regular expressions to match a portion of the string?(python) | 38,903,383 | <p>What regular expression can i use to match genes(<strong>in bold</strong>) in the gene list string:</p>
<p>GENE_LIST: <strong>F59A7.7</strong>; <strong>T25D3.3</strong>; <strong>F13B12.4</strong>; <strong>cysl-1</strong>; <strong>cysl-2</strong>; <strong>cysl-3</strong>; <strong>cysl-4</strong>; <strong>F01D4.8</strong></p>
<p>I tried : <em>GENE_List:((( \w+).(\w+));</em>)+* but it only captures the last gene</p>
| 0 | 2016-08-11T18:11:06Z | 38,903,585 | <p>You can use the following:</p>
<pre><code>\s([^;\s]+)
</code></pre>
<p><a href="https://regex101.com/r/iC8zQ7/2" rel="nofollow">Demo</a></p>
<ul>
<li>The captured group, <code>([^;\s]+)</code>, will contain the desired substrings followed by whitespace (<code>\s</code>)</li>
</ul>
<hr>
<pre><code>>>> s = 'GENE_LIST: F59A7.7; T25D3.3; F13B12.4; cysl-1; cysl-2; cysl-3; cysl-4; F01D4.8'
>>> re.findall(r'\s([^;\s]+)', s)
['F59A7.7', 'T25D3.3', 'F13B12.4', 'cysl-1', 'cysl-2', 'cysl-3', 'cysl-4', 'F01D4.8']
</code></pre>
| 1 | 2016-08-11T18:21:28Z | [
"python",
"regex"
] |
Using Regular expressions to match a portion of the string?(python) | 38,903,383 | <p>What regular expression can i use to match genes(<strong>in bold</strong>) in the gene list string:</p>
<p>GENE_LIST: <strong>F59A7.7</strong>; <strong>T25D3.3</strong>; <strong>F13B12.4</strong>; <strong>cysl-1</strong>; <strong>cysl-2</strong>; <strong>cysl-3</strong>; <strong>cysl-4</strong>; <strong>F01D4.8</strong></p>
<p>I tried : <em>GENE_List:((( \w+).(\w+));</em>)+* but it only captures the last gene</p>
| 0 | 2016-08-11T18:11:06Z | 38,928,788 | <pre><code>string = "GENE_LIST: F59A7.7; T25D3.3; F13B12.4; cysl-1; cysl-2; cysl-3; cysl-4; F01D4.8"
re.findall(r"([^;\s]+)(?:;|$)", string)
</code></pre>
<p>The output is:</p>
<pre><code>['F59A7.7',
'T25D3.3',
'F13B12.4',
'cysl-1',
'cysl-2',
'cysl-3',
'cysl-4',
'F01D4.8']
</code></pre>
| 0 | 2016-08-13T03:03:05Z | [
"python",
"regex"
] |
Pip install-couldn't find a version that satisfies the requirement | 38,903,415 | <p>I am trying to install a package called <a href="https://github.com/Jefferson-Henrique/GetOldTweets-python" rel="nofollow">got</a> using pip. But it keeps showing up errors of 'couldn't find a version that satisfies the requirement".</p>
<p><a href="http://i.stack.imgur.com/8kdNc.png" rel="nofollow"><img src="http://i.stack.imgur.com/8kdNc.png" alt="error"></a></p>
<p>I've searched online about the solutions. There are some explanation saying to try pip freeze > requirements.txt. But it still remains a blackbox to me.</p>
<p>What is the problem here and what should I do exactly to install the package?</p>
<p>Thanks!</p>
| 0 | 2016-08-11T18:12:42Z | 38,904,214 | <p>Your package <code>got</code> is indeed not on PyPI.</p>
<p>Your error is thrown when no matching package has been found.</p>
<p>eg:</p>
<pre><code>$ pip install notfoundpackage
Collecting notfoundpackage
Could not find a version that satisfies the requirement notfoundpackage (from versions: )
No matching distribution found for notfoundpackage
</code></pre>
<p>Though, because you know the github link, you can clone the repository using git.</p>
<pre><code>git clone git@github.com:Jefferson-Henrique/GetOldTweets-python.git
cd GetOldTweets-python
python Exporter.py -h
</code></pre>
<p>If you really need a python library for tweeter, some other library already exist, like <a href="https://github.com/ryanmcgrath/twython/tree/master" rel="nofollow">twython</a>.</p>
| 0 | 2016-08-11T18:59:16Z | [
"python",
"package",
"install",
"pip"
] |
Pip install-couldn't find a version that satisfies the requirement | 38,903,415 | <p>I am trying to install a package called <a href="https://github.com/Jefferson-Henrique/GetOldTweets-python" rel="nofollow">got</a> using pip. But it keeps showing up errors of 'couldn't find a version that satisfies the requirement".</p>
<p><a href="http://i.stack.imgur.com/8kdNc.png" rel="nofollow"><img src="http://i.stack.imgur.com/8kdNc.png" alt="error"></a></p>
<p>I've searched online about the solutions. There are some explanation saying to try pip freeze > requirements.txt. But it still remains a blackbox to me.</p>
<p>What is the problem here and what should I do exactly to install the package?</p>
<p>Thanks!</p>
| 0 | 2016-08-11T18:12:42Z | 38,904,430 | <p>This package doesn't include a <code>setup.py</code>, so you can't install it from pip. </p>
<p>If it did, you could install it with:
<code>
pip install git+https://github.com/Jefferson-Henrique/GetOldTweets-python.git
</code></p>
| 0 | 2016-08-11T19:14:58Z | [
"python",
"package",
"install",
"pip"
] |
Python - cannot access class property | 38,903,477 | <p>I'm writing a class to access a mySQL db from python.</p>
<p>Here's the code:</p>
<pre><code>#!/usr/bin/python
import MySQLdb
import configparser
class db():
def __init__(self):
try:
self.handler = MySQLdb.connect(host="localhost", user=user, passwd=passwd, db=dbname)
self.cur = self.handler.cursor()
except:
print "Couldn't connect to db"
def query(self, query):
self.lastQuery = query
self.res = self.cur.execute(query)
def close(self):
self.handler.close()
</code></pre>
<p>When I'm trying to call the class, it give me the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "db.class.py", line 6, in <module>
class db():
File "db.class.py", line 25, in db
self.res = self.cur.execute(query)
NameError: name 'self' is not defined
</code></pre>
<p>I've been searching and commonly there's people that forget to define the method with 'self' as an argument. But I'm including it.</p>
<p>Could anyone help me?</p>
<p>Thanks.</p>
<p>UPDATE:</p>
<p>I checked whitespaces and tabs and that is not the problem.</p>
| -1 | 2016-08-11T18:16:13Z | 38,903,555 | <p>Your code has mixed tabs and spaces, causing Python to get confused about what statements are at what indentation level. The assignment to <code>self.res</code> has ended up at class level, outside of the method it was intended to be a part of.</p>
<p>Stop mixing tabs and spaces. Turn on "show whitespace" in your editor to make the problem more visible, and consider running Python with the <code>-tt</code> flag to give an error on ambiguous mixing of tabs and spaces.</p>
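<p>If your editor cannot show whitespace, a small helper like the following (my addition, not part of the answer above) can locate lines whose indentation mixes tabs and spaces:</p>

```python
def mixed_indent_lines(source):
    """Return line numbers whose indentation mixes tabs and spaces."""
    bad = []
    for lineno, line in enumerate(source.splitlines(), 1):
        # Everything before the first non-whitespace character
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent and " " in indent:
            bad.append(lineno)
    return bad

# e.g. mixed_indent_lines(open("db.py").read())  # "db.py" is a stand-in name
```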
| 5 | 2016-08-11T18:20:15Z | [
"python",
"class"
] |
printing from csv.reader | 38,903,650 | <p>This should be an easy one, but I'm having a bit of a brain fart. the CSV maintains a list of four latitude and longitude pairs. Based on the code, if I print row[0] it prints just the latitudes and if I print row[1] it prints the longitudes. How to I format the code to print a specific lat/lon pair instead? Say.. The second lat/lon pair in the CSV. </p>
<pre><code>import csv
with open('120101.KAP.csv','rb') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
print row[0]
</code></pre>
| 0 | 2016-08-11T18:26:04Z | 38,903,724 | <p>Looping over <code>reader</code> gives you each row. If you wanted to get the second row, use the <a href="https://docs.python.org/2/library/functions.html#next" rel="nofollow"><code>next()</code> function</a> instead, ignore one and get the second:</p>
<pre><code>reader = csv.reader(csvfile)
next(reader) # ignore
row = next(reader) # second row
print row # print the second row.
</code></pre>
<p>You can generalise this by using the <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice()</code> object</a> to do the skipping for you:</p>
<pre><code>from itertools import islice
reader = csv.reader(csvfile)
row = next(islice(reader, rownumber, None))  # skip to index rownumber, read that
print row
</code></pre>
<p>Take into account that counting starts at 0, so "second row" is <code>rownumber = 1</code>.</p>
<p>Or you could just read <em>all</em> rows into a list and index into that:</p>
<pre><code>reader = csv.reader(csvfile)
rows = list(reader)
print rows[1] # print the second row
print rows[3] # print the fourth row
</code></pre>
<p>Only do this (loading everything into a list) if there are a limited number of rows. Iteration over the reader only produces one row at a time and uses a file buffer for efficient reading, limiting how much memory is used; you could process gigantic CSV files this way.</p>
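<p>Another lightweight variant (a sketch; the in-memory text below stands in for your open CSV file) uses <code>enumerate</code> to stop at the wanted row without loading the rest:</p>

```python
import csv
import io

# Stand-in for open('120101.KAP.csv', 'rb'): four lat/lon pairs
data = io.StringIO("31.1,-80.2\n30.5,-81.0\n29.9,-82.3\n28.7,-83.1\n")
wanted = 1  # the second pair, counting from 0

for i, row in enumerate(csv.reader(data)):
    if i == wanted:
        print(row)  # ['30.5', '-81.0']
        break
```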
| 3 | 2016-08-11T18:30:00Z | [
"python",
"csv"
] |
Django form input missing in POST data | 38,903,691 | <p>I'm using Django 1.9.8 and I'm trying to learn how to use form validation. I'm working on a form for registering users. I have 2 problems that I'm stuck on.</p>
<ol>
<li><p>The field for re-entering the password isn't accessible (rather, just not there at all) in the forms.py file methods. When I use <code>raise Exception(self.cleaned_data.get('password'))</code> to view the content, the password is shown. I can also view the username and email. When I do the same for the repassword field, it shows <code>None</code> <strong>Edit - this was solved by changing the forms.py file to use a clean() method</strong></p></li>
<li><p>The validation errors are not displaying at all on my form. If there are errors the redirect back to the form is working, but there is nothing in the inputs for the user to fix, nor is the error displayed. <strong>Edit - solved by changing the RegisterForm call after the else statement to <code>form = RegisterForm(request.POST)</code></strong></p></li>
</ol>
<p>Here is the forms.py file</p>
<pre><code>#forms.py
class RegisterForm(forms.Form):
username = forms.CharField(label="Username", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'username'}))
email = forms.CharField(label="Email", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'email'}))
password = forms.CharField(label="Password", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'password', 'type' : 'password'}))
repassword = forms.CharField(label="RePassword", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'repassword', 'type' : 'password'}))
#Edit - this was changed to the clean() function in the selected answer.
def clean_password(self):
password1 = self.cleaned_data.get('password')
password2 = self.cleaned_data.get('repassword')
#raise Exception(self.cleaned_data.get('repassword')) # None is displayed
if password1 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
return self.cleaned_data
</code></pre>
<p>Here's the form template</p>
<pre><code>#register.html
{% if form.errors %}
<h2>ERROR!</h2>
{% for field in form %}
{% for error in field.errors %}
<div class="alert alert-error">
<strong>{{ error|escape }}</strong>
</div>
{% endfor %}
{% endfor %}
{% endif %}
<form method="post" action="" id="RegisterForm">
{% csrf_token %}
<p class="bs-component">
<table>
<tr>
<td>{{ form.username.label_tag }}</td>
<td>{{ form.username }}</td>
</tr>
<tr>
<td>{{ form.email.label_tag }}</td>
<td>{{ form.email }}</td>
</tr>
<tr>
<td>{{ form.password.label_tag }}</td>
<td>{{ form.password }}</td>
</tr>
<tr>
<td>{{ form.repassword.label_tag }}</td>
<td>{{ form.repassword }}</td>
</tr>
</table>
</p>
<p class="bs-component">
<center>
<input class="btn btn-success btn-sm" type="submit" value="Register" />
</center>
</p>
<input type="hidden" name="next" value="{{ next }}" />
</form>
</code></pre>
<p>Here's the view</p>
<pre><code>#views.py
class RegisterViewSet(viewsets.ViewSet):
#GET requests
def register(self,request):
return render(request, 'authorization/register.html', {'form': RegisterForm})
#POST requests
def create(self,request):
form = RegisterForm(request.POST)
if form.is_valid():
username = request.POST['username']
email = request.POST['email']
password = request.POST['password']
user = User.objects.create_user(username,email,password)
user.save()
return HttpResponseRedirect('/users')
else:
return render(request, 'authorization/register.html', {'form': RegisterForm })
#changed the previous line to the following. This fixed the errors not displaying
#return render(request, 'authorization/register.html', {'form': RegisterForm(request.POST) })
</code></pre>
| 1 | 2016-08-11T18:27:49Z | 38,904,173 | <p>The reason <code>self.cleaned_data.get('repassword')</code> returns <code>None</code> in that method is that fields are cleaned in declaration order, so <code>repassword</code> has not been cleaned yet when <code>clean_password</code> runs; in any case, <code>clean_password</code> is not the right place for performing validations of fields that depend on each other.</p>
<p>According to <a href="https://docs.djangoproject.com/en/1.10/ref/forms/validation/#cleaning-and-validating-fields-that-depend-on-each-other" rel="nofollow">docs</a>, you should perform that kind of validation in <a href="https://docs.djangoproject.com/en/1.10/ref/forms/api/#django.forms.Form.clean" rel="nofollow"><code>clean()</code></a> function:</p>
<p><strong>views.py</strong></p>
<pre><code>def register(self,request):
form = RegisterForm() # Notice `()` at the end
return render(request, 'authorization/register.html', {'form': form})
</code></pre>
<p><strong>forms.py</strong></p>
<pre><code>...
def clean(self):
cleaned_data = super(RegisterForm, self).clean()
password1 = cleaned_data.get('password')
password2 = cleaned_data.get('repassword')
if password1 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
</code></pre>
<p>Please note that you don't have to implement <code>clean_password</code> and <code>clean_repassword</code>. </p>
<p><em>(If you think you still have to implement <code>clean_password</code>, you have to return <code>password1</code> from it, not <code>self.cleaned_data</code>.)</em></p>
<p>You also need to render form errors correctly as described in the <a href="https://docs.djangoproject.com/en/1.10/topics/forms/#rendering-form-error-messages" rel="nofollow">docs</a>.</p>
<p>Don't forget to add it in your template:</p>
<pre><code>{{ form.non_field_errors }}
</code></pre>
<p>As for the second error, the problem is, every time the validation fails you are returning a new fresh <code>RegisterForm</code> instance instead of the invalidated one.</p>
<p>You should change the line in <code>create()</code> function:</p>
<pre><code>return render(request, 'authorization/register.html', {'form': RegisterForm})
</code></pre>
<p>to this:</p>
<pre><code>return render(request, 'authorization/register.html', {'form': form})
</code></pre>
| 2 | 2016-08-11T18:56:48Z | [
"python",
"django",
"django-forms",
"django-templates"
] |
Building a set of tile coordinates | 38,903,768 | <p>I have an image which I want to divide into tiles of specific size (and cropping tiles that don't fit).</p>
<p>The output of this operation should be a list of coordinates in tuples <code>[(x, y, width, height),...]</code>. For example, dividing a 50x50 image in tiles of size 20 would give: <code>[(0,0,20,20),(20,0,20,20),(40,0,10,20),(0,20,20,20),(20,20,20,20),(40,20,10,20),...]</code> etc.</p>
<p>Given a <code>height</code>, <code>width</code> and <code>tile_size</code>, it seems like I should be able to do this in a single list comprehension, but I can't wrap my head around it. Any help would be appreciated. Thanks!</p>
| 0 | 2016-08-11T18:32:54Z | 38,904,126 | <p>Got it with:</p>
<pre><code>w_tiles, h_tiles = -(-width // tile_size), -(-height // tile_size)  # ceiling division
w_padding, h_padding = width - (w_tiles - 1) * tile_size, height - (h_tiles - 1) * tile_size
output = [(x, y, w, h)
          for x, w in zip(range(width)[::tile_size], [tile_size] * (w_tiles - 1) + [w_padding])
          for y, h in zip(range(height)[::tile_size], [tile_size] * (h_tiles - 1) + [h_padding])]
</code></pre>
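<p>For comparison (not part of the one-liner approach above), a plain nested loop that produces the same clipped tiles is easier to verify:</p>

```python
def tiles(width, height, tile_size):
    # Walk the grid row by row, clipping the last tile in each direction
    out = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            out.append((x, y, min(tile_size, width - x), min(tile_size, height - y)))
    return out

print(tiles(50, 50, 20))
```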
| 0 | 2016-08-11T18:54:37Z | [
"python",
"list-comprehension"
] |
Building a set of tile coordinates | 38,903,768 | <p>I have an image which I want to divide into tiles of specific size (and cropping tiles that don't fit).</p>
<p>The output of this operation should be a list of coordinates in tuples <code>[(x, y, width, height),...]</code>. For example, dividing a 50x50 image in tiles of size 20 would give: <code>[(0,0,20,20),(20,0,20,20),(40,0,10,20),(0,20,20,20),(20,20,20,20),(40,20,10,20),...]</code> etc.</p>
<p>Given a <code>height</code>, <code>width</code> and <code>tile_size</code>, it seems like I should be able to do this in a single list comprehension, but I can't wrap my head around it. Any help would be appreciated. Thanks!</p>
| 0 | 2016-08-11T18:32:54Z | 38,904,711 | <pre><code>import itertools
def tiles(h, w, ts):
# here is the one list comprehension for list of tuples
    return [tuple(list(ele) + [ts if w-ele[0] > ts else w-ele[0], ts if h-ele[1] > ts else h-ele[1]]) for ele in itertools.product(*[filter(lambda x: x % ts == 0, range(w)), filter(lambda x: x % ts == 0, range(h))])]
print tiles(50, 50, 20)
</code></pre>
<p>[(0, 0, 20, 20), (0, 20, 20, 20), (0, 40, 20, 10), (20, 0, 20, 20), (20, 20, 20, 20), (20, 40, 20, 10), (40, 0, 10, 20), (40, 20, 10, 20), (40, 40, 10, 10)]</p>
| 0 | 2016-08-11T19:33:20Z | [
"python",
"list-comprehension"
] |
Why isn't this regex working? It should be working based on diveintopython3 | 38,903,795 | <pre><code>pattern='''
^                   # beginning of string
M{0,3}              # thousands - 0 to 3 Ms
(CM|CD|D?C{0,3})    # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 Cs), or 500-800 (D, followed by 0 to 3 Cs)
(XC|XL|L?X{0,3})    # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 Xs), or 50-80 (L, followed by 0 to 3 Xs)
(IX|IV|V?I{0,3})    # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 Is), or 5-8 (V, followed by 0 to 3 Is)
$                   # end of string
'''
</code></pre>
<p>According to the diveintopython3 website and from my logic, re.search(pattern,'M',re.VERBOSE) should return that the string was matched, but I have no return when I enter call re.search. Why is this?</p>
| 1 | 2016-08-11T18:34:29Z | 38,903,918 | <p>It works fine for me:</p>
<pre><code>romanstring = "M"
m = re.search(pattern, romanstring, re.VERBOSE)
print(m.string)
</code></pre>
<p>Perhaps you were confused because re.search returns a "match object" instead of returning the matched string directly.</p>
| 0 | 2016-08-11T18:40:32Z | [
"python",
"regex",
"python-3.x"
] |
C++ Call Function When Terminated from Python Subprocess | 38,903,857 | <p>I'm using C++, Python 3.5, and Windows 7. I'm currently calling a C++ executable from my Python code using subprocess, then terminating the executable using the following code:</p>
<pre><code>open = Popen([path\to\exe])
open.terminate()
</code></pre>
<p>This seems unlikely, but is it possible for my C++ code to call a function in itself when Python calls terminate on it? I've found options for functions to be called when C++ is closed with the X button or by itself, but this question is too specific.</p>
| 0 | 2016-08-11T18:37:41Z | 38,904,232 | <p>To be able to terminate your C++ code you have to first do things like</p>
<pre><code>p = subprocess.Popen(["path\to\exe"])
#... do stuff
#and when you want to terminate it
p.terminate()
</code></pre>
<p>But you cannot call <code>Popen</code> again because it would spawn another instance, which you would then kill immediately: useless.</p>
<p>When you terminate the process, it stops it right there like a <code>kill -9</code> or a <code>taskkill /F</code>. If you want to give a chance to the process to react, do that instead of <code>p.terminate()</code></p>
<p>Windows only:</p>
<pre><code>os.system("taskkill /PID "+str(p.pid))
</code></pre>
<p>Unix & Windows (just read in the docs, but did not test it):</p>
<pre><code>os.kill(p.pid,signal.CTRL_BREAK_EVENT)
</code></pre>
<p>(omitting the <code>/F</code> flag allows to put a handle in the C++ program to execute some exit/cleanup procedure and then exit)</p>
<p>Edit: to put a handle in the C++ program (<code>atexit</code> is just called when you call explicitly <code>exit</code>, not working in that case):</p>
<ul>
<li>Unix: using signal. That's been explained a million times</li>
<li>Windows: example here: <a href="https://msdn.microsoft.com/en-us/library/ms685049(VS.85).aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/ms685049(VS.85).aspx</a>. I have tested it and it works when you close the window and with CTRL+C</li>
</ul>
<p>(source: <a href="http://stackoverflow.com/questions/1641182/how-can-i-catch-a-ctrl-c-event-c">How can I catch a ctrl-c event? (C++)</a>)</p>
| 0 | 2016-08-11T19:00:42Z | [
"python",
"c++",
"function",
"exit",
"terminate"
] |
Sort list of dictionaries on two criteria | 38,903,913 | <p>Recently started programming with python and I have a question to which I cannot come up with an answer. I have a large list of dictionairies with various keys and values (more than below). I want to sort the order of the dictionaries appearing in the list. <code>list_one</code>:</p>
<pre><code>list_one = [{'country': 'Spain', 'id': 'v1', 'key2': 'value2'},
{'country': 'France', 'id': 'v4', 'key2': 'value2'},
{'country': 'China', 'id': 'v4', 'key2': 'value2'},
{'country': 'Russia', 'id': 'v3', 'key2': 'value2'},
{'country': 'Australia', 'id': 'v2', 'key2': 'value2'},
{'country': 'China', 'id': 'v3', 'key2': 'value2'},
...
]
</code></pre>
<p>First, sort on 'id' value (<code>v1</code>, <code>v2</code>, <code>v3</code>, ...) (which I can work out fine).</p>
<p>Additionally, if <code>id</code> values are similar, sort according to the value of the <code>'country'</code> key. But I do not want to sort these alphabetically. I'd like to be able to sort based on values set to these countries, if that makes sense. So, for example, <code>France = 1</code>, <code>China = 2</code>, <code>Australia = 3</code>, <code>Spain = 4</code>, <code>Russia = 5</code>.</p>
<p>The ordering of dictionaries should then look as in the second
example. <code>list_two</code>:</p>
<pre><code>[{'country': 'Spain', 'id': 'v1', 'key2': 'value2'},
{'country': 'Australia', 'id': 'v2', 'key2': 'value2'},
{'country': 'China', 'id': 'v3', 'key2': 'value2'},
{'country': 'Russia', 'id': 'v3', 'key2': 'value2'},
{'country': 'France', 'id': 'v4', 'key2': 'value2'},
{'country': 'China', 'id': 'v4', 'key2': 'value2'}
...
]
</code></pre>
<p>Is there a pythonic way to sort the order that those dicts appear in the list, first on the value of the 'id' key, like below, and then on some 'implicit values' set to the countries?</p>
<pre><code>list_two = sorted(list_one, key=lambda k: k['id'])
</code></pre>
| 0 | 2016-08-11T18:40:16Z | 38,903,979 | <p>Produce a dictionary mapping country name to order number:</p>
<pre><code>country_ordering = {'France': 1, 'China': 2, 'Australia': 3,
'Spain': 4, 'Russia': 5}
</code></pre>
<p>then use that in your sort key, returning a tuple with the <code>id</code> and a value from that mapping:</p>
<pre><code>list_two = sorted(
    list_one,
    key=lambda k: (k['id'], country_ordering.get(k['country'], float('inf'))))
</code></pre>
<p>I used the <code>dict.get()</code> method to look up the ordering value; that way you can specify a default value in case the country is not (yet?) listed in the mapping. Above I used <code>float('inf')</code> (<em>infinity</em>) as the default value, meaning that any such unlisted countries are sorted at the end (per given <code>id</code>).</p>
<p>If you want an exception to be thrown instead, change the lambda to:</p>
<pre><code>lambda k: (k['id'], country_ordering[k['country']])
</code></pre>
<p>to do a straight key lookup.</p>
<p>Demo:</p>
<pre><code>>>> list_one = [{'country': 'Spain', 'id': 'v1', 'key2': 'value2'},
... {'country': 'France', 'id': 'v4', 'key2': 'value2'},
... {'country': 'China', 'id': 'v4', 'key2': 'value2'},
... {'country': 'Russia', 'id': 'v3', 'key2': 'value2'},
... {'country': 'Australia', 'id': 'v2', 'key2': 'value2'},
... {'country': 'China', 'id': 'v3', 'key2': 'value2'}]
>>> country_ordering = {'France': 1, 'China': 2, 'Australia': 3, 'Spain': 4, 'Russia': 5}
>>> sorted(list_one, key=lambda k: (k['id'], country_ordering[k['country']]))
[{'country': 'Spain', 'id': 'v1', 'key2': 'value2'}, {'country': 'Australia', 'id': 'v2', 'key2': 'value2'}, {'country': 'China', 'id': 'v3', 'key2': 'value2'}, {'country': 'Russia', 'id': 'v3', 'key2': 'value2'}, {'country': 'France', 'id': 'v4', 'key2': 'value2'}, {'country': 'China', 'id': 'v4', 'key2': 'value2'}]
>>> pprint(_)
[{'country': 'Spain', 'id': 'v1', 'key2': 'value2'},
{'country': 'Australia', 'id': 'v2', 'key2': 'value2'},
{'country': 'China', 'id': 'v3', 'key2': 'value2'},
{'country': 'Russia', 'id': 'v3', 'key2': 'value2'},
{'country': 'France', 'id': 'v4', 'key2': 'value2'},
{'country': 'China', 'id': 'v4', 'key2': 'value2'}]
</code></pre>
| 3 | 2016-08-11T18:44:28Z | [
"python",
"sorting",
"dictionary"
] |
Python 2.6 unittest - how to set a value to use for a global variable in a function that you're testing | 38,903,951 | <p>I'm having trouble setting the value of a global variable in a function that I'm writing for unit tests. </p>
<p>The function is probably not ready to be used in a test. Or at least to be used to test in an easy manner, but I'm trying to work around that. </p>
<p>Here is an example of the function I'm trying to test:</p>
<pre><code>def my_func_with_globals(filepath):
spos=filepath.find(__my_global_var1)
new_path = filepath[0:spos] + __my_global_var2
return new_path
def some_function():
...
my_func_with_globals(filepath)
...
if __name__ == '__main__':
global __my_global_var1
__my_global_var1='value1'
global __my_global_var2
__my_global_var2='value2'
...
some_function()
</code></pre>
<p>And here is an example of my test:</p>
<pre><code>import unittest
from my_module import *
class UnitTestMyModule(unittest.TestCase):
def test_my_func_with_globals(self):
self.assertEqual(my_func_with_globals('arbitrary/file/path'), 'valid output')
</code></pre>
<p>Another example of my test using @kdopen's suggestion (gives me the same error):</p>
<pre><code>import unittest
import my_module
class UnitTestMyModule(unittest.TestCase):
def test_my_func_with_globals(self):
my_module.__my_global_var1='some/value'
my_module.__my_global_var2='second_val'
self.assertEqual(my_module.my_func_with_globals('arbitrary/file/path'), 'valid output')
</code></pre>
<p>I keep getting the error: </p>
<blockquote>
<p>NameError: global name '__my_global_var1' is not defined.</p>
</blockquote>
<p>I've tried a few different things, but I can't get anything to work. Using unittest.mock.patch looks like it would work perfectly, but I'm stuck with what I currently have with v2.6.4. </p>
| 1 | 2016-08-11T18:42:45Z | 38,904,056 | <p>The globals are defined with a double leading underscore, so they are not imported by the <code>from my_module import *</code> statement.</p>
<p>You <em>can</em> make them accessible with the following:</p>
<pre><code>from my_module import __my_global_var1, __my_global_var2
</code></pre>
<p>Alternatively, if you used <code>import my_module</code> you can access them as <code>my_module.__my_global_var1</code> etc.</p>
<p>But I don't see any reference to the global variables in your sample test case</p>
<p>Here's a simple example</p>
<p>a.py</p>
<pre><code>__global1 = 1
def foo():
return __global1
</code></pre>
<p>b.py:</p>
<pre><code>import a
print "global1: %d" % a.__global1
print "foo: %d" % a.foo()
a.__global1 = 2
print "foo: %d" % a.foo()
</code></pre>
<p>And running <code>b.py</code></p>
<pre><code>$ python2.6 b.py
global1: 1
foo: 1
foo: 2
</code></pre>
<p>UPDATE:</p>
<p>Dang it, missed the obvious</p>
<p>You declare the variables within the <code>if</code> test. That code doesn't run on <code>import</code> - only when you execute <code>python my_module</code> from the command line.</p>
<p>During importing, <code>__name__</code> will be set to <code>my_module</code>, not <code>__main__</code></p>
<p>So, yes - they <strong>are</strong> undefined when you call your unit test.</p>
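<p>One common fix (a sketch with renamed variables, not the asker's exact code) is to define the globals at module level, so they exist as soon as the module is imported and the test can override them before calling the function:</p>

```python
# Sketch of my_module.py -- names without leading underscores
# also avoid the "from module import *" pitfall described above.
MY_GLOBAL_VAR1 = 'value1'   # defined at module level: exists on import
MY_GLOBAL_VAR2 = 'value2'

def my_func_with_globals(filepath):
    spos = filepath.find(MY_GLOBAL_VAR1)
    return filepath[0:spos] + MY_GLOBAL_VAR2
```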
| 1 | 2016-08-11T18:50:15Z | [
"python",
"python-unittest"
] |
Converting a list of array of lists of lists into dataframe | 38,904,006 | <p>I need help in conversion of a list of array of lists of lists into a dataframe</p>
<p>my data is something like this</p>
<pre><code>[array([[ 0.01568627, 0.01568627, 0.01176471],
[ 0.01176471, 0.01176471, 0.01176471],
[ 0.01176471, 0.01176471, 0.01176471],
...,
[ 0.05098039, 0.05098039, 0.05098039],
[ 0.04705882, 0.05098039, 0.04705882],
[ 0.05098039, 0.05098039, 0.04705882]]), array([[ 0.01568627, 0.01568627, 0.01568627],
[ 0.01176471, 0.01568627, 0.01176471],
[ 0.01176471, 0.01568627, 0.01568627],
...,
[ 0.05490196, 0.05098039, 0.05098039],
[ 0.05098039, 0.05490196, 0.05098039],
[ 0.05098039, 0.05098039, 0.05098039]])
</code></pre>
<p>When I tried df=pd.DataFrame(lst),it didn't work</p>
<p>I'm trying to read image and put it into a list</p>
<p>My code is something like this</p>
<pre><code>for filename in files:
img = misc.imread(filename)
img = img[::2, ::2]
X = (img / 255.0).reshape(-1, 3)
lst.append(X)
</code></pre>
<p>I get the above data when I print lst</p>
<p>Thanks in advance!!!</p>
| 0 | 2016-08-11T18:46:00Z | 38,905,047 | <p>Consider concatenating with <code>pd.concat()</code> from a list comprehension. Do note you will lose slight precision of two decimal points to fit <code>float64</code> dtype. Below will output a 3-column dataframe:</p>
<pre><code>from numpy import array
import pandas as pd
lst = [array([[ 0.01568627, 0.01568627, 0.01176471],
[ 0.01176471, 0.01176471, 0.01176471],
[ 0.01176471, 0.01176471, 0.01176471],
[ 0.05098039, 0.05098039, 0.05098039],
[ 0.04705882, 0.05098039, 0.04705882],
[ 0.05098039, 0.05098039, 0.04705882]]),
array([[ 0.01568627, 0.01568627, 0.01568627],
[ 0.01176471, 0.01568627, 0.01176471],
[ 0.01176471, 0.01568627, 0.01568627],
[ 0.05490196, 0.05098039, 0.05098039],
[ 0.05098039, 0.05490196, 0.05098039],
[ 0.05098039, 0.05098039, 0.05098039]])]
df = pd.concat([pd.DataFrame(i) for i in lst]).reset_index(drop=True)
print(df)
# 0 1 2
# 0 0.015686 0.015686 0.011765
# 1 0.011765 0.011765 0.011765
# 2 0.011765 0.011765 0.011765
# 3 0.050980 0.050980 0.050980
# 4 0.047059 0.050980 0.047059
# 5 0.050980 0.050980 0.047059
# 6 0.015686 0.015686 0.015686
# 7 0.011765 0.015686 0.011765
# 8 0.011765 0.015686 0.015686
# 9 0.054902 0.050980 0.050980
# 10 0.050980 0.054902 0.050980
# 11 0.050980 0.050980 0.050980
</code></pre>
| 0 | 2016-08-11T19:55:43Z | [
"python",
"arrays"
] |
Python - Pandas Output Limits Columns | 38,904,049 | <p>When dealing with Pandas, I'm attempting to print analysis of an objects Kinematic and Angular states. My code for doing so is as follows:</p>
<pre><code>def displayData(tList, xList, zList, dxList, dzList, thetaList, dthetaList, Q_sList):
states = pd.DataFrame({ 't' : tList,
'x' : xList,
'z' : zList,
'dx' : dxList,
'dz' : dzList,
'theta' : thetaList,
'dtheta' : dthetaList,
'Q_s' : Q_sList})
print states[['t', 'x', 'z', 'dx', 'dz', 'theta', 'dtheta', 'Q_s']]
</code></pre>
<p>However, when asked to print the data, the output breaks up the columns beyond a certain point:</p>
<pre><code> t x z dx dz theta \
0 0.000 -500.000000 -100.000000 100.000000 -0.000000 0.000000
1 0.005 -499.500000 -100.000000 99.999983 0.057692 -0.000577
2 0.010 -499.000000 -99.999712 99.999933 0.115329 -0.001153
... ... ... ... ... ... ...
dtheta Q_s
0 -0.115385 -0.038462
1 -0.115274 -0.038425
2 -0.115163 -0.038388
... ... ...
</code></pre>
<p>As I have many thousands of states to print at the time, I would like for pandas to not break the table up like so, allowing me to analyze one given state without having to scroll to pick up the remaining two data fields. Is there any way I can define specific dimensions to be printed out so that this does not occur?</p>
| 4 | 2016-08-11T18:49:29Z | 38,911,212 | <p>There are two useful <a href="http://pandas.pydata.org/pandas-docs/stable/options.html" rel="nofollow">settings</a> which can be used in this case: <code>pd.options.display.width</code> and <code>pd.options.display.expand_frame_repr</code></p>
<p>Here is a small demo:</p>
<pre><code>In [118]: pd.options.display.expand_frame_repr
Out[118]: True
In [119]: pd.options.display.width = 50
In [120]: df
Out[120]:
t x z dx \
0 0.000 -500.0 -100.000000 100.000000
1 0.005 -499.5 -100.000000 99.999983
2 0.010 -499.0 -99.999712 99.999933
dz theta dtheta Q_s
0 -0.000000 0.000000 -0.115385 -0.038462
1 0.057692 -0.000577 -0.115274 -0.038425
2 0.115329 -0.001153 -0.115163 -0.038388
In [121]: pd.options.display.width = 100
In [122]: df
Out[122]:
t x z dx dz theta dtheta Q_s
0 0.000 -500.0 -100.000000 100.000000 -0.000000 0.000000 -0.115385 -0.038462
1 0.005 -499.5 -100.000000 99.999983 0.057692 -0.000577 -0.115274 -0.038425
2 0.010 -499.0 -99.999712 99.999933 0.115329 -0.001153 -0.115163 -0.038388
In [131]: pd.options.display.width = 40
In [132]: df
Out[132]:
t x z \
0 0.000 -500.0 -100.000000
1 0.005 -499.5 -100.000000
2 0.010 -499.0 -99.999712
dx dz theta \
0 100.000000 -0.000000 0.000000
1 99.999983 0.057692 -0.000577
2 99.999933 0.115329 -0.001153
dtheta Q_s
0 -0.115385 -0.038462
1 -0.115274 -0.038425
2 -0.115163 -0.038388
In [125]: pd.options.display.expand_frame_repr = False
In [126]: df
Out[126]:
t x z dx dz theta dtheta Q_s
0 0.000 -500.0 -100.000000 100.000000 -0.000000 0.000000 -0.115385 -0.038462
1 0.005 -499.5 -100.000000 99.999983 0.057692 -0.000577 -0.115274 -0.038425
2 0.010 -499.0 -99.999712 99.999933 0.115329 -0.001153 -0.115163 -0.038388
In [127]: pd.options.display.width
Out[127]: 30
</code></pre>
<p>alternatively, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html" rel="nofollow">set_options()</a> method</p>
<p>Here is a list of all <code>display</code> options:</p>
<pre><code>In [128]: pd.options.display.
pd.options.display.chop_threshold pd.options.display.latex pd.options.display.mpl_style
pd.options.display.colheader_justify pd.options.display.line_width pd.options.display.multi_sparse
pd.options.display.column_space pd.options.display.max_categories pd.options.display.notebook_repr_html
pd.options.display.date_dayfirst pd.options.display.max_columns pd.options.display.pprint_nest_depth
pd.options.display.date_yearfirst pd.options.display.max_colwidth pd.options.display.precision
pd.options.display.encoding pd.options.display.max_info_columns pd.options.display.show_dimensions
pd.options.display.expand_frame_repr pd.options.display.max_info_rows pd.options.display.unicode
pd.options.display.float_format pd.options.display.max_rows pd.options.display.width
pd.options.display.height pd.options.display.max_seq_items
pd.options.display.large_repr pd.options.display.memory_usage
</code></pre>
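<p>If the wider display is only needed for a single print, pandas also offers <code>pd.option_context</code>, which applies the overrides temporarily and restores the previous settings afterwards; a small sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'t': [0.000, 0.005], 'x': [-500.0, -499.5]})

# The overrides apply only inside the with-block and revert afterwards
with pd.option_context('display.width', 200, 'display.expand_frame_repr', False):
    print(df)
```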
| 1 | 2016-08-12T06:22:42Z | [
"python",
"pandas",
"dataframe",
"io"
] |
How to format the beginning a loop correctly? | 38,904,167 | <p>this is a very simple question (ironic cause I can't answer it :/). I have a program which I designed both for myself and my colleague to use, with all the data being stored in directories. However, I want to set up the loop so that it works both for me and him. I tried all of these:</p>
<pre><code>file_location = glob.glob('/../*.nc')
file_location = glob('/../*.nc')
</code></pre>
<p>But none of them are picking up any files. How can I fix this? Cheers!</p>
| 0 | 2016-08-11T18:56:18Z | 38,904,290 | <p>You can get a directory relative to a user's home (called <code>~</code> in the function call) using <a href="https://docs.python.org/2/library/os.path.html#os.path.expanduser" rel="nofollow"><code>os.path.expanduser()</code></a>. In your case, the line would be </p>
<pre><code>file_location = glob.glob(os.path.expanduser('~/Dropbox/Argo/Data/*.nc'))
</code></pre>
| 4 | 2016-08-11T19:05:48Z | [
"python",
"loops",
"iteration",
"dropbox"
] |
How to format the beginning a loop correctly? | 38,904,167 | <p>This is a very simple question (ironic, because I can't answer it :/). I have a program which I designed for both myself and my colleague to use, with all the data being stored in directories. However, I want to set up the loop so that it works for both me and him. I tried all of these:</p>
<pre><code>file_location = glob.glob('/../*.nc')
file_location = glob('/../*.nc')
</code></pre>
<p>But none of them are picking up any files. How can I fix this? Cheers!</p>
| 0 | 2016-08-11T18:56:18Z | 38,904,331 | <p>Usually is a good practice not hardcoding paths if you're gonna use your paths for other tasks which need well-formed paths (ie: subprocess, writing paths to shell scripts), I'd recommend to manage paths using the os.path module instead, for example:</p>
<pre><code>import os, glob
home_path = os.path.expanduser("~")
dropbox_path = os.path.join(home_path, "Dropbox")
good_paths = glob.glob(os.path.join(dropbox_path,"Argo","Data","*.nc"))
bad_paths = glob.glob(dropbox_path+"/Argo\\Data/*.nc")
print len(good_paths)==len(bad_paths)
print all([os.path.exists(p) for p in good_paths])
print all([os.path.exists(p) for p in bad_paths])
</code></pre>
<p>The example shows a comparison between badly and well-formed paths. Both of them will work, but <code>good_paths</code> will be more flexible and portable in the long term.</p>
| 2 | 2016-08-11T19:08:31Z | [
"python",
"loops",
"iteration",
"dropbox"
] |
How can I be more specific with this regular expression? | 38,904,253 | <p>I'm using python and trying to separate the following string into two strings:</p>
<pre><code>'"99233 (I21.4,I50.23), 93010 (I21.4,I50.23)"'
stringA = "99233 (I21.4,I50.23),"
stringB = "93010 (I21.4,I50.23)"
</code></pre>
<p>I'm using the following expression in python:</p>
<pre><code>pattern = re.compile('\d{5}.*[),|"|\n]')
</code></pre>
<p>So I do the following:</p>
<ol>
<li>there are always 5 numbers, so \d{5}</li>
<li>followed by (...alphanumerics...), so .*</li>
<li>then there is an end parens and comma and then another set OR there is a new line</li>
</ol>
<p>But my RE keeps matching the whole line. Any suggestions?</p>
| 2 | 2016-08-11T19:02:32Z | 38,904,286 | <p>You could come up with:</p>
<pre><code>import re
string = '99233 (I21.4,I50.23), 93010 (I21.4,I50.23)'
parts = re.split(r'(?<=\)),\ ', string)
print(parts)
# ['99233 (I21.4,I50.23)', '93010 (I21.4,I50.23)']
</code></pre>
<p>This uses a positive lookbehind and splits on the space.<br>
See <a href="http://ideone.com/BD1XjI" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
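If the goal is to pull each five-digit code and its parenthesised list apart in one pass, capturing groups with <code>findall</code> work too (a sketch on the same sample string):

```python
import re

string = '99233 (I21.4,I50.23), 93010 (I21.4,I50.23)'

# One tuple per match: (five-digit code, contents of the parentheses).
pairs = re.findall(r'(\d{5})\s*\(([^)]*)\)', string)
print(pairs)
# [('99233', 'I21.4,I50.23'), ('93010', 'I21.4,I50.23')]
```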
| 1 | 2016-08-11T19:05:10Z | [
"python",
"regex"
] |
How can I be more specific with this regular expression? | 38,904,253 | <p>I'm using python and trying to separate the following string into two strings:</p>
<pre><code>'"99233 (I21.4,I50.23), 93010 (I21.4,I50.23)"'
stringA = "99233 (I21.4,I50.23),"
stringB = "93010 (I21.4,I50.23)"
</code></pre>
<p>I'm using the following expression in python:</p>
<pre><code>pattern = re.compile('\d{5}.*[),|"|\n]')
</code></pre>
<p>So I do the following:</p>
<ol>
<li>there are always 5 numbers, so \d{5}</li>
<li>followed by (...alphanumerics...), so .*</li>
<li>then there is an end parens and comma and then another set OR there is a new line</li>
</ol>
<p>But my RE keeps matching the whole line. Any suggestions?</p>
| 2 | 2016-08-11T19:02:32Z | 38,904,413 | <pre><code>import re
data = '"99233 (I21.4,I50.23), 93010 (I21.4,I50.23)"'
print re.findall(r'\d{5}.*\(.*?\)', data)
</code></pre>
| 1 | 2016-08-11T19:13:55Z | [
"python",
"regex"
] |
How can I be more specific with this regular expression? | 38,904,253 | <p>I'm using python and trying to separate the following string into two strings:</p>
<pre><code>'"99233 (I21.4,I50.23), 93010 (I21.4,I50.23)"'
stringA = "99233 (I21.4,I50.23),"
stringB = "93010 (I21.4,I50.23)"
</code></pre>
<p>I'm using the following expression in python:</p>
<pre><code>pattern = re.compile('\d{5}.*[),|"|\n]')
</code></pre>
<p>So I do the following:</p>
<ol>
<li>there are always 5 numbers, so \d{5}</li>
<li>followed by (...alphanumerics...), so .*</li>
<li>then there is an end parens and comma and then another set OR there is a new line</li>
</ol>
<p>But my RE keeps matching the whole line. Any suggestions?</p>
| 2 | 2016-08-11T19:02:32Z | 38,904,518 | <p>You can use a positive lookahead:</p>
<p><code>\d{5}.*(?=\))</code></p>
<p>Additionally you could make this:</p>
<p><code>(\d{5})(.*(?=\())(.*)(?=\))</code></p>
<p>Then you could grab the 5 digit string with back-reference 1, and the inner-string with back-reference 3</p>
<p>Or you could take it one step further:</p>
<p><code>(\d{5})(.*(?=\())(\((\s{1,}\b|\b))(.*?(?=(\s{1,},|,)))(\s{1,},|,)(\s{1,}\b|\b)(.+)(?=\s{1,}\)|\))</code></p>
<p>Then you could get the following:</p>
<p>5 digit string: Back-reference 1</p>
<p>Left-hand inner value: Back-reference 5</p>
<p>Right-hand inner value: Back-reference 9</p>
<p><s>Observe</s></p>
<h3>EDIT: spotted a bug, thus removed the link. Here's the new one:</h3>
<p><a href="https://regex101.com/r/sJ8sE4/2" rel="nofollow">Regex with test strings</a></p>
| 0 | 2016-08-11T19:21:01Z | [
"python",
"regex"
] |
How to calculate sum of Nth power of each cell for a column in dataframe? | 38,904,340 | <p>I have a dataframe like this:</p>
<pre><code> A B C
396 1147.430 1147.219 0.000184
397 1147.110 1147.630 -0.000453
398 1146.870 1147.469 -0.000522
399 1147.110 1147.680 -0.000497
400 1147.170 1147.640 -0.000410
401 1147.210 1147.430 -0.000192
</code></pre>
<p>I want to take the Nth power for each cell in Column C and add them up. I know for squares I can use <code>np.square(df['C']).sum(axis=0)</code>.<br>
Is there a quick way to do this for other powers?</p>
| 0 | 2016-08-11T19:09:02Z | 38,904,384 | <pre><code>df['C'].pow(3).sum()
Out: -4.2772918200000004e-10
</code></pre>
<p>Or,</p>
<pre><code>(df['C']**3).sum()
Out: -4.2772918200000004e-10
</code></pre>
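For reference, the same reduction in plain Python (with the column values copied into a hypothetical list) is just a generator expression over <code>sum</code>:

```python
# Values taken from column C in the question.
values = [0.000184, -0.000453, -0.000522, -0.000497, -0.000410, -0.000192]

# Sum of the Nth power of each element, here N = 3.
total = sum(v ** 3 for v in values)
print(total)  # approximately -4.277e-10, matching the pandas result above
```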
| 2 | 2016-08-11T19:12:33Z | [
"python",
"pandas",
"dataframe"
] |
Substracting values from different DataFrame | 38,904,348 | <p>I have two DataFrames and want to subtract one from the other.</p>
<p>df1:</p>
<pre><code> Val1 Val2 Val3
0 -27 -0.8 -6.786321 -7.024615 -13.946589
1 -27 -0.9 -5.746795 -5.804550 -11.576365
2 -27 -1.0 -4.624857 -4.372321 -9.103681
3 -27 -1.2 -2.685832 -2.418888 -5.057056
4 -27 -1.4 -1.445561 -1.389468 -2.622357
</code></pre>
<p>df2:</p>
<pre><code> Bench
0 0.4601
1 -5.3336
2 -6.0163
3 -4.1776
4 -2.3472
</code></pre>
<p>As I have the same indexes, I tried to do: <code>df1-df2</code>, but it didn't work.</p>
<p>Therefore I've tried to use another way:</p>
<pre><code>headers = list(df1.columns.values)
filtr_headers = filter(lambda x: x!='',headers)
for i in filtr_headers:
    df1['%s' %(i)] = df1['%s' %(i)] - df2['Bench']
</code></pre>
<p>But in return I'm getting a DataFrame with NaN values. I don't know why this is happening. Any hints will be highly appreciated.</p>
| 1 | 2016-08-11T19:09:57Z | 38,904,458 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sub.html" rel="nofollow"><code>pd.DataFrame.sub</code></a>, like this:</p>
<pre><code>In [113]: df1.sub(df2.Bench.values, axis=0)
Out[113]:
Val1 Val2 Val3
0 -27 -0.8 -7.246421 -7.484715 -14.406689
1 -27 -0.9 -0.413195 -0.470950 -6.242765
2 -27 -1.0 1.391443 1.643979 -3.087381
3 -27 -1.2 1.491768 1.758712 -0.879456
4 -27 -1.4 0.901639 0.957732 -0.275157
</code></pre>
| 1 | 2016-08-11T19:16:56Z | [
"python",
"pandas"
] |
Substracting values from different DataFrame | 38,904,348 | <p>I have two DataFrames and want to subtract one from the other.</p>
<p>df1:</p>
<pre><code> Val1 Val2 Val3
0 -27 -0.8 -6.786321 -7.024615 -13.946589
1 -27 -0.9 -5.746795 -5.804550 -11.576365
2 -27 -1.0 -4.624857 -4.372321 -9.103681
3 -27 -1.2 -2.685832 -2.418888 -5.057056
4 -27 -1.4 -1.445561 -1.389468 -2.622357
</code></pre>
<p>df2:</p>
<pre><code> Bench
0 0.4601
1 -5.3336
2 -6.0163
3 -4.1776
4 -2.3472
</code></pre>
<p>As I have the same indexes, I tried to do: <code>df1-df2</code>, but it didn't work.</p>
<p>Therefore I've tried to use another way:</p>
<pre><code> headers = list(df1.columns.values)
filtr_headers = filter(lambda x: x!='',headers)
for i in filtr_headers:
df1['%s' %(i)] = df1['%s' %(i)] - df2['Bench']
</code></pre>
<p>But in return I'm getting a DataFrame with NaN values. I don't know why this is happening. Any hints will be highly appreciated.</p>
| 1 | 2016-08-11T19:09:57Z | 38,904,483 | <p>You can use</p>
<pre><code>pd.DataFrame(df1.values - df2.values)
</code></pre>
| 0 | 2016-08-11T19:18:38Z | [
"python",
"pandas"
] |
Python to Matlab Conversion? | 38,904,366 | <p>I have this python code here below (for bubble sort). Below that is my attempt at converting it to MATLAB code. I'm new to MATLAB, and I'm doing the conversion for practice. I would appreciate feedback on how accurate/incorrect my conversion is.</p>
<p>The python version:</p>
<pre><code>def bubble_sort(alist):
    return bubble_sort_helper(alist, len(alist))

def bubble_sort_helper(alist, n):
    if n < 2:
        return alist
    for i in range(len(alist)-1):
        if alist[i] > alist[i+1]:
            temp = alist[i]
            alist[i] = alist[i+1]
            alist[i+1] = temp
    return bubble_sort_helper(alist, n-1)
</code></pre>
<p>My attempt at a MATLAB conversion:</p>
<pre><code>function a = bubble_sort(alist)
    a = bubble_sort_helper(alist, size(alist))
end

function b = bubble_sort_helper(alist, n)
    if n < 2
        b = alist
    end
    for ii = size(alist)
        if alist(1) > alist (ii+1)
            temp = alist(ii)
            alist(ii) = alist(ii+1)
            alist(ii+1) = temp
        end
    end
    b = bubble_sort_helper(alistn n-1)
end
</code></pre>
| 1 | 2016-08-11T19:11:10Z | 38,904,549 | <p>A few issues here:</p>
<ol>
<li><p>You need to use <code>numel</code> rather than <code>size</code> to get the number of elements in an array. <code>size</code> will give you a vector of sizes of each dimension and <code>numel</code> will give you the total number of elements</p></li>
<li><p>You need to actually create an array of values for your <code>for</code> loop to loop through. To do this, use the colon to create an array from <code>2</code> to <code>n</code>. </p>
<pre><code>for ii = 2:n
end
</code></pre></li>
<li><p>You use <code>ii</code> as your looping variable but try to use <code>i</code> inside of the loop. Pick one and stick to it (preferably not <code>i</code>)</p></li>
<li><p>To flip the values you can simply do your assignment like this:</p>
<pre><code>alist([i-1, i]) = alist([i, i-1]);
</code></pre></li>
</ol>
<p>Taken together, this will give you something like this:</p>
<pre><code>function a = bubble_sort(alist)
    a = bubble_sort_helper(alist, numel(alist))
end

function b = bubble_sort_helper(alist, n)
    if n < 2
        b = alist;
    else
        for k = 2:n
            if alist(k-1) > alist(k)
                alist([k-1, k]) = alist([k, k-1]);
            end
        end
        b = bubble_sort_helper(alist, n-1);
    end
end
</code></pre>
| 2 | 2016-08-11T19:23:14Z | [
"python",
"matlab"
] |
Python to Matlab Conversion? | 38,904,366 | <p>I have this python code here below (for bubble sort). Below that is my attempt at converting it to MATLAB code. I'm new to MATLAB, and I'm doing the conversion for practice. I would appreciate feedback on how accurate/incorrect my conversion is.</p>
<p>The python version:</p>
<pre><code>def bubble_sort(alist):
    return bubble_sort_helper(alist, len(alist))

def bubble_sort_helper(alist, n):
    if n < 2:
        return alist
    for i in range(len(alist)-1):
        if alist[i] > alist[i+1]:
            temp = alist[i]
            alist[i] = alist[i+1]
            alist[i+1] = temp
    return bubble_sort_helper(alist, n-1)
</code></pre>
<p>My attempt at a MATLAB conversion:</p>
<pre><code>function a = bubble_sort(alist)
    a = bubble_sort_helper(alist, size(alist))
end

function b = bubble_sort_helper(alist, n)
    if n < 2
        b = alist
    end
    for ii = size(alist)
        if alist(1) > alist (ii+1)
            temp = alist(ii)
            alist(ii) = alist(ii+1)
            alist(ii+1) = temp
        end
    end
    b = bubble_sort_helper(alistn n-1)
end
</code></pre>
| 1 | 2016-08-11T19:11:10Z | 38,904,556 | <p>Your python version is inefficient:</p>
<pre><code>def bubble_sort(alist):
    return bubble_sort_helper(alist, len(alist))

def bubble_sort_helper(alist, n):
    if n < 2:
        return alist
    for i in range(n-1):
        if alist[i] > alist[i+1]:
            alist[i], alist[i+1] = alist[i+1], alist[i]
    return bubble_sort_helper(alist, n-1)
</code></pre>
<p>and your matlab code is wrong:</p>
<pre><code>function a = bubble_sort(alist)
    a = bubble_sort_helper(alist, size(alist))
end

function b = bubble_sort_helper(alist, n)
    if n < 2
        b = alist;
    else
        for i = 2:n
            if alist(i-1) > alist(i)
                temp = alist(i-1);
                alist(i-1) = alist(i);
                alist(i) = temp;
            end
        end
        b = bubble_sort_helper(alist, n-1);
    end
end
</code></pre>
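As a quick sanity check, the tightened Python version above can be exercised like this (reproduced here so the snippet is self-contained):

```python
def bubble_sort(alist):
    return bubble_sort_helper(alist, len(alist))

def bubble_sort_helper(alist, n):
    if n < 2:
        return alist
    for i in range(n - 1):
        if alist[i] > alist[i + 1]:
            # Tuple assignment swaps in place, no temp variable needed.
            alist[i], alist[i + 1] = alist[i + 1], alist[i]
    return bubble_sort_helper(alist, n - 1)

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```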
| 1 | 2016-08-11T19:23:44Z | [
"python",
"matlab"
] |
Python storing number with leading zeros in Dictionary | 38,904,377 | <p>I have a dictionary called <code>table</code>. I want to assign this number to the dict key below. It keeps giving me an error saying "invalid token". I have tried converting it to string, int, and float, but to no avail.</p>
<p><code>table['Fac_ID'] = 00000038058</code></p>
| -2 | 2016-08-11T19:12:15Z | 38,904,437 | <p>you're unwilingly invoking Python 2.x octal mode but:</p>
<ul>
<li>you can't because there's a 8 in it (bad or good luck?)</li>
<li>in python 3 (also works in python 2.7), octal prefix is no longer <code>0</code> but <code>0o</code>: invalid token occurs because of that.</li>
</ul>
<p>It would be better to store your values without leading zeroes and add the leading zeroes when you print them</p>
<pre><code>print("%012d"%table['Fac_ID'])
</code></pre>
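A short demonstration of both points under Python 3 — why the literal fails and how print-time padding behaves:

```python
# In Python 3 an octal literal needs the 0o prefix; a bare leading zero
# as in 00000038058 is a SyntaxError, and 8 is not a valid octal digit anyway.
print(0o755)  # 493

# Store the plain integer and pad with zeros only when displaying it.
fac_id = 38058
print("%012d" % fac_id)  # 000000038058
```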
| 2 | 2016-08-11T19:15:34Z | [
"python",
"dictionary",
"leading-zero"
] |
Python storing number with leading zeros in Dictionary | 38,904,377 | <p>I have a dictionary called <code>table</code>. I want to assign this number to the dict key below. It keeps giving me an error saying "invalid token". I have tried converting it to string, int, and float, but to no avail.</p>
<p><code>table['Fac_ID'] = 00000038058</code></p>
| -2 | 2016-08-11T19:12:15Z | 38,904,442 | <p>Don't convert it to a string, use it as a string in the first place:</p>
<pre><code>>>> table['Fac_ID'] = str(00000038058)
File "<stdin>", line 1
table['Fac_ID'] = str(00000038058)
^
SyntaxError: invalid token
>>> table['Fac_ID'] = '00000038058'
>>> print table['Fac_ID']
00000038058
</code></pre>
<p>str, as any function, evaluates the argument to a value before passing it in, so if there was an invalid token before str, using str is not going to change that. You need to use a valid token, so just hardcode the string.</p>
| 1 | 2016-08-11T19:15:49Z | [
"python",
"dictionary",
"leading-zero"
] |
Python storing number with leading zeros in Dictionary | 38,904,377 | <p>I have a dictionary called <code>table</code>. I want to assign this number to the dict key below. It keeps giving me an error saying "invalid token". I have tried converting it to string, int, and float, but to no avail.</p>
<p><code>table['Fac_ID'] = 00000038058</code></p>
| -2 | 2016-08-11T19:12:15Z | 38,904,475 | <p>Surround the number in quotes.</p>
<p><code>table['Fac_ID'] = "00000038058"</code></p>
| 0 | 2016-08-11T19:18:03Z | [
"python",
"dictionary",
"leading-zero"
] |
Python storing number with leading zeros in Dictionary | 38,904,377 | <p>I have a dictionary called <code>table</code>. I want to assign this number to the dict key below. It keeps giving me an error saying "invalid token". I have tried converting it to string, int, and float, but to no avail.</p>
<p><code>table['Fac_ID'] = 00000038058</code></p>
| -2 | 2016-08-11T19:12:15Z | 38,904,544 | <p>You can use <a href="https://docs.python.org/2/library/string.html#string.zfill" rel="nofollow"><code>zfill</code></a> to pad zeros. Assuming you want to pad up to 11 zeroes:</p>
<pre><code>>>> str(38058).zfill(11)
'00000038058'
</code></pre>
<p>This obviously creates a zero padded string to be used as the key based on an integer.</p>
| 0 | 2016-08-11T19:22:53Z | [
"python",
"dictionary",
"leading-zero"
] |