title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
no pycharm database window | 39,123,052 | <p>I'm following a tutorial on Django. I'm using PyCharm 2016.2.1. There is a file in my project called db.sqlite3. The icon next to this file in the project explorer shows a question mark which indicates PyCharm does not know what this file is. In searching the web I find many references to clicking View > Tool Windows > Database. In my setup there is no Database option under View > Tool Windows.</p>
<p>Sqlite3 is supposed to be native to Python/PyCharm yet I don't seem to even have a Database option. What am I doing wrong?</p>
| 0 | 2016-08-24T12:15:00Z | 39,127,976 | <p>PyCharm Community Edition does not support databases/SQL and many other features provided by the Professional Edition. For a full list of differences between the editions, see the <a href="https://www.jetbrains.com/pycharm/features/editions_comparison_matrix.html" rel="nofollow">edition comparison matrix</a> on the JetBrains website.</p>
| 1 | 2016-08-24T15:57:42Z | [
"python",
"django",
"pycharm"
] |
Invalid SMTPAPI Header: send_at must be a timestamp | 39,123,099 | <p>I am scheduling emails through the SMTP API. This is what I have tried until now:</p>
<pre><code>from smtpapi import SMTPAPIHeader
from django.core.mail import send_mail
from django.core.mail import EmailMultiAlternatives
from django.template.loader import get_template
from django.template import Template, Context
def campaign_email(lg_user, template):
    user = lg_user.id
    email = user.email
    fname = lg_user.id.first_name
    lname = lg_user.id.last_name
    mobile = lg_user.contact_no
    purchase = lg_user.active_user_subjects.values_list('subject', flat=True)
    expiry = str(lg_user.active_user_subjects.values_list('at_expiry', flat=True))

    filename = '/tmp/campaign_mailer.html'
    opened_file = open(filename, "r").read()
    temp = Template(opened_file)
    c = Context({'fname': fname, 'lname': lname, 'subject': subject, 'email': email,
                 'mobile': mobile, 'purchase': purchase, 'expiry': expiry})
    header = SMTPAPIHeader()
    html_content = temp.render(c)
    send_at = {"send_at": 1472058300}
    header.set_send_at(send_at)
    msg = EmailMultiAlternatives(subject, html_content, sender, [email],
                                 headers={'X-SMTPAPI': header.json_string()})
    msg.attach_alternative(html_content, "text/html")
    msg.send(fail_silently=True)
</code></pre>
<p>In order to check whether my header (which, on printing <code>header.json_string()</code>, resolves to this:</p>
<pre><code>{
"send_at": {
"send_at": 1472051700
}
}
</code></pre>
<p>) was valid or not, I checked on <a href="https://sendgrid.com/docs/Utilities/smtpapi_validator.html" rel="nofollow">https://sendgrid.com/docs/Utilities/smtpapi_validator.html</a> and it came out to be completely valid. </p>
<p>But the failure mail that I got from sendgrid's support stated the reason of failure to be: send_at must be a timestamp. I believe, that in the <a href="https://sendgrid.com/docs/API_Reference/SMTP_API/scheduling_parameters.html" rel="nofollow">documentation</a>, it's clearly stated that the timestamp should be in UNIX format - which is what I have supplied as the value to my send_at key.</p>
<p>So, how do I resolve this error?</p>
| 0 | 2016-08-24T12:16:55Z | 39,135,948 | <p><code>set_send_at()</code> takes an integer argument, but you are passing it a dictionary (<code>{"send_at": 1472058300}</code>). This is invalid and causes the error.</p>
<p>Change it to:</p>
<pre><code>header.set_send_at(1472058300)
</code></pre>
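<p>(For reference: the value has to be a plain integer UNIX timestamp, i.e. seconds since the epoch. A stdlib-only sketch of producing one for "one hour from now", independent of the smtpapi library, might look like this:)</p>

```python
import calendar
import time
from datetime import datetime, timedelta

# Build an integer UNIX timestamp (UTC) one hour in the future.
send_time = datetime.utcnow() + timedelta(hours=1)
send_at = calendar.timegm(send_time.utctimetuple())

print(send_at)  # e.g. 1472061900; an int, not a dict
```

<p>That integer is then exactly what <code>header.set_send_at()</code> expects.</p>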
| 1 | 2016-08-25T02:49:20Z | [
"python",
"django",
"email",
"smtp",
"sendgrid"
] |
Java or Python way to extract a sub-XML from big XML depending on child text nodes | 39,123,136 | <p>I have a big XML document to handle. I need to extract all "Situation" tags whose <_0:roadNumber> child has the value A-52, AP-9 or A-55, because I don't need the rest of the XML; then I want to build a new XML document from the extracted parts. I don't need an implementation; I would just like to know how to approach this, or which API is the most appropriate. Thanks.</p>
<p><strong>P.S.:</strong> My final goal is to dump the XML into a database.</p>
<p><strong>XML GET:</strong></p>
<pre><code>print("GETTING XML...")
resp = requests.get('http://infocar.dgt.es/datex2/dgt/SituationPublication/all/content.xml', stream=True)  # XML that I need
if resp.status_code != 200:
    raise ApiError('GET /tasks/ {}'.format(resp.status_code))
print("XML RECEIVED 200 OK")
# resp.raw.decode_content = True
print("SAVING XML")
with open("DGT_DATEX.xml", "wb") as handle:
    for data in resp.iter_content():
        handle.write(data)

dom = parse("DGT_DATEX.xml")
</code></pre>
| 0 | 2016-08-24T12:18:19Z | 39,123,611 | <p>For really big XML documents you should best use <a href="http://www.saxproject.org/" rel="nofollow">SAX</a> for streaming (not needing to have the full document in memory at once) but for finding elements easly <a href="https://www.w3.org/TR/xpath/" rel="nofollow">XPath</a> is really helpful.</p>
<p>For Python you have some <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#xpath-support" rel="nofollow">XPath support</a> in <a href="https://docs.python.org/3/library/xml.etree.elementtree.html" rel="nofollow">xml.etree.ElementTree</a> and SAX in <a href="https://docs.python.org/3/library/xml.sax.html" rel="nofollow">xml.sax</a> - but there are of course other parsers, too.</p>
<p>There are SAX implementations and XPath for Java, too.</p>
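<p>(To make the streaming idea concrete, here is a toy sketch with <code>xml.etree.ElementTree.iterparse</code>. The element and namespace names below are made up for illustration; the real Datex2 feed will use its own namespace URI, which you can read off the <code>xmlns</code> declarations in the document:)</p>

```python
import io
import xml.etree.ElementTree as ET

NS = "http://example.com/datex"  # assumption: substitute the feed's real namespace URI
WANTED = {"A-52", "AP-9", "A-55"}

xml_data = """<?xml version="1.0"?>
<d2:payload xmlns:d2="http://example.com/datex">
  <d2:situation><d2:roadNumber>A-52</d2:roadNumber></d2:situation>
  <d2:situation><d2:roadNumber>N-120</d2:roadNumber></d2:situation>
  <d2:situation><d2:roadNumber>AP-9</d2:roadNumber></d2:situation>
</d2:payload>"""

matched = []
# iterparse streams the document, so the whole tree never has to sit in
# memory; we inspect each completed <situation> element and keep or drop it.
for event, elem in ET.iterparse(io.BytesIO(xml_data.encode()), events=("end",)):
    if elem.tag == "{%s}situation" % NS:
        road = elem.find("{%s}roadNumber" % NS)
        if road is not None and road.text in WANTED:
            matched.append(ET.tostring(elem))
        elem.clear()  # release the element's children to keep memory flat

print(len(matched))  # 2 (the A-52 and AP-9 situations)
```

<p>The kept fragments can then be wrapped in a new root element and dumped to the database.</p>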
| 1 | 2016-08-24T12:40:21Z | [
"java",
"python",
"xml",
"minidom",
"xmldom"
] |
OSError: [Errno 8] Exec format error when trying to run chromedriver for selenium | 39,123,216 | <p>I am trying to run Selenium based tests on an Ubuntu based server from Jenkins but get the following cryptic errors:</p>
<p><strong>First:</strong></p>
<pre><code>+ python manage.py jenkins --enable-coverage --settings=Modeling.settings.dev
......EException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b67b3ac50>> ignored
EException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b2ed10>> ignored
EEException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b4bad0>> ignored
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66ad5a90>> ignored
EException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66ae3110>> ignored
EEException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66af06d0>> ignored
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66af12d0>> ignored
EException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66af1ad0>> ignored
EEException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b03ad0>> ignored
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b08f50>> ignored
EEEException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b08790>> ignored
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b08d50>> ignored
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b0bbd0>> ignored
EEException AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66a95a10>> ignored
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x7f6b66b08550>> ignored
</code></pre>
<p><strong>And then a bit further down:</strong></p>
<pre><code>Traceback (most recent call last):
File "/var/lib/jenkins/jobs/GS_modelling_web_tests/workspace/modeling/Modeling/tests/BaseTest.py", line 18, in setUp
self.browser = webdriver.Chrome( ChromeDriver.path() )
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 61, in __init__
self.service.start()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 62, in start
stdout=self.log_file, stderr=self.log_file)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 8] Exec format error
</code></pre>
<p>For the first error I have seen <a href="http://stackoverflow.com/questions/27674088/scrapy-with-selenium-webdriver-failing-to-instantiate">Scrapy with selenium, webdriver failing to instantiate</a> but that seems to be something else related to shutting down? And the second error seems related to shebangs according to <a href="http://stackoverflow.com/questions/27606653/oserror-errno-8-exec-format-error">OSError: [Errno 8] Exec format error</a> but I haven't managed to understand whether I can use that information. What's going on here?</p>
| 1 | 2016-08-24T12:22:23Z | 39,129,112 | <blockquote>
<p>OSError: [Errno 8] Exec format error</p>
</blockquote>
<p>The ChromeDriver executable you are using was not built for Linux, which is what produces the "Exec format error" on your Ubuntu server.
For Ubuntu, use a Linux chromedriver build: <a href="http://chromedriver.storage.googleapis.com/2.23/chromedriver_linux32.zip" rel="nofollow">http://chromedriver.storage.googleapis.com/2.23/chromedriver_linux32.zip</a> or <a href="http://chromedriver.storage.googleapis.com/2.23/chromedriver_linux64.zip" rel="nofollow">http://chromedriver.storage.googleapis.com/2.23/chromedriver_linux64.zip</a></p>
<p>The complete list can be found at <a href="https://sites.google.com/a/chromium.org/chromedriver/downloads" rel="nofollow">https://sites.google.com/a/chromium.org/chromedriver/downloads</a></p>
| 1 | 2016-08-24T16:59:11Z | [
"python",
"selenium",
"jenkins"
] |
Apply a list of Python functions in order elegantly | 39,123,375 | <p>I have an input value <code>val</code> and a list of functions to be applied in the order:</p>
<pre><code>funcs = [f1, f2, f3, ..., fn]
</code></pre>
<p>How can I apply them elegantly, without writing</p>
<pre><code>fn( ... (f3(f2(f1(val))) ... )
</code></pre>
<p>and also without using a for loop:</p>
<pre><code>tmp = val
for f in funcs:
tmp = f(tmp)
</code></pre>
<p>Thanks Martijn for the awesome answer. There's some reading I found: <a href="https://mathieularose.com/function-composition-in-python/" rel="nofollow">https://mathieularose.com/function-composition-in-python/</a> .</p>
| 4 | 2016-08-24T12:29:56Z | 39,123,400 | <p>Use the <a href="https://docs.python.org/2/library/functions.html#reduce" rel="nofollow"><code>reduce()</code> function</a>:</p>
<pre><code># forward-compatible import
from functools import reduce
result = reduce(lambda res, f: f(res), funcs, val)
</code></pre>
<p><code>reduce()</code> applies the first argument, a callable, to each element taken from the second argument, plus the accumulated result so far (as <code>(result, element)</code>). The third argument is a starting value (the first element from <code>funcs</code> would be used otherwise).</p>
<p>In Python 3, the built-in function was moved to the <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow"><code>functools.reduce()</code> location</a>; for forward compatibility that same reference is available in Python 2.6 and up.</p>
<p>Other languages may call this <a href="https://en.wikipedia.org/wiki/Fold_(higher-order_function)#Folds_in_various_languages" rel="nofollow">folding</a>.</p>
<p>If you need <em>intermediate</em> results for each function too, use <a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow"><code>itertools.accumulate()</code></a> (only from Python 3.3 onwards for a version that takes a function argument):</p>
<pre><code>from itertools import accumulate, chain
running_results = accumulate(chain([val], funcs), lambda res, f: f(res))
</code></pre>
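<p>(A quick demonstration with a made-up pipeline of string transformations; the function list here is purely illustrative:)</p>

```python
from functools import reduce

# Hypothetical pipeline standing in for f1..fn from the question.
funcs = [str.strip, str.lower, lambda s: s.replace(" ", "_")]
val = "  Hello World  "

# Each step feeds its result into the next function, left to right.
result = reduce(lambda res, f: f(res), funcs, val)
print(result)  # hello_world
```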
| 12 | 2016-08-24T12:31:07Z | [
"python",
"functional-programming"
] |
Apply a list of Python functions in order elegantly | 39,123,375 | <p>I have an input value <code>val</code> and a list of functions to be applied in the order:</p>
<pre><code>funcs = [f1, f2, f3, ..., fn]
</code></pre>
<p>How can I apply them elegantly, without writing</p>
<pre><code>fn( ... (f3(f2(f1(val))) ... )
</code></pre>
<p>and also without using a for loop:</p>
<pre><code>tmp = val
for f in funcs:
tmp = f(tmp)
</code></pre>
<p>Thanks Martijn for the awesome answer. There's some reading I found: <a href="https://mathieularose.com/function-composition-in-python/" rel="nofollow">https://mathieularose.com/function-composition-in-python/</a> .</p>
| 4 | 2016-08-24T12:29:56Z | 39,130,019 | <p>Martijn Pieters' answer is excellent. The only thing I would add is that this is called <a href="https://en.wikipedia.org/wiki/Function_composition" rel="nofollow">function composition</a>.</p>
<p>Giving names to these generics means you can use them whenever the need arises</p>
<pre><code>from functools import reduce
def id(x):
    return x

def comp(f, g):
    return lambda x: f(g(x))

def compose(*fs):
    return reduce(comp, fs, id)

# usage: compose(f1, f2, f3, ..., fn)(val)
print(compose(lambda x: x + 1, lambda x: x * 3, lambda x: x - 1)(10))
# = ((10 - 1) * 3) + 1
# = 28
| 1 | 2016-08-24T17:55:30Z | [
"python",
"functional-programming"
] |
Image clustering by its similarity in python | 39,123,421 | <p>I have a collection of photos and I'd like to identify clusters of similar photos. Which features of an image and which algorithm should I use to solve my task?</p>
| -3 | 2016-08-24T12:31:41Z | 39,124,272 | <p>This is a very broad question.</p>
<p>Generally speaking you can use any <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.cluster" rel="nofollow">clustering mechanism</a>, e.g. the popular k-means. To prepare your data for clustering you need to convert your collection into an array X, where every row is one example (image) and every column is a feature.</p>
<p>The main question - what your features should be. It is difficult to answer without knowing what you are trying to accomplish. If your images are small and of the same size you can simply have every pixel as a feature. If you have any metadata and would like to sort using it - you can have every tag in metadata as a feature.</p>
<p>Now if you really need to find patterns between images you will have to apply an additional layer of processing, like a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="nofollow">convolutional neural network</a>, which essentially allows you to extract features from different pieces of your image. You can think of it as a filter that converts every image into, say, an 8x8 matrix, which could then be used as a row with 64 features in your array X for clustering.</p>
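<p>(To make the clustering step itself concrete, here is a toy hand-rolled k-means over 2-D feature vectors. In practice each row of <code>X</code> would be an image's flattened pixels or extracted features, and you would normally use a library implementation such as scikit-learn's <code>KMeans</code> rather than this sketch:)</p>

```python
def kmeans(X, k, iters=20):
    """Minimal k-means: X is a list of equal-length feature vectors."""
    # Naive deterministic initialization (fine for this toy k=2 example):
    # use the first and last points as the starting centroids.
    centroids = [X[0], X[-1]][:k]
    labels = [0] * len(X)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(x, centroids[j])))
                  for x in X]
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            members = [x for x, lab in zip(X, labels) if lab == j]
            if members:
                centroids[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels

# Two obvious groups of toy "image feature" vectors.
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
     [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]]
print(kmeans(X, k=2))  # [0, 0, 0, 1, 1, 1]
```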
| 1 | 2016-08-24T13:09:31Z | [
"python",
"machine-learning",
"computer-vision",
"cluster-analysis"
] |
How to print 'tight' dots horizontally in python? | 39,123,549 | <p>I have a program which prints out its progress to the console.
Every 10 steps it prints the step number, like 10, 20, 30, etc., and in between it prints a dot. This was printed using the print statement with a comma at the end:</p>
<pre><code>if epoch % 10 == 0:
    print epoch,
else:
    print ".",
</code></pre>
<p>Unfortunately, I noticed that the dots are printed apart from each other, like this:</p>
<pre><code>0 . . . . . . . . . 10 . . . . . . . . . 20 . . . . . . . . . 30
</code></pre>
<p>I want this to be tighter, as follows:</p>
<pre><code>0.........10.........20.........30
</code></pre>
<p>In visual basic language, we can get this form if we add a semicolon to the end of the print statement instead of the comma. Is there a similar way to do so in Python, or a walkthrough to get tighter output?</p>
<p>Thanks in advance...</p>
<p>---Edit---
With all thanks and respect to all who replied, I noticed that some answers assumed the change in 'epoch' happens in a timely manner. Actually, it happens after finishing some iterations, which may take anywhere from a fraction of a second to several minutes. However, many thanks to all of the contributors.</p>
| 1 | 2016-08-24T12:37:25Z | 39,123,666 | <p>If you want to get more control over the formatting then you need to use either:</p>
<pre><code>import sys
sys.stdout.write('.')
</code></pre>
<p>.. instead of <code>print</code>, or use the Python 3 print function. This is available as a future import in later builds of Python 2.x as:</p>
<pre><code>from __future__ import print_function
print('.', end="")
</code></pre>
| 4 | 2016-08-24T12:42:40Z | [
"python",
"printing",
"console",
"comma",
"semicolon"
] |
How to print 'tight' dots horizontally in python? | 39,123,549 | <p>I have a program which prints out its progress to the console.
Every 10 steps it prints the step number, like 10, 20, 30, etc., and in between it prints a dot. This was printed using the print statement with a comma at the end:</p>
<pre><code>if epoch % 10 == 0:
    print epoch,
else:
    print ".",
</code></pre>
<p>Unfortunately, I noticed that the dots are printed apart from each other, like this:</p>
<pre><code>0 . . . . . . . . . 10 . . . . . . . . . 20 . . . . . . . . . 30
</code></pre>
<p>I want this to be tighter, as follows:</p>
<pre><code>0.........10.........20.........30
</code></pre>
<p>In visual basic language, we can get this form if we add a semicolon to the end of the print statement instead of the comma. Is there a similar way to do so in Python, or a walkthrough to get tighter output?</p>
<p>Thanks in advance...</p>
<p>---Edit---
With all thanks and respect to all who replied, I noticed that some answers assumed the change in 'epoch' happens in a timely manner. Actually, it happens after finishing some iterations, which may take anywhere from a fraction of a second to several minutes. However, many thanks to all of the contributors.</p>
| 1 | 2016-08-24T12:37:25Z | 39,123,791 | <pre><code>import itertools
import sys
import time
counter = itertools.count()
def special_print(value):
    sys.stdout.write(value)
    sys.stdout.flush()

while True:
    time.sleep(0.1)
    i = next(counter)
    if i % 10 == 0:
        special_print(str(i))
    else:
        special_print('.')
</code></pre>
| 1 | 2016-08-24T12:48:19Z | [
"python",
"printing",
"console",
"comma",
"semicolon"
] |
How to print 'tight' dots horizontally in python? | 39,123,549 | <p>I have a program which prints out its progress to the console.
Every 10 steps it prints the step number, like 10, 20, 30, etc., and in between it prints a dot. This was printed using the print statement with a comma at the end:</p>
<pre><code>if epoch % 10 == 0:
    print epoch,
else:
    print ".",
</code></pre>
<p>Unfortunately, I noticed that the dots are printed apart from each other, like this:</p>
<pre><code>0 . . . . . . . . . 10 . . . . . . . . . 20 . . . . . . . . . 30
</code></pre>
<p>I want this to be tighter, as follows:</p>
<pre><code>0.........10.........20.........30
</code></pre>
<p>In visual basic language, we can get this form if we add a semicolon to the end of the print statement instead of the comma. Is there a similar way to do so in Python, or a walkthrough to get tighter output?</p>
<p>Thanks in advance...</p>
<p>---Edit---
With all thanks and respect to all who replied, I noticed that some answers assumed the change in 'epoch' happens in a timely manner. Actually, it happens after finishing some iterations, which may take anywhere from a fraction of a second to several minutes. However, many thanks to all of the contributors.</p>
| 1 | 2016-08-24T12:37:25Z | 39,123,800 | <p>Here's a possible solution:</p>
<pre><code>import time
import sys
width = 101
for i in xrange(width):
    time.sleep(0.001)
    if i % 10 == 0:
        sys.stdout.write(str(i))
        sys.stdout.flush()
    else:
        sys.stdout.write(".")
        sys.stdout.flush()
sys.stdout.write("\n")
</code></pre>
| 1 | 2016-08-24T12:48:49Z | [
"python",
"printing",
"console",
"comma",
"semicolon"
] |
Django Advanced tutorial: How to write reusable apps | 39,123,557 | <p>According to <a href="https://docs.djangoproject.com/en/1.10/intro/reusable-apps/" rel="nofollow">Django Advanced tutorial: How to write reusable apps</a>, "2. With luck, your Django project should now work correctly again. Run the server again to confirm this."</p>
<p>How do I run the server, having moved the polls app from "mysite" to "django-polls", which has no manage.py or settings?</p>
<p>Please help with further instruction.</p>
| 0 | 2016-08-24T12:37:48Z | 39,123,581 | <p><code>django-polls</code> is the new reusable app, but <code>mysite</code> is still your site; the idea is that you can extract the polls app but <code>mysite</code> will still work. You run the server there.</p>
| 1 | 2016-08-24T12:39:01Z | [
"python",
"django"
] |
Migrating Python backend from Gitkit to to Firebase-Auth with python-jose for token verification | 39,123,568 | <p><a href="https://github.com/google/identity-toolkit-python-client/issues/20" rel="nofollow">Over on GitHub</a> a helpful Google dev told me that </p>
<blockquote>
<p>to create a user session, your python backend server only needs a JWT
library to verify the Firebase Auth token (signature and audience) in
the request and extract the user info from the token payload.</p>
</blockquote>
<p>I am having trouble with verifying the token.</p>
<p>This is where I'm at. In order to start the migration, I proceeded as follows:</p>
<ol>
<li><p>I added Firebase-Auth to the Android App, while still having Gitkit in the App until Firebase-Auth works. Now I have two sign-in buttons, one which signs in into Firebase and one for the "almost deprecated" Gitkit.</p></li>
<li><p>On firebase.com I imported the Google project into a new Firebase Project, so the user database is the same. I've already managed to use Firebase-Auth in the Android App, I am able to log in as a known user, and I can successfully retrieve the token which I will need for my backend server by calling <code>mFirebaseAuth.getCurrentUser().getToken(false).getResult().getToken()</code>. It contains the same <code>user_id</code> as the GitKit token.</p></li>
</ol>
<p>Now I'm attempting to replace the <a href="https://github.com/google/identity-toolkit-python-client" rel="nofollow"><code>identity-toolkit-python-client</code></a> library with <a href="https://github.com/mpdavis/python-jose" rel="nofollow"><code>python-jose</code></a>. Since I'm currently not sending the Firebase token to the backend, but only the Gitkit token, I want to test this <code>python-jose</code> library on the Gitkit token. </p>
<p>On the backend, before calling <code>GitKit.VerifyGitkitToken()</code>, I'm now printing out the results of <code>jose.jwt.get_unverified_header()</code> and <code>jose.jwt.get_unverified_claims()</code> in order to check whether I get to see what I expect. The results are good: I am able to view the content of the Gitkit token just as expected.</p>
<p>My problem comes with the verification. I'm unable to use <code>jose.jwt.decode()</code> for verification because <strong>I don't know which key I need to use</strong>. </p>
<blockquote>
<p><code>jose.jwt.decode(token, key, algorithms=None, options=None, audience=None, issuer=None, subject=None, access_token=None)</code></p>
</blockquote>
<p>I know the algorithm from the header and an 'aud' field is also stored in the claims, if that is of any help.</p>
<p>Getting back to the engineer's comment:</p>
<blockquote>
<p>verify the Firebase Auth token (signature and audience)</p>
</blockquote>
<p>How do I do that with the info I have available? I guess the audience is the 'aud' field in the claims, but how do I check the signature?</p>
<p>Once I've removed the Gitkit dependency on the server, I will continue with the migration.</p>
<p>From what I've seen, the GitKit library apparently makes an "RPC" call to a Google server for verification, but I may be wrong.</p>
<p>So, which will be the key for Gitkit token verification as well as the one for the Firebase token verification?</p>
| 1 | 2016-08-24T12:38:15Z | 39,155,806 | <p>The keys can be obtained</p>
<p>for Firebase at
<code>https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com</code></p>
<p>and for Gitkit at
<code>https://www.googleapis.com/identitytoolkit/v3/relyingparty/publicKeys</code></p>
<p>Using Googles <a href="https://github.com/google/oauth2client" rel="nofollow"><code>oauth2client</code></a> library makes the verification very easy.</p>
<p>But if you want to use <a href="https://github.com/mpdavis/python-jose" rel="nofollow"><code>python-jose</code></a>, then <a href="https://github.com/mpdavis/python-jose/issues/27" rel="nofollow">you first need to convert the PEM certificate into an RSA public key</a>. This public key is then the key that needs to be used. And <strong>the audience should <em>not</em> be extracted from the JWT header</strong>, but hard coded into the source code, where in Firebase the audience is the project id and in Gitkit it is one of the OAuth 2.0 client IDs which can be found in the Google Developer Console Credentials section.</p>
<p>The <code>kid</code> in the JWT header is used to select the appropriate certificate, from which the key used to perform the verification is obtained.</p>
<pre><code> # firebase
# target_audience = "firebase-project-id"
# certificate_url = 'https://www.googleapis.com/robot/v1/metadata/x509/securetoken@system.gserviceaccount.com'
# gitkit
target_audience = "123456789-abcdef.apps.googleusercontent.com" # (from developer console, OAuth 2.0 client IDs)
certificate_url = 'https://www.googleapis.com/identitytoolkit/v3/relyingparty/publicKeys'
response = urllib.urlopen(certificate_url)
certs = response.read()
certs = json.loads(certs)
print "CERTS", certs
print ''
print ''
# -------------- verify via oauth2client
from oauth2client import crypt
crypt.MAX_TOKEN_LIFETIME_SECS = 30 * 86400 # according to https://github.com/google/identity-toolkit-python-client/blob/master/identitytoolkit/gitkitclient.py
print "VALID TOKEN", crypt.verify_signed_jwt_with_certs(idtoken, certs, target_audience)
print ''
print ''
# -------------- verify via python-jose
from jose import jwt
unverified_header = jwt.get_unverified_header(idtoken)
print "UNVERIFIED HEADER", unverified_header
print ''
print ''
unverified_claims = jwt.get_unverified_claims(idtoken)
print "UNVERIFIED CLAIMS", unverified_claims
print ''
print ''
from ssl import PEM_cert_to_DER_cert
from Crypto.Util.asn1 import DerSequence
pem = certs[unverified_header['kid']]
der = PEM_cert_to_DER_cert(pem)
cert = DerSequence()
cert.decode(der)
tbsCertificate = DerSequence()
tbsCertificate.decode(cert[0])
rsa_public_key = tbsCertificate[6]
print "VALID TOKEN", jwt.decode(idtoken, rsa_public_key, algorithms=unverified_header['alg'], audience=target_audience)
</code></pre>
| 2 | 2016-08-25T22:37:30Z | [
"python",
"migration",
"jwt",
"firebase-authentication",
"gitkit"
] |
Modifying duplicate subindex in MultiIndex dataframe in Pandas | 39,123,606 | <p>Hi I have a dataframe slice as below:</p>
<pre><code>| | | Lemon | Orange |
|------------|----------|-------|--------|
| Date | Location | | |
| 01/01/2016 | Park | 10 | 20 |
| 01/01/2016 | Beach | 5 | 15 |
| 01/01/2016 | Park | 2 | 4 |
| 02/01/2016 | Park | 8 | 3 |
</code></pre>
<p>As you can see there is a duplicate for <code>(01/01/2016, Park)</code>, and the reason is that the third entry has a whitespace after the "k" in "Park". With my limited index-selection skills I am having difficulty doing an <code>rstrip(" ")</code> on the entire Location column to fix the whitespace error.</p>
<p>Ultimately, I am hoping to do a <code>groupby</code> function to visualise the data between <code>Park</code> and other locations. At the moment, <code>"Park"</code> and <code>"Park "</code> are 2 different locations.</p>
<p>Any suggestion?</p>
| 2 | 2016-08-24T12:40:04Z | 39,123,794 | <p>Indices are immutable, so if you want to change <code>index</code> labels you need to set a new <code>index</code> (thanks <a href="http://stackoverflow.com/questions/39123606/modifying-duplicate-subindex-in-multiindex-dataframe-in-pandas#comment65593081_39123794">IanS</a>).</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>str.strip</code></a> in second level selecting by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow"><code>get_level_values</code></a>: </p>
<pre><code>new_index = list(zip(df.index.get_level_values('Date'),
df.index.get_level_values('Location').str.strip()))
df.index = pd.MultiIndex.from_tuples(new_index, names = df.index.names)
print (df.index)
MultiIndex(levels=[[2016-01-01 00:00:00, 2016-02-01 00:00:00], ['Beach', 'Park']],
labels=[[0, 0, 0, 1], [1, 0, 1, 1]],
names=['Date', 'Location'])
</code></pre>
<p>If you want use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.rstrip.html" rel="nofollow"><code>rstrip</code></a>, replace <code>str.strip</code> to <code>str.rstrip</code>.</p>
| 2 | 2016-08-24T12:48:28Z | [
"python",
"pandas",
"dataframe",
"multi-index",
"removing-whitespace"
] |
Loading only a list of rows using Panda read_csv function - Python | 39,123,636 | <p>I would like to know if there is an option for the <code>pandas.read_csv</code> function which allow me load only a certain list of rows from the original csv file.</p>
<p>The csv file is really big, and I can't load the whole file due to a lack of memory.<br>
Is there an option like:</p>
<pre><code>df = pandas.read_csv(file, <b>'read_only'</b> = list_to_read) ?
</code></pre>
<p>with <code>list_to_read = [0,2,10]</code> for example (this will only read the row 0, the row 2 and the row 10)</p>
<p>Many thanks in advance</p>
| 1 | 2016-08-24T12:40:58Z | 39,123,709 | <p>If you go over the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">docs</a> for <code>read_csv</code> you will find the <code>nrows</code> kwarg:</p>
<blockquote>
<p>nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files</p>
</blockquote>
<p>Note however that this will read the <code>n</code> first rows from the file, not arbitrary lines (ie you <strong>can't</strong> provide it <code>[0, 2, 10]</code> and expect it to read the first, third and eleventh rows)</p>
| 2 | 2016-08-24T12:44:13Z | [
"python",
"csv",
"pandas"
] |
Loading only a list of rows using Panda read_csv function - Python | 39,123,636 | <p>I would like to know if there is an option for the <code>pandas.read_csv</code> function which allow me load only a certain list of rows from the original csv file.</p>
<p>The csv file is really big, and I can't load the whole file due to a lack of memory.<br>
Is there an option like:</p>
<pre><code>df = pandas.read_csv(file, <b>'read_only'</b> = list_to_read) ?
</code></pre>
<p>with <code>list_to_read = [0,2,10]</code> for example (this will only read the row 0, the row 2 and the row 10)</p>
<p>Many thanks in advance</p>
| 1 | 2016-08-24T12:40:58Z | 39,124,539 | <p>You may want to iteratively update the dataframe as you read through the file. This is not a fast process, but it will get only the rows of interest into a dataframe without pulling the entire file into memory.</p>
<pre><code>import pandas as pd
col_list = ['columnA', 'columnB', ... ] #fill in your data columns
row_list = [0, 3, 10, ... ]
df = pd.DataFrame(columns=col_list)
row_number = 0
with open('path/to/file', 'rb') as fp:
    for i, line in enumerate(fp):  # iterate the file lazily, line by line
if i in row_list:
data_line = map(float, line.strip().split(',')) #assumes all columns are floats
df.loc[row_number] = data_line
row_number += 1
</code></pre>
| 0 | 2016-08-24T13:20:52Z | [
"python",
"csv",
"pandas"
] |
Virtualenv gives different versions for different os | 39,123,699 | <p>I am working on a django project on two separate systems, Debian Jessie and Mac El Capitan. The project is hosted on github where both systems will pull from or push to.</p>
<p>However, I noticed that on my Debian, when I run <code>python --version</code>, it gives me <code>Python 3.4.2</code> but on my Mac, it gives me <code>Python 2.7.10</code> despite being in the same virtual environment. Moreover, when I run <code>django-admin --version</code> on my Debian, it gives me <code>1.10</code> while on my Mac, <code>1.8.3</code>.</p>
<p>This happens even when I freshly clone the projects from github and run the commands.</p>
<p>Why is it that the virtual environment does not keep the same version of python and django?</p>
| 0 | 2016-08-24T12:43:48Z | 39,124,070 | <p>Thanks to @Oliver's and @Daniel's comments, which led me to the answer as to why it did not work.</p>
<p>I started the virtual environment on my Debian with python 3. <code>virtualenv</code> made the virtual environment but it was specifically for Debian.</p>
<p>When I used it on my Mac, it could not run the Python executable in the virtual environment (since that executable is only compatible with Debian), so it fell back to my Mac's system Python, which is Python 2.7.10.</p>
<p>In summary, because a virtualenv relies on a Python executable built for one system, it will not work when that executable is run on another system.</p>
| 0 | 2016-08-24T13:00:59Z | [
"python",
"django",
"virtualenv"
] |
Virtualenv gives different versions for different os | 39,123,699 | <p>I am working on a django project on two separate systems, Debian Jessie and Mac El Capitan. The project is hosted on github where both systems will pull from or push to.</p>
<p>However, I noticed that on my Debian, when I run <code>python --version</code>, it gives me <code>Python 3.4.2</code> but on my Mac, it gives me <code>Python 2.7.10</code> despite being in the same virtual environment. Moreover, when I run <code>django-admin --version</code> on my Debian, it gives me <code>1.10</code> while on my Mac, <code>1.8.3</code>.</p>
<p>This happens even when I freshly clone the projects from github and run the commands.</p>
<p>Why is it that the virtual environment does not keep the same version of python and django?</p>
| 0 | 2016-08-24T12:43:48Z | 39,124,215 | <p>Now you understand that virtual environments can't be transferred easily from machine to machine. It's common to use the</p>
<pre><code>pip freeze
</code></pre>
<p>command and store its output in a file called <code>requirements.txt</code>. Then anyone else can rebuild your environment on their machine by running</p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p>When you create a new virtual environment you can say which Python interpreter you want to use with the <code>-p</code> or <code>--python</code> switch, which should be followed by the path to the correct executable.</p>
<p>I'd personally recommend against modifying the system Python in any way, because system maintenance routines often rely on its integrity. It's relatively simple to install new copies for Python 2 and 3 somewhere like <code>/usr/local/bin</code> (Mac users often use <code>brew</code> for this) and have virtual environments that rely on different Python interpreters.</p>
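<p>A quick sanity check (plain Python, no assumptions about your setup beyond a working interpreter): run this inside each activated environment to see exactly which interpreter it resolves to.</p>

```python
import sys

# the interpreter path and version the current environment actually uses;
# a mismatch between machines shows up here immediately
print(sys.executable)
print('.'.join(map(str, sys.version_info[:3])))
```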
| 0 | 2016-08-24T13:07:19Z | [
"python",
"django",
"virtualenv"
] |
bind("<Button-2>",..) not working on tk.canvas | 39,123,728 | <p>I'm using the code shown by MonsterBat Doppelgänger in <a href="http://stackoverflow.com/questions/25787523/move-and-zoom-a-tkinter-canvas-with-mouse">Move and zoom a tkinter canvas with mouse</a></p>
<p>It works great! But when I add a self.canvas.bind("<code><Button-2</code>>",self.find_nearest), the right button click is ignored completely.</p>
<p>Here is the code as modified:</p>
<pre><code>#!/usr/bin/python2.7
"""
http://stackoverflow.com/questions/25787523/move-and-zoom-a-tkinter-canvas-with-mouse
by MonsterBat Doppelganger
"""
import Tkinter as tk
import random
class Example(tk.Frame):
def __init__(self, root):
tk.Frame.__init__(self, root)
self.canvas = tk.Canvas(self, width=400, height=400, background="bisque")
self.xsb = tk.Scrollbar(self, orient="horizontal", command=self.canvas.xview)
self.ysb = tk.Scrollbar(self, orient="vertical", command=self.canvas.yview)
self.canvas.configure(yscrollcommand=self.ysb.set, xscrollcommand=self.xsb.set)
self.canvas.configure(scrollregion=(0,0,1000,1000))
self.xsb.grid(row=1, column=0, sticky="ew")
self.ysb.grid(row=0, column=1, sticky="ns")
self.canvas.grid(row=0, column=0, sticky="nsew")
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
#Plot some rectangles
for n in range(50):
x0 = random.randint(0, 900)
y0 = random.randint(50, 900)
x1 = x0 + random.randint(50, 100)
y1 = y0 + random.randint(50,100)
color = ("red", "orange", "yellow", "green", "blue")[random.randint(0,4)]
self.canvas.create_rectangle(x0,y0,x1,y1, outline="black", fill=color, activefill="black", tags=n)
self.canvas.create_text(50,10, anchor="nw", text="Click and drag to move the canvas\nScroll to zoom.")
# This is what enables using the mouse:
self.canvas.bind("<ButtonPress-1>", self.move_start)
self.canvas.bind("<B1-Motion>", self.move_move)
#linux scroll
self.canvas.bind("<Button-4>", self.zoomerP)
self.canvas.bind("<Button-5>", self.zoomerM)
#windows scroll
self.canvas.bind("<MouseWheel>",self.zoomer)
""" My add """
# find my position
self.canvas.bind("<Button-2>", self.find_nearest)
#move
def move_start(self, event):
self.canvas.scan_mark(event.x, event.y)
def move_move(self, event):
self.canvas.scan_dragto(event.x, event.y, gain=1)
#windows zoom
def zoomer(self,event):
if (event.delta > 0):
self.canvas.scale("all", event.x, event.y, 1.1, 1.1)
elif (event.delta < 0):
self.canvas.scale("all", event.x, event.y, 0.9, 0.9)
self.canvas.configure(scrollregion = self.canvas.bbox("all"))
#linux zoom
def zoomerP(self,event):
self.canvas.scale("all", event.x, event.y, 1.1, 1.1)
self.canvas.configure(scrollregion = self.canvas.bbox("all"))
def zoomerM(self,event):
self.canvas.scale("all", event.x, event.y, 0.9, 0.9)
self.canvas.configure(scrollregion = self.canvas.bbox("all"))
""" The rest of the story """
def find_nearest(self,event):
print 'nearest to %d,%d' % (event.x, event.y)
closest = self.canvas.find_closest(event.x,event.y)
if len(closest) == 0:
print 'Nothing near'
else:
print 'Id %s' % closest
if __name__ == "__main__":
root = tk.Tk()
Example(root).pack(fill="both", expand=True)
root.mainloop()
</code></pre>
<p>I'm not even getting to the print in find_nearest. Everything runs as in the original. The canvas scrolls and zooms. No exceptions are raised.</p>
| 0 | 2016-08-24T12:44:59Z | 39,124,566 | <blockquote>
<p>... when I add ... <code><Button-2></code> ..., the right button click is ignored completely.</p>
</blockquote>
<p>It seems like <code>Button-2</code> is the <em>middle</em> mouse button (i.e. pressing down the scroll wheel). For the <em>right</em> mouse button, you have to use <code>Button-3</code>. Change your line to this, and it should work:</p>
<pre><code> # find my position
self.canvas.bind("<Button-3>", self.find_nearest)
</code></pre>
<p>See <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/event-sequences.html" rel="nofollow">e.g. here</a> for reference:</p>
<blockquote>
<p>The usual setup has button 1 on the left and button 3 on the right, but left-handers can swap these positions. </p>
</blockquote>
| 1 | 2016-08-24T13:22:20Z | [
"python",
"tkinter",
"tkinter-canvas"
] |
Pandas: order values from dataframe | 39,123,750 | <p>I have data:</p>
<pre><code>Third party unique identifier Qsex Qage Qfamilystatus QeducationSingle Qincomeevaluation Qjobstatus QRuCitySize QRuDistrict Qcountry
9ea3e3cb6719f3d336d324c446f486bd 1 32 1 5 1 1 1 1
cb570bb986808a5f4d2629287297b902 2 25 5 2 1 1 1
78b3a44eb7c7f7c687ffbcfed57647a4 1 30 4 1 3 6 1
1c728b223a4c2c267f3a3630b4a63f6e 2 45 4 1 1 1 1
8852ecd198fddfa557186c863f2c6fdf 2 41 4 1 7 7 1
1adc146b9ec35f7c632902f480d7e95c 1 70 5 3 1 1 1
0fb0c903a6b2b68f1b0a7cd1962f353c 1 29 5 1 5 7 1
</code></pre>
<p>And another df:</p>
<pre><code>QRuDistrict 1 ЦФО
QRuDistrict 2 ЮФО
QRuDistrict 3 СЗФО
QRuDistrict 4 ДВФО
QRuDistrict 5 СФО
QRuDistrict 6 УФО
QRuDistrict 7 ПФО
QRuDistrict 8 СКФО
QRuDistrict 9 Крымский ФО
</code></pre>
<p>I'm trying to replace the values in the first df with the data from the second, count percentages and write the result to <code>excel</code>. </p>
<p>I use:</p>
<pre><code>d = (df_1[df_1['sign']=='Qcountry'].set_index('number')['result'].to_dict())
df['Country'] = df.Qcountry.map(d)
df2 = pd.crosstab(df.Country, df.Qcountry, margins=True)
df3 = np.round(df2[["All"]] / df['Country'].count() * 100, 2).rename(columns={"All": '%'})
country = pd.concat([df2[["All"]], df3], axis=1)
less = country[country['%'] < 5]
country = country[country['%'] >= 5]
country['All'] = ((all_users * df3.divide(100)).astype(int))
country['%'] = country['%'].astype(str) + '%'
country.to_excel(writer, sheet_name=sheet_name, startrow=48, startcol=4)
</code></pre>
<p>and get:</p>
<pre><code>Federal Districts  Россия
                        N       %
ДВФО                  131   5.33%
Крымский ФО            11   0.48%
ПФО                   416  16.91%
СЗФО                  420  17.09%
СКФО                   43   1.75%
СФО                   259  10.53%
УФО                   208   8.48%
ЦФО                   764  31.08%
ЮФО                   205   8.35%
Total                2461  100.0%
</code></pre>
<p>But I want the rows in the same sequence as in the second dataframe.
I want to get:</p>
<pre><code>Federal Districts  Россия
                        N       %
ЦФО                   764  31.08%
ЮФО                   205   8.35%
СЗФО                  420  17.09%
ДВФО                  131   5.33%
СФО                   259  10.53%
УФО                   208   8.48%
ПФО                   416  16.91%
СКФО                   43   1.75%
Крымский ФО            11   0.48%
Total                2461  100.0%
</code></pre>
<p>How can I sort that in this order?</p>
| 3 | 2016-08-24T12:45:51Z | 39,124,533 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> with the second DataFrame, but it is necessary to add the last item <code>['Total']</code> to the <code>list</code>:</p>
<pre><code>print (df)
               a            b
0  QRuDistrict 1          ЦФО
1  QRuDistrict 2          ЮФО
2  QRuDistrict 3         СЗФО
3  QRuDistrict 4         ДВФО
4  QRuDistrict 5          СФО
5  QRuDistrict 6          УФО
6  QRuDistrict 7          ПФО
7  QRuDistrict 8         СКФО
8  QRuDistrict 9  Крымский ФО
print (df1)
Federal Districts  Россия
                        N       %
ДВФО                  131   5.33%
Крымский ФО            11   0.48%
ПФО                   416  16.91%
СЗФО                  420  17.09%
СКФО                   43   1.75%
СФО                   259  10.53%
УФО                   208   8.48%
ЦФО                   764  31.08%
ЮФО                   205   8.35%
Total                2461  100.0%
</code></pre>
<pre><code>idx = df.b.tolist() + ['Total']
print (idx)
['ЦФО', 'ЮФО', 'СЗФО', 'ДВФО', 'СФО', 'УФО', 'ПФО', 'СКФО', 'Крымский ФО', 'Total']
df1 = df1.reindex(idx)
print (df1)
Federal Districts  Россия
                        N       %
ЦФО                   764  31.08%
ЮФО                   205   8.35%
СЗФО                  420  17.09%
ДВФО                  131   5.33%
СФО                   259  10.53%
УФО                   208   8.48%
ПФО                   416  16.91%
СКФО                   43   1.75%
Крымский ФО            11   0.48%
Total                2461  100.0%
</code></pre>
<hr>
<p>If you use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a>, the ordering is different:</p>
<pre><code>df1 = df1.sort_index(ascending=False)
print (df1)
Federal Districts  Россия
                        N       %
ЮФО                   205   8.35%
ЦФО                   764  31.08%
УФО                   208   8.48%
СФО                   259  10.53%
СКФО                   43   1.75%
СЗФО                  420  17.09%
ПФО                   416  16.91%
Крымский ФО            11   0.48%
ДВФО                  131   5.33%
Total                2461  100.0%
</code></pre>
<p>EDIT by comment:</p>
<p>I changed the column names, and it seems you need only the values of column <code>sign</code> where the first column, <code>number</code>, contains <code>QRuDistrict</code>. Then you can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> and a mask with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow"><code>str.contains</code></a>:</p>
<pre><code>print (df)
          number         sign
0  QRuDistrict 1          ЦФО
1  QRuDistrict 2          ЮФО
2  QRuDistrict 3         СЗФО
3  QRuDistrict 4         ДВФО
4  QRuDistrict 5          СФО
5  QRuDistrict 6          УФО
6  QRuDistrict 7          ПФО
7  QRuDistrict 8         СКФО
8  QRuDistrict 9  Крымский ФО
idx = df.ix[df.number.str.contains('QRuDistrict'), 'sign'].tolist() + ['Total']
print (idx)
['ЦФО', 'ЮФО', 'СЗФО', 'ДВФО', 'СФО', 'УФО', 'ПФО', 'СКФО', 'Крымский ФО', 'Total']
</code></pre>
| 0 | 2016-08-24T13:20:36Z | [
"python",
"pandas",
"indexing",
"dataframe",
"order"
] |
I am getting an error <RuntimeWarning: invalid value encountered in sqrt> | 39,123,766 | <p>I am trying to solve a quadratic equation in python. However, it keeps on giving me a warning: "RuntimeWarning: invalid value encountered in sqrt".</p>
<p>Here's my code:</p>
<pre><code>import numpy as np
a = 0.75 + (1.25 - 0.75)*np.random.randn(10000)
print a
b = 8 + (12 - 8)*np.random.randn(10000)
print b
c = -12 + 2*np.random.randn(10000)
print c
x0 = (-b - np.sqrt(b**2 - (4*a*c)))/(2 * a)
print x0
</code></pre>
<p>Your expert suggestion will be extremely helpful. Thanks in advance</p>
| 0 | 2016-08-24T12:46:54Z | 39,123,873 | <p>This is not 100% Python related. You can't calculate the square root of a negative number (when dealing with real numbers that is).</p>
<p>You didn't take any precautions for when <code>b**2 - (4*a*c)</code> is a negative number.</p>
<pre><code>>>> import numpy as np
>>>
>>> np.sqrt(4)
2.0
>>> np.sqrt(-4)
__main__:1: RuntimeWarning: invalid value encountered in sqrt
nan
</code></pre>
<p>Let's test if you have negative values:</p>
<pre><code>>>> import numpy as np
>>>
>>> a = 0.75 + (1.25 - 0.75) * np.random.randn(10000)
>>> b = 8 + (12 - 8) * np.random.randn(10000)
>>> c = -12 + 2 * np.random.randn(10000)
>>>
>>> z = b ** 2 - (4 * a * c)
>>> print len([_ for _ in z if _ < 0])
71
</code></pre>
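<p>One common fix (a sketch, not the only option; the coefficient spreads below just mirror the question's) is to compute the discriminant first and take the square root only where it is non-negative, leaving <code>NaN</code> elsewhere:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
a = 0.75 + 0.5 * rng.standard_normal(10000)
b = 8.0 + 4.0 * rng.standard_normal(10000)
c = -12.0 + 2.0 * rng.standard_normal(10000)

disc = b ** 2 - 4 * a * c
real = disc >= 0                  # where a real root exists

x0 = np.full_like(disc, np.nan)   # NaN marks "no real solution"
x0[real] = (-b[real] - np.sqrt(disc[real])) / (2 * a[real])
print('real solutions:', int(real.sum()), 'of', disc.size)
```

<p>This avoids the warning entirely, because <code>np.sqrt</code> never sees a negative argument.</p>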
| 3 | 2016-08-24T12:52:15Z | [
"python",
"python-2.7",
"python-3.x",
"numpy",
"math"
] |
Python static inheritance in class variable | 39,123,822 | <p>In python, is there a way to get the class name in the "static constructor"? I would like to initialize a class variable using an inherited class method.</p>
<pre><code>class A():
@classmethod
def _getInit(cls):
return 'Hello ' + cls.__name__
class B(A):
staticField = B._getInit()
</code></pre>
<blockquote>
<p>NameError: name 'B' is not defined</p>
</blockquote>
| 2 | 2016-08-24T12:50:22Z | 39,123,894 | <p>The name <code>B</code> is not assigned to until the full <code>class</code> suite has been executed and a class object has been created. For the same reason, the <code>__name__</code> attribute on the class is not set until the class object is created either.</p>
<p>You'd have to assign that attribute afterwards:</p>
<pre><code>class A():
@classmethod
def _getInit(cls):
return 'Hello ' + cls.__name__
class B(A):
pass
B.staticField = B._getInit()
</code></pre>
<p>The alternative is to use a class decorator (which is passed the newly-created class object) or use a metaclass (which creates the class object in the first place and is given the name to use).</p>
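<p>To illustrate the class-decorator route (a sketch; the decorator name is made up):</p>

```python
def with_static_field(cls):
    # runs after the class object exists, so cls.__name__ is available
    cls.staticField = cls._getInit()
    return cls

class A:
    @classmethod
    def _getInit(cls):
        return 'Hello ' + cls.__name__

@with_static_field
class B(A):
    pass

print(B.staticField)  # Hello B
```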
| 6 | 2016-08-24T12:53:01Z | [
"python",
"oop",
"static"
] |
Starting a Python script on a raspberry via plink (not responding crontab) | 39,123,829 | <p>I have written a <code>script.py</code>, which opens a tk window and draws with turtle in the canvas the window contains. I want to start this script via plink using:</p>
<pre><code>plink.exe -pw raspberry pi@pi-fisch00 python /home/pi/script.py
</code></pre>
<p>But I always receive an error:</p>
<pre><code>script.py line 32, in <module> root = Tk()
no display name and no $DISPLAY environment variable
</code></pre>
<p>I think the same error is the reason the crontab is not executing <code>script.py</code>. </p>
<p>My entry in the crontab:</p>
<pre><code>*/1 * * * * python /home/pi/script.py
</code></pre>
<p>The syntax should be right, because other scripts are working and if I put <code>python /home/pi/script.py</code> in the cmd manually everything is fine. The <code>script.py</code> gets executed. How can I fix this and let the crontab execute the <code>script.py</code>? Why can't I execute the <code>script.py</code> via plink?</p>
| 0 | 2016-08-24T12:50:33Z | 39,124,064 | <p>Look at the error message you are getting:</p>
<blockquote>
<p>no display name and no $DISPLAY environment variable</p>
</blockquote>
<p>You are attempting to run something that requires an X11 display, which isn't going to be available from within cron's context (and likely not via plink either, unless you are running an X11 display server locally <em>and</em> have enabled X11 forwarding).</p>
<p>Typically, if you have something that needs access to the display you need to run it from within an existing desktop session. There are ways to work around this; for some thoughts on that topic see:</p>
<ul>
<li><a href="https://unix.stackexchange.com/questions/25684/how-to-access-x-display-from-a-cron-job-when-using-gdm3">https://unix.stackexchange.com/questions/25684/how-to-access-x-display-from-a-cron-job-when-using-gdm3</a></li>
<li><a href="https://unix.stackexchange.com/questions/10121/open-a-window-on-a-remote-x-display-why-cannot-open-display">https://unix.stackexchange.com/questions/10121/open-a-window-on-a-remote-x-display-why-cannot-open-display</a></li>
</ul>
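<p>For the cron case specifically, a common workaround (a sketch; the display number <code>:0</code> and the <code>.Xauthority</code> path are assumptions about a typical single-user desktop session) is to point the job at the running display in the crontab entry itself:</p>

```
*/1 * * * * env DISPLAY=:0 XAUTHORITY=/home/pi/.Xauthority python /home/pi/script.py
```

<p>This only works while that desktop session is actually logged in; otherwise there is still no display to connect to.</p>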
| 2 | 2016-08-24T13:00:53Z | [
"python",
"crontab",
"plink"
] |
QComboBox item text truncated on Windows | 39,124,182 | <p>I have a standard QComboBox using PySide with fairly long item names, which gets truncated for some reason on Windows, but not in Ubuntu (Gnome).
I have only set it with:</p>
<pre><code>self.ConfChoose = QtGui.QComboBox()
self.ConfChoose.addItem('blablablabla')
</code></pre>
<p>etc. No extra policy settings.</p>
<p><img src="http://i.stack.imgur.com/mgJwW.png" alt="Screenshot Windows"></p>
<p><img src="http://i.stack.imgur.com/y7FqJ.png" alt="Screenshot Ubuntu"></p>
<p>Any ideas as to why, and how I can make the items not get truncated? I can set the size of the QComboBox to the size of the longest text string, but that is not a solution. It should just behave like on Ubuntu.</p>
| 0 | 2016-08-24T13:05:59Z | 39,141,669 | <p>Finally got a solution I can accept:</p>
<pre><code>self.ConfChoose = QtGui.QComboBox()
for name in self.listOfStrings:
    self.ConfChoose.addItem(name)
w=self.ConfChoose.fontMetrics().boundingRect(max(self.listOfStrings, key=len)).width()
self.ConfChoose.view().setFixedWidth(w+10)
</code></pre>
<p>Thank you for the input to get in the right direction...</p>
| 1 | 2016-08-25T09:36:10Z | [
"python",
"qt",
"pyside",
"qcombobox"
] |
A different run speed in python | 39,124,265 | <p>I want to sample from a serial port in Python. But when I run a test to measure the rate, Python gives me different values! It is usually about 24000 times per second, but sometimes this code runs only 14000 times. What is the reason for the big difference? And if I want to sample 1 million times, what should I do?</p>
<p>this is the sample code for test run speed:</p>
<pre><code>import time
def g(start=0, stop=5, step=1):
while start < stop:
yield start
start += step
t1 = time.time()
t2 = t1 + 1
for item in g(10,1000000,1):
print(item)
t1 = time.time()
if t1 > t2:
break
</code></pre>
| 0 | 2016-08-24T13:09:18Z | 39,124,402 | <p>Investigate the <code>timeit</code> module, which was designed for applications like this. Benchmarks have to be run under very controlled conditions to be anything like repeatable. <code>timeit</code> runs your code a number of times and gives you the best result. Usually slower performance will be an indication that your computer is running some other task(s) at the same time as the benchmark.</p>
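<p>A sketch of what that could look like with the generator from the question (the workload size is arbitrary):</p>

```python
import timeit

def g(start=0, stop=5, step=1):
    while start < stop:
        yield start
        start += step

# run the workload once per timing, repeat 5 times, report the minimum;
# the slower repeats mostly measure other processes, not your code
best = min(timeit.repeat(lambda: sum(g(10, 100000)), number=1, repeat=5))
print('best of 5: %.4f s' % best)
```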
| 3 | 2016-08-24T13:15:09Z | [
"python",
"sampling",
"cpu-speed"
] |
A different run speed in python | 39,124,265 | <p>I want to sample from a serial port in Python. But when I run a test to measure the rate, Python gives me different values! It is usually about 24000 times per second, but sometimes this code runs only 14000 times. What is the reason for the big difference? And if I want to sample 1 million times, what should I do?</p>
<p>this is the sample code for test run speed:</p>
<pre><code>import time
def g(start=0, stop=5, step=1):
while start < stop:
yield start
start += step
t1 = time.time()
t2 = t1 + 1
for item in g(10,1000000,1):
print(item)
t1 = time.time()
if t1 > t2:
break
</code></pre>
| 0 | 2016-08-24T13:09:18Z | 39,124,404 | <p>You will always have some timing discrepancies when running code in Python; it's because of the resources your CPU <code>gives</code> to your running script. You have to make a couple of runs and calculate the average time.</p>
| 0 | 2016-08-24T13:15:15Z | [
"python",
"sampling",
"cpu-speed"
] |
A different run speed in python | 39,124,265 | <p>I want to sample from a serial port in Python. But when I run a test to measure the rate, Python gives me different values! It is usually about 24000 times per second, but sometimes this code runs only 14000 times. What is the reason for the big difference? And if I want to sample 1 million times, what should I do?</p>
<p>this is the sample code for test run speed:</p>
<pre><code>import time
def g(start=0, stop=5, step=1):
while start < stop:
yield start
start += step
t1 = time.time()
t2 = t1 + 1
for item in g(10,1000000,1):
print(item)
t1 = time.time()
if t1 > t2:
break
</code></pre>
| 0 | 2016-08-24T13:09:18Z | 39,124,604 | <p>I was at ~15000 on the first execution and then around 28000.
In general the result depends mainly on:</p>
<ul>
<li>your CPU load</li>
<li>cache hit/miss</li>
<li>RAM access time</li>
</ul>
<p>But in your case it is the <code>print</code> that takes most of the execution time, so the access time of printing to stdout is the cause of your variation.</p>
<p>try this :</p>
<pre><code>for item in g(10,100000000,1):
#print(item)
t1 = time.time()
if t1 > t2:
print(item) #print only the last
break
</code></pre>
| 0 | 2016-08-24T13:23:58Z | [
"python",
"sampling",
"cpu-speed"
] |
Label recognition with Facebook's library Fasttext | 39,124,338 | <p>Ok so I have been playing with Facebook's newest text classification python library and I'm struggling a bit with label recognition. </p>
<p>If I understood correctly, the input has to be lines in a txt file, each line containing both the features and the label. The label can be recognized by the classifier by its prefix: "__label__".</p>
<p>But for some reason I'm unable to get my classifier to recognize my labels when I run a simple test code. Here it is:</p>
<pre><code>import fasttext
classifier = fasttext.supervised('toto.txt', 'model')
print classifier.label_prefix
print classifier.labels
raise SystemExit(0)
</code></pre>
<p>Which give me this result in the log :</p>
<pre><code>__label__
[]
</code></pre>
<p>So the code knows that the prefix is "__label__" but can't find any labels in my input file. Any ideas on why this is happening?</p>
<p>Thanks for the help !</p>
| 0 | 2016-08-24T13:12:39Z | 40,086,631 | <p>You need to <strong>show the content of "toto.txt"</strong> in order to get some help.</p>
<p>From what I can see now, there is no problem with the code you provided.</p>
<p><strong>(btw, make sure your toto.txt is encoded in 'utf-8', otherwise, you need to set an encoding param in classifier)</strong></p>
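<p>For reference, a supervised training file for fastText is plain text with one example per line, each line starting with its label(s) carrying the <code>__label__</code> prefix, followed by the text; the labels and sentences below are made-up placeholders:</p>

```
__label__positive this film was a delight from start to finish
__label__negative the plot made no sense at all
```

<p>If no line in <code>toto.txt</code> matches this shape, an empty label list like the one you are seeing would be the expected outcome.</p>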
| 0 | 2016-10-17T12:40:27Z | [
"python",
"label"
] |
Check if a directory is a mount point with python 2.7 | 39,124,363 | <p>Is there a pythonic way and without shell commands (i.e. with subprocess module) to check if a directory is a mount point? </p>
<p>Up to now I use:</p>
<pre><code>import os
import subprocess
def is_mount_point(dir_path):
    try:
        subprocess.check_output([
            'mountpoint',
            os.path.realpath(dir_path)
        ])
        return True
    except subprocess.CalledProcessError:
        return False
</code></pre>
| 0 | 2016-08-24T13:13:37Z | 39,124,749 | <p>There is an <a href="https://docs.python.org/2/library/os.path.html#os.path.ismount" rel="nofollow"><code>os.path.ismount(path)</code></a>.</p>
<blockquote>
<p>Return True if pathname path is a mount point: a point in a file
system where a different file system has been mounted. The function
checks whether pathâs parent, path/.., is on a different device than
path, or whether path/.. and path point to the same i-node on the same
device â this should detect mount points for all Unix and POSIX
variants.</p>
</blockquote>
<pre><code>import os
os.path.ismount(dir_name) # returns boolean
</code></pre>
<p>You may also refer to <a href="https://github.com/python/cpython/blob/master/Lib/posixpath.py#L180" rel="nofollow">implementation</a> (if you're on POSIX system). Check <code>macpath.py</code> or <code>ntpath.py</code> for other platforms.</p>
| 2 | 2016-08-24T13:30:33Z | [
"python",
"python-2.7"
] |
NLTK RegexpParser, chunk phrase by matching exactly one item | 39,124,492 | <p>I'm using NLTK's <code>RegexpParser</code> to chunk a noun phrase, which I define with a grammar as </p>
<pre><code> grammar = "NP: {<DT>?<JJ>*<NN|NNS>+}"
cp = RegexpParser(grammar)
</code></pre>
<p>This is grand, it is matching a noun phrase as:</p>
<ul>
<li>DT if it exists</li>
<li>JJ in whatever number</li>
<li>NN or NNS, at least one</li>
</ul>
<p>Now, what if I want to match the same but having the <em>whatever number</em> for JJ transformed into <em>only one</em>? So I want to match DT if it exists, <strong>one</strong> JJ and 1+ NN/NNS. If there are more than one JJ, I want to match only one of them, the one nearest to the noun (and DT if there is, and NN/NNS).</p>
<p>The grammar</p>
<pre><code>grammar = "NP: {<DT>?<JJ><NN|NNS>+}"
</code></pre>
<p>would match only when there is just one JJ, the grammar</p>
<pre><code>grammar = "NP: {<DT>?<JJ>{1}<NN|NNS>+}"
</code></pre>
<p>which I thought would work given the <a href="http://www.rexegg.com/regex-quickstart.html" rel="nofollow">typical Regexp patterns</a>, raises a ValueError.</p>
<p>For example, in "This beautiful green skirt", I'd like to chunk "This green skirt".</p>
<p>So, how would I proceed?</p>
| 0 | 2016-08-24T13:19:06Z | 39,179,019 | <p>The grammar <code>grammar = "NP: {<DT>?<JJ><NN|NNS>+}"</code> is correct for the requirement you mentioned.</p>
<p>The example which you gave in comment section, where you are not getting DT in output -</p>
<pre><code>"This beautiful green skirt is for you."
Tree('S', [('This', 'DT'), ('beautiful', 'JJ'), Tree('NP', [('green','JJ'),
('skirt', 'NN')]), ('is', 'VBZ'), ('for', 'IN'), ('you', 'PRP'), ('.', '.')])
</code></pre>
<p>Here in your example, there are <code>2 consecutive JJs</code>, which does not meet your stated requirement: <code>"I want to match DT if it exists, one JJ and 1+ NN/NNS."</code></p>
<hr>
<p>For the updated requirement -
<code>I want to match DT if it exists, one JJ and 1+ NN/NNS. If there are more than one JJ, I want to match only one of them, the one nearest to the noun (and DT if there is, and NN/NNS).</code></p>
<p>Here, you will need to use </p>
<pre><code>grammar = "NP: {<DT>?<JJ>*<NN|NNS>+}"
</code></pre>
<p>and do post processing of the NP chunks to remove extra JJ. </p>
<p><strong>Code:</strong></p>
<pre><code>from nltk import Tree
chunk_output = Tree('S', [Tree('NP', [('This', 'DT'), ('beautiful', 'JJ'), ('green','JJ'), ('skirt', 'NN')]), ('is', 'VBZ'), ('for', 'IN'), ('you', 'PRP'), ('.', '.')])
for child in chunk_output:
if isinstance(child, Tree):
if child.label() == 'NP':
for num in range(len(child)):
if not (child[num][1]=='JJ' and child[num+1][1]=='JJ'):
print child[num][0]
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>This
green
skirt
</code></pre>
| 1 | 2016-08-27T08:21:18Z | [
"python",
"regex",
"nlp",
"nltk"
] |
Python & Google Places API | Want to get all Restaurants at a specific postion | 39,124,510 | <p>I want to get all restaurants in London by using python 3.5 and the module "googlePlaces" with the Google Places API. I read the "googleplaces" documentation and searched here. But I don't get it. That's my code so far:</p>
<pre><code>from googleplaces import GooglePlaces, types, lang
API_KEY = 'XXXCODEXXX'
google_places = GooglePlaces(API_KEY)
query_result = google_places.nearby_search(
location='London', keyword='Restaurants',
radius=1000, types=[types.TYPE_RESTAURANT])
if query_result.has_attributions:
print query_result.html_attributions
for place in query_result.places:
place.get_details()
print place.rating
</code></pre>
<p>The Code doesn't work. What can I do to get a list with all Restaurants in this area?
Thanks</p>
| 1 | 2016-08-24T13:19:39Z | 39,208,958 | <p>It'll be better if you drop the <code>keyword</code> parameter, <code>types</code> already searches for restaurants.</p>
<p>Bear in mind the Places API (as other Google Maps APIs) is not a database, it will not return all results that match. Actually returns only 20, and you can get an extra 40 or so, but that's all.</p>
<p>If I'm reading the <a href="https://github.com/slimkrazy/python-google-places/blob/master/googleplaces/__init__.py" rel="nofollow">GooglePlaces</a> correctly, your code will send an API request such like:</p>
<p><a href="http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&keyword=Restaurants&key=YOUR_API_KEY" rel="nofollow">http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&keyword=Restaurants&key=YOUR_API_KEY</a></p>
<p>If you just drop the <code>keyword</code> parameter, it'll be like:</p>
<p><a href="http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&key=YOUR_API_KEY" rel="nofollow">http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&key=YOUR_API_KEY</a></p>
<p>The difference is subtle: <code>keyword=Restaurants</code> will make the API match results that have the word "Restaurants" in their name, address, etc. Some of these may not be restaurants (and will be discarded), while some actual restaurants may not have the word "Restaurants" in them.</p>
| 1 | 2016-08-29T14:41:40Z | [
"python",
"python-3.x",
"google-places-api",
"google-places"
] |
How I do connect oracle database with PyQt ? What's the procedure? | 39,124,537 | <p>I want to connect my project's Oracle database with PyQt. I searched a lot on the internet, but unfortunately I didn't find any understandable or effective method or procedure.</p>
| 0 | 2016-08-24T13:20:49Z | 39,124,785 | <p>Refer to the following link, in which the author explains how to use <code>cx_Oracle</code> in Python.</p>
<p><a href="https://www.youtube.com/watch?v=w3WVqn3WySs" rel="nofollow">https://www.youtube.com/watch?v=w3WVqn3WySs</a></p>
<p>After fetching your data from your Oracle DB, you can update your PyQt widgets based on the data.</p>
| 0 | 2016-08-24T13:32:04Z | [
"python",
"database",
"oracle",
"pyqt"
] |
recursive function in python but with strange return | 39,124,558 | <p>I am trying to solve a linear equation with several variables, for example: 11x+7y+3z=20, with non-negative integer results only.</p>
<p>I use the code below in Python 3.5.1, but the result contains something like [...]. I wonder what that is?
My code tests every variable from 0 to a maximum [the total value divided by the corresponding coefficient]. Because the number of variables may be large, I want to use recursion to solve it. </p>
<pre><code>def equation (a,b,relist):
global total
if len(a)>1:
for i in range(b//a[0]+1):
corelist=relist.copy()
corelist+=[i]
testrest=equation(a[1:],b-a[0]*i,corelist)
if testrest:
total+=[testrest]
return total
else:
if b%a[0]==0:
relist+=[b//a[0]]
return relist
else:
return False
total=[]
re=equation([11,7,3],20,[])
print(re)
</code></pre>
<p>the result is</p>
<pre><code>[[0, 2, 2], [...], [1, 0, 3], [...]]
</code></pre>
<p>change to a new one could get clean result, but I still need a global variable:</p>
<pre><code>def equation (a,b,relist):
global total
if len(a)>1:
for i in range(b//a[0]+1):
corelist=relist.copy()
corelist+=[i]
equation(a[1:],b-a[0]*i,corelist)
return total
else:
if b%a[0]==0:
relist+=[b//a[0]]
total+=[relist]
return
else:
return
total=[]
print(equation([11,7,3],20,[]))
</code></pre>
| 0 | 2016-08-24T13:22:09Z | 39,125,898 | <p>I see three layers of problems here.</p>
<p>1) There seems to be a misunderstanding about recursion.</p>
<p>2) There seems to be an underestimation of the complexity of the problem you are trying to solve (a modeling issue)</p>
<p>3) Your main question exposes some lacking skills in python itself.</p>
<p>I will address the questions in backward order given that your actual question is "the result contains something like [...]. I wonder what is it?"</p>
<p>"<code>[]</code>" in python designates a list.</p>
<p>For example:</p>
<pre><code>var = [ 1, 2 ,3 ,4 ]
</code></pre>
<p>Creates a reference "<code>var</code>" to a list containing 4 integers of values 1, 2, 3 and 4 respectively.</p>
<pre><code>var2 = [ "hello", ["foo", "bar"], "world" ]
</code></pre>
<p><code>var2</code> on the other hand is a reference to a composite list of 3 elements, a string, another list and a string. The 2nd element is a list of 2 strings.</p>
<p>So your results is a list of lists of integers (assuming the 2 lists with "..." are integers). If each sublists are of the same size, you could also think of it as a matrix. And the way the function is written, you could end up with a composite list of lists of integers, the value "<code>False</code>" (or the value "<code>None</code>" in the newest version)</p>
<p>Now to the modeling problem. The equation 11x + 7y + 3z = 20 is one equation with 3 unknowns. It is not clear at all to me what you want to achieve with this program, but unless you solve the equation by selecting 2 independent variables, you won't achieve much. It is also not clear to me what the relation is between the program and the equation, save for the list you provided as an argument with the values 11, 7 and 3.</p>
<p>What I would do (assuming you are looking for triplets of values that solves the equation) is go for the equation: f(x,y) = (20/3) - (11/3)x - (7/3)y. Then the code I would rather write is:</p>
<pre><code>def func_f(x, y):
return 20.0/3.0 - (11.0/3.0) * x - (7.0/3.0) * y
list_of_list_of_triplets = []
# iterate over all (x, y) pairs in the bounded grid, not just the diagonal
for x in range(100):
    for y in range(100):
        list_of_triplet = [x, y, func_f(x, y)]
        list_of_list_of_triplets += [list_of_triplet] # or .append(list_of_triplet)
</code></pre>
<p>Be mindful that the number of solutions to this equation is infinite. You could think of it as a straight line in a rectangular prism if you bound the variables. If you wanted to represent the same line in an abstract number of dimensions, you could rewrite the above as:</p>
<pre><code>def func_multi_f(nthc, const, coeffs, vars):
    return const - sum([a*b/nthc for a,b in zip(coeffs, vars)])
</code></pre>
<p>Where <code>nthc</code> is the coefficient of the Nth variable, <code>const</code> is an offset constant, <code>coeffs</code> is a list of coefficients and <code>vars</code> the <strong>values</strong> of the N-1 other variables. For example, we could re-write the <code>func_f</code> as:</p>
<pre><code>def func_f(x,y):
return func_multi_f(3.0, 20.0, [11.0, 7.0], [x,y])
</code></pre>
<p>Now about recursion. Recursion is a formulation in which a function calls itself on a progressively reduced input until a final result is reached. In pseudo code, a recursive algorithm can be formulated as:</p>
<pre><code>input = a reduced value or input items
if input has reached final state: return final value
operation = perform something on the input, reduce it, and combine the result with the return value of this algorithm applied to the reduced input.
</code></pre>
<p>For example, the fibonacci suite:</p>
<pre><code>def fibonacci(val):
    if val <= 2:
        return 1
    return fibonacci(val - 1) + fibonacci(val - 2)
</code></pre>
<p>If you wanted to recursively add elements from a list:</p>
<pre><code>def sum_recursive(list):
if len(list) == 1:
return list[0]
return sum_recursive(list[:-1]) + list[-1]
</code></pre>
<p>Hope it helps.</p>
<p><strong>UPDATE</strong></p>
<p>From comments and original question edits, it appears that we are rather looking for INTEGER solutions to the equation. Of non-negative values. That is quite different.</p>
<p>1) Step one find bounds: use the equation ax + by + cz <= 20 with a,b,c > 0 and x,y,z >= 0</p>
<p>2) Step two, simply do [(x, y, z) for x, y, z in itertools.product(bounds_x, bounds_y, bounds_z) if x*11 + y*7 + z*3 - 20 == 0] and you will have a list of valid triplets (note itertools.product rather than zip, so that every combination is tried). </p>
<p>in code:</p>
<pre><code>def bounds(coeff, const):
    return [val for val in range(const + 1) if coeff * val <= const]
def combine_bounds(bounds_list):
# here you have to write your recusive function to build
# all possible combinations assuming N dimensions
def sols(coeffs, const):
    bounds_lists = [bounds(a, const) for a in coeffs]
    return [vals for vals in combine_bounds(bounds_lists)
            if sum(a*b for a, b in zip(coeffs, vals)) - const == 0]
</code></pre>
| 5 | 2016-08-24T14:18:44Z | [
"python",
"recursion",
"equation"
] |
recursive function in python but with strange return | 39,124,558 | <p>I am trying to solve a linear equation with several variables. For example: 11x+7y+3z=20, with non-negative integer results only.</p>
<p>I use the code below in Python 3.5.1, but the result contains something like [...]. I wonder what it is.
The code I have tests every variable from 0 to a maximum [total value divided by the corresponding coefficient]. Because the coefficients may be large, I want to use recursion to solve it.</p>
<pre><code>def equation (a,b,relist):
global total
if len(a)>1:
for i in range(b//a[0]+1):
corelist=relist.copy()
corelist+=[i]
testrest=equation(a[1:],b-a[0]*i,corelist)
if testrest:
total+=[testrest]
return total
else:
if b%a[0]==0:
relist+=[b//a[0]]
return relist
else:
return False
total=[]
re=equation([11,7,3],20,[])
print(re)
</code></pre>
<p>the result is</p>
<pre><code>[[0, 2, 2], [...], [1, 0, 3], [...]]
</code></pre>
<p>change to a new one could get clean result, but I still need a global variable:</p>
<pre><code>def equation (a,b,relist):
global total
if len(a)>1:
for i in range(b//a[0]+1):
corelist=relist.copy()
corelist+=[i]
equation(a[1:],b-a[0]*i,corelist)
return total
else:
if b%a[0]==0:
relist+=[b//a[0]]
total+=[relist]
return
else:
return
total=[]
print(equation([11,7,3],20,[]))
</code></pre>
| 0 | 2016-08-24T13:22:09Z | 39,133,698 | <p>Here is a solution built from your second one, but without the global variable. Instead, each call passes back a list of solutions; the parent call appends each solution to the current element, making a new list to return.</p>
<pre><code>def equation (a, b):
result = []
if len(a) > 1:
# For each valid value of the current coefficient,
# recur on the remainder of the list.
for i in range(b // a[0]+1):
soln = equation(a[1:], b-a[0]*i)
# prepend the current coefficient
# to each solution of the recursive call.
for item in soln:
result.append([i] + item)
else:
# Only one item left: is it a solution?
if b%a[0] == 0:
# Success: return a list of the one element
result = [[b // a[0]]]
else:
# Failure: return empty list
result = []
return result
print(equation([11, 7, 3], 20))
</code></pre>
| 0 | 2016-08-24T22:02:00Z | [
"python",
"recursion",
"equation"
] |
Show progress bar for each epoch during batchwise training in Keras | 39,124,676 | <p>When I load the whole dataset in memory and train the network in Keras using following code:</p>
<pre><code>model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)
</code></pre>
<p>This generates a progress bar per epoch with metrics like ETA, accuracy, loss, etc</p>
<p>When I train the network in batches, I'm using the following code</p>
<pre><code>for e in range(40):
for X, y in data.next_batch():
model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=1)
</code></pre>
<p>This will generate a progress bar for each batch instead of each epoch. Is it possible to generate a progress bar for each epoch during batchwise training? </p>
| 6 | 2016-08-24T13:26:43Z | 39,192,224 | <p>1.</p>
<pre><code>model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)
</code></pre>
<p>In the above change to <code>verbose=2</code>, as it is mentioned in the documentation: "verbose: 0 for no logging to stdout, 1 for progress bar logging, <code>2 for one log line per epoch</code>."</p>
<p>It'll show your output as:</p>
<pre><code>Epoch 1/100
0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750
Epoch 2/100
0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250
Epoch 3/100
0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250
.....
.....
</code></pre>
<p>2.</p>
<p>If you want to show a progress bar for completion of epochs, keep <code>verbose=0</code> (which suppresses logging to stdout) and implement it in the following manner:</p>
<pre><code>from time import sleep
import sys
epochs = 10
for e in range(epochs):
sys.stdout.write('\r')
for X, y in data.next_batch():
model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0)
# print loss and accuracy
# the exact output you're looking for:
    sys.stdout.write("[%-60s] %d%%" % ('=' * (60 * (e + 1) // epochs), 100 * (e + 1) // epochs))
sys.stdout.flush()
sys.stdout.write(", epoch %d"% (e+1))
sys.stdout.flush()
</code></pre>
<p>The output will be as follows:</p>
<p>[============================================================] 100%, epoch 10</p>
<p>3.</p>
<p>If you want to show loss after every n batches, you can use:</p>
<pre><code>out_batch = NBatchLogger(display=1000)
model.fit([X_train_aux,X_train_main],Y_train,batch_size=128,callbacks=[out_batch])
</code></pre>
<p>Though, I haven't ever tried it before. The above example was taken from this keras github issue: <a href="https://github.com/fchollet/keras/issues/2850" rel="nofollow">Show Loss Every N Batches #2850</a></p>
<p>4.</p>
<p>You can also use <code>progbar</code> for progress, but it'll print progress batchwise</p>
<pre><code>from keras.utils import generic_utils
progbar = generic_utils.Progbar(X_train.shape[0])
for X_batch, Y_batch in datagen.flow(X_train, Y_train):
loss, acc = model_test.train([X_batch]*2, Y_batch, accuracy=True)
progbar.add(X_batch.shape[0], values=[("train loss", loss), ("acc", acc)])
</code></pre>
| 2 | 2016-08-28T14:19:12Z | [
"python",
"machine-learning",
"keras"
] |
Show progress bar for each epoch during batchwise training in Keras | 39,124,676 | <p>When I load the whole dataset in memory and train the network in Keras using following code:</p>
<pre><code>model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)
</code></pre>
<p>This generates a progress bar per epoch with metrics like ETA, accuracy, loss, etc</p>
<p>When I train the network in batches, I'm using the following code</p>
<pre><code>for e in range(40):
for X, y in data.next_batch():
model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=1)
</code></pre>
<p>This will generate a progress bar for each batch instead of each epoch. Is it possible to generate a progress bar for each epoch during batchwise training? </p>
| 6 | 2016-08-24T13:26:43Z | 39,199,930 | <p>You can set verbose=0 and register callbacks that update progress at the end of each fit:</p>
<pre><code>clf.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0, callbacks=[some_callback])
</code></pre>
<p><a href="https://keras.io/callbacks/#example-model-checkpoints" rel="nofollow">https://keras.io/callbacks/#example-model-checkpoints</a></p>
<p>or set callback <a href="https://keras.io/callbacks/#remotemonitor" rel="nofollow">https://keras.io/callbacks/#remotemonitor</a></p>
| 0 | 2016-08-29T06:37:39Z | [
"python",
"machine-learning",
"keras"
] |
I've created a simple calculator in an OOP way in Python. I get an error message that my variables are not defined | 39,124,900 | <p>So that's my calc; I was trying to write it to understand class decorators better.</p>
<pre><code>class Calc():
@staticmethod
def add(x, y):
return x + y
.
.
.
@staticmethod
def div(x, y):
return x / y
@staticmethod
def get_numbers():
num1 = int(input("Enter first number: "))
num2 = int(input("Enter second number: "))
return num1, num2
@staticmethod
def get_operator():
operator = input('Please enter an operator (+, -, *, /): ')
return operator
@classmethod
def calculate(cls):
cls.get_numbers()
cls.get_operator()
if operator == '+':
print(add(num1, num2))
elif operator == '-':
print(sub(num1, num2))
elif operator == '*':
print(mul(num1, num2))
elif operator == '/':
print(div(num1, num2))
Calc.calculate()
</code></pre>
<p>When I run it, I get this: </p>
<pre><code>Traceback (most recent call last):
File "calc_feltoltesre.py", line 44, in <module>
Calc.calculate()
File "calc_feltoltesre.py", line 32, in calculate
get_numbers()
NameError: name 'get_numbers' is not defined
</code></pre>
<p>I have no idea, why the operator and num1 and num2 are not visible in the scope. </p>
| 0 | 2016-08-24T13:37:23Z | 39,125,288 | <p>There's a couple things wrong with your <code>calculate</code> method.</p>
<p>First off, to get access to what your functions <code>get_numbers()</code> and <code>get_operator()</code> return, you need to store the values inside a variable.<br>
Second, you also need to add <code>cls.</code> to your <code>add</code>, <code>sub</code>, <code>mul</code> and <code>div</code> calls (inside the print statement) or else it will give you a <code>NameError</code> since those methods aren't defined.</p>
<p>Try this out:</p>
<pre><code>@classmethod
def calculate(cls):
num1, num2 = cls.get_numbers()
operator = cls.get_operator()
if operator == '+':
print (cls.add(num1, num2))
elif operator == '-':
print (cls.sub(num1, num2))
elif operator == '*':
print (cls.mul(num1, num2))
elif operator == '/':
print (cls.div(num1, num2))
</code></pre>
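<p>As a side note (not part of the fix above): a dict-based dispatch is a common, more compact alternative to the if/elif chain, using the standard <code>operator</code> module. The <code>apply_op</code> helper below is hypothetical, just to sketch the idea:</p>

```python
import operator

# Map operator symbols to functions from the standard operator module.
OPS = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

def apply_op(symbol, x, y):
    # Raises KeyError for an unknown operator symbol.
    return OPS[symbol](x, y)

print(apply_op('+', 2, 3))   # 5
print(apply_op('/', 10, 4))  # 2.5
```

<p>This keeps the operator handling in one place and makes it trivial to add new operators later.</p>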
| 1 | 2016-08-24T13:54:32Z | [
"python",
"oop"
] |
Can I store my own class object into hdf5? | 39,124,934 | <p>I have a class like this: </p>
<pre><code>class C:
def __init__(self, id, user_id, photo):
self.id = id
self.user_id = user_id
self.photo = photo
</code></pre>
<p>I need to create millions of these objects. id is an integer as well as user_id but photo is a bool array of size 64. My boss wants me to store all of them inside hdf5 files. I also need to be able to make queries according to their user_id attributes to get all of the photos that have the same user_id. Firstly, how do I store them? Or even can I? And secondly, once I store(if I can) them how do I query them? Thank you.</p>
| 0 | 2016-08-24T13:39:21Z | 39,126,483 | <p>Although you can store the whole data structure in a single HDF5 table, it is probably much easier to store the described class as three separate variables - two 1D arrays of integers and a data structure for storing your 'photo' attribute.</p>
<p>If you care about file size and speed and do not care about human-readability of your files, you can model your 64 bool values either as 8 1D arrays of UINT8 or a 2D array N x 8 of UINT8 (or CHARs). Then, you can implement a simple interface that would pack your bool values into bits of UINT8 and back (e.g., <a href="http://stackoverflow.com/questions/17506163/python-convert-boolean-array-to-int-array">Python: convert boolean array to int array</a>)</p>
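<p>As a minimal, dependency-free sketch of that pack/unpack idea (the helper names are made up for illustration; <code>numpy.packbits</code>/<code>numpy.unpackbits</code> do the same thing much faster on whole arrays):</p>

```python
def pack_bools(bits):
    # Pack an iterable of 8 booleans into one integer in 0..255.
    value = 0
    for b in bits:
        value = (value << 1) | int(bool(b))
    return value

def unpack_bools(value, width=8):
    # Reverse operation: one integer back into a list of booleans.
    return [bool((value >> shift) & 1) for shift in range(width - 1, -1, -1)]

bits = [True, False, True, True, False, True, True, True]
packed = pack_bools(bits)
print(packed)                        # 183
print(unpack_bools(packed) == bits)  # True
```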
<p>As far as I know, there are no built-in search functions in HDF5, but you can read in the variable containing user_ids and then simply use Python to find the indexes of all elements matching your user_id.</p>
<p>Once you have the indexes, you can read in the relevant slices of your other variables. HDF5 natively supports efficient slicing, but it works on ranges, so you might want to think about how to store records with the same user_id in continuous chunks; see the discussion over here </p>
<p><a href="http://stackoverflow.com/questions/21766145/h5py-correct-way-to-slice-array-datasets">h5py: Correct way to slice array datasets</a></p>
<p>You might also want to look into PyTables - a Python interface that builds on HDF5 to store data in table-like structures.</p>
<pre><code>import numpy as np
import h5py
class C:
def __init__(self, id, user_id, photo):
self.id = id
self.user_id = user_id
self.photo = photo
def write_records(records, file_out):
f = h5py.File(file_out, "w")
dset_id = f.create_dataset("id", (1000000,), dtype='i')
dset_user_id = f.create_dataset("user_id", (1000000,), dtype='i')
    dset_photo = f.create_dataset("photo", (1000000,8), dtype='uint8')  # 'u8' would mean uint64; uint8 matches np.packbits output
dset_id[0:len(records)] = [r.id for r in records]
dset_user_id[0:len(records)] = [r.user_id for r in records]
dset_photo[0:len(records)] = [np.packbits(np.array(r.photo, dtype='bool').astype(int)) for r in records]
f.close()
def read_records_by_id(file_in, record_id):
f = h5py.File(file_in, "r")
dset_id = f["id"]
    data = dset_id[:]  # read the whole id column, not just the first two entries
res = []
for idx in np.where(data == record_id)[0]:
record = C(f["id"][idx:idx+1][0], f["user_id"][idx:idx+1][0], np.unpackbits( np.array(f["photo"][idx:idx+1][0], dtype='uint8') ).astype(bool))
res.append(record)
return res
m = [ True, False, True, True, False, True, True, True]
m = m+m+m+m+m+m+m+m
records = [C(1, 3, m), C(34, 53, m)]
# Write records to file
write_records(records, "mytestfile.h5")
# Read record from file
res = read_records_by_id("mytestfile.h5", 34)
print res[0].id
print res[0].user_id
print res[0].photo
</code></pre>
| 0 | 2016-08-24T14:44:10Z | [
"python",
"hdf5"
] |
Using subprocess to ping an address and get the average ping output in Python? | 39,124,982 | <p>With this:</p>
<pre><code>import subprocess
hostname = '104.160.142.3'
pingResponse = subprocess.Popen(["ping", hostname, "-n", '1'], stdout=subprocess.PIPE).stdout.read()
</code></pre>
<p>I get a string pingResponse: </p>
<pre><code>b'\r\nPinging 104.160.142.3 with 32 bytes of data:\r\nReply from 104.160.142.3: bytes=32 time=159ms TTL=60\r\n\r\nPing statistics for 104.160.142.3:\r\n Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),\r\nApproximate round trip times in milli-seconds:\r\n Minimum = 159ms, Maximum = 159ms, Average = 159ms\r\n'
</code></pre>
<p>and I basically want to get the average ms part and store it in another string, but if I try to print out word by word:</p>
<pre><code>for i in pingResponse:
print(i)
</code></pre>
<p>I just get a bunch of numbers:</p>
<pre><code>58
32
83
101
110
116
32
61
32
49
44
32
82
101
99
101
105
118
101
100
32
61
32
49
44
32
76
111
115
116
32
61
32
48
32
40
48
37
32
108
111
115
115
41
44
13
10
65
112
112
114
111
120
105
109
97
116
101
32
114
111
117
110
100
32
116
114
105
112
32
116
105
109
101
115
32
105
110
32
109
105
108
108
105
45
115
101
99
111
110
100
115
58
13
10
32
32
32
32
77
105
110
105
109
117
109
32
61
32
52
52
109
115
44
32
77
97
120
105
109
117
109
32
61
32
52
52
109
115
44
32
65
118
101
114
97
103
101
32
61
32
52
52
109
115
13
10
</code></pre>
<p>How can I store the average ms into another string?</p>
| 0 | 2016-08-24T13:41:27Z | 39,125,090 | <p>You are getting numbers because that is a binary string (note the <code>b</code> in the beginning).</p>
<p>You will need to <code>decode</code> it first, then you can use <a href="https://regex101.com/r/zF5tZ4/1" rel="nofollow">regex</a>:</p>
<pre><code>import re
s = b'\r\nPinging 104.160.142.3 with 32 bytes of data:\r\nReply from 104.160.142.3: bytes=32 time=159ms TTL=60\r\n\r\nPing statistics for 104.160.142.3:\r\n Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),\r\nApproximate round trip times in milli-seconds:\r\n Minimum = 159ms, Maximum = 159ms, Average = 159ms\r\n'
s = s.decode()
print(re.search(r'Average = (\d+)', s, re.MULTILINE).group(1))
>> 159
</code></pre>
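<p>If the host doesn't reply, the "Average" line is absent and <code>re.search</code> returns <code>None</code>, so it's worth guarding for that. A small sketch (the <code>average_ms</code> helper name is just illustrative):</p>

```python
import re

def average_ms(ping_output):
    # Return the average round-trip time in ms, or None when the
    # output contains no "Average = ...ms" part (e.g. a timeout).
    match = re.search(r'Average = (\d+)ms', ping_output)
    return int(match.group(1)) if match else None

print(average_ms("Minimum = 159ms, Maximum = 159ms, Average = 159ms"))  # 159
print(average_ms("Request timed out."))  # None
```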
| 2 | 2016-08-24T13:45:55Z | [
"python"
] |
Is it possible to define a special sort key in MongoDB? | 39,124,996 | <p>I have a MongoDB collection that I want to display on a webpage using pymongo and flask. I want to display only N at a time on the webpage, but I don't want to load the entire set at once, sort, and then show entries a:(a+N).</p>
<p>What I'd like to do is something like</p>
<pre><code>db.jobs.find().sort({{"status" : KEY, "startTime" : -1})).skip(a).limit(N)
</code></pre>
<p>Values in status would be "UNENDED", "SUCCESS", "FAILED". And I want to be able to define the sort order.</p>
<p>Normally in python I'd do something like:</p>
<pre><code>_JOB_PRIORITIES = {"UNENDED" : 0, "SUCCESS" : 1, "FAILURE" : 2}
jobs = sorted((job for job in jobs),key=lambda j: _JOB_PRIORITIES.get(j["result"]))
</code></pre>
<p>But as I said, I'd rather not load everything into the program, but rather have MongoDB do all of that work.</p>
<p>Is such a thing possible in MongoDB, or am I going to have to define some "statusPriority" field when I create the entries?</p>
| 0 | 2016-08-24T13:42:01Z | 39,126,182 | <p>Yes. Add the statusPriority field to your documents like you have defined in _JOB_PRIORITIES and then sort on that in the Mongo cursor.</p>
<p><a href="https://docs.mongodb.com/manual/reference/method/cursor.sort/" rel="nofollow">https://docs.mongodb.com/manual/reference/method/cursor.sort/</a></p>
| 0 | 2016-08-24T14:31:48Z | [
"python",
"mongodb",
"sorting"
] |
how to interpret the timeit command in Python | 39,124,999 | <p>For example I have a list:</p>
<pre><code>L=[-13, -24, -21, -3, -23, -15, -14, -27, -13, -12]
</code></pre>
<ol>
<li><p>if type in <code>%timeit -n 10 myList = [item for item in L if item < 15]</code>
The output is <code>10 loops, best of 3: 1.25 µs per loop</code></p></li>
<li><p>if I type <code>myGen = (item for item in L if item < 15)</code>
The output is <code>1000000 loops, best of 3: 561 ns per loop</code></p></li>
</ol>
<p>I don't understand in case 2, why a generator takes 1000000 loops rather than 10 loops? And what does "best of 3" mean? And how can I work out the total seconds it takes for each commond, like 10*1.25=12.5 us for case 1?</p>
| 0 | 2016-08-24T13:42:07Z | 39,125,042 | <p>You didn't include the <code>-n</code> argument to <code>%timeit</code> in your second example, so ipython varies the number of repetitions based on how long a trial-run takes; the faster the piece of code being tested, the more iterations are done to get a more accurate per-iteration time value.</p>
<p>Moreover, the tests are run several times to try and minimise external factors (for example, when your OS just happened to schedule a disk buffer flush and everything else becomes a little bit slower). This is where the 'best of 3' comes in; the tests were run 3 times in a row and the best timings were picked.</p>
<p>See the <a href="https://ipython.org/ipython-doc/3/interactive/magics.html#magic-timeit" rel="nofollow"><code>%timeit</code> magic command documentation</a>, which includes these options and their default behaviour:</p>
<blockquote>
<p><code>-n<N></code>: execute the given statement <code><N></code> times in a loop. If this value is not given, a fitting value is chosen.</p>
<p><code>-r<R></code>: repeat the loop iteration <code><R></code> times and take the best result. Default: 3</p>
</blockquote>
<p>Your first example <em>does</em> use <code>-n 10</code> so it was run just 10 times.</p>
<p>Because creating a <em>generator</em> object with a generator expression is near-instant, ipython can execute the loop way more often than executing a list comprehension (which has to execute the <code>for</code> loop and produce a list object with all the results <em>there and then</em>). Remember that a generator expression does not do <strong>any</strong> work until you drive iteration.</p>
<p>If you wanted to compare how long a generator expression takes to produce the same results as a list comprehension, you'd have to actually iterate. You could pass the expression to a <code>list()</code> call to actually produce a list too:</p>
<pre><code>%timeit -n 10 myGen = (item for item in L if item < 15); list(myGen)
</code></pre>
<p>This'll be slower as a generator has a little more overhead than a list comprehension:</p>
<pre><code>In [1]: L=[-13, -24, -21, -3, -23, -15, -14, -27, -13, -12]
In [2]: %timeit -n 10 myList = [item for item in L if item < 15]
10 loops, best of 3: 1.29 µs per loop
In [3]: %timeit -n 10 myGen = (item for item in L if item < 15); list(myGen)
10 loops, best of 3: 1.72 µs per loop
</code></pre>
<p>Note that you <em>have</em> to re-create the generator each test iteration because generators can produce their output just once.</p>
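<p>The same comparison can also be reproduced outside IPython with the standard-library <code>timeit</code> module, which is what <code>%timeit</code> wraps (absolute numbers vary by machine, so only the relative ordering is meaningful):</p>

```python
import timeit

setup = "L = [-13, -24, -21, -3, -23, -15, -14, -27, -13, -12]"

# Creating the generator object alone does no iteration work...
t_gen_only = timeit.timeit("(item for item in L if item < 15)",
                           setup=setup, number=10000)
# ...consuming it does work comparable to the list comprehension.
t_gen_list = timeit.timeit("list(item for item in L if item < 15)",
                           setup=setup, number=10000)
t_listcomp = timeit.timeit("[item for item in L if item < 15]",
                           setup=setup, number=10000)

print(t_gen_only, t_gen_list, t_listcomp)
```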
| 4 | 2016-08-24T13:44:24Z | [
"python",
"time",
"ipython",
"generator",
"python-2.x"
] |
Ac2git giving Attribute Error | 39,125,044 | <p>I am trying to use ac2git to convert my Accurev Depot to Git Repository. I followed the steps given here: <a href="https://github.com/NavicoOS/ac2git" rel="nofollow">https://github.com/NavicoOS/ac2git</a>. I am getting this error when trying to run the python ac2git.py command and the operation is aborted:</p>
<pre><code>2016-08-24 09:07:31,312 - ac2git - ERROR - The script has encountered an exception, aborting!
Traceback (most recent call last):
File "ac2git.py", line 3596, in AccuRev2GitMain
rv = state.Start(isRestart=args.restart, isSoftRestart=args.softRestart)
File "ac2git.py", line 2974, in Start
self.RetrieveStreams()
File "ac2git.py", line 1537, in RetrieveStreams
endTr = endTrHist.transactions[0]
AttributeError: 'NoneType' object has no attribute 'transactions'
</code></pre>
<p>I used the method as deep-hist and start_tran as 1 and end_tran as "now".
I went through the accurev history and there is a transaction at #1, so what could be the "NoneType" the error is referring to?</p>
<p>ac2git.config.xml</p>
<pre><code><accurev
username="********"
password="********"
depot="Product"
start-transaction="1"
end-transaction="now"
command-cache-filename="command_cache.sqlite3" >
<!-- The stream-list is optional. If not given all streams are processed -->
<!-- The branch-name attribute is also optional for each stream element. If provided it specifies the git branch name to which the stream will be mapped. -->
<stream-list>
<stream>Stage1</stream>
</stream-list>
</accurev>
<git
repo-path="C:\Users\ssrivastava\repository"
message-style="notes"
message-key="footer"
author-is-committer="true"
empty-child-stream-action="merge"
source-stream-fast-forward="false" >
<!-- Optional: You can add remote elements to specify the remotes to which the converted branches will be pushed. The push-url attribute is optional. -->
<remote name="origin" url="https://github.com/orao/ac2git.git" push-url="https://github.com/orao/ac2git.git" />
<remote name="backup" url="https://github.com/orao/ac2git.git" />
</git>
<method>deep-hist</method>
<merge-strategy>normal</merge-strategy>
<logfile>accurev2git.log</logfile>
<usermaps filename="usermaps.config.xml">
<map-user><accurev username="******"/><git name="Shruti Srivastava" email="******" timezone="+0500"/></map-user>
</usermaps>
</accurev2git>
</code></pre>
| 1 | 2016-08-24T13:44:27Z | 39,126,762 | <p>As commented, looks like you had a typo in your stream/depo name</p>
| 2 | 2016-08-24T14:55:43Z | [
"python",
"git"
] |
Django - add a function to an admin form under a particular field | 39,125,093 | <p>I've made a function that iIwant to be able to run on the fly in my admin form, also that it would be reusable for multiple fields.</p>
<p>Basically, we have a dmvpn ip and a subnet. I would like to be able to click, find free ip/subnet then run my function against the current values other records have for the field</p>
<p>so findfreeip would sit next to both fields and find me the next available dmvpn IP in the 4th octet and it would find me the next available subnet in the 3rd octet. then maybe a pop showing the free ips/subnet and on click it would populate the field for me</p>
<p><a href="http://i.stack.imgur.com/qCAUm.png" rel="nofollow"><img src="http://i.stack.imgur.com/qCAUm.png" alt="Admin form option"></a></p>
<p>Heres the model and the function </p>
<p>models.py</p>
<pre><code>class ShowroomConfigData(models.Model):
location = models.CharField(max_length=50)
dmvpn_dsl_ip = models.GenericIPAddressField(protocol='IPv4')
subnet = models.GenericIPAddressField(protocol='IPv4')
...
</code></pre>
<p>functions.py</p>
<pre><code>def FindFreeIP(list_of_ips, octect):
#get the required octect from all IPs and put them in a list
octect_list = []
for item in list_of_ips:
octect_list.append(getXOctect(item,octect))
octect_list = sorted(octect_list)
#go through the list and find the smallest free no
usable = []
for i in range(1,254):
if i not in octect_list:
usable.append(i)
return min(usable)
</code></pre>
<p>I get all the subnets in a list and then send that list to the function, which will then look for the next usable IP in the octet that you ask for.</p>
| 0 | 2016-08-24T13:46:01Z | 39,126,388 | <p>in admin.py</p>
<pre><code>from myproject.functions import FindFreeIP
class MyAdmin(admin.ModelAdmin):
actions = [FindFreeIP]
</code></pre>
<p>Full details can be found on the django documentation site under <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/actions/#writing-actions" rel="nofollow">admin writing actions</a> </p>
| 0 | 2016-08-24T14:40:12Z | [
"python",
"django"
] |
Django - add a function to an admin form under a particular field | 39,125,093 | <p>I've made a function that iIwant to be able to run on the fly in my admin form, also that it would be reusable for multiple fields.</p>
<p>Basically, we have a dmvpn ip and a subnet. I would like to be able to click, find free ip/subnet then run my function against the current values other records have for the field</p>
<p>so findfreeip would sit next to both fields and find me the next available dmvpn IP in the 4th octet and it would find me the next available subnet in the 3rd octet. then maybe a pop showing the free ips/subnet and on click it would populate the field for me</p>
<p><a href="http://i.stack.imgur.com/qCAUm.png" rel="nofollow"><img src="http://i.stack.imgur.com/qCAUm.png" alt="Admin form option"></a></p>
<p>Heres the model and the function </p>
<p>models.py</p>
<pre><code>class ShowroomConfigData(models.Model):
location = models.CharField(max_length=50)
dmvpn_dsl_ip = models.GenericIPAddressField(protocol='IPv4')
subnet = models.GenericIPAddressField(protocol='IPv4')
...
</code></pre>
<p>functions.py</p>
<pre><code>def FindFreeIP(list_of_ips, octect):
#get the required octect from all IPs and put them in a list
octect_list = []
for item in list_of_ips:
octect_list.append(getXOctect(item,octect))
octect_list = sorted(octect_list)
#go through the list and find the smallest free no
usable = []
for i in range(1,254):
if i not in octect_list:
usable.append(i)
return min(usable)
</code></pre>
<p>I get all the subnets in a list and then send that list to the function which will then look for the next usable IP in the octect that you ask for</p>
| 0 | 2016-08-24T13:46:01Z | 39,126,610 | <p>The right way to do it is to create your custom Widget and pass it to change form in admin. </p>
<p>Your widget could call a certain view via ajax to perform the IP lookup and fill the input with response.</p>
<p>Have a look at the django's source code for the <a href="https://github.com/django/django/blob/master/django/contrib/admin/widgets.py#L79" rel="nofollow">example</a>.</p>
| 0 | 2016-08-24T14:49:28Z | [
"python",
"django"
] |
Matlab to python logic difficulty for arrays | 39,125,095 | <p>I have created an m-by-n matrix in MATLAB and can easily select a range of values within a certain column and row. For instance, if I have matrix <code>A</code>:</p>
<pre><code>A =
0 0 0 0
1 2 3 4
5 6 7 8
9 10 11 12
</code></pre>
<p>I can isolate the values: 1,5 and 9 from the first column by typing: <code>A(2:4,1)</code>. The results will yield <code>[1;5;9]</code>. As it relates to python, I am not sure how to index an array such that I have the desired values as above.</p>
| 0 | 2016-08-24T13:46:05Z | 39,125,783 | <p>This can be done using numpy</p>
<p><code>a = numpy.matrix('0 0 0 0; 1 2 3 4; 5 6 7 8; 9 10 11 12')</code></p>
<p>Required result is <code>a[1:,0]</code> or <code>a[1:4,0]</code></p>
<p>The only difference is that array indexing starts from 0 instead of 1.</p>
| 0 | 2016-08-24T14:13:54Z | [
"python",
"arrays",
"matlab"
] |
How open file by file in a list of files? | 39,125,131 | <p>How can I open every single file in a list of files, do something with the file and then go to the next file?</p>
<p>I have a directory with 1000 text files. I have already created a list with all the file names, and now I want to open them file by file. Does someone have an idea how to do that? </p>
<p>What I have so far:</p>
<pre><code>from os import listdir
from os.path import isfile, join
files_in_dir = [ f for f in listdir('nes') if isfile(join('nes',f)) ]
if f.endswith(".txt"):
    print(f)
</code></pre>
| -2 | 2016-08-24T13:47:34Z | 39,125,202 | <p>There's no need to create a separate list to hold filenames. Just iterate over the results of <code>listdir()</code> directly.</p>
<pre><code>for fname in listdir('nes'):
    fname = join('nes', fname)
    if fname.endswith('.txt') and isfile(fname):
        with open(fname, 'r') as f:
            # do something with open file f
            pass
</code></pre>
| 0 | 2016-08-24T13:51:09Z | [
"python"
] |
How open file by file in a list of files? | 39,125,131 | <p>How can I open every single file in a list of files, do something with the file and then go to the next file?</p>
<p>I have a directory with 1000 text files. I have already created a list with all the file names, and now I want to open them file by file. Does someone have an idea how to do that? </p>
<p>What I have so far:</p>
<pre><code>from os import listdir
from os.path import isfile, join
files_in_dir = [ f for f in listdir('nes') if isfile(join('nes',f)) ]
if f.endswith(".txt"):
    print(f)
</code></pre>
| -2 | 2016-08-24T13:47:34Z | 39,125,239 | <pre><code>from os import listdir
from os.path import isfile, join
files_in_dir = [ f for f in listdir('nes') if isfile(join('nes',f)) ]
for f in files_in_dir:
    if f.endswith(".txt"):
        with open(join('nes', f), 'r') as in_file:
            for line in in_file:
                # Here you have access to lines of the opened file.
                pass
</code></pre>
| 1 | 2016-08-24T13:52:27Z | [
"python"
] |
Trying to install Spacy English language model, getting urlopen error | 39,125,177 | <p>I'm trying to install spaCy using Windows 8 in an anaconda environment with python 3. Following the instructions on spaCy's website, I run the following commands.</p>
<pre><code>$ pip install spacy
$ python -m spacy.en.download
</code></pre>
<p>The first command works seemingly fine. However, the second command causes an error:
urllib.error.URLError: </p>
<p>The full Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\urllib\request.py", line 1254, in
do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 1106, in re
quest
self._send_request(method, url, body, headers)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 1151, in _s
end_request
self.endheaders(body)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 1102, in en
dheaders
self._send_output(message_body)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 934, in _se
nd_output
self.send(msg)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 877, in sen
d
self.connect()
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 1252, in co
nnect
super().connect()
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\http\client.py", line 849, in con
nect
(self.host,self.port), self.timeout, self.source_address)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\socket.py", line 693, in create_c
onnection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\socket.py", line 732, in getaddri
nfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11004] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\runpy.py", line 184, in _run_modu
le_as_main
"__main__", mod_spec)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\spacy\en\download.p
y", line 13, in <module>
plac.call(main)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\plac_core.py", line
328, in call
cmd, result = parser.consume(arglist)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\plac_core.py", line
207, in consume
return cmd, self.func(*(args + varargs + extraopts), **kwargs)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\spacy\en\download.p
y", line 9, in main
download('en', force)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\spacy\download.py",
line 24, in download
package = sputnik.install(about.__title__, about.__version__, about.__models
__[lang])
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\sputnik-0.9.3-py3.5
.egg\sputnik\__init__.py", line 37, in install
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\sputnik-0.9.3-py3.5
.egg\sputnik\index.py", line 84, in update
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\site-packages\sputnik-0.9.3-py3.5
.egg\sputnik\session.py", line 43, in open
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\urllib\request.py", line 466, in
open
response = self._open(req, data)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\urllib\request.py", line 484, in
_open
'_open', req)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\urllib\request.py", line 444, in
_call_chain
result = func(*args)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\urllib\request.py", line 1297, in
https_open
context=self._context, check_hostname=self._check_hostname)
File "C:\Users\Tina\Miniconda3\envs\gaia\lib\urllib\request.py", line 1256, in
do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 11004] getaddrinfo failed>
</code></pre>
<p>Other similar questions seem to suggest a proxy may be the issue, but I am not using a proxy. </p>
| 0 | 2016-08-24T13:49:48Z | 39,787,506 | <p>The 'python -m...' command should be used when running/upgrading an existing model. If you're downloading the model for the first time, use the sputnik package manager which should have been installed as a spaCy dependency:</p>
<pre><code>sputnik --name spacy --repository-url http://index.spacy.io install en==1.1.0
</code></pre>
| 0 | 2016-09-30T08:55:30Z | [
"python",
"python-3.x",
"spacy"
] |
Adding data to an iterator created with zip | 39,125,200 | <p>I am using python built-in function zip() to combine two lists:</p>
<pre><code>l1 = [1,2,3]
l2 = [4,5,6]
zipped = zip(l1,l2)
</code></pre>
<p>Is it possible to 'add' to this zipped object, for example:</p>
<pre><code>l3 = [7,8,9]
zipped2 = zipped.add(l3)
</code></pre>
<p>Such that:</p>
<pre><code>list(zipped2)
</code></pre>
<p>Would yield:</p>
<pre><code>[(1,4,7),(2,5,8),(3,6,9)]
</code></pre>
<p>I tried for example:</p>
<pre><code>zipped2 = zip(zipped,l3)
</code></pre>
<p>But this results in:</p>
<pre><code>[((1, 4), 7), ((2, 5), 8), ((3, 6), 9)]
</code></pre>
<p>(which makes sense)</p>
<p>An obvious approach would be to 'unzip' zipped and then zip() again including the new list; however, this is not practical for my application, as I would like to add a variable amount of lists depending on some logic, so I am wondering if there already is some built-in function that does this. I checked itertools and did not find an obvious candidate.</p>
<p>If one could point me in the right direction that would be highly appreciated.</p>
| 0 | 2016-08-24T13:51:05Z | 39,125,301 | <p>You cannot add anything to the <code>zip</code> object, but you can do it in this way (there are many options):</p>
<pre><code>l1 = [1,2,3]
l2 = [4,5,6]
l3 = [3,6,9]
l4 = [10,11,12]
l5 = [13,14,15]

args = [l1, l2]
print(list(zip(*args)))
args.append(l3)
print(list(zip(*args)))

# Using a class
class Zipped():
    @staticmethod
    def add(_list):
        args.append(_list)
        return zip(*args)

def zipped():
    pass

zipped.add = Zipped.add
print(list(Zipped.add(l4)))
print(list(zipped.add(l5)))

# OR using a method
def add(_list):
    args.append(_list)
    return zip(*args)

def zipped():
    pass

zipped.add = add
print(list(zipped.add(l4)))
</code></pre>
| 3 | 2016-08-24T13:55:11Z | [
"python",
"python-3.x"
] |
Adding data to an iterator created with zip | 39,125,200 | <p>I am using python built-in function zip() to combine two lists:</p>
<pre><code>l1 = [1,2,3]
l2 = [4,5,6]
zipped = zip(l1,l2)
</code></pre>
<p>Is it possible to 'add' to this zipped object, for example:</p>
<pre><code>l3 = [7,8,9]
zipped2 = zipped.add(l3)
</code></pre>
<p>Such that:</p>
<pre><code>list(zipped2)
</code></pre>
<p>Would yield:</p>
<pre><code>[(1,4,7),(2,5,8),(3,6,9)]
</code></pre>
<p>I tried for example:</p>
<pre><code>zipped2 = zip(zipped,l3)
</code></pre>
<p>But this results in:</p>
<pre><code>[((1, 4), 7), ((2, 5), 8), ((3, 6), 9)]
</code></pre>
<p>(which makes sense)</p>
<p>An obvious approach would be to 'unzip' zipped and then zip() again including the new list; however, this is not practical for my application, as I would like to add a variable amount of lists depending on some logic, so I am wondering if there already is some built-in function that does this. I checked itertools and did not find an obvious candidate.</p>
<p>If one could point me in the right direction that would be highly appreciated.</p>
| 0 | 2016-08-24T13:51:05Z | 39,125,366 | <p>One way would be to write your own generator:</p>
<pre><code>def add_to_zip(zipped, lst):
    for tpl, elem in zip(zipped, lst):
        yield tpl + (elem,)
zipped2 = add_to_zip(zipped, l3)
list(zipped2)
# OUT: [(1, 4, 7), (2, 5, 8), (3, 6, 9)]
</code></pre>
<p>If you want to extend it to an arbitrary number of lists:</p>
<pre><code>def add_to_zip2(zipped, *lists):
    for tpl, *elems in zip(zipped, *lists):
        yield tpl + tuple(elems)
zipped3 = add_to_zip2(zipped, l3, l2)
list(zipped3)
# OUT: [(1, 4, 7, 4), (2, 5, 8, 5), (3, 6, 9, 6)]
</code></pre>
| 4 | 2016-08-24T13:57:34Z | [
"python",
"python-3.x"
] |
Adding data to an iterator created with zip | 39,125,200 | <p>I am using python built-in function zip() to combine two lists:</p>
<pre><code>l1 = [1,2,3]
l2 = [4,5,6]
zipped = zip(l1,l2)
</code></pre>
<p>Is it possible to 'add' to this zipped object, for example:</p>
<pre><code>l3 = [7,8,9]
zipped2 = zipped.add(l3)
</code></pre>
<p>Such that:</p>
<pre><code>list(zipped2)
</code></pre>
<p>Would yield:</p>
<pre><code>[(1,4,7),(2,5,8),(3,6,9)]
</code></pre>
<p>I tried for example:</p>
<pre><code>zipped2 = zip(zipped,l3)
</code></pre>
<p>But this results in:</p>
<pre><code>[((1, 4), 7), ((2, 5), 8), ((3, 6), 9)]
</code></pre>
<p>(which makes sense)</p>
<p>An obvious approach would be to 'unzip' zipped and then zip() again including the new list; however, this is not practical for my application, as I would like to add a variable amount of lists depending on some logic, so I am wondering if there already is some built-in function that does this. I checked itertools and did not find an obvious candidate.</p>
<p>If one could point me in the right direction that would be highly appreciated.</p>
| 0 | 2016-08-24T13:51:05Z | 39,125,391 | <p>if you have <code>zipped</code> but don't have the source <code>l1</code> and <code>l2</code> at this point, you would have to <em>unzip</em> the <code>zipped</code>, add the <code>l3</code> list and zip again:</p>
<pre><code>>>> list(zip(*(list(zip(*zipped)) + [l3])))
[(1, 4, 7), (2, 5, 8), (3, 6, 9)]
</code></pre>
<p><sub>If you still have <code>l1</code> and <code>l2</code> at this point, just do the <code>zip(l1, l2, l3)</code>.</sub></p>
| 2 | 2016-08-24T13:58:26Z | [
"python",
"python-3.x"
] |
Python - Count consecutive frequencies by group | 39,125,354 | <p>I have a sequence of e-mails ordered by timestamp and user_id. </p>
<p>I want to investigate how often email i was followed by email j. I'm going to display these frequencies across users in a heat map to show the most common path. </p>
<pre><code>a = """timestamp,email,subject
2016-07-01 10:17:00,a@gmail.com,subject2
2016-07-01 02:01:02,a@gmail.com,welcome
2016-07-01 14:45:04,a@gmail.com,subject3
2016-07-01 08:14:02,a@gmail.com,subject1
2016-07-01 16:26:35,a@gmail.com,subject4
2016-07-01 10:17:00,b@gmail.com,subject1
2016-07-01 02:01:02,b@gmail.com,welcome
2016-07-01 14:45:04,b@gmail.com,subject3
2016-07-01 08:14:02,b@gmail.com,subject2
2016-07-01 16:26:35,b@gmail.com,subject4
2016-07-01 18:00:00,c@gmail.com,welcome
2016-07-01 19:00:02,c@gmail.com,subject1
2016-07-01 20:00:04,c@gmail.com,subject3
2016-07-01 21:14:02,c@gmail.com,subject4
2016-07-01 21:26:35,c@gmail.com,subject2
"""
import pandas as pd
from pandas.io.parsers import StringIO
df1 = pd.read_csv(StringIO(a), parse_dates=['timestamp'])
df1=df1.sort_values(['email','timestamp'])
</code></pre>
<p>sorted df1:</p>
<pre><code> timestamp email subject
1 2016-07-01 02:01:02 a@gmail.com welcome
3 2016-07-01 08:14:02 a@gmail.com subject1
0 2016-07-01 10:17:00 a@gmail.com subject2
2 2016-07-01 14:45:04 a@gmail.com subject3
4 2016-07-01 16:26:35 a@gmail.com subject4
6 2016-07-01 02:01:02 b@gmail.com welcome
8 2016-07-01 08:14:02 b@gmail.com subject2
5 2016-07-01 10:17:00 b@gmail.com subject1
7 2016-07-01 14:45:04 b@gmail.com subject3
9 2016-07-01 16:26:35 b@gmail.com subject4
10 2016-07-01 18:00:00 c@gmail.com welcome
11 2016-07-01 19:00:02 c@gmail.com subject1
12 2016-07-01 20:00:04 c@gmail.com subject3
13 2016-07-01 21:14:02 c@gmail.com subject4
14 2016-07-01 21:26:35 c@gmail.com subject2
</code></pre>
<p>The output should look like this </p>
<pre><code> welcome subject1 subject2 subject3 subject4
welcome 0
subject1 2 0
subject2 1 1 0
subject3 0 2 1 0
subject4 0 0 0 3 0
</code></pre>
<p>In other words, there were 2 occurrences where subject1 followed after a welcome email. There was 1 occurrence where subject 2 followed after a welcome message, etc. </p>
<p>What is the best way of doing this?</p>
| 1 | 2016-08-24T13:57:14Z | 39,128,019 | <p>A two-liner (which you can compress to a one-liner):</p>
<pre><code>df1['next_subject'] = df1.groupby('email')['subject'].shift(-1)
res = pd.crosstab(df1['next_subject'], df1['subject'])
print(res)
# subject subject1 subject2 subject3 subject4 welcome
# next_subject
# subject1 0 1 0 0 2
# subject2 1 0 0 1 1
# subject3 2 1 0 0 0
# subject4 0 0 3 0 0
</code></pre>
<p>You can massage this a little bit to get it in the exact form you quote in the OP:</p>
<pre><code>subjects = ['welcome'] + ['subject{}'.format(i) for i in range(1, 5)]
res = res.loc[subjects, subjects].fillna(0).astype(int)
print(res)
# subject welcome subject1 subject2 subject3 subject4
# next_subject
# welcome 0 0 0 0 0
# subject1 2 0 1 0 0
# subject2 1 1 0 0 1
# subject3 0 2 1 0 0
# subject4 0 0 0 3 0
</code></pre>
| 1 | 2016-08-24T15:59:34Z | [
"python",
"pandas",
"sequence",
"frequency",
"itertools"
] |
oTree: How to access the player's id from the player class in model.py? | 39,125,370 | <p>I would like to define a parameter variable via the default attribute id_in_group of the player. However, this attribute does not seem to be accessible via the ways I could think of (such as via BasePlayer.id_in_group).</p>
<p>The code of the class player:</p>
<pre><code>class Player(BasePlayer):
    investment_amount = models.CurrencyField(
        doc="""
        Amount invested by this player.
        """,
        min=0, max=Constants.endowment
    )

    random_number = BasePlayer.id_in_group

    def set_payoffs(self):
        for p in self.get_players():
            p.payoff = 110
</code></pre>
<p>How could I access the attribute id_in_group? Or is it impossible due to the fact that it is a default attribute preset by oTree?</p>
| -1 | 2016-08-24T13:57:40Z | 39,125,655 | <p>Have you tried accessing the variable via <code>self.id_in_group</code> (or <code>super().id_in_group</code> from inside a method)? If it's a variable of the parent class, you wouldn't reference the class directly like you are doing.</p>
<p>Also, Google has a ton of results for how to access parent class variables, a lot of which are answers from stackoverflow that explain things really well :-)</p>
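<p>To illustrate the point in plain Python (the class names here only mirror the oTree ones; in oTree itself <code>id_in_group</code> is set on the instance by the framework at runtime, which is why a class-level reference like <code>BasePlayer.id_in_group</code> fails):</p>

```python
class BasePlayer:
    def __init__(self):
        # In oTree this attribute is assigned by the framework, not by you.
        self.id_in_group = 1

class Player(BasePlayer):
    def pick_random_number(self):
        # Inherited instance attributes are reached through self,
        # not through the parent class name.
        return self.id_in_group

print(Player().pick_random_number())  # -> 1
```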
| -1 | 2016-08-24T14:09:08Z | [
"python",
"django"
] |
Using a progress bar for the output | 39,125,406 | <p>I'm extremely new to Python and have created a little password cracker, that uses a brute force attack, I'm trying to get my progress bar to output as the program runs, here's what I have so far:</p>
<pre><code>import zipfile
import sys
import time
def progress_bar(sleep_time):
    for i in range(101):
        time.sleep(sleep_time)
        sys.stdout.write("\r[{0}] {1}%".format('#'*(i/10), i))
        sys.stdout.flush()

def obtain_password(path_to_zip_file):
    password = None
    zip_file = zipfile.ZipFile(path_to_zip_file)
    with open('wordlist.txt', 'r') as dict:
        for line in dict.readlines():
            possible = line.strip("\n")
            try:
                zip_file.extractall(pwd=possible)
                password = "Password found {}".format(possible)
            except:
                pass
    return password
</code></pre>
<p>So my question is how can I get the progress bar to output while the <code>obtain_password</code> method runs? Do I need to change around the progress bar method a little bit? </p>
| 2 | 2016-08-24T13:59:00Z | 39,125,693 | <p>What you are trying to do won't work; keep in mind you have only one thread.</p>
<p>What you could do, though, is get the number of lines in your wordlist and do the maths. It's surely much more precise than a timer, by the way.</p>
<p>I didn't test the code, but with something along these lines you'll have what you want:</p>
<pre><code>import zipfile
import sys
import time
def obtain_password(path_to_zip_file):
    password = None
    zip_file = zipfile.ZipFile(path_to_zip_file)

    with open('wordlist.txt', 'r') as f:
        lines = f.readlines()

    total = len(lines)  # get number of lines
    current = 0
    for line in lines:
        current += 1
        if current % 1000 == 0:  # every 1000 lines, shows the progress
            print('%.2f %%' % (current / float(total) * 100))
        possible = line.strip("\n")
        try:
            zip_file.extractall(pwd=possible)
            #password = "Password found {}".format(possible)
            print(possible)
            sys.exit()
        except:
            pass
</code></pre>
<p>Also, I'd recommend you find out which exceptions <code>extractall</code> raises and catch those explicitly.
Catching everything with a bare <code>except:</code> isn't good practice.</p>
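<p>As a hedged sketch of what explicit exception handling could look like here (the helper name is made up, and the choice of <code>RuntimeError</code> is an assumption — with encrypted archives a wrong password is typically surfaced as <code>RuntimeError</code>, and <code>pwd</code> must be bytes on Python 3):</p>

```python
import zipfile

def try_password(path_to_zip, password, dest="."):
    """Return True if `password` opens the archive, False on a wrong password."""
    try:
        with zipfile.ZipFile(path_to_zip) as zf:
            # pwd must be bytes on Python 3; for encrypted members a wrong
            # password usually raises RuntimeError.
            zf.extractall(path=dest, pwd=password.encode())
        return True
    except RuntimeError:
        return False
```

<p>Note that unencrypted archives simply ignore <code>pwd</code>, so this only filters passwords when the zip is actually encrypted.</p>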
| 2 | 2016-08-24T14:10:29Z | [
"python",
"loops"
] |
Using a progress bar for the output | 39,125,406 | <p>I'm extremely new to Python and have created a little password cracker, that uses a brute force attack, I'm trying to get my progress bar to output as the program runs, here's what I have so far:</p>
<pre><code>import zipfile
import sys
import time
def progress_bar(sleep_time):
    for i in range(101):
        time.sleep(sleep_time)
        sys.stdout.write("\r[{0}] {1}%".format('#'*(i/10), i))
        sys.stdout.flush()

def obtain_password(path_to_zip_file):
    password = None
    zip_file = zipfile.ZipFile(path_to_zip_file)
    with open('wordlist.txt', 'r') as dict:
        for line in dict.readlines():
            possible = line.strip("\n")
            try:
                zip_file.extractall(pwd=possible)
                password = "Password found {}".format(possible)
            except:
                pass
    return password
</code></pre>
<p>So my question is how can I get the progress bar to output while the <code>obtain_password</code> method runs? Do I need to change around the progress bar method a little bit? </p>
| 2 | 2016-08-24T13:59:00Z | 39,125,880 | <p>There are two ways of doing what you desire.</p>
<ol>
<li><p>Let your password-cracker update the progressbar once in a while</p>
<pre><code>import time

# Stores the time between updates in seconds.
time_between_updates = 10
last_update = 0

def your_expensive_operation():
    global last_update
    for i in range(10000000):
        time.sleep(1)  # Emulate an expensive operation
        if time.time() - last_update > time_between_updates:
            last_update = time.time()
            print("\r" + (int(i/10000000.0 * 79) * "#"), end='')

your_expensive_operation()
</code></pre></li>
<li><p>Use threads</p>
<pre><code>import time
import threading

# Stores your current position in percent.
current_position = 0
done = False

def paint_thread():
    while not done:
        print("\r" + (int(current_position * 79) * "#"), end='')
        # Make it update once a second.
        time.sleep(1)

thread = threading.Thread(target=paint_thread)
thread.start()

for i in range(10000000):
    time.sleep(1)  # Emulate an expensive operation
    current_position = i/10000000.0
done = True
</code></pre></li>
</ol>
| 2 | 2016-08-24T14:18:02Z | [
"python",
"loops"
] |
How can I plot a pandas multiindex dataframe as 3d | 39,125,423 | <p>I have a dataframe <code>df</code> grouped like this:</p>
<pre><code>Year Product Sales
2010 A 111
B 20
C 150
2011 A 10
B 28
C 190
⦠â¦
</code></pre>
<p>and I would like to plot this in <code>matplotlib</code> as 3d Chart having the <code>Year</code> as the x-axis, <code>Sales</code> on the y-axis and <code>Product</code> on the z-axis.
<a href="http://i.stack.imgur.com/PCbB0.png" rel="nofollow"><img src="http://i.stack.imgur.com/PCbB0.png" alt="enter image description here"></a></p>
<p>I have been trying the following:</p>
<pre><code>from mpl_toolkits.mplot3d import axes3d
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X = dfgrouped['Year']
Y = dfgrouped['Sales']
Z = dfgrouped['Product']
ax.bar(X, Y, Z, color=cs, alpha=0.8)
</code></pre>
<p>unfortunately I am getting </p>
<blockquote>
<p>"ValueError: incompatible sizes: argument 'height' must be length 7 or
scalar"</p>
</blockquote>
| 3 | 2016-08-24T13:59:41Z | 39,132,123 | <p>You could plot a 3D Bar graph using <code>Pandas</code> as shown:</p>
<p><strong>Setup:</strong></p>
<pre><code>arrays = [[2010, 2010, 2010, 2011, 2011, 2011],['A', 'B', 'C', 'A', 'B', 'C']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['Year', 'Product'])
df = pd.DataFrame({'Sales': [111, 20, 150, 10, 28, 190]}, index=index)
print (df)
Sales
Year Product
2010 A 111
B 20
C 150
2011 A 10
B 28
C 190
</code></pre>
<p><strong>Data Wrangling:</strong></p>
<pre><code>import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
# Set plotting style
plt.style.use('seaborn-white')
</code></pre>
<p>Grouping similar entries (<em>get_group</em>) occuring in the Sales column and iterating through them and later appending them to a <code>list</code>. This gets stacked horizontally using <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.hstack.html" rel="nofollow"><code>np.hstack</code></a> which forms the <code>z</code> dimension of the 3d plot.</p>
<pre><code>L = []
for i, group in df.groupby(level=1)['Sales']:
    L.append(group.values)
z = np.hstack(L).ravel()
</code></pre>
<p>Letting the labels on both the x and y dimensions take unique values of the respective levels of the Multi-Index Dataframe. The x and y dimensions then take the range of these values.</p>
<pre><code>xlabels = df.index.get_level_values('Year').unique()
ylabels = df.index.get_level_values('Product').unique()
x = np.arange(xlabels.shape[0])
y = np.arange(ylabels.shape[0])
</code></pre>
<p>Returning coordinate matrices from coordinate vectors using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html" rel="nofollow"><code>np.meshgrid</code></a></p>
<pre><code>x_M, y_M = np.meshgrid(x, y, copy=False)
</code></pre>
<p><strong>3-D plotting:</strong></p>
<pre><code>fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
# Making the intervals in the axes match with their respective entries
ax.w_xaxis.set_ticks(x + 0.5/2.)
ax.w_yaxis.set_ticks(y + 0.5/2.)
# Renaming the ticks as they were before
ax.w_xaxis.set_ticklabels(xlabels)
ax.w_yaxis.set_ticklabels(ylabels)
# Labeling the 3 dimensions
ax.set_xlabel('Year')
ax.set_ylabel('Product')
ax.set_zlabel('Sales')
# Choosing the range of values to be extended in the set colormap
values = np.linspace(0.2, 1., x_M.ravel().shape[0])
# Selecting an appropriate colormap
colors = plt.cm.Spectral(values)
ax.bar3d(x_M.ravel(), y_M.ravel(), z*0, dx=0.5, dy=0.5, dz=z, color=colors)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/avlF3.png" rel="nofollow"><img src="http://i.stack.imgur.com/avlF3.png" alt="Image"></a></p>
<hr>
<p><strong>Note:</strong></p>
<p>In case of unbalanced <code>groupby</code> objects, you could still do it by <code>unstacking</code>
and filling <code>NaN</code>s with 0s and then <code>stacking</code> it back as follows:</p>
<pre><code>df = df_multi_index.unstack().fillna(0).stack()
</code></pre>
<p>where <code>df_multi_index.unstack</code> is your original multi-index dataframe.</p>
<p>For the new values added to the Multi-index Dataframe, following plot is obtained:</p>
<p><a href="http://i.stack.imgur.com/B5EpL.png" rel="nofollow"><img src="http://i.stack.imgur.com/B5EpL.png" alt="Image2"></a></p>
| 2 | 2016-08-24T20:02:33Z | [
"python",
"pandas",
"matplotlib",
"3d",
"seaborn"
] |
Regex with columns pandas | 39,125,455 | <p>My question is how I can use the <code>re</code> module to replace strings that are included in a dataframe:</p>
<p>when I use the <code>re.sub()</code>, it gives me an error:</p>
<pre><code>p = re.compile('New')
p.sub('old', df['Col1'])
</code></pre>
<p>Also, I tried using a for loop, but the output was unexpected, displaying the value of the first row in all the other rows:</p>
<pre><code>for i in df['Col1']:
    p.sub('old', i)
    print(i)
</code></pre>
<p>I'm sure that I'm missing something.</p>
| 3 | 2016-08-24T14:00:45Z | 39,125,519 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>, which works also with <code>regex</code>:</p>
<pre><code>df = pd.DataFrame({'Col1':['sss old','dd','old']})
print (df)
Col1
0 sss old
1 dd
2 old
df.Col1 = df.Col1.str.replace('old','new')
print (df)
Col1
0 sss new
1 dd
2 new
</code></pre>
| 2 | 2016-08-24T14:03:26Z | [
"python",
"regex",
"pandas",
"dataframe",
"series"
] |
python atomic data types | 39,125,486 | <p>It was written <a href="https://ru.wikipedia.org/wiki/Python" rel="nofollow">here</a> that Python has both atomic and reference object types. Atomic objects are: int, long, complex.
When assigning an atomic object, its value is copied; when assigning a reference object, its reference is copied.</p>
<p>My question is:
why, then, when I run the code below do I get 'True'?</p>
<pre><code>a = 1234
b = a
print id(a) == id(b)
</code></pre>
<p>It seems to me that I don't copy the value, I just copy the reference, no matter what the type is.</p>
| 3 | 2016-08-24T14:01:39Z | 39,126,074 | <p><code>int</code> types are immutable.
What you see is the reference for the number 1234, and that will never change.</p>
<p>For mutable objects like lists and dictionaries you can use</p>
<pre><code>import copy
a = copy.deepcopy(b)
</code></pre>
| 1 | 2016-08-24T14:27:25Z | [
"python",
"types",
"variable-assignment",
"atomic"
] |
python atomic data types | 39,125,486 | <p>It was written <a href="https://ru.wikipedia.org/wiki/Python" rel="nofollow">here</a> that Python has both atomic and reference object types. Atomic objects are: int, long, complex.
When assigning an atomic object, its value is copied; when assigning a reference object, its reference is copied.</p>
<p>My question is:
why, then, when I run the code below do I get 'True'?</p>
<pre><code>a = 1234
b = a
print id(a) == id(b)
</code></pre>
<p>It seems to me that I don't copy the value, I just copy the reference, no matter what the type is.</p>
| 3 | 2016-08-24T14:01:39Z | 39,126,122 | <p>Actually, as @spectras said, there are only references, but some objects are immutable, like <code>float</code>s, <code>int</code>s and <code>tuple</code>s. For immutable objects (apart from memory consumption) it just does not matter whether you pass around references or create copies.</p>
<p>The interpreter even does some optimizations that exploit numbers with the same value being interchangeable, which makes checking numbers for identity interesting; e.g. for</p>
<pre><code>a=1
b=1
c=2/2
d=12345
e=12345*1
</code></pre>
<p><code>a is b</code> is true and <code>a is c</code> is also true but <code>d is e</code> is false (<code>==</code> works normally as expected)</p>
<p>Immutable objects are atomic in the sense that <em>changing</em> them is threadsafe: you do not actually change the object itself, you just bind a new reference in a variable (which is threadsafe).</p>
| 0 | 2016-08-24T14:29:09Z | [
"python",
"types",
"variable-assignment",
"atomic"
] |
python atomic data types | 39,125,486 | <p>It was written <a href="https://ru.wikipedia.org/wiki/Python" rel="nofollow">here</a> that Python has both atomic and reference object types. Atomic objects are: int, long, complex.
When assigning an atomic object, its value is copied; when assigning a reference object, its reference is copied.</p>
<p>My question is:
why, then, when I run the code below do I get 'True'?</p>
<pre><code>a = 1234
b = a
print id(a) == id(b)
</code></pre>
<p>It seems to me that I don't copy the value, I just copy the reference, no matter what the type is.</p>
| 3 | 2016-08-24T14:01:39Z | 39,126,768 | <p>Assignment (binding) in Python NEVER copies data. It ALWAYS copies a reference to the value being bound.</p>
<p>The interpreter computes the value on the right-hand side, and the left-hand side is bound to the new value by referencing it. If the expression on the right-hand side is an existing value (in other words, if no operators are required to compute its value) then the left-hand side will be a reference to the same object.</p>
<p>After</p>
<pre><code>a = b
</code></pre>
<p>is executed,</p>
<pre><code>a is b
</code></pre>
<p>will ALWAYS be true - that's how assignment works in Python. It's also true for containers, so <code>x[i].some_attribute = y</code> will make <code>x[i].some_attribute is y</code> true.</p>
<p>The assertion that Python has atomic types and reference types seems unhelpful to me, if not just plain untrue. I'd say it has atomic types and container types. Containers are things like lists, tuples, dicts, and instances with private attributes (to a first approximation).</p>
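<p>A quick, runnable check of the binding rules described above (plain Python, no assumptions beyond the text):</p>

```python
class Item:
    pass

a = [1, 2, 3]
b = a                    # binds b to the same list object; no data is copied
assert a is b

b.append(4)              # a mutation through one name is visible via the other
assert a == [1, 2, 3, 4]

x = [Item()]
y = [5, 6]
x[0].some_attribute = y  # container and attribute slots hold references too
assert x[0].some_attribute is y
```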
| 3 | 2016-08-24T14:56:00Z | [
"python",
"types",
"variable-assignment",
"atomic"
] |
How to click on confirmation button using Selenium with Python? | 39,125,633 | <p>I have the following Code that goes to a URL(www.example.com), and clicks on a link(Example 1). <strong>(This part works fine)</strong></p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.example.com")
link = driver.find_element_by_link_text('Example 1')
link.click()
</code></pre>
<p>Now, when we click on 'Example 1' link, it opens a confirmation window, with 2 buttons: 'Yes I am authorized user to this site' and 'No I am a new visitor to this site'</p>
<p>So, I wish to click on 'Yes I am authorized user to this site' and then finally enter my log-in credentials.
I have written these 2 lines, just below the above code, for clicking on that button. But <strong>these don't work.</strong></p>
<pre><code>button = driver.find_element_by_name("'Yes I am authorized user to this site'")
button.click()
</code></pre>
| 0 | 2016-08-24T14:08:01Z | 39,126,251 | <p>If it is an alert window, you need to use the Alert command.</p>
<pre><code>#import Alert
from selenium.webdriver.common.alert import Alert
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.example.com")
link = driver.find_element_by_link_text('Example 1')
link.click()
Alert(driver).accept()
#to dismiss alert
#Alert(driver).dismiss()
</code></pre>
<p>I think this would have solved your query.</p>
| 1 | 2016-08-24T14:34:56Z | [
"python",
"selenium"
] |
How to click on confirmation button using Selenium with Python? | 39,125,633 | <p>I have the following Code that goes to a URL(www.example.com), and clicks on a link(Example 1). <strong>(This part works fine)</strong></p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.example.com")
link = driver.find_element_by_link_text('Example 1')
link.click()
</code></pre>
<p>Now, when we click on 'Example 1' link, it opens a confirmation window, with 2 buttons: 'Yes I am authorized user to this site' and 'No I am a new visitor to this site'</p>
<p>So, I wish to click on 'Yes I am authorized user to this site' and then finally enter my log-in credentials.
I have written these 2 lines, just below the above code, for clicking on that button. But <strong>these don't work.</strong></p>
<pre><code>button = driver.find_element_by_name("'Yes I am authorized user to this site'")
button.click()
</code></pre>
| 0 | 2016-08-24T14:08:01Z | 39,129,510 | <p>Try this code, hope it will help you</p>
<pre><code>from selenium import webdriver
import time
driver = webdriver.Chrome('path to chromedriver\chromedriver.exe')
driver.get('https://www.example.com')
driver.maximize_window()
link = driver.find_element_by_link_text('Example 1')
link.click()
handles = driver.window_handles  # this will give window handles
driver.switch_to.window(handles[1])
button = driver.find_element_by_name("'Yes I am authorized user to this site'")
button.click()
</code></pre>
| 0 | 2016-08-24T17:24:28Z | [
"python",
"selenium"
] |
How to click on confirmation button using Selenium with Python? | 39,125,633 | <p>I have the following Code that goes to a URL(www.example.com), and clicks on a link(Example 1). <strong>(This part works fine)</strong></p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.example.com")
link = driver.find_element_by_link_text('Example 1')
link.click()
</code></pre>
<p>Now, when we click on 'Example 1' link, it opens a confirmation window, with 2 buttons: 'Yes I am authorized user to this site' and 'No I am a new visitor to this site'</p>
<p>So, I wish to click on 'Yes I am authorized user to this site' and then finally enter my log-in credentials.
I have written these 2 lines, just below the above code, for clicking on that button. But <strong>these don't work.</strong></p>
<pre><code>button = driver.find_element_by_name("'Yes I am authorized user to this site'")
button.click()
</code></pre>
| 0 | 2016-08-24T14:08:01Z | 39,146,661 | <p>Based on the comment conversation, I would recommend both using an XPATH search (instead of Name or Id) and waiting for elements to be clickable or loaded. When web-driving or web-scraping, pages may intentionally or accidentally load slowly and this can cause issues if you have pauses or waits either hard coded or non-existent. This snippet of code should allow you to search Google using Selenium and Chromedriver (you can modify the driver function to use Firefox or something else if you'd like):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import ElementNotVisibleException
from selenium.webdriver.chrome.options import Options
from time import sleep
def init_driver(drvr_path):
chrome_options = Options()
chrome_options.add_argument("--disable-extensions")
driver = webdriver.Chrome(drvr_path+'chromedriver.exe',chrome_options=chrome_options)
driver.wait = WebDriverWait(driver, 5)
return driver
def lookup(query, driver=None, drvr_path=''):
if driver is None:
driver = init_driver(drvr_path)
driver.implicitly_wait(45) # Allow up to 45 Seconds for page to load
driver.get("http://www.google.com")
try:
box = driver.wait.until(EC.presence_of_element_located((By.XPATH, """//*[@id="lst-ib"]""")))
box.send_keys(query)
sleep(3) # Let you see the window open
button = driver.wait.until(EC.element_to_be_clickable((By.XPATH,"""//*[@id="sblsbb"]/button""")))
try:
button.click()
except ElementNotVisibleException, s:
print "Error Handled: "+str(s)
button = driver.wait.until(EC.element_to_be_clickable((By.XPATH,"""//*[@id="sblsbb"]/button""")))
try:
button.click()
except:
print "Could not search Google..."
return
resp=driver.page_source.encode('utf-8')
with open(query+'.html','wb') as f:
f.write(resp)
print 'Wrote the File...'
except:
print("Box or Button not found in google.com")
driver.quit()
</code></pre>
<p>For example, if your Chromedriver.exe file was located in your default Python path, you could do something like: <code>lookup('Selenium Python XPATH Examples')</code> and it should download an HTML file of the Google Search results. If you already have a Driver initialized, you could of course pass that to it.</p>
<p>Hope this helps</p>
| 0 | 2016-08-25T13:27:37Z | [
"python",
"selenium"
] |
Cannot convert string to float in pandas (ValueError) | 39,125,665 | <p>I have a dataframe created form a JSON output that looks like this:</p>
<pre><code> Total Revenue Average Revenue Purchase count Rate
Date
Monday 1,304.40 CA$ 20.07 CA$ 2,345 1.54 %
</code></pre>
<p>The values are received as strings from the JSON. I am trying to:</p>
<p>1) Remove all extra characters from each entry (e.g. CA$ or %)</p>
<p>2) Convert the rate and revenue columns to float</p>
<p>3) Convert the count columns to int</p>
<p>I tried to do the following:</p>
<pre><code>df[column] = (df[column].str.split()).apply(lambda x: float(x[0]))
</code></pre>
<p>It works fine except when I have a value with a comma (e.g. 1,465 won't work whereas 143 would).</p>
<p>I tried to use several functions to replace the "," with "", etc. Nothing worked so far. I always receive the following error:</p>
<blockquote>
<p>ValueError: could not convert string to float: '1,304.40'</p>
</blockquote>
| 1 | 2016-08-24T14:09:38Z | 39,125,710 | <p>These strings have commas as thousands separators so you will have to remove them before the call to <code>float</code>:</p>
<pre><code>df[column] = (df[column].str.split()).apply(lambda x: float(x[0].replace(',', '')))
</code></pre>
<p>This can be simplified a bit by moving <code>split</code> inside the <code>lambda</code>:</p>
<pre><code>df[column] = df[column].apply(lambda x: float(x.split()[0].replace(',', '')))
</code></pre>
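<p>The core of the fix is the string cleanup itself. A minimal plain-Python version of that cleanup (the sample values below are just illustrations based on the question's table):</p>

```python
def clean_number(raw):
    """Strip a trailing unit token (e.g. 'CA$' or '%') and thousands commas."""
    return float(raw.split()[0].replace(',', ''))

assert clean_number('1,304.40 CA$') == 1304.40
assert clean_number('2,345') == 2345.0
assert clean_number('1.54 %') == 1.54
```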
| 0 | 2016-08-24T14:11:10Z | [
"python",
"json",
"pandas",
"dataframe",
"numeric"
] |
Cannot convert string to float in pandas (ValueError) | 39,125,665 | <p>I have a dataframe created form a JSON output that looks like this:</p>
<pre><code> Total Revenue Average Revenue Purchase count Rate
Date
Monday 1,304.40 CA$ 20.07 CA$ 2,345 1.54 %
</code></pre>
<p>The values are received as strings from the JSON. I am trying to:</p>
<p>1) Remove all extra characters from each entry (e.g. CA$ or %)</p>
<p>2) Convert the rate and revenue columns to float</p>
<p>3) Convert the count columns to int</p>
<p>I tried to do the following:</p>
<pre><code>df[column] = (df[column].str.split()).apply(lambda x: float(x[0]))
</code></pre>
<p>It works fine except when I have a value with a comma (e.g. 1,465 won't work whereas 143 would).</p>
<p>I tried to use several functions to replace the "," with "", etc. Nothing worked so far. I always receive the following error:</p>
<blockquote>
<p>ValueError: could not convert string to float: '1,304.40'</p>
</blockquote>
| 1 | 2016-08-24T14:09:38Z | 39,125,826 | <p>Another solution with <code>list</code> comprehension, if need apply <code>string</code> <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#method-summary" rel="nofollow">functions</a> working only with <code>Series</code> (columns of <code>DataFrame</code>) like <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>str.split</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>:</p>
<pre><code>df = pd.concat([df[col].str.split()
.str[0]
.str.replace(',','').astype(float) for col in df], axis=1)
#if need convert column Purchase count to int
df['Purchase count'] = df['Purchase count'].astype(int)
print (df)
Total Revenue Average Revenue Purchase count Rate
Date
Monday 1304.4 20.07 2345 1.54
</code></pre>
| 0 | 2016-08-24T14:15:31Z | [
"python",
"json",
"pandas",
"dataframe",
"numeric"
] |
Merge two dataframes with pandas | 39,125,668 | <p>I have two dataframes : </p>
<pre><code>df_energy.info()
</code></pre>
<blockquote>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 34673 entries, 1 to 43228
Data columns (total 6 columns):
TIMESTAMP 34673 non-null datetime64[ns]
P_ACT_KW 34673 non-null float64
PERIODE_TARIF 34673 non-null object
P_SOUSCR 34673 non-null float64
SITE 34673 non-null object
TARIF 34673 non-null object
dtypes: datetime64[ns](1), float64(2), object(3)
memory usage: 1.9+ MB
</code></pre>
</blockquote>
<p>and df1 : </p>
<pre><code>df1.info()
</code></pre>
<blockquote>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 38840 entries, 0 to 38839
Data columns (total 7 columns):
TIMESTAMP 38840 non-null datetime64[ns]
ACT_TIME_AERATEUR_1_F1 38696 non-null float64
ACT_TIME_AERATEUR_1_F3 38697 non-null float64
ACT_TIME_AERATEUR_1_F5 38695 non-null float64
ACT_TIME_AERATEUR_1_F6 38695 non-null float64
ACT_TIME_AERATEUR_1_F7 38693 non-null float64
ACT_TIME_AERATEUR_1_F8 38696 non-null float64
dtypes: datetime64[ns](1), float64(6)
memory usage: 2.1 MB
</code></pre>
</blockquote>
<p>I try to merge these two dataframes based on TIMESTAMP column : </p>
<pre><code>merged_df_energy = pd.merge(df_energy.set_index('TIMESTAMP'),
df1,
right_index=True,
left_index =True)
</code></pre>
<p>But I get this error : </p>
<blockquote>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-190-34cd0916eb6a> in <module>()
2 df1,
3 right_index=True,
----> 4 left_index =True)
5 merged_df_energy.info()
C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in merge(left, right, how, on, left_on, right_on, left_index,
right_index, sort, suffixes, copy, indicator)
37 right_index=right_index, sort=sort, suffixes=suffixes,
38 copy=copy, indicator=indicator)
---> 39 return op.get_result()
40 if __debug__:
41 merge.__doc__ = _merge_doc % '\nleft : DataFrame'</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in get_result(self)
215 self.left, self.right)
216
--> 217 join_index, left_indexer, right_indexer = self._get_join_info()
218
219 ldata, rdata = self.left._data, self.right._data</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _get_join_info(self)
337 if self.left_index and self.right_index:
338 join_index, left_indexer, right_indexer = \
--> 339 left_ax.join(right_ax, how=self.how, return_indexers=True)
340 elif self.right_index and self.how == 'left':
341 join_index, left_indexer, right_indexer = \</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tseries\index.py
</code></pre>
<p>in join(self, other, how, level, return_indexers)
1072 this, other = self._maybe_utc_convert(other)
1073 return Index.join(this, other, how=how, level=level,
-> 1074 return_indexers=return_indexers)
1075
1076 def _maybe_utc_convert(self, other):</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\indexes\base.py
</code></pre>
<p>in join(self, other, how, level, return_indexers)
2480 this = self.astype('O')
2481 other = other.astype('O')
-> 2482 return this.join(other, how=how, return_indexers=return_indexers)
2483
2484 _validate_join_method(how)</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\indexes\base.py
</code></pre>
<p>in join(self, other, how, level, return_indexers)
2493 else:
2494 return self._join_non_unique(other, how=how,
-> 2495 return_indexers=return_indexers)
2496 elif self.is_monotonic and other.is_monotonic:
2497 try:</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\indexes\base.py
</code></pre>
<p>in _join_non_unique(self, other, how, return_indexers)
2571 left_idx, right_idx = _get_join_indexers([self.values],
2572 [other._values], how=how,
-> 2573 sort=True)
2574
2575 left_idx = com._ensure_platform_int(left_idx)</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _get_join_indexers(left_keys, right_keys, sort, how)
544
545 # get left & right join labels and num. of levels at each location
--> 546 llab, rlab, shape = map(list, zip(* map(fkeys, left_keys, right_keys)))
547
548 # get flat i8 keys from label lists</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _factorize_keys(lk, rk, sort)
718 if sort:
719 uniques = rizer.uniques.to_array()
--> 720 llab, rlab = _sort_labels(uniques, llab, rlab)
721
722 # NA group</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _sort_labels(uniques, left, right)
741 uniques = Index(uniques).values
742
--> 743 sorter = uniques.argsort()
744
745 reverse_indexer = np.empty(len(sorter), dtype=np.int64)</p>
<pre><code>pandas\tslib.pyx in pandas.tslib._Timestamp.__richcmp__ (pandas\tslib.c:18619)()
TypeError: Cannot compare type 'Timestamp' with type 'int'
</code></pre>
</blockquote>
<p>Can you help me please to resolve this problem?</p>
<p>Thank you</p>
| 1 | 2016-08-24T14:09:44Z | 39,126,364 | <p>Can you tell me the output after you try this?
This should work:</p>
<pre><code>merged_inner = pd.merge(left=df_energy, right=df1,
left_on='TIMESTAMP', right_on='TIMESTAMP')
</code></pre>
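<p>A toy illustration of the call above, using miniature frames with made-up timestamps and values (<code>pd.merge</code> defaults to an inner join, so only TIMESTAMP values present in both frames survive):</p>

```python
import pandas as pd

# Hypothetical miniature versions of the two frames in the question
df_energy = pd.DataFrame({'TIMESTAMP': pd.to_datetime(['2016-01-01', '2016-01-02']),
                          'P_ACT_KW': [1.0, 2.0]})
df1 = pd.DataFrame({'TIMESTAMP': pd.to_datetime(['2016-01-02', '2016-01-03']),
                    'ACT_TIME_AERATEUR_1_F1': [0.5, 0.7]})

merged_inner = pd.merge(left=df_energy, right=df1,
                        left_on='TIMESTAMP', right_on='TIMESTAMP')
assert len(merged_inner) == 1  # only 2016-01-02 appears in both frames
assert set(merged_inner.columns) == {'TIMESTAMP', 'P_ACT_KW', 'ACT_TIME_AERATEUR_1_F1'}
```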
| 2 | 2016-08-24T14:39:16Z | [
"python",
"pandas",
"dataframe",
"merge"
] |
Merge two dataframes with pandas | 39,125,668 | <p>I have two dataframes : </p>
<pre><code>df_energy.info()
</code></pre>
<blockquote>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 34673 entries, 1 to 43228
Data columns (total 6 columns):
TIMESTAMP 34673 non-null datetime64[ns]
P_ACT_KW 34673 non-null float64
PERIODE_TARIF 34673 non-null object
P_SOUSCR 34673 non-null float64
SITE 34673 non-null object
TARIF 34673 non-null object
dtypes: datetime64[ns](1), float64(2), object(3)
memory usage: 1.9+ MB
</code></pre>
</blockquote>
<p>and df1 : </p>
<pre><code>df1.info()
</code></pre>
<blockquote>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 38840 entries, 0 to 38839
Data columns (total 7 columns):
TIMESTAMP 38840 non-null datetime64[ns]
ACT_TIME_AERATEUR_1_F1 38696 non-null float64
ACT_TIME_AERATEUR_1_F3 38697 non-null float64
ACT_TIME_AERATEUR_1_F5 38695 non-null float64
ACT_TIME_AERATEUR_1_F6 38695 non-null float64
ACT_TIME_AERATEUR_1_F7 38693 non-null float64
ACT_TIME_AERATEUR_1_F8 38696 non-null float64
dtypes: datetime64[ns](1), float64(6)
memory usage: 2.1 MB
</code></pre>
</blockquote>
<p>I try to merge these two dataframes based on TIMESTAMP column : </p>
<pre><code>merged_df_energy = pd.merge(df_energy.set_index('TIMESTAMP'),
df1,
right_index=True,
left_index =True)
</code></pre>
<p>But I get this error : </p>
<blockquote>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-190-34cd0916eb6a> in <module>()
2 df1,
3 right_index=True,
----> 4 left_index =True)
5 merged_df_energy.info()
C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in merge(left, right, how, on, left_on, right_on, left_index,
right_index, sort, suffixes, copy, indicator)
37 right_index=right_index, sort=sort, suffixes=suffixes,
38 copy=copy, indicator=indicator)
---> 39 return op.get_result()
40 if __debug__:
41 merge.__doc__ = _merge_doc % '\nleft : DataFrame'</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in get_result(self)
215 self.left, self.right)
216
--> 217 join_index, left_indexer, right_indexer = self._get_join_info()
218
219 ldata, rdata = self.left._data, self.right._data</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _get_join_info(self)
337 if self.left_index and self.right_index:
338 join_index, left_indexer, right_indexer = \
--> 339 left_ax.join(right_ax, how=self.how, return_indexers=True)
340 elif self.right_index and self.how == 'left':
341 join_index, left_indexer, right_indexer = \</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tseries\index.py
</code></pre>
<p>in join(self, other, how, level, return_indexers)
1072 this, other = self._maybe_utc_convert(other)
1073 return Index.join(this, other, how=how, level=level,
-> 1074 return_indexers=return_indexers)
1075
1076 def _maybe_utc_convert(self, other):</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\indexes\base.py
</code></pre>
<p>in join(self, other, how, level, return_indexers)
2480 this = self.astype('O')
2481 other = other.astype('O')
-> 2482 return this.join(other, how=how, return_indexers=return_indexers)
2483
2484 _validate_join_method(how)</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\indexes\base.py
</code></pre>
<p>in join(self, other, how, level, return_indexers)
2493 else:
2494 return self._join_non_unique(other, how=how,
-> 2495 return_indexers=return_indexers)
2496 elif self.is_monotonic and other.is_monotonic:
2497 try:</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\indexes\base.py
</code></pre>
<p>in _join_non_unique(self, other, how, return_indexers)
2571 left_idx, right_idx = _get_join_indexers([self.values],
2572 [other._values], how=how,
-> 2573 sort=True)
2574
2575 left_idx = com._ensure_platform_int(left_idx)</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _get_join_indexers(left_keys, right_keys, sort, how)
544
545 # get left & right join labels and num. of levels at each location
--> 546 llab, rlab, shape = map(list, zip(* map(fkeys, left_keys, right_keys)))
547
548 # get flat i8 keys from label lists</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _factorize_keys(lk, rk, sort)
718 if sort:
719 uniques = rizer.uniques.to_array()
--> 720 llab, rlab = _sort_labels(uniques, llab, rlab)
721
722 # NA group</p>
<pre><code>C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\tools\merge.py
</code></pre>
<p>in _sort_labels(uniques, left, right)
741 uniques = Index(uniques).values
742
--> 743 sorter = uniques.argsort()
744
745 reverse_indexer = np.empty(len(sorter), dtype=np.int64)</p>
<pre><code>pandas\tslib.pyx in pandas.tslib._Timestamp.__richcmp__ (pandas\tslib.c:18619)()
TypeError: Cannot compare type 'Timestamp' with type 'int'
</code></pre>
</blockquote>
<p>Can you help me please to resolve this problem?</p>
<p>Thank you</p>
| 1 | 2016-08-24T14:09:44Z | 39,126,496 | <p>Try this:</p>
<pre><code>import pandas
result = pandas.merge(df_energy, df1, on='TIMESTAMP')
</code></pre>
<p>If you want to save it:</p>
<pre><code>result.to_csv(path_or_buf='result.csv', sep=',')
</code></pre>
<p>Or check the columns:</p>
<pre><code>result_fields = result.columns.tolist()
print (result_fields)
</code></pre>
| 1 | 2016-08-24T14:45:03Z | [
"python",
"pandas",
"dataframe",
"merge"
] |
How to sum all values with index greater than X? | 39,125,695 | <p>Let's say I have this series:</p>
<pre><code>>>> s = pd.Series({1:10,2:5,3:8,4:12,5:7,6:3})
>>> s
1 10
2 5
3 8
4 12
5 7
6 3
</code></pre>
<p>I want to sum all the values for which the index is greater than X. So if e.g. X = 3, I want to get this:</p>
<pre><code>>>> X = 3
>>> s.some_magic(X)
1 10
2 5
3 8
>3 22
</code></pre>
<p>I managed to do it in this rather clumsy way:</p>
<pre><code>lt = s[s.index.values <= 3]
gt = s[s.index.values > 3]
gt_s = pd.Series({'>3':sum(gt)})
lt.append(gt_s)
</code></pre>
<p>and got the desired result, but I believe there should be an easier and more elegant way... or is there?</p>
| 1 | 2016-08-24T14:10:32Z | 39,125,995 | <p>Here's a possible solution:</p>
<pre><code>import pandas as pd
s = pd.Series({1: 10, 2: 5, 3: 8, 4: 12, 5: 7, 6: 3})
iv = s.index.values
print s[iv <= 3].append(pd.Series({'>3': s[iv > 3].sum()}))
</code></pre>
| 1 | 2016-08-24T14:23:57Z | [
"python",
"pandas"
] |
How to sum all values with index greater than X? | 39,125,695 | <p>Let's say I have this series:</p>
<pre><code>>>> s = pd.Series({1:10,2:5,3:8,4:12,5:7,6:3})
>>> s
1 10
2 5
3 8
4 12
5 7
6 3
</code></pre>
<p>I want to sum all the values for which the index is greater than X. So if e.g. X = 3, I want to get this:</p>
<pre><code>>>> X = 3
>>> s.some_magic(X)
1 10
2 5
3 8
>3 22
</code></pre>
<p>I managed to do it in this rather clumsy way:</p>
<pre><code>lt = s[s.index.values <= 3]
gt = s[s.index.values > 3]
gt_s = pd.Series({'>3':sum(gt)})
lt.append(gt_s)
</code></pre>
<p>and got the desired result, but I believe there should be an easier and more elegant way... or is there?</p>
| 1 | 2016-08-24T14:10:32Z | 39,126,339 | <pre><code>s.groupby(np.where(s.index > 3, '>3', s.index)).sum()
</code></pre>
<p>Or,</p>
<pre><code>s.groupby(s.index.to_series().mask(s.index > 3, '>3')).sum()
Out:
1 10
2 5
3 8
>3 22
dtype: int64
</code></pre>
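<p>For intuition, the same bucketing can be sketched in plain Python with a dict, without pandas:</p>

```python
s = {1: 10, 2: 5, 3: 8, 4: 12, 5: 7, 6: 3}
X = 3

result = {k: v for k, v in s.items() if k <= X}       # keep small indices as-is
result['>3'] = sum(v for k, v in s.items() if k > X)  # lump the rest together
assert result == {1: 10, 2: 5, 3: 8, '>3': 22}
```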
| 4 | 2016-08-24T14:38:39Z | [
"python",
"pandas"
] |
how to scrape a page with javascript effects | 39,125,815 | <p>I am new to web scraping, and so far I only know how to scrape a basic HTML page using Python's Beautiful Soup. What I want is to extract the information on this <a href="http://insightdatascience.com/fellows" rel="nofollow">page</a>. Specifically, I would like to get the following data from all the fellows (around 700 of them)</p>
<ul>
<li>name</li>
<li>background</li>
<li>insight project</li>
<li>current employer</li>
</ul>
<p>However, that page is rendered by JavaScript, and the desired information only shows up as a separate box when a mouseover event is triggered on each fellow's picture. </p>
<p>How to extract text in this case? Any information (books, web resources) is appreciated. Python solutions are preferred if possible. Many thanks.</p>
| -1 | 2016-08-24T14:14:59Z | 39,125,926 | <p>Check the page source of the website.</p>
<p>The information is already present in the DOM, just hidden using CSS. At first glance, it seems like the JavaScript logic is only doing CSS manipulations.</p>
<p>The fact that the information is hidden by CSS will not prevent you from scraping it from the source using a web scraping tool.</p>
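<p>To illustrate the point: text hidden with CSS is still plain data in the HTML source, so an ordinary parser sees it. The markup below is a made-up stand-in for the fellows page, parsed with the standard library only:</p>

```python
from html.parser import HTMLParser

# Toy page: the bio text is hidden by CSS but still present in the HTML source.
PAGE = '<div class="fellow" style="display:none">Jane Doe | Physics PhD | Acme Corp</div>'

class HiddenTextParser(HTMLParser):
    """Collects all text nodes, whether or not CSS would display them."""
    def __init__(self):
        super().__init__()
        self.texts = []

    def handle_data(self, data):
        if data.strip():
            self.texts.append(data.strip())

parser = HiddenTextParser()
parser.feed(PAGE)
assert parser.texts == ['Jane Doe | Physics PhD | Acme Corp']
```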
| 0 | 2016-08-24T14:20:30Z | [
"javascript",
"jquery",
"python",
"html",
"web-scraping"
] |
Speed up numpy filtering | 39,125,952 | <p>I am making a music recognition program, and as part of it, I need to find the largest connected areas of a numpy array from a png (2200x1700 pixels). My current solution is the following.</p>
<pre><code>labels, nlabels = ndimage.label(blobs)
cutoff = len(blobs)*len(blobs[0]) / nlabels
blobs_found = 0
x = []
t1 = time()
for n in range(1, nlabels+1):
squares = np.where(labels==n)
if len(squares[0]) < cutoff:
blobs[squares] = 0
else:
blobs_found += 1
blobs[squares] = blobs_found
x.append(squares - np.amin(squares, axis=0, keepdims=True))
nlabels = blobs_found
print(time() - t1)
</code></pre>
<p>This works, but it takes ~6.5 seconds to run. Is there a way I could remove the loop from this code (or otherwise speed it up)?</p>
| 2 | 2016-08-24T14:21:57Z | 39,130,071 | <p>You can get the size (in pixels) of each labelled region with:</p>
<pre><code>unique_labels = numpy.unique(labels)
label_sizes = scipy.ndimage.measurement.sum(numpy.ones_like(blobs), labels, unique_labels)
</code></pre>
<p>The largest will then be:</p>
<pre><code>unique_labels[label_size == numpy.max(label_size)]
</code></pre>
| 2 | 2016-08-24T17:58:17Z | [
"python",
"numpy",
"scipy",
"png"
] |
Speed up numpy filtering | 39,125,952 | <p>I am making a music recognition program, and as part of it, I need to find the largest connected areas of a numpy array from a png (2200x1700 pixels). My current solution is the following.</p>
<pre><code>labels, nlabels = ndimage.label(blobs)
cutoff = len(blobs)*len(blobs[0]) / nlabels
blobs_found = 0
x = []
t1 = time()
for n in range(1, nlabels+1):
squares = np.where(labels==n)
if len(squares[0]) < cutoff:
blobs[squares] = 0
else:
blobs_found += 1
blobs[squares] = blobs_found
x.append(squares - np.amin(squares, axis=0, keepdims=True))
nlabels = blobs_found
print(time() - t1)
</code></pre>
<p>This works, but it takes ~6.5 seconds to run. Is there a way I could remove the loop from this code (or otherwise speed it up)?</p>
| 2 | 2016-08-24T14:21:57Z | 39,153,370 | <p>The fastest would probably be to use <code>numpy.bincount</code> and work from there. Something like:</p>
<pre><code>labels, nlabels = ndimage.label(blobs)
cutoff = len(blobs)*len(blobs[0]) / float(nlabels)
label_counts = np.bincount(labels.ravel())
# Re-label, taking the cutoff into account
cutoff_mask = (label_counts >= cutoff)
cutoff_mask[0] = False
label_mapping = np.zeros_like(label_counts)
label_mapping[cutoff_mask] = np.arange(cutoff_mask.sum()) + 1
# Create an image-array with the updated labels
blobs = label_mapping[labels].astype(blobs.dtype)
</code></pre>
<p>This could be optimized for speed some more, but I aimed for readability. </p>
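<p>The relabelling trick above can be seen on a toy 1-D example in plain Python (the labels and cutoff below are made up): count each label, keep only those reaching the cutoff, renumber the survivors 1..k, and map everything else to background:</p>

```python
from collections import Counter

labels = [0, 1, 1, 2, 2, 2, 3]   # 0 is background, as with ndimage.label
cutoff = 2

counts = Counter(labels)
# keep non-background labels whose size reaches the cutoff, renumbered 1..k
keep = sorted(l for l, c in counts.items() if l != 0 and c >= cutoff)
mapping = {l: i + 1 for i, l in enumerate(keep)}
relabeled = [mapping.get(l, 0) for l in labels]   # everything else becomes 0
assert relabeled == [0, 1, 1, 2, 2, 2, 0]
```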
| 2 | 2016-08-25T19:31:36Z | [
"python",
"numpy",
"scipy",
"png"
] |
Converting str object to int in dictionary | 39,125,957 | <p>I am currently making a scoreboard for a Yahtzee game in Python, where I'm using a dictionary to track where a player's points will be inserted into a table.
The dictionary looks like this from the start:</p>
<pre><code> self.lista={"aces":'',"twos":'',"threes":'',"fours":''}
</code></pre>
<p>When I associate, for example, the number 25 with "aces", I want it to be interpreted as an integer, so that the next time I add points to "aces" it adds them up with the existing 25. Is there a way to do this? </p>
| 1 | 2016-08-24T14:22:05Z | 39,126,004 | <pre><code>self.lista = {"aces":'',"twos":'',"threes":'',"fours":''}
self.lista['aces'] = 25 --> # 25
self.lista['aces'] += 25 --> # 50
</code></pre>
<p>OT (for the future reference):
You can initialize your dict with keys in this way:</p>
<pre><code>keys = ['aces', 'twos', 'threes', 'fours']
self.lista = dict.fromkeys(keys)
print self.lista --> # {'aces': None, 'fours': None, 'twos': None, 'threes': None}
</code></pre>
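<p>If you want the accumulation to work without initialising every key first, <code>collections.defaultdict</code> is a common alternative to <code>dict.fromkeys</code>:</p>

```python
from collections import defaultdict

scores = defaultdict(int)   # missing keys start at 0, so += works immediately
scores['aces'] += 25
scores['aces'] += 25
assert scores['aces'] == 50
assert scores['twos'] == 0  # untouched categories read as 0
```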
| 3 | 2016-08-24T14:24:17Z | [
"python",
"dictionary"
] |
Django - Static files not fetching in Heroku | 39,125,963 | <p>I made a webpage using Django and hosted it on Heroku. This is my second app; I successfully hosted one application earlier. But for this application the static files are creating issues: they are not being fetched in the production environment. I tried several things, but it still isn't working. The following are a few snippets from my settings.py</p>
<pre><code>BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
ALLOWED_HOSTS = ['promagcareer.herokuapp.com']
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates'),
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
STATIC_URL = '/static/'
</code></pre>
<p>I have given {% load static %} in the template HTML. I have tried many things, but it's still not loading the static files. What might be the problem?</p>
<p>This is the link of the <a href="http://promagcareer.herokuapp.com/career/" rel="nofollow">site</a>, and this is the <a href="https://github.com/jeriln/PromagCareer" rel="nofollow">GitHub project</a>. Kindly help. Thanks in advance.</p>
<p>The following the the heroku logs --tail</p>
<pre><code>2016-08-24T14:41:17.682678+00:00 app[web.1]: Not Found: /static/promag/js/jquery.magnific-popup.min.js
2016-08-24T14:41:18.169310+00:00 app[web.1]: Not Found: /static/promag/js/jquery.isotope.min.js
2016-08-24T14:41:18.207067+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.isotope.min.js" host=promagcareer.herokuapp.com request_id=295ee14c-abf9-4a0f-b7ba-38d9e118d3f3 fwd="150.107.23.147" dyno=web.1 connect=2ms service=7ms status=404 bytes=2338
2016-08-24T14:41:18.663379+00:00 heroku[router]: at=info method=GET path="/static/promag/js/swipe.js" host=promagcareer.herokuapp.com request_id=051ff48c-ea94-4375-b9d9-1c989449e1ec fwd="150.107.23.147" dyno=web.1 connect=4ms service=7ms status=404 bytes=2299
2016-08-24T14:41:18.657994+00:00 app[web.1]: Not Found: /static/promag/js/swipe.js
2016-08-24T14:41:18.882350+00:00 app[web.1]: Not Found: /static/promag/js/main.js
2016-08-24T14:41:18.888336+00:00 heroku[router]: at=info method=GET path="/static/promag/js/main.js" host=promagcareer.herokuapp.com request_id=aaf5dfac-4b8b-44fd-9e57-3ef5a5ff532e fwd="150.107.23.147" dyno=web.1 connect=0ms service=5ms status=404 bytes=2296
2016-08-24T14:41:19.102273+00:00 app[web.1]: Not Found: /static/promag/js/wow.min.js
2016-08-24T14:41:19.108087+00:00 heroku[router]: at=info method=GET path="/static/promag/js/wow.min.js" host=promagcareer.herokuapp.com request_id=352bf151-ca70-4a11-8ac0-b9582d3fdb9e fwd="150.107.23.147" dyno=web.1 connect=0ms service=5ms status=404 bytes=2305
2016-08-24T14:43:13.854149+00:00 heroku[api]: Starting process with command `python manage.py collectstatic` by jeril.work@gmail.com
2016-08-24T14:43:22.232867+00:00 heroku[run.3587]: Awaiting client
2016-08-24T14:43:22.225086+00:00 heroku[run.3587]: State changed from starting to up
2016-08-24T14:43:22.304420+00:00 heroku[run.3587]: Starting process with command `python manage.py collectstatic`
2016-08-24T14:43:31.942745+00:00 heroku[run.3587]: State changed from up to complete
2016-08-24T14:43:31.918016+00:00 heroku[run.3587]: Process exited with status 0
2016-08-24T14:43:40.118441+00:00 app[web.1]: Not Found: /static/promag/css/style.css
2016-08-24T14:43:40.074124+00:00 app[web.1]: Not Found: /static/promag/js/jquery-1.10.2.min.js
2016-08-24T14:43:40.127348+00:00 app[web.1]: Not Found: /static/promag/images/career.png
2016-08-24T14:43:40.118296+00:00 app[web.1]: Not Found: /static/promag/css/font-awesome.css
2016-08-24T14:43:40.122394+00:00 app[web.1]: Not Found: /static/promag/css/animate.css
2016-08-24T14:43:40.123029+00:00 app[web.1]: Not Found: /static/promag/css/bootstrap.min.css
2016-08-24T14:43:40.404323+00:00 app[web.1]: Not Found: /static/promag/js/jquery.easing.1.3.js
2016-08-24T14:43:40.346398+00:00 app[web.1]: Not Found: /static/promag/js/bootstrap.min.js
2016-08-24T14:43:40.557933+00:00 app[web.1]: Not Found: /static/promag/js/retina-1.1.0.min.js
2016-08-24T14:43:40.628917+00:00 app[web.1]: Not Found: /static/promag/js/jquery.smartmenus.bootstrap.min.js
2016-08-24T14:43:40.828045+00:00 app[web.1]: Not Found: /static/promag/js/jquery.isotope.min.js
2016-08-24T14:43:40.645692+00:00 app[web.1]: Not Found: /static/promag/js/jquery.magnific-popup.min.js
2016-08-24T14:43:40.614655+00:00 app[web.1]: Not Found: /static/promag/js/jquery.smartmenus.min.js
2016-08-24T14:43:40.886567+00:00 app[web.1]: Not Found: /static/promag/js/swipe.js
2016-08-24T14:43:41.046430+00:00 app[web.1]: Not Found: /static/promag/js/main.js
2016-08-24T14:43:41.099928+00:00 app[web.1]: Not Found: /static/promag/js/wow.min.js
2016-08-24T14:43:40.651092+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.smartmenus.bootstrap.min.js" host=promagcareer.herokuapp.com request_id=278e0a64-736c-47b9-bb33-9df30d382bca fwd="150.107.23.147" dyno=web.1 connect=1ms service=34ms status=404 bytes=2377
2016-08-24T14:43:40.574783+00:00 heroku[router]: at=info method=GET path="/static/promag/js/retina-1.1.0.min.js" host=promagcareer.herokuapp.com request_id=27700e8d-6a6a-40aa-a156-28f82e1c451c fwd="150.107.23.147" dyno=web.1 connect=1ms service=15ms status=404 bytes=2332
2016-08-24T14:43:39.852708+00:00 heroku[router]: at=info method=GET path="/career/" host=promagcareer.herokuapp.com request_id=6ee1b08d-8cbb-4c20-aa04-502b87c78b49 fwd="150.107.23.147" dyno=web.1 connect=1ms service=50ms status=200 bytes=5477
2016-08-24T14:43:40.658413+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.magnific-popup.min.js" host=promagcareer.herokuapp.com request_id=1903fe8c-8c3a-4af9-8b74-0bf8d38375e4 fwd="150.107.23.147" dyno=web.1 connect=1ms service=14ms status=404 bytes=2359
2016-08-24T14:43:40.895139+00:00 heroku[router]: at=info method=GET path="/static/promag/js/swipe.js" host=promagcareer.herokuapp.com request_id=75c61054-8ddf-46b5-8e1e-ab604f1a3cad fwd="150.107.23.147" dyno=web.1 connect=2ms service=6ms status=404 bytes=2299
2016-08-24T14:43:40.091524+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery-1.10.2.min.js" host=promagcareer.herokuapp.com request_id=6dbf19a0-6b3e-4392-9d74-5e340ae5c099 fwd="150.107.23.147" dyno=web.1 connect=1ms service=8ms status=404 bytes=2335
2016-08-24T14:43:41.095432+00:00 heroku[router]: at=info method=GET path="/static/promag/js/main.js" host=promagcareer.herokuapp.com request_id=cde55212-3393-4439-84fc-4a7a6a5ebb46 fwd="150.107.23.147" dyno=web.1 connect=1ms service=17ms status=404 bytes=2296
2016-08-24T14:43:40.125493+00:00 heroku[router]: at=info method=GET path="/static/promag/css/font-awesome.css" host=promagcareer.herokuapp.com request_id=34763ae8-4319-42b0-a358-cf0470825db8 fwd="150.107.23.147" dyno=web.1 connect=1ms service=6ms status=404 bytes=2326
2016-08-24T14:43:40.353330+00:00 heroku[router]: at=info method=GET path="/static/promag/js/bootstrap.min.js" host=promagcareer.herokuapp.com request_id=231dd103-1ee0-43b1-b413-846e01c734bc fwd="150.107.23.147" dyno=web.1 connect=1ms service=6ms status=404 bytes=2323
2016-08-24T14:43:40.625625+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.smartmenus.min.js" host=promagcareer.herokuapp.com request_id=069d6f11-16d8-46c9-a9a4-acb6b5c917f1 fwd="150.107.23.147" dyno=web.1 connect=1ms service=24ms status=404 bytes=2347
2016-08-24T14:43:40.151383+00:00 heroku[router]: at=info method=GET path="/static/promag/images/career.png" host=promagcareer.herokuapp.com request_id=55a6e9c6-5404-4cbb-a4b7-e69c3fa5fe30 fwd="150.107.23.147" dyno=web.1 connect=1ms service=13ms status=404 bytes=2317
2016-08-24T14:43:40.845095+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.isotope.min.js" host=promagcareer.herokuapp.com request_id=1d5878e5-13e5-413a-ae5c-770e3c0e83ca fwd="150.107.23.147" dyno=web.1 connect=1ms service=11ms status=404 bytes=2338
2016-08-24T14:43:40.132257+00:00 heroku[router]: at=info method=GET path="/static/promag/css/style.css" host=promagcareer.herokuapp.com request_id=9915ebd5-0c8b-4d6e-9d50-9376b1119317 fwd="150.107.23.147" dyno=web.1 connect=1ms service=6ms status=404 bytes=2305
2016-08-24T14:43:40.430185+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.easing.1.3.js" host=promagcareer.herokuapp.com request_id=f3a975c9-205a-4fae-98d2-9079752b9d83 fwd="150.107.23.147" dyno=web.1 connect=1ms service=8ms status=404 bytes=2335
2016-08-24T14:43:41.148636+00:00 heroku[router]: at=info method=GET path="/static/promag/js/wow.min.js" host=promagcareer.herokuapp.com request_id=6302a107-cb82-4ff1-ac43-6933a51aaddb fwd="150.107.23.147" dyno=web.1 connect=1ms service=12ms status=404 bytes=2305
2016-08-24T14:43:40.131264+00:00 heroku[router]: at=info method=GET path="/static/promag/css/bootstrap.min.css" host=promagcareer.herokuapp.com request_id=a1a00313-3192-45c2-ade0-d6e718f30e48 fwd="150.107.23.147" dyno=web.1 connect=1ms service=10ms status=404 bytes=2329
2016-08-24T14:43:40.131964+00:00 heroku[router]: at=info method=GET path="/static/promag/css/animate.css" host=promagcareer.herokuapp.com request_id=a89d285c-6a01-45bf-9818-3cb98d4867cd fwd="150.107.23.147" dyno=web.1 connect=1ms service=10ms status=404 bytes=2311
2016-08-24T14:46:47.652609+00:00 app[web.1]: Not Found: /static/promag/js/jquery-1.10.2.min.js
2016-08-24T14:46:47.719157+00:00 app[web.1]: Not Found: /static/promag/css/font-awesome.css
2016-08-24T14:46:47.744839+00:00 app[web.1]: Not Found: /static/promag/css/animate.css
2016-08-24T14:46:47.686882+00:00 app[web.1]: Not Found: /static/promag/css/style.css
2016-08-24T14:46:47.746422+00:00 app[web.1]: Not Found: /static/promag/js/bootstrap.min.js
2016-08-24T14:46:47.724693+00:00 app[web.1]: Not Found: /static/promag/images/career.png
2016-08-24T14:46:47.768166+00:00 app[web.1]: Not Found: /static/promag/js/jquery.easing.1.3.js
2016-08-24T14:46:47.791088+00:00 app[web.1]: Not Found: /static/promag/js/jquery.smartmenus.min.js
2016-08-24T14:46:47.813430+00:00 app[web.1]: Not Found: /static/promag/js/jquery.magnific-popup.min.js
2016-08-24T14:46:47.847221+00:00 app[web.1]: Not Found: /static/promag/js/jquery.isotope.min.js
2016-08-24T14:46:47.861463+00:00 app[web.1]: Not Found: /static/promag/js/swipe.js
2016-08-24T14:46:47.869106+00:00 app[web.1]: Not Found: /static/promag/js/main.js
2016-08-24T14:46:47.822603+00:00 app[web.1]: Not Found: /static/promag/js/bootstrap.min.js
2016-08-24T14:46:47.673050+00:00 app[web.1]: Not Found: /static/promag/css/bootstrap.min.css
2016-08-24T14:46:47.802422+00:00 app[web.1]: Not Found: /static/promag/js/jquery.smartmenus.bootstrap.min.js
2016-08-24T14:46:47.874797+00:00 app[web.1]: Not Found: /static/promag/js/wow.min.js
2016-08-24T14:46:47.772206+00:00 app[web.1]: Not Found: /static/promag/js/retina-1.1.0.min.js
2016-08-24T14:46:47.918516+00:00 app[web.1]: Not Found: /static/promag/js/jquery.easing.1.3.js
2016-08-24T14:46:47.995609+00:00 app[web.1]: Not Found: /static/promag/js/retina-1.1.0.min.js
2016-08-24T14:46:48.068254+00:00 app[web.1]: Not Found: /static/promag/js/jquery.smartmenus.min.js
2016-08-24T14:46:48.131985+00:00 app[web.1]: Not Found: /static/promag/js/jquery.smartmenus.bootstrap.min.js
2016-08-24T14:46:48.201657+00:00 app[web.1]: Not Found: /static/promag/js/jquery.magnific-popup.min.js
2016-08-24T14:46:48.291775+00:00 app[web.1]: Not Found: /static/promag/js/jquery.isotope.min.js
2016-08-24T14:46:48.366593+00:00 app[web.1]: Not Found: /static/promag/js/swipe.js
2016-08-24T14:46:48.436820+00:00 app[web.1]: Not Found: /static/promag/js/main.js
2016-08-24T14:46:48.532672+00:00 app[web.1]: Not Found: /static/promag/js/wow.min.js
2016-08-24T14:46:47.760216+00:00 heroku[router]: at=info method=GET path="/static/promag/css/animate.css" host=promagcareer.herokuapp.com request_id=15ad6cd4-8682-4617-891e-f83826bbfc1a fwd="50.130.91.84" dyno=web.1 connect=0ms service=96ms status=404 bytes=2311
2016-08-24T14:46:47.940990+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.easing.1.3.js" host=promagcareer.herokuapp.com request_id=4fea24c5-8311-4240-889b-a7b6a33f9156 fwd="50.130.91.84" dyno=web.1 connect=0ms service=32ms status=404 bytes=2335
2016-08-24T14:46:48.012855+00:00 heroku[router]: at=info method=GET path="/static/promag/js/retina-1.1.0.min.js" host=promagcareer.herokuapp.com request_id=42073752-6fe8-44b4-9cf9-22689f8d7d88 fwd="50.130.91.84" dyno=web.1 connect=0ms service=16ms status=404 bytes=2332
2016-08-24T14:46:47.852593+00:00 heroku[router]: at=info method=GET path="/static/promag/js/bootstrap.min.js" host=promagcareer.herokuapp.com request_id=9d9e4e74-b6e9-44da-b4fa-1f539a35bd9e fwd="50.130.91.84" dyno=web.1 connect=0ms service=33ms status=404 bytes=2323
2016-08-24T14:46:48.144956+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.smartmenus.bootstrap.min.js" host=promagcareer.herokuapp.com request_id=295ea2d0-46bf-40f4-a6be-3ae2ee942dcf fwd="50.130.91.84" dyno=web.1 connect=0ms service=12ms status=404 bytes=2377
2016-08-24T14:46:48.077690+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.smartmenus.min.js" host=promagcareer.herokuapp.com request_id=03e611d4-afd1-4c7c-b463-beec2c6c68ef fwd="50.130.91.84" dyno=web.1 connect=0ms service=8ms status=404 bytes=2347
2016-08-24T14:46:48.311315+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.isotope.min.js" host=promagcareer.herokuapp.com request_id=84df497c-50d7-4e4e-88d2-7f4e780a5c98 fwd="50.130.91.84" dyno=web.1 connect=0ms service=18ms status=404 bytes=2338
2016-08-24T14:46:48.237420+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.magnific-popup.min.js" host=promagcareer.herokuapp.com request_id=da71048f-556b-4513-8627-f134619f03b4 fwd="50.130.91.84" dyno=web.1 connect=0ms service=34ms status=404 bytes=2359
2016-08-24T14:46:47.458248+00:00 heroku[router]: at=info method=GET path="/career/" host=promagcareer.herokuapp.com request_id=f6f9a2a5-b530-4a6b-bd02-4417e8098847 fwd="50.130.91.84" dyno=web.1 connect=0ms service=45ms status=200 bytes=5477
2016-08-24T14:46:48.550355+00:00 heroku[router]: at=info method=GET path="/static/promag/js/wow.min.js" host=promagcareer.herokuapp.com request_id=a78c3578-3a51-4639-8dc6-f3d7325bd705 fwd="50.130.91.84" dyno=web.1 connect=0ms service=17ms status=404 bytes=2305
2016-08-24T14:46:48.475241+00:00 heroku[router]: at=info method=GET path="/static/promag/js/main.js" host=promagcareer.herokuapp.com request_id=82aa885d-2eea-4a38-9021-460063ede5f6 fwd="50.130.91.84" dyno=web.1 connect=0ms service=38ms status=404 bytes=2296
2016-08-24T14:46:48.380372+00:00 heroku[router]: at=info method=GET path="/static/promag/js/swipe.js" host=promagcareer.herokuapp.com request_id=91bc1737-dbfb-49b9-a9c3-3b13af80421e fwd="50.130.91.84" dyno=web.1 connect=0ms service=13ms status=404 bytes=2299
2016-08-24T14:46:47.754699+00:00 heroku[router]: at=info method=GET path="/static/promag/css/style.css" host=promagcareer.herokuapp.com request_id=867a41cd-89b4-493f-8d41-3e6fd1a9e154 fwd="50.130.91.84" dyno=web.1 connect=1ms service=58ms status=404 bytes=2305
2016-08-24T14:46:47.701725+00:00 heroku[router]: at=info method=GET path="/static/promag/css/bootstrap.min.css" host=promagcareer.herokuapp.com request_id=3bb9d7dc-4bbc-4a8e-a4bb-d39063b8a252 fwd="50.130.91.84" dyno=web.1 connect=0ms service=47ms status=404 bytes=2329
2016-08-24T14:46:47.825263+00:00 heroku[router]: at=info method=GET path="/static/promag/js/retina-1.1.0.min.js" host=promagcareer.herokuapp.com request_id=df0ad196-0589-400b-89a9-ceb5d4ce3481 fwd="50.130.91.84" dyno=web.1 connect=0ms service=15ms status=404 bytes=2332
2016-08-24T14:46:47.669707+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery-1.10.2.min.js" host=promagcareer.herokuapp.com request_id=9fbb183e-1e2b-4851-8175-361dd7393bb8 fwd="50.130.91.84" dyno=web.1 connect=0ms service=23ms status=404 bytes=2335
2016-08-24T14:46:47.781863+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.easing.1.3.js" host=promagcareer.herokuapp.com request_id=1ac22e72-7435-403c-9d24-fa378ff9aee8 fwd="50.130.91.84" dyno=web.1 connect=0ms service=18ms status=404 bytes=2335
2016-08-24T14:46:47.908853+00:00 heroku[router]: at=info method=GET path="/static/promag/js/swipe.js" host=promagcareer.herokuapp.com request_id=78fe5c0a-7704-4d26-9c31-b95468be1770 fwd="50.130.91.84" dyno=web.1 connect=0ms service=31ms status=404 bytes=2299
2016-08-24T14:46:47.760575+00:00 heroku[router]: at=info method=GET path="/static/promag/js/bootstrap.min.js" host=promagcareer.herokuapp.com request_id=ceda2868-3cf5-4632-a859-897056b3c96b fwd="50.130.91.84" dyno=web.1 connect=0ms service=35ms status=404 bytes=2323
2016-08-24T14:46:47.817604+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.smartmenus.bootstrap.min.js" host=promagcareer.herokuapp.com request_id=2e66aae5-4c54-49ee-a3ba-220a2d2f663a fwd="50.130.91.84" dyno=web.1 connect=0ms service=15ms status=404 bytes=2377
2016-08-24T14:46:47.885514+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.isotope.min.js" host=promagcareer.herokuapp.com request_id=a8823c7e-9150-4908-bab3-08e305ca531e fwd="50.130.91.84" dyno=web.1 connect=0ms service=53ms status=404 bytes=2338
2016-08-24T14:46:47.825012+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.magnific-popup.min.js" host=promagcareer.herokuapp.com request_id=ae98f655-9143-4200-b34e-ea470055a7ff fwd="50.130.91.84" dyno=web.1 connect=0ms service=10ms status=404 bytes=2359
2016-08-24T14:46:47.881764+00:00 heroku[router]: at=info method=GET path="/static/promag/js/wow.min.js" host=promagcareer.herokuapp.com request_id=7358f1d8-382d-41d8-a40a-0b72908d870c fwd="50.130.91.84" dyno=web.1 connect=0ms service=6ms status=404 bytes=2305
2016-08-24T14:46:47.743160+00:00 heroku[router]: at=info method=GET path="/static/promag/css/font-awesome.css" host=promagcareer.herokuapp.com request_id=5d42684e-86e2-4cbe-9aff-455d01980568 fwd="50.130.91.84" dyno=web.1 connect=0ms service=82ms status=404 bytes=2326
2016-08-24T14:46:47.733344+00:00 heroku[router]: at=info method=GET path="/static/promag/images/career.png" host=promagcareer.herokuapp.com request_id=58084327-9dd1-4a4c-9dbe-c9a8b448e8ca fwd="50.130.91.84" dyno=web.1 connect=0ms service=71ms status=404 bytes=2317
2016-08-24T14:46:47.876374+00:00 heroku[router]: at=info method=GET path="/static/promag/js/main.js" host=promagcareer.herokuapp.com request_id=4539f1c9-4e45-4d0c-8d6c-f9a266d65f56 fwd="50.130.91.84" dyno=web.1 connect=0ms service=20ms status=404 bytes=2296
2016-08-24T14:46:47.799763+00:00 heroku[router]: at=info method=GET path="/static/promag/js/jquery.smartmenus.min.js" host=promagcareer.herokuapp.com request_id=5e23c9b3-493f-4ccb-8af1-771989f3606e fwd="50.130.91.84" dyno=web.1 connect=0ms service=8ms status=404 bytes=2347
</code></pre>
| 0 | 2016-08-24T14:22:13Z | 39,127,631 | <p>You should rename your <code>static/Promag</code> folder to <code>static/promag</code>, to match the case in the static tag, e.g. <code>{% static 'promag/css/font-awesome.css' %}</code>.</p>
<p>If your filesystem is case insensitive (e.g. Windows) or case preserving (e.g. Mac), then it doesn't matter whether you use <code>Promag</code> or <code>promag</code>. However, it does make a difference on a case-sensitive file system.</p>
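If you are unsure whether a given filesystem is case-sensitive, a quick probe settles it. This is an illustrative sketch of my own (the function name is made up): it creates a throwaway lowercase file and checks whether its uppercase name resolves to the same file.

```python
import os
import shutil
import tempfile

def filesystem_is_case_insensitive(directory=None):
    """Create a lowercase probe file and see if its uppercase name resolves."""
    d = tempfile.mkdtemp(dir=directory)
    try:
        open(os.path.join(d, "probe.txt"), "w").close()
        return os.path.exists(os.path.join(d, "PROBE.TXT"))
    finally:
        shutil.rmtree(d)

# True on Windows and default macOS volumes, False on typical Linux servers
print(filesystem_is_case_insensitive())
```

On a Linux dyno such as Heroku's this returns <code>False</code>, which is why a folder named <code>Promag</code> can work locally yet break <code>{% static 'promag/...' %}</code> lookups in production.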
| 1 | 2016-08-24T15:39:04Z | [
"python",
"django",
"heroku"
] |
Error in using PyQt4 with python 3.5.2 for running a PyMoskito example | 39,125,969 | <p>I'm trying to get an example of <a href="https://github.com/cklb/pymoskito" rel="nofollow">PyMoskito</a> running under Python 3.5.2 Win7 64 bit<br>
This library needs PyQt4 which in turn needs SIP. </p>
<p><strong>1-</strong> I installed SIP using <code>pip3 install SIP</code> which ended up successfully (with a notice to update my pip). </p>
<p><strong>2-</strong> I even barely know Python. So I tried installing PyQt4 with a binary executable. The binaries provided at <a href="https://riverbankcomputing.com/software/pyqt/download" rel="nofollow">riverbankcomputing.com/software/pyqt/download</a> are for python 3.4 so I downloaded an unofficial wheel from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyqt4" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyqt4</a> with a name <code>PyQt4-4.11.4-cp35-none-win_amd64.whl</code>. Then I <code>cd</code>'ed into the respective directory and installed it with <code>pip</code> which ended up successfully.</p>
<p>Now, I don't know what is the standard procedure for testing if a library is working. So I ran <code>setup.py</code> from the main directory of library and I assumed it set things up. Then I ran an example from <code>examples</code> folder and I get this error </p>
<pre><code>Traceback (most recent call last):
File "C:\Users\****\Documents\Python Libraries\pymoskito-master\pymoskito-master\examples\ballbeam\main.py", line 3, in <module>
from PyQt4 import QtGui, QtCore
ImportError: DLL load failed: The specified module could not be found.
</code></pre>
<p>I assume python could NOT find the PyQt4 dependencies. I searched stackoverflow and none of the solutions worked. Here is what I tried so far: </p>
<ul>
<li>There is NO /bin directory in <code>C:\Python35\Lib\site-packages\PyQt4</code>. So all dlls are already in the \PyQt4 directory </li>
<li>I added this directory to System <code>PATH</code> env. var. Same error occurs </li>
<li>I appended this directory to <code>sys.path</code> and tested it. Same error occurs</li>
<li><p>I created a \bin directory and copied all \PyQt4 into it. Then added it to PATH. Same error occurs</p></li>
<li><p>After that, I installed python 2.7 and PyQt4 from riverbankcomputing but not SIP. I still have the same problem </p></li>
</ul>
<p>I really don't know what else should I do. I'm actually frustrated by all the mentioning of whether OP is using Python 2 or Python 3 or QT4 or QT5 or x86 or x64. I believe backward compatibility should be the minimum of a programming language and this is definitely a mess for Python </p>
| 1 | 2016-08-24T14:22:32Z | 39,169,452 | <p>The short answer, I believe, is that you have pyqt4 (a python interface to the qt4 library) and sip (an automated c++-to-python communication library), but not qt4 itself (the UI library). qt4 is its own monolithic c++ project with its own build and dependency problems.</p>
<p>You may be able to find the various pieces at <a href="http://www.lfd.uci.edu/~gohlke" rel="nofollow">http://www.lfd.uci.edu/~gohlke</a> , but I would recommend installing the Anaconda distribution of python ( <a href="https://www.continuum.io/downloads" rel="nofollow">https://www.continuum.io/downloads</a> ), which provides binary packages for things like qt4 and pyqtgraph, the former of which is installed by default, the latter requiring a command like</p>
<pre><code>conda install pyqtgraph
</code></pre>
<p>After this, pip install of pymoskito should work.</p>
<p>There is also mention in the readme of vtk, but it is not in the requirements. vtk is also a monolithic c++ library and hard to build, but conda has that too, albeit not in the default channel for py35:</p>
<pre><code>conda install vtk -c menpo
</code></pre>
| 1 | 2016-08-26T15:10:12Z | [
"python",
"python-3.x",
"pyqt",
"pyqt4"
] |
How do i create new database in Tryton | 39,125,976 | <p>I have downloaded and installed Tryton 4.0 (client 4.0.exe, Windows). I have been playing around with the demo database that comes with it. My question is how do I create my own database and user profile. I went to manage profiles, added a new profile, input the port number for the postgres database (5432) and input a username. But it says "failed to connect to server". What could be the problem?</p>
| 0 | 2016-08-24T14:22:52Z | 39,126,544 | <p>First of all you have to install your custom server; you cannot create databases on the demo server. Once you have <a href="http://doc.tryton.org/4.0/trytond/doc/topics/install.html#topics-install" rel="nofollow">installed</a> and <a href="http://doc.tryton.org/4.0/trytond/doc/topics/configuration.html#topics-configuration" rel="nofollow">configured</a> the server you have to create the database in your database backend. If you are using the postgresql backend you can follow the procedure explained in <a href="https://www.postgresql.org/docs/current/static/sql-createdatabase.html" rel="nofollow">their docs</a>. </p>
<p>Once you have created this database you must <a href="http://doc.tryton.org/4.0/trytond/doc/topics/setup_database.html#initialize-a-database" rel="nofollow">initialize your database</a>. </p>
<p>Once the database is created, <a href="http://doc.tryton.org/4.0/trytond/doc/topics/start_server.html#topics-start-server" rel="nofollow">start the server</a> and the database should be available to select on the profiles page of the tryton client. </p>
| 0 | 2016-08-24T14:46:47Z | [
"python",
"python-2.7",
"tryton"
] |
Efficient bookkeeping in heap | 39,126,034 | <p>I'm trying to implement a heap using a list data structure. I'd also like to keep track of the position of elements in the list in order to enable easy deletion. My implementation involves looping through the entire list to update the positions after an insert/delete combo. I'm afraid this raises the time complexity from O(log n) to O(n).
Is there a better way of keeping track of elements' positions? Currently, the <code>update_pos</code> method is what takes care of the bookkeeping.</p>
<pre><code>class heap():
''' Min-Heap'''
def __init__(self,G):
self.list=[0] #to ease dealing with indices, an arbitrary value at index 0
self.pos={} #holds position of elements with respect to list
self.G = G #Graph, contains the score for each element in G[element][2]
def update_pos(self):
self.pos = {}
for i in xrange(1,len(self.list)):
self.pos[self.list[i]]=i
def percUp(self): #percolate up, called by insert method
start = len(self.list)-1
while start//2>0:
if self.G[self.list[start/2]][2] > self.G[self.list[start]][2]:
self.list[start/2],self.list[start] = self.list[start],self.list[start/2]
start = start//2
def insert(self,element):
self.list.append(element)
self.percUp()
self.update_pos()
def percDown(self,start=1): #percolate down, called by extract_min method
while 2*start < len(self.list):
min_ind = self.getMinInd(start)
if self.G[self.list[start]][2] > self.G[self.list[min_ind]][2]:
self.list[start],self.list[min_ind] = self.list[min_ind],self.list[start]
start = min_ind
def extract_min(self):
self.list[-1],self.list[1] = self.list[1],self.list[-1]
small = self.list[-1]
self.list = self.list[:-1]
self.percDown()
self.update_pos()
return small
def delete(self,pos):
self.list[-1],self.list[pos] = self.list[pos],self.list[-1]
self.pos.pop(self.list[pos])
self.list = self.list[:-1]
self.percDown(pos)
self.update_pos()
def getMinInd(self,start):
if 2*start+1 > len(self.list)-1:
return 2*start
else:
if self.G[self.list[2*start]][2]<self.G[self.list[2*start+1]][2]:
return 2*start
else:
return 2*start+1
</code></pre>
| 0 | 2016-08-24T14:25:30Z | 39,129,293 | <p>If you're building a binary heap, the best way I know of to speed up arbitrary removal or changing priority is to create a hash map. The key is the item in the priority queue, and the value is its current position in the array. When you insert an item into the queue, you add an entry to the hash map with the item's current position.</p>
<p>Then, <em>every time</em> an item is moved in the queue, you update its value in the hash map. So every time you do a swap during insertion or removal, you update the swapped items' values in that hash map.</p>
<p>To remove an arbitrary item, then, you do the following:</p>
<ol>
<li>Look up the item's position in the hash map.</li>
<li>Delete the item's entry in the hash map.</li>
<li>Move the last item in the heap to the removed item's position, and update its position in the hash map.</li>
<li>Sift the new item up or down in the heap, as required, updating all affected nodes' positions in the hash map.</li>
</ol>
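<p>Those steps can be sketched in Python. This is an illustrative minimal implementation of my own (the class and method names are not from any library); it stores <code>(key, item)</code> pairs in a list and mirrors every swap into a <code>pos</code> dict so that removal by item is O(log n):</p>

```python
class IndexedMinHeap(object):
    """Illustrative binary min-heap of (key, item) pairs.

    A `pos` dict mirrors every swap, so removing an arbitrary item
    is O(log n) instead of O(n). Items must be unique and hashable.
    """

    def __init__(self):
        self.heap = []   # list of (key, item) pairs, heap-ordered by key
        self.pos = {}    # item -> current index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self.heap[child][0] < self.heap[smallest][0]:
                    smallest = child
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def push(self, key, item):
        self.heap.append((key, item))
        self.pos[item] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def remove(self, item):
        """Steps 1-4 above: look up, delete, move last into the hole, sift."""
        i = self.pos.pop(item)          # 1.-2. look up position, drop the map entry
        last = self.heap.pop()          # 3. take the last element
        if i < len(self.heap):          # (unless we just removed the last element)
            self.heap[i] = last
            self.pos[last[1]] = i
            self._sift_up(i)            # 4. restore the heap property
            self._sift_down(i)
```

<p>For comparison, the standard library's <code>heapq</code> keeps no such map; its documentation instead suggests marking removed entries and skipping them on pop (lazy deletion).</p>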
<p>This works reasonably well, although it can be pretty expensive in terms of memory if your heap is large.</p>
<p>Other heap data structures such as Fibonacci heap, Pairing heap, Skew heap, or even a binary heap implemented as a binary tree work with individual heap nodes rather than implicit nodes in an array, and therefore can be accessed directly without the need of an intermediate hash table. They do require more memory than a binary heap implemented as an array, but are potentially much more efficient.</p>
<p>By the way, if you decide to experiment with one of those alternate structures, I'd recommend that you take a look at Pairing heap. Its asymptotic performance is almost as good as Fibonacci heap, and it's <em>much</em> easier to implement. I don't yet have any good numbers on its real-world performance.</p>
| 0 | 2016-08-24T17:10:11Z | [
"python",
"heap"
] |
Python 2.7 - Unable to correctly decode email subject-header line | 39,126,189 | <p>I'm using Python 2.7, and I am trying to properly decode the subject header line of an email. The source of the email is: </p>
<pre><code>Subject: =?UTF-8?B?VGkgw6ggcGlhY2l1dGEgbGEgZGVtbz8gU2NvcHJpIGFsdHJlIG4=?=
</code></pre>
<p>I use the function decode_header(header) from the email.header library, and the result is: </p>
<pre><code>[('Ti \xc3\xa8 piaciuta la demo? Scopri altre n', 'utf-8')]
</code></pre>
<p>The '\xc3\xa8' part should match the 'è' character, but it is not correctly decoded/shown.
Another example:</p>
<pre><code>Subject: =?iso-8859-1?Q?niccol=F2_cop?= =?iso-8859-1?Q?ernico?=
</code></pre>
<p>Result: </p>
<pre><code>[('niccol\xf2 copernico', 'iso-8859-1')]
</code></pre>
<p>How can I obtain the correct string?</p>
| 0 | 2016-08-24T14:32:06Z | 39,126,500 | <p>You <em>are</em> getting the correct string. It's just encoded (using UTF-8 in the first case, and iso-8859-1 in the second); you need to decode it to get the actual unicode string.</p>
<p>For example:</p>
<pre><code>>>> print unicode('Ti \xc3\xa8 piaciuta la demo? Scopri altre n', 'utf-8')
Ti è piaciuta la demo? Scopri altre n
</code></pre>
<p>Or:</p>
<pre><code>>>> print unicode('niccol\xf2 copernico', 'iso-8859-1')
niccolò copernico
</code></pre>
<p>That's why you get back both the header data <em>and</em> the encoding.</p>
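<p>To fold the decode step into a reusable helper, you can join the decoded chunks that <code>decode_header</code> returns. This is a sketch of my own (the function name is made up), written in Python 3 syntax where the chunks come back as <code>bytes</code>; under Python 2 the same idea applies with <code>unicode(chunk, enc)</code>:</p>

```python
from email.header import decode_header

def decode_subject(raw_header):
    """Decode an RFC 2047 encoded header into a single unicode string."""
    parts = []
    for chunk, enc in decode_header(raw_header):
        if isinstance(chunk, bytes):
            parts.append(chunk.decode(enc or 'ascii'))
        else:
            parts.append(chunk)  # already a plain (unencoded) string
    return u''.join(parts)

print(decode_subject('=?UTF-8?B?VGkgw6ggcGlhY2l1dGEgbGEgZGVtbz8gU2NvcHJpIGFsdHJlIG4=?='))
# Ti è piaciuta la demo? Scopri altre n
print(decode_subject('=?iso-8859-1?Q?niccol=F2_cop?= =?iso-8859-1?Q?ernico?='))
```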
| 1 | 2016-08-24T14:45:11Z | [
"python",
"python-2.7",
"email",
"decode",
"string-decoding"
] |
Calculate distances in a kind of "bag of words" approach | 39,126,249 | <p>My code runs but the output of my function is always <code>0.0</code>. My code reads <code>.txt</code> files and creates a matrix where each <code>.txt</code> file represents a line in the matrix and each word in the <code>.txt</code> file has its own column in the respective line in the matrix.</p>
<p>I compare the lines pairwise. I want to count how often each word of the union of both lines occurs. However, although the code runs I get the wrong result (<code>0.0</code>).</p>
<p>I thought I might have an error in the matrix that I use for the function, but the matrix looks good.</p>
<p>Strange thing is that if I create two lists manually:</p>
<pre><code>a = ["a", "b", "c", "d"],
b = ["b", "c", "d", "e"]
</code></pre>
<p>it works, but when I change to:</p>
<pre><code>a = ["word 1", "word 2", "word 3", "word 4"],
b = ["word 2","word 3","word 4","word 5",]
</code></pre>
<p>the result is again <code>0.0</code>. I am confused!</p>
<p>My code:</p>
<pre><code>def bow_distance(a, b):
p = 0
if len(a) > len(b):
max_words = len(a)
else:
max_words = len(b)
list_words_ab = list(set(a) | set(b))
len_bow_matrix = len(list_words_ab)
bow_matrix = numpy.zeros(shape = (3, len_bow_matrix), dtype = str)
while p < len_bow_matrix:
bow_matrix[0, p] = str(list_words_ab[p])
p = p+1
p = 0
while p < len_bow_matrix:
bow_matrix[1, p] = a.count(bow_matrix[0, p])
bow_matrix[2, p] = b.count(bow_matrix[0, p])
p = p+1
p = 0
overlap = 0
while p < len_bow_matrix:
abs_difference = abs(float(bow_matrix[1, p]) - float(bow_matrix[2, p]))
overlap = overlap + abs_difference
p = p+1
return (overlap/2)/max_num_parts
# Calculate the distances
i = 1
j = 1
while i < num_of_txt + 1:
print(i)
newfile = open("TXT_distance_" + str(i)+".txt", "w")
while j < num_of_txt + 1:
newfile.write(str(bow_distance(text_word_matrix[i-1], text_word_matrix[j-1])) + " ")
j = j+1
newfile.close()
j = 1
i = i+1
</code></pre>
| 0 | 2016-08-24T14:34:48Z | 39,126,305 | <p>At first sight I see two failures here: the trailing comma after each closing bracket turns the assignment into a one-element <em>tuple</em> wrapping the list, so everything downstream operates on a tuple instead of the list of words:</p>
<pre><code>a = ["a", "b", "c", "d"], <----- comma here
b = ["b", "c", "d", "e"]
it works, but when I change to:
a = ["word 1", "word 2", "word 3", "word 4"], <----- and here
b = ["word 2","word 3","word 4","word 5",] <----- this one, inside the brackets, is legal and harmless
</code></pre>
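<p>To see why the stray comma matters, here is a quick illustration (not part of the original code): in Python a comma after the closing bracket binds the name to a one-element tuple containing the list, while a comma <em>inside</em> the brackets is just an optional trailing comma.</p>

```python
a = ["a", "b", "c", "d"],   # trailing comma AFTER the bracket: a is a tuple
b = ["b", "c", "d", "e"]    # no trailing comma: a plain list

print(type(a).__name__)  # tuple
print(len(a))            # 1 -- the tuple's single element is the whole list
print(len(a[0]))         # 4

c = ["word 2", "word 3", "word 4", "word 5",]  # comma INSIDE the brackets
print(type(c).__name__, len(c))  # list 4
```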
| 0 | 2016-08-24T14:37:21Z | [
"python",
"count",
"distance"
] |
Reset new keys to a dictionary | 39,126,272 | <p>I have a python dictionary.</p>
<pre><code>A={0:'dog',1:'cat',3:'fly',4:'fish',6:'lizard'}
</code></pre>
<p>I want to reset the keys according to <code>range(len(A))</code>(the natural increment), which should look like:</p>
<pre><code>new_A={0:'dog',1:'cat',2:'fly',3:'fish',4:'lizard'}
</code></pre>
<p>How could I do that?</p>
| 1 | 2016-08-24T14:35:36Z | 39,126,434 | <p>Here's a working example for both py2.x and py3.x (note that a plain <code>dict</code> has no guaranteed iteration order before Python 3.7, so sort the old keys first if the new numbering must follow them):</p>
<pre><code>A = {0: 'dog', 1: 'cat', 3: 'fly', 4: 'fish', 6: 'lizard'}
B = {i: v for i, v in enumerate(A.values())}
print(B)
</code></pre>
| 2 | 2016-08-24T14:42:13Z | [
"python",
"dictionary",
"order",
"key",
"value"
] |
Reset new keys to a dictionary | 39,126,272 | <p>I have a python dictionary.</p>
<pre><code>A={0:'dog',1:'cat',3:'fly',4:'fish',6:'lizard'}
</code></pre>
<p>I want to reset the keys according to <code>range(len(A))</code>(the natural increment), which should look like:</p>
<pre><code>new_A={0:'dog',1:'cat',2:'fly',3:'fish',4:'lizard'}
</code></pre>
<p>How could I do that?</p>
| 1 | 2016-08-24T14:35:36Z | 39,126,512 | <p>If you want to assign new keys in the ascending order of old keys, then</p>
<pre><code>new_A = {i: A[k] for i, k in enumerate(sorted(A.keys()))}
</code></pre>
| 2 | 2016-08-24T14:45:47Z | [
"python",
"dictionary",
"order",
"key",
"value"
] |
Reset new keys to a dictionary | 39,126,272 | <p>I have a python dictionary.</p>
<pre><code>A={0:'dog',1:'cat',3:'fly',4:'fish',6:'lizard'}
</code></pre>
<p>I want to reset the keys according to <code>range(len(A))</code>(the natural increment), which should look like:</p>
<pre><code>new_A={0:'dog',1:'cat',2:'fly',3:'fish',4:'lizard'}
</code></pre>
<p>How could I do that?</p>
| 1 | 2016-08-24T14:35:36Z | 39,126,640 | <p>Dictionaries are not ordered. If your keys are incremental integers, you might as well use a list.</p>
<p><code>new_A = list(A.values())</code></p>
| 1 | 2016-08-24T14:50:49Z | [
"python",
"dictionary",
"order",
"key",
"value"
] |
Reset new keys to a dictionary | 39,126,272 | <p>I have a python dictionary.</p>
<pre><code>A={0:'dog',1:'cat',3:'fly',4:'fish',6:'lizard'}
</code></pre>
<p>I want to reset the keys according to <code>range(len(A))</code>(the natural increment), which should look like:</p>
<pre><code>new_A={0:'dog',1:'cat',2:'fly',3:'fish',4:'lizard'}
</code></pre>
<p>How could I do that?</p>
| 1 | 2016-08-24T14:35:36Z | 39,126,674 | <p>If you want to keep the same order of keys </p>
<pre><code>A = {0: 'dog', 1: 'cat', 3: 'fly', 4: 'fish', 6: 'lizard'}
new_A = dict((i, A[k]) for i, k in enumerate(sorted(A.keys())))
</code></pre>
| 2 | 2016-08-24T14:52:11Z | [
"python",
"dictionary",
"order",
"key",
"value"
] |
Reset new keys to a dictionary | 39,126,272 | <p>I have a python dictionary.</p>
<pre><code>A={0:'dog',1:'cat',3:'fly',4:'fish',6:'lizard'}
</code></pre>
<p>I want to reset the keys according to <code>range(len(A))</code>(the natural increment), which should look like:</p>
<pre><code>new_A={0:'dog',1:'cat',2:'fly',3:'fish',4:'lizard'}
</code></pre>
<p>How could I do that?</p>
| 1 | 2016-08-24T14:35:36Z | 39,128,821 | <p>If you want ordering by creation as well as access by key, then you want an <code>OrderedDict</code>.</p>
<pre><code>>>> from collections import OrderedDict
>>> d=OrderedDict()
>>> d['Cat'] = 'cool'
>>> d['Dog'] = 'best'
>>> d['Fish'] = 'cold'
>>> d['Norwegian Blue'] = 'ex-parrot'
>>> d
OrderedDict([('Cat', 'cool'), ('Dog', 'best'), ('Fish', 'cold'), ('Norwegian Blue', 'ex-parrot')])
>>> d.values()
odict_values(['cool', 'best', 'cold', 'ex-parrot'])
>>> d.keys()
odict_keys(['Cat', 'Dog', 'Fish', 'Norwegian Blue'])
</code></pre>
<p>You retain the ability to access the items as a sequence in the order that they were added, but you also have the fast access-by-key which a dict (hash) gives you. If you want "natural" sequence numbers you use <code>enumerate</code> as normal:</p>
<pre><code>>>> for i,it in enumerate( d.items()):
... print( '%5d %15s %15s' % ( i,it[0], it[1]) )
...
0 Cat cool
1 Dog best
2 Fish cold
3 Norwegian Blue ex-parrot
>>>
</code></pre>
| 0 | 2016-08-24T16:41:39Z | [
"python",
"dictionary",
"order",
"key",
"value"
] |
How can I check a file has been copied fully to a folder before moving it using python | 39,126,411 | <p>I'm currently working on a project that adds images to a folder. As they're added they also need to be moved (in groups of four) to a secondary folder overwriting the images that are already in there (if any). I have it sort of working using watchdog.py to monitor the first folder. When the 'on_created' event fires I take the file path of the newly added image and copy it to the second folder using shutil.copy(), incrementing a counter and using the counter value to rename the image as it copies (so it becomes folder/1.jpg). When the counter reaches 4 it resets to 0 and the most recent 4 images are displayed on a web page. All these folders are in the local filesystem on the same drive.</p>
<p>My problem is that sometimes it seems the event fires before the image is fully saved in the first folder (the images are around 1Mb but vary slightly so I can't check file size) which results in a partial or corrupted image being copied to the second folder. At worst it throws an IOError saying the file isn't even there.</p>
<p>Any suggestions. I'm using OSX 10.11, Python 2.7. The images are all Jpegs.</p>
| 0 | 2016-08-24T14:41:12Z | 39,126,518 | <p>I see multiple solutions :</p>
<ol>
<li>When you first create your images in the first folder, add a suffix to their name, for instance, <code>filexxx.jpg.part</code> and when they are fully written just rename them, removing the <code>.part</code>.
Then in your watchdog, be sure not to work on files ending with <code>.part</code></li>
<li>In your watchdog, test the image file, like try to load the file with an image library, and catch the exceptions.</li>
</ol>
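<p>If neither renaming nor parsing fits your pipeline, a stdlib-only fallback is to wait until the file's size stops changing before copying it. A heuristic sketch (the function name and parameter defaults are illustrative, not from the original post):</p>

```python
import os
import time

def wait_until_stable(path, checks=3, interval=0.5):
    """Return once the file size has stayed the same for `checks`
    consecutive polls -- a heuristic that the writer has finished."""
    last = -1
    stable = 0
    while stable < checks:
        size = os.path.getsize(path)  # raises OSError if the file vanished
        if size == last:
            stable += 1
        else:
            stable = 0
            last = size
        time.sleep(interval)
```

<p>Combining this with a validation pass (option 2 above) gives the most robust result, since a stable size alone does not prove the JPEG is well-formed.</p>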
| 1 | 2016-08-24T14:45:52Z | [
"python",
"image",
"filesystems"
] |
Running code on Django application start | 39,126,445 | <p>I need to run some code every time my application starts. I need to be able to manipulate models, just like I would in actual view code. Specifically, I am trying to hack built-in User model to support longer usernames, so my code is like this</p>
<pre>
def username_length_hack(sender, *args, **kwargs):
model = sender._meta.model
model._meta.get_field("username").max_length = 254
</pre>
<p>But I cannot seem to find the right place to do it. I tried adding a class_prepared signal handler in either models.py or app.py of the app that uses User model (expecting that User will by loaded by the time this apps models are loaded). The post_migrate and pre_migrate only run on migrate command. Adding code into settings.py seems weird and besides nothing is loaded at that point anyway. So far, the only thing that worked was connecting it to a pre_init signal and having it run every time a User instance is spawned. But that seems like a resource drain. I am using Django 1.8. How can I run this on every app load?</p>
| 0 | 2016-08-24T14:42:37Z | 39,127,214 | <p>I agree with the comments; there are prettier approaches than this.</p>
<p>You could add your code to the <code>__init__.py</code>of your app</p>
| 1 | 2016-08-24T15:17:24Z | [
"python",
"django"
] |
Partition list based on elements of another list as keys | 39,126,516 | <p>How can I combine these two lists and use <code>alist</code> as keys and <code>blist</code> as values? What I would like to do is group the values in <code>blist</code> with the corresponding keys. So let's say values <code>3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5</code> should have <code>'Inner OD'</code> as key and the remaining should have a key <code>'Outter OD'</code>: <code>None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3</code></p>
<p>so basically I would want it to look like this </p>
<pre><code>{'Inner OD': [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5], 'Outter OD': [None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]})
</code></pre>
<p>Any help would be greatly appreciated. </p>
<pre><code>alist = [u'Outter OD', u'Outter OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD']
blist = [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]
</code></pre>
| 0 | 2016-08-24T14:45:51Z | 39,126,571 | <p>Something like this:</p>
<pre><code>res = {}
for i in range(len(alist)):
if alist[i] in res:
res[alist[i]].append(blist[i])
else:
res[alist[i]]=[blist[i]]
</code></pre>
<p>returns <code>{'Inner OD': [2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6], 'Outter OD': [3, 4, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]}</code></p>
| 2 | 2016-08-24T14:48:05Z | [
"python",
"python-2.7"
] |
Partition list based on elements of another list as keys | 39,126,516 | <p>How can I combine these two lists and use <code>alist</code> as keys and <code>blist</code> as values? What I would like to do is group the values in <code>blist</code> with the corresponding keys. So let's say values <code>3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5</code> should have <code>'Inner OD'</code> as key and the remaining should have a key <code>'Outter OD'</code>: <code>None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3</code></p>
<p>so basically I would want it to look like this </p>
<pre><code>{'Inner OD': [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5], 'Outter OD': [None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]})
</code></pre>
<p>Any help would be greatly appreciated. </p>
<pre><code>alist = [u'Outter OD', u'Outter OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD']
blist = [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]
</code></pre>
| 0 | 2016-08-24T14:45:51Z | 39,128,182 | <p>It does nothing better than the @Gábor ErdÅs answer but I think it is a bit clearer:</p>
<pre><code>>>> alist = [u'Outter OD', u'Outter OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD',u'Outter OD', u'Outter OD', u'Outter OD']
>>> blist = [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]
>>> res = {}
>>> for key, val in zip(alist, blist):
... if key in res:
... res[key].append(val)
... else:
... res[key] = [val]
...
>>> res
{u'Inner OD': [2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6], u'Outter OD': [3, 4, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]}
</code></pre>
<p>more information about the zip builtin function <a href="https://docs.python.org/2/library/functions.html?highlight=zip#zip" rel="nofollow">in the doc</a></p>
| 0 | 2016-08-24T16:07:05Z | [
"python",
"python-2.7"
] |
Partition list based on elements of another list as keys | 39,126,516 | <p>How can I combine these two lists and use <code>alist</code> as keys and <code>blist</code> as values? What I would like to do is group the values in <code>blist</code> with the corresponding keys. So let's say values <code>3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5</code> should have <code>'Inner OD'</code> as key and the remaining should have a key <code>'Outter OD'</code>: <code>None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3</code></p>
<p>so basically I would want it to look like this </p>
<pre><code>{'Inner OD': [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5], 'Outter OD': [None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]})
</code></pre>
<p>Any help would be greatly appreciated. </p>
<pre><code>alist = [u'Outter OD', u'Outter OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Inner OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD', u'Outter OD']
blist = [3, 4, 2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]
</code></pre>
| 0 | 2016-08-24T14:45:51Z | 39,128,362 | <p>Yet another solution. It does nothing better than @Tryph's answer (which does nothing better than the @Gábor ErdÅs's answer) but I think using <code>defaultdict</code> makes the logic a bit cleaner:</p>
<pre><code>from collections import defaultdict
res = defaultdict(list)
for a, b in zip(alist, blist):
res[a].append(b)
</code></pre>
<p>A <code>defaultdict</code> behaves almost the same as a dictionary, so you can probably just use the <code>res</code> above for the rest of your code; but, if you want, you can turn it into a regular dictionary with</p>
<pre><code>res = dict(res)
</code></pre>
| 1 | 2016-08-24T16:16:33Z | [
"python",
"python-2.7"
] |
Python treats "is" to pluralize string literal | 39,126,532 | <p>Python version 3.4.3</p>
<p>Python converting string literal to plural. I cannot figure out how to solve this.</p>
<p>When I enter:</p>
<pre><code>>>> x = ("The number % is incorrect" % 8)
>>> x
'The number 8s incorrect'
</code></pre>
<p>When I try to escape "is" I get an error.</p>
<pre><code>>>> x = ("The number % \is incorrect" % 8)
ValueError: unsupported format character '\' (0x5c) at index 13
</code></pre>
| 0 | 2016-08-24T14:46:24Z | 39,126,547 | <p>Just use <code>format</code> function instead:</p>
<pre><code>x = "The number {} is incorrect".format(8)
</code></pre>
| 9 | 2016-08-24T14:46:59Z | [
"python",
"python-3.x"
] |
Python treats "is" to pluralize string literal | 39,126,532 | <p>Python version 3.4.3</p>
<p>Python converting string literal to plural. I cannot figure out how to solve this.</p>
<p>When I enter:</p>
<pre><code>>>> x = ("The number % is incorrect" % 8)
>>> x
'The number 8s incorrect'
</code></pre>
<p>When I try to escape "is" I get an error.</p>
<pre><code>>>> x = ("The number % \is incorrect" % 8)
ValueError: unsupported format character '\' (0x5c) at index 13
</code></pre>
| 0 | 2016-08-24T14:46:24Z | 39,126,593 | <p>Try <code>'the number %d is incorrect' % 8</code></p>
<p>The problem is that python is reading your <code>%</code> (with the space, thanks, Ashwini) , and thinking that is your format character.</p>
| 6 | 2016-08-24T14:48:52Z | [
"python",
"python-3.x"
] |
Python treats "is" to pluralize string literal | 39,126,532 | <p>Python version 3.4.3</p>
<p>Python converting string literal to plural. I cannot figure out how to solve this.</p>
<p>When I enter:</p>
<pre><code>>>> x = ("The number % is incorrect" % 8)
>>> x
'The number 8s incorrect'
</code></pre>
<p>When I try to escape "is" I get an error.</p>
<pre><code>>>> x = ("The number % \is incorrect" % 8)
ValueError: unsupported format character '\' (0x5c) at index 13
</code></pre>
| 0 | 2016-08-24T14:46:24Z | 39,127,738 | <p>The string:</p>
<pre><code>'the number % is incorrect' % 8
</code></pre>
<p>is actually interpreted as:</p>
<pre><code>'the number [% i]s incorrect' % 8
# ^ conversion specifier
</code></pre>
<p>and according to the <a href="https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting" rel="nofollow">docs on formatting</a>, the specifier <code>i</code> is going to get substituted by the integer <code>8</code>.</p>
<p>This can easily be mediated by actually providing the specifier right after <code>%</code> as in:</p>
<pre><code>'the number %i is incorrect' % 8
</code></pre>
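<p>A related gotcha: a literal percent sign inside a %-formatted string must be doubled, otherwise it is read as the start of another conversion specifier:</p>

```python
# '%i' is the integer conversion; '%%' produces a literal '%'
msg = 'the number %i is incorrect 100%% of the time' % 8
print(msg)  # the number 8 is incorrect 100% of the time
```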
| 3 | 2016-08-24T15:45:02Z | [
"python",
"python-3.x"
] |
How to remove characters from multiple files in python | 39,126,668 | <p>I'm, trying to write a simple program to batch rename files in a folder. </p>
<p>file format: </p>
<pre><code>11170_tcd001-20160824-094716.txt
11170_tcd001-20160824-094716.rst
11170_tcd001-20160824-094716.raw
</code></pre>
<p>I have 48 of the above with a different 14 digit character configuration after the first "-". </p>
<p>My final goal is to convert the above to:</p>
<pre><code>11170_tcd001.txt
11170_tcd001.rst
11170_tcd001.raw
</code></pre>
<p>I know it's possible to os.rename files in python. However, I can't figure out how to batch rename multiple files with a different character configuration.</p>
<p>Is this possible? </p>
<p>some pseudocode below of what I would like to achieve. </p>
<pre><code>import os
pathiter = (os.path.join(root, filename)
for root, _, filenames in os.walk(folder)
for filename in filenames
)
for path in pathiter:
newname = path.replace('14 digits.txt', ' 0 digits.txt')
if newname != path:
os.rename(path,newname)
</code></pre>
| 0 | 2016-08-24T14:52:00Z | 39,126,812 | <p>You should probably try using regular expressions, like</p>
<pre><code>import re
<...>
newfilename = re.sub(r'-\d{8}-\d{6}\b', '', oldfilename)
<...>
</code></pre>
<p>This will replace any 'hyphen, 8 digits, hyphen, 6 digits' not followed by letter, digit or underscore with empty string in your filename. Hope I got you right.</p>
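<p>Tying the regex back to the question's rename loop, a minimal self-contained sketch (the helper names are illustrative, not from the original post; the lookahead keeps the substitution anchored just before the extension):</p>

```python
import os
import re

# matches '-YYYYMMDD-HHMMSS' only when immediately followed by '.ext'
_STAMP = re.compile(r'-\d{8}-\d{6}(?=\.\w+$)')

def strip_timestamp(filename):
    return _STAMP.sub('', filename)

def rename_all(folder):
    for oldname in os.listdir(folder):
        newname = strip_timestamp(oldname)
        if newname != oldname:
            os.rename(os.path.join(folder, oldname),
                      os.path.join(folder, newname))
```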
| 2 | 2016-08-24T14:57:59Z | [
"python",
"batch-rename"
] |
How to remove characters from multiple files in python | 39,126,668 | <p>I'm, trying to write a simple program to batch rename files in a folder. </p>
<p>file format: </p>
<pre><code>11170_tcd001-20160824-094716.txt
11170_tcd001-20160824-094716.rst
11170_tcd001-20160824-094716.raw
</code></pre>
<p>I have 48 of the above with a different 14 digit character configuration after the first "-". </p>
<p>My final goal is to convert the above to:</p>
<pre><code>11170_tcd001.txt
11170_tcd001.rst
11170_tcd001.raw
</code></pre>
<p>I know it's possible to os.rename files in python. However, I can't figure out how to batch rename multiple files with a different character configuration.</p>
<p>Is this possible? </p>
<p>some pseudocode below of what I would like to achieve. </p>
<pre><code>import os
pathiter = (os.path.join(root, filename)
for root, _, filenames in os.walk(folder)
for filename in filenames
)
for path in pathiter:
newname = path.replace('14 digits.txt', ' 0 digits.txt')
if newname != path:
os.rename(path,newname)
</code></pre>
| 0 | 2016-08-24T14:52:00Z | 39,126,876 | <p>If you are looking for a non-regex approach and considering your files all match that particular pattern you are expecting, what you can do first is get the extension of the file using <code>splitext</code>:</p>
<pre><code>from os.path import splitext
file_name = '11170_tcd001-20160824-094716.txt'
extension = splitext(file_name)[1]
print(extension) # outputs: .txt
</code></pre>
<p>Then, with the extension in hand, split the file_name on the <code>-</code> and get the first item since you know that is the part that you want to keep:</p>
<pre><code>new_filename = file_name.split('-')[0]
print(new_filename) # 11170_tcd001
</code></pre>
<p>Now, append the extension:</p>
<pre><code>new_filename = new_filename + extension
print(new_filename) # 11170_tcd001.txt
</code></pre>
<p>Now you can proceed with the rename: </p>
<pre><code>os.rename(file_name, new_filename)
</code></pre>
| 4 | 2016-08-24T15:00:16Z | [
"python",
"batch-rename"
] |
"This session is in 'prepared' state; no further" error with SQLAlchemy using scoped_session in threaded mod_wsgi app | 39,126,810 | <p>I recently updated to SQLAlchemy 1.1, which I'm using under Django 1.10 (also recently updated from 1.6), and I keep getting sqlalchemy/mysql errors that <code>This session is in 'prepared' state; no further SQL can be emitted within this transaction.</code></p>
<p>How do I debug this?</p>
<p>It's running in a single process, multi-threaded environment under mod_wsgi - and I'm not sure if I've properly configured SQLAlchemy's scoped_session.</p>
<p>I use a request container that is assigned to each incoming request, which sets up the session and cleans it up. (I'm assuming each request in Django is on it's own thread.)</p>
<pre><code># scoped_session as a global variable
# I get constant errors if pool_size = 20 for some reason
Engine = create_engine(host, pool_recycle=600, pool_size=10, connect_args=options)
Session = scoped_session(sessionmaker(autoflush=True, bind=Engine))
RUNNING_DEVSERVER = (len(sys.argv) > 1 and sys.argv[1] == 'runserver') # Session.remove() fails in dev
# Created in my API, once per request (per thread)
class RequestContainer(object):
def __init__(self, request, *args, **kwargs):
self.s = Session()
def safe_commit(self):
try:
self.s.commit()
except:
self.s.rollback()
raise
def __del__(self):
if self.s:
try:
self.s.commit()
except:
self.s.rollback()
raise
if not RUNNING_DEVSERVER:
Session.remove()
self.s = None
</code></pre>
<p>And the <code>prepared state</code> error pops up in the code, usually in the same place, but not all the time, and sometimes in other places:</p>
<pre><code>...
rs = request_container.s.query(MyTable)
...
if rs.count():
# Error log:
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 3011, in count
return self.from_self(col).scalar()
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2765, in scalar
ret = self.one()
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2736, in one
ret = self.one_or_none()
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2706, in one_or_none
ret = list(self)
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2777, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2798, in _execute_and_instances
close_with_result=True)
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2807, in _get_bind_args
**kw
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2789, in _connection_from_session
conn = self.session.connection(**kw)
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 903, in connection
execution_options=execution_options)
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 908, in _connection_for_bind
engine, execution_options)
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 319, in _connection_for_bind
self._assert_active()
File "/usr/local/lib/python2.7/dist-packages/SQLAlchemy-1.1.0b3-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 201, in _assert_active
"This session is in 'prepared' state; no further "
InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction.
</code></pre>
| 0 | 2016-08-24T14:57:58Z | 39,131,561 | <p>The RequestContainer was being accidentally assigned to a global API interface handler, causing one session to be mis-used among multiple threads, when it was intended to be created per thread.</p>
| 0 | 2016-08-24T19:27:24Z | [
"python",
"mysql",
"django",
"sqlalchemy",
"mod-wsgi"
] |
Replacing multiple strings with regex in python for a file giving truncated string | 39,126,870 | <p>The following python code </p>
<pre><code>import xml.etree.cElementTree as ET
import time
import fileinput
import re
ts = str(int(time.time()))
modifiedline =''
for line in fileinput.input("singleoutbound.xml"):
line = re.sub('OrderName=".*"','OrderName="'+ts+'"', line)
line = re.sub('OrderNo=".*"','OrderNo="'+ts+'"', line)
line = re.sub('ShipmentNo=".*"','ShipmentNo="'+ts+'"', line)
line = re.sub('TrackingNo=".*"','TrackingNo="'+ts+'"', line)
line = re.sub('WaveKey=".*"','WaveKey="'+ts+'"', line)
modifiedline=modifiedline+line
</code></pre>
<p>Returns the modifiedline string with some lines truncated wherever the first match is found</p>
<p>How do I ensure it returns the complete string for each line?</p>
<p>Edit:</p>
<p>I have changed the way I am solving this problem, inspired by Tomalak's answer</p>
<pre><code>import xml.etree.cElementTree as ET
import time
ts = str(int(time.time()))
doc = ET.parse('singleoutbound.xml')
for elem in doc.iterfind('//*'):
if 'OrderName' in elem.attrib:
elem.attrib['OrderName'] = ts
if 'OrderNo' in elem.attrib:
elem.attrib['OrderNo'] = ts
if 'ShipmentNo' in elem.attrib:
elem.attrib['ShipmentNo'] = ts
if 'TrackingNo' in elem.attrib:
elem.attrib['TrackingNo'] = ts
if 'WaveKey' in elem.attrib:
elem.attrib['WaveKey'] = ts
doc.write('singleoutbound_2.xml')
</code></pre>
| 1 | 2016-08-24T15:00:02Z | 39,127,439 | <p><strong>Do not use Regexes for parsing XML if you don't have an important reason for doing so</strong></p>
<p><code>*</code> does greedy matching but what you actually seem to want is <code>*?</code> for not matching until the last <code>"</code> in the line but the next <code>"</code>.</p>
<p>So just replace each <code>*</code> with <code>*?</code> in your cone and you should be fine (apart from the usual do-not-regex-XML-problems).</p>
<p><strong>Edit:</strong></p>
<p>The usual Problem with Regex and XML is that your Regex works fine at first but does not with valid XML from other sources (eg other exporters or even other versions of the same exporter) because there different ways of saying the same thing in XML. Some examples for this are <code><name att="123"></name></code> or <code><name att="123"/></code> being the same as <code><name att='123' /></code> which is the same as this with the <code>123</code> &-quoted - this may be the same as <code><a:name att="123"/></code> or <code><b:name att="123"/></code> depending on namespace-use.</p>
<p>Short:</p>
<p>Actually you cannot be sure that your Regex still works when something that you cannot control changes. </p>
<p>But:</p>
<ul>
<li>Some parsers may produce unexpected results, too in such cases</li>
<li>Some exporters produce bad XML that normal parsers do not understand <em>correctly</em> so - if they cannot be fixed - workarounds like Regexes are needed.</li>
</ul>
| 0 | 2016-08-24T15:29:43Z | [
"python",
"regex",
"xml",
"elementtree"
] |
Replacing multiple strings with regex in python for a file giving truncated string | 39,126,870 | <p>The following python code </p>
<pre><code>import xml.etree.cElementTree as ET
import time
import fileinput
import re
ts = str(int(time.time()))
modifiedline =''
for line in fileinput.input("singleoutbound.xml"):
line = re.sub('OrderName=".*"','OrderName="'+ts+'"', line)
line = re.sub('OrderNo=".*"','OrderNo="'+ts+'"', line)
line = re.sub('ShipmentNo=".*"','ShipmentNo="'+ts+'"', line)
line = re.sub('TrackingNo=".*"','TrackingNo="'+ts+'"', line)
line = re.sub('WaveKey=".*"','WaveKey="'+ts+'"', line)
modifiedline=modifiedline+line
</code></pre>
<p>Returns the modifiedline string with some lines truncated wherever the first match is found</p>
<p>How do I ensure it returns the complete string for each line?</p>
<p>Edit:</p>
<p>I have changed the way I am solving this problem, inspired by Tomalak's answer</p>
<pre><code>import xml.etree.cElementTree as ET
import time
ts = str(int(time.time()))
doc = ET.parse('singleoutbound.xml')
for elem in doc.iterfind('//*'):
if 'OrderName' in elem.attrib:
elem.attrib['OrderName'] = ts
if 'OrderNo' in elem.attrib:
elem.attrib['OrderNo'] = ts
if 'ShipmentNo' in elem.attrib:
elem.attrib['ShipmentNo'] = ts
if 'TrackingNo' in elem.attrib:
elem.attrib['TrackingNo'] = ts
if 'WaveKey' in elem.attrib:
elem.attrib['WaveKey'] = ts
doc.write('singleoutbound_2.xml')
</code></pre>
| 1 | 2016-08-24T15:00:02Z | 39,138,529 | <p>Here is how to use ElementTree to make modifications to an XML file without accidentally breaking it:</p>
<pre><code>import xml.etree.cElementTree as ET
import time
ts = str(int(time.time()))
doc = ET.parse('singleoutbound.xml')
for elem in doc.iterfind('//*[@OrderName]'):
elem.attrib['OrderName'] = ts
# and so on
doc.write('singleoutbound_2.xml')
</code></pre>
<p>Things to understand:</p>
<ul>
<li>XML represents a tree-shaped data structure that consists of elements, attributes and values, among other things. Treating it as line-based plain text fails to recognize this fact.</li>
<li>There is a language to select items from that tree of data, called XPath. It's powerful and not difficult to learn. Learn it. I've used <code>//*[@OrderName]</code> above to find all elements that have an <code>OrderName</code> attribute.</li>
<li>Trying to modify the document tree with improper tools like string replace and regular expressions will lead to more complex and hard-to-maintain code. You will encounter run-time errors for completely valid input that your regex has no special case for, character encoding issues and silent errors that are only caught when someone looks at your program's output. In other words: It's the wrong thing to do, so don't do it.</li>
<li>The above code is actually simpler and much easier to reason about and extend than your code.</li>
</ul>
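<p>As a self-contained illustration of the attribute-rewriting idea using an in-memory document (the element and attribute names are just placeholders; note that when searching from an <code>Element</code> rather than an <code>ElementTree</code>, the path must start with <code>.//</code>, and the root element itself has to be checked separately):</p>

```python
import xml.etree.ElementTree as ET

xml = '<Order OrderName="old"><Item OrderName="old"/><Item/></Order>'
root = ET.fromstring(xml)

# './/*[@OrderName]' selects all descendants carrying the attribute
matches = [root] if 'OrderName' in root.attrib else []
matches += root.findall('.//*[@OrderName]')
for elem in matches:
    elem.attrib['OrderName'] = '123'

print(ET.tostring(root))
```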
| 1 | 2016-08-25T06:51:59Z | [
"python",
"regex",
"xml",
"elementtree"
] |
Why is Autobahn Twisted Websocket server is not completing handshake with javascript client? | 39,126,916 | <p>I'm trying to use a twisted websocket server and connect it to a javascript client on localhost, not through a network. The server and client see each other but they can't complete the handshake. Yes I'm using a Hixie-76 wrapper provided by txWS because of a systems requirement. </p>
<p>I'm stumped on why they can't connect?</p>
<p>Version: autobahn 0.16, twisted 0.16.3</p>
<p>This is an actual example of what I'm trying to achieve:
<a href="https://github.com/crossbario/autobahn-python/tree/master/examples/twisted/websocket/echo" rel="nofollow">https://github.com/crossbario/autobahn-python/tree/master/examples/twisted/websocket/echo</a> using the server.py and client.html</p>
<p>Log:</p>
<pre><code>2016-08-24 15:44:33+0100 [-] Log opened.
2016-08-24 15:44:33+0100 [-] WebSocketServerFactory (WebSocketFactory) starting on 8080
2016-08-24 15:44:33+0100 [-] Starting factory <autobahn.twisted.websocket.WebSocketServerFactory object at 0x00000000031DB7F0>
2016-08-24 15:44:33+0100 [-] Starting factory <txws.WebSocketFactory instance at 0x0000000002FA82C8>
2016-08-24 15:44:34+0100 [MyServerProtocol (WebSocketProtocol),0,127.0.0.1] Starting HyBi-00/Hixie-76 handshake
2016-08-24 15:44:34+0100 [MyServerProtocol (WebSocketProtocol),0,127.0.0.1] Completed HyBi-00/Hixie-76 handshake
2016-08-24 15:44:39+0100 [-] WebSocket connection closed:
2016-08-24 15:44:39+0100 [-] False
2016-08-24 15:44:39+0100 [-] 1006
2016-08-24 15:44:39+0100 [-] connection was closed uncleanly (peer did not finish (in time) the opening handshake)
</code></pre>
<p>Python Class:</p>
<pre><code>from txws import WebSocketFactory
from twisted.internet import reactor
from twisted.python import log
from autobahn.twisted.websocket import WebSocketServerFactory
from autobahn.twisted.websocket import WebSocketServerProtocol
import json
import sys
class MyServerProtocol(WebSocketServerProtocol):
def onMessage(self, payload, isBinary):
print "Message Received!!!!"
msg = json.dumps({'status':'PLEASE WORK'})
self.sendMessage(msg, isBinary=False)
def onClose(self, wasClean, code, reason):
print "WebSocket connection closed: "
print str(wasClean)
print str(code)
print str(reason)
def make_server():
print 'Making ws server'
log.startLogging(sys.stdout)
factory = WebSocketServerFactory("ws://127.0.0.1:8080")
factory.protocol = MyServerProtocol
reactor.listenTCP(8080, WebSocketFactory(factory)) #txWS WebSocketFactory wrapper
reactor.run()
</code></pre>
<p>Javascript:</p>
<pre><code>function ConnectWebSocket() {
websocket = new WebSocket('ws://127.0.0.1:8080');
websocket_open = true;
websocket.onopen = function(e) {
console.log('opened');
console.log(e);
websocket.send('slow test');
};
websocket.onclose = function(e) {
console.log("Connection closed.");
websocket_open = false;
websocket = null;
ConnectWebSocket();
};
websocket.onmessage = function(e) {
console.log('message');
console.log(e.data);
};
websocket.onerror = function(e) {
console.log('error');
console.log(e);
};
}
ConnectWebSocket();
</code></pre>
| 1 | 2016-08-24T15:02:02Z | 39,131,156 | <p>Okay I found the issue. As Michael S suggested it is probably in the Hixie-76 wrapper, it was. The dev must have let that protocol slip over time and it no longer is working. I could confirm this by tracing it back in the code. I will report it back to the dev of txWS.</p>
<p>I found an alternate solution to the Hixie-76 problem. I switched wrappers to txWebSockets at <a href="https://github.com/gleicon/txwebsockets" rel="nofollow">https://github.com/gleicon/txwebsockets</a>. Not as elegant a solution but it now works.</p>
| 0 | 2016-08-24T19:01:49Z | [
"javascript",
"python",
"websocket",
"twisted",
"autobahn"
] |
Appending several panda dataframes are not working | 39,126,941 | <p>I have this code</p>
<pre><code>import os
import pandas as pd
path = r'c:\Temp\factory'
os.chdir(path)
files = os.listdir()
files_csv = [f for f in files if f[-3:] == 'csv']
x = pd.DataFrame()
for f in files_csv:
data = pd.read_csv(f, sep=';', encoding='latin-1')
x = x.append(data, ignore_index=True)
</code></pre>
<p>I have used the same code before to concatenate CSV files but now it just does not work. </p>
<p>The problem i face is that only the content of one file makes it to the dataframe by name x.</p>
<p>I know i process all files and i expect the x dataframe to contain in total about 10000 rows but i only get the content of one file aproximatley 2000 rows.</p>
<p>My files typically looks like this:</p>
<pre><code>Computer;Managed by;Given Name
cp1;user1;olle
cp2;user2;niklas
cp3;user3;kalle
</code></pre>
| -1 | 2016-08-24T15:03:15Z | 39,127,216 | <p>I've needed to do something similar before. My solution would be:</p>
<pre><code>x = pd.DataFrame()
for f in files_csv:
data = pd.read_csv(f, sep=';', encoding='latin-1')
    x = pd.concat([x, data], ignore_index=True)
</code></pre>
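<p>A variant of the same idea that I find more robust: collect each file's frame in a list and concatenate once at the end. The print is just a quick sanity check that every file is actually read and how many rows it contributes (the folder path is the one from the question):</p>

```python
import glob
import os

import pandas as pd

def load_folder(path):
    """Read every ;-separated CSV in *path* and stack the frames row-wise."""
    frames = []
    for f in sorted(glob.glob(os.path.join(path, '*.csv'))):
        df = pd.read_csv(f, sep=';', encoding='latin-1')
        print(f, len(df))  # each file should show up here with its row count
        frames.append(df)
    # Single concatenation at the end; ignore_index renumbers the rows 0..n-1.
    return pd.concat(frames, ignore_index=True)

# x = load_folder(r'c:\Temp\factory')
```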
| 0 | 2016-08-24T15:17:28Z | [
"python",
"pandas"
] |
s3cmd nodename nor servname provided, or not known | 39,127,002 | <p>I'm trying to access objects in my S3 bucket with s3cmd using path-style URLs. This is no problem with the Java SDK, for example:</p>
<pre><code>s3Client.setS3ClientOptions(S3ClientOptions.builder()
.setPathStyleAccess(true).build());
</code></pre>
<p>I want to do the same with s3cmd. I have set this up in my s3conf file:</p>
<pre><code>host_base = s3.eu-central-1.amazonaws.com
host_bucket = s3.eu-central-1.amazonaws.com/%(bucket)s
</code></pre>
<p>This works for bucket listing with:</p>
<pre><code>$ s3cmd ls
2016-08-24 12:36 s3://test
</code></pre>
<p>When trying to list all objects of a bucket I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/s3cmd", line 2919, in <module>
rc = main()
File "/usr/local/bin/s3cmd", line 2841, in main
rc = cmd_func(args)
File "/usr/local/bin/s3cmd", line 120, in cmd_ls
subcmd_bucket_list(s3, uri)
File "/usr/local/bin/s3cmd", line 153, in subcmd_bucket_list
response = s3.bucket_list(bucket, prefix = prefix)
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 297, in bucket_list
for dirs, objects in self.bucket_list_streaming(bucket, prefix, recursive, uri_params):
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 324, in bucket_list_streaming
response = self.bucket_list_noparse(bucket, prefix, recursive, uri_params)
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 343, in bucket_list_noparse
response = self.send_request(request)
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 1081, in send_request
conn = ConnMan.get(self.get_hostname(resource['bucket']))
File "/usr/local/lib/python2.7/site-packages/S3/ConnMan.py", line 192, in get
conn.c.connect()
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 836, in connect
self.timeout, self.source_address)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 557, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
gaierror: [Errno 8] nodename nor servname provided, or not known
</code></pre>
| 0 | 2016-08-24T15:06:32Z | 39,552,587 | <p>Assuming that there is no other issue with your configuration, the value that you used for "host_bucket" is wrong.</p>
<p>It should be:</p>
<pre><code>host_bucket = %(bucket)s.s3.eu-central-1.amazonaws.com
</code></pre>
<p>or</p>
<pre><code>host_bucket = s3.eu-central-1.amazonaws.com
</code></pre>
<p>The second one will force "path style" to be used. But if you are using Amazon S3 with the first host_bucket value that I propose, s3cmd will automatically use dns-based or path-based buckets depending on which characters you use in your bucket name.</p>
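<p>To make the difference concrete, here is a rough sketch (illustrative only, not s3cmd's actual code; the function name is mine) of how the two addressing styles turn a bucket name into a URL:</p>

```python
def bucket_endpoint(bucket, region='eu-central-1', path_style=False):
    # Build the regional S3 endpoint hostname.
    host = 's3.%s.amazonaws.com' % region
    if path_style:
        # path-based: the bucket name goes into the URL path
        return 'https://%s/%s' % (host, bucket)
    # dns-based (virtual-hosted): the bucket name becomes part of the hostname
    return 'https://%s.%s' % (bucket, host)

print(bucket_endpoint('test'))                   # https://test.s3.eu-central-1.amazonaws.com
print(bucket_endpoint('test', path_style=True))  # https://s3.eu-central-1.amazonaws.com/test
```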
<p>Is there a particular reason why you want to use only the path-based style?</p>
| 0 | 2016-09-17T23:10:28Z | [
"python",
"amazon-web-services",
"amazon-s3",
"s3cmd"
] |
Unable to create a functional executable with PyInstaller and PyQt | 39,127,097 | <p>For a long time now, I have been trying to create an executable for a Python project. In this project, I need to use:</p>
<ol>
<li>PyQt(4) : for my GUI,</li>
<li>PySerial : to communicate with an arduino,</li>
<li>Subprocess : to launch some avr things with a .bat file</li>
</ol>
<p>In fact, the executable is created, but when I try to start it nothing happens, except that my mouse cursor tells me it is busy.</p>
<p>So I tried to figure out where the problem could be, by writing a basic program which condenses every function I need for my project. Everything works when I launch it from Python (3.5), but not when I execute the file generated by PyInstaller. (The interface.py file is <a href="http://pastebin.com/CJNXH6sk" rel="nofollow">here, in a pastebin.com file</a>, if you want; I thought it's not very relevant: it's only a form with a pushbutton.)</p>
<pre><code>from PyQt4 import QtGui
from interface import Ui_Form
import serial
import subprocess
import sys, os
class win(QtGui.QWidget, Ui_Form):
"""docstring for win"""
def __init__(self):
super(win, self).__init__()
self.setupUi(self)
self.ser = serial.Serial("COM3", 9600)
self.pathBat = "cmd.bat"
def on_pushButton_clicked(self):
#if (self.ser.isOpen() and self.serAvr.isOpen()):
if True:
self.ser.write("start".encode())
p = subprocess.call(self.pathBat, creationflags=subprocess.CREATE_NEW_CONSOLE, **self.subprocess_args())
if p == 1:
self.writeLog("Works")
self.ser.write("stop".encode())
#self.writeLog(p.returncode)
    def subprocess_args(self, include_stdout=True):
# The following is true only on Windows.
if hasattr(subprocess, 'STARTUPINFO'):
# On Windows, subprocess calls will pop up a command window by default
# when run from Pyinstaller with the ``--noconsole`` option. Avoid this
# distraction.
si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
# Windows doesn't search the path by default. Pass it an environment so
# it will.
env = os.environ
else:
si = None
env = None
ret = {}
# On Windows, running this from the binary produced by Pyinstaller
# with the ``--noconsole`` option requires redirecting everything
# (stdin, stdout, stderr) to avoid an OSError exception
# "[Error 6] the handle is invalid."
ret.update({'stdin': subprocess.PIPE,
'stderr': subprocess.PIPE,
'startupinfo': si,
'env': env })
return ret
app = QtGui.QApplication(sys.argv)
v = win()
v.show()
sys.exit(app.exec_())
</code></pre>
<p>I added "cmd.bat" to the data in the .spec file for PyInstaller, and the function <em>subprocess_args</em> is there to avoid problems with subprocess (as mentioned in the documentation <a href="https://github.com/pyinstaller/pyinstaller/wiki/Recipe-subprocess" rel="nofollow">here</a>).</p>
<p>At first I thought the problem was linked to subprocess, so I tried to delete all references to it, but it still did not work. Same for serial. Moreover, I tried to debug the executable by setting <code>debug = True</code> in the .spec file, but when I execute the file from the console, nothing happens at all; it stays blocked on the first line.</p>
<p>If anybody can help, thank you in advance!</p>
| 0 | 2016-08-24T15:11:29Z | 39,146,888 | <p>Maybe the "frozen" application does not find the "cmd.bat"!? You could test it by replacing it with the absolute path.</p>
<p>Your executable unpacks the "cmd.bat" in a temporary folder accessible in python with <code>sys._MEIPASS</code>. You should find your files with something like <code>os.path.join(sys._MEIPASS, "cmd.bat")</code> !?</p>
<p>In case you need it: <code>getattr(sys, 'frozen', False)</code> indicates whether your code is frozen or not (but only for PyInstaller).</p>
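<p>Putting those two pieces together, a small helper along these lines works both frozen and unfrozen (a sketch; <code>resource_path</code> is just a name I chose):</p>

```python
import os
import sys

def resource_path(relative):
    # Under PyInstaller's one-file mode, bundled data files are unpacked into a
    # temporary folder exposed as sys._MEIPASS; when running unfrozen, fall
    # back to the current working directory.
    base = getattr(sys, '_MEIPASS', os.path.abspath('.'))
    return os.path.join(base, relative)

# e.g. in __init__:  self.pathBat = resource_path("cmd.bat")
```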
| 0 | 2016-08-25T13:37:26Z | [
"python",
"qt",
"python-3.x",
"subprocess",
"pyinstaller"
] |