title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Python: Comparing the speed of NumPy and SymPy ufuncified functions | 39,118,096 | <p>I just wrote some code to compare the calculation speed of a function written with <code>numpy</code> against a function that uses <code>ufuncify</code> from <code>sympy</code>:</p>
<pre><code>import numpy as np
from sympy import symbols, Matrix
from sympy.utilities.autowrap import ufuncify

u,v,e,a1,a0 = symbols('u v e a1 a0')
dudt = u-u**3-v
dvdt = e*(u-a1*v-a0)
p = {'a1':0.5,'a0':1.5,'e':0.1}
eqs = Matrix([dudt,dvdt])
numeqs = eqs.subs([(a1,p['a1']),(a0,p['a0']),(e,p['e'])])
print eqs
print numeqs
dudt = ufuncify([u,v],numeqs[0])
dvdt = ufuncify([u,v],numeqs[1])

def syrhs(u,v):
    return dudt(u,v),dvdt(u,v)

def nprhs(u,v,p):
    dudt = u-u**3-v
    dvdt = p['e']*(u-p['a1']*v-p['a0'])
    return dudt,dvdt

def compare(n=10000):
    import time
    timer_np = 0
    timer_sy = 0
    error = np.zeros(n)
    for i in range(n):
        u = np.random.random((128,128))
        v = np.random.random((128,128))
        start_time = time.time()
        npcalc = np.ravel(nprhs(u,v,p))
        mid_time = time.time()
        sycalc = np.ravel(syrhs(u,v))
        end_time = time.time()
        timer_np += (mid_time-start_time)
        timer_sy += (end_time-mid_time)
        error[i] = np.max(np.abs(npcalc-sycalc))
    print "Max difference is ", np.max(error), ", and mean difference is ", np.mean(error)
    print "Average speed for numpy ", timer_np/float(n)
    print "Average speed for sympy ", timer_sy/float(n)
</code></pre>
<p>On my machine the result is:</p>
<pre><code>In [21]: compare()
Max difference is 5.55111512313e-17 , and mean difference is 5.55111512313e-17
Average speed for numpy 0.00128133814335
Average speed for sympy 0.00127074036598
</code></pre>
<p>Any suggestions on how to make either of the above functions faster are welcome!</p>
| -1 | 2016-08-24T08:24:41Z | 39,121,609 | <p>After further exploration it seems that <code>ufuncify</code> and regular <code>numpy</code> functions give more or less the same speed of computation. Using <code>numba</code> or printing to a <code>theano</code> function did not result in faster code. So the remaining options for making things faster are either <code>cython</code> or wrapping <code>C</code> or <code>FORTRAN</code> code.</p>
| 0 | 2016-08-24T11:06:27Z | [
"python",
"python-2.7",
"numpy",
"sympy"
] |
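A follow-up note on the benchmarking itself: the manual <code>time.time()</code> bookkeeping in the question is easy to get subtly wrong, and Python's <code>timeit</code> module handles clock selection and repetition for you. Below is a minimal Python 3 sketch of the same measurement for the plain-NumPy right-hand side; only <code>nprhs</code> is reproduced from the question, the timing harness itself is illustrative:

```python
import timeit
import numpy as np

p = {'a1': 0.5, 'a0': 1.5, 'e': 0.1}

def nprhs(u, v, p):
    # plain-NumPy right-hand side, as defined in the question
    dudt = u - u**3 - v
    dvdt = p['e'] * (u - p['a1'] * v - p['a0'])
    return dudt, dvdt

u = np.random.random((128, 128))
v = np.random.random((128, 128))

# timeit repeats the call and uses the best available clock for the platform
per_call = timeit.timeit(lambda: nprhs(u, v, p), number=100) / 100
print("Average seconds per call:", per_call)
```

The same harness can time the `ufuncify`-generated pair for a like-for-like comparison.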
python type error after third time in loop in line """print (seql, seq)""" following line with """seql = len(seq)""" | 39,118,135 | <pre><code>>>> run
Python 3.5.1 (default, Dec 2015, 13:05:11)
[GCC 4.8.2] on linux
1 [1]
2 [1, 1]
Traceback (most recent call last):
File "python", line 45, in <module>
File "python", line 38, in conwayseq
File "python", line 10, in newseq
TypeError: object of type 'NoneType' has no len()
</code></pre>
<p>Above you can see that it looped a couple of times correctly
before giving the runtime error. PS: sorry for the sloppy code...
I am new to Python and trying to make it easy on myself. </p>
<p>Thanks for any suggestions and help. </p>
<pre><code># conway sequence coding practice
# self learning project
import sys
import math

def newseq(seq: list) -> list:
    p1, p2 = 1, 1
    nseq = []
    k = 0
    seql = (len(seq))
    print (seql, seq)  # this is the line that gives the error on third pass
    if seql == 1:
        return [1,seq[0]]
    else:
        while p1 < seql:
            p2 += 1
            if p2 >= seql:
                nseq.append(k)
                nseq.append(seq[p1])
                break
            if seq[p1] == seq[p2]:
                k += 1
                p2 += 1
            else:
                nseq.append(k)
                nseq.append(seq[p1])
                p1 = p2 + 1
                p2 = p1
                k = 0
        return

def conwayseq(line, seq: list) -> list :
    nseq = []
    nseq = nseq + seq
    for n in range(line):
        nseq = newseq(nseq)
    return (nseq)

# print (newseq([1]))
print (conwayseq(4,[1]))
</code></pre>
| 0 | 2016-08-24T08:26:23Z | 39,118,441 | <p>The issue is with the logic of your code. Check the line below:</p>
<pre><code>nseq = newseq(nseq)
</code></pre>
<p>The <code>while</code> branch of <code>newseq()</code> ends in a bare <code>return</code>, so the function returns <code>None</code> (<em>the default return value</em>) and <code>nseq</code> gets set to <code>None</code>. The next time your <code>for</code> loop calls <code>newseq</code>, it is called as <code>newseq(None)</code>. Hence, the <code>TypeError</code> is raised at:</p>
<pre><code>seql = (len(seq))
# Raises "TypeError: object of type 'NoneType' has no len()"
# Since, 'seq' is passed as 'None'
</code></pre>
| 0 | 2016-08-24T08:41:42Z | [
"python",
"types"
] |
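For reference, the diagnosis above can be confirmed with a corrected look-and-say step. This is a rewrite using `itertools.groupby`, not the poster's algorithm — the key point is that every code path returns `nseq` instead of falling through to a bare `return` (Python 3):

```python
from itertools import groupby

def newseq(seq):
    """One look-and-say step: [1] -> [1, 1] -> [2, 1] -> [1, 2, 1, 1] -> ..."""
    nseq = []
    for value, run in groupby(seq):
        nseq.append(len(list(run)))  # how many times the value repeats
        nseq.append(value)
    return nseq  # returning the list is the fix; a bare `return` yields None

def conwayseq(steps, seq):
    for _ in range(steps):
        seq = newseq(seq)
    return seq

print(conwayseq(4, [1]))  # -> [1, 1, 1, 2, 2, 1]
```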
Django form in the base template. How to display validation error? | 39,118,136 | <p>I use Django 1.8.14. I have a <code>Search</code> form on every page of my website. I pass the <code>Search</code> form to the base template through a context processor. Each time, the form sends data to the <code>/search/</code> view. And there is a problem: Django raises a <code>ValidationError</code> on the form, but it isn't displayed anywhere. What is the correct way to display form errors in the template, when the form is passed to the base template through a context processor and sends data to one view?</p>
<p><strong>form.py</strong>:</p>
<pre><code>class SearchForm(forms.Form):
    search = forms.CharField(required=True,
                             max_length=255,
                             widget=forms.TextInput({'class':'form-control', 'type':'search', 'required':''})
                             )

    def clean_search(self):
        search = self.cleaned_data.get("search")
        regex = myregex
        if not re.match(regex, search):
            print("ValidationError")
            raise forms.ValidationError(u'Please enter a valid value')
        return search
</code></pre>
<p>context processor:</p>
<pre><code>from myproject.forms import SearchForm

def form_context(request):
    context_dict = {}
    context_dict['search_form'] = SearchForm()
    return(context_dict)
</code></pre>
<p>my base template:</p>
<pre><code><form method="post" action="/search/">
    {% csrf_token %}
    {{ search_form.non_field_errors }}
    {{ search_form.errors }}
    {{ search_form.search }}
    {{ search_form.search.errors }}
    <button type="submit" class="btn btn-find">Search</button>
</form>
</code></pre>
<p>my search view:</p>
<pre><code>def search(request, template):
    if request.method == 'POST':
        search_form = SearchForm(request.POST)
        if search_form.is_valid():
            domen = search_form.cleaned_data['search']
            try:
                site = SitePage.objects.get(domen=domen)
                path = "/" + site.domen + "/"
                return HttpResponseRedirect(path)
            except:
                site = None
        else:
            print search_form.errors
    return render(request, template, context_dict)
</code></pre>
| 0 | 2016-08-24T08:26:26Z | 39,118,312 | <p>You need to pass the form once it is bound to the data (the POST request) and validated; right now your context only has a blank form which is why no errors are being displayed.</p>
<pre><code>from django.shortcuts import redirect

def search(request, template):
    search_form = SearchForm(request.POST or None, request.FILES or None)
    if search_form.is_valid():
        domen = search_form.cleaned_data['search']
        site = SitePage.objects.filter(domen=domen).first()
        if site is not None:
            return redirect('/{}/'.format(site.domen))
    return render(request, template, {'form': search_form})
</code></pre>
| 1 | 2016-08-24T08:35:41Z | [
"python",
"django",
"python-2.7",
"django-forms"
] |
Django form in the base template. How to display validation error? | 39,118,136 | <p>I use Django 1.8.14. I have a <code>Search</code> form on every page of my website. I pass the <code>Search</code> form to the base template through a context processor. Each time, the form sends data to the <code>/search/</code> view. And there is a problem: Django raises a <code>ValidationError</code> on the form, but it isn't displayed anywhere. What is the correct way to display form errors in the template, when the form is passed to the base template through a context processor and sends data to one view?</p>
<p><strong>form.py</strong>:</p>
<pre><code>class SearchForm(forms.Form):
    search = forms.CharField(required=True,
                             max_length=255,
                             widget=forms.TextInput({'class':'form-control', 'type':'search', 'required':''})
                             )

    def clean_search(self):
        search = self.cleaned_data.get("search")
        regex = myregex
        if not re.match(regex, search):
            print("ValidationError")
            raise forms.ValidationError(u'Please enter a valid value')
        return search
</code></pre>
<p>context processor:</p>
<pre><code>from myproject.forms import SearchForm

def form_context(request):
    context_dict = {}
    context_dict['search_form'] = SearchForm()
    return(context_dict)
</code></pre>
<p>my base template:</p>
<pre><code><form method="post" action="/search/">
    {% csrf_token %}
    {{ search_form.non_field_errors }}
    {{ search_form.errors }}
    {{ search_form.search }}
    {{ search_form.search.errors }}
    <button type="submit" class="btn btn-find">Search</button>
</form>
</code></pre>
<p>my search view:</p>
<pre><code>def search(request, template):
    if request.method == 'POST':
        search_form = SearchForm(request.POST)
        if search_form.is_valid():
            domen = search_form.cleaned_data['search']
            try:
                site = SitePage.objects.get(domen=domen)
                path = "/" + site.domen + "/"
                return HttpResponseRedirect(path)
            except:
                site = None
        else:
            print search_form.errors
    return render(request, template, context_dict)
</code></pre>
| 0 | 2016-08-24T08:26:26Z | 39,122,019 | <p>If Django is raising the <code>ValidationError</code> the way you want it to, and all you need is to <strong>display</strong> that validation error in your <code>html templates</code>, then I suppose what you are looking for is the <strong>Django messages framework</strong>.</p>
<p>See the official documentation for the same -> <a href="https://docs.djangoproject.com/en/1.8/ref/contrib/messages/" rel="nofollow">https://docs.djangoproject.com/en/1.8/ref/contrib/messages/</a> </p>
| 1 | 2016-08-24T11:25:11Z | [
"python",
"django",
"python-2.7",
"django-forms"
] |
wsdl file parsing results in 'Unable to resolve type {http://schemas.xmlsoap.org/soap/encoding/}Array.' | 39,118,241 | <p>First of all, I don't understand the XML and can't figure out what problem I am having. I tried a couple of Python libraries, but most of them resulted in this error.
For the current setup I am using the 'zeep' Python library, validating the file with the command:</p>
<pre><code>python -mzeep ss.xml
</code></pre>
<p>I am getting this error:</p>
<pre><code>> zeep.wsdl.wsdl: Creating definition for ss.xml
zeep.wsdl.wsdl: Adding message: {urn:EngineSoap}Mailing_getStatistics
Traceback (most recent call last):
....
File "/usr/lib/python2.7/site-packages/zeep/xsd/schema.py", line 100, in get_type
) % (qname.text, qname.namespace))
KeyError: u"Unable to resolve type {http://schemas.xmlsoap.org/soap/encoding/}Array. No schema available for the namespace u'http://schemas.xmlsoap.org/soap/encoding/'."
</code></pre>
<p>and the xml file is:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<definitions name="EngineSoap" targetNamespace="urn:EngineSoap" xmlns:typens="urn:EngineSoap" xmlns:urn="EngineSoap"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns="http://schemas.xmlsoap.org/wsdl/">
<message name="Mailing_getStatistics">
<part name="mailingID" type="xsd:int"/>
<part name="periodFrom" type="xsd:string"/>
<part name="periodTill" type="xsd:string"/>
<part name="mlid" type="xsd:int"/>
</message>
<message name="Mailing_getStatisticsResponse">
<part name="Mailing_getStatisticsReturn" type="soapenc:Array"/>
</message>
<message name="Mailing_getStatisticsPerLink">
<part name="mailingID" type="xsd:int"/>
<part name="outlink" type="xsd:boolean"/>
<part name="mlid" type="xsd:int"/>
</message>
<message name="Mailing_getStatisticsPerLinkResponse">
<part name="Mailing_getStatisticsPerLinkReturn" type="soapenc:Array"/>
</message>
<binding name="EngineSoapBinding" type="typens:EngineSoapPortType">
<soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="Mailing_getStatistics">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailing_getStatisticsPerLink">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailing_createFromContent">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailing_createFromTemplate">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailing_createFromURL">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailinglist_all">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailinglist_getUnsubscriptions">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailinglist_getUnsubscriptionsAsCSV">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailinglist_select">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailinglist_validateTechnicalSettings">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getExtraFields -->
<operation name="Mailinglist_getExtraFields">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getSubscribersCount -->
<operation name="Mailinglist_getSubscribersCount">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getSubscribers -->
<operation name="Mailinglist_getSubscribers">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getSubscribersCountSince -->
<operation name="Mailinglist_getSubscribersCountSince">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getSubscribersSince -->
<operation name="Mailinglist_getSubscribersSince">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getStatisticsPerCampaign -->
<operation name="Mailinglist_getStatisticsPerCampaign">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<!-- Mailinglist_getStatisticsPerSource -->
<operation name="Mailinglist_getStatisticsPerSource">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Mailinglist_getLabels">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Customer_getBouncesForRelay">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_getByEmail">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_getByUniqueID">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_sendMailingToSubscribers">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_sendMailingToSubscribersFromCSV">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_sendMailingToSubscribersFromURL">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_sendMailingToSubscriberWithAttachment">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_set">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_unsubscribe">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_temporaryUnsubscribeByEmail">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_temporaryUnsubscribeByUniqueID">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_assignLabelWeightByEmail">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_assignLabelWeightByUniqueID">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_processLeadByEmail">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
<operation name="Subscriber_processLeadByUniqueID">
<soap:operation soapAction="urn:EngineSoapAction"/>
<input>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</input>
<output>
<soap:body namespace="urn:EngineSoap" use="encoded" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
</output>
</operation>
</binding>
<service name="EngineSoapService">
<port name="EngineSoapPort" binding="typens:EngineSoapBinding">
<soap:address location="http://xxxx/soap/server.live.php"/>
</port>
</service>
</definitions>
</code></pre>
| 1 | 2016-08-24T08:31:36Z | 39,123,839 | <p>For anyone who is having the same problem (and it's a known problem): the WSDL needs to import the SOAP encoding schema, but it doesn't. See the snippet below and add the <code><types></code> declaration to the file/response of the server:</p>
<pre><code><definitions targetNamespace="TARGET_NAMESPACE" ...>
    <types>
        <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="TARGET_NAMESPACE">
            <import namespace="http://schemas.xmlsoap.org/soap/encoding/" schemaLocation="http://schemas.xmlsoap.org/soap/encoding/" />
        </schema>
    </types>
    .....
</definitions>
</code></pre>
| 0 | 2016-08-24T12:51:03Z | [
"python",
"xml",
"soap",
"wsdl"
] |
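When the server's WSDL cannot be changed, a common workaround is to download it, splice the missing types import in locally, and point the SOAP client (e.g. zeep) at the patched copy. Below is a minimal sketch of the splicing step using plain string handling; the sample WSDL text and the helper name are illustrative, not part of the original post:

```python
PATCH = """<types>
  <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:EngineSoap">
    <import namespace="http://schemas.xmlsoap.org/soap/encoding/"
            schemaLocation="http://schemas.xmlsoap.org/soap/encoding/"/>
  </schema>
</types>
"""

def patch_wsdl(wsdl_text):
    # splice the <types> block in right after the opening <definitions ...> tag
    start = wsdl_text.index('<definitions')
    end = wsdl_text.index('>', start) + 1
    return wsdl_text[:end] + '\n' + PATCH + wsdl_text[end:]

# illustrative stand-in for the downloaded WSDL
wsdl = ('<?xml version="1.0"?>'
        '<definitions name="EngineSoap" targetNamespace="urn:EngineSoap">'
        '<message name="Mailing_getStatistics"/></definitions>')
patched = patch_wsdl(wsdl)
print(patched)
```

The patched text can then be written to a local file and loaded with the client of your choice.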
Resample a 'tidy' dataframe with pandas | 39,118,308 | <p>I have timestamped data with two columns of interest: a 'label' and a count. I would like to create a time series with the sums per label per, say, day. Can I use <code>resample</code> to achieve this?</p>
<p>Concrete example:</p>
<pre><code>import pandas as pd
import numpy as np
from itertools import cycle
idx = pd.date_range('2016-01-01', '2016-01-07', freq='H')
n = np.random.randint(10, size=24*6+1)
lst = [(l,c) for l,c in zip(cycle(['foo', 'bar']), n)]
df = pd.DataFrame(lst, index=idx, columns=['label', 'n'])
df.resample(???).sum()
</code></pre>
<p>For this example, the target data frame should contain a time index and two columns (<code>foo</code> and <code>bar</code>) containing the total counts per interval.</p>
| 2 | 2016-08-24T08:35:25Z | 39,118,413 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.resample.html" rel="nofollow"><code>DataFrameGroupBy.resample</code></a>:</p>
<pre><code>print (df.groupby('label')
         .resample('1D')
         .sum()
         .reset_index()
         .rename(columns={'level_1':'date'}))
label date n
0 bar 2016-01-01 44
1 bar 2016-01-02 60
2 bar 2016-01-03 65
3 bar 2016-01-04 51
4 bar 2016-01-05 37
5 bar 2016-01-06 59
6 foo 2016-01-01 40
7 foo 2016-01-02 69
8 foo 2016-01-03 58
9 foo 2016-01-04 55
10 foo 2016-01-05 67
11 foo 2016-01-06 59
12 foo 2016-01-07 5
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> for working with <code>datetimeindex</code>:</p>
<pre><code>print (df.set_index('label', append=True)
         .unstack(1)
         .resample('1D')
         .sum()
         .stack()
         .reset_index()
         .rename(columns={'level_0':'date'}))
date label n
0 2016-01-01 bar 44.0
1 2016-01-01 foo 40.0
2 2016-01-02 bar 60.0
3 2016-01-02 foo 69.0
4 2016-01-03 bar 65.0
5 2016-01-03 foo 58.0
6 2016-01-04 bar 51.0
7 2016-01-04 foo 55.0
8 2016-01-05 bar 37.0
9 2016-01-05 foo 67.0
10 2016-01-06 bar 59.0
11 2016-01-06 foo 59.0
12 2016-01-07 foo 5.0
</code></pre>
<p>If need two columns:</p>
<pre><code>df1 = df.set_index('label', append=True).unstack(1).resample('1D').sum()
df1.columns = df1.columns.droplevel(0)
print (df1)
label bar foo
2016-01-01 61.0 65.0
2016-01-02 54.0 56.0
2016-01-03 70.0 53.0
2016-01-04 46.0 49.0
2016-01-05 61.0 49.0
2016-01-06 50.0 55.0
2016-01-07 NaN 6.0
</code></pre>
| 2 | 2016-08-24T08:40:33Z | [
"python",
"pandas"
] |
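Another alternative worth mentioning: `pivot_table` with `pd.Grouper` produces the two-column `foo`/`bar` layout in a single call. A sketch using data generated as in the question (the counts are random, so the printed numbers will differ from run to run):

```python
import numpy as np
import pandas as pd
from itertools import cycle

idx = pd.date_range('2016-01-01', '2016-01-07', freq='h')
n = np.random.randint(10, size=len(idx))
labels = [l for l, _ in zip(cycle(['foo', 'bar']), n)]
df = pd.DataFrame({'label': labels, 'n': n}, index=idx)

# pd.Grouper resamples the index to days while 'label' becomes the columns
out = df.pivot_table(index=pd.Grouper(freq='1D'), columns='label',
                     values='n', aggfunc='sum')
print(out)
```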
How to distinguish processes in Multiprocessing.Pool? | 39,118,353 | <p>I'm using Python <code>multiprocessing</code> to fork some child processes to run my jobs. There are two requirements:</p>
<ol>
<li>I need to know the pid of the child process, so that I can kill it if I want.</li>
<li>I need a callback to do some work after the job has finished. Because that work uses a lock in the parent process, it can't be done in the child process.</li>
</ol>
<p>But I get:</p>
<ol>
<li>A process generated by <code>multiprocessing.Process()</code> has a <code>pid</code> attribute to get its pid. But I can't add my asynchronous callback, and of course I can't wait synchronously either. </li>
<li>A process pool generated by <code>multiprocessing.Pool()</code> provides a callback interface. But I can't tell which process in the pool matches my job, and I may need to kill the process belonging to a specific job. </li>
</ol>
<p>The task is cheap; here is the code:</p>
<pre><code>import random, time
import multiprocessing
import os

class Job(object):
    def __init__(self, jobid, jobname, command):
        self.jobid, self.jobname, self.command = jobid, jobname, command

    def __str__(self):
        return "Job <{0:05d}>".format(self.jobid)

    def __repr__(self):
        return self.__str__()

def _run_job(job):
    time.sleep(1)
    print "{} done".format(job)
    return job, random.choice([True, False])  # the second value indicates whether the job finished successfully

class Test(object):
    def __init__(self):
        self._loc = multiprocessing.Lock()
        self._process_pool = multiprocessing.Pool()

    def submit_job(self, job):
        with self._loc:
            self._process_pool.apply_async(_run_job, (job,), callback=self.job_done)
            print "submitting {} successfully".format(job)

    def job_done(self, result):
        with self._loc:
            # the work after a job has finished is cleaning work, so it needs the lock of the parent process
            job, success = result
            if success:
                print "{} success".format(job)
            else:
                print "{} failure".format(job)

j1 = Job(1, "test1", "command1")
j2 = Job(2, "test2", "command2")
t = Test()
t.submit_job(j1)
t.submit_job(j2)
time.sleep(3.1)  # wait for all jobs to finish
</code></pre>
<p>But now I can't get the pid corresponding to each job. For example, if I need to kill the job<1>, I can't find which process in the process pool is related to the job<1>, so I can't kill the job whenever I want. </p>
<p>If I use <code>multiprocessing.Process</code> instead, I can record the pid of every process with its corresponding jobid. But then I can't add a callback method.</p>
<p>So is there a way to both get the pid of the child process and add a callback method?</p>
| 0 | 2016-08-24T08:37:52Z | 39,158,063 | <p>Finally I found a solution: use <code>multiprocessing.Event</code> instead.</p>
<p>Since <code>multiprocessing.Pool</code> can't tell me which process is allocated to a job, I can't record its pid, and so I can't kill a job by its id whenever I want.</p>
<p>Fortunately, <code>multiprocessing</code> provides the <code>Event</code> object as an alternative to the callback method. Recall what a callback method does: it provides an asynchronous response from the child process. Once the child process finishes, the parent process can detect it and call the callback method. So the core issue is how the parent process detects whether the child process has finished or not. That's what the <code>Event</code> object is for.</p>
<p>So the solution is simple: pass an <code>Event</code> object to the child process. Once the child process finishes, it sets the <code>Event</code> object. In the parent process, a daemon thread monitors whether the event is set. If so, it can call the method that does the callback work. Moreover, since I created the processes with <code>multiprocessing.Process</code> instead of <code>multiprocessing.Pool</code>, I can easily get each pid, which enables me to kill a process. </p>
<p>The solution code:</p>
<pre><code>import time
import multiprocessing
import threading
class Job(object):
def __init__(self, jobid, jobname, command):
self.jobid, self.jobname, self.command = jobid, jobname, command
self.lifetime = 0
def __str__(self):
return "Job <{0:05d}>".format(self.jobid)
def __repr__(self):
return self.__str__()
def _run_job(job, done_event):
time.sleep(1)
print "{} done".format(job)
done_event.set()
class Test(object):
def __init__(self):
self._loc = multiprocessing.Lock()
self._process_pool = {}
t = threading.Thread(target=self.scan_jobs)
t.daemon = True
t.start()
def scan_jobs(self):
while True:
with self._loc:
done_jobid = []
for jobid in self._process_pool:
process, event = self._process_pool[jobid]
if event.is_set():
print "Job<{}> is done in process <{}>".format(jobid, process.pid)
done_jobid.append(jobid)
map(self._process_pool.pop, done_jobid)
time.sleep(1)
def submit_job(self, job):
with self._loc:
done_event = multiprocessing.Event()
            new_process = multiprocessing.Process(target=_run_job, args=(job, done_event))
new_process.daemon = True
self._process_pool[job.jobid] = (new_process, done_event)
new_process.start()
print "submitting {} successfully".format(job)
j1 = Job(1, "test1", "command1")
j2 = Job(2, "test2", "command2")
t = Test()
t.submit_job(j1)
t.submit_job(j2)
time.sleep(5) # wait for job to finish
</code></pre>
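<p>With the processes recorded per job id, killing a job on demand becomes easy. A minimal sketch (the <code>kill_job</code> helper and the plain-dict pool are my own illustrative names, not part of the code above):</p>

```python
import multiprocessing
import time

def _sleep_forever(done_event):
    # stand-in for a long-running job
    time.sleep(60)
    done_event.set()

def kill_job(process_pool, jobid):
    # hypothetical helper: terminate the recorded process for a job id
    process, event = process_pool.pop(jobid)
    process.terminate()  # sends SIGTERM to the child
    process.join()
    return process.pid

pool = {}
ev = multiprocessing.Event()
proc = multiprocessing.Process(target=_sleep_forever, args=(ev,))
proc.daemon = True
proc.start()
pool[1] = (proc, ev)
pid = kill_job(pool, 1)
print("killed job 1, pid was", pid)
```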
| 0 | 2016-08-26T03:58:36Z | [
"python",
"linux",
"multiprocessing",
"fork",
"python-multiprocessing"
] |
Sorting a list of xy-coordinates in python | 39,118,367 | <p>I have a long list of xy coordinates like the following:</p>
<pre><code>>>> data = [(x1,y1),(x2,y2),(x3,y3),...]
</code></pre>
<p>Every pair of coordinates represents a point of a contour in an image and I would like to sort them like they are arranged along the contour (shortest path). The shape of the contour is very complex (it's the shape of a country), that's why a <a href="http://tomswitzer.net/2010/03/graham-scan/" rel="nofollow">ConvexHull</a>
won't work. </p>
<p>I tried out this code, but it is not precise enough:</p>
<pre><code>>>> import math
>>> import matplotlib.patches as patches
>>> import pylab
>>> pp=[(1,1),(2,3),(3,4)]
# compute centroid
>>> cent=(sum([p[0] for p in pp])/len(pp),sum([p[1] for p in pp])/len(pp))
# sort by polar angle
>>> pp.sort(key=lambda p: math.atan2(p[1]-cent[1],p[0]-cent[0]))
# plot points
>>> pylab.scatter([p[0] for p in pp],[p[1] for p in pp])
# plot polyline
>>> pylab.gca().add_patch(patches.Polygon(pp,closed=False,fill=False))
>>> pylab.grid()
>>> pylab.show()
</code></pre>
<p>I already tried like suggested in <a href="http://stackoverflow.com/questions/18280420/rearrange-a-list-of-points-to-reach-the-shortest-distance-between-them">this case</a>
but it did not work out well, because my list of coordinates is too long. </p>
<p>As the points are very close to each other the solution to <a href="http://stackoverflow.com/questions/37742358/sorting-points-to-form-a-continuous-line">this question</a>
might seem too complicated for my case.</p>
<p><a href="http://i.stack.imgur.com/7nLgu.png" rel="nofollow">This might illustrate my problem</a></p>
| -2 | 2016-08-24T08:38:43Z | 39,118,898 | <p>If your shape is 'simple', as in your example, you can compute the center of your cloud of points (the average of X and Y). Sort your points by the angle from the center to the points and, if two of them share the same angle, sort them by their distance from the center. That should do the trick.</p>
<pre><code>import functools
import math

def distance(p1, p2):
    return math.sqrt((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)

def angle(p, c):
    # polar angle of p around the center c
    return math.atan2(p[1]-c[1], p[0]-c[0])

center = functools.reduce(lambda p1, p2: (p1[0]+p2[0], p1[1]+p2[1]), data)
center = (center[0] / float(len(data)), center[1] / float(len(data)))
answer = sorted(data, key=lambda p: (angle(p, center), distance(p, center)))
</code></pre>
<p>For more complex shapes, I have another algorithm that I call the deflating hull. Starting from the convex hull of your cloud, you deflate it until it touches all the remaining points:</p>
<pre><code>import math

def deflate_hull(points):
hull = convex_hull(points)
for p in hull:
points.remove(p)
while points:
l = len(hull)
_, p, i = min((distance(hull[i-1], p) + distance(p, hull[i]) - distance(hull[i-1], hull[i]), p, i)
for p in points
for i in range(l))
points.remove(p)
hull = hull[:i] + [p] + hull[i:]
return hull
def convex_hull(points):
if len(points) <= 3:
return points
upper = half_hull(sorted(points))
lower = half_hull(reversed(sorted(points)))
return upper + lower[1:-1]
def half_hull(sorted_points):
hull = []
for C in sorted_points:
while len(hull) >= 2 and turn(hull[-2], hull[-1], C) <= -1e-6:
hull.pop()
hull.append(C)
return hull
def turn(A, B, C):
return (B[0]-A[0]) * (C[1]-B[1]) - (B[1]-A[1]) * (C[0]-B[0])
def distance(p1, p2):
return math.sqrt((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)
answer = deflate_hull(data)
</code></pre>
| 0 | 2016-08-24T09:02:30Z | [
"python",
"sorting",
"coordinates",
"shortest-path"
] |
Sorting a list of xy-coordinates in python | 39,118,367 | <p>I have a long list of xy coordinates like the following:</p>
<pre><code>>>> data = [(x1,y1),(x2,y2),(x3,y3),...]
</code></pre>
<p>Every pair of coordinates represents a point of a contour in an image and I would like to sort them like they are arranged along the contour (shortest path). The shape of the contour is very complex (it's the shape of a country), that's why a <a href="http://tomswitzer.net/2010/03/graham-scan/" rel="nofollow">ConvexHull</a>
won't work. </p>
<p>I tried out this code, but it is not precise enough:</p>
<pre><code>>>> import math
>>> import matplotlib.patches as patches
>>> import pylab
>>> pp=[(1,1),(2,3),(3,4)]
# compute centroid
>>> cent=(sum([p[0] for p in pp])/len(pp),sum([p[1] for p in pp])/len(pp))
# sort by polar angle
>>> pp.sort(key=lambda p: math.atan2(p[1]-cent[1],p[0]-cent[0]))
# plot points
>>> pylab.scatter([p[0] for p in pp],[p[1] for p in pp])
# plot polyline
>>> pylab.gca().add_patch(patches.Polygon(pp,closed=False,fill=False))
>>> pylab.grid()
>>> pylab.show()
</code></pre>
<p>I already tried like suggested in <a href="http://stackoverflow.com/questions/18280420/rearrange-a-list-of-points-to-reach-the-shortest-distance-between-them">this case</a>
but it did not work out well, because my list of coordinates is too long. </p>
<p>As the points are very close to each other the solution to <a href="http://stackoverflow.com/questions/37742358/sorting-points-to-form-a-continuous-line">this question</a>
might seem too complicated for my case.</p>
<p><a href="http://i.stack.imgur.com/7nLgu.png" rel="nofollow">This might illustrate my problem</a></p>
| -2 | 2016-08-24T08:38:43Z | 39,118,932 | <p>The problem you described is similar to finding a convex hull. You can have a look <a href="http://tomswitzer.net/2009/12/jarvis-march/" rel="nofollow">here</a></p>
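<p>In case the link rots, here is a minimal sketch of the Jarvis march (gift wrapping) algorithm it describes - my own illustrative implementation, not the linked one:</p>

```python
def jarvis_march(points):
    # gift wrapping: start at the lowest point and repeatedly pick the
    # candidate point such that every other point lies to its left
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    start = min(points, key=lambda p: (p[1], p[0]))
    hull = [start]
    while True:
        candidate = points[0]
        for p in points[1:]:
            # cross < 0 means p is to the right of the current candidate edge
            if candidate == hull[-1] or cross(hull[-1], candidate, p) < 0:
                candidate = p
        if candidate == start:
            break
        hull.append(candidate)
    return hull

print(jarvis_march([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```

Note that, as the question says, a convex hull alone will skip concave parts of the contour, so this is only a starting point.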
| 0 | 2016-08-24T09:04:40Z | [
"python",
"sorting",
"coordinates",
"shortest-path"
] |
Sorting a list of xy-coordinates in python | 39,118,367 | <p>I have a long list of xy coordinates like the following:</p>
<pre><code>>>> data = [(x1,y1),(x2,y2),(x3,y3),...]
</code></pre>
<p>Every pair of coordinates represents a point of a contour in an image and I would like to sort them like they are arranged along the contour (shortest path). The shape of the contour is very complex (it's the shape of a country), that's why a <a href="http://tomswitzer.net/2010/03/graham-scan/" rel="nofollow">ConvexHull</a>
won't work. </p>
<p>I tried out this code, but it is not precise enough:</p>
<pre><code>>>> import math
>>> import matplotlib.patches as patches
>>> import pylab
>>> pp=[(1,1),(2,3),(3,4)]
# compute centroid
>>> cent=(sum([p[0] for p in pp])/len(pp),sum([p[1] for p in pp])/len(pp))
# sort by polar angle
>>> pp.sort(key=lambda p: math.atan2(p[1]-cent[1],p[0]-cent[0]))
# plot points
>>> pylab.scatter([p[0] for p in pp],[p[1] for p in pp])
# plot polyline
>>> pylab.gca().add_patch(patches.Polygon(pp,closed=False,fill=False))
>>> pylab.grid()
>>> pylab.show()
</code></pre>
<p>I already tried like suggested in <a href="http://stackoverflow.com/questions/18280420/rearrange-a-list-of-points-to-reach-the-shortest-distance-between-them">this case</a>
but it did not work out well, because my list of coordinates is too long. </p>
<p>As the points are very close to each other the solution to <a href="http://stackoverflow.com/questions/37742358/sorting-points-to-form-a-continuous-line">this question</a>
might seem too complicated for my case.</p>
<p><a href="http://i.stack.imgur.com/7nLgu.png" rel="nofollow">This might illustrate my problem</a></p>
| -2 | 2016-08-24T08:38:43Z | 39,120,360 | <p>You can do this by selecting a random point and finding the next points from there, like:</p>
<pre><code>import math

def distance(a, b):
    # euclidean norm
    return math.hypot(a[0] - b[0], a[1] - b[1])

points = set([(1, 2), (2, 3)])
current = points.pop()
path = [current]
while points:
    current = min(points, key=lambda p: distance(current, p))
    points.remove(current)
    path.append(current)
<p>This is not a fast algorithm (about O(n*n)) but it is simple.</p>
<p>You can speed up the search by using a kd-tree - simplest case would be a binary tree/binary search along only one of the axes - but building the tree/sorting the points first is not actually fast.</p>
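<p>A sketch of the kd-tree idea using <code>scipy.spatial.cKDTree</code> (the <code>order_by_nearest_neighbor</code> name is mine, and SciPy is assumed to be available):</p>

```python
import numpy as np
from scipy.spatial import cKDTree

def order_by_nearest_neighbor(points):
    # greedy nearest-neighbor ordering; the kd-tree makes each lookup cheap
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    n = len(pts)
    visited = np.zeros(n, dtype=bool)
    order = [0]
    visited[0] = True
    for _ in range(n - 1):
        k = 2
        while True:
            # query progressively more neighbors until an unvisited one appears
            dists, idxs = tree.query(pts[order[-1]], k=min(k, n))
            nxt = next((int(i) for i in np.atleast_1d(idxs) if not visited[i]), None)
            if nxt is not None:
                break
            k *= 2
        visited[nxt] = True
        order.append(nxt)
    return [points[i] for i in order]

print(order_by_nearest_neighbor([(0, 0), (3, 0), (1, 0), (2, 0)]))
```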
<p>If your contour is at least almost convex, the solution of @Cabu is fastest.</p>
| 0 | 2016-08-24T10:09:25Z | [
"python",
"sorting",
"coordinates",
"shortest-path"
] |
Sorting a list of xy-coordinates in python | 39,118,367 | <p>I have a long list of xy coordinates like the following:</p>
<pre><code>>>> data = [(x1,y1),(x2,y2),(x3,y3),...]
</code></pre>
<p>Every pair of coordinates represents a point of a contour in an image and I would like to sort them like they are arranged along the contour (shortest path). The shape of the contour is very complex (it's the shape of a country), that's why a <a href="http://tomswitzer.net/2010/03/graham-scan/" rel="nofollow">ConvexHull</a>
won't work. </p>
<p>I tried out this code, but it is not precise enough:</p>
<pre><code>>>> import math
>>> import matplotlib.patches as patches
>>> import pylab
>>> pp=[(1,1),(2,3),(3,4)]
# compute centroid
>>> cent=(sum([p[0] for p in pp])/len(pp),sum([p[1] for p in pp])/len(pp))
# sort by polar angle
>>> pp.sort(key=lambda p: math.atan2(p[1]-cent[1],p[0]-cent[0]))
# plot points
>>> pylab.scatter([p[0] for p in pp],[p[1] for p in pp])
# plot polyline
>>> pylab.gca().add_patch(patches.Polygon(pp,closed=False,fill=False))
>>> pylab.grid()
>>> pylab.show()
</code></pre>
<p>I already tried like suggested in <a href="http://stackoverflow.com/questions/18280420/rearrange-a-list-of-points-to-reach-the-shortest-distance-between-them">this case</a>
but it did not work out well, because my list of coordinates is too long. </p>
<p>As the points are very close to each other the solution to <a href="http://stackoverflow.com/questions/37742358/sorting-points-to-form-a-continuous-line">this question</a>
might seem too complicated for my case.</p>
<p><a href="http://i.stack.imgur.com/7nLgu.png" rel="nofollow">This might illustrate my problem</a></p>
| -2 | 2016-08-24T08:38:43Z | 39,243,504 | <p>Update:
meanwhile I solved it by using OpenCV's findContours - that outputs the coordinates just like I wished! </p>
| 0 | 2016-08-31T07:39:17Z | [
"python",
"sorting",
"coordinates",
"shortest-path"
] |
slice operation for numpy in python2 | 39,118,401 | <p>If I have a <em>numpy</em> array <strong>X</strong>:</p>
<pre><code>array([[ 0.13263767, 0.23149757, 0.57097612],
[ 0.49629958, 0.67507182, 0.6758823 ]])
</code></pre>
<p>And an index array <strong>Y</strong>:</p>
<pre><code>array([1, 2, 1])
</code></pre>
<p>I could use <strong>X[0:,Y]</strong> to index the first row of <strong>X</strong>, and it will output the:</p>
<pre><code>array([ 0.23149757, 0.57097612, 0.23149757])
</code></pre>
<p>my question is, if I have an index array <strong>Z</strong> with 2 dimensions:</p>
<pre><code>array([[1, 2, 1],
[0, 1, 2]])
</code></pre>
<p>I would like to use first row to of <strong>Z</strong> to index the first row of <strong>X</strong>, and second row of <strong>Z</strong> to index the second row of <strong>X</strong> (<strong>Z</strong> and <strong>X</strong> have the same rows).
So one way is to use command as follow:</p>
<pre><code>Row_0 = X[0:, Z[0]]
Row_1 = X[1:, Z[1]]
</code></pre>
<p>I was wondering if there is a simple way to do this.
Thanks </p>
| 2 | 2016-08-24T08:40:06Z | 39,118,614 | <p>You can use fancy indexing to achieve that:</p>
<pre><code>>>> X[[[0], [1]], Z]
array([[ 0.23149757, 0.57097612, 0.23149757],
[ 0.49629958, 0.67507182, 0.6758823 ]])
</code></pre>
<p>The trick is that the array indexing the first dimension must broadcast with the one indexing the second one. In this case:</p>
<pre><code>>>> np.array([[0], [1]]).shape
(2, 1)
>>> Z.shape
(2, 3)
</code></pre>
<p>So the return will be of the broadcast shape, <code>(2, 3)</code> with indices taken from the first array for the first dimension, and from the second array for the second dimension.</p>
<p>For more general cases, you can get the same result as:</p>
<pre><code>>>> X[np.arange(Z.shape[0])[:, None], Z]
array([[ 0.23149757, 0.57097612, 0.23149757],
[ 0.49629958, 0.67507182, 0.6758823 ]])
</code></pre>
| 3 | 2016-08-24T08:50:13Z | [
"python",
"arrays",
"numpy"
] |
How to count rows that share a unique field in pandas | 39,118,521 | <p>Imagine I have a dataframe that stores the books that individual people have read and their scores for them:</p>
<pre><code>df = pd.DataFrame({
'person' : [1,1,2,2,3,3],
'book' : ['dracula', 'frankenstein', 'dracula', 'frankenstein', 'dracula', 'rebecca'],
'score':[10,11,12,13,14,15]
})
df
book person score
0 dracula 1 10
1 frankenstein 1 11
2 dracula 2 12
3 frankenstein 2 13
4 dracula 3 14
5 rebecca 3 15
</code></pre>
<p>What I want to get is a dataframe showing for each pair of books how many people have read them both i.e. the desired outcome looks like this:</p>
<pre><code> dracula frankensten rebecca
dracula 3 2 1
frankenstein 2 2 0
rebecca 1 0 1
</code></pre>
<p>I.e. there are two people who have read both <code>dracula</code> and <code>frankenstein</code>, one person who has read both <code>dracula</code> and <code>rebecca</code>, etc. I don't care about the scores.</p>
<p>I have a feeling this has something to do with pivot/stack/unstack, but can't figure it out, any suggestions?</p>
| 3 | 2016-08-24T08:45:08Z | 39,118,598 | <p>You can construct a pivot table and multiply it with its transpose:</p>
<pre><code>pvt = pd.pivot_table(df, index='book', columns='person', aggfunc=len, fill_value=0)
pvt.dot(pvt.T)
Out:
book dracula frankenstein rebecca
book
dracula 3 2 1
frankenstein 2 2 0
rebecca 1 0 1
</code></pre>
| 3 | 2016-08-24T08:49:30Z | [
"python",
"pandas",
"pivot",
"pivot-table"
] |
How to count rows that share a unique field in pandas | 39,118,521 | <p>Imagine I have a dataframe that stores the books that individual people have read and their scores for them:</p>
<pre><code>df = pd.DataFrame({
'person' : [1,1,2,2,3,3],
'book' : ['dracula', 'frankenstein', 'dracula', 'frankenstein', 'dracula', 'rebecca'],
'score':[10,11,12,13,14,15]
})
df
book person score
0 dracula 1 10
1 frankenstein 1 11
2 dracula 2 12
3 frankenstein 2 13
4 dracula 3 14
5 rebecca 3 15
</code></pre>
<p>What I want to get is a dataframe showing for each pair of books how many people have read them both i.e. the desired outcome looks like this:</p>
<pre><code> dracula frankensten rebecca
dracula 3 2 1
frankenstein 2 2 0
rebecca 1 0 1
</code></pre>
<p>I.e. there are two people who have read both <code>dracula</code> and <code>frankenstein</code>, one person who has read both <code>dracula</code> and <code>rebecca</code>, etc. I don't care about the scores.</p>
<p>I have a feeling this has something to do with pivot/stack/unstack, but can't figure it out, any suggestions?</p>
| 3 | 2016-08-24T08:45:08Z | 39,118,642 | <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a>:</p>
<pre><code>df = pd.crosstab(df.book, df.person)
print (df.dot(df.T))
book dracula frankenstein rebecca
book
dracula 3 2 1
frankenstein 2 2 0
rebecca 1 0 1
</code></pre>
<p>Or solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>df = df.groupby(['book','person'])['person'].size().unstack().fillna(0).astype(int)
print (df.dot(df.T))
book dracula frankenstein rebecca
book
dracula 3 2 1
frankenstein 2 2 0
rebecca 1 0 1
</code></pre>
| 2 | 2016-08-24T08:51:27Z | [
"python",
"pandas",
"pivot",
"pivot-table"
] |
edit: how to design and implement an experiment to do benchmark comparisons of the two queue implementations in python | 39,118,587 | <p>I am looking for pointers on how to design an experiment to compare the behaviour of <code>queue</code> implementations one done with <code>python list</code> and the other <code>python queue abstract data type</code> using a benchmark.</p>
<p>Here is some code I could put up based on the how I understood <code>amortized testing</code></p>
<pre><code>###############################################################
# Experiment to determine the differences between a list #
# implemented queue and the 'queue' ADT (abstract data type)#
###############################################################
from pythonds import Queue
import time
class MyQueue:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def enqueue(self, item):
self.items.insert(0, item)
def dequeue(self):
return self.items.pop()
def size(self):
return len(self.items)
q = Queue()
# Steps:
# 1) Enter 10 to five items in both 'MyQueue' and 'Queue'
# 2) This should be done from instance of the various classes
# 3) Check the number of steps that it takes to remove items from
# these object instances.
m = MyQueue()
t1 = time.time()
for i in range(1, 100001):
m.enqueue(i)
t2 = time.time()
exec_time = t2 - t1
# m.dequeue()
t3 = time.time()
for u in range(1, 100001):
q.enqueue(u)
t4 = time.time()
exec_time2 = t4 - t3
t5 = time.time()
if not q.isEmpty():
while q.items:
q.dequeue()
t6 = time.time()
exec_time3 = t6 - t5
t7 = time.time()
if not m.items:
while m.items:
m.dequeue()
t8 = time.time()
exec_time4 = t8 - t7
print("")
print("----------------------")
print("Enqueue Operations")
print("----------------------")
print("MyQueue Result 1: ", exec_time)
print("Python Queue Result 2: ", exec_time2)
print("----------------------")
print("Dequeue Operations")
print("----------------------")
print("MyQueue Result 1: ", exec_time3)
print("Python Queue Result 2: ", exec_time4)
</code></pre>
<p>Results:</p>
<pre><code>----------------------
Enqueue Operations
----------------------
MyQueue Result 1: 3.1316871643066406
Python Queue Result 2: 3.1880860328674316
----------------------
Dequeue Operations
----------------------
MyQueue Result 1: 0.028588533401489258
Python Queue Result 2: 9.5367431640625e-07
</code></pre>
| 0 | 2016-08-24T08:48:53Z | 39,118,788 | <p>You could use Python's <code>time.time()</code> to measure the execution time for each queue.
The test should be run multiple times, with at least 1000 elements in the structure, to get a "decent" execution time.</p>
<pre><code>import time
t1 = time.time()
for i in range(1, 50000):
m.enqueue(i)
t2 = time.time()
exec_time = t2-t1
</code></pre>
<p>Do this with both m and q, for enqueue and dequeue, and you can compare the execution times.</p>
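<p>If you'd rather not hand-roll the timing loop, the stdlib <code>timeit</code> module handles the repetition for you. A sketch comparing the list-based queue against <code>collections.deque</code>, which is the usual recommendation for queues since it appends and pops at both ends in O(1):</p>

```python
import timeit
from collections import deque

def list_queue(n):
    items = []
    for i in range(n):
        items.insert(0, i)   # O(n) per enqueue: shifts every element
    while items:
        items.pop()          # O(1) per dequeue

def deque_queue(n):
    items = deque()
    for i in range(n):
        items.appendleft(i)  # O(1) per enqueue
    while items:
        items.pop()          # O(1) per dequeue

n = 10000
t_list = timeit.timeit(lambda: list_queue(n), number=5)
t_deque = timeit.timeit(lambda: deque_queue(n), number=5)
print("list-based: %.4fs  deque-based: %.4fs" % (t_list, t_deque))
```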
| 0 | 2016-08-24T08:57:15Z | [
"python",
"amortized-analysis",
"amortization"
] |
Pandas disassemble column | 39,118,689 | <p>From the documentaiton of pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html</a> a assembly of multiple columns e.g. date columns to a single one is explained.</p>
<pre><code>>>> df = pd.DataFrame({'year': [2015, 2016],
'month': [2, 3],
'day': [4, 5]})
>>> pd.to_datetime(df)
0 2015-02-04 1 2016-03-05 dtype: datetime64[ns]
</code></pre>
<p>But how can I perform the opposite transformation?</p>
| 2 | 2016-08-24T08:53:08Z | 39,118,825 | <p>You can access the constituent parts of a datetime using the <a href="http://pandas.pydata.org/pandas-docs/stable/api.html#datetimelike-properties" rel="nofollow"><code>dt</code></a> accessor, note that <code>to_datetime</code> returns a <code>Series</code> so I'm converting to a df in order to add columns:</p>
<pre><code>In [71]:
df1 = pd.to_datetime(df)
df1 = df1.to_frame()
df1 = df1.rename(columns={0:'date'})
df1
Out[71]:
date
0 2015-02-04
1 2016-03-05
In [72]:
df1['year'], df1['month'], df1['day'] = df1['date'].dt.year, df1['date'].dt.month, df1['date'].dt.day
df1
Out[72]:
date year month day
0 2015-02-04 2015 2 4
1 2016-03-05 2016 3 5
</code></pre>
<p>the dtypes will be <code>int64</code> for each component:</p>
<pre><code>In [73]:
df1.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 4 columns):
date 2 non-null datetime64[ns]
year 2 non-null int64
month 2 non-null int64
day 2 non-null int64
dtypes: datetime64[ns](1), int64(3)
memory usage: 144.0 bytes
</code></pre>
| 2 | 2016-08-24T08:59:27Z | [
"python",
"date",
"pandas",
"dataframe",
"split"
] |
Pandas disassemble column | 39,118,689 | <p>From the documentaiton of pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html</a> a assembly of multiple columns e.g. date columns to a single one is explained.</p>
<pre><code>>>> df = pd.DataFrame({'year': [2015, 2016],
'month': [2, 3],
'day': [4, 5]})
>>> pd.to_datetime(df)
0 2015-02-04 1 2016-03-05 dtype: datetime64[ns]
</code></pre>
<p>But how can I perform the opposite transformation?</p>
| 2 | 2016-08-24T08:53:08Z | 39,119,245 | <p><code>.dt.strftime('%Y %-m %-d').str.split()</code> will reverse the operation</p>
<pre><code>df = pd.DataFrame({'year': [2015, 2016],
'month': [2, 3],
'day': [4, 5]})
pd.to_datetime(df)
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
</code></pre>
<hr>
<pre><code>pd.to_datetime(df).dt.strftime('%Y %-m %-d').str.split()
0 [2015, 2, 4]
1 [2016, 3, 5]
dtype: object
</code></pre>
<hr>
<p>Or with a fancy regex extract</p>
<pre><code>regex = r'(?P<year>\d+) (?P<month>\d+) (?P<day>\d+)'
pd.to_datetime(df).dt.strftime('%Y %-m %-d') \
.str.extract(regex, expand=True).astype(int)
</code></pre>
<p><a href="http://i.stack.imgur.com/zHc3s.png" rel="nofollow"><img src="http://i.stack.imgur.com/zHc3s.png" alt="enter image description here"></a></p>
| 2 | 2016-08-24T09:19:56Z | [
"python",
"date",
"pandas",
"dataframe",
"split"
] |
Division of two dataframe with Group by of a Column Pandas | 39,118,708 | <p>I have a dataframe df_F1 : </p>
<pre><code>df_F1.info()
</code></pre>
<blockquote>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 2 entries, 0 to 1
Data columns (total 7 columns):
class_energy 2 non-null object
ACT_TIME_AERATEUR_1_F1 2 non-null float64
ACT_TIME_AERATEUR_1_F3 2 non-null float64
ACT_TIME_AERATEUR_1_F5 2 non-null float64
ACT_TIME_AERATEUR_1_F8 2 non-null float64
ACT_TIME_AERATEUR_1_F7 2 non-null float64
ACT_TIME_AERATEUR_1_F8 2 non-null float64
dtypes: float64(6), object(1)
memory usage: 128.0+ bytes
</code></pre>
</blockquote>
<pre><code> df_F1.head()
</code></pre>
<blockquote>
<pre><code>class_energy ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5
low 5.875550 431.000000 856.666667
medium 856.666667 856.666667 856.666667
</code></pre>
</blockquote>
<p>I try to create a dataframe Ratio which contains, for each class_energy, the value of each ACT_TIME_AERATEUR_1_Fx divided by the sum over all ACT_TIME_AERATEUR_1_Fx.
For example : </p>
<pre><code> ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5
low 5.875550/(5.875550 + 431.000000+856.666667) 431.000000/(5.875550+431.000000+856.666667) 856.666667/(5.875550+431.000000+856.666667)
medium 856.666667/(856.666667+856.666667+856.666667) 856.666667/(856.666667+856.666667+856.666667) 856.666667/(856.666667+856.666667+856.666667)
</code></pre>
<p>Any idea please to help me?</p>
<p>Thank you</p>
<p>Kind regards</p>
| 0 | 2016-08-24T08:53:42Z | 39,118,883 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.divide.html" rel="nofollow"><code>DF.divide</code></a> to divide the required columns with their <code>sum</code> along the same columns as shown:</p>
<pre><code>df.iloc[:,1:4] = df.iloc[:,1:4].divide(df.iloc[:,1:4].sum(axis=1), axis=0)
print(df)
class_energy ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 \
0 low 0.004542 0.333194
1 medium 0.333333 0.333333
ACT_TIME_AERATEUR_1_F5
0 0.662264
1 0.333333
</code></pre>
| 2 | 2016-08-24T09:02:01Z | [
"python",
"pandas",
"dataframe",
"group-by"
] |
insert special characters in mysql database using python angularjs | 39,118,967 | <p>How do I insert a value in MySQL that consist of single quotes. i.e</p>
<p>"I am joy's brother and trying to get a 'job' "</p>
<p>The single quote is creating problems. When I try to write some text containing single quotes in the text field, I get an error in the console, i.e.:</p>
<p>SyntaxError: Unexpected token < in JSON at position 0 </p>
<p>How do you insert single quotes through the user interface (front end) using angularJs ,MySQL.</p>
| -1 | 2016-08-24T09:06:41Z | 39,119,023 | <p>You're looking for: </p>
<pre><code>conn.escape_string()
</code></pre>
<p>Look into what escape_string does. In a nutshell it takes your string and replaces the special characters with ex \x00. </p>
<p>See: <a href="http://mysql-python.sourceforge.net/MySQLdb.html" rel="nofollow">http://mysql-python.sourceforge.net/MySQLdb.html</a></p>
<p>otherwise just put a \ in front of the characters to escape them yourself! :)</p>
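<p>A safer option than manual escaping is a parameterized query, which lets the driver do the quoting for you. Sketched here with the stdlib <code>sqlite3</code> DB-API purely for illustration - with MySQLdb the idea is the same, except the placeholder is <code>%s</code> instead of <code>?</code>:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

text = "I am joy's brother and trying to get a 'job'"
# the placeholder keeps the embedded quotes from breaking the statement
conn.execute("INSERT INTO notes (body) VALUES (?)", (text,))

stored = conn.execute("SELECT body FROM notes").fetchone()[0]
print(stored)
```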
| 0 | 2016-08-24T09:09:26Z | [
"python",
"mysql",
"angularjs"
] |
My gauss filter is too slow | 39,119,008 | <p>I am making a simple gauss blur, and since I could not make scipy's convolve work, I made my own:</p>
<pre><code>def Convolve(matr_, ker_):
output = matr_.astype(np.float64)
for x in range(len(matr_)):
for y in range(len(matr_[x])):
sum = 0
count = 0
width = int(len(ker_)/2)
for x_c in range(len(ker_)):
for y_c in range(len(ker_)):
x_index = x - x_c + width
y_index = y - y_c + width
if (x_index >= 0) and (x_index < len(matr_)) and (y_index >= 0) and (y_index < len(matr_[x])):
sum += ker_[x_c][y_c] * matr_[x_index][y_index]
count += ker_[x_c][y_c]
else:
#print("{0} -> {1}, {2} -> {3}".format(x, x_index, y, y_index))
pass
output[x][y] = sum/count
return output.astype(matr_.dtype)
</code></pre>
<p>I also normalize pixels right here, so they would still always fit <code>matr_</code>'s type. But it works really slow, it takes up something like 20 seconds to work with a 1440x900 image. How can this be made to work faster?</p>
| 0 | 2016-08-24T09:08:42Z | 39,119,089 | <p>You should never use loops in Python if you can avoid it. Use Numpy or Pandas and work on vectors.</p>
<p>If you <em>have</em> to use loops (which doesn't seem to be the case in your example), use the Numba package.</p>
| 0 | 2016-08-24T09:12:26Z | [
"python",
"scipy"
] |
Converting string to date/hour at the 24hour mark: ValueError unconverted data remains: 4 | 39,119,093 | <p>I am trying to extract the date and hour from the following string:</p>
<pre><code>string = '3/24/2016 24' # 24 is the hour
</code></pre>
<p>Using the following code:</p>
<pre><code>result = datetime.strptime(string, '%m/%d/%Y %H')
</code></pre>
<p>However I am running into the following error message:
<code>ValueError: unconverted data remains: 4</code></p>
<p>Lead: Does strptime not recognize 24 as an hour (0 -> 23 versus 1 -> 24)? And if so, how should I fix this, since it's a string? </p>
| 0 | 2016-08-24T09:12:32Z | 39,119,268 | <p>To handle the corner case of 24, I suppose you can parse the date part on its own, then set the hour mod 24, e.g.:</p>
<pre><code>d, t = '3/24/2016 24'.partition(' ')[::2]
dt = datetime.strptime(d, '%m/%d/%Y').replace(hour=int(t) % 24)
# datetime.datetime(2016, 3, 24, 0, 0)
</code></pre>
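<p>If hour 24 should instead roll over to midnight of the following day, a <code>timedelta</code> handles that - my own variation on the snippet above:</p>

```python
from datetime import datetime, timedelta

d, t = '3/24/2016 24'.partition(' ')[::2]
dt = datetime.strptime(d, '%m/%d/%Y') + timedelta(hours=int(t))
print(dt)
```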
| 2 | 2016-08-24T09:21:11Z | [
"python",
"date",
"type-conversion",
"extract",
"strptime"
] |
psutil: get cpu of all processes | 39,119,119 | <p>Maybe I'm overlooking something, but I can't figure out how to get <code>current_process.cpu_percent(interval=0.1)</code> for all processes at once without iterating over them. Currently iteration will take <code>process_count * interval</code> seconds to finish. Is there a way around this?</p>
<p>So far I'm doing:</p>
<pre><code>#!/usr/bin/env python2
import psutil
cpu_count = psutil.cpu_count()
processes_info = []
for proc in psutil.process_iter():
try:
pinfo = proc.as_dict(attrs=['pid', 'username', 'memory_info', 'cpu_percent', 'name'])
current_process = psutil.Process(pid=pinfo['pid'])
pinfo["cpu_info"] = current_process.cpu_percent(interval=0.1)
processes_info.append(pinfo)
except psutil.NoSuchProcess:
pass
print processes_info
</code></pre>
<p>As far as I understand it, I cannot include <code>cpu-percent</code> into the <code>attr</code>-list, since the help states</p>
<blockquote>
<p>When interval is 0.0 or None compares system CPU times elapsed since last call or module import, returning immediately. That means the first time this is called it will return a meaningless 0.0 value which you are supposed to ignore. </p>
</blockquote>
| 1 | 2016-08-24T09:13:39Z | 39,121,160 | <p>This is essentially a measurement of each process's CPU time, so you cannot avoid the <code>process_count * interval</code> seconds in a single Python interpreter process. You might be able to reduce the wall-clock time by means of multiprocessing.</p>
<p><a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow">https://docs.python.org/3/library/multiprocessing.html</a></p>
| 1 | 2016-08-24T10:45:28Z | [
"python",
"python-2.7",
"psutil"
] |
psutil: get cpu of all processes | 39,119,119 | <p>Maybe I'm overlooking something, but I can't figure out how to get <code>current_process.cpu_percent(interval=0.1)</code> for all processes at once without iterating over them. Currently iteration will take <code>process_count * interval</code> seconds to finish. Is there a way around this?</p>
<p>So far I'm doing:</p>
<pre><code>#!/usr/bin/env python2
import psutil
cpu_count = psutil.cpu_count()
processes_info = []
for proc in psutil.process_iter():
try:
pinfo = proc.as_dict(attrs=['pid', 'username', 'memory_info', 'cpu_percent', 'name'])
current_process = psutil.Process(pid=pinfo['pid'])
pinfo["cpu_info"] = current_process.cpu_percent(interval=0.1)
processes_info.append(pinfo)
except psutil.NoSuchProcess:
pass
print processes_info
</code></pre>
<p>As far as I understand it, I cannot include <code>cpu-percent</code> into the <code>attr</code>-list, since the help states</p>
<blockquote>
<p>When interval is 0.0 or None compares system CPU times elapsed since last call or module import, returning immediately. That means the first time this is called it will return a meaningless 0.0 value which you are supposed to ignore. </p>
</blockquote>
| 1 | 2016-08-24T09:13:39Z | 39,138,890 | <p>In order to calculate CPU% you necessarily have to wait.
You don't have to wait 0.1 secs <strong>for each process/iteration</strong> though.
Instead, iterate over all processes, call cpu_percent() with interval=0, ignore the return value, then wait 0.1 secs (or more).
The second time you iterate over all processes, cpu_percent(interval=0) will return a meaningful value. </p>
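<p>Put concretely, the two-pass pattern could look like this (a sketch - <code>sample_cpu_percents</code> is my own name for it):</p>

```python
import time
import psutil

def sample_cpu_percents(interval=0.1):
    procs = list(psutil.process_iter())
    for p in procs:
        try:
            p.cpu_percent(interval=0)  # first pass primes the counters, returns 0.0
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    time.sleep(interval)  # one shared wait instead of one wait per process
    results = {}
    for p in procs:
        try:
            results[p.pid] = p.cpu_percent(interval=0)  # now meaningful
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    return results

percents = sample_cpu_percents()
print(len(percents), "processes sampled")
```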
| 2 | 2016-08-25T07:12:41Z | [
"python",
"python-2.7",
"psutil"
] |
How to plot bar graph interactively based on value of dropdown widget in bokeh python? | 39,119,140 | <p>I want to plot the bar graph based value of dropdown widget.</p>
<p><strong>Code</strong></p>
<pre><code>import pandas as pd
from bokeh.io import output_file, show
from bokeh.layouts import widgetbox
from bokeh.models.widgets import Dropdown
from bokeh.plotting import curdoc
from bokeh.charts import Bar, output_file,output_server, show #use output_notebook to visualize it in notebook
df=pd.DataFrame({'item':["item1","item2","item2","item1","item1","item2"],'value':[4,8,3,5,7,2]})
menu = [("item1", "item1"), ("item2", "item2")]
dropdown = Dropdown(label="Dropdown button", button_type="warning", menu=menu)
def function_to_call(attr, old, new):
    df=df[df['item']==dropdown.value]
    p = Bar(df, title="Bar Chart Example", xlabel='x', ylabel='values', width=400, height=400)
    output_server()
    show(p)

dropdown.on_change('value', function_to_call)
curdoc().add_root(dropdown)
</code></pre>
<p><strong>Questions</strong></p>
<ol>
<li>I am getting the following error: "UnboundLocalError: local variable '<strong>df</strong>' referenced before assignment", even though df is already created.</li>
<li>How can I plot the bar graph in the webpage below the dropdown? What is the syntax to display it once issue 1 is resolved?</li>
</ol>
| 0 | 2016-08-24T09:14:48Z | 39,127,871 | <p>For 1.) you are referencing it before assigning it. Look at the <code>df['item']==dropdown.value</code> inside the square brackets. That happens <em>first</em> before the assignment. As to why this matters, that's how Python works. All assignments in a function by default create <em>local</em> variables. But before the assignment, only the global value is available. Python is telling you it won't allow mixed global/local usage in a single function. Long story short, rename the <code>df</code> variable inside the function:</p>
<pre><code>subset = df[df['item']==dropdown.value]
p = Bar(subset, ...)
</code></pre>
<p>For 2.) you need to put things in a layout (e.g. a <code>column</code>). There are lots of example of this in the project docs and in the gallery. </p>
| 0 | 2016-08-24T15:52:03Z | [
"python",
"matplotlib",
"plot",
"data-visualization",
"bokeh"
] |
List Accumulation with Append | 39,119,300 | <p>I want to generate or return an append-accumulated list from a given list (or iterator). For a list like <code>[1, 2, 3, 4]</code>, I would like to get, <code>[1]</code>, <code>[1, 2]</code>, <code>[1, 2, 3]</code> and <code>[1, 2, 3, 4]</code>. Like so:</p>
<pre><code>>>> def my_accumulate(iterable):
...     grow = []
...     for each in iterable:
...         grow.append(each)
...         yield grow
...
>>> for x in my_accumulate(some_list):
...     print x  # or something more useful
...
[1]
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
</code></pre>
<p>This works but is there an operation I could use with <a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow"><code>itertools.accumulate</code></a> to facilitate this? (I'm on Python2 but the pure-python implementation/equivalent has been provided in the docs.)</p>
<p>Another problem I have with <code>my_accumulate</code> is that it doesn't work well with <code>list()</code>, it outputs the entire <code>some_list</code> for each element in the list:</p>
<pre><code>>>> my_accumulate(some_list)
<generator object my_accumulate at 0x0000000002EC3A68>
>>> list(my_accumulate(some_list))
[[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
</code></pre>
<hr>
<p>Option 1:</p>
<p>I wrote my own appending accumulator function to use with <code>itertools.accumulate</code> but considering the LoC and final useful-ness, it seems like a waste of effort, with <code>my_accumulate</code> being more useful, <em>(though may fail in case of empty iterables and consumes more memory since <code>grow</code> keeps growing)</em>:</p>
<pre><code>>>> def app_acc(first, second):
...     if isinstance(first, list):
...         first.append(second)
...     else:
...         first = [first, second]
...     return first
...
>>> for x in accumulate(some_list, app_acc):
...     print x
...
1
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
>>> list(accumulate(some_list, app_acc)) # same problem again with list
[1, [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
</code></pre>
<p><em>(and the first returned elem is not a list, just a single item)</em></p>
<hr>
<p>Option 2: Figured it would be easier to just do incremental slicing but using the ugly iterate over list length method:</p>
<pre><code>>>> for i in xrange(len(some_list)): # the ugly iterate over list length method
...     print some_list[:i+1]
...
[1]
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
</code></pre>
| 3 | 2016-08-24T09:22:25Z | 39,119,482 | <p>The easiest way to use <code>accumulate</code> is to make each item in the iterable a list with a single item and then the default function works as expected:</p>
<pre><code>from itertools import accumulate
acc = accumulate([el] for el in range(1, 5))
res = list(acc)
# [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
</code></pre>
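<p>A quick check (Python 3, where <code>itertools.accumulate</code> is built in; on the question's Python 2 the pure-Python equivalent from the docs behaves the same way) that each step yields a distinct list — <code>+</code> on lists builds a new object every time, which is why <code>list()</code> works here while the question's generator repeated its final state:</p>

```python
from itertools import accumulate

some_list = [1, 2, 3, 4]
# Wrap each element in a one-item list; the default "add" then concatenates.
prefixes = list(accumulate([el] for el in some_list))

print(prefixes)  # [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
```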
| 8 | 2016-08-24T09:32:02Z | [
"python",
"list"
] |
Raspberry pi Jasper speech recognition not working | 39,119,471 | <p>I follow this tutorial <a href="http://jasperproject.github.io/documentation/" rel="nofollow">http://jasperproject.github.io/documentation/</a>
I use Debian Jessie, so I had to compile most of it myself.
When I try to run jasper.py I get this error.
Does anyone know what the problem could be?</p>
<blockquote>
<p>DEBUG:client.diagnose:Checking network connection to server
'www.google.com'... DEBUG:client.diagnose:Network connection
working DEBUG:__main__:Trying to read config file:
'/root/.jasper/profile.yml' DEBUG:client.diagnose:Checking python import 'pocketsphinx'...
DEBUG:client.diagnose:Python package 'pocketsphinx' found:
/usr/local/lib/python2.7/dist-packages/pocketsphinx/__init__.py'
WARNING:root:tts_engine not specified in profile, defaulting to
espeak-tts' DEBUG:client.diagnose:Checking executable 'aplay'...
DEBUG:client.diagnose:Executable 'aplay' found: '/usr/bin/aplay'
DEBUG:client.diagnose:Checking executable 'espeak'...
DEBUG:client.diagnose:Executable 'espeak' found: '/usr/bin/espeak'
DEBUG:client.vocabcompiler:compiled_revision is
'bb74ae36d130ef20de710e3a77b43424b8fa774f' ERROR:root:Error
occured! Traceback (most recent call last): File "jasper.py",
line 146, in app = Jasper() File "jasper.py",
line 109, in __init__<br>
stt_passive_engine_class.get_passive_instance(), File
"/home/pi/jasper/client/stt.py", line 48, in get_passive_instance<br>
return cls.get_instance('keyword', phrases) File
"/home/pi/jasper/client/stt.py", line 42, in get_instance<br>
instance = cls(**config) File "/home/pi/jasper/client/stt.py",
line 126, in __init__ **vocabulary.decoder_kwargs)
TypeError: __init__() got an unexpected keyword argument 'hmm' </p>
</blockquote>
| -1 | 2016-08-24T09:31:15Z | 39,120,672 | <p>I paste my stt.py script on pastebin. <a href="http://pastebin.com/Cy6ziGPk" rel="nofollow">http://pastebin.com/Cy6ziGPk</a>
Because I could not manage this on this forum.</p>
| 0 | 2016-08-24T10:24:16Z | [
"python",
"linux"
] |
How to I list down the results of a function (python) | 39,119,500 | <p>I want to list down the sums taken from numberA. </p>
<p>The function "add" prompts the user if he wants to add. If he selects yes, then it will go to the function "numberA". This part will loop. </p>
<p>I want to list down the list of the sums when the user selects "N" in the function "add". And finally sum up all the sums taken once again. </p>
<p>I do not know how to store the values taken from "numberA"</p>
<pre><code>def numberA():
    num1=int(input("Enter First Number"))
    num2=int(input("Enter Second Number"))
    total=num1+num2
    print("The total: ", total)

def add():
    userSelect = input("Do You Want to Add?"
                       "\n(Y) Yes ; (N) No"
                       "\n")
    while userSelect != "Y" and userSelect != "N":
        print("Error")
        add()
    if userSelect == "Y":
        numberA()
        add()
    else:
        print("Bye")

add()

list = [add()] #List of the Sums go here
for each in list:
    print(each)
</code></pre>
| 0 | 2016-08-24T09:32:53Z | 39,119,569 | <pre><code>total = []
def numberA():
    num1 = int(input("Enter First Number"))
    num2 = int(input("Enter Second Number"))
    total.append(num1 + num2)

def add():
    userSelect = input("Do You Want to Add?"
                       "\n(Y) Yes ; (N) No"
                       "\n")
    while userSelect != "Y" and userSelect != "N":
        print("Error")
        userSelect = input("Do You Want to Add?"
                           "\n(Y) Yes ; (N) No"
                           "\n")
    if userSelect == "Y":
        numberA()
        add()
    else:
        print("Bye")

add()
print("The total: ")
for each in total:
    print(each)
</code></pre>
| 0 | 2016-08-24T09:35:46Z | [
"python"
] |
How to I list down the results of a function (python) | 39,119,500 | <p>I want to list down the sums taken from numberA. </p>
<p>The function "add" prompts the user if he wants to add. If he selects yes, then it will go to the function "numberA". This part will loop. </p>
<p>I want to list down the list of the sums when the user selects "N" in the function "add". And finally sum up all the sums taken once again. </p>
<p>I do not know how to store the values taken from "numberA"</p>
<pre><code>def numberA():
    num1=int(input("Enter First Number"))
    num2=int(input("Enter Second Number"))
    total=num1+num2
    print("The total: ", total)

def add():
    userSelect = input("Do You Want to Add?"
                       "\n(Y) Yes ; (N) No"
                       "\n")
    while userSelect != "Y" and userSelect != "N":
        print("Error")
        add()
    if userSelect == "Y":
        numberA()
        add()
    else:
        print("Bye")

add()

list = [add()] #List of the Sums go here
for each in list:
    print(each)
</code></pre>
| 0 | 2016-08-24T09:32:53Z | 39,120,536 | <p>This may help:</p>
<pre><code>def add():
    ask = True
    res = []
    while ask:
        num1 = int(input("Enter First Number : "))
        num2 = int(input("Enter Second Number : "))
        total = num1 + num2
        userSelect = input("Do You Want to Add?"
                           "\n(Y) Yes ; (N) No"
                           "\n")
        if userSelect not in ['Y', 'N']:
            print("Error")
        elif userSelect == 'Y':
            ask = True
        else:
            ask = False
        res.append(total)
    return res

print(add())
</code></pre>
| 0 | 2016-08-24T10:18:13Z | [
"python"
] |
button on press being called before button press | 39,119,567 | <p>I am trying to add a set of buttons to a grid layout scroll view using for loop. But the on press event is being triggered for all the buttons even before pressing the buttons. How can I fix this?</p>
<pre><code>from kivy.uix.gridlayout import GridLayout
from kivy.uix.button import Button
from kivy.uix.scrollview import ScrollView
from kivy.core.window import Window
from kivy.app import runTouchApp
import webbrowser
def btnsclicked(id, url):
    print 'btn id '+id+' clicked'
    webbrowser.open(url)

layout = GridLayout(cols=1, spacing=10, size_hint_y=None)
layout.bind(minimum_height=layout.setter('height'))

for i in range(5):
    url = 'https://www.google.com'
    btn = Button(text=str(i), size_hint_y=None, height=40, id='b'+str(i))
    btn.bind(on_press=btnsclicked('b'+str(i), url))
    layout.add_widget(btn)

root = ScrollView(size_hint=(1, None), size=(Window.width, Window.height))
root.add_widget(layout)
runTouchApp(root)
</code></pre>
| 0 | 2016-08-24T09:35:42Z | 39,119,701 | <p>because you are actually calling your callback when passing args:</p>
<pre><code>btn.bind(on_press =btnsclicked('b'+str(i), url))
</code></pre>
<p>Doc example says:</p>
<pre><code>def callback(instance):
    print('The button <%s> is being pressed' % instance.text)

btn1 = Button(text='Hello world 1')
btn1.bind(on_press=callback)
<p>You have to pass a function (not a function call) accepting only 1 argument: instance of the activated widget.</p>
<p>do this using <code>lambda</code> expressions that define functions in-line:</p>
<pre><code>btn.bind(on_press = lambda x : btnsclicked('b'+str(i), url))
</code></pre>
<p>When an event is triggered, the <code>on_press</code> callback is called with one argument: the instance of the widget that fired it. That calls your <code>lambda</code> function, which calls <code>btnsclicked</code>, discarding this argument (which you don't need here) and passing the text id and the url you want to associate with the button.</p>
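<p>One caveat with defining the <code>lambda</code> inside the loop: it closes over the loop variable, so by the time any button is actually pressed every callback sees the <em>final</em> value of <code>i</code> (and <code>url</code>). Freezing the values with default arguments avoids this — a quick pure-Python demonstration:</p>

```python
# Late binding: each lambda looks up i when it is CALLED, after the loop ended.
buggy = [lambda x: 'b' + str(i) for i in range(5)]

# Default arguments are evaluated when the lambda is DEFINED, pinning i.
fixed = [lambda x, i=i: 'b' + str(i) for i in range(5)]

print([f(None) for f in buggy])  # ['b4', 'b4', 'b4', 'b4', 'b4']
print([f(None) for f in fixed])  # ['b0', 'b1', 'b2', 'b3', 'b4']
```

<p>Applied to the Kivy code, the binding would become something like <code>btn.bind(on_press=lambda x, i=i, url=url: btnsclicked('b' + str(i), url))</code>.</p>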
| 1 | 2016-08-24T09:40:53Z | [
"python",
"button",
"event-handling",
"kivy",
"buttonclick"
] |
button on press being called before button press | 39,119,567 | <p>I am trying to add a set of buttons to a grid layout scroll view using for loop. But the on press event is being triggered for all the buttons even before pressing the buttons. How can I fix this?</p>
<pre><code>from kivy.uix.gridlayout import GridLayout
from kivy.uix.button import Button
from kivy.uix.scrollview import ScrollView
from kivy.core.window import Window
from kivy.app import runTouchApp
import webbrowser
def btnsclicked(id, url):
    print 'btn id '+id+' clicked'
    webbrowser.open(url)

layout = GridLayout(cols=1, spacing=10, size_hint_y=None)
layout.bind(minimum_height=layout.setter('height'))

for i in range(5):
    url = 'https://www.google.com'
    btn = Button(text=str(i), size_hint_y=None, height=40, id='b'+str(i))
    btn.bind(on_press=btnsclicked('b'+str(i), url))
    layout.add_widget(btn)

root = ScrollView(size_hint=(1, None), size=(Window.width, Window.height))
root.add_widget(layout)
runTouchApp(root)
</code></pre>
| 0 | 2016-08-24T09:35:42Z | 39,119,790 | <p>For <code>on_press</code>, you call the function instead of specifying the function to use. Try:</p>
<pre><code>btn.bind(on_press=lambda x: btnsclicked('b' + str(i), url))
</code></pre>
<p>If you want to deal with arguments, see <a href="https://kivy.org/docs/api-kivy.event.html" rel="nofollow">Event dispatcher</a>:</p>
<pre><code>def btnsclicked(*args):
    id_, url = args
    print('btn id {} clicked'.format(id_))
    webbrowser.open(url)

btn.bind(on_press=btnsclicked)
</code></pre>
| 2 | 2016-08-24T09:45:31Z | [
"python",
"button",
"event-handling",
"kivy",
"buttonclick"
] |
django bootstrap modal form redirect on validation failed | 39,119,619 | <p>I have a Bootstrap modal which contains a form. When form validation fails, it redirects to the original form page. How can I make the server send the response with the errors into my Bootstrap modal?</p>
<p>urls.py</p>
<pre><code>url(r'^add/$', CreateCarView.as_view(), name='add_auto'),
</code></pre>
<p>views.py </p>
<pre><code>class CreateCarView(CreateView):
    model = Car
    template_name = 'automobiles/automobiles_add.html'
    fields = '__all__'

    def get_success_url(self):
        messages.success(self.request, 'Car added successfully')
        return reverse('home')

    def post(self, request, *args, **kwargs):
        if request.POST.get('cancel_button'):
            messages.info(self.request, 'Adding cancelled')
            return HttpResponseRedirect(reverse('home'))
        else:
            return super(CreateCarView, self).post(request, *args, **kwargs)
</code></pre>
<p>base.html</p>
<pre><code><div class="row">
<div class="col-xs-4">
<button type="button" class="btn btn-warning" data-toggle="modal" data-target="#myModal"
data-remote="{% url 'add_auto' %}">
Open Modal
</button>
<div id="myModal" class="modal fade" role="dialog">
<div class="modal-dialog modal-sm">
<!-- Modal content-->
<div class="modal-content">
</div>
</div>
</div>
</div>
</div>
</code></pre>
<p>automobiles_add.html</p>
<pre><code><form id="add_item" class="form" role="form" action="{% url 'add_auto' %}" method="post">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button>
<h4 class="modal-title">Add car</h4>
</div>
<div class="modal-body">
{% csrf_token %}
{{ form.as_p }}
</div>
<div class="modal-footer">
<input class="btn btn-success" type="submit" value="Add car" name="add_button">
<input class="btn btn-danger" type="submit" value="Cancel" name="cancel_button">
</div>
</code></pre>
<p></p>
| 0 | 2016-08-24T09:37:44Z | 39,121,419 | <p>This may look like a hack, but you could manually launch the modal if there are errors on the form, something like:</p>
<pre><code><form id="add_item" class="form" role="form" action="{% url 'add_auto' %}" method="post">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button>
<h4 class="modal-title">Add car</h4>
</div>
<div class="modal-body">
{% csrf_token %}
{{ form.as_p }}
</div>
<div class="modal-footer">
<input class="btn btn-success" type="submit" value="Add car" name="add_button">
<input class="btn btn-danger" type="submit" value="Cancel" name="cancel_button">
</div>
{% if form.errors %}
<script>
$(function() {
$('#myModal').modal({show: true});
});
</script>
{% endif %}
</code></pre>
<p>or something like that.</p>
<p>Hope this helps</p>
| 0 | 2016-08-24T10:57:44Z | [
"python",
"django",
"bootstrap-modal"
] |
django bootstrap modal form redirect on validation failed | 39,119,619 | <p>I have a Bootstrap modal which contains a form. When form validation fails, it redirects to the original form page. How can I make the server send the response with the errors into my Bootstrap modal?</p>
<p>urls.py</p>
<pre><code>url(r'^add/$', CreateCarView.as_view(), name='add_auto'),
</code></pre>
<p>views.py </p>
<pre><code>class CreateCarView(CreateView):
    model = Car
    template_name = 'automobiles/automobiles_add.html'
    fields = '__all__'

    def get_success_url(self):
        messages.success(self.request, 'Car added successfully')
        return reverse('home')

    def post(self, request, *args, **kwargs):
        if request.POST.get('cancel_button'):
            messages.info(self.request, 'Adding cancelled')
            return HttpResponseRedirect(reverse('home'))
        else:
            return super(CreateCarView, self).post(request, *args, **kwargs)
</code></pre>
<p>base.html</p>
<pre><code><div class="row">
<div class="col-xs-4">
<button type="button" class="btn btn-warning" data-toggle="modal" data-target="#myModal"
data-remote="{% url 'add_auto' %}">
Open Modal
</button>
<div id="myModal" class="modal fade" role="dialog">
<div class="modal-dialog modal-sm">
<!-- Modal content-->
<div class="modal-content">
</div>
</div>
</div>
</div>
</div>
</code></pre>
<p>automobiles_add.html</p>
<pre><code><form id="add_item" class="form" role="form" action="{% url 'add_auto' %}" method="post">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button>
<h4 class="modal-title">Add car</h4>
</div>
<div class="modal-body">
{% csrf_token %}
{{ form.as_p }}
</div>
<div class="modal-footer">
<input class="btn btn-success" type="submit" value="Add car" name="add_button">
<input class="btn btn-danger" type="submit" value="Cancel" name="cancel_button">
</div>
</code></pre>
<p></p>
| 0 | 2016-08-24T09:37:44Z | 39,122,111 | <p>When you submit, a normal form submission happens. Instead, you can post using AJAX; this is an example AJAX snippet:</p>
<pre><code>$(document).on('submit', '#add_item', function() {
    $.ajax({
        type: $(this).attr('method'),
        url: this.action,
        data: $(this).serialize(),
        context: this,
        success: function(data, status) {
            location.reload();
        },
        error: function (request, type, errorThrown) {
            $('#add_item').html(request.responseText);
        }
    });
    return false;
})
</code></pre>
| 0 | 2016-08-24T11:29:17Z | [
"python",
"django",
"bootstrap-modal"
] |
Import functions on-the-fly without loading all "sibling" modules | 39,119,681 | <p>I have the following package structure:</p>
<pre><code>pkg/
__init__.py
f1.py
f2.py
</code></pre>
<p>Each of the <code>fX.py</code> files contains a function with the same name as the file (e.g. <code>f1.py</code> contains the definition of <code>f1</code>). Inside <code>__init__.py</code>, I do the following:</p>
<pre><code># -*- encoding: utf-8 -*-
import importlib
import os

module_path = os.path.dirname(__file__)

for fn in os.listdir(module_path):
    fname, fext = os.path.splitext(fn)
    if fname != '__init__' and fext == '.py':
        m = importlib.import_module('.' + fname, __name__)
        globals()[fname] = getattr(m, fname)
</code></pre>
<p>With the above, I can import the functions like so:</p>
<pre><code>from pkg import f1
</code></pre>
<p>And I can split the definitions of <code>f1</code>, <code>f2</code>, ..., in separated files. The problem here is that whatever the function I import (e.g. <code>f1</code>), the <code>__init__.py</code> will also load the other modules (e.g. <code>f2</code>).</p>
<p>Is there some way to "catch" the import instruction inside the <code>__init__.py</code> and load only the required module?</p>
<p>If there is a better way to achieve what I want than what I am currently doing, feel free to present it!</p>
<hr>
<p>Full example:</p>
<pre><code># pkg/__init__.py
import importlib
import os

module_path = os.path.dirname(__file__)

for fn in os.listdir(module_path):
    fname, fext = os.path.splitext(fn)
    if fname != '__init__' and fext == '.py':
        m = importlib.import_module('.' + fname, __name__)
        globals()[fname] = getattr(m, fname)

# pkg/f1.py
print('I am in f1.')

def f1(): pass

# pkg/f2.py
print('I am in f2.')

def f2(): pass
</code></pre>
<p>Current behavior:</p>
<pre><code>>>> from pkg import f1
I am in f1.
I am in f2. # Ko, f2 loaded...
>>> f1
<function f1 at 0x000000000270AF28>
>>> f2 # Ok, f2 not imported
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'f2' is not defined
</code></pre>
<p>Expected behavior:</p>
<pre><code>>>> from pkg import f1
I am in f1. # Ok, only f1 loaded...
>>> f1
<function f1 at 0x000000000270AF28>
>>> f2 # Ok, f2 not imported
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'f2' is not defined
</code></pre>
| 1 | 2016-08-24T09:40:04Z | 39,120,507 | <p>Python imports only what you instruct it to import. The correct syntax here would be:</p>
<pre><code>from pkg.module import object
</code></pre>
<p>Then if you remove all the code from <code>__init__.py</code>, the following happens:</p>
<pre><code>>>> from pkg.f1 import f1
I am in f1.
>>> f1
<function pkg.f1.f1>
</code></pre>
| 0 | 2016-08-24T10:16:57Z | [
"python",
"python-3.x"
] |
How to change the structure of output.xml from Robot Framework output | 39,119,873 | <p>We are passing three text files to pybot and the output generated is in the hierarchical format. For Example, in Test Statistics section of the report.html file, under Statistics by Suite, <strong>Test 1 & Test 2 & Test 3.Test 1</strong> is observed. It was changed to display just Test 1 by editing the report.html template file under /usr/local/lib/python2.7/dist-packages/robot/htmldata/rebot . </p>
<p>The log.html template file was also changed to get the desired output, like above, in the log.html file. But I cannot find where to make the change to get the desired output in the output.xml file. The output.xml file still has the format Test 1 & Test 2 & Test 3.Test 1. Could someone help to resolve this?</p>
| 0 | 2016-08-24T09:49:03Z | 39,122,581 | <p>There is no way to modify the format of the output.xml file that was generated by robot. You have a couple of choices. </p>
<p>First, you can post-process output.xml with xslt or any other tool to transform it into whatever format you want. It's a very simple structure that is easy to parse. </p>
<p>Your second option is to ignore the output.xml and write your own using the <a href="http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#listener-interface" rel="nofollow">listener interface</a>. Through the listener interface you can get a callback for every suite, testcase and keyword where you can write your own output in whatever format you like.</p>
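<p>A rough sketch of such a listener (the class and file names are made up; in the v2 listener API each callback receives the item name plus an attribute dict, whose documented keys include <code>status</code> and <code>elapsedtime</code>):</p>

```python
class CsvOutputListener(object):
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, path='custom_output.csv'):
        self._path = path
        self._rows = []

    def end_test(self, name, attrs):
        # attrs is the dict Robot Framework passes to v2 listeners.
        self._rows.append((name, attrs['status'], attrs['elapsedtime']))

    def close(self):
        # Called once when the whole run is over.
        with open(self._path, 'w') as f:
            for name, status, elapsed in self._rows:
                f.write('%s,%s,%d\n' % (name, status, elapsed))
```

<p>You would attach it with something like <code>pybot --listener /path/to/CsvOutputListener.py tests/</code> (the listener file path syntax is described in the user guide).</p>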
| 2 | 2016-08-24T11:52:40Z | [
"python",
"xml",
"robotframework"
] |
Saving a Video in OpenCV (Linux, Python) | 39,119,893 | <p>There is an example that works on Windows (<a href="https://pythonprogramming.net/loading-video-python-opencv-tutorial/" rel="nofollow">original</a>):</p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture(0)
fourcc = cv2.VideoWriter_fourcc(*'XDIV')
out = cv2.VideoWriter('output.avi',fourcc, 20.0, (640,480))
while(True):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out.write(frame)
    cv2.imshow('frame',gray)
    if cv2.waitKey(0) & 0xFF == ord('q'):
        break
cap.release()
out.release()
cv2.destroyAllWindows()
</code></pre>
<p>On Linux, the program gives: </p>
<blockquote>
<p>AttributeError: 'module' object has no attribute 'VideoWriter_fourcc'</p>
</blockquote>
<p>Help make it work in Linux.</p>
| 1 | 2016-08-24T09:49:58Z | 39,120,292 | <p>I didn't try but this should work : </p>
<pre><code>fourcc = cv2.cv.CV_FOURCC(*'XVID')
</code></pre>
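<p>For reference, the FOURCC value is nothing magic — it is just the four characters packed little-endian into a 32-bit integer, which is what both <code>cv2.VideoWriter_fourcc</code> (OpenCV 3.x) and <code>cv2.cv.CV_FOURCC</code> (OpenCV 2.x) compute. A pure-Python sketch of the same calculation:</p>

```python
def fourcc(c1, c2, c3, c4):
    # Four-character code packed little-endian into one 32-bit integer.
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

print(fourcc(*'XVID'))  # 1145656920
```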
| 0 | 2016-08-24T10:06:31Z | [
"python",
"linux",
"opencv"
] |
Celery tasks with the same name | 39,119,949 | <p>I am implementing a file tree walking script that calls tasks based on the file extension or directory name.</p>
<p>My walking code looks like this:</p>
<pre><code>from celery import Celery
import os
app = Celery('tasks', broker='amqp://guest@localhost//')
app.conf.update(
    CELERY_DEFAULT_EXCHANGE = 'path_walker',
    CELERY_DEFAULT_EXCHANGE_TYPE = 'topic',
)

for root, dirs, files in os.walk("/"):
    for filename in files:
        name, ext = os.path.splitext(filename)
        if ext:
            app.send_task("process", args=(os.path.join(root, filename),), routing_key="file" + ext)
    for dirname in dirs:
        app.send_task("process", args=(dirname,), routing_key="directory." + dirname)
</code></pre>
<p>You can see that I'm calling the same task (<code>process</code>) but with different <code>routing_key</code>s.</p>
<p>In my worker I have:</p>
<pre><code>from celery import Celery
from kombu import Queue, Exchange
import uuid
app = Celery(broker='amqp://guest@localhost//')
file_queue = Queue(str(uuid.uuid4()), routing_key="file.py")
dir_queue = Queue(str(uuid.uuid4()), routing_key="directory.tmp")
app.conf.update(
    CELERY_DEFAULT_EXCHANGE="path_walker",
    CELERY_DEFAULT_EXCHANGE_TYPE="topic",
    CELERY_QUEUES=(
        dir_queue,
        file_queue,
    ),
)

@app.task(name="process", ignore_result=True, queue=dir_queue)
def process_dir(dir_name):
    print("Found a tmp dir: {}".format(dir_name))

@app.task(name="process", ignore_result=True, queue=file_queue)
def process_file(file_name):
    print("Found a python file: {}".format(file_name))
</code></pre>
<p>The above code creates two queues with different routing keys. The two tasks then bind to the individual queues but when I run the tree walker, only the second task (the <code>process_file</code> function) gets called.</p>
<p>Is it possible to have tasks with the same name but on different queues, run by the same worker. Or do I need to only have a single task per worker if I want to stick with this approach?</p>
| 0 | 2016-08-24T09:52:25Z | 39,121,211 | <p>Answering my own question:</p>
<p>It's not possible to have two tasks with the same name in a single celery app. I could split the above into two separate apps and run them with different workers, or provide unique names for the tasks.</p>
<p>The key bit of code in celery is here:</p>
<p><a href="https://github.com/celery/celery/blob/8455b0c56797c22ba52abf59f4467ccc19eb9d20/celery/app/base.py" rel="nofollow">https://github.com/celery/celery/blob/8455b0c56797c22ba52abf59f4467ccc19eb9d20/celery/app/base.py</a></p>
| 0 | 2016-08-24T10:47:45Z | [
"python",
"celery"
] |
How can I combine two FITS tables into a single table in new fits file? | 39,120,050 | <p>I have two fits file data (file1.fits and file2.fits). The first one (file1.fits) consists of 80,700 important rows of data and another one is 140,000 rows. The both of them have the same Header. </p>
<pre><code>$ python
>>> import pyfits
>>> f1 = pyfits.open('file1.fits')
>>> f2 = pyfits.open('file2.fits')
>>> event1 = f1[1].data
>>> event2 = f2[1].data
>>> len(event1)
80700
>>> len(event2)
140000
</code></pre>
<p>How can I combine file1.fits and file2.fits into new fits file (newfile.fits) with the same header as the old ones and the total number of rows of newfile.fits is 80,700+ 140,000 = 220,700 ?</p>
| 0 | 2016-08-24T09:57:10Z | 39,123,896 | <p>I tried with <a href="http://www.astropy.org/" rel="nofollow">astropy</a>:</p>
<pre><code>from astropy.table import Table, hstack
t1 = Table.read('file1.fits', format='fits')
t2 = Table.read('file2.fits', format='fits')
new = hstack([t1, t2])
new.write('combined.fits')
</code></pre>
<p>It seems to work with samples from NASA.</p>
| 1 | 2016-08-24T12:53:03Z | [
"python",
"fits",
"pyfits"
] |
Except for non gmail addresses, how do i make gmail addresses to be strictly ending with '@gmail.com' (and not something as '@gmail.com.au', etc) | 39,120,263 | <p>How can I validate an email address in a form so that a gmail address must end exactly with '@gmail.com', raising a validation error if the domain and extension are anything else?
For example, I currently get an error if the email address ends with @gmail.au.com</p>
<pre><code>from django import forms
from .models import User

class RegisterForm(forms.ModelForm):
    class Meta:
        model = User
        fields = ['first_name', 'last_name', 'email', 'country']

    def clean_email(self):
        email = self.cleaned_data.get('email')
        email_base, provider = email.split('@')
        domain, extension = provider.split('.')
        if domain == 'gmail' and extension != 'com':
            raise forms.ValidationError('Please enter a valid gmail address.')
        return email
</code></pre>
| -2 | 2016-08-24T10:05:20Z | 39,120,504 | <p>You can just check whether or not the email includes <code>@gmail</code> and ends with <code>@gmail.com</code>; there's no need to split it at all:</p>
<pre><code>if '@gmail' in email and not email.endswith('@gmail.com'):
</code></pre>
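<p>A quick sanity check of that condition (the function wrapper is just for illustration):</p>

```python
def is_valid_gmail(email):
    # Reject only addresses that claim to be gmail but don't end in @gmail.com.
    return not ('@gmail' in email and not email.endswith('@gmail.com'))

print(is_valid_gmail('user@gmail.com'))     # True
print(is_valid_gmail('user@gmail.com.au'))  # False
print(is_valid_gmail('user@example.org'))   # True, non-gmail addresses pass through
```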
| 4 | 2016-08-24T10:16:54Z | [
"python",
"django"
] |
Except for non gmail addresses, how do i make gmail addresses to be strictly ending with '@gmail.com' (and not something as '@gmail.com.au', etc) | 39,120,263 | <p>How can I validate an email address in a form so that a gmail address must end exactly with '@gmail.com', raising a validation error if the domain and extension are anything else?
For example, I currently get an error if the email address ends with @gmail.au.com</p>
<pre><code>from django import forms
from .models import User

class RegisterForm(forms.ModelForm):
    class Meta:
        model = User
        fields = ['first_name', 'last_name', 'email', 'country']

    def clean_email(self):
        email = self.cleaned_data.get('email')
        email_base, provider = email.split('@')
        domain, extension = provider.split('.')
        if domain == 'gmail' and extension != 'com':
            raise forms.ValidationError('Please enter a valid gmail address.')
        return email
</code></pre>
| -2 | 2016-08-24T10:05:20Z | 39,121,174 | <p>There are multiple ways to achieve this. Below is a change to your code that will make it run; build on it, explore, and let us know if you run into issues :)</p>
<pre><code>>>> provider = 'gmail.com.au'
>>> provider_list = provider.split('.')
>>> domain, extension = provider_list[0], '.'.join(provider_list[1:])
</code></pre>
| 0 | 2016-08-24T10:46:11Z | [
"python",
"django"
] |
Organise large dataset into separate lines | 39,120,276 | <p>I have a large raw data set which I want to organise into separate lines. The data is delimited. I want to organise so there are 8 delimiters in one line followed by a location, then the new line. </p>
<p>Raw Data:</p>
<p>468|2016-06-17||Mobile|responsive|sport|sport.football.england||London 468|2016-06-16||Mobile|responsive|sport|sport.football.european||Yorkshire and the Humber 468|2016-06-18||Mobile|responsive|sport|sport.football.england||London </p>
<p>Desired Output:</p>
<p>468|2016-06-17||Mobile|responsive|sport|sport.football.england||London</p>
<p>468|2016-06-16||Mobile|responsive|sport|sport.football.european||Yorkshire and the Humber</p>
<p>468|2016-06-18||Mobile|responsive|sport|sport.football.england||London</p>
<p>following help from akash karothiya I now have this</p>
<pre><code>data = open("raw_data.txt", "r")
new = []
for i in data.read().split(' '):
    if '|' in i:
        new.append(i)
    else:
        new.append(str(new[-1]) + ' ' + i)
        new.remove(new[-2])
print(new)
</code></pre>
<p>but this results in the \n being printed instead of a new line, why? In this example, Yorkshire and the Humber should be at the end of one line:</p>
<p>['468|2016-06-17||Mobile|responsive|sport|sport.football.international.england.story.36558237.page||london\n468|2016-07-03||Mobile|responsive|sport|sport.football.european_championship.2016.media_asset.36695497.page||London\n06b|2016-06-21||Computer|responsive|news|news.page|news|yorkshire and the', 'humber\n468|2016-06-18||Mobile|responsive|sport|sport.football.international.england.story.36558237.page||london']</p>
| 0 | 2016-08-24T10:05:48Z | 39,122,413 | <p>You may try this :</p>
<pre><code>data = '''468|2016-06-17||Mobile|responsive|sport|sport.football.england||london 468|2016-06-16||Mobile|responsive|sport|sport.football.european||west midlands 468|2016-06-17||Mobile|responsive|sport|sport.football.england||india'''
new = []
for i in data.split(' '):
    if '|' in i:
        new.append(i)
    else:
        new.append(str(new[-1]) + ' ' + i)
        new.remove(new[-2])
print(new)
['468|2016-06-17||Mobile|responsive|sport|sport.football.england||london',
'468|2016-06-16||Mobile|responsive|sport|sport.football.european||west midlands',
'468|2016-06-17||Mobile|responsive|sport|sport.football.england||india']
</code></pre>
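If the records can be separated by any whitespace (including the `\n` characters the OP mentions), a regex split may be more robust than splitting on a single space — a sketch, assuming every record starts with a run of digits followed by `|`:

```python
import re

raw = ("468|2016-06-17||Mobile|responsive|sport|sport.football.england||London "
       "468|2016-06-16||Mobile|responsive|sport|sport.football.european||Yorkshire and the Humber\n"
       "468|2016-06-18||Mobile|responsive|sport|sport.football.england||London")

# split on any whitespace that is immediately followed by the start of
# the next record ("<digits>|"), so spaces inside location names are kept
records = re.split(r'\s+(?=\d+\|)', raw)
for record in records:
    print(record)
```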
| 0 | 2016-08-24T11:44:00Z | [
"python",
"token"
] |
Removing non-alphanumeric characters with bash or python | 39,120,328 | <p>I have many files like these (please see the screenshot): </p>
<pre><code>30.230201521829.jpg
Mens-Sunglasses_L.180022111040.jpg
progressive-sunglasses.180041285287.jpg
Atmosphere.222314222509.jpg
Womens-Sunglasses-L.180023271958.jpg
DAILY ESSENTIALS.211919012115.jpg
aviator-l.Sunglasses.240202216759.jpg
aviator-l.Sunglasses.women.240202218530.jpg
</code></pre>
<p>I want to rename them to the following:</p>
<pre><code>230201521829.jpg
180022111040.jpg
180041285287.jpg
222314222509.jpg
172254027299.jpg
211919012115.jpg
240202216759.jpg
240202218530.jpg
</code></pre>
<p>230201521829, 180022111040, 180041285287, etc. are timestamps.<br>
<strong>Ensure that the final file name looks like "timestamp.jpg".</strong></p>
<p>But I am not able to write the script any further.<br>
Can a <strong>sed</strong> (Bash) command or <strong>Python</strong> be used to do it?<br>
Could you give me an example? Thanks. </p>
<p><img src="http://i.stack.imgur.com/T96Hd.png" alt="enter image description here"></p>
| 0 | 2016-08-24T10:08:12Z | 39,120,374 | <pre><code>#!/bin/bash
echo "test: "
echo "" > 30.230201521829.jpg
echo "" > Mens-Sunglasses_L.180022111040.jpg
echo "" > progressive-sunglasses.180041285287.jpg
echo "" > Atmosphere.222314222509.jpg
echo "" > Womens-Sunglasses-L.180023271958.jpg
echo "" > DAILY\ ESSENTIALS.211919012115.jpg
echo "" > aviator-l.Sunglasses.240202216759.jpg
echo "" > aviator-l.Sunglasses.women.240202218530.jpg
echo "before: "
ls -ltr
for f in *.jpg; do
    renamed=${f: -16}
    mv "${f}" "${renamed}"
done
</code></pre>
| -2 | 2016-08-24T10:10:19Z | [
"python",
"bash",
"sed"
] |
Removing non-alphanumeric characters with bash or python | 39,120,328 | <p>I have many files like these (please see the screenshot): </p>
<pre><code>30.230201521829.jpg
Mens-Sunglasses_L.180022111040.jpg
progressive-sunglasses.180041285287.jpg
Atmosphere.222314222509.jpg
Womens-Sunglasses-L.180023271958.jpg
DAILY ESSENTIALS.211919012115.jpg
aviator-l.Sunglasses.240202216759.jpg
aviator-l.Sunglasses.women.240202218530.jpg
</code></pre>
<p>I want to rename them to the following:</p>
<pre><code>230201521829.jpg
180022111040.jpg
180041285287.jpg
222314222509.jpg
172254027299.jpg
211919012115.jpg
240202216759.jpg
240202218530.jpg
</code></pre>
<p>230201521829, 180022111040, 180041285287, etc. are timestamps.<br>
<strong>Ensure that the final file name looks like "timestamp.jpg".</strong></p>
<p>But I am not able to write the script any further.<br>
Can a <strong>sed</strong> (Bash) command or <strong>Python</strong> be used to do it?<br>
Could you give me an example? Thanks. </p>
<p><img src="http://i.stack.imgur.com/T96Hd.png" alt="enter image description here"></p>
| 0 | 2016-08-24T10:08:12Z | 39,120,513 | <p>Using command substitution to rename the files. The following code loops over the current directory's jpg files (unless the path is modified). </p>
<p>Awk is used to pick out the penultimate and last fields of the file name, which are separated by ".". </p>
<pre><code>for file in *.jpg
do
    mv "$file" "$(echo "$file" | awk -F'.' '{print $(NF-1) "." $NF}')"
done
</code></pre>
| 1 | 2016-08-24T10:17:19Z | [
"python",
"bash",
"sed"
] |
Removing non-alphanumeric characters with bash or python | 39,120,328 | <p>I have many files like these (please see the screenshot): </p>
<pre><code>30.230201521829.jpg
Mens-Sunglasses_L.180022111040.jpg
progressive-sunglasses.180041285287.jpg
Atmosphere.222314222509.jpg
Womens-Sunglasses-L.180023271958.jpg
DAILY ESSENTIALS.211919012115.jpg
aviator-l.Sunglasses.240202216759.jpg
aviator-l.Sunglasses.women.240202218530.jpg
</code></pre>
<p>I want to rename them to the following:</p>
<pre><code>230201521829.jpg
180022111040.jpg
180041285287.jpg
222314222509.jpg
172254027299.jpg
211919012115.jpg
240202216759.jpg
240202218530.jpg
</code></pre>
<p>230201521829, 180022111040, 180041285287, etc. are timestamps.<br>
<strong>Ensure that the final file name looks like "timestamp.jpg".</strong></p>
<p>But I am not able to write the script any further.<br>
Can a <strong>sed</strong> (Bash) command or <strong>Python</strong> be used to do it?<br>
Could you give me an example? Thanks. </p>
<p><img src="http://i.stack.imgur.com/T96Hd.png" alt="enter image description here"></p>
| 0 | 2016-08-24T10:08:12Z | 39,121,119 | <p>Using <a href="http://www.unix.com/man-page/linux/1/prename/" rel="nofollow">perl rename</a> one-liner:</p>
<pre><code>$ touch 30.230201521829.jpg Mens-Sunglasses_L.180022111040.jpg progressive-sunglasses.180041285287.jpg Atmosphere.222314222509.jpg Womens-Sunglasses-L.180023271958.jpg Womens-Eyeglasses-R.172254027299.jpg
$ ls -1
30.230201521829.jpg
Atmosphere.222314222509.jpg
Mens-Sunglasses_L.180022111040.jpg
progressive-sunglasses.180041285287.jpg
Womens-Eyeglasses-R.172254027299.jpg
Womens-Sunglasses-L.180023271958.jpg
$ prename -v 's/^[^.]*\.//' *.*.jpg
30.230201521829.jpg renamed as 230201521829.jpg
Atmosphere.222314222509.jpg renamed as 222314222509.jpg
Mens-Sunglasses_L.180022111040.jpg renamed as 180022111040.jpg
progressive-sunglasses.180041285287.jpg renamed as 180041285287.jpg
Womens-Eyeglasses-R.172254027299.jpg renamed as 172254027299.jpg
Womens-Sunglasses-L.180023271958.jpg renamed as 180023271958.jpg
</code></pre>
| 0 | 2016-08-24T10:43:40Z | [
"python",
"bash",
"sed"
] |
Removing non-alphanumeric characters with bash or python | 39,120,328 | <p>I have many files like these (please see the screenshot): </p>
<pre><code>30.230201521829.jpg
Mens-Sunglasses_L.180022111040.jpg
progressive-sunglasses.180041285287.jpg
Atmosphere.222314222509.jpg
Womens-Sunglasses-L.180023271958.jpg
DAILY ESSENTIALS.211919012115.jpg
aviator-l.Sunglasses.240202216759.jpg
aviator-l.Sunglasses.women.240202218530.jpg
</code></pre>
<p>I want to rename them to the following:</p>
<pre><code>230201521829.jpg
180022111040.jpg
180041285287.jpg
222314222509.jpg
172254027299.jpg
211919012115.jpg
240202216759.jpg
240202218530.jpg
</code></pre>
<p>230201521829, 180022111040, 180041285287, etc. are timestamps.<br>
<strong>Ensure that the final file name looks like "timestamp.jpg".</strong></p>
<p>But I am not able to write the script any further.<br>
Can a <strong>sed</strong> (Bash) command or <strong>Python</strong> be used to do it?<br>
Could you give me an example? Thanks. </p>
<p><img src="http://i.stack.imgur.com/T96Hd.png" alt="enter image description here"></p>
| 0 | 2016-08-24T10:08:12Z | 39,124,055 | <p>You can use parameter expansion to strip off the extension, then
remove all but the last <code>.</code>-delimited field from the remaining name. After that, you can reapply the extension.</p>
<pre><code>for f in *; do
    ext=${f##*.}
    base=${f%.$ext}
    mv -- "$f" "${base##*.}.$ext"
done
</code></pre>
<p>The first line sets <code>ext</code> to the string following the last <code>.</code>. The second line sets <code>base</code> to the string that precedes the last <code>.</code> (by removing the last <code>.</code> and whatever <code>$ext</code> was set to). The third line constructs a new file name by first removing everything up to, and including, the final <code>.</code> in <code>base</code>, then reapplying the extension to the result.</p>
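The same idea can be mirrored in Python with <code>rsplit</code>, for anyone who prefers it over shell parameter expansion (a sketch of mine — the sample file name is taken from the question):

```python
name = "aviator-l.Sunglasses.women.240202218530.jpg"

stem, ext = name.rsplit('.', 1)       # like ${f##*.} / ${f%.$ext}
timestamp = stem.rsplit('.', 1)[-1]   # like ${base##*.}
new_name = timestamp + '.' + ext
print(new_name)  # 240202218530.jpg
```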
| 0 | 2016-08-24T13:00:30Z | [
"python",
"bash",
"sed"
] |
Removing non-alphanumeric characters with bash or python | 39,120,328 | <p>I have many files like these (please see the screenshot): </p>
<pre><code>30.230201521829.jpg
Mens-Sunglasses_L.180022111040.jpg
progressive-sunglasses.180041285287.jpg
Atmosphere.222314222509.jpg
Womens-Sunglasses-L.180023271958.jpg
DAILY ESSENTIALS.211919012115.jpg
aviator-l.Sunglasses.240202216759.jpg
aviator-l.Sunglasses.women.240202218530.jpg
</code></pre>
<p>I want to rename them to the following:</p>
<pre><code>230201521829.jpg
180022111040.jpg
180041285287.jpg
222314222509.jpg
172254027299.jpg
211919012115.jpg
240202216759.jpg
240202218530.jpg
</code></pre>
<p>230201521829, 180022111040, 180041285287, etc. are timestamps.<br>
<strong>Ensure that the final file name looks like "timestamp.jpg".</strong></p>
<p>But I am not able to write the script any further.<br>
Can a <strong>sed</strong> (Bash) command or <strong>Python</strong> be used to do it?<br>
Could you give me an example? Thanks. </p>
<p><img src="http://i.stack.imgur.com/T96Hd.png" alt="enter image description here"></p>
| 0 | 2016-08-24T10:08:12Z | 39,136,580 | <p>I use Python.</p>
<p>Example:</p>
<pre><code>import os

pth = r"C:\Users\Test"  # raw string so the backslashes are not treated as escapes
dir_show = os.listdir(pth)
for list_file in dir_show:
    if list_file.lower().endswith(".jpg"):
        (shrname, exts) = os.path.splitext(list_file)
        path = os.path.join(pth, list_file)
        # rfind keeps only the part after the *last* dot, i.e. the timestamp
        newname = os.path.join(pth, shrname[shrname.rfind(".") + 1:] + exts)
        os.rename(path, newname)
</code></pre>
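Alternatively, since the target is always the numeric timestamp, a regex can pull it out directly — a sketch of mine (the 12-digit assumption comes from the examples in the question):

```python
import re

names = [
    "30.230201521829.jpg",
    "Mens-Sunglasses_L.180022111040.jpg",
    "aviator-l.Sunglasses.women.240202218530.jpg",
]
for name in names:
    m = re.search(r'(\d{12})\.jpg$', name, re.IGNORECASE)
    if m:  # skip files that do not match the expected pattern
        print(m.group(1) + '.jpg')
```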
| 1 | 2016-08-25T04:09:50Z | [
"python",
"bash",
"sed"
] |
Python regex match until certain word after indentation | 39,120,346 | <p>Given the following string or similar:</p>
<pre><code>baz: bar
key: >
lorem ipsum 1213 __ ^123
lorem ipsum
foo:bar
anotherkey: >
lorem ipsum 1213 __ ^123
lorem ipsum
</code></pre>
<p>I am trying to build a REGEX which captures all values after a key followed by a <code>></code> sign. </p>
<p>So for the above example, I want to match from <code>key</code> to <code>foo</code> (excluding) and then from <code>anotherkey</code> to the end. I managed to come up with a REGEX which does the job, but only if I know the name of <code>foo</code>:</p>
<pre><code>\w+:\s>\n\s+[\S+\s+]+(?=foo)
</code></pre>
<p>But this is not really a good solution. If I remove <code>?=foo</code> then the match will include everything to the end of the string.
How can I fix this regex to match the values after <code>></code> as described?</p>
| 0 | 2016-08-24T10:08:59Z | 39,120,436 | <p>You can tweak your regex to this:</p>
<pre><code>(\w+:\s+>\n\s+[\S\s]+?)(?=\n\w+:\w+\n|\Z)
</code></pre>
<p><a href="https://regex101.com/r/fU7wC3/1" rel="nofollow">RegEx Demo</a></p>
<p>Lookahead <code>(?=\n\w+:\w+\n|\Z)</code> will assert presence of <code>key:value</code> or end of input (<code>\Z</code>) after your non-greedy match.</p>
<p>Alternatively this better performing regex can be used (thanks to Wiktor for the helpful comments below):</p>
<pre><code>\w+:\s+>\n(.*(?:\n(?!\n\w+:\w+\n).*)+)
</code></pre>
<p><a href="https://regex101.com/r/fU7wC3/3" rel="nofollow">RegEx Demo 2</a></p>
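For completeness, here is a sketch of running the first pattern from Python against sample input modeled on the question:

```python
import re

text = (
    "baz: bar\n"
    "key: >\n"
    "  lorem ipsum 1213 __ ^123\n"
    "  lorem ipsum\n"
    "foo:bar\n"
    "anotherkey: >\n"
    "  lorem ipsum 1213 __ ^123\n"
    "  lorem ipsum\n"
)

# lazy match up to either the next "key:value" line or the end of input
pattern = r'(\w+:\s+>\n\s+[\S\s]+?)(?=\n\w+:\w+\n|\Z)'
matches = re.findall(pattern, text)
for m in matches:
    print(repr(m))
```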
| 1 | 2016-08-24T10:13:26Z | [
"python",
"regex"
] |
Python regex match until certain word after indentation | 39,120,346 | <p>Given the following string or similar:</p>
<pre><code>baz: bar
key: >
lorem ipsum 1213 __ ^123
lorem ipsum
foo:bar
anotherkey: >
lorem ipsum 1213 __ ^123
lorem ipsum
</code></pre>
<p>I am trying to build a REGEX which captures all values after a key followed by a <code>></code> sign. </p>
<p>So for the above example, I want to match from <code>key</code> to <code>foo</code> (excluding) and then from <code>anotherkey</code> to the end. I managed to come up with a REGEX which does the job, but only if I know the name of <code>foo</code>:</p>
<pre><code>\w+:\s>\n\s+[\S+\s+]+(?=foo)
</code></pre>
<p>But this is not really a good solution. If I remove <code>?=foo</code> then the match will include everything to the end of the string.
How can I fix this regex to match the values after <code>></code> as described?</p>
| 0 | 2016-08-24T10:08:59Z | 39,121,755 | <h2>One</h2>
<p>If you are <strong>not sure whether indentations exist</strong>, then this is the <em>simplest</em> way to achieve the desired result:</p>
<pre><code>^\w+:\s+>(?:\s?[^:]*$)*
</code></pre>
<p><a href="https://regex101.com/r/rM1qZ5/1" rel="nofollow">Live demo</a></p>
<p>Explanation:</p>
<pre><code>^ # Start of line
\w+:\s+> # Match specific block
(?: # Start of non-capturing group (a)
\s? # Match a newline
[^:]*$ # Match rest of line if only it doesn't have a :
)* # End of non-capturing group (a) (zero or more times - greedy)
</code></pre>
<p><em>You need <code>m</code> flag to be on as demonstrated in live demo.</em></p>
<h2>Two - the simplest</h2>
<p>If leading white-spaces are always there, then you can go with this safer regex:</p>
<pre><code>^\w+:\s+>(?:\s?[\t ]+.*)*
</code></pre>
<p><a href="https://regex101.com/r/eD0lW4/3" rel="nofollow">Live demo</a></p>
<p><em><code>m</code> modifier should be set here as well.</em></p>
| 0 | 2016-08-24T11:12:14Z | [
"python",
"regex"
] |
Python regex match until certain word after indentation | 39,120,346 | <p>Given the following string or similar:</p>
<pre><code>baz: bar
key: >
lorem ipsum 1213 __ ^123
lorem ipsum
foo:bar
anotherkey: >
lorem ipsum 1213 __ ^123
lorem ipsum
</code></pre>
<p>I am trying to build a REGEX which captures all values after a key followed by a <code>></code> sign. </p>
<p>So for the above example, I want to match from <code>key</code> to <code>foo</code> (excluding) and then from <code>anotherkey</code> to the end. I managed to come up with a REGEX which does the job, but only if I know the name of <code>foo</code>:</p>
<pre><code>\w+:\s>\n\s+[\S+\s+]+(?=foo)
</code></pre>
<p>But this is not really a good solution. If I remove <code>?=foo</code> then the match will include everything to the end of the string.
How can I fix this regex to match the values after <code>></code> as described?</p>
| 0 | 2016-08-24T10:08:59Z | 39,122,367 | <p>(As per request ;)</p>
<p>You could use something like</p>
<pre><code>^\w+:\s*>\n(?:[ \t].*\n?)+
</code></pre>
<p>(This is without the groups. If you decide you wan't them, see the comments to the question.)</p>
<p>It matches the start of a line (<code>^</code>) followed by at least one word character (<code>\w</code>: A-Z, a-z, 0-9 or '_'; could be changed to <code>[a-z]</code> if only lower case alphas should be allowed).</p>
<p>Then it matches optional spaces (<code>\s*</code>) followed by the <code>></code> <em>key-terminator</em> and a line feed (<code>\n</code>).</p>
<p>Then a non-capturing group (<code>(?:</code>) matching:</p>
<ul>
<li>a space or a tab</li>
<li>followed by any character up to a line feed</li>
<li>an optional line feed</li>
</ul>
<p>This group (matching an indented line) can be repeated any number of times (but must exist at least once - <code>)+</code>).</p>
<p><a href="https://regex101.com/r/hP0aZ5/5" rel="nofollow">See it here at regex101</a>.</p>
| 2 | 2016-08-24T11:41:54Z | [
"python",
"regex"
] |
Python Argparse: Raw string input | 39,120,363 | <p>Apologies if this has been asked before, I did search for it but all hits seemed to be about python raw strings in general rather than regarding argparse.</p>
<p>Anyways, I have a code where the user feeds in a string and then this string is processed. However, I have a problem as I want my code to be able to differentiate between <code>\n</code> and <code>\\n</code> so that the user can control if they get a line break or <code>\n</code> appear in the output (respectively). </p>
<p>This in itself is quite simple, and I can get the logic working to check the string, etc. However, argparse doesn't seem to keep the input string raw. So if I were to write: <code>Here is a list:\nItem 1</code> it gets parsed as <code>Here is a list:\\nItem 1</code>. As the exact same thing gets parsed if I were to replace <code>\n</code> with <code>\\n</code> in the input string, it becomes impossible to differentiate between the two.</p>
<p>I could include a bodge (for example, I could make the user enter say <code>$\n</code> for <code>\n</code> to appear in the output, or just <code>\n</code> for a line break). But this is messy and complicates the usage of the code.</p>
<p>Is there a way to ensure the string being parsed by argparse is raw? (I.e. if I enter <code>\n</code> it parses <code>\n</code> and not <code>\\n</code>)</p>
<p>Again, sorry if this has been asked before, but I couldn't find an answer and after over an hour of trying to find an answer, I'm out of ideas (bar the bodge). Cheers in advance for any and all help.</p>
<p>Example code (sorry if this doesn't work, not sure how best to do example code for argparse!):</p>
<pre><code>import argparse
parser = argparse.ArgumentParser( description = 'Test.' )
parser.add_argument( 'text', action = 'store', type = str, help = 'The text to parse.' )
args = parser.parse_args( )
print( repr( args.text ) )
</code></pre>
| 0 | 2016-08-24T10:09:34Z | 39,120,610 | <p>Here's a possible solution to your problem:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser(description='Test.')
parser.add_argument('text', action='store', type=str, help='The text to parse.')
args = parser.parse_args()
print '-' * 80
raw_text = eval('"' + args.text.replace('"', '\\"') + '"')
print raw_text
print '-' * 80
print args.text
</code></pre>
<p>But be aware of one thing, <a href="http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html" rel="nofollow">eval really is dangerous</a></p>
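A possibly safer alternative (my own sketch, not from the answer above) is to decode the escape sequences without eval, using the unicode_escape codec; note that this codec can mangle non-ASCII input, so it is only a sketch for ASCII arguments:

```python
# decode backslash escapes without eval: "\n" becomes a newline,
# "\\n" becomes a literal backslash followed by n
# (caveat: unicode_escape can mangle non-ASCII input)
user_input = "Here is a list:\\nItem 1"
decoded = user_input.encode('latin-1').decode('unicode_escape')
print(decoded)
```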
| 1 | 2016-08-24T10:21:21Z | [
"python",
"string",
"argparse",
"rawstring"
] |
Python Argparse: Raw string input | 39,120,363 | <p>Apologies if this has been asked before, I did search for it but all hits seemed to be about python raw strings in general rather than regarding argparse.</p>
<p>Anyways, I have a code where the user feeds in a string and then this string is processed. However, I have a problem as I want my code to be able to differentiate between <code>\n</code> and <code>\\n</code> so that the user can control if they get a line break or <code>\n</code> appear in the output (respectively). </p>
<p>This in itself is quite simple, and I can get the logic working to check the string, etc. However, argparse doesn't seem to keep the input string raw. So if I were to write: <code>Here is a list:\nItem 1</code> it gets parsed as <code>Here is a list:\\nItem 1</code>. As the exact same thing gets parsed if I were to replace <code>\n</code> with <code>\\n</code> in the input string, it becomes impossible to differentiate between the two.</p>
<p>I could include a bodge (for example, I could make the user enter say <code>$\n</code> for <code>\n</code> to appear in the output, or just <code>\n</code> for a line break). But this is messy and complicates the usage of the code.</p>
<p>Is there a way to ensure the string being parsed by argparse is raw? (I.e. if I enter <code>\n</code> it parses <code>\n</code> and not <code>\\n</code>)</p>
<p>Again, sorry if this has been asked before, but I couldn't find an answer and after over an hour of trying to find an answer, I'm out of ideas (bar the bodge). Cheers in advance for any and all help.</p>
<p>Example code (sorry if this doesn't work, not sure how best to do example code for argparse!):</p>
<pre><code>import argparse
parser = argparse.ArgumentParser( description = 'Test.' )
parser.add_argument( 'text', action = 'store', type = str, help = 'The text to parse.' )
args = parser.parse_args( )
print( repr( args.text ) )
</code></pre>
| 0 | 2016-08-24T10:09:34Z | 39,128,697 | <p>As noted in the comments, <code>argparse</code> works with <code>sys.argv</code>, a list which is produced by the shell and Python interpreter. </p>
<p>With a simple <code>argv</code> echo script:</p>
<pre><code>0928:~/mypy$ cat echo_argv.py
import sys
print(sys.argv)
</code></pre>
<p>I get (with a bash shell):</p>
<pre><code>0929:~/mypy$ python echo_argv.py Here is a list:\nItem 1
['echo_argv.py', 'Here', 'is', 'a', 'list:nItem', '1']
0931:~/mypy$ python echo_argv.py "Here is a list:\nItem 1 "
['echo_argv.py', 'Here is a list:\\nItem 1 ']
0931:~/mypy$ python echo_argv.py "Here is a list:\\nItem 1 "
['echo_argv.py', 'Here is a list:\\nItem 1 ']
</code></pre>
<p><code>argparse</code> treats that <code>argv</code> as a list of strings. It does nothing to those strings, at least not with the default <code>None</code> <code>type</code> parameter. </p>
| 0 | 2016-08-24T16:34:13Z | [
"python",
"string",
"argparse",
"rawstring"
] |
Convert current time to WebKit/Chrome 17-digit timestamp | 39,120,437 | <p>I'm writing some bash script that parses and modifies multiple preferences and bookmarks of Google Chrome in OS X. Some of the values require 17-digit WebKit timestamp to be provided. For now I've hardcoded it with <code>13116319200000000</code> which corresponds to beginning of my workday this Monday in CEST, but it would be great to have this somehow calculated from current system time, as the script will be deployed at different times.</p>
<p><em>WebKit timestamp is a 64-bit value for microseconds since Jan 1, 1601 00:00 UTC.</em> Such conversion in bash on OS X might be impossible, as the built-in version of <code>date</code> does not support resolution higher than a second (like <code>%N</code>), but maybe some highly-intelligent math would do.</p>
<hr>
<p>Anyway, I've searched the web and found multiple examples on how to convert such timestamp to human readable time, but not the other way around. The best example would be <a href="http://www.epochconverter.com/webkit" rel="nofollow">this website</a>, that lists python script from some <em>dpcnull</em> user:</p>
<pre><code>import datetime
def date_from_webkit(webkit_timestamp):
    epoch_start = datetime.datetime(1601,1,1)
    delta = datetime.timedelta(microseconds=int(webkit_timestamp))
    print epoch_start + delta
inTime = int(raw_input('Enter a Webkit timestamp to convert: '))
date_from_webkit(inTime)
</code></pre>
<p>and also has this JavaScript on form's action (where <code>document.we.wk.value</code> points to that form):</p>
<pre><code>function WebkitToEpoch() {
  var wk = document.we.wk.value;
  var sec = Math.round(wk / 1000000);
  sec -= 11644473600;
  var datum = new Date(sec * 1000);
  var outputtext = "<b>Epoch/Unix time</b>: " + sec;
  outputtext += "<br/><b>GMT</b>: " + datum.toGMTString() + "<br/><b>Your time zone</b>: " + datum.toLocaleString();
  $('#resultle1').html(outputtext);
}
</code></pre>
<p>that looks like it could be easily inverted, but I failed trying.</p>
<p>Interestingly, the site shows current WebKit timestamp, but after examination I believe it's calculated in PHP on server level, so there's no access to it. </p>
<p>I'd be glad if someone could help me with the script I could embed in mine.</p>
<hr>
<p><strong><em>Note:</em></strong> <em>Although Google Chrome uses all 17-digits with precise microseconds, I don't really need such resolution. Just like on the linked website, rounding to seconds with last six digits being zeros is perfectly fine. The only important factor - it has to calculate properly.</em></p>
| 0 | 2016-08-24T10:13:29Z | 39,121,470 | <p>Something like this?</p>
<pre><code>from datetime import datetime, timedelta

def date_from_webkit(webkit_timestamp):
    epoch_start = datetime(1601, 1, 1)
    delta = timedelta(microseconds=int(webkit_timestamp))
    return epoch_start + delta

def date_to_webkit(date_string):
    epoch_start = datetime(1601, 1, 1)
    date_ = datetime.strptime(date_string, '%Y-%m-%d %H:%M:%S')
    diff = date_ - epoch_start
    seconds_in_day = 60 * 60 * 24
    # diff.microseconds is always 0 here (the parsed date has second
    # resolution), so the six trailing zeros are simply appended as text
    return '{}000000'.format(diff.days * seconds_in_day + diff.seconds)

# Webkit to date
date_from_webkit('13116508547000000')  # 2016-08-24 10:35:47

# Date string to Webkit timestamp
date_to_webkit('2016-08-24 10:35:47')  # 13116508547000000
</code></pre>
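As a sanity check (my own addition), the result can be cross-checked against the 11644473600-second offset between the 1601 and 1970 epochs that appears in the question's JavaScript:

```python
import calendar
from datetime import datetime

EPOCH_DIFF = 11644473600  # seconds between 1601-01-01 and 1970-01-01 (UTC)

unix_seconds = calendar.timegm(datetime(2016, 8, 24, 10, 35, 47).timetuple())
webkit = (unix_seconds + EPOCH_DIFF) * 10**6
print(webkit)  # 13116508547000000
```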
| 0 | 2016-08-24T11:00:10Z | [
"javascript",
"python",
"bash",
"google-chrome",
"webkit"
] |
Convert current time to WebKit/Chrome 17-digit timestamp | 39,120,437 | <p>I'm writing some bash script that parses and modifies multiple preferences and bookmarks of Google Chrome in OS X. Some of the values require 17-digit WebKit timestamp to be provided. For now I've hardcoded it with <code>13116319200000000</code> which corresponds to beginning of my workday this Monday in CEST, but it would be great to have this somehow calculated from current system time, as the script will be deployed at different times.</p>
<p><em>WebKit timestamp is a 64-bit value for microseconds since Jan 1, 1601 00:00 UTC.</em> Such conversion in bash on OS X might be impossible, as the built-in version of <code>date</code> does not support resolution higher than a second (like <code>%N</code>), but maybe some highly-intelligent math would do.</p>
<hr>
<p>Anyway, I've searched the web and found multiple examples on how to convert such timestamp to human readable time, but not the other way around. The best example would be <a href="http://www.epochconverter.com/webkit" rel="nofollow">this website</a>, that lists python script from some <em>dpcnull</em> user:</p>
<pre><code>import datetime
def date_from_webkit(webkit_timestamp):
    epoch_start = datetime.datetime(1601,1,1)
    delta = datetime.timedelta(microseconds=int(webkit_timestamp))
    print epoch_start + delta
inTime = int(raw_input('Enter a Webkit timestamp to convert: '))
date_from_webkit(inTime)
</code></pre>
<p>and also has this JavaScript on form's action (where <code>document.we.wk.value</code> points to that form):</p>
<pre><code>function WebkitToEpoch() {
  var wk = document.we.wk.value;
  var sec = Math.round(wk / 1000000);
  sec -= 11644473600;
  var datum = new Date(sec * 1000);
  var outputtext = "<b>Epoch/Unix time</b>: " + sec;
  outputtext += "<br/><b>GMT</b>: " + datum.toGMTString() + "<br/><b>Your time zone</b>: " + datum.toLocaleString();
  $('#resultle1').html(outputtext);
}
</code></pre>
<p>that looks like it could be easily inverted, but I failed trying.</p>
<p>Interestingly, the site shows current WebKit timestamp, but after examination I believe it's calculated in PHP on server level, so there's no access to it. </p>
<p>I'd be glad if someone could help me with the script I could embed in mine.</p>
<hr>
<p><strong><em>Note:</em></strong> <em>Although Google Chrome uses all 17-digits with precise microseconds, I don't really need such resolution. Just like on the linked website, rounding to seconds with last six digits being zeros is perfectly fine. The only important factor - it has to calculate properly.</em></p>
| 0 | 2016-08-24T10:13:29Z | 39,131,589 | <p>So after the great solution provided by <a href="http://stackoverflow.com/a/39121470/6751451">Tiger-222</a>, I found out about Python's <code>datetime</code> microseconds (<code>%f</code>) and added them to my final bash script to get the full available time resolution and, as a result, a precise WebKit timestamp. It might be helpful for someone, so the final code is below:</p>
<pre><code>function currentWebKitTimestamp {
    TIMESTAMP="$(python - <<END
from datetime import datetime

def calculateTimestamp():
    epoch = datetime(1601, 1, 1)
    utcnow = datetime.strptime(datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'), '%Y-%m-%d %H:%M:%S.%f')
    diff = utcnow - epoch
    secondsInDay = 60 * 60 * 24
    return '{}{:06d}'.format(diff.days * secondsInDay + diff.seconds, diff.microseconds)

print calculateTimestamp()
END
)"
    echo $TIMESTAMP
}
echo $(currentWebKitTimestamp)
</code></pre>
<p>The script embeds python in bash, just as I needed.</p>
<hr>
<p><strong>Important note:</strong> <code>diff.microseconds</code> has no leading zeros here, so values below 100000 would, when simply concatenated, produce a corrupted timestamp; the <code>{:06d}</code> format has been added to avoid this.</p>
| 0 | 2016-08-24T19:28:48Z | [
"javascript",
"python",
"bash",
"google-chrome",
"webkit"
] |
Put variable inside string + hardcoded %d value | 39,120,538 | <p>I'm trying to put variables into a string together with a hardcoded '%d' value - which is not a variable (unfortunately Python takes it as an integer placeholder). Example:</p>
<pre><code>Error="""awk -v col="%s" -F"," '{ if(NF != col) printf("Index: %d, NR, NF-1); }' "%s" > %s"""%(variable1,variable2,variable3)
</code></pre>
<p>Now I got an error:</p>
<pre><code>TypeError: %d format: a number is required, not str.
</code></pre>
<p>So the main problem is the "%d" value; I was trying """%d""" and /%d/ but it does not work.</p>
<p>How to do that ? </p>
| 0 | 2016-08-24T10:18:16Z | 39,120,627 | <p>Try this:</p>
<pre><code>Error="""awk -v col="%s" -F"," '{ if(NF != col) printf("Index: %%d, NR, NF-1); }' "%s" > %s"""%(variable1,variable2,variable3)
</code></pre>
<p>This uses <code>%%</code>, which resolves to <code>%</code>. Another solution:</p>
<pre><code>Error="""awk -v col="{}" -F"," '{ if(NF != col) printf("Index: %d, NR, NF-1); }' "{}" > {}""".format(variable1,variable2,variable3)
</code></pre>
<p>This uses the new format strings. </p>
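For illustration, a minimal sketch of both escaping styles side by side (the file name is hypothetical); both build the same command string:

```python
variable2 = "data.csv"  # hypothetical input file name

# percent-style: %% collapses to a single %, so awk's %d survives
cmd1 = """awk '{ printf("Index: %%d\\n", NR) }' "%s" """ % variable2

# str.format: awk's braces must be doubled ({{ }}) to survive .format()
cmd2 = """awk '{{ printf("Index: %d\\n", NR) }}' "{}" """.format(variable2)

print(cmd1)
print(cmd2 == cmd1)  # True
```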
| 2 | 2016-08-24T10:22:05Z | [
"python"
] |
catching CTRL-C and manipulating clipboard data with wxpython | 39,120,616 | <p>My wxPython application must respond every time the user hits CTRL-C, no matter whether the application's frame is on top/visible but unfocused, minimized, under another window, etc. Basically I want to know that the user copied something into the clipboard using the CTRL-C combination - other changes to the clipboard (like mouse right-click + "copy") should be ignored - and then do things with the copied data. That's why I'm using pyHook, and everything seems fine except... all code within "OnKeyboardEvent" seems to be executed before CTRL-C does its "actual job" (copying things into the clipboard), so every time I am kind of "one step back":</p>
<p>What happens:</p>
<pre><code>1. user hits CTRL-C
2. my "OnKeyboardEvent" code is executed
3. data is being copied to the clipboard (CTRL-C does its job)
</code></pre>
<p>I need to do 3. before 2. .... :)</p>
<p>Anyway, here's the code:</p>
<pre><code>import wx
import pyHook
import win32clipboard
class TextFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, 'Frame', size=(300, 100))
        panel = wx.Panel(self, -1)
        self.basicText = wx.TextCtrl(panel, -1, "", size=(175, -1))
        self.basicText.SetValue("Press CTRL-C")
        hm = pyHook.HookManager()
        hm.KeyDown = self.OnKeyboardEvent
        hm.HookKeyboard()

    def OnKeyboardEvent(self, event):
        if event.Ascii == 3:
            win32clipboard.OpenClipboard()
            clipboarditem = win32clipboard.GetClipboardData()
            win32clipboard.CloseClipboard()
            print clipboarditem
            self.basicText.SetValue(clipboarditem)
app = wx.PySimpleApp()
frame = TextFrame()
frame.Show()
app.MainLoop()
</code></pre>
<p>Second thing wrong with the code above... See this "print clipboarditem" at the end of the "OnKeyboardEvent" procedure? If I delete it, the next command - "self.basicText.SetValue(clipboarditem)" - stops working and gives</p>
<pre><code>line 23, in OnKeyboardEvent
self.basicText.SetValue(clipboarditem)
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\_core.py", line 13075, in SetValue
return _core_.TextEntryBase_SetValue(*args, **kwargs)
TypeError: an integer is required
</code></pre>
<p>which is mind blowing for me :/</p>
| 1 | 2016-08-24T10:21:35Z | 39,125,066 | <p>Took me a while to figure it out but did it!</p>
<p>Just change the key down event to key up and it works. Your callback is called when CTRL+C is released, so the clipboard is already correct since CTRL+C has been processed.</p>
<p>(I also fixed the callback to return <code>True</code>, otherwise I got a lot of exception messages)</p>
<pre><code>import wx
import pyHook
import win32clipboard
import time
class TextFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, 'Frame', size=(300, 100))
        panel = wx.Panel(self, -1)
        self.basicText = wx.TextCtrl(panel, -1, "", size=(175, -1))
        self.basicText.SetValue("Press CTRL-C")
        hm = pyHook.HookManager()
        hm.KeyUp = self.OnKeyboardEvent  # key up!!
        hm.HookKeyboard()

    def OnKeyboardEvent(self, event):
        if event.Ascii == 3:
            print("control c released")
            win32clipboard.OpenClipboard()
            clipboarditem = win32clipboard.GetClipboardData()
            win32clipboard.CloseClipboard()
            print("contents " + clipboarditem)
            self.basicText.SetValue(clipboarditem)
        return True
app = wx.PySimpleApp()
frame = TextFrame()
frame.Show()
app.MainLoop()
</code></pre>
| 1 | 2016-08-24T13:45:06Z | [
"python",
"wxpython",
"clipboard"
] |
Finding list of index corresponding to list of element of a dataframe in Python | 39,120,667 | <p>I have a dataframe with millions of lines. I need to get the corresponding indexes of hundreds of thousands of elements that are present in the original dataframe.</p>
<p>I currently use this code:</p>
<pre><code>df[df['processed_col'] == element].index[0]
</code></pre>
<p>to find the position of <code>'element'</code> in the whole dataframe.</p>
<p>Instead of doing a loop which is very long, is there a way to take a list like <code>element1, element2,..., elementN</code> as input, which would return a list of corresponding indices:</p>
<pre><code>df[df['processed_col'] == [element1, element2, ..., elementN]].index
</code></pre>
| 1 | 2016-08-24T10:24:05Z | 39,120,806 | <p>IIUC you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>; to find the first value of the index:</p>
<pre><code>df[df['processed_col'].isin([element1, element2,..., elementN])].index[0]
</code></pre>
<p>or if you want all values of the index, just remove the <code>[0]</code>:</p>
<pre><code>df[df['processed_col'].isin([element1, element2,..., elementN])].index
</code></pre>
<p>If you need to convert to a list, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.tolist.html" rel="nofollow"><code>tolist</code></a>:</p>
<pre><code>df[df['processed_col'].isin([element1, element2,..., elementN])].index.tolist()
</code></pre>
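<p>As a quick sanity check, here is a self-contained sketch (the data, column values, and index labels are made up, not from the question) showing how <code>isin</code> yields the matching index labels in one shot:</p>

```python
import pandas as pd

# hypothetical data standing in for the question's DataFrame
df = pd.DataFrame({'processed_col': ['x', 'y', 'z', 'y', 'w']},
                  index=[10, 11, 12, 13, 14])

# boolean mask: True wherever the value is one of the wanted elements
mask = df['processed_col'].isin(['y', 'w'])

# index labels of the matching rows, as a plain list
print(df[mask].index.tolist())  # -> [11, 13, 14]
```

This is vectorized, so it replaces the per-element loop with a single pass over the column.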
| 0 | 2016-08-24T10:30:06Z | [
"python",
"list",
"pandas",
"indexing",
"dataframe"
] |
Sublime Text 3 API reload file after opening | 39,120,699 | <p>I've written a simple Sublime Text 3 plugin which let me open a log file with a keybord shortcut : </p>
<pre><code>import datetime
import os

import sublime
import sublime_plugin

class OpenFilesCommand(sublime_plugin.TextCommand):
    def run(self, edit, var_str):
        str_date = datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%d')
        path_file = '/home/user/logs/web_' + str_date + '.log'
        if os.path.isfile(path_file):
            self.view.window().open_file(path_file)
        else:
            sublime.error_message("The file does not exist.")
</code></pre>
<p>My problem is: if the log file is already open in Sublime, when I use the shortcut the buffer is not reloaded from the file on disk.</p>
<p>Do you know a way to refresh Sublime's view of the file from the content on the hard disk?</p>
| 1 | 2016-08-24T10:25:16Z | 39,130,907 | <p>If the file is already open, and you want Sublime Text to refresh the view from the file, you can use the <code>revert</code> command to reload it from the disk.</p>
<pre><code>self.view.window().run_command('revert')
</code></pre>
| 1 | 2016-08-24T18:47:30Z | [
"python",
"sublimetext3"
] |
Replacing all numeric value to formatted string | 39,120,783 | <p>What I am trying to do is:</p>
<p>Find out all the numeric values in a string.</p>
<pre><code>import re

input_string = "é«é²æ½åæç½è¼æèè·çè100 79.80"
numbers = re.finditer(r'[-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?', input_string)
for number in numbers:
    print ("{} start > {}, end > {}".format(number.group(), number.start(0), number.end(0)))

'''Output'''
>>100 start > 12, end > 15
>>79.80 start > 18, end > 23
</code></pre>
<p>And then I want to replace all the integer and float values with a certain format:</p>
<p><code>INT_(number of digit)</code> and <code>FLT(number of decimal places)</code></p>
<p>eg. <code>100 -> INT_3 // 79.80 -> FLT_2</code></p>
<p>Thus, the expected output string is like this:</p>
<pre><code>"é«é²æ½åæç½è¼æèè·çèINT_3 FLT_2"
</code></pre>
<p>But the string replace substring method in Python is kind of weird, which can't achieve what I want to do.</p>
<p>So I am trying to use the substring append substring methods</p>
<pre><code>string[:number.start(0)] + "INT_%s"%len(number.group()) +.....
</code></pre>
<p>which looks stupid and most importantly I still can't make it work.</p>
<p>Can anyone give me some advice on this problem? </p>
| 4 | 2016-08-24T10:28:56Z | 39,121,003 | <p>Use <code>re.sub</code> and a callback method inside where you can perform various manipulations on the match:</p>
<pre><code>import re

def repl(match):
    chunks = match.group(1).split(".")
    if len(chunks) == 2:
        return "FLT_{}".format(len(chunks[1]))
    else:
        return "INT_{}".format(len(chunks[0]))

input_string = "é«é²æ½åæç½è¼æèè·çè100 79.80"
result = re.sub(r'[-+]?([0-9]*\.?[0-9]+)(?:[eE][-+]?[0-9]+)?', repl, input_string)
print(result)
</code></pre>
<p>See the <a href="https://ideone.com/lpGcyB" rel="nofollow">Python demo</a></p>
<p><strong>Details</strong>:</p>
<ul>
<li>The regex now has a capturing group over the number part (<code>([0-9]*\.?[0-9]+)</code>), this will be analyzed inside the <code>repl</code> method</li>
<li>Inside the <code>repl</code> method, Group 1 contents is split with <code>.</code> to see if we have a float/double, and if yes, we return the length of the fractional part, else, the length of the integer number.</li>
</ul>
| 4 | 2016-08-24T10:39:14Z | [
"python",
"regex",
"string",
"replace"
] |
Replacing all numeric value to formatted string | 39,120,783 | <p>What I am trying to do is:</p>
<p>Find out all the numeric values in a string.</p>
<pre><code>import re

input_string = "é«é²æ½åæç½è¼æèè·çè100 79.80"
numbers = re.finditer(r'[-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?', input_string)
for number in numbers:
    print ("{} start > {}, end > {}".format(number.group(), number.start(0), number.end(0)))

'''Output'''
>>100 start > 12, end > 15
>>79.80 start > 18, end > 23
</code></pre>
<p>And then I want to replace all the integer and float values with a certain format:</p>
<p><code>INT_(number of digit)</code> and <code>FLT(number of decimal places)</code></p>
<p>eg. <code>100 -> INT_3 // 79.80 -> FLT_2</code></p>
<p>Thus, the expected output string is like this:</p>
<pre><code>"é«é²æ½åæç½è¼æèè·çèINT_3 FLT_2"
</code></pre>
<p>But the string replace substring method in Python is kind of weird, which can't achieve what I want to do.</p>
<p>So I am trying to use the substring append substring methods</p>
<pre><code>string[:number.start(0)] + "INT_%s"%len(number.group()) +.....
</code></pre>
<p>which looks stupid and most importantly I still can't make it work.</p>
<p>Can anyone give me some advice on this problem? </p>
| 4 | 2016-08-24T10:28:56Z | 39,121,239 | <p>You need to group the parts of your regex possibly like this</p>
<pre><code>import re

def repl(m):
    if m.group(1) is None:  # int
        return "INT_%i" % len(m.group(2))
    else:  # float
        return "FLT_%i" % len(m.group(2))

input_string = "é«é²æ½åæç½è¼æèè·çè100 79.80"
numbers = re.sub(r'[-+]?([0-9]*\.)?([0-9]+)([eE][-+]?[0-9]+)?', repl, input_string)
print(numbers)
</code></pre>
<ul>
<li>group 0 is the whole string that was matched (can be used for putting into <code>float</code> or <code>int</code>)</li>
<li>group 1 is any digits before the <code>.</code> and the <code>.</code> itself if exists else it is <code>None</code></li>
<li>group 2 is all digits after the <code>.</code> if it exists else it it is just all digits</li>
<li>group 3 is the exponential part if existing else <code>None</code></li>
</ul>
<p>You can get a python-number from it with</p>
<pre><code>def parse(m):
    s = m.group(0)
    if m.group(1) is not None or m.group(3) is not None:  # if there is a dot or an exponential part it must be a float
        return float(s)
    else:
        return int(s)
</code></pre>
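<p>A quick check of <code>parse</code> against the same grouped pattern (again on a made-up ASCII sample, since the original string arrived garbled):</p>

```python
import re

PAT = r'[-+]?([0-9]*\.)?([0-9]+)([eE][-+]?[0-9]+)?'

def parse(m):
    s = m.group(0)
    # group 1 is the integer part plus dot, group 3 the exponent;
    # either one present means the match must be a float
    if m.group(1) is not None or m.group(3) is not None:
        return float(s)
    return int(s)

values = [parse(m) for m in re.finditer(PAT, "100 79.80 2e3")]
print(values)  # -> [100, 79.8, 2000.0]
```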
| 2 | 2016-08-24T10:49:12Z | [
"python",
"regex",
"string",
"replace"
] |
Replacing all numeric value to formatted string | 39,120,783 | <p>What I am trying to do is:</p>
<p>Find out all the numeric values in a string.</p>
<pre><code>import re

input_string = "é«é²æ½åæç½è¼æèè·çè100 79.80"
numbers = re.finditer(r'[-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?', input_string)
for number in numbers:
    print ("{} start > {}, end > {}".format(number.group(), number.start(0), number.end(0)))

'''Output'''
>>100 start > 12, end > 15
>>79.80 start > 18, end > 23
</code></pre>
<p>And then I want to replace all the integer and float values with a certain format:</p>
<p><code>INT_(number of digit)</code> and <code>FLT(number of decimal places)</code></p>
<p>eg. <code>100 -> INT_3 // 79.80 -> FLT_2</code></p>
<p>Thus, the expected output string is like this:</p>
<pre><code>"é«é²æ½åæç½è¼æèè·çèINT_3 FLT_2"
</code></pre>
<p>But the string replace substring method in Python is kind of weird, which can't achieve what I want to do.</p>
<p>So I am trying to use the substring append substring methods</p>
<pre><code>string[:number.start(0)] + "INT_%s"%len(number.group()) +.....
</code></pre>
<p>which looks stupid and most importantly I still can't make it work.</p>
<p>Can anyone give me some advice on this problem? </p>
| 4 | 2016-08-24T10:28:56Z | 39,121,356 | <p>You probably are looking for something like the code below (of course there are other ways to do it). This one just starts with what you were doing and show how it can be done.</p>
<pre><code>import re

input_string = u"é«é²æ½åæç½è¼æèè·çè100 79.80"
numbers = re.finditer(r'[-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?', input_string)
s = input_string
for m in list(numbers)[::-1]:
    num = m.group(0)
    if '.' in num:
        s = "%sFLT_%s%s" % (s[:m.start(0)], str(len(num) - num.index('.') - 1), s[m.end(0):])
    else:
        s = "%sINT_%s%s" % (s[:m.start(0)], str(len(num)), s[m.end(0):])
print(s)
</code></pre>
<p>This may look a bit complicated because there are really several simple problems to solve.</p>
<p>For instance your initial regex find both ints and floats, but you with to apply totally different replacements afterward. This would be much more straightforward if you were doing only one thing at a time. But as parts of floats may look like an int, doing everything at once may not be such a bad idea, you just have to understand that this will lead to a secondary check to discriminate both cases.</p>
<p>Another more fundamental issue is that really you <strong>can't replace anything</strong> in a python string. Python strings are non modifiable objects, henceforth you have to make a copy. This is fine anyway because the format change may need insertion or removal of characters and an inplace replacement wouldn't be efficient.</p>
<p>The last trouble to take into account is that replacement must be made backward, because if you change the beginning of the string the match position would also change and the next replacement wouldn't be at the right place. If we do it backward, all is fine.</p>
<p>Of course I agree that using <code>re.sub()</code> is much simpler.</p>
| 1 | 2016-08-24T10:54:59Z | [
"python",
"regex",
"string",
"replace"
] |
converting an HTML table in Pandas Dataframe | 39,120,853 | <p>I am reading an HTML table with pd.read_html but the result is coming in a list, I want to convert it inot a pandas dataframe, so I can continue further operations on the same. I am using the following script</p>
<pre><code>import pandas as pd
import html5lib
data=pd.read_html('http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/2015/seasontype/2',skiprows=1)
</code></pre>
<p>and since my results come back as one list, I tried to convert it into a data frame with</p>
<pre><code>data1 = pd.DataFrame(data)
</code></pre>
<p>and the result came as</p>
<pre><code>   0
0  0  1  2  3  4...
</code></pre>
<p>and because the result is a list, I can't apply any DataFrame methods such as <code>rename</code>, <code>dropna</code>, or <code>drop</code>.</p>
<p>I will appreciate any help</p>
| 0 | 2016-08-24T10:32:35Z | 39,120,889 | <p>I think you need to add <code>[0]</code> to select the first item of the list, because <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow"><code>read_html</code></a> returns a <code>list of DataFrames</code>:</p>
<p>So you can use:</p>
<pre><code>import pandas as pd

data1 = pd.read_html('http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/2015/seasontype/2', skiprows=1)[0]
</code></pre>
<pre><code>print (data1)
0 1 2 3 4 5 6 7 8 9 \
0 RK PLAYER TEAM GP G A PTS +/- PIM PTS/G
1 1 Jamie Benn, LW DAL 82 35 52 87 1 64 1.06
2 2 John Tavares, C NYI 82 38 48 86 5 46 1.05
3 3 Sidney Crosby, C PIT 77 28 56 84 5 47 1.09
4 4 Alex Ovechkin, LW WSH 81 53 28 81 10 58 1.00
5 NaN Jakub Voracek, RW PHI 82 22 59 81 1 78 0.99
6 6 Nicklas Backstrom, C WSH 82 18 60 78 5 40 0.95
7 7 Tyler Seguin, C DAL 71 37 40 77 -1 20 1.08
8 8 Jiri Hudler, LW CGY 78 31 45 76 17 14 0.97
9 NaN Daniel Sedin, LW VAN 82 20 56 76 5 18 0.93
10 10 Vladimir Tarasenko, RW STL 77 37 36 73 27 31 0.95
11 NaN PP SH NaN NaN NaN NaN NaN NaN NaN
12 RK PLAYER TEAM GP G A PTS +/- PIM PTS/G
13 NaN Nick Foligno, LW CBJ 79 31 42 73 16 50 0.92
14 NaN Claude Giroux, C PHI 81 25 48 73 -3 36 0.90
15 NaN Henrik Sedin, C VAN 82 18 55 73 11 22 0.89
16 14 Steven Stamkos, C TB 82 43 29 72 2 49 0.88
17 NaN Tyler Johnson, C TB 77 29 43 72 33 24 0.94
18 16 Ryan Johansen, C CBJ 82 26 45 71 -6 40 0.87
19 17 Joe Pavelski, C SJ 82 37 33 70 12 29 0.85
20 NaN Evgeni Malkin, C PIT 69 28 42 70 -2 60 1.01
21 NaN Ryan Getzlaf, C ANA 77 25 45 70 15 62 0.91
22 20 Rick Nash, LW NYR 79 42 27 69 29 36 0.87
23 NaN PP SH NaN NaN NaN NaN NaN NaN NaN
24 RK PLAYER TEAM GP G A PTS +/- PIM PTS/G
25 21 Max Pacioretty, LW MTL 80 37 30 67 38 32 0.84
26 NaN Logan Couture, C SJ 82 27 40 67 -6 12 0.82
27 23 Jonathan Toews, C CHI 81 28 38 66 30 36 0.81
28 NaN Erik Karlsson, D OTT 82 21 45 66 7 42 0.80
29 NaN Henrik Zetterberg, LW DET 77 17 49 66 -6 32 0.86
30 26 Pavel Datsyuk, C DET 63 26 39 65 12 8 1.03
31 NaN Joe Thornton, C SJ 78 16 49 65 -4 30 0.83
32 28 Nikita Kucherov, RW TB 82 28 36 64 38 37 0.78
33 NaN Patrick Kane, RW CHI 61 27 37 64 10 10 1.05
34 NaN Mark Stone, RW OTT 80 26 38 64 21 14 0.80
35 NaN PP SH NaN NaN NaN NaN NaN NaN NaN
36 RK PLAYER TEAM GP G A PTS +/- PIM PTS/G
37 NaN Alexander Steen, LW STL 74 24 40 64 8 33 0.86
38 NaN Kyle Turris, C OTT 82 24 40 64 5 36 0.78
39 NaN Johnny Gaudreau, LW CGY 80 24 40 64 11 14 0.80
40 NaN Anze Kopitar, C LA 79 16 48 64 -2 10 0.81
41 35 Radim Vrbata, RW VAN 79 31 32 63 6 20 0.80
42 NaN Jaden Schwartz, LW STL 75 28 35 63 13 16 0.84
43 NaN Filip Forsberg, C NSH 82 26 37 63 15 24 0.77
44 NaN Jordan Eberle, RW EDM 81 24 39 63 -16 24 0.78
45 NaN Ondrej Palat, LW TB 75 16 47 63 31 24 0.84
46 40 Zach Parise, LW MIN 74 33 29 62 21 41 0.84
10 11 12 13 14 15 16
0 SOG PCT GWG G A G A
1 253 13.8 6 10 13 2 3
2 278 13.7 8 13 18 0 1
3 237 11.8 3 10 21 0 0
4 395 13.4 11 25 9 0 0
5 221 10.0 3 11 22 0 0
6 153 11.8 3 3 30 0 0
7 280 13.2 5 13 16 0 0
8 158 19.6 5 6 10 0 0
9 226 8.9 5 4 21 0 0
10 264 14.0 6 8 10 0 0
11 NaN NaN NaN NaN NaN NaN NaN
12 SOG PCT GWG G A G A
13 182 17.0 3 11 15 0 0
14 279 9.0 4 14 23 0 0
15 101 17.8 0 5 20 0 0
16 268 16.0 6 13 12 0 0
17 203 14.3 6 8 9 0 0
18 202 12.9 0 7 19 2 0
19 261 14.2 5 19 12 0 0
20 212 13.2 4 9 17 0 0
21 191 13.1 6 3 10 0 2
22 304 13.8 8 6 6 4 1
23 NaN NaN NaN NaN NaN NaN NaN
24 SOG PCT GWG G A G A
25 302 12.3 10 7 4 3 2
26 263 10.3 4 6 18 2 0
27 192 14.6 7 6 11 2 1
28 292 7.2 3 6 24 0 0
29 227 7.5 3 4 24 0 0
30 165 15.8 5 8 16 0 0
31 131 12.2 0 4 18 0 0
32 190 14.7 2 2 13 0 0
33 186 14.5 5 6 16 0 0
34 157 16.6 6 5 8 1 0
35 NaN NaN NaN NaN NaN NaN NaN
36 SOG PCT GWG G A G A
37 223 10.8 5 8 16 0 0
38 215 11.2 6 4 12 1 0
39 167 14.4 4 8 13 0 0
40 134 11.9 4 6 18 0 0
41 267 11.6 7 12 11 0 0
42 184 15.2 4 8 8 0 2
43 237 11.0 6 6 13 0 0
44 183 13.1 2 6 15 0 0
45 139 11.5 5 3 8 1 1
46 259 12.7 3 11 5 0 0
</code></pre>
| 3 | 2016-08-24T10:34:27Z | [
"python",
"html",
"pandas",
"dataframe"
] |
Comparison operator in PySpark (not equal/ !=) | 39,120,934 | <p>I am trying to obtain all rows in a dataframe where two flags are set to '1' and subsequently all those where only one of the two is set to '1' and the other is <strong>NOT EQUAL</strong> to '1'.</p>
<p>With the following schema (three columns),</p>
<pre><code>df = sqlContext.createDataFrame([('a',1,'null'),('b',1,1),('c',1,'null'),('d','null',1),('e',1,1)], #,('f',1,'NaN'),('g','bla',1)],
schema=('id', 'foo', 'bar')
)
</code></pre>
<p>I obtain the following dataframe:</p>
<pre><code>+---+----+----+
| id| foo| bar|
+---+----+----+
| a| 1|null|
| b| 1| 1|
| c| 1|null|
| d|null| 1|
| e| 1| 1|
+---+----+----+
</code></pre>
<p>When I apply the desired filters, the first filter (foo=1 AND bar=1) works, but not the other (foo=1 AND NOT bar=1)</p>
<pre><code>foobar_df = df.filter( (df.foo==1) & (df.bar==1) )
</code></pre>
<p>yields:</p>
<pre><code>+---+---+---+
| id|foo|bar|
+---+---+---+
| b| 1| 1|
| e| 1| 1|
+---+---+---+
</code></pre>
<p><strong>Here is the non-behaving filter:</strong></p>
<pre><code>foo_df = df.filter( (df.foo==1) & (df.bar!=1) )
foo_df.show()
+---+---+---+
| id|foo|bar|
+---+---+---+
+---+---+---+
</code></pre>
<p>Why is it not filtering? How can I get the rows where only <code>foo</code> is equal to '1'?</p>
| 1 | 2016-08-24T10:36:05Z | 39,121,002 | <p>To filter null values try:</p>
<p><code>foo_df = df.filter( (df.foo==1) & (df.bar.isNull()) )</code></p>
<p><a href="https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html#pyspark.sql.Column.isNull" rel="nofollow">https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html#pyspark.sql.Column.isNull</a></p>
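<p>The underlying reason <code>!=</code> drops the rows is SQL's three-valued logic: <code>NULL != 1</code> evaluates to <code>NULL</code>, not true. A small sketch with the standard-library <code>sqlite3</code> module (plain SQL rather than Spark, so only the NULL semantics carry over; the table and data are made up) makes this visible:</p>

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (id TEXT, foo INTEGER, bar INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [('a', 1, None), ('b', 1, 1), ('c', 1, None)])

# bar != 1 evaluates to NULL (not true) for NULL rows, so they are filtered out
rows = con.execute("SELECT id FROM t WHERE foo = 1 AND bar != 1").fetchall()
print(rows)  # -> []

# an explicit IS NULL branch brings the NULL rows back
rows = con.execute(
    "SELECT id FROM t WHERE foo = 1 AND (bar IS NULL OR bar != 1)").fetchall()
print(rows)  # -> [('a',), ('c',)]
```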
| 2 | 2016-08-24T10:39:13Z | [
"python",
"apache-spark",
"pyspark",
"pyspark-sql"
] |
Comparison operator in PySpark (not equal/ !=) | 39,120,934 | <p>I am trying to obtain all rows in a dataframe where two flags are set to '1' and subsequently all those where only one of the two is set to '1' and the other is <strong>NOT EQUAL</strong> to '1'.</p>
<p>With the following schema (three columns),</p>
<pre><code>df = sqlContext.createDataFrame([('a',1,'null'),('b',1,1),('c',1,'null'),('d','null',1),('e',1,1)], #,('f',1,'NaN'),('g','bla',1)],
schema=('id', 'foo', 'bar')
)
</code></pre>
<p>I obtain the following dataframe:</p>
<pre><code>+---+----+----+
| id| foo| bar|
+---+----+----+
| a| 1|null|
| b| 1| 1|
| c| 1|null|
| d|null| 1|
| e| 1| 1|
+---+----+----+
</code></pre>
<p>When I apply the desired filters, the first filter (foo=1 AND bar=1) works, but not the other (foo=1 AND NOT bar=1)</p>
<pre><code>foobar_df = df.filter( (df.foo==1) & (df.bar==1) )
</code></pre>
<p>yields:</p>
<pre><code>+---+---+---+
| id|foo|bar|
+---+---+---+
| b| 1| 1|
| e| 1| 1|
+---+---+---+
</code></pre>
<p><strong>Here is the non-behaving filter:</strong></p>
<pre><code>foo_df = df.filter( (df.foo==1) & (df.bar!=1) )
foo_df.show()
+---+---+---+
| id|foo|bar|
+---+---+---+
+---+---+---+
</code></pre>
<p>Why is it not filtering? How can I get the rows where only <code>foo</code> is equal to '1'?</p>
| 1 | 2016-08-24T10:36:05Z | 39,121,638 | <blockquote>
<p>Why is it not filtering</p>
</blockquote>
<p>Because this is SQL, and <code>NULL</code> indicates a missing value. Because of that, any comparison to <code>NULL</code> other than <code>IS NULL</code> and <code>IS NOT NULL</code> is undefined. You need either:</p>
<pre><code>col("bar").isNull() | (col("bar") != 1)
</code></pre>
<p>or </p>
<pre><code>coalesce(col("bar") != 1, lit(True))
</code></pre>
<p>if you want null safe comparisons in PySpark.</p>
<p>Also <code>'null'</code> is not a valid way to introduce <code>NULL</code> literal. You should use <code>None</code> to indicate missing objects.</p>
<pre><code>from pyspark.sql.functions import col, coalesce, lit
df = spark.createDataFrame([
    ('a', 1, 1), ('a', 1, None), ('b', 1, 1),
    ('c', 1, None), ('d', None, 1), ('e', 1, 1)
]).toDF('id', 'foo', 'bar')
df.where((col("foo") == 1) & (col("bar").isNull() | (col("bar") != 1))).show()
## +---+---+----+
## | id|foo| bar|
## +---+---+----+
## | a| 1|null|
## | c| 1|null|
## +---+---+----+
df.where((col("foo") == 1) & coalesce(col("bar") != 1, lit(True))).show()
## +---+---+----+
## | id|foo| bar|
## +---+---+----+
## | a| 1|null|
## | c| 1|null|
## +---+---+----+
</code></pre>
| 1 | 2016-08-24T11:07:31Z | [
"python",
"apache-spark",
"pyspark",
"pyspark-sql"
] |
Error in changing to directory using os.chdir in for loop - python | 39,121,006 | <p>Hey i've got this script which converts all the flv video files in a directory to mp4 format. </p>
<p>I got it to work with just having all the files placed in one directory but am now needing to modify it so it goes into folders within that directory and converts the files within each folder it comes across.</p>
<p>Here is my code </p>
<pre><code>import os
from optparse import OptionParser

sourcedirectory = "/home/dubit/Desktop/test/"

class ConvertToMP4:
    def __init__(self):
        self.flvfiles = []

    def fetch_flv_files(self, directory_name):
        print("Scanning directory: " + directory_name)
        for (dirpath, dirnames, filenames) in os.walk(directory_name):
            for files in filenames:
                if files.endswith('.flv'):
                    print('convert file: ' + files)
                    self.flvfiles.append(files)

    def convert_flv_file_to_mp4_file(self):
        # check to see if the list is empty, if not proceed
        num_of_dir = len(list(os.walk(sourcedirectory)))
        if len(self.flvfiles) <= 0:
            print("No files to convert!")
            return
        for x in range(num_of_dir):
            for (dirpath, dirnames, filenames) in os.walk(sourcedirectory):
                for z in dirpath:
                    os.chdir(z)
                    for flvfiles in filenames:
                        mp4_file = flvfiles.replace('.flv', '.mp4')
                        cmd_string = 'ffmpeg -i "' + flvfiles + '" -vcodec libx264 -crf 23 -acodec aac -strict experimental "' + mp4_file + '"'
                        print('converting ' + flvfiles + ' to ' + mp4_file)
                        os.system(cmd_string)

def main():
    usage = "usage: %prog -d <source directory for flv files>"
    parser = OptionParser(usage=usage)
    parser.add_option("-d", "--sourcedirectory", action="store",
                      type="string", dest="sourcedirectory", default="./",
                      help="source directory where all the flv files are stored")
    (options, args) = parser.parse_args()
    flv_to_mp4 = ConvertToMP4()
    flv_to_mp4.fetch_flv_files(sourcedirectory)
    flv_to_mp4.convert_flv_file_to_mp4_file()
    return 0

main()
</code></pre>
<p>This is the error I am receiving</p>
<pre><code>sudo ./sub_start_comp.py
Scanning directory: /home/dubit/Desktop/test/
convert file: 20051210-w50s.flv
convert file: barsandtone.flv
Traceback (most recent call last):
  File "./sub_start_comp.py", line 66, in &lt;module&gt;
    main()
  File "./sub_start_comp.py", line 63, in main
    flv_to_mp4.convert_flv_file_to_mp4_file()
  File "./sub_start_comp.py", line 46, in convert_flv_file_to_mp4_file
    os.chdir(z)
OSError: [Errno 2] No such file or directory: 'h'
</code></pre>
<p>I am new to python scripting so have not really got the knowledge to work out the issue, so any help will be appreciated. If I had to take a guess, it's just taking the first character from the dirpath variable.</p>
| 0 | 2016-08-24T10:39:20Z | 39,121,099 | <p>The <code>dirpath</code> is a <em>single</em> string. When you iterate over it using <code>for</code> loop (<code>for z in dirpath</code>), you're iterating over each individual character in string <code>'/home/dubit/Desktop/test/'</code>! First <code>z</code> is set to <code>'/'</code>, then <code>'h'</code>... and that's where the <code>chdir</code> fails as there is no directory named <code>h</code> in your root directory.</p>
<hr>
<p>Just replace</p>
<pre><code>for z in dirpath:
os.chdir(z)
</code></pre>
<p>with</p>
<pre><code>os.chdir(dirpath)
</code></pre>
<p>and adjust indents accordingly, and it should work.</p>
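<p>A minimal sketch of the corrected walk (using a throwaway temp tree rather than the question's paths) showing that <code>os.chdir(dirpath)</code> works because <code>dirpath</code> is the whole path string:</p>

```python
import os
import tempfile

# build a tiny throwaway directory tree with one .flv file in a subfolder
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'sub'))
open(os.path.join(root, 'sub', 'clip.flv'), 'w').close()

for dirpath, dirnames, filenames in os.walk(root):
    os.chdir(dirpath)          # dirpath is a single string: the directory being walked
    for name in filenames:
        if name.endswith('.flv'):
            print(os.path.join(dirpath, name))
```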
| 4 | 2016-08-24T10:42:53Z | [
"python",
"python-2.7"
] |
Error in changing to directory using os.chdir in for loop - python | 39,121,006 | <p>Hey i've got this script which converts all the flv video files in a directory to mp4 format. </p>
<p>I got it to work with just having all the files placed in one directory but am now needing to modify it so it goes into folders within that directory and converts the files within each folder it comes across.</p>
<p>Here is my code </p>
<pre><code>import os
from optparse import OptionParser

sourcedirectory = "/home/dubit/Desktop/test/"

class ConvertToMP4:
    def __init__(self):
        self.flvfiles = []

    def fetch_flv_files(self, directory_name):
        print("Scanning directory: " + directory_name)
        for (dirpath, dirnames, filenames) in os.walk(directory_name):
            for files in filenames:
                if files.endswith('.flv'):
                    print('convert file: ' + files)
                    self.flvfiles.append(files)

    def convert_flv_file_to_mp4_file(self):
        # check to see if the list is empty, if not proceed
        num_of_dir = len(list(os.walk(sourcedirectory)))
        if len(self.flvfiles) <= 0:
            print("No files to convert!")
            return
        for x in range(num_of_dir):
            for (dirpath, dirnames, filenames) in os.walk(sourcedirectory):
                for z in dirpath:
                    os.chdir(z)
                    for flvfiles in filenames:
                        mp4_file = flvfiles.replace('.flv', '.mp4')
                        cmd_string = 'ffmpeg -i "' + flvfiles + '" -vcodec libx264 -crf 23 -acodec aac -strict experimental "' + mp4_file + '"'
                        print('converting ' + flvfiles + ' to ' + mp4_file)
                        os.system(cmd_string)

def main():
    usage = "usage: %prog -d <source directory for flv files>"
    parser = OptionParser(usage=usage)
    parser.add_option("-d", "--sourcedirectory", action="store",
                      type="string", dest="sourcedirectory", default="./",
                      help="source directory where all the flv files are stored")
    (options, args) = parser.parse_args()
    flv_to_mp4 = ConvertToMP4()
    flv_to_mp4.fetch_flv_files(sourcedirectory)
    flv_to_mp4.convert_flv_file_to_mp4_file()
    return 0

main()
</code></pre>
<p>This is the error I am receiving</p>
<pre><code>sudo ./sub_start_comp.py
Scanning directory: /home/dubit/Desktop/test/
convert file: 20051210-w50s.flv
convert file: barsandtone.flv
Traceback (most recent call last):
  File "./sub_start_comp.py", line 66, in &lt;module&gt;
    main()
  File "./sub_start_comp.py", line 63, in main
    flv_to_mp4.convert_flv_file_to_mp4_file()
  File "./sub_start_comp.py", line 46, in convert_flv_file_to_mp4_file
    os.chdir(z)
OSError: [Errno 2] No such file or directory: 'h'
</code></pre>
<p>I am new to python scripting so have not really got the knowledge to work out the issue, so any help will be appreciated. If I had to take a guess, it's just taking the first character from the dirpath variable.</p>
| 0 | 2016-08-24T10:39:20Z | 39,121,217 | <p>The error you're getting is due to the fact that <code>dirpath</code> is actually a string, not a list of strings. <code>os.walk()</code> returns a tuple, where <code>dirpath</code> is the current directory being walked, and <code>dirnames</code> and <code>filenames</code> are its contents.</p>
<p>Since a string is iterable in Python, you don't get any interpreter errors; instead you loop over each individual character of the <code>dirpath</code> string. For example,</p>
<pre><code>for i, c in enumerate("abcd"):
    print('index: {i}, char: {c}'.format(i=i, c=c))
</code></pre>
<p>will output</p>
<pre><code>index: 0, char: a
index: 1, char: b
index: 2, char: c
index: 3, char: d
</code></pre>
<p>So, you should <code>chdir()</code> to <code>dirpath</code>, not loop over it. Looping through the filesystem is done internally by <code>os.walk()</code>.</p>
| 1 | 2016-08-24T10:47:59Z | [
"python",
"python-2.7"
] |
How to make simple graphs in python 2.7 | 39,121,235 | <p>I would like to make simple graphs for my web page in python/django, but I do not know which library to use (and how).</p>
<p>I DO NOT WANT CHARTS, I SEEK A WAY TO CREATE IMAGE FROM PRIMITIVES LIKE RECTANGLES.</p>
<p>Each such graph is probably generated and used only one time, as next time the values would differ.</p>
<p>I can simply compute the positions of all rectangles, lines or texts in it, so I would like something lightweight to just create a picture from that, which I would return as img/png (or so) mime style
like <img src="http://my.web.www/my/page/graph" > where the parameters to show would be decided by session and database.</p>
<p>I can compute all the sizes beforehand, so I would like something simple like</p>
<pre><code>img=Image(PNG,598,89) # style, x, y
img.add_text('1.3.', 10,10)
img.add_rectangle(20,10, 70,20, CYAN, BLACK)
....
return img.render()
</code></pre>
<p>Can you direct me, how to do it?</p>
<p>Thanks beforehand</p>
<p><a href="http://i.stack.imgur.com/Rc7pS.png" rel="nofollow"><img src="http://i.stack.imgur.com/Rc7pS.png" alt="graph"></a></p>
<hr>
<p><a href="http://stackoverflow.com/users/6369873/navit">navit</a> nailed it :)</p>
<pre><code># from django.utils.httpwrappers import HttpResponse
from PIL import Image, ImageDraw
import os,sys
im = Image.new('RGB',(598,89),'white')
draw = ImageDraw.Draw(im)
draw.rectangle((0,0,im.size[0]-1,im.size[1]-1), outline='blue')
draw.rectangle((25,10,590,20), fill='white', outline='black')
draw.rectangle((25,10,70,20), fill='rgb(255,0,0)', outline='black')
draw.rectangle((70,10,90,20), fill='green', outline='black')
draw.text((1,10),'1.3.',fill='black')
del draw
# write to stdout
im.save(sys.stdout, "PNG")
# draw.flush()
# response = HttpResponse(mimetype="image/png")
# image.save(response, "PNG")
# return response
</code></pre>
<p><a href="http://i.stack.imgur.com/OK7P1.png" rel="nofollow"><img src="http://i.stack.imgur.com/OK7P1.png" alt="enter image description here"></a></p>
| 2 | 2016-08-24T10:48:57Z | 39,121,646 | <p>You should check Pillow out. Here is a sample of how it works:</p>
<pre><code>import sys

from PIL import Image, ImageDraw

im = Image.open("lena.pgm")
draw = ImageDraw.Draw(im)
draw.line((0, 0) + im.size, fill=128)
draw.line((0, im.size[1], im.size[0], 0), fill=128)
del draw
# write to stdout
im.save(sys.stdout, "PNG")
</code></pre>
<p>Serving a file from Pillow to your client should be straightforward. Let me know if you have a question.</p>
<p>edit: found <a href="https://github.com/daviddoria/Examples/tree/master/Python/PIL" rel="nofollow">these</a> examples to get you started.</p>
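<p>For the Django-serving part the OP left commented out, one common approach (a sketch assuming Pillow is installed; the view wiring itself is up to you) is to render into an in-memory buffer instead of <code>sys.stdout</code>:</p>

```python
import io

from PIL import Image, ImageDraw

# draw the same kind of primitives as in the question
im = Image.new('RGB', (598, 89), 'white')
draw = ImageDraw.Draw(im)
draw.rectangle((0, 0, im.size[0] - 1, im.size[1] - 1), outline='blue')

# render the PNG into memory instead of a file or stdout
buf = io.BytesIO()
im.save(buf, 'PNG')
png_bytes = buf.getvalue()   # hand these bytes to an HttpResponse(content_type='image/png')
print(png_bytes[:8])         # the PNG magic number
```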
| 2 | 2016-08-24T11:08:03Z | [
"python",
"django",
"python-2.7"
] |
How to make simple graphs in python 2.7 | 39,121,235 | <p>I would like to make simple graphs for my web page in python/django, but I do not know which library to use (and how).</p>
<p>I DO NOT WANT CHARTS, I SEEK A WAY TO CREATE IMAGE FROM PRIMITIVES LIKE RECTANGLES.</p>
<p>Each such graph is probably generated and used only one time, as next time the values would differ.</p>
<p>I can simply compute the positions of all rectangles, lines or texts in it, so I would like something lightweight to just create a picture from that, which I would return as img/png (or so) mime style
like <img src="http://my.web.www/my/page/graph" > where the parameters to show would be decided by session and database.</p>
<p>I can compute all the sizes beforehand, so I would like something simple like</p>
<pre><code>img=Image(PNG,598,89) # style, x, y
img.add_text('1.3.', 10,10)
img.add_rectangle(20,10, 70,20, CYAN, BLACK)
....
return img.render()
</code></pre>
<p>Can you direct me, how to do it?</p>
<p>Thanks beforehand</p>
<p><a href="http://i.stack.imgur.com/Rc7pS.png" rel="nofollow"><img src="http://i.stack.imgur.com/Rc7pS.png" alt="graph"></a></p>
<hr>
<p><a href="http://stackoverflow.com/users/6369873/navit">navit</a> nailed it :)</p>
<pre><code># from django.utils.httpwrappers import HttpResponse
from PIL import Image, ImageDraw
import os,sys
im = Image.new('RGB',(598,89),'white')
draw = ImageDraw.Draw(im)
draw.rectangle((0,0,im.size[0]-1,im.size[1]-1), outline='blue')
draw.rectangle((25,10,590,20), fill='white', outline='black')
draw.rectangle((25,10,70,20), fill='rgb(255,0,0)', outline='black')
draw.rectangle((70,10,90,20), fill='green', outline='black')
draw.text((1,10),'1.3.',fill='black')
del draw
# write to stdout
im.save(sys.stdout, "PNG")
# draw.flush()
# response = HttpResponse(mimetype="image/png")
# image.save(response, "PNG")
# return response
</code></pre>
<p><a href="http://i.stack.imgur.com/OK7P1.png" rel="nofollow"><img src="http://i.stack.imgur.com/OK7P1.png" alt="enter image description here"></a></p>
| 2 | 2016-08-24T10:48:57Z | 39,121,890 | <p><a href="http://matplotlib.org/" rel="nofollow">http://matplotlib.org/</a> lets you generate plenty of great graphs. You should be able to save one as an image and integrate it into your webpage.</p>
| 0 | 2016-08-24T11:18:58Z | [
"python",
"django",
"python-2.7"
] |
How to make simple graphs in python 2.7 | 39,121,235 | <p>I would like to make simple graphs for my web page in python/django, but I do not know which library to use (and how).</p>
<p>I DO NOT WANT CHARTS, I SEEK A WAY TO CREATE IMAGE FROM PRIMITIVES LIKE RECTANGLES.</p>
<p>Each such graph is probably generated and used only once, as next time the values would differ.</p>
<p>I can simply compute the positions of all rectangles, lines or texts in it, so I would like something lightweight to just create a picture from that, which I would return as an img/png (or similar) MIME type,
like <img src="http://my.web.www/my/page/graph" > where the parameters to show would be decided by session and database.</p>
<p>I can compute all the sizes beforehand, so I would like something simple like</p>
<pre><code>img=Image(PNG,598,89) # style, x, y
img.add_text('1.3.', 10,10)
img.add_rectangle(20,10, 70,20, CYAN, BLACK)
....
return img.render()
</code></pre>
<p>Can you direct me, how to do it?</p>
<p>Thanks in advance</p>
<p><a href="http://i.stack.imgur.com/Rc7pS.png" rel="nofollow"><img src="http://i.stack.imgur.com/Rc7pS.png" alt="graph"></a></p>
<hr>
<p><a href="http://stackoverflow.com/users/6369873/navit">navit</a> nailed it :)</p>
<pre><code># from django.http import HttpResponse
from PIL import Image, ImageDraw
import os,sys
im = Image.new('RGB',(598,89),'white')
draw = ImageDraw.Draw(im)
draw.rectangle((0,0,im.size[0]-1,im.size[1]-1), outline='blue')
draw.rectangle((25,10,590,20), fill='white', outline='black')
draw.rectangle((25,10,70,20), fill='rgb(255,0,0)', outline='black')
draw.rectangle((70,10,90,20), fill='green', outline='black')
draw.text((1,10),'1.3.',fill='black')
del draw
# write to stdout
im.save(sys.stdout, "PNG")
# draw.flush()
# response = HttpResponse(mimetype="image/png")
# image.save(response, "PNG")
# return response
</code></pre>
<p><a href="http://i.stack.imgur.com/OK7P1.png" rel="nofollow"><img src="http://i.stack.imgur.com/OK7P1.png" alt="enter image description here"></a></p>
| 2 | 2016-08-24T10:48:57Z | 39,121,971 | <p>What about <a href="https://plot.ly/python/gantt/" rel="nofollow">plotly</a>? I've never used it in a project, but from the examples it seems very powerful and easy to use. It has a <a href="https://plot.ly/python/static-image-export/" rel="nofollow">static image export</a> (as most graphics libraries probably do).</p>
| 0 | 2016-08-24T11:22:20Z | [
"python",
"django",
"python-2.7"
] |
How to create a specific type of object for type .hdf5? | 39,121,367 | <p>My question is about creating an object type or document for .hdf5 files. The object will have three attributes: an id, a user_id and a boolean array of size 64. I have to create about 10,000,000 (ten million) of them. </p>
<p>Imagine MongoDB; I have to use them like that. I have to make queries for objects with a particular user_id as well as for all of them. </p>
<p>Any suggestion and help is appreciated.</p>
| 0 | 2016-08-24T10:55:18Z | 39,121,705 | <p>I would go ahead and use a dictionary for this case, since dictionaries scale up well. Since the query would be on user_id, I would make it the key.</p>
<p>The structure would be like </p>
<pre><code>{
    'user_id-xyz': {
        'id': 'id-1212',
        'boolarray': [True, False, ...],
    },
    'user_id-abc': {
        ...
    }
}
</code></pre>
<p>In order to achieve this, I might go for a numpy custom datatype.</p>
<pre><code>element = np.dtype([('id', 'S15'), ('boolarray', 'b', (64, 1))])  # 'S15': 15-byte string id ('i16' is not a valid numpy dtype)
f = h5py.File('foo.hdf5','w')
dset = f.create_dataset("blocky", (1000000,), dtype='V79') # 64(bools)+15(for id)
grp = f.create_group("user_id-xyz")
# create subgroups for each id.
subdataset = grp.create_dataset('ele',(1,),dtype=element)
# test of membership.
'user_id-xyz' in f
# retrieval
f.get('user_id-xyz')
# all keys.
f.keys()
</code></pre>
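<p>A quick sanity check of the structured dtype idea above, using numpy only (the field names are the ones assumed in this answer). Note that 15 id bytes plus 64 one-byte booleans is exactly the 79 bytes behind the <code>'V79'</code> comment:</p>

```python
import numpy as np

# One record: a fixed-width byte-string id plus 64 boolean flags.
element = np.dtype([('id', 'S15'), ('boolarray', '?', (64,))])

rec = np.zeros(1, dtype=element)
rec[0]['id'] = b'id-1212'
rec[0]['boolarray'][3] = True  # writes through the view into the record

print(element.itemsize)  # 15 + 64 = 79 bytes per record
```

The same dtype can then be passed to <code>h5py</code>'s <code>create_dataset</code>.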
<p>Overall, I hope this helps you. </p>
| 0 | 2016-08-24T11:10:10Z | [
"python",
"mongodb",
"hdf5",
"h5py"
] |
How do I ignore the "redundant parentheses" feature in PyCharm? | 39,121,432 | <p>PyCharm decides that certain parentheses in my Python code are 'redundant'. I want to keep them anyway. So PyCharm started annoying me with green lines under them. I don't want to give in to PyCharm's quirks.</p>
<p>I was able to ignore other warnings in the following way:</p>
<blockquote>
<p>File > Settings > Editor > Inspections > uncheck all warnings that you don't like..</p>
</blockquote>
<p>Sadly, the 'redundant parentheses' warning does not appear in that list.</p>
<p>How do I ignore this warning?</p>
| -1 | 2016-08-24T10:58:23Z | 39,121,575 | <p>There is <code>redundant parenthesis</code> under Inspections (PyCharm 2016.1.4). Look closer.</p>
<p>If you still can't find it, there is a search bar on the top left corner of the settings menu. Search for <code>redun</code> and the <code>redundant parenthesis</code> inspection should come up.</p>
| 4 | 2016-08-24T11:04:51Z | [
"python",
"pycharm"
] |
How do I ignore the "redundant parentheses" feature in PyCharm? | 39,121,432 | <p>PyCharm decides that certain parentheses in my Python code are 'redundant'. I want to keep them anyway. So PyCharm started annoying me with green lines under them. I don't want to give in to PyCharm's quirks.</p>
<p>I was able to ignore other warnings in the following way:</p>
<blockquote>
<p>File > Settings > Editor > Inspections > uncheck all warnings that you don't like..</p>
</blockquote>
<p>Sadly, the 'redundant parentheses' warning does not appear in that list.</p>
<p>How do I ignore this warning?</p>
| -1 | 2016-08-24T10:58:23Z | 39,121,691 | <p>redundant parenthesis is a <em>weak warning</em>. You can just uncheck the box there.</p>
<p><a href="http://i.stack.imgur.com/QTCwM.png" rel="nofollow"><img src="http://i.stack.imgur.com/QTCwM.png" alt="enter image description here"></a></p>
| 3 | 2016-08-24T11:09:47Z | [
"python",
"pycharm"
] |
Saving functions using shelve | 39,121,587 | <p>I'm trying to use the shelve python module to save my session output and reload it later, but I have found that if I have defined functions then I get an error in the reloading stage. Is there a problem with the way I am doing it? I based my code on an answer at <a href="http://stackoverflow.com/questions/2960864/how-can-i-save-all-the-variables-in-the-current-python-session">How can I save all the variables in the current python session?</a> . </p>
<p>Here's some simple code that reproduces the error:</p>
<pre><code>def test_fn(): #simple test function
    return

import shelve

my_shelf = shelve.open('test_shelve','n')
for key in globals().keys():
    try:
        my_shelf[key] = globals()[key]
    except: #__builtins__, my_shelf, and imported modules cannot be shelved.
        pass
my_shelf.close()
</code></pre>
<p>Then if I exit I can do</p>
<pre><code>ls -lh test_shelve*
-rw-r--r-- 1 user group 22K Aug 24 11:16 test_shelve.bak
-rw-r--r-- 1 user group 476K Aug 24 11:16 test_shelve.dat
-rw-r--r-- 1 user group 22K Aug 24 11:16 test_shelve.dir
</code></pre>
<p>In general, in a new IPython session I want to be able to do something like:</p>
<pre><code>import shelve
my_shelf = shelve.open('test_shelve')
for key in my_shelf:
    globals()[key] = my_shelf[key]
</code></pre>
<p>This produces an error for key 'test_fn'. Here is some code to demonstrate the error: </p>
<pre><code>print my_shelf['test_fn']
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-4-deb481380237> in <module>()
----> 1 print my_shelf['test_fn']

/home/user/anaconda2/envs/main/lib/python2.7/shelve.pyc in __getitem__(self, key)
    120         except KeyError:
    121             f = StringIO(self.dict[key])
--> 122             value = Unpickler(f).load()
    123         if self.writeback:
    124             self.cache[key] = value
AttributeError: 'module' object has no attribute 'test_fn'
</code></pre>
<p>Of course, one solution would be to exclude functions in the saving stage, but from what I have read it should be possible to restore them with this method, and so I wondered if I am doing things wrongly.</p>
| 0 | 2016-08-24T11:05:22Z | 39,121,645 | <p>You can't use <code>shelve</code> (or <a href="https://docs.python.org/2/library/pickle.html#module-pickle" rel="nofollow"><code>pickle</code></a>, the actual protocol used by <code>shelve</code>) to store <em>executable code</em>, no.</p>
<p>What is stored is a <em>reference</em> to the function (just the location where the function can be imported from again). Code is not data, only the fact that you referenced a function is data here. Pickle expects to be able to load the same module and function again when you load the stored information.</p>
<p>The same would apply to classes; if you pickle a reference to a class, or pickle an instance of a class, then only the information to import the class again is stored (to re-create the reference or instance).</p>
<p>All this is done because you <em>already</em> have a persisted and loadable representation of that function or class: the module that defines them. There is no need to store another copy.</p>
<p>This is documented explicitly in the <a href="https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled" rel="nofollow"><em>What can be pickled and unpickled?</em> section</a>:</p>
<blockquote>
<p>Note that functions (built-in and user-defined) are pickled by "fully qualified" name reference, not by value. This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function's code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised. </p>
</blockquote>
<p>To go into some more detail for your specific example: The main script that Python executes is called the <code>__main__</code> module, and you shelved the <code>__main__.test_fn</code> function. What is stored then is simply a marker that signals you referenced a <em>global</em> and the import location, so something close to <code>GLOBAL</code> and <code>__main__</code> plus <code>test_fn</code> are stored. When loading the shelved data again, upon seeing the <code>GLOBAL</code> marker, the <code>pickle</code> module tries to load the name <code>test_fn</code> from the <code>__main__</code> module. Since your second script is again loaded as <code>__main__</code> but doesn't have a <code>test_fn</code> global, loading the reference fails.</p>
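<p>You can see this for yourself by disassembling the pickle stream: it contains only an opcode naming the module and the function, not the function's bytecode (a small sketch using the standard library):</p>

```python
import pickle
import pickletools

def test_fn():
    return

data = pickle.dumps(test_fn)
# The stream stores just "module + name" (a GLOBAL/STACK_GLOBAL opcode
# referring to e.g. __main__.test_fn), not the function's code object.
pickletools.dis(data)
```

That is why unpickling in a fresh session fails unless the same module still defines <code>test_fn</code>.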
| 3 | 2016-08-24T11:07:59Z | [
"python",
"python-2.7"
] |
Updating messages with inline keyboards using callback queries | 39,121,678 | <p>I want to update a message in chat with an inline keyboard, but I can't understand how to receive an <em>inline_message_id</em>, or, if that is only for inline queries, how I can determine the <em>chat_id</em> and <em>message_id</em> for use with <a href="https://pythonhosted.org/python-telegram-bot/telegram.bot.html" rel="nofollow">editMessageText(*args, **kwargs)</a> in class <em>telegram.bot.Bot</em>.</p>
<p>my code example (part of it):</p>
<pre><code>#!/usr/bin/python
import telegram
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, InlineQueryHandler, CallbackQueryHandler
tokenid = "YOUR_TOKEN_ID"

def inl(bot, update):
    if update.callback_query.data == "k_light_on":
        # func to turn the light on: res = k_light.on()
        bot.answerCallbackQuery(callback_query_id=update.callback_query.id, text="Turning on light ON!")
        bot.editMessageText(inline_message_id=update.callback_query.inline_message_id, text="Do you want to turn On or Off light? Light is ON")
        # hardcoded vars variant
        #bot.editMessageText(message_id=298, chat_id=174554240, text="Do you want to turn On or Off light? Light is ON")
    elif update.callback_query.data == "k_light_off":
        # func to turn the light off: res = k_light.off()
        bot.answerCallbackQuery(callback_query_id=update.callback_query.id, text="Turning off light OFF!")
        bot.editMessageText(inline_message_id=update.callback_query.inline_message_id, text="Do you want to turn On or Off light? Light is OFF")
        # hardcoded vars variant
        #bot.editMessageText(message_id=298, chat_id=174554240, text="Do you want to turn On or Off light? Light is OFF")
    else:
        print "Err"

def k_light_h(bot, update):
    reply_markup = telegram.InlineKeyboardMarkup([[telegram.InlineKeyboardButton("On", callback_data="k_light_on"), telegram.InlineKeyboardButton("Off", callback_data="k_light_off")]])
    ddd = bot.sendMessage(chat_id=update.message.chat_id, text="Do you want to turn On or Off light?", reply_markup=reply_markup)

if __name__ == "__main__":
    updater = Updater(token=tokenid)

    ### Handler groups
    dispatcher = updater.dispatcher
    # light
    k_light_handler = CommandHandler('light', k_light_h)
    dispatcher.add_handler(k_light_handler)
    # errors
    updater.dispatcher.add_error_handler(error)

    updater.start_polling()
    # Run the bot until the user presses Ctrl-C or the process receives SIGINT,
    # SIGTERM or SIGABRT
    updater.idle()
</code></pre>
<p>When I run it I have an error:</p>
<pre><code>telegram.ext.dispatcher - WARNING - A TelegramError was raised while processing the Update.
root - WARNING - Update ...
... caused error "u'Bad Request: message identifier is not specified'"
</code></pre>
<p>I checked var <em>update.callback_query.inline_message_id</em> and it was empty. When I tried <em>bot.editMessageText</em> with hardcoded vars <em>chat_id</em> and <em>message_id</em> it worked well.</p>
<p>Do I need to save the <em>chat_id</em> and <em>message_id</em> vars in a DB (for all users) when they run the /light command, and then read those values back from the DB when they press an inline button, or can I use some simpler method for editing messages?</p>
| 0 | 2016-08-24T11:09:16Z | 39,127,669 | <p>You should pass <code>message_id</code> instead of <code>inline_message_id</code> to <code>editMessageText</code>.</p>
<p>So the solution for you is this:</p>
<pre class="lang-python prettyprint-override"><code>bot.editMessageText(
    message_id = update.callback_query.message.message_id,
    chat_id = update.callback_query.message.chat.id,
    text = "Do you want to turn On or Off light? Light is ON"
)
</code></pre>
| 1 | 2016-08-24T15:41:16Z | [
"python",
"telegram",
"telegram-bot",
"python-telegram-bot"
] |
Setting with enlargement - updating transaction DF | 39,121,709 | <p>Looking for ways to achieve the following updates on a dataframe:</p>
<ul>
<li><code>dfb</code> is the base dataframe that I want to update with <code>dft</code> transactions.</li>
<li>Any common index rows should be updated with values from <code>dft</code>. </li>
<li>Indexes only in <code>dft</code> should be appended to <code>dfb</code>.</li>
</ul>
<p>Looking at the documentation, setting with enlargement looked perfect but then I realized it only worked with a single row. Is it possible to use setting with enlargement to do this update or is there another method that could be recommended?</p>
<pre><code>dfb = pd.DataFrame(data={'A': [11,22,33], 'B': [44,55,66]}, index=[1,2,3])
dfb
Out[70]:
    A   B
1  11  44
2  22  55
3  33  66
dft = pd.DataFrame(data={'A': [0,2,3], 'B': [4,5,6]}, index=[3,4,5])
dft
Out[71]:
   A  B
3  0  4
4  2  5
5  3  6
# Updated dfb should look like this:
dfb
Out[75]:
    A   B
1  11  44
2  22  55
3   0   4
4   2   5
5   3   6
</code></pre>
| 1 | 2016-08-24T11:10:27Z | 39,121,833 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow"><code>combine_first</code></a> with renaming columns, last convert <code>float</code> columns to <code>int</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html" rel="nofollow"><code>astype</code></a>:</p>
<pre><code>dft = dft.rename(columns={'c':'B', 'B':'A'}).combine_first(dfb).astype(int)
print (dft)
A B
1 11 44
2 22 55
3 0 4
4 2 5
5 3 6
</code></pre>
<p>Another solution with finding same indexes in both DataFrames by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.intersection.html" rel="nofollow"><code>Index.intersection</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>drop</code></a> it from first <code>DataFrame</code> <code>dfb</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p>
<pre><code>idx = dfb.index.intersection(dft.index)
print (idx)
Int64Index([3], dtype='int64')

dfb = dfb.drop(idx)
print (dfb)
    A   B
1  11  44
2  22  55

print (pd.concat([dfb, dft]))
    A   B
1  11  44
2  22  55
3   0   4
4   2   5
5   3   6
</code></pre>
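<p>If you want to sanity-check the <code>combine_first</code> approach on the exact frames from the question, this self-contained sketch reproduces the expected result (plain pandas, no extra assumptions):</p>

```python
import pandas as pd

dfb = pd.DataFrame({'A': [11, 22, 33], 'B': [44, 55, 66]}, index=[1, 2, 3])
dft = pd.DataFrame({'A': [0, 2, 3], 'B': [4, 5, 6]}, index=[3, 4, 5])

# Values from dft win on overlapping indexes; dfb fills the rest, and the
# intermediate float result is cast back to int.
out = dft.combine_first(dfb).astype(int)
print(out)
```

Row 3 comes from <code>dft</code>, rows 1-2 from <code>dfb</code>, rows 4-5 are appended.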
| 1 | 2016-08-24T11:16:10Z | [
"python",
"pandas",
"dataframe",
"multiple-columns",
"insert-update"
] |
decoding base64 images from rtf | 39,121,875 | <p>In my rtf document, I want to extract an image from a string.
The string is like this:</p>
<pre><code> \pard\pard\qc{\*\shppict{\pict\pngblip\picw320\pich192\picwgoal0\pichgoal0
89504e470d0a1a0a0000000d4948445200000140000000c00802000000fa352d9100000e2949444[.....]6c4f0000000049454e44ae426082
}}
</code></pre>
<p>Questions:</p>
<p>1) Is this really base64?</p>
<p>2) How do I decode it using the code below?</p>
<pre><code>import base64
imgData = b"base64code00from007aove007string00bcox007idont007know007where007it007starts007and007ends"
with open("imageToSave.png", "wb") as fh:
    fh.write(base64.decodestring(imgData))
</code></pre>
<p>Full rtf text(which when saved as .rtf shows image) is at:</p>
<p><a href="http://hastebin.com/axabazaroc.tex" rel="nofollow">http://hastebin.com/axabazaroc.tex</a></p>
| 0 | 2016-08-24T11:18:07Z | 39,121,955 | <p>No, that's not Base64-encoded data. It is <em>hexadecimal</em>. From the <a href="https://en.wikipedia.org/wiki/Rich_Text_Format#Pictures" rel="nofollow">Wikipedia article on the RTF format</a>:</p>
<blockquote>
<p>RTF supports inclusion of JPEG, Portable Network Graphics (PNG), Enhanced Metafile (EMF), Windows Metafile (WMF), Apple PICT, Windows Device-dependent bitmap, Windows Device Independent bitmap and OS/2 Metafile picture types in hexadecimal (the default) or binary format in a RTF file.</p>
</blockquote>
<p>The <a href="https://docs.python.org/2/library/binascii.html#binascii.unhexlify" rel="nofollow"><code>binascii.unhexlify()</code> function</a> will decode that back to binary image data for you; you have a PNG image here:</p>
<pre><code>>>> # data contains the hex data from your link, newlines removed
...
>>> from binascii import unhexlify
>>> r = unhexlify(data)
>>> r[:20]
'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x01@'
>>> from imghdr import test_png
>>> test_png(r, None)
'png'
</code></pre>
<p>but of course the <code>\pngblip</code> entry was a clue there. I won't include the image here, it is a rather dull 8-bit 320x192 black rectangle.</p>
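<p>A tiny self-contained version of the same check (standard library only; the hex string below is just the start of the payload from the question's <code>\pict</code> block):</p>

```python
from binascii import unhexlify

# First bytes of the hex payload: PNG signature + IHDR chunk header.
hex_data = "89504e470d0a1a0a0000000d49484452"
raw = unhexlify(hex_data)

# A PNG file always starts with this 8-byte signature.
print(raw[:8] == b'\x89PNG\r\n\x1a\n')
```

Running <code>unhexlify</code> over the full payload and writing the bytes to a <code>.png</code> file recovers the image.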
| 4 | 2016-08-24T11:21:21Z | [
"python",
"base64",
"rtf"
] |
Why is Python regex 5 times slower on an aws instance than on my local mac OS X with similar specs? | 39,122,067 | <p>I have the following regex code. </p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import re
start_time = time.time()
input_string = """áá®ááá¯á áºáá¾ áá¬ááºáá¼á®á¸áá¾ááºááẠá¡á¬áá¯áááºáááá±á¸áá½á¾ááºá¸á
á¬ááᯠááá½ááºáá±á¸áá±á¸áá¬áá¶áááºááẠá¡áááá¹áá¬ááºáá»áẠáááááááºáá²á·áááºá"""
if type(input_string) is not unicode:
    input_string = unicode(input_string, "utf8")
input_string = re.sub(ur"([\u1000-\u104F])\s+(?=[\u1000-\u104F])", r"\1", input_string)
input_string = re.sub(ur"\u103A\u1037", u"\u1037\u103A", input_string)
input_string = re.sub(ur"\u1036\u102F", u"\u102F\u1036", input_string)
input_string = re.sub(ur"[\u200B\u200C]", "", input_string)
input_string = re.sub(ur"([\u102D\u102E])\u1030", ur"\1\u102F", input_string)
input_string = re.sub(ur"(\u1047)(?=[\u1000-\u101C\u101E-\u102A\u102C\u102E-\u103F\u104C-\u109F\u0020])", u"\u101B", input_string)
input_string = re.sub(ur"\u1031\u1047", u"\u1031\u101B", input_string)
print "time taken -> %s" % (time.time() - start_time)
</code></pre>
<h2>Benchmarks</h2>
<p>my local Mac (3.2 GHz Intel Core i5 with 16 GB 1867 MHz DDR3 and Fusion Drive)</p>
<p>aws instance (m4.xlarge, ebs_optimized)</p>
<p>local ubuntu (2 cores, 8GB RAM)</p>
<p>digital ocean (1 core, 512MB RAM)</p>
<h2>Time Taken</h2>
<pre><code>+---------------+-------------------+
| OS | TIME |
+---------------+-------------------+
| Mac | 0.00268292427063 |
| AWS | 0.0100150108337 |
| Local Ubuntu | 0.00330495834351 |
| Digital Ocean | 0.00202393531799 |
+---------------+-------------------+
</code></pre>
<p>As you can see, AWS is taking ~ 5 times longer than my other machines.</p>
<p>I'm using <code>python2.7</code>. Can you please explain to me what is wrong? Is aws instance that bad? How can I check what is wrong with my m4.xlarge?</p>
<p><em>Thanks</em></p>
| 2 | 2016-08-24T11:27:21Z | 39,122,978 | <p>The problem is that I was using python 2.7.3 on my aws instance. After I updated to 2.7.10, it is now as fast as my other machines.</p>
<p>Cheers.</p>
| 3 | 2016-08-24T12:11:30Z | [
"python",
"regex",
"amazon-web-services"
] |
How to remove spacing from an input | 39,122,117 | <p>Hi, I am new to programming and I am trying to write code that will add a hashtag to the user's input and remove spacing.</p>
<p>For example if the input is </p>
<pre><code>Enter text:
</code></pre>
<p>and I was to write "Sam is cool". I would want it to print "#Samiscool"</p>
<p>This is my program so far:</p>
<pre><code>a = input("Enter text: ")
print('#'+a)
</code></pre>
<p>If I were to enter "sam is cool" into this program, it would produce "#sam is cool",</p>
<p>when I want the program to produce "#samiscool"</p>
<p>Anything would help thanks.</p>
| -4 | 2016-08-24T11:29:29Z | 39,122,168 | <p>You can use <code>string.replace</code> to remove the whitespace, then simply use concatenation to put the <code>'#'</code> symbol in front</p>
<pre><code>def hashify(s):
    return '#' + s.replace(' ', '')
>>> hashify('Sam is cool')
'#Samiscool'
</code></pre>
| 3 | 2016-08-24T11:31:54Z | [
"python"
] |
How to remove spacing from an input | 39,122,117 | <p>Hi, I am new to programming and I am trying to write code that will add a hashtag to the user's input and remove spacing.</p>
<p>For example if the input is </p>
<pre><code>Enter text:
</code></pre>
<p>and I was to write "Sam is cool". I would want it to print "#Samiscool"</p>
<p>This is my program so far:</p>
<pre><code>a = input("Enter text: ")
print('#'+a)
</code></pre>
<p>If I were to enter "sam is cool" into this program, it would produce "#sam is cool",</p>
<p>when I want the program to produce "#samiscool"</p>
<p>Anything would help thanks.</p>
| -4 | 2016-08-24T11:29:29Z | 39,122,193 | <p>Here's a possible solution, it'll get rid not only of spaces but also tabs:</p>
<pre><code>a = input("Enter text: ")
print('#' + "".join(a.split()))
</code></pre>
| 4 | 2016-08-24T11:33:10Z | [
"python"
] |
How to remove spacing from an input | 39,122,117 | <p>Hi, I am new to programming and I am trying to write code that will add a hashtag to the user's input and remove spacing.</p>
<p>For example if the input is </p>
<pre><code>Enter text:
</code></pre>
<p>and I was to write "Sam is cool". I would want it to print "#Samiscool"</p>
<p>This is my program so far:</p>
<pre><code>a = input("Enter text: ")
print('#'+a)
</code></pre>
<p>If I were to enter "sam is cool" into this program, it would produce "#sam is cool",</p>
<p>when I want the program to produce "#samiscool"</p>
<p>Anything would help thanks.</p>
| -4 | 2016-08-24T11:29:29Z | 39,122,370 | <p>I propose one more option</p>
<pre><code>def hashify(s):
    return '#' + s.title().replace(' ', '')
>>> hashify('Sam is cool')
'#SamIsCool'
</code></pre>
| 1 | 2016-08-24T11:41:55Z | [
"python"
] |
How to remove spacing from an input | 39,122,117 | <p>Hi, I am new to programming and I am trying to write code that will add a hashtag to the user's input and remove spacing.</p>
<p>For example if the input is </p>
<pre><code>Enter text:
</code></pre>
<p>and I was to write "Sam is cool". I would want it to print "#Samiscool"</p>
<p>This is my program so far:</p>
<pre><code>a = input("Enter text: ")
print('#'+a)
</code></pre>
<p>If I were to enter "sam is cool" into this program, it would produce "#sam is cool",</p>
<p>when I want the program to produce "#samiscool"</p>
<p>Anything would help thanks.</p>
| -4 | 2016-08-24T11:29:29Z | 39,122,649 | <p>And if you really want to know why Python is cool (#PythonIsCool):</p>
<pre><code>>>> s = 'Python is cool'
>>> print( '#' + ''.join( x[0:1].upper() + x[1:len(x)] for x in s.split() ))
#PythonIsCool
</code></pre>
| 0 | 2016-08-24T11:55:19Z | [
"python"
] |
Interpolating backwards with multiple consecutive nan's in Pandas/Python? | 39,122,196 | <p>I have an array with missing values in various places.</p>
<pre><code>import numpy as np
import pandas as pd
x = np.arange(1,10).astype(float)
x[[0,1,6]] = np.nan
df = pd.Series(x)
print(df)
0    NaN
1    NaN
2    3.0
3    4.0
4    5.0
5    6.0
6    NaN
7    8.0
8    9.0
dtype: float64
</code></pre>
<p>For each <code>NaN</code>, I want to take the value following it, divide it by two, and then propagate that backwards to the preceding consecutive <code>NaN</code>, so I would end up with:</p>
<pre><code>0    0.75
1    1.5
2    3.0
3    4.0
4    5.0
5    6.0
6    4.0
7    8.0
8    9.0
dtype: float64
</code></pre>
<p>I've tried <code>df.interpolate()</code>, but that doesn't seem to work with consecutive NaN's.</p>
| 3 | 2016-08-24T11:33:21Z | 39,122,277 | <p>You can do something like this:</p>
<pre><code>import numpy as np
import pandas as pd
x = np.arange(1,10).astype(float)
x[[0,1,6]] = np.nan
df = pd.Series(x)
prev = 0
new_list = []
for i in df.values[::-1]:
    if np.isnan(i):
        new_list.append(prev/2.)
        prev = prev / 2.
    else:
        new_list.append(i)
        prev = i
df = pd.Series(new_list[::-1])
</code></pre>
<p>It loops over the values of the df, in reverse. It keeps track of the previous value. It adds the actual value if it is not NaN, otherwise the half of the previous value.</p>
<p>This might not be the most sophisticated Pandas solution, but you can change the behavior quite easily.</p>
| 2 | 2016-08-24T11:37:50Z | [
"python",
"pandas",
"numpy",
"dataframe",
"interpolation"
] |
Interpolating backwards with multiple consecutive nan's in Pandas/Python? | 39,122,196 | <p>I have an array with missing values in various places.</p>
<pre><code>import numpy as np
import pandas as pd
x = np.arange(1,10).astype(float)
x[[0,1,6]] = np.nan
df = pd.Series(x)
print(df)
0    NaN
1    NaN
2    3.0
3    4.0
4    5.0
5    6.0
6    NaN
7    8.0
8    9.0
dtype: float64
</code></pre>
<p>For each <code>NaN</code>, I want to take the value following it, divide it by two, and then propagate that backwards to the preceding consecutive <code>NaN</code>, so I would end up with:</p>
<pre><code>0    0.75
1    1.5
2    3.0
3    4.0
4    5.0
5    6.0
6    4.0
7    8.0
8    9.0
dtype: float64
</code></pre>
<p>I've tried <code>df.interpolate()</code>, but that doesn't seem to work with consecutive NaN's.</p>
| 3 | 2016-08-24T11:33:21Z | 39,122,684 | <p>Another solution uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> with method <code>ffill</code>, which is the same as the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ffill.html" rel="nofollow"><code>ffill()</code></a> function:</p>
<pre><code>#back order of Series
b = df[::-1].isnull()
#find all consecutive NaNs, count them within each run, multiply by 2 and replace 0 with 1
a = (b.cumsum() - b.cumsum().where(~b).ffill()).mul(2).replace({0:1})
print(a)
8    1
7    1
6    2
5    1
4    1
3    1
2    1
1    2
0    4
dtype: int32
print(df.bfill().div(a))
0    0.75
1    1.50
2    3.00
3    4.00
4    5.00
5    6.00
6    4.00
7    8.00
8    9.00
dtype: float64
</code></pre>
<p><strong>Timings</strong> (<code>len(df)=9k</code>):</p>
<pre><code>In [315]: %timeit (mat(df))
100 loops, best of 3: 11.3 ms per loop
In [316]: %timeit (jez(df1))
100 loops, best of 3: 2.52 ms per loop
</code></pre>
<p><strong>Code for timings</strong>:</p>
<pre><code>import numpy as np
import pandas as pd
x = np.arange(1,10).astype(float)
x[[0,1,6]] = np.nan
df = pd.Series(x)
print(df)
df = pd.concat([df]*1000).reset_index(drop=True)
df1 = df.copy()
def jez(df):
    b = df[::-1].isnull()
    a = (b.cumsum() - b.cumsum().where(~b).ffill()).mul(2).replace({0:1})
    return (df.bfill().div(a))

def mat(df):
    prev = 0
    new_list = []
    for i in df.values[::-1]:
        if np.isnan(i):
            new_list.append(prev/2.)
            prev = prev / 2.
        else:
            new_list.append(i)
            prev = i
    return pd.Series(new_list[::-1])
print (mat(df))
print (jez(df1))
</code></pre>
| 3 | 2016-08-24T11:57:17Z | [
"python",
"pandas",
"numpy",
"dataframe",
"interpolation"
] |
Multiprocessing - Shared Array | 39,122,270 | <p>So I'm trying to implement multiprocessing in python where I wish to have a Pool of 4-5 processes running a method in parallel. The purpose of this is to run a total of a thousand Monte Carlo simulations (200-250 simulations per process) in parallel instead of running 1000 sequentially. I want each process to write to a common shared array by acquiring a lock on it as soon as it's done processing the result for one simulation, writing the result and releasing the lock. So it should be a three step process :</p>
<ol>
<li>Acquire lock</li>
<li>Write result</li>
<li>Release lock for other processes waiting to write to array.</li>
</ol>
<p>Every time I pass the array to the processes, each process creates a copy of that array, which I do not want, as I want a common array. Can anyone help me with this by providing sample code? </p>
| 0 | 2016-08-24T11:37:10Z | 39,122,466 | <p>Not tested, but something like that should work.
The array and lock are shared between processes.</p>
<pre><code>import multiprocessing
from multiprocessing import Process

def f(array, lock):
    lock.acquire()
    # modify array here
    array[0] += 1
    lock.release()

if __name__ == '__main__':
    size = 100
    arr = multiprocessing.Array('i', size)  # shared c-type array
    lock = multiprocessing.Lock()
    p = Process(target=f, args=(arr, lock,))
    q = Process(target=f, args=(arr, lock,))
    p.start()
    q.start()
    q.join()
    p.join()
</code></pre>
<p>the documentation here <a href="https://docs.python.org/3.5/library/multiprocessing.html" rel="nofollow">https://docs.python.org/3.5/library/multiprocessing.html</a> has plenty of examples to start with</p>
| 0 | 2016-08-24T11:46:49Z | [
"python",
"multiprocessing",
"shared"
] |
Multiprocessing - Shared Array | 39,122,270 | <p>So I'm trying to implement multiprocessing in python where I wish to have a Pool of 4-5 processes running a method in parallel. The purpose of this is to run a total of a thousand Monte Carlo simulations (200-250 simulations per process) in parallel instead of running 1000 sequentially. I want each process to write to a common shared array by acquiring a lock on it as soon as it's done processing the result for one simulation, writing the result and releasing the lock. So it should be a three step process :</p>
<ol>
<li>Acquire lock</li>
<li>Write result</li>
<li>Release lock for other processes waiting to write to array.</li>
</ol>
<p>Every time I pass the array to the processes, each process creates a copy of that array, which I do not want, as I want a common array. Can anyone help me with this by providing sample code? </p>
| 0 | 2016-08-24T11:37:10Z | 39,124,004 | <p>Since you're only returning state from the child process to the parent process, using a shared array and explicit locks is overkill. You can use <code>Pool.map</code> or <code>Pool.starmap</code> to accomplish exactly what you need. For example:</p>
<pre><code>from multiprocessing import Pool
class Adder:
    """I'm using this class in place of a monte carlo simulator"""
    def add(self, a, b):
        return a + b

def setup(x, y, z):
    """Sets up the worker processes of the pool.

    Here, x, y, and z would be your global settings. They are only included
    as an example of how to pass args to setup. In this program they would
    be "some arg", "another" and 2
    """
    global adder
    adder = Adder()

def job(a, b):
    """wrapper function to start the job in the child process"""
    return adder.add(a, b)

if __name__ == "__main__":
    args = list(zip(range(10), range(10, 20)))
    # args == [(0, 10), (1, 11), ..., (8, 18), (9, 19)]
    with Pool(initializer=setup, initargs=["some arg", "another", 2]) as pool:
        # runs jobs in parallel and returns when all are complete
        results = pool.starmap(job, args)
    print(results)  # prints [10, 12, ..., 26, 28]
</code></pre>
| 0 | 2016-08-24T12:58:04Z | [
"python",
"multiprocessing",
"shared"
] |
BitVector performance issues | 39,122,376 | <p>I need to use bits for operations in a crypto scheme, however, when I transform variables and functions into BitVector(bitstring/int/textstrings="") the result is a very long bitvector, at times of length in the thousands. Now, this slows my encryption and operations on these BitVectors a LOT. How can I overcome this? :( </p>
<p>example of ways I'm using BitVector:</p>
<pre><code> msg = BitVector.BitVector(textstring=message) ^ h1
msgxored = msg ^ h1
</code></pre>
<p>Edit1: For example, <code>self.bc.encrypt(msgxored, key)</code> is only ~300 bits, but <code>encr1 = BitVector.BitVector(textstring = self.bc.encrypt(msgxored, key))</code> is ~3000 bits!</p>
| 0 | 2016-08-24T11:42:12Z | 39,122,656 | <p>Your question does not have much information. Nonetheless, the documentation says that you can set the size of your BitVector.</p>
<pre><code>bv = BitVector( intVal = 0, size = 8 )
</code></pre>
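<p>As for the roughly tenfold blow-up mentioned in the question's edit: wrapping a value with <code>textstring=...</code> costs 8 bits per character, so re-encoding an already-bit-sized result as text multiplies its length. A stdlib-only illustration of the arithmetic (the 300-bit figure is taken from the question's edit; this is not BitVector-specific code):</p>

```python
# A ~300-bit ciphertext rendered as a string of ~300 characters...
cipher_as_text = "x" * 300
# ...costs 8 bits per character once re-wrapped as a text string
bits_when_wrapped = len(cipher_as_text) * 8
print(bits_when_wrapped)  # 2400, close to the ~3000 bits observed
```

<p>So keeping intermediate results as BitVectors (or ints) instead of round-tripping them through text should keep the sizes down.</p>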
<p>Hope that helps!</p>
| 0 | 2016-08-24T11:55:44Z | [
"python",
"encryption",
"bitvector"
] |
BitVector performance issues | 39,122,376 | <p>I need to use bits for operations in a crypto scheme, however, when I transform variables and functions into BitVector(bitstring/int/textstrings="") the result is a very long bitvector, at times of length in the thousands. Now, this slows my encryption and operations on these BitVectors a LOT. How can I overcome this? :( </p>
<p>example of ways I'm using BitVector:</p>
<pre><code> msg = BitVector.BitVector(textstring=message) ^ h1
msgxored = msg ^ h1
</code></pre>
<p>Edit1: For example, <code>self.bc.encrypt(msgxored, key)</code> is only ~300 bits, but <code>encr1 = BitVector.BitVector(textstring = self.bc.encrypt(msgxored, key))</code> is ~3000 bits!</p>
| 0 | 2016-08-24T11:42:12Z | 39,123,978 | <p>This is shameless self-advertising but I made <a href="https://pypi.python.org/pypi/BytesOp" rel="nofollow">https://pypi.python.org/pypi/BytesOp</a> exactly for this.</p>
<p>You could use it like this</p>
<pre><code>from BytesOp import op_xor
msg=b"asdf"
h1=b"1234"
msgxored=op_xor(msg,h1)
print(msgxored,op_xor(msgxored,h1))
</code></pre>
| 1 | 2016-08-24T12:56:55Z | [
"python",
"encryption",
"bitvector"
] |
Change Dyno types through the Heroku API | 39,122,456 | <p>I have an app running in Heroku; I'm using the <a href="https://elements.heroku.com/addons/scheduler" rel="nofollow">Heroku scheduler</a> to run a python script that scales the number of dynos at particular times of the day, using the <a href="https://github.com/heroku/heroku.py" rel="nofollow">python API</a> (following <a href="http://stackoverflow.com/a/11961899/1237531">this answer</a>):</p>
<pre><code>import heroku
cloud = heroku.from_key(os.environ.get('HEROKU_API_KEY'))
app = cloud.apps['myapp']
webproc = app.processes['web']
webproc.scale(1)
</code></pre>
<p>My question is: is there an API call to change Dyno <em>types</em>? For instance to change it from "standard 1X" to "standard 2X" or to "hobby".</p>
<p>Thanks</p>
| -1 | 2016-08-24T11:46:21Z | 39,288,180 | <p>A chat with Heroku support has confirmed that the Python API has no command to perform this operation; I have therefore resorted to adding the following script to the app (following <a href="http://stackoverflow.com/a/27153236/1237531">this answer</a>):</p>
<pre><code>#!/bin/bash
curl -s https://s3.amazonaws.com/assets.heroku.com/heroku-client/heroku-client.tgz | tar xz
mv heroku-client/* .
rmdir heroku-client
PATH="bin:$PATH"
heroku dyno:type hobby --app MYAPP
</code></pre>
<p>Replace <code>hobby</code> with <code>standard-1x</code> or <code>standard-2x</code> as needed.</p>
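<p>If you prefer to stay in Python, the Platform API also exposes the dyno size through the formation endpoint (<code>PATCH /apps/{app}/formation/{type}</code> with a <code>size</code> attribute). The sketch below only builds the request; the endpoint details should be checked against the current Platform API reference before relying on them:</p>

```python
import json
import urllib.request

def formation_request(app, api_key, size, dyno_type="web"):
    # Builds (but does not send) a PATCH against the Heroku Platform API v3.
    url = "https://api.heroku.com/apps/{}/formation/{}".format(app, dyno_type)
    payload = json.dumps({"size": size}).encode("utf-8")
    req = urllib.request.Request(url, data=payload, method="PATCH")
    req.add_header("Accept", "application/vnd.heroku+json; version=3")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", "Bearer " + api_key)
    return req

req = formation_request("MYAPP", "my-api-key", "standard-2x")
print(req.get_method(), req.get_full_url())
# urllib.request.urlopen(req) would actually perform the change
```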
| 0 | 2016-09-02T08:51:14Z | [
"python",
"heroku"
] |
Pass data from django middleware.process_view to the template context | 39,122,541 | <p>I have a custom middleware and during its process_view I get some token. And I need to pass this token to the rendered result html.</p>
<p>I thought that a context processor is a good place to modify the context, but it looks like it's hard to pass data from the middleware into the processor.</p>
<p>It seems that the only way for process_view and a context processor to communicate is the request object. But if I set any field on the request, I get a "'WSGIRequest' object does not support item assignment" error. Here is a piece of the code:</p>
<pre><code>def process_view(self, request, view_func, view_args, view_kwargs):
...
with log(request, view_func.__name__, info) as id:
request['TOKEN_ID'] = logger.get().get_id() #here is an error
response = view_func(request, *view_args, **view_kwargs)
</code></pre>
<p>So, it looks like I'm doing something wrong. Is there a way for middleware.process_view and a context processor to communicate? Or should I use another way to pass data from middleware into the HTML?</p>
| 0 | 2016-08-24T11:50:52Z | 39,122,722 | <p>That error is raised when you try and use dictionary item assignment:</p>
<pre><code>request['my_key'] = 'my_value'
</code></pre>
<p>But the request is not a dictionary, it is an object. As with all objects - like the Django models which you must be familiar with - you need to set attributes, not items.</p>
<pre><code>request.my_attribute = 'my_value'
</code></pre>
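<p>With the attribute set in the middleware, a context processor can then expose it to every template. A minimal sketch, where the name <code>token_id</code> and the module path are illustrative, not from the question:</p>

```python
# myapp/context_processors.py
# (register "myapp.context_processors.token" in the template settings)

def token(request):
    # getattr with a default, in case the middleware did not run
    return {"TOKEN_ID": getattr(request, "token_id", None)}
```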
<p>(Next time, please show the code you used and the full traceback you got.)</p>
| 1 | 2016-08-24T11:59:11Z | [
"python",
"django"
] |
Django-taggit tags with whitespace in name? | 39,122,552 | <p>How can I use tags with a whitespace character in their name with django-taggit? For example, "Some simple tag"? Because if I copy-paste some phrase into the tags field in my admin panel, on the page with this tag I get something like this:</p>
<pre><code>Reverse for 'posts_by_tag' with arguments '()' and keyword arguments '{u'tag':
u'\u0411\u0430\u043d\u043a \u0422\u0430\u0432\u0440\u0438\u043a\u0430'}'
not found. 1 pattern(s) tried: ['blog-list/posts/(?P<tag>\\w+)$']
</code></pre>
<p>, but if I try to add a tag with whitespace, it just gets cut at the whitespace character and starts a new tag. How can I fix it?</p>
| 0 | 2016-08-24T11:51:21Z | 39,128,388 | <p>Multi-word tags in Wagtail need to be quoted, e.g. "my tag". </p>
<p>There are two pull requests open that address this topic:</p>
<ul>
<li><a href="https://github.com/torchbox/wagtail/issues/1874" rel="nofollow">https://github.com/torchbox/wagtail/issues/1874</a> to improve documentation and explain users that at the moment multi-word tags need to be quoted</li>
<li><a href="https://github.com/torchbox/wagtail/pull/2207" rel="nofollow">https://github.com/torchbox/wagtail/pull/2207</a> to allow multi-word tags written with spaces</li>
</ul>
| 1 | 2016-08-24T16:18:06Z | [
"python",
"django",
"django-taggit",
"wagtail"
] |
Setting axis values in numpy/matplotlib.plot | 39,122,554 | <p>I am in the process of learning numpy. I wish to plot a graph of Planck's law for different temperatures and so have two <code>np.array</code>s, <code>T</code> and <code>l</code> for temperature and wavelength respectively. </p>
<pre><code>import scipy.constants as sc
import numpy as np
import matplotlib.pyplot as plt
lhinm = 10000 # Highest wavelength in nm
T = np.linspace(200, 1000, 10) # Temperature in K
l = np.linspace(0, lhinm*1E-9, 101) # Wavelength in m
labels = np.linspace(0, lhinm, 6) # Axis labels giving l in nm
B = (2*sc.h*sc.c**2/l[:, np.newaxis]**5)/(np.exp((sc.h*sc.c)/(T*l[:, np.newaxis]*sc.Boltzmann))-1)
for ticks in [True, False]:
plt.plot(B)
plt.xlabel("Wavelength (nm)")
if ticks:
plt.xticks(l, labels)
plt.title("With xticks()")
plt.savefig("withticks.png")
else:
plt.title("Without xticks()")
plt.savefig("withoutticks.png")
plt.show()
</code></pre>
<p>I would like to label the x-axis with the wavelength in nm. If I don't call <code>plt.xticks()</code>, the labels on the x-axis appear to be the index into the array B (which holds the calculated values).</p>
<p><a href="http://i.stack.imgur.com/flQxI.png" rel="nofollow"><img src="http://i.stack.imgur.com/flQxI.png" alt="Without ticks"></a></p>
<p>I've seen answer <a href="http://stackoverflow.com/questions/7559242/matplotlib-strings-as-labels-on-x-axis">7559542</a>, but when I call <code>plt.xticks()</code> all the values are scrunched up on the left of the axis, rather than being evenly spread along it.</p>
<p><a href="http://i.stack.imgur.com/Vmo4X.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vmo4X.png" alt="With ticks"></a></p>
<p>So what's the best way to define my own set of values (in this case a subset of the values in <code>l</code>) and place them on the axis?</p>
| 2 | 2016-08-24T11:51:23Z | 39,122,887 | <p>The problem is that you're not giving your wavelength values to <code>plt.plot()</code>, so Matplotlib puts the index into the array on the horizontal axis as a default. Quick solution:</p>
<pre><code>plt.plot(l, B)
</code></pre>
<p>Without explicitly setting tick labels, that gives you this:</p>
<p><a href="http://i.stack.imgur.com/cswsb.png" rel="nofollow"><img src="http://i.stack.imgur.com/cswsb.png" alt="plot with proper values on x axis"></a></p>
<p>Of course, the values on the horizontal axis in this plot are actually in meters, not nanometers (despite the labeling), because the values you passed as the first argument to <code>plot()</code> (namely the array <code>l</code>) are in meters. That's where <code>xticks()</code> comes in. The two-argument version <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xticks" rel="nofollow"><code>xticks(locations, labels)</code></a> places the labels at the corresponding locations on the x axis. For example, <code>xticks([1], 'one')</code> would put a label "one" at the location x=1, if that location is in the plot.</p>
<p>However, it doesn't change the range displayed on the axis. In your original example, your call to <code>xticks()</code> placed a bunch of labels at coordinates like 10<sup>-9</sup>, but it didn't change the axis range, which was still 0 to 100. No wonder all the labels were squished over to the left.</p>
<p>What you need to do is call <code>xticks()</code> with the points at which you want to place the labels, and the desired text of the labels. The way you were doing it, <code>xticks(l, labels)</code>, would work except that <code>l</code> has length 101 and <code>labels</code> only has length 6, so it only uses the first 6 elements of <code>l</code>. To fix that, you can do something like</p>
<pre><code>plt.xticks(labels * 1e-9, labels)
</code></pre>
<p>where the multiplication by <code>1e-9</code> converts from nanometers (what you want displayed) to meters (which are the coordinates Matplotlib actually uses in the plot).</p>
<p><a href="http://i.stack.imgur.com/mu2zt.png" rel="nofollow"><img src="http://i.stack.imgur.com/mu2zt.png" alt="fixed plot with proper labels"></a></p>
| 3 | 2016-08-24T12:07:07Z | [
"python",
"numpy",
"matplotlib"
] |
Setting axis values in numpy/matplotlib.plot | 39,122,554 | <p>I am in the process of learning numpy. I wish to plot a graph of Planck's law for different temperatures and so have two <code>np.array</code>s, <code>T</code> and <code>l</code> for temperature and wavelength respectively. </p>
<pre><code>import scipy.constants as sc
import numpy as np
import matplotlib.pyplot as plt
lhinm = 10000 # Highest wavelength in nm
T = np.linspace(200, 1000, 10) # Temperature in K
l = np.linspace(0, lhinm*1E-9, 101) # Wavelength in m
labels = np.linspace(0, lhinm, 6) # Axis labels giving l in nm
B = (2*sc.h*sc.c**2/l[:, np.newaxis]**5)/(np.exp((sc.h*sc.c)/(T*l[:, np.newaxis]*sc.Boltzmann))-1)
for ticks in [True, False]:
plt.plot(B)
plt.xlabel("Wavelength (nm)")
if ticks:
plt.xticks(l, labels)
plt.title("With xticks()")
plt.savefig("withticks.png")
else:
plt.title("Without xticks()")
plt.savefig("withoutticks.png")
plt.show()
</code></pre>
<p>I would like to label the x-axis with the wavelength in nm. If I don't call <code>plt.xticks()</code>, the labels on the x-axis appear to be the index into the array B (which holds the calculated values).</p>
<p><a href="http://i.stack.imgur.com/flQxI.png" rel="nofollow"><img src="http://i.stack.imgur.com/flQxI.png" alt="Without ticks"></a></p>
<p>I've seen answer <a href="http://stackoverflow.com/questions/7559242/matplotlib-strings-as-labels-on-x-axis">7559542</a>, but when I call <code>plt.xticks()</code> all the values are scrunched up on the left of the axis, rather than being evenly spread along it.</p>
<p><a href="http://i.stack.imgur.com/Vmo4X.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vmo4X.png" alt="With ticks"></a></p>
<p>So what's the best way to define my own set of values (in this case a subset of the values in <code>l</code>) and place them on the axis?</p>
| 2 | 2016-08-24T11:51:23Z | 39,122,897 | <p>You need to pass same-size lists to <code>xticks</code>. Try setting the axis values separately from the plotted values, as below.</p>
<pre><code>import scipy.constants as sc
import numpy as np
import matplotlib.pyplot as plt
lhinm = 10000 # Highest wavelength in nm
T = np.linspace(200, 1000, 10) # Temperature in K
l = np.linspace(0, lhinm*1E-9, 101) # Wavelength in m
ll = np.linspace(0, lhinm*1E-9, 6) # Axis values
labels = np.linspace(0, lhinm, 6) # Axis labels giving l in nm
B = (2*sc.h*sc.c**2/l[:, np.newaxis]**5)/(np.exp((sc.h*sc.c)/(T*l[:, np.newaxis]*sc.Boltzmann))-1)
for ticks in [True, False]:
plt.plot(B)
plt.xlabel("Wavelength (nm)")
if ticks:
plt.xticks(ll, labels)
plt.title("With xticks()")
plt.savefig("withticks.png")
else:
plt.title("Without xticks()")
plt.savefig("withoutticks.png")
plt.show()
</code></pre>
| 0 | 2016-08-24T12:07:48Z | [
"python",
"numpy",
"matplotlib"
] |
Setting axis values in numpy/matplotlib.plot | 39,122,554 | <p>I am in the process of learning numpy. I wish to plot a graph of Planck's law for different temperatures and so have two <code>np.array</code>s, <code>T</code> and <code>l</code> for temperature and wavelength respectively. </p>
<pre><code>import scipy.constants as sc
import numpy as np
import matplotlib.pyplot as plt
lhinm = 10000 # Highest wavelength in nm
T = np.linspace(200, 1000, 10) # Temperature in K
l = np.linspace(0, lhinm*1E-9, 101) # Wavelength in m
labels = np.linspace(0, lhinm, 6) # Axis labels giving l in nm
B = (2*sc.h*sc.c**2/l[:, np.newaxis]**5)/(np.exp((sc.h*sc.c)/(T*l[:, np.newaxis]*sc.Boltzmann))-1)
for ticks in [True, False]:
plt.plot(B)
plt.xlabel("Wavelength (nm)")
if ticks:
plt.xticks(l, labels)
plt.title("With xticks()")
plt.savefig("withticks.png")
else:
plt.title("Without xticks()")
plt.savefig("withoutticks.png")
plt.show()
</code></pre>
<p>I would like to label the x-axis with the wavelength in nm. If I don't call <code>plt.xticks()</code>, the labels on the x-axis appear to be the index into the array B (which holds the calculated values).</p>
<p><a href="http://i.stack.imgur.com/flQxI.png" rel="nofollow"><img src="http://i.stack.imgur.com/flQxI.png" alt="Without ticks"></a></p>
<p>I've seen answer <a href="http://stackoverflow.com/questions/7559242/matplotlib-strings-as-labels-on-x-axis">7559542</a>, but when I call <code>plt.xticks()</code> all the values are scrunched up on the left of the axis, rather than being evenly spread along it.</p>
<p><a href="http://i.stack.imgur.com/Vmo4X.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vmo4X.png" alt="With ticks"></a></p>
<p>So what's the best way to define my own set of values (in this case a subset of the values in <code>l</code>) and place them on the axis?</p>
| 2 | 2016-08-24T11:51:23Z | 39,122,917 | <p>You can supply the <code>x</code> values to <code>plt.plot</code>, and let matplotlib take care of setting the tick labels.</p>
<p>In your case, you could plot <code>plt.plot(l, B)</code>, but then you still have the ticks in m, not nm.</p>
<p>You could therefore convert your <code>l</code> array to nm before plotting (or during plotting). Here's a working example:</p>
<pre><code>import scipy.constants as sc
import numpy as np
import matplotlib.pyplot as plt
lhinm = 10000 # Highest wavelength in nm
T = np.linspace(200, 1000, 10) # Temperature in K
l = np.linspace(0, lhinm*1E-9, 101) # Wavelength in m
l_nm = l*1e9 # Wavelength in nm
labels = np.linspace(0, lhinm, 6) # Axis labels giving l in nm
B = (2*sc.h*sc.c**2/l[:, np.newaxis]**5)/(np.exp((sc.h*sc.c)/(T*l[:, np.newaxis]*sc.Boltzmann))-1)
plt.plot(l_nm, B)
# Alternativly:
# plt.plot(l*1e9, B)
plt.xlabel("Wavelength (nm)")
plt.title("With xticks()")
plt.savefig("withticks.png")
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/gl2GW.png" rel="nofollow"><img src="http://i.stack.imgur.com/gl2GW.png" alt="enter image description here"></a></p>
| 3 | 2016-08-24T12:09:04Z | [
"python",
"numpy",
"matplotlib"
] |
Python client for free SOAP service using socket giving error? | 39,122,569 | <p>This is the code that I'm trying to use:</p>
<pre><code>import os
import socket
import httplib
packet='''
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"xmlns:xsd="http://www.w3.org/2001/XMLSchema"xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
<soap12:Body>
<GetCitiesByCountry xmlns="http://www.webserviceX.NET">
<CountryName>India</CountryName>
</GetCitiesByCountry>
</soap12:Body>
</soap12:Envelope>'''
lent=len(packet)
msg="""
POST "www.webservicex.net/globalweather.asmx" HTTP/1.1
Host: www.webservicex.net
Content-Type: application/soap+xml; charset=utf-8
Content-Length: {length}
SOAPAction:"http://www.webserviceX.NET/GetCitiesByCountry"
Connection: Keep-Alive
{xml}""".format(length=lent,xml=packet)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect( ("www.webservicex.net",80) )
if bytes != str: # Testing if is python2 or python3
msg = bytes(msg, 'utf-8')
client.send(msg)
client.settimeout(10)
print(client.type)
res=client.recv(4096)
print(res)
#res = res.replace("<","&lt;") -- only for stack overflow post
#res = res.replace(">","&gt;") -- only for stack overflow post
</code></pre>
<p>The output is:</p>
<pre><code>HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Wed, 24 Aug 2016 11:40:02 GMT
Connection: close
Content-Length: 324
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">;
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid URL</h2>
<hr><p>HTTP Error 400. The request URL is invalid.</p>
</BODY></HTML>
</code></pre>
<p>Any ideas?</p>
| 0 | 2016-08-24T11:52:05Z | 39,122,884 | <p>You ought to use <a href="https://fedorahosted.org/suds/wiki/Documentation" rel="nofollow">suds</a> to consume a SOAP web service.</p>
<p><strong>EDIT: Example</strong></p>
<pre><code>import suds
import suds.transport
from suds.transport.http import HttpAuthenticated
class MyException(Exception):
pass
service_url = "http://my/service/url"
</code></pre>
<p>GET the available services:</p>
<pre><code>try:
transport = HttpAuthenticated(username='elmer', password='fudd')
wsdl = suds.client.Client(service_url, faults=False, transport=transport)
except IOError as exc:
    fmt = "Cannot initialize my service: {reason}"
raise MyException(fmt.format(reason=exc))
except suds.transport.TransportError as exc:
fmt = "HTTP error -- Is it a bad URL: {service_url}? {reason}"
    raise MyException(fmt.format(service_url=service_url, reason=exc))
</code></pre>
<p>Run a given service (here, its name is RunScript):</p>
<pre><code># add required options in headers
http_headers = {'sessionID': 1}
wsdl.set_options(soapheaders=http_headers)
params = {"scriptLanguage": "javascript", "scriptFile": "...",
"scriptArgs": [{"name": "...", "value": "..."}]}
try:
exit_code, response = wsdl.service.RunScript([params])
# ...
except suds.MethodNotFound as reason:
raise MyException("No method found: {reason}".format(reason=reason))
except suds.WebFault as reason:
raise MyException("Error running the script: {reason}".format(reason=reason))
except Exception as reason:
err_msg = "{0}".format(reason)
if err_msg == "timed out":
raise MyException("Timeout: {reason}".format(reason=reason))
raise
</code></pre>
<p>Of course, the error handling is not required here, but I give it as an example. The last clause, matching "timed out", is a kind of hack I used to detect a timeout in my application.</p>
| 0 | 2016-08-24T12:07:05Z | [
"python",
"web-services",
"sockets",
"soap",
"python-sockets"
] |
Python: reducing cyclomatic complexity | 39,122,645 | <p>I need help in reducing the cyclomatic complexity of the following code: </p>
<pre><code>def avg_title_vec(record, lookup):
avg_vec = []
word_vectors = []
for tag in record['all_titles']:
titles = clean_token(tag).split()
for word in titles:
if word in lookup.value:
word_vectors.append(lookup.value[word])
if len(word_vectors):
avg_vec = [
float(val) for val in numpy.mean(
numpy.array(word_vectors),
axis=0)]
output = (record['id'],
','.join([str(a) for a in avg_vec]))
return output
</code></pre>
<p>Example input: </p>
<pre><code>record ={'all_titles': ['hello world', 'hi world', 'bye world']}
lookup.value = {'hello': [0.1, 0.2], 'world': [0.2, 0.3], 'bye': [0.9, -0.1]}
def clean_token(input_string):
return input_string.replace("-", " ").replace("/", " ").replace(
":", " ").replace(",", " ").replace(";", " ").replace(
".", " ").replace("(", " ").replace(")", " ").lower()
</code></pre>
<p>So for all the words that are present in lookup.value, I am taking the average of their vector forms.</p>
| 0 | 2016-08-24T11:55:12Z | 39,126,639 | <p>It probably doesn't count as a correct answer really, as in the end cyclomatic complexity isn't reduced.</p>
<p>This variant is a little bit shorter, but I can't see any way it can be generalized. And it seems that you need those <code>if</code>s you have.</p>
<pre><code>def avg_title_vec(record, lookup):
word_vectors = [lookup.value[word] for tag in record['all_titles']
for word in clean_token(tag).split() if word in lookup.value]
if not word_vectors:
return (record['id'], None)
avg_vec = [float(val) for val in numpy.mean(
numpy.array(word_vectors),
axis=0)]
output = (record['id'],
','.join([str(a) for a in avg_vec]))
return output
</code></pre>
<p>Your CC is 6, which is already good, according to <a href="https://blog.codecentric.de/en/2011/10/why-good-metrics-values-do-not-equal-good-quality/" rel="nofollow">this</a>. You can reduce CC of your function by using helper functions, like</p>
<pre><code>import re

def get_tags(record):
return [tag for tag in record['all_titles']]
def sanitize_and_split_tags(tags):
return [word for tag in tags for word in
re.sub(r'[\-/:,;\.()]', ' ', tag).lower().split()]
def get_vectors_words(words):
return [lookup.value[word] for word in words if word in lookup.value]
</code></pre>
<p>This will drop the average CC per function, but the overall CC will stay the same or increase. I don't see how you can get rid of those <code>if</code>s checking whether a word is in <code>lookup.value</code> or whether we have any vectors to work with.</p>
| 0 | 2016-08-24T14:50:38Z | [
"python",
"code-review",
"cyclomatic-complexity"
] |
Django 1.8.7 get_readonly_fields seems to have a bug | 39,122,829 | <p>I am trying something with readonly fields in Django 1.8.7. Let's say I have some code like the following:</p>
<pre><code>class MyAdmin(admin.ModelAdmin):
readonly_fields = ('a', 'b')
def get_readonly_fields(self, request, obj=None):
if not request.user.is_superuser:
self.readonly_fields += ('c')
return super(MyAdmin, self).get_readonly_fields(request, obj)
</code></pre>
<p>First I log in as a superuser and access that admin change_form page,</p>
<p>and the code works well. Then I log in as a staff user, and it still works well. But when I log in as a superuser again, the readonly fields rendered are the ones for the non-superuser.</p>
<p>I cleared the browser cache and tried again as a superuser, but it still does not work correctly. After restarting the server it works normally, until I repeat the same steps above and this weird behavior comes back.</p>
<p>Does anyone know why this happens? It looks like a bug to me, but I am not sure.</p>
<p>Thanks in Advance.</p>
| 0 | 2016-08-24T12:04:18Z | 39,123,123 | <p>The bug is not in Django, but in your code. In your <code>get_readonly_fields</code> method you <em>modify</em> the <code>readonly_fields</code> attribute; those modifications persist, since the admin object lives for the lifetime of the process.</p>
<p>Don't do that. <code>get_readonly_fields</code> is supposed to return a value, not modify the attribute. Just do:</p>
<pre><code>def get_readonly_fields(self, request, obj=None):
rfo = super(MyAdmin, self).get_readonly_fields(request, obj)
if not request.user.is_superuser:
        rfo += ('c',)  # note the trailing comma: ('c') is just the string 'c'
return rfo
</code></pre>
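<p>The accumulation is easy to reproduce outside Django; rebinding an attribute on a long-lived instance keeps growing it on every call:</p>

```python
class FakeAdmin:
    readonly_fields = ('a', 'b')

    def get_readonly_fields(self):
        # rebinding the attribute mutates the instance for all later calls
        self.readonly_fields += ('c',)
        return self.readonly_fields

admin = FakeAdmin()  # like Django, we keep one instance for the whole process
print(admin.get_readonly_fields())  # ('a', 'b', 'c')
print(admin.get_readonly_fields())  # ('a', 'b', 'c', 'c') -- grows per call
```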
| 1 | 2016-08-24T12:17:54Z | [
"python",
"django",
"get",
"field",
"readonly"
] |
How do I remove blank elements from an AstroPy table? | 39,122,843 | <p>I'm trying to remove any elements from an astropy table that have blank fields. However, all the help I have found so far just tells me how to replace them. I have tried replacing these blank fields (denoted by '--') with zeroes. However, whenever I try to filter them out with traditional Python loops, they stay put.
Is there no way I can remove rows that have blank fields in them?</p>
<pre><code>from astropy.io import ascii
dat = ascii.read('exorgdatabase.txt')
dat['MSINI'].fill_value = 0
print(dat['MSINI'])
dat['PER'].fill_value = 0
print(dat['PER'])
newdat = dat.filled()
print(newdat)
while '0' in newdat['MSINI']:
newdat.remove('0')
print(newdat['MSINI'])
</code></pre>
<p>Here's the output I get:</p>
<pre><code> MSINI
--------
mjupiter
--
--
--
0.310432
--
--
--
7.65457
--
...
--
--
--
--
--
--
--
--
--
--
--
Length = 5455 rows
PER
-----------
day
7.958203
3.27346074
19.12947337
10.290994
27.495606
9.478522
5.03728015
2.243752
7.8125124
...
7.227407
91.069934
366.084069
414.45008
5.43099314
328.32211
381.97977
67.412998
2.08802799
359.8249913
293.70696
Length = 5455 rows
NAME MSINI ... FIRSTURL
------------- -------- ... -------------------------------------------------
N/A mjupiter ... N/A
Kepler-107 d 0 ... http://adsabs.harvard.edu/abs/2014arXiv1402.6534R
Kepler-1049 b 0 ... http://adsabs.harvard.edu/abs/2016ApJ...822...86M
Kepler-813 b 0 ... http://adsabs.harvard.edu/abs/2016ApJ...822...86M
Kepler-427 b 0.310432 ... http://adsabs.harvard.edu/abs/2010Sci...327..977B
Kepler-1056 b 0 ... http://adsabs.harvard.edu/abs/2016ApJ...822...86M
Kepler-1165 b 0 ... http://adsabs.harvard.edu/abs/2016ApJ...822...86M
Kepler-1104 b 0 ... http://adsabs.harvard.edu/abs/2016ApJ...822...86M
WASP-14 b 7.65457 ... http://adsabs.harvard.edu/abs/2009MNRAS.392.1532J
Kepler-50 b 0 ... http://adsabs.harvard.edu/abs/2011ApJ...736...19B
... ... ... ...
KOI 2369.03 0 ... N/A
KOI 7572.01 0 ... N/A
KOI 7587.01 0 ... N/A
KOI 7589.01 0 ... N/A
KOI 2859.05 0 ... N/A
KOI 7591.01 0 ... N/A
KOI 7592.01 0 ... N/A
KOI 7593.01 0 ... N/A
KOI 7594.01 0 ... N/A
KOI 7596.01 0 ... N/A
KOI 7599.01 0 ... e
Length = 5455 rows
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-118-c200fae23235> in <module>()
23
24 while '0' in newdat['MSINI']:
---> 25 newdat.remove('0')
26
27 print(newdat['MSINI'])
AttributeError: 'Table' object has no attribute 'remove'
</code></pre>
| -1 | 2016-08-24T12:05:07Z | 39,125,554 | <p>You cannot!</p>
<p>Removing single fields from an <code>astropy.table.Table</code> is impossible because it is based on <code>numpy</code>. Therefore you can only remove whole columns or rows, but not single elements.</p>
<p>For example, say you have such a Table:</p>
<pre><code>>>> from astropy.table import Table, MaskedColumn, Column
>>> a = MaskedColumn([1, 2], name='a', mask=[False, True], dtype='i4')
>>> b = Column([3, 4], name='b', dtype='i8')
>>> tbl = Table([a, b])
>>> tbl
a b
--- ---
1 3
-- 4
</code></pre>
<p>you can replace masked values with</p>
<pre><code>>>> tbl = tbl.filled(0)
>>> tbl
a b
--- ---
1 3
0 4
</code></pre>
<p>and for example to remove all rows where <code>a</code> is 0:</p>
<pre><code>>>> tbl[tbl['a'] != 0]
a b
--- ---
1 3
</code></pre>
<p>or without filling it by accessing the <code>mask</code>:</p>
<pre><code>tbl[~tbl['a'].mask]
a b
--- ---
1 3
</code></pre>
| 1 | 2016-08-24T14:04:44Z | [
"python",
"pandas",
"python-3.5",
"astropy"
] |
Get rid of ['\n'] in python3 with paramiko | 39,122,910 | <p>I've just started building a monitoring tool in Python 3, and I wondered if I can get a 'clear' number output through SSH. I've made this script:</p>
<pre><code>import os
import paramiko
command = 'w|grep \"load average\"|grep -v grep|awk {\'print ($10+$11+$12)/3*100\'};'
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy())
ssh.connect('10.123.222.233', username='xxx', password='xxx')
stdin, stdout, stderr = ssh.exec_command(command)
print (stdout.readlines())
ssh.close()
</code></pre>
<p>It works fine, except the output is:</p>
<pre><code>['22.3333\n']
</code></pre>
<p>How can I get rid of " [' " and " \n'] " and just get the clear number value?</p>
<p>How can I get the result as I see it in PuTTY?</p>
| 0 | 2016-08-24T12:08:29Z | 39,122,949 | <p>Instead of <code>print(stdout.readlines())</code> you should iterate over each element in the list that is returned by <code>stdout.readlines()</code> and <code>strip</code> it, possibly converting it to <code>float</code> as well (depending on what you are planning to do with this data later).</p>
<p>You can use list comprehension for this:</p>
<pre><code>print([float(line.strip()) for line in stdout.readlines()])
</code></pre>
<p>Note that <code>strip</code> will remove whitespaces and new-line chars from both the start and end of the string. If you only want to remove the trailing whitespace/new-line char then you can use <code>rstrip</code>, but note that the conversion to <code>float</code> may then fail. </p>
| 1 | 2016-08-24T12:10:19Z | [
"python",
"python-3.x",
"output",
"special-characters",
"paramiko"
] |
Get rid of ['\n'] in python3 with paramiko | 39,122,910 | <p>I've just started building a monitoring tool in Python 3, and I wondered if I can get a 'clear' number output through SSH. I've made this script:</p>
<pre><code>import os
import paramiko
command = 'w|grep \"load average\"|grep -v grep|awk {\'print ($10+$11+$12)/3*100\'};'
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy())
ssh.connect('10.123.222.233', username='xxx', password='xxx')
stdin, stdout, stderr = ssh.exec_command(command)
print (stdout.readlines())
ssh.close()
</code></pre>
<p>It works fine, except the output is:</p>
<pre><code>['22.3333\n']
</code></pre>
<p>How can I get rid of " [' " and " \n'] " and just get the clear number value?</p>
<p>How can I get the result as I see it in PuTTY?</p>
| 0 | 2016-08-24T12:08:29Z | 39,122,954 | <p><code>.readlines()</code> returns a <em>list</em> of separate lines. In your case there is just one line, you can just extract it by indexing the list, then strip of the whitespace at the end:</p>
<pre><code>firstline = stdout.readlines()[0].rstrip()
</code></pre>
<p>This is still a string, however. If you expected <em>numbers</em>, you'd have to convert the string to a <code>float()</code>. Since your command line will only ever return <strong>one</strong> line, you may as well just use <code>.read()</code> and convert that straight up (no need to strip, as <code>float()</code> is tolerant of trailing and leading whitespace):</p>
<pre><code>result = float(stdout.read())
</code></pre>
| 3 | 2016-08-24T12:10:29Z | [
"python",
"python-3.x",
"output",
"special-characters",
"paramiko"
] |
Pyspark error for java heap space error | 39,122,955 | <p>I am new to Spark, using <strong>Spark 1.6.1</strong> with <strong>two workers</strong>, each having <strong>1GB of memory</strong> and <strong>5 cores</strong> assigned, running this code on a 33MB file.</p>
<p>This Code is used to Index word in spark.</p>
<pre><code>from textblob import TextBlob as tb
from textblob_aptagger import PerceptronTagger
import numpy as np
import nltk.data
import Constants
from pyspark import SparkContext, SparkConf
import nltk

TOKENIZER = nltk.data.load('tokenizers/punkt/english.pickle')
TAGGER = PerceptronTagger()  # needed by pos_tag() below

def word_tokenize(x):
    return nltk.word_tokenize(x)

def pos_tag(s):
    global TAGGER
    return TAGGER.tag(s)

def wrap_words(pair):
    '''associate each word with its index'''
    index = pair[0]
    result = []
    for word, tag in pair[1]:
        word = word.lower()
        result.append({"index": index, "word": word, "tag": tag})
        index += 1
    return result

if __name__ == '__main__':
    conf = SparkConf().setMaster(Constants.MASTER_URL).setAppName(Constants.APP_NAME)
    sc = SparkContext(conf=conf)
    data = sc.textFile(Constants.FILE_PATH)
    sent = data.flatMap(word_tokenize).map(pos_tag).map(lambda x: x[0]).glom()
    num_partition = sent.getNumPartitions()
    base = list(np.cumsum(np.array(sent.map(len).collect())))
    base.insert(0, 0)
    base.pop()
    RDD = sc.parallelize(base, num_partition)
    tagged_doc = RDD.zip(sent).map(wrap_words).cache()
</code></pre>
<p>For smaller files (under 25 MB) the code works fine, but it fails with a Java heap space error for files larger than 25 MB.<br>
How can I resolve this issue, or is there an alternative approach?</p>
| 0 | 2016-08-24T12:10:32Z | 39,124,549 | <p>That's because of the <code>.collect()</code>. When you transform your RDD into a plain Python variable (or <code>np.array</code>), you lose the benefits of distribution: all of the data is collected in one place, on the driver.</p>
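<p>For reference, the driver-side offset computation from the question can be sketched in isolation with plain NumPy — it turns per-partition lengths into the starting index of each partition (the partition counts below are hypothetical):</p>

```python
import numpy as np

# Hypothetical per-partition token counts
partition_lengths = [4, 7, 3, 6]

# Starting index of each partition: cumulative sum, shifted right by one
base = np.cumsum(partition_lengths).tolist()
base.insert(0, 0)   # the first partition starts at index 0
base.pop()          # drop the grand total; it is nobody's start index
print(base)  # [0, 4, 11, 14]
```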
| 0 | 2016-08-24T13:21:50Z | [
"python",
"numpy",
"optimization",
"pyspark"
] |
How do you pass multiple arguments to a Luigi subtask? | 39,123,021 | <p>I have a Luigi task that <code>requires</code> a subtask. The subtask depends on parameters passed through by the parent task (i.e. the one that is doing the <code>require</code>ing). I know you can specify a parameter that the subtask can use by setting...</p>
<pre><code>def requires(self):
    return subTask(some_parameter)
</code></pre>
<p>...then on the subtask, receiving the parameter by setting...</p>
<pre><code>x = luigi.Parameter()
</code></pre>
<p>That only appears to let you pass through one parameter though. What is the best way to send through an arbitrary number of parameters, of whatever types I want? Really I want something like this:</p>
<pre><code>class parentTask(luigi.Task):
    def requires(self):
        return subTask({'file_postfix': 'foo',
                        'file_content': 'bar'})

    def run(self):
        return

class subTask(luigi.Task):
    params = luigi.DictParameter()

    def output(self):
        return luigi.LocalTarget("file_{}.csv".format(self.params['file_postfix']))

    def run(self):
        with self.output().open('w') as f:
            f.write(self.params['file_content'])
</code></pre>
<p>As you can see I tried using <code>luigi.DictParameter</code> instead of a straight <code>luigi.Parameter</code> but I get <code>TypeError: unhashable type: 'dict'</code> from somewhere deep inside Luigi when I run the above.</p>
<p>Running Python 2.7.11, Luigi 2.1.1</p>
| 0 | 2016-08-24T12:13:41Z | 39,126,080 | <p>Ok, so I found that this works as expected in Python 3.5 (and the problem is still there in 3.4).</p>
<p>I don't have the time to get to the bottom of this today, so no further details.</p>
| 0 | 2016-08-24T14:27:36Z | [
"python",
"luigi"
] |
How do you pass multiple arguments to a Luigi subtask? | 39,123,021 | <p>I have a Luigi task that <code>requires</code> a subtask. The subtask depends on parameters passed through by the parent task (i.e. the one that is doing the <code>require</code>ing). I know you can specify a parameter that the subtask can use by setting...</p>
<pre><code>def requires(self):
    return subTask(some_parameter)
</code></pre>
<p>...then on the subtask, receiving the parameter by setting...</p>
<pre><code>x = luigi.Parameter()
</code></pre>
<p>That only appears to let you pass through one parameter though. What is the best way to send through an arbitrary number of parameters, of whatever types I want? Really I want something like this:</p>
<pre><code>class parentTask(luigi.Task):
    def requires(self):
        return subTask({'file_postfix': 'foo',
                        'file_content': 'bar'})

    def run(self):
        return

class subTask(luigi.Task):
    params = luigi.DictParameter()

    def output(self):
        return luigi.LocalTarget("file_{}.csv".format(self.params['file_postfix']))

    def run(self):
        with self.output().open('w') as f:
            f.write(self.params['file_content'])
</code></pre>
<p>As you can see I tried using <code>luigi.DictParameter</code> instead of a straight <code>luigi.Parameter</code> but I get <code>TypeError: unhashable type: 'dict'</code> from somewhere deep inside Luigi when I run the above.</p>
<p>Running Python 2.7.11, Luigi 2.1.1</p>
| 0 | 2016-08-24T12:13:41Z | 39,512,474 | <blockquote>
<p>What is the best way to send through an arbitrary number of
parameters, of whatever types I want?</p>
</blockquote>
<p>The best way is to use named parameters, e.g.,</p>
<pre><code># in requires()
return MySampleSubTask(x=local_x, y=local_y)

class MySampleSubTask(luigi.Task):
    x = luigi.Parameter()
    y = luigi.Parameter()
</code></pre>
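<p>As for the original <code>TypeError</code>: it comes from Python itself, not Luigi — a plain <code>dict</code> is unhashable, and Luigi needs hashable parameter values internally (e.g. to identify and deduplicate task instances). A quick demonstration of the underlying issue, outside Luigi entirely:</p>

```python
params = {'file_postfix': 'foo', 'file_content': 'bar'}

# A mutable dict cannot be hashed...
try:
    hash(params)
except TypeError as e:
    print(e)  # unhashable type: 'dict'

# ...whereas an immutable snapshot of the same data can be,
# which is the kind of representation a framework has to build internally
snapshot = tuple(sorted(params.items()))
print(hash(snapshot) == hash(snapshot))  # True
```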
| 0 | 2016-09-15T13:34:52Z | [
"python",
"luigi"
] |