title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
passing parameters through url to view django | 39,078,506 | <p>I'm having some difficulty passing values to my view through url.</p>
<p>So far, I've managed to run my view without any issues. </p>
<p>view.py (draws a graph):</p>
<pre><code>def draw(request)
....
....
return HttpResponse (buffer.getvalue(), content_type="Image/png")
</code></pre>
<p>but I need my view to take input from users, so I edited it and added an extra parameter:</p>
<pre><code>def draw(request, loan_amount)
loanAmount = loan_amount
.....
</code></pre>
<p>The user input is passed from a form, to another view:</p>
<pre><code>def search_member(request):
loanAmount = request.GET.get('desired_loan')
return render(request, 'blog/search_member.html', {'loanAmount':loanAmount)
</code></pre>
<p>In my template, I insert the user input in the draw's parameter:</p>
<pre><code><img src="http://test.com/graph/{{ loanAmount }}">
</code></pre>
<p>This is suppose to draw an image base on the user input, instead I get no image at all.</p>
<p>If I remove the parameter, the image works fine. I'm assuming I'm doing something wrong with the setup of the parameter, most likely in the template, or url:</p>
<pre><code>url(r'^graph/(?P<desired_loan>\d+)/$', views.draw, name='draw'),
</code></pre>
<p>I have tested the view, form and everything else, they all work. How can I narrow down this problem to find the solution?</p>
<p>Any direction/help would be appreciated,</p>
<p>thanks,</p>
<pre><code><form method="GET" action="/search_member/" class="navbar-form pull-left">
<input type="number" step = "any" id="searchBox" class="input-medium search-query" name="desired_loan" placeholder="Desired Loan"><br>
<input type="number" step = "any" id="searchBox" class="input-medium search-query" name="repayment_time" placeholder="Payment Time"><br>
<input type="submit" class="btn" value="Draw Graph" >
</form><br>
<img src="http://test.com/graph/{{ loanAmount }}">
</code></pre>
<p>screen shot:</p>
<p><a href="http://i.stack.imgur.com/ow0J6.png" rel="nofollow"><img src="http://i.stack.imgur.com/ow0J6.png" alt="The picture doesn't show"></a></p>
| 1 | 2016-08-22T11:35:36Z | 39,078,852 | <p>Fo url like</p>
<pre><code>url(r'^graph/(?P<desired_loan>\d+)/$', views.draw, name='draw'),
</code></pre>
<p>Change your view to: </p>
<pre><code>def draw(request, *args, **kwargs):
loanAmount = kwargs['desired_loan']
.....
</code></pre>
<p>Then change in template:</p>
<pre><code><img src="{% url 'graph' loanAmount %}">
</code></pre>
| 2 | 2016-08-22T11:52:34Z | [
"python",
"django"
] |
Django-superform does not render | 39,078,772 | <p><strong>My spec:</strong>
<br>Django 1.9<br>
Python 3.5.1<br>
django-superform 0.3.1<br></p>
<p><strong>My goal:</strong><br>
I want <code>bod_quota</code> form nested with <code>CustomerForm</code></p>
<p><strong>My attempt</strong><br></p>
<p>I have follow the docs <a href="https://github.com/gregmuellegger/django-superform" rel="nofollow">https://github.com/gregmuellegger/django-superform</a>
But it does not rendered nested form.
<code>customer/model.py</code></p>
<pre><code>class Customer(models.Model):
customer_code = models.CharField(max_length=10, unique=True)
name_th = models.CharField(max_length=100)
name_en = models.CharField(max_length=100)
</code></pre>
<p><code>customer/forms.py</code></p>
<pre><code>class CustomerForm(SuperModelForm):
manual_quota = InlineFormSetField(parent_model=Customer, model=BodQuota, fields = (
"quota_per_occurrence_type",
"quota_by_month",
"quota_by_year",
"quota_count_method_type",
) )
class Meta:
model = Customer
fields = [
'customer_code',
'name_th',
'name_en',
]
</code></pre>
<p><code>bod_quota/model.py</code></p>
<pre><code>class BodQuota(models.Model):
#
# Change Quota General Types
#
class QuotaPerOccurrenceType(DjangoChoices):
monthly = ChoiceItem('monthly')
yearly = ChoiceItem('yearly')
class QuotaCountMethodType(DjangoChoices):
circuit_based = ChoiceItem('circuit_based')
customer_based = ChoiceItem('customer_based')
customer = models.ForeignKey(Customer, default=None)
quota_per_occurrence_type = models.CharField(
max_length=10,
choices=QuotaPerOccurrenceType.choices,
validators=[QuotaPerOccurrenceType.validator],
default=QuotaPerOccurrenceType.monthly)
quota_by_month = models.PositiveSmallIntegerField(
null=False, blank=False, default=0, help_text=_("monthly quota"))
quota_by_year = models.PositiveSmallIntegerField(
null=False, blank=False, default=0, help_text=_("annually quota"))
quota_count_method_type = models.CharField(
max_length=20,
choices=QuotaCountMethodType.choices,
validators=[QuotaCountMethodType.validator],
default=QuotaCountMethodType.customer_based)
</code></pre>
<p><code>bod_quota/forms.py</code></p>
<pre><code>class BodQuotaForm(ModelForm):
class Meta:
model = BodQuota
fields = (
"quota_per_occurrence_type",
"quota_by_month",
"quota_by_year",
"quota_count_method_type",
)
BodQuotaFormSet = modelformset_factory(BodQuota, form=BodQuotaForm)
</code></pre>
<p><strong>Problem:</strong>
<code>manual_quota</code> does not show in the browser</p>
| 0 | 2016-08-22T11:49:10Z | 39,084,603 | <p>Thereâs two way to show this in the browser.</p>
<pre><code>⢠views methods: I think itâs well explained here
</code></pre>
<p><a href="http://tutorial.djangogirls.org/en/django_forms/" rel="nofollow">http://tutorial.djangogirls.org/en/django_forms/</a></p>
<pre><code>⢠django admin interface
</code></pre>
<p>Personally, Iâm used to start by the django admin interface in a first approach.</p>
<p>Based in you example, I can reproduce the forms below.</p>
<p>In case, itâs what you looking for, please check that:</p>
<p>1.You have this in your setting.py file</p>
<pre><code>INSTALLED_APPS = [
#TESTING DJANGO SUPER_FORM_PCKG
'django_superform',
'customer',
'djchoices',
'bod_quotaâ,
#DEFAULT DJANGO SETTING Â
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
</code></pre>
<p>2.In your file admin.py under customer app folder, add the code below </p>
<pre><code>from django.contrib import admin
from customer.models import *
admin.autodiscover()
class CustomerAdmin(admin.ModelAdmin):
fields = ('customer_code', 'name_th','name_en')
admin.site.register(Customer, CustomerAdmin)
</code></pre>
<h1>In the interface you will have the form below</h1>
<p><a href="http://i.stack.imgur.com/vQJAR.png" rel="nofollow"><img src="http://i.stack.imgur.com/vQJAR.png" alt="enter image description here"></a></p>
<p>3.In your file admin.py under bod_quota app folder, add the code below </p>
<pre><code>from customer.forms import *
admin.autodiscover()
class BodQuotaAdmin(admin.ModelAdmin):
fields = [ 'customer','quota_per_occurrence_type','quota_by_month','quota_by_year','quota_count_method_type']
form = CustomerForm
admin.site.register(BodQuota, BodQuotaAdmin)
</code></pre>
<h1>In the interface you will have the form below</h1>
<p><a href="http://i.stack.imgur.com/pL5rt.png" rel="nofollow"><img src="http://i.stack.imgur.com/pL5rt.png" alt="enter image description here"></a></p>
<p>Hope this helps.</p>
| 0 | 2016-08-22T16:37:08Z | [
"python",
"django"
] |
Traversing level order for the graph in networkx | 39,078,805 | <p>I am trying to convert a <code>DiGraph</code> into n-ary tree and displaying the nodes in level order or BFS. My tree is similar to this, but much larger, for simplicity using this example: </p>
<pre><code>G = networkx.DiGraph()
G.add_edges_from([('n', 'n1'), ('n', 'n2'), ('n', 'n3')])
G.add_edges_from([('n4', 'n41'), ('n1', 'n11'), ('n1', 'n12'), ('n1', 'n13')])
G.add_edges_from([('n2', 'n21'), ('n2', 'n22')])
G.add_edges_from([('n13', 'n131'), ('n22', 'n221')])
</code></pre>
<p>Tree: borrowed the data from this <a href="http://stackoverflow.com/questions/21866902/networkx-graph-searches-dfs-successors-vs-dfs-predecessors">question</a>:</p>
<pre><code>n---->n1--->n11
| |--->n12
| |--->n13
| |--->n131
|--->n2
| |---->n21
| |---->n22
| |--->n221
|--->n3
</code></pre>
<p>I am using <code>networkx.DiGraph</code> for this purpose and created the graph successfully. Here is my code for creating a DiGraph: </p>
<pre><code>G = nx.DiGraph()
roots = set()
for l in raw.splitlines():
if len(l):
target, prereq = regex1.split(l)
deps = tuple(regex2.split(prereq))
print("Add node:") + target
roots.add(target)
G.add_node(target)
for d in deps:
if d:
G.add_edge(target, d)
</code></pre>
<p>I am reading the all the data from a file with about 200 lines in the following format and trying to get a dependency tree. My graph is around 100 nodes with 600 edges.</p>
<pre><code>AAA: BBB,CCC,DDD,
BBB:
DDD: EEE,FFF,GGG,KKK
GGG: AAA,BBB,III,LLL
....
...
..
.
</code></pre>
<p>After looking into the networkx docs online, now I can achieve the the level order output doing a topological sort on the dependency tree, with the below code.</p>
<pre><code>order = nx.topological_sort(G)
print "topological sort"
print order
</code></pre>
<p>output: </p>
<pre><code>['n2', 'n3', 'n1', 'n21', 'n22', 'n11', 'n13', 'n12', 'n221', 'n131']
</code></pre>
<p>The order seems correct, but since I need to process the jobs in a batch (which saves time) and not sequentially, I want the output in level ordered output batches or using BFS. What is the best way to achieve this ?<br>
ex: level[0:n], ex: </p>
<pre><code>0. ['n']
1. ['n2', 'n3', 'n1',]
2. ['n21', 'n22', 'n11',]
3. ['n13', 'n12', 'n221', 'n131']
</code></pre>
| 0 | 2016-08-22T11:51:02Z | 39,080,114 | <p>You could use the bfs_edges() function to get a list of nodes in a breadth-first-search order.</p>
<pre><code>In [1]: import networkx
In [2]: G = networkx.DiGraph()
In [3]: G.add_edges_from([('n', 'n1'), ('n', 'n2'), ('n', 'n3')])
In [4]: G.add_edges_from([('n4', 'n41'), ('n1', 'n11'), ('n1', 'n12'), ('n1', 'n13')])
In [5]: G.add_edges_from([('n2', 'n21'), ('n2', 'n22')])
In [6]: G.add_edges_from([('n13', 'n131'), ('n22', 'n221')])
In [7]: list(networkx.bfs_edges(G,'n'))
Out[7]:
[('n', 'n2'),
('n', 'n3'),
('n', 'n1'),
('n2', 'n21'),
('n2', 'n22'),
('n1', 'n11'),
('n1', 'n13'),
('n1', 'n12'),
('n22', 'n221'),
('n13', 'n131')]
In [8]: [t for (s,t) in networkx.bfs_edges(G,'n')]
Out[8]: ['n2', 'n3', 'n1', 'n21', 'n22', 'n11', 'n13', 'n12', 'n221', 'n131']
In [9]: networkx.single_source_shortest_path_length(G,'n')
Out[9]:
{'n': 0,
'n1': 1,
'n11': 2,
'n12': 2,
'n13': 2,
'n131': 3,
'n2': 1,
'n21': 2,
'n22': 2,
'n221': 3,
'n3': 1}
</code></pre>
| 1 | 2016-08-22T12:53:29Z | [
"python",
"networkx"
] |
Resampling and Normalizing Irregular Time Series Data in Pandas | 39,078,835 | <p>I have irregularly spaced time-series data. I have total energy usage and the duration over which the energy was used.</p>
<pre><code>Start Date Start Time Duration (Hours) Usage(kWh)
1/3/2016 12:28:00 PM 2.233333333 6.23
1/3/2016 4:55:00 PM 1.9 11.45
1/4/2016 6:47:00 PM 7.216666667 11.93
1/4/2016 7:00:00 AM 3.45 9.45
1/4/2016 7:26:00 AM 1.6 7.33
1/4/2016 7:32:00 AM 1.6 4.54
</code></pre>
<p>I want to calculate the sum of all the load curves over a 15 minute window. I can round when necessary (e.g., closest 1 minute). I can't use resample immediately because it would average the usage into the next time stamp, which n the case of the first entry 1/3 12:28 PM, would take 6.23 kWH and spread it evenly until 4:55 PM, which is inaccurate. 6.23 kWh should be spread until 12:28 PM + 2.23 hrs ~= 2:42 PM.</p>
| 1 | 2016-08-22T11:52:13Z | 39,085,505 | <p>Here is a straight-forward implementation which simply sets up a Series,
<code>result</code>, whose index has minute-frequency, and then loops through the rows of
<code>df</code> (using <code>df.itertuples</code>) and adds the appropriate amount of power to each
row in the associated interval:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'Duration (Hours)': [2.233333333, 1.8999999999999999, 7.2166666670000001, 3.4500000000000002, 1.6000000000000001, 1.6000000000000001], 'Start Date': ['1/3/2016', '1/3/2016', '1/4/2016', '1/4/2016', '1/4/2016', '1/4/2016'], 'Start Time': ['12:28:00 PM', '4:55:00 PM', '6:47:00 PM', '7:00:00 AM', '7:26:00 AM', '7:32:00 AM'], 'Usage(kWh)': [6.2300000000000004, 11.449999999999999, 11.93, 9.4499999999999993, 7.3300000000000001, 4.54]} )
df['duration'] = pd.to_timedelta(df['Duration (Hours)'], unit='H')
df['start_date'] = pd.to_datetime(df['Start Date'] + ' ' + df['Start Time'])
df['end_date'] = df['start_date'] + df['duration']
df['power (kW/min)'] = df['Usage(kWh)']/(df['Duration (Hours)']*60)
df = df.drop(['Start Date', 'Start Time', 'Duration (Hours)'], axis=1)
result = pd.Series(0,
index=pd.date_range(df['start_date'].min(), df['end_date'].max(), freq='T'))
power_idx = df.columns.get_loc('power (kW/min)')+1
for row in df.itertuples():
result.loc[row.start_date:row.end_date] += row[power_idx]
# The sum of the usage over 15 minute windows is computed using the `resample/sum` method:
usage = result.resample('15T').sum()
usage.plot(kind='line', label='usage')
plt.legend(loc='best')
plt.show()
</code></pre>
<h2><a href="http://i.stack.imgur.com/2lvUa.png" rel="nofollow"><img src="http://i.stack.imgur.com/2lvUa.png" alt="enter image description here"></a></h2>
<p><strong>A note regarding performance</strong>: Looping through the rows of <code>df</code> is not very
fast especially if <code>len(df)</code> is big. For better performance, you may need a
<a href="http://stackoverflow.com/a/31773404/190597">more clever method</a>, which handles
all the rows "at once" in a vectorized manner:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Here is an example using a larger DataFrame
N = 10**3
dates = pd.date_range('2016-1-1', periods=N*10, freq='H')
df = pd.DataFrame({'Duration (Hours)': np.random.uniform(1, 10, size=N),
'start_date': np.random.choice(dates, replace=False, size=N),
'Usage(kWh)': np.random.uniform(1,20, size=N)})
df['duration'] = pd.to_timedelta(df['Duration (Hours)'], unit='H')
df['end_date'] = df['start_date'] + df['duration']
df['power (kW/min)'] = df['Usage(kWh)']/(df['Duration (Hours)']*60)
def using_loop(df):
result = pd.Series(0,
index=pd.date_range(df['start_date'].min(), df['end_date'].max(), freq='T'))
power_idx = df.columns.get_loc('power (kW/min)')+1
for row in df.itertuples():
result.loc[row.start_date:row.end_date] += row[power_idx]
usage = result.resample('15T').sum()
return usage
def using_cumsum(df):
result = pd.melt(df[['power (kW/min)','start_date','end_date']],
id_vars=['power (kW/min)'], var_name='usage', value_name='date')
result['usage'] = result['usage'].map({'start_date':1, 'end_date':-1})
result['usage'] *= result['power (kW/min)']
result = result.set_index('date')
result = result[['usage']].resample('T').sum().fillna(0).cumsum()
usage = result.resample('15T').sum()
return usage
usage = using_cumsum(df)
usage.plot(kind='line', label='usage')
plt.legend(loc='best')
plt.show()
</code></pre>
<hr>
<p>With <code>len(df)</code> equal to 1000, <code>using_cumsum</code> is over 10x faster than <code>using_loop</code>:</p>
<pre><code>In [117]: %timeit using_loop(df)
1 loop, best of 3: 545 ms per loop
In [118]: %timeit using_cumsum(df)
10 loops, best of 3: 52.7 ms per loop
</code></pre>
| 3 | 2016-08-22T17:34:42Z | [
"python",
"pandas",
"time-series",
"aggregate"
] |
Resampling and Normalizing Irregular Time Series Data in Pandas | 39,078,835 | <p>I have irregularly spaced time-series data. I have total energy usage and the duration over which the energy was used.</p>
<pre><code>Start Date Start Time Duration (Hours) Usage(kWh)
1/3/2016 12:28:00 PM 2.233333333 6.23
1/3/2016 4:55:00 PM 1.9 11.45
1/4/2016 6:47:00 PM 7.216666667 11.93
1/4/2016 7:00:00 AM 3.45 9.45
1/4/2016 7:26:00 AM 1.6 7.33
1/4/2016 7:32:00 AM 1.6 4.54
</code></pre>
<p>I want to calculate the sum of all the load curves over a 15 minute window. I can round when necessary (e.g., closest 1 minute). I can't use resample immediately because it would average the usage into the next time stamp, which n the case of the first entry 1/3 12:28 PM, would take 6.23 kWH and spread it evenly until 4:55 PM, which is inaccurate. 6.23 kWh should be spread until 12:28 PM + 2.23 hrs ~= 2:42 PM.</p>
| 1 | 2016-08-22T11:52:13Z | 39,130,274 | <p>The solution I used below is the itertuples method. Please note using numpy's .sum function did not work for me. I instead used the pandas resample keyword, "how" and set it equal to sum.</p>
<p>I also renamed the columns in my files to make the import easier. </p>
<p>I was not time/resource constrained so I went with the itertuples method because it was easy for me to implement.</p>
<p><strong>Itertuples Code</strong></p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#load data
df = pd.read_excel(r'C:\input_file.xlsx', sheetname='sheet1')
#convert columns
df['duration'] = pd.to_timedelta(df['Duration (Hours)'], unit='H')
df['end_date'] = df['start_date'] + df['duration']
df['power (kW/min)'] = df['Usage(kWh)']/(df['Duration (Hours)']*60)
df = df.drop(['Duration (Hours)'], axis=1)
#create result df with timestamps
result = pd.Series(0, index=pd.date_range(df['start_date'].min(), df['end_date'].max(), freq='T'))
#iterate through to calculate total energy at each minute
power_idx = df.columns.get_loc('power (kW/min)')+1
for row in df.itertuples():
result.loc[row.start_date:row.end_date] += row[power_idx]
# The sum of the usage over 15 minute windows is computed using the `resample/sum` method
usage = result.resample('15T', how='sum')
#plot
plt.plot(usage)
plt.show()
#write to file
usage.to_csv(r'C:\output_folder\output_file.csv')
</code></pre>
<p><a href="http://i.stack.imgur.com/PXXaL.png" rel="nofollow"><img src="http://i.stack.imgur.com/PXXaL.png" alt="Solution using itertuples method"></a></p>
| 0 | 2016-08-24T18:10:40Z | [
"python",
"pandas",
"time-series",
"aggregate"
] |
Filter safe does not work on the shy tag issue | 39,078,914 | <p>I have a problem with breaking words in the right place at the Django template. Appears <code>&shy;</code>
I'm trying to filter safe, but it does not work.</p>
<p>Here is my code:</p>
<pre><code> <div class="my_class">
<h3>{{ object.title|safe }}</h3>
</div>
</code></pre>
| 0 | 2016-08-22T11:55:12Z | 39,079,946 | <p>From the doc here:<a href="https://docs.djangoproject.com/en/1.10/ref/templates/builtins/#safe" rel="nofollow">safe</a></p>
<blockquote>
<p>safe</p>
<p>Marks a string as not requiring further HTML escaping prior to output. When autoescaping is off, this filter has no effect.</p>
</blockquote>
<p>And Django's templating engine does escaping automatically, look <a href="https://stackoverflow.com/questions/4056883/when-should-i-use-escape-and-safe-in-djangos-template-system">When should I use escape and safe in Django's template system?</a></p>
| 0 | 2016-08-22T12:45:27Z | [
"python",
"django",
"templates",
"frontend",
"css-hyphens"
] |
How to recognize text regions using histogram? | 39,078,999 | <p>I have a sample image which looks like this:</p>
<p><img src="http://i.stack.imgur.com/lIgK4.png" alt=""></p>
<p>There could be one or more horizontal lines that separate text sections. I am looking to get 4 chunks of text which looks like:</p>
<p><img src="http://i.stack.imgur.com/k3z4P.png" alt=""></p>
<p>The horizontal lines could be close to the text and the external rectangle is not always there.</p>
<p>I have tried the following
- Threshold
- Erode & Dilate
- FindContours </p>
<p>Since the horizontal line is close to the text, there is no clean way to erode and dilate to get the text above and below the line. Sometimes it works and sometimes it doesnt depending on closeness of the line to the text.</p>
<p>I read that using histograms the horizontal line can be recognized and the text chunks identified always consistently. Any pointers on how this can be done? </p>
| 1 | 2016-08-22T11:59:35Z | 39,087,523 | <p>Detect hougLines -> Black Out the Lines -> Dialate . </p>
<h1>Code</h1>
<pre><code>import cv2
import numpy as np;
im = cv2.imread("im.png")
im_gray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(im_gray,127,255,cv2.THRESH_BINARY_INV)
edges = cv2.Canny(im_gray,50,150,apertureSize = 3)
minLineLength = 100
maxLineGap = 100
lines = cv2.HoughLinesP(edges,1,np.pi/180,100,minLineLength,maxLineGap)
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(thresh,(x1,y1),(x2,y2),(0),5)
kernel = np.ones((3,3),np.uint8)
thresh = cv2.dilate(thresh,kernel,iterations = 10)
_,contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
minArea=5000 #nothing
for cnt in contours:
area=cv2.contourArea(cnt)
if(area>minArea):
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(im,[box],0,(0,0,255),2)
cv2.imshow("thresh", im)
cv2.imwrite('so_result.jpg',im)
cv2.waitKey(0)
</code></pre>
<h1>Output</h1>
<p><a href="http://i.stack.imgur.com/CHgTQ.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/CHgTQ.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-22T19:44:39Z | [
"python",
"opencv",
"histogram"
] |
Spliting up data to match a certian criteria in my view and template django | 39,079,037 | <p>I want to be able to split up my reading(reading is Inspeciton_vals object) data into multiple rows based off of a description(description is a object from my dimension model) So far I haven't figured out any way of doing so, any help would be greatly appreciated. </p>
<p>To break it down I have a Inspeciton_vals model that has a foreign key to dimension each dimension will have a determined amount of inspection_vals assigned for my sheet model (object sample_size) for this particular one it is 24 so, my inspection_vals table will have 48 due to it having two dimensions with 24 each. I was thinking about keying off the id in my dimension model to break it down but, I cant seem to figure out a solid solution. </p>
<p>Here is my views.py </p>
<pre><code>@login_required
def shipping(request, id):
sheet_data = Sheet.objects.get(pk=id)
work_order = sheet_data.work_order
customer_data = Customer.objects.get(id=sheet_data.customer_id)
customer_name = customer_data.customer_name
title_head = 'Shipping-%s' % sheet_data.work_order
complete_data = Sheet.objects.raw("""select s.id, d.id d_id, s.work_order, d.target, i.reading, d.description, i.serial_number from app_sheet s left join app_dimension d on s.id = d.sheet_id
left join app_inspection_vals i on d.id = i.dimension_id""")
for c_d in complete_data:
dim_description = Dimension.objects.filter(sheet_id=c_d.id).values_list('description', flat=True).distinct()
dim_id = Dimension.objects.filter(sheet_id=c_d.id)[:1]
for d_i in dim_id:
dim_data = Inspection_vals.objects.filter(dimension_id=d_i.id)
sample_size = dim_data
return render(request, 'app/shipping.html',
{
'work_order': work_order,
'sample_size': sample_size,
'customer_name': customer_name,
'title': title_head,
'complete_data': complete_data,
'dim_description': dim_description,
})
</code></pre>
<p>here is my shipping.html </p>
<pre><code><div class="container">
<div class="row">
<div>
<table >
<thead>
<tr>
<th>Serial Number</th>
{% for ss in sample_size %}
<th>{{ ss.serial_number }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for desc in dim_description.all %}
<tr>
<th> {{ desc }}</th>
{% for r_c in complete_data %}
<td> {{ r_c.reading }} </td>
{% endfor %}
{% endfor %}
</tr>
</tbody>
</table>
</div>
</div>
</div>
</code></pre>
<p>here are my models </p>
<pre><code>class Sheet(models.Model):
objects = SheetManager()
create_date = models.DateField()
updated_date = models.DateField()
customer_name = models.CharField(max_length=255)
part_number = models.CharField(max_length=255)
part_revision = models.CharField(max_length=255)
work_order = models.CharField(max_length=255)
purchase_order = models.CharField(max_length=255)
sample_size = models.IntegerField()
sample_scheme = models.CharField(max_length=255)
overide_scheme = models.IntegerField()
template = models.IntegerField()
sample_schem_percent = models.IntegerField()
critical_dimensions = models.IntegerField()
closed = models.IntegerField()
serial_index = models.CharField(max_length=255)
drawing_number = models.CharField(max_length=255)
drawing_revision = models.CharField(max_length=255)
heat_number = models.CharField(max_length=255)
note = models.CharField(max_length=255)
valc = models.CharField(max_length=255)
class Dimension(models.Model):
description = models.CharField(max_length=255)
style = models.CharField(max_length=255)
created_at = models.DateField()
updated_at = models.DateField()
target = models.IntegerField()
upper_limit = models.IntegerField()
lower_limit = models.IntegerField()
inspection_tool = models.CharField(max_length=255)
critical = models.IntegerField()
units = models.CharField(max_length=255)
metric = models.CharField(max_length=255)
target_strings = models.CharField(max_length=255)
ref_dim_id = models.IntegerField()
nested_number = models.IntegerField()
met_upper = models.IntegerField()
met_lower = models.IntegerField()
valc = models.CharField(max_length=255)
sheet = models.ForeignKey(Sheet, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
class Inspection_vals(models.Model):
created_at = models.DateField()
updated_at = models.DateField()
reading = models.IntegerField(null=True)
reading2 = models.IntegerField(null=True)
reading3 = models.IntegerField(null=True)
reading4 = models.IntegerField(null=True)
state = models.CharField(max_length=255)
state2 = models.CharField(max_length=255)
state3 = models.CharField(max_length=255)
state4 = models.CharField(max_length=255)
approved_by = models.CharField(max_length=255)
approved_at = models.DateField(null=True, blank=True)
dimension = models.ForeignKey(Dimension, on_delete=models.CASCADE, default=DEFAULT_FOREIGN_KEY)
serial_number = models.IntegerField(default=1)
</code></pre>
<p>finally here is my screen shot of what it looks like now. </p>
<p><a href="http://i.stack.imgur.com/a8sBA.gif" rel="nofollow"><img src="http://i.stack.imgur.com/a8sBA.gif" alt="Intial "></a></p>
<p>Here is what I would like it to look like
<a href="http://i.stack.imgur.com/Kb0G4.gif" rel="nofollow"><img src="http://i.stack.imgur.com/Kb0G4.gif" alt="Here is what I want it to look like"></a></p>
<p>Also if it helps I attached what my data looks like in my db to give more in depth detail.</p>
<p><a href="http://i.stack.imgur.com/dG3GN.png" rel="nofollow"><img src="http://i.stack.imgur.com/dG3GN.png" alt="Data"></a></p>
| 0 | 2016-08-22T12:01:51Z | 39,130,363 | <p>Figured it out basically I turned two list into a dictionary and just iterate over them to give me the results I wanted.</p>
<pre><code>complete_data = Sheet.objects.raw("""select s.id, d.id d_id, s.work_order, d.target, i.reading, d.description, i.serial_number from app_sheet s left join app_dimension d on s.id = d.sheet_id left join app_inspection_vals i on d.id = i.dimension_id""")
key_list = []
vals_list = []
for xr in complete_data:
key_list.append(xr.description)
vals_list.append(xr.reading)
reading_desc = {}
for i in range(len(key_list)):
if key_list[i] in reading_desc:
reading_desc[key_list[i]].append(vals_list[i])
else:
reading_desc[key_list[i]]=[vals_list[i]]
<div class="container">
<div class="row">
<div>
<table >
<thead>
<tr>
<th>Serial Number</th>
{% for ss in sample_size %}
<th>{{ ss.serial_number }}</th>
{% endfor %}
</tr>
</thead>
<tbody>
{% for key, values in reading_desc.items %}
<tr>
<td>{{key}}</td>
{% for v in values %}
<td>{{v}}</td>
{% endfor %}
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
</code></pre>
| 0 | 2016-08-24T18:14:56Z | [
"python",
"django",
"python-2.7"
] |
Index out of range error in scrapy | 39,079,044 | <p>I am trying to scrape a website and get details of the products. Some of the products have unit and some have not. The structure is something like this below:</p>
<p><strong>For products having a unit:</strong></p>
<pre><code><div class="unit">
<p>200ml</p>
</div>
</code></pre>
<p><strong>For products having no unit:</strong></p>
<pre><code><div class = "unit">
<p></p>
</div>
</code></pre>
<p>My spider works something like this:</p>
<pre><code>def product(self, response):
products = response.xpath('descendant::*[@class="product_list_ul"]')
item = Item()
i = 0
while i < 20:
item['link'] = products.xpath(
'descendant::*[@class="product-image"]//a/@href').extract()[i]
item['name'] = products.xpath(
'descendant::*[@class="product-name"]//a/@title').extract()[i]
item['unit'] = products.xpath(
'descendant::*[@class="unit"]/p/text()').extract()[i]
item['price'] = products.xpath(
'descendant::*[@class="price"]/text()').extract()[i]
item['image_url'] = products.xpath(
'descendant::*[@class="product-image"]//a//img/@src').extract()[i]
i += 1
yield item
</code></pre>
<p>But there is a problem. </p>
<pre><code>products.xpath('descendant::*[@class="unit"]/p/text()').extract()
</code></pre>
<p>gives only those results that have unit. For ex: If there are 5 products like this:</p>
<p>p1 : N/A</p>
<p>p2 : 200ml</p>
<p>p3 : 60gm</p>
<p>p4 : 5ml</p>
<p>p5 : N/A</p>
<p>For this I am getting a list as : <strong>[200ml, 60gm, 5ml]</strong>. So I am ultimately getting <strong>"Index out of range error"</strong></p>
<p>Can someone suggest a way in which I can solve this problem and get a list as <strong>[N/A, 200ml, 60gm, 5ml, N/A]</strong></p>
<p><strong>Edit:</strong> I have figured out a way by doing a little more research but the problem is that it works on scrapy shell only. </p>
<pre><code>[txt for item in sel.xpath('descendant::*[@class="litre"]/p') for txt in item.select('text()').extract() or [u'N/A']]
</code></pre>
<p>It gives me a list just as I want. I made the following edits to incorporate this in my python script.</p>
<pre><code>def unit_xpath(self, product):
x = [txt for i in sel.xpath('descendant::*[@class="litre"]/p') for txt in i.select('text()').extract() or [u'n/a']]
return x
def product(self, response):
products = response.xpath('descendant::*[@class="product_list_ul"]')
item = ForestessentialsItem()
i = 0
while i < 20:
item['link'] = products.xpath('descendant::*[@class="product-image"]//a/@href').extract()[i]
item['name'] = products.xpath('descendant::*[@class="product-name"]//a/@title').extract()[i]
item['unit'] = self.unit_xpath(products)[i]
item['price'] = products.xpath('descendant::*[@class="price"]/text()').extract()[i]
item['image_url'] = products.xpath('descendant::*[@class="product-image"]//a//img/@src').extract()[i]
i += 1
yield item
</code></pre>
<p>I am getting error <code>NameError: global name 'sel' is not defined</code>. Can someone please tell me how can I proceed from here</p>
| -2 | 2016-08-22T12:02:01Z | 39,079,543 | <p>There's a slight flaw in your spider's logic. Usually it's possible to get list of products selectors and just iterate through them. Something like this:</p>
<pre><code>def product(self, response):
products = response.xpath('descendant::*[@class="product_list_ul"]')
# [1] "//" is short for "descendant::" so you should use that instead
products = response.xpath('//*[@class="product_list_ul"]')
for prod in products:
item = Item()
item['link'] = prod.xpath('.//*[@class="product-image"]//a/@href').extract()
item['name'] = prod.xpath('.//*[@class="product-name"]//a/@title').extract()
item['unit'] = prod.xpath('.//*[@class="unit"]/p/text()').extract()
item['price'] = prod.xpath('.//*[@class="price"]/text()').extract()
item['image_url'] = prod.xpath('.//*[@class="product-image"]//a//img/@src').extract()
yield item
</code></pre>
<p>If you could provide an url, I could provide a more concrete example.</p>
<p>[1] - More xpath shortcuts and descriptions: <a href="https://our.umbraco.org/wiki/reference/xslt/xpath-axes-and-their-shortcuts/" rel="nofollow">https://our.umbraco.org/wiki/reference/xslt/xpath-axes-and-their-shortcuts/</a></p>
| 0 | 2016-08-22T12:26:48Z | [
"python",
"web-scraping",
"scrapy"
] |
Index out of range error in scrapy | 39,079,044 | <p>I am trying to scrape a website and get details of the products. Some of the products have unit and some have not. The structure is something like this below:</p>
<p><strong>For products having a unit:</strong></p>
<pre><code><div class="unit">
<p>200ml</p>
</div>
</code></pre>
<p><strong>For products having no unit:</strong></p>
<pre><code><div class = "unit">
<p></p>
</div>
</code></pre>
<p>My spider works something like this:</p>
<pre><code>def product(self, response):
products = response.xpath('descendant::*[@class="product_list_ul"]')
item = Item()
i = 0
while i < 20:
item['link'] = products.xpath(
'descendant::*[@class="product-image"]//a/@href').extract()[i]
item['name'] = products.xpath(
'descendant::*[@class="product-name"]//a/@title').extract()[i]
item['unit'] = products.xpath(
'descendant::*[@class="unit"]/p/text()').extract()[i]
item['price'] = products.xpath(
'descendant::*[@class="price"]/text()').extract()[i]
item['image_url'] = products.xpath(
'descendant::*[@class="product-image"]//a//img/@src').extract()[i]
i += 1
yield item
</code></pre>
<p>But there is a problem. </p>
<pre><code>products.xpath('descendant::*[@class="unit"]/p/text()').extract()
</code></pre>
<p>gives only those results that have unit. For ex: If there are 5 products like this:</p>
<p>p1 : N/A</p>
<p>p2 : 200ml</p>
<p>p3 : 60gm</p>
<p>p4 : 5ml</p>
<p>p5 : N/A</p>
<p>For this I am getting a list as : <strong>[200ml, 60gm, 5ml]</strong>. So I am ultimately getting <strong>"Index out of range error"</strong></p>
<p>Can someone suggest a way in which I can solve this problem and get a list as <strong>[N/A, 200ml, 60gm, 5ml, N/A]</strong></p>
<p><strong>Edit:</strong> I have figured out a way by doing a little more research but the problem is that it works on scrapy shell only. </p>
<pre><code>[txt for item in sel.xpath('descendant::*[@class="litre"]/p') for txt in item.select('text()').extract() or [u'N/A']]
</code></pre>
<p>It gives me a list just as I want. I made the following edits to incorporate this in my python script.</p>
<pre><code>def unit_xpath(self, product):
x = [txt for i in sel.xpath('descendant::*[@class="litre"]/p') for txt in i.select('text()').extract() or [u'n/a']]
return x
def product(self, response):
products = response.xpath('descendant::*[@class="product_list_ul"]')
item = ForestessentialsItem()
i = 0
while i < 20:
item['link'] = products.xpath('descendant::*[@class="product-image"]//a/@href').extract()[i]
item['name'] = products.xpath('descendant::*[@class="product-name"]//a/@title').extract()[i]
item['unit'] = self.unit_xpath(products)[i]
item['price'] = products.xpath('descendant::*[@class="price"]/text()').extract()[i]
item['image_url'] = products.xpath('descendant::*[@class="product-image"]//a//img/@src').extract()[i]
i += 1
yield item
</code></pre>
<p>I am getting error <code>NameError: global name 'sel' is not defined</code>. Can someone please tell me how can I proceed from here</p>
| -2 | 2016-08-22T12:02:01Z | 39,092,768 | <p>So I figured out a way to do this.</p>
<pre><code>def unit_xpath(self, response):
x = [txt for item in response.xpath('descendant::*[@class="unit"]/p') for txt in item.xpath('text()').extract() or [u'N/A']]
return x
def product(self, response):
products = response.xpath('descendant::*[@class="product_list_ul"]')
item = Item()
i = 0
while i < 20:
item['link'] = products.xpath(
'descendant::*[@class="product-image"]//a/@href').extract()[i]
item['name'] = products.xpath(
'descendant::*[@class="product-name"]//a/@title').extract()[i]
item['unit'] = products.xpath(
'descendant::*[@class="unit"]/p/text()').extract()[i]
item['price'] = products.xpath(
'descendant::*[@class="price"]/text()').extract()[i]
item['image_url'] = products.xpath(
'descendant::*[@class="product-image"]//a//img/@src').extract()[i]
i += 1
yield item
</code></pre>
<p>Thanks to all for helping me out in this. Also thanks @Granitosaurus.. I know your way of approaching the products in group is way better but this just serves my use case. </p>
| 0 | 2016-08-23T05:06:18Z | [
"python",
"web-scraping",
"scrapy"
] |
Python List Comprehension - extracting from nested data | 39,079,131 | <p>Im new to python and was trying to extract out some nested data</p>
<p>Here is the JSON for two products. A product can belong to zero or more categories</p>
<pre><code> { ï
"Item":[ ï
{ ï
"ID":"170",
"InventoryID":"170",
"Categories":[ ï
{ ï
"Category":[ ï
{ ï
"CategoryID":"444",
"Priority":"0",
"CategoryName":"Paper Mache"
},
{ ï
"CategoryID":"479",
"Priority":"0",
"CategoryName":"Paper Mache"
},
{ ï
"CategoryID":"515",
"Priority":"0",
"CategoryName":"Paper Mache"
}
]
}
],
"Description":"Approximately 9cm wide x 4cm deep.",
"SKU":"111931"
},
{ ï
"ID":"174",
"InventoryID":"174",
" Categories":[ ï
{ ï
"Category":{ ï
"CategoryID":"888",
"Priority":"0",
"CategoryName":"Plaster"
}
}
],
"Description":"Plaster Mould - Australian Animals",
"SKU":"110546"
}
],
"CurrentTime":"2016-08-22 11:52:27",
"Ack":"Success"
}
</code></pre>
<p>I want to work out which Categories a product belongs to.</p>
<p>My code for extraction is as follows:-</p>
<pre><code> for x in products:
productsInCategory = []
for y in x['Categories']:
for z in y['Category']:
if z['CategoryID'] == categories[i]['CategoryID']:
productsInCategory.append(x)
</code></pre>
<p>This issue is that in this case the second item only contains one Category, not an array of categories so this line</p>
<pre><code>for z in y['Category']:
</code></pre>
<p>loops through the properties of a Category and not a Category array and hence causes my code to fail</p>
<p>How can I protect against this? And can this be written more elegantly with list comprehension syntax?</p>
| 2 | 2016-08-22T12:07:01Z | 39,079,206 | <p>That's a very poor document structure in that case; you shouldn't have to deal with this. If an item can contain multiple values, it should always be a list.</p>
<p>Be that as it may, you can still deal with it in your code by checking if it is a list or not.</p>
<pre><code>for x in products:
productsInCategory = []
for y in x['Categories']:
category = y['Category']
if isinstance(category, dict):
category = [category]
for z in category:
...
</code></pre>
<p>(You might want to consider using more descriptive variable names generally; <code>x</code>, <code>y</code> and <code>z</code> are not very helpful for people reading the code.)</p>
| 4 | 2016-08-22T12:10:58Z | [
"python"
] |
Python List Comprehension - extracting from nested data | 39,079,131 | <p>Im new to python and was trying to extract out some nested data</p>
<p>Here is the JSON for two products. A product can belong to zero or more categories</p>
<pre><code> { ï
"Item":[ ï
{ ï
"ID":"170",
"InventoryID":"170",
"Categories":[ ï
{ ï
"Category":[ ï
{ ï
"CategoryID":"444",
"Priority":"0",
"CategoryName":"Paper Mache"
},
{ ï
"CategoryID":"479",
"Priority":"0",
"CategoryName":"Paper Mache"
},
{ ï
"CategoryID":"515",
"Priority":"0",
"CategoryName":"Paper Mache"
}
]
}
],
"Description":"Approximately 9cm wide x 4cm deep.",
"SKU":"111931"
},
{ ï
"ID":"174",
"InventoryID":"174",
" Categories":[ ï
{ ï
"Category":{ ï
"CategoryID":"888",
"Priority":"0",
"CategoryName":"Plaster"
}
}
],
"Description":"Plaster Mould - Australian Animals",
"SKU":"110546"
}
],
"CurrentTime":"2016-08-22 11:52:27",
"Ack":"Success"
}
</code></pre>
<p>I want to work out which Categories a product belongs to.</p>
<p>My code for extraction is as follows:-</p>
<pre><code> for x in products:
productsInCategory = []
for y in x['Categories']:
for z in y['Category']:
if z['CategoryID'] == categories[i]['CategoryID']:
productsInCategory.append(x)
</code></pre>
<p>This issue is that in this case the second item only contains one Category, not an array of categories so this line</p>
<pre><code>for z in y['Category']:
</code></pre>
<p>loops through the properties of a Category and not a Category array and hence causes my code to fail</p>
<p>How can I protect against this? And can this be written more elegantly with list comprehension syntax?</p>
| 2 | 2016-08-22T12:07:01Z | 39,079,456 | <p>I've run into this issue frequently before in JSON structures...frequently enough that I wrote a small library for it a few weeks ago...</p>
<p><a href="https://github.com/tfulmer1/nkr" rel="nofollow">nested key retriever (nkr)</a></p>
<p>Try the generator and see if it solves your problem. You should be able to simple:</p>
<pre><code>for x in products:
if product_id_searching_for in list(nkr.find_nested_key_values(x, 'CategoryID')):
productsInCategory.append(x)
</code></pre>
| 1 | 2016-08-22T12:22:32Z | [
"python"
] |
Reports With Django | 39,079,169 | <p>I'm trying to create a pretty advanced query within django and im having problems doing so, i can use the basic:</p>
<pre><code>for obj in Invoice.objects.filter():
</code></pre>
<p>but if i try and move this into raw PostgreSQL query i get an error telling me that the relation does not exist am i doing something wrong, i am following the <a href="https://docs.djangoproject.com/en/1.10/topics/db/sql/" rel="nofollow">Preforming raw SQL</a> on the django documents but i keep getting same error</p>
<p>full code: </p>
<pre><code>def csv_report(request):
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="somefilename.csv"'
writer = csv.writer(response, csv.excel)
response.write(u'\ufeff'.encode('utf8'))
writer.writerow([
smart_str(u"ID"),
smart_str(u"value"),
smart_str(u"workitem content type"),
smart_str(u"created date"),
smart_str(u"workitem.id"),
smart_str(u"workitem"),
smart_str(u"workitem_content_type"),
])
for obj in Invoice.objects.raw('SELECT * from twm_Invoice'):
writer.writerow([
smart_str(obj.pk),
smart_str(obj.value),
smart_str(obj.workitem_content_type),
smart_str(obj.created_date),
smart_str(obj.workitem_id),
smart_str(obj.workitem),
smart_str(obj.workitem_content_type),
])
return response
</code></pre>
<p>i have tried to use the app now within front of the model name and without none of them seem to work.</p>
<p>Thanks J</p>
| -1 | 2016-08-22T12:09:27Z | 39,079,288 | <p>try running your raw sql directly in the database.. my guess is that your table name is not correct, usually they're lowercase</p>
<p>BTW.. I hope you have a very good reason for using raw sql queries and not the awesome ORM ;)</p>
| 1 | 2016-08-22T12:14:55Z | [
"python",
"django",
"postgresql"
] |
Python Multiprocessing using Pool goes recursively haywire | 39,079,183 | <p>I'm trying to make an expensive part of my pandas calculations parallel to speed up things.</p>
<p>I've already managed to make Multiprocessing.Pool work with a simple example:</p>
<pre><code>import multiprocessing as mpr
import numpy as np
def Test(l):
for i in range(len(l)):
l[i] = i**2
return l
t = list(np.arange(100))
L = [t,t,t,t]
if __name__ == "__main__":
pool = mpr.Pool(processes=4)
E = pool.map(Test,L)
pool.close()
pool.join()
</code></pre>
<p>No problems here. Now my own algorithm is a bit more complicated, I can't post it here in its full glory and terribleness, so I'll use some pseudo-code to outline the things I'm doing there:</p>
<pre><code>import pandas as pd
import time
import datetime as dt
import multiprocessing as mpr
import MPFunctions as mpf --> self-written worker functions that get called for the multiprocessing
import ClassGetDataFrames as gd --> self-written class that reads in all the data and puts it into dataframes
=== Settings
=== Use ClassGetDataFrames to get data
=== Lots of single-thread calculations and manipulations on the dataframe
=== Cut dataframe into 4 evenly big chunks, make list of them called DDC
if __name__ == "__main__":
pool = mpr.Pool(processes=4)
LLT = pool.map(mpf.processChunks,DDC)
pool.close()
pool.join()
=== Join processed Chunks LLT back into one dataframe
=== More calculations and manipulations
=== Data Output
</code></pre>
<p>When I'm running this script the following happens:</p>
<ol>
<li><p>It reads in the data.</p></li>
<li><p>It does all calculations and manipulations until the Pool statement.</p></li>
<li><p>Suddenly it reads in the data again, fourfold.</p></li>
<li><p>Then it goes into the main script fourfold at the same time.</p></li>
<li><p>The whole thing cascades recursively and goes haywire.</p></li>
</ol>
<p>I have read before that this can happen if you're not careful, but I do not know why it does happen here. My multiprocessing code is protected by the needed name-main-statement (I'm on Win7 64), it is only 4 lines long, it has close and join statements, it calls one defined worker function which then calls a second worker function in a loop, that's it. By all I know it should just create the pool with four processes, call the four processes from the imported script, close the pool and wait until everything is done, then just continue with the script. On a sidenote, I first had the worker functions in the same script, the behaviour was the same. Instead of just doing what's in the pool it seems to restart the whole script fourfold.</p>
<p>Can anyone enlighten me what might cause this behaviour? I seem to be missing some crucial understanding about Python's multiprocessing behaviour.</p>
<p>Also I don't know if it's important, I'm on a virtual machine that sits on my company's mainframe.</p>
<p>Do I have to use individual processes instead of a pool?</p>
| 1 | 2016-08-22T12:09:55Z | 39,079,709 | <p>I managed to make it work by enceasing the entire script into the <code>if __name__ == "__main__":</code>-statement, not just the multiprocessing part.</p>
| 1 | 2016-08-22T12:34:36Z | [
"python",
"pandas",
"python-multiprocessing"
] |
Python script makes a file but doesn't write anything | 39,079,278 | <p>First, code gets a name and makes a file with <code>w</code> permission (also tested <code>r+</code>) and it should write any other input in file, but it doesn't. I get an empty file.</p>
<pre><code>user_name_in =input("gets Input")
fname = user_name_in
f = input()
ufile = open(fname,"w")
while True:
f=input(answer)
ufile.write(f)
</code></pre>
| 1 | 2016-08-22T12:14:24Z | 39,079,406 | <p>This code works for me:</p>
<pre><code>user_name_in =input("gets Input")
fname = user_name_in
f = input()
ufile = open(fname,"w")
while True:
f=input(answer)
ufile.write(f)
</code></pre>
<p>Some considerations:
I don't see where <em>answer</em> is declared, neither python interpreter see :P, maybe you forgot to paste this part of the code or indeed this was the error?</p>
<p>I don't understand why you assign the name of the file to a variable and then re-assign to another one.</p>
<p>How do I stop writing to the file? The only way I found was Ctrl-C, which doesn't sound ideal.</p>
<p>To reassure the file is being closed you can replace it with a <strong>with open(fname) as ufile</strong> block</p>
| 1 | 2016-08-22T12:20:17Z | [
"python"
] |
Python script makes a file but doesn't write anything | 39,079,278 | <p>First, code gets a name and makes a file with <code>w</code> permission (also tested <code>r+</code>) and it should write any other input in file, but it doesn't. I get an empty file.</p>
<pre><code>user_name_in =input("gets Input")
fname = user_name_in
f = input()
ufile = open(fname,"w")
while True:
f=input(answer)
ufile.write(f)
</code></pre>
| 1 | 2016-08-22T12:14:24Z | 39,079,416 | <p>As i wrote in the comments, always use the <code>with</code> block to handle files as it takes care of intricacies you don't have to worry about. Now on the code, you repeat yourself, for example the first two lines are actually one.. This is what it would look when cleaned a bit.</p>
<pre><code>fname = input("gets Input")
with open(fname, "w") as ufile:
f = input('write something')
ufile.write(f)
</code></pre>
<p>And as others also noticed, the <code>answer</code> is never declared, there is no termination condition and the input prompts are either not the best or totally absent.</p>
| 5 | 2016-08-22T12:20:37Z | [
"python"
] |
Largest subset in an array such that the smallest and largest elements are less than K apart | 39,079,381 | <p>Given an array, I want to find the largest subset of elements such that the smallest and largest elements of the subset are less than or equal to K apart. Specifically, I want the elements, not just the size. If there are multiple occurrences, any can be matched.</p>
<p>For example, in the array <code>[14,15,17,20,23]</code>, if K was 3, the largest subset possible would be <code>[14,15,17]</code>. The same would go if 17 was replaced by 16. Also, multiple elements should be matched, such as <code>[14,14,14,15,16,17,17]</code>. The array is not necessarily sorted, but it is probably a good starting point to sort it. The elements are not necessarily integral and the subset not necessarily consecutive in the original array - I just want an occurrence of the largest possible subset.</p>
<p>To illustrate the desired result more clearly, a naïve approach would be to first sort the array, iterate over every element of the sorted array, and then create a new array containing the current element that is extended to contain every element after the current element <= K larger than it. (i.e. in the first above example, if the current element was 20, the array would be extended to [20,23] and then stop because the end of the array was reached. If the current element was 15, the array would be extended to [15,17] and then stop because 20 is more than 3 larger than 15.) This array would then be checked against a current maximum and, if it was larger, the current maximum would be replaced. The current maximum is then the largest subset. (This method is of complexity O(N^2), in the case that the largest subset is the array.)</p>
<p>I am aware of this naïve approach, and this question is asking for an optimised algorithm.</p>
<p>A solution in Python is preferable although I can run with a general algorithm.</p>
| 3 | 2016-08-22T12:19:16Z | 39,079,856 | <p>Brute force approach:</p>
<pre><code>arr = [14,14,14,15,16,17,17]
max_difference = 3
solution = []
for i, start in enumerate(arr):
tmp = []
largest = start
smallest = start
for j, end in enumerate(arr[i:]):
if abs(end - largest) <= max_difference and abs(end - smallest) <= max_difference:
tmp.append(end)
if end > largest:
largest = end
if end < smallest:
smallest = end
else:
break
if len(tmp) > len(solution):
solution = tmp
</code></pre>
<p>Try to optimize it! (Tip: the inner loop doesn't need to run as many times as it does here)</p>
| -1 | 2016-08-22T12:41:56Z | [
"python",
"algorithm"
] |
Largest subset in an array such that the smallest and largest elements are less than K apart | 39,079,381 | <p>Given an array, I want to find the largest subset of elements such that the smallest and largest elements of the subset are less than or equal to K apart. Specifically, I want the elements, not just the size. If there are multiple occurrences, any can be matched.</p>
<p>For example, in the array <code>[14,15,17,20,23]</code>, if K was 3, the largest subset possible would be <code>[14,15,17]</code>. The same would go if 17 was replaced by 16. Also, multiple elements should be matched, such as <code>[14,14,14,15,16,17,17]</code>. The array is not necessarily sorted, but it is probably a good starting point to sort it. The elements are not necessarily integral and the subset not necessarily consecutive in the original array - I just want an occurrence of the largest possible subset.</p>
<p>To illustrate the desired result more clearly, a naïve approach would be to first sort the array, iterate over every element of the sorted array, and then create a new array containing the current element that is extended to contain every element after the current element <= K larger than it. (i.e. in the first above example, if the current element was 20, the array would be extended to [20,23] and then stop because the end of the array was reached. If the current element was 15, the array would be extended to [15,17] and then stop because 20 is more than 3 larger than 15.) This array would then be checked against a current maximum and, if it was larger, the current maximum would be replaced. The current maximum is then the largest subset. (This method is of complexity O(N^2), in the case that the largest subset is the array.)</p>
<p>I am aware of this naïve approach, and this question is asking for an optimised algorithm.</p>
<p>A solution in Python is preferable although I can run with a general algorithm.</p>
| 3 | 2016-08-22T12:19:16Z | 39,079,955 | <p>An inefficient algorithm (O(n^2)) for this would be very simple:</p>
<pre><code>l = [14,15,17,20,23]
s = max((list(filter(lambda x: start<=x<=start+3, l)) for start in l), key=len)
print(s)
</code></pre>
| -1 | 2016-08-22T12:45:46Z | [
"python",
"algorithm"
] |
Largest subset in an array such that the smallest and largest elements are less than K apart | 39,079,381 | <p>Given an array, I want to find the largest subset of elements such that the smallest and largest elements of the subset are less than or equal to K apart. Specifically, I want the elements, not just the size. If there are multiple occurrences, any can be matched.</p>
<p>For example, in the array <code>[14,15,17,20,23]</code>, if K was 3, the largest subset possible would be <code>[14,15,17]</code>. The same would go if 17 was replaced by 16. Also, multiple elements should be matched, such as <code>[14,14,14,15,16,17,17]</code>. The array is not necessarily sorted, but it is probably a good starting point to sort it. The elements are not necessarily integral and the subset not necessarily consecutive in the original array - I just want an occurrence of the largest possible subset.</p>
<p>To illustrate the desired result more clearly, a naïve approach would be to first sort the array, iterate over every element of the sorted array, and then create a new array containing the current element that is extended to contain every element after the current element <= K larger than it. (i.e. in the first above example, if the current element was 20, the array would be extended to [20,23] and then stop because the end of the array was reached. If the current element was 15, the array would be extended to [15,17] and then stop because 20 is more than 3 larger than 15.) This array would then be checked against a current maximum and, if it was larger, the current maximum would be replaced. The current maximum is then the largest subset. (This method is of complexity O(N^2), in the case that the largest subset is the array.)</p>
<p>I am aware of this naïve approach, and this question is asking for an optimised algorithm.</p>
<p>A solution in Python is preferable although I can run with a general algorithm.</p>
| 3 | 2016-08-22T12:19:16Z | 39,080,422 | <p>A speedy approach with complexity O(n*log(n)) for the sort and O(n) to search for the longest chain:</p>
<pre><code>list_1 = [14, 15, 17, 20, 23]
k = 3
list_1.sort()
list_len = len(list_1)
min_idx = -1
max_idx = -1
idx1 = 0
idx2 = 0
while idx2 < list_len-1:
idx2 += 1
while list_1[idx2] - list_1[idx1] > k:
idx1 += 1
if idx2 - idx1 > max_idx - min_idx:
min_idx, max_idx = idx1, idx2
print(list_1[min_idx:max_idx+1])
</code></pre>
| -1 | 2016-08-22T13:07:55Z | [
"python",
"algorithm"
] |
Largest subset in an array such that the smallest and largest elements are less than K apart | 39,079,381 | <p>Given an array, I want to find the largest subset of elements such that the smallest and largest elements of the subset are less than or equal to K apart. Specifically, I want the elements, not just the size. If there are multiple occurrences, any can be matched.</p>
<p>For example, in the array <code>[14,15,17,20,23]</code>, if K was 3, the largest subset possible would be <code>[14,15,17]</code>. The same would go if 17 was replaced by 16. Also, multiple elements should be matched, such as <code>[14,14,14,15,16,17,17]</code>. The array is not necessarily sorted, but it is probably a good starting point to sort it. The elements are not necessarily integral and the subset not necessarily consecutive in the original array - I just want an occurrence of the largest possible subset.</p>
<p>To illustrate the desired result more clearly, a naïve approach would be to first sort the array, iterate over every element of the sorted array, and then create a new array containing the current element that is extended to contain every element after the current element <= K larger than it. (i.e. in the first above example, if the current element was 20, the array would be extended to [20,23] and then stop because the end of the array was reached. If the current element was 15, the array would be extended to [15,17] and then stop because 20 is more than 3 larger than 15.) This array would then be checked against a current maximum and, if it was larger, the current maximum would be replaced. The current maximum is then the largest subset. (This method is of complexity O(N^2), in the case that the largest subset is the array.)</p>
<p>I am aware of this naïve approach, and this question is asking for an optimised algorithm.</p>
<p>A solution in Python is preferable although I can run with a general algorithm.</p>
| 3 | 2016-08-22T12:19:16Z | 39,093,507 | <p>I assume that <strong>we can not modify array by sorting it</strong> & we have to find out <strong>largest consecutive Subset</strong>, So my solution (in python 3.2) is :</p>
<pre><code>arr = [14, 15, 17, 20, 23]
k = 3
f_start_index=0
f_end_index =0
length = len(arr)
for i in range(length):
min_value = arr[i]
max_value = arr[i]
start_index = i
end_index = i
for j in range((i+1),length):
if (min_value != arr[j] and max_value != arr[j]) :
if (min_value > arr[j]) :
min_value = arr[j]
elif (max_value < arr[j]) :
max_value = arr[j]
if(max_value-min_value) > k :
break
end_index = j
if (end_index-start_index) > (f_end_index-f_start_index):
f_start_index = start_index
f_end_index = end_index
if(f_end_index-f_start_index>=(length-j+1)): # for optimization
break
for i in range(f_start_index,f_end_index+1):
print(arr[i],end=" ")
</code></pre>
<p>It is not most efficient solution , but it will get your work done.</p>
<p>Tested against :</p>
<p>1.input:<code>[14, 15, 17, 20, 23]</code></p>
<p>1.output:<code>14 15 17</code></p>
<p>2.input:<code>[14,14,14,15,16,17,17]</code></p>
<p>2.output:<code>14 14 14 15 16 17 17</code></p>
<p>3.input:<code>[23 ,20, 17 , 16 ,14]</code></p>
<p>3.output:<code>17 16 14</code></p>
<p>4.input:<code>[-2,-1,0,1,2,4]</code></p>
<p>4.output:<code>-2 -1 0 1</code></p>
<p>For input number 4 there are two possible answers</p>
<ul>
<li>-2 -1 0 1</li>
<li>-1 0 1 2
But my solution take first as if subset's length is same then it will print the subset which occurs first in array when we traverse array elements from position 0 to array length-1</li>
</ul>
<p>But if we have to find <strong>largest subset</strong> in array which may or may not be consecutive then solution would be different.</p>
| 1 | 2016-08-23T06:06:42Z | [
"python",
"algorithm"
] |
Largest subset in an array such that the smallest and largest elements are less than K apart | 39,079,381 | <p>Given an array, I want to find the largest subset of elements such that the smallest and largest elements of the subset are less than or equal to K apart. Specifically, I want the elements, not just the size. If there are multiple occurrences, any can be matched.</p>
<p>For example, in the array <code>[14,15,17,20,23]</code>, if K was 3, the largest subset possible would be <code>[14,15,17]</code>. The same would go if 17 was replaced by 16. Also, multiple elements should be matched, such as <code>[14,14,14,15,16,17,17]</code>. The array is not necessarily sorted, but it is probably a good starting point to sort it. The elements are not necessarily integral and the subset not necessarily consecutive in the original array - I just want an occurrence of the largest possible subset.</p>
<p>To illustrate the desired result more clearly, a naïve approach would be to first sort the array, iterate over every element of the sorted array, and then create a new array containing the current element that is extended to contain every element after the current element <= K larger than it. (i.e. in the first above example, if the current element was 20, the array would be extended to [20,23] and then stop because the end of the array was reached. If the current element was 15, the array would be extended to [15,17] and then stop because 20 is more than 3 larger than 15.) This array would then be checked against a current maximum and, if it was larger, the current maximum would be replaced. The current maximum is then the largest subset. (This method is of complexity O(N^2), in the case that the largest subset is the array.)</p>
<p>I am aware of this naïve approach, and this question is asking for an optimised algorithm.</p>
<p>A solution in Python is preferable although I can run with a general algorithm.</p>
| 3 | 2016-08-22T12:19:16Z | 39,144,412 | <p>This seems very similar to your "naïve" approach, but it's O(n) excluding the sort so I don't think you can improve on your approach much. The optimization is to use indices and only create a second array once the answer is known:</p>
<pre><code>def largest_less_than_k_apart(a, k):
a.sort()
upper_index = lower_index = max_length = max_upper_index = max_lower_index = 0
while upper_index < len(a):
while a[lower_index] < a[upper_index] - k:
lower_index += 1
if upper_index - lower_index + 1 > max_length:
max_length = upper_index - lower_index + 1
max_upper_index, max_lower_index = upper_index, lower_index
upper_index += 1
return a[max_lower_index:max_upper_index + 1]
a = [14,15,17,20,23]
print largest_less_than_k_apart(a, 3);
</code></pre>
<p>Output:</p>
<pre><code>[14, 15, 17]
</code></pre>
<p>It does one pass through the sorted array, with the current index stored in <code>upper_index</code> and another index <code>lower_index</code> that lags behind as far as possible while still pointing to a value greater than or equal to K less than the value of the current element. The function keeps track of when the two indices are as far apart as possible and uses those indices to split the list and return the subset.</p>
<p>Duplicate elements are handled, because <code>lower_index</code> lags behind as far as possible (pointing to the earliest duplicate), whereas the difference of indices will be maximal when <code>upper_index</code> is pointing to the last duplicate of a given subset.</p>
<p>It's not valid to pass in a negative value for k.</p>
| 1 | 2016-08-25T11:46:01Z | [
"python",
"algorithm"
] |
implemention of python builtin function | 39,079,400 | <p>I'm a freshman in python and I want to study the implemention of python's builtin function like <code>abs()</code>, but in the python file of <code>\__builtin__.py</code> I saw this:</p>
<p><img src="http://i.stack.imgur.com/VW1od.png" alt="builtin_abs_function"></p>
<p>Does anybody know how it works?</p>
| 0 | 2016-08-22T12:20:10Z | 39,079,597 | <p>The built-in functions are implemented in the same language as the interpreter, so the source code is different depending on the Python implementation you are using (Jython, CPython, PyPy, etc). You are probably using CPython, so the <code>abs()</code> function is implemented in C. You can look at the real source code of this function <a href="https://hg.python.org/coding/cpython/file/tip/Python/bltinmodule.c#l245" rel="nofollow">here</a>.</p>
<pre><code>static PyObject *
builtin_abs(PyObject *module, PyObject *x)
{
return PyNumber_Absolute(x);
}
</code></pre>
<p>The source code for <code>PyNumber_Absolute</code> (which is, arguably, more interesting) can be found <a href="https://hg.python.org/coding/cpython/file/tip/Objects/abstract.c#l1165" rel="nofollow">here</a>:</p>
<pre><code>PyObject *
PyNumber_Absolute(PyObject *o)
{
PyNumberMethods *m;
if (o == NULL)
return null_error();
m = o->ob_type->tp_as_number;
if (m && m->nb_absolute)
return m->nb_absolute(o);
return type_error("bad operand type for abs(): '%.200s'", o);
}
</code></pre>
<p>As you can see, the actual implementation of <code>abs()</code> calls <code>nb_absolute()</code> which is different for different object types. The one for float looks <a href="https://hg.python.org/coding/cpython/file/tip/Objects/floatobject.c#l802" rel="nofollow">like this</a></p>
<pre><code>static PyObject *
float_abs(PyFloatObject *v)
{
return PyFloat_FromDouble(fabs(v->ob_fval));
}
</code></pre>
<p>So, effectively, CPython is just using the C math library in this case. The same will be true for other implementations of Python - Jython is using the functions from the Java math library.</p>
| 5 | 2016-08-22T12:29:30Z | [
"python",
"builtin"
] |
Data frame group ID, create value: count in column | 39,079,415 | <p>Given the following sample dataset:</p>
<pre><code>import numpy as np
import pandas as pd
df1 = (pd.DataFrame(np.random.randint(3, size=(5, 4)), columns=('ID', 'X1', 'X2', 'X3')))
print(df1)
ID X1 X2 X3
0 2 2 0 2
1 1 0 2 1
2 1 2 1 1
3 1 2 0 2
4 2 0 0 0
d = {'ID' : pd.Series([1, 2, 1, 4, 5]), 'Tag' : pd.Series(['One', 'Two', 'Two', 'Four', 'Five'])}
df2 = (pd.DataFrame(d))
print(df2)
ID Tag
0 1 One
1 2 Two
2 1 Two
3 4 Four
4 5 Five
df1['Merged_Tags'] = df1.ID.map(df2.groupby('ID').Tag.apply(list))
print(df1)
ID X1 X2 X3 Merged_Tags
0 2 2 0 2 [Two]
1 1 0 2 1 [One, Two]
2 1 2 1 1 [One, Two]
3 1 2 0 2 [One, Two]
4 2 0 0 0 [Two]
</code></pre>
<p>Expected output for <code>ID = 1</code>:</p>
<p><strong>1.</strong></p>
<p>How would one groupby each key and generate a <code>Tag: Frequency</code> format in the <code>Merged_Tags</code> column?</p>
<pre><code> ID X1 X2 X3 Merged_Tags
1 1 0 2 1 [One: 3, Two: 3]
</code></pre>
<p><strong>2.</strong></p>
<p>Create a new column for the number of rows with that <code>ID</code> </p>
<pre><code> ID X1 X2 X3 Merged_Tags Frequency
1 1 0 2 1 [One: 3, Two: 3] 3
</code></pre>
<p><strong>3.</strong></p>
<p>Add the values of column <code>X3</code> in each row occurrence with the same <code>ID</code></p>
<pre><code> ID X1 X2 X3 Merged_Tags Frequency X3++
1 1 0 2 1 [One: 3, Two: 3] 3 4
</code></pre>
| 2 | 2016-08-22T12:20:35Z | 39,208,560 | <p><a href="/questions/tagged/user3939059" class="post-tag" title="show questions tagged 'user3939059'" rel="tag">user3939059</a></p>
<pre><code>1 0 2 1 [One: 3, Two: 3]
</code></pre>
<p>should be [One: 2, Two:3] instead right? Considering that:</p>
<pre><code> 1 : [One,Two]
0 : None
2 : [Two]
1 : [One, Two]
</code></pre>
<p>and you want a total counter of each key in the row ?</p>
<p>Please help me understand the intuition behind [One:3, Two:3] in case I am missing anything here, but your question should be easy to solve otherwise</p>
| 0 | 2016-08-29T14:20:47Z | [
"python",
"pandas"
] |
linear programming with scipy.optimize.linprog - variable coefficients | 39,079,458 | <p>Trying to optimize using scipy.optimize.linprog a cost function, where the cost coefficients are function of the variables; e.g. </p>
<p>Cost = c1 * x1 + c2 * x2 (x1,x2 are the variables)</p>
<p>for example</p>
<p>if x1 = 1, c1 = 0.5</p>
<p>if x1 = 2, c1 = 1.25</p>
<p>etc.</p>
<p>Thank you for your help</p>
<p><strong>* Just to clarify *</strong></p>
<p>we are looking for a minimum cost of variables; xi; i=1,2,3,...
xi are positive integers.</p>
<p>however, the cost coefficient per xi, is a function of the value of xi.
cost is x1*f1(x1) + x2*f2(x2) + ... + c0</p>
<p>fi - is a "rate" table; e.g. - f1(0) = 0; f1(1) = 2.00; f1(2) = 3.00, etc. </p>
<p>the xi are under constrains, and they can't be negative and can't be over qi =></p>
<p>0 <= xi <= qi </p>
<p>fi() values are calculated for each possible value of xi</p>
<p>I hope it clarifies the model.</p>
| -1 | 2016-08-22T12:22:36Z | 39,089,821 | <p>Here is some prototype-code to show you how, that your problem is quite hard (regarding formulation and performance; the former is visible in the code).</p>
<p>The implementation uses cvxpy for modelling (<strong>convex-programming only</strong>) and is based on the <strong>mixed-integer approach</strong>.</p>
<h3>Code</h3>
<pre class="lang-python prettyprint-override"><code> import numpy as np
from cvxpy import *
"""
x0 == 0 -> f(x) = 0
x0 == 1 -> f(x) = 1
...
x1 == 0 -> f(x) = 1
x1 == 1 -> f(x) = 4
...
"""
rate_table = np.array([[0, 1, 3, 5], [1, 4, 5, 6], [1.3, 1.7, 2.25, 3.0]])
bounds_x = (0, 3) # inclusive; bounds are needed for linearization!
# Vars
# ----
n_vars = len(rate_table)
n_values_per_var = [len(x) for x in rate_table]
I = Bool(n_vars, n_values_per_var[0]) # simplified assumption: rate-table sizes equal
X = Int(n_vars)
X_ = Variable(n_vars, n_values_per_var[0]) # X_ = mul_elemwise(I*X) broadcasted
# Constraints
# -----------
constraints = []
# X is bounded
constraints.append(X >= bounds_x[0])
constraints.append(X <= bounds_x[1])
# only one value in rate-table active (often formulated with SOS-type-1 constraints)
for i in range(n_vars):
constraints.append(sum_entries(I[i, :]) <= 1)
# linearization of product of BIN * INT (INT needs to be bounded!)
# based on Erwin's answer here:
# https://www.or-exchange.org/questions/10775/how-to-linearize-product-of-binary-integer-and-integer-variables
for i in range(n_values_per_var[0]):
constraints.append(bounds_x[0] * I[:, i] <= X_[:, i])
constraints.append(X_[:, i] <= bounds_x[1] * I[:, i])
constraints.append(X - bounds_x[1]*(1-I[:, i]) <= X_[:, i])
constraints.append(X_[:, i] <= X - bounds_x[0]*(1-I[:, i]))
# Fix chosings -> if table-entry x used -> integer needs to be x
# assumptions:
# - table defined for each int
help_vec = np.arange(n_values_per_var[0])
constraints.append(I * help_vec == X)
# ONLY FOR DEBUGGING -> make simple max each X solution infeasible
constraints.append(sum_entries(mul_elemwise([1, 3, 2], square(X))) <= 15)
# Objective
# ---------
objective = Maximize(sum_entries(mul_elemwise(rate_table, X_)))
# Problem & Solve
# ---------------
problem = Problem(objective, constraints)
problem.solve() # choose other solver if needed, e.g. commercial ones like Gurobi, Cplex
print('Max-objective: ', problem.value)
print('X:\n' + str(X.value))
</code></pre>
<h3>Output</h3>
<pre><code>('Max-objective: ', 20.70000000000001)
X:
[[ 3.]
[ 1.]
[ 1.]]
</code></pre>
<h3>Idea</h3>
<ul>
<li>Transform the objective <code>max: x0*f(x0) + x1*f(x1) + ...</code>
<ul>
<li>into: <code>x0*f(x0==0) + x0*f(x0==1) + ... + x1*f(x1==0) + x1*f(x1==1)+ ...</code></li>
</ul></li>
<li>Introduce binary-variables to formulate:
<ul>
<li><code>f(x0==0) as I[0,0]*table[0,0]</code></li>
<li><code>f(x1==2) as I[1,2]*table[0,2]</code></li>
</ul></li>
<li>Add constraints to limit the above <code>I</code> to have one nonzero entry only for each variable <code>x_i</code> (only one of the expanded objective-components will be active)</li>
<li>Linearize the product <code>x0*f(x0==0) == x0*I[0,0]*table(0,0)</code> (integer * binary * constant)</li>
<li>Fix the table-lookup: using table-entry with index x (of x0) should result in <code>x0 == x</code>
<ul>
<li>assuming, that there are no gaps in the table, this can be done formulated as <code>I * help_vec == X)</code> where <code>help_vec == vector(lower_bound, ..., upper_bound)</code></li>
</ul></li>
</ul>
<p><strong>cvxpy</strong> is automatically (by construction) proving, that our formulation is <strong>convex</strong>, which is needed for most solvers (and in general not easy to recognize).</p>
<h3>Just for fun: bigger-problem and commercial-solver</h3>
<p>Input generated by:</p>
<pre><code>def gen_random_growing_table(size):
return np.cumsum(np.random.randint(1, 10, size))
SIZE = 100
VARS = 100
rate_table = np.array([gen_random_growing_table(SIZE) for v in range(VARS)])
bounds_x = (0, SIZE-1) # inclusive; bounds are needed for linearization!
...
...
constraints.append(sum_entries(square(X)) <= 150)
</code></pre>
<p>Output:</p>
<pre><code>Explored 19484 nodes (182729 simplex iterations) in 129.83 seconds
Thread count was 4 (of 4 available processors)
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.5231e-05) exceeds tolerance
Best objective -1.594000000000e+03, best bound -1.594000000000e+03, gap 0.0%
('Max-objective: ', 1594.0000000000005)
</code></pre>
| 1 | 2016-08-22T22:44:12Z | [
"python",
"numpy",
"simplex"
] |
Error in dictionary to list manipulation in python | 39,079,477 | <p>I got stuck in a code in python. I am new to python.<br>
I have tried to add the polynomial in a basic way but I don't know how to get the values as a dictionary and convert it to a list and back.<br>
This is the code I have tried:</p>
<pre><code>def add(s1,s2):
if len(s1) > len(s2):
new = [i for i in s1]
for i in range(len(s2)): new[i] += s2[i]
else:
new = [i for i in s2]
for i in range(len(s1)): new[i] += s1[i]
return new
</code></pre>
<p>When I run this program I got output as</p>
<pre><code>add((2,0),(3,1)) [5,1]
</code></pre>
<p>but I gave input like this <code>add([(4,3),(3,0)],[(-4,3),(2,1)])</code>, which gives me an an error. How to get dictionaries as input for following code?
for example if I gave input as </p>
<pre><code>addpoly([(4,3),(3,0)],[(-4,3),(2,1)]) it should give me
[(2, 1),(3, 0)]
</code></pre>
<p>if there the output is zero it should return empty list[].</p>
| -1 | 2016-08-22T12:23:15Z | 39,080,409 | <p>I am running your code in python-3.x and it gives output like <code>[(-4, 3, 4, 3), (2, 1, 3, 0)]</code><br>
so it is only a concatenation of the input, because for the input <code>add([(4,3),(3,0)],[(-4,3),(2,1)])</code> there is a list inside another list.</p>
<p>For more explanation: if the input is like <code>add([(1,1),(2,2),(3,3)],[(-1,-1),(-2,-2),(-3,-3)])</code> then the output is: [(<strong>-1, -1</strong>, 1, 1), (<strong>-2, -2</strong>, 2, 2), (<strong>-3, -3</strong>, 3, 3)]</p>
<p>And for adding two list elements code:</p>
<pre><code>def add(s1,s2):
if len(s1) > len(s2):
new = [i for i in s1]
for i in range(len(s2)):
if(isinstance(new[i], int)):
new[i] += s2[i]
else:
t1 = new[i][0] + s2[i][0]
t2 = new[i][1] + s2[i][1]
new[i] = (t1,t2)
else:
new = [i for i in s2]
for i in range(len(s1)):
if(isinstance(new[i], int)):
new[i] += s1[i]
else:
t1 = new[i][0] + s1[i][0]
t2 = new[i][1] + s1[i][1]
new[i] = (t1,t2)
return new
</code></pre>
<p>If we give input:</p>
<pre><code>print(add([(1,1),(2,2),(3,3)],[(-1,-1),(-2,-2),(-3,-3)]))
print(add((2,0),(3,1)))
print(add([(1,1),(2,2),(3,3)],[(-1,-1),(-2,-2)]))
</code></pre>
<p>we get the output:</p>
<pre><code>[(0, 0), (0, 0), (0, 0)]
[5, 1]
[(0, 0), (0, 0), (3, 3)]
</code></pre>
| 2 | 2016-08-22T13:07:30Z | [
"python",
"python-3.x"
] |
Matplotlib animation: vertical cursor line through subplots | 39,079,562 | <p><strong>[Solution has been added to the EDIT sections in this post]</strong></p>
<p>2 animated subplots are stacked vertically.</p>
<p>I would like to show a black vertical line through them according to the mouse position. </p>
<p>Up to now I can only completely mess the figure when moving the mouse...</p>
<p>How to clear the old vertical lines between updates?</p>
<p>(Just out of curiosity: since mouse movement control, my PC fan goes crazy when executing the code even without moving the mouse. Is mouse so "calculation expensive"?!?)</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from time import sleep
val1 = np.zeros(100)
val2 = np.zeros(100)
level1 = 0.2
level2 = 0.5
fig, ax = plt.subplots()
ax1 = plt.subplot2grid((2,1),(0,0))
lineVal1, = ax1.plot(np.zeros(100))
ax1.set_ylim(-0.5, 1.5)
ax2 = plt.subplot2grid((2,1),(1,0))
lineVal2, = ax2.plot(np.zeros(100), color = "r")
ax2.set_ylim(-0.5, 1.5)
def onMouseMove(event):
ax1.axvline(x=event.xdata, color="k")
ax2.axvline(x=event.xdata, color="k")
def updateData():
global level1, val1
global level2, val2
clamp = lambda n, minn, maxn: max(min(maxn, n), minn)
level1 = clamp(level1 + (np.random.random()-.5)/20.0, 0.0, 1.0)
level2 = clamp(level2 + (np.random.random()-.5)/10.0, 0.0, 1.0)
# values are appended to the respective arrays which keep the last 100 readings
val1 = np.append(val1, level1)[-100:]
val2 = np.append(val2, level2)[-100:]
yield 1 # FuncAnimation expects an iterator
def visualize(i):
lineVal1.set_ydata(val1)
lineVal2.set_ydata(val2)
return lineVal1,lineVal2
fig.canvas.mpl_connect('motion_notify_event', onMouseMove)
ani = animation.FuncAnimation(fig, visualize, updateData, interval=50)
plt.show()
</code></pre>
<p><strong>Edit1</strong></p>
<p>As solved by Ophir:</p>
<pre><code>def onMouseMove(event):
ax1.lines = [ax1.lines[0]]
ax2.lines = [ax2.lines[0]]
ax1.axvline(x=event.xdata, color="k")
ax2.axvline(x=event.xdata, color="k")
</code></pre>
<p><strong>Edit2</strong></p>
<p>In case there are more datasets in the same plot such as in: </p>
<pre><code>ax1 = plt.subplot2grid((2,1),(0,0))
lineVal1, = ax1.plot(np.zeros(100))
lineVal2, = ax2.plot(np.zeros(100), color = "r")
ax1.set_ylim(-0.5, 1.5)
</code></pre>
<p>each dataset's line is stored in <code>ax1.lines[]</code>:</p>
<ul>
<li><code>ax1.lines[0]</code> is <code>lineVal1</code> </li>
<li><code>ax1.lines[1]</code> is <code>lineVal2</code> </li>
<li><code>ax1.lines[2]</code> is the vertical line if you already drew it.</li>
</ul>
<p>This means <code>onMouseMove</code> has to be changed to:</p>
<pre><code>def onMouseMove(event):
ax1.lines = ax1.lines[:2] # keep the first two lines
ax1.axvline(x=event.xdata, color="k") # then draw the vertical line
</code></pre>
| 0 | 2016-08-22T12:27:48Z | 39,082,657 | <p>replace your <code>onMouseMove</code> with the following one:</p>
<p>(I used <a href="http://stackoverflow.com/questions/4981815/how-to-remove-lines-in-a-matplotlib-plot">How to remove lines in a Matplotlib plot</a>)</p>
<pre><code>def onMouseMove(event):
ax1.lines = [ax1.lines[0]]
ax2.lines = [ax2.lines[0]]
ax1.axvline(x=event.xdata, color="k")
ax2.axvline(x=event.xdata, color="k")
</code></pre>
| 1 | 2016-08-22T14:51:55Z | [
"python",
"animation",
"matplotlib",
"cursor",
"mouse"
] |
Pillow: strange behavior using Draw.rectangle | 39,080,087 | <p>I am drawing rectangles in a for-loop using Pillow. This worked on my desktop computer, but throwing a strange exception on my laptop.</p>
<p>This is the code (shortened):</p>
<pre><code>from PIL import Image, ImageDraw
(...)
img = Image.open(sys.argv[1])
rimg = img.copy()
rimg_draw = ImageDraw.Draw(rimg)
(...)
(for-loop)
rimg_draw.rectangle((x1, y1, x2, y2), fill=None, outline=(255, 0, 0))
</code></pre>
<p>This throws the following exception:</p>
<pre><code>rimg_draw.rectangle((x1, y1, x2, y2), fill=None, outline=(255, 0, 0))
File "/home/daniel/tensorflow2.7/lib/python2.7/site-packages/PIL/ImageDraw.py", line 203, in rectangle
ink, fill = self._getink(outline, fill)
File "/home/daniel/tensorflow2.7/lib/python2.7/site-packages/PIL/ImageDraw.py", line 124, in _getink
ink = self.draw.draw_ink(ink, self.mode)
TypeError: function takes exactly 1 argument (3 given)
</code></pre>
<p>I do not understand, why this code fails: at Pillow's very own <a href="http://pillow.readthedocs.io/en/3.3.x/reference/ImageDraw.html#methods" rel="nofollow">documentation</a> <code>PIL.ImageDraw.Draw.rectangle</code> is defined <strong>with</strong> these arguments: <code>rectangle(xy, fill=None, outline=None)</code>.</p>
<p>Since the documentation explicitly lists the optional parameters <code>fill</code> and <code>outline</code>, why is Pillow complaining that it only takes 1 argument?</p>
<p><code>pip freeze</code> says Pillows version is <code>3.3.1</code>.</p>
| 0 | 2016-08-22T12:51:54Z | 39,081,436 | <p>After slight adjustments to your code to make it run, I was not able to reproduce the exception.</p>
<pre><code>from PIL import Image, ImageDraw
img = Image.open('testfig.png')
rimg = img.copy()
rimg_draw = ImageDraw.Draw(rimg)
rimg_draw.rectangle((10, 10, 30, 30), fill=None, outline=(255, 0, 0))
rimg.show()
</code></pre>
<p>However, I'm running Python 3.4.4 and Pillow 3.2.0 on my system. Is there any obvious difference in versions on your laptop compared to your desktop?</p>
<p>Can you have a deeper look at your code lines 124 and 203, respectively, or provide us with a working code snippet that creates this exception for you?</p>
| 2 | 2016-08-22T13:55:22Z | [
"python",
"python-2.7",
"python-imaging-library",
"pillow"
] |
Find clipped pixels in a RGB image | 39,080,153 | <p>I have an image image.png and I want to find all clipped pixels. Here is what I have so far:</p>
<pre><code>for i in range(1,width):
for j in range(1, height):
r,g,b = image.getpixel((i,j))
If( ): # I don't know what should be the condition here
# do something else
</code></pre>
<p>I use Python, Tkinter, Pil.</p>
<p>Thanks</p>
| 1 | 2016-08-22T12:55:27Z | 39,081,018 | <p>If by 'clipped' you mean saturated, then you probably want to create a threshold based on the intensity of the pixel. There are a few equations that try to determine this, but I would recommend one of the <a href="https://en.wikipedia.org/wiki/Grayscale" rel="nofollow">Grayscale equations</a>. Looking at the equation used in ATSC: </p>
<pre><code>I=.2126*r+.7152*g+.0722*b
</code></pre>
<p>Then just figure out what range of values for I you considered 'clipped'.</p>
| 0 | 2016-08-22T13:34:01Z | [
"python",
"image",
"tkinter",
"python-imaging-library",
"clipped"
] |
Python check if date is within 24 hours | 39,080,155 | <p>I have been trying some code for this, but I can't seem to completely wrap my head around it.</p>
<p>I have a set date, <code>set_date</code> which is just some random date as you'd expect and that one is just data I get.
Now I would like some error function that raises an error if <code>datetime.now()</code> is within 24 hours of the <code>set_date</code>.</p>
<p>I have been trying code with the <code>timedelta(hours=24)</code></p>
<pre><code>from datetime import datetime, timedelta
now = datetime.now()
if now < (set_date - timedelta(hours=24)):
raise ValidationError('')
</code></pre>
<p>I'm not sure whats right to do with this, what the good way to do is. How exactly do I check if the current time is 24 hours before the set date?</p>
| 1 | 2016-08-22T12:55:37Z | 39,080,237 | <p>Like that?</p>
<pre><code>if now-timedelta(hours=24) <= set_date <= now:
... #date less than 24 hours in the past
</code></pre>
<p>If you want to check for the date to be within 24 hours on either side:</p>
<pre><code>if now-timedelta(hours=24) <= set_date <= now+timedelta(hours=24):
... #date within 24 hours
</code></pre>
| 4 | 2016-08-22T12:59:16Z | [
"python"
] |
Python check if date is within 24 hours | 39,080,155 | <p>I have been trying some code for this, but I can't seem to completely wrap my head around it.</p>
<p>I have a set date, <code>set_date</code> which is just some random date as you'd expect and that one is just data I get.
Now I would like some error function that raises an error if <code>datetime.now()</code> is within 24 hours of the <code>set_date</code>.</p>
<p>I have been trying code with the <code>timedelta(hours=24)</code></p>
<pre><code>from datetime import datetime, timedelta
now = datetime.now()
if now < (set_date - timedelta(hours=24)):
raise ValidationError('')
</code></pre>
<p>I'm not sure whats right to do with this, what the good way to do is. How exactly do I check if the current time is 24 hours before the set date?</p>
| 1 | 2016-08-22T12:55:37Z | 39,080,745 | <p>That will do:</p>
<pre><code>if now - timedelta(hours=24) <= set_date <= now + timedelta(hours=24):
#Do something
</code></pre>
<p>Which is equivalent to:</p>
<pre><code>if now - timedelta(hours=24) <= set_date <= now or now <= set_date <= now + timedelta(hours=24):
# ---^--- in the past 24h ---^--- in the future 24h
#Do something
</code></pre>
| 0 | 2016-08-22T13:22:20Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,284 | <p>try the following (list comprehension)</p>
<pre><code>['%.5f'% t for t in tup]
</code></pre>
| 0 | 2016-08-22T13:01:34Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,291 | <p>Try this:</p>
<pre><code>class showlikethis(float):
def __repr__(self):
return "%0.5f" % self
tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
tup = map(showlikethis, tup)
print tup
</code></pre>
<p>You may like to re-quote your question, tuple dnt have precision.</p>
| -1 | 2016-08-22T13:01:50Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,297 | <p>You can print the floats with custom precision "like a tuple":</p>
<pre><code>>>> tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
>>> print('(' + ', '.join(('%.5f' % f) for f in tup) + ')')
(0.00390, 0.39024, -0.00585, -0.58537)
</code></pre>
| 0 | 2016-08-22T13:02:02Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,299 | <p>you can work on single item.
Try this:</p>
<pre><code>>>> tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
>>> for t in tup:
print ("%.5f" %(t))
0.00390
0.39024
-0.00585
-0.58537
</code></pre>
| 0 | 2016-08-22T13:02:13Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,318 | <p>You can iterate over tuple like this, and than you can print result
for python > 3</p>
<pre><code>["{:.5f}".format(i) for i in tup]
</code></pre>
<p>And for python 2.7</p>
<pre><code>['%.5f'% t for t in tup]
</code></pre>
| 0 | 2016-08-22T13:03:17Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,333 | <p>Possible workaround:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -
0.005853658536585366, -0.5853658536585366)
print [float("{0:.5f}".format(v)) for v in tup]
</code></pre>
| 0 | 2016-08-22T13:03:42Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,377 | <p>Most Pythonic way to achieve this is with <code>map()</code> and <code>lambda()</code> function.</p>
<pre><code>>>> map(lambda x: "%.5f" % x, tup)
['0.00390', '0.39024', '-0.00585', '-0.58537']
</code></pre>
| 0 | 2016-08-22T13:05:51Z | [
"python"
] |
Printing a tuple in Python with user-defined precision | 39,080,180 | <p>Following <a href="http://stackoverflow.com/questions/1455602/printing-tuple-with-string-formatting-in-python">Printing tuple with string formatting in Python</a>, I'd like to print the following tuple:</p>
<pre><code>tup = (0.0039024390243902443, 0.3902439024390244, -0.005853658536585366, -0.5853658536585366)
</code></pre>
<p>with only 5 digits of precision. How can I achieve this?</p>
<p>(I've tried <code>print("%.5f" % (tup,))</code> but I get a <code>TypeError: not all arguments converted during string formatting</code>).</p>
| -1 | 2016-08-22T12:57:00Z | 39,080,415 | <p>I figured out another workaround using Numpy:</p>
<pre><code>import numpy as np
np.set_printoptions(precision=5)
print(np.array(tup))
</code></pre>
<p>which yields the following output:</p>
<pre><code>[ 0.0039 0.39024 -0.00585 -0.58537]
</code></pre>
| 0 | 2016-08-22T13:07:44Z | [
"python"
] |
Google TaskQueue (pull) insert task by API | 39,080,283 | <p>I'm using the apiclient.discovery.build to lease tasks from a Google Pull Queue.. It's working fine.. But when I try to insert tasks in this Queue, I always get the same error:</p>
<pre><code>from apiclient.discovery import build
build = build('taskqueue', 'v1beta2', credentials=GoogleCredentials.get_application_default())
# Works
resp = build.tasks().lease(project=project,taskqueue=name,leaseSecs=lease_time,numTasks=num_tasks).execute()
# Error
payload = {'payloadBase64': 'c29tZSB0ZXN0'}
result = build.tasks().insert(project=project,taskqueue=name,body=payload)
</code></pre>
<blockquote>
<p>raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: https://www.googleapis.com/taskqueue/v1beta2/projects/project_test/taskqueues/pullqq/tasks?alt=json
returned "Backend Error"></p>
</blockquote>
<p>The authentication is correct because I can lease/delete tasks.. It might be some missing field in the payload?</p>
| 1 | 2016-08-22T13:01:30Z | 39,086,041 | <p>Well.. I changed the payload with the exactly payload from the leased tasks.. except for some fields (e.g., ID or leasing time) and added the '~s' to the project name in 'queueName'.</p>
<pre><code>resp = {u'kind': u'taskqueues#task', u'queueName': u'projects/s~project_name/taskqueues/pullqq', u'payloadBase64': u'c29tZSB0ZXN0'}
</code></pre>
<p>Now it worked.</p>
| 1 | 2016-08-22T18:07:44Z | [
"python",
"google-cloud-platform",
"task-queue",
"pull-queue"
] |
How to send a "method" via socket | 39,080,288 | <p>I'm trying to send packet made with scapy library via sockets in python 3..</p>
<p>That's the code:</p>
<pre><code>from scapy.all import *
import socket, threading
def loop():
global threads
for x in range(800):
sending().start()
class sending(threading.Thread):
def run(self):
self.connstart()
def connstart(self):
host = "ip" # this could be a proxy for example
port = port # the port of proxy
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send(self.spoofing)
def spoofing(self):
A = "ip" # spoofed source IP address
B = "ip" # destination IP address
C = RandShort() # source port
D = port # destination port
payload = "yada yada yada" # packet payload
spoofed_packet = IP(src=A, dst=B) / TCP(sport=C, dport=D) / payload
return spoofed_packet
loop()
</code></pre>
<p>Obviusly the script raises an error:</p>
<pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "spoof.py", line 12, in run
self.connstart()
File "spoof.py", line 19, in connstart
s.send(self.spoofing)
TypeError: a bytes-like object is required, not 'method'
</code></pre>
<p>For you is there a way to bypass this? So to send this packet unchanged?</p>
<p>What I wanna do is to connect to a proxy, and then send to proxy a tcp packet that contains the spoofed source ip, and the destination (different from proxy, it will be another site/server)</p>
| 0 | 2016-08-22T13:01:47Z | 39,080,461 | <p>The particular error you are seeing is because you aren't calling the <code>spoofing</code> method. It will go away if you add parentheses, like this:</p>
<pre><code>s.send(self.spoofing())
</code></pre>
<p>However you have rather more serious issues. <code>socket.socket(socket.AF_INET, socket.SOCK_STREAM)</code> returns a TCP socket, and the system will always insert the (correct) source and destination ports and addresses, as set by the <code>connect()</code> call.</p>
<p>If you want to do IP address spoofing you are going to have to find out how to use raw sockets and pass them directly to the data link driver - see <a href="https://gist.github.com/pklaus/856268" rel="nofollow">this example</a> for a few clues as to how to proceed (and pray you aren't working on Windows, which does everything in its power to prevent raw socket access).</p>
| 1 | 2016-08-22T13:09:21Z | [
"python",
"sockets",
"python-3.x",
"methods",
"byte"
] |
Why and how Mysql float works fine only with Java? | 39,080,345 | <p>I am trying to execute the below queries with different languages and different DB clients.</p>
<pre><code>SELECT MY_FLOAT_COL*1 FROM MY_TABLE;
SELECT MY_FLOAT_COL FROM MY_TABLE;
</code></pre>
<p>As per my observation the first query returns deviated results in Python and DB clients such as Heidi SQL, as reasoned by <a href="http://stackoverflow.com/a/39048275/2046462">this answer</a>. But a Java code or DB Client using JDBC connection(Squirrel) returns same results for the above mentioned query. What magic that Java does, how can I make that magic work with other languages/clients?</p>
<p>**Note:**This is actually a subset of my another <a href="http://stackoverflow.com/questions/39047486/whats-going-on-with-mysql-float">question</a>. The initial question diverts user from providing the complete answer. I want this question to be answered more specifically.</p>
| 1 | 2016-08-22T13:04:07Z | 39,080,441 | <p>As you mentioned on the <a href="http://stackoverflow.com/questions/39047486/whats-going-on-with-mysql-float/39048275#39048275">other question</a> multiply the <code>MY_FLOAT_COL*1</code> change the precision level because it is converted to <code>DOUBLE</code>.</p>
<p>When you retrieve data from a client, if you treat it as a number you convert it to the kind of type used by the client. So the conversion happens twice. </p>
<p>In java I suppose that you convert it back to <code>float</code>. If you retrieve it as a <code>BigDecimal</code> (or as a <code>String</code>) you can see the exact representation used internally by the database.</p>
<hr>
<p>Note that the difference between java and python is due to the internal representation of <code>float</code> that is not the same in java and in python. <code>float</code> in python should be similar to <code>double</code> in java.</p>
| 1 | 2016-08-22T13:08:33Z | [
"java",
"python",
"mysql",
"floating-point"
] |
X and Y or Z - ternary operator | 39,080,416 | <p>In Java or C we have <code><condition> ? X : Y</code>, which translates into Python as <code>X if <condition> else Y</code>.</p>
<p>But there's also this little trick: <code><condition> and X or Y</code>. </p>
<p>While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how <code>and</code> and <code>or</code> operators are able to produce correct result. What's the logic behind this?</p>
| 6 | 2016-08-22T13:07:46Z | 39,080,631 | <blockquote>
<p>While I understand that it's equivalent to the aforementioned ternary
operators</p>
</blockquote>
<p>This is incorrect:</p>
<pre><code>In [32]: True and 0 or 1
Out[32]: 1
In [33]: True and 2 or 1
Out[33]: 2
</code></pre>
<p>Why the first expression returns <code>1</code> (i.e. <code>Y</code>), while the condition is <code>True</code> and the "expected" answer is <code>0</code> (i.e. <code>X</code>)?</p>
<p>According to the docs:</p>
<blockquote>
<p>The expression x and y first evaluates x; if x is false, its value is
returned; otherwise, y is evaluated and the resulting value is
returned.</p>
<p>The expression x or y first evaluates x; if x is true, its value is
returned; otherwise, y is evaluated and the resulting value is
returned.</p>
</blockquote>
<p>So, <code>True and 0 or 1</code> evaluates the first argument of the <code>and</code> operator, which is <code>True</code>. Then it returns the second argument, which is <code>0</code>.</p>
<p>Since the <code>True and 0</code> returns false value, the <code>or</code> operator returns the second argument (i.e. <code>1</code>)</p>
| 8 | 2016-08-22T13:16:40Z | [
"python",
"syntax",
"ternary-operator"
] |
X and Y or Z - ternary operator | 39,080,416 | <p>In Java or C we have <code><condition> ? X : Y</code>, which translates into Python as <code>X if <condition> else Y</code>.</p>
<p>But there's also this little trick: <code><condition> and X or Y</code>. </p>
<p>While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how <code>and</code> and <code>or</code> operators are able to produce correct result. What's the logic behind this?</p>
| 6 | 2016-08-22T13:07:46Z | 39,080,644 | <p>I think that first it will check <code><condition></code> if it's <code>True</code> then it execute <code>X</code> and skip executing <code>Y</code> if <code>X</code> evaluate to True </p>
<p>But if <code><condition></code> fails then it skip executing <code>X</code> and execute OR statement and execute <code>Y</code>. </p>
| 1 | 2016-08-22T13:17:00Z | [
"python",
"syntax",
"ternary-operator"
] |
X and Y or Z - ternary operator | 39,080,416 | <p>In Java or C we have <code><condition> ? X : Y</code>, which translates into Python as <code>X if <condition> else Y</code>.</p>
<p>But there's also this little trick: <code><condition> and X or Y</code>. </p>
<p>While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how <code>and</code> and <code>or</code> operators are able to produce correct result. What's the logic behind this?</p>
| 6 | 2016-08-22T13:07:46Z | 39,080,648 | <p>If we examine <code>A and B</code>, <code>B</code> will be evaluated only if <code>A</code> is <code>True</code>.</p>
<p>Like so, in <code>A or B</code>, <code>B</code> will only be evaluated in case <code>A</code> is <code>False</code>.</p>
<p>Therefore, <code><condition> and X or Y</code> will return <code>X</code> if <code><condition></code> is <code>True</code> and <code>Y</code> if <code><condition></code>is <code>False</code>. This is a result of short-circuiting and the fact that <code>and</code> has precedence over <code>or</code>.</p>
<p>However, you should be careful with this approach. If <code>X</code> itself is evaluated to <code>False</code> (eg an empty string, list or <code>0</code>), <code><condition> and X or Y</code> will return <code>Y</code> even if <code><condition></code> is <code>True</code>:</p>
<pre><code>X = 1
Y = 2
print(True and X or Y)
>> 1
</code></pre>
<p>compared to:</p>
<pre><code>X = 0 # or '' or []
Y = 2
print(True and X or Y)
>> 2
</code></pre>
| 3 | 2016-08-22T13:17:35Z | [
"python",
"syntax",
"ternary-operator"
] |
X and Y or Z - ternary operator | 39,080,416 | <p>In Java or C we have <code><condition> ? X : Y</code>, which translates into Python as <code>X if <condition> else Y</code>.</p>
<p>But there's also this little trick: <code><condition> and X or Y</code>. </p>
<p>While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how <code>and</code> and <code>or</code> operators are able to produce correct result. What's the logic behind this?</p>
| 6 | 2016-08-22T13:07:46Z | 39,080,650 | <p>The trick is how python boolean operators <a href="https://docs.python.org/2/reference/expressions.html#boolean-operations" rel="nofollow">work</a></p>
<blockquote>
<p>The expression <code>x and y</code> first evaluates <code>x</code>; if <code>x</code> is false, its value is returned; otherwise, <code>y</code> is evaluated and the resulting value is returned.</p>
<p>The expression <code>x or y</code> first evaluates <code>x</code>; if <code>x</code> is true, its value is returned; otherwise, <code>y</code> is evaluated and the resulting value is returned.</p>
</blockquote>
| 0 | 2016-08-22T13:17:37Z | [
"python",
"syntax",
"ternary-operator"
] |
X and Y or Z - ternary operator | 39,080,416 | <p>In Java or C we have <code><condition> ? X : Y</code>, which translates into Python as <code>X if <condition> else Y</code>.</p>
<p>But there's also this little trick: <code><condition> and X or Y</code>. </p>
<p>While I understand that it's equivalent to the aforementioned ternary operators, I find it difficult to grasp how <code>and</code> and <code>or</code> operators are able to produce correct result. What's the logic behind this?</p>
| 6 | 2016-08-22T13:07:46Z | 39,080,730 | <p>This makes use of the fact that precedence of <code>and</code> is higher than <code>or</code>.</p>
<p>So <code><condition> and X or Y</code> is basically <code>(<condition> and X) or Y</code>. If <code><condition> and X</code> evaluates to <code>True</code>, there is no need to evaluate further, as anything <code>True or Y</code> is <code>True</code>. If <code><condition> and X</code> evaluates to <code>False</code>, then Y is returned as <code>False or Y</code> is basically <code>Y</code>.</p>
| 0 | 2016-08-22T13:21:46Z | [
"python",
"syntax",
"ternary-operator"
] |
Beginner to python facing 'builtin_function_or_method' object has no attribute error | 39,080,520 | <p>I am trying to run a python code and this seems to cause error. Please help me </p>
<pre><code>def randomPlace(b,lis):
pos = []
for i in lis:
if available(b,i):
pos.append(i)
if len.pos() != 0:
return random.choice(pos)
else:
return None
</code></pre>
<p><code>b</code> is a list with 10 characters and <code>lis</code> is a list with 4 integers
Error is: </p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "D:\TestsPython\TicTacToe.py", line 65, in randomPlace
if len.pos() != 0: AttributeError: 'builtin_function_or_method' object has no attribute 'pos'</p>
</blockquote>
| 1 | 2016-08-22T13:11:59Z | 39,080,669 | <p>The expression <code>len.pos()</code> asks the interpreter to locate <code>len</code> (the standard built-in function), look up its <code>pos</code> attribute (clue: it doesn't have one) and that call that looked up result. You actually want to apply the <code>len</code> function to the value of <code>pos</code>, and should therefore code</p>
<pre><code>if len(pos) != 0:
</code></pre>
<p>Since len always returns an integer result, you could abbreviate this to</p>
<pre><code>if len(pos):
</code></pre>
<p>Remembering, however. that empty containers evvaluate False in a Boolen context and non-empty containers evaluate True, it's usual to shorten this to</p>
<pre><code>if pos:
</code></pre>
| 1 | 2016-08-22T13:18:39Z | [
"python",
"artificial-intelligence"
] |
Beginner to python facing 'builtin_function_or_method' object has no attribute error | 39,080,520 | <p>I am trying to run a python code and this seems to cause error. Please help me </p>
<pre><code>def randomPlace(b,lis):
pos = []
for i in lis:
if available(b,i):
pos.append(i)
if len.pos() != 0:
return random.choice(pos)
else:
return None
</code></pre>
<p><code>b</code> is a list with 10 characters and <code>lis</code> is a list with 4 integers
Error is: </p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "D:\TestsPython\TicTacToe.py", line 65, in randomPlace
if len.pos() != 0: AttributeError: 'builtin_function_or_method' object has no attribute 'pos'</p>
</blockquote>
| 1 | 2016-08-22T13:11:59Z | 39,080,685 | <p>Use <code>len(pos)</code>. </p>
<p>In order to find the size of list in python, the syntax is <code>len(your_list)</code>.</p>
<p>In your case <code>len</code> function is not even required. You may simply do:</p>
<pre><code>if pos:
return random.choice(pos)
else:
return None
</code></pre>
<p>Because if your list will have any element, <code>if</code> will consider it as <code>True</code>. In case of empty list, it will be treated as <code>False</code>.</p>
| 1 | 2016-08-22T13:19:28Z | [
"python",
"artificial-intelligence"
] |
pandas get_level_values for multiple columns | 39,080,555 | <p>Is there a way to get the result of <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Index.get_level_values.html" rel="nofollow"><code>get_level_values</code></a> for more than one column?</p>
<p>Given the following <code>DataFrame</code>:</p>
<pre><code> d
a b c
1 4 10 16
11 17
5 12 18
2 5 13 19
6 14 20
3 7 15 21
</code></pre>
<p>I wish to get the values (<em>i.e.</em> list of tuples) of levels <code>a</code> and <code>c</code>:</p>
<pre><code>[(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre>
<p><strong>Notes:</strong></p>
<ul>
<li><p>It is impossible to give <code>get_level_values</code> more than one level (<em>e.g.</em> <code>df.index.get_level_values(['a','c']</code>)</p></li>
<li><p>There's a workaround in which one could use <code>get_level_values</code> over each desired column and <code>zip</code> them together:</p></li>
</ul>
<p>For example:</p>
<pre><code>a_list = df.index.get_level_values('a').values
c_list = df.index.get_level_values('c').values
print([i for i in zip(a_list,c_list)])
[(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre>
<p>but it get cumbersome as the number of columns grow.</p>
<ul>
<li>The code to build the example <code>DataFrame</code>:</li>
</ul>
<p><code>df = pd.DataFrame({'a':[1,1,1,2,2,3],'b':[4,4,5,5,6,7,],'c':[10,11,12,13,14,15], 'd':[16,17,18,19,20,21]}).set_index(['a','b','c'])</code></p>
| 1 | 2016-08-22T13:13:08Z | 39,081,256 | <p>This is less cumbersome insofar as you can pass the list of index names you want to select:</p>
<pre><code>df.reset_index()[['a', 'c']].to_dict(orient='split')['data']
</code></pre>
<p>I have not found a way of selecting levels <code>'a'</code> and <code>'b'</code> from the index object directly, hence the use of <code>reset_index</code>.</p>
<p>Note that <code>to_dict</code> returns a list of lists and not tuples:</p>
<pre><code>[[1, 10], [1, 11], [1, 12], [2, 13], [2, 14], [3, 15]]
</code></pre>
| 0 | 2016-08-22T13:46:07Z | [
"python",
"python-3.x",
"pandas",
"dataframe",
"multi-index"
] |
pandas get_level_values for multiple columns | 39,080,555 | <p>Is there a way to get the result of <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Index.get_level_values.html" rel="nofollow"><code>get_level_values</code></a> for more than one column?</p>
<p>Given the following <code>DataFrame</code>:</p>
<pre><code> d
a b c
1 4 10 16
11 17
5 12 18
2 5 13 19
6 14 20
3 7 15 21
</code></pre>
<p>I wish to get the values (<em>i.e.</em> list of tuples) of levels <code>a</code> and <code>c</code>:</p>
<pre><code>[(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre>
<p><strong>Notes:</strong></p>
<ul>
<li><p>It is impossible to give <code>get_level_values</code> more than one level (<em>e.g.</em> <code>df.index.get_level_values(['a','c']</code>)</p></li>
<li><p>There's a workaround in which one could use <code>get_level_values</code> over each desired column and <code>zip</code> them together:</p></li>
</ul>
<p>For example:</p>
<pre><code>a_list = df.index.get_level_values('a').values
c_list = df.index.get_level_values('c').values
print([i for i in zip(a_list,c_list)])
[(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre>
<p>but it get cumbersome as the number of columns grow.</p>
<ul>
<li>The code to build the example <code>DataFrame</code>:</li>
</ul>
<p><code>df = pd.DataFrame({'a':[1,1,1,2,2,3],'b':[4,4,5,5,6,7,],'c':[10,11,12,13,14,15], 'd':[16,17,18,19,20,21]}).set_index(['a','b','c'])</code></p>
| 1 | 2016-08-22T13:13:08Z | 39,081,316 | <p>The <code>.tolist()</code> method of a <code>MultiIndex</code> gives a list of tuples for all the levels in the <code>MultiIndex</code>. For example, with your example <code>DataFrame</code>,</p>
<pre><code>df.index.tolist()
# => [(1, 4, 10), (1, 4, 11), (1, 5, 12), (2, 5, 13), (2, 6, 14), (3, 7, 15)]
</code></pre>
<p>So here are two ideas:</p>
<ol>
<li><p>Get the list of tuples from the original <code>MultiIndex</code> and filter the result.</p>
<pre><code>[(a, c) for a, b, c in df.index.tolist()]
# => [(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre>
<p>The disadvantage of this simple method is that you have you manually specify the order of the levels you want. You can leverage <code>itertools.compress</code> to select them by name instead.</p>
<pre><code>from itertools import compress
mask = [1 if name in ['a', 'c'] else 0 for name in df.index.names]
[tuple(compress(t, mask)) for t in df.index.tolist()]
# => [(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre></li>
<li><p>Create a MultiIndex that has exactly the levels you want and call <code>.tolist()</code> on it.</p>
<pre><code>df.index.droplevel('b').tolist()
# => [(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre>
<p>If you would prefer to name the levels you want to keep — instead of those that you want to drop — you could do something like</p>
<pre><code>df.index.droplevel([level for level in df.index.names
if not level in ['a', 'c']]).tolist()
# => [(1, 10), (1, 11), (1, 12), (2, 13), (2, 14), (3, 15)]
</code></pre></li>
</ol>
| 2 | 2016-08-22T13:49:16Z | [
"python",
"python-3.x",
"pandas",
"dataframe",
"multi-index"
] |
django rest framework filter on related model | 39,080,701 | <p>I have a model countries and a model of persons on holiday in a certain year to a country. I want to have an Api of Countries in which I can filter only the countries in which certain persons had a holiday in a certain year.</p>
<p>Models.py</p>
<pre><code>from django.db import models
class Country(models.Model):
id = models.CharField(max_length=50,primary_key=True)
country = models.CharField(max_length=255,null=True)
class PersonYear(models.Model):
id = models.IntegerField(primary_key=True)
person = models.CharField(max_length=255,null=True)
year = models.IntegerField(null=True)
country = models.ForeignKey(Country, related_name='personyears')
</code></pre>
<p>Contents of model Country</p>
<pre><code>id|country
1|France
2|Italy
3|Spain
</code></pre>
<p>Contents of model PersonYear</p>
<pre><code>id|person|year|country_id
1|John|2014|1
2|John|2015|1
3|Mary|2014|1
</code></pre>
<p>serializers.py</p>
<pre><code>from apiapp.models import PersonYear,Country
from rest_framework import serializers
class PersonyearSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = PersonYear
fields = ('id','person','year')
class CountrySerializer(serializers.HyperlinkedModelSerializer):
personyears = PersonyearSerializer(many=True)
class Meta:
model = Country
fields = ('id','country','personyears')
</code></pre>
<p>Views.py</p>
<pre><code>import rest_framework_filters as filters
class PersonyearFilter(filters.FilterSet):
id = filters.AllLookupsFilter(name='id')
person = filters.AllLookupsFilter(name='person')
year = filters.AllLookupsFilter(name='year')
class Meta:
model = PersonYear
class PersonyearViewSet(viewsets.ModelViewSet):
queryset = PersonYear.objects.all()
serializer_class = PersonyearSerializer
filter_backends = (filters.backends.DjangoFilterBackend,)
filter_fields = ['id','person','year']
filter_class = PersonyearFilter
class CountryFilter(filters.FilterSet):
id = filters.AllLookupsFilter(name='id')
nm = filters.AllLookupsFilter(name='country')
personyears = filters.RelatedFilter(PersonyearFilter,name='personyears')
class Meta:
model = Country
class CountryViewSet(viewsets.ModelViewSet):
queryset = Country.objects.all()
serializer_class = CountrySerializer
filter_backends = (filters.backends.DjangoFilterBackend,)
filter_fields = ['id','country','personyears']
filter_class = CountryFilter
</code></pre>
<p>I want a selection of all countries in which John had a holiday in 2014:</p>
<p><a href="http://localhost:8000/country/?personyears__person=John&personyears__year=2014" rel="nofollow">http://localhost:8000/country/?personyears__person=John&personyears__year=2014</a></p>
<p>I expected to get one record:</p>
<pre><code>{"id": "1", "country": "France",
"personyears": [
{ "id": 1,
"person": "John"
"year": 2014
}
]
}
</code></pre>
<p>But instead I got this record repeated 4 times. Can you explain what I am doing wrong and how to get what I want.</p>
<p>Update1: </p>
<p>I don't want only a special solution for John in 2014. I want a solution for all instances of Anyperson in AnyYear. For example I also want the following filter to give me one result:
<a href="http://localhost:8001/api/country/?personyears__person=Mary&personyears__year=2014" rel="nofollow">http://localhost:8001/api/country/?personyears__person=Mary&personyears__year=2014</a></p>
<p>Update2:</p>
<p>I tried replacing:</p>
<pre><code> queryset = Country.objects.all()
</code></pre>
<p>by:</p>
<pre><code> queryset = Country.objects.all().distinct()
</code></pre>
<p>It helped to get the expected (1 record) outcome of:</p>
<p>localhost:8000/country/?personyears__person=John&personyears__year=2014</p>
<p>But I now get unexpected/unwanted result for person='Mary' / year='2015'</p>
<p>localhost:8000/country/?personyears__person=Mary&personyears__year=2015</p>
<p>I expected no result (Mary did not go on holiday in 2015 to any country). But I got</p>
<pre><code> {"id": "1","country": "France",
"personyears": [
{
"id": 1,
"person": "John",
"year": 2014
},
{
"id": 2,
"person": "John",
"year": 2015
},
{
"id": 3,
"person": "Mary",
"year": 2014
}
]
}
</code></pre>
| 0 | 2016-08-22T13:20:35Z | 39,080,909 | <p>Your <code>queryset</code> should be:</p>
<pre><code>Country.objects.filter(personyears__year='your_year', person='your_person').values_list('country', flat=True)
</code></pre>
<p>This will return list of all the countries.</p>
<p><strong>Note:</strong> This will filter based on the name of person which may be same for different user. There should be one more model as <code>Person</code> whose <code>foreign key</code> should be mapped to <code>PersonYear</code>. </p>
| 1 | 2016-08-22T13:28:41Z | [
"python",
"django",
"django-rest-framework"
] |
Compiling C extensions for Python3 on CentOS | 39,080,726 | <p>I need to write python interface wrappers around some C functions, which requires the Python.h header file. I want to make sure I get the correct Python.h.</p>
<p>I'm working in CentOS 6.6 and decided to use Python 3.4 so as to avoid using the OS's python distribution. Online sources suggest getting the correct Python.h from python34-devel , but this package is not available for Centos 6, even through the EPEL repository. Also, I was forced to compile and <a href="http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos/" rel="nofollow">install python from source</a>, and this <a href="http://serverfault.com/questions/498393/python-devel-for-python-2-7-on-centos-6-4">thread</a> seems to suggest python34-devel might not even be helpful in this case.</p>
<p>How do I find the right Python.h so that I can compile C libraries for my Python configuration?</p>
<p>thanks in advance</p>
| 2 | 2016-08-22T12:38:42Z | 39,080,873 | <p>If you do things properly and make a complete installable package for your wrapper, with <code>setup.py</code>, then you do not even need to know the location of the includes; as your python3.4 executable would know their location.</p>
<p>Minimlly your <code>setup.py</code> could be something like </p>
<pre><code>from setuptools import setup, Extension
c_ext = Extension('mypackage._mymodule.c',
sources = ['mypackage/_mymodule.c'],
)
setup(name='mypackage',
version='1.0',
description='...',
long_description='...',
packages=['mypackage']
ext_modules=[c_ext])
</code></pre>
<p>Then you would store your C extension sources in <code>mypackage/_mymodule.c</code>, and have <code>mypackage/__init__.py</code> or some python modules write nice wrappers to the C extension itself (as some things are just to tedious to do it in C); minimally this would do</p>
<pre><code>from mypackage._mymodule import *
</code></pre>
<p>Now to install this extension, you'd just execute <code>python3.4 setup.py install</code> and it would automatically compile your extension using whatever include directory <em>and</em> compile-time options are appropriate for <em>that</em> installation, and install it into the <code>site-packages</code> directory.</p>
| 1 | 2016-08-22T13:27:24Z | [
"centos6",
"python"
] |
How to selecting column arrays from matrix in python | 39,080,931 | <p>I am studying python on my own and I want to create a matrix; by reading in the internet I found many ways to define a matrix and I decided to select these 2 methodologies:</p>
<pre><code>import numpy as np
# Method 1
w, h = 5, 5
M = [[0 for x in range(w)] for y in range(h)]
M[1][2] = 100 # ok
m = M[:,2] # <----- Why is it not possible?
# Method 2
A = np.zeros((5, 5))
A[1][2] = 100 # ok
a = A[:,2] # <----- Why is it possible?
</code></pre>
<p>In both cases I am able to construct the matrix but the problem arises when I try to define an array by selecting one column of the matrix itself. While in the second case I am able to define <code>a</code> I cannot do the same thing for <code>m</code>; what am I doing wrong?</p>
<p>What should I do in order to extract a column out of M?</p>
<p>I believe the reason resides in the fact that M and A are not the same type of variable but honestly I don't understand the difference and therefore I don't know how to proceed. </p>
<pre><code><class 'list'> # M
<class 'numpy.ndarray'> # A
</code></pre>
| 1 | 2016-08-22T13:29:53Z | 39,081,242 | <p><code>A</code> and <code>M</code> are very different objects, as you have also discovered yourself. They might store the same information, but they do it differently and allow you to manipulate it in different ways. They have different interfaces, which means you have to interact with them differently. This affects the operations that you are allowed to perform on them.</p>
<p><code>M</code> is a list of lists. It contains several elements, each of which is a list of integers. <code>M</code> doesn't <em>know</em> that it is a matrix, it only <em>knows</em> that it contains a fixed number of elements. You can get individual lists out with <code>M[i]</code>, but then to get the actual matrix element you have to work with the list you got. Note, that you can do <code>M.append('abc')</code>, after which <code>M</code> will stop being a matrix. To actually use <code>M</code> as a matrix you need to resort to trics, like using <code>col = [row[i] for row in M]</code> to get columns, and if you want e.g. to compute the determinant, it is going to be rather painful.</p>
<p><code>A</code> is a matrix and so it can inspect its whole contents, and you can get any element you want out of it, including a single column. It is impossible to append one element to it. You can use the whole NumPy library to perform operations on it as a matrix, such as computing determinants with <code>np.det(A)</code>.</p>
| 2 | 2016-08-22T13:45:24Z | [
"python",
"arrays",
"numpy",
"matrix",
"multidimensional-array"
] |
Get specific letters from a string | 39,080,944 | <p>My input is a variable is a string:</p>
<pre><code> entity = "SmartSys_1_13_PP"
</code></pre>
<p>I need to extra the last two letters of the name of the entity. The results would be PP in this case.</p>
<p>One possible solution is to delimit the name using the _ and then consider the last field as the answer. Is there a one line which can directly parse the last two letters?</p>
| 1 | 2016-08-22T13:30:36Z | 39,081,015 | <p>You my use string slicing. For your case, it will be: <code>name[-2:]</code></p>
<p>But do not forget to add check, because it will return the string as it is in case length of string is less than 2. Your code should be like:</p>
<pre><code>if len(name) > 2:
print name[-2:]
else:
# Whatever you want to do
### For example:
>>> '12345'[-2:]
'45'
>>> '1'[-2:]
'1'
>>> ''[-2:]
''
</code></pre>
| 3 | 2016-08-22T13:33:57Z | [
"python",
"delimiter"
] |
Get specific letters from a string | 39,080,944 | <p>My input is a variable is a string:</p>
<pre><code> entity = "SmartSys_1_13_PP"
</code></pre>
<p>I need to extra the last two letters of the name of the entity. The results would be PP in this case.</p>
<p>One possible solution is to delimit the name using the _ and then consider the last field as the answer. Is there a one line which can directly parse the last two letters?</p>
| 1 | 2016-08-22T13:30:36Z | 39,081,033 | <p><a href="https://docs.python.org/2/tutorial/introduction.html#strings" rel="nofollow">Python strings</a> are sequences of individual characters. You can slice them the same way you would slice a Python list.</p>
<pre><code>entity = 'SmartSys_1_13_PP'
print(entity[-2:])
PP
</code></pre>
| 1 | 2016-08-22T13:34:36Z | [
"python",
"delimiter"
] |
Get specific letters from a string | 39,080,944 | <p>My input is a variable is a string:</p>
<pre><code> entity = "SmartSys_1_13_PP"
</code></pre>
<p>I need to extra the last two letters of the name of the entity. The results would be PP in this case.</p>
<p>One possible solution is to delimit the name using the _ and then consider the last field as the answer. Is there a one line which can directly parse the last two letters?</p>
| 1 | 2016-08-22T13:30:36Z | 39,083,951 | <p>As you suggest, you can split the string on '_' then get the last substring:</p>
<pre><code>>>> entity = "SmartSys_1_13_PP"
>>> entity.split('_')[-1]
'PP'
</code></pre>
| 0 | 2016-08-22T15:57:53Z | [
"python",
"delimiter"
] |
Selecting multiindex entry using labels | 39,080,995 | <p>I do not find any explanation of how to select Pandas multiindex objects by labels. Here is an example from the documentation
(<a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/advanced.html</a>)</p>
<pre><code>In [1]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
...: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
...:
In [2]: tuples = list(zip(*arrays))
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
In [7]: s = pd.Series(np.random.randn(8), index=index)
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
In [5]: s.index
Out[5]:
MultiIndex(levels=[[u'bar', u'baz', u'foo', u'qux'], [u'one', u'two']],
labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
names=[u'first', u'second'])
</code></pre>
<p>We can see from this example that the multiindex contains an entry called 'labels', consisting of a sequence of 'coordinates', indicating exactly each entry of the multiindex. My questions is: How can I call an entry specifying exactly these coordinates. So for instance, what I want is something like</p>
<pre><code>s.loc[0,0]
</code></pre>
<p>which should return 0.469112,</p>
<pre><code>s.loc[0,1]
</code></pre>
<p>returns -0.282863 and so on.
I cannot find this mentioned anywhere in the documentation. </p>
| 1 | 2016-08-22T13:32:54Z | 39,083,993 | <p>You can use <code>unstack</code> and <code>ix</code> or <code>iloc</code> together to achieve this.</p>
<pre><code>s.unstack().ix[0, 0] # or s.unstack().iloc[0, 0]
</code></pre>
<p>would give <code>0.469112</code></p>
| 0 | 2016-08-22T16:00:57Z | [
"python",
"pandas"
] |
How do we use "left outer join" for large size pandas dataframes (larger than 5~20GB)? | 39,081,044 | <p>I try to merge two large size dataframes.</p>
<p>One dataframe (patent_id) has 5,271,459 of rows and the others have more than 10,000 of columns.</p>
<p>To combine these two big dataframes, I use "merge" and separate right dataframe into chunks. (similar with <a href="http://stackoverflow.com/questions/32635169/memoryerror-with-python-pandas-and-large-left-outer-joins">MemoryError with python/pandas and large left outer joins</a>)</p>
<p>But it still meets a memory error. Is there any space for improvements?</p>
<p>Should I use "concat" rather than "merge"?</p>
<p>Or should I use "csv" rather than "pandas" to manage this issue like (<a href="http://stackoverflow.com/questions/32635169/memoryerror-with-python-pandas-and-large-left-outer-joins">MemoryError with python/pandas and large left outer joins</a>)?</p>
<pre><code>for key in column_name:
print key
newname = '{}_post.csv'.format(key)
patent_rotated_chunks = pd.read_csv(newname, iterator=True, chunksize=10000)
temp = patent_id.copy(deep=True)
for patent_rotated in patent_rotated_chunks:
temp = pd.merge(temp,patent_rotated,on = ["patent_id_0"],how = 'left')
temp.to_csv('{}_sorted.csv'.format(key))
del temp
</code></pre>
| 0 | 2016-08-22T13:35:08Z | 39,081,896 | <p>Below approach works for me which is from <a href="http://stackoverflow.com/questions/32635169/memoryerror-with-python-pandas-and-large-left-outer-joins">MemoryError with python/pandas and large left outer joins</a></p>
<pre><code>import csv
def gen_chunks(reader, chunksize=1000000):
chunk = []
for i, line in enumerate(reader):
if (i % chunksize == 0 and i > 0):
yield chunk
del chunk[:]
chunk.append(line)
yield chunk
for key in column_name:
idata = open("patent_id.csv","rU")
newcsv = '{}_post.csv'.format(key)
odata = open(newcsv,"rU")
leftdata = csv.reader(idata)
next(leftdata)
rightdata = csv.reader(odata)
index = next(rightdata).index("patent_id_0")
odata.seek(0)
columns = ["project_id"] + next(rightdata)
rd = dict([(rows[index], rows) for rows in rightdata])
print rd.keys()[0]
print rd.values()[0]
with open('{}_sorted.csv'.format(key), "wb") as csvfile:
output = csv.writer(csvfile)
output.writerows(columns)
for chunk in gen_chunks(leftdata):
print key, " New Chunk!"
ld = [[pid[1]]+ rd.get(pid[1], ["NaN"]) for pid in chunk]
output.writerows(ld)
</code></pre>
| 0 | 2016-08-22T14:16:41Z | [
"python",
"database",
"pandas",
"join",
"merge"
] |
How to increment or decrement a recordset? odoo9 | 39,081,083 | <p>My record set is <code>items = product.pricelist.item(4,3,2)</code></p>
<p><code>qty_in_product_uom</code> is passed by the user( <code>4= min_quantity 500, 3= 250, 2= 100</code> )</p>
<pre><code>for rule in items:
if rule.min_quantity and qty_in_product_uom < rule.min_quantity:
print inside rule.id
</code></pre>
<p>now i want such a function that selects the next id if the condition is true</p>
<p>eg. if user passes <code>qty_in_product_uom</code> say 110 then my above if condition will give <code>id=2</code> in this case i want <code>id=3</code></p>
<p>if <code>id=3</code> then answer will be <code>id=4</code></p>
<p>and if <code>id=4</code> select <code>id=4</code></p>
<p>As recordsets are immutable how to achieve this.</p>
| 0 | 2016-08-22T13:37:06Z | 39,089,475 | <p>The recordset(<code>BaseModel</code> class) is an iterable Python object. That means you can use the object in almost the same way you use a list. For example:</p>
<p><code>item = product.pricelist.item(4,3,2)[0]</code></p>
<p><code>print item # This will print product.pricelist.item(4,)</code></p>
<p>If you want to get a record from the recordset using its id you can use:</p>
<p><code>item = product.pricelist.item(4,3,2).index(3)</code></p>
<p><code>print item # This will print product.pricelist.item(3,)</code></p>
| 1 | 2016-08-22T22:08:33Z | [
"python",
"api",
"orm",
"openerp",
"odoo-9"
] |
Django: how to switch to another view with the click of a button? | 39,081,115 | <p>I am building a django app using django 1.9.5 and Python 2.7.11. My project (which I named djan_web) directory looks like the following:</p>
<pre><code>djan_web\
manage.py
djan_frontend\
views.py
templates\
djan_frontend\
upload.html
djan_homepage\
index.html
djan_web/
urls.py
</code></pre>
<p>I am able to load <code>index.html</code>, which is my homepage here. In <code>index.html</code>, I have a button, which when clicked, I would like to load <code>upload.html</code>. Here are my the relevant files:</p>
<p><code>index.html:</code></p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title >Django Wed Project</title>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
</head>
<body>
<div class="header">
<p> some text </p>
<div class="container" style="width:95%">
<center>
<div class="col-md-1 center-block text-center" style="font-size: xx-large">
<a href="/upload-file" class="dark" style="cursor: pointer;">
<span class="glyphicon glyphicon-upload"></span>Let's get started!</a>
</div>
</center>
</div>
</div>
</body>
</html>
</code></pre>
<p><code>urls.py:</code></p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from djan_frontend import views
from django.views.generic import TemplateView
urlpatterns = [
url(r'^', views.homepage, name="homepage"),
url(r'^admin/', admin.site.urls),
url(r'^upload-file/$', views.upload, name='upload')
]
</code></pre>
<p><code>views.py:</code></p>
<pre><code>from django.http import HttpResponseRedirect
from django.shortcuts import render
from django.core.urlresolvers import reverse
from .upload_file import UploadFileForm
from .models import Document
from .tasks import process_csv
def homepage(request):
return render(request, 'djan_frontend/djan_homepage/index.html')
def upload(request):
return render(request, 'djan_frontend/upload.html')
</code></pre>
<p>I am able to load <code>index.html</code>, but when I click the button <code>Let's get started</code> nothing happens except for the URL, where <code>upload-file/</code> gets appended to it.</p>
<p>I also tried using TemplateView, so I changed <code>href</code> part in the button definition to <code>href="{% url 'upload' %}"</code> and changed the third url pattern in <code>urls.py</code> to </p>
<pre><code>url(r'^upload/$', TemplateView.as_view(template_name='upload.html'), name='upload')
</code></pre>
<p>and deleted the <code>upload</code> function in <code>views.py</code> but I couldn't get it work. Any help is greatly appreciated!</p>
| 0 | 2016-08-22T13:38:58Z | 39,081,413 | <p>You need to fix your URLs in <code>urls.py</code>. Try:</p>
<pre><code>urlpatterns = [
url(r'^$', views.homepage, name="homepage"),
url(r'^admin/', admin.site.urls),
url(r'^upload-file/$', views.upload, name='upload')
]
</code></pre>
<p>Notice The <code>$</code> at the end of the regexs. You need to add end anchors to your url patterns, particularly on the first url pattern. Without the ending <code>$</code> it matches on anything, so any request coming in is being directed to your homepage.</p>
| 0 | 2016-08-22T13:54:12Z | [
"python",
"html",
"django",
"django-templates",
"django-views"
] |
BFS, wanting to find the longest path between nodes, reducing the findchildren-method | 39,081,149 | <p>I've opened another thread with exactly this subject, but I think I posted too much code and I didn't really know where my problem was, now I think I have a better idea but still in need of help. What we have is a text-file with 3 letter words, only 3 letter words. I also have a Word (node) and queue-class. My findchildren-method is supposed to find, for one single word, all the children to this word, let's say I enter "fan", then I'm supposed to get something like ["kan","man"....etc]. The code is currently looking like this:</p>
<pre><code>def findchildren(mangd,parent):
children=set()
lparent=list(parent)
mangd.remove(parent)
for word in mangd:
letters=list(word)
count=0
i=0
for a in letters:
if a==lparent[i]:
count+=1
i+=1
else:
i+=1
if count==2:
if word not in children:
children.add(word)
if i>2:
break
return children
</code></pre>
<p>The code above, for findchildren is currently working fine, but, when I use it for my other methods (to implement the bfs-search) everything will take way too long time, therefore, I would like to gather all the children in a dictionary containing lists with the children. It feels like this assignment is out of my league right now, but is this possible to do? I tried to create something like this:</p>
<pre><code>def findchildren2(mangd):
children=[]
for word in mangd:
lparent=list(word)
mangd.remove(word)
letters=list(word)
count=0
i=0
for a in letters:
if a==lparent[i]:
count+=1
i+=1
else:
i+=1
if count==2:
if word not in children:
children.append(word)
if i>2:
break
return children
</code></pre>
<p>I suppose my last try is simply garbage, I get the errormessage " Set changed size using iteration".</p>
<pre><code>def findchildren3(mangd,parent):
children=defaultdict(list)
lparent=list(parent)
mangd.remove(parent)
for word in mangd:
letters=list(word)
count=0
i=0
for a in letters:
if a==lparent[i]:
count+=1
i+=1
else:
i+=1
if count==2:
children[0].append(word)
if i>2:
break
return children
</code></pre>
| 4 | 2016-08-22T13:40:54Z | 39,082,167 | <p>There are more efficient ways to do this (the below is O(n^2) so not great) but here is a simple algorithm to get you started:</p>
<pre><code>import itertools
from collections import defaultdict
words = ['abc', 'def', 'adf', 'adc', 'acf', 'dec']
bigrams = {k: {''.join(x) for x in itertools.permutations(k, 2)} for k in words}
result = defaultdict(list)
for k, v in bigrams.iteritems():
for word in words:
if k == word:
continue
if len(bigrams[k] & bigrams[word]):
result[k].append(word)
print result
</code></pre>
<p>Produces:</p>
<pre><code>defaultdict(<type 'list'>, {'abc': ['adc', 'acf'], 'acf': ['abc', 'adf', 'adc'], 'adf': ['def', 'adc', 'acf'], 'adc': ['abc', 'adf', 'acf', 'dec'], 'dec': ['def', 'adc'], 'def': ['adf', 'dec']})
</code></pre>
<hr>
<p>Here is a more efficient version with some commentary:</p>
<pre><code>import itertools
from collections import defaultdict
words = ['abc', 'def', 'adf', 'adc', 'acf', 'dec']
# Build a map of {word: {bigrams}} i.e. {'abc': {'ab', 'ba', 'bc', 'cb', 'ac', 'ca'}}
bigramMap = {k: {''.join(x) for x in itertools.permutations(k, 2)} for k in words}
# 'Invert' the map so it is {bigram: {words}} i.e. {'ab': {'abc', 'bad'}, 'bc': {...}}
wordMap = defaultdict(set)
for word, bigramSet in bigramMap.iteritems():
for bigram in bigramSet:
wordMap[bigram].add(word)
# Create a final map of {word: {words}} i.e. {'abc': {'abc', 'bad'}, 'bad': {'abc', 'bad'}}
result = defaultdict(set)
for k, v in wordMap.iteritems():
for word in v:
result[word] |= v ^ {word}
# Display all 'childen' of each word from the original list
for word in words:
print "The 'children' of word {} are {}".format(word, result[word])
</code></pre>
<p>Produces:</p>
<pre><code>The 'children' of word abc are set(['acf', 'adc'])
The 'children' of word def are set(['adf', 'dec'])
The 'children' of word adf are set(['adc', 'def', 'acf'])
The 'children' of word adc are set(['adf', 'abc', 'dec', 'acf'])
The 'children' of word acf are set(['adf', 'abc', 'adc'])
The 'children' of word dec are set(['adc', 'def'])
</code></pre>
| 0 | 2016-08-22T14:29:41Z | [
"python",
"python-3.x",
"bfs"
] |
BFS, wanting to find the longest path between nodes, reducing the findchildren-method | 39,081,149 | <p>I've opened another thread with exactly this subject, but I think I posted too much code and I didn't really know where my problem was, now I think I have a better idea but still in need of help. What we have is a text-file with 3 letter words, only 3 letter words. I also have a Word (node) and queue-class. My findchildren-method is supposed to find, for one single word, all the children to this word, let's say I enter "fan", then I'm supposed to get something like ["kan","man"....etc]. The code is currently looking like this:</p>
<pre><code>def findchildren(mangd,parent):
children=set()
lparent=list(parent)
mangd.remove(parent)
for word in mangd:
letters=list(word)
count=0
i=0
for a in letters:
if a==lparent[i]:
count+=1
i+=1
else:
i+=1
if count==2:
if word not in children:
children.add(word)
if i>2:
break
return children
</code></pre>
<p>The code above, for findchildren is currently working fine, but, when I use it for my other methods (to implement the bfs-search) everything will take way too long time, therefore, I would like to gather all the children in a dictionary containing lists with the children. It feels like this assignment is out of my league right now, but is this possible to do? I tried to create something like this:</p>
<pre><code>def findchildren2(mangd):
children=[]
for word in mangd:
lparent=list(word)
mangd.remove(word)
letters=list(word)
count=0
i=0
for a in letters:
if a==lparent[i]:
count+=1
i+=1
else:
i+=1
if count==2:
if word not in children:
children.append(word)
if i>2:
break
return children
</code></pre>
<p>I suppose my last try is simply garbage, I get the errormessage " Set changed size using iteration".</p>
<pre><code>def findchildren3(mangd,parent):
children=defaultdict(list)
lparent=list(parent)
mangd.remove(parent)
for word in mangd:
letters=list(word)
count=0
i=0
for a in letters:
if a==lparent[i]:
count+=1
i+=1
else:
i+=1
if count==2:
children[0].append(word)
if i>2:
break
return children
</code></pre>
| 4 | 2016-08-22T13:40:54Z | 39,098,520 | <p>Solution (which is O(n^2) sadly) for the updated requirement in Python 3 (run it <a href="https://repl.it/Cq11/0" rel="nofollow">here</a>):</p>
<p>from collections import defaultdict</p>
<pre><code>words = ['fan', 'ban', 'fbn', 'ana', 'and', 'ann']
def isChildOf(a, b):
return sum(map(lambda xy: xy[0] == xy[1], zip(a, b))) >= 2
result = defaultdict(set)
for word in words:
result[word] = {x for x in words if isChildOf(word, x) and x != word}
# Display all 'childen' of each word from the original list
for word in words:
print("The children of word {0} are {1}".format(word, result[word]))
</code></pre>
<p>Produces:</p>
<pre><code>The 'children' of word fan are set(['ban', 'fbn'])
The 'children' of word ban are set(['fan'])
The 'children' of word fbn are set(['fan'])
The 'children' of word ana are set(['and', 'ann'])
The 'children' of word and are set(['ann', 'ana'])
The 'children' of word ann are set(['and', 'ana'])
</code></pre>
<p>The algorithm here is very simple and not very efficient but let me try to break it down. </p>
<p>The <code>isChildOf</code> function takes two words as input and does the following:</p>
<ol>
<li><p><code>zip</code>'s <code>a</code> & <code>b</code> together, here both are treated as iterables with each character being one 'item' in the iteration. For example if <code>a</code> is <code>'fan'</code> and <code>b</code> is <code>'ban'</code> then <code>zip('fan', 'ban')</code> produces this list of pairs <code>[('f', 'b'), ('a', 'a'), ('n', 'n')]</code></p></li>
<li><p>Next it uses the <code>map</code> function to apply the lambda function (a fancy name for an anonymous function) to <em>each item</em> in the list produced in step one. The function simply takes the pair of input elements (i.e. <code>'f'</code> & <code>'b'</code>) and returns <code>True</code> if they match and <code>False</code> otherwise. For our example this will result in <code>[False, True, True]</code> as the first pair of characters do not match but both the remaining pairs do match.</p></li>
<li><p>Finally the function runs the <code>sum</code> function on the list produced by step 2. It so happens that <code>True</code> evaluates to <code>1</code> in Python and <code>False</code> to <code>0</code> and so the sum of our list is <code>2</code>. We then simply return whether that number is greater than or equal to <code>2</code>.</p></li>
</ol>
<p>The <code>for word in words</code> loop simply compares each input word against all other words and keeps the ones where <code>isChildOf</code> evaluates to <code>True</code> taking care not to add the word itself.</p>
<p>I hope that is clear!</p>
| 0 | 2016-08-23T10:22:19Z | [
"python",
"python-3.x",
"bfs"
] |
Displaying Flask-WTF error shows extra characters | 39,081,265 | <p>I want to display an error from a field in my Flask-WTF form with JavaScript. Printing out <code>form.errors['password']</code> gives <code>['This field is required']</code>, but I don't want <code>['</code> <code>']</code> in the output. How do I display the error in the right format?</p>
| -1 | 2016-08-22T13:46:17Z | 39,082,297 | <p>Each field can have multiple errors, so they are always contained in a list, even if there is only one error. There is nothing special about Jinja or Flask-WTF here, you just need to pay attention to the data you're working with.</p>
<pre><code>{{ form.password.errors[0] }}
{{ form.errors['password'][0] }}
</code></pre>
| 1 | 2016-08-22T14:36:13Z | [
"python",
"flask",
"jinja2",
"flask-wtforms"
] |
Django apps resolving to the wrong namespace | 39,081,321 | <p>In my project I have three apps, "abc", "xyz" and "common." Common isn't a real app inasmuch as it just stores templates, models and views that are inherited and extended by both apps.</p>
<p>Project-level urls.py looks like so, and properly redirects requests to the respective app:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^abc/', include('abc.urls')),
url(r'^xyz/', include('xyz.urls')),
]
</code></pre>
<p>Both apps' url.py files look like so; the ONLY difference is replace every instance of ABC with XYZ:</p>
<pre><code>from django.conf.urls import url
from views import ABCAlertList as AlertList
from views import ABCEventList as EventList
from views import ABCEventDetail as EventDetail
from views import ABCEventReplay as EventReplay
from views import ABCUploadView as UploadView
urlpatterns = [
url(r'^$', AlertList.as_view(), name='view_alerts'),
url(r'^entities/(?P<uid>\w+)/$', EventList.as_view(), name='view_uid'),
url(r'^entities/(?P<uid>\w+)/replay/$', EventReplay.as_view(), name='view_replay'),
url(r'^entities/(?P<uid>\w+)/event/(?P<eid>\w+)/$', EventDetail.as_view(), name='view_event'),
url(r'^upload/$', UploadView.as_view(), name='upload_file'),
]
</code></pre>
<p>Again, all the views are common between both apps so there is nothing app-specific to either of them. Both apps make use of the same line in the same common template:</p>
<pre><code><a href="{% url 'view_uid' alert.uid %}">
</code></pre>
<p>Now, the problem:</p>
<p>App ABC works fine on the top-level page. But the urls it's rendering to go past that point point to the wrong app.</p>
<p>For example, I'll be in </p>
<pre><code>http://localhost:8888/abc/
</code></pre>
<p>and the urls on that page render as </p>
<pre><code>http://localhost:8888/xyz/entities/262b3bce18e71c5459a41e1e6d52a946ab47e88f/
</code></pre>
<p>What gives? It looks like Django is reading the wrong app's urls.py.</p>
| 1 | 2016-08-22T13:49:27Z | 39,081,749 | <p>Django can't tell the difference between the URLs under <code>abc/</code> and <code>xyz/</code> just by the view name and arguments. Since reversing will go through the patterns in reverse order, the patterns under <code>xyz/</code> will always match first, so all links generated using <code>reverse()</code> or the <code>{% url %}</code> tag will point to the <code>xyz</code> app.</p>
<p>You need to give each pattern a unique name, or use a URL namespace. In Django 1.9+ you should set the <code>app_name</code> attribute:</p>
<pre><code>app_name = 'abc'
urlpatterns = [
url(r'^$', AlertList.as_view(), name='view_alerts'),
url(r'^entities/(?P<uid>\w+)/$', EventList.as_view(), name='view_uid'),
url(r'^entities/(?P<uid>\w+)/replay/$', EventReplay.as_view(), name='view_replay'),
url(r'^entities/(?P<uid>\w+)/event/(?P<eid>\w+)/$', EventDetail.as_view(), name='view_event'),
url(r'^upload/$', UploadView.as_view(), name='upload_file'),
]
</code></pre>
<p>In Django 1.8 you need to pass the <code>namespace</code> parameter to <code>include()</code>:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^abc/', include('abc.urls', namespace='abc')),
url(r'^xyz/', include('xyz.urls', namespace='xyz')),
]
</code></pre>
<p>You can then reverse the url by passing the proper namespace:</p>
<pre><code><a href="{% url 'abc:view_uid' alert.uid %}">
</code></pre>
<hr>
<p>If you need to use the same templates or functions for both apps, you need to set the application namespace of both apps to be the same, but use a different instance namespace. </p>
<p>In 1.9+, this means using the same <code>app_name</code> attribute, but passing a different <code>namespace</code> argument:</p>
<pre><code># myapp/urls.py
app_name = 'common_app_name'
urlpatterns = [
# app urls
]
# myproject/urls.py
urlpatterns = [
url(r'^abc/', include('abc.urls', namespace='abc')),
url(r'^xyz/', include('xyz.urls', namespace='xyz')),
]
</code></pre>
<p>In templates, you need to use the application namespace to reverse urls. The current instance namespace is automatically taken into account. In calls to <code>reverse()</code> you need to pass the current namespace:</p>
<pre><code>reverse('common_app_name:view_alerts', current_app=request.resolver_match.namespace)
</code></pre>
<p>In Django 1.8 you don't have the <code>app_name</code> attribute, you need to pass it as a parameter to <code>include()</code>:</p>
<pre><code>urlpatterns = [
url(r'^abc/', include('abc.urls', namespace='abc', app_name='common_app_name')),
url(r'^xyz/', include('xyz.urls', namespace='xyz', app_name='common_app_name')),
]
</code></pre>
<p>Django 1.8 also won't automatically use the current instance namespace in calls to the <code>{% url %}</code> tag. You need to set the <code>request.current_app</code> attribute for that:</p>
<pre><code>def my_view(request):
request.current_app = request.resolver_match.namespace
...
</code></pre>
| 3 | 2016-08-22T14:10:14Z | [
"python",
"django"
] |
Pandas plotting: How to format datetimeindex? | 39,081,327 | <p>I am doing a barplot out of a dataframe with a 15min datetimeindex over a couple of years.
Using this code:</p>
<pre><code>df_Vol.resample('A',how='sum').plot.bar(title='Sums per year', style='ggplot', alpha=0.8)
</code></pre>
<p>Unfortunately the ticks on the X-axis are now shown with the full timestamp like this: <code>2009-12-31 00:00:00</code>.</p>
<p>I would prefer to Keep the code for plotting short, but I couldn't find an easy way to format the timestamp simply to the year (<code>2009...2016</code>) for the plot.</p>
<p>Can someone help on this?</p>
| 0 | 2016-08-22T13:49:44Z | 39,144,402 | <p>As it does not seem to be possible to Format the date within the Pandas df.plot(), I have decided to create a new dataframe and plot from it.</p>
<p>The solution below worked for me:</p>
<pre><code>df_Vol_new = df_Vol.resample('A',how='sum')
df_Vol_new.index = df_Vol_new.index.format(formatter=lambda x: x.strftime('%Y'))
ax2 =df_Vol_new.plot.bar(title='Sums per year',stacked=True, style='ggplot', alpha=0.8)
</code></pre>
| 0 | 2016-08-25T11:45:15Z | [
"python",
"pandas",
"matplotlib",
"formatting",
"datetimeindex"
] |
Python finding some, not all custom packages | 39,081,371 | <p>I have a project with the following file structure:</p>
<pre><code>root/
run.py
bot/
__init__.py
my_discord_bot.py
dice/
__init__.py
dice.py
# dice files
help/
__init__.py
help.py
# help files
parser/
__init__.py
parser.py
# other parser files
</code></pre>
<p>The program is run from within the <code>root</code> directory by calling <code>python run.py</code>. <code>run.py</code> imports <code>bot.my_discord_bot</code> and then makes use of a class defined there.</p>
<p>The file <code>bot/my_discord_bot.py</code> has the following import statements:</p>
<pre><code>import dice.dice as d
import help.help as h
import parser.parser as p
</code></pre>
<p>On Linux, all three import statements execute correctly. On Windows, the first two seem to execute fine, but then on the third I'm told:</p>
<pre><code>ImportError: No module named 'parser.parser'; 'parser' is not a package
</code></pre>
<p>Why does it break on the third <code>import</code> statement, and why does it only break on Windows?</p>
<p><strong>Edit:</strong> clarifies how the program is run</p>
| 1 | 2016-08-22T13:52:22Z | 39,083,134 | <p>Make sure that your <code>parser</code> is not shadowing a built-in or third-party package/module/library. </p>
<p>I am not 100% sure about the specifics of how this name conflict would be resolved, but it seems like you can potentially a). have your module overridden by the existing module (which seems like it might be happening in your Windows case), or b). override the existing module, which could cause bugs down the road. It seems like <strong>b</strong> is what commonly trips people up. </p>
<p>If you think this might be happening with one of your modules (which seems fairly likely with a name like <code>parser</code>), try renaming your module.</p>
<p>See <a href="http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html#the-name-shadowing-trap" rel="nofollow">this very nice article</a> for more details and more common Python "import traps".</p>
| 1 | 2016-08-22T15:13:56Z | [
"python",
"windows",
"python-3.5"
] |
Python finding some, not all custom packages | 39,081,371 | <p>I have a project with the following file structure:</p>
<pre><code>root/
run.py
bot/
__init__.py
my_discord_bot.py
dice/
__init__.py
dice.py
# dice files
help/
__init__.py
help.py
# help files
parser/
__init__.py
parser.py
# other parser files
</code></pre>
<p>The program is run from within the <code>root</code> directory by calling <code>python run.py</code>. <code>run.py</code> imports <code>bot.my_discord_bot</code> and then makes use of a class defined there.</p>
<p>The file <code>bot/my_discord_bot.py</code> has the following import statements:</p>
<pre><code>import dice.dice as d
import help.help as h
import parser.parser as p
</code></pre>
<p>On Linux, all three import statements execute correctly. On Windows, the first two seem to execute fine, but then on the third I'm told:</p>
<pre><code>ImportError: No module named 'parser.parser'; 'parser' is not a package
</code></pre>
<p>Why does it break on the third <code>import</code> statement, and why does it only break on Windows?</p>
<p><strong>Edit:</strong> clarifies how the program is run</p>
| 1 | 2016-08-22T13:52:22Z | 39,083,301 | <p>Put run.py outside root folder, so you'll have run.py next to root folder, then create <code>__init__.py</code> inside root folder, and change imports to:</p>
<pre><code>import root.parser.parser as p
</code></pre>
<p>Or just rename your parser module.</p>
<p>Anyway you should be careful with naming, because you can simply mess your own stuff someday.</p>
| 0 | 2016-08-22T15:22:20Z | [
"python",
"windows",
"python-3.5"
] |
Ansible: Change value in template based on deployment | 39,081,395 | <p>Is it possible for change value on a deployment basis with ansible? I'm configuring keepalived on two machines, I'd like to add a loop for the priority. </p>
<p>I can't loop or use the range() function as that'd just loop within the same deployment. </p>
<p>I'm trying to set priority:</p>
<ul>
<li>lb1 = 100 </li>
<li>lb2 = 101</li>
</ul>
<p>My vrrp instance looks like this so far:</p>
<pre><code>vrrp_instance VI_1 {
state MASTER
interface {{ int }}
virtual_router_id 51
priority 100 <------------------- I'd like to iterate this value
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
# supports up to 20 by default
{% for ip in vips %}
{{ ip.addr }}
{% endfor %}
}
}
</code></pre>
| 1 | 2016-08-22T13:53:28Z | 39,081,765 | <p>You can use host index inside your template like this (if you don't care about who will get higher priority):</p>
<pre><code>priority {{ play_hosts.index(inventory_hostname) }}
</code></pre>
<p>Or you can assign priorities in advance as host variables in your inventory file like this:</p>
<pre><code>server1 vrrp_priority=100
server2 vrrp_priority=150
</code></pre>
<p>... and then use it inside your template:</p>
<pre><code>priority {{ vrrp_priority }}
</code></pre>
| 2 | 2016-08-22T14:10:45Z | [
"python",
"templates",
"ansible",
"jinja2"
] |
Google Drive API 403 error when updating spreadsheet title | 39,081,509 | <p>We are using Google Drive API in our Google App Engine application.
This weekend we noticed that it has problems with updating spreadsheet title. We are getting the following error:</p>
<pre><code>HttpError: <HttpError 403 when requesting https://www.googleapis.com/drive/v2/files/1_X51WMK0U12rfPKc2x60E_EuyqtQ8koW-NSRZq7Eqdw?quotaUser=5660071165952000&fields=title&alt=json returned "The authenticated user has not granted the app 593604285024 write access to the file 1_X51WMK0U12rfPKc2x60E_EuyqtQ8koW-NSRZq7Eqdw">
</code></pre>
<p>Other calls to Google Drive API succeed. We just have the problem with this one. Also this functionality worked properly for a long time. Is it possible that some update on Google side has broken this?</p>
<p>The minimal code to reproduce the issue is:</p>
<pre><code>class TestDriveUpdate(webapp2.RequestHandler):
def get(self):
credentials = StorageByKeyName(Credentials,
'103005000283606027776',
'credentials').get()
spreadsheet_key = '1_X51WMK0U12rfPKc2x60E_EuyqtQ8koW-NSRZq7Eqdw'
quota_user = '5660071165952000'
body = {"title": 'Test'}
fields = "title"
http = httplib2.Http(timeout=60)
credentials.authorize(http)
gdrive = apiclient.discovery.build('drive', 'v2', http=http)
response = gdrive.files().update(
fileId=spreadsheet_key,
body=body,
fields=fields,
quotaUser=quota_user
).execute()
self.response.write("OK")
</code></pre>
| 0 | 2016-08-22T13:58:31Z | 39,096,183 | <p>Based from this <a href="https://developers.google.com/drive/v3/web/handle-errors#403_the_user_has_not_granted_the_app_appid_verb_access_to_the_file_fileid" rel="nofollow">documentation</a>, error occurs when the requesting app is not on the ACL for the file and the user never explicitly opened the file with this Drive app. Found this <a href="http://stackoverflow.com/questions/26138528/httperror-403-forbidden-when-accessing-google-drive-using-pydrive">SO question</a> which states that the scope strings must match exactly between your code and the Admin Console, including trailing slashes, etc. Make sure also that Drive Apps are allowed on the domain ("Allow users to install Google Drive apps").</p>
| 1 | 2016-08-23T08:34:02Z | [
"python",
"google-app-engine",
"google-drive-sdk"
] |
How to determine what string subprocess is passing to the commandline? | 39,081,535 | <p>On Windows, you make a subprocess call by passing a list of string arguments that it then reformats as a single string to call the relevant command. It does this by a series of rules as outline in the documentation <a href="https://docs.python.org/2/library/subprocess.html#converting-an-argument-sequence-to-a-string-on-windows" rel="nofollow">here</a>.</p>
<blockquote>
<p>On Windows, an args sequence is converted to a string that can be
parsed using the following rules (which correspond to the rules used
by the MS C runtime):</p>
<ol>
<li>Arguments are delimited by white space, which is either a space or a
tab.</li>
<li>A string surrounded by double quotation marks is interpreted as a
single argument, regardless of white space contained within. A quoted
string can be embedded in an argument.</li>
<li>A double quotation mark
preceded by a backslash is interpreted as a literal double quotation
mark.</li>
<li>Backslashes are interpreted literally, unless they immediately
precede a double quotation mark.</li>
<li>If backslashes immediately precede a
double quotation mark, every pair of backslashes is interpreted as a
literal backslash. If the number of backslashes is odd, the last
backslash escapes the next double quotation mark as described in rule</li>
</ol>
</blockquote>
<p>However in practice this can be hard to get right, as it's unclear exactly how the strings are being interpreted. And there can be trial and error in figuring out how to properly format the command.</p>
<p>Is there a way I can just determine what string subprocess would formulate? That way I can inspect it and ensure it's being formulated correctly as well as logging it better than just logging the list form of the command.</p>
| 1 | 2016-08-22T13:59:59Z | 39,082,547 | <p>I dug into the actual subprocess module and found an answer in there in fact. There's a function called <code>list2cmdline</code> that's used to just take a list passed to <code>Popen</code> and turn it into a single string of commandline arguments. It can just be called on with the list to get the result I need:</p>
<pre><code>import subprocess
name = "Monty Python's Flying Circus"
path = r"C:\path\to\files"
subprocess.list2cmdline(["file.py", name, path])
# 'file.py "Monty Python\'s Flying Circus" C:\\path\\to\\files'
</code></pre>
| 1 | 2016-08-22T14:46:59Z | [
"python",
"windows",
"subprocess"
] |
Light persistence in the context of ThreadPoolExecutor in Python | 39,081,583 | <p>I've got some Python code that farms out expensive jobs using ThreadPoolExecutor, and I'd like to keep track of which of them have completed so that if I have to restart this system, I don't have to redo the stuff that already finished. In a single-threaded context, I could just mark what I've done in a shelf. Here's a naive port of that idea to a multithreaded environment:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
import subprocess
import shelve
def do_thing(done, x):
# Don't let the command run in the background; we want to be able to tell when it's done
_ = subprocess.check_output(["some_expensive_command", x])
done[x] = True
futs = []
with shelve.open("done") as done:
with ThreadPoolExecutor(max_workers=18) as executor:
for x in things_to_do:
if done.get(x, False):
continue
futs.append(executor.submit(do_thing, done, x))
# Can't run `done[x] = True` here--have to wait until do_thing finishes
for future in futs:
future.result()
# Don't want to wait until here to mark stuff done, as the whole system might be killed at some point
# before we get through all of things_to_do
</code></pre>
<p>Can I get away with this? The <a href="https://docs.python.org/2/library/shelve.html" rel="nofollow">documentation for shelve</a> doesn't contain any guarantees about thread safety, so I'm thinking no.</p>
<p>So what is the simple way to handle this? I thought that perhaps sticking <code>done[x] = True</code> in <code>future.add_done_callback</code> would do it, but <a href="http://stackoverflow.com/questions/26021526/python-threadpoolexecutor-is-the-callback-guaranteed-to-run-in-the-same-thread/26021772#26021772">that will often run in the same thread as the future itself</a>. Perhaps there is a locking mechanism that plays nicely with ThreadPoolExecutor? That seems cleaner to me that writing a loop that sleeps and then checks for completed futures.</p>
| 1 | 2016-08-22T14:02:07Z | 39,720,663 | <p>While you're still in the outer-most <code>with</code> context manager, the <code>done</code> shelve is just a normal python object- it is only written to disk when the context manager closes and it runs its <code>__exit__</code> method. It is therefore just as thread safe as any other python object, due to the <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">GIL</a> (as long as you're using CPython).</p>
<p>Specifically, the reassignment <code>done[x] = True</code> is thread safe / will be done atomically.</p>
<p>It's important to note that while the shelve's <code>__exit__</code> method will run after a Ctrl-C, it won't if the python process ends abruptly, and the shelve won't be saved to disk.</p>
<p>To protect against this kind of failure, I would suggest using a lightweight file-based thread safe database like <a href="https://docs.python.org/2/library/sqlite3.html" rel="nofollow">sqllite3</a>.</p>
| 1 | 2016-09-27T09:12:23Z | [
"python",
"multithreading",
"shelve"
] |
Adding a parcel repository using Cloudera Manager Python API | 39,081,629 | <p>I'm trying to install CDH5 parcels on Hadoop-cluster using <a href="http://cloudera.github.io/cm_api/docs/python-client/" rel="nofollow">Cloudera Manager Python API</a>. I'm doing this using following code:</p>
<pre><code>test_cluster = ... # configuring cluster
# adding hosts ...
for parcel in test_cluster.get_all_parcels():
if parcel.product == 'CDH' and 'cdh5':
parcel.start_download().wait()
parcel.start_distribution().wait()
success = parcel.activate().wait().success
</code></pre>
<p>But I catch such error:</p>
<pre><code>cm_api.api_client.ApiException: Parcel for CDH : 5.8.0-1.cdh5.8.0.p0.42 is not available on UBUNTU_TRUSTY. (error 400)
</code></pre>
<p>The <code>CDH 5.8.0-1.cdh5.8.0.p0.42</code> was in <code>AVAILABLE_REMOTELY</code>, as we can see if print a string representation on this parcel:</p>
<pre><code><ApiParcel>: CDH-5.8.0-1.cdh5.8.0.p0.42 (stage: AVAILABLE_REMOTELY) (state: None) (cluster: TestCluster)
</code></pre>
<p>After the execution of code, parcel changes its stage to <code>DOWNLOADED</code>.</p>
<p>It seems, I should add a new parcel repository, compatible with Ubuntu Trusty (14.04). But I don't know of doing this using Cloudera Manager API.</p>
<p><strong>How I can specify the new repository for installing correct CDH?</strong></p>
| 0 | 2016-08-22T14:04:10Z | 39,136,543 | <p>You may want to be more specific about the parcel you are acting on. I use something like this for the same purpose, the important part for your question is the combined check on <code>parcel.version</code> and <code>parcel.product</code>. After that (yes I am verbose in my output) I print the list of parcels to verify I am trying to only install the 1 parcel I want.</p>
<p>I'm sure you've been here, but if not the <a href="https://cloudera.github.io/cm_api/docs/python-client/#managing-parcels" rel="nofollow">cm_api github site</a> has some helpful examples too. </p>
<pre><code>cdh_version = "CDH5"
cdh_version_number = "5.6.0"
# CREATE THE LIST OF PARCELS TO BE INSTALLED (CDH)
parcels_list = []
for parcel in cluster.get_all_parcels():
if parcel.version.startswith(cdh_version_number) and parcel.product == "CDH":
parcels_list.append(parcel)
for parcel in parcels_list:
print "WILL INSTALL " + parcel.product + ' ' + parcel.version
# DISTRIBUTE THE PARCELS
print "DISTRIBUTING PARCELS..."
for p in parcels_list:
cmd = p.start_distribution()
if not cmd.success:
print "PARCEL DISTRIBUTION FAILED"
exit(1)
# MAKE SURE THE DISTRIBUTION FINISHES
for p in parcels_list:
while p.stage != "DISTRIBUTED":
sleep(5)
p = get_parcel(api, p.product, p.version, cluster_name)
print p.product + ' ' + p.version + " DISTRIBUTED"
# ACTIVATE THE PARCELS
for p in parcels_list:
cmd = p.activate()
if not cmd.success:
print "PARCEL ACTIVATION FAILED"
exit(1)
# MAKE SURE THE ACTIVATION FINISHES
for p in parcels_list:
while p.stage != "ACTIVATED":
p = get_parcel(api, p.product, p.version, cluster_name)
print p.product + ' ' + p.version + " ACTIVATED"
</code></pre>
| 2 | 2016-08-25T04:05:47Z | [
"python",
"cloudera-cdh",
"cloudera-manager"
] |
What are formatted string literals in Python 3.6? | 39,081,766 | <p>One of the features of Python 3.6 are formatted strings.</p>
<p><a href="http://stackoverflow.com/questions/35745050/string-with-f-prefix-in-python-3-6">This SO question</a>(String with 'f' prefix in python-3.6) is asking about the internals of formatted string literals, but I don't understand the exact use case of formatted string literals. In which situations should I use this feature? Isn't explicit better than implicit?</p>
| 2 | 2016-08-22T14:10:50Z | 39,082,102 | <blockquote>
<p>Simple is better than complex.</p>
</blockquote>
<p>So here we have formatted string. It gives the simplicity to the string formatting, while keeping the code explicit (comprared to other string formatting mechanisms).</p>
<pre><code>title = 'Mr.'
name = 'Tom'
msg_count = 3
# This is explicit but complex
print('Hello {title} {name}! You have {count} messages.'.format(title=title, name=name, count=count))
# This is simple but implicit
print('Hello %s %s! You have %d messages.'%(title, name, count))
# This is both explicit and simple. PERFECT!
print(f'Hello {title} {name}! You have {msg_count} messages.')
</code></pre>
<p>It is designed to replace <code>str.format</code> for simple string formatting.</p>
| 5 | 2016-08-22T14:26:49Z | [
"python",
"python-3.6"
] |
Python: Process in one Thread stopping a Process in another Thread from finishing | 39,081,791 | <p>I'd appreciate some help with threading, which I pretty new to. </p>
<p>The example code is not exactly what Iâm doing (ânotepadâ and âcalcâ are just example commands), but a simplified version that shows my problem.</p>
<p>I want to run two seperate threads that each run a different command a number of times. I would like the code to do this:</p>
<ol>
<li>Start the first instance of ânotepadâ and âcalcâ simultaneously
(which it does) </li>
<li>When I close an instance of ânotepadâ, to open the
next instance of ânotepadâ. </li>
<li>When I close an instance of âcalcâ, to
open the next instance of âcalcâ.</li>
<li>[edit] I want the script to wait until both threads have finished, as it needs to do some processing of the output from these.</li>
</ol>
<p><strong>However, when I close an instance of ânotepadâ, the next instance of ânotepadâ does not start until Iâve closed the current instance of âcalcâ and vice versa.</strong> With a bit of de-bugging, it looks like the process (from Popen) for the closed instance of 'notepad' doesn't finish until the current 'calc' is closed.</p>
<p>Running Python 2.7 on Windows 7</p>
<p>Example Code:</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
from threading import Thread
def do_commands(command_list):
for command in command_list:
proc = Popen("cmd.exe", stdin=PIPE, stdout=PIPE, stderr=STDOUT)
stdout_value, stderr_value = proc.communicate(input=command)
# MAIN CODE
A_command_list = ["notepad\n", "notepad\n", "notepad\n" ]
B_command_list = ["calc\n", "calc\n", "calc\n" ]
A_args = [A_command_list]
B_args = [B_command_list]
A_thread = Thread(target=do_commands, args=(A_args))
B_thread = Thread(target=do_commands, args=(B_args))
A_thread.start()
B_thread.start()
A_thread.join()
B_thread.join()
</code></pre>
<p>Thanks in advance :-)</p>
<p>Nick</p>
| 1 | 2016-08-22T14:11:56Z | 39,082,516 | <p>So the <code>communicate()</code> method is apparently waiting for all processes created by <code>Popen</code> and executing <code>cmd.exe</code> <strong>and started at nearly the same time</strong> to terminate. Since the <code>cmd.exe</code> that runs <code>calculator</code> starts at nearly the same time as the <code>cmd.exe</code> that runs <code>Notepad</code>, both <code>communicate()</code> calls (one in <code>A_thread</code> and one in <code>B_thread</code>) wait until both processes term. Thus neither <code>for</code> loop advances until both processes term.</p>
<p>Adding a delay between starting the two threads fixes the problem. </p>
<p>So, leaving your original code unchanged and adding </p>
<pre><code>sleep(1)
</code></pre>
<p>between the two <code>Thread</code> <code>starts</code> produces the desired behavior.</p>
<p>On my system, adding a delay of 0.0001 seconds reliably fixed the problem whereas a delay of 0.00001 did not.</p>
| 0 | 2016-08-22T14:45:32Z | [
"python",
"multithreading",
"subprocess",
"popen",
"terminate"
] |
PyQT QTableWidget extremely slow | 39,081,852 | <p>this is the code I use to fill a table drawn in QT Designer.
Designed to be universal for any table, it works fine, but...
When I try to show a datasat containing 18 columns and ~12000 rows, it just freezes for 30 seconds or more.
So, what I am doing wrong and is there way to speed up, keeping the code still suitable for any table?</p>
<p>That's my code:</p>
<pre><code>...blablabla...
self.connect(self, SIGNAL("set"), self.real_set)
...blablabla...
def set_table(self, table, data):
self.emit(SIGNAL('set'), table, data)
def real_set(self, table, data):
"""
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Assuming data is list of dict and table is a QTableWidget.
Get first key and get len of contents
"""
for key in data:
rows = len(data[key])
table.setRowCount(rows)
break
"""
Forbid resizing(speeds up)
"""
table.horizontalHeader().setResizeMode(QHeaderView.Fixed)
table.verticalHeader().setResizeMode(QHeaderView.Fixed)
table.horizontalHeader().setStretchLastSection(False)
table.verticalHeader().setStretchLastSection(False)
"""
Set number of columns too
"""
table.setColumnCount(len(data))
table.setHorizontalHeaderLabels(sorted(data.keys()))
"""
Now fill data
"""
for n, key in enumerate(sorted(data.keys())):
for m, item in enumerate(data[key]):
newitem = QTableWidgetItem(item)
table.setItem(m, n, newitem)
</code></pre>
| 0 | 2016-08-22T14:14:42Z | 39,081,908 | <p>In GUI applications one comes across a situation where there is a need
to display a lot of items in a tabular or list format (for example
displaying large number of rows in a table). One way to increase the
GUI responsiveness is to load a few items when the screen is displayed
and defer loading of rest of the items based on user action. Qt
provides a solution to address this requirement of loading the data on
demand.</p>
<p>You can find the implementation of this technique called pagination in this <a href="https://sateeshkumarb.wordpress.com/2012/04/01/paginated-display-of-table-data-in-pyqt/" rel="nofollow">link</a></p>
| 0 | 2016-08-22T14:17:10Z | [
"python",
"qt",
"pyqt",
"qtablewidget"
] |
PyQT QTableWidget extremely slow | 39,081,852 | <p>this is the code I use to fill a table drawn in QT Designer.
Designed to be universal for any table, it works fine, but...
When I try to show a datasat containing 18 columns and ~12000 rows, it just freezes for 30 seconds or more.
So, what I am doing wrong and is there way to speed up, keeping the code still suitable for any table?</p>
<p>That's my code:</p>
<pre><code>...blablabla...
self.connect(self, SIGNAL("set"), self.real_set)
...blablabla...
def set_table(self, table, data):
self.emit(SIGNAL('set'), table, data)
def real_set(self, table, data):
"""
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Assuming data is list of dict and table is a QTableWidget.
Get first key and get len of contents
"""
for key in data:
rows = len(data[key])
table.setRowCount(rows)
break
"""
Forbid resizing(speeds up)
"""
table.horizontalHeader().setResizeMode(QHeaderView.Fixed)
table.verticalHeader().setResizeMode(QHeaderView.Fixed)
table.horizontalHeader().setStretchLastSection(False)
table.verticalHeader().setStretchLastSection(False)
"""
Set number of columns too
"""
table.setColumnCount(len(data))
table.setHorizontalHeaderLabels(sorted(data.keys()))
"""
Now fill data
"""
for n, key in enumerate(sorted(data.keys())):
for m, item in enumerate(data[key]):
newitem = QTableWidgetItem(item)
table.setItem(m, n, newitem)
</code></pre>
| 0 | 2016-08-22T14:14:42Z | 39,088,300 | <p>Here a test script which compares a few ways of populating a table.</p>
<p>The custom model is much faster, because it does not have to create all the items up front - but note that it is a very basic implementation, so does not implement sorting, editing, etc. (See <a href="http://doc.qt.io/qt-4.8/model-view-programming.html" rel="nofollow">Model/View Programming</a> for more details).</p>
<pre><code>from random import shuffle
from PyQt4 import QtCore, QtGui
class TableModel(QtCore.QAbstractTableModel):
def __init__(self, data, parent=None):
super(TableModel, self).__init__(parent)
self._data = data
def rowCount(self, parent=None):
return len(self._data)
def columnCount(self, parent=None):
return len(self._data[0]) if self.rowCount() else 0
def data(self, index, role=QtCore.Qt.DisplayRole):
if role == QtCore.Qt.DisplayRole:
row = index.row()
if 0 <= row < self.rowCount():
column = index.column()
if 0 <= column < self.columnCount():
return self._data[row][column]
class Window(QtGui.QWidget):
def __init__(self):
super(Window, self).__init__()
self.table = QtGui.QTableView(self)
self.tablewidget = QtGui.QTableWidget(self)
self.tablewidget.setSortingEnabled(True)
self.button1 = QtGui.QPushButton('Custom Model', self)
self.button1.clicked.connect(
lambda: self.populateTable('custom'))
self.button2 = QtGui.QPushButton('StandardItem Model', self)
self.button2.clicked.connect(
lambda: self.populateTable('standard'))
self.button3 = QtGui.QPushButton('TableWidget', self)
self.button3.clicked.connect(
lambda: self.populateTable('widget'))
self.spinbox = QtGui.QSpinBox(self)
self.spinbox.setRange(15000, 1000000)
self.spinbox.setSingleStep(10000)
layout = QtGui.QGridLayout(self)
layout.addWidget(self.table, 0, 0, 1, 4)
layout.addWidget(self.tablewidget, 1, 0, 1, 4)
layout.addWidget(self.button1, 2, 0)
layout.addWidget(self.button2, 2, 1)
layout.addWidget(self.button3, 2, 2)
layout.addWidget(self.spinbox, 2, 3)
self._data = []
def populateTable(self, mode):
if mode == 'widget':
self.tablewidget.clear()
self.tablewidget.setRowCount(self.spinbox.value())
self.tablewidget.setColumnCount(20)
else:
model = self.table.model()
if model is not None:
self.table.setModel(None)
model.deleteLater()
if len(self._data) != self.spinbox.value():
del self._data[:]
rows = list(range(self.spinbox.value()))
shuffle(rows)
for row in rows:
items = []
for column in range(20):
items.append('(%d, %d)' % (row, column))
self._data.append(items)
timer = QtCore.QElapsedTimer()
timer.start()
if mode == 'widget':
self.tablewidget.setSortingEnabled(False)
for row, items in enumerate(self._data):
for column, text in enumerate(items):
item = QtGui.QTableWidgetItem(text)
self.tablewidget.setItem(row, column, item)
self.tablewidget.sortByColumn(0, QtCore.Qt.AscendingOrder)
else:
self.table.setSortingEnabled(False)
if mode == 'custom':
model = TableModel(self._data, self.table)
elif mode == 'standard':
model = QtGui.QStandardItemModel(self.table)
for row in self._data:
items = []
for column in row:
items.append(QtGui.QStandardItem(column))
model.appendRow(items)
self.table.setModel(model)
self.table.setSortingEnabled(True)
self.table.sortByColumn(0, QtCore.Qt.AscendingOrder)
print('%s: %.3g seconds' % (mode, timer.elapsed() / 1000))
if __name__ == '__main__':
import sys
app = QtGui.QApplication(sys.argv)
window = Window()
window.setGeometry(600, 50, 1200, 800)
window.show()
sys.exit(app.exec_())
</code></pre>
| 2 | 2016-08-22T20:36:14Z | [
"python",
"qt",
"pyqt",
"qtablewidget"
] |
UDF (User Defined Function) python gives different answer in pig | 39,081,890 | <p>I want to write a UDF python for pig, to read lines from the file called like</p>
<pre><code>#'prefix.csv'
spol.
LLC
Oy
OOD
</code></pre>
<p>and match the names and if finds any matches, then replaces it with white space. here is my python code </p>
<pre><code>def list_files2(name, f):
fin = open(f, 'r')
for line in fin:
final = name
extra = 'nothing'
if (name != name.replace(line.strip(), ' ')):
extra = line.strip()
final = name.replace(line.strip(), ' ').strip()
return final, extra,'insdie if'
return final, extra, 'inside for'
</code></pre>
<p>Running this code in python, </p>
<pre><code>>print list_files2('LLC nakisa', 'prefix.csv' )
>print list_files2('AG company', 'prefix.csv' )
</code></pre>
<p>returns </p>
<pre><code> ('nakisa', 'LLC', 'insdie if')
('AG company', 'nothing', 'inside for')
</code></pre>
<p>which is exactly what I need. But when I register this code as a UDF in apache pig for this sample list:</p>
<pre><code>nakisa company LLC
three Oy
AG Lans
Test OOD
</code></pre>
<p>pig returns wrong answer on the third line:</p>
<pre><code>((nakisa company,LLC,insdie if))
((three,Oy,insdie if))
((A G L a n s,,insdie if))
((Test,OOD,insdie if))
</code></pre>
<p>The question is why UDF enters the if loop for the third entry which does not have any match in the prefix.csv file. </p>
| 0 | 2016-08-22T14:16:14Z | 39,096,001 | <p>I don't know <code>pig</code> but the way you are checking for a match is strange and might be the cause of your problem.</p>
<p>If you want to check whether a string is a substring of another, <code>python</code> provides
the <code>find</code> method on strings:</p>
<pre><code>if name.find(line.strip()) != -1:
# find will return the first index of the substring or -1 if it was not found
# ... do some stuff
</code></pre>
<p>additionally, your code might leave the file handle open. A way better approach to handle file operations is by using the <code>with</code> statement. This assures that in any case (except of interpreter crashes) the file handle will get closed.</p>
<pre><code>with open(filename, "r") as file_:
# Everything within this block can use the opened file.
</code></pre>
<p>Last but not least, <code>python</code> provides a module called <code>csv</code> with a <code>reader</code> and a <code>writer</code>, that handle the parsing of the csv file format.</p>
<p>Thus, you could try the following code and check if it returns the correct thing:</p>
<pre><code>import csv
def list_files2(name, filename):
with open(filename, 'rb') as file_:
final = name
extra = "nothing"
for prefix in csv.reader(file_):
if name.find(prefix) != -1:
extra = prefix
final = name.replace(prefix, " ")
return final, extra, "inside if"
return final, extra, "inside for"
</code></pre>
<p>Because your file is named <code>prefix.csv</code> I assume you want to do prefix substitution. In this case, you could use <code>startswith</code> instead of <code>find</code> for the check and replace the line <code>final = name.replace(prefix, " ")</code> with <code>final = " " + name[name.find(prefix):]</code>. This assures that only a prefix will be substituted with the space.</p>
<p>I hope, this helps</p>
| 0 | 2016-08-23T08:24:52Z | [
"python",
"apache-pig",
"jython",
"udf"
] |
ImportError: No module named keras.optimizers | 39,081,910 | <p>I have this import statement in Keras:</p>
<pre><code>from keras.optimizers import SGD, RMSprop
</code></pre>
<p>But getting this error on this error:</p>
<pre><code>ImportError: No module named keras.optimizers
</code></pre>
<p>Why is that? And, how can I solve this issue?</p>
<p>Thanks.</p>
| -1 | 2016-08-22T14:17:12Z | 39,082,014 | <p>Most obvious answer would be: You do not have <code>keras</code> installed. Do you? Maybe try <code>pip install keras</code> or <code>pip freeze</code> to check? Or if you are on Linux, you can also try <code>which keras</code>.</p>
<p>Can you provide us with additional information?</p>
| 2 | 2016-08-22T14:22:35Z | [
"python",
"keras"
] |
Passing Bokeh plot objects into and out of functions | 39,082,342 | <p>I am currently using Bokeh to make multiple plots and generating their components (script/div). I pass in the plot object to a function, add plot lines to the object based on calculations performed in the function, I then pass the object out to main. I do this frequently for different functions. This seems to create a drastic slowdown when running my program as compared to matplotlib - 12 minutes bokeh to 1 minute matplotlib. </p>
<p>I believe this may be due to the copying of values in and out of the function. Each plot object contains 4 plot line of about 5000 points each. There are at most 16 plots in the program. </p>
<p>Is there a better way to pass in/out the plot objects or should I plot all objects and formatting at the end of the program to minimize the overhead? </p>
| 0 | 2016-08-22T14:38:11Z | 39,083,177 | <p>There are unfortunate and unavoidable tensions between interactive, exploratory use-cases and development use-cases. Making Bokeh simple, convenient and unobtrusive to use in Jupyter Notebooks, etc. meant making it do some "automagic" things. In particular there is an implicit "current document", and unless it is explicitly cleared, <em>everything</em> that is created with higher level APIs accumulates there. Long story, short: For this kind of application you should explicitly clear the current document:</p>
<pre><code>from bokeh.io import curdoc
curdoc().clear()
</code></pre>
<p>after you template/render a particular plot and are done with it (i.e. after you call <code>components</code>)</p>
<p>If you still ned to hold on to the plots longer than that, after you call <code>components</code>, then you will need to partially drop down to the lower level API and create your own documents explicitly. Most of the examples here demonstrate creating documents by hand:</p>
<p><a href="https://github.com/bokeh/bokeh/tree/master/examples/models" rel="nofollow">https://github.com/bokeh/bokeh/tree/master/examples/models</a></p>
| 0 | 2016-08-22T15:15:39Z | [
"python",
"function",
"plot",
"bokeh"
] |
Return list built from a loop with lambda | 39,082,671 | <p>I have a function to return a list of files and folders in a given folder (with recurse and only get files options), or just the file in a list if given path is not a folder:</p>
<pre><code>def path_to_list(path, onlyFiles = False, recurse = False):
if os.path.isdir(path):
if onlyFiles:
if recurse:
result = []
for dirs in list(os.walk(path)):
result.append(dirs[2])
return result
else:
return next(os.walk(path))[2]
else:
return list(os.walk(path)) if recurse else next(os.walk(path))
return [path]
</code></pre>
<p>Trying to shorten this part:</p>
<pre><code>result = []
for dirs in list(os.walk(path)):
result.append(dirs[2])
return result
</code></pre>
<p>I tried to use lambda with multiple syntaxes but haven't got any success. How do I directly return the result from the for loop? Thanks.</p>
| 0 | 2016-08-22T14:52:17Z | 39,083,766 | <p>You can return list of directories by:</p>
<pre><code>return [dir for _, _, dir in list(os.walk(path))]
</code></pre>
| 0 | 2016-08-22T15:47:58Z | [
"python",
"python-2.7"
] |
How does for in loop work with multiple sequences python | 39,082,721 | <p>I meet following code of for-loop, and not very sure how it goes:</p>
<pre><code>for sentence in snippet, phrase:
result = sentence[:]
</code></pre>
<p>is it iterate through 'snippet' then 'phrase'? </p>
<p>EDIT: </p>
<pre><code>PHRASES = {
"class %%%(%%%):":
"Make a class named %%% that is-a %%%.",
"class %%%(object):\n\tdef __init__(self, ***)" :
"class %%% has-a __init__ that takes self and *** parameters.",
"class %%%(object):\n\tdef ***(self, @@@)":
"class %%% has-a function named *** that takes self and @@@ parameters.",
"*** = %%%()":
"Set *** to an instance of class %%%.",
"***.***(@@@)":
"From *** get the *** function, and call it with parameters self, @@@.",
"***.*** = '***'":
"From *** get the *** attribute and set it to '***'."
}
##############
#'snippet' is a key in the dict shown above, and 'phrase' is its corresponding value
def convert(snippet, phrase):
class_names = [w.capitalize() for w in
random.sample(WORDS, snippet.count("%%%"))]
other_names = random.sample(WORDS, snippet.count("***"))
results = []
param_names = []
for i in range(0, snippet.count("@@@")):
param_count = random.randint(1,3)
param_names.append(', '.join(random.sample(WORDS, param_count)))
########
#Here is the code in question
########
for sentence in snippet, phrase:
result = sentence[:]
# fake class names
for word in class_names:
result = result.replace("%%%", word, 1)
# fake other names
for word in other_names:
result = result.replace("***", word, 1)
# fake parameter lists
for word in param_names:
result = result.replace("@@@", word, 1)
results.append(result)
return results
</code></pre>
| -2 | 2016-08-22T14:54:37Z | 39,082,760 | <p>No. <code>snippet, phrase</code> defines a tuple of those two elements only. The iteration is over that tuple; ie sentence is first the value of <code>snippet</code> and then the value of <code>phrase</code>. It doesn't iterate through the contents of those values.</p>
| 1 | 2016-08-22T14:56:28Z | [
"python",
"for-loop"
] |
Why is my code only parsing part of the XML file? | 39,082,749 | <p>Apologies in advance I am a novice in Python.
I am trying to sum across all the elements in this XML file but it seems my code is only summing part of the file for some reason.
I tried to figure out but failed. May I kindly ask for some advice? Thanks.
Sorry for the long file</p>
<pre><code>import xml.etree.ElementTree as ET
input='''
<commentinfo>
<note>This file contains the sample data for testing</note>
<comments>
<comment>
<name>Romina</name>
<count>97</count>
</comment>
<comment>
<name>Laurie</name>
<count>97</count>
</comment>
<comment>
<name>Bayli</name>
<count>90</count>
</comment>
<comment>
<name>Siyona</name>
<count>90</count>
</comment>
<comment>
<name>Taisha</name>
<count>88</count>
</comment>
<comment>
<name>Ameelia</name>
<count>87</count>
</comment>
<comment>
<name>Alanda</name>
<count>87</count>
</comment>
<comment>
<name>Prasheeta</name>
<count>80</count>
</comment>
<comment>
<name>Risa</name>
<count>79</count>
</comment>
<comment>
<name>Asif</name>
<count>79</count>
</comment>
<comment>
<name>Zi</name>
<count>78</count>
</comment>
<comment>
<name>Ediomi</name>
<count>76</count>
</comment>
<comment>
<name>Danyil</name>
<count>76</count>
</comment>
<comment>
<name>Barry</name>
<count>72</count>
</comment>
<comment>
<count>64</count>
<name>Lance</name>
<count>72</count>
</comment>
<comment>
<name>Hattie</name>
<count>66</count>
</comment>
<comment>
<name>Mathu</name>
<count>66</count>
</comment>
<comment>
<name>Bowie</name>
<count>65</count>
</comment>
<comment>
<name>Samara</name>
<count>65</count>
</comment>
<comment>
<name>Uchenna</name>
</comment>
<comment>
<name>Shauni</name>
<count>61</count>
</comment>
<comment>
<name>Georgia</name>
<count>61</count>
</comment>
<comment>
<name>Rivan</name>
<count>59</count>
</comment>
<comment>
<name>Kenan</name>
<count>58</count>
</comment>
<comment>
<name>Isma</name>
<count>57</count>
</comment>
<comment>
<name>Hassan</name>
<count>57</count>
</comment>
<comment>
<name>Samanthalee</name>
<count>54</count>
</comment>
<comment>
<name>Alexa</name>
<count>51</count>
</comment>
<comment>
<name>Caine</name>
<count>49</count>
</comment>
<comment>
<name>Grady</name>
<count>47</count>
</comment>
<comment>
<name>Anne</name>
<count>40</count>
</comment>
<comment>
<name>Rihan</name>
<count>38</count>
</comment>
<comment>
<name>Alexei</name>
<count>37</count>
</comment>
<comment>
<name>Indie</name>
<count>36</count>
</comment>
<comment>
<name>Rhuairidh</name>
<count>36</count>
</comment>
<comment>
<name>Annoushka</name>
<count>32</count>
</comment>
<comment>
<name>Kenzi</name>
<count>25</count>
</comment>
<comment>
<name>Shahd</name>
<count>24</count>
</comment>
<comment>
<name>Irvine</name>
<count>22</count>
</comment>
<comment>
<name>Carys</name>
<count>21</count>
</comment>
<comment>
<name>Skye</name>
<count>19</count>
</comment>
<comment>
<name>Atiya</name>
<count>18</count>
</comment>
<comment>
<name>Rohan</name>
<count>18</count>
</comment>
<comment>
<name>Nuala</name>
<count>14</count>
</comment>
<comment>
<name>Carlo</name>
<count>12</count>
</comment>
<comment>
<name>Maram</name>
<count>12</count>
</comment>
<comment>
<name>Japleen</name>
<count>9</count>
</comment>
<comment>
<name>Breeanna</name>
<count>7</count>
</comment>
<comment>
<name>Zaaine</name>
<count>3</count>
</comment>
<comment>
<name>Inika</name>
<count>2</count>
</comment>
</comments>
</commentinfo>'''
tree = ET.fromstring(input)
counts = tree.findall('comments/comment')
summa=0
for item in counts:
try:
k=item.find('count').text
k=int(k)
print k
summa +=k
except:
break
print summa
</code></pre>
| 0 | 2016-08-22T14:56:00Z | 39,083,064 | <p>One of your <code><comment></code> tags has no <code><count></code>:</p>
<pre><code><comment>
<name>Uchenna</name>
</comment>
</code></pre>
<p>This results in <code>item.find('count')</code> being <code>None</code>. Obviously, <code>None</code> doesn't have a <code>.text</code> attribute so an <code>AttributeError</code> is raised. Your broad exception handling catches the <code>AttributeError</code> and terminates the loop early.</p>
<p>This is a good demonstration of why you should never use:</p>
<pre><code>try:
...
except:
...
</code></pre>
<p>You should <em>only</em> catch exceptions that you know how to handle (and then try to keep the code in the <code>try</code> suite as minimal as possible). In this case:</p>
<pre><code>for item in counts:
try:
k=item.find('count').text
k=int(k)
except (AttributeError, ValueError): # missing or malformatted `<count>`.
continue # Skip that tag and keep on summing the others
print k
summa +=k
</code></pre>
| 1 | 2016-08-22T15:10:04Z | [
"python",
"xml"
] |
How to access a variable being updated in a while loop | 39,082,766 | <p>I am trying to access a variable that is being constantly updated in a while loop in a different file. Here is the code I used for testing:</p>
<pre><code># file1
import time
x = 0
while True:
x += 1
time.sleep(2.0)
# file2
from file1 import x
print x
</code></pre>
<p>When I run file2, it starts the while loop from the beginning. I would like to access one instance of x. For example, if x=10, I would want file2 to print 10. Is this possible?</p>
| 0 | 2016-08-22T14:57:04Z | 39,082,921 | <p>I'm not quite sure what you mean. Are you wanting file1 to run, and be incrementing the value of "x" every 2 seconds, indefinitely and that when you run file2 at any time, it pulls the current value of "x" from the program/python instance running "file1"?</p>
<p>If so, this is not how you would approach it. With file2 you are pulling the set variable x=0 from file1. What you need to do is have some form of IPC (Inter-Process Communication) so that file2 can access the value of "x" from file1. You can do this a multitude of ways, including shared memory, a key/value store program like redis or memcached, a database, etc.</p>
<p>If you want to do it via redis or memcached, simply run redis, use the redis library for Python, and call the .incr method for the key "x" every 2 seconds. Then, when you run file2, call the .get method for key "x" and you will get the current value. When file1 is running, it will continue to increment x; when it's not, it won't and will effectively freeze. However, redis will keep the last known value in memory for the key "x".</p>
<p>To do it with a database, you can implement a mySQL database/table and increase the value of "x" in a key column in a table every 2 seconds. You'd have to look at the mySQL libraries for Python.</p>
<p>To do it with shared memory, look at the shared memory functions for Python.</p>
<p>There are also many other ways to share data. You could, simply, write the value of "x" to a file every 2 seconds by opening it, writing the new value, flushing and closing it. Then simply have file2 read that file. Of course with this you then have the issue of race conditions where it reads the file before its updated and gets a stale value, all dependent on the priority of the OS' filesystem writes for that file at that time from that process.</p>
| 2 | 2016-08-22T15:03:24Z | [
"python",
"while-loop"
] |
How to access a variable being updated in a while loop | 39,082,766 | <p>I am trying to access a variable that is being constantly updated in a while loop in a different file. Here is the code I used for testing:</p>
<pre><code># file1
import time
x = 0
while True:
x += 1
time.sleep(2.0)
# file2
from file1 import x
print x
</code></pre>
<p>When I run file2, it starts the while loop from the beginning. I would like to access one instance of x. For example, if x=10, I would want file2 to print 10. Is this possible?</p>
| 0 | 2016-08-22T14:57:04Z | 39,083,009 | <p>You can try the following. First, as there's and infinite loop, importing file1 will block, so you should run the loop in a thread. And second you can wrap the integer being incremented in a list (or any other kind of objects), so you can use the reference to its current value (otherwise you will be importing a value not a reference):</p>
<pre><code># file1
import time
import threading
x = [0]
def update_var(var):
while True:
var[0] += 1
time.sleep(2.0)
threading.Thread(target=update_var, args=(x,)).start()
# file2
from file1 import x
print x[0]
</code></pre>
| 1 | 2016-08-22T15:07:50Z | [
"python",
"while-loop"
] |
Constructing python-click commands | 39,082,806 | <p>I have a command like :</p>
<p><code>$trial login --user</code></p>
<p>For this I used <code>python-click</code> to wrap my python functions and is working fine. I am very new to <code>python-click</code> and having having to construct a command in a certain way than what is mentioned above.</p>
<p>Below is my present code of a file <code>trial.py</code>,</p>
<pre><code>@click.command()
@click.option('--user', prompt='Username', help='Username.')
@click.option('--password', prompt='Password', help='Password.')
def login(user, password):
"""Simple program that greets NAME for a total of COUNT times."""
login_cli(user,password)
print "done"
@click.group()
def cli():
pass
</code></pre>
<p>And my <code>setup.py</code> file looks like the following:</p>
<pre><code>from setuptools import setup
setup(
name='trial',
version='0.1',
py_modules=['trial'],
install_requires=[
'Click',
],
entry_points='''
[console_scripts]
trial=trial:cli
''',
)
</code></pre>
<p>But now I want to change the command a little. I want to make it something like,</p>
<pre><code>$trial k8s login --user
</code></pre>
<p>I want to add a series of k8s commands but don't know how to include it in the code and would want some guidance on the same.</p>
| 0 | 2016-08-22T14:58:39Z | 39,085,237 | <p>You just need to make another <code>group</code> that's a child of <code>cli</code>, so make a <code>k8s</code> function and decorate it with <code>cli.group()</code>, then change <code>login</code> to be a <code>k8s.command()</code></p>
| 0 | 2016-08-22T17:17:34Z | [
"python",
"python-click"
] |
Call function or function (python) | 39,083,022 | <p>I have 4 functions. I want my code to perform the first one AND the second, third, or fourth. I also want at least one (any of them) no matter what unless they all fail.
My initial implementation was:</p>
<pre><code>try:
function1(var)
except:
pass
try:
function2(var) or function3(var) or function4(var)
except:
pass
</code></pre>
<p>If function2 doesn't work, it doesn't go to function3, how might this be coded to account for that?</p>
| 0 | 2016-08-22T15:08:13Z | 39,083,194 | <p>In case the success of failure of a function is determined, whether it raises an exception or not, you could write a helper method, that would try to call a list of functions until a successful one returns.</p>
<pre><code>#!/usr/bin/env python
# coding: utf-8
import sys
def callany(*funs):
"""
Returns the return value of the first successfully called function
otherwise raises an error.
"""
for fun in funs:
try:
return fun()
except Exception as err:
print('call to %s failed' % (fun.__name__), file=sys.stderr)
raise RuntimeError('none of the functions could be called')
if __name__ == '__main__':
def a(): raise NotImplementedError('a')
def b(): raise NotImplementedError('b')
# def c(): raise NotImplementedError('c')
c = lambda: "OK"
x = callany(a, b, c)
print(x)
# call to a failed
# call to b failed
# OK
</code></pre>
<p>The toy implementation above could be improved by adding support for function arguments.</p>
<p>Runnable snippet: <a href="https://glot.io/snippets/ehqk3alcfv" rel="nofollow">https://glot.io/snippets/ehqk3alcfv</a></p>
<p>If the functions indicate success by returning a boolean value, you can use them just as in an ordinary boolean expression.</p>
| 2 | 2016-08-22T15:16:35Z | [
"python",
"function",
"try-catch",
"boolean-logic"
] |
Running python program doesn't work and only gives function address | 39,083,030 | <p>When I run my python program from terminal with <code>python sumSquares.py</code>, I get the following result: <code><function diffSum at 0x1006dfe60></code>
My program looks like this:</p>
<pre><code>def diffSum():
sumSquares = 0
for i in range(0, 100):
sumSquares += i**2
squareSum = 0
for i in range(0, 100):
squareSum += i
squareSum **= 2
print (squareSum)
return sumSquares - squareSum
print(diffSum)
</code></pre>
<p>Even though I have a print statement at the end, it doesn't actually print the result that is returned; it just prints the function address. Any ideas why this is?</p>
| -3 | 2016-08-22T15:08:31Z | 39,083,074 | <p>You need to call the function by adding parentheses after its name, as in:</p>
<pre><code>print(diffsum())
</code></pre>
| 1 | 2016-08-22T15:10:29Z | [
"python"
] |
Error 'is not in list' | 39,083,060 | <p>I am trying to get the index values of the values in outputx[vX]</p>
<pre><code>>>> outs = np.array(outputx[vX]).tolist()
>>> print outs
[0.806, 0.760, 0.8]
>>> print type(outs)
(type 'list')
>>> idx = outputX.index(outs)
error -> [0.806, 0.760, 0.8] is not in list
</code></pre>
<p>What does this mean, what am i doing wrong?</p>
| 0 | 2016-08-22T15:09:52Z | 39,083,262 | <p>The <code>index</code> list method expects a single value, and returns that value's position in the list on which the method is called. If the given value isn't in the list the method will raise a <code>ValueError</code> exception. So, for example:</p>
<pre><code>>>> [2, 4, 6, 8].index(6)
2
>>> [2, 4, 6, 8].index("no such value")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: 'no such value' is not in list
</code></pre>
<p>The error occurs in your code because there is no element in <code>outputX</code> (should that be <code>outputx</code>, by the way Python is case-sensitive) containing the list <code>[0.806, 0.760, 0.8]</code>.</p>
<p>The reason for this is that you are comparing data of different types. It isn't entriely clear from the question what you expect to happen.</p>
| 0 | 2016-08-22T15:20:32Z | [
"python",
"list"
] |
Error 'is not in list' | 39,083,060 | <p>I am trying to get the index values of the values in outputx[vX]</p>
<pre><code>>>> outs = np.array(outputx[vX]).tolist()
>>> print outs
[0.806, 0.760, 0.8]
>>> print type(outs)
(type 'list')
>>> idx = outputX.index(outs)
error -> [0.806, 0.760, 0.8] is not in list
</code></pre>
<p>What does this mean, what am i doing wrong?</p>
| 0 | 2016-08-22T15:09:52Z | 39,084,200 | <p>If you want a list of indices you might use list comprehensions:</p>
<pre><code>idx = [outputX.index(i) for i in outs]
</code></pre>
<p>But you should be dead sure that all values of the <code>outs</code> list are in the <code>outputX</code> list also, otherwise python will throw you a <code>ValueError</code>.</p>
| 0 | 2016-08-22T16:12:02Z | [
"python",
"list"
] |
How to condense multiple different if statements (Python) | 39,083,109 | <p>I am making a quiz in the style of 20 Questions. It uses a text file to create a dictionary with codes relating to answers. At the moment it only has 5 questions which is no where near enough to be accurate with 'guesses' but already it is looking messy and hard to understand</p>
<p>CODES.txt Example Contents:</p>
<p>a1000,A Book</p>
<p>a1111,A Saucepan</p>
<p>Code:</p>
<pre><code>File = open("CODES.txt","r")
CODES = { }
for line in File:
x = line.split(",")
a = x[0]
b = x[1]
c = len(b)-1
b = b[0:c]
CODES[a] = b
print("Think of anything: \n")
Q1 = str(input("Is it a) An Object, b) A Person, c) A Film: "))
if Q1 == "a":
Q2 = input("Is it hard: ")
if Q2 == "0":
Q3 = input("Is it light: ")
if Q3 == "0":
Q4 = input("Is it smaller than your head: ")
if Q4 == "0":
Q5 = input("Is it square: ")
elif Q4 == "1":
Q5 = input("Is it circular: ")
elif Q3 == "1":
Q4 = input("Is it bigger than your head: ")
if Q4 == "0":
Q5 = input("Is it square: ")
elif Q4 == "1":
Q5 = input("Is it circular: ")
elif Q2 == "1":
Q3 = input("Is it heavy: ")
if Q3 == "0":
Q4 = input("Is it smaller than your head: ")
if Q4 == "0":
Q5 = input("Is it square: ")
elif Q4 == "1":
Q5 = input("Is it circular: ")
elif Q3 == "1":
Q4 = input("Is it bigger than your head: ")
if Q4 == "0":
Q5 = input("Is it square: ")
elif Q4 == "1":
Q5 = input("Is it circular: ")
CCODE = str(Q1+Q2+Q3+Q4+Q5)
if CCODE in CODES:
print("You are thinking of " + CODES[CCODE])
else:
NV = str(input("You have outsmarted me. What were you thinking of: "))
File = open("CODES.txt","a")
File.write((CCODE+","+NV+"\n"))
File.close()
</code></pre>
<p>How would i make the question segment, the If-Statements easier to read/understand. Currently i have loads of embedded ones and it only consists of 5 questions each with 2/3 answers.</p>
| 0 | 2016-08-22T15:12:21Z | 39,083,329 | <p>I will try to give you some thoughts but not solve your problem directly:</p>
<ul>
<li>Think about what you can put into a separate function. This would make sense for statements that are repeated several times.</li>
<li>You can very easily return strings from Python functions. Then you can use the returned strings as keys for a dictionary. </li>
<li><p>Also there is the possibility to return functions from other functions as everything is an object in Python. </p>
<p>For example <code>Q4 = input("Is it smaller than your head: ")</code> could be turned into a statement like
<code>obj_size = ask_size()</code> with outputs <code>"small"</code>, <code>"big"</code>. </p></li>
</ul>
<p>I hope that helps you :)</p>
| 1 | 2016-08-22T15:23:56Z | [
"python",
"python-3.x",
"if-statement",
"dictionary"
] |
interval comparison in pandas data frame | 39,083,157 | <p>I am trying to do an interval comparison similar to what is described in <a href="http://stackoverflow.com/questions/13628791/how-do-i-check-whether-an-int-is-between-the-two-numbers">this question</a> as <code>10000 <= number <= 30000</code> but I'm trying to do it in a data frame. For example, below is my sample data and I want to get all rows where latitude is within 1 of my predefined coordinates.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame([[5,7, 'wolf'],
[5,6,'cow'],
[8, 2, 'rabbit'],
[5, 3, 'rabbit'],
[3, 2, 'cow'],
[7, 5, 'rabbit']],
columns = ['lat', 'long', 'type'])
coords = [4,7]
viewShort = df[(coords[0] - 1) <= df['lat'] <= (coords[0] + 1)]
</code></pre>
<p>unfortunately, I get a <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code> when I write it that way.</p>
<p>I realize that I could write it like this instead</p>
<pre><code>viewLong = df[((coords[0] - 1) <= df['lat']) & (df['lat'] <= (coords[0] + 1))]
</code></pre>
<p>but I have to write a lot of these things, so I was trying to make it a bit more compact. What am I doing wrong in the <code>viewShort</code> example? Or is this just not possible with pandas and I have to write it the long way?</p>
<p>Thank you!</p>
<p>Sidenote: the correct <code>viewShort</code> data frame should have four rows:</p>
<pre><code>[5,7,'wolf'],
[5,6,'cow'],
[5,3,'rabbit'],
[3,2,'cow']
</code></pre>
| 0 | 2016-08-22T15:14:44Z | 39,083,212 | <p>Chained comparisons are not supported. You need to do:</p>
<pre><code>df[df['lat'].between(coords[0] - 1, coords[0] + 1)] # inclusive=True by default
Out:
lat long type
0 5 7 wolf
1 5 6 cow
3 5 3 rabbit
4 3 2 cow
</code></pre>
| 1 | 2016-08-22T15:17:44Z | [
"python",
"pandas"
] |
Django allauth Redirect to email verification after social signup | 39,083,287 | <p>I use Twitter for main signup/login and I would like to redirect to 'accounts/email' link after social signup because I want to force new user to provide their emails. I've found same <a href="http://stackoverflow.com/questions/27759407/django-allauth-redirect-after-social-signup">question</a> and answer from <a href="http://stackoverflow.com/users/3849456/anzel">@Anzel</a></p>
<pre><code>from allauth.socialaccount.adapter import DefaultSocialAccountAdapter
class SocialAccountAdapter(DefaultSocialAccountAdapter):
def save_user(self, request, sociallogin, form=None):
super(DefaultSocialAccountAdapter, self).save_user(request, sociallogin, form=form)
return redirect('/accounts/email/')
</code></pre>
<p>but the answer didn't work for me and got this</p>
<pre><code>AttributeError at /accounts/twitter/login/callback/
'super' object has no attribute 'save_user'
Request Method: GET
Request URL: http://localhost:8000/accounts/twitter/login/callback/?oauth_token=HSowSgAAAAAAuTblAAABVrLCOpE&oauth_verifier=cVrwyB2Vfk2Lgsrwg5fqE0wyzrfnwJ3H
Django Version: 1.9.2
Exception Type: AttributeError
Exception Value:
'super' object has no attribute 'save_user'
</code></pre>
| 1 | 2016-08-22T15:21:43Z | 39,101,570 | <p>In setting.py I only add those two lines and forgot about the adapter :</p>
<pre><code>SOCIALACCOUNT_AUTO_SIGNUP = True
SOCIALACCOUNT_EMAIL_REQUIRED = True
</code></pre>
<p>and now after signup or login it redircts new users only to /accounts/social/signup/ and this view actually force the user to submit his eamil and process to a verification.</p>
| 1 | 2016-08-23T12:43:26Z | [
"python",
"django",
"django-allauth"
] |
Special characters get truncated when using bing tts api from python | 39,083,305 | <p>I have modified the python example found at <a href="https://github.com/Microsoft/Cognitive-Speech-TTS/tree/master/Samples-Http/Python" rel="nofollow">https://github.com/Microsoft/Cognitive-Speech-TTS/tree/master/Samples-Http/Python</a> to synthesize voice in spanish changing</p>
<pre><code>"<speak version='1.0' xml:lang='en-us'><voice xml:lang='en-us' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>
</code></pre>
<p>to</p>
<pre><code>"<speak version='1.0' xml:lang='es-ES'><voice xml:lang='es-ES' xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice (es-ES, Pablo, Apollo)'>
</code></pre>
<p>but during the synthesis process non-ASCII characters like 'ñ' get truncated at some step, so they don't appear in the final audio file.</p>
<p>I have checked that it's not a python problem by printing the request string, and characters appear correctly.</p>
| 0 | 2016-08-22T15:22:28Z | 39,112,184 | <p>If you look at the HTTP request, you will see that the http.client library does not encode the string correctly. The easiest workaround is to encode it yourself:</p>
<pre><code>ssml = "<speak version='1.0' xml:lang='es-ES'>...</speak>"
body = ssml.encode('utf8')
</code></pre>
| 0 | 2016-08-23T23:25:23Z | [
"python",
"text-to-speech",
"bing-api",
"microsoft-cognitive"
] |
Why cant I do blob detection on this binary image | 39,083,360 | <p>First, now i am doing the blob detection by Python2.7 with Opencv. What i want to do is to finish the blob detection after the color detection. i want to detect the red circles(marks), and to avoid other blob interference, i want to do color detection first, and then do the blob detection.</p>
<p>and the image after color detection is <a href="http://i.stack.imgur.com/eRCe1.png" rel="nofollow">binary mask</a></p>
<p>now i want to do blob detection on this image, but it doesn't work.
This is my code.</p>
<pre><code>import cv2
import numpy as np;
# Read image
im = cv2.imread("myblob.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10; # the graylevel of images
params.maxThreshold = 200;
params.filterByColor = True
params.blobColor = 255
# Filter by Area
params.filterByArea = False
params.minArea = 10000
detector = cv2.SimpleBlobDetector(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)`
</code></pre>
<p>I am really confused by this code, because it work on this image <a href="http://i.stack.imgur.com/c0AXO.jpg" rel="nofollow">white dots</a>
I think the white dots image is quiet similar with the binary mask,but why cant i do blob detection on the binary image?could anyone tell me the difference or the right code?</p>
<p>Thanks!!</p>
<p>Regards,
Nan</p>
| 2 | 2016-08-22T15:25:33Z | 39,084,845 | <p>Its an opencv bug in the filter by color . All you need to do is to invert the color of the image -> Detect Blobs -> Invert again to get back to the original color </p>
<h1>Code</h1>
<pre><code>import cv2
import numpy as np;
# Read image
im = cv2.imread("myblob.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
im=cv2.bitwise_not(im)
params = cv2.SimpleBlobDetector_Params()
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
im=cv2.bitwise_not(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
</code></pre>
<h1>Output</h1>
<p><a href="http://i.stack.imgur.com/GSZ4l.png" rel="nofollow"><img src="http://i.stack.imgur.com/GSZ4l.png" alt="enter image description here"></a></p>
| 1 | 2016-08-22T16:52:13Z | [
"python",
"opencv",
"image-processing",
"binary",
"blob"
] |
Why cant I do blob detection on this binary image | 39,083,360 | <p>First, now i am doing the blob detection by Python2.7 with Opencv. What i want to do is to finish the blob detection after the color detection. i want to detect the red circles(marks), and to avoid other blob interference, i want to do color detection first, and then do the blob detection.</p>
<p>and the image after color detection is <a href="http://i.stack.imgur.com/eRCe1.png" rel="nofollow">binary mask</a></p>
<p>now i want to do blob detection on this image, but it doesn't work.
This is my code.</p>
<pre><code>import cv2
import numpy as np;
# Read image
im = cv2.imread("myblob.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10; # the graylevel of images
params.maxThreshold = 200;
params.filterByColor = True
params.blobColor = 255
# Filter by Area
params.filterByArea = False
params.minArea = 10000
detector = cv2.SimpleBlobDetector(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)`
</code></pre>
<p>I am really confused by this code, because it work on this image <a href="http://i.stack.imgur.com/c0AXO.jpg" rel="nofollow">white dots</a>
I think the white dots image is quiet similar with the binary mask,but why cant i do blob detection on the binary image?could anyone tell me the difference or the right code?</p>
<p>Thanks!!</p>
<p>Regards,
Nan</p>
| 2 | 2016-08-22T15:25:33Z | 39,085,129 | <p>Easiest way would be what @ArjitMukherjee said.</p>
<p>But I also echo what @meetaig commented initially about difference in structure of blobs in both the images</p>
<blockquote>
<p>A clue for why it might not work could be the structure of the blobs.
In the first image the white pixels are not all connected to a big
blob (meaning there are a few single pixels "floating around") whereas
in the second image the circles are perfect blobs</p>
</blockquote>
<p>You need to fine tune your algorithm so that it suits/aligns with different structures of blobs</p>
<p>I kind of did quick fine tuning, which could partially meet your requirements:</p>
<pre><code>import cv2
import numpy as np;
# Read image
im = cv2.imread("eRCe1.png", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10; # the graylevel of images
params.maxThreshold = 200;
params.filterByColor = True
params.blobColor = 255
# Filter by Area
params.filterByArea = True
params.minArea = 300
detector = cv2.SimpleBlobDetector(params)
# Detect blobs.
keypoints = detector.detect(im)
print keypoints
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
</code></pre>
<p>Executed the above code on the both the same images that you gave, below are the outputs</p>
<p><strong>Sample 1:</strong></p>
<p><a href="http://i.stack.imgur.com/P7P1c.png" rel="nofollow"><img src="http://i.stack.imgur.com/P7P1c.png" alt="sample #1"></a></p>
<p><strong>Sample 2:</strong></p>
<p><a href="http://i.stack.imgur.com/2B3v1.png" rel="nofollow"><img src="http://i.stack.imgur.com/2B3v1.png" alt="sample #2"></a></p>
| 0 | 2016-08-22T17:10:55Z | [
"python",
"opencv",
"image-processing",
"binary",
"blob"
] |
Why cant I do blob detection on this binary image | 39,083,360 | <p>First, now i am doing the blob detection by Python2.7 with Opencv. What i want to do is to finish the blob detection after the color detection. i want to detect the red circles(marks), and to avoid other blob interference, i want to do color detection first, and then do the blob detection.</p>
<p>and the image after color detection is <a href="http://i.stack.imgur.com/eRCe1.png" rel="nofollow">binary mask</a></p>
<p>now i want to do blob detection on this image, but it doesn't work.
This is my code.</p>
<pre><code>import cv2
import numpy as np;
# Read image
im = cv2.imread("myblob.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10; # the graylevel of images
params.maxThreshold = 200;
params.filterByColor = True
params.blobColor = 255
# Filter by Area
params.filterByArea = False
params.minArea = 10000
detector = cv2.SimpleBlobDetector(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)`
</code></pre>
<p>I am really confused by this code, because it work on this image <a href="http://i.stack.imgur.com/c0AXO.jpg" rel="nofollow">white dots</a>
I think the white dots image is quiet similar with the binary mask,but why cant i do blob detection on the binary image?could anyone tell me the difference or the right code?</p>
<p>Thanks!!</p>
<p>Regards,
Nan</p>
| 2 | 2016-08-22T15:25:33Z | 39,085,654 | <p>It looks that the blob detector has <code>filterByInertia</code> and <code>filterByConvexity</code> parameters enabled by default.
You can check this in your system:</p>
<pre><code>import cv2
params = cv2.SimpleBlobDetector_Params()
print params.filterByColor
print params.filterByArea
print params.filterByCircularity
print params.filterByInertia
print params.filterByConvexity
</code></pre>
<p>So when you call <code>detector = cv2.SimpleBlobDetector(params)</code> you are actually filtering also by inertia and convexity with the default min and max values.</p>
<p>If you explicitly disable those filtering criteria:</p>
<pre><code># Disable unwanted filter criteria params
params.filterByInertia = False
params.filterByConvexity = False
</code></pre>
<p>... and then call <code>detector = cv2.SimpleBlobDetector(params)</code> you get the following image:
<a href="http://i.stack.imgur.com/UxBZY.png" rel="nofollow"><img src="http://i.stack.imgur.com/UxBZY.png" alt="blobing result"></a></p>
<p>The third blob in that image is caused by the white frame on the lower right of your image.
You can crop the image, if the frame is always in the same place, or you can use the parameters to filter by circularity and remove the undesired blob:</p>
<pre><code>params.filterByCircularity = True
params.minCircularity = 0.1
</code></pre>
<p>And you will finally get:</p>
<p><a href="http://i.stack.imgur.com/KPx99.png" rel="nofollow"><img src="http://i.stack.imgur.com/KPx99.png" alt="enter image description here"></a></p>
| 2 | 2016-08-22T17:44:14Z | [
"python",
"opencv",
"image-processing",
"binary",
"blob"
] |
Untar file in Python script with wildcard | 39,083,448 | <p>I am trying in a Python script to import a tar.gz file from HDFS and then untar it. The file comes as follow <b>20160822073413-EoRcGvXMDIB5SVenEyD4pOEADPVPhPsg.tar.gz</b>, it has always the same structure.</p>
<p>In my python script, I would like to copy it locally and the extract the file. I am using the following command to do this:</p>
<pre><code>import subprocess
import os
import datetime
import time
today = time.strftime("%Y%m%d")
#Copy tar file from HDFS to local server
args = ["hadoop","fs","-copyToLocal", "/locationfile/" + today + "*"]
p=subprocess.Popen(args)
p.wait()
#Untar the CSV file
args = ["tar","-xzvf",today + "*"]
p=subprocess.Popen(args)
p.wait()
</code></pre>
<p>The import works perfectly but I am not able to extract the file, I am getting the following error:</p>
<pre><code>['tar', '-xzvf', '20160822*.tar']
tar (child): 20160822*.tar: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
put: `reportResults.csv': No such file or directory
</code></pre>
<p>Can anyone help me?</p>
<p>Thanks a lot!</p>
| 0 | 2016-08-22T15:30:27Z | 39,083,616 | <p>Try with the <code>shell</code> option:</p>
<pre><code>p=subprocess.Popen(args, shell=True)
</code></pre>
<p>From <a href="https://docs.python.org/2/library/subprocess.html#frequently-used-arguments" rel="nofollow">the docs</a>:</p>
<blockquote>
<p>If shell is True, the specified command will be executed through the
shell. This can be useful if you are using Python primarily for the
enhanced control flow it offers over most system shells and still want
convenient access to other shell features such as shell pipes,
filename wildcards, environment variable expansion, and expansion of ~
to a userâs home directory.</p>
</blockquote>
<p>And notice:</p>
<blockquote>
<p>However, note that Python itself offers implementations of many
shell-like features (in particular, glob, fnmatch, os.walk(),
os.path.expandvars(), os.path.expanduser(), and shutil).</p>
</blockquote>
| 2 | 2016-08-22T15:39:36Z | [
"python",
"unix",
"rar"
] |
Untar file in Python script with wildcard | 39,083,448 | <p>I am trying in a Python script to import a tar.gz file from HDFS and then untar it. The file comes as follow <b>20160822073413-EoRcGvXMDIB5SVenEyD4pOEADPVPhPsg.tar.gz</b>, it has always the same structure.</p>
<p>In my python script, I would like to copy it locally and the extract the file. I am using the following command to do this:</p>
<pre><code>import subprocess
import os
import datetime
import time
today = time.strftime("%Y%m%d")
#Copy tar file from HDFS to local server
args = ["hadoop","fs","-copyToLocal", "/locationfile/" + today + "*"]
p=subprocess.Popen(args)
p.wait()
#Untar the CSV file
args = ["tar","-xzvf",today + "*"]
p=subprocess.Popen(args)
p.wait()
</code></pre>
<p>The import works perfectly but I am not able to extract the file, I am getting the following error:</p>
<pre><code>['tar', '-xzvf', '20160822*.tar']
tar (child): 20160822*.tar: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
put: `reportResults.csv': No such file or directory
</code></pre>
<p>Can anyone help me?</p>
<p>Thanks a lot!</p>
| 0 | 2016-08-22T15:30:27Z | 39,083,762 | <p>In addition to @martriay answer, you also got a typo - you wrote "20160822*.tar", while your file's pattern is "20160822*.tar.gz"</p>
<p>When applying <code>shell=True</code>, the command should be passed as a whole string (see <a href="https://docs.python.org/3/library/subprocess.html#popen-constructor" rel="nofollow">documentation</a>), like so:</p>
<pre><code>p=subprocess.Popen('tar -xzvf 20160822*.tar.gz', shell=True)
</code></pre>
<p>If you don't need <code>p</code>, you can simply use <a href="https://docs.python.org/3/library/subprocess.html#subprocess.call" rel="nofollow">subprocess.call</a>:</p>
<pre><code>subprocess.call('tar -xzvf 20160822*.tar.gz', shell=True)
</code></pre>
<p><strong>But</strong> I suggest you use more standard libraries, like so:</p>
<pre><code>import glob
import tarfile
today = "20160822" # compute your common prefix here
target_dir = "/tmp" # choose where ever you want to extract the content
for targz_file in glob.glob('%s*.tar.gz' % today):
with tarfile.open(targz_file, 'r:gz') as opened_targz_file:
opened_targz_file.extractall(target_dir)
</code></pre>
| 2 | 2016-08-22T15:47:35Z | [
"python",
"unix",
"rar"
] |
Untar file in Python script with wildcard | 39,083,448 | <p>I am trying in a Python script to import a tar.gz file from HDFS and then untar it. The file comes as follow <b>20160822073413-EoRcGvXMDIB5SVenEyD4pOEADPVPhPsg.tar.gz</b>, it has always the same structure.</p>
<p>In my python script, I would like to copy it locally and the extract the file. I am using the following command to do this:</p>
<pre><code>import subprocess
import os
import datetime
import time
today = time.strftime("%Y%m%d")
#Copy tar file from HDFS to local server
args = ["hadoop","fs","-copyToLocal", "/locationfile/" + today + "*"]
p=subprocess.Popen(args)
p.wait()
#Untar the CSV file
args = ["tar","-xzvf",today + "*"]
p=subprocess.Popen(args)
p.wait()
</code></pre>
<p>The import works perfectly but I am not able to extract the file, I am getting the following error:</p>
<pre><code>['tar', '-xzvf', '20160822*.tar']
tar (child): 20160822*.tar: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
put: `reportResults.csv': No such file or directory
</code></pre>
<p>Can anyone help me?</p>
<p>Thanks a lot!</p>
| 0 | 2016-08-22T15:30:27Z | 39,096,316 | <p>I found a way to do what I needed, instead of using os command, I used python tar command and it works!</p>
<pre><code>import tarfile
import glob
os.chdir("/folder_to_scan/")
for file in glob.glob("*.tar.gz"):
print(file)
tar = tarfile.open(file)
tar.extractall()
</code></pre>
<p>Hope this help.</p>
<p>Regards
Majid</p>
| 0 | 2016-08-23T08:40:05Z | [
"python",
"unix",
"rar"
] |
Multiple Regression in Python (with Factor Selection) | 39,083,462 | <p>All the threads that I've read with respect to Multiple Regression in Python mostly recommend the OLS function within Statsmodels. Here's the problem that I am encountering, I am trying to explain a fund's returns (HYFAX highlighted in green) by regressing its returns against 14 independent variables that could explain the returns of this fund. This should have a significant F test and spits out the best fit model with the highest adjusted R squared after going through the step by step iterations of the factors. Is there a way to do that in python? </p>
<p><a href="http://i.stack.imgur.com/m5f8r.jpg" rel="nofollow">Fund returns vs Factors</a></p>
| 1 | 2016-08-22T15:31:10Z | 39,135,999 | <p>Sounds like you just want to see the results from your model fit. Heres an example with 1 predictor, but easily extendable to 14:</p>
<p>Import statsmodels and specify the model you want to build (this is where you'd include your 14 predictors):</p>
<pre><code>import statsmodels.api as sm
#read in your data however you want and assign your y, x1...x14 variables
model = sm.OLS(x, y)
</code></pre>
<p>Fit the model:</p>
<pre><code>results = model.fit()
</code></pre>
<p>Now just display a summary of your model fit:</p>
<pre><code>print(results.summary())
</code></pre>
<p>That will give you your adjusted R squared value, F test value, beta weights etc. Should look something like this:</p>
<pre><code> OLS Regression Results
==============================================================================
Dep. Variable: x R-squared: 0.601
Model: OLS Adj. R-squared: 0.594
Method: Least Squares F-statistic: 87.38
Date: Wed, 24 Aug 2016 Prob (F-statistic): 3.56e-13
Time: 19:51:25 Log-Likelihood: -301.81
No. Observations: 59 AIC: 605.6
Df Residuals: 58 BIC: 607.7
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
y 0.8095 0.087 9.348 0.000 0.636 0.983
==============================================================================
Omnibus: 0.119 Durbin-Watson: 1.607
Prob(Omnibus): 0.942 Jarque-Bera (JB): 0.178
Skew: -0.099 Prob(JB): 0.915
Kurtosis: 2.818 Cond. No. 1.00
==============================================================================
</code></pre>
| 0 | 2016-08-25T02:57:22Z | [
"python",
"numpy",
"scikit-learn",
"regression"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.