title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How to override clean method of a form generated by UpdateView? | 38,884,616 | <p>Is it possible to override a <code>clean method</code> of the form which is generated by <strong>Class Based View</strong> - <code>UpdateView</code>?</p>
<p>In the form, I would override the <code>clean</code> method to check whether either the first or the second field is filled.</p>
<p>Form would be like:</p>
<pre><code>class MyForm(forms.ModelForm):
    ...
    def clean(self):
        super(MyForm, self).clean()
        if bool(self.cleaned_data['first_field']) == bool(self.cleaned_data['second_field']):
            raise ValidationError("Please, fill the first or second field")
</code></pre>
<p>View:</p>
<pre><code>class EditOrderView(UpdateView):
    model = Job
    fields = ['language_from', 'language_to', 'level', 'short_description', 'notes',
              'first_field', 'second_field']
    template_name = 'auth/jobs/update-order.html'

    def get_object(self, queryset=None):
        return get_object_or_404(self.model, pk=self.kwargs["pk"], customer=self.request.user)

    def get_success_url(self):
        return '/my-orders/'

    def form_valid(self, form):
        self.order = form.save()
        email.AdminNotifications.edited_order(self.order)
        return HttpResponseRedirect(self.get_success_url())
</code></pre>
| 0 | 2016-08-10T22:33:16Z | 38,884,798 | <p>You can make your view use your form by setting <a href="https://docs.djangoproject.com/en/1.10/ref/class-based-views/mixins-editing/#django.views.generic.edit.FormMixin.form_class" rel="nofollow"><code>form_class</code></a>.</p>
<pre><code>class EditOrderView(UpdateView):
    model = Job
    form_class = MyForm
    ...
</code></pre>
| 2 | 2016-08-10T22:50:28Z | [
"python",
"django",
"django-views"
] |
How to set sys.modules["m"]=<current module> | 38,884,630 | <p>I have this structure:</p>
<pre><code>merged.py:
    everything

main.py:
    import merged
    sys.modules['othermodule'] = merged
    othermodule.f()
</code></pre>
<p>If I merged this into a single file <code>main.py</code>, is there any way to make <code>othermodule</code> resolve to <code>main</code> while inside of <code>main.py</code>?</p>
<p>This situation happens when using the <a href="https://github.com/xonsh/amalgamate" rel="nofollow">amalgamate</a> package to concatenate everything into a single file, but the result is two files with a structure like the above, and I'm trying to get it down to one file.</p>
| 2 | 2016-08-10T22:34:37Z | 38,884,716 | <pre><code>sys.modules['othermodule'] = sys.modules[__name__]
</code></pre>
<p>This gets the current module you are in and registers it as <code>othermodule</code>.
Now you can:</p>
<pre><code>import othermodule
othermodule.f()
</code></pre>
<p>If you prefer to assign module to variable you can simply:</p>
<pre><code>othermodule = sys.modules[__name__]
othermodule.f()
</code></pre>
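<p>A minimal, self-contained sketch of the same trick (the function <code>f</code> and its message are just illustrations): registering the current module under a second name in <code>sys.modules</code> lets a later <code>import othermodule</code> resolve to it without any file lookup.</p>

```python
import sys

def f():
    return "hello from the current module"

# Register the module we are currently executing in under a second name.
# When run as a script, __name__ is '__main__' and that key is in sys.modules.
sys.modules['othermodule'] = sys.modules[__name__]

# This import now resolves to the alias instead of searching for a file.
import othermodule

print(othermodule.f())  # hello from the current module
```

Because the alias is the very same module object, any attribute defined later on the running module is also visible through <code>othermodule</code>.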
| 1 | 2016-08-10T22:41:09Z | [
"python"
] |
How to calculate timedelta pairs from a dictionary | 38,884,651 | <p>I have a dictionary with key-value pairs of type timedelta in '%H:%M:%S'</p>
<p>i.e.</p>
<pre><code>myDict = {'0:00:12': '0:40:10', '0:00:18': '0:04:58', '0:00:50': '0:02:35'}
</code></pre>
<p>What I want to be able to do is calculate the difference between the key and the value for each pair and save those differences to a list</p>
<p>i.e. for first pair <code>'0:40:10' - '0:00:12'</code></p>
<p>Calculate <code>'0:40:10' - '0:00:12'</code></p>
<p>Which is <code>0:39:58</code>, then save that to myList, so that myList looks like:</p>
<pre><code>myList = ['0:39:58', 'difference-for-pair-2', 'difference-for-pair-3' ... ]
</code></pre>
<p>I got as far as </p>
<pre><code>FMT = '%H:%M:%S'
for key, value in myDict.iteritems():
    print datetime.strptime(value, FMT) - datetime.strptime(key, FMT)
</code></pre>
<p>this prints the differences in times I want like: </p>
<pre><code>0:02:05
0:02:57
0:00:31
...
</code></pre>
<p>Which is correct, but I can't figure out how to save these values to a list rather than just printing on screen</p>
<p>How do I do this?</p>
| 0 | 2016-08-10T22:35:36Z | 38,884,668 | <p>Use a <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a></p>
<pre><code>[datetime.strptime(value, FMT) - datetime.strptime(key, FMT) for key, value in myDict.iteritems()]
</code></pre>
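<p>For example, on Python 3 (where <code>iteritems()</code> is gone in favour of <code>items()</code>), wrapping each difference in <code>str()</code> gives back the <code>'H:MM:SS'</code> strings the question asks for:</p>

```python
from datetime import datetime

myDict = {'0:00:12': '0:40:10', '0:00:18': '0:04:58', '0:00:50': '0:02:35'}
FMT = '%H:%M:%S'

# str() turns each timedelta back into an 'H:MM:SS' string
myList = [str(datetime.strptime(value, FMT) - datetime.strptime(key, FMT))
          for key, value in myDict.items()]

print(sorted(myList))  # ['0:01:45', '0:04:40', '0:39:58']
```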
| 1 | 2016-08-10T22:37:07Z | [
"python",
"datetime",
"dictionary"
] |
How to calculate timedelta pairs from a dictionary | 38,884,651 | <p>I have a dictionary with key-value pairs of type timedelta in '%H:%M:%S'</p>
<p>i.e.</p>
<pre><code>myDict = {'0:00:12': '0:40:10', '0:00:18': '0:04:58', '0:00:50': '0:02:35'}
</code></pre>
<p>What I want to be able to do is calculate the difference between the key and the value for each pair and save those differences to a list</p>
<p>i.e. for first pair <code>'0:40:10' - '0:00:12'</code></p>
<p>Calculate <code>'0:40:10' - '0:00:12'</code></p>
<p>Which is <code>0:39:58</code>, then save that to myList, so that myList looks like:</p>
<pre><code>myList = ['0:39:58', 'difference-for-pair-2', 'difference-for-pair-3' ... ]
</code></pre>
<p>I got as far as </p>
<pre><code>FMT = '%H:%M:%S'
for key, value in myDict.iteritems():
    print datetime.strptime(value, FMT) - datetime.strptime(key, FMT)
</code></pre>
<p>this prints the differences in times I want like: </p>
<pre><code>0:02:05
0:02:57
0:00:31
...
</code></pre>
<p>Which is correct, but I can't figure out how to save these values to a list rather than just printing on screen</p>
<p>How do I do this?</p>
| 0 | 2016-08-10T22:35:36Z | 38,884,687 | <p>Initialise a container to collect the results, and append them inside the loop</p>
<pre><code>results = []
for key, value in myDict.iteritems():
    result = datetime.strptime(value, FMT) - datetime.strptime(key, FMT)
    results.append(result)
</code></pre>
<p>This is the most basic way to get your desired result. However, most python devs would rather handle this directly with a list comprehension like so:</p>
<pre><code>[datetime.strptime(value, FMT) - datetime.strptime(key, FMT) for key, value in myDict.iteritems()]
</code></pre>
| 0 | 2016-08-10T22:38:33Z | [
"python",
"datetime",
"dictionary"
] |
How to calculate timedelta pairs from a dictionary | 38,884,651 | <p>I have a dictionary with key-value pairs of type timedelta in '%H:%M:%S'</p>
<p>i.e.</p>
<pre><code>myDict = {'0:00:12': '0:40:10', '0:00:18': '0:04:58', '0:00:50': '0:02:35'}
</code></pre>
<p>What I want to be able to do is calculate the difference between the key and the value for each pair and save those differences to a list</p>
<p>i.e. for first pair <code>'0:40:10' - '0:00:12'</code></p>
<p>Calculate <code>'0:40:10' - '0:00:12'</code></p>
<p>Which is <code>0:39:58</code>, then save that to myList, so that myList looks like:</p>
<pre><code>myList = ['0:39:58', 'difference-for-pair-2', 'difference-for-pair-3' ... ]
</code></pre>
<p>I got as far as </p>
<pre><code>FMT = '%H:%M:%S'
for key, value in myDict.iteritems():
    print datetime.strptime(value, FMT) - datetime.strptime(key, FMT)
</code></pre>
<p>this prints the differences in times I want like: </p>
<pre><code>0:02:05
0:02:57
0:00:31
...
</code></pre>
<p>Which is correct, but I can't figure out how to save these values to a list rather than just printing on screen</p>
<p>How do I do this?</p>
 | 0 | 2016-08-10T22:35:36Z | 38,885,351 | <p>Combining the answers of @wim, @user35269 and @RandyTek, here is the final code. Note that converting the datetime object to a string gives the desired output, to save or print out later:</p>
<pre><code>from datetime import datetime

myDict = {'0:00:12': '0:40:10', '0:00:18': '0:04:58', '0:00:50': '0:02:35'}
FMT = '%H:%M:%S'
results = []
for key, value in myDict.iteritems():
    result = datetime.strptime(value, FMT) - datetime.strptime(key, FMT)
    results.append(str(result))
print results
</code></pre>
<p>Here is how it looks:</p>
<pre><code>['0:04:40', '0:39:58', '0:01:45']
</code></pre>
| 0 | 2016-08-11T00:01:24Z | [
"python",
"datetime",
"dictionary"
] |
Writing multiple files into one file using Python, while taking input from the user to choose the files to scan | 38,884,686 | <p>Ok, so I have code that looks like this:</p>
<pre><code>input_name="PLACEHOLDER"
while input_name != "":
    input_name=input('Part Name: ')
    with open("/pathway/%s.txt" %input_name ,"r") as read_data, open("output.txt","w") as output:
        if part_name != "":
            f=input_data.read()
            print(input_data)
            output.write(part_name)
            output.write(date)
            output.write(y)
        else:
            read_data.close()
            output.close()
<p>I know it looks a little broken, but what I need to do is fix the loop, because I need to be able to take multiple inputs, and write each of those inputs(file names) to the same file at the end of the program. I probably need at least one more loop in here, I'm just looking for ideas or a kick in the right direction. I have other formatting code in there, this is just the bare bones, looking for an idea on what kind of loops I could run. Thanks to anyone who takes the time to look at this for me!</p>
| 1 | 2016-08-10T22:38:19Z | 38,884,788 | <p>You can keep the <code>output.txt</code> open from the beginning of the execution and open each file after the user input its name.</p>
<p>Example (not tested):
</p>
<pre><code>with open("output.txt", "w") as output:
    while True:
        input_name = input('Part Name: ').strip()
        if input_name == '':
            break
        with open("/pathway/%s.txt" % input_name, "r") as read_data:
            output.write(read_data.read())
</code></pre>
<p>Remember that you don't need to close the file if you open it with a <code>with</code> statement.</p>
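<p>The same structure can be exercised without <code>input()</code> by first writing a couple of throwaway part files into a temporary directory (the part names and contents here are made up for the demo):</p>

```python
import os
import tempfile

# Create two hypothetical part files to stand in for /pathway/<name>.txt
tmpdir = tempfile.mkdtemp()
for part, text in [("bolt", "bolt specs\n"), ("nut", "nut specs\n")]:
    with open(os.path.join(tmpdir, part + ".txt"), "w") as f:
        f.write(text)

# Keep output.txt open for the whole run and append each requested part to it
with open(os.path.join(tmpdir, "output.txt"), "w") as output:
    for input_name in ["bolt", "nut"]:  # stands in for the input() loop
        with open(os.path.join(tmpdir, input_name + ".txt")) as read_data:
            output.write(read_data.read())

with open(os.path.join(tmpdir, "output.txt")) as f:
    merged = f.read()

print(merged == "bolt specs\nnut specs\n")  # True
```

Opening <code>output.txt</code> once, outside the loop, is what keeps every part in the same file; reopening it in <code>"w"</code> mode per iteration would truncate it each time.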
| 1 | 2016-08-10T22:49:18Z | [
"python",
"python-3.x"
] |
Writing multiple files into one file using Python, while taking input from the user to choose the files to scan | 38,884,686 | <p>Ok, so I have code that looks like this:</p>
<pre><code>input_name="PLACEHOLDER"
while input_name != "":
    input_name=input('Part Name: ')
    with open("/pathway/%s.txt" %input_name ,"r") as read_data, open("output.txt","w") as output:
        if part_name != "":
            f=input_data.read()
            print(input_data)
            output.write(part_name)
            output.write(date)
            output.write(y)
        else:
            read_data.close()
            output.close()
<p>I know it looks a little broken, but what I need to do is fix the loop, because I need to be able to take multiple inputs, and write each of those inputs(file names) to the same file at the end of the program. I probably need at least one more loop in here, I'm just looking for ideas or a kick in the right direction. I have other formatting code in there, this is just the bare bones, looking for an idea on what kind of loops I could run. Thanks to anyone who takes the time to look at this for me!</p>
| 1 | 2016-08-10T22:38:19Z | 38,884,862 | <p>Just going to mockup some code for you to help guide you, no guarantees this will work to any degree, but should get you started.</p>
<p>First off, lets store all the part names in a list so we can loop over them later on:</p>
<pre><code>input_name = []
user_input = input('Part Name: ')
while user_input != "":
    input_name.append(user_input)
    user_input = input('Part Name: ')
</code></pre>
<p>Now let's loop through all the files that we just got:</p>
<pre><code>with open("output.txt", "w") as output:
    for file_name in input_name:
        with open("/pathway/%s.txt" % file_name, "r") as read_data:
            # any thing file related here
            print(input_data)
            output.write(part_name)
            output.write(date)
            output.write(y)
print("All done")
</code></pre>
<p>That way you get all the user input at once, and process all the data at once.</p>
| 1 | 2016-08-10T22:57:25Z | [
"python",
"python-3.x"
] |
Replace each letter in a last name by the consecutive letter in the alphabet | 38,884,715 | <p>How to replace each letter in a last name by the consecutive letter in the alphabet? I need this script as a masking tool.</p>
<p>Logic for last name: (a change to b, b change to c, ...., z change to a)</p>
<p>Example: John Doe will become John Epf</p>
<p>Input File: <code>names.txt</code></p>
<pre><code>John yi
kary Strong
Joe Piazza
So man
</code></pre>
| -3 | 2016-08-10T22:41:09Z | 38,884,746 | <p>This is called <a href="https://en.wikipedia.org/wiki/Caesar_cipher" rel="nofollow">Caesar's cipher</a>.</p>
<p>Take a look at how it's done here: <a href="http://stackoverflow.com/a/8895517/6664393">http://stackoverflow.com/a/8895517/6664393</a></p>
<p>You'll have to change it a little to allow uppercase characters as well:</p>
<pre><code>import string

def caesar(plaintext, shift):
    alphabet_lower = string.ascii_lowercase
    alphabet_upper = string.ascii_uppercase
    alphabet = alphabet_lower + alphabet_upper
    shifted_alphabet_lower = alphabet_lower[shift:] + alphabet_lower[:shift]
    shifted_alphabet_upper = alphabet_upper[shift:] + alphabet_upper[:shift]
    shifted_alphabet = shifted_alphabet_lower + shifted_alphabet_upper
    table = str.maketrans(alphabet, shifted_alphabet)  # string.maketrans on Python 2
    return plaintext.translate(table)
</code></pre>
<p>use <code>shift = 1</code> to shift by one.</p>
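<p>Applied to the sample file, only the second word of each line should be shifted. A Python 3 sketch (where <code>maketrans</code> lives on <code>str</code>; the function names are just illustrative) might look like:</p>

```python
import string

def shift_by_one(text):
    # Build one translation table covering both cases, with wraparound
    lower, upper = string.ascii_lowercase, string.ascii_uppercase
    table = str.maketrans(lower + upper,
                          lower[1:] + lower[:1] + upper[1:] + upper[:1])
    return text.translate(table)

def mask_last_name(line):
    # Assumes each line is exactly "first last", as in names.txt
    first, last = line.split()
    return first + " " + shift_by_one(last)

print(mask_last_name("John Doe"))  # John Epf
```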
| 1 | 2016-08-10T22:44:11Z | [
"python",
"python-3.x"
] |
Replace each letter in a last name by the consecutive letter in the alphabet | 38,884,715 | <p>How to replace each letter in a last name by the consecutive letter in the alphabet? I need this script as a masking tool.</p>
<p>Logic for last name: (a change to b, b change to c, ...., z change to a)</p>
<p>Example: John Doe will become John Epf</p>
<p>Input File: <code>names.txt</code></p>
<pre><code>John yi
kary Strong
Joe Piazza
So man
</code></pre>
| -3 | 2016-08-10T22:41:09Z | 38,884,767 | <p>The problem as defined in your question can be solved as follows:</p>
<pre><code>parts = name.split()
parts[1] = ''.join([chr((ord(c) - 65 + 1) % 26 + 65)
                    if ord(c) < 91 else
                    chr((ord(c) - 97 + 1) % 26 + 97)
                    for c in parts[1]])
' '.join(parts)
</code></pre>
<p>Here, I define the last name as the second word of the string, this of course is a strong assumption, but improving on this is not the main problem in the question.</p>
<p>Shifting the characters is done inside a list comprehension, where each character is processed separately, and first converted to its ASCII code using <code>ord</code>. The ASCII codes of upper case letters are 65-90 (<code>A</code>-<code>Z</code>), and the ASCII codes of lowercase letters are 97-122 (<code>a</code>-<code>z</code>). Therefore, a condition <code>ord(c) < 91</code> is used to separate the cases. Then, in each case, the ASCII code is converted to a value in the range 0-25, shifted (in the example, incremented by 1), and modulo operation <code>% 26</code> is used to convert shifted <code>z</code> back into <code>a</code>. The resulting value is then converted back to the proper range for the letter ASCII codes.</p>
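<p>The wraparound arithmetic described above can be checked directly on single characters; a shifted <code>z</code> comes back as <code>a</code> and a shifted <code>Z</code> as <code>A</code>:</p>

```python
def shift_char(c):
    # Upper-case ASCII is 65-90, lower-case is 97-122
    if ord(c) < 91:
        return chr((ord(c) - 65 + 1) % 26 + 65)
    return chr((ord(c) - 97 + 1) % 26 + 97)

print(shift_char('a'), shift_char('z'), shift_char('Z'))  # b a A
```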
| 0 | 2016-08-10T22:47:11Z | [
"python",
"python-3.x"
] |
Calculate every 5 minute returns using 1 minute data in pandas dataframe | 38,884,807 | <p>I have 1 minute price data as Python pandas dataframe like this:</p>
<pre><code> Date Time Open High Low Close
390 2004-04-13 1900-01-01 09:31:00 1146.210 1147.020 1146.210 1147.020
391 2004-04-13 1900-01-01 09:32:00 1147.120 1147.339 1147.120 1147.219
392 2004-04-13 1900-01-01 09:33:00 1147.100 1147.630 1147.100 1147.630
393 2004-04-13 1900-01-01 09:34:00 1147.700 1147.700 1147.439 1147.469
394 2004-04-13 1900-01-01 09:35:00 1147.560 1147.730 1147.560 1147.680
395 2004-04-13 1900-01-01 09:36:00 1147.700 1147.700 1147.640 1147.640
396 2004-04-13 1900-01-01 09:37:00 1147.810 1147.810 1147.430 1147.430
397 2004-04-13 1900-01-01 09:38:00 1147.310 1147.310 1147.110 1147.110
398 2004-04-13 1900-01-01 09:39:00 1147.050 1147.050 1146.870 1146.870
399 2004-04-13 1900-01-01 09:40:00 1146.860 1147.120 1146.860 1147.110
400 2004-04-13 1900-01-01 09:41:00 1147.020 1147.170 1147.000 1147.170
401 2004-04-13 1900-01-01 09:42:00 1147.219 1147.250 1147.150 1147.210
402 2004-04-13 1900-01-01 09:43:00 1147.210 1147.210 1146.969 1146.969
403 2004-04-13 1900-01-01 09:44:00 1146.850 1146.850 1146.510 1146.510
404 2004-04-13 1900-01-01 09:45:00 1146.390 1146.510 1146.280 1146.510
405 2004-04-13 1900-01-01 09:46:00 1146.110 1146.110 1144.819 1144.819
406 2004-04-13 1900-01-01 09:47:00 1144.439 1144.439 1144.060 1144.060
407 2004-04-13 1900-01-01 09:48:00 1144.200 1144.350 1144.120 1144.120
408 2004-04-13 1900-01-01 09:49:00 1143.890 1143.930 1143.890 1143.930
409 2004-04-13 1900-01-01 09:50:00 1143.910 1144.010 1143.770 1144.010
410 2004-04-13 1900-01-01 09:51:00 1144.210 1144.360 1144.210 1144.360
411 2004-04-13 1900-01-01 09:52:00 1144.490 1144.850 1144.490 1144.850
412 2004-04-13 1900-01-01 09:53:00 1145.110 1145.219 1144.910 1144.910
413 2004-04-13 1900-01-01 09:54:00 1144.930 1144.969 1144.930 1144.960
414 2004-04-13 1900-01-01 09:55:00 1144.920 1144.920 1144.770 1144.770
415 2004-04-13 1900-01-01 09:56:00 1144.830 1144.939 1144.800 1144.800
</code></pre>
<p>I want to calculate the 5-minute returns, that is, log(09:35:00 Close/<strong>09:31:00 Open</strong>), log(09:40:00 Close/09:35:00 Close),...,log(15:55:00 Close/15:50:00 Close), log(16:00:00 Close/15:55:00 Close). </p>
<p>And then I want to take the sum of quartic returns. How can I do this? Thanks.</p>
<p>If I use dataframe.shift(5) and then calculate the returns, what I obtain is the rolling 5-minute returns, which is not exactly what I want.</p>
 | 1 | 2016-08-10T22:51:20Z | 38,884,926 | <p>Use <code>pd.TimeGrouper('5T')</code>:</p>
<pre><code>df = df.set_index(df.Date + (df.Time - pd.to_datetime(df.Time.dt.date)))
cols = ['Open', 'High', 'Low', 'Close']
agg = np.log(df[cols]).groupby(pd.TimeGrouper('5T')).agg(['first', 'last'])
agg.stack(0).T.diff().dropna().squeeze().unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/BSrfa.png" rel="nofollow"><img src="http://i.stack.imgur.com/BSrfa.png" alt="enter image description here"></a></p>
| 0 | 2016-08-10T23:05:27Z | [
"python",
"pandas"
] |
How to merge multiple JSON data rows based on a field in pyspark with a given reduce function | 38,884,857 | <p>How do I merge the JSON data rows as shown below using the merge function below with pyspark? </p>
<p>Note: Assume this is just a minimal example and I have 1000s of rows of data to merge. What is the most performant solution? For better or for worse, I must use pyspark.</p>
<p>Input:</p>
<pre><code>data = [
{'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
{'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
{'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
... More ...
]
</code></pre>
<p>Desired Output:</p>
<pre><code>combined_result = [
{'name': 'Joe Schmoe': {'addresses': [('20080411204445', '100 Sunder Ct'), ('20040218165319', '100 Lee Ave')]}},
{'name': 'John Doe': {'addresses': [('20120309173318', '1818 Westminster')]}},
... More ...
]
</code></pre>
<p>Merge function:</p>
<pre><code>def reduce_on_name(a, b):
    '''Combines two JSON data rows based on name'''
    merged = {}
    if a['name'] == b['name']:
        addresses = (a['timestamp'], a['address']), (b['timestamp'], b['address'])
        merged['name'] = a['name']
        merged['addresses'] = addresses
    return merged
</code></pre>
| 1 | 2016-08-10T22:56:50Z | 38,886,011 | <p>I think it would be something like this:</p>
<pre><code>sc.parallelize(data).groupBy(lambda x: x['name']).map(lambda t: {'name':t[0],'addresses':[(x['timestamp'], x['address']) for x in t[1]]}).collect()
</code></pre>
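<p>The expected shape of that result can be sanity-checked without a Spark context by doing the same grouping in plain Python with <code>itertools.groupby</code> (the stand-in for the RDD pipeline, not the pyspark API itself):</p>

```python
from itertools import groupby
from operator import itemgetter

data = [
    {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
    {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
    {'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
]

# groupby only merges adjacent items, so sort by the grouping key first
key = itemgetter('name')
combined_result = [
    {'name': name,
     'addresses': [(d['timestamp'], d['address']) for d in group]}
    for name, group in groupby(sorted(data, key=key), key=key)
]

print(combined_result[1])
# {'name': 'John Doe', 'addresses': [('20120309173318', '1818 Westminster')]}
```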
| 1 | 2016-08-11T01:40:22Z | [
"python",
"json",
"apache-spark",
"merge",
"pyspark"
] |
How to merge multiple JSON data rows based on a field in pyspark with a given reduce function | 38,884,857 | <p>How do I merge the JSON data rows as shown below using the merge function below with pyspark? </p>
<p>Note: Assume this is just a minimal example and I have 1000s of rows of data to merge. What is the most performant solution? For better or for worse, I must use pyspark.</p>
<p>Input:</p>
<pre><code>data = [
{'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
{'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
{'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
... More ...
]
</code></pre>
<p>Desired Output:</p>
<pre><code>combined_result = [
{'name': 'Joe Schmoe': {'addresses': [('20080411204445', '100 Sunder Ct'), ('20040218165319', '100 Lee Ave')]}},
{'name': 'John Doe': {'addresses': [('20120309173318', '1818 Westminster')]}},
... More ...
]
</code></pre>
<p>Merge function:</p>
<pre><code>def reduce_on_name(a, b):
    '''Combines two JSON data rows based on name'''
    merged = {}
    if a['name'] == b['name']:
        addresses = (a['timestamp'], a['address']), (b['timestamp'], b['address'])
        merged['name'] = a['name']
        merged['addresses'] = addresses
    return merged
</code></pre>
| 1 | 2016-08-10T22:56:50Z | 38,886,629 | <p>All right, using maxymoo's example, I put together my own reusable code. It's not exactly what I was looking for, but it gets me closer to how I want to solve this particular problem: without lambdas and with reusable code.</p>
<pre><code>#!/usr/bin/env pyspark
# -*- coding: utf-8 -*-
data = [
    {'timestamp': '20080411204445', 'address': '100 Sunder Ct', 'name': 'Joe Schmoe'},
    {'timestamp': '20040218165319', 'address': '100 Lee Ave', 'name': 'Joe Schmoe'},
    {'timestamp': '20120309173318', 'address': '1818 Westminster', 'name': 'John Doe'},
]

def combine(field):
    '''Returns a function which reduces on a specific field

    Args:
        field(str): data field to use for merging

    Returns:
        func: returns a function which supplies the data for the field
    '''
    def _reduce_this(data):
        '''Returns the field value using data'''
        return data[field]
    return _reduce_this

def aggregate(*fields):
    '''Merges data based on a list of fields

    Args:
        fields(list): a list of fields that should be used as a composite key

    Returns:
        func: a function which does the aggregation
    '''
    def _merge_this(iterable):
        name, iterable = iterable
        new_map = dict(name=name, window=dict(max=None, min=None))
        for data in iterable:
            for field, value in data.iteritems():
                if field in fields:
                    new_map[field] = value
                else:
                    new_map.setdefault(field, set()).add(value)
        return new_map
    return _merge_this

# sc provided by pyspark context
combined = sc.parallelize(data).groupBy(combine('name'))
reduced = combined.map(aggregate('name'))
output = reduced.collect()
</code></pre>
| 0 | 2016-08-11T03:08:30Z | [
"python",
"json",
"apache-spark",
"merge",
"pyspark"
] |
Django access request.user in class based view | 38,884,887 | <p>I'm currently working on the following issue: The user can access the page <code>test.com/BlogPostTitle</code>. Where <code>BlogPostTitle</code> is a slug. If a Blog post with the fitting title exists, Django should render the DetailView of said blog post. If it doesn't exist, Django should render a form to create a blog post.</p>
<p>This works so far: </p>
<pre><code>class EntryDetail(DetailView):  # Displays blog entry, if it exists
    model = Blog
    slug_field = 'title'
    template_name = 'app/entry.html'

class EntryForm(FormView):  # Displays form, if entry 404s
    template_name = 'app/create.html'
    form_class = EntryForm
    success_url = '/'

    def form_valid(self, form):
        form.save()
        return super(EntryForm, self).form_valid(form)

class EntryDisplay(View):
    def get(self, request, *args, **kwargs):
        try:
            view = EntryDetail.as_view()
            return view(request, *args, **kwargs)
        except Http404:
            if check_user_editor(self.request.user) == True:  # Fails here
                view = EntryForm.as_view()
                return view(request, *args, **kwargs)
            else:
                pass
</code></pre>
<p>Now, only users who are in the group "editor" should be able to see the form/create a post:</p>
<pre><code>def check_user_editor(user):
    if user:
        return user.groups.filter(name="editor").exists()  # Returns true, if user in editor group
    else:
        return False
</code></pre>
<p>As you can see, I've implemented the function in the <code>EntryDisplay</code>, however, Django errors <code>'User' object is not iterable</code>.</p>
<p>I'm guessing I have to work with <code>SingleObjectMixin</code>, but I haven't quite understood the docs on that.</p>
<p>Any help would be much appreciated. </p>
<p>Full traceback:</p>
<p>Traceback:</p>
<pre><code>File "/home/django/local/lib/python3.4/site-packages/django/views/generic/detail.py" in get_object
53. obj = queryset.get()
File "/home/django/local/lib/python3.4/site-packages/django/db/models/query.py" in get
385. self.model._meta.object_name
During handling of the above exception (Blog matching query does not exist.), another exception occurred:
File "/home/django/mediwiki/mediwiki/views.py" in get
68. return view(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in dispatch
88. return handler(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/detail.py" in get
115. self.object = self.get_object()
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/detail.py" in get_object
56. {'verbose_name': queryset.model._meta.verbose_name})
During handling of the above exception (No blog found matching the query), another exception occurred:
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in dispatch
88. return handler(request, *args, **kwargs)
File "/home/django/mediwiki/mediwiki/views.py" in get
74. view = HttpResponse(request.user)
File "/home/django/local/lib/python3.4/site-packages/django/http/response.py" in __init__
293. self.content = content
File "/home/django/local/lib/python3.4/site-packages/django/http/response.py" in content
319. content = b''.join(self.make_bytes(chunk) for chunk in value)
File "/home/django/local/lib/python3.4/site-packages/django/utils/functional.py" in inner
235. return func(self._wrapped, *args)
Exception Type: TypeError at /test
Exception Value: 'User' object is not iterable
</code></pre>
 | 0 | 2016-08-10T23:01:00Z | 38,885,049 | <p>Did the error occur at template rendering? If so, I wonder whether you are iterating over the attributes of a single <code>User</code> object. I think you may need <code>user.values()</code>.</p>
<p>BTW, <code>check_user_editor</code> should be simpler:</p>
<pre><code>def check_user_editor(user):
    return user.groups.filter(name="editor").exists()
</code></pre>
| 0 | 2016-08-10T23:21:13Z | [
"python",
"django"
] |
Django access request.user in class based view | 38,884,887 | <p>I'm currently working on the following issue: The user can access the page <code>test.com/BlogPostTitle</code>. Where <code>BlogPostTitle</code> is a slug. If a Blog post with the fitting title exists, Django should render the DetailView of said blog post. If it doesn't exist, Django should render a form to create a blog post.</p>
<p>This works so far: </p>
<pre><code>class EntryDetail(DetailView):  # Displays blog entry, if it exists
    model = Blog
    slug_field = 'title'
    template_name = 'app/entry.html'

class EntryForm(FormView):  # Displays form, if entry 404s
    template_name = 'app/create.html'
    form_class = EntryForm
    success_url = '/'

    def form_valid(self, form):
        form.save()
        return super(EntryForm, self).form_valid(form)

class EntryDisplay(View):
    def get(self, request, *args, **kwargs):
        try:
            view = EntryDetail.as_view()
            return view(request, *args, **kwargs)
        except Http404:
            if check_user_editor(self.request.user) == True:  # Fails here
                view = EntryForm.as_view()
                return view(request, *args, **kwargs)
            else:
                pass
</code></pre>
<p>Now, only users who are in the group "editor" should be able to see the form/create a post:</p>
<pre><code>def check_user_editor(user):
    if user:
        return user.groups.filter(name="editor").exists()  # Returns true, if user in editor group
    else:
        return False
</code></pre>
<p>As you can see, I've implemented the function in the <code>EntryDisplay</code>, however, Django errors <code>'User' object is not iterable</code>.</p>
<p>I'm guessing I have to work with <code>SingleObjectMixin</code>, but I haven't quite understood the docs on that.</p>
<p>Any help would be much appreciated. </p>
<p>Full traceback:</p>
<p>Traceback:</p>
<pre><code>File "/home/django/local/lib/python3.4/site-packages/django/views/generic/detail.py" in get_object
53. obj = queryset.get()
File "/home/django/local/lib/python3.4/site-packages/django/db/models/query.py" in get
385. self.model._meta.object_name
During handling of the above exception (Blog matching query does not exist.), another exception occurred:
File "/home/django/mediwiki/mediwiki/views.py" in get
68. return view(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in dispatch
88. return handler(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/detail.py" in get
115. self.object = self.get_object()
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/detail.py" in get_object
56. {'verbose_name': queryset.model._meta.verbose_name})
During handling of the above exception (No blog found matching the query), another exception occurred:
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/home/django/local/lib/python3.4/site-packages/django/views/generic/base.py" in dispatch
88. return handler(request, *args, **kwargs)
File "/home/django/mediwiki/mediwiki/views.py" in get
74. view = HttpResponse(request.user)
File "/home/django/local/lib/python3.4/site-packages/django/http/response.py" in __init__
293. self.content = content
File "/home/django/local/lib/python3.4/site-packages/django/http/response.py" in content
319. content = b''.join(self.make_bytes(chunk) for chunk in value)
File "/home/django/local/lib/python3.4/site-packages/django/utils/functional.py" in inner
235. return func(self._wrapped, *args)
Exception Type: TypeError at /test
Exception Value: 'User' object is not iterable
</code></pre>
| 0 | 2016-08-10T23:01:00Z | 38,885,193 | <p>Your error is in line 74 in <code>mediwiki.views</code>:</p>
<pre><code>view = HttpResponse(request.user)
</code></pre>
<p><code>HttpResponse</code> expects a string or an iterable. Since <code>request.user</code> is not a string, it tries to use it as an iterable, which fails.</p>
<p>I can't say much without the actual code. If in fact you <em>want</em> to send just a string representation of the user as the response, you need to cast it to a string:</p>
<pre><code>view = HttpResponse(str(request.user))
</code></pre>
| 0 | 2016-08-10T23:38:37Z | [
"python",
"django"
] |
Parsing a header file using swig | 38,884,979 | <p>I have a header file with struct definitions that I'd like to be able to parse in python. In order to do this I turned to Swig.</p>
<p>Lets say the header file is named "a.h". I first renamed it to "a.c" and added an empty "a.h" file in the same folder.</p>
<p>Next, I added in an "a_wrap.i" file with the following contents.</p>
<pre><code>%module a
%{
/* the resulting C file should be built as a python extension */
#define SWIG_FILE_WITH_INIT
/* Includes the header in the wrapper code */
#include "a.h"
%}
/* Parse the header file to generate wrappers */
%include "a.h"
extern struct a_a;
extern struct a_b;
extern struct a_c;
</code></pre>
<p>Next, I wrote a setup.py file as follows :</p>
<pre><code>from distutils.core import setup, Extension
setup(ext_modules=[Extension("_a",
sources=["a.c", "a_wrap.i"])])
</code></pre>
<p>Next, I did the build as </p>
<pre><code>python setup.py build_ext --inplace
</code></pre>
<p>I finally tried to import it in python</p>
<pre><code>>>> import a # it works, yaay
>>> dir(a)
...
...
</code></pre>
<p>I was hoping for a way to access the structs defined in "a.c"(originally a.h). However, I don't seem to be able to find a way to do that. How can I solve this? I'm looking for a way to access the struct's defined in the header file from python.</p>
| 0 | 2016-08-10T23:12:45Z | 39,067,693 | <p>The global variables <code>a_a</code>, <code>a_b</code> and <code>a_c</code> should be accessible from within your SWIG Python module via <a href="http://www.swig.org/Doc1.3/Python.html#Python_nn16" rel="nofollow"><code>cvar</code></a>:</p>
<pre><code>import a
print a.cvar.a_a
print a.cvar.a_b
# etc.
</code></pre>
| 0 | 2016-08-21T18:45:33Z | [
"python",
"swig"
] |
sphinx-apidoc picks up submodules, but autodoc doesn't document them | 38,885,106 | <p>I've been working on a project for PyQt5 ( found here: <a href="https://github.com/MaVCArt/StyledPyQt5" rel="nofollow">https://github.com/MaVCArt/StyledPyQt5</a> ) which uses a package structure to make imports a bit more logical. I've been relatively successful in documenting the code so far with Sphinx, at least up until I introduced the package structure. ( before, everything was in one folder )</p>
<p>The following is the problem: when I run sphinx-apidoc, everything runs fine, no errors. What's more, autodoc picks up all my submodules just fine. This is the content of one of my .rst files:</p>
<pre><code>styledpyqt package
==================
Subpackages
-----------
.. toctree::
:maxdepth: 8
styledpyqt.core
Submodules
----------
styledpyqt.StyleOptions module
------------------------------
.. automodule:: styledpyqt.StyleOptions
:members:
:undoc-members:
:show-inheritance:
styledpyqt.StyleSheet module
----------------------------
.. automodule:: styledpyqt.StyleSheet
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: styledpyqt
:members:
:undoc-members:
:show-inheritance:
</code></pre>
<p>As you can tell, all submodules are being picked up.</p>
<p>However, when I run make html on this, none of these modules are being <em>documented</em> ( meaning the headers are there, but none of the methods, classes or members are displayed ). In the generated HTML, they're just headers with nothing underneath. I know for a fact that they're properly set up in the code comments, as the code has not changed between now and the set up of the package structure, aka when the documentation did work.</p>
<p>Does anyone have any ideas what the cause of this might be?</p>
<p>Note: to help with resolving this, here's a short breakdown of my folder structure:</p>
<pre><code>styledpyqt
+ core
+ + base
+ + + __init__.py ( containing a class definition )
+ + + AnimationGroups.py
+ + + Animations.py
+ + __init__.py
+ + Color.py
+ + Float.py
+ + Gradient.py
+ + Int.py
+ + String.py
+ __init__.py
+ StyleOptions.py
+ StyleSheet.py
</code></pre>
| 1 | 2016-08-10T23:28:07Z | 38,947,318 | <p>I ended up fixing this problem eventually - it seems I was overlooking some errors, and sphinx worked just fine. I added all the paths the package contains in the conf.py and it just worked from there:</p>
<p>conf.py:</p>
<pre><code>sys.path.insert(0, os.path.abspath('../StyledPyQt5'))
sys.path.insert(0, os.path.abspath('../StyledPyQt5/styledpyqt'))
sys.path.insert(0, os.path.abspath('../StyledPyQt5/styledpyqt/core'))
sys.path.insert(0, os.path.abspath('../StyledPyQt5/styledpyqt/core/base'))
</code></pre>
<p>From there, everything worked.</p>
<p>It's important to note here that I generate my docs in a separate directory from my code. If you're using sphinx-apidoc to generate your .rst files, and you're using a gh-pages branch for documentation like I am, don't forget to generate your HTML pages separately while you're on the master branch. Otherwise, there won't be any code to source from. My workflow looks like this now:</p>
<ol>
<li>make sure i'm on the master branch by running <code>git checkout master</code></li>
<li>run <code>sphinx-apidoc -F -P -o ..output_dir ..source_dir</code>, where output_dir is not the same as source_dir.</li>
<li>run <code>make html</code>, making sure that _build/html is in a directory that isn't in either branch of my repo.</li>
<li>run <code>git checkout gh-pages</code> to switch to my gh-pages branch, removing code files and replacing them with html documentation pages.</li>
<li>copy all newly generated HTML files in _build/html to the gh-pages main folder, overwriting any changes.</li>
<li>run <code>git commit -am "Docs Update" gh-pages</code> to commit the changes</li>
<li>run <code>git push origin gh-pages</code> to push the commit to github</li>
<li>run <code>git checkout master</code> to put me back on the master branch</li>
</ol>
<p>I know there's a dime a dozen tutorials out there documenting this, but I hope this small elaboration might help someone at some point.</p>
| 0 | 2016-08-14T22:52:34Z | [
"python",
"python-sphinx",
"autodoc"
] |
Value Counts of Column Slice to Contain All Possible Unique Values in Column | 38,885,214 | <p>I have a df that looks like this:</p>
<pre><code>group val
A 1
A 1
A 2
B 1
B 2
B 3
</code></pre>
<p>I want to get the value_counts for each group separately, but want to show all possible values for each value_count group:</p>
<pre><code>> df[df['group']=='A']['val'].value_counts()
1 2
2 1
3 NaN
Name: val, dtype: int64
</code></pre>
<p>But it currently looks like this:</p>
<pre><code>> df[df['group']=='A']['val'].value_counts()
1 2
2 1
Name: val, dtype: int64
</code></pre>
<p>Any one know any way I can show value_counts with all possible values represented?</p>
| 0 | 2016-08-10T23:42:02Z | 38,885,482 | <p>This works:</p>
<pre><code>from io import StringIO
import pandas as pd
import numpy as np
data = StringIO("""group,val
A,1
A,1
A,2
B,1
B,2
B,3""")
df = pd.read_csv(data)
print(df, '\n')
res_idx = pd.MultiIndex.from_product([df['group'].unique(), df['val'].unique()])
res = pd.concat([pd.DataFrame(index=res_idx),
df.groupby('group').apply(lambda x: x['val'].value_counts())],
axis=1)
print(res)
</code></pre>
<p>Produces:</p>
<pre><code> group val
0 A 1
1 A 1
2 A 2
3 B 1
4 B 2
5 B 3
val
A 1 2.0
2 1.0
3 NaN
B 1 1.0
2 1.0
3 1.0
</code></pre>
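<p>For reference, the same fill-with-NaN effect can be reached more directly with <code>reindex</code> — a sketch of an alternative approach, not part of the original answer:</p>

```python
import pandas as pd

df = pd.DataFrame({'group': list('AAABBB'), 'val': [1, 1, 2, 1, 2, 3]})

# value_counts for one group, reindexed over every value seen in the column
counts = df.loc[df['group'] == 'A', 'val'].value_counts().reindex(df['val'].unique())
print(counts)
```

<p>Values absent from group <code>A</code> (here <code>3</code>) come back as <code>NaN</code>, matching the desired output in the question.</p>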
| 1 | 2016-08-11T00:20:57Z | [
"python",
"pandas"
] |
Value Counts of Column Slice to Contain All Possible Unique Values in Column | 38,885,214 | <p>I have a df that looks like this:</p>
<pre><code>group val
A 1
A 1
A 2
B 1
B 2
B 3
</code></pre>
<p>I want to get the value_counts for each group separately, but want to show all possible values for each value_count group:</p>
<pre><code>> df[df['group']=='A']['val'].value_counts()
1 2
2 1
3 NaN
Name: val, dtype: int64
</code></pre>
<p>But it currently looks like this:</p>
<pre><code>> df[df['group']=='A']['val'].value_counts()
1 2
2 1
Name: val, dtype: int64
</code></pre>
<p>Any one know any way I can show value_counts with all possible values represented?</p>
| 0 | 2016-08-10T23:42:02Z | 38,885,652 | <pre><code>In [185]: df.groupby('group')['val'].value_counts().unstack('group')
Out[185]:
group A B
val
1 2.0 1.0
2 1.0 1.0
3 NaN 1.0
In [186]: df.groupby('group')['val'].value_counts().unstack('group')['A']
Out[186]:
val
1 2.0
2 1.0
3 NaN
Name: A, dtype: float64
</code></pre>
| 1 | 2016-08-11T00:46:01Z | [
"python",
"pandas"
] |
Django form validation not redirecting to the correct url | 38,885,260 | <p>I'm using Django 1.9.8 and having some trouble validating a form for registering users. If there are validation errors, the redirect back to the form is to the incorrect url. The registration url is <code>localhost:8000/register</code>. When errors are found (I think that's what's happening, anyway), the page is redirected to <code>localhost:8000/register/register</code>. What am I doing incorrectly that is causing the redirect to add an additional <code>register</code> argument to the url?</p>
<pre><code>#authorization/views.py
class RegisterViewSet(viewsets.ViewSet):
#GET requests
def register(self,request):
return render(request, 'authorization/register.html', {'form': RegisterForm})
#POST requests
def create(self,request):
form = RegisterForm(request.POST)
if form.is_valid():
username = request.POST['username']
email = request.POST['email']
password = request.POST['password']
user = User.objects.create_user(username,email,password)
user.save()
return HttpResponseRedirect('/users') #show list of users after saving
else:
#return to the form for the user to fix errors & continue registering
return render(request, 'authorization/register.html', {'form': RegisterForm})
</code></pre>
<p>Here's the RegisterForm content</p>
<pre><code>#authorization/forms.py
class RegisterForm(AuthenticationForm):
username = forms.CharField(label="Username", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'username'}))
email = forms.CharField(label="Email", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'email'}))
password = forms.CharField(label="Password", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'password', 'type' : 'password'}))
repassword = forms.CharField(label="RePassword", max_length=30,
widget=forms.TextInput(attrs={'class': 'form-control', 'name': 'repassword', 'type' : 'password'}))
def clean_password(self):
password1 = self.cleaned_data.get('password')
password2 = self.cleaned_data.get('repassword')
if password1 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
return self.cleaned_data
</code></pre>
<p>I'm not sure if this is relevant, but here's my urls.py </p>
<pre><code>#authorization/urls.py
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^register/', views.RegisterViewSet.as_view({'get' : 'register', 'post' : 'create'})),
]
</code></pre>
<p>I tested the create method prior to adding the form validation part and it was successfully saving users, so I know it at least was working up to that point. </p>
<p><strong>Edit - added form contents</strong></p>
<pre><code>{% if form.errors %}
{% for field in form %}
{% for error in field.errors %}
<div class="alert alert-error">
<strong>{{ error|escape }}</strong>
</div>
{% endfor %}
{% endfor %}
{% endif %}
<form method="post" action="register" id = "RegisterForm">
{% csrf_token %}
<p class="bs-component">
<table>
<tr>
<td>{{ form.username.label_tag }}</td>
<td>{{ form.username }}</td>
</tr>
<tr>
<td>{{ form.email.label_tag }}</td>
<td>{{ form.email }}</td>
</tr>
<tr>
<td>{{ form.password.label_tag }}</td>
<td>{{ form.password }}</td>
</tr>
<tr>
<td>{{ form.repassword.label_tag }}</td>
<td>{{ form.repassword }}</td>
</tr>
</table>
</p>
<p class="bs-component">
<center>
<input class="btn btn-success btn-sm" type="submit" value="Register" />
</center>
</p>
<input type="hidden" name="next" value="{{ next }}" />
</form>
</code></pre>
| 0 | 2016-08-10T23:48:26Z | 38,885,440 | <p>The <code>action</code> on your form points to a relative path, <code>register</code>. If an url path does not begin with a slash, it will append it after the last slash of the current url. Since the form is being posted to <code>/register/register</code>, and your url pattern matches that, that's the url you'll see when an error occurs.</p>
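<p>The resolution rule described above is the standard relative-URL rule; as an illustrative aside (not part of the original answer), Python's <code>urllib.parse.urljoin</code> applies the same logic, which makes the doubled path easy to reproduce:</p>

```python
from urllib.parse import urljoin

# a relative action is appended after the last slash of the current URL
print(urljoin("http://localhost:8000/register/", "register"))
# http://localhost:8000/register/register

# an absolute action (leading slash) replaces the path entirely
print(urljoin("http://localhost:8000/register/", "/register/"))
# http://localhost:8000/register/
```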
<p>To fix this, you should make it an absolute url (starting with a slash) or make it empty (i.e. <code>action=''</code>) to post to the current url. </p>
<p>The most robust way to point the action to the <code>RegisterViewSet</code> is to use the <code>{% url %}</code> template tag. To use this, you need to give the url a name. It is probably a good idea to add a <code>$</code> to the pattern as well, so it only matches if <code>/register/</code> is the complete url, and not if it's just the start of the url:</p>
<pre><code># authorization/urls.py
urlpatterns = [
url(r'^$', views.home, name='home'),
url(r'^register/$', views.RegisterViewSet.as_view({'get' : 'register', 'post' : 'create'}), name='register'),
]
# authorization/register.html
<form method="post" action="{% url 'register' %}" id="RegisterForm">
...
</code></pre>
| 0 | 2016-08-11T00:14:16Z | [
"python",
"django",
"validation"
] |
Running py2exe says run-py3.4-win32.exe is missing | 38,885,292 | <p>I tried to follow <a href="http://www.py2exe.org/index.cgi/Tutorial" rel="nofollow">this tutorial</a> and I got stuck. These are the steps I followed:</p>
<ol>
<li>I installed Anaconda 32 bit</li>
<li>I executed <code>conda create -n test py2exe -c sasview</code>, which installed Python 3.4.5-0, py2exe 0.9.2.2-py34_1 and other packages</li>
<li>I created the <code>hello.py</code> file containing <code>print("Hello World!")</code></li>
<li><p>I created the <code>setup.py</code> file containing:</p>
<p><code>from distutils.core import setup
import py2exe
setup(console=['hello.py'])</code></p></li>
<li><p>I executed</p>
<p><code>activate test
python setup.py py2exe</code></p></li>
</ol>
<p>The result was:</p>
<pre><code>running py2exe
1 missing Modules
------------------
? readline imported from cmd, code, pdb
Building 'dist\hello.exe'.
error: [Errno 2] No such file or directory: 'C:\\Anaconda3\\envs\\test\\lib\\site-packages\\py2exe\\run-py3.4-win32.exe'
</code></pre>
<p>The missing module is just a warning and can be ignored (see <a href="http://stackoverflow.com/questions/26875291/py2exe-missing-modules">here</a>).</p>
<p>Py2exe is not available for Python 3.5 yet, and it looks like conda knows about it and installs python 3.4.</p>
<p>What am I missing?</p>
| 0 | 2016-08-10T23:52:42Z | 38,903,773 | <p>Executing <code>conda create -n test py2exe -c silg2</code> installs python 3.4.5 instead of the most recent 3.5.2, which makes me think that conda knows which version works with py2exe. Apparently this is not true.</p>
<p>This works:</p>
<pre><code>conda create -n test python=3.4
activate test
pip install py2exe
python setup.py py2exe
</code></pre>
<p>Using <code>conda list</code> shows the same packages with the same versions in both environments, but py2exe only works when installed by pip, not when installed by conda.</p>
| 0 | 2016-08-11T18:33:02Z | [
"python",
"python-3.x",
"py2exe",
"conda"
] |
Rename/Backup old directory in python | 38,885,370 | <p>I have a script that creates a new directory regularly. I would like to check if it already exists and if so move the existing folder to a backup. My first iteration was</p>
<pre><code>if os.path.isdir(destination_path):
os.rename(destination_path,destination_path + '_old')
</code></pre>
<p>However, if there already one being backed up it will obviously crash. What I would like to do is find the number of directories that match the destination_path and append that number like a version number. </p>
<pre><code>if os.path.isdir(destination_path):
n = get_num_folders_like(destination_path)
os.rename(destination_path,destination_path + str(n))
</code></pre>
<p>I am just not sure how to make such a hypothetical function. I think fnmatch might work but I can't get the syntax right.</p>
| 0 | 2016-08-11T00:04:16Z | 38,942,968 | <p>If you need to move the old directory aside, renumbering can easily be done by listing all directories by the same name, then picking last one by extracting the numeric maximum from the matching names.</p>
<p>Listing the directories can be done by using the <a href="https://docs.python.org/2/library/glob.html#glob.glob" rel="nofollow"><code>glob</code> module</a>; it combines listing files with the <code>fnmatch</code> module to filter:</p>
<pre><code> import glob
if os.path.isdir(destination_path):
# match all paths starting with the destination name, plus at least
# an underscore and one digit.
     backups = glob.glob(destination_path + '_[0-9]*')
def extract_number(path):
try:
# assume everything after `_` is a number
return int(path.rpartition('_')[-1])
except ValueError:
# not everything was a number, skip this directory
return None
     backup_numbers = (extract_number(b) for b in backups)
try:
next_backup = max(filter(None, backup_numbers)) + 1
except ValueError:
# no backup directories
next_backup = 1
os.rename(destination_path,destination_path + '_{:d}'.format(next_backup))
</code></pre>
<p>I'm assuming you are not worried about race conditions here.</p>
| 1 | 2016-08-14T14:00:26Z | [
"python",
"directory",
"rename"
] |
Rename/Backup old directory in python | 38,885,370 | <p>I have a script that creates a new directory regularly. I would like to check if it already exists and if so move the existing folder to a backup. My first iteration was</p>
<pre><code>if os.path.isdir(destination_path):
os.rename(destination_path,destination_path + '_old')
</code></pre>
<p>However, if there already one being backed up it will obviously crash. What I would like to do is find the number of directories that match the destination_path and append that number like a version number. </p>
<pre><code>if os.path.isdir(destination_path):
n = get_num_folders_like(destination_path)
os.rename(destination_path,destination_path + str(n))
</code></pre>
<p>I am just not sure how to make such a hypothetical function. I think fnmatch might work but I can't get the syntax right.</p>
| 0 | 2016-08-11T00:04:16Z | 38,959,796 | <p>Based on the more general answer given I ended up using something more streamlined for my specific case</p>
<pre><code>if os.path.isdir(destination_path):
n = len(glob.glob(destination_path + '*'))
os.rename(destination_path, destination_path + '_' + str(n))
</code></pre>
| 0 | 2016-08-15T17:20:23Z | [
"python",
"directory",
"rename"
] |
Need explanation difference between json and dict in Python | 38,885,375 | <p>I am just curious to understand JSON and Dict in Python more deeply.</p>
<p>I have a JSON response from a server like this:</p>
<pre><code>`{"city":"Mississauga","country":"Canada","countryCode":"CA"}`
</code></pre>
<p>And I want to work with it as a dictionary. For this, I use <code>.json()</code> function. Why can I get data by using <code>res.json()['city']</code>, but cannot do it with <code>req.json().city</code> ?</p>
| 1 | 2016-08-11T00:04:33Z | 38,885,501 | <p>In Python, dictionary values are not accessible using the <code>my_dict.key</code> syntax. This is reserved for attributes of the <code>dict</code> class, such as <code>dict.get</code> and <code>dict.update</code>. Dictionary values are only accessible via the <code>my_dict[key]</code> syntax.</p>
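<p>A minimal interactive check makes the distinction concrete (illustrative sketch, not from the original answer):</p>

```python
d = {"city": "Mississauga", "country": "Canada", "countryCode": "CA"}

print(d["city"])  # item access works -> Mississauga

try:
    d.city  # attribute access looks up class attributes, not keys
except AttributeError as exc:
    print("AttributeError:", exc)
```

<p>If dotted access is really wanted, the parsed data can be wrapped, e.g. in <code>types.SimpleNamespace</code>; but the plain <code>dict</code> that <code>.json()</code> returns only supports the <code>d[key]</code> form.</p>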
| 3 | 2016-08-11T00:23:31Z | [
"python",
"json",
"dictionary",
"implicit",
"explicit"
] |
Writing a dictionary to a csv file in python 2.7 | 38,885,376 | <p>I want to write a dictionary that looks like this:
<code>{'2234': '1', '233': '60'}</code>
into a new csv file that has been created.
I found a page where this was answered and I tried this out in python 2.7 but this still gives me an error whilst executing my code:</p>
<pre><code>with open('variabelen.csv', 'w') as csvfile:
writer = csv.writer(csvfile)
for name, items in a.items():
writer.writerow([name] + items)
</code></pre>
<p>When i execute the code this shows up for me as an error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/Danny/Desktop/Inventaris.py", line 48, in <module>
Toevoeging_methode_plus(toevoegproduct, aantaltoevoeging)
File "/Users/Danny/Desktop/Inventaris.py", line 9, in Toevoeging_methode_plus
writecolumbs()
File "/Users/Danny/Desktop/Inventaris.py", line 24, in writecolumbs
writer.writerow([name] + items)
TypeError: can only concatenate list (not "str") to list
</code></pre>
| 0 | 2016-08-11T00:04:34Z | 38,885,387 | <p>You are concatenating a list <code>[name]</code> with a string <code>items</code> - hence the error.</p>
<p>Instead, you can simply write the dictionary's <code>items()</code> via <a href="https://docs.python.org/2/library/csv.html#csv.csvwriter.writerows" rel="nofollow"><code>.writerows()</code></a>:</p>
<pre><code>with open('variabelen.csv', 'w') as csvfile:
writer = csv.writer(csvfile)
writer.writerows(a.items())
</code></pre>
<p>Given the value of <code>a = {'2234': '1', '233': '60'}</code>, it would produce <code>variabelen.csv</code> with the following content:</p>
<pre><code>233,60
2234,1
</code></pre>
<p>The order of rows may differ, though, because dictionaries in Python are <em>unordered collections</em>.</p>
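<p>If a stable row order matters, the items can be sorted before writing — a small sketch using an in-memory buffer for illustration (Python 3 shown; under Python 2.7, the question's version, the same idea applies with a file opened in <code>'wb'</code> mode):</p>

```python
import csv
import io

a = {'2234': '1', '233': '60'}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(sorted(a.items()))  # sort by key for deterministic output
print(buf.getvalue())
```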
| 1 | 2016-08-11T00:06:54Z | [
"python",
"csv",
"dictionary"
] |
Python class inheritance | 38,885,378 | <p>I have this code:</p>
<pre><code>class A(object):
def __init__(self):
print " A"
class B(A):
def __init__(self):
print "B"
x=B()
print "Done"
</code></pre>
<p>the result is: "B" gets printed.
Why does it not print "A", even though class B inherits from A?</p>
| 2 | 2016-08-11T00:04:53Z | 38,885,405 | <p>If you want to use A's <code>__init__</code> while also using B's <code>__init__</code>, then try:</p>
<pre><code>class A(object):
def __init__(self):
print " A"
class B(A):
def __init__(self):
A.__init__(self)
print "B"
x=B()
print "Done"
</code></pre>
<p>Or, if you would prefer not to mention the superclass by name:</p>
<pre><code>class A(object):
def __init__(self):
print " A"
class B(A):
def __init__(self):
super(B, self).__init__()
print "B"
x=B()
print "Done"
</code></pre>
<p>Both of these produce the output:</p>
<pre><code> A
B
Done
</code></pre>
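<p>As a side note beyond the original answer: on Python 3, the zero-argument form of <code>super()</code> does the same thing without naming the class:</p>

```python
class A:
    def __init__(self):
        print("A")

class B(A):
    def __init__(self):
        super().__init__()  # Python 3 zero-argument form
        print("B")

B()  # prints A, then B
```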
| 6 | 2016-08-11T00:08:55Z | [
"python",
"class"
] |
Can't output Python Print in PHP | 38,885,431 | <p>I'm really stuck on this one, but I am a python (and Raspberry Pi) newbie. All I want is to output the <code>print</code> output from my python script. The problem is (I believe) that a function in my python script takes half a second to execute and PHP misses the output.</p>
<p>This is my php script:</p>
<pre><code><?php
error_reporting(E_ALL);
ini_set('display_errors', 1);
$cmd = escapeshellcmd('/var/www/html/weathertest.py');
$output = shell_exec($cmd);
echo $output;
//$handle = popen('/var/www/html/weathertest.py', 'r');
//$output = fread($handle, 1024);
//var_dump($output);
//pclose($handle);
//$cmd = "python /var/www/html/weathertest.py";
//$var1 = system($cmd);
//echo $var1;
echo 'end';
?>
</code></pre>
<p>I've included the commented blocks to show what else I've tried. All three output "static text end"</p>
<p>This is the python script:</p>
<pre><code>#!/usr/bin/env python
import sys
import Adafruit_DHT
import time
print 'static text '
humidity, temperature = Adafruit_DHT.read(11, 4)
time.sleep(3)
print 'Temp: {0:0.1f}C Humidity: {1:0.1f}%'.format(temperature, humidity)
</code></pre>
<p>The py executes fine on the command line. I've added the 3 second delay to make the script feel longer for my own testing.</p>
<p>Given that I always get <code>static text</code> as an output, I figure my problem is with PHP not waiting for the Adafruit command. BUT the STRANGEST thing for me is that all three of my PHP attempts work correctly if I execute the PHP script on the command line i.e. <code>php /var/www/html/test.php</code> - I then get the desired output:</p>
<pre><code>static text
Temp: 23.0C Humidity 34.0%
end
</code></pre>
<p>So I guess there's two questions: 1. How to make PHP wait for Python completion. 2. Why does the PHP command line differ from the browser?</p>
| 1 | 2016-08-11T00:12:51Z | 38,885,490 | <blockquote>
<ol>
<li>How to make PHP wait for Python completion</li>
</ol>
</blockquote>
<p><code>shell_exec</code> will wait the command to finish</p>
<blockquote>
<ol start="2">
<li>Why does the PHP command line differ from the browser?</li>
</ol>
</blockquote>
<p>My best guess is the difference of the user running the command. On the command line the script is running as the same user you're logged in, on the "browser", probably as the same user as apache/nginx, the environment variables are different on both cases.</p>
<hr>
<p>Add <code>python</code> before the script, i.e.:</p>
<pre><code>$output = shell_exec("python /var/www/html/weathertest.py");
echo $output;
</code></pre>
<p>Or use the <code>fullpath</code> to the python binary:</p>
<pre><code>$output = shell_exec("/full/path/to/python /var/www/html/weathertest.py");
echo $output;
</code></pre>
<hr>
<p>PS: To know the <code>fullpath</code> use <code>which python</code> on the shell.</p>
| 1 | 2016-08-11T00:22:05Z | [
"php",
"python",
"linux"
] |
Can't output Python Print in PHP | 38,885,431 | <p>I'm really stuck on this one, but I am a python (and Raspberry Pi) newbie. All I want is to output the <code>print</code> output from my python script. The problem is (I believe) that a function in my python script takes half a second to execute and PHP misses the output.</p>
<p>This is my php script:</p>
<pre><code><?php
error_reporting(E_ALL);
ini_set('display_errors', 1);
$cmd = escapeshellcmd('/var/www/html/weathertest.py');
$output = shell_exec($cmd);
echo $output;
//$handle = popen('/var/www/html/weathertest.py', 'r');
//$output = fread($handle, 1024);
//var_dump($output);
//pclose($handle);
//$cmd = "python /var/www/html/weathertest.py";
//$var1 = system($cmd);
//echo $var1;
echo 'end';
?>
</code></pre>
<p>I've included the commented blocks to show what else I've tried. All three output "static text end"</p>
<p>This is the python script:</p>
<pre><code>#!/usr/bin/env python
import sys
import Adafruit_DHT
import time
print 'static text '
humidity, temperature = Adafruit_DHT.read(11, 4)
time.sleep(3)
print 'Temp: {0:0.1f}C Humidity: {1:0.1f}%'.format(temperature, humidity)
</code></pre>
<p>The py executes fine on the command line. I've added the 3 second delay to make the script feel longer for my own testing.</p>
<p>Given that I always get <code>static text</code> as an output, I figure my problem is with PHP not waiting for the Adafruit command. BUT the STRANGEST thing for me is that all three of my PHP attempts work correctly if I execute the PHP script on the command line i.e. <code>php /var/www/html/test.php</code> - I then get the desired output:</p>
<pre><code>static text
Temp: 23.0C Humidity 34.0%
end
</code></pre>
<p>So I guess there's two questions: 1. How to make PHP wait for Python completion. 2. Why does the PHP command line differ from the browser?</p>
| 1 | 2016-08-11T00:12:51Z | 38,886,247 | <p>This is not an answer, but added info to show why Pedro Lobito is correct.</p>
<p>I edited my Python script to be:</p>
<pre><code>#!/usr/bin/env python
import sys
import Adafruit_DHT
import time
print 'static text '
# humidity, temperature = Adafruit_DHT.read(11, 4)
time.sleep(10)
# print 'Temp: {0:0.1f}C Humidity: {1:0.1f}%'.format(temperature, humidity)
print "waited 10 seconds"
</code></pre>
<p>You will notice I simply commented out my problem areas, increased the sleep to 10 seconds and then added a new print at then end. Running this in the browser now takes a while - 10 seconds - so the script is waiting completion.</p>
<p>My problem is now with <code>Adafruit_DHT.read</code> so I will investigate separately. </p>
<p><strong>EDIT (a few hours later with a fresh mind):</strong>
My problem was not with that module or function, my problem was with permissions of the third-party module (Adafruit_DHT) which I installed. Pedro's answer to my question about the difference between php in the browser and the command line was the key. I thought apache was running as root, but after looking at the config with <code>sudo nano /etc/apache2/envvars</code>, I saw it was www-data. I changed it to me, and my script worked perfectly in the browser. Obviously, apache running as me is not great, but at least I could prove the problem was permissions on that particular module!</p>
| 2 | 2016-08-11T02:13:13Z | [
"php",
"python",
"linux"
] |
Python - Loop through non-blank Excel cells | 38,885,466 | <p>Very frustrated new Python user here. I had code running at one point then went on to do some other things and now it's not working.</p>
<p>This is looping through non-blank cells in the 'J' column of an .xlsx file. For the cells that contain text, it looks at the date in column 'A'. If the date in column A is equal to 7 days in the future, print "You're due for a shift."</p>
<p>Very frustrating because it was working at one point and I don't know where it went wrong.</p>
<pre><code>workbook = load_workbook('FRANKLIN.xlsx', data_only=True)
ws=workbook.active
cell_range = ws['j1':'j100'] #Selecting the slice of interest
for row in cell_range: # This is iterating through rows 1-7
for cell in row: # This iterates through the columns(cells) in that row
value = cell.value
if cell.value:
if cell.offset(row=0, column =-9).value.date() == (datetime.now().date() + timedelta(days=7)):
print("you're due for a shift")
</code></pre>
<p>I am getting this error for the 2nd to last line.</p>
<p><code>"AttributeError: 'str' object has no attribute 'date'"</code> </p>
| 1 | 2016-08-11T00:18:50Z | 38,885,830 | <p>Nevermind...I had a blank value in J3 with a string in A3 that was throwing it off. So I need to keep learning and check to see if a value is a date first and if not, ignore.</p>
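<p>The "check it's a date first" fix can be sketched like this (illustrative only; it assumes date cells come back as <code>datetime</code> objects, which is how <code>openpyxl</code> returns them with <code>data_only=True</code>):</p>

```python
from datetime import datetime, timedelta

def due_for_shift(cell_value):
    # ignore blanks and strings; only real datetime cells are compared
    if not isinstance(cell_value, datetime):
        return False
    return cell_value.date() == datetime.now().date() + timedelta(days=7)

print(due_for_shift("some text"))                         # False
print(due_for_shift(datetime.now() + timedelta(days=7)))  # True
```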
| 1 | 2016-08-11T01:14:24Z | [
"python",
"excel",
"openpyxl",
"xlrd"
] |
Parallel processing of lists | 38,885,570 | <p>I'm trying to get my head around multiprocessing. I have a list, I divide it in two equally long parts, I sort them in two separate processes. I know that this part works because printing <code>saveto</code> gives me the two lists. But I can't access them because in the end I get two empty lists. Why can't I access what I want to be written to <code>l1</code> and <code>l2</code> and how do I do that?</p>
<pre><code>import multiprocessing
import random
def sort(l, saveto):
saveto = sorted(l)
print saveto
if __name__ == '__main__':
l = [int(100000*random.random()) for i in xrange(10000)]
listlen = len(l)
halflist = listlen/2
l1 = []
l2 = []
p1 = multiprocessing.Process(target=sort, args=(l[0:halflist], l1))
p2 = multiprocessing.Process(target=sort, args=(l[halflist:listlen], l2))
p1.start()
p2.start()
p1.join()
p2.join()
print l1
print l2
</code></pre>
| 1 | 2016-08-11T00:33:55Z | 38,887,015 | <p>Use <a href="https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue" rel="nofollow">multiprocessing.Queue</a> to share data between processes</p>
<pre><code>import multiprocessing
import random
def sort(l, queue):
queue.put(sorted(l))
if __name__ == '__main__':
l = [int(100000*random.random()) for i in xrange(10000)]
listlen = len(l)
halflist = listlen/2
queue = multiprocessing.Queue()
p1 = multiprocessing.Process(target=sort, args=(l[0:halflist], queue))
p2 = multiprocessing.Process(target=sort, args=(l[halflist:listlen], queue))
p1.start()
p2.start()
p1.join()
p2.join()
print queue.get()
print queue.get()
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>As it turned out putting the large amounts of data to Queue can cause a deadlock. This is <a href="https://docs.python.org/2.7/library/multiprocessing.html#pipes-and-queues" rel="nofollow">mentioned in the docs</a>:</p>
<blockquote>
<p><strong>Warning</strong></p>
<p>As mentioned above, if a child process has put items on a queue (and
it has not used <code>JoinableQueue.cancel_join_thread</code>), then that process
will not terminate until all buffered items have been flushed to the
pipe.</p>
<p>This means that if you try joining that process you may get a deadlock
unless you are sure that all items which have been put on the queue
have been consumed. Similarly, if the child process is non-daemonic
then the parent process may hang on exit when it tries to join all its
non-daemonic children.</p>
<p>Note that a queue created using a manager does not have this issue.</p>
</blockquote>
<p>Fixed version:</p>
<pre><code>import multiprocessing
import random
def sort(l, queue):
queue.put(sorted(l))
if __name__ == '__main__':
l = [int(100000*random.random()) for i in range(10000)]
listlen = len(l)
halflist = listlen/2
manager = multiprocessing.Manager()
queue = manager.Queue()
p1 = multiprocessing.Process(target=sort, args=(l[0:halflist], queue))
p2 = multiprocessing.Process(target=sort, args=(l[halflist:listlen], queue))
p1.start()
p2.start()
p1.join()
p2.join()
print queue.get()
print queue.get()
</code></pre>
| 2 | 2016-08-11T03:55:55Z | [
"python",
"multiprocessing"
] |
Longest Collatz (or Hailstone) sequence optimization - Python 2.7 | 38,885,614 | <p>I've made a program that prints out a list of numbers, each one with a greater number of steps (according to the <a href="https://en.wikipedia.org/wiki/Collatz_conjecture#Examples" rel="nofollow">Collatz Conjecture</a>) needed to reach 1 than the previous:</p>
<pre><code>limit = 1000000000
maximum = 0
known = {}
for num in xrange(2, limit):
start_num = num
steps = 0
while num != 1:
if num < start_num:
steps += known[num]
break;
if num & 1:
num = (num*3)+1
steps += 1
steps += 1
num //= 2
known[start_num] = steps
if steps > maximum:
print start_num,"\t",steps
maximum = steps
</code></pre>
<p>I cache the results I already know to speed up the program. This method works up to the limit of 1 billion, where my computer runs out of memory (8GB).</p>
<ol>
<li>Is there a more efficient way to cache results?</li>
<li>Is there a way to further optimize this program?</li>
</ol>
<p>Thank you in advance.</p>
| 2 | 2016-08-11T00:40:51Z | 38,886,383 | <p>It appears to be inherently hard to speed up Collatz programs enormously; the best programs I'm aware of <a href="http://www.ericr.nl/wondrous/search.html" rel="nofollow">are distributed</a>, using idle cycles on hundreds (thousands ...) of PCs around the world.</p>
<p>There are some easy things you can do to optimize your program a little in pure CPython, although speed and space optimizations are often at odds:</p>
<ul>
<li><em>Speed</em>: a compute-heavy program in Python should always be written as a function, not as the main program. That's because local variable access is significantly faster than global variable access.</li>
<li><em>Space</em>: making <code>known</code> a list instead of a dict requires significantly less memory. You're storing something for every number; dicts are more suitable for sparse mappings.</li>
<li><em>Space</em>: an <code>array.array</code> requires less space still - although is slower than using a list.</li>
<li><em>Speed</em>: for an odd number <code>n</code>, <code>3*n + 1</code> is necessarily even, so you can collapse 2 steps into 1 by going to <code>(3*n + 1)//2 == n + (n >> 1) + 1</code> directly.</li>
<li><em>Speed</em>: given a final result (number and step count), you can jump ahead and fill in the results for that number times all powers of 2. For example, if <code>n</code> took <code>s</code> steps, then <code>2*n</code> will take <code>s+1</code>, <code>4*n</code> will take <code>s+2</code>, <code>8*n</code> will take <code>s+3</code>, and so on.</li>
</ul>
<p>Here's some code with all those suggestions, although I'm using Python 3 (in Python 2, you'll at least want to change <code>range</code> to <code>xrange</code>). Note that there's a long delay at startup - that's the time taken to fill the large <code>array</code> with a billion 32-bit unsigned zeroes.</p>
<pre><code>def coll(limit):
from array import array
maximum = 0
known = array("L", (0 for i in range(limit)))
for num in range(2, limit):
steps = known[num]
if steps:
if steps > maximum:
print(num, "\t", steps)
maximum = steps
else:
start_num = num
steps = 0
while num != 1:
if num < start_num:
steps += known[num]
break
while num & 1:
num += (num >> 1) + 1
steps += 2
while num & 1 == 0:
num >>= 1
steps += 1
if steps > maximum:
print(start_num, "\t", steps)
maximum = steps
while start_num < limit:
assert known[start_num] == 0
known[start_num] = steps
start_num <<= 1
steps += 1
coll(1000000000)
</code></pre>
<h2>GETTING GONZO</h2>
<p>A tech report written in 1992 gives many ways of speeding this kind of search: <a href="http://lib.dr.iastate.edu/cs_techreports/125/" rel="nofollow">"3x+1 Search Programs", by Leavens and Vermeulen</a>. For example, @Jim Mischel's "cut off based on previous peaks" idea is essentially the paper's Lemma 20.</p>
<p>Another: for an easy factor of 2, note that you can "almost always" ignore even starting numbers. Why: let <code>s(n)</code> denote the number of steps needed to reach 1. You're looking for new peaks in the value of <code>s()</code>. Suppose the most recent peak was found at <code>n</code>, and you're considering an even integer <code>i</code> with <code>n < i < 2*n</code>. Then in particular <code>i/2 < n</code>, so <code>s(i/2) < s(n)</code> (by the definition of "peak" and that a new peak was reached at <code>n</code>). But <code>s(i) == s(i/2) + 1</code>, so <code>s(i) <= s(n)</code>: <code>i</code> cannot be a new peak.</p>
<p>So after finding a new peak at <code>n</code>, you can skip all even integers up to (but not including) <code>2*n</code>.</p>
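<p>A quick brute-force sanity check of that lemma (this snippet is mine, not the paper's): for every record-setter below 10,000, an even record can only appear at or beyond twice the previous peak.</p>

```python
def s(n):
    # number of Collatz steps from n down to 1
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n & 1 else n // 2
        steps += 1
    return steps

peak_value, peak_steps = 1, 0
for n in range(2, 10000):
    steps = s(n)
    if steps > peak_steps:
        # the lemma: even numbers strictly between the previous peak
        # and twice the previous peak never set a new record
        assert n & 1 or n >= 2 * peak_value
        peak_value, peak_steps = n, steps
```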
<p>There are many other useful ideas in the paper - but they're not all <em>that</em> easy ;-)</p>
| 5 | 2016-08-11T02:30:35Z | [
"python",
"algorithm",
"caching",
"optimization",
"collatz"
] |
Longest Collatz (or Hailstone) sequence optimization - Python 2.7 | 38,885,614 | <p>I've made a program that prints out a list of numbers, each one with a greater number of steps (according to the <a href="https://en.wikipedia.org/wiki/Collatz_conjecture#Examples" rel="nofollow">Collatz Conjecture</a>) needed to reach 1 than the previous:</p>
<pre><code>limit = 1000000000
maximum = 0
known = {}
for num in xrange(2, limit):
start_num = num
steps = 0
while num != 1:
if num < start_num:
steps += known[num]
break;
if num & 1:
num = (num*3)+1
steps += 1
steps += 1
num //= 2
known[start_num] = steps
if steps > maximum:
print start_num,"\t",steps
maximum = steps
</code></pre>
<p>I cache the results I already know to speed up the program. This method works up to the limit of 1 billion, where my computer runs out of memory (8GB).</p>
<ol>
<li>Is there a more efficient way to cache results?</li>
<li>Is there a way to further optimize this program?</li>
</ol>
<p>Thank you in advance.</p>
| 2 | 2016-08-11T00:40:51Z | 38,888,580 | <p>You only really need to cache the odd numbers. Consider in your program what happens when you start working on a number.</p>
<p>If you take your beginning number, X, and do a <code>mod 4</code>, you end up with one of four cases:</p>
<ul>
<li>0 or 2: Repeatedly dividing by 2 will eventually result in an odd number that's less than X. You have that value cached. So you can just count the number of divides by 2, add that to the cached value, and you have the sequence length.</li>
<li>1: (3x+1)/2 will result in an even number, and dividing that by 2 again will result in a number that's less than X. If the result is odd, then you already have the cached value, so you can just add 3 to it. If the result is even, repeatedly divide by 2, until you get to an odd number (which you already have cached), add 3 and the number of divisions by 2 to the cached value, and you have the sequence length.</li>
<li>3: Do the standard Collatz sequence calculation until you get to a number that's less than the starting number. Then you either have the value cached or the number is even and you repeatedly divide by 2 until you get to an odd number.</li>
</ul>
<p>This might slow your program a little bit because you have a few more divides by 2, but it doubles your cache capacity.</p>
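<p>A small sketch of the odd-only cache (the code and helper names here are mine, not part of the answer): cache step counts for odd numbers only, and strip factors of 2 on the way in.</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def steps_odd(n):
    # steps to reach 1 for odd n, caching odd values only
    if n == 1:
        return 0
    m = 3 * n + 1
    halvings = 0
    while m & 1 == 0:       # 3n+1 is always even for odd n
        m //= 2
        halvings += 1
    return 1 + halvings + steps_odd(m)

def steps(n):
    # reduce any n to an odd number first, then use the odd cache
    halvings = 0
    while n & 1 == 0:
        n //= 2
        halvings += 1
    return halvings + steps_odd(n)

records = []
maximum = -1
for n in range(1, 10000):
    length = steps(n)
    if length > maximum:
        maximum = length
        records.append((n, length))
```

The first records this produces match the table shown below: (1, 0), (2, 1), (3, 7), (6, 8), (7, 16), (9, 19), (18, 20), (25, 23), (27, 111), ...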
<p>You can double your cache capacity again by only saving sequence lengths for numbers where <code>x mod 4 == 3</code>, but at the cost of even more processing time.</p>
<p>Those only give you linear increases in cache space. What you really need is a way to groom your cache so that you're not having to save so many results. At the cost of some processing time, you only need to cache the numbers that produce the longest sequences found so far.</p>
<p>Consider that when you compute that 27 has 111 steps, you have saved:</p>
<pre><code>starting value, steps
1, 0
2, 1
3, 7
6, 8
7, 16
9, 19
18, 20
25, 23
27, 111
</code></pre>
<p>So when you see 28, you divide by 2 and get 14. Searching your cache, you see that the number of steps for 14 to go to 1 can't be more than 19 (because no number less than 18 takes more than 19 steps). So the maximum possible sequence length is 20. But you already have a maximum of 111. So you can stop.</p>
<p>This can cost you a little more processing time, but it greatly extends your cache. You'd only have 44 entries all the way up to 837799. See <a href="https://oeis.org/A006877" rel="nofollow">https://oeis.org/A006877</a>.</p>
<p>It's interesting to note that if you do a logarithmic scatter plot of those numbers, you get a very close approximation of a straight line. See <a href="https://oeis.org/A006877/graph" rel="nofollow">https://oeis.org/A006877/graph</a>.</p>
<p>You could combine approaches by keeping a second cache that tells you, for numbers that are greater than the number with the current maximum, how many steps it took to get that number down below the current maximum. So in the above case, where 27 has the current maximum, you'd store 26 for the number 35, because it takes six operations (106, 53, 160, 80, 40, 20) to get 35 to 20. The table tells you that it can't take more than 20 more steps to get to 1, giving you a maximum possible of 26 steps. So if any other value reduces to 35, you add the current number of steps to 26, and if the number is less than 111, then you know you can't possibly have a new maximum with this number. If the number is greater than 111, then you have to continue to compute the entire sequence.</p>
<p>Whenever you find a new maximum, you add the number that generated it to your first cache, and clear the second cache.</p>
<p>This will be slower (my gut feel is that in the worst case it might double your processing time) than caching the results for every value, but it'll greatly extend your range.</p>
<p>The key here is that extending your range is going to come at the cost of some speed. That's a common tradeoff. As I pointed out above, you can do a number of things to save every nth item, which will give you an essentially larger cache. So if you save every 4th value, your cache is essentially 4 times as large as if you saved every value. But you reach the point of diminishing returns very quickly. That is, a cache that's 10 times larger than the original isn't a whole lot larger than the 9x cache.</p>
<p>My suggestion gives you essentially an exponential increase in cache space, at the cost of some processing time. But it shouldn't be a huge increase in processing time because in the worst case the number with the next maximum will be double the previous maximum. (Think of 27, with 111 steps, and 54, with 112 steps.) It takes a bit more code to maintain, but it should extend your range, which is currently only 30 bits, to well over 40 bits.</p>
| 3 | 2016-08-11T06:16:13Z | [
"python",
"algorithm",
"caching",
"optimization",
"collatz"
] |
Program to transform a string in hexadecimal? | 38,885,640 | <pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import sys
try:
string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
os.system(cmd)
except IndexError:
print("\nInforme a string!\n")
</code></pre>
<p>I found this code on Internet. I tried hard to understand what it does. Could someone explain?</p>
<pre><code>string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
</code></pre>
<p>The two lines above are like magic to me.</p>
| 0 | 2016-08-11T00:44:05Z | 38,886,432 | <p>This code is meant to be executed from the command line. It takes the first argument passed to the script and prints it out as a sequence of <code>\xNN</code> hexadecimal escapes. Let's break down the shell <code>cmd</code> so we can understand how it manages this.</p>
<pre><code>echo -n "+string+"
</code></pre>
<p>Takes the variable <code>string</code> (a.k.a. the first argument to the script) and outputs it, passing it along to the next command via a pipe. (<code>-n</code> stops a newline from being appended to the string.)</p>
<pre><code>xxd -ps
</code></pre>
<p>Converts the string to a hexadecimal number. (The <code>-ps</code> is just to simplify the output to just the hexadecimal number by removing some additional information that is usually outputted.)</p>
<pre><code>sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'
</code></pre>
<p>Finally, the string, which is now a hexadecimal number, gets piped to <code>sed s/.../.../g</code> which globally replaces all occurrences of a regular expression between the first and second slashes with whatever is between the second and third slashes. In our case, that regular expression is two consecutive hexadecimal digits (i.e. <code>0-9</code>, <code>A-F</code>, or <code>a-f</code>). This <code>sed</code> command is being used to prepend <code>\x</code> to each pair of hexadecimal digits (<code>\\\\</code> gets translated to <code>\</code> in the output due to character escaping and the <code>&</code> signals that whatever is being replaced should be inserted at that point.) Hence, we end up with the string encoded as <code>\x</code>-prefixed hexadecimal byte values, which is finally outputted and printed.</p>
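<p>For comparison, a shell-free Python 3 sketch (the function name is my own) that produces the same <code>\xNN</code> output as the pipeline, without any shell-quoting problems:</p>

```python
def to_escaped_hex(text):
    # encode to bytes, then emit each byte as a \xNN escape,
    # like the echo | xxd | sed pipeline does
    return ''.join('\\x{:02x}'.format(b) for b in text.encode('utf-8'))

print(to_escaped_hex('AB'))   # \x41\x42
```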
| 1 | 2016-08-11T02:37:43Z | [
"python",
"bash",
"python-3.x",
"hacking"
] |
Program to transform a string in hexadecimal? | 38,885,640 | <pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import sys
try:
string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
os.system(cmd)
except IndexError:
print("\nInforme a string!\n")
</code></pre>
<p>I found this code on Internet. I tried hard to understand what it does. Could someone explain?</p>
<pre><code>string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
</code></pre>
<p>The two lines above are like magic to me.</p>
| 0 | 2016-08-11T00:44:05Z | 38,886,443 | <p>For the line:</p>
<pre><code>cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
</code></pre>
<ul>
<li><code>echo</code> sends the text to the standard output,</li>
<li><code>|</code> pipes that output to <code>xxd</code>, which translates binary to hexadecimal (think hex editors); the <code>-ps</code> flag, according to the <code>xxd</code> man page:</li>
</ul>
<blockquote>
<pre><code> -p | -ps | -postscript | -plain
output in postscript continuous hexdump style. Also known as
plain hexdump style.
</code></pre>
</blockquote>
<ul>
<li><p><code>sed</code> is the stream editor command - there are literally books on this. Basically here, the piped hexadecimal output from the <code>xxd -ps</code> command has this replacement regex performed, broken down here:</p>
<pre><code>sed 's/ # Start find
[[:xdigit:]]\{2\}    # Match exactly two hexadecimal characters
# ([[:xdigit:]] is POSIX-compliant representation
# of hexadecimal character)
/ # End find, start replace
\\\\x& # Lots of escaping backslashes - as \x&; the ampersand
# becomes the entire previous match (the 2 hexadecimal
# characters), e.g. '\x8e'
/g' # End find, and g means all matches are changed
</code></pre></li>
</ul>
<blockquote>
<p>\xxx
Produces or matches a character whose hexadecimal ascii value is xx.
(<a href="https://www.gnu.org/software/sed/manual/html_node/Escapes.html" rel="nofollow">source</a>)</p>
</blockquote>
<ul>
<li>In a nutshell, the script takes an input string, translates it to hexadecimal, and then the <code>sed</code> command prefixes each pair of hexadecimal digits from the <code>xxd</code> output with <code>\x</code>.</li>
</ul>
| 3 | 2016-08-11T02:39:31Z | [
"python",
"bash",
"python-3.x",
"hacking"
] |
Program to transform a string in hexadecimal? | 38,885,640 | <pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import sys
try:
string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
os.system(cmd)
except IndexError:
print("\nInforme a string!\n")
</code></pre>
<p>I found this code on Internet. I tried hard to understand what it does. Could someone explain?</p>
<pre><code>string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
</code></pre>
<p>The two lines above are like magic to me.</p>
| 0 | 2016-08-11T00:44:05Z | 38,889,049 | <p>Use this, should fix your probs</p>
<pre><code>cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
</code></pre>
| 0 | 2016-08-11T06:42:37Z | [
"python",
"bash",
"python-3.x",
"hacking"
] |
Program to transform a string in hexadecimal? | 38,885,640 | <pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import sys
try:
string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
os.system(cmd)
except IndexError:
print("\nInforme a string!\n")
</code></pre>
<p>I found this code on Internet. I tried hard to understand what it does. Could someone explain?</p>
<pre><code>string=sys.argv[1]
cmd = "echo -n "+string+" | xxd -ps | sed 's/[[:xdigit:]]\{2\}/\\\\x&/g'"
</code></pre>
<p>The two lines above are like magic to me.</p>
| 0 | 2016-08-11T00:44:05Z | 38,889,556 | <p>The explanation has already been given by @Nick Bull and @pzp; here I just want to say something about the implementation, which is trivial and can hardly satisfy the purpose.</p>
<p>The original code would fail if the string contains an unbalanced quote (single or double).</p>
<p>Here is a snippet of Python code that does it more safely:</p>
<pre><code>def charHex(ch):
    # '\\x' plus two hex digits, matching the pipeline's \xNN output
    return '\\x' + format(ord(ch), '02x')
hexStr = ''.join(map(charHex, string))
</code></pre>
| 1 | 2016-08-11T07:09:36Z | [
"python",
"bash",
"python-3.x",
"hacking"
] |
Pandas Dataframe: get a multiindex row above the current column headings and merge the cells into a single cell | 38,885,649 | <p>I want to use pandas dataframe to add a a row above the current column heading. The first row should be a singly-merged cell that contains today's date.</p>
<p>My current dataframe looks like below</p>
<pre><code>index name field1 field2
1 John blah blah
2 Dave blah blah
.........
</code></pre>
<p>But I'm trying to make the dataframe to look like this:</p>
<pre><code> T O D A Y \' s D A T E
index name field1 field2
1 John blah blah
2 Dave blah blah
.........
</code></pre>
<p>I hear you can use <code>pd.multiindex.</code>..but can't seem to get a grip of this. Can anyone help?</p>
| 1 | 2016-08-11T00:45:46Z | 38,885,750 | <p>Something along the lines of</p>
<pre><code>df.columns = pd.MultiIndex.from_tuples([("Today's date", field) for field in df.columns])
</code></pre>
<p>ought to work.</p>
<p>Note what this does: it is not possible to merge a cell in pandas. It's just that repeated values don't get shown in this case.</p>
<p>If your goal is to concatenate several of these guys, I would suggest using a dictionary:</p>
<pre><code>pd.concat({"Yesteday's date": yesterday_df, "Today's date": today_df}, axis=1)
</code></pre>
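<p>A minimal runnable demo of the <code>MultiIndex</code> approach (toy data and variable names are mine):</p>

```python
import pandas as pd

df = pd.DataFrame({'name': ['John', 'Dave'],
                   'field1': ['blah', 'blah']},
                  index=[1, 2])
today = str(pd.Timestamp('now').date())
# put today's date as a single top level spanning every column
df.columns = pd.MultiIndex.from_tuples([(today, c) for c in df.columns])
print(df)
```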
| 0 | 2016-08-11T01:01:54Z | [
"python",
"pandas",
"dataframe",
"multi-index"
] |
Pandas Dataframe: get a multiindex row above the current column headings and merge the cells into a single cell | 38,885,649 | <p>I want to use pandas dataframe to add a a row above the current column heading. The first row should be a singly-merged cell that contains today's date.</p>
<p>My current dataframe looks like below</p>
<pre><code>index name field1 field2
1 John blah blah
2 Dave blah blah
.........
</code></pre>
<p>But I'm trying to make the dataframe to look like this:</p>
<pre><code> T O D A Y \' s D A T E
index name field1 field2
1 John blah blah
2 Dave blah blah
.........
</code></pre>
<p>I hear you can use <code>pd.multiindex.</code>..but can't seem to get a grip of this. Can anyone help?</p>
| 1 | 2016-08-11T00:45:46Z | 38,888,790 | <p>Another solution is set <code>column name</code>, then <code>MultiIndex</code> is not necessary:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'name':['John','Dave'],
'field1':['blah','blah'],
'field2':['blah','blah']},
index=[1,2],
columns=['name','field1','field2'])
print (df)
name field1 field2
1 John blah blah
2 Dave blah blah
#get today date
today = pd.to_datetime('now').date()
#pandas version 018.0 and more
df = df.rename_axis(today,axis=1)
#pandas verion below 0.18.0
#df.columns.name = today
print (df)
2016-08-11 name field1 field2
1 John blah blah
2 Dave blah blah
#if need remove column name
df = df.rename_axis(None,axis=1)
print (df)
name field1 field2
1 John blah blah
2 Dave blah blah
</code></pre>
| 0 | 2016-08-11T06:28:40Z | [
"python",
"pandas",
"dataframe",
"multi-index"
] |
Boolean expression for if list is within other list | 38,885,658 | <p>What is a efficient way to check if a list is within another list? Something like:</p>
<pre><code>[2,3] in [1,2,3,4] #evaluates True
[1,5,4] in [5,1,5,4] #evaluates True
[1,2] in [4,3,2,1] #evaluates False
</code></pre>
<p>Order within the list matters.</p>
| 2 | 2016-08-11T00:47:09Z | 38,885,803 | <pre><code>def check_ordered_sublist_in_list(sub_list, main_list):
sub_list = np.array(sub_list)
main_list = np.array(main_list)
return any(all(main_list[n:(n + len(sub_list))] == sub_list)
for n in range(0, len(main_list) - len(sub_list) + 1))
>>> check_ordered_sublist_in_list([2, 3], [1, 2, 3, 4])
True
>>> check_ordered_sublist_in_list([1, 5, 4], [5, 1, 5, 4])
True
>>> check_ordered_sublist_in_list([1, 2], [4, 3, 2, 1])
False
</code></pre>
<p>This converts the lists to numpy arrays (for computational efficiency) and then uses slicing to check if the <code>sub_list</code> is contained within the slice. Any success returns True.</p>
| 3 | 2016-08-11T01:10:31Z | [
"python",
"list",
"contains",
"sub-array"
] |
Boolean expression for if list is within other list | 38,885,658 | <p>What is a efficient way to check if a list is within another list? Something like:</p>
<pre><code>[2,3] in [1,2,3,4] #evaluates True
[1,5,4] in [5,1,5,4] #evaluates True
[1,2] in [4,3,2,1] #evaluates False
</code></pre>
<p>Order within the list matters.</p>
| 2 | 2016-08-11T00:47:09Z | 38,885,964 | <p>You could use this:</p>
<pre><code>def is_in(short, long):
return any(short==long[i:i+len(short)] for i in range(len(long)-len(short)+1))
is_in([2,3], [1,2,3,4]) # True
is_in([1,5,4], [5,1,5,4]) # True
is_in([1,2], [4,3,2,1]) # False
</code></pre>
<p>If you really care about speed, these expressions are 20-30% faster:</p>
<pre><code>def segments(long, length):
return [long[i:i+length] for i in range(len(long)-length+1)]
def is_in_seg(short, long):
return short in segments(long, len(short))
is_in_seg([1,5,4], [5,1,5,4]) # true
[1,5,4] in segments([5,1,5,4], 3) # true
</code></pre>
<p>And this is 47% faster, but it uses tuples instead of lists:</p>
<pre><code>import itertools
def segments_zip(long, length):
return itertools.izip(*[long[i:] for i in xrange(length)])
(2,3) in segments_zip((1,2,3,4), 2) # True
(1,5,4) in segments_zip((5,1,5,4), 3) # True
(1,2) in segments_zip((4,3,2,1), 2) # False
</code></pre>
<p>The extra speed comes from using itertools.izip, which can stop generating segments when a match is found; from using xrange, which avoids creating the whole range list; and from using tuples, which are generally slightly faster than lists. But the tiny speed advantage will vanish if you have to convert lists to tuples to use it.</p>
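<p>On Python 3 the same trick works with the built-in <code>zip</code> and <code>range</code> (a sketch, not from the original answer), since <code>zip</code> is already lazy there:</p>

```python
def segments_zip3(seq, length):
    # zip over `length` staggered slices yields every window of that length
    return zip(*[seq[i:] for i in range(length)])

print((2, 3) in segments_zip3((1, 2, 3, 4), 2))      # True
print((1, 2) in segments_zip3((4, 3, 2, 1), 2))      # False
```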
| 2 | 2016-08-11T01:33:08Z | [
"python",
"list",
"contains",
"sub-array"
] |
How can I add Windows Credentials using Python? | 38,885,666 | <p>I am trying to create a program that automatically installs a network shortcut onto my desktop and downloads a driver (it is a print server for my work), but in order to access the shortcut after it is installed I need to enter my work credentials under the printer network's domain. I tried using the keyring library for Python but that was unsuccessful, I also tried to use win32wnet.WNETADDCONNECTION2() which I have seen posted on several forums but it was also unsuccessful for me. </p>
<p>Here is the code currently</p>
<pre><code>import os, winshell, keyring, win32wnet
from win32com.client import Dispatch
#Add Windows Credentials
#**************************
url = r'\\LINKTOTHENETWORK'
win32wnet.WNetAddConnection2(0, None, url, None, USERNAME, PASSWORD)
keyring.set_password(url, USERNAME, PASSWORD)
keyring.get_password(url, USERNAME)
#**************************
# This is where I am having troubles.
# Create the shortcut
desktop = winshell.desktop()
path = os.path.join(desktop, "MYLINK.lnk")
target = url
# Set path and save
shell = Dispatch('WScript.Shell')
shortcut = shell.CreateShortCut(path)
shortcut.Targetpath = target
shortcut.save()
# Open the shortcut
os.startfile(target)
</code></pre>
<p>In my full program I have an interface using Kivy that asks for my Username and Password and then I hit an "Install" button and it adds the domain to my Username (domain\username). Using keyring it was properly showing up, just in the wrong area, so that should not be an issue; I just can't find a method to add Windows Credentials instead of General Credentials. </p>
<p>I am using python2.7 on a Windows 10 computer.</p>
<p>If anyone knows of a library I could use or another method that would be great. Thanks! </p>
| 2 | 2016-08-11T00:48:22Z | 38,885,841 | <p>I'm pretty sure you can achieve this by creating a .bat file and executing it in your program.</p>
<p>For Ex.</p>
<pre><code>def bat_create_with_creds(user, pwd):
    """Create a batch file that maps the print server share with the given credentials."""
    # note the doubled backslashes: the written file needs a literal \\printserverip UNC path
    with open("auth_print.bat", "w") as auth_print:
        auth_print.write("NET USE \\\\printserverip /USER:" + user + " " + pwd)
</code></pre>
<p>I haven't done this for your particular use case, but it works perfectly well for initiating RDP sessions using Windows <code>mstsc</code>, for example.</p>
| 0 | 2016-08-11T01:15:16Z | [
"python",
"windows",
"python-2.7"
] |
Histogram from NetworkX Degree Values - Python 2 vs. Python 3 | 38,885,670 | <p>I have the following code, which worked in Python 2.7, using NetworkX. Basically, it just plots a histogram of degree nodes like so:</p>
<pre><code>plt.hist(nx.degree(G).values())
plt.xlabel('Degree')
plt.ylabel('Number of Subjects')
plt.savefig('network_degree.png') #Save as file, format specified in argument
</code></pre>
<p><a href="http://i.stack.imgur.com/duM1G.png" rel="nofollow"><img src="http://i.stack.imgur.com/duM1G.png" alt="enter image description here"></a></p>
<p>When I try running this same code under Python 3, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "filename.py", line 71, in <module>
plt.hist(nx.degree(G).values())
File "/Users/user/anaconda/envs/py3/lib/python3.5/site-packages/matplotlib/pyplot.py", line 2958, in hist
stacked=stacked, data=data, **kwargs)
File "/Users/user/anaconda/envs/py3/lib/python3.5/site-packages/matplotlib/__init__.py", line 1812, in inner
return func(ax, *args, **kwargs)
File "/Users/user/anaconda/envs/py3/lib/python3.5/site-packages/matplotlib/axes/_axes.py", line 5960, in hist
x = _normalize_input(x, 'x')
File "/Users/user/anaconda/envs/py3/lib/python3.5/site-packages/matplotlib/axes/_axes.py", line 5902, in _normalize_input
"{ename} must be 1D or 2D".format(ename=ename))
ValueError: x must be 1D or 2D
</code></pre>
<p>I'm just now starting to mess around with Python 3, using what I hoped would be pretty straightforward code. What's changed?</p>
| 0 | 2016-08-11T00:48:56Z | 38,885,810 | <p>In Python2 the <code>dict.values</code> method returns a list.
In Python3, it returns <a href="http://stackoverflow.com/q/7296716/190597">a <code>dict_values</code> object</a>:</p>
<pre><code>In [197]: nx.degree(G).values()
Out[197]: dict_values([2, 2, 2, 2])
</code></pre>
<p>Since <code>plt.hist</code> accepts a list, but not a <code>dict_values</code> object, convert the <code>dict_values</code> to a list:</p>
<pre><code> plt.hist(list(nx.degree(G).values()))
</code></pre>
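<p>The same mismatch is easy to reproduce without networkx (a minimal sketch, plain dict standing in for the degree mapping):</p>

```python
degrees = {'a': 2, 'b': 2, 'c': 2}
vals = degrees.values()            # dict_values view on Python 3
print(type(vals).__name__)         # dict_values
print(list(vals))                  # converting restores a plain list
```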
| 2 | 2016-08-11T01:11:10Z | [
"python",
"matplotlib",
"networkx"
] |
Numpy NDPointer: Don't know how to convert parameter 1 | 38,885,719 | <p>I am trying to push a Numpy array into C++ code.</p>
<p>The C++ function is,</p>
<pre><code>extern "C"
void propagate(float * __restrict__ H, const float * __restrict__ W,
const float * __restrict__ U, const float * __restrict__ x,
float a, int h_len, int samples);
</code></pre>
<p>My python code is,</p>
<pre><code>from numpy import *
from numpy.ctypeslib import ndpointer
import ctypes
lib = ctypes.cdll.LoadLibrary("libesn.so")
propagate = lib.propagate
propagate.restype = None
propagate.argtype = [ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ctypes.c_float, ctypes.c_int, ctypes.c_int]
H = W = U = X = zeros((10, 10))
a = 5.0
propagate(H, W, U, X, a, U.shape[0], X.shape[0])
</code></pre>
<p>I get error,</p>
<pre><code>Traceback (most recent call last):
File "./minimal.py", line 23, in <module>
propagate(H, W, U, X, a, U.shape[0], X.shape[0])
ctypes.ArgumentError: argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1
</code></pre>
<p>How do I fix this?</p>
| 0 | 2016-08-11T00:57:30Z | 38,903,914 | <p>Stupid typos ... should be argtypes. This fixed the mysterious error leading to other errors that already have answers on StackOverflow.</p>
<pre><code>propagate = lib.propagate
propagate.restype = None
propagate.argtypes = [ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ndpointer(ctypes.c_float, flags="C_CONTIGUOUS"),
ctypes.c_float, ctypes.c_int, ctypes.c_int]
</code></pre>
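<p>Worth noting why the typo fails silently (a POSIX-only sketch using <code>strlen</code>, not from the original question): ctypes function objects accept arbitrary attribute assignment, so a misspelled <code>argtype</code> just creates an unused attribute instead of raising an error.</p>

```python
import ctypes

libc = ctypes.CDLL(None)             # handle to the already-loaded C runtime (POSIX)
strlen = libc.strlen
strlen.argtype = [ctypes.c_char_p]   # typo: ctypes never looks at this attribute
strlen.argtypes = [ctypes.c_char_p]  # the correct spelling is what takes effect
strlen.restype = ctypes.c_size_t
print(strlen(b'hello'))              # 5
```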
| 0 | 2016-08-11T18:40:17Z | [
"python",
"c++",
"numpy"
] |
An easy way to show images in Django on deployment (DEBUG=false) | 38,885,761 | <p>I am using Django 1.8 and Python 3.4.3. I have been running my app in debug mode and found a way to show images inside a directory configured as MEDIA_ROOT; this was my first question and the solution I found: <a href="http://stackoverflow.com/questions/38866687/correct-way-to-save-an-image-in-admin-and-showing-it-in-a-template-in-django">How to upload and show images in Django</a>. But reading the docs I found that that solution is not suitable for a served app, so if I stop using "Debug=True" the images will not be displayed, and I have to use one of the options described in this link: <a href="https://docs.djangoproject.com/en/1.8/howto/static-files/deployment/" rel="nofollow">Static files on deployment</a>. I don't have money to pay for another server; I can only pay for my hosting on PythonAnywhere. For the option of using the same server for the images, I have no idea how to automate <code>collectstatic</code>, and I also don't know how to trigger it when a user uploads a new image.</p>
<p>I have used ASP.NET and PHP5 and I didn't have problems with the images in either of them, so I have two questions:</p>
<ol>
<li>Is there an easy way to show image URLs?</li>
<li>Is there a high risk security problem if I deploy my app with DEBUG=True?</li>
</ol>
<p>I hope you can help me, because I find this ridiculous. It is probably for better security, but it just doesn't make sense for a framework: instead of making the work easier, it makes it more difficult and stressful.</p>
| 1 | 2016-08-11T01:03:28Z | 38,885,937 | <p>1) in urls.py add:</p>
<pre><code>(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
</code></pre>
<p>and open url <a href="http://myhost.com/media/" rel="nofollow">http://myhost.com/media/</a></p>
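<p>In newer Django versions the same pattern can be written with an imported view instead of the dotted string (a sketch only, not tied to a particular release):</p>

```python
# urls.py (illustrative)
from django.conf import settings
from django.urls import re_path
from django.views.static import serve

urlpatterns = [
    # ... your other patterns ...
    re_path(r'^media/(?P<path>.*)$', serve,
            {'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
]
```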
<p>2) Never deploy a site into production with DEBUG turned on; DEBUG=True is a security risk.</p>
| 1 | 2016-08-11T01:30:18Z | [
"python",
"django",
"image"
] |
An easy way to show images in Django on deployment (DEBUG=false) | 38,885,761 | <p>I am using Django 1.8 and Python 3.4.3, and I have been running my app in Debug mode and found a way to show images inside a directory configured as MEDIA_ROOT; this was my first question and the solution I found: <a href="http://stackoverflow.com/questions/38866687/correct-way-to-save-an-image-in-admin-and-showing-it-in-a-template-in-django">How to upload and show images in DJango</a>. But reading the docs I found that that solution is not suitable for a served app, so if I stop using "Debug=True" the images will not be displayed, and I have to use one of the options described at this link: <a href="https://docs.djangoproject.com/en/1.8/howto/static-files/deployment/" rel="nofollow">Static files on deployment</a>, but I don't have money to pay for another server (I can only pay for my hosting on pythonanywhere), and for the option of using the same server for the images, I have no idea how to automate <code>collectstatic</code>, and I also don't know how to trigger it when a user uploads a new image.</p>
<p>I have used ASP.NET and PHP5 and I didn't have problems with images in either of them, so I have two questions:</p>
<ol>
<li>Is there an easy way to show images URL's?</li>
<li>Is there a high risk security problem if I deploy my app with DEBUG=True?</li>
</ol>
<p>I hope you can help me, because I find this ridiculous. It is probably for better security, but it just doesn't make sense for a framework: instead of making the work easier, it makes it more difficult and stressful.</p>
| 1 | 2016-08-11T01:03:28Z | 38,885,974 | <p>Django runserver is not intended for serving up static files in a production environment. It should be limited to development and testing environments.</p>
<p>If you are intending to use django's runserver to server up static files with DEBUG=False then use the <a href="https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#cmdoption-runserver--insecure" rel="nofollow">--insecure flag</a>.</p>
<p>You should never deploy a site with DEBUG = True due to security implications.</p>
<p><br>
Static files and media assets are 2 different things.</p>
<h3>Static Files</h3>
<p>Static files are things like images you created and files that come with 3rd party apps you have installed (e.g. django-cms). These files include images, CSS and JavaScript files, etc. So you need to have a settings.STATIC_ROOT for this.</p>
<p><code>python manage.py collectstatic</code> collects static files from different locations and puts them all in a single folder.</p>
<h3>Media Files</h3>
<p>Media files are things the user uploads (e.g. photos, documents, etc.). So you have a settings.MEDIA_ROOT for this. <code>collectstatic</code> won't do anything to media files; they will just be there already once the user uploads them.</p>
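<p>For reference, the two roots typically appear in settings.py along these lines (the paths are placeholders, not a recommendation):</p>

```python
# settings.py (illustrative values only)
STATIC_URL = '/static/'
STATIC_ROOT = '/home/myuser/myproject/staticfiles'  # where collectstatic copies files

MEDIA_URL = '/media/'
MEDIA_ROOT = '/home/myuser/myproject/media'         # where user uploads are stored
```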
<h3>Serving up static and media files in production</h3>
<p>Frameworks like Django aren't going to cover automatic production server configuration - that is something else you will have to learn unfortunately.</p>
<p>There are a lot of good guides around e.g. <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04" rel="nofollow">this one</a> to help you get started serving media and static files in production.</p>
<p>Regarding server costs, I'm sure you can find a host to give you some free credit, or pay $5/month for a server somewhere... try <a href="https://lowendbox.com/" rel="nofollow">lowendbox</a></p>
<p><br>
Here is a guide from pythonanywhere regarding media and static files: <a href="https://help.pythonanywhere.com/pages/DjangoStaticFiles/" rel="nofollow">https://help.pythonanywhere.com/pages/DjangoStaticFiles/</a></p>
| 3 | 2016-08-11T01:34:12Z | [
"python",
"django",
"image"
] |
Weather "events" grouped based on time differences in Pandas | 38,885,770 | <p>I have a dataframe of surface weather observations (fzraHrObs) organized by a station identifier code ('usaf') and date. fzraHrObs has several columns of weather data. The station code and date (datetime objects) look like:</p>
<pre><code>usaf dat
716270 2014-11-23 12:00:00
2015-12-20 08:00:00
2015-12-20 12:00:00
716280 2015-12-19 08:00:00
2015-12-19 09:00:00
</code></pre>
<p>I want to group these observations into 'events' by station, in which an observation occurring <6 hours after the previous observation counts in the same event. I then want to output the event start time, end time, and number of obs in the event to a dataframe. Given the example data above, I'd like the output to look something like this: </p>
<pre><code>usaf eventNum start end count
716270 1 2014-11-23 12:00:00 2014-11-23 12:00:00 1
       2         2015-12-20 08:00:00  2015-12-20 12:00:00   2
716280 1 2015-12-19 08:00:00 2015-12-19 09:00:00 2
</code></pre>
<p>I'm currently doing this with for/if loops and dicts but am working on switching things over to pandas since it's been much more efficient. </p>
<p>My initial thought was to do a diff of dat on each row grouped by station and get that in hours, so I do have a column 'diff' which shows this. I'm having trouble figuring out how to get event starts/ends/durations without reverting to ugly for and if loops however. I'm guessing something involving fzraHrObs[fzraHrObs['diff']>=6] will be involved as well?</p>
| 1 | 2016-08-11T01:04:19Z | 38,900,053 | <p>The answer in your comment means it is easy to avoid a loop, as you only need to look back to the previous event.</p>
<pre><code>df['new_event'] = df.groupby('usaf')['dat'].apply(lambda s: s.diff().dt.total_seconds() > 6*3600)
</code></pre>
<p>(Use <code>.dt.total_seconds()</code> rather than <code>.dt.seconds</code>: the latter returns only the seconds component of each timedelta, so gaps of a day or more would be misclassified.)</p>
<p>Output:</p>
<pre><code> usaf dat new_event
0 716270 2014-11-23 12:00:00 False
1 716270 2015-12-20 08:00:00 True
2 716270 2015-12-20 12:00:00 False
3 716280 2015-12-19 08:00:00 False
4 716280 2015-12-19 09:00:00 False
</code></pre>
<p>Increase the event count at <code>True</code> values:</p>
<pre><code>df['event'] = df.groupby('usaf')['new_event'].cumsum().astype('int')
</code></pre>
<p>Output:</p>
<pre><code> usaf dat new_event event
0 716270 2014-11-23 12:00:00 False 0
1 716270 2015-12-20 08:00:00 True 1
2 716270 2015-12-20 12:00:00 False 1
3 716280 2015-12-19 08:00:00 False 0
4 716280 2015-12-19 09:00:00 False 0
</code></pre>
<p>Now group by event, and use <code>agg</code> to apply multiple functions, including <code>first</code> and <code>last</code> to get the start and end date:</p>
<pre><code>df.groupby(['usaf', 'event'])['dat'].agg(['first', 'last', 'count'])
</code></pre>
<p>Output:</p>
<pre><code> first last count
usaf event
716270 0 2014-11-23 12:00:00 2014-11-23 12:00:00 1
1 2015-12-20 08:00:00 2015-12-20 12:00:00 2
716280 0 2015-12-19 08:00:00 2015-12-19 09:00:00 2
</code></pre>
<p>All that is left to do is clean up the indices!</p>
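<p>Putting the steps together, a self-contained sketch of the whole pipeline on the sample data from the question (column names <code>usaf</code>/<code>dat</code> as in the original; the 6-hour threshold is written as a <code>Timedelta</code>):</p>

```python
import pandas as pd

# Sample observations from the question
df = pd.DataFrame({
    'usaf': [716270, 716270, 716270, 716280, 716280],
    'dat': pd.to_datetime(['2014-11-23 12:00:00', '2015-12-20 08:00:00',
                           '2015-12-20 12:00:00', '2015-12-19 08:00:00',
                           '2015-12-19 09:00:00']),
})

# A new event starts when the gap to the previous observation at the
# same station exceeds 6 hours (NaT for a station's first row -> False)
gap = df.groupby('usaf')['dat'].diff()
df['event'] = ((gap > pd.Timedelta(hours=6))
               .astype(int)
               .groupby(df['usaf'])
               .cumsum())

# Start time, end time and observation count per event
events = (df.groupby(['usaf', 'event'])['dat']
            .agg(['first', 'last', 'count'])
            .reset_index())
print(events)
```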
| 2 | 2016-08-11T15:06:48Z | [
"python",
"pandas"
] |
Def help, trying to define pieces of code | 38,885,867 | <p>Not sure if, or how, to do this; excuse the mess, I'm new.</p>
<p>If I can get something like this to work it would clear a lot of my code.</p>
<p>Thanks in advance</p>
<pre><code>def on()
GPIO.output(4, 0)
def off()
GPIO.output(4, 1)
On
Off
</code></pre>
| -4 | 2016-08-11T01:20:23Z | 38,986,509 | <p>You should research how to define Python functions. First, on the <code>def</code> statement after the parentheses, add a colon at the very end of the line. Next, all code inside your function must be indented. Third, to call the function, you need to put parentheses at the end.</p>
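<p>Applied to the snippet in the question, a corrected sketch might look like the following. The <code>GPIO</code> class here is just a stand-in that records calls so the example runs anywhere; on the Pi you would use the real <code>RPi.GPIO</code> module instead. Note also that Python is case-sensitive, so the calls must match the lowercase function names.</p>

```python
calls = []                      # records (pin, value) so we can see what happened

class GPIO:                     # stand-in for RPi.GPIO; use the real module on a Pi
    @staticmethod
    def output(pin, value):
        calls.append((pin, value))

def on():                       # the def line ends with a colon
    GPIO.output(4, 0)           # the body is indented

def off():
    GPIO.output(4, 1)

on()                            # call with parentheses, matching the lowercase names
off()
print(calls)                    # prints [(4, 0), (4, 1)]
```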
| 0 | 2016-08-17T00:42:39Z | [
"python"
] |
Why wasn't my file opened? | 38,885,891 | <p>I'm working on a project at the end of a book I read for Python, so in case this hasn't given it away for you, I'm still quite new to this.</p>
<p>I'm trying to use the <code>open</code> command to open a file that I know exists. I know that the code understands the file is there, because when I switch over to write mode, it clears my text file out, telling me that it can find the file but it just won't read it. Why is this happening? Here's the code-</p>
<pre><code>openFile = open('C:\\Coding\\Projects\\Python\\One Day Project\\BODMAS\\userScores.txt', 'r')
def getUserPoint(userName):
for line in openFile:
split(',')
print(line, end = "")
</code></pre>
<p>I've tried a few variations where my openFile function is a local variable inside <code>getUserPoint()</code>, but that didn't make a difference either.</p>
<p>Editing because I missed a vital detail: the userScores.txt file is laid out as follows:</p>
<pre class="lang-none prettyprint-override"><code>Annie, 125
</code></pre>
<p>The <code>split()</code> function is supposed to split the name and the score assigned to the name.</p>
| 2 | 2016-08-11T01:23:47Z | 38,885,908 | <p>Your function isn't valid Python, as <code>split</code> isn't a globally defined function, but a built-in function of the type <code>str</code>. Try changing your function to something like this.</p>
<pre><code>def getUserPoint(name):
for line in openFile:
line_split = line.split(",")
print(line_split, end = "")
</code></pre>
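<p>To go a step further and actually return the score for a given name (rather than printing the split pieces), something like the sketch below may work. The <code>strip()</code> calls are an assumption about the space after the comma in <code>Annie, 125</code>, and the list of lines stands in for the open file:</p>

```python
def get_user_point(lines, user_name):
    """Return the integer score for user_name from 'Name, score' lines, or None."""
    for line in lines:
        name, _, score = line.partition(',')
        if name.strip() == user_name:
            return int(score.strip())
    return None

lines = ['Annie, 125\n']                 # as read from userScores.txt
print(get_user_point(lines, 'Annie'))    # prints 125
```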
| 1 | 2016-08-11T01:27:09Z | [
"python",
"python-3.x"
] |
How to deal with backslash apostrophes in a string? | 38,885,909 | <p>This has been talked about in other posts but I have a string that goes:</p>
<pre><code>u' yeah its gucci, wassup baby yellow everything this time you know
what im talking about yellow rims, yellow big booty, yellow bones
yellow lambs, yellow mp\'s,'
</code></pre>
<p>I want to turn mp\'s into mp's. I understand that the computer reads \' as ' and that the backslash is not shown when the string is printed, but when I vectorize this sentence the word becomes mp and s, which I don't want.</p>
<p>The other option is to get rid of the apostrophe altogether, but then i'll turns into ill and gets confused with the word ill. Is there any way I can deal with this? Encode it to something else?</p>
| -1 | 2016-08-11T01:27:15Z | 38,888,859 | <p>Just write the literal with double quotes, then the apostrophe needs no escaping: "yeah its gucci, wassup baby yellow everything this time you know what im talking about yellow rims, yellow big booty, yellow bones yellow lambs, yellow mp's,". The backslash is only escape syntax; the stored string already contains a plain apostrophe.</p>
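<p>If the underlying problem is that vectorization splits mp's into mp and s, a token pattern that keeps word-internal apostrophes may help. This is only an illustration of the idea, not tied to any particular vectorizer:</p>

```python
import re

text = u"yeah its gucci, yellow mp's, i'll"
# keep a word-internal apostrophe as part of the token
tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())
print(tokens)   # prints ['yeah', 'its', 'gucci', 'yellow', "mp's", "i'll"]
```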
| 0 | 2016-08-11T06:32:46Z | [
"python",
"string"
] |
slice df where column looks like [(A, 3), (-A, 1), (-C, 4)] using criteria like all rows such that A>5 etc | 38,885,935 | <p>I have a dataframe that has a column that looks something like the following:</p>
<pre><code>dct = {}
for x in range(0,1000000):
test = {'A': np.random.randint(1,5), '-A': np.random.randint(1,5), '-C': np.random.randint(1,5)}
dct[str(x)+'_key'] = test
df = pd.DataFrame([[d.items()] for d in dct.values()])
df.tail()
Out[208]:
0
1299995 [(A, 3), (-A, 1), (-C, 4)]
1299996 [(A, 2), (-A, 4), (-C, 1)]
1299997 [(A, 3), (-A, 1), (-C, 3)]
1299998 [(A, 2), (-A, 2), (-C, 1)]
1299999 [(A, 1), (-A, 2), (-C, 4)]
</code></pre>
<p>I have about 1.3 million rows in the dataframe. There are other columns but for this question they are not relevant. </p>
<p>In my real-life situation the total sum of the counts per row = 10, but I don't know how to create an example dataframe using <code>np.random.randint()</code> that satisfies the constraint that the total count per row equals 10. Valid letters are any from the following: <code>(A,B,C,D,-A,-B,-C,-D)</code>. </p>
<p>So every row selects from that set with the restriction that total <code>count = 10</code>. So a row can have anything like:</p>
<pre><code>[(A, 10)]
[(B, 3), (-D, 1), (-A, 6)]
[(A, 2), (B, 1), (-C, 2),(-D,5)]
</code></pre>
<p>In any case, the above example df should suffice. </p>
<p>What I want to do is be able to slice this df using this column using criteria that resembles questions like:</p>
<pre><code>-all rows such that the number of A > 5 AND B < 0 (or not existent) AND -D > 2
</code></pre>
<p>The questions can be single or multi-conditions like the above. </p>
<p>In any case, I'm not sure how to do this efficiently, especially since each row is comprised of tuples. </p>
| 0 | 2016-08-11T01:29:54Z | 38,886,130 | <p>Easy, convert the column to a dictionary:</p>
<pre><code>df[0] = df[0].apply(dict)
</code></pre>
<p>Now, whatever your query, you can write it as:</p>
<pre><code>def query(row, key, value, cond):
    return eval(str(row[0].get(key)) + cond + str(value))
df.apply(query, key='A', value=2, cond='>', axis=1)
</code></pre>
<p>Or simply as:</p>
<pre><code>df.apply(lambda x: x[0].get('A') > 2, axis=1)
</code></pre>
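<p>A runnable sketch of the dict-column approach on a tiny frame (made-up rows; <code>get(key, 0)</code> treats a missing letter as a count of 0 so multi-condition masks compose cleanly):</p>

```python
import pandas as pd

df = pd.DataFrame({0: [
    {'A': 3, '-A': 1, '-C': 4},
    {'A': 6, '-D': 4},
    {'A': 7, '-D': 3},
]})

# all rows such that A > 5 AND B absent AND -D > 2
mask = df[0].apply(lambda d: d.get('A', 0) > 5
                   and 'B' not in d
                   and d.get('-D', 0) > 2)
print(df[mask])
```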
| -1 | 2016-08-11T01:57:51Z | [
"python",
"pandas",
"tuples",
"slice"
] |
slice df where column looks like [(A, 3), (-A, 1), (-C, 4)] using criteria like all rows such that A>5 etc | 38,885,935 | <p>I have a dataframe that has a column that looks something like the following:</p>
<pre><code>dct = {}
for x in range(0,1000000):
test = {'A': np.random.randint(1,5), '-A': np.random.randint(1,5), '-C': np.random.randint(1,5)}
dct[str(x)+'_key'] = test
df = pd.DataFrame([[d.items()] for d in dct.values()])
df.tail()
Out[208]:
0
1299995 [(A, 3), (-A, 1), (-C, 4)]
1299996 [(A, 2), (-A, 4), (-C, 1)]
1299997 [(A, 3), (-A, 1), (-C, 3)]
1299998 [(A, 2), (-A, 2), (-C, 1)]
1299999 [(A, 1), (-A, 2), (-C, 4)]
</code></pre>
<p>I have about 1.3 million rows in the dataframe. There are other columns but for this question they are not relevant. </p>
<p>In my real-life situation the total sum of the counts per row = 10, but I don't know how to create an example dataframe using <code>np.random.randint()</code> that satisfies the constraint that the total count per row equals 10. Valid letters are any from the following: <code>(A,B,C,D,-A,-B,-C,-D)</code>. </p>
<p>So every row selects from that set with the restriction that total <code>count = 10</code>. So a row can have anything like:</p>
<pre><code>[(A, 10)]
[(B, 3), (-D, 1), (-A, 6)]
[(A, 2), (B, 1), (-C, 2),(-D,5)]
</code></pre>
<p>In any case, the above example df should suffice. </p>
<p>What I want to do is be able to slice this df using this column using criteria that resembles questions like:</p>
<pre><code>-all rows such that the number of A > 5 AND B < 0 (or not existent) AND -D > 2
</code></pre>
<p>The questions can be single or multi-conditions like the above. </p>
<p>In any case, I'm not sure how to do this efficiently, especially since each row is comprised of tuples. </p>
| 0 | 2016-08-11T01:29:54Z | 38,886,356 | <p>If you can split the column of tuples, this should work, just replace the conditionals with your numbers. I used these for the example data:</p>
<pre><code>def f(x, var):
    tup_list = list(x)
    for t in tup_list:
        if t[0] == var:
            return t[1]
    return np.nan
df.columns = ['col']
for var in ['A', '-A', 'B', '-B', 'C', '-C', 'D', '-D']:
    df[var] = df['col'].apply(lambda x: f(x, var))
df2 = df.loc[(df['A'] > 3) & (df['-A'] < 3) & df['B'].notnull() & (df['-C'] > 2)]
</code></pre>
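<p>A self-contained version of the expand-then-filter idea on a few made-up rows (the condition values are arbitrary examples; missing letters become NaN, so presence/absence can be tested with <code>isnull()</code>/<code>notnull()</code>):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col': [
    [('A', 3), ('-A', 1), ('-C', 4)],
    [('A', 6), ('-A', 2), ('-C', 3)],
    [('A', 4), ('B', 2), ('-C', 1)],
]})

# expand each letter into its own numeric column (NaN when absent)
for var in ['A', '-A', 'B', '-B', 'C', '-C', 'D', '-D']:
    df[var] = df['col'].apply(lambda x: dict(x).get(var, np.nan))

# example multi-condition slice: A > 3, B absent, -C > 2
df2 = df.loc[(df['A'] > 3) & df['B'].isnull() & (df['-C'] > 2)]
print(df2.index.tolist())   # prints [1]
```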
| 1 | 2016-08-11T02:26:56Z | [
"python",
"pandas",
"tuples",
"slice"
] |
Data analysis of log files: How to find a pattern? | 38,885,944 | <p>My company has slightly more than 300 vehicle-based Windows CE 5.0 mobile devices that all share the same software and usage model: Direct Store Delivery during the day, then a Tcom at the home base every night. There is an unknown event (or events) that results in the device freaking out and rebooting itself in the middle of the day. The frequency of this issue is ~10 times per week across the fleet of computers, which all reboot daily, 6 days a week. The math is 300*6 = 1800 boots per week (at least); 10/1800 = 0.5%. I realize that number is very low, but it is more than my boss wants to have. </p>
<p>My challenge is to find a way to scan through several thousand logfile.txt files and try to find some sort of pattern. I KNOW there is a pattern here somewhere. I've got a couple of ideas of where to start, but I wanted to throw this out to the community and see what suggestions you all might have.</p>
<p>A bit of background on this issue. The application starts a new log file at each boot. In an orderly (control) log file, you see the app start up, do its thing all day, and then start a shutdown process in a somewhat orderly fashion 8-10 hours later. In a problem log file, you see the device start up and then the log ends, without any shutdown sequence at all, in a time less than 8 hours. It then starts a new log file which shares the same date as the logfile1.old that it made in the rename process. The application that we have was home-grown by Windows developers that are no longer with the company. Even better, they don't currently know who has the source at the moment.</p>
<p>I'm aware of the various CE tools that can be used to detect memory leaks (DevHealth, retail messages, etc.) and we are investigating that route as well; however, I'm convinced that there is a pattern to be found that I'm just not smart enough to find. There has to be a way to do this using Perl or Python that I'm just not seeing. Here are two ideas I have.</p>
<p>Idea 1: Look for trends in word usage.
Create an array of every unique word used in the entire log file and output a count of each word. Once I had a count of the words that were being used, I could run some stats on them and look for the non-normal events. Perhaps the word "purple" is being used 500 times in a 1000-line log file (there might be some math there?) on a control and only 4 times on a 500-line problem log? Perhaps there is a unique word that is only seen in the problem files. Maybe I could get a reverse "word cloud"?</p>
<p>Idea 2: Categorize lines into entry types and then look for trends in the sequence of entry types.
The logfiles already have a predictable schema that looks like this: Level|date|time|system|source|message<br>
I'm 99% sure there is a visible pattern here that I just can't find. All of the logs got turned up to "super duper verbose", so there is a boatload of fluff (25 logs p/sec, 40k lines per file) that makes this even more challenging. If there isn't a unique word, then this has almost got to be true. How do I do this?</p>
<p>Item 3: Hire a Windows CE platform developer.
Yes, we are going down that path as well, but I KNOW there is a pattern I'm missing. They will use the tools that I don't have (or make the tools that we need) to figure out what's up. I suspect that there might be a memory leak, radio event or other event that platform tools, I'm sure, will show.</p>
<p>Item 4: Something I'm not even thinking of that you have used.
There have got to be tools out there that do this that aren't as prestigious as a well-executed Python script, and I'm willing to go down that path; I just don't know what those tools are.</p>
<p>Oh yeah, I can't post log files to the web, so don't ask. The users are promising to report trends when they see them, but I'm not exactly hopeful on that front. All I need to find is either a pattern in the logs, or steps to duplicate the problem.</p>
<p>So there you have it. What tools or techniques can I use to even start on this? </p>
| 0 | 2016-08-11T01:30:43Z | 38,886,144 | <p>There is no input data at all to this problem so this answer will be basically pure theory, a little collection of ideas you could consider.</p>
<ol>
<li><p>To analyze patterns across many logs you could definitely create some graphs displaying the relevant data, which could help to narrow down the problem; Python is really very good for this kind of task.</p></li>
<li><p>You could also transform/insert the logs into databases; that way you'd be able to query the relevant suspicious events much faster and even compare all your logs at scale.</p></li>
<li><p>A simpler approach could be just focusing on a single log showing the crash. Instead of wasting a lot of effort or resources trying to find some kind of generic pattern, start by reading through one simple log in order to catch suspicious "events" which could produce the crash.</p></li>
<li><p>My favourite approach for this type of tricky problem is different from the previous ones: instead of focusing on analyzing or even parsing the logs, I'd just try to reproduce the bug(s) in a deterministic way locally (you don't even need to have the source code). Sometimes it's really difficult to replicate the production environment in your dev environment, but it is definitely time well invested. All the effort you put into this process will help you to solve not only these bugs but to improve your software much faster. Remember, the more times you're able to iterate, the better.</p></li>
<li><p>Another approach could just be coding a little script which would allow you to replay the logs which crashed; not sure if that'll be easy in your environment though. Usually this strategy works quite well with production software using web services, where there will be a lot of tuples with data requests and data retrievals.</p></li>
</ol>
<p>In any case, without seeing the type of data from your logs, I can't be more specific nor give much more concrete detail.</p>
| 0 | 2016-08-11T02:00:21Z | [
"python",
"logging",
"windows-ce",
"data-analysis"
] |
Data analysis of log files: How to find a pattern? | 38,885,944 | <p>My company has slightly more than 300 vehicle-based Windows CE 5.0 mobile devices that all share the same software and usage model: Direct Store Delivery during the day, then a Tcom at the home base every night. There is an unknown event (or events) that results in the device freaking out and rebooting itself in the middle of the day. The frequency of this issue is ~10 times per week across the fleet of computers, which all reboot daily, 6 days a week. The math is 300*6 = 1800 boots per week (at least); 10/1800 = 0.5%. I realize that number is very low, but it is more than my boss wants to have. </p>
<p>My challenge is to find a way to scan through several thousand logfile.txt files and try to find some sort of pattern. I KNOW there is a pattern here somewhere. I've got a couple of ideas of where to start, but I wanted to throw this out to the community and see what suggestions you all might have.</p>
<p>A bit of background on this issue. The application starts a new log file at each boot. In an orderly (control) log file, you see the app start up, do its thing all day, and then start a shutdown process in a somewhat orderly fashion 8-10 hours later. In a problem log file, you see the device start up and then the log ends, without any shutdown sequence at all, in a time less than 8 hours. It then starts a new log file which shares the same date as the logfile1.old that it made in the rename process. The application that we have was home-grown by Windows developers that are no longer with the company. Even better, they don't currently know who has the source at the moment.</p>
<p>I'm aware of the various CE tools that can be used to detect memory leaks (DevHealth, retail messages, etc.) and we are investigating that route as well; however, I'm convinced that there is a pattern to be found that I'm just not smart enough to find. There has to be a way to do this using Perl or Python that I'm just not seeing. Here are two ideas I have.</p>
<p>Idea 1: Look for trends in word usage.
Create an array of every unique word used in the entire log file and output a count of each word. Once I had a count of the words that were being used, I could run some stats on them and look for the non-normal events. Perhaps the word "purple" is being used 500 times in a 1000-line log file (there might be some math there?) on a control and only 4 times on a 500-line problem log? Perhaps there is a unique word that is only seen in the problem files. Maybe I could get a reverse "word cloud"?</p>
<p>Idea 2: Categorize lines into entry types and then look for trends in the sequence of entry types.
The logfiles already have a predictable schema that looks like this: Level|date|time|system|source|message<br>
I'm 99% sure there is a visible pattern here that I just can't find. All of the logs got turned up to "super duper verbose", so there is a boatload of fluff (25 logs p/sec, 40k lines per file) that makes this even more challenging. If there isn't a unique word, then this has almost got to be true. How do I do this?</p>
<p>Item 3: Hire a Windows CE platform developer.
Yes, we are going down that path as well, but I KNOW there is a pattern I'm missing. They will use the tools that I don't have (or make the tools that we need) to figure out what's up. I suspect that there might be a memory leak, radio event or other event that platform tools, I'm sure, will show.</p>
<p>Item 4: Something I'm not even thinking of that you have used.
There have got to be tools out there that do this that aren't as prestigious as a well-executed Python script, and I'm willing to go down that path; I just don't know what those tools are.</p>
<p>Oh yeah, I can't post log files to the web, so don't ask. The users are promising to report trends when they see them, but I'm not exactly hopeful on that front. All I need to find is either a pattern in the logs, or steps to duplicate the problem.</p>
<p>So there you have it. What tools or techniques can I use to even start on this? </p>
| 0 | 2016-08-11T01:30:43Z | 38,886,198 | <p>Was wondering if you'd looked at the ELK stack? It's an acronym for Elasticsearch, Logstash and Kibana, and it fits your use case closely; it's often used for analysis of large numbers of log files. </p>
<p>Elasticsearch and Kibana give you a UI that lets you interactively explore and chart data for trends. Very powerful and quite straightforward to set up on a Linux platform, and there's a Windows version too (took me a day or two of setup, but you get a lot of functional power from it). The software is free to download and use. You could use this in a style similar to ideas 1 / 2.</p>
<p><a href="https://www.elastic.co/webinars/introduction-elk-stack" rel="nofollow">https://www.elastic.co/webinars/introduction-elk-stack</a></p>
<p><a href="http://logz.io/learn/complete-guide-elk-stack/" rel="nofollow">http://logz.io/learn/complete-guide-elk-stack/</a></p>
<p>On the question of Python / idea 4 (which ELK could be considered part of): I haven't done this for log files, but I have used regex to search and extract text patterns from documents using Python. That may also help you find patterns if you had some leads on the sorts of patterns you are looking for. </p>
<p>Just a couple of thoughts; hope they help. </p>
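<p>As a concrete, purely illustrative starting point for the regex route, the schema mentioned in the question (Level|date|time|system|source|message) can be parsed line by line and tallied by level and source; the line contents below are invented:</p>

```python
import re
from collections import Counter

# Invented lines in the schema from the question: Level|date|time|system|source|message
log_lines = [
    "INFO|2016-08-10|09:00:01|scanner|radio|link up",
    "WARN|2016-08-10|09:00:02|scanner|radio|weak signal",
    "INFO|2016-08-10|09:00:03|app|ui|screen drawn",
]

pattern = re.compile(
    r"^(?P<level>[^|]+)\|(?P<date>[^|]+)\|(?P<time>[^|]+)\|"
    r"(?P<system>[^|]+)\|(?P<source>[^|]+)\|(?P<message>.*)$"
)

# Tally (level, source) pairs; comparing these counts between a control
# log and a crash log is a cheap first pass at the 'entry type' idea
counts = Counter()
for line in log_lines:
    m = pattern.match(line)
    if m:
        counts[(m.group("level"), m.group("source"))] += 1

print(counts.most_common())
```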
| 0 | 2016-08-11T02:07:33Z | [
"python",
"logging",
"windows-ce",
"data-analysis"
] |
Python: Pandas Series - Why use loc? | 38,886,080 | <p>Why do we use 'loc' for pandas dataframes? It seems the following code, with or without loc, runs at a similar speed:</p>
<pre><code>%timeit df_user1 = df.loc[df.user_id=='5561']
100 loops, best of 3: 11.9 ms per loop
</code></pre>
<p>or</p>
<pre><code>%timeit df_user1_noloc = df[df.user_id=='5561']
100 loops, best of 3: 12 ms per loop
</code></pre>
<p>So why use loc?</p>
<p><strong>Edit:</strong> This has been flagged as a duplicate question. But although <a href="http://stackoverflow.com/questions/31593201/pandas-iloc-vs-ix-vs-loc-explanation/31593712#31593712">pandas iloc vs ix vs loc explanation?</a> does mention that *</p>
<blockquote>
<p>you can do column retrieval just by using the data frame's
<strong>getitem</strong>:</p>
</blockquote>
<p>*</p>
<pre><code>df['time'] # equivalent to df.loc[:, 'time']
</code></pre>
<p>it does not say why we use loc. Although it does explain lots of features of loc, my specific question is 'why not just omit loc altogether?', for which I have accepted a very detailed answer below.</p>
<p>Also, in that other post the answer (which I do not think is an answer) is buried deep in the discussion; anyone searching for what I was looking for would find it hard to locate the information and would be much better served by the answer provided to my question.</p>
| 2 | 2016-08-11T01:51:08Z | 38,886,211 | <ul>
<li><p>Explicit is better than implicit. </p>
<p><code>df[boolean_mask]</code> selects rows where <code>boolean_mask</code> is True, but there is a corner case when you might not want it to: when <code>df</code> has boolean-valued column labels:</p>
<pre><code>In [229]: df = pd.DataFrame({True:[1,2,3],False:[3,4,5]}); df
Out[229]:
False True
0 3 1
1 4 2
2 5 3
</code></pre>
<p>You might want to use <code>df[[True]]</code> to select the <code>True</code> column. Instead it raises a <code>ValueError</code>:</p>
<pre><code>In [230]: df[[True]]
ValueError: Item wrong length 1 instead of 3.
</code></pre>
<p>In contrast, the following does not raise <code>ValueError</code> even though the structure of <code>df2</code> is almost the same:</p>
<pre><code>In [258]: df2 = pd.DataFrame({'A':[1,2,3],'B':[3,4,5]}); df2
Out[258]:
A B
0 1 3
1 2 4
2 3 5
In [259]: df2[['B']]
Out[259]:
B
0 3
1 4
2 5
</code></pre>
<p>Also note that</p>
<pre><code>In [231]: df.loc[[True]]
Out[231]:
False True
0 3 1
</code></pre>
<p>Thus, <code>df[boolean_mask]</code> does not always behave the same as <code>df.loc[boolean_mask]</code>. Even though this is arguably an unlikely use case, I would recommend always using <code>df.loc[boolean_mask]</code> instead of <code>df[boolean_mask]</code> because the meaning of <code>df.loc</code>'s syntax is explicit. With <code>df.loc[indexer]</code> you know automatically that <code>df.loc</code> is selecting rows. In contrast, it is not clear if <code>df[indexer]</code> will select rows or columns (or raise <code>ValueError</code>) without knowing details about <code>indexer</code> and <code>df</code>.</p></li>
<li><p><code>df.loc[row_indexer, column_index]</code> can select rows <em>and</em> columns. <code>df[indexer]</code> can only select rows <em>or</em> columns depending on the type of values in <code>indexer</code> and the type of column values <code>df</code> has (again, are they boolean?). </p>
<pre><code>In [237]: df2.loc[[True,False,True], 'B']
Out[237]:
0 3
2 5
Name: B, dtype: int64
</code></pre></li>
<li><p>When a slice is passed to <code>df.loc</code> the end-points are included in the range. When a slice is passed to <code>df[...]</code>, the slice is interpreted as a half-open interval:</p>
<pre><code>In [239]: df2.loc[1:2]
Out[239]:
A B
1 2 4
2 3 5
In [271]: df2[1:2]
Out[271]:
A B
1 2 4
</code></pre></li>
</ul>
| 4 | 2016-08-11T02:08:40Z | [
"python",
"pandas",
"series",
"loc"
] |
load variables from a local file in ansible | 38,886,086 | <p>I want Ansible to run several shell commands (like rm / yum install) on remote servers. But instead of putting the commands inside the playbook, I want Ansible to read the shell commands from a file, so that other people only need to swap the commands in this file without needing to know how the playbook works.
The file could have any extension, like txt/yml/json:</p>
<pre><code>[list.txt]
yum install ntp -y
rm -rf /app/tst.txt
service ntpd start
</code></pre>
<p>Is there a module that loads this yml/json file and registers every element as a variable, so I can use them dynamically in the playbook?</p>
| 0 | 2016-08-11T01:52:12Z | 38,888,372 | <p>You may want to use the <a href="http://docs.ansible.com/ansible/include_vars_module.html" rel="nofollow">include_vars</a> module.</p>
<p>If you want only pure shell commands to be executed without any processing, there is a <a href="http://docs.ansible.com/ansible/script_module.html" rel="nofollow">script</a> module that takes a given file, transfers it to the target machine and executes it.</p>
| 0 | 2016-08-11T06:03:30Z | [
"python",
"ansible",
"ansible-playbook",
"ansible-2.x"
] |
how to update a text file in a loop in python | 38,886,168 | <p>I wrote a function that can update a text file, but only once; I need to do it repeatedly. To avoid frequently copying a temporary file to the target file, I want to update all the words in a single pass. How can I do it?
Here is my Python code (but it only updates once):</p>
<pre><code>import io
from tempfile import mkstemp
from shutil import move
from os import remove, close
def replaceWords(source_file_path, old_word, cluster_labels):
new_word_list = [old_word + "_" + str(label) for label in cluster_labels]
fh, target_file_path = mkstemp()
with io.open(target_file_path, mode='w', encoding='utf8') as target_file:
with io.open(source_file_path, mode='r', encoding='utf8') as source_file:
index = 0
for line in source_file:
words =[]
for word in line.split():
if word == old_word:
words.append(word.replace(old_word, new_word_list[index]))
index += 1
else:
words.append(word)
target_file.write(" ".join(words))
close(fh)
remove(source_file_path)
move(target_file_path, source_file_path)
</code></pre>
<p>for example:</p>
<p>for the first update:</p>
<p>source file contexts :<code>of anarchism have often been divided into the categories of social and individualist anarchism or similar dual classifications</code></p>
<p>old_word: 'of'</p>
<p>cluster_labels: '[1, 2]'</p>
<p>after update:
target file contexts :<code>of_1 anarchism have often been divided into the categories of_2 social and individualist anarchism or similar dual classifications</code></p>
<p>for the second update :</p>
<p>old_word: 'anarchism'</p>
<p>cluster_labels: '[1, 2]'</p>
<p>after update:</p>
<p>target file contexts :<code>of_1 anarchism_1 have often been divided into the categories of_2 social and individualist anarchism_2 or similar dual classifications</code></p>
<p>In my code, I have to call the function twice and copy the file twice; when there are many words to update, this method is time-consuming and the frequent reading/writing/copying is I/O-unfriendly.</p>
<p><strong>So, is there any method that can elegantly deal with this without frequent reading/writing/copying?</strong> </p>
| 0 | 2016-08-11T02:04:26Z | 38,887,033 | <p>There can be many ways to do this. One approach, in line with what you have done, is to use <code>*argv</code> to get the list of words to be replaced and replace the words in the same pass, as you are doing currently. I am adding kind-of pseudo code here; it is not tested for errors.
Please note 2 changes:
1. in the input parameters of the function.
2. added a for loop to iterate through the input parameter.</p>
<pre><code>#! /usr/bin/python
import io
from tempfile import mkstemp
from shutil import move
from os import remove, close
import logging
logging.basicConfig(level=logging.DEBUG, format=' %(asctime)s -%(levelname)s - %(message)s')
def replaceWords(source_file_path, cluster_labels, *argv):
    fh, target_file_path = mkstemp()
    indexes = dict((w, 0) for w in argv)  # one counter per word to change
    logging.debug(indexes)
    with io.open(target_file_path, mode='w', encoding='utf8') as target_file:
        with io.open(source_file_path, mode='r', encoding='utf8') as source_file:
            for line in source_file:
                words = []
                for word in line.split():
                    for wordtochange in argv:
                        if word == wordtochange:
                            words.append(word + "_" + str(cluster_labels[indexes[word]]))
                            indexes[word] += 1
                            break
                    else:
                        words.append(word)
                target_file.write(" ".join(words))
    close(fh)
    remove(source_file_path)
    move(target_file_path, source_file_path)

replaceWords('file.txt', [1, 2], 'of', 'anarchism')
</code></pre>
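The same single-pass idea can be exercised end-to-end with a self-contained sketch (the temp-file handling and helper name here are illustrative, not the asker's exact code):

```python
import io
import os
import tempfile

def replace_all(path, cluster_labels, words_to_change):
    """Suffix every occurrence of the given words with successive labels, in one pass."""
    counters = {w: 0 for w in words_to_change}
    lines = []
    with io.open(path, mode='r', encoding='utf8') as f:
        for line in f:
            new_line = []
            for word in line.split():
                if word in counters:
                    new_line.append(word + "_" + str(cluster_labels[counters[word]]))
                    counters[word] += 1
                else:
                    new_line.append(word)
            lines.append(" ".join(new_line))
    with io.open(path, mode='w', encoding='utf8') as f:
        f.write("\n".join(lines))

# usage: one read and one write, no matter how many words change
fd, path = tempfile.mkstemp()
os.close(fd)
with io.open(path, mode='w', encoding='utf8') as f:
    f.write(u"of anarchism have often been divided into the categories of social and individualist anarchism")
replace_all(path, [1, 2], ['of', 'anarchism'])
with io.open(path, mode='r', encoding='utf8') as f:
    print(f.read())
# of_1 anarchism_1 have often been divided into the categories of_2 social and individualist anarchism_2
os.remove(path)
```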
| 0 | 2016-08-11T03:58:46Z | [
"python",
"python-2.7",
"nlp",
"text-mining"
] |
Automatically assign color to nodes in Graphviz | 38,886,173 | <p>I'm using Python and Graphviz to draw a cluster graph consisting of nodes.
I want to assign different colors (depending on an attribute, for example the x coordinate, of each node) to the nodes.</p>
<p>Here's the way I produce graph</p>
<pre><code>def add_nodes(graph, nodes):
for n in nodes:
if isinstance(n, tuple):
graph.node(n[0], **n[1])
else:
graph.node(n)
return graph
A = [[517, 1, [409], 10, 6],
[534, 1, [584], 10, 12],
[614, 1, [247], 11, 5],
[679, 1, [228], 13, 7],
[778, 1, [13], 14, 14]]
nodesgv = []
for node in A:
nodesgv.append((str(node[0]),{'label': str(node[0]), 'color': ???, 'style': 'filled'}))
graph = functools.partial(gv.Graph, format='svg', engine='neato')
add_nodes(graph(), nodesgv).render(('img/test'))
</code></pre>
<p>And now I want to assign a color to each node with the ordering of the first value of each node.
More specifically what I want is a red node (517), a yellow node (534), a green node (614), a blue node (679) and a purple node (778).</p>
<p><a href="http://i.stack.imgur.com/LM1vO.png" rel="nofollow"><img src="http://i.stack.imgur.com/LM1vO.png" alt="enter image description here"></a></p>
<p>I know how to assign colors to the graph, but what I'm looking for is something similar to the <code>c=x</code> part when using matplotlib.
The problem is that I can't know the number of nodes (clusters) beforehand, so for example if I get 7 nodes, I still want a graph with 7 nodes starting from a red one and ending with a purple one.</p>
<pre><code>plt.scatter(x, y, c=x, s=node_sizes)
</code></pre>
<p>So is there any attribute in graphviz that can do this?
Or can anyone tell me how the colormap in matplotlib works?</p>
<p>Sorry for the unclearness. T^T</p>
| 0 | 2016-08-11T02:04:49Z | 38,909,236 | <p>Oh, I figured out a way to get what I want.
Just for the record, and for anyone else who may have the same problem:
you can rescale a color map and assign the corresponding color to each node.</p>
<pre><code>import functools
import colorsys
import graphviz as gv
import matplotlib as mpl
import matplotlib.cm as cm

def add_nodes(graph, nodes):
for n in nodes:
if isinstance(n, tuple):
graph.node(n[0], **n[1])
else:
graph.node(n)
return graph
A = [[517, 1, [409], 10, 6],
[534, 1, [584], 10, 12],
[614, 1, [247], 11, 5],
[679, 1, [228], 13, 7],
[778, 1, [13], 14, 14]]
nodesgv = []
Arange = [ a[0] for a in A]
norm = mpl.colors.Normalize(vmin = min(Arange), vmax = max(Arange))
cmap = cm.jet
for index, i in enumerate(A):
x = i[0]
m = cm.ScalarMappable(norm = norm, cmap = cmap)
mm = m.to_rgba(x)
M = colorsys.rgb_to_hsv(mm[0], mm[1], mm[2])
nodesgv.append((str(i[0]),{'label': str((i[1])), 'color': "%f, %f, %f" % (M[0], M[1], M[2]), 'style': 'filled'}))
graph = functools.partial(gv.Graph, format='svg', engine='neato')
add_nodes(graph(), nodesgv).render(('img/test'))
</code></pre>
| 0 | 2016-08-12T03:08:53Z | [
"python",
"matplotlib",
"plot",
"graphviz"
] |
Not able to get two if statements and an else statement to execute | 38,886,269 | <p>So currently I am trying to make a personal buddy program. I want it to be two if statements and one else. The two if statements have different word triggers, so that's why there is two. The problem arises when I want to make an else statement, so if the certain word triggers weren't typed, it would still say something. Here is the code</p>
<pre><code>sport = input("What sports do you play?\n")
if sport in ['soccer','baseball','dance','basketball','golf','skiing','surfing']:
print(sport, "sounds fun")
if sport in ['none','not at the moment','nope','none atm','natm']:
print("Im not really into sports either")
else:
print(sport, "is a sport?")
</code></pre>
<p>You can see the else statement should respond with "Thumbwrestling is a sport?". Instead if I say a sport listed it will trigger "Baseball sounds fun" "Baseball is a sport?" I don't want it to trigger both. Am I doing something wrong? Please help!</p>
| 0 | 2016-08-11T02:16:29Z | 38,886,321 | <pre><code>sport = input("What sports do you play?\n")
if sport in ['soccer','baseball','dance','basketball','golf','skiing','surfing']:
print(sport, "sounds fun")
elif sport in ['none','not at the moment','nope','none atm','natm']:
print("Im not really into sports either")
else:
print(sport, "is a sport?")
</code></pre>
<p>Notice the <code>elif</code> instead of the second <code>if</code>. This stands for <code>else if</code> meaning that in the chain of statements, only one will be executed.</p>
| 3 | 2016-08-11T02:22:11Z | [
"python",
"if-statement"
] |
Not able to get two if statements and an else statement to execute | 38,886,269 | <p>So currently I am trying to make a personal buddy program. I want it to be two if statements and one else. The two if statements have different word triggers, so that's why there is two. The problem arises when I want to make an else statement, so if the certain word triggers weren't typed, it would still say something. Here is the code</p>
<pre><code>sport = input("What sports do you play?\n")
if sport in ['soccer','baseball','dance','basketball','golf','skiing','surfing']:
print(sport, "sounds fun")
if sport in ['none','not at the moment','nope','none atm','natm']:
print("Im not really into sports either")
else:
print(sport, "is a sport?")
</code></pre>
<p>You can see the else statement should respond with "Thumbwrestling is a sport?". Instead if I say a sport listed it will trigger "Baseball sounds fun" "Baseball is a sport?" I don't want it to trigger both. Am I doing something wrong? Please help!</p>
| 0 | 2016-08-11T02:16:29Z | 38,886,435 | <p>Use an <code>if-elif-else</code> statement to distinguish more than two cases, instead of the conditional statement <code>if-else</code>:</p>
<pre><code>if sport in ['soccer','baseball','dance','basketball','golf','skiing','surfing']:
print(sport, "sounds fun")
elif sport in ['none','not at the moment','nope','none atm','natm']:
print("I'm not really into sports either")
else:
print(sport, "is a sport?")
</code></pre>
<p>If you're going to add another case, besides the one I improved in your code, just stick to the pattern of the syntax of the <code>if-elif-else</code> statement:</p>
<pre><code>if expression1:
statement(s)
elif expression2:
statement(s)
elif expression3: #You can add another line of elif if you add another case, here it is labeled expression3.
statement(s)
else:
statement(s)
</code></pre>
| 3 | 2016-08-11T02:38:30Z | [
"python",
"if-statement"
] |
How to make a dictionary the value component or a numpy array | 38,886,394 | <p>I am a new Python 2.7 user. I recently learned about numpy arrays, and now I am now just learning about dictionaries. Please excuse me if my syntax is not correct.</p>
<p>Let's say we have a dictionary:</p>
<pre><code>dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
'Bob': {'dogs': '5', 'cats': '6'},
'Chris': {'dogs': '7', 'cats': '8'},
'Dan': {'dogs': '9', 'cats': '10'}}
</code></pre>
<p>The keys are <code>dogs</code> and <code>cats</code>, and the values are the numbers of each that Ann, Bob, Chris, and Dan have. </p>
<p>I want to invert the value component of my dictionary. I know I can convert to a list using <code>dict1.values()</code>, then convert to an array, and then convert back to a dictionary, but this seems tedious. Is there a way to make my value component a numpy array and leave the key component the way it is?</p>
| 0 | 2016-08-11T02:32:02Z | 38,886,474 | <h3>Inverting values in the dictionary</h3>
<blockquote>
<p><em>"I want each of the values for dogs and cats to be the inverse meaning 1/3, 1/5, 1/7, 1/9, etc."</em></p>
</blockquote>
<pre><code>>>> {name:{key:1./float(value) for key,value in d.items()} for name,d in dict1.items()}
{'Ann': {'cats': 0.25, 'dogs': 0.3333},
'Bob': {'cats': 0.1667, 'dogs': 0.2},
'Chris': {'cats': 0.125, 'dogs': 0.1429},
'Dan': {'cats': 0.1, 'dogs': 0.1111}}
</code></pre>
<p>Or, keeping the values as strings:</p>
<pre><code>>>> {name:{key:'1/' + value for key,value in d.items()} for name,d in dict1.items()}
{'Ann': {'cats': '1/4', 'dogs': '1/3'},
'Bob': {'cats': '1/6', 'dogs': '1/5'},
'Chris': {'cats': '1/8', 'dogs': '1/7'},
'Dan': {'cats': '1/10', 'dogs': '1/9'}}
</code></pre>
<h3>Converting dict1 to a numpy array</h3>
<p>Let's import numpy and define your dictionary:</p>
<pre><code>>>> import numpy as np
>>> dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
... 'Bob': {'dogs': '5', 'cats': '6'},
... 'Chris': {'dogs': '7', 'cats': '8'},
... 'Dan': {'dogs': '9', 'cats': '10'},}
</code></pre>
<p>Now, let's convert your dictionary to a numpy array:</p>
<pre><code>>>> np.array([[name]+[dict1[name][k] for k in 'dogs', 'cats'] for name in dict1])
array([['Chris', '7', '8'],
['Ann', '3', '4'],
['Dan', '9', '10'],
['Bob', '5', '6']],
dtype='|S5')
</code></pre>
<p>Here, the first column is the name, the second is the number of dogs and the third is the number of cats.</p>
| 0 | 2016-08-11T02:45:27Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"dictionary"
] |
How to make a dictionary the value component or a numpy array | 38,886,394 | <p>I am a new Python 2.7 user. I recently learned about numpy arrays, and now I am now just learning about dictionaries. Please excuse me if my syntax is not correct.</p>
<p>Let's say we have a dictionary:</p>
<pre><code>dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
'Bob': {'dogs': '5', 'cats': '6'},
'Chris': {'dogs': '7', 'cats': '8'},
'Dan': {'dogs': '9', 'cats': '10'}}
</code></pre>
<p>The keys are <code>dogs</code> and <code>cats</code>, and the values are the numbers of each that Ann, Bob, Chris, and Dan have. </p>
<p>I want to invert the value component of my dictionary. I know I can convert to a list using <code>dict1.values()</code>, then convert to an array, and then convert back to a dictionary, but this seems tedious. Is there a way to make my value component a numpy array and leave the key component the way it is?</p>
| 0 | 2016-08-11T02:32:02Z | 38,886,483 | <p>If you just need the values as arrays you can use <code>pandas</code> to help convert to a <code>numpy</code> array. Alternatively, you can just use <code>pandas</code> to meet your requirements. <a href="http://pandas.pydata.org/pandas-docs/stable/dsintro.html" rel="nofollow">Pandas</a> provides a data analysis library (think programmatic spreadsheet) that is built on top of <code>numpy</code>.</p>
<p>To convert to a numpy array for further processing:</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> pd.DataFrame(dict1).T
cats dogs
Ann 4 3
Bob 6 5
Chris 8 7
Dan 10 9
>>> pd.DataFrame(dict1).T.as_matrix()
array([['4', '3'],
['6', '5'],
['8', '7'],
['10', '9']], dtype=object)
</code></pre>
<p>Updated based on comments, to invert all the values using pandas:</p>
<pre><code>>>> pd.DataFrame(dict1).applymap(lambda x: 1/float(x))
Ann Bob Chris Dan
cats 0.250000 0.166667 0.125000 0.100000
dogs 0.333333 0.200000 0.142857 0.111111
</code></pre>
<p>Or result in a dictionary:</p>
<pre><code>>>> pd.DataFrame(dict1).applymap(lambda x: 1/float(x)).to_dict()
{'Ann': {'cats': 0.25, 'dogs': 0.33333333333333331},
'Bob': {'cats': 0.16666666666666666, 'dogs': 0.20000000000000001},
'Chris': {'cats': 0.125, 'dogs': 0.14285714285714285},
'Dan': {'cats': 0.10000000000000001, 'dogs': 0.1111111111111111}}
</code></pre>
| 5 | 2016-08-11T02:47:04Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"dictionary"
] |
How to make a dictionary the value component or a numpy array | 38,886,394 | <p>I am a new Python 2.7 user. I recently learned about numpy arrays, and now I am now just learning about dictionaries. Please excuse me if my syntax is not correct.</p>
<p>Let's say we have a dictionary:</p>
<pre><code>dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
'Bob': {'dogs': '5', 'cats': '6'},
'Chris': {'dogs': '7', 'cats': '8'},
'Dan': {'dogs': '9', 'cats': '10'}}
</code></pre>
<p>The keys are <code>dogs</code> and <code>cats</code>, and the values are the numbers of each that Ann, Bob, Chris, and Dan have. </p>
<p>I want to invert the value component of my dictionary. I know I can convert to a list using <code>dict1.values()</code>, then convert to an array, and then convert back to a dictionary, but this seems tedious. Is there a way to make my value component a numpy array and leave the key component the way it is?</p>
| 0 | 2016-08-11T02:32:02Z | 38,890,272 | <p>Based on your question and comments I think you just want the same dictionary structure, but with the numbers inverted:</p>
<pre><code>dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
'Bob': {'dogs': '5', 'cats': '6'},
'Chris': {'dogs': '7', 'cats': '8'},
'Dan': {'dogs': '9', 'cats': '10'}}
for k in dict1.keys():
value = dict1[k]
for k1 in value.keys():
value[k1] = 1/float(value[k1])
dict1
Out[64]:
{'Ann': {'cats': 0.25, 'dogs': 0.3333333333333333},
'Bob': {'cats': 0.16666666666666666, 'dogs': 0.2},
'Chris': {'cats': 0.125, 'dogs': 0.14285714285714285},
'Dan': {'cats': 0.1, 'dogs': 0.1111111111111111}}
</code></pre>
<p>I modified the dictionary in place, just replacing the numeric strings with their inverse, e.g. <code>'4'</code> with <code>0.25</code>.</p>
<p>Iterating on two levels of <code>keys()</code> is in a sense, tedious, but it's the straight forward thing to do when working with nested dictionaries. I wrote the <code>for</code> expression in one trial - no errors. I am experienced, but still I usually have to try several things before getting something that works. I iterated on <code>keys</code> so I could easily change the values in place. If I wanted to make a copy, I probably could have written it as a nested dict comprehension, but it would be more obscure.</p>
<p>Provided it does the right thing, it's faster than anything involving <code>numpy</code> or <code>pandas</code>. Creating the arrays takes time.</p>
<p>================</p>
<p>A <code>numpy</code> approach - much more advanced coding (display from a <code>ipython</code> session):</p>
<pre><code>In [65]: dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
...: 'Bob': {'dogs': '5', 'cats': '6'},
...: 'Chris': {'dogs': '7', 'cats': '8'},
...: 'Dan': {'dogs': '9', 'cats': '10'}}
In [66]: dt = np.dtype([('name','U5'),('dogs',float),('cats',float)])
# define a structured array dtype.
In [67]: def foo(k,v):
...: return (k, v['dogs'], v['cats'])
# define a helper function - just helps organize my thoughts better
In [68]: alist=[foo(k,v) for k,v in dict1.items()]
In [69]: alist
Out[69]: [('Chris', '7', '8'), ('Bob', '5', '6'), ('Dan', '9', '10'), ('Ann', '3', '4')]
# this is a list of tuples - a critical format for the next step
In [70]: arr = np.array(alist, dtype=dt)
In [71]: arr
Out[71]:
array([('Chris', 7.0, 8.0),
('Bob', 5.0, 6.0),
('Dan', 9.0, 10.0),
('Ann', 3.0, 4.0)],
dtype=[('name', '<U5'), ('dogs', '<f8'), ('cats', '<f8')])
</code></pre>
<p>I've converted the dictionary to a structured array, with 3 fields. This is similar to what I'd get from reading a csv file like:</p>
<pre><code>name, dogs, cats
Ann, 3, 4
Bob, 5, 6
....
</code></pre>
<p>The <code>dogs</code> and <code>cats</code> fields are numeric, so I can invert their values</p>
<pre><code>In [72]: arr['dogs']=1/arr['dogs']
In [73]: arr['cats']=1/arr['cats']
In [74]: arr
Out[74]:
array([('Chris', 0.14285714285714285, 0.125),
('Bob', 0.2, 0.16666666666666666),
('Dan', 0.1111111111111111, 0.1),
('Ann', 0.3333333333333333, 0.25)],
dtype=[('name', '<U5'), ('dogs', '<f8'), ('cats', '<f8')])
</code></pre>
<p>The result is the same numbers as in the dictionary case, but in a table layout.</p>
<p>======================</p>
<p>A dictionary comprehension version - same double dictionary iteration as the first solution, but building a new dictionary rather than making changes in place:</p>
<pre><code>In [78]: {k1:{k2:1/float(v2) for k2,v2 in v1.items()} for k1,v1 in dict1.items()}
Out[78]:
{'Ann': {'cats': 0.25, 'dogs': 0.3333333333333333},
'Bob': {'cats': 0.16666666666666666, 'dogs': 0.2},
'Chris': {'cats': 0.125, 'dogs': 0.14285714285714285},
'Dan': {'cats': 0.1, 'dogs': 0.1111111111111111}}
</code></pre>
<p>===================</p>
<p>When the numeric values are in an array, it is possible to take the numeric inverse of all the values at once. That's the beauty of <code>numpy</code>. But getting there can require some advance <code>numpy</code> coding.</p>
<p>For example I could take the 2 numeric fields of <code>arr</code>, and <code>view</code> them as a 2d array:</p>
<pre><code>In [80]: arr[['dogs','cats']].view('(2,)float')
Out[80]:
array([[ 0.14285714, 0.125 ],
[ 0.2 , 0.16666667],
[ 0.11111111, 0.1 ],
[ 0.33333333, 0.25 ]])
In [81]: 1/arr[['dogs','cats']].view('(2,)float')
Out[81]:
array([[ 7., 8.],
[ 5., 6.],
[ 9., 10.],
[ 3., 4.]])
</code></pre>
<p>Getting back the original numbers (without the name labels).</p>
| 1 | 2016-08-11T07:44:37Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"dictionary"
] |
How to make a dictionary the value component or a numpy array | 38,886,394 | <p>I am a new Python 2.7 user. I recently learned about numpy arrays, and now I am now just learning about dictionaries. Please excuse me if my syntax is not correct.</p>
<p>Let's say we have a dictionary:</p>
<pre><code>dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
'Bob': {'dogs': '5', 'cats': '6'},
'Chris': {'dogs': '7', 'cats': '8'},
'Dan': {'dogs': '9', 'cats': '10'}}
</code></pre>
<p>The keys are <code>dogs</code> and <code>cats</code>, and the values are the numbers of each that Ann, Bob, Chris, and Dan have. </p>
<p>I want to invert the value component of my dictionary. I know I can convert to a list using <code>dict1.values()</code>, then convert to an array, and then convert back to a dictionary, but this seems tedious. Is there a way to make my value component a numpy array and leave the key component the way it is?</p>
| 0 | 2016-08-11T02:32:02Z | 38,901,973 | <p>I know you said in the comments you weren't ready to start learning pandas but it would be quite a nice way to work with this data rather than a dictionary of dictionaries.</p>
<p>Pandas has some nice built in functionality for constructing data frames from dictionaries. Once in a Pandas DataFrame, it's quite easy to convert the string values to integers and then do the arithmetic.</p>
<pre><code>In [1]: import pandas as pd
In [2]: dict1 = {'Ann': {'dogs': '3', 'cats': '4'},
...: 'Bob': {'dogs': '5', 'cats': '6'},
...: 'Chris': {'dogs': '7', 'cats': '8'},
...: 'Dan': {'dogs': '9', 'cats': '10'}}
In [3]: df = pd.DataFrame(dict1)
In [4]: df
Out[4]:
Ann Bob Chris Dan
cats 4 6 8 10
dogs 3 5 7 9
In [5]: df.values
Out[5]:
array([['4', '6', '8', '10'],
['3', '5', '7', '9']], dtype=object)
In [6]: df.applymap(int)
Out[6]:
Ann Bob Chris Dan
cats 4 6 8 10
dogs 3 5 7 9
In [7]: df = 1.0/df.applymap(int)
In [8]: df
Out[8]:
Ann Bob Chris Dan
cats 0.250000 0.166667 0.125000 0.100000
dogs 0.333333 0.200000 0.142857 0.111111
In [10]: df.to_dict()
Out[10]:
{'Ann': {'cats': 0.25, 'dogs': 0.33333333333333331},
'Bob': {'cats': 0.16666666666666666, 'dogs': 0.20000000000000001},
'Chris': {'cats': 0.125, 'dogs': 0.14285714285714285},
'Dan': {'cats': 0.10000000000000001, 'dogs': 0.1111111111111111}}
</code></pre>
| 0 | 2016-08-11T16:44:19Z | [
"python",
"arrays",
"python-2.7",
"numpy",
"dictionary"
] |
python group same values in a dictionary and give each a mark (or assign them into a new dict) | 38,886,472 | <p>Suppose I have a dictionary like this</p>
<pre><code>origin_dict={0:[],1:[],2:['bus'],3:['bus'],4:['bus'],5:[],6:[],7:['train'],8:['train'],9:['train'],10:[],11:[],12:['train'],13:['train'],14:[]}
</code></pre>
<p>I want to group them by the same value, but only when they are consecutive.</p>
<pre><code>new_dict={0:{2:'bus',3:'bus',4:'bus'},1:{7:'train',8:'train',9:'train'},2:{12:'train',13:'train'}}
</code></pre>
<p>Anyone has any ideas?</p>
<p>Thanks</p>
| 0 | 2016-08-11T02:44:53Z | 38,886,800 | <pre><code>from collections import defaultdict
from operator import itemgetter
from itertools import groupby
origin_dict = {0: [], 1: [], 2: ['bus'], 3: ['bus'], 4: ['bus'], 5: [], 6: [], 7: [
'train'], 8: ['train'], 9: ['train'], 10: [], 11: [], 12: ['train'], 13: ['train'], 14: []}
result = defaultdict(list)
for k, v in origin_dict.iteritems():
key = "".join(sorted(v))
if key != "":
result[key].append(k)
solution = defaultdict(dict)
index = 0
for k, data in result.iteritems():
    for k1, g in groupby(enumerate(sorted(data)), lambda (i, x): i - x):  # sort so runs are consecutive
for v2 in map(itemgetter(1), g):
solution[index][v2] = k
index += 1
print origin_dict
print dict(solution)
</code></pre>
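The same consecutive-run grouping can be sketched without itertools (a minimal, dependency-free version; it iterates the keys in sorted order so that "consecutive" is well-defined):

```python
origin = {0: [], 1: [], 2: ['bus'], 3: ['bus'], 4: ['bus'], 5: [], 6: [],
          7: ['train'], 8: ['train'], 9: ['train'], 10: [], 11: [],
          12: ['train'], 13: ['train'], 14: []}

new_dict = {}
current = {}
prev = None
for k in sorted(origin):
    val = origin[k][0] if origin[k] else None
    if val != prev and current:        # a run of equal values just ended
        new_dict[len(new_dict)] = current
        current = {}
    if val is not None:                # extend (or start) the current run
        current[k] = val
    prev = val
if current:                            # flush a run that reaches the last key
    new_dict[len(new_dict)] = current

print(new_dict)
# {0: {2: 'bus', 3: 'bus', 4: 'bus'}, 1: {7: 'train', 8: 'train', 9: 'train'}, 2: {12: 'train', 13: 'train'}}
```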
| 1 | 2016-08-11T03:30:15Z | [
"python",
"list",
"dictionary",
"group",
"itertools"
] |
python group same values in a dictionary and give each a mark (or assign them into a new dict) | 38,886,472 | <p>Suppose I have a dictionary like this</p>
<pre><code>origin_dict={0:[],1:[],2:['bus'],3:['bus'],4:['bus'],5:[],6:[],7:['train'],8:['train'],9:['train'],10:[],11:[],12:['train'],13:['train'],14:[]}
</code></pre>
<p>I want to group them by the same value, but only when they are consecutive.</p>
<pre><code>new_dict={0:{2:'bus',3:'bus',4:'bus'},1:{7:'train',8:'train',9:'train'},2:{12:'train',13:'train'}}
</code></pre>
<p>Anyone has any ideas?</p>
<p>Thanks</p>
| 0 | 2016-08-11T02:44:53Z | 38,887,593 | <p>This is my simple and efficient solution.</p>
<pre><code>#! /usr/bin/python
origin_dict={0:[],1:[],2:['bus'],3:['bus'],4:['bus'],5:[],6:[],7:['train'],8:['train'],9:['train'],10:[],11:[],12:['train'],13:['train'],14:[]}
dict_out = {}
int_dict = {}
mine_keys = [key for key in origin_dict.keys() if origin_dict[key] != []]
prev_val = False
keyind = 0
for key in sorted(origin_dict):  # iterate in key order so "consecutive" is well-defined
    if key not in mine_keys:
        if prev_val:
            dict_out[keyind] = int_dict
            prev_val = False
            keyind += 1
            int_dict = {}
    else:
        prev_val = True
        int_dict[key] = origin_dict[key][0]  # unwrap the one-element list, e.g. 'bus'
if prev_val:  # flush a trailing group that runs to the last key
    dict_out[keyind] = int_dict
print origin_dict
print dict_out
</code></pre>
| 0 | 2016-08-11T05:02:27Z | [
"python",
"list",
"dictionary",
"group",
"itertools"
] |
Python Scrapy Print Item Keys as Header in CSV | 38,886,499 | <p>I have a scrapy project I'm working on and I'm trying to export data to a csv file using a pipeline and I would like to print the item keys as the first row of the csv file. My pipeline code is below (I can post more code if necessary but I imagine this would suffice). Thanks in advance.</p>
<pre><code>import csv
class CsvWriterPipeline(object):
def __init__(self):
self.csvwriter = csv.writer(open('items.csv','wb'))
def process_item(self,item,pfr):
self.csvwriter.writerow([item[key] for key in item.keys()])
return item
</code></pre>
| 0 | 2016-08-11T02:49:44Z | 38,887,346 | <p>Scrapy already adds header to csv export if you are outputting csv via:</p>
<pre><code>scrapy crawl spidername --output results.csv
</code></pre>
<p>If you want to do it manually in the pipeline, you can create the file and write the headers in the pipeline's <a href="http://doc.scrapy.org/en/latest/topics/item-pipeline.html#open_spider" rel="nofollow">open_spider()</a> method, which executes when the spider opens.</p>
<p>Something like:</p>
<pre><code>def open_spider(self, spider):
header_keys = MyItem.fields.keys()
self.csvwriter.writerow(header_keys)
</code></pre>
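If the header should come from the first scraped item instead (so the pipeline does not need to import the Item class), one possible sketch — the class body below is illustrative, not the asker's exact code — is:

```python
import csv

class CsvWriterPipeline(object):
    """Writes a header row derived from the keys of the first item seen."""

    def __init__(self, path='items.csv'):
        self.path = path
        self.csvfile = None
        self.csvwriter = None
        self.fieldnames = None

    def open_spider(self, spider):
        # Python 3 style; on Python 2 use open(self.path, 'wb') instead
        self.csvfile = open(self.path, 'w', newline='')
        self.csvwriter = csv.writer(self.csvfile)

    def process_item(self, item, spider):
        if self.fieldnames is None:        # first item: emit the header row
            self.fieldnames = list(item.keys())
            self.csvwriter.writerow(self.fieldnames)
        self.csvwriter.writerow([item[k] for k in self.fieldnames])
        return item

    def close_spider(self, spider):
        self.csvfile.close()
```

Closing the file in <code>close_spider()</code> also fixes a latent problem in the original snippet, where the file handle was never explicitly closed.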
| 1 | 2016-08-11T04:36:47Z | [
"python",
"csv",
"scrapy"
] |
How to write a bash script which calls itself with python? | 38,886,505 | <p>Can someone explain how this bash script works? The part I don't understand is <code>""":"</code>, what does this syntax mean in bash?</p>
<pre><code>#!/bin/sh
""":"
echo called by bash
exec python $0 ${1+"$@"}
"""
import sys
print 'called by python, args:',sys.argv[1:]
</code></pre>
<p>test running result:</p>
<pre><code>$ ./callself.sh xx
called by bash
called by python, args: ['xx']
$ ./callself.sh
called by bash
called by python, args: []
</code></pre>
| 1 | 2016-08-11T02:50:37Z | 38,886,537 | <p>That's clever! <strong>In Bash</strong>, the <code>""":"</code> will be expanded into only <code>:</code>, which is the empty command (it doesn't do anything). So, the next few lines will be executed, leading to <code>exec</code>. At that point, Bash <em>ceases to exist</em>, and the file is re-read by Python (its name is <code>$0</code>), and the original arguments are forwarded.</p>
<p>The <code>${1+"$@"}</code> means: If <code>$1</code> is defined, pass as arguments <code>"$@"</code>, which are the original Bash script arguments. If <code>$1</code> is not defined, meaning Bash had no arguments, the result is empty, so nothing else is passed, not even the empty string.</p>
<p><strong>In Python</strong>, the <code>"""</code> starts a multi-line string, which includes the Bash commands, and extends up to the closing <code>"""</code>. So Python will jump right below.</p>
| 2 | 2016-08-11T02:56:38Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
How to write a bash script which calls itself with python? | 38,886,505 | <p>Can someone explain how this bash script works? The part I don't understand is <code>""":"</code>, what does this syntax mean in bash?</p>
<pre><code>#!/bin/sh
""":"
echo called by bash
exec python $0 ${1+"$@"}
"""
import sys
print 'called by python, args:',sys.argv[1:]
</code></pre>
<p>test running result:</p>
<pre><code>$ ./callself.sh xx
called by bash
called by python, args: ['xx']
$ ./callself.sh
called by bash
called by python, args: []
</code></pre>
| 1 | 2016-08-11T02:50:37Z | 38,886,573 | <p>This is an example of a <a href="https://en.wikipedia.org/wiki/Polyglot_(computing)" rel="nofollow">polyglot</a>, where you write multiple programming languages in one file and the file is still valid in each.</p>
<p><strong>How is it valid in python</strong></p>
<pre><code>""":"
echo called by bash
exec python $0 ${1+"$@"}
"""
</code></pre>
<p>This is a multiline docstring in python so python completely ignores it till the <code>import</code> line</p>
<p><strong>How is it valid in bash</strong></p>
<pre><code>""":"
echo called by bash
exec python $0 ${1+"$@"}
</code></pre>
<p>The <code>exec</code> call runs the same script using the Python interpreter, replacing the shell process. So the shell never executes the remaining lines, which would be syntactically invalid as shell statements.</p>
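The whole mechanism can be verified end-to-end from Python (a self-contained sketch: it writes a similar polyglot to a temporary file and runs it with <code>sh</code>; using <code>sys.executable</code> instead of a bare <code>python</code> is just so the demo picks up the current interpreter):

```python
import os
import subprocess
import sys
import tempfile

# a polyglot like the one in the question, with the interpreter path filled in
polyglot = '''""":"
echo called by bash
exec %s $0 "$@"
"""
import sys
print('called by python, args:', sys.argv[1:])
''' % sys.executable

fd, path = tempfile.mkstemp(suffix='.sh')
with os.fdopen(fd, 'w') as f:
    f.write(polyglot)
out = subprocess.check_output(['sh', path, 'xx'])
os.remove(path)
print(out.decode())
# called by bash
# called by python, args: ['xx']
```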
| 1 | 2016-08-11T03:00:58Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
How to write a bash script which calls itself with python? | 38,886,505 | <p>Can someone explain how this bash script works? The part I don't understand is <code>""":"</code>, what does this syntax mean in bash?</p>
<pre><code>#!/bin/sh
""":"
echo called by bash
exec python $0 ${1+"$@"}
"""
import sys
print 'called by python, args:',sys.argv[1:]
</code></pre>
<p>test running result:</p>
<pre><code>$ ./callself.sh xx
called by bash
called by python, args: ['xx']
$ ./callself.sh
called by bash
called by python, args: []
</code></pre>
| 1 | 2016-08-11T02:50:37Z | 38,886,587 | <p><a href="http://stackoverflow.com/questions/3224878/what-is-the-purpose-of-the-colon-gnu-bash-builtin">What is the purpose of the : (colon) GNU Bash builtin?</a></p>
<p>Also, once exec is called, the rest of the code is not executed because exec replaces the shell with the program, in this case the python process. (<a href="http://wiki.bash-hackers.org/commands/builtin/exec" rel="nofollow">http://wiki.bash-hackers.org/commands/builtin/exec</a>)</p>
| 0 | 2016-08-11T03:02:28Z | [
"python",
"linux",
"bash",
"shell",
"sh"
] |
Julia string format "if" | 38,886,506 | <p>In Python, <code>if</code> may be used in a situation such as the following for optional string formatting.</p>
<pre><code>bar = 3
"{n} bar{s}".format(n=bar, s='s' if bar != 1 else '')
# "3 bars"
bar = 1
"{n} bar{s}".format(n=bar, s='s' if bar != 1 else '')
# "1 bar"
</code></pre>
<p>Julia uses the dollar sign for string formatting.</p>
<pre><code>foo = 3
"foo $foo" # "foo 3"
</code></pre>
<p>Is it possible to simply mirror the functionality of the Python code using Julia?</p>
| 2 | 2016-08-11T02:50:51Z | 38,893,387 | <p>Yes. The <code>$</code> interpolation method works with expressions in parentheses. In this case, <code>$bar bar$(bar != 1 ? 's' : "")</code> produces the same results as the Python code.</p>
<p>As @Oxinabox mentioned, Python's inline <code>if</code> corresponds to Julia's ternary operator. In Julia the ternary operator <code>a ? b : c</code> is a handy shortcut for <code>if a b ; else c ; end</code>. Note this means <code>1==2 ? foo() : bar()</code> does not evaluate <code>foo()</code>.</p>
| 5 | 2016-08-11T10:09:55Z | [
"python",
"string",
"if-statement",
"format",
"julia-lang"
] |
Julia string format "if" | 38,886,506 | <p>In Python, <code>if</code> may be used in a situation such as the following for optional string formatting.</p>
<pre><code>bar = 3
"{n} bar{s}".format(n=bar, s='s' if bar != 1 else '')
# "3 bars"
bar = 1
"{n} bar{s}".format(n=bar, s='s' if bar != 1 else '')
# "1 bar"
</code></pre>
<p>Julia uses the dollar sign for string formatting.</p>
<pre><code>foo = 3
"foo $foo" # "foo 3"
</code></pre>
<p>Is it possible to simply mirror the functionality of the Python code using Julia?</p>
 | 2 | 2016-08-11T02:50:51Z | 38,899,410 | <p>In addition to everything @DanGetz said, you may also want to check out the <a href="https://github.com/JuliaLang/Formatting.jl" rel="nofollow">Formatting package</a> - it's explicitly designed to offer more Python-like formatting facilities for Julia.</p>
| 2 | 2016-08-11T14:37:50Z | [
"python",
"string",
"if-statement",
"format",
"julia-lang"
] |
Form Validation message not being displayed - Flask | 38,886,525 | <p>I'm having trouble getting error messages in Flask to render.
I suspect this is related to the blueprints. Previously, the logic seen in views.py was in the users blueprint, but I've since ported it over to the main blueprint. Anyhow, since then, I am unable to get error messages to render.</p>
<p>The specific line I think I'm having trouble with is:</p>
<p>self.email.errors.append("This Email is already registered")</p>
<h1>project/main/views.py</h1>
<pre><code>@main_blueprint.route('/', methods=['GET', 'POST'])
@main_blueprint.route('/<referrer>', methods=['GET', 'POST'])
def home(referrer=None):
form = RegisterForm(request.form)
# prepares response
resp = make_response(render_template('main/index.html', form=form))
if form.validate_on_submit():
do_stuff()
return resp
</code></pre>
<h1>project/main/index.html</h1>
<pre><code><h1>Please Register</h1>
<br>
<form class="" role="form" method="post" action="">
{{ form.csrf_token }}
{{ form.email(placeholder="email") }}
<span class="error">
{% if form.email.errors %}
{% for error in form.email.errors %}
{{ error }}
{% endfor %}
{% endif %}
</span>
</p>
<button class="btn btn-success" type="submit">Register!</button>
<br><br>
<p>Already have an account? <a href="/login">Sign in</a>.</p>
</form>
</code></pre>
<h1>project/user/forms.py</h1>
<pre><code>class RegisterForm(Form):
email = TextField(
'email',
validators=[DataRequired(), Email(message=None), Length(min=6, max=40)])
def validate(self):
print "validating"
initial_validation = super(RegisterForm, self).validate()
if not initial_validation:
print "not initial validation"
return False
user = User.query.filter_by(email=self.email.data).first()
print user
if user:
print self
print "error, email already registered"
self.email.errors.append("This Email is already registered")
return False
return True
</code></pre>
<p>When attempting to debug, the value for 'print user' from this is:</p>
<p>project.user.forms.RegisterForm object at 0x7fa436807698</p>
 | -1 | 2016-08-11T02:54:49Z | 38,887,023 | <p>Got it to work, @glls, you were correct. Rewrote the code as:</p>
<pre><code>@main_blueprint.route('/', methods=['GET', 'POST'])
@main_blueprint.route('/<referrer>', methods=['GET', 'POST'])
def home(referrer=None):
    form = RegisterForm(request.form)
    if form.validate_on_submit():
        do_stuff()
    # build the response after validating, so form errors reach the template
    resp = make_response(render_template('main/index.html', form=form))
    return resp
</code></pre>
| 0 | 2016-08-11T03:57:13Z | [
"python",
"flask",
"flask-wtforms"
] |
How to parse a single-column text file into a table using python? | 38,886,546 | <p>I'm new here to StackOverflow, but I have found a LOT of answers on this site. I'm also a programming newbie, so I figured I'd join and finally become part of this community - starting with a question about a problem that's been plaguing me for hours.</p>
<p>I login to a website and scrape a big body of text within the b tag to be converted into a proper table. The layout of the resulting Output.txt looks like this:</p>
<pre><code>BIN STATUS
8FHA9D8H 82HG9F RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
INVENTORY CODE: FPBC *SOUP CANS LENTILS
BIN STATUS
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
93827548 096DBR RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
</code></pre>
<p>There are a bunch of pages with the exact same blocks, but i need them to be combined into an ACTUAL table that looks like this:</p>
<pre><code> BIN INV CODE STATUS
HA8DHW2HHD0138 FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU00A123 FPBC-*SOUP CANS LENTILS #2956- INVALID STOCK COUPON CODE (MISSING).
93827548096DBR FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8FHA9D8H82HG9F SSXR-98-20LM NM CORN CREAM RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
</code></pre>
<p>Essentially, all separate text blocks in this example would become part of this table, with the inv code repeating with its Bin values. I would post my attempts at parsing this data (I have tried Pandas/bs/openpyxl/csv writer), but I'll admit they are a little embarrassing, as I cannot find any information on this specific problem. Is there any benevolent soul out there that can help me out? :) </p>
<p>(Also, i am using Python 2.7) </p>
 | 4 | 2016-08-11T02:57:59Z | 38,887,835 | <p>I had some code written for website scraping which may help you.
Basically what you need to do is right-click on the web page, go to the HTML, and try to find the tag for the table you are looking for, then extract the information using the module (I am using Beautiful Soup). I am creating a JSON document as I need to store it into MongoDB; you can create a table.</p>
<pre><code>#! /usr/bin/python
import sys
import requests
import re
from BeautifulSoup import BeautifulSoup
import pymongo
def req_and_parsing():
url2 = 'http://businfo.dimts.in/businfo/Bus_info/EtaByRoute.aspx?ID='
list1 = ['534UP','534DOWN']
for Route in list1:
final_url = url2 + Route
#r = requests.get(final_url)
#parsing_file(r.text,Route)
outdict = []
outdict = [parsing_file( requests.get(url2+Route).text,Route) for Route in list1 ]
print outdict
conn = f_connection()
for i in range(len(outdict)):
insert_records(conn,outdict[i])
def parsing_file(txt,Route):
soup = BeautifulSoup(txt)
table = soup.findAll("table",{"id" : "ctl00_ContentPlaceHolder1_GridView2"})
#trtags = table[0].findAll('tr')
tdlist = []
trtddict = {}
"""
for trtag in trtags:
print 'print trtag- ' , trtag.text
tdtags = trtag.findAll('td')
for tdtag in tdtags:
print tdtag.text
"""
divtags = soup.findAll("span",{"id":"ctl00_ContentPlaceHolder1_ErrorLabel"})
for divtag in divtags:
for divtag in divtags:
print "div tag - " , divtag.text
if divtag.text == "Currently no bus is running on this route" or "This is not a cluster (orange bus) route":
print "Page not displayed Errored with below meeeage for Route-", Route," , " , divtag.text
sys.exit()
trtags = table[0].findAll('tr')
for trtag in trtags:
tdtags = trtag.findAll('td')
if len(tdtags) == 2:
trtddict[tdtags[0].text] = sub_colon(tdtags[1].text)
return trtddict
def sub_colon(tag_str):
return re.sub(';',',',tag_str)
def f_connection():
try:
conn=pymongo.MongoClient()
print "Connected successfully!!!"
except pymongo.errors.ConnectionFailure, e:
print "Could not connect to MongoDB: %s" % e
return conn
def insert_records(conn,stop_dict):
db = conn.test
print db.collection_names()
mycoll = db.stopsETA
mycoll.insert(stop_dict)
if __name__ == "__main__":
req_and_parsing()
</code></pre>
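<p>For a dependency-free taste of the same table-extraction step, Python 3's standard-library <code>html.parser</code> can pull cell text as well. This is a minimal hypothetical sketch, not a replacement for BeautifulSoup's navigation API:</p>

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the stripped text of every <td> in a document."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == 'td':
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

parser = CellCollector()
parser.feed('<table><tr><td>8FHA9D8H</td><td>RECEIVED</td></tr></table>')
print(parser.cells)  # ['8FHA9D8H', 'RECEIVED']
```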
| -2 | 2016-08-11T05:23:54Z | [
"python",
"web-scraping"
] |
How to parse a single-column text file into a table using python? | 38,886,546 | <p>I'm new here to StackOverflow, but I have found a LOT of answers on this site. I'm also a programming newbie, so I figured I'd join and finally become part of this community - starting with a question about a problem that's been plaguing me for hours.</p>
<p>I login to a website and scrape a big body of text within the b tag to be converted into a proper table. The layout of the resulting Output.txt looks like this:</p>
<pre><code>BIN STATUS
8FHA9D8H 82HG9F RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
INVENTORY CODE: FPBC *SOUP CANS LENTILS
BIN STATUS
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
93827548 096DBR RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
</code></pre>
<p>There are a bunch of pages with the exact same blocks, but i need them to be combined into an ACTUAL table that looks like this:</p>
<pre><code> BIN INV CODE STATUS
HA8DHW2HHD0138 FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU00A123 FPBC-*SOUP CANS LENTILS #2956- INVALID STOCK COUPON CODE (MISSING).
93827548096DBR FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8FHA9D8H82HG9F SSXR-98-20LM NM CORN CREAM RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
</code></pre>
<p>Essentially, all separate text blocks in this example would become part of this table, with the inv code repeating with its Bin values. I would post my attempts at parsing this data (I have tried Pandas/bs/openpyxl/csv writer), but I'll admit they are a little embarrassing, as I cannot find any information on this specific problem. Is there any benevolent soul out there that can help me out? :) </p>
<p>(Also, i am using Python 2.7) </p>
| 4 | 2016-08-11T02:57:59Z | 38,911,274 | <p>A simple custom parser like the following should do the trick. </p>
<pre><code>from __future__ import print_function
def parse_body(s):
line_sep = '\n'
getting_bins = False
inv_code = ''
for l in s.split(line_sep):
if l.startswith('INVENTORY CODE:') and not getting_bins:
inv_data = l.split()
inv_code = inv_data[2] + '-' + ' '.join(inv_data[3:])
elif l.startswith('INVENTORY CODE:') and getting_bins:
print("unexpected inventory code while reading bins:", l)
        elif l.startswith('BIN') and l.endswith('STATUS'):
getting_bins = True
elif getting_bins == True and l:
bin_data = l.split()
# need to add exception handling here to make sure:
# 1) we have an inv_code
# 2) bin_data is at least 3 items big (assuming two for
# bin_id and at least one for message)
# 3) maybe some constraint checking to ensure that we have
# a valid instance of an inventory code and bin id
bin_id = ''.join(bin_data[0:2])
message = ' '.join(bin_data[2:])
# we now have a bin, an inv_code, and a message to add to our table
print(bin_id.ljust(20), inv_code.ljust(30), message, sep='\t')
elif getting_bins == True and not l:
# done getting bins for current inventory code
getting_bins = False
inv_code = ''
</code></pre>
| 0 | 2016-08-12T06:26:46Z | [
"python",
"web-scraping"
] |
How to parse a single-column text file into a table using python? | 38,886,546 | <p>I'm new here to StackOverflow, but I have found a LOT of answers on this site. I'm also a programming newbie, so I figured I'd join and finally become part of this community - starting with a question about a problem that's been plaguing me for hours.</p>
<p>I login to a website and scrape a big body of text within the b tag to be converted into a proper table. The layout of the resulting Output.txt looks like this:</p>
<pre><code>BIN STATUS
8FHA9D8H 82HG9F RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
INVENTORY CODE: FPBC *SOUP CANS LENTILS
BIN STATUS
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
93827548 096DBR RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
</code></pre>
<p>There are a bunch of pages with the exact same blocks, but i need them to be combined into an ACTUAL table that looks like this:</p>
<pre><code> BIN INV CODE STATUS
HA8DHW2HHD0138 FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU00A123 FPBC-*SOUP CANS LENTILS #2956- INVALID STOCK COUPON CODE (MISSING).
93827548096DBR FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8FHA9D8H82HG9F SSXR-98-20LM NM CORN CREAM RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
</code></pre>
<p>Essentially, all separate text blocks in this example would become part of this table, with the inv code repeating with its Bin values. I would post my attempts at parsing this data (I have tried Pandas/bs/openpyxl/csv writer), but I'll admit they are a little embarrassing, as I cannot find any information on this specific problem. Is there any benevolent soul out there that can help me out? :) </p>
<p>(Also, i am using Python 2.7) </p>
| 4 | 2016-08-11T02:57:59Z | 38,928,262 | <p>A rather complex one, but this might get you started:</p>
<pre><code>import re, pandas as pd
from pandas import DataFrame
rx = re.compile(r'''
(?:INVENTORY\ CODE:)\s*
(?P<inv>.+\S)
[\s\S]+?
^BIN.+[\n\r]
(?P<bin_msg>(?:(?!^\ ).+[\n\r])+)
''', re.MULTILINE | re.VERBOSE)
string = your_string_here
# set up the dataframe
df = DataFrame(columns = ['BIN', 'INV', 'MESSAGE'])
for match in rx.finditer(string):
inv = match.group('inv')
bin_msg_raw = match.group('bin_msg').split("\n")
rxbinmsg = re.compile(r'^(?P<bin>(?:(?!\ {2}).)+)\s+(?P<message>.+\S)\s*$', re.MULTILINE)
for item in bin_msg_raw:
for m in rxbinmsg.finditer(item):
# append it to the dataframe
df.loc[len(df.index)] = [m.group('bin'), inv, m.group('message')]
print(df)
</code></pre>
<h3>Explanation</h3>
<p>It looks for <code>INVENTORY CODE</code> and sets up the groups (<code>inv</code> and <code>bin_msg</code>) for further processing in the <code>finditer</code> loop (note: it would be easier if you had only one line of bin/msg, as you need to split the group afterwards).<br>
Afterwards, it splits the <code>bin</code> and <code>msg</code> part and appends all to the <code>df</code> object.</p>
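<p>A dry run of the outer regex on (part of) the question's sample, collecting plain tuples instead of DataFrame rows, shows the two capture groups doing their job. Same pattern as above; pandas is left out of the sketch, and the per-line split uses <code>str.split</code> for brevity instead of the answer's second regex:</p>

```python
import re

rx = re.compile(r'''
    (?:INVENTORY\ CODE:)\s*
    (?P<inv>.+\S)
    [\s\S]+?
    ^BIN.+[\n\r]
    (?P<bin_msg>(?:(?!^\ ).+[\n\r])+)
    ''', re.MULTILINE | re.VERBOSE)

sample = (
    "INVENTORY CODE: FPBC *SOUP CANS LENTILS\n"
    "BIN           STATUS\n"
    "HA8DHW2H HD0138   RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS\n"
    "8SHDNADU 00A123   #2956- INVALID STOCK COUPON CODE (MISSING).\n"
)

rows = []
for match in rx.finditer(sample):
    inv = match.group('inv')
    for line in match.group('bin_msg').splitlines():
        parts = line.split()
        # first two tokens form the bin id, the rest is the message
        rows.append((''.join(parts[:2]), inv, ' '.join(parts[2:])))
print(rows[0][0], '|', rows[0][1])  # HA8DHW2HHD0138 | FPBC *SOUP CANS LENTILS
```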
| 0 | 2016-08-13T01:09:59Z | [
"python",
"web-scraping"
] |
Unable to parse the value of an element from response in python? | 38,886,574 | <pre><code>import requests
from bs4 import BeautifulSoup
s = requests.Session()
content = s.get('https://nucleus.niituniversity.in/Default.aspx').content
soup = BeautifulSoup(content,"html5lib")
print("viewState = " + str(soup.select_one("#__VIEWSTATE")["value"]))
print("UserDet = " + str(soup.select_one("#SchSel_hidCoreUserDet")["value"]))
</code></pre>
<hr>
<p><strong>Result :</strong>
<br>ViewState = qdwrf3rf<br></p>
<blockquote>
<p>KeyError: 'value' - which means no value exists for UserDet</p>
</blockquote>
<p>But see the response when I manually login.The value for Userdet exists.<a href="http://i.stack.imgur.com/9X3hf.png" rel="nofollow"><img src="http://i.stack.imgur.com/9X3hf.png" alt="enter image description here"></a></p>
<p>Why is the error occurring during requests with Python, while there is no problem with manual login?</p>
| 0 | 2016-08-11T03:01:00Z | 38,886,753 | <p>No, the element with <code>id="SchSel_hidCoreUserDet"</code> does not have a <code>value</code>. Here is what I see if I open this page in the browser:</p>
<pre><code><input type="hidden" name="SchSel$hidCoreUserDet" id="SchSel_hidCoreUserDet" value="">
</code></pre>
<p>The value is actually empty - it is the browser that adds the <code>value</code> attribute with an empty value.</p>
<p>And, here is what I see if printing the element found by <code>BeautifulSoup</code>:</p>
<pre><code><input id="SchSel_hidCoreUserDet" name="SchSel$hidCoreUserDet" type="hidden"/>
</code></pre>
<p>In both cases, the element does not have a value.</p>
<hr>
<p>You might be previously logged in to this site and it might remember you through cookies or the local storage. Try inspecting the source of the page in an incognito window, or a browser which you have not used to log in to this site.</p>
| 0 | 2016-08-11T03:24:42Z | [
"python",
"asp.net",
"beautifulsoup",
"python-requests",
"python-3.4"
] |
convert string representation of array to numpy array in python | 38,886,641 | <p>I can <a href="http://stackoverflow.com/questions/1894269/convert-string-representation-of-list-to-list-in-python">convert a string representation of a list to a list</a> with <code>ast.literal_eval</code>. Is there an equivalent for a numpy array?</p>
<pre><code>x = arange(4)
xs = str(x)
xs
'[0 1 2 3]'
# how do I convert xs back to an array
</code></pre>
<p>Using <code>ast.literal_eval(xs)</code> raises a <code>SyntaxError</code>. I can do the string parsing if I need to, but I thought there might be a better solution.</p>
| 0 | 2016-08-11T03:09:22Z | 38,886,759 | <p>Numpy has a function called <code>fromstring</code>, document <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromstring.html#numpy.fromstring" rel="nofollow">here</a>. Briefly you can parse string like this:</p>
<pre><code>s = '0 1 2 3'
a = np.fromstring(s, dtype=np.int, sep=' ')
print(a) # [0 1 2 3]
</code></pre>
<p>The tiny difference is that the string should <strong>not</strong> contain brackets.</p>
| 1 | 2016-08-11T03:25:59Z | [
"python",
"numpy"
] |
convert string representation of array to numpy array in python | 38,886,641 | <p>I can <a href="http://stackoverflow.com/questions/1894269/convert-string-representation-of-list-to-list-in-python">convert a string representation of a list to a list</a> with <code>ast.literal_eval</code>. Is there an equivalent for a numpy array?</p>
<pre><code>x = arange(4)
xs = str(x)
xs
'[0 1 2 3]'
# how do I convert xs back to an array
</code></pre>
<p>Using <code>ast.literal_eval(xs)</code> raises a <code>SyntaxError</code>. I can do the string parsing if I need to, but I thought there might be a better solution.</p>
| 0 | 2016-08-11T03:09:22Z | 38,887,164 | <p>Starting with this:</p>
<pre><code> x = arange(4)
xs = str(x)
xs
'[0 1 2 3]'
</code></pre>
<p>Try this:</p>
<pre><code>import re, ast
xs = re.sub(r'\s+', ',', xs)
a = np.array(ast.literal_eval(xs))
a
array([0, 1, 2, 3])
</code></pre>
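<p>If you would rather avoid the regex round-trip, the same bracket-stripping idea works with plain string methods before handing the values back to <code>np.array</code> (standard-library sketch):</p>

```python
# Strip the brackets that str(array) adds, split on whitespace, convert.
xs = '[0 1 2 3]'
values = [int(tok) for tok in xs.strip('[]').split()]
print(values)  # [0, 1, 2, 3]
# np.array(values) then recovers the original array (not executed here)
```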
| 3 | 2016-08-11T04:14:44Z | [
"python",
"numpy"
] |
How to convert a list of data frames to a panel in python-pandas? | 38,886,698 | <p>Given a list of data frames with the following format:</p>
<pre><code>id age weight score date
01 11 50 90 2011-01-23
01 12 52 89 2012-03-23
...
</code></pre>
<p>Please note that the <code>id</code> in a data frame is the same. And I wish to get a panel, integrating all the data frames in the list, with the columns <code>['age', 'weight', 'score']</code> as the <code>item-axis</code>, <code>date</code> as the <code>major-axis</code>, and <code>id</code> as the <code>minor-axis</code>. Do you know how to do that?</p>
<p>Thank you in advance! </p>
| 0 | 2016-08-11T03:17:25Z | 38,886,856 | <p>First step is to <code>concat</code> your frames together:</p>
<pre><code> concated = pd.concat(list_of_frames)
</code></pre>
<p>Then, you can simply:</p>
<pre><code>items = ['age', 'weight', 'score']
pd.Panel(dict(zip(items, [concated.pivot(index='date', columns='id', values=i) for i in items])))
</code></pre>
<p>This is so nicely specified in this <a href="http://pandas.pydata.org/pandas-docs/stable/dsintro.html#panel" rel="nofollow">documentation</a>.</p>
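<p><code>pd.Panel</code> has since been removed from pandas (0.25 dropped it), but the <code>concat</code> + <code>pivot</code> steps are unchanged, and the resulting dict of date-by-id tables is exactly what the Panel constructor consumed. A toy run on invented data shaped like the question's:</p>

```python
import pandas as pd

f1 = pd.DataFrame({'id': ['01', '01'], 'date': ['2011-01-23', '2012-03-23'],
                   'age': [11, 12], 'weight': [50, 52], 'score': [90, 89]})
f2 = pd.DataFrame({'id': ['02', '02'], 'date': ['2011-01-23', '2012-03-23'],
                   'age': [21, 22], 'weight': [60, 62], 'score': [80, 79]})

concated = pd.concat([f1, f2])
items = ['age', 'weight', 'score']
# one date-by-id table per item; wrap the dict in pd.Panel(...) on pandas < 0.25
tables = {i: concated.pivot(index='date', columns='id', values=i) for i in items}
print(tables['age'].loc['2011-01-23', '02'])  # 21
```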
| 1 | 2016-08-11T03:37:00Z | [
"python",
"pandas",
"dataframe",
"panel"
] |
Python - scapy packet size difference | 38,886,709 | <p>I'm sending and receiving a packet with the module scapy.</p>
<pre><code>a = sr(IP(src="192.168.1.100",dst="8.8.4.4")/UDP(sport=RandShort(),dport=53)/DNS(rd=1,qd=DNSQR(qname="google.com",qtype="ALL",qclass="IN"),ar=DNSRROPT(rclass=3000)),timeout=1)
</code></pre>
<p>If I display the packet size of the command and response:</p>
<pre><code>#command size
print len(a[0][0][0])
>67
#response size
print len(a[0][0][1])
>496
</code></pre>
<p>But if I capture the packets with Wireshark, it shows me a packet length:</p>
<pre><code>command: 83 bytes
response: 512 bytes
</code></pre>
<p>So we know in Wireshark we have an additional size of 16 bytes for command and response:</p>
<pre><code>83-67 =16
512-496 =16
</code></pre>
<p>And I want to know (just for educational purposes) what the additional 16 bytes captured by Wireshark are. Does somebody have deep 'know-how' in networking and can tell me what happens?</p>
<p>EDIT:</p>
<p>Output of <code>a[0].summary()</code>:</p>
<pre><code>IP / UDP / DNS Qry "google.com" ==> IP / UDP / DNS Ans "74.125.68.102"
</code></pre>
<p>Output of <code>a[0][0][0].show()</code>:</p>
<pre><code>###[ IP ]###
version = 4
ihl = None
tos = 0x0
len = **67**
id = 1
flags =
frag = 0
ttl = 64
proto = udp
chksum = None
src = 192.168.1.100
dst = 8.8.4.4
\options \
###[ UDP ]###
sport = 41454
dport = domain
len = None
chksum = None
###[ DNS ]###
id = 0
qr = 0
opcode = QUERY
aa = 0
tc = 0
rd = 1
ra = 0
z = 0
ad = 0
cd = 0
rcode = ok
qdcount = 1
ancount = 0
nscount = 0
arcount = 1
\qd \
|###[ DNS Question Record ]###
| qname = 'google.com'
| qtype = ALL
| qclass = IN
an = None
ns = None
\ar \
|###[ DNS OPT Resource Record ]###
| rrname = '.'
| type = OPT
| rclass = 3000
| extrcode = 0
| version = 0
| z = D0
| rdlen = None
| \rdata \
</code></pre>
<p>Output of <code>a[0].show()</code>:</p>
<pre><code>###[ IP ]###
version = 4L
ihl = 5L
tos = 0x0
len = **496**
id = 41777
flags =
frag = 0L
ttl = 56
proto = udp
chksum = 0xfb3
src = 8.8.4.4
dst = 192.168.1.100
\options \
###[ UDP ]###
sport = domain
dport = 41454
len = 476
chksum = 0x2fef
###[ DNS ]###
id = 0
qr = 1L
opcode = QUERY
aa = 0L
tc = 0L
rd = 1L
ra = 1L
z = 0L
ad = 0L
cd = 0L
rcode = ok
qdcount = 1
ancount = 19
nscount = 0
arcount = 1
\qd \
|###[ DNS Question Record ]###
| qname = 'google.com.'
| qtype = ALL
| qclass = IN
\an \
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = A
| rclass = IN
| ttl = 299
| rdlen = 4
| rdata = '74.125.68.102'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = A
| rclass = IN
| ttl = 299
| rdlen = 4
| rdata = '74.125.68.113'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = A
| rclass = IN
| ttl = 299
| rdlen = 4
| rdata = '74.125.68.139'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = A
| rclass = IN
| ttl = 299
| rdlen = 4
| rdata = '74.125.68.100'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = A
| rclass = IN
| ttl = 299
| rdlen = 4
| rdata = '74.125.68.138'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = A
| rclass = IN
| ttl = 299
| rdlen = 4
| rdata = '74.125.68.101'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = AAAA
| rclass = IN
| ttl = 299
| rdlen = 16
| rdata = '2404:6800:4003:c02::65'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = NS
| rclass = IN
| ttl = 21599
| rdlen = 16
| rdata = 'ns2.google.com.'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = MX
| rclass = IN
| ttl = 599
| rdlen = 17
| rdata = '\x00\x14\x04alt1\x05aspmx\x01l\xc0\x0c'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = SOA
| rclass = IN
| ttl = 59
| rdlen = 34
| rdata = '\xc0\xa4\tdns-admin\xc0\x0c\x07\xbe\xf2\xb0\x00\x00\x03\x84\x00\x00\x03\x84\x00\x00\x07\x08\x00\x00\x00<'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = MX
| rclass = IN
| ttl = 599
| rdlen = 9
| rdata = '\x00(\x04alt3\xc0\xbd'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = MX
| rclass = IN
| ttl = 599
| rdlen = 4
| rdata = '\x00\n\xc0\xbd'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = 257
| rclass = IN
| ttl = 21599
| rdlen = 19
| rdata = '\x00\x05issuesymantec.com'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = NS
| rclass = IN
| ttl = 21599
| rdlen = 16
| rdata = 'ns3.google.com.'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = MX
| rclass = IN
| ttl = 599
| rdlen = 9
| rdata = '\x00\x1e\x04alt2\xc0\xbd'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = NS
| rclass = IN
| ttl = 21599
| rdlen = 16
| rdata = 'ns1.google.com.'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = MX
| rclass = IN
| ttl = 599
| rdlen = 9
| rdata = '\x002\x04alt4\xc0\xbd'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = TXT
| rclass = IN
| ttl = 3599
| rdlen = 36
| rdata = 'v=spf1 include:_spf.google.com ~all'
|###[ DNS Resource Record ]###
| rrname = 'google.com.'
| type = NS
| rclass = IN
| ttl = 21599
| rdlen = 16
| rdata = 'ns4.google.com.'
ns = None
\ar \
|###[ DNS OPT Resource Record ]###
| rrname = '.'
| type = OPT
| rclass = 512
| extrcode = 0
| version = 0
| z = D0
| rdlen = 0
| \rdata \
</code></pre>
| 3 | 2016-08-11T03:19:19Z | 38,890,631 | <p>
When you use the <code>.len</code> attribute of the packet, you get, in your case, the value of the <code>len</code> field of the <code>IP</code> layer. It does not contain the <code>Ether</code> layer (14 bytes).</p>
<p>You should use <code>len()</code> (as you do in your example) to get the packet length. Also, you should specify the layer 2 (and hence, use <code>srp()</code> instead of <code>sr()</code>):</p>
<pre><code>a = srp(Ether() / IP(src="192.168.1.100",dst="8.8.4.4") /
UDP(sport=RandShort(),dport=53) /
DNS(rd=1,qd=DNSQR(qname="google.com",qtype="ALL",qclass="IN"),
ar=DNSRROPT(rclass=3000)),
timeout=1)
print len(a[0][0][0]), len(a[0][0][1])
</code></pre>
| 1 | 2016-08-11T08:03:27Z | [
"python",
"size",
"packet",
"scapy",
"packet-sniffers"
] |
Sequential Iterator | 38,886,739 | <p>I currently have the need for a certain type of Iterator / generator (not actually sure which is the appropriate term) that will generate a character sequence such as the following: </p>
<p><br>axxx
<br>bxxx
<br>cxxx
<br>dxxx
<br>...
<br>aaxx
<br>abxx
<br>
and so on</p>
<p>So for every iteration through the alphabet it moves to the next place and replaces 'x' and repeats...</p>
<p>I have tried iterators and generators with Python but can't seem to get this fixed-character functionality. </p>
| 2 | 2016-08-11T03:23:18Z | 38,886,851 | <p>OK, break this down into two problems. I'm cribbing from</p>
<p><a href="http://code.activestate.com/recipes/65212-convert-from-decimal-to-any-base-number/" rel="nofollow">http://code.activestate.com/recipes/65212-convert-from-decimal-to-any-base-number/</a></p>
<p>for the actual baseN code.</p>
<p>First, generate a sequence of numbers in base 26, encoded a=0, b=1, .. z=25. After an example from "trottler"</p>
<pre><code>def basealpha(num,numerals="abcdefghijklmnopqrstuvwxyz"):
    return ((num == 0) and numerals[0]) or (basealpha(num // 26).lstrip(numerals[0]) + numerals[num % 26])
</code></pre>
<p>Loop through this, and then right pad out the string with 'x' to a length of four.</p>
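<p>An alternative that sidesteps the base-conversion arithmetic entirely is <code>itertools.product</code>: enumerate prefixes of each length and right-pad them, which directly yields the sequence from the question (a sketch with invented names, not the recipe above):</p>

```python
import itertools

def padded_sequence(width=4, pad='x', alphabet='abcdefghijklmnopqrstuvwxyz'):
    # For each prefix length 1..width, emit every letter combination,
    # right-padded with the fill character to the fixed width.
    for length in range(1, width + 1):
        for combo in itertools.product(alphabet, repeat=length):
            yield ''.join(combo).ljust(width, pad)

first = list(itertools.islice(padded_sequence(), 28))
print(first[0], first[25], first[26], first[27])  # axxx zxxx aaxx abxx
```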
<p>The problem I see on looking at this is that 'x' serves double duty, and you'll have a hard time telling what 'aaxx' means, since it could show up in several sequences:</p>
<pre><code>aaxx
aayx
aazx
</code></pre>
<p>or</p>
<pre><code>aaxx
abxx
acxx
</code></pre>
| 0 | 2016-08-11T03:36:42Z | [
"python",
"loops",
"iteration",
"generator"
] |
Python and how to get specific text from HTML using Selenium | 38,886,774 | <p>I am trying to get specific text content in an HTML page by using the Selenium webdriver class name. The HTML code is below:</p>
<pre><code><tr>
<th>
<td class="max-captured">174.26 kp/s</td>
<td class="max-captured">0 p/s </td>
</tr>
</code></pre>
<p>I want to capture just the text "p/s". Is it possible? Thanks.</p>
 | -2 | 2016-08-11T03:27:45Z | 38,900,735 | <p>You can use a CSS selector (note that the class in your HTML is <code>max-captured</code>, with a hyphen):</p>
<pre><code>driver.find_element_by_css_selector('tr th .max-captured:nth-child(2)').text
</code></pre>
| 0 | 2016-08-11T15:37:37Z | [
"python",
"selenium",
"selenium-webdriver"
] |
Method to Return List of Instances Without Self | 38,886,784 | <p>I am frequently running an operation to determine a list of "live" instances of a class. To determine if an instance is live, it is testing against the is_live method of my current class -- please see below. </p>
<pre><code>class Game(models.Model):
def is_live(self):
now = timezone.now()
now.astimezone(timezone.utc).replace(tzinfo=None)
if self.time is not None and now < self.time:
return True
if self.time is None:
return True
else:
return False
</code></pre>
<p>Instead of having to run this loop in all of my views, I would love to create another method to run that returned a list of all live instances. However, to do so I wouldn't need the use of self and am getting an error every time I try to do so. Any ideas how to complete this. The loop would be something like the below</p>
<pre><code>def live_game_list():
live_game_list = []
for game in Game.objects.all():
if game.is_live == True:
live_game_list.append(game)
return live_game_list
</code></pre>
<p>Then I would just be able to call Game.live_game_list() and get a list of all games.</p>
 | 1 | 2016-08-11T03:28:29Z | 38,886,937 | <p>Declare the class method as static using the <code>@staticmethod</code> decorator. You then don't need to pass <code>self</code> to the function. Rather than use <code>Game</code> directly within the function, why don't you pass the game objects as a parameter to the function? I've used a conditional list comprehension to generate the result, which will be more efficient than using <code>append</code>.</p>
<pre><code>@staticmethod
def live_game_list(game_objects):
    return [game for game in game_objects.all() if game.is_live()]
</code></pre>
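<p>Stripped of the Django ORM, the pattern can be exercised with plain objects. This is a framework-free sketch using <code>datetime</code> in place of <code>django.utils.timezone</code>; class and method names follow the question:</p>

```python
from datetime import datetime, timedelta

class Game:
    """Hypothetical stand-in for the Django model -- no ORM involved."""
    def __init__(self, time=None):
        self.time = time

    def is_live(self):
        # live when no cutoff is set, or the cutoff is still in the future
        return self.time is None or datetime.utcnow() < self.time

    @staticmethod
    def live_game_list(games):
        return [game for game in games if game.is_live()]

games = [Game(),                                        # no time -> live
         Game(datetime.utcnow() + timedelta(hours=1)),  # future  -> live
         Game(datetime.utcnow() - timedelta(hours=1))]  # past    -> not live
print(len(Game.live_game_list(games)))  # 2
```

<p>Note that <code>is_live</code> must be called with parentheses inside the comprehension; a bare <code>game.is_live</code> is a bound method and therefore always truthy.</p>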
| 0 | 2016-08-11T03:47:31Z | [
"python",
"django",
"methods",
"instance"
] |
Method to Return List of Instances Without Self | 38,886,784 | <p>I am frequently running an operation to determine a list of "live" instances of a class. To determine if an instance is live, it is testing against the is_live method of my current class -- please see below. </p>
<pre><code>class Game(models.Model):
def is_live(self):
now = timezone.now()
now.astimezone(timezone.utc).replace(tzinfo=None)
if self.time is not None and now < self.time:
return True
if self.time is None:
return True
else:
return False
</code></pre>
<p>Instead of having to run this loop in all of my views, I would love to create another method to run that returned a list of all live instances. However, to do so I wouldn't need the use of self and am getting an error every time I try to do so. Any ideas how to complete this. The loop would be something like the below</p>
<pre><code>def live_game_list():
live_game_list = []
for game in Game.objects.all():
if game.is_live == True:
live_game_list.append(game)
return live_game_list
</code></pre>
<p>Then I would just be able to call Game.live_game_list() and get a list of all games.</p>
| 1 | 2016-08-11T03:28:29Z | 38,886,983 | <p>In order to do this, you will need to keep track of all <code>Game</code> instances outside of the individual instances themselves. You can do this anywhere, but one option is to have a class-level list:</p>
<pre><code>class Game(models.Model):
all_game_instances = []
def __init__(self, *args, **kwargs):
super(Game, self).__init__(*args, **kwargs)
self.all_game_instances.append(self) # accesses class attribute list
@classmethod
def live_game_list(cls):
return [game for game in cls.all_game_instances if game.is_live()]
def is_live(self):
...
</code></pre>
<p>Then you can access the list of games with <code>Game.live_game_list()</code>.</p>
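<p>The class-level registry idea can be tried out without Django at all. A minimal sketch with invented names; note that a real Django model would also need to handle instances loaded from the database, which never pass through this <code>__init__</code>:</p>

```python
class Registered:
    # registry shared by every instance of the class
    _instances = []

    def __init__(self, live):
        self.live = live
        self._instances.append(self)

    def is_live(self):
        return self.live

    @classmethod
    def live_list(cls):
        return [obj for obj in cls._instances if obj.is_live()]

a, b, c = Registered(True), Registered(False), Registered(True)
print([obj is a or obj is c for obj in Registered.live_list()])  # [True, True]
```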
| 0 | 2016-08-11T03:52:30Z | [
"python",
"django",
"methods",
"instance"
] |
How to assign a value to a property in django | 38,886,788 | <p>I have a new property in my model; however, I'd like to assign a test value to it for my test script. </p>
<p>This is my code:</p>
<p>models.py</p>
<pre><code>mycode = models.UUIDField(null=True)
@property
def haveCode(self):
if self.mycode == uuid.UUID('{00000000-0000-0000-0000-000000000000}'):
return False
    else:
return True
</code></pre>
<p>And this is the test script that I am working on. I wanted to have a test value for haveCode:</p>
<pre><code>test = Test()
test.mycode = uuid.UUID('{00000000-0000-0000-0000-000000000000}')
test.save()
checkTest = Test()
#this is only to pass the test
#delete this when start coding
checkTest.haveCode = True
assertEqual(test.haveCode, True)
</code></pre>
<p>However, I got an error at <code>checkTest.haveCode = True</code>, since this is just a property and not an attribute.</p>
<p>how to assign <code>True</code> to it? I appreciate your help</p>
| 0 | 2016-08-11T03:29:05Z | 38,886,871 | <p>You can 'mock' that property using the mock library</p>
<pre><code>from mock import patch, PropertyMock

@patch.object(Test, 'haveCode', new_callable=PropertyMock)
def myTest(test_haveCode_mock):
    test_haveCode_mock.return_value = True
    checkTest = Test()
    assertEqual(checkTest.haveCode, True)

patch.stopall()  # when you want to release all mocks
</code></pre>
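<p>An equivalent sketch using the context-manager form of <code>patch</code>, which releases the mock automatically on exit (shown here with the stdlib <code>unittest.mock</code>; the standalone <code>mock</code> package has the same API). The <code>Test</code> class below is a hypothetical stand-in for the real model:</p>

```python
from unittest.mock import patch, PropertyMock

class Test:
    @property
    def haveCode(self):
        return False  # stand-in for the real UUID check

with patch.object(Test, 'haveCode', new_callable=PropertyMock) as mocked:
    mocked.return_value = True
    print(Test().haveCode)  # True while the mock is active

print(Test().haveCode)  # False again: the mock was released on exit
```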
| 1 | 2016-08-11T03:38:33Z | [
"python",
"django"
] |
problems with subprocess.call | 38,887,000 | <p>The below command works well in the shell/terminal, but something goes wrong when it is called in my python script using the subprocess.call() method.</p>
<pre><code>-- command in shell/terminal
$ th neural_style.lua -gpu 0 -style_image input/style.jpg -content_image input/img.jpg
-- subprocess.call() in python script
# this works
subprocess.call(["th", "neural_style.lua", "-gpu", "0"])
# this goes wrong - Error during read_image: Read Error
-- subprocess.call in the python script
subprocess.call(["th", "neural_style.lua", "-gpu", "0", "-style_image" "input/style.jpg" "-content_image" "input/img.jpg"])
</code></pre>
<p>How should I use subprocess.call ?</p>
| -2 | 2016-08-11T03:53:50Z | 38,887,254 | <p>As the error message says, it failed to read an image. The error is (presumably) coming from the <code>th</code> program you're calling. I'd guess that there's additional information in the error message you haven't shared, but the most likely explanation is that you're running your Python script from a different directory than where you're running <code>th</code> directly. For example are you running your Python script from an IDE? It's likely running commands relative to the workspace or project directory.</p>
<p>The first thing to try is swapping the image arguments for <a href="https://en.wikipedia.org/wiki/Path_(computing)#Absolute_and_relative_paths" rel="nofollow">absolute paths</a> (e.g. <code>/home/username/input.style.jpg</code> or wherever they're located). This will work around the scripts running from different directories.</p>
<p>Once you've verified that's the issue, how you fix it is up to you. You could simply run your Python script from the correct directory, you could specify the paths relative to where your script runs, or you could always use absolute paths in your Python script. Which you choose really depends on your use case.</p>
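<p>Separately from the working-directory issue, it is worth checking the argument list itself: in the failing call from the question the commas between some list elements are missing, and adjacent Python string literals are silently concatenated, which merges a flag and its value into one argument:</p>

```python
# note: no commas between the last four strings
args = ["th", "neural_style.lua", "-gpu", "0",
        "-style_image" "input/style.jpg"
        "-content_image" "input/img.jpg"]
print(len(args))   # 5, not 8
print(args[-1])    # '-style_imageinput/style.jpg-content_imageinput/img.jpg'
```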
| -1 | 2016-08-11T04:24:50Z | [
"python",
"subprocess"
] |
How do I change default text in a Qt lineEdit by entering data into the running frame in order to re-run functions with the new value? | 38,887,007 | <p>I am new to Python, and even more new to PyQt4. (I've created a similar GUI with wxPython, but I can't make this work.) I've spent hours trying to figure this out, and I've tried so many tips from searches here and elsewhere, including the instruction docs, that I'm dizzy.</p>
<p>The window I created in Qt Designer is supposed to allow the user to enter data in QLineEdit boxes, and that data is then used in some functions to do some math and plot some graphs. To make it easy for the user - I think - I've put default values in the frame when it runs. I do not know how to allow the user to dynamically set new values (i.e., different from the default). I've tried chasing things back with a global variable to see where things go wrong, but I can't figure it out.</p>
<p>I've created a much smaller version of what I'm doing (to keep it short), and the issue is the same. </p>
<p>Here's a picture of the frame to help if I've failed to explain this well:
<a href="http://i.stack.imgur.com/egREh.png" rel="nofollow">Frame screen shot</a></p>
<p>And here's the code:</p>
<pre><code># -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'TestWindow2.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!

from PyQt4 import QtCore, QtGui
import sys

my_global_var = 100

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Ui_MainWindow(object):
    global my_global_var

    def setupUi(self, MainWindow):
        MainWindow.setObjectName(_fromUtf8("MainWindow"))
        MainWindow.resize(514, 363)
        self.centralwidget = QtGui.QWidget(MainWindow)
        self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
        self.formLayoutWidget = QtGui.QWidget(self.centralwidget)
        self.formLayoutWidget.setGeometry(QtCore.QRect(10, 0, 501, 31))
        self.formLayoutWidget.setObjectName(_fromUtf8("formLayoutWidget"))
        self.formLayout = QtGui.QFormLayout(self.formLayoutWidget)
        self.formLayout.setFieldGrowthPolicy(QtGui.QFormLayout.AllNonFixedFieldsGrow)
        self.formLayout.setObjectName(_fromUtf8("formLayout"))
        self.enterHereLabel = QtGui.QLabel(self.formLayoutWidget)
        self.enterHereLabel.setObjectName(_fromUtf8("enterHereLabel"))
        self.formLayout.setWidget(0, QtGui.QFormLayout.LabelRole, self.enterHereLabel)
        self.enterHereLineEdit = QtGui.QLineEdit(self.formLayoutWidget)
        self.enterHereLineEdit.setObjectName(_fromUtf8("enterHereLineEdit"))
        self.formLayout.setWidget(0, QtGui.QFormLayout.FieldRole, self.enterHereLineEdit)
        self.pushButton = QtGui.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(40, 80, 461, 23))
        self.pushButton.setObjectName(_fromUtf8("pushButton"))
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtGui.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 514, 21))
        self.menubar.setObjectName(_fromUtf8("menubar"))
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtGui.QStatusBar(MainWindow)
        self.statusbar.setObjectName(_fromUtf8("statusbar"))
        MainWindow.setStatusBar(self.statusbar)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        global my_global_var
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None))
        self.enterHereLabel.setText(_translate("MainWindow", "Enter here:", None))
        self.enterHereLineEdit.setText(_translate("MainWindow", "100", None))
        """ ^ Code created by Qt Designer with the default value of 100 """
        self.pushButton.setText(_translate("MainWindow", "PushThisButton", None))
        self.pushButton.clicked.connect(self.MyFunction)
        x = str(self.enterHereLineEdit.text())  # Seeing if I can get the value
        print("x = {0}".format(x))  # This correctly prints the default value of 100
        my_global_var = x
        print("My global var = {0}".format(my_global_var))  # Correctly -> 100

    def MyFunction(self, MainWindow):
        global my_global_var
        print("My var = {0}".format(my_global_var))  # prints 100
        a = int(my_global_var)
        y = a + 20
        print(y)  # correctly prints 120

class MainWindow(QtGui.QMainWindow, Ui_MainWindow):
    global my_global_var

    def __init__(self, parent=None, f=QtCore.Qt.WindowFlags()):
        QtGui.QMainWindow.__init__(self, parent, f)
        self.setupUi(self)

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    mw = MainWindow()
    mw.show()
    sys.exit(app.exec_())
</code></pre>
<p>I've left the global variables and comments to show how things work. What do I need to change / do to make the user's frame entry update the calculations? (When I click the button, I get the same results regardless of whether I change the default value.)</p>
| 0 | 2016-08-11T03:54:43Z | 38,899,938 | <p>Fabio's comment has the answer, but an option to accept it as answered didn't appear (maybe because it was a comment and not an answer??), so I'm "answering" to mark this as answered, but it was his response that is the answer, which I'll quote for simplicity, highlighting the key code: </p>
<blockquote>
<p>Not sure to understand, but I think you don't need a global variable. Simply
read the value from the QLineEdit in MyFunction
<strong>(a = int(self.enterHereLineEdit.text()))</strong> – Fabio</p>
</blockquote>
<p>Thanks!</p>
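<p>A Qt-free sketch of what Fabio's suggestion does (<code>FakeLineEdit</code> is a hypothetical stand-in for <code>QLineEdit</code>): reading the widget inside the slot picks up whatever is in the field at click time, instead of a value cached during setup:</p>

```python
class FakeLineEdit:
    """Hypothetical stand-in for QLineEdit."""
    def __init__(self, text):
        self._text = text
    def text(self):
        return self._text
    def setText(self, text):
        self._text = text

class Ui:
    def __init__(self):
        self.enterHereLineEdit = FakeLineEdit("100")  # the default

    def MyFunction(self):
        a = int(self.enterHereLineEdit.text())  # read the current value at click time
        return a + 20

ui = Ui()
print(ui.MyFunction())              # 120
ui.enterHereLineEdit.setText("50")  # the user edits the field
print(ui.MyFunction())              # 70
```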
| 0 | 2016-08-11T15:00:53Z | [
"python",
"qt"
] |
How to insert dictionary items into a PostgreSQL table | 38,887,025 | <p>So I've connected my Postgresql database into Python with Psycopg2, and I've pulled two specific columns to update. The first column I have used as the keys in a Python dictionary, and I have run some functions on the second one and use the results as the values in the dictionary. Now what I want to do is add those values back into Postgresql table as a new column, but I want them to pair with the correct keys they are paired with in the dictionary. Essentially, I want to take dictionary values and insert them as a new column and pick which "key" in the Postgresql table they belong to (however, I don't want to manually assign them, because, well, there's hopefully a better way).</p>
<p>Postgresql Table </p>
<pre><code> |col1 |col2 |col3 | ... | coln
row1 | a1 | b1 | c1 | ... | n1
row2 | a2 | b2 | c2 | ... | n2
... | ... | ... | ... | ... | n...
rowm | am | bm | cm | ... | nm
</code></pre>
<p>This is the dictionary I made in Python, where <code>f()</code> is a series of functions ran on variable: </p>
<pre><code>{
a1 : f(c1),
a2 : f(c2),
... : ...
}
</code></pre>
<p>Now my goal is to add the values column back into my table so that it corresponds to the original keys. Ideally, to look something like this:</p>
<pre><code> |col1|col2|col3| ... |newcol| coln
row1 | a1 | b1 | c1 | ... | f(c1)| n1
row2 | a2 | b2 | c2 | ... | f(c2)| n2
... | ...| ...| ...| ... | ... | n...
rowm | am | bm | cm | ... | f(cm)| nm
</code></pre>
<p>I know I can insert the column into the table, but not sure how to pair it with keys. Any help is very much appreciated!</p>
| 2 | 2016-08-11T03:57:54Z | 38,887,180 | <p>You want an <code>UPDATE</code> statement something like the following:</p>
<pre><code>import psycopg2

con = psycopg2.connect('your connection string')
cur = con.cursor()

# add newcol
cur.execute('ALTER TABLE your_table ADD COLUMN newcol text;')
con.commit()

for k, v in your_dict.iteritems():  # use .items() on Python 3
    cur.execute('''UPDATE your_table
                   SET newcol = (%s)
                   WHERE col1 = (%s);''', (v, k))
con.commit()

cur.close()
con.close()
</code></pre>
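<p>The per-key UPDATE loop can also be collapsed into a single <code>executemany</code> call. A runnable sketch of the same pattern using the stdlib <code>sqlite3</code> (with psycopg2 the placeholders are <code>%s</code> instead of <code>?</code>, but the shape is identical; the table and column names here are illustrative):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE your_table (col1 TEXT PRIMARY KEY, newcol TEXT)")
cur.executemany("INSERT INTO your_table (col1) VALUES (?)",
                [("a1",), ("a2",)])

your_dict = {"a1": "f(c1)", "a2": "f(c2)"}  # key -> computed value
cur.executemany("UPDATE your_table SET newcol = ? WHERE col1 = ?",
                [(v, k) for k, v in your_dict.items()])
con.commit()

print(dict(cur.execute("SELECT col1, newcol FROM your_table")))
# {'a1': 'f(c1)', 'a2': 'f(c2)'}
```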
| 0 | 2016-08-11T04:16:33Z | [
"python",
"postgresql",
"dictionary",
"key-value",
"psycopg2"
] |
How to insert dictionary items into a PostgreSQL table | 38,887,025 | <p>So I've connected my Postgresql database into Python with Psycopg2, and I've pulled two specific columns to update. The first column I have used as the keys in a Python dictionary, and I have run some functions on the second one and use the results as the values in the dictionary. Now what I want to do is add those values back into Postgresql table as a new column, but I want them to pair with the correct keys they are paired with in the dictionary. Essentially, I want to take dictionary values and insert them as a new column and pick which "key" in the Postgresql table they belong to (however, I don't want to manually assign them, because, well, there's hopefully a better way).</p>
<p>Postgresql Table </p>
<pre><code> |col1 |col2 |col3 | ... | coln
row1 | a1 | b1 | c1 | ... | n1
row2 | a2 | b2 | c2 | ... | n2
... | ... | ... | ... | ... | n...
rowm | am | bm | cm | ... | nm
</code></pre>
<p>This is the dictionary I made in Python, where <code>f()</code> is a series of functions ran on variable: </p>
<pre><code>{
a1 : f(c1),
a2 : f(c2),
... : ...
}
</code></pre>
<p>Now my goal is to add the values column back into my table so that it corresponds to the original keys. Ideally, to look something like this:</p>
<pre><code> |col1|col2|col3| ... |newcol| coln
row1 | a1 | b1 | c1 | ... | f(c1)| n1
row2 | a2 | b2 | c2 | ... | f(c2)| n2
... | ...| ...| ...| ... | ... | n...
rowm | am | bm | cm | ... | f(cm)| nm
</code></pre>
<p>I know I can insert the column into the table, but not sure how to pair it with keys. Any help is very much appreciated!</p>
| 2 | 2016-08-11T03:57:54Z | 38,894,429 | <p>Update all the rows in a single query. First turn the dictionary into a list of tuples:</p>
<pre><code>l = [(k,v) for k,v in my_dict.items()]
</code></pre>
<p>A Python list of tuples is adapted to an array of records by Psycopg. Then <code>unnest</code> the array in the <code>from</code> clause:</p>
<pre><code>query = '''
update t
set colX = s.val::text
from unnest(%s) s (key integer, val unknown)
where t.col1 = s.key
'''
print cursor.mogrify(query, (l,))
cursor.execute(query, (l,))
</code></pre>
<p>Records returned from <code>unnest</code> need to have their element types declared. For string values it is necessary to declare their type as <code>unknown</code> and cast to the appropriate type at assignment or comparison time. For all other types just declare them as usual.</p>
| 0 | 2016-08-11T10:55:50Z | [
"python",
"postgresql",
"dictionary",
"key-value",
"psycopg2"
] |
Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 38,887,061 | <p>I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the <code>The remote endpoint could not be called, or the response it returned was invalid.</code> errors?</p>
<p>Here's my situation:</p>
<p>I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: <code>The remote endpoint could not be called, or the response it returned was invalid.</code>
I copy the lambda request, I head to the lambda console for ARN:A, I set the test event, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure whether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.</p>
<p>Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB.</p>
| 0 | 2016-08-11T04:02:31Z | 38,895,935 | <p>My guess would be that you missed a step on setup. There's one where you have to set the "event source". If you don't do that, I think you get that message.</p>
<p>But the debug options are limited. I wrote EchoSim (the original one on GitHub) before the service simulator was written and, although it is a bit out of date, it does a better job of giving diagnostics.</p>
<p>Lacking debug options, the best is to do what you've done. Partition and re-test. Do static replies until you can work out where the problem is.</p>
| 0 | 2016-08-11T12:06:33Z | [
"python",
"amazon-web-services",
"amazon-dynamodb",
"aws-lambda",
"alexa-skills-kit"
] |
Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 38,887,061 | <p>I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the <code>The remote endpoint could not be called, or the response it returned was invalid.</code> errors?</p>
<p>Here's my situation:</p>
<p>I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: <code>The remote endpoint could not be called, or the response it returned was invalid.</code>
I copy the lambda request, I head to the lambda console for ARN:A, I set the test event, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure whether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.</p>
<p>Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB.</p>
| 0 | 2016-08-11T04:02:31Z | 38,902,127 | <p>tl;dr: <code>The remote endpoint could not be called, or the response it returned was invalid.</code> also means there may have been a timeout waiting for the endpoint. </p>
<p>I was able to narrow it down to a timeout.
Seems like the Alexa service simulator (and the Alexa itself) is less tolerant to long responses than the lambda testing console. During development I had increased the timeout of ARN:1 to 30 seconds (whereas I believe the default is 3 seconds). The DynamoDB table used by ARN:1 has more data and it takes slightly longer to process than ARN:3 which has an almost empty table. As soon as I commented out some of the data loading stuff it was running slightly faster and the Alexa service simulator was working again. I can't find the time budget documented anywhere, I'm guessing 3 seconds? I most likely need to move to another backend, DynamoDB+Python on lambda is too slow for very trivial requests.</p>
| 0 | 2016-08-11T16:54:18Z | [
"python",
"amazon-web-services",
"amazon-dynamodb",
"aws-lambda",
"alexa-skills-kit"
] |
Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 38,887,061 | <p>I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the <code>The remote endpoint could not be called, or the response it returned was invalid.</code> errors?</p>
<p>Here's my situation:</p>
<p>I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: <code>The remote endpoint could not be called, or the response it returned was invalid.</code>
I copy the lambda request, I head to the lambda console for ARN:A, I set the test event, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure whether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.</p>
<p>Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB.</p>
| 0 | 2016-08-11T04:02:31Z | 39,245,816 | <p>I think the problem you're having with ARN:1 is that you probably didn't set a trigger to the Alexa skill in your lambda function.</p>
<p>Or it can be the alexa session timeout which is by default set to 8 seconds.</p>
| 0 | 2016-08-31T09:31:25Z | [
"python",
"amazon-web-services",
"amazon-dynamodb",
"aws-lambda",
"alexa-skills-kit"
] |
find a pattern in html and replace it with php code | 38,887,079 | <p>I am looking at finding this pattern </p>
<pre><code><!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
</code></pre>
<p>and replacing it with this pattern for a number of .html files</p>
<pre><code><!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
</code></pre>
<p>Note the difference is that
this </p>
<pre><code><p class="text-muted">&copy; 2014. Core Team</p>
</code></pre>
<p>is replaced with </p>
<pre><code> <?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</code></pre>
<p>I was looking at doing it with <code>sed</code> but having had an initial attempt, my difficulty is the characters I might or might or might not have to escape. Also the tabs or new lines in the php code, I would like that to appear as is here.</p>
<p>There is a number of files to do it to so I would like to automate it but it might be quicker to just do it manually(copy and paste). But maybe <code>sed</code> is the wrong approach in this instance. Can someone kindly direct me in the right direction? At this stage I am open to other languages (e.g. php, python, bash ) to find a solution.</p>
<p>I would then plan to rename each .html file to .php with the following: </p>
<pre><code>for i in *.html; do mv "$i" "${i%.*}.php"; done;
</code></pre>
<hr>
<h2>EDIT1</h2>
<p>Based on the awk answer below, I can get it to work under this version:</p>
<pre><code>$ awk -Wversion 2>/dev/null || awk --version
GNU Awk 4.1.1, API: 1.1 (GNU MPFR 3.1.2, GNU MP 6.0.0)
Copyright (C) 1989, 1991-2014 Free Software Foundation.
</code></pre>
<p>However, on this version I get different output. It seems it prints out the 3 files: old, new and file. <strong>Is this easily rectified in this version?</strong></p>
<pre><code>root@4461f768e343:/github/find_pattern# awk -Wversion 2>/dev/null || awk --version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan
root@4461f768e343:/github/find_pattern#
root@4461f768e343:/github/find_pattern#
root@4461f768e343:/github/find_pattern# awk -v RS='^$' -v ORS= 'ARGIND==1{old=$0;next} ARGIND==2{new=$0;next} s=index($0,old){ $0 = substr($0,1,s-1) new substr($0,s+length(old))} 1' old new file
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div><!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>some pile of text
or other
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
and more maybe.root@4461f768e343:/github/find_pattern#
</code></pre>
| 0 | 2016-08-11T04:04:49Z | 38,887,184 | <p>You can use <code>replace</code>.</p>
<pre><code>html_files = ['a.html', ...]

copyright = '<p class="text-muted">&copy; 2014. Core Team</p>'
new_copyright = """ <?php
      $year = date("Y");
      echo "<p class='text-muted'>© $year. Core Team</p>";
      ?>"""

for html_file_path in html_files:
    with open(html_file_path) as html_file:
        html = html_file.read()
    if copyright in html:
        php_file_path = html_file_path.replace('.html', '.php')
        with open(php_file_path, "w") as php_file:
            php = html.replace(copyright, new_copyright)
            php_file.write(php)
</code></pre>
<p>Note this will not overwrite your html files, which is useful if the script has an error.</p>
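<p>A small runnable sketch of the substitution itself, to show that <code>str.replace</code> takes the multi-line replacement text verbatim, with none of the escaping headaches <code>sed</code> would require:</p>

```python
html = '''<div class="row col-md-2 col-md-offset-5">
    <p class="text-muted">&copy; 2014. Core Team</p>
</div>'''

copyright = '<p class="text-muted">&copy; 2014. Core Team</p>'
new_copyright = '''<?php
    $year = date("Y");
    echo "<p class='text-muted'>© $year. Core Team</p>";
    ?>'''

php = html.replace(copyright, new_copyright)
print('<?php' in php)        # True
print('&copy; 2014' in php)  # False
```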
| 2 | 2016-08-11T04:17:06Z | [
"php",
"python",
"bash",
"sed"
] |
find a pattern in html and replace it with php code | 38,887,079 | <p>I am looking at finding this pattern </p>
<pre><code><!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
</code></pre>
<p>and replacing it with this pattern for a number of .html files</p>
<pre><code><!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
</code></pre>
<p>Note the difference is that
this </p>
<pre><code><p class="text-muted">&copy; 2014. Core Team</p>
</code></pre>
<p>is replaced with </p>
<pre><code> <?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</code></pre>
<p>I was looking at doing it with <code>sed</code> but having had an initial attempt, my difficulty is the characters I might or might or might not have to escape. Also the tabs or new lines in the php code, I would like that to appear as is here.</p>
<p>There is a number of files to do it to so I would like to automate it but it might be quicker to just do it manually(copy and paste). But maybe <code>sed</code> is the wrong approach in this instance. Can someone kindly direct me in the right direction? At this stage I am open to other languages (e.g. php, python, bash ) to find a solution.</p>
<p>I would then plan to rename each .html file to .php with the following: </p>
<pre><code>for i in *.html; do mv "$i" "${i%.*}.php"; done;
</code></pre>
<hr>
<h2>EDIT1</h2>
<p>bsed on the awk answer below I can get it to work under this version</p>
<pre><code>$ awk -Wversion 2>/dev/null || awk --version
GNU Awk 4.1.1, API: 1.1 (GNU MPFR 3.1.2, GNU MP 6.0.0)
Copyright (C) 1989, 1991-2014 Free Software Foundation.
</code></pre>
<p>however on this version I get different output. It seems it prints out the 3 files, old new and file. <strong>Is this easily rectified in this version?</strong> </p>
<pre><code>root@4461f768e343:/github/find_pattern# awk -Wversion 2>/dev/null || awk --version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan
root@4461f768e343:/github/find_pattern#
root@4461f768e343:/github/find_pattern#
root@4461f768e343:/github/find_pattern# awk -v RS='^$' -v ORS= 'ARGIND==1{old=$0;next} ARGIND==2{new=$0;next} s=index($0,old){ $0 = substr($0,1,s-1) new substr($0,s+length(old))} 1' old new file
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div><!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>some pile of text
or other
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
and more maybe.root@4461f768e343:/github/find_pattern#
</code></pre>
| 0 | 2016-08-11T04:04:49Z | 38,887,778 | <p>sed is for simple substitutions on individual lines so your task is certainly not a job for sed. You could use awk if your files are all that well formatted:</p>
<pre><code>$ cat old
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
</code></pre>
<p>.</p>
<pre><code>$ cat new
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
</code></pre>
<p>.</p>
<pre><code>$ cat file
some pile of text
or other
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<p class="text-muted">&copy; 2014. Core Team</p>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
and more maybe.
</code></pre>
<p>.</p>
<pre><code>$ awk -v RS='^$' -v ORS= 'ARGIND==1{old=$0;next} ARGIND==2{new=$0;next} s=index($0,old){ $0 = substr($0,1,s-1) new substr($0,s+length(old))} 1' old new file
some pile of text
or other
<!-- Footer part at bottom of page-->
<div id="footer">
<div class="row col-md-2 col-md-offset-5">
<?php
$year = date("Y");
echo "<p class='text-muted'>© $year. Core Team</p>";
?>
</div>
<div id="downloadlinks">
<!-- downloadlinks go here-->
</div>
</div>
and more maybe.
</code></pre>
<p>The above uses GNU awk for multi-char RS and ARGIND. If you want to do it for many files you could use:</p>
<pre><code>find . -type f -name '*.php' -exec awk -i inplace -v RS='^$' -v ORS= 'ARGIND==1{old=$0;print;next} ARGIND==2{new=$0;print;next} s=index($0,old){ $0 = substr($0,1,s-1) new substr($0,s+length(old))} 1' old new {} \;
</code></pre>
<p>or similar.</p>
| 2 | 2016-08-11T05:19:06Z | [
"php",
"python",
"bash",
"sed"
] |
Filter a dataframe for a specific range of date in Pandas | 38,887,118 | <p>I have a dataframe say <code>df1</code> which has three fields that contain date type data. Call them <code>'Date 1', 'OC_Date', 'Date 2'</code>. I want to filter this dataframe to obtain another dataframe such that it gives me the rows where <code>'OC_Date'</code> is between <code>'Date 1'</code> and <code>'Date 2'</code>:</p>
<pre><code>Date 1 < OC_Date < Date 2
</code></pre>
<p>The format of the date in these three fields is as follows:</p>
<pre><code>Date 1 : YYYY-MM-DD
OC_Date: DD-MM-YYYY:HH:MM:SS # (MM is text, eg. JAN for January)
Date 2 : YYYY-MM-DD
</code></pre>
<p>Thanks in Advance!</p>
| 1 | 2016-08-11T04:09:56Z | 38,888,055 | <p>You can first convert columns from strings <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> and then filter by dates with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow"><code>dt.date</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Date 1':['2015-01-04','2015-01-05','2015-01-05'],
'OC_Date':['05-JAN-2015:10:10:20',
'05-JAN-2015:11:15:31',
'05-JAN-2015:08:05:09'],
'Date 2':['2015-01-06','2015-01-08','2015-01-10']})
df['Date 1'] = pd.to_datetime(df['Date 1'])
df['Date 2'] = pd.to_datetime(df['Date 2'])
#http://strftime.org/
df['OC_Date'] = pd.to_datetime(df['OC_Date'], format='%d-%b-%Y:%H:%M:%S')
print (df)
Date 1 Date 2 OC_Date
0 2015-01-04 2015-01-06 2015-01-05 10:10:20
1 2015-01-05 2015-01-08 2015-01-05 11:15:31
2 2015-01-05 2015-01-10 2015-01-05 08:05:09
print (df.dtypes)
Date 1 datetime64[ns]
Date 2 datetime64[ns]
OC_Date datetime64[ns]
dtype: object
mask = (df['Date 1'].dt.date < df['OC_Date'].dt.date) & \
       (df['OC_Date'].dt.date < df['Date 2'].dt.date)
print (mask)
0 True
1 False
2 False
dtype: bool
print (df[mask])
Date 1 Date 2 OC_Date
0 2015-01-04 2015-01-06 2015-01-05 10:10:20
</code></pre>
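<p>The <code>format=</code> string passed to <code>to_datetime</code> above uses the standard strftime/strptime codes linked in the answer; a quick stdlib-only check of the mapping for the <code>OC_Date</code> layout (an illustrative aside, not from the original answer; <code>strptime</code> matches the <code>JAN</code> month name case-insensitively):</p>

```python
from datetime import datetime

# %d = day, %b = abbreviated month name (JAN), %Y = 4-digit year, then H:M:S
fmt = "%d-%b-%Y:%H:%M:%S"
parsed = datetime.strptime("05-JAN-2015:10:10:20", fmt)
print(parsed.date())  # -> 2015-01-05
```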
| 2 | 2016-08-11T05:40:39Z | [
"python",
"python-2.7",
"pandas"
] |
error in labelled point object pyspark | 38,887,157 | <p>I am writing a function </p>
<ol>
<li>which takes a RDD as input</li>
<li>splits the comma separated values</li>
<li>then convert each row into labelled point object</li>
<li><p>finally fetch the output as a dataframe</p>
<pre><code>code:
def parse_points(raw_rdd):
cleaned_rdd = raw_rdd.map(lambda line: line.split(","))
new_df = cleaned_rdd.map(lambda line:LabeledPoint(line[0],[line[1:]])).toDF()
return new_df
output = parse_points(input_rdd)
</code></pre></li>
</ol>
<p>upto this if I run the code, there is no error it is working fine.</p>
<p>But on adding the line,</p>
<pre><code> output.take(5)
</code></pre>
<p>I am getting the error:</p>
<pre><code>org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 129.0 failed 1 times, most recent failure: Lost task 0.0 in s stage 129.0 (TID 152, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
Py4JJavaError Traceback (most recent call last)
<ipython-input-100-a68c448b64b0> in <module>()
20
21 output = parse_points(raw_rdd)
---> 22 print output.show()
</code></pre>
<p>Please suggest me what is the mistake.</p>
| 0 | 2016-08-11T04:13:46Z | 38,887,635 | <p>The reason you had no errors until you execute the action:</p>
<pre><code> output.take(5)
</code></pre>
<p>Is due to the nature of spark, which is lazy.
i.e. nothing was execute in spark until you execute the action "take(5)"</p>
<p>You have a few issues in your code; I think the failure comes from the extra "[" and "]" wrapped around line[1:].</p>
<p>So you need to remove the extra "[" and "]" from [line[1:]] (keeping only line[1:]).</p>
<p>Another issue which you might need to solve is the lack of dataframe schema.</p>
<p>i.e. replace "toDF()" with "toDF(["features","label"])"
This will give the dataframe a schema.</p>
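<p>The bracket problem can be seen without Spark at all; plain Python shows how <code>[line[1:]]</code> nests the feature values one level too deep (a stdlib-only illustration added for clarity, with a made-up sample row):</p>

```python
line = ["1.0", "2.0", "3.0"]   # a parsed CSV row: label first, then the features

nested = [line[1:]]            # extra brackets -> a list containing a list
flat = line[1:]                # what the features argument should actually be

print(nested)  # [['2.0', '3.0']]
print(flat)    # ['2.0', '3.0']
```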
| 0 | 2016-08-11T05:05:05Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-sql"
] |
error in labelled point object pyspark | 38,887,157 | <p>I am writing a function </p>
<ol>
<li>which takes a RDD as input</li>
<li>splits the comma separated values</li>
<li>then convert each row into labelled point object</li>
<li><p>finally fetch the output as a dataframe</p>
<pre><code>code:
def parse_points(raw_rdd):
cleaned_rdd = raw_rdd.map(lambda line: line.split(","))
new_df = cleaned_rdd.map(lambda line:LabeledPoint(line[0],[line[1:]])).toDF()
return new_df
output = parse_points(input_rdd)
</code></pre></li>
</ol>
<p>upto this if I run the code, there is no error it is working fine.</p>
<p>But on adding the line,</p>
<pre><code> output.take(5)
</code></pre>
<p>I am getting the error:</p>
<pre><code>org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 129.0 failed 1 times, most recent failure: Lost task 0.0 in s stage 129.0 (TID 152, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
Py4JJavaError Traceback (most recent call last)
<ipython-input-100-a68c448b64b0> in <module>()
20
21 output = parse_points(raw_rdd)
---> 22 print output.show()
</code></pre>
<p>Please suggest me what is the mistake.</p>
| 0 | 2016-08-11T04:13:46Z | 38,906,839 | <p>Try:</p>
<pre><code>>>> raw_rdd.map(lambda line: line.split(",")) \
...     .map(lambda line: LabeledPoint(line[0], [float(x) for x in line[1:]]))
</code></pre>
| 0 | 2016-08-11T22:06:10Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-sql"
] |
incrementing an integer inside an array gives me TypeError: 'int' object is not iterable | 38,887,208 | <p>I have an array that looks like the following: ["string", int , [] ] but when I try to increment the int Python gives me an error. (by the way I am new to python so this might be a simple question)</p>
<blockquote>
<p>TypeError: 'int' object is not iterable</p>
</blockquote>
<p>Here is my problematic part of the code:</p>
<pre><code>for line in responseid_container:
if line[0] is log.id_resp_address:
ipFound = True
#--- this line is where I get an exception ---
responseid_container[1] += 1
responseid_container[2].append = log.id_resp_address
</code></pre>
<p>(example for responseid_container = ['54.192.11.194', 1, [0]])</p>
<p>I tried to look for an answer in many places, such as: <a href="https://www.google.co.il/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwizms71vLjOAhXEPRQKHc33B98QFggaMAA&url=http%3A%2F%2Fstackoverflow.com%2Fquestions%2F9304408%2Fhow-to-add-an-integer-to-each-element-in-a-list&usg=AFQjCNFRcYKSljn1JFC0vM2fUNqgLqHiew" rel="nofollow">here</a>,<a href="https://www.google.co.il/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwizms71vLjOAhXEPRQKHc33B98QFgghMAE&url=http%3A%2F%2Fstackoverflow.com%2Fquestions%2F18325312%2Fhow-to-create-a-range-of-numbers-with-a-given-increment&usg=AFQjCNGV6H0QfryvJ7fffYC_jLOM_re0cQ" rel="nofollow">here</a>,<a href="https://www.google.co.il/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0ahUKEwizms71vLjOAhXEPRQKHc33B98QFggoMAI&url=http%3A%2F%2Fstackoverflow.com%2Fquestions%2F16903264%2Fbasic-python-how-to-increase-value-of-item-in-list&usg=AFQjCNE30Pabs7GiLpLUTSiIGyDFHKwgqg" rel="nofollow">here</a>, <a href="http://stackoverflow.com/questions/14941288/how-do-i-fix-typeerror-int-object-is-not-iterable">here</a> , and many more... I hope it is not a duplicated answer but I did try and find if it is :-)</p>
<p>Here is my full code of the class if needed</p>
<pre><code>from reader import fullLog
class counters:
local_logs = []
arrange_by_response_id = []
def __init__(self , fulllog):
self.local_logs = fulllog
def arrangebyresonseid(self):
responseid_container = [[" " , 0 , []]]
ipFound = False
counter = 0
for log in self.local_logs.oopList:
for line in responseid_container:
if line[0] is log.id_resp_address:
ipFound = True
responseid_container[1] += 1
responseid_container[2].append = log.id_resp_address
if not ipFound:
ipFound = False
responseid_container.append([log.id_resp_address , 1 , [counter]])
counter += 1
</code></pre>
| 0 | 2016-08-11T04:18:32Z | 38,887,451 | <p>I checked your code you have problem at this statement:-</p>
<pre><code>responseid_container[1] += 1
</code></pre>
<p>convert this line to:</p>
<pre><code>responseid_container[0][1] += 1
</code></pre>
<p>Check the structure carefully:</p>
<pre><code>responseid_container = [    # outer index 0 -> the inner list
    [" ", 0, []]            # inner index 0 -> " ", index 1 -> 0, index 2 -> []
]
</code></pre>
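<p>A quick stdlib check of that indexing: the counter lives one level down, inside the inner list, so it must be reached through two subscripts (a minimal sketch mirroring the structure above; note also that <code>append</code> is a method call, not something you assign to):</p>

```python
responseid_container = [[" ", 0, []]]   # outer list holding one [ip, count, lines] entry

responseid_container[0][1] += 1          # index the inner list first, then the counter
responseid_container[0][2].append(7)     # .append(x) is a call, not an assignment

print(responseid_container)  # [[' ', 1, [7]]]
```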
| 1 | 2016-08-11T04:48:41Z | [
"python",
"arrays",
"increment"
] |
The returned value not defined | 38,887,226 | <p>I am creating a guessing game and I have created two functions. One to take the user input and the other to check whether the user input is correct.</p>
<pre><code>def getGuess(maxNum):
if maxNum == "10":
count=0
guess = -1
guessnum = [ ]
while guess >10 or guess<0:
try:
guess=int(input("Guess?"))
except:
print("Please enter valid input")
guesses.append(guess)
return guesses
return guess
def checkGuess(maxNum):
if maxNum == "10":
if guess>num1:
print("Too High")
elif guess<num1:
print ("Too Low")
else:
print("Correct")
print (guesses)
</code></pre>
<p>and the main code is </p>
<pre><code>if choice == "1":
count = 0
print("You have selected Easy as the level of difficulty")
maxNum= 10
num1=random.randint(0,10)
print (num1)
guess = 11
while guess != num1:
getGuess("10")
checkGuess("10")
count = count+1
print (guess)
</code></pre>
<p>Although the function returns the users guess the code always takes the guess as 11. If I don't define guess, it doesn't work either. Please help.</p>
| 1 | 2016-08-11T04:21:10Z | 38,887,422 | <p>First, you are returning two values. A <code>return</code> statement also acts as a <code>break</code>, so the second <code>return</code> will not be called. Also, you are not storing the returned value anywhere, so it just disappears. </p>
<p>Here is your edited code:</p>
<pre><code>def getGuess(maxNum):
if maxNum == "10":
guess = -1
while guess >10 or guess<0:
try:
guess=int(input("Guess?"))
except:
print("Please enter valid input")
return guess
def checkGuess(maxNum, guess, num1):
if maxNum == "10":
if guess>num1:
print("Too High")
elif guess<num1:
print ("Too Low")
else:
print("Correct")
return True
return False
if choice == "1":
count = 0
print("You have selected Easy as the level of difficulty")
maxNum= 10
num1=random.randint(0,10)
print (num1)
guess = 11
guesses = []
while guess != num1:
guess = getGuess("10")
guesses.append(guess)
hasWon = checkGuess("10", guess, num1)
if hasWon:
print(guesses)
break
count = count+1
</code></pre>
<hr>
<pre><code>You have selected Easy as the level of difficulty
2
Guess?5
Too High
Guess?1
Too Low
Guess?2
Correct
[5, 1, 2]
>>>
</code></pre>
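<p>The point about <code>return</code> exiting immediately can be demonstrated in isolation (a small stdlib sketch; the function name is made up for illustration):</p>

```python
def first_wins():
    return "first"      # execution leaves the function here
    return "second"     # unreachable: this line is never evaluated

print(first_wins())  # -> first
```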
| 0 | 2016-08-11T04:45:19Z | [
"python",
"python-3.x"
] |
The returned value not defined | 38,887,226 | <p>I am creating a guessing game and I have created two functions. One to take the user input and the other to check whether the user input is correct.</p>
<pre><code>def getGuess(maxNum):
if maxNum == "10":
count=0
guess = -1
guessnum = [ ]
while guess >10 or guess<0:
try:
guess=int(input("Guess?"))
except:
print("Please enter valid input")
guesses.append(guess)
return guesses
return guess
def checkGuess(maxNum):
if maxNum == "10":
if guess>num1:
print("Too High")
elif guess<num1:
print ("Too Low")
else:
print("Correct")
print (guesses)
</code></pre>
<p>and the main code is </p>
<pre><code>if choice == "1":
count = 0
print("You have selected Easy as the level of difficulty")
maxNum= 10
num1=random.randint(0,10)
print (num1)
guess = 11
while guess != num1:
getGuess("10")
checkGuess("10")
count = count+1
print (guess)
</code></pre>
<p>Although the function returns the users guess the code always takes the guess as 11. If I don't define guess, it doesn't work either. Please help.</p>
| 1 | 2016-08-11T04:21:10Z | 38,887,745 | <p>You have a programming style I call "type and hope". <code>maxNum</code> seems to bounce between a number and a string indicating you haven't thought through your approach. Below is a rework where each routine tries do something obvious and useful without extra variables. (I've left off the initial <code>choice</code> logic as it doesn't contribute to this example which can be put into your choice framework.)</p>
<pre><code>import random
def getGuess(maxNum):
guess = -1
while guess < 1 or guess > maxNum:
try:
guess = int(input("Guess? "))
except ValueError:
print("Please enter valid input")
return guess
def checkGuess(guess, number):
if guess > number:
print("Too High")
elif guess < number:
print("Too Low")
else:
print("Correct")
return True
return False
print("You have selected Easy as the level of difficulty")
maxNum = 10
maxTries = 3
number = random.randint(1, maxNum)
count = 1
guess = getGuess(maxNum)
while True:
if checkGuess(guess, number):
break
count = count + 1
if count > maxTries:
print("Too many guesses, it was:", number)
break
guess = getGuess(maxNum)
</code></pre>
<p>A couple of specific things to consider: avoid using <code>except</code> without some sense of what exception you're expecting; avoid passing numbers around as strings -- convert numeric strings to numbers on input, convert numbers to numeric strings on output, but use actual numbers in between.</p>
| 0 | 2016-08-11T05:15:39Z | [
"python",
"python-3.x"
] |
Zip file automation with bash/git | 38,887,388 | <p>I have a directory like <code>/home/folder1/folder2/index.html</code></p>
<p>In the end I have to give the files that have changes and zip it. So I have to create a folder name 2016/11/8 and the content is <code>home/folder1/folder2/css/style.css</code></p>
<p>if says I changed the style. It's tedious but I couldn't find a way to automate this.</p>
| 0 | 2016-08-11T04:41:42Z | 38,887,478 | <p>I assume that you want to get the files that have difference between 2 coimmitss, </p>
<pre><code>git archive --format=zip HEAD `git diff --name-only <SHA> HEAD` > difference.zip
</code></pre>
<p>The following will give you the files that differ between the given SHA and the current HEAD:</p>
<pre><code>git diff --name-only <SHA> HEAD
</code></pre>
<p>I hope this will help you</p>
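<p>If you also need the dated folder layout from the question (e.g. <code>2016/11/8</code>), the changed-file list from <code>git diff --name-only</code> can be packed with the Python standard library instead of <code>git archive</code> (a hedged sketch; the <code>archive/</code> output root and <code>changes.zip</code> name are assumptions, the day component comes out zero-padded, and obtaining the file list from git is left outside the sketch):</p>

```python
import zipfile
from datetime import date
from pathlib import Path

def zip_changed(files, root="archive"):
    """Pack the given repo-relative paths into <root>/YYYY/MM/DD/changes.zip.

    `files` would typically be the output of `git diff --name-only <SHA> HEAD`.
    """
    today = date.today()
    out_dir = Path(root) / f"{today:%Y/%m/%d}"   # e.g. archive/2016/11/08
    out_dir.mkdir(parents=True, exist_ok=True)
    out = out_dir / "changes.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f)                          # keeps the relative path inside the zip
    return out
```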
| 1 | 2016-08-11T04:51:23Z | [
"python",
"bash",
"automation"
] |