| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Extract first number in a sequence of identical numbers in a pandas groupby | 39,086,281 | <p>I would like to group a dataframe by a column 'type', and get the first number from each sequence of identical numbers. The following example illustrates:</p>
<pre><code>A = pd.DataFrame({'type':['A','A','A','A','A','A','A','A','A','B','B','B','B','B'], 'value':[1,1,1,1,8,8,8,1,1,2,2,3,3,2]})
</code></pre>
<p>For group A, there is first a sequence of 1's, then of 8's, and a final of 1's. For group B, there is first one of 2's, then one of 3's, and a last one of 2's (only one element). The result should be 1,8,1 for A and 2,3,2 for B:</p>
<pre><code>type value
0 A 1
1 A 8
2 A 1
3 B 2
4 B 3
5 B 2
</code></pre>
<p>Note that neither <code>A.groupby('type').first()</code> nor <code>A.groupby('type').apply(lambda x: x.unique())</code> will work, because in both cases the last 1 and the last 2 would be ignored. Note that this would be trivial if I had an index identifying each run of identical numbers.</p>
| 3 | 2016-08-22T18:24:04Z | 39,087,192 | <pre><code>def first_contiguous(s):
return s.groupby(s.ne(s.shift()).cumsum()).head(1)
A.groupby('type').value.apply(first_contiguous).reset_index('type')
</code></pre>
<p><a href="http://i.stack.imgur.com/6SU68.png" rel="nofollow"><img src="http://i.stack.imgur.com/6SU68.png" alt="enter image description here"></a></p>
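The run-collapsing idea behind the pandas answer can be cross-checked in plain Python: `itertools.groupby` groups consecutive equal values, so taking each group's key gives the first element of every contiguous run. This is only a sketch of the logic using the sample values from the question, not the pandas solution itself:

```python
from itertools import groupby

def first_of_runs(values):
    # groupby collapses consecutive duplicates; the key of each
    # group is the first value of that contiguous run.
    return [key for key, _ in groupby(values)]

a_values = [1, 1, 1, 1, 8, 8, 8, 1, 1]  # group A from the question
b_values = [2, 2, 3, 3, 2]              # group B from the question

print(first_of_runs(a_values))  # [1, 8, 1]
print(first_of_runs(b_values))  # [2, 3, 2]
```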
| 2 | 2016-08-22T19:19:58Z | [
"python",
"pandas"
] |
Spotipy: How to read more than 100 tracks from a playlist | 39,086,287 | <p>I'm trying to pull all tracks in a certain playlist using the <a href="https://spotipy.readthedocs.io/en/latest/#more-examples" rel="nofollow">Spotipy library</a> for python. </p>
<p>The user_playlist_tracks function is limited to 100 tracks, regardless of the parameter limit. The Spotipy documentation describes it as: </p>
<blockquote>
<p>user_playlist_tracks(user, playlist_id=None, fields=None, limit=100,
offset=0, market=None) </p>
<p>Get full details of the tracks of a playlist
owned by a user.</p>
<p>Parameters:</p>
<ul>
<li>user </li>
<li>the id of the user playlist_id </li>
<li>the id of the playlist fields</li>
<li>which fields to return limit </li>
<li>the maximum number of tracks to return offset </li>
<li>the index of the first track to return market</li>
<li>an ISO 3166-1 alpha-2 country code.</li>
</ul>
</blockquote>
<p>After authenticating with Spotify, I'm currently using something like this:</p>
<pre><code>username = xxxx
playlist = #fromspotipy
sp_playlist = sp.user_playlist_tracks(username, playlist_id=playlist)
tracks = sp_playlist['items']
print(tracks)
</code></pre>
<p>Is there a way to return more than 100 tracks? I've tried setting the limit=None in the function parameters, but it returns an error.</p>
 | 0 | 2016-08-22T18:24:26Z | 39,087,769 | <p>Below is the <code>user_playlist_tracks</code> method used in Spotipy (notice it defaults to a limit of 100).</p>
<p>Try setting the limit to 200. </p>
<pre><code>def user_playlist_tracks(self, user, playlist_id = None, fields=None,
limit=100, offset=0):
''' Get full details of the tracks of a playlist owned by a user.
Parameters:
- user - the id of the user
- playlist_id - the id of the playlist
- fields - which fields to return
- limit - the maximum number of tracks to return
- offset - the index of the first track to return
'''
plid = self._get_id('playlist', playlist_id)
return self._get("users/%s/playlists/%s/tracks" % (user, plid),
limit=limit, offset=offset, fields=fields)
</code></pre>
| 0 | 2016-08-22T20:00:38Z | [
"python",
"spotipy"
] |
Spotipy: How to read more than 100 tracks from a playlist | 39,086,287 | <p>I'm trying to pull all tracks in a certain playlist using the <a href="https://spotipy.readthedocs.io/en/latest/#more-examples" rel="nofollow">Spotipy library</a> for python. </p>
<p>The user_playlist_tracks function is limited to 100 tracks, regardless of the parameter limit. The Spotipy documentation describes it as: </p>
<blockquote>
<p>user_playlist_tracks(user, playlist_id=None, fields=None, limit=100,
offset=0, market=None) </p>
<p>Get full details of the tracks of a playlist
owned by a user.</p>
<p>Parameters:</p>
<ul>
<li>user </li>
<li>the id of the user playlist_id </li>
<li>the id of the playlist fields</li>
<li>which fields to return limit </li>
<li>the maximum number of tracks to return offset </li>
<li>the index of the first track to return market</li>
<li>an ISO 3166-1 alpha-2 country code.</li>
</ul>
</blockquote>
<p>After authenticating with Spotify, I'm currently using something like this:</p>
<pre><code>username = xxxx
playlist = #fromspotipy
sp_playlist = sp.user_playlist_tracks(username, playlist_id=playlist)
tracks = sp_playlist['items']
print(tracks)
</code></pre>
<p>Is there a way to return more than 100 tracks? I've tried setting the limit=None in the function parameters, but it returns an error.</p>
 | 0 | 2016-08-22T18:24:26Z | 39,113,522 | <p>Many of the Spotipy methods return paginated results, so you will have to page through them to retrieve more than the per-request limit. I've encountered this most often when collecting a playlist's full track listing, and consequently created a custom method to handle it:</p>
<pre><code>def get_playlist_tracks(username,playlist_id):
results = sp.user_playlist_tracks(username,playlist_id)
tracks = results['items']
while results['next']:
results = sp.next(results)
tracks.extend(results['items'])
return tracks
</code></pre>
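The loop above is a generic "follow the next page" pattern. Its control flow can be sketched against fake pages — the dictionary structure here is illustrative only, not the real Spotify payload:

```python
# Simulated paginated responses: each page points at the index of the
# next page, with None marking the last page.
pages = [
    {'items': [1, 2, 3], 'next': 1},
    {'items': [4, 5], 'next': 2},
    {'items': [6], 'next': None},
]

def fetch_page(i):
    # Stand-in for sp.next(results) / the initial API call.
    return pages[i]

results = fetch_page(0)
tracks = list(results['items'])
while results['next'] is not None:
    results = fetch_page(results['next'])
    tracks.extend(results['items'])

print(tracks)  # [1, 2, 3, 4, 5, 6]
```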
| 0 | 2016-08-24T02:33:30Z | [
"python",
"spotipy"
] |
How to count number of occurences of permutation (overlapping) in large text in python3? | 39,086,310 | <p>I have a list of words and I'd like to find out how many times each permutation occurs in this list of words.
I'd also like to count overlapping occurrences, so count() doesn't seem appropriate.
For example, the permutation aba appears twice in this string:</p>
<p>ababa</p>
<p>However, count() would say one.</p>
<p>So I designed this little script, but I am not too sure that it is efficient. The array of words comes from an external file; I just removed that part to make it simpler.</p>
<pre><code>import itertools
#Occurrence counting function
def occ(string, sub):
    count = start = 0
    while True:
        start = string.find(sub, start) + 1
        if start > 0:
            count += 1
        else:
            return count
#permutation generator
abc = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
permut = [''.join(p) for p in itertools.product(abc, repeat=2)]
#Transform osd7 in array
arrayofWords = ['word1', "word2", "word3", "word4"]
dict_output = {}
dict_output['total'] = 0
#create the dict entries
for perm in permut:
    dict_output[perm] = 0
#iterate over the arrayofWords and permutations
for word in arrayofWords:
    for perm in permut:
        dict_output[perm] = dict_output[perm] + occ(word, perm)
        dict_output['total'] = dict_output['total'] + occ(word, perm)
</code></pre>
<p>It is working, but it takes a looonnnggg time. If I change product(abc,repeat=2) to product(abc,repeat=3) or product(abc,repeat=4)... it will take a full week!</p>
<p><strong>The question: Is there a more efficient way?</strong></p>
| 1 | 2016-08-22T18:25:42Z | 39,086,441 | <p>You can use <code>re</code> module to count overlapping match.</p>
<pre><code>import re
print(len(re.findall(r'(?=(aba))', 'ababa')))
</code></pre>
<p>Output:</p>
<pre><code>2
</code></pre>
<p>More generally,</p>
<pre><code>print(len(re.findall(r'(?=(<pattern>))', '<input_string>')))
</code></pre>
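In Python 3 the same lookahead trick can be wrapped in a small helper; `re.escape` guards against regex metacharacters in the substring (a sketch):

```python
import re

def count_overlapping(text, sub):
    # A lookahead match consumes no characters, so every starting
    # position is tested and overlapping occurrences are all counted.
    return len(re.findall('(?=' + re.escape(sub) + ')', text))

print(count_overlapping('ababa', 'aba'))  # 2
print(count_overlapping('aaaa', 'aa'))    # 3
```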
| 0 | 2016-08-22T18:33:32Z | [
"python",
"arrays",
"string",
"algorithm",
"count"
] |
How to count number of occurences of permutation (overlapping) in large text in python3? | 39,086,310 | <p>I have a list of words and I'd like to find out how many times each permutation occurs in this list of words.
I'd also like to count overlapping occurrences, so count() doesn't seem appropriate.
For example, the permutation aba appears twice in this string:</p>
<p>ababa</p>
<p>However, count() would say one.</p>
<p>So I designed this little script, but I am not too sure that it is efficient. The array of words comes from an external file; I just removed that part to make it simpler.</p>
<pre><code>import itertools
#Occurrence counting function
def occ(string, sub):
    count = start = 0
    while True:
        start = string.find(sub, start) + 1
        if start > 0:
            count += 1
        else:
            return count
#permutation generator
abc = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
permut = [''.join(p) for p in itertools.product(abc, repeat=2)]
#Transform osd7 in array
arrayofWords = ['word1', "word2", "word3", "word4"]
dict_output = {}
dict_output['total'] = 0
#create the dict entries
for perm in permut:
    dict_output[perm] = 0
#iterate over the arrayofWords and permutations
for word in arrayofWords:
    for perm in permut:
        dict_output[perm] = dict_output[perm] + occ(word, perm)
        dict_output['total'] = dict_output['total'] + occ(word, perm)
</code></pre>
<p>It is working, but it takes a looonnnggg time. If I change product(abc,repeat=2) to product(abc,repeat=3) or product(abc,repeat=4)... it will take a full week!</p>
<p><strong>The question: Is there a more efficient way?</strong></p>
| 1 | 2016-08-22T18:25:42Z | 39,087,419 | <p>Very simple: count only what you need to count.</p>
<pre><code>from collections import defaultdict
quadrigrams = defaultdict(lambda: 0)
for word in arrayofWords:
for i in range(len(word) - 3):
quadrigrams[word[i:i+4]] += 1
</code></pre>
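The sliding-window idea above generalizes to any n-gram length, and it counts overlapping occurrences by construction — a sketch where the hard-coded 4 becomes a parameter:

```python
from collections import defaultdict

def ngram_counts(words, n):
    counts = defaultdict(int)
    for word in words:
        # One window per starting index, so overlaps are counted
        # naturally: 'ababa' contributes 'aba' twice for n=3.
        for i in range(len(word) - n + 1):
            counts[word[i:i + n]] += 1
    return counts

print(ngram_counts(['ababa'], 3)['aba'])  # 2
```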
| 2 | 2016-08-22T19:37:15Z | [
"python",
"arrays",
"string",
"algorithm",
"count"
] |
python: read beyond end of file | 39,086,368 | <p>I'm trying to read beyond the EOF in Python, but so far I'm failing (also tried to work with seek to position and read fixed size).</p>
<p>I've found a workaround by working with debugfs and subprocess, but it is too slow and only works on Linux, not Windows.</p>
<p>My Question: is it possible to read a file beyond EOF in python (which works on all platforms)?</p>
 | 0 | 2016-08-22T18:29:11Z | 39,086,415 | <p>You can't read more bytes than are in the file. "End of file" literally means exactly that.</p>
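A quick experiment shows this: seeking past EOF is allowed, but reading there simply returns an empty bytes object (sketch using a throwaway temp file):

```python
import os
import tempfile

# Write a 5-byte file, then try to read past its end.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'hello')

with open(path, 'rb') as f:
    f.seek(100)        # seeking beyond EOF is legal...
    data = f.read(10)  # ...but there is nothing there to read

print(repr(data))      # b''
os.remove(path)
```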
| 4 | 2016-08-22T18:32:23Z | [
"python"
] |
python: read beyond end of file | 39,086,368 | <p>I'm trying to read beyond the EOF in Python, but so far I'm failing (also tried to work with seek to position and read fixed size).</p>
<p>I've found a workaround by working with debugfs and subprocess, but it is too slow and only works on Linux, not Windows.</p>
<p>My Question: is it possible to read a file beyond EOF in python (which works on all platforms)?</p>
| 0 | 2016-08-22T18:29:11Z | 39,086,475 | <p>You can only move to the end using:</p>
<pre><code>file.seek(0, 2)
</code></pre>
<p>Is that what you're trying to do?</p>
| 0 | 2016-08-22T18:35:48Z | [
"python"
] |
Running Python scripts on Amazon Web Services? Do I need to use Boto? | 39,086,388 | <p>Maybe this is a silly question. I just set up a free Amazon Linux instance according to the tutorial; what I want to do is simply run Python scripts.</p>
<p>Then I googled AWS and Python, and Amazon mentioned Boto.</p>
<p>I don't know why I would use Boto, because if I type python, it is already installed.</p>
<p>What I want to do is run a script during the daytime.</p>
<p>Do I need to read about Boto, or can I just run xx.py on AWS?</p>
<p>Any help is appreciated.</p>
 | 1 | 2016-08-22T18:30:28Z | 39,086,443 | <p>Boto is a Python interface to Amazon Web Services (e.g. copying files to S3).</p>
<p>You don't need it just to run regular Python, as you would on any Linux instance with Python installed; it is only needed to access AWS services from your EC2 instance.</p>
| 3 | 2016-08-22T18:33:44Z | [
"python",
"amazon-web-services"
] |
Running Python scripts on Amazon Web Services? Do I need to use Boto? | 39,086,388 | <p>Maybe this is a silly question. I just set up a free Amazon Linux instance according to the tutorial; what I want to do is simply run Python scripts.</p>
<p>Then I googled AWS and Python, and Amazon mentioned Boto.</p>
<p>I don't know why I would use Boto, because if I type python, it is already installed.</p>
<p>What I want to do is run a script during the daytime.</p>
<p>Do I need to read about Boto, or can I just run xx.py on AWS?</p>
<p>Any help is appreciated.</p>
 | 1 | 2016-08-22T18:30:28Z | 39,086,478 | <p>Boto is a <strong>Python wrapper</strong> for <strong>AWS APIs</strong>. If you want to interact with AWS using its published APIs, you need the boto/boto3 library installed. Boto will not be supported for much longer, so if you are starting now, use <strong>Boto3</strong>, which is much simpler than Boto.</p>
<p>Boto3 supports (almost) all AWS services.</p>
| 2 | 2016-08-22T18:36:09Z | [
"python",
"amazon-web-services"
] |
Using requests package to make request | 39,086,420 | <p>I have an application (a Spark-based service) which, when started, works as follows.</p>
<p>At <code>localhost:9000</code></p>
<p>If I do <code>nc -lk localhost 9000</code>
and then start entering text, it takes the text entered in the terminal as input and does a simple wordcount computation on it.
How do I use the <code>requests</code> library to programmatically send the text, instead of manually typing it in the terminal?</p>
<p>Not sure if my question is making sense..</p>
 | 0 | 2016-08-22T18:32:39Z | 39,086,692 | <p>requests is an HTTP request library, while Spark's wordcount example provides a raw socket server, so no, requests is not the right package to communicate with your Spark app.</p>
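To feed text into a raw socket server (what `nc -lk` talks to), the stdlib `socket` module is the right tool. A minimal sketch, with the server side simulated in a thread — the server here is a stand-in for the Spark socket source, not Spark itself:

```python
import socket
import threading

received = []

def tiny_server(sock):
    # Accept one connection and record what the client sends.
    conn, _ = sock.accept()
    received.append(conn.recv(1024).decode())
    conn.close()

# Stand-in for the socket source listening on a port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))  # 0 = let the OS pick a free port
server.listen(1)
t = threading.Thread(target=tiny_server, args=(server,))
t.start()

# Client side: this replaces typing into `nc`.
client = socket.create_connection(server.getsockname())
client.sendall(b'hello wordcount\n')
client.close()
t.join()
server.close()

print(received)  # ['hello wordcount\n']
```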
| 1 | 2016-08-22T18:48:01Z | [
"python",
"python-requests"
] |
django testing without email backend | 39,086,434 | <p>I want to test a view in my Django application. So I open the Python shell by typing <code>python</code> and then I type <code>from django.test.utils import setup_test_environment</code>. It seems to work fine. Then I type <code>setup_test_environment()</code> and it says </p>
<blockquote>
<p>django.core.exceptions.ImproperlyConfigured: Requested setting
EMAIL_BACKEND, but settings are not configured. You must either define
the environment variable DJANGO_SETTINGS_MODULE or call
settings.configure() before accessing settings.</p>
</blockquote>
<p>I don't need to send emails in my test, so why does Django want me to configure an email back-end?</p>
<p>Are we forced to configure an email back-end for any test even if it doesn't need it ?</p>
| 0 | 2016-08-22T18:33:08Z | 39,087,027 | <p>You don't need to define the <code>EMAIL_BACKEND</code> setting (it has a default), but you do need to define a setting module. You can set the <code>DJANGO_SETTINGS_MODULE</code> in your shell environment, or set <code>os.environ['DJANGO_SETTINGS_MODULE']</code> to point to your settings module.</p>
<p>Note that calling <code>python manage.py shell</code> will set up the Django environment for you, which includes setting <code>DJANGO_SETTINGS_MODULE</code> and calling <code>django.setup()</code>. You still need to call <code>setup_test_environment()</code> to manually run tests in your python shell. </p>
| 0 | 2016-08-22T19:08:42Z | [
"python",
"django",
"email",
"testing"
] |
Extracting infromation from multiple JSON files to single CSV file in python | 39,086,440 | <p>I have a JSON file with multiple dictionaries:</p>
<pre><code>{"team1participants":
[ {
"stats": {
"item1": 3153,
"totalScore": 0,
...
}
},
{
"stats": {
"item1": 2123,
"totalScore": 5,
...
}
},
{
"stats": {
"item1": 1253,
"totalScore": 1,
...
}
}
],
"team2participants":
[ {
"stats": {
"item1": 1853,
"totalScore": 2,
...
}
},
{
"stats": {
"item1": 21523,
"totalScore": 5,
...
}
},
{
"stats": {
"item1": 12503,
"totalScore": 1,
...
}
}
]
}
</code></pre>
<p>In other words, the JSON has multiple keys. Each key has a list containing statistics of individual participants.</p>
<p>I have many such JSON files, and I want to extract them to a single CSV file. I can of course do this manually, but this is very tedious. I know of DictWriter, but it seems to work only for single dictionaries. I also know that dictionaries can be concatenated, but it would be problematic because all dictionaries have the same keys.</p>
<p>How can I efficiently extract this to a CSV file?</p>
| 2 | 2016-08-22T18:33:30Z | 39,086,846 | <p>You can make your data tidy so that each row is a unique observation.</p>
<pre><code>teams = []
items = []
scores = []
for team in d:
for item in d[team]:
teams.append(team)
items.append(item['stats']['item1'])
scores.append(item['stats']['totalScore'])
# Using Pandas.
import pandas as pd
df = pd.DataFrame({'team': teams, 'item': items, 'score': scores})
>>> df
item score team
0 1853 2 team2participants
1 21523 5 team2participants
2 12503 1 team2participants
3 3153 0 team1participants
4 2123 5 team1participants
5 1253 1 team1participants
</code></pre>
<p>You could also use a list comprehension instead of a loop.</p>
<pre><code>results = [[team, item['stats']['item1'], item['stats']['totalScore']]
for team in d for item in d[team]]
df = pd.DataFrame(results, columns=['team', 'item', 'score'])
</code></pre>
<p>You can then do a pivot table, for example:</p>
<pre><code>>>> df.pivot_table(values='score', index='team', columns='item', aggfunc='sum').fillna(0)
item 1253 1853 2123 3153 12503 21523
team
team1participants 1 0 5 0 0 0
team2participants 0 2 0 0 1 5
</code></pre>
<p>Also, now that it is a dataframe, it is easy to save it as a CSV.</p>
<pre><code>df.to_csv('my_file_name.csv')
</code></pre>
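If pandas is unavailable, the same flattening can be done with the stdlib `csv` module. This is a stdlib-only sketch: the column names assume the `stats` keys shown in the question, and it writes to an in-memory buffer (use `open('out.csv', 'w', newline='')` for a real file):

```python
import csv
import io

# Truncated version of the question's JSON, one entry per team.
d = {'team1participants': [{'stats': {'item1': 3153, 'totalScore': 0}}],
     'team2participants': [{'stats': {'item1': 1853, 'totalScore': 2}}]}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['team', 'item1', 'totalScore'])
writer.writeheader()
for team, members in sorted(d.items()):
    for member in members:
        # Flatten: one CSV row per participant, tagged with the team key.
        row = {'team': team}
        row.update(member['stats'])
        writer.writerow(row)

print(buf.getvalue())
```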
| 2 | 2016-08-22T18:57:31Z | [
"python",
"json",
"python-2.7",
"csv",
"pandas"
] |
NoReverseMatch at / Python Django | 39,086,484 | <p>I'm taking a Django course, and I'm having the next error:</p>
<blockquote>
<p>Reverse for 'products.views.product_detail' with arguments '(1,)' and keyword arguments '{}' not found. 0 pattern(s) tried: []</p>
</blockquote>
<p>I'm trying to send an argument to a view from a file that is called index.html</p>
<p>My index.html looks like this:</p>
<pre><code>{% for pr in product %}
<li>
<a href="{% url 'products.views.product_detail' pr.pk %}">{{ pr.name }} </a>
| {{ pr.description }}
<img src="{{ pr.imagen.url }}" alt="">
</li>
{% endfor%}
</code></pre>
<p>I already declared the url that is associated:</p>
<pre><code>urlpatterns = [
url(r'^product/(?P<pk>[0-9]+)/$', views.product_detail, name='views.product_detail')
]
</code></pre>
<p>And my views.py looks like this:</p>
<pre><code>def product_detail(request, pk):
product = get_object_or_404(Product, pk = pk)
template = loader.get_template('product_detail.html')
context = {
'product': product
}
return HttpResponse(template.render(context, request))
</code></pre>
<p>Does anyone know why this error is happening?</p>
| 3 | 2016-08-22T18:36:28Z | 39,086,891 | <p>From <a href="https://docs.djangoproject.com/en/1.10/releases/1.10/#features-removed-in-1-10">"Features to be removed in 1.10"</a>:</p>
<blockquote>
<ul>
<li>The ability to reverse() URLs using a dotted Python path is removed.</li>
</ul>
</blockquote>
<p>The <code>{% url %}</code> tag uses <code>reverse()</code>, so the same applies. As elethan mentioned in the comments, you need to use the <code>name</code> parameter provided in your URLconf instead, in this case <code>views.product_detail</code>:</p>
<pre><code>{% for pr in product %}
<li>
<a href="{% url 'views.product_detail' pr.pk %}">{{ pr.name }} </a>
| {{ pr.description }}
<img src="{{ pr.imagen.url }}" alt="">
</li>
{% endfor %}
</code></pre>
| 5 | 2016-08-22T19:00:25Z | [
"python",
"django"
] |
Making a nested dictionary with lists and arrays | 39,086,485 | <p>I have two lists and an array:</p>
<pre><code>owners = [ 'Bill', 'Ann', 'Sarah']
dog = ['shepherd', 'collie', 'poodle', 'terrier']
totals = [[5, 15, 3, 20],[3,2,16,16],[20,35,1,2]]
</code></pre>
<p>I want to make a nested dictionary out of these.</p>
<pre><code> dict1 = {'Bill': {'shepherd': 5, 'collie': 15, 'poodle': 3, 'terrier': 20},
'Ann': {'shepherd': 3, 'collie': 2, 'poodle': 16, 'terrier': 16},
'Sarah': {'shepherd': 20, 'collie': 35, 'poodle': 1, 'terrier': 2}
}
</code></pre>
<p>My closest attempt:</p>
<pre><code> totals_list = totals.tolist()
dict1 = dict(zip(owners, totals_list))
</code></pre>
<p>I cannot find a way to create the nested dictionary I am looking for. Any suggestions?</p>
| 0 | 2016-08-22T18:36:31Z | 39,086,604 | <pre><code>main_dict = {}
for owner, total in zip(owners, totals):
main_dict[owner] = {}
for key, value in zip(dog, total):
main_dict[owner][key] = value
</code></pre>
<p>You may also write it in one line using <code>dict comprehension</code> as:</p>
<pre><code>main_dict = {owner: dict(zip(dog, total)) for owner, total in zip(owners, totals)}
</code></pre>
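With the sample lists from the question, the dict comprehension produces exactly the requested nested dictionary:

```python
owners = ['Bill', 'Ann', 'Sarah']
dog = ['shepherd', 'collie', 'poodle', 'terrier']
totals = [[5, 15, 3, 20], [3, 2, 16, 16], [20, 35, 1, 2]]

# One inner dict per owner, pairing breed names with that owner's counts.
main_dict = {owner: dict(zip(dog, total)) for owner, total in zip(owners, totals)}

print(main_dict['Ann'])  # {'shepherd': 3, 'collie': 2, 'poodle': 16, 'terrier': 16}
```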
| 5 | 2016-08-22T18:43:39Z | [
"python",
"arrays",
"list",
"dictionary"
] |
Using xlwt, create a new sheet anytime xls row limit is reached | 39,086,502 | <p>I'm currently writing a python script that will take an arbitrary number of csv files and create .xls files from them. Unfortunately, some of these csv files have row counts greater than 65536, which means that they can't exist on one .xls sheet. What I would like to do is come up with a way to generate a new sheet when that number of rows is reached. For reference, here is the code I'm currently using:</p>
<pre><code>import csv, xlwt, glob, ntpath
files = glob.glob("C:/Users/waldiesamuel/326/*.csv")
bold = xlwt.easyxf('font: bold on')
for i in files:
org_file = open(i, 'r')
reader = csv.reader((org_file), delimiter=",")
workbook = xlwt.Workbook()
sheet = workbook.add_sheet("SQL Results")
path = ntpath.dirname(i)
file = ntpath.basename(i)
for rowi, row in enumerate(reader):
for coli, value in enumerate(row):
if coli == 0:
sheet.write(rowi,coli,value,bold)
else:
sheet.write(rowi,coli,value)
workbook.save(path + file + '.xls')
</code></pre>
<p>My thought is that around</p>
<pre><code>for rowi, row in enumerate(reader):
</code></pre>
<p>I could use an if statement to check if row is greater than 65536, but I'm not sure how to create a new variable from there.</p>
<p><strong>Edit:</strong></p>
<p>I found a potential solution, which failed, and was explained by the answer. I'm including it here as an edit so everyone can follow the thought process:</p>
<p>So it appears that because xlwt checks to specifically make sure you're not adding more than 65536 rows, this might not be doable. I had come up with what I thought was a clever solution, by changing my sheet variable to a dict, like so:</p>
<pre><code>sheet = {1: workbook.add_sheet("SQL Results")}
</code></pre>
<p>then initializing two variables to serve as counters:</p>
<pre><code>sheet_counter = 1
dict_counter = 2
</code></pre>
<p>and then using that for a conditional within the first for loop that would reset the row index and allow xlwt to continue writing to a new sheet:</p>
<pre><code>if rowi == 65536:
sheet[dict_counter] = workbook.add_sheet("SQL Results (" + str(dict_counter) + ")")
sheet_counter += 1
dict_counter += 1
rowi = 1
else:
pass
</code></pre>
<p>Unfortunately, even doing so still causes xlwt to throw the following error when the <code>row</code> variable increments beyond 65536:</p>
<pre><code>Traceback (most recent call last):
File "xlstest.py", line 35, in <module>
sheet[sheet_counter].write(rowi,coli,value,bold)
File "C:\Users\waldiesamuel\AppData\Local\Programs\Python\Python35-32\lib\site-packages\xlwt\Worksheet.py", line 1088, in write
self.row(r).write(c, label, style)
File "C:\Users\waldiesamuel\AppData\Local\Programs\Python\Python35-32\lib\site-packages\xlwt\Worksheet.py", line 1142, in row
self.__rows[indx] = self.Row(indx, self)
File "C:\Users\waldiesamuel\AppData\Local\Programs\Python\Python35-32\lib\site-packages\xlwt\Row.py", line 43, in __init__
raise ValueError("row index was %r, not allowed by .xls format" % rowx)
ValueError: row index was 65537, not allowed by .xls format
</code></pre>
| 1 | 2016-08-22T18:37:17Z | 39,087,611 | <p>xlwt is </p>
<blockquote>
<p>a library for developers to use to generate spreadsheet files
compatible with Microsoft Excel versions 95 to 2003.
(see <a href="https://pypi.python.org/pypi/xlwt" rel="nofollow">here</a>)</p>
</blockquote>
<p>In those Excel versions the maximum number of rows is 65536. See <a href="http://superuser.com/questions/366468/what-is-the-maximum-allowed-rows-in-a-microsoft-excel-xls-or-xlsx">here</a>.</p>
<p>Try <a href="https://pypi.python.org/pypi/XlsxWriter" rel="nofollow">XlsxWriter</a>, which targets the Excel 2007+ xlsx format, where the number of rows can be up to 1,048,576.</p>
| 1 | 2016-08-22T19:50:10Z | [
"python",
"xlwt"
] |
Using xlwt, create a new sheet anytime xls row limit is reached | 39,086,502 | <p>I'm currently writing a python script that will take an arbitrary number of csv files and create .xls files from them. Unfortunately, some of these csv files have row counts greater than 65536, which means that they can't exist on one .xls sheet. What I would like to do is come up with a way to generate a new sheet when that number of rows is reached. For reference, here is the code I'm currently using:</p>
<pre><code>import csv, xlwt, glob, ntpath
files = glob.glob("C:/Users/waldiesamuel/326/*.csv")
bold = xlwt.easyxf('font: bold on')
for i in files:
org_file = open(i, 'r')
reader = csv.reader((org_file), delimiter=",")
workbook = xlwt.Workbook()
sheet = workbook.add_sheet("SQL Results")
path = ntpath.dirname(i)
file = ntpath.basename(i)
for rowi, row in enumerate(reader):
for coli, value in enumerate(row):
if coli == 0:
sheet.write(rowi,coli,value,bold)
else:
sheet.write(rowi,coli,value)
workbook.save(path + file + '.xls')
</code></pre>
<p>My thought is that around</p>
<pre><code>for rowi, row in enumerate(reader):
</code></pre>
<p>I could use an if statement to check if row is greater than 65536, but I'm not sure how to create a new variable from there.</p>
<p><strong>Edit:</strong></p>
<p>I found a potential solution, which failed, and was explained by the answer. I'm including it here as an edit so everyone can follow the thought process:</p>
<p>So it appears that because xlwt checks to specifically make sure you're not adding more than 65536 rows, this might not be doable. I had come up with what I thought was a clever solution, by changing my sheet variable to a dict, like so:</p>
<pre><code>sheet = {1: workbook.add_sheet("SQL Results")}
</code></pre>
<p>then initializing two variables to serve as counters:</p>
<pre><code>sheet_counter = 1
dict_counter = 2
</code></pre>
<p>and then using that for a conditional within the first for loop that would reset the row index and allow xlwt to continue writing to a new sheet:</p>
<pre><code>if rowi == 65536:
sheet[dict_counter] = workbook.add_sheet("SQL Results (" + str(dict_counter) + ")")
sheet_counter += 1
dict_counter += 1
rowi = 1
else:
pass
</code></pre>
<p>Unfortunately, even doing so still causes xlwt to throw the following error when the <code>row</code> variable increments beyond 65536:</p>
<pre><code>Traceback (most recent call last):
File "xlstest.py", line 35, in <module>
sheet[sheet_counter].write(rowi,coli,value,bold)
File "C:\Users\waldiesamuel\AppData\Local\Programs\Python\Python35-32\lib\site-packages\xlwt\Worksheet.py", line 1088, in write
self.row(r).write(c, label, style)
File "C:\Users\waldiesamuel\AppData\Local\Programs\Python\Python35-32\lib\site-packages\xlwt\Worksheet.py", line 1142, in row
self.__rows[indx] = self.Row(indx, self)
File "C:\Users\waldiesamuel\AppData\Local\Programs\Python\Python35-32\lib\site-packages\xlwt\Row.py", line 43, in __init__
raise ValueError("row index was %r, not allowed by .xls format" % rowx)
ValueError: row index was 65537, not allowed by .xls format
</code></pre>
 | 1 | 2016-08-22T18:37:17Z | 39,125,194 | <p>The problem with your solution is that you are trying to reset <code>rowi</code> (which comes from your <code>enumerate()</code> statement) back to 1, but it is overwritten on the next iteration of the loop.</p>
<p>The easiest way to achieve what you want, I think, is to change the way you reference rows and sheets. You can use the <a href="http://stackoverflow.com/questions/183853/in-python-what-is-the-difference-between-and-when-used-for-division">floor division</a> and <a href="http://stackoverflow.com/questions/4432208/how-does-work-in-python">modulo</a> operators to give you the sheet number and row numbers respectively.</p>
<pre><code>sheetno = rowi // 65536        # which sheet this row belongs on
rowno = rowi % 65536           # row index within that sheet
if sheetno not in sheet:       # create sheets on demand, keyed by sheetno
    sheet[sheetno] = workbook.add_sheet("SQL Results (" + str(sheetno) + ")")
sheet[sheetno].write(rowno, coli, value, bold)
</code></pre>
</code></pre>
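The index arithmetic itself can be sanity-checked without xlwt at all — floor division selects the sheet and modulo selects the row within it (a plain-Python sketch):

```python
LIMIT = 65536  # .xls per-sheet row limit

def locate(rowi):
    # Map a global row index to (sheet number, row within that sheet).
    return rowi // LIMIT, rowi % LIMIT

print(locate(0))      # (0, 0)      first row lands on sheet 0
print(locate(65535))  # (0, 65535)  last row that fits on sheet 0
print(locate(65536))  # (1, 0)      next row starts sheet 1
```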
| 1 | 2016-08-24T13:50:44Z | [
"python",
"xlwt"
] |
Dict in loop for pd.DataFrame | 39,086,512 | <p>I have many columns in my dataset &amp; I need to change the values in some of the variables. I do it as below: </p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'one':['a' , 'b']*5, 'two':['c' , 'd']*5, 'three':['a' , 'd']*5})
</code></pre>
<p>select</p>
<pre><code>df1 = df[['one', 'two']]
</code></pre>
<p>dict</p>
<pre><code>map = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'}
</code></pre>
<p>and loop</p>
<pre><code>df2=[]
for i in df1.values:
np = [ map[x] for x in i]
df2.append(np)
</code></pre>
<p>then i change columns</p>
<pre><code>df['one'] = [row[0] for row in df2]
df['two'] = [row[1] for row in df2]
</code></pre>
<p>It works, but it's a very long-winded way. How can I make it shorter?</p>
| 3 | 2016-08-22T18:37:45Z | 39,086,953 | <p>You can use <code>Series.map()</code> iterating over columns:</p>
<pre><code>cols = ['one', 'two']
mapd = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'}
for col in cols:
df[col] = df[col].map(mapd).fillna(df[col])
df
Out:
one three two
0 d a b
1 c d a
2 d a b
3 c d a
4 d a b
5 c d a
6 d a b
7 c d a
8 d a b
9 c d a
</code></pre>
<p>Timings:</p>
<pre><code>df = pd.DataFrame({'one':['a' , 'b']*5000000,
'two':['c' , 'd']*5000000,
'three':['a' , 'd']*5000000})
%%timeit
for col in cols:
df[col].map(mapd).fillna(df[col])
1 loop, best of 3: 1.71 s per loop
%%timeit
for col in cols:
... colSet = set(df[col].values);
... colMap = {k:v for k,v in mapd.items() if k in colSet}
... df.replace(to_replace={col:colMap})
1 loop, best of 3: 3.35 s per loop
%timeit df[cols].stack().map(mapd).unstack()
1 loop, best of 3: 9.18 s per loop
</code></pre>
| 2 | 2016-08-22T19:03:49Z | [
"python",
"pandas",
"for-loop",
"dictionary"
] |
Dict in loop for pd.DataFrame | 39,086,512 | <p>I have many columns in my dataset &amp; I need to change the values in some of the variables. I do it as below: </p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'one':['a' , 'b']*5, 'two':['c' , 'd']*5, 'three':['a' , 'd']*5})
</code></pre>
<p>select</p>
<pre><code>df1 = df[['one', 'two']]
</code></pre>
<p>dict</p>
<pre><code>map = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'}
</code></pre>
<p>and loop</p>
<pre><code>df2=[]
for i in df1.values:
np = [ map[x] for x in i]
df2.append(np)
</code></pre>
<p>then i change columns</p>
<pre><code>df['one'] = [row[0] for row in df2]
df['two'] = [row[1] for row in df2]
</code></pre>
<p>It works, but it's a very long-winded way. How can I make it shorter?</p>
 | 3 | 2016-08-22T18:37:45Z | 39,087,664 | <p>Passing the whole map to a column that contains only 'a' and 'b' values is not efficient. First check which values are actually in the column, then map only those, as here:</p>
<pre><code>>>> cols = ['one', 'two'];
>>> map = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'};
>>> for col in cols:
... colSet = set(df[col].values);
... colMap = {k:v for k,v in map.items() if k in colSet};
... df.replace(to_replace={col:colMap},inplace=True);#not efficient like rly
...
>>> df
one three two
0 d a b
1 c d a
2 d a b
3 c d a
4 d a b
5 c d a
6 d a b
7 c d a
8 d a b
9 c d a
>>>
#OR
In [12]: %%timeit
...: for col in cols:
...: colSet = set(df[col].values);
...: colMap = {k:v for k,v in map.items() if k in colSet};
...: df[col].map(colMap)
...:
...:
1 loop, best of 3: 1.93 s per loop
#OR WHEN INPLACE
In [8]: %%timeit
...: for col in cols:
...: colSet = set(df[col].values);
...: colMap = {k:v for k,v in map.items() if k in colSet};
...: df[col]=df[col].map(colMap)
...:
...:
1 loop, best of 3: 2.18 s per loop
</code></pre>
<p>This is also possible:</p>
<pre><code>df = pd.DataFrame({'one':['a' , 'b']*5, 'two':['c' , 'd']*5, 'three':['a' , 'd']*5})
map = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'}
cols = ['one','two']
def func(s):
if s.name in cols:
s=s.map(map)
return s
print df.apply(func)
</code></pre>
<p>Also watch out for overlapping keys (i.e. if you want to change values in parallel, say a to b and b to c, but not chained like a->b->c)...</p>
<pre><code>>>> cols = ['one', 'two'];
>>> map = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'};
>>> mapCols = {k:map for k in cols};
>>> df.replace(to_replace=mapCols,inplace=True);
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "Q:\Miniconda3\envs\py27a\lib\site-packages\pandas\core\generic.py", line 3352, in replace
raise ValueError("Replacement not allowed with "
ValueError: Replacement not allowed with overlapping keys and values
</code></pre>
| 2 | 2016-08-22T19:53:49Z | [
"python",
"pandas",
"for-loop",
"dictionary"
] |
Dict in loop for pd.DataFrame | 39,086,512 | <p>I have many columns in my dataset and I need to change the values in some of the variables. I do it as below:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'one':['a' , 'b']*5, 'two':['c' , 'd']*5, 'three':['a' , 'd']*5})
</code></pre>
<p>select</p>
<pre><code>df1 = df[['one', 'two']]
</code></pre>
<p>dict</p>
<pre><code>map = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'}
</code></pre>
<p>and loop</p>
<pre><code>df2=[]
for i in df1.values:
np = [ map[x] for x in i]
df2.append(np)
</code></pre>
<p>then i change columns</p>
<pre><code>df['one'] = [row[0] for row in df2]
df['two'] = [row[1] for row in df2]
</code></pre>
<p>It works, but it's a very long-winded way. How can I make it shorter?</p>
| 3 | 2016-08-22T18:37:45Z | 39,087,782 | <pre><code>df = pd.DataFrame({'one':['a' , 'b']*5, 'two':['c' , 'd']*5, 'three':['a' , 'd']*5})
m = { 'a' : 'd', 'b' : 'c', 'c' : 'b', 'd' : 'a'}
cols = ['one', 'two']
df[cols] = df[cols].stack().map(m).unstack()
df
</code></pre>
<p><a href="http://i.stack.imgur.com/FacmA.png" rel="nofollow"><img src="http://i.stack.imgur.com/FacmA.png" alt="enter image description here"></a></p>
| 2 | 2016-08-22T20:01:29Z | [
"python",
"pandas",
"for-loop",
"dictionary"
] |
MySQL ProgrammingError 1064 with SELECT statement | 39,086,676 | <pre><code>table = "tbl_" + platform + "_chks"
search = "%" + search + "%"
cur.execute('''SELECT check_id,check_name,%s, FROM %s WHERE %s LIKE %s;''', (field,table,field,search))
</code></pre>
<p>I'm getting the following error:</p>
<blockquote>
<p>ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''tbl_linux_chks' WHERE 'check_name' LIKE '%test%'' at line 1)</p>
</blockquote>
| 1 | 2016-08-22T18:47:25Z | 39,086,797 | <p>Try this:</p>
<pre><code>cur.execute("""SELECT check_id,check_name,{}
FROM tbl_{}_chks
WHERE {} LIKE '%%{}%%'
""".format(field,platform,field,search))
</code></pre>
<p>The reason driver developers separate arguments from the query is security. So you should sanitize the data in these variables before using this solution.</p>
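<p>A hybrid sketch of that idea: format only the identifiers (which must be validated or whitelisted, never raw user input) and keep the search value as a driver parameter. The variable values below are sample stand-ins, not from the question:</p>

```python
# Format only the identifiers; keep the search value as a %s parameter
# so the driver escapes it. `field` and `platform` are sample stand-ins
# and must be whitelisted before use.
field = "check_name"
platform = "linux"
search = "test"

query = ("SELECT check_id, check_name, {0} "
         "FROM tbl_{1}_chks WHERE {0} LIKE %s").format(field, platform)
params = ("%" + search + "%",)
print(query)
print(params)
# cur.execute(query, params)  # hypothetical cursor from MySQLdb/PyMySQL
```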
| 0 | 2016-08-22T18:54:25Z | [
"python",
"mysql"
] |
MySQL ProgrammingError 1064 with SELECT statement | 39,086,676 | <pre><code>table = "tbl_" + platform + "_chks"
search = "%" + search + "%"
cur.execute('''SELECT check_id,check_name,%s, FROM %s WHERE %s LIKE %s;''', (field,table,field,search))
</code></pre>
<p>I'm getting the following error:</p>
<blockquote>
<p>ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''tbl_linux_chks' WHERE 'check_name' LIKE '%test%'' at line 1)</p>
</blockquote>
| 1 | 2016-08-22T18:47:25Z | 39,095,634 | <p><code>%s</code> query parameter placeholders can only be used for that, query parameters, not for identifiers like table or column names. When query parameters are used, strings are automatically quoted and escaped before inserted in a query, but quoting for table and column names is different.</p>
<p>MySQL uses single or double quotes for quoting values, but it uses backticks (`) for quoting identifiers.</p>
<p>For this to work correctly, you need to create the query first (using string formatting), then you can execute that query using query parameters:</p>
<pre><code># `field` and `platform` must not come from user input, or be validated!
table = "tbl_" + platform + "_chks"
query = ('SELECT check_id, check_name, `{0}` FROM `{1}` WHERE `{0}` LIKE %s'
.format(field, table))
cur.execute(query, ("%" + search + "%",))
</code></pre>
<p>Make really sure that <code>platform</code> and <code>fields</code> do not come from user input, otherwise you'll have an sql injection vulnerability.</p>
| 0 | 2016-08-23T08:05:53Z | [
"python",
"mysql"
] |
Performing arithmetic on pandas columns based on a different column's value being referenced against a dictionary | 39,086,698 | <p>I have a df</p>
<pre><code> product currency price
a USD 2
b AUD 3
c GBP 9
....
</code></pre>
<p>and I have a dict:</p>
<pre><code>cc={"USD": 1, "AUD": .75, "GBP": 1.13}
</code></pre>
<p>I want to change the price by multiplying the price times the value corresponding to the currency in the CC dict so I tried:</p>
<pre><code>df.price.apply(lambda x: x*cc[df['currency']])
</code></pre>
<p>Which gives the error</p>
<pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre>
<p>Thanks!</p>
| 1 | 2016-08-22T18:48:16Z | 39,086,770 | <p>You can use map (make sure all values in that Series are also in the dict):</p>
<pre><code>df['currency'].map(cc) * df['price']
Out:
0 2.00
1 2.25
2 10.17
dtype: float64
</code></pre>
<p>If you want to change the price column, assign it back:</p>
<pre><code>df['price'] = df['currency'].map(cc) * df['price']
</code></pre>
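<p>If some currency codes might be missing from <code>cc</code>, <code>map</code> yields <code>NaN</code> for them. A defensive sketch (the fallback rate of 1 and the 'JPY' row are assumptions for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'product': ['a', 'b', 'c'],
                   'currency': ['USD', 'AUD', 'JPY'],  # 'JPY' is deliberately not in cc
                   'price': [2, 3, 9]})
cc = {"USD": 1, "AUD": .75, "GBP": 1.13}

# map() yields NaN for unknown currencies; fill with a fallback rate of 1
converted = df['currency'].map(cc).fillna(1) * df['price']
print(converted.tolist())  # [2.0, 2.25, 9.0]
```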
| 2 | 2016-08-22T18:52:22Z | [
"python",
"pandas"
] |
Performing arithmetic on pandas columns based on a different column's value being referenced against a dictionary | 39,086,698 | <p>I have a df</p>
<pre><code> product currency price
a USD 2
b AUD 3
c GBP 9
....
</code></pre>
<p>and I have a dict:</p>
<pre><code>cc={"USD": 1, "AUD": .75, "GBP": 1.13}
</code></pre>
<p>I want to change the price by multiplying the price times the value corresponding to the currency in the CC dict so I tried:</p>
<pre><code>df.price.apply(lambda x: x*cc[df['currency']])
</code></pre>
<p>Which gives the error</p>
<pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre>
<p>Thanks!</p>
| 1 | 2016-08-22T18:48:16Z | 39,087,629 | <pre><code>df = pd.DataFrame([['a', 'USD', 2L],
['b', 'AUD', 3L],
['c', 'GBP', 9L]],
columns=['product', 'currency', 'price'])
cc = pd.Series({"USD": 1, "AUD":.75, "GBP": 1.13})
df.price *= cc.ix[df.currency].values
df
</code></pre>
<p><a href="http://i.stack.imgur.com/mWSlH.png" rel="nofollow"><img src="http://i.stack.imgur.com/mWSlH.png" alt="enter image description here"></a></p>
| 1 | 2016-08-22T19:51:30Z | [
"python",
"pandas"
] |
Python bs4 module | 39,086,728 | <pre><code>import requests
from bs4 import BeautifulSoup
'''
It's a web crawler working in ebay, collecting every single item data
'''
def ebay_spider(max_pages):
page = 1
while page <= max_pages:
url = 'http://www.ebay.co.uk/sch/Apple-Laptops/111422/i.html?_pgn=' \
+ str(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for link in soup.findAll('a', {'class': 'vip'}):
href = 'http://www.ebay.co.uk' + link.get('href')
title = link.string
get_single_item_data(href)
page += 1
def get_single_item_data(item_url):
source_code = requests.get(item_url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for item_name in soup.findAll('h1', {'id': "itemTitle"}):
print(item_name.string)
ebay_spider(3)
</code></pre>
<blockquote>
<p>And the error says: <a href="http://imgur.com/403a6N8" rel="nofollow">http://imgur.com/403a6N8</a><br>
I tried to fix it but it doesn't seem to work, so any tips/answers on how to fix it?</p>
<p>EDIT: Sorry everyone for the faulty title and tags; everything has been fixed.</p>
</blockquote>
| -2 | 2016-08-22T18:49:47Z | 39,086,831 | <p>This is entirely unrelated to the requests module. As Jean-Francois stated, do what the warning tells you and move along.</p>
<p><code>soup = BeautifulSoup(plain_text, "html.parser")</code></p>
| 0 | 2016-08-22T18:56:35Z | [
"python",
"module",
"bs4",
"user-warning"
] |
Python bs4 module | 39,086,728 | <pre><code>import requests
from bs4 import BeautifulSoup
'''
It's a web crawler working in ebay, collecting every single item data
'''
def ebay_spider(max_pages):
page = 1
while page <= max_pages:
url = 'http://www.ebay.co.uk/sch/Apple-Laptops/111422/i.html?_pgn=' \
+ str(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for link in soup.findAll('a', {'class': 'vip'}):
href = 'http://www.ebay.co.uk' + link.get('href')
title = link.string
get_single_item_data(href)
page += 1
def get_single_item_data(item_url):
source_code = requests.get(item_url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for item_name in soup.findAll('h1', {'id': "itemTitle"}):
print(item_name.string)
ebay_spider(3)
</code></pre>
<blockquote>
<p>And the error says: <a href="http://imgur.com/403a6N8" rel="nofollow">http://imgur.com/403a6N8</a><br>
I tried to fix it but it doesn't seem to work, so any tips/answers on how to fix it?</p>
<p>EDIT: Sorry everyone for the faulty title and tags; everything has been fixed.</p>
</blockquote>
| -2 | 2016-08-22T18:49:47Z | 39,086,850 | <p>When you're creating the BeautifulSoup object, instead of this:</p>
<pre><code>soup = BeautifulSoup(plain_text)
</code></pre>
<p>do this:</p>
<pre><code>soup = BeautifulSoup(plain_text, 'html.parser')
</code></pre>
<p>Note: your problem relates to the bs4 module, not requests.</p>
| 1 | 2016-08-22T18:57:50Z | [
"python",
"module",
"bs4",
"user-warning"
] |
Get DLL to communicate with dependencies | 39,086,807 | <p>I am trying to load a DLL from Python, but get <code>WindowsError: [Error 126] The specified module could not be found.</code></p>
<pre><code>import ctypes
my_dll = "C:/smt/toolbox/dlls/NMSim_Libraries.dll"
nmsim = ctypes.cdll.LoadLibrary(my_dll)
</code></pre>
<p>When I used <a href="http://www.dependencywalker.com/" rel="nofollow">Dependency Walker</a>, it states that 3 dependencies are missing, all of which are in the path:
"C:\Users\skeyel\AppData\Local\Continuum\Anaconda2\Lib\site-packages\numpy\core"</p>
<p>I tried adding this path to the system path using:</p>
<pre><code>import sys
sys.path.append("C:\\Users\\skeyel\\AppData\\Local\\Continuum\\Anaconda2\\Lib\\site-packages\\numpy\\core\\")
</code></pre>
<p>but this did not solve the problem. How do I get the .dll to communicate with the dependencies?</p>
<p>NOTES:</p>
<ol>
<li><p>There are two Python installations on my computer: 2.7.8 that shipped with ArcGIS and 2.7.11 that shipped with Anaconda. It runs fine when run through the Spyder IDE that came with the Anaconda installation.</p></li>
<li><p>It runs fine on my laptop (on both 2.7.8 and 2.7.11). </p></li>
<li><p>I've checked and/or tried a variety of things based on the advice from:
<a href="http://stackoverflow.com/questions/1940578/windowserror-error-126-the-specified-module-could-not-be-found">WindowsError: [Error 126] The specified module could not be found</a></p>
<p>3a. The dll exists and the path to the dll is correct, as it works for one version when I copy and paste the exact same code</p>
<p>3b. the DLL and Python are both set up for 32 bits (note: the OS is 64 bit). Using <code>import platform</code> followed by <code>platform.architecture()</code> gives both versions as 32-bit.</p>
<p>3c. I have tried adjusting <code>sys.path</code> to include the same paths between the two installations</p>
<p>3d. I have tried <code>os.chdir()</code> to change to the .dll directory, and then just loading the dll by name with no path information</p>
<p>3e. I have tried copying and pasting the listed missing dependencies into the same folder as the .dll</p></li>
<li><p>I tried copying, pasting and running the (minimally modified) code from selected answer here: <a href="http://stackoverflow.com/questions/7586504/python-accessing-dll-using-ctypes">Python | accessing dll using ctypes</a></p></li>
</ol>
<p>Here is the full traceback:</p>
<pre><code>Runtime error
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\skeyel\AppData\Local\Continuum\Anaconda2\Lib\ctypes\__init__.py", line 443, in LoadLibrary
return self._dlltype(name)
File "C:\Users\skeyel\AppData\Local\Continuum\Anaconda2\Lib\ctypes\__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
WindowsError: [Error 126] The specified module could not be found
</code></pre>
<p>It seems like there is something simple that I'm missing - anyone know what it is?</p>
<p>Many thanks.</p>
| 2 | 2016-08-22T18:55:10Z | 39,087,537 | <p>The problem was solved by re-installing Anaconda <a href="https://www.continuum.io/downloads" rel="nofollow">https://www.continuum.io/downloads</a>.</p>
<p>I still have no idea what the specific problem was.</p>
| 0 | 2016-08-22T19:45:41Z | [
"python",
"dll"
] |
Get the running time of python script | 39,086,813 | <p>I wrote a set of Python scripts which run as my back end. The user has to upload a file using the web page that I have provided. On that web page there is a progress bar which shows the user how much of the processing of their video file is done, because the video file is broken into frames, objects are identified, and the results are saved in the db. Everything works well except the progress bar. I need a way to update the user repeatedly until the script completes, but I do not have a way to do this. I tried to use</p>
<pre><code>start_time = time.time()
@app.route('/upload', methods=['POST'])
def upload_file():
filename = request.get_json()
print filename
fullPath = path + "/" + filename
print fullPath
fragmentation.framerate.calframerate(fullPath)
timeRunning = ("--- %s seconds ---" % (time.time() - start_time))
return timeRunning
</code></pre>
<p>but this only gives the output after the script ends, whereas I need to show the progress bar while the script is running.</p>
<p>Is there a way to do this? Please help me.</p>
| 0 | 2016-08-22T18:55:21Z | 39,086,931 | <p>A progress bar would require async or threading. Research these and give it a go. You can't be actively performing tasks while a progress bar is being rendered unless you are ticking the progress bar up in a loop, which would be a better way to go about this.</p>
<p><a href="https://docs.python.org/2/library/threading.html" rel="nofollow">Threading</a></p>
<p><a href="https://docs.python.org/3/library/asyncio.html" rel="nofollow">Asyncio</a></p>
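<p>A minimal sketch of the threading route, assuming a shared counter that a hypothetical <code>/progress</code> endpoint would expose for the page's AJAX poll (all names here are illustrative, not from the question):</p>

```python
import threading
import time

# Shared state a hypothetical /progress route could return to the page's AJAX poll
progress = {'done': 0, 'total': 10}

def process_video():
    # stand-in for the real frame-by-frame processing
    for i in range(progress['total']):
        time.sleep(0.01)            # simulate work on one frame
        progress['done'] = i + 1

# the /upload handler would start the worker and return immediately
worker = threading.Thread(target=process_video)
worker.start()
worker.join()                       # demo only; the web app would not block here
print('%d/%d frames done' % (progress['done'], progress['total']))
```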
| 0 | 2016-08-22T19:02:47Z | [
"javascript",
"php",
"python",
"html",
"ajax"
] |
QDockWidget, place the tabs on top and insert text | 39,086,899 | <p>Are there any methods to make the tabs appear on top? I think it's more suitable. And is there an easy way to name the tabs, maybe by adding a QLabel?</p>
<p>Below is a picture of how it looks now.</p>
<p><a href="http://i.stack.imgur.com/XNU3g.png" rel="nofollow"><img src="http://i.stack.imgur.com/XNU3g.png" alt="enter image description here"></a></p>
| 0 | 2016-08-22T19:00:48Z | 39,090,128 | <p>Use <a href="http://doc.qt.io/qt-4.8/qmainwindow.html#setTabPosition" rel="nofollow">setTabPosition</a> to put the tabs at the top for the relevant dock-areas:</p>
<pre><code>mainwindow.setTabPosition(QtCore.Qt.AllDockWidgetAreas, QtGui.QTabWidget.North)
</code></pre>
<p>The tab text is taken from the window title, so it can be set like this:</p>
<pre><code>dockwidget.setWindowTitle('Name')
</code></pre>
<p>or indirectly via the <code>QDockWidget</code> constructor:</p>
<pre><code>dockwidget = QtGui.QDockWidget('Name', parent)
</code></pre>
| 1 | 2016-08-22T23:17:05Z | [
"python",
"pyqt"
] |
decorator to track number function calls - not working | 39,086,948 | <p>Why does the <code>count_calls_bad</code> version not retain the added function attribute (decorator adds <code>.calls</code> to the passed in func) after it returns? I understand the second (good) version is binding inside of the inner function versus the bad version which tries to create a closure where the func attribute is bound, but I thought that the "bad" version would maintain reference to the closed over variable, allowing me to get the same result as the "good" version.</p>
<pre><code> def count_calls_bad(func):
func.calls = 0
def inner(*args,**kwargs):
func.calls += 1 #each call to inner increments func.calls (recur_n.calls)
return func(*args,**kwargs)
return inner
def count_calls_good(func):
def inner(*args, **kwargs):
inner.calls += 1
return func(*args, **kwargs)
inner.calls = 0
return inner
@count_calls_bad
def recur_n(num):
if num == 0:
return 0
print (num)
return recur_n(num-1)
recur_n(10)
print(recur_n.calls) #recur_n.calls attribute not bound any longer
</code></pre>
<p><strong>UPDATE</strong>: fixed code, forgot to update function name after testing in editor. Now recur_n is called and not recur_10.</p>
<hr>
<p>Additionally, I was playing around and think that the issue is recur_n becomes <code>inner</code>, and then that last line <code>print(recur_n.calls)</code> is really <code>print(function count_calls_bad.<locals>.inner at 0x000000000364E2F0>)</code>, and that object has no attribute <code>calls</code>, since calls was bound on the actual undecorated <code>recur_n</code>. </p>
<p>You can actually force your way into the original undecorated function and get the correctly updated attribute with the following hackery:</p>
<p><code>print(recur_n.__closure__[0].cell_contents.calls)</code></p>
<p>My next thought was then to use functools @wraps to maintain the original undecorated function name, since that is basically what I'm doing above, reaching into the decorator and pulling out the undecorated name's <code>call</code> attribute.</p>
<pre><code>from functools import wraps
def count_calls_bad(func):
func.calls = 0
    @wraps(func)
def inner(*args,**kwargs):
func.calls += 1 #each call to inner increments func.calls (recur_n.calls)
return func(*args,**kwargs)
return inner
</code></pre>
<p>This at least gets me a result, but that result is zero. So now I've answered my own original question, but I end up with a new one. Why, given that @wraps has updated the function so that recur_n now refers to recur_n rather than inner, do I get 0 rather than 11?</p>
<p>It appears that @wraps copies the signature of the function, but does not maintain a reference to, or copy, other data such as variables or attributes?</p>
| 1 | 2016-08-22T19:03:33Z | 39,087,069 | <p>You never defined <code>recur_n</code>, at least not in the posted code. You applied the decorator to <code>recur_10</code>.</p>
| 1 | 2016-08-22T19:11:24Z | [
"python",
"debugging",
"decorator"
] |
decorator to track number function calls - not working | 39,086,948 | <p>Why does the <code>count_calls_bad</code> version not retain the added function attribute (decorator adds <code>.calls</code> to the passed in func) after it returns? I understand the second (good) version is binding inside of the inner function versus the bad version which tries to create a closure where the func attribute is bound, but I thought that the "bad" version would maintain reference to the closed over variable, allowing me to get the same result as the "good" version.</p>
<pre><code> def count_calls_bad(func):
func.calls = 0
def inner(*args,**kwargs):
func.calls += 1 #each call to inner increments func.calls (recur_n.calls)
return func(*args,**kwargs)
return inner
def count_calls_good(func):
def inner(*args, **kwargs):
inner.calls += 1
return func(*args, **kwargs)
inner.calls = 0
return inner
@count_calls_bad
def recur_n(num):
if num == 0:
return 0
print (num)
return recur_n(num-1)
recur_n(10)
print(recur_n.calls) #recur_n.calls attribute not bound any longer
</code></pre>
<p><strong>UPDATE</strong>: fixed code, forgot to update function name after testing in editor. Now recur_n is called and not recur_10.</p>
<hr>
<p>Additionally, I was playing around and think that the issue is recur_n becomes <code>inner</code>, and then that last line <code>print(recur_n.calls)</code> is really <code>print(function count_calls_bad.<locals>.inner at 0x000000000364E2F0>)</code>, and that object has no attribute <code>calls</code>, since calls was bound on the actual undecorated <code>recur_n</code>. </p>
<p>You can actually force your way into the original undecorated function and get the correctly updated attribute with the following hackery:</p>
<p><code>print(recur_n.__closure__[0].cell_contents.calls)</code></p>
<p>My next thought was then to use functools @wraps to maintain the original undecorated function name, since that is basically what I'm doing above, reaching into the decorator and pulling out the undecorated name's <code>call</code> attribute.</p>
<pre><code>from functools import wraps
def count_calls_bad(func):
func.calls = 0
    @wraps(func)
def inner(*args,**kwargs):
func.calls += 1 #each call to inner increments func.calls (recur_n.calls)
return func(*args,**kwargs)
return inner
</code></pre>
<p>This at least gets me a result, but that result is zero. So now I've answered my own original question, but I end up with a new one. Why, given that @wraps has updated the function so that recur_n now refers to recur_n rather than inner, do I get 0 rather than 11?</p>
<p>It appears that @wraps copies the signature of the function, but does not maintain a reference to, or copy, other data such as variables or attributes?</p>
| 1 | 2016-08-22T19:03:33Z | 39,088,955 | <p>As you've discovered, the reason you can't see the count is because the <code>recur_n</code> name at the top level of your module refers to the wrapper function <code>inner</code> returned from the decorator. It doesn't refer to the original <code>recur_n</code> function (though you can get access to that function via the <code>__closure__</code> attribute of the wrapper function).</p>
<p>Using <code>functools.wraps</code> doesn't change that basic issue. All it does is copy some of the attributes of the original function onto the wrapper function. So the <code>recur_n</code> function you see at the top level will have its <code>__name__</code> set to <code>"recur_n"</code> rather than <code>"inner"</code> and it's <code>__doc__</code> would match the original <code>recur_n</code>'s docstring (if it had one). The <code>wraps</code> call also copies the current values of any attributes in the function's <code>__dict__</code> to the new wrapper function.</p>
<p>Setting the function's <code>__name__</code> doesn't change what the name refers to in the module namespace. Indeed, after using <code>wraps</code>, both functions (the original <code>recur_n</code> and the wrapper function) will have the same <code>__name__</code> attribute, <code>"recur_n"</code>. The module can only have one thing referenced by the name <code>recur_n</code> and it could be either one of them or something completely different!</p>
<p>As for why you see <code>0</code> when you check <code>recur_n.calls</code> when you are using <code>wraps</code> in the decorator, that's because the zero got copied over as part of the <code>__dict__</code> of the original function. You can't see the incremented count though, since the copy only happened once (and Python integers are immutable, so they can't be updated in place).</p>
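<p>A small illustration of this one-time copy, using <code>functools.wraps</code> (which, in Python 3, also sets <code>__wrapped__</code> on the wrapper, giving direct access to the original function):</p>

```python
import functools

def count_calls(func):
    func.calls = 0
    @functools.wraps(func)      # copies __name__, __doc__ and __dict__ once, now
    def inner(*args, **kwargs):
        func.calls += 1         # rebinds func.calls; inner.calls is untouched
        return func(*args, **kwargs)
    return inner

@count_calls
def greet():
    return "hi"

greet()
greet()
print(greet.__name__)           # 'greet' - name copied by wraps
print(greet.calls)              # 0 - stale copy made at decoration time
print(greet.__wrapped__.calls)  # 2 - live attribute on the original function
```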
| 0 | 2016-08-22T21:23:14Z | [
"python",
"debugging",
"decorator"
] |
Implement hashid in django | 39,086,950 | <p>I've been trying to implement <a href="https://github.com/davidaurelio/hashids-python" rel="nofollow">hashids</a> in Django models. I want to derive the hashid from the model's <code>id</code>, e.g. when the model's <code>id=3</code> the encoding should be <code>hashid.encode(id)</code>. The thing is, I cannot get the <code>id</code> or <code>pk</code> until I save the instance. What I have in mind is to get the latest object's <code>id</code> and add <code>1</code> to it, but that's not a real solution. Can anyone help me figure it out?</p>
<p>django model is:</p>
<pre><code>from hashids import Hashids
hashids = Hashids(salt='thismysalt', min_length=4)
class Article(models.Model):
title = models.CharField(...)
text = models.TextField(...)
hashid = models.CharField(...)
    # I know this is not a good solution; it's only meant to make the idea clear.
def save(self, *args, **kwargs):
super(Article, self).save(*args, **kwargs)
self.hashid = hashids.encode(self.id)
super(Article, self).save(*args, **kwargs)
</code></pre>
| 2 | 2016-08-22T19:03:43Z | 39,087,005 | <p>I would only tell it to save if there is no ID yet, so it doesn't run the code every time. You can do this by inheriting from a TimeStampedModel, which is actually great to use in any project.</p>
<pre><code>from hashids import Hashids
hashids = Hashids(salt='thismysalt', min_length=4)
class TimeStampedModel(models.Model):
""" Provides timestamps wherever it is subclassed """
created = models.DateTimeField(editable=False)
modified = models.DateTimeField()
def save(self, *args, **kwargs): # On `save()`, update timestamps
if not self.created:
self.created = timezone.now()
self.modified = timezone.now()
return super().save(*args, **kwargs)
class Meta:
abstract = True
class Article(TimeStampedModel):
title = models.CharField(...)
text = models.TextField(...)
hashid = models.CharField(...)
    # I know this is not a good solution; it's only meant to make the idea clear.
def save(self, *args, **kwargs):
super(Article, self).save(*args, **kwargs)
if self.created == self.modified: # Only run the first time instance is created (where created & modified will be the same)
self.hashid = hashids.encode(self.id)
self.save(update_fields=['hashid'])
</code></pre>
| 1 | 2016-08-22T19:07:04Z | [
"python",
"django",
"hashids"
] |
Efficient way to find particular rows with Blaze package? | 39,086,967 | <p>I have a data table with ~74 million lines that I loaded with Blaze.</p>
<pre><code>from blaze import CSV, data
csv = CSV('train.csv')
t = data(csv)
</code></pre>
<p>It has fields these: A, B, C, D, E, F, G</p>
<p>Since this is such a large dataframe, how can I efficiently output rows that fit specific criteria? For example, I would want rows that have A==4, B==8, E==10. Is there a way to multitask the look-up? For example, by threading or parallel programming or something?</p>
<p>By parallel programming I mean for example, one thread will try to find the matching row from row 1 to row 100000, and the second thread will try to find the matching row from row 100001 to 200000, and so on...</p>
| 3 | 2016-08-22T19:04:26Z | 39,087,356 | <p>Your selection criteria is quite simple:</p>
<pre><code>t[(t.A == 4) & (t.B == 8) & (t.E == 10)]
</code></pre>
<p>Using the readily available <code>iris</code> sample dataset as an example:</p>
<pre><code>from blaze import data
from blaze.utils import example
iris = data(example('iris.csv'))
iris[(iris.sepal_length == 7) & (iris.petal_length > 2)]
sepal_length sepal_width petal_length petal_width species
50 7 3.2 4.7 1.4 Iris-versicolor
</code></pre>
<p>The docs discuss <a href="http://blaze.readthedocs.io/en/latest/ooc.html#parallel-processing" rel="nofollow">parallel processing</a> in Blaze.</p>
<blockquote>
<p>Note that one can only parallelize over datasets that can be easily split in a non-serial fashion. In particular one can not parallelize computation over a single CSV file. Collections of CSV files and binary storage systems like HDF5 and BColz all support multiprocessing.</p>
</blockquote>
<p>Showing that the timings are approximately the same on a single csv file when using multiprocessing:</p>
<pre><code>import multiprocessing
pool = multiprocessing.Pool(4)
%timeit -n 1000 compute(iris[(iris.sepal_length > 7) & (iris.petal_length > 2)],
map=pool.map)
1000 loops, best of 1: 12.1 ms per loop
%timeit -n 1000 compute(iris[(iris.sepal_length > 7) & (iris.petal_length > 2)])
1000 loops, best of 1: 11.7 ms per loop
</code></pre>
| 1 | 2016-08-22T19:31:05Z | [
"python",
"multithreading",
"pandas",
"parallel-processing",
"blaze"
] |
Alternative to this python code? | 39,087,013 | <p>I have a line of code from class that I don't fully understand and want a simpler alternative to. What it does is use weightList, which is a list of connected edges, and return the edge with the lowest corresponding value from the graph (adjacency matrix). This is for a Prim's Minimum Spanning Tree problem.</p>
<p><code>edge = sorted(weightList, key=lambda e:graph[e[0]][e[1]])[0];</code> </p>
| 0 | 2016-08-22T19:07:44Z | 39,087,155 | <p>Breaking it up a little bit could be enough. How about this? </p>
<pre><code>get_edge_weight = lambda e: graph[e[0]][e[1]]
sorted_weights = sorted(weightList, key=get_edge_weight)
edge = sorted_weights[0]
</code></pre>
| 3 | 2016-08-22T19:17:27Z | [
"python",
"sorting",
"minimum-spanning-tree",
"prims-algorithm"
] |
Alternative to this python code? | 39,087,013 | <p>I have a line of code from class that I don't fully understand and want a simpler alternative to. What it does is use weightList, which is a list of connected edges, and return the edge with the lowest corresponding value from the graph (adjacency matrix). This is for a Prim's Minimum Spanning Tree problem.</p>
<p><code>edge = sorted(weightList, key=lambda e:graph[e[0]][e[1]])[0];</code> </p>
| 0 | 2016-08-22T19:07:44Z | 39,087,219 | <p>Do exactly as you said: for all edges, find the value in the graph which is the lowest.</p>
<pre><code>i, j = current_edge = weightList[0]
current_min = graph[i][j]
for edge in weightList[1:]:
i, j = edge
if graph[i][j] < current_min:
current_min = graph[i][j]
current_edge = edge
</code></pre>
<p>You start with the first edge from your <code>weightList</code>, then you iterate on all other edges to try and find a value which is lower. When you exit the loop, <code>current_edge</code> is the edge with the lowest value.</p>
<p>That being said, it might be worth instead to try and understand your code. I assume you know what <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow"><code>sorted</code></a> does. To sort your <code>weightList</code>, <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow"><code>sorted</code></a> uses the parameter <code>key</code>, which is a function that returns a value. In your case, your function returns the value in <code>graph</code> at the position of your edge. <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow"><code>sorted</code></a> will use this value to compare the edges together.</p>
<p>Thus, this will sort all your edges from the one with the lowest value to the one with the highest value. Then, once it is sorted, you take the first element, which is the edge with the lowest value.</p>
<p>Algorithmically, using <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow"><code>sorted</code></a> for this job isn't a great idea since it has a time complexity of <code>O(n log n)</code>. In comparison, my algorithm is <code>O(n)</code> (but probably slower because I assume <code>sorted</code> is implemented in C). Instead, you can obtain the same result in <code>O(n)</code> using standard functions by using <a href="https://docs.python.org/2/library/functions.html#min" rel="nofollow"><code>min</code></a>, which certainly is the most efficient and readable option out of all three:</p>
<pre><code>edge = min(weightList, key=lambda (i,j): graph[i][j])
</code></pre>
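<p>Note that tuple unpacking in lambda parameters (<code>lambda (i,j): ...</code>) only works in Python 2; on Python 3 the equivalent is plain indexing. A minimal, self-contained sketch (the graph and edge list here are made up for illustration):</p>

```python
# Toy adjacency matrix and edge list, invented for this example
graph = [[0, 2, 9],
         [2, 0, 4],
         [9, 4, 0]]
weightList = [(0, 1), (0, 2), (1, 2)]

# Works on both Python 2 and 3: index into the edge tuple instead of unpacking it
edge = min(weightList, key=lambda e: graph[e[0]][e[1]])
print(edge)  # (0, 1), the edge with weight 2
```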
| 0 | 2016-08-22T19:22:25Z | [
"python",
"sorting",
"minimum-spanning-tree",
"prims-algorithm"
] |
Alternative to this python code? | 39,087,013 | <p>I have a line of code from class that I don't understand fully and want some easier alternative to. What this does is , uses weightList, which is a list of edges that's connected to each other, and returns the edgelists with lowest corresponding value from the graph (adjacency matrix). This is for a Prim's Minimum Spanning Tree problem.</p>
<p><code>edge = sorted(weightList, key=lambda e:graph[e[0]][e[1]])[0];</code> </p>
| 0 | 2016-08-22T19:07:44Z | 39,087,261 | <p>If you want the code to be a little less "compact", this should do the trick:</p>
<pre><code>shortest = weightList[0]
for edge in weightList:
if graph[edge[0]][edge[1]] < graph[shortest[0]][shortest[1]]:
shortest = edge
</code></pre>
<p>Set the shortest edge to be equal to the first edge in the weightList, then go through the list and see if any edges are shorter.</p>
| 0 | 2016-08-22T19:25:08Z | [
"python",
"sorting",
"minimum-spanning-tree",
"prims-algorithm"
] |
Alternative to this python code? | 39,087,013 | <p>I have a line of code from class that I don't understand fully and want some easier alternative to. What this does is , uses weightList, which is a list of edges that's connected to each other, and returns the edgelists with lowest corresponding value from the graph (adjacency matrix). This is for a Prim's Minimum Spanning Tree problem.</p>
<p><code>edge = sorted(weightList, key=lambda e:graph[e[0]][e[1]])[0];</code> </p>
| 0 | 2016-08-22T19:07:44Z | 39,087,363 | <p>When trying to reduce complexity, I always look for ways to break things out into self explanatory, modular functions:</p>
<pre><code>def distance(adjacency_matrix, start_node, end_node):
return adjacency_matrix[start_node][end_node]
sorted_edges = sorted(weightList, key=lambda e: distance(graph, e[0], e[1]))
edge = sorted_edges[0];
</code></pre>
| 0 | 2016-08-22T19:32:09Z | [
"python",
"sorting",
"minimum-spanning-tree",
"prims-algorithm"
] |
Trying to calculate multiple line intersections using lists of tuples python | 39,087,113 | <p><strong>EDIT: Git Repo for sample files</strong> <a href="https://github.com/tpubben/lineIntersect" rel="nofollow">https://github.com/tpubben/lineIntersect</a></p>
<p>I am trying to calculate the line intersection points in x,y coordinates based on a set of intersecting lines crossing one continuous line made up of multiple segments.</p>
<p>The continuous line is represented by a list of tuples as follows where each segment starts with the x/y coordinate of the endpoint of the previous segment:</p>
<pre><code>lineA = [((x1, y1),(x2,y2)), ((x2,y2),(x3,y3))....]
</code></pre>
<p>The crossing lines are represented in the same manner, however each is a discrete line (no shared points):</p>
<pre><code>lineB = [((x1, y1),(x2,y2))...]
</code></pre>
<p>I am trying to iterate through the continuous line (lineA) and check to see which crossing lines intersect with which segments of lineA. </p>
<p>An example image of what the line intersections would look like is here:
<a href="http://i.stack.imgur.com/Buae6.png" rel="nofollow"><img src="http://i.stack.imgur.com/Buae6.png" alt="Crossing lines"></a></p>
<p>so far I have tried the following:</p>
<pre><code>from __future__ import print_function
def newSurveys(nintyin, injectorin):
# pull data out of pre-prepared CSV files
fh = open(nintyin)
fho = open(injectorin)
rlines = fho.readlines()
rlines90 = fh.readlines()
segA = []
segB = []
segA90 = []
segB90 = []
for item in rlines:
if not item.startswith('M'):
item = item.split(',')
segA.append((float(item[4]),float(item[5])))#easting northing
segB.append((float(item[4]),float(item[5])))#easting northing
segB.pop(0)
z = len(segA)-1
segA.pop(z)
for item in rlines90:
if not item.startswith('N'):
item = item.split(',')
segA90.append((float(item[1]),float(item[0])))#easting northing
segB90.append((float(item[3]),float(item[2])))#easting northing
activeWellSegs = []
injector90Segs = []
for a, b in zip(segA, segB):
activeWellSegs.append((a,b))
for c, d in zip(segA90, segB90):
injector90Segs.append((c,d))
if len(activeWellSegs) >= len(injector90Segs):
lineA = activeWellSegs
lineB = injector90Segs
else:
lineA = injector90Segs
lineB = activeWellSegs
for l1 in lineA:
for l2 in lineB:
##### Use differential equation to calculate line intersections,
##### taken from another user's post
def line_intersection(line1, line2):
xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])
def det(a, b):
return a[0] * b[1] - a[1] * b[0]
div = det(xdiff, ydiff)
if div == 0:
raise Exception('lines do not intersect')
d = (det(*line1), det(*line2))
x = det(d, xdiff) / div
y = det(d, ydiff) / div
return x, y
print (line_intersection(l1, l2), file=lprint)
newSurveys('producer90.csv', 'injector.csv')
</code></pre>
| 0 | 2016-08-22T19:14:26Z | 39,088,007 | <p>Your code looks like it's dealing with a specific set of data (i.e. I have no idea what the "startsWith('N')" and the like are referring to) so I can only answer this question in the abstract.</p>
<p>Try splitting the code into multiple functions that do one specific task, rather than one big function that tries to do everything. You will find it much easier to work with and troubleshoot.</p>
<pre><code>def getScalar(lineSegment):
return (lineSegment[1][0] - lineSegment[0][0],
lineSegment[1][1] - lineSegment[0][1])
def doTheyIntersect(lineA, lineB):
scalarA = getScalar(lineA)
scalarB = getScalar(lineB)
s = (-1.0 * scalarA[1] * (lineA[0][0] - lineB[0][0]) + scalarA[0] * (lineA[0][1] - lineB[0][1])) / (-1.0 * scalarB[0] * scalarA[1] + scalarA[0] * scalarB[1])
t = (scalarB[0] * (lineA[0][1] - lineB[0][1]) - scalarB[1] * (lineA[0][0] - lineB[0][0])) / (-1.0 * scalarB[0] * scalarA[1] + scalarA[0] * scalarB[1])
if 0.0 <= s <= 1.0 and 0.0 <= t <= 1.0:
return True
else:
return False
lineA = [(x, y), (x1, y1), ...]
lineB = [(x, y), (x1, y1), ...]
for index, segment in enumerate(lineA):
if index + 1 < len(lineA):
for index2 in range(0, len(lineB), 2):
if doTheyIntersect((lineA[index], lineA[index + 1]), (lineB[index2], lineB[index2+1])):
                print("lineB ({0}, {1}) intersects lineA at ({2}, {3})".format(str(lineB[index2]), str(lineB[index2+1]), str(lineA[index]), str(lineA[index + 1])))
</code></pre>
<p>This is the general idea. I got the geometry formulas from:</p>
<p><a href="http://stackoverflow.com/questions/563198/how-do-you-detect-where-two-line-segments-intersect">How do you detect where two line segments intersect?</a></p>
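<p>For reference, a cross-product based sketch that both tests two segments for intersection and returns the intersection point when there is one (<code>segment_intersection</code> is a made-up helper name; it returns <code>None</code> for parallel or non-crossing segments):</p>

```python
def segment_intersection(p1, p2, p3, p4):
    # Segment 1 is p1 + t*r, segment 2 is p3 + u*s, with t, u in [0, 1]
    r = (p2[0] - p1[0], p2[1] - p1[1])
    s = (p4[0] - p3[0], p4[1] - p3[1])
    denom = r[0] * s[1] - r[1] * s[0]   # 2D cross product of the two directions
    if denom == 0:
        return None                     # parallel (or collinear) segments
    qp = (p3[0] - p1[0], p3[1] - p1[1])
    t = (qp[0] * s[1] - qp[1] * s[0]) / denom
    u = (qp[0] * r[1] - qp[1] * r[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (p1[0] + t * r[0], p1[1] + t * r[1])
    return None                         # the lines cross outside the segments

print(segment_intersection((0.0, 0.0), (2.0, 2.0), (0.0, 2.0), (2.0, 0.0)))  # (1.0, 1.0)
```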
| -1 | 2016-08-22T20:16:41Z | [
"python",
"line-intersection"
] |
build a DataFrame with columns from tuple of arrays | 39,087,136 | <p>I am struggling with the basic task of constructing a DataFrame of counts by value from a tuple produced by <code>np.unique(arr, return_counts=True)</code>, such as:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(123)
birds=np.random.choice(['African Swallow','Dead Parrot','Exploding Penguin'], size=int(5e4))
someTuple=np.unique(birds, return_counts = True)
someTuple
#(array(['African Swallow', 'Dead Parrot', 'Exploding Penguin'],
# dtype='<U17'), array([16510, 16570, 16920], dtype=int64))
</code></pre>
<p>First I tried</p>
<pre><code>pd.DataFrame(list(someTuple))
# Returns this:
# 0 1 2
# 0 African Swallow Dead Parrot Exploding Penguin
# 1 16510 16570 16920
</code></pre>
<p>I also tried <code>pd.DataFrame.from_records(someTuple)</code>, which returns the same thing.</p>
<p>But what I'm looking for is this:</p>
<pre><code># birdType birdCount
# 0 African Swallow 16510
# 1 Dead Parrot 16570
# 2 Exploding Penguin 16920
</code></pre>
<p>What's the right syntax?</p>
| 7 | 2016-08-22T19:16:09Z | 39,087,184 | <p>You could use Counter.</p>
<pre><code>from collections import Counter
c = Counter(birds)
>>> pd.Series(c)
African Swallow 16510
Dead Parrot 16570
Exploding Penguin 16920
dtype: int64
</code></pre>
<p>You could also use <code>value_counts</code> on the series.</p>
<pre><code>>>> pd.Series(birds).value_counts()
Exploding Penguin 16920
Dead Parrot 16570
African Swallow 16510
dtype: int64
</code></pre>
| 2 | 2016-08-22T19:19:28Z | [
"python",
"pandas",
"numpy"
] |
build a DataFrame with columns from tuple of arrays | 39,087,136 | <p>I am struggling with the basic task of constructing a DataFrame of counts by value from a tuple produced by <code>np.unique(arr, return_counts=True)</code>, such as:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(123)
birds=np.random.choice(['African Swallow','Dead Parrot','Exploding Penguin'], size=int(5e4))
someTuple=np.unique(birds, return_counts = True)
someTuple
#(array(['African Swallow', 'Dead Parrot', 'Exploding Penguin'],
# dtype='<U17'), array([16510, 16570, 16920], dtype=int64))
</code></pre>
<p>First I tried</p>
<pre><code>pd.DataFrame(list(someTuple))
# Returns this:
# 0 1 2
# 0 African Swallow Dead Parrot Exploding Penguin
# 1 16510 16570 16920
</code></pre>
<p>I also tried <code>pd.DataFrame.from_records(someTuple)</code>, which returns the same thing.</p>
<p>But what I'm looking for is this:</p>
<pre><code># birdType birdCount
# 0 African Swallow 16510
# 1 Dead Parrot 16570
# 2 Exploding Penguin 16920
</code></pre>
<p>What's the right syntax?</p>
| 7 | 2016-08-22T19:16:09Z | 39,087,200 | <p>Using your tuple, you can do the following:</p>
<pre><code>In [4]: pd.DataFrame(list(zip(*someTuple)), columns = ['Bird', 'BirdCount'])
Out[4]:
Bird BirdCount
0 African Swallow 16510
1 Dead Parrot 16570
2 Exploding Penguin 16920
</code></pre>
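<p><code>zip(*someTuple)</code> works because it transposes the <code>(names, counts)</code> pair into one <code>(name, count)</code> row per bird. A small self-contained sketch of the same idea:</p>

```python
import pandas as pd

names = ('African Swallow', 'Dead Parrot', 'Exploding Penguin')
counts = (16510, 16570, 16920)

# zip(*...) turns the two parallel sequences into (name, count) rows
rows = list(zip(names, counts))
df = pd.DataFrame(rows, columns=['Bird', 'BirdCount'])
print(df)
```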
| 3 | 2016-08-22T19:20:42Z | [
"python",
"pandas",
"numpy"
] |
build a DataFrame with columns from tuple of arrays | 39,087,136 | <p>I am struggling with the basic task of constructing a DataFrame of counts by value from a tuple produced by <code>np.unique(arr, return_counts=True)</code>, such as:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(123)
birds=np.random.choice(['African Swallow','Dead Parrot','Exploding Penguin'], size=int(5e4))
someTuple=np.unique(birds, return_counts = True)
someTuple
#(array(['African Swallow', 'Dead Parrot', 'Exploding Penguin'],
# dtype='<U17'), array([16510, 16570, 16920], dtype=int64))
</code></pre>
<p>First I tried</p>
<pre><code>pd.DataFrame(list(someTuple))
# Returns this:
# 0 1 2
# 0 African Swallow Dead Parrot Exploding Penguin
# 1 16510 16570 16920
</code></pre>
<p>I also tried <code>pd.DataFrame.from_records(someTuple)</code>, which returns the same thing.</p>
<p>But what I'm looking for is this:</p>
<pre><code># birdType birdCount
# 0 African Swallow 16510
# 1 Dead Parrot 16570
# 2 Exploding Penguin 16920
</code></pre>
<p>What's the right syntax?</p>
| 7 | 2016-08-22T19:16:09Z | 39,087,209 | <p>Here's one NumPy based solution with <a href="http://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.column_stack.html" rel="nofollow"><code>np.column_stack</code></a> -</p>
<pre><code>pd.DataFrame(np.column_stack(someTuple),columns=['birdType','birdCount'])
</code></pre>
<p>Or with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow"><code>np.vstack</code></a> -</p>
<pre><code>pd.DataFrame(np.vstack(someTuple).T,columns=['birdType','birdCount'])
</code></pre>
<p>Benchmarking <code>np.transpose</code>, <code>np.column_stack</code> and <code>np.vstack</code> for staking <code>1D</code> arrays into columns to form a <code>2D</code> array -</p>
<pre><code>In [54]: tup1 = (np.random.rand(1000),np.random.rand(1000))
In [55]: %timeit np.transpose(tup1)
100000 loops, best of 3: 15.9 µs per loop
In [56]: %timeit np.column_stack(tup1)
100000 loops, best of 3: 11 µs per loop
In [57]: %timeit np.vstack(tup1).T
100000 loops, best of 3: 14.1 µs per loop
</code></pre>
| 3 | 2016-08-22T19:21:35Z | [
"python",
"pandas",
"numpy"
] |
build a DataFrame with columns from tuple of arrays | 39,087,136 | <p>I am struggling with the basic task of constructing a DataFrame of counts by value from a tuple produced by <code>np.unique(arr, return_counts=True)</code>, such as:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(123)
birds=np.random.choice(['African Swallow','Dead Parrot','Exploding Penguin'], size=int(5e4))
someTuple=np.unique(birds, return_counts = True)
someTuple
#(array(['African Swallow', 'Dead Parrot', 'Exploding Penguin'],
# dtype='<U17'), array([16510, 16570, 16920], dtype=int64))
</code></pre>
<p>First I tried</p>
<pre><code>pd.DataFrame(list(someTuple))
# Returns this:
# 0 1 2
# 0 African Swallow Dead Parrot Exploding Penguin
# 1 16510 16570 16920
</code></pre>
<p>I also tried <code>pd.DataFrame.from_records(someTuple)</code>, which returns the same thing.</p>
<p>But what I'm looking for is this:</p>
<pre><code># birdType birdCount
# 0 African Swallow 16510
# 1 Dead Parrot 16570
# 2 Exploding Penguin 16920
</code></pre>
<p>What's the right syntax?</p>
| 7 | 2016-08-22T19:16:09Z | 39,087,321 | <p>create a dictionary</p>
<pre><code>pd.DataFrame(dict(birdType=someTuple[0], birdCount=someTuple[1]))
</code></pre>
<p><a href="http://i.stack.imgur.com/VQ1E1.png"><img src="http://i.stack.imgur.com/VQ1E1.png" alt="enter image description here"></a></p>
| 5 | 2016-08-22T19:28:47Z | [
"python",
"pandas",
"numpy"
] |
Tkinter, Label/Text in canvas.rectangle [python] | 39,087,139 | <p>I need to place a text/label centred in a canvas rectangle in tkinter.</p>
<p>First I have a canvas covering the whole screen (800, 600),
and then I have a couple of rectangles which I made using:</p>
<pre><code>create_rectangle(...)
</code></pre>
<p>The first X of the first rectangle is 275 and the second X is 525.</p>
<p>The first Y of the first rectangle is 265 and the second Y is 315.</p>
<pre><code>menuBtn1 = canvas.create_rectangle(275, 165, 525, 215, fill="#C2B6BF")
</code></pre>
<p>Now how I can place a text/label in the center of this rectangle?</p>
| 0 | 2016-08-22T19:16:17Z | 39,087,525 | <p>You should use <a href="http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.create_text-method" rel="nofollow">create_text</a>. As it says in the link in the description of the position parameter:</p>
<blockquote>
<p>By default, the text
is centered on this position. You can override this with the anchor
option. For example, if the coordinate is the upper left corner, set
the anchor to NW.</p>
</blockquote>
<p>So this should be what you want:</p>
<pre><code>mylabel = canvas.create_text((400, 190), text="Label text")
</code></pre>
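<p>Rather than hard-coding <code>(400, 190)</code>, you can compute the centre from the same coordinates you pass to <code>create_rectangle</code>. A tiny sketch (<code>rect_center</code> is a made-up helper name):</p>

```python
def rect_center(x1, y1, x2, y2):
    # midpoint of the two opposite corners of the rectangle
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

print(rect_center(275, 165, 525, 215))  # (400.0, 190.0)

# Usage with the canvas would then look like (not run here):
# canvas.create_rectangle(275, 165, 525, 215, fill="#C2B6BF")
# canvas.create_text(rect_center(275, 165, 525, 215), text="Label text")
```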
| 3 | 2016-08-22T19:44:58Z | [
"python",
"tkinter",
"tkinter-canvas"
] |
python polynomial curve fit - coefficients not right | 39,087,204 | <p>I have the following x, y data (in green). I would like to obtain a polynomial function that fits my curve. The curve that is fitted within python looks ok (in blue).
When I use the coefficients of the polynomial and I build the function by myself the results are not on the blue curve. For small values of X, this may still fit, but for large values is totally wrong. In the image the y for x=15 and 2.5 are shown (large points).</p>
<p><a href="http://i.stack.imgur.com/gbtTu.png" rel="nofollow"><img src="http://i.stack.imgur.com/gbtTu.png" alt="enter image description here"></a></p>
<p>The data:</p>
<pre><code>x, y
0.5883596178 18562.5
0.6656014904 20850
0.7407008741 22700
0.8310800498 24525
0.9479506185 26370
1.0768193651 27922
1.1983161945 29070
1.3837939534 30410
1.6650549531 31800
1.946640319 32740
2.3811442965 33655
2.9126326549 34290
3.6970654824 34800
4.2868951065 34987.5
4.8297935972 35102
5.7876198835 35175
7.3463468386 35050
8.9861037519 34725
10.5490727095 34285
13.2260016159 33450
16.5822270413 32795
20.5352502646 32472
25.7462680049 32475
</code></pre>
<p>The code:</p>
<pre><code>data = plb.loadtxt('fig3_1_tiltingRL.dat')
x = data[:,0]
y= data[:,1]
#plt.xscale('log')#plt.set_xscale('log')
coefs = poly.polyfit(x, y, 10)
ffit = poly.polyval(x, coefs)
plt.plot(x, ffit)
plt.plot(x, y, 'o')
print(coefs)
xPoints =15.
yPt = (-6.98662492e+03 * xPoints**0 + 6.57987934e+04 * xPoints**1 -\
4.65689536e+04 * xPoints**2 + 1.85406629e+04 * xPoints**3 -\
4.49987278e+03 * xPoints**4 + 6.92952944e+02 * xPoints**5 -\
6.87501257e+01 * xPoints**6 + 4.35851202e+00 * xPoints**7 -\
1.69771617e-01 * xPoints**8 + 3.68535224e-03 * xPoints**9 -\
3.39940049e-05 * xPoints**10)
print(yPt)
plt.plot(xPoints, yPt , 'or',label="test" ,markersize=18, color='black')
plt.show()
</code></pre>
| 0 | 2016-08-22T19:21:09Z | 39,088,235 | <p>In my opinion, the way you are using the <code>poyval</code> doesn't look right to me. Try to generate you X axis with <code>numpy.linspace</code> and then apply the <code>polyval</code> on it.
Something like the code below.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = np.loadtxt('fig3_1_tiltingRL.dat')
x = data[:,0]
y= data[:,1]
#plt.xscale('log')#plt.set_xscale('log')
coefs = np.polyfit(x, y, 10)
ffit = np.polyval(coefs, x)
new_x = np.linspace(0,26)
new_ffit = np.polyval(coefs, new_x)
plt.plot(x, y, 'o', label="Raw")
plt.plot(x, ffit,'x',label="Fit to Raw")
plt.plot(new_x, new_ffit,label="Fit to LinSpace")
# This is ugly. I'd use list comprehension here!
arr = np.linspace(0,26,20)
new_y = []
for xi in arr:
total = 0
for i,v in enumerate(coefs[::-1]):
total += v*xi**i
new_y.append(total)
plt.plot(arr, new_y, '*', label="Polynomial")
plt.legend(loc=2)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/69RTv.png" rel="nofollow"><img src="http://i.stack.imgur.com/69RTv.png" alt="enter image description here"></a></p>
<p>As you can see, there's a hump that does not appear in your plot...</p>
| 0 | 2016-08-22T20:31:44Z | [
"python",
"curve-fitting"
] |
python polynomial curve fit - coefficients not right | 39,087,204 | <p>I have the following x, y data (in green). I would like to obtain a polynomial function that fits my curve. The curve that is fitted within python looks ok (in blue).
When I use the coefficients of the polynomial and I build the function by myself the results are not on the blue curve. For small values of X, this may still fit, but for large values is totally wrong. In the image the y for x=15 and 2.5 are shown (large points).</p>
<p><a href="http://i.stack.imgur.com/gbtTu.png" rel="nofollow"><img src="http://i.stack.imgur.com/gbtTu.png" alt="enter image description here"></a></p>
<p>The data:</p>
<pre><code>x, y
0.5883596178 18562.5
0.6656014904 20850
0.7407008741 22700
0.8310800498 24525
0.9479506185 26370
1.0768193651 27922
1.1983161945 29070
1.3837939534 30410
1.6650549531 31800
1.946640319 32740
2.3811442965 33655
2.9126326549 34290
3.6970654824 34800
4.2868951065 34987.5
4.8297935972 35102
5.7876198835 35175
7.3463468386 35050
8.9861037519 34725
10.5490727095 34285
13.2260016159 33450
16.5822270413 32795
20.5352502646 32472
25.7462680049 32475
</code></pre>
<p>The code:</p>
<pre><code>data = plb.loadtxt('fig3_1_tiltingRL.dat')
x = data[:,0]
y= data[:,1]
#plt.xscale('log')#plt.set_xscale('log')
coefs = poly.polyfit(x, y, 10)
ffit = poly.polyval(x, coefs)
plt.plot(x, ffit)
plt.plot(x, y, 'o')
print(coefs)
xPoints =15.
yPt = (-6.98662492e+03 * xPoints**0 + 6.57987934e+04 * xPoints**1 -\
4.65689536e+04 * xPoints**2 + 1.85406629e+04 * xPoints**3 -\
4.49987278e+03 * xPoints**4 + 6.92952944e+02 * xPoints**5 -\
6.87501257e+01 * xPoints**6 + 4.35851202e+00 * xPoints**7 -\
1.69771617e-01 * xPoints**8 + 3.68535224e-03 * xPoints**9 -\
3.39940049e-05 * xPoints**10)
print(yPt)
plt.plot(xPoints, yPt , 'or',label="test" ,markersize=18, color='black')
plt.show()
</code></pre>
| 0 | 2016-08-22T19:21:09Z | 39,088,763 | <p>Your algorithm seems to be working fine. You should just instead of:</p>
<pre><code>coefs = poly.polyfit(x, y, 10)
ffit = poly.polyval(x, coefs)
</code></pre>
<p>This:</p>
<pre><code>coefs = poly.polyfit(x, y, 10) # fit data to the polynomial
new_x = np.linspace(0, 30, 50) # new x values to evaluate
ffit = poly.polyval(new_x, coefs) # fitted polynomial evaluated with new data
</code></pre>
<p>Thus, the function <code>poly.polyval</code> will evaluate all the points of the <code>new_x</code> instead of the <code>x</code> coordinates that you already know.</p>
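<p>A quick way to convince yourself of the calling convention: <code>numpy.polynomial.polynomial.polyval(x, coefs)</code> takes the points first and the coefficients lowest-degree-first, whereas the older <code>numpy.polyval(coefs, x)</code> takes highest-degree-first coefficients — mixing the two up can give exactly the kind of wild values seen in the question. A minimal sanity check with a known polynomial:</p>

```python
import numpy as np
import numpy.polynomial.polynomial as poly

x = np.linspace(0.0, 10.0, 20)
y = 3.0 + 2.0 * x + 0.5 * x ** 2       # a known quadratic

coefs = poly.polyfit(x, y, 2)          # coefficients, lowest degree first
fit = poly.polyval([15.0], coefs)      # evaluate at a point outside the data
print(fit)                             # close to 3 + 2*15 + 0.5*15**2 = 145.5
```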
| 0 | 2016-08-22T21:07:34Z | [
"python",
"curve-fitting"
] |
python polynomial curve fit - coefficients not right | 39,087,204 | <p>I have the following x, y data (in green). I would like to obtain a polynomial function that fits my curve. The curve that is fitted within python looks ok (in blue).
When I use the coefficients of the polynomial and I build the function by myself the results are not on the blue curve. For small values of X, this may still fit, but for large values is totally wrong. In the image the y for x=15 and 2.5 are shown (large points).</p>
<p><a href="http://i.stack.imgur.com/gbtTu.png" rel="nofollow"><img src="http://i.stack.imgur.com/gbtTu.png" alt="enter image description here"></a></p>
<p>The data:</p>
<pre><code>x, y
0.5883596178 18562.5
0.6656014904 20850
0.7407008741 22700
0.8310800498 24525
0.9479506185 26370
1.0768193651 27922
1.1983161945 29070
1.3837939534 30410
1.6650549531 31800
1.946640319 32740
2.3811442965 33655
2.9126326549 34290
3.6970654824 34800
4.2868951065 34987.5
4.8297935972 35102
5.7876198835 35175
7.3463468386 35050
8.9861037519 34725
10.5490727095 34285
13.2260016159 33450
16.5822270413 32795
20.5352502646 32472
25.7462680049 32475
</code></pre>
<p>The code:</p>
<pre><code>data = plb.loadtxt('fig3_1_tiltingRL.dat')
x = data[:,0]
y= data[:,1]
#plt.xscale('log')#plt.set_xscale('log')
coefs = poly.polyfit(x, y, 10)
ffit = poly.polyval(x, coefs)
plt.plot(x, ffit)
plt.plot(x, y, 'o')
print(coefs)
xPoints =15.
yPt = (-6.98662492e+03 * xPoints**0 + 6.57987934e+04 * xPoints**1 -\
4.65689536e+04 * xPoints**2 + 1.85406629e+04 * xPoints**3 -\
4.49987278e+03 * xPoints**4 + 6.92952944e+02 * xPoints**5 -\
6.87501257e+01 * xPoints**6 + 4.35851202e+00 * xPoints**7 -\
1.69771617e-01 * xPoints**8 + 3.68535224e-03 * xPoints**9 -\
3.39940049e-05 * xPoints**10)
print(yPt)
plt.plot(xPoints, yPt , 'or',label="test" ,markersize=18, color='black')
plt.show()
</code></pre>
| 0 | 2016-08-22T19:21:09Z | 39,213,466 | <p>Thank you very much for answering my question.</p>
<p>Both the solution provided by silgon and RicLeal work.</p>
<p>At the end, since I had several curves I have applied the solution given by RicLeal.</p>
<p>My data were log on the x-axis. I just modified the code given by RicLeal, and I am happy with the outcome. </p>
<p><a href="http://i.stack.imgur.com/a49Gi.png" rel="nofollow">enter image description here</a></p>
<pre><code>x = data[:,0]
y= data[:,1]
plt.xscale('log')#plt.set_xscale('log')
logx=np.log10(x)
coefs = np.polyfit(logx, y, 10)
ffit = np.polyval(coefs, logx)
print (coefs)
logxmin=math.log10(0.5883596178)
logxmax=math.log10(26.)
new_x = np.logspace(logxmin, logxmax,50)
lognew_x=np.log10(new_x)
new_ffit = np.polyval(coefs, lognew_x)
plt.semilogx(x, y, 'o', label="Raw")
plt.semilogx(x, ffit,'x',label="Fit to Raw")
plt.semilogx(new_x, new_ffit,label="Fit to LogSpace")
print(lognew_x, new_ffit)
# This is ugly. I'd use list comprehension here!
arr = np.logspace(logxmin, logxmax,50)
arrlog= np.log10(arr)
new_y = []
for xi in arrlog:
total = 0
for i,v in enumerate(coefs[::-1]):
#print (v)
total += v*xi**i
new_y.append(total)
plt.semilogx(arr, new_y, '*', label="Polynomial")
coeffs= [6.85869364, -92.86678553, 343.39375022, -555.52532934, 434.18179364,
-152.82724751, 9.71300951, 21.68653301, -35.62838377, 28.3985976,
27.04762122]
new_testy = []
for xi in arrlog:
total = 0
for i,v in enumerate(coeffs[::-1]):
#print (v)
total += v*xi**i
new_testy.append(total)
plt.semilogx(arr, new_testy, 'o', label="Polynomial")
plt.legend(loc=2)
plt.show()
</code></pre>
| 0 | 2016-08-29T19:08:59Z | [
"python",
"curve-fitting"
] |
How to convert a string into an integer in python 3 | 39,087,245 | <p>How can I convert this list into a list that does not have '-' in it and they are all integers?</p>
<pre><code>List1 = ['978-0262133838','978-0262201-629','978-0321758927']
</code></pre>
<p>So the list will be something like</p>
<pre><code>List1 = [9780262133838, 9780262201629, 9780321758927]
</code></pre>
| -1 | 2016-08-22T19:23:42Z | 39,089,076 | <p>Use a simple list comprehension with <code>.replace('-', '')</code> wrapped in <code>int()</code> so the results are integers:</p>
<pre><code>List1 = ['978-0262133838','978-0262201-629','978-0321758927']
print([int(x.replace('-','')) for x in List1])
# => [9780262133838, 9780262201629, 9780321758927]
</code></pre>
<p>See the <a href="http://ideone.com/rMk8GF" rel="nofollow">Python 3 demo</a></p>
| 0 | 2016-08-22T21:33:37Z | [
"python",
"string",
"python-3.5"
] |
How to convert a string into an integer in python 3 | 39,087,245 | <p>How can I convert this list into a list that does not have '-' in it and they are all integers?</p>
<pre><code>List1 = ['978-0262133838','978-0262201-629','978-0321758927']
</code></pre>
<p>So the list will be something like</p>
<pre><code>List1 = [9780262133838, 9780262201629, 9780321758927]
</code></pre>
| -1 | 2016-08-22T19:23:42Z | 39,089,276 | <p>Most Pythonic way to achieve this is by using <code>map</code> with <code>lambda</code> function (to replace '-' with '' and then converting it into <code>int</code>):</p>
<pre><code>>>> my_list = ['978-0262133838','978-0262201-629','978-0321758927']
>>> map(lambda x: int(x.replace('-', '')), my_list)
[9780262133838, 9780262201629, 9780321758927]
</code></pre>
| 0 | 2016-08-22T21:51:03Z | [
"python",
"string",
"python-3.5"
] |
How to convert a string into an integer in python 3 | 39,087,245 | <p>How can I convert this list into a list that does not have '-' in it and they are all integers?</p>
<pre><code>List1 = ['978-0262133838','978-0262201-629','978-0321758927']
</code></pre>
<p>So the list will be something like</p>
<pre><code>List1 = [9780262133838, 9780262201629, 9780321758927]
</code></pre>
| -1 | 2016-08-22T19:23:42Z | 39,176,291 | <p>maybe this can help. if your list is:</p>
<pre><code>List1 = ['978-0262133838','978-0262201-629','978-0321758927']
</code></pre>
<p>You can use a for loop like this one, and create a new list to keep the new numbers:</p>
<pre><code>List2=[] #it is very important to have this list outside the for loop
for number in List1: #this does exactly what it says: you get the first item in the list, then the second and so on...
    number=number.replace("-","") # I use "" for strings, that's just how I learned.
    List2.append(int(number))
</code></pre>
<p>And if the variable number just HAS to be List1, then just type</p>
<pre><code>List1=List2
</code></pre>
<p>Hopefully it helps, good luck :D</p>
| 0 | 2016-08-27T00:24:18Z | [
"python",
"string",
"python-3.5"
] |
Python loop benchmark with timeit | 39,087,249 | <p>I would like to benchmark a specific code segment inside a for loop in pytohn. I am using timeit as follows:</p>
<pre><code>def createTokens():
keypath=('./pickles/key.pickle')
path="./data/"
directory = os.listdir(path)
tok={}
print('create tokens..')
t=[2**4,2**5,2**6,2**7,2**8,2**9,2**10,2**12,2**14,2**16]
files=['pl_10000004','pl_10000002','pl_100000026']
for filename in files:
for i in t:
code='etok=utils.token(filename,keypath,str(i))'
t = timeit.Timer(stmt=code,setup='from __main__ import utils')
print(filename+'_'+str(i)+'.pickle')
print ('%f'%float(t.timeit(10/10)))
</code></pre>
<p>However this raises:</p>
<pre><code>NameError: global name 'filename' is not defined
</code></pre>
<p>when I include filename in setup variable Python says:</p>
<pre><code>ImportError: cannot import name filename
</code></pre>
<p>How this is solved?</p>
| 1 | 2016-08-22T19:23:55Z | 39,087,642 | <p>Try this:</p>
<pre><code>code='etok=utils.token("%s","%s","%s")' % (filename, keypath, i)
</code></pre>
<p>This will allow you to create a code string that has the values you want. Also, by using the <code>%s</code> conversion, <code>i</code> is coerced into a <code>str</code> type for you.</p>
<p><strong>Edit:</strong> Added double quotes around values.</p>
| 1 | 2016-08-22T19:52:09Z | [
"python",
"timeit"
] |
Python loop benchmark with timeit | 39,087,249 | <p>I would like to benchmark a specific code segment inside a for loop in pytohn. I am using timeit as follows:</p>
<pre><code>def createTokens():
keypath=('./pickles/key.pickle')
path="./data/"
directory = os.listdir(path)
tok={}
print('create tokens..')
t=[2**4,2**5,2**6,2**7,2**8,2**9,2**10,2**12,2**14,2**16]
files=['pl_10000004','pl_10000002','pl_100000026']
for filename in files:
for i in t:
code='etok=utils.token(filename,keypath,str(i))'
t = timeit.Timer(stmt=code,setup='from __main__ import utils')
print(filename+'_'+str(i)+'.pickle')
print ('%f'%float(t.timeit(10/10)))
</code></pre>
<p>However this raises:</p>
<pre><code>NameError: global name 'filename' is not defined
</code></pre>
<p>when I include filename in setup variable Python says:</p>
<pre><code>ImportError: cannot import name filename
</code></pre>
<p>How this is solved?</p>
| 1 | 2016-08-22T19:23:55Z | 39,087,657 | <p><code>filename</code> isn't defined in the scope of the code in the <code>timeit</code> block. I don't know what <code>utils</code> is in your code, but assuming it expects <code>filename</code> and <code>keypath</code> as strings just replace your</p>
<pre><code> code='etok=utils.token(filename,keypath,str(i))'
</code></pre>
<p>line with:</p>
<pre><code> code='etok=utils.token("{}","{}",{})'.format(filename, keypath, i)
</code></pre>
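<p>Alternatively, since Python 2.6 <code>timeit.Timer</code> also accepts a callable, which sidesteps the string building and the import/scoping problems entirely. A sketch with a dummy stand-in for <code>utils.token</code>:</p>

```python
import timeit

def token(filename, keypath, i):
    # dummy stand-in for utils.token, just so there is something to time
    return hash((filename, keypath, i))

filename, keypath, i = 'pl_10000004', './pickles/key.pickle', 16

# The lambda closes over the local variables -- no globals or setup string needed
t = timeit.Timer(lambda: token(filename, keypath, str(i)))
elapsed = t.timeit(number=10)
print('%f' % elapsed)
```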
| 1 | 2016-08-22T19:53:12Z | [
"python",
"timeit"
] |
App in Docker does not update | 39,087,257 | <p>I am trying to run a simple Flask application inside docker. But it seems like even when I update my app.py code and restart the docker container nothing updates.</p>
<p>I am running docker on OS X. IS this something simple I am missing or this is an expected behavior?</p>
<p>This is what my docerfile looks like:</p>
<pre><code>FROM ubuntu:14.04.3
# install dependencies
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y supervisor
RUN apt-get install -y python3-pip
# update working directories
ADD ./app /app
ADD ./config /config
ADD requirements.txt /
# install dependencies
RUN pip3 install -r requirements.txt
# setup config
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /config/nginx.conf /etc/nginx/sites-enabled/
RUN ln -s /config/supervisor.conf /etc/supervisor/conf.d/
EXPOSE 80
CMD ["supervisord", "-n"]
</code></pre>
| 0 | 2016-08-22T19:24:46Z | 39,087,697 | <p>A Docker <em>image</em> (what you get after <code>docker build -t app .</code>) is a "frozen" snapshot. It's not editable; it's a snapshot of whatever you add to the image at that point in time.</p>
<p>Now, once you run an image, the contents are expanded (think of it like the archive is unzipped) and then the process you defined in the image runs; and this is a <em>container</em>.</p>
<p>Running containers can be shown by <code>docker ps</code>, and images (things you can use to run new containers) are shown by <code>docker images</code>.</p>
<p>A container can write to the file system, but by default all changes are lost once the container is stopped. These changes are not stored back to the image.</p>
<p>Images are immutable until you rebuild them, and containers continue to use the image they were started with. So with your Dockerfile method of importing your app.py, you need to run the following to update that file:</p>
<pre><code>docker build -t app .
docker stop <container_id>
docker rm <container_id>
docker run -p 80:80 -d --name=my-app app
</code></pre>
<p>You'll need to run <code>docker ps -a</code> to get your current container id. By naming your container, you can reference it by "my-app" or any other name you pick going forward.</p>
<p>Note that this is the slow way to do your update. For more efficient developing, use a volume (with MacOS, this must be located under /Users):</p>
<pre><code>docker run -p 80:80 -v $(pwd)/app:/app -d --name=my-app app
</code></pre>
<p>Now, anytime you update your app folder, you can restart python assuming it doesn't have an automatic reload included:</p>
<pre><code>docker restart my-app
</code></pre>
| 3 | 2016-08-22T19:56:32Z | [
"python",
"docker"
] |
AttributeError: 'module' object has no attribute 'Fib' | 39,087,287 | <p>I have just started programming in Python and have the following problem: I have written a simple function in abc.py: </p>
<pre><code>def Fib(n):
if n<2:
return n
else:
return Fib(n-1) + Fib(n-2)
</code></pre>
<p>which I would like to import in another python file hi.py:</p>
<pre><code>import abc
x = abx.Fib(4)
print(x)
</code></pre>
<p>Then the error written in the title appears. I am using PyCharm Community Edition 2016.2.1, if that's important to know.</p>
| -2 | 2016-08-22T19:26:29Z | 39,087,343 | <p>Oops, it's a typo!</p>
<pre><code> x=abx.Fib(4)
</code></pre>
<p>should be:</p>
<pre><code> x=abc.Fib(4)
</code></pre>
<p>The lesson here is to proofread a little more closely, and pay attention to those pesky error messages :)</p>
| -1 | 2016-08-22T19:30:09Z | [
"python"
] |
Python: Unpack a list of objects to Dictionary | 39,087,402 | <p>I have a list of objects that need to be unpacked to a dictionary efficiently. There are more than 2,000,000 objects in the list. The operation takes more than 1.5 hours to complete. I would like to know if this can be done more efficiently.
The objects in the list are based on this class. </p>
<pre><code>class ResObj:
def __init__(self, index, result):
self.loc = index ### This is the location, where the values should go in the final result dictionary
self.res = result ### This is a dictionary that has values for this location.
self.loc = 2
self.res = {'value1':5.4, 'value2':2.3, 'valuen':{'sub_value1':4.5, 'sub_value2':3.4, 'sub_value3':7.6}}
</code></pre>
<p>Currently I use this method to perform this operation. </p>
<pre><code>def make_final_result(list_of_results):
no_sub_result_variables = ['value1', 'value2']
sub_result_variables = ['valuen']
sub_value_variables = ['sub_value1', 'sub_value3', 'sub_value3']
final_result = {}
num_of_results = len(list_of_results)
for var in no_sub_result_variables:
final_result[var] = numpy.zeros(num_of_results)
for var in sub_result_variables:
final_result[var] = {sub_var:numpy.zeros(num_of_results) for sub_var in sub_value_variables}
for obj in list_of_results:
i = obj.loc
result = obj.res
for var in no_sub_result_variables:
final_result[var][i] = result[var]
for var in sub_result_variables:
for name in sub_value_variables:
try:
final_result[var][name][i] = result[var][name]
except KeyError as e:
##TODO Add a debug check
pass
</code></pre>
<p>I have tried using multiprocessing.Manager().dict and Manager().Array() to use parallelism for this; however, I could only get 2 processes to work (even though I manually set the number of processes to the number of CPUs = 24).
Can you please help me find a faster method to improve the performance?
Thank you. </p>
| 0 | 2016-08-22T19:35:39Z | 39,087,568 | <p>Remove some indentation to make your loops non-nested:</p>
<pre><code>for obj in list_of_results:
i = obj.loc
result = obj.res
for var in no_sub_result_variables:
final_result[var][i] = result[var]
for var in sub_result_variables:
for name in sub_value_variables:
try:
final_result[var][name][i] = result[var][name]
except KeyError as e:
##TODO Add a debug check
pass
</code></pre>
| 0 | 2016-08-22T19:47:06Z | [
"python",
"numpy",
"dictionary",
"multiprocessing"
] |
Python: Unpack a list of objects to Dictionary | 39,087,402 | <p>I have a list of objects that need to be unpacked to a dictionary efficiently. There are more than 2,000,000 objects in the list. The operation takes more than 1.5 hours to complete. I would like to know if this can be done more efficiently.
The objects in the list are based on this class. </p>
<pre><code>class ResObj:
def __init__(self, index, result):
self.loc = index ### This is the location, where the values should go in the final result dictionary
self.res = result ### This is a dictionary that has values for this location.
self.loc = 2
self.res = {'value1':5.4, 'value2':2.3, 'valuen':{'sub_value1':4.5, 'sub_value2':3.4, 'sub_value3':7.6}}
</code></pre>
<p>Currently I use this method to perform this operation. </p>
<pre><code>def make_final_result(list_of_results):
no_sub_result_variables = ['value1', 'value2']
sub_result_variables = ['valuen']
sub_value_variables = ['sub_value1', 'sub_value3', 'sub_value3']
final_result = {}
num_of_results = len(list_of_results)
for var in no_sub_result_variables:
final_result[var] = numpy.zeros(num_of_results)
for var in sub_result_variables:
final_result[var] = {sub_var:numpy.zeros(num_of_results) for sub_var in sub_value_variables}
for obj in list_of_results:
i = obj.loc
result = obj.res
for var in no_sub_result_variables:
final_result[var][i] = result[var]
for var in sub_result_variables:
for name in sub_value_variables:
try:
final_result[var][name][i] = result[var][name]
except KeyError as e:
##TODO Add a debug check
pass
</code></pre>
<p>I have tried using multiprocessing.Manager().dict and Manager().Array() to use parallelism for this; however, I could only get 2 processes to work (even though I manually set the number of processes to the number of CPUs = 24).
Can you please help me find a faster method to improve the performance?
Thank you. </p>
| 0 | 2016-08-22T19:35:39Z | 39,088,347 | <p>Having nested numpy arrays doesn't seem the best way to structure your data. You can use numpy's <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">structured arrays</a> to create a more intuitive data structure.</p>
<pre><code>import numpy as np
# example values
values = [
{
"v1": 0,
"v2": 1,
"vs": {
"x": 2,
"y": 3,
"z": 4,
}
},
{
"v1": 5,
"v2": 6,
"vs": {
"x": 7,
"y": 8,
"z": 9,
}
}
]
def value_to_record(value):
"""Take a dictionary and convert it to an array-like format"""
return (
value["v1"],
value["v2"],
(
value["vs"]["x"],
value["vs"]["y"],
value["vs"]["z"]
)
)
# define what a record looks like -- f8 is an 8-byte float
dtype = [
("v1", "f8"),
("v2", "f8"),
("vs", [
("x", "f8"),
("y", "f8"),
("z", "f8")
])
]
# create actual array
arr = np.fromiter(map(value_to_record, values), dtype=dtype, count=len(values))
# access individual record
print(arr[0]) # prints (0.0, 1.0, (2.0, 3.0, 4.0))
# access specific value
assert arr[0]['vs']['x'] == 2
# access all values of a specific field
print(arr['v2']) # prints [ 1. 6.]
assert arr['v2'].sum() == 7
</code></pre>
<p>Using this way of generating the data created a 2,000,000 long array in 2 seconds on my machine.</p>
<p>To make it work for your <code>ResObj</code> objects then sort them by the <code>loc</code> attribute, and then pass the <code>res</code> attribute to the <code>value_to_record</code> function.</p>
| 2 | 2016-08-22T20:39:22Z | [
"python",
"numpy",
"dictionary",
"multiprocessing"
] |
Python: Unpack a list of objects to Dictionary | 39,087,402 | <p>I have a list of objects that need to be unpacked to a dictionary efficiently. There are more than 2,000,000 objects in the list. The operation takes more than 1.5 hours to complete. I would like to know if this can be done more efficiently.
The objects in the list are based on this class. </p>
<pre><code>class ResObj:
def __init__(self, index, result):
self.loc = index ### This is the location, where the values should go in the final result dictionary
self.res = result ### This is a dictionary that has values for this location.
self.loc = 2
self.res = {'value1':5.4, 'value2':2.3, 'valuen':{'sub_value1':4.5, 'sub_value2':3.4, 'sub_value3':7.6}}
</code></pre>
<p>Currently I use this method to perform this operation. </p>
<pre><code>def make_final_result(list_of_results):
no_sub_result_variables = ['value1', 'value2']
sub_result_variables = ['valuen']
sub_value_variables = ['sub_value1', 'sub_value3', 'sub_value3']
final_result = {}
num_of_results = len(list_of_results)
for var in no_sub_result_variables:
final_result[var] = numpy.zeros(num_of_results)
for var in sub_result_variables:
final_result[var] = {sub_var:numpy.zeros(num_of_results) for sub_var in sub_value_variables}
for obj in list_of_results:
i = obj.loc
result = obj.res
for var in no_sub_result_variables:
final_result[var][i] = result[var]
for var in sub_result_variables:
for name in sub_value_variables:
try:
final_result[var][name][i] = result[var][name]
except KeyError as e:
##TODO Add a debug check
pass
</code></pre>
<p>I have tried using multiprocessing.Manager().dict and Manager().Array() to use parallelism for this; however, I could only get 2 processes to work (even though I manually set the number of processes to the number of CPUs = 24).
Can you please help me find a faster method to improve the performance?
Thank you. </p>
| 0 | 2016-08-22T19:35:39Z | 39,092,539 | <p>You can distribute the work among processes by key names.<br>
Here I create a pool of workers and pass to them var and optional subvar names.<br>
The huge dataset is shared with workers using cheap <code>fork</code>.<br>
<code>Unpacker.unpack</code> picks the specified vars from ResObj and returns them as an np.array<br>
The main loop in make_final_result combines the arrays in final_result.<br>
<strong>Py2</strong>:</p>
<pre><code>from collections import defaultdict
from multiprocessing import Process, Pool
import numpy as np
class ResObj(object):
def __init__(self, index=None, result=None):
self.loc = index ### This is the location, where the values should go in the final result dictionary
self.res = result ### This is a dictionary that has values for this location.
self.loc = 2
self.res = {'value1':5.4, 'value2':2.3, 'valuen':{'sub_value1':4.5, 'sub_value2':3.4, 'sub_value3':7.6}}
class Unpacker(object):
@classmethod
def cls_init(cls, list_of_results):
cls.list_of_results = list_of_results
@classmethod
def unpack(cls, var, name):
list_of_results = cls.list_of_results
result = np.zeros(len(list_of_results))
if name is None:
for i, it in enumerate(list_of_results):
result[i] = it.res[var]
else:
for i, it in enumerate(list_of_results):
result[i] = it.res[var][name]
return var, name, result
#Pool.map doesn't accept instancemethods so the use of a wrapper
def Unpacker_unpack((var, name),):
return Unpacker.unpack(var, name)
def make_final_result(list_of_results):
no_sub_result_variables = ['value1', 'value2']
sub_result_variables = ['valuen']
sub_value_variables = ['sub_value1', 'sub_value3', 'sub_value3']
pool = Pool(initializer=Unpacker.cls_init, initargs=(list_of_results, ))
final_result = defaultdict(dict)
def key_generator():
for var in no_sub_result_variables:
yield var, None
for var in sub_result_variables:
for name in sub_value_variables:
yield var, name
for var, name, result in pool.imap(Unpacker_unpack, key_generator()):
if name is None:
final_result[var] = result
else:
final_result[var][name] = result
return final_result
if __name__ == '__main__':
print make_final_result([ResObj() for x in xrange(10)])
</code></pre>
<p>Ensure that you are not on Windows. It lacks <code>fork</code> and multiprocessing will have to pipe entire dataset to each of 24 worker processes.<br>
Hope this will help.</p>
| 1 | 2016-08-23T04:47:40Z | [
"python",
"numpy",
"dictionary",
"multiprocessing"
] |
Apply regex to every row of a spark dataframe and save it as a new column in the same dataframe | 39,087,515 | <p>Suppose I have a spark dataframe,</p>
<p>data.show()</p>
<pre><code>ID URL
1 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed
2 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed
3 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed
4 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed
5 None
</code></pre>
<p>I want to write a regex operation to it, where I want to parse the URL for a particular scenario. The scenario is to parse out the text after &q and before the next &. I am able to write this in Python for a pandas dataframe as follows:</p>
<pre><code>re.sub(r"\s+", " ", re.search(r'/?q=([^&]*)', data['url'][i]).group(1).replace('+', ' '))
</code></pre>
<p>I want to write the same in pyspark. </p>
<p>If a write something like,</p>
<pre><code> re.sub(r"\s+", " ", re.search(r'/?q=([^&]*)', data.select(data.url.alias("url")).collect()).group(1).replace('+', ' '))
</code></pre>
<p>or</p>
<pre><code>re.sub(r"\s+", " ", re.search(r'/?q=([^&]*)', data.select(data['url']).collect()).group(1).replace('+', ' '))
</code></pre>
<p>I am getting the following error,</p>
<pre><code>TypeError: expected string or buffer
</code></pre>
<p>One option is to convert the data to pandas using</p>
<p><code>data.toPandas()</code> and then do the operations. But my data is huge and converting it to pandas makes it slow. Is there a way I can write this directly to a new column in the spark dataframe, so that I have something like:</p>
<pre><code>ID URL word
1 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed To Be Parsed out
2 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed To Be Parsed out
3 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed To Be Parsed out
4 https://www.sitename.com/&q=To+Be+Parsed+out&oq=Dont+Need+to+be+parsed To Be Parsed out
5 None None
</code></pre>
<p>How can I add this as a new column in the pyspark dataframe, applied to every row of the dataframe?</p>
| 0 | 2016-08-22T19:44:08Z | 39,097,961 | <p>As mentioned by @David in the comment, you could use <code>udf</code> and <code>withColumn</code>:</p>
<p><strong>Scala code:</strong></p>
<pre><code>import org.apache.spark.sql.functions._
val getWord: (String => String) = (url: String) => {
if (url != null) {
"""/?q=([^&]*)""".r
.findFirstIn(url)
.get
.replaceAll("q=", "")
.replaceAll("\\+", " ")
}
else
null
}
val udfGetWord = udf(getWord)
df.withColumn("word", udfGetWord($"url")).show()
</code></pre>
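<p>A pure-Python sketch of the same parsing logic (the function name is illustrative); in PySpark it could be wrapped with <code>udf(parse_query, StringType())</code> and applied via <code>withColumn</code>, mirroring the Scala version above:</p>

```python
import re

def parse_query(url):
    # Extract the q= parameter and turn '+' into spaces; None-safe for null rows
    if url is None:
        return None
    match = re.search(r'/?q=([^&]*)', url)
    if match is None:
        return None
    return re.sub(r'\s+', ' ', match.group(1).replace('+', ' '))
```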
| 0 | 2016-08-23T09:56:18Z | [
"python",
"python-2.7",
"apache-spark",
"pyspark",
"pyspark-sql"
] |
Detecting when code is run on Travis CI | 39,087,544 | <p>I have a nose test that uses a pathname to a png file in the tests directory. One path works in local testing, one path works on Travis. How do I check when the code is run on Travis?</p>
<p>Edit: Here is the <a href="https://github.com/construct/construct/commit/839877318559aa2971194836105430aa54d2c43f" rel="nofollow">actual code.</a></p>
| 2 | 2016-08-22T19:46:04Z | 39,088,539 | <p>You could check for the existence (or value) of an environment variable. It looks like Travis defines several by default (see <a href="https://docs.travis-ci.com/user/environment-variables/#Default-Environment-Variables" rel="nofollow">here</a>).</p>
<p>For example:</p>
<pre><code>import os
istravis = os.environ.get('TRAVIS') == 'true'
</code></pre>
| 1 | 2016-08-22T20:52:06Z | [
"python",
"continuous-integration",
"travis-ci",
"nose"
] |
Detecting when code is run on Travis CI | 39,087,544 | <p>I have a nose test that uses a pathname to a png file in the tests directory. One path works in local testing, one path works on Travis. How do I check when the code is run on Travis?</p>
<p>Edit: Here is the <a href="https://github.com/construct/construct/commit/839877318559aa2971194836105430aa54d2c43f" rel="nofollow">actual code.</a></p>
| 2 | 2016-08-22T19:46:04Z | 39,089,212 | <p>To check the existence of TRAVIS:</p>
<pre><code>import os
is_travis = 'TRAVIS' in os.environ
</code></pre>
| 2 | 2016-08-22T21:45:32Z | [
"python",
"continuous-integration",
"travis-ci",
"nose"
] |
Shortcut to creating an adjacency matrix | 39,087,559 | <p>I need a short and sweet version of the python code I wrote. So basically what I have is a text file with values such as below:</p>
<pre><code>x
a b c
d e f
</code></pre>
<p>The first line is the number of nodes. From the second line on, the values are read into NODE1, NODE2, Weight. I am taking these values and creating an adjacency matrix out of them. This will be an undirected graph, so matrix[u][v] will equal matrix[v][u].
This is the code I have:</p>
<pre><code>with open(filename, 'r') as textfile:
firstLine = int(textfile.readline())
for line in textfile:
a, b, c = line.split()
a = int(a)
b = int(b)
c = float(c)
graph[a][b] = graph[b][a] = c
</code></pre>
<p>Now I need to populate the diagonals as zero, and other unassigned indices to infinity. </p>
| 0 | 2016-08-22T19:46:46Z | 39,087,676 | <pre><code>with open(filename, 'r') as textfile:
    file_lines = textfile.readlines()
    # Initialize graph with -1, i.e. the path does not exist (standing in for infinity)
    # Note: if negative weights are allowed, use float('inf') as the marker instead
    total_node = int(file_lines[0])
    my_graph = [[-1] * total_node for _ in range(total_node)]  # independent rows, not aliased copies
    # Update weight based on available path
    for line in file_lines[1:]:
        s = line.split()
        u, v, w = int(s[0]), int(s[1]), float(s[2])
        my_graph[u][v] = my_graph[v][u] = w
    # Update the main diagonal to zero
    for i in range(total_node):
        my_graph[i][i] = 0
</code></pre>
| 1 | 2016-08-22T19:54:47Z | [
"python",
"adjacency-matrix",
"minimum-spanning-tree"
] |
Shortcut to creating an adjacency matrix | 39,087,559 | <p>I need a short and sweet version of the python code I wrote. So basically what I have is a text file with values such as below:</p>
<pre><code>x
a b c
d e f
</code></pre>
<p>The first line is the number of nodes. From the second line on, the values are read into NODE1, NODE2, Weight. I am taking these values and creating an adjacency matrix out of them. This will be an undirected graph, so matrix[u][v] will equal matrix[v][u].
This is the code I have:</p>
<pre><code>with open(filename, 'r') as textfile:
firstLine = int(textfile.readline())
for line in textfile:
a, b, c = line.split()
a = int(a)
b = int(b)
c = float(c)
graph[a][b] = graph[b][a] = c
</code></pre>
<p>Now I need to populate the diagonals as zero, and other unassigned indices to infinity. </p>
| 0 | 2016-08-22T19:46:46Z | 39,087,879 | <p>I'm not sure simpler, but with numpy reshape you can create the default matrix before you populate with the weights without explicit loops. Here <code>n</code> is the size.</p>
<pre><code>In [20]: n=4; np.reshape(np.array(([0]+[float("inf")]*n)*(n-1)+[0]),[n,n])
Out[20]:
array([[ 0., inf, inf, inf],
[ inf, 0., inf, inf],
[ inf, inf, 0., inf],
[ inf, inf, inf, 0.]])
</code></pre>
| 0 | 2016-08-22T20:08:30Z | [
"python",
"adjacency-matrix",
"minimum-spanning-tree"
] |
How to trigger a variable/method in python from a html file with javascript | 39,087,614 | <p>I want a hyperlink on an html page to run a variable that is defined in my Python file. The variable is going to clear my database. Here is the code I am trying to use. </p>
<p><strong>Python</strong></p>
<pre><code>@app.route('/log')
def log():
cleardb = db.session.delete()
return render_template('log.html', cleardb=cleardb)
</code></pre>
<p><strong>Html</strong></p>
<pre><code><a onclick="myFunction()">Clear database</a>
</code></pre>
<p><strong>Javascript</strong></p>
<pre><code><script>
function myFunction()
</script>
</code></pre>
<p>I don't know what javascript I need to run the variable. I want to make the cleardb get triggered so that it will delete the database.</p>
<p>Thanks</p>
| 2 | 2016-08-22T19:50:30Z | 39,103,218 | <p>You need to make an ajax request with javascript to /log, it would look something like this:</p>
<pre><code>function myFunction() {
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
if (xmlhttp.readyState == XMLHttpRequest.DONE ) {
if (xmlhttp.status == 200) {
//Do Success functionality here
}
else if (xmlhttp.status == 400) {
//Handle 400 errors here
}
else {
//All other errors go here
}
}
};
xmlhttp.open("GET", "/log", true);
xmlhttp.send();
}
</code></pre>
| 1 | 2016-08-23T13:55:31Z | [
"javascript",
"python",
"html",
"flask-sqlalchemy"
] |
How to add threads depending on a number | 39,087,647 | <p>In a part of my software, written in Python, I have a list of items whose size can vary greatly from 12 items down to only one. For each item in this list I'm doing some processing (sending an HTTP request related to the given item, parsing the results, and many other operations). I'd like to speed up my code using threading: I'd like to create 2 threads where each one takes a number of items and does the processing asynchronously. </p>
<p><strong>Example 1</strong>: Let's say that my list has 12 items; in this case each thread would take 6 items and call the processing functions on each item.</p>
<p><strong>Example 2</strong>: Now let's say that my list has 9 items; one thread would take 5 items and the other thread would take the 4 remaining items.</p>
<p>Currently I'm not applying any threading and my code base is very large, so here is some code that does almost the same thing as my case:</p>
<pre><code># This procedure needs to be used with threading
itemList = getItems()  # This function returns an unknown number of items between 1 and 12
if len(itemList) > 0:  # Make sure that the list is not empty
    for item in itemList:
        processItem(item)  # This is an imaginary function that does the processing on each item
</code></pre>
<p>Below is some basic code that explains what I'm doing. I can't figure out how I can make my threads flexible, so that each one takes a number of items and the other takes the rest (as explained in examples 1 & 2).</p>
<p>Thanks for your time</p>
| 0 | 2016-08-22T19:52:32Z | 39,087,707 | <p>You might rather implement it using shared queues
<a href="https://docs.python.org/3/library/queue.html#queue-objects" rel="nofollow">https://docs.python.org/3/library/queue.html#queue-objects</a></p>
<pre><code>import queue
import threading
def worker():
while True:
item = q.get()
if item is None:
break
do_work(item)
q.task_done()
q = queue.Queue()
threads = []
for i in range(num_worker_threads):
t = threading.Thread(target=worker)
t.start()
threads.append(t)
for item in source():
q.put(item)
# block until all tasks are done
q.join()
# stop workers
for i in range(num_worker_threads):
q.put(None)
for t in threads:
t.join()
</code></pre>
<p>Quoting from
<a href="https://docs.python.org/3/library/queue.html#module-queue" rel="nofollow">https://docs.python.org/3/library/queue.html#module-queue</a>:</p>
<blockquote>
<p>The queue module implements multi-producer, multi-consumer queues. It
is especially useful in threaded programming when information must be
exchanged safely between multiple threads.</p>
</blockquote>
<p>The idea is that you have a shared storage and each thread attempts reading items from it one-by-one.
This is much more flexible than distributing the load in advance, as you don't know how thread execution will be scheduled by your OS, how much time each iteration will take, etc.
Furthermore, you might add items for further processing to this queue dynamically, for example by having a producer thread running in parallel.</p>
<p>Some helpful links:</p>
<p>A brief introduction into concurrent programming in python:
<a href="http://www.slideshare.net/dabeaz/an-introduction-to-python-concurrency" rel="nofollow">http://www.slideshare.net/dabeaz/an-introduction-to-python-concurrency</a></p>
<p>More details on producer-consumer pattern with line-by-line explanation:
<a href="http://www.informit.com/articles/article.aspx?p=1850445&seqNum=8" rel="nofollow">http://www.informit.com/articles/article.aspx?p=1850445&seqNum=8</a></p>
| 2 | 2016-08-22T19:57:02Z | [
"python",
"multithreading",
"python-multithreading"
] |
How to add threads depending on a number | 39,087,647 | <p>In a part of my software, written in Python, I have a list of items whose size can vary greatly from 12 items down to only one. For each item in this list I'm doing some processing (sending an HTTP request related to the given item, parsing the results, and many other operations). I'd like to speed up my code using threading: I'd like to create 2 threads where each one takes a number of items and does the processing asynchronously. </p>
<p><strong>Example 1</strong>: Let's say that my list has 12 items; in this case each thread would take 6 items and call the processing functions on each item.</p>
<p><strong>Example 2</strong>: Now let's say that my list has 9 items; one thread would take 5 items and the other thread would take the 4 remaining items.</p>
<p>Currently I'm not applying any threading and my code base is very large, so here is some code that does almost the same thing as my case:</p>
<pre><code># This procedure needs to be used with threading
itemList = getItems()  # This function returns an unknown number of items between 1 and 12
if len(itemList) > 0:  # Make sure that the list is not empty
    for item in itemList:
        processItem(item)  # This is an imaginary function that does the processing on each item
</code></pre>
<p>Below is some basic code that explains what I'm doing. I can't figure out how I can make my threads flexible, so that each one takes a number of items and the other takes the rest (as explained in examples 1 & 2).</p>
<p>Thanks for your time</p>
| 0 | 2016-08-22T19:52:32Z | 39,088,097 | <p>You can use the <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor" rel="nofollow"><code>ThreadPoolExecutor</code></a> class from the <a href="https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures" rel="nofollow"><code>concurrent.futures</code></a> module in Python 3. The module is not present in Python 2, but there are some workarounds (which I will not discuss).</p>
<p>A thread pool executor does basically what @ffeast proposed, but with fewer lines of code for you to write. It manages a pool of threads which will execute all the tasks that you submit to it, presumably in the most efficient manner possible. The results will be returned through <code>Future</code> objects, which represent a "pending" result.</p>
<p>Since you seem to know the list of tasks up front, this is especially convenient for you. While you can not guarantee how the tasks will be split between the threads, the result will probably be at least as good as anything you coded by hand.</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=2) as executor:
for item in getItems():
executor.submit(processItem, item)
</code></pre>
<p>If you need more information with the output, like some way of identifying the futures that have completed or getting results out of them, see the <a href="https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example" rel="nofollow">example</a> in the Python documentation (on which the code above is heavily based).</p>
| 1 | 2016-08-22T20:22:24Z | [
"python",
"multithreading",
"python-multithreading"
] |
Python Tornado don't load images from html | 39,087,666 | <p>I have an html page that works when loaded directly in Firefox, but when it is served from the Tornado server it doesn't show the images.
My tornado server:</p>
<pre><code>import tornado.ioloop
import tornado.web
class mainHandler(tornado.web.RequestHandler):
def get(self):
self.render('./prop.html')
application = tornado.web.Application([
(r"/", mainHandler)
])
if __name__ == "__main__":
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
</code></pre>
<p>my prop.html:
</p>
<pre><code><html>
<head>
<meta charset="UTF-8" />
<meta name="description" content="" />
<meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
<meta name="keywords" content="" />
<title>Title</title>
<style>
</style>
</head>
<body>
<img src="./fig1.jpg" />
</body>
</html>
</code></pre>
<p>Can someone help me?</p>
<p>Thank you very much.
The code below gave me the expected result:</p>
<pre><code><img src="/static/fig1.jpg" />
</code></pre>
<p>but the</p>
<pre><code><img src="{{ static_url("fig1.jpg") }}" />
</code></pre>
<p>gave me the following error:</p>
<pre><code>Exception: You must define the 'static_path' setting in your application to use static_url
ERROR:tornado.access:500 GET / (127.0.0.1) 2.52ms
</code></pre>
<p>How do I set this setting?</p>
| 0 | 2016-08-22T19:54:10Z | 39,087,794 | <p>Tornado doesn't treat images specially.</p>
<p><code>/fig1.jpg</code> is a resource and since you haven't defined a handler for it, Tornado will not generate a response. For production, you should be using a webserver better optimized for serving static files, like nginx. For development, however, you can tell Tornado to serve static files like so:</p>
<pre><code>application = tornado.web.Application([
(r"/", mainHandler),
(r"/static/(.*)", tornado.web.StaticFileHandler, {
"path": "/your/static/folder"
}),
])
</code></pre>
<p>You can then update your HTML:</p>
<pre><code><img src="/static/fig1.jpg" />
</code></pre>
<p>Or better yet:</p>
<pre><code><img src="{{ static_url("fig1.jpg") }}" />
</code></pre>
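<p>Note that <code>static_url</code> additionally requires the <code>static_path</code> application setting. A minimal sketch, assuming a <code>static</code> folder next to the script (adjust the path to your layout):</p>

```python
import os
import tornado.web

application = tornado.web.Application(
    [(r"/", mainHandler)],
    # static_path enables static_url() and also registers a default /static/ handler
    static_path=os.path.join(os.path.dirname(__file__), "static"),
)
```

<p>With <code>static_path</code> set, Tornado serves files from that folder at <code>/static/</code> automatically, so the explicit <code>StaticFileHandler</code> route becomes optional.</p>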
| 0 | 2016-08-22T20:02:01Z | [
"python",
"tornado"
] |
Using Taggit with custom view Django | 39,087,671 | <p>I am currently trying to implement a tag system in my Django project. I am trying to add the tags within each post, and have a category on the right-hand side that displays maybe 10-20 of the tags. I am trying to implement this in the feed view, but I am unsure of how to call the slug for each tag in order to do /posts/tag/feed, so that once you click on a tag it redirects to the tag's slug, which would make the tag clickable. I tried to follow the link below, but it only shows how to do it with a class-based view. </p>
<p><a href="https://godjango.com/33-tagging-with-django-taggit/" rel="nofollow">https://godjango.com/33-tagging-with-django-taggit/</a></p>
<p>views.py</p>
<pre><code> def post_feed(request):
if not request.user.is_staff or not request.user.is_superuser:
raise Http404
queryset_list = Post.objects.all()
tags = Tag.objects.all()
query = request.GET.get("q")
if query:
queryset_list = queryset_list.filter(
Q(title__icontains=query)|
Q(tags__icontains=query)|
Q(description__icontains=query)|
Q(user__first_name__icontains=query)|
Q(user__last_name__icontains=query)
).distinct()
paginator = Paginator(queryset_list, 5)
page_request_var = "page"
page = request.GET.get(page_request_var)
try:
queryset = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
queryset = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
queryset = paginator.page(paginator.num_pages)
context = {
"object_list": queryset,
"title": "List",
"page_request_var": page_request_var,
}
return render(request, "post_feed.html", context)
</code></pre>
<p>And here is my url</p>
<pre><code>url(r'^tag/(?P<slug>[-\w]+)/$', post_feed, name='tagged'),
</code></pre>
<p>The Tag.objects.all() only pulls up the tags but doesn't request the slugs. </p>
<p>I am unsure of how to add this to my view without changing it. </p>
<p>taggit view to add url and query slug:</p>
<pre><code>from django.views.generic import DetailView, ListView
from taggit.models import Tag
from .models import Product

class TagMixin(object):
    def get_context_data(self, **kwargs):
        context = super(TagMixin, self).get_context_data(**kwargs)
context['tags'] = Tag.objects.all()
return context
class ProductDetail(DetailView):
template_name = 'product/detail.html'
context_object_name = 'product'
model = Product
class ProductIndex(TagMixin, ListView):
template_name = 'product/index.html'
model = Product
paginate_by = '10'
queryset = Product.objects.all()
context_object_name = 'products'
class TagIndexView(TagMixin, ListView):
template_name = 'product/index.html'
model = Product
paginate_by = '10'
context_object_name = 'products'
def get_queryset(self):
return Product.objects.filter(tags__slug=self.kwargs.get('slug'))
</code></pre>
<p>I have been stuck on this for a few days. Any advice would be helpful. </p>
<p>Here is my models.py; sorry, I had to format it this way to show the whole models code. </p>
<pre><code> from django.db import models
from django.db.models import Count, QuerySet, F
from django.utils import timezone
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.core.urlresolvers import reverse
from django.db.models.signals import pre_save
from django.utils.text import slugify
from markdown_deux import markdown
from django.utils.safestring import mark_safe
from taggit.managers import TaggableManager
from comments.models import Comment
def upload_location(instance, filename):
return "%s/%s" %(instance.slug, filename)
class Post(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, default=1 )
title = models.CharField(max_length=75)
slug = models.SlugField(unique=True)
image = models.ImageField(
upload_to=upload_location,
null=True,
blank=True,
width_field="width_field",
height_field="height_field")
height_field = models.IntegerField(default=0)
width_field = models.IntegerField(default=0)
description = models.TextField()
tags = TaggableManager()
public = models.BooleanField(default=False)
updated = models.DateTimeField(auto_now_add=False, auto_now=True)
created = models.DateTimeField(auto_now_add=True, auto_now=False)
def __str__(self):
return self.title
def get_absolute_url(self):
return reverse("posts:detail", kwargs={"slug": self.slug})
class Meta:
ordering = ["-created", "-updated" ]
def get_markdown(self):
description = self.description
markdown_text = markdown(description)
return mark_safe(markdown_text)
@property
def comments(self):
instance = self
qs = Comment.objects.filter_by_instance(instance)
return qs
@property
def get_content_type(self):
instance = self
content_type = ContentType.objects.get_for_model(instance.__class__)
return content_type
def create_slug(instance, new_slug=None):
slug = slugify(instance.title)
if new_slug is not None:
slug = new_slug
qs = Post.objects.filter(slug=slug).order_by("-id")
exists = qs.exists()
if exists:
new_slug = "%s-%s" %(slug, qs.first().id)
return create_slug(instance, new_slug=new_slug)
return slug
def pre_save_post_receiver(sender, instance, *args, **kwargs):
if not instance.slug:
instance.slug = create_slug(instance)
pre_save.connect(pre_save_post_receiver, sender=Post)
</code></pre>
<p>Here is my template:</p>
<pre><code> <div class="row">
<div class="col-sm-2">
<div class="panel panel-primary">
<div class="panel-heading">
Tags
</div>
<div class="panel-body">
<ul class="list-group">
{% for tag in tags %}
<li><a href="{% url 'tagged' tag.slug %}"></a></li>
{% empty %}
<li>No Tags</li>
{% endfor %}
</ul>
</div>
</div>
</div>
</div>
<div class="container">
<div class='col-sm-6 col-sm-offset-3'>
<h1> Post Feed </h1>
<form method='GET' action='' class='row'>
<div class='col-sm-6'>
<div class='input-group'>
<input class='form-control' type='text' name='q' placeholder='Search posts' value='{{ request.GET.q }}'/>
<span class='input-group-btn'>
<input class='btn btn-default' type='submit' value='Search'/>
</span>
</div>
</div>
</form>
{% for obj in object_list %}
<div class="row">
<div class="col-sm-12">
<div class="thumbnail">
{% if obj.image %}
<img src='{{ obj.image.url }}' class='img-responsive' />
{% endif %}
<div class="caption post-detail-item">
<h3><a href='{{ obj.get_absolute_url }}'><strong>{{ obj.title }}</strong></a> <small>{{ obj.created|timesince }} ago</small>
</h3>
{% if obj.user.get_full_name %}<p>Poster: {{ obj.user.get_full_name }}</p>{% endif %}
{{ obj.get_markdown|truncatechars_html:50 }}
<p>Tags: {{ obj.tags|join:" | "|title }}</p>
<p><a href="{{ obj.get_absolute_url }}" class="btn btn-primary" role="button">View</a></p>
</div>
</div>
</div>
</div>
{% endfor %}
<div class="pagination">
<span class="step-links">
{% if object_list.has_previous %}
<a href="?{{ page_request_var }}={{ object_list.previous_page_number }}{% if request.GET.q %}&
q={{ request.GET.q }}{% endif %}">previous</a>
{% endif %}
<span class="current">
Page {{ object_list.number }} of {{ object_list.paginator.num_pages }}.
</span>
{% if object_list.has_next %}
<a href="?{{ page_request_var }}={{ object_list.next_page_number }}&q={{ request.GET.q }}">next</a>
{% endif %}
</span>
</div>
<footer>
<p class="pull-right"><a href="#">Back to top</a></p>
<p>&copy; 2016 Holms, Inc. &middot; <a href='{% url "privacy" %}'>Privacy</a> &middot; <a href="#">Terms</a></p>
</footer>
</div>
{% endblock content %}
</div>
</code></pre>
| 0 | 2016-08-22T19:54:31Z | 39,108,489 | <p>Just copy and paste these.</p>
<p>Change the urls.py entry to this:</p>
<pre><code>url(r'^tag/(?P<pk>[-\w]+)/$', tag_list, name='tagged'),
</code></pre>
<p>Your post_feed function to this (views.py):</p>
<pre><code>def post_feed(request):
if not request.user.is_staff or not request.user.is_superuser:
raise Http404
queryset = Post.objects.all()
query = request.GET.get("q")
if query: # this is a separate variable (for searching I'm assuming)
        queryset = queryset.filter(
            Q(title__icontains=query)|
            Q(tags__name__icontains=query)| # look up tag names; a bare tags__icontains lookup is gonna cause problems.
Q(description__icontains=query)|
Q(user__first_name__icontains=query)|
Q(user__last_name__icontains=query)
).distinct()
# bring pagination/tag lookup outside of the if query block -- so you don't NEED a query
paginator = Paginator(queryset, 5)
page_request_var = "page"
page = request.GET.get(page_request_var)
try:
queryset = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
queryset = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
queryset = paginator.page(paginator.num_pages)
context = {
"object_list": queryset,
"tags": tags.objects.all()[0:20], # first 20 tags of all tags
"title": "List",
"page_request_var": page_request_var,
}
return render(request, "post_feed.html", context)
</code></pre>
<p>And your new function to show just posts based on a specific tag to this (views.py):</p>
<pre><code>""" modelled after the function above -- so it's easy to understand """
def tag_list(request, pk):
    if not request.user.is_staff or not request.user.is_superuser:
        raise Http404
    queryset = Post.objects.filter(tags__id=pk)
paginator = Paginator(queryset, 5)
page_request_var = "page"
page = request.GET.get(page_request_var)
try:
queryset = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
queryset = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
queryset = paginator.page(paginator.num_pages)
context = {
"object_list": queryset,
"tags": tags.objects.all()[0:20], # first 20 tags of all tags
"title": "List",
"page_request_var": page_request_var,
}
return render(request, "post_feed.html", context)
</code></pre>
<p>then change the template to (post_feed.html):</p>
<pre><code><li><a href="{% url 'tagged' tag.pk %}">{{tag.name}}</a></li>
</code></pre>
<p>also, read this:
<a href="https://docs.djangoproject.com/en/1.9/topics/http/urls/" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/http/urls/</a></p>
| 0 | 2016-08-23T18:39:22Z | [
"python",
"django",
"tags"
] |
python: how to count number in one file? | 39,087,678 | <p>I need to write a Python program to read the values in a file, one per line, such as file: test.txt</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
</code></pre>
<p>Denoting these as <strong>j1, j2, j3, ... jn</strong>,
I need to sum the differences of consecutive values: </p>
<pre><code>a=(j2-j1)+(j3-j2)+...+(jn-j[n-1])
</code></pre>
<p>I have example source code</p>
<pre><code> a=0
for(j=2;j<=n;j++){
a=a+(j-(j-1))
}
print a
</code></pre>
<p>and the output is </p>
<pre><code>9
</code></pre>
| -1 | 2016-08-22T19:54:58Z | 39,087,888 | <p><strong>Solution (Python 3)</strong></p>
<pre><code>res = 0
with open("test.txt","r") as fp:
lines = list(map(int,fp.readlines()))
for i in range(1,len(lines)):
res += lines[i]-lines[i-1]
print(res)
</code></pre>
<p><strong>Output</strong>: <code>9</code></p>
<p><code>test.text</code> contains:</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
</code></pre>
| 0 | 2016-08-22T20:09:12Z | [
"python",
"ubuntu"
] |
python: how to count number in one file? | 39,087,678 | <p>I need to write a Python program to read the values in a file, one per line, such as file: test.txt</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
</code></pre>
<p>Denoting these as <strong>j1, j2, j3, ... jn</strong>,
I need to sum the differences of consecutive values: </p>
<pre><code>a=(j2-j1)+(j3-j2)+...+(jn-j[n-1])
</code></pre>
<p>I have example source code</p>
<pre><code> a=0
for(j=2;j<=n;j++){
a=a+(j-(j-1))
}
print a
</code></pre>
<p>and the output is </p>
<pre><code>9
</code></pre>
| -1 | 2016-08-22T19:54:58Z | 39,087,913 | <p>I'm not even sure if I understand the question, but here's my best attempt at solving what I think is your problem:</p>
<ol>
<li><p>To read values from a file, use "with open()" in read mode ('r'):</p>
<pre><code>with open('test.txt', 'r') as f:
    # your code here
</code></pre></li>
<li><p>"as f" means that "f" will now represent your file if you use it anywhere in that block</p></li>
<li><p>So, to read all the lines and store them into a list, do this:</p>
<p><code>all_lines = f.readlines()</code></p></li>
</ol>
<p>You can now do whatever you want with the data.</p>
<ol start="4">
<li>If you look at the function you're trying to solve, a=(j2-j1)+(j3-j2)+...+(jn-(jn-1)), you'll notice that many of the values cancel out, e.g. (j2-j1)+(j3-j2) = j3-j1. Thus, the entire function boils down to jn-j1, so all you need is the first and last number.</li>
</ol>
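<p>To make the telescoping point concrete, here is a minimal sketch that uses the numbers from the question as an in-memory list (kept self-contained so it runs without the file); it shows that the full sum of consecutive differences equals the last value minus the first:</p>

```python
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# the full sum of consecutive differences: (j2-j1) + (j3-j2) + ... + (jn-j[n-1])
a = sum(values[i] - values[i - 1] for i in range(1, len(values)))

# telescoping shortcut: every middle term cancels, leaving jn - j1
shortcut = values[-1] - values[0]

print(a, shortcut)  # 9 9
```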
<p>Edit: That being said, please try and search this forum first before asking any questions. As someone who's been in your shoes before, I decided to help you out, but you should learn to reference other people's questions that are identical to your own.</p>
| 0 | 2016-08-22T20:10:43Z | [
"python",
"ubuntu"
] |
python: how to count number in one file? | 39,087,678 | <p>I need to write a Python program to read the values in a file, one per line, such as file: test.txt</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
</code></pre>
<p>Denoting these as <strong>j1, j2, j3, ... jn</strong>,
I need to sum the differences of consecutive values: </p>
<pre><code>a=(j2-j1)+(j3-j2)+...+(jn-j[n-1])
</code></pre>
<p>I have example source code</p>
<pre><code> a=0
for(j=2;j<=n;j++){
a=a+(j-(j-1))
}
print a
</code></pre>
<p>and the output is </p>
<pre><code>9
</code></pre>
| -1 | 2016-08-22T19:54:58Z | 39,088,088 | <p>The correct answer is 9 : </p>
<pre><code>with open("data.txt") as f:
# set prev to first number in the file
prev = int(next(f))
sm = 0
# iterate over the remaining numbers
for j in f:
j = int(j)
sm += j - prev
# update prev
prev = j
print(sm)
</code></pre>
<p>Or using <em>itertools.tee</em> and <em>zip</em>:</p>
<pre><code>from itertools import tee
with open("data.txt") as f:
a,b = tee(f)
next(b)
print(sum(int(j) - int(i) for i,j in zip(a, b)))
</code></pre>
| 0 | 2016-08-22T20:21:44Z | [
"python",
"ubuntu"
] |
python: how to count number in one file? | 39,087,678 | <p>I need to write a Python program to read the values in a file, one per line, such as file: test.txt</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
</code></pre>
<p>Denoting these as <strong>j1, j2, j3, ... jn</strong>,
I need to sum the differences of consecutive values: </p>
<pre><code>a=(j2-j1)+(j3-j2)+...+(jn-j[n-1])
</code></pre>
<p>I have example source code</p>
<pre><code> a=0
for(j=2;j<=n;j++){
a=a+(j-(j-1))
}
print a
</code></pre>
<p>and the output is </p>
<pre><code>9
</code></pre>
| -1 | 2016-08-22T19:54:58Z | 39,088,108 | <p>If I understand correctly, the following equation;</p>
<pre><code>a = (j2-j1) + (j3-j2) + ... + (jn-(jn-1))
</code></pre>
<p>As you iterate over the file, it will subtract the value in the previous line from the value in the current line and then add all those differences.</p>
<pre><code>a = 0
with open("test.txt", "r") as f:
previous = next(f).strip()
for line in f:
line = line.strip()
if not line: continue
a = a + (int(line) - int(previous))
previous = line
print(a)
</code></pre>
| 0 | 2016-08-22T20:23:28Z | [
"python",
"ubuntu"
] |
How I fix "Couldn't find a distribution for 'ploneconf.site'" in Mastering Plone Training | 39,087,745 | <p>I'm learning create custom products for plone (4.3.10) using Mastering Plone, and I stop in the <a href="https://training.plone.org/4/eggs1.html#including-the-egg-in-plone" rel="nofollow">section</a> when I install the newly package created with mrbob.</p>
<p><a href="http://paster.org/m/bkfydi/" rel="nofollow">Here</a> is my buildout (I uncomment the lines informed by trainnig)</p>
<p>And my directory:</p>
<pre><code>jafar@plonedev:~/training/buildout$ pwd
/home/jafar/training/buildout
jafar@plonedev:~/training/buildout$ ls -l src
drwxrwxr-x 4 jafar jafar 4096 Ago 22 16:15 ploneconf.site
drwxrwxr-x 4 jafar jafar 4096 Ago 22 15:13 ploneconf.site_sneak
</code></pre>
<p><strong>[UPDATE]</strong></p>
<p>I did it all again following the training, and the buildout is <a href="http://paster.org/m/lnpaxn" rel="nofollow">http://paster.org/m/lnpaxn</a>. I created the package using this command:</p>
<pre><code>$ cd src
$ ../bin/mrbob -O ploneconf.site bobtemplates:plone_addon
</code></pre>
<p>After answering the questions, I ran the buildout, and this is my output:</p>
<pre><code>(py27) jafar@plonedev:~/training/buildout/src$ ls
ploneconf.site ploneconf.site_sneak
(py27) jafar@plonedev:~/training/buildout/src$ cd ..
(py27) jafar@plonedev:~/training/buildout$ vim buildout.cfg
(py27) jafar@plonedev:~/training/buildout$ ./bin/buildout
mr.developer: Queued 'ploneconf.site_sneak' for checkout.
mr.developer: Updated 'ploneconf.site_sneak' with git.
Upgraded:
setuptools version 20.1.1;
restarting.
Generated script '/home/jafar/training/buildout/bin/buildout'.
mr.developer: Queued 'ploneconf.site_sneak' for checkout.
mr.developer: Updated 'ploneconf.site_sneak' with git.
Uninstalling zopepy.
Uninstalling packages.
Running uninstall recipe.
Uninstalling instance.
Uninstalling codeintel.
Updating _mr.developer.
Updating checkversions.
Installing codeintel.
Couldn't find index page for 'ploneconf.site' (maybe misspelled?)
Getting distribution for 'ploneconf.site'.
Couldn't find index page for 'ploneconf.site' (maybe misspelled?)
While:
Installing codeintel.
Getting distribution for 'ploneconf.site'.
Error: Couldn't find a distribution for 'ploneconf.site'.
</code></pre>
<p>Just remembering...</p>
<p>I'm following this training:
<a href="https://training.plone.org/4/eggs1.html#including-the-egg-in-plone" rel="nofollow">https://training.plone.org/4/eggs1.html#including-the-egg-in-plone</a></p>
<p><strong>[UPDATE 2]</strong></p>
<p>I think the problem is in buildout.cfg; in the [sources] section we have this:</p>
<pre><code>[sources]
ploneconf.site = fs ploneconf.site full-path=${buildout:directory}/src/ploneconf.site
</code></pre>
<p>I tried replacing ${buildout:directory} with the full path:</p>
<pre><code>/home/jafar/training/buildout/src/ploneconf.site
</code></pre>
<p>And yet, it didn't work!</p>
<p>The content of product generated by mrbob</p>
<pre><code>jafar@plonedev:~/training/buildout/src/ploneconf.site$ ls
bootstrap-buildout.py bootstrap-buildout.pyo CHANGES.rst docs README.rst setup.py travis.cfg
bootstrap-buildout.pyc buildout.cfg CONTRIBUTORS.rst MANIFEST.in setup.cfg src
</code></pre>
| 1 | 2016-08-22T19:59:13Z | 39,123,663 | <p>I inserted this in the [buildout] section, after eggs and zcml:</p>
<pre><code>develop =
src/ploneconf.site
</code></pre>
<p>As seen in <a href="http://docs.plone.org/4/en/old-reference-manuals/buildout/creatingpackage.html" rel="nofollow">Plone Docs</a>, the way teaching in training <a href="https://training.plone.org/4/eggs1.html#including-the-egg-in-plone" rel="nofollow">Mastering Plone 4</a> didn't work.</p>
| 1 | 2016-08-24T12:42:26Z | [
"python",
"plone",
"plone-4.x"
] |
How should print all the indices of a matrix which are the maximum number of that matrix? | 39,087,857 | <p>For example, consider the following matrix:
<a href="http://i.stack.imgur.com/BkWrY.jpg" rel="nofollow">enter image description here</a></p>
<p>Now I want to display all the indices of the maximum number in the matrix, not just a single index of the max.</p>
| -4 | 2016-08-22T20:06:32Z | 39,088,141 | <pre class="lang-python prettyprint-override"><code>import numpy as np
mat = np.array([[2,3,2], [7,7,6], [2,7,3]])
print(mat)
max_indices = np.where(mat == np.amax(mat))
print(max_indices)
index_max = mat[max_indices]
print(index_max)
</code></pre>
<p>Output:</p>
<pre><code>[[2 3 2]
[7 7 6]
[2 7 3]]
(array([1, 1, 2]), array([0, 1, 1])) # first array: row indices, second: column indices
[7 7 7]
</code></pre>
| 1 | 2016-08-22T20:25:50Z | [
"python",
"numpy",
"matrix"
] |
Canonical way to run Flask app locally | 39,087,917 | <p>The official <a href="http://flask.pocoo.org/docs/0.11/quickstart/" rel="nofollow">Flask documentation</a> uses <code>flask run</code> or <code>python -m flask run</code>, both of which require that <code>FLASK_APP</code> be set. Most other tutorials I've seen, however, simply use <code>python app.py</code>, which doesn't require the extra step and which has worked well for me so far. </p>
<p>What are the advantages of <code>flask run</code>, if any? I want to make sure that the alternative doesn't lead to a bug that I can't figure out later on.</p>
| 0 | 2016-08-22T20:11:03Z | 39,088,836 | <p>Unless you have a reason not to (and you probably don't), use <code>flask run</code> to run the development server. It is what is supported going forward. Paraphrasing from the <a href="http://flask.pocoo.org/docs/0.11/server/#in-code" rel="nofollow">docs</a>:</p>
<blockquote>
<p>from Flask 0.11 onward the <code>flask</code> command is recommended. The reason for this is that due to how the dev server's reload mechanism works there are some bizarre side-effects when using <code>app.run</code> (like executing certain code twice, sometimes crashing without message or dying when a syntax or import error happens).</p>
</blockquote>
<p>To solve these problems, the <code>flask</code> command separates the app from the code that imports the app and runs the server. The <code>flask.run</code> method still exists because none of those issues were critical, only confusing. It may be fully deprecated in the future.</p>
<p>Besides the <code>run</code> command, it also provides the ability to add other useful commands that can be run inside the app context, in place of separate extensions or scripts.</p>
<p>As always, the same warning still applies: do not run the development server in production.</p>
| 2 | 2016-08-22T21:15:41Z | [
"python",
"flask"
] |
Xlsxwriter - Writing a string to a cell given parameters | 39,087,975 | <p>I am looking to write a string to a cell using Xlsxwriter, however, it seems that I can only write to a specific cell in the following formats:</p>
<pre><code>worksheet.write(0, 0, 'I like pie')
worksheet.write('A1', 'I like pie')
</code></pre>
<p>I am first writing a dataframe to the excel worksheet and then adding a footer at the bottom ('I like pie'). I would like the footer to be written in the cell below the last line of the dataframe without manually telling python what exact cell to write to.</p>
<p>Any ideas? Maybe an if statement? </p>
| 1 | 2016-08-22T20:14:32Z | 39,088,677 | <p>Use <code>df.shape</code> to get the number of rows for your dataframe, then use this number to specify the row for your footer:</p>
<pre><code>nrows = df.shape[0]
worksheet.write(nrows, 0, 'I like pie')
# or: worksheet.write('A{}'.format(nrows+1), 'I like pie')
</code></pre>
| 2 | 2016-08-22T21:01:02Z | [
"python",
"pandas",
"xlsxwriter"
] |
Flask application cannot find gpg | 39,088,048 | <p>I'm interfacing gpg in a flask application with python-gnupg. The module is installed in a virtualenv together with the rest of my application. When running I receive a 500 internal server error, and the exception is:</p>
<pre><code>File "./myproject/views/settings.py", line 257, in settings_keys_add
gpg = gnupg.GPG()
File "/home/puse/myproject/myproject/lib/python3.5/site-packages/gnupg.py", line 733, in __init__
p = self._open_subprocess(["--version"])
File "/home/puse/myproject/myproject/lib/python3.5/site-packages/gnupg.py", line 786, in _open_subprocess
startupinfo=si)
File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'gpg'
</code></pre>
<p>gpg is installed and is working:</p>
<pre><code>puse@puse ~/puse> which gpg
/usr/bin/gpg
puse@puse ~/puse> gpg --gen-key
gpg (GnuPG) 1.4.20; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection?
gpg: Interrupt caught ... exiting
</code></pre>
<p>I can also get it to work from python within the virtualenv:</p>
<pre><code>Python 3.5.2 (default, Jul 5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import gnupg
>>> gpg = gnupg.GPG()
>>> a = gpg.scan_keys("/opt/keys/2a798434-ebb3-4dc6-9f76-fd46f0cce6fa")
</code></pre>
<p>I suspect it has something to do with what user the application is running under, but the application is running under my user, but with group set to www-data. I have confirmed this by running ps.</p>
<p>systemd .service file for the application.</p>
<pre><code>[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=puse
Group=www-data
WorkingDirectory=/home/path
Environment="PATH=/home/path"
ExecStart=/home/path/uwsgi --ini config.ini
[Install]
WantedBy=multi-user.target
</code></pre>
| 1 | 2016-08-22T20:18:59Z | 39,088,161 | <p>you override (not append to) the PATH variable with this line:</p>
<pre><code>Environment="PATH=/home/path"
</code></pre>
<p>change it to</p>
<pre><code>Environment="PATH=/home/path:/usr/bin:/bin"
</code></pre>
<p>to be able to use standard commands.</p>
<p>BTW check if /home/path exists, I suppose not, in that case just set:</p>
<pre><code>Environment="PATH=/usr/bin:/bin"
</code></pre>
| 1 | 2016-08-22T20:27:05Z | [
"python",
"nginx",
"flask",
"gnupg",
"ubuntu-server"
] |
get stacktrace of a python program for debugging | 39,088,101 | <p>In order to get the stacktrace of a python program, I am trying to follow <a href="http://grapsus.net/blog/post/Low-level-Python-debugging-with-GDB" rel="nofollow">this example</a>. In the article, the author invokes the gdb as follows. However, the python version of my environment is <code>python 3.4.4</code>. When I type </p>
<pre><code>python3.4-dbg testmyplotlib2.py &
</code></pre>
<p>The error message is <code>python3.4-dbg: command not found</code>. What's the right way to get a stacktrace using gdb?</p>
<p><a href="http://i.stack.imgur.com/4N5nV.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/4N5nV.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-22T20:22:47Z | 39,088,207 | <p>What OS are you on? It looks like you need to install python3.4-dbg. If you are on Linux, you will need to enter:</p>
<pre><code>sudo apt-get install python3.4-dbg
</code></pre>
| 1 | 2016-08-22T20:30:00Z | [
"python",
"python-3.x",
"gdb"
] |
get stacktrace of a python program for debugging | 39,088,101 | <p>In order to get the stacktrace of a python program, I am trying to follow <a href="http://grapsus.net/blog/post/Low-level-Python-debugging-with-GDB" rel="nofollow">this example</a>. In the article, the author invokes the gdb as follows. However, the python version of my environment is <code>python 3.4.4</code>. When I type </p>
<pre><code>python3.4-dbg testmyplotlib2.py &
</code></pre>
<p>The error message is <code>python3.4-dbg: command not found</code>. What's the right way to get a stacktrace using gdb?</p>
<p><a href="http://i.stack.imgur.com/4N5nV.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/4N5nV.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-22T20:22:47Z | 39,088,457 | <p>GDB is an excellent program for debugging, but if printing a traceback is the only reason you are installing GDB, do not do it; that is way overkill. You can just <code>import traceback</code> and </p>
<ol>
<li>use <code>traceback.format_stack()</code> to get a list of the calls that lead to the current location in the program</li>
<li>use <code>traceback.print_stack()</code> to print it to the command line</li>
<li>use <code>print(traceback.format_exc())</code> to print what led to the current exception (works in an <code>except</code> clause)</li>
</ol>
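<p>A minimal sketch of all three uses, relying only on the standard library:</p>

```python
import traceback

def inner():
    # 1. capture the call stack that led here as a list of formatted strings
    return traceback.format_stack()

def outer():
    return inner()

stack = outer()
print("".join(stack[-2:]))  # the outer() and inner() frames

# 3. format the traceback of the exception currently being handled
try:
    1 / 0
except ZeroDivisionError:
    tb_text = traceback.format_exc()
    print(tb_text)
```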
| 0 | 2016-08-22T20:46:09Z | [
"python",
"python-3.x",
"gdb"
] |
Using webdriver execute-script on jquery does not work | 39,088,119 | <p>I was trying to use webdriver's execute_script() to run a jquery, but without luck. Below are the steps I performed:</p>
<ol>
<li><p>From Selenium/Webdriver, run execute_script() as below:</p>
<pre><code>jquery_string = 'return $(\'[id="ClusterMembers:ClusterMembersScreen:ttlBar"]\')'
web_element = driver.execute_script(jquery_string)
</code></pre>
<p>It returned error as:</p>
<blockquote>
<p>WebDriverException: Message: $ is not defined </p>
<p>Build info: version: '2.53.0'
...</p>
</blockquote></li>
<li><p>Hold the above function call in the debugger,
go to the running Firefox browser on the selenium client machine,
and type the same jquery string into the Developer Console as below (after typing "allow pasting"):</p>
<pre><code>$('[id="ClusterMembers:ClusterMembersScreen:ttlBar"]')
</code></pre>
<p>=> The console returns as expected</p></li>
<li><p>Added the following codes before running the same above code as (1):</p>
<pre><code>driver.execute_script(open('Cjquery-2.2.4.js').read())
TestCase.assertTrue(cluster_page.driver_util.driver.execute_script("return jQuery.active == 0"))
</code></pre>
<p>=> Returned value of "web_element" from the debugger is not a webelement object, but a <strong>dictionary</strong> of the following:</p>
<pre><code>{'mouseout': 'function ( data, fn ) {\n\t\treturn arguments.length > 0 ?\n\t\t\tthis.on( name, null, data, fn ) :\n\t\t\tthis.trigger( name );\n\t}', ....
</code></pre>
<p>I have tried using the latest version of jquery (3.1), but still got the same failure.</p></li>
</ol>
<p>I was wondering if the issue has something to do with the <strong>"allow pasting"</strong> required by Firefox, or something else that I have been missing? I'd appreciate it if anyone could shed some light.</p>
| 0 | 2016-08-22T20:24:03Z | 39,110,153 | <p>Instead of using $ try using this:</p>
<pre><code>jquery_string = "jQuery('[id*=ClusterMembers][id*=ClusterMembersScreen][id*=ttlBar]');"
</code></pre>
<p>Try to avoid using ':' in selectors; it might result in errors.</p>
| 0 | 2016-08-23T20:30:37Z | [
"javascript",
"jquery",
"python",
"selenium",
"webdriver"
] |
Changes in Django View Lagging | 39,088,151 | <p>I'm fairly new to web development, so this might actually be normal behavior, but when I make logic changes in my views, it can take about an hour for those changes to show up on my production site.</p>
<p>The changes are instant if I fire up the localhost. Server is Windows IIS 7.5. HTML, CSS, and JS changes show up instantly, it's the code in the view that takes a while to filter through. Any ideas on what is causing this and how to fix it? </p>
| 0 | 2016-08-22T20:26:27Z | 39,088,264 | <p>Have you tried doing a manual reboot of the application pool where the site is sitting in IIS? The documentation might not be exact for the version, but it should explain it well enough to give you an idea about what's going on:</p>
<p><a href="https://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx" rel="nofollow">https://technet.microsoft.com/en-us/library/cc753179(v=ws.10).aspx</a></p>
<p>Basically, if you have the application pool recycle every 3 hours, when you make a change it could take up to 3 hours for the change to take effect. You also don't want it recycling every 5 minutes either. But you can do a manual recycle if you really want to see your changes.</p>
| 1 | 2016-08-22T20:33:56Z | [
"python",
"django"
] |
FutureWarning when comparing a NumPy object to "None" | 39,088,173 | <p>I have a function that receives some arguments, plus some optional arguments. In it, the action taken is dependent upon whether the optional argument <code>c</code> was filled:</p>
<pre><code>def func(a, b, c = None):
doStuff()
if c != None:
doOtherStuff()
</code></pre>
<p>If <code>c</code> is not passed, then this works fine. However, in my context, if <code>c</code> <em>is</em> passed, it will always be a <code>numpy</code> array. And comparing <code>numpy</code> arrays to <code>None</code> yields the following warning:</p>
<pre><code>FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
</code></pre>
<p>So, what is the cleanest and most general way to check whether or not <code>c</code> was passed or not without comparing to <code>None</code>?</p>
| 5 | 2016-08-22T20:27:41Z | 39,088,194 | <p>Use <code>if c is not None</code> instead. In addition to avoiding the warning, this is <a href="https://www.python.org/dev/peps/pep-0008/#programming-recommendations">generally considered best-practice</a>.</p>
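<p>A minimal sketch of the pattern (shown here with a plain list so the example stays self-contained; the identity check behaves the same way when <code>c</code> is a numpy array and never triggers an elementwise comparison):</p>

```python
def func(a, b, c=None):
    total = a + b
    # `is not None` is an identity check, so no elementwise
    # comparison (and no FutureWarning) is ever triggered
    if c is not None:
        total += sum(c)
    return total

print(func(1, 2))          # 3
print(func(1, 2, [4, 5]))  # 12
```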
| 10 | 2016-08-22T20:29:11Z | [
"python",
"arrays",
"numpy"
] |
how to integrate pyspark on jupyter notebook | 39,088,189 | <p>I have followed instructions to integrate pyspark with jupyter, but after I was done I was only able to run pyspark from the command prompt.</p>
<p>Basically, when I use ipython in the command it works, but when I change it to jupyter it says:</p>
<pre><code>'"Jupyter "' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Please help, SO people; I am really exhausted now.</p>
<p>Note: I followed <a href="http://jmdvinodjmd.blogspot.in/2015/08/installing-ipython-notebook-with-apache.html" rel="nofollow">this</a> tutorial to do my integration.</p>
<p>So far I have run these commands:</p>
<pre><code> set PYSPARK_DRIVER_PYTHON=ipython
set PYSPARK_DRIVER_PYTHON_OPTS = notebook
pyspark
</code></pre>
<p>It opens pyspark in cmd itself.</p>
| 0 | 2016-08-22T20:28:51Z | 39,088,446 | <p>Windows <code>set</code> command does not accept spaces before the <code>=</code> sign.</p>
<p>fix this like that:</p>
<pre><code>set PYSPARK_DRIVER_PYTHON=ipython
set PYSPARK_DRIVER_PYTHON_OPTS=notebook
pyspark
</code></pre>
<p>The system wants to run the <code>"Jupyter %PYSPARK_DRIVER_PYTHON_OPTS%"</code> executable but instead tries to run '"Jupyter "'</p>
<p>hence the error you're getting.</p>
| 1 | 2016-08-22T20:45:32Z | [
"python",
"pyspark"
] |
print full item in multiple lists | 39,088,219 | <p>I'm trying to print two lists, but it only prints the first letter of each item in the lists;</p>
<pre><code>lst1 = ['hello', 'hi', 'sup']
lst2 = ['bye', 'cya', 'goodbye']
for item in [lst1, lst2]:
print 'Your options are: ' + ' '.join(['-{0}'.format(*x) for x in item])
</code></pre>
<p>Result:</p>
<pre><code>Your options are: -h -h -s
Your options are: -b -c -g
</code></pre>
<p>How do I print the string in full?</p>
| 0 | 2016-08-22T20:30:57Z | 39,088,280 | <p>Removing <code>*</code> from <code>format</code> will work for you:</p>
<pre><code>>>> for item in [lst1, lst2]:
... print 'Your options are: ' + ' '.join(['-{0}'.format(x) for x in item])
...
Your options are: -hello -hi -sup
Your options are: -bye -cya -goodbye
</code></pre>
<p><strong>Explanation</strong>: <code>*my_list</code> unpacks the list into separate arguments. Since a string is also a sequence of <code>chars</code>, <code>'-{0}'.format(*x)</code> becomes <code>'-{0}'.format('h', 'e', 'l', 'l', 'o')</code>. Hence it only inserts the argument at index 0, i.e. <code>h</code>.</p>
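<p>A quick illustration of the difference:</p>

```python
s = "hello"

# *s unpacks the string into separate character arguments,
# so {0} only picks up the first one
print('-{0}'.format(*s))  # -h

# without unpacking, the whole string fills {0}
print('-{0}'.format(s))   # -hello
```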
| 4 | 2016-08-22T20:34:44Z | [
"python",
"string",
"list"
] |
print full item in multiple lists | 39,088,219 | <p>I'm trying to print two lists, but it only prints the first letter of each item in the lists;</p>
<pre><code>lst1 = ['hello', 'hi', 'sup']
lst2 = ['bye', 'cya', 'goodbye']
for item in [lst1, lst2]:
print 'Your options are: ' + ' '.join(['-{0}'.format(*x) for x in item])
</code></pre>
<p>Result;</p>
<pre><code>Your options are: -h -h -s
Your options are: -b -c -g
</code></pre>
<p>How do I print the string in full?</p>
| 0 | 2016-08-22T20:30:57Z | 39,088,556 | <p>Alternative;</p>
<pre><code>>>> print '\n'.join('Your options are: -%s' % ' -'.join(x) for x in (lst1, lst2))
Your options are: -hello -hi -sup
Your options are: -bye -cya -goodbye
</code></pre>
<h3>How it works</h3>
<p>To generate a line of output:</p>
<pre><code>>>> 'Your options are: -%s' % ' -'.join(lst1)
'Your options are: -hello -hi -sup'
</code></pre>
<p>To generate the complete output, the above is done for both <code>lst1</code> and <code>lst2</code> and combined with <code>'\n'.join(...)</code>.</p>
| 0 | 2016-08-22T20:52:50Z | [
"python",
"string",
"list"
] |
Python with Selenium - selecting a button or text action | 39,088,244 | <p>I am, poorly, trying to 1. Click a button or 2. Check for text and then execute an action. I think I am just not coding this correctly.</p>
<p>eg</p>
<pre><code>if driver.find_element_by_class_name('classOne').click()
elif:
"No Item" in driver.find_element_by_class_name('classTwo').driver.get(self_base_url)
</code></pre>
<p>This seems pretty simplistic, and I'm sure I've done something horribly wrong. This 'should' work, but isn't? </p>
| 0 | 2016-08-22T20:32:07Z | 39,088,393 | <p>If the <code>driver</code> can't find an element, it will raise <a href="http://selenium-python.readthedocs.io/api.html#selenium.common.exceptions.NoSuchElementException" rel="nofollow"><code>NoSuchElementException</code></a>.</p>
<p>In your case, you can do:</p>
<pre><code>x = driver.find_elements_by_class_name('classOne')
if len(x) > 0:
# click the first one found
x[0].click()
else:
print('No item x was found.')
</code></pre>
<p>Notice that I changed it to <code>find_elements_by_class_name</code> from <code>find_element_by_class_name</code>. This finds all elements if there are any and returns a list.</p>
| 2 | 2016-08-22T20:42:42Z | [
"python",
"selenium",
"selenium-webdriver"
] |
Strange Logic Behavior with Variable and Number | 39,088,311 | <p>Say I define <code>a</code> and <code>b</code> as follows:</p>
<pre><code>a = 1
b = 1
</code></pre>
<p>Then I test:</p>
<pre><code>a == 1
#True
5>4
#True
a==1 & b==1
#True
5>4 & 4>3
#True
a==1 & 5>4
#False
</code></pre>
<p>What is going on with the last one? I would like to be able to test the last inequality and get the result of <code>True</code>.</p>
| 0 | 2016-08-22T20:37:04Z | 39,088,339 | <p>In Python <code>&</code> is for bit operations with numbers, not logic. Use <code>and</code> and <code>or</code> instead.</p>
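<p>A short illustration of the difference — note that <code>&</code> also binds more tightly than the comparison operators:</p>

```python
a = 1

# '&' binds tighter than '==' and '>', so this parses as a == (1 & 5) > 4,
# i.e. the chained comparison (a == 1) and (1 > 4)
print(a == 1 & 5 > 4)    # False

# 'and' has lower precedence than the comparisons, so this does what you expect
print(a == 1 and 5 > 4)  # True
```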
| 6 | 2016-08-22T20:38:53Z | [
"python",
"logic",
"boolean-logic",
"inequality"
] |
Strange Logic Behavior with Variable and Number | 39,088,311 | <p>Say I define <code>a</code> and <code>b</code> as follows:</p>
<pre><code>a = 1
b = 1
</code></pre>
<p>Then I test:</p>
<pre><code>a == 1
#True
5>4
#True
a==1 & b==1
#True
5>4 & 4>3
#True
a==1 & 5>4
#False
</code></pre>
<p>What is going on with the last one? I would like to be able to test the last inequality and get the result of <code>True</code>.</p>
| 0 | 2016-08-22T20:37:04Z | 39,088,938 | <blockquote>
<p>Unlike C, all comparison operations in Python have the same priority, which is lower than that of any arithmetic, shifting or bitwise operation. Also unlike C, expressions like a < b < c have the interpretation that is conventional in mathematics:</p>
</blockquote>
<p>Which means:</p>
<pre><code>a==1 & 5>4 is equal to
a == ( 1 & 5 ) > 4
a == 1 > 4          (a chained comparison: (a == 1) and (1 > 4))
True and False
False
</code></pre>
| -1 | 2016-08-22T21:22:12Z | [
"python",
"logic",
"boolean-logic",
"inequality"
] |
Python 3.5 regular expression matching for directories | 39,088,415 | <p>I am using the Python 3.5 <code>re</code> module with this code:</p>
<pre><code>>>> test
'\\\\192.168.1.2\\shared\\Department\\Travel\\FY 2015\\Travel Expense Statement Jul 25 2019.pdf'
</code></pre>
<p>I want to return <code>Department\Travel\FY 2015\Travel Expense Statement Jul 25 2019.pdf</code>. I have tried the following regex, but keep getting errors such as <code>sre_constants.error: nothing to repeat at position 12</code></p>
<pre><code>x=re.compile( "shared\\[^\\](*?)" )
print( x.findall(test) )
</code></pre>
<p>or the empty result <code>['']</code> for:</p>
<pre><code>x=re.compile( "shared\\\(.*?)" )
</code></pre>
<p>How can I accomplish this operation?</p>
| 2 | 2016-08-22T20:43:45Z | 39,088,485 | <p>The problem with your <em>regular expression</em> is very simple, remove the <code>?</code> character in your second regular expression. You just need <code>.*</code> that matches zero or more characters. </p>
<p><code>*?</code> together means a lazy quantifier that matches as little as possible, so if you use <code>.*?</code>, it means "zero or more any characters, but as few as possible". As for the first regular expression, the <code>*</code> does not have a preceding atom to which it could apply to, hence the error.</p>
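<p>A sketch of the fixed pattern, written as a raw string against the question's test value:</p>

```python
import re

test = '\\\\192.168.1.2\\shared\\Department\\Travel\\FY 2015\\Travel Expense Statement Jul 25 2019.pdf'

# greedy .* (no '?') captures everything after 'shared\'
m = re.search(r'shared\\(.*)', test)
print(m.group(1))  # Department\Travel\FY 2015\Travel Expense Statement Jul 25 2019.pdf
```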
<hr>
<p>In general case, you should rather use the <code>ntpath</code> module as in <a href="http://stackoverflow.com/a/39088502/918959">kennytm's answer there</a>.</p>
| 2 | 2016-08-22T20:47:40Z | [
"python",
"regex"
] |
Python 3.5 regular expression matching for directories | 39,088,415 | <p>I am using the Python 3.5 <code>re</code> module with this code:</p>
<pre><code>>>> test
'\\\\192.168.1.2\\shared\\Department\\Travel\\FY 2015\\Travel Expense Statement Jul 25 2019.pdf'
</code></pre>
<p>I want to return <code>Department\Travel\FY 2015\Travel Expense Statement Jul 25 2019.pdf</code>. I have tried the following regex, but keep getting errors such as <code>sre_constants.error: nothing to repeat at position 12</code></p>
<pre><code>x=re.compile( "shared\\[^\\](*?)" )
print( x.findall(test) )
</code></pre>
<p>or the empty result <code>['']</code> for:</p>
<pre><code>x=re.compile( "shared\\\(.*?)" )
</code></pre>
<p>How can I accomplish this operation?</p>
| 2 | 2016-08-22T20:43:45Z | 39,088,502 | <p>You shouldn't use regex for this. Instead, use the <a href="https://docs.python.org/3/library/os.path.html#os.path.splitdrive" rel="nofollow"><code>ntpath</code> module</a> (or <code>os.path</code> if you are sure the script will only run on Windows):</p>
<pre><code>>>> s = '\\\\192.168.1.2\\shared\\Department\\Travel\\FY 2015\\Travel Expense Statement Jul 25 2019.pdf'
>>> import ntpath
>>> ntpath.splitdrive(s)
('\\\\192.168.1.2\\shared', '\\Department\\Travel\\FY 2015\\Travel Expense Statement Jul 25 2019.pdf')
>>> ntpath.splitdrive(s)[1][1:]
'Department\\Travel\\FY 2015\\Travel Expense Statement Jul 25 2019.pdf'
</code></pre>
| 3 | 2016-08-22T20:48:48Z | [
"python",
"regex"
] |
tensorflow periodic padding | 39,088,489 | <p>In tensorflow I cannot find a straightforward possibility to do a convolution (<a href="https://www.tensorflow.org/versions/master/api_docs/python/nn.html#conv2d" rel="nofollow">tf.nn.conv2d</a>) with periodic boundary conditions.</p>
<p>E.g. take the tensor</p>
<pre><code>[[1,2,3],
[4,5,6],
[7,8,9]]
</code></pre>
<p>and any 3x3 filter. A convolution with periodic boundary conditions could in principle be done by doing a periodic padding to 5x5</p>
<pre><code>[[9,7,8,9,7],
[3,1,2,3,1],
[6,4,5,6,4],
[9,7,8,9,7],
[3,1,2,3,1]]
</code></pre>
<p>and subsequently a convolution with the filter in "valid" mode. However, the function <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/array_ops.html#pad" rel="nofollow">tf.pad</a> unfortunately does not support periodic padding.</p>
<p>Is there a simple workaround? </p>
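<p>For reference, this wrap-around padding can be produced directly in NumPy, which makes the intended 5x5 result easy to verify:</p>

```python
import numpy as np

a = np.arange(1, 10).reshape(3, 3)

# periodic ("wrap-around") padding of width 1 on every side
padded = np.pad(a, 1, mode='wrap')
print(padded)
```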
| 0 | 2016-08-22T20:47:54Z | 39,088,910 | <p>The following should work for your case :</p>
<pre><code>import tensorflow as tf
a = tf.constant([[1,2,3],[4,5,6],[7,8,9]])
b = tf.tile(a, [3, 3])
result = b[2:7, 2:7]
sess = tf.InteractiveSession()
print(result.eval())
# prints the following
array([[9, 7, 8, 9, 7],
[3, 1, 2, 3, 1],
[6, 4, 5, 6, 4],
[9, 7, 8, 9, 7],
[3, 1, 2, 3, 1]], dtype=int32)
</code></pre>
<p>As noted in the comments, this is a little inefficient in terms of memory. If memory is an issue for you, but are willing to spend some compute, the following will also work :</p>
<pre><code>pre = tf.constant([[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
post = tf.transpose(pre)
result = tf.matmul(tf.matmul(pre, a), post)
print(result.eval())
</code></pre>
| 2 | 2016-08-22T21:20:25Z | [
"python",
"tensorflow"
] |
repetition in regular expression in python | 39,088,522 | <p>I've got a file with lines for example:</p>
<pre><code>aaa$bb$ccc$ddd$eee
fff$ggg$hh$iii$jj
</code></pre>
<p>I need to take what is inside $$ so expected result is:</p>
<pre><code> $bb$
$ddd$
$ggg$
$iii$
</code></pre>
<p>My result:</p>
<pre><code>$bb$
$ggg$
</code></pre>
<p>My solution:</p>
<pre><code>m = re.search(r'$(.*?)$', line)
if m is not None:
print m.group(0)
</code></pre>
<p>Any ideas how to improve my regexp? I was trying with the * and + signs, but I'm not sure how to finally create it.
I was searching for a similar post, but couldn't find it :(</p>
| 2 | 2016-08-22T20:50:30Z | 39,088,553 | <p>You can use <a href="https://docs.python.org/2/library/re.html#re.findall" rel="nofollow"><strong><code>re.findall</code></strong></a> with <code>r'\$[^$]+\$'</code> regex: </p>
<pre><code>import re
line = """aaa$bb$ccc$ddd$eee
fff$ggg$hh$iii$jj"""
m = re.findall(r'\$[^$]+\$', line)
print(m)
# => ['$bb$', '$ddd$', '$ggg$', '$iii$']
</code></pre>
<p>See <a href="http://ideone.com/Himvv5" rel="nofollow">Python demo</a></p>
<p>Note that you need to escape <code>$</code>s and remove the capturing group for the <code>re.findall</code> to return the <code>$...$</code> substrings, not just what is inside <code>$</code>s.</p>
<p><strong>Pattern details</strong>:</p>
<ul>
<li><code>\$</code> - a dollar symbol (literal)</li>
<li><code>[^$]+</code> - 1 or more symbols other than <code>$</code></li>
<li><code>\$</code> - a literal dollar symbol.</li>
</ul>
<p><strong>NOTE</strong>: The <code>[^$]</code> is a <em>negated character class</em> that matches any char but the one(s) defined in the class. Using a negated character class here speeds up matching since <code>.*?</code> lazy dot pattern expands at each position in the string between two <code>$</code>s, thus taking many more steps to complete and return a match. </p>
<p>And a variation of the pattern to get only the texts inside <code>$...$</code>s:</p>
<pre><code>re.findall(r'\$([^$]+)\$', line)
^ ^
</code></pre>
<p>See <a href="http://ideone.com/2odmAr" rel="nofollow">another Python demo</a>. Note the <code>(...)</code> capturing group added so that <code>re.findall</code> could only return what is <em>captured</em>, and not what is <em>matched</em>.</p>
| 6 | 2016-08-22T20:52:47Z | [
"python",
"regex",
"python-2.7"
] |
repetition in regular expression in python | 39,088,522 | <p>I've got a file with lines for example:</p>
<pre><code>aaa$bb$ccc$ddd$eee
fff$ggg$hh$iii$jj
</code></pre>
<p>I need to take what is inside $$ so expected result is:</p>
<pre><code> $bb$
$ddd$
$ggg$
$iii$
</code></pre>
<p>My result:</p>
<pre><code>$bb$
$ggg$
</code></pre>
<p>My solution:</p>
<pre><code>m = re.search(r'$(.*?)$', line)
if m is not None:
print m.group(0)
</code></pre>
<p>Any ideas how to improve my regexp? I was trying with the * and + signs, but I'm not sure how to finally create it.
I was searching for a similar post, but couldn't find it :(</p>
| 2 | 2016-08-22T20:50:30Z | 39,088,568 | <p><code>re.search</code> finds only the first match. Perhaps you'd want <code>re.findall</code>, which returns list of strings, or <code>re.finditer</code> that returns iterator of match objects. Additionally, you must escape <code>$</code> to <code>\$</code>, as unescaped <code>$</code> means "end of line".</p>
<hr>
<p>Example:</p>
<pre><code>>>> re.findall(r'\$.*?\$', 'aaa$bb$ccc$ddd$eee')
['$bb$', '$ddd$']
>>> re.findall(r'\$(.*?)\$', 'aaa$bb$ccc$ddd$eee')
['bb', 'ddd']
</code></pre>
<hr>
<p>One more improvement would be to use <code>[^$]*</code> instead of <code>.*?</code>; the former means "zero or more characters other than <code>$</code>"; this can potentially avoid pathological backtracking behaviour.</p>
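<p>For example, with the negated class:</p>

```python
import re

line = 'aaa$bb$ccc$ddd$eee'
# [^$]* can never cross a '$', so no lazy expansion is needed
print(re.findall(r'\$[^$]*\$', line))  # ['$bb$', '$ddd$']
```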
| 3 | 2016-08-22T20:53:17Z | [
"python",
"regex",
"python-2.7"
] |
repetition in regular expression in python | 39,088,522 | <p>I've got a file with lines for example:</p>
<pre><code>aaa$bb$ccc$ddd$eee
fff$ggg$hh$iii$jj
</code></pre>
<p>I need to take what is inside $$ so expected result is:</p>
<pre><code> $bb$
$ddd$
$ggg$
$iii$
</code></pre>
<p>My result:</p>
<pre><code>$bb$
$ggg$
</code></pre>
<p>My solution:</p>
<pre><code>m = re.search(r'$(.*?)$', line)
if m is not None:
print m.group(0)
</code></pre>
<p>Any ideas how to improve my regexp? I was trying with the * and + signs, but I'm not sure how to finally create it.
I was searching for a similar post, but couldn't find it :(</p>
| 2 | 2016-08-22T20:50:30Z | 39,088,578 | <p>Apart from escaping the <code>$</code> (unescaped, it anchors at the end of the line), your regex is fine. <a href="https://docs.python.org/3/library/re.html#re.search" rel="nofollow"><code>re.search</code></a> only finds the first match in a line. You are looking for <a href="https://docs.python.org/3/library/re.html#re.findall" rel="nofollow"><code>re.findall</code></a>, which finds all non-overlapping matches. That last bit is important for you since you have the same start and end delimiter. Note that <code>re.findall</code> returns plain strings, so there is no <code>group()</code> call:</p>
<pre><code>for m in re.findall(r'\$.*?\$', line):
    print m
</code></pre>
| 1 | 2016-08-22T20:54:11Z | [
"python",
"regex",
"python-2.7"
] |
Python Subtracting Elements in a Lists from Previous Element | 39,088,546 | <p>I have a loop that produces multiple lists such as these:</p>
<pre><code> [1,6,2,8,3,4]
[8,1,2,3,7,2]
[9,2,5,6,1,4]
</code></pre>
<p><em>For each list, I want to subtract the first two elements, and then use that value to then subtract the third element from.</em> </p>
<p>For example, the first list should end up looking like:</p>
<pre><code> [-5, 4,-6, 5,-1]
</code></pre>
<p>I have tried to manually do this, but there are too many lists to do this and it would take too much time. </p>
<p>How would I do that in the least amount of lines of code?</p>
| -1 | 2016-08-22T20:52:22Z | 39,088,785 | <p>From your updated example, it seems like, given a list <code>[a, b, c, d, ...]</code> you want <code>[a-b, b-c, c-d, d-e, ...]</code> as a result. For this, you should <code>zip</code> the list with itself, offset by one position, and subtract the elements in the pairs.</p>
<pre><code>lst = [1,6,2,8,3,4]
res = [x-y for x, y in zip(lst, lst[1:])]
print(res) # [-5, 4, -6, 5, -1]
</code></pre>
<p>If the lists are much longer, you might instead create an iterator, use <code>tee</code> to duplicate that iterator, and advance one of the iterators one position with <code>next</code>:</p>
<pre><code>import itertools
i1, i2 = itertools.tee(iter(lst))
next(i2)
res = [x-y for x, y in itertools.izip(i1, i2)] # or just zip in Python 3
</code></pre>
| 3 | 2016-08-22T21:10:06Z | [
"python",
"list",
"loops"
] |
Python Subtracting Elements in a Lists from Previous Element | 39,088,546 | <p>I have a loop that produces multiple lists such as these:</p>
<pre><code> [1,6,2,8,3,4]
[8,1,2,3,7,2]
[9,2,5,6,1,4]
</code></pre>
<p><em>For each list, I want to subtract the first two elements, and then use that value to then subtract the third element from.</em> </p>
<p>For example, the first list should end up looking like:</p>
<pre><code> [-5, 4,-6, 5,-1]
</code></pre>
<p>I have tried to manually do this, but there are too many lists to do this and it would take too much time. </p>
<p>How would I do that in the least amount of lines of code?</p>
| -1 | 2016-08-22T20:52:22Z | 39,088,859 | <p>If I understand you correctly, you want to optimize your code, so it runs faster. Reducing the lines of code wont improve this since you are looping through lists. </p>
<p>From what I can tell, your problem can't be solved in under n-1 subtractions (where n is the number of input values). However there might be a more effective solution if you know how the lists are created.</p>
| -1 | 2016-08-22T21:17:02Z | [
"python",
"list",
"loops"
] |
Python Subtracting Elements in a Lists from Previous Element | 39,088,546 | <p>I have a loop that produces multiple lists such as these:</p>
<pre><code> [1,6,2,8,3,4]
[8,1,2,3,7,2]
[9,2,5,6,1,4]
</code></pre>
<p><em>For each list, I want to subtract the first two elements, and then use that value to then subtract the third element from.</em> </p>
<p>For example, the first list should end up looking like:</p>
<pre><code> [-5, 4,-6, 5,-1]
</code></pre>
<p>I have tried to manually do this, but there are too many lists to do this and it would take too much time. </p>
<p>How would I do that in the least amount of lines of code?</p>
| -1 | 2016-08-22T20:52:22Z | 39,089,035 | <pre><code>>>> my_list = [1,6,2,8,3,4]
>>> [my_list[i] - my_list[i+1] for i in range(len(my_list) -1)]
[-5, 4, -6, 5, -1]
</code></pre>
| 0 | 2016-08-22T21:29:53Z | [
"python",
"list",
"loops"
] |
Reboot Raspberry Pi if service not running | 39,088,552 | <p>I am running Bitorrent Sync from my Raspberry Pi. Very occasionally it will- for some reason- go offline. I am trying to run <a href="http://weworkweplay.com/play/rebooting-the-raspberry-pi-when-it-loses-wireless-connection-wifi/" rel="nofollow">a script from crontab that will check the connection</a> but I also want to check the status of the btsync service (<code>sudo service btsync status</code>). How can I put this in a script that will run from Crontab, look at the output, and initiate a reboot if anything other than "running"?</p>
| 1 | 2016-08-22T20:52:44Z | 39,088,852 | <p>Check if the process is running with <code>ps aux</code>. Name the script below <strong>btsync-reboot.sh</strong> (matching the cron entry) and <code>chown</code> it to the user running cron.</p>
<pre><code>#!/bin/sh
echo "check service $(date)" >> /var/log/btsync-check.log
ps auxw | grep btsync | grep -v grep > /dev/null
if [ $? != 0 ]
then
echo "rebooting at $(date)" >> /var/log/btsync-reboot.log
reboot now >> /var/log/btsync-reboot.log
else
echo "btsync is running" >> /var/log/btsync-check.log
fi
</code></pre>
<p>Cron expression: <code>* * * * * sh /path/to/btsync-reboot.sh</code></p>
| 1 | 2016-08-22T21:16:34Z | [
"python",
"raspberry-pi",
"crontab"
] |
Reboot Raspberry Pi if service not running | 39,088,552 | <p>I am running Bitorrent Sync from my Raspberry Pi. Very occasionally it will- for some reason- go offline. I am trying to run <a href="http://weworkweplay.com/play/rebooting-the-raspberry-pi-when-it-loses-wireless-connection-wifi/" rel="nofollow">a script from crontab that will check the connection</a> but I also want to check the status of the btsync service (<code>sudo service btsync status</code>). How can I put this in a script that will run from Crontab, look at the output, and initiate a reboot if anything other than "running"?</p>
| 1 | 2016-08-22T20:52:44Z | 39,088,887 | <p>You could follow the same steps as you do for checkwifi.sh, but make it checkbtsync.sh</p>
<p>Something along these lines should work:</p>
<pre><code>#!/bin/bash
btsyncResult=$(sudo service btsync status)
if [[ $btsyncResult != *"is running"* ]]
then
sudo /sbin/shutdown -r now
fi
</code></pre>
<p>Theoretically, that will take the result of your btsync status command and store it in the variable as text. if the text doesn't contain the word 'running' it shuts down. The rest is just like the checkwifi steps at the link you mentioned:</p>
<p>store it at /usr/local/bin/checkbtsync.sh</p>
<p>then run</p>
<pre><code>sudo chmod 775 /usr/local/bin/checkbtsync.sh
</code></pre>
<p>Then crontab gets this new line:</p>
<pre><code>*/5 * * * * /usr/bin/sudo -H /usr/local/bin/checkbtsync.sh >> /dev/null 2>&1
</code></pre>
| 3 | 2016-08-22T21:18:43Z | [
"python",
"raspberry-pi",
"crontab"
] |
Pymongo $currentDate is not valid for storage | 39,088,579 | <p>This is my first time using pymongo. I have a method that updates Users data in a document. For example, when you log in, it should update the lastLog field, which indicates the last time that user logged in.</p>
<p>This is the method</p>
<pre><code> def update_user(self, mongo, field, value=None):
print self.username
if field != 'lastLog':
result = mongo.db.Users.update(
{'username': self.username},
{
'$set': {
field: value
}
}
)
else:
result = mongo.db.Users.update(
{'username': self.username},
{
'$set': {
'$currentDate': {
'LastLog': {
"$type": "timestamp"
}
}
}
}
)
if result.matched_count != 1:
#NEEDLOG
print "No update performed"
return False
</code></pre>
<p>However, everytime I log in, I get this error: </p>
<pre><code>WriteError: The dollar ($) prefixed field '$currentDate' in '$currentDate' is not valid for storage.
</code></pre>
<p>This is how the document looks in MongoDB</p>
<blockquote>
<p>db.Users.find()
{ "_id" : ObjectId("57b64e1330e6e23b7d050c76"), "username" : "arecalde-contractor", "lastLog" : null, "Name" : "Agustin Recalde", "url" : "/profile/arecalde-contractor", "role" : "Admin", "active" : true, "id" : "9249" }</p>
</blockquote>
<p>I'm pretty sure I'm following the documentation correctly. Did I miss anything? Thanks in advance! </p>
| 0 | 2016-08-22T20:54:13Z | 39,089,450 | <p>I fixed it. I was doing it wrong. Te code inside the else now looks like this</p>
<pre><code> else:
result = mongo.db.Users.update_one(
{'username': self.username},
{
'$currentDate': {
'lastLog': {'$type': 'timestamp'}
}
}
)
</code></pre>
| 0 | 2016-08-22T22:05:24Z | [
"python",
"mongodb",
"pymongo"
] |
conditionally write in array | 39,088,649 | <p>I have a piece of code that goes through and opens several tab-delimited files. Each time the first field of a line in one of these files starts with four digits (e.g. 0012), I would like to write that line into an array (cell by cell).</p>
<p>A sample line that I would like to transfer to an array is shown below:</p>
<pre><code>0029 Montana 1970 0922 1133 5.4 CR 620 Eagle 31.9 CAA - 1.10
</code></pre>
<p>As can be seen, in some cases a field will just contain "-"; I would like that transferred as well. I know I should start like:</p>
<pre><code>with open(each_file) as f:
for line in f:
</code></pre>
<p>but I need some help with what comes afterwards.</p>
| -2 | 2016-08-22T20:59:44Z | 39,091,046 | <p>Use <a href="https://docs.python.org/3/library/csv.html" rel="nofollow">the <code>csv</code> module</a>; it <a href="https://docs.python.org/3/library/csv.html#csv.excel_tab" rel="nofollow">supports tab separated dialects</a> just fine. For example:</p>
<pre><code>import csv
with open(each_file, newline='') as f:
for row in csv.reader(f, dialect='excel-tab'):
# On each iteration row is a list containing the fields from a single record
# properly splitting only on tabs, not spaces, and handling the Excel
# standard quoting rules when a field might contain tabs or newlines
</code></pre>
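<p>Applied to the question's requirement (collecting only the lines whose first field starts with four digits), a minimal sketch — the <code>sample</code> string below stands in for one of the opened files:</p>

```python
import csv
import io

# stand-in for the contents of one tab-delimited file
sample = "0029\tMontana\t1970\t0922\nname\tstate\tyear\tcode\n0012\tEagle\t31.9\t-\n"

rows = []
for row in csv.reader(io.StringIO(sample), dialect='excel-tab'):
    # keep only records whose first field starts with four digits
    if row and len(row[0]) >= 4 and row[0][:4].isdigit():
        rows.append(row)

print(rows)  # [['0029', 'Montana', '1970', '0922'], ['0012', 'Eagle', '31.9', '-']]
```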
| 1 | 2016-08-23T01:37:37Z | [
"python",
"python-3.x",
"csv"
] |
Python: recursive isinstance checking | 39,088,748 | <p>How can one check a complete type signature of a nested abstract class? In this example </p>
<pre><code>In [4]: from typing import Sequence
In [5]: IntSeq = Sequence[int]
In [6]: isinstance([1], IntSeq)
Out[6]: True
In [7]: isinstance([1.0], IntSeq)
Out[7]: True
</code></pre>
<p>I want the last <code>isinstance</code> call to actually return <code>False</code>, while it only checks that the argument is a <code>Sequence</code>. I thought about recursively checking the types, but <code>IntSeq</code> has no public attributes that store the nested type(s):</p>
<pre><code>In [8]: dir(IntSeq)
Out[8]:
['__abstractmethods__',
'__class__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__extra__',
'__format__',
'__ge__',
'__getattribute__',
'__gt__',
'__hash__',
'__init__',
'__le__',
'__len__',
'__lt__',
'__module__',
'__ne__',
'__new__',
'__origin__',
'__parameters__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__slots__',
'__str__',
'__subclasshook__',
'__weakref__',
'_abc_cache',
'_abc_negative_cache',
'_abc_negative_cache_version',
'_abc_registry']
</code></pre>
<p>So it doesn't seem to be straightforward to get nested types. I can't find relevant information in the docs. </p>
<p>P.S.
I need this for a multiple dispatch implementation.</p>
<p><strong>Update</strong></p>
<p>Thanks to the feedback from Alexander Huszagh and Blender we now know that abstract classes in Python 3.5 (might) have two attributes that store the nested types: <code>__parameters__</code> and <code>__args__</code>. The former is there under both Linux (Ubuntu) and Darwin (OS X), though it is empty in case of Linux. The later is only available under Linux and stores the types like <code>__parameters__</code> does under OS X. This implementation details add up to the confusion. </p>
| 4 | 2016-08-22T21:06:05Z | 39,098,131 | <p>I see you're trying to implement something using a module that is still provisional; you're bound to encounter a changing interface if you do this.</p>
<p>Blender noticed that the <code>__parameters__</code> argument holds the parameters to the type; this was true until, I believe <code>3.5.1</code>. In my git clone of the most recent version of Python (<code>3.6.0a4+</code>) <code>__parameters__</code> again holds an empty tuple, <code>__args__</code> holds the argument and <code>__origin__</code> is the first entry in its <code>__bases__</code> attribute:</p>
<pre><code>>>> intSeq = typing.Sequence[int]
>>> intSeq.__args__
(<class 'int'>,)
>>> intSeq.__parameters__
()
>>> intSeq.__origin__
typing.Sequence<+T_co>
</code></pre>
<p>Since <code>3.6</code> is when typing will, from what I understand from <a href="https://www.python.org/dev/peps/pep-0411/" rel="nofollow"><code>PEP 411</code></a>, leave provisional and enter a stable state, this is the version you should be working with to implement your functionality.</p>
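<p>Building on those attributes, a rough sketch of the recursive check (this assumes the 3.6-style <code>__args__</code> attribute and only handles one level of nesting; a full multiple-dispatch implementation would need to recurse into nested generics and check each element type):</p>

```python
import typing
from collections.abc import Sequence

IntSeq = typing.Sequence[int]

def is_int_seq(value, tp=IntSeq):
    # tp.__args__ holds the nested type parameters (3.6-style; some 3.5.x
    # builds expose them via __parameters__ instead)
    elem_type = tp.__args__[0]
    return isinstance(value, Sequence) and all(isinstance(v, elem_type) for v in value)

print(is_int_seq([1]))    # True
print(is_int_seq([1.0]))  # False
```

Note that element-wise checking is O(n) and only works for concrete, finite sequences.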
| 2 | 2016-08-23T10:04:21Z | [
"python",
"python-3.x",
"oop",
"python-3.5",
"type-hinting"
] |
Error: ethereum-serpent; version conflict | 39,088,812 | <p>After installing pythereum and ethereum-serpent, I ran a test using <code>$ pytest -m test_contracts.py</code>, and got the following error. I can't seem to figure out what the real issue is:</p>
<pre><code>================================================================================ test session starts ================================================================================
platform darwin -- Python 2.7.12, pytest-3.0.0, py-1.4.31, pluggy-0.3.1
rootdir: /Users/someone/SmartContract/pyethereum, inifile:
plugins: catchlog-1.2.2, timeout-1.0.0
collected 47942 items
</code></pre>
<pre><code>INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/main.py", line 96, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/main.py", line 130, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/main.py", line 139, in pytest_collection
INTERNALERROR> return session.perform_collect()
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/main.py", line 592, in perform_collect
INTERNALERROR> config=self.config, items=items)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/mark.py", line 84, in pytest_collection_modifyitems
INTERNALERROR> if not matchmark(colitem, matchexpr):
INTERNALERROR> File "/usr/local/lib/python2.7/site-packages/_pytest/mark.py", line 124, in matchmark
INTERNALERROR> return eval(markexpr, {}, MarkMapping(colitem.keywords))
INTERNALERROR> File "<string>", line 1, in <module>
INTERNALERROR> AttributeError: 'bool' object has no attribute 'py'
</code></pre>
| 0 | 2016-08-22T21:13:01Z | 39,095,973 | <p>This error seems to be specific with ethereum-serpent-1.6.7 This error was solved by upgrading to the latest ethereum-serpent(2.02.), through:</p>
<pre><code>pip install --upgrade ethereum-serpent
</code></pre>
| 0 | 2016-08-23T08:23:09Z | [
"python",
"python-2.7",
"testing",
"py.test",
"ethereum"
] |