title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list
Is there any method to combine the two functions into one?
39,405,840
<p>How can I use recursion to combine the two functions into one? I know this is a solution to "move_zeros", but I am posting it here because I want to learn how to use recursion and how to solve problems with it.</p> <p>func 1</p> <pre><code>def move_zeross(array): for i in range(len(array)): if array[i] is not False and array[i] == 0: array.pop(i) array.append(0) return array </code></pre> <p>func 2</p> <pre><code>def move_zeros(array): for i in range(len(array)): if array[i] is not False and array[i] == 0: move_zeross(array) return array </code></pre> <p>I have tried the code below, but a <code>RuntimeError</code> happens:</p> <pre><code>RuntimeError: maximum recursion depth exceeded in cmp </code></pre> <p>Here's the combined code:</p> <pre><code>def move_zeros(array): for i in range(len(array)): if array[i] is not False and array[i] == 0: array.pop(i) array.append(0) move_zeros(array) return array </code></pre>
-1
2016-09-09T07:13:48Z
39,406,272
<p>If you just want to move all zeros in the list to the end, try:</p> <pre><code>def move_zeros(array): result = [x for x in array if x is not 0] return result + [0]*(len(array)-len(result)) </code></pre> <p>Using recursion:</p> <pre><code>def move_zeros(array, n=None): if n is None: n = len(array) - 1 if n &lt; 0: # no more to process return array if array[n] is 0: # move this zero to the end array.append(array.pop(n)) return move_zeros(array, n-1) </code></pre>
1
2016-09-09T07:38:25Z
[ "python", "recursion", "runtime-error" ]
Is there any method to combine the two functions into one?
39,405,840
<p>How can I use recursion to combine the two functions into one? I know this is a solution to "move_zeros", but I am posting it here because I want to learn how to use recursion and how to solve problems with it.</p> <p>func 1</p> <pre><code>def move_zeross(array): for i in range(len(array)): if array[i] is not False and array[i] == 0: array.pop(i) array.append(0) return array </code></pre> <p>func 2</p> <pre><code>def move_zeros(array): for i in range(len(array)): if array[i] is not False and array[i] == 0: move_zeross(array) return array </code></pre> <p>I have tried the code below, but a <code>RuntimeError</code> happens:</p> <pre><code>RuntimeError: maximum recursion depth exceeded in cmp </code></pre> <p>Here's the combined code:</p> <pre><code>def move_zeros(array): for i in range(len(array)): if array[i] is not False and array[i] == 0: array.pop(i) array.append(0) move_zeros(array) return array </code></pre>
-1
2016-09-09T07:13:48Z
39,418,203
<p>A different recursive approach:</p> <pre><code>def move_zeros(array): if array: head, tail = array[0], move_zeros(array[1:]) if head is 0: array = tail array.append(head) else: array = [head] array.extend(tail) return array </code></pre> <p>Does more list operations than acw1668's solution but is less index oriented.</p>
1
2016-09-09T19:03:40Z
[ "python", "recursion", "runtime-error" ]
What does .only django queryset really do?
39,405,853
<p>I read in the Django documentation that it optimizes the query, so I did a little bit of experimentation. So I did this:</p> <pre><code>&gt;&gt;&gt; x = Artist.objects.only('id').filter() &gt;&gt;&gt; print x.query SELECT "store_artist"."id" FROM "store_artist" &gt;&gt;&gt; y = Artist.objects.filter() &gt;&gt;&gt; print y.query SELECT "store_artist"."id", "store_artist"."name", "store_artist"."birth_date" FROM "store_artist" </code></pre> <p>I can see that the query changed; however, I did a further test:</p> <pre><code>&gt;&gt;&gt; for _x in x: ... _x.name ... u'Beyone' u'Beyoneeee' u'Beyone231231' u'Beyone2222' u'No Album' &gt;&gt;&gt; for _y in y: ... _y.name ... u'Beyone' u'Beyoneeee' u'Beyone231231' u'Beyone2222' u'No Album' </code></pre> <p>So if you've noticed, it gives just the same result. How did that happen? I thought that for the <em>x</em> queryset I only fetched the id, so the name should not appear or should be invalid.</p> <p>Here is my model by the way:</p> <pre><code>class Artist(EsIndexable, models.Model): name = models.CharField(max_length=50) birth_date = models.DateField() </code></pre>
1
2016-09-09T07:14:31Z
39,405,893
<p>It does only query the fields you give it, but it still returns instances of that model. So when you access a field that was not fetched, Django silently runs a second query to load it, which is why the name still shows up.</p>
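<p>A minimal sketch of what that looks like in practice, assuming the <code>Artist</code> model from the question (only the model and field names are taken from the question; the rest is illustrative and requires <code>DEBUG = True</code> for <code>connection.queries</code> to be populated):</p> <pre><code>from django.db import connection, reset_queries

reset_queries()
artists = Artist.objects.only('id')     # SELECT only the id column
for artist in artists:
    artist.name                         # deferred field: one extra query per object
print(len(connection.queries))          # far more than 1 query was executed
</code></pre>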
1
2016-09-09T07:16:39Z
[ "python", "django" ]
What does .only django queryset really do?
39,405,853
<p>I read in the Django documentation that it optimizes the query, so I did a little bit of experimentation. So I did this:</p> <pre><code>&gt;&gt;&gt; x = Artist.objects.only('id').filter() &gt;&gt;&gt; print x.query SELECT "store_artist"."id" FROM "store_artist" &gt;&gt;&gt; y = Artist.objects.filter() &gt;&gt;&gt; print y.query SELECT "store_artist"."id", "store_artist"."name", "store_artist"."birth_date" FROM "store_artist" </code></pre> <p>I can see that the query changed; however, I did a further test:</p> <pre><code>&gt;&gt;&gt; for _x in x: ... _x.name ... u'Beyone' u'Beyoneeee' u'Beyone231231' u'Beyone2222' u'No Album' &gt;&gt;&gt; for _y in y: ... _y.name ... u'Beyone' u'Beyoneeee' u'Beyone231231' u'Beyone2222' u'No Album' </code></pre> <p>So if you've noticed, it gives just the same result. How did that happen? I thought that for the <em>x</em> queryset I only fetched the id, so the name should not appear or should be invalid.</p> <p>Here is my model by the way:</p> <pre><code>class Artist(EsIndexable, models.Model): name = models.CharField(max_length=50) birth_date = models.DateField() </code></pre>
1
2016-09-09T07:14:31Z
39,406,244
<p>The .only() method returns a queryset whose instances have only the listed fields loaded up front; accessing any other field means another trip to the database. It is useful when you have a very large dataset and only need one field for all your processing in the business logic. In your case I would suggest using the defer() method instead. You can make multiple calls to defer(), and each call adds new fields to the deferred set. That way it is up to you which fields are loaded immediately, and you avoid making extra calls to the database to fetch values, which helps with optimization and performance.</p>
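<p>A short sketch of the defer() suggestion, assuming the <code>Artist</code> model from the question; everything except the deferred fields comes back in the first query:</p> <pre><code>artists = Artist.objects.defer('birth_date')   # id and name are loaded up front
artists = artists.defer('name')                # each defer() call adds to the deferred set
for artist in artists:
    artist.id          # already loaded, no extra query
    artist.birth_date  # deferred, triggers an extra query for this object
</code></pre>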
1
2016-09-09T07:36:54Z
[ "python", "django" ]
Printing indices of 3-dimensional variable in PuLP for scheduling
39,405,875
<p>I am working with SolverStudio (Excel add-in) and PuLP (Python-based optimization language) to create a tool that assigns students to working places.</p> <p>This is the variable whose indices I want to print:</p> <p><code># Decision variable, =1 if student s is assigned to working place w on day d; 0 otherwise Assignment = LpVariable.dicts("Assignment",(Students,Working_places,Days),0,1,LpBinary)</code></p> <p>For each working place and day, I want to output the name of the student who is assigned there, in order to create a schedule.</p> <p>My current approach is:</p> <pre><code>for w in Working_places: for d in Days: for s in Students: if Assignment[s][w][d] == 1: Schedule[w,d] = Name[s] </code></pre> <p><code>Schedule[w,d]</code> is an empty 2d parameter defined in SolverStudio and <code>Name[s]</code> contains the names of the students. </p> <p>I placed the code sequence at different positions of the model. It doesn't make a difference, whether it is before or after the <code>prob.solve()</code> statement.</p> <p>Currently, <code>Schedule[w,d]</code> is being filled completely with the name of the last student in the list of students.</p> <p>I observed that the if-clause is completely ignored. I can remove it and the same output is generated.</p> <p>Is there a different way to tell Python "for each working place and day, print the assigned student"?</p> <p>Thank you very much in advance!</p>
0
2016-09-09T07:15:33Z
39,407,685
<p>Found the mistake myself in the meantime: the solved value of a PuLP variable has to be read through <code>.varValue</code> instead of comparing the variable object itself.</p> <p><code>for w in Working_places: for d in Days: for s in Students: if Assignment[s][w][d].varValue == 1: Schedule[w,d] = Name[s]</code></p>
0
2016-09-09T08:59:24Z
[ "python", "optimization", "scheduling", "pulp" ]
How to increase the performance of gae query?
39,405,891
<p>I have implemented logic that queries a table, and for each entity in that table I have to look up another table.</p> <p>For example, my code looks like this:</p> <pre><code>query = ndb.gql("select * from Foo where user = :1", user.key) stories, next_cursor, more = query.fetch_page(size, start_cursor=cursor) if next_cursor: for story in stories: print story.key images = ndb.gql("select * from Images where story = :1", story.key) for image in images: print image.key else: #do some operations </code></pre> <p>You see, if we give the size as 10 to the fetch_page function, it would find 10 entities per page. And for each entity, we have to look up another kind, <code>Image</code>. </p> <p>This type of datastore lookup takes 850 to 950 ms. I want to decrease the response time of this API. Note that I have to get some column values from the <code>Story</code> kind and also from the <code>Images</code> kind.</p> <p>Is there any way to shorten the query by using the <code>get_multi</code> method? Or, I have an idea of using <code>memcache</code>, or shall we define a new <code>StructuredProperty</code> in the <code>Foo</code> model whose value must be a list of <code>Images</code> model entities?</p> <p>I don't know which one suits this case. Please guide me.</p>
0
2016-09-09T07:16:30Z
39,406,179
<p>I don't know the whole structure of your project, but...</p> <p>You can do something like this:</p> <pre><code>class Story(ndb.Model): images = ndb.KeyProperty(kind=Image, repeated=True) user = ndb.KeyProperty(kind=User) </code></pre> <p>and every time a user adds a new image, update the <code>images</code> property of <code>Story</code>.</p> <p>Then you'll be able to use:</p> <pre><code>images = [] stories = Story.query(Story.user == user.key) stories = stories.fetch(size) for story in stories: images.extend(ndb.get_multi(story.images)) print images </code></pre> <p>Hope that helps.</p>
1
2016-09-09T07:33:02Z
[ "python", "performance", "google-app-engine", "google-cloud-datastore", "app-engine-ndb" ]
How to increase the performance of gae query?
39,405,891
<p>I have implemented logic that queries a table, and for each entity in that table I have to look up another table.</p> <p>For example, my code looks like this:</p> <pre><code>query = ndb.gql("select * from Foo where user = :1", user.key) stories, next_cursor, more = query.fetch_page(size, start_cursor=cursor) if next_cursor: for story in stories: print story.key images = ndb.gql("select * from Images where story = :1", story.key) for image in images: print image.key else: #do some operations </code></pre> <p>You see, if we give the size as 10 to the fetch_page function, it would find 10 entities per page. And for each entity, we have to look up another kind, <code>Image</code>. </p> <p>This type of datastore lookup takes 850 to 950 ms. I want to decrease the response time of this API. Note that I have to get some column values from the <code>Story</code> kind and also from the <code>Images</code> kind.</p> <p>Is there any way to shorten the query by using the <code>get_multi</code> method? Or, I have an idea of using <code>memcache</code>, or shall we define a new <code>StructuredProperty</code> in the <code>Foo</code> model whose value must be a list of <code>Images</code> model entities?</p> <p>I don't know which one suits this case. Please guide me.</p>
0
2016-09-09T07:16:30Z
39,406,572
<p>You can add a property to each Story that contains a list of Image ids. I assume that this list rarely changes. Then you can easily <code>get_multi</code> all images related to a story without any queries.</p> <p>You may also consider fetching all images for all stories returned by your query with a single <code>get_multi</code> call, and then "attach" them to the respective stories in your code, if necessary.</p>
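<p>A rough sketch of this idea with ndb (the property and model names here are illustrative, not taken from the question's code):</p> <pre><code>from google.appengine.ext import ndb

class Story(ndb.Model):
    user = ndb.KeyProperty(kind='User')
    image_keys = ndb.KeyProperty(kind='Image', repeated=True)  # list of Image ids

stories = Story.query(Story.user == user.key).fetch(10)
all_image_keys = [k for story in stories for k in story.image_keys]
images = ndb.get_multi(all_image_keys)   # one batch get instead of one query per story
</code></pre>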
2
2016-09-09T07:55:37Z
[ "python", "performance", "google-app-engine", "google-cloud-datastore", "app-engine-ndb" ]
How to increase the performance of gae query?
39,405,891
<p>I have implemented logic that queries a table, and for each entity in that table I have to look up another table.</p> <p>For example, my code looks like this:</p> <pre><code>query = ndb.gql("select * from Foo where user = :1", user.key) stories, next_cursor, more = query.fetch_page(size, start_cursor=cursor) if next_cursor: for story in stories: print story.key images = ndb.gql("select * from Images where story = :1", story.key) for image in images: print image.key else: #do some operations </code></pre> <p>You see, if we give the size as 10 to the fetch_page function, it would find 10 entities per page. And for each entity, we have to look up another kind, <code>Image</code>. </p> <p>This type of datastore lookup takes 850 to 950 ms. I want to decrease the response time of this API. Note that I have to get some column values from the <code>Story</code> kind and also from the <code>Images</code> kind.</p> <p>Is there any way to shorten the query by using the <code>get_multi</code> method? Or, I have an idea of using <code>memcache</code>, or shall we define a new <code>StructuredProperty</code> in the <code>Foo</code> model whose value must be a list of <code>Images</code> model entities?</p> <p>I don't know which one suits this case. Please guide me.</p>
0
2016-09-09T07:16:30Z
39,683,994
<p>You will want to look at the NDB batch async API:</p> <pre><code> @ndb.tasklet def get_stories(user_key): stories = yield Story.query(Story.user_key == user_key).fetch_async() futs = [ item.key.get_async() for item in stories] result = yield futs raise ndb.Return(result) get_stories(user_key).get_result() </code></pre> <p>This approach only makes two calls:</p> <ol> <li>make a query to the Datastore</li> <li>with the N results from the above query, make one batched get to fetch all the entities </li> </ol> <p>Since <code>Key.get_async()</code> also uses memcache automatically, from the second time you call the function above, the batched gets will be served from memcache.</p>
1
2016-09-25T06:13:51Z
[ "python", "performance", "google-app-engine", "google-cloud-datastore", "app-engine-ndb" ]
Pandas: Is there a way to use something like 'droplevel' and, in the process, rename the other level using the dropped level labels as prefix/suffix?
39,405,971
<p>Screenshot of the query below:</p> <p><a href="http://i.stack.imgur.com/dGNAV.png" rel="nofollow"><img src="http://i.stack.imgur.com/dGNAV.png" alt="Groupby Query"></a></p> <p>Is there a way to easily drop the upper level column index and have a single level with labels such as <code>points_prev_amax</code>, <code>points_prev_amin</code>, <code>gf_prev_amax</code>, <code>gf_prev_amin</code> and so on? </p>
4
2016-09-09T07:21:44Z
39,405,994
<p>Use a <code>list comprehension</code> to set new column names:</p> <pre><code>df.columns = ['_'.join(col) for col in df.columns] </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'A':[1,2,2,1], 'B':[4,5,6,4], 'C':[7,8,9,1], 'D':[1,3,5,9]}) print (df) A B C D 0 1 4 7 1 1 2 5 8 3 2 2 6 9 5 3 1 4 1 9 df = df.groupby('A').agg([max, min]) print (['_'.join(col) for col in df.columns]) ['B_max', 'B_min', 'C_max', 'C_min', 'D_max', 'D_min'] df.columns = ['_'.join(col) for col in df.columns] print (df) B_max B_min C_max C_min D_max D_min A 1 4 4 7 1 9 1 2 6 5 9 8 5 3 </code></pre> <p>If you need a <code>prefix</code>, simply swap the items of the tuples:</p> <pre><code>df.columns = ['_'.join((col[1], col[0])) for col in df.columns] print (df) max_B min_B max_C min_C max_D min_D A 1 4 4 7 1 9 1 2 6 5 9 8 5 3 </code></pre> <p>Another solution:</p> <pre><code>df.columns = ['{}_{}'.format(i[1], i[0]) for i in df.columns] print (df) max_B min_B max_C min_C max_D min_D A 1 4 4 7 1 9 1 2 6 5 9 8 5 3 </code></pre> <p>If the <code>len</code> of columns is big (10^6), then rather use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.join.html" rel="nofollow"><code>str.join</code></a>:</p> <pre><code>df.columns = df.columns.to_series().str.join('_') </code></pre>
4
2016-09-09T07:23:00Z
[ "python", "pandas", "rename", "multiple-columns", "multi-index" ]
Pandas: Is there a way to use something like 'droplevel' and, in the process, rename the other level using the dropped level labels as prefix/suffix?
39,405,971
<p>Screenshot of the query below:</p> <p><a href="http://i.stack.imgur.com/dGNAV.png" rel="nofollow"><img src="http://i.stack.imgur.com/dGNAV.png" alt="Groupby Query"></a></p> <p>Is there a way to easily drop the upper level column index and have a single level with labels such as <code>points_prev_amax</code>, <code>points_prev_amin</code>, <code>gf_prev_amax</code>, <code>gf_prev_amin</code> and so on? </p>
4
2016-09-09T07:21:44Z
39,407,292
<p>Using @jezrael's setup</p> <pre><code>df = pd.DataFrame({'A':[1,2,2,1], 'B':[4,5,6,4], 'C':[7,8,9,1], 'D':[1,3,5,9]}) df = df.groupby('A').agg([max, min]) </code></pre> <hr> <p>Assign new columns with</p> <pre><code>from itertools import starmap def flat(midx, sep=''): fstr = sep.join(['{}'] * midx.nlevels) return pd.Index(starmap(fstr.format, midx)) df.columns = flat(df.columns, '_') df </code></pre> <p><a href="http://i.stack.imgur.com/xHakS.png" rel="nofollow"><img src="http://i.stack.imgur.com/xHakS.png" alt="enter image description here"></a></p>
3
2016-09-09T08:38:13Z
[ "python", "pandas", "rename", "multiple-columns", "multi-index" ]
How to play music using pygame (Python)
39,406,020
<p>I am trying to play an .mp3 file using pygame. Here is my code:</p> <pre><code>import pygame pygame.init() pygame.mixer.init() pygame.mixer.music.load('MSM.mp3') pygame.mixer.music.play(0) pygame.event.wait() </code></pre> <p>This however does not actually play the file. The audio file is located within the same folder as the .py file.</p>
0
2016-09-09T07:24:32Z
39,407,388
<p>The pyGame docs suggest using the pygame.mixer.pre_init() method before the top-level pygame.init() on some platforms.</p> <blockquote> <p>"Some platforms require the pygame.mixer module for loading and playing sounds module to be initialized after the display modules have initialized. The top level pygame.init() takes care of this automatically, but cannot pass any arguments to the mixer init. To solve this, mixer has a function pygame.mixer.pre_init() to set the proper defaults before the toplevel init is used." <a href="https://www.pygame.org/docs/ref/mixer.html#pygame.mixer.init" rel="nofollow">https://www.pygame.org/docs/ref/mixer.html#pygame.mixer.init</a></p> </blockquote>
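<p>A hedged sketch of how that suggestion could look for the code in the question; the <code>pre_init</code> arguments are common defaults rather than required values, and the busy-wait at the end keeps the script alive long enough for the file to play:</p> <pre><code>import pygame

pygame.mixer.pre_init(44100, -16, 2, 2048)   # frequency, size, channels, buffer
pygame.init()
pygame.mixer.music.load('MSM.mp3')
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():         # keep the program running while the music plays
    pygame.time.Clock().tick(10)
</code></pre>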
1
2016-09-09T08:43:48Z
[ "python", "pygame", "music" ]
Managing contents of requirements.txt for a Python virtual environment
39,406,177
<p>So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.</p> <p>The first thing I install in the virtual environment is Flask==0.11.1. Flask installs the following dependencies:</p> <blockquote> <ul> <li>click==6.6</li> <li>itsdangerous==0.24</li> <li>Jinja2==2.8</li> <li>MarkupSafe==0.23</li> <li>Werkzeug==0.11.11</li> <li>wheel==0.24.0</li> </ul> </blockquote> <p>Now, I create a <strong>requirements.txt</strong> to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:</p> <ul> <li>Do I mention each of the Flask dependencies in the <strong>requirements.txt</strong> along with the version numbers, OR</li> <li>Do I just mention the exact Flask version number in the <strong>requirements.txt</strong> and hope that when they do a <strong>pip install -r requirements.txt</strong>, Flask will take care of the dependency management and they will download the right versions of the dependent libraries?</li> </ul>
0
2016-09-09T07:32:53Z
39,406,391
<p>You can (from your active virtual environment) do the following</p> <pre><code>pip freeze &gt; requirements.txt </code></pre> <p>which'll automatically take care of all libraries/modules available in your project.</p> <p>The next developer would only have to issue:</p> <pre><code>pip install -r requirements.txt </code></pre>
1
2016-09-09T07:45:26Z
[ "python", "pip", "virtualenv", "requirements.txt" ]
Managing contents of requirements.txt for a Python virtual environment
39,406,177
<p>So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.</p> <p>The first thing I install in the virtual environment is Flask==0.11.1. Flask installs the following dependencies:</p> <blockquote> <ul> <li>click==6.6</li> <li>itsdangerous==0.24</li> <li>Jinja2==2.8</li> <li>MarkupSafe==0.23</li> <li>Werkzeug==0.11.11</li> <li>wheel==0.24.0</li> </ul> </blockquote> <p>Now, I create a <strong>requirements.txt</strong> to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:</p> <ul> <li>Do I mention each of the Flask dependencies in the <strong>requirements.txt</strong> along with the version numbers, OR</li> <li>Do I just mention the exact Flask version number in the <strong>requirements.txt</strong> and hope that when they do a <strong>pip install -r requirements.txt</strong>, Flask will take care of the dependency management and they will download the right versions of the dependent libraries?</li> </ul>
0
2016-09-09T07:32:53Z
39,406,537
<p>Both approaches are valid and work. But there is a little difference. When you enter all the dependencies in the <code>requirements.txt</code> you will be able to pin their versions. If you leave them out, there might be a later update and, if Flask has something like <code>Werkzeug&gt;=0.11</code> in its dependencies, you will get a newer version of Werkzeug installed.</p> <p>So it comes down to automatic updates vs. a precisely defined environment. Whatever suits you better.</p>
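<p>For illustration, a fully pinned <strong>requirements.txt</strong> (using the version numbers listed in the question) versus a minimal one that leaves dependency resolution to pip:</p> <pre><code># fully pinned: reproducible environment
Flask==0.11.1
click==6.6
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11

# minimal alternative: pip resolves dependencies, possibly to newer versions
Flask==0.11.1
</code></pre>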
1
2016-09-09T07:53:26Z
[ "python", "pip", "virtualenv", "requirements.txt" ]
Managing contents of requirements.txt for a Python virtual environment
39,406,177
<p>So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.</p> <p>The first thing I install in the virtual environment is Flask==0.11.1. Flask installs the following dependencies:</p> <blockquote> <ul> <li>click==6.6</li> <li>itsdangerous==0.24</li> <li>Jinja2==2.8</li> <li>MarkupSafe==0.23</li> <li>Werkzeug==0.11.11</li> <li>wheel==0.24.0</li> </ul> </blockquote> <p>Now, I create a <strong>requirements.txt</strong> to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:</p> <ul> <li>Do I mention each of the Flask dependencies in the <strong>requirements.txt</strong> along with the version numbers, OR</li> <li>Do I just mention the exact Flask version number in the <strong>requirements.txt</strong> and hope that when they do a <strong>pip install -r requirements.txt</strong>, Flask will take care of the dependency management and they will download the right versions of the dependent libraries?</li> </ul>
0
2016-09-09T07:32:53Z
39,408,792
<ol> <li>activate the virtualenv</li> <li>go to your project root directory</li> <li><p>get all the packages along with their dependencies into requirements.txt</p> <pre><code>pip freeze &gt; requirements.txt </code></pre></li> <li><p>you don't have to worry about anything else apart from making sure the next person installs the requirements with the following command</p> <pre><code>pip install -r requirements.txt </code></pre></li> </ol>
1
2016-09-09T09:53:51Z
[ "python", "pip", "virtualenv", "requirements.txt" ]
ImportError: No module named rest_framework
39,406,526
<p>I have both python 2.7 and 3.5 installed. While creating a django project, I selected Python 3.5 as my python interpreter. And I also installed rest framework but I find this error while running my django project. Help</p> <pre><code>Traceback (most recent call last): File "manage.py", line 22, in &lt;module&gt; execute_from_command_line(sys.argv) File "/Library/Python/2.7/site-packages/Django-1.10-py2.7.egg/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/Library/Python/2.7/site-packages/Django-1.10-py2.7.egg/django/core/management/__init__.py", line 341, in execute django.setup() File "/Library/Python/2.7/site-packages/Django-1.10-py2.7.egg/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Python/2.7/site-packages/Django-1.10-py2.7.egg/django/apps/registry.py", line 85, in populate app_config = AppConfig.create(entry) File "/Library/Python/2.7/site-packages/Django-1.10-py2.7.egg/django/apps/config.py", line 90, in create module = import_module(entry) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) ImportError: No module named rest_framework </code></pre>
0
2016-09-09T07:52:45Z
39,406,690
<p>Oh sorry. I executed the command <code>python manage.py runserver</code> instead of <code>python3 manage.py runserver</code></p> <p>Working fine now.</p>
0
2016-09-09T08:02:11Z
[ "python", "django", "django-rest-framework" ]
sklearn function transformer in pipeline
39,406,539
<p>Writing my first pipeline for sk-learn I stumbled upon some issues when only a subset of columns is put into a pipeline:</p> <pre><code>mydf = pd.DataFrame({'classLabel':[0,0,0,1,1,0,0,0], 'categorical':[7,8,9,5,7,5,6,4], 'numeric1':[7,8,9,5,7,5,6,4], 'numeric2':[7,8,9,5,7,5,6,"N.A"]}) columnsNumber = ['numeric1'] XoneColumn = X[columnsNumber] </code></pre> <p>I use the <code>functionTransformer</code> like:</p> <pre><code>def extractSpecificColumn(X, columns): return X[columns] pipeline = Pipeline([ ('features', FeatureUnion([ ('continuous', Pipeline([ ('numeric', FunctionTransformer(columnsNumber)), ('scale', StandardScaler()) ])) ], n_jobs=1)), ('estimator', RandomForestClassifier(n_estimators=50, criterion='entropy', n_jobs=-1)) ]) cv.cross_val_score(pipeline, XoneColumn, y, cv=folds, scoring=kappaScore) </code></pre> <p>This results in: <code>TypeError: 'list' object is not callable</code> when the function transformer is enabled.</p> <h1>edit:</h1> <p>If I instantiate a <code>ColumnExtractor</code> like below no error is returned. But isn't the <code>functionTransformer</code> meant just for simple cases like this one and should just work?</p> <pre><code>class ColumnExtractor(TransformerMixin): def __init__(self, columns): self.columns = columns def transform(self, X, *_): return X[self.columns] def fit(self, *_): return self </code></pre>
0
2016-09-09T07:53:27Z
39,429,857
<p><code>FunctionTransformer</code> is used to "lift" a function to a transformation which I think can help with some data cleaning steps. Imagine you have a mostly numeric array and you want to transform it with a Transformer that will error out if it gets a <code>nan</code> (like <code>Normalizer</code>). You might end up with something like</p> <pre><code>df.fillna(0, inplace=True) ... cross_val_score(pipeline, ...) </code></pre> <p>but maybe that <code>fillna</code> is only required in one transformation, so instead of having the <code>fillna</code> like above, you have</p> <pre><code>normalize = make_pipeline( FunctionTransformer(np.nan_to_num, validate=False), Normalizer() ) </code></pre> <p>which ends up normalizing it as you want. Then you can use that snippet in more places without littering your code with <code>.fillna(0)</code></p> <p>In your example, you're passing in <code>['numeric1']</code> which is a <code>list</code> and not an extractor like the similarly typed <code>df[['numeric1']]</code>. What you may want instead is more like </p> <pre><code>FunctionTransformer(operator.itemgetter(columns)) </code></pre> <p>but that still won't work because the object that is ultimately passed into the FunctionTransformer will be an <code>np.array</code> and not a <code>DataFrame</code>.</p> <p>In order to do operations on particular columns of a <code>DataFrame</code>, you may want to use a library like <a href="https://github.com/paulgb/sklearn-pandas" rel="nofollow">sklearn-pandas</a> which allows you to define particular transformers by column.</p>
1
2016-09-10T19:31:08Z
[ "python", "scikit-learn", "pipeline", "transformer" ]
Convert a text list to numbers?
39,406,754
<p>I am trying to use <code>float()</code> to convert the following list to numbers. But it always says <code>ValueError: could not convert string to float</code>.</p> <p>I understand that this error occurs when there is something in text that cannot be treated as a number. But my list seems okay.</p> <pre><code>a = ['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre>
-1
2016-09-09T08:05:59Z
39,406,815
<pre><code>a = ['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre> <p>Works perfectly.</p> <pre><code>['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre> <p>Not so much.</p> <p>Your error message shows that one of your entries is a single quotation mark or an empty element in your list. For example, </p> <pre><code>a = ['the', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre> <p>throws the error: <code>ValueError: could not convert string to float: the</code></p>
0
2016-09-09T08:09:22Z
[ "python", "list", "text" ]
Convert a text list to numbers?
39,406,754
<p>I am trying to use <code>float()</code> to convert the following list to numbers. But it always says <code>ValueError: could not convert string to float</code>.</p> <p>I understand that this error occurs when there is something in text that cannot be treated as a number. But my list seems okay.</p> <pre><code>a = ['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre>
-1
2016-09-09T08:05:59Z
39,406,960
<p>You can convert the list using <code>map</code>.</p> <p>If you have any non-numeric string in your list then you get an error message like <code>ValueError: could not convert string to float</code>.</p> <pre><code>&gt;&gt;&gt; a = ['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] &gt;&gt;&gt; b = list(map(float, a)) &gt;&gt;&gt; b </code></pre> <p>Output:</p> <pre><code>[4.0, 4.0, 1.0, 1.0, 1.0, 1.0, 2.0, 4.0, 8.0, 16.0, 3.0, 9.0, 27.0, 81.0, 4.0, 16.0, 64.0, 256.0, 4.0, 3.0] </code></pre>
-1
2016-09-09T08:19:09Z
[ "python", "list", "text" ]
Convert a text list to numbers?
39,406,754
<p>I am trying to use <code>float()</code> to convert the following list to numbers. But it always says <code>ValueError: could not convert string to float</code>.</p> <p>I understand that this error occurs when there is something in text that cannot be treated as a number. But my list seems okay.</p> <pre><code>a = ['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre>
-1
2016-09-09T08:05:59Z
39,407,099
<p>The list you have posted works perfectly fine with your method, so either:</p> <ul> <li><p>you already have <code>a</code> defined as a string elsewhere and it doesn't point to that list, or </p></li> <li><p>the actual list contains a non-numeric entry. </p></li> </ul> <p>Just to verify, try</p> <pre><code>map(int,a) #if no errors you have all numbers </code></pre> <p>Then try </p> <pre><code>set(map(type,a)) #if outputs {str} you should be good. </code></pre> <p>Either way, your error is not reproducible; please post more details in your question. </p>
0
2016-09-09T08:26:46Z
[ "python", "list", "text" ]
Convert a text list to numbers?
39,406,754
<p>I am trying to use <code>float()</code> to convert the following list to numbers. But it always says <code>ValueError: could not convert string to float</code>.</p> <p>I understand that this error occurs when there is something in text that cannot be treated as a number. But my list seems okay.</p> <pre><code>a = ['4', '4', '1', '1', '1', '1', '2', '4', '8', '16', '3', '9', '27', '81', '4', '16', '64', '256', '4', '3'] b = [float(x) for x in a] </code></pre>
-1
2016-09-09T08:05:59Z
39,408,499
<p>The problem is exactly what the Traceback log says: Could not convert string to float</p> <ol> <li>If you have a string with only numbers, python's smart enough to do what you're trying and converts the string to a float.</li> <li>If you have a string with non-numerical characters, the conversion will fail and give you the error that you were having.</li> </ol> <p>You can strip whitespaces and then check for digits in the string.</p> <pre><code> f = open('mytext.txt','r') a = f.read().split() a = [each.strip() for each in a] a = [each for each in a if each.isdigit() ] b = [float(each) for each in a] # or b = map(float, a) print b # just to make it clear written as separate steps, you can combine the steps </code></pre>
0
2016-09-09T09:39:59Z
[ "python", "list", "text" ]
PyQtGraph and numpy realtime plot
39,406,768
<p>I am currently making a project that reads 8 sensors and plots real time graphs. I have used Matplotlib but it was slow so I switched to pyqtgraph. It is comparatively very fast. I have referred to the documentation and designed a simple code that plots live data. The only problem I am facing is that the disk space and CPU usage increase drastically as I let it draw for 20 minutes or so. Here is my code.</p> <pre><code>from tinkerforge.ip_connection import IPConnection from tinkerforge.bricklet_ptc import BrickletPTC from pyqtgraph.Qt import QtGui, QtCore import numpy as np import pyqtgraph as pg win = pg.GraphicsWindow(title="Basic plotting examples") win.resize(1280,720) win.setWindowTitle('Live Temperature Data') #Enable antialiasing for prettier plots pg.setConfigOptions(antialias=True) p1 = win.addPlot(title = 'Sensor1') curve1 = p1.plot(pen = '#00A3E0') p1.setLabel('left', "Temperature", units='°C') p1.setLabel('bottom', "Time", units= 's') p1.setDownsampling(auto=True,mode='peak') p1.setClipToView(True) p1.showGrid(x=True, y=True) tempC1 = [] def updateSensor1(): global curve1, tempC1, indx1, p1 ipcon = IPConnection() # Create IP connection ptc1 = BrickletPTC(UID1, ipcon) # S1 ipcon.connect(HOST, PORT) # Connect to brickd temperature1 = ptc1.get_temperature() dataArray1=str(temperature1/100).split(',') temp1 = float(dataArray1[0]) tempC1.append(temp1) curve1.setData(tempC1) app.processEvents() timer1 = QtCore.QTimer() timer1.timeout.connect(updateSensor1) timer1.start(1000) if __name__ == '__main__': import sys if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_'): QtGui.QApplication.instance().exec_() </code></pre> <p>I heard lists are much slower and numpy is really fast and much more compatible with pyqtgraph. Since I am new to this programming stuff I am unable to make a numpy array that takes these temperature readings and plots the graph. I have also referred to the documentation but it didn't help.</p> <p>P.S. Since I have 8 sensors, I have no idea whether I should create 8 different numpy arrays or something like one multidimensional array that takes the sensor values, plus a function that plots these values in real time.</p> <p>I'd be grateful if someone can help me create numpy arrays instead of lists.</p>
1
2016-09-09T08:07:10Z
39,407,942
<p>So if you want to replace your list with an <code>np.array</code> you will face one major issue: It is not directly possible to append data to an array the same way as with a list. One possibility is to use e.g. <code>np.hstack</code>, <code>np.vstack</code>, <code>np.column_stack</code> or <code>np.row_stack</code>. Here is an example how you could alter your <code>updateSensor1</code> function using <code>np.hstack</code>:</p> <pre><code>tempC1 = np.array(()) def updateSensor1(): global curve1, tempC1, indx1, p1 ipcon = IPConnection() # Create IP connection ptc1 = BrickletPTC(UID1, ipcon) # S1 ipcon.connect(HOST, PORT) # Connect to brickd temperature1 = ptc1.get_temperature() dataArray1=str(temperature1/100).split(',') temp1 = np.array((dataArray1[0])).astype(np.float32) tempC1 = np.hstack((tempC1,temp1)) curve1.setData(tempC1) app.processEvents() </code></pre> <p>However, for arrays each "append" action requires short-term doubling of required memory, so you will probably find that using a list will be more efficient when you deal with large arrays. </p> <p>A better solution is to generate an array (e.g. <code>np.zeros</code>) that is as large as your desired final output, for example after 20 minutes of plotting. That way you can avoid array creation/destruction. Then you just need to create another count-variable or pass the time to your <code>updateSensor1</code>-function in order to update and plot the right data, e.g.:</p> <pre><code>tempC1 = np.zeros((1200)) count = 0 def updateSensor1(): global curve1, tempC1, indx1, p1, count ipcon = IPConnection() # Create IP connection ptc1 = BrickletPTC(UID1, ipcon) # S1 ipcon.connect(HOST, PORT) # Connect to brickd temperature1 = ptc1.get_temperature() dataArray1=str(temperature1/100).split(',') temp1 = float(dataArray1[0]) tempC1[count] = temp1 curve1.setData(tempC1[:count+1]) count += 1 app.processEvents() </code></pre> <p>For multiple sensors, you can just add new dimensions to your array, e.g. for 8 sensors, create a <code>np.zeros((1200,8))</code> array. I hope this was helpful in some way...</p> <p><strong>EDIT:</strong></p> <p>If you <code>pop</code> the list each <code>300s</code> or so and still want the x-axis to continue in time you should pass a second list or array with x/time values. I would suggest something like:</p> <pre><code>import time times = [] t0 = time.time() def updateSensor1(): global curve1, tempC1, indx1, p1, times, t0 ipcon = IPConnection() # Create IP connection ptc1 = BrickletPTC(UID1, ipcon) # S1 ipcon.connect(HOST, PORT) # Connect to brickd temperature1 = ptc1.get_temperature() dataArray1=str(temperature1/100).split(',') temp1 = float(dataArray1[0]) tempC1.append(temp1) times.append(time.time()-t0) curve1.setData(times,tempC1) app.processEvents() </code></pre> <p>Then you should just pop the <code>times</code> list the same time you pop your <code>tempC1</code> list. Since the reference time <code>t0</code> is not changed, this should always give you the correct time.</p>
0
2016-09-09T09:11:59Z
[ "python", "arrays", "numpy" ]
UnificationEngine: Unable to send Requests via Python using TLS or SSL
39,406,836
<p>I am attempting to use a Raspberry Pi 3 (Model B) to send Requests by Python to the Unification Engine. With SSL/TLS verification disabled, the request happens normally, but I need to get SSL/TLS working with it.</p> <p>The below code is meant to force a session to use TLSv1 for the purpose of sending Python Requests to UnificationEngine:</p> <pre><code>class ForceTLSV1Adapter(HTTPAdapter): def init_poolmanager(self, connection, maxsize, block=False): self.poolmanager = PoolManager(num_pools=connection,maxsize=maxsize,block=block,ssl_version=ssl.PROTOCOL_TLSv1) def proxy_manager_for(self, proxy, **proxy_kwargs): proxy_kwargs['ssl_version'] = ssl.PROTOCOL_TLSv1 return super(ForceTLVS1Adapter, self).proxy_manager_for(proxy, **proxy_kwargs) ----Some Code here---- s = requests.Session() s.mount('https://apiv2.unificationengine.com', ForceTLSV1Adapter()) ----Some Code Here---- </code></pre> <p>However, this error pops up after I send the request.</p> <pre><code>Traceback (most recent call last): File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 516, in urlopen body=body, headers=headers) File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 304, in _make_request self._validate_conn(conn) File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 724, in _validate_conn conn.connect() File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 237, in connect ssl_version=resolved_ssl_version) File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 123, in ssl_wrap_socket return context.wrap_socket(sock, server_hostname=server_hostname) File "/usr/lib/python3.4/ssl.py", line 364, in wrap_socket _context=self) File "/usr/lib/python3.4/ssl.py", line 577, in __init__ self.do_handshake() File "/usr/lib/python3.4/ssl.py", line 804, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/requests/adapters.py", line 362, in send timeout=timeout File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 543, in urlopen raise SSLError(e) urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/pi/Documents/HumidityRequest.py", line 134, in &lt;module&gt; sendEmail(round(temp,3)) File "/home/pi/Documents/HumidityRequest.py", line 92, in sendEmail r = requests.post('https://apiv2.unificationengine.com/v2/message/send', auth=(key,secret),data=userMessage) File "/usr/lib/python3/dist-packages/requests/api.py", line 94, in post return request('post', url, data=data, json=json, **kwargs) File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send r = adapter.send(request, **kwargs) File "/usr/lib/python3/dist-packages/requests/adapters.py", line 420, in send raise SSLError(e, request=request) requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600) </code></pre> <p>I'm not sure what is the issue right now.</p>
0
2016-09-09T08:10:54Z
39,423,774
<p>It looks like the certificate verification isn't being done correctly. Have a look at: <a href="https://the.randomengineer.com/2014/01/29/using-ssl-wrap_socket-for-secure-sockets-in-python/" rel="nofollow">https://the.randomengineer.com/2014/01/29/using-ssl-wrap_socket-for-secure-sockets-in-python/</a> <a href="https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl" rel="nofollow">https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl</a></p> <p>You could also try httplib.</p>
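<p>A hedged sketch of one common way to address the <code>CERTIFICATE_VERIFY_FAILED</code> error with requests while keeping the TLSv1 adapter from the question: point <code>verify</code> at a CA bundle that actually contains the server's chain (certifi is just one possible source; whether its bundle includes the needed root depends on the server's certificate):</p> <pre><code>import certifi
import requests

s = requests.Session()
s.mount('https://apiv2.unificationengine.com', ForceTLSV1Adapter())
s.verify = certifi.where()   # or the path to a CA bundle with the server's root certificate
r = s.post('https://apiv2.unificationengine.com/v2/message/send',
           auth=(key, secret), data=userMessage)
</code></pre>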
0
2016-09-10T07:34:10Z
[ "python", "ssl", "python-requests", "unificationengine" ]
How to fix broken pipe in Chat server using socket in Python after first request?
39,406,906
<p>I am playing with socket and tried to create simple chat server with only one client connection. Code and output as follows.</p> <p>echo_server.py</p> <pre><code>import socket host = '' port = 4538 backlog = 5 size = 1024 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((host,port)) s.listen(backlog) print "Starting Server" while 1: client, address = s.accept() try: data = client.recv(size) if data is not None: if data is 'q': print "I received request to close the connection" client.send('q') continue print "I got this from client {}".format(data) client.send(data) continue if data == 0: client.close() finally: client.close() </code></pre> <p>echo_client.py</p> <pre><code>import socket host = '' port = 4538 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((host,port)) try: while 1: message = filename = raw_input('Enter a your message: ') s.send(message) data = s.recv(1024) if data is 'q': print "You requested to close the connection" break print "Received from socket {}".format(data) finally: s.close() </code></pre> <p>Now, I had tried with sendall() too but it doesn't work. Following is the output on both sides</p> <p>client:</p> <pre><code>Enter a your message: hello Received from socket hello Enter a your message: world Received from socket Enter a your message: hi Traceback (most recent call last): File "echo_client.py", line 12, in &lt;module&gt; s.send(message) socket.error: [Errno 32] Broken pipe </code></pre> <p>And on server</p> <pre><code>Starting Server I got this from client hello </code></pre> <p>As you can see, the server doesn't get the second message(world). And replies with nothing and when I send third request to server with hi, client terminates with <strong>Broken Pipe</strong> How do I go about fixing it?</p> <p>EDIT 1:</p> <p>I changed the code now it's following. The s.accept() gets stuck in second request. Following is the code. echo_server.py</p> <pre><code>import socket host = '' port = 4538 backlog = 5 size = 1024 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((host,port)) s.listen(backlog) print "Starting Server" try: while 1: print "BEFORE REQUEST" client, address = s.accept() print "AFTER REQUEST" data = client.recv(size) if data: if data is 'q': print "I received request to close the connection" client.send('q') print "I got this from client {}".format(data) client.send(data) else: print "CLOSING IN ELSE" client.close() except: print "CLOSING IN except" client.close() </code></pre> <p>Following is the output.</p> <pre><code>BEFORE REQUEST AFTER REQUEST I got this from client hello BEFORE REQUEST </code></pre> <p>As you can see for the second time accept() never returns. How to make it working?</p>
0
2016-09-09T08:15:23Z
39,407,033
<p><code>recv</code> returns an empty string when the client closes the connection, not <code>None</code> or <code>0</code>. Since an empty string is falsy, simply use <code>if data:</code> or <code>if not data:</code>.</p> <p>And, as @JonClements pointed out, use an <code>except</code> instead of a <code>finally</code> in the server, or put the <code>while</code> inside the <code>try</code> and <code>if not data: break</code> to exit the <code>while</code> and execute the <code>finally</code>. </p>
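<p>A minimal sketch of the server loop with those two changes applied (the <code>while</code> moved inside the <code>try</code>, and a truthiness test on the received data):</p> <pre><code>client, address = s.accept()
try:
    while 1:
        data = client.recv(size)
        if not data:        # empty string: client closed the connection
            break
        client.send(data)   # echo the message back
finally:
    client.close()
</code></pre>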
1
2016-09-09T08:22:28Z
[ "python", "sockets", "broken-pipe" ]
ctypes: pass an array of bytes
39,407,115
<p>I want to call this C function from a DLL in python2.7</p> <pre><code>uint8_t DLL_CONV Mag(uint8_t *t1, uint8_t *t2, uint8_t *t3); </code></pre> <p>This function write data to the arrays sent by reference.</p> <p>This is my python code so far</p> <pre><code>_mag = hDLL.Mag #_mag.argtypes =[ctypes.POINTER(ctypes.c_ubyte),ctypes.POINTER(ctypes.c_ubyte),ctypes.POINTER(ctypes.c_ubyte)] _mag.argtypes =[ctypes.c_ubyte *100,ctypes.c_ubyte *100,ctypes.c_ubyte*100] _mag.restype = ctypes.c_ubyte def mag(): t1 = (ctypes.c_ubyte * 100)() t2 = (ctypes.c_ubyte * 100)() t3 = (ctypes.c_ubyte * 100)() s = _mag( (ctypes.c_ubyte *100)(t1), (ctypes.c_ubyte *100)(t2), (ctypes.c_ubyte *100)(t3) ) return t1,t2,t3 </code></pre> <p>I always got a TypeError at <code>s = _mag()</code></p>
0
2016-09-09T08:27:41Z
39,411,766
<p>I removed the <code>argtypes</code> definition and just passed the arrays directly, like so:</p> <pre><code>s = _mag( (t1), (t2), (t3) ) </code></pre>
0
2016-09-09T12:35:58Z
[ "python", "dll", "ctypes" ]
Django model migration added two unwanted fields when using AbstractBaseUser class
39,407,225
<p>Django 1.9<br> Python 3.4 </p> <p>I created a custom Users model using AbstractBaseUser class. Below is the code.</p> <pre><code>class UserModel(AbstractBaseUser): # custom user class SYSTEM = 0 TENANT = 1 parent_type_choices = ( (SYSTEM, 'System'), (TENANT, 'Tenant') ) sys_id = models.BigIntegerField(primary_key=True, blank=False) parent_type = models.PositiveIntegerField(choices=parent_type_choices, null=False, blank=False) parent_sys_id = models.ForeignKey('tenant.TenantModel', on_delete = models.SET_NULL, null=True, blank=True) last_name = models.CharField(null=False, blank=False, max_length=40) first_name = models.CharField(max_length=40, null=False, blank=False) display_name = models.CharField(max_length=80, unique=True, null=False, blank=False) login = models.CharField(max_length=40, unique=True, null=False, blank=False) authentication_method = models.CharField(max_length=80) pwd = models.CharField(max_length=40) access_valid_start = models.DateTimeField() access_valid_end = models.DateTimeField() created_when = models.DateTimeField() created_by = models.BigIntegerField() last_updated_when = models.DateTimeField() last_updated_by = models.BigIntegerField() notes = models.CharField(max_length=2048) USERNAME_FIELD = "login" class Meta: app_label = "accounts" db_table = "Users" </code></pre> <p>When I migrated the changes, table was created in db with two extra fields which I didn't defined. <code>Password</code> and <code>last_login</code> were added. </p> <pre><code>desc Users; +-----------------------+------------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------------+------------------+------+-----+---------+-------+ | password | varchar(128) | NO | | NULL | | | last_login | datetime | YES | | NULL | | | sys_id | bigint(20) | NO | PRI | NULL | | | parent_type | int(10) unsigned | NO | | NULL | | | last_name | varchar(40) | NO | | NULL | | | first_name | varchar(40) | NO | | NULL | | | display_name | varchar(80) | NO | UNI | NULL | | | login | varchar(40) | NO | UNI | NULL | | | authentication_method | varchar(80) | NO | | NULL | | | pwd | varchar(40) | NO | | NULL | | | access_valid_start | datetime | NO | | NULL | | | access_valid_end | datetime | NO | | NULL | | | created_when | datetime | NO | | NULL | | | created_by | bigint(20) | NO | | NULL | | | last_updated_when | datetime | NO | | NULL | | | last_updated_by | bigint(20) | NO | | NULL | | | notes | varchar(2048) | NO | | NULL | | | parent_sys_id_id | bigint(20) | YES | MUL | NULL | | +-----------------------+------------------+------+-----+---------+-------+ </code></pre> <p>Why this happened? How to remove these fields?</p>
0
2016-09-09T08:33:36Z
39,407,294
<p>You inherited from the <code>AbstractBaseUser</code> class, which provides these fields. You cannot remove them. Check the <a href="https://docs.djangoproject.com/es/1.9/topics/auth/customizing/#specifying-a-custom-user-model" rel="nofollow">Django doc</a> for more info.</p>
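<p>For reference, a simplified sketch of what <code>AbstractBaseUser</code> contributes (paraphrased, not the exact Django 1.9 source):</p> <pre><code>class AbstractBaseUser(models.Model):
    password = models.CharField(max_length=128)
    last_login = models.DateTimeField(blank=True, null=True)
    # ... plus the password hashing and authentication helpers
</code></pre>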
0
2016-09-09T08:38:17Z
[ "python", "mysql", "django", "django-models" ]
Hard time with my Dice game
39,407,282
<p>I have a problem with my dice game. Every time I run it, it says incorrect even if I guess the number right. Here is the code, and a screenshot of when I got it right: <a href="https://gyazo.com/87400aab5747a05c77415a816952b26d" rel="nofollow">https://gyazo.com/87400aab5747a05c77415a816952b26d</a></p> <pre><code>import random ask = input(": ") for i in range(1): dice = random.randint(1, 6) print(dice) if ask == random: print("correct") else: print("incorrect") </code></pre>
-1
2016-09-09T08:37:32Z
39,407,356
<p>That is probably because <code>input()</code> returns a string by default, while the dice roll is an integer, so "6" won't be equal to the number 6. You should do a conversion before the comparison.</p>
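<p>A one-line illustration of that conversion (note that the comparison should also be against the rolled value, not the <code>random</code> module):</p> <pre><code>ask = int(input(": "))   # "6" becomes 6
...
if ask == dice:          # compare with the rolled number
    print("correct")
</code></pre>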
-1
2016-09-09T08:42:03Z
[ "python", "pycharm" ]
Hard time with my Dice game
39,407,282
<p>I have a problem with my dice game. Every time I run it, it says incorrect even if I guess the number right. Here is the code, and a screenshot of when I got it right: <a href="https://gyazo.com/87400aab5747a05c77415a816952b26d" rel="nofollow">https://gyazo.com/87400aab5747a05c77415a816952b26d</a></p> <pre><code>import random ask = input(": ") for i in range(1): dice = random.randint(1, 6) print(dice) if ask == random: print("correct") else: print("incorrect") </code></pre>
-1
2016-09-09T08:37:32Z
39,407,393
<p>If you convert <code>ask</code> to an integer and compare it with the rolled value <code>dice</code> (not the <code>random</code> module), it will work fine:</p> <pre><code>import random ask = int(input(": ")) # input returns a string for i in range(1): dice = random.randint(1, 6) print(dice) if ask == dice: print("correct") else: print("incorrect") </code></pre>
1
2016-09-09T08:43:54Z
[ "python", "pycharm" ]
Put image into Google App Engine Datastore
39,407,305
<p>I'm currently building a blog using Google App Engine and Python and I'd like to create a column in my posts table in which to store an image. How can I upload an image from my PC and store it in the database?</p>
0
2016-09-09T08:38:42Z
39,407,467
<p>The Datastore is not optimized for storing images. A better option is to store them in <a href="https://cloud.google.com/storage/" rel="nofollow">Google Cloud Storage</a>. Your App Engine project will get a default bucket in GCS when you enable it, and you can use your dev console to upload files to this bucket manually or a command-line tool to upload through a terminal.</p> <p>Then you only need to store the object name from GCS in the Datastore.</p>
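<p>A hedged sketch of that flow using the GoogleAppEngineCloudStorageClient library (the bucket path, model and property names are illustrative; <code>uploaded_bytes</code> stands for whatever your upload handler received):</p> <pre><code>import cloudstorage as gcs
from google.appengine.api import app_identity
from google.appengine.ext import ndb

class Post(ndb.Model):
    image_object = ndb.StringProperty()   # GCS object name, not the image itself

bucket = app_identity.get_default_gcs_bucket_name()
filename = '/%s/images/my-post.jpg' % bucket
with gcs.open(filename, 'w', content_type='image/jpeg') as f:
    f.write(uploaded_bytes)

Post(image_object=filename).put()
</code></pre>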
1
2016-09-09T08:47:43Z
[ "python", "database", "google-app-engine" ]
Plotting an animated 3D graph from file.txt
39,407,327
<p>I'm trying to plot an animated 3D graph taking the data from file.txt. I have an issue with the animation function: when I add new coordinates to the txt file, the plot doesn't update by itself. The code is the following:</p> <pre><code>from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt import numpy as np import math as m import matplotlib.animation as animation import time np.set_printoptions(threshold=np.inf) def animate(i): point_stress=open(r'Prova.txt','r').read() lines=point_stress.split('\n') xs=[] ys=[] zs=[] for line in lines: if len(line)&gt;1: x,y,z=line.split() x=float(x) y=float(y) z=float(z) xs.append(x) ys.append(y) zs.append(z) point_stress.close() ax1.clear() ax1.plot(xs,ys,zs) fig=plt.figure() ax1=fig.add_subplot(111,projection='3d') ani=animation.FuncAnimation(fig,animate,interval=1000) ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_zlabel('z') plt.show() </code></pre>
0
2016-09-09T08:40:01Z
39,408,480
<pre><code>from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt import numpy as np import matplotlib.animation as animation np.set_printoptions(threshold=np.inf) fig = plt.figure() ax1 = fig.add_subplot(111, projection='3d') def init(): global points points = np.loadtxt('Prova.txt') def animate(i): ax1.clear() ax1.plot(points[:i, 0], points[:i, 1], points[:i, 2]) ani = animation.FuncAnimation(fig, animate, init_func=init, interval=1000) ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_zlabel('z') plt.show() </code></pre> <p>my txt file:</p> <pre><code>1 0 0 0 1 0 0 0 1 1 4 0 0 1 0 0 6 1 1 0 2 10 1 0 0 0 1 1 3 0 0 1 0 4 0 1 </code></pre>
0
2016-09-09T09:39:06Z
[ "python", "numpy", "animation", "matplotlib" ]
Recent entries from each category in django model
39,407,414
<p>I need to show in the template the most recent entries from each category: Instagram, Facebook, Twitter. Here is my solution, but it does not work.</p> <p>My get_queryset method, which isn't working:</p> <pre><code>def get_queryset(self): return super(OnlineManager, self).get_queryset().filter(is_online=True).order_by('-date').annotate(Count('social_channel'))[:1] </code></pre> <p>This is my model:</p> <pre><code>class Social(models.Model): social_channel = models.CharField(max_length=25, choices=SOCIAL_CHANNELS, default=SOCIAL_CHANNELS[0][0], blank=True) text = models.TextField(max_length=5000, blank=True, default='') is_online = models.BooleanField(default=True) position = models.PositiveIntegerField(default=0) date = models.DateTimeField(auto_now_add=True) def __str__(self): return self.social_channel class Meta: ordering = ['position'] </code></pre>
3
2016-09-09T08:44:39Z
39,408,327
<pre><code>def get_queryset(self): return super(OnlineManager, self).get_queryset().filter(is_online=True).order_by('-id').annotate(Count('social_channel'))[0] </code></pre>
3
2016-09-09T09:31:26Z
[ "python", "django", "model", "django-queryset", "categories" ]
Recent entries from each category in django model
39,407,414
<p>I need to show in the template the most recent entries from each category: Instagram, Facebook, Twitter. Here is my solution, but it does not work.</p> <p>My get_queryset method, which isn't working:</p> <pre><code>def get_queryset(self): return super(OnlineManager, self).get_queryset().filter(is_online=True).order_by('-date').annotate(Count('social_channel'))[:1] </code></pre> <p>This is my model:</p> <pre><code>class Social(models.Model): social_channel = models.CharField(max_length=25, choices=SOCIAL_CHANNELS, default=SOCIAL_CHANNELS[0][0], blank=True) text = models.TextField(max_length=5000, blank=True, default='') is_online = models.BooleanField(default=True) position = models.PositiveIntegerField(default=0) date = models.DateTimeField(auto_now_add=True) def __str__(self): return self.social_channel class Meta: ordering = ['position'] </code></pre>
3
2016-09-09T08:44:39Z
39,409,436
<p>Try this:</p> <pre><code>def get_queryset(self): query = super(OnlineManager, self).get_queryset() query = query.filter(is_online=True).order_by('social_channel', '-id').distinct('social_channel') return query </code></pre> <p>it should return the most recent entry for each <code>social_chanel</code></p>
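<p>Note that <code>.distinct('social_channel')</code> (<code>DISTINCT ON</code> a field) is only supported on PostgreSQL. On other database backends, one portable way to get the latest entry per channel is a small loop over the ordered queryset; a sketch, assuming the <code>Social</code> model from the question:</p> <pre><code>def latest_per_channel():
    latest = {}
    # oldest first, so newer rows overwrite older ones for each channel
    for entry in Social.objects.filter(is_online=True).order_by('date'):
        latest[entry.social_channel] = entry
    return list(latest.values())
</code></pre>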
0
2016-09-09T10:27:20Z
[ "python", "django", "model", "django-queryset", "categories" ]
How to check a text contains \
39,407,438
<pre><code>def check(text): pattern = re.compile(r'\\') rv = re.match(pattern, text) if rv: return True else: return False print check('\mi') # True print check('\ni') # False </code></pre> <p>Actually, I want any text that contains '\' to be treated as illegal.</p> <p>But Python treats '\n', '\b', etc. specially, so I cannot match them.</p> <p>Any solutions?</p>
0
2016-09-09T08:45:46Z
39,408,105
<pre><code>import re def check(text): rv = re.search('(\\\\)|(\\\n)', text) if rv: return True else: return False string = "\mi" print check(string) string = "\ni" print check(string) </code></pre> <p>Result:</p> <pre><code>True True </code></pre> <p><code>\\\n</code> covers newlines: by escaping <code>\\</code> and adding <code>\n</code> you can specifically catch strings that contain a newline. The same works for <code>\b</code> etc.</p>
0
2016-09-09T09:19:44Z
[ "python" ]
How to check a text contains \
39,407,438
<pre><code>def check(text): pattern = re.compile(r'\\') rv = re.match(pattern, text) if rv: return True else: return False print check('\mi') # True print check('\ni') # False </code></pre> <p>Actually, I want any text that contains '\' to be treated as illegal.</p> <p>But Python treats '\n', '\b', etc. specially, so I cannot match them.</p> <p>Any solutions?</p>
0
2016-09-09T08:45:46Z
39,409,594
<p>Why would you need or want to use a regex for this?</p> <pre><code> return '\\' in text </code></pre>
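<p>A quick check of why this handles the examples from the question (the backslash only survives in the string literal when it does not form a recognised escape sequence, or when a raw string is used):</p> <pre><code>print('\\' in '\mi')   # True  -- '\m' is not an escape sequence, the backslash stays
print('\\' in '\ni')   # False -- '\n' became a newline, no backslash is left
print('\\' in r'\ni')  # True  -- a raw string keeps the backslash
</code></pre>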
1
2016-09-09T10:35:11Z
[ "python" ]
Calling Fortran routines in scipy.special in a function jitted with numba
39,407,585
<p>Is there a way to directly or indirectly call the Fortran routines that can be found here <a href="https://github.com/scipy/scipy/tree/master/scipy/special/cdflib" rel="nofollow">https://github.com/scipy/scipy/tree/master/scipy/special/cdflib</a> and that are used by <code>scipy.stats</code> from a function that is supposed to be compiled by <code>numba</code> in <code>nopython</code> mode?</p> <p>Concretely, because <code>scipy.stats.norm.cdf()</code> is somehow pretty slow, I am right now directly using <code>scipy.special.ndtr</code>, which is called by the former. However, I'm doing this in a loop, and my intent is to speed it up using <code>numba</code>.</p>
2
2016-09-09T08:53:23Z
39,409,736
<p>I would take a look at <a href="https://github.com/QuantEcon/rvlib" rel="nofollow">rvlib</a>, which uses Numba and CFFI to call RMath, which is the standalone C library that R uses to calculate statistical distributions. The functions it provides should be callable by Numba in <code>nopython</code> mode. Take a look at the README for an example of the function that is equivalent to <code>scipy.stats.norm.cdf()</code></p> <p>If you're still interested in wrapping <code>cdflib</code> yourself, I would recommend using CFFI. You'll have to build a C-interface for the functions you want. You might find this blog post that I wrote helpful in getting started:</p> <p><a href="https://www.continuum.io/blog/developer-blog/calling-c-libraries-numba-using-cffi" rel="nofollow">https://www.continuum.io/blog/developer-blog/calling-c-libraries-numba-using-cffi</a></p>
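<p>If wrapping the Fortran routines turns out to be more trouble than it is worth, a workaround for the standard normal CDF specifically is to compute it from <code>math.erf</code>, which Numba supports in <code>nopython</code> mode. This is only a sketch for that one function, not a replacement for everything <code>scipy.stats.norm</code> offers:</p> <pre><code>import math
import numpy as np
from numba import njit

@njit
def norm_cdf(x):
    # standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

@njit
def norm_cdf_many(values):
    # evaluate the CDF over a 1-D array inside a jitted loop
    out = np.empty(values.shape[0])
    for i in range(values.shape[0]):
        out[i] = norm_cdf(values[i])
    return out
</code></pre>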
2
2016-09-09T10:42:10Z
[ "python", "numba" ]
scipy.interpolate leads to ImportError
39,407,640
<p>my setup is</p> <pre><code>import cartopy.crs as ccrs import matplotlib.pyplot as plt </code></pre> <p>I have scipy <code>0.17</code> and cartopy '0.14.2'.</p> <p>All I'm trying to do is</p> <pre><code>plt.axes(projection=ccrs.PlateCarree()) </code></pre> <p>and it leads to this:</p> <pre><code>Traceback (most recent call last): File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-93-636aeb1a7fc6&gt;", line 1, in &lt;module&gt; plt.axes(projection=ccrs.PlateCarree()) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/matplotlib/pyplot.py", line 867, in axes return subplot(111, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/matplotlib/pyplot.py", line 1022, in subplot a = fig.add_subplot(*args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/matplotlib/figure.py", line 987, in add_subplot self, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/matplotlib/projections/__init__.py", line 100, in process_projection_requirements projection_class, extra_kwargs = projection._as_mpl_axes() File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/cartopy/crs.py", line 150, in _as_mpl_axes import cartopy.mpl.geoaxes as geoaxes File "/opt/pycharm-2016.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/cartopy/mpl/geoaxes.py", line 52, in &lt;module&gt; from cartopy.vector_transform import vector_scalar_to_grid File "/opt/pycharm-2016.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/cartopy/vector_transform.py", line 26, in &lt;module&gt; from scipy.interpolate import griddata File "/opt/pycharm-2016.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/scipy/interpolate/__init__.py", line 158, in &lt;module&gt; from .interpolate import * File "/opt/pycharm-2016.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/scipy/interpolate/interpolate.py", line 12, in &lt;module&gt; import scipy.special as spec File "/opt/pycharm-2016.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/scipy/special/__init__.py", line 629, in &lt;module&gt; from .basic import * File "/opt/pycharm-2016.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/usr/local/anaconda2/envs/myenv3/lib/python3.5/site-packages/scipy/special/basic.py", line 14, in &lt;module&gt; from ._ufuncs import (ellipkm1, mathieu_a, mathieu_b, iv, jv, gamma, psi, zeta, ImportError: cannot import name 'zeta' </code></pre> <p>Deep down this appears to be a scipy problem, but I have the newest there - what's going on here?</p>
0
2016-09-09T08:56:48Z
39,409,175
<p>It turned out to be a problem with the <code>scipy</code> installation coming from <code>conda</code>. Creating a new environment and freshly installing <code>scipy</code> solved the issue.</p>
0
2016-09-09T10:14:26Z
[ "python", "scipy" ]
Logical operator binding of AND and OR in Python
39,407,674
<p>Yesterday while constructing a conditional statement I ran into what seems to me to be a strange precedence rule. The statement I had was</p> <pre><code>if not condition_1 or not condition_2 and not condition_3: </code></pre> <p>I found that</p> <pre><code>if True or True and False: # Evaluates to true and enters conditional </code></pre> <p>In my mind (and from previous experience in other languages) the <code>and</code> condition should have precedence as the statement is evaluated - so the statement should be equivalent to</p> <pre><code>if (True or True) and (False): </code></pre> <p>But in actual fact it is </p> <pre><code>if (True) or (True and False): </code></pre> <p>Which seems odd to me? </p>
0
2016-09-09T08:58:55Z
39,407,704
<p>From the Python documentation:</p> <blockquote> <p>The expression <a href="https://docs.python.org/2/reference/expressions.html#and" rel="nofollow"><strong>x and y</strong></a> first evaluates x; if x is false, its <strong>value</strong> is returned; otherwise, y is evaluated and the resulting <strong>value</strong> is returned.</p> <p>The expression <strong>x or y</strong> first evaluates x; if x is true, its <strong>value</strong> is returned; otherwise, y is evaluated and the resulting <strong>value</strong> is returned.</p> </blockquote> <p>So in your case</p> <pre><code>if True or True and False: </code></pre> <p>Since the interpreter sees <code>True</code> as the left operand of <code>or</code>, it short-circuits: the <code>and</code> expression on the right is never evaluated and the whole condition is <code>True</code>.</p>
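<p>In other words, <code>and</code> binds more tightly than <code>or</code>, and the left operand of <code>or</code> short-circuits the rest:</p> <pre><code>print(True or True and False)    # True  -- parsed as True or (True and False)
print((True or True) and False)  # False -- explicit parentheses change the grouping
</code></pre>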
1
2016-09-09T09:00:06Z
[ "python", "logical-operators" ]
Logical operator binding of AND and OR in Python
39,407,674
<p>Yesterday while constructing a conditional statement I ran into what seems to me to be a strange precedence rule. The statement I had was</p> <pre><code>if not condition_1 or not condition_2 and not condition_3: </code></pre> <p>I found that</p> <pre><code>if True or True and False: # Evaluates to true and enters conditional </code></pre> <p>In my mind (and from previous experience in other languages) the <code>and</code> condition should have precedence as the statement is evaluated - so the statement should be equivalent to</p> <pre><code>if (True or True) and (False): </code></pre> <p>But in actual fact it is </p> <pre><code>if (True) or (True and False): </code></pre> <p>Which seems odd to me? </p>
0
2016-09-09T08:58:55Z
39,407,874
<p>As you said, <code>and</code> has higher precedence, so your expression with parentheses added is exactly what you found:</p> <pre><code>if (True) or (True and False) </code></pre> <p>Since <code>True or (anything)</code> is <code>True</code> regardless of what <em>anything</em> is, the result is <code>True</code>. In fact <em>anything</em> is not even evaluated, because <code>or</code> short-circuits.</p>
0
2016-09-09T09:08:03Z
[ "python", "logical-operators" ]
Make 2 functions run at the same time and in parallel?
39,407,680
<p>I have an array</p> <pre><code>myArray = array(url1,url2,...,url90) </code></pre> <p>I want to execute this command 3 times in parallel</p> <p><code>scrapy crawl mySpider -a links=url</code></p> <p>and each time with 1 url,</p> <pre><code>scrapy crawl mySpider -a links=url1 scrapy crawl mySpider -a links=url2 scrapy crawl mySpider -a links=url3 </code></pre> <p>and when the first one finishes its job, it should pick up the next url, like</p> <pre><code>scrapy crawl mySpider -a links=url4 </code></pre> <p>I read <a href="http://stackoverflow.com/questions/21519431/r-run-2-different-blocks-of-code-at-the-same-time-in-parallel">this question</a>, and <a href="http://stackoverflow.com/questions/2957116/make-2-functions-run-at-the-same-time">this one</a> and I tried this:</p> <pre><code>import threading from threading import Thread def func1(url): scrapy crawl mySpider links=url if __name__ == '__main__': myArray = array(url1,url2,...,url90) for(url in myArray): Thread(target = func1(url)).start() </code></pre>
1
2016-09-09T08:59:18Z
39,407,885
<p>When you write <code>target = func1(url)</code> you are actually running <code>func1</code> and passing its result to <code>Thread</code> (not a reference to the function). This means the functions are run in the loop itself, not in separate threads.</p> <p>You need to rewrite it like this:</p> <pre><code>if __name__ == '__main__': myArray = array(url1,url2,...,url90) for url in myArray: Thread(target=func1, args=(url,)).start() </code></pre> <p>Then you are telling <code>Thread</code> to run <code>func1</code> with the arguments <code>(url,)</code>.</p> <p>Also you should wait for the threads to finish after the loop, otherwise your program will terminate just after starting all the threads.</p> <p>EDIT: and if you want only 3 threads to run at the same time you may want to use a ThreadPool:</p> <pre><code>if __name__ == '__main__': from multiprocessing.pool import ThreadPool pool = ThreadPool(processes=3) pool.map(func1, myArray) </code></pre>
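<p>Putting both points together (pass the function and its arguments separately, and keep references to the started threads so they can be joined), a sketch of the corrected loop looks like this, where <code>func1</code> and <code>myArray</code> are the names from the question:</p> <pre><code>from threading import Thread

threads = []
for url in myArray:
    t = Thread(target=func1, args=(url,))  # pass the function itself, not its result
    t.start()
    threads.append(t)

# wait for every crawl to finish before the program exits
for t in threads:
    t.join()
</code></pre>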
2
2016-09-09T09:08:29Z
[ "python", "parallel-processing", "scrapy", "ipython-parallel" ]
Error from QTableView selectionChanged
39,407,717
<p>If I try to get the signal when my tableview changes, Python raises this error:</p> <pre><code>Traceback (most recent call last): File "UIreadresultwindow.py", line 361, in &lt;module&gt; ui.setupUi(ReadResultWindow) File "UIreadresultwindow.py", line 113, in setupUi self.tableEntity.selectionModel().selectionChanged.connect(self.change _display_result) AttributeError: 'NoneType' object has no attribute 'selectionChanged' </code></pre> <p>I define tableEntity as:</p> <pre><code>self.tableEntity = QtWidgets.QTableView(self.centralWidget) </code></pre> <p><strong>Edit</strong>: At first my QTableView is empty. I have to open a file to fill it. </p> <p><strong>Edit2</strong>: To be more specific, I have something like this:</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_ReadResultWindow(object): def setupUi(self, ReadResultWindow): ReadResultWindow.setObjectName("ReadResultWindow") ReadResultWindow.resize(661, 438) self.centralWidget = QtWidgets.QWidget(ReadResultWindow) self.centralWidget.setObjectName("centralWidget") self.tableEntity = QtWidgets.QTableView(self.centralWidget) self.tableEntity.setObjectName("tableEntity") self.Open = QtWidgets.QPushButton(self.centralWidget) self.Open.setObjectName("Open") self.Open.clicked.connect(self.on_open_file) self.tableEntity.selectionModel().selectionChanged.connect(self.change_display_result) def on_open_file(self): x=[1,2,3,4] self.model= QtGui.QStandardItemModel() for item in x: self.model.invisibleRootItem().appendRow( QtGui.QStandardItem(str(x))) self.proxy= QtCore.QSortFilterProxyModel() self.proxy.setSourceModel(self.model) self.tableEntity.setModel(self.proxy) self.tableEntity.resizeColumnsToContents() def change_display_result(self,selected,deselected): index_entity = self.tableEntity.selectionModel().selectedIndexes() temp_entity = self.tableEntity.selectionModel().model() for index in sorted(index_entity): print(str(temp_entity.data(index))) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) ReadResultWindow = QtWidgets.QMainWindow() ui = Ui_ReadResultWindow() ui.setupUi(ReadResultWindow) ReadResultWindow.show() sys.exit(app.exec_()) </code></pre>
0
2016-09-09T09:01:12Z
39,423,916
<p>You must first set a model on the view. Then you can also choose a selection mode, for example:</p> <pre><code>self.tableEntity = QtWidgets.QTableView(self.centralWidget) self.tableEntity.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) </code></pre> <p>You can set one of these selection modes: <code>{ NoSelection, SingleSelection, MultiSelection, ExtendedSelection, ContiguousSelection }</code></p> <p>Hope this helps.</p>
0
2016-09-10T07:53:49Z
[ "python", "pyqt", "qtableview", "selectionchanged" ]
Error from QTableView selectionChanged
39,407,717
<p>If I try to get the signal when my tableview changes, Python raises this error:</p> <pre><code>Traceback (most recent call last): File "UIreadresultwindow.py", line 361, in &lt;module&gt; ui.setupUi(ReadResultWindow) File "UIreadresultwindow.py", line 113, in setupUi self.tableEntity.selectionModel().selectionChanged.connect(self.change _display_result) AttributeError: 'NoneType' object has no attribute 'selectionChanged' </code></pre> <p>I define tableEntity as:</p> <pre><code>self.tableEntity = QtWidgets.QTableView(self.centralWidget) </code></pre> <p><strong>Edit</strong>: At first my QTableView is empty. I have to open a file to fill it. </p> <p><strong>Edit2</strong>: To be more specific, I have something like this:</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_ReadResultWindow(object): def setupUi(self, ReadResultWindow): ReadResultWindow.setObjectName("ReadResultWindow") ReadResultWindow.resize(661, 438) self.centralWidget = QtWidgets.QWidget(ReadResultWindow) self.centralWidget.setObjectName("centralWidget") self.tableEntity = QtWidgets.QTableView(self.centralWidget) self.tableEntity.setObjectName("tableEntity") self.Open = QtWidgets.QPushButton(self.centralWidget) self.Open.setObjectName("Open") self.Open.clicked.connect(self.on_open_file) self.tableEntity.selectionModel().selectionChanged.connect(self.change_display_result) def on_open_file(self): x=[1,2,3,4] self.model= QtGui.QStandardItemModel() for item in x: self.model.invisibleRootItem().appendRow( QtGui.QStandardItem(str(x))) self.proxy= QtCore.QSortFilterProxyModel() self.proxy.setSourceModel(self.model) self.tableEntity.setModel(self.proxy) self.tableEntity.resizeColumnsToContents() def change_display_result(self,selected,deselected): index_entity = self.tableEntity.selectionModel().selectedIndexes() temp_entity = self.tableEntity.selectionModel().model() for index in sorted(index_entity): print(str(temp_entity.data(index))) if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) ReadResultWindow = QtWidgets.QMainWindow() ui = Ui_ReadResultWindow() ui.setupUi(ReadResultWindow) ReadResultWindow.show() sys.exit(app.exec_()) </code></pre>
0
2016-09-09T09:01:12Z
39,427,668
<p>The reason why you get that error is that you did not set the model on the table before trying to access the selection-model. The best way to fix this is to move the model setup code out of <code>on_open_file</code> and into <code>setupUi</code>. The <code>on_open_file</code> then just needs to clear the model before reloading the data.</p> <p>Below is a re-write of your example. Note that I had to fix quite a few other things to get it to work (mainly the central-widget and layout).</p> <pre><code>import sys, random from PyQt5 import QtCore, QtGui, QtWidgets class Ui_ReadResultWindow(object): def setupUi(self, ReadResultWindow): ReadResultWindow.resize(661, 438) self.tableEntity = QtWidgets.QTableView() self.model = QtGui.QStandardItemModel() self.proxy = QtCore.QSortFilterProxyModel() self.proxy.setSourceModel(self.model) self.tableEntity.setModel(self.proxy) self.tableEntity.selectionModel().selectionChanged.connect( self.change_display_result) self.Open = QtWidgets.QPushButton('Test') self.Open.clicked.connect(self.on_open_file) widget = QtWidgets.QWidget(ReadResultWindow) layout = QtWidgets.QVBoxLayout(widget) layout.addWidget(self.tableEntity) layout.addWidget(self.Open) ReadResultWindow.setCentralWidget(widget) def on_open_file(self): self.model.setRowCount(0) x = random.sample(range(10, 100), 10) for item in x: self.model.invisibleRootItem().appendRow( QtGui.QStandardItem(str(item))) self.tableEntity.resizeColumnsToContents() def change_display_result(self,selected,deselected): index_entity = self.tableEntity.selectionModel().selectedIndexes() temp_entity = self.tableEntity.selectionModel().model() for index in sorted(index_entity): print(str(temp_entity.data(index))) if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) ReadResultWindow = QtWidgets.QMainWindow() ui = Ui_ReadResultWindow() ui.setupUi(ReadResultWindow) ReadResultWindow.show() sys.exit(app.exec_()) </code></pre>
1
2016-09-10T15:36:47Z
[ "python", "pyqt", "qtableview", "selectionchanged" ]
How to check if the parameter exist in python
39,407,725
<p>I would like to create a simple python script that will take a parameter from the console, and it will display this parameter. If there will be no parameter then I would like to display error message, but custom message not something like IndexError: list index out of range</p> <p>Something like this:</p> <pre><code>if isset(sys.argv[1]): print sys.argv[1]; else: print "No parameter has been included" </code></pre>
0
2016-09-09T09:01:32Z
39,407,768
<pre><code>if len(sys.argv) &gt;= 2: print(sys.argv[1]) else: print("No parameter has been included") </code></pre> <p>For more complex command line interfaces there is the <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow"><code>argparse</code></a> module in Python's standard library - but for simple projects taking just a couple parameters directly checking <code>sys.argv</code> is alright. </p>
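<p>For reference, a minimal <code>argparse</code> version of the same check might look like this (the positional argument name <code>value</code> is arbitrary):</p> <pre><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument('value', nargs='?', help='value to display')
args = parser.parse_args()

if args.value is not None:
    print(args.value)
else:
    print("No parameter has been included")
</code></pre>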
0
2016-09-09T09:03:44Z
[ "python", "linux" ]
How to check if the parameter exist in python
39,407,725
<p>I would like to create a simple python script that will take a parameter from the console, and it will display this parameter. If there will be no parameter then I would like to display error message, but custom message not something like IndexError: list index out of range</p> <p>Something like this:</p> <pre><code>if isset(sys.argv[1]): print sys.argv[1]; else: print "No parameter has been included" </code></pre>
0
2016-09-09T09:01:32Z
39,407,897
<pre><code>import sys try: print sys.argv[1] except IndexError: print "No parameter has been included" </code></pre>
0
2016-09-09T09:09:27Z
[ "python", "linux" ]
How to check if the parameter exist in python
39,407,725
<p>I would like to create a simple python script that will take a parameter from the console, and it will display this parameter. If there will be no parameter then I would like to display error message, but custom message not something like IndexError: list index out of range</p> <p>Something like this:</p> <pre><code>if isset(sys.argv[1]): print sys.argv[1]; else: print "No parameter has been included" </code></pre>
0
2016-09-09T09:01:32Z
39,408,049
<p>You can check the length:</p> <pre><code>if len(sys.argv) &gt; 1: ... </code></pre> <p>Or use try/except:</p> <pre><code>try: sys.argv[1] except IndexError as ie: print("Exception : {0}".format(ie)) </code></pre>
1
2016-09-09T09:17:26Z
[ "python", "linux" ]
How to check if the parameter exist in python
39,407,725
<p>I would like to create a simple python script that will take a parameter from the console, and it will display this parameter. If there will be no parameter then I would like to display error message, but custom message not something like IndexError: list index out of range</p> <p>Something like this:</p> <pre><code>if isset(sys.argv[1]): print sys.argv[1]; else: print "No parameter has been included" </code></pre>
0
2016-09-09T09:01:32Z
39,408,184
<pre><code>import sys print sys.argv[0] # will print your file name if len(sys.argv) &gt; 1: print sys.argv[1]; else: print "No parameter has been included" </code></pre> <p>OR</p> <pre><code>import sys try: print sys.argv[1] except IndexError, e: print "No parameter has been included" </code></pre>
0
2016-09-09T09:24:30Z
[ "python", "linux" ]
How to check if the parameter exist in python
39,407,725
<p>I would like to create a simple python script that will take a parameter from the console, and it will display this parameter. If there will be no parameter then I would like to display error message, but custom message not something like IndexError: list index out of range</p> <p>Something like this:</p> <pre><code>if isset(sys.argv[1]): print sys.argv[1]; else: print "No parameter has been included" </code></pre>
0
2016-09-09T09:01:32Z
39,408,855
<p>Just for fun, you can also use <code>getopt</code> which provides you a way of predefining the options that are acceptable using the unix <code>getopt</code> conventions.</p> <pre><code>import sys import getopt try: opts, args = getopt.getopt(sys.argv[1:], "hvxrc:s:", ["help", "config=", "section="]) except getopt.GetoptError as err: print ("Option error:", str(err)) opts=[] for op , val in opts: print ("option",op,"Argument",val) if not opts: print ("No parameter supplied") </code></pre> <p>In the above if an incorrect parameter is supplied all of the options are scrapped.<br> Examples of use would be: </p> <pre><code>python myprog.py -h python myprog.py --help python myprog.py -c123 python myprog.py --config=123 </code></pre> <p><a href="https://pymotw.com/2/getopt/" rel="nofollow">https://pymotw.com/2/getopt/</a><br> <a href="http://linux.about.com/library/cmd/blcmdl1_getopt.htm" rel="nofollow">http://linux.about.com/library/cmd/blcmdl1_getopt.htm</a></p>
0
2016-09-09T09:57:06Z
[ "python", "linux" ]
Two list with string. Pick random from each list and fuse them together with %s in Python
39,407,737
<p>I have 2 list defined:</p> <pre><code> lunchQuote=['ska vi ta %s?','ska vi dra ned till %s?','jag tänkte käka på %s, ska du med?','På %s är det mysigt, ska vi ta där?'] lunchBTH=['thairestaurangen vid korsningen','det är lite mysigt i fiket jämte demolabbet','Indiska','Pappa curry','boden uppe på parkeringen' ,'Bergåsa kebab','Pasterian','Villa Oscar','Eat here','Bistro J'] </code></pre> <p>And as you can see in the first list I wanna use the %s to bind to of the strings of each list together to a new string that I can send back to the program.</p> <p>One example of a successful new string could be:</p> <pre><code>ska vi ta thairestaurangen vid korsningen? </code></pre> <p>The problem is I don't really understand how it works with %s and what I have to do before I can use it. I worked with .format before but then just on a regular string, not strings from two different lists.</p> <p>I have started with saving the randomly selected choices and save them in a variable:</p> <pre><code> lunchQuote = random.choice(lunchQuote) BTH = random.choice(lunchBTH) </code></pre> <p>What should I do next?</p>
0
2016-09-09T09:02:13Z
39,407,794
<p>The best way is using <code>format</code> method, defining <code>lunchQuote</code> as follow</p> <pre><code>import random lunchQuote=['ska vi ta {}?', 'ska vi dra ned till {}?', 'jag tänkte käka på {}, ska du med?', 'På {} är det mysigt, ska vi ta där?'] lunchBTH=['thairestaurangen vid korsningen', 'det är lite mysigt i fiket jämte demolabbet', 'Indiska', 'Pappa curry', 'boden uppe på parkeringen', 'Bergåsa kebab', 'Pasterian','Villa Oscar', 'Eat here', 'Bistro J'] lunchQuote = random.choice(lunchQuote) BTH = random.choice(lunchBTH) print(lunchQuote.format(BTH)) </code></pre> <p>You can use also <code>%s</code> as follow:</p> <pre><code>import random lunchQuote=['ska vi ta %s?', 'ska vi dra ned till %s?', 'jag tänkte käka på %s, ska du med?', 'På %s är det mysigt, ska vi ta där?'] lunchBTH=['thairestaurangen vid korsningen', 'det är lite mysigt i fiket jämte demolabbet', 'Indiska', 'Pappa curry', 'boden uppe på parkeringen', 'Bergåsa kebab', 'Pasterian','Villa Oscar', 'Eat here', 'Bistro J'] lunchQuote = random.choice(lunchQuote) BTH = random.choice(lunchBTH) print(lunchQuote % BTH) </code></pre>
1
2016-09-09T09:04:43Z
[ "python", "list", "python-3.x" ]
Change a filename?
39,407,983
<p>I have some files in a folder named like this <code>test_1999.0000_seconds.vtk</code>. What I would like to do is to is to change the name of the file to <code>test_1999.0000.vtk</code>.</p>
1
2016-09-09T09:13:59Z
39,408,012
<p>You can use <a href="https://docs.python.org/2/library/os.html#os.rename" rel="nofollow"><code>os.rename</code></a></p> <pre><code>os.rename("test_1999.0000_seconds.vtk", "test_1999.0000.vtk") </code></pre>
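<p>If the new name should be derived from the old one by stripping the <code>_seconds</code> part, something like this works (assuming the script runs in the folder that contains the file):</p> <pre><code>import os

old_name = "test_1999.0000_seconds.vtk"
new_name = old_name.replace("_seconds", "")  # the new name becomes "test_1999.0000.vtk"
os.rename(old_name, new_name)
</code></pre>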
3
2016-09-09T09:15:22Z
[ "python" ]
Change a filename?
39,407,983
<p>I have some files in a folder named like this <code>test_1999.0000_seconds.vtk</code>. What I would like to do is to is to change the name of the file to <code>test_1999.0000.vtk</code>.</p>
1
2016-09-09T09:13:59Z
39,409,354
<pre><code>import os currentPath = os.getcwd() # get the current working directory unWantedString = "_seconds" matchingFiles = [] for path, subdirs, files in os.walk(currentPath): for f in files: if f.endswith(".vtk"): # collect the vtk files matchingFiles.append(os.path.join(path, f)) # portable on any OS for mf in matchingFiles: if unWantedString in mf: oldName = mf newName = mf.replace(unWantedString, '') # remove the substring from the path os.rename(oldName, newName) # rename the file without the substring </code></pre>
0
2016-09-09T10:22:40Z
[ "python" ]
Get code from module in Excel
39,408,029
<p>I have a number of workbooks that have Macros which point to a particular SQL server using a connection string embedded in the code. We've migrated to a new SQL server so I need to go through these and alter the connection string to look at the new server in each of the Macros that explicitly mentions it. </p> <p>Currently I'm able to list all of the modules in the workbook, however I'm unable to get the code from each module, just the name and type number. </p> <pre><code>for vbc in wb.VBProject.VBComponents: print(vbc.Name + ": " + str(vbc.Type) + "\n" + str(vbc.CodeModule)) </code></pre> <p>What property stores the code so that I can find and replace the server name? I've had a look through the VBA and pywin32 docs but can't find anything. </p>
0
2016-09-09T09:16:09Z
39,408,543
<p>Got it- there's a Lines method in the CodeModule object that allows you to take a selection based on a starting and ending line. Using this in conjunction with the CountOfLines property allows you to get the whole thing.</p> <pre><code>for vbc in wb.VBProject.VBComponents: print(vbc.Name + ":\n" + vbc.CodeModule.Lines(1, vbc.CodeModule.CountOfLines)) </code></pre> <p>It's worth noting as well that the first line is line 1, not line 0 as that caught me out. The following will error <code>vbc.CodeModule.Lines(0, vbc.CodeModule.CountOfLines - 1)</code> because the index 0 is out of range.</p>
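<p>To actually rewrite the connection string, the <code>CodeModule</code> object also exposes <code>DeleteLines</code> and <code>AddFromString</code>, so a rough, untested sketch of the replacement step could look like this (the server names are placeholders, and it is worth backing the workbooks up first):</p> <pre><code>old_server = "OLD-SQL-SERVER"   # placeholder
new_server = "NEW-SQL-SERVER"   # placeholder

for vbc in wb.VBProject.VBComponents:
    count = vbc.CodeModule.CountOfLines
    if count == 0:
        continue
    code = vbc.CodeModule.Lines(1, count)
    if old_server in code:
        patched = code.replace(old_server, new_server)
        vbc.CodeModule.DeleteLines(1, count)   # clear the module
        vbc.CodeModule.AddFromString(patched)  # write the patched code back
</code></pre>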
0
2016-09-09T09:42:39Z
[ "python", "excel", "vba", "excel-vba", "pywin32" ]
Concatenate values of 2 columns into 1 (equivalent of R's paste)
39,408,038
<p>Example data in python 3.5:</p> <pre><code>import pandas as pd df=pd.DataFrame({"A":["x","y","z","t","f"], "B":[1,2,1,2,4]}) </code></pre> <p>This gives me a dataframe with 2 columns "A" and "B". I then want to add a third column "C" that contains the value of "A" and "B" concatenated and separated by "_".<br> Following the suggestion from <a href="http://stackoverflow.com/questions/28046408/equivalent-of-rs-paste-command-for-vector-of-numbers-in-python?rq=1">this answer</a> I can do it like this.</p> <pre><code>for i in range(0,len(df["A"])): df.loc[i,"C"]=df.loc[i,"A"]+"_"+str(df.loc[i,"B"]) </code></pre> <p>I get the result I want but it seems convoluted for such a simple task.</p> <p>In R this would be done like this:</p> <pre><code>df&lt;-data.frame(A=c("x","y","z","t","f"), B=c(1,2,1,2,4)) df$C&lt;-paste(df$A,df$B,sep="_") </code></pre> <p>Another <a href="http://stackoverflow.com/questions/21292552/equivalent-of-paste-r-to-python">thread</a> suggested the use of the "%" operator but I can't get it to work.</p> <p>Is there a better alternative?</p>
1
2016-09-09T09:16:45Z
39,408,073
<p>You can just add the columns together but for 'B' you need to cast the type using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype(str)</code></a>:</p> <pre><code>In [115]: df['C'] = df['A'] + '_' + df['B'].astype(str) df Out[115]: A B C 0 x 1 x_1 1 y 2 y_2 2 z 1 z_1 3 t 2 t_2 4 f 4 f_4 </code></pre> <p>This is a vectorised approach and will scale much better than looping over every row for large dfs</p>
2
2016-09-09T09:18:32Z
[ "python", "pandas" ]
detect closed eyes after it closed 3 seconds
39,408,039
<p>I want to detect closed eyes after it closed 3 seconds using openCV in python . But when I used time.sleep(1) to count time, the entire program is stopped . But the program must be run continuously to dectect close eyes. I think that can be used thread in python</p> <pre><code> def get_frame(self): success, image = self.video.read() gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale( gray, scaleFactor=1.3, minNeighbors=5, minSize=(30, 30), flags=cv2.cv.CV_HAAR_SCALE_IMAGE ) while True: for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 0), 2) roi_gray = gray[y:y+h, x:x+w] roi_color = image[y:y+h, x:x+w] eyes = eyesCascade.detectMultiScale(roi_gray) if eyes is not(): for (ex,ey,ew,eh) in eyes: cv2.rectangle(roi_color,(ex -10 ,ey - 10),(ex+ew + 10,ey+eh + 10),(0,255,0),2) twoeyes = twoeyesCascade.detectMultiScale(roi_gray) checkyeys = 0 if twoeyes is not(): for (exx,eyy,eww,ehh) in twoeyes: checkyeys = 0 led.write(1) cv2.rectangle(roi_color,(exx-5 ,eyy -5 ),(exx+eww -5,eyy+ehh -5 ),(0,0, 255),2) else: #when eyes close print "------------------------------------" for i in xrange(10): time.sleep(1) if(i % 3 == 0){ #eyes close in 3 seconds print "Warning" } print i ret, jpeg = cv2.imencode('.jpg', image) self.string = jpeg.tostring() self._image = image return jpeg.tostring() </code></pre> <p>Thank for helping !!! </p>
-2
2016-09-09T09:16:49Z
39,408,245
<p>Try to change </p> <pre><code>if(i % 3 == 0){ #eyes close in 3 seconds print "Warning" } </code></pre> <p>to</p> <pre><code>if i % 3 == 0: #eyes close in 3 seconds print "Warning" </code></pre>
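<p>To detect that the eyes have stayed closed for 3 seconds without freezing the capture loop, keep a timestamp and compare it against <code>time.time()</code> on every frame instead of calling <code>time.sleep</code>. A small self-contained helper (a sketch, not part of the original code) could look like this:</p> <pre><code>import time

class ClosedEyeTimer(object):
    """Tracks how long the eyes have been closed without blocking the frame loop."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.closed_since = None

    def update(self, eyes_open):
        if eyes_open:
            self.closed_since = None          # reset whenever open eyes are seen
            return False
        if self.closed_since is None:
            self.closed_since = time.time()   # eyes just closed, start the clock
            return False
        return time.time() - self.closed_since &gt;= self.threshold
</code></pre> <p>Call <code>timer.update(True)</code> in the branch where open eyes are detected and <code>timer.update(False)</code> in the <code>else</code> branch; when it returns <code>True</code>, the eyes have been closed for at least 3 seconds and the warning can be printed.</p>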
0
2016-09-09T09:27:37Z
[ "python", "opencv" ]
SafeConfigParser: sections and environment variables
39,408,101
<p>(<em>Using Python 3.4.3</em>)</p> <p>I want to use environment variables in my config file and I read that I should use <code>SafeConfigParser</code> with <code>os.environ</code> as parameter to achieve it.</p> <pre><code>[test] mytest = %(HOME)s/.config/my_folder </code></pre> <p>Since I need to get all the options in a section, I am executing the following code:</p> <pre><code>userConfig = SafeConfigParser(os.environ) userConfig.read(mainConfigFile) for section in userConfig.sections(): for item in userConfig.options(section): print( "### " + section + " -&gt; " + item) </code></pre> <p>My result is not what I expected. As you can see below, it got not only the option I have in my section (<code>[test]\mytest</code>), but also all the environment variables:</p> <pre class="lang-none prettyprint-override"><code>### test -&gt; mytest ### test -&gt; path ### test -&gt; lc_time ### test -&gt; xdg_runtime_dir ### test -&gt; vte_version ### test -&gt; gnome_keyring_control ### test -&gt; user </code></pre> <p>What am I doing wrong?</p> <p>I want to be able to parse <code>[test]\mytest</code> as <code>/home/myuser/.config/my_folder</code> but don't want the <code>SafeConfigParser</code> adding all my environment variables to each one of its sections.</p>
1
2016-09-09T09:19:34Z
39,411,344
<p>If I have understood your question and what you want to do correctly, you can avoid the problem by <strong><em>not</em></strong> supplying <code>os.environ</code> as parameter to <code>SafeConfigParser</code>. Instead use it as the <code>vars</code> keyword argument's value when you actually retrieve values using the <code>get()</code> method.</p> <p>This works because it avoids creating a <em>default</em> section from all your environment variables, but allows their values to be used when referenced for interpolation/expansion purposes.</p> <pre><code>userConfig = SafeConfigParser() userConfig.read(mainConfigFile) for section in userConfig.sections(): for item in userConfig.options(section): value = userConfig.get(section, item, vars=os.environ) # use it here print('### [{}] -&gt; {}: {!r}'.format(section, item, value)) </code></pre>
1
2016-09-09T12:13:18Z
[ "python", "environment-variables", "python-3.4", "configparser" ]
How to remove a subset of a data frame in Python?
39,408,109
<p>My dataframe df is 3020x4. I'd like to remove a subset df1 20x4 out of the original. In other words, I just want to get the difference whose shape is 3000x4. I tried the below but it did not work. It returned exactly df. Would you please help? Thanks.</p> <pre><code>new_df = df.drop(df1) </code></pre>
0
2016-09-09T09:19:50Z
39,408,460
<p>As you seem to be unable to post a representative example I will demonstrate one approach using <code>merge</code> with param <code>indicator=True</code>:</p> <p>So generate some data:</p> <pre><code>In [116]: df = pd.DataFrame(np.random.randn(5,3), columns=list('abc')) df Out[116]: a b c 0 -0.134933 -0.664799 -1.611790 1 1.457741 0.652709 -1.154430 2 0.534560 -0.781352 1.978084 3 0.844243 -0.234208 -2.415347 4 -0.118761 -0.287092 1.179237 </code></pre> <p>take a subset:</p> <pre><code>In [118]: df_subset=df.iloc[2:3] df_subset Out[118]: a b c 2 0.53456 -0.781352 1.978084 </code></pre> <p>now perform a left <code>merge</code> with param <code>indicator=True</code> this will add <code>_merge</code> column which indicates whether the row is <code>left_only</code>, <code>both</code> or <code>right_only</code> (the latter won't appear in this example) and we filter the merged df to show only <code>left_only</code>:</p> <pre><code>In [121]: df_new = df.merge(df_subset, how='left', indicator=True) df_new = df_new[df_new['_merge'] == 'left_only'] df_new Out[121]: a b c _merge 0 -0.134933 -0.664799 -1.611790 left_only 1 1.457741 0.652709 -1.154430 left_only 3 0.844243 -0.234208 -2.415347 left_only 4 -0.118761 -0.287092 1.179237 left_only </code></pre> <p>here is the original merged df:</p> <pre><code>In [122]: df.merge(df_subset, how='left', indicator=True) Out[122]: a b c _merge 0 -0.134933 -0.664799 -1.611790 left_only 1 1.457741 0.652709 -1.154430 left_only 2 0.534560 -0.781352 1.978084 both 3 0.844243 -0.234208 -2.415347 left_only 4 -0.118761 -0.287092 1.179237 left_only </code></pre>
0
2016-09-09T09:37:51Z
[ "python", "pandas", "subset" ]
how to convert string array of mixed data types
39,408,231
<p>Let's say I have read and loaded a file into a 2D matrix of mixed data as strings(an example has been provided below)</p> <pre><code># an example row of the matrix ['529997' '46623448' '2122110124' '2310' '2054' '2' '66' '' '2010/11/03-12:42:08' '26' 'CLEARING' '781' '30' '3' '0' '0' '1'] </code></pre> <p>I want to convert this chunk of data into their data types to be able to do statistical analysis on it with numpy and scipy.</p> <p>The datatype for all of the columns is integer <strong>except</strong> the 8th index this is DateTime and the 10th index is pure string.</p> <h3>Question:</h3> <p>What is the easiest way to this conversation?</p> <hr> <h1>EDIT</h1> <p>Performance is very important than readability, I have to convert <strong>4.5m</strong> rows of data and then process them!</p>
0
2016-09-09T09:26:46Z
39,408,868
<p>Here is a one linear with list comprehension:</p> <pre><code>In [24]: from datetime import datetime In [25]: func = lambda x: datetime.strptime(x, "%Y/%m/%d-%H:%M:%S") In [26]: [{8:func, 10:str}.get(ind)(item) if ind in {8, 10} else int(item or '0') for ind, item in enumerate(lst)] Out[26]: [529997, 46623448, 2122110124, 2310, 2054, 2, 66, 0, datetime.datetime(2010, 11, 3, 12, 42, 8), 26, 'CLEARING', 781, 30, 3, 0, 0, 1] </code></pre>
2
2016-09-09T09:58:12Z
[ "python", "arrays", "numpy", "converter", "mixed" ]
how to convert string array of mixed data types
39,408,231
<p>Let's say I have read and loaded a file into a 2D matrix of mixed data as strings(an example has been provided below)</p> <pre><code># an example row of the matrix ['529997' '46623448' '2122110124' '2310' '2054' '2' '66' '' '2010/11/03-12:42:08' '26' 'CLEARING' '781' '30' '3' '0' '0' '1'] </code></pre> <p>I want to convert this chunk of data into their data types to be able to do statistical analysis on it with numpy and scipy.</p> <p>The datatype for all of the columns is integer <strong>except</strong> the 8th index this is DateTime and the 10th index is pure string.</p> <h3>Question:</h3> <p>What is the easiest way to this conversation?</p> <hr> <h1>EDIT</h1> <p>Performance is very important than readability, I have to convert <strong>4.5m</strong> rows of data and then process them!</p>
0
2016-09-09T09:26:46Z
39,409,177
<p>I like clear code like this:</p> <pre><code>from datetime import datetime input_row = ['529997', '46623448', '2122110124', '2310', '2054', '2', '66', '', '2010/11/03-12:42:08', '26', 'CLEARING', '781', '30', '3', '0', '0', '1'] _date = lambda x: datetime.strptime(x, "%Y/%m/%d-%H:%M:%S") # only necessary because '' should be treated as 0 _int = lambda x: int('0' + x) # specify the type parsers for each column parsers = 8 * [_int] + [_date, _int, str] + 6 * [_int] output_row = [parse(input) for parse, input in zip(parsers, input_row)] </code></pre> <p>Depending on your needs, use an iterator instead of a list. This could greatly reduce the amount of memory you need.</p>
1
2016-09-09T10:14:27Z
[ "python", "arrays", "numpy", "converter", "mixed" ]
how to convert string array of mixed data types
39,408,231
<p>Let's say I have read and loaded a file into a 2D matrix of mixed data as strings(an example has been provided below)</p> <pre><code># an example row of the matrix ['529997' '46623448' '2122110124' '2310' '2054' '2' '66' '' '2010/11/03-12:42:08' '26' 'CLEARING' '781' '30' '3' '0' '0' '1'] </code></pre> <p>I want to convert this chunk of data into their data types to be able to do statistical analysis on it with numpy and scipy.</p> <p>The datatype for all of the columns is integer <strong>except</strong> the 8th index this is DateTime and the 10th index is pure string.</p> <h3>Question:</h3> <p>What is the easiest way to this conversation?</p> <hr> <h1>EDIT</h1> <p>Performance is very important than readability, I have to convert <strong>4.5m</strong> rows of data and then process them!</p>
0
2016-09-09T09:26:46Z
39,415,194
<p>I have developed the following function to convert the 4.5m rows of the matrix, the invalid data type exception is also taken into consideration too. Although it can be improved with parallelizing the process, but it did the job OK for me, for what it worth, I am going to post it here.</p> <pre><code>def cnvt_data(mat): from datetime import datetime _date = lambda x: datetime.strptime(x, "%Y/%m/%d-%H:%M:%S") # only necessary because '' should be treated as 0 _int = lambda x: int('0' + x) # specify the type parsers for each column parsers = 8 * [_int] + [_date, _int, str] + 6 * [_int] def try_parse(parse, value, _def): try: return parse(value), True except ValueError: return _def, False matrix = []; for idx in range(len(mat)): try: row = mat[idx] matrix.append(np.asarray([parse(input) for parse, input in zip(parsers, row)])) except ValueError: l = []; matrix.append([]) for _idx, args in enumerate(zip(parsers, row)): val, pres = try_parse(args[0], args[1], 0) matrix[-1].append(val) if(not pres): l.append(_idx); print "\r[Error] value error @row %d @indices(%s): replaced with 0" %(idx, ', '.join(str(x) for x in l)) print "\r[.] %d%% converted" %(idx * 100/len(mat)), print "\r[+] 100% converted." return matrix </code></pre>
1
2016-09-09T15:38:32Z
[ "python", "arrays", "numpy", "converter", "mixed" ]
how to convert string array of mixed data types
39,408,231
<p>Let's say I have read and loaded a file into a 2D matrix of mixed data as strings(an example has been provided below)</p> <pre><code># an example row of the matrix ['529997' '46623448' '2122110124' '2310' '2054' '2' '66' '' '2010/11/03-12:42:08' '26' 'CLEARING' '781' '30' '3' '0' '0' '1'] </code></pre> <p>I want to convert this chunk of data into their data types to be able to do statistical analysis on it with numpy and scipy.</p> <p>The datatype for all of the columns is integer <strong>except</strong> the 8th index this is DateTime and the 10th index is pure string.</p> <h3>Question:</h3> <p>What is the easiest way to this conversation?</p> <hr> <h1>EDIT</h1> <p>Performance is very important than readability, I have to convert <strong>4.5m</strong> rows of data and then process them!</p>
0
2016-09-09T09:26:46Z
39,416,366
<p>Usually when people ask about reading <code>csv</code> files we ask for a sample of the file. I've attempted to reconstruct your line from the string list:</p> <pre><code>In [590]: txt Out[590]: b'529997, 46623448, 2122110124, 2310, 2054, 2, 66, , 2010/11/03-12:42:08, 26, CLEARING, 781, 30, 3, 0, 0, 1' </code></pre> <p>(<code>b</code> for bytestring in Py3, which is how genfromtxt expects its input)</p> <p><code>genfromtxt</code> expects a filename, open file, or anything that feeds it lines. So a list of lines works fine:</p> <p>With <code>dtype=None</code> it deduces column types.</p> <pre><code>In [591]: data=np.genfromtxt([txt], dtype=None, delimiter=',', autostrip=True) In [592]: data Out[592]: array((529997, 46623448, 2122110124, 2310, 2054, 2, 66, False, b'2010/11/03-12:42:08', 26, b'CLEARING', 781, 30, 3, 0, 0, 1), dtype=[('f0', '&lt;i4'), ('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;i4'), ('f4', '&lt;i4'), ('f5', '&lt;i4'), ('f6', '&lt;i4'), ('f7', '?'), ('f8', 'S19'), ('f9', '&lt;i4'), ('f10', 'S8'), ('f11', '&lt;i4'), ('f12', '&lt;i4'), ('f13', '&lt;i4'), ('f14', '&lt;i4'), ('f15', '&lt;i4'), ('f16', '&lt;i4')]) </code></pre> <p>The result is a bunch of <code>int</code> fields, 2 string fields. The blank is interpreted as boolean.</p> <p>If I spell out the columns types I get a slightly different array</p> <pre><code>In [593]: dt=[int,int,int,int,int,int,int,float,'U20',int, 'U10',int,int,int,int,int,int] In [594]: data=np.genfromtxt([txt], dtype=dt, delimiter=',', autostrip=True) In [595]: data Out[595]: array((529997, 46623448, 2122110124, 2310, 2054, 2, 66, nan, '2010/11/03-12:42:08', 26, 'CLEARING', 781, 30, 3, 0, 0, 1), dtype=[('f0', '&lt;i4'), ('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;i4'), ('f4', '&lt;i4'), ('f5', '&lt;i4'), ('f6', '&lt;i4'), ('f7', '&lt;f8'), ('f8', '&lt;U20'), ('f9', '&lt;i4'), ('f10', '&lt;U10'), ('f11', '&lt;i4'), ('f12', '&lt;i4'), ('f13', '&lt;i4'), ('f14', '&lt;i4'), ('f15', '&lt;i4'), ('f16', '&lt;i4')]) </code></pre> <p>I specified <code>float</code> for the blank column, which it then interprets as <code>nan</code>. Handling of blacks can be refined.</p> <p>I changed the string files to unicode (the default py3 string).</p> <p>I should be able to specify a datetime conversion, for example to <code>np.datetime64</code>.</p> <p>With just one line, <code>data</code> is a single element array, 0d, with a compound <code>dtype</code>.</p> <p>Fields are accessed by name</p> <pre><code>In [598]: data['f8'] Out[598]: array('2010/11/03-12:42:08', dtype='&lt;U20') In [599]: data['f2'] Out[599]: array(2122110124) </code></pre> <p>Speed wise this probably is the same as your custom reader. <code>genfromtxt</code> reads the file line by line, and parses it. 
It collects the parsed lines in a list, and creates an array once at the end (I don't recall if parsed lines are lists or dtype arrays - I suspect lists, but would have to study the code).</p> <p>To handle the date, I have to use <code>'datetime64[s]'</code>, and some how change the date to read <code>"2010-11-03T12:42:08"</code>, probably in a <code>converter</code>.</p> <p>===================</p> <p>I can make a converter based on your <code>datetime</code> parsing:</p> <pre><code>In [649]: from datetime import datetime In [650]: dateconvert=lambda x: datetime.strptime(x.decode(),"%Y/%m/%d-%H:%M:%S") In [651]: data=np.genfromtxt([txt], dtype=dt, delimiter=',', autostrip=True, converters={8:dateconvert}) In [652]: data Out[652]: array((529997, 46623448, 2122110124, 2310, 2054, 2, 66, nan, datetime.datetime(2010, 11, 3, 12, 42, 8), 26, 'CLEARING', 781, 30, 3, 0, 0, 1), dtype=[('f0', '&lt;i4'), ('f1', '&lt;i4'), ('f2', '&lt;i4'), ('f3', '&lt;i4'), ('f4', '&lt;i4'), ('f5', '&lt;i4'), ('f6', '&lt;i4'), ('f7', '&lt;f8'), ('f8', '&lt;M8[s]'), ('f9', '&lt;i4'), ('f10', '&lt;U10'), ('f11', '&lt;i4'), ('f12', '&lt;i4'), ('f13', '&lt;i4'), ('f14', '&lt;i4'), ('f15', '&lt;i4'), ('f16', '&lt;i4')]) </code></pre>
1
2016-09-09T16:52:38Z
[ "python", "arrays", "numpy", "converter", "mixed" ]
Shifting from python to pypy
39,408,265
<p>I am currently working on a project where the source files are all written in Python. The files/modules are currently being run on a Python interpreter (CPython). I want to use the PyPy interpreter instead as I see it is much more efficient. Is there a way to change the interpreter from the CMakeLists.txt file so the build process uses the PyPy interpreter instead of the default Python interpreter? I have a project named P and it contains a CMakeLists.txt file.</p>
-1
2016-09-09T09:28:28Z
39,409,092
<p>When it needs python interpreter, <code>CMakeLists.txt</code> usually uses <a href="https://cmake.org/cmake/help/v3.0/module/FindPythonInterp.html" rel="nofollow">find_package(PythonInterp)</a>, which searches python executable and sets <code>PYTHON_EXECUTABLE</code> to the path where it is located.</p> <p>You may set this <em>cache</em> variable when call <code>cmake</code>:</p> <pre><code>cmake -DPYTHON_EXECUTABLE=&lt;path-to-PyPy&gt; ... </code></pre> <p>so it will not search executable but use one you provide.</p>
1
2016-09-09T10:10:57Z
[ "python", "cmake", "pypy" ]
Cannot understand the 32-bit encoding of the "Python" string
39,408,341
<p>I am reading the <a href="https://docs.python.org/2/howto/unicode.html" rel="nofollow">Unicode HOWTO</a> of the Python docs to start to really understand Unicode. At the <a href="https://docs.python.org/2/howto/unicode.html#encodings" rel="nofollow">Encodings Paragraph</a> it shows a representation of the "Python" string in a 32-bit integers array. </p> <p>I don't understand why each char has so many 00s. Like, the char "P" is represented by 0x50 (which I understand, being the hex equivalent for the ASCII ordinal 80). But then it is followed by 3 couples of 00s. What is that? How should I read this representation?</p>
0
2016-09-09T09:32:09Z
39,408,459
<p>A 32-bit integers array consists of, well, 32-bit integers.</p> <p>A byte is 8 bits, so each character necessarily consists of 4 bytes.</p> <p>The number is 0x00000050, which is translated into four bytes. You could order them <code>0x50 0x00 0x00 0x00</code> (byte representing most significant numbers at the end -- "little endian") or <code>0x00 0x00 0x00 0x50</code> (least significant at the end -- "big endian"). Different CPUs make different choices for the order, as they note in the paragraph you link to.</p> <p>If you think this is impractical: they are trying to explain in that paragraph why it is, and why another encoding is typically preferred.</p> <p>Instead of starting at that article, <a href="http://www.joelonsoftware.com/articles/Unicode.html" rel="nofollow">The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)</a> manages to live up to its title pretty well.</p>
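<p>You can see those padding bytes directly by encoding the string as UTF-32 (little-endian here, so the significant byte of each character comes first):</p> <pre><code>data = u"Python".encode("utf-32-le")
print(" ".join("%02x" % b for b in bytearray(data)))
# 50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00
</code></pre>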
2
2016-09-09T09:37:51Z
[ "python", "unicode" ]
Cannot understand the 32-bit encoding of the "Python" string
39,408,341
<p>I am reading the <a href="https://docs.python.org/2/howto/unicode.html" rel="nofollow">Unicode HOWTO</a> of the Python docs to start to really understand Unicode. At the <a href="https://docs.python.org/2/howto/unicode.html#encodings" rel="nofollow">Encodings Paragraph</a> it shows a representation of the "Python" string in a 32-bit integers array. </p> <p>I don't understand why each char has so many 00s. Like, the char "P" is represented by 0x50 (which I understand, being the hex equivalent for the ASCII ordinal 80). But then it is followed by 3 couples of 00s. What is that? How should I read this representation?</p>
0
2016-09-09T09:32:09Z
39,408,527
<p>The reason why there are so many zeroes there is because all of those letters are contained in the ASCII set, i.e. occupies one byte (two characters in hexadecimal notation). Unicode encodings are compatible with ASCII like that.</p> <p>The rest is just filler of the remaining 3 bytes.</p> <p>It is kind of like taking an original variable declared to be a (unsigned) <code>byte</code>, then copying it to an (unsigned) <code>int32</code> -- you will get a lot of zeroes in the latter, because it is a bigger type.</p>
1
2016-09-09T09:41:37Z
[ "python", "unicode" ]
P4Python: use multiple threads that request perforce information at the same time
39,408,378
<p>I've been working on a "crawler" of sorts that goes through our repository, and lists directories and files as it goes. For every directory it enounters, it creates a thread that does the same for that directory and so on, recursively. Effectively this creates a very short-lived thread for every directory encountered in the repos. ( it doesn't take very long to request information on just one path, there are just tens of thousands of them )</p> <p>The logic looks as follows:</p> <pre><code>import threading import perforce as Perforce #custom perforce class from pathlib import Path p4 = Perforce() p4.connect() class Dir(): def __init__(self, path): self.dirs = [] self.files = [] self.path = path self.crawlers = [] def build_crawler(self): worker = Crawler(self) # append to class variable to keep it from being deleted self.crawlers.append(worker) worker.start() class Crawler(threading.Thread): def __init__(self, dir): threading.Thread.__init__(self) self.dir = dir def run(self): depotdirs = p4.getdepotdirs(self.dir.path) depotfiles = p4.getdepotfiles(self.dir.path) for p in depotdirs: if Path(p).is_dir(): _d = Dir(self.dir, p) self.dir.dirs.append(_d) for p in depotfiles: if Path(p).is_file(): f = File(p) # File is like Dir, but with less stuff, just a path. self.dir.files.append(f) for dir in self.dir.dirs: dir.build_crawler() for worker in d.crawlers: worker.join() </code></pre> <p>Obviously this is not complete code, but it represents what I'm doing.</p> <p>My question really is whether I can create an instance of this Perforce class in the <code>__init__</code> method of the Crawler class, so that requests can be done separately. Right now, I have to call <code>join()</code> on the created threads so that they wait for completion, to avoid concurrent perforce calls.</p> <p>I've tried it out, but it seems like there is a limit to how many connections you can create: I don't have a solid number, but somewhere along the line Perforce just started straight up refusing connections, which I presume is due to the number of concurrent requests.</p> <p>Really what I'm asking I suppose is two-fold: is there a better way of creating a data model representing a repos with tens of thousands of files than the one I'm using, and is what I'm trying to do possible, and if so, how.</p> <p>Any help would be greatly appreciated :)</p>
0
2016-09-09T09:34:07Z
39,413,906
<p>I found out how to do this (it's infuriatingly simple, as with all simple solutions to overly complicated problems):</p> <p>To build a data model that contains <code>Dir</code> and <code>File</code> classes representing a whole depot with thousands of files, just call <code>p4.run("files", "-e", path + "\\...")</code>. This will return a list of every file in <code>path</code>, recursively. From there all you need to do is iterate over every returned path and construct your data model from there.</p> <p>Hope this helps someone at some point.</p>
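<p>With tagged output (the P4Python default) each entry in the result is a dict, so building the model becomes a flat loop. The <code>depotFile</code> key and the grouping below are assumptions for illustration, not part of the original answer:</p> <pre><code>results = p4.run("files", "-e", depot_root + "/...")   # depot_root e.g. "//depot/project"

files_by_dir = {}
for entry in results:
    depot_file = entry["depotFile"]                    # assumed key of the tagged output
    dir_path, _, name = depot_file.rpartition("/")
    files_by_dir.setdefault(dir_path, []).append(name)

# files_by_dir now maps every depot directory to its file names,
# which is enough to build the Dir/File objects without extra server calls
</code></pre>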
1
2016-09-09T14:26:51Z
[ "python", "multithreading", "perforce", "p4python" ]
Change next list element during iteration?
39,408,404
<p>Imagine you have a list of points in the 2D-space. I am trying to find symmetric points.</p> <p>For doing that I iterate over my list of points and apply symmetry operations. So suppose I apply one of these operations to the first point and after this operation it is equal to other point in the list. These 2 points are symmetric.</p> <p>So what I want is to erase this other point from the list that I am iterating so in this way my iterating variable say "i" won't take this value. Because I already know that it is symmetric with the first point.</p> <p>I have seen similar Posts but they remove a value in the list that they have already taken. What I want is to remove subsequent values.</p>
0
2016-09-09T09:35:17Z
39,408,626
<p>In general it is a bad idea to remove values from a list you are iterating over. There are, however, another ways to skip the symmetric points. For example, you can check for each point if you have seen a symmetric one before:</p> <pre><code>for i, point in enumerate(points): if symmetric(point) not in points[:i]: # Do whatever you want to do </code></pre> <p>Here <code>symmetric</code> produces a point according to your symmetry operation. If your symmetry operation connects more that two points you can do </p> <pre><code>for i, point in enumerate(points): for sympoint in symmetric(point): if sympoint in points[:i]: break else: # Do whatever you want to do </code></pre>
1
2016-09-09T09:47:00Z
[ "python", "python-2.7", "loops", "iterator" ]
Change next list element during iteration?
39,408,404
<p>Imagine you have a list of points in the 2D-space. I am trying to find symmetric points.</p> <p>For doing that I iterate over my list of points and apply symmetry operations. So suppose I apply one of these operations to the first point and after this operation it is equal to other point in the list. These 2 points are symmetric.</p> <p>So what I want is to erase this other point from the list that I am iterating so in this way my iterating variable say "i" won't take this value. Because I already know that it is symmetric with the first point.</p> <p>I have seen similar Posts but they remove a value in the list that they have already taken. What I want is to remove subsequent values.</p>
0
2016-09-09T09:35:17Z
39,408,710
<p>Whenever a point turns out to have a symmetric partner, add it to a set. Since a set maintains unique elements and lookup is <code>O(1)</code>, you can use an <code>if point not in s</code> condition to skip points you have already handled.</p> <pre><code>s = set()
for point in points:
    if point not in s:
        # test for symmetry
        if symmetric:
            s.add(point)
</code></pre>
1
2016-09-09T09:50:18Z
[ "python", "python-2.7", "loops", "iterator" ]
Unable to upgrade Scipy using PIP
39,408,409
<p>I try to upgrade Scipy using PIP on Ubuntu 16.04 but always receive this error. I'm not sure what is going on. The progress reaches 99% and then stops, and spits out this error.</p> <p>I've tried upgrading pip but the same error still occurs.</p> <pre><code>Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 310, in run wb.build(autobuilding=True) File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 750, in build self.requirement_set.prepare_files(self.finder) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 370, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 587, in _prepare_file session=self.session, hashes=hashes) File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 810, in unpack_url hashes=hashes File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 649, in unpack_http_url hashes) File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 871, in _download_http_url _download_url(resp, link, content_file, hashes) File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 595, in _download_url hashes.check_against_chunks(downloaded_chunks) File "/usr/local/lib/python2.7/dist-packages/pip/utils/hashes.py", line 46, in check_against_chunks for chunk in chunks: File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 563, in written_chunks for chunk in chunks: File "/usr/local/lib/python2.7/dist-packages/pip/utils/ui.py", line 139, in iter for x in it: File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 552, in resp_read decode_content=False): File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 353, in stream data = self.read(amt=amt, decode_content=decode_content) File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 310, in read data = self._fp.read(amt) File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 54, in read self.__callback(self.__buf.getvalue()) File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/controller.py", line 275, in cache_response self.serializer.dumps(request, response, body=body), File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/serialize.py", line 87, in dumps ).encode("utf8"), MemoryError </code></pre>
1
2016-09-09T09:35:36Z
39,408,478
<p>Try to disable the cache during install using</p> <pre><code>pip install --no-cache-dir packageName </code></pre> <p>where packageName is <code>scipy</code> in this case</p>
2
2016-09-09T09:38:42Z
[ "python", "ubuntu", "scipy", "pip" ]
SQLAlchemy mapping table columns with filters
39,408,458
<p>I have a table in PostgreSQL that includes information about documents. Lets say something like that:</p> <pre><code>table: doc id (int) name (string) type (int) </code></pre> <p><strong>type</strong> is a category for a document (e.g. 1 - passport, 2 - insurance etc.). Also I have different tables with additional information for every docyment type.</p> <pre><code>table: info id (int) doc_id (fk) info (additional columns) </code></pre> <p>I want to have a SQLAlchemy model to work with each type of document linked with it's additional information and be able to manage columns to display (for Flask-Admin if it is important).</p> <p>Now to join two tables into some sort of "model" I used <a href="http://sqlalchemy.readthedocs.io/en/latest/orm/mapping_columns.html" rel="nofollow" title="Mapping Table Columns">Mapping Table Columns</a> from the SQLAlchemy documentation like that (when there was only one type of documents):</p> <pre><code>class DocMapping(db.Model): __table__ = doc.__table__.join(info) __mapper_args__ = { 'primary_key': [doc.__table__.c.id] } </code></pre> <p>Now the question is that: <strong>how to create multiple classes inherited from db.Model (DocPassportMapping, DocInsuranceMapping etc.) based on doc.type column?</strong></p> <p>Something like that:</p> <pre><code>__table__ = doc.__table__.join(info).filter(doc.type) </code></pre> <p>That is obviously not working because we don't have a <strong>query</strong> object here.</p>
1
2016-09-09T09:37:50Z
39,412,686
<p>If I understood you correctly, you wish to have an <a href="http://docs.sqlalchemy.org/en/latest/orm/inheritance.html#mapping-class-inheritance-hierarchies" rel="nofollow">inheritance hierarchy</a> based on <code>DocMapping</code> with <code>DocMapping.type</code> as the polymorphic identity. Since you have not provided a complete example, here is a somewhat similar structure. It has differences for sure, but should be applicable to yours. This uses <a href="http://docs.sqlalchemy.org/en/latest/orm/inheritance.html#single-table-inheritance" rel="nofollow">single table inheritance</a> on top of the joined mapping.</p> <p>The models:</p> <pre><code>In [2]: class Doc(Base): ...: id = Column(Integer, primary_key=True, autoincrement=True) ...: name = Column(Unicode) ...: type = Column(Integer, nullable=False) ...: __tablename__ = 'doc' ...: In [3]: class Info(Base): ...: __tablename__ = 'info' ...: doc_id = Column(Integer, ForeignKey('doc.id'), primary_key=True) ...: value = Column(Unicode) ...: doc = relationship('Doc', backref=backref('info', uselist=False)) ...: In [4]: class DocMapping(Base): ...: __table__ = Doc.__table__.join(Info) ...: __mapper_args__ = { ...: 'primary_key': (Doc.id, ), ...: # These declare this mapping polymorphic ...: 'polymorphic_on': Doc.type, ...: 'polymorphic_identity': 0, ...: } ...: In [5]: class Passport(DocMapping): ...: __mapper_args__ = { ...: 'polymorphic_identity': 1, ...: } ...: In [6]: class Insurance(DocMapping): ...: __mapper_args__ = { ...: 'polymorphic_identity': 2, ...: } ...: </code></pre> <p>Testing:</p> <pre><code>In [7]: session.add(Insurance(name='Huono vakuutus', ...: value='0-vakuutus, mitään ei kata')) In [8]: session.commit() In [15]: session.query(DocMapping).all() Out[15]: [&lt;__main__.Insurance at 0x7fdc0a086400&gt;] In [16]: _[0].name, _[0].value Out[16]: ('Huono vakuutus', '0-vakuutus, mitään ei kata') </code></pre> <p>The thing is: you probably do not want multiple classes that inherit from <code>db.Model</code> as base, but classes that inherit from <code>DocMapping</code>. It makes a lot more sense as a hierarchy.</p>
3
2016-09-09T13:28:01Z
[ "python", "postgresql", "flask", "sqlalchemy", "flask-admin" ]
Trying to implement a simple edit distance module
39,408,514
<p>This is the code for the function :</p> <pre><code>def populateConfusionMatrix(word,errword): dp = [[0]*(len(errword)+1) for i in range(len(word)+1)] m = len(word)+1; n = len(errword)+1; for i in range(m): for j in range(n): dp[i][0] = i; dp[0][j] = j; for i in range(m): for j in range(n): print(i,j) if i==0 or j==0 : continue dis = [0]*4 dis[0] = dp[i-1][j]+1 dis[1] = dp[i][j-1]+1 print("dis[1] is ",dp[i][j-1]+1) if word[i-1] == errword[j-1]: dis[2] = dp[i-1][j-1] else : dis[2] = dp[i-1][j-1]+1 if i&gt;1 and j&gt;1 and word[i] == errword[j-1] and word[i-1] == errword[j]: dis[3] = dp[i-2][j-2] + 1 if dis[3]!=0 : dp[i][j] = min(dp[0:4]) else : dp[i][j] = min(dp[0:3]) i = m j = n while(i&gt;=0 and j&gt;=0) : if word[i-1] == errword[j-1] : i=i-1 j=j-1 continue if dp[i][j] == dp[i][j-1]+1 : populate_ins(word[i],errword[j]) j=j-1 if dp[i][j] == dp[i-1][j]+1 : populate_del(errword[j],word[i]) i=i-1 if dp[i][j] == dp[i-1][j-1] + 1 : populate_sub(word[i],errword[j]) i=i-1 j=j-1 if i&gt;1 and j&gt;1 and word[i] == errword[j-1] and word[i-1] == errword[j] and dp[i][j] == dp[i-2][j-2]+1 : populate_exc(word[i-1],word[i]) i=i-1 j=j-1 </code></pre> <p>But this code is showing this error on calling the function:</p> <pre><code>populateConfusionMatrix("actress","acress") </code></pre> <p>Error - </p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-25-d5ba10f95b61&gt; in &lt;module&gt;() ----&gt; 1 populateConfusionMatrix("actress","acress") &lt;ipython-input-24-e996de70e204&gt; in populateConfusionMatrix(word, errword) 15 dis = [0]*4 16 dis[0] = dp[i-1][j]+1 ---&gt; 17 dis[1] = dp[i][j-1]+1 18 print("dis[1] is ",dp[i][j-1]+1) 19 if word[i-1] == errword[j-1]: TypeError: can only concatenate list (not "int") to list </code></pre> <p>Tried to print till what value of (i,j) the loop is working fine, I got this - </p> <pre><code>(0, 0) (0, 1) (0, 2) (0, 3) (0, 4) (0, 5) (0, 6) (1, 0) (1, 1) ('dis[1] is ', 2) (1, 2) </code></pre>
0
2016-09-09T09:40:50Z
39,409,553
<p>Your code is really difficult to understand, but the problem is definitely in these lines:</p> <pre><code>if dis[3]!=0 : dp[i][j] = min(dp[0:4]) else : dp[i][j] = min(dp[0:3]) </code></pre> <p>Since your list <code>dp</code> has the value:</p> <pre><code>[[0, 1, 2, 3, 4, 5, 6], [1, 0, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 0, 0], [3, 0, 0, 0, 0, 0, 0], [4, 0, 0, 0, 0, 0, 0], [5, 0, 0, 0, 0, 0, 0], [6, 0, 0, 0, 0, 0, 0], [7, 0, 0, 0, 0, 0, 0]] </code></pre> <p>When you use <code>dp[i][j] = min(dp[0:3])</code>, you're calling <code>min</code> on a slice of <code>dp</code>, or:</p> <pre><code>min([0, 1, 2, 3, 4, 5, 6], [1, 0, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 0, 0]) </code></pre> <p>That's why you're getting errors later on while trying to add a number to a list:</p> <pre><code>dis[1] = dp[i][j-1] + 1 # evaluates to something like [0,0,0,0] + 1 </code></pre>
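<p>In other words, the fix is most likely to take the minimum of the candidate costs collected in <code>dis</code>, not a slice of the table itself — something along these lines (stated as the probable intent, since only the slicing is clearly wrong):</p> <pre><code># take the minimum of the candidate distances, not of rows of the DP table
if dis[3] != 0:
    dp[i][j] = min(dis[0:4])
else:
    dp[i][j] = min(dis[0:3])
</code></pre>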
0
2016-09-09T10:33:14Z
[ "python" ]
updating EmbeddedDocument in Document with Flask - Mongoengine mongoengine.errors.OperationError
39,408,614
<p>I am developing a project with flask and mongodb(mongoengine drive). The application uses Note model as a embeddedDocument for User Modal.</p> <pre><code>class User(db.Document, UserMixin): fields like created_at, content, slug etc... notes = db.ListField(db.EmbeddedDocumentField('Note')) class Note(db.Document): fields like created_at, content, slug, URLLink, isSecret etc... content = db.StringField(required=True) tags = db.ListField(db.StringField(required=True, max_length=20) </code></pre> <p>When I try to Update a Note it's okay, but then trying to append the updated note in the User collection I stucked!</p> <p><strong>views.py</strong></p> <pre><code>@app.route("/update_quote/&lt;string:id&gt;" ,methods=['POST']) @login_required def update_quote(id): note = Note.objects(id=id).first() form = UpdateNoteForm() if request.method == 'POST': form = UpdateNoteForm(request.form) if form.validate == False: flash('Faliure','danger') return redirect(url_for('profile')+('/'+current_user.slug)) if form.validate_on_submit(): tags = form.tags2.data tagList = tags.split(",") note = Note.objects.get(id=form.wtf.data) note.update(content=form.content2.data, tags=tagList, URLLink=form.URLLink2.data) current_user.notes.append(note) current_user.update(notes__tags=tagList, notes__content=form.content2.data, notes__URLLink=form.URLLink2.data) flash("Success","success") return render_template("update.html", title="delete", form=form, note=note ) </code></pre> <p><strong>traceback</strong></p> <pre><code> mongoengine.errors.OperationError OperationError: Update failed (cannot use the part (notes of notes.URLLink) to traverse the element ({notes: [ { _id: ObjectId('57d27bb24d2e9b04e175c0e5'), created_at: new Date(1473422818025), URLLink: "", content: "{ "_id" : ObjectId("57d27b414d2e9b04d79883b3"), "created_at" : ISODate("2016-09-09T12:05:05.164Z"), "URLLink" : "", "content" : "c...", tags: [ "asd" ], isSecret: false, isArchived: false } ]})) Traceback (most recent call last) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__ return self.wsgi_app(environ, start_response) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app response = self.make_response(self.handle_exception(e)) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception reraise(exc_type, exc_value, tb) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app response = self.full_dispatch_request() File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request rv = self.handle_user_exception(e) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception reraise(exc_type, exc_value, tb) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request rv = self.dispatch_request() File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/flask_login.py", line 792, in decorated_view return func(*args, **kwargs) File "/Users/ozer/Google 
Drive/localhost/copylighter/views.py", line 225, in update_quote note = Note.objects.get(id=form.wtf.data) note.update(content=form.content2.data, tags=tagList, URLLink=form.URLLink2.data) current_user.notes.append(note) current_user.update(notes__tags=tagList, notes__content=form.content2.data, notes__URLLink=form.URLLink2.data) #note.save() File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/mongoengine/document.py", line 468, in update return self._qs.filter(**self._object_key).update_one(**kwargs) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/mongoengine/queryset/base.py", line 490, in update_one upsert=upsert, multi=False, write_concern=write_concern, **update) File "/Users/ozer/Documents/localhost/copylighter/env/lib/python2.7/site-packages/mongoengine/queryset/base.py", line 472, in update raise OperationError(u'Update failed (%s)' % unicode(err)) OperationError: Update failed (cannot use the part (notes of notes.URLLink) to traverse the element ({notes: [ { _id: ObjectId('57d27bb24d2e9b04e175c0e5'), created_at: new Date(1473422818025), URLLink: "", content: "{ "_id" : ObjectId("57d27b414d2e9b04d79883b3"), "created_at" : ISODate("2016-09-09T12:05:05.164Z"), "URLLink" : "", "content" : "c...", tags: [ "asd" ], isSecret: false, isArchived: false } ]})) </code></pre> <p>I think that's the important part that I writed below. I didn't find any quertset about it and there are few documentation about mongoengine. Where would I search for? </p> <pre><code>current_user.update(notes__tags=tagList, notes__content=form.content2.data, notes__URLLink=form.URLLink2.data) </code></pre>
-1
2016-09-09T09:46:20Z
39,409,903
<p>You have a few issues here:</p> <ul> <li>You can use (in the most recent versions of mongoengine) an <a href="http://mongoengine-odm.readthedocs.io/apireference.html#mongoengine.fields.EmbeddedDocumentListField" rel="nofollow">EmbeddedDocumentListField</a></li> <li>The <code>Note</code> class needs to inherit EmbeddedDocument</li> <li>You can't use an update operation on the <code>User</code> object in the way you are attempting</li> </ul> <p>Consider the following example which will add a note to the user's notes field.</p> <pre><code>import mongoengine as mdb

mdb.connect("embed-test")


class User(mdb.Document):
    name = mdb.StringField()
    notes = mdb.EmbeddedDocumentListField('Note')


class Note(mdb.EmbeddedDocument):
    content = mdb.StringField(required=True)
    tags = mdb.ListField(mdb.StringField(required=True, max_length=20))

User.drop_collection()
u = User(name="Name").save()

if __name__ == "__main__":
    new_note = Note(content="Some notes here.", tags=['one', 'two'])
    current_user = User.objects.first()
    current_user.notes.append(new_note)
    current_user.save()
</code></pre> <p>You may also want to read <a href="http://stackoverflow.com/questions/34897566/mongoengine-embeddeddocument-v-s-referencefield/34952982#34952982" rel="nofollow">this answer</a> on when to use <code>EmbeddedDocument</code> vs. a <code>ReferenceField</code>.</p>
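<p>And since the original goal was to update an existing note rather than append a new one: with embedded documents you can simply mutate the embedded object and save the parent document. A rough sketch using the models above (matching a note by its content is only for illustration):</p> <pre><code>current_user = User.objects.first()

for note in current_user.notes:
    if note.content == "Some notes here.":   # pick the note you want to change
        note.content = "Updated content"
        note.tags = ['one', 'two', 'three']

current_user.save()   # persists the modified embedded documents
</code></pre>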
0
2016-09-09T10:50:37Z
[ "python", "mongodb", "flask", "mongoengine" ]
Sort etree before writing
39,408,706
<p>I am trying to make an xml with python and etree. Now I want to sort the xml before writing it. Is this possible, and if yes, how?</p> <pre><code>objm = json.loads(response.text)

newRoot = ET.Element("root")
tree = ET.ElementTree(newRoot)

i=0
while i &lt; len(objm):
    newItem = ET.SubElement(newRoot, "item")
    Start_date = datetime.strptime(objm[i]['Start_date'], '%Y-%m-%d %H:%M:%S')
    if (Start_date.date() == datetime.today().date()):
        ET.SubElement(newItem, "Start_date").text = Start_date.strftime("%H:%M")
        ET.SubElement(newItem, "location").text = objm[i]['location']
    i = i+1

##Some sorting on Start_date should be done here##

try:
    tree.write(os.path.join(tempfile.gettempdir(), "filename.xml"))
except Exception,e:
    print str(e)
</code></pre>
0
2016-09-09T09:49:58Z
39,411,459
<p>Found the solution:</p> <pre><code>objm = json.loads(response.text)
objm = sorted(objm, key=lambda k: k.get('Start_date', 0), reverse=False)

newRoot = ET.Element("root")
tree = ET.ElementTree(newRoot)

i=0
while i &lt; len(objm):
    newItem = ET.SubElement(newRoot, "item")
    Start_date = datetime.strptime(objm[i]['Start_date'], '%Y-%m-%d %H:%M:%S')
    if (Start_date.date() == datetime.today().date()):
        ET.SubElement(newItem, "Start_date").text = Start_date.strftime("%H:%M")
        ET.SubElement(newItem, "location").text = objm[i]['location']
    i = i+1

try:
    tree.write(os.path.join(tempfile.gettempdir(), "filename.xml"))
except Exception,e:
    print str(e)
</code></pre>
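<p>If you would rather sort the tree itself instead of the JSON (the question as originally asked), one way that only relies on <code>Element.remove</code>/<code>append</code> is to detach the children, sort them by their <code>Start_date</code> text (the <code>%H:%M</code> strings sort correctly as plain strings), and re-append them — a sketch, assuming the structure built above:</p> <pre><code>items = newRoot.findall("item")
items.sort(key=lambda el: el.findtext("Start_date") or "")

for el in items:
    newRoot.remove(el)
for el in items:
    newRoot.append(el)

tree.write(os.path.join(tempfile.gettempdir(), "filename.xml"))
</code></pre>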
0
2016-09-09T12:19:56Z
[ "python", "sorting", "elementtree" ]
Django: filtering expected content type?
39,408,715
<p>Django offers a way to restrict the accepted methods using the <code>@require_http_methods</code> decorator, so if a particular view can only respond to a GET request we can do:</p> <pre><code>@require_http_methods(['GET'])
def only_get(request):
    pass
</code></pre> <p>Otherwise we get a 405 (Method Not Allowed) response.</p> <p>However, I would also like to accept a <code>Content-Type</code> of json. If it's not json it should reject the request as well (I am guessing a 403 response would be the appropriate one).</p> <p>Does Django have anything similar to the <code>require_http_methods</code> decorator, but for content types? If not, how else could I tackle this scenario?</p>
0
2016-09-09T09:50:22Z
39,408,796
<p>I do not think that Django has something similar built in for Content-Type, but you can easily write your own middleware that drops requests with the wrong Content-Type, and then use the decorator_from_middleware option.</p> <p>If you use Django 1.10:</p> <pre><code>class AllowedContentTypes(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request, *args, **kwargs):
        types = kwargs.pop('types', None) or ['application/json']
        if request.content_type in types:
            response = self.get_response(request)
        else:
            response = HttpResponse() # your response for wrong content type
        return response
</code></pre> <p>and apply it to your view like:</p> <pre><code>@decorator_from_middleware_with_args(AllowedContentTypes)(types=['application/json'])
def your_view(request):
    ...
</code></pre> <p>Also, you can use Django REST Framework, where you are able to filter Content-Type using parsers; JSONParser only allows the application/json content type for requests. It would also be more useful for implementing a REST API for your app.</p>
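<p>A lighter-weight alternative (not using middleware at all) is a plain view decorator in the spirit of <code>require_http_methods</code>; this sketch assumes Django 1.10+ where <code>request.content_type</code> is available (on older versions read <code>request.META.get('CONTENT_TYPE', '')</code> instead):</p> <pre><code>from functools import wraps
from django.http import HttpResponse

def require_content_type(*types):
    def decorator(view_func):
        @wraps(view_func)
        def wrapped(request, *args, **kwargs):
            if request.content_type not in types:
                # 415 Unsupported Media Type is arguably more accurate than 403 here
                return HttpResponse(status=415)
            return view_func(request, *args, **kwargs)
        return wrapped
    return decorator

@require_content_type('application/json')
def only_json(request):
    return HttpResponse('ok')
</code></pre>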
0
2016-09-09T09:54:11Z
[ "python", "django" ]
How to connect to redis?
39,408,722
<pre><code>CACHES = { "default": { "BACKEND": "django_redis.cache.RedisCache", "LOCATION": "redis://127.0.0.1:6379/1", "OPTIONS": { "CLIENT_CLASS": "django_redis.client.DefaultClient", } } } </code></pre> <p>I am trying to connect to redis to save my object in it, but it gives me this error when i try to connect</p> <blockquote> <p>Error 10061 connecting to 127.0.0.1:6379. No connection could be made because the target machine actively refused it</p> </blockquote> <p>How does it work, what should i give in location and i am on a proxy from my company. Need some detailed explanation on location. </p>
0
2016-09-09T09:50:33Z
39,409,324
<p>First start the redis server. Your OS will provide a mechanism to do that, e.g. on some Linuxes you could use <code>systemctl start redis</code>, or <code>/etc/init.d/redis start</code> or similar. Or you could just start it directly with:</p> <pre><code>$ redis-server </code></pre> <p>which will run it as a foreground process.</p> <p>Then try running the <code>redis-cli ping</code> command. Receiving a <code>PONG</code> response indicates that redis is in fact up and running on your local machine:</p> <pre><code>$ redis-cli ping PONG </code></pre> <p>Once you have that working try Django again.</p>
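<p>If you also want to verify connectivity from the Python side (assuming the <code>redis</code> client package, which <code>django-redis</code> relies on, is installed), a quick check is:</p> <pre><code>import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)
print(r.ping())   # True if the server is reachable, otherwise raises ConnectionError
</code></pre>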
2
2016-09-09T10:20:47Z
[ "python", "django", "redis" ]
How to connect to redis?
39,408,722
<pre><code>CACHES = { "default": { "BACKEND": "django_redis.cache.RedisCache", "LOCATION": "redis://127.0.0.1:6379/1", "OPTIONS": { "CLIENT_CLASS": "django_redis.client.DefaultClient", } } } </code></pre> <p>I am trying to connect to redis to save my object in it, but it gives me this error when i try to connect</p> <blockquote> <p>Error 10061 connecting to 127.0.0.1:6379. No connection could be made because the target machine actively refused it</p> </blockquote> <p>How does it work, what should i give in location and i am on a proxy from my company. Need some detailed explanation on location. </p>
0
2016-09-09T09:50:33Z
39,612,792
<p>If your Redis is password-protected, you should have a config like this:</p> <pre><code>CACHES.update({
    "redis": {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "PASSWORD": "XXXXXXXXXXX",
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
        },
    },
})
</code></pre>
1
2016-09-21T09:36:35Z
[ "python", "django", "redis" ]
Using the same Embedding for Encoder and Decoder in seq2seq
39,408,828
<p>I am building an translation-machine based on the the seq2seq-class. The class assumes different vocabularies for the encoder and the decoder part. Thus also it expects different embeddings for the two.</p> <p>However, I am trying to use this inside a single language. Thus I would like the two embeddings to be one. (Background is to translate laymen's terms to experts' terms, inside the same language)</p> <p>Currently the relevant code is:</p> <p>Encoder-Side: in python/ops/rnn_cell.py in EmbeddingWrapper():</p> <pre><code>with vs.variable_scope(scope or "EmbeddingWrapper"): additional_info_size with vs.variable_scope(scope or type(self).__name__): with ops.device("/cpu:0"): embedding = vs.get_variable("embedding", [self._embedding_classes, self._embedding_size], initializer=initializer) embedded = embedding_ops.embedding_lookup(embedding, array_ops.reshape(inputs, [-1])) </code></pre> <p>Decoder-Side: In python/ops/seq2seq.py in embedding_rnn_decoder():</p> <pre><code> with variable_scope.variable_scope(scope or "embedding_rnn_decoder"): with ops.device("/cpu:0"): embedding = variable_scope.get_variable("embedding", [num_symbols, embedding_size]) loop_function = _extract_sksk_argmax_and_embed( embedding, output_projection, update_embedding_for_previous) if feed_previous else None emb_inp = (embedding_ops.embedding_lookup(embedding, i) for i in decoder_inputs) </code></pre> <p>Any idea how to elegantly get those two to use the same embedding-matrix?</p>
0
2016-09-09T09:55:34Z
39,418,862
<p>You can reuse the variable scope when you call the function which creates the second embedding. If you use a scope with the same name and set <code>reuse=True</code>, the embedding variable will be shared. The documentation on <a href="https://www.tensorflow.org/versions/r0.10/how_tos/variable_scope/index.html" rel="nofollow">sharing variables</a> is relevant.</p>
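<p>A minimal sketch of the pattern (TensorFlow 0.x-style API, as in the question; <code>vocab_size</code> and <code>embedding_size</code> are placeholder names):</p> <pre><code>import tensorflow as tf

vocab_size, embedding_size = 10000, 128

with tf.variable_scope("shared_embedding"):
    encoder_embedding = tf.get_variable("embedding", [vocab_size, embedding_size])

# Same scope name + reuse=True returns the existing variable instead of creating a new one.
with tf.variable_scope("shared_embedding", reuse=True):
    decoder_embedding = tf.get_variable("embedding", [vocab_size, embedding_size])

assert encoder_embedding is decoder_embedding
</code></pre>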
0
2016-09-09T19:53:34Z
[ "python", "tensorflow" ]
How to override Django Admin
39,408,860
<p>I have two models Restaurant and Details. The superuser assigns each restaurant a user.When that user logs into admin i want only those Details associated with that user's Restaurant to be shown,and he should be able to edit them as well. I tried to override admin's queryset function but to no success.Any help would be appreciated. This is what i did so far</p> <p>I am just a beginner in Django.</p> <pre><code>class RestaurantAdmin(admin.ModelAdmin): model = Details def save_model(self, request, obj, form, change): obj.user = request.user super(RestaurantAdmin, self).save_model(request, obj, form, change) def queryset(self, request): print(request.user) qs = super(ResaturantAdmin, self).queryset(request) # If super-user, show all comments if request.user.is_superuser: return qs return qs.filter(owner=request.user) admin.site.register(Restaurant) admin.site.register(Details,RestaurantAdmin) </code></pre>
0
2016-09-09T09:57:43Z
39,408,963
<p>The method you need to override is called <code>get_queryset</code>, not <code>queryset</code>.</p>
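<p>Applied to the code in the question, that looks like this (note the original also misspells <code>RestaurantAdmin</code> as <code>ResaturantAdmin</code> inside the method; whether <code>owner</code> is the right lookup depends on the <code>Details</code> model, which isn't shown):</p> <pre><code>class RestaurantAdmin(admin.ModelAdmin):

    def save_model(self, request, obj, form, change):
        obj.user = request.user
        super(RestaurantAdmin, self).save_model(request, obj, form, change)

    def get_queryset(self, request):
        qs = super(RestaurantAdmin, self).get_queryset(request)
        # If super-user, show everything; otherwise only the user's own rows
        if request.user.is_superuser:
            return qs
        return qs.filter(owner=request.user)

admin.site.register(Restaurant)
admin.site.register(Details, RestaurantAdmin)
</code></pre>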
0
2016-09-09T10:03:14Z
[ "python", "django", "override", "admin" ]
Summary data for pandas dataframe
39,408,938
<p><code>Describe()</code> doesn't do exactly what I'd like - so I'm rolling my own version.</p> <p>The following works fine apart from the final metric 'Num Unique Values' which is returning numbers but they are not correct - I guess I'm using apply incorrectly?</p> <pre><code>pd.DataFrame({ 'Max':d.max(), 'Min':d.min(), 'Count':d.count(axis = 0), 'Count Null':d.isnull().sum(), 'Count Zero':d[d==0].count(), 'Num Unique Values':d.apply(lambda x: x.nunique()) }) </code></pre>
1
2016-09-09T10:02:13Z
39,408,983
<p>For me it works nice:</p> <pre><code>print(df.apply(lambda x: x.nunique())) </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'A':[1,2,2,1], 'B':[4,5,6,4], 'C':[7,8,9,1], 'D':[1,3,5,9]}) print (df) A B C D 0 1 4 7 1 1 2 5 8 3 2 2 6 9 5 3 1 4 1 9 print (df.apply(lambda x: x.nunique())) A 2 B 3 C 4 D 4 dtype: int64 </code></pre> <p>Another solution:</p> <pre><code>print (df.apply(lambda x: len(x.unique()))) A 2 B 3 C 4 D 4 dtype: int64 </code></pre>
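<p>As a side note, newer pandas versions also expose this directly as <code>DataFrame.nunique()</code>, so the summary frame from the question can be written as (same <code>d</code> as there):</p> <pre><code>pd.DataFrame({
    'Max': d.max(),
    'Min': d.min(),
    'Count': d.count(),
    'Count Null': d.isnull().sum(),
    'Count Zero': d[d == 0].count(),
    'Num Unique Values': d.nunique(),   # equivalent to d.apply(lambda x: x.nunique())
})
</code></pre>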
1
2016-09-09T10:04:28Z
[ "python", "pandas" ]
How to force recompile of py source in site-packages?
39,409,084
<p>If I edit the source of an installed package and delete the .pyc, then when I restart an app that uses it no new .pyc is generated in place, which suggests there is a cache elsewhere.</p> <p>How do I force the update to the source to be taken into account?</p>
1
2016-09-09T10:10:33Z
39,409,288
<p>Go to the directory of the <code>.py</code> file and run <code>python -m compileall .</code>.</p>
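<p>The same thing is available from Python itself via the <code>compileall</code> module; <code>force=True</code> rebuilds the <code>.pyc</code> files even when the timestamps look up to date (the path below is just a placeholder):</p> <pre><code>import compileall

compileall.compile_dir('/path/to/site-packages/your_package', force=True)
</code></pre>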
1
2016-09-09T10:19:20Z
[ "python", "python-2.7", "package" ]
What does get_object() actually take as a parameter in the Facebook SDK?
39,409,170
<p>I want to send friend name as a user in <code>get_object()</code>in below code for getting his public posts. But I am getting error</p> <blockquote> <p>raise GraphAPIError(result) facebook.GraphAPIError: (#803) Cannot query users by their username (tayyab.rasheed.545)</p> </blockquote> <pre><code>user = 'tayyab.rasheed.545' #is giving error #user = 'BillGates' #is working fine. # user = 'me' #is working fine. graph = facebook.GraphAPI(access_token) profile = graph.get_object(user) posts = graph.get_connections(profile['id'], 'posts') </code></pre> <p>Why is the error? I think I am doing something wrong. <code>BillGates</code> and <code>me</code> is working fine then why not <code>tayyab.rasheed.545</code> Profile of friend is '<a href="https://www.facebook.com/tayyab.rasheed.545" rel="nofollow">https://www.facebook.com/tayyab.rasheed.545</a>'</p>
-1
2016-09-09T10:13:48Z
39,409,222
<p>It is not possible to get the public posts of any user. Even for your own public posts, you need to authorize yourself with the <code>user_posts</code> permission.</p> <p>Edit: Please don't change your question, especially when there is an answer already. CBroe's answer is correct. You are not supposed to get any data of users who did not authorize your App anyway.</p>
0
2016-09-09T10:16:37Z
[ "python", "facebook", "facebook-graph-api" ]
What does get_object() actually take as a parameter in the Facebook SDK?
39,409,170
<p>I want to send friend name as a user in <code>get_object()</code>in below code for getting his public posts. But I am getting error</p> <blockquote> <p>raise GraphAPIError(result) facebook.GraphAPIError: (#803) Cannot query users by their username (tayyab.rasheed.545)</p> </blockquote> <pre><code>user = 'tayyab.rasheed.545' #is giving error #user = 'BillGates' #is working fine. # user = 'me' #is working fine. graph = facebook.GraphAPI(access_token) profile = graph.get_object(user) posts = graph.get_connections(profile['id'], 'posts') </code></pre> <p>Why is the error? I think I am doing something wrong. <code>BillGates</code> and <code>me</code> is working fine then why not <code>tayyab.rasheed.545</code> Profile of friend is '<a href="https://www.facebook.com/tayyab.rasheed.545" rel="nofollow">https://www.facebook.com/tayyab.rasheed.545</a>'</p>
-1
2016-09-09T10:13:48Z
39,409,869
<blockquote> <p>Why is the error?</p> </blockquote> <p>Because Facebook removed the username field from the API with v2.0, and as the error message says, you can not query user profiles by their username any more.</p> <blockquote> <p><code>BillGates</code> and <code>me</code> is working fine then why not `tayyab.rasheed.545</p> </blockquote> <p><code>BillGates</code> simply is a Facebook <em>Page</em>, and not a user profile.</p> <p>(And <code>me</code> has nothing to do with the username in the first place.)</p>
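<p>With the SDK calls from the question, the practical consequence is (IDs and tokens are placeholders):</p> <pre><code>graph = facebook.GraphAPI(access_token)

# Pages can still be looked up by their public name:
page = graph.get_object('BillGates')
posts = graph.get_connections(page['id'], 'posts')

# User profiles can only be looked up by their (app-scoped) numeric ID,
# and only if the user authorized your app:
me = graph.get_object('me')
friend = graph.get_object('1234567890')   # placeholder numeric ID
</code></pre>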
1
2016-09-09T10:48:31Z
[ "python", "facebook", "facebook-graph-api" ]
Create two aggregate columns by Group By Pandas
39,409,180
<p>I'm new to DataFrames and I want to group multiple columns and then sum and keep a count on the last column. e.g.</p> <pre><code>s = pd.DataFrame(np.matrix([[1, 2,3,4], [3, 4,7,6],[3,4,5,6],[1,2,3,7]]), columns=['a', 'b', 'c', 'd']) a b c d 0 1 2 3 4 1 3 4 7 6 2 3 4 5 6 3 1 2 3 7 </code></pre> <p>I want to group on <code>a</code>, <code>b</code> and <code>c</code> but then sum on <code>d</code> and count the elements within the group. I can count by </p> <pre><code>s = s.groupby(by=["a", "b", "c"])["d"].count() a b c 1 2 3 2 3 4 5 1 7 1 </code></pre> <p>And I can sum by</p> <pre><code>s = s.groupby(by=["a", "b", "c"])["d"].sum() a b c 1 2 3 11 3 4 5 6 7 6 </code></pre> <p>However I want to combine it such that The resulting dataframe has both the sum and count columns. </p> <pre><code> a b c sum count 1 2 3 11 2 3 4 5 6 1 7 6 1 </code></pre>
1
2016-09-09T10:14:28Z
39,409,213
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.aggregate.html" rel="nofollow"><code>aggregate</code></a>, or shorter version <code>agg</code>:</p> <pre><code>print (s.groupby(by=["a", "b", "c"])["d"].agg([sum, 'count'])) #print (s.groupby(by=["a", "b", "c"])["d"].aggregate([sum, 'count'])) sum count a b c 1 2 3 11 2 3 4 5 6 1 7 6 1 </code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once" rel="nofollow">Pandas documentation</a>.</p> <p><a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1822/grouping-data/6874/aggregating-by-size-and-count#t=201609091024047214605">Differences between size and count</a></p> <p>If need count <code>NaN</code> values also:</p> <pre><code>print (s.groupby(by=["a", "b", "c"])["d"].agg([sum, 'size'])) sum size a b c 1 2 3 11 2 3 4 5 6 1 7 6 1 </code></pre>
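<p>If you want <code>a</code>, <code>b</code> and <code>c</code> back as ordinary columns, as in the desired output, just add <code>reset_index()</code>, which gives the flat <code>a b c sum count</code> layout:</p> <pre><code>print (s.groupby(by=["a", "b", "c"])["d"].agg([sum, 'count']).reset_index())
</code></pre>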
1
2016-09-09T10:16:10Z
[ "python", "pandas", "dataframe", "group-by", "aggregate-functions" ]
How to plot a stacked histogram with two arrays in python
39,409,187
<p>I am trying to create a stacked histogram showing the clump thickness for malignant and benign tumors, with the malignant class colored red and the benign class colored blue. </p> <p>I got the clump_thickness_array and benign_or_malignant_array. The benign_or_malignant_array consists of 2s and 4s.</p> <ol> <li>If benign_or_malignant equals 2 it is benign(blue colored). </li> <li>If it equals 4 it is malignant(red colored).</li> </ol> <p>I can not figure out how to color the benign and malignant tumors. My Histogram is showing something other than what I try to achieve.</p> <p>This is my code and my histogram so far:</p> <pre><code>fig, ax = plt.subplots(figsize=(12,8)) tmp = list() for i in range(2): indices = np.where(benign_or_malignant&gt;=i ) tmp.append(clump_thickness[indices]) ax.hist(tmp,bins=10,stacked=True,color = ['b',"r"],alpha=0.73) </code></pre> <p><a href="http://i.stack.imgur.com/XHru8.png" rel="nofollow"><img src="http://i.stack.imgur.com/XHru8.png" alt="enter image description here"></a></p>
1
2016-09-09T10:14:49Z
39,411,881
<p>To obtain a stacked histogram using lists of different lengths for each group, you need to assemble a list of lists. This is what you are doing with your <code>tmp</code> variable. However, I think you are using the wrong indices in your for loop. Above, you state that you want to label your data according to the variable <code>benign_or_malignant</code>. You want to test if it is exactly 2 or exactly 4. If you really just want these two possibilities, rewrite like this:</p> <pre><code>for i in [2,4]:
    indices = np.where(benign_or_malignant==i )
    tmp.append(clump_thickness[indices])
</code></pre>
1
2016-09-09T12:42:38Z
[ "python", "matplotlib", "histogram" ]
Create a virtualenv from another virtualenv
39,409,380
<p>Can we create a <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a> from an existing virtualenv in order to inherit the installed libraries?</p> <p>In detail:</p> <p>I first create a "reference" virtualenv, and add libraries (with versions fixed):</p> <pre><code>virtualenv ref source ref/bin/activate pip install -U pip==8.1.1 # &lt;- I want to fix the version number pip install -U wheel==0.29.0 # &lt;- I want to fix the version number </code></pre> <p>Then:</p> <pre><code>virtualenv -p ref/bin/python myapp source myapp/bin/activate pip list </code></pre> <p>I get:</p> <pre><code>pip (1.4.1) setuptools (0.9.8) wsgiref (0.1.2) </code></pre> <p>How to get my installed libraries?</p> <p><strong>Similar question</strong></p> <p>I saw a similar question: <a href="http://stackoverflow.com/questions/10538675/can-a-virtualenv-inherit-from-another">Can a virtualenv inherit from another?</a>.</p> <p>But I want a isolated virtualenv which didn't use the referenced virtualenv, except for libraries installation. So, adding the specified directories to the Python path for the currently-active virtualenv, is not the solution.</p> <p><strong>Why doing that?</strong></p> <p>Well, we have an integration server which builds the applications (for releases and continuous integration) and we want to keep the control on libraries versions and make the build faster.</p> <p><strong>Create a relocatable virtualenv</strong></p> <p>I think I could use a <a href="https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable" rel="nofollow">relocatable virtualenv</a>, that way:</p> <ol> <li>create the <strong>ref</strong> virtualenv</li> <li>make it relocatable: ``virtualenv --relocatable ref```</li> </ol> <p>For "myapp":</p> <ul> <li>copy <strong>ref</strong> to <strong>myapp</strong></li> </ul> <p>What do you think of this solution? Is it reliable for a distribuable release?</p>
0
2016-09-09T10:24:09Z
39,409,459
<p>When you create the second virtualenv you have to add the <code>--system-site-packages</code> flag.</p> <pre><code>virtualenv -p ref/bin/python myapp --system-site-packages
</code></pre>
0
2016-09-09T10:28:26Z
[ "python", "virtualenv" ]
Create a virtualenv from another virtualenv
39,409,380
<p>Can we create a <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a> from an existing virtualenv in order to inherit the installed libraries?</p> <p>In detail:</p> <p>I first create a "reference" virtualenv, and add libraries (with versions fixed):</p> <pre><code>virtualenv ref source ref/bin/activate pip install -U pip==8.1.1 # &lt;- I want to fix the version number pip install -U wheel==0.29.0 # &lt;- I want to fix the version number </code></pre> <p>Then:</p> <pre><code>virtualenv -p ref/bin/python myapp source myapp/bin/activate pip list </code></pre> <p>I get:</p> <pre><code>pip (1.4.1) setuptools (0.9.8) wsgiref (0.1.2) </code></pre> <p>How to get my installed libraries?</p> <p><strong>Similar question</strong></p> <p>I saw a similar question: <a href="http://stackoverflow.com/questions/10538675/can-a-virtualenv-inherit-from-another">Can a virtualenv inherit from another?</a>.</p> <p>But I want a isolated virtualenv which didn't use the referenced virtualenv, except for libraries installation. So, adding the specified directories to the Python path for the currently-active virtualenv, is not the solution.</p> <p><strong>Why doing that?</strong></p> <p>Well, we have an integration server which builds the applications (for releases and continuous integration) and we want to keep the control on libraries versions and make the build faster.</p> <p><strong>Create a relocatable virtualenv</strong></p> <p>I think I could use a <a href="https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable" rel="nofollow">relocatable virtualenv</a>, that way:</p> <ol> <li>create the <strong>ref</strong> virtualenv</li> <li>make it relocatable: ``virtualenv --relocatable ref```</li> </ol> <p>For "myapp":</p> <ul> <li>copy <strong>ref</strong> to <strong>myapp</strong></li> </ul> <p>What do you think of this solution? Is it reliable for a distribuable release?</p>
0
2016-09-09T10:24:09Z
39,409,609
<p>The <code>pip</code> version <code>1.4.1</code> was bundled with an old version of <code>virtualenv</code>. For example, the one shipped with Ubuntu 14.04. You should remove that from your system and install the most recent version of <code>virtualenv</code>.</p> <pre><code>pip install virtualenv
</code></pre> <p>This might require root permissions (<code>sudo</code>).</p> <p>Then upgrade <code>pip</code> inside the virtual env with <code>pip install -U pip</code>, or recreate the env.</p>
0
2016-09-09T10:35:33Z
[ "python", "virtualenv" ]
Create a virtualenv from another virtualenv
39,409,380
<p>Can we create a <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a> from an existing virtualenv in order to inherit the installed libraries?</p> <p>In detail:</p> <p>I first create a "reference" virtualenv, and add libraries (with versions fixed):</p> <pre><code>virtualenv ref source ref/bin/activate pip install -U pip==8.1.1 # &lt;- I want to fix the version number pip install -U wheel==0.29.0 # &lt;- I want to fix the version number </code></pre> <p>Then:</p> <pre><code>virtualenv -p ref/bin/python myapp source myapp/bin/activate pip list </code></pre> <p>I get:</p> <pre><code>pip (1.4.1) setuptools (0.9.8) wsgiref (0.1.2) </code></pre> <p>How to get my installed libraries?</p> <p><strong>Similar question</strong></p> <p>I saw a similar question: <a href="http://stackoverflow.com/questions/10538675/can-a-virtualenv-inherit-from-another">Can a virtualenv inherit from another?</a>.</p> <p>But I want a isolated virtualenv which didn't use the referenced virtualenv, except for libraries installation. So, adding the specified directories to the Python path for the currently-active virtualenv, is not the solution.</p> <p><strong>Why doing that?</strong></p> <p>Well, we have an integration server which builds the applications (for releases and continuous integration) and we want to keep the control on libraries versions and make the build faster.</p> <p><strong>Create a relocatable virtualenv</strong></p> <p>I think I could use a <a href="https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable" rel="nofollow">relocatable virtualenv</a>, that way:</p> <ol> <li>create the <strong>ref</strong> virtualenv</li> <li>make it relocatable: ``virtualenv --relocatable ref```</li> </ol> <p>For "myapp":</p> <ul> <li>copy <strong>ref</strong> to <strong>myapp</strong></li> </ul> <p>What do you think of this solution? Is it reliable for a distribuable release?</p>
0
2016-09-09T10:24:09Z
39,410,201
<p>I think your problem can be solved differently, with the use of <code>PYTHONPATH</code>. First we create the <code>ref</code> virtualenv and install all needed packages in it</p> <pre><code>$ virtualenv ref
$ source ref/bin/activate
$ pip install pep8
$ pip list
&gt; pep8 (1.7.0)
&gt; pip (8.1.2)
&gt; setuptools (26.1.1)
&gt; wheel (0.29.0)
</code></pre> <p>Then we create the second virtualenv, <code>use</code>.</p> <pre><code>$ virtualenv use
$ source use/bin/activate
$ pip list
&gt; pip (8.1.2)
&gt; setuptools (26.1.1)
&gt; wheel (0.29.0)
</code></pre> <p>And now we can set our <code>PYTHONPATH</code> in this env to include ref's directories</p> <pre><code>$ export PYTHONPATH=$PYTHONPATH:/home/path_to/ref/lib/python2.7/site-packages:/home/path_to/ref/local/lib/python2.7/site-packages
$ pip list
&gt; pep8 (1.7.0)
&gt; pip (8.1.2)
&gt; setuptools (26.1.1)
&gt; wheel (0.29.0)
</code></pre> <p>As you can see, this way you just reference the packages installed in ref's environment. Also note that we add these folders at the end so they will have lower priority.</p> <p><strong>NOTE</strong>: these are not all the folders that exist in <code>PYTHONPATH</code>. I included these two because they are the main ones. But if you have some problems you can add other ones too; just look up the needed paths with this method: <a href="http://stackoverflow.com/questions/18486469/how-to-print-contents-of-pythonpath#18486534">how to print contents of PYTHONPATH</a></p>
0
2016-09-09T11:08:47Z
[ "python", "virtualenv" ]
Create a virtualenv from another virtualenv
39,409,380
<p>Can we create a <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a> from an existing virtualenv in order to inherit the installed libraries?</p> <p>In detail:</p> <p>I first create a "reference" virtualenv, and add libraries (with versions fixed):</p> <pre><code>virtualenv ref source ref/bin/activate pip install -U pip==8.1.1 # &lt;- I want to fix the version number pip install -U wheel==0.29.0 # &lt;- I want to fix the version number </code></pre> <p>Then:</p> <pre><code>virtualenv -p ref/bin/python myapp source myapp/bin/activate pip list </code></pre> <p>I get:</p> <pre><code>pip (1.4.1) setuptools (0.9.8) wsgiref (0.1.2) </code></pre> <p>How to get my installed libraries?</p> <p><strong>Similar question</strong></p> <p>I saw a similar question: <a href="http://stackoverflow.com/questions/10538675/can-a-virtualenv-inherit-from-another">Can a virtualenv inherit from another?</a>.</p> <p>But I want a isolated virtualenv which didn't use the referenced virtualenv, except for libraries installation. So, adding the specified directories to the Python path for the currently-active virtualenv, is not the solution.</p> <p><strong>Why doing that?</strong></p> <p>Well, we have an integration server which builds the applications (for releases and continuous integration) and we want to keep the control on libraries versions and make the build faster.</p> <p><strong>Create a relocatable virtualenv</strong></p> <p>I think I could use a <a href="https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable" rel="nofollow">relocatable virtualenv</a>, that way:</p> <ol> <li>create the <strong>ref</strong> virtualenv</li> <li>make it relocatable: ``virtualenv --relocatable ref```</li> </ol> <p>For "myapp":</p> <ul> <li>copy <strong>ref</strong> to <strong>myapp</strong></li> </ul> <p>What do you think of this solution? Is it reliable for a distribuable release?</p>
0
2016-09-09T10:24:09Z
39,410,319
<p>You may <code>freeze</code> the list of packages from one env:</p> <pre><code>(ref) user@host:~/dir$ pip freeze &gt; ref-packages.txt
</code></pre> <p>Then install them:</p> <pre><code>(use) user@host:~/dir$ pip install -r ref-packages.txt
</code></pre>
0
2016-09-09T11:15:12Z
[ "python", "virtualenv" ]
How to fill a knapsack table when using recursive dynamic programming
39,409,443
<p><strong>* NOT HOMEWORK *</strong></p> <p>I have implemented the knapsack in python and am successfully getting the best value however I would like to expand the problem to fill a table with all appropriate values for a knapsack table of all weights and items.</p> <p>I've implemented it in python which I'm new to so please advise me if theres anything I could improve upon however the concepts should work in any language.</p> <pre><code>values, weights, table = [], [], [[]] def knapsack(i, W): global weights, values, table, counter if (i &lt; 0): # Base case return 0 if (weights[i] &gt; W): # Recursion return knapsack(i - 1, W) else: # Recursion return max(knapsack(i - 1, W), values[i] + knapsack(i - 1, W - weights[i])) def main(): global values, weights, table W = int(input()) values = list(map(int, input().split())) weights = list(map(int, input().split())) # initalise table with 0's table = [[0 for i in range(W)] for i in range(len(values))] for i in range(len(values)): for j in range(W): table[i][j] = 0 # Fill table print("Result: {}".format(knapsack(len(values) - 1, W))) printKnapsack(W) if __name__ == '__main__': main() </code></pre> <p>I also have this print table method which is unrelated but just so you can see what I'm outputting it as:</p> <pre><code>def printLine(W): print(" ",end="") for i in range(W + 1): print("-----",end="") print("") def printKnapsack(W): global table print("\nKnapsack Table:") printLine(W) print("| k\w |", end="") for i in range(W): print("{0: &gt;3} |".format(i + 1), end="") print("") printLine(W) for i in range(len(values)): print("| {} |".format(i+1), end="") for j in range(W): print("{0: &gt;3} |".format(table[i][j]), end="") print("") printLine(W) </code></pre> <p>This is the sample input:</p> <pre><code>10 18 9 12 25 5 2 4 6 </code></pre> <p>This is what is should output:</p> <pre><code>Result: 37 Knapsack Table: ------------------------------------------------------- | k\w | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | ------------------------------------------------------- | 1 | 0 | 0 | 0 | 0 | 18 | 18 | 18 | 18 | 18 | 18 | ------------------------------------------------------- | 2 | 0 | 9 | 9 | 9 | 18 | 18 | 27 | 27 | 27 | 27 | ------------------------------------------------------- | 3 | 0 | 9 | 9 | 12 | 18 | 21 | 27 | 27 | 30 | 30 | ------------------------------------------------------- | 4 | 0 | 9 | 9 | 12 | 18 | 25 | 27 | 34 | 34 | 37 | ------------------------------------------------------- </code></pre> <p>I have tried multiple different lines in the <code>knapsack(i, W)</code> function to add elements to the table and I've drawn it out but I can't understand how the recursion is working well enough to figure out what indexes to put in to add the unravelled recursive call values to.</p> <p>This is the method I have to fix.</p> <pre><code>def knapsack(i, W): global weights, values, table, counter if (i &lt; 0): # Base case return 0 if (weights[i] &gt; W): # Recursion table[?][?] = ? return knapsack(i - 1, W) else: # Recursion table[?][?] = ? return max(knapsack(i - 1, W), values[i] + knapsack(i - 1, W - weights[i])) </code></pre>
3
2016-09-09T10:27:40Z
39,410,689
<p>In your recursive algorithm you just can't get a fully filled table, because this step skips a lot:</p> <pre><code>return max(knapsack(i - 1, W), values[i] + knapsack(i - 1, W - weights[i]))
</code></pre> <p>I can suggest you this solution:</p> <pre><code>def knapsack(i, W):
    global weights, values, table, counter
    if (i &lt; 0): # Base case
        return 0
    if (weights[i] &gt; W):
        # Recursion
        table[i][W-1] = knapsack(i - 1, W)
        return table[i][W-1]
    else:
        # Recursion
        table[i][W-1] = max(knapsack(i - 1, W), values[i] + knapsack(i - 1, W - weights[i]))
        return table[i][W-1]
</code></pre> <p>In the resulting table, non-zero cells mean that your algorithm stepped through there and got that intermediate solution. Also, you can run your algorithm more than once with different input values to get a more fully filled table.</p> <p>Hope this helped</p>
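<p>If the goal is the completely filled table from the question (every item row times every capacity 1..W), the usual way is to fill it bottom-up instead of relying on which cells the recursion happens to visit; a sketch that matches the row/column layout printed in the question:</p> <pre><code>def fill_knapsack_table(values, weights, W):
    # table[i][j] = best value using items 0..i with capacity j + 1
    n = len(values)
    table = [[0] * W for _ in range(n)]
    for i in range(n):
        for j in range(W):
            cap = j + 1
            best = table[i - 1][j] if i &gt; 0 else 0            # skip item i
            if weights[i] &lt;= cap:
                rest = cap - weights[i]
                prev = table[i - 1][rest - 1] if (i &gt; 0 and rest &gt; 0) else 0
                best = max(best, values[i] + prev)            # take item i
            table[i][j] = best
    return table

# table = fill_knapsack_table([18, 9, 12, 25], [5, 2, 4, 6], 10)
# table[-1][-1] == 37
</code></pre>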
1
2016-09-09T11:35:43Z
[ "python", "recursion", "dynamic-programming", "knapsack-problem", "bottom-up" ]
Pandas DataFrame, smart apply of a complex function to groupby result
39,409,500
<p>I have a <code>pandas.DataFrame</code> with 3 columns of type <code>str</code> and <code>n</code> other columns of type <code>float64</code>.</p> <p>I need to group rows by one of the three <code>str</code> columns and apply a function <code>myComplexFunc()</code> which will reduce `̀N rows to one row.</p> <p><code>myComplexFunc()</code> take only rows of type <code>float64</code>.</p> <p>This can be done with some for loops but it will not be efficient, So I tried to use the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#flexible-apply" rel="nofollow">flexible apply</a> of <code>pandas</code> but it seems that it runs the heavy code of <code>myComplexFunc()</code> twice!</p> <p>To be more clear, here is a minimal example</p> <p><strong>Let "df" be a dataFrame like this</strong> : </p> <pre><code>df &gt;&gt; A B C D 0 foo one 0.406157 0.735223 1 bar one 1.020493 -1.167256 2 foo two -0.314192 -0.883087 3 bar three 0.271705 -0.215049 4 foo two 0.535290 0.185872 5 bar two 0.178926 -0.459890 6 foo one -1.939673 -0.523396 7 foo three -2.125591 -0.689809 </code></pre> <p><strong>myComplexFunc()</strong></p> <pre class="lang-py prettyprint-override"><code>def myComplexFunc(rows): # Some transformations that will return 1 row result = some_transformations(rows) return result </code></pre> <p><strong>What I want :</strong></p> <pre class="lang-py prettyprint-override"><code># wanted apply is the name of the wanted method df.groupby("A").wanted_apply(myComplexFunc) &gt;&gt; A C D 0 foo new_c0_foo new_d0_foo 1 bar new_c0_bar new_d0_bar </code></pre> <p>The column <code>B</code> have been removed because it's not of type <code>float64</code>. </p> <p>Thanks in advance</p>
1
2016-09-09T10:30:25Z
39,409,706
<p>You can filter DataFrame by <code>dtype</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.select_dtypes.html" rel="nofollow"><code>select_dtypes</code></a>, but then need aggreagate by <code>Series</code> <code>df.A</code>:</p> <pre><code>def myComplexFunc(rows): return rows + 10 df = df.select_dtypes(include=[np.float64]).groupby([df.A]).apply(myComplexFunc) print (df) C D 0 10.406157 10.735223 1 11.020493 8.832744 2 9.685808 9.116913 3 10.271705 9.784951 4 10.535290 10.185872 5 10.178926 9.540110 6 8.060327 9.476604 7 7.874409 9.310191 </code></pre> <p>because if use only <code>A</code>:</p> <pre><code>df = df.select_dtypes(include=[np.float64]).groupby('A').apply(myComplexFunc) </code></pre> <p>get</p> <blockquote> <p>KeyError: 'A'</p> </blockquote> <p>and it is right - all string columns are excluded (<code>A</code> and <code>B</code>).</p> <pre><code>print (df.select_dtypes(include=[np.float64])) C D 0 0.406157 0.735223 1 1.020493 -1.167256 2 -0.314192 -0.883087 3 0.271705 -0.215049 4 0.535290 0.185872 5 0.178926 -0.459890 6 -1.939673 -0.523396 7 -2.125591 -0.689809 </code></pre>
0
2016-09-09T10:41:12Z
[ "python", "string", "pandas", "dataframe", "group-by" ]
Populating DataFrame from tuple with different size
39,409,544
<p>I have data that is spread over the day. I clustered it and then i calculated the ratio (weight) of each cluster per hour (not all the clusters exists in all hours). (dataframe time_df ) </p> <pre><code> cluster Date 0 1 2014-02-28 14:24:59.535000+02:00 1 1 2014-02-28 14:26:35.019000+02:00 2 1 2014-02-28 14:27:37.213000+02:00 3 2 2014-02-28 14:28:35.246000+02:00 4 2 2014-02-28 14:29:37.283000+02:00 </code></pre> <p>I group by hour and use np bincount to calculate the weight of each cluster:</p> <pre><code>group_by_hour = time_df.groupby(time_df.Date.dt.hour) cluster_ids_hour = group_by_hour.cluster.\ apply(lambda arr: list(range(0,(arr+1).max()+1))) cluster_ratio_hour = group_by_hour.cluster.\ apply(lambda arr: 1.0*np.bincount(arr+1)/len(arr)) </code></pre> <p>This gives per hour a different array size of clusters and their weight It tried to construct a dataframe</p> <p>pd.DataFrame(temp, columns=['hour','clusters','ratios']) </p> <p>But I got the following:</p> <pre><code> hour clusters weights 0 14 [0] [1.0] 1 15 [0, 1] [0.488888888889, 0.511111111111] 2 16 [0, 1, 2] [0.302325581395, 0.162790697674, 0.53488372093] 3 17 [0, 1, 2] [0.0, 0.0, 1.0] 4 18 [0, 1, 2] [0.0, 0.0, 1.0] 5 19 [0, 1, 2] [0.0, 0.0, 1.0] 6 20 [0, 1, 2] [0.0, 0.0, 1.0] 7 21 [0, 1, 2] [0.0, 0.0, 1.0] 8 22 [0, 1, 2] [0.0, 0.0, 1.0] 9 23 [0, 1, 2] [0.0, 0.0, 1.0] </code></pre> <p>How can I make it to have the cluster as index and hours as columns?</p> <pre><code> 0 1 2 3 4 ... 0 0.2 0.6 0.4 0.0 0.6 1 0.0 0.4 0.1 0.0 0.4 2 0.8 0.0 0.5 1.0 0.0 </code></pre>
1
2016-09-09T10:32:43Z
39,410,123
<p>I think you can use:</p> <pre><code>import pandas as pd import numpy as np time_df = pd.DataFrame({'cluster': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2}, 'Date': {0: pd.Timestamp('2014-02-28 12:24:59.535000'), 1: pd.Timestamp('2014-02-28 12:26:35.019000'), 2: pd.Timestamp('2014-02-28 12:27:37.213000'), 3: pd.Timestamp('2014-02-28 12:28:35.246000'), 4: pd.Timestamp('2014-02-28 12:29:37.283000'), 5: pd.Timestamp('2014-02-28 13:27:37.213000'), 6: pd.Timestamp('2014-02-28 14:28:35.246000'), 7: pd.Timestamp('2014-02-28 14:29:37.283000')}}) print (time_df) Date cluster 0 2014-02-28 12:24:59.535 1 1 2014-02-28 12:26:35.019 1 2 2014-02-28 12:27:37.213 1 3 2014-02-28 12:28:35.246 2 4 2014-02-28 12:29:37.283 2 5 2014-02-28 13:27:37.213 1 6 2014-02-28 14:28:35.246 1 7 2014-02-28 14:29:37.283 2 </code></pre> <pre><code>group_by_hour = time_df.groupby(time_df.Date.dt.hour) cluster_ids_hour = group_by_hour.cluster.\ apply(lambda arr: list(range(0,(arr+1).max()+1))) cluster_ratio_hour = group_by_hour.cluster.\ apply(lambda arr: 1.0*np.bincount(arr+1)/len(arr)) print (cluster_ids_hour) Date 12 [0, 1, 2, 3] 13 [0, 1, 2] 14 [0, 1, 2, 3] Name: cluster, dtype: object print (cluster_ratio_hour) Date 12 [0.0, 0.0, 0.6, 0.4] 13 [0.0, 0.0, 1.0] 14 [0.0, 0.0, 0.5, 0.5] Name: cluster, dtype: object #create DataFrames from both columns and concate them df1 = pd.DataFrame(cluster_ids_hour.values.tolist(), index=cluster_ids_hour.index) #print (df1) df2 = pd.DataFrame(cluster_ratio_hour.values.tolist(), index=cluster_ratio_hour.index) #print (df2) df = pd.concat([df1, df2], axis=1, keys=('clusters','weights')) print (df) clusters weights 0 1 2 3 0 1 2 3 Date 12 0 1 2 3.0 0.0 0.0 0.6 0.4 13 0 1 2 NaN 0.0 0.0 1.0 NaN 14 0 1 2 3.0 0.0 0.0 0.5 0.5 </code></pre> <pre><code>#reshape, cast clusters column to integer df = df.stack().reset_index(level=1, drop=True).reset_index() df['clusters'] = df['clusters'].astype(int) #pivoting, fill NaN by 0 df = df.pivot(index='clusters', columns='Date', values='weights').fillna(0) df.index.name = None df.columns.name = None print (df) 12 13 14 0 0.0 0.0 0.0 1 0.0 0.0 0.0 2 0.6 1.0 0.5 3 0.4 0.0 0.5 </code></pre>
1
2016-09-09T11:03:36Z
[ "python", "list", "pandas", "dataframe", "pivot-table" ]
Populating DataFrame from tuple with different size
39,409,544
<p>I have data that is spread over the day. I clustered it and then i calculated the ratio (weight) of each cluster per hour (not all the clusters exists in all hours). (dataframe time_df ) </p> <pre><code> cluster Date 0 1 2014-02-28 14:24:59.535000+02:00 1 1 2014-02-28 14:26:35.019000+02:00 2 1 2014-02-28 14:27:37.213000+02:00 3 2 2014-02-28 14:28:35.246000+02:00 4 2 2014-02-28 14:29:37.283000+02:00 </code></pre> <p>I group by hour and use np bincount to calculate the weight of each cluster:</p> <pre><code>group_by_hour = time_df.groupby(time_df.Date.dt.hour) cluster_ids_hour = group_by_hour.cluster.\ apply(lambda arr: list(range(0,(arr+1).max()+1))) cluster_ratio_hour = group_by_hour.cluster.\ apply(lambda arr: 1.0*np.bincount(arr+1)/len(arr)) </code></pre> <p>This gives per hour a different array size of clusters and their weight It tried to construct a dataframe</p> <p>pd.DataFrame(temp, columns=['hour','clusters','ratios']) </p> <p>But I got the following:</p> <pre><code> hour clusters weights 0 14 [0] [1.0] 1 15 [0, 1] [0.488888888889, 0.511111111111] 2 16 [0, 1, 2] [0.302325581395, 0.162790697674, 0.53488372093] 3 17 [0, 1, 2] [0.0, 0.0, 1.0] 4 18 [0, 1, 2] [0.0, 0.0, 1.0] 5 19 [0, 1, 2] [0.0, 0.0, 1.0] 6 20 [0, 1, 2] [0.0, 0.0, 1.0] 7 21 [0, 1, 2] [0.0, 0.0, 1.0] 8 22 [0, 1, 2] [0.0, 0.0, 1.0] 9 23 [0, 1, 2] [0.0, 0.0, 1.0] </code></pre> <p>How can I make it to have the cluster as index and hours as columns?</p> <pre><code> 0 1 2 3 4 ... 0 0.2 0.6 0.4 0.0 0.6 1 0.0 0.4 0.1 0.0 0.4 2 0.8 0.0 0.5 1.0 0.0 </code></pre>
1
2016-09-09T10:32:43Z
39,478,682
<p>Count the occurrences per (hour, cluster), unstack the hours into columns, then normalise each column so that the weights per hour sum to 1:</p> <pre><code>import pandas as pd
import numpy as np

time_df = pd.DataFrame({'cluster': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2},
 'Date': {0: pd.Timestamp('2014-02-28 12:24:59.535000'),
  1: pd.Timestamp('2014-02-28 12:26:35.019000'),
  2: pd.Timestamp('2014-02-28 12:27:37.213000'),
  3: pd.Timestamp('2014-02-28 12:28:35.246000'),
  4: pd.Timestamp('2014-02-28 12:29:37.283000'),
  5: pd.Timestamp('2014-02-28 13:27:37.213000'),
  6: pd.Timestamp('2014-02-28 14:28:35.246000'),
  7: pd.Timestamp('2014-02-28 14:29:37.283000')}})
print (time_df)

# number of rows in each (hour, cluster) group
time_df_group = time_df.groupby([time_df.Date.dt.hour, time_df.cluster]).size()

# hours become the columns, clusters the index
cluster_hour_df = time_df_group.unstack(level=0)

# divide every column by its sum to get the per-hour weights
cluster_hour_df = cluster_hour_df[cluster_hour_df.columns.values].apply(lambda col: col / col.sum(), axis=0)
cluster_hour_df

Date      12   13   14
cluster
1        0.6  1.0  0.5
2        0.4  NaN  0.5
</code></pre>
0
2016-09-13T20:43:11Z
[ "python", "list", "pandas", "dataframe", "pivot-table" ]
Python3 Beautiful Soup get HTML tag anchor
39,409,545
<p>I am trying to use BS4 and Python to save and replace the content of the first <code>&lt;translate&gt;</code> tag in an HTML file.</p>

<p>Now I am trying to do something like this:</p>

<pre><code>translate_bs4 = bs4_object.find('translate')
translate_key = '{{ key }}'
translate_initial = str(translate_bs4)
translate_bs4.string = translate_key
</code></pre>

<p>My test case is:</p>

<pre><code>&lt;translate&gt;tag with &lt;other_tag&gt;some text&lt;/other_tag&gt;&lt;/translate&gt;
&lt;much_longer_file&gt;...&lt;/much_longer_file&gt;
</code></pre>

<p>and the resulting HTML is the expected:</p>

<pre><code>&lt;translate&gt;{{ key }}&lt;/translate&gt;
&lt;much_longer_file&gt;...&lt;/much_longer_file&gt;
</code></pre>

<p>but the value of <code>translate_initial</code> is</p>

<pre><code>&lt;translate&gt;tag with &lt;other_tag&gt;some text&lt;/other_tag&gt;&lt;/translate&gt;
</code></pre>

<p>instead of the expected</p>

<pre><code>tag with &lt;other_tag&gt;some text&lt;/other_tag&gt;
</code></pre>

<p>I know that it can easily be extracted with a regex, but I would prefer a more DOM-oriented solution.</p>
2
2016-09-09T10:32:46Z
39,410,209
<p>Try this:</p> <pre><code>translate_bs4 = bs4_object.find('translate') translate_initial = translate_bs4.decode_contents(formatter="html") </code></pre>
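<p>Putting it together with the replacement step from the question (a rough sketch; the parser choice and the inline sample markup are assumptions, not part of the original code):</p>

<pre><code>from bs4 import BeautifulSoup

html = '&lt;translate&gt;tag with &lt;other_tag&gt;some text&lt;/other_tag&gt;&lt;/translate&gt;'
soup = BeautifulSoup(html, 'html.parser')

translate_bs4 = soup.find('translate')
translate_initial = translate_bs4.decode_contents(formatter='html')  # inner HTML only
translate_bs4.string = '{{ key }}'                                   # replace the contents

print(translate_initial)   # tag with &lt;other_tag&gt;some text&lt;/other_tag&gt;
print(soup)                # &lt;translate&gt;{{ key }}&lt;/translate&gt;
</code></pre>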
1
2016-09-09T11:09:25Z
[ "python", "html", "beautifulsoup" ]
Django: no module named http_utils
39,409,581
<p>I created a new <code>utils</code> package and an <code>http_utils</code> file with some decorators and HTTP utility functions in there. I imported them wherever I am using them and the IDE reports no errors, and I also added the <code>utils</code> module to the <code>INSTALLED_APPS</code> list.</p> <p>However, when launching the server I am getting an import error:</p> <blockquote> <p>ImportError: No module named http_utils</p> </blockquote> <p>What am I missing? What else do I need to do to register a new module?</p>
0
2016-09-09T10:34:32Z
39,409,980
<p>Make sure the package is set up correctly (include an <code>__init__.py</code> file).</p> <p>Make sure there is no other <code>utils</code> file at the same directory level. That is, if you are doing <code>from utils import http_utils</code> in <code>views.py</code>, there should not be a <code>utils.py</code> in the same folder; that name conflict causes exactly this kind of error.</p> <p>You don't have to add the folder to the <code>INSTALLED_APPS</code> setting, because the <code>utils</code> folder is a package and is available for importing on its own.</p>
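<p>For reference, a layout sketch (the file and decorator names below are placeholders, not taken from the question):</p>

<pre><code># project/
#     manage.py
#     utils/
#         __init__.py        # must exist so that utils/ is treated as a package
#         http_utils.py      # the decorators and HTTP helper functions
#     myapp/
#         views.py

# in myapp/views.py:
from utils.http_utils import some_decorator   # 'some_decorator' is a placeholder name
</code></pre>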
0
2016-09-09T10:55:09Z
[ "python", "django" ]
Django: no module named http_utils
39,409,581
<p>I created a new <code>utils</code> package and an <code>http_utils</code> file with some decorators and HTTP utility functions in there. I imported them wherever I am using them and the IDE reports no errors, and I also added the <code>utils</code> module to the <code>INSTALLED_APPS</code> list.</p> <p>However, when launching the server I am getting an import error:</p> <blockquote> <p>ImportError: No module named http_utils</p> </blockquote> <p>What am I missing? What else do I need to do to register a new module?</p>
0
2016-09-09T10:34:32Z
39,418,949
<p>As stated by Arundas above, since there is a <code>utils.py</code> file I suggest renaming the module to something such as <code>utilities</code>, and also make sure you have an <code>__init__.py</code> file in that directory.</p>

<pre><code>from utilities.http_utils import class_name
</code></pre>
0
2016-09-09T20:00:22Z
[ "python", "django" ]
Correlation heatmap
39,409,866
<p>I want to represent correlation matrix using a heatmap. There is something called <a href="http://www.sthda.com/english/wiki/visualize-correlation-matrix-using-correlogram" rel="nofollow">correlogram</a> in R, but I don't think there's such a thing in Python.</p> <p>How can I do this? The values go from -1 to 1, for example:</p> <pre><code>[[ 1. 0.00279981 0.95173379 0.02486161 -0.00324926 -0.00432099] [ 0.00279981 1. 0.17728303 0.64425774 0.30735071 0.37379443] [ 0.95173379 0.17728303 1. 0.27072266 0.02549031 0.03324756] [ 0.02486161 0.64425774 0.27072266 1. 0.18336236 0.18913512] [-0.00324926 0.30735071 0.02549031 0.18336236 1. 0.77678274] [-0.00432099 0.37379443 0.03324756 0.18913512 0.77678274 1. ]] </code></pre> <p>I was able to produce the following heatmap based on another <a href="http://stackoverflow.com/questions/33282368/plotting-a-2d-heatmap-with-matplotlib">question</a>, but the problem is that my values get 'cut' at 0, so I would like to have a map which goes from blue(-1) to red(1), or something like that, but here values below 0 are not presented in an adequate way.</p> <p><a href="http://i.stack.imgur.com/GSSDb.png" rel="nofollow"><img src="http://i.stack.imgur.com/GSSDb.png" alt="enter image description here"></a></p> <p>Here's the code for that:</p> <pre><code>plt.imshow(correlation_matrix,cmap='hot',interpolation='nearest') </code></pre>
0
2016-09-09T10:48:19Z
39,409,968
<p>You can use <a href="http://matplotlib.org/index.html" rel="nofollow">matplotlib</a> for this. There's a similar question which shows how you can achieve what you want: <a href="http://stackoverflow.com/questions/33282368/plotting-a-2d-heatmap-with-matplotlib">Plotting a 2D heatmap with Matplotlib</a></p>
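<p>For completeness, a minimal example of the kind of matplotlib call involved (the sample matrix is made up; any diverging colormap such as <code>'bwr'</code> or <code>'coolwarm'</code> maps -1 and 1 to opposite colour extremes):</p>

<pre><code>import numpy as np
import matplotlib.pyplot as plt

correlation_matrix = np.array([[ 1.0,  0.5, -0.3],
                               [ 0.5,  1.0,  0.0],
                               [-0.3,  0.0,  1.0]])

plt.imshow(correlation_matrix, cmap='bwr', vmin=-1, vmax=1, interpolation='nearest')
plt.colorbar()
plt.show()
</code></pre>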
1
2016-09-09T10:54:33Z
[ "python" ]
Correlation heatmap
39,409,866
<p>I want to represent correlation matrix using a heatmap. There is something called <a href="http://www.sthda.com/english/wiki/visualize-correlation-matrix-using-correlogram" rel="nofollow">correlogram</a> in R, but I don't think there's such a thing in Python.</p> <p>How can I do this? The values go from -1 to 1, for example:</p> <pre><code>[[ 1. 0.00279981 0.95173379 0.02486161 -0.00324926 -0.00432099] [ 0.00279981 1. 0.17728303 0.64425774 0.30735071 0.37379443] [ 0.95173379 0.17728303 1. 0.27072266 0.02549031 0.03324756] [ 0.02486161 0.64425774 0.27072266 1. 0.18336236 0.18913512] [-0.00324926 0.30735071 0.02549031 0.18336236 1. 0.77678274] [-0.00432099 0.37379443 0.03324756 0.18913512 0.77678274 1. ]] </code></pre> <p>I was able to produce the following heatmap based on another <a href="http://stackoverflow.com/questions/33282368/plotting-a-2d-heatmap-with-matplotlib">question</a>, but the problem is that my values get 'cut' at 0, so I would like to have a map which goes from blue(-1) to red(1), or something like that, but here values below 0 are not presented in an adequate way.</p> <p><a href="http://i.stack.imgur.com/GSSDb.png" rel="nofollow"><img src="http://i.stack.imgur.com/GSSDb.png" alt="enter image description here"></a></p> <p>Here's the code for that:</p> <pre><code>plt.imshow(correlation_matrix,cmap='hot',interpolation='nearest') </code></pre>
0
2016-09-09T10:48:19Z
39,410,167
<ol> <li>Use the 'jet' colormap for a transition between blue and red.</li> <li>Use <code>pcolor()</code> with the <code>vmin</code>, <code>vmax</code> parameters.</li> </ol> <p>It is detailed in this answer: <a href="http://stackoverflow.com/a/3376734/21974">http://stackoverflow.com/a/3376734/21974</a></p>
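<p>A short sketch of those two points together (the matrix here is just random sample data standing in for the question's 6x6 array):</p>

<pre><code>import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
correlation_matrix = np.corrcoef(rng.randn(6, 50))   # values in [-1, 1]

plt.pcolor(correlation_matrix, cmap='jet', vmin=-1, vmax=1)
plt.colorbar()
plt.show()
</code></pre>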
1
2016-09-09T11:06:31Z
[ "python" ]
Exiting out of a program through a menu option
39,409,882
<p>I need to say first and foremost, I am just learning Python. </p> <p>I am making a simple python program that has a menu option for exiting the program by using a function I called exit. I have tried making the exit function just call break, but I am getting an error when the exit function is called.</p> <p>Any help would be greatly appreciated.</p> <p>Sorry for not posting code earlier....</p> <pre><code>def exit(): break evade = evade_fw() # Main program running dialogue def main(): # menu goes here opt_list = [xsl_file, basic_loud_scan, fw_main, exit ] </code></pre>
1
2016-09-09T10:49:24Z
39,410,221
<p>Use the <code>sys.exit</code> function:</p> <pre><code>import sys

def exit():
    sys.exit()
</code></pre> <p>That's the proper way to terminate your program.</p> <p>You can also use <code>os._exit(status)</code> in a similar way, but it terminates the process immediately, without running any cleanup handlers, so <code>sys.exit()</code> is usually the better choice.</p>
0
2016-09-09T11:10:18Z
[ "python", "menu", "exit" ]
Exiting out of a program through a menu option
39,409,882
<p>I need to say first and foremost, I am just learning Python. </p> <p>I am making a simple python program that has a menu option for exiting the program by using a function I called exit. I have tried making the exit function just call break, but I am getting an error when the exit function is called.</p> <p>Any help would be greatly appreciated.</p> <p>Sorry for not posting code earlier....</p> <pre><code>def exit(): break evade = evade_fw() # Main program running dialogue def main(): # menu goes here opt_list = [xsl_file, basic_loud_scan, fw_main, exit ] </code></pre>
1
2016-09-09T10:49:24Z
39,410,356
<p><code>break</code> is for breaking out of <code>for</code> or <code>while</code> loops, but it must be called from within the loop. I'm guessing that you expect the <code>break</code> to break out of your program's main event loop from an event handler, and that is not going to work because, as mentioned, the <code>break</code> must be within the loop itself.</p>

<p>Instead, your exit function can clean up any resources (e.g. open files, database connections, etc.) and then call <code>sys.exit()</code>, which will cause the Python interpreter to terminate. You can optionally pass a status code to <code>sys.exit()</code>, which will be the system exit status available to shell scripts and batch files.</p>

<pre><code>import sys

def exit():
    # clean up resources
    sys.exit() # defaults to status 0 == success
</code></pre>
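<p>To make the distinction concrete, here is a minimal menu loop of the kind described (the option names and the prompt are made up, not taken from the question's code):</p>

<pre><code>import sys

def run_scan():
    print('scanning...')

def quit_program():
    # close files / connections here, then end the interpreter
    sys.exit(0)

def main():
    options = {'1': run_scan, 'q': quit_program}
    while True:                                    # the menu loop
        choice = raw_input('1) scan  q) quit: ')   # use input() on Python 3
        action = options.get(choice)
        if action:
            action()   # quit_program() raises SystemExit, so no break is needed

if __name__ == '__main__':
    main()
</code></pre>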
0
2016-09-09T11:17:05Z
[ "python", "menu", "exit" ]
Exiting out of a program through a menu option
39,409,882
<p>I need to say first and foremost, I am just learning Python. </p> <p>I am making a simple python program that has a menu option for exiting the program by using a function I called exit. I have tried making the exit function just call break, but I am getting an error when the exit function is called.</p> <p>Any help would be greatly appreciated.</p> <p>Sorry for not posting code earlier....</p> <pre><code>def exit(): break evade = evade_fw() # Main program running dialogue def main(): # menu goes here opt_list = [xsl_file, basic_loud_scan, fw_main, exit ] </code></pre>
1
2016-09-09T10:49:24Z
39,411,744
<p>Just forget about your own <code>exit()</code> function. You can simply do:</p> <pre><code>from sys import exit
</code></pre> <p>And the <code>exit()</code> function from the <code>sys</code> module will do the job.<br>
It's also worth knowing what happens under the hood: <code>sys.exit()</code> actually raises a special exception, <code>SystemExit</code>. You can also raise it explicitly, without importing anything:</p>

<pre><code>raise SystemExit()
</code></pre>
0
2016-09-09T12:34:42Z
[ "python", "menu", "exit" ]
pexpect : Is there any way to prevent input overrun while using pexpect?
39,409,892
<p>Using pexpect, I am connecting to the console of a Linux machine with limited capabilities. When I spawn a connection and try to execute a command using send or sendline, I get an error saying <code>"ttyAMA0: 1 input overrun(s)"</code>.</p> <p>This is probably happening because <code>pexpect</code> sends input to the console faster than it can be consumed, leading to an input buffer overrun. If <code>pexpect</code> could somehow slow down the rate at which it sends input to the console, that would prevent the overrun. Is there any parameter which defines the character rate for input to the console?</p> <p>For a similar problem, Tcl expect has the command <code>send_slow</code>, which slows the input rate down to a given value. I would be happy to have any equivalent of <code>send_slow</code> in pexpect.</p> <p>I also tried setting the window size in pexpect, and there is still no change in the error. The error I'm getting is also intermittent.</p>
0
2016-09-09T10:49:53Z
39,796,140
<p>Disclaimer: this is a workaround rather than an actual solution to the buffer overrun problem. I did the following steps:</p> <ol> <li>Before calling the Python script that uses pexpect, set the baud rate to match the console/telnet connection using stty, e.g. <code>stty speed 50</code>.</li> <li>Spawn a new shell in pexpect and set <code>delaybeforesend</code> and <code>delayaftersend</code> to the desired values (only required when your device is too slow).</li> <li>Replace sendline with your own custom <code>sendline_slow</code>, which sends one character at a time:</li> </ol> <pre><code>def sendline_slow(spawn_id, cmd):
    for char in str(cmd):
        spawn_id.send(char)
    spawn_id.send('\n')
</code></pre> <p>Bingo. Now I am able to send commands to the console in super slow fashion!</p>
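<p>For context, roughly how this can be wired together (the host, the prompt pattern and the delay value below are placeholders):</p>

<pre><code>import pexpect

child = pexpect.spawn('telnet 192.0.2.1')   # placeholder target
child.delaybeforesend = 0.2                 # pexpect's built-in pause before every send()

sendline_slow(child, 'ls -l')               # the helper defined above
child.expect(r'\$ ')                        # assumed shell prompt
print(child.before)
</code></pre>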
0
2016-09-30T16:32:41Z
[ "python", "linux", "expect", "tty", "pexpect" ]
Sentence clustering
39,409,981
<p>I have a huge number of names from different sources.</p>

<ol>
<li>I need to extract all the groups (parts of the names) that repeat from one name to another. In the example below the program should locate: Post, Office, Post Office.</li>
<li>I need to get a popularity count.</li>
</ol>

<p>So I want to extract a list of phrases sorted by popularity.</p>

<p>Here is an example of names:</p>

<pre><code>Post Office - High Littleton
Post Office Pilton Outreach Services
Town Street Post Office
post office St Thomas
</code></pre>

<p>Basically, I need to find an algorithm or a suitable library to get results like these:</p>

<pre><code>Post Office: 16999
Post: 17934
Office: 16999
Tesco: 7300
...
</code></pre>

<p>Here is the full <a href="https://gist.github.com/maZahaca/a54046a4cc7ab27f9d06751b89aa7446" rel="nofollow">example of names</a>.</p>

<p>I wrote code that works for single words, but not for multi-word phrases:</p>

<pre><code>from textblob import TextBlob
import operator

title_file = open("names.txt", 'r')
blob = TextBlob(title_file.read())
list = sorted(blob.word_counts.items(), key=operator.itemgetter(1))
print list
</code></pre>
-1
2016-09-09T10:55:12Z
39,425,953
<p>You are not looking for clustering (and that is probably why "all of them suck" for @andrewmatte).</p>

<p>What you are looking for is <em>word counting</em> (or, more precisely, n-gram counting), which is actually a much easier problem. That is why you won't find a dedicated library for it...</p>

<p>Well, actually you have some libraries. In Python, for example, the <code>collections</code> module has the <code>Counter</code> class, which provides much of the reusable code.</p>

<p>Untested, very basic code:</p>

<pre><code>from collections import Counter

counter = Counter()
for s in sentences:
    words = s.split(" ")
    for i in range(len(words)):
        counter[words[i]] += 1                      # count single words
        if i &gt; 0:
            counter[(words[i - 1], words[i])] += 1  # count word pairs (bigrams)
</code></pre>

<p>You can get the most frequent entries from <code>counter</code> (e.g. via <code>counter.most_common()</code>). If you want words and word pairs kept separate, feel free to use two counters. If you need longer phrases, add an inner loop. You may also want to clean the sentences (e.g. lowercase them) and use a regexp for splitting.</p>
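<p>Applied to the question's <code>names.txt</code>, the same idea could look roughly like this (the regexp and the cleaning are just one reasonable choice, not the only one):</p>

<pre><code>import re
from collections import Counter

counter = Counter()
with open('names.txt') as f:
    for line in f:
        words = re.findall(r"[a-z']+", line.lower())   # lowercase + regexp split
        counter.update(words)                          # single words
        counter.update(zip(words, words[1:]))          # word pairs such as ('post', 'office')

for phrase, count in counter.most_common(10):
    print('{}: {}'.format(phrase, count))
</code></pre>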
0
2016-09-10T12:11:54Z
[ "python", "machine-learning", "cluster-analysis" ]
Sentence clustering
39,409,981
<p>I have a huge number of names from different sources.</p>

<ol>
<li>I need to extract all the groups (parts of the names) that repeat from one name to another. In the example below the program should locate: Post, Office, Post Office.</li>
<li>I need to get a popularity count.</li>
</ol>

<p>So I want to extract a list of phrases sorted by popularity.</p>

<p>Here is an example of names:</p>

<pre><code>Post Office - High Littleton
Post Office Pilton Outreach Services
Town Street Post Office
post office St Thomas
</code></pre>

<p>Basically, I need to find an algorithm or a suitable library to get results like these:</p>

<pre><code>Post Office: 16999
Post: 17934
Office: 16999
Tesco: 7300
...
</code></pre>

<p>Here is the full <a href="https://gist.github.com/maZahaca/a54046a4cc7ab27f9d06751b89aa7446" rel="nofollow">example of names</a>.</p>

<p>I wrote code that works for single words, but not for multi-word phrases:</p>

<pre><code>from textblob import TextBlob
import operator

title_file = open("names.txt", 'r')
blob = TextBlob(title_file.read())
list = sorted(blob.word_counts.items(), key=operator.itemgetter(1))
print list
</code></pre>
-1
2016-09-09T10:55:12Z
39,426,291
<p>Are you looking for something like this?</p>

<pre><code>workspace = {}
with open('names.txt', 'r') as f:
    for name in f:
        name = name.strip()          # drop the trailing newline
        if name:                     # makes sure the line isn't empty
            if name in workspace:
                workspace[name] += 1
            else:
                workspace[name] = 1

for name in workspace:
    print "{}: {}".format(name, workspace[name])
</code></pre>
0
2016-09-10T12:56:51Z
[ "python", "machine-learning", "cluster-analysis" ]
PyQT: PushButton receives commands while disabled
39,410,037
<p>I came across this problem and tried to break it down to the simplest code I could imagine: I created a GUI using Qt Designer V4.8.7 that only consists of a single pushButton with all default settings. It's called 'Test2.ui'. When the button is pressed, it's supposed to get disabled, print something in the terminal and afterwards gets enabled again. What happens is that I'm able to click on the disabled pushButton and it will repeat all the printing as many times as I clicked. This even works when I set the button invisible instead of disable it. I found similar problems on the internet, but none of the solutions seems to work for me – it's driving me mad. Anyone has an idea?</p> <pre><code>from __future__ import division, print_function import sys from PyQt4 import QtCore, QtGui, uic from PyQt4.QtCore import QTimer from time import sleep qtCreatorFile = "Test2.ui" # Enter file here. Ui_MainWindow, QtBaseClass = uic.loadUiType(qtCreatorFile) class MyApp(QtGui.QMainWindow, Ui_MainWindow): def __init__(self): QtGui.QMainWindow.__init__(self) Ui_MainWindow.__init__(self) self.setupUi(self) self.pushButton.pressed.connect(self.Test_Function) def Test_Function(self): self.pushButton.setEnabled(False) QtGui.QApplication.processEvents() print('Test 1') sleep(1) print('Test 2') sleep(1) self.pushButton.setEnabled(True) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) window = MyApp() window.show() sys.exit(app.exec_()) </code></pre> <p>Here is the code for 'Test2.ui'</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;ui version="4.0"&gt; &lt;class&gt;MainWindow&lt;/class&gt; &lt;widget class="QMainWindow" name="MainWindow"&gt; &lt;property name="geometry"&gt; &lt;rect&gt; &lt;x&gt;0&lt;/x&gt; &lt;y&gt;0&lt;/y&gt; &lt;width&gt;445&lt;/width&gt; &lt;height&gt;393&lt;/height&gt; &lt;/rect&gt; &lt;/property&gt; &lt;property name="mouseTracking"&gt; &lt;bool&gt;false&lt;/bool&gt; &lt;/property&gt; &lt;property name="focusPolicy"&gt; &lt;enum&gt;Qt::ClickFocus&lt;/enum&gt; &lt;/property&gt; &lt;property name="acceptDrops"&gt; &lt;bool&gt;false&lt;/bool&gt; &lt;/property&gt; &lt;property name="windowTitle"&gt; &lt;string&gt;MainWindow&lt;/string&gt; &lt;/property&gt; &lt;widget class="QWidget" name="centralwidget"&gt; &lt;widget class="QPushButton" name="pushButton"&gt; &lt;property name="enabled"&gt; &lt;bool&gt;true&lt;/bool&gt; &lt;/property&gt; &lt;property name="geometry"&gt; &lt;rect&gt; &lt;x&gt;160&lt;/x&gt; &lt;y&gt;150&lt;/y&gt; &lt;width&gt;75&lt;/width&gt; &lt;height&gt;23&lt;/height&gt; &lt;/rect&gt; &lt;/property&gt; &lt;property name="text"&gt; &lt;string&gt;Test&lt;/string&gt; &lt;/property&gt; &lt;/widget&gt; &lt;/widget&gt; &lt;widget class="QMenuBar" name="menubar"&gt; &lt;property name="geometry"&gt; &lt;rect&gt; &lt;x&gt;0&lt;/x&gt; &lt;y&gt;0&lt;/y&gt; &lt;width&gt;445&lt;/width&gt; &lt;height&gt;22&lt;/height&gt; &lt;/rect&gt; &lt;/property&gt; &lt;/widget&gt; &lt;widget class="QStatusBar" name="statusbar"/&gt; &lt;/widget&gt; &lt;resources/&gt; &lt;connections/&gt; &lt;/ui&gt; </code></pre>
1
2016-09-09T10:58:11Z
39,419,176
<p>Your example code will not work because the test function blocks the gui. Whilst it is blocking, the button's disabled state is not updated properly, and so the <code>clicked</code> signal can still be emitted. The best way to avoid blocking the gui is to do the work in a separate thread:</p> <pre><code>class Worker(QtCore.QObject): finished = QtCore.pyqtSignal() def run(self): print('Test 1') sleep(1) print('Test 2') sleep(1) self.finished.emit() class MyApp(QtGui.QMainWindow, Ui_MainWindow): def __init__(self): ... self.pushButton.pressed.connect(self.handleButton) self.thread = QtCore.QThread(self) self.worker = Worker() self.worker.moveToThread(self.thread) self.worker.finished.connect(self.handleFinished) self.thread.started.connect(self.worker.run) def handleButton(self): self.pushButton.setEnabled(False) self.thread.start() def handleFinished(self): self.thread.quit() self.thread.wait() self.pushButton.setEnabled(True) </code></pre>
0
2016-09-09T20:21:41Z
[ "python", "qt", "pyqt", "pyqt4", "qpushbutton" ]
Reading fortran binary (streaming access) with np.fromfile or open & struct
39,410,053
<p>The following Fortran code:</p> <pre><code>INTEGER*2 :: i, Array_A(32) Array_A(:) = (/ (i, i=0, 31) /) OPEN (unit=11, file = 'binary2.dat', form='unformatted', access='stream') Do i=1,32 WRITE(11) Array_A(i) End Do CLOSE (11) </code></pre> <p>Produces streaming binary output with numbers from 0 to 31 in integer 16bit. Each record is taking up 2 bytes, so they are written at byte 1, 3, 5, 7 and so on. The access='stream' suppresses the standard header of Fortran for each record (I need to do that to keep the files as tiny as possible).</p> <p>Looking at it with a Hex-Editor, I get:</p> <pre><code>00 00 01 00 02 00 03 00 04 00 05 00 06 00 07 00 08 00 09 00 0A 00 0B 00 0C 00 0D 00 0E 00 0F 00 10 00 11 00 12 00 13 00 14 00 15 00 16 00 17 00 18 00 19 00 1A 00 1B 00 1C 00 1D 00 1E 00 1F 00 </code></pre> <p>which is completely fine (despite the fact that the second byte is never used, because decimals are too low in my example).</p> <p>Now I need to import these binary files into Python 2.7, but I can't. I tried many different routines, but I always fail in doing so.</p> <p><strong>1. attempt:</strong> "np.fromfile"</p> <pre><code>with open("binary2.dat", 'r') as f: content = np.fromfile(f, dtype=np.int16) </code></pre> <p>returns</p> <pre><code>[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 0 0 26104 1242 0 0] </code></pre> <p><strong>2. attempt:</strong> "struct"</p> <pre><code>import struct with open("binary2.dat", 'r') as f: content = f.readlines() struct.unpack('h' * 32, content) </code></pre> <p>delivers </p> <pre><code>struct.error: unpack requires a string argument of length 64 </code></pre> <p>because</p> <pre><code>print content ['\x00\x00\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08\x00\t\x00\n', '\x00\x0b\x00\x0c\x00\r\x00\x0e\x00\x0f\x00\x10\x00\x11\x00\x12\x00\x13\x00\x14\x00\x15\x00\x16\x00\x17\x00\x18\x00\x19\x00'] </code></pre> <p>(note the delimiter, the t and the n which shouldn't be there according to what Fortran's "streaming" access does)</p> <p><strong>3. attempt:</strong> "FortranFile"</p> <pre><code>f = FortranFile("D:/Fortran/Sandbox/binary2.dat", 'r') print(f.read_ints(dtype=np.int16)) </code></pre> <p>With the error:</p> <pre><code>TypeError: only length-1 arrays can be converted to Python scalars </code></pre> <p>(remember how it detected a delimiter in the middle of the file, but it would also crash for shorter files without line break (e.g. decimals from 0 to 8))</p> <p><strong>Some additional thoughts:</strong></p> <p>Python seems to have troubles with reading parts of the binary file. For <code>np.fromfile</code> it reads <code>Hex 19</code> (dec: 25), but crashes for <code>Hex 1A</code> (dec: 26). It seems to be confused with the letters, although 0A, 0B ... work just fine.</p> <p>For attempt 2 the <code>content</code>-result is weird. Decimals 0 to 8 work fine, but then there is this strange <code>\t\x00\n</code> thing. What is it with <code>hex 09</code> then?</p> <p>I've been spending hours trying to find the logic, but I'm stuck and really need some help. Any ideas?</p>
1
2016-09-09T10:59:25Z
39,410,638
<p>The problem is the file mode passed to <code>open</code>. By default it is text mode; change it to binary mode:</p> <pre><code>with open("binary2.dat", 'rb') as f:
    content = np.fromfile(f, dtype=np.int16)
</code></pre> <p>and all the numbers will be read successfully. See the Dive Into Python 3 chapter <a href="http://www.diveintopython3.net/files.html#binary" rel="nofollow">Binary Files</a> for more details.</p>
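<p>The same <code>'rb'</code> change also fixes the <code>struct</code> attempt. A minimal sketch, assuming the file really is the 64-byte little-endian stream shown in the hex dump:</p>

<pre><code>import struct

with open("binary2.dat", 'rb') as f:
    data = f.read()                       # 64 bytes = 32 two-byte integers

values = struct.unpack('&lt;32h', data)      # '&lt;' = little-endian, 'h' = INTEGER*2 / int16
print(values)                             # (0, 1, 2, ..., 31)
</code></pre>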
3
2016-09-09T11:32:53Z
[ "python", "io", "binary", "hex" ]