Import Error: Pillow 3.3.1 & Python 2.7 & El Capitan OS
39,472,379
<p>I have installed the Pillow package from PIP using <code>pip install Pillow</code> and Pillow 3.3.1 got installed. I am working with Python 2.7 on Mac OS 10.11 (El Capitan).</p> <p>When I try to import the Image module, I run into <code>ImportError: No module named Pillow</code>. I tried to import the following:</p> <ul> <li><code>import Pillow</code></li> <li><code>import Image</code></li> <li><code>import Pillow.Image</code></li> </ul> <p>All return the same <code>ImportError</code>.</p> <p>What is missing?</p>
1
2016-09-13T14:21:07Z
39,472,408
<p>Reinstall the package properly using <code>python -m pip install Pillow</code>.</p> <p>Then import it using <code>from PIL import Image</code>. The Pillow distribution installs its code under the <code>PIL</code> package name, which is why <code>import Pillow</code> fails.</p>
2
2016-09-13T14:22:47Z
[ "python", "osx", "python-2.7", "pillow" ]
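The root cause can be checked without Pillow even being installed: the PyPI distribution is named Pillow, but the importable package it installs is `PIL`, so a top-level module called `Pillow` never exists. A small stdlib check:

```python
import importlib.util

def importable(module_name):
    """Return True if `module_name` resolves to an importable module."""
    return importlib.util.find_spec(module_name) is not None

# The Pillow distribution installs its code under the package name PIL
# (for backwards compatibility with the original Python Imaging Library),
# so `import Pillow` fails whether or not Pillow is installed.
print(importable("Pillow"))  # False
```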
Selenium scrolling internal scroll bar and scraping results
39,472,394
<p>I'm trying to scrape <a href="http://comparefirst.sg/wap/productsListEvent.action?prodGroup=whole&amp;pageAction=prodlisting" rel="nofollow">this website</a> for my project to populate a list of insurance products available.</p> <p>However, the website has an internal scrolling bar, that only displays the first 10 items onto the page, and would only bring new elements onto display when you scroll that internal bar downwards.</p> <p>How do I </p> <ul> <li>Use python <code>Selenium</code> to scroll that internal bar downwards? Can't seem to find much information of that around.</li> <li>How do I use <code>Selenium</code> to retrieve the <code>Company Name, Product Name, Paymode, product features (if active)</code> and return a <code>pandas Dataframe</code>?</li> </ul>
1
2016-09-13T14:21:59Z
39,472,512
<p>Interesting thing is, <em>you don't need to scroll the container at all.</em> All the results are actually loaded, but part of them are just invisible. You can simply find all <code>li</code> elements with <code>result_content</code> class and get the desired data.</p> <p>Example working code extracting the "prod names":</p> <pre><code>from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium import webdriver driver = webdriver.Chrome("/usr/local/bin/chromedriver") driver.maximize_window() driver.get("http://comparefirst.sg/wap/productsListEvent.action?prodGroup=whole&amp;pageAction=prodlisting") wait = WebDriverWait(driver, 10) wait.until(EC.visibility_of_element_located((By.ID, "result_container"))) results = driver.find_elements_by_css_selector("li.result_content") for result in results: prod_name = result.find_element_by_id('sProdName').get_attribute("innerText") print(prod_name) driver.close() </code></pre> <p>Prints:</p> <pre><code>AIA Gen3 (II) AIA Guaranteed Protect Plus AIA Guaranteed Protect Plus ... DIRECT- TM Basic Whole Life DIRECT- TM Basic Whole Life (+ Critical Illness) TM Legacy TM Legacy (+ Critical Illness) TM Legacy LifeFlex TM Legacy LifeFlex (+ Critical Illness) TM Retirement GIO TM Retirement PaycheckLife (Single Life) </code></pre> <p>Note that we have to use <code>.get_attribute("innerText")</code> instead of <code>.text</code> since the latter would return the visible text only while most of our elements are invisible.</p>
2
2016-09-13T14:28:06Z
[ "python", "selenium" ]
django-tables2 add dynamic columns to table class from hstore
39,472,441
<p>My general question is: can I use the data stored in a <strong><a href="https://docs.djangoproject.com/en/1.8/ref/contrib/postgres/fields/#django.contrib.postgres.fields.HStoreField" rel="nofollow">HStoreField</a></strong> (Django 1.8.9) to generate columns dynamically for an existing <strong>Table</strong> class of <strong><a href="https://django-tables2.readthedocs.io/en/latest/index.html" rel="nofollow">django-tables2</a></strong>? As an example below, say I have a model:</p> <pre><code>from django.contrib.postgres import fields as pgfields GameSession(models.Model): user = models.ForeignKey('profile.GamerProfile') game = models.ForeignKey('games.Game') last_achievement = models.ForeignKey('games.Achievement') extra_info = pgfields.HStoreField(null=True, blank=True) </code></pre> <p>Now, say I have a table defined as:</p> <pre><code>GameSessionTable(tables.Table): class Meta(BaseMetaTable): model = GameSession fields = [] orderable=False id = tables.LinkColumn(accessor='id', verbose_name='Id', viewname='reporting:session_stats', args=[A('id')], attrs={'a':{'target':'_blank'}}) started = DateTimeColumn(accessor='startdata.when_started', verbose_name='Started') stopped = DateTimeColumn(accessor='stopdata.when_stopped', verbose_name='Stopped') game_name = tables.LinkColumn(accessor='game.name', verbose_name='Game name', viewname='reporting:game_stats', args=[A('mainjob.id')], attrs={'a':{'target':'_blank'}}) </code></pre> <p>I want to be able to add columns for each of the keys stored in the <em>extra_info</em> column for all of the <code>GameSession</code>s. I have tried to override the <strong>init</strong>() method of the GameSessionTable class, where I have access to the queryset, then make a set of all the keys of my <code>GameSession</code> objects, then add them to <code>self</code>, however that doesn't seem to work. 
Code below:</p> <pre><code>def __init__(self, data, *args, **kwargs): super(GameSessionTable, self).__init__(data, *args, **kwargs) if data: extra_cols=[] # just to be sure, check that the model has the extra_info HStore field if data.model._meta.get_field('extra_info'): extra_cols = list(set([item for q in data if q.extra_info for item in q.extra_info.keys()])) for col in extra_cols: self.columns.columns[col] = tables.Column(accessor='extra_info.%s' %col, verbose_name=col.replace("_", " ").title()) </code></pre> <p>Just a mention, I have had a look at <a href="https://spapas.github.io/2015/10/05/django-dynamic-tables-similar-models/#introduction" rel="nofollow">https://spapas.github.io/2015/10/05/django-dynamic-tables-similar-models/#introduction</a> but it's not been much help because the use case there is related to the fields of a model, whereas my situation is slightly different as you can see above.</p> <p>Just wanted to check, is this even possible or do I have to define an entirely different table for this data, or potentially use an entirely different library altogether like <a href="https://django-report-builder.readthedocs.io/en/latest/" rel="nofollow">django-reports-builder</a>?</p>
0
2016-09-13T14:24:43Z
39,494,326
<p>Managed to figure this out to a certain extent. The code I was running above was <em>slightly</em> wrong, so I updated it to run my code before the superclass <em>init()</em> gets run, and changed where I was adding the columns.</p> <p>As a result, my <strong>init()</strong> function now looks like this:</p> <pre><code>def __init__(self, data, *args, **kwargs): if data: extra_cols=[] # just to be sure, check that the model has the extra_info HStore field if data.model._meta.get_field('extra_info'): extra_cols = list(set([item for q in data if q.extra_info for item in q.extra_info.keys()])) for col in extra_cols: self.base_columns[col] = tables.Column(accessor='extra_info.%s' %col, verbose_name=col.replace("_", " ").title()) super(GameSessionTable, self).__init__(data, *args, **kwargs) </code></pre> <p>Note that I replaced <em>self.columns.columns</em> (which were BoundColumn instances) with <strong>self.base_columns</strong>. This allows the superclass to then consider these as well when initializing the <code>Table</code> class.</p> <p>Might not be the most elegant solution, but it seems to do the trick for me.</p>
0
2016-09-14T15:28:48Z
[ "python", "django", "hstore", "django-tables2" ]
Authentication With Imgur API
39,472,518
<p>So I'm writing a simple-ish script that can automatically download images from Imgur. I've come across the Imgur API but am struggling to get it to work. I registered an app but am not sure how to use it to be able to get information about images or albums. I do not want to be able to log in as a user or anything like that - just provide a URL of an album or a single image and be able to download it. </p> <p>I've read that if I want to do this then I don't need to use the oauth stuff, I should just be able to use a client ID.</p> <p><a href="https://www.reddit.com/r/learnprogramming/comments/2uzxfv/how_do_i_get_fully_authenticated_to_use_imgurs_api/" rel="nofollow">https://www.reddit.com/r/learnprogramming/comments/2uzxfv/how_do_i_get_fully_authenticated_to_use_imgurs_api/</a></p> <p>The script I am writing is using Python, but just to test out the API I am typing the URL into the browser. If I go to the following URL:</p> <p><a href="https://api.imgur.com/3/album/qTt8G?client_id=MY_CLIENT_ID" rel="nofollow">https://api.imgur.com/3/album/qTt8G?client_id=MY_CLIENT_ID</a></p> <p>Then I receive the following response:</p> <blockquote> <p>{"data":{"error":"Authentication required","request":"/3/album/qTt8G","method":"GET"},"success":false,"status":401}</p> </blockquote> <p>The full album URL is <a href="https://imgur.com/a/qTt8G" rel="nofollow">https://imgur.com/a/qTt8G</a> I've tried reading through the API docs, but am stuck with this.</p> <p>Useful info:</p> <p><a href="https://api.imgur.com/oauth2" rel="nofollow">https://api.imgur.com/oauth2</a></p> <p><a href="https://api.imgur.com/endpoints/album" rel="nofollow">https://api.imgur.com/endpoints/album</a></p>
1
2016-09-13T14:28:17Z
39,499,615
<p>In the end I just used the Imgur API helper that is available on Github to do all the work. Just needed to provide the client ID and secret.</p> <p><a href="https://github.com/Imgur/imgurpython" rel="nofollow">https://github.com/Imgur/imgurpython</a></p>
0
2016-09-14T21:04:15Z
[ "python", "api", "imgur" ]
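For anonymous (non-OAuth) access, Imgur expects the client ID in an `Authorization: Client-ID <id>` request header rather than in a URL parameter, which is consistent with the 401 above. A stdlib sketch of how such a request would be built; the request is only constructed here, not sent, and `MY_CLIENT_ID` is a placeholder:

```python
import urllib.request

CLIENT_ID = "MY_CLIENT_ID"  # placeholder for your registered app's client ID

# Anonymous Imgur API calls carry the client ID in an Authorization
# header of the form "Client-ID <id>" instead of a query parameter.
req = urllib.request.Request(
    "https://api.imgur.com/3/album/qTt8G",
    headers={"Authorization": "Client-ID %s" % CLIENT_ID},
)

# urllib.request.urlopen(req) would perform the GET; here we only
# inspect the header that authenticates the request.
print(req.get_header("Authorization"))  # Client-ID MY_CLIENT_ID
```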
Stop embedded Python prompt from C++
39,472,584
<p>I'm running Python embedded in a C++ application. The program consists of a Qt GUI and a work QThread where computations happen. The user can chose to run a Python script from a file or to start a Python prompt. Both run in the QThread. The program exits when the python script is done or when we exit the python prompt. However, I want to handle the case when the user requests to quit from the GUI. </p> <p>If running a Python script I can achieve this by a call to <code>PyErr_SetInterrupt</code>(see <a href="http://stackoverflow.com/questions/1420957/stopping-embedded-python">Stopping embedded Python</a>). The Python interpreter stops, and we can quit normally. However, I haven't found a good way to force the Python prompt to quit.</p> <p>I've tried feeding characters to <code>stdin</code> (with <code>ungetc</code>) in the hopes that the Python prompt receives them but without success. I do this from an overloaded <code>PyOS_InputHook</code> that I use to catch Qt events while the Python prompt is running.</p> <p>Any ideas how to properly force the Python prompt to exit?</p>
1
2016-09-13T14:31:23Z
39,628,593
<p>Can you not set up a slot receiver on your <code>QThread</code> subclass that will call <code>PyErr_SetInterrupt</code> in the proper context for you?</p> <p>You might achieve cleaner separation of concerns if, instead of using QThreads, you run your embedded Python interpreter in a separate process, which you can then <code>kill</code> as you see fit. At least you can if you’re on a POSIX-y platform; this type of operation is quite different on e.g. Windows.</p>
1
2016-09-22T00:49:28Z
[ "python", "c++", "python-c-api", "embedded-language" ]
Sort index alphabetically on first, second, third characters
39,472,601
<p>I have a df which looks like this:</p> <pre><code>df = pd.DataFrame({'val': [0, 0, 0, 1, 0, 0, 0]}, index=['13th str', '3SAT', 'ARD', 'ARD Dritte', 'AXNAction', 'Animal', 'bb']) val 13th str 0 3SAT 0 ARD 0 ARD Dritte 1 AXNAction 0 Animal 0 bb 0 </code></pre> <p>I would like to sort it to look like this, </p> <pre><code> val 13th str 0 3SAT 0 Animal 0 ARD 0 ARD Dritte 1 AXNAction 0 bb 0 </code></pre> <p>note - 'Animal' has shifted places.</p> <p>If all the first letters are the same then look at the next character and so on. </p> <p>Here is what I have tried which has not worked:</p> <pre><code>df.sort() df = df.sort_index() df = df.index.sort_values() #gives an 'Index' object has no attribute 'sort_values' error </code></pre>
1
2016-09-13T14:32:39Z
39,487,638
<p>Your index is in fact sorted correctly: uppercase characters sort before lowercase, which is why your attempts appeared to fail. To sort the way you want, you can add a temporary column with the lower-case index values, sort by this column and then drop it:</p> <pre><code>In [155]: df['labels'] = df.index.str.lower() df = df.sort_values('labels').drop('labels', axis=1) df Out[155]: val 13th str 0 3SAT 0 Animal 0 ARD 0 ARD Dritte 1 AXNAction 0 bb 0 </code></pre>
2
2016-09-14T10:00:14Z
[ "python", "sorting", "pandas", "dataframe" ]
Sort index alphabetically on first, second, third characters
39,472,601
<p>I have a df which looks like this:</p> <pre><code>df = pd.DataFrame({'val': [0, 0, 0, 1, 0, 0, 0]}, index=['13th str', '3SAT', 'ARD', 'ARD Dritte', 'AXNAction', 'Animal', 'bb']) val 13th str 0 3SAT 0 ARD 0 ARD Dritte 1 AXNAction 0 Animal 0 bb 0 </code></pre> <p>I would like to sort it to look like this, </p> <pre><code> val 13th str 0 3SAT 0 Animal 0 ARD 0 ARD Dritte 1 AXNAction 0 bb 0 </code></pre> <p>note - 'Animal' has shifted places.</p> <p>If all the first letters are the same then look at the next character and so on. </p> <p>Here is what I have tried which has not worked:</p> <pre><code>df.sort() df = df.sort_index() df = df.index.sort_values() #gives an 'Index' object has no attribute 'sort_values' error </code></pre>
1
2016-09-13T14:32:39Z
39,487,676
<p>You can reorder the rows <a href="https://wiki.python.org/moin/HowTo/Sorting" rel="nofollow">using a custom <code>key</code> function</a> together with <code>reindex</code>. Note that assigning a sorted list directly to <code>df.index</code> would only relabel the rows in place without moving the values, so use <code>reindex</code> to move each row with its label:</p> <pre><code>In [22]: df = pd.DataFrame({'val': [0, 0, 0, 1, 0, 0, 0]}, index=['13th str', '3SAT', 'ARD', 'ARD Dritte', 'AXNAction', 'Animal', 'bb']) In [23]: df Out[23]: val 13th str 0 3SAT 0 ARD 0 ARD Dritte 1 AXNAction 0 Animal 0 bb 0 In [24]: df = df.reindex(sorted(df.index, key=lambda s: s.lower())) In [25]: df Out[25]: val 13th str 0 3SAT 0 Animal 0 ARD 0 ARD Dritte 1 AXNAction 0 bb 0 </code></pre>
2
2016-09-14T10:02:29Z
[ "python", "sorting", "pandas", "dataframe" ]
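Both answers reduce to the same idea: compare lowercased labels while keeping the original strings. Stripped of pandas, the key trick looks like this:

```python
# Case-insensitive ordering: lowercase each label only for comparison,
# leaving the original strings intact in the result.
labels = ['13th str', '3SAT', 'ARD', 'ARD Dritte', 'AXNAction', 'Animal', 'bb']

ordered = sorted(labels, key=str.lower)
print(ordered)
# ['13th str', '3SAT', 'Animal', 'ARD', 'ARD Dritte', 'AXNAction', 'bb']
```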
How to concat and update pandas dataframes
39,472,712
<p>I'd like to concat the dataframes <strong>df1</strong> and <strong>df2</strong> so that the result is the dataframe <strong>df</strong>:</p> <pre><code>df1 = pd.DataFrame([ {"id": 1, "a": 1, "b": 1}, {"id": 2, "a": 2, "b": 2}, ]) df2 = pd.DataFrame([ {"id": 1, "a": 5, "b": 5}, {"id": 3, "a": 6, "b": 6} ]) df = pd.DataFrame([ {"id": 1, "a": 5, "b": 5}, {"id": 2, "a": 2, "b": 2}, {"id": 3, "a": 6, "b": 6} ]) </code></pre> <p>As you can see, the rows with the same <strong>id</strong> are updated.</p>
-1
2016-09-13T14:37:44Z
39,473,303
<ol> <li>Concatenate</li> <li>Remove duplicates</li> </ol> <hr> <pre><code>df1 = pd.DataFrame([ {"id": 1, "a": 1, "b": 1}, {"id": 2, "a": 2, "b": 2}, ]) df2 = pd.DataFrame([ {"id": 1, "a": 5, "b": 5}, {"id": 3, "a": 6, "b": 6} ]) print (pd.concat([df1.set_index('id'), df2.set_index('id')]) .reset_index() .drop_duplicates(subset='id', keep='last') .set_index('id') .sort_index()) </code></pre> <hr> <p>Output:</p> <pre><code> a b id 1 5 5 2 2 2 3 6 6 </code></pre>
1
2016-09-13T15:05:12Z
[ "python", "pandas" ]
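The `keep='last'` trick in the answer implements an upsert: later rows with a duplicate `id` replace earlier ones. The same semantics in plain Python dicts, for reference:

```python
# Rows keyed by id; dict.update gives df2's rows priority over df1's.
df1_rows = [{"id": 1, "a": 1, "b": 1}, {"id": 2, "a": 2, "b": 2}]
df2_rows = [{"id": 1, "a": 5, "b": 5}, {"id": 3, "a": 6, "b": 6}]

merged = {row["id"]: row for row in df1_rows}
merged.update({row["id"]: row for row in df2_rows})  # same id: df2 wins

result = [merged[key] for key in sorted(merged)]
print(result)
# [{'id': 1, 'a': 5, 'b': 5}, {'id': 2, 'a': 2, 'b': 2}, {'id': 3, 'a': 6, 'b': 6}]
```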
GoogleAppEngine access app.yaml contents inside the app
39,472,839
<p>I have a python GAE app. Inside my <code>webapp2</code> code I would like to access some of the properties defined in the <code>app.yaml</code>.</p> <p>I know it's possible to export environment variables and access them inside my python app using <code>os.environ</code>, but is there a way to directly access <code>app.yaml</code> contents without exporting environment variables?</p>
1
2016-09-13T14:44:25Z
39,473,218
<p>You could simply do:</p> <pre><code>import logging import yaml with open('app.yaml') as fd: data = yaml.safe_load(fd) logging.info('data=%s' % data) </code></pre> <p>Using <code>yaml.safe_load</code> instead of <code>yaml.load</code> avoids executing arbitrary YAML tags.</p>
2
2016-09-13T15:00:49Z
[ "python", "google-app-engine" ]
django-positions - multi-table model inheritance using parent_link
39,472,867
<p>Using <a href="https://github.com/jpwatts/django-positions">https://github.com/jpwatts/django-positions</a>,</p> <p>I have a few models that inherit from a parent one, for example:</p> <pre><code>class ContentItem(models.Model): class Meta: ordering = ['position'] content_group = models.ForeignKey(ContentGroup) position = PositionField(collection='content_group', parent_link='contentitem_ptr') class Text(ContentItem): title = models.CharField(max_length=500, unique=False, null=True, blank=True) </code></pre> <p>I understand I need to <a href="https://github.com/jpwatts/django-positions/blob/master/positions/examples/school/models.py">use the parent_link argument</a> (<a href="https://github.com/jpwatts/django-positions">here's the documentation</a>). But I get this error when I use it:</p> <pre><code>websites.Text: (models.E015) 'ordering' refers to the non-existent field 'position'. </code></pre> <p>When using the <code>parent_link</code> argument it's as if the <code>position</code> field has been deleted out of the model completely. I've tried various field names such as <code>contentitem_ptr_id</code> (the actual name of the linking field), but no luck. Anything identifiable I'm doing wrong here?</p>
5
2016-09-13T14:45:45Z
39,475,909
<p><code>class Meta:</code> should come after your field definitions.</p>
-1
2016-09-13T17:35:47Z
[ "python", "django" ]
Fast way to get edges crossing two sets of nodes in networkx.Graph
39,472,910
<p>What's the fastest way in <code>networkx</code> to get the crossing edges between two disjoint node sets? Is there some ready-made function to use?</p> <p>The way I am using now:</p> <pre><code>import networkx as nx from itertools import product A = set(range(50)) B = set(range(50, 100)) g = nx.complete_graph(100) cross_edges = [(n1, n2) for n1, n2 in product(A, B) if n1 in g.adj and n2 in g.adj[n1]] </code></pre>
0
2016-09-13T14:47:42Z
39,483,728
<p>It depends on assumptions about the graph.</p> <p>If the graph is dense, your approach is optimal, since the set of result edges is almost the same as <code>product(A, B)</code>. In that case it is good to iterate through all possible edges (<code>product(A, B)</code>) and check whether each one is an edge.</p> <p>If the graph is sparse, it would be faster to iterate through existing edges and check for edges between A and B. Something like:</p> <pre><code>Bs = set(B) # 'in' operator is faster for sets result = [] for n1 in A: # Iterate through the first set for n2 in g.adj[n1]: # Then through edges connected to that node if n2 in Bs: # Check whether it is an edge between A and B result.append((n1, n2)) </code></pre> <p>A possible optimization is to let A be the smaller of the two input sets.</p>
0
2016-09-14T06:23:29Z
[ "python", "graph-theory", "networkx" ]
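The sparse-graph strategy from the answer works on any adjacency mapping; here it is on a plain dict so it runs without networkx:

```python
# Toy adjacency mapping (undirected edges stored in both directions).
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0},
}
A = {0, 1}
B = {2, 3}

# Iterate only over edges incident to A and keep those ending in B;
# for sparse graphs this visits far fewer pairs than product(A, B).
cross = sorted((n1, n2) for n1 in A for n2 in adj[n1] if n2 in B)
print(cross)  # [(0, 2), (0, 3), (1, 2)]
```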
Mac Terminal Encoding Issues
39,472,917
<p>I have been dealing with an issue regarding the terminal in my Macbook. I am passing greek words in a python string e.g. </p> <pre><code>text = 'Καλημέρα κόσμε' </code></pre> <p>and every time I try to perform any simple task to it like splitting in spaces the result I get looks like this:</p> <pre><code>['\xce\x9a\xce\xb1\xce\xbb\xce\xb7\xce\xbc\xce\xad\xcf\x81\xce\xb1', '\xce\xba\xcf\x8c\xcf\x83\xce\xbc\xce\xb5'] </code></pre> <p>The same thing happens when I use the collections.Counter() function as well.</p> <p>On the other hand when I print the string the output is as expected:</p> <pre><code>Καλημέρα κόσμε </code></pre> <p>I tried doing what is mentioned here: <a href="http://stackoverflow.com/questions/7165108/in-osx-lion-lang-is-not-set-to-utf8-how-fix">In OSX Lion, LANG is not set to utf8, how fix?</a> (by changing en_US.UTF-8 to el_GR.UTF-8) without any luck.</p> <p>Anyone has an idea why that happens and how I can tackle that?</p> <p>Thank you in advance.</p>
1
2016-09-13T14:47:52Z
39,473,176
<p>This is not an issue with your terminal, but how Python (2) does things.</p> <p>Even if you don't perform any task on it, <code>repr</code> will escape any non-ASCII (or non-printable (except space)) characters:</p> <pre><code>&gt;&gt;&gt; text = 'Καλημέρα κόσμε' &gt;&gt;&gt; text '\xce\x9a\xce\xb1\xce\xbb\xce\xb7\xce\xbc\xce\xad\xcf\x81\xce\xb1 \xce\xba\xcf\x8c\xcf\x83\xce\xbc\xce\xb5' </code></pre> <p>If you try the same thing in Python 3, it'll print normally:</p> <pre><code>&gt;&gt;&gt; text = 'Καλημέρα κόσμε' &gt;&gt;&gt; text Καλημέρα κόσμε </code></pre> <p>Is there any reason why you're using Python 2?</p>
0
2016-09-13T14:58:48Z
[ "python", "osx", "encoding", "utf-8", "terminal" ]
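The escapes in the question are simply the UTF-8 bytes of the Greek text, which Python 2's `repr` shows for byte strings. In Python 3 the round trip makes this explicit:

```python
text = 'Καλημέρα κόσμε'

# Encoding yields the very byte sequence shown in the question's repr.
encoded = text.encode('utf-8')
print(encoded[:4])  # the bytes for the first two characters

# Decoding restores the original text losslessly, and splitting a str
# keeps readable words instead of escaped bytes.
print(encoded.decode('utf-8') == text)
print(text.split())  # ['Καλημέρα', 'κόσμε']
```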
Django: Multiple URL parameters
39,472,953
<p>I'm making a study app that involves flashcards. It is divided into subjects. Each subject (biology, physics) has a set of decks (unitone, unittwo). Each deck has a set of cards (terms and definitions). I want my URLs to look like localhost:8000/biology/unitone/ but I have trouble putting two URL parameters in one URL.</p> <p><strong>models.py</strong></p> <pre><code>class Subject(models.Model): subject_name = models.CharField(max_length=100) description = models.TextField() def __str__(self): return self.subject_name def get_absolute_url(self): return reverse('card:index') class Deck(models.Model): deck_name = models.CharField(max_length=100) subject = models.ForeignKey(Subject, on_delete=models.CASCADE) def __str__(self): return self.deck_name class Card(models.Model): term = models.CharField(max_length=100) definition = models.TextField() deck = models.ForeignKey(Deck, on_delete=models.CASCADE) def __str__(self): return self.term </code></pre> <p><strong>views.py</strong></p> <pre><code>class IndexView(generic.ListView): template_name = 'card/index.html' context_object_name = 'subjects' def get_queryset(self): return Subject.objects.all() class SubjectView(DetailView): model = Subject slug_field = "subject" template_name = 'card/subject.html' class DeckView(DetailView): model = Deck slug_field = "deck" template_name = 'card/deck.html' </code></pre> <p><strong>urls.py</strong></p> <pre><code># localhost:8000/subjects/1 (biology) url(r'^subjects/(?P&lt;pk&gt;[0-9]+)/$', views.SubjectView.as_view(), name='subject') # localhost:8000/subjects/1/1 (biology/unitone) url(r'^subjects/(?P&lt;pk&gt;[0-9]+)/(?P&lt;pk&gt;[0-9]+)/$', views.DeckView.as_view(), name='deck'), </code></pre> <p>The second URL in urls.py is what I'm having trouble with. It's not a valid URL.</p>
0
2016-09-13T14:49:26Z
39,473,147
<p>You can't have multiple parameters with the same name in one URL pattern. Give each parameter a unique name, e.g.:</p> <pre><code>url(r'^subjects/(?P&lt;pk&gt;[0-9]+)/(?P&lt;deck&gt;[0-9]+)/$', views.DeckView.as_view(), name='deck'), </code></pre> <p>In the <code>DeckView</code> you can then access them as <code>self.kwargs['pk']</code> and <code>self.kwargs['deck']</code>. </p>
2
2016-09-13T14:57:38Z
[ "python", "django" ]
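The reason the second URL pattern is invalid: Django URLconfs compile down to Python regexes, and the `re` module rejects two named groups with the same name. A minimal reproduction:

```python
import re

# Duplicate group names fail at compile time -- exactly the problem with
# two (?P<pk>...) captures in one urlconf entry.
try:
    re.compile(r'^subjects/(?P<pk>[0-9]+)/(?P<pk>[0-9]+)/$')
    duplicate_compiled = True
except re.error:
    duplicate_compiled = False
print(duplicate_compiled)  # False

# With unique names both values are captured and addressable by name,
# which is how they end up in the view's self.kwargs.
pattern = re.compile(r'^subjects/(?P<pk>[0-9]+)/(?P<deck>[0-9]+)/$')
match = pattern.match('subjects/1/2/')
print(match.groupdict())  # {'pk': '1', 'deck': '2'}
```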
Django: Multiple URL parameters
39,472,953
<p>I'm making a study app that involves flashcards. It is divided into subjects. Each subject (biology, physics) has a set of decks (unitone, unittwo). Each deck has a set of cards (terms and definitions). I want my URLs to look like localhost:8000/biology/unitone/ but I have trouble putting two URL parameters in one URL.</p> <p><strong>models.py</strong></p> <pre><code>class Subject(models.Model): subject_name = models.CharField(max_length=100) description = models.TextField() def __str__(self): return self.subject_name def get_absolute_url(self): return reverse('card:index') class Deck(models.Model): deck_name = models.CharField(max_length=100) subject = models.ForeignKey(Subject, on_delete=models.CASCADE) def __str__(self): return self.deck_name class Card(models.Model): term = models.CharField(max_length=100) definition = models.TextField() deck = models.ForeignKey(Deck, on_delete=models.CASCADE) def __str__(self): return self.term </code></pre> <p><strong>views.py</strong></p> <pre><code>class IndexView(generic.ListView): template_name = 'card/index.html' context_object_name = 'subjects' def get_queryset(self): return Subject.objects.all() class SubjectView(DetailView): model = Subject slug_field = "subject" template_name = 'card/subject.html' class DeckView(DetailView): model = Deck slug_field = "deck" template_name = 'card/deck.html' </code></pre> <p><strong>urls.py</strong></p> <pre><code># localhost:8000/subjects/1 (biology) url(r'^subjects/(?P&lt;pk&gt;[0-9]+)/$', views.SubjectView.as_view(), name='subject') # localhost:8000/subjects/1/1 (biology/unitone) url(r'^subjects/(?P&lt;pk&gt;[0-9]+)/(?P&lt;pk&gt;[0-9]+)/$', views.DeckView.as_view(), name='deck'), </code></pre> <p>The second URL in urls.py is what I'm having trouble with. It's not a valid URL.</p>
0
2016-09-13T14:49:26Z
39,474,244
<p>There is also a way to use name-based URLs, by overriding the <code>DetailView</code> methods:</p> <p>urls.py</p> <pre><code>url(r'^subjects/(?P&lt;subjects&gt;\w+)/(?P&lt;deck&gt;\w+)/$', views.DeckView.as_view(), name='deck'), </code></pre> <p>views.py</p> <pre><code>class DeckView(DetailView): model = Deck # slug_field = "deck" # you don't need it template_name = 'card/deck.html' def get_object(self, subjects, deck): subject_obj = Subject.objects.filter(subject_name=subjects).first() obj = Deck.objects.filter(subject=subject_obj, deck_name=deck).first() return obj def get(self, request, subjects, deck): self.object = self.get_object(subjects, deck) context = self.get_context_data(object=self.object) return self.render_to_response(context) </code></pre> <p>Then access <code>localhost:8000/subjects/biology/unitone/</code>.</p>
1
2016-09-13T15:52:06Z
[ "python", "django" ]
For Loop Append to list from another
39,472,967
<p>I'm working on a "pick up all" and "drop all" for a game I'm designing. The player has an inventory (inventory) and each room has its own to keep track of what is in it. When it is a specific item, I can easily append or remove the item from the respective lists, but when it is for them all, I am not sure how to proceed. (NOTE: I won't know how many items are in the inventories as they will change as players take and drop items)</p> <pre><code>ROOMNAMEinventory = ['lamp', 'coin'] inventory = ['string'] do = raw_input("What would you like to do?").upper() if(do == 'drop all'): for items in ROOMNAMEinventory: inventory.append(items) ROOMNAMEinventory.remove(items) print inventory print ROOMNAMEinventory </code></pre> <p>Currently, this prints out:</p> <pre><code>['string', 'lamp'] ['coin'] None </code></pre> <p>Why does it print the None?</p>
0
2016-09-13T14:49:56Z
39,473,046
<p>2 mistakes here:</p> <ol> <li>you convert the input to uppercase but test it against a lowercase string!</li> <li>you should iterate over a copy of <code>ROOMNAMEinventory</code>; modifying a list while iterating over it is not recommended, and it is what leaves the lists as <code>['string', 'lamp']</code> and <code>['coin']</code>, which is not what you want</li> </ol> <p>Fixed code:</p> <pre><code>ROOMNAMEinventory = ['lamp', 'coin'] inventory = ['string'] do = raw_input("What would you like to do?").upper() if(do == 'DROP ALL'): # upper vs upper: works :) for items in ROOMNAMEinventory[:]: # iterate over a copy of the list inventory.append(items) ROOMNAMEinventory.remove(items) print inventory print ROOMNAMEinventory </code></pre> <p>result (when inputting <code>drop all</code>)</p> <pre><code>['string', 'lamp', 'coin'] [] </code></pre>
0
2016-09-13T14:53:33Z
[ "python", "list" ]
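An alternative to iterating over a copy: move all items in one step, which sidesteps the mutate-while-iterating problem entirely (works in both Python 2 and 3):

```python
room_inventory = ['lamp', 'coin']
inventory = ['string']

# extend copies every item over in one call, then the slice deletion
# empties the room list in place (so other references to it stay in sync).
inventory.extend(room_inventory)
del room_inventory[:]

print(inventory)       # ['string', 'lamp', 'coin']
print(room_inventory)  # []
```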
For Loop Append to list from another
39,472,967
<p>I'm working on a "pick up all" and "drop all" for a game I'm designing. The player has an inventory (inventory) and each room has its own to keep track of what is in it. When it is a specific item, I can easily append or remove the item from the respective lists, but when it is for them all, I am not sure how to proceed. (NOTE: I won't know how many items are in the inventories as they will change as players take and drop items)</p> <pre><code>ROOMNAMEinventory = ['lamp', 'coin'] inventory = ['string'] do = raw_input("What would you like to do?").upper() if(do == 'drop all'): for items in ROOMNAMEinventory: inventory.append(items) ROOMNAMEinventory.remove(items) print inventory print ROOMNAMEinventory </code></pre> <p>Currently, this prints out:</p> <pre><code>['string', 'lamp'] ['coin'] None </code></pre> <p>Why does it print the None?</p>
0
2016-09-13T14:49:56Z
39,473,403
<p>Python lists support adding one list to another:</p> <pre><code>roomname_inventory = ['lamp', 'coin'] inventory = ['string'] do = raw_input("What would you like to do?").upper() if (do == 'DROP ALL'): inventory += roomname_inventory roomname_inventory = [] print inventory print roomname_inventory </code></pre> <p>But if you just want to avoid modifying the list while iterating over it, you could also do:</p> <pre><code>if (do == 'DROP ALL'): while roomname_inventory: inventory.append(roomname_inventory.pop(0)) </code></pre>
1
2016-09-13T15:09:52Z
[ "python", "list" ]
Django Python Social Auth only allow certain users to sign in
39,472,975
<p>I want only users from a @companyname.net email <em>or</em> from a list of email addresses to be able to sign in with Python Social Auth through google+. How would I accomplish this?</p> <pre><code>SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS = ['companyname.net'] </code></pre> <p>is what I currently have in settings.py, but that only allows @companyname.net-ers to sign in.</p>
1
2016-09-13T14:50:23Z
39,476,208
<p>One way to solve this is overriding the python-social-auth pipeline.</p> <p>You can override <code>create_user</code> with something like:</p> <pre><code>def create_user(strategy, details, user=None, *args, **kwargs): if user: return {'is_new': False} allowed_emails = get_list_of_emails() fields = dict((name, kwargs.get(name, details.get(name))) for name in strategy.setting('USER_FIELDS', USER_FIELDS)) if not fields: return if fields['email'] in allowed_emails: return { 'is_new': True, 'user': strategy.create_user(**fields) } return </code></pre> <p>The method <code>get_list_of_emails()</code> is a way to load the allowed emails from a file or from a database. It needs to return a list of emails.</p> <p>Then, in the SOCIAL_AUTH_PIPELINE in your settings, you replace <code>create_user</code> with your custom method:</p> <pre><code>SOCIAL_AUTH_PIPELINE = ( 'social.pipeline.social_auth.social_details', 'social.pipeline.social_auth.social_uid', 'social.pipeline.social_auth.auth_allowed', 'social.pipeline.social_auth.social_user', 'social.pipeline.user.get_username', 'path.to.my.method.create_user', 'social.pipeline.social_auth.associate_user', 'social.pipeline.social_auth.load_extra_data', 'social.pipeline.user.user_details', ) </code></pre> <p>This way you can keep the domains whitelist, and then store the extra emails you want somewhere where you can load them with the method <code>get_list_of_emails()</code>.</p> <p>More in the <a href="https://python-social-auth.readthedocs.io/en/latest/pipeline.html#authentication-pipeline" rel="nofollow">docs</a>.</p>
0
2016-09-13T17:53:42Z
[ "python", "django", "python-social-auth" ]
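The allow-list test itself can be kept separate from the pipeline step. A sketch of such a check; the domain and address values here are illustrative placeholders, not from the question:

```python
# Hypothetical allow lists: a company domain plus individual exceptions.
WHITELISTED_DOMAINS = {'companyname.net'}
ALLOWED_EMAILS = {'contractor@example.org'}

def email_allowed(email):
    """True if the email's domain is whitelisted or the address itself is."""
    domain = email.rsplit('@', 1)[-1].lower()
    return domain in WHITELISTED_DOMAINS or email.lower() in ALLOWED_EMAILS

print(email_allowed('alice@companyname.net'))   # True
print(email_allowed('contractor@example.org'))  # True
print(email_allowed('mallory@other.example'))   # False
```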
Output size of convolutional auto-encoder in Keras
39,472,986
<p>I am doing the convolutional autoencoder tutorial written by the author of the Keras library: <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow">https://blog.keras.io/building-autoencoders-in-keras.html</a></p> <p>However, when I launch exactly the same code and analyse the network's architecture with <code>summary()</code>, it seems that the output size is not compatible with the input size (which an autoencoder requires). Here is the output of <code>summary()</code>:</p> <pre><code>____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 1, 28, 28) 0 ____________________________________________________________________________________________________ convolution2d_1 (Convolution2D) (None, 16, 28, 28) 160 input_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_1 (MaxPooling2D) (None, 16, 14, 14) 0 convolution2d_1[0][0] ____________________________________________________________________________________________________ convolution2d_2 (Convolution2D) (None, 8, 14, 14) 1160 maxpooling2d_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_2 (MaxPooling2D) (None, 8, 7, 7) 0 convolution2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_3 (Convolution2D) (None, 8, 7, 7) 584 maxpooling2d_2[0][0] ____________________________________________________________________________________________________ maxpooling2d_3 (MaxPooling2D) (None, 8, 3, 3) 0 convolution2d_3[0][0] ____________________________________________________________________________________________________ convolution2d_4 (Convolution2D) (None, 8, 3, 3) 584 maxpooling2d_3[0][0] ____________________________________________________________________________________________________ upsampling2d_1 (UpSampling2D) (None, 8, 6, 6) 0 convolution2d_4[0][0] ____________________________________________________________________________________________________ convolution2d_5 (Convolution2D) (None, 8, 6, 6) 584 upsampling2d_1[0][0] ____________________________________________________________________________________________________ upsampling2d_2 (UpSampling2D) (None, 8, 12, 12) 0 convolution2d_5[0][0] ____________________________________________________________________________________________________ convolution2d_6 (Convolution2D) (None, 16, 10, 10) 1168 upsampling2d_2[0][0] ____________________________________________________________________________________________________ upsampling2d_3 (UpSampling2D) (None, 16, 20, 20) 0 convolution2d_6[0][0] ____________________________________________________________________________________________________ convolution2d_7 (Convolution2D) (None, 1, 20, 20) 145 upsampling2d_3[0][0] ==================================================================================================== Total params: 4385 ____________________________________________________________________________________________________ </code></pre>
2
2016-09-13T14:50:57Z
39,529,745
<p>Notice that you are missing the <code>border_mode='same'</code> option on the <code>MaxPooling2D</code> layers: without it, the 7x7 feature map pools down to 3x3 instead of 4x4, so the decoder can never upsample back to 28x28. Also note that the 16-filter convolution in the decoder deliberately keeps the default <code>border_mode='valid'</code>, which shrinks 16x16 to 14x14 so that the final upsampling yields exactly 28x28.</p> <pre><code>from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D from keras.models import Model input_img = Input(shape=(1, 28, 28)) x = Convolution2D(16, 3, 3, activation='relu', border_mode='same')(input_img) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) encoded = MaxPooling2D((2, 2), border_mode='same')(x) # at this point the representation is (8, 4, 4) i.e. 128-dimensional x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(encoded) x = UpSampling2D((2, 2))(x) x = Convolution2D(8, 3, 3, activation='relu', border_mode='same')(x) x = UpSampling2D((2, 2))(x) x = Convolution2D(16, 3, 3, activation='relu')(x) x = UpSampling2D((2, 2))(x) decoded = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x) autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') </code></pre> <p>This should work fine.</p>
1
2016-09-16T10:53:59Z
[ "python", "deep-learning", "keras" ]
Evaluating mathematical expressions passed in as strings in python
39,473,066
<p>I wish to make a mathematical function (<code>f(x,y)</code> in this case) with multiple variables, only two in this case, <code>x</code> and <code>y</code>, which evaluates a mathematical expression that is initially in string format.</p> <p>For example, if the string is</p> <pre><code>s = "2*x + sin(y) + x/(y-3.0)" </code></pre> <p>The function f(x,y) must be equivalent to</p> <pre><code>def f(x,y): return 2*x + sin(y) + x/(y-3.0) </code></pre> <p>The string is constant throughout the program and is initialized at the start. The function will be called thousands of times, so I wish it to be very efficient.</p> <p>What is the best way to do so?</p>
1
2016-09-13T14:54:44Z
39,473,121
<p>Use the built-in <code>eval</code> function: <code>return eval(s)</code>. Note that <code>x</code>, <code>y</code> and <code>sin</code> must be defined in the scope where <code>eval</code> runs, and that <code>eval</code> is dangerous on untrusted input.</p>
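A bare `return eval(s)` raises `NameError`, because `x`, `y` and `sin` are not visible inside the string's evaluation scope. A minimal sketch of how the call would need to look, passing the names in explicitly (and keeping in mind that `eval` must never be fed untrusted input):

```python
import math

s = "2*x + sin(y) + x/(y-3.0)"

def f(x, y):
    # Provide exactly the names the expression uses; emptying
    # __builtins__ limits (but does not eliminate) what the
    # string can reach.
    return eval(s, {"__builtins__": None, "sin": math.sin}, {"x": x, "y": y})

print(f(1.0, 0.0))  # 2*1 + sin(0) + 1/(0 - 3.0) = 1.666...
```

Each call still re-parses the string, so for thousands of evaluations it is worth compiling once with `code = compile(s, "<expr>", "eval")` and passing `code` to `eval` instead of `s`.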
-4
2016-09-13T14:56:42Z
[ "python", "python-2.7" ]
Evaluating mathematical expressions passed in as strings in python
39,473,066
<p>I wish to make a mathematical function (<code>f(x,y)</code> in this case) with multiple variables, only two in this case, <code>x</code> and <code>y</code>, which evaluates a mathematical expression that is initially in string format.</p> <p>For example, if the string is</p> <pre><code>s = "2*x + sin(y) + x/(y-3.0)" </code></pre> <p>The function f(x,y) must be equivalent to</p> <pre><code>def f(x,y): return 2*x + sin(y) + x/(y-3.0) </code></pre> <p>The string is constant throughout the program and is initialized at the start. The function will be called thousands of times, so I wish it to be very efficient.</p> <p>What is the best way to do so?</p>
1
2016-09-13T14:54:44Z
39,473,779
<p>I'd recommend you stay away from <a href="http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html" rel="nofollow">eval</a> and use a proper library to do the mathematical job at hand; one of the favourite candidates is sympy, which is described as:</p> <blockquote> <p>SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (<a href="https://en.wikipedia.org/wiki/Computer_algebra_system" rel="nofollow">CAS</a>) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.</p> </blockquote> <p>With sympy, you could solve your problem like this:</p> <pre><code>from sympy.parsing.sympy_parser import parse_expr eq = parse_expr("2*x + sin(y) + x/(y-3.0)") for x in range(4): for y in range(4): s1 = eq.subs({"x": x, "y": y}) s2 = s1.evalf() print s1, "--&gt;", s2 </code></pre> <p>Output:</p> <pre><code>0 --&gt; 0 sin(1) --&gt; 0.841470984807897 sin(2) --&gt; 0.909297426825682 sin(3) --&gt; 0.141120008059867 1.66666666666667 --&gt; 1.66666666666667 sin(1) + 1.5 --&gt; 2.34147098480790 sin(2) + 1.0 --&gt; 1.90929742682568 zoo --&gt; zoo 3.33333333333333 --&gt; 3.33333333333333 sin(1) + 3.0 --&gt; 3.84147098480790 sin(2) + 2.0 --&gt; 2.90929742682568 zoo --&gt; zoo 5.00000000000000 --&gt; 5.00000000000000 sin(1) + 4.5 --&gt; 5.34147098480790 sin(2) + 3.0 --&gt; 3.90929742682568 zoo --&gt; zoo </code></pre> <p>zoo means "complex infinity". For more info, read the <a href="http://docs.sympy.org/latest/tutorial/basic_operations.html" rel="nofollow">docs</a>. </p> <p>Of course, you could use one of the many existing python parsers out there or just write your own, as suggested by vz0. I'd recommend you learn more about <a href="http://www.sympy.org/en/index.html" rel="nofollow">sympy</a> though.</p>
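One caveat for the asker's stated requirement (thousands of calls): `subs`/`evalf` in a loop is slow, because every call is a symbolic substitution. SymPy's `lambdify` compiles the parsed expression once into an ordinary Python function; a sketch, assuming SymPy is installed:

```python
from sympy import symbols, lambdify
from sympy.parsing.sympy_parser import parse_expr

x, y = symbols("x y")
expr = parse_expr("2*x + sin(y) + x/(y-3.0)")

# Compile once; "math" maps sin etc. to the stdlib math module, so
# each subsequent call is plain Python arithmetic, not symbolic
# substitution.
f = lambdify((x, y), expr, "math")

print(f(1.0, 0.0))  # 1.666...
```

`f` can then be called thousands of times at roughly the cost of a hand-written Python function.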
2
2016-09-13T15:27:30Z
[ "python", "python-2.7" ]
Evaluating mathematical expressions passed in as strings in python
39,473,066
<p>I wish to make a mathematical function (<code>f(x,y)</code> in this case) with multiple variables, only two in this case, <code>x</code> and <code>y</code>, which evaluates a mathematical expression that is initially in string format.</p> <p>For example, if the string is</p> <pre><code>s = "2*x + sin(y) + x/(y-3.0)" </code></pre> <p>The function f(x,y) must be equivalent to</p> <pre><code>def f(x,y): return 2*x + sin(y) + x/(y-3.0) </code></pre> <p>The string is constant throughout the program and is initialized at the start. The function will be called thousands of times, so I wish it to be very efficient.</p> <p>What is the best way to do so?</p>
1
2016-09-13T14:54:44Z
39,473,866
<p>Without using SymPy you should create your own parser, for example by <a href="http://stackoverflow.com/questions/11708195/infix-to-postfix-with-function-support">converting the infix expression to a postfix expression</a>, which is very easy to evaluate once in this notation. Mathematical functions are just unary operators like <code>-x</code>.</p>
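To make the suggestion concrete: the linked question covers the infix-to-postfix conversion (shunting-yard), so this sketch shows only the other half, a minimal postfix evaluator with `sin` handled as a unary operator. The token list below is hand-converted from the asker's expression purely for illustration:

```python
import math

def eval_postfix(tokens, variables):
    # Evaluate a postfix (RPN) token list; functions like "sin" are
    # treated as unary operators, as the answer above suggests.
    stack = []
    binary = {"+": lambda a, b: a + b,
              "-": lambda a, b: a - b,
              "*": lambda a, b: a * b,
              "/": lambda a, b: a / b}
    unary = {"sin": math.sin, "cos": math.cos, "neg": lambda a: -a}
    for tok in tokens:
        if tok in binary:
            b = stack.pop()
            a = stack.pop()
            stack.append(binary[tok](a, b))
        elif tok in unary:
            stack.append(unary[tok](stack.pop()))
        elif tok in variables:
            stack.append(variables[tok])
        else:
            stack.append(float(tok))
    return stack[0]

# "2*x + sin(y) + x/(y-3.0)" in postfix form:
tokens = ["2", "x", "*", "y", "sin", "+", "x", "y", "3.0", "-", "/", "+"]
print(eval_postfix(tokens, {"x": 1.0, "y": 0.0}))
```

The conversion is done once at start-up; each later call only walks the token list, which keeps repeated evaluation cheap.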
1
2016-09-13T15:32:20Z
[ "python", "python-2.7" ]
Serialdata import python "every other data point"
39,473,118
<p>I'm trying to read serial data of an arduino which has been somewhat successful. The values that will be read from the arduino is voltage and current. I'm now trying to differentiate the different variables but i have no clue how. The arduino is sending the values in the following order with 1 sec delay. Voltage, AMPs, Voltage, AMPs. How could i differentiate these values in to different variables? Here is my current code which doesn't differentiate the variables at all.</p> <pre><code>import time import serial values = [] serialVoltage = serial.Serial('/dev/ttyACM0', baudrate=9600, timeout=1) voltage = serialVoltage.readline() time.sleep(1) while True:print(voltage) void setup() { Serial.begin(9600); } void loop() { float voltageRead = analogRead(A0); float ampsRead = analogRead(A1); float calculatedVoltage = voltageRead / 103; float calculatedCurrent = ampsRead / 1; Serial.println(calculatedVoltage); delay(1000); Serial.println(calculatedCurrent); delay(1000); } </code></pre>
0
2016-09-13T14:56:40Z
39,473,263
<p>First of all, you shouldn't need the <code>sleep</code>s on the raspberry Pi - <code>readline</code> will block until the output comes along.</p> <p>You should structure your code to read the voltage and current separately:</p> <pre><code>while True: voltage = serialVoltage.readline() current = serialVoltage.readline() print("V:", voltage, "A:", current) </code></pre> <p>Note also that if you want to work with the string values returned by <code>readline</code> you should first explicitly convert them to numeric form.</p> <p>Remember that <code>readline</code> returns a string that includes the line termination - these are the <code>'\r\n'</code> characters you mention in your comment. I can't see how the "bad values" you mention are being created, but it would probably be helpful if the Arduino output allowed you to discriminate between voltage and current values.</p> <p>My suggestion would be that you actually print out both values on a single line from the Arduino in a format that can readily be decoded by your Python program. Since it appears that the Arduino will already be running when you run the Python program on the Pi you could then simply ignore the first (possibly partial) line that you read, which will guarantee that all subsequent lines are whole outputs.</p> <p>The "odd values" you see are because I assumed since you used the form <code>print(voltage)</code> that you were using Python 3! Since you appear to be using Python 2 your <code>print</code> statement should probably read something like</p> <pre><code>print "V:", voltage, "A:", current </code></pre> <p>The interpreter will then print the output as a string, rather than trying to show you the value of a tuple containing the four things to be printed.</p>
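To illustrate the single-line suggestion above: if the Arduino prints both readings on one line, e.g. `Serial.print(calculatedVoltage); Serial.print(","); Serial.println(calculatedCurrent);`, the Python side only has to strip the line terminator and split on the comma. A sketch, with a literal string standing in for `serialVoltage.readline()`:

```python
def parse_reading(line):
    # readline() yields e.g. "4.85,0.32\r\n": drop the "\r\n"
    # terminator, split on the comma, convert both parts to floats.
    volts, amps = line.strip().split(",")
    return float(volts), float(amps)

voltage, current = parse_reading("4.85,0.32\r\n")
print("V: %s A: %s" % (voltage, current))
```

Because each line now carries one complete reading, dropping the first (possibly partial) line is enough to keep voltage and current from ever getting swapped.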
0
2016-09-13T15:02:56Z
[ "python", "arduino", "serial-port", "raspberry-pi3" ]
How to append even and odd chars python
39,473,259
<p>I want to convert all the even letters using one function and all the odd letters using another function. Each letter represents 0-25, corresponding to a-z, so a,c,e,g,i,k,m,o,q,s,u,w,y are the even characters.</p> <p>However, only my even letters are converting correctly.</p> <pre><code>def encrypt(plain): charCount = 0 answer=[] for ch in plain: if charCount%2==0: answer.append(pycipher.Affine(7,6).encipher(ch)) else: answer.append(pycipher.Affine(3,0).encipher(ch)) return ''.join(answer) </code></pre>
0
2016-09-13T15:02:45Z
39,473,300
<p>You never change <code>charCount</code> in your loop -- So it starts at <code>0</code> and stays at <code>0</code> which means that each <code>ch</code> will be treated as "even".</p> <p>Based on your update, you actually want to check if the character is odd or even based on it's "index" in the english alphabet. Having some sort of mapping of characters to numbers is helpful here. You could build it yourself:</p> <pre><code>alphabet = 'abcde...' # string.ascii_lowercase? mapping = {k: i for i, k in enumerate(alphabet)} </code></pre> <p><em>OR</em> we can use the builtin <code>ord</code> noticing that <code>ord('a')</code> produces an odd result, <code>ord('b')</code> is even, etc.</p> <pre><code>def encrypt(plain): answer=[] for ch in plain: if ord(ch) % 2 == 1: # 'a', 'c', 'e', ... answer.append(pycipher.Affine(7,6).encipher(ch)) else: # 'b', 'd', 'f', ... answer.append(pycipher.Affine(3,0).encipher(ch)) return ''.join(answer) </code></pre>
4
2016-09-13T15:04:57Z
[ "python" ]
How to append even and odd chars python
39,473,259
<p>I want to convert all the even letters using one function and all the odd letters using another function. Each letter represents 0-25, corresponding to a-z, so a,c,e,g,i,k,m,o,q,s,u,w,y are the even characters.</p> <p>However, only my even letters are converting correctly.</p> <pre><code>def encrypt(plain): charCount = 0 answer=[] for ch in plain: if charCount%2==0: answer.append(pycipher.Affine(7,6).encipher(ch)) else: answer.append(pycipher.Affine(3,0).encipher(ch)) return ''.join(answer) </code></pre>
0
2016-09-13T15:02:45Z
39,473,519
<p>Since your notion of <em>even letter</em> is based on the position of a character in the alphabet, you could use <a href="https://docs.python.org/3/library/functions.html?highlight=ord#ord" rel="nofollow"><code>ord()</code></a>, like this:</p> <pre><code> if ord(ch)%2==0: </code></pre> <p>Note that <code>ord('a')</code> and <code>ord('A')</code> are both odd, so that would make <code>a</code> go in the <code>else</code> part. If you want the opposite, then just negate the condition:</p> <pre><code> if ord(ch)%2!=0: </code></pre>
0
2016-09-13T15:15:02Z
[ "python" ]
How to append even and odd chars python
39,473,259
<p>I want to convert all the even letters using one function and all the odd letters using another function. Each letter represents 0-25, corresponding to a-z, so a,c,e,g,i,k,m,o,q,s,u,w,y are the even characters.</p> <p>However, only my even letters are converting correctly.</p> <pre><code>def encrypt(plain): charCount = 0 answer=[] for ch in plain: if charCount%2==0: answer.append(pycipher.Affine(7,6).encipher(ch)) else: answer.append(pycipher.Affine(3,0).encipher(ch)) return ''.join(answer) </code></pre>
0
2016-09-13T15:02:45Z
39,473,611
<p>These are my two cents on that. What @mgilson is proposing also works of course, but not in the way you specified (in the comments). Try to debug your code in your head after writing it: go through the for loop and perform 1-2 iterations to see whether the variables take the values you intended them to. <code>charCount</code> is never reassigned a value. It is always 0. And, yes, <code>charCount += 1</code> would make it change, but <strong>not</strong> in the way you want it to.</p> <pre><code>def encrypt(plain): alphabet = 'abcdefghijklmnopqrstuvwxyz' answer = '' for letter in plain: try: if alphabet.index(letter.lower()) % 2 == 0: answer += pycipher.Affine(7, 6).encipher(letter) else: answer += pycipher.Affine(3, 0).encipher(letter) except ValueError: answer += letter return answer my_text = 'Your question was not very clear OP' encrypted_text = encrypt(my_text) </code></pre> <p>Also, I would not use <code>ord(ch)</code> because <code>ord('a')</code> is <code>97</code> and not <code>0</code>, therefore odd instead of even.</p>
0
2016-09-13T15:19:09Z
[ "python" ]
How to append even and odd chars python
39,473,259
<p>I want to convert all the even letters using one function and all the odd numbers using another function. So, each letter represents 0-25 correspsonding with a-z, so a,c,e,g,i,k,m,o,q,s,u,w,y are even characters.</p> <p>However, only my even letters are converting correctly. </p> <pre><code>def encrypt(plain): charCount = 0 answer=[] for ch in plain: if charCount%2==0: answer.append(pycipher.Affine(7,6).encipher(ch)) else: answer.append(pycipher.Affine(3,0).encipher(ch)) return ''.join(answer) </code></pre>
0
2016-09-13T15:02:45Z
39,473,735
<p>Your basic approach is to re-encrypt a letter each time you see it. With only 26 possible characters to encrypt, it is probably worth pre-encrypting them, then just performing a lookup for each character in the plain text. While doing that, you don't need to compute the position of each character, because you know you are alternating between even and odd the entire time.</p> <pre><code>import string def encrypt(plain): # True == 1, False == 0 fs = [pycipher.Affine(3,0).encipher, pycipher.Affine(7,6).encipher] is_even = True # assuming "a" is even; otherwise, just set this to False d = dict() for ch in string.ascii_lowercase: f = fs[is_even] d[ch] = f(ch) is_even = not is_even return ''.join([d[ch] for ch in plain]) </code></pre> <p>You can also use <code>itertools.cycle</code> to simplify the alternation for you.</p> <pre><code>import itertools import string def encrypt(plain): # 'a' is even and pairs with Affine(7,6), matching the snippet above fs = itertools.cycle([pycipher.Affine(7,6).encipher, pycipher.Affine(3,0).encipher]) d = dict((ch, f(ch)) for f, ch in zip(fs, string.ascii_lowercase)) return ''.join([d[ch] for ch in plain]) </code></pre>
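The same lookup-table pattern can be run without `pycipher`, using hypothetical stand-in functions in place of the two `Affine(...).encipher` calls (the stand-ins below are placeholders, not real ciphers):

```python
import itertools
import string

# Stand-ins for the two pycipher.Affine(...).encipher calls in the
# answer above (pycipher itself is assumed unavailable here).
even_cipher = lambda ch: ch.upper()   # placeholder for Affine(7, 6)
odd_cipher = lambda ch: ch            # placeholder for Affine(3, 0)

def encrypt(plain):
    # Pair the cyclic cipher sequence with the alphabet once, then
    # encrypt each plaintext character by dictionary lookup.
    fs = itertools.cycle([even_cipher, odd_cipher])
    d = dict((ch, f(ch)) for f, ch in zip(fs, string.ascii_lowercase))
    return ''.join([d[ch] for ch in plain])

print(encrypt("abcd"))  # AbCd
```

Swapping the placeholders for the real `pycipher` calls changes nothing about the alternation logic itself.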
1
2016-09-13T15:25:13Z
[ "python" ]
How to install module and package in python
39,473,266
<p>I'm trying to get started with OpenCV in Python. I have experience with C# and some knowledge of C++, but I feel more comfortable with Python. I installed OpenCV and then Python 3.4 in Visual Studio 2015. At the beginning I received a numpy error, "Module couldn't be found"; thankfully, I resolved it. Then I got another error, cv2 "Module couldn't be found". I asked a <a href="http://stackoverflow.com/questions/39460015/how-to-import-cv2-in-python-project-visual-studio-2015">question</a> yesterday, but I think it has been left unanswered. Anyways, I'm not complaining, but I still need some help to get started with OpenCV in Python.</p> <p>Installing Python 3.4: <strong>Successful</strong></p> <p>Installing numpy: <strong>Successful</strong></p> <p>Installing matplotlib: <strong>Failed</strong></p> <p><a href="http://i.stack.imgur.com/0P14N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0P14N.jpg" alt="enter image description here"></a></p> <p>Installing cv2: <strong>Failed</strong></p> <p><a href="http://i.stack.imgur.com/JcRE5.png" rel="nofollow"><img src="http://i.stack.imgur.com/JcRE5.png" alt="enter image description here"></a></p> <p>Can anybody help me please? Thanks a lot.</p>
0
2016-09-13T15:03:03Z
39,473,366
<p>You can install matplotlib using pip (which is already installed on your machine, as mentioned in your previous question):</p> <pre><code>pip install matplotlib </code></pre> <p>more info: <a href="http://matplotlib.org/faq/installing_faq.html" rel="nofollow">http://matplotlib.org/faq/installing_faq.html</a></p>
1
2016-09-13T15:08:21Z
[ "python", "windows", "python-3.x", "opencv", "visual-studio-2015" ]
How to install module and package in python
39,473,266
<p>I'm trying to get started with OpenCV in Python. I have experience with C# and some knowledge of C++, but I feel more comfortable with Python. I installed OpenCV and then Python 3.4 in Visual Studio 2015. At the beginning I received a numpy error, "Module couldn't be found"; thankfully, I resolved it. Then I got another error, cv2 "Module couldn't be found". I asked a <a href="http://stackoverflow.com/questions/39460015/how-to-import-cv2-in-python-project-visual-studio-2015">question</a> yesterday, but I think it has been left unanswered. Anyways, I'm not complaining, but I still need some help to get started with OpenCV in Python.</p> <p>Installing Python 3.4: <strong>Successful</strong></p> <p>Installing numpy: <strong>Successful</strong></p> <p>Installing matplotlib: <strong>Failed</strong></p> <p><a href="http://i.stack.imgur.com/0P14N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0P14N.jpg" alt="enter image description here"></a></p> <p>Installing cv2: <strong>Failed</strong></p> <p><a href="http://i.stack.imgur.com/JcRE5.png" rel="nofollow"><img src="http://i.stack.imgur.com/JcRE5.png" alt="enter image description here"></a></p> <p>Can anybody help me please? Thanks a lot.</p>
0
2016-09-13T15:03:03Z
39,473,504
<p>It's very common to install Python packages through <code>pip</code> today (recursive acronym for <em>pip installs packages</em>). However, this is not that trivial under Windows.</p> <p><strong>How to install <code>matplotlib</code>:</strong></p> <p>Try to open a commandline and type in <code>pip install matplotlib</code>. If this does not work, you'll need to do some more work to get <code>pip</code> running. I gave a detailed answer here: <a href="http://stackoverflow.com/questions/35919876/not-sure-how-to-fix-this-cmd-command-error/35919988#35919988">Not sure how to fix this Cmd command error?</a>.</p> <p><strong>How to install OpenCV:</strong></p> <p>The Python OpenCV DLL must be made for your version of Python and your system architecture (or, to be more specific, the architecture your Python was compiled for).</p> <ul> <li>Download OpenCV for your Python version (2/3)</li> <li>Try replacing the x64 version with the x86 version</li> <li>There are a lot of different binaries here: <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv</a>. Try to get the one exactly matching your Python version and system architecture and install it via <code>pip</code> (cp35 means CPython version 3.5, etc.).</li> </ul> <p>If you have the OpenCV .whl file matching your system configuration, do <code>pip install file.whl</code>.</p> <p>Hope this helps!</p>
1
2016-09-13T15:14:22Z
[ "python", "windows", "python-3.x", "opencv", "visual-studio-2015" ]
How to install module and package in python
39,473,266
<p>I'm trying to get started with OpenCV in Python. I have experience with C# and some knowledge of C++, but I feel more comfortable with Python. I installed OpenCV and then Python 3.4 in Visual Studio 2015. At the beginning I received a numpy error, "Module couldn't be found"; thankfully, I resolved it. Then I got another error, cv2 "Module couldn't be found". I asked a <a href="http://stackoverflow.com/questions/39460015/how-to-import-cv2-in-python-project-visual-studio-2015">question</a> yesterday, but I think it has been left unanswered. Anyways, I'm not complaining, but I still need some help to get started with OpenCV in Python.</p> <p>Installing Python 3.4: <strong>Successful</strong></p> <p>Installing numpy: <strong>Successful</strong></p> <p>Installing matplotlib: <strong>Failed</strong></p> <p><a href="http://i.stack.imgur.com/0P14N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0P14N.jpg" alt="enter image description here"></a></p> <p>Installing cv2: <strong>Failed</strong></p> <p><a href="http://i.stack.imgur.com/JcRE5.png" rel="nofollow"><img src="http://i.stack.imgur.com/JcRE5.png" alt="enter image description here"></a></p> <p>Can anybody help me please? Thanks a lot.</p>
0
2016-09-13T15:03:03Z
39,473,508
<p>You may be better off using a package such as pythonxy as a start, e.g. from <a href="https://python-xy.github.io/" rel="nofollow">https://python-xy.github.io/</a>, instead of installing each single package manually.</p>
1
2016-09-13T15:14:41Z
[ "python", "windows", "python-3.x", "opencv", "visual-studio-2015" ]
How to pass QLineEdit value to another function?
39,473,288
<p>I'm using Python 2.7 and PyQT4, I'm making a simple calculator, I have two <code>QLineEdit</code> and I have a function that prints the result of adding.</p> <pre><code>class Window(QtGui.QMainWindow): global number_1_text global number_2_text def __init__(self): super(Window, self).__init__() self.setWindowFlags(QtCore.Qt.WindowMinimizeButtonHint) #Main Window Settings self.setGeometry(50,50,500,300) self.setWindowTitle("Point of sale !") self.setWindowIcon(QtGui.QIcon('python.png')) #Close Action exitAction = QtGui.QAction(QtGui.QIcon('python.png'),"Close Now !", self) exitAction.setShortcut("Ctrl+Q") exitAction.setStatusTip('Leave the Application') exitAction.triggered.connect(self.close_application) self.statusBar() #Menubar mainMenu = self.menuBar() fileMenu = mainMenu.addMenu('&amp;File') fileMenu.addAction(exitAction) # Create textboxs number_1 = QtGui.QLineEdit(self) number_1.move(20, 50) number_1.resize(380,40) self.number_1_text = number_1.text() number_2 = QtGui.QLineEdit(self) number_2.move(20, 100) number_2.resize(380,40) self.number_2_text = number_2.text() #Main Home Proccess quiteBtn = QtGui.QPushButton("Quite", self) quiteBtn.resize(100, 50) quiteBtn.move(50,220) quiteBtn.clicked.connect(self.close_application) addBtn = QtGui.QPushButton("Add", self) addBtn.resize(100, 50) addBtn.move(150,220) addBtn.clicked.connect(self.addNumbers) self.show() def addNumbers(self): print self.number_1_text print self.number_2_text print "Done" #Close The Whole Application def close_application(self): choice = QtGui.QMessageBox.question(self, 'Exit !', 'Are you sure you wanna exit?', QtGui.QMessageBox.Yes | QtGui.QMessageBox.No ) if choice == QtGui.QMessageBox.Yes: sys.exit() else: pass </code></pre> <p>I tried to get the values with <code>text()</code> function and assigned it to a global variable in the class, then I tried to call the values in <code>addNumbers</code> function but I get an empty value.</p> <p>A screenshot of the result: <a href="http://i.stack.imgur.com/1LXda.png" rel="nofollow"><img src="http://i.stack.imgur.com/1LXda.png" alt="Result"></a></p>
0
2016-09-13T15:04:25Z
39,474,586
<p>You should make your child widgets attributes of the main window. That way, you can easily access them later:</p> <pre><code>class Window(QtGui.QMainWindow): def __init__(self): super(Window, self).__init__() ... self.number_1 = QtGui.QLineEdit(self) self.number_1.move(20, 50) self.number_1.resize(380, 40) self.number_2 = QtGui.QLineEdit(self) self.number_2.move(20, 100) self.number_2.resize(380, 40) ... def addNumbers(self): a = float(self.number_1.text()) b = float(self.number_2.text()) print '%s + %s = %s' % (a, b, a + b) print 'Done' </code></pre>
0
2016-09-13T16:10:08Z
[ "python", "pyqt", "pyqt4" ]
Restrict python exec access to one directory
39,473,445
<p>I have a python script which executes a string of code with the <em>exec</em> function. I need a way to restrict the read/write access of the script to the current directory. How can I achieve this?</p> <p>Or, is there a way to restrict the python script's environment directly through the command line so that when I run the interpreter, it does not allow writes out of the directory? Can I do that using a virtualenv? How?</p> <p>So basically, my app is a web portal where people can write and execute python apps and get a response - and I've hosted this on heroku. Now there might be multiple users with multiple folders and no user should have access to other's folders or even system files and folders. The permissions should be determined by the user on the nodejs app (a web app) and not a local user. How do I achieve that? </p>
-1
2016-09-13T15:11:42Z
39,473,520
<p>Execute the code as a user that only owns that specific directory and has no permissions anywhere else?</p> <p>However- if you do not completely trust the source of code, you should simply not be using <code>exec</code> under any circumstances. Remember, say you came up with a python solution... the exec code could literally undo whatever restrictions you put on it before doing its nefarious deeds. If you tell us the problem you're trying to solve, we can probably come up with a better idea.</p>
1
2016-09-13T15:15:03Z
[ "python", "python-2.7", "python-exec" ]
Restrict python exec access to one directory
39,473,445
<p>I have a python script which executes a string of code with the <em>exec</em> function. I need a way to restrict the read/write access of the script to the current directory. How can I achieve this?</p> <p>Or, is there a way to restrict the python script's environment directly through the command line so that when I run the interpreter, it does not allow writes out of the directory? Can I do that using a virtualenv? How?</p> <p>So basically, my app is a web portal where people can write and execute python apps and get a response - and I've hosted this on heroku. Now there might be multiple users with multiple folders and no user should have access to other's folders or even system files and folders. The permissions should be determined by the user on the nodejs app (a web app) and not a local user. How do I achieve that? </p>
-1
2016-09-13T15:11:42Z
39,474,240
<p>The question boils down to: how can I safely execute code I don't trust?<br> You can't.<br> Either you know what the code does, or you don't execute it.<br> You can get an isolated environment for your process, for example with Docker, but its intended use cases are far from safely running untrusted code.</p>
1
2016-09-13T15:51:51Z
[ "python", "python-2.7", "python-exec" ]
64-bit cx_Oracle: DLL load failed
39,473,503
<p>Using Windows 2008 R2 Server. Server was completely clean. Installed 64-bit Python 3.5, 64-bit Oracle Instant Client 12c. pip installed cx_Oracle successfully. When I try to run a python script that imports cx_Oracle however, I get:</p> <pre><code>ImportError: DLL load failed: The specified module could not be found. </code></pre> <p>The instant client path is in the <code>PATH</code> environmental variable. I also made another system variable called <code>ORACLE_HOME</code> with the same instant client path.</p> <p>I've double checked everything is 64-bit, and looked through SO at the many other times this has come up, and no answer has helped.</p>
1
2016-09-13T15:14:16Z
39,503,078
<p>First, the environment variable ORACLE_HOME should not be set when an instant client is used. Setting it can have unintended side effects!</p> <p>Second, if you used pip to install cx_Oracle that suggests you have a compiler and it succeeded in compiling the module. Check to make sure that it used the correct libraries.</p> <p>Third, you can also download and install a pre-built binary from the PyPI site and see if that helps matters any. Make sure you pick the Python 3.5, 64-bit, Oracle 12c version that is listed there. The PyPI site link is here:</p> <p><a href="https://pypi.python.org/pypi/cx_Oracle" rel="nofollow">https://pypi.python.org/pypi/cx_Oracle</a></p>
0
2016-09-15T04:23:18Z
[ "python", "cx-oracle" ]
Apply numpy.where on condition with either list or numpy.array
39,473,548
<p>I discovered that <code>numpy.where</code> behaves differently when applied on a condition such as <code>foo==2</code> when <code>foo</code> is a list or <code>foo</code> is a <code>numpy.array</code></p> <pre><code>foo = ["a","b","c"] bar = numpy.array(["a","b","c"]) numpy.where(foo == "a") # Returns array([]) numpy.where(bar == "a") # Returns array([0]) </code></pre> <p>I want the same command to make this applicable to either list or numpy.array, and I am concerned about how to perform this efficiently. Is the following ok ?</p> <pre><code>numpy.where(numpy.array(foo, copy=False) == "a") # Returns array([0]) numpy.where(numpy.array(bar, copy=False) == "a") # Returns array([0]) </code></pre> <p>Result is as expected, but is this the best way to answer my need ? Using each time <code>numpy.array</code> constructor is the best way to ensure object type ?</p> <p>Thanks !</p>
0
2016-09-13T15:16:38Z
39,474,270
<p>To me your solution is already the best:</p> <pre><code>numpy.where(numpy.array(foo, copy=False) == "a") </code></pre> <p>It is concise, very clear and totally efficient thanks to <code>copy=False</code>.</p>
2
2016-09-13T15:53:17Z
[ "python", "arrays", "numpy" ]
Apply numpy.where on condition with either list or numpy.array
39,473,548
<p>I discovered that <code>numpy.where</code> behaves differently when applied on a condition such as <code>foo==2</code> when <code>foo</code> is a list or <code>foo</code> is a <code>numpy.array</code></p> <pre><code>foo = ["a","b","c"] bar = numpy.array(["a","b","c"]) numpy.where(foo == "a") # Returns array([]) numpy.where(bar == "a") # Returns array([0]) </code></pre> <p>I want the same command to make this applicable to either list or numpy.array, and I am concerned about how to perform this efficiently. Is the following ok ?</p> <pre><code>numpy.where(numpy.array(foo, copy=False) == "a") # Returns array([0]) numpy.where(numpy.array(bar, copy=False) == "a") # Returns array([0]) </code></pre> <p>Result is as expected, but is this the best way to answer my need ? Using each time <code>numpy.array</code> constructor is the best way to ensure object type ?</p> <p>Thanks !</p>
0
2016-09-13T15:16:38Z
39,483,596
<p>If you are really looking for the most <code>numpy</code>-esque solution, use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.asarray.html" rel="nofollow"><code>np.asarray</code></a>:</p> <pre><code>numpy.where(numpy.asarray(foo) == "a") </code></pre> <p>And if you also want your code to work with subclasses of <code>numpy.ndarray</code>, without "downconverting" them to their base class of <code>ndarray</code>, then use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.asanyarray.html#numpy.asanyarray" rel="nofollow"><code>np.asanyarray</code></a>:</p> <pre><code>numpy.where(numpy.asanyarray(foo) == "a") </code></pre> <p>This works for <code>np.matrix</code> for instance, without converting it to an array. I suppose that this would also ensure that the <code>np.matrix</code> instance doesn't get copied or reconstructed into an array prior to checking.</p> <p><strong>Note:</strong> I think that copies are made by <code>np.array</code> for lists, because it needs to construct the array object. This can be seen in the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html" rel="nofollow">documentation for <code>np.array</code></a>:</p> <pre><code>copy : bool, optional If true (default), then the object is copied. Otherwise, a copy will only be made if __array__ returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (dtype, order, etc.). </code></pre> <p><code>np.asarray</code> would also make a copy in this case.</p>
1
2016-09-14T06:13:53Z
[ "python", "arrays", "numpy" ]
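A small check of the copy behaviour mentioned in the answer: `np.asarray` is a no-op for an existing ndarray of matching dtype, but it must build (and therefore copy into) a new array when given a list.

```python
import numpy as np

foo = ["a", "b", "c"]
bar = np.array(["a", "b", "c"])

# Passing an ndarray through asarray returns the very same object...
print(np.asarray(bar) is bar)        # True

# ...while a list has to be converted to a new ndarray first.
converted = np.asarray(foo)
print(type(converted).__name__)      # ndarray
print(converted is foo)              # False
```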
Setting class variable based on another class' variable
39,473,653
<p>I was trying to remind myself of OO programming by creating some kind of chess clone in python and found myself with the current issue.</p> <p>I'd like to give each piece on the board an 'identifier' variable so that it can be displayed on screen, e.g. bR would indicate a black rook. However I'm not sure how I can establish this, because colour is a variable inherited by the rook class rather than being a part of it.</p> <p>This is probably a silly question but I've also looked into using the __str__ function for this display but I run into a similar issue that way too. The code I have is below.</p> <p>I could use another __init__ variable but I'd rather not have to specify every single pawn's identifier (P) when the pawn class could surely do the same for me?</p> <pre><code>class piece(object): identifier = ' ' def __init__(self, colour): self.colour = colour class rook(piece): identifier = 'R' rookA = rook('b') </code></pre> <p>And I'm looking to achieve:</p> <p><code>print(rookA.identifier) "bR"</code></p> <p>or </p> <p><code>print(rookA) "bR"</code></p> <p>if I'm going about this task in the wrong way, please suggest an alternative!</p> <p>Thanks</p>
1
2016-09-13T15:21:30Z
39,473,724
<p>I'm not sure what issue you had with using <code>__str__</code>, but that is absolutely the right way to go about it.</p> <pre><code>class Piece(object): identifier = ' ' def __init__(self, colour): self.colour = colour def __str__(self): return "{}{}".format(self.colour, self.identifier) </code></pre>
1
2016-09-13T15:24:48Z
[ "python", "python-3.x" ]
Setting class variable based on another class' variable
39,473,653
<p>I was trying to remind myself of OO programming by creating some kind of chess clone in python and found myself with the current issue.</p> <p>I'd like to give each piece on the board an 'identifier' variable so that it can be displayed on screen, e.g. bR would indicate a black rook. However I'm not sure how I can establish this, because colour is a variable inherited by the rook class rather than being a part of it.</p> <p>This is probably a silly question but I've also looked into using the __str__ function for this display but I run into a similar issue that way too. The code I have is below.</p> <p>I could use another __init__ variable but I'd rather not have to specify every single pawn's identifier (P) when the pawn class could surely do the same for me?</p> <pre><code>class piece(object): identifier = ' ' def __init__(self, colour): self.colour = colour class rook(piece): identifier = 'R' rookA = rook('b') </code></pre> <p>And I'm looking to achieve:</p> <p><code>print(rookA.identifier) "bR"</code></p> <p>or </p> <p><code>print(rookA) "bR"</code></p> <p>if I'm going about this task in the wrong way, please suggest an alternative!</p> <p>Thanks</p>
1
2016-09-13T15:21:30Z
39,473,729
<p>There's no issue trying to get the value you need by defining <code>__str__</code>, define it on <code>piece</code> and return <code>self.colour + self.identifier</code>:</p> <pre><code>def __str__(self): return self.colour + self.identifier </code></pre> <p>Now when you print an instance of <code>rook</code>, you'll get the wanted result:</p> <pre><code>print(rookA) bR </code></pre>
1
2016-09-13T15:25:05Z
[ "python", "python-3.x" ]
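Putting the two answers together, a complete runnable sketch (class names capitalised per convention; the chess logic itself is not modelled):

```python
class Piece(object):
    identifier = ' '

    def __init__(self, colour):
        self.colour = colour

    def __str__(self):
        # Combine the instance's colour with the subclass's identifier,
        # so each subclass only needs to override `identifier`.
        return "{}{}".format(self.colour, self.identifier)


class Rook(Piece):
    identifier = 'R'


rook_a = Rook('b')
print(rook_a)  # bR
```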
Reading mashabe API using urllib
39,473,685
<p>I have this code that reads a Mashape.com API in Python 2. How can I do the same in Python 3?</p> <p><strong>code</strong></p> <pre><code>import urllib, urllib2, json from pprint import pprint URL = "https://getsentiment.p.mashape.com/" text = "The food was great, but the service was slow." params = {'text': text, 'domain': 'retail', 'terms': 1, 'categories': 1,'sentiment': 1, 'annotate': 1} headers = {'X-Mashape-Key': YOUR_MASHAPE_KEY} opener = urllib2.build_opener(urllib2.HTTPHandler) request = urllib2.Request(URL, urllib.urlencode(params), headers=headers) response = opener.open(request) opener.close() data = json.loads(response.read()) pprint(data) </code></pre> <p>I tried the code below, but it raised the following error:</p> <pre><code>import urllib.parse import urllib.request URL = "https://getsentiment.p.mashape.com/" text = "The food was great, but the service was slow." params = {'text': text, 'domain': 'retail', 'terms': 1, 'categories': 1, 'sentiment': 1, 'annotate': 1} headers = {'X-Mashape-Key': YOUR_MASHAPE_KEY} opener = urllib.request.build_opener(urllib.request.HTTPHandler) request = urllib.request.Request(URL, urllib.parse.urlencode(params), headers) response = opener.open(request) opener.close() data = json.loads(response.read()) print(data) </code></pre> <p><strong>error:</strong></p> <pre><code>TypeError: POST data should be bytes or an iterable of bytes. It cannot be of type str. </code></pre>
2
2016-09-13T15:22:52Z
39,474,094
<p>In this line:</p> <pre><code>request = urllib.request.Request(URL, urllib.parse.urlencode(params), headers) </code></pre> <p>try replacing it with:</p> <pre><code>data = urllib.parse.urlencode(params).encode('utf-8') request = urllib.request.Request(URL, data, headers) </code></pre>
1
2016-09-13T15:43:48Z
[ "python", "python-3.x", "mashape" ]
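The key Python 3 change from the answer, isolated so it can be run without hitting the real API (the URL and API key are therefore omitted here):

```python
import urllib.parse

params = {'text': 'The food was great, but the service was slow.',
          'domain': 'retail', 'sentiment': 1}

# urlencode returns a str; Request bodies must be bytes in Python 3,
# hence the extra .encode('utf-8') step before building the Request.
data = urllib.parse.urlencode(params).encode('utf-8')
print(type(data).__name__)  # bytes
```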
Theano ValueError: dimension mismatch in args to gemm; 2d array dimension is interpreted as 1d
39,473,692
<p>I am trying to implement a simple XNOR neural network function using Theano, and I am getting this type mismatch:</p> <blockquote> <p>ValueError: dimension mismatch in args to gemm (8,1)x(2,1)->(8,1)</p> </blockquote> <p>despite the fact that the inputs are in dimension (4X2) and the outputs are (4X1). I don't know why it is reading the inputs dimension as (8X1).</p> <p>It should be (4,2)X(2,1)->(4,1) but somehow it sees it as (8,1)x(2,1)->(8,1)</p> <p>Any idea why it is reading the inputs dimension (n,m) as (n*m,1)?</p> <p>Simple Neural Network for XNOR implementation:</p> <pre><code>print 'Importing Theano Library ...' import theano print 'Importing General Libraries ...' import numpy as np import theano.tensor as T from theano import function from theano import shared from theano.ifelse import ifelse import os from random import random import time print(theano.config.device) print 'Building Neural Network ...' startTime = time.clock() rng = np.random #Define variables: x = T.matrix('x') w1 = shared(np.array([rng.random(1).astype(theano.config.floatX), rng.random(1).astype(theano.config.floatX)])) w2 = shared(np.array([rng.random(1).astype(theano.config.floatX), rng.random(1).astype(theano.config.floatX)])) w3 = shared(np.array([rng.random(1).astype(theano.config.floatX), rng.random(1).astype(theano.config.floatX)])) b1 = shared(np.asarray(1., dtype=theano.config.floatX)) b2 = shared(np.asarray(1., dtype=theano.config.floatX)) learning_rate = 0.01 a1 = 1/(1+T.exp(-T.dot(x,w1)-b1)) a2 = 1/(1+T.exp(-T.dot(x,w2)-b1)) x2 = T.stack([a1,a2],axis=1) a3 = 1/(1+T.exp(-T.dot(x2,w3)-b2)) a_hat = T.vector('a_hat') #Actual output cost = -(a_hat*T.log(a3) + (1-a_hat)*T.log(1-a3)).sum() dw1,dw2,dw3,db1,db2 = T.grad(cost,[w1,w2,w3,b1,b2]) train = function(inputs = [x,a_hat], outputs = [a3,cost], updates = [[w1, w1-learning_rate*dw1],[w2, w2-learning_rate*dw2],[w3, w3-learning_rate*dw3],[b1, b1-learning_rate*b1],[b2, b2-learning_rate*b2]]) print 'Neural Network Built' TimeDelta = 
time.clock() - startTime print 'Building Time: %.2f seconds' %TimeDelta inputs = np.array([[0,0],[0,1],[1,0],[1,1]]).astype(theano.config.floatX) outputs = np.array([1,0,0,1]).astype(theano.config.floatX) #Iterate through all inputs and find outputs: print 'Training the network ...' startTime = time.clock() cost = [] print 'input shape', inputs.shape print 'output shape', outputs.shape for iteration in range(60000): print 'Iteration no. %d \r' %iteration, pred, cost_iter = train(inputs, outputs) cost.append(cost_iter) TimeDelta = time.clock() - startTime print 'Training Time: %.2f seconds' %TimeDelta #Print the outputs: print 'The outputs of the NN are: ' for i in range(len(inputs)): print 'The output for x1=%d | x2=%d is %.2f' % (inputs[i][0], inputs[i][1], pred[i]) predict = function([x],a3) print predict([[0,0]]) print predict([[0,1]]) print predict([[1,0]]) print predict([[1,1]]) </code></pre> <p>Terminal Output:</p> <pre><code>Importing Theano Library ... Using gpu device 0: NVIDIA Tegra X1 (CNMeM is enabled with initial size: 75.0% of memory, cuDNN 5005) Importing General Libraries ... gpu Building Neural Network ... Neural Network Built Building Time: 1.78 seconds Training the network ... 
input shape (4, 2) output shape (4,) Traceback (most recent call last): File "neuron2.py", line 59, in &lt;module&gt; pred, cost_iter = train(inputs, outputs) File "/home/ubuntu/Theano/theano/compile/function_module.py", line 879, in __call__ storage_map=getattr(self.fn, 'storage_map', None)) File "/home/ubuntu/Theano/theano/gof/link.py", line 325, in raise_with_op reraise(exc_type, exc_value, exc_trace) File "/home/ubuntu/Theano/theano/compile/function_module.py", line 866, in __call__ self.fn() if output_subset is None else\ ValueError: dimension mismatch in args to gemm (8,1)x(2,1)-&gt;(8,1) Apply node that caused the error: GpuDot22(GpuReshape{2}.0, GpuReshape{2}.0) Toposort index: 68 Inputs types: [CudaNdarrayType(float32, matrix), CudaNdarrayType(float32, matrix)] Inputs shapes: [(8, 1), (2, 1)] Inputs strides: [(1, 0), (1, 0)] Inputs values: ['not shown', CudaNdarray([[ 0.14762458] [ 0.12991147]])] Outputs clients: [[GpuReshape{3}(GpuDot22.0, Join.0)]] HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'. HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node. </code></pre>
0
2016-09-13T15:23:16Z
39,597,879
<p>The shared variables w1, w2 and w3 are created as matrices when cast this way, but they should be vectors; the casting should be done as follows.</p> <p>These lines: </p> <pre><code>w1 = shared(np.array([rng.random(1).astype(theano.config.floatX), rng.random(1).astype(theano.config.floatX)])) w2 = shared(np.array([rng.random(1).astype(theano.config.floatX), rng.random(1).astype(theano.config.floatX)])) w3 = shared(np.array([rng.random(1).astype(theano.config.floatX), rng.random(1).astype(theano.config.floatX)])) </code></pre> <p>Should be:</p> <pre><code>from random import random w1 = shared(np.asarray([random(), random()], dtype=theano.config.floatX)) w2 = shared(np.asarray([random(), random()], dtype=theano.config.floatX)) w3 = shared(np.asarray([random(), random()], dtype=theano.config.floatX)) </code></pre>
0
2016-09-20T15:18:40Z
[ "python", "numpy", "neural-network", "gpu", "theano" ]
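The shape problem from the answer can be seen with plain NumPy (no Theano needed): stacking length-1 arrays creates an extra axis that breaks the dot-product shapes.

```python
import numpy as np
from random import random

rng = np.random

# Stacking two length-1 arrays (as in the original code) yields
# a (2, 1) matrix...
w_matrix = np.array([rng.random(1), rng.random(1)])
print(w_matrix.shape)  # (2, 1)

# ...whereas a flat list of scalars yields the (2,) vector that a
# dot product with a (4, 2) input batch expects.
w_vector = np.asarray([random(), random()], dtype=np.float32)
print(w_vector.shape)  # (2,)
print(np.dot(np.zeros((4, 2)), w_vector).shape)  # (4,)
```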
Linux - reasons for SIGSTOP and how to deal with it?
39,473,817
<p>I have a Python script which is running bash scripts. I need to be able to kill the bash script if it seems to be infinite and it also has to be run in a chroot jail because the script might be dangerous. I run it with <code>psutil.Popen()</code> and leave it running for two seconds. If it does not end naturally, I send <code>SIGKILL</code> to it and all of its possible children.</p> <p>The problem is that if I kill one script due to overtime execution and run another one, the main (Python) script receives a <code>SIGSTOP</code>. On my local machine, I made a really stupid solution: the Python script wrote its PID to a file at startup and then I run another script, which was sending <code>SIGCONT</code> every second to the PID which was stored in the file. This has two problems: it is really stupid, but even worse is that it refuses to work on the server - <code>SIGCONT</code> just does nothing there.</p> <p>The sequence is: Python script runs a bash script responsible for the jail and that bash script runs the possibly dangerous and/or infinite script. 
This script might have some children as well.</p> <p>The relevant parts of the codes:</p> <p><strong>Main python script</strong></p> <pre><code> p = psutil.Popen(["bash", mode, script_path, self.TESTENV_ROOT]) start = time.time() while True: if p.status() == psutil.STATUS_ZOMBIE: # process ended naturally duration = time.time() - start self.stdout.write("Script finished, execution time: {}s".format(duration)) break if time.time() &gt; start + run_limit: children = p.children(recursive=True) for child in children: child.kill() p.kill() duration = None self.stdout.write("Script exceeded maximum time ({}s) and was killed.".format(run_limit)) break time.sleep(0.01) os.kill(os.getpid(), 17) # SIGCHLD return duration </code></pre> <p><strong>Running script in chroot</strong> ($1 is the script to be run in the chroot jail, $2 is the jail path)</p> <pre><code>#!/usr/bin/env bash # copy script to chroot environment cp "$1" "$2/prepare.sh" # run script chmod u+x "$2/prepare.sh" echo './prepare.sh' | chroot "$2" rm "$2/prepare.sh" </code></pre> <p><strong>Example prepare.sh script</strong></p> <pre><code>#!/bin/bash echo asdf &gt; file </code></pre> <p>I spent some time trying to solve the issue. I found out that this script (which is not using chroot jail to run bash scripts) is working perfectly:</p> <pre><code>import psutil import os import time while True: if os.path.exists("infinite.sh"): p = psutil.Popen(["bash","infinite.sh"]) start = time.time() while True: if p.status() == psutil.STATUS_ZOMBIE: # process ended naturally break if time.time() &gt; start + 2: # process needs too much time and has to be killed children = p.children(recursive=True) for child in children: child.kill() p.kill() break os.remove("infinite.sh") os.kill(os.getpid(), 17) </code></pre> <p>My questions are:</p> <ul> <li>Why am I receiving <code>SIGSTOP</code>s? 
Is it due to the chroot jail?</li> <li>Is there any better way of handling my problem than by running the "waking up" script?</li> </ul> <p>Thanks for your ideas.</p> <p><strong>EDIT:</strong> I found out that I am sigstopped at the moment I run the first script after I killed an overtime one. No matter if I use <code>os.system</code> or <code>psutil.Popen</code>.</p> <p><strong>EDIT2:</strong> I did even more investigation and the critical line is <code>echo './prepare.sh' | chroot "$2"</code> in the bash script controlling the chroot jail. The question now is, what the hell is wrong with it?</p> <p><strong>EDIT3:</strong> <a href="http://linux-kernel.vger.kernel.narkive.com/v8Jhd6WB/fork-sh-hello-microbenchmark-performance-in-chroot" rel="nofollow">This</a> might be a related problem, if it helps someone.</p>
0
2016-09-13T15:29:45Z
39,474,718
<p>I'm pretty sure you're running this on Mac OS and not Linux. Why? You're sending signal <code>17</code> to your main python process instead of using:</p> <pre><code>import signal signal.SIGCHLD </code></pre> <p>I believe you have a handler for signal <code>17</code> which is supposed to respawn the jailed process in response to this signal.<br> But <code>signal.SIGCHLD == 17</code> on Linux and <code>signal.SIGCHLD == 20</code> on Mac OS. </p> <p>Now the answer for your question is:<br> <strong><code>signal.SIGSTOP == 17</code> on Mac OS</strong>.<br> Yes, your process sends <code>SIGSTOP</code> to itself with <code>os.kill(os.getpid(), 17)</code><br> <a href="https://developer.apple.com/library/ios/documentation/System/Conceptual/ManPages_iPhoneOS/man3/signal.3.html" rel="nofollow">Mac OS signal man page</a></p> <p><strong>EDIT:</strong><br> Actually it can also happen on Linux since <a href="http://man7.org/linux/man-pages/man7/signal.7.html" rel="nofollow">Linux signal man page</a> says that POSIX standard allows signal <code>17</code> to be either <code>SIGUSR2</code>, <code>SIGCHLD</code> or <code>SIGSTOP</code>. Therefore I strongly recommend using constants from <code>signal</code> module of the standard library instead of hardcoded signal numbers.</p>
1
2016-09-13T16:18:52Z
[ "python", "linux", "bash", "shell" ]
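A quick illustration of the portability point from the answer (the printed numbers depend on the platform, which is exactly the problem with hard-coding 17):

```python
import os
import signal

# Use named constants instead of magic numbers: 17 is SIGCHLD on Linux
# but SIGSTOP on Mac OS, so os.kill(pid, 17) is not portable.
print(int(signal.SIGCHLD), int(signal.SIGSTOP))

# The portable spelling of the original os.kill(os.getpid(), 17):
# SIGCHLD is ignored by default, so sending it to ourselves is harmless.
os.kill(os.getpid(), signal.SIGCHLD)
```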
Linux - reasons for SIGSTOP and how to deal with it?
39,473,817
<p>I have a Python script which is running bash scripts. I need to be able to kill the bash script if it seems to be infinite and it also has to be run in a chroot jail because the script might be dangerous. I run it with <code>psutil.Popen()</code> and leave it running for two seconds. If it does not end naturally, I send <code>SIGKILL</code> to it and all of its possible children.</p> <p>The problem is that if I kill one script due to overtime execution and run another one, the main (Python) script receives a <code>SIGSTOP</code>. On my local machine, I made a really stupid solution: the Python script wrote its PID to a file at startup and then I run another script, which was sending <code>SIGCONT</code> every second to the PID which was stored in the file. This has two problems: it is really stupid, but even worse is that it refuses to work on the server - <code>SIGCONT</code> just does nothing there.</p> <p>The sequence is: Python script runs a bash script responsible for the jail and that bash script runs the possibly dangerous and/or infinite script. 
This script might have some children as well.</p> <p>The relevant parts of the codes:</p> <p><strong>Main python script</strong></p> <pre><code> p = psutil.Popen(["bash", mode, script_path, self.TESTENV_ROOT]) start = time.time() while True: if p.status() == psutil.STATUS_ZOMBIE: # process ended naturally duration = time.time() - start self.stdout.write("Script finished, execution time: {}s".format(duration)) break if time.time() &gt; start + run_limit: children = p.children(recursive=True) for child in children: child.kill() p.kill() duration = None self.stdout.write("Script exceeded maximum time ({}s) and was killed.".format(run_limit)) break time.sleep(0.01) os.kill(os.getpid(), 17) # SIGCHLD return duration </code></pre> <p><strong>Running script in chroot</strong> ($1 is the script to be run in the chroot jail, $2 is the jail path)</p> <pre><code>#!/usr/bin/env bash # copy script to chroot environment cp "$1" "$2/prepare.sh" # run script chmod u+x "$2/prepare.sh" echo './prepare.sh' | chroot "$2" rm "$2/prepare.sh" </code></pre> <p><strong>Example prepare.sh script</strong></p> <pre><code>#!/bin/bash echo asdf &gt; file </code></pre> <p>I spent some time trying to solve the issue. I found out that this script (which is not using chroot jail to run bash scripts) is working perfectly:</p> <pre><code>import psutil import os import time while True: if os.path.exists("infinite.sh"): p = psutil.Popen(["bash","infinite.sh"]) start = time.time() while True: if p.status() == psutil.STATUS_ZOMBIE: # process ended naturally break if time.time() &gt; start + 2: # process needs too much time and has to be killed children = p.children(recursive=True) for child in children: child.kill() p.kill() break os.remove("infinite.sh") os.kill(os.getpid(), 17) </code></pre> <p>My questions are:</p> <ul> <li>Why am I receiving <code>SIGSTOP</code>s? 
Is it due to the chroot jail?</li> <li>Is there any better way of handling my problem than by running the "waking up" script?</li> </ul> <p>Thanks for your ideas.</p> <p><strong>EDIT:</strong> I found out that I am sigstopped at the moment I run the first script after I killed an overtime one. No matter if I use <code>os.system</code> or <code>psutil.Popen</code>.</p> <p><strong>EDIT2:</strong> I did even more investigation and the critical line is <code>echo './prepare.sh' | chroot "$2"</code> in the bash script controlling the chroot jail. The question now is, what the hell is wrong with it?</p> <p><strong>EDIT3:</strong> <a href="http://linux-kernel.vger.kernel.narkive.com/v8Jhd6WB/fork-sh-hello-microbenchmark-performance-in-chroot" rel="nofollow">This</a> might be a related problem, if it helps someone.</p>
0
2016-09-13T15:29:45Z
39,487,438
<p>Ok, I finally found the solution. The problem really was on the chroot line in the bash script:</p> <pre><code>echo './prepare.sh' | chroot "$2" </code></pre> <p>This appears to be incorrect for some reason. The correct way to run a command in chroot is:</p> <pre><code>chroot chroot_path shell -c command </code></pre> <p>So for example:</p> <pre><code>chroot '/home/chroot_jail' '/bin/sh' -c 'rm -rf /' </code></pre> <p>Hope this helps someone. </p>
1
2016-09-14T09:51:16Z
[ "python", "linux", "bash", "shell" ]
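As a side note on the manual poll-and-kill loop in the question: on Python 3.3+ this can often be replaced by `subprocess.run` with a timeout, which kills the child and re-raises for you. A minimal sketch, using `sleep` as a stand-in for the possibly infinite script:

```python
import subprocess

try:
    # Stand-in for the possibly infinite prepare.sh script:
    subprocess.run(['sleep', '10'], timeout=2)
except subprocess.TimeoutExpired:
    print('script exceeded maximum time and was killed')
```

Note that the timeout kills only the direct child, not its descendants, so the psutil `children(recursive=True)` loop is still useful when the script spawns a process tree.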
Stop Word Removal with NLTK
39,473,824
<p>I've been working with NLTK and Database Classification. I'm having a problem with stop word removal. When I print the list of stop words all of the words are listed with "u'" before them. For example: [u'all', u'just', u'being', u'over', u'both', u'through'] I'm not sure if this is normal or part of the issue.</p> <p>When I print (1_feats) I get a list of words, with some of them being the stopwords listed in the corpus. </p> <pre><code>import os from nltk.classify import NaiveBayesClassifier from nltk.corpus import stopwords stopset = list(set(stopwords.words('english'))) morewords = 'delivery', 'shipment', 'only', 'copy', 'attach', 'material' stopset.append(morewords) def word_feats(words): return dict([(word, True) for word in words.split() if word not in stopset]) ids_1 = {} ids_2 = {} ids_3 = {} ids_4 = {} ids_5 = {} ids_6 = {} ids_7 = {} ids_8 = {} ids_9 = {} path1 = "/Users/myname/Documents/Data Classifier Files/1/" for name in os.listdir(path1): if name[-4:] == '.txt': f = open(path1 + "/" + name, "r") ids_1[name] = f.read() f.close() path2 = "/Users/myname/Documents/Data Classifier Files/2/" for name in os.listdir(path2): if name[-4:] == '.txt': f = open(path2 + "/" + name, "r") ids_2[name] = f.read() f.close() path3 = "/Users/myname/Documents/Data Classifier Files/3/" for name in os.listdir(path3): if name[-4:] == '.txt': f = open(path3 + "/" + name, "r") ids_3[name] = f.read() f.close() path4 = "/Users/myname/Documents/Data Classifier Files/4/" for name in os.listdir(path4): if name[-4:] == '.txt': f = open(path4 + "/" + name, "r") ids_4[name] = f.read() f.close() path5 = "/Users/myname/Documents/Data Classifier Files/5/" for name in os.listdir(path5): if name[-4:] == '.txt': f = open(path5 + "/" + name, "r") ids_5[name] = f.read() f.close() path6 = "/Users/myname/Documents/Data Classifier Files/6/" for name in os.listdir(path6): if name[-4:] == '.txt': f = open(path6 + "/" + name, "r") ids_6[name] = f.read() f.close() path7 = 
"/Users/myname/Documents/Data Classifier Files/7/" for name in os.listdir(path7): if name[-4:] == '.txt': f = open(path7 + "/" + name, "r") ids_7[name] = f.read() f.close() path8 = "/Users/myname/Documents/Data Classifier Files/8/" for name in os.listdir(path8): if name[-4:] == '.txt': f = open(path8 + "/" + name, "r") ids_8[name] = f.read() f.close() path9 = "/Users/myname/Documents/Data Classifier Files/9/" for name in os.listdir(path9): if name[-4:] == '.txt': f = open(path9 + "/" + name, "r") ids_9[name] = f.read() f.close() feats_1 = [(word_feats(ids_1[f]), '1') for f in ids_1 ] feats_2 = [(word_feats(ids_2[f]), "2") for f in ids_2 ] feats_3 = [(word_feats(ids_3[f]), '3') for f in ids_3 ] feats_4 = [(word_feats(ids_4[f]), '4') for f in ids_4 ] feats_5 = [(word_feats(ids_5[f]), '5') for f in ids_5 ] feats_6 = [(word_feats(ids_6[f]), '6') for f in ids_6 ] feats_7 = [(word_feats(ids_7[f]), '7') for f in ids_7 ] feats_8 = [(word_feats(ids_8[f]), '8') for f in ids_8 ] feats_9 = [(word_feats(ids_9[f]), '9') for f in ids_9 ] trainfeats = feats_1 + feats_2 + feats_3 + feats_4 + feats_5 + feats_6 + feats_7 + feats_8 + feats_9 classifier = NaiveBayesClassifier.train(trainfeats) </code></pre>
0
2016-09-13T15:29:58Z
39,478,210
<p>After executing these three lines,</p> <pre><code>stopset = list(set(stopwords.words('english'))) morewords = 'delivery', 'shipment', 'only', 'copy', 'attach', 'material' stopset.append(morewords) </code></pre> <p>have a look at <code>stopset</code> (output shortened):</p> <pre><code>&gt;&gt;&gt; stopset [u'all', u'just', u'being', ... u'having', u'once', ('delivery', 'shipment', 'only', 'copy', 'attach', 'material')] </code></pre> <p>The additional entries from <code>morewords</code> aren't on the same level as the previous words: instead, the whole tuple of words is seen as a single stop word, which makes no sense.</p> <p>The reason for that is simple: <code>list.append()</code> adds one element, <code>list.extend()</code> adds many.</p> <p>So, change <code>stopset.append(morewords)</code> to <code>stopset.extend(morewords)</code>.<br> Or even better, keep the stop words as a set, for faster lookup. The right method to add multiple elements is <code>set.update()</code>:</p> <pre><code>stopset = set(stopwords.words('english')) morewords = ['delivery', 'shipment', 'only', 'copy', 'attach', 'material'] stopset.update(morewords) </code></pre>
0
2016-09-13T20:08:32Z
[ "python", "python-3.x", "unicode", "nltk", "stop-words" ]
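The append-versus-extend distinction from the answer, runnable without NLTK (the three stand-in words play the role of `stopwords.words('english')`):

```python
english = ['all', 'just', 'being']            # stand-in for the NLTK list
morewords = ['delivery', 'shipment', 'only']

stoplist = list(english)
stoplist.append(morewords)      # wrong: the whole list becomes ONE element
print(morewords in stoplist)    # True
print('delivery' in stoplist)   # False

stopset = set(english)
stopset.update(morewords)       # right: each word is added individually
print('delivery' in stopset)    # True
```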
Git Blog - the pelican template disappears in the new deployed blog but exists in localhost
39,473,867
<p>Sorry if I didn't express the question correctly - I am trying to set up a blog on GitHub using Pelican, but I am new to both of them. </p> <p>So I followed some websites and tried to release one page; however, when I do <code>make serve</code> on my local drive the blog looks OK at <code>localhost:8000</code>.</p> <p>But after pushing to GitHub, the template of the blog disappears and the webpage looks pretty ugly. Also, if I click on the "Read more" hyperlink, the page navigates to a 404 error.</p> <p>Did I miss anything here? Many thanks if anyone could shed some light on this!</p>
1
2016-09-13T15:32:23Z
39,476,013
<p>From your question, what I understood is that you are having a problem publishing a Pelican site on GitHub. To my knowledge, the steps below are the way to publish it. I don't know why you got the 404 error, though.</p> <p><strong>Step 1:</strong> First you need to create a repository on GitHub. To create it, follow these steps: go to github.com -- sign in -- select GitHub Pages (Project Pages) -- click '+' to create a new repository -- give the repository a name (e.g. Blog, Order System) -- check the 'Public' radio button -- check the 'Initialize with README' checkbox -- click Create Repository.</p> <p><strong>Note:</strong> Make sure you use a .gitignore file before committing files.</p> <p><strong>Step 2:</strong> Once the repository is created you will be on the master branch. Click on master -- create a gh-pages branch -- in the branches section set gh-pages as the default branch -- click on 'Code' in the menu bar and delete the master branch.</p> <p>Now you need to get the repository onto your local machine. Copy your project files into the working directory of the gh-pages branch, open a command prompt, <code>cd</code> into the project directory (e.g. here we have "order systems") and run <code>git add .</code>, then <code>git commit -a -m "initialize"</code>, then <code>git push origin gh-pages</code>.</p> <p>It will ask you to enter your Git credentials. Enter them and sign in, then go to Settings: you can see the pages are published.</p> <p>I hope this is helpful for you.</p>
0
2016-09-13T17:41:55Z
[ "python", "git", "markdown", "blogs", "pelican" ]
How to square only the positive numbers in a list?
39,473,895
<p>I am trying to only square the positive numbers in a list. But when I try the code it squares all of them</p> <pre><code>def squarethis(numbers): for n in numbers: if n &gt; 0: return[n ** 2 for n in numbers] print(squarethis([1, 3, 5, -81])) </code></pre> <p>Why does it square all numbers? The if-statement should filter the negatives out right?</p>
0
2016-09-13T15:33:35Z
39,473,965
<p>Just use a single list comprehension:</p> <pre><code>def squarethis(numbers): return [n ** 2 for n in numbers if n &gt; 0] </code></pre> <p>What you are doing in the code you provided is checking if the first value in <code>numbers</code> is <code>&gt; 0</code>, and if it is, returning your list comprehension with every value in <code>numbers</code> squared. Instead, you can do your filtering in the list comprehension itself, and return the list.</p>
7
2016-09-13T15:37:21Z
[ "python", "list", "math", "int", "square" ]
How to square only the positive numbers in a list?
39,473,895
<p>I am trying to only square the positive numbers in a list. But when I try the code it squares all of them</p> <pre><code>def squarethis(numbers): for n in numbers: if n &gt; 0: return[n ** 2 for n in numbers] print(squarethis([1, 3, 5, -81])) </code></pre> <p>Why does it square all numbers? The if-statement should filter the negatives out right?</p>
0
2016-09-13T15:33:35Z
39,474,305
<p>A purely functional alternative -></p> <pre><code>&gt;&gt;&gt; map(lambda x: x**2, filter(lambda x: x &gt; 0, [1, 2, 3, -1, -2, -3])) [1, 4, 9] </code></pre>
1
2016-09-13T15:54:57Z
[ "python", "list", "math", "int", "square" ]
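Both approaches from the answers, side by side, with a note for Python 3 where `map` and `filter` return lazy iterators rather than lists:

```python
nums = [1, 3, 5, -81]

# List comprehension: filter and square in one pass.
squares = [n ** 2 for n in nums if n > 0]
print(squares)  # [1, 9, 25]

# map/filter version; list() is needed in Python 3 to materialise it.
squares_fn = list(map(lambda x: x ** 2, filter(lambda x: x > 0, nums)))
print(squares_fn)  # [1, 9, 25]
```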
What is the significance of scikit-learn GridSearchCV best_score_
39,473,916
<p>I can see the answer at <a href="http://stackoverflow.com/questions/24096146/how-is-scikit-learn-gridsearchcv-best-score-calculated">How is scikit-learn GridSearchCV best_score_ calculated?</a> for what this score means.</p> <p>I am working with the scikit-learn example for decision trees and trying various values for the scoring parameter.</p> <pre><code>if __name__ == '__main__': df = pd.read_csv('/Users/tcssig/Downloads/ad-dataset/ad.data', header=None) explanatory_variable_columns = set(df.columns.values) response_variable_column = df[len(df.columns.values)-1] # The last column describes the targets explanatory_variable_columns.remove(len(df.columns.values)-1) y = [1 if e == 'ad.' else 0 for e in response_variable_column] X = df[list(explanatory_variable_columns)] X.replace(to_replace=' *\?', value=-1, regex=True, inplace=True) X_train, X_test, y_train, y_test = train_test_split(X, y) pipeline = Pipeline([('clf', DecisionTreeClassifier(criterion='entropy'))]) parameters = {'clf__max_depth': (150, 155, 160), 'clf__min_samples_split': (1, 2, 3), 'clf__min_samples_leaf': (1, 2, 3)} grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1,verbose=1, scoring='accuracy') grid_search.fit(X_train, y_train) print ('Best score: %0.3f' % grid_search.best_score_) best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print ('\t%s: %r' % (param_name, best_parameters[param_name])) predictions = grid_search.predict(X_test) print (classification_report(y_test, predictions)) </code></pre> <p>Every time I get a different value for <code>best_score_</code>, ranging from <code>0.92</code> to <code>0.96</code>.</p> <p>Should this score determine the scoring parameter value that I should finally use? Also, on the scikit-learn website, I see that the accuracy value should not be used in the case of imbalanced classification.</p>
0
2016-09-13T15:34:44Z
39,474,373
<p>The best_score_ value is different every time because you have not passed a fixed value for random_state in your DecisionTreeClassifier. You can do the following in order to get the same value every time you run your code on any machine.</p> <pre><code>random_seed = 77 ##It can be any value of your choice pipeline = Pipeline([('clf', DecisionTreeClassifier(criterion='entropy', random_state = random_seed))]) </code></pre> <p>I hope this will be useful.</p>
0
2016-09-13T15:58:51Z
[ "python", "pandas", "scikit-learn", "grid-search" ]
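An editor's note in code form: the fix above can be checked with a small, self-contained sketch. The dataset (iris) and the tiny parameter grid are illustrative stand-ins for the asker's ad data, and the import paths follow current scikit-learn (`model_selection` rather than the old `grid_search` module).

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
params = {'max_depth': (2, 3)}

def best_score(seed):
    # Fixing random_state pins down the randomness inside the tree builder,
    # so the whole grid search becomes reproducible.
    clf = DecisionTreeClassifier(criterion='entropy', random_state=seed)
    gs = GridSearchCV(clf, params, cv=3)
    gs.fit(X, y)
    return gs.best_score_

score_a = best_score(77)
score_b = best_score(77)  # same seed -> identical best_score_
```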
Initilize multiple dataframe columns with categorical labels
39,473,952
<p>Problem statement:</p> <p>I need to load thousands of CSV files into a data frame. All files have the same columns. The values in each of the columns belong to a <strong>limited</strong> set of possible values in all cases (different per column). The length of the values lies in the hundreds of chars. I do not know these values beforehand.</p> <p>My approach has been to parse each of the files, convert it into a dataframe with categorical columns, and store them in an HDF store. Later I concat them all together into one "in memory" dataframe.</p> <p>As I cannot concatenate all these dataframes due to conflicting category values, I want to create an empty dataframe with the same columns and all category values seen in the files I processed.</p> <p>The empty categorized dataframe is my starting point to concatenate one after another.</p> <pre><code>df=pd.DataFrame(columns=["A","B"], dtypes={"A":"category","B":"category"} categories={"A":["a","b","c"],"B":["A","B","C","D"]}) df.concat[df1,df2,df3,d4] </code></pre> <p>or so I wish....</p> <p>Would a different strategy work better?</p>
1
2016-09-13T15:36:44Z
39,600,440
<p>This is resolved in pandas v0.19.0, see the <a href="https://github.com/pydata/pandas/issues/12699" rel="nofollow">issue on GitHub</a> and the <a href="http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#categorical-concatenation" rel="nofollow">pandas v0.19 dev docs</a>.</p> <p>However, there is another post with an excellent and detailed solution to this very same problem, with a pre-v0.19 workaround: <a href="http://stackoverflow.com/questions/29709918/pandas-and-category-replacement">pandas and category replacement</a></p>
0
2016-09-20T17:35:23Z
[ "python", "pandas", "dataframe" ]
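The pre-v0.19 workaround referenced above can be sketched on toy frames. The key step is building the union of every category value seen across the frames, then re-declaring each column with that shared category set before concatenating; `union_categoricals` and the column names here are just one way to do it.

```python
import pandas as pd
from pandas.api.types import union_categoricals

# Two toy frames whose category sets conflict ('a','b' vs 'b','c')
df1 = pd.DataFrame({'A': pd.Categorical(['a', 'b'])})
df2 = pd.DataFrame({'A': pd.Categorical(['b', 'c'])})

# Union of all category values seen across the frames
all_cats = union_categoricals([df1['A'], df2['A']]).categories

# Give every frame the same category set so concat keeps the dtype
for df in (df1, df2):
    df['A'] = df['A'].cat.set_categories(all_cats)

combined = pd.concat([df1, df2], ignore_index=True)
```

Because both columns now share one identical category set, the result stays categorical instead of falling back to `object`.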
ValueError: 'object too deep for desired array'
39,474,056
<p>I have a ValueError: 'object too deep for desired array' in a Python program. I have this error while using numpy.digitize.<br> I think it's how I use Pandas DataFrames:<br> To keep it simple (because this is done through an external library), I have a list in my program but the library needs a DataFrame so I do something like this:</p> <pre><code>ts = range(1000) df = pandas.DataFrame(ts) res = numpy.digitize(df.values, bins) </code></pre> <p>But then it seems like df.values is an array of lists instead of an array of floats. I mean:</p> <pre><code>array([[ 0], [ 1], [ 2], ..., [997], [998], [999]], dtype=int64) </code></pre> <p>Help please, I spent too much time on this.</p>
0
2016-09-13T15:42:04Z
39,474,209
<p>Try this:</p> <pre><code>numpy.digitize(df.iloc[:, 0], bins) </code></pre> <p>You are trying to get the values from a whole DataFrame. That is why you get the 2D array. Each row in the array is a row of the DataFrame.</p>
1
2016-09-13T15:49:48Z
[ "python", "pandas", "numpy", "dataframe" ]
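The shape issue behind the error can be shown without the asker's data: `DataFrame.values` on a one-column frame is 2-D (a column of one-element rows), while `df.iloc[:, 0]` hands `np.digitize` the 1-D column it expects. The bins below are made up for illustration.

```python
import numpy as np
import pandas as pd

ts = range(10)
df = pd.DataFrame(ts)
bins = np.array([0, 3, 6, 9])

# df.values has shape (10, 1): each row is a one-element list-like,
# which is what triggered "object too deep" on older numpy.
col = df.iloc[:, 0]          # 1-D Series of the first column
res = np.digitize(col, bins)  # works: one bin index per value
```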
Recording audio for specific amount of time with PyAudio?
39,474,111
<p>I am trying to learn about audio capture/recording using Python and in this case PyAudio. I am taking a look at a few examples and came across this one:</p> <pre><code>import pyaudio import wave CHUNK = 2 FORMAT = pyaudio.paInt16 CHANNELS = 2 RATE = 44100 RECORD_SECONDS = 3 WAVE_OUTPUT_FILENAME = "output.wav" p = pyaudio.PyAudio() stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE, input=True, frames_per_buffer=CHUNK) print("* recording") frames = [] for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)): data = stream.read(CHUNK) frames.append(data) print(int(RATE / CHUNK * RECORD_SECONDS)) print("* done recording") stream.stop_stream() stream.close() p.terminate() wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb') wf.setnchannels(CHANNELS) wf.setsampwidth(p.get_sample_size(FORMAT)) wf.setframerate(RATE) wf.writeframes(b''.join(frames)) wf.close() </code></pre> <p>I think I have a rough understanding of what CHUNK, FORMAT, CHANNELS and RATE all mean and do, but I don't understand how recording for specific amounts of time works. If I was to change the value of CHUNK from 2 to 4, the value of int(RATE / CHUNK * RECORD_SECONDS) would be halved. But then if I was to run the code, the recording will still occur for the 3 seconds specified. </p> <p>Ultimately, how can this for loop execute in the same amount of time when the range is halved?</p> <p>Sorry if I don't make sense, it feels like a stupid question.</p> <p>Edit: So changing the number of samples read manually, without changing the range the for loop is iterating over (so is constant at range(0, 60000) but data = sample.read(CHUNK) varies), does change the time taken to record. That means doubling the samples read each iteration doubles the time taken and so does that mean it just takes twice as long to process the data? But if so, wouldn't the time taken vary on different computers depending on the processing power available?</p>
0
2016-09-13T15:44:41Z
39,477,855
<p><code>CHUNK</code> is the number of samples in a block of data. I would call this "block size". Sound cards and sound drivers typically don't process one sample after the other but they use, well, chunks. The block size of those is typically a few hundred samples, e.g. 512 or 1024 samples. Only if you need very low latencies, you should try to use smaller block sizes, like 64 or 32 samples. A block size of 2 typically doesn't work well.</p> <p><code>RATE</code> is the sampling rate, i.e. the number of samples per seconds. 44100 Hertz is a typical sampling rate from the era of CDs, nowadays you'll also often see 48000 Hertz.</p> <p>The <code>for</code>-loop in your example is reading blocks of data (or "chunks" if you prefer) from the audio hardware. If you want to record 3 seconds of audio, you'll need to record <code>3 * RATE</code> samples. To get the number of <em>blocks</em> you'll have to divide that by the block size <code>CHUNK</code>.</p> <p>If you change the value of <code>CHUNK</code>, this doesn't change the duration of the whole recording (apart from some truncation done by <code>int()</code>), but it changes the number of times the <code>for</code>-loop is running.</p> <p>If you are willing to use NumPy, there is a much simpler way to record a few seconds of audio into a WAV file: Use the <a href="http://python-sounddevice.readthedocs.io/" rel="nofollow">sounddevice</a> module to record the audio data and the <a href="http://pysoundfile.readthedocs.io/" rel="nofollow">soundfile</a> module to save it to a WAV file:</p> <pre><code>import sounddevice as sd import soundfile as sf samplerate = 44100 # Hertz duration = 3 # seconds filename = 'output.wav' mydata = sd.rec(int(samplerate * duration), samplerate=samplerate, channels=2, blocking=True) sf.write(filename, mydata, samplerate) </code></pre> <p>BTW, you don't need to specify the block size if you have no reason for it. The underlying library (PortAudio) will automatically choose one for you.</p>
0
2016-09-13T19:44:02Z
[ "python", "for-loop", "audio", "pyaudio" ]
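The loop-count arithmetic from the answer, with no sound card involved: the recorded duration is fixed by `RATE * RECORD_SECONDS` total samples, so doubling `CHUNK` halves the number of reads but leaves the number of samples (and therefore the duration) unchanged.

```python
RATE = 44100
RECORD_SECONDS = 3

def num_blocks(chunk):
    # Number of for-loop iterations, as in the question's code
    return int(RATE / chunk * RECORD_SECONDS)

blocks_small = num_blocks(2)   # many tiny reads
blocks_big = num_blocks(4)     # half as many reads, twice as large

# Total samples captured is the same either way
samples_small = blocks_small * 2
samples_big = blocks_big * 4
```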
Odoo qweb call python method
39,474,140
<p>I want to modify the RFQ report and in that I wanted to call a python method from the Qweb report, </p> <p>here is some sample code,</p> <pre><code>&lt;span t-field ="o.my_custom_fuction()" /&gt; </code></pre> <p>and my python function is like</p> <pre><code>@api.model def my_custom_function(self): return "some_value" </code></pre> <p>But it is giving me error like qwebException : "my_custom_function()" while evaluating.</p> <p>Any clue what I am missing?</p>
1
2016-09-13T15:46:03Z
39,474,355
<blockquote> <blockquote> <p>The t-field directive can only be used when performing field access (a.b) on a "smart" record (result of the browse method). </p> </blockquote> </blockquote> <p>To call that function you will need to use <code>t-esc</code> (takes an expression, evaluates it and prints the content): </p> <pre><code>&lt;span t-esc="o.my_custom_function()" /&gt; </code></pre> <p>I used the <a href="https://www.odoo.com/documentation/8.0/reference/qweb.html" rel="nofollow">Odoo QWeb reference</a></p>
2
2016-09-13T15:57:40Z
[ "python", "openerp", "odoo-8", "odoo-9" ]
with salt how to access reclass vs pillar data?
39,474,148
<p>I'm looking at the readme of the salt-swift <a href="https://github.com/openstack/salt-formula-swift" rel="nofollow">formula</a>, and this has me wondering:</p> <pre><code> rings: - name: default partition_power: 9 replicas: 3 hours: 1 region: 1 devices: - address: ${_param:storage_node01_address} device: vdb - address: ${_param:storage_node02_address} device: vdc - address: ${_param:storage_node03_address} device: vdd </code></pre> <p>Where are variables like <code>${_param:storage_node01_address}</code> supposed to be defined? This is the pillar; I would assume the addresses would simply be placed right here.</p>
0
2016-09-13T15:46:26Z
39,480,408
<p>You could definitely just put the addresses right there. You could also define pillar data in the regular manner and access it from there.</p>
1
2016-09-13T23:28:44Z
[ "python", "yaml", "salt-stack", "configuration-management" ]
Cardano's formula not working with numpy?
39,474,254
<p>--- using python 3 ---</p> <p>Following the equations <a href="https://proofwiki.org/wiki/Cardano&#39;s_Formula" rel="nofollow">here</a>, I tried to find all real roots of an arbitrary third-order-polynomial. Unfortunatelly, my implementation does not yield the correct result and I cannot find the error. Maybe you are able to spot it within a blink of an eye and tell me.</p> <p>(As you notice, only the roots of the green curve are wrong.)</p> <p>With best regards</p> <pre><code>import numpy as np def find_cubic_roots(a,b,c,d): # with ax³ + bx² + cx + d = 0 a,b,c,d = a+0j, b+0j, c+0j, d+0j all_ = (a != np.pi) Q = (3*a*c - b**2)/ (9*a**2) R = (9*a*b*c - 27*a**2*d - 2*b**3) / (54 * a**3) D = Q**3 + R**2 S = (R + np.sqrt(D))**(1/3) T = (R - np.sqrt(D))**(1/3) result = np.zeros(tuple(list(a.shape) + [3])) + 0j result[all_,0] = - b / (3*a) + (S+T) result[all_,1] = - b / (3*a) - (S+T) / 2 + 0.5j * np.sqrt(3) * (S - T) result[all_,2] = - b / (3*a) - (S+T) / 2 - 0.5j * np.sqrt(3) * (S - T) return result </code></pre> <p>The example where you see it does not work:</p> <pre><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() a = np.array([2.5]) b = np.array([-5]) c = np.array([0]) x = np.linspace(-2,3,100) for i, d in enumerate([-8,0,8]): d = np.array(d) roots = find_cubic_roots(a,b,c,d) ax.plot(x, a*x**3 + b*x**2 + c*x + d, label = "a = %.3f, b = %.3f, c = %.3f, d = %.3f"%(a,b,c,d), color = colors[i]) print(roots) ax.plot(x, x*0) ax.scatter(roots,roots*0, s = 80) ax.legend(loc = 0) ax.set_xlim(-2,3) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/8kXx6.png" rel="nofollow"><img src="http://i.stack.imgur.com/8kXx6.png" alt="Easy Example"></a></p> <p>Output:</p> <pre><code>[[ 2.50852567+0.j -0.25426283+1.1004545j -0.25426283-1.1004545j]] [[ 2.+0.j 0.+0.j 0.-0.j]] [[ 1.51400399+1.46763129j 1.02750817-1.1867528j -0.54151216-0.28087849j]] </code></pre>
2
2016-09-13T15:52:38Z
39,477,903
<p>Here is my stab at the solution. Your code fails for the case where <code>R + np.sqrt(D)</code> or <code>R - np.sqrt(D)</code> is negative. The reason is in <a href="http://stackoverflow.com/questions/31231115/raise-to-1-3-gives-complex-number">this post</a>. Basically if you do <code>a**(1/3)</code> where <code>a</code> is negative, numpy returns a complex number. However, we, in fact, want <code>S</code> and <code>T</code> to be real since the cube root of a negative real number is simply a negative real number (let's ignore <strong>De Moivre's</strong> theorem for now and focus on the code and not the math). The way to work around it is to check if <code>S</code> is real, cast it to real and pass it to <code>cbrt</code> from <code>scipy.special</code>. Similarly for <code>T</code>. Example code:</p> <pre><code>import numpy as np import pdb import math from scipy.special import cbrt def find_cubic_roots(a,b,c,d, bp = False): a,b,c,d = a+0j, b+0j, c+0j, d+0j all_ = (a != np.pi) Q = (3*a*c - b**2)/ (9*a**2) R = (9*a*b*c - 27*a**2*d - 2*b**3) / (54 * a**3) D = Q**3 + R**2 S = 0 #NEW CALCULATION FOR S STARTS HERE if np.isreal(R + np.sqrt(D)): S = cbrt(np.real(R + np.sqrt(D))) else: S = (R + np.sqrt(D))**(1/3) T = 0 #NEW CALCULATION FOR T STARTS HERE if np.isreal(R - np.sqrt(D)): T = cbrt(np.real(R - np.sqrt(D))) else: T = (R - np.sqrt(D))**(1/3) result = np.zeros(tuple(list(a.shape) + [3])) + 0j result[all_,0] = - b / (3*a) + (S+T) result[all_,1] = - b / (3*a) - (S+T) / 2 + 0.5j * np.sqrt(3) * (S - T) result[all_,2] = - b / (3*a) - (S+T) / 2 - 0.5j * np.sqrt(3) * (S - T) #if bp: #pdb.set_trace() return result import matplotlib.pyplot as plt fig, ax = plt.subplots() a = np.array([2.5]) b = np.array([-5]) c = np.array([0]) x = np.linspace(-2,3,100) for i, d in enumerate([-8,0,8]): d = np.array(d) if d == 8: roots = find_cubic_roots(a,b,c,d, True) else: roots = find_cubic_roots(a,b,c,d) ax.plot(x, a*x**3 + b*x**2 + c*x + d, label = "a = %.3f, b = %.3f, c = %.3f, d = %.3f"%(a,b,c,d)) print(roots) ax.plot(x, x*0) ax.scatter(roots,roots*0, s = 80) ax.legend(loc = 0) ax.set_xlim(-2,3) plt.show() </code></pre> <p>DISCLAIMER: The output root gives some warning, which you can <em>probably</em> ignore. The output is correct. However, the plotting shows an extra root for some reason. This is likely due to your plotting code. The printed roots look fine though.</p>
1
2016-09-13T19:46:51Z
[ "python", "numpy", "calculus" ]
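Not part of the answer, but a handy independent cross-check when debugging a hand-written Cardano implementation: `np.roots` solves the same cubic (here the asker's a=2.5, b=-5, c=0, d=8 case) directly, and every returned root should make the polynomial vanish to floating-point accuracy.

```python
import numpy as np

# Coefficients of 2.5x^3 - 5x^2 + 0x + 8, highest degree first
coeffs = [2.5, -5.0, 0.0, 8.0]
roots = np.roots(coeffs)

# Plug each root back in; the residuals should be ~0
residuals = [abs(np.polyval(coeffs, r)) for r in roots]
```

Comparing these roots against the output of a custom `find_cubic_roots` is a quick way to see which branch of the cube-root handling went wrong.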
how twisted server detects an incomplete message
39,474,266
<p>I am right now implementing a twisted client and twisted server. The question is how does the server detect if a message that is sent by client is completed?</p> <p>For example, there are 2 clients sending messages to server, the message is a python list which has only several elements, respectively. The 2nd client sends the message very short time after the 1st client does. </p> <p>Since it's async, the server will switch to serve the 2nd client, leaving the message of 1st client half-processed. So how do I do to let the server know this message is not complete? Thanks in advance. </p>
0
2016-09-13T15:53:07Z
39,475,446
<p>You want to implement some sort of <em>protocol</em> (not to be confused with <code>twisted.internet.protocol</code>) which has some sort of delimiter signifying the beginning and end of a message and how long your message will be. For example, let's define a protocol which implements the following rules:</p> <ol> <li>All messages start with <code>{</code></li> <li>An <code>int</code> number of bytes which will contain the message body, followed by <code>:</code></li> <li>The message body</li> <li>An end tag <code>}</code></li> </ol> <p>An example expected message would look like:</p> <pre><code>{10:Hey Earth!} </code></pre> <p>Twisted provides many interfaces to do this for you, so you don't really have to do this on your own. There's <a href="http://twistedmatrix.com/documents/current/core/howto/servers.html#helper-protocols" rel="nofollow"><code>LineReceiver</code></a> which combines bytes until a line break. There's the <a href="http://twistedmatrix.com/documents/current/core/howto/amp.html" rel="nofollow">AMP protocol</a> and <a href="https://twistedmatrix.com/documents/current/api/twisted.protocols.basic.NetstringReceiver.html" rel="nofollow"><code>NetStrings</code></a>, which is similar to the example I provided previously.</p> <h2>References:</h2> <ul> <li><a href="http://krondo.com/a-poetry-transformation-server/" rel="nofollow">http://krondo.com/a-poetry-transformation-server/</a> - search for <code>netstring</code> you should find a good example of how to write one for yourself.</li> </ul>
1
2016-09-13T17:05:31Z
[ "python", "twisted" ]
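A toy length-prefixed framer in the spirit of netstrings (purely illustrative; this is not Twisted's `NetstringReceiver` API). Each message is sent as `<length>:<payload>,`, so the receiver can tell a complete message from a partial one even when bytes from a client arrive in fragments:

```python
def encode(msg: bytes) -> bytes:
    # Frame a message as b"<length>:<payload>,"
    return str(len(msg)).encode() + b':' + msg + b','

def decode_all(buf: bytes):
    # Extract every complete message; return (messages, leftover bytes)
    msgs = []
    while b':' in buf:
        length_part, rest = buf.split(b':', 1)
        length = int(length_part)
        if len(rest) < length + 1:   # payload (plus trailing ',') incomplete
            break
        msgs.append(rest[:length])
        buf = rest[length + 1:]
    return msgs, buf

# One full message followed by the first 5 bytes of a second one
stream = encode(b'Hey Earth!') + encode(b'from client 2')[:5]
complete, remainder = decode_all(stream)
```

The receiver keeps `remainder` around and prepends it to the next chunk of bytes, which is exactly how a Twisted protocol's `dataReceived` would accumulate a partial message.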
passing function to a class using @property.setter decorator
39,474,333
<p>I am making a class in which I want to declare a variable that holds a function, and I want to call that function after I do some processing on some information. But I don't know how to use the property decorator in this situation. I already have this code:</p> <pre><code>class MyClass: def __init__(self): self.callback = None def run(self): #do something here... result = self.callback(result) print(result) def func1(result): result = result ** 2 return result def func2(result): result = result ** 4 return result class1 = MyClass() class1.callback = func1 class1.run() class1.callback = func2 class1.run() </code></pre> <p>My question is: how can I use the <code>@property</code>, <code>@property.setter</code> and <code>@property.getter</code> decorators for self.callback in this code?</p>
0
2016-09-13T15:56:20Z
39,474,537
<p>Based on this code I don't see a need for <code>properties</code>, but here it is anyway. Naming the property <code>callback</code> keeps it compatible with your existing <code>class1.callback = func1</code> assignments.</p> <pre><code>class MyClass: def __init__(self): self.__callback = None @property def callback(self): return self.__callback @callback.setter def callback(self, new_cb): self.__callback = new_cb </code></pre>
0
2016-09-13T16:07:44Z
[ "python", "python-3.x", "callback", "python-3.5" ]
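A runnable sketch of the question's pattern with a property-managed callback (names follow the question loosely; the type check in the setter is an assumed extra, added to show why a property might be worth having here):

```python
class MyClass:
    def __init__(self):
        self._callback = None

    @property
    def callback(self):
        return self._callback

    @callback.setter
    def callback(self, func):
        # The setter is a natural place for validation
        if not callable(func):
            raise TypeError("callback must be callable")
        self._callback = func

    def run(self, value):
        # Delegate the work to whatever callback is currently set
        return self.callback(value)

obj = MyClass()
obj.callback = lambda x: x ** 2
squared = obj.run(3)
obj.callback = lambda x: x ** 4
fourth = obj.run(3)
```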
Building a tuple containing colons to be used to index a numpy array
39,474,396
<p>I've created a class for dealing with multidimensional data of a specific type. This class has three attributes: A list containing the names of the axes (<strong>self.axisNames</strong>); a dictionary containing the parameter values along each axis (<strong>self.axes</strong>; keyd using the entries in axisNames); and a numpy array containing the data, with a dimension for each axis (<strong>self.intensityArr</strong>).</p> <p>The class also has functions that dynamically add new axes depending on what I need for a specific case, which makes indexing the intensityArr a tricky proposition. To make indexing better I've started writing a function to build the index I need:</p> <p>Inside class:</p> <pre><code>def indexIntensityArr(self,indexSpec): # indexSpec is a dictionary containing axisName:indexVal entries (indexVal is an int) # I want the function to return a tuple for use in indexing (see below def) indexList = [] for axis in self.axisNames: if axis in indexSpec: indexList.append(indexSpec[axis]) else: # &lt;do something to add : to index list (tuple)&gt; return tuple(indexList) </code></pre> <p>Outside class:</p> <pre><code># ... create an instance of my class called myBlob with 4 dimensions ... mySpec = {'axis1':10,'axis3':7} mySlicedArr = myBlob.intensityArr[myBlob.indexIntensityArr(mySpec)] </code></pre> <p>I expect the above to result in mySlicedArr being a 2-dimensional array.</p> <p>What do I need to put in the 'else' clause to get an : (or equivalent) in the tuple I use to index the intensityArr? Is this perhaps a bad way to solve the problem?</p>
2
2016-09-13T16:00:01Z
39,482,568
<p>Inside indexing <code>[]</code>, a <code>:</code> is translated to a <code>slice</code>, and the whole thing is passed to <code>__getitem__</code> as a tuple</p> <pre><code>indexList = [] for axis in self.axisNames: if axis in indexSpec: indexList.append(indexSpec[axis]) else: indexList.append(slice(None)) </code></pre> <p>There are several <code>numpy</code> functions that use an indexing trick like this - that is build up a tuple of index values and slices. Or if they need to vary it, they'll start with a list, which can mutate, and convert it to a tuple right before use. (e.g. <code>np.apply_along_axis</code>)</p> <p>Yes, the full spec for slice is <code>slice(start, stop, step)</code>, with start and stop optional. Same as for <code>np.arange</code> or <code>range</code>. And <code>None</code> is equivalent to the unspecified values in a <code>:</code> expression.</p> <p>A little custom class in <code>np.lib.index_tricks.py</code> translates the : notation into slices:</p> <pre><code>In [61]: np.s_[:,:1,0:,::3] Out[61]: (slice(None, None, None), slice(None, 1, None), slice(0, None, None), slice(None, None, 3)) </code></pre>
2
2016-09-14T04:38:13Z
[ "python", "arrays", "numpy" ]
Building a tuple containing colons to be used to index a numpy array
39,474,396
<p>I've created a class for dealing with multidimensional data of a specific type. This class has three attributes: A list containing the names of the axes (<strong>self.axisNames</strong>); a dictionary containing the parameter values along each axis (<strong>self.axes</strong>; keyd using the entries in axisNames); and a numpy array containing the data, with a dimension for each axis (<strong>self.intensityArr</strong>).</p> <p>The class also has functions that dynamically add new axes depending on what I need for a specific case, which makes indexing the intensityArr a tricky proposition. To make indexing better I've started writing a function to build the index I need:</p> <p>Inside class:</p> <pre><code>def indexIntensityArr(self,indexSpec): # indexSpec is a dictionary containing axisName:indexVal entries (indexVal is an int) # I want the function to return a tuple for use in indexing (see below def) indexList = [] for axis in self.axisNames: if axis in indexSpec: indexList.append(indexSpec[axis]) else: # &lt;do something to add : to index list (tuple)&gt; return tuple(indexList) </code></pre> <p>Outside class:</p> <pre><code># ... create an instance of my class called myBlob with 4 dimensions ... mySpec = {'axis1':10,'axis3':7} mySlicedArr = myBlob.intensityArr[myBlob.indexIntensityArr(mySpec)] </code></pre> <p>I expect the above to result in mySlicedArr being a 2-dimensional array.</p> <p>What do I need to put in the 'else' clause to get an : (or equivalent) in the tuple I use to index the intensityArr? Is this perhaps a bad way to solve the problem?</p>
2
2016-09-13T16:00:01Z
39,483,091
<p>To add to hpaulj's answer, you can very simply extend your setup to make it even more generic by using <code>np.s_</code>. The advantage of using this over <code>slice</code> is that you can use <code>numpy</code>'s slice syntax more easily and transparently. For example:</p> <pre><code>mySpec = {'axis1': np.s_[10:15], 'axis3': np.s_[7:8]} mySlicedArr = myBlob.intensityArr[myBlob.indexIntensityArr(mySpec)] </code></pre> <p>(Extra info: <code>np.s_[7:8]</code> retrieves only the element at index 7 along that axis, but it preserves the dimension, i.e. your sliced array will still be 4D with a shape of 1 in that dimension: very useful for broadcasting.)</p> <p>And if you want to use the same syntax in your function definition as well:</p> <pre><code>indexList = [] for axis in self.axisNames: if axis in indexSpec: indexList.append(indexSpec[axis]) else: indexList.append(np.s_[:]) return tuple(indexList) </code></pre> <p>All of this can be done equally well with <code>slice</code>. You would specify <code>np.s_[10:15]</code> as <code>slice(10, 15)</code>, and <code>np.s_[:]</code> as <code>slice(None)</code>, as hpaulj says.</p>
0
2016-09-14T05:32:54Z
[ "python", "arrays", "numpy" ]
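The trick from both answers in isolation: build an index tuple where every axis not mentioned in the spec gets `slice(None)` (the programmatic spelling of `:`). The 4-D array and axis names below are made up:

```python
import numpy as np

arr = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
axis_names = ['axis0', 'axis1', 'axis2', 'axis3']
spec = {'axis1': 1, 'axis3': 2}

# Unspecified axes fall back to slice(None), i.e. ":"
index = tuple(spec.get(axis, slice(None)) for axis in axis_names)
sliced = arr[index]
```

The resulting tuple is identical to what `np.s_[:, 1, :, 2]` would build, and indexing with it is equivalent to writing `arr[:, 1, :, 2]` by hand.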
Get Accession Numbers from NCBI from corresponding GI Numbers in fasta headers in python
39,474,446
<p>I keep seeing warnings on Genbank that they are phasing out GI numbers and have a number of fasta files saved where I've edited the headers in the following format:</p> <pre><code>&gt;SomeText_ginumber </code></pre> <p>I've no idea where to even begin with this but is there a way, ideally with python, that I could get the corresponding accession numbers for each gi from NCBI and output a file with the headers as follows:</p> <pre><code>&gt;SomeText_accessionnumber </code></pre> <p>Here's another example of the file format:</p> <pre><code>&gt;Desulfovibrio_fructosivorans_ferredoxin_492837709 MTKSVAPTVTIDGKVVPIEGERNLLELIRKVRIDLPTFCYHSELSVYGACRLCLVEVKNRGIMGAC &gt;Oxalobacteraceae_bacterium_AB_14_hypothetical_protein_522195384 MIKLTVNGIPVEVDEGATYLDAANKAGVHIPTLCYHPRFRSHAVCRMCLVHVAGSSRPQAACIGKA </code></pre> <p><strong>Edit/Update:</strong></p> <pre><code>from Bio import Entrez from time import sleep import sys import re Entrez.email = '' seqs = open(sys.argv[1],"r") for line in seqs: if line.startswith('&gt;'): gi = re.findall("\d{5,}",line) matches = len(gi) #print(matches) if matches &lt; 2: if gi: handle = Entrez.efetch(db="nucleotide", id=gi, retmode="xml") records = Entrez.read(handle) print(line[0:line.rfind('_') + 1] + records[0]['GBSeq_primary-accession']) sleep(1) elif matches &gt;= 2: print("Error - More than one ginumber in header!") break else: seq=line.rstrip() print(seq) else: seq1=line.rstrip() print(seq1) </code></pre>
0
2016-09-13T16:02:30Z
39,487,976
<p>Try using <a href="https://github.com/biopython/biopython.github.io/" rel="nofollow">BioPython</a>.</p> <p>The following snippet should get you started. First get the GI from the header (the part of the header after the underscore), get the data from GenBank, print the old header but with the accession number and then the rest of your input sequences, done :)</p> <p>This works for your two examples but will probably fail with more data (missing GI, etc.). Also the accession numbers have underscores, just like your header, which will complicate parsing later. Perhaps replace the underscore with something else or add another separator.</p> <pre><code>from Bio import Entrez from time import sleep Entrez.email = 'your@email' seqs = """&gt;Desulfovibrio_fructosivorans_ferredoxin_492837709 MTKSVAPTVTIDGKVVPIEGERNLLELIRKVRIDLPTFCYHSELSVYGACRLCLVEVKNRGIMGAC &gt;Oxalobacteraceae_bacterium_AB_14_hypothetical_protein_522195384 MIKLTVNGIPVEVDEGATYLDAANKAGVHIPTLCYHPRFRSHAVCRMCLVHVAGSSRPQAACIGKA""" for line in seqs.splitlines(): if line.startswith('&gt;'): gi = line[line.rfind('_') + 1:] handle = Entrez.efetch(db="nucleotide", id=gi, retmode="xml") records = Entrez.read(handle) print(line[0:line.rfind('_') + 1] + records[0]['GBSeq_primary-accession']) sleep(1) else: print(line) </code></pre>
1
2016-09-14T10:19:05Z
[ "python", "fasta", "ncbi", "genbank" ]
Save/read data to/from textfile
39,474,469
<p>I have the following two series:</p> <pre><code>self.MW_x = .. self.MW_y = .. (from previous calculations) </code></pre> <p>and I zip them together like this:</p> <pre><code>self.MW_final = list(zip(self.MW_x, self.MW_y)) </code></pre> <p>and try to save them with <code>numpy.savetxt</code></p> <pre><code>np.savetxt("testfile.txt", self.MW_final, delimiter = ";", header = "x_Value, y_Value") </code></pre> <p>If I plot it directly (self.MW_x, self.MW_y) it looks like this, which is perfect:</p> <p><img src="http://i.stack.imgur.com/XhBYt.png" alt="perfect plot"></p> <p>but if I try to plot the saved textfile again, it looks like this:</p> <p><img src="http://i.stack.imgur.com/DRzFt.png" alt="entirely wrong"></p> <p>I just don't know what happened on the way to the file and back.</p>
0
2016-09-13T16:03:41Z
39,478,325
<p>Are you tied to exporting it via numpy with the semicolon delimiter? If not, it would be a lot easier to simply export via Pandas as well. For example:</p> <pre><code>df = pd.DataFrame({"x_value":self.MW_x, "y_value": self.MW_y}) df.to_csv("testfile.txt") df_again = pd.read_csv("testfile.txt") </code></pre> <p>Also, in the answer you posted to this question you have:</p> <pre><code>x_val = df.ix[0:] y_val = df.ix[1:] </code></pre> <p>which is returning <em>rows</em> 0 through N (N = len(df)) for x_val and <em>rows</em> 1 through N for y_val. This is why you're getting two arrays with two different lengths.</p> <p>To call a column from a pandas DataFrame you can write:</p> <pre><code>x_val = df['col_name1'] y_val = df['col_name2'] </code></pre> <p>Or, more simply, if 'col_name2' is your x-axis variable then you can do:</p> <pre><code>df.set_index('col_name2', inplace = True) df.plot() </code></pre> <p>which will plot the variable 'col_name1' on the y-axis against 'col_name2' on the x-axis.</p>
0
2016-09-13T20:16:29Z
[ "python", "numpy", "save", "text-files" ]
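A self-contained round trip in the same spirit (illustrative data; an in-memory buffer stands in for the text file): as long as the same delimiter is used for writing and reading, the two columns come back intact. Note that `np.loadtxt` skips the `#`-prefixed header that `np.savetxt` writes.

```python
import io
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = x ** 2
data = np.column_stack([x, y])   # two columns: x, y

buf = io.StringIO()              # stands in for "testfile.txt"
np.savetxt(buf, data, delimiter=';', header='x_Value;y_Value')
buf.seek(0)

# Same delimiter on the way back in; the '#' header line is ignored
loaded = np.loadtxt(buf, delimiter=';')
x_back, y_back = loaded[:, 0], loaded[:, 1]
```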
Save/read data to/from textfile
39,474,469
<p>I have the following two series:</p> <pre><code>self.MW_x = .. self.MW_y = .. (from previous calculations) </code></pre> <p>and I zip them together like this:</p> <pre><code>self.MW_final = list(zip(self.MW_x, self.MW_y)) </code></pre> <p>and try to save them with <code>numpy.savetxt</code></p> <pre><code>np.savetxt("testfile.txt", self.MW_final, delimiter = ";", header = "x_Value, y_Value") </code></pre> <p>If I plot it directly (self.MW_x, self.MW_y) it looks like this, which is perfect:</p> <p><img src="http://i.stack.imgur.com/XhBYt.png" alt="perfect plot"></p> <p>but if I try to plot the saved textfile again, it looks like this:</p> <p><img src="http://i.stack.imgur.com/DRzFt.png" alt="entirely wrong"></p> <p>I just don't know what happened on the way to the file and back.</p>
0
2016-09-13T16:03:41Z
39,478,486
<p>Have you tried looking at the first 5 rows of the data at every step (first 5 rows of self.MW_x, self.MW_final, /Users/sping/Desktop/testfile.txt, df, x_val, and y_val) to figure out where the data is changing?</p> <p>I think you have a problem with how you're using ix</p> <pre><code>x_val = df.ix[0:] y_val = df.ix[1:] </code></pre> <p>This doesn't grab the first and second columns. It grabs the first row until the end <code>[0:]</code> and the second row until the end <code>[1:]</code>. As a result, x_val will be 1 row longer than y_val. You want to use</p> <pre><code>x_val = df.ix[:,0] y_val = df.ix[:,1] </code></pre> <p>The first argument to ix gives you the rows and the second argument gives you the columns.</p> <p>Also you want to set header=0 or just leave it as default if you want to use the column headers. Python uses zero indexing so the first row is row 0, not row 1.</p> <p>It looks like you have some typos in the code you've posted. You're also using ';' as a delimiter when saving with np.savetxt and ',' as a delimiter when reading using pd.read_csv.</p>
0
2016-09-13T20:29:02Z
[ "python", "numpy", "save", "text-files" ]
Validate and get django form's unkown number of multipleselect checkbox fields
39,474,515
<p>I am trying to get all selected checkboxes values as a list however not able to validate the form due to choices option</p> <p>forms.py:</p> <pre><code>class MarkAccountsForm(forms.Form): accounts = forms.MultipleChoiceField( widget = forms.CheckboxSelectMultiple, required = False ) </code></pre> <p>template:</p> <pre><code>{% for account in accounts %} &lt;tr&gt; &lt;td&gt;&lt;input type="checkbox" name="accounts" value="{{ account.id }}"&gt;&lt;/td&gt; &lt;td&gt;{{ account.name }}&lt;/td&gt; &lt;/tr&gt; {% endfor %} </code></pre> <p>views.py:</p> <pre><code>if request.method == 'POST': form = MarkAccountsForm(request.POST) if form.is_valid(): #this fails data = form.cleaned_data </code></pre> <p>When I try to print <code>form.errors</code>, it says <code>Select a valid choice. 1119 is not one of the available choices.</code></p> <p>Is there a way that I can avoid choice validation and still get values of all accounts checkboxes checked as a list? How do I say it to allow any for choices? If I can get the list without form validation then that is fine too.</p>
0
2016-09-13T16:06:25Z
39,475,486
<p>If you don't care about validation, why are you calling <code>is_valid</code>? In fact, why are you using a Form class at all? You're not using it to display the form, so you might as well leave it out altogether and just get the data from <code>request.POST.getlist('accounts')</code>.</p>
1
2016-09-13T17:08:51Z
[ "python", "django", "forms", "django-forms", "checkboxlist" ]
How to convert a Python Cairo matrix into Android Canvas Matrix?
39,474,636
<p>Matrix in cairo graphics module in python is described as </p> <pre><code>cairo.Matrix(xx = 1.0, yx = 0.0, xy = 0.0, yy = 1.0, x0 = 0.0, y0 = 0.0) </code></pre> <p>In Andorid's Canvas, Matrix is defined as,</p> <blockquote> <p>The Matrix class holds a 3x3 matrix for transforming coordinates.</p> </blockquote> <p>If those 6 affine transformation values (xx ,yx, xy, yy, x0, y0) are given, how those can be fit into 9 valued an Android's Matrix?</p>
0
2016-09-13T16:13:12Z
39,957,046
<p>The 6-valued PyCairo matrix can be mapped into the 9-valued Android Matrix as follows:</p> <pre><code>android.graphics.Matrix matrix = new android.graphics.Matrix(); matrix.setValues(new float[] {xx, xy, x0, yx, yy, y0, 0F, 0F, 1F}); </code></pre> <p>In terms of matrix mathematics, the mapping actually looks like this:</p> <pre><code>| xx xy x0 | | yx yy y0 | | 0 0 1 | </code></pre>
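A quick numeric check (plain Python, arbitrary values) that applying the six cairo components directly and multiplying by the 3x3 row-major matrix give the same transformed point:

```python
# Hypothetical affine components, in cairo's (xx, yx, xy, yy, x0, y0) order
xx, yx, xy, yy, x0, y0 = 2.0, 0.5, -0.5, 2.0, 10.0, 20.0

def cairo_apply(x, y):
    # cairo maps (x, y) -> (xx*x + xy*y + x0, yx*x + yy*y + y0)
    return (xx * x + xy * y + x0, yx * x + yy * y + y0)

def matrix3x3_apply(x, y):
    # Same point through the 3x3 row-major matrix used by android.graphics.Matrix
    m = [[xx,  xy,  x0],
         [yx,  yy,  y0],
         [0.0, 0.0, 1.0]]
    vx = m[0][0] * x + m[0][1] * y + m[0][2]
    vy = m[1][0] * x + m[1][1] * y + m[1][2]
    return (vx, vy)

print(cairo_apply(3.0, 4.0))      # (14.0, 29.5)
print(matrix3x3_apply(3.0, 4.0))  # (14.0, 29.5)
```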
0
2016-10-10T11:18:24Z
[ "android", "python", "matrix", "transformation", "cairo" ]
Pandas DataFrame - Transpose few elements of a row into columns and fill missing data
39,474,667
<p>I want to reformat a dataframe by transposing some elements of rows into columns. To provide an example of what I mean:</p> <p>In the below dataframe, I want all the elements in the code column to be individual columns. And the missing rows like 'JFK 10/06 XX' should be populated as 0 or nan.</p> <p>Original DataFrame:</p> <pre><code>loc date code --- ----- ---- LGA 10/05 XX LGA 10/06 XX LGA 10/07 XX LGA 10/05 YY LGA 10/06 YY LGA 10/07 YY JFK 10/05 XX ###JFK 10/06 XX (missing) JFK 10/07 XX JFK 10/05 YY JFK 10/06 YY JFK 10/07 YY </code></pre> <p>To be formatted DataFrame:</p> <pre><code>loc date XX YY --- ----- -- -- LGA 10/05 1 1 LGA 10/06 1 1 LGA 10/07 1 1 JFK 10/05 1 1 JFK 10/06 0 1 JFK 10/07 1 1 </code></pre> <p>Here JFK -> 10/06 has an entry 0 in XX</p> <p>I tried grouping by the rest of the columns and was able to verify the counts, but I couldn't format it in the expected way.</p>
0
2016-09-13T16:15:46Z
39,474,807
<p>You are trying to reshape your data to wide format without a value column. One option is to use <code>pivot_table</code> and specify the <code>size</code> as the aggregate function, which will count the combinations of index and columns and fill as values. Missing values can be replaced with the <code>fill_value</code> parameter:</p> <pre><code>df.pivot_table(index = ['loc', 'date'], columns = 'code', aggfunc='size', fill_value=0).reset_index() #code loc date XX YY # 0 JFK 10/05 1 1 # 1 JFK 10/06 0 1 # 2 JFK 10/07 1 1 # 3 LGA 10/05 1 1 # 4 LGA 10/06 1 1 # 5 LGA 10/07 1 1 </code></pre>
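For intuition, <code>aggfunc='size'</code> is just counting how many rows share each (loc, date, code) combination, with absent combinations filled by <code>fill_value</code>; a standard-library sketch of that counting on a trimmed version of the data:

```python
from collections import Counter

# A few (loc, date, code) rows; note there is no ("JFK", "10/06", "XX")
rows = [("LGA", "10/05", "XX"), ("LGA", "10/05", "YY"),
        ("JFK", "10/05", "XX"), ("JFK", "10/05", "YY"),
        ("JFK", "10/06", "YY")]

counts = Counter(rows)
codes = ["XX", "YY"]
for key in sorted({(loc, date) for loc, date, _ in rows}):
    # Counter returns 0 for missing keys, mirroring fill_value=0
    print(key, [counts[key + (code,)] for code in codes])
```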
2
2016-09-13T16:24:32Z
[ "python", "pandas", "numpy", "dataframe" ]
Pandas DataFrame - Transpose few elements of a row into columns and fill missing data
39,474,667
<p>I want to reformat a dataframe by transposing some elements of rows into columns. To provide an example of what I mean:</p> <p>In the below dataframe, I want all the elements in the code column to be individual columns. And the missing rows like 'JFK 10/06 XX' should be populated as 0 or nan.</p> <p>Original DataFrame:</p> <pre><code>loc date code --- ----- ---- LGA 10/05 XX LGA 10/06 XX LGA 10/07 XX LGA 10/05 YY LGA 10/06 YY LGA 10/07 YY JFK 10/05 XX ###JFK 10/06 XX (missing) JFK 10/07 XX JFK 10/05 YY JFK 10/06 YY JFK 10/07 YY </code></pre> <p>To be formatted DataFrame:</p> <pre><code>loc date XX YY --- ----- -- -- LGA 10/05 1 1 LGA 10/06 1 1 LGA 10/07 1 1 JFK 10/05 1 1 JFK 10/06 0 1 JFK 10/07 1 1 </code></pre> <p>Here JFK -> 10/06 has an entry 0 in XX</p> <p>I tried grouping by the rest of the columns and was able to verify the counts, but I couldn't format it in the expected way.</p>
0
2016-09-13T16:15:46Z
39,474,889
<ol> <li><code>stack</code> the dataframe</li> <li>reset index</li> <li>create pivot table of counts</li> </ol> <hr> <pre><code>new_df = (df.set_index(keys=['loc','date']) .stack() .reset_index() .pivot_table(index=['loc','date'], columns=0, fill_value=0, aggfunc='size')) </code></pre> <hr> <p>OUTPUT:</p> <pre><code>0 XX YY loc date JFK 2016-10-05 1 1 2016-10-06 0 1 2016-10-07 1 1 LGA 2016-10-05 1 1 2016-10-06 1 1 2016-10-07 1 1 </code></pre>
0
2016-09-13T16:29:40Z
[ "python", "pandas", "numpy", "dataframe" ]
Pandas DataFrame - Transpose few elements of a row into columns and fill missing data
39,474,667
<p>I want to reformat a dataframe by transposing some elements of rows into columns. To provide an example of what I mean:</p> <p>In the below dataframe, I want all the elements in the code column to be individual columns. And the missing rows like 'JFK 10/06 XX' should be populated as 0 or nan.</p> <p>Original DataFrame:</p> <pre><code>loc date code --- ----- ---- LGA 10/05 XX LGA 10/06 XX LGA 10/07 XX LGA 10/05 YY LGA 10/06 YY LGA 10/07 YY JFK 10/05 XX ###JFK 10/06 XX (missing) JFK 10/07 XX JFK 10/05 YY JFK 10/06 YY JFK 10/07 YY </code></pre> <p>To be formatted DataFrame:</p> <pre><code>loc date XX YY --- ----- -- -- LGA 10/05 1 1 LGA 10/06 1 1 LGA 10/07 1 1 JFK 10/05 1 1 JFK 10/06 0 1 JFK 10/07 1 1 </code></pre> <p>Here JFK -> 10/06 has an entry 0 in XX</p> <p>I tried grouping by the rest of the columns and was able to verify the counts, but I couldn't format it in the expected way.</p>
0
2016-09-13T16:15:46Z
39,475,170
<p>Another solution using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a> which computes the frequency of occurence of the values present in the <code>columns</code> argument:</p> <pre><code>pd.crosstab(index=[df['loc'], df['date']], columns=df['code']) \ .reset_index(level=1) \ .sort_index(ascending=False) </code></pre> <p><a href="http://i.stack.imgur.com/Hcw7r.png" rel="nofollow"><img src="http://i.stack.imgur.com/Hcw7r.png" alt="Image"></a></p> <p>Note: It's not a good practice to name the columns as 'loc' which coincidentally also is a method used by <code>pandas</code> for performing label based location indexing. </p>
1
2016-09-13T16:47:20Z
[ "python", "pandas", "numpy", "dataframe" ]
Load CSV Strings With Different Types into Pandas Dataframe, Split Columns, Parse Date
39,474,717
<p>I have two questions concerning a large csv file which contains data in the following way formatted as strings:</p> <pre><code> "XAU=,XAU=,XAG=,XAG=" "25/08/2014 6:00:05,1200.343,25/08/2014 6:00:03,19.44," "25/08/2014 6:00:05,1200,,," </code></pre> <p>Is there a way to efficiently load this into a pandas dataframe object? Alternatively, also multiple pandas Series objects would do the job. So far I tried: </p> <pre><code>df = read_csv(path, header=None) df[0].str[0:-1].str.split(',', return_type='frame') </code></pre> <p>The second line is an answer from this thread <a href="http://stackoverflow.com/questions/29370211/pandas-split-string-into-columns">pandas split string into columns</a>. However, I wonder if there is an even better way, especially since I have different data types? Secondly, how can I correctly parse the dates with <code>to_datetime()</code>. I tried to reindex <code>df</code> and used <code>df.index = df.index.to_datetime()</code>. This worked only half way because it did not strictly keep the <code>dd/mm/yyyy ...</code> format. Some dates were incorrectly parsed as <code>mm/dd/yyyy ...</code>. I'm looking for fast ways because eventually I will loop over many such csv's. Thx for any help!</p> <p>EDIT: Ideally data in this form should be handled as well: </p> <pre><code> "XAU=,XAU=,XAG=,XAG=" "25/08/2014 6:00:05,1200.343,25/08/2014 6:00:03,19.44," ",,25/08/2014 6:00:05,19.50," </code></pre> <p>So with the answer provided below,</p> <pre><code> data = StringIO( ''' "XAU=,XAU=,XAG=,XAG=" "25/08/2014 6:00:05,1200.343,25/08/2014 6:00:03,19.44," ",,25/08/2014 6:00:05,19.5," ''') </code></pre> <p>df would become:</p> <pre><code> XAU XAU XAG XAG 0 25/08/2014 6:00:05 1200.343 25/08/2014 6:00:03 19.44 1 , 25/08/2014 6:00:05 19.5 \n </code></pre>
1
2016-09-13T16:18:50Z
39,478,472
<p>You can preprocess everything inside the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> as shown:</p> <pre><code>import csv data = StringIO( ''' "XAU=,XAU=,XAG=,XAG=" "25/08/2014 6:00:05,1200.343,25/08/2014 6:00:03,19.44," "25/08/2014 6:00:05,1200,,," ''') df = pd.read_csv(data, quoting=csv.QUOTE_NONE, index_col=False, escapechar='"', \ parse_dates=[0, 2]).rename(columns=lambda x: x.split("=")[0]) df </code></pre> <p><a href="http://i.stack.imgur.com/x80Br.png" rel="nofollow"><img src="http://i.stack.imgur.com/x80Br.png" alt="Image"></a></p> <pre><code>df.dtypes XAU datetime64[ns] XAU float64 XAG datetime64[ns] XAG float64 dtype: object </code></pre> <p><strong>Breakdown:</strong></p> <p><em>quoting=csv.QUOTE_NONE</em>: instructs the parser to do no special processing of quote characters <br></p> <p><em>index_col=False</em>: do not use the first column as the index <br></p> <p><em>escapechar='"'</em>: character used to escape the delimiter <br></p> <p><em>parse_dates=[0, 2]</em>: convert columns 0 and 2 (positional order) to <code>datetime</code> objects</p> <hr> <p>To read a subset of the columns, you can do so with the help of <code>usecols</code> by supplying appropriate integer indices as shown:</p> <pre><code>df = pd.read_csv(data, quoting=csv.QUOTE_NONE, index_col=False, escapechar='"', \ parse_dates=[0], usecols=[0,1]).rename(columns=lambda x: x.split("=")[0]) df </code></pre> <p><a href="http://i.stack.imgur.com/qdVFP.png" rel="nofollow"><img src="http://i.stack.imgur.com/qdVFP.png" alt="Image"></a></p> <p>If you want to convert the two columns of <code>XAU</code> into a series object:</p> <pre><code>df.columns = df.columns + [str('_%d'%(i)) for i in list(range(len(df.columns)))] ser = pd.Series(data=df['XAU_1'].values, index=df['XAU_0'].values, name='XAU') ser 2014-08-25 06:00:05 1200.343 2014-08-25 06:00:05 1200.000 Name: XAU, dtype: float64 type(ser) pandas.core.series.Series </code></pre> <hr> <p>The reason that this approach 
fails for a newer case is because <code>escapechar</code> skips the first delimiter, as a result the empty strings aren't captured properly.</p> <p>If that's the case, you are better off ignoring <code>escapechar</code> altogether and continuing as shown:</p> <p>For the combination of old and new data:</p> <pre><code>data = StringIO( ''' "XAU=,XAU=,XAG=,XAG=" "25/08/2014 6:00:05,1200.343,25/08/2014 6:00:03,19.44," "25/08/2014 6:00:05,1200,,," ",,25/08/2014 6:00:05,19.50," ''') df = pd.read_csv(data, quoting=csv.QUOTE_NONE, index_col=False, na_values=[""], parse_dates=[2]).rename(columns=lambda x: x.strip('"').split("=")[0]) old_cols = df.columns # Index(['XAU', 'XAU', 'XAG', 'XAG'], dtype='object') new_cols = [col[0] for col in list(enumerate(df.columns))] # [0, 1, 2, 3] df.columns = new_cols # Converting first column to datetime dtype df[0] = pd.to_datetime(df[0].str.replace('"', '')) df.columns = old_cols df </code></pre> <p><a href="http://i.stack.imgur.com/e6BxF.png" rel="nofollow"><img src="http://i.stack.imgur.com/e6BxF.png" alt="Image"></a></p> <pre><code>df.dtypes XAU datetime64[ns] XAU float64 XAG datetime64[ns] XAG float64 dtype: object </code></pre>
0
2016-09-13T20:28:04Z
[ "python", "csv", "datetime", "pandas", "time-series" ]
How are regex quantifiers applied?
39,474,794
<p>I have the following regex:</p> <pre><code>res = re.finditer(r'(?:\w+[ \t,]+){0,4}my car',txt,re.IGNORECASE|re.MULTILINE) for item in res: print(item.group()) </code></pre> <p>When I use this regex with the following string:</p> <blockquote> <p>"my house is painted white, my car is red. A horse is galloping very fast in the road, I drive my car slowly."</p> </blockquote> <p>I am getting the following results:</p> <ul> <li>house is painted white, my car</li> <li>the road, I drive my car</li> </ul> <p>My question is about the quantifier <code>{0,4}</code> that should apply to the whole group. The group collects words with the expression <code>\w+</code> and some separation symbols with the <code>[ \t,]</code> class. Does the quantifier apply only to the "words" defined by <code>\w+</code>? In the results I am getting 4 words plus space and comma. It's unclear to me. </p>
0
2016-09-13T16:23:38Z
39,474,906
<p>So, here's what's happening. You're using <code>?:</code> to make a non-capturing group, which collects one or more "words" (<code>\w+</code>) followed by <code>[ \t,]+</code> (one or more spaces, tab characters, or commas). <code>{0,4}</code> matches the non-capturing group between 0 and 4 times. So it looks at the words "my car" and captures the 4 words before them, since all 4 of them match the <code>\w+</code>, and the commas and spaces get eaten by the character set you specified.</p> <p>Broken apart more succinctly:</p> <pre><code>(?: -- Non-capturing group \w+ -- Grab all words [ \t,]+ -- Grab all spaces, commas, or tab characters ) -- End capture group {0,4} -- Match the previous capture group 0-4 times my car -- Based off where you find the words "my car" </code></pre> <p>As a result this will match 0-4 words / spaces / commas / tabs before the appearance of "my car".</p> <p>This is working as written.</p>
1
2016-09-13T16:30:28Z
[ "python", "regex", "python-3.x" ]
OperationalError: Can't connect to local MySQL server through socket
39,474,896
<p>I'm trying to run a server in python/django and I'm getting the following error:</p> <blockquote> <p>django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)").</p> </blockquote> <p>I have <code>MySQL-python</code> installed (version 1.2.5) and mysql installed (0.0.1), both via pip, so I'm not sure why I can't connect to the MySQL server. Does anyone know why? Thanks!</p>
0
2016-09-13T16:29:51Z
39,475,119
<p>You can't install mysql through pip; it's a database, not a Python library (and it's currently in version 5.7). You need to install the binary package for your operating system.</p>
1
2016-09-13T16:43:20Z
[ "python", "mysql", "django" ]
gooey module not installing correctly
39,474,930
<pre><code>C:\Python34\Scripts&gt;pip install Gooey Collecting Gooey Using cached Gooey-0.9.2.3.zip Complete output from command python setup.py egg_info: Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Users\Haeshan\AppData\Local\Temp\pip-build- 5waer38m\Gooey\setup.py", line 9, in &lt;module&gt; version = __import__('gooey').__version__ File "C:\Users\Haeshan\AppData\Local\Temp\pip-build-5waer38m\Gooey\gooey\__init__.py", line 2, in &lt;module&gt; from gooey.python_bindings.gooey_decorator import Gooey File "C:\Users\Haeshan\AppData\Local\Temp\pip-build-5waer38m\Gooey\gooey\python_bindings\gooey_decorator.py", line 54 except Exception, e: ^ SyntaxError: invalid syntax ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in C:\Users\Haeshan\AppData\Local\Temp\pip-build-5waer38m\Gooey\ </code></pre> <p>this error is appearing when I try to install the Gooey module for python, any ideas why?</p>
0
2016-09-13T16:32:26Z
39,475,414
<p>Looks like you're using Python 3.4 but Gooey only supports Python 2:</p> <p><a href="https://github.com/chriskiehl/Gooey/issues/65" rel="nofollow">https://github.com/chriskiehl/Gooey/issues/65</a></p> <p><a href="http://python3porting.com/differences.html#except" rel="nofollow">http://python3porting.com/differences.html#except</a></p>
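For reference, the failing line in Gooey's source uses Python 2's <code>except</code> syntax, which is a SyntaxError under Python 3; a minimal illustration of the two spellings:

```python
# Python 2 wrote:  except Exception, e:   (a SyntaxError under Python 3)
# Python 3 spells the same thing with "as":
try:
    1 / 0
except Exception as e:
    caught = type(e).__name__
print(caught)  # ZeroDivisionError
```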
0
2016-09-13T17:03:06Z
[ "python", "windows", "pip" ]
How to convert the data as following in python?
39,474,936
<p>I have some data in the following format in a csv file.</p> <pre><code> Id Category 1 A 2 B 3 C 4 B 5 C 6 d </code></pre> <p>I'd like to convert it into the below format and save it to another csv file</p> <pre><code>Id A B C D E 1 1 0 0 0 0 2 0 1 0 0 0 3 0 0 1 0 0 4 0 1 0 0 0 5 0 0 1 0 0 6 0 0 0 1 0 </code></pre>
0
2016-09-13T16:32:44Z
39,475,023
<p>Try with <code>pd.get_dummies()</code></p> <pre><code>&gt;&gt; df = pd.read_csv(&lt;path_to_file&gt;, sep=',', encoding='utf-8', header=0) &gt;&gt; df Id Category 0 1 A 1 2 B 2 3 C 3 4 B 4 5 C 5 6 d &gt;&gt; pd.get_dummies(df.Category) </code></pre> <p>This will encode <code>Category</code> and give you new columns: </p> <pre><code>A B C d </code></pre> <p>But will not 'fix' d -> D and will not give you any columns that can not be deduced from the values you have in <code>Category</code>.</p> <p>I suggest you check the solution posted in the comment earlier for that.</p> <p><strong>EDIT</strong></p> <pre><code># Load data from .CSV with pd.read_csv() as demonstrated above In [13]: df Out[13]: Category Id 0 A 1 1 B 2 2 C 3 3 B 4 4 C 5 5 D 6 ## One-liner for hot-encoding, then concatenating to original dataframe ## and finally dropping the old column 'Category', you can skip the ## last part if you want to keep original column as well. In [14]: df = pd.concat([df, pd.get_dummies(df.Category)], axis=1).drop('Category', axis=1) In [15]: df Out[15]: Id A B C D 0 1 1.0 0.0 0.0 0.0 1 2 0.0 1.0 0.0 0.0 2 3 0.0 0.0 1.0 0.0 3 4 0.0 1.0 0.0 0.0 4 5 0.0 0.0 1.0 0.0 5 6 0.0 0.0 0.0 1.0 ## Write to file In [16]: df.to_csv(&lt;output_path&gt;, sep='\t', encoding='utf-8', index=None) </code></pre> <p>As you can see this is not the Transpose, only the result of the hot-encoding of the <code>Category</code> column is added to each row.</p> <p>Whether Excel accepts the final data or not, there's not much you can do with Pandas about this, unfortunately.</p> <p>I hope this helps.</p>
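What <code>get_dummies</code> produces can be sketched without pandas: one 0/1 indicator column per distinct value (using an uppercase 'D' here for simplicity, since as noted the lowercase 'd' would otherwise become its own column):

```python
categories = ["A", "B", "C", "B", "C", "D"]
levels = sorted(set(categories))  # column order: ['A', 'B', 'C', 'D']

# One 0/1 indicator per level, per row -- the same shape get_dummies returns
rows = [[1 if value == level else 0 for level in levels] for value in categories]

for value, row in zip(categories, rows):
    print(value, row)
```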
2
2016-09-13T16:38:04Z
[ "python", "python-3.x", "pandas", "text-processing", "spyder" ]
How to convert the data as following in python?
39,474,936
<p>I have some data in the following format in a csv file.</p> <pre><code> Id Category 1 A 2 B 3 C 4 B 5 C 6 d </code></pre> <p>I'd like to convert it into the below format and save it to another csv file</p> <pre><code>Id A B C D E 1 1 0 0 0 0 2 0 1 0 0 0 3 0 0 1 0 0 4 0 1 0 0 0 5 0 0 1 0 0 6 0 0 0 1 0 </code></pre>
0
2016-09-13T16:32:44Z
39,475,197
<p>Use a pivot table (updated to include .csv read/write functionality):</p> <pre><code>import pandas as pd path = 'the path to your file' df = pd.read_csv(path) # your original dataframe # Category Id # 0 A 1 # 1 B 2 # 2 C 3 # 3 B 4 # 4 C 5 # 5 D 6 # pivot table df.pivot_table(index=['Id'], columns='Category', fill_value=0, aggfunc='size') # save to file df.to_csv('path\filename.csv') #e.g. 'C:\\Users\\you\\Documents\\filename.csv' </code></pre> <hr> <p>OUTPUT:</p> <pre><code>Category A B C D Id 1 1 0 0 0 2 0 1 0 0 3 0 0 1 0 4 0 1 0 0 5 0 0 1 0 6 0 0 0 1 </code></pre>
1
2016-09-13T16:49:07Z
[ "python", "python-3.x", "pandas", "text-processing", "spyder" ]
JSON TypeError: list indices must be integers, not str
39,474,985
<p>I want to extract some data from JSON, but I don't know what happenend. It response "TypeError: list indices must be integers, not str". Here is my code, thanks:</p> <pre><code>import urllib import json url = 'http://python-data.dr-chuck.net/comments_304658.json' data = urllib.urlopen(url).read() info = json.loads(data) #print json.dumps(info,indent=4) lst = list() for item in info: count = item['comments']['count'] count = int(count) lst.append(count) print sum(lst) </code></pre>
-1
2016-09-13T16:35:52Z
39,475,060
<p>You seem to be confused by the structure of the returned data. Your code assumes the structure is a list of two-level dictionaries. If this were the case, then you could find an individual <code>count</code> like so:</p> <pre><code>info[7]['comments']['count'] </code></pre> <p>It is actually a dictionary, one item of which is a list of dictionaries. To find a single item, the expression is like:</p> <pre><code>info['comments'][7]['count'] </code></pre> <p>So, if we want to iterate over the list, we iterate over <code>info['comments']</code>.</p> <p>Try this:</p> <pre><code>import urllib import json url = 'http://python-data.dr-chuck.net/comments_304658.json' data = urllib.urlopen(url).read() info = json.loads(data) #print json.dumps(info,indent=4) lst = list() for item in info['comments']: count = item['count'] count = int(count) lst.append(count) print sum(lst) </code></pre>
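A self-contained sketch of that structure (names and counts are made up, but the nesting matches: the list of per-user dicts lives under the <code>'comments'</code> key):

```python
import json

# A trimmed stand-in for the payload shape
data = '''
{"note": "example", "comments": [
    {"name": "Ann", "count": 97},
    {"name": "Bob", "count": 88},
    {"name": "Cid", "count": 74}
]}
'''
info = json.loads(data)

# Iterate over the inner list, not the outer dict
total = sum(int(item["count"]) for item in info["comments"])
print(total)  # 259
```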
1
2016-09-13T16:39:53Z
[ "python", "json" ]
understanding pyresample to regrid irregular grid data to a regular grid
39,475,003
<p>I need to regrid data on an irregular grid (lambert conical) to a regular grid. I think pyresample is my best bet. In fact my original lat,lon are not 1D (which seems to be needed to use basemap.interp or scipy.interpolate.griddata).</p> <p>I found <a href="http://stackoverflow.com/questions/35734070/interpolating-data-from-one-latitude-longitude-grid-onto-a-different-one/35734381#35734381">this SO's answer</a> helpful. However I get empty interpolated data. I think it has to do with the choice of my radius of influence and with the fact that my data are wrapped (??).</p> <p>This is my code:</p> <pre><code>import numpy as np from matplotlib import pyplot as plt import netCDF4 %matplotlib inline url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/NARR/Dailies/monolevel/hlcy.2009.nc" SRHtemp = netCDF4.Dataset(url).variables['hlcy'][0,::] Y_n = netCDF4.Dataset(url).variables['y'][:] X_n = netCDF4.Dataset(url).variables['x'][:] T_n = netCDF4.Dataset(url).variables['time'][:] lat_n = netCDF4.Dataset(url).variables['lat'][:] lon_n = netCDF4.Dataset(url).variables['lon'][:] </code></pre> <p>lat_n and lon_n are irregular and the latitude and longitude corresponding to the projected coordinates x,y.</p> <p>Because of the way lon_n is, I added:</p> <pre><code>lon_n[lon_n&lt;0] = lon_n[lon_n&lt;0]+360 </code></pre> <p>so that now if I plot them they look nice and ok:</p> <p><a href="http://i.stack.imgur.com/S9L51.png" rel="nofollow"><img src="http://i.stack.imgur.com/S9L51.png" alt="enter image description here"></a></p> <p>Then I create my new set of regular coordinates:</p> <pre><code>XI = np.arange(148,360) YI = np.arange(0,87) XI, YI = np.meshgrid(XI,YI) </code></pre> <p>Following the answer above I wrote the following code:</p> <pre><code>from pyresample.geometry import SwathDefinition from pyresample.kd_tree import resample_nearest def_a = SwathDefinition(lons=XI, lats=YI) def_b = SwathDefinition(lons=lon_n, lats=lat_n) interp_dat = \
resample_nearest(def_b,SRHtemp,def_a,radius_of_influence = 70000,fill_value = -9.96921e+36) </code></pre> <p>the resolution of the data is about 30km, so I put 70km, the fill_value I put is the one from the data, but of course I can just put zero or nan.</p> <p>however I get an empty array. </p> <p>What do I do wrong? also - if there is another way of doing it, I am interested in knowing it. Pyresample documentation is a bit thin, and I need a bit more help.</p> <p>I did find <a href="http://stackoverflow.com/questions/3864899/resampling-irregularly-spaced-data-to-a-regular-grid-in-python">this answer</a> suggesting to use another griddata function:</p> <pre><code>import matplotlib.mlab as ml resampled_data = ml.griddata(lon_n.ravel(), lat_n.ravel(),SRHtemp.ravel(),XI,YI,interp = "linear") </code></pre> <p>and it seems to be ok:</p> <p><a href="http://i.stack.imgur.com/1Zro7.png" rel="nofollow"><img src="http://i.stack.imgur.com/1Zro7.png" alt="enter image description here"></a></p> <p>But I would like to understand more about pyresample, since it seems so powerful.</p>
0
2016-09-13T16:37:00Z
39,476,457
<p>The problem is that XI and YI are integers, not floats. You can fix this by simply doing</p> <pre><code>XI = np.arange(148,360.) YI = np.arange(0,87.) XI, YI = np.meshgrid(XI,YI) </code></pre> <p>The inability to handle integer datatypes is an undocumented, unintuitive, and possibly buggy behavior from pyresample.</p> <p>A few more notes on your coding style:</p> <ul> <li>It's not necessary to overwrite the XI and YI variables, you don't gain much by this</li> <li>You should just load the netCDF dataset once and then access the variables via that object</li> </ul>
2
2016-09-13T18:09:14Z
[ "python", "netcdf", "netcdf4" ]
Read data from binary file python
39,475,010
<p>I have a binary file with this format:</p> <p><a href="http://i.stack.imgur.com/qHVBs.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qHVBs.jpg" alt="enter image description here"></a></p> <p>and i use this code to open it:</p> <pre><code>import numpy as np f = open("author_1", "r") dt = np.dtype({'names': ['au_id','len_au_name','au_name','nu_of_publ', 'pub_id', 'len_of_pub_id','pub_title','num_auth','len_au_name_1', 'au_name1','len_au_name_2', 'au_name2','len_au_name_3', 'au_name3','year_publ','num_of_cit','citid','len_cit_tit','cit_tit', 'num_of_au_cit','len_cit_au_name_1','au_cit_name_1', len_cit_au_name_2', 'au_cit_name_2','len_cit_au_name_3','au_cit_name_3','len_cit_au_name_4', 'au_cit_name_4', 'len_cit_au_name_5','au_cit_name_5','year_cit'], 'formats': [int,int,'S13',int,int,int,'S61', int,int,'S8',int,'S7',int,'S12',int,int,int,int,'S50',int,int, 'S7',int,'S7',int,'S9',int,'S8',int,'S1',int]}) a = np.fromfile(f, dtype=dt, count=-1, sep="") </code></pre> <p>And I take this:</p> <pre><code>array([ (1, 13, b'Scott Shenker', 200, 1, 61, b'Integrated services in the internet architecture: an overview', 3, 8, b'R Braden', 7, b'D Clark', 12, b'S Shenker\xe2\x80\xa6', 1994, 1000, 401, 50, b'[HTML] An architecture for differentiated services', 5, 7, b'D Black', 7, b'S Blake', 9, b'M Carlson', 8, b'E Davies', 1, b'Z', 1998), (402, 72, b'Resource rese', 1952544370, 544108393, 1953460848, b'ocol (RSVP)--Version 1 functional specification\x05\x00\x00\x00\x08\x00\x00\x00R Brad', 487013, 541851648, b'Zhang\x08', 1109414656, b'erson\x08', 542310400, b'Herzog\x07\x00\x00\x00S ', 1768776010, 511342, 103168, 22016, b'\x00A reliable multicast framework for light-weight s', 1769173861, 544435823, b'and app', 1633905004, b'tion le', 543974774, b'framing\x04', 458752, b'\x00\x00S Floy', 2660, b'', 1632247894), </code></pre> <p>Any idea how can open the whole file?</p>
1
2016-09-13T16:37:26Z
39,477,733
<p>The data structure stored in this file is hierarchical, rather than "flat": child arrays of different length are stored within each parent element. It is not possible to represent such a data structure using numpy arrays (even recarrays), and therefore it is not possible to read the file with <code>np.fromfile()</code>.</p> <p>What do you mean by "open the whole file"? What sort of python data structure would you like to end up with?</p> <p>It would be straightforward, but still not trivial, to write a function to parse the file into a list of dictionaries.</p>
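To give a flavour of what such a parser involves, here is a minimal sketch of the length-prefixed pattern the file appears to use (field names are assumptions based on the layout shown; the record is built in memory rather than read from disk):

```python
import struct

# Build a tiny length-prefixed record in memory: int id, int name length,
# then that many bytes of name (mimicking the file's apparent layout)
name = b"Scott Shenker"
blob = struct.pack("<ii", 1, len(name)) + name

# Walk it back the same way a real parser would walk the file
au_id, name_len = struct.unpack_from("<ii", blob, 0)
au_name, = struct.unpack_from("<%ds" % name_len, blob, 8)
print(au_id, au_name.decode())  # 1 Scott Shenker
```

A full parser repeats this read-length-then-read-payload step for every nested field, as the answer below sketches.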
0
2016-09-13T19:36:18Z
[ "python", "binaryfiles", "binary-data" ]
Read data from binary file python
39,475,010
<p>I have a binary file with this format:</p> <p><a href="http://i.stack.imgur.com/qHVBs.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qHVBs.jpg" alt="enter image description here"></a></p> <p>and i use this code to open it:</p> <pre><code>import numpy as np f = open("author_1", "r") dt = np.dtype({'names': ['au_id','len_au_name','au_name','nu_of_publ', 'pub_id', 'len_of_pub_id','pub_title','num_auth','len_au_name_1', 'au_name1','len_au_name_2', 'au_name2','len_au_name_3', 'au_name3','year_publ','num_of_cit','citid','len_cit_tit','cit_tit', 'num_of_au_cit','len_cit_au_name_1','au_cit_name_1', len_cit_au_name_2', 'au_cit_name_2','len_cit_au_name_3','au_cit_name_3','len_cit_au_name_4', 'au_cit_name_4', 'len_cit_au_name_5','au_cit_name_5','year_cit'], 'formats': [int,int,'S13',int,int,int,'S61', int,int,'S8',int,'S7',int,'S12',int,int,int,int,'S50',int,int, 'S7',int,'S7',int,'S9',int,'S8',int,'S1',int]}) a = np.fromfile(f, dtype=dt, count=-1, sep="") </code></pre> <p>And I take this:</p> <pre><code>array([ (1, 13, b'Scott Shenker', 200, 1, 61, b'Integrated services in the internet architecture: an overview', 3, 8, b'R Braden', 7, b'D Clark', 12, b'S Shenker\xe2\x80\xa6', 1994, 1000, 401, 50, b'[HTML] An architecture for differentiated services', 5, 7, b'D Black', 7, b'S Blake', 9, b'M Carlson', 8, b'E Davies', 1, b'Z', 1998), (402, 72, b'Resource rese', 1952544370, 544108393, 1953460848, b'ocol (RSVP)--Version 1 functional specification\x05\x00\x00\x00\x08\x00\x00\x00R Brad', 487013, 541851648, b'Zhang\x08', 1109414656, b'erson\x08', 542310400, b'Herzog\x07\x00\x00\x00S ', 1768776010, 511342, 103168, 22016, b'\x00A reliable multicast framework for light-weight s', 1769173861, 544435823, b'and app', 1633905004, b'tion le', 543974774, b'framing\x04', 458752, b'\x00\x00S Floy', 2660, b'', 1632247894), </code></pre> <p>Any idea how can open the whole file?</p>
1
2016-09-13T16:37:26Z
39,479,003
<p>I agree with Ryan: parsing the data is straightforward, but not trivial, and really tedious. Whatever disk space saving you gain by packing the data in this way, you pay it dearly at the hour of unpacking.</p> <p>Anyway, the file is made of variable length records and fields. Each record is made of a variable number of variable-length fields that we can read in chunks of bytes. Each chunk will have a different format. You get the idea. Following this logic, I assembled these three functions, that you can finish, modify, test, etc:</p> <pre><code>from struct import Struct
import struct

def read_chunk(fmt, fileobj):
    chunk_struct = Struct(fmt)
    chunk = fileobj.read(chunk_struct.size)
    return chunk_struct.unpack(chunk)

def read_record(fileobj):
    # '&lt;' means little-endian with no padding, so the struct sizes
    # match the bytes in the file exactly
    author_id, len_author_name = read_chunk('&lt;ii', fileobj)
    author_name, nu_of_publ = read_chunk('&lt;%dsi' % len_author_name, fileobj)
    record = {
        'author_id': author_id,
        'author_name': author_name,
        'publications': []
    }
    for pub in range(nu_of_publ):
        pub_id, len_pub_title = read_chunk('&lt;ii', fileobj)
        pub_title, num_pub_auth = read_chunk('&lt;%dsi' % len_pub_title, fileobj)
        record['publications'].append({
            'publication_id': pub_id,
            'publication_title': pub_title,
            'publication_authors': []
        })
        for auth in range(num_pub_auth):
            len_pub_auth_name, = read_chunk('&lt;i', fileobj)
            pub_auth_name, = read_chunk('&lt;%ds' % len_pub_auth_name, fileobj)
            record['publications'][-1]['publication_authors'].append({'name': pub_auth_name})
        year_publ, nu_of_cit = read_chunk('&lt;ii', fileobj)
        # Finish building your record with the remaining fields...
        for cit in range(nu_of_cit):
            cit_id, len_cit_title = read_chunk('&lt;ii', fileobj)
            cit_title, num_cit_auth = read_chunk('&lt;%dsi' % len_cit_title, fileobj)
            for cit_auth in range(num_cit_auth):
                len_cit_auth_name, = read_chunk('&lt;i', fileobj)
                cit_auth_name, = read_chunk('&lt;%ds' % len_cit_auth_name, fileobj)
            year_cit_publ, = read_chunk('&lt;i', fileobj)
    return record

def parse_file(filename):
    records = []
    with open(filename, 'rb') as f:
        while True:
            try:
                records.append(read_record(f))
            except struct.error:
                break
    # do something useful with the records...
    return records
</code></pre>
0
2016-09-13T21:07:24Z
[ "python", "binaryfiles", "binary-data" ]
compatibility issue with contourArea in openCV 3
39,475,125
<p>I am trying to do a simple area calculation of contours I get from findContours. My OpenCV version is 3.1.0.</p> <p>My code is:</p> <pre><code>cc = cv2.findContours(im_bw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) cv2.contourArea(cc[0]) error: 'C:\\builds\\master_PackSlaveAddon-win32-vc12-static\\opencv\\modules\\imgproc\\src\\shapedescr.cp...: error: (-215) npoints &gt;= 0 &amp;&amp; (depth == CV_32F || depth == CV_32S) in function cv::contourArea\n' </code></pre> <p>Can't seem to solve it. I have a feeling it's just typecasting, although I expect the findContours result to match the type of contourArea.</p> <p>Thanks :)</p> <p>EDIT: turns out I need to take the 2nd return value of findContours</p> <pre><code> im2, cc, hierarchy = cv2.findContours(im_bw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) </code></pre>
0
2016-09-13T16:44:01Z
39,475,245
<p>In the OpenCV 3 API, <code>cv2.findContours()</code> returns 3 <a href="http://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html" rel="nofollow">objects</a>:</p> <ul> <li>image</li> <li>contours</li> <li>hierarchy</li> </ul> <p>So you need to rewrite your statement as:</p> <pre><code>image, contours, hierarchy = cv2.findContours(im_bw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) </code></pre>
1
2016-09-13T16:52:32Z
[ "python", "opencv", "opencv3.0" ]
From scatter plot to 2D array
39,475,146
<p>My mind has gone completely blank on this one.</p> <p>I want to do what I think is very simple.</p> <p>Suppose I have some test data:</p> <pre><code>import pandas as pd import numpy as np k=10 df = pd.DataFrame(np.array([range(k), [x + 1 for x in range(k)], [x + 4 for x in range(k)], [x + 9 for x in range(k)]]).T,columns=list('abcd')) </code></pre> <p>where rows correspond to time and columns to angles, and it looks like this:</p> <pre><code> a b c d 0 0 1 4 9 1 1 2 5 10 2 2 3 6 11 3 3 4 7 12 4 4 5 8 13 5 5 6 9 14 6 6 7 10 15 7 7 8 11 16 8 8 9 12 17 9 9 10 13 18 </code></pre> <p>Then for reasons I convert it to and ordered dictionary:</p> <pre><code>def highDimDF2Array(df): from collections import OrderedDict # Need to preserve order vels = [1.42,1.11,0.81,0.50] # Get dataframe shapes cols = df.columns trajectories = OrderedDict() for i,j in enumerate(cols): x = df[j].values x = x[~np.isnan(x)] maxTimeSteps = len(x) tmpTraj = np.empty((maxTimeSteps,3)) # This should be fast tmpTraj[:,0] = range(maxTimeSteps) # Remove construction nans tmpTraj[:,1] = x tmpTraj[:,2].fill(vels[i]) trajectories[j] = tmpTraj return trajectories </code></pre> <p>Then I plot it all</p> <pre><code>import matplotlib.pyplot as plt m = highDimDF2Array(df) M = np.vstack(m.values()) plt.scatter(M[:,0],M[:,1],15,M[:,2]) plt.title('Angle $[^\circ]$ vs. Time $[s]$') plt.colorbar() plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/vtEcv.png" rel="nofollow"><img src="http://i.stack.imgur.com/vtEcv.png" alt="enter image description here"></a></p> <p>Now all I want to do is to put all of that into a 2D numpy array with the properties:</p> <ul> <li>Time is mapped to the x-axis (or y doesn't matter)</li> <li>Angle is mapped to the y-axis</li> <li>The entries in the matrix correspond to the values of the coloured dots in the scatter plot</li> <li>All other entries are treated as <code>NaNs</code> (i.e. 
those that are undefined by a point in the scatter plot)</li> </ul> <p>In 3D the colour would correspond to the height.</p> <p>I was thinking of using something like this: <a href="http://stackoverflow.com/questions/13990465/3d-numpy-array-to-2d">3d Numpy array to 2d</a> but am not quite sure how.</p>
1
2016-09-13T16:45:11Z
39,477,098
<p>I don't use pandas, so I cannot really follow what your function does. But from the description of your array M and what you want, I think the function np.histogram2d is what you want. It bins the range of your independent values in equidistant steps and sums all the occurrences. You can apply weighting with your 3rd column to get the proper height. You have to choose the number of bins:</p> <pre><code>z, x, y = np.histogram2d(M[:,0], M[:,1], weights=M[:,2], bins=50) num, x, y = np.histogram2d(M[:,0], M[:,1], bins=50) z /= num # proper averaging, it also gives you NaN where num==0 plt.pcolor(x, y, z) # visualization </code></pre> <p>Also <code>plt.hist2d</code> could be interesting.</p> <p><strong>edit:</strong> The histogram2d yields the 2D array which was asked for in the question. The visualization, however, should be done with imshow, since pcolor doesn't skip NaN values (is there some way to teach it?)</p> <p>The advantage of this method is that the x,y values can be float and of arbitrary order. Further, by defining the number of bins, one can choose the resolution of the resulting image. Nevertheless, to get exactly the result which was asked for, one should do:</p> <pre><code>binx = np.arange(M[:,0].min()-0.5, M[:,0].max()+1.5) # edges of the bins; 0.5 is the half width biny = np.arange(M[:,1].min()-0.5, M[:,1].max()+1.5) z, x, y = np.histogram2d(M[:,0], M[:,1], weights=M[:,2], bins=(binx,biny)) num, x, y = np.histogram2d(M[:,0], M[:,1], bins=(binx,biny)) z /= num plt.imshow(z.T, interpolation='none', origin = 'lower') </code></pre> <p><a href="http://i.stack.imgur.com/z7oXZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/z7oXZ.png" alt="enter image description here"></a></p> <p>The output of pcolor doesn't leave out the NaNs, but in return it also takes the x and y values into account:</p> <pre><code>plt.pcolormesh(x, y, z.T, vmin=0, vmax=2) </code></pre> <p><a href="http://i.stack.imgur.com/vSLVm.png" rel="nofollow"><img src="http://i.stack.imgur.com/vSLVm.png" alt="enter image description here"></a></p>
1
2016-09-13T18:53:03Z
[ "python", "arrays", "numpy", "matplotlib" ]
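The bin-edge recipe in the answer above can be checked end-to-end without matplotlib. The toy (time, angle, value) rows below are made up for illustration; empty cells come out as NaN, which is exactly the "undefined" marker the question asks for:

```python
import numpy as np

# Made-up data in M's layout: columns are (x = time, y = angle, value).
M = np.array([
    [0.0, 2.0, 1.42],
    [1.0, 3.0, 1.42],
    [0.0, 4.0, 1.11],
    [1.0, 5.0, 1.11],
])

# Bin edges centered on the coordinates, as in the answer above.
binx = np.arange(M[:, 0].min() - 0.5, M[:, 0].max() + 1.5)
biny = np.arange(M[:, 1].min() - 0.5, M[:, 1].max() + 1.5)

# Sum of the values per cell, divided by the per-cell counts to average;
# empty cells give 0/0, i.e. NaN.
z, _, _ = np.histogram2d(M[:, 0], M[:, 1], weights=M[:, 2], bins=(binx, biny))
num, _, _ = np.histogram2d(M[:, 0], M[:, 1], bins=(binx, biny))
with np.errstate(invalid="ignore"):
    z = z / num

print(z)
```

Here `z[i, j]` is indexed as (time bin, angle bin), so it is transposed before `imshow` in the answer.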
From scatter plot to 2D array
39,475,146
<p>My mind has gone completely blank on this one.</p> <p>I want to do what I think is very simple.</p> <p>Suppose I have some test data:</p> <pre><code>import pandas as pd import numpy as np k=10 df = pd.DataFrame(np.array([range(k), [x + 1 for x in range(k)], [x + 4 for x in range(k)], [x + 9 for x in range(k)]]).T,columns=list('abcd')) </code></pre> <p>where rows correspond to time and columns to angles, and it looks like this:</p> <pre><code> a b c d 0 0 1 4 9 1 1 2 5 10 2 2 3 6 11 3 3 4 7 12 4 4 5 8 13 5 5 6 9 14 6 6 7 10 15 7 7 8 11 16 8 8 9 12 17 9 9 10 13 18 </code></pre> <p>Then for reasons I convert it to and ordered dictionary:</p> <pre><code>def highDimDF2Array(df): from collections import OrderedDict # Need to preserve order vels = [1.42,1.11,0.81,0.50] # Get dataframe shapes cols = df.columns trajectories = OrderedDict() for i,j in enumerate(cols): x = df[j].values x = x[~np.isnan(x)] maxTimeSteps = len(x) tmpTraj = np.empty((maxTimeSteps,3)) # This should be fast tmpTraj[:,0] = range(maxTimeSteps) # Remove construction nans tmpTraj[:,1] = x tmpTraj[:,2].fill(vels[i]) trajectories[j] = tmpTraj return trajectories </code></pre> <p>Then I plot it all</p> <pre><code>import matplotlib.pyplot as plt m = highDimDF2Array(df) M = np.vstack(m.values()) plt.scatter(M[:,0],M[:,1],15,M[:,2]) plt.title('Angle $[^\circ]$ vs. Time $[s]$') plt.colorbar() plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/vtEcv.png" rel="nofollow"><img src="http://i.stack.imgur.com/vtEcv.png" alt="enter image description here"></a></p> <p>Now all I want to do is to put all of that into a 2D numpy array with the properties:</p> <ul> <li>Time is mapped to the x-axis (or y doesn't matter)</li> <li>Angle is mapped to the y-axis</li> <li>The entries in the matrix correspond to the values of the coloured dots in the scatter plot</li> <li>All other entries are treated as <code>NaNs</code> (i.e. 
those that are undefined by a point in the scatter plot)</li> </ul> <p>In 3D the colour would correspond to the height.</p> <p>I was thinking of using something like this: <a href="http://stackoverflow.com/questions/13990465/3d-numpy-array-to-2d">3d Numpy array to 2d</a> but am not quite sure how.</p>
1
2016-09-13T16:45:11Z
39,478,996
<p>You can convert the values in M[:,0] and M[:,1] to integers and use them as indices to a 2D numpy array. Here's an example using the value for M you defined. </p> <pre><code>out = np.empty((20,10)) out[:] = np.NAN N = M[:,[0,1]].astype(int) out[N[:,1], N[:,0]] = M[:,2] plt.scatter(M[:,0],M[:,1],15,M[:,2]) plt.title('Angle $[^\circ]$ vs. Time $[s]$') plt.colorbar() plt.imshow(out, interpolation='none', origin = 'lower') </code></pre> <p><a href="http://i.stack.imgur.com/JTx6M.png" rel="nofollow"><img src="http://i.stack.imgur.com/JTx6M.png" alt="enter image description here"></a></p> <p>Here you can convert M to integers directly, but you might have to come up with a function to map the columns of M to integers depending on the resolution of the array you are creating.</p>
2
2016-09-13T21:07:00Z
[ "python", "arrays", "numpy", "matplotlib" ]
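When the coordinates are already integers (or can be rounded to integers), the histogramming can be skipped entirely, as the answer above does; this sketch sizes the grid from made-up data rather than hard-coding (20, 10):

```python
import numpy as np

# Made-up data in M's layout: columns are (x = time, y = angle, value).
M = np.array([
    [0.0, 2.0, 1.42],
    [1.0, 3.0, 1.42],
    [0.0, 4.0, 1.11],
])

# Grid sized to the integer coordinates, pre-filled with NaN so that
# cells with no scatter point stay "undefined".
out = np.full((int(M[:, 1].max()) + 1, int(M[:, 0].max()) + 1), np.nan)

N = M[:, [0, 1]].astype(int)     # (x, y) pairs as integer indices
out[N[:, 1], N[:, 0]] = M[:, 2]  # row = angle (y), column = time (x)

print(out)
```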
How to access environment variables set in a script later in a batch file run from the same script?
39,475,162
<p>My code:</p> <pre><code>file = open("crash_reports_envs.txt") envVariables=file.read() print(envVariables) file.close() os.environ['linuxwdir'] = (re.search("linuxwdir:(\S+)",envVariables).group(1)) os.environ['invertwdir']= (re.search("wdir:(\S+.*)\\n",envVariables).group(1)) </code></pre> <p>I am setting these environment variables in the script and running a batch file <code>file1</code> from the same script. I have another <code>file1</code> in the same folder where the script is. How can I use these variables in that batch file? Right now the batch file does not recognize these variables.</p>
0
2016-09-13T16:46:44Z
39,477,852
<p>The way you use environment variables in .bat files is to surround them with %, for example %linuxwdir%. If I understand your <code>.bat</code> file correctly, you need something like this (untested):</p> <pre><code>cd "C:\Program Files (x86)\PuTTY" pscp.exe -pw "pswd" "%invertwdir%/file2" uname@execServer:%linuxwdir%/file2 </code></pre>
0
2016-09-13T19:43:57Z
[ "python", "windows", "batch-file" ]
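The mechanism the answer above depends on is that a child process inherits `os.environ` from the parent that launches it. A minimal demonstration, with `python -c` standing in for the `.bat` file and a made-up directory value:

```python
import os
import subprocess
import sys

# Set the variable in the parent, exactly as the question's script does.
os.environ["linuxwdir"] = "/home/user/wdir"

# Any child process spawned afterwards (a .bat file, pscp, ...) sees it;
# here the child is another Python interpreter that just echoes it back.
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['linuxwdir'])"]
)
print(out.decode().strip())
```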
How to speed up LabelEncoder recoding of a categorical variable into integers
39,475,187
<p>I have a large csv with two strings per row in this form:</p> <pre><code>g,k a,h c,i j,e d,i i,h b,b d,d i,a d,h </code></pre> <p>I read in the first two columns and recode the strings to integers as follows:</p> <pre><code>import pandas as pd df = pd.read_csv("test.csv", usecols=[0,1], prefix="ID_", header=None) from sklearn.preprocessing import LabelEncoder # Initialize the LabelEncoder. le = LabelEncoder() le.fit(df.values.flat) # Convert to digits. df = df.apply(le.transform) </code></pre> <p>This code is from <a href="http://stackoverflow.com/a/39419342/2179021">http://stackoverflow.com/a/39419342/2179021</a>.</p> <p>The code works very well but is slow when df is large. I timed each step and the result was surprising to me.</p> <ul> <li><code>pd.read_csv</code> takes about 40 seconds. </li> <li><code>le.fit(df.values.flat)</code> takes about 30 seconds</li> <li><code>df = df.apply(le.transform)</code> takes about 250 seconds.</li> </ul> <p>Is there any way to speed up this last step? It feels like it should be the fastest step of them all!</p> <hr> <p><strong>More timings for the recoding step on a computer with 4GB of RAM</strong></p> <p>The answer below by maxymoo is fast but doesn't give the right answer. 
Taking the example csv from the top of the question, it translates it to:</p> <pre><code> 0 1 0 4 6 1 0 4 2 2 5 3 6 3 4 3 5 5 5 4 6 1 1 7 3 2 8 5 0 9 3 4 </code></pre> <p>Notice that 'd' is mapped to 3 in the first column but 2 in the second.</p> <p>I tried the solution from <a href="http://stackoverflow.com/a/39356398/2179021">http://stackoverflow.com/a/39356398/2179021</a> and get the following.</p> <pre><code>df = pd.DataFrame({'ID_0':np.random.randint(0,1000,1000000), 'ID_1':np.random.randint(0,1000,1000000)}).astype(str) df.info() memory usage: 7.6MB %timeit x = (df.stack().astype('category').cat.rename_categories(np.arange(len(df.stack().unique()))).unstack()) 1 loops, best of 3: 1.7 s per loop </code></pre> <p>Then I increased the dataframe size by a factor of 10.</p> <pre><code>df = pd.DataFrame({'ID_0':np.random.randint(0,1000,10000000), 'ID_1':np.random.randint(0,1000,10000000)}).astype(str) df.info() memory usage: 76.3+ MB %timeit x = (df.stack().astype('category').cat.rename_categories(np.arange(len(df.stack().unique()))).unstack()) MemoryError Traceback (most recent call last) </code></pre> <p>This method appears to use so much RAM trying to translate this relatively small dataframe that it crashes.</p> <p>I also timed LabelEncoder with the larger dataset with 10 millions rows. It runs without crashing but the fit line alone took 50 seconds. The df.apply(le.transform) step took about 80 seconds.</p> <p>How can I:</p> <ol> <li>Get something of roughly the speed of maxymoo's answer and roughly the memory usage of LabelEncoder but that gives the right answer when the dataframe has two columns. </li> <li>Store the mapping so that I can reuse it for different data (as in the way LabelEncoder allows me to do)?</li> </ol>
5
2016-09-13T16:48:29Z
39,503,973
<p>It looks like it will be much faster to use the pandas <code>category</code> datatype; internally this uses a hash table, whereas LabelEncoder uses a sorted search:</p> <pre><code>In [87]: df = pd.DataFrame({'ID_0':np.random.randint(0,1000,1000000), 'ID_1':np.random.randint(0,1000,1000000)}).astype(str) In [88]: le.fit(df.values.flat) %time x = df.apply(le.transform) CPU times: user 6.28 s, sys: 48.9 ms, total: 6.33 s Wall time: 6.37 s In [89]: %time x = df.apply(lambda x: x.astype('category').cat.codes) CPU times: user 301 ms, sys: 28.6 ms, total: 330 ms Wall time: 331 ms </code></pre> <p><strong>EDIT:</strong> Here is a custom transformer class that you could use (you probably won't see this in an official scikit-learn release since the maintainers don't want to have pandas as a dependency)</p> <pre><code>import pandas as pd from pandas.core.nanops import unique1d from sklearn.base import BaseEstimator, TransformerMixin class PandasLabelEncoder(BaseEstimator, TransformerMixin): def fit(self, y): self.classes_ = unique1d(y) return self def transform(self, y): s = pd.Series(y).astype('category', categories=self.classes_) return s.cat.codes </code></pre>
4
2016-09-15T05:55:07Z
[ "python", "pandas", "scikit-learn" ]
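One caveat with per-column `.cat.codes` as timed above: each column gets its own code table, which is the very mismatch reported in the question (`'d'` mapping to different integers per column). A sketch of a shared vocabulary built once over both columns (column names and toy values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ID_0": ["g", "a", "c", "d"], "ID_1": ["k", "h", "i", "h"]})

# One sorted vocabulary over *all* values, so the same string maps to the
# same integer in every column -- mirroring LabelEncoder's behaviour.
uniques = np.sort(pd.unique(df.values.ravel()))
lookup = {value: code for code, value in enumerate(uniques)}

encoded = df.apply(lambda col: col.map(lookup))
print(encoded)
```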
How to speed up LabelEncoder recoding of a categorical variable into integers
39,475,187
<p>I have a large csv with two strings per row in this form:</p> <pre><code>g,k a,h c,i j,e d,i i,h b,b d,d i,a d,h </code></pre> <p>I read in the first two columns and recode the strings to integers as follows:</p> <pre><code>import pandas as pd df = pd.read_csv("test.csv", usecols=[0,1], prefix="ID_", header=None) from sklearn.preprocessing import LabelEncoder # Initialize the LabelEncoder. le = LabelEncoder() le.fit(df.values.flat) # Convert to digits. df = df.apply(le.transform) </code></pre> <p>This code is from <a href="http://stackoverflow.com/a/39419342/2179021">http://stackoverflow.com/a/39419342/2179021</a>.</p> <p>The code works very well but is slow when df is large. I timed each step and the result was surprising to me.</p> <ul> <li><code>pd.read_csv</code> takes about 40 seconds. </li> <li><code>le.fit(df.values.flat)</code> takes about 30 seconds</li> <li><code>df = df.apply(le.transform)</code> takes about 250 seconds.</li> </ul> <p>Is there any way to speed up this last step? It feels like it should be the fastest step of them all!</p> <hr> <p><strong>More timings for the recoding step on a computer with 4GB of RAM</strong></p> <p>The answer below by maxymoo is fast but doesn't give the right answer. 
Taking the example csv from the top of the question, it translates it to:</p> <pre><code> 0 1 0 4 6 1 0 4 2 2 5 3 6 3 4 3 5 5 5 4 6 1 1 7 3 2 8 5 0 9 3 4 </code></pre> <p>Notice that 'd' is mapped to 3 in the first column but 2 in the second.</p> <p>I tried the solution from <a href="http://stackoverflow.com/a/39356398/2179021">http://stackoverflow.com/a/39356398/2179021</a> and get the following.</p> <pre><code>df = pd.DataFrame({'ID_0':np.random.randint(0,1000,1000000), 'ID_1':np.random.randint(0,1000,1000000)}).astype(str) df.info() memory usage: 7.6MB %timeit x = (df.stack().astype('category').cat.rename_categories(np.arange(len(df.stack().unique()))).unstack()) 1 loops, best of 3: 1.7 s per loop </code></pre> <p>Then I increased the dataframe size by a factor of 10.</p> <pre><code>df = pd.DataFrame({'ID_0':np.random.randint(0,1000,10000000), 'ID_1':np.random.randint(0,1000,10000000)}).astype(str) df.info() memory usage: 76.3+ MB %timeit x = (df.stack().astype('category').cat.rename_categories(np.arange(len(df.stack().unique()))).unstack()) MemoryError Traceback (most recent call last) </code></pre> <p>This method appears to use so much RAM trying to translate this relatively small dataframe that it crashes.</p> <p>I also timed LabelEncoder with the larger dataset with 10 millions rows. It runs without crashing but the fit line alone took 50 seconds. The df.apply(le.transform) step took about 80 seconds.</p> <p>How can I:</p> <ol> <li>Get something of roughly the speed of maxymoo's answer and roughly the memory usage of LabelEncoder but that gives the right answer when the dataframe has two columns. </li> <li>Store the mapping so that I can reuse it for different data (as in the way LabelEncoder allows me to do)?</li> </ol>
5
2016-09-13T16:48:29Z
39,521,228
<p>I tried this with the DataFrame:</p> <pre><code>In [xxx]: import string In [xxx]: letters = np.array([c for c in string.ascii_lowercase]) In [249]: df = pd.DataFrame({'ID_0': np.random.choice(letters, 10000000), 'ID_1':np.random.choice(letters, 10000000)}) </code></pre> <p>It looks like this:</p> <pre><code>In [261]: df.head() Out[261]: ID_0 ID_1 0 v z 1 i i 2 d n 3 z r 4 x x In [262]: df.shape Out[262]: (10000000, 2) </code></pre> <p>So, 10 million rows. Locally, my timings are:</p> <pre><code>In [257]: % timeit le.fit(df.values.flat) 1 loops, best of 3: 17.2 s per loop In [258]: % timeit df2 = df.apply(le.transform) 1 loops, best of 3: 30.2 s per loop </code></pre> <p>Then I made a dict mapping letters to numbers and used pandas.Series.map:</p> <pre><code>In [248]: letters = np.array([l for l in string.ascii_lowercase]) In [263]: d = dict(zip(letters, range(26))) In [273]: %timeit for c in df.columns: df[c] = df[c].map(d) 1 loops, best of 3: 1.12 s per loop In [274]: df.head() Out[274]: ID_0 ID_1 0 21 25 1 8 8 2 3 13 3 25 17 4 23 23 </code></pre> <p>So that might be an option. The dict just needs to have all of the values that occur in the data. </p> <p>EDIT: The OP asked what timing I have for that second option, with categories. This is what I get: </p> <pre><code>In [40]: %timeit x=df.stack().astype('category').cat.rename_categories(np.arange(len(df.stack().unique()))).unstack() 1 loops, best of 3: 13.5 s per loop </code></pre> <p>EDIT: per the 2nd comment:</p> <pre><code>In [45]: %timeit uniques = np.sort(pd.unique(df.values.ravel())) 1 loops, best of 3: 933 ms per loop In [46]: %timeit dfc = df.apply(lambda x: x.astype('category', categories=uniques)) 1 loops, best of 3: 1.35 s per loop </code></pre>
3
2016-09-15T22:26:22Z
[ "python", "pandas", "scikit-learn" ]
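The `astype('category', categories=uniques)` spelling timed at the end of the answer above is tied to older pandas; `pd.Categorical` expresses the same fixed-vocabulary encoding in current versions and keeps the fit/transform split the question asks for. The toy data is made up, and note the assumption that unseen values encode as -1 rather than raising:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ID_0": ["g", "a", "c", "d"], "ID_1": ["k", "h", "i", "h"]})

# "fit": store the sorted vocabulary once, over both columns.
categories = np.sort(pd.unique(df.values.ravel()))

def transform(frame):
    # "transform": map each column into the fixed category list;
    # values outside the vocabulary get code -1.
    return frame.apply(lambda col: pd.Categorical(col, categories=categories).codes)

encoded = transform(df)
print(encoded)
```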
Summing part of 2D array in python
39,475,275
<p>I have a 2D array. After manipulating the x column of the array, I created a new 2D array (data2) with the new changes to the x column (and the y column remained the same). I now want to append the array of y values in data2 to a new array only if its x value is greater than 3 or less than 5. For example, if the 2D array was ([2,3], [4,5], [3.5,6], [9,7]), I would only want the y values of 5 and 6 in my new array because their x values are between 3 and 5. I'm stuck. please help!</p> <pre><code>import numpy as np import matplotlib.pyplot as plt data = np.loadtxt('blah.txt') #blah.txt is a 2d array c = (3*10)^8 x = c /((data[:,0])*10) y = data[:,1] data2 = np.array((x,y)).T def new_yarray(data2): yarray =[] if data2[:,0] &lt;= 5 or data2[:,0] &gt;= 3: np.append(data2[:,1]) print yarray return yarray </code></pre>
0
2016-09-13T16:54:24Z
39,475,494
<p>Here is a one-liner solution broken into several steps for clarity.</p> <p>Given an array</p> <pre><code>&gt;&gt;&gt; a array([[ 2. , 3. ], [ 4. , 5. ], [ 3.5, 6. ], [ 9. , 7. ]]) </code></pre> <p>You can find the <em>index</em> of the elements where the <code>x</code> value is more than 3 and less than 5 by using <code>np.where()</code>:</p> <pre><code>&gt;&gt;&gt; np.where(np.logical_and(a[:,0] &gt; 3,a[:,0] &lt; 5)) (array([1, 2]),) </code></pre> <p>Where <code>a[:,0] = array([ 2. , 4. , 3.5, 9. ])</code> is the array of all the <code>x</code> values. Now, you can get all the corresponding <code>y</code> values where <code>3 &lt; x &lt; 5</code> by:</p> <pre><code>&gt;&gt;&gt; a[np.where(np.logical_and(a[:,0] &gt; 3,a[:,0] &lt; 5))][:,1] array([ 5., 6.]) </code></pre>
1
2016-09-13T17:09:17Z
[ "python", "arrays", "function", "numpy", "append" ]
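The `np.where(np.logical_and(...))` chain above can be written more compactly as a boolean mask, which is equivalent and indexes the rows directly:

```python
import numpy as np

a = np.array([[2, 3], [4, 5], [3.5, 6], [9, 7]])

# Rows whose x (first column) lies strictly between 3 and 5.
mask = (a[:, 0] > 3) & (a[:, 0] < 5)
y_values = a[mask][:, 1]

print(y_values)
```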
Summing part of 2D array in python
39,475,275
<p>I have a 2D array. After manipulating the x column of the array, I created a new 2D array (data2) with the new changes to the x column (and the y column remained the same). I now want to append the array of y values in data2 to a new array only if its x value is greater than 3 or less than 5. For example, if the 2D array was ([2,3], [4,5], [3.5,6], [9,7]), I would only want the y values of 5 and 6 in my new array because their x values are between 3 and 5. I'm stuck. please help!</p> <pre><code>import numpy as np import matplotlib.pyplot as plt data = np.loadtxt('blah.txt') #blah.txt is a 2d array c = (3*10)^8 x = c /((data[:,0])*10) y = data[:,1] data2 = np.array((x,y)).T def new_yarray(data2): yarray =[] if data2[:,0] &lt;= 5 or data2[:,0] &gt;= 3: np.append(data2[:,1]) print yarray return yarray </code></pre>
0
2016-09-13T16:54:24Z
39,475,630
<p>You can use this function to flatten the list and then append the values accordingly. </p> <pre><code>def flatten_list(a, result=None): """ Flattens a nested list. """ if result is None: result = [] for x in a: if isinstance(x, list): flatten_list(x, result) else: result.append(x) return result lst = ([2,3], [4,5], [3.5,6], [9,7]) lst = flatten_list(lst) new_lst = [] for i in lst: if (float(i) &gt; 3 and float(i) &lt; 5): new_lst.append(i) print new_lst </code></pre> <p>In this case, only 3.5 and 4 are greater than 3 and less than 5...</p>
0
2016-09-13T17:16:32Z
[ "python", "arrays", "function", "numpy", "append" ]
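Note that flattening as in the answer above mixes x and y values together before filtering, so a y value that happens to fall inside the (3, 5) band would be kept by mistake. A sketch that preserves the (x, y) pairing and tests only the x component:

```python
def filter_y(pairs, lo=3, hi=5):
    # Keep each (x, y) pair intact; only x is compared against the bounds.
    return [y for x, y in pairs if lo < x < hi]

result = filter_y([(2, 3), (4, 5), (3.5, 6), (9, 7)])
print(result)
```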
How to find the source of a global(ish) variable?
39,475,290
<p>I inherited some large and unwieldy python code. In one file it's using a list of commands imported from another file. Looking at it with pdb, this commands variable ends up in the global namespace. However, there's another file that doesn't look like it's even being used that also has a commands variable in it, and for some reason on certain machines that variable is used instead.</p> <p>My question is, is there a way in pdb or just code to show the source of the commands variable? I'm hoping for some concrete evidence that shows it's pointing to that file for some reason.</p> <p>It's a nice demonstration of the dangers of global variables I guess, and I can clean up the code, but I'd like to fully understand it first.</p>
0
2016-09-13T16:55:41Z
39,475,532
<p>To get the module of the <code>commands</code> object, you could try:</p> <pre><code>import inspect inspect.getmodule(commands) </code></pre>
0
2016-09-13T17:11:03Z
[ "python", "pdb" ]
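A sketch of how that lookup behaves, with the stdlib `json` module standing in for the mystery module. One caveat worth knowing: `inspect.getmodule` resolves objects that carry a `__module__` (functions, classes), but for a plain container like a list of commands it returns `None`:

```python
import inspect
import json

commands = json.dumps  # stand-in for the inherited global
module = inspect.getmodule(commands)
print(module.__name__)

# A bare list carries no __module__, so the lookup cannot resolve it.
plain_commands = ["start", "stop"]
print(inspect.getmodule(plain_commands))
```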
How to handle unknown encoding
39,475,359
<p>I'm having some issues with a Python script that needs to open files with different encoding.</p> <p><strong>I'm usually using this:</strong></p> <pre><code>with open(path_to_file, 'r') as f: first_line = f.readline() </code></pre> <p>And that works great when the file is properly encode.</p> <p><strong>But sometimes, it doesn't work, for example <a href="http://www.cjoint.com/doc/16_09/FInqYhqOOYi_test.Txt" rel="nofollow">with this file</a>, I've got this:</strong></p> <pre><code>In [22]: with codecs.open(filename, 'r') as f: ...: a = f.readline() ...: print(a) ...: print(repr(a)) ...: ��Test for StackOverlow '\xff\xfeT\x00e\x00s\x00t\x00 \x00f\x00o\x00r\x00 \x00S\x00t\x00a\x00c\x00k\x00O\x00v\x00e\x00r\x00l\x00o\x00w\x00\r\x00\n' </code></pre> <p><strong>And I would like to search some stuff on those lines. Sadly with that method, I can't:</strong></p> <pre><code>In [24]: "Test" in a Out[24]: False </code></pre> <p><strong>I've found a lot of questions here referring to the same type of issues:</strong></p> <ol> <li><a href="http://stackoverflow.com/questions/491921/unicode-utf8-reading-and-writing-to-files-in-python">Unicode (utf8) reading and writing to files in python</a></li> <li><a href="http://stackoverflow.com/questions/22216076/unicodedecodeerror-utf8-codec-cant-decode-byte-0xa5-in-position-0-invalid-s">UnicodeDecodeError: &#39;utf8&#39; codec can&#39;t decode byte 0xa5 in position 0: invalid start byte</a></li> <li><a href="http://programmers.stackexchange.com/questions/187169/how-to-detect-the-encoding-of-a-file">http://programmers.stackexchange.com/questions/187169/how-to-detect-the-encoding-of-a-file</a></li> <li><a href="http://stackoverflow.com/questions/1979171/how-can-i-escape-xff-xfe-to-a-readable-string">how can i escape &#39;\xff\xfe&#39; to a readable string</a></li> </ol> <p>But can't manage to decode the file properly with them... 
</p> <p><strong>With codecs.open():</strong></p> <pre><code>In [17]: with codecs.open(filename, 'r', "utf-8") as f: a = f.readline() print(a) ....: --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) &lt;ipython-input-17-0e72208eaac2&gt; in &lt;module&gt;() 1 with codecs.open(filename, 'r', "utf-8") as f: ----&gt; 2 a = f.readline() 3 print(a) 4 /usr/lib/python2.7/codecs.pyc in readline(self, size) 688 def readline(self, size=None): 689 --&gt; 690 return self.reader.readline(size) 691 692 def readlines(self, sizehint=None): /usr/lib/python2.7/codecs.pyc in readline(self, size, keepends) 543 # If size is given, we call read() only once 544 while True: --&gt; 545 data = self.read(readsize, firstline=True) 546 if data: 547 # If we're at a "\r" read one extra character (which might /usr/lib/python2.7/codecs.pyc in read(self, size, chars, firstline) 490 data = self.bytebuffer + newdata 491 try: --&gt; 492 newchars, decodedbytes = self.decode(data, self.errors) 493 except UnicodeDecodeError, exc: 494 if firstline: UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte </code></pre> <p><strong>with encode('utf-8):</strong></p> <pre><code>In [18]: with codecs.open(filename, 'r') as f: a = f.readline() print(a) ....: a.encode('utf-8') ....: print(a) ....: ��Test for StackOverlow --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) &lt;ipython-input-18-7facc05b9cb1&gt; in &lt;module&gt;() 2 a = f.readline() 3 print(a) ----&gt; 4 a.encode('utf-8') 5 print(a) 6 UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128) </code></pre> <p><strong>I've found a way to change file encoding automatically with Vim:</strong></p> <pre><code>system("vim '+set fileencoding=utf-8' '+wq' %s" % path_to_file) </code></pre> <p>But I would like to do this without using 
Vim...</p> <p>Any help will be appreciate.</p>
1
2016-09-13T17:00:23Z
39,475,568
<p>It looks like you need to detect the encoding in the input file. The <code>chardet</code> library mentioned in the answer to <a href="http://stackoverflow.com/questions/436220/python-is-there-a-way-to-determine-the-encoding-of-text-file">this question</a> might help (though note the proviso that complete encoding detection is not possible).</p> <p>Then you can write the file out in a known encoding, perhaps. When dealing with Unicode remember that it MUST be encoded into a suitable bytestream before being communicated outside the process. Decode on input, then encode on output.</p>
4
2016-09-13T17:13:24Z
[ "python", "python-2.7", "encoding" ]
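Where `chardet` isn't available, the BOM alone already disambiguates the file in this question. A minimal sniffing sketch; only the UTF-8 and UTF-16 BOMs are checked, and BOM-less files simply fall back to utf-8 by assumption:

```python
import codecs

# Each BOM mapped to a codec name that can decode the payload.
BOMS = [
    (codecs.BOM_UTF8, "utf-8-sig"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def sniff_encoding(raw):
    for bom, name in BOMS:
        if raw.startswith(bom):
            return name
    return "utf-8"  # fallback assumption for BOM-less input

raw = b"\xff\xfeT\x00e\x00s\x00t\x00"  # start of the question's file
enc = sniff_encoding(raw)

# utf-8-sig strips its own BOM; for utf-16 the 2-byte BOM is cut by hand.
payload = raw[2:] if enc.startswith("utf-16") else raw
text = payload.decode(enc)
print(enc, repr(text))
```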
How to handle unknown encoding
39,475,359
<p>I'm having some issues with a Python script that needs to open files with different encoding.</p> <p><strong>I'm usually using this:</strong></p> <pre><code>with open(path_to_file, 'r') as f: first_line = f.readline() </code></pre> <p>And that works great when the file is properly encode.</p> <p><strong>But sometimes, it doesn't work, for example <a href="http://www.cjoint.com/doc/16_09/FInqYhqOOYi_test.Txt" rel="nofollow">with this file</a>, I've got this:</strong></p> <pre><code>In [22]: with codecs.open(filename, 'r') as f: ...: a = f.readline() ...: print(a) ...: print(repr(a)) ...: ��Test for StackOverlow '\xff\xfeT\x00e\x00s\x00t\x00 \x00f\x00o\x00r\x00 \x00S\x00t\x00a\x00c\x00k\x00O\x00v\x00e\x00r\x00l\x00o\x00w\x00\r\x00\n' </code></pre> <p><strong>And I would like to search some stuff on those lines. Sadly with that method, I can't:</strong></p> <pre><code>In [24]: "Test" in a Out[24]: False </code></pre> <p><strong>I've found a lot of questions here referring to the same type of issues:</strong></p> <ol> <li><a href="http://stackoverflow.com/questions/491921/unicode-utf8-reading-and-writing-to-files-in-python">Unicode (utf8) reading and writing to files in python</a></li> <li><a href="http://stackoverflow.com/questions/22216076/unicodedecodeerror-utf8-codec-cant-decode-byte-0xa5-in-position-0-invalid-s">UnicodeDecodeError: &#39;utf8&#39; codec can&#39;t decode byte 0xa5 in position 0: invalid start byte</a></li> <li><a href="http://programmers.stackexchange.com/questions/187169/how-to-detect-the-encoding-of-a-file">http://programmers.stackexchange.com/questions/187169/how-to-detect-the-encoding-of-a-file</a></li> <li><a href="http://stackoverflow.com/questions/1979171/how-can-i-escape-xff-xfe-to-a-readable-string">how can i escape &#39;\xff\xfe&#39; to a readable string</a></li> </ol> <p>But can't manage to decode the file properly with them... 
</p> <p><strong>With codecs.open():</strong></p> <pre><code>In [17]: with codecs.open(filename, 'r', "utf-8") as f: a = f.readline() print(a) ....: --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) &lt;ipython-input-17-0e72208eaac2&gt; in &lt;module&gt;() 1 with codecs.open(filename, 'r', "utf-8") as f: ----&gt; 2 a = f.readline() 3 print(a) 4 /usr/lib/python2.7/codecs.pyc in readline(self, size) 688 def readline(self, size=None): 689 --&gt; 690 return self.reader.readline(size) 691 692 def readlines(self, sizehint=None): /usr/lib/python2.7/codecs.pyc in readline(self, size, keepends) 543 # If size is given, we call read() only once 544 while True: --&gt; 545 data = self.read(readsize, firstline=True) 546 if data: 547 # If we're at a "\r" read one extra character (which might /usr/lib/python2.7/codecs.pyc in read(self, size, chars, firstline) 490 data = self.bytebuffer + newdata 491 try: --&gt; 492 newchars, decodedbytes = self.decode(data, self.errors) 493 except UnicodeDecodeError, exc: 494 if firstline: UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte </code></pre> <p><strong>with encode('utf-8):</strong></p> <pre><code>In [18]: with codecs.open(filename, 'r') as f: a = f.readline() print(a) ....: a.encode('utf-8') ....: print(a) ....: ��Test for StackOverlow --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) &lt;ipython-input-18-7facc05b9cb1&gt; in &lt;module&gt;() 2 a = f.readline() 3 print(a) ----&gt; 4 a.encode('utf-8') 5 print(a) 6 UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128) </code></pre> <p><strong>I've found a way to change file encoding automatically with Vim:</strong></p> <pre><code>system("vim '+set fileencoding=utf-8' '+wq' %s" % path_to_file) </code></pre> <p>But I would like to do this without using 
Vim...</p> <p>Any help will be appreciate.</p>
1
2016-09-13T17:00:23Z
39,475,719
<p>It looks like this is utf-16-le (utf-16 little endian ...) but you are missing a final <code>\x00</code></p> <pre><code>&gt;&gt;&gt; s = '\xff\xfeT\x00e\x00s\x00t\x00 \x00f\x00o\x00r\x00 \x00S\x00t\x00a\x00c\x00k\x00O\x00v\x00e\x00r\x00l\x00o\x00w\x00\r\x00\n' &gt;&gt;&gt; s.decode('utf-16-le') # creates error Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Python26\lib\encodings\utf_16_le.py", line 16, in decode return codecs.utf_16_le_decode(input, errors, True) UnicodeDecodeError: 'utf16' codec can't decode byte 0x0a in position 46: truncated data &gt;&gt;&gt; (s+"\x00").decode("utf-16-le") # TADA!!!! u'\ufeffTest for StackOverlow\r\n' &gt;&gt;&gt; </code></pre>
4
2016-09-13T17:22:49Z
[ "python", "python-2.7", "encoding" ]
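The same fix restated with Python 3 byte strings (the question's session is Python 2): the captured line is one byte short of a whole UTF-16 code unit -- the \x00 half of the little-endian \n was lost -- so decoding fails until a NUL is padded back:

```python
raw = (b"\xff\xfeT\x00e\x00s\x00t\x00 \x00f\x00o\x00r\x00 "
       b"\x00S\x00t\x00a\x00c\x00k\x00O\x00v\x00e\x00r\x00l\x00o\x00w\x00\r\x00\n")

assert len(raw) % 2 == 1  # odd length: one byte of a code unit is missing

try:
    raw.decode("utf-16-le")
except UnicodeDecodeError:
    text = (raw + b"\x00").decode("utf-16-le")

print(repr(text))
```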
Openerp , how to save and re-direct in to another form by clicking on save button
39,475,431
<p>I am working in the <strong>hr</strong> module in Openerp and a requirement has arisen that once you click on the save button: 1. Save the data into the DB (already happening) 2. Redirect into the leave allocation form.</p> <p>Please help me with completing the second requirement, which I have no idea how to do.</p> <p><strong>HR - Create Profile Form</strong> <a href="http://i.stack.imgur.com/7h7hG.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/7h7hG.jpg" alt="enter image description here"></a></p> <p><strong>Leave Management - Leave allocation form</strong></p> <p><a href="http://i.stack.imgur.com/MQuIR.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/MQuIR.jpg" alt="enter image description here"></a></p>
0
2016-09-13T17:04:46Z
39,475,504
<p>You can override the create or write function and have it return an action to bring up the other view.</p> <p>I used <code>super(Partner, self)</code>; you may need to replace this with something else. The pitfall with this method is that it will not work using xmlrpc. </p> <pre><code>@api.model def create(self, vals): super(Partner, self).create(vals) return { 'view_type': 'form', 'view_mode': 'form', 'res_model': 'hr.holidays', 'type': 'ir.actions.act_window', 'target': 'new' } </code></pre> <p>You should be able to pass a context as well to fill in form values. </p> <p>Another option would be to create a wizard with an almost mirror view for your form above. Have a next button which calls a method which creates the record and then returns an action similar to the example. This way you would not need to override the create function (leaving it available for xmlrpc) while still redirecting to the form you want.</p>
1
2016-09-13T17:09:51Z
[ "python", "xml", "python-2.7", "openerp", "openerp-7" ]
Get name of users (persons, not applications) in linux system using psutil library
39,475,435
<p>I am trying to use the psutil library to get users logged in to a linux system.</p> <p>For that I used the function psutil.users()</p> <pre><code>&gt;&gt;&gt; import psutil &gt;&gt;&gt; psutil.users() [suser(name='vibhcool', terminal='tty2',host='localhost',started=1473815296.0)] </code></pre> <p>I want to extract the username from this list; what shall I do? Also, what is suser here?</p>
0
2016-09-13T17:05:05Z
39,475,513
<p>I got the answer (sorry, I am bad at googling).</p> <p>psutil.users() returns a list, so it can be traversed using a for loop:</p> <pre><code>users = psutil.users()
for user in users:
    print(user.name)
</code></pre> <p>reference: <a href="http://www.programcreek.com/python/example/53877/psutil.users" rel="nofollow">http://www.programcreek.com/python/example/53877/psutil.users</a></p>
-1
2016-09-13T17:10:08Z
[ "python", "linux", "psutil" ]
Get name of users(persons, not applications) in linux system using psutil library
39,475,435
<p>I am trying to use the psutil library to get the users logged in to a Linux system.</p> <p>For that I used the function psutil.users()</p> <pre><code>&gt;&gt;&gt; import psutil
&gt;&gt;&gt; psutil.users()
[suser(name='vibhcool', terminal='tty2', host='localhost', started=1473815296.0)]
</code></pre> <p>I want to extract the username from this list; what shall I do? Also, what is suser here?</p>
0
2016-09-13T17:05:05Z
39,475,528
<p>I don't know why they chose the name <code>suser</code>, but it's actually a namedtuple.</p> <p>That shouldn't matter; you get the name of a user like so:</p> <pre><code>&gt;&gt;&gt; import psutil
&gt;&gt;&gt; users = psutil.users()
&gt;&gt;&gt; first_user = users[0]
&gt;&gt;&gt; name = first_user.name
&gt;&gt;&gt; print(name)
'vibhcool'
</code></pre> <p>In short:</p> <pre><code>&gt;&gt;&gt; import psutil
&gt;&gt;&gt; print(psutil.users()[0].name)
'vibhcool'
</code></pre>
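<p>As a self-contained illustration of why attribute access works (no psutil needed here; the <code>suser</code> type below is a stand-in built with the same fields, and the values are copied from the question):</p>

```python
from collections import namedtuple

# Stand-in for psutil's suser result type; field values are the
# ones shown in the question.
suser = namedtuple('suser', ['name', 'terminal', 'host', 'started'])
users = [suser(name='vibhcool', terminal='tty2',
               host='localhost', started=1473815296.0)]

# attribute access works because each entry is a namedtuple
names = [u.name for u in users]
print(names)  # -> ['vibhcool']
```

<p>The same list comprehension applied to the real <code>psutil.users()</code> result would collect every logged-in username at once.</p>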
0
2016-09-13T17:10:39Z
[ "python", "linux", "psutil" ]
Concurrency error while executing DocumentDB stored procedure on multiple Docker containers
39,475,503
<p>I am currently building a Python Tornado web application using Azure Storage to store images, and DocumentDB to store metadata on the images. Whenever an image is uploaded, it can use any one of two possible Docker containers running the Tornado web app to execute the POST method asynchronously. The error I'm having is when I get to the stored procedure I have sitting in DocumentDB scripts. The sproc is being executed in two separate threads in two separate Docker containers at the same time. The stored procedure is meant to generate a new ReceiptID for each image uploaded by querying DocDB for the <code>gen_receipt_id</code> document, which looks like this:</p> <pre><code>{
    "id": "gen_receipt_id",
    "counter": 406
}
</code></pre> <p>The sproc then increments the <code>counter</code> property by 1, and that new ID is attached to the new receipt's metadata. The sproc looks like this:</p> <pre><code>function receiptIDSproc() {
    var collection = getContext().getCollection();

    // Query documents and take 1st item.
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        "SELECT * FROM root r WHERE r.id='gen_receipt_id'",
        function(err, feed) {
            if (err) throw err;

            // Check the feed and if empty, set the body to 'no docs found',
            // else take 1st element from feed
            if (!feed || !feed.length) getContext().getResponse().setBody('no docs found');
            else {
                tryUpdate(feed[0]);
            }
        });

    if (!isAccepted) throw new Error('The query was not accepted by the server.');

    function tryUpdate(document) {
        var requestOptions = {"If-Match": document._etag, etag: document._etag};
        document.counter += 1;

        // Update the document.
        var isAccepted = collection.replaceDocument(document._self, document, requestOptions,
            function (err, updatedDocument, responseOptions) {
                if (err) throw err;

                // If we have successfully updated the document - return it in the response body.
                getContext().getResponse().setBody(updatedDocument);
            });

        // If we hit execution bounds - throw an exception.
        if (!isAccepted) {
            throw new Error("The stored procedure timed out.");
        }
    }
}
</code></pre> <p>However, when I go to upload multiple images concurrently, I get a conflict with the operation happening asynchronously: <a href="http://i.stack.imgur.com/GtEhz.png" rel="nofollow">Fine-Uploader upload conflict</a></p> <p>The error in the console looks like this:</p> <pre><code>tornado2_1 | ERROR:500 POST /v1.0/groups/1/receipts (172.18.0.4) 1684.98ms
tornado1_1 | 407 //Here I'm printing the ID the Sproc generated
tornado1_1 | 2016/9/13/000000000000407
tornado2_1 | 407 //Here I'm printing the ID the Sproc generated
tornado2_1 | 2016/9/13/000000000000407
nginx_1    | 10.0.75.1 - - [13/Sep/2016:16:49:47 +0000] "POST /v1.0/groups/1/receipts HTTP/1.1" 200 17 "http://local.zenxpense.com/upload" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"
tornado1_1 | INFO:200 POST /v1.0/groups/1/receipts (172.18.0.4) 1132.49ms
tornado2_1 | WARNING:500 POST /v1.0/groups/1/receipts (172.18.0.4): An error occured while uploading to Azure Storage: HTTP 500: Internal Server Error (An error occured while creating DocumentDB record: Status code: 409
tornado2_1 | {"code":"Conflict","message":"Message: {\"Errors\":[\"Resource with specified id or name already exists\"]}\r\nActivityId: b226be91-f193-4c1b-9cc2-bcd8293bd36b, Request URI: /apps/8ae2ad5a-d261-42ac-aaa1-9ec0fd662d12/services/cc7fdf37-5f62-41db-a9d6-37626da67815/partitions/8063ad6c-33ad-4148-a60f-91c3acbfae6f/replicas/131171655602617741p"})
</code></pre> <p>As you can see from the error, the sproc is generating the same ReceiptID (<code>407</code>) in two different Docker containers, and because of that there's a conflict error, since I'm trying to create two documents with the same ID. What I need is to prevent the sproc from generating the same ID in two separate containers. I tried using ETags and the "If-Match" header in the sproc, but it still happens, since each container sees the same ETag on the document, so no error is raised.</p>
0
2016-09-13T17:09:44Z
39,476,273
<p>The typical NoSQL solution to this common problem is to use GUIDs rather than sequential IDs. </p> <p>However, since DocumentDB sprocs provide you with ACID constraints, it should be possible to do what you want using an optimistic concurrency approach with retry. </p> <p>So, if you run this sproc twice from two different places at the exact same time, you'll get the same ID (407 in this example). One of the sprocs should be able to write to that ID and the other will fail. The key here is to retry any that fail. The sproc will rerun and get the next ID (408). Since simultaneous requests should be a rarity, there should be negligible impact on the median response time.</p>
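<p>A minimal sketch of the GUID approach (the helper name and the date prefix are made up for illustration; only the use of <code>uuid4</code> is the point):</p>

```python
import uuid

# Each container generates its own globally unique id, so there is no
# shared counter to race on. The date prefix mirrors the id format
# from the question; the helper itself is hypothetical.
def new_receipt_id(prefix="2016/9/13"):
    return "{}/{}".format(prefix, uuid.uuid4().hex)

a = new_receipt_id()
b = new_receipt_id()
assert a != b  # collisions are practically impossible
```

<p>With GUIDs there is no read-increment-write cycle at all, so the conflict simply cannot occur; the retry approach is only needed if the IDs must stay sequential.</p>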
0
2016-09-13T17:58:18Z
[ "python", "azure", "stored-procedures", "docker", "azure-documentdb" ]
python pandas: filter out records with null or empty string for a given field
39,475,566
<p>I am trying to filter out records whose field_A is null or an empty string in the data frame, like below:</p> <pre><code>my_df[my_df.editions is not None]
my_df.shape
</code></pre> <p>This gives me an error:</p> <pre><code>---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
&lt;ipython-input-40-e1969e0af259&gt; in &lt;module&gt;()
      1 my_df['editions'] = my['editions'].astype(str)
----&gt; 2 my_df = my_df[my_df.editions is not None]
      3 my_df.shape

/home/edamame/anaconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in __getitem__(self, key)
   1995             return self._getitem_multilevel(key)
   1996         else:
-&gt; 1997             return self._getitem_column(key)
   1998
   1999     def _getitem_column(self, key):

/home/edamame/anaconda2/lib/python2.7/site-packages/pandas/core/frame.pyc in _getitem_column(self, key)
   2002         # get column
   2003         if self.columns.is_unique:
-&gt; 2004             return self._get_item_cache(key)
   2005
   2006         # duplicate columns &amp; possible reduce dimensionality

/home/edamame/anaconda2/lib/python2.7/site-packages/pandas/core/generic.pyc in _get_item_cache(self, item)
   1348         res = cache.get(item)
   1349         if res is None:
-&gt; 1350             values = self._data.get(item)
   1351             res = self._box_item_values(item, values)
   1352             cache[item] = res

/home/edamame/anaconda2/lib/python2.7/site-packages/pandas/core/internals.pyc in get(self, item, fastpath)
   3288
   3289         if not isnull(item):
-&gt; 3290             loc = self.items.get_loc(item)
   3291         else:
   3292             indexer = np.arange(len(self.items))[isnull(self.items)]

/home/edamame/anaconda2/lib/python2.7/site-packages/pandas/indexes/base.pyc in get_loc(self, key, method, tolerance)
   1945                 return self._engine.get_loc(key)
   1946             except KeyError:
-&gt; 1947                 return self._engine.get_loc(self._maybe_cast_indexer(key))
   1948
   1949         indexer = self.get_indexer([key], method=method, tolerance=tolerance)

pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4154)()

pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4018)()

pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12368)()

pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12322)()

KeyError: True
</code></pre> <p>or</p> <pre><code>my_df[my_df.editions != None]
my_df.shape
</code></pre> <p>This one gave no error but didn't filter out any None values. </p> <p>I also tried:</p> <pre><code>my_df = my_df[my_df.editions.notnull()]
</code></pre> <p>This one doesn't give an error but doesn't filter out any None values either.</p> <p>Could anyone please advise how to solve this problem? Thanks!</p>
0
2016-09-13T17:13:13Z
39,476,074
<p>Can you create a new dataframe from the filtering?</p> <p>Dataframe before:</p> <pre><code>a  b
1  9
2  10
3  11
4  12
5  13
6  14
7  15
8  null
</code></pre> <p>Example: </p> <pre><code>import pandas

my_df = pandas.DataFrame({"a": [1, 2, 3, 4, 5, 6, 7, 8],
                          "b": [9, 10, 11, 12, 13, 14, 15, "null"]})
my_df2 = my_df[(my_df['b'] != "null")]
print(my_df2)
</code></pre> <p>Dataframe after:</p> <pre><code>a  b
1  9
2  10
3  11
4  12
5  13
6  14
7  15
</code></pre> <p>What it is doing is looking for "null" and excluding it. You could do the same thing with empty strings.</p>
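<p>If the goal is the asker's original case (dropping rows where the column is an actual <code>None</code>/NaN or an empty string, rather than the literal string "null"), a hedged sketch with made-up data:</p>

```python
import pandas as pd

# toy data: one missing value and one empty string
df = pd.DataFrame({"editions": ["a_data", None, "", "b_data"]})

# keep rows where 'editions' is neither missing nor an empty string
filtered = df[df["editions"].notnull() & (df["editions"] != "")]
print(filtered["editions"].tolist())  # -> ['a_data', 'b_data']
```

<p>The two boolean masks are combined with <code>&amp;</code>, which is why each comparison sits in its own parentheses.</p>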
1
2016-09-13T17:45:14Z
[ "python", "pandas", "dataframe" ]
Django ImportError cannot import name request
39,475,651
<p>I'm learning Django using the book Django By Example by Antonio Mele. I have now reached chapter 5 and am trying to create the image-sharing app. But despite following all instructions in that chapter, I'm getting an ImportError when I try to add an image from an external URL in the Django development server.</p> <pre><code>ImportError at /images/create/
cannot import name request
Request URL: http://127.0.0.1:8000/images/create/?title=%20Django%20and%20Duke&amp;url=http://upload.wikimedia.org/wikipedia/commons/8/85/Django_Reinhardt_and_Duke_Ellington_(Gottlieb).jpg.
Django Version: 1.8.6
Exception Location: /home/ciotar/projects/VirtualEnvs/env/bookmarks/bookmarks/images/forms.py in &lt;module&gt;, line 1
Python Version: 2.7.11
</code></pre> <p>I'm using PyCharm and have set the Python 3.5 interpreter from the active virtualenv instance. I'm not sure why Django runs with Python 2.7, though. I wonder if this problem could come from a 'request' name conflict between the forms.py and views.py modules?</p> <p>/images/urls.py</p> <pre><code>urlpatterns = [
    url(r'^create/$', views.image_create, name='create'),
]
</code></pre> <p>/images/views.py</p> <pre><code>from django.shortcuts import render, redirect
from django.contrib.auth.decorators import login_required
from django.contrib import messages
from .forms import ImageCreateForm


@login_required
def image_create(request):
    """
    View for creating an Image using the JavaScript Bookmarklet.
    """
    if request.method == 'POST':
        # form is sent
        form = ImageCreateForm(data=request.POST)
        if form.is_valid():
            # form data is valid
            cd = form.cleaned_data
            new_item = form.save(commit=False)
            # assign current user to the item
            new_item.user = request.user
            new_item.save()
            messages.success(request, 'Image added successfully')
            # redirect to new created item detail view
            return redirect(new_item.get_absolute_url())
    else:
        # build form with data provided by the bookmarklet via GET
        form = ImageCreateForm(data=request.GET)
    return render(request, 'images/image/create.html',
                  {'section': 'images', 'form': form})
</code></pre> <p>/images/forms.py</p> <pre><code>from urllib import request
from django.core.files.base import ContentFile
from django.utils.text import slugify
from django import forms
from .models import Image


class ImageCreateForm(forms.ModelForm):
    class Meta:
        model = Image
        fields = ('title', 'url', 'description')
        widgets = {
            'url': forms.HiddenInput,
        }

    def clean_url(self):
        url = self.cleaned_data['url']
        valid_extensions = ['jpg', 'jpeg']
        extension = url.rsplit('.', 1)[1].lower()
        if extension not in valid_extensions:
            raise forms.ValidationError('The given URL does not match valid image extensions.')
        return url

    def save(self, force_insert=False, force_update=False, commit=True):
        image = super(ImageCreateForm, self).save(commit=False)
        image_url = self.cleaned_data['url']
        image_name = '{}.{}'.format(slugify(image.title),
                                    image_url.rsplit('.', 1)[1].lower())
        # download image from the given URL
        response = request.urlopen(image_url)
        image.image.save(image_name, ContentFile(response.read()), save=False)
        if commit:
            image.save()
        return image
</code></pre>
0
2016-09-13T17:18:50Z
39,475,826
<p>This is due to a discrepancy in Python versions.</p> <p>In Python 2.7, you might have to replace:</p> <pre><code>from urllib import request
</code></pre> <p>in your <code>forms.py</code> with</p> <pre><code>import urllib2
</code></pre> <p>Also, note that <code>urllib2</code> has no <code>request</code> submodule; <code>urlopen</code> lives at the module level. So you will have to replace the line</p> <pre><code>response = request.urlopen(image_url)
</code></pre> <p>with:</p> <pre><code>response = urllib2.urlopen(image_url)
</code></pre> <p>in your <code>forms.py</code>.</p> <p>There is a nice discussion about the differences between urllib and urllib2 in this <a href="http://stackoverflow.com/questions/2018026/what-are-the-differences-between-the-urllib-urllib2-and-requests-module">SO post</a>.</p>
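<p>A hedged alternative, if you want the same <code>forms.py</code> to run under both Python 2 and 3 (the <code>fetch</code> helper is just for illustration):</p>

```python
# Version-agnostic import: urllib.request exists on Python 3,
# urllib2 on Python 2; whichever succeeds provides urlopen.
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

# hypothetical helper showing how the calling code stays unchanged
def fetch(url):
    return urlopen(url).read()
```

<p>With this, the rest of the form's <code>save</code> method can call <code>urlopen(image_url)</code> directly regardless of the interpreter version.</p>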
1
2016-09-13T17:29:23Z
[ "python", "django", "importerror" ]
Import error for urllib3
39,475,652
<p>I'm using Python 2.7 and am trying to pull data from an API using a Python script and urllib3. I've installed urllib3 by copying the source code from GitHub. But I'm still getting the following error when running the script: </p> <pre><code>ImportError: No module named urllib3
</code></pre> <p>The script starts simply enough with: </p> <pre><code>import urllib3

http = urllib3.PoolManager()
</code></pre> <p>I've checked the urllib3 file, and it includes the <a href="http://stackoverflow.com/questions/23361432/import-of-urllib3-util-failing-in-python-2-7">util file cited in other responses</a>. </p>
-1
2016-09-13T17:18:55Z
39,475,703
<p>You need to copy the module directory (<strong>urllib3/urllib3/</strong>, the one containing the <code>__init__.py</code> file) into the script's directory. </p> <p>Another way:</p> <pre><code>$ pip search urllib3
opbeat_python_urllib3 (1.1)  - An urllib3 transport for Opbeat
apiclient (1.0.3)            - Framework for making good API client libraries using urllib3.
urllib3 (1.17)               - HTTP library with thread-safe connection pooling, file post, and more.
httplib2shim (0.0.1)         - A wrapper over urllib3 that matches httplib2's interface
yieldfrom.urllib3 (0.1.4)    - Asyncio HTTP library with thread-safe connection pooling, file post, and more.
urllib3-mock (0.3.3)         - A utility library for mocking out the `urllib3` Python library.
</code></pre> <p>Install it using pip:</p> <pre><code>pip install urllib3
</code></pre>
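<p>A quick sanity check once the install has succeeded (this just repeats the import from the question; it assumes urllib3 is now on the path):</p>

```python
import urllib3

# If the install worked, this import succeeds and a pool manager
# can be created without touching the network.
http = urllib3.PoolManager()
print(urllib3.__version__)
```

<p>If the import still fails after <code>pip install urllib3</code>, check that <code>pip</code> belongs to the same interpreter the script runs under.</p>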
1
2016-09-13T17:21:30Z
[ "python", "python-2.7", "urllib3" ]
Import error for urllib3
39,475,652
<p>I'm using Python 2.7 and am trying to pull data from an API using a Python script and urllib3. I've installed urllib3 by copying the source code from GitHub. But I'm still getting the following error when running the script: </p> <pre><code>ImportError: No module named urllib3
</code></pre> <p>The script starts simply enough with: </p> <pre><code>import urllib3

http = urllib3.PoolManager()
</code></pre> <p>I've checked the urllib3 file, and it includes the <a href="http://stackoverflow.com/questions/23361432/import-of-urllib3-util-failing-in-python-2-7">util file cited in other responses</a>. </p>
-1
2016-09-13T17:18:55Z
39,475,712
<blockquote> <p>I've installed urllib3 by copying the source code but from GitHub. </p> </blockquote> <p>Bad way to "install" <code>urllib3</code>. Use this instead</p> <pre><code>pip install urllib3 </code></pre>
2
2016-09-13T17:22:32Z
[ "python", "python-2.7", "urllib3" ]
Stacked bar graph with variable width elements?
39,475,683
<p>In Tableau I'm used to making graphs like the one below. It has, for each day (or some other discrete variable), a stacked bar of categories of different colours, heights and widths.</p> <p>You can imagine the categories to be different advertisements that I show to people. The heights correspond to the percentage of people I've shown the advertisement to, and the widths correspond to the rate of acceptance.</p> <p>It allows me to see very easily which advertisements I should probably show more often (short, but wide bars, like the 'C' category on September 13th and 14th) and which I should show less often (tall, narrow bars, like the 'H' category on September 16th).</p> <p>Any ideas on how I could create a graph like this in R or Python?</p> <p><a href="http://i.stack.imgur.com/qDSKv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qDSKv.jpg" alt="enter image description here"></a></p>
3
2016-09-13T17:20:33Z
39,476,288
<p>Unfortunately, this is not so trivial to achieve with <code>ggplot2</code> (I think), because <code>geom_bar</code> does not really support changing widths for the same x position. But with a bit of effort, we can achieve the same result:</p> <h3>Create some fake data</h3> <pre><code>set.seed(1234)
d &lt;- as.data.frame(expand.grid(adv = LETTERS[1:7], day = 1:5))
d$height &lt;- runif(7*5, 1, 3)
d$width &lt;- runif(7*5, 0.1, 0.3)
</code></pre> <p>My data doesn't add up to 100%, cause I'm lazy.</p> <pre><code>head(d, 10)
#    adv day   height     width
# 1    A   1 1.227407 0.2519341
# 2    B   1 2.244599 0.1402496
# 3    C   1 2.218549 0.1517620
# 4    D   1 2.246759 0.2984301
# 5    E   1 2.721831 0.2614705
# 6    F   1 2.280621 0.2106667
# 7    G   1 1.018992 0.2292812
# 8    A   2 1.465101 0.1623649
# 9    B   2 2.332168 0.2243638
# 10   C   2 2.028502 0.1659540
</code></pre> <h3>Make a new variable for stacking</h3> <p>We can't easily use <code>position_stack</code> I think, so we'll just do that part ourselves. Basically, we need to calculate the cumulative height for every bar, grouped by day. Using <code>dplyr</code> we can do that very easily.</p> <pre><code>library(dplyr)
d2 &lt;- d %&gt;%
  group_by(day) %&gt;%
  mutate(cum_height = cumsum(height))
</code></pre> <h3>Make the plot</h3> <p>Finally, we create the plot. Note that the <code>x</code> and <code>y</code> refer to the <em>middle</em> of the tiles.</p> <pre><code>library(ggplot2)

ggplot(d2, aes(x = day, y = cum_height - 0.5 * height, fill = adv)) +
  geom_tile(aes(width = width, height = height), show.legend = FALSE) +
  geom_text(aes(label = adv)) +
  scale_fill_brewer(type = 'qual', palette = 2) +
  labs(title = "Views and other stuff", y = "% of views")
</code></pre> <p>If you don't want to play around with correctly scaling the widths (to something &lt; 1), you can use facets instead:</p> <pre><code>ggplot(d2, aes(x = 1, y = cum_height - 0.5 * height, fill = adv)) +
  geom_tile(aes(width = width, height = height), show.legend = FALSE) +
  geom_text(aes(label = adv)) +
  facet_grid(~day) +
  scale_fill_brewer(type = 'qual', palette = 2) +
  labs(title = "Views and other stuff", y = "% of views", x = "")
</code></pre> <h3>Result</h3> <p><a href="http://i.stack.imgur.com/h6CVN.png"><img src="http://i.stack.imgur.com/h6CVN.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/N1i1v.png"><img src="http://i.stack.imgur.com/N1i1v.png" alt="enter image description here"></a></p>
7
2016-09-13T17:59:03Z
[ "python", "graph", "ggplot2", "data-visualization" ]
Stacked bar graph with variable width elements?
39,475,683
<p>In Tableau I'm used to making graphs like the one below. It has, for each day (or some other discrete variable), a stacked bar of categories of different colours, heights and widths.</p> <p>You can imagine the categories to be different advertisements that I show to people. The heights correspond to the percentage of people I've shown the advertisement to, and the widths correspond to the rate of acceptance.</p> <p>It allows me to see very easily which advertisements I should probably show more often (short, but wide bars, like the 'C' category on September 13th and 14th) and which I should show less often (tall, narrow bars, like the 'H' category on September 16th).</p> <p>Any ideas on how I could create a graph like this in R or Python?</p> <p><a href="http://i.stack.imgur.com/qDSKv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qDSKv.jpg" alt="enter image description here"></a></p>
3
2016-09-13T17:20:33Z
39,476,491
<pre><code>set.seed(1)
days &lt;- 5
cats &lt;- 8
dat &lt;- prop.table(matrix(rpois(days * cats, days), cats), 2)

bp1 &lt;- barplot(dat, col = seq(cats))
</code></pre> <p><a href="http://i.stack.imgur.com/pvEFA.png" rel="nofollow"><img src="http://i.stack.imgur.com/pvEFA.png" alt="enter image description here"></a></p> <pre><code>## some width for rect
rate &lt;- matrix(runif(days * cats, .1, .5), cats)

## calculate xbottom, xtop, ybottom, ytop
bp &lt;- rep(bp1, each = cats)
ybot &lt;- apply(rbind(0, dat), 2, cumsum)[-(cats + 1), ]
ytop &lt;- apply(dat, 2, cumsum)

plot(extendrange(bp1), c(0,1), type = 'n', axes = FALSE, ann = FALSE)
rect(bp - rate, ybot, bp + rate, ytop, col = seq(cats))
text(bp, (ytop + ybot) / 2, LETTERS[seq(cats)])
axis(1, bp1, labels = format(Sys.Date() + seq(days), '%d %b %Y'), lwd = 0)
axis(2)
</code></pre> <p><a href="http://i.stack.imgur.com/lGULx.png" rel="nofollow"><img src="http://i.stack.imgur.com/lGULx.png" alt="enter image description here"></a></p> <p>Probably not very useful, but you can invert the color you are plotting so that you can actually see the labels:</p> <pre><code>inv_col &lt;- function(color) {
  paste0('#', apply(apply(rbind(abs(255 - col2rgb(color))), 2,
                          function(x) format(as.hexmode(x), 2)), 2,
                    paste, collapse = ''))
}

inv_col(palette())
# [1] "#ffffff" "#00ffff" "#ff32ff" "#ffff00" "#ff0000" "#00ff00" "#0000ff" "#414141"

plot(extendrange(bp1), c(0,1), type = 'n', axes = FALSE, ann = FALSE)
rect(bp - rate, ybot, bp + rate, ytop, col = seq(cats), xpd = NA, border = NA)
text(bp, (ytop + ybot) / 2, LETTERS[seq(cats)], col = inv_col(seq(cats)))
axis(1, bp1, labels = format(Sys.Date() + seq(days), '%d %B\n%Y'), lwd = 0)
axis(2)
</code></pre> <p><a href="http://i.stack.imgur.com/Dgu1m.png" rel="nofollow"><img src="http://i.stack.imgur.com/Dgu1m.png" alt="enter image description here"></a></p>
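<p>Since the question also asks about Python, here is a rough matplotlib sketch of the same idea (fake data again; the layout and styling choices are assumptions, not a polished chart):</p>

```python
import matplotlib
matplotlib.use("Agg")              # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
days, cats = 5, 8
height = rng.uniform(1, 3, (days, cats))
height /= height.sum(axis=1, keepdims=True)   # normalize each day to 100%
width = rng.uniform(0.1, 0.3, (days, cats))   # acceptance-rate widths

fig, ax = plt.subplots()
for d in range(days):
    bottom = 0.0
    for c in range(cats):
        # one variable-width tile per (day, category), stacked by height
        ax.bar(d, height[d, c], width=width[d, c], bottom=bottom)
        ax.text(d, bottom + height[d, c] / 2, chr(ord("A") + c),
                ha="center", va="center")
        bottom += height[d, c]

ax.set_ylabel("% of views")
fig.savefig("stacked.png")
```

<p>Because each tile is drawn with its own <code>ax.bar</code> call, the width can vary freely within one day's stack, which is the part that is awkward in <code>geom_bar</code>.</p>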
7
2016-09-13T18:12:03Z
[ "python", "graph", "ggplot2", "data-visualization" ]
Conditional row select with Pandas
39,475,788
<p>I want to select a sub-set of a pandas dataframe <code>df</code> where the column <code>text</code> has the value <code>'0.0, 0.0'</code>. I thought the command for this would be <code>df.ix[df['text'] == "0.0, 0.0"]</code> but this returns</p> <pre><code>&lt;console&gt;:1: error: identifier expected but symbol literal found.
       df.ix[df['text'] == "0.0, 0.0"]
                ^
&lt;console&gt;:1: error: unclosed character literal
       df.ix[df['text'] == "0.0, 0.0"]
                      ^
</code></pre> <p>What am I doing wrong?</p>
1
2016-09-13T17:26:52Z
39,476,085
<p>As <a href="http://stackoverflow.com/users/487339/dsm">DSM</a> pointed out, the error appears to be an error from the Scala programming language. This is because I was using a Zeppelin Notebook, and had failed to specify that the code should be interpreted with the pyspark interpreter. After specifying the interpreter, the code worked as expected.</p>
1
2016-09-13T17:45:47Z
[ "python", "pandas" ]
How to get pytest fixture data dynamically
39,475,849
<p>I'm trying to define init data for several test scenarios that test a single API endpoint. I want to do this so that I don't have to produce boilerplate code for multiple iterations of a test where just the data differs. I can't seem to wrap my head around how to do this using the built-in pytest fixtures. Here's essentially what I'm trying to do:</p> <p>In tests/conftest.py:</p> <pre><code>import pytest


@pytest.fixture(scope="module")
def data_for_a():
    return "a_data"


@pytest.fixture(scope="module")
def data_for_b():
    return "b_data"
</code></pre> <p>In tests/tests.py:</p> <pre><code>import pytest


# this works
def test_a(data_for_a):
    assert "a_data" == data_for_a


# but I want to do this and it fails:
scenarios = [
    {"name": "a", "data": data_for_a},
    {"name": "b", "data": data_for_b},
]

for scenario in scenarios:
    print(scenario.name, scenario.data)
    # desired output:
    # "a a_data"
    # "b b_data"
</code></pre> <p>I get a <code>NameError: name 'data_for_a' is not defined</code> exception. I've tried various approaches to get this to work, but there seems to be no way around having to pass the fixture as a parameter to the test method - so either define a bunch of boilerplate tests or have a bunch of if/else statements in a single test and pass each fixture explicitly. I don't like either of these options. At the moment it seems like I have to just build my own helper module to pull in this test data, but I'd rather use the built-in mechanism for this. Is there any way to do this?</p>
0
2016-09-13T17:30:47Z
39,565,326
<p>You can import from your conftest.py like so:</p> <pre><code>from conftest import data_for_a, data_for_b
</code></pre> <p>or</p> <pre><code>from conftest import *
</code></pre> <p>which will allow you to reference those functions without passing them as parameters to a test function.</p> <p><strong>Edit:</strong> Note that this is generally not recommended practice according to <a href="http://pytest.org/2.2.4/plugins.html" rel="nofollow">the official pytest documentation</a>:</p> <blockquote> <p>If you have conftest.py files which do not reside in a python package directory (i.e. one containing an <strong>__init__.py</strong>) then “import conftest” can be ambiguous because there might be other conftest.py files as well on your PYTHONPATH or sys.path. It is thus good practise for projects to either put conftest.py under a package scope or to never import anything from a conftest.py file.</p> </blockquote>
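<p>If the goal is the scenario loop from the question, a hedged alternative that avoids importing from <code>conftest.py</code> at all is <code>pytest.mark.parametrize</code> (the scenario data is inlined here for illustration):</p>

```python
import pytest

# one (name, data) pair per scenario from the question
scenarios = [("a", "a_data"), ("b", "b_data")]

@pytest.mark.parametrize("name,data", scenarios)
def test_scenario(name, data):
    # pytest runs this once per scenario: test_scenario[a-a_data], etc.
    assert data == "{}_data".format(name)
```

<p>Each tuple becomes one independently reported test case, which is exactly the "same test, different data" shape the question describes.</p>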
1
2016-09-19T04:23:10Z
[ "python", "py.test", "python-3.5", "fixtures" ]
Searching to End of String in Regex
39,475,890
<p>I am trying to extract all sequences of '1's from a string of binary digits (0 and 1) and get them into a <code>list</code>. <br/>For example, the string may be of the form <code>001111000110000111111</code>, and I am looking for a list that looks like this: <code>["1111", "11", "111111"]</code>. </p> <p>I am using the Python <code>findall</code> function with the following pattern: <code>([1]+?)0</code>. However, it does not match the last sequence of 1's, since that ends with the <code>EOS</code> rather than a '0'. I have tried <code>([1]+?)0|$</code> to try to capture the <code>EOS</code> as a valid delimiter. </p> <p>But that fails too.<br/> Any help appreciated. </p>
1
2016-09-13T17:33:23Z
39,476,024
<p>I think the regex you're looking for is:</p> <pre><code>1+(?!\0)
</code></pre> <p>i.e. match one or more 1s which aren't followed by a 0.</p> <p>The one you have is specifically looking for ones that are followed by 0s.</p> <p>You can play around with regexes on various JSFiddle-like sites, with interactive explanations of what they're doing. For example:</p> <p><a href="https://regex101.com/r/qY4iN9/1" rel="nofollow">https://regex101.com/r/qY4iN9/1</a></p>
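<p>A quick sanity check on the question's input (note that in Python's <code>re</code>, <code>\0</code> is an escape for the NUL character, so with <code>findall</code> the plain pattern <code>1+</code> is usually all you need):</p>

```python
import re

s = "001111000110000111111"

# findall with a pattern that simply matches runs of 1s;
# no trailing delimiter is required, so the final run is included too
runs = re.findall(r"1+", s)
print(runs)  # -> ['1111', '11', '111111']
```
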
0
2016-09-13T17:42:26Z
[ "python", "regex" ]
Searching to End of String in Regex
39,475,890
<p>I am trying to extract all sequences of '1's from a string of binary digits (0 and 1) and get them into a <code>list</code>. <br/>For example, the string may be of the form <code>001111000110000111111</code>, and I am looking for a list that looks like this: <code>["1111", "11", "111111"]</code>. </p> <p>I am using the Python <code>findall</code> function with the following pattern: <code>([1]+?)0</code>. However, it does not match the last sequence of 1's, since that ends with the <code>EOS</code> rather than a '0'. I have tried <code>([1]+?)0|$</code> to try to capture the <code>EOS</code> as a valid delimiter. </p> <p>But that fails too.<br/> Any help appreciated. </p>
1
2016-09-13T17:33:23Z
39,476,033
<p>What you are trying:</p> <pre><code>([1]+?)0 </code></pre> <p><img src="https://www.debuggex.com/i/bHmtnovezOT8omWZ.png" alt="Regular expression visualization"></p> <p><a href="https://regex101.com/r/fJ7lN1/1" rel="nofollow">Regex101 Demo</a></p> <pre><code>([1]+?)0|$ </code></pre> <p><img src="https://www.debuggex.com/i/H9GddLB8b0Zz0ixI.png" alt="Regular expression visualization"></p> <p><a href="https://regex101.com/r/fJ7lN1/2" rel="nofollow">Regex101 Demo</a></p> <p>What will work:</p> <pre><code>(1+) </code></pre> <p><img src="https://www.debuggex.com/i/DcO4jZifB64t-Zc6.png" alt="Regular expression visualization"></p> <p><a href="https://regex101.com/r/fJ7lN1/3" rel="nofollow">Regex101 Demo</a></p>
1
2016-09-13T17:42:58Z
[ "python", "regex" ]
Searching to End of String in Regex
39,475,890
<p>I am trying to extract all sequences of '1's from a string of binary digits (0 and 1) and get them into a <code>list</code>. <br/>For example, the string may be of the form <code>001111000110000111111</code>, and I am looking for a list that looks like this: <code>["1111", "11", "111111"]</code>. </p> <p>I am using the Python <code>findall</code> function with the following pattern: <code>([1]+?)0</code>. However, it does not match the last sequence of 1's, since that ends with the <code>EOS</code> rather than a '0'. I have tried <code>([1]+?)0|$</code> to try to capture the <code>EOS</code> as a valid delimiter. </p> <p>But that fails too.<br/> Any help appreciated. </p>
1
2016-09-13T17:33:23Z
39,476,211
<p><strong>Matching</strong>: To match one or more <code>1</code>s, use the <code>1+</code> regex.</p> <p><strong>Splitting</strong>: You may split on 1 or more <code>0</code>s and remove empty elements.</p> <p>See the <a href="http://ideone.com/sH8MX3" rel="nofollow">Python demo</a>:</p> <pre><code>import re

s = '001111000110000111111'
print(re.findall('1+', s))
# ['1111', '11', '111111']
print([x for x in re.split('0+', s) if x])
# ['1111', '11', '111111']
</code></pre>
0
2016-09-13T17:53:56Z
[ "python", "regex" ]