Maya (Python): Running condition command and scriptJob command from within a module
39,252,664
<p>I'm creating a UI tool that loads during Maya's startup, and executes some modules AFTER VRay has initialized (otherwise an error is thrown). </p> <p>A suggestion from my broader question <a href="http://stackoverflow.com/questions/38601706/maya-defer-a-script-until-after-vray-is-registered">here</a> has led me to try out the condition and scriptJob commands. </p> <p>The listener.py code below works when run from within Maya's script editor, but when I import the listener module and run it using the launcher.py code, I get this error:</p> <pre><code>Error: line 1: name 'is_vray_loaded' is not defined Traceback: (most recent call last): File "&lt;maya console&gt;", line 1, in &lt;module&gt; NameError: name 'is_vray_loaded' is not defined </code></pre> <p><strong>Note</strong> that the condition command requires MEL command syntax (seems to be a bug), so just calling the normal function doesn't work and gives an error that the procedure cannot be found.</p> <p>Here's the listener:</p> <pre><code># vray_listener.py import os import maya.cmds as mc import maya.mel as mel vray_plugin_path_2016 = os.path.join('C:', os.sep, 'Program Files', 'Autodesk', 'Maya2016', 'vray', 'plug-ins', 'vrayformaya.mll') #----------------------------------------------------------------------- def is_vray_loaded(): return mc.pluginInfo(vray_plugin_path_2016, q=1, l=True) #----------------------------------------------------------------------- def hey(): print 'hey' mc.condition('vray_initialized', initialize=True, d='idle', s='python("is_vray_loaded()");') mc.scriptJob(ct=['vray_initialized', 'hey()']) </code></pre> <p>Here's the launcher:</p> <pre><code># launcher.py import sys vray_listener_path = 'S:/path/to/module' if vray_listener_path not in sys.path: sys.path.append(vray_listener_path) import vray_listener reload(vray_listener) </code></pre>
0
2016-08-31T14:42:11Z
39,253,465
<p>Try this: give the functions an <code>*args</code> signature and pass them to <code>condition</code> and <code>scriptJob</code> as Python callables rather than as MEL strings, so the commands no longer need to resolve the names by string:</p> <pre><code>import os import maya.cmds as mc import maya.mel as mel vray_plugin_path_2016 = os.path.join('C:', os.sep, 'Program Files', 'Autodesk', 'Maya2016', 'vray', 'plug-ins', 'vrayformaya.mll') #----------------------------------------------------------------------- def is_vray_loaded(*args): return mc.pluginInfo(vray_plugin_path_2016, q=1, l=True) #----------------------------------------------------------------------- def hey(*args): print 'hey' mc.condition('vray_initialized', initialize=True, d='idle', s=is_vray_loaded) mc.scriptJob(ct=['vray_initialized', 'hey']) </code></pre>
1
2016-08-31T15:22:01Z
[ "python", "condition", "maya" ]
Preselective dynamic modelChoicefields Django
39,252,705
<p>I have 3 models, which are connected by foreign keys like this:</p> <pre><code>models.py class Partner(models.Model): partner_id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) ... class Product(models.Model): product_id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) partner_id = models.ForeignKey(Partner) ... class ProductData(models.Model): data_id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) partner_id = models.ForeignKey(Partner) product_id = models.ForeignKey(Partner) ... </code></pre> <p>What I want is a form ProductDataForm, which adjusts my choices for product_id when one partner_id is selected. I had this forms.py below, but it doesn't work :(</p> <pre><code>forms.py class ProductDataForm(ModelForm): partner_id = forms.ModelChoiceField(queryset=Partner.objects.all()) product_id = forms.ModelChoiceField(queryset=Product.objects.filter(partner_id=partner_id) </code></pre> <p>I need this dynamic behavior to work for normal Users and Admins, on the Homepage and on the Adminsite.</p> <p>Can someone help me? :)</p>
0
2016-08-31T14:44:00Z
39,253,568
<p>I do not have a lot of experience with ModelForms or ModelChoiceField, but what I've done here might help. Note that your <code>Partner</code> model declares <code>partner_id</code> as its primary key, so use <code>u.pk</code> rather than <code>u.id</code>:</p> <pre><code>class ProductDataForm(forms.Form): partner_id = forms.ChoiceField( choices=[(u.pk, u.pk) for u in Partner.objects.all()] ) </code></pre>
0
2016-08-31T15:27:25Z
[ "python", "django" ]
Counting duplicate characters
39,252,731
<p>I can't figure out why the last character in a string always gets omitted when I try the following:</p> <pre><code>def duplicate_count(text): num = 0 count = {} for char in text: print(count.items()) if char in count.keys(): count[char] += 1 else: count [char] = 1 for key in count: if count[key] == 1: num = 0 else: num = count[key] - 1 return (num) char_s = 'abcde' print (duplicate_count(char_s)) </code></pre>
-2
2016-08-31T14:45:12Z
39,252,863
<p>You only output the counts before a given character is added, so you will never see the final counts. You could move your print statement to after your <code>if</code> if you'd like to change that.</p> <p>The other issue is that your program only returns the duplicate count of the last entry in your dictionary. You would need to accumulate the per-character counts (for example by summing them) rather than overwriting <code>num</code> on every iteration, depending on your requirements.</p>
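One way to apply both fixes in a small sketch (printing after the update, and accumulating duplicates over all characters instead of keeping only the last entry):

```python
def duplicate_count(text):
    count = {}
    for char in text:
        # update first, then print, so the final character is shown too
        count[char] = count.get(char, 0) + 1
        print(count.items())
    # sum the extra occurrences over all characters,
    # rather than overwriting num with the last entry only
    return sum(n - 1 for n in count.values() if n > 1)

print(duplicate_count('abcde'))    # 0
print(duplicate_count('aabbcde'))  # 2
```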
2
2016-08-31T14:51:30Z
[ "python", "string", "python-3.x" ]
Counting duplicate characters
39,252,731
<p>I can't figure out why the last character in a string always gets omitted when I try the following:</p> <pre><code>def duplicate_count(text): num = 0 count = {} for char in text: print(count.items()) if char in count.keys(): count[char] += 1 else: count [char] = 1 for key in count: if count[key] == 1: num = 0 else: num = count[key] - 1 return (num) char_s = 'abcde' print (duplicate_count(char_s)) </code></pre>
-2
2016-08-31T14:45:12Z
39,252,875
<p>The last character doesn't get omitted, it has not yet been added to <code>count</code> when you call <code>print(count.items())</code>.</p>
2
2016-08-31T14:52:02Z
[ "python", "string", "python-3.x" ]
Counting duplicate characters
39,252,731
<p>I can't figure out why the last character in a string always gets omitted when I try the following:</p> <pre><code>def duplicate_count(text): num = 0 count = {} for char in text: print(count.items()) if char in count.keys(): count[char] += 1 else: count [char] = 1 for key in count: if count[key] == 1: num = 0 else: num = count[key] - 1 return (num) char_s = 'abcde' print (duplicate_count(char_s)) </code></pre>
-2
2016-08-31T14:45:12Z
39,253,106
<p>In my opinion this could use some more segmentation: rather than counting duplicate letters directly, narrow the scope of your function to identifying which letters occur and how many times they occur. That result is easily stored in a dictionary.</p> <p>Example:</p> <pre><code>def string_occurrence(string): counts = {} for char in string: counts[char] = string.count(char) return counts </code></pre> <p>You can also optimise this by iterating over the set of unique characters instead of every position in the string:</p> <pre><code>def string_occurrence(string): counts = {} for char in set(string): counts[char] = string.count(char) return counts </code></pre> <p>This hinges on how you want your control flow to work, but I think the second function would work best for you.</p>
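As an aside (not part of the answer above), the standard library's <code>collections.Counter</code> builds the same occurrence dictionary in one step. A sketch, here counting how many distinct characters repeat:

```python
from collections import Counter

def repeated_chars(text):
    # Counter maps each character to its number of occurrences
    counts = Counter(text)
    # count the distinct characters that occur more than once
    return sum(1 for n in counts.values() if n > 1)

print(repeated_chars('abcde'))    # 0
print(repeated_chars('aabbcde'))  # 2
```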
0
2016-08-31T15:03:25Z
[ "python", "string", "python-3.x" ]
Get Access Token for Google Analytics Embed API server side authorization
39,252,779
<p>I am trying to set up server side authorization for Google Analytics Embed API. When I run this on the command line:</p> <pre><code>sudo pip install --upgrade google-api-python-client </code></pre> <p>I get this message:</p> <pre><code>The directory '/Users/XXXX/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/XXXX/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Collecting google-api-python-client Downloading google_api_python_client-1.5.3-py2.py3-none-any.whl (50kB) 100% |████████████████████████████████| 51kB 991kB/s Requirement already up-to-date: httplib2&lt;1,&gt;=0.8 in /Library/Python/2.7/site-packages (from google-api-python-client) Collecting six&lt;2,&gt;=1.6.1 (from google-api-python-client) Downloading six-1.10.0-py2.py3-none-any.whl Collecting uritemplate&lt;1,&gt;=0.6 (from google-api-python-client) Downloading uritemplate-0.6.tar.gz Collecting oauth2client&lt;4.0.0,&gt;=1.5.0 (from google-api-python-client) Downloading oauth2client-3.0.0.tar.gz (77kB) 100% |████████████████████████████████| 81kB 2.5MB/s Collecting simplejson&gt;=2.5.0 (from uritemplate&lt;1,&gt;=0.6-&gt;google-api-python-client) Downloading simplejson-3.8.2-cp27-cp27m-macosx_10_9_x86_64.whl (67kB) 100% |████████████████████████████████| 71kB 6.8MB/s Collecting pyasn1&gt;=0.1.7 (from oauth2client&lt;4.0.0,&gt;=1.5.0-&gt;google-api-python-client) Downloading pyasn1-0.1.9-py2.py3-none-any.whl Collecting pyasn1-modules&gt;=0.0.5 (from oauth2client&lt;4.0.0,&gt;=1.5.0-&gt;google-api-python-client) Downloading pyasn1_modules-0.0.8-py2.py3-none-any.whl Collecting rsa&gt;=3.1.4 (from 
oauth2client&lt;4.0.0,&gt;=1.5.0-&gt;google-api-python-client) Downloading rsa-3.4.2-py2.py3-none-any.whl (46kB) 100% |████████████████████████████████| 51kB 6.1MB/s Installing collected packages: six, simplejson, uritemplate, pyasn1, pyasn1-modules, rsa, oauth2client, google-api-python-client Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 736, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 742, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py", line 267, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: 
'/tmp/pip-yzJYPo-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' </code></pre> <p>I am logged in as the admin. I have doubled checked permissions on the directories and parent directories. I am not sure what am I doing wrong?</p>
0
2016-08-31T14:47:52Z
39,253,133
<p>I think you might want to read this: <a href="https://github.com/pypa/pip/issues/3165" rel="nofollow">https://github.com/pypa/pip/issues/3165</a></p> <p>They suggest:</p> <p><code>sudo pip install --ignore-installed six</code></p> <p><code>sudo pip install --ignore-installed --upgrade google-api-python-client</code></p> <p>Let me know if it helps,</p> <p>Eric Lafontaine</p>
1
2016-08-31T15:04:58Z
[ "python", "command-line", "google-analytics", "sudo" ]
How to in-place sort each sublist of a list in python
39,252,802
<p>I have a list of integers as such:</p> <pre><code>[[12, 62, 49, 17, 99, 33, 47, 94, 58, 97, 75, 9], [46, 86, 95, 61, 80, 96, 14, 3, 43, 2, 22, 83], [54, 57, 52, 32, 87, 15, 18, 39, 8, 90, 56, 23, 84], [82, 30, 26, 31, 88, 37, 45, 79, 77, 66, 40, 51, 72]] </code></pre> <p>And I want a list back but each sublist is sorted in place like this:</p> <pre><code>[[9, 12, 17, 33, 47, 49, 58, 62, 75, 94, 97, 99], [2, 3, 14, 22, 43, 46, 61, 80, 83, 86, 95, 96], [8, 15, 18, 23, 32, 39, 52, 54, 56, 57, 84, 87, 90],[26, 30, 31, 37, 40, 45, 51, 66, 72, 77, 79, 82, 88]] </code></pre> <p>I thought about looping through each element and replacing it with the output of <code>element.sort()</code> but that returns <code>None</code> since it's in-place. Is there a lambda function to do this?</p>
-1
2016-08-31T14:49:10Z
39,252,878
<pre><code>f = lambda lst: [lst[i].sort() for i in range(len(lst))].count(None) </code></pre> <p>Each <code>sort()</code> call sorts its sublist in place and returns <code>None</code>, so the comprehension has the side effect you want and the expression evaluates to the number of sublists sorted.</p>
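If the lambda feels too clever, a plain loop achieves the same in-place sorting. A quick sketch:

```python
lst = [[12, 62, 49, 17], [46, 86, 3], [54, 2, 23]]

# list.sort() mutates each sublist in place and returns None,
# so there is nothing to reassign
for sub in lst:
    sub.sort()

print(lst)  # [[12, 17, 49, 62], [3, 46, 86], [2, 23, 54]]
```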
1
2016-08-31T14:52:17Z
[ "python" ]
Problems with Python password checker
39,252,996
<p>I have created a password checker in Python. Here is the code that I have used:</p> <pre><code>import easygui as eg def pword(): global password global lower global upper global integer password = eg.enterbox(msg="Please enter your password") length = len(password) print(length) lower = sum([int(c.islower()) for c in password]) print(length) upper = sum([int(c.isupper()) for c in password]) print (upper) integer = sum([int(c.isdigit()) for c in password]) print (integer) def length(): global password if len(password) &lt; 6: eg.msgbox(msg="Your password is too short, please try again") elif len(password) &gt; 12: eg.msgbox(msg="Your password is too long, please try again") def strength(): global lower global upper global integer if (lower) &lt; 1: eg.msgbox(msg="Please use a mixed case password with lower case letters") elif (upper) &lt; 1: eg.msgbox(msg="Please use a mixed case password with UPPER clase letters") elif (integer) &lt; 1: eg.msgbox(msg="Please try adding a number") else: eg.msgbox(msg="Strength Assessed - Your password is ok") while True: pword() length() strength() answer = eg.choicebox(title="Try again?",msg="Would you like to try again?", choices=("Yes","No")) if answer !="Yes": sys.exit() </code></pre> <p>When I go to run the module it just comes up with the following message:</p> <p>RESTART: C:\Users\PGUSER72\AppData\Local\Programs\Python\Python35-32\python password 8.py </p> <p>When I restart it just says RESTART - Shell</p>
-3
2016-08-31T14:57:23Z
39,253,348
<p>I fixed your indentation.</p> <p>Your code works, tested on Ubuntu 14.04 LTS.</p> <p><strong>EDIT:</strong></p> <p>Also tested on Windows 7 32bit, Python 2.7.12 <a href="http://i.stack.imgur.com/CGoj8.png" rel="nofollow"><img src="http://i.stack.imgur.com/CGoj8.png" alt="enter image description here"></a></p> <p>Everything works fine here. One thing to check: your script calls <code>sys.exit()</code> but never imports <code>sys</code>, which will raise a <code>NameError</code> when you answer "No". How are you running your script?</p>
0
2016-08-31T15:16:02Z
[ "python", "string", "function", "file", "passwords" ]
Python - Dictionary in class not functioning as expected
39,253,073
<p>I have a very basic class <code>Library</code>, and I initialize it with a passed in dictionary (key book name (string), value book shelf location (int)) with values already entered into it. The code looks like this:</p> <pre><code>class Library(object): def __init__(self, book_table): self.book_table = book_table def get_location(self, book_name): if book_name in self.book_table: # ERROR RIGHT HERE return self.book_table[book_name] else: return "Book Not Found" libraries = [] libraries.append(Library({"Book1":2, "Book2":9})) print Library(libraries[0]).get_location("Book1") </code></pre> <p>For some reason, I am unable to access data from the dictionary from the get_location method, but I am able to access the dictionary data in the initialize method (and I previously tested the represent method and it worked in there too). This is the error I get:</p> <pre><code>Traceback (most recent call last): File "C:/Users/Owner/Documents/Programming/PyCharm/Book_Locator/Book_Locator.py", line 13, in &lt;module&gt; print Library(libraries[0]).get_location("Book1") File "C:/Users/Owner/Documents/Programming/PyCharm/Book_Locator/Book_Locator.py", line 6, in get_location book_name in self.book_table: TypeError: argument of type 'Library' is not iterable </code></pre> <p>I expected it to print out Book1's location, which is a 2.</p>
1
2016-08-31T15:01:43Z
39,254,677
<p>You are creating a <em>new</em> <code>Library</code> instance that you pass your existing instance into:</p> <pre><code>print Library(libraries[0]).get_location("Book1") # ^^^^^^^ ^^^^^^^^^^^^ # | \----------- an existing instance of Library # A new instance of Library </code></pre> <p>This gives you a <code>Library()</code> instance where <code>book_table</code> is <em>another</em> <code>Library()</code> instance, not a dictionary!</p> <p>You'd want to call <code>get_location()</code> <em>directly</em> on <code>libraries[0]</code>:</p> <pre><code>print libraries[0].get_location("Book1") </code></pre> <p>You could also just store <em>just</em> the <code>book_table</code> dictionary in the list:</p> <pre><code>libraries = [{"Book1": 2, "Book2": 9}] print Library(libraries[0]).get_location("Book1") </code></pre> <p>but this would only be needed if you could not store <code>Library</code> instances in the list directly for whatever reason.</p>
1
2016-08-31T16:28:01Z
[ "python", "dictionary" ]
Python - Dictionary in class not functioning as expected
39,253,073
<p>I have a very basic class <code>Library</code>, and I initialize it with a passed in dictionary (key book name (string), value book shelf location (int)) with values already entered into it. The code looks like this:</p> <pre><code>class Library(object): def __init__(self, book_table): self.book_table = book_table def get_location(self, book_name): if book_name in self.book_table: # ERROR RIGHT HERE return self.book_table[book_name] else: return "Book Not Found" libraries = [] libraries.append(Library({"Book1":2, "Book2":9})) print Library(libraries[0]).get_location("Book1") </code></pre> <p>For some reason, I am unable to access data from the dictionary from the get_location method, but I am able to access the dictionary data in the initialize method (and I previously tested the represent method and it worked in there too). This is the error I get:</p> <pre><code>Traceback (most recent call last): File "C:/Users/Owner/Documents/Programming/PyCharm/Book_Locator/Book_Locator.py", line 13, in &lt;module&gt; print Library(libraries[0]).get_location("Book1") File "C:/Users/Owner/Documents/Programming/PyCharm/Book_Locator/Book_Locator.py", line 6, in get_location book_name in self.book_table: TypeError: argument of type 'Library' is not iterable </code></pre> <p>I expected it to print out Book1's location, which is a 2.</p>
1
2016-08-31T15:01:43Z
39,254,724
<p>With <code>Library(libraries[0])</code> you call <code>__init__</code> again, wrapping the existing <code>Library</code> instance in a new one.</p> <p>This code should work:</p> <pre><code>class Library(object): def __init__(self, book_table): self.book_table = book_table def get_location(self, book_name): if book_name in self.book_table: # ERROR RIGHT HERE return self.book_table[book_name] else: return "Book Not Found" libraries = [] libraries.append(Library({"Book1": 2, "Book2": 9})) print libraries[0].get_location("Book1") </code></pre>
0
2016-08-31T16:30:45Z
[ "python", "dictionary" ]
Separate binary data (blobs) in csv files
39,253,186
<p>Is there any safe way of mixing binary with text data in a (pseudo)csv file?</p> <p>One naive and partial solution would be:</p> <ul> <li>using a compound field separator, made of more than one character (e.g. the <code>\a\b</code> sequence for example)</li> <li>saving each field as either text or as binary data would require the parser of the pseudocsv to look for the <code>\a\b</code> sequence and read the data between separators according to a known rule (e.g. by the means of a known header with field name and field type, for example)</li> </ul> <p>The core issue is that binary data is not guaranteed to not contain the <code>\a\b</code> sequence somewhere inside its body, before the actual end of the data.</p> <p>The proper solution would be to save the individual blob fields in their own separate physical files and only include the filenames in a .csv, but this is not acceptable in this scenario.</p> <p>Is there any proper and safe solution, either already implemented or applicable given these restrictions?</p>
0
2016-08-31T15:07:30Z
39,253,320
<p>If you need everything in a single file, just use one of the methods to encode binary as printable ASCII, and add the result to the CSV fields (letting the CSV module add and escape quotes as needed).</p> <p>One such method is <code>base64</code> - and Python's <code>base64</code> module also offers more efficient codecs, such as base85 (on newer Pythons, version 3.4 and above).</p> <p>So, an example in Python 2.7 would be:</p> <pre><code>import csv, base64 import random data = b''.join(chr(random.randrange(0,256)) for i in range(50)) writer = csv.writer(open("testfile.csv", "wt")) writer.writerow(["some text", base64.b64encode(data)]) </code></pre> <p>Of course, you have to do the proper base64 decoding on reading the file as well - but it is certainly better than trying to create an ad-hoc escaping method.</p>
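A sketch of the matching decode step (written for Python 3, unlike the Python 2.7 example above, and using an in-memory buffer rather than a file):

```python
import base64
import csv
import io
import os

# Round-trip sketch: binary blob -> base64 text -> CSV row -> binary again.
blob = os.urandom(50)  # arbitrary bytes; may contain delimiters, quotes, newlines

# write one row with a text field and a base64-encoded binary field
buf = io.StringIO()
csv.writer(buf).writerow(["some text", base64.b64encode(blob).decode("ascii")])

# read it back and decode the binary field
row = next(csv.reader(io.StringIO(buf.getvalue())))
recovered = base64.b64decode(row[1])
assert recovered == blob  # the binary survives the CSV layer intact
```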
2
2016-08-31T15:14:35Z
[ "python", "csv", "blob", "binaryfiles", "export-to-csv" ]
How to prevent the left x axis from extending to the right x axis in matplotlib?
39,253,464
<p>I'm trying to create two histograms next to each other. My problem is that the <code>x</code> labels for the left one are extending to the one on the right, as shown below: <a href="http://i.stack.imgur.com/oiosk.png" rel="nofollow"><img src="http://i.stack.imgur.com/oiosk.png" alt="enter image description here"></a></p> <p>Here is how I'm setting up the plot:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt fig = plt.figure(figsize=(16,8)) ax1 = fig.add_subplot(1,1,1) ax1.set_xlim([min(df1["Age"]),max(df1["Age"])]) ax1 = df1["Age"].hist(color="cornflowerblue") ax2 = fig.add_subplot(1,2,2) ax2.set_xlim([min(df2["Age"]),max(df2["Age"])]) ax2 = df2["Age"].hist(color="seagreen") plt.show() </code></pre> <p>I want one x axis per <code>subplot</code>, so the first one will include the ages from <code>min(df1["Age"])</code> to <code>max(df1["Age"])</code>, and another x axis for the second one, covering the ages from <code>min(df2["Age"])</code> to <code>max(df2["Age"])</code>. How can I do that?</p>
1
2016-08-31T15:22:00Z
39,253,748
<p>The issue is that your first subplot added with <code>ax1 = fig.add_subplot(1, 1, 1)</code> will fill the entire figure. The second subplot <code>ax2 = fig.add_subplot(1, 2, 2)</code> will span the right hand side, as if there was a first subplot to the left (which there isn't). What you should do if you want to have two subplots, half figure-size in width, is to use</p> <pre><code>fig = plt.figure() ax1 = fig.add_subplot(1, 2, 1) # first subplot, to the left ax2 = fig.add_subplot(1, 2, 2) # second subplot, to the right </code></pre> <p>Another, neater way to do it is to use the <code>plt.subplots</code> function. That creates the figure and the two axes with one call, as</p> <pre><code>fig, (ax1, ax2) = plt.subplots(1, 2) </code></pre> <p>Below are three images showing what goes wrong. The first figure is the result after adding ax1 in your code (not in my code!). Then you add ax2, giving the second figure, where it is obvious that half of the original ax1 is covered by the new ax2. However, the third figure shows two axes, side by side, which is what you want, I guess.</p> <p>Clarification edit: when calling <code>fig.add_subplot(rows, cols, num)</code>, the <code>num</code> parameter tells which subplot to add. I.e. if <code>rows = cols = 2</code>, <code>num = 1</code> corresponds to the upper left, <code>num = 2</code> to the upper right, <code>num = 3</code> to the bottom left and <code>num = 4</code> to the bottom right. This means that you can add e.g. 
the top right subplot (in a 2 x 2 grid) with <code>fig.add_subplot(2, 2, 2)</code>, see figure 4 below.</p> <p><a href="http://i.stack.imgur.com/YXe81.png" rel="nofollow"><img src="http://i.stack.imgur.com/YXe81.png" alt="Only ax1 added"></a> (Fig1: Only ax1 added from your code) <a href="http://i.stack.imgur.com/G5T0W.png" rel="nofollow"><img src="http://i.stack.imgur.com/G5T0W.png" alt="enter image description here"></a> (Fig2: Adding ax2 as well, from your code) <a href="http://i.stack.imgur.com/G7blA.png" rel="nofollow"><img src="http://i.stack.imgur.com/G7blA.png" alt="enter image description here"></a> (Fig3: Adding ax1 and ax2 in the proper way, side by side) <a href="http://i.stack.imgur.com/26OFN.png" rel="nofollow"><img src="http://i.stack.imgur.com/26OFN.png" alt="enter image description here"></a> (Fig4: Only upper right subplot added, with <code>fig.add_subplot(2, 2, 2)</code>. There are still 3 more spots empty in the 2 x 2 grid.)</p>
3
2016-08-31T15:37:15Z
[ "python", "matplotlib", "histogram" ]
Python/Selenium, how to access html list without id, but has multiple lists of same class on page
39,253,578
<p>I am new to Selenium and was wondering how to correctly find items in the html list below. The issue I am having is that the html list does not have an 'id' directly; it is in a 'span' a couple of lines above. The page has a few of these and they all have the same class "selectUL". In this example case it is the "lang" list, but there is also region, timezone etc. </p> <p>I am trying to write a function that takes a 'field' (lang, region, etc.) and uses it with <code>find_element_by_xpath</code> to parse out and eventually report which one is selected (and/or another function to set the selection).</p> <p>So... assuming browser is webdriver.Chrome() and I was able to log in etc.</p> <pre><code>field = "lang" # obviously not working but hopefully concept makes sense sysEntry = browser.find_element_by_xpath("//*[@id='{}']//ul[contains(@class, 'selectUL')]".format(field)) </code></pre> <p>Web page snippet looks like:</p> <pre><code>&lt;table class="info_table conf_table" cellspacing="0" cellpadding="0"&gt; &lt;tr&gt; &lt;td class="head" colspan="2"&gt;Language / Country&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td class="sub_head"&gt;Language&lt;/td&gt; &lt;td class="content normal"&gt; &lt;div class="selectbox selectmenu"&gt; &lt;a class="selectbtn"&gt; &lt;span id="lang" class="selecttext"&gt;None&lt;/span&gt; &lt;span class="select-arrow"&gt;&lt;/span&gt; &lt;/a&gt; &lt;ul class="selectUL"&gt; &lt;li id="langEN" class="sel"&gt;&lt;a href="javascript:"&gt;English&lt;/a&gt; &lt;/li&gt; &lt;li id="langFR"&gt;&lt;a href="javascript:"&gt;French&lt;/a&gt; &lt;/li&gt; &lt;li id="langGE"&gt;&lt;a href="javascript:"&gt;German&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; </code></pre> <p>How do I access these so that I can read from/write to them?</p>
1
2016-08-31T15:27:53Z
39,257,225
<p>I think the easier way to do this is to look for <code>class="sel"</code> on the <code>LI</code>. That seems to indicate which option is selected. From there, you can grab the <code>A</code> inside and then the text inside the <code>A</code>. You can use a CSS Selector to find this element using "li.sel > a" then grab the text inside that element. Something like</p> <pre><code>browser.find_element_by_css_selector("li.sel &gt; a").text </code></pre> <p>This should return "English" from your HTML sample above.</p> <hr> <p>Let's go a slightly different but more specific route. We can use XPath to find the TD that contains "Language" and then down through the children from there to find the <code>LI</code> with class <code>sel</code> and then get the <code>A</code> that contains the text you want.</p> <pre><code>browser.find_element_by_xpath("//td[@class='sub_head'][text()='Language']/following-sibling::td//li[@class='sel']/a").text </code></pre>
1
2016-08-31T19:05:54Z
[ "python", "selenium-webdriver" ]
Scikit SVM error: X.shape[1] = 1 should be equal to 2
39,253,651
<p>I am trying to use Scikit to train 2 features called: x1 and x2. Both these arrays are shape <code>(490,1)</code>. In order to pass in one <code>X</code> argument into <code>clf.fit(X,y)</code>, I used <code>np.concatenate</code> to produce an array shape <code>(490,2)</code>. The label array is composed of 1's and 0's and is shape <code>(490,)</code>. The code is shown below:</p> <pre><code>x1 = int_x # previously defined array shape (490,1) x2 = int_x2 # previously defined array shape (490,1) y=np.ravel(close) # where close is composed of 1's and 0's shape (490,1) X,y = np.concatenate((x1[:-1],x2[:-1]),axis=1), y[:-1] #train on all datapoints except last clf = SVC() clf.fit(X,y) </code></pre> <p>The following error is shown:</p> <pre><code>X.shape[1] = 1 should be equal to 2, the number of features at training time </code></pre> <p>What I don't understand is why this message appears even though when I check the shape of X, it is indeed 2 and not 1. I originally tried this with only one feature and <code>clf.fit(X,y)</code> worked well, so I am inclined to think that <code>np.concatenate</code> produced something that was not suitable. Any suggestions would be great. </p>
0
2016-08-31T15:31:37Z
39,254,138
<p>It's difficult to say without having the concrete values of <code>int_x</code>, <code>int_x2</code> and <code>close</code>. Indeed, if I try with <code>int_x</code>, <code>int_x2</code> and <code>close</code> randomly constructed as </p> <pre><code>import numpy as np from sklearn.svm import SVC int_x = np.random.normal(size=(490,1)) int_x2 = np.random.normal(size=(490,1)) close = np.random.randint(2, size=(490,)) </code></pre> <p>which conforms to your specs, then your code works. Thus the error may be in the way you constructed int_x, int_x2 and close. </p> <p>If you believe the problem is not there, could you please share a minimal reproducible example with specific values of <code>int_x</code>, <code>int_x2</code> and <code>close</code>?</p>
0
2016-08-31T15:57:45Z
[ "python", "scikit-learn", "svm" ]
Scikit SVM error: X.shape[1] = 1 should be equal to 2
39,253,651
<p>I am trying to use Scikit to train 2 features called: x1 and x2. Both these arrays are shape <code>(490,1)</code>. In order to pass in one <code>X</code> argument into <code>clf.fit(X,y)</code>, I used <code>np.concatenate</code> to produce an array shape <code>(490,2)</code>. The label array is composed of 1's and 0's and is shape <code>(490,)</code>. The code is shown below:</p> <pre><code>x1 = int_x # previously defined array shape (490,1) x2 = int_x2 # previously defined array shape (490,1) y=np.ravel(close) # where close is composed of 1's and 0's shape (490,1) X,y = np.concatenate((x1[:-1],x2[:-1]),axis=1), y[:-1] #train on all datapoints except last clf = SVC() clf.fit(X,y) </code></pre> <p>The following error is shown:</p> <pre><code>X.shape[1] = 1 should be equal to 2, the number of features at training time </code></pre> <p>What I don't understand is why this message appears even though when I check the shape of X, it is indeed 2 and not 1. I originally tried this with only one feature and <code>clf.fit(X,y)</code> worked well, so I am inclined to think that <code>np.concatenate</code> produced something that was not suitable. Any suggestions would be great. </p>
0
2016-08-31T15:31:37Z
39,255,194
<p>I think I understand what was wrong with my code. </p> <p>First, I should have created another variable, say <code>x</code>, defined as the concatenation of <code>int_x</code> and <code>int_x2</code>, with shape (490,2), which has the same number of rows as <code>close</code>. This comes in handy later.</p> <p>Next, <code>clf.fit(X,y)</code> was not incorrect in itself. However, I did not correctly formulate my prediction code. I wrote <code>clf.predict([close[-1]])</code> in hopes of capturing the binary target output (either 0 or 1). The argument passed into this method was incorrect. It should have been <code>clf.predict([x[-1]])</code>, because the model predicts a label from features, not the other way around. Since <code>x</code> has the same number of rows as <code>close</code>, the result of <code>clf.predict([x[-1]])</code> is the predicted value of <code>close[-1]</code>.</p>
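A sketch of the corrected flow, using randomly generated stand-ins for <code>int_x</code>, <code>int_x2</code> and <code>close</code> (the real arrays aren't shown in the question):

```python
import numpy as np
from sklearn.svm import SVC

# hypothetical stand-ins for the original (490, 1) feature columns and labels
int_x = np.random.normal(size=(490, 1))
int_x2 = np.random.normal(size=(490, 1))
close = np.random.randint(2, size=(490, 1))

x = np.concatenate((int_x, int_x2), axis=1)  # (490, 2): one row per sample
y = np.ravel(close)

clf = SVC()
clf.fit(x[:-1], y[:-1])      # train on everything except the last sample

pred = clf.predict([x[-1]])  # predict from the last *feature* row,
print(pred[0])               # which estimates the label close[-1]
```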
0
2016-08-31T17:00:23Z
[ "python", "scikit-learn", "svm" ]
pandas.DataFrame.query keeping original multiindex
39,253,672
<p>I have a dataframe with multiindex:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame(np.random.randint(0,5,(6, 2)), columns=['col1','col2']) &gt;&gt;&gt; df['ind1'] = list('AAABCC') &gt;&gt;&gt; df['ind2'] = range(6) &gt;&gt;&gt; df.set_index(['ind1','ind2'], inplace=True) &gt;&gt;&gt; df col1 col2 ind1 ind2 A 0 2 0 1 2 2 2 1 2 B 3 2 2 C 4 4 0 5 1 4 </code></pre> <p>when I select data using <code>.loc[]</code> on one of the index levels, and apply <code>.query()</code> afterwards, resulting index is "shrinked" as expected to match only those values contained in resulting dataframe:</p> <pre><code>&gt;&gt;&gt; df.loc['A'].query('col2 == 2') col1 col2 ind2 1 2 2 2 1 2 &gt;&gt;&gt; df.loc['A'].query('col2 == 2').index Int64Index([1, 2], dtype='int64', name='ind2') </code></pre> <p>however when I try to recieve same result using just <code>.query()</code>, pandas keeps the same index as on original dataframe (despite the fact, that it didn't behave like that above, in the case of single index - resulting index went from <code>[0,1,2]</code> to <code>[1,2]</code>, matching only <code>col2 == 2</code> rows):</p> <pre><code>&gt;&gt;&gt; df.query('ind1 == "A" &amp; col2 == 2') col1 col2 ind1 ind2 A 1 2 2 2 1 2 &gt;&gt;&gt; df.query('ind1 == "A" &amp; col2 == 2').index MultiIndex(levels=[['A', 'B', 'C'], [0, 1, 2, 3, 4, 5]], labels=[[0, 0], [1, 2]], names=['ind1', 'ind2']) </code></pre> <p>is it a bug or a feature? 
If it is a feature, could you please explain this behavior?</p> <p>EDIT1: I would expect the following index instead:</p> <pre><code>MultiIndex(levels=[['A'], [1, 2]], labels=[[0, 0], [0, 1]], names=['ind1', 'ind2']) </code></pre> <p>EDIT2: as explained in <a href="http://stackoverflow.com/questions/32585009/dataframe-slice-does-not-remove-index-values">Dataframe Slice does not remove Index Values</a>, index values shouldn't be removed at all when slicing a DataFrame; such behavior would give the following result: </p> <pre><code>&gt;&gt;&gt; df.loc['A'].query('col2 == 2') col1 col2 ind2 1 2 2 2 1 2 &gt;&gt;&gt; df.loc['A'].query('col2 == 2').index EXPECTATION: Int64Index([0, 1, 2], dtype='int64', name='ind2') REALITY: Int64Index([1, 2], dtype='int64', name='ind2') </code></pre>
2
2016-08-31T15:33:04Z
39,253,751
<p><code>df.loc['A']</code> returns a DataFrame (or a "view") with a regular ("single") index:</p> <pre><code>In [12]: df.loc['A'] Out[12]: col1 col2 ind2 0 1 1 1 0 3 2 1 2 </code></pre> <p>so <code>.query()</code> will be applied to that DataFrame with a regular index...</p>
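If the goal is to make `query` shrink the MultiIndex the way `.loc['A']` does, `MultiIndex.remove_unused_levels()` can be applied afterwards to drop the level values that no longer occur. A minimal sketch on data shaped like the question's (the column values here are made up):

```python
import pandas as pd

df = pd.DataFrame({'col1': [2, 2, 2, 2, 4, 1],
                   'col2': [0, 2, 2, 2, 0, 4]})
df['ind1'] = list('AAABCC')
df['ind2'] = range(6)
df = df.set_index(['ind1', 'ind2'])

res = df.query('ind1 == "A" & col2 == 2')
# res.index still carries all the original level values: A, B, C and 0..5
res.index = res.index.remove_unused_levels()
# now only the values actually present remain: ['A'] and [1, 2]
print(res.index.levels)
```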
1
2016-08-31T15:37:28Z
[ "python", "pandas", "dataframe", "multi-index" ]
Axis numerical offset in matplotlib
39,253,742
<p>I'm plotting something with matplotlib and it looks like this:</p> <p><a href="http://i.stack.imgur.com/dvfOn.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dvfOn.jpg" alt="enter image description here"></a></p> <p>I can't seem to figure out why the x-axis is offset like it is...It looks like it's saying, 'whatever you read from me, add 2.398e9 to it for the actual x value'.</p> <p>This is not quite what I want...Can I make it take only the first 4 digits, instead?</p> <p>This is representing frequency, so I'd like to see something that reads:</p> <p>2000 or 2400 or 2800....I can add the 'MHz' part in the axis title...But, this is unreadable at a glance.</p> <p>Is this doing this because it's trying to make decisions on how to truncate long data?</p> <p>Here's the code for the plotting:</p> <pre><code>plt.title(file_name +' at frequency '+ freq + 'MHz') plt.xlabel('Frequency') plt.ylabel('Conducted Power (dBm)') plt.grid(True) plt.plot(data['x'],data['y']) #plt.axis([min(data['x']),max(data['x']),min(data['y'],max(data['y']))]) plt.savefig(file_name+'_'+freq) print('plot written!') #plt.show() plt.close('all') </code></pre>
0
2016-08-31T15:36:55Z
39,253,913
<p>You need to <code>import</code> the appropriate formatter from <code>matplotlib.ticker</code>. Here is the <a href="http://matplotlib.org/api/ticker_api.html" rel="nofollow">full documentation for ticker</a>:</p> <pre><code>from matplotlib.ticker import FormatStrFormatter ax = plt.gca() ax.xaxis.set_major_formatter(FormatStrFormatter('%.0f')) </code></pre> <p>Once you have applied this to your plot, the +2.398e9 offset should disappear. </p> <p>In general, to avoid scientific notation, use the following:</p> <pre><code>ax.get_xaxis().get_major_formatter().set_scientific(False) </code></pre>
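A hedged end-to-end sketch using the headless Agg backend (the frequency values are made up); `ticklabel_format(useOffset=False, style='plain')` is an alternative way to switch the offset annotation off without installing a custom formatter:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([2.398e9, 2.400e9, 2.402e9], [0, 1, 0])  # hypothetical frequency data
ax.ticklabel_format(axis='x', style='plain', useOffset=False)
fig.canvas.draw()

# With the offset disabled, the "+2.398e9" annotation text is empty.
print(repr(ax.xaxis.get_offset_text().get_text()))
```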
1
2016-08-31T15:45:10Z
[ "python", "matplotlib", "plot", "axes", "graphing" ]
Axis numerical offset in matplotlib
39,253,742
<p>I'm plotting something with matplotlib and it looks like this:</p> <p><a href="http://i.stack.imgur.com/dvfOn.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dvfOn.jpg" alt="enter image description here"></a></p> <p>I can't seem to figure out why the x-axis is offset like it is...It looks like it's saying, 'whatever you read from me, add 2.398e9 to it for the actual x value'.</p> <p>This is not quite what I want...Can I make it take only the first 4 digits, instead?</p> <p>This is representing frequency, so I'd like to see something that reads:</p> <p>2000 or 2400 or 2800....I can add the 'MHz' part in the axis title...But, this is unreadable at a glance.</p> <p>Is this doing this because it's trying to make decisions on how to truncate long data?</p> <p>Here's the code for the plotting:</p> <pre><code>plt.title(file_name +' at frequency '+ freq + 'MHz') plt.xlabel('Frequency') plt.ylabel('Conducted Power (dBm)') plt.grid(True) plt.plot(data['x'],data['y']) #plt.axis([min(data['x']),max(data['x']),min(data['y'],max(data['y']))]) plt.savefig(file_name+'_'+freq) print('plot written!') #plt.show() plt.close('all') </code></pre>
0
2016-08-31T15:36:55Z
39,277,642
<p>If you don't want to play the formatting game and would rather just display values directly in MHz, then you could simply rescale your data to be in MHz instead of Hz. Something like</p> <pre><code>data['x'] /= 1e6 </code></pre> <p>and then add 'MHz' to your axis label.</p>
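Note that Hz to MHz is a division by 1e6; a quick numpy check with made-up values:

```python
import numpy as np

freq_hz = np.array([2.398e9, 2.400e9, 2.402e9])  # hypothetical x data in Hz
freq_mhz = freq_hz / 1e6                          # Hz to MHz

print(freq_mhz)  # [2398. 2400. 2402.]
```

After this rescale the axis reads 2398, 2400, ... directly, matching the 'MHz' label.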
0
2016-09-01T17:31:14Z
[ "python", "matplotlib", "plot", "axes", "graphing" ]
Storing pandas dataframe in a local machine
39,254,012
<p>I have a pandas dataframe in the following format:</p> <pre><code> File Hour test1 0 test2 1 test1 1 </code></pre> <p>I am trying to convert it to json and then store it in a local file location using the below command:</p> <pre><code>df1.to_json("\home\user1\Desktop\jsonfiles\df1.json") </code></pre> <p>But the file does not get saved in the above location. Not sure where I am going wrong. Any help would be appreciated.</p>
-2
2016-08-31T15:50:52Z
39,254,473
<p>You can try to locate your JSON file (if it was successfully written to disk; otherwise please post the full error stack) this way:</p> <pre><code>In [40]: fn = '/home/user1/Desktop/jsonfiles/df1.json' In [41]: df.to_json(fn) In [42]: f = open(fn) In [43]: import os In [44]: print(os.path.abspath(f.name)) /home/user1/Desktop/jsonfiles/df1.json </code></pre>
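A likely root cause in the original snippet is the backslashes: in a Python 3 string literal, `\u` must begin a unicode escape, so "\user1" is a syntax error, and on Linux a backslash is an ordinary filename character rather than a path separator, so even when the string parses the file lands in the current directory under an odd name. A small sketch of safer spellings (the path itself is the question's):

```python
import os

# Backslash escapes silently change a string: '\t' is a single TAB character.
assert len('\t') == 1

# Equivalent, safe ways to spell the intended Linux path:
p1 = '/home/user1/Desktop/jsonfiles/df1.json'
p2 = os.path.join('/home', 'user1', 'Desktop', 'jsonfiles', 'df1.json')
print(p1 == p2)  # True
```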
0
2016-08-31T16:16:28Z
[ "python", "pandas", "dataframe" ]
How to write common implementation of __str__ method for all my models in Django?
39,254,057
<p>I want all my models to override <code>__str__</code> method in similar fashion:</p> <pre><code>class MyModel1(models.Model): name = models.CharField(max_length=255) def __init__(self): self.to_show = 'name' def _show(self): if hasattr(self,self.to_show): return str(getattr(self, self.to_show)) else: return str(getattr(self, 'id')) def __str__(self): return self._show() class MyModel2AndSoOn(models.Model): another_param = models.CharField(max_length=255) # same implementation of `__str__` but with another_param </code></pre> <p>I do not want to repeat the same code for all my models so I tried inheritance:</p> <pre><code>class ShowModel(models.Model): name = models.CharField(max_length=255) def __init__(self): self.to_show = 'name' def _show(self): if hasattr(self,self.to_show): return str(getattr(self, self.to_show)) else: return str(getattr(self, 'id')) def __str__(self): return self._show() class MyModel1(ShowModel): another_param = models.CharField(max_length=255) class MyModel2(ShowModel): another_param = models.CharField(max_length=255) </code></pre> <p>but it messes with <code>id</code> of <code>MyModel1</code> and <code>MyModel2</code> by replacing <code>id</code> with a pointer to <code>ShowModel</code>. How to write common implementation of <code>__str__</code> method for my models without inheritance or how to prevent treating <code>ShowModel</code> class as a Django model?</p> <p><strong>Upd:</strong> I used <code>abstract</code> model as alecxe suggested but it ended with an error message:</p> <pre><code>in _show return str(getattr(self, self.to_show)) File "/path/to/my/project/env3/lib/python3.5/site-packages/django/db/models/fields/__init__.py", line 188, in __str__ model = self.model AttributeError: 'CharField' object has no attribute 'model' </code></pre> <p><strong>Upd</strong> Everything works fine if I assign value to the <code>name</code> field of my model object. 
Whole solution:</p> <pre><code>class ShowModel(object): to_show = 'name' def _show(self): if hasattr(self,self.to_show): return str(getattr(self, self.to_show)) elif hasattr(self,'id'): return str(getattr(self, 'id')) else: return str(self) def __str__(self): return self._show() class Meta: abstract = True class MyModel1(ShowModel): name = models.CharField(max_length=255) to_show = 'name' class MyModel2(ShowModel): another_param = models.CharField(max_length=255) to_show = 'another_param' </code></pre> <p>in test case:</p> <pre><code>ua = MyModel1() ua.name = 'hi' print(ua) #prints hi ub = MyModel2() ub.another_param = 'hi again' print(ub) #prints hi again </code></pre>
1
2016-08-31T15:53:12Z
39,254,097
<p>You need to create an <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#abstract-base-classes" rel="nofollow"><em><code>abstract</code> model</em></a>:</p> <pre><code>class ShowModel(models.Model): name = models.CharField(max_length=255) to_show = 'name' def _show(self): if hasattr(self, self.to_show): return str(getattr(self, self.to_show)) else: return str(getattr(self, 'id')) def __str__(self): return self._show() class Meta: abstract = True </code></pre> <p>And, as for your follow-up question and thanks to @itzmeontv: the <code>AttributeError</code> comes from overriding <code>__init__</code> without calling <code>super().__init__()</code>, which leaves the model fields uninitialized. Declaring <code>to_show</code> as a class attribute, as above, avoids overriding <code>__init__</code> at all; note that <code>hasattr()</code> and <code>getattr()</code> must still be called with <code>self.to_show</code> (the attribute's value), not the string <code>"to_show"</code>.</p>
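The same lookup logic can be exercised without Django at all; a plain-Python sketch of the mixin pattern (class names are made up), using a class attribute instead of an `__init__` override:

```python
class ShowMixin(object):
    to_show = 'name'  # subclasses may point this at another attribute

    def _show(self):
        if hasattr(self, self.to_show):
            return str(getattr(self, self.to_show))
        return str(getattr(self, 'id', self))

    def __str__(self):
        return self._show()


class Widget(ShowMixin):
    def __init__(self, name):
        self.name = name


class Gadget(ShowMixin):
    to_show = 'label'

    def __init__(self, label):
        self.label = label


print(Widget('hi'))        # hi
print(Gadget('hi again'))  # hi again
```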
1
2016-08-31T15:55:23Z
[ "python", "django", "inheritance", "django-models", "abstract" ]
python xarray indexing/slicing very slow
39,254,093
<p>I'm currently processing some ocean model outputs. At each time step, it has 42*1800*3600 grid points.</p> <p>I found that the bottleneck in my program is the slicing, and calling xarray's built-in methods to extract the values. And what's more interesting, the same syntax sometimes requires a vastly different amount of time.</p> <pre><code>ds = xarray.open_dataset(filename, decode_times=False) vvel0=ds.VVEL.sel(lat=slice(-60,-20),lon=slice(0,40))/100 #in CCSM output, unit is cm/s convert to m/s uvel0=ds.UVEL.sel(lat=slice(-60,-20),lon=slice(0,40))/100 ## why the speed is that different? now it's regional!! temp0=ds.TEMP.sel(lat=slice(-60,-20),lon=slice(0,40)) #de </code></pre> <p>Take this for example: reading VVEL and UVEL took ~4 sec, while reading TEMP only needed ~6 ms. Without slicing, VVEL and UVEL took ~1 sec, and TEMP needed 120 nanoseconds.</p> <p>I always thought that when I only request part of the full array, I need less memory, and therefore less time. It turned out that xarray loads in the full array and any extra slicing takes more time. But could somebody please explain why reading different variables from the same netCDF file takes such different amounts of time?</p> <p>The program is designed to extract a stepwise section and calculate the cross-sectional heat transport, so I need to pick out either UVEL or VVEL and multiply it by TEMP along the section. So it may seem that loading TEMP that fast is good, isn't it? </p> <p>Unfortunately, that's not the case. When I loop through about ~250 grid points along the prescribed section... 
</p> <pre><code># Calculate VT flux orthogonal to the chosen grid cells, which is the heat transport across GOODHOPE line vtflux=[] utflux=[] vap = vtflux.append uap = utflux.append #for i in range(idx_north,idx_south+1): for i in range(10): yidx=gh_yidx[i] xidx=gh_xidx[i] lon_next=ds_lon[i+1].values lon_current=ds_lon[i].values lat_next=ds_lat[i+1].values lat_current=ds_lat[i].values tt=np.squeeze(temp[:,yidx,xidx].values) #&lt;&lt; calling values is slow if (lon_next&lt;lon_current) and (lat_next==lat_current): # The condition is incorrect dxlon=Re*np.cos(lat_current*np.pi/180.)*0.1*np.pi/180. vv=np.squeeze(vvel[:,yidx,xidx].values) vt=vv*tt vtdxdz=np.dot(vt[~np.isnan(vt)],layerdp[0:len(vt[~np.isnan(vt)])])*dxlon vap(vtdxdz) #del vtdxdz elif (lon_next==lon_current) and (lat_next&lt;lat_current): #ut=np.array(uvel[:,gh_yidx[i],gh_xidx[i]].squeeze().values*temp[:,gh_yidx[i],gh_xidx[i]].squeeze().values) # slow uu=np.squeeze(uvel[:,yidx,xidx]).values # slow ut=uu*tt utdxdz=np.dot(ut[~np.isnan(ut)],layerdp[0:len(ut[~np.isnan(ut)])])*dxlat uap(utdxdz) #m/s*degC*m*m ## looks fine, something wrong with the sign #del utdxdz total_trans=(np.nansum(vtflux)-np.nansum(utflux))*3996*1026/1e15 </code></pre> <p>Especially this line:</p> <pre><code>tt=np.squeeze(temp[:,yidx,xidx].values) </code></pre> <p>It takes ~3.65 Sec, but now it has to be repeated for ~250 times. If I remove <code>.values</code>, then this time reduces to ~4ms. But I need to time the <code>tt</code> to <code>vt</code>, so I have to extract the values. What's weird, is that the similar expression, <code>vv=np.squeeze(vvel[:,yidx,xidx].values)</code> requires much less time, only about ~1.3ms.</p> <hr> <p>To summarize my questions:</p> <ol> <li>Why loading in different variables from the same netcdf file takes different amount of time? </li> <li>Is there a more efficient way to pick out a single column in a multidimensional array? 
(not necessarily the xarray structure; a numpy.ndarray would also do)</li> <li>Why does extracting values from xarray structures take different amounts of time for the exact same syntax?</li> </ol> <p>Thank you!</p>
1
2016-08-31T15:55:07Z
39,262,154
<p>When you index a variable loaded from a netCDF file, xarray doesn't load it into memory immediately. Instead, we create a lazy array that supports any number of further deferred indexing operations. This is true even if you aren't using <a href="http://dask.pydata.org/" rel="nofollow">dask.array</a> (triggered by setting <code>chunks=</code> in <code>open_dataset</code> or using <code>open_mfdataset</code>).</p> <p>This explains the surprising performance you observe. Calculating <code>temp0</code> is fast, because it doesn't load any data from disk. <code>vvel0</code> is slow, because dividing by 100 requires loading the data into memory as a numpy array.</p> <p>Later, it's slower to index <code>temp0</code> because each operation loads data from disk, instead of indexing a numpy array already in memory.</p> <p>The work-around is to explicitly load the portion of your dataset that you need into memory first, e.g., by writing <code>temp0.load()</code>. The <a href="http://xarray.pydata.org/en/stable/io.html#netcdf" rel="nofollow">netCDF section</a> of the xarray docs also gives this tip.</p>
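The cost model behind this answer can be mimicked with a toy class: each lazy access pays a simulated disk read, while materialising once up front makes later accesses free (pure Python, no xarray required; the counter stands in for actual disk reads):

```python
class LazyArray(object):
    """Toy stand-in for a lazily-loaded on-disk variable."""

    def __init__(self, data):
        self._data = data
        self._cache = None
        self.disk_reads = 0   # counts simulated trips to disk

    def load(self):
        """Materialise the whole array in memory, paying one read."""
        if self._cache is None:
            self.disk_reads += 1
            self._cache = list(self._data)
        return self._cache

    def __getitem__(self, i):
        if self._cache is not None:
            return self._cache[i]     # served from memory
        self.disk_reads += 1          # every lazy access hits "disk"
        return self._data[i]


arr = LazyArray(range(100))
for i in range(5):
    arr[i]
print(arr.disk_reads)  # 5: one simulated read per access

arr.load()
for i in range(5):
    arr[i]
print(arr.disk_reads)  # 6: load() paid once, later reads are free
```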
1
2016-09-01T03:15:22Z
[ "python", "numpy", "netcdf", "python-xarray" ]
Posting XML using python requests
39,254,125
<p>I have the following code that posts xml to teamcity to create a new VCS root:</p> <pre><code>def addVcsRoot(vcsRootId, vcsRootName, projectId, projectName, buildName, repoId, teamAdminUser, teamAdminPass): headers = {'Content-type': 'application/xml'} data = ("&lt;?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?&gt;" + "&lt;vcs-root id=\"" + vcsRootId + "\" " + "name=\"" + vcsRootName + "\" " + "vcsName=\"jetbrains.git\" href=\"/app/rest/vcs-roots/id:" + vcsRootId + "\"&gt;" + "&lt;project id=\"" + projectId + "\" " "name=\"" + projectName + "\" " "parentProjectId=\"_Root\" " + "description=\"Single repository for all components\" href=\"/app/rest/projects/id:" + projectId + "\" " + "webUrl=\"http://teamcity.company.com/project.html?projectId=" + projectId + "\"/&gt;" + "&lt;properties count=\"11\"&gt;" + "&lt;property name=\"agentCleanFilesPolicy\" value=\"ALL_UNTRACKED\"/&gt;" + "&lt;property name=\"agentCleanPolicy\" value=\"ON_BRANCH_CHANGE\"/&gt;" + "&lt;property name=\"authMethod\" value=\"PASSWORD\"/&gt;" + "&lt;property name=\"branch\" value=\"refs/heads/master\"/&gt;" + "&lt;property name=\"ignoreKnownHosts\" value=\"true\"/&gt;" + "&lt;property name=\"submoduleCheckout\" value=\"CHECKOUT\"/&gt;" + "&lt;property name=\"url\" value=\"https://source.company.com/scm/" +repoId + "/" + buildName + ".git\"" + "/&gt;" + "&lt;property name=\"username\" value=\"" + teamAdminUser + "\"/&gt;" + "&lt;property name=\"usernameStyle\" value=\"USERID\"/&gt;" + "&lt;property name=\"secure:password\" value=\"" + teamAdminPass + "\"/&gt;" + "&lt;property name=\"useAlternates\" value=\"true\"/&gt;" + "&lt;/properties&gt;" + "&lt;vcsRootInstances href=\"/app/rest/vcs-root-instances?locator=vcsRoot:(id:" + vcsRootId + ")" + "\"" + "/&gt;" + "&lt;/vcs-root&gt;") url = path + 'vcs-roots' return requests.post(url, auth=auth, headers=headers, data=data) </code></pre> <p>I did a get to see what the xml file should look like and have made it so I can input different 
parameters for different builds, and the script works fine. My question is: is there a more elegant way to do this? Posting this long string with concatenation seems ugly and inefficient. What are some other ways to post xml using requests?</p>
0
2016-08-31T15:57:08Z
39,254,539
<p>I am not going to rewrite it all, but using str.format, kwargs and a triple-quoted string will make the code a lot less cluttered:</p> <pre><code>def addVcsRoot(**kwargs): headers = {'Content-type': 'application/xml'} data = """&lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt; &lt;vcs-root id="{vcsRootId}" name="{vcsRootName}" vcsName="jetbrains.git" href="/app/rest/vcs-roots/id:{vcsRootId}"&gt;""".format(**kwargs) </code></pre> <p>Then:</p> <pre><code>addVcsRoot(vcsRootId=1234, ......) </code></pre> <p>or if you want to keep the named args as is:</p> <pre><code>def addVcsRoot(vcsRootId, vcsRootName, projectId, projectName, buildName, repoId, teamAdminUser, teamAdminPass): headers = {'Content-type': 'application/xml'} data = """&lt;?xml version="1.0" encoding="UTF-8" standalone="yes"?&gt; &lt;vcs-root id="{vcsRootId}" name="{vcsRootName}" vcsName="jetbrains.git" href="/app/rest/vcs-roots/id:{vcsRootId}"&gt;"""\ .format(vcsRootId=vcsRootId, vcsRootName=vcsRootName.....)</code></pre>
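A runnable sketch of the pattern with a trimmed-down template (the element and placeholder names are illustrative, not the full TeamCity payload):

```python
TEMPLATE = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<vcs-root id="{vcsRootId}" name="{vcsRootName}" vcsName="jetbrains.git">
  <property name="url" value="https://example.com/scm/{repoId}/{buildName}.git"/>
</vcs-root>"""


def build_vcs_root(**kwargs):
    # str.format fills every {placeholder} from the keyword arguments
    return TEMPLATE.format(**kwargs)


xml = build_vcs_root(vcsRootId='Root1', vcsRootName='main',
                     repoId='proj', buildName='app')
print('id="Root1"' in xml)        # True
print('scm/proj/app.git' in xml)  # True
```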
0
2016-08-31T16:20:01Z
[ "python", "xml", "post", "teamcity", "python-requests" ]
Validating that all components required for an object to exist are present
39,254,149
<p>I need to write a script that gets a list of components from an external source and based on a pre-defined list it validates whether the service is complete. This is needed because the presence of a single component doesn't automatically imply that the service is present - some components are pre-installed even when there is no service. I've devised something really simple below, but I was wondering what is the intelligent way of doing this? There must be a cleaner, simpler way.</p> <pre><code># Components that make up a complete service serviceComponents = ['A','B'] # Input from JSON data = ['B','A','C'] serviceComplete = True for i in serviceComponents: if i in data: print 'yay ' + i + ' found from ' + ', '.join(service2) else: serviceComplete = False break # If serviceComplete = True do blabla... </code></pre>
-1
2016-08-31T15:58:18Z
39,254,265
<pre><code># Components that make up a complete service serviceComponents = ['A','B'] # Input from JSON data = ['B','A','C'] if all(item in data for item in serviceComponents): print("All required components are present") </code></pre>
1
2016-08-31T16:03:49Z
[ "python" ]
Validating that all components required for an object to exist are present
39,254,149
<p>I need to write a script that gets a list of components from an external source and based on a pre-defined list it validates whether the service is complete. This is needed because the presence of a single component doesn't automatically imply that the service is present - some components are pre-installed even when there is no service. I've devised something really simple below, but I was wondering what is the intelligent way of doing this? There must be a cleaner, simpler way.</p> <pre><code># Components that make up a complete service serviceComponents = ['A','B'] # Input from JSON data = ['B','A','C'] serviceComplete = True for i in serviceComponents: if i in data: print 'yay ' + i + ' found from ' + ', '.join(service2) else: serviceComplete = False break # If serviceComplete = True do blabla... </code></pre>
-1
2016-08-31T15:58:18Z
39,254,293
<p><a href="https://docs.python.org/2/library/stdtypes.html#set" rel="nofollow"><strong>Built-in Set</strong></a> would serve for you, use <a href="https://docs.python.org/2/library/stdtypes.html#set.issubset" rel="nofollow">set.issubset</a> to identify that your required service components is subset of input data:</p> <pre><code>serviceComponents = set(['A','B']) input_data = set(['B','A','C']) if serviceComponents.issubset(input_data): # perform actions ... </code></pre>
1
2016-08-31T16:05:11Z
[ "python" ]
Validating that all components required for an object to exist are present
39,254,149
<p>I need to write a script that gets a list of components from an external source and based on a pre-defined list it validates whether the service is complete. This is needed because the presence of a single component doesn't automatically imply that the service is present - some components are pre-installed even when there is no service. I've devised something really simple below, but I was wondering what is the intelligent way of doing this? There must be a cleaner, simpler way.</p> <pre><code># Components that make up a complete service serviceComponents = ['A','B'] # Input from JSON data = ['B','A','C'] serviceComplete = True for i in serviceComponents: if i in data: print 'yay ' + i + ' found from ' + ', '.join(service2) else: serviceComplete = False break # If serviceComplete = True do blabla... </code></pre>
-1
2016-08-31T15:58:18Z
39,254,305
<p>You could do it a few different ways:</p> <pre><code>set(serviceComponents) &lt;= set(data) set(serviceComponents).issubset(data) all(c in data for c in serviceComponents) </code></pre> <p>You can make it shorter, but you lose readability. What you have now is probably fine. I'd go with the first approach personally, since it expresses your intent clearly with set operations.</p>
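All three variants agree; a quick check with the question's data:

```python
serviceComponents = ['A', 'B']
data = ['B', 'A', 'C']

checks = [
    set(serviceComponents) <= set(data),
    set(serviceComponents).issubset(data),
    all(c in data for c in serviceComponents),
]
print(checks)  # [True, True, True]

# A missing component flips every variant to False.
data_incomplete = ['B', 'C']
print(set(serviceComponents) <= set(data_incomplete))  # False
```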
1
2016-08-31T16:05:37Z
[ "python" ]
Replicate a dataset with dask to all workers
39,254,182
<p>I am using dask with distributed scheduler. I am trying to replicate a dataset read through csv on s3 to all worker nodes. Example:</p> <pre><code>from distributed import Executor import dask.dataframe as dd e= Executor('127.0.0.1:8786',set_as_default=True) df = dd.read_csv('s3://bucket/file.csv', blocksize=None) df = e.persist(df) e.replicate(df) distributed.utils - ERROR - unhashable type: 'list' Traceback (most recent call last): File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/distributed/utils.py", line 102, in f result[0] = yield gen.maybe_future(func(*args, **kwargs)) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1015, in run value = future.result() File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/concurrent.py", line 237, in result raise_exc_info(self._exc_info) File "&lt;string&gt;", line 3, in raise_exc_info File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1021, in run yielded = self.gen.throw(*exc_info) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/distributed/executor.py", line 1347, in _replicate branching_factor=branching_factor) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1015, in run value = future.result() File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/concurrent.py", line 237, in result raise_exc_info(self._exc_info) File "&lt;string&gt;", line 3, in raise_exc_info File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1021, in run yielded = self.gen.throw(*exc_info) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/distributed/core.py", line 444, in send_recv_from_rpc result = yield send_recv(stream=stream, op=key, **kwargs) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1015, in run value = future.result() File 
"/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/concurrent.py", line 237, in result raise_exc_info(self._exc_info) File "&lt;string&gt;", line 3, in raise_exc_info File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1024, in run yielded = self.gen.send(value) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/distributed/core.py", line 345, in send_recv six.reraise(*clean_exception(**response)) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/six.py", line 685, in reraise raise value.with_traceback(tb) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/distributed/core.py", line 211, in handle_stream result = yield gen.maybe_future(handler(stream, **msg)) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 1015, in run value = future.result() File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/concurrent.py", line 237, in result raise_exc_info(self._exc_info) File "&lt;string&gt;", line 3, in raise_exc_info File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/tornado/gen.py", line 285, in wrapper yielded = next(result) File "/root/.miniconda/envs/dask_env/lib/python3.5/site-packages/distributed/scheduler.py", line 1324, in replicate keys = set(keys) TypeError: unhashable type: 'list' </code></pre> <p>Is this the correct way to replicate a dataframe? It appears that <code>e.persist(df)</code> returned object does not work with <code>e.replicate</code> for some reason.</p>
0
2016-08-31T15:59:32Z
39,255,082
<p>This was a bug and has been resolved in <a href="https://github.com/dask/distributed/pull/473" rel="nofollow">https://github.com/dask/distributed/pull/473</a></p>
0
2016-08-31T16:52:35Z
[ "python", "dask" ]
Calculating mean for sub-set of dataframe based on unique row names
39,254,203
<p>I have a dataframe which looks like follows,</p> <pre><code> df.head() Sym P1 P2 P3 P4 P5 B1 B2 B3 B4 B5 AA 7.86 8.86 9.86 10.86 11.86 0.7768 1.7768 2.7768 3.7768 4.7768 AA 7.86 8.86 9.86 10.86 11.86 0.8664 1.8664 2.8664 3.8664 4.8664 AA 7.86 8.86 9.86 10.86 11.86 0.874534 1.874534 2.874534 3.874534 4.874534 BB 5.8 6.8 7.8 8.8 9.8 7.42 8.42 9.42 10.42 11.42 BB 5.8 6.8 7.8 8.8 9.8 0.1434 1.1434 2.1434 3.1434 4.1434 CC 0.421 1.421 2.421 3.421 4.421 6.78 7.78 8.78 9.78 10.78 CC 0.421 1.421 2.421 3.421 4.421 8.43 9.43 10.43 11.43 12.43 VV 3.25 4.25 5.25 6.25 7.25 0.97 1.97 2.97 3.97 4.97 VV 3.25 4.25 5.25 6.25 7.25 0.2 1.2 2.2 3.2 4.2 VV 3.25 4.25 5.25 6.25 7.25 0.45 1.45 2.45 3.45 4.45 VV 3.25 4.25 5.25 6.25 7.25 0.78 1.78 2.78 3.78 4.78 </code></pre> <p>And what I am aiming is to get the mean of the second half(Columns Starting with name B1..B5) of the data frame based on the unique values in column 'sym' and make a new dataframe which looks as follows.</p> <pre><code>Sym P1 P2 P3 P4 P5 B1 B2 B3 B4 B5 AA 7.86 8.86 9.86 10.86 11.86 0.8664 1.8664 2.8664 3.8664 4.8664 BB 5.8 6.8 7.8 8.8 9.8 3.7817 4.7817 5.7817 6.7817 7.7817 CC 0.421 1.421 2.421 3.421 4.421 7.605 8.605 9.605 10.605 11.605 VV 3.25 4.25 5.25 6.25 7.25 0.615 1.615 2.615 3.615 4.615 </code></pre> <p>I tried to used groupby for that to get the unique sym .Would be great if someone could suggest a simple way to proceed Thank you</p>
1
2016-08-31T16:00:29Z
39,254,256
<p>Use <code>filter</code> and <code>groupby</code></p> <pre><code>transformed = df.filter(like='B').groupby(df.Sym).transform(np.mean) df.loc[:, df.columns.str.contains('B')] = transformed df </code></pre> <p><a href="http://i.stack.imgur.com/ZR3gL.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZR3gL.png" alt="enter image description here"></a></p>
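A self-contained run of the same idea on an abbreviated version of the question's data (one B column and two symbols, values copied from the question):

```python
import pandas as pd

df = pd.DataFrame({
    'Sym': ['AA', 'AA', 'BB', 'BB'],
    'P1':  [7.86, 7.86, 5.8, 5.8],
    'B1':  [0.7768, 0.8664, 7.42, 0.1434],
})

# Per-symbol mean of the B columns, broadcast back onto every row.
b_cols = df.filter(like='B')
df.loc[:, b_cols.columns] = b_cols.groupby(df.Sym).transform('mean')

# One row per symbol, as in the desired output.
print(df.drop_duplicates('Sym'))
```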
1
2016-08-31T16:02:59Z
[ "python", "pandas", "numpy" ]
What is the most pythonic way to rebase prices based on a level of a multi index DataFrame?
39,254,229
<p>I have a mulit-index DataFrame that looks like this:</p> <pre><code>In[114]: cdm Out[114]: Last TD Date Ticker 1983-03-30 CLM83 29.40 44 CLN83 29.35 76 CLQ83 29.20 105 CLU83 28.95 139 CLV83 28.95 167 CLX83 28.90 197 CLZ83 28.75 230 1983-03-31 CLM83 29.29 43 CLN83 29.24 75 CLQ83 29.05 104 CLU83 28.85 138 CLV83 28.75 166 CLX83 28.70 196 CLZ83 28.60 229 1983-04-04 CLM83 29.44 39 CLN83 29.25 71 CLQ83 29.10 100 CLU83 29.05 134 CLV83 28.95 162 CLX83 28.95 192 CLZ83 28.85 225 1983-04-05 CLM83 29.71 38 CLN83 29.54 70 CLQ83 29.35 99 CLU83 29.20 133 CLV83 29.10 161 CLX83 29.00 191 CLZ83 29.00 224 1983-04-06 CLM83 29.90 37 CLN83 29.68 69 ... ... 2016-07-05 CLV6 47.91 72 CLX6 48.51 104 CLZ6 49.07 134 CLF7 49.54 163 CLG7 49.93 196 CLH7 50.26 226 CLJ7 50.53 254 CLK7 50.77 286 CLM7 51.00 316 CLN7 51.20 345 CLQ7 51.39 377 CLU7 51.58 408 CLV7 51.79 437 CLX7 52.03 469 2016-07-06 CLQ6 47.43 9 CLU6 48.14 42 CLV6 48.75 71 CLX6 49.34 103 CLZ6 49.89 133 CLF7 50.36 162 CLG7 50.75 195 CLH7 51.08 225 CLJ7 51.35 253 CLK7 51.60 285 CLM7 51.84 315 CLN7 52.05 344 CLQ7 52.25 376 CLU7 52.46 407 CLV7 52.69 436 CLX7 52.94 468 [289527 rows x 2 columns] </code></pre> <p>It is pretty big and I want to rebase the prices, meaning that at each point in time (each 'Date') the first price ('Last') is set to a 100 and the others are measured against this first one.</p> <p>I have the following function:</p> <pre><code>def rebase(prices): return prices/prices[0]*100 </code></pre> <p>I also came up with a groupby way of achieving my objective. 
However it is ridiculously long:</p> <pre><code>%time cdm.groupby(level='Date')['Last'].apply(rebase) Wall time: 1min 49s Out[115]: Date Ticker 1983-03-30 CLM83 100.000000 CLN83 99.829932 CLQ83 99.319728 CLU83 98.469388 CLV83 98.469388 CLX83 98.299320 CLZ83 97.789116 1983-03-31 CLM83 100.000000 CLN83 99.829293 CLQ83 99.180608 CLU83 98.497781 CLV83 98.156367 CLX83 97.985661 CLZ83 97.644247 1983-04-04 CLM83 100.000000 CLN83 99.354620 CLQ83 98.845109 CLU83 98.675272 CLV83 98.335598 CLX83 98.335598 CLZ83 97.995924 1983-04-05 CLM83 100.000000 CLN83 99.427802 CLQ83 98.788287 CLU83 98.283406 CLV83 97.946819 CLX83 97.610232 CLZ83 97.610232 1983-04-06 CLM83 100.000000 CLN83 99.264214 2016-07-05 CLV6 102.811159 CLX6 104.098712 CLZ6 105.300429 CLF7 106.309013 CLG7 107.145923 CLH7 107.854077 CLJ7 108.433476 CLK7 108.948498 CLM7 109.442060 CLN7 109.871245 CLQ7 110.278970 CLU7 110.686695 CLV7 111.137339 CLX7 111.652361 2016-07-06 CLQ6 100.000000 CLU6 101.496943 CLV6 102.783049 CLX6 104.026987 CLZ6 105.186591 CLF7 106.177525 CLG7 106.999789 CLH7 107.695551 CLJ7 108.264811 CLK7 108.791904 CLM7 109.297913 CLN7 109.740670 CLQ7 110.162345 CLU7 110.605102 CLV7 111.090027 CLX7 111.617120 Name: Last, dtype: float64 </code></pre> <p>It takes between 1.30 and 3 minutes to get it done, and I still need to make some more manipulations to get to where I want to, i.e having this column of rebased prices included in my first DataFrame cdm:</p> <pre><code>groupRebP = cdm.groupby(level='Date')['Last'].apply(rebase) groupRebP = pd.DataFrame(groupRebP) cdm['RebP'] = groupRebP['Last'] </code></pre> <p>Is there a faster and more pythonic way to achieve this?</p> <p>Thank you for your tips,</p>
2
2016-08-31T16:01:35Z
39,254,875
<h3>Setup</h3> <pre><code>from StringIO import StringIO import pandas as pd import numpy as np text = """Date Ticker Last TD 1983-03-30 CLM83 29.40 44 1983-03-30 CLN83 29.35 76 1983-03-30 CLQ83 29.20 105 1983-03-30 CLU83 28.95 139 1983-03-30 CLV83 28.95 167 1983-03-30 CLX83 28.90 197 1983-03-30 CLZ83 28.75 230 1983-03-31 CLM83 29.29 43 1983-03-31 CLN83 29.24 75 1983-03-31 CLQ83 29.05 104 1983-03-31 CLU83 28.85 138 1983-03-31 CLV83 28.75 166 1983-03-31 CLX83 28.70 196 1983-03-31 CLZ83 28.60 229 1983-04-04 CLM83 29.44 39 1983-04-04 CLN83 29.25 71 1983-04-04 CLQ83 29.10 100 1983-04-04 CLU83 29.05 134 1983-04-04 CLV83 28.95 162 1983-04-04 CLX83 28.95 192 1983-04-04 CLZ83 28.85 225 1983-04-05 CLM83 29.71 38 1983-04-05 CLN83 29.54 70 1983-04-05 CLQ83 29.35 99 1983-04-05 CLU83 29.20 133 1983-04-05 CLV83 29.10 161 1983-04-05 CLX83 29.00 191 1983-04-05 CLZ83 29.00 224""" cdm = pd.read_csv(StringIO(text), delim_whitespace=True, parse_dates=[0], index_col=[0, 1]) </code></pre> <h3>Solution</h3> <p>Use <code>numpy</code> a lot!</p> <pre><code>cdm_last = cdm.Last.unstack() a = cdm_last.values a_rebased = np.concatenate([np.ones((1, a.shape[1])), np.exp(np.diff(np.log(a), axis=0))]) * 100 cdm_last_rebased = pd.DataFrame(a_rebased, cdm_last.index, cdm_last.columns) cdm_last_rebased </code></pre> <p><a href="http://i.stack.imgur.com/RG37U.png" rel="nofollow"><img src="http://i.stack.imgur.com/RG37U.png" alt="enter image description here"></a></p> <p><strong><code>stack</code></strong> to get back your series.</p> <pre><code>cdm_last_rebased.stack() Date Ticker 1983-03-30 CLM83 100.000000 CLN83 100.000000 CLQ83 100.000000 CLU83 100.000000 CLV83 100.000000 CLX83 100.000000 CLZ83 100.000000 1983-03-31 CLM83 99.625850 CLN83 99.625213 CLQ83 99.486301 CLU83 99.654577 CLV83 99.309154 CLX83 99.307958 CLZ83 99.478261 1983-04-04 CLM83 100.512120 CLN83 100.034200 CLQ83 100.172117 CLU83 100.693241 CLV83 100.695652 CLX83 100.871080 CLZ83 100.874126 1983-04-05 CLM83 100.917120 CLN83 100.991453 
CLQ83 100.859107 CLU83 100.516351 CLV83 100.518135 CLX83 100.172712 CLZ83 100.519931 dtype: float64 </code></pre>
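<p>As an aside to the above: if the goal is strictly the per-<code>Date</code> rebase from the question (first ticker of each day set to 100), a fully vectorised <code>groupby</code>/<code>transform('first')</code> division avoids the Python-level <code>apply</code> entirely and is typically much faster. A sketch on a small stand-in frame (a made-up subset of the question's data):</p>

```python
import pandas as pd

# Small stand-in for the question's (Date, Ticker) MultiIndex frame.
idx = pd.MultiIndex.from_tuples(
    [("1983-03-30", "CLM83"), ("1983-03-30", "CLN83"), ("1983-03-30", "CLZ83"),
     ("1983-03-31", "CLM83"), ("1983-03-31", "CLZ83")],
    names=["Date", "Ticker"])
cdm = pd.DataFrame({"Last": [29.40, 29.35, 28.75, 29.29, 28.60]}, index=idx)

# Divide every price by the first price of its Date group -- no Python loop.
cdm["RebP"] = cdm["Last"] / cdm.groupby(level="Date")["Last"].transform("first") * 100
print(cdm)
```

<p>The first row of each <code>Date</code> group comes out at exactly 100, and the result is assigned straight back as a column, as the question wants.</p>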
1
2016-08-31T16:40:19Z
[ "python", "pandas", "dataframe", "multi-index", "rebasing" ]
How to use rmagic in Azure Notebooks?
39,254,231
<p>I am trying to get some data out of R snippet to Azure Python 3 Jupyter notebook (hosting is available for free at <a href="http://notebooks.azure.com" rel="nofollow">http://notebooks.azure.com</a>).</p> <p>I tried the following in Python 3 notebook:</p> <pre><code>%load_ext rmagic </code></pre> <p>then tried to embed R: </p> <pre><code>%%R -o x x &lt;- 1 </code></pre> <p>then</p> <pre><code>x </code></pre> <p>Here I get Python error: <code>name 'x' is not defined</code> – see the picture below. What is the right way to embed R into Python 3 and exchange data using Azure Notebooks?</p> <p><img src="http://nogin.info/R2Py.png" alt="Azure Notebooks"></p>
0
2016-08-31T16:01:39Z
39,268,025
<p>@DmitryNogin, I reproduced the issue successfully. According to the description below from <a href="https://ipython.org/ipython-doc/2/config/extensions/rmagic.html" rel="nofollow">here</a>, you need to use <code>%load_ext rpy2.ipython</code> instead of <code>%load_ext rmagic</code> in Jupyter notebooks now.</p> <blockquote> <p>The rmagic extension has been moved to rpy2 as rpy2.interactive.ipython.</p> </blockquote> <p>However, I got another error when I tried <code>%load_ext rpy2.ipython</code> in the notebook.</p> <pre><code>ImportError: libRblas.so: cannot open shared object file: No such file or directory </code></pre> <p>I found a solution which requires setting the environment variable <code>LD_LIBRARY_PATH</code> with <code>export LD_LIBRARY_PATH=/usr/lib64/MR0-3.3.0/R-3.3.0/lib/R/lib</code> (the path value which I found via the command <code>which R</code>) on the Azure notebook server. However, the host Ubuntu OS does not have <code>vi</code> or <code>vim</code> installed and I don't know the <code>sudo</code> password for <code>nbuser</code> in the terminal, so although the solution works for IPython in the terminal of the notebook server, it does not make Jupyter work.</p> <p>My final working solution was to use the terminal of the notebook server to run the command <code>ln -s /usr/lib64/MR0-3.3.0/R-3.3.0/lib/R/lib/* ~/anaconda3_410/lib/</code>. </p> <p>Then, when you enter <code>%load_ext rpy2.ipython</code>, you will get an error <code>ImportError ..../libreadline.so.6: undefined symbol: PC</code>. You only need to enter <code>import readline</code> before <code>%load_ext rpy2.ipython</code> to solve it.</p> <p>Finally, you can load <code>rpy2.ipython</code> and use <code>%R xxx</code>, with some warning messages.</p> <p><a href="http://i.stack.imgur.com/fdhsm.png" rel="nofollow"><img src="http://i.stack.imgur.com/fdhsm.png" alt="enter image description here"></a></p> <p>Hope it helps.</p>
1
2016-09-01T09:40:59Z
[ "python", "azure", "jupyter" ]
Finding the occurrence and position of a character in python
39,254,319
<p>I have a variable (Var) with IDs, and I am interested in finding the position of the last occurrence of Z.</p> <p>I have tried to convert it to an array of positions:</p> <pre><code> zf1=np.where(df2['Var']=="Z") </code></pre> <p>This gives me the result</p> <pre><code> (array([4,5,6,7,8,9,10,11,12,22,23,24,25],dtype=int64),) </code></pre> <p>My idea was to take the difference of these values and look for -1, then use the index value of that -1 to add an ID next to it:</p> <pre><code> np.diff(zf1) Var ID A B C Z Z Z Z Z Z Z Z Z 1 X X X X X X B A C Z Z Z Z 2 </code></pre> <p>np.diff is not giving me -1. Is there an alternative method? </p>
1
2016-08-31T16:06:32Z
39,254,981
<p><strong><em>Get Index values</em></strong></p> <pre><code>df2.index[df2.Var.eq('Z') &amp; df2.Var.ne(df2.Var.shift(-1))] </code></pre> <p><strong><em>Filter <code>df2</code></em></strong></p> <pre><code>df2[df2.Var.eq('Z') &amp; df2.Var.ne(df2.Var.shift(-1))] </code></pre>
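<p>A quick self-contained check of the idea, on a made-up <code>Var</code> column shaped like the question's (runs of <code>Z</code> mixed with other letters):</p>

```python
import pandas as pd

# Toy frame: two runs of 'Z'; we want the index of the *last* Z in each run.
df2 = pd.DataFrame({"Var": list("ABCZZZXXBACZZ")})

# A row is a "last Z" when it is Z and the next row is not Z.
mask = df2.Var.eq("Z") & df2.Var.ne(df2.Var.shift(-1))
last_z_positions = df2.index[mask].tolist()
print(last_z_positions)
```

<p>For this toy column the runs of <code>Z</code> end at positions 5 and 12; the final row counts too, because <code>shift(-1)</code> yields <code>NaN</code> there, which never compares equal to <code>'Z'</code>.</p>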
0
2016-08-31T16:46:11Z
[ "python", "pandas", "position", "np", "find-occurrences" ]
Using PIL to open an image with a palette and save that image with the same palette
39,254,339
<p>So I am trying to convert a bmp to a NumPy array, store the array somewhere, and then convert it back into a bmp image at a later time. </p> <pre><code>bmp = Image.open(fn_bmp) data = np.array(bmp.convert('P', palette=Image.WEB)) </code></pre> <p>This data is stored in another file temporarily and then I go to retrieve it at some later point.</p> <pre><code>bmp = Image.fromarray(np.array(dataset).convert('P', palette=Image.WEB)) bmp.save(fn) </code></pre> <p>Note that dataset is an object converted back into a NumPy array and np.array(dataset) == data in all indices.</p> <p>For some reason, when I show or save this resulting image, a "14" corresponding to some palette color is interpreted as a grey scale value and saved as such. How do I save the image as a colored palette bitmap image? I have tried adding options to the save (e.g. mode='P', palette=Image.WEB) to no avail. Thank you for your help.</p> <p><strong>EDIT:</strong></p> <p>In the tutorial portion of the PIL documentation, it specifies the limitations of conversions.</p> <blockquote> <p>The library supports transformations between each supported mode and the “L” and “RGB” modes. To convert between other modes, you may have to use an intermediate image (typically an “RGB” image).</p> </blockquote> <p>So, in order to accomplish what I was doing, I have to convert the image to RGB in the first array and convert back to P in the second.</p> <p>However, the image (with only 4 colors) becomes distorted when converted from RGB back to P. Is there any reason for this? </p>
0
2016-08-31T16:07:49Z
39,258,561
<p>In this case <code>Image.fromarray(data)</code> returns a greyscale image. When you convert this image to a different image mode it will remain greyscale!</p> <p>Instead you have to supply color information in the form of a palette:</p> <pre><code># first part bmp = Image.open(fn_bmp) bmp_P_web = bmp.convert('P', palette=Image.WEB) web_palette = bmp_P_web.getpalette() # &lt;--- data = np.array(bmp_P_web) # second part bmp = Image.fromarray(data) bmp.putpalette(web_palette) # &lt;--- </code></pre> <p>No clue how to get the <code>web_palette</code> directly from PIL, but here's a way to generate <a href="https://github.com/python-pillow/Pillow/blob/3.3.x/libImaging/Palette.c#L65:L92" rel="nofollow">it</a> with numpy:</p> <pre><code>web_palette = np.zeros(3*256, int) web_palette[30:-90] = np.mgrid[0:256:51, 0:256:51, 0:256:51].ravel('F') </code></pre>
0
2016-08-31T20:35:10Z
[ "python", "image", "numpy", "bitmap", "python-imaging-library" ]
SQLite3 & Py: Error Inserting hyphenated string
39,254,365
<p>I am doing a simple script in which Python reads a bunch of tab-separated files, and line by line enters the first item in the line onto an sqlite3 table. The process works well, except for the actual data. The data that is being sent to me is in the format 123-4567890-1234567 (3-7-7). Instead of seeing the full string in the database, I get the arithmetical result of the three numbers in the string, i.e. -5802334. </p> <p>I've tried all kinds of combinations with quotes, such as <code>Lines[0] = "'" + Lines[0] + "'"</code> (I get an error that the last item is an unrecognized token) or <code>Lines[0] = Lines[0].replace('-','_')</code> ("OperationalError: unrecognized token: "114_6555410_7421863")<br> . </p> <p>Can you tell me what I'm doing incorrectly, and/or how to overcome this problem?</p> <p>Here is my full code:</p> <pre><code>import sqlite3, os, fnmatch, csv, datetime Homedir = os.path.expanduser('~') DBFile = Homedir + '\\Desktop\\AmazonProg\\AmazonOrders.sqlite' Rawpathin = '\\\\idc-v-lapedi01\\amtu2\\Data\\production\\reports\\' #TableName = 'OrderNums' #IdColumn = 'Orderid' #POColumn = 'PONum' sTimestamp = datetime.datetime.now().strftime('%Y%m%d%H%M') Lines = [] conn = sqlite3.connect(DBFile) c = conn.cursor() Amzfiles=fnmatch.filter(os.listdir(Rawpathin), 'order*.txt') for Files in Amzfiles: with open(Rawpathin + Files, "r") as Source: Reader = csv.reader(Source, delimiter = '\t') for Lines in Reader: if Lines[0] == 'order-id': pass elif len(Lines[0])== 19: c.execute("INSERT OR IGNORE INTO OrderNums (Orderid, PONum, Timestamp) VALUES ({idf}, {v1}, {v2})".format(idf=Lines[0], v1=Lines[0][12:], v2 = sTimestamp)) else: pass conn.commit() conn.close() </code></pre> <p>Thank you in advance. </p>
1
2016-08-31T16:09:04Z
39,254,645
<p>Bottom line is that you should not be using <code>str.format</code> for this, as it is not secure and it is also, in your case, not producing the result you are expecting.</p> <p>Fortunately, this problem was solved long ago. For you, just change your <code>c.execute</code> line to this:</p> <pre><code>c.execute("INSERT OR IGNORE INTO OrderNums (Orderid, PONum, Timestamp) VALUES (?,?,?)",(Lines[0],Lines[0][12:],sTimestamp)) </code></pre> <p>Probably a good idea to review the docs - there are some very helpful examples included:</p> <p><a href="https://docs.python.org/3.5/library/sqlite3.html" rel="nofollow">https://docs.python.org/3.5/library/sqlite3.html</a></p>
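<p>To see the difference the placeholders make, here is a minimal self-contained demonstration (in-memory database and a made-up table, not the asker's real schema) showing a hyphenated string surviving the insert intact:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE OrderNums (Orderid TEXT, PONum TEXT, Timestamp TEXT)")

order_id = "123-4567890-1234567"   # hyphenated id in the question's 3-7-7 format
c.execute("INSERT OR IGNORE INTO OrderNums (Orderid, PONum, Timestamp) VALUES (?,?,?)",
          (order_id, order_id[12:], "201608311200"))
conn.commit()

stored = c.execute("SELECT Orderid FROM OrderNums").fetchone()[0]
print(stored)   # the full string, not the arithmetic result -5802334
```

<p>With <code>str.format</code> the unquoted value was evaluated by SQLite as the arithmetic expression <code>123-4567890-1234567</code>, which is exactly where the <code>-5802334</code> came from; the placeholder version passes it through as a single text value.</p>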
1
2016-08-31T16:26:31Z
[ "python", "sqlite3", "insert-into" ]
TypeError: __init__() takes exactly 1 argument (2 given)
39,254,432
<p>I'm trying to run the monasca-persister component in ubuntu, but there is an error with a file related with kafka, my kafka server is running well.</p> <pre><code>Process Process-2: commit_timeout=kafka_conf.max_wait_time_seconds) File "/usr/local/lib/python2.7/dist-packages/monasca_common/kafka/consumer.py", line 92, in __init__ Traceback (most recent call last): self._kafka = kafka.client.KafkaClient(kafka_url) TypeError: __init__() takes exactly 1 argument (2 given) File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run self._target(*self._args, **self._kwargs) File "persister.py", line 126, in start_process persister = Persister(kafka_config, cfg.CONF.zookeeper, respository) File "/home/dpeuser/monasca-persister/monasca_persister/repositories/persister.py", line 42, in __init__ commit_timeout=kafka_conf.max_wait_time_seconds) File "/usr/local/lib/python2.7/dist-packages/monasca_common/kafka/consumer.py", line 92, in __init__ self._kafka = kafka.client.KafkaClient(kafka_url) TypeError: __init__() takes exactly 1 argument (2 given) 2016-08-31 12:05:55.245 28419 INFO __main__ [-] Received signal 17, beginning graceful shutdown. </code></pre> <p>So, I check the file of the error but I can't figure out what is wrong </p> <pre><code>class KafkaConsumer(object): def __init__(self, kafka_url, zookeeper_url, zookeeper_path, group, topic, fetch_size=1048576, repartition_callback=None, commit_callback=None, commit_timeout=30): """Init kafka_url - Kafka location zookeeper_url - Zookeeper location zookeeper_path - Zookeeper path used for partition negotiation group - Kafka consumer group topic - Kafka topic repartition_callback - Callback to run when the Kafka consumer group changes. Repartitioning takes a relatively long time so this is a good time to flush and commit any data. commit_callback - Callback to run when the commit_timeout has elapsed between commits. 
commit_timeout - Timeout between commits. """ self._kazoo_client = None self._set_partitioner = None self._repartition_callback = repartition_callback self._commit_callback = commit_callback self._commit_timeout = commit_timeout self._last_commit = 0 self._partitions = [] self._kafka_group = group self._kafka_topic = topic self._kafka_fetch_size = fetch_size self._zookeeper_url = zookeeper_url self._zookeeper_path = zookeeper_path self._kafka = kafka.client.KafkaClient(kafka_url) self._consumer = self._create_kafka_consumer() </code></pre>
0
2016-08-31T16:13:40Z
39,254,567
<p>The <code>KafkaClient</code> class doesn't take any positional arguments. Pass in configuration as <em>keyword arguments</em>:</p> <pre><code>self._kafka = kafka.client.KafkaClient(bootstrap_servers=kafka_url) </code></pre> <p>See the <a href="http://kafka-python.readthedocs.io/en/master/_modules/kafka/client_async.html#KafkaClient" rel="nofollow">source linked from the documentation</a> to see what configuration keywords are accepted and what their default values are. Many of the same configuration options are also documented for the <a href="http://kafka-python.readthedocs.io/en/master/apidoc/KafkaConsumer.html" rel="nofollow"><code>KafkaConsumer</code> class</a>.</p>
1
2016-08-31T16:21:52Z
[ "python", "ubuntu", "apache-kafka" ]
Arduino to Python: How to import readings using ser.readline() into a list with a specified starting point?
39,254,574
<p>This is quite a specific query so please bear with me.</p> <p>I have 14 ultrasonic sensors hooked to an Arduino sending live readings to the serial monitor (or Pi when I plug it in) . The readings are sent as follows, <em>with a new line between every 2 digits</em> (except Z). </p> <blockquote> <p>Z 62 61 64 63 64 67 98 70 69 71 90 XX 75 XX</p> </blockquote> <p>These measurements are in cm. "XX" implies the reading is out of the two digit range. Z has been assigned as a starting point as the pi reads the sensors very fast and repetitively, to the point of 80 readings in a second or so. So ser.readline() gives multiple samples of the same sensors</p> <p>When python reads the readings in ser.readline() it does not have a starting point. It may start at 70, XX or Z. I want to assign it into an accessible list so that: </p> <blockquote> <p>array [0] = Z <em>(always)</em></p> <p>array [1] = 62 <em>(first two digits)</em></p> <p>array [2] = 61 <em>(second two digits)</em></p> <p>..</p> <p>array [14] = XX <em>(fourteenth two digits)</em></p> </blockquote> <p>This is my code which unfortunately doesn't work as list is out of range:</p> <pre><code>import serial ser = serial.Serial('/dev/ttyACM0',115200) print ("Start") overallcount=1 #initialise 2 counters arraycount =1 array = [] #initialise 2 lists line = [] while True: while overallcount&lt;30: #read 30 random readings from Arduino ser.readline() print(str(overallcount)) #print reading number while arraycount&lt;15: #Number of readings to fill the array to be made for line in ser.readline(): if line == 'Z': #If element in ser.readline is "Z" array[0] == line #Assign first list element as Z (starting point) arraycount=arraycount+1 #Iterate through until 14 sensors are read arraycount=1 #reset counter overallcount=overallcount+1 #Iterate through 30 random Arduino readings overallcount=1 #iterate random counter </code></pre> <p>If you could please tell me what I'm doing wrong, or if there is a better method for this I'd 
really really appreciate it!</p> <p>Thank you</p>
2
2016-08-31T16:22:28Z
39,256,723
<p>How about this? Note that your checks overallcount&lt;30 and arraycount&lt;15 should really be overallcount&lt;=30 and arraycount&lt;=15.</p> <pre><code>import serial ser = serial.Serial('/dev/ttyACM0',115200) readings = [] # Array to store arrays of readings reading_id = 1 # Id of current reading random_lines_expected = 30 # NUmber of random lines num_sensors = 14 # Number of sensors def read_random(): for _ in range(random_lines_expected): ser.readline() read_random() # Read initial random lines while True: print "Reading #", reading_id reading = [] # Initialize an array to collect new reading while ser.readline().strip() != 'Z': # Keep reading lines until we find 'Z' pass reading.append('Z') # Add Z to reading array for _ in range(num_sensors): # For 14 sensors... reading.append(ser.readline().strip()) # Add their value into array readings.append(reading) # Add current reading to the array or readings reading_id += 1 # Increment reading ID #read_random() #Uncomment this if random follows each series of readings </code></pre>
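<p>The synchronisation logic here (skip lines until the <code>Z</code> marker, then take the next 14 readings) can be exercised without any hardware by substituting a fake object for <code>serial.Serial</code> -- a sketch with made-up sensor values:</p>

```python
class FakeSerial(object):
    """Minimal stand-in for serial.Serial: replays canned lines."""
    def __init__(self, lines):
        self._lines = iter(lines)

    def readline(self):
        return next(self._lines)

# The stream starts mid-frame (at '70'), as the real sensor stream might.
stream = (["70\n", "69\n", "XX\n", "Z\n"] +
          ["%02d\n" % v for v in range(60, 73)] + ["XX\n"])
ser = FakeSerial(stream)

num_sensors = 14
while ser.readline().strip() != "Z":      # sync to the 'Z' frame marker
    pass
reading = ["Z"] + [ser.readline().strip() for _ in range(num_sensors)]
print(reading)
```

<p>The resulting list has <code>reading[0] == 'Z'</code> followed by the 14 sensor values, regardless of where in the frame the stream was first opened.</p>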
0
2016-08-31T18:36:29Z
[ "python", "arrays", "raspberry-pi", "uart", "usart" ]
Fetching live data from websites with continuously updating data
39,254,581
<p>I can easily get the data when I put <strong>html = urllib.request.urlopen(req)</strong> inside a while loop, but it takes about 3 seconds to get the data. So I thought, maybe if I put that outside, I can get it faster as it won't have to open the URL everytime, but this throws up an <strong>AttributeError: 'str' object has no attribute 'read'</strong>. Maybe it doesn't recognize HTML variable name. How can I speed the processing ?</p> <pre><code>def soup(): url = "http://www.investing.com/indices/major-indices" req = urllib.request.Request( url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36', 'Connection': 'keep-alive' } ) global Ltp global html html = urllib.request.urlopen(req) while True: html = html.read().decode('utf-8') bsobj = BeautifulSoup(html, "lxml") Ltp = bsobj.find("td", {"class":"pid-169-last"} ) Ltp = (Ltp.text) Ltp = Ltp.replace(',' , ''); os.system('cls') Ltp = float(Ltp) print (Ltp, datetime.datetime.now()) soup() </code></pre>
0
2016-08-31T16:23:04Z
39,254,975
<p>If you want to fetch live data, you need to re-request the URL periodically.</p> <pre><code>html = urllib.request.urlopen(req) </code></pre> <p>This one should be in a loop.</p> <pre><code>import os import urllib import datetime from bs4 import BeautifulSoup import time def soup(): url = "http://www.investing.com/indices/major-indices" req = urllib.request.Request( url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36', 'Connection': 'keep-alive' } ) global Ltp global html while True: html = urllib.request.urlopen(req) ok = html.read().decode('utf-8') bsobj = BeautifulSoup(ok, "lxml") Ltp = bsobj.find("td", {"class":"pid-169-last"} ) Ltp = (Ltp.text) Ltp = Ltp.replace(',' , ''); os.system('cls') Ltp = float(Ltp) print (Ltp, datetime.datetime.now()) time.sleep(3) soup() </code></pre> <p>Result:</p> <pre><code>sh: cls: command not found 18351.61 2016-08-31 23:44:28.103531 sh: cls: command not found 18351.54 2016-08-31 23:44:36.257327 sh: cls: command not found 18351.61 2016-08-31 23:44:47.645328 sh: cls: command not found 18351.91 2016-08-31 23:44:55.618970 sh: cls: command not found 18352.67 2016-08-31 23:45:03.842745 </code></pre>
0
2016-08-31T16:45:49Z
[ "python", "web-scraping", "beautifulsoup" ]
Fetching live data from websites with continuously updating data
39,254,581
<p>I can easily get the data when I put <strong>html = urllib.request.urlopen(req)</strong> inside a while loop, but it takes about 3 seconds to get the data. So I thought, maybe if I put that outside, I can get it faster as it won't have to open the URL everytime, but this throws up an <strong>AttributeError: 'str' object has no attribute 'read'</strong>. Maybe it doesn't recognize HTML variable name. How can I speed the processing ?</p> <pre><code>def soup(): url = "http://www.investing.com/indices/major-indices" req = urllib.request.Request( url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36', 'Connection': 'keep-alive' } ) global Ltp global html html = urllib.request.urlopen(req) while True: html = html.read().decode('utf-8') bsobj = BeautifulSoup(html, "lxml") Ltp = bsobj.find("td", {"class":"pid-169-last"} ) Ltp = (Ltp.text) Ltp = Ltp.replace(',' , ''); os.system('cls') Ltp = float(Ltp) print (Ltp, datetime.datetime.now()) soup() </code></pre>
0
2016-08-31T16:23:04Z
39,255,358
<p>You reassign <code>html</code> to equal the UTF-8 string response, then keep calling it like it's an <code>IO</code> ... this code does not fetch new data from the server on every loop; <code>read</code> simply reads the bytes from the <code>IO</code> object, it doesn't make a new request.</p> <p>You can speed up the processing with the Requests library and utilise persistent connections (or urllib3 directly).</p> <p>Try this (you will need to <code>pip install requests</code>):</p> <pre><code>import os import datetime from requests import Request, Session from bs4 import BeautifulSoup s = Session() while True: resp = s.get("http://www.investing.com/indices/major-indices") bsobj = BeautifulSoup(resp.text, "html.parser") Ltp = bsobj.find("td", {"class":"pid-169-last"} ) Ltp = (Ltp.text) Ltp = Ltp.replace(',' , ''); os.system('cls') Ltp = float(Ltp) print (Ltp, datetime.datetime.now()) </code></pre>
0
2016-08-31T17:11:58Z
[ "python", "web-scraping", "beautifulsoup" ]
garbage collection in python
39,254,624
<p>For the below code, I am not able to clear the memory allocated for the 'root' variable even after 'del root' and gc.collect(). I understand that in Python, garbage collection runs automatically. Is there any way I can free more of it?</p> <pre><code>from __future__ import with_statement import os import sys from memory_profiler import profile import gc try: import xml.etree.cElementTree as ET except ImportError: import xml.etree.ElementTree as ET def f(): with open('file.xml') as f: output_xml_str = f.read() root = ET.fromstring(output_xml_str) del root @profile def main(): x = {} for i in xrange(10000): x[i] = i+1 del x f() for i in range(12): gc.collect() if __name__ == "__main__": sys.exit(main()) </code></pre> <p>and here's the profiling output</p> <p><a href="http://i.stack.imgur.com/wNu4E.png" rel="nofollow"><img src="http://i.stack.imgur.com/wNu4E.png" alt="enter image description here"></a></p>
2
2016-08-31T16:25:38Z
39,255,098
<p>Unfortunately you can't do much except call <code>gc.collect()</code>. You can also adopt good practices, like your use of <code>del</code>, to tell the <code>GC</code> that you don't want the object anymore, so it will be deleted when the <code>GC</code> comes around and does its job.</p> <p>You should use <code>numpy</code> for containers like arrays, as it is more efficient in terms of memory management.</p>
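<p>One detail worth spelling out (a small illustration, not taken from the asker's code): plain reference counting already frees an object as soon as its last reference disappears; what <code>gc.collect()</code> adds is breaking reference <em>cycles</em>, which reference counting alone can never reclaim:</p>

```python
import gc
import weakref

class Node(object):
    pass

gc.disable()                       # keep the demonstration deterministic

a, b = Node(), Node()
a.partner, b.partner = b, a        # a reference cycle
probe = weakref.ref(a)             # lets us observe when `a` is actually freed

del a, b                           # drop our names; the cycle keeps both alive
cycle_survived = probe() is not None

gc.collect()                       # explicit collection breaks the cycle
freed_after_collect = probe() is None
print(cycle_survived, freed_after_collect)
```

<p>So <code>del</code> plus an explicit <code>gc.collect()</code> is as far as pure Python gets. Note also that memory freed inside the interpreter may not be returned to the OS right away, so a tool like <code>memory_profiler</code> can keep reporting a high resident size even after a successful collection.</p>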
0
2016-08-31T16:53:31Z
[ "python", "python-2.7", "python-3.x" ]
Python sys.executable is empty
39,254,684
<p>I am testing out doing some shenanigans with <code>os.execve</code> and virtual environments. I am running into the problem where <code>sys.executable</code> is empty if I replace the current python process with another python subprocess.</p> <p>The example below shows what's going on (run this inside a python shell):</p> <pre><code>import os, sys print(sys.executable) # works this time os.execve("/usr/bin/python", [], {}) # drops me into a new python shell import sys # yes, again print(sys.executable) # is empty </code></pre> <p>The full output of me running the commands above in a python shell:</p> <pre><code> lptp [ tmp ]: python Python 2.7.10 (default, Oct 14 2015, 16:09:02) [GCC 5.2.1 20151010] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import os, sys &gt;&gt;&gt; print(sys.executable) # works this time /usr/bin/python &gt;&gt;&gt; os.execve("/usr/bin/python", [], {}) # drops me into a new python shell Python 2.7.10 (default, Oct 14 2015, 16:09:02) [GCC 5.2.1 20151010] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys # yes, again &gt;&gt;&gt; print(sys.executable) # is empty &gt;&gt;&gt; </code></pre> <p><code>sys.executable</code> being empty is causing me problems, most notably that <code>platform.libc_ver()</code> fails because <code>sys.executable</code> is empty:</p> <pre><code>&gt;&gt;&gt; import platform &gt;&gt;&gt; platform.libc_ver() Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib/python2.7/platform.py", line 163, in libc_ver f = open(executable,'rb') IOError: [Errno 21] Is a directory: '/tmp' </code></pre> <p>Note that the example above was run after calling <code>os.execve(...)</code></p>
0
2016-08-31T16:28:24Z
39,254,880
<p>Python relies on <code>argv[0]</code> and several environment variables to determine <code>sys.executable</code>. When you pass an empty argv and environment, Python doesn't know how to determine its path. At the very least, you should provide <code>argv[0]</code>:</p> <pre><code>os.execve('/usr/bin/python', ['/usr/bin/python'], {}) </code></pre>
1
2016-08-31T16:40:40Z
[ "python", "execve" ]
How to display images in Django 1.10
39,254,698
<p>I am not good at Django. I use Django 1.10, and now I have a problem with image display. I read that some things changed in this version, but I don't get it. Here is what I have now:</p> <p>settings.py:</p> <pre><code>BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) STATIC_DIR = os.path.join(BASE_DIR, 'static') TEMPLATE_DIR = os.path.join(BASE_DIR, '/templates') STATICFILES_DIRS = [STATIC_DIR, ] STATIC_URL = '/static/' MEDIA_ROOT = 'C:/Users/john/myprojects/goal/website/goal/media/' MEDIA_URL = '/media/' </code></pre> <p>Maybe something is unusual here and I can delete it?</p> <p>template:</p> <pre><code>{% block content%} {{ car.name }} {{ car.photo.url }} {% endblock %} </code></pre> <p>Here is the folder where I have my images:</p> <blockquote> <p>C:/Users/john/myprojects/goal/website/goal/media/images</p> </blockquote> <p>And what I see instead of the image on my website:</p> <blockquote> <p>/media/images/mynewcar.jpg</p> </blockquote> <p>What must I change?</p> <p>I pasted this into urls.py:</p> <blockquote> <p>media_dir = os.path.join(os.path.dirname(<strong>file</strong>),'media')</p> </blockquote> <p>and into urlpatterns:</p> <blockquote> <p>url(r'^media/(.*)$','django.views.static.serve',{'document_root': media_dir}),</p> </blockquote>
0
2016-08-31T16:29:09Z
39,254,728
<p>You have to put <code>&lt;img&gt;</code> tag</p> <pre><code>{% block content%} {{ car.name }} &lt;img src="{{ car.photo.url }}"&gt; {% endblock %} </code></pre> <p><strong>Update</strong></p> <pre><code>from django.conf import settings from django.conf.urls.static import static urlpatterns = [ # ... the rest of your URLconf goes here ... ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre>
4
2016-08-31T16:31:02Z
[ "python", "django" ]
Pandas - Group/bins of data per longitude/latitude
39,254,704
<p>I have a bunch of geographical data as below. I would like to group the data by bins of .2 degrees in longitude AND .2 degrees in latitude.</p> <p>While it is trivial to do for either latitude or longitude alone, what is the most appropriate way of doing this for both variables?</p> <pre><code>|User_ID |Latitude |Longitude|Datetime |u |v | |---------|----------|---------|-------------------|-----|-----| |222583401|41.4020375|2.1478710|2014-07-06 20:49:20|0.3 | 0.2 | |287280509|41.3671346|2.0793115|2013-01-30 09:25:47|0.2 | 0.7 | |329757763|41.5453577|2.1175164|2012-09-25 08:40:59|0.5 | 0.8 | |189757330|41.5844998|2.5621569|2013-10-01 11:55:20|0.4 | 0.4 | |624921653|41.5931846|2.3030671|2013-07-09 20:12:20|1.2 | 1.4 | |414673119|41.5550136|2.0965829|2014-02-24 20:15:30|2.3 | 0.6 | |414673119|41.5550136|2.0975829|2014-02-24 20:16:30|4.3 | 0.7 | |414673119|41.5550136|2.0985829|2014-02-24 20:17:30|0.6 | 0.9 | </code></pre> <p>So far what I have done is created 2 linear spaces: </p> <pre><code>lonbins = np.linspace(df.Longitude.min(), df.Longitude.max(), 10) latbins = np.linspace(df.Latitude.min(), df.Latitude.max(), 10) </code></pre> <p>Then I can groupby using: </p> <pre><code>groups = df.groupby(pd.cut(df.Longitude, lonbins)) </code></pre> <p>I could then obviously iterate over the groups to create a second level, but my goal is to do statistical analysis on each of the groups and possibly display them on a map, so this does not look very handy. </p> <pre><code>bucket = {} for name, group in groups: print name bucket[name] = group.groupby(pd.cut(group.Latitude, latbins)) </code></pre> <p>For example I would like to do a heatmap which would display the number of rows per latlon box, display the distribution of speed in each of the latlon boxes, ... </p>
1
2016-08-31T16:29:17Z
39,277,772
<p>How about this?</p> <pre><code>step = 0.2 to_bin = lambda x: np.floor(x / step) * step df["latbin"] = df.Latitude.map(to_bin) df["lonbin"] = df.Longitude.map(to_bin) groups = df.groupby(("latbin", "lonbin")) </code></pre>
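<p>A quick self-check of the binning on a few of the question's sample coordinates:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Latitude":  [41.4020375, 41.3671346, 41.5550136, 41.5550136, 41.5550136],
    "Longitude": [2.1478710,  2.0793115,  2.0965829,  2.0975829,  2.0985829],
})

step = 0.2
to_bin = lambda x: np.floor(x / step) * step
df["latbin"] = df.Latitude.map(to_bin)
df["lonbin"] = df.Longitude.map(to_bin)

sizes = df.groupby(["latbin", "lonbin"]).size()
print(sizes)
```

<p>The three readings taken seconds apart (longitudes 2.0965 to 2.0985) all land in the same 0.2-degree cell (together with the first point), so per-cell statistics such as row counts or mean speed fall out of a single <code>groupby</code>.</p>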
1
2016-09-01T17:40:15Z
[ "python", "pandas", "binning" ]
How to parse C++ protobuf binary data by python protobuf?
39,254,706
<p>I use C++ protobuf to serialize data to a string.</p> <pre><code>/***** cpp code *****/ string serialized_data; message_cpp.SerializeToString(&amp;serialized_data); </code></pre> <p><strong>Question:</strong> can I parse <code>serialized_data</code> in Python, and how?</p> <p>I've tried the following code, but it did not work.</p> <pre><code>##### python code message_python = foo.ParseFromString(serialized_data) print message_python </code></pre> <p>But I get <code>None</code> as the output of <code>print message_python</code>. I've also tried </p> <pre><code>##### python code message_python = foo.MergeFromString(serialized_data) print message_python </code></pre> <p>But I get the length of the string <code>serialized_data</code> as the output of <code>print message_python</code>, i.e. <code>message_python == len(serialized_data)</code>. This result agrees with the <a href="https://developers.google.com/protocol-buffers/docs/reference/python/google.protobuf.message.Message-class#MergeFromString" rel="nofollow">Python protobuf API</a>.</p> <p>Does this mean that I cannot parse in Python the binary data which was serialized in C++?</p> <p><strong>Update:</strong></p> <p>My <em>goal</em>: a C++ server continuously generates an image stream, and each image is sent to a Python server.</p> <p>Here is my full code:</p> <p>.proto file:</p> <pre><code>message MyImage{ repeated int32 width = 1; repeated int32 height = 2; repeated bytes image = 3; } </code></pre> <p>C++ server:</p> <pre><code>zmq::context_t context(1); zmq::socket_t socket(context, ZMQ_REP); socket.bind("tcp://localhost:5555"); MyImage message_cpp; // message_cpp.add_image(), add_width() and add_height() here.
string serialized_data; message_cpp.SerializeToString(&amp;serialized_data); int counter = 3; while (counter &gt; 0) { zmq::message_t request; socket.recv(&amp;request); std::string replyMessage = std::string(static_cast&lt;char *&gt;(request.data()), request.size()); std::cout &lt;&lt; "Recived from client: " + replyMessage &lt;&lt; std::endl; sleep(1); zmq::message_t reply(serialized_data.size()); memcpy((void*) reply.data(), serialized_data.data(), serialized_data.size()); std::cout &lt;&lt; "---length of message to client: " &lt;&lt; reply.size() &lt;&lt; std::endl; socket.send(reply); counter --; } </code></pre> <p>python client:</p> <pre><code>context = zmq.Context() socket = context.socket(zmq.REQ) port = "5555" socket.connect("tcp://localhost:%s" %port) print "Connecting to server..." foo = my_image_pb2.MyImage() for i in range(3): socket.send("hello from python") serialized_data = socket.recv() message_python = foo.ParseFromString(serialized_data) print "length of message from server:", len(serialized_data),"; type:", type(message) print "-----", message_python </code></pre> <p>This is the result:</p> <p>server:</p> <p><a href="http://i.stack.imgur.com/dqmpz.png" rel="nofollow"><img src="http://i.stack.imgur.com/dqmpz.png" alt="server"></a></p> <p>client:</p> <p><a href="http://i.stack.imgur.com/RjPG5.png" rel="nofollow"><img src="http://i.stack.imgur.com/RjPG5.png" alt="client"></a></p> <p>Why the <code>foo</code> is <code>None</code>, not a class? Any idea about how to fix it?</p>
0
2016-08-31T16:29:36Z
39,261,223
<p><code>ParseFromString</code> parses <em>into</em> the object it is called on. It doesn't return anything. Use it like:</p> <pre><code>message = MyMessage() message.ParseFromString(data) print message </code></pre>
2
2016-09-01T01:03:59Z
[ "python", "c++", "protocol-buffers" ]
NumPy ndarray broadcasting - shape (X,) vs (X, 1)
39,254,755
<p>I have a NumPy <code>ndarray</code> which is shaped (32, 1024) and holds 32 signal measurements which I would like to combine into a single 1024 element long array, with a different weight for each of the 32. I was using <code>numpy.average</code> but my weights are complex and <code>average</code> performs a normalisation of the weights based on the sum which throws off my results.</p> <p>Looking at the code for average I realised that I can accomplish the same thing by multiplying the weights by the signal array and then summing over the first axis. However when I try and multiply my (32,) weights array by the (32, 1024) signal array I get a dimension mismatch as the (32,) cannot be broadcast to (32, 1024). If I reshape the weights array to (32, 1) then everything works as expected, however this results in rather ugly code:</p> <pre><code>avg = (weights.reshape((32, 1)) * data).sum(axis=0) </code></pre> <p>Can anybody explain why NumPy will not allow my (32,) array to broadcast to (32, 1024) and/or suggest an alternative, neater way of performing the weighted average?</p>
3
2016-08-31T16:32:49Z
39,254,816
<p>On the question of why <code>(32,)</code> can't broadcast to <code>(32, 1024)</code>, it's because the shapes aren't aligned properly. To put it into a schematic, we have :</p> <pre><code>weights : 32 data : 32 x 1024 </code></pre> <p>We need to align the only axis, which is the first axis of <code>weights</code> aligned to the first axis of <code>data</code>. So, as you discovered one way is to <code>reshape</code> to <code>2D</code>, such that we would end up with a singleton dimension as the second axis. Going back to the schematic, with the modified version we would have :</p> <pre><code>weights : 32 x 1 data : 32 x 1024 </code></pre> <p>Now, that the shapes are aligned, we can perform those elementwise operations.</p> <p>We can explicitly introduce that new axis with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow"><code>None/np.newaxis</code></a> and thus replace the <code>reshaping</code>, like so -</p> <pre><code>(weights[:,None]*data).sum(0) </code></pre> <hr> <p>Let's look for neat alternatives!</p> <p>One neat and probably intuitive way would be with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> -</p> <pre><code>np.einsum('i,ij-&gt;j',weights,data) </code></pre> <p>Another way would be with matrix-multiplication using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow"><code>np.dot</code></a>, as we lose the first axis of <code>weights</code> against the first axis of <code>data</code>, like so -</p> <pre><code>weights.dot(data) </code></pre>
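A quick numerical check that the three approaches above agree, using made-up complex weights (this sketch assumes NumPy ≥ 1.17 for `default_rng`):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(32) + 1j * rng.standard_normal(32)  # complex weights
data = rng.standard_normal((32, 1024))                            # signal measurements

avg1 = (weights[:, None] * data).sum(axis=0)   # broadcasting with a new axis
avg2 = np.einsum('i,ij->j', weights, data)     # einsum
avg3 = weights.dot(data)                       # matrix multiplication

assert np.allclose(avg1, avg2) and np.allclose(avg1, avg3)
```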
3
2016-08-31T16:36:55Z
[ "python", "numpy", "multidimensional-array", "numpy-broadcasting" ]
Solving polynomials with complex coefficients using sympy
39,254,827
<p>I'm very new to python so forgive me if this has a simple fix. I'm trying to solve polynomials with complex coefficients using sympy. I find that I get a blank output if k is 'too complicated'... I'm not quite sure how to define what that means just yet. As a first example consider this fourth order polynomial with complex coefficients,</p> <pre><code>In [424]: solve(k**4+ 2*I,k) Out[424]: [-2**(1/4)*sqrt(-sqrt(2)/4 + 1/2) - 2**(1/4)*I*sqrt(sqrt(2)/4 + 1/2), 2**(1/4)*sqrt(-sqrt(2)/4 + 1/2) + 2**(1/4)*I*sqrt(sqrt(2)/4 + 1/2), -2**(1/4)*sqrt(sqrt(2)/4 + 1/2) + 2**(1/4)*I*sqrt(-sqrt(2)/4 + 1/2), 2**(1/4)*sqrt(sqrt(2)/4 + 1/2) - 2**(1/4)*I*sqrt(-sqrt(2)/4 + 1/2)] </code></pre> <p>there are no problems obtaining an output. I'm interested, though, in solving something like,</p> <pre><code>In [427]: solve(k**6 + 3*I*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1,k) Out[427]: [] </code></pre> <p>which is a lot more complicated and returns an empty list. I can, however, solve this using maple, for instance. Also, note that in removing the complex coefficients, there are no issues,</p> <pre><code>In [434]: solve(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1,k) Out[434]: [CRootOf(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1, 0), CRootOf(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1, 1), CRootOf(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1, 2), CRootOf(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1, 3), CRootOf(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1, 4), CRootOf(k**6 + 3*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1, 5)] </code></pre> <p>The elements of the resulting array can be evaluated numerically. </p> <p>So, is this a problem to do with complex coefficients? How can I solve equations like the one on line [427]?</p> <p>I have tried to solve with nsolve() and factor the roots out one by one, though I've had no luck with this method either.</p>
5
2016-08-31T16:37:24Z
40,092,834
<p>As per the <a href="https://stackoverflow.com/questions/39254827/solving-polynomials-with-complex-coefficients-using-sympy/40092834#comment65851348_39254827">comment</a> of <a href="https://stackoverflow.com/users/6212875/stelios">Stelios</a>, you can use <a href="http://docs.sympy.org/0.7.1/modules/polys/reference.html#sympy.polys.polytools.nroots" rel="nofollow">sympy.polys.polytools.nroots</a>:</p> <pre><code>&gt;&gt;&gt; from sympy import solve, nroots, I &gt;&gt;&gt; from sympy.abc import k &gt;&gt;&gt; solve(k**6 + 3*I*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1,k) [] &gt;&gt;&gt; nroots(k**6 + 3*I*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1) [-2.05972684672 - 0.930178254620881*I, -0.0901851681681614 + 0.433818575087712*I, -0.0734840785305346 - 0.434217215694685*I, 0.60726931721974 - 0.0485101438937812*I, 0.745127208196241 + 0.945593905069312*I, 0.870999568002712 - 2.96650686594768*I] </code></pre>
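If SymPy is not a hard requirement, `numpy.roots` also handles complex coefficients directly — a sketch (assuming NumPy is available; the result is numerical, like `nroots`):

```python
import numpy as np

# coefficients of k**6 + 3j*k**5 - 2*k**4 + 9*k**3 - 4*k**2 + k - 1,
# highest degree first
coeffs = [1, 3j, -2, 9, -4, 1, -1]
roots = np.roots(coeffs)

# each root should satisfy the polynomial to numerical precision
assert all(abs(np.polyval(coeffs, r)) < 1e-8 for r in roots)
```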
0
2016-10-17T18:01:38Z
[ "python", "numpy", "math", "sympy", "polynomials" ]
get a pivot from count using a 2d key
39,254,844
<p>I've got time series data that I'd like to get a count of actions that happened per day/hour combination</p> <p>I'm using <code>Counter</code> object to get the counts as I process items line by line</p> <pre><code> c = Counter() for line in file c.update([[yyyymmdd, hh]]) # or c[yyyymmdd,hh] += 1 </code></pre> <p>how can I get a pivot of <code>yyyymmdd</code> as rows, <code>hh</code> as columns, and count as values?</p> <p>Of course I could loop through the resulting count to generate the pivot, but I'm wondering if there is a function (or python trick) that can do this in a line or two</p>
1
2016-08-31T16:38:33Z
39,260,272
<p>Here is a naive and non-optimised approach for the pivot based on Counter:</p> <pre><code>In [1]: from io import StringIO # Py2 from StringIO import StringIO ...: from datetime import datetime ...: from collections import Counter ...: import random ...: In [2]: data = [] # generate some random data ...: for i in range(30): ...: d = datetime(2016, 9, random.randrange(1, 7), random.randrange(1, 12), 0, 0) ...: data.append(d.strftime("%Y%m%d,%H")) ...: s = StringIO("\n".join(data)) ...: In [3]: data[:5] Out[3]: ['20160902,05', '20160901,05', '20160902,06', '20160902,05', '20160905,01'] In [4]: c = Counter() ...: for line in s.readlines(): ...: c[tuple(line.strip().split(","))] += 1 # use tuple as key, list is not hashable ...: In [5]: c.most_common(5) Out[5]: [(('20160905', '04'), 4), (('20160902', '01'), 2), (('20160902', '05'), 2), (('20160904', '05'), 2), (('20160905', '01'), 2)] In [6]: In [6]: def print_pivot(c): ...: labels = list(c.keys()) ...: # get unique yyyymmdd &amp;&amp; hh values as index &amp; columns ...: index, columns = sorted({l[0] for l in labels}, key=int), sorted({l[1] for l in labels}, key=int) ...: header = " "*8 + " | " + " | ".join(columns) + " |" ...: print(header, "\n", "-"*len(header)) ...: # basically loop and get the (index, column) combination ...: # from the Collection and print out value or blank ...: for idx in index: ...: print(idx + " |", " | ".join(str(c.get((idx, col), "")).ljust(2) for col in columns), "|") ...: In [7]: print_pivot(c) | 01 | 02 | 03 | 04 | 05 | 06 | 09 | 10 | ---------------------------------------------------------------- 20160901 | | | 1 | | 1 | | | | 20160902 | 2 | | | 1 | 2 | 1 | | 1 | 20160903 | | 1 | | 1 | | | 1 | 1 | 20160904 | 1 | | | | 2 | 1 | | 1 | 20160905 | 2 | 1 | | 4 | 1 | 1 | | | 20160906 | | 1 | | | 1 | | | 1 | </code></pre> <hr> <p>And seriously, I mean seriously -- just use <strong>pandas</strong> for the sake of simplicity and ease of use:</p> <pre><code>In [12]: import pandas as pd # you 
may use pd.read_csv(..., **some options) In [13]: df = pd.DataFrame(data=list(c.elements()), columns=["yyyymmdd", "hh"]) In [14]: df[:5] Out[14]: yyyymmdd hh 0 20160902 01 1 20160902 01 2 20160902 05 3 20160902 05 4 20160904 01 In [15]: df['value'] = df['hh'] # make a copy so we have columns + values # df.groupby will group both columns and count the remaing "value" # unstack the result will re-arrange "hh" as columns # fillna -- fill the "na" fields as blank "" In [16]: df.groupby(['yyyymmdd', 'hh']).count().unstack().fillna('') Out[17]: value hh 01 02 03 04 05 06 09 10 yyyymmdd 20160901 1 1 20160902 2 1 2 1 1 20160903 1 1 1 1 20160904 1 2 1 1 20160905 2 1 4 1 1 20160906 1 1 1 </code></pre>
1
2016-08-31T22:55:49Z
[ "python", "python-2.7" ]
How to handle multiple inheritance in python when parent init functions accept different numbers of arguments
39,254,940
<p>I have a class that inherits from 2 other classes, one of which accepts an init argument. How do I properly initialize the parent classes?</p> <p>So far I have:</p> <pre><code>class A(object): def __init__(self, arg1): self.arg1 = arg1 class B(object): def __init__(self): pass class C(A, B): def __init__(arg1): super(C, self).__init__(arg1) </code></pre> <p>But this throws a <code>TypeError</code> as <code>B</code> doesn't receive an argument.</p> <p>In context, <code>B</code> is the proper parent of <code>C</code>, whilst <code>A</code> is a mixin that many classes in the project inherit functionality from. </p>
0
2016-08-31T16:44:01Z
39,255,396
<p>You can call the <code>__init__</code> of the parent classes manually:</p> <pre><code>class C(A, B): def __init__(self, arg1): A.__init__(self, arg1) B.__init__(self) </code></pre>
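An alternative sketch using cooperative multiple inheritance with `super()`. This is not the only way — it requires every class in the MRO to cooperate by accepting and forwarding `**kwargs`, so each `__init__` consumes its own arguments and passes the rest along:

```python
class A(object):
    def __init__(self, arg1, **kwargs):
        super(A, self).__init__(**kwargs)  # forward leftover kwargs up the MRO
        self.arg1 = arg1

class B(object):
    def __init__(self, **kwargs):
        super(B, self).__init__(**kwargs)

class C(A, B):
    def __init__(self, arg1):
        super(C, self).__init__(arg1=arg1)

c = C(5)
assert c.arg1 == 5
```

The MRO here is `C -> A -> B -> object`, so the single `super()` call in `C` runs both parent initializers in order.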
1
2016-08-31T17:14:43Z
[ "python", "multiple-inheritance" ]
Python Special Colon Inquiry
39,254,947
<pre><code>sortedWinnerIndices = winnerIndices[-numActive:][::-1] </code></pre> <p>Can someone tell me what is going on here? </p> <p><code>winnerIndices</code> is a NumPy array of 2048 ints. I read somewhere that <code>[::-1]</code> reverses the result, but I still can't figure out how this expression selects a subset of <code>winnerIndices</code>.</p>
-1
2016-08-31T16:44:27Z
39,255,019
<p>Break it up into steps. It's equivalent to:</p> <pre><code>subset = winnerIndices[-numActive:] sortedWinnerIndices = subset[::-1] </code></pre> <p>The first statement selects the last <code>numActive</code> elements in the array. The second line reverses it. So when you combine them you get the last <code>numActive</code> elements in the reverse order from the original array.</p>
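A small demonstration of those two steps (a plain list stands in for the NumPy array here; the slicing semantics are the same):

```python
winnerIndices = list(range(10))      # [0, 1, ..., 9]
numActive = 3

subset = winnerIndices[-numActive:]  # last 3 elements: [7, 8, 9]
result = subset[::-1]                # reversed: [9, 8, 7]

assert result == [9, 8, 7]
```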
1
2016-08-31T16:48:19Z
[ "python", "numpy", "operator-keyword", "colon" ]
Python Special Colon Inquiry
39,254,947
<pre><code>sortedWinnerIndices = winnerIndices[-numActive:][::-1] </code></pre> <p>Can someone tell me what is going on here? </p> <p><code>winnerIndices</code> is a NumPy array of 2048 ints. I read somewhere that <code>[::-1]</code> reverses the result, but I still can't figure out how this expression selects a subset of <code>winnerIndices</code>.</p>
-1
2016-08-31T16:44:27Z
39,255,024
<pre><code>winnerIndices[-numActive:] </code></pre> <p>Above takes a slice from <code>-numActive</code> index to the end of the original list</p> <pre><code>x[::-1] </code></pre> <p>This reverses x</p>
1
2016-08-31T16:48:30Z
[ "python", "numpy", "operator-keyword", "colon" ]
Recursion: design a recursive function called replicate_recur which will receive two arguments:
39,255,069
<p>The code below is a recursive function which takes two arguments and should return something like <code>[5,5,5]</code>.</p> <pre><code>def recursive(times, data): if not isinstance(times,int): raise ValueError("times must be an int") if not (isinstance(data,int) or isinstance(data, str)): raise ValueError("data must be an int or a string") if times &lt;= 0: return [] return [data] + recursive(times, data - 1) print(recursive(3, 5)) </code></pre> <p>Why is the code throwing a recursion error?</p>
-4
2016-08-31T16:51:20Z
39,255,244
<p>Let's try to think about how we would repeat any data item N times recursively:</p> <ul> <li>If <code>times</code> is 0 or less, we return an empty list, as per the requirements.</li> <li>If <code>times</code> is greater than 0, we return a list that contains <code>data</code> once, followed by another <code>times - 1</code> repetitions of <code>data</code>, built recursively.</li> </ul> <p>Another requirement is to check the validity of the arguments and raise a <code>ValueError</code> if they are invalid. While this can be done in the same recursive function, it carries a performance hit, as we'd do the same validation <code>times</code> times. The textbook solution is to split the function in two - an "outer" function that handles the validation and an "inner" function that handles the recursive logic.</p> <p>Put it all together, and you'll get something like this:</p> <pre><code>def replicate_recur(times, data): if not isinstance(times, int): raise ValueError("times must be an int") return real_replicate_recur(times, data) def real_replicate_recur(times, data): if times &lt;= 0: return [] return [data] + real_replicate_recur(times - 1, data) </code></pre>
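A self-contained copy of that outer/inner pair with a few quick sanity checks on the behavior described above:

```python
def replicate_recur(times, data):
    # outer function: validate once, then delegate
    if not isinstance(times, int):
        raise ValueError("times must be an int")
    return real_replicate_recur(times, data)

def real_replicate_recur(times, data):
    # inner function: pure recursion, no repeated validation
    if times <= 0:
        return []
    return [data] + real_replicate_recur(times - 1, data)

assert replicate_recur(3, 5) == [5, 5, 5]
assert replicate_recur(0, 'x') == []

# invalid 'times' is rejected up front
try:
    replicate_recur('3', 5)
    raise AssertionError('expected ValueError')
except ValueError:
    pass
```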
1
2016-08-31T17:03:43Z
[ "python", "recursion" ]
Recursion: design a recursive function called replicate_recur which will receive two arguments:
39,255,069
<p>The code below is a recursive function which takes two arguments and should return something like <code>[5,5,5]</code>.</p> <pre><code>def recursive(times, data): if not isinstance(times,int): raise ValueError("times must be an int") if not (isinstance(data,int) or isinstance(data, str)): raise ValueError("data must be an int or a string") if times &lt;= 0: return [] return [data] + recursive(times, data - 1) print(recursive(3, 5)) </code></pre> <p>Why is the code throwing a recursion error?</p>
-4
2016-08-31T16:51:20Z
39,255,418
<p>You can use a list as an accumulator to store the result of the current recursive call:</p> <pre><code>def replicate_recur(times, data, ret=None): if not ret: ret = [] ret.append(data) times -= 1 if not times: return ret return replicate_recur(times, data, ret) </code></pre>
0
2016-08-31T17:16:23Z
[ "python", "recursion" ]
Execute python 3 not python 2
39,255,099
<p>I have installed python 2 after installing python 3. Now when I execute my python file by clicking on the file (not via cmd), it runs python 2, but I want python 3. I have tried this script:</p> <pre><code>import sys print (sys.version) </code></pre> <p>The output was:</p> <pre><code>2.7.11 </code></pre> <p>Can someone help me make python 3 the default on my PC, so that when I run my file it executes Python 3? Sorry for bad English.</p>
0
2016-08-31T16:53:35Z
39,255,365
<p>If the default windows application for <code>.py</code> files is currently <code>python2</code> (i.e. <code>C:\python27\python.exe</code>) and not the new <code>py.exe</code> launcher, you can just change the default windows application for the file type. Right-click on the file -&gt; properties -&gt; click the change button for default application and change it to the python3 executable.</p> <p>If the default application for the file is the <code>py.exe</code> windows launcher, you can add a shebang line in your scripts to force the python executable, and the launcher should respect it. Add this as the first line of your file:</p> <pre><code>#!C:\python3\python.exe </code></pre> <p>If your python3 installation path is different, make sure to use that instead.</p>
1
2016-08-31T17:12:22Z
[ "python", "python-2.7", "python-3.4" ]
Execute python 3 not python 2
39,255,099
<p>I have installed python 2 after installing python 3. Now when I execute my python file by clicking on the file (not via cmd), it runs python 2, but I want python 3. I have tried this script:</p> <pre><code>import sys print (sys.version) </code></pre> <p>The output was:</p> <pre><code>2.7.11 </code></pre> <p>Can someone help me make python 3 the default on my PC, so that when I run my file it executes Python 3? Sorry for bad English.</p>
0
2016-08-31T16:53:35Z
39,255,421
<p>On <code>cmd</code> you can do <code>py -3</code> for python 3 and <code>py -2</code> for python 2, but for click-starting, the simplest way is to include a line <code>#! python2</code> or <code>#! python3</code> as the first line of the file.</p> <p>You were on the right track - this is mentioned in <a href="https://www.python.org/dev/peps/pep-0397/" rel="nofollow">PEP 397</a> in the section "Shebang line parsing".</p>
0
2016-08-31T17:16:31Z
[ "python", "python-2.7", "python-3.4" ]
Execute python 3 not python 2
39,255,099
<p>I have installed python 2 after installing python 3. Now when I execute my python file by clicking on the file (not via cmd), it runs python 2, but I want python 3. I have tried this script:</p> <pre><code>import sys print (sys.version) </code></pre> <p>The output was:</p> <pre><code>2.7.11 </code></pre> <p>Can someone help me make python 3 the default on my PC, so that when I run my file it executes Python 3? Sorry for bad English.</p>
0
2016-08-31T16:53:35Z
39,255,447
<p>Assuming you have python3 installed, you can use the <a href="https://docs.python.org/3/using/scripts.html" rel="nofollow">virtual environment mechanisms</a> built into python3 to prevent errors just like this.</p> <p>I saw in the comments that you are using Windows, so follow these steps to ensure that you are using the intended version of Python every time.</p> <p>First, navigate to your project's directory and run the command: <code>c:\Temp&gt;c:\Python35\python -m venv myenv</code>. This will create a directory <code>myenv</code> with the scripts for your virtual environment.</p> <p>Next, activate your virtual environment with the command: <code>C:\&gt; .\myenv\Scripts\activate.bat</code>. This will change your environment to what is set in the virtual environment.</p> <p>Now run the command <code>python</code> to see that python 3.5 is being run.</p> <p>To exit the virtual environment, just run <code>deactivate.bat</code>.</p>
0
2016-08-31T17:17:33Z
[ "python", "python-2.7", "python-3.4" ]
How can I store many values in 1 variable, in python?
39,255,249
<p>I am doing a 10 x 10 stratified shuffle split cross validation. As you can see in my code, at the end I get 10 results. I want the mean of these 10 results. So I added a variable: xSSSmean. But this one changes in every loop, so at the end it would have just stored the last value. So, how can I make it store the 10 values and just print me the mean of these?</p> <pre><code>############10x10 SSS################################## from sklearn.cross_validation import StratifiedShuffleSplit for i in range(10): sss = StratifiedShuffleSplit(y, 10, test_size=0.1, random_state=0) scoresSSS = cross_validation.cross_val_score(clf, x, y , cv=sss) print("Accuracy x fold SSS_RF: %0.2f (+/- %0.2f)" % (scoresSSS.mean(), scoresSSS.std()* 2)) xSSSmean = "%0.2f" % scoresSSS.mean() print (xSSSmean.mean) Accuracy x fold SSS_RF: 0.95 (+/- 0.10) Accuracy x fold SSS_RF: 0.93 (+/- 0.15) Accuracy x fold SSS_RF: 0.96 (+/- 0.09) Accuracy x fold SSS_RF: 0.93 (+/- 0.12) Accuracy x fold SSS_RF: 0.94 (+/- 0.11) Accuracy x fold SSS_RF: 0.94 (+/- 0.14) Accuracy x fold SSS_RF: 0.93 (+/- 0.15) Accuracy x fold SSS_RF: 0.94 (+/- 0.13) Accuracy x fold SSS_RF: 0.94 (+/- 0.09) Accuracy x fold SSS_RF: 0.95 (+/- 0.10) 0.95 </code></pre>
-1
2016-08-31T17:04:24Z
39,256,774
<pre><code>import numpy as np # needed for np.mean below xSSSmean = [] # create an empty list for i in range(10): sss = StratifiedShuffleSplit(y, 10, test_size=0.1, random_state=0) scoresSSS = cross_validation.cross_val_score(clf, x, y , cv=sss) print("Accuracy x fold SSS_RF: %0.2f (+/- %0.2f)" % (scoresSSS.mean(), scoresSSS.std()* 2)) # 2 decimals xSSSmean.append(scoresSSS.mean()) # fill the list with the "scoresSSS.mean" values print(np.mean(xSSSmean)) # the average of the xSSSmean list </code></pre>
2
2016-08-31T18:38:44Z
[ "python", "numpy", "scikit-learn" ]
Add a panel to bar chart in matplotlib
39,255,265
<p>How can I add a panel containing (improvement percentage for each algorithm) to this bar chart? It should, for example, be located in top right-corner of the chart. I want to do this in order to be easier for the reader to see how much improvement each algorithm has compared to its non-greedy version.</p> <p>The panel should contain this:</p> <pre><code>UB-improvement= (("UB+Greedy"- "UB")/"UB")*100 IB-improvement=(("IB+Greedy"- "IB")/"IB")*100 SVD-improvement=(("SVD+Greedy"- "SVD")/"SVD")*100 TOP_N-improvement=(("TOP_N+Greedy"- "TOP_N")/"TOP_N")*100 </code></pre> <p><a href="http://i.stack.imgur.com/qCS9R.png" rel="nofollow"><img src="http://i.stack.imgur.com/qCS9R.png" alt="enter image description here"></a></p> <p>Here is my code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt N = 1 ind = np.arange(N) # the x locations for the groups width = 0.2 # the width of the bars fig = plt.figure() ax = plt.subplot(111) ub = 5444 rects1 = ax.bar(ind, ub, width, color='red') ub_greedy = 6573 rects2 = ax.bar(ind+width, ub_greedy, width, color='darkviolet') ib = 1521 rects3 = ax.bar(ind+2*width, ib, width, color='black') ib_greedy = 5483 rects4 = ax.bar(ind+3*width, ib_greedy, width, color='blue') svd=553 rects5 = ax.bar(ind+4*width, svd, width, color='grey') svd_greedy=1225 rects6 = ax.bar(ind+width*5, svd_greedy, width, color='gold') pop=7 rects7 = ax.bar(ind+width*6, pop, width, color='brown') pop_greedy=53 rects8 = ax.bar(ind+width*7, pop_greedy, width, color='lime') ax.set_ylabel('Owner utilities') ax.set_xticklabels('') ax.set_xticks(ind+width) ax.legend((rects1[0], rects2[0], rects3[0],rects4[0],rects5[0], rects6[0], rects7[0],rects8[0]), ('UB','UB+Greedy','IB','IB+Greedy','SVD','SVD+Greedy','TOP_N','TOP_N+Greedy') ,loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=8) def autolabel(rects): for rect in rects: h = rect.get_height() ax.text(rect.get_x()+rect.get_width()/1.9, 1.01*h, '%d'%int(h), ha='center', 
va='bottom') autolabel(rects1) autolabel(rects2) autolabel(rects3) autolabel(rects4) autolabel(rects5) autolabel(rects6) autolabel(rects7) autolabel(rects8) plt.title("Total owner utilities for different algorithms",y=1.08) plt.show() </code></pre>
1
2016-08-31T17:05:05Z
39,266,707
<p>add the following code just before <code>autolabel(rects1)</code> line. (I use python 3)</p> <p>I used <a href="http://stackoverflow.com/questions/7045729/automatically-position-text-box-in-matplotlib">automatically position text box in matplotlib</a></p> <pre><code>from matplotlib.offsetbox import AnchoredText def func(greedy, non_greedy): return (greedy - non_greedy)/non_greedy*100 anchored_text = AnchoredText('UB-improvement={:.4}%\n'.format(func(ub_greedy, ub))+ 'IB-improvement={:.4}%\n'.format(func(ib_greedy, ib))+ 'SVD-improvement={:.4}%\n'.format(func(svd_greedy, svd))+ 'TOP_N-improvement={:.4}%\n'.format(func(pop_greedy, pop)), loc=1) ax.add_artist(anchored_text) </code></pre>
1
2016-09-01T08:42:52Z
[ "python", "matplotlib", "bar-chart" ]
How to run Skulpt.org on localhost / local server by XAMPP
39,255,274
<p>Hi, I want to measure the execution time of basic operations like range etc. on Skulpt (Python in the browser). I know that Skulpt.org has an interactive console online, but the thing is that I want to do it on my local machine, on my local server created by XAMPP.</p> <p>I have this simple code:</p> <pre><code>import time t0 = time.time() for i in range(1000000): a = 1 print("assignment.py", time.time()-t0) </code></pre> <p>How can I run it on a simple webpage index.html on my local server using Skulpt?</p>
1
2016-08-31T17:05:46Z
39,280,202
<p>Here is the answer.</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;meta charset="utf-8"&gt; &lt;title&gt;Skulpt&lt;/title&gt; &lt;script src="skulpt.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="skulpt-stdlib.js" type="text/javascript"&gt;&lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>To the folder with index.html you must add these 2 files:</p> <blockquote> <p>skulpt.min.js </p> <p>skulpt-stdlib.js</p> </blockquote>
0
2016-09-01T20:14:54Z
[ "python", "server", "skulpt" ]
Inserting a row into a pandas dataframe based on row value?
39,255,292
<p>I have a DataFrame:</p> <pre><code>df = pd.DataFrame({'B':[2,1,2],'C':['a','b','a']}) B C 0 2 'a' 1 1 'b' 2 2 'a' </code></pre> <p>I want to insert a row above any occurrence of 'b', that is a duplicate of that row but with 'b' changed to 'c', so I end up with this:</p> <pre><code> B C 0 2 'a' 1 1 'b' 1 1 'c' 2 2 'a' </code></pre> <p>For the life of me, I can't figure out how to do this.</p>
4
2016-08-31T17:07:10Z
39,255,882
<p>Working at NumPy level, here's a vectorized approach -</p> <pre><code>arr = df.values idx = np.flatnonzero(df.C=='b') newvals = arr[idx] newvals[:,df.columns.get_loc("C")] = 'c' out = np.insert(arr,idx+1,newvals,axis=0) df_index = np.insert(np.arange(arr.shape[0]),idx+1,idx,axis=0) df_out = pd.DataFrame(out,index=df_index) </code></pre> <p>Sample run -</p> <pre><code>In [149]: df Out[149]: B C 0 2 a 1 1 b 2 2 d 3 4 d 4 3 b 5 8 a 6 4 a 7 2 b In [150]: df_out Out[150]: 0 1 0 2 a 1 1 b 1 1 c 2 2 d 3 4 d 4 3 b 4 3 c 5 8 a 6 4 a 7 2 b 7 2 c </code></pre>
1
2016-08-31T17:42:20Z
[ "python", "pandas", "numpy", "dataframe", "insert" ]
Inserting a row into a pandas dataframe based on row value?
39,255,292
<p>I have a DataFrame:</p> <pre><code>df = pd.DataFrame({'B':[2,1,2],'C':['a','b','a']}) B C 0 2 'a' 1 1 'b' 2 2 'a' </code></pre> <p>I want to insert a row above any occurrence of 'b', that is a duplicate of that row but with 'b' changed to 'c', so I end up with this:</p> <pre><code> B C 0 2 'a' 1 1 'b' 1 1 'c' 2 2 'a' </code></pre> <p>For the life of me, I can't figure out how to do this.</p>
4
2016-08-31T17:07:10Z
39,255,896
<p>Here's one way of doing it:</p> <pre><code>duplicates = df[df['C'] == 'b'].copy() duplicates['C'] = 'c' df.append(duplicates).sort_index() </code></pre>
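For readers on newer pandas: `DataFrame.append` was removed in pandas 2.0, but the same idea works with `pd.concat` — a sketch (the stable `mergesort` keeps the original 'b' row ahead of its 'c' copy after sorting):

```python
import pandas as pd

df = pd.DataFrame({'B': [2, 1, 2], 'C': ['a', 'b', 'a']})

duplicates = df[df['C'] == 'b'].copy()
duplicates['C'] = 'c'

# mergesort is stable, so rows sharing an index keep their concat order
result = pd.concat([df, duplicates]).sort_index(kind='mergesort')
```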
3
2016-08-31T17:43:06Z
[ "python", "pandas", "numpy", "dataframe", "insert" ]
When am I supposed to use del in python?
39,255,371
<p>So I am curious: let's say I have a class as follows</p> <pre><code>class myClass: def __init__(self): parts = 1 to = 2 a = 3 whole = 4 self.contents = [parts,to,a,whole] </code></pre> <p>Is there any benefit to adding the lines </p> <pre><code>del parts del to del a del whole </code></pre> <p>inside the constructor, or will the memory for these variables be managed by the scope?</p>
3
2016-08-31T17:12:54Z
39,255,472
<p>Never, unless you are very tight on memory and doing something very bulky. If you are writing a usual program, the garbage collector should take care of everything.</p> <p>If you are writing something bulky, you should know that <code>del</code> does not delete the object, it just dereferences it, i.e. the variable no longer refers to the place in memory where the object's data is stored. After that, it still needs to be cleaned up by the garbage collector in order for the memory to be freed (which happens automatically).</p> <p>There is also a way to force the garbage collector to clean up objects - <code>gc.collect()</code>, which may be useful after you run <code>del</code>. For example:</p> <pre><code>import gc a = [i for i in range(1, 10 ** 9)] ... del a # Object [0, 1, 2, ..., 10 ** 9 - 1] is not reachable but still in memory gc.collect() # Object deleted from memory </code></pre> <hr> <p><strong>Update</strong>: a really good note from the comments. Watch out for other references to the object in memory. For example:</p> <pre><code>import gc a = [i for i in range(1, 10 ** 9)] b = a ... del a gc.collect() </code></pre> <p>After executing this block, the large list is still reachable through <code>b</code> and will not be cleaned up.</p>
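A tiny illustration of that update: `del` unbinds the name, and the object lives on as long as any other reference to it exists:

```python
a = [1, 2, 3]
b = a              # a second reference to the same list
del a              # removes the *name* 'a', not the object

try:
    a
    raise AssertionError("'a' should be gone")
except NameError:
    pass           # the name is unbound...

assert b == [1, 2, 3]   # ...but the list survives through 'b'
```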
8
2016-08-31T17:19:12Z
[ "python", "memory-management" ]
When am I supposed to use del in python?
39,255,371
<p>So I am curious: let's say I have a class as follows</p> <pre><code>class myClass: def __init__(self): parts = 1 to = 2 a = 3 whole = 4 self.contents = [parts,to,a,whole] </code></pre> <p>Is there any benefit to adding the lines </p> <pre><code>del parts del to del a del whole </code></pre> <p>inside the constructor, or will the memory for these variables be managed by the scope?</p>
3
2016-08-31T17:12:54Z
39,255,573
<p>One use is to delete specific keys from a dictionary.</p> <pre><code>&gt;&gt;&gt; food = {"apple": True, "banana": False} &gt;&gt;&gt; del food['banana'] &gt;&gt;&gt; import json &gt;&gt;&gt; json.dumps(food) '{"apple": true}' </code></pre> <p>I use it all the time for cleaning up dictionaries before converting them to JSON.</p>
-1
2016-08-31T17:24:57Z
[ "python", "memory-management" ]
SqlAlchemy commit after using update() function without using a session
39,255,405
<p>My selections work but my updates and deletes do not.</p> <pre><code>db_jb = create_engine(jb) self.jobs = Table('Job', MetaData(jb), autoload=True) # select - works ss = select(self.jobs).where( self.jobs.c.job_guid == jobGuid ).limit(1) rs = ss.execute() rows = [r for r in rs] rs.close() # update - does not work su = update(self.jobs, mysql_limit=1).where( self.jobs.c.job_guid == jobGuid ).values(jobStatus=status) # does not have an effect su.execution_options(autocommit=True) rs = su.execute() rs.close() </code></pre> <p>Pretty-printing the <code>su</code> variable shows the query is correct, but it's not being committed:</p> <pre><code>str(su.compile(dialect=None, compile_kwargs={'literal_binds': True})) </code></pre> <p>How do I commit my changes without using a Session?</p>
0
2016-08-31T17:15:28Z
39,256,728
<p>Did you try to execute <code>COMMIT</code> as a raw statement, like</p> <pre><code>db_jb.execute('COMMIT') </code></pre> <p>You could also put <code>db_jb.execute('BEGIN')</code> just before <code>ss = ...</code> to explicitly start a transaction</p>
1
2016-08-31T18:36:48Z
[ "python", "mysql", "session", "sqlalchemy", "commit" ]
Python CGI os.system causing malformed header
39,255,498
<p>I am running Apache/2.4.10 (Raspbian) and I am using python for CGI. But when I try to use os.system in simple code I get this malformed header error:</p> <pre><code>[Wed Aug 31 17:10:05.715740 2016] [cgid:error] [pid 3103:tid 1929376816] [client 192.168.0.106:59277] malformed header from script'play.cgi': Bad header: code.cgi </code></pre> <p>Here is the code from play.cgi:</p> <pre><code>#!/usr/bin/python # -*- coding: UTF-8 -*- import cgi import os print('Content-type: text/html') print('') os.system('ls') </code></pre> <p>The strange thing is that if I remove the os.system line it mysteriously starts working again. I have tried using popen instead, same problem. I have tried to obscure it in some code,change file name, different encodings and even time.sleep, none of those worked.</p> <p>The strangest thing is that it works perfectly fine in a more complicated code.</p>
1
2016-08-31T17:20:21Z
39,255,969
<p>To see why the problem happened, try launching your script from the command line:</p> <pre><code>python play.cgi &gt; output </code></pre> <p>And then open <code>output</code> with some text editor. You will notice that your <code>Content-type: text/html</code> ended up at the bottom of the file, which, of course, is wrong.</p> <p>That happens because mixing the output of <code>os.system</code> with the output of your Python code scrambles the ordering (think about it: your <code>print(...)</code> output is buffered and flushed in blocks when appropriate, while the child process started by <code>os.system()</code> writes straight to the file descriptor and flushes only its own result - which is also why you do not see the problem when output goes to a console). The solution is to flush output after printing the headers. You should change your code to</p> <pre><code>#!/usr/bin/python import cgi import os import sys print 'Content-type: text/html' print '' sys.stdout.flush() os.system('ls') </code></pre> <hr> <p>Although that is a fix, you know you are doing something terribly wrong if you need to incorporate the output of a console command into the content of a web page and use <code>os.system</code> for that. There are several things you should consider doing instead. These solutions are sorted by how recommended they are (from crappy to good):</p> <p>-Use redirection of input/output. Save the output of <code>ls</code> to a file and read it in your Python code:</p> <pre><code>os.system('ls &gt; /tmp/lsoutput') print open('/tmp/lsoutput', 'r').read() </code></pre> <p>-Use subprocess. It allows you to capture the output of a console program and use it in your Python code (example from <a href="http://stackoverflow.com/questions/6657690/python-getoutput-equivalent-in-subprocess">python getoutput() equivalent in subprocess</a>)</p> <pre><code>import subprocess process = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE) out, err = process.communicate() print(out) </code></pre> <p>-Do not call external programs at all; calling a shell command for this is a bad habit. If you need a list of files, use Python functions instead:</p> <pre><code>import os for filename in os.listdir('.'): print filename </code></pre>
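On Python 3.5+ the capture-output approach above is usually spelled with `subprocess.run` (a hedged sketch: the command here invokes the current interpreter rather than `ls` so the example stays portable; substitute the command you actually need):

```python
import subprocess
import sys

# Capture the command's stdout instead of letting it race with our own prints.
result = subprocess.run([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE)
output = result.stdout.decode().strip()
print(output)  # hello
```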
1
2016-08-31T17:47:20Z
[ "python", "apache", "cgi" ]
How to determine the dimensions of a mix of lists and arrays?
39,255,523
<p>Consider an object that is a list of arrays:</p> <p><code>a=[array([1,2,3]),array([2,5,10,20])]</code></p> <p>In its own funny way, this thing has two dimensions. The list itself is one dimension, and it contains objects which are 1D. Is there an easy way to distinguish between <code>a</code> above and a list like <code>b=[1,3,6,9,11]</code> which is simply 1D, and <code>c=1</code>, which is a 0D scalar? I want a function <code>dimens()</code> such that <code>dimens(a)</code> returns <code>2</code>, <code>dimens(b)</code> returns <code>1</code>, and <code>dimens(c)</code> returns <code>0</code>.</p> <p>I am doing it by testing the shape of the first element in the list, but I feel like there may be a cleaner approach.</p>
1
2016-08-31T17:21:57Z
39,255,636
<p>Here's my function:</p> <pre><code>from numpy import shape def dimens(x): s=shape(x) if len(s)==0: return 0 #the input was a scalar s2=shape(x[0]) if len(s2)==0: return 1 #each element of the list was a scalar else: #each element of the list was a vector or array if len(s2)==1: if len(shape(s2[0]))==0: return 2 #the first element of the top list was a 1D vector and the first element of that vector was a scalar return 3 #there were more than 2 dimensions involved </code></pre> <p>Testing:</p> <pre><code>from numpy import array a=[array([1,2,3]),array([2,5,10,20])] b=[1,3,6,9,11] c=1 d=[[a]]+[[a]] print dimens(a) 2 print dimens(b) 1 print dimens(c) 0 print dimens(d) 3 </code></pre> <p>Limitations:</p> <ul> <li>Only goes up to three dimensions (this is enough for my application)</li> <li>Only tests the first element, so it assumes that each element has the same dimensionality (which is fine for my application since my 2D case will be a list of all arrays, not a list that has a mix of arrays and scalars)</li> </ul> <p>Can anyone do better?</p>
0
2016-08-31T17:28:46Z
[ "python", "arrays", "list", "numpy" ]
How to determine the dimensions of a mix of lists and arrays?
39,255,523
<p>Consider an object that is a list of arrays:</p> <p><code>a=[array([1,2,3]),array([2,5,10,20])]</code></p> <p>In its own funny way, this thing has two dimensions. The list itself is one dimension, and it contains objects which are 1D. Is there an easy way to distinguish between <code>a</code> above and a list like <code>b=[1,3,6,9,11]</code> which is simply 1D, and <code>c=1</code>, which is a 0D scalar? I want a function <code>dimens()</code> such that <code>dimens(a)</code> returns <code>2</code>, <code>dimens(b)</code> returns <code>1</code>, and <code>dimens(c)</code> returns <code>0</code>.</p> <p>I am doing it by testing the shape of the first element in the list, but I feel like there may be a cleaner approach.</p>
1
2016-08-31T17:21:57Z
39,255,662
<p>You can use the <strong>isinstance</strong> function to distinguish between the two cases.</p> <p>Let's consider the first list:</p> <pre><code>a = [1,2,3] </code></pre> <p>Here the first element is an integer, hence <code>isinstance(a[0],int)</code> will return true.</p> <p>For the second list, <code>b = [[1,2],[3,4]]</code>, the first element is itself a list, so <code>isinstance(b[0],int)</code> will return false. You can check for the nested case with <code>isinstance(b[0],list)</code>.</p> <p>I am using lists in place of arrays, but it will work with arrays also (checking against <code>numpy.ndarray</code> in place of <code>list</code>)</p>
0
2016-08-31T17:30:10Z
[ "python", "arrays", "list", "numpy" ]
How to determine the dimensions of a mix of lists and arrays?
39,255,523
<p>Consider an object that is a list of arrays:</p> <p><code>a=[array([1,2,3]),array([2,5,10,20])]</code></p> <p>In its own funny way, this thing has two dimensions. The list itself is one dimension, and it contains objects which are 1D. Is there an easy way to distinguish between <code>a</code> above and a list like <code>b=[1,3,6,9,11]</code> which is simply 1D, and <code>c=1</code>, which is a 0D scalar? I want a function <code>dimens()</code> such that <code>dimens(a)</code> returns <code>2</code>, <code>dimens(b)</code> returns <code>1</code>, and <code>dimens(c)</code> returns <code>0</code>.</p> <p>I am doing it by testing the shape of the first element in the list, but I feel like there may be a cleaner approach.</p>
1
2016-08-31T17:21:57Z
39,255,678
<pre><code>def dimens(l): try: size = len(l) except TypeError: # not an iterable return 0 else: if size: # non-empty iterable return 1 + max(map(dimens, l)) else: # empty iterable return 1 print(dimens([[1,2,3],[2,5,10,[1,2]]])) print(dimens(np.zeros([6,5,4,3,2,1]))) </code></pre> <p><strong>Output</strong></p> <pre><code>3 6 </code></pre>
2
2016-08-31T17:30:59Z
[ "python", "arrays", "list", "numpy" ]
Conda setuptools install changes shebangs to default python install
39,255,544
<p>I'm having an issue where packages installed via setuptools to python anaconda have shebangs rewritten to the wrong location.</p> <p>I have installed python anaconda and setuptools package. I have verified that python executable points to the anaconda executable</p> <pre><code>grant@DevBox2:/opt/content-analysis$ which python /opt/anaconda2/bin/python </code></pre> <p>I need to install a custom package to my anaconda python. It is only installable via setuptools. It includes an command-line executable with the following shebang at the top:</p> <pre><code>#!/usr/bin/env python </code></pre> <p>After installing the package with the following command:</p> <pre><code>sudo python setup.py install --prefix=/opt/anaconda2 </code></pre> <p>The executable (content_analysis) appears in a path reachable location. But the shebang at the top has been replaced with the hard coded location of the default python install on the machine.</p> <pre><code>grant@DevBox2:/opt/content-analysis$ which content_analysis /opt/anaconda2/bin/content_analysis grant@DevBox2:/opt/content-analysis$ sed -n 1,2p /opt/anaconda2/bin/content_analysis #!/usr/local/bin/python </code></pre> <p>I have read the following post <a href="http://stackoverflow.com/questions/1530702/dont-touch-my-shebang">here</a> concerning setuptools' overwrite of shebangs. The post suggests that the python executable that is first in the <code>$PATH</code> <em>should</em> be the executable that setuptools uses to replace the shebang. This doesn't seem to be the case for me however.</p> <p>Note: I cannot hardcode a python executable into my <code>python setup.py build</code> command. I need a deployment solution that will work in any environment that has conda installed as the first python in the <code>$PATH</code></p>
1
2016-08-31T17:23:14Z
39,257,166
<p>I finally figured out what has been causing all my issues getting python and dependencies properly installed:</p> <p>Whenever <code>sudo</code> is invoked before an executable, Debian automatically replaces the $PATH variable with a restricted secure path (the <code>secure_path</code> setting in sudoers). Here is a demonstration:</p> <pre><code>grant@DevBox2:/opt/content-analysis$ sudo sh # echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin </code></pre> <p>versus</p> <pre><code>grant@DevBox2:/opt/content-analysis$ sh $ echo $PATH /opt/anaconda2/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games </code></pre> <p>So, when the install is run as <code>sudo python setup.py</code>, it reverts back to the default python found on that restricted path.</p> <p>See <a href="http://unix.stackexchange.com/questions/83191/how-to-make-sudo-preserve-path">this post</a> for discussion.</p>
1
2016-08-31T19:02:52Z
[ "python", "anaconda", "setuptools", "conda" ]
Django ORM query with multiple joins
39,255,696
<p>I've been trying to come up with the django version of this query and I can't seem to get it.</p> <pre><code>SELECT * FROM answer a LEFT JOIN groups g ON a.group_id = g.id LEFT JOIN group_permissions gp ON g.id = gp.group_id WHERE gp.user_id = '77777'; </code></pre> <p>I've been trying something like this:</p> <pre><code>answers = Answer.objects.filter(user_id=user_id).prefetch_related('group__grouppermissions') </code></pre> <p>Answers are associated to groups. Groups are associated to multiple users. Group permissions are associated to groups and users. </p> <p>I'm sure I'm missing some relationship somewhere. </p> <p>Models:</p> <pre><code>class User(models.Model): id = models.BigAutoField(primary_key=True) username = models.CharField(max_length=255) class Answer(models.Model): id = models.BigAutoField(primary_key=True) user = models.ForeignKey('User', models.DO_NOTHING) group = models.ForeignKey('Groups', models.DO_NOTHING) class Meta: managed = False db_table = 'answer' class Groups(models.Model): id = models.BigAutoField(primary_key=True) name = models.CharField(unique=True, max_length=255) class Meta: managed = False db_table = 'groups' class GroupPermissions(models.Model): id = models.BigAutoField(primary_key=True) group = models.ForeignKey('Groups', models.DO_NOTHING) user = models.ForeignKey('User', models.DO_NOTHING) role = models.IntegerField() class Meta: managed = False db_table = 'group_permissions' </code></pre>
0
2016-08-31T17:31:45Z
39,260,013
<p>There is an implicit reverse accessor django creates for you: <code>grouppermissions_set</code>; in query lookups it is referred to as just <code>grouppermissions</code>. You can read about it in the <a href="https://docs.djangoproject.com/en/dev/topics/db/examples/many_to_one/" rel="nofollow">docs</a> about the many-to-one relationship.</p> <p>So your query will look something like this:</p> <pre><code>answers = Answer.objects.\ filter(group__grouppermissions__user_id=77777) </code></pre> <p>Also notice that django will generate the necessary JOINs for you.</p> <p>Then, when you are confident that your query is working, you can add optimizations like <code>prefetch_related</code> with similar lookups. It depends on how deep you want to fetch from the <code>Answer</code> model.</p>
0
2016-08-31T22:31:48Z
[ "python", "mysql", "django", "join", "orm" ]
How to serve no more than one HTTP request in python?
39,255,865
<p>I have a simple HTTP server setup like <a href="http://stackoverflow.com/a/11769969">this one</a>. It processes a slow 40 second request to open and then close gates (real metallic gates). If second HTTP query is made during execution of the first one, it is placed in queue and then executed after first run. I don't need this behavior, I need to reply with error if gate open/close procedure is in progress now. How can I do that? There's a parameter 'request_queue_size' - but I'm not sure how to set it.</p>
0
2016-08-31T17:41:16Z
39,255,926
<p>You need to follow a different strategy when designing your server. Keep the state of the gate either in memory or in a database. Then, each time you receive a request to do something with the gate, check its current state, and execute the action only if the current state allows it; otherwise, return an error immediately. Also, don't forget to update the stored state once an action completes.</p>
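A minimal sketch of that strategy with an in-memory state flag (the names and the `work` callback are hypothetical; in the real server `work` would be the ~40 second open/close cycle):

```python
import threading

gate_busy = threading.Lock()  # in-memory "gate state": held while a cycle runs

def request_gate_cycle(work=lambda: None):
    # Try to take the lock without waiting; if it is held, a cycle is running.
    if not gate_busy.acquire(blocking=False):
        return False          # caller should reply with an error, e.g. HTTP 503
    try:
        work()                # open gate, wait, close gate
    finally:
        gate_busy.release()   # update the state once the action completes
    return True
```

A request arriving while another is in progress then gets `False` immediately instead of being queued.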
1
2016-08-31T17:44:52Z
[ "python" ]
How to serve no more than one HTTP request in python?
39,255,865
<p>I have a simple HTTP server setup like <a href="http://stackoverflow.com/a/11769969">this one</a>. It processes a slow 40 second request to open and then close gates (real metallic gates). If second HTTP query is made during execution of the first one, it is placed in queue and then executed after first run. I don't need this behavior, I need to reply with error if gate open/close procedure is in progress now. How can I do that? There's a parameter 'request_queue_size' - but I'm not sure how to set it.</p>
0
2016-08-31T17:41:16Z
39,256,041
<p>In general, the idea you're looking for is called request <em>throttling</em>. There are lots of implementations of this kind of thing which shouldn't be hard to dig up out there on the Web: here's one for Flask, my microframework of choice - <a href="https://flask-limiter.readthedocs.io/en/stable/" rel="nofollow">https://flask-limiter.readthedocs.io/en/stable/</a></p> <p>Quick usage example:</p> <pre><code>@app.route("/open_gate") @limiter.limit("1 per minute") def slow(): gate_robot.open_gate() return </code></pre>
0
2016-08-31T17:51:32Z
[ "python" ]
How to serve no more than one HTTP request in python?
39,255,865
<p>I have a simple HTTP server setup like <a href="http://stackoverflow.com/a/11769969">this one</a>. It processes a slow 40 second request to open and then close gates (real metallic gates). If second HTTP query is made during execution of the first one, it is placed in queue and then executed after first run. I don't need this behavior, I need to reply with error if gate open/close procedure is in progress now. How can I do that? There's a parameter 'request_queue_size' - but I'm not sure how to set it.</p>
0
2016-08-31T17:41:16Z
39,267,163
<p>'request_queue_size' seems to have no effect. The solution was to make the server multithreaded and to implement a locking variable 'busy'. Note that the busy check has to happen before the 200 status line is sent; otherwise <code>send_error(503)</code> would start a second, malformed response:</p> <pre><code>from socketserver import ThreadingMixIn from http.server import BaseHTTPRequestHandler, HTTPServer import time from gpiozero import DigitalOutputDevice import logging from time import sleep logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.INFO) hostName = '' hostPort = 9001 busy = False class ThreadingServer(ThreadingMixIn, HTTPServer): pass class MyServer(BaseHTTPRequestHandler): def do_GET(self): global busy if self.path == '/gates' and busy: self.send_error(503) # reply immediately instead of queuing return self.send_response(200) self.send_header("Content-type", "text/html") self.end_headers() self.wfile.write(bytes("Hello!&lt;br&gt;", "utf-8")) if self.path == '/gates': busy = True relay = DigitalOutputDevice(17) # Initialize GPIO 17 relay.on() logging.info('Cycle started') self.wfile.write(bytes("Cycle started&lt;br&gt;", "utf-8")) sleep(2) relay.close() sleep(20) relay = DigitalOutputDevice(17) relay.on() sleep(2) relay.close() logging.info('Cycle finished') self.wfile.write(bytes("Cycle finished", "utf-8")) busy = False myServer = ThreadingServer((hostName, hostPort), MyServer) print(time.asctime(), "Server Starts - %s:%s" % (hostName, hostPort)) try: myServer.serve_forever() except KeyboardInterrupt: pass myServer.server_close() print(time.asctime(), "Server Stops - %s:%s" % (hostName, hostPort)) </code></pre>
1
2016-09-01T09:03:01Z
[ "python" ]
Subsetting python list into positive/negative movements/trends
39,255,870
<p>Sorry for creating this question but I have been stuck on this question for a while.</p> <p>Basically I'm trying to take a list:</p> <pre><code>numbers=[1, 2, -1, -2, 4, 5] </code></pre> <p>And subset this list into a list of list that display positive/negative movements (or trends)</p> <p>The end result is to have:</p> <pre><code>subset_list = [[1, 2], [-1, -2], [4, 5]] </code></pre> <p>Basically I have been using nested while functions to append a positive movement to the subset, and when the condition is not met, the subset is appended to subset_list and then evaluates if there is a negative movement.</p> <p>I keep getting an <code>IndexError</code>, and so far <code>subset_list</code> only contains <code>[[1, 2]]</code></p> <p>Here is my code:</p> <pre><code>numbers = [1,2,-1,-2,4,5] subset = [] subset_list = [] subset.append(numbers[0]) i = 1 while i &lt; (len(numbers)): if numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i += 1 subset_list.append(subset) subset = [] i += 1 if numbers[i] &gt; numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 subset_list.append(subset) subset = [] i += 1 </code></pre> <p>Thanks!</p> <p>-Jake</p>
1
2016-08-31T17:41:39Z
39,256,098
<p>If a change in trend always goes through a sign change, you can "group" items based on their sign using <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby()</code></a>:</p> <pre><code>&gt;&gt;&gt; from itertools import groupby &gt;&gt;&gt; &gt;&gt;&gt; [list(v) for _, v in groupby(numbers, lambda x: x &lt; 0)] [[1, 2], [-1, -2], [4, 5]] </code></pre> <p>We are using <a href="http://stackoverflow.com/questions/1739514/underscore-as-variable-name-in-python"><code>_</code> as a variable name</a> for a "throw-away" variable since we don't need the grouping key in this case.</p>
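If you also want to label each run (the answer above throws the grouping key away via `_`), you can keep the key instead:

```python
from itertools import groupby

numbers = [1, 2, -1, -2, 4, 5]
# Keep the boolean key from groupby and turn it into a readable label.
runs = [("neg" if is_neg else "pos", list(group))
        for is_neg, group in groupby(numbers, lambda x: x < 0)]
print(runs)  # [('pos', [1, 2]), ('neg', [-1, -2]), ('pos', [4, 5])]
```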
1
2016-08-31T17:55:12Z
[ "python", "list", "python-2.7", "trend" ]
Subsetting python list into positive/negative movements/trends
39,255,870
<p>Sorry for creating this question but I have been stuck on this question for a while.</p> <p>Basically I'm trying to take a list:</p> <pre><code>numbers=[1, 2, -1, -2, 4, 5] </code></pre> <p>And subset this list into a list of list that display positive/negative movements (or trends)</p> <p>The end result is to have:</p> <pre><code>subset_list = [[1, 2], [-1, -2], [4, 5]] </code></pre> <p>Basically I have been using nested while functions to append a positive movement to the subset, and when the condition is not met, the subset is appended to subset_list and then evaluates if there is a negative movement.</p> <p>I keep getting an <code>IndexError</code>, and so far <code>subset_list</code> only contains <code>[[1, 2]]</code></p> <p>Here is my code:</p> <pre><code>numbers = [1,2,-1,-2,4,5] subset = [] subset_list = [] subset.append(numbers[0]) i = 1 while i &lt; (len(numbers)): if numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i += 1 subset_list.append(subset) subset = [] i += 1 if numbers[i] &gt; numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 subset_list.append(subset) subset = [] i += 1 </code></pre> <p>Thanks!</p> <p>-Jake</p>
1
2016-08-31T17:41:39Z
39,256,124
<p>In python, one tends not to use the actual indexes in a list very often. Try a for-loop instead, plus a check to see whether the trend changed or not (this treats zero as a distinct trend from positive or negative - you can pretty simply change <code>same_direction</code> to group it one way or the other):</p> <pre><code>def same_direction(num1, num2): # both numbers are positive, both are negative, or both are zero return ((num1 &gt; 0 and num2 &gt; 0) or (num1 &lt; 0 and num2 &lt; 0) or (num1 == num2)) numbers = [1, 2, -1, -2, 4, 5] result = [] # start with no sublists; the first number opens one last_number = None for num in numbers: if result and same_direction(num, last_number): # No need for a new sublist, put new number in last sublist result[-1].append(num) else: # trend changed, new sublist and put the number in it result.append([num]) last_number = num # remember the previous number for the next comparison </code></pre>
1
2016-08-31T17:56:34Z
[ "python", "list", "python-2.7", "trend" ]
Subsetting python list into positive/negative movements/trends
39,255,870
<p>Sorry for creating this question but I have been stuck on this question for a while.</p> <p>Basically I'm trying to take a list:</p> <pre><code>numbers=[1, 2, -1, -2, 4, 5] </code></pre> <p>And subset this list into a list of list that display positive/negative movements (or trends)</p> <p>The end result is to have:</p> <pre><code>subset_list = [[1, 2], [-1, -2], [4, 5]] </code></pre> <p>Basically I have been using nested while functions to append a positive movement to the subset, and when the condition is not met, the subset is appended to subset_list and then evaluates if there is a negative movement.</p> <p>I keep getting an <code>IndexError</code>, and so far <code>subset_list</code> only contains <code>[[1, 2]]</code></p> <p>Here is my code:</p> <pre><code>numbers = [1,2,-1,-2,4,5] subset = [] subset_list = [] subset.append(numbers[0]) i = 1 while i &lt; (len(numbers)): if numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i += 1 subset_list.append(subset) subset = [] i += 1 if numbers[i] &gt; numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 subset_list.append(subset) subset = [] i += 1 </code></pre> <p>Thanks!</p> <p>-Jake</p>
1
2016-08-31T17:41:39Z
39,256,140
<p>This is what I came up with. It is close to what you have but a little easier to read. I avoid having to increment the index counter <code>i</code> as much which is <em>probably</em> where you went wrong.</p> <pre><code>n= [1,2,-1,-2,4,5] out=[] i=1 tmp=[n[0]] while i &lt; len(n): if n[i] &gt;= 0 and tmp[-1] &gt;= 0: tmp.append(n[i]) elif n[i] &lt; 0 and tmp[-1] &lt; 0: tmp.append(n[i]) else: out.append(tmp) tmp = [n[i]] i = i + 1 if len(tmp) &gt; 0: # typo fix was &gt; 1 out.append(tmp) print(out) </code></pre>
0
2016-08-31T17:57:28Z
[ "python", "list", "python-2.7", "trend" ]
Subsetting python list into positive/negative movements/trends
39,255,870
<p>Sorry for creating this question but I have been stuck on this question for a while.</p> <p>Basically I'm trying to take a list:</p> <pre><code>numbers=[1, 2, -1, -2, 4, 5] </code></pre> <p>And subset this list into a list of list that display positive/negative movements (or trends)</p> <p>The end result is to have:</p> <pre><code>subset_list = [[1, 2], [-1, -2], [4, 5]] </code></pre> <p>Basically I have been using nested while functions to append a positive movement to the subset, and when the condition is not met, the subset is appended to subset_list and then evaluates if there is a negative movement.</p> <p>I keep getting an <code>IndexError</code>, and so far <code>subset_list</code> only contains <code>[[1, 2]]</code></p> <p>Here is my code:</p> <pre><code>numbers = [1,2,-1,-2,4,5] subset = [] subset_list = [] subset.append(numbers[0]) i = 1 while i &lt; (len(numbers)): if numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i += 1 subset_list.append(subset) subset = [] i += 1 if numbers[i] &gt; numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 subset_list.append(subset) subset = [] i += 1 </code></pre> <p>Thanks!</p> <p>-Jake</p>
1
2016-08-31T17:41:39Z
39,256,177
<p>Here is a way to re-write this:</p> <pre><code>numbers=[1,2,-1,-2,4,5] direction = True # positive or negative prevdirection = True res = [[numbers[0]]] for previtem, item in zip(numbers[:-1], numbers[1:]): direction = True if item - previtem &gt; 0 else False if direction != prevdirection: res.append([]) prevdirection = direction res[-1].append(item) print(res) </code></pre>
2
2016-08-31T17:59:45Z
[ "python", "list", "python-2.7", "trend" ]
Subsetting python list into positive/negative movements/trends
39,255,870
<p>Sorry for creating this question but I have been stuck on this question for a while.</p> <p>Basically I'm trying to take a list:</p> <pre><code>numbers=[1, 2, -1, -2, 4, 5] </code></pre> <p>And subset this list into a list of list that display positive/negative movements (or trends)</p> <p>The end result is to have:</p> <pre><code>subset_list = [[1, 2], [-1, -2], [4, 5]] </code></pre> <p>Basically I have been using nested while functions to append a positive movement to the subset, and when the condition is not met, the subset is appended to subset_list and then evaluates if there is a negative movement.</p> <p>I keep getting an <code>IndexError</code>, and so far <code>subset_list</code> only contains <code>[[1, 2]]</code></p> <p>Here is my code:</p> <pre><code>numbers = [1,2,-1,-2,4,5] subset = [] subset_list = [] subset.append(numbers[0]) i = 1 while i &lt; (len(numbers)): if numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i += 1 subset_list.append(subset) subset = [] i += 1 if numbers[i] &gt; numbers[i+1]: subset.append(numbers[i]) i+= 1 while subset[-1] &lt;= numbers[i] or numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i]) i+= 1 subset_list.append(subset) subset = [] i += 1 </code></pre> <p>Thanks!</p> <p>-Jake</p>
1
2016-08-31T17:41:39Z
39,256,837
<p>Here is my solution:</p> <pre><code>numbers = [1,2,-1,-2,4,5, 3, 2] subset = [] subset_list = [] subset.append(numbers[0]) forward = 1 for i in range(0, len(numbers) - 1): if ( forward == 1 ): if numbers[i] &lt;= numbers[i+1]: subset.append(numbers[i+1]) else: subset_list.append(subset) subset = [] subset.append(numbers[i+1]) forward = 0 else: if numbers[i] &gt;= numbers[i+1]: subset.append(numbers[i+1]) else: subset_list.append(subset) subset = [] subset.append(numbers[i+1]) forward = 1 subset_list.append(subset) print(*subset) print(*subset_list) </code></pre> <p>Unfortunately, I only have python 3 on my system so my answer is in python 3.</p>
0
2016-08-31T18:42:45Z
[ "python", "list", "python-2.7", "trend" ]
Unequal width binned histogram in python
39,255,916
<p>I have an array with probability values stored in it. Some values are 0. I need to plot a histogram such that there is an equal number of elements in each bin. I tried using matplotlib's hist function, but that only lets me decide the number of bins. How do I go about plotting this? (A normal plot and hist work, but they're not what is needed.)</p> <p>I have 10000 entries. Only 200 have values greater than 0, and those lie between 0.0005 and 0.2. This distribution isn't even: only one element has the value 0.2, whereas approximately 2000 have the value 0.0005. So plotting it was an issue, as the bins had to be of unequal width with an equal number of elements.</p>
0
2016-08-31T17:44:28Z
39,258,397
<p>The task does not make much sense to me, but the following code does what I understood to be the thing to do.</p> <p>I also think the last lines of the code are what you really wanted to do: using different bin-widths to improve visualization (without targeting an equal number of samples within each bin)! I used <a href="http://www.astroml.org/modules/generated/astroML.plotting.hist.html" rel="nofollow">astroml's hist with method='blocks'</a> (<a href="http://docs.astropy.org/en/stable/api/astropy.visualization.hist.html" rel="nofollow">astropy supports this too</a>).</p> <h3>Code</h3> <pre class="lang-python prettyprint-override"><code># Python 3 -&gt; beware the // operator! import numpy as np import matplotlib.pyplot as plt from astroML import plotting as amlp N_VALUES = 1000 N_BINS = 100 # Create fake data prob_array = np.random.randn(N_VALUES) prob_array /= np.max(np.abs(prob_array),axis=0) # scale a bit # Sort array prob_array = np.sort(prob_array) # Calculate bin-borders bin_borders = [np.amin(prob_array)] + [prob_array[(N_VALUES // N_BINS) * i] for i in range(1, N_BINS)] + [np.amax(prob_array)] print('SAMPLES: ', prob_array) print('BIN-BORDERS: ', bin_borders) # Plot hist counts, x, y = plt.hist(prob_array, bins=bin_borders) plt.xlim(bin_borders[0], bin_borders[-1] + 1e-2) print('COUNTS: ', counts) plt.show() # And this is, what I think, what you really want fig, (ax1, ax2) = plt.subplots(2) left_blob = np.random.randn(N_VALUES // 10) + 3 # integer division: randn needs an int size right_blob = np.random.randn(N_VALUES) + 110 both = np.hstack((left_blob, right_blob)) # data is hard to visualize with equal bin-widths ax1.hist(both) amlp.hist(both, bins='blocks', ax=ax2) plt.show() </code></pre> <h3>Output</h3> <p><a href="http://i.stack.imgur.com/eQVPK.png" rel="nofollow"><img src="http://i.stack.imgur.com/eQVPK.png" alt="enter image description here"></a> <a href="http://i.stack.imgur.com/EJ4Cn.png" rel="nofollow"><img src="http://i.stack.imgur.com/EJ4Cn.png" alt="enter image description here"></a></p>
2
2016-08-31T20:23:38Z
[ "python", "matplotlib", "histogram", "probability", "bins" ]
Call pip via Python Launcher
39,256,004
<p>I've installed Python 3.5 and 2.7 side by side on a Windows machine. Rather than messing around with my <code>PATH</code>, I'm using the Python Launcher to call different Python versions, for instance <code>py -2</code> if I want to use Python 2. My question is: how do I call the <code>pip</code> executable for that installation?</p>
3
2016-08-31T17:49:03Z
39,256,048
<p>You have to run <code>pip</code> as a module, like</p> <pre><code>py -2 -m pip install virtualenv </code></pre> <p>Actually, if you do want to mess around with python environments (like installing conflicting libraries for the same python version) you should take a look at <a href="https://virtualenv.pypa.io/en/stable/">virtualenv</a> or <a href="https://docs.python.org/3/library/venv.html">venv</a>.</p>
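To double-check which installation a given launcher tag resolves to (and therefore where `-m pip` will install into), you can print the interpreter path; run this under each tag, e.g. `py -2 check.py` and `py -3 check.py` (the script name is just an example):

```python
import sys

# `py -2 -m pip install ...` installs into the same interpreter that is
# printed here when the script is run as `py -2 check.py`.
print(sys.executable)
print("%d.%d" % sys.version_info[:2])
```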
5
2016-08-31T17:51:59Z
[ "python", "pip" ]
Applying functools.wraps to nested wrappers
39,256,072
<p>I have a base decorator that takes arguments but that also is built upon by other decorators. I can't seem to figure where to put the functools.wraps in order to preserve the full signature of the decorated function.</p> <pre><code>import inspect from functools import wraps # Base decorator def _process_arguments(func, *indices): """ Apply the pre-processing function to each selected parameter """ @wraps(func) def wrap(f): @wraps(f) def wrapped_f(*args): params = inspect.getargspec(f)[0] args_out = list() for ind, arg in enumerate(args): if ind in indices: args_out.append(func(arg)) else: args_out.append(arg) return f(*args_out) return wrapped_f return wrap # Function that will be used to process each parameter def double(x): return x * 2 # Decorator called by end user def double_selected(*args): return _process_arguments(double, *args) # End-user's function @double_selected(2, 0) def say_hello(a1, a2, a3): """ doc string for say_hello """ print('{} {} {}'.format(a1, a2, a3)) say_hello('say', 'hello', 'arguments') </code></pre> <p>The result of this code should be <em>and is</em>:</p> <pre><code>saysay hello argumentsarguments </code></pre> <p>However, running help on say_hello gives me:</p> <pre><code>say_hello(*args, **kwargs) doc string for say_hello </code></pre> <p>Everything is preserved except the parameter names.</p> <p>It seems like I just need to add another @wraps() somewhere, but where?</p>
0
2016-08-31T17:53:05Z
39,257,170
<p>I experimented with this: </p> <pre><code>&gt;&gt;&gt; from functools import wraps
&gt;&gt;&gt; def x(): print(1)
...
&gt;&gt;&gt; @wraps(x)
... def xyz(a,b,c): return x
&gt;&gt;&gt; xyz.__name__
'x'
&gt;&gt;&gt; help(xyz)
Help on function x in module __main__:

x(a, b, c)
</code></pre> <p>This has nothing to do with <code>wraps</code> itself; it is a consequence of how <code>help</code> works. <code>help</code> inspects your objects to gather its information, including <code>__doc__</code> and other attributes, which is why you see this behavior even though your wrapped function has a different argument list. <code>wraps</code> does not update the argument list; what it really copies is this tuple of attributes, plus the <code>__dict__</code>, which is technically the object's namespace: </p> <pre><code>WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__',
                       '__annotations__')
WRAPPER_UPDATES = ('__dict__',)
</code></pre> <p>If you aren't sure how <code>wraps</code> works, it will probably help to read its source code in the standard library: <code>functools.py</code>.</p> <blockquote> <p>It seems like I just need to add another @wraps() somewhere, but where?</p> </blockquote> <p>No, you don't need to add another <code>wraps</code> in your code; as stated above, <code>help</code> works by inspecting your objects.
The function's arguments are associated with code objects (<code>__code__</code>), because that is where a function's arguments are stored/represented. <code>wraps</code> has no way to update the arguments of the wrapper to match those of the wrapped function (continuing with the above example): </p> <pre><code>&gt;&gt;&gt; xyz.__code__.co_varnames
&gt;&gt;&gt; xyz.__code__.co_varnames = x.__code__.co_varnames
AttributeError: readonly attribute
</code></pre> <p>If <code>help</code> displayed that function <code>xyz</code> has the argument list <code>()</code> instead of <code>(a, b, c)</code>, that would clearly be wrong! And the same applies to <code>wraps</code>: changing the argument list of the wrapper to that of the wrapped function would be cumbersome, so this should not be a concern at all. </p> <pre><code>&gt;&gt;&gt; @wraps(x, ("__code__",))
... def xyz(a,b,c): pass
...
&gt;&gt;&gt; help(xyz)
Help on function xyz in module __main__:

xyz()
</code></pre> <p>But <code>xyz()</code> returns <code>x()</code>: </p> <pre><code>&gt;&gt;&gt; xyz()
1
</code></pre> <p>For further reference, take a look at this question or the Python documentation:</p> <p><a href="http://stackoverflow.com/questions/308999/what-does-functools-wraps-do">What does functools.wraps do?</a></p>
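<p>A side note worth checking against your Python version (the question is tagged python-2.7, but this applies to Python 3): <code>functools.wraps</code> also sets <code>__wrapped__</code> on the wrapper, and <code>inspect.signature</code> follows it, so the original parameter names are still recoverable even though the wrapper's own <code>__code__</code> says <code>(*args, **kwargs)</code>. A minimal sketch:</p>

```python
# Sketch (Python 3.4+): wraps() sets __wrapped__, and inspect.signature()
# follows it, so the wrapped function's parameter names are recoverable
# even though the wrapper's own __code__ still says (*args, **kwargs).
import inspect
from functools import wraps

def deco(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@deco
def greet(name, excited=False):
    """Say hi."""
    return 'hi ' + name

print(inspect.signature(greet))  # (name, excited=False)
print(greet('Ada'))              # hi Ada
```

<p>In other words, on Python 3 the "lost" signature is only cosmetic for tools that use <code>inspect.signature</code>; the limitation discussed above mainly bites on Python 2.</p>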
0
2016-08-31T19:03:02Z
[ "python", "python-2.7", "python-decorators", "functools" ]
Applying functools.wraps to nested wrappers
39,256,072
<p>I have a base decorator that takes arguments but that also is built upon by other decorators. I can't seem to figure out where to put <code>functools.wraps</code> in order to preserve the full signature of the decorated function.</p> <pre><code>import inspect
from functools import wraps

# Base decorator
def _process_arguments(func, *indices):
    """ Apply the pre-processing function to each selected parameter """
    @wraps(func)
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            params = inspect.getargspec(f)[0]
            args_out = list()
            for ind, arg in enumerate(args):
                if ind in indices:
                    args_out.append(func(arg))
                else:
                    args_out.append(arg)
            return f(*args_out)
        return wrapped_f
    return wrap

# Function that will be used to process each parameter
def double(x):
    return x * 2

# Decorator called by end user
def double_selected(*args):
    return _process_arguments(double, *args)

# End-user's function
@double_selected(2, 0)
def say_hello(a1, a2, a3):
    """ doc string for say_hello """
    print('{} {} {}'.format(a1, a2, a3))

say_hello('say', 'hello', 'arguments')
</code></pre> <p>The result of this code should be <em>and is</em>:</p> <pre><code>saysay hello argumentsarguments
</code></pre> <p>However, running help on say_hello gives me:</p> <pre><code>say_hello(*args, **kwargs)
    doc string for say_hello
</code></pre> <p>Everything is preserved except the parameter names.</p> <p>It seems like I just need to add another @wraps() somewhere, but where?</p>
0
2016-08-31T17:53:05Z
39,259,648
<p>direprobs was correct in that no amount of <code>functools.wraps</code> would get me there. bravosierra99 pointed me to somewhat related examples. However, I couldn't find a single example of signature preservation on nested decorators in which the outer decorator takes arguments.</p> <p>The <a href="http://www.artima.com/forums/flat.jsp?forum=106&amp;thread=240845" rel="nofollow">comments on Bruce Eckel's post</a> on decorators with arguments gave me the biggest hints in achieving my desired result.</p> <p>The key was in removing the middle function from within my <code>_process_arguments</code> function and placing its parameter in the next, nested function, with the help of the third-party <code>decorator</code> package. It kind of makes sense to me now...but it works:</p> <pre><code>import inspect
from decorator import decorator  # third-party "decorator" package

# Base decorator
def _process_arguments(func, *indices):
    """ Apply the pre-processing function to each selected parameter """
    @decorator
    def wrapped_f(f, *args):
        params = inspect.getargspec(f)[0]
        args_out = list()
        for ind, arg in enumerate(args):
            if ind in indices:
                args_out.append(func(arg))
            else:
                args_out.append(arg)
        return f(*args_out)
    return wrapped_f

# Function that will be used to process each parameter
def double(x):
    return x * 2

# Decorator called by end user
def double_selected(*args):
    return _process_arguments(double, *args)

# End-user's function
@double_selected(2, 0)
def say_hello(a1, a2, a3):
    """ doc string for say_hello """
    print('{} {} {}'.format(a1, a2, a3))

say_hello('say', 'hello', 'arguments')
print(help(say_hello))
</code></pre> <p>And the result:</p> <pre><code>saysay hello argumentsarguments
Help on function say_hello in module __main__:

say_hello(a1, a2, a3)
    doc string for say_hello
</code></pre>
0
2016-08-31T21:59:02Z
[ "python", "python-2.7", "python-decorators", "functools" ]
Can Python print only selected fields from SQLite table to a file
39,256,106
<p>Please forgive me if my terminology is not right. I have this:</p> <pre><code>CREATE TABLE table1 (field1 TEXT, field2 TEXT, field3 TEXT);
</code></pre> <p>I want to print only information from field1 and field3 for each row into a text file. What I've tried is:</p> <pre><code>e = open("export.txt", "w+")
sqlF1 = """ SELECT field1 FROM table1 """
c.execute(sqlF1)
for row in e:
    # print c.fetchall
    e.write('%s\n' % row)
e.close()
</code></pre> <p>The operation finishes without error but the text file is still empty. I did a</p> <pre><code>SELECT field1 FROM table1
</code></pre> <p>in the sqlite shell and the data is there.</p> <p>The end result is that I want to eventually have multiple fields piped to one line per row in a file. I also want to put some text at the beginning and end of the values of each field I choose to pull this way.</p> <p>Any advice or direction is helpful. This operation doesn't have to be done with Python; I mean if I can figure out how to do it in conjunction with SQLite commands that would be OK too.</p> <p>Thanks!!</p> <p>Edit: I think a variation on this might be what I'm looking for, yes?: <a href="http://stackoverflow.com/questions/10522830/how-to-export-sqlite-to-csv-in-python-without-being-formatted-as-a-list">How to export sqlite to CSV in Python without being formatted as a list?</a></p>
0
2016-08-31T17:55:34Z
39,257,477
<p>You are iterating over the output file <code>e</code> instead of the query results, so nothing is ever written. Iterate over the cursor returned by <code>execute()</code>; note that each row comes back as a tuple, so index into it to get the field value:</p> <pre><code>for row in c.execute(sqlF1):
    e.write('%s\n' % row[0])
</code></pre> <p>See the examples in <a href="https://docs.python.org/2/library/sqlite3.html" rel="nofollow">the docs</a>.</p>
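<p>Since the asker also wants several fields per line with surrounding text, here is a self-contained sketch of the whole export. The table contents and the exact "start ... end" formatting are made up for demonstration; substitute your own:</p>

```python
# Self-contained sketch: select only field1 and field3, iterate over the
# cursor, and wrap each value in some surrounding text. The in-memory
# database and sample rows are placeholders for demonstration.
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE table1 (field1 TEXT, field2 TEXT, field3 TEXT)")
c.execute("INSERT INTO table1 VALUES ('a1', 'b1', 'c1')")
c.execute("INSERT INTO table1 VALUES ('a2', 'b2', 'c2')")

lines = []
for field1, field3 in c.execute("SELECT field1, field3 FROM table1 ORDER BY field1"):
    # each row is a tuple; unpack it and format however you like
    lines.append('start %s | %s end' % (field1, field3))

print(lines)  # ['start a1 | c1 end', 'start a2 | c2 end']

# writing to a file would then be, e.g.:
# with open('export.txt', 'w') as e:
#     e.write('\n'.join(lines))
```

<p>Unpacking the row tuple directly in the <code>for</code> statement avoids the <code>row[0]</code> indexing and makes the field names visible at the point of use.</p>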
1
2016-08-31T19:21:42Z
[ "python", "sqlite", "io" ]
Why the value read from properties file in python doesn't work
39,256,121
<p>I am brand new to Python. I have a Python script which is accessed by another project to read some data.</p> <p>Past: When I needed to provide the version information for the other project from this Python script, I had a class variable which was hard coded, and it was working fine till now.</p> <p>Now: I wanted to change the way of providing the version information, so I chose to read a properties file instead of hard coding. I have added these lines in my script.</p> <p>Previously it was just this:</p> <pre><code>class Myclass:
    version = '1.2'
</code></pre> <p>Now I changed it to:</p> <pre><code>class Myclass:
    config = ConfigParser.RawConfigParser()
    config.read('version.properties')
    version = config.get('global', 'version')
    print version
</code></pre> <p>My version.properties file looks like</p> <pre><code>[global]
version= 1.2
</code></pre> <p>When I run this Python file to print the version, it prints to the console just fine. But when the variable is accessed by the other project, the version read from the properties file is not being read (the hard-coded value works just fine).</p> <p>What could be the difference? Why is the value read from the properties file not reflected?</p>
0
2016-08-31T17:56:25Z
39,256,449
<p><code>config.read('version.properties')</code> resolves the file name relative to the current working directory, which will usually be different when your script is imported by another project. If the <code>version.properties</code> file will be in the same directory as your code, build an absolute path from the module's own location instead:</p> <pre><code>import os

config.read(os.path.join(os.path.dirname(os.path.abspath(__file__)),
                         'version.properties'))
</code></pre>
1
2016-08-31T18:18:19Z
[ "python" ]
Checking WHAT is missing from a list when comparing it to another list python
39,256,150
<p>I am looking to see what is in list B that is missing from list A.</p> <p>If I have the following lists of strings:</p> <p><code>A = ['4-5', '3-6', '3-3', '9-0']</code> and <code>B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9,0']</code> and want to check what is missing from A that is in list B. </p> <p>A = [<strong>4-5</strong>, <strong>3-6</strong>, <strong>3-3</strong>, <strong>9-0</strong>] B = [4-4, <strong>4-5</strong>, <strong>3-3</strong>, 6-9, 5-5, <strong>3-6</strong>, 3-2, 6-6, 9-9, <strong>9,0</strong>]</p> <p>So from the example above, I would want it to output <code>['4-4', '6-9', '5-5', '3-2', '6-6', '9-9']</code>.</p> <p>If I sort both the lists, what's the best way of going about it?</p> <p>Thanks!</p> <p>I thought about doing something like:</p> <pre><code>unique = []
for n in A:
    if n not in B:
        unique.append(B)
print(unique)
</code></pre> <p>Does this work? It's giving me a very odd output of a list in a list of two strings.</p>
1
2016-08-31T17:58:14Z
39,256,243
<p>I'm not sure what <code>4-5</code> is meant to be: a string or an arithmetic expression?</p> <p>Anyway, assuming it is whatever you meant it to be, you can do as follows:</p> <pre><code>A = [4-5, 3-6, 3-3, 9-0]
B = [4-4, 4-5, 3-3, 6-9, 5-5, 3-2, 6-6, 9-9, 9, 0]

a = set(A)
b = set(B)
print b - a
</code></pre>
5
2016-08-31T18:04:08Z
[ "python", "list", "sorting" ]
Checking WHAT is missing from a list when comparing it to another list python
39,256,150
<p>I am looking to see what is in list B that is missing from list A.</p> <p>If I have the following lists of strings:</p> <p><code>A = ['4-5', '3-6', '3-3', '9-0']</code> and <code>B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9,0']</code> and want to check what is missing from A that is in list B. </p> <p>A = [<strong>4-5</strong>, <strong>3-6</strong>, <strong>3-3</strong>, <strong>9-0</strong>] B = [4-4, <strong>4-5</strong>, <strong>3-3</strong>, 6-9, 5-5, <strong>3-6</strong>, 3-2, 6-6, 9-9, <strong>9,0</strong>]</p> <p>So from the example above, I would want it to output <code>['4-4', '6-9', '5-5', '3-2', '6-6', '9-9']</code>.</p> <p>If I sort both the lists, what's the best way of going about it?</p> <p>Thanks!</p> <p>I thought about doing something like:</p> <pre><code>unique = []
for n in A:
    if n not in B:
        unique.append(B)
print(unique)
</code></pre> <p>Does this work? It's giving me a very odd output of a list in a list of two strings.</p>
1
2016-08-31T17:58:14Z
39,256,244
<p>Don't bother sorting. Use sets instead and calculate the difference:</p> <pre><code>A = ['4-5', '3-6', '3-3', '9-0']
B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9', '0']

print(set(B) - set(A))
&gt;&gt; {'0', '6-9', '9-9', '5-5', '3-2', '6-6', '4-4', '9'}
</code></pre> <p>Your required output was <code>[4-4, 6-9, 5-5, 3-2, 6-6, 9-9]</code>. You either missed a few, or you meant to treat <code>'9'</code> as <code>'9-0'</code>.</p>
1
2016-08-31T18:04:16Z
[ "python", "list", "sorting" ]
Checking WHAT is missing from a list when comparing it to another list python
39,256,150
<p>I am looking to see what is in list B that is missing from list A.</p> <p>If I have the following lists of strings:</p> <p><code>A = ['4-5', '3-6', '3-3', '9-0']</code> and <code>B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9,0']</code> and want to check what is missing from A that is in list B. </p> <p>A = [<strong>4-5</strong>, <strong>3-6</strong>, <strong>3-3</strong>, <strong>9-0</strong>] B = [4-4, <strong>4-5</strong>, <strong>3-3</strong>, 6-9, 5-5, <strong>3-6</strong>, 3-2, 6-6, 9-9, <strong>9,0</strong>]</p> <p>So from the example above, I would want it to output <code>['4-4', '6-9', '5-5', '3-2', '6-6', '9-9']</code>.</p> <p>If I sort both the lists, what's the best way of going about it?</p> <p>Thanks!</p> <p>I thought about doing something like:</p> <pre><code>unique = []
for n in A:
    if n not in B:
        unique.append(B)
print(unique)
</code></pre> <p>Does this work? It's giving me a very odd output of a list in a list of two strings.</p>
1
2016-08-31T17:58:14Z
39,256,268
<p>You could do this:</p> <pre><code>&gt;&gt;&gt; A = ['4-5','3-6','3-3','9-0']
&gt;&gt;&gt; B = ['4-4','4-5','3-3','6-9','5-5','3-2','6-6','9-9','9','0']
&gt;&gt;&gt; set(B) - set(A)
set(['5-5', '4-4', '9-9', '3-2', '0', '6-9', '9', '6-6'])
&gt;&gt;&gt;
</code></pre>
0
2016-08-31T18:05:47Z
[ "python", "list", "sorting" ]
Checking WHAT is missing from a list when comparing it to another list python
39,256,150
<p>I am looking to see what is in list B that is missing from list A.</p> <p>If I have the following lists of strings:</p> <p><code>A = ['4-5', '3-6', '3-3', '9-0']</code> and <code>B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9,0']</code> and want to check what is missing from A that is in list B. </p> <p>A = [<strong>4-5</strong>, <strong>3-6</strong>, <strong>3-3</strong>, <strong>9-0</strong>] B = [4-4, <strong>4-5</strong>, <strong>3-3</strong>, 6-9, 5-5, <strong>3-6</strong>, 3-2, 6-6, 9-9, <strong>9,0</strong>]</p> <p>So from the example above, I would want it to output <code>['4-4', '6-9', '5-5', '3-2', '6-6', '9-9']</code>.</p> <p>If I sort both the lists, what's the best way of going about it?</p> <p>Thanks!</p> <p>I thought about doing something like:</p> <pre><code>unique = []
for n in A:
    if n not in B:
        unique.append(B)
print(unique)
</code></pre> <p>Does this work? It's giving me a very odd output of a list in a list of two strings.</p>
1
2016-08-31T17:58:14Z
39,256,293
<p>This is simple in a list comprehension too. Sorting the inputs isn't really necessary, but I've sorted the output:</p> <pre><code>A = ["4-5", '3-6', '3-3', '9-0']
B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9', '0']

new = sorted([x for x in B if x not in A])
</code></pre> <p>Though note your expected output doesn't include the last two entries, <code>'9'</code> and <code>'0'</code> (or <code>'9,0'</code>, depending on interpretation).</p>
0
2016-08-31T18:07:19Z
[ "python", "list", "sorting" ]
Checking WHAT is missing from a list when comparing it to another list python
39,256,150
<p>I am looking to see what is in list B that is missing from list A.</p> <p>If I have the following lists of strings:</p> <p><code>A = ['4-5', '3-6', '3-3', '9-0']</code> and <code>B = ['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9,0']</code> and want to check what is missing from A that is in list B. </p> <p>A = [<strong>4-5</strong>, <strong>3-6</strong>, <strong>3-3</strong>, <strong>9-0</strong>] B = [4-4, <strong>4-5</strong>, <strong>3-3</strong>, 6-9, 5-5, <strong>3-6</strong>, 3-2, 6-6, 9-9, <strong>9,0</strong>]</p> <p>So from the example above, I would want it to output <code>['4-4', '6-9', '5-5', '3-2', '6-6', '9-9']</code>.</p> <p>If I sort both the lists, what's the best way of going about it?</p> <p>Thanks!</p> <p>I thought about doing something like:</p> <pre><code>unique = []
for n in A:
    if n not in B:
        unique.append(B)
print(unique)
</code></pre> <p>Does this work? It's giving me a very odd output of a list in a list of two strings.</p>
1
2016-08-31T17:58:14Z
39,256,793
<p>In most situations, the best way is to ignore the fact the data is sorted and just do <code>set(B) - set(A)</code>. Or <code>list(set(B) - set(A))</code> if you definitely need a list for the result.</p> <p>However, that has a moderately large space overhead (approximately the sum of the sizes of the two input lists). Normally this is nothing to worry about, but if the data is very large (uses more than half your available memory) then you might find you need to reduce this. You could first try:</p> <pre><code>A_set = set(A) result = [b for b in B if b not in A_set] </code></pre> <p>This avoids constructing a set for B or a set for the difference, so the overhead is approximately the size of A.</p> <p>For your interest, or for situations where resources are very tightly constrained, you might like to know that it's possible to do this with only constant-space overhead supposing that <code>A</code> and <code>B</code> are already sorted (which in your example they are not, but you promise they will be). The trick is to notice that as you look for each element of B in A:</p> <ul> <li>you can search through A in order, and stop searching when you find an element no smaller than the one you're looking for. You won't find it after that because A is sorted.</li> <li>if you found the previous element of B, then you will not find the next one <em>before</em> the place where you found the previous element of B in A, because both lists are sorted.</li> <li>if you did not find the previous element of B, then you likewise will not find it before the place where you stopped looking.</li> <li>therefore at each step we can resume searching from where we left off last time.</li> </ul> <p>Putting it all together, this means we can make a single simultaneous pass over each of the inputs <code>A</code> and <code>B</code>, and in the process of this single pass decide for each element of <code>B</code> whether or not it is in <code>A</code>. 
If you're familiar with merge sort (or merging), then be aware that the process is similar to a merge, but the output is different. Not only is the overhead zero, but for advanced uses we don't even need lists, we can do it all in generators so that the input needn't necessarily even all be in memory at once. But sticking with lists to illustrate it (note the <code>import itertools</code>, which the flattening step below relies on):</p> <pre><code>import itertools

def find_element(elt, arr, idx):
    while idx &lt; len(arr) and elt &gt; arr[idx]:  # haven't found it yet
        idx += 1
    # have found the place where it would go, but is it here?
    if idx &lt; len(arr) and elt == arr[idx]:
        # it's here
        return True, idx + 1
    # not found
    return False, idx

a_idx = 0
b_idx = 0
results = []
while a_idx &lt; len(A) and b_idx &lt; len(B):
    found, a_idx = find_element(B[b_idx], A, a_idx)
    if not found:
        results.append(B[b_idx])
    b_idx += 1

# if there's anything left in B to check then it's definitely not in A
results.extend(itertools.islice(B, b_idx, None))
</code></pre> <p>Finally, we could potentially improve the speed of <code>find_element</code>, for large arrays, using a binary search instead of a linear search. I leave this as an exercise for the reader :-)</p>
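<p>The binary-search variant left above as an exercise can be sketched with the standard library's <code>bisect</code> module. As in the answer, this assumes both input lists are already sorted:</p>

```python
# Sketch of the "exercise for the reader": replace the linear scan with a
# binary search via bisect, still resuming each search where the last one
# stopped. Both input lists are assumed to be sorted.
import bisect

def diff_sorted(A, B):
    """Return the elements of sorted list B that are not in sorted list A."""
    results = []
    a_idx = 0  # never search before this index in A again
    for elt in B:
        # binary search for elt in A[a_idx:]
        pos = bisect.bisect_left(A, elt, a_idx)
        if pos < len(A) and A[pos] == elt:
            a_idx = pos + 1  # found: later B elements can't sort before here
        else:
            a_idx = pos      # not found: remember where we stopped
            results.append(elt)
    return results

A = sorted(['4-5', '3-6', '3-3', '9-0'])
B = sorted(['4-4', '4-5', '3-3', '6-9', '5-5', '3-2', '6-6', '9-9', '9-0'])
print(diff_sorted(A, B))  # ['3-2', '4-4', '5-5', '6-6', '6-9', '9-9']
```

<p>Each lookup is now O(log n) over the remaining slice of <code>A</code>, while the resume-from-last-position trick is preserved through the <code>lo</code> argument of <code>bisect_left</code>.</p>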
0
2016-08-31T18:39:46Z
[ "python", "list", "sorting" ]
Trouble parsing XML with python
39,256,162
<p>I have parsed an XML file with BeautifulSoup in Python and I am having trouble extracting the data out of it. An example of the structure of the XML is below:</p> <pre><code>&lt;Products page="0" pages="-1" records="27"&gt;
  &lt;Product id="ABC001"&gt;
    &lt;Name&gt;This product name&lt;/Name&gt;
    &lt;Cur&gt;USD&lt;/Cur&gt;
    &lt;Tag&gt;Text&lt;/Tag&gt;
    &lt;Classes&gt;
      &lt;Class id="USD"&gt;
        &lt;ClassCur&gt;USD&lt;/ClassCur&gt;
        &lt;Identifier&gt;XYZ123456&lt;/Identifier&gt;
      &lt;/Class&gt;
    &lt;/Classes&gt;
  &lt;/Product&gt;
  &lt;Product id="XYZ002"&gt;
    &lt;Name&gt;That product name&lt;/Name&gt;
    &lt;Cur&gt;EUR&lt;/Cur&gt;
    &lt;Tag&gt;More Text&lt;/Tag&gt;
    &lt;Classes&gt;
      &lt;Class id="EUR"&gt;
        &lt;ClassCur&gt;EUR&lt;/ClassCur&gt;
        &lt;Identifier&gt;VDSHG123456&lt;/Identifier&gt;
      &lt;/Class&gt;
    &lt;/Classes&gt;
  &lt;/Product&gt;
&lt;/Products&gt;
</code></pre> <p>The first thing I have been trying to accomplish, but have so far failed to do, is to extract all of the Product and Class ids: <code>"ABC001"</code>, <code>"XYZ002"</code>, etc.</p> <p>What I have tried is </p> <pre><code>products = soup.find_all("Product")

for p in products:
    print(p.find("name"))  # gets the name tag
    print(p.find("cur"))   # gets the cur tag
    # ...etc
</code></pre> <p>However, I can't figure out how to access <code>id</code> within <code>Product</code>. For example, <code>p.find("product")</code> returns <code>None</code>.</p> <p>Note that while I am using bs4 I don't <em>have</em> to - it's just that I have done a lot of web scraping with Python + bs4 and have found bs4 to be useful in navigating through HTML, so assumed it would be the ideal way of handling XML.</p>
0
2016-08-31T17:58:56Z
39,256,414
<p><code>id</code> is an attribute of the <code>Product</code> tag, not a child element, so you access it like a dictionary key on the tag:</p> <pre><code>p['id']
</code></pre>
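<p>Since the question notes that bs4 isn't required, the same extraction also works with the standard library's <code>xml.etree.ElementTree</code>, where attributes live in each element's <code>.attrib</code> mapping (or via <code>.get()</code>). A sketch against a trimmed copy of the question's XML:</p>

```python
# Standard-library alternative to bs4: ElementTree exposes tag attributes
# through .attrib / .get(). The XML here is trimmed from the question.
import xml.etree.ElementTree as ET

xml_doc = """
<Products page="0" pages="-1" records="27">
  <Product id="ABC001">
    <Name>This product name</Name>
    <Classes><Class id="USD"><Identifier>XYZ123456</Identifier></Class></Classes>
  </Product>
  <Product id="XYZ002">
    <Name>That product name</Name>
    <Classes><Class id="EUR"><Identifier>VDSHG123456</Identifier></Class></Classes>
  </Product>
</Products>
""".strip()

root = ET.fromstring(xml_doc)
product_ids = [p.get('id') for p in root.iter('Product')]
class_ids = [c.get('id') for c in root.iter('Class')]
print(product_ids)  # ['ABC001', 'XYZ002']
print(class_ids)    # ['USD', 'EUR']
```

<p>Note that ElementTree tag names are case sensitive, so <code>iter('Product')</code> must match the XML exactly, unlike bs4's more forgiving matching.</p>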
1
2016-08-31T18:15:43Z
[ "python", "xml", "bs4" ]
User login read from file failing compared to user input
39,256,233
<p>I'm writing a program that will verify a username:</p> <pre><code>def user_login():
    """ Login and create a username, maybe """
    with open('username.txt', 'r') as f:
        if f.readline() is "":
            username = raw_input("First login, enter a username to use: ")
            with open('username.txt', 'a+') as user:
                user.write(username)
        else:
            login_id = raw_input("Enter username: ")
            if login_id == str(f.readline()):
                return True
            else:
                print "Invalid username."
                return False

if __name__ == '__main__':
    if user_login() is not True:
        print "Failed to verify"
</code></pre> <p>Every time I run this it outputs the following:</p> <pre><code>Enter username: tperkins91
Invalid username.
Failed to verify
</code></pre> <p>How do I compare user input to reading from a file?</p>
1
2016-08-31T18:03:51Z
39,256,531
<p>Opening the same file again in another nested context is not a good idea. Instead, open the file once in <em>append</em> mode, and use <code>f.seek(0)</code> to return to the start whenever you need to. (Note too that string comparison should use <code>==</code>, not <code>is</code>, which checks identity.)</p> <pre><code>def user_login():
    """ Login and create a username, maybe """
    with open('username.txt', 'a+') as f:
        f.seek(0)  # "a+" may start positioned at the end of the file
        if f.readline() == "":
            username = raw_input("First login, enter a username to use: ")
            f.seek(0)
            f.write(username)
            # return True/False --&gt; make the function return a bool in this branch
        else:
            login_id = raw_input("Enter username: ")
            f.seek(0)
            if login_id == f.readline():
                return True
            else:
                print "Invalid username."
                return False

if __name__ == '__main__':
    if user_login() is not True:
        print "Failed to verify"
</code></pre> <p>Returning a bool value in the <code>if</code> branch is something you may want to consider so the return type of your function is consistently bool, and not <code>None</code> as in the current case. </p>
1
2016-08-31T18:24:08Z
[ "python", "python-2.7", "file", "user-input" ]
User login read from file failing compared to user input
39,256,233
<p>I'm writing a program that will verify a username:</p> <pre><code>def user_login():
    """ Login and create a username, maybe """
    with open('username.txt', 'r') as f:
        if f.readline() is "":
            username = raw_input("First login, enter a username to use: ")
            with open('username.txt', 'a+') as user:
                user.write(username)
        else:
            login_id = raw_input("Enter username: ")
            if login_id == str(f.readline()):
                return True
            else:
                print "Invalid username."
                return False

if __name__ == '__main__':
    if user_login() is not True:
        print "Failed to verify"
</code></pre> <p>Every time I run this it outputs the following:</p> <pre><code>Enter username: tperkins91
Invalid username.
Failed to verify
</code></pre> <p>How do I compare user input to reading from a file?</p>
1
2016-08-31T18:03:51Z
39,256,799
<p>When you use <code>readline()</code> the first time, the current file position is moved past the first record. A better way to find out if the file is empty is to test its size:</p> <pre><code>import os.path
import sys

def user_login():
    """ Login and create a username, maybe """
    fname = 'username.txt'
    if os.path.getsize(fname) == 0:
        with open(fname, 'w') as f:
            username = raw_input("First login, enter a username to use: ")
            f.write(username)
        return True  # /False --&gt; make the function return a bool in this branch
    else:
        with open(fname, 'r') as f:
            login_id = raw_input("Enter username: ")
            if login_id == f.readline().rstrip():
                return True
            else:
                print &gt;&gt;sys.stderr, "Invalid username."
                return False

if __name__ == '__main__':
    if user_login() is not True:
        print "Failed to verify"
</code></pre> <p><code>f.readline()</code> will include a trailing newline, if there is one in the file, whereas <code>raw_input()</code> will not. While we don't explicitly write one, someone might edit the file and add a newline unintentionally, hence the addition of the <code>rstrip()</code> as a defensive precaution.</p>
1
2016-08-31T18:39:53Z
[ "python", "python-2.7", "file", "user-input" ]
How do I tell sqlalchemy to ignore certain (say, null) columns on INSERT
39,256,258
<p>I have a legacy database that creates default values for several columns using a variety of stored procedures. It would be more or less prohibitive to try and track down the names and add queries to my code, not to mention a maintenance nightmare.</p> <p>What I would <em>like</em> is to be able to tell sqlalchemy to ignore the columns that I don't really care about. Unfortunately, it doesn't. Instead it provides <code>null</code> values that violate the DB constraints.</p> <p>Here's an example of what I mean:</p> <pre><code>import sqlalchemy as sa
import logging
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

l = logging.getLogger('sqlalchemy.engine')
l.setLevel(logging.INFO)
l.addHandler(logging.StreamHandler())

engine = sa.create_engine('postgresql+psycopg2://user@host:port/dbname')
Session = sessionmaker(bind=engine)
session = Session()

temp_metadata = sa.MetaData(schema='pg_temp')
TempBase = declarative_base(metadata=temp_metadata)

with session.begin(subtransactions=True):
    session.execute('''
        CREATE TABLE pg_temp.whatevs (
            id serial
          , fnord text not null default 'fnord'
          , value text not null
        );
        INSERT INTO pg_temp.whatevs (value) VALUES ('something cool');
    ''')

    class Whatever(TempBase):
        __tablename__ = 'whatevs'

        id = sa.Column('id', sa.Integer, primary_key=True, autoincrement=True)
        fnord = sa.Column('fnord', sa.String)
        value = sa.Column('value', sa.String)

    w = Whatever(value='something cool')
    session.add(w)
</code></pre> <p>This barfs, because:</p> <pre><code>INSERT INTO pg_temp.whatevs (fnord, value) VALUES (%(fnord)s, %(value)s) RETURNING pg_temp.whatevs.id
{'fnord': None, 'value': 'something cool'}
ROLLBACK
Traceback (most recent call last):
  File "/home/wayne/.virtualenvs/myenv/lib64/python3.5/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/home/wayne/.virtualenvs/myenv/lib64/python3.5/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
psycopg2.IntegrityError: null value in column "fnord" violates not-null constraint
DETAIL:  Failing row contains (2, null, something cool).
</code></pre>
4
2016-08-31T18:05:13Z
39,258,819
<p>Add a <a href="http://docs.sqlalchemy.org/en/latest/core/defaults.html#server-defaults" rel="nofollow">server side default</a> with <a href="http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Column.params.server_default" rel="nofollow"><code>server_default</code></a> for <code>fnord</code>:</p> <pre><code>class Whatever(TempBase):
    __tablename__ = 'whatevs'

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    fnord = sa.Column(sa.String, nullable=False, server_default='fnord')
    value = sa.Column(sa.String, nullable=False)
</code></pre> <p>SQLAlchemy quite happily lets the default do its thing server side, if just told about it. If you have columns that do not have a default set in the DDL, but <a href="http://docs.sqlalchemy.org/en/latest/core/defaults.html#triggered-columns" rel="nofollow">through triggers</a>, stored procedures, or the like, have a look at <a href="http://docs.sqlalchemy.org/en/latest/core/defaults.html#sqlalchemy.schema.FetchedValue" rel="nofollow"><code>FetchedValue</code></a>.</p> <p>A test with SQLite:</p> <pre><code>In [8]: engine.execute("""CREATE TABLE whatevs (
   ...:     id INTEGER NOT NULL,
   ...:     fnord VARCHAR DEFAULT 'fnord' NOT NULL,
   ...:     value VARCHAR NOT NULL,
   ...:     PRIMARY KEY (id)
   ...: )""")

In [12]: class Whatever(Base):
    ...:     __tablename__ = 'whatevs'
    ...:     id = Column(Integer, primary_key=True, autoincrement=True)
    ...:     fnord = Column(String, nullable=False, server_default="fnord")
    ...:     value = Column(String, nullable=False)
    ...:

In [13]: session.add(Whatever(value='asdf'))

In [14]: session.commit()
2016-08-31 23:46:09,826 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
2016-08-31 23:46:09,827 INFO sqlalchemy.engine.base.Engine INSERT INTO whatevs (value) VALUES (?)
INFO:sqlalchemy.engine.base.Engine:INSERT INTO whatevs (value) VALUES (?)
2016-08-31 23:46:09,827 INFO sqlalchemy.engine.base.Engine ('asdf',)
INFO:sqlalchemy.engine.base.Engine:('asdf',)
2016-08-31 23:46:09,828 INFO sqlalchemy.engine.base.Engine COMMIT
INFO:sqlalchemy.engine.base.Engine:COMMIT
</code></pre>
2
2016-08-31T20:53:56Z
[ "python", "postgresql", "sqlalchemy" ]
How to get classes labels from cross_val_predict used with predict_proba in scikit-learn
39,256,287
<p>I need to train a <a href="http://scikit-learn.org/dev/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow">Random Forest classifier</a> using a 3-fold cross-validation. For each sample, I need to retrieve the prediction probability when it happens to be in the test set.</p> <p>I am using scikit-learn version 0.18.dev0.</p> <p>This new version adds the feature to use the method <a href="http://scikit-learn.org/dev/modules/generated/sklearn.model_selection.cross_val_predict.html#sklearn.model_selection.cross_val_predict" rel="nofollow">cross_val_predict()</a> with an additional parameter <code>method</code> to define which kind of prediction require from the estimator.</p> <p>In my case I want to use the <a href="http://scikit-learn.org/dev/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.predict_proba" rel="nofollow">predict_proba()</a> method, which returns the probability for each class, in a multiclass scenario.</p> <p>However, when I run the method, I get as a result the matrix of prediction probabilities, where each rows represents a sample, and each column represents the prediction probability for a specific class.</p> <p>The problem is that the method does not indicate which class corresponds to each column.</p> <p>The value I need is the same (in my case using a <code>RandomForestClassifier</code>) returned in the attribute classes_ defined as:</p> <blockquote> <p>classes_ : array of shape = [n_classes] or a list of such arrays The classes labels (single output problem), or a list of arrays of class labels (multi-output problem).</p> </blockquote> <p>which is needed by <code>predict_proba()</code> because in its documentation it is written that:</p> <blockquote> <p>The order of the classes corresponds to that in the attribute classes_.</p> </blockquote> <p>A minimal example is the following:</p> <pre><code>import numpy as np from sklearn.ensemble import RandomForestClassifier 
from sklearn.model_selection import cross_val_predict clf = RandomForestClassifier() X = np.random.randn(10, 10) y = np.array([1] * 4 + [0] * 3 + [2] * 3) # how to get classes from here? proba = cross_val_predict(estimator=clf, X=X, y=y, method="predict_proba") # using the classifier without cross-validation # it is possible to get the classes in this way: clf.fit(X, y) proba = clf.predict_proba(X) classes = clf.classes_ </code></pre>
2
2016-08-31T18:06:49Z
39,260,489
<p>Yes, they will be in sorted order; this is because <code>DecisionTreeClassifier</code> (which is the default <code>base_estimator</code> for <code>RandomForestClassifier</code>) <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/tree.py#L193" rel="nofollow">uses <code>np.unique</code> to construct the <code>classes_</code> attribute</a> which returns the sorted unique values of the input array.</p>
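A quick numpy-only check of what this implies for the columns of the <code>cross_val_predict</code> output: the k-th probability column corresponds to the k-th sorted unique label. (Sketch only; no classifier is fitted here, just the <code>np.unique</code> call that builds <code>classes_</code>.)

```python
import numpy as np

# the labels from the question's minimal example
y = np.array([1] * 4 + [0] * 3 + [2] * 3)

# DecisionTreeClassifier builds classes_ with np.unique, so the
# columns of predict_proba / cross_val_predict follow this order
classes = np.unique(y)
print(classes)  # [0 1 2]
```

So column 0 of the probability matrix is class 0, column 1 is class 1, and so on, even though the labels appear unsorted in <code>y</code>.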
1
2016-08-31T23:22:50Z
[ "python", "scikit-learn", "cross-validation" ]
GTK+ 3.0 Clipboard doesn't paste anything to my clipboard
39,256,324
<p>I've been using this script for testing.</p> <pre><code>import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, Gdk board = Gtk.Clipboard.get(Gdk.SELECTION_CLIPBOARD) board.set_text("hello there", -1) board.store() </code></pre> <p>It does erase anything I have in my clipboard, but it doesn't add "hello there". It just leaves my clipboard empty. Other people who have used this script say that it works, so I'm not really sure what could be causing this.</p> <p><a href="https://andrewsteele.me.uk/learngtk.org/tutorials/python_gtk3_tutorial/html/clipboard.html" rel="nofollow">I've used this as my resource.</a></p> <p>Edit: This also happens to me with tkinter, so I believe it might not be exclusive to GTK. It's probably client-side.</p> <p>Tkinter code I've used to test:</p> <pre><code>from tkinter import Tk r = Tk() r.withdraw() r.clipboard_clear() r.clipboard_append('i can has clipboardz?') r.destroy() </code></pre>
2
2016-08-31T18:10:03Z
39,259,503
<p>I got it working, sort of. It seems like I've been able to import the <a href="https://pypi.python.org/pypi/xerox" rel="nofollow">xerox library</a> without GTK complaining, and it worked. It's still not the best solution, but at least it's a workaround.</p>
0
2016-08-31T21:46:05Z
[ "python", "tkinter", "gtk" ]
how to print type of keys for a list in python pdb
39,256,340
<p>I'm learning pdb, and I can <code>print</code> or <code>pp</code> a list of objects, but how can I print the type of the key for each object? I can see it with <code>pp</code>; it looks like a byte array, but I'd like to know the type. I suppose I could just print-debug this, but I'm curious if there's a smarter way to do it when using the debugger.</p>
0
2016-08-31T18:11:09Z
39,256,658
<p>Because you wrote</p> <blockquote> <p>type of key</p> </blockquote> <p>I'm assuming that you mean a dictionary. But you <em>do</em> also talk about a "list of objects", which could also mean</p> <ul> <li>a list of any kind of object</li> <li>a list of dictionaries</li> </ul> <p>But I'll show you two options:</p> <pre><code>mydict = {b'some bytes': 42, 'a string!': 'fnord', (1,2,3): 'A tuple! (is that two-pull or tuh-ple?)', 19: 'Just an int', } list_of_things = [b'some bytes', 'a string!', (1,2,3), 19, ['a', 'b', 'c']] import pdb; pdb.set_trace() </code></pre> <p>Now when <code>pdb</code> fires up:</p> <pre><code>(Pdb) for _ in mydict: print('{} {}'.format(_, type(_))) 19 &lt;type 'int'&gt; some bytes &lt;type 'str'&gt; a string! &lt;type 'str'&gt; (1, 2, 3) &lt;type 'tuple'&gt; </code></pre> <p>That will give you the keys and types of the keys.</p> <p>Here's the types and values from the list:</p> <pre><code>(Pdb) for _ in list_of_things: print('{} {}'.format(_, type(_))) some bytes &lt;type 'str'&gt; a string! &lt;type 'str'&gt; (1, 2, 3) &lt;type 'tuple'&gt; 19 &lt;type 'int'&gt; ['a', 'b', 'c'] &lt;type 'list'&gt; </code></pre>
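Under Python 3 (where <code>bytes</code> and <code>str</code> are distinct types) a one-line dict comprehension gives the same overview straight from the <code>(Pdb)</code> prompt; this is just a compact variant of the loops above:

```python
mydict = {b'some bytes': 42, 'a string!': 'fnord',
          (1, 2, 3): 'a tuple', 19: 'just an int'}

# at the (Pdb) prompt you could type:  pp {k: type(k).__name__ for k in mydict}
key_types = {repr(k): type(k).__name__ for k in mydict}
print(key_types)
```

<code>repr()</code> on the key keeps unhashable-looking output readable, and <code>type(k).__name__</code> trims the noise down to just the type name.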
1
2016-08-31T18:32:21Z
[ "python", "pdb" ]
How to evaluate and add string to numpy array element
39,256,365
<p>I have this piece of code that I am trying to optimize. It uses list comprehensions and works.</p> <pre><code>series1 = np.asarray(range(10)).astype(float) series2 = series1[::-1] ntup = zip(series1,series2) [['', 't:'+str(series2)][series1 &gt; series2] for series1,series2 in ntup ] #['', '', '', '', '', 't:4.0', 't:3.0', 't:2.0', 't:1.0', 't:0.0'] </code></pre> <p>I am trying to use <code>np.where()</code> here. Is there a solution with <code>numpy</code> (without the series being consumed)?</p> <pre><code>series1 = np.asarray(range(10)).astype(float) series2 = series1[::-1] np.where(series1 &gt; series2 ,'t:'+ str(series2),'' ) </code></pre> <p>The result is this:</p> <pre><code>array(['', '', '', '', '', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]'], dtype='|S43') </code></pre>
4
2016-08-31T18:13:00Z
39,256,811
<p>We can use a vectorized approach based on </p> <ul> <li><p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.core.defchararray.add.html" rel="nofollow"><code>np.core.defchararray.add</code></a> for the string appending of <code>'t:'</code> with the valid strings, and</p></li> <li><p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a> to choose based on the conditional statement and perform the appending or just use the default value of an empty string. </p></li> </ul> <p>So, we would have an implementation like so -</p> <pre><code>np.where(series1&gt;series2,np.core.defchararray.add('t:',series2.astype(str)),'') </code></pre> <hr> <p><strong>Boost it-up!</strong></p> <p>We can use the appending with <code>np.core.defchararray.add</code> on the valid elements based on the mask of <code>series1&gt;series2</code> to boost up the performance further after initializing an array with the default empty strings and then assigning only the valid values into it. 
</p> <p>So, the modified version would look something like this -</p> <pre><code>mask = series1&gt;series2 out = np.full(series1.size,'',dtype='U34') out[mask] = np.core.defchararray.add('t:',series2[mask].astype(str)) </code></pre> <hr> <p><strong>Runtime test</strong></p> <p>Vectorized versions as functions:</p> <pre><code>def vectorized_app1(series1,series2): mask = series1&gt;series2 return np.where(mask,np.core.defchararray.add('t:',series2.astype(str)),'') def vectorized_app2(series1,series2): mask = series1&gt;series2 out = np.full(series1.size,'',dtype='U34') out[mask] = np.core.defchararray.add('t:',series2[mask].astype(str)) return out </code></pre> <p>Timings on a bigger dataset -</p> <pre><code>In [283]: # Setup input arrays ...: series1 = np.asarray(range(10000)).astype(float) ...: series2 = series1[::-1] ...: In [284]: %timeit [['', 't:'+str(s2)][s1 &gt; s2] for s1,s2 in zip(series1, series2)] 10 loops, best of 3: 32.1 ms per loop # OP/@hpaulj's soln In [285]: %timeit vectorized_app1(series1,series2) 10 loops, best of 3: 20.5 ms per loop In [286]: %timeit vectorized_app2(series1,series2) 100 loops, best of 3: 10.4 ms per loop </code></pre> <p>As noted by <a href="http://stackoverflow.com/questions/39256365/how-to-evalute-and-add-string-to-numpy-array-element#comment65850443_39256849"><code>OP in comments</code></a>, we can probably play around with the dtype for <code>series2</code> before appending. So, I used <code>U32</code> there to keep the output dtype the same as with <code>str</code> dtype, i.e. <code>series2.astype('U32')</code> inside the <code>np.core.defchararray.add</code> call. The new timings for the vectorized approaches were -</p> <pre><code>In [290]: %timeit vectorized_app1(series1,series2) 10 loops, best of 3: 20.1 ms per loop In [291]: %timeit vectorized_app2(series1,series2) 100 loops, best of 3: 10.1 ms per loop </code></pre> <p>So, there's some further marginal improvement there!</p>
2
2016-08-31T18:40:57Z
[ "python", "string", "performance", "numpy" ]
How to evaluate and add string to numpy array element
39,256,365
<p>I have this piece of code that I am trying to optimize. It uses list comprehensions and works.</p> <pre><code>series1 = np.asarray(range(10)).astype(float) series2 = series1[::-1] ntup = zip(series1,series2) [['', 't:'+str(series2)][series1 &gt; series2] for series1,series2 in ntup ] #['', '', '', '', '', 't:4.0', 't:3.0', 't:2.0', 't:1.0', 't:0.0'] </code></pre> <p>I am trying to use <code>np.where()</code> here. Is there a solution with <code>numpy</code> (without the series being consumed)?</p> <pre><code>series1 = np.asarray(range(10)).astype(float) series2 = series1[::-1] np.where(series1 &gt; series2 ,'t:'+ str(series2),'' ) </code></pre> <p>The result is this:</p> <pre><code>array(['', '', '', '', '', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]'], dtype='|S43') </code></pre>
4
2016-08-31T18:13:00Z
39,256,849
<p>Your list comprehensions work just fine for lists; there is no real need to use arrays. And for operations like this, arrays probably won't give any speed advantage.</p> <pre><code>In [521]: series1=[float(i) for i in range(10)] In [522]: series2=series1[::-1] In [523]: [['', 't:'+str(s2)][s1 &gt; s2] for s1,s2 in zip(series1, series2)] Out[523]: ['', '', '', '', '', 't:4.0', 't:3.0', 't:2.0', 't:1.0', 't:0.0'] </code></pre> <p>As @Divakar noted, there is a <code>np.char.add</code> function that will perform string operations. My experience is that they are marginally faster than list operations. And when you take into account the overhead of creating arrays, they may be slower.</p> <p>=========</p> <p>The <code>array</code> version as shown by @Divakar:</p> <pre><code>In [539]: aseries1=np.array(series1) In [540]: aseries2=np.array(series2) In [541]: np.where(aseries1&gt;aseries2, np.char.add('t:',aseries2.astype('U3')), ' ...: ') Out[541]: array(['', '', '', '', '', 't:4.0', 't:3.0', 't:2.0', 't:1.0', 't:0.0'], dtype='&lt;U5') </code></pre> <p>A couple of time tests:</p> <pre><code>In [542]: timeit [['', 't:'+str(s2)][s1 &gt; s2] for s1,s2 in zip(series1, series2) ...: ] 100000 loops, best of 3: 15.5 µs per loop In [543]: timeit np.where(aseries1&gt;aseries2, np.char.add('t:',aseries2.astype('U3')), '') 10000 loops, best of 3: 63 µs per loop </code></pre>
1
2016-08-31T18:43:39Z
[ "python", "string", "performance", "numpy" ]
How to evaluate and add string to numpy array element
39,256,365
<p>I have this piece of code that I am trying to optimize. It uses list comprehensions and works.</p> <pre><code>series1 = np.asarray(range(10)).astype(float) series2 = series1[::-1] ntup = zip(series1,series2) [['', 't:'+str(series2)][series1 &gt; series2] for series1,series2 in ntup ] #['', '', '', '', '', 't:4.0', 't:3.0', 't:2.0', 't:1.0', 't:0.0'] </code></pre> <p>I am trying to use <code>np.where()</code> here. Is there a solution with <code>numpy</code> (without the series being consumed)?</p> <pre><code>series1 = np.asarray(range(10)).astype(float) series2 = series1[::-1] np.where(series1 &gt; series2 ,'t:'+ str(series2),'' ) </code></pre> <p>The result is this:</p> <pre><code>array(['', '', '', '', '', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]', 't:[ 9. 8. 7. 6. 5. 4. 3. 2. 1. 0.]'], dtype='|S43') </code></pre>
4
2016-08-31T18:13:00Z
39,257,537
<p>This works for me. Fully vectorized.</p> <pre><code>import numpy as np series1 = np.arange(10) series2 = series1[::-1] empties = np.repeat('', series1.shape[0]) ts = np.repeat('t:', series1.shape[0]) s2str = series2.astype(np.str) m = np.vstack([empties, np.core.defchararray.add(ts, s2str)]) cmp = np.int64(series1 &gt; series2) idx = np.arange(m.shape[1]) res = m[cmp, idx] print res </code></pre>
1
2016-08-31T19:26:14Z
[ "python", "string", "performance", "numpy" ]
How to Import compiled libs (pyd) in python
39,256,378
<p>I could not get a working example of importing a compiled library (pyd file) in Python.</p> <p>I compiled the blender source code; the result is a bpy.pyd file. This file is placed in the python\lib folder.</p> <p>In the source code I have <code>import bpy</code></p> <p>The file is found at runtime, but I get a runtime error that the module could not be imported.</p> <p>Does someone have good documentation on importing compiled python modules? I searched ~100 entries, but found only general definitions on how to do this. I tried all suggestions without success.</p> <p>Thanks!</p>
0
2016-08-31T18:13:46Z
39,298,240
<p>Found the error: the pyd file was compiled with a 32-bit Python but was called from a 64-bit Python.</p>
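A quick way to check the bitness of the interpreter that is doing the importing (the pointer size gives it directly), handy for confirming a mismatch like the one above:

```python
import struct
import platform

# pointer size in bits: 32 on a 32-bit Python, 64 on a 64-bit one
bits = struct.calcsize("P") * 8
print(bits, platform.architecture()[0])
```

Run this in the same interpreter that fails to import the pyd; if it prints 64 while the pyd was built against a 32-bit Python (or vice versa), the import will fail even though the file is found.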
0
2016-09-02T17:54:13Z
[ "python", "python-3.x", "bpy" ]
How to make grid of album covers QT python
39,256,446
<p>Currently I am trying to do it with QTableWidget, but I can't seem to adjust the width of the table to the parent element, so when resizing happens, the column count needs to increase. The QTableWidget is placed in a QTabWidget in the first tab. For more info, it is a music player like this: <a href="http://i.stack.imgur.com/gbvns.png" rel="nofollow"><img src="http://i.stack.imgur.com/gbvns.png" alt="enter image description here"></a> </p> <p>You can see that in <code>Tab 1</code> the grid is going out of the parent bounds horizontally. Vertically is not a problem, but I will disable the horizontal scroll, and I need the count of columns to fit the current size of the tab WITHOUT cell resizing. Cells must always have the same size, except if regulated by the slider below the tab view. The cells are needed to fit album covers, so they will be square later on.</p>
0
2016-08-31T18:18:06Z
39,256,867
<p>Problem kinda solved. I replaced the <code>QTableWidget</code> with a <code>QListWidget</code> in <code>IconMode</code> view mode.</p>
0
2016-08-31T18:44:42Z
[ "python", "qt", "pyqt", "qt-designer" ]
How to make grid of album covers QT python
39,256,446
<p>Currently I am trying to do it with QTableWidget, but I can't seem to adjust the width of the table to the parent element, so when resizing happens, the column count needs to increase. The QTableWidget is placed in a QTabWidget in the first tab. For more info, it is a music player like this: <a href="http://i.stack.imgur.com/gbvns.png" rel="nofollow"><img src="http://i.stack.imgur.com/gbvns.png" alt="enter image description here"></a> </p> <p>You can see that in <code>Tab 1</code> the grid is going out of the parent bounds horizontally. Vertically is not a problem, but I will disable the horizontal scroll, and I need the count of columns to fit the current size of the tab WITHOUT cell resizing. Cells must always have the same size, except if regulated by the slider below the tab view. The cells are needed to fit album covers, so they will be square later on.</p>
0
2016-08-31T18:18:06Z
39,259,134
<p>The most flexible way to do this would be with a <code>QGraphicsView</code>. You would create a <code>QGraphicsScene</code> the same width as the view and place all the album covers accordingly. Based on the size of each image and the padding between them, you can compute how many you can fit on a single line.</p>
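The capacity-and-placement computation this answer describes needs no Qt at all; here is a minimal sketch (the function and parameter names are illustrative, not from any Qt API):

```python
def grid_positions(view_width, cell_size, padding, count):
    """Return (col, row) grid slots for `count` square covers.

    Each cover occupies `cell_size` pixels plus `padding` to its right;
    one extra `padding` is reserved on the left edge of the view.
    """
    per_row = max(1, (view_width - padding) // (cell_size + padding))
    return [(i % per_row, i // per_row) for i in range(count)]

# 500 px wide view, 100 px covers, 10 px gaps -> 4 covers per row
print(grid_positions(500, 100, 10, 7))
```

With these slots, the pixel position of a cover in the scene would be <code>(padding + col * (cell_size + padding), padding + row * (cell_size + padding))</code>, recomputed whenever the view is resized or the size slider moves.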
1
2016-08-31T21:15:59Z
[ "python", "qt", "pyqt", "qt-designer" ]
Docker image gives me "executable file not found in $PATH" when run from python
39,256,458
<p>I'm trying to run some commands inside a docker image in python. When I do:</p> <pre><code>docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim sh -c "javac example.java &amp;&amp; java example" </code></pre> <p>In console (kali linux) it runs fine and prints the result. When I try to run same command from python it gives me the error:</p> <pre><code>"Exception: docker: Error response from daemon: oci runtime error: exec: \"sh -c 'javac example.java &amp;&amp; java example'\": executable file not found in $PATH.\n" </code></pre> <p>These lines work fine from python:</p> <pre><code>docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim sh docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim javac docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim javac example.java </code></pre> <p>But these don't:</p> <pre><code>docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim sh javac docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim sh javac example.java docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim sh -c javac docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim sh -c "javac" </code></pre> <p>They all work from console. Just not from python. As soon as I add "sh -c" it gives me the error. I can run it as 2 separate commands like:</p> <pre><code>docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim javac example.java docker run --rm -v &lt;some_dir&gt;:/mnt --workdir /mnt frolvlad/alpine-oraclejdk8:slim java example </code></pre> <p>But it's important that it runs all at once.</p> <p>It all works fine from console, but in python as soon as I add "sh -c" it gives me the error. 
What am I doing wrong?</p> <p>Here is the python code that I use:</p> <pre><code>BASE_CMD = [ 'docker', 'run', '--rm', '-v' ] def calculate_compile_and_execute_java_command(folder_name, file_name): return BASE_CMD + [ folder_name + ":/mnt", "--workdir", "/mnt", "frolvlad/alpine-oraclejdk8:slim", "sh -c 'javac " + file_name + " &amp;&amp; java " + file_name[:-5] + "'" ] . . . response = call_command(calculate_compile_and_execute_command(self.lang, file_path, file_name)) . . . def call_command(cmd, timeout=float('inf'), cwd=None, decode=True, **subproc_options): if cwd is None: cwd = os.getcwd() subproc = subprocess.Popen( cmd, cwd=cwd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **subproc_options ) . . # code for timeout handling and result polling . . </code></pre> <p>I tried all possible combinations of string concatenation and quotes escaping in calculate method. Popen() sends the string to terminal as it should. It just doesn't work with "sh -c". It even works if I add more switches to the command. Like this:</p> <pre><code>def calculate_compile_and_execute_c_command(folder_name, file_name): return BASE_CMD + [ folder_name + ':/mnt', 'frolvlad/alpine-gcc', 'gcc', '--static', '/mnt/' + file_name, '-o', '/mnt/qq' ] </code></pre> <p>Which gives the string like this:</p> <pre><code>docker run --rm -v "$(pwd)":/mnt --workdir /mnt frolvlad/alpine-gcc gcc --static qq.c -o qq </code></pre>
0
2016-08-31T18:18:50Z
39,274,148
<p>You are passing one big string as the last parameter to Docker. Docker is trying to run a binary called <code>sh -c 'javac example.java &amp;&amp; java example'</code>, which obviously doesn't exist.</p> <p>You want to run the <code>sh</code> binary with two arguments: <code>-c</code> and the shell script you want <code>sh</code> to run.</p> <pre><code>return BASE_CMD + [ folder_name + ":/mnt", "--workdir", "/mnt", "frolvlad/alpine-oraclejdk8:slim", "sh", "-c", "javac " + file_name + " &amp;&amp; java " + file_name[:-5] ] </code></pre> <p>If a command-line script becomes too complex, you can hide it behind a shell script:</p> <pre><code>#!/bin/sh set -ue javac $1 java ${1%.*} </code></pre> <p>Then</p> <pre><code>$ compile_and_run.sh whatever.java </code></pre>
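The reason the console version works is that the interactive shell performs this word-splitting for you: <code>sh</code>, <code>-c</code>, and the quoted script arrive as three separate argv entries. Python's <code>shlex.split</code> reproduces that splitting, which makes it easy to see what argv <em>should</em> look like (the image name is simply the one from the question):

```python
import shlex

# the command exactly as it would be typed at an interactive shell
cmd = ('docker run --rm --workdir /mnt frolvlad/alpine-oraclejdk8:slim '
       'sh -c "javac example.java && java example"')

argv = shlex.split(cmd)
print(argv[-3:])  # ['sh', '-c', 'javac example.java && java example']
```

Passing <code>shlex.split(cmd)</code> (or an equivalent hand-built list) to <code>subprocess.Popen</code> gives Docker the three arguments it expects, instead of one fused string.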
0
2016-09-01T14:22:47Z
[ "python", "linux", "docker", "console" ]
Django get_object_or_404 is not defined
39,256,511
<p>I am developing a standalone application which uses django's ORM. In my main application, I am using django's <code>get_object_or_404</code> function.</p> <p>I have imported it with all its dependencies. When I run the script, it gives me the error:</p> <pre><code>Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task R = retval = fun(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__ return self.run(*args, **kwargs) File "/root/standAlone/tasks.py", line 48, in task1 NameError: global name 'get_object_or_404' is not defined </code></pre> <p>Here is my full script code:</p> <pre><code>import django from celery import Celery from django.conf import settings settings.configure( DATABASE_ENGINE = "django.db.backends.mysql", DATABASE_NAME = "database name", DATABASE_USER = "username", DATABASE_PASSWORD = "password", DATABASE_HOST = "host", DATABASE_PORT = "3306", INSTALLED_APPS = ("myApp",) ) django.setup() from django.db import models from myApp.models import * from django.contrib import messages from django.core.urlresolvers import reverse from django.http import HttpResponseRedirect from django.shortcuts import render,redirect from django.shortcuts import get_list_or_404, get_object_or_404 from celery.decorators import task from celery.utils.log import get_task_logger app = Celery('tasks', broker='redis://broker_url') @app.task(name="task1") def task1(recipe_pk): recipe = get_object_or_404(Recipe, pk=recipe_pk) #error occurs here recipe.status = 'Completed' recipe.save() </code></pre> <p>Does anyone know how to solve this problem?</p>
0
2016-08-31T18:23:15Z
39,256,639
<p>Try removing the duplicate import of <code>django.shortcuts</code>: the script imports it twice (once for <code>render, redirect</code> and once for <code>get_list_or_404, get_object_or_404</code>). Consolidate them into a single import line.</p>
0
2016-08-31T18:31:09Z
[ "python", "django", "import" ]