title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list
How to add a dot in python numpy ndarray - data type issue
39,083,473
<p>I have a NumPy <code>ndarray</code> that looks like:</p> <pre><code>[[ 0 0 0 1 0] [ 0 0 0 0 1]] </code></pre> <p>but I would like to process it to the following form:</p> <pre><code>[[ 0. 0. 0. 1. 0.] [ 0. 0. 0. 0. 1.]] </code></pre> <p>How would I achieve this?</p>
0
2016-08-22T15:31:34Z
39,083,556
<p>It looks to me like you have an array of some integer type. You probably want to convert to an array of float:</p> <pre><code>array_float = array_int.astype(float) </code></pre> <p>e.g.:</p> <pre><code>&gt;&gt;&gt; ones_i = np.ones(10, dtype=int) &gt;&gt;&gt; print ones_i [1 1 1 1 1 1 1 1 1 1] &gt;&gt;&gt; ones_f = ones_i.astype(float) &gt;&gt;&gt; print ones_f [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] </code></pre> <p>With that said, I think that it is worth asking <em>why</em> you want to process the string representation of your array. There very well might be a better way to accomplish your goal.</p>
1
2016-08-22T15:35:52Z
[ "python", "arrays" ]
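The conversion from the answer above can be sketched with the exact array from the question (written for Python 3; `np` is the usual NumPy alias):

```python
import numpy as np

# The integer array from the question.
arr_int = np.array([[0, 0, 0, 1, 0],
                    [0, 0, 0, 0, 1]])

# astype returns a NEW array with the requested dtype;
# the original integer array is left unchanged.
arr_float = arr_int.astype(float)

print(arr_float.dtype)  # float64
print(arr_float)
```

The printed array now shows the trailing dots (`1.` instead of `1`), because NumPy's repr distinguishes float elements from integers.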
Force numbers in list to two decimal places
39,083,510
<p>I have a list of numerical data read from a GIS file (.shp):</p> <p><code>dataList = [4.98, 5.09, 5.23, 5.35, 5.4, 5.59, ...]</code></p> <p>I'm looking for a method that forces numbers in this list to display to two decimal places, with output like this:</p> <p><code>dataList = [4.98, 5.09, 5.23, 5.35, 5.40, 5.59, ...]</code></p> <p>The main issue is I'm displaying each item in a separate map document in a loop with the following: <code>TextElement11.text = dataList[count]</code> which does not support the <code>"{:.2f}"</code> or <code>.format(5)</code> solutions I have found elsewhere.</p> <p>The items in this list are for display purposes only, so it does not matter if a solution requires a conversion e.g. to a string.</p> <p>I'm using Python 2.7.</p>
1
2016-08-22T15:33:52Z
39,083,605
<p>Iterate through your datalist and convert each value to the formatted string:</p> <pre><code>new_datalist = ["{:.2f}".format(value) for value in dataList] </code></pre>
2
2016-08-22T15:38:40Z
[ "python", "list", "formatting", "decimal" ]
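Applied to the sample data from the question, the comprehension in the answer above produces fixed-width strings (a Python 3 sketch; the list is truncated to the values shown in the question):

```python
dataList = [4.98, 5.09, 5.23, 5.35, 5.4, 5.59]

# Each float is rendered with exactly two digits after the decimal point.
new_datalist = ["{:.2f}".format(value) for value in dataList]

print(new_datalist)  # ['4.98', '5.09', '5.23', '5.35', '5.40', '5.59']
```

Note that `5.4` becomes the string `'5.40'`, which is exactly the display behavior the question asks for; the values are strings afterwards, which the asker says is acceptable.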
Force numbers in list to two decimal places
39,083,510
<p>I have a list of numerical data read from a GIS file (.shp):</p> <p><code>dataList = [4.98, 5.09, 5.23, 5.35, 5.4, 5.59, ...]</code></p> <p>I'm looking for a method that forces numbers in this list to display to two decimal places, with output like this:</p> <p><code>dataList = [4.98, 5.09, 5.23, 5.35, 5.40, 5.59, ...]</code></p> <p>The main issue is I'm displaying each item in a separate map document in a loop with the following: <code>TextElement11.text = dataList[count]</code> which does not support the <code>"{:.2f}"</code> or <code>.format(5)</code> solutions I have found elsewhere.</p> <p>The items in this list are for display purposes only, so it does not matter if a solution requires a conversion e.g. to a string.</p> <p>I'm using Python 2.7.</p>
1
2016-08-22T15:33:52Z
39,083,707
<p>The most Pythonic way to achieve this is to use <code>map()</code> with a <code>lambda</code> function (note that on Python 3, <code>map()</code> returns an iterator, so wrap it in <code>list()</code> if you need a list):</p> <pre><code>map(lambda x: "{:.2f}".format(x), dataList) </code></pre>
1
2016-08-22T15:44:19Z
[ "python", "list", "formatting", "decimal" ]
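A sketch of the `map()` approach from the answer above; since on Python 3 `map()` is lazy, the result is materialized with `list()` here:

```python
dataList = [4.98, 5.09, 5.23, 5.35, 5.4, 5.59]

# On Python 2, map() returns a list directly; on Python 3 it
# returns a lazy map object, hence the explicit list() call.
formatted = list(map(lambda x: "{:.2f}".format(x), dataList))

print(formatted)  # ['4.98', '5.09', '5.23', '5.35', '5.40', '5.59']
```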
Sorting email in descending order
39,083,524
<p>In reference to my previous question (Solved): <a href="http://stackoverflow.com/questions/38921933/only-show-certain-number-of-emails-in-imaplib?noredirect=1#comment65309180_38921933">Only show certain number of emails in imaplib</a></p> <p>So now I can show a certain number of emails in my inbox, but the problem I'm having now is that I would like to output them in descending order.</p> <p>It says here: <a href="http://pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python/" rel="nofollow">http://pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python/</a></p> <pre><code>&gt;&gt;&gt; a[::-1] [8, 7, 6, 5, 4, 3, 2, 1] </code></pre> <p><code>"And that -1 I snuck in at the end? It means to increment the index every time by -1, meaning it will traverse the list by going backwards."</code></p> <p>So I'm thinking backwards is equal to descending, right? I tried to add the -1 to the code that I already have working:</p> <pre><code>ids = data[0] id_list = ids.split() for num in id_list[0:10:-1]: rv, data = M.fetch(num, '(RFC822)') msg = email.message_from_string(data[0][1]) subj = msg['Subject'] to = msg['To'] frm = msg['From'] body = msg.get_payload() print subj </code></pre> <p>But there was no output. I was expecting</p> <pre><code>tenth ninth eighth . . . . second first </code></pre> <p>But I didn't get any. Any help on how I can achieve the output I want?</p>
-1
2016-08-22T15:34:25Z
39,083,617
<p>I would like to thank Max for this solution; now my emails are sorted with the most recent first:</p> <pre><code>ids = data[0] id_list = ids.split() # id_list[10::-1] walks from index 10 back to 0, newest first for num in id_list[10::-1]: rv, data = M.fetch(num, '(RFC822)') msg = email.message_from_string(data[0][1]) subj = msg['Subject'] to = msg['To'] frm = msg['From'] body = msg.get_payload() print subj </code></pre> <p>Thank you again to Max! :D</p>
0
2016-08-22T15:39:39Z
[ "python", "sorting", "imaplib" ]
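The difference between the failing slice and the working one can be illustrated with a plain list (a sketch; the integers stand in for message ids):

```python
ids = list(range(1, 11))  # 1 = oldest, 10 = newest

# Start 0, stop 10, step -1: the slice walks backwards, but the stop
# index lies AHEAD of the start, so nothing is selected at all.
print(ids[0:10:-1])   # []

# Start at index 10 (clamped to the last element) and step back to 0:
# the whole list, newest first.
print(ids[10::-1])    # [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
```

This is why the original `id_list[0:10:-1]` loop produced no output: the slice was empty, so the loop body never ran.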
Match all occurrences of string using re.findall
39,083,539
<p>I have a string</p> <p><code>a = "123 some_string ABC 456 some_string DEF 789 some_string GHI"</code></p> <pre><code>print re.findall("(\d\d\d).*([A-Z]+)", a) </code></pre> <p><strong>o/p</strong> : <code>[('123', 'I')]</code> </p> <p><strong>Expected o/p</strong> : <code>[('123', 'ABC'), ('456', 'DEF'), ('789', 'GHI')]</code></p> <p>Because of <code>.*</code> it is matching <code>123</code> and final character <code>I</code>. What is the proper regex, so that it prints expected o/p ?</p>
1
2016-08-22T15:34:57Z
39,083,732
<p>Converting my comment to an answer:</p> <p>You are using a greedy <code>.*</code>, which matches from the first 3-digit number all the way to the very last run of upper-case letters.</p> <p>You should make it non-greedy (lazy):</p> <pre><code>(\d{3}).*?([A-Z]+) </code></pre> <p><a href="https://regex101.com/r/oL1lR1/1" rel="nofollow">RegEx Demo</a></p>
2
2016-08-22T15:45:53Z
[ "python", "regex" ]
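The lazy fix can be checked directly with the string from the question (a Python 3 sketch):

```python
import re

a = "123 some_string ABC 456 some_string DEF 789 some_string GHI"

# Greedy .* swallows everything up to the last run of capitals,
# leaving only the final 'I' for the second group.
print(re.findall(r"(\d{3}).*([A-Z]+)", a))
# [('123', 'I')]

# Lazy .*? stops at the FIRST run of capitals after each number.
print(re.findall(r"(\d{3}).*?([A-Z]+)", a))
# [('123', 'ABC'), ('456', 'DEF'), ('789', 'GHI')]
```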
Match all occurrences of string using re.findall
39,083,539
<p>I have a string</p> <p><code>a = "123 some_string ABC 456 some_string DEF 789 some_string GHI"</code></p> <pre><code>print re.findall("(\d\d\d).*([A-Z]+)", a) </code></pre> <p><strong>o/p</strong> : <code>[('123', 'I')]</code> </p> <p><strong>Expected o/p</strong> : <code>[('123', 'ABC'), ('456', 'DEF'), ('789', 'GHI')]</code></p> <p>Because of <code>.*</code> it is matching <code>123</code> and final character <code>I</code>. What is the proper regex, so that it prints expected o/p ?</p>
1
2016-08-22T15:34:57Z
39,084,127
<p>While anubhava's expression works, consider using the principle of contrast (108 steps compared to 30 steps - a reduction by more than 70%!):</p> <pre><code>(\d{3})[^A-Z]*([A-Z]+) </code></pre> <p>See the <a href="https://regex101.com/r/oL1lR1/2" rel="nofollow"><strong>hijacked demo on regex101.com</strong></a>.<br> The lazy dot-star is very expensive in terms of performance.</p>
3
2016-08-22T16:08:12Z
[ "python", "regex" ]
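Both patterns from this thread return the expected pairs; the negated class simply gets there with far less backtracking, since `[^A-Z]*` can never run past the capitals it is contrasted with (a sketch):

```python
import re

a = "123 some_string ABC 456 some_string DEF 789 some_string GHI"
expected = [('123', 'ABC'), ('456', 'DEF'), ('789', 'GHI')]

# Negated character class: consumes only non-capitals, so the
# engine never has to backtrack into this part of the pattern.
assert re.findall(r"(\d{3})[^A-Z]*([A-Z]+)", a) == expected

# Lazy dot-star: same result, but each .*? step is a try-and-retreat.
assert re.findall(r"(\d{3}).*?([A-Z]+)", a) == expected
```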
Merge 2 images onto new image: can someone explain this code?
39,083,540
<p>I'm trying to write a code to merge 2 photos side by side onto a new image, and I found this script online--however, I have no idea how it works. Where do I input the image files that I want to merge? Can someone please explain this code to me? Thanks!!</p> <pre><code>from PIL import Image import sys if not len(sys.argv) &gt; 3: raise SystemExit("Usage: %s src1 [src2] .. dest" % sys.argv[0]) images = map(Image.open, sys.argv[1:-1]) w = sum(i.size[0] for i in images) mh = max(i.size[1] for i in images) result = Image.new("RGBA", (w, mh)) x = 0 for i in images: result.paste(i, (x, 0)) x += i.size[0] result.save(sys.argv[-1]) </code></pre>
0
2016-08-22T15:34:57Z
39,083,704
<p>It is very easy. You open the images with the names provided in <code>sys.argv</code> (the arguments to the program):</p> <pre><code>images = map(Image.open, sys.argv[1:-1]) </code></pre> <p>You calculate the new width (the sum of the widths of all opened images, where each width is <code>i.size[0]</code>):</p> <pre><code>w = sum(i.size[0] for i in images) </code></pre> <p>The height should be equal to the height of the tallest image (this way it can fit each of them):</p> <pre><code>mh = max(i.size[1] for i in images) </code></pre> <p>Create an image with the calculated dimensions:</p> <pre><code>result = Image.new("RGBA", (w, mh)) </code></pre> <p>For each opened image, insert it (with the <code>paste</code> function) at the point <code>x</code> pixels from the left and 0 from the top, then add the width of the inserted image to <code>x</code> so that the next one is adjacent, not overlapping:</p> <pre><code>x = 0 for i in images: result.paste(i, (x, 0)) x += i.size[0] </code></pre> <p>Save the image:</p> <pre><code>result.save(sys.argv[-1]) </code></pre> <p>The error processing you see at the top has nothing to do with the process of merging images; it merely checks that the correct number of arguments was passed to the program.</p>
-1
2016-08-22T15:44:16Z
[ "python", "image" ]
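The pasting arithmetic from the answer above can be sketched without any image files, using NumPy arrays as stand-ins for the images (this illustrates the offset logic only, not the PIL API itself):

```python
import numpy as np

# Two "images" of different sizes; shape is (height, width),
# mirroring i.size[1] and i.size[0] in the PIL script.
img_a = np.ones((3, 2))        # height 3, width 2
img_b = np.full((2, 4), 2.0)   # height 2, width 4

w = img_a.shape[1] + img_b.shape[1]       # total width, like sum(i.size[0] ...)
mh = max(img_a.shape[0], img_b.shape[0])  # tallest image, like max(i.size[1] ...)

result = np.zeros((mh, w))

# Paste each image at x pixels from the left, then advance x by
# its width so the next image lands adjacent, not overlapping.
x = 0
for img in (img_a, img_b):
    h, iw = img.shape
    result[:h, x:x + iw] = img
    x += iw

print(result.shape)  # (3, 6)
```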
python file read "int too large to convert to c long"
39,083,644
<p>On my (presumably 64-bit Windows, 64-bit 2.7 python) installation, the file read function is using a 4 byte c_long (signed long). I tested the base python file read function and I can't pass in an offset of more than the max signed long integer value (2,147,483,647). Not sure if this is due to a problem with my python installation, or if this is truly the max limit for reading from a file in python...</p> <p>My test code is below:</p> <pre><code>import sys import platform inFileName = r'C:\Projects\Tampa\LASPY_EVLR\LAS_DATA\input\Large_LAS\20505.las' bit32_offset_signedlong = 2147483647 print("python version: " + sys.version) print("platform: " + str(platform.architecture())) print("------------------------------") fileref = open(inFileName, "r") print("starting 32bit max read") datpart_32bitmax = fileref.read(bit32_offset_signedlong) print("------------------------------") print("starting 32bit max plus one read") datpart_32bitmaxplus1 = fileref.read(bit32_offset_signedlong + 1) print("------------------------------") </code></pre> <p>This produces output like this:</p> <pre><code>python version: 2.7.12 |Continuum Analytics, Inc.| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)] platform: ('64bit', 'WindowsPE') ------------------------------ starting 32bit max read ------------------------------ starting 32bit max plus one read Traceback (most recent call last): File "C:\Projects\Tampa\LASPY_EVLR\check_clong.py", line 18, in &lt;module&gt; datpart_32bitmaxplus1 = fileref.read(bit32_offset_signedlong + 1) OverflowError: Python int too large to convert to C long Press any key to continue . . . </code></pre> <p>Is this normal? 
I thought that python could read an "unlimited" file size (solely limited by available RAM and OS bit size) as discussed here: <a href="http://stackoverflow.com/questions/7134338/max-size-of-a-file-python-can-open">Max size of a file Python can open?</a></p> <p>I should make clear that this issue only shows up when using the offset parameter of the read method. I can read and write files larger than the 32-bit signed integer size, just when I try to read a portion of the file using the read offset parameter then the overflow error shows up. My end goal is to append some data near the tail end of the very large (6GB) file.</p> <p>Is there something wrong with my python installation? If so, maybe there is something I can do to fix this issue...</p>
2
2016-08-22T15:41:32Z
39,083,963
<p>This is happening because the function you are calling is layered on top of a C function whose size argument is a 32-bit <code>long</code>. Python integers are not limited to this range, but C <code>long</code> values are. (Note that the argument to <code>read()</code> is a byte count, not a file offset.)</p> <p>Also note that such a call would request a read of over 2GB if it ever succeeded. Are you prepared to hold a 2GB string in memory if the file exceeds that length?</p>
1
2016-08-22T15:58:37Z
[ "python", "python-2.7", "long-integer" ]
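A hedged workaround suggested by the answer above: keep every individual `read()` argument well below the 32-bit limit and accumulate chunks. The helper name is illustrative, and the sketch runs against an in-memory file for self-containment:

```python
import io

def read_exactly(f, nbytes, chunk_size=64 * 1024 * 1024):
    """Read nbytes from f using reads no larger than chunk_size,
    so no single read() argument ever approaches the C long limit."""
    parts = []
    remaining = nbytes
    while remaining > 0:
        chunk = f.read(min(chunk_size, remaining))
        if not chunk:
            break  # reached end of file early
        parts.append(chunk)
        remaining -= len(chunk)
    return b"".join(parts)

# Tiny stand-in for the 6GB file; real use would be open(path, 'rb').
f = io.BytesIO(b"x" * 1000)
data = read_exactly(f, 600, chunk_size=256)
print(len(data))  # 600
```

For appending near the tail of a very large file, the same idea applies: `seek()` to the region of interest (seek offsets are 64-bit safe) and read in bounded chunks rather than in one giant call.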
illegal instruction (core dumped) message when displaying generated image using numpy and matplotlib
39,083,742
<p>When running the following code segment, the last two lines of code <code>plt.imshow(X[0,:,:]) plt.show()</code> keep generating the error message of <code>Illegal instruction (core dumped)</code> The X shape is <code>(1, 572, 572)</code>. May I know what can be the reason of this?</p> <pre><code>import os import numpy as np import matplotlib.pyplot as plt import pylab from scipy.ndimage.filters import gaussian_filter from scipy import ndimage np.random.seed(1234) pylab.rcParams['figure.figsize'] = (10.0, 8.0) nx = 572 ny = 572 sigma = 10 plateau_min = -2 plateau_max = 2 r_min = 1 r_max = 200 def create_image_and_label(nx,ny): x = np.int(np.random.rand(1)[0]*nx) y = np.int(np.random.rand(1)[0]*ny) image = np.ones((nx,ny)) label = np.ones((nx,ny)) image[x,y] = 0 image_distance = ndimage.morphology.distance_transform_edt(image) r = np.random.rand(1)[0]*(r_max-r_min)+r_min plateau = np.random.rand(1)[0]*(plateau_max-plateau_min)+plateau_min label[image_distance &lt;= r] = 0 label[image_distance &gt; r] = 1 label = (1 - label) image_distance[image_distance &lt;= r] = 0 image_distance[image_distance &gt; r] = 1 image_distance = (1 - image_distance)*plateau image = image_distance + np.random.randn(nx,ny)/sigma return image, label[92:nx-92,92:nx-92] def create_batch(nx,ny,n_image): X = np.zeros((n_image,nx,ny)) Y = np.zeros((n_image,nx-184,ny-184,2)) for i in range(n_image): X[i,:,:],Y[i,:,:,1] = create_image_and_label(nx,ny) Y[i,:,:,0] = 1-Y[i,:,:,1] return X,Y X,Y = create_batch(nx,ny,1) print(X.shape) plt.imshow(X[0,:,:]) plt.show() </code></pre>
0
2016-08-22T15:46:17Z
39,083,859
<p>Works fine for me. Try upgrading and/or reinstalling your libraries and/or Python. </p> <p>If it still fails, consider using <code>gdb</code> to get a stack trace (hint: <code>gdb /path/to/your/python</code>). Then submit a <a href="http://matplotlib.org/faq/troubleshooting_faq.html" rel="nofollow">bug report</a> to matplotlib, as suggested by @nneonneo.</p>
0
2016-08-22T15:53:21Z
[ "python", "numpy", "image-processing", "matplotlib", "scipy" ]
Python give number of args to an imported function
39,083,767
<p>I'm trying to give an argument to an imported function.<br> I have <code>base.py</code>:</p> <pre><code>import sc1 #import sc1.py from threading import Thread Thread(target=sc1.main,args="John").start() </code></pre> <p>And a function in <code>sc1.py</code>:</p> <pre><code>def main(name): print "Hello ",name </code></pre> <p>It says:</p> <blockquote> <p>TypeError: main() takes exactly 1 argument (4 given)</p> </blockquote> <p>If I give just one character, <code>args="J"</code>, then it works fine.<br> Does anyone have any idea what I can do?</p>
2
2016-08-22T15:48:01Z
39,083,786
<p>You want to pass a tuple of args:</p> <pre><code>Thread(target=sc1.main,args=("John",)).start() </code></pre> <p>In your case, since strings are iterable, the <code>Thread</code> is trying to unpack <code>"J", "o", "h", "n"</code> as the arguments rather than passing the entire thing as an atomic unit.</p>
4
2016-08-22T15:49:24Z
[ "python", "function", "arguments" ]
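A sketch showing the tuple form working end to end (the `received` list is just a way to observe what the thread was given):

```python
from threading import Thread

received = []

def main(name):
    received.append(name)

# args must be an iterable of arguments; a one-element tuple
# passes the whole string as a single argument.
t = Thread(target=main, args=("John",))
t.start()
t.join()  # wait for the thread so received is populated

print(received)  # ['John']
```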
Python give number of args to an imported function
39,083,767
<p>I'm trying to give an argument to an imported function.<br> I have <code>base.py</code>:</p> <pre><code>import sc1 #import sc1.py from threading import Thread Thread(target=sc1.main,args="John").start() </code></pre> <p>And a function in <code>sc1.py</code>:</p> <pre><code>def main(name): print "Hello ",name </code></pre> <p>It says:</p> <blockquote> <p>TypeError: main() takes exactly 1 argument (4 given)</p> </blockquote> <p>If I give just one character, <code>args="J"</code>, then it works fine.<br> Does anyone have any idea what I can do?</p>
2
2016-08-22T15:48:01Z
39,083,854
<p>Call the <code>Thread</code> as:</p> <pre><code>Thread(target=sc1.main,args=["John"]).start() </code></pre> <p><strong>Explanation:</strong></p> <p>It throws an error in your case because <code>args</code> is expected to be a <code>list</code> or <code>tuple</code>. When you pass <code>"John"</code> directly, it gets unpacked as <code>["J", "o", "h", "n"]</code>, i.e. as a sequence of characters.</p>
2
2016-08-22T15:53:10Z
[ "python", "function", "arguments" ]
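Why exactly four arguments were reported can be seen by unpacking the string directly, without any threads involved (a sketch):

```python
def main(name):
    return "Hello " + name

# A string is an iterable of its characters, so unpacking it
# yields one argument per character: four for "John".
assert tuple("John") == ('J', 'o', 'h', 'n')

try:
    main(*"John")   # same as main('J', 'o', 'h', 'n')
except TypeError as exc:
    print(exc)      # e.g. "main() takes 1 positional argument but 4 were given"

print(main(*["John"]))  # Hello John -- the list keeps the string whole
```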
Newbie inquiry about NameError
39,083,806
<p>Total newbie question, for which I have searched the site. I am running a really simple program in Chapter 2 of Automate the Boring Stuff and I keep getting a NameError. The first line is:</p> <p><code>if name == 'Alice':</code></p> <p>And it results in:</p> <pre><code>NameError: name 'name' is not defined </code></pre> <p>Any thoughts on this? I cannot find this NameError in the index or on any sites.</p> <p>Thanks</p>
-4
2016-08-22T15:50:32Z
39,083,921
<p>In the <a href="https://automatetheboringstuff.com/chapter2/" rel="nofollow">book</a> you missed this comment above the code: "<em>(Pretend name was assigned some value earlier.)</em>". So you need to do that. For example (assuming Python 3):</p> <pre><code>name = input("Please enter your name: ") if name == 'Alice': print('Hi, Alice.') </code></pre> <p>By the way, next time you are searching for this kind of thing in a search engine, prefix the exception type with "python", for example "python NameError".</p>
2
2016-08-22T15:56:41Z
[ "python", "nameerror" ]
Django relation error when running make migrations
39,083,865
<p>Hey I am attempting to initialize a new database, but I am running into some issues setting up the migrations. The error I am getting appears to stem from setting up my forms. In a form I am using, I am creating a choice field as so:</p> <pre><code>from django import forms from ..custom_admin import widgets, choices class MemberForm(forms.Form): provinces = forms.ChoiceField(label='Provinces', choices=choices.PROVINCE_CHOICES, required=True) </code></pre> <p>where PROVINCE_CHOICES comes from here:</p> <pre><code>from ..base.models import ProvinceCode PROVINCE_CHOICES = [] for province in ProvinceCode.objects.filter(country_code_id=1).order_by('code'): PROVINCE_CHOICES.append((province.code, province.code)) </code></pre> <p>The issue seems to be that this loop is being called before the migrations occur, giving me an error stating that the Province model does not exist. Commenting out the reference to this file allows the migrations to work, however, that seems like an impractical solution for continued use. Is there a way to get around this error?</p> <p>For reference, here is the error I get when I run <code>manage.py makemigrations</code>:</p> <pre><code>./manage.py makemigrations Traceback (most recent call last): File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params) psycopg2.ProgrammingError: relation "pc_psr_code" does not exist LINE 1: ...escription", "pc_psr_code"."country_code_id" FROM "pc_psr_co... 
^ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "./manage.py", line 9, in &lt;module&gt; django.setup() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/__init__.py", line 18, in setup apps.populate(settings.INSTALLED_APPS) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/apps/registry.py", line 115, in populate app_config.ready() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/debug_toolbar/apps.py", line 15, in ready dt_settings.patch_all() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/debug_toolbar/settings.py", line 228, in patch_all patch_root_urlconf() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/debug_toolbar/settings.py", line 216, in patch_root_urlconf reverse('djdt:render_panel') File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 568, in reverse app_list = resolver.app_dict[ns] File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 360, in app_dict self._populate() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 293, in _populate for pattern in reversed(self.url_patterns): File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 417, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/utils/functional.py", line 33, in __get__ res = instance.__dict__[self.name] = 
self.func(instance) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 410, in urlconf_module return import_module(self.urlconf_name) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 986, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 969, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 958, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 673, in _load_unlocked File "&lt;frozen importlib._bootstrap_external&gt;", line 662, in exec_module File "&lt;frozen importlib._bootstrap&gt;", line 222, in _call_with_frames_removed File "/Users/js/Documents/app/platform/test/pc/urls.py", line 7, in &lt;module&gt; from .custom_admin import urls as custom_urls File "/Users/js/Documents/app/platform/test/pc/custom_admin/urls.py", line 3, in &lt;module&gt; from ..party import views as party_views File "/Users/js/Documents/app/platform/test/pc/party/views.py", line 1, in &lt;module&gt; from ..party import forms File "/Users/js/Documents/app/platform/test/pc/party/forms.py", line 2, in &lt;module&gt; from ..custom_admin import widgets, choices File "/Users/js/Documents/app/platform/test/pc/custom_admin/choices.py", line 9, in &lt;module&gt; for province in ProvinceCode.objects.filter(country_code_id=1).order_by('code'): File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/models/query.py", line 258, in __iter__ self._fetch_all() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/models/query.py", line 1074, in _fetch_all self._result_cache = list(self.iterator()) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__ results = 
compiler.execute_sql() File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql cursor.execute(sql, params) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 79, in execute return super(CursorDebugWrapper, self).execute(sql, params) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise raise value.with_traceback(tb) File "/Users/js/Documents/VirtualEnvironments/pcenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params) django.db.utils.ProgrammingError: relation "pc_psr_code" does not exist LINE 1: ...escription", "pc_psr_code"."country_code_id" FROM "pc_psr_co... </code></pre> <p>Province model:</p> <pre><code>class ProvinceCode(models.Model): code = models.CharField(blank=False, null=False, unique=True) country_code = models.ForeignKey('CountryCode', blank=False, null=True) </code></pre>
0
2016-08-22T15:54:05Z
39,084,645
<p>You cannot execute queries during the initialization of the app registry. Your <code>choices.py</code> file is indirectly imported during this time, resulting in the error. To fix this issue, you can pass a callable to <code>choices</code>, so the query only runs when the form is actually used:</p> <pre><code>def get_provinces(): province_choices = [] for province in ProvinceCode.objects.filter(country_code_id=1).order_by('code'): province_choices.append((province.code, province.code)) return province_choices class MemberForm(forms.Form): provinces = forms.ChoiceField(label='Provinces', choices=get_provinces, required=True) </code></pre>
2
2016-08-22T16:40:15Z
[ "python", "django", "django-forms", "django-admin", "django-migrations" ]
Python can't load modules from a large zipfile
39,083,913
<p>I've been packing up a script and some resources into a executable zipfile using the technique in (for example) <a href="http://blog.ablepear.com/2012/10/bundling-python-files-into-stand-alone.html" rel="nofollow">this blog post</a></p> <p>The process suddenly stopped working, and I think it has to do with the zip64 extension. When I try to run the executable zipfile I get: </p> <p>/usr/bin/python: can't find '<strong>main</strong>' module in '/path/to/my_app.zip'</p> <p>I believe that the only change is that one of the resource files (a disk image) has gotten larger. I've verified that <code>__main__.py</code> is still in the root of the archive. The size of the zipfile used to be 600MB, and is now 2.5GB. I noticed in the <a href="https://docs.python.org/2/library/zipimport.html" rel="nofollow">zipimport docs</a> the following statement:</p> <blockquote> <p>ZIP archives with an archive comment are currently not supported.</p> </blockquote> <p>Reading through <a href="https://en.wikipedia.org/wiki/Zip_%28file_format%29" rel="nofollow">the wikipedia article</a> on the zipfile format I see that:</p> <blockquote> <p>The .ZIP file format allows for a comment containing up to 65,535 bytes of data to occur at the end of the file after the central directory.[25] </p> </blockquote> <p>And later, regarding <strong>zip64</strong>:</p> <blockquote> <p>In essence, it uses a "normal" central directory entry for a file, followed by an optional "zip64" directory entry, which has the larger fields.[29]</p> </blockquote> <p>Inferring a bit, it sounds like this might be what's happening: my zipfile has grown to require the zip64 extension. 
The zip64 extension data is stored in the comment section so now there is an active comment section, and python's <code>zipimport</code> is refusing to read my zipfile.</p> <p>Can anyone provide guidance on:</p> <ol> <li>verifying the cause of why python can't find <code>__main__.py</code> in my zipfile</li> <li>providing any workaround </li> </ol> <p>Note that the image file has always been 16GB in size, however it used to only occupy 600MB on the disk (it resides on an ext4 filesystem, if that matters). It now occupies > 7GB on disk. From the wikipedia page:</p> <blockquote> <p>The original .ZIP format had a 4 GiB limit on various things (uncompressed size of a file, compressed size of a file and total size of the archive)</p> </blockquote> <p>I build the zipfile using a python script so in order to try and work around this issue, I add the python code to the zipfile before adding the image file. The thought being that python might simply ignore the comment section and see a valid zipfile that contains the python code but not the large image file. This doesn't appear to be the case.</p>
0
2016-08-22T15:56:12Z
39,107,193
<p>Digging into the python source code, in zipimport.c, we can see that indeed it looks for the <code>end-of-central-directory</code> block in the last 22 bytes of the file. It does not search backwards from there if it doesn't find a valid end-of-central-directory record (and I guess that makes it not a compliant zip parser). In any case, what it does do is look at the offset and size of the central directory reported in the end-of-central-directory record. <code>offset + size</code> should be the location of the end-of-central-directory record. If it is not, it computes the difference between the actual and expected location and adds this offset to all offsets in the central directory. This means that python supports loading modules from a zipfile which is catted to the end of another file.</p> <p>It appears, however, that the zipimport implementation for 2.7.6 (distributed with ubuntu 14.04) is broken for large zipfiles (greater than 2GB? I think the max signed 32-bit long). It works, however, for python 3.4.3 (also distributed with ubuntu 14.04). zipimport.c has changed sometime between 2.7.6 and 2.7.12, so it may work for newer python 2's.</p> <p>My solution is to pack a resource zip and a code zip, then cat them together, and run the app with python3 instead of python2. I write the offset and size of the resource zip to a metadata file in the code zip. The code uses this information and <code>filechunkio</code> to get a <code>zipfile.ZipFile</code> for the resource zip segment of the packfile.</p> <p>Not a great solution, but it works.</p>
0
2016-08-23T17:16:13Z
[ "python" ]
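The pack-and-slice idea from the answer above can be sketched entirely in memory, with `io.BytesIO` standing in for `filechunkio` (the file names and payloads here are illustrative, not the author's actual code):

```python
import io
import zipfile

def make_zip(member, payload):
    """Build a single-member zip archive in memory and return its bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(member, payload)
    return buf.getvalue()

# Resource zip first, code zip catted after it, as described above;
# python3's zipimport finds the code zip's directory at the tail.
resource_zip = make_zip("image.bin", b"binary payload")
code_zip = make_zip("__main__.py", "print('hello')")
packfile = resource_zip + code_zip

# Metadata the code zip would carry: where the resource segment lives.
offset, size = 0, len(resource_zip)

# Expose just that segment as a file object and open it as a zip,
# the role filechunkio plays against the on-disk packfile.
segment = io.BytesIO(packfile[offset:offset + size])
with zipfile.ZipFile(segment) as zf:
    print(zf.read("image.bin"))  # b'binary payload'
```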
Could not find a version that satisfies the requirement easy_install (from versions: )
39,083,949
<blockquote> <p>Python 2.7.12</p> <p>pip 8.1.2</p> <p>ubuntu-16.04</p> </blockquote> <p>I'm trying to install <code>pycurl</code> using:</p> <pre><code>pip install pycurl </code></pre> <p>This is what I get:</p> <blockquote> <p>Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8EU20I/pycurl/</p> </blockquote> <p>So, I tried updating setuptools like this:</p> <pre><code>pip install --upgrade easy_install -U setuptools </code></pre> <p>I got:</p> <blockquote> <p>Could not find a version that satisfies the requirement easy_install (from versions: ) No matching distribution found for easy_install</p> </blockquote> <p>I have no idea what I'm missing. Please help me out!</p>
0
2016-08-22T15:57:50Z
39,084,081
<p>The issue here is that you are attempting to upgrade the <code>setuptools</code> that came installed in your system Python, which requires changes to areas of the file system a "normal" user won't have (it requires root privileges).</p> <p>Prefixing the command with <code>sudo</code> might help, but you should ask yourself whether you really want to change the system Python, since some OSs require Python "as installed" for various system purposes.</p> <p>It's a lot safer to install a second copy of Python somewhere you have write access to (I personally tend to use <code>/usr/local</code> but YMMV) and then you won't need to worry about breaking your system. Further, as long as you set your PATH to include <code>/usr/local/bin</code> you can just use the <code>python</code> command to run it.</p>
0
2016-08-22T16:05:19Z
[ "python", "python-2.7", "pip", "easy-install", "pycurl" ]
Could not find a version that satisfies the requirement easy_install (from versions: )
39,083,949
<blockquote> <p>Python 2.7.12</p> <p>pip 8.1.2</p> <p>ubuntu-16.04</p> </blockquote> <p>I'm trying to install <code>pycurl</code> using:</p> <pre><code>pip install pycurl </code></pre> <p>This is what I get:</p> <blockquote> <p>Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-8EU20I/pycurl/</p> </blockquote> <p>So, I tried updating setuptools like this:</p> <pre><code>pip install --upgrade easy_install -U setuptools </code></pre> <p>I got:</p> <blockquote> <p>Could not find a version that satisfies the requirement easy_install (from versions: ) No matching distribution found for easy_install</p> </blockquote> <p>I have no idea what I'm missing. Please help me out!</p>
0
2016-08-22T15:57:50Z
39,899,596
<pre><code>sudo apt-get install python-pycurl </code></pre> <p>this solved the problem.</p>
0
2016-10-06T15:08:12Z
[ "python", "python-2.7", "pip", "easy-install", "pycurl" ]
Selenium wait for element to be clickable python
39,083,974
<p>All, I'm needing a little assistance with Selenium waits. I can't seem to figure out how to wait for an element to be ready.</p> <p>The element that I am needing to wait I can locate and click using my script via the code below...</p> <pre><code>CreateJob = driver.find_element_by_xpath(".//*[@id='line']/div[1]/a") </code></pre> <p>or </p> <pre><code>CreateJob = driver.find_element_by_partial_link_text("Create Activity") </code></pre> <p>I'm needing to wait for this element to be on the page and clickable before I try to click on the element. </p> <p>I can use the <code>sleep</code> command, but I have to wait for 5 seconds or more and it seems to be unreliable and errors out 1 out of 8 times or so. </p> <p>I can't seem to find the correct syntax to use. </p> <p>the HTML code for this is below.</p> <pre><code>&lt;document&gt; &lt;html manifest="https://tddf/index.php?m=manifest&amp;a=index"&gt; &lt;head&gt; &lt;body class="my-own-class mozilla mozilla48 mq1280 lt1440 lt1680 lt1920 themered" touch-device="not"&gt; &lt;noscript style="text-align: center; display: block;"&gt;Please enable JavaScript in your browser settings.&lt;/noscript&gt; &lt;div id="wait" style="display: none;"&gt; &lt;div id="processing" class="hidden" style="display: none;"/&gt; &lt;div id="loading" class="hidden" style="display: none;"/&gt; &lt;div id="loadingPartsCatalog" class="hidden"/&gt; &lt;div id="panel"&gt; &lt;div id="top-toolbar" class="hidden" style="display: block;"&gt; &lt;div id="commands-line" class="hidden" style="display: block;"&gt; &lt;div id="line"&gt; &lt;div class="action-link"&gt; &lt;a class="tap-active" href="#m=activity/a=set" action_link_label="create_activity" component_gui="action" component_type="action"&gt;Create Activity&lt;/a&gt; &lt;/div&gt; &lt;div class="action-link"&gt; &lt;div class="action-link"&gt; &lt;div class="action-link"&gt; &lt;/div&gt; &lt;div id="commands-more" style="display: none;"&gt; &lt;div id="commands-list" class="hidden"&gt; &lt;/div&gt; 
&lt;div id="provider-search-bar" class="hidden center" </code></pre>
1
2016-08-22T15:59:16Z
39,084,035
<p>If you are in Java, try the WebDriverWait syntax below - </p> <pre><code>new WebDriverWait(driver,10,100).until(ExpectedConditions.visibilityOf(CreateJob)); </code></pre> <p>This waits up to 10 seconds, polling every 100 msec, for the CreateJob element to be visible. You can use <code>ExpectedConditions.elementToBeClickable(CreateJob)</code> as well.</p>
0
2016-08-22T16:03:03Z
[ "python", "selenium" ]
Selenium wait for element to be clickable python
39,083,974
<p>All, I'm needing a little assistance with Selenium waits. I can't seem to figure out how to wait for an element to be ready.</p> <p>The element that I am needing to wait I can locate and click using my script via the code below...</p> <pre><code>CreateJob = driver.find_element_by_xpath(".//*[@id='line']/div[1]/a") </code></pre> <p>or </p> <pre><code>CreateJob = driver.find_element_by_partial_link_text("Create Activity") </code></pre> <p>I'm needing to wait for this element to be on the page and clickable before I try to click on the element. </p> <p>I can use the <code>sleep</code> command, but I have to wait for 5 seconds or more and it seems to be unreliable and errors out 1 out of 8 times or so. </p> <p>I can't seem to find the correct syntax to use. </p> <p>the HTML code for this is below.</p> <pre><code>&lt;document&gt; &lt;html manifest="https://tddf/index.php?m=manifest&amp;a=index"&gt; &lt;head&gt; &lt;body class="my-own-class mozilla mozilla48 mq1280 lt1440 lt1680 lt1920 themered" touch-device="not"&gt; &lt;noscript style="text-align: center; display: block;"&gt;Please enable JavaScript in your browser settings.&lt;/noscript&gt; &lt;div id="wait" style="display: none;"&gt; &lt;div id="processing" class="hidden" style="display: none;"/&gt; &lt;div id="loading" class="hidden" style="display: none;"/&gt; &lt;div id="loadingPartsCatalog" class="hidden"/&gt; &lt;div id="panel"&gt; &lt;div id="top-toolbar" class="hidden" style="display: block;"&gt; &lt;div id="commands-line" class="hidden" style="display: block;"&gt; &lt;div id="line"&gt; &lt;div class="action-link"&gt; &lt;a class="tap-active" href="#m=activity/a=set" action_link_label="create_activity" component_gui="action" component_type="action"&gt;Create Activity&lt;/a&gt; &lt;/div&gt; &lt;div class="action-link"&gt; &lt;div class="action-link"&gt; &lt;div class="action-link"&gt; &lt;/div&gt; &lt;div id="commands-more" style="display: none;"&gt; &lt;div id="commands-list" class="hidden"&gt; &lt;/div&gt; 
&lt;div id="provider-search-bar" class="hidden center" </code></pre>
1
2016-08-22T15:59:16Z
39,084,357
<p>Here is a link to the 'waiting' section of the Python Selenium docs: <a href="http://selenium-python.readthedocs.io/waits.html#explicit-waits" rel="nofollow">http://selenium-python.readthedocs.io/waits.html#explicit-waits</a></p> <p>Your wait should look like this (with <code>WebDriverWait</code> imported from <code>selenium.webdriver.support.ui</code>, <code>expected_conditions as EC</code> from <code>selenium.webdriver.support</code>, and <code>By</code> from <code>selenium.webdriver.common.by</code>):</p> <pre><code>element = WebDriverWait(driver, 10).until( EC.element_to_be_clickable((By.XPATH, ".//*[@id='line']/div[1]/a")) ) </code></pre>
2
2016-08-22T16:21:35Z
[ "python", "selenium" ]
Selenium wait for element to be clickable python
39,083,974
<p>All, I'm needing a little assistance with Selenium waits. I can't seem to figure out how to wait for an element to be ready.</p> <p>The element that I am needing to wait I can locate and click using my script via the code below...</p> <pre><code>CreateJob = driver.find_element_by_xpath(".//*[@id='line']/div[1]/a") </code></pre> <p>or </p> <pre><code>CreateJob = driver.find_element_by_partial_link_text("Create Activity") </code></pre> <p>I'm needing to wait for this element to be on the page and clickable before I try to click on the element. </p> <p>I can use the <code>sleep</code> command, but I have to wait for 5 seconds or more and it seems to be unreliable and errors out 1 out of 8 times or so. </p> <p>I can't seem to find the correct syntax to use. </p> <p>the HTML code for this is below.</p> <pre><code>&lt;document&gt; &lt;html manifest="https://tddf/index.php?m=manifest&amp;a=index"&gt; &lt;head&gt; &lt;body class="my-own-class mozilla mozilla48 mq1280 lt1440 lt1680 lt1920 themered" touch-device="not"&gt; &lt;noscript style="text-align: center; display: block;"&gt;Please enable JavaScript in your browser settings.&lt;/noscript&gt; &lt;div id="wait" style="display: none;"&gt; &lt;div id="processing" class="hidden" style="display: none;"/&gt; &lt;div id="loading" class="hidden" style="display: none;"/&gt; &lt;div id="loadingPartsCatalog" class="hidden"/&gt; &lt;div id="panel"&gt; &lt;div id="top-toolbar" class="hidden" style="display: block;"&gt; &lt;div id="commands-line" class="hidden" style="display: block;"&gt; &lt;div id="line"&gt; &lt;div class="action-link"&gt; &lt;a class="tap-active" href="#m=activity/a=set" action_link_label="create_activity" component_gui="action" component_type="action"&gt;Create Activity&lt;/a&gt; &lt;/div&gt; &lt;div class="action-link"&gt; &lt;div class="action-link"&gt; &lt;div class="action-link"&gt; &lt;/div&gt; &lt;div id="commands-more" style="display: none;"&gt; &lt;div id="commands-list" class="hidden"&gt; &lt;/div&gt; 
&lt;div id="provider-search-bar" class="hidden center" </code></pre>
1
2016-08-22T15:59:16Z
39,091,572
<p>I find this to be the easiest: </p> <pre><code>driver.implicitly_wait(10) </code></pre> <p>With this, the driver waits up to 10 seconds for an element to appear before the script crashes if it never does. I think it's better than always checking for the visibility of, the clickability of, or whatever it is about the element, though it can be less precise and more error prone than explicit waits, so it depends on why you use Selenium.</p> <p>It also lets me cut down on try/except statements in my Selenium scripts, and since finding out about this I've removed many time.sleep() calls as well.</p>
0
2016-08-23T02:50:45Z
[ "python", "selenium" ]
Django SMTP library generates wrong smtp auth plain authentication string
39,083,998
<p>I encounter a strange, reproducible issue on different setups (localhost, production systems) with Django's smtp library:</p> <p>the auth plain authentication string is totally off the mark. This is what the python console returns: <code> sys.version_info sys.version_info(major=3, minor=5, micro=0, releaselevel='final', serial=0) </code> <code> from email.base64mime import body_encode as encode_base64 encode_base64(("\0%s\0%s" % ("User Name", "1234123241-23421334")).encode("ascii"), eol='') 'AFVzZXIgTmFtZQAxMjM0MTIzMjQxLTIzNDIxMzM0' </code></p> <p>However when I send email through python's smtplib.py this is the auth plain authentication string that is generated by smtplib.py:</p> <p><code> AFVzZXIgTmFtZQBiJzEyMzQxMjMyNDEtMjM0MjEzMzQn </code></p> <p>It's generated here: <a href="https://github.com/python/cpython/blob/master/Lib/smtplib.py#L629" rel="nofollow">https://github.com/python/cpython/blob/master/Lib/smtplib.py#L629</a></p> <p>Example: <a href="https://gist.github.com/macolo/bf2811c14d985d013dda0741bfd339e0" rel="nofollow">https://gist.github.com/macolo/bf2811c14d985d013dda0741bfd339e0</a></p> <p>I am in dire need of a clue regarding this, thank you.</p>
0
2016-08-22T16:01:10Z
39,099,854
<p>aldryn-emailsettings is being deprecated and it does not support python3 yet. Use a recent version of aldryn-django (1.8.15.2 or 1.9.8.2 and higher) and set the email connection as an <code>EMAIL_URL</code> environment variable instead. See <a href="https://pypi.python.org/pypi/dj-email-url/" rel="nofollow">https://pypi.python.org/pypi/dj-email-url/</a> for the supported backends.</p>
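For illustration, an `EMAIL_URL` environment variable might look like the fragment below (the host, port and credentials are hypothetical; check the dj-email-url docs for the exact schemes supported):

```shell
# Hypothetical SMTP settings -- substitute your real provider's values.
export EMAIL_URL="smtp://username:password@smtp.example.com:587"
```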
0
2016-08-23T11:25:07Z
[ "python", "django", "smtp" ]
Changed behaviour of ImageEnhance.Brightness in Pillow
39,084,077
<p>I'm trying to paste image B over image A with half opacity (i.e. pasted image is half transparent). </p> <p>In version 2.1.0 of pillow the following code worked, in version 3.3.1 it no longer works:</p> <blockquote> <pre><code>A = Image.open('A.png') B = Image.open('B.png') enhancer = ImageEnhance.Brightness(B) mask = enhancer.enhance(0.5) print(mask.getpixel((10,10)), mask.getpixel((30,30))) mask.save('Mask.png') A.paste(B, (0,0), mask) A.save('Result.png') </code></pre> </blockquote> <p>Image A is a black 'A' on a white background</p> <p>Image B is a red 'B' on a transparent background</p> <p>Images are provided below</p> <p>Version 2.1.0 produces (127,0,0,127) for pixel 30,30 of the mask</p> <p>Version 3.3.1 produces (127,0,0,255) for pixel 30,30 of the mask</p> <p><a href="http://i.stack.imgur.com/J9yVz.png" rel="nofollow">Image A</a> <a href="http://i.stack.imgur.com/z1BZE.png" rel="nofollow">Image B</a></p>
0
2016-08-22T16:05:11Z
39,086,273
<p>Pillow is correct, changing the brightness of a pixel <em>should not</em> change its transparency. Obviously there was a bug in PIL.</p> <p>What you really want is to split the alpha from image B and turn <em>that</em> into a mask. Using the technique from <a href="http://stackoverflow.com/a/1963146/5987">this answer</a>:</p> <pre><code>mask = B.split()[-1] enhancer = ImageEnhance.Brightness(mask) mask = enhancer.enhance(0.5) </code></pre>
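A minimal sketch of this with a synthetic image (the size, colour and alpha value here are purely illustrative):

```python
from PIL import Image, ImageEnhance

# A small RGBA test image: red with alpha 200.
B = Image.new('RGBA', (4, 4), (255, 0, 0, 200))

# Split off just the alpha band and halve it; the RGB bands are untouched.
alpha = B.split()[-1]
alpha = ImageEnhance.Brightness(alpha).enhance(0.5)

print(alpha.getpixel((0, 0)))  # 100 -- half of the original alpha of 200
```

The dimmed band can then be passed as the `mask` argument to `paste`, which gives the half-opacity effect without altering image B's colours.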
1
2016-08-22T18:23:35Z
[ "python", "image", "pillow", "brightness" ]
Receive and process a SOAP message
39,084,131
<p>Got a service provider (Safaricom) that has decided to use SOAP to send mobile money payment notifications to businesses. When the mobile user pays (either through USSD or via a web interface) the mobile money service will send a SOAP message that we are supposed to consume.</p> <pre><code>&lt;soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:c2b="http://cps.huawei.com/cpsinterface/c2bpayment"&gt; &lt;soapenv:Header/&gt; &lt;soapenv:Body&gt; &lt;c2b:C2BPaymentValidationRequest&gt; &lt;TransactionType&gt;PayBill&lt;/TransactionType&gt; &lt;TransID&gt;1234560000007031&lt;/TransID&gt; &lt;TransTime&gt;20140227082020&lt;/TransTime&gt; &lt;TransAmount&gt;123.00&lt;/TransAmount&gt; &lt;BusinessShortCode&gt;12345&lt;/BusinessShortCode&gt; &lt;BillRefNumber&gt;&lt;/BillRefNumber&gt; &lt;InvoiceNumber&gt;&lt;/InvoiceNumber&gt; &lt;MSISDN&gt;254722703614&lt;/MSISDN&gt; &lt;KYCInfo&gt; &lt;KYCName&gt;[Personal Details][First Name]&lt;/KYCName&gt; &lt;KYCValue&gt;Hoiyor&lt;/KYCValue&gt; &lt;/KYCInfo&gt; &lt;KYCInfo&gt; &lt;KYCName&gt;[Personal Details][Middle Name]&lt;/KYCName&gt; &lt;KYCValue&gt;G&lt;/KYCValue&gt; &lt;/KYCInfo&gt; &lt;KYCInfo&gt; &lt;KYCName&gt;[Personal Details][Last Name]&lt;/KYCName&gt; &lt;KYCValue&gt;Chen&lt;/KYCValue&gt; &lt;/KYCInfo&gt; &lt;/c2b:C2BPaymentValidationRequest&gt; &lt;/soapenv:Body&gt; &lt;/soapenv:Envelope&gt; </code></pre> <p><em>Don't worry the above details are public information</em></p> <p>Question is, using a framework like bottle (or even Django) how do I "accept" this message and how do I extract the details from within the message.</p> <p>I've used <code>suds-jurko</code> to consume Soap Services but I've never been on the receiving end of a SOAP call.</p> <p>At minimum though am able to get the message using <code>payment_data = request.body.read()</code></p> <pre><code>from bottle import request payment_data = request.body.read() print(payment_data) </code></pre> <p>From there though I've tried 
using XML parsers in python but its getting complicated. Is there a way for suds (or <a href="http://docs.python-zeep.org/en/latest/index.html#" rel="nofollow">zeep</a>) to allow me to get the data from the xml object?</p>
0
2016-08-22T16:08:31Z
39,266,508
<p>I hope it's not too late for an answer: for the C2B transactions there is a project on GitHub, <a href="https://github.com/kn9ts/project-mulla" rel="nofollow">https://github.com/kn9ts/project-mulla</a>. It takes the request from the checkout in POST form, converts it to a SOAP request, sends it to Safaricom, receives the response from Safaricom, and gives a response in JSON format.</p>
0
2016-09-01T08:32:28Z
[ "python", "soap", "python-3.4", "bottle", "suds" ]
How to write proper decorator to alter classes
39,084,190
<p>I know I can use closure and inheritance to create a decorator that alter classes.</p> <pre><code>def wrapper(cls, *args, **kwargs): class Wrapped(cls): """Modify your class here.""" return Wrapped </code></pre> <p>But if I need to test my new classes to know if they inherit <code>Wrapped</code> or not, I can't access <code>Wrapped</code> itself to do a straightforward <code>isinstance</code> or <code>issubclass</code> test.</p> <p>On the other hand, straightforward inheritance isn't an option. I have about 10 different wrapper which can need to be added to a class. That burden the hierarchy tree way too much.</p> <p>So I need a way to access the closure from the outside. Or an alternative way to build decorator.</p>
0
2016-08-22T16:11:11Z
39,084,282
<p>It sounds like you want to check whether a class has been wrapped by this particular decorator. The most efficacious method to do so may simply be to add a field to that effect, to wit:</p> <pre><code>def wrapper(cls, *args, **kwargs): class Wrapped(cls): """Modify your class here.""" Wrapped._is_wrapped_by_this_wrapper = True return Wrapped </code></pre> <p>Then you can check <code>hasattr</code> and <code>getattr</code> of <code>_is_wrapped_by_this_wrapper</code>.</p> <p>If you have multiple wrapper classes that work with each other you may be able to come up with a solution that works better together, e.g. perhaps a <code>set</code> consisting of all the names of the wrappers that have been applied.</p>
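For example, the check could then look like this (a minimal sketch; the class names are illustrative):

```python
def wrapper(cls, *args, **kwargs):
    class Wrapped(cls):
        """Modify your class here."""
    Wrapped._is_wrapped_by_this_wrapper = True
    return Wrapped

class Plain:
    pass

Decorated = wrapper(Plain)

# The marker attribute tells wrapped classes apart from plain ones.
print(getattr(Decorated, '_is_wrapped_by_this_wrapper', False))  # True
print(getattr(Plain, '_is_wrapped_by_this_wrapper', False))      # False
```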
2
2016-08-22T16:17:17Z
[ "python", "python-decorators" ]
How to write proper decorator to alter classes
39,084,190
<p>I know I can use closure and inheritance to create a decorator that alter classes.</p> <pre><code>def wrapper(cls, *args, **kwargs): class Wrapped(cls): """Modify your class here.""" return Wrapped </code></pre> <p>But if I need to test my new classes to know if they inherit <code>Wrapped</code> or not, I can't access <code>Wrapped</code> itself to do a straightforward <code>isinstance</code> or <code>issubclass</code> test.</p> <p>On the other hand, straightforward inheritance isn't an option. I have about 10 different wrapper which can need to be added to a class. That burden the hierarchy tree way too much.</p> <p>So I need a way to access the closure from the outside. Or an alternative way to build decorator.</p>
0
2016-08-22T16:11:11Z
39,084,314
<p>You could inherit from two classes, a base class and <code>cls</code>:</p> <pre><code>class WrapperBase: pass def wrapper(cls, *args, **kwargs): class Wrapped(cls, WrapperBase): """Modify your class here.""" return Wrapped </code></pre> <p>Now all instances of generated classes test <code>True</code> for <code>isinstance(obj, WrapperBase)</code>.</p> <p>Note that <code>WrapperBase</code> has no impact on finding inherited methods in the MRO; it comes dead last in any hierarchy (on Python 2, not inheriting from <code>object</code> puts it dead last in the MRO; on Python 3 it'll sit between <code>object</code> and whatever came before <code>object</code> in the MRO of the wrapped class).</p>
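A quick sketch of the resulting checks (the class names are illustrative):

```python
class WrapperBase:
    pass

def wrapper(cls, *args, **kwargs):
    class Wrapped(cls, WrapperBase):
        """Modify your class here."""
    return Wrapped

class Plain:
    pass

Decorated = wrapper(Plain)

print(issubclass(Decorated, WrapperBase))    # True
print(isinstance(Decorated(), WrapperBase))  # True
print(issubclass(Plain, WrapperBase))        # False
```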
1
2016-08-22T16:19:07Z
[ "python", "python-decorators" ]
Finding distance between two gps points in Python
39,084,244
<p>I have the method below (haversine) that returns the distance between two gps points. Table below is my dataframe.</p> <p>When I apply the function on the dataframe using, I get the error "cannot convert the series to ". Not sure whether i am missing something. Any help would be appreciated.</p> <pre><code> distdf1['distance'] = distdf1.apply(lambda x: haversine(distdf1['SLongitude'], distdf1['SLatitude'], distdf1['ClosestLong'], distdf1['ClosestLat']), axis=1) </code></pre> <p>Dataframe:</p> <pre><code>SLongitude SLatitude ClosestLong ClosestLat 0 -100.248093 25.756313 -98.220240 26.189491 1 -77.441536 38.991512 -77.481600 38.748722 2 -72.376370 40.898690 -73.662870 41.025640 </code></pre> <p>Method:</p> <pre><code>def haversine(lon1, lat1, lon2, lat2): """ Calculate the great circle distance between two points on the earth (specified in decimal degrees) """ # convert decimal degrees to radians lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) km = 6367 * c return km </code></pre>
0
2016-08-22T16:15:00Z
39,084,283
<p>Try: </p> <pre><code>distdf1.apply(lambda x: haversine(x['SLongitude'], x['SLatitude'], x['ClosestLong'], x['ClosestLat']), axis=1) </code></pre>
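Put together, a runnable sketch (with made-up coordinates, not the asker's data): the lambda's `x` is each row, so the columns must be read from `x`, not from `distdf1`.

```python
from math import radians, sin, cos, asin, sqrt
import pandas as pd

def haversine(lon1, lat1, lon2, lat2):
    # Great-circle distance in km between two (lon, lat) points.
    lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 6367 * 2 * asin(sqrt(a))

df = pd.DataFrame({'SLongitude': [0.0], 'SLatitude': [0.0],
                   'ClosestLong': [1.0], 'ClosestLat': [0.0]})

df['distance'] = df.apply(
    lambda x: haversine(x['SLongitude'], x['SLatitude'],
                        x['ClosestLong'], x['ClosestLat']),
    axis=1)
print(df['distance'].iloc[0])  # roughly 111 km for one degree of longitude at the equator
```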
0
2016-08-22T16:17:18Z
[ "python", "pandas", "dataframe" ]
Logging in an already-registered user in Django
39,084,249
<p>I have made the registration part of the login system, but I am unable to make a registered user login. How can I do this?</p> <p>Here's my code:</p> <h3>Views.py</h3> <pre><code>from django.shortcuts import render, redirect from django.contrib.auth import authenticate, login from django.views.generic import View from .forms import UserForm # def Success(request): # return render(request, 'success.html') class UserFormView(View): form_class = UserForm template_name = 'form.html' def get(self, request): form = self.form_class(None) return render(request, self.template_name, {'form': form}) def post(self, request): form = self.form_class(request.POST) if form.is_valid(): user = form.save(commit=False) # cleaned data username = form.cleaned_data['username'] password = form.cleaned_data['password'] user.save() return redirect('success.html') return render(request, self.template_name, {'form': form}) </code></pre> <h3>Urls.py</h3> <pre><code>from django.conf import settings from django.conf.urls import include, url from django.conf.urls.static import static from django.contrib import admin from .views import UserFormView urlpatterns = [ url(r'^$', UserFormView.as_view(), name='registraion'), url(r'login/', LoginView.as_view(), name='login'), #url(r'success/', Success, name='sccuess') ] </code></pre> <h3>Forms.html</h3> <pre><code> Registration Form &lt;br&gt; &lt;form action="" method="POST"&gt; {% csrf_token %} {{form.as_p}} &lt;button type="submit"&gt;SUBMIT&lt;/button&gt; &lt;/form&gt; </code></pre> <p>What I basically want is a login page from which I can login in an already-registered user (this login page is different from the page where the user registers). So basically a new view through which a registered user can log in.</p> <p>I have already made the registration page and the code for it is above. 
Also after the user is authenticated on the login page he must be redirected to another page (say home.html) and his login information must also we carried to the page where he has been redirected to.</p>
-1
2016-08-22T16:15:06Z
39,084,382
<p>You need to use <code>authenticate()</code> and <code>login()</code> after create your user. </p> <p>Also, use <code>HttpResponseRedirect</code> to redirect user to your landing page, I think, you don't need to create a "success.html".</p> <pre><code> if form.is_valid(): user = form.save(commit=False) # cleaned data username = form.cleaned_data['username'] password = form.cleaned_data['password'] user.save() auth_user = authenticate(username=username, password=password) login(self.request, auth_user) # redirect works for return HttpResponseRedirect('your_landing_url') return render(request, self.template_name, {'form': form}) </code></pre>
1
2016-08-22T16:22:47Z
[ "python", "django", "forms", "authentication" ]
SPSS Python Custom Extension Command Debugging
39,084,265
<p>I have created a Custom Extension Command in Python. I installed it, but as expected I am getting errors (quote from SPSS log output - the only way I know for debugging Python programs in SPSS):</p> <pre><code>Extension command TEST_EXTENSION could not be loaded. The module or a module that it requires may be missing, or there may be syntax errors in it. </code></pre> <p>The error is probably from the <code>.xml</code>or from the <code>Run(args)</code> function. The <code>CustomFunction()</code> I am implementing was tested thoroughly.</p> <p>What would be a good practice for debugging this, and the other potential errors ? The official <a href="https://developer.ibm.com/predictiveanalytics/wp-content/uploads/sites/48/2015/04/Writing-IBM-SPSS-Statistics-Extension-Commands1.pdf" rel="nofollow">IBM-SPSS-Statistics-Extension-Command</a> says to </p> <blockquote> <p>set the SPSS_EXTENSIONS_RAISE environment variable to "true"</p> </blockquote> <p>but I don't know how to do that, nor of this will work regardless of the source of the error.</p>
1
2016-08-22T16:16:02Z
39,102,672
<p>@horace</p> <p>You set the environment variable on Windows via the Control Panel > System > Advanced system settings > Environment Variables. The exact wording varies with different Windows versions. I usually choose System variables, although either will usually work. You need to restart Statistics after that. Once you have set this variable, errors in the Python code will produce a traceback. The traceback is ordinarily suppressed as it is of no use to users, but it is very helpful for developers.</p> <p>The traceback only appears for errors in the Python code. The "could not be loaded" error you reported happens before Python gets control, so no traceback would be produced. There are two common causes for this error. The first is that the xml file defining the extension command or the corresponding Python module was not found by Statistics. The extension command definitions are loaded at Statistics startup or by running the EXTENSION command. Execute SHOW EXT. from the Syntax Editor to see the places where Statistics looks for extension files.</p> <p>The second cause is a syntax error in the Python code. Run<br> begin program.<br> import yourmodule<br> end program.<br> to see if any errors are reported.</p> <p>More generally, there are two useful strategies for debugging. The first is to run the code in external mode, where you run the code from Python. That way you can step through the code using your IDE or the plain Python debugger. See the programmability documentation for details. There are some limitations on what can be done in external mode, but it is often a good solution.</p> <p>The second is to use an IDE that supports remote debugging. I use Wing IDE, but there are other IDEs that can do this. That lets me jump into the debugger from within Statistics, step through the Python code, and do all the other things you want in a debugger.</p> <p>HTh</p>
1
2016-08-23T13:31:18Z
[ "python", "debugging", "spss" ]
Find the index of first same group based on value in a dictionary
39,084,289
<p>I have a dictionary, which shows one individual's trip, where blank list means walk and list with content means the tube he/she took. I want to find out his/her first tube journey which are indexed as '2,3'.</p> <pre><code>specific_path_legs={0: [], 1: [], 2: ['Jubilee'], 3: ['Jubilee'], 4: [], 5: [], 6: ['Metropolitan'], 7: ['Metropolitan'], 8: ['Metropolitan'], 9: ['Metropolitan'], 10: [], 11: [], 12: [], 13: [], 14: ['Northern'], 15: ['Northern'], 16: ['Northern'], 17: ['Northern'], 18: ['Northern'], 19: [], 20: [], 21: [], 22: ['Jubilee'], 23: ['Jubilee'], 24: ['Jubilee'], 25: [], 26: [], 27: []} </code></pre> <p>I first excluded the walk part and get a legs_nonempty dictionary. </p> <pre><code>legs_nonempty={2: ['Jubilee'], 3: ['Jubilee'], 6: ['Metropolitan'], 7: ['Metropolitan'], 8: ['Metropolitan'], 9: ['Metropolitan'], 14: ['Northern'], 15: ['Northern'], 16: ['Northern'], 17: ['Northern'], 18: ['Northern'], 22: ['Jubilee'], 23: ['Jubilee'], 24: ['Jubilee']} </code></pre> <p>Then I tried </p> <pre><code>first_leg=[] for key,value in specific_path_legs.items(): if value==legs_nonempty.itervalues().next(): first_leg.append(key) </code></pre> <p>But it returned </p> <pre><code>first_leg=[2,3, 22, 23, 24] </code></pre> <p>I only need [2,3] rather than [2, 3,22, 23, 24]. Any ideas?</p>
0
2016-08-22T16:17:32Z
39,084,378
<pre><code># Sort dictionary based on keys import collections specific_path_legs = collections.OrderedDict(sorted(specific_path_legs.items())) # Store your info in another dict path_legs_dict = {} for key, value in specific_path_legs.items(): if value and value[0] not in path_legs_dict: path_legs_dict[value[0]] = key print path_legs_dict # Output: {'Jubilee': 2, 'Northern': 14, 'Metropolitan': 6} </code></pre> <p>I am using <code>collections.OrderedDict</code> because the default <code>dict</code> object in python is not ordered.</p>
0
2016-08-22T16:22:37Z
[ "python", "dictionary", "indexing", "key", "value" ]
Find the index of first same group based on value in a dictionary
39,084,289
<p>I have a dictionary, which shows one individual's trip, where blank list means walk and list with content means the tube he/she took. I want to find out his/her first tube journey which are indexed as '2,3'.</p> <pre><code>specific_path_legs={0: [], 1: [], 2: ['Jubilee'], 3: ['Jubilee'], 4: [], 5: [], 6: ['Metropolitan'], 7: ['Metropolitan'], 8: ['Metropolitan'], 9: ['Metropolitan'], 10: [], 11: [], 12: [], 13: [], 14: ['Northern'], 15: ['Northern'], 16: ['Northern'], 17: ['Northern'], 18: ['Northern'], 19: [], 20: [], 21: [], 22: ['Jubilee'], 23: ['Jubilee'], 24: ['Jubilee'], 25: [], 26: [], 27: []} </code></pre> <p>I first excluded the walk part and get a legs_nonempty dictionary. </p> <pre><code>legs_nonempty={2: ['Jubilee'], 3: ['Jubilee'], 6: ['Metropolitan'], 7: ['Metropolitan'], 8: ['Metropolitan'], 9: ['Metropolitan'], 14: ['Northern'], 15: ['Northern'], 16: ['Northern'], 17: ['Northern'], 18: ['Northern'], 22: ['Jubilee'], 23: ['Jubilee'], 24: ['Jubilee']} </code></pre> <p>Then I tried </p> <pre><code>first_leg=[] for key,value in specific_path_legs.items(): if value==legs_nonempty.itervalues().next(): first_leg.append(key) </code></pre> <p>But it returned </p> <pre><code>first_leg=[2,3, 22, 23, 24] </code></pre> <p>I only need [2,3] rather than [2, 3,22, 23, 24]. Any ideas?</p>
0
2016-08-22T16:17:32Z
39,085,733
<p>Since the keys are incremental, starting from 0, just go until you find a non-empty value:</p> <pre><code>for i in range(len(specific_path_legs)): if specific_path_legs[i]: print(i, specific_path_legs[i]) break </code></pre> <p>Which would give you:</p> <pre><code>(2, ['Jubilee']) </code></pre> <p>If you want to match a specific value also:</p> <pre><code>for i in range(len(specific_path_legs)): val = specific_path_legs[i] if val and val[0] == "Jubilee": print(i, specific_path_legs[i]) break </code></pre>
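If the goal is the whole first run of indices (e.g. [2, 3]) rather than just the first index, itertools.groupby can group consecutive empty/non-empty stretches (a sketch, assuming contiguous integer keys as in the example):

```python
from itertools import groupby

specific_path_legs = {0: [], 1: [], 2: ['Jubilee'], 3: ['Jubilee'],
                      4: [], 5: [], 6: ['Metropolitan'], 7: ['Metropolitan']}

first_leg = []
for nonempty, group in groupby(sorted(specific_path_legs),
                               key=lambda k: bool(specific_path_legs[k])):
    if nonempty:
        first_leg = list(group)
        break

print(first_leg)  # [2, 3]
```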
0
2016-08-22T17:49:26Z
[ "python", "dictionary", "indexing", "key", "value" ]
pyqt4 Setting QDateEdit value in Linux
39,084,290
<p>I am copying a date value from a table to a QDateEdit in a dialog prior to launching the dialog. When I do this, the date format changes from "yyyy-MM-dd" to "dd/MM/yy" in the dialog. This happens in Linux not in OSx. My Code:</p> <pre><code>class BuildRecordEditorDialog(QDialog, Ui_brePartEditDialog): def __init__(self): QDialog.__init__((self)) self.setupUi(self) self.breDueDateEditor.setDisplayFormat('yyyy-MM-dd') self.brePickDateEditor.setDisplayFormat('yyyy-MM-dd') # In another Module buildRecordEditDialog = BuildRecordEditorDialog() # Create an edit dialog brUi = buildRecordEditDialog brUi.setupUi(buildRecordEditDialog) brUi.breDeleteLabel.hide() # This is not a delete so hide the delete message brUi.brePartNoEditor.setText(selectedPart[1].text()) # Pre-load defaults from selected data brUi.breDescriptionEditor.setText(selectedPart[2].text()) brUi.breQuantityEditor.setText(selectedPart[3].text()) brUi.breDueDateEditor.setDate(QtCore.QDate.fromString(selectedPart[4].text(), "yyyy-MM-dd")) brUi.brePickDateEditor.setDate(QtCore.QDate.fromString(selectedPart[5].text(), "yyyy-MM-dd")) </code></pre> <hr> <p>I am using pyqt4, Python 3.5.4, Ubuntu Linux</p> <p>I have changed the locale setting for Time to: LC_TIME="en_CA.UTF-8" but it hasn't helped.</p> <p>As a side note the brUi.breDeleteLabel.hide() setting is not respected either.</p>
0
2016-08-22T16:17:43Z
39,240,067
<p>No answers were forthcoming, so I did a workaround. I converted the dates returned from the dialog from the bad format to the one I wanted and then processed them. The user sees a different format but enters data with a date picker, so all in all it will do.</p>
0
2016-08-31T02:59:09Z
[ "python", "linux", "pyqt4" ]
List item template Django
39,084,313
<p>I'm dealing creating a template on Django to show a list of items with 2 buttons that make actions.</p> <p>My form class it's:</p> <pre><code>class AppsForm(forms.Form): def __init__(self, *args, **kwargs): policiesList = kwargs.pop('policiesList', None) applicationList = kwargs.pop('applicationList', None) EC2nodesList = kwargs.pop('amazonNodesList', None) super(AppsForm, self).__init__(*args, **kwargs) self.fields['appsPolicyId'] = forms.ChoiceField(label='Application Policy', choices=policiesList) self.fields['appsId'] = forms.ChoiceField(label='Application', choices=applicationList) self.fields['ec2Nodes'] = forms.ChoiceField(label='Amazon EC2 Nodes', choices=EC2nodesList) </code></pre> <p>Now, I do the form with:</p> <pre><code>&lt;form method="post" action="" class="form-inline" role="form"&gt; &lt;div class="form-group"&gt; {% for field in form %} { field.label }}: {{ field}} {% endfor %} &lt;/div&gt; {% csrf_token %} &lt;input type="submit" class="btn btn-default btn-success" name="deployButton" value="Deploy"/&gt; &lt;input type="submit" class="btn btn-default btn-danger" name="undeployButton" value="Undeploy"/&gt; </code></pre> <p></p> <p>And the result it's:</p> <pre><code>Application Policy - Choicefield ; Application - Choicefield ; Amazon EC2 Nodes - Choicefield [Button Deploy] [Button Undeploy] </code></pre> <p>And what I'm looking for it's a way to render the form and show the list like this:</p> <pre><code>Application Policy - Choicefield ; Application - Choicefield [Button Deploy] [Button Undeploy] Amazon EC2 Nodes - Choicefield [Button Deploy] [Button Undeploy] &lt;more items if I add them in forms.py...&gt; </code></pre> <p>How I can get the proper way to render like that? </p> <p>Thanks and regards.</p>
0
2016-08-22T16:19:05Z
39,084,935
<p>You just need to change the code a bit:</p> <pre><code>{% for field in form %}
    {{ field.label }}: {{ field }}
    &lt;input type="submit" class="btn btn-default btn-success" name="deployButton" value="Deploy"/&gt;
    &lt;input type="submit" class="btn btn-default btn-danger" name="undeployButton" value="Undeploy"/&gt;
    &lt;br /&gt;
{% endfor %}
</code></pre> <p>This will create a new line for each field label and widget, each with its own pair of buttons. One thing to caution against, though: if you try to assign IDs to the buttons, they will have to be different or you'll get errors. Also, submission may behave a bit oddly with code such as this, but that depends on the rest of your application. Either way, this will give you the desired format.</p>
1
2016-08-22T16:57:36Z
[ "python", "django", "forms", "list" ]
Unable to import multithreaded Python module.
39,084,341
<p>This is the multithreaded code I'm trying to import as a module. It works fine when I run it as a stand alone file. It simply just prints out a list of numbers. All I change is which main() is commented out. I think that the problem might be that the imported code is calling itself as a module after the reader file has already called it as a module.</p> <p>threadtest.py</p> <pre><code>from multiprocessing import Process, Value, Array import time import sys def f(n, a): n.value = 3.1415927 for i in range(len(a)): a[i] = -a[i] def is_prime(x, top): for j in range(2,int(top**.5 + 1)): if x%j == 0: return False return True def f1(top, c): print('test f1') for i in range(2,top): if is_prime(i,top): c.value = c.value + 1 def f3(c1,c2): for k in range(0,20): time.sleep(1) # 1 second sys.stdout.write(str(c1.value) + '|' + str(c2.value) + '\n') sys.stdout.flush() def main(): count1 = Value('d', 0) count2 = Value('d', 0) #arr = Array('i', range(10)) p1 = Process(target=f1, args=(1000000, count1)) p2 = Process(target=f1, args=(1000000, count2)) p3 = Process(target=f3, args=(count1, count2)) p1.start() p2.start() p3.start() p1.join() print('p1.join()') p2.join() print('p2.join()') p3.join() print('p3.join()') print(count1.value) print(count2.value) if __name__ == '__main__': print('grovetest is being run as main') #main() else: print('grovetest is being run as module') main() </code></pre> <p>This is the code that imports the multithreaded module and attempts to read the output. 
</p> <p>readertest.py</p> <pre><code>import threadtest def main(fname): try: p = subprocess.Popen(fname, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) print('success') return p.communicate() # this gets you pipe values except Exception as e: return 'error'+str(e) else: return "else" if __name__ == '__main__': main(threadtest) </code></pre> <p>Here is the error that is produced when I run the readertest.py</p> <pre><code>grovetest is being run as module grovetest is being run as module Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 106, in spawn_main exitcode = _main(fd) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 115, in _main prepare(preparation_data) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 226, in prepare _fixup_main_from_path(data['init_main_from_path']) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path run_name="__mp_main__") File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 240, in run_path pkg_name=pkg_name, script_name=fname) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Joseph\documents\python projects\multitest.py", line 1, in &lt;module&gt; import grovetest File "C:\Users\Joseph\documents\python projects\grovetest.py", line 56, in &lt;module&gt; main() File "C:\Users\Joseph\documents\python projects\grovetest.py", line 37, in main p1.start() File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\process.py", line 
105, in start self._popen = self._Popen(self) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\context.py", line 212, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\context.py", line 313, in _Popen return Popen(process_obj) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 144, in get_preparation_data _check_not_importing_main() File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. 
Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 106, in spawn_main exitcode = _main(fd) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 115, in _main prepare(preparation_data) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 226, in prepare _fixup_main_from_path(data['init_main_from_path']) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path run_name="__mp_main__") File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 240, in run_path pkg_name=pkg_name, script_name=fname) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Joseph\documents\python projects\multitest.py", line 1, in &lt;module&gt; import grovetest File "C:\Users\Joseph\documents\python projects\grovetest.py", line 56, in &lt;module&gt; main() File "C:\Users\Joseph\documents\python projects\grovetest.py", line 37, in main p1.start() File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\context.py", line 212, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\context.py", line 313, in _Popen return Popen(process_obj) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__ 
prep_data = spawn.get_preparation_data(process_obj._name) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 144, in get_preparation_data _check_not_importing_main() File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. grovetest is being run as module Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 106, in spawn_main exitcode = _main(fd) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 115, in _main prepare(preparation_data) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 226, in prepare _fixup_main_from_path(data['init_main_from_path']) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 278, in _fixup_main_from_path run_name="__mp_main__") File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 240, in run_path pkg_name=pkg_name, script_name=fname) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File 
"C:\Users\Joseph\documents\python projects\multitest.py", line 1, in &lt;module&gt; import grovetest File "C:\Users\Joseph\documents\python projects\grovetest.py", line 56, in &lt;module&gt; main() File "C:\Users\Joseph\documents\python projects\grovetest.py", line 37, in main p1.start() File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\context.py", line 212, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\context.py", line 313, in _Popen return Popen(process_obj) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 144, in get_preparation_data _check_not_importing_main() File "C:\Users\Joseph\AppData\Local\Programs\Python\Python35-32\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. grovetest is being run as module p1.join() p2.join() p3.join() 0.0 0.0 </code></pre> <p>I'm new to this. Thank you for your feedback.</p>
1
2016-08-22T16:20:50Z
39,115,666
<p>You're violating a significant Windows-specific constraint called out <a href="https://docs.python.org/2/library/multiprocessing.html#windows" rel="nofollow">here</a>:</p> <blockquote> <p>Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such a starting a new process).</p> </blockquote> <p>One flaw in the paragraph I just quoted is that it says "the main module" when it should say "<em>every</em> module", but most people don't try to run complex code from imported modules and thus don't run across this issue. In your case, what happens is that <code>readertest</code> imports <code>grovetest</code>, which invokes the multiprocessing code, which fires up a new Python interpreter on <code>grovetest</code> using the special Windows multiprocess bootstrap system, which fires up the multiprocessing code, which detects that it's being fired up under the special bootstrap case and aborts:</p> <pre><code>(start) readertest: import grovetest =&gt; grovetest import multiprocessing =&gt; multiprocessing: define some stuff =&gt; grovetest if __name__ == '__main__': # it's not equal ... # so we skip this else: print('grovetest is being run as module') # prints main() # calls main ... p1 = Process(target=f1, args=(1000000, count1)) # calls multiprocessing module =&gt; new process import grovetest # multiprocessing does this using its magic bootstrap ... if __name__ == '__main__': # it's not ... else: print(...) ... p1 = Process(...) </code></pre> <p>At this point the inner <code>Process</code> code detects the constraint violation and raises the first error. This terminates the inner process entirely, and we're back to the outer process that does <code>p2 = Process(...)</code>, which kicks off the same error again, and so on.</p> <p>To fix this, you <em>must not</em> call your <code>main</code> in <code>grovetest</code> when <code>grovetest</code> is imported as a module. 
Instead, call <code>grovetest.main</code> <em>later</em>, from <code>readertest</code>. That will allow the special bootstrap magic (in the <code>multiprocessing</code> module, run up when you invoke <code>multiprocessing.Process</code>) to import <code>grovetest</code> in the new process. Once this new process has finished importing <code>grovetest</code>, it will wait for instructions from its parent process (the other Python interpreter, where <em>you</em> called <code>multiprocessing.Process</code>, from your <code>readertest</code> code calling <code>grovetest.main</code>). Those instructions will be to invoke function <code>f1</code> (from <code>target=f1</code>) with the supplied arguments.</p>
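The fix can be sketched in one self-contained file (an illustrative stand-in, not the original grovetest.py): every process launch stays inside <code>main()</code>, behind the <code>__main__</code> guard, so importing the module — whether by readertest or by multiprocessing's Windows bootstrap — has no side effects:

```python
import multiprocessing

def square(x):
    return x * x

def main():
    # Process creation happens only when main() is called explicitly,
    # never as a side effect of importing this module.
    with multiprocessing.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == '__main__':
    # The Windows spawn bootstrap re-imports this module under a
    # different __name__, so this block is safely skipped there.
    print(main())
```

An importer then does `import grovetest` and calls `grovetest.main()` itself, rather than relying on import time to start the work.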
0
2016-08-24T06:14:42Z
[ "python", "multithreading", "import", "module", "multiprocessing" ]
How can I trigger an IncompleteRead (on purpose) in a Python web application?
39,084,348
<p>I've got some Python code that makes requests using the <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> library and occasionally experiences an <code>IncompleteRead</code> error. I'm trying to update this code to handle this error more gracefully and would like to test that it works, so I'm wondering how to actually trigger the conditions under which <code>IncompleteRead</code> is raised.</p> <p>I realize I can do some mocking in a unit test; I'd just like to actually reproduce the circumstances (if I can) under which this error was previously occurring and ensure my code is able to deal with it properly.</p>
1
2016-08-22T16:21:09Z
39,084,614
<p>When testing code that relies on external behavior (such as server responses, system sensors, etc.), the usual approach is to fake the external factors instead of working to produce them.</p> <p>Create a test version of the function or class you're using to make HTTP requests. If you're using <code>requests</code> <em>directly</em> across your codebase, stop: direct coupling with libraries and external services is <em>very</em> hard to test.</p> <p>You mention that you want to make sure your code can handle this exception, and you'd rather avoid mocking for this reason. <strong>Mocking is just as safe, as long as you're wrapping the modules you need to mock all across your codebase</strong>. If you can't mock to test, you're missing layers in your design (or asking too much of your testing suite).</p> <p>So, for example:</p> <pre><code>class FooService(object):
    def make_request(self, *args):
        # use requests.py to perform HTTP requests
        # NOBODY uses requests.py directly without passing through here
        pass

class MockFooService(FooService):
    def make_request(self, *args):
        raise IncompleteRead()
</code></pre> <p>The 2nd class is a testing utility written solely for the purpose of testing this specific case. As your tests grow in coverage and completeness, you may need more sophisticated language (to avoid incessant subclassing and repetition), but it's usually good to start with the simplest code that will read easily and test the desired cases.</p>
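To make that pattern concrete, here is a hypothetical, self-contained version — the class names are made up, and a stand-in exception replaces the real `IncompleteRead` so the snippet runs without any network code:

```python
class IncompleteRead(Exception):
    """Stand-in for httplib.IncompleteRead (illustrative only)."""

class FooService:
    def make_request(self, *args):
        raise NotImplementedError  # the real version would call requests here

class MockFooService(FooService):
    def make_request(self, *args):
        raise IncompleteRead()

def fetch(service, url):
    # The error handling under test: degrade gracefully on a partial read.
    try:
        return service.make_request(url)
    except IncompleteRead:
        return None

print(fetch(MockFooService(), '/data'))  # -> None
```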
1
2016-08-22T16:37:47Z
[ "python", "python-requests" ]
How can I trigger an IncompleteRead (on purpose) in a Python web application?
39,084,348
<p>I've got some Python code that makes requests using the <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> library and occasionally experiences an <code>IncompleteRead</code> error. I'm trying to update this code to handle this error more gracefully and would like to test that it works, so I'm wondering how to actually trigger the conditions under which <code>IncompleteRead</code> is raised.</p> <p>I realize I can do some mocking in a unit test; I'd just like to actually reproduce the circumstances (if I can) under which this error was previously occurring and ensure my code is able to deal with it properly.</p>
1
2016-08-22T16:21:09Z
39,086,666
<p><em>Adding a second answer, more to the point this time. I took a dive into some source code, and found information that may help</em></p> <p>The <code>IncompleteRead</code> exception bubbles up from <code>httplib</code>, part of the python standard library. Most likely, it comes from <a href="https://github.com/python-git/python/blob/master/Lib/httplib.py#L611" rel="nofollow">this function</a>:</p> <pre><code>def _safe_read(self, amt): """ Read the number of bytes requested, compensating for partial reads. Normally, we have a blocking socket, but a read() can be interrupted by a signal (resulting in a partial read). Note that we cannot distinguish between EOF and an interrupt when zero bytes have been read. IncompleteRead() will be raised in this situation. This function should be used when &lt;amt&gt; bytes "should" be present for reading. If the bytes are truly not available (due to EOF), then the IncompleteRead exception can be used to detect the problem. """ </code></pre> <p>So, either the socket was closed before the HTTP response was consumed, or the reader tried to get too many bytes out of it. Judging by search results (so take this with a grain of salt), there is no other arcane situation that can make this happen.</p> <p>The first scenario can be debugged with <code>strace</code>. If I'm reading this correctly, the 2nd scenario can be caused by the <code>requests</code> module, if:</p> <ul> <li>A <code>Content-Length</code> header is present that exceeds the actual amount of data sent by the server.</li> <li>A chunked response is incorrectly assembled (has an erroneous length byte before one of the chunks), or a regular response is being interpreted as chunked.</li> </ul> <p>This function raises the <code>Exception</code>:</p> <pre><code>def _update_chunk_length(self): # First, we'll figure out length of a chunk and then # we'll try to read it from socket. 
if self.chunk_left is not None: return line = self._fp.fp.readline() line = line.split(b';', 1)[0] try: self.chunk_left = int(line, 16) except ValueError: # Invalid chunked protocol response, abort. self.close() raise httplib.IncompleteRead(line) </code></pre> <p>Try checking the <code>Content-Length</code> header of your buffered responses, or the chunk format of your chunked responses.</p> <p>To <em>produce</em> the error, try:</p> <ul> <li>Forcing an invalid <code>Content-Length</code></li> <li>Using the chunked response protocol, with a too-large length byte at the beginning of a chunk</li> <li>Closing the socket mid-response</li> </ul>
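A sketch combining the first and third options — a throwaway local server that promises more bytes (`Content-Length: 100`) than it sends, then closes the socket mid-response. Python 3 names are used (`http.client`); in Python 2 the same exception lives in `httplib`:

```python
import http.client
import socket
import threading

def bad_server(srv):
    conn, _ = srv.accept()
    conn.recv(65536)  # read (and ignore) the client's request
    # Promise 100 bytes, deliver only 5, then close mid-response.
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\nhello")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=bad_server, args=(srv,), daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", srv.getsockname()[1])
conn.request("GET", "/")
resp = conn.getresponse()
partial = None
try:
    resp.read()  # asks for 100 bytes, hits EOF after 5
except http.client.IncompleteRead as e:
    partial = e.partial
print("IncompleteRead, received:", partial)
```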
1
2016-08-22T18:46:48Z
[ "python", "python-requests" ]
Google Prediction API with Google App Engine
39,084,362
<p>I'm following this example, and at the bottom it has some code <a href="https://cloud.google.com/prediction/docs/developer-guide" rel="nofollow">https://cloud.google.com/prediction/docs/developer-guide</a></p> <p>I'm using Flask instead of webapp2 and my code looks like this:</p> <pre><code># [START app] import logging from oauth2client.appengine import AppAssertionCredentials from flask import Flask import httplib2, webapp2 from oauth2client.appengine import AppAssertionCredentials from apiclient.discovery import build http = AppAssertionCredentials('https://www.googleapis.com/auth/prediction').authorize(httplib2.Http()) service = build('prediction', 'v1.6', http=http) app = Flask(__name__) @app.route('/') def hello(): return 'Hello World1!' @app.route('/add') def something(): class MakePrediction(): def get(self): result = service.hostedmodels().predict(project=PROJECT-NAME, hostedModelName=PROJECT-ID, body={'input' {'csvInstance': ['hello']}}).execute() self.response.headers['Content-Type'] = 'text/plain' self.response.out.write('Result: ' + repr(result)) @app.errorhandler(500) def server_error(e): # Log the error and stacktrace. 
logging.exception('An error occurred during a request.') return 'An internal error occurred.', 500 # [END app] </code></pre> <p>I keep getting the error:</p> <pre><code> File "/Users/morganallen/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/Users/morganallen/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/Users/morganallen/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/Users/morganallen/Desktop/project/flask_app_engine/main.py", line 24 result = service.hostedmodels().predict(project='linear-yen-140912', hostedModelName='language-identifier', body={'input' {'csvInstance': ['hello']}}).execute() ^ SyntaxError: invalid syntax </code></pre> <p>What am I doing wrong?</p>
1
2016-08-22T16:21:55Z
39,084,712
<p><strong>You are missing a colon on line 24, where the caret is pointing in the stack trace:</strong></p> <pre><code> File "/Users/morganallen/Desktop/project/flask_app_engine/main.py", line 24 result = service.hostedmodels().predict(project='linear-yen-140912', hostedModelName='language-identifier', body={'input' {'csvInstance': ['hello']}}).execute() ^ </code></pre> <p><strong>So the solution here is to change this:</strong></p> <pre><code>body={'input' {'csvInstance': ['hello']}}).execute() ^ </code></pre> <p><strong>To this:</strong></p> <pre><code>body={'input' : {'csvInstance': ['hello']}}).execute() ^ </code></pre> <p>That should solve the syntax error.</p> <pre><code>SyntaxError: invalid syntax </code></pre> <p>Whenever you get an error, don't ignore all the lines that the interpreter spits out. It will often tell you the exact line of a problem, especially in the case of a simple syntax error like this one.</p>
2
2016-08-22T16:43:40Z
[ "python", "google-app-engine" ]
Stitched image not well aligned, hence leading to duplication
39,084,392
<p>As shown in the image below, one image is not well aligned with the other, which is causing duplication of vessels. How do I get rid of the duplication?</p> <p><img src="http://i.stack.imgur.com/EwlLA.jpg" alt="enter image description here"></p> <p>The way I'm currently going about stitching is: I first find the keypoints using SIFT, then use FlannBasedMatcher to match the keypoints, find the homography, and then warp both the stitched and the to-be-stitched image.</p>
0
2016-08-22T16:23:50Z
39,087,094
<p>A homography isn't perfect for a non-planar scene or a non-pure-rotation camera, so there will always be errors SOMEWHERE in the aligned images.</p> <p>IMHO the best thing to do (if you have to use homographies) is to distribute the errors over the whole image by making sure that the points used to compute the homography are spread well over the whole image.</p> <p>Or use a blending method that hides the duplicated parts in either of the images...</p> <p>EDIT:</p> <p>Here is a paper describing how to distribute the keypoints:</p> <p><a href="http://www.lfb.rwth-aachen.de/bibtexupload/pdf/BEH10g.pdf" rel="nofollow">http://www.lfb.rwth-aachen.de/bibtexupload/pdf/BEH10g.pdf</a></p> <p>They claim to reduce the error.</p> <p>EDIT 2:</p> <p>Another approach could be some kind of dense matching. AFAIK there is a French team that tries to register images by graph-based matching, but I remember neither the name nor whether this worked well. One idea could be to coarsely align the images with your technique and afterwards start a dense matching by optical flow. If you know for certain which pixels are correctly matched (and those pixels are not confined to a small part of the image), you can compute another homography with ALL the points (instead of using RANSAC).</p>
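One simple way to act on the "distribute the points" advice (a hedged illustration, not the method from the linked paper) is grid bucketing: cap how many matches any one region of the image may contribute before estimating the homography, so the fit is constrained across the whole frame rather than by one dense cluster. Pure Python, with the OpenCV keypoint extraction left out:

```python
from collections import defaultdict

def bucket_points(points, img_w, img_h, grid=4, per_cell=2):
    """Keep at most per_cell points from each cell of a grid x grid
    partition of the image, so no single region dominates the fit."""
    cells = defaultdict(list)
    for x, y in points:
        cx = min(int(x * grid / img_w), grid - 1)
        cy = min(int(y * grid / img_h), grid - 1)
        cells[(cx, cy)].append((x, y))
    kept = []
    for pts in cells.values():
        kept.extend(pts[:per_cell])
    return kept

# Three clustered matches in one corner plus one far away:
print(bucket_points([(1, 1), (2, 2), (3, 3), (90, 90)], 100, 100))
```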
1
2016-08-22T19:13:14Z
[ "python", "opencv", "image-processing", "computer-vision" ]
How to deal with a child thread that hangs?
39,084,395
<p>I have a process to extract data from an API that uses multiple threads and that is hanging.</p> <ul> <li>The main thread launches the child threads and, to finish, waits until N API calls have been made and all child threads have ended;</li> <li>1 child thread populates a queue with calls that need to be made to an API;</li> <li>8 child threads execute the API calls.</li> </ul> <p>When one of the API calls hangs (and I can't control the timeout, unfortunately), the child thread never finishes and the main thread will wait forever for the child thread to end.</p> <p>Is there any way to force the child threads to end from the main thread? Alternatively, is there a tried and tested way to do this type of data collection process that does not generate this issue?</p>
0
2016-08-22T16:23:56Z
39,085,490
<p>When a thread is created, you can control whether it will be automatically terminated when the main thread quits by explicitly setting its <code>daemon</code> attribute before calling its <code>start()</code> method. See the <a href="https://docs.python.org/2/library/threading.html#threading.Thread.daemon" rel="nofollow"><em>Thread Objects</em></a> section of the online docs.</p> <p>There's also a fairly detailed explanation in <a href="http://stackoverflow.com/a/38805873/355230">my answer</a> to a similar question:<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="http://stackoverflow.com/questions/38804988/what-does-sys-exit-really-do-with-multiple-threads">What does sys.exit really do with multiple threads?</a>.</p>
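A minimal sketch of the daemon approach — the long sleep stands in for the API call whose timeout can't be controlled:

```python
import threading
import time

def api_call_that_hangs():
    time.sleep(1000)  # stand-in for a blocking API call with no timeout

worker = threading.Thread(target=api_call_that_hangs)
worker.daemon = True  # must be set BEFORE start()
worker.start()

# Give up on the worker after a bounded wait; because it is a daemon,
# it will be killed automatically when the main thread exits.
worker.join(timeout=0.1)
print("worker still alive:", worker.is_alive())  # -> True
```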
0
2016-08-22T17:32:50Z
[ "python", "multithreading", "api", "design-patterns" ]
tkinter.Listbox scrollbar yview
39,084,403
<p>I have again encountered some problems writing Python and would like to seek some help. I am continuing to build my Listbox widget but cannot set up a scrollbar. I can put the Scrollbar in the right position; however, scrolling up and down just doesn't work and pops up an error saying "Object() takes no parameter". Could anyone advise how to fix it? I have attached the code below for reference.</p> <pre><code>import tkinter
from tkinter import *

def test():
    root = tkinter.Tk()
    lst = ['1', '2', ' 3', '4', '5', ' 6', '7', '8', ' 9', '10']
    a = MovListbox(root, lst)
    a.grid(row=0, column=0, columnspan=2, sticky=tkinter.N)
    root.mainloop()

class MovListbox(tkinter.Listbox):
    def __init__(self, master=None, inputlist=None):
        super(MovListbox, self).__init__(master=master)
        # Populate the news category onto the listbox
        for item in inputlist:
            self.insert(tkinter.END, item)
        #set scrollbar
        s = tkinter.Scrollbar(master, orient=VERTICAL, command=tkinter.YView)
        self.configure(yscrollcommand=s.set)
        s.grid(row=0, column=2, sticky=tkinter.N+tkinter.S)

if __name__ == '__main__':
    test()
</code></pre>
1
2016-08-22T16:24:23Z
39,086,030
<p>First of all, you don't need both <code>import tkinter</code> and <code>from tkinter import *</code>.</p> <ul> <li>Using <code>import</code> means you need <code>tkinter.'function'</code> to call a function from tkinter </li> <li>Using <code>from</code> means you can call the function as if it were in your program without the <code>tkinter.</code> at the start </li> <li>Using <code>*</code> means taking all functions from tkinter</li> </ul> <p>Also, I have fixed the code based on Rawig's answer. Note that once <code>from tkinter import *</code> is gone, bare names like <code>VERTICAL</code> must become <code>tkinter.VERTICAL</code>:</p> <pre><code>import tkinter

def test():
    root = tkinter.Tk()
    lst = ['1', '2', ' 3', '4', '5', ' 6', '7', '8', ' 9', '10']
    a = MovListbox(root, lst)
    a.grid(row=0, column=0, columnspan=2, sticky=tkinter.N)
    root.mainloop()

class MovListbox(tkinter.Listbox):
    def __init__(self, master=None, inputlist=None):
        super(MovListbox, self).__init__(master=master)
        # Populate the news category onto the listbox
        for item in inputlist:
            self.insert(tkinter.END, item)
        # set scrollbar
        s = tkinter.Scrollbar(master, orient=tkinter.VERTICAL, command=self.yview)
        self.configure(yscrollcommand=s.set)
        s.grid(row=0, column=2, sticky=tkinter.N+tkinter.S)

if __name__ == '__main__':
    test()
</code></pre>
1
2016-08-22T18:07:08Z
[ "python", "tkinter", "listbox", "python-3.5" ]
Reading a text file and assigning the values
39,084,411
<p>I have a text file with 5 columns:</p> <pre class="lang-none prettyprint-override"><code>StudentID  UserID   FirstName  LastName  NickName
0122334    7727263  John       Smith     Johnny
8273263    8349734  timmer     Jansen    tim
</code></pre> <p>I am trying to read this file in Python and assign each of the values to a separate variable. So far, I have read the lines successfully, but I am not able to assign the values.</p> <p>Code so far:</p> <pre><code>StudentID = []
UserID = []
FirstName = []
LastName = []
NickName = []

with open(textfile,'r') as f:
    lines = f.readlines()
</code></pre>
1
2016-08-22T16:24:48Z
39,084,456
<pre><code>f = open(textfile, 'r')
for line in f:
    student_id, user_id, first_name, last_name, nick_name = line.split()
    StudentID.append(student_id)
    UserID.append(user_id)
    ....
f.close()
</code></pre> <p>That is a very weird way to write anything, though. A much better way is to create a class Student that knows the properties of a student and how to read itself from a line:</p> <pre><code>class Student:
    def __init__(self, student_id=0, user_id=0, first_name='', last_name='', nickname=''):
        self._student_id = student_id
        self._user_id = user_id
        ...

    def from_line(self, line):
        values = line.split()
        self._student_id = int(values[0])
        self._user_id = int(values[1])
        ...

f = open(textfile, 'r')
students = []
for line in f:
    students.append(Student())
    students[-1].from_line(line)
</code></pre>
1
2016-08-22T16:27:45Z
[ "python", "python-2.7" ]
Reading a text file and assigning the values
39,084,411
<p>I have a text file with 5 columns:</p> <pre class="lang-none prettyprint-override"><code>StudentID  UserID   FirstName  LastName  NickName
0122334    7727263  John       Smith     Johnny
8273263    8349734  timmer     Jansen    tim
</code></pre> <p>I am trying to read this file in Python and assign each of the values to a separate variable. So far, I have read the lines successfully, but I am not able to assign the values.</p> <p>Code so far:</p> <pre><code>StudentID = []
UserID = []
FirstName = []
LastName = []
NickName = []

with open(textfile,'r') as f:
    lines = f.readlines()
</code></pre>
1
2016-08-22T16:24:48Z
39,084,490
<pre><code>with open(textfile,'r') as f:
    for line in f.readlines():
        student_id, user_id, first_name, last_name, nick_name = line.split()
        StudentID.append(student_id)
        UserID.append(user_id)
        FirstName.append(first_name)
        LastName.append(last_name)
        NickName.append(nick_name)
</code></pre> <p><strong>Note:</strong> For naming the lists, you are using the CamelCase convention. In Python, as per <code>PEP 8</code>, variables should be named in lowercase with words separated by underscores <code>_</code>; CamelCase names are for classes.</p> <p><strong>Suggestion:</strong></p> <p>Since these values belong to the same object, you should be maintaining a single list of objects, as suggested by <a href="http://stackoverflow.com/users/6664305/dmitry-torba">Dmitry</a>, or simply store a list of <code>list</code>s.</p> <p>For example:</p> <pre><code>persons = []
with open(textfile,'r') as f:
    for line in f.readlines():
        persons.append(line.split())

print persons
# [['0122334', '7727263', 'John', 'Smith', 'Johnny'],
#  ['8273263', '8349734', 'timmer', 'Jansen', 'tim']]
</code></pre> <p>Here you'll have <code>StudentID</code>, <code>UserID</code>, <code>FirstName</code>, <code>LastName</code>, <code>NickName</code> at index <code>0</code>, <code>1</code>, <code>2</code>, <code>3</code>, <code>4</code> respectively.</p>
1
2016-08-22T16:29:59Z
[ "python", "python-2.7" ]
Reading a text file and assigning the values
39,084,411
<p>I have a text file with 5 columns:</p> <pre class="lang-none prettyprint-override"><code>StudentID UserID FirstName LastName NickName 0122334 7727263 John Smith Johnny 8273263 8349734 timmer Jansen tim </code></pre> <p>I am trying to read this file in Python and assign each of the values to a separate variable. Until now, I have read the lines successfully, but I am not able to assign the values.</p> <p>Code so far:</p> <pre><code>StudentID = [] UserID = [] FirstName = [] LastName = [] NickName = [] with open(textfile,'r') as f: lines = f.readlines() </code></pre>
1
2016-08-22T16:24:48Z
39,084,949
<p>You can extract all of them at once:</p> <pre><code>with open(textfile,'r') as f: columns = [line.split() for line in f.readlines()] </code></pre> <p>And then create each list from <code>columns</code>:</p> <pre><code>StudentID, UserID, FirstName, LastName, NickName = map(list, zip(*columns)) </code></pre> <p>First you create a list of lists, which is a matrix of your given columns line by line. Then <code>zip(*columns)</code> regroups them, and <code>map(list, ...)</code> makes a separate list for each column.</p>
3
2016-08-22T16:58:12Z
[ "python", "python-2.7" ]
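The `map(list, zip(*columns))` transpose used in the answer above can be checked in isolation; a minimal sketch using the question's sample rows, with no file I/O needed:

```python
# Transpose a list of row-lists into per-column lists,
# as in the answer above (sample rows from the question).
rows = [
    ['0122334', '7727263', 'John', 'Smith', 'Johnny'],
    ['8273263', '8349734', 'timmer', 'Jansen', 'tim'],
]

# zip(*rows) pairs up the i-th element of every row;
# map(list, ...) turns each resulting tuple back into a list.
student_id, user_id, first_name, last_name, nick_name = map(list, zip(*rows))

print(student_id)  # ['0122334', '8273263']
print(nick_name)   # ['Johnny', 'tim']
```

Note that this only works when every row has the same number of fields; a short row would shift every later value into the wrong column.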
How do I setup logging when using aiohttp and aiopg with Gunicorn?
39,084,433
<p><code>aiohttp</code> is great, but setting up logging has been a nightmare, both locally and in production, when using <code>Gunicorn</code>.</p> <p>Most of the examples and documentation I find for setting up logging are for running in native server mode, where you use <code>make_handler()</code></p> <p>As recommended in the documentation, I'm using <code>Gunicorn</code> as a Web Server to deploy, so I don't call <code>make_handler</code> explicitly. </p> <p><strong>I am not seeing aiohttp.access logs, nor the aiohttp.server logs, nor the aiopg logs, all of which should be set up by default</strong></p> <p>This is what I've got in a root level <code>app.py</code>:</p> <pre><code>import logging import aiopg from aiohttp import web async def some_handler(request): id = request.match_info["id"] # perform some SA query return web.json_response({"foo": id}) async def close_postgres(app): app['postgres'].close() await app['postgres'].wait_closed() async def init(loop, logger, config): app = web.Application( loop=loop, logger=logger ) app['postgres'] = await aiopg.sa.create_engine(loop=loop, echo=True) # other args omitted app.on_cleanup.append(close_postgres) app.router.add_route('GET', '/', some_handler, 'name') return app def run(): config = parse_yaml('config.yml') # =&gt; turns config.yml to dict logging.config.dictConfig(config['logging']) logger = logging.getLogger("api") loop = asyncio.get_event_loop() app = loop.run_until_complete(init(loop, logger, config)) return app </code></pre> <p>My config.yml file</p> <pre><code>logging: version: 1 formatters: simple: format: '[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s' datefmt: '%Y-%m-%d %H:%M:%S %z' handlers: console: class: logging.StreamHandler formatter: simple level: DEBUG stream: ext://sys.stdout loggers: api: handlers: - console level: DEBUG </code></pre> <p>I launch gunicorn with the following:</p> <pre><code>gunicorn 'app:run()' --worker-class aiohttp.worker.GunicornWebWorker </code></pre> <p>I only
see the following logs no matter what query I make:</p> <pre><code>[2016-08-22 11:26:46 -0400] [41993] [INFO] Starting gunicorn 19.6.0 [2016-08-22 11:26:46 -0400] [41993] [INFO] Listening at: http://127.0.0.1:8000 (41993) [2016-08-22 11:26:46 -0400] [41993] [INFO] Using worker: aiohttp.worker.GunicornWebWorker [2016-08-22 11:26:46 -0400] [41996] [INFO] Booting worker with pid: 41996 </code></pre> <p>What I want:</p> <ul> <li>aiopg logs (which queries ran)</li> <li>access logs</li> <li>server logs </li> </ul> <p>Thanks</p>
1
2016-08-22T16:26:08Z
39,107,800
<p>The documentation <strong>doesn't recommend</strong> using Gunicorn for deployment, but it does have instructions for running under Gunicorn. </p> <p>Perhaps it should be updated to pass the correct format for the access logger.</p> <p>From my perspective the easiest way to run an aiohttp server is just running it (by using the <code>web.run_app()</code> handler or building your own runner on top of it).</p> <p>If you need several aiohttp instances -- use nginx in reverse proxy mode (most likely you already have it in your tool chain) and supervisord for controlling the servers. </p> <p>The combination just works without the need for an intermediate layer, just like how people start tornado or twisted.</p>
1
2016-08-23T17:53:52Z
[ "python", "psycopg2", "gunicorn", "aiohttp" ]
Spark: how to get cluster's points (KMeans)
39,084,519
<p>I'm trying to retrieve the data points belonging to a specific cluster in Spark. In the following piece of code, the data is made up but I actually obtain the predicted cluster.</p> <p>Here is the code I have so far:</p> <pre><code>import numpy as np # Example data flight_routes = np.array([[1,3,2,0], [4,2,1,4], [3,6,2,2], [0,5,2,1]]) flight_routes = sc.parallelize(flight_routes) model = KMeans.train(rdd=flight_routes, k=500, maxIterations=10) route_test = np.array([[0,2,3,4]]) test = sc.parallelize(route_test) prediction = model.predict(test) cluster_number_predicted = prediction.collect() print cluster_number_predicted # it returns [100] &lt;-- COOL!! </code></pre> <p>Now, I'd like to have all the data points belonging to the cluster number 100. How do I get those ? What I want to achieve is something like the answer given to this SO question: <a href="http://stackoverflow.com/questions/32232067/cluster-points-after-kmeans-clustering-scikit-learn">Cluster points after KMeans (Sklearn)</a></p> <p>Thank you in advance.</p>
0
2016-08-22T16:31:25Z
39,089,300
<p>If you need both the record and the prediction (and are not willing to switch to Spark ML) you can <code>zip</code> the RDDs:</p> <pre><code>predictions_and_values = model.predict(test).zip(test) </code></pre> <p>and filter afterwards (each element is a <code>(prediction, record)</code> pair, so the filter tests the first item): </p> <pre><code>predictions_and_values.filter(lambda x: x[0] == 100) </code></pre>
0
2016-08-22T21:53:04Z
[ "python", "apache-spark", "k-means" ]
how to use spark with python or jupyter notebook
39,084,520
<p>I am trying to work with 12GB of data in python for which I desperately need to use Spark , but I guess I'm too stupid to use command line by myself or by using internet and that is why I guess I have to turn to SO , </p> <p>So by far I have downloaded the spark and unzipped the tar file or whatever that is ( sorry for the language but I am feeling stupid and out ) but now I can see nowhere to go. I have seen the instruction on spark website documentation and it says :</p> <p><strong>Spark also provides a Python API. To run Spark interactively in a Python interpreter, use</strong> <code>bin/pyspark</code> but where to do this ? please please help . Edit : I am using windows 10 </p> <p>Note:: I have always faced problems when trying to install something mainly because I can't seem to understand Command prompt </p>
1
2016-08-22T16:31:28Z
39,084,592
<p>When you unzip the file, a directory is created.</p> <ol> <li>Open a terminal.</li> <li>Navigate to that directory with <code>cd</code>.</li> <li>Do an <code>ls</code>. You will see its contents. <code>bin</code> must be placed somewhere.</li> <li>Execute <code>bin/pyspark</code> or maybe <code>./bin/pyspark</code>.</li> </ol> <p>Of course, in practice it's not that simple, you may need to set some paths, like said in <a href="http://www.tutorialspoint.com/apache_spark/apache_spark_installation.htm" rel="nofollow">TutorialsPoint</a>, but there are plenty of such links out there.</p>
2
2016-08-22T16:36:21Z
[ "python", "windows", "apache-spark", "pyspark", "distributed-computing" ]
how to use spark with python or jupyter notebook
39,084,520
<p>I am trying to work with 12GB of data in python for which I desperately need to use Spark , but I guess I'm too stupid to use command line by myself or by using internet and that is why I guess I have to turn to SO , </p> <p>So by far I have downloaded the spark and unzipped the tar file or whatever that is ( sorry for the language but I am feeling stupid and out ) but now I can see nowhere to go. I have seen the instruction on spark website documentation and it says :</p> <p><strong>Spark also provides a Python API. To run Spark interactively in a Python interpreter, use</strong> <code>bin/pyspark</code> but where to do this ? please please help . Edit : I am using windows 10 </p> <p>Note:: I have always faced problems when trying to install something mainly because I can't seem to understand Command prompt </p>
1
2016-08-22T16:31:28Z
39,091,446
<p>If you are more familiar with jupyter notebook, you can install <a href="https://toree.incubator.apache.org/documentation/user/quick-start" rel="nofollow">Apache Toree</a>, which integrates PySpark, Scala, SQL and SparkR kernels with Spark.</p> <p>To install Toree:</p> <pre><code>pip install toree jupyter toree install --spark_home=path/to/your/spark_directory --interpreters=PySpark </code></pre> <p>If you want to install other kernels you can use: </p> <pre><code>jupyter toree install --interpreters=SparkR,SQL,Scala </code></pre>
1
2016-08-23T02:34:40Z
[ "python", "windows", "apache-spark", "pyspark", "distributed-computing" ]
Convert pandas DataFrame to dict where each value is a list of values of multiple columns
39,084,521
<p>Let's say I have the DataFrame</p> <pre><code>filename size inverse similarity 123.txt 1 2 34 323.txt 3 1 44 222.txt 4 1 43 </code></pre> <p>I want to create a dictionary in the form</p> <pre><code>{'123.txt': [1, 2, 34], '323.txt': [3, 1, 44], '222.txt': [4, 1, 43]} </code></pre> <p>Solutions I have found deal with the case of creating a dict with single values using something like</p> <pre><code>df.set_index('Filename')['size'].to_dict() </code></pre>
1
2016-08-22T16:31:35Z
39,084,631
<p>Set <code>'filename'</code> as the index, take the transpose, then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html" rel="nofollow"><code>to_dict</code></a> with <code>orient='list'</code>:</p> <pre><code>my_dict = df.set_index('filename').T.to_dict(orient='list') </code></pre> <p>The resulting output:</p> <pre><code>{'323.txt': [3, 1, 44], '222.txt': [4, 1, 43], '123.txt': [1, 2, 34]} </code></pre>
3
2016-08-22T16:39:26Z
[ "python", "pandas", "dataframe" ]
Convert pandas DataFrame to dict where each value is a list of values of multiple columns
39,084,521
<p>Let's say I have the DataFrame</p> <pre><code>filename size inverse similarity 123.txt 1 2 34 323.txt 3 1 44 222.txt 4 1 43 </code></pre> <p>I want to create a dictionary in the form</p> <pre><code>{'123.txt': [1, 2, 34], '323.txt': [3, 1, 44], '222.txt': [4, 1, 43]} </code></pre> <p>Solutions I have found deal with the case of creating a dict with single values using something like</p> <pre><code>df.set_index('Filename')['size'].to_dict() </code></pre>
1
2016-08-22T16:31:35Z
39,084,636
<p>Here's an approach with dict comprehension:</p> <pre><code>{k: v for k, v in zip(df['filename'], df.set_index('filename').values.tolist())} Out: {'123.txt': [1, 2, 34], '222.txt': [4, 1, 43], '323.txt': [3, 1, 44]} </code></pre>
3
2016-08-22T16:39:45Z
[ "python", "pandas", "dataframe" ]
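The `{filename: [values]}` shape from the answers above can also be built without pandas; a minimal pure-Python sketch using hypothetical rows that mirror the question's DataFrame, shown for comparison with the pandas one-liners:

```python
# Each row is [filename, size, inverse, similarity]; hypothetical data
# matching the question's DataFrame.
rows = [
    ['123.txt', 1, 2, 34],
    ['323.txt', 3, 1, 44],
    ['222.txt', 4, 1, 43],
]

# Map the first field of each row to a list of the remaining fields --
# the same result as df.set_index('filename').T.to_dict(orient='list').
result = {row[0]: row[1:] for row in rows}
print(result)  # {'123.txt': [1, 2, 34], '323.txt': [3, 1, 44], '222.txt': [4, 1, 43]}
```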
Access queryset values while setting up modelform
39,084,611
<p>Is it possible to access a value from the queryset that was used to create the form class. For example I have the following view:</p> <pre><code>class MyView(View): position = Position() form_class = PortfolioForm PositionModelFormSet = modelformset_factory(Position, fields=('symbol', 'direction', 'size'), form=form_class) def get(self, request): positions = self.position.get_user_positions_qs(user=request.user) portfolio = self.PositionModelFormSet(queryset=positions) </code></pre> <p>What I need is to be able to access the values that are passed to the PortfolioForm when creating the form. In other words, for each form in the formset there is a queryset that is used when instantiating it. I need to access the values in that queryset while setting up the form. For example the PortfolioForm would be something like:</p> <pre><code>class PortfolioForm(forms.ModelForm): value = get_value_from_queryset # eg: access symbol field do_something_with_value(value) class Meta: model = Position fields = ['symbol', 'direction', 'size'] </code></pre> <p>I was thinking something along the lines of accessing it somehow by overriding the form <code>__init__</code> method and using <code>self.instance</code> or <code>kwargs</code> but I haven't had any luck so far.</p>
0
2016-08-22T16:37:39Z
39,090,101
<p>With a ModelForm, form.instance already works. </p> <p>In the form: self.instance</p> <p>In the view: formset.form.instance</p>
1
2016-08-22T23:13:52Z
[ "python", "django", "django-forms" ]
Using a list of lists as a lookup table and updating a value in new list of lists
39,084,661
<p>I have an application that creates a list of lists. The second element in the list needs to be assigned using lookup list which also consists of a list of lists. </p> <p>I have used the "all" method to match the values in the list. If the list value exists in the lookup list, it should update the second position element in the new list. However this is not the case. The == comparative yields a False match for all elements, even though they all exist in both lists. </p> <p>I have also tried various combinations of index finding commands but they are not able to unpack the values of each list. </p> <p>My code is below. The goal is to replace the "xxx" values in the newData with the numbers in the lookupList. </p> <pre><code>lookupList= [['Garry','34'],['Simon', '24'] ,['Louise','13'] ] newData = [['Louise','xxx'],['Garry', 'xxx'] ,['Simon','xxx'] ] #Matching values for i in newData: if (all(i[0] == elem[0] for elem in lookupList)): i[1] = elem[1] </code></pre>
3
2016-08-22T16:40:52Z
39,084,713
<p>You can't do what you want with <code>all()</code>, because <code>elem</code> is not a local variable outside of the generator expression.</p> <p>Instead of using a list, use a <em>dictionary</em> to store the <code>lookupList</code>:</p> <pre><code>lookupDict = dict(lookupList) </code></pre> <p>and looking up matches is a simple constant-time (fast) lookup:</p> <pre><code>for entry in newData: if entry[0] in lookupDict: entry[1] = lookupDict[entry[0]] </code></pre>
3
2016-08-22T16:43:41Z
[ "python", "list" ]
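A runnable version of the dictionary-lookup approach from the answer above, using the question's own data:

```python
# Convert the lookup list of [name, value] pairs into a dict once,
# then update each entry with a constant-time membership test.
lookup_list = [['Garry', '34'], ['Simon', '24'], ['Louise', '13']]
new_data = [['Louise', 'xxx'], ['Garry', 'xxx'], ['Simon', 'xxx']]

lookup = dict(lookup_list)  # {'Garry': '34', 'Simon': '24', 'Louise': '13'}

for entry in new_data:
    if entry[0] in lookup:
        entry[1] = lookup[entry[0]]

print(new_data)  # [['Louise', '13'], ['Garry', '34'], ['Simon', '24']]
```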
Using a list of lists as a lookup table and updating a value in new list of lists
39,084,661
<p>I have an application that creates a list of lists. The second element in the list needs to be assigned using lookup list which also consists of a list of lists. </p> <p>I have used the "all" method to match the values in the list. If the list value exists in the lookup list, it should update the second position element in the new list. However this is not the case. The == comparative yields a False match for all elements, even though they all exist in both lists. </p> <p>I have also tried various combinations of index finding commands but they are not able to unpack the values of each list. </p> <p>My code is below. The goal is to replace the "xxx" values in the newData with the numbers in the lookupList. </p> <pre><code>lookupList= [['Garry','34'],['Simon', '24'] ,['Louise','13'] ] newData = [['Louise','xxx'],['Garry', 'xxx'] ,['Simon','xxx'] ] #Matching values for i in newData: if (all(i[0] == elem[0] for elem in lookupList)): i[1] = elem[1] </code></pre>
3
2016-08-22T16:40:52Z
39,084,879
<p>You should use dictionaries instead, like this:</p> <pre><code>lookupList = {} newData = {} # note: two separate dicts; lookupList = newData = {} would alias the same dict old_lookupList = [['Garry','34'],['Simon', '24'] ,['Louise','13'] ] old_newData = [['Louise','xxx'],['Garry', 'xxx'] ,['Simon','xxx'] ] #convert into dictionaries for e in old_newData: newData[e[0]] = e[1] for e in old_lookupList: lookupList[e[0]] = e[1] #Matching values for key in lookupList: if key in newData: newData[key] = lookupList[key] #convert back into a list of lists output_list = [] for x in newData: output_list.append([x, newData[x]]) </code></pre>
0
2016-08-22T16:54:17Z
[ "python", "list" ]
dictionary from JSON to CSV
39,084,687
<p>I have a JSON file containing a dictionary with many key-value pairs. I want to write it to a single CSV. One way to do this is simply to iterate through each key:</p> <pre><code>csvwriter.writerow([f["dict"]["key1"], f["dict"]["key2"], f["dict"]["key3"], ... ]) </code></pre> <p>This would be very tedious.</p> <p>Another possibility is simply to use</p> <pre><code>csvwriter.writerow([f["dict"].values()]) </code></pre> <p>but it writes everything into one column of the CSV file, which is not helpful.</p> <p>Is there a way I can write each value into one column of the CSV file?</p>
1
2016-08-22T16:41:59Z
39,084,751
<p>You probably want to use a <a href="https://docs.python.org/2/library/csv.html#csv.DictWriter" rel="nofollow"><code>csv.DictWriter</code></a></p> <p>The example in the official documentation is pretty straight-forward:</p> <pre><code>import csv with open('names.csv', 'w') as csvfile: fieldnames = ['first_name', 'last_name'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() writer.writerow({'first_name': 'Baked', 'last_name': 'Beans'}) writer.writerow({'first_name': 'Lovely', 'last_name': 'Spam'}) writer.writerow({'first_name': 'Wonderful', 'last_name': 'Spam'}) </code></pre> <p>Note that you <em>must</em> provide <code>fieldnames</code> to the constructor. If you're certain that all your dict have the same keys and don't care about the order of the output, you can just use <code>list(first_dict)</code> to get the column names, otherwise, you'll want to come up with a way to specify them more explicitly.</p>
3
2016-08-22T16:46:17Z
[ "python", "json", "csv", "dictionary" ]
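A self-contained sketch of the `csv.DictWriter` pattern above, writing to an in-memory buffer so the output can be inspected (Python 3's `io.StringIO` is assumed here; on Python 2 you would write to a file opened in `'wb'` mode instead):

```python
import csv
import io

# One dict per row; fieldnames pins the column order.
data = {'first_name': 'Baked', 'last_name': 'Beans'}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['first_name', 'last_name'])
writer.writeheader()
writer.writerow(data)

# The csv module terminates rows with '\r\n' by default.
print(buf.getvalue().splitlines())  # ['first_name,last_name', 'Baked,Beans']
```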
dictionary from JSON to CSV
39,084,687
<p>I have a JSON file containing a dictionary with many key-value pairs. I want to write it to a single CSV. One way to do this is simply to iterate through each key:</p> <pre><code>csvwriter.writerow([f["dict"]["key1"], f["dict"]["key2"], f["dict"]["key3"], ... ]) </code></pre> <p>This would be very tedious.</p> <p>Another possibility is simply to use</p> <pre><code>csvwriter.writerow([f["dict"].values()]) </code></pre> <p>but it writes everything into one column of the CSV file, which is not helpful.</p> <p>Is there a way I can write each value into one column of the CSV file?</p>
1
2016-08-22T16:41:59Z
39,084,762
<p><a href="http://pandas.pydata.org/pandas-docs/stable/index.html" rel="nofollow">Pandas</a> is good for this kind of thing. </p> <p>I would read the JSON file into a pandas dataframe (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html" rel="nofollow">link</a>). Then write it as a CSV (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow">link</a>). </p> <pre><code>import pandas as pd #read in the json df = pd.read_json("json_path_here") # write the csv df.to_csv("csv_path_here") </code></pre>
1
2016-08-22T16:47:37Z
[ "python", "json", "csv", "dictionary" ]
dictionary from JSON to CSV
39,084,687
<p>I have a JSON file containing a dictionary with many key-value pairs. I want to write it to a single CSV. One way to do this is simply to iterate through each key:</p> <pre><code>csvwriter.writerow([f["dict"]["key1"], f["dict"]["key2"], f["dict"]["key3"], ... ]) </code></pre> <p>This would be very tedious.</p> <p>Another possibility is simply to use</p> <pre><code>csvwriter.writerow([f["dict"].values()]) </code></pre> <p>but it writes everything into one column of the CSV file, which is not helpful.</p> <p>Is there a way I can write each value into one column of the CSV file?</p>
1
2016-08-22T16:41:59Z
39,084,796
<p>Equivalent to your code: </p> <pre><code>csvwriter.writerow(f["dict"].values()) </code></pre> <p><strong>Note:</strong> For this, your dictionary should be of <code>collections.OrderedDict</code> because Python's default dictionaries are not ordered. Hence, you'll end up with different order in each row.</p> <p><strong>Alternatively</strong>, better way to achieve this is using <a href="https://docs.python.org/2/library/csv.html#csv.DictWriter" rel="nofollow">DictWriter</a> (for which you don't need ordered dict):</p> <pre><code>csvwriter.writerow(f["dict"]) </code></pre>
1
2016-08-22T16:49:54Z
[ "python", "json", "csv", "dictionary" ]
dictionary from JSON to CSV
39,084,687
<p>I have a JSON file containing a dictionary with many key-value pairs. I want to write it to a single CSV. One way to do this is simply to iterate through each key:</p> <pre><code>csvwriter.writerow([f["dict"]["key1"], f["dict"]["key2"], f["dict"]["key3"], ... ]) </code></pre> <p>This would be very tedious.</p> <p>Another possibility is simply to use</p> <pre><code>csvwriter.writerow([f["dict"].values()]) </code></pre> <p>but it writes everything into one column of the CSV file, which is not helpful.</p> <p>Is there a way I can write each value into one column of the CSV file?</p>
1
2016-08-22T16:41:59Z
39,088,528
<p>It's not necessary to use <code>csv.DictWriter</code>. The following, which works in both Python 2 and 3, shows how to create a CSV file that will automatically have the key/value pairs in the same order as they appear in the JSON file (instead of requiring a manually defined <code>fieldnames</code> list):</p> <pre><code>from collections import OrderedDict import csv import json from io import StringIO # in-memory JSON file for testing json_file = StringIO(u'{"dict": {"First": "value1", "Second": "value2",' '"Third": "value3", "Fourth": "value4"}}') # read file and preserve order by using OrderedDict json_obj = json.load(json_file, object_pairs_hook=OrderedDict) with open('pairs.csv', 'w') as csvfile: writer = csv.writer(csvfile) writer.writerow(json_obj["dict"].keys()) # header row writer.writerow(json_obj["dict"].values()) </code></pre> <p>Contents of <strong><code>pairs.csv</code></strong> file written:</p> <pre class="lang-none prettyprint-override"><code>First,Second,Third,Fourth value1,value2,value3,value4 </code></pre>
0
2016-08-22T20:51:05Z
[ "python", "json", "csv", "dictionary" ]
Facing 401 Error while streaming tweets using python
39,084,694
<p>I have pasted the code and the error message below.</p> <p>Code:</p> <pre><code>from tweepy import Stream from tweepy import OAuthHandler from tweepy.streaming import StreamListener #consumer key, consumer secret, access token, access secret. ckey="" csecret="" atoken="" asecret="" class listener(StreamListener): def on_data(self, data): print(data) return(True) def on_error(self, status): print status auth = OAuthHandler(ckey, csecret) auth.set_access_token(atoken, asecret) twitterStream = Stream(auth, listener()) twitterStream.filter(track=["car","python"]) </code></pre> <p>Please help me, I am stuck with this for a long time.</p>
0
2016-08-22T16:42:31Z
39,085,367
<p>I haven't used this software, but I believe the problem is that you haven't supplied a consumer key and consumer secret. If that is the case, you could refer to:</p> <p><a href="http://support.yapsody.com/hc/en-us/articles/203068116-How-do-I-get-a-Twitter-Consumer-Key-and-Consumer-Secret-key-" rel="nofollow">Getting twitter consumer key and secret</a></p>
0
2016-08-22T17:25:12Z
[ "python", "twitter", "tweepy" ]
Django - create custom tag to acces variable by index in template
39,084,777
<p>I have an HTML template in Django. It gets two variables: a list of categories (a queryset, as it is returned by the <code>.objects.all()</code> function on a model in Django) and a dictionary of contestants. As a key of the dictionary, I'm using the id of the category, and the value is a list of contestants. I want to print the name of the category and then all the contestants. Now I have this:</p> <pre><code>{% for category in categories_list %} &lt;h1&gt;category.category_name&lt;/h1&gt; {% for contestant in contestants_dict[category.id] %} {{ contestant }} &lt;/br&gt; {% endfor %} {% endfor %} </code></pre> <p>However, when I run it, I get this error:</p> <pre><code>TemplateSyntaxError at /olympiada/contestants/ Could not parse the remainder: '[category.id]' from 'contestants_dict[category.id]' </code></pre> <p>What I know so far is that I can't use an index in a template. I thought that <code>{% something %}</code> contains pure Python, but it turned out it's just a tag. I know that I have to create my own simple_tag, but I don't know how. I read the docs <a href="https://docs.djangoproject.com/en/1.10/howto/custom-template-tags/#writing-custom-template-tags" rel="nofollow">Writing custom template tags</a>, but there is so little information there and I wasn't able to figure out how to create (and mainly use in a for loop) a tag that will take a dict and a key and return the value. What I tried is: templatetags/custom_tags.py:</p> <pre><code>from django import template register = template.Library() @register.simple_tag def list_index(a, b): return a[b] </code></pre> <p>and in the template:</p> <pre><code>{% for contestant in list_index contestants_dict category.id %} </code></pre> <p>But I get a TemplateSyntaxError. Could you please explain/show me how to create the tag, or is there a better way to do this?
Thanks.</p> <p>//EDIT: I managed to do it this way:</p> <pre><code>{% list_index contestants_list category.id as cont %} {% for contestant in cont %} </code></pre> <p>It works, but it takes 2 lines and I need to create another variable. Is there any way to do it without that?</p>
0
2016-08-22T16:48:27Z
39,085,425
<p>If you don't want 2 lines like that, you should be able to use a filter, I think:</p> <pre><code>@register.filter def list_index(a, b): return a[b] </code></pre> <p>Then the usage is like this:</p> <pre><code>{% for contestant in contestants_dict|list_index:category.id %} {{ contestant }} &lt;/br&gt; {% endfor %} </code></pre>
0
2016-08-22T17:28:36Z
[ "python", "django" ]
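The filter body in the answer above is plain subscription, so outside Django it can be exercised as an ordinary function — a sketch with a hypothetical contestants dict (the real one would map category ids to querysets):

```python
# Same logic as the template filter, without Django: a[b] works for
# both dict keys and list indices.
def list_index(a, b):
    return a[b]

contestants_dict = {1: ['Alice', 'Bob'], 2: ['Carol']}  # hypothetical data
category_id = 1

# The template loop {% for contestant in contestants_dict|list_index:category.id %}
# is equivalent to iterating over this call:
for contestant in list_index(contestants_dict, category_id):
    print(contestant)  # Alice, then Bob
```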
seaborn: how to make a tsplot square
39,084,813
<p>I would like to create a tsplot, where the x and the y axis are the same length. In other words, the aspect ratio of the graph should be 1.</p> <p>This does not work:</p> <pre><code>fig, ax = plt.subplots() fig.set_size_inches(2, 2) sns.tsplot(data=df, condition=' ', time='time', value='value', unit=' ', ax=ax) </code></pre>
-1
2016-08-22T16:50:32Z
39,085,542
<p>You could change the aspect ratio of your plots by controlling the <a href="http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.set_aspect" rel="nofollow"><code>aspect</code></a> parameter of a <code>matplotlib</code> object as shown:</p> <pre><code>import numpy as np import seaborn as sns import matplotlib.pyplot as plt np.random.seed(22) sns.set_style("whitegrid") gammas = sns.load_dataset("gammas") fig = plt.figure() ax = fig.add_subplot(111, aspect=2) #Use 'equal' to have the same scaling for x and y axes sns.tsplot(time="timepoint", value="BOLD signal", unit="subject", condition="ROI", data=gammas, ax=ax) plt.tight_layout() plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/AS2tp.png" rel="nofollow"><img src="http://i.stack.imgur.com/AS2tp.png" alt="Image"></a></p>
3
2016-08-22T17:37:08Z
[ "python", "matplotlib", "seaborn" ]
Multilingual issue using django
39,084,823
<p><br> I'm using Python 3 and Django 1.10 for my application, and I'm kinda new to Django.<br> I'm planning to have many languages for the Django admin panel. As I follow the rules in the <a href="https://docs.djangoproject.com/en/1.10/" rel="nofollow">Django documentation</a>, I find out that I have to use a <code>middleware</code> for localization... here are my middleware settings:</p> <pre><code>'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', </code></pre> <p>I also add this</p> <pre><code>LANGUAGES = ( ('fa', ugettext('Farsi')), ('en', ugettext('English')), ) </code></pre> <p>and</p> <pre><code>LOCALE_PATHS = ( os.path.join(BASE_DIR, 'locale'), ) </code></pre> <p>And of course I installed <code>GNU gettext</code> and created my locale folders with the <code>django-admin.py makemessages -l fa</code> command, then I translated the <code>.po</code> file and compiled it so I get the <code>.mo</code> file. So far everything looks good, I think, and when I just change the language in the settings file by typing it, everything works.<br> Now here is my question: how can I add the feature to change the application language from the admin panel, or a view? I added <code>url((r'^i18n/', include('django.conf.urls.i18n'))),</code> in my <code>urls</code> file. But I just don't know what to do now. Please help me. What is the next step? How can I add this form, or where should I add this form to change the language?</p> <pre><code>&lt;form action="/i18n/setlang/" method="post"&gt; &lt;input name="next" type="hidden" value="/next/page/" /&gt; &lt;select name="language"&gt; {% for lang in LANGUAGES %} &lt;option value="{{ lang.0 }}"&gt;{{ lang.1 }}&lt;/option&gt; {% endfor %} &lt;/select&gt; &lt;input type="submit" value="Go" /&gt; &lt;/form&gt; </code></pre>
0
2016-08-22T16:51:06Z
39,085,451
<p>In a view, you can use:</p> <pre><code>from django.utils import translation # to change to French for example translation.activate('fr') # and you set it in the session to change it for the user request.session[settings.LANGUAGE_SESSION_KEY] = 'fr' </code></pre> <p>So, you can add code similar to that in your view's "form_valid" method, passing in the language code for the language chosen in your form.</p> <p>For more info on this, check out the docs at: <a href="https://docs.djangoproject.com/en/1.10/ref/utils/#module-django.utils.translation" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/utils/#module-django.utils.translation</a></p> <p>and also:</p> <p><a href="https://docs.djangoproject.com/en/1.10/topics/i18n/translation/#explicitly-setting-the-active-language" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/i18n/translation/#explicitly-setting-the-active-language</a></p>
0
2016-08-22T17:30:02Z
[ "python", "django", "python-3.x", "django-admin", "multilingual" ]
Convert 160 bit Hash to unique integer ids for machine learning input
39,085,051
<p>I am preparing some data for k-means clustering. At the moment I have the id in 160 bit hash format (this is the format for bitcoin addresses). </p> <pre><code>d = {'Hash' : pd.Series(['1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6', '3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj', '1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6']), 'X1' : pd.Series([111, 222, 333]), 'X2' : pd.Series([111, 222, 333]), 'X3' : pd.Series([111, 222, 333]) } df1 = (pd.DataFrame(d)) print(df1) Hash X1 X2 X3 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 </code></pre> <p>In order to parse this data into the sklearn.cluster.KMeans algorithm I need to convert the data to np.float or np.array (I think).</p> <p>Therefore I want to convert the hashes to an integer value, maintaining the relationship across all rows. </p> <p><strong>This is my attempt:</strong></p> <pre><code>#REPLACE HASH WITH INT look_up = {} count = 0 for index, row in df1.iterrows(): count +=1 if row['Hash'] not in look_up: look_up[row['Hash']] = count else: continue print(look_up) {'3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj': 2, '1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6': 1} </code></pre> <p>At this point I run through each entry of the dictionary and try to replace the hash value with the new integer value. </p> <pre><code>for index, row in df1.iterrows(): for address, id_int in look_up.iteritems(): if address == row['Hash']: df1.set_value(index, row['Hash'], id_int) print(df1) </code></pre> <p><strong>Output:</strong></p> <pre><code>Hash X1 X2 X3 \ 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 0 1.0 NaN 1 NaN 2.0 2 1.0 NaN </code></pre> <p>The output does not replace the hashed address with the integer value.
How can I get the following output:</p> <p><strong>Expected output:</strong></p> <pre><code>d = {'ID' : pd.Series([1, 2, 1]), 'X1' : pd.Series([111, 222, 333]), 'X2' : pd.Series([111, 222, 333]), 'X3' : pd.Series([111, 222, 333]) } df3 = (pd.DataFrame(d)) print(df3) ID X1 X2 X3 0 1 111 111 111 1 2 222 222 222 2 1 333 333 333 </code></pre> <p>As the hash is the same in row <code>0</code> and <code>2</code> the same integer id should replace the hash.</p> <p>Is there a more efficient way of generating these unique ids? At the moment this code take a long time to run. </p>
0
2016-08-22T17:05:32Z
39,085,229
<p>There are quite a few ways. One way would be to use Categorical codes, and another would be to rank them:</p> <pre><code>In [16]: df1["via_categ"] = pd.Categorical(df1.Hash).codes + 1 In [17]: df1["via_rank"] = df1["Hash"].rank(method="dense").astype(int) In [18]: df1 Out[18]: Hash X1 X2 X3 via_categ via_rank 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 1 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 2 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 1 1 </code></pre> <p>(You could have dropped the Hash column and created a new ID column equally easily.)</p>
1
2016-08-22T17:17:13Z
[ "python", "pandas", "numpy", "k-means" ]
Convert 160 bit Hash to unique integer ids for machine learning input
39,085,051
<p>I am preparing some data for k-means clustering. At the moment I have the id in 160 bit hash format (this is the format for bitcoin addresses). </p> <pre><code>d = {'Hash' : pd.Series(['1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6', '3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj', '1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6']), 'X1' : pd.Series([111, 222, 333]), 'X2' : pd.Series([111, 222, 333]), 'X3' : pd.Series([111, 222, 333]) } df1 = (pd.DataFrame(d)) print(df1) Hash X1 X2 X3 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 </code></pre> <p>In order to parse this data into the sklearn.cluster.KMeans algorithm I need to convert the data to np.float or np.array (I think).</p> <p>Therefore I want to convert the hashes to an integer value, maintaining the relationship across all rows. </p> <p><strong>This is my attempt:</strong></p> <pre><code>#REPLACE HASH WITH INT look_up = {} count = 0 for index, row in df1.iterrows(): count +=1 if row['Hash'] not in look_up: look_up[row['Hash']] = count else: continue print(look_up) {'3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj': 2, '1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6': 1} </code></pre> <p>At this point I run through each entry of the dictionary and try to replace the hash value with the new integer value. </p> <pre><code>for index, row in df1.iterrows(): for address, id_int in look_up.iteritems(): if address == row['Hash']: df1.set_value(index, row['Hash'], id_int) print(df1) </code></pre> <p><strong>Output:</strong></p> <pre><code>Hash X1 X2 X3 \ 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 0 1.0 NaN 1 NaN 2.0 2 1.0 NaN </code></pre> <p>The output does not replace the hashed address with the integer value. 
How can I get the following output:</p> <p><strong>Expected output:</strong></p> <pre><code>d = {'ID' : pd.Series([1, 2, 1]), 'X1' : pd.Series([111, 222, 333]), 'X2' : pd.Series([111, 222, 333]), 'X3' : pd.Series([111, 222, 333]) } df3 = (pd.DataFrame(d)) print(df3) ID X1 X2 X3 0 1 111 111 111 1 2 222 222 222 2 1 333 333 333 </code></pre> <p>As the hash is the same in rows <code>0</code> and <code>2</code>, the same integer id should replace the hash.</p> <p>Is there a more efficient way of generating these unique ids? At the moment this code takes a long time to run. </p>
0
2016-08-22T17:05:32Z
39,085,281
<pre><code>s = list(set(df1.Hash)) hash2 = dict(zip(s, range(1, len(s) + 1))) df1.Hash = df1.Hash.map(hash2) print(df1) </code></pre> <p>Output:</p> <pre><code> Hash X1 X2 X3 0 2 111 111 111 1 1 222 222 222 2 2 333 333 333 </code></pre>
0
2016-08-22T17:19:40Z
[ "python", "pandas", "numpy", "k-means" ]
Convert 160 bit Hash to unique integer ids for machine learning input
39,085,051
<p>I am preparing some data for k-means clustering. At the moment I have the id in 160 bit hash format (this is the format for bitcoin addresses). </p> <pre><code>d = {'Hash' : pd.Series(['1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6', '3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj', '1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6']), 'X1' : pd.Series([111, 222, 333]), 'X2' : pd.Series([111, 222, 333]), 'X3' : pd.Series([111, 222, 333]) } df1 = (pd.DataFrame(d)) print(df1) Hash X1 X2 X3 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 </code></pre> <p>In order to parse this data into the sklearn.cluster.KMeans algorithm I need to convert the data to np.float or np.array (I think).</p> <p>Therefore I want to convert the hashes to an integer value, maintaining the relationship across all rows. </p> <p><strong>This is my attempt:</strong></p> <pre><code>#REPLACE HASH WITH INT look_up = {} count = 0 for index, row in df1.iterrows(): count +=1 if row['Hash'] not in look_up: look_up[row['Hash']] = count else: continue print(look_up) {'3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj': 2, '1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6': 1} </code></pre> <p>At this point I run through each entry of the dictionary and try to replace the hash value with the new integer value. </p> <pre><code>for index, row in df1.iterrows(): for address, id_int in look_up.iteritems(): if address == row['Hash']: df1.set_value(index, row['Hash'], id_int) print(df1) </code></pre> <p><strong>Output:</strong></p> <pre><code>Hash X1 X2 X3 \ 0 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 111 111 111 1 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 222 222 222 2 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 333 333 333 1HYKGGzRHDskth2ecKZ2HYvxSvQ1p87m6 3DndG5HuyP8Ep8p3V1i394AUxG4gtgsvoj 0 1.0 NaN 1 NaN 2.0 2 1.0 NaN </code></pre> <p>The output does not replace the hashed address with the integer value. 
How can I get the following output:</p> <p><strong>Expected output:</strong></p> <pre><code>d = {'ID' : pd.Series([1, 2, 1]), 'X1' : pd.Series([111, 222, 333]), 'X2' : pd.Series([111, 222, 333]), 'X3' : pd.Series([111, 222, 333]) } df3 = (pd.DataFrame(d)) print(df3) ID X1 X2 X3 0 1 111 111 111 1 2 222 222 222 2 1 333 333 333 </code></pre> <p>As the hash is the same in rows <code>0</code> and <code>2</code>, the same integer id should replace the hash.</p> <p>Is there a more efficient way of generating these unique ids? At the moment this code takes a long time to run. </p>
0
2016-08-22T17:05:32Z
39,085,523
<p>You can use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="nofollow"><code>sklearn.preprocessing.LabelEncoder</code></a>:</p> <pre><code>from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit(df1['Hash']) df1['Hash'] = le.transform(df1['Hash']) </code></pre> <p>Resulting Output:</p> <pre><code> Hash X1 X2 X3 0 0 111 111 111 1 1 222 222 222 2 0 333 333 333 </code></pre> <p>Also, note that this gives you an easy way to revert back to the original hash by using <code>inverse_transform</code>:</p> <pre><code>df1['Hash'] = le.inverse_transform(df1['Hash']) </code></pre>
0
2016-08-22T17:35:33Z
[ "python", "pandas", "numpy", "k-means" ]
Interrupt python script with a specific key on Linux
39,085,169
<p>I'm trying to have a loop which increments and prints a value. While it's running I would like to press a key (e.g. space or shift) and have it print that the key was pressed. Below is example code of what I would like.</p> <pre><code>import time def space(): print 'You pressed space' def shift(): print 'You pressed shift' x = 0 while True: print(x) #if space is pressed space() #if shift is pressed shift() x = x + 1; time.sleep(1) </code></pre> <p>EDIT: Here is an example output:</p> <pre><code>0 1 2 You pressed shift 3 4 5 You pressed space 6 7 . . . </code></pre>
1
2016-08-22T17:13:16Z
39,085,309
<p>If you're on Windows, check out <a href="https://docs.python.org/3.5/library/msvcrt.html" rel="nofollow">msvcrt</a>:</p> <pre><code>import msvcrt from time import sleep x = 0 while True: x += 1 sleep(1) if msvcrt.kbhit(): print "You pressed: %s" % msvcrt.getch() </code></pre>
-1
2016-08-22T17:21:39Z
[ "python", "linux", "python-2.7" ]
Interrupt python script with a specific key on Linux
39,085,169
<p>I'm trying to have a loop which increments and prints a value. While it's running I would like to press a key (eg. space or shift) and have it print that the key was pressed. Below is example code of what I would like.</p> <pre><code>def space(): print 'You pressed space' def shift(): print 'You pressed shift' x = 0 while True: print(x) #if space is pressed space() #if shift is pressed shift() x = x + 1; time.sleep(1) </code></pre> <p>EDIT: Here is an example output</p> <pre><code>0 1 2 You pressed shift 3 4 5 You pressed space 6 7 . . . </code></pre>
1
2016-08-22T17:13:16Z
39,085,785
<p>I can help you with a modified answer from here:</p> <p><a href="http://stackoverflow.com/questions/11918999/key-listeners-in-python?answertab=votes#tab-top">http://stackoverflow.com/questions/11918999/key-listeners-in-python</a></p> <p>and for only space and enter:</p> <pre><code>import contextlib import sys import termios import time @contextlib.contextmanager def raw_mode(file): old_attrs = termios.tcgetattr(file.fileno()) new_attrs = old_attrs[:] new_attrs[3] = new_attrs[3] &amp; ~(termios.ECHO | termios.ICANON) try: termios.tcsetattr(file.fileno(), termios.TCSADRAIN, new_attrs) yield finally: termios.tcsetattr(file.fileno(), termios.TCSADRAIN, old_attrs) def space(ch): if ord(ch) == 32: print 'You pressed space' def enter(ch): if ord(ch) == 10: print 'You pressed enter' def main(): print 'exit with ^C or ^D' with raw_mode(sys.stdin): try: x = 0 while True: print(x) ch = sys.stdin.read(1) space(ch) enter(ch) x = x + 1; time.sleep(1) except (KeyboardInterrupt, EOFError): pass if __name__ == '__main__': main() </code></pre>
1
2016-08-22T17:52:51Z
[ "python", "linux", "python-2.7" ]
How to stream Csv file into BigQuery?
39,085,231
<p>The examples I have found so far stream JSON to BQ, e.g. <a href="https://cloud.google.com/bigquery/streaming-data-into-bigquery" rel="nofollow">https://cloud.google.com/bigquery/streaming-data-into-bigquery</a></p> <p>How do I stream CSV or any other file type into BQ? Below is a block of code for streaming; the issue seems to be in insert_all_data, where 'row' is defined as JSON. Thanks.</p> <pre><code># [START stream_row_to_bigquery] def stream_row_to_bigquery(bigquery, project_id, dataset_id, table_name, row, num_retries=5): insert_all_data = { 'rows': [{ 'json': row, # Generate a unique id for each row so retries don't accidentally # duplicate insert 'insertId': str(uuid.uuid4()), }] } return bigquery.tabledata().insertAll( projectId=project_id, datasetId=dataset_id, tableId=table_name, body=insert_all_data).execute(num_retries=num_retries) # [END stream_row_to_bigquery] </code></pre>
2
2016-08-22T17:17:24Z
39,085,676
<p>This is how I <a href="https://github.com/DuoSoftware/DigInEngine/blob/master/modules/BigQueryHandler.py#L85" rel="nofollow">wrote</a> it using the <a href="https://github.com/tylertreat/BigQuery-Python" rel="nofollow">bigquery-python</a> library, very easily.</p> <pre><code>def insert_data(datasetname,table_name,DataObject): client = get_client(project_id, service_account=service_account, private_key_file=key, readonly=False, swallow_results=False) insertObject = DataObject try: result = client.push_rows(datasetname,table_name,insertObject) except Exception, err: print err raise return result </code></pre> <p>Here insertObject is a list of dictionaries where one dictionary contains one row.</p> <p>e.g.: <code>[{field1:value1, field2:value2},{field1:value3, field2:value4}]</code></p> <p>A CSV can be read as follows:</p> <pre><code>import pandas as pd fileCsv = pd.read_csv(file_path+'/'+filename, parse_dates=C, infer_datetime_format=True) data = [] for row_x in range(len(fileCsv.index)): i = 0 row = {} for col_y in schema: row[col_y['name']] = _sorted_list[i]['col_data'][row_x] i += 1 data.append(row) insert_data(datasetname,table_name,data) </code></pre> <p>The data list can then be sent to insert_data.</p> <p>This will do that, but there's still a limitation that I already raised <a href="http://stackoverflow.com/questions/38971523/insert-large-amount-of-data-to-bigquery-via-bigquery-python-library">here</a>.</p>
1
2016-08-22T17:45:39Z
[ "python", "streaming", "google-bigquery" ]
Manipulating pandas dataframe using 3 columns of data
39,085,326
<p>I am having trouble coming up with a way to accomplish my task. I've got a dataframe with 3 columns: <code>length, reachcode, and year</code>. </p> <p>My example dataframe: </p> <pre><code>year reachcode length 1988 1000 1.2 1988 1000 2.0 1990 1000 0.3 1993 1000 0.5 </code></pre> <p>I'm trying to find the 'reachcode' duplicates within a single year and then sum 'length' for that year. </p> <p>After that I would like to compare the summed 'length' values against the same 'reachcode' for different years and keep the smallest value. </p> <p>So in the example dataframe, the lengths 1.2 and 2.0 would be summed for the year 1988 and <code>reachcode = 1000</code>, and then that value (3.2) would be compared to 1990 and 1993, with the value 0.3 and reachcode retained in a new list.</p> <p>I have some experience with Pandas, but this is a more complicated task than I've previously had to deal with. My real dataframe is about 40,000 rows, so finding an automated way to do this would be extremely helpful. Thanks for any help.</p>
1
2016-08-22T17:22:43Z
39,085,682
<p>It sounds like you need a double-stage <code>groupby</code>. Firstly groupby <code>year</code> and <code>reachcode</code> and calculate the sum, reset index so that you can groupby <code>reachcode</code> further to take the min of <code>length</code>:</p> <pre><code>df.groupby(['year', 'reachcode']).sum().reset_index().groupby('reachcode')['length'].min() # reachcode # 1000 0.3 # Name: length, dtype: float64 </code></pre>
2
2016-08-22T17:45:46Z
[ "python", "pandas" ]
Manipulating pandas dataframe using 3 columns of data
39,085,326
<p>I am having trouble coming up with a way to accomplish my task. I've got a dataframe with 3 columns: <code>length, reachcode, and year</code>. </p> <p>My example dataframe: </p> <pre><code>year reachcode length 1988 1000 1.2 1988 1000 2.0 1990 1000 0.3 1993 1000 0.5 </code></pre> <p>I'm trying to find the 'reachcode' duplicates within a single year and then sum 'length' for that year. </p> <p>After that I would like to compare the summed 'length' values against the same 'reachcode' for different years and keep the smallest value. </p> <p>So in the example dataframe, the lengths 1.2 and 2.0 would be summed for the year 1988 and <code>reachcode = 1000</code>, and then that value (3.2) would be compared to 1990 and 1993, with the value 0.3 and reachcode retained in a new list.</p> <p>I have some experience with Pandas, but this is a more complicated task than I've previously had to deal with. My real dataframe is about 40,000 rows, so finding an automated way to do this would be extremely helpful. Thanks for any help.</p>
1
2016-08-22T17:22:43Z
39,085,705
<p>Simply run <code>groupby</code> aggregates:</p> <pre><code>df['lengthsum'] = df.groupby(['year', 'reachcode'])['length'].transform(sum) df['lengthmin'] = df.groupby(['reachcode'])['lengthsum'].transform(min) # year reachcode length lengthsum lengthmin # 0 1988 1000 1.2 3.2 0.3 # 1 1988 1000 2.0 3.2 0.3 # 2 1990 1000 0.3 0.3 0.3 # 3 1993 1000 0.5 0.5 0.3 </code></pre>
2
2016-08-22T17:47:17Z
[ "python", "pandas" ]
Deleting a List of Labels From The Tkinter Window Placed In The Grid
39,085,468
<p>a. I have a scenario where I created a list of labels, as shown below.</p> <pre><code>class test_template: def __init__(self, master): self.master = master ... def nb_code(self): if nb_cnt == 0: for i in range (int(no_of_fs)): self.enul = Label(root, text="Enter The Number Of Fruits In Basket%d\n"%i) self.enul.grid(row=i+1) # Trying To Delete The List Of Labels elif nb_cnt == 2: for i in range (int(no_of_fs)): self.enul.grid_forget() </code></pre> <p>b. Say I have a list of 3 Labels: when I try to delete them in a loop, only one gets deleted, which makes sense because self.enul was only holding the Label information for the last Label assigned.</p> <p>c. <strong>But then in this scenario, what do I need to do to delete the complete list of Labels? Can it be done by searching for the "Label" name in the total grid and deleting them? Or how can it be done?</strong></p> <p>Please share your comments!</p>
0
2016-08-22T17:31:03Z
39,088,767
<p>You can get <code>root</code> children and search for your labels. </p> <pre><code>for child in root.children.values(): info = child.grid_info() if info['column'] == 0: child.grid_forget() </code></pre>
0
2016-08-22T21:08:00Z
[ "python", "python-2.7", "tkinter", "tkinter-canvas" ]
Using pip with two --extra-index-url arguments that both point to the same domain
39,085,531
<p>We use our own python package index at my office, and we're trying to add a new one. When I try to specify both indices at the same time, I get prompted to log in, but if I use only one at a time I don't.</p> <p>For example:</p> <pre><code>$ pip install --user --upgrade \ --extra-index-url https://&lt;api token&gt;:@packagecloud.io/2rs2ts/oldrepo/pypi/simple \ --extra-index-url https://&lt;other api token&gt;:@packagecloud.io/2rs2ts/newrepo/pypi/simple \ mypackage Collecting mypackage User for packagecloud.io: </code></pre> <p>But if I specify just one of either of those <code>--extra-index-url</code> arguments then I download my package just fine.</p> <p>I'm 99% certain that I am passing the arguments correctly, since <a href="https://github.com/pypa/pip/blob/master/pip/cmdoptions.py#L228-L238" rel="nofollow">it's specified with an <code>append</code> action in the source</a>. So I think the problem is that both of these index URLs are from <code>packagecloud.io</code>... but I could be wrong. Either way, how can I use both of my repos?</p>
1
2016-08-22T17:36:02Z
39,085,736
<pre><code>--extra-index-url </code></pre> <p>accepts a list (it should probably be called --extra-index-urls). Try adding your URLs comma-separated, like this:</p> <pre><code>pip install --user --upgrade \ --extra-index-url https://&lt;api token&gt;:@packagecloud.io/2rs2ts/oldrepo/pypi/simple, \ https://&lt;other api token&gt;:@packagecloud.io/2rs2ts/newrepo/pypi/simple \ mypackage </code></pre>
0
2016-08-22T17:49:37Z
[ "python", "pip" ]
Using pip with two --extra-index-url arguments that both point to the same domain
39,085,531
<p>We use our own python package index at my office, and we're trying to add a new one. When I try to specify both indices at the same time, I get prompted to log in, but if I use only one at a time I don't.</p> <p>For example:</p> <pre><code>$ pip install --user --upgrade \ --extra-index-url https://&lt;api token&gt;:@packagecloud.io/2rs2ts/oldrepo/pypi/simple \ --extra-index-url https://&lt;other api token&gt;:@packagecloud.io/2rs2ts/newrepo/pypi/simple \ mypackage Collecting mypackage User for packagecloud.io: </code></pre> <p>But if I specify just one of either of those <code>--extra-index-url</code> arguments then I download my package just fine.</p> <p>I'm 99% certain that I am passing the arguments correctly, since <a href="https://github.com/pypa/pip/blob/master/pip/cmdoptions.py#L228-L238" rel="nofollow">it's specified with an <code>append</code> action in the source</a>. So I think the problem is that both of these index URLs are from <code>packagecloud.io</code>... but I could be wrong. Either way, how can I use both of my repos?</p>
1
2016-08-22T17:36:02Z
39,131,265
<p>Apparently this is a bug in pip. The HTTP basic auth information is not stored correctly when specifying multiple <code>--extra-index-url</code>s that point to the same domain. <a href="https://github.com/pypa/pip/issues/3931" rel="nofollow">I filed an issue</a>, but in the meantime, there is a workaround. By specifying one of the <code>--extra-index-url</code>s as the <code>--index</code> instead, and adding PyPI as an <code>--extra-index-url</code>, I was able to download my package successfully:</p> <pre><code>$ pip install --user --upgrade \ --index https://&lt;api token&gt;:@packagecloud.io/2rs2ts/oldrepo/pypi/simple \ --extra-index-url https://&lt;other api token&gt;:@packagecloud.io/2rs2ts/newrepo/pypi/simple \ --extra-index-url https://pypi.python.org/simple \ mypackage Collecting mypackage Downloading https://packagecloud.io/2rs2ts/newrepo/pypi/packages/mypackage-1.0.0-py2-none-any.whl (52kB) etc. etc. </code></pre>
1
2016-08-24T19:08:19Z
[ "python", "pip" ]
Docker Build can't find pip
39,085,599
<p>Trying to follow a few[<a href="https://aws.amazon.com/blogs/aws/run-docker-apps-locally-using-the-elastic-beanstalk-eb-cli/" rel="nofollow">1</a>][<a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_dockerpreconfig.walkthrough.html" rel="nofollow">2</a>] simple Docker tutorials via AWS am and getting the following error:</p> <pre><code>&gt; docker build -t my-app-image . Sending build context to Docker daemon 94.49 MB Step 1 : FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 # Executing 2 build triggers... Step 1 : ADD . /var/app ---&gt; Using cache Step 1 : RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi ---&gt; Running in d48860787e63 /bin/sh: 1: /var/app/bin/pip: not found The command '/bin/sh -c if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi' returned a non-zero code: 127 </code></pre> <p>Dockerfile:</p> <pre><code># For Python 3.4 FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 </code></pre> <p>Which pip returns the following:</p> <pre><code>&gt; which pip ./bin/pip </code></pre> <p>Relevant file structure:</p> <pre><code>. ├── Dockerfile ├── bin │   ├── activate │   ├── pip │   ├── pip3 │   ├── pip3.5 │   ├── python -&gt; python3 │   ├── python-config │   ├── python3 │   ├── python3.5 -&gt; python3 │ . . </code></pre> <p>Again, noob in all things Docker so I'm not sure what troubleshooting steps to take. Please let me know what other helpful information I can provide.</p>
3
2016-08-22T17:40:45Z
39,138,501
<p>/var/app/bin/pip is supposed to work because the <a href="https://github.com/aws/aws-eb-python-dockerfiles/blob/206710af939b622430fe0be3d7d8dcf177ad78e9/3.4.2-aws-eb-onbuild/Dockerfile#L5-L7" rel="nofollow">amazon/aws-eb-python:3.4.2-onbuild-3.5.1 Dockerfile</a> includes:</p> <pre><code>RUN pip3 install virtualenv RUN virtualenv /var/app RUN /var/app/bin/pip install --download-cache /src uwsgi </code></pre> <p>It means that when you use this image as a base image, its two <a href="https://docs.docker.com/engine/reference/builder/#/onbuild" rel="nofollow"><code>ONBUILD</code></a> <a href="https://github.com/aws/aws-eb-python-dockerfiles/blob/206710af939b622430fe0be3d7d8dcf177ad78e9/3.4.2-aws-eb-onbuild/Dockerfile#L13-L14" rel="nofollow">instructions</a> would apply to your current build.</p> <pre><code>ONBUILD ADD . /var/app ONBUILD RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi </code></pre> <p>Try with a simpler Dockerfile, and open a shell session from it, in order to check if /var/app is there, and if pip is correctly installed.<br> You could also test rebuilding the <a href="https://github.com/aws/aws-eb-python-dockerfiles/tree/206710af939b622430fe0be3d7d8dcf177ad78e9/3.4.2-aws-eb-onbuild" rel="nofollow">3.4.2-aws-eb-onbuild</a> image itself directly, again for testing.</p>
1
2016-08-25T06:50:31Z
[ "python", "amazon-web-services", "docker", "python-3.4" ]
Docker Build can't find pip
39,085,599
<p>Trying to follow a few[<a href="https://aws.amazon.com/blogs/aws/run-docker-apps-locally-using-the-elastic-beanstalk-eb-cli/" rel="nofollow">1</a>][<a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_dockerpreconfig.walkthrough.html" rel="nofollow">2</a>] simple Docker tutorials via AWS am and getting the following error:</p> <pre><code>&gt; docker build -t my-app-image . Sending build context to Docker daemon 94.49 MB Step 1 : FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 # Executing 2 build triggers... Step 1 : ADD . /var/app ---&gt; Using cache Step 1 : RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi ---&gt; Running in d48860787e63 /bin/sh: 1: /var/app/bin/pip: not found The command '/bin/sh -c if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi' returned a non-zero code: 127 </code></pre> <p>Dockerfile:</p> <pre><code># For Python 3.4 FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 </code></pre> <p>Which pip returns the following:</p> <pre><code>&gt; which pip ./bin/pip </code></pre> <p>Relevant file structure:</p> <pre><code>. ├── Dockerfile ├── bin │   ├── activate │   ├── pip │   ├── pip3 │   ├── pip3.5 │   ├── python -&gt; python3 │   ├── python-config │   ├── python3 │   ├── python3.5 -&gt; python3 │ . . </code></pre> <p>Again, noob in all things Docker so I'm not sure what troubleshooting steps to take. Please let me know what other helpful information I can provide.</p>
3
2016-08-22T17:40:45Z
39,139,192
<p>I think the issue is how you have organized your bin/pip files</p> <p>From Docker Documentation: <a href="https://docs.docker.com/engine/reference/builder/#add" rel="nofollow">https://docs.docker.com/engine/reference/builder/#add</a></p> <pre><code>If &lt;dest&gt; does not end with a trailing slash, it will be considered a regular file and the contents of &lt;src&gt; will be written at &lt;dest&gt;. </code></pre> <p>So your file structure should be :</p> <pre><code>. ├── Dockerfile ├── app | |__bin | | | │ ├── activate │ ├── pip │ ├── pip3 │ ├── pip3.5 │ ├── python -&gt; python3 │ ├── python-config │ ├── python3 │ ├── python3.5 -&gt; python3 │ . . </code></pre>
1
2016-08-25T07:29:49Z
[ "python", "amazon-web-services", "docker", "python-3.4" ]
Docker Build can't find pip
39,085,599
<p>Trying to follow a few[<a href="https://aws.amazon.com/blogs/aws/run-docker-apps-locally-using-the-elastic-beanstalk-eb-cli/" rel="nofollow">1</a>][<a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_dockerpreconfig.walkthrough.html" rel="nofollow">2</a>] simple Docker tutorials via AWS am and getting the following error:</p> <pre><code>&gt; docker build -t my-app-image . Sending build context to Docker daemon 94.49 MB Step 1 : FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 # Executing 2 build triggers... Step 1 : ADD . /var/app ---&gt; Using cache Step 1 : RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi ---&gt; Running in d48860787e63 /bin/sh: 1: /var/app/bin/pip: not found The command '/bin/sh -c if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi' returned a non-zero code: 127 </code></pre> <p>Dockerfile:</p> <pre><code># For Python 3.4 FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1 </code></pre> <p>Which pip returns the following:</p> <pre><code>&gt; which pip ./bin/pip </code></pre> <p>Relevant file structure:</p> <pre><code>. ├── Dockerfile ├── bin │   ├── activate │   ├── pip │   ├── pip3 │   ├── pip3.5 │   ├── python -&gt; python3 │   ├── python-config │   ├── python3 │   ├── python3.5 -&gt; python3 │ . . </code></pre> <p>Again, noob in all things Docker so I'm not sure what troubleshooting steps to take. Please let me know what other helpful information I can provide.</p>
3
2016-08-22T17:40:45Z
39,191,733
<p>Something is very odd here. Why do you have the virtualenv content next to your Dockerfile? <a href="https://github.com/aws/aws-eb-python-dockerfiles/tree/206710af939b622430fe0be3d7d8dcf177ad78e9/3.4.2-aws-eb-onbuild" rel="nofollow">The image you are building from</a> creates the virtualenv on /var/app (within the container, yes?) for you. I believe that the ONBUILD command copies it (or parts of it) over and corrupts the rest of the process, making /var/app/bin/pip inoperable.</p> <pre><code>FROM python:3.4.2 &lt;-- this is the base image, on top of which the following command will be applied WORKDIR /var/app &lt;-- this is the working dir (a la 'cd /var/app') RUN pip3 install virtualenv &lt;-- using pip3 (installed using base image I presume) to install the virtualenv package RUN virtualenv /var/app &lt;-- creating a virtual env on /var/app RUN /var/app/bin/pip install --download-cache /src uwsgi &lt;-- using the recently installed virtualenv pip to install uwsgi ... ONBUILD ADD . /var/app &lt;-- add the contents of the directory where the Dockerfile is built from, I think this is where the corruption happens ONBUILD RUN if [ -f /var/app/requirements.txt ]; then /var/app/bin/pip install -r /var/app/requirements.txt; fi &lt;-- /var/app/bin/pip has been corrupted </code></pre> <p>You should not care about externally having /var/app available on the host. You just need (based on the Dockerfile) to have the "requirements.txt" available on the host, to be copied into the container (or not; if not, it will skip).</p>
2
2016-08-28T13:24:26Z
[ "python", "amazon-web-services", "docker", "python-3.4" ]
Find overlapping modularity in two graphs - iGraph in Python
39,085,646
<p>I have two related graphs created in iGraph, A and G. I find community structure in G using either the infomap or label_propagation method (because those are two that allow for weighted, directional links). From this, I can see the modularity of this community structure for the G graph. However, I need to see what modularity this will provide for the A graph. How can I do this?</p>
0
2016-08-22T17:43:51Z
39,185,816
<p>Did you try using the <code>modularity</code> function?</p> <pre><code>im &lt;- infomap.community(graph=G) qG &lt;- modularity(im) memb &lt;- membership(im) qA &lt;- modularity(x=A, membership=memb, weights=E(A)$weight) cat("qG=",qG," vs. qA=",qA,"\n",sep="") </code></pre> <p>Note: tested with igraph v0.7, I don't have a more recent version right now. The parameter/function names might slightly differ.</p>
2
2016-08-27T21:15:01Z
[ "python", "graph", "igraph" ]
Find overlapping modularity in two graphs - iGraph in Python
39,085,646
<p>I have two related graphs created in iGraph, A and G. I find community structure in G using either the infomap or label_propagation method (because those are two that allow for weighted, directional links). From this, I can see the modularity of this community structure for the G graph. However, I need to see what modularity this will provide for the A graph. How can I do this?</p>
0
2016-08-22T17:43:51Z
39,735,109
<p>So I figured it out. What you need to do is find a community structure, either pre-defined or using one of the methods provided for community detection, such as infomap or label_propagation. This gives you a vertex clustering, which you can use to place on another graph and from that use .q to find the modularity.</p>
-1
2016-09-27T22:08:31Z
[ "python", "graph", "igraph" ]
Django: Print out all choices for a models.Model class
39,085,760
<p>I'm trying to write an algorithm that prints out all choices in a Django model class. </p> <p>For example: I have a model: </p> <pre><code>class SomeModel(models.Model): field_a = models.SmallIntegerField(choices=[(1, "a"), (2, "b"), (3, "c")]) field_b = models.CharField(max_length=255) </code></pre> <p>The expected output is something like this: </p> <pre><code>"field_a": [(1, "a"), (2, "b"), (3, "c")] </code></pre> <p>Please note the algorithm should ignore field_b because of the missing choices attribute.</p> <p>Any idea on how this functionality can be achieved? </p>
0
2016-08-22T17:51:17Z
39,086,016
<p>Have a look at the <a href="https://docs.djangoproject.com/en/1.10/ref/models/meta/" rel="nofollow">meta options</a> documentation. Note that <code>_meta.fields</code> is an attribute, not a method. You can achieve this in the following way:</p> <pre><code>fields = SomeModel._meta.fields
for field in fields:
    if field.choices:
        print "%s: %s" % (field.name, field.choices)
</code></pre>
1
2016-08-22T18:06:14Z
[ "python", "django", "django-models" ]
Iterate through Json list object - Python
39,085,867
<p>I have some JSON text I want to iterate through, formatted in the following way:</p> <pre><code>{
  "itemsPerPage": 45,
  "links": {
    "next": "https://www.12345.com"
  },
  "list": [
    {
      "id": "333333",
      "placeID": "63333",
      "description": " ",
      "displayName": "test-12345",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Public",
      "memberCount": 1,
    },
    {
      "id": "32423",
      "placeID": "606",
      "description": " ",
      "displayName": "test123",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Private",
      "memberCount": 1,
    },
</code></pre> <p>I am trying to iterate through this list, and grab the displayName, however my code won't recognize all of the different display names. Here is my code:</p> <pre><code>for i in range(len(json_obj['list'])):
if (json_obj['list'][i]['displayName'] == "some id"):
do stuff
else:
exit()
</code></pre> <p>How can I fix the statement, in order to successfully loop through the json obj?</p>
-2
2016-08-22T17:57:20Z
39,085,971
<p>While the JSON you posted isn't valid, I'll assume you left some stuff off the end.</p> <pre><code>for entry in dataset['list']:
    print(entry['displayName'])
</code></pre> <p>Will loop through your JSON data. </p> <p>If you want to do_stuff() if it matches a certain value:</p> <pre><code>for entry in dataset['list']:
    if entry['displayName'] == 'test-12345':
        do_stuff()
</code></pre>
0
2016-08-22T18:03:44Z
[ "python", "json", "loops", "python-3.x", "object" ]
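A self-contained version of the loop in the answer above, run against a trimmed-down stand-in for the posted JSON (only the `displayName` keys are kept):

```python
import json

payload = '{"list": [{"displayName": "test-12345"}, {"displayName": "test123"}]}'
data = json.loads(payload)

matched = []
for entry in data["list"]:
    if entry["displayName"] == "test-12345":
        matched.append(entry)

names = [entry["displayName"] for entry in data["list"]]
# names contains both display names; matched holds the single matching entry
```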
Iterate through Json list object - Python
39,085,867
<p>I have some JSON text I want to iterate through, formatted in the following way:</p> <pre><code>{
  "itemsPerPage": 45,
  "links": {
    "next": "https://www.12345.com"
  },
  "list": [
    {
      "id": "333333",
      "placeID": "63333",
      "description": " ",
      "displayName": "test-12345",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Public",
      "memberCount": 1,
    },
    {
      "id": "32423",
      "placeID": "606",
      "description": " ",
      "displayName": "test123",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Private",
      "memberCount": 1,
    },
</code></pre> <p>I am trying to iterate through this list, and grab the displayName, however my code won't recognize all of the different display names. Here is my code:</p> <pre><code>for i in range(len(json_obj['list'])):
if (json_obj['list'][i]['displayName'] == "some id"):
do stuff
else:
exit()
</code></pre> <p>How can I fix the statement, in order to successfully loop through the json obj?</p>
-2
2016-08-22T17:57:20Z
39,086,048
<p>This works for me.</p> <pre><code>import json

text = """{
  "itemsPerPage": 45,
  "links": {
    "next": "https://www.12345.com"
  },
  "list": [
    {
      "id": "333333",
      "placeID": "63333",
      "description": " ",
      "displayName": "test-12345",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Public",
      "memberCount": 1
    },
    {
      "id": "32423",
      "placeID": "606",
      "description": " ",
      "displayName": "test",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Private",
      "memberCount": 1
    }]}"""

data = json.loads(text)
for item in data['list']:
    if 'displayName' in item:
        print(item['displayName'])
</code></pre>
0
2016-08-22T18:08:31Z
[ "python", "json", "loops", "python-3.x", "object" ]
Iterate through Json list object - Python
39,085,867
<p>I have some JSON text I want to iterate through, formatted in the following way:</p> <pre><code>{
  "itemsPerPage": 45,
  "links": {
    "next": "https://www.12345.com"
  },
  "list": [
    {
      "id": "333333",
      "placeID": "63333",
      "description": " ",
      "displayName": "test-12345",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Public",
      "memberCount": 1,
    },
    {
      "id": "32423",
      "placeID": "606",
      "description": " ",
      "displayName": "test123",
      "name": "test",
      "status": "Active",
      "groupType": "Creative",
      "groupTypeV2": "Private",
      "memberCount": 1,
    },
</code></pre> <p>I am trying to iterate through this list, and grab the displayName, however my code won't recognize all of the different display names. Here is my code:</p> <pre><code>for i in range(len(json_obj['list'])):
if (json_obj['list'][i]['displayName'] == "some id"):
do stuff
else:
exit()
</code></pre> <p>How can I fix the statement, in order to successfully loop through the json obj?</p>
-2
2016-08-22T17:57:20Z
39,086,424
<p>You need to actually perform actions within your loop. Python relies on whitespace to denote blocks. This is something you can't forget when writing Python.</p> <pre><code>for i in range(len(json_obj['list'])):
if (json_obj['list'][i]['displayName'] == "some id"):
do stuff
else:
exit()
</code></pre> <p>should be</p> <pre><code>for i in range(len(json_obj['list'])):
    if (json_obj['list'][i]['displayName'] == "some id"):
        do stuff
    else:
        exit()
</code></pre>
0
2016-08-22T18:32:43Z
[ "python", "json", "loops", "python-3.x", "object" ]
Opening a gzip file in python Apache Beam
39,085,869
<p>Is it currently possible to read from a gzip file in python using Apache Beam? My pipeline is pulling gzip files from gcs with this line of code: </p> <pre><code>beam.io.Read(beam.io.TextFileSource('gs://bucket/file.gz', compression_type='GZIP'))
</code></pre> <p>But I am getting this error: </p> <pre><code>UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 1: invalid start byte
</code></pre> <p>We noticed in the python beam source code that compressed files seem to be handled when writing to a sink. <a href="https://github.com/apache/incubator-beam/blob/python-sdk/sdks/python/apache_beam/io/fileio.py#L445" rel="nofollow">https://github.com/apache/incubator-beam/blob/python-sdk/sdks/python/apache_beam/io/fileio.py#L445</a></p> <p><strong>More Detailed Traceback:</strong> </p> <pre><code>Traceback (most recent call last):
  File "beam-playground.py", line 11, in &lt;module&gt;
    p.run()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/pipeline.py", line 159, in run
    return self.runner.run(self)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/direct_runner.py", line 103, in run
    super(DirectPipelineRunner, self).run(pipeline)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 98, in run
    pipeline.visit(RunVisitor(self))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/pipeline.py", line 182, in visit
    self._root_transform().visit(visitor, self, visited)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/pipeline.py", line 419, in visit
    part.visit(visitor, pipeline, visited)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/pipeline.py", line 422, in visit
    visitor.visit_transform(self)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 93, in visit_transform
    self.runner.run_transform(transform_node)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 168, in run_transform
    return m(transform_node)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/direct_runner.py", line 99, in func_wrapper
    func(self, pvalue, *args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/direct_runner.py", line 258, in run_Read
    read_values(reader)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/runners/direct_runner.py", line 245, in read_values
    read_result = [GlobalWindows.windowed_value(e) for e in reader]
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/io/fileio.py", line 807, in __iter__
    yield self.source.coder.decode(line)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/apache_beam/coders/coders.py", line 187, in decode
    return value.decode('utf-8')
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 1: invalid start byte
</code></pre>
4
2016-08-22T17:57:30Z
39,086,383
<p>Today <code>TextIO</code> in the Python SDK does not actually support reading from compressed files.</p>
1
2016-08-22T18:30:11Z
[ "python", "google-cloud-dataflow", "dataflow", "apache-beam" ]
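A side note on the error itself: byte `0x8b` at position 1 is the second byte of the gzip magic number, i.e. the raw compressed bytes were reaching the UTF-8 decoder undecompressed. Outside of Beam, the stdlib shows the decompress-then-decode order a source would need (the byte string below is just a stand-in for a file fetched from GCS):

```python
import gzip

raw = gzip.compress(b"hello\nworld\n")   # stand-in for the bytes of file.gz
# position 1 of any gzip stream is 0x8b -- the exact byte from the error
magic_ok = raw[:2] == b"\x1f\x8b"

# decompress first, decode second; doing it the other way round raises
# the UnicodeDecodeError shown in the question
lines = gzip.decompress(raw).decode("utf-8").splitlines()
```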
Camera calibration opencv python
39,085,875
<p>I am following the OpenCV tutorial <a href="http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html" rel="nofollow">http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html</a></p> <p>Instead of running it with a chess board, I got my 3D point coordinates from a LAS file. Here is my code:</p> <pre><code>import cv2
import numpy as np

obj_point = [(630931.35,4833642.85,157.67),(630948.03,4833662.76,73.94),
             (631156.3, 4833904.18, 43.89),(630873.71, 4833790, 44.85),
             (631381.3, 4834152.6, 79.41)]
img_point = [(1346.82,843.206),(1293.03,808.146),(1041.92, 585.168),
             (1150.21, 894.724), (756.993,345.904)]

obj_point = np.array(obj_point,'float32')
img_point = np.array(img_point,'float32')

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_point, img_point, (1125, 1725),None,None)
</code></pre> <p>I got the following error message:</p> <blockquote> <p>For non-planar calibration rigs the initial intrinsic matrix must be specified in function cvCalibrateCamera2</p> </blockquote> <p>Thx in advance!</p>
0
2016-08-22T17:57:46Z
39,086,019
<p>You need to supply an initial intrinsic (camera) matrix and pass the flag <code>CV_CALIB_USE_INTRINSIC_GUESS</code>, since for non-planar calibration points OpenCV cannot estimate the intrinsics from scratch.</p>
1
2016-08-22T18:06:20Z
[ "python", "opencv", "image-processing", "computer-vision", "camera-calibration" ]
Noob Python Code Wont Work
39,085,906
<p>I can't get this to work, and it seems simple but it won't.</p> <pre><code>bob = raw_input("What do you need?")
if bob is "Hello":
    sayhello()

def sayhello():
    print"yo"
</code></pre>
-6
2016-08-22T18:00:06Z
39,085,953
<p>Use the value comparison operator <code>==</code> instead; <code>is</code> checks for references (<a href="http://stackoverflow.com/a/38835030/6320655">short answer I wrote on</a> <code>is</code>, and its <a href="https://docs.python.org/2/reference/expressions.html#not-in" rel="nofollow">official doc</a>).</p> <pre><code>def sayhello():
    print "yo"

bob = raw_input("What do you need?")
if bob == "Hello":
    sayhello()
</code></pre>
3
2016-08-22T18:02:24Z
[ "python" ]
Noob Python Code Wont Work
39,085,906
<p>I can't get this to work, and it seems simple but it won't.</p> <pre><code>bob = raw_input("What do you need?")
if bob is "Hello":
    sayhello()

def sayhello():
    print"yo"
</code></pre>
-6
2016-08-22T18:00:06Z
39,086,028
<p>mrdomoboto has the solution for you. But a little background information is never bad.</p> <p><code>is</code> returns True if two variables point to the same object.</p> <pre><code>&gt;&gt;&gt; a = [2, 3]
&gt;&gt;&gt; b = a
&gt;&gt;&gt; b is a
True
&gt;&gt;&gt; b == a
True
&gt;&gt;&gt; b = a[:]
&gt;&gt;&gt; b is a
False
&gt;&gt;&gt; b == a
True
</code></pre>
1
2016-08-22T18:06:48Z
[ "python" ]
Troubles understanding finding elements in python selenium
39,085,910
<p>I'm trying to use the find element methods from <a href="http://selenium-python.readthedocs.io/locating-elements.html#locating-elements-by-class-name" rel="nofollow">http://selenium-python.readthedocs.io/locating-elements.html#locating-elements-by-class-name</a>; however, they seem to work only half the time and usually on more simple sites. I'm wondering why that is. For example, currently I am trying to locate:</p> <pre><code>&lt;a class="username" title="bruceleenation" href="/profile/u/3618527996"&gt;&lt;/a&gt;
</code></pre> <p>using: </p> <pre><code>content = driver.find_element_by_class_name('username')
</code></pre> <p>but I'm getting nothing. The html is from </p> <p><a href="https://gyazo.com/b2a0d389da26bbd325baaa5f915d0569" rel="nofollow">https://gyazo.com/b2a0d389da26bbd325baaa5f915d0569</a> or</p> <pre><code>&lt;body&gt;
  &lt;nav id="nav-sidebar" class="nav-main"&gt;&lt;/nav&gt;
  &lt;main id="page-content" class="" style="margin-right: 17px; margin-bottom: 0px;"&gt;
    &lt;header class="header-logged"&gt;&lt;/header&gt;
    &lt;section class="page-content-wrapper"&gt;&lt;/section&gt;
    &lt;section class="media-slider" style="display: block;"&gt;
      &lt;div class="close-slider"&gt;&lt;/div&gt;
      &lt;section id="slider" class="open" style="display: inline-block;"&gt;
        &lt;a class="go-back" data-media-id="1322612612609855850_3618527996" title="Back to all media" href="javascript:void(0);"&gt;&lt;/a&gt;
        &lt;section class="media-viewer-wrapper viewer" data-count-comments="0" data-count-likes="1" data-url-delete="/aj/d" data-url-comment="/aj/c" data-url-unlike="/aj/ul" data-url-like="/aj/l" data-user-id="3618527996" data-media-id="1322612612609855850_3618527996"&gt;
          &lt;section class="mobile-user-info"&gt;&lt;/section&gt;
          &lt;section class="desktop-wrapper"&gt;
            &lt;section class="user-image-wrapper"&gt;
              &lt;div class="image-like-click"&gt;&lt;/div&gt;
              &lt;a class="user-image-shadow" href="javascript:void(0);"&gt;
                &lt;img class="user-image" alt="" src="https://scontent.cdninstagram.com/t51.2885-15/s640x640/sh0.0…493235_n.jpg?ig_cache_key=MTMyMjYxMjYxMjYwOTg1NTg1MA%3D%3D.2"&gt;&lt;/img&gt;
              &lt;/a&gt;
              &lt;section class="image-actions-wrapper dropdown-anchor"&gt;&lt;/section&gt;
            &lt;/section&gt;
            &lt;section class="media-viewer-info ui-front"&gt;
              &lt;section class="user-info-wrapper text-translate-parent-wrapper "&gt;
                &lt;a class="user-avatar-wrapper profile" title="bruceleenation" href="/profile/u/3618527996"&gt;&lt;/a&gt;
                &lt;section class="user-info"&gt;
                  &lt;a class="username" title="bruceleenation" href="/profile/u/3618527996"&gt; bruceleenation &lt;/a&gt;
                  &lt;p class="full-name"&gt;&lt;/p&gt;
                  &lt;div class="media-date-geo"&gt;
                    &lt;span&gt;&lt;/span&gt;
                  &lt;/div&gt;
                &lt;/section&gt;
</code></pre> <p>Any suggestions on what to do? I've tried Xpath as well. <code>["//a[@class='username'"]</code></p>
0
2016-08-22T18:00:15Z
39,086,824
<p>You should try using <a href="http://selenium-python.readthedocs.io/waits.html#explicit-waits" rel="nofollow"><code>WebDriverWait</code></a> to wait until the element is present, as below:</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

content = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "a.username[title = 'bruceleenation']")))
</code></pre>
0
2016-08-22T18:55:55Z
[ "python", "selenium", "xpath", "find", "element" ]
Batch with Google Bigquery and Python
39,085,925
<p>What's the most efficient way to perform a batch insert in python using the Google BigQuery API? I was trying to stream rows using this <a href="https://cloud.google.com/bigquery/streaming-data-into-bigquery" rel="nofollow">code</a> on a large dataset (1 000 000 +) but it's taking a while to insert them. Is there a more efficient way to insert a large dataset in Python?</p> <ul> <li>The table already exists, and it has info.</li> <li>I have a list of 1 million datapoints I want to insert</li> <li>I'd like to do it with Python, because I'll reuse the code many times.</li> </ul>
1
2016-08-22T18:01:09Z
39,086,227
<p>I don't think Streaming (the Insert All API) makes sense in your case<br> You should rather try a <a href="https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load" rel="nofollow">Load Job</a><br> See the Python code example in the <a href="https://cloud.google.com/bigquery/docs/loading-data-cloud-storage#loading_data_from_google_cloud_storage" rel="nofollow">documentation</a></p>
1
2016-08-22T18:20:09Z
[ "python", "google-bigquery", "gcloud" ]
Sorting an 3-dimensional array by a single row/column in Python
39,085,948
<p>I have an (m,n,l) shaped numpy array. I would like to sort the whole array by the values in the index [1,2,:], say. Is there an easy way to do this? I tried to use a pandas Panel but for whatever reason there is no sort-by-values function for it.</p> <p>That 'duplicate' someone linked only seems to work for a column in a 2d array. I am using the term column in a more general sense, i.e. a 1d sub-array of an (mxnxlx..) array.</p>
-2
2016-08-22T18:02:13Z
39,099,113
<p>So how I did it was using argsort, as hpaulj said, given an array arr of (m,n,l) dimensions and a column indexed by (i,j,:): take the ordering of that one column and apply it along the last axis.</p> <pre><code>order = np.argsort(arr[i, j, :])
sorted_arr = arr[:, :, order]
</code></pre>
0
2016-08-23T10:50:03Z
[ "python", "pandas", "numpy" ]
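The argsort idea from the answer above, checked on a concrete (2, 2, 3) array (numpy required; the index pair (i, j) = (1, 0) is an arbitrary choice for the demo):

```python
import numpy as np

arr = np.array([[[3, 1, 2],
                 [9, 7, 8]],
                [[30, 10, 20],
                 [90, 70, 80]]])      # shape (2, 2, 3)

i, j = 1, 0                           # reorder axis 2 by the values of arr[1, 0, :]
order = np.argsort(arr[i, j, :])      # ordering of [30, 10, 20] -> [1, 2, 0]
sorted_arr = arr[:, :, order]

# the reference slice is now ascending, and every other
# (i, j) slice is permuted in exactly the same way
```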
python unittest - assert any raise
39,085,998
<p>I have some code structured in this way:</p> <pre><code>Method(args):
    try:
        {method}
        if "ok":
            return True
        else:
            return False
    except:
        raise
</code></pre> <p>And I have at least 3 unit tests to perform on this method, one to assert an ideal True condition, at least one where I expect Method to return False, and I wish to build a test that returns "ok" when any exception/error is raised. </p> <p>I know about assertRaise already, but it asks for a specific exception, and I wish to assert any condition raised as true.</p>
1
2016-08-22T18:05:03Z
39,086,198
<p>Since you are essentially catching every exception type, your <code>assertRaises</code> should expect the most basic exception type, which is <code>Exception</code>. </p> <pre><code>assertRaises(Exception, Method) </code></pre>
1
2016-08-22T18:18:13Z
[ "python", "unit-testing" ]
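A runnable sketch of the advice above: `assertRaises(Exception, ...)` catches any raised error because `Exception` is the common base class of ordinary exceptions. The `method` function and its `fail` flag are invented for the demo:

```python
import unittest

def method(fail=False):
    if fail:
        raise ValueError("boom")   # any Exception subclass satisfies the test
    return True

class TestMethod(unittest.TestCase):
    def test_raises_anything(self):
        # passes no matter which concrete exception type comes out
        with self.assertRaises(Exception):
            method(fail=True)

    def test_returns_true(self):
        self.assertTrue(method())

# run the two tests programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMethod)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```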
Reading every second and fourth line of an input file and perform text processing using python
39,086,000
<p>Normally I would get my second and fourth line using <code>itertools</code></p> <pre><code>secondline = itertools.islice(input_open, 1, None, 4)
fourthline = itertools.islice(input_open, 3, None, 4)
</code></pre> <p>and perform <code>for line in secondline</code> or <code>for line in fourthline</code> to process each 2nd line or fourth line separately. </p> <p>Is there a way to process every 2nd and 4th line at the same time? I want to perform some text processing on every 2nd and 4th line and do some math between them.</p> <p>UPDATE What I meant by every 2nd and every 4th line:</p> <pre><code>line0
line1    &lt;- 2nd line
line2
line3    &lt;- 4th line
line4
line5    &lt;- 2nd line
line6
line7    &lt;- 4th line
...
</code></pre> <p>But I figured might as well just use <code>enumerate</code> and do a comparison of <code>i % 4 == 1</code> and <code>i % 4 == 3</code> to get them. Much simpler I suppose</p>
3
2016-08-22T18:05:06Z
39,086,212
<p>One way to obtain pairs of "second" line, "fourth" line is to just take an <code>islice</code> with step <code>2</code> and then <code>zip</code> it with itself:</p> <pre><code>lines = islice(input_file, 1, None, 2)
for second, fourth in zip(lines, lines):
    ...
</code></pre> <p>This works because <code>zip</code> first calls the <code>__next__</code> method on the first argument, which obtains the "second" line and advances the iterator, then moves to the second argument and calls <code>__next__</code> again obtaining the "fourth" line and advancing the iterator again.</p> <p>Example with numbers:</p> <pre><code>&gt;&gt;&gt; seq = iter(range(22))
&gt;&gt;&gt; numbers = islice(seq, 1, None, 2)
&gt;&gt;&gt; for num1, num2 in zip(numbers, numbers):
...     print(num1, num2)
...
1 3
5 7
9 11
13 15
17 19
</code></pre> <p>Note the missing number 21!</p> <hr> <p>Note that if the last "second" line has no "fourth" line following it because the file is too short, it won't be present in the output.</p>
1
2016-08-22T18:19:09Z
[ "python", "itertools", "text-processing" ]
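The pairing trick in the answer above, demonstrated on a list standing in for the file (indices 1, 3, 5, 7 are the "2nd" and "4th" lines of each four-line group):

```python
from itertools import islice

data = ["line0", "second-1", "line2", "fourth-1",
        "line4", "second-2", "line6", "fourth-2"]

it = islice(data, 1, None, 2)     # yields indices 1, 3, 5, 7
pairs = list(zip(it, it))         # zip pulls two items per loop iteration
# -> [("second-1", "fourth-1"), ("second-2", "fourth-2")]
```

Both arguments to `zip` are the *same* iterator, which is exactly why consecutive items get paired up.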
Reading every second and fourth line of an input file and perform text processing using python
39,086,000
<p>Normally I would get my second and fourth line using <code>itertools</code></p> <pre><code>secondline = itertools.islice(input_open, 1, None, 4)
fourthline = itertools.islice(input_open, 3, None, 4)
</code></pre> <p>and perform <code>for line in secondline</code> or <code>for line in fourthline</code> to process each 2nd line or fourth line separately. </p> <p>Is there a way to process every 2nd and 4th line at the same time? I want to perform some text processing on every 2nd and 4th line and do some math between them.</p> <p>UPDATE What I meant by every 2nd and every 4th line:</p> <pre><code>line0
line1    &lt;- 2nd line
line2
line3    &lt;- 4th line
line4
line5    &lt;- 2nd line
line6
line7    &lt;- 4th line
...
</code></pre> <p>But I figured might as well just use <code>enumerate</code> and do a comparison of <code>i % 4 == 1</code> and <code>i % 4 == 3</code> to get them. Much simpler I suppose</p>
3
2016-08-22T18:05:06Z
39,087,235
<p>why not:</p> <pre><code>def second_and_fourth(fh):
    while True:
        first = fh.readline()
        second = fh.readline()
        third = fh.readline()
        fourth = fh.readline()
        if not fourth:
            return
        yield second, fourth
</code></pre> <p>Make it a generator. (The original name <code>2_and_4</code> isn't a valid identifier, and the loop keeps it yielding for every group of four lines.)</p>
0
2016-08-22T19:22:56Z
[ "python", "itertools", "text-processing" ]
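A runnable, self-contained take on the generator approach above, exercised with an in-memory file (the line contents are illustrative):

```python
import io

def second_and_fourth(fh):
    while True:
        fh.readline()                 # 1st line of the group, discarded
        second = fh.readline()
        fh.readline()                 # 3rd line, discarded
        fourth = fh.readline()
        if not fourth:                # no complete group of four left
            return
        yield second.rstrip("\n"), fourth.rstrip("\n")

fh = io.StringIO("l0\ns1\nl2\nf1\nl4\ns2\nl6\nf2\n")
pairs = list(second_and_fourth(fh))
# -> [("s1", "f1"), ("s2", "f2")]
```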
Django - Sort Queryset by Date instead of Datetime
39,086,005
<p>I have a model (representing a 'job') that contains a <strong>DateTimeField</strong> called <strong>date_created</strong>. I have another called <strong>date_modified</strong>.</p> <p>I would like to sort by <strong>-date_modified</strong> so that the most recently modified 'jobs' are at the top of my list. The problem is that multiple running jobs will keep getting reordered each time the timestamp gets updated. If the <strong>date_modified</strong> field was sorted as if it was a <strong>DateField</strong>, then I could sort all 'jobs' that have been modified 'today' first, and then sort off of a second value (like date_created) so that they would not change places in the list as the timestamps are modified.</p> <p>This is what I have now:</p> <pre><code>queryset = DataCollection.objects.all().order_by('-date_modified','-date_created') </code></pre> <p>I found a related article, but seems outdated with version 1.9: <a href="http://stackoverflow.com/questions/38705451/django-sorting-by-dateday/38709728">Django sorting by date(day)</a></p> <p><strong>UPDATE</strong></p> <p>The current fix that I am looking at is this:</p> <pre><code>queryset = DataCollection.objects.all().extra(select = {'custom_dt': 'date(date_modified)'}).order_by('-custom_dt','-date_created') </code></pre> <p>It's most similar to what @lampslave was suggesting, but it uses the <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.extra" rel="nofollow">extra</a> method, which will be deprecated in the future... I don't think that I will be upgrading to a later version of Django anytime soon, but this makes my stomache a bit unsettled.</p>
0
2016-08-22T18:05:25Z
39,088,118
<p>I would get the queryset then sort it in the view in this case.</p> <pre><code>sorted(DataCollection.objects.all(), key = lambda x: x.date_modified.date(), reverse = True) </code></pre> <p>To sort by two keys you can use <a href="https://wiki.python.org/moin/HowTo/Sorting" rel="nofollow">attrgetter</a>, described in the HowTo/Sorting docs.</p>
2
2016-08-22T20:24:01Z
[ "python", "django", "sorting" ]
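The view-level sort from the answer above, reproduced with plain objects instead of model instances (`SimpleNamespace` stands in for `DataCollection` rows); a tuple key gives the two-level ordering the question asks for:

```python
from datetime import datetime
from types import SimpleNamespace

jobs = [
    SimpleNamespace(name="a", date_modified=datetime(2016, 8, 22, 9, 0),
                    date_created=datetime(2016, 8, 20)),
    SimpleNamespace(name="b", date_modified=datetime(2016, 8, 22, 15, 0),
                    date_created=datetime(2016, 8, 19)),
    SimpleNamespace(name="c", date_modified=datetime(2016, 8, 21, 11, 0),
                    date_created=datetime(2016, 8, 18)),
]

# group by modification *date* (so intra-day timestamp churn can't reorder),
# then break ties by creation time, newest first
ordered = sorted(jobs, key=lambda j: (j.date_modified.date(), j.date_created),
                 reverse=True)
# "a" stays ahead of "b" despite b's later timestamp, because a was created later
```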
Django - Sort Queryset by Date instead of Datetime
39,086,005
<p>I have a model (representing a 'job') that contains a <strong>DateTimeField</strong> called <strong>date_created</strong>. I have another called <strong>date_modified</strong>.</p> <p>I would like to sort by <strong>-date_modified</strong> so that the most recently modified 'jobs' are at the top of my list. The problem is that multiple running jobs will keep getting reordered each time the timestamp gets updated. If the <strong>date_modified</strong> field was sorted as if it was a <strong>DateField</strong>, then I could sort all 'jobs' that have been modified 'today' first, and then sort off of a second value (like date_created) so that they would not change places in the list as the timestamps are modified.</p> <p>This is what I have now:</p> <pre><code>queryset = DataCollection.objects.all().order_by('-date_modified','-date_created') </code></pre> <p>I found a related article, but seems outdated with version 1.9: <a href="http://stackoverflow.com/questions/38705451/django-sorting-by-dateday/38709728">Django sorting by date(day)</a></p> <p><strong>UPDATE</strong></p> <p>The current fix that I am looking at is this:</p> <pre><code>queryset = DataCollection.objects.all().extra(select = {'custom_dt': 'date(date_modified)'}).order_by('-custom_dt','-date_created') </code></pre> <p>It's most similar to what @lampslave was suggesting, but it uses the <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.extra" rel="nofollow">extra</a> method, which will be deprecated in the future... I don't think that I will be upgrading to a later version of Django anytime soon, but this makes my stomache a bit unsettled.</p>
0
2016-08-22T18:05:25Z
39,088,707
<p>If your database supports datetime_to_date converting, you can try something like this:</p> <pre><code>DataCollection.objects.all().annotate(date_created__date=RawSQL('DATE(date_created)', ())).order_by('-date_created__date') </code></pre>
0
2016-08-22T21:03:26Z
[ "python", "django", "sorting" ]
Django - Sort Queryset by Date instead of Datetime
39,086,005
<p>I have a model (representing a 'job') that contains a <strong>DateTimeField</strong> called <strong>date_created</strong>. I have another called <strong>date_modified</strong>.</p> <p>I would like to sort by <strong>-date_modified</strong> so that the most recently modified 'jobs' are at the top of my list. The problem is that multiple running jobs will keep getting reordered each time the timestamp gets updated. If the <strong>date_modified</strong> field was sorted as if it was a <strong>DateField</strong>, then I could sort all 'jobs' that have been modified 'today' first, and then sort off of a second value (like date_created) so that they would not change places in the list as the timestamps are modified.</p> <p>This is what I have now:</p> <pre><code>queryset = DataCollection.objects.all().order_by('-date_modified','-date_created') </code></pre> <p>I found a related article, but seems outdated with version 1.9: <a href="http://stackoverflow.com/questions/38705451/django-sorting-by-dateday/38709728">Django sorting by date(day)</a></p> <p><strong>UPDATE</strong></p> <p>The current fix that I am looking at is this:</p> <pre><code>queryset = DataCollection.objects.all().extra(select = {'custom_dt': 'date(date_modified)'}).order_by('-custom_dt','-date_created') </code></pre> <p>It's most similar to what @lampslave was suggesting, but it uses the <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.extra" rel="nofollow">extra</a> method, which will be deprecated in the future... I don't think that I will be upgrading to a later version of Django anytime soon, but this makes my stomache a bit unsettled.</p>
0
2016-08-22T18:05:25Z
39,089,375
<p>I think souldeux answer is probably a lot neater than this but another solution could be to query separately then join them together. I think something along these lines</p> <pre><code>from itertools import chain import datetime # get just todays data ordered by date modified today_min = datetime.datetime.combine(datetime.date.today(), datetime.time.min) today_max = datetime.datetime.combine(datetime.date.today(), datetime.time.max) data_set1 = DataCollection.objects.filter(date_modified__range=(today_min, today_max)).order_by('-date_modified') # get the rest of the data data_set2 = DataCollection.objects.all().exclude(date_modified__range=(today_min, today_max)).order_by('-date_created') # join it all together all_list = list(chain(data_set1, data_set2)) </code></pre> <p>I think django had planned to introduce a <code>__date</code> query selector i don't know if that is available yet but that might also help</p>
0
2016-08-22T21:59:54Z
[ "python", "django", "sorting" ]
How Do I Include ForeignKey in django-rest-framework POST
39,086,026
<p>So I have tried to make a browsable API via <code>django-rest-framework (DRF)</code>, but I have had some issues nesting serializers. So far, I am able to include the <code>Sport</code> and <code>Category</code> fields/foreignkeys into my <code>Article</code>, but when I try to <code>POST</code> via the API, I get an error saying as follows:</p> <blockquote> <p>Got a <code>TypeError</code> when calling <code>Article.objects.create()</code>. This may be because you have a writable field on the serializer class that is not a valid argument to <code>Article.objects.create()</code>. You may need to make the field read-only, or override the ArticleSerializer.create() method to handle this correctly. Original exception text was: int() argument must be a string, a bytes-like object or a number, not 'ArticleSport'.</p> </blockquote> <p>Here are my files:</p> <p><strong>models.py</strong></p> <pre><code>[...]

class ArticleSport(TimeStampedModel):
    title = models.CharField(max_length=20, blank=False)
    slug = AutoSlugField(populate_from='title', unique=True, always_update=True)
    parent = models.ForeignKey('self', blank=True, null=True, related_name='children')  # TODO: Add on_delete?
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)

    def __str__(self):
        return '{0}'.format(self.title)

    #class Meta:  # TODO: Migrate live
    #    unique_together = ('title', 'parent')

class ArticleCategory(TimeStampedModel):
    title = models.CharField(max_length=20, blank=False)
    slug = AutoSlugField(populate_from='title', unique=True, always_update=True)
    parent = models.ForeignKey('self', blank=True, null=True, related_name='children')  # TODO: Add on_delete?
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)

    def __str__(self):
        return '{0}'.format(self.title)

    class Meta:
        verbose_name_plural = 'article categories'
        #unique_together = ('title', 'parent')  # TODO: Migrate live

class Article(TimeStampedModel):
    DEFAULT_FEATURED_IMAGE = settings.STATIC_URL + 'images/defaults/default-featured-image.png'

    title = models.CharField(max_length=160, blank=False)
    slug = AutoSlugField(populate_from='title', unique=True, always_update=True)
    sport = models.ForeignKey(ArticleSport, on_delete=models.CASCADE, related_name='articleAsArticleSport')
    category = models.ForeignKey(ArticleCategory, on_delete=models.CASCADE, related_name='articleAsArticleCategory')
    featured_image = models.ImageField(upload_to=PathAndUniqueFilename('featured-images/'), blank=True)
    featured_image_caption = models.CharField(max_length=100, blank=True)
    views = models.IntegerField(default=0)
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)

    def get_absolute_url(self):
        return reverse('main:article_specific', args=[self.slug])  # TODO: Remove if standalone pages are removed

    def get_featured_image(self):
        if self.featured_image:
            return self.featured_image.url
        else:
            return self.DEFAULT_FEATURED_IMAGE

    def get_comment_count(self):
        return ArticleComment.objects.filter(article=self).count()

    def __str__(self):
        return '{0}'.format(self.title)

[...]
</code></pre> <p><strong>urls.py</strong></p> <pre><code>[...]

class ArticleSportSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = ArticleSport
        fields = ('id', 'title', 'parent', 'created', 'modified')

class ArticleCategorySerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = ArticleCategory
        fields = ('id', 'title', 'parent', 'created', 'modified')

class ArticleSerializer(serializers.HyperlinkedModelSerializer):
    sport = ArticleSportSerializer(read_only=True)
    sport_id = serializers.PrimaryKeyRelatedField(queryset=ArticleSport.objects.all(), write_only=True)
    category = ArticleCategorySerializer(read_only=True)
    category_id = serializers.PrimaryKeyRelatedField(queryset=ArticleCategory.objects.all(), write_only=True)
    modified = serializers.HiddenField(default=timezone.now())  #TODO: Figure out how to implement this

    class Meta:
        model = Article
        fields = ('id', 'title', 'sport', 'sport_id', 'category', 'category_id', 'featured_image', 'featured_image_caption', 'views', 'created', 'modified')

[...]
</code></pre> <p>Sample POST to API:</p> <pre><code>{
    "title": "This is a test Title",
    "sport_id": 1,
    "category_id": 1,
    "featured_image": null,
    "featured_image_caption": "",
    "views": null,
    "modified": null
}
</code></pre>
0
2016-08-22T18:06:37Z
39,086,228
<p>You need to override the serializer's <code>create</code> method to accommodate your <code>POST</code> request. This probably isn't what you're looking for, but you haven't included your sample request so we haven't got much to go off of. </p> <p>This would've been a comment had I been of high enough reputation. </p>
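In plain Python, the pattern such an override follows can be sketched like this (a hypothetical stand-in: `FakeManager` mimics the way `Article.objects` rejects keyword arguments that are not model fields; this is not actual DRF code):

```python
class FakeManager:
    """Hypothetical stand-in for Article.objects: like Django, it rejects
    keyword arguments that are not real model fields."""
    ALLOWED = {'title', 'sport', 'category'}

    def create(self, **kwargs):
        unknown = set(kwargs) - self.ALLOWED
        if unknown:
            raise TypeError('unexpected keyword arguments: %r' % sorted(unknown))
        return dict(kwargs)


def create_article(manager, validated_data):
    # Pop the write-only *_id entries and pass their values to the model
    # under the field names the model actually declares.
    sport = validated_data.pop('sport_id')
    category = validated_data.pop('category_id')
    return manager.create(sport=sport, category=category, **validated_data)


article = create_article(FakeManager(), {'title': 'Test', 'sport_id': 1, 'category_id': 1})
print(article)  # {'sport': 1, 'category': 1, 'title': 'Test'}
```

The idea is simply to translate the serializer-level field names into the keyword arguments the model expects before calling `create`.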
1
2016-08-22T18:20:10Z
[ "python", "django", "api", "post", "django-rest-framework" ]
How Do I Include ForeignKey in django-rest-framework POST
39,086,026
<p>So I have tried to make a browsable API via <code>django-rest-framework (DRF)</code>, but I have had some issues nesting serializers. So far, I am able to include the <code>Sport</code> and <code>Category</code> fields/foreign keys in my <code>Article</code>, but when I try to <code>POST</code> via the API, I get the following error:</p> <blockquote> <p>Got a <code>TypeError</code> when calling <code>Article.objects.create()</code>. This may be because you have a writable field on the serializer class that is not a valid argument to <code>Article.objects.create()</code>. You may need to make the field read-only, or override the ArticleSerializer.create() method to handle this correctly. Original exception text was: int() argument must be a string, a bytes-like object or a number, not 'ArticleSport'.</p> </blockquote> <p>Here are my files:</p> <p><strong>models.py</strong></p> <pre><code>[...]

class ArticleSport(TimeStampedModel):
    title = models.CharField(max_length=20, blank=False)
    slug = AutoSlugField(populate_from='title', unique=True, always_update=True)
    parent = models.ForeignKey('self', blank=True, null=True, related_name='children') # TODO: Add on_delete?
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)

    def __str__(self):
        return '{0}'.format(self.title)

    #class Meta: # TODO: Migrate live
        #unique_together = ('title', 'parent')


class ArticleCategory(TimeStampedModel):
    title = models.CharField(max_length=20, blank=False)
    slug = AutoSlugField(populate_from='title', unique=True, always_update=True)
    parent = models.ForeignKey('self', blank=True, null=True, related_name='children') # TODO: Add on_delete?
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)

    def __str__(self):
        return '{0}'.format(self.title)

    class Meta:
        verbose_name_plural = 'article categories'
        #unique_together = ('title', 'parent') # TODO: Migrate live


class Article(TimeStampedModel):
    DEFAULT_FEATURED_IMAGE = settings.STATIC_URL + 'images/defaults/default-featured-image.png'

    title = models.CharField(max_length=160, blank=False)
    slug = AutoSlugField(populate_from='title', unique=True, always_update=True)
    sport = models.ForeignKey(ArticleSport, on_delete=models.CASCADE, related_name='articleAsArticleSport')
    category = models.ForeignKey(ArticleCategory, on_delete=models.CASCADE, related_name='articleAsArticleCategory')
    featured_image = models.ImageField(upload_to=PathAndUniqueFilename('featured-images/'), blank=True)
    featured_image_caption = models.CharField(max_length=100, blank=True)
    views = models.IntegerField(default=0)
    uuid = models.UUIDField(default=uuid.uuid4, unique=True, editable=False)

    def get_absolute_url(self):
        return reverse('main:article_specific', args=[self.slug]) # TODO: Remove if standalone pages are removed

    def get_featured_image(self):
        if self.featured_image:
            return self.featured_image.url
        else:
            return self.DEFAULT_FEATURED_IMAGE

    def get_comment_count(self):
        return ArticleComment.objects.filter(article=self).count()

    def __str__(self):
        return '{0}'.format(self.title)
[...]
</code></pre> <p><strong>urls.py</strong></p> <pre><code>[...]
class ArticleSportSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = ArticleSport
        fields = ('id', 'title', 'parent', 'created', 'modified')


class ArticleCategorySerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = ArticleCategory
        fields = ('id', 'title', 'parent', 'created', 'modified')


class ArticleSerializer(serializers.HyperlinkedModelSerializer):
    sport = ArticleSportSerializer(read_only=True)
    sport_id = serializers.PrimaryKeyRelatedField(queryset=ArticleSport.objects.all(), write_only=True)
    category = ArticleCategorySerializer(read_only=True)
    category_id = serializers.PrimaryKeyRelatedField(queryset=ArticleCategory.objects.all(), write_only=True)
    modified = serializers.HiddenField(default=timezone.now()) #TODO: Figure out how to implement this

    class Meta:
        model = Article
        fields = ('id', 'title', 'sport', 'sport_id', 'category', 'category_id',
                  'featured_image', 'featured_image_caption', 'views', 'created', 'modified')
[...]
</code></pre> <p>Sample POST to API:</p> <pre><code>{
    "title": "This is a test Title",
    "sport_id": 1,
    "category_id": 1,
    "featured_image": null,
    "featured_image_caption": "",
    "views": null,
    "modified": null
}
</code></pre>
0
2016-08-22T18:06:37Z
39,087,994
<p>So I was able to answer this question by using some of <code>Carter_Smith</code>'s advice - I am not 100% sure why this worked, but I added this <code>create()</code> method to my <code>ArticleSerializer</code>, and it worked:</p> <pre><code>def create(self, validated_data): # Override default `.create()` method in order to properly add `sport` and `category` into the model sport = validated_data.pop('sport_id') category = validated_data.pop('category_id') article = Article.objects.create(sport=sport, category=category, **validated_data) return article </code></pre> <p>My guess is that the <code>PrimaryKeyRelatedField()</code> tries to resolve <code>sport_id</code> and <code>category_id</code> as kwarg fields based on their name, when they should be just <code>sport</code> and <code>category</code>, and so overriding <code>.create()</code> allows you to fix that, while still allowing for a <code>read_only</code> field for <code>sport</code> and <code>category</code>. Hope this helps anyone else who has the same issue.</p>
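The original exception text ("int() argument must be ... not 'ArticleSport'") can be reproduced in isolation. The sketch below is purely illustrative (the `ArticleSport` class here is a hypothetical stand-in, not the Django model): it shows what happens when a whole instance lands where the raw integer pk of a <code>*_id</code> attribute is expected:

```python
class ArticleSport:
    """Hypothetical stand-in for a model instance."""
    def __init__(self, pk):
        self.pk = pk


validated = {'title': 'Test', 'sport_id': ArticleSport(1)}

try:
    # a *_id attribute holds the raw integer pk, so coercing a whole
    # instance to int fails with the same kind of TypeError
    int(validated['sport_id'])
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```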
0
2016-08-22T20:16:06Z
[ "python", "django", "api", "post", "django-rest-framework" ]
Animate plot for a condition with matplotlib
39,086,029
<p>I'm making an animation with matplotlib and Python; the animation looks like this: <a href="http://i.stack.imgur.com/usuGh.gif" rel="nofollow"><img src="http://i.stack.imgur.com/usuGh.gif" alt="Animation"></a></p> <p>What I want to do is extend this plot with more animations, to complete the next figure: <a href="http://i.stack.imgur.com/ETJK3.png" rel="nofollow"><img src="http://i.stack.imgur.com/ETJK3.png" alt="enter image description here"></a></p> <p>The main idea is this: the green circles are grouped two by two, giving a total of 8 groups (that's why there are 8 axes). When any blue circle passes through a green circle, plot a vertical line in the corresponding axis at the corresponding time. I have no clue how to make this. Any idea is welcome :) Greets! Code:</p> <pre><code>circ = np.linspace(0,360,360)
circ *= 2*np.pi/360
ra = np.empty(360)
wheel_position = []
ra.fill(28120/2)
r = np.full((1,10*2),28120/2)
Ru = 180 - np.array([24,63,102,141,181.5,219,258,297,336,360])
Ru_pos = []
rtm_pos = np.array([22.5,67.5,112.5,157.5,202.5,247.5,292.5,337.5])
rw = np.empty(16)
rw.fill(28120/2)

for i in rtm_pos:
    wheel_position.append([i-2.3, i+2.3])
wheel_position = np.array(wheel_position)
wheel_position = 2*np.pi/360*np.ravel(wheel_position)

for i in Ru:
    Ru_pos.append([i-0.51, i+0.51])
Ru_pos = np.ravel(Ru_pos)
Ru_pos = 2*np.pi/360*Ru_pos

def simData():
    t_max = 360
    theta0 = Ru_pos
    theta = np.array([0,0])
    t = 0
    dt = 0.5
    vel = 2*np.pi/360
    while t &lt; t_max:
        theta = theta0 + vel*t
        t = t + dt
        yield theta, t

def simPoints(simData):
    theta, t = simData[0], simData[1]
    time_text.set_text(time_template % (t))
    line.set_data(theta, r)

fig = plt.figure()
ax1 = fig.add_subplot(121, projection='polar')
ax1.set_rmax(28120/2+1550)
ax1.grid(True)
line, = ax1.plot([], [], 'bo', ms=3, zorder=2)
time_template = 'Time = %.1f s'
time_text = ax1.text(0.05, 0.9, '', transform=ax1.transAxes)
ax1.set_ylim(0, 28120/2+5000)
ax1.plot(circ, ra, color='r', linestyle='-', zorder=1, lw=1)
ax1.plot(wheel_position, rw, 'bo', ms=4.6, zorder=3, color='g')

ani = animation.FuncAnimation(fig, simPoints, simData, blit=False,
                              interval=1, repeat=True)
plt.show()
</code></pre>
0
2016-08-22T18:06:54Z
39,108,031
<p>The main changes are inserted between your original <code>animation.FuncAnimation</code> and <code>plt.show()</code>:</p> <pre><code>import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np

circ = np.linspace(0,360,360)
circ *= 2*np.pi/360
ra = np.empty(360)
wheel_position = []
ra.fill(28120/2)
r = np.full((1,10*2),28120/2)
Ru = 180 - np.array([24,63,102,141,181.5,219,258,297,336,360])
Ru_pos = []
rtm_pos = np.array([22.5,67.5,112.5,157.5,202.5,247.5,292.5,337.5])
rw = np.empty(16)
rw.fill(28120/2)

for i in rtm_pos:
    wheel_position.append([i-2.3, i+2.3])
wheel_position = np.array(wheel_position)
wheel_position = 2*np.pi/360*np.ravel(wheel_position)

for i in Ru:
    Ru_pos.append([i-0.51, i+0.51])
Ru_pos = np.ravel(Ru_pos)
Ru_pos = 2*np.pi/360*Ru_pos

def simData():
    t_max = 360
    theta0 = Ru_pos
    theta = np.array([0,0])
    t = 0
    dt = 0.5
    vel = 2*np.pi/360
    while t &lt; t_max:
        theta = theta0 + vel*t
        t = t + dt
        yield theta, t

# renamed parameter to avoid confusion with the function
def simPoints(data):
    theta, t = data[0], data[1]
    time_text.set_text(time_template % (t))
    line.set_data(theta, r)

# Number of subplots needed for green pairs
nplots = int(len(wheel_position)/2)

fig = plt.figure()
ax1 = plt.subplot2grid((nplots,2), (0,0), rowspan=nplots, projection='polar')
ax1.set_rmax(28120/2+1550)
ax1.grid(True)
line, = ax1.plot([], [], 'bo', ms=3, zorder=2)
time_template = 'Time = %.1f s'
time_text = ax1.text(0.05, 0.9, '', transform=ax1.transAxes)
ax1.set_ylim(0, 28120/2+5000)
# red circle
ax1.plot(circ, ra, color='r', linestyle='-', zorder=1, lw=1)
# green dots
green_line, = ax1.plot(wheel_position, rw, 'bo', ms=4.6, zorder=3, color='g')
green_dots = green_line.get_data()[0]
green_dots = np.reshape(green_dots, (int(len(green_dots)/2), 2))

ani1 = animation.FuncAnimation(fig, simPoints, simData, blit=False,
                               interval=1, repeat=True)

# Used to check if we should mark an intersection for a given tick
# Update this with your preferred distance function
def check_intersect(pt1, pt2, tolerance=0.05):
    return np.linalg.norm(pt1-pt2) &lt; tolerance

def greenFunc(*args):
    t = args[0]
    affected_plots = []
    for n in range(nplots):
        ax = green_plots[n]
        blue_dots = line.get_data()[0]
        if len(blue_dots) &lt; 2: # still initializing
            return ax,
        blue_dots = np.reshape(blue_dots, (int(len(blue_dots)/2), 2))
        is_intersect = False
        for dot in blue_dots:
            if check_intersect(dot, green_dots[n]):
                is_intersect = True
        if is_intersect:
            ax.plot([t,t], [-1,1], color='k')
            affected_plots.append(ax)
    return affected_plots

# Create the 8 subplots
green_plots = []
for i in range(nplots):
    if i == 0:
        ax = plt.subplot2grid((nplots,2), (i,1))
    else:
        ax = plt.subplot2grid((nplots,2), (i,1),
                              sharex=green_plots[0], sharey=green_plots[0])
    # Hide x labels on all but last
    if i &lt; nplots-1:
        plt.setp(ax.get_xticklabels(), visible=False)
    green_plots.append(ax)

# Add animation for intersections with green circles
ani = animation.FuncAnimation(fig, greenFunc,
                              blit=False, interval=1, repeat=True)

plt.show()
</code></pre> <p>This introduces two new functions:</p> <ul> <li><p><code>check_intersect</code> decides whether or not two dots should be counted as intersecting (and thus draw a line), based on Euclidean distance within a given tolerance. The tolerance is necessary because the positions are calculated at discrete intervals (try it with zero tolerance - it will never be an exact match). You may want to tweak the equation and tolerance based on your needs.</p></li> <li><p><code>greenFunc</code> (I know, creative) loops through all of the subplots and checks whether or not to draw a line.</p></li> </ul> <p>The rest just creates the subplots and adds an animation which calls <code>greenFunc</code>.</p> <p>After letting it run for a bit, I get the result:</p> <p><a href="http://i.stack.imgur.com/UkvSM.png" rel="nofollow"><img src="http://i.stack.imgur.com/UkvSM.png" alt="Resulting plot"></a></p> <p>Changing label size and position is left as an exercise to the reader ;)</p>
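The tolerance point can be demonstrated on its own: with discrete time steps, a moving point essentially never lands exactly on a target coordinate (a standalone sketch, not part of the animation code above):

```python
import numpy as np

target = np.array([1.0, 0.0])
# positions of a point sampled at discrete steps of 0.3 along the x axis
samples = [np.array([t, 0.0]) for t in np.arange(0.0, 2.0, 0.3)]

# zero tolerance: exact equality never fires
exact_hits = [p for p in samples if np.array_equal(p, target)]
# small tolerance: the pass-through is caught
near_hits = [p for p in samples if np.linalg.norm(p - target) < 0.2]

print(len(exact_hits))  # 0
print(len(near_hits))
```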
1
2016-08-23T18:09:40Z
[ "python", "animation", "matplotlib" ]
Error when Importing Beautifulsoup
39,086,062
<p>I am writing some code to parse a website and get all the hrefs there. However, when I try to import bs4, it raises an error: "ImportError: cannot import name 'HTMLParseError'". I am using Python 3.5.2.</p> <p>From past references I know that this may be due to an old version of bs4, so I have upgraded it to version 4.5.1. However, the error still exists. Is there something wrong with my code (attached below, also taken from a past reference), or do I have to find another tool for the task?</p> <p>Does anyone have any idea? One more thing: I also tried to install lxml but failed (it said it was unable to find vcvarsall.bat). So there are not many tools I can use.</p> <pre><code>from bs4 import BeautifulSoup
import urllib.request

def open_html():
    resp = urllib.request.urlopen("http://www.gpsbasecamp.com/national-parks")
    soup = BeautifulSoup(resp, from_encoding=resp.info().get_param('charset'))
    for link in soup.find_all('a', href=True):
        print(link['href'])

if __name__ == '__main__':
    open_html()
</code></pre>
0
2016-08-22T18:09:14Z
39,086,497
<p>If you want to install lxml manually, you can download a precompiled lxml .whl file from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a>. Next open cmd, cd to the directory where you saved the file, and run the command:</p> <blockquote> <p>pip install [name_of_file]</p> </blockquote> <p>That's the simplest way to get rid of this problem; otherwise I refer you to this topic:</p> <blockquote> <p><a href="http://stackoverflow.com/questions/19830942/pip-install-gives-error-unable-to-find-vcvarsall-bat">pip install gives error: Unable to find vcvarsall.bat</a></p> </blockquote>
0
2016-08-22T18:37:03Z
[ "python", "beautifulsoup" ]
Error when Importing Beautifulsoup
39,086,062
<p>I am writing some code to parse a website and get all the hrefs there. However, when I try to import bs4, it raises an error: "ImportError: cannot import name 'HTMLParseError'". I am using Python 3.5.2.</p> <p>From past references I know that this may be due to an old version of bs4, so I have upgraded it to version 4.5.1. However, the error still exists. Is there something wrong with my code (attached below, also taken from a past reference), or do I have to find another tool for the task?</p> <p>Does anyone have any idea? One more thing: I also tried to install lxml but failed (it said it was unable to find vcvarsall.bat). So there are not many tools I can use.</p> <pre><code>from bs4 import BeautifulSoup
import urllib.request

def open_html():
    resp = urllib.request.urlopen("http://www.gpsbasecamp.com/national-parks")
    soup = BeautifulSoup(resp, from_encoding=resp.info().get_param('charset'))
    for link in soup.find_all('a', href=True):
        print(link['href'])

if __name__ == '__main__':
    open_html()
</code></pre>
0
2016-08-22T18:09:14Z
39,086,723
<p>As an alternative measure, install Anaconda python, which includes BS 4.4.1 and lxml 3.6 (<a href="https://docs.continuum.io/anaconda/pkg-docs" rel="nofollow">https://docs.continuum.io/anaconda/pkg-docs</a>) already. And in general, Anaconda makes package management easy like a breeze.</p>
0
2016-08-22T18:49:34Z
[ "python", "beautifulsoup" ]
What is the difference between these two lists?
39,086,175
<p>I have a list of dictionaries, and in these dictionaries a particular key sometimes occurs. This particular key may have a dictionary as its value, and in that dictionary is a key-value pair of interest. Alternatively, the particular key may contain a list of dictionaries which contain the key-value pair of interest. In the course of trying to get the values of interest into a list, I ran into a more basic problem: when I tried to make a list such as the one described above, I got a type error.</p> <p>So I cut down a list from the real data to be as minimal as possible, and the list was created as expected. Perhaps I have simply been awake too long, but I cannot for the life of me see the difference between the list that gets created and the one that doesn't.</p> <pre><code>bad_list = list[{'info1':'infoA', 'info2':'infoB'},
                {'info1':'infoC', 'info2':'infoD',
                 'a_dictionary':{'of_interest':'item1','not_interesting':'item1a'}},
                {'info1':'infoE', 'info2':'infoF',
                 'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'},
                                           {'of_interest':'item4', 'not_of_interest':'item5'}],
                                 'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},
                                           {'of_interest':'item8', 'not_interesting':'item9'}]}}]
</code></pre> <p>gives</p> <pre><code>Traceback (most recent call last):
  File "C:\Users\user1\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "&lt;ipython-input-426-5b36ee39e1b4&gt;", line 3, in &lt;module&gt;
    {'info1':'infoE', 'info2':'infoF', 'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'}, {'of_interest':'item4', 'not_of_interest':'item5'}],'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},{'of_interest':'item8', 'not_interesting':'item9'}]}}]
TypeError: 'type' object has no attribute '__getitem__'
</code></pre> <p>but:</p> <pre><code>good_list = [{u'_score': 22.789707, u'symbol': u'RP4-669L17.10', u'_id': u'ENSG00000237094', u'query': u'ENSG00000237094'},
             {u'pfam': u'PF03715', u'name': u'NOC2 like nucleolar associated transcriptional repressor', u'_score': 22.789707, u'symbol': u'NOC2L',
              u'go': {u'CC': [{u'term': u'nucleus', u'pubmed': [16322561, 20959462], u'id': u'GO:0005634', u'evidence': u'IDA'},
                              {u'term': u'nucleoplasm', u'pubmed': 20123734, u'id': u'GO:0005654', u'evidence': u'IDA'}],
                      u'MF': [{u'term': u'chromatin binding', u'pubmed': [16322561, 20123734], u'id': u'GO:0003682', u'evidence': u'IDA'},
                              {u'term': u'transcription corepressor activity', u'pubmed': 16322561, u'id': u'GO:0003714', u'evidence': u'IDA'}]},
              u'query': u'ENSG00000188976', u'_id': u'26155'},
             {u'pfam': u'PF00858', u'name': u'sodium channel epithelial 1 delta subunit', u'_score': 22.79168, u'symbol': u'SCNN1D',
              u'go': {u'CC': [{u'term': u'plasma membrane', u'id': u'GO:0005886', u'evidence': u'IDA'},
                              {u'term': u'plasma membrane', u'id': u'GO:0005886', u'evidence': u'TAS'}]}}]
</code></pre> <p>creates the expected list without any errors.</p> <p>What is different about the structure of these two lists that makes one legal and the other not? I have a feeling I'm missing something silly, but I just can't see it.</p>
0
2016-08-22T18:16:08Z
39,086,245
<blockquote> <p>'type' object has no attribute '<code>__getitem__</code>'</p> </blockquote> <p>As the error suggests, the <code>list[&lt;object&gt;]</code> part in the first one is the problem.</p> <p>You are trying to index by an object, not an integer, and hence the error.</p> <p>Just cut out the <code>list</code> part and you are fine.</p> <p>For example, the one below will work quite fine.</p> <pre><code>bad_list = [{'info1':'infoA', 'info2':'infoB'},
            {'info1':'infoC', 'info2':'infoD',
             'a_dictionary':{'of_interest':'item1','not_interesting':'item1a'}},
            {'info1':'infoE', 'info2':'infoF',
             'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'},
                                       {'of_interest':'item4', 'not_of_interest':'item5'}],
                             'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},
                                       {'of_interest':'item8', 'not_interesting':'item9'}]}}]
</code></pre>
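For instance, dropping the stray `list` lets the brackets be read as a list literal (same data as the question, trimmed):

```python
fixed_list = [  # was: list[ ... ], which subscripts the type itself
    {'info1': 'infoA', 'info2': 'infoB'},
    {'info1': 'infoC', 'info2': 'infoD',
     'a_dictionary': {'of_interest': 'item1', 'not_interesting': 'item1a'}},
]

print(isinstance(fixed_list, list))                   # True
print(fixed_list[1]['a_dictionary']['of_interest'])   # item1
```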
1
2016-08-22T18:21:48Z
[ "python", "list", "dictionary", "nested" ]
What is the difference between these two lists?
39,086,175
<p>I have a list of dictionaries, and in these dictionaries a particular key sometimes occurs. This particular key may have a dictionary as its value, and in that dictionary is a key-value pair of interest. Alternatively, the particular key may contain a list of dictionaries which contain the key-value pair of interest. In the course of trying to get the values of interest into a list, I ran into a more basic problem: when I tried to make a list such as the one described above, I got a type error.</p> <p>So I cut down a list from the real data to be as minimal as possible, and the list was created as expected. Perhaps I have simply been awake too long, but I cannot for the life of me see the difference between the list that gets created and the one that doesn't.</p> <pre><code>bad_list = list[{'info1':'infoA', 'info2':'infoB'},
                {'info1':'infoC', 'info2':'infoD',
                 'a_dictionary':{'of_interest':'item1','not_interesting':'item1a'}},
                {'info1':'infoE', 'info2':'infoF',
                 'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'},
                                           {'of_interest':'item4', 'not_of_interest':'item5'}],
                                 'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},
                                           {'of_interest':'item8', 'not_interesting':'item9'}]}}]
</code></pre> <p>gives</p> <pre><code>Traceback (most recent call last):
  File "C:\Users\user1\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "&lt;ipython-input-426-5b36ee39e1b4&gt;", line 3, in &lt;module&gt;
    {'info1':'infoE', 'info2':'infoF', 'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'}, {'of_interest':'item4', 'not_of_interest':'item5'}],'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},{'of_interest':'item8', 'not_interesting':'item9'}]}}]
TypeError: 'type' object has no attribute '__getitem__'
</code></pre> <p>but:</p> <pre><code>good_list = [{u'_score': 22.789707, u'symbol': u'RP4-669L17.10', u'_id': u'ENSG00000237094', u'query': u'ENSG00000237094'},
             {u'pfam': u'PF03715', u'name': u'NOC2 like nucleolar associated transcriptional repressor', u'_score': 22.789707, u'symbol': u'NOC2L',
              u'go': {u'CC': [{u'term': u'nucleus', u'pubmed': [16322561, 20959462], u'id': u'GO:0005634', u'evidence': u'IDA'},
                              {u'term': u'nucleoplasm', u'pubmed': 20123734, u'id': u'GO:0005654', u'evidence': u'IDA'}],
                      u'MF': [{u'term': u'chromatin binding', u'pubmed': [16322561, 20123734], u'id': u'GO:0003682', u'evidence': u'IDA'},
                              {u'term': u'transcription corepressor activity', u'pubmed': 16322561, u'id': u'GO:0003714', u'evidence': u'IDA'}]},
              u'query': u'ENSG00000188976', u'_id': u'26155'},
             {u'pfam': u'PF00858', u'name': u'sodium channel epithelial 1 delta subunit', u'_score': 22.79168, u'symbol': u'SCNN1D',
              u'go': {u'CC': [{u'term': u'plasma membrane', u'id': u'GO:0005886', u'evidence': u'IDA'},
                              {u'term': u'plasma membrane', u'id': u'GO:0005886', u'evidence': u'TAS'}]}}]
</code></pre> <p>creates the expected list without any errors.</p> <p>What is different about the structure of these two lists that makes one legal and the other not? I have a feeling I'm missing something silly, but I just can't see it.</p>
0
2016-08-22T18:16:08Z
39,086,246
<p>You don't need <code>list[]</code> to construct a list. It doesn't construct a list; it tries to extract an element from the concept of a list. I think what you meant was <code>list()</code>, but that's just more verbose and less clear.</p> <p><code>[]</code> gets an item from an object. <code>list[]</code> tries to access an item from the <em>list data type</em>.</p> <p><code>list[1]</code> is like saying "Okay, get the item at index 1 from the list." The interpreter asks "Which list?", and you respond "the very concept of a list". And then the interpreter responds with an error saying "the very concept of a list isn't a list".</p> <hr> <h1>In Depth</h1> <p>Saying <code>some_object[index]</code> is equivalent (syntactic sugar) to <code>some_object.__getitem__(index)</code>. So if a datatype wants to let you subscript (<code>[index]</code>) it, the datatype will define a <code>__getitem__</code>.</p> <p>But the type of the type (yes - even types have types) doesn't want you to be able to subscript types themselves, so the type type doesn't define a <code>__getitem__</code>.</p>
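The sugar is easy to see with a tiny class of your own (a hypothetical `Shelf`, purely for illustration):

```python
class Shelf:
    """Defining __getitem__ is what makes obj[index] legal."""
    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, index):
        # obj[index] is syntactic sugar for obj.__getitem__(index)
        return self._items[index]


s = Shelf(['a', 'b', 'c'])
print(s[1])              # b
print(s.__getitem__(1))  # b, the same call spelled out
```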
2
2016-08-22T18:21:49Z
[ "python", "list", "dictionary", "nested" ]
What is the difference between these two lists?
39,086,175
<p>I have a list of dictionaries, and in these dictionaries a particular key sometimes occurs. This particular key may have a dictionary as its value, and in that dictionary is a key-value pair of interest. Alternatively, the particular key may contain a list of dictionaries which contain the key-value pair of interest. In the course of trying to get the values of interest into a list, I ran into a more basic problem: when I tried to make a list such as the one described above, I got a type error.</p> <p>So I cut down a list from the real data to be as minimal as possible, and the list was created as expected. Perhaps I have simply been awake too long, but I cannot for the life of me see the difference between the list that gets created and the one that doesn't.</p> <pre><code>bad_list = list[{'info1':'infoA', 'info2':'infoB'},
                {'info1':'infoC', 'info2':'infoD',
                 'a_dictionary':{'of_interest':'item1','not_interesting':'item1a'}},
                {'info1':'infoE', 'info2':'infoF',
                 'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'},
                                           {'of_interest':'item4', 'not_of_interest':'item5'}],
                                 'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},
                                           {'of_interest':'item8', 'not_interesting':'item9'}]}}]
</code></pre> <p>gives</p> <pre><code>Traceback (most recent call last):
  File "C:\Users\user1\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "&lt;ipython-input-426-5b36ee39e1b4&gt;", line 3, in &lt;module&gt;
    {'info1':'infoE', 'info2':'infoF', 'stuff_I_want':{'dlist1':[{'of_interest':'item2', 'not_of_interest':'item3'}, {'of_interest':'item4', 'not_of_interest':'item5'}],'dlist2':[{'of_interest':'item6','not_of_interest':'item7','dont_care':'about_this'},{'of_interest':'item8', 'not_interesting':'item9'}]}}]
TypeError: 'type' object has no attribute '__getitem__'
</code></pre> <p>but:</p> <pre><code>good_list = [{u'_score': 22.789707, u'symbol': u'RP4-669L17.10', u'_id': u'ENSG00000237094', u'query': u'ENSG00000237094'},
             {u'pfam': u'PF03715', u'name': u'NOC2 like nucleolar associated transcriptional repressor', u'_score': 22.789707, u'symbol': u'NOC2L',
              u'go': {u'CC': [{u'term': u'nucleus', u'pubmed': [16322561, 20959462], u'id': u'GO:0005634', u'evidence': u'IDA'},
                              {u'term': u'nucleoplasm', u'pubmed': 20123734, u'id': u'GO:0005654', u'evidence': u'IDA'}],
                      u'MF': [{u'term': u'chromatin binding', u'pubmed': [16322561, 20123734], u'id': u'GO:0003682', u'evidence': u'IDA'},
                              {u'term': u'transcription corepressor activity', u'pubmed': 16322561, u'id': u'GO:0003714', u'evidence': u'IDA'}]},
              u'query': u'ENSG00000188976', u'_id': u'26155'},
             {u'pfam': u'PF00858', u'name': u'sodium channel epithelial 1 delta subunit', u'_score': 22.79168, u'symbol': u'SCNN1D',
              u'go': {u'CC': [{u'term': u'plasma membrane', u'id': u'GO:0005886', u'evidence': u'IDA'},
                              {u'term': u'plasma membrane', u'id': u'GO:0005886', u'evidence': u'TAS'}]}}]
</code></pre> <p>creates the expected list without any errors.</p> <p>What is different about the structure of these two lists that makes one legal and the other not? I have a feeling I'm missing something silly, but I just can't see it.</p>
0
2016-08-22T18:16:08Z
39,086,257
<p><code>bad_list</code> uses the <code>list</code> type itself, but attempts to subscript it:</p> <pre><code>bad_list = list['h']
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
TypeError: 'type' object is not subscriptable
</code></pre>
0
2016-08-22T18:22:48Z
[ "python", "list", "dictionary", "nested" ]
Disabling odoo 9 progress bar
39,086,182
<p>How do I disable this progress bar on the top menu in Odoo 9? It apparently brings up a wizard that shows you how to configure and use Inventory. Thank you.</p> <p><a href="http://i.stack.imgur.com/6anCc.png" rel="nofollow">progress-bar</a></p>
0
2016-08-22T18:16:28Z
39,089,759
<p>Log in as administrator (or any user who has access to the Settings) and activate debug mode.</p> <p>Go to Settings -> Technical -> User Interface -> Planners.</p> <p>Find the inventory planner and click Deactivate.</p> <p>Then refresh the page.</p> <p>Navigate back to the Inventory menu and the progress bar at the top should be gone.</p>
0
2016-08-22T22:38:32Z
[ "python", "openerp", "odoo-9" ]