Dataset schema (one Q&A pair per row):
- title: string (10 to 172 chars)
- question_id: int64 (469 to 40.1M)
- question_body: string (22 to 48.2k chars)
- question_score: int64 (-44 to 5.52k)
- question_date: string (20 chars)
- answer_id: int64 (497 to 40.1M)
- answer_body: string (18 to 33.9k chars)
- answer_score: int64 (-38 to 8.38k)
- answer_date: string (20 chars)
- tags: list
Add new data to a text file without overwriting it in python
38,959,849
<p>Right, I'm running a while loop that does some calculations, and, at the end, exports the data to a .txt file. The problem is, rather than appending the data to the end of the file, it seems to overwrite it and create a brand new file instead. How would I make it append to the old file?</p> <p>Here's my code:</p> <pre><code>turn = 1 while turn &lt; times: dip1 = randint(1,4) dip2 = randint(1,4) dip = (dip1 + dip2) - 2 adm1 = randint(1,4) adm2 = randint(1,4) adm = (adm1 + adm2) - 2 mil1 = randint(1,4) mil2 = randint(1,4) mil = (mil1 + mil2) - 2 with open("Monarchs Output.txt", "w") as text_file: print("Monarch{}, adm: {}, dip: {}, mil: {}\n".format(turn, adm, dip, mil), file=text_file) turn = turn + 1 </code></pre> <p>Just to note, it runs just fine, all the required imports are at the top of the code.</p>
0
2016-08-15T17:24:53Z
38,959,890
<p>Open the file before the loop starts. Every time you open the file for writing, it creates a new file (it deletes whatever is in it).</p> <pre><code>with open("Monarchs Output.txt", "w") as text_file: turn = 1 while turn &lt; times: dip1 = randint(1,4) dip2 = randint(1,4) dip = (dip1 + dip2) - 2 adm1 = randint(1,4) adm2 = randint(1,4) adm = (adm1 + adm2) - 2 mil1 = randint(1,4) mil2 = randint(1,4) mil = (mil1 + mil2) - 2 print("Monarch{}, adm: {}, dip: {}, mil: {}\n".format(turn, adm, dip, mil), file=text_file) turn = turn + 1 </code></pre>
2
2016-08-15T17:27:28Z
[ "python", "python-3.x" ]
Using power results in ValueError: a <= 0
38,959,864
<p>I have written the following code but it fails with a <code>ValueError</code>.</p> <pre><code>from numpy import * from pylab import * t = arange(-10, 10, 20/(1001-1)) x = 1./sqrt(2*pi)*exp(power(-(t*t), 2)) </code></pre> <p>Specifically, the error message I'm receiving is:</p> <pre><code>ValueError: a &lt;= 0 x = 1./sqrt(2*pi)*exp(power(-(t*t), 2)) File "mtrand.pyx", line 3214, in mtrand.RandomState.power (numpy\random\mtrand\mtrand.c:24592) Traceback (most recent call last): File "D:\WinPython-64bit-3.4.4.3Qt5\notebooks\untitled1.py", line 6, in &lt;module&gt; </code></pre> <p>Any idea what the issue might be here?</p>
-1
2016-08-15T17:25:42Z
38,962,309
<p>Both <code>numpy</code> and <code>pylab</code> define a function called <code>power</code>, but they are completely different. Because you imported <code>pylab</code> after <code>numpy</code> using <code>import *</code>, the <code>pylab</code> version is the one you end up with. What is <code>pylab.power</code>? From the docstring:</p> <pre><code>power(a, size=None) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. </code></pre> <p>The moral of the story: <strong>don't use <code>import *</code></strong>. In this case, it is common to use <code>import numpy as np</code>:</p> <pre><code>import numpy as np t = np.arange(-10, 10, 20/(1001-1)) x = 1./np.sqrt(2*np.pi)*np.exp(np.power(-(t*t), 2)) </code></pre> <p>Further reading:</p> <ul> <li><a href="http://stackoverflow.com/questions/2386714/why-is-import-bad">Why is &quot;import *&quot; bad?</a></li> <li><a href="https://docs.python.org/2/howto/doanddont.html" rel="nofollow">Idioms and Anti-Idioms in Python</a> (That's in the Python 2 documentation, but it also applies to Python 3.)</li> </ul>
2
2016-08-15T20:09:49Z
[ "python", "python-3.x", "numpy" ]
python Selenium PermissionError: [WinError 5] Access is denied
38,959,885
<p>I'm trying to open up Internet Explorer with Python Selenium but keep getting the error "<strong>PermissionError: [WinError 5] Access is denied</strong>".</p> <p>I have downloaded the Internet Explorer Driver Server and run the script as administrator; is there anything else I could do?</p> <p><strong>Code</strong></p> <pre><code>from selenium import webdriver driver = webdriver.Ie(r"C:\\Users\\N\\Downloads\\IEwebdriver\\IEDriverServer_x64_2.53.1") driver.get("http://www.hotmail.com") driver.maximize_window() driver.implicitly_wait(20) </code></pre> <p>The full error message:</p> <pre><code>C:\Users\N\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/N/PycharmProjects/first/SeleniumScripts/Myfirstscripts.py Traceback (most recent call last): File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\common\service.py", line 64, in start stdout=self.log_file, stderr=self.log_file) File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\subprocess.py", line 947, in __init__ restore_signals, start_new_session) File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\subprocess.py", line 1224, in _execute_child startupinfo) PermissionError: [WinError 5] Access is denied </code></pre> <p>During handling of the above exception, another exception occurred:</p> <pre><code>Traceback (most recent call last): File "C:/Users/N/PycharmProjects/first/SeleniumScripts/Myfirstscripts.py", line 5, in &lt;module&gt; driver = webdriver.Ie(r"C:\\Users\\N\\Downloads\\IEwebdriver\\IEDriverServer_x64_2.53.1") File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\ie\webdriver.py", line 49, in __init__ self.iedriver.start() File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\common\service.py", line 76, in start os.path.basename(self.path), self.start_error_message) selenium.common.exceptions.WebDriverException: Message: 'IEDriverServer_x64_2.53.1' 
executable may have wrong permissions. Please download from http://selenium-release.storage.googleapis.com/index.html and read up at https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver Exception ignored in: &lt;bound method Service.__del__ of &lt;selenium.webdriver.ie.service.Service object at 0x01C9D410&gt;&gt; Traceback (most recent call last): File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\common\service.py", line 163, in __del__ self.stop() File "C:\Users\N\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\common\service.py", line 135, in stop if self.process is None: AttributeError: 'Service' object has no attribute 'process' Process finished with exit code 1 </code></pre> <p>I'm using Microsoft Windows 10.</p>
0
2016-08-15T17:27:01Z
38,960,070
<p>The driver path wasn't pointing at the <code>.exe</code> file itself, so it should have been:</p> <pre><code>driver = webdriver.Ie(r"C:\\Users\\N\\Downloads\\IEwebdriver\\IEDriverServer_x64_2.53.1\\IEDriverServer.exe") </code></pre>
0
2016-08-15T17:40:30Z
[ "python", "python-3.x", "selenium" ]
Fast advanced indexing in numpy
38,960,001
<p>I'm trying to take a slice from a large numpy array as quickly as possible using fancy indexing. I would be happy returning a view, but <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">advanced indexing returns a copy</a>. </p> <p>I've tried solutions from <a href="http://stackoverflow.com/questions/14386822/fast-numpy-fancy-indexing">here</a> and <a href="http://stackoverflow.com/questions/38679666/slicing-a-python-list-with-a-numpy-array-of-indices-any-fast-way">here</a> with no joy so far.</p> <p>Toy data:</p> <pre><code>data = np.random.randn(int(1e6), 50) keep = np.random.rand(len(data))&gt;0.5 </code></pre> <p>Using the default method:</p> <pre><code>%timeit data[keep] 10 loops, best of 3: 86.5 ms per loop </code></pre> <p>Numpy take:</p> <pre><code>%timeit data.take(np.where(keep)[0], axis=0) %timeit np.take(data, np.where(keep)[0], axis=0) 10 loops, best of 3: 83.1 ms per loop 10 loops, best of 3: 80.4 ms per loop </code></pre> <p>Method from <a href="http://stackoverflow.com/questions/14386822/fast-numpy-fancy-indexing">here</a>:</p> <pre><code>rows = np.where(keep)[0] cols = np.arange(data.shape[1]) %timeit (data.ravel()[(cols + (rows * data.shape[1]).reshape((-1,1))).ravel()]).reshape(rows.size, cols.size) 10 loops, best of 3: 159 ms per loop </code></pre> <p>Whereas if you're taking a view of the same size:</p> <pre><code>%timeit data[1:-1:2, :] 1000000 loops, best of 3: 243 ns per loop </code></pre>
1
2016-08-15T17:35:39Z
38,960,059
<p>There's no way to do this with a view. A view needs consistent strides, while your data is randomly scattered throughout the original array.</p>
4
2016-08-15T17:39:32Z
[ "python", "arrays", "numpy", "optimization", "indexing" ]
How to send data to the TemplateView used in openpyxl to generate xlsx files in DJango
38,960,008
<p>I have downloaded <code>xlsx</code> files before by using this library, and it's amazing, but now I want to send data to use in a queryset to fill the <code>xlsx</code> sheet.</p> <p>I have a submit button that uses jQuery to prevent the default action; according to a checkbox it either sends the data through Ajax to a view that stores the data in the database, or sends it to a file, but I don't know how to send the data to the view that runs <code>openpyxl</code> and download the file at the same time. I have tried calling the TemplateView through Ajax; I know how to send data through the GET method using Ajax, and how to receive the data in the overridden GET method of the <code>TemplateView</code>, but when I try that, the file just doesn't download. According to the tutorial that I was following, I have to use an <code>&lt;a&gt;</code> tag and call the <code>TemplateView</code> by using: <code>href="{% url 'download_the_file' %}"</code>. </p> <p>So my question is, how can I send data to that TemplateView and download the file, knowing that I need to use jQuery to activate the sending-data process?</p>
-2
2016-08-15T17:36:26Z
38,963,430
<p>The people who gave me -2 probably thought this was a simple Excel problem.</p> <p>I found a solution; in my jQuery function I added this:</p> <pre><code>$(location).attr('href', '/the_url?number=5'); </code></pre> <p>Without using Ajax, just send the parameter in the URL that is linked to the TemplateView.</p>
0
2016-08-15T21:35:39Z
[ "python", "django", "excel" ]
np.savetxt - using variable for file name
38,960,124
<p>How can I use a variable for a file name when using <code>np.savetxt</code>?</p> <p>Here is my MWE:</p> <pre><code>import numpy as np k=0 x = [1, 2, 3] y = [4, 5, 6] zipped = zip(x, y) np.savetxt('koo'+str(k)'.txt', zipped, fmt='%f\t%f') </code></pre> <p>However, this throws an 'invalid syntax' error.</p>
0
2016-08-15T17:43:10Z
38,960,150
<pre><code>np.savetxt('koo'+ str(k) + '.txt', zipped, fmt='%f\t%f') </code></pre> <p>You forgot a <code>+</code> sign.</p>
2
2016-08-15T17:44:51Z
[ "python" ]
np.savetxt - using variable for file name
38,960,124
<p>How can I use a variable for a file name when using <code>np.savetxt</code>?</p> <p>Here is my MWE:</p> <pre><code>import numpy as np k=0 x = [1, 2, 3] y = [4, 5, 6] zipped = zip(x, y) np.savetxt('koo'+str(k)'.txt', zipped, fmt='%f\t%f') </code></pre> <p>However, this throws an 'invalid syntax' error.</p>
0
2016-08-15T17:43:10Z
38,960,158
<p>You forgot a <code>+</code> sign; try replacing it with this:</p> <pre><code>np.savetxt('koo'+str(k)+'.txt', zipped, fmt='%f\t%f') </code></pre>
1
2016-08-15T17:45:10Z
[ "python" ]
np.savetxt - using variable for file name
38,960,124
<p>How can I use a variable for a file name when using <code>np.savetxt</code>?</p> <p>Here is my MWE:</p> <pre><code>import numpy as np k=0 x = [1, 2, 3] y = [4, 5, 6] zipped = zip(x, y) np.savetxt('koo'+str(k)'.txt', zipped, fmt='%f\t%f') </code></pre> <p>However, this throws an 'invalid syntax' error.</p>
0
2016-08-15T17:43:10Z
38,960,208
<p>You forgot the <code>+</code> sign between <code>str(k)</code> and <code>'.txt'</code>:</p> <pre><code>np.savetxt('koo'+ str(k) + '.txt', zipped, fmt='%f\t%f') </code></pre>
1
2016-08-15T17:48:22Z
[ "python" ]
Pythonic way of modifying a string with a temporary list
38,960,196
<p>I have a string that is several thousand chars long and contains about 100 <code>\n</code>, separating it out when printed. I am deleting any lines that contain certain substrings, and certain individual chars.</p> <p>This part is already completed, but I'm curious as to what the most <em>pythonic</em> way of doing this is, and, assuming the method I've chosen is sound, if there is an appropriate naming convention for the temporary list.</p> <pre><code>active_config = active_config.split('\n') for i, elem in enumerate(active_config): # Delete entire line based off match if "cmdStatus=" in elem or "&lt;?xml" in elem: active_config.remove(elem) #Delete individual char based off match elem = elem.replace("\r","") # Delete last line if it is '*' if active_config[-1] == "*": del active_config[-1] active_config = '\n'.join(active_config) </code></pre> <p>I have chosen to overwrite the string <code>active_config</code> as a list, and then overwrite that as a string again after deleting certain elements is complete.</p> <p>Since the list is only ever used to remove a few lines and individual characters, and never used elsewhere, is there a special convention for what I should call it? Perhaps call it <code>active_config_list</code> or <code>temp_active_config</code> or even just <code>temp</code>.</p>
0
2016-08-15T17:47:56Z
38,960,371
<p>Here's a pythonic solution for this type of problem using <a href="https://docs.python.org/2/library/functions.html#filter" rel="nofollow">filter</a>:</p> <pre><code>active_config = """this is an example which contains words like cmdStatus= cmdStatus2 or other weird &lt;?lxml tags """ lines = active_config.splitlines() tokens = ["cmdStatus=", "&lt;?lxml"] print '\n'.join(filter(lambda x: not any(w in x for w in tokens), lines)) </code></pre>
1
2016-08-15T18:00:02Z
[ "python", "naming-conventions" ]
Pythonic way of modifying a string with a temporary list
38,960,196
<p>I have a string that is several thousand chars long and contains about 100 <code>\n</code>, separating it out when printed. I am deleting any lines that contain certain substrings, and certain individual chars.</p> <p>This part is already completed, but I'm curious as to what the most <em>pythonic</em> way of doing this is, and, assuming the method I've chosen is sound, if there is an appropriate naming convention for the temporary list.</p> <pre><code>active_config = active_config.split('\n') for i, elem in enumerate(active_config): # Delete entire line based off match if "cmdStatus=" in elem or "&lt;?xml" in elem: active_config.remove(elem) #Delete individual char based off match elem = elem.replace("\r","") # Delete last line if it is '*' if active_config[-1] == "*": del active_config[-1] active_config = '\n'.join(active_config) </code></pre> <p>I have chosen to overwrite the string <code>active_config</code> as a list, and then overwrite that as a string again after deleting certain elements is complete.</p> <p>Since the list is only ever used to remove a few lines and individual characters, and never used elsewhere, is there a special convention for what I should call it? Perhaps call it <code>active_config_list</code> or <code>temp_active_config</code> or even just <code>temp</code>.</p>
0
2016-08-15T17:47:56Z
38,960,555
<p>A couple of list comprehensions will do it too. Note the filter must <em>exclude</em> the matching lines, and the <code>\r</code> stripping can happen in the same pass:</p> <pre><code>active_config = active_config.split('\n') temp_list = [z.replace("\r", "") for z in active_config if "cmdStatus=" not in z and "&lt;?xml" not in z] if temp_list and temp_list[-1] == "*": del temp_list[-1] active_config = "\n".join(temp_list) </code></pre>
1
2016-08-15T18:12:14Z
[ "python", "naming-conventions" ]
Mutable indexed heterogeneous data structure?
38,960,221
<p>Is there a data class or type in Python that matches these criteria?</p> <p>I am trying to build an object that looks something like this:</p> <ul> <li><p>ExperimentData</p> <ul> <li><p>ID 1</p> <ul> <li>sample_info_1: <code>character string</code></li> <li>sample_info_2: <code>character string</code></li> <li>Dataframe_1: <code>pandas data frame</code></li> <li>Dataframe_2: <code>pandas data frame</code></li> </ul></li> <li><p>ID 2</p> <ul> <li>(etc.)</li> </ul></li> </ul></li> </ul> <p>Right now, I am using a <code>dict</code> to hold the object ('ExperimentData'), which contains a <code>namedtuple</code> for each ID. Each of the <code>namedtuple</code>s has a named field for the corresponding data attached to the sample. This allows me to keep all the IDs indexed, and have all of the fields under each ID indexed as well. </p> <p>However, I need to update and/or replace the entries under each ID during downstream analysis. Since a <code>tuple</code> is immutable, this does not seem to be possible. </p> <p>Is there a better implementation of this? </p>
1
2016-08-15T17:49:29Z
38,966,174
<p>You could use a dict of dicts instead of a dict of namedtuples. Dicts are mutable, so you'll be able to modify the inner dicts.</p> <p>Given what you said in the comments about the structures of each DataFrame-1 and -2 being comparable, you could also group all of each into one big DataFrame, by adding a column to each DataFrame containing the value of <code>sample_info_1</code> repeated across all rows, and likewise for <code>sample_info_2</code>. Then you could concat all the DataFrame-1s into a big one, and likewise for the DataFrame-2s, getting all your data into two DataFrames. (Depending on the structure of those DataFrames, you could even join them into one.)</p>
1
2016-08-16T03:24:44Z
[ "python", "pandas", "tuples" ]
Is there a better method for getting the CBV instance from Django's "as_view" mechanism
38,960,398
<p>I have a simple test using Django's RequestFactory for a view. My view sets some state within itself that I want to test (this is not a great example, but the point is I want to test some state in the view after the request is processed rather than parsing it out of HTML for no good reason):</p> <pre><code>def some_test(self): rf = RequestFactory() get_request = rf.get('/foo/') view = MyView.as_view() response = view( get_request, foo="hello", bar="world" ) self.assertEquals( ??.sentence_to_display, "hello world") </code></pre> <p>Looking at the internals I can't see a way of getting the view instance used to process the request; <code>??</code> is my placeholder for the view instance which on <code>get</code> sets <code>self.sentence_to_display</code>.</p> <p>The best I could come up with was overriding <code>dispatch</code> in my CBV to do:</p> <pre><code>def dispatch(self, request, *args, **kwargs): request.META["__the_view__"] = self return super(MyView, self).dispatch(request, *args, **kwargs) </code></pre> <p>Then altering my test such as:</p> <pre><code>def some_test(self): rf = RequestFactory() get_request = rf.get('/foo/') view = MyView.as_view() response = view( get_request, foo="hello", bar="world" ) self.assertEquals( get_request.META["__the_view__"].sentence_to_display, "hello world") </code></pre> <p>While my example is arbitrary, my CBV has lots of methods that compute lots of things before rendering the HTML via a template. I trust Django to render the template; I don't want to test the template as such. I wish to test the CBV methods and the state of the instance after being invoked.</p> <p>I really don't want to dig around the massive amount of output using BeautifulSoup when I just want to unit test the methods <code>get</code> uses to build the response.</p> <p>Any better approaches? Am I missing the point?</p> <p>Many thanks.</p> <p>Django 1.6 unfortunately </p>
0
2016-08-15T18:01:20Z
38,962,597
<p><code>View.as_view()</code> does not create an instance. All it does is return a closure function that will create an instance when the view is called. </p> <p>The best you can do is manually create an instance and prepare it the same way the closure function does. </p> <pre><code>def some_test(self): rf = RequestFactory() get_request = rf.get('/foo/') view = MyView() if hasattr(view, 'get') and not hasattr(view, 'head'): view.head = view.get view.request = get_request view.args = () view.kwargs = {'foo': 'hello', 'bar': 'world'} response = view.dispatch(get_request, *view.args, **view.kwargs) self.assertEquals(view.sentence_to_display, "hello world") </code></pre> <p>You can see the exact code Django uses to instantiate the view here: <a href="https://github.com/django/django/blob/stable/1.6.x/django/views/generic/base.py#L62" rel="nofollow">https://github.com/django/django/blob/stable/1.6.x/django/views/generic/base.py#L62</a></p>
0
2016-08-15T20:32:13Z
[ "python", "django" ]
Stack cv2 Frame of RGB channel and Single Channel together
38,960,411
<p>What I am looking to do is stack the 4 video feeds (RGB, B, G, R channels) together in a quad frame feed. Here is my code; I get the error "All the input arrays must have same number of dimensions". I wanted to know if there is a way to work through or around this. If you insert GRAY where RGB is, you can see the overall result I want, except that the RGB frame should be where the GRAY frame is.</p> <pre><code>import numpy as np import cv2 cap = cv2.VideoCapture(0) ret, frame = cap.read() while(True): ret, frame = cap.read() # Resizing down the image to fit in the screen. b,g,r = cv2.split(frame) RGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA) GRAY = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # creating another frame. channels = cv2.split(frame) frame_merge = cv2.merge(channels) # horizontally concatenating the two frames. final_frame = cv2.hconcat((frame, frame_merge)) final_frame2 = cv2.hconcat((frame, frame_merge)) final = cv2.vconcat((final_frame, final_frame2)) frame1 = np.hstack((RGB,b)) frame2 = np.hstack((g,r)) final = np.vstack((frame1,frame2)) cv2.imshow('frame', final) k = cv2.waitKey(30) &amp; 0xff if k == 27: break cap.release() cv2.destroyAllWindows() </code></pre>
1
2016-08-15T18:01:47Z
38,961,299
<p>Here's my solution:</p> <pre><code>import numpy as np import cv2 cap = cv2.VideoCapture(0) ret, frame = cap.read() red = np.zeros(frame.shape, 'uint8') green = np.zeros(frame.shape, 'uint8') blue = np.zeros(frame.shape, 'uint8') while(True): ret, frame = cap.read() b, g, r = cv2.split(frame) red[..., 0], red[..., 1], red[..., 2] = r, r, r green[..., 0], green[..., 1], green[..., 2] = g, g, g blue[..., 0], blue[..., 1], blue[..., 2] = b, b, b final = cv2.vconcat(( cv2.hconcat((frame, blue)), cv2.hconcat((green, red)) )) cv2.imshow('frame', final) k = cv2.waitKey(30) &amp; 0xff if k == 27: break cap.release() cv2.destroyAllWindows() </code></pre>
0
2016-08-15T19:00:26Z
[ "python", "opencv", "numpy" ]
Using scipy.io.savemat to Save Nested Lists
38,960,464
<p>This is relating to my last question, which can be found <a href="http://stackoverflow.com/questions/38924256/creating-a-nested-list-in-python">here</a>. I'm dealing with lists similar to the list I describe in that link as markerList - so a list with three levels. I need to save this information as a .mat file, but I can't get it to save in the right type. When using scipy.io.savemat, it saves the list as a 200x40x2 single, when it should be a set of 200 cells, each containing a 40x2 cell.</p> <p>The code I'm using to save this is:</p> <pre><code>matdict = dict(markers = (markerList), sorted = (finalStack)) scipy.io.savemat('C:\pathname\\sortedMarkers.mat', matdict) </code></pre> <p>What is confusing to me is that it saves markerList in the correct format (1x200 cell, each a cell of varying size), but not finalStack (saved as a 200 x 40 x 2 single). On top of that, before I had figured out the rest of this code, it would save finalStack correctly - which makes me think that perhaps it saves as a cell when the data it is saving isn't uniform in size. (finalStack is uniform in size; markerList is not.) </p> <p>Is there a way to save a complicated data structure like this as a .mat file?</p>
1
2016-08-15T18:05:25Z
38,961,045
<p>Without looking again at your previous question I suspect the issue is with how <code>numpy.array</code> creates arrays from lists of sublists or arrays.</p> <p>You note that <code>markerList</code> is saved as expected, and that the cells vary in size.</p> <p>Try</p> <pre><code>np.array(markerList) </code></pre> <p>and look at its shape and dtype. I'm guessing it will be 1d (200,), and object dtype.</p> <pre><code>np.array(finalStack) </code></pre> <p>on the other hand, will probably be the 3d array that gets saved.</p> <p><code>savemat</code> is set up to save numpy arrays, not python dictionaries and lists - it is, after all, talking to MATLAB where everything used to be a 2d matrix. MATLAB cells generalize this; they are more like 2d numpy arrays of dtype object.</p> <p>The issue of creating an object array from elements that are uniform in size comes up often. The usual solution is to create an <code>empty</code> array of the desired size (e.g. (200,)) and object type, and load the subarrays into that.</p> <p><a href="http://stackoverflow.com/a/38776674/901925">http://stackoverflow.com/a/38776674/901925</a></p> <p>=============</p> <p>I'll demonstrate. 
Make 3 arrays, 2 of one size, and a different third:</p> <pre><code>In [59]: from scipy import io In [60]: A=np.ones((40,2)) In [61]: B=np.ones((40,2)) In [62]: C=np.ones((30,2)) </code></pre> <p>Save two lists, one with just two arrays, the other with all three:</p> <pre><code>In [63]: io.savemat('test.mat', {'AB':[A,B],'ABC':[A,B,C]}) </code></pre> <p>Load it back; I could do this in <code>octave</code> instead:</p> <pre><code>In [65]: D=io.loadmat('test.mat') In [66]: D.keys() Out[66]: dict_keys(['ABC', '__header__', 'AB', '__globals__', '__version__']) </code></pre> <p><code>ABC</code> is a 2d array with 3 elements:</p> <pre><code>In [68]: D['ABC'].shape Out[68]: (1, 3) In [71]: D['ABC'][0,0].shape Out[71]: (40, 2) </code></pre> <p>but <code>AB</code> has been transformed into a 3d array:</p> <pre><code>In [69]: D['AB'].shape Out[69]: (2, 40, 2) In [70]: np.array([A,B]).shape Out[70]: (2, 40, 2) </code></pre> <p>If I instead make a 1d object array to hold A and B, it is preserved:</p> <pre><code>In [72]: AB=np.empty((2,),object) In [73]: AB[...]=[A,B] In [74]: AB.shape Out[74]: (2,) In [75]: io.savemat('test.mat', {'AB':AB,'ABC':[A,B,C]}) In [76]: D=io.loadmat('test.mat') In [77]: D['AB'].shape Out[77]: (1, 2) In [78]: D['AB'][0,0].shape Out[78]: (40, 2) </code></pre> <p>A good alternative is to save the arrays as items of a dictionary:</p> <pre><code>io.savemat('test.mat',{'A':A, 'B':B, 'C':C}) </code></pre> <p>Given the difficulties in translating MATLAB structures to numpy ones and back, it's better to keep things flat and simple, rather than create compound objects that would be useful on both sides.</p> <p>===============</p> <p>I installed <code>Octave</code>. 
Loading this <code>test.mat</code>:</p> <pre><code>io.savemat('test.mat', {'AB':AB,'ABs':[A,B]}) </code></pre> <p>gives</p> <pre><code>&gt;&gt; whos Variables in the current scope: Attr Name Size Bytes Class ==== ==== ==== ===== ===== AB 1x2 1280 cell ABs 2x40x2 1280 double </code></pre> <p>An object dtype array is saved as a matlab cell; other arrays as matlab matrices. (I'd have to review earlier answers to recall the equivalent of matlab structures).</p>
0
2016-08-15T18:45:25Z
[ "python", "python-2.7", "scipy" ]
Using scipy.io.savemat to Save Nested Lists
38,960,464
<p>This is relating to my last question, which can be found <a href="http://stackoverflow.com/questions/38924256/creating-a-nested-list-in-python">here</a>. I'm dealing with lists similar to the list I describe in that link as markerList - so a list with three levels. I need to save this information as a .mat file, but I can't get it to save in the right type. When using scipy.io.savemat, it saves the list as a 200x40x2 single, when it should be a set of 200 cells, each containing a 40x2 cell.</p> <p>The code I'm using to save this is:</p> <pre><code>matdict = dict(markers = (markerList), sorted = (finalStack)) scipy.io.savemat('C:\pathname\\sortedMarkers.mat', matdict) </code></pre> <p>What is confusing to me is that it saves markerList in the correct format (1x200 cell, each a cell of varying size), but not finalStack (saved as a 200 x 40 x 2 single). On top of that, before I had figured out the rest of this code, it would save finalStack correctly - which makes me think that perhaps it saves as a cell when the data it is saving isn't uniform in size. (finalStack is uniform in size; markerList is not.) </p> <p>Is there a way to save a complicated data structure like this as a .mat file?</p>
1
2016-08-15T18:05:25Z
38,961,751
<p>As per <a href="https://docs.scipy.org/doc/scipy-0.9.0/reference/tutorial/io.html#matlab-cell-arrays" rel="nofollow">savemat documentation</a>, convert into a numpy array of 'objects':</p> <pre><code>from scipy.io import savemat import numpy a = numpy.array([[1,2,3],[1,2,3]]) b = numpy.array([[2,3,4],[2,3,4]]) c = numpy.array([[3,4,5],[3,4,5]]) L = [a,b,c] FrameStack = numpy.empty((len(L),), dtype=numpy.object) for i in range(len(L)): FrameStack[i] = L[i] savemat("myfile.mat", {"FrameStack":FrameStack}) </code></pre> <p>In octave:</p> <pre><code>&gt;&gt; load myfile.mat &gt;&gt; whos FrameStack Variables in the current scope: Attr Name Size Bytes Class ==== ==== ==== ===== ===== FrameStack 1x3 144 cell Total is 3 elements using 144 bytes &gt;&gt; whos FrameStack{1} Variables in the current scope: Attr Name Size Bytes Class ==== ==== ==== ===== ===== FrameStack{1} 2x3 48 int64 Total is 6 elements using 48 bytes </code></pre>
1
2016-08-15T19:31:18Z
[ "python", "python-2.7", "scipy" ]
Curl command works but not pycurl command for smartsheet
38,960,468
<p>I am able to successfully update the smartsheet using Curl Commands: </p> <pre><code>curl https://api.smartsheet.com/2.0/sheets/sheetID/rows \ &gt; -H "Authorization: Bearer token" \ &gt; -H "Content-Type: application/json" \ &gt; -X PUT \ &gt; -d '[{"id": rid, "locked" : false}]' </code></pre> <p>But when I try to do the same in my python script using Pycurl: I get authorization error. I am not sure what I am doing wrong here. </p> <pre><code>c = pycurl.Curl() c.setopt(pycurl.URL,url) c.setopt(pycurl.HTTPHEADER, ['Authorization: Bearer ' + token]) c.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json', 'Accept: application/json']) data = json.dumps({"id": rid, "locked": False }) c.setopt(pycurl.CUSTOMREQUEST, "PUT") c.setopt(pycurl.POSTFIELDS,data) c.setopt(pycurl.POSTFIELDSIZE, len(data)) c.setopt(pycurl.VERBOSE, 1) c.perform() c.close() </code></pre> <p>I get an error: </p> <pre><code>* About to connect() to api.smartsheet.com port 443 (#0) * Trying 198.101.138.130... * Connected to api.smartsheet.com (198.101.138.130) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA * Server certificate: * subject: CN=api.smartsheet.com,O="Smartsheet.com, Inc.",L=Bellevue,ST=Washington,C=US,serialNumber=6#####7,businessCategory=Private Organization,incorporationState=Washington,incorporationCountry=US * start date: Jul 09 00:00:00 2015 GMT * expire date: Jul 08 23:59:59 2017 GMT * common name: api.smartsheet.com * issuer: CN=Symantec Class 3 EV SSL CA - G3,OU=Symantec Trust Network,O=Symantec Corporation,C=US &gt; POST /2.0/sheets/sheetid/rows HTTP/1.1 User-Agent: PycURL/7.29.0 Host: api.smartsheet.com Content-Type: application/json Accept: application/json Content-Length: 40 * upload completely sent off: 40 out of 40 bytes &lt; HTTP/1.1 403 Forbidden &lt; Date: Mon, 15 Aug 2016 15:17:55 GMT &lt; Content-Type: 
application/json;charset=UTF-8 &lt; Content-Length: 116 &lt; Connection: close &lt; { "errorCode" : 1004, "message" : "You are not authorized to perform this action.", "refId" : "za6gmwnreg78" * Closing connection 0 }403 </code></pre> <p>I am not sure what is wrong. It would be great if someone could suggest what can be done (while still using pycurl) or also any other workaround! Please help! Thanks!</p>
1
2016-08-15T18:05:34Z
38,960,593
<p>I think the issue is caused by your calling <code>c.setopt(pycurl.HTTPHEADER, ...)</code> twice, and the second invocation overwrites the auth token set in the first.</p> <p>Try replacing the two <code>c.setopt</code> calls with the following:</p> <p><code>c.setopt(pycurl.HTTPHEADER, ['Authorization: Bearer ' + token, 'Content-Type: application/json', 'Accept: application/json']) </code></p>
0
2016-08-15T18:14:45Z
[ "python", "curl", "pycurl", "smartsheet-api-1.1", "smartsheet-api-2.0" ]
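The overwrite described in this answer can be demonstrated without touching the network: libcurl-style option setters are last-write-wins, so a second `HTTPHEADER` call replaces the first list. The `setopt` below is a toy stand-in for illustration, not real pycurl:

```python
# Toy stand-in for pycurl's option store: each setopt(HTTPHEADER, ...)
# call replaces the previously stored header list instead of extending it.
options = {}

def setopt(opt, value):
    options[opt] = value  # last write wins, like CURLOPT_HTTPHEADER

# Two separate calls, as in the failing script:
setopt("HTTPHEADER", ["Authorization: Bearer TOKEN"])
setopt("HTTPHEADER", ["Content-Type: application/json",
                      "Accept: application/json"])
dropped = any(h.startswith("Authorization") for h in options["HTTPHEADER"])
print(dropped)  # False -- the bearer token was discarded, hence the 403

# One combined call, as in the fix above:
setopt("HTTPHEADER", ["Authorization: Bearer TOKEN",
                      "Content-Type: application/json",
                      "Accept: application/json"])
kept = any(h.startswith("Authorization") for h in options["HTTPHEADER"])
print(kept)  # True
```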
Leap Year Boolean Logic: Include Parentheses?
38,960,503
<p>Which is "more correct (logically)"? <em>Specific to Leap Year, not in general</em>.</p> <ol> <li><p>With Parentheses</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) </code></pre></li> <li><p>Without</p> <pre><code>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0 </code></pre></li> </ol> <hr> <p><strong>Additional Info</strong></p> <p><em>Parentheses change the order in which the booleans are evaluated (<code>and</code> goes before <code>or</code> w/o parenthesis).</em></p> <p><em>Given that all larger numbers are divisible by smaller numbers in this problem, it returns the correct result either way but I'm still curious.</em></p> <p>Observe the effects of parentheses:</p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <p>Without parentheses, there are scenarios where even though <strong>year is not divisible by 4 (first bool) it still returns True</strong> (I know that's impossible in this problem)! <strong>Isn't being divisible by 4 a MUST and therefore it's more correct to include parenthesis?</strong> Anything else I should be paying attention to here? Can someone explain the theoretical logic of not/including parentheses?</p>
4
2016-08-15T18:07:36Z
38,960,562
<p>The parens affect what order your booleans take. <code>and</code>s are grouped together and resolved before <code>or</code>s are, so:</p> <pre><code>a and b or c </code></pre> <p>becomes:</p> <pre><code>(a and b) or c </code></pre> <p>if either BOTH <code>a</code> and <code>b</code> are truthy, OR if <code>c</code> is truthy, we get <code>True</code>.</p> <p>With the parentheses you get:</p> <pre><code>a and (b or c) </code></pre> <p>Now you get <code>True</code> if both <code>a</code> is truthy and either <code>b</code> OR <code>c</code> is truthy.</p> <hr> <p>As far as "correctness," as long as your code derives the correct result then "more correct" is only a matter of opinion. I would include parens where you feel like it makes the result more clear. For instance:</p> <pre><code>if (a and b) or c: </code></pre> <p>is more clear than</p> <pre><code>if a and b or c: </code></pre> <p>However it is NOT more clear (in my opinion) than:</p> <pre><code>if some_long_identifier and some_other_long_identifier or \ some_third_long_identifier_on_another_line: </code></pre> <p>Your guide when writing Python code should be <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP8</a>. PEP8 is quiet on when you should include stylistic parentheses (read: parens that follow the natural order of operations), so use your best judgement.</p> <hr> <p>For leap years specifically, the logic is:</p> <ol> <li>If the year is evenly divisible by 4, go to step 2. ...</li> <li>If the year is evenly divisible by 100, go to step 3. ...</li> <li>If the year is evenly divisible by 400, go to step 4. ...</li> <li>The year is a leap year (it has 366 days).</li> <li>The year is not a leap year (it has 365 days).</li> </ol> <p>In other words: all years divisible by 4 are leap years, unless they're divisible by 100 and NOT divisible by 400, which translates to:</p> <pre><code>return y % 4 == 0 and not (y % 100 == 0 and y % 400 != 0) </code></pre>
5
2016-08-15T18:12:36Z
[ "python", "boolean-logic", "parentheses", "leap-year" ]
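Both expressions from the question can be checked mechanically against the standard library's `calendar.isleap`, confirming the observation above that they agree for every year even though the grouping differs:

```python
import calendar

def leap_with_parens(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def leap_without_parens(year):
    return year % 4 == 0 and year % 100 != 0 or year % 400 == 0

# Exhaustive check over a few millennia of the (proleptic) Gregorian calendar.
mismatches = [y for y in range(1, 4001)
              if not (leap_with_parens(y) == leap_without_parens(y) == calendar.isleap(y))]
print(mismatches)  # []
```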
Leap Year Boolean Logic: Include Parentheses?
38,960,503
<p>Which is "more correct (logically)"? <em>Specific to Leap Year, not in general</em>.</p> <ol> <li><p>With Parentheses</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) </code></pre></li> <li><p>Without</p> <pre><code>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0 </code></pre></li> </ol> <hr> <p><strong>Additional Info</strong></p> <p><em>Parentheses change the order in which the booleans are evaluated (<code>and</code> goes before <code>or</code> w/o parenthesis).</em></p> <p><em>Given that all larger numbers are divisible by smaller numbers in this problem, it returns the correct result either way but I'm still curious.</em></p> <p>Observe the effects of parentheses:</p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <p>Without parentheses, there are scenarios where even though <strong>year is not divisible by 4 (first bool) it still returns True</strong> (I know that's impossible in this problem)! <strong>Isn't being divisible by 4 a MUST and therefore it's more correct to include parenthesis?</strong> Anything else I should be paying attention to here? Can someone explain the theoretical logic of not/including parentheses?</p>
4
2016-08-15T18:07:36Z
38,960,673
<blockquote> <p>Which answer is "more correct" and why?</p> </blockquote> <p>It's not a matter of which is '<em>more correct</em>'; rather, what logic do you wish to implement? Parentheses in <strong>boolean</strong> expressions change the <a href="https://docs.python.org/3/reference/expressions.html#evaluation-order" rel="nofollow">order of operations</a>. This allows you to force precedence in evaluation.</p> <pre><code>&gt;&gt;&gt; (True or True) and False # or expression evaluates first.
False
&gt;&gt;&gt; True or True and False # and evaluates first.
True
</code></pre> <hr> <p>As for the logic in the leap year formula, the <a href="https://www.mathsisfun.com/leap-years.html" rel="nofollow"><strong>rules</strong></a> go as follows:</p> <ol> <li><p><em>Leap Years are any year that can be evenly divided by 4 (such as 2012, 2016, etc)</em></p></li> <li><p><em>Except if it can be evenly divided by 100, then it isn't (such as 2100, 2200, etc)</em></p></li> <li><p><em>Except if it can be evenly divided by 400, then it is (such as 2000, 2400)</em></p></li> </ol> <p>Thus the exception rules must take precedence, which is why the parentheses around the <code>or</code> are necessary to adhere to the formula's rules. Otherwise the two arguments to <code>and</code> would be evaluated first.</p>
3
2016-08-15T18:20:07Z
[ "python", "boolean-logic", "parentheses", "leap-year" ]
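The precedence rule this answer describes can be verified exhaustively: over all eight truth assignments the bare form `a and b or c` matches the explicitly grouped `(a and b) or c`, while forcing the `or` first can flip the outcome:

```python
from itertools import product

for a, b, c in product([True, False], repeat=3):
    # "and" binds tighter than "or", so the bare form groups as (a and b) or c
    assert (a and b or c) == ((a and b) or c)

# Forcing the "or" first can change the result:
bare = True or True and False       # and evaluated first -> True
grouped = (True or True) and False  # or evaluated first  -> False
print(bare, grouped)  # True False
```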
Leap Year Boolean Logic: Include Parentheses?
38,960,503
<p>Which is "more correct (logically)"? <em>Specific to Leap Year, not in general</em>.</p> <ol> <li><p>With Parentheses</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) </code></pre></li> <li><p>Without</p> <pre><code>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0 </code></pre></li> </ol> <hr> <p><strong>Additional Info</strong></p> <p><em>Parentheses change the order in which the booleans are evaluated (<code>and</code> goes before <code>or</code> w/o parenthesis).</em></p> <p><em>Given that all larger numbers are divisible by smaller numbers in this problem, it returns the correct result either way but I'm still curious.</em></p> <p>Observe the effects of parentheses:</p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <p>Without parentheses, there are scenarios where even though <strong>year is not divisible by 4 (first bool) it still returns True</strong> (I know that's impossible in this problem)! <strong>Isn't being divisible by 4 a MUST and therefore it's more correct to include parenthesis?</strong> Anything else I should be paying attention to here? Can someone explain the theoretical logic of not/including parentheses?</p>
4
2016-08-15T18:07:36Z
38,960,751
<p>Include the parentheses. In English, the rule is:</p> <ol> <li>Year must be divisible by 4.</li> <li>Year must not be divisible by 100, unless it's divisible by 400.</li> </ol> <p>The version with parentheses matches this two-pronged rule best.</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
       ^^^^^^^^^^^^^     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            (1)                           (2)
</code></pre> <p>As it happens, removing the parentheses does not break the code, but it leads to an unnatural version of the rules:</p> <ol> <li>Year must be divisible by 4, but not by 100; or</li> <li>Year must be divisible by 400.</li> </ol> <p>That's not the way I think of the leap year rule.</p>
3
2016-08-15T18:25:08Z
[ "python", "boolean-logic", "parentheses", "leap-year" ]
Leap Year Boolean Logic: Include Parentheses?
38,960,503
<p>Which is "more correct (logically)"? <em>Specific to Leap Year, not in general</em>.</p> <ol> <li><p>With Parentheses</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) </code></pre></li> <li><p>Without</p> <pre><code>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0 </code></pre></li> </ol> <hr> <p><strong>Additional Info</strong></p> <p><em>Parentheses change the order in which the booleans are evaluated (<code>and</code> goes before <code>or</code> w/o parenthesis).</em></p> <p><em>Given that all larger numbers are divisible by smaller numbers in this problem, it returns the correct result either way but I'm still curious.</em></p> <p>Observe the effects of parentheses:</p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <p>Without parentheses, there are scenarios where even though <strong>year is not divisible by 4 (first bool) it still returns True</strong> (I know that's impossible in this problem)! <strong>Isn't being divisible by 4 a MUST and therefore it's more correct to include parenthesis?</strong> Anything else I should be paying attention to here? Can someone explain the theoretical logic of not/including parentheses?</p>
4
2016-08-15T18:07:36Z
38,960,804
<blockquote> <p>Which answer is "more correct" and why? (Specific to Leap Year Logic, not in general)</p> <p>With Parentheses</p> <p>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)</p> <p>Without</p> <p>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0</p> </blockquote> <p>It depends on your definition of "more correct". As you know, both return the correct result. </p> <p>Now, speculating on the "more correctness": if you are referring to performance benefits, I cannot think of any, considering today's smart compilers.</p> <p>If you are discussing it from the human-readability point of view, I would go with:</p> <blockquote> <p>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0</p> </blockquote> <p>It naturally narrows down the scope, as opposed to your other alternative, which visually seems to include two disjoint elements.</p> <p>I would suggest including parentheses, but as below:</p> <blockquote> <p>return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0 </p> </blockquote>
1
2016-08-15T18:29:12Z
[ "python", "boolean-logic", "parentheses", "leap-year" ]
Leap Year Boolean Logic: Include Parentheses?
38,960,503
<p>Which is "more correct (logically)"? <em>Specific to Leap Year, not in general</em>.</p> <ol> <li><p>With Parentheses</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) </code></pre></li> <li><p>Without</p> <pre><code>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0 </code></pre></li> </ol> <hr> <p><strong>Additional Info</strong></p> <p><em>Parentheses change the order in which the booleans are evaluated (<code>and</code> goes before <code>or</code> w/o parenthesis).</em></p> <p><em>Given that all larger numbers are divisible by smaller numbers in this problem, it returns the correct result either way but I'm still curious.</em></p> <p>Observe the effects of parentheses:</p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <p>Without parentheses, there are scenarios where even though <strong>year is not divisible by 4 (first bool) it still returns True</strong> (I know that's impossible in this problem)! <strong>Isn't being divisible by 4 a MUST and therefore it's more correct to include parenthesis?</strong> Anything else I should be paying attention to here? Can someone explain the theoretical logic of not/including parentheses?</p>
4
2016-08-15T18:07:36Z
38,960,865
<p>As you noted, in operation, there is no difference, since the number being divisible by 400 implies that it is also divisible by 100, which implies that it is also divisible by 4. Operationally, whether the parentheses have any effect depends on the lexical order (order of evaluation) of the language. Most languages today follow the conventions of c, which means a specified precedence of operators, and left-to-right otherwise. When in doubt, I always put parentheses for readability.</p> <p>Stylistically, this sort of thing is hard to read when put in a long expression like that. If it must be one expression, I would prefer to have the logical "sum of products" to the "product of sums" So I would go</p> <pre><code>return (year%400 == 0) or (year%100 != 0 and year%4 == 0) </code></pre> <p>Or even</p> <pre><code>bool IsLeap = false; if (year%4 == 0) IsLeap = true; if (year%100 == 0) IsLeap = false; if (year%400 == 0) IsLeap = true; return IsLeap; </code></pre> <p>An optimizing compiler will make efficient code, anyway, and this sort of thing really helps poor humans like me to read it.</p>
1
2016-08-15T18:32:48Z
[ "python", "boolean-logic", "parentheses", "leap-year" ]
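The if-ladder sketched in this answer translates directly to Python, and its result can be cross-checked against `calendar.isleap` to confirm that applying the three rules in order reproduces the Gregorian rule:

```python
import calendar

def is_leap(year):
    leap = False
    if year % 4 == 0:
        leap = True
    if year % 100 == 0:
        leap = False
    if year % 400 == 0:
        leap = True
    return leap

print(all(is_leap(y) == calendar.isleap(y) for y in range(1, 4001)))  # True
```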
Leap Year Boolean Logic: Include Parentheses?
38,960,503
<p>Which is "more correct (logically)"? <em>Specific to Leap Year, not in general</em>.</p> <ol> <li><p>With Parentheses</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) </code></pre></li> <li><p>Without</p> <pre><code>return year % 4 == 0 and year % 100 != 0 or year % 400 == 0 </code></pre></li> </ol> <hr> <p><strong>Additional Info</strong></p> <p><em>Parentheses change the order in which the booleans are evaluated (<code>and</code> goes before <code>or</code> w/o parenthesis).</em></p> <p><em>Given that all larger numbers are divisible by smaller numbers in this problem, it returns the correct result either way but I'm still curious.</em></p> <p>Observe the effects of parentheses:</p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <p>Without parentheses, there are scenarios where even though <strong>year is not divisible by 4 (first bool) it still returns True</strong> (I know that's impossible in this problem)! <strong>Isn't being divisible by 4 a MUST and therefore it's more correct to include parenthesis?</strong> Anything else I should be paying attention to here? Can someone explain the theoretical logic of not/including parentheses?</p>
4
2016-08-15T18:07:36Z
38,986,193
<p>Answer: <strong>Include Parentheses</strong></p> <hr> <p><a href="http://stackoverflow.com/users/68587/john-kugelman">John Kugelman</a> explains why these are 2 separate logical tests as opposed to 3; the last 2 should be grouped together:</p> <ol> <li>Year must be divisible by 4.</li> <li>Year must not be divisible by 100, unless it's divisible by 400.</li> </ol> <p>The version with parentheses matches this two-pronged rule best.</p> <pre><code>return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
       ^^^^^^^^^^^^^     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            (1)                           (2)
</code></pre> <p>As it happens, removing the parentheses does not break the code, but it leads to an unnatural version of the rules:</p> <ol> <li>Year must be divisible by 4, but not by 100; or</li> <li>Year must be divisible by 400.</li> </ol> <p>That's not the way I think of the leap year rule.</p> <hr> <p>Inspired by <a href="http://stackoverflow.com/users/6320655/mrdomoboto">mrdomoboto</a>: 100/400 are the exception!</p> <p><em>Year must be divisible by 4</em>; 100 is an exception and 400 is an exception of the exception, but they are still one exception in total (see above). This means that if year is not divisible by 4 then the whole thing must be False.
The only way to ensure this is to put parens around the exception because <code>False and bool</code> will always return False.</p> <p>See below examples of this from <a href="http://stackoverflow.com/users/4722345/jballin">JBallin</a></p> <ol> <li><pre><code>False and True or True #True False and (True or True) #False </code></pre></li> <li><pre><code>False and False or True #True False and (False or True) #False </code></pre></li> </ol> <hr> <p><a href="http://stackoverflow.com/users/3058609/adam-smith">Adam Smith</a> translated the english into code:</p> <p>All years divisible by 4 are leap years, unless they're divisible by 100 and NOT divisible by 400, which translates to:</p> <pre><code>return y % 4 == 0 and not (y % 100 == 0 and y % 400 != 0) </code></pre> <p><a href="http://stackoverflow.com/users/4722345/jballin">JBallin</a> cited <a href="https://en.wikipedia.org/wiki/De_Morgan&#39;s_laws" rel="nofollow">De Morgan's Laws</a>:</p> <pre><code>not(a and b) = (not a or not b) </code></pre> <p>To convert the parens into the desired answer:</p> <pre><code>#move "not" inside parens return y % 4 == 0 and (not y % 100 == 0 or not y % 400 != 0) #convert parens using "DML" return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0) </code></pre>
0
2016-08-16T23:57:05Z
[ "python", "boolean-logic", "parentheses", "leap-year" ]
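The De Morgan step at the end of this answer can itself be checked in code: the `not (... and ...)` form and the parenthesized `or` form are equivalent for every year, and both agree with `calendar.isleap`:

```python
import calendar

def via_not(y):
    return y % 4 == 0 and not (y % 100 == 0 and y % 400 != 0)

def via_parens(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

print(all(via_not(y) == via_parens(y) == calendar.isleap(y)
          for y in range(1, 4001)))  # True
```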
pd.Timedelta conversion on a dataframe column
38,960,514
<p>I am trying to convert a dataframe column to a timedelta but am having issues. The format that the column comes in looks like '+XX:XX:XX' or '-XX:XX:XX'</p> <p>My dataframe:</p> <pre><code> df = pd.DataFrame({'time':['+06:00:00', '-04:00:00'],}) </code></pre> <p>My approach:</p> <pre><code> df['time'] = pd.Timedelta(df['time']) </code></pre> <p>However, I get the error:</p> <pre><code> ValueError: Value must be Timedelta, string, integer, float, timedelta or convertible </code></pre> <p>When I do a simpler example:</p> <pre><code> time = pd.Timedelta('+06:00:00') </code></pre> <p>I get my desired output:</p> <pre><code> Timedelta('0 days 06:00:00') </code></pre> <p>What would be the approach if I wanted to convert a series into a timedelta with my desired output?</p>
1
2016-08-15T18:08:48Z
38,960,586
<p>The error is pretty clear: </p> <blockquote> <p>ValueError: Value must be Timedelta, string, integer, float, timedelta or convertible</p> </blockquote> <p>What you are passing to <code>pd.Timedelta()</code> is none of the above data types:</p> <pre><code>&gt;&gt;&gt; type(df['time'])
&lt;class 'pandas.core.series.Series'&gt;
</code></pre> <p>Probably what you want is:</p> <pre><code>&gt;&gt;&gt; [pd.Timedelta(x) for x in df['time']]
[Timedelta('0 days 06:00:00'), Timedelta('-1 days +20:00:00')]
</code></pre> <p>Or:</p> <pre><code>&gt;&gt;&gt; df['time'].apply(pd.Timedelta)
0           06:00:00
1   -1 days +20:00:00
Name: time, dtype: timedelta64[ns]
</code></pre> <p>See more examples in the <a href="http://pandas.pydata.org/pandas-docs/stable/timedeltas.html" rel="nofollow">docs</a>.</p>
2
2016-08-15T18:14:21Z
[ "python", "pandas" ]
pd.Timedelta conversion on a dataframe column
38,960,514
<p>I am trying to convert a dataframe column to a timedelta but am having issues. The format that the column comes in looks like '+XX:XX:XX' or '-XX:XX:XX'</p> <p>My dataframe:</p> <pre><code> df = pd.DataFrame({'time':['+06:00:00', '-04:00:00'],}) </code></pre> <p>My approach:</p> <pre><code> df['time'] = pd.Timedelta(df['time']) </code></pre> <p>However, I get the error:</p> <pre><code> ValueError: Value must be Timedelta, string, integer, float, timedelta or convertible </code></pre> <p>When I do a simpler example:</p> <pre><code> time = pd.Timedelta('+06:00:00') </code></pre> <p>I get my desired output:</p> <pre><code> Timedelta('0 days 06:00:00') </code></pre> <p>What would be the approach if I wanted to convert a series into a timedelta with my desired output?</p>
1
2016-08-15T18:08:48Z
38,961,561
<p>I would strongly recommend using the specifically designed and vectorized (i.e. very fast) method <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="nofollow">to_timedelta()</a>:</p> <pre><code>In [40]: pd.to_timedelta(df['time'])
Out[40]:
0           06:00:00
1   -1 days +20:00:00
Name: time, dtype: timedelta64[ns]
</code></pre> <p><strong>Timing</strong> against a 200K rows DF:</p> <pre><code>In [41]: df = pd.concat([df] * 10**5, ignore_index=True)

In [42]: df.shape
Out[42]: (200000, 1)

In [43]: %timeit pd.to_timedelta(df['time'])
1 loop, best of 3: 891 ms per loop

In [44]: %timeit df['time'].apply(pd.Timedelta)
1 loop, best of 3: 7.15 s per loop

In [45]: %timeit [pd.Timedelta(x) for x in df['time']]
1 loop, best of 3: 5.52 s per loop
</code></pre>
1
2016-08-15T19:17:42Z
[ "python", "pandas" ]
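The two working approaches from these answers can be checked against each other; assuming pandas is installed, `pd.to_timedelta` and the element-wise `apply` give identical results on the question's data:

```python
import pandas as pd

df = pd.DataFrame({'time': ['+06:00:00', '-04:00:00']})

vectorized = pd.to_timedelta(df['time'])      # fast, vectorized parse
elementwise = df['time'].apply(pd.Timedelta)  # per-element construction

print(vectorized.equals(elementwise))  # True
```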
Using gdal in python to produce tiff files from csv files
38,960,568
<p>I have many csv files with this format:</p> <pre><code>Latitude,Longitude,Concentration
53.833399,-122.825257,0.021957
53.837893,-122.825238,0.022642
....
</code></pre> <p>My goal is to produce GeoTiff files based on the information within these files (one tiff file per csv file), preferably using python. This was done several years ago on the project I am working on; however, how they did it before has been lost. All I know is that they most likely used GDAL.</p> <p>I have attempted to do this by researching how to use GDAL, but this has not got me anywhere, as there are limited resources and I have no knowledge of how to use this. </p> <p>Can someone help me with this?</p>
0
2016-08-15T18:13:06Z
38,978,123
<p>Here is a little code I adapted for your case. You need to have the GDAL directory with all the *.exe files added to your path for it to work (in most cases it's <code>C:\Program Files (x86)\GDAL</code>). </p> <p>It uses the <code>gdal_grid.exe</code> util (see doc here: <a href="http://www.gdal.org/gdal_grid.html" rel="nofollow">http://www.gdal.org/gdal_grid.html</a>)</p> <p>You can modify the <code>gdal_cmd</code> variable as you wish to suit your needs.</p> <pre><code>import subprocess
import os

# your directory with all your csv files in it
dir_with_csvs = r"C:\my_csv_files"

# make it the active directory
os.chdir(dir_with_csvs)

# function to get the csv filenames in the directory
def find_csv_filenames(path_to_dir, suffix=".csv"):
    filenames = os.listdir(path_to_dir)
    return [ filename for filename in filenames if filename.endswith(suffix) ]

# get the filenames
csvfiles = find_csv_filenames(dir_with_csvs)

# loop through each CSV file
# for each CSV file, make an associated VRT file to be used with gdal_grid command
# and then run the gdal_grid util in a subprocess instance
for fn in csvfiles:
    vrt_fn = fn.replace(".csv", ".vrt")
    lyr_name = fn.replace('.csv', '')
    out_tif = fn.replace('.csv', '.tiff')
    with open(vrt_fn, 'w') as fn_vrt:
        fn_vrt.write('&lt;OGRVRTDataSource&gt;\n')
        fn_vrt.write('\t&lt;OGRVRTLayer name="%s"&gt;\n' % lyr_name)
        fn_vrt.write('\t\t&lt;SrcDataSource&gt;%s&lt;/SrcDataSource&gt;\n' % fn)
        fn_vrt.write('\t\t&lt;GeometryType&gt;wkbPoint&lt;/GeometryType&gt;\n')
        fn_vrt.write('\t\t&lt;GeometryField encoding="PointFromColumns" x="Longitude" y="Latitude" z="Concentration"/&gt;\n')
        fn_vrt.write('\t&lt;/OGRVRTLayer&gt;\n')
        fn_vrt.write('&lt;/OGRVRTDataSource&gt;\n')

    gdal_cmd = 'gdal_grid -a invdist:power=2.0:smoothing=1.0 -zfield "Concentration" -of GTiff -ot Float64 -l %s %s %s' % (lyr_name, vrt_fn, out_tif)

    subprocess.call(gdal_cmd, shell=True)
</code></pre>
1
2016-08-16T14:52:50Z
[ "python", "csv", "tiff", "gdal", "geotiff" ]
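The VRT-writing step from the answer can be isolated and exercised without GDAL installed; the sketch below uses a single template string instead of line-by-line writes. The column names `Longitude`/`Latitude`/`Concentration` match the question's CSV header, and actually running `gdal_grid` afterwards still requires GDAL on the PATH:

```python
import os
import tempfile

VRT_TEMPLATE = """<OGRVRTDataSource>
    <OGRVRTLayer name="{layer}">
        <SrcDataSource>{csv}</SrcDataSource>
        <GeometryType>wkbPoint</GeometryType>
        <GeometryField encoding="PointFromColumns" x="Longitude" y="Latitude" z="Concentration"/>
    </OGRVRTLayer>
</OGRVRTDataSource>
"""

def write_vrt(csv_path):
    """Write the .vrt file describing csv_path and return its path."""
    layer = os.path.splitext(os.path.basename(csv_path))[0]
    vrt_path = os.path.splitext(csv_path)[0] + ".vrt"
    with open(vrt_path, "w") as f:
        f.write(VRT_TEMPLATE.format(layer=layer, csv=csv_path))
    return vrt_path

tmpdir = tempfile.mkdtemp()
vrt = write_vrt(os.path.join(tmpdir, "site1.csv"))
content = open(vrt).read()
print(content.splitlines()[1].strip())  # <OGRVRTLayer name="site1">
```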
Find indexes of two lists
38,960,631
<p>I have two numpy lists:</p> <pre><code>x = ['A', 'A', 'C', 'A', 'V', 'A', 'B', 'A', 'A', 'A']
y = ['1', '2', '1', '1', '3', '2', '1', '1', '1', '1']
</code></pre> <p>How can I find indexes when simultaneously <code>x</code> equals <code>'A'</code> and <code>y</code> equals <code>'2'</code>?</p> <p>I expect to get indexes <code>[1, 5]</code>.</p> <p>I tried to use: <code>np.where(x == 'A' and y == '2')</code> but it didn't help me. </p>
1
2016-08-15T18:17:15Z
38,960,694
<p>You need to convert the list to numpy array in order to use vectorized operation such as <code>==</code> and <code>&amp;</code>:</p> <pre><code>import numpy as np np.where((np.array(x) == "A") &amp; (np.array(y) == "2")) # (array([1, 5]),) </code></pre> <p>Shorter version (if you are sure that x and y are numpy arrays):</p> <pre><code>&gt;&gt;&gt; np.where(np.logical_and(x == 'A', y == '2')) (array([1, 5]),) </code></pre>
2
2016-08-15T18:21:18Z
[ "python", "numpy", "where" ]
Find indexes of two lists
38,960,631
<p>I have two numpy lists:</p> <pre><code>x = ['A', 'A', 'C', 'A', 'V', 'A', 'B', 'A', 'A', 'A']
y = ['1', '2', '1', '1', '3', '2', '1', '1', '1', '1']
</code></pre> <p>How can I find indexes when simultaneously <code>x</code> equals <code>'A'</code> and <code>y</code> equals <code>'2'</code>?</p> <p>I expect to get indexes <code>[1, 5]</code>.</p> <p>I tried to use: <code>np.where(x == 'A' and y == '2')</code> but it didn't help me. </p>
1
2016-08-15T18:17:15Z
38,960,750
<p>pure python solution:</p> <pre><code>&gt;&gt;&gt; [i for i,j in enumerate(zip(x,y)) if j==('A','2')] [1, 5] </code></pre>
2
2016-08-15T18:25:08Z
[ "python", "numpy", "where" ]
Find indexes of two lists
38,960,631
<p>I have two numpy lists:</p> <pre><code>x = ['A', 'A', 'C', 'A', 'V', 'A', 'B', 'A', 'A', 'A']
y = ['1', '2', '1', '1', '3', '2', '1', '1', '1', '1']
</code></pre> <p>How can I find indexes when simultaneously <code>x</code> equals <code>'A'</code> and <code>y</code> equals <code>'2'</code>?</p> <p>I expect to get indexes <code>[1, 5]</code>.</p> <p>I tried to use: <code>np.where(x == 'A' and y == '2')</code> but it didn't help me. </p>
1
2016-08-15T18:17:15Z
38,960,855
<p>If you want to work with lists:</p> <pre><code>idx1 = [i for i, x in enumerate(x) if x == 'A'] idx2 = [i for i, x in enumerate(y) if x == '2'] list(set(idx1).intersection(idx2)) </code></pre>
1
2016-08-15T18:32:09Z
[ "python", "numpy", "where" ]
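All three answers can be run side by side on the question's data; assuming NumPy is installed, they agree on the expected indexes `[1, 5]` (the set-intersection version loses ordering, so it is sorted before comparison):

```python
import numpy as np

x = np.array(['A', 'A', 'C', 'A', 'V', 'A', 'B', 'A', 'A', 'A'])
y = np.array(['1', '2', '1', '1', '3', '2', '1', '1', '1', '1'])

# Vectorized boolean masks combined with & (np.where needs arrays, not lists)
vectorized = np.where((x == 'A') & (y == '2'))[0].tolist()

# Pure-Python pairing
pure_python = [i for i, pair in enumerate(zip(x, y)) if pair == ('A', '2')]

# Per-condition index sets intersected, then sorted to restore order
intersected = sorted(set(np.flatnonzero(x == 'A').tolist()) &
                     set(np.flatnonzero(y == '2').tolist()))

print(vectorized, pure_python, intersected)  # [1, 5] [1, 5] [1, 5]
```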
TypeError: int() argument must be a string or a number, not 'Model Instance' - Heroku
38,960,708
<p>Trying to deploy my project on the server, and i'm stuck in migrations becouse there is some error:</p> <pre><code> File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/base/schema.py", line 382, in add_field definition, params = self.column_sql(model, field, include_default=True) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/base/schema.py", line 145, in column_sql default_value = self.effective_default(field) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/backends/base/schema.py", line 210, in effective_default default = field.get_db_prep_save(default, self.connection) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/fields/related.py", line 915, in get_db_prep_save return self.target_field.get_db_prep_save(value, connection=connection) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 728, in get_db_prep_save prepared=False) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 968, in get_db_prep_value value = self.get_prep_value(value) File "/app/.heroku/python/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 976, in get_prep_value return int(value) TypeError: int() argument must be a string or a number, not 'Profile' </code></pre> <p>And here is my models.py file:</p> <pre><code>class Post(models.Model): author = models.ForeignKey('auth.User') title = models.CharField(max_length=75) image = models.ImageField(null=True, blank=True, width_field="width_field", height_field="height_field", upload_to='images') #must be installed Pillow for ImageField height_field = models.IntegerField(default=0) width_field = models.IntegerField(default=0) content = models.TextField() updated = models.DateTimeField(default=datetime.now()) timestamp = models.DateTimeField(default=datetime.now()) def __str__(self): return self.title @receiver(pre_delete, sender=Post) def 
post_delete(sender, instance, **kwargs): # Pass false so FileField doesn't save the model. if instance.image: instance.image.delete(False) class Profile(models.Model): onlyletters = RegexValidator(r'^[a-zA-Z]*$', 'Only letters are allowed.') author = models.ForeignKey('auth.User') name = models.CharField(max_length=20, null = True, validators=[onlyletters]) surname = models.CharField(max_length=25, null = True, validators=[onlyletters]) city = models.CharField(max_length=30, blank=True, validators=[onlyletters]) birth_date = models.DateField(blank=True, null=True, help_text="Can not be more than 100 years - (Format yyyy-mm-dd)") topic = models.CharField(max_length = 50, null=True) def __str__(self): return self.topic class Favourite(models.Model): name = models.CharField(max_length=50) members = models.ManyToManyField(Profile, through='Membership') def __str__(self): # __unicode__ on Python 2 return self.name class Membership(models.Model): author = models.CharField(max_length=50) profile = models.ForeignKey(Profile, default=Profile) favourite = models.ForeignKey(Favourite) created = models.DateTimeField(default=timezone.now) class Comment(models.Model): post = models.ForeignKey('blogapp.Post', related_name='comments') author = models.ForeignKey('auth.User') text = models.TextField() created_date = models.DateTimeField(default=timezone.now) def __str__(self): return self.text </code></pre> <p>Just trying to make some fresh app on heroku, making new migrations and deleting olders but it's still didn't work. Can anybody know what's wrong with it? Thanks for the answer.</p>
0
2016-08-15T18:22:13Z
38,961,132
<p>Here's the culprit:</p> <pre><code>profile = models.ForeignKey(Profile, default=Profile)
                                   # ^^^^^^^^^^^^^^^
</code></pre> <p>You can't set a Model class as a <code>ForeignKey</code> default. If you're thinking of setting a <em>hardcoded</em> default then you should use an <code>int</code> and be sure the selected value exists as a key in your <code>Profile</code> model.</p>
1
2016-08-15T18:50:58Z
[ "python", "django", "heroku" ]
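The failure mode in this answer can be reproduced without Django: a callable passed as `default` is called, so `default=Profile` produces a `Profile` instance, which then fails the `int()` coercion of the foreign-key value as in the traceback. The `Profile` class and `coerce_default` helper below are plain stand-ins for illustration, not the real model or Django internals:

```python
class Profile:
    """Plain stand-in for the Django model class used as default=Profile."""

def coerce_default(value):
    # Roughly what the integer pk field does to the default value
    return int(value)

try:
    coerce_default(Profile())   # Django calls the callable default first
except TypeError as exc:
    error = str(exc)

print(error)                # int() argument must be ... not 'Profile'
print(coerce_default(1))    # a concrete primary-key value works
```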
Python: Strategically go through ten digit numbers with 0-9
38,960,714
<p>Recently, I read a math problem inspiring me to write a program. It asked to <strong>arrange the digits 0-9 once each so that <em>xx xxx / xx xxx = 9</em>.</strong> I wrote a python program to find the solutions and had a bit of trouble making sure the digits were different. I found a way using nested <em>whiles</em> and <em>ifs</em>, but I'm not quite happy with it.</p> <pre><code>b,c,x,y,z = 0,0,0,0,0 #I shortened the code from b,c,d,e,v,w,x,y,z
for a in range (10):
    while b &lt; 10:
        if b != a:
            while c &lt; 10:
                if c != b and c != a:
                    while x &lt; 10:
                        if x != c and x != b and x != a:
                            while y &lt; 10:
                                if y != x and y != c and y != b and y != a:
                                    while z &lt; 10:
                                        if z != y and z != x and z != c and z != b and z != a:
                                            if (a*100 + b*10 + c)/(x*100 + y*10 + z) == 9:
                                                print ()
                                                print (str (a*100 + b*10 + c) + "/" + str (x*100 + y*10 + z))
                                        z += 1
                                    z = 0
                                    y += 1
                            y,z = 0,0
                            x += 1
                    x,y,z = 0,0,0
                    c += 1
            c,x,y,z = 0,0,0,0
            b += 1
    b,c,x,y,z = 0,0,0,0,0
</code></pre> <p>As you can see, the <strong>code is very long and repetitive</strong>, even the shortened form. Running it on my laptop <strong>takes almost a minute</strong> (and my laptop is new). I have searched for answers, but I only found ways to generate random numbers. I tried using <em>itertools.permutations</em> as well, but that only shows the permutations, not creating a number. </p> <p>Generating all ten digits takes too long, and I want to know if there is a <strong><em>faster, simpler way, with an explanation, using python 3.</em></strong></p> <p>Thanks</p>
3
2016-08-15T18:22:39Z
38,960,864
<p>Here's one way to use <code>itertools</code> for this problem: generate every permutation of the ten digits, turn the first five and last five digits into numbers, and keep the pairs in a 9:1 ratio.</p> <pre><code>import itertools def makenum(digits): return int(''.join(map(str, digits))) for p in itertools.permutations(range(10)): a = makenum(p[:5]) b = makenum(p[5:]) if a == 9 * b: print(a, b) </code></pre>
2
2016-08-15T18:32:46Z
[ "python", "algorithm", "python-3.x", "digits" ]
Python: Strategically go through ten digit numbers with 0-9
38,960,714
<p>Recently, I read a math problem inspiring me to write a program. It asked to <strong>arrange the digits 0-9 once each so that <em>xx xxx / xx xxx = 9</em>.</strong> I wrote a python program to find the solutions and had a bit of trouble making sure the digits were different. I found a way using nested <em>whiles</em> and <em>ifs</em>, but I'm not quite happy with it.</p> <pre><code>b,c,x,y,z = 0,0,0,0,0 #I shortened the code from b,c,d,e,v,w,x,y,z for a in range (10): while b &lt; 10: if b != a: while c &lt; 10: if c != b and c != a: while x &lt; 10: if x != c and x != b and x != a: while y &lt; 10: if y != x and y != c and y != b and y != a: while z &lt; 10: if z != y and z != x and z != c and z != b and z != a: if (a*100 + b*10 + c)/(x*100 + y*10 + z) == 9: print () print (str (a*100 + b*10 + c) + "/" + str (x*100 + y*10 + z)) z += 1 z = 0 y += 1 y,z = 0,0 x += 1 x,y,z = 0,0,0 c += 1 c,x,y,z = 0,0,0,0 b += 1 b,c,x,y,z = 0,0,0,0,0 </code></pre> <p>As you can see, the <strong>code is very long and repetitive</strong>, even the shortened form. Running it on my laptop <strong>takes almost a minute</strong> (and my laptop is new). I have searched for answers, but I only found ways to generate random numbers. I tried using <em>itertools.permutations</em> as well, but that only shows the permutations, not creating a number.</p> <p>Generating all ten digits takes too long, and I want to know if there is a <strong><em>faster, simpler way, with an explanation, using python 3</em></strong>.</p> <p>Thanks</p>
3
2016-08-15T18:22:39Z
38,961,088
<p>Take advantage of algebra:</p> <pre><code>a / b = 9 == a = 9 * b </code></pre> <p>Knowing that, you only have to bother generating the values:</p> <pre><code>[(9*num, num) for num in range(10000, 100000)] </code></pre> <p>If you need to filter things out by some criteria, you can easily write a filter function:</p> <pre><code>def unique_numbers(num): num = str(num) return len(num) == len(set(num)) [(9*num, num) for num in range(10000, 100000) if unique_numbers(num) and unique_numbers(9*num)] </code></pre> <p>If you wanted to shorten things a bit, you could re-write your function so that it returns the valid pair, or <code>None</code> otherwise.</p> <pre><code>def good_nums_or_none(num): a = num * 9 b = num str_a = str(a) str_b = str(b) if len(str_a) == len(set(str_a)) and len(str_b) == len(set(str_b)): return a, b else: return None [nums for nums in (good_nums_or_none(num) for num in range(10000, 100000)) if nums is not None] </code></pre> <p>Or, just create a generator and iterate over that:</p> <pre><code> def target_numbers(factor=9, min=10000, max=100000): cur = min while cur &lt; max: a = factor*cur b = cur str_a = str(a) str_b = str(b) if len(str_a) == len(set(str_a)) and len(str_b) == len(set(str_b)): yield a, b cur += 1 [num for num in target_numbers()] </code></pre> <p>If you want to allow zero-padded numbers in <code>b</code> then you can use this filter:</p> <pre><code>def target_numbers(factor=9, min=1000, max=100000): cur = min while cur &lt; max: b = cur a = factor*cur text = str(a) + str(b).zfill(5) if len(text) == len(set(text)): yield a, b cur += 1 </code></pre>
2
2016-08-15T18:48:11Z
[ "python", "algorithm", "python-3.x", "digits" ]
Python: Strategically go through ten digit numbers with 0-9
38,960,714
<p>Recently, I read a math problem inspiring me to write a program. It asked to <strong>arrange the digits 0-9 once each so that <em>xx xxx / xx xxx = 9</em>.</strong> I wrote a python program to find the solutions and had a bit of trouble making sure the digits were different. I found a way using nested <em>whiles</em> and <em>ifs</em>, but I'm not quite happy with it.</p> <pre><code>b,c,x,y,z = 0,0,0,0,0 #I shortened the code from b,c,d,e,v,w,x,y,z for a in range (10): while b &lt; 10: if b != a: while c &lt; 10: if c != b and c != a: while x &lt; 10: if x != c and x != b and x != a: while y &lt; 10: if y != x and y != c and y != b and y != a: while z &lt; 10: if z != y and z != x and z != c and z != b and z != a: if (a*100 + b*10 + c)/(x*100 + y*10 + z) == 9: print () print (str (a*100 + b*10 + c) + "/" + str (x*100 + y*10 + z)) z += 1 z = 0 y += 1 y,z = 0,0 x += 1 x,y,z = 0,0,0 c += 1 c,x,y,z = 0,0,0,0 b += 1 b,c,x,y,z = 0,0,0,0,0 </code></pre> <p>As you can see, the <strong>code is very long and repetitive</strong>, even the shortened form. Running it on my laptop <strong>takes almost a minute</strong> (and my laptop is new). I have searched for answers, but I only found ways to generate random numbers. I tried using <em>itertools.permutations</em> as well, but that only shows the permutations, not creating a number.</p> <p>Generating all ten digits takes too long, and I want to know if there is a <strong><em>faster, simpler way, with an explanation, using python 3</em></strong>.</p> <p>Thanks</p>
3
2016-08-15T18:22:39Z
38,961,197
<p>Runs in <strong>0.7 secs</strong>. Faster than most solutions mentioned, though a bit clumsy.</p> <pre><code>def sol(a,b,zero): for i in range(a,b): fl = 0 marked = 10*[0] marked[0] = zero tmp = i while tmp &gt; 0: marked[tmp%10] = marked[tmp%10] + 1 tmp = tmp // 10 numerator = i*9 while numerator &gt; 0: marked[numerator%10] = marked[numerator%10] + 1 numerator = numerator // 10 for j in range(10): if marked[j] != 1: fl = 1 if fl == 0: print("found a solution", i*9, "/", i) sol(1000,10000,1) sol(10000,100000,0) </code></pre> <p>The solution printed is:</p> <pre><code> found a solution 57429 / 6381 found a solution 58239 / 6471 found a solution 75249 / 8361 found a solution 95742 / 10638 found a solution 95823 / 10647 found a solution 97524 / 10836 </code></pre>
1
2016-08-15T18:54:58Z
[ "python", "algorithm", "python-3.x", "digits" ]
Python: Strategically go through ten digit numbers with 0-9
38,960,714
<p>Recently, I read a math problem inspiring me to write a program. It asked to <strong>arrange the digits 0-9 once each so that <em>xx xxx / xx xxx = 9</em>.</strong> I wrote a python program to find the solutions and had a bit of trouble making sure the digits were different. I found a way using nested <em>whiles</em> and <em>ifs</em>, but I'm not quite happy with it.</p> <pre><code>b,c,x,y,z = 0,0,0,0,0 #I shortened the code from b,c,d,e,v,w,x,y,z for a in range (10): while b &lt; 10: if b != a: while c &lt; 10: if c != b and c != a: while x &lt; 10: if x != c and x != b and x != a: while y &lt; 10: if y != x and y != c and y != b and y != a: while z &lt; 10: if z != y and z != x and z != c and z != b and z != a: if (a*100 + b*10 + c)/(x*100 + y*10 + z) == 9: print () print (str (a*100 + b*10 + c) + "/" + str (x*100 + y*10 + z)) z += 1 z = 0 y += 1 y,z = 0,0 x += 1 x,y,z = 0,0,0 c += 1 c,x,y,z = 0,0,0,0 b += 1 b,c,x,y,z = 0,0,0,0,0 </code></pre> <p>As you can see, the <strong>code is very long and repetitive</strong>, even the shortened form. Running it on my laptop <strong>takes almost a minute</strong> (and my laptop is new). I have searched for answers, but I only found ways to generate random numbers. I tried using <em>itertools.permutations</em> as well, but that only shows the permutations, not creating a number.</p> <p>Generating all ten digits takes too long, and I want to know if there is a <strong><em>faster, simpler way, with an explanation, using python 3</em></strong>.</p> <p>Thanks</p>
3
2016-08-15T18:22:39Z
38,961,307
<p>This is a minor variation on @David's answer.</p> <p>If we look at <code>itertools.permutations([1,2,3,4])</code></p> <pre><code>&gt;&gt;&gt; for p in itertools.permutations([1,2,3,4]): ... print(p) ... (1, 2, 3, 4) (1, 2, 4, 3) (1, 3, 2, 4) (1, 3, 4, 2) (1, 4, 2, 3) (1, 4, 3, 2) (2, 1, 3, 4) (2, 1, 4, 3) (2, 3, 1, 4) (2, 3, 4, 1) (2, 4, 1, 3) (2, 4, 3, 1) (3, 1, 2, 4) (3, 1, 4, 2) (3, 2, 1, 4) (3, 2, 4, 1) (3, 4, 1, 2) (3, 4, 2, 1) (4, 1, 2, 3) (4, 1, 3, 2) (4, 2, 1, 3) (4, 2, 3, 1) (4, 3, 1, 2) (4, 3, 2, 1) </code></pre> <p>You'll notice that for a tuple <code>(a,b,c,d)</code>, <code>(c,d,a,b)</code> also appears. If a number <code>a == 9*b</code>, then <code>b != a*9</code>. We'll use this to our advantage.</p> <p>Also, note that if <code>a = 9*b</code>, <code>a</code> must be bigger than <code>b</code>, unless we're using negative numbers or non-integers.</p> <p>You'll see that as we look at the results, splitting the tuples in half and turning them into numbers gives, initially, a small number followed by a larger number. This is a side effect of passing <code>permutations</code> a sorted list. Again, we can use this to our advantage.</p> <pre><code>import itertools def makenum(digits): return int(''.join(map(str, digits))) for p in itertools.permutations(range(10)): first_half = makenum(p[:5]) second_half = makenum(p[5:]) if second_half &lt; first_half: # first half is smaller for first half of permutations, second half of permutations has been covered, backwards. break if second_half == 9 * first_half: print(first_half, second_half) </code></pre> <p>If you're taking user input, you should be able to get the same result simply by <code>sort</code>ing your input:</p> <pre><code>for p in itertools.permutations(sorted(digits)): # ... </code></pre>
0
2016-08-15T19:01:03Z
[ "python", "algorithm", "python-3.x", "digits" ]
Python: Strategically go through ten digit numbers with 0-9
38,960,714
<p>Recently, I read a math problem inspiring me to write a program. It asked to <strong>arrange the digits 0-9 once each so that <em>xx xxx / xx xxx = 9</em>.</strong> I wrote a python program to find the solutions and had a bit of trouble making sure the digits were different. I found a way using nested <em>whiles</em> and <em>ifs</em>, but I'm not quite happy with it.</p> <pre><code>b,c,x,y,z = 0,0,0,0,0 #I shortened the code from b,c,d,e,v,w,x,y,z for a in range (10): while b &lt; 10: if b != a: while c &lt; 10: if c != b and c != a: while x &lt; 10: if x != c and x != b and x != a: while y &lt; 10: if y != x and y != c and y != b and y != a: while z &lt; 10: if z != y and z != x and z != c and z != b and z != a: if (a*100 + b*10 + c)/(x*100 + y*10 + z) == 9: print () print (str (a*100 + b*10 + c) + "/" + str (x*100 + y*10 + z)) z += 1 z = 0 y += 1 y,z = 0,0 x += 1 x,y,z = 0,0,0 c += 1 c,x,y,z = 0,0,0,0 b += 1 b,c,x,y,z = 0,0,0,0,0 </code></pre> <p>As you can see, the <strong>code is very long and repetitive</strong>, even the shortened form. Running it on my laptop <strong>takes almost a minute</strong> (and my laptop is new). I have searched for answers, but I only found ways to generate random numbers. I tried using <em>itertools.permutations</em> as well, but that only shows the permutations, not creating a number.</p> <p>Generating all ten digits takes too long, and I want to know if there is a <strong><em>faster, simpler way, with an explanation, using python 3</em></strong>.</p> <p>Thanks</p>
3
2016-08-15T18:22:39Z
38,961,548
<p>Adapting Wayne Werner's solution, you can do this to add the digit-uniqueness constraint (assuming Python 3):</p> <pre><code>[(9*num, num) for num in range(10000, 100000 // 9) if len(set(str(num) + str(num * 9))) == 10] </code></pre> <p>This runs in 1.5 ms on my machine.</p> <p>Note that you only need to check numbers between 10000 and 100000 // 9 = 11111.</p> <p>And if you want to allow preceding zeros, you can do it like this:</p> <pre><code>[(9*num, num) for num in range(0, 100000 // 9) if len(set(("%05d" % num) + ("%05d" % (num * 9)))) == 10] </code></pre> <p>And this one takes 15 ms.</p>
2
2016-08-15T19:16:56Z
[ "python", "algorithm", "python-3.x", "digits" ]
Does ending a python 2.7 print statement with a comma not work in tmux?
38,960,887
<p>Pretty much just what's in the question. I have a slow process running in tmux and wanted to document progress through the for loop by printing the loop variable.</p> <pre><code>print 'Progress...', for i in range(15): ... print i, print </code></pre> <p>This works in my terminal. In tmux, however, it doesn't print anything until it hits a new-line command with the last print. Does printing on the same line repeatedly not work in tmux? How could I remedy this? It's not a big deal, I'm just curious what I can do as I don't know much about bash scripting.</p> <p>Thanks!</p>
3
2016-08-15T18:34:13Z
38,960,910
<p>This is almost surely due to output buffering. You can verify that's the cause by flushing explicitly:</p> <pre><code>import sys print 'Progress...', for i in range(15): ... print i, sys.stdout.flush() print </code></pre> <p>If this fixes your problem, you might consider running <a href="https://docs.python.org/2/using/cmdline.html#cmdoption-u">python unbuffered</a>.</p>
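Under Python 3 the same idea is built into `print` as the `flush=` keyword (Python 2.7's print statement has no equivalent, hence the explicit `sys.stdout.flush()` above). A small sketch, writing to a `StringIO` stand-in for the terminal so the output shape is visible:

```python
import io

out = io.StringIO()   # stand-in for sys.stdout so the result can be inspected

# flush=True pushes each fragment out immediately, even without a newline
print('Progress...', end=' ', file=out, flush=True)
for i in range(3):
    print(i, end=' ', file=out, flush=True)
print(file=out)       # final newline, as in the original loop

# out.getvalue() == 'Progress... 0 1 2 \n'
```

On a real terminal or pipe, each `flush=True` call forces the partial line out of the buffer right away, which is exactly what the tmux pane was missing.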
5
2016-08-15T18:36:01Z
[ "python", "bash", "tmux" ]
Why don't these tkinter programs behave identically?
38,960,936
<p>I have 2 programs I made. The 2nd one is based off of the first one. Unfortunately, for some reason the 2nd one doesn't work even though the first one does.</p> <p>First code:</p> <pre><code>import time from Tkinter import * root = Tk() t=StringVar() num=1 t.set(str(num)) thelabel = Label(root, textvariable=t).pack() def printnum (x): while x&lt;= 100: t.set(str(x)) x += 1 root.update() time.sleep(30) printnum(num) root.mainloop() </code></pre> <p>This code works like a charm. Here is the other one.</p> <p>Second code:</p> <pre><code>#!/usr/bin/python # -*- coding: latin-1 -*- import Adafruit_DHT as dht import time from Tkinter import * root = Tk() k=StringVar() num = 1 k.set(str(num)) thelabel = Label(root, textvariable=k).pack def printnum(x): while x &lt;= 10000000000000: h,t = dht.read_retry(dht.DHT22, 4) newtext = "Temp%s*C Humidity=%s" %(t,h) k.set(newtext) x += 1 root.update time.sleep(30) printnum(num) root.mainloop() </code></pre> <p>The code runs but it doesn't do anything; no window pops up like the other code does. Please help, I cannot figure out how to fix this, or explain why the first one works and the second one doesn't.</p>
-2
2016-08-15T18:38:18Z
38,960,987
<p>You're overwriting the previous value of <code>t</code> with the temperature from <code>read_retry</code> on this line:</p> <pre><code>h,t = dht.read_retry(dht.DHT22, 4) </code></pre> <p>Then, when you try to call <code>set</code>, <code>t</code> is now a float, so it doesn't have a <code>set</code> method. Use a different variable name instead of <code>t</code> for one of them.</p>
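A stand-alone sketch of the rebinding failure described above (stubs only; no Tkinter or sensor library involved):

```python
class VarStub:
    """Minimal stand-in for Tkinter's StringVar (illustration only)."""
    def __init__(self):
        self.value = ''
    def set(self, text):
        self.value = text

t = VarStub()          # like t = StringVar()
h, t = 68.0, 21.5      # like h, t = dht.read_retry(...): t is rebound to a float
has_set = hasattr(t, 'set')   # False -- t.set(...) would now raise AttributeError
```

Tuple unpacking assigns new objects to the names on the left, so whatever `t` referred to before is simply shadowed by the second value.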
3
2016-08-15T18:41:46Z
[ "python", "tkinter", "raspberry-pi" ]
Why don't these tkinter programs behave identically?
38,960,936
<p>I have 2 programs I made. The 2nd one is based off of the first one. Unfortunately, for some reason the 2nd one doesn't work even though the first one does.</p> <p>First code:</p> <pre><code>import time from Tkinter import * root = Tk() t=StringVar() num=1 t.set(str(num)) thelabel = Label(root, textvariable=t).pack() def printnum (x): while x&lt;= 100: t.set(str(x)) x += 1 root.update() time.sleep(30) printnum(num) root.mainloop() </code></pre> <p>This code works like a charm. Here is the other one.</p> <p>Second code:</p> <pre><code>#!/usr/bin/python # -*- coding: latin-1 -*- import Adafruit_DHT as dht import time from Tkinter import * root = Tk() k=StringVar() num = 1 k.set(str(num)) thelabel = Label(root, textvariable=k).pack def printnum(x): while x &lt;= 10000000000000: h,t = dht.read_retry(dht.DHT22, 4) newtext = "Temp%s*C Humidity=%s" %(t,h) k.set(newtext) x += 1 root.update time.sleep(30) printnum(num) root.mainloop() </code></pre> <p>The code runs but it doesn't do anything; no window pops up like the other code does. Please help, I cannot figure out how to fix this, or explain why the first one works and the second one doesn't.</p>
-2
2016-08-15T18:38:18Z
38,961,932
<p><code>root.update</code> doesn't do anything: without parentheses it only references the method instead of calling it. You need to add <code>()</code>:</p> <pre><code>root.update() </code></pre> <p>That being said, your algorithm is the wrong way to run periodic tasks in tkinter. See <a href="http://stackoverflow.com/a/37681471/7432">http://stackoverflow.com/a/37681471/7432</a></p>
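The pattern behind that link is tkinter's `after()` method: instead of sleeping inside a loop, each poll schedules the next one and lets `mainloop()` fire it. A rough sketch of the idea; `StubRoot` here is a hypothetical stand-in for a real Tk root so the scheduling logic runs headlessly (with a real root you would call `root.after(30000, ...)` and never drain the queue yourself):

```python
# Sketch of the after()-style periodic update.  StubRoot mimics just the
# after(ms, callback) interface: the real widget queues the callback for
# mainloop to fire after ms milliseconds.
class StubRoot:
    def __init__(self):
        self.pending = []
    def after(self, ms, callback):
        self.pending.append(callback)

readings = []

def poll(root, remaining):
    readings.append('reading %d' % len(readings))  # placeholder for dht.read_retry
    if remaining > 1:
        root.after(30000, lambda: poll(root, remaining - 1))  # reschedule, no sleep

root = StubRoot()
poll(root, 3)
while root.pending:          # what mainloop would do for us
    root.pending.pop(0)()
```

The key point is that the GUI thread is never blocked by `time.sleep`, so the window stays responsive between readings.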
1
2016-08-15T19:44:31Z
[ "python", "tkinter", "raspberry-pi" ]
A value is trying to be set on a copy of a slice from a DataFrame Warning
38,961,014
<p>Not sure why I still get the warning after I updated to .loc method suggested by the message? Is it a false alarm?</p> <pre><code>eG.loc[:,'wt']=eG.groupby(['date','BB'])['m'].transform(weightFunction) </code></pre> <blockquote> <p>A value is trying to be set on a copy of a slice from a DataFrame</p> </blockquote> <pre><code>See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy if __name__ == '__main__': </code></pre>
-2
2016-08-15T18:43:25Z
38,961,755
<p>I guess your <code>eG</code> DF is a copy of another DF...</p> <p>Here is a small demo:</p> <pre><code>In [69]: df = pd.DataFrame(np.random.randint(0, 5, (10, 3)), columns=list('abc')) In [70]: cp = df[df.a &gt; 0] In [71]: cp.loc[:, 'c'] = cp.groupby('a').b.transform('sum') c:\envs\py35\lib\site-packages\pandas\core\indexing.py:549: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy self.obj[item_labels[indexer[info_axis]]] = value </code></pre> <p>Workaround:</p> <pre><code>In [72]: cp = df[df.a &gt; 0].copy() In [73]: cp.loc[:, 'c'] = cp.groupby('a').b.transform('sum') </code></pre> <p>Or if you don't need original DF you can save memory:</p> <pre><code>In [74]: df = df[df.a &gt; 0] In [75]: df.loc[:, 'c'] = df.groupby('a').b.transform('sum') </code></pre>
1
2016-08-15T19:31:28Z
[ "python", "pandas" ]
How can I schedule a script to run every 46 minutes, between 6:00 am and 23:40pm, daily with python?
38,961,111
<p>I need a script to run every 46 minutes, between 6:00 am and 23:40pm, daily. I'm currently using apscheduler, but I'm failing to set up the 24 runs per day on an interval schedule, and programming each one of the runs with a cron mode seems highly inefficient. Is there a simple way of telling Python to "run this code every 46 minutes, 24 times a day, starting at 6 am"?</p>
-1
2016-08-15T18:49:30Z
38,961,376
<p>Assuming you're not using asyncio, gevent, tornado, etc.</p> <pre><code>from apscheduler.schedulers.background import BackgroundScheduler sched = BackgroundScheduler() sched.start() sched.add_job(function, 'cron', minute='46', hour='6-23') </code></pre> <p>See the <a href="https://apscheduler.readthedocs.io/en/latest/modules/triggers/cron.html?highlight=cron#module-apscheduler.triggers.cron" rel="nofollow">docs</a> for more details.</p> <p>EDIT:</p> <p>Misread the question. I'm assuming you wanted every 46 minutes, not AT the 46th minute of every hour between 6 and 23. It would probably be better to have an apscheduler event that stops the interval job at 23:40 and resumes it again at 6:00.</p> <pre><code>from apscheduler.schedulers.background import BackgroundScheduler sched = BackgroundScheduler() def disable_interval(): sched.remove_job('INTERVAL_JOB') def enable_interval(): sched.add_job(run_function, 'interval', minutes=46, id='INTERVAL_JOB') if __name__ == '__main__': sched.start() sched.add_job(enable_interval, 'cron', minute='0', hour='6') sched.add_job(disable_interval, 'cron', minute='40', hour='23') </code></pre>
1
2016-08-15T19:06:16Z
[ "python", "python-3.x", "apscheduler" ]
Django Form, User input fields not rendering in template
38,961,126
<p>Django newb here, but I'm learning.</p> <p><strong>Problem Statement</strong></p> <p>Input fields not rendering in html template.</p> <p><strong>Output in Browser</strong></p> <p>True</p> <p>| Submit Button |</p> <p><strong>Relevant Code</strong></p> <pre><code>forms.py from django import forms from django.db import models class PostNotes(forms.Form): clientid = models.IntegerField() notetext = models.CharField(max_length=1000) </code></pre> <p>-</p> <pre><code>views.py def post_form_notes(request): if request.method == 'GET': sawit = True form = PostNotes(initial={'notetext':'This is a sample note'}) else: sawit = False pass return render(request, 'clientadmin/post_form_notes.html', { 'form': form, 'sawit': sawit, }) </code></pre> <p>-</p> <pre><code>post_form_notes.html {{ sawit }} &lt;form action="" method="post"&gt;{% csrf_token %} {{ form.as_p }} &lt;input type="submit" value="Submit" /&gt; &lt;/form&gt; </code></pre> <p><strong>Troubleshooting Already Done</strong></p> <ul> <li>I have backed out a fair amount of the code (specifically if I see a POST) request from the browser. No change. </li> <li>I have also included a variable to ensure I am seeing the GET request and also the template variables are working. I get output of True.</li> <li>Simplified the Form Class as much as possible.</li> </ul>
0
2016-08-15T18:50:39Z
38,964,281
<p>I modified the forms.py to use the model I already had for the DB.</p> <pre><code>forms.py from django.forms import ModelForm from clientadmin.models import notes class PostNotes(ModelForm): class Meta: model = notes fields = ['notedate', 'notetext'] </code></pre> <p>I also modified the views.py to not set an initial value, so the function uses the following instead of the version in my question.</p> <pre><code>views.py def post_form_notes(request): if request.method == 'GET': form = PostNotes() else: pass return render(request, 'clientadmin/post_form_notes.html', { 'form': form, }) </code></pre> <p>Hope this helps someone that was having the same problem I was...</p> <p>Reference the following URL for further information: <a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/</a></p>
0
2016-08-15T23:03:01Z
[ "python", "django", "forms" ]
Pandas Add Values of Column to Different Dataframe
38,961,170
<p>So I have a dataframe, df1, that has 3 columns, A, B, and C, as such:</p> <pre><code> A B C Arizona 0 2.800000 5.600000 California 0 18.300000 36.600000 Colorado 0 2.666667 5.333333 Connecticut 0 0.933333 1.866667 Delaware 0 0.100000 0.200000 Florida 0 0.833333 1.666667 Georgia 0 0.000000 0.000000 Hawaii 0 1.000000 2.000000 Illinois 0 3.366667 6.733333 Indiana 0 0.000000 0.000000 Iowa 0 0.000000 0.000000 </code></pre> <p>I then have another dataframe, df2, that has just one column, D:</p> <pre><code> D Arizona 13 California 18 Colorado 5 Connecticut 15 Delaware 7 Florida 5 Georgia 13 Hawaii 3 Illinois 21 Indiana 2 Iowa 4 </code></pre> <p>What I'd like to do is add the values of column D to all the columns in df1. By add I mean take the value of [Arizona, A] and add it to the value of [Arizona, D], not add column D as a new column. So far I tried using</p> <pre><code>df1 + df2 #returned all NaN df1 + df2['D'] #Also returned all NaN df1['A'] + df2['D'] #Returned a new dataframe with each as a separate column </code></pre> <p>I'm now not entirely sure where to go from here, so I'd love some advice on how to solve this. It doesn't seem like it should be difficult and I'm probably missing something obvious, but here I am.</p>
1
2016-08-15T18:53:30Z
38,961,227
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add.html" rel="nofollow">add()</a> method:</p> <pre><code>In [22]: df1.add(df2.D, axis='index') Out[22]: A B C Arizona 13.0 15.800000 18.600000 California 18.0 36.300000 54.600000 Colorado 5.0 7.666667 10.333333 Connecticut 15.0 15.933333 16.866667 Delaware 7.0 7.100000 7.200000 Florida 5.0 5.833333 6.666667 Georgia 13.0 13.000000 13.000000 Hawaii 3.0 4.000000 5.000000 Illinois 21.0 24.366667 27.733333 Indiana 2.0 2.000000 2.000000 Iowa 4.0 4.000000 4.000000 </code></pre>
2
2016-08-15T18:56:19Z
[ "python", "pandas" ]
Pandas Add Values of Column to Different Dataframe
38,961,170
<p>So I have a dataframe, df1, that has 3 columns, A, B, and C, as such:</p> <pre><code> A B C Arizona 0 2.800000 5.600000 California 0 18.300000 36.600000 Colorado 0 2.666667 5.333333 Connecticut 0 0.933333 1.866667 Delaware 0 0.100000 0.200000 Florida 0 0.833333 1.666667 Georgia 0 0.000000 0.000000 Hawaii 0 1.000000 2.000000 Illinois 0 3.366667 6.733333 Indiana 0 0.000000 0.000000 Iowa 0 0.000000 0.000000 </code></pre> <p>I then have another dataframe, df2, that has just one column, D:</p> <pre><code> D Arizona 13 California 18 Colorado 5 Connecticut 15 Delaware 7 Florida 5 Georgia 13 Hawaii 3 Illinois 21 Indiana 2 Iowa 4 </code></pre> <p>What I'd like to do is add the values of column D to all the columns in df1. By add I mean take the value of [Arizona, A] and add it to the value of [Arizona, D], not add column D as a new column. So far I tried using</p> <pre><code>df1 + df2 #returned all NaN df1 + df2['D'] #Also returned all NaN df1['A'] + df2['D'] #Returned a new dataframe with each as a separate column </code></pre> <p>I'm now not entirely sure where to go from here, so I'd love some advice on how to solve this. It doesn't seem like it should be difficult and I'm probably missing something obvious, but here I am.</p>
1
2016-08-15T18:53:30Z
38,961,352
<p>Are you trying to do something like this?</p> <pre><code>from pandas import DataFrame df1 = DataFrame({'A':{'a':1, 'b':2}, 'B':{'a':10, 'b':20}}) df2 = DataFrame({'C':{'a':2, 'b':2}}) df1['A+C'] = df1['A'] + df2['C'] df1['B+C'] = df1['B'] + df2['C'] print (df1) A B A+C B+C a 1 10 3 12 b 2 20 4 22 </code></pre>
0
2016-08-15T19:04:44Z
[ "python", "pandas" ]
Unable to login with python selenium error:NoSuchElementException error
38,961,190
<p>I have tried locating the submit button by id and by xpath, but neither worked, and I checked in the page source that the id is the same. I have no idea why this is happening even though I am giving the correct id or xpath.<br><br> <strong>URL :</strong> <a href="https://moodle.niituniversity.in/moodle/login/index.php" rel="nofollow">https://moodle.niituniversity.in/moodle/login/index.php</a><br></p> <pre><code>from pyvirtualdisplay import Display from selenium import webdriver from selenium.webdriver.common.keys import Keys display = Display(visible=0, size=(1024, 768)) display.start() driver = webdriver.Firefox() #driver.set_preference("browser.startup.homepage_override.mstone", "ignore") driver.get("https://moodle.niituniversity.in/moodle/login/index.php") username = driver.find_element_by_name("username") username.clear() username.send_keys("User123") username.send_keys(Keys.RETURN) password = driver.find_element_by_name("password") password.clear() password.send_keys("pass123") password.send_keys(Keys.RETURN) password = driver.find_element_by_xpath(".//*[@id='loginbtn']").click() driver.get("https://moodle.niituniversity.in/moodle") assert "user" in driver.page_source driver.close() display.stop() </code></pre> <blockquote> <p>NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":".//*[@id='loginbtn']"}</p> </blockquote>
0
2016-08-15T18:54:45Z
38,962,073
<p>This might be a timing issue; you should implement <code>WebDriverWait</code> to wait until the button is present on the page, as below:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC element = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, "loginbtn"))) element.click() </code></pre> <p>Full code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Firefox() driver.get("https://moodle.niituniversity.in/moodle/login/index.php") username = driver.find_element_by_name("username") username.clear() username.send_keys("User123") password = driver.find_element_by_name("password") password.clear() password.send_keys("pass123") button = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, "loginbtn"))) button.click() </code></pre>
0
2016-08-15T19:53:11Z
[ "python", "selenium", "xpath", "python-3.4" ]
Reference many columns of pandas DataFrame at once
38,961,217
<p>Say I have an <em>n</em> ⨉ <em>p</em> matrix of <em>n</em> samples of a single feature of <em>p</em> dimensions (for example a word2vec element, so that <em>p</em> is of the order of ~300). I can create each column programmatically, e.g. with <code>features = ['f'+str(i) for i in range(p)]</code>, and then append to an existing dataframe.</p> <p>Since they represent a single feature, how can I reference all those columns later on? I can assign <code>df.feature = df[features]</code> which works, but it breaks when I slice the dataset: <code>df[:x].feature</code> results in an exception.</p> <p>Example:</p> <pre><code>df = pre_existing_dataframe() # such that len(df) is n n,p = 3,4 m = np.arange(n*p).reshape((n,p)) fs = ['f'+str(i) for i in range(p)] df_m = pd.DataFrame(m) df_m.columns = fs df = pd.concat([df,df_m],axis=1) # m is now only a part of df df.f = df[fs] df.f # works: I can access the whole m at once df[:1].f # crashes </code></pre>
0
2016-08-15T18:55:50Z
38,965,123
<p>I wouldn't use <code>df.f = df[fs]</code>. It may lead to undesired and surprising behaviour if you try to modify the data frame. Instead, I'd consider creating hierarchical columns as in the below example.</p> <p>Say, we already have a preexisting data frame <code>df0</code> and another one with features:</p> <pre><code>df0 = pd.DataFrame(np.arange(4).reshape(2,2), columns=['A', 'B']) df1 = pd.DataFrame(np.arange(10, 16).reshape(2,3), columns=['f0', 'f1', 'f2']) </code></pre> <p>Then, using the <code>keys</code> argument to <code>concat</code>, we create another level in columns:</p> <pre><code>df = pd.concat([df0, df1], keys=['pre', 'feat1'], axis=1) df Out[103]: pre feat1 A B f0 f1 f2 0 0 1 10 11 12 1 2 3 13 14 15 </code></pre> <p>The subframe with features can be accessed as follows:</p> <pre><code>df['feat1'] Out[104]: f0 f1 f2 0 10 11 12 1 13 14 15 df[('feat1', 'f0')] Out[105]: 0 10 1 13 Name: (feat1, f0), dtype: int64 </code></pre> <p>Slicing on rows is straightforward. Slicing on columns may be more complicated:</p> <pre><code>df.loc[:, pd.IndexSlice['feat1', :]] Out[106]: feat1 f0 f1 f2 0 10 11 12 1 13 14 15 df.loc[:, pd.IndexSlice['feat1', 'f0':'f1']] Out[107]: feat1 f0 f1 0 10 11 1 13 14 </code></pre> <p>To modify values in the data frame, use <code>.loc</code>, for example <code>df.loc[1:, ('feat1', 'f1')] = -1</code>. 
(<a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">More on hierarchical indexing, slicing etc.</a>)</p> <p>It's also possible to append another frame to <code>df</code>.</p> <pre><code># another set of features df2 = pd.DataFrame(np.arange(100, 108).reshape(2,4), columns=['f0', 'f1', 'f2', 'f3']) # create a MultiIndex: idx = pd.MultiIndex.from_product([['feat2'], df2.columns]) # append df[idx] = df2 df Out[117]: pre feat1 feat2 A B f0 f1 f2 f0 f1 f2 f3 0 0 1 10 11 12 100 101 102 103 1 2 3 13 -1 15 104 105 106 107 </code></pre> <p>To keep a nice layout, it's important that <code>idx</code> have the same numbers of levels as <code>df.columns</code>.</p>
1
2016-08-16T00:57:00Z
[ "python", "pandas" ]
python programming socket stream to functions
38,961,224
<p>I have some doubts about the design of a simple Python program. I have opened a socket to a server that streams data via a simple telnet server. There are 3 types of strings, beginning with RED, BLUE, or YELLOW, followed by the data, for example:</p> <p>RED 21763;22;321321</p> <p>BLUE 1;32132;3432</p> <p>BLUE 1222;332;3</p> <p>YELLOW 1;32132;3432</p> <p>I would like to split the data into three objects, like queues, and then fork three processes to <strong>process</strong> this data in parallel <strong>as</strong> it arrives on the socket, as a sort of very basic real-time computation. So, to achieve my goal, should I use threads/forked processes and objects like queues for interprocess communication? Or is there a different kind of approach I could use? I don't know anything about multithreading programming :) Thanks for helping.</p>
0
2016-08-15T18:56:13Z
38,961,642
<p><a href="http://stackoverflow.com/questions/2483041/what-is-the-difference-between-fork-and-thread">This</a> should give you a brief idea of threads vs fork.</p> <p>Creating threads requires a lot less overhead. I would go with the thread architecture. Each of the three thread functions will be supplied with the respective queue on which it needs to do the realtime computation. Use of synchronization and mutual exclusion mechanisms will prevent unexpected behavior. You can also use valgrind with drd to debug your multithreaded program.</p>
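As a concrete sketch of that thread-plus-queue architecture (the color prefixes and line format come from the question; the per-line "computation" here is just a placeholder), a dispatcher can route each incoming line to the queue for its prefix, with one worker thread per color:

```python
import threading
import queue

COLORS = ("RED", "BLUE", "YELLOW")

def dispatch(lines, queues):
    # Route each incoming line to the queue for its color prefix.
    for line in lines:
        prefix, _, payload = line.partition(" ")
        if prefix in queues:
            queues[prefix].put(payload)
    for q in queues.values():
        q.put(None)  # sentinel: no more data for this worker

def worker(q, out):
    # Consume one color's queue until the sentinel arrives.
    while True:
        payload = q.get()
        if payload is None:
            break
        out.append(payload.split(";"))  # placeholder "computation"

queues = {c: queue.Queue() for c in COLORS}
results = {c: [] for c in COLORS}
workers = [threading.Thread(target=worker, args=(queues[c], results[c]))
           for c in COLORS]
for w in workers:
    w.start()

sample = ["RED 21763;22;321321", "BLUE 1;32132;3432", "YELLOW 1;32132;3432"]
dispatch(sample, queues)
for w in workers:
    w.join()
print(results["RED"])  # [['21763', '22', '321321']]
```

In the real program the dispatcher loop would read lines from the socket (e.g. via `sock.makefile()`) instead of a list, and the worker bodies would hold the actual computation.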
0
2016-08-15T19:23:57Z
[ "python", "multithreading", "sockets", "queue", "ipc" ]
Fill NaN based on MultiIndex Pandas
38,961,234
<p>I have a pandas Data Frame that I would like to fill in some NaN values of.</p> <pre><code>import pandas as pd

tuples = [('a', 1990),('a', 1994),('a',1996),('b',1992),('b',1997),('c',2001)]
index = pd.MultiIndex.from_tuples(tuples, names = ['Type', 'Year'])
vals = ['NaN','NaN','SomeName','NaN','SomeOtherName','SomeThirdName']
df = pd.DataFrame(vals, index=index)
print(df)

                       0
Type Year
a    1990            NaN
     1994            NaN
     1996       SomeName
b    1992            NaN
     1997  SomeOtherName
c    2001  SomeThirdName
</code></pre> <p>The output that I would like is:</p> <pre><code>Type Year
a    1990       SomeName
     1994       SomeName
     1996       SomeName
b    1992  SomeOtherName
     1997  SomeOtherName
c    2001  SomeThirdName
</code></pre> <p>This needs to be done on a much larger DataFrame (millions of rows) where each 'Type' can have between 1-5 unique 'Years' and the name value is only present for the most recent year. I'm trying to avoid iterating over rows for performance purposes.</p>
0
2016-08-15T18:56:36Z
38,961,374
<p>You can sort your data frame by index in descending order and then <code>ffill</code> it:</p> <pre><code>import pandas as pd

df.sort_index(level = [0,1], ascending = False).ffill()
#                        0
# Type Year
# c    2001  SomeThirdName
# b    1997  SomeOtherName
#      1992  SomeOtherName
# a    1996       SomeName
#      1994       SomeName
#      1990       SomeName
</code></pre> <p>Note: The example data doesn't really contain <code>np.nan</code> values but the string <code>NaN</code>, so in order for <code>ffill</code> to work you need to replace the <code>NaN</code> string with <code>np.nan</code>:</p> <pre><code>import numpy as np

df[0] = np.where(df[0] == "NaN", np.nan, df[0])
</code></pre> <p>Or as @ayhan suggested, after replacing the string "NaN" with <code>np.nan</code>, use <code>df.bfill()</code>.</p>
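A runnable end-to-end version of the back-fill variant (this assumes, as above, that the placeholders are the literal string 'NaN'; since the name always sits on the last year of each Type and the index is already sorted ascending, a plain bfill works):

```python
import numpy as np
import pandas as pd

tuples = [('a', 1990), ('a', 1994), ('a', 1996),
          ('b', 1992), ('b', 1997), ('c', 2001)]
index = pd.MultiIndex.from_tuples(tuples, names=['Type', 'Year'])
vals = ['NaN', 'NaN', 'SomeName', 'NaN', 'SomeOtherName', 'SomeThirdName']
df = pd.DataFrame(vals, index=index)

# Turn the string placeholders into real missing values, then back-fill.
df[0] = df[0].replace('NaN', np.nan)
df[0] = df[0].bfill()
print(df)
```

One caveat: if some Type could end with a missing name, a plain bfill would leak a value in from the next group; grouping first (e.g. `df.groupby(level='Type').bfill()`) avoids that.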
1
2016-08-15T19:06:09Z
[ "python", "pandas" ]
java.lang.OutOfMemoryError: Unable to acquire 100 bytes of memory, got 0
38,961,251
<p>I'm invoking Pyspark with Spark 2.0 in local mode with the following command:</p> <pre><code>pyspark --executor-memory 4g --driver-memory 4g </code></pre> <p>The input dataframe is being read from a tsv file and has 580 K x 28 columns. I'm doing a few operation on the dataframe and then i am trying to export it to a tsv file and i am getting this error.</p> <pre><code>df.coalesce(1).write.save("sample.tsv",format = "csv",header = 'true', delimiter = '\t') </code></pre> <p>Any pointers how to get rid of this error. I can easily display the df or count the rows.</p> <p>The output dataframe is 3100 rows with 23 columns</p> <p>Error:</p> <pre><code>Job aborted due to stage failure: Task 0 in stage 70.0 failed 1 times, most recent failure: Lost task 0.0 in stage 70.0 (TID 1073, localhost): org.apache.spark.SparkException: Task failed while writing rows at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) at org.apache.spark.scheduler.Task.run(Task.scala:85) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.OutOfMemoryError: Unable to acquire 100 bytes of memory, got 0 at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:129) at 
org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:374) at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:396) at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:94) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.sort_addToSorter$(Unknown Source) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370) at org.apache.spark.sql.execution.WindowExec$$anonfun$15$$anon$1.fetchNextRow(WindowExec.scala:300) at org.apache.spark.sql.execution.WindowExec$$anonfun$15$$anon$1.&lt;init&gt;(WindowExec.scala:309) at org.apache.spark.sql.execution.WindowExec$$anonfun$15.apply(WindowExec.scala:289) at org.apache.spark.sql.execution.WindowExec$$anonfun$15.apply(WindowExec.scala:288) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:96) at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:95) at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1325) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258) ... 8 more Driver stacktrace: </code></pre>
1
2016-08-15T18:57:36Z
38,961,806
<p>I believe that the cause of this problem is <a href="https://spark.apache.org/docs/1.2.0/api/python/pyspark.html?highlight=coalesce#pyspark.RDD.coalesce" rel="nofollow">coalesce()</a>, which, despite the fact that it avoids a full shuffle (as <a href="http://stackoverflow.com/questions/31610971/spark-repartition-vs-coalesce">repartition</a> would do), still has to shrink the data into the requested number of partitions.</p> <p>Here, you are requesting all the data to fit into one partition, thus one task (and only one task) has to work with <em>all the data</em>, which may cause its container to suffer from memory limitations.</p> <p>So, either ask for more partitions than 1, or avoid <code>coalesce()</code> in this case.</p> <p>You can now read more in <a class='doc-link' href="http://stackoverflow.com/documentation/apache-spark/5822/partitions#t=201609112009599517571">Partitions</a> and especially <a class='doc-link' href="http://stackoverflow.com/documentation/apache-spark/5822/partitions/21155/repartition-an-rdd#t=201609112000500779093">Repartition an RDD</a>.</p> <hr> <p>Otherwise, you could try the solutions provided in the links below, for increasing your memory configurations:</p> <ol> <li><a href="http://stackoverflow.com/questions/21138751/spark-java-lang-outofmemoryerror-java-heap-space">Spark java.lang.OutOfMemoryError: Java heap space</a></li> <li><a href="http://stackoverflow.com/questions/22637518/spark-runs-out-of-memory-when-grouping-by-key">Spark runs out of memory when grouping by key</a></li> </ol>
1
2016-08-15T19:34:50Z
[ "python", "hadoop", "memory", "apache-spark", "pyspark" ]
java.lang.OutOfMemoryError: Unable to acquire 100 bytes of memory, got 0
38,961,251
<p>I'm invoking Pyspark with Spark 2.0 in local mode with the following command:</p> <pre><code>pyspark --executor-memory 4g --driver-memory 4g </code></pre> <p>The input dataframe is being read from a tsv file and has 580 K x 28 columns. I'm doing a few operation on the dataframe and then i am trying to export it to a tsv file and i am getting this error.</p> <pre><code>df.coalesce(1).write.save("sample.tsv",format = "csv",header = 'true', delimiter = '\t') </code></pre> <p>Any pointers how to get rid of this error. I can easily display the df or count the rows.</p> <p>The output dataframe is 3100 rows with 23 columns</p> <p>Error:</p> <pre><code>Job aborted due to stage failure: Task 0 in stage 70.0 failed 1 times, most recent failure: Lost task 0.0 in stage 70.0 (TID 1073, localhost): org.apache.spark.SparkException: Task failed while writing rows at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) at org.apache.spark.scheduler.Task.run(Task.scala:85) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.OutOfMemoryError: Unable to acquire 100 bytes of memory, got 0 at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:129) at 
org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:374) at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:396) at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:94) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.sort_addToSorter$(Unknown Source) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370) at org.apache.spark.sql.execution.WindowExec$$anonfun$15$$anon$1.fetchNextRow(WindowExec.scala:300) at org.apache.spark.sql.execution.WindowExec$$anonfun$15$$anon$1.&lt;init&gt;(WindowExec.scala:309) at org.apache.spark.sql.execution.WindowExec$$anonfun$15.apply(WindowExec.scala:289) at org.apache.spark.sql.execution.WindowExec$$anonfun$15.apply(WindowExec.scala:288) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:96) at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:95) at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1325) at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258) ... 8 more Driver stacktrace: </code></pre>
1
2016-08-15T18:57:36Z
39,106,641
<p>The problem for me was indeed <code>coalesce()</code>. What I did was to export the file without <code>coalesce()</code>, writing parquet instead via <code>df.write.parquet("testP")</code>, then read the file back and export that with <code>coalesce(1)</code>.</p> <p>Hopefully it works for you as well.</p>
1
2016-08-23T16:43:24Z
[ "python", "hadoop", "memory", "apache-spark", "pyspark" ]
Heat maps of large data with pyplot slow
38,961,252
<p>I am working with a big set of data (240, 131000) and bigger. I am currently using the code below to plot this.</p> <pre><code>fig,ax = pyplot.subplots()
spectrum = ax.pcolor(waterfallplot, cmap='viridis')
pyplot.colorbar()
pyplot.show()
</code></pre> <p>However, it's taking a very long time (30 min+) and the plot still hasn't shown up yet. A quick breakpoint check says the code gets to the <code>spectrum=</code> line, but doesn't go past. Looking at the memory on my computer, it hasn't even gotten close to the limit. </p> <p>Does anyone have a better way of doing this?</p>
1
2016-08-15T18:57:41Z
38,975,543
<p><code>pcolorfast</code> works best for large arrays and updates quickly.</p>
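A minimal sketch of the swap (note that `pcolorfast` is a method on the Axes, not a pyplot-level function; the array below is a small stand-in for the real data):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the example runs anywhere
import matplotlib.pyplot as plt

waterfallplot = np.random.rand(240, 1310)  # stand-in for the real array

fig, ax = plt.subplots()
# For uniformly spaced data, pcolorfast draws a single image instead of
# one polygon per cell, which is what makes pcolor slow on large arrays.
spectrum = ax.pcolorfast(waterfallplot, cmap='viridis')
fig.colorbar(spectrum)
fig.savefig('waterfall.png')
```

Unlike `pcolor`, the return value may be an image object rather than a `QuadMesh`, but it still accepts `cmap`/`vmin`/`vmax` and works with `colorbar`.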
0
2016-08-16T12:54:24Z
[ "python", "performance", "matplotlib", "heatmap" ]
Python variables not defined after if __name__ == '__main__'
38,961,284
<p>I'm trying to divvy up the task of looking up historical stock price data for a list of symbols by using <code>Pool</code> from the <code>multiprocessing</code> library. </p> <p>This works great until I try to use the data I get back. I have my <code>hist_price</code> function defined and it outputs to a list-of-dicts <code>pcl</code>. I can <code>print(pcl)</code> and it has been flawless, but if I try to <code>print(pcl)</code> after the <code>if __name__=='__main__':</code> block, it blows up saying <code>pcl</code> is undefined. I've tried declaring <code>global pcl</code> in a couple places but it doesn't make a difference.</p> <pre><code>from multiprocessing import Pool

syms = ['List', 'of', 'symbols']

def hist_price(sym):
    #... lots of code looking up data, calculations, building dicts...
    stlh = {"Sym": sym, "10D Max": pcmax, "10D Min": pcmin} #simplified
    return stlh

#global pcl
if __name__ == '__main__':
    pool = Pool(4)
    #global pcl
    pcl = pool.map(hist_price, syms)
    print(pcl) #this works
    pool.close()
    pool.join()

print(pcl) #says pcl is undefined
#...rest of my code, dependent on pcl...
</code></pre> <p>I've also tried removing the <code>if __name__=='__main__':</code> block but it gives me a RunTimeError telling me specifically to put it back. Is there some other way to call variables to use outside of the <code>if</code> block? </p>
1
2016-08-15T18:59:46Z
38,961,389
<p>I think there are two parts to your issue. The first is "what's wrong with <code>pcl</code> in the current code?", and the second is "why do I need the <code>if __name__ == "__main__"</code> guard block at all?".</p> <p>Let's address them in order. The problem with the <code>pcl</code> variable is that it is only defined in the <code>if</code> block, so if the module gets loaded without being run as a script (which is what sets <code>__name__ == "__main__"</code>), it will not be defined when the later code runs.</p> <p>To fix this, you can change how your code is structured. The simplest fix would be to guard the other bits of the code that use <code>pcl</code> within an <code>if __name__ == "__main__"</code> block too (e.g. indent them all under the current block). An alternative fix would be to put the code that uses <code>pcl</code> into functions (which can be declared outside the guard block), then call the functions from within an <code>if __name__ == "__main__"</code> block. That would look something like this:</p> <pre><code>def do_stuff_with_pcl(pcl):
    print(pcl)

if __name__ == "__main__":
    # multiprocessing code, etc
    pcl = ...
    do_stuff_with_pcl(pcl)
</code></pre> <p>As for why the issue came up in the first place, the ultimate cause is using the <code>multiprocessing</code> module on Windows. You can read about the issue in <a href="https://docs.python.org/2/library/multiprocessing.html#windows" rel="nofollow">the documentation</a>.</p> <p>When multiprocessing creates a new process for its <code>Pool</code>, it needs to initialize that process with a copy of the current module's state. Because Windows doesn't have <code>fork</code> (which copies the parent process's memory into a child process automatically), Python needs to set everything up from scratch. In each child process, it loads the module from its file, and if the module's top-level code tries to create a new <code>Pool</code>, you'd have a recursive situation where each of the child processes would start spawning a whole new set of child processes of its own.</p> <p>The <code>multiprocessing</code> code has some guards against that, I think (so you won't <a href="https://en.wikipedia.org/wiki/Fork_bomb" rel="nofollow">fork bomb</a> yourself out of simple carelessness), but you still need to do some of the work yourself too, by using <code>if __name__ == "__main__"</code> to guard any code that shouldn't be run in the child processes.</p>
2
2016-08-15T19:06:42Z
[ "python", "variables", "multiprocessing", "global", "main" ]
Python Requests module - proxy not working
38,961,293
<p>I'm trying to connect to a website with Python requests, but not with my real IP. So, I found a proxy on the internet and wrote this code:</p> <pre><code>import requests

proksi = {
    'http': 'http://5.45.64.97:3128'
}

x = requests.get('http://www.whatsmybrowser.org/', proxies = proksi)
print(x.text)
</code></pre> <p>When I look at the output, the proxy simply doesn't work. The site returns my real IP address. What did I do wrong? Thanks.</p>
0
2016-08-15T19:00:07Z
38,961,568
<p>The answer is quite simple. Although it is a proxy service, it doesn't guarantee 100% anonymity. When you send the HTTP GET request via the proxy server, the request sent by your program to the proxy server is:</p> <pre><code>GET http://www.whatsmybrowser.org/ HTTP/1.1
Host: www.whatsmybrowser.org
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.10.0
</code></pre> <p>Now, when the proxy server sends this request to the actual destination, it sends:</p> <pre><code>GET http://www.whatsmybrowser.org/ HTTP/1.1
Host: www.whatsmybrowser.org
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.10.0
Via: 1.1 naxserver (squid/3.1.8)
X-Forwarded-For: 122.126.64.43
Cache-Control: max-age=18000
Connection: keep-alive
</code></pre> <p>As you can see, it puts your IP (in my case, <code>122.126.64.43</code>) in the <code>X-Forwarded-For</code> HTTP header, and hence the website knows that the request was sent on behalf of <code>122.126.64.43</code>.</p> <p>Read more about this header at: <a href="https://tools.ietf.org/html/rfc7239" rel="nofollow">https://tools.ietf.org/html/rfc7239</a></p> <p>If you want to host your own squid proxy server and want to disable setting the <code>X-Forwarded-For</code> header, read: <a href="http://www.squid-cache.org/Doc/config/forwarded_for/" rel="nofollow">http://www.squid-cache.org/Doc/config/forwarded_for/</a></p>
0
2016-08-15T19:18:09Z
[ "python", "proxy", "python-requests" ]
How do I create a string containing the filepath of my python program?
38,961,327
<p>We are creating a python program that executes specific macros within Polyworks based on user input into the program. Right now the code is:</p> <pre><code>roto.command.CommandExecute('MACRO EXEC("C:\\RotoWorks\\Macros\\CentrifugalCompressor")') </code></pre> <p>However, this assumes that our program is always installed in C:\RotoWorks. Ideally, our app is portable. I'm sure there's a way to retrieve the filepath that RotoWorks is stored in, then just concatenate the rest of the filepath to the end. How do I do this?</p>
0
2016-08-15T19:02:44Z
38,961,378
<p>You can retrieve the path from the <code>__file__</code> attribute of the file. Use <code>os.path.abspath</code> on that attribute to retrieve the absolute path of the file and then <code>os.path.dirname</code> to retrieve the containing directory:</p> <pre><code>import os

file_directory = os.path.dirname(os.path.abspath(__file__))
path = os.path.join(file_directory, other_path)  # join directory to an inner path
roto.command.CommandExecute('MACRO EXEC({})'.format(path))
</code></pre> <p>Use <code>os.path.dirname</code> recursively to move out as many directories as you want.</p>
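As a hedged, runnable illustration of that pattern (the install directory and macro name below are made-up stand-ins; in the real program the base directory would come from `os.path.dirname(os.path.abspath(__file__))`):

```python
import os

def build_macro_command(install_dir, macro_name):
    # install_dir would normally be os.path.dirname(os.path.abspath(__file__)),
    # i.e. wherever the program actually ended up being installed.
    macro_path = os.path.join(install_dir, "Macros", macro_name)
    return 'MACRO EXEC("{}")'.format(macro_path)

cmd = build_macro_command("/opt/RotoWorks", "CentrifugalCompressor")
print(cmd)
```

Because `os.path.join` uses the platform's separator, the same code produces a backslash path on Windows and a forward-slash path elsewhere.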
2
2016-08-15T19:06:19Z
[ "python", "python-2.7" ]
Use openpyxl to verify the structure of a spreadsheet
38,961,360
<p>I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype.</p> <p>I know that this could be done by iterating over every cell in the spreadsheet, but I was hoping there is a simpler way to do it.</p>
1
2016-08-15T19:05:20Z
39,077,066
<p>In openpyxl you'll have to go cell by cell.</p> <p>You could use Excel's builtin Data Validation or Conditional Formatting, which openpyxl supports, for this. Let Excel do the work and talk to it using xlwings.</p>
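The cell-by-cell check itself is only a few lines. Here is a hedged sketch of just the validation logic, written against plain Python rows so it is runnable as-is; with openpyxl you would feed it the cell values from `ws.iter_rows()` instead (the column-type mapping and sample rows are made up):

```python
def validate_columns(rows, column_types):
    """Yield (row_index, col_index, value) for every cell whose value
    does not match the expected type for its column."""
    for r, row in enumerate(rows):
        for c, expected in enumerate(column_types):
            if not isinstance(row[c], expected):
                yield r, c, row[c]

# Hypothetical sheet: ints, then floats, then strings.
rows = [
    (1, 2.5, "ok"),
    (2, "oops", "ok"),   # bad float in column 1
    ("x", 3.0, "ok"),    # bad int in column 0
]
errors = list(validate_columns(rows, [int, float, str]))
print(errors)  # [(1, 1, 'oops'), (2, 0, 'x')]
```

One wrinkle worth noting: `isinstance(True, int)` is `True` in Python, so booleans pass an `int` check unless you exclude them explicitly.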
2
2016-08-22T10:23:59Z
[ "python", "excel", "openpyxl" ]
Use openpyxl to verify the structure of a spreadsheet
38,961,360
<p>I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype.</p> <p>I know that this could be done by iterating over every cell in the spreadsheet, but I was hoping there is a simpler way to do it.</p>
1
2016-08-15T19:05:20Z
39,088,757
<p>I ended up just manually looking at each cell. I have to read them all into my data structures before I can process anything anyways so it actually made sense to check then.</p>
1
2016-08-22T21:07:09Z
[ "python", "excel", "openpyxl" ]
SQLAlchemy set default value for postgres JSON column
38,961,396
<p>I want to set the default value of my SQLAlchemy postgres JSON column to an empty dictionary. </p> <pre><code>from sqlalchemy.dialects.postgresql import JSON

info = Column(JSON, default='{}')
info = Column(JSON, default={})
</code></pre> <p>Neither of these works.</p>
0
2016-08-15T19:07:12Z
38,999,937
<p>Using <code>default=lambda: {}</code> works. Credit goes to univerio in the comments.</p>
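A self-contained illustration of why the callable form works; to keep it runnable without a Postgres server, this sketch uses the generic `sqlalchemy.JSON` type and an in-memory SQLite database, but the `Column` definition is the same with the postgresql dialect's `JSON` (the model and table names are made up):

```python
from sqlalchemy import Column, Integer, JSON, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    # A callable is evaluated once per INSERT, so every row gets
    # its own fresh, independent {}.
    info = Column(JSON, default=dict)  # or default=lambda: {}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Item())       # no info supplied
    session.commit()
    stored = session.query(Item).one().info

print(stored)  # {}
```

`default=dict` is equivalent to `default=lambda: {}` and slightly shorter.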
0
2016-08-17T14:51:30Z
[ "python", "json", "postgresql", "sqlalchemy" ]
Python Script in Azure to Detect Vulnerable Ports
38,961,402
<p>I'm essentially trying to translate a Powershell script into a Python script. The script is used to scan all virtual machines within an Azure subscription. It checks to see if their ports are open to the world, and if they are, changes them to be open to only my machines. </p> <p>The Powershell script is as follows. </p> <pre><code>Add-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName #####
$nsgs=Get-AzureRmNetworkSecurityGroup
foreach ($nsg in $nsgs)
{
    $rules=$nsg.SecurityRules
    foreach($rule in $rules)
    {
        if ($rule.SourceAddressPrefix -eq "*")
        {
            $nsg | Set-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Priority $rule.Priority -Protocol $rule.protocol -Access $rule.access -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange -Direction $rule.Direction -SourceAddressPrefix "##.##.##.###/##" | Set-AzureRmNetworkSecurityGroup
        }
    }
}
</code></pre>
0
2016-08-15T19:07:21Z
38,963,581
<p>The Python package that will allow you to do this is azure 2.0.0rc5: <a href="https://pypi.python.org/pypi/azure/2.0.0rc5" rel="nofollow">https://pypi.python.org/pypi/azure/2.0.0rc5</a></p> <p>Please note that this is a release candidate, but it should be stable soon.</p> <p>Even though it is not directly network-oriented, this sample might help you: <a href="https://github.com/Azure-Samples/virtual-machines-python-manage" rel="nofollow">https://github.com/Azure-Samples/virtual-machines-python-manage</a></p> <p>Also, the whole documentation is on readthedocs: <a href="http://azure-sdk-for-python.readthedocs.io/en/latest/" rel="nofollow">http://azure-sdk-for-python.readthedocs.io/en/latest/</a></p>
0
2016-08-15T21:48:55Z
[ "python", "powershell", "azure" ]
LogisticRegressionwithLBFGS throwing error about not supporting Mulitinomial Classification
38,961,429
<p>I am trying to implement Logistic Regression using pySpark. Here is my code:</p> <pre><code>from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from time import time
from pyspark.mllib.regression import LabeledPoint
from numpy import array

RES_DIR="/home/shaahmed115/Pet_Projects/DA/TwitterStream_US_Elections/Features/"
sc= SparkContext('local','pyspark')

data_file = RES_DIR + "training.txt"
raw_data = sc.textFile(data_file)
print "Train data size is {}".format(raw_data.count())

test_data_file = RES_DIR + "testing.txt"
test_raw_data = sc.textFile(test_data_file)
print "Test data size is {}".format(test_raw_data.count())

def parse_interaction(line):
    line_split = line.split(",")
    return LabeledPoint(float(line_split[0]), array([float(x) for x in line_split]))

training_data = raw_data.map(parse_interaction)

logit_model = LogisticRegressionWithLBFGS.train(training_data,iterations=10, numClasses=3)
</code></pre> <p>This is throwing an error: Currently, LogisticRegression with ElasticNet in ML package only supports binary classification. Found 3 in the input dataset.</p> <p>Below is a sample of my dataset:</p> <pre><code>2 , 1.0 , 1.0 , 1.0
0 , 1.0 , 1.0 , 1.0
1 , 0.0 , 0.0 , 0.0
</code></pre> <p>The first element is the class while the rest is the vector. As you can see there are three classes. Is there a workaround that can make multinomial classification work with this?</p>
1
2016-08-15T19:09:16Z
38,962,358
<p>The error you see </p> <blockquote> <p>LogisticRegression with ElasticNet in ML package only supports binary classification.</p> </blockquote> <p>is clear. You can use the <code>mllib</code> version to support multinomial:<br> <code>org.apache.spark.mllib.classification.LogisticRegression</code></p> <pre><code>/**
 * Train a classification model for Multinomial/Binary Logistic Regression using
 * Limited-memory BFGS. Standard feature scaling and L2 regularization are used by default.
 * NOTE: Labels used in Logistic Regression should be {0, 1, ..., k - 1}
 * for k classes multi-label classification problem.
 *
 * Earlier implementations of LogisticRegressionWithLBFGS applies a regularization
 * penalty to all elements including the intercept. If this is called with one of
 * standard updaters (L1Updater, or SquaredL2Updater) this is translated
 * into a call to ml.LogisticRegression, otherwise this will use the existing mllib
 * GeneralizedLinearAlgorithm trainer, resulting in a regularization penalty to the
 * intercept.
 */
</code></pre>
1
2016-08-15T20:12:59Z
[ "python", "machine-learning", "pyspark", "logistic-regression" ]
how to write tests for a function using py3.5 os.scandir()?
38,961,433
<p>How do I write tests for a function that uses <code>os.scandir()</code>, newly added to the built-ins of Python 3.5? Is there a helper to mock <code>DirEntry</code> objects?</p> <p>Any suggestions on how to mock <code>os.scandir()</code> for an empty folder and for a folder with a few files (2, for example)?</p>
0
2016-08-15T19:09:32Z
39,022,660
<p>As suggested, my solution involves the <code>@mock.patch()</code> decorator. My test became something like:</p> <pre><code>from django.test import TestCase, mock
from my_app.scan_files import my_method_that_return_count_of_files

mock_one_file = mock.MagicMock(return_value = ['one_file', ])

class MyMethodThatReturnCountOfFilesfilesTestCase(TestCase):

    @mock.patch('os.scandir', mock_one_file)
    def test_get_one(self):
        """ Test it receives 1 (one) file """
        files_count = my_method_that_return_count_of_files('/anything/')
        self.assertEqual(files_count, 1)
</code></pre>
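The same approach works without Django, using only the standard library's `unittest.mock`; the function under test below is a hypothetical stand-in for `my_method_that_return_count_of_files`:

```python
import os
from unittest import mock

def count_files(path):
    # Function under test: counts the entries os.scandir() yields.
    return sum(1 for _ in os.scandir(path))

# Empty folder: the patched scandir yields nothing.
with mock.patch('os.scandir', return_value=iter([])):
    empty_count = count_files('/anything/')

# Folder with two entries: DirEntry objects faked with MagicMock.
entries = [mock.MagicMock(), mock.MagicMock()]
with mock.patch('os.scandir', return_value=iter(entries)):
    two_count = count_files('/anything/')

print(empty_count, two_count)  # 0 2
```

If the code under test calls `DirEntry` methods or attributes, configure them on the mocks, e.g. `entries[0].is_file.return_value = True` or `entries[0].name = 'one_file'`.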
0
2016-08-18T15:54:18Z
[ "python", "python-3.x", "python-3.5", "python-mock", "python-os" ]
Unicode in the standard TensorFlow format
38,961,547
<p>Following the documentation <a href="https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#standard-tensorflow-format" rel="nofollow">here</a>, I am trying to create features from unicode strings. Here is what the feature creation method looks like:</p> <pre><code>def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
</code></pre> <p>This will raise an exception, </p> <pre><code>  File "/home/rklopfer/.virtualenvs/tf/local/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 512, in init
    copy.extend(field_value)
  File "/home/rklopfer/.virtualenvs/tf/local/lib/python2.7/site-packages/google/protobuf/internal/containers.py", line 275, in extend
    new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
  File "/home/rklopfer/.virtualenvs/tf/local/lib/python2.7/site-packages/google/protobuf/internal/type_checkers.py", line 108, in CheckValue
    raise TypeError(message)
TypeError: u'Gross' has type &lt;type 'unicode'&gt;, but expected one of: (&lt;type 'str'&gt;,)
</code></pre> <p>Naturally if I wrap the <code>value</code> in a <code>str</code>, it fails on the first <em>actual</em> unicode character it encounters. </p>
0
2016-08-15T19:16:54Z
38,961,670
<p>The BytesList <a href="https://github.com/tensorflow/tensorflow/blob/89e1cc59681b78e8193f899dca16474c19a7fc5b/tensorflow/core/example/feature.proto#L65" rel="nofollow">definition</a> is in feature.proto and it is of type <code>repeated bytes</code>; this means that you need to pass it something that's convertible to a list of byte sequences.</p> <p>There's more than one way to turn <code>unicode</code> into a list of bytes, hence the ambiguity. You can do the encoding manually instead, e.g. to use <code>UTF-8</code> encoding:</p> <pre><code>value.encode("utf-8")
</code></pre>
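A TensorFlow-free sketch of that encoding step (the helper name is made up; in the question's code you would then build the feature as `BytesList(value=[to_bytes(value)])`):

```python
def to_bytes(value, encoding='utf-8'):
    # protobuf's BytesList only accepts byte strings, so unicode
    # text has to be encoded explicitly first.
    if isinstance(value, bytes):
        return value
    return value.encode(encoding)

print(to_bytes(u'Gross'))          # b'Gross'
print(to_bytes(u'Gro\xdf'))        # b'Gro\xc3\x9f' under UTF-8
print(to_bytes(b'already-bytes'))  # passed through unchanged
```

The bytes pass-through makes the helper safe to call whether or not the value has already been encoded.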
0
2016-08-15T19:26:14Z
[ "python", "unicode", "tensorflow", "protocol-buffers" ]
Multiprocesses become zombie processes when increasing iterations. What is the advantage of mp.Queue() over Manager.list()?
38,961,584
<p>I'm trying to use multiprocessing to spawn 4 processes that brute force some calculations and each has a very small chance of manipulating a single list object on each iteration that I want to share between them. Not <a href="https://docs.python.org/2/library/multiprocessing.html#programming-guidelines" rel="nofollow">best practice</a> in the guidelines but I need "many hands". </p> <p>The code works fine for relatively small numbers of iterations but when increasing the number to a certain threshold, all four processes will go into zombie state. They fail silently.</p> <p>I attempt to track the modifications to the shared list by using <code>multiprocessing.Queue()</code>. It appears from <a href="http://stackoverflow.com/questions/11854519/python-multiprocessing-some-functions-do-not-return-when-they-are-complete-que">this SO post</a>, <a href="https://bugs.python.org/issue8426" rel="nofollow">this closed Python issue – "not a bug"</a>, and several posts referring to these, that the underlying pipe can become overloaded and the processes just hang. The accepted answer in the SO post is extremely difficult to decipher because of so much excess code.</p> <p><strong>Edited for clarity:</strong><br> The examples in the documentation do very lightweight things, almost always single function calls. Therefore, I don't know whether I'm misunderstanding and abusing features.</p> <p>The guidelines say:</p> <blockquote> <p>It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives from the threading module.</p> </blockquote> <p>Does "communicate" here mean something other than what I'm actually doing in my example?<br> Or<br> Does this mean that I should be sharing <code>my_list</code> in the queue rather than with a manager? Wouldn't this mean <code>queue.get</code> and <code>queue.put</code> on every iteration of every process? 
</p> <blockquote> <p>If maxsize is less than or equal to zero, the queue size is infinite.</p> </blockquote> <p>Doing this does not fix the error in my failing example. Until the point that I do <code>queue.put()</code> all of the data is stored within a normal Python list: <code>my_return_list</code> so is this actually failing due to the links I provided?</p> <p>Is there a better way of doing this compared to my current workaround? I can't seem to find others taking an approach that looks similar, I feel I'm missing something. I need this to work for both Windows and Linux.</p> <p>Failing example (depending on iterations under <code>__main__</code>):</p> <pre><code>import multiprocessing as mp import random import sys def mutate_list(my_list, proc_num, iterations, queue, lock): my_return_list = [] if iterations &lt; 1001: # Works fine for x in xrange(iterations): if random.random() &lt; 0.01: lock.acquire() print "Process {} changed list from:".format(proc_num) print my_list print "to" random.shuffle(my_list) print my_list print "........" 
sys.stdout.flush() lock.release() my_return_list.append([x, list(my_list)]) else: for x in xrange(iterations): # Enters zombie state if random.random() &lt; 0.01: lock.acquire() random.shuffle(my_list) my_return_list.append([x, list(my_list)]) lock.release() if x % 1000 == 0: print "Completed iterations:", x sys.stdout.flush() queue.put(my_return_list) def multi_proc_handler(iterations): manager = mp.Manager() ns = manager.list() ns.extend([x for x in range(10)]) queue = mp.Queue() lock = manager.Lock() print "Starting list to share", ns print ns sys.stdout.flush() p = [mp.Process(target=mutate_list, args=(ns,x,iterations,queue,lock)) for x in range(4)] for process in p: process.start() for process in p: process.join() output = [queue.get() for process in p] return output if __name__ == '__main__': # 1000 iterations is fine, 100000 iterations will create zombies multi_caller = multi_proc_handler(100000) </code></pre> <p>Workaround using <code>multiprocessing.Manager.list()</code>:</p> <pre><code>import multiprocessing as mp import random import sys def mutate_list(my_list, proc_num, iterations, my_final_list, lock): for x in xrange(iterations): if random.random() &lt; 0.01: lock.acquire() random.shuffle(my_list) my_final_list.append([x, list(my_list)]) lock.release() if x % 10000 == 0: print "Completed iterations:", x sys.stdout.flush() def multi_proc_handler(iterations): manager = mp.Manager() ns = manager.list([x for x in range(10)]) lock = manager.Lock() my_final_list = manager.list() # My Queue substitute print "Starting list to share", ns print ns sys.stdout.flush() p = [mp.Process(target=mutate_list, args=(ns,x,iterations,my_final_list, lock)) for x in range(4)] for process in p: process.start() for process in p: process.join() return list(my_final_list) if __name__ == '__main__': multi_caller = multi_proc_handler(100000) </code></pre>
1
2016-08-15T19:19:23Z
38,986,160
<p><strong>Queue versus list</strong></p> <p>Underneath the hood, a <code>multiprocessing.Queue</code> and a <code>manager.list()</code> are both writing to and reading from a buffer.</p> <p><strong>Queue</strong></p> <pre><code>shared_queue = multiprocessing.Queue() </code></pre> <p>When you call <code>put</code> with N or more bytes (where N is dependent on a lot of variables), it is more than the buffer can handle and the <code>put</code> blocks. You might be able to get the <code>put</code> to unblock by calling <code>get</code> in another process. This is an experiment that should be easy to perform using the first version of your code. I highly recommend that you try this experiment.</p> <p><strong>list</strong></p> <pre><code>manager = multiprocessing.Manager() shared_list = manager.list() </code></pre> <p>When you call <code>append</code>, you are passing much less than N bytes, and the write to the buffer succeeds. <em>There is another process that reads the data from the buffer</em> and appends it to the actual <code>list</code>. This process is created by the <code>manager</code>. Even if you call <code>append</code> with N or more bytes, everything should keep working, because <em>there is another process reading from the buffer</em>. You can pass an arbitrary number of bytes to another process this way.</p> <p><strong>Summary</strong></p> <p>Hopefully, this clarifies why your "workaround" works. You are breaking up the writes to the buffer into smaller pieces and you have a helper process that is reading from the buffer to put the pieces into the managed <code>list</code>.</p>
1
2016-08-16T23:51:53Z
[ "python", "python-2.7", "multiprocessing" ]
Pandas left join leads to column full of 'NaT'
38,961,609
<p>I'm just trying to merge two data frames, where the first one (the left in the left join) is <code>data</code>:</p> <pre><code> userid date event 0 S3gFFFZtYF 2016-04-01 18:04:44.646000+00:00 goReview 1 9iYv7VWA3l 2016-04-01 18:07:43.461000+00:00 goReview 2 9iYv7VWA3l 2016-04-01 18:09:10.264000+00:00 requestReminder 3 9iYv7VWA3l 2016-04-01 18:09:34.526000+00:00 emailFeedback 4 9iYv7VWA3l 2016-04-01 18:10:07.161000+00:00 rejectFeedback </code></pre> <p>And the right table is <code>last_use_date</code>:</p> <pre><code> last_date userid 0 2016-06-10 13:01:38.131000+00:00 00bt52e7Wg 1 2016-08-15 14:26:55.187000+00:00 01oqeMSMkN 2 2016-08-11 00:04:35.812000+00:00 0200dDUPWK 3 2016-08-15 15:13:13.567000+00:00 04mkzqD7e2 4 2016-08-14 16:19:04.582000+00:00 04Tj3htVwh </code></pre> <p>In <code>data</code> the same <code>userid</code> can appear more than once whereas in <code>last_use_date</code> each <code>userid</code> appears only once. The results of a left join are below. As you can see, I seem to lose all info from <code>last_use_date</code>. </p> <pre><code>data.join(last_use_date, on = 'userid', how = 'left', rsuffix = '_right').head() </code></pre> <p>results in:</p> <pre><code> userid date event last_date userid_right 0 S3gFFFZtYF 2016-04-01 18:04:44.646000+00:00 goReview NaT NaN 1 9iYv7VWA3l 2016-04-01 18:07:43.461000+00:00 goReview NaT NaN 2 9iYv7VWA3l 2016-04-01 18:09:10.264000+00:00 requestReminder NaT NaN 3 9iYv7VWA3l 2016-04-01 18:09:34.526000+00:00 emailFeedback NaT NaN 4 9iYv7VWA3l 2016-04-01 18:10:07.161000+00:00 rejectFeedback NaT NaN </code></pre> <p><strong>Why are all the times and userid values gone?</strong></p> <p>Note, I have already verified that I do have an overlap in data:</p> <pre><code>set(last_use_date.userid) == set(data.userid) True </code></pre>
0
2016-08-15T19:21:31Z
38,961,699
<p>First things first: check your datatypes:</p> <pre><code>last_use_date.userid.dtype
data.userid.dtype
</code></pre> <p>Are they equal? Then replace <code>join</code> with <code>merge</code>, as your key is not in the index but in the columns of your dataframe (note that <code>merge</code> takes a <code>suffixes</code> pair, not <code>rsuffix</code>):</p> <pre><code>data.merge(last_use_date, on='userid', how='left', suffixes=('', '_right'))
</code></pre> <p>That should solve your problem amigo.</p>
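A small self-contained illustration of the merge (toy frames standing in for the real data; the 'z' user deliberately has no last-use record):

```python
import pandas as pd

data = pd.DataFrame({
    'userid': ['a', 'b', 'b', 'z'],
    'event': ['goReview', 'goReview', 'emailFeedback', 'goReview'],
})
last_use_date = pd.DataFrame({
    'userid': ['a', 'b'],
    'last_date': pd.to_datetime(['2016-06-10', '2016-08-15']),
})

# Left join on the shared column, not on the index.
merged = data.merge(last_use_date, on='userid', how='left')
print(merged)
```

Every row whose userid exists on the right picks up a last_date; only the unmatched 'z' row is left as NaT, which is the expected left-join behaviour rather than a column full of NaT.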
1
2016-08-15T19:27:53Z
[ "python", "pandas" ]
Pandas left join leads to column full of 'NaT'
38,961,609
<p>I'm just trying to merge two data frames, where the first one (the left in the left join) is <code>data</code>:</p> <pre><code> userid date event 0 S3gFFFZtYF 2016-04-01 18:04:44.646000+00:00 goReview 1 9iYv7VWA3l 2016-04-01 18:07:43.461000+00:00 goReview 2 9iYv7VWA3l 2016-04-01 18:09:10.264000+00:00 requestReminder 3 9iYv7VWA3l 2016-04-01 18:09:34.526000+00:00 emailFeedback 4 9iYv7VWA3l 2016-04-01 18:10:07.161000+00:00 rejectFeedback </code></pre> <p>And the right table is <code>last_use_date</code>:</p> <pre><code> last_date userid 0 2016-06-10 13:01:38.131000+00:00 00bt52e7Wg 1 2016-08-15 14:26:55.187000+00:00 01oqeMSMkN 2 2016-08-11 00:04:35.812000+00:00 0200dDUPWK 3 2016-08-15 15:13:13.567000+00:00 04mkzqD7e2 4 2016-08-14 16:19:04.582000+00:00 04Tj3htVwh </code></pre> <p>In <code>data</code> the same <code>userid</code> can appear more than once whereas in <code>last_use_date</code> each <code>userid</code> appears only once. The results of a left join are below. As you can see, I seem to lose all info from <code>last_use_date</code>. </p> <pre><code>data.join(last_use_date, on = 'userid', how = 'left', rsuffix = '_right').head() </code></pre> <p>results in:</p> <pre><code> userid date event last_date userid_right 0 S3gFFFZtYF 2016-04-01 18:04:44.646000+00:00 goReview NaT NaN 1 9iYv7VWA3l 2016-04-01 18:07:43.461000+00:00 goReview NaT NaN 2 9iYv7VWA3l 2016-04-01 18:09:10.264000+00:00 requestReminder NaT NaN 3 9iYv7VWA3l 2016-04-01 18:09:34.526000+00:00 emailFeedback NaT NaN 4 9iYv7VWA3l 2016-04-01 18:10:07.161000+00:00 rejectFeedback NaT NaN </code></pre> <p><strong>Why are all the times and userid values gone?</strong></p> <p>Note, I have already verified that I do have an overlap in data:</p> <pre><code>set(last_use_date.userid) == set(data.userid) True </code></pre>
0
2016-08-15T19:21:31Z
38,961,725
<p><code>.join</code> joins by <em>index</em> by default, not by common columns. Use <code>.merge</code> instead (passing <code>how='left'</code> to keep the left-join semantics):</p> <pre><code>data.merge(last_use_date, left_on='userid', right_on='userid', how='left',
           sort=False, suffixes=('', '_right'))
</code></pre>
1
2016-08-15T19:29:53Z
[ "python", "pandas" ]
Python Code Error random script
38,961,702
<pre><code>import random, string max_str_length = 20 while True: random = ''.join(random.choice(string.lowercase) for _ in range(random.choice(range(1, max_str_length)))) print random if random=="hugh": print "Done" </code></pre> <p>What's the error here? It says "Attribute Error: 'str' object has no attribute 'choice'"</p>
-3
2016-08-15T19:28:40Z
38,986,483
<p>You rebind the name <code>random</code> to a string on the first line of your while loop, so on the next iteration <code>random.choice</code> is looked up on that string rather than on the <code>random</code> module, and <code>str</code> has no <code>choice</code> attribute. Renaming the variable fixes the problem.</p>
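A sketch of the repaired loop (variable renamed to `word`; `string.ascii_lowercase` is used because `string.lowercase` exists only on Python 2, and the endless `while True` is bounded here so the demo terminates):

```python
import random
import string

max_str_length = 20

for _ in range(5):
    # The result is bound to `word`, so the `random` module is never shadowed.
    word = ''.join(random.choice(string.ascii_lowercase)
                   for _ in range(random.choice(range(1, max_str_length))))
    print(word)
    if word == "hugh":
        print("Done")
```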
1
2016-08-17T00:39:34Z
[ "python" ]
How to release matplotlib execution without closing window
38,961,709
<p>I want to show a series of images using <code>imshow()</code> and a <code>for</code> loop, but I want to wait for user input on the image (not on terminal) before continuing. The following does work but it requires closing the window, which is not optimal as my for loop has over a thousand iterations. How can I put a plt.clf() and unblock temporarily until next loop.</p> <pre><code>x, y, z = sp.mgrid[0:10, 0:100, 0:100] for img in x: f = plt.figure() def onkey(event): print("pressed {}".format(event)) plt.close(f) f.canvas.mpl_connect("key_press_event", onkey) plt.imshow(img) plt.show() print("continuing") </code></pre> <p>Thanks!</p>
0
2016-08-15T19:29:04Z
38,961,781
<p>Use:</p> <pre><code>plt.ion()
plt.show()
</code></pre> <p>before the loop, and remove <code>plt.close()</code> &amp; <code>plt.show()</code> inside the loop.</p> <p>To block the loop until a key-press event occurs, add:</p> <pre><code>f.canvas.start_event_loop(timeout=-1)
</code></pre> <p>inside the loop.</p>
0
2016-08-15T19:33:45Z
[ "python", "user-interface", "matplotlib" ]
How to release matplotlib execution without closing window
38,961,709
<p>I want to show a series of images using <code>imshow()</code> and a <code>for</code> loop, but I want to wait for user input on the image (not on terminal) before continuing. The following does work but it requires closing the window, which is not optimal as my for loop has over a thousand iterations. How can I put a plt.clf() and unblock temporarily until next loop.</p> <pre><code>x, y, z = sp.mgrid[0:10, 0:100, 0:100] for img in x: f = plt.figure() def onkey(event): print("pressed {}".format(event)) plt.close(f) f.canvas.mpl_connect("key_press_event", onkey) plt.imshow(img) plt.show() print("continuing") </code></pre> <p>Thanks!</p>
0
2016-08-15T19:29:04Z
38,965,330
<p>Actually, this turned out to be a repeated question:</p> <p><a href="http://stackoverflow.com/questions/14325844/matplotlib-deliberately-block-code-execution-pending-a-gui-event">matplotlib: deliberately block code execution pending a GUI event</a></p> <p>It is the mix of <code>f.canvas.start_event_loop(timeout=-1)</code> and <code>f.canvas.stop_event_loop()</code> that does the trick.</p> <p>Thanks!</p>
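Put together, the pattern from the thread looks roughly like the sketch below. The Agg backend and the short timeout are only there so the demo also runs headless; with an interactive backend you would pass `timeout=-1` and rely on the key handler's `stop_event_loop` call to release each wait:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

imgs = np.random.rand(3, 10, 10)
fig = plt.figure()

def on_key(event):
    print("pressed {}".format(event.key))
    fig.canvas.stop_event_loop()  # releases the start_event_loop() call below

fig.canvas.mpl_connect("key_press_event", on_key)

shown = 0
for img in imgs:
    plt.imshow(img)
    plt.draw()
    # Blocks until on_key fires; the finite timeout is a fallback so the
    # non-interactive Agg backend (which receives no key events) moves on.
    fig.canvas.start_event_loop(timeout=0.05)
    shown += 1
print("continuing after {} images".format(shown))
```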
1
2016-08-16T01:26:00Z
[ "python", "user-interface", "matplotlib" ]
Python BST not working
38,961,795
<p>I'm new to Python, thus the question. This is the implementation of my BST:</p> <pre><code>class BST(object):

    def __init__(self):
        self.root = None
        self.size = 0

    def add(self, item):
        return self.addHelper(item, self.root)

    def addHelper(self, item, root):
        if root is None:
            root = Node(item)
            return root

        if item &lt; root.data:
            root.left = self.addHelper(item, root.left)
        else:
            root.right = self.addHelper(item, root.right)
</code></pre> <p>This is the Node object</p> <pre><code>class Node(object):

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
</code></pre> <p>This is my implementation of <code>__str__</code></p> <pre><code>def __str__(self):
    self.levelByLevel(self.root)
    return "Complete"

def levelByLevel(self, root):
    delim = Node(sys.maxsize)
    queue = deque()
    queue.append(root)
    queue.append(delim)
    while queue:
        temp = queue.popleft()
        if temp == delim and len(queue) &gt; 0:
            queue.append(delim)
            print()
        else:
            print(temp.data, " ")
            if temp.left:
                queue.append(temp.left)
            if temp.right:
                queue.append(temp.right)
</code></pre> <p>This is my calling client,</p> <pre><code>def main():
    bst = BST()
    bst.root = bst.add(12)
    bst.root = bst.add(15)
    bst.root = bst.add(9)
    bst.levelByLevel(bst.root)

if __name__ == '__main__':
    main()
</code></pre> <p>Instead of the expected output of printing the BST level by level I get the following output,</p> <pre><code>9 
9223372036854775807 
</code></pre> <p>When I look in the debugger it seems that every time the add method is called it starts with root as None and then returns the last number as root. I'm not sure why this is happening. Any help appreciated. </p>
0
2016-08-15T19:34:25Z
38,961,882
<p>Your add method probably shouldn't return a value. And you most certainly shouldn't assign the root of the tree to what the add method returns.</p> <p>Try changing your main code to something like this:</p> <pre><code>def main(): bst = BST() bst.add(12) bst.add(15) bst.add(9) bst.levelByLevel(bst.root) if __name__ == '__main__': main() </code></pre>
-1
2016-08-15T19:41:05Z
[ "python", "binary-search-tree" ]
Python BST not working
38,961,795
<p>I'm new to Python, thus the question. This is the implementation of my BST:</p> <pre><code>class BST(object):

    def __init__(self):
        self.root = None
        self.size = 0

    def add(self, item):
        return self.addHelper(item, self.root)

    def addHelper(self, item, root):
        if root is None:
            root = Node(item)
            return root

        if item &lt; root.data:
            root.left = self.addHelper(item, root.left)
        else:
            root.right = self.addHelper(item, root.right)
</code></pre> <p>This is the Node object</p> <pre><code>class Node(object):

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
</code></pre> <p>This is my implementation of <code>__str__</code></p> <pre><code>def __str__(self):
    self.levelByLevel(self.root)
    return "Complete"

def levelByLevel(self, root):
    delim = Node(sys.maxsize)
    queue = deque()
    queue.append(root)
    queue.append(delim)
    while queue:
        temp = queue.popleft()
        if temp == delim and len(queue) &gt; 0:
            queue.append(delim)
            print()
        else:
            print(temp.data, " ")
            if temp.left:
                queue.append(temp.left)
            if temp.right:
                queue.append(temp.right)
</code></pre> <p>This is my calling client,</p> <pre><code>def main():
    bst = BST()
    bst.root = bst.add(12)
    bst.root = bst.add(15)
    bst.root = bst.add(9)
    bst.levelByLevel(bst.root)

if __name__ == '__main__':
    main()
</code></pre> <p>Instead of the expected output of printing the BST level by level I get the following output,</p> <pre><code>9 
9223372036854775807 
</code></pre> <p>When I look in the debugger it seems that every time the add method is called it starts with root as None and then returns the last number as root. I'm not sure why this is happening. Any help appreciated. </p>
0
2016-08-15T19:34:25Z
38,962,262
<p>Based on the following, you can see that <code>bst.root</code> in <code>None</code> after the second call to <code>add()</code>:</p> <pre><code>&gt;&gt;&gt; bst.root = bst.add(12) &gt;&gt;&gt; bst.root &lt;__main__.Node object at 0x7f9aaa29cfd0&gt; &gt;&gt;&gt; bst.root = bst.add(15) &gt;&gt;&gt; type(bst.root) &lt;type 'NoneType'&gt; </code></pre> <p>Your <code>addHelper</code> isn't returning the <code>root</code> node. Try this:</p> <pre><code>def addHelper(self, item, root): if root is None: root = Node(item) return root if item &lt; root.data: root.left = self.addHelper(item, root.left) else: root.right = self.addHelper(item, root.right) return root </code></pre> <p>And then it works as expected:</p> <pre><code>&gt;&gt;&gt; bst.root = bst.add(12) &gt;&gt;&gt; bst.root = bst.add(15) &gt;&gt;&gt; bst.levelByLevel(bst.root) (12, ' ') () (15, ' ') (9223372036854775807, ' ') &gt;&gt;&gt; bst.root = bst.add(9) &gt;&gt;&gt; bst.levelByLevel(bst.root) (12, ' ') () (9, ' ') (15, ' ') (9223372036854775807, ' ') </code></pre>
1
2016-08-15T20:06:24Z
[ "python", "binary-search-tree" ]
Python BST not working
38,961,795
<p>I'm new to Python, thus the question. This is the implementation of my BST:</p> <pre><code>class BST(object):

    def __init__(self):
        self.root = None
        self.size = 0

    def add(self, item):
        return self.addHelper(item, self.root)

    def addHelper(self, item, root):
        if root is None:
            root = Node(item)
            return root

        if item &lt; root.data:
            root.left = self.addHelper(item, root.left)
        else:
            root.right = self.addHelper(item, root.right)
</code></pre> <p>This is the Node object</p> <pre><code>class Node(object):

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
</code></pre> <p>This is my implementation of <code>__str__</code></p> <pre><code>def __str__(self):
    self.levelByLevel(self.root)
    return "Complete"

def levelByLevel(self, root):
    delim = Node(sys.maxsize)
    queue = deque()
    queue.append(root)
    queue.append(delim)
    while queue:
        temp = queue.popleft()
        if temp == delim and len(queue) &gt; 0:
            queue.append(delim)
            print()
        else:
            print(temp.data, " ")
            if temp.left:
                queue.append(temp.left)
            if temp.right:
                queue.append(temp.right)
</code></pre> <p>This is my calling client,</p> <pre><code>def main():
    bst = BST()
    bst.root = bst.add(12)
    bst.root = bst.add(15)
    bst.root = bst.add(9)
    bst.levelByLevel(bst.root)

if __name__ == '__main__':
    main()
</code></pre> <p>Instead of the expected output of printing the BST level by level I get the following output,</p> <pre><code>9 
9223372036854775807 
</code></pre> <p>When I look in the debugger it seems that every time the add method is called it starts with root as None and then returns the last number as root. I'm not sure why this is happening. Any help appreciated. </p>
0
2016-08-15T19:34:25Z
38,962,602
<p>You're using the <code>BST</code> object basically only to hold a root <code>Node</code>, and the <code>add</code> function doesn't really operate on the <code>BST</code> object, so it's better to have only one class (<code>BstNode</code>) and implement the <code>add</code> there. Try that and you'll see that the <code>add</code> function would be much simpler. And, in general, when a member function doesn't use <code>self</code> it shouldn't be a member function (like <code>addHelper</code>), i.e., it shouldn't have <code>self</code> as a parameter (if you'd like I can show you how to write the <code>BstNode</code> class).</p> <p>I tried writing a class that uses your idea of how to implement the <code>BST</code>.</p> <pre><code>class BstNode:

    def __init__(self):
        self.left = None
        self.right = None
        self.data = None

    def add(self, item):
        if not self.data:
            self.data = item
        elif item &gt;= self.data:
            if not self.right:
                self.right = BstNode()
            self.right.add(item)
        else:
            if not self.left:
                self.left = BstNode()
            self.left.add(item)
</code></pre> <p>That way you can create a <code>BST</code> the following way:</p> <pre><code>bst = BstNode()
bst.add(13)
bst.add(10)
bst.add(20)
</code></pre> <p>The difference is that now the <code>add</code> function actually operates on the object without any need for the user to do anything. The function changes the state of the object by itself.</p> <p>In general a function should do only what it's expected to do. The <code>add</code> function is expected to add an item to the tree so it shouldn't return the root. The fact that you had to write <code>bst.root = bst.add()</code> each time should signal that there's some fault in your design.</p>
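To check that this single-class version keeps the search-tree invariant, an in-order traversal should always yield the inserted items in sorted order. The `in_order` method below is my addition, not part of the answer, and `data` is compared with `is None` here so that 0 would also be a valid item:

```python
class BstNode:
    def __init__(self):
        self.left = None
        self.right = None
        self.data = None

    def add(self, item):
        if self.data is None:  # `is None` so that 0 is a valid item too
            self.data = item
        elif item >= self.data:
            if not self.right:
                self.right = BstNode()
            self.right.add(item)
        else:
            if not self.left:
                self.left = BstNode()
            self.left.add(item)

    def in_order(self):
        """Yield items ascending: left subtree, this node, right subtree."""
        if self.left:
            for item in self.left.in_order():
                yield item
        if self.data is not None:
            yield self.data
        if self.right:
            for item in self.right.in_order():
                yield item

bst = BstNode()
for n in (12, 15, 9, 13):
    bst.add(n)
print(list(bst.in_order()))  # [9, 12, 13, 15]
```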
0
2016-08-15T20:32:50Z
[ "python", "binary-search-tree" ]
Python BST not working
38,961,795
<p>I'm new to Python, thus the question. This is the implementation of my BST:</p> <pre><code>class BST(object):

    def __init__(self):
        self.root = None
        self.size = 0

    def add(self, item):
        return self.addHelper(item, self.root)

    def addHelper(self, item, root):
        if root is None:
            root = Node(item)
            return root

        if item &lt; root.data:
            root.left = self.addHelper(item, root.left)
        else:
            root.right = self.addHelper(item, root.right)
</code></pre> <p>This is the Node object</p> <pre><code>class Node(object):

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
</code></pre> <p>This is my implementation of <code>__str__</code></p> <pre><code>def __str__(self):
    self.levelByLevel(self.root)
    return "Complete"

def levelByLevel(self, root):
    delim = Node(sys.maxsize)
    queue = deque()
    queue.append(root)
    queue.append(delim)
    while queue:
        temp = queue.popleft()
        if temp == delim and len(queue) &gt; 0:
            queue.append(delim)
            print()
        else:
            print(temp.data, " ")
            if temp.left:
                queue.append(temp.left)
            if temp.right:
                queue.append(temp.right)
</code></pre> <p>This is my calling client,</p> <pre><code>def main():
    bst = BST()
    bst.root = bst.add(12)
    bst.root = bst.add(15)
    bst.root = bst.add(9)
    bst.levelByLevel(bst.root)

if __name__ == '__main__':
    main()
</code></pre> <p>Instead of the expected output of printing the BST level by level I get the following output,</p> <pre><code>9 
9223372036854775807 
</code></pre> <p>When I look in the debugger it seems that every time the add method is called it starts with root as None and then returns the last number as root. I'm not sure why this is happening. Any help appreciated. </p>
0
2016-08-15T19:34:25Z
38,962,685
<p>If the <code>root</code> argument of your <code>addHelper</code> is <code>None</code>, you set it to a newly-created <code>Node</code> object and return it. If it is not, then you modify the argument but return nothing, so you end up setting <code>bst.root</code> to <code>None</code> again. Try the following with your code above &mdash; it should help your understanding of what your code is doing.</p> <pre><code>bst = BST() bst.root = bst.add(12) try: print(bst.root.data) except AttributeError: print('root is None') # =&gt; 12 # `bst.addHelper(12, self.root)` returned `Node(12)`, # which `bst.add` returned too, so now `bst.root` # is `Node(12)` bst.root = bst.add(15) try: print(bst.root.data) except AttributeError: print('root is None') # =&gt; root is None # `bst.addHelper(15, self.root)` returned `None`, # which `bst.add` returned too, so now `bst.root` # is `None`. bst.root = bst.add(9) try: print(bst.root.data) except AttributeError: print('root is None') # =&gt; 9 # `bst.addHelper(9, self.root)` returned `Node(9)`, # which `bst.add` returned too, so now `bst.root` # is `Node(9)` </code></pre> <p>So you should do two things:</p> <ol> <li>make you <code>addHelper</code> always return its last argument &mdash; after the appropriate modifications &mdash;, and</li> <li>have your <code>add</code> function take care of assigning the result to <code>self.root</code> (do not leave it for the class user to do).</li> </ol> <p>Here is the code:</p> <pre><code>def add(self, item): self.root = self.addHelper(item, self.root) self.size += 1 # Otherwise what good is `self.size`? 
def addHelper(self, item, node): if node is None: node = Node(item) elif item &lt; node.data: node.left = self.addHelper(item, node.left) else: node.right = self.addHelper(item, node.right) return node </code></pre> <p>Notice that I changed the name of the last argument in <code>addHelper</code> to <code>node</code> for clarity (there already is something called <code>root</code>: that of the tree!).</p> <p>You can now write your <code>main</code> function as follows:</p> <pre><code>def main(): bst = BST() bst.add(12) bst.add(15) bst.add(9) bst.levelByLevel(bst.root) </code></pre> <p>(which is exactly what @AaronTaggart suggests &mdash; but you need the modifications in <code>add</code> and <code>addHelper</code>). Its output is:</p> <pre><code>12 9 15 9223372036854775807 </code></pre> <p>The above gets you to a working binary search tree. A few notes:</p> <ol> <li>I would further modify your <code>levelByLevel</code> to avoid printing that last value, as well as not taking any arguments (besides <code>self</code>, of course) &mdash; it should always print from the root of the tree.</li> <li><p><code>bst.add(None)</code> will raise an error. You can guard against it by changing your <code>add</code> method. One possibility is</p> <pre><code>def add(self, item): try: self.root = self.addHelper(item, self.root) self.size += 1 except TypeError: pass </code></pre> <p>Another option (faster, since it refuses to go on processing <code>item</code> if it is <code>None</code>) is </p> <pre><code>def add(self, item): if item is not None: self.root = self.addHelper(item, self.root) self.size += 1 </code></pre></li> <li><p>From the point of view of design, I would expect selecting a node from a binary search tree would give me the subtree below it. In a way it does (the node contains references to all other nodes below), but still: <code>Node</code> and <code>BST</code> objects are different things. 
You may want to think about a way of unifying the two (this is the point in @YairTwito's answer).</p></li> <li>One last thing: in Python, the <a href="https://www.python.org/dev/peps/pep-0008/#naming-conventions" rel="nofollow">convention for naming things</a> is to have words in lower case and separated by underscores, not the camelCasing you are using &mdash; so <code>add_helper</code> instead of <code>addHelper</code>. I would further add an underscore at the beginning to signal that it is not meant for public use &mdash; so <code>_add_helper</code>, or simply <code>_add</code>.</li> </ol>
2
2016-08-15T20:39:29Z
[ "python", "binary-search-tree" ]
cv2.phase gives the angle in radians
38,961,957
<p>I'm trying to get the orientation of the gradient through the Sobel function in opencv python. The problem is when I provide the gradient in the x and y direction to the phase function, it always gives me the same result, no matter phase in degree is true or False. Here is the sample code:</p> <pre><code>img = cv2.imread('frameBB.jpg',0) sobelx = cv2.Sobel(img,cv2.CV_32F,1,0,ksize=3) sobely= cv2.Sobel(img,cv2.CV_32F,0,1,ksize=3) phase=cv2.phase(sobelx,sobely,True) </code></pre> <p>I then get the histogram of phase, and end up with the same result for the last argument of phase function being either True, or False. The histogram looks like this for both cases. <a href="http://i.stack.imgur.com/xpk4g.png" rel="nofollow"><img src="http://i.stack.imgur.com/xpk4g.png" alt="enter image description here"></a></p> <p>this is how the original image and its gradient images look like:</p> <p><a href="http://i.stack.imgur.com/HH19l.png" rel="nofollow"><img src="http://i.stack.imgur.com/HH19l.png" alt="enter image description here"></a></p> <p>I am not sure what I am doing wrong here, and why I get the gradient angle in radian for both cases.</p>
0
2016-08-15T19:46:04Z
38,963,685
<p>As you can see from the documentation <a href="http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#phase" rel="nofollow">here</a> for cv2.phase:</p> <pre><code>cv2.phase(x, y[, angle[, angleInDegrees]]) </code></pre> <p>the third positional argument is not in fact for specifying degrees/radians, it's used for the output array. The argument you are trying to change is <code>angleInDegrees</code>. You can either specify it as the 4th positional argument, or more clearly by using a keyword as such:</p> <pre><code>cv2.phase(sobelx,sobely,angleInDegrees=True) </code></pre>
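The effect of the flag can be sanity-checked without OpenCV: for a single gradient pair, `cv2.phase` computes essentially the same angle as `numpy.arctan2` (up to the output range), so the radians/degrees difference looks like this (illustrative values):

```python
import numpy as np

gx, gy = np.float32(1.0), np.float32(1.0)  # stand-ins for one sobelx/sobely pair

angle_rad = np.arctan2(gy, gx)     # what you get with angleInDegrees=False
angle_deg = np.degrees(angle_rad)  # what angleInDegrees=True corresponds to

print(angle_rad)  # ~0.7854 (pi/4)
print(angle_deg)  # 45.0
```

A histogram peaking near 6.28 instead of 360 is therefore a quick sign that the angles are still in radians.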
1
2016-08-15T21:57:35Z
[ "python", "opencv" ]
Why do I get null values from conditional statement?
38,961,979
<p>Thanks again for your patience, I am not the best communicator. Please let me know if there is any additional information that I should add.</p> <p>My current data looks like:</p> <pre><code>"Identifier","Status","OPENED","Resolv","closed_on","duplicate_on","junked_on","unproducible_on","verified_on" "xx1","D","2004-07-28","","","2004-08-26","","","" "xx2","N","2010-03-02","","","","","","" "xx3","U","2005-10-26","","","","","2005-11-01","" "xx4","V","2006-06-30","2006-09-15","","","","","2006-11-20" "xx5","R","2012-09-21","2013-06-06","","","","","" "xx6","D","2009-11-25","","","2010-02-26","","","" "xx7","D","2003-08-29","","","2003-08-29","","","" "xx8","R","2003-06-06","2003-06-24","","","","","" "xx9","R","2004-11-05","2004-11-15","","","","","" "xx10","R","2008-02-21","2008-09-25","","","","","" "xx11","R","2007-03-08","2007-03-21","","","","","" "xx12","R","2011-08-22","2012-06-21","","","","","" "xx13","J","2003-07-07","","","","2003-07-10","","" "xx14","A","2008-09-24","","","","","","" </code></pre> <p>I am trying to add an age calculation column using the code below so that the data looks like(notice that the first value is returning "" for age, this is what I am trying to solve with my question. 
If the status does not have a date, then I want to use today's date.):</p> <pre><code>"Identifier","Status","OPENED","Resolv","closed_on","duplicate_on","junked_on","unproducible_on","verified_on","Age" "xx1","J","2002-02-07","","","","","","","" "xx2","J","2008-11-25","","","","2008-12-04","","",9.0 "xx3","C","2002-01-27","","2002-03-19","","","","",51.0 "xx4","V","2003-07-09","2003-07-10","","","","","2003-07-15",6.0 "xx5","D","2008-06-30","","","2008-09-09","","","",71.0 "xx6","R","2010-06-02","2010-06-11","","","","","",9.0 "xx7","R","2006-11-16","2006-12-12","","","","","",26.0 "xx8","R","2006-03-29","2006-03-31","","","","","",2.0 "xx9","R","2010-09-07","2010-10-05","","","","","",28.0 "xx10","U","2006-03-09","","","","","2006-06-20","",103.0 "xx11","R","2007-04-26","2007-05-01","","","","","",5.0 "xx12","C","2010-03-07","","2010-03-11","","","","",4.0 "xx13","R","2009-12-22","2010-05-31","","","","","",160.0 "xx14","R","2006-06-24","2006-06-28","","","","","",4.0 </code></pre> <p>However, when defects are missing status change date, the age function returns '' as seen below in picture. 
This is the case for all 102 blank cells.</p> <p><a href="http://i.stack.imgur.com/lQxkk.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/lQxkk.jpg" alt="Missing data example"></a></p> <pre><code>from datetime import datetime as dtt import pandas as pd import numpy as np import csv </code></pre> <p>Age column calculation function</p> <pre><code>def defect_age(df): """Performs age calc and creates age col""" today = dtt.today() </code></pre> <p>List of terminal statuses:</p> <pre><code> terminal = ['R', 'V', 'D', 'J', 'U', 'C'] </code></pre> <p>Date to date time per status</p> <pre><code> resolved = pd.to_datetime(df.Resolv, errors='coerce') closed = pd.to_datetime(df.closed_on, errors='coerce') duplicate = pd.to_datetime(df.duplicate_on, errors='coerce') junked = pd.to_datetime(df.junked_on, errors='coerce') unproducible = pd.to_datetime(df.unproducible_on, errors='coerce') verified = pd.to_datetime(df.verified_on, errors='coerce') submitted = pd.to_datetime(df.OPENED, errors='coerce') </code></pre> <p>Date calculation by status</p> <pre><code> r = (resolved - submitted) / np.timedelta64(1, 'D', errors='coerce') c = (closed - submitted) / np.timedelta64(1, 'D', errors='coerce') d = (duplicate - submitted) / np.timedelta64(1, 'D', errors='coerce') j = (junked - submitted) / np.timedelta64(1, 'D', errors='coerce') u = (unproducible - submitted) / np.timedelta64(1, 'D', errors='coerce') v = (verified - submitted) / np.timedelta64(1, 'D', errors='coerce') # not terminal state s = (today - submitted) / np.timedelta64(1, 'D', errors='coerce') date_calc = int(s) </code></pre> <p>I am trying to populate age column. If status is terminal and date not blank use above date calculation. 
For some reason when terminal states are blank, it is not using the else clause which is what I am trying to do.</p> <pre><code> if df.Status in terminal: if df.Status == 'R' and df.Resolv != '': return r elif df.Status == 'C' and df.closed_on != '': return c elif df.Status == 'D' and df.duplicate_on != '': return d elif df.Status == 'J' and df.junked_on != '': return j elif df.Status == 'U' and df.unproducible_on != '': return u elif df.Status == 'V' and df.verified_on != '': return v else: return date_calc </code></pre> <p>Read in data</p> <pre><code>df = pd.read_csv('BigData.txt', low_memory=False) </code></pre> <p>Create new column using defect_age function</p> <pre><code>df['Age'] = df.apply(lambda row: defect_age(row), axis=1) </code></pre> <p>Write result to CSV </p> <pre><code>df.to_csv("data.csv", index=False, sep=',', quoting=csv.QUOTE_NONNUMERIC) </code></pre> <p>ROW 2511:</p> <pre><code> Identifier Status OPENED Resolv closed_on duplicate_on junked_on \ 2511 xxxx5 J 2002-02-07 NaN NaN NaN NaN unproducible_on verified_on 2511 NaN NaN </code></pre>
0
2016-08-15T19:47:20Z
38,990,364
<p>Here is a compact version that computes the age from the status-specific date, falling back to today's date when the status is not terminal or its date is blank.</p>

<pre><code>def toDateTime(s):
    return dtt.strptime(s, '%Y-%m-%d')

def defect_age(row):
    status_dict = {'R': 'Resolv', 'V': 'verified_on', 'D': 'duplicate_on',
                   'J': 'junked_on', 'U': 'unproducible_on', 'C': 'closed_on'}
    submitted = toDateTime(row['OPENED'])
    status = row['Status']
    if status in status_dict:
        date_from_col = row[status_dict[status]]
        date = toDateTime(date_from_col) if date_from_col != '' else dtt.today()
    else:
        date = dtt.today()

    return (date - submitted).days
</code></pre>

<p>This function is equivalent to your <code>defect_age</code> function above. You can then apply it to your dataframe with</p>

<pre><code>df.fillna('', inplace=True)
df['Age'] = df.apply(defect_age, axis=1)
</code></pre>
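The same per-row logic can be sanity-checked without pandas on a plain dict. This is a standalone sketch: the helper name is my own, and the status-to-column mapping is restated from the question's CSV headers.

```python
from datetime import datetime as dtt

# Column that holds the status-change date for each terminal status
# (column names taken from the question's CSV).
STATUS_COLS = {'R': 'Resolv', 'V': 'verified_on', 'D': 'duplicate_on',
               'J': 'junked_on', 'U': 'unproducible_on', 'C': 'closed_on'}

def age_in_days(row):
    """Days from OPENED to the status date, or to today if it is blank."""
    submitted = dtt.strptime(row['OPENED'], '%Y-%m-%d')
    col = STATUS_COLS.get(row['Status'])
    date_str = row.get(col, '') if col else ''
    end = dtt.strptime(date_str, '%Y-%m-%d') if date_str else dtt.today()
    return (end - submitted).days

# xx8 from the sample data: opened 2006-03-29, resolved 2006-03-31
print(age_in_days({'Status': 'R', 'OPENED': '2006-03-29', 'Resolv': '2006-03-31'}))  # -> 2
```

A row with a non-terminal status (e.g. 'A') takes the `dtt.today()` branch, which is exactly the behavior the question was missing.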
1
2016-08-17T07:17:46Z
[ "python", "pandas", "conditional" ]
10,000+ Point 3D Scatter Plots in Python (with Quick Rendering)
38,962,015
<p>Performance-wise, the following code snippet works perfectly fine for me when plotting in <code>mayavi</code>.</p> <pre><code>import numpy as np from mayavi import mlab n = 5000 x = np.random.rand(n) y = np.random.rand(n) z = np.random.rand(n) s = np.sin(x)**2 + np.cos(y) mlab.points3d(x, y, z, s, colormap="RdYlBu", scale_factor=0.02, scale_mode='none') </code></pre> <p>But <code>mayavi</code> begins to choke once <code>n &gt;= 10000</code>. The analogous 3d plotting routine in <code>matplotlib</code>, (<code>Axes3D.scatter</code>) similarly struggles with data sets of this size (why I started looking into <code>mayavi</code> in the first place).</p> <p>First, is there something in <code>mayavi</code> (trivial or nontrivial) that I am missing that would make 10,000+ point scatter plots much easier to render?</p> <p>Second, if the answer above is no, what other options (either in <code>mayavi</code> or a different python package) do I have to plot datasets of this magnitude?</p> <p>I tagged ParaView simply to add that rendering my data in ParaView goes super smoothly, leading me to believe that I am not trying to do anything unreasonable.</p> <p><strong>Update:</strong></p> <p>Specifying the mode as a 2D glyph goes a long way towards speeding things up. E.g.</p> <pre><code>mlab.points3d(x, y, z, s, colormap="RdYlBu", scale_factor=0.02, scale_mode='none', mode='2dcross') </code></pre> <p>can easily support up to 100,000 points</p> <p><a href="http://i.stack.imgur.com/cDUbH.png" rel="nofollow"><img src="http://i.stack.imgur.com/cDUbH.png" alt="enter image description here"></a></p> <p>It would still be nice if anyone could add some info about how to speed up the rendering of 3D glyphs.</p>
2
2016-08-15T19:49:22Z
39,205,127
<p><a href="http://www.pyqtgraph.org/" rel="nofollow">PyQtGraph</a> is a much more performant plotting package, although not as "beautiful" as matplotlib or mayavi. It is made for number crunching and should therefore easily render points on the order of tens of thousands.</p>

<p>As for <code>mayavi</code> and <code>matplotlib</code>: I think with that number of points you've reached the limit of what is possible with those packages.</p>

<p>Edit: <a href="http://vispy.org/" rel="nofollow">VisPy</a> seems to be the successor to PyQtGraph and some other visualization packages. It might be a bit overkill, but it can display a few hundred thousand points easily by offloading computation to the GPU.</p>
1
2016-08-29T11:25:36Z
[ "python", "matplotlib", "mayavi", "paraview" ]
rendering matplotlib mathematical equations
38,962,062
<p>I am working to create a regression plot using a combination of Seaborn (to plot) and Statsmodels (to collect coefficients). The plotting function works fine, but I am attempting to add the linear regression equation to my plot using the <code>ax.text()</code> feature. However, I am having trouble getting the equation to render properly.</p> <p>I have consulted two different docs, but I find these more confusing than helpful: <a href="http://matplotlib.org/users/usetex.html" rel="nofollow">http://matplotlib.org/users/usetex.html</a> and <a href="http://matplotlib.org/users/mathtext.html#mathtext-tutorial" rel="nofollow">http://matplotlib.org/users/mathtext.html#mathtext-tutorial</a></p> <pre><code>regList = (df.ratiojnatoawd,testDF.ratiojnatoawd, df.ratiopdrtoawd,testDF.ratiopdrtoawd,df.ratiopdrtojna,testDF.ratiopdrtojna) fig, axs = plt.subplots(3, 2, sharex=True, figsize=(10,6.5)) fig.subplots_adjust(wspace=0.25) for reg, ax, lims, color in zip(regList, axs.flatten(), axlims, ('g', 'b', 'g','b', 'g','b')): regmax = reg.max() regmin = reg.min() regx = sm.add_constant(np.arange(1,97)) regy = reg regr = sm.OLS(regy, regx).fit() label = 'y = %.2ex + %.2e\nr^2 = %.3f' % (regr.params[1], regr.params[0], regr.rsquared) sns.regplot(np.arange(1,97), reg, ax=ax, color=color, scatter_kws={"alpha": 0.35}) ax.text(0.98,0.75,label, transform=ax.transAxes, horizontalalignment = 'right', bbox=dict(facecolor='w', alpha=0.5)) ax.yaxis.set_major_formatter(percentFormatter) ax.set_ylim(lims) ax.set_xlim((0,96)) ax.set_ylabel('') axs[0][0].set_ylabel('JnA to Total Awds') axs[1][0].set_ylabel('PDR to Total Awds') axs[2][0].set_ylabel('PDR to Total JnA') </code></pre> <p>Is this rendering incorrectly because I am using '%' characters within the '$' characters? Or are there characters I am forgetting to escape? Any help in getting my <code>ax.text()</code> to render as a mathematical expression versus a string would be immensely appreciated.</p>
0
2016-08-15T19:52:13Z
38,962,188
<p>You need to use <code>$...$</code> around your equation, like this:</p>

<pre><code>import matplotlib.pyplot as plt

f, ax = plt.subplots(1,1)

#                        v  v &lt;-------- These two $-signs
label = 'y = %.2ex + %.2e\nr$^2$ = %.3f' % (10, 100, 0.01)
ax.text(0.5, 0.5, label,
        horizontalalignment = 'right',
        bbox=dict(facecolor='w', alpha=0.5))

plt.show()
</code></pre>

<p>The only math in your example was the <code>r^2</code>. The rendered image is shown below.</p>

<p><a href="http://i.stack.imgur.com/ZDyhm.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZDyhm.png" alt="enter image description here"></a></p>
0
2016-08-15T20:01:40Z
[ "python", "matplotlib" ]
rendering matplotlib mathematical equations
38,962,062
<p>I am working to create a regression plot using a combination of Seaborn (to plot) and Statsmodels (to collect coefficients). The plotting function works fine, but I am attempting to add the linear regression equation to my plot using the <code>ax.text()</code> feature. However, I am having trouble getting the equation to render properly.</p> <p>I have consulted two different docs, but I find these more confusing than helpful: <a href="http://matplotlib.org/users/usetex.html" rel="nofollow">http://matplotlib.org/users/usetex.html</a> and <a href="http://matplotlib.org/users/mathtext.html#mathtext-tutorial" rel="nofollow">http://matplotlib.org/users/mathtext.html#mathtext-tutorial</a></p> <pre><code>regList = (df.ratiojnatoawd,testDF.ratiojnatoawd, df.ratiopdrtoawd,testDF.ratiopdrtoawd,df.ratiopdrtojna,testDF.ratiopdrtojna) fig, axs = plt.subplots(3, 2, sharex=True, figsize=(10,6.5)) fig.subplots_adjust(wspace=0.25) for reg, ax, lims, color in zip(regList, axs.flatten(), axlims, ('g', 'b', 'g','b', 'g','b')): regmax = reg.max() regmin = reg.min() regx = sm.add_constant(np.arange(1,97)) regy = reg regr = sm.OLS(regy, regx).fit() label = 'y = %.2ex + %.2e\nr^2 = %.3f' % (regr.params[1], regr.params[0], regr.rsquared) sns.regplot(np.arange(1,97), reg, ax=ax, color=color, scatter_kws={"alpha": 0.35}) ax.text(0.98,0.75,label, transform=ax.transAxes, horizontalalignment = 'right', bbox=dict(facecolor='w', alpha=0.5)) ax.yaxis.set_major_formatter(percentFormatter) ax.set_ylim(lims) ax.set_xlim((0,96)) ax.set_ylabel('') axs[0][0].set_ylabel('JnA to Total Awds') axs[1][0].set_ylabel('PDR to Total Awds') axs[2][0].set_ylabel('PDR to Total JnA') </code></pre> <p>Is this rendering incorrectly because I am using '%' characters within the '$' characters? Or are there characters I am forgetting to escape? Any help in getting my <code>ax.text()</code> to render as a mathematical expression versus a string would be immensely appreciated.</p>
0
2016-08-15T19:52:13Z
39,154,546
<p>I was able to get the desired end result by defining a function to convert a float into a LaTeX-readable format, which I adapted from <a href="http://stackoverflow.com/a/13490601/4995611" title="here">here</a>.</p>

<pre><code>def latex_float(f):
    float_str = "{0:.3g}".format(f)
    if "e" in float_str:
        base, exponent = float_str.split("e")
        return r"{0}e^{{{1}}}".format(base, int(exponent))
    else:
        return float_str
</code></pre>

<p>Each regression is then labeled with:</p>

<pre><code>label = '$y = ' +latex_float(regr.params[1])+'x + '+latex_float(regr.params[0])+ '$\n$r^2 = %.3f$' % (regr.rsquared)
</code></pre>

<p>Perhaps not the most elegant solution, but adequate nonetheless.</p>
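As a quick sanity check, the formatter above produces output like the following (the function is restated so the snippet runs on its own):

```python
def latex_float(f):
    """Format a float to 3 significant digits with a LaTeX-style exponent."""
    float_str = "{0:.3g}".format(f)
    if "e" in float_str:
        # '1.23e+07' -> base '1.23', exponent '+07' -> '1.23e^{7}'
        base, exponent = float_str.split("e")
        return r"{0}e^{{{1}}}".format(base, int(exponent))
    else:
        return float_str

print(latex_float(3.14159))   # -> 3.14
print(latex_float(12300000))  # -> 1.23e^{7}
print(latex_float(1.23e-05))  # -> 1.23e^{-5}
```

Wrapped in `$...$`, these strings render as proper superscripts in matplotlib's mathtext.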
0
2016-08-25T20:52:22Z
[ "python", "matplotlib" ]
Automagically propagating deletion when using a bidirectional association_proxy
38,962,124
<p>I'm using a bidirectional <code>association_proxy</code> to associate properties <code>Group.members</code> and <code>User.groups</code>. I'm having issues with removing a member from <code>Group.members</code>. In particular, <code>Group.members.remove</code> will successfully remove an entry from <code>Group.members</code>, <em>but will leave a <code>None</code> in place of the corresponding entry in <code>User.groups</code></em>.</p> <p>More concretely, the following (minimal-ish) representative code snippet fails its last assertion:</p> <pre><code>import sqlalchemy as sa from sqlalchemy.orm import Session from sqlalchemy.ext.associationproxy import association_proxy from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Group(Base): __tablename__ = 'group' id = sa.Column(sa.Integer, autoincrement=True, primary_key=True) name = sa.Column(sa.UnicodeText()) members = association_proxy('group_memberships', 'user', creator=lambda user: GroupMembership(user=user)) class User(Base): __tablename__ = 'user' id = sa.Column(sa.Integer, autoincrement=True, primary_key=True) username = sa.Column(sa.UnicodeText()) groups = association_proxy('group_memberships', 'group', creator=lambda group: GroupMembership(group=group)) class GroupMembership(Base): __tablename__ = 'user_group' user_id = sa.Column(sa.Integer, sa.ForeignKey('user.id'), primary_key=True) group_id = sa.Column(sa.Integer, sa.ForeignKey('group.id'), primary_key=True) user = sa.orm.relationship( 'User', backref=sa.orm.backref('group_memberships', cascade="all, delete-orphan")) group = sa.orm.relationship( 'Group', backref=sa.orm.backref('group_memberships', cascade="all, delete-orphan"), order_by='Group.name') if __name__ == '__main__': engine = sa.create_engine('sqlite://') Base.metadata.create_all(engine) session = Session(engine) group = Group(name='group name') user = User(username='user name') group.members.append(user) session.add(group) session.add(user) session.flush() 
assert group.members == [user] assert user.groups == [group] group.members.remove(user) session.flush() assert group.members == [] assert user.groups == [] # This assertion fails, user.groups is [None] </code></pre> <p>I've tried to follow the answers to <a href="http://stackoverflow.com/questions/16066749/sqlalchemy-relationship-with-association-proxy-problems/16071115#16071115">SQLAlchemy relationship with association_proxy problems</a> and <a href="http://stackoverflow.com/questions/21501985/how-can-sqlalchemy-association-proxy-be-used-bi-directionally">How can SQLAlchemy association_proxy be used bi-directionally?</a> but they do not seem to help.</p>
9
2016-08-15T19:56:51Z
38,962,755
<p>I discovered <a href="http://stackoverflow.com/a/4202016/344286">your problem</a> almost entirely by accident, as I was trying to figure out what's going on.</p> <p>Because there wasn't any data in the db, I added a <code>session.commit()</code>. It turns out that (from the linked answer):</p> <blockquote> <p>The changes aren't persisted permanently to disk, or visible to other transactions until the database receives a COMMIT for the current transaction (which is what session.commit() does).</p> </blockquote> <p>Because you are just <code>.flush()</code>ing the changes, sqlalchemy never re-queries the database. You can verify this by adding:</p> <pre><code>import logging logging.getLogger('sqlalchemy').setLevel(logging.INFO) logging.getLogger('sqlalchemy').addHandler(logging.StreamHandler()) </code></pre> <p>And then simply running your code. It will display all of the queries that are run as they happen. Then you can change <code>session.flush()</code> to <code>session.commit()</code> and then re-run, and you'll see that several <code>SELECT</code> statements are run after your <code>commit</code>.</p> <p>It looks like either <code>session.expire(user)</code> or <code>session.refresh(user)</code> will force a refresh of the user, as well. I'm not sure if there's a way to force the update to propagate to the other object without being explicit about it (or if that's even desirable).</p>
2
2016-08-15T20:43:49Z
[ "python", "sqlalchemy" ]
RDFLib Blank Node Printing
38,962,190
<p>I have an RDF dataset where triples are stored in N-Triples format like follows:</p> <pre><code>&lt;http://ebola.ndssl.bi.vt.edu/country/1&gt; &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&gt; &lt;http://ebola.ndssl.bi.vt.edu/vocab/country&gt; . _:AmapX3aXcountryX5fXcountryX5fXnameX5fXclassMapX40XX40X1 &lt;http://ebola.ndssl.bi.vt.edu/vocab/hasValue&gt; "Guinea" . </code></pre> <p>I want to do some processing with the blank nodes. I am writing a program to reading this file in Python. I am using Python RDFLib library. After reading the file, I print its content. However, the problem is blank node name is coming out differently. For example:</p> <pre><code>_:AmapX3aXcountryX5fXcountryX5fXnameX5fXclassMapX40XX40X1 is showing like following N75424221e7df43708c3e2a135e3e888b </code></pre> <p>I need original RDF file blank node name as follows:</p> <pre><code>_:AmapX3aXcountryX5fXcountryX5fXnameX5fXclassMapX40XX40X1 </code></pre> <p>How can I print original RDF file blank node name?</p>
0
2016-08-15T20:01:46Z
38,962,982
<p>You (probably) can't. Blank node ids are local to the specific file they are in, they're not guaranteed preserved between different serializations. RDFLib simply replaces the id with its own new internal id. </p> <p>Some tools have a parser setting to optionally preserve node ids. I don't know if RDFLib supports this, but even if it does: unless you have a <em>very</em> specific use case you should not be relying on blank node ids being preserved. They are called <em>blank</em> for a reason: their id is for all intents and purposes "unknown". </p>
4
2016-08-15T20:59:59Z
[ "python", "rdf", "rdflib" ]
python django: error using inline formset
38,962,391
<p>I have the following models:</p> <pre><code>class Equipment(models.Model): asset_number = models.CharField(max_length = 200) serial_number = models.CharField(max_length = 200) class Calibration(models.Model): cal_asset = models.ForeignKey(Equipment, on_delete = models.CASCADE) cal_by = models.CharField(max_length = 200) cal_date = models.DateField() notes = models.TextField(max_length = 200) </code></pre> <p>and view: </p> <pre><code>def default_detail (request, equipment_id): equipment = Equipment.objects.get(id = equipment_id) if request.method == "POST": if 'calibration' in request.POST: EquipmentInlineFormSet = inlineformset_factory(Equipment, Calibration, fields = ('cal_by', 'cal_dates', 'notes') formset = EquipmentInlineFormSet(request.POST, request.FILES, instance=equipment) if formset.is_valid(): formset.save() return HttpResponseRedirect(reverse('calbase:default_detail', args=(post.id))) else: formset = EquipmentInlineFormSet(instance=equipment) return render(request, 'calbase/default_detail.html', {'formset' : formset}) </code></pre> <p>and template for this view:</p> <pre><code>&lt;h1&gt;{{ equipment.serial_number }}&lt;/h1&gt; {{equipment.serial_number}} -- {{equipment.asset_number}} &lt;br&gt; calibration history: {% for calibrations in equipment.calibration_set.all %} &lt;ul&gt; &lt;li&gt; {{calibrations.cal_by}} -- {{calibrations.cal_date}} -- {{calibrations.notes}} &lt;/li&gt; &lt;/ul&gt; {% endfor %} {% if error_message %}&lt;p&gt;&lt;strong&gt;{{ error_message }}&lt;/strong&gt;&lt;/p&gt;{% endif %} &lt;form method="POST" action = "{% url 'calbase:default_detail' equipment.id %}"&gt;{% csrf_token %} {{ formset }} &lt;button type="submit" class="save btn btn-default" name = "calibration"&gt;Save&lt;/button&gt; &lt;/form&gt; &lt;a href="{% url 'calbase:default' %}"&gt;Back?&lt;/a&gt; </code></pre> <p>This view is simply a detail view of equipment, showing some information (asset and serial # of this piece of equipment). 
I am trying to add a formset that let user add calibration record to this equipment in this view and display them if any. I learned that using inline formset is probably the way to go. However, after following <a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#using-an-inline-formset-in-a-view" rel="nofollow">documentation</a> step by step, I am having a</p> <blockquote> <p>equipmentInlineFormSet = EquipmentInlineFormSet(request.POST, request.FILES, instance=equipment)</p> <p>SyntaxError: invalid syntax</p> </blockquote> <p>error even though I checked to make sure that there is no typos or so. I am just trying to figure out what I did wrong here.</p>
0
2016-08-15T20:15:32Z
38,963,499
<p>Isn't it a missing closing parenthesis on this line?</p> <pre><code> EquipmentInlineFormSet = inlineformset_factory(Equipment, Calibration, fields = ('cal_by', 'cal_dates', 'notes') </code></pre>
0
2016-08-15T21:42:08Z
[ "python", "django", "django-forms" ]
Splitting DataFrames in Apache Spark
38,962,409
<p>Using Apache Spark 2.0 with pyspark, I have a DataFrame containing 1000 rows of data and would like to split/slice that DataFrame into 2 separate DataFrames;</p> <ul> <li>The first DataFrame should contain the first 750 rows</li> <li>The second DataFrame should contain the remaining 250 rows</li> </ul> <p>Note: a random seed won't suffice, as I intend to repeat this splitting method several times, and want to be in control as to which data is being used for the first and second DataFrame.</p> <p>I've found the take(n) method to be useful to generate the first result.<br> But I can't seem to find the right way (or any way for that matter) to get the second DataFrame.</p> <p>Any pointers in the right direction would be greatly appreciated.</p> <p>Thanks in advance.</p> <p><strong>Update</strong>: I've now managed to find a solution by sorting and applying take(n) again. This still feels like a suboptimal solution though:</p> <pre><code># First DataFrame, simply take the first 750 rows part1 = spark.createDataFrame(df.take(750)) # Second DataFrame, sort by key descending, then take 250 rows part2 = spark.createDataFrame(df.rdd.sortByKey(False).toDF().take(250)) # Then reverse the order again, to maintain the original order part2 = part2.rdd.sortByKey(True).toDF() # Then rename the columns as they have been reset to "_1" and "_2" by the sorting process part2 = part2.withColumnRenamed("_1", "label").withColumnRenamed("_2", "features") </code></pre>
1
2016-08-15T20:16:51Z
38,964,391
<p>You are right to question the use of <code>take</code>, because it pulls the data to the driver, and <code>createDataFrame</code> then redistributes it across the cluster. This is inefficient and may fail if your driver doesn't have enough memory to hold the data.</p>

<p>Here's a solution that creates a row index column and slices on that:</p>

<pre><code>from pyspark.sql.functions import monotonically_increasing_id

idxDf = df.withColumn("idx", monotonically_increasing_id())
part1 = idxDf.filter('idx &lt; 750')
part2 = idxDf.filter('idx &gt;= 750')
</code></pre>

<p>Note that the generated ids are guaranteed to be increasing and unique, but not consecutive, so this slicing is only exact when the DataFrame lives in a single partition; for consecutive indices across partitions, use <code>df.rdd.zipWithIndex()</code>.</p>
2
2016-08-15T23:18:11Z
[ "python", "apache-spark", "pyspark" ]
VOLTTRON : Device communication aborted: segmentationNotSupported
38,962,468
<p>When running the <code>BACnet Proxy</code> and <code>MasterDriver</code> agents, I receive the following error message:</p> <blockquote> <p>master_driver.driver ERROR: Failed to scrape Device Name: RuntimeError('Device communication aborted: segmentationNotSupported')</p> </blockquote> <p>Could anyone help me to resolve this error?</p>
1
2016-08-15T20:22:15Z
38,963,891
<p>BACnet limits the size of a message, and the specification allows several different valid maximum sizes. If a device wants to send a message that exceeds the size supported by either device, it may segment the message into smaller pieces. Both devices must support segmentation for this to work; otherwise you get the error you are seeing.</p>

<p>The cause of this error is that the device being scraped does not support segmentation, and the number of points being scraped by the driver at once (by default, all of them) creates a message too large to send or receive without segmentation.</p>

<p>The BACnet driver currently supports manual segmentation to overcome this device limitation without reducing the number of points configured in the driver. You can set the max_per_request setting in the driver_config section of a BACnet device configuration. The setting is per device, so you must include max_per_request in every device affected. A typical value is 20. If the error persists, try lower values.</p>

<p>A planned future enhancement for the BACnet driver is to auto-detect this case and automatically set an ideal max_per_request value.</p>

<p><strong>EDIT</strong></p>

<p>I should also mention that the max_per_request argument was added after VOLTTRON 3.0. You need to be running either 3.5RC1 or the develop branch.</p>
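For illustration, the setting goes inside the driver_config section of the device's configuration file. The sketch below is hypothetical: every field except `driver_config` and `max_per_request` is a placeholder, and the exact file layout depends on your VOLTTRON version.

```json
{
    "driver_config": {
        "device_address": "10.0.0.5",
        "max_per_request": 20
    },
    "driver_type": "bacnet",
    "interval": 60
}
```

If the segmentation error persists with 20, try progressively smaller values until scrapes succeed.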
1
2016-08-15T22:19:09Z
[ "python", "volttron" ]
Seeing if an element from a 1D array is in another MD array
38,962,479
<pre><code>array1D = ['book', 'aa', 'Ab', 'AB'] arrayMD = ['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr'] </code></pre> <p>How could I check to see if any element in array1D exists in arrayMD?</p> <p>So far I just know of the find() method and that seems to only work for one element search.</p> <p>Edit: I'd like to get it's index from arrayMD as well</p>
0
2016-08-15T20:23:16Z
38,962,515
<p>Use <a href="https://docs.python.org/3/reference/expressions.html#membership-test-operations" rel="nofollow"><code>in</code></a>.</p> <pre><code>for sublist in arrayMD: for index, element in enumerate(sublist): if element in array1D: # Do something </code></pre>
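Applied to the arrays from the question (note that `arrayMD` needs to be a list of lists), the loop above can collect each match together with its sublist index and position:

```python
array1D = ['book', 'aa', 'Ab', 'AB']
arrayMD = [['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr']]

matches = []
for sub_index, sublist in enumerate(arrayMD):
    for index, element in enumerate(sublist):
        if element in array1D:
            # record (element, which sublist, position within it)
            matches.append((element, sub_index, index))

print(matches)  # -> [('book', 0, 1)]
```

Each tuple gives the matching element and its full index in `arrayMD`, which answers the edit in the question.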
2
2016-08-15T20:26:15Z
[ "python", "arrays", "find" ]
Seeing if an element from a 1D array is in another MD array
38,962,479
<pre><code>array1D = ['book', 'aa', 'Ab', 'AB'] arrayMD = ['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr'] </code></pre> <p>How could I check to see if any element in array1D exists in arrayMD?</p> <p>So far I just know of the find() method and that seems to only work for one element search.</p> <p>Edit: I'd like to get it's index from arrayMD as well</p>
0
2016-08-15T20:23:16Z
38,962,687
<pre><code>array1D = ['book', 'aa', 'Ab', 'AB']
arrayMD = [['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr']]

for word in array1D:
    for arrindex, subarr in enumerate(arrayMD):
        for wordindex, subword in enumerate(subarr):
            if word == subword:
                print(word, arrindex, wordindex)
                break
</code></pre>

<p><strong>Output:</strong> ('book', 0, 1)</p>

<p>It's not efficient, as it iterates through every element of each array, but it works.</p>
0
2016-08-15T20:39:35Z
[ "python", "arrays", "find" ]
Seeing if an element from a 1D array is in another MD array
38,962,479
<pre><code>array1D = ['book', 'aa', 'Ab', 'AB'] arrayMD = ['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr'] </code></pre> <p>How could I check to see if any element in array1D exists in arrayMD?</p> <p>So far I just know of the find() method and that seems to only work for one element search.</p> <p>Edit: I'd like to get it's index from arrayMD as well</p>
0
2016-08-15T20:23:16Z
38,962,730
<p>If you are just wondering if the element is anywhere within the second 'array', then it is probably best to flatten it first which also has the advantage of being able to deal with arrays of any depth. This is most easily done with <code>numpy</code> if you aren't sure how deep the lists will be.</p> <pre><code>import numpy as np arrayMD_flat = np.array(arrayMD).flatten() for item in array1D: if item in arrayMD_flat: print('{0} was found!'.format(item)) </code></pre>
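One caveat: with sublists of unequal length, as in the question, `np.array(arrayMD)` produces an object array holding the sublists themselves rather than a flat array of strings, so the membership test can silently miss matches. A standard-library alternative that handles ragged input:

```python
from itertools import chain

array1D = ['book', 'aa', 'Ab', 'AB']
arrayMD = [['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr']]

# Flatten one level of nesting regardless of sublist lengths
arrayMD_flat = list(chain.from_iterable(arrayMD))

found = [item for item in array1D if item in arrayMD_flat]
print(found)  # -> ['book']
```

`chain.from_iterable` only flattens one level; for arbitrarily nested lists you would need a recursive flatten instead.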
1
2016-08-15T20:42:16Z
[ "python", "arrays", "find" ]
Seeing if an element from a 1D array is in another MD array
38,962,479
<pre><code>array1D = ['book', 'aa', 'Ab', 'AB'] arrayMD = ['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr'] </code></pre> <p>How could I check to see if any element in array1D exists in arrayMD?</p> <p>So far I just know of the find() method and that seems to only work for one element search.</p> <p>Edit: I'd like to get it's index from arrayMD as well</p>
0
2016-08-15T20:23:16Z
38,963,745
<pre><code>import collections.abc

def check(list_md, list_1d):

    def flatten(l, index=None):
        """ this function will flatten list_md recursively,
            yielding only the elements which were found,
            as a list of tuples
        """
        for i, el in enumerate(l):  # using enumerate to get index
            _index = [i] if index is None else index + [i]  # getting nested list's indexes
            if isinstance(el, collections.abc.Iterable) and not isinstance(el, (str, bytes)):
                for sub in flatten(el, _index):
                    yield sub
            else:
                # yielding ( &lt;element itself&gt;, &lt;index of element&gt; )
                if el in list_1d:
                    yield el, _index

    return list(flatten(list_md))

# example
print(check([1, 2, [3, [4, 5, 6, [20]]]], [5, 20, 29]))

# your example
list_md_example = [['ss', 'book', 'fd', '2'], ['sw', 'd'], ['we', 'wr']]
list_1d_example = ['book', 'aa', 'Ab', 'AB']

print(
    check(
        list_md_example,
        list_1d_example
    )
)
</code></pre>

<p>The output of the first example will be <code>[(5, [2, 1, 1]), (20, [2, 1, 3, 0])]</code>, which means that the number 5 was found and its index is [2, 1, 1].</p>

<p>The second example will output <code>[('book', [0, 1])]</code>.</p>

<p>If the returned list is empty, it means no elements from the 1D array were found in the MD array.</p>
0
2016-08-15T22:03:36Z
[ "python", "arrays", "find" ]
if else in a face detector with opencv
38,962,582
<p>Note I am a beginner. I made a script that analyzes a picture and places a box around any faces found in the image, that part works, What i need it to do is change the "faces" in "if faces = True" to something to the effect of If faces found = true, though I don't know what that would be, faces does nothing.</p> <pre><code>import cv2 import sys import time imagePath = sys.argv[1] cascPath = sys.argv[2] faceCascade = cv2.CascadeClassifier(cascPath) image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale(gray,scaleFactor=1.2,minNeighbors=5,minSize=(30, 30))#,flags = cv2.cv.CV_HAAR_IMAGE_SCALE) for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2) if faces = True: cv2.imshow("(1) Pamela Found" ,image) else: cv2.imshow("(0) Pamela's Found" ,image) cv2.waitKey(0)&amp;0xFF </code></pre> <p>the code already works it just that the:</p> <pre><code>if faces = True: cv2.imshow("(1) Pamela Found" ,image) else: cv2.imshow("(0) Pamela's Found" ,image) </code></pre> <p>doesn't work. Help would be appreciated - Thanks! 
</p> <p>Edit: now I have changed the code to look like this:</p> <pre><code>import cv2 import sys import time imagePath = sys.argv[1] cascPath = sys.argv[2] faceCascade = cv2.CascadeClassifier(cascPath) image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale(gray,scaleFactor=1.2,minNeighbors=5,minSize=(30, 30))#,flags = cv2.cv.CV_HAAR_IMAGE_SCALE) for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2) if faces == True: cv2.imshow("(1) Pamela(s) Found" ,image) cv2.waitKey(0)&amp;0xFF else: cv2.imshow("(0) Pamela(s) Found" ,image) cv2.waitKey(0)&amp;0xFF </code></pre> <p>When I run this, the XML file and the image without the face, it works, and says, "(0) Pamela(s) Found" as it should, but when I run this, the XML file, and the image with the face the window doesn't pop up, I believe this has to do with the waitkey under the if statement not functioning, help would be appreciated - thanks!</p>
-1
2016-08-15T20:30:40Z
38,962,642
<p>Use the detection result itself as the condition. One catch: <code>detectMultiScale</code> returns a NumPy array when faces are found, and evaluating a multi-element array in a boolean context raises a <code>ValueError</code>, so test its length instead:</p>

<pre><code>if len(faces) &gt; 0:  # at least one face was detected
    cv2.imshow("(1) Pamela Found" ,image)
else:
    cv2.imshow("(0) Pamela's Found" ,image)
</code></pre>

<p>An empty result has length zero, while a successful detection (i.e. <code>faces</code> is not empty) has a positive length.</p>

<hr>

<p>P.S. <code>if faces = True</code> will raise a syntax error (it is an assignment, not a comparison), and <code>if faces == True</code> compares the array element-wise instead of producing a single boolean, which is why your second version misbehaves when a face is found.</p>
0
2016-08-15T20:35:57Z
[ "python", "python-3.x", "opencv" ]
if else in a face detector with opencv
38,962,582
<p>Note I am a beginner. I made a script that analyzes a picture and places a box around any faces found in the image, that part works, What i need it to do is change the "faces" in "if faces = True" to something to the effect of If faces found = true, though I don't know what that would be, faces does nothing.</p> <pre><code>import cv2 import sys import time imagePath = sys.argv[1] cascPath = sys.argv[2] faceCascade = cv2.CascadeClassifier(cascPath) image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale(gray,scaleFactor=1.2,minNeighbors=5,minSize=(30, 30))#,flags = cv2.cv.CV_HAAR_IMAGE_SCALE) for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2) if faces = True: cv2.imshow("(1) Pamela Found" ,image) else: cv2.imshow("(0) Pamela's Found" ,image) cv2.waitKey(0)&amp;0xFF </code></pre> <p>the code already works it just that the:</p> <pre><code>if faces = True: cv2.imshow("(1) Pamela Found" ,image) else: cv2.imshow("(0) Pamela's Found" ,image) </code></pre> <p>doesn't work. Help would be appreciated - Thanks! 
</p> <p>Edit: now I have changed the code to look like this:</p> <pre><code>import cv2 import sys import time imagePath = sys.argv[1] cascPath = sys.argv[2] faceCascade = cv2.CascadeClassifier(cascPath) image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) faces = faceCascade.detectMultiScale(gray,scaleFactor=1.2,minNeighbors=5,minSize=(30, 30))#,flags = cv2.cv.CV_HAAR_IMAGE_SCALE) for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2) if faces == True: cv2.imshow("(1) Pamela(s) Found" ,image) cv2.waitKey(0)&amp;0xFF else: cv2.imshow("(0) Pamela(s) Found" ,image) cv2.waitKey(0)&amp;0xFF </code></pre> <p>When I run this, the XML file and the image without the face, it works, and says, "(0) Pamela(s) Found" as it should, but when I run this, the XML file, and the image with the face the window doesn't pop up, I believe this has to do with the waitkey under the if statement not functioning, help would be appreciated - thanks!</p>
-1
2016-08-15T20:30:40Z
38,962,659
<p>According to the documentation I found <a href="http://docs.opencv.org/2.4/modules/objdetect/doc/cascade_classification.html" rel="nofollow">in OpenCV</a>, <code>faceCascade.detectMultiScale</code> returns a collection of <em>objects</em>.</p> <p>To test if a collection (<code>list</code>, <code>set</code>, <code>tuple</code>, <code>dict</code>, etc.) is non-empty, just try:</p> <pre><code>if faces: cv2.imshow("(1) Pamela Found", image) else: cv2.imshow("(0) Pamela's Found", image) </code></pre> <p>May be a duplicate of <a href="http://stackoverflow.com/questions/53513/best-way-to-check-if-a-list-is-empty">Best way to check if a list is empty</a></p>
0
2016-08-15T20:37:04Z
[ "python", "python-3.x", "opencv" ]
Python, pandas: how to extract values from a symmetric, multi-index dataframe
38,962,618
<p>I have a symmetric, multi-index dataframe from which I want to systematically extract data:</p> <pre><code>import pandas as pd df_index = pd.MultiIndex.from_arrays( [["A", "A", "B", "B"], [1, 2, 3, 4]], names = ["group", "id"]) df = pd.DataFrame( [[1.0, 0.5, 0.3, -0.4], [0.5, 1.0, 0.9, -0.8], [0.3, 0.9, 1.0, 0.1], [-0.4, -0.8, 0.1, 1.0]], index=df_index, columns=df_index) </code></pre> <p>I want a function <code>extract_vals</code> that can return all values related to elements in the same group, EXCEPT for the diagonal AND elements must not be double-counted. Here are two examples of the desired behavior (order does not matter):</p> <pre><code>A_vals = extract_vals("A", df) # [0.5, 0.3, -0.4, 0.9, -0.8] B_vals = extract_vals("B", df) # [0.3, 0.9, 0.1, -0.4, -0.8] </code></pre> <p>My question is similar to <a href="http://stackoverflow.com/questions/32892864/how-to-extract-tuples-from-a-pandas-symmetric-dataframe">this question on SO</a>, but my situation is different because I am using a multi-index dataframe.</p> <p>Finally, to make things more fun, please consider efficiency because I'll be running this many times on much bigger dataframes. Thanks very much!</p> <h2>EDIT:</h2> <p>Happy001's solution is awesome. I came up with a method myself based on the logic of extracting the elements where target is NOT in BOTH the rows and columns, and then extracting the lower triangle of those elements where target IS in BOTH the rows and columns. However, Happy001's solution is much faster. 
</p> <p>First, I created a more complex dataframe to make sure both methods are generalizable:</p> <pre><code>import pandas as pd import numpy as np df_index = pd.MultiIndex.from_arrays( [["A", "B", "A", "B", "C", "C"], [1, 2, 3, 4, 5, 6]], names=["group", "id"]) df = pd.DataFrame( [[1.0, 0.5, 1.0, -0.4, 1.1, -0.6], [0.5, 1.0, 1.2, -0.8, -0.9, 0.4], [1.0, 1.2, 1.0, 0.1, 0.3, 1.3], [-0.4, -0.8, 0.1, 1.0, 0.5, -0.2], [1.1, -0.9, 0.3, 0.5, 1.0, 0.7], [-0.6, 0.4, 1.3, -0.2, 0.7, 1.0]], index=df_index, columns=df_index) </code></pre> <p>Next, I defined both versions of extract_vals (the first is my own):</p> <pre><code>def extract_vals(target, multi_index_level_name, df): # Extract entries where target is in the rows but NOT also in the columns target_in_rows_but_not_in_cols_vals = df.loc[ df.index.get_level_values(multi_index_level_name) == target, df.columns.get_level_values(multi_index_level_name) != target] # Extract entries where target is in the rows AND in the columns target_in_rows_and_cols_df = df.loc[ df.index.get_level_values(multi_index_level_name) == target, df.columns.get_level_values(multi_index_level_name) == target] mask = np.triu(np.ones(target_in_rows_and_cols_df.shape), k = 1).astype(np.bool) vals_with_nans = target_in_rows_and_cols_df.where(mask).values.flatten() target_in_rows_and_cols_vals = vals_with_nans[~np.isnan(vals_with_nans)] # Append both arrays of extracted values vals = np.append(target_in_rows_but_not_in_cols_vals, target_in_rows_and_cols_vals) return vals def extract_vals2(target, multi_index_level_name, df): # Get indices for what you want to extract and then extract all at once coord = [[i, j] for i in range(len(df)) for j in range(len(df)) if i &lt; j and ( df.index.get_level_values(multi_index_level_name)[i] == target or ( df.columns.get_level_values(multi_index_level_name)[j] == target))] return df.values[tuple(np.transpose(coord))] </code></pre> <p>I checked that both functions returned output as desired:</p> <pre><code># 
Expected values e_A_vals = np.sort([0.5, 1.0, -0.4, 1.1, -0.6, 1.2, 0.1, 0.3, 1.3]) e_B_vals = np.sort([0.5, 1.2, -0.8, -0.9, 0.4, -0.4, 0.1, 0.5, -0.2]) e_C_vals = np.sort([1.1, -0.9, 0.3, 0.5, 0.7, -0.6, 0.4, 1.3, -0.2]) # Sort because order doesn't matter assert np.allclose(np.sort(extract_vals("A", "group", df)), e_A_vals) assert np.allclose(np.sort(extract_vals("B", "group", df)), e_B_vals) assert np.allclose(np.sort(extract_vals("C", "group", df)), e_C_vals) assert np.allclose(np.sort(extract_vals2("A", "group", df)), e_A_vals) assert np.allclose(np.sort(extract_vals2("B", "group", df)), e_B_vals) assert np.allclose(np.sort(extract_vals2("C", "group", df)), e_C_vals) </code></pre> <p>And finally, I checked speed:</p> <pre><code>## Test speed import time # Method 1 start1 = time.time() for ii in range(10000): out = extract_vals("C", "group", df) elapsed1 = time.time() - start1 print elapsed1 # 28.5 sec # Method 2 start2 = time.time() for ii in range(10000): out2 = extract_vals2("C", "group", df) elapsed2 = time.time() - start2 print elapsed2 # 10.9 sec </code></pre>
-3
2016-08-15T20:34:25Z
38,962,711
<p>is that what you want?</p> <p>all elements above the diagonal:</p> <pre><code>In [139]: df.values[np.triu_indices(len(df), 1)] Out[139]: array([ 0.5, 0.3, -0.4, 0.9, -0.8, 0.1]) </code></pre> <p>A_vals:</p> <pre><code>In [140]: df.values[np.triu_indices(len(df), 1)][:-1] Out[140]: array([ 0.5, 0.3, -0.4, 0.9, -0.8]) </code></pre> <p>B_vals:</p> <pre><code>In [141]: df.values[np.triu_indices(len(df), 1)][1:] Out[141]: array([ 0.3, -0.4, 0.9, -0.8, 0.1]) </code></pre> <p>Source matrix:</p> <pre><code>In [142]: df.values Out[142]: array([[ 1. , 0.5, 0.3, -0.4], [ 0.5, 1. , 0.9, -0.8], [ 0.3, 0.9, 1. , 0.1], [-0.4, -0.8, 0.1, 1. ]]) </code></pre>
0
2016-08-15T20:41:11Z
[ "python", "pandas", "numpy", "dataframe", "multi-index" ]
Python, pandas: how to extract values from a symmetric, multi-index dataframe
38,962,618
<p>I have a symmetric, multi-index dataframe from which I want to systematically extract data:</p> <pre><code>import pandas as pd df_index = pd.MultiIndex.from_arrays( [["A", "A", "B", "B"], [1, 2, 3, 4]], names = ["group", "id"]) df = pd.DataFrame( [[1.0, 0.5, 0.3, -0.4], [0.5, 1.0, 0.9, -0.8], [0.3, 0.9, 1.0, 0.1], [-0.4, -0.8, 0.1, 1.0]], index=df_index, columns=df_index) </code></pre> <p>I want a function <code>extract_vals</code> that can return all values related to elements in the same group, EXCEPT for the diagonal AND elements must not be double-counted. Here are two examples of the desired behavior (order does not matter):</p> <pre><code>A_vals = extract_vals("A", df) # [0.5, 0.3, -0.4, 0.9, -0.8] B_vals = extract_vals("B", df) # [0.3, 0.9, 0.1, -0.4, -0.8] </code></pre> <p>My question is similar to <a href="http://stackoverflow.com/questions/32892864/how-to-extract-tuples-from-a-pandas-symmetric-dataframe">this question on SO</a>, but my situation is different because I am using a multi-index dataframe.</p> <p>Finally, to make things more fun, please consider efficiency because I'll be running this many times on much bigger dataframes. Thanks very much!</p> <h2>EDIT:</h2> <p>Happy001's solution is awesome. I came up with a method myself based on the logic of extracting the elements where target is NOT in BOTH the rows and columns, and then extracting the lower triangle of those elements where target IS in BOTH the rows and columns. However, Happy001's solution is much faster. 
</p> <p>First, I created a more complex dataframe to make sure both methods are generalizable:</p> <pre><code>import pandas as pd import numpy as np df_index = pd.MultiIndex.from_arrays( [["A", "B", "A", "B", "C", "C"], [1, 2, 3, 4, 5, 6]], names=["group", "id"]) df = pd.DataFrame( [[1.0, 0.5, 1.0, -0.4, 1.1, -0.6], [0.5, 1.0, 1.2, -0.8, -0.9, 0.4], [1.0, 1.2, 1.0, 0.1, 0.3, 1.3], [-0.4, -0.8, 0.1, 1.0, 0.5, -0.2], [1.1, -0.9, 0.3, 0.5, 1.0, 0.7], [-0.6, 0.4, 1.3, -0.2, 0.7, 1.0]], index=df_index, columns=df_index) </code></pre> <p>Next, I defined both versions of extract_vals (the first is my own):</p> <pre><code>def extract_vals(target, multi_index_level_name, df): # Extract entries where target is in the rows but NOT also in the columns target_in_rows_but_not_in_cols_vals = df.loc[ df.index.get_level_values(multi_index_level_name) == target, df.columns.get_level_values(multi_index_level_name) != target] # Extract entries where target is in the rows AND in the columns target_in_rows_and_cols_df = df.loc[ df.index.get_level_values(multi_index_level_name) == target, df.columns.get_level_values(multi_index_level_name) == target] mask = np.triu(np.ones(target_in_rows_and_cols_df.shape), k = 1).astype(np.bool) vals_with_nans = target_in_rows_and_cols_df.where(mask).values.flatten() target_in_rows_and_cols_vals = vals_with_nans[~np.isnan(vals_with_nans)] # Append both arrays of extracted values vals = np.append(target_in_rows_but_not_in_cols_vals, target_in_rows_and_cols_vals) return vals def extract_vals2(target, multi_index_level_name, df): # Get indices for what you want to extract and then extract all at once coord = [[i, j] for i in range(len(df)) for j in range(len(df)) if i &lt; j and ( df.index.get_level_values(multi_index_level_name)[i] == target or ( df.columns.get_level_values(multi_index_level_name)[j] == target))] return df.values[tuple(np.transpose(coord))] </code></pre> <p>I checked that both functions returned output as desired:</p> <pre><code># 
Expected values e_A_vals = np.sort([0.5, 1.0, -0.4, 1.1, -0.6, 1.2, 0.1, 0.3, 1.3]) e_B_vals = np.sort([0.5, 1.2, -0.8, -0.9, 0.4, -0.4, 0.1, 0.5, -0.2]) e_C_vals = np.sort([1.1, -0.9, 0.3, 0.5, 0.7, -0.6, 0.4, 1.3, -0.2]) # Sort because order doesn't matter assert np.allclose(np.sort(extract_vals("A", "group", df)), e_A_vals) assert np.allclose(np.sort(extract_vals("B", "group", df)), e_B_vals) assert np.allclose(np.sort(extract_vals("C", "group", df)), e_C_vals) assert np.allclose(np.sort(extract_vals2("A", "group", df)), e_A_vals) assert np.allclose(np.sort(extract_vals2("B", "group", df)), e_B_vals) assert np.allclose(np.sort(extract_vals2("C", "group", df)), e_C_vals) </code></pre> <p>And finally, I checked speed:</p> <pre><code>## Test speed import time # Method 1 start1 = time.time() for ii in range(10000): out = extract_vals("C", "group", df) elapsed1 = time.time() - start1 print elapsed1 # 28.5 sec # Method 2 start2 = time.time() for ii in range(10000): out2 = extract_vals2("C", "group", df) elapsed2 = time.time() - start2 print elapsed2 # 10.9 sec </code></pre>
-3
2016-08-15T20:34:25Z
38,965,809
<p>I don't assume <code>df</code> has the same columns and index (of course, they can be the same).</p> <pre><code>def extract_vals(group_label, df): coord = [[i, j] for i in range(len(df)) for j in range(len(df)) if i&lt;j and (df.index.get_level_values('group')[i] == group_label or df.columns.get_level_values('group')[j] == group_label) ] return df.values[tuple(np.transpose(coord))] print extract_vals('A', df) print extract_vals('B', df) </code></pre> <p>result:</p> <pre><code>[ 0.5 0.3 -0.4 0.9 -0.8] [ 0.3 -0.4 0.9 -0.8 0.1] </code></pre>
1
2016-08-16T02:35:26Z
[ "python", "pandas", "numpy", "dataframe", "multi-index" ]
Split pandas dataframe by column variable
38,962,634
<p>I've got a dataframe that I'd like to split by a column variable like the example below:</p> <pre><code>gender height weight male 42.8 157.5 male 41.3 165.6 female 48.4 144.2 </code></pre> <p>My desired outcome is:</p> <p><code>df_male</code></p> <pre><code>gender height weight male 42.8 157.5 male 41.3 165.6 </code></pre> <p><code>df_female</code></p> <pre><code>gender height weight female 48.4 144.2 </code></pre> <p>The catch is that I'd like to be able to do this with a variable that has anywhere from 5-25 categories.</p> <p>My thought is that there should be a way to loop over the original dataframe and spit out multiple dataframes but I'm open to all possible solutions</p>
1
2016-08-15T20:35:23Z
38,962,795
<p>The following will produce a list containing one dataframe for each value of the <code>gender</code> column:</p> <pre><code>import io import pandas as pd data = io.StringIO('''\ gender height weight male 42.8 157.5 male 41.3 165.6 female 48.4 144.2 ''') df = pd.read_csv(data, delim_whitespace=True) dfs = [rows for _, rows in df.groupby('gender')] </code></pre> <p><code>dfs</code> is a list of length 2, with the following elements:</p> <pre><code>print(dfs[0]) # gender height weight # 2 female 48.4 144.2 print(dfs[1]) # gender height weight # 0 male 42.8 157.5 # 1 male 41.3 165.6 </code></pre> <p>It might be even better to create a dictionary keyed by the distinct values in the <code>gender</code> column, with the corresponding sub-dataframes as values:</p> <pre><code>dfs = {gender: rows for gender, rows in df.groupby('gender')} </code></pre> <p>results in the following dictionary:</p> <pre class="lang-none prettyprint-override"><code>{'female': gender height weight 2 female 48.4 144.2, 'male': gender height weight 0 male 42.8 157.5 1 male 41.3 165.6} </code></pre>
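<p>A quick sketch of how the dictionary variant is typically used; the data here is made up to match the question, and the approach works no matter how many categories the column has:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'gender': ['male', 'male', 'female'],
    'height': [42.8, 41.3, 48.4],
    'weight': [157.5, 165.6, 144.2],
})

# one sub-dataframe per distinct value of the grouping column
groups = {gender: rows for gender, rows in df.groupby('gender')}
```

<p>Each group is then reachable by name, e.g. <code>groups['female']</code>, instead of by list position.</p>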
4
2016-08-15T20:46:26Z
[ "python", "pandas", "filtering" ]
Serialize list of objects to JSON
38,962,637
<p>I am trying to serialize a list of objects. I am making an HTTP API call. The call returns a list of objects (e.g. class A). I do not have access to the definition of class A. I tried using dumps</p> <pre><code>print ("Result is: %s", json.dumps(result_list.__dict__)) </code></pre> <p>This prints an empty result. However if I were to print the result_list I get below output</p> <pre><code>{ "ResultList": [{ "fieldA": 0, "fieldB": 1.46903594E9, "fieldC": "builder", "fieldD": "AWSSimpleDBStorageNode/release@B5725048349-Linux-2.6c2.5-x86_64" }] } </code></pre> <p>IS there a way I can convert the object with whichever field it returns to a json.</p>
-1
2016-08-15T20:35:36Z
38,963,137
<p>Please give more detail about the class of which <code>result_list</code> is an instance (e.g. post the class code). <code>json.dumps(result_list)</code> probably does not work since <code>result_list</code> is not a plain dictionary, but an object of a class. You need to dump the variable that holds the data structure (i.e. the same one that is displayed in the print call).</p>
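<p>If you cannot touch the class definition at all, one generic stdlib trick is to hand <code>json.dumps</code> a <code>default</code> callback that falls back to each object's attribute dictionary. The class below is only a stand-in for the opaque response type, with made-up field values:</p>

```python
import json

class A(object):
    # stand-in for the response class we have no access to
    def __init__(self, fieldA, fieldC):
        self.fieldA = fieldA
        self.fieldC = fieldC

result_list = [A(0, 'builder'), A(1, 'tester')]

# whenever json meets something it cannot serialize natively, it asks
# the callback, which hands back that object's attribute dict
encoded = json.dumps(result_list, default=lambda o: o.__dict__)
```

<p>This also recurses into nested objects, since the callback is invoked for every non-serializable value encountered.</p>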
0
2016-08-15T21:11:12Z
[ "python", "json", "serialization" ]
Python - Django add username to QuerySet fields
38,962,649
<p>I have a problem since four days and I hope that anyone can help me.</p> <p>I have a model class A, which has User as Foreign Key and other attributes:</p> <p>class A(models.Model):</p> <pre><code>user = models.ForeignKey(User) .... </code></pre> <p>In views.py:</p> <pre><code>objects_as_json = serializers.serialize('json', classA.objects.filter(user_id=2).order_by("-date")) </code></pre> <p>I got this as Response: </p> <pre><code>[{"model": "senApp.classA", "pk": 10, "fields": {"user": 2, "partner": null, "image": 6, "contest": 4, "action": 2, "date": "2016-07-29T09:35:59Z"}}, {"model": "sengApp.classA", "pk": 9, "fields": {"user": 2, "partner": 6, "image": 1, "contest": null, "action": 1, "date": "2016-07-29T09:23:43Z"}}] </code></pre> <p>So my question is, how can i add the username of the user in fields like this:</p> <pre><code>{"model": "senApp.classA", "pk": 10, "fields": {"user": 2, "partner": null, "image": 6, "contest": 4, "action": 2, "date": "2016-07-29T09:35:59Z", "username":}} </code></pre> <p>Thank you very much in advance for your answer.</p>
0
2016-08-15T20:36:25Z
38,969,516
<p>You can set the keyword argument <code>use_natural_foreign_keys</code> to <code>True</code>:</p> <pre><code>objects_as_json = serializers.serialize( 'json', classA.objects.filter(user_id=2).order_by("-date"), use_natural_foreign_keys=True ) </code></pre> <p>Also <a href="https://docs.djangoproject.com/ja/1.10/topics/serialization/#serialization-of-natural-keys" rel="nofollow">see the django docs</a> for how to control what is returned as the <em>natural key</em>.</p>
1
2016-08-16T08:00:35Z
[ "python", "django", "python-2.7", "django-queryset", "username" ]
Why do I need to keep a variable pointing to my QWidget?
38,962,683
<p>Why does the example below only work if the useless <code>_</code> variable is created?</p> <p>The <code>_</code> variable is assigned and never used. I would assume a good compiler would optimize and not even create it, instead it does make a difference.</p> <p>If I remove <code>_ =</code> and leave just <code>Test()</code>, then the window is created, but it flickers and disappears immediately, and python hangs forever.</p> <p>Here is the code:</p> <pre><code>import sys from PyQt4 import QtGui class Test(QtGui.QWidget): def __init__(self): super().__init__() self.show() app = QtGui.QApplication(sys.argv) _ = Test() sys.exit(app.exec_()) </code></pre>
5
2016-08-15T20:39:24Z
38,962,809
<p>That's a very good question, and I've faced a lot of weird problems in the past because of this with my PyQt widgets and plugins. Basically, it happens because of the <a href="http://stackoverflow.com/questions/4484167/python-garbage-collector-documentation/4484312#4484312">Python garbage collector</a>.</p> <p>When you assign your instance to that <code>_</code> dummy variable before entering Qt's main loop, there is a living reference that keeps it from being collected by the garbage collector, and therefore the widget won't be destroyed.</p>
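<p>The effect is easy to reproduce without Qt at all; this sketch uses a plain class and <code>weakref</code> to watch CPython reclaim an unreferenced instance immediately:</p>

```python
import weakref

class Widget(object):
    # stand-in for a QWidget subclass
    pass

# no name holds the new instance, so CPython frees it as soon as the
# expression finishes -- the weak reference goes dead immediately
dangling = weakref.ref(Widget())

# binding the instance to a name keeps a strong reference alive
kept = Widget()
alive = weakref.ref(kept)
```

<p><code>dangling()</code> returns <code>None</code> while <code>alive()</code> still returns the object, which is exactly why <code>Test()</code> alone disappears but <code>_ = Test()</code> survives until the event loop exits.</p>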
3
2016-08-15T20:47:43Z
[ "python", "pyqt4" ]
Why do I need to keep a variable pointing to my QWidget?
38,962,683
<p>Why does the example below only work if the useless <code>_</code> variable is created?</p> <p>The <code>_</code> variable is assigned and never used. I would assume a good compiler would optimize and not even create it, instead it does make a difference.</p> <p>If I remove <code>_ =</code> and leave just <code>Test()</code>, then the window is created, but it flickers and disappears immediately, and python hangs forever.</p> <p>Here is the code:</p> <pre><code>import sys from PyQt4 import QtGui class Test(QtGui.QWidget): def __init__(self): super().__init__() self.show() app = QtGui.QApplication(sys.argv) _ = Test() sys.exit(app.exec_()) </code></pre>
5
2016-08-15T20:39:24Z
38,962,903
<p>@BPL is correct, but I wanted to add that you don't have to assign it to _. You can assign it to whatever variable you like. You just need a variable referencing to the object you created, otherwise it's collected by the garbage collector after being created.</p>
0
2016-08-15T20:54:06Z
[ "python", "pyqt4" ]
Functional Infix Implementation in Python
38,962,846
<p>I have a current implementation like this:</p> <pre><code>class Infix(object): def __init__(self, func): self.func = func def __or__(self, other): return self.func(other) def __ror__(self, other): return Infix(partial(self.func, other)) def __call__(self, v1, v2): return self.func(v1, v2) @Infix def Map(data, func): return list(map(func,data)) </code></pre> <p>This is great, it works as expected, however I want to also expand this implementation to allow for a left side only solution. If somebody can showcase a solution AND explanation, that would be phenomenal.</p> <p>Here is an example of what I would like to do...</p> <pre><code> valLabels['annotations'] \ |Map| (lambda x: x['category_id']) \ |Unique| </code></pre> <p>Where Unique is defined as below...</p> <pre><code>@Infix def Unique(data): return set(data) </code></pre> <p>Thanks!</p>
2
2016-08-15T20:50:15Z
38,963,030
<p>If you don't mind dropping the final <code>|</code>, you could do </p> <pre><code>class InfixR(object): def __init__(self, func): self.func = func def __ror__(self, other): return self.func(other) def __call__(self, v1): return self.func(v1) @InfixR def Unique(data): return set(data) </code></pre> <p>Then your expression would look like </p> <pre><code>valLabels['annotations'] \ |Map| (lambda x: x['category_id']) \ |Unique </code></pre> <hr> <p>Your original <code>Infix</code> class is (technically) abusing the <code>bitwise or</code> operator: <code>|Map|</code> is nothing special, it's just <code>value | Map | my_lambda</code>, a "bitwise or" of a list, an object, and a lambda, with a couple of spaces removed, and some newlines inserted (using <code>\</code> to prevent the interpreter from trying to treat each line separately).</p> <p>In custom classes, you can implement many of the usual operators using <code>__double_underscore__</code> methods; in the case of bitwise or, they are <code>__or__</code> and <code>__ror__</code>. </p> <p>When the python interpreter encounters the <code>|</code> operator, it first looks at the object on the left, to see if it has a <code>__or__</code> method. It then calls <code>left.__or__(right)</code>. If that is not defined or returns <code>NotImplemented</code>, it looks at the object on the right, for <code>__ror__</code> (reversed or), and calls <code>right.__ror__(left)</code>.</p> <p>The other part of this is the decorator notation.</p> <p>When you say </p> <pre><code>@Infix def Unique(data): return set(data) </code></pre> <p>The interpreter expands that into</p> <pre><code>def Unique(data): return set(data) Unique = Infix(Unique) </code></pre> <p>So you get an <code>Infix</code> instance, which has <code>__or__</code> and <code>__ror__</code> methods, and a <code>__call__</code> method. As you might guess, <code>my_obj.__call__()</code> is called when you invoke <code>my_obj()</code>.</p>
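<p>Putting the two halves of the protocol together, here is a self-contained run of the pipeline with made-up data (the classes are restated so the sketch stands alone):</p>

```python
from functools import partial

class Infix(object):
    def __init__(self, func):
        self.func = func
    def __ror__(self, other):
        # value |Map  -> remember the left operand
        return Infix(partial(self.func, other))
    def __or__(self, other):
        # ... | func  -> apply with the right operand
        return self.func(other)

class InfixR(object):
    def __init__(self, func):
        self.func = func
    def __ror__(self, other):
        # value |Unique -> call immediately, no trailing bar needed
        return self.func(other)

@Infix
def Map(data, func):
    return list(map(func, data))

@InfixR
def Unique(data):
    return set(data)

# ((data | Map) | lambda) | Unique, evaluated left to right
result = [1, 2, 2, 3] |Map| (lambda x: x * 2) |Unique
```

<p>Here <code>result</code> ends up as the set <code>{2, 4, 6}</code>: the list's own <code>__or__</code> fails for each bar, so the <code>Infix</code>/<code>InfixR</code> instances pick the calls up via <code>__ror__</code>.</p>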
2
2016-08-15T21:03:59Z
[ "python", "functional-programming" ]
JSON from streamed data in Python
38,962,888
<p>I am simply trying to keep the following input and resulting JSON string in order.</p> <p>Here is the input string and code:</p> <pre><code>import json testlist=[] # we create a list as a tuple so the dictionary order stays correct testlist=[({"header":{"stream":2,"function":3,"reply":True},"body": [({"format": "A", "value":"This is some text"})]})] print 'py data string: ' print testlist data_string = json.dumps(testlist) print 'json string: ' print data_string </code></pre> <p>Here is the output string:</p> <pre><code>json string: [{"body": [{"format": "A", "value": "This is some text"}], "header": {"stream": 2, "function": 3, "reply": true}}] </code></pre> <p>I am trying to keep the order of the output the same as the input.</p> <p>Any help would be great. I can't seem to figure this one point.</p>
-1
2016-08-15T20:53:01Z
38,964,022
<p>As Laurent wrote, your question is not very clear, but I'll give it a try:</p> <p>In the above case, <code>OrderedDict.update</code> adds the entries of <code>databody</code> to the dictionary. What you seem to want to do is something like <code>data['body'] = databody</code>, where <code>databody</code> is this list: <code>[{"format":"A","value":"This is a text\nthat I am sending\n to a file"},{"format":"U6","value":5},{"format":"Boolean","value":true},{"format":"F4","value":8.10}]</code>. So first build this list and then add it to your dictionary. Also, what you wrote in your post implies that the final variable to be parsed into JSON is a list, so do <code>data_string = json.dumps([data])</code>.</p>
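<p>Since the question is really about preserving field order on Python 2, where plain dicts don't preserve insertion order, one common fix is to build the message from <code>collections.OrderedDict</code> before dumping. A sketch with values mirroring the question's structure:</p>

```python
import json
from collections import OrderedDict

# json.dumps emits an OrderedDict's keys in insertion order
data = OrderedDict()
data['header'] = OrderedDict([('stream', 2), ('function', 3), ('reply', True)])
data['body'] = [OrderedDict([('format', 'A'), ('value', 'This is some text')])]

data_string = json.dumps([data])
```

<p>The keys now come out in the order they were inserted: <code>header</code> before <code>body</code>, and <code>stream</code>, <code>function</code>, <code>reply</code> inside the header.</p>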
0
2016-08-15T22:33:25Z
[ "python", "jython-2.5" ]
Is there a way to install all python modules at once using pip?
38,962,947
<p>I would like to install all available modules for Python 2.7.12 using a single pip command. Is there a way to do this without having to specify every single package name?</p>
0
2016-08-15T20:57:36Z
38,963,343
<p>I highly recommend against doing this - the overwhelmingly supported best practice is to use a requirements.txt file, listing the packages you want to install specifically.</p> <p>You then install it with <code>pip install -r requirements.txt</code> and it installs all the packages for your project.</p> <p>This has several benefits:</p> <ul> <li>Repeatability by installing only the required packages</li> <li>Conciseness </li> </ul> <p>However, if you really <em>do</em> want to install ALL python packages (note that there are <em>thousands</em>), you can do so via the following:</p> <blockquote class="spoiler"> <p> pip search * | grep ")[[:space:]]" | cut -f1 -d" "</p> </blockquote> <p>I strongly recommend against this as it's likely to do horrible things to your system, as it will attempt to install <em>every python package</em> (and why it's in spoiler tags).</p>
6
2016-08-15T21:28:50Z
[ "python", "windows", "ubuntu", "pip" ]
Is there a way to install all python modules at once using pip?
38,962,947
<p>I would like to install all available modules for Python 2.7.12 using a single pip command. Is there a way to do this without having to specify every single package name?</p>
0
2016-08-15T20:57:36Z
39,045,649
<p>This is a terrible idea, but you could always use <a href="https://github.com/mdipierro/autoinstaller" rel="nofollow">autoinstaller</a>, which will automatically download packages via <code>pip</code> if you don't have them installed.</p>
2
2016-08-19T18:30:49Z
[ "python", "windows", "ubuntu", "pip" ]
AWS Boto3 SQS MessageBody vs MessageAttributes
38,963,001
<p>I am setting up a SQS queue to ingest a block of config data to be processed by a backend container. My first idea was to <code>json.dumps</code> the dictionary with the config info and pass it through the <code>MessageBody</code> parameter of <code>sqsclient.send_message()</code>. However, after reading through the docs I saw there is also a <code>MessageAttributes</code> parameter which seems like I can pass key-value pairs into relatively easily (<a href="http://boto3.readthedocs.io/en/latest/reference/services/sqs.html#SQS.Client.send_message" rel="nofollow">Docs for sqsclient.sendmessage()</a>)</p> <p>I am quite unsure of the difference and if there is any benefit to using one over the other. For reference I am ingesting the queue in a python script running on a container in an EC2 instance.</p>
0
2016-08-15T21:01:26Z
38,964,763
<blockquote> <p>So if I understand correctly, the benefit is data type validation?</p> </blockquote> <p>No. </p> <p>The benefit is that the metadata is essentially out-of-band: you can attach metadata -- information about the payload -- to the "outside" of an SQS message, without modifying (or even necessarily understanding) what you're going to put "inside" the message (the body).</p> <p>If the information in question <strong>is part of</strong> the message, it should probably go in the body. If, on the other hand, it is <strong>about</strong> the message, you may want to attach it as metadata. </p> <p>For the case you described, go with JSON in the message body.</p>
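<p>To make the split concrete, a minimal sketch of how the two parameters are typically populated; the attribute name and values here are made up, and the actual <code>send_message</code> call is left commented out since it needs a live queue and AWS credentials:</p>

```python
import json

config = {'stream': 2, 'function': 3, 'reply': True}

# the payload itself goes in the body
message_body = json.dumps(config)

# data *about* the message goes in attributes, using boto3's
# documented {'DataType': ..., 'StringValue': ...} structure
message_attributes = {
    'Source': {'DataType': 'String', 'StringValue': 'config-service'},
}

# sqs.send_message(QueueUrl=queue_url,
#                  MessageBody=message_body,
#                  MessageAttributes=message_attributes)
```

<p>The consumer then parses the body with <code>json.loads</code> and can route or filter on the attributes without touching the payload at all.</p>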
1
2016-08-16T00:06:18Z
[ "python", "amazon-web-services", "boto3" ]
TypeError: super() takes at least 1 argument (0 given) error is specific to any python version?
38,963,018
<p>I'm getting this error</p> <blockquote> <p>TypeError: super() takes at least 1 argument (0 given)</p> </blockquote> <p>using this code on python2.7.11: </p> <pre><code>class Foo(object): def __init__(self): pass class Bar(Foo): def __init__(self): super().__init__() Bar() </code></pre> <p>The workaround to make it work would be:</p> <pre><code>class Foo(object): def __init__(self): pass class Bar(Foo): def __init__(self): super(Bar, self).__init__() Bar() </code></pre> <p>It seems the syntax is specific to python 3. So, what's the best way to provide compatible code between 2.x and 3.x and avoiding this error happening?</p>
1
2016-08-15T21:03:06Z
38,963,433
<p>You can use the <a href="https://pypi.python.org/pypi/future" rel="nofollow">future</a> library for Python 2/Python 3 compatibility.</p> <p>The <a href="http://python-future.org/reference.html?highlight=super" rel="nofollow">super</a> function is back-ported.</p>
1
2016-08-15T21:35:46Z
[ "python", "python-2.7", "python-2.x" ]
TypeError: super() takes at least 1 argument (0 given) error is specific to any python version?
38,963,018
<p>I'm getting this error</p> <blockquote> <p>TypeError: super() takes at least 1 argument (0 given)</p> </blockquote> <p>using this code on python2.7.11: </p> <pre><code>class Foo(object): def __init__(self): pass class Bar(Foo): def __init__(self): super().__init__() Bar() </code></pre> <p>The workaround to make it work would be:</p> <pre><code>class Foo(object): def __init__(self): pass class Bar(Foo): def __init__(self): super(Bar, self).__init__() Bar() </code></pre> <p>It seems the syntax is specific to python 3. So, what's the best way to provide compatible code between 2.x and 3.x and avoiding this error happening?</p>
1
2016-08-15T21:03:06Z
38,964,291
<p>Yes, the 0-argument syntax is specific to Python 3, see <a href="https://docs.python.org/3/whatsnew/3.0.html#builtins" rel="nofollow"><em>What's New in Python 3.0</em></a> and <a href="https://www.python.org/dev/peps/pep-3135/" rel="nofollow">PEP 3135 -- <em>New Super</em></a>.</p> <p>In Python 2 and code that must be cross-version compatible, just stick to passing in the class object and instance explicitly. </p> <p>Yes, there are "backports" available that make a no-argument version of <code>super()</code> work in Python 2 (like the <code>future</code> library) but these require a number of hacks that include a <a href="https://github.com/PythonCharmers/python-future/blob/master/src/future/builtins/newsuper.py" rel="nofollow">full scan of the class hierarchy</a> to find a matching function object. This is both fragile and slow, and simply not worth the "convenience". </p>
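<p>A minimal check that the explicit two-argument form behaves the same on either interpreter, using stand-in classes like the question's:</p>

```python
class Foo(object):
    def __init__(self):
        self.base_initialized = True

class Bar(Foo):
    def __init__(self):
        # explicit form: works unchanged on Python 2.7 and Python 3
        super(Bar, self).__init__()

bar = Bar()
```

<p><code>bar.base_initialized</code> is <code>True</code> under either interpreter, with no backport machinery involved.</p>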
1
2016-08-15T23:03:56Z
[ "python", "python-2.7", "python-2.x" ]
Python syntax error on mad libs
38,963,038
<p>So I'm trying to make a simple Mad Libs program and I'm getting multiple errors and I cannot figure out why. One of them is with the "Noah's First Program" part, and the other is with the printing of the story variable. How do I fix it?</p> <pre><code>print "Noah's First Program!" name=raw_input("Enter a name:") adjectiveone=raw_input("Enter an Adjective:") adjectivetwo=raw_input("Enter an Adjective:") adjectivethree=raw_input("Enter an Adjective:") verbone=raw_input("Enter a Verb:") verbtwo=raw_input("Enter a Verb:") verbthree=raw_input("Enter a Verb:") nounone=raw_input("Enter a Noun:") nountwo=raw_input("Enter a Noun:") nounthree=raw_input("Enter a Noun:") nounfour=raw_input("Enter a Noun:") animal=raw_input("Enter an Animal:") food=raw_input("Enter a Food:") fruit=raw_input("Enter a Fruit:") number=raw_input("Enter a Number:") superhero=raw_input("Enter a Superhero Name:") country=raw_input("Enter a Country:") dessert=raw_input("Enter a Dessert:") year=raw_input("Enter a Year:") STORY = “Man, I look really %s this morning. My name is %s, by the way, and my favorite thing to do is %s. My best friend is super %s, because he owns a(n) %s and a(n) %s! What’s your favorite animal? Mine is a %s. I like to watch them at the zoo as I eat %s while %s. Those things are all great, but my other friend is even more interesting! She has a %s, and a lifetime supply of %s! She’s really %s, and her name is %s. She enjoys %s, but only %s times per day! She usually does it with %s. My favorite superhero is %s, but hers is %s. My third friend is named %s and is foreign. His family comes from %s, and their family name is %s. To wrap things up, my favorite dessert is %s, and I’m glad to have introduced you to my friends. Maybe soon I’ll introduce you to my fourth friend %s, but that will probably be in the year %s! I love %s!" 
print STORY (adjectiveone,name,verbone,adjectivetwo,nounone,nountwo,animal,food,verbtwo,nounthree,fruit,adjectivethree,name,verbthree,number,name,superhero,superhero,name,country,name,dessert,name,year,nounfour) </code></pre>
-2
2016-08-15T21:04:36Z
38,963,125
<p>It seems to be a Python 2.7 program.</p> <p>But if you run it under Python 3, you'll get syntax errors, because <code>print</code> is no longer a statement but a function: you need parentheses.</p> <p>Wrong:</p> <pre><code>print "Noah's First Program!" </code></pre> <p>Good:</p> <pre><code>print("Noah's First Program!") </code></pre> <p>Note also that the <code>STORY</code> line uses typographic (curly) quotes <code>“…”</code>, which Python cannot parse; replace them with straight <code>"</code> quotes.</p>
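<p>The other error the question mentions is the <code>print STORY (...)</code> line: a string is not callable, so it needs the <code>%</code> operator to interpolate the values. A minimal sketch of the intended pattern, using a shortened, hypothetical template in place of the full <code>STORY</code> string:</p>

```python
# Shortened, hypothetical template for illustration; the question's STORY
# string would use the same % operator with its full tuple of values.
story = "My name is %s and today I feel %s."
name = "Noah"
mood = "great"

# Interpolate with %, not by "calling" the string as STORY(...)
result = story % (name, mood)
print(result)  # -> My name is Noah and today I feel great.
```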
-1
2016-08-15T21:10:30Z
[ "python", "syntax" ]