Small Example for pyserial using Threading
39,127,158
<p>Can anyone please give me a small and simple example of how to use threading with pyserial communication? I have been googling for over three days and I still don't have a working piece of code that integrates the two.</p> <p>Basically I am aiming to use threading in this scenario:</p> <p>Have a serial communication continuously going on in the background to obtain a certain value (say A) from an MCU.</p> <p>Stop obtaining value A - then obtain value B... and start continuously obtaining value A again.</p> <p>You can find some basic code here.</p> <pre><code>import threading
import time
import sys
import serial
import os

def Task1(ser):
    while 1:
        print "Inside Thread 1"
        ser.write('\x5A\x03\x02\x02\x02\x09')  # byte array to control a microprocessing unit
        b = ser.read(7)
        print b.encode('hex')
        print "Thread 1 still going on"
        time.sleep(1)

def Task2(ser):
    print "Inside Thread 2"
    print "I stopped Task 1 to start and execute Thread 2"
    ser.write('\x5A\x03\x02\x08\x02\x0F')
    c = ser.read(7)
    print c.encode('hex')
    print "Thread 2 complete"

def Main():
    ser = serial.Serial(3, 11520)
    t1 = threading.Thread(target=Task1, args=[ser])
    t2 = threading.Thread(target=Task2, args=[ser])
    print "Starting Thread 1"
    t1.start()
    print "Starting Thread 2"
    t2.start()
    print "=== exiting ==="
    ser.close()

if __name__ == '__main__':
    Main()
</code></pre>
0
2016-08-24T15:14:11Z
39,127,369
<p>There's no factual basis for the claim by <code>Task2</code>:</p> <pre><code>print "I stopped Task 1 to start and execute Thread 2"
</code></pre> <p>Your implementation starts one thread then immediately starts the other <em>without</em> stopping the first. This means that the <code>ser.read</code> and <code>ser.write</code> commands could interfere with each other.</p> <p>The simplest thing you could do to address this is to introduce variables that allow the threads to communicate:</p> <pre><code>import sys
import os
import time
import threading

thread_flag = None

def Report(s):
    print s
    sys.stdout.flush()  # helps to ensure messages from different threads appear in the right order

def Stop():
    global thread_flag
    thread_flag = 'stop'

def Task1(ser):
    Report("Inside Thread 1")
    global thread_flag
    thread_flag = 'go'
    while True:
        Report("Thread 1 waiting for permission to read")
        while thread_flag != 'go':
            time.sleep(0.001)
        while thread_flag == 'go':
            Report("Thread 1 is reading")
            #ser.write('\x5A\x03\x02\x02\x02\x09')  # byte array to control a microprocessing unit
            #b = ser.read(7)
            #Report(b.encode('hex'))
            time.sleep(1)
        if thread_flag == 'stop':
            break
        else:
            thread_flag = 'paused'  # signals that the inner loop is done
    Report("Thread 1 complete")

def Task2(ser):
    Report("Inside Thread 2")
    global thread_flag
    thread_flag = 'pause'  # signals Task1 to pause
    while thread_flag != 'paused':
        time.sleep(0.001)  # waits for Task1's inner loop to exit
    Report("I stopped Task 1 to start and execute Thread 2")
    #ser.write('\x5A\x03\x02\x08\x02\x0F')
    #c = ser.read(7)
    #Report(c.encode('hex'))
    thread_flag = 'go'  # signals Thread 1 to resume
    Report("Thread 2 complete")

def Main():
    ser = None  # serial.Serial(3, 11520)
    t1 = threading.Thread(target=Task1, args=[ser])
    t2 = threading.Thread(target=Task2, args=[ser])
    Report("Starting Thread 1")
    t1.start()
    time.sleep(3)
    Report("Starting Thread 2")
    t2.start()

if __name__ == '__main__':
    Main()
</code></pre> <p>That approach uses a global variable, which is often frowned upon. You could instead make <code>Task1</code> and <code>Task2</code> methods of a class whose instances carry both <code>self.ser</code> and <code>self.thread_flag</code> as attributes.</p> <p>For any inter-thread communication more complex than this, you will need to investigate locks, mutexes and semaphores (e.g. <code>threading.Lock</code>).</p>
0
2016-08-24T15:25:28Z
[ "python", "multithreading", "pyserial" ]
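The answer above ends by pointing at <code>threading.Lock</code> for anything more involved than a single flag variable. Here is a minimal sketch of that idea; note that the <code>FakePort</code> class below is a hypothetical stand-in for a real <code>serial.Serial</code> object, used only so the example runs without hardware:

```python
import threading

class FakePort:
    """Hypothetical stand-in for serial.Serial, for illustration only."""
    def __init__(self):
        self.log = []

    def write(self, data):
        self.log.append(data)

lock = threading.Lock()
port = FakePort()

def writer(payload, n):
    for _ in range(n):
        # Only one thread at a time may touch the port, so the
        # write/read sequences of Task1 and Task2 cannot interleave.
        with lock:
            port.write(payload)

t1 = threading.Thread(target=writer, args=('A', 100))
t2 = threading.Thread(target=writer, args=('B', 100))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(port.log))  # 200: every write went through the guarded section
```

In the real code, the same <code>with lock:</code> block would wrap each <code>ser.write(...)</code>/<code>ser.read(...)</code> pair in <code>Task1</code> and <code>Task2</code>.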
How to mask an image out of gray noise?
39,127,211
<p>I have the following raw image that I want to mask. I want just the circular shaped (almost) orange/brown structure to be masked white. How do I go about doing it?</p> <p><a href="http://imgur.com/a/HNmRn" rel="nofollow">http://imgur.com/a/HNmRn</a></p> <p>I've tried thresholding, but I don't want the lower threshold value to be a variable.</p>
0
2016-08-24T15:17:16Z
39,127,360
<p>You could try converting into HSV colorspace and thresholding for color. But you might not be able to remove the threshold as a variable, as every image has slight variations in the lighting. From experience I can tell you that sometimes you can generously extend the threshold to fit most of the stuff you want, but a more general solution will take more sophisticated algorithms.</p> <p>From the OpenCV documentation:</p> <pre><code># Convert BGR to HSV
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# define range of blue color in HSV
lower_blue = np.array([110,50,50])
upper_blue = np.array([130,255,255])

# Threshold the HSV image to get only blue colors
mask = cv2.inRange(hsv, lower_blue, upper_blue)
</code></pre> <p>For the yellowish tone you have there, you will have to adjust the parameters, of course.</p>
1
2016-08-24T15:25:07Z
[ "python", "opencv", "image-processing", "computer-vision" ]
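The <code>cv2.inRange</code> call in the answer above is just a per-pixel range test. To see what it does without OpenCV installed, the same test can be sketched with plain NumPy (a simplified illustrative stand-in, not the OpenCV implementation):

```python
import numpy as np

def in_range(img, lower, upper):
    """Per-pixel range test: 255 where every channel lies within
    [lower, upper], else 0 -- mimicking cv2.inRange's output format."""
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# Tiny 2x2 "HSV" image: only the first pixel falls inside the blue range.
hsv = np.array([[[115, 100, 100], [10, 50, 50]],
                [[140, 200, 200], [120, 20, 255]]], dtype=np.uint8)
lower_blue = np.array([110, 50, 50])
upper_blue = np.array([130, 255, 255])

mask = in_range(hsv, lower_blue, upper_blue)
print(mask)  # only the first pixel is white (255)
```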
How to mask an image out of gray noise?
39,127,211
<p>I have the following raw image that I want to mask. I want just the circular shaped (almost) orange/brown structure to be masked white. How do I go about doing it?</p> <p><a href="http://imgur.com/a/HNmRn" rel="nofollow">http://imgur.com/a/HNmRn</a></p> <p>I've tried thresholding, but I don't want the lower threshold value to be a variable.</p>
0
2016-08-24T15:17:16Z
39,153,346
<p>Use a Hough circle transform to find the circle that separates the eye from the gray area.</p> <p>The basic idea is to run the Hough circle transform and then find the circle with the biggest difference between the mean intensity inside the circle and outside it.</p> <p>The result: <a href="http://i.stack.imgur.com/FEZdN.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/FEZdN.jpg" alt="enter image description here"></a></p> <p>The code:</p> <pre><code>import cv2
import numpy as np

# Read image
Irgb = cv2.imread('eye.jpg')

# Take the first channel (no specific reason, just good contrast between inside the eye and outside)
Igray = Irgb[:,:,0]

# Run median filter to reduce noise
IgrayFilter = cv2.medianBlur(Igray,101)

# Find circles using Hough circles
minRadius = np.floor(np.min(Igray.shape)/2)
circles = cv2.HoughCircles(IgrayFilter, cv2.HOUGH_GRADIENT, dp=0.5,param1=100,param2=50,minRadius=int(minRadius),minDist=100)
circles = np.uint16(np.around(circles))
cimg = Irgb

# For each circle that we found, find the intensity values inside the circle and outside.
# We take the circle that has the biggest difference between inside and outside.
diff = []
for i in circles[0, :]:
    # Create masks from the circle
    mask = np.zeros_like(Igray)
    maskInverse = np.ones_like(Igray)
    cv2.circle(mask, (i[0], i[1]), i[2], 1, cv2.FILLED)
    cv2.circle(maskInverse, (i[0], i[1]), i[2], 0, cv2.FILLED)

    # Find mean values inside and outside the mask
    insideMeanValues = np.mean(np.multiply(mask,Igray))
    outsideMeanValues = np.mean(np.multiply(maskInverse, Igray))

    # Save the differences
    diff.append(abs(insideMeanValues-outsideMeanValues))

# Take the circle with the biggest difference in color as the border circle
circleID = np.argmax(diff)
circleInfo = circles[0, circleID]

# Create mask for the final image (note: use the selected circle, not the loop variable)
mask = np.zeros_like(Igray)
cv2.circle(mask, (circleInfo[0], circleInfo[1]), circleInfo[2], 1, cv2.FILLED)

# Show the final image only inside the mask
finalImage = Irgb
finalImage[:,:,0] = np.multiply(finalImage[:,:,0],mask)
finalImage[:,:,1] = np.multiply(finalImage[:,:,1],mask)
finalImage[:,:,2] = np.multiply(finalImage[:,:,2],mask)

cv2.imwrite('circle.jpg',finalImage)
</code></pre>
1
2016-08-25T19:30:13Z
[ "python", "opencv", "image-processing", "computer-vision" ]
Gspread - Can Not Retrieve Spreadsheets
39,127,224
<p>I am trying to use gspread in Python 2.7 to retrieve spreadsheets. I seem to be able to log in, but whenever I issue the command</p> <pre><code>gc.openall()
</code></pre> <p>it just returns an empty list. I have given the service account admin access to everything, and the Sheets API is enabled in my Google console. Can anyone point out what I am doing wrong?</p> <p>Below is my <code>name-hash.json</code> file returned by Google and used to log in:</p> <pre><code>{
  "type": "service_account",
  "project_id": "name-id",
  "private_key_id": "PRIVATE KEY ID",
  "private_key": "-----BEGIN PRIVATE KEY----- ... \n-----END PRIVATE KEY-----\n",
  "client_email": "test@mname-id.iam.gserviceaccount.com",
  "client_id": "CLIENTID",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/test-279%40name-id.iam.gserviceaccount.com"
}
</code></pre> <p>My code is as follows:</p> <pre><code>import gspread
from oauth2client.service_account import ServiceAccountCredentials

def login(json_key):
    scope = ['https://spreadsheets.google.com/feeds']
    credentials = ServiceAccountCredentials.from_json_keyfile_name(json_key, scope)
    gc = gspread.authorize(credentials)
    return gc

gc = login('/Users/username/Downloads/name-hash.json')
files = gc.openall()
print(files)
# []
</code></pre>
1
2016-08-24T15:18:03Z
39,128,653
<p>I can think of mainly two issues that could be at play here. First, gspread may only be able to access Google Sheets that are at the root of the Google Drive. To circumvent that, I strongly recommend moving to the v4 Google API Python client instead of gspread.</p> <p>If you do have Google Sheets in the root, my guess is that you are missing sharing the spreadsheets themselves with the address <code>test@mname-id.iam.gserviceaccount.com</code>.</p> <p>To verify that, I would suggest replacing <code>openall</code> with <code>open_by_key</code> and supplying a single specific Google Sheet id to help debugging.</p>
1
2016-08-24T16:31:56Z
[ "python", "google-spreadsheet", "google-sheets-api", "gspread" ]
Items vs item loaders in scrapy
39,127,256
<p>I'm pretty new to Scrapy. I know that items are used to populate scraped data, but I can't understand the difference between items and item loaders. I tried to read some example code; it used item loaders to store data instead of items, and I can't understand why. The Scrapy documentation wasn't clear enough for me. Can anyone give a simple explanation (ideally with an example) of when item loaders are used and what additional facilities they provide over items?</p>
0
2016-08-24T15:19:28Z
39,130,517
<p>I really like the official explanation in the docs:</p> <blockquote> <p>Item Loaders provide a convenient mechanism for populating scraped Items. Even though Items can be populated using their own dictionary-like API, Item Loaders provide a much more convenient API for populating them from a scraping process, by automating some common tasks like parsing the raw extracted data before assigning it.</p> <p><strong>In other words, Items provide the container of scraped data, while Item Loaders provide the mechanism for populating that container.</strong></p> </blockquote> <p>The last paragraph should answer your question.<br> Item loaders are great since they give you many processing shortcuts and let you reuse a bunch of code to keep everything tidy, clean and understandable.</p> <p>Comparison example case. Let's say we want to scrape this item:</p> <pre><code>from scrapy import Item, Field

class MyItem(Item):
    full_name = Field()
    bio = Field()
    age = Field()
    weight = Field()
    height = Field()
</code></pre> <p>An Item-only approach would look something like this:</p> <pre><code>def parse(self, response):
    item = MyItem()

    full_name = response.xpath("//div[contains(@class,'name')]/text()").extract()
    # i.e. returns ugly ['John\n', '\n\t  ', ' Snow']
    item['full_name'] = ' '.join(i.strip() for i in full_name if i.strip())

    bio = response.xpath("//div[contains(@class,'bio')]/text()").extract()
    item['bio'] = ' '.join(i.strip() for i in bio if i.strip())

    age = response.xpath("//div[@class='age']/text()").extract_first(0)
    item['age'] = int(age)

    weight = response.xpath("//div[@class='weight']/text()").extract_first(0)
    item['weight'] = int(weight)

    height = response.xpath("//div[@class='height']/text()").extract_first(0)
    item['height'] = int(height)

    return item
</code></pre> <p>vs the Item Loaders approach:</p> <pre><code># define once in items.py
from scrapy.loader import ItemLoader
from scrapy.loader.processors import Compose, MapCompose, Join, TakeFirst

clean_text = Compose(MapCompose(lambda v: v.strip()), Join())
to_int = Compose(TakeFirst(), int)

class MyItemLoader(ItemLoader):
    default_item_class = MyItem
    full_name_out = clean_text
    bio_out = clean_text
    age_out = to_int
    weight_out = to_int
    height_out = to_int

# parse in as many different places and times as you want
def parse(self, response):
    loader = MyItemLoader(selector=response)
    loader.add_xpath('full_name', "//div[contains(@class,'name')]/text()")
    loader.add_xpath('bio', "//div[contains(@class,'bio')]/text()")
    loader.add_xpath('age', "//div[@class='age']/text()")
    loader.add_xpath('weight', "//div[@class='weight']/text()")
    loader.add_xpath('height', "//div[@class='height']/text()")
    return loader.load_item()
</code></pre> <p>As you can see, the Item Loader version is much cleaner and easier to scale. If you had 20 more fields, many of which share the same processing logic, it would be unmanageable without Item Loaders. Item Loaders are awesome and you should use them!</p>
1
2016-08-24T18:24:13Z
[ "python", "web-scraping", "scrapy", "scrapy-spider" ]
Retrieving data using Beautiful Soup
39,127,272
<p>So I've been trying to retrieve some data using BeautifulSoup, but I've hit a brick wall.</p> <pre><code>&lt;tr data-name="A Color Similar to Slate"&gt;
  &lt;th class="unique"&gt;&lt;a href="/item/5052/6/223d382afee2ac6857d3298b800652e0" class="item-link"&gt;&lt;span style='color: #7D6D00'&gt;A Color Similar to Slate&lt;/span&gt;&lt;/a&gt;&lt;/th&gt;
  &lt;td class=unique&gt;0/10&lt;/td&gt;
  &lt;td class="unique" data-conversion="14 ref"&gt;35,000&lt;/td&gt;
  &lt;td class="unique" data-conversion="13.02 ref"&gt;32,550&lt;/td&gt;
  &lt;td class="unique" data-conversion="13.51 ref"&gt;33,775&lt;/td&gt;
  &lt;td class="unique" style="text-align: center;"&gt;&lt;a class="item-link-backpack" href="http://backpack.tf/stats/Unique/A+Color+Similar+to+Slate/tradable/craftable"&gt;&lt;img src="/img/bptf-icon.png" alt="View on Backpack.tf"/&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
</code></pre> <p>What I'd like my script to do is to take an input (in this case the string "A Color Similar to Slate") and have it return the data below (0/10, 14 ref, etc.) so that I can compare it to a different set of data. How can I make it work?</p>
0
2016-08-24T15:20:41Z
39,127,363
<pre><code>similar_color = soup.find('tr', {'data-name': 'A Color Similar to Slate'})

for value in similar_color.find_all('td'):
    print(value.text)
</code></pre> <p>Should result in:</p> <pre><code>0/10
35,000
</code></pre> <p>and so on, so forth. However, it seems like you want to grab the text value sometimes, and the <code>data-conversion</code> value other times. To do that, you would just substitute the <code>print(value.text)</code> line with:</p> <pre><code>print(value.attrs.get('data-conversion'))
</code></pre>
1
2016-08-24T15:25:10Z
[ "python", "beautifulsoup" ]
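If BeautifulSoup isn't available, the same attribute lookup from the answer above can be sketched with the standard library's <code>html.parser</code>. The <code>RowParser</code> class below is an illustrative stand-in, not part of any library, and the sample HTML is a shortened version of the row in the question:

```python
from html.parser import HTMLParser

class RowParser(HTMLParser):
    """Collect the data-conversion attribute of every <td> inside the
    <tr> whose data-name matches the target string."""
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.in_row = False
        self.conversions = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'tr':
            # Enter "collecting" mode only for the matching row.
            self.in_row = attrs.get('data-name') == self.target
        elif tag == 'td' and self.in_row:
            self.conversions.append(attrs.get('data-conversion'))

    def handle_endtag(self, tag):
        if tag == 'tr':
            self.in_row = False

html = '''<tr data-name="A Color Similar to Slate">
<td class="unique">0/10</td>
<td class="unique" data-conversion="14 ref">35,000</td>
</tr>'''

parser = RowParser('A Color Similar to Slate')
parser.feed(html)
print(parser.conversions)  # [None, '14 ref']
```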
Retrieving data using Beautiful Soup
39,127,272
<p>So I've been trying to retrieve some data using BeautifulSoup but I've hit a brick wall.</p> <pre><code>&lt;tr data-name="A Color Similar to Slate"&gt; &lt;th class="unique"&gt;&lt;a href="/item/5052/6/223d382afee2ac6857d3298b800652e0" class="item-link"&gt;&lt;span style='color: #7D6D00'&gt;A Color Similar to Slate&lt;/span&gt;&lt;/a&gt;&lt;/th&gt; &lt;td class=unique&gt;0/10&lt;/td&gt; &lt;td class="unique" data-conversion="14 ref"&gt;35,000&lt;/td&gt; &lt;td class="unique" data-conversion="13.02 ref"&gt;32,550&lt;/td&gt; &lt;td class="unique" data-conversion="13.51 ref"&gt;33,775&lt;/td&gt; &lt;td class="unique" style="text-align: center;"&gt;&lt;a class="item-link-backpack" href="http://backpack.tf/stats/Unique/A+Color+Similar+to+Slate/tradable/craftable"&gt;&lt;img src="/img/bptf-icon.png" alt="View on Backpack.tf"/&gt;&lt;/a&gt;&lt;/td&gt; &lt;/tr&gt; </code></pre> <p>What I'd like my script to do is to take an input (in this case a "A Color Similar to Slate" string) and have it return the data below(0/10, 14 ref etc) so that I can compare it to a different set of data. How can I make it work?</p>
0
2016-08-24T15:20:41Z
39,127,378
<p>In case you will use it on other HTML-style files:</p> <pre><code>from bs4 import BeautifulSoup

html = """&lt;tr data-name="A Color Similar to Slate"&gt;
&lt;th class="unique"&gt;&lt;a href="/item/5052/6/223d382afee2ac6857d3298b800652e0" class="item-link"&gt;&lt;span style='color: #7D6D00'&gt;A Color Similar to Slate&lt;/span&gt;&lt;/a&gt;&lt;/th&gt;
&lt;td class=unique&gt;0/10&lt;/td&gt;
&lt;td class="unique" data-conversion="14 ref"&gt;35,000&lt;/td&gt;
&lt;td class="unique" data-conversion="13.02 ref"&gt;32,550&lt;/td&gt;
&lt;td class="unique" data-conversion="13.51 ref"&gt;33,775&lt;/td&gt;
&lt;td class="unique" style="text-align: center;"&gt;&lt;a class="item-link-backpack" href="http://backpack.tf/stats/Unique/A+Color+Similar+to+Slate/tradable/craftable"&gt;&lt;img src="/img/bptf-icon.png" alt="View on Backpack.tf"/&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;"""

soup = BeautifulSoup(html, 'html.parser')
texts = [i.get_text() for i in soup.find_all() if i.get_text()]
print(texts[texts.index('A Color Similar to Slate'):])
</code></pre> <p>This checks all the tags, not just <code>td</code>. The output is <code>['A Color Similar to Slate', 'A Color Similar to Slate', 'A Color Similar to Slate', '0/10', '35,000', '32,550', '33,775']</code>.</p>
0
2016-08-24T15:25:56Z
[ "python", "beautifulsoup" ]
Filter in template to arrange data specifically in django
39,127,316
<p>How can I use a filter in my template to put the values in a row where the key matches?</p> <p>So, for example, row 1 should have all the values for the Inner OD key, and row 2 should have all the values for the Outter OD key.</p> <p>Any help would be greatly appreciated.</p> <p>Here is my view.py:</p> <pre><code>@login_required
def shipping(request, id):
    sheet_data = Sheet.objects.get(pk=id)
    work_order = sheet_data.work_order
    customer_data = Customer.objects.get(id=sheet_data.customer_id)
    customer_name = customer_data.customer_name
    title_head = 'Shipping-%s' % sheet_data.work_order
    complete_data = Sheet.objects.raw("""select s.id, d.id d_id, s.work_order, d.target, i.reading, d.description, i.serial_number
                                         from app_sheet s
                                         left join app_dimension d on s.id = d.sheet_id
                                         left join app_inspection_vals i on d.id = i.dimension_id""")

    for c_d in complete_data:
        dim_description = Dimension.objects.filter(sheet_id=c_d.id).values_list('description', flat=True).distinct()
        dim_id = Dimension.objects.filter(sheet_id=c_d.id)[:1]
        for d_i in dim_id:
            dim_data = Inspection_vals.objects.filter(dimension_id=d_i.id)
            reading_data = Inspection_vals.objects.filter(dimension_id=d_i.id)

    key_list = []
    vals_list = []
    for xr in complete_data:
        key_list.append(xr.description)
        vals_list.append(xr.reading)

    #print reading_desc
    sample_size = dim_data
    res = {}
    for i in range(len(key_list)):
        if key_list[i] in res:
            res[key_list[i]].append(vals_list[i])
        else:
            res[key_list[i]] = [vals_list[i]]
    reading_desc = res

    return render(request, 'app/shipping.html', {
        'work_order': work_order,
        'sample_size': sample_size,
        'customer_name': customer_name,
        'title': title_head,
        'complete_data': complete_data,
        'dim_description': dim_description,
        'reading_desc': reading_desc,
    })
</code></pre> <p>Here is the output of reading_desc, which correctly pairs each key with its values:</p> <pre><code>{u'Inner OD': [2, None, None, 1, 1, 1, 6, 1, 2, 4, 5, 5, 7, 1, 1, 2, 3, 4, 5, 1, None, 3, 4, 6],
 u'Outter OD': [3, 4, 5, 1, 3, 2, 2, 2, 2, 1, 1, 1, 1, 3, 4, 3, 5, 6, 5, 2, 3]}
</code></pre> <p>Here is my template shipping.html:</p> <pre><code>&lt;div class="container"&gt;
  &lt;div class="row"&gt;
    &lt;div&gt;
      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Serial Number&lt;/th&gt;
            {% for ss in sample_size %}
              &lt;th&gt;{{ ss.serial_number }}&lt;/th&gt;
            {% endfor %}
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          {% for desc in dim_description.all %}
            &lt;tr&gt;
              &lt;th&gt;{{ desc }}&lt;/th&gt;
              {% for r_c in reading_desc %}
                &lt;td class="{% cycle r_c, r_c %}"&gt;{{ r_c }}&lt;/td&gt;
              {% endfor %}
          {% endfor %}
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;
</code></pre> <p>Here is what I would like it to look like:</p> <p><a href="http://i.stack.imgur.com/d1Cfx.gif" rel="nofollow"><img src="http://i.stack.imgur.com/d1Cfx.gif" alt="Filter"></a></p>
0
2016-08-24T15:22:50Z
39,129,118
<p>Alright then, here we go. This is how you loop over a dictionary in an HTML page using Django:</p> <pre><code>&lt;tbody&gt;
  &lt;tr&gt;
    {% for desc in dim_description.all %}
      &lt;th&gt;{{ desc }}&lt;/th&gt;
    {% endfor %}
  &lt;/tr&gt;
  {% for key, values in reading_desc.items %}
    &lt;tr&gt;
      &lt;td class="some_class_here"&gt;{{ key }}&lt;/td&gt;
      {% for v in values %}
        &lt;td class="some_class_here"&gt;{{ v }}&lt;/td&gt;
      {% endfor %}
    &lt;/tr&gt;
  {% endfor %}
&lt;/tbody&gt;
</code></pre> <p>This should be enough to get you going. There are a few things happening here. First, in the loop that iterates over the <code>dim_description</code> list, we add a single row with a number of <code>&lt;th&gt;</code> tags to display the headers (in your case 1-24). Then, in the second loop, we loop over the dictionary: we start by displaying the key (Inner OD for the first iteration), then we loop over all the values for that key (the inner for loop) so each value gets its own cell. This may not be the full answer, but it should give you enough of the bones to tackle your problem.</p>
1
2016-08-24T16:59:32Z
[ "python", "django", "python-2.7" ]
Python Pandas - Add a new column with value based on first and last name in multiple columns
39,127,354
<p>Although I'm still a beginner myself, I'm trying to explain some Pandas fundamentals to colleagues who usually manipulate CSV files with Excel.</p> <p>I hit a wall with my ability to find a "good" answer for solving a given problem I'd like to use as an example.</p> <p>I have a CSV file like this:</p> <pre><code>"Id","First","Last"
"109","Karl","Evans"
"113","Louise","Hudson"
"106","Catherine","Johnson"
</code></pre> <p>and I'm importing it into Python like this:</p> <pre><code>import pandas

df = pandas.read_csv('C:\\example.csv')
</code></pre> <p>I want to add a new column to <code>df</code> called "StartsWithJOrK".</p> <p>It should say "Yay!" for anyone whose lowercased-first-name OR whose lowercased-last-name starts with a "j" or a "k". It should say "BooHiss" for anyone for whom neither lowercased-name starts with a "j" or a "k".</p> <p><em>(It's a rather overwrought example, but I feel like it packs in a lot of things I either don't know how to do or don't know how to combine "pythonically.")</em></p> <p><strong>What's the most pythonic, fewest-lines-of-code way to do this?</strong></p>
1
2016-08-24T15:24:29Z
39,127,520
<p>Not the easiest introduction to Pandas...</p> <pre><code>df['StartsWithJorK'] = 'BooHiss'
starting_letters = ['j', 'k']
df.loc[(df.First.str[0].str.lower().isin(starting_letters))
       | df.Last.str[0].str.lower().isin(starting_letters), 'StartsWithJorK'] = 'Yay!'

&gt;&gt;&gt; df
    Id      First     Last StartsWithJorK
0  109       Karl    Evans           Yay!
1  113     Louise   Hudson        BooHiss
2  106  Catherine  Johnson           Yay!
</code></pre> <p><code>df.First.str[0]</code> finds the first character of the name.</p> <p><code>.str.lower()</code> converts this series of letters to lower case.</p> <p><code>.isin(starting_letters)</code> checks if each lower case letter is in our list of starting letters, i.e. 'j' and 'k'.</p> <p><code>.loc</code> is for <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing" rel="nofollow">label and boolean based indexing</a> where the column <code>StartsWithJorK</code> is set to <code>Yay!</code> for each matching condition.</p>
2
2016-08-24T15:33:57Z
[ "python", "csv", "pandas" ]
Python Pandas - Add a new column with value based on first and last name in multiple columns
39,127,354
<p>Although I'm still a beginner myself, I'm trying to explain some Pandas fundamentals to colleagues who usually manipulate CSV files with Excel.</p> <p>I hit a wall with my ability to find a "good" answer for solving a given problem I'd like to use as an example.</p> <p>I have a CSV file like this:</p> <pre><code>"Id","First","Last" "109","Karl","Evans" "113","Louise","Hudson" "106","Catherine","Johnson" </code></pre> <p>and I'm importing it into Python like this:</p> <pre><code>import pandas df = pandas.read_csv('C:\\example.csv') </code></pre> <p>I want to add a new column to <code>df</code> called "StartsWithJOrK".</p> <p>It should say "Yay!" for anyone whose lowercased-first-name OR whose lowercased-last-name starts with a "j" or a "k". It should say "BooHiss" for anyone for whom neither lowercased-name starts with a "j" or a "k".</p> <p><em>(It's a rather overwrought example, but I feel like it packs in a lot of things I either don't know how to do or don't know how combine "pythonically.")</em></p> <p><strong>What's the most pythonic, fewest-lines-of-code way to do this?</strong></p>
1
2016-08-24T15:24:29Z
39,127,602
<p>If you don't mind importing <code>numpy</code> too, you can do</p> <pre><code>import numpy as np
import pandas as pd

mask = df['Last'].str.match('[JjKk]') | df['First'].str.match('[JjKk]')
df['StartsWithJOrK'] = np.where(mask, 'Yay!', 'BooHiss')
</code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>    Id      First     Last StartsWithJOrK
0  109       Karl    Evans           Yay!
1  113     Louise   Hudson        BooHiss
2  106  Catherine  Johnson           Yay!
</code></pre> <p>There are other ways of creating the above <code>mask</code>. Here is one:</p> <pre><code>mask = (df[['First', 'Last']]
        .apply(lambda x: x.str.match('[JjKk]'), axis=1)
        .any(axis=1))
</code></pre> <p>Or, taking a cue from @Alexander's answer's use of <code>.str.lower()</code>:</p> <pre><code>mask = (df[['First', 'Last']]
        .apply(lambda x: x.str.lower().str.match('[jk]'), axis=1)
        .any(axis=1))
</code></pre>
2
2016-08-24T15:37:43Z
[ "python", "csv", "pandas" ]
dealing with inf in list
39,127,388
<p>I have a list of equations that I am trying to solve for a variable, but it is not working because some of the elements in the list contain <code>inf</code>. How can I deal with the <code>inf</code> values so that I can solve for my variable <code>m</code>? When I run the code it says "Cannot convert 0 to int", so the solve command has an issue with the <code>inf</code> values in the list <code>A</code>.</p> <pre><code>from sympy import *
from numpy import inf

m = var('m')
A = [inf*m - 1, inf*m - 2, .1122*m - 7, 0.054*m - 8]
m = [solve(eq,m) for eq in A]
</code></pre>
0
2016-08-24T15:26:49Z
39,128,001
<p>It's usually not a good idea to mix <code>sympy</code> and <code>numpy</code>. Sympy uses <code>oo</code> for infinity. You can use <code>.subs</code> to replace the <code>inf</code> with <code>oo</code>:</p> <pre><code>from sympy import *
from numpy import inf

m = var('m')
A = [inf*m - 1, inf*m - 2, .1122*m - 7, 0.054*m - 8]
sol = [solve(eq.subs(inf, oo), m) for eq in A]
</code></pre> <p><code>sol</code> is <code>[[nan], [nan], [62.3885918003565], [148.148148148148]]</code></p>
0
2016-08-24T15:58:43Z
[ "python", "list", "numpy", "infinity" ]
Comparing two dataframe values in python
39,127,450
<p>I have two dataframes <code>df1['LicId']</code> and <code>df2['LicId']</code>.</p> <p><code>df1['LicId']</code> will always have one value:</p> <pre><code>     LicId
0  abc1234
</code></pre> <p>However, <code>df2['LicId']</code> will have several ids:</p> <pre><code>     LicId
0  abc1234
1  xyz2345
</code></pre> <p>My task is to compare <code>df1['LicId']</code> with <code>df2['LicId']</code> and execute the code only if there is a match between the two.</p> <p>I have tried:</p> <pre><code>if df1['LicId'][0]==df2['LicId'][0]:
    # remaining code
</code></pre> <p>However, this will check only for <code>abc1234</code> - it has to check all the index values of <code>df2</code>. Later, I may also have <code>xyz2345</code> in <code>df1</code>.</p> <p>Can someone let me know how to handle this?</p>
1
2016-08-24T15:30:23Z
39,128,384
<p>You can match values with <code>isin()</code>:</p> <pre><code>df1 = pd.DataFrame({'LicId':['abc1234', 'a', 'b']})
df2 = pd.DataFrame({'LicId':['abc1234', 'xyz2345', 'a', 'c']})
</code></pre> <p><code>df1</code>:</p> <pre><code>     LicId
0  abc1234
1        a
2        b
</code></pre> <p><code>df2</code>:</p> <pre><code>     LicId
0  abc1234
1  xyz2345
2        a
3        c
</code></pre> <p>Matching values:</p> <pre><code>if len(df2.loc[df2['LicId'].isin(df1['LicId'])]) &gt; 0:
    print(df2.loc[df2['LicId'].isin(df1['LicId'])])
    #remaining code
</code></pre> <p>Output:</p> <pre><code>     LicId
0  abc1234
2        a
</code></pre>
1
2016-08-24T16:17:56Z
[ "python", "pandas", "indexing", "compare" ]
How to select a value from Autolist (search suggestions) in Selenium Python
39,127,468
<p>Go to google.com</p> <p>Type a keyword (as shown in this attachment) <a href="http://i.stack.imgur.com/uBD2c.jpg" rel="nofollow">http://i.stack.imgur.com/uBD2c.jpg</a></p> <p>I want to select the 3rd or 4th value from the search suggestions. What method should I use in Selenium with Python?</p>
-1
2016-08-24T15:31:29Z
39,127,879
<p>The Google search page contains a <code>&lt;div class="gstl_0 sbdd_a"&gt;</code>.</p> <p>When you start typing into the search box, that div becomes populated with a <code>&lt;ul role="listbox"&gt;</code>. The <code>&lt;li&gt;</code> elements in that list contain the 4 suggestions. Pick one, and call its <code>.click()</code> method.</p>
0
2016-08-24T15:52:32Z
[ "python", "python-2.7", "python-3.x", "selenium", "pycharm" ]
How to select a value from Autolist (search suggestions) in Selenium Python
39,127,468
<p>Go to google.com</p> <p>Type a keyword (as shown in this attachment) <a href="http://i.stack.imgur.com/uBD2c.jpg" rel="nofollow">http://i.stack.imgur.com/uBD2c.jpg</a></p> <p>I want to select the 3rd or 4th value from the search suggestions. What method should I use in Selenium with Python?</p>
-1
2016-08-24T15:31:29Z
39,127,908
<p>I don't know Python, but I do have C# code that worked for me. You can give it a try.</p> <pre><code>IWebDriver driver = new InternetExplorerDriver();
driver.Navigate().GoToUrl("https://www.google.com/");

IWebElement txtboxSearch = driver.FindElement(By.Id("lst-ib"));
txtboxSearch.SendKeys("ap");

IList&lt;IWebElement&gt; autoSearchList = driver.FindElements(By.CssSelector(".sbsb_c.gsfs"));
autoSearchList[1].Click();
</code></pre>
1
2016-08-24T15:53:59Z
[ "python", "python-2.7", "python-3.x", "selenium", "pycharm" ]
How to select a value from Autolist (search suggestions) in Selenium Python
39,127,468
<p>Go to google.com</p> <p>Type a keyword (as shown in this attachment) <a href="http://i.stack.imgur.com/uBD2c.jpg" rel="nofollow">http://i.stack.imgur.com/uBD2c.jpg</a></p> <p>I want to select the 3rd or 4th value from the search suggestions. What method should I use in Selenium with Python?</p>
-1
2016-08-24T15:31:29Z
39,128,537
<pre><code>from selenium import webdriver
import time

driver = webdriver.Chrome('Path to chromedriver\chromedriver.exe')
driver.get('http://google.com')
driver.maximize_window()
driver.find_element_by_name('q').send_keys('Shah')  # pass whatever you want to search
time.sleep(5)

# to click on the third element of the search suggestions
driver.find_element_by_xpath('//div/div[3]/form/div[2]/div[2]/div[1]/div[2]/div[2]/div[1]/div/ul/li[3]/div/div[2]').click()

# to click on the fourth element of the search suggestions,
# uncomment the next line and comment the previous one
#driver.find_element_by_xpath('//div/div[3]/form/div[2]/div[2]/div[1]/div[2]/div[2]/div[1]/div/ul/li[4]/div/div[2]').click()
</code></pre> <p>Hope this helps.</p>
0
2016-08-24T16:25:23Z
[ "python", "python-2.7", "python-3.x", "selenium", "pycharm" ]
How to use Future with map method of the Executor from dask.distrubuted (Python library)?
39,127,531
<p>I am running <a href="https://distributed.readthedocs.io/en/latest/index.html" rel="nofollow">dask.distributed</a> cluster. </p> <p>My task includes chained computations, where the last step is a parallel processing of a list, created on previous steps, using <code>Executor.map</code> method. The length of the list is not known in advance, because it is generated from intermediate results during computation.</p> <p>The code looks like the following:</p> <pre><code>from distributed import Executor, progress def process(): e = Executor('{address}:{port}'.format(address=config('SERVER_ADDR'), port=config('SERVER_PORT'))) futures = [] gen_list1 = get_list_1() gen_f1 = e.map(generate_1, gen_list1) futures.append(gen_f1) gen_list2 = get_list_2() gen_f2 = e.map(generate_2, gen_list2) futures.append(gen_f2) m_list = e.submit(create_m_list) # m_list is created from gen_list1 and gen_list2 # some results of processing are stored in the database # and create_m_list doesn't need additional arguments futures.append(m_list) m_result = e.map(process_m_list, m_list) futures.append(m_result) return futures if __name__ == '__main__': r = process() progress(r) </code></pre> <p>However, I'm getting the error <code>TypeError: zip argument #1 must support iteration</code>:</p> <pre><code>File "F:/wl/under_development/database/jobs.py", line 366, in start_job match_result = e.map(process_m_list, m_list) File "C:\Anaconda\lib\site-packages\distributed\executor.py", line 672, in map iterables = list(zip(*zip(*iterables))) TypeError: zip argument #1 must support iteration </code></pre> <p><code>gen_list1</code> and <code>gen_list2</code> are computed independently, but <code>m_list</code> is created from <code>gen_list1</code> and <code>gen_list2</code> and therefore depends on them.</p> <p>I've also tried calling <code>.result()</code> method of <code>m_list</code>, however, it has blocked the function <code>process</code> until computations of <code>gen_list1</code> and 
<code>gen_list2</code> have finished.</p> <p>I've also tried calling asynchronous method <code>._result</code> of <code>m_list</code>, but it has produced the same error "zip argument #1 must support iteration". Same error has been obtained with <code>dask.delayed</code> (<code>m_result = e.map(process_m_list, delayed(m_list))</code>).</p> <p>Documentation of the <code>dask.distributed</code> is vague in this aspect, examples mention only real list objects that already exist. However, other posts here in SO, as well as Google, suggest that it should be possible.</p> <p>Here is the version string of my Python distribution</p> <p><code>Python 2.7.11 |Anaconda custom (64-bit)| (default, Feb 16 2016, 09:58:36) [MSC v.1500 64 bit (AMD64)] on win32</code></p>
3
2016-08-24T15:34:22Z
39,127,953
<p>The crux of your problem seems to be here:</p> <pre><code>m_list = e.submit(create_m_list) m_result = e.map(process_m_list, m_list) </code></pre> <p>You are correct that you cannot map a function over an individual future. You need to pass <code>map</code> a sequence. Dask doesn't know how many functions to submit without knowing more about your data. Calling <code>.result()</code> on the future would be a fine solution:</p> <pre><code>m_list = e.submit(create_m_list) m_result = e.map(process_m_list, m_list.result()) </code></pre> <blockquote> <p>I've also tried calling .result() method of m_list, however, it has blocked the function process until computations of gen_list1 and gen_list2 have finished.</p> </blockquote> <p>That's correct. Without any additional information the scheduler will prefer computations that were submitted earlier. You could resolve this problem by submitting your <code>create_m_list</code> function first, then submitting your extra computations, then waiting on the <code>create_m_list</code> result.</p> <pre><code>m_list = e.submit(create_m_list) # give this highest priority f1 = e.map(generate_1, get_list_1()) f2 = e.map(generate_2, get_list_2()) L = m_list.result() # block on m_list until done m_result = e.map(process_m_list, L) # submit more tasks return [f1, f2, m_result] </code></pre>
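The same submit-then-block-then-map shape can be seen with the standard library's concurrent.futures, which shares the Future/Executor vocabulary. This is an analogy for illustration, not dask itself, and the function bodies are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def create_m_list():
    # stands in for the expensive intermediate computation
    return [1, 2, 3]

def process_m_list(x):
    return x * 10

with ThreadPoolExecutor() as e:
    m_list = e.submit(create_m_list)            # a single future, not a sequence
    L = m_list.result()                         # block until the list actually exists
    m_result = list(e.map(process_m_list, L))   # only now is the length known

print(m_result)
```

The key point carries over: `map` needs a concrete sequence, so you must resolve the future that produces it before mapping.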
1
2016-08-24T15:56:20Z
[ "python", "python-2.7", "distributed", "dask" ]
What to do in order to use SQL in docker container deployed in cloud?
39,127,617
<p>Right now, I have a simple web server in Python, packaged in a docker container, Linux based. That container is deployed in a private openstack cloud and a volume is attached to the container, mounted.</p> <p>The web server is saving data in a json file on the volume. However, I would like to replace that saving mechanism by a SQL database (postgres? sqlite?).</p> <p>I don't have experience with database management (I know queries though). How do I install SQL in that container? Does it need to be on the volume, or is there a way to install the software in the container and use the volume only for the actual data?</p>
0
2016-08-24T15:38:26Z
39,127,804
<p>Disclaimer: I don't know anything about Openstack and answer this by thinking it will be similar to a dedicated Linux server. If that assumption should be wrong then there is probably a much better answer out there.</p> <p>I suggest you use a separate container for the DB, preferably based on a well-known public image. Personally I have experience with mysql fork <a href="https://hub.docker.com/_/mariadb/" rel="nofollow">mariadb</a>.</p> <p>Use <a href="https://docs.docker.com/compose/compose-file/" rel="nofollow">a docker-compose.yml file</a> to link these containers together by adding an entry, like so:</p> <pre><code>your_existing_container: image: your_image ports: - 80:80 links: - db volumes_from: - your_existing_volume db: image: mariadb:10.1 volumes_from: - dbdata expose: - 3306 environment: - MYSQL_ROOT_PASSWORD=changeme - MYSQL_DATABASE=nameme - MYSQL_USER=callme - MYSQL_PASSWORD=changeme dbdata: image: tianon/true volumes: - /var/lib/mysql </code></pre> <p>Once you have this in place (and changed the passwords etc. for the db), navigate to the directory where the <code>docker-compose.yml</code> file is located and run <code>docker-compose up -d your_existing_container</code> (rename that of course). Compose will run your image as well as all others it depends on (like your db container).</p>
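For the lighter sqlite option the question mentions, no second container is needed at all: the driver ships with Python, so only the database file itself has to live on the mounted volume. A minimal sketch — the volume mount path is an assumption, and a temp directory stands in for it here:

```python
import os
import sqlite3
import tempfile

# In the container this would point at the mounted volume, e.g. /data/app.db
# (illustrative path); here a temp directory stands in for it.
DB_PATH = os.path.join(tempfile.gettempdir(), "app.db")

conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)")

def save(payload):
    # replaces the old "dump a json file to the volume" step;
    # the connection used as a context manager commits the transaction
    with conn:
        conn.execute("INSERT INTO records (payload) VALUES (?)", (payload,))

save('{"hello": "world"}')
rows = conn.execute("SELECT payload FROM records").fetchall()
print(rows[-1][0])
```

So the split the question asks about falls out naturally: the software (the sqlite3 module) lives in the image, and only the single database file lives on the volume.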
0
2016-08-24T15:48:21Z
[ "python", "sqlite", "docker", "openstack" ]
How to designate Python unit tests as having database dependency or not?
39,127,624
<p>I am working on a project that has many "unit tests" that have hard dependencies that need to interact with the database and other APIs. The tests are a valuable and useful resource to our team, but they just cannot be run independently, without relying on the functionality of other services within the test environment. Personally I would call these "functional tests", but this is just the semantics already established within our team.</p> <p>The problem is, now that we are beginning to introduce more pure unit tests into our code, we have a medley of tests that do or do not have external dependencies. These tests can be run immediately after checking out code with no requirement to install or configure other tools. They can also be run in a continuous integration environment like Jenkins.</p> <p>So my question is, how can I denote which is which for a cleaner separation? Is there an existing decorator within the unit testing library?</p>
1
2016-08-24T15:38:47Z
39,128,992
<p>You can define which tests should be skipped with the <code>skipIf</code> decorator. In combination with setting an environment variable you can skip tests in some environments. An example:</p> <pre><code>import os from unittest import TestCase, skipIf class MyTest(TestCase): @skipIf(os.environ.get('RUNON') == 'jenkins', 'Does not run in Jenkins') def test_my_code(self): ... </code></pre>
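To see the decorator behave without a CI server, here is a self-contained run (the RUNON variable and test names are illustrative) that forces the environment variable and checks that the dependent test is reported as skipped:

```python
import os
import unittest
from unittest import TestCase, skipIf

os.environ['RUNON'] = 'jenkins'  # pretend we are inside the CI environment

class MyTest(TestCase):
    # the condition is evaluated when the class body runs, i.e. after the
    # environment variable above has been set
    @skipIf(os.environ.get('RUNON') == 'jenkins', 'Does not run in Jenkins')
    def test_my_db_code(self):
        self.fail("would talk to the database here")

    def test_pure_logic(self):
        self.assertEqual(1 + 1, 2)  # runs everywhere

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("skipped:", len(result.skipped))
```

Removing the environment variable (or setting it to anything else) makes the database test run again, so the same test module serves both environments.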
2
2016-08-24T16:50:41Z
[ "python", "unit-testing", "continuous-integration", "functional-testing" ]
How to designate Python unit tests as having database dependency or not?
39,127,624
<p>I am working on a project that has many "unit tests" that have hard dependencies that need to interact with the database and other APIs. The tests are a valuable and useful resource to our team, but they just cannot be run independently, without relying on the functionality of other services within the test environment. Personally I would call these "functional tests", but this is just the semantics already established within our team.</p> <p>The problem is, now that we are beginning to introduce more pure unit tests into our code, we have a medley of tests that do or do not have external dependencies. These tests can be run immediately after checking out code with no requirement to install or configure other tools. They can also be run in a continuous integration environment like Jenkins.</p> <p>So my question is, how can I denote which is which for a cleaner separation? Is there an existing decorator within the unit testing library?</p>
1
2016-08-24T15:38:47Z
39,130,155
<p>Here's another option. You could separate different test categories by directory. If you wanted to try this strategy, it may look something like:</p> <pre><code>python
    - modules
unit
    - pure unit test modules
functional
    - other unit test modules
</code></pre> <p>In your testing pipeline, you can call your testing framework to only execute the desired tests. For example, with Python's <code>unittest</code>, you could run your 'pure unit tests' <strong>from within the python directory</strong> with </p> <p><code>python -m unittest discover --start-directory ../unit</code></p> <p>and the functional/other unit tests with</p> <p><code>python -m unittest discover --start-directory ../functional</code></p> <p>An advantage of this setup is that your tests are easily categorized and you can do any scaffolding or mocked up services that you need in each testing environment. Someone with a little more Python experience might be able to help you run the tests regardless of the current directory, too.</p>
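The split can also be exercised programmatically. This sketch builds the suggested layout in a temp directory (file names are illustrative) and shows that discovery started in the unit directory only collects the pure tests:

```python
import os
import tempfile
import unittest

MODULE_SRC = (
    "import unittest\n"
    "class Sample(unittest.TestCase):\n"
    "    def test_one(self):\n"
    "        self.assertTrue(True)\n"
)

# Build the suggested layout: unit/ and functional/ side by side,
# each holding one test module.
root = tempfile.mkdtemp()
for d in ("unit", "functional"):
    os.makedirs(os.path.join(root, d))
    with open(os.path.join(root, d, "test_sample.py"), "w") as f:
        f.write(MODULE_SRC)

# Programmatic equivalent of: python -m unittest discover --start-directory <root>/unit
suite = unittest.defaultTestLoader.discover(os.path.join(root, "unit"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # only the one test under unit/ was collected
```

The CI job would point discovery at one directory, and the developer's pre-commit run at the other.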
1
2016-08-24T18:03:04Z
[ "python", "unit-testing", "continuous-integration", "functional-testing" ]
Creating a django form which also uses database data as an option
39,127,749
<p>My question title isn't very good but what I am trying to create is a way to add students into a database table and in the same form add their parents details into another table. </p> <p>The problem arises where 2 students have the same parents and I now have two sets of the same parent. Is there a way to select parent from the parent table when I am entering a student? </p> <p>How would I go about doing this.</p>
0
2016-08-24T15:45:43Z
39,128,552
<p>If you want to select a parent from the parent table when you are creating a student you could try an HTML datalist element: <a href="http://www.w3schools.com/tags/tag_datalist.asp" rel="nofollow">http://www.w3schools.com/tags/tag_datalist.asp</a>.</p> <p>This would require you to fetch all the parents from the database in your view, pass them to the template in the context, and then iterate through them as options for the datalist. However, this wouldn't allow you to create a new student if their parents don't already exist in the database.</p> <p>A better thing to try might be Django's <code>get_or_create()</code> method: <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/models/querysets/</a>. If you use this when interacting with the parent table, Django will first check whether a parent with the information entered already exists in the database. If it does, it will use that object instead of creating a duplicate.</p>
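Stripped of Django, the get_or_create idea is just "look up by a natural key, create only on a miss". A toy sketch with a dict standing in for the parent table — the field names are made up for illustration, but the returned (object, created) pair mirrors the shape of Django's real method:

```python
parents = {}  # stands in for the parent table, keyed by identifying fields

def get_or_create_parent(name, email):
    key = (name, email)
    created = key not in parents
    if created:
        parents[key] = {"name": name, "email": email}
    return parents[key], created

p1, created1 = get_or_create_parent("Sam Jones", "sam@example.com")
# second student entered with the same parent details: no duplicate row
p2, created2 = get_or_create_parent("Sam Jones", "sam@example.com")
print(created1, created2, p1 is p2)
```

Two students with the same parents therefore end up pointing at the same parent record instead of producing two copies of it.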
0
2016-08-24T16:26:02Z
[ "python", "mysql", "django", "django-forms", "mariadb" ]
How do I sort a dictionary in decreasing order based on values of the list as its value?
39,127,884
<p>I have a dictionary as below:</p> <pre><code>{'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12], 'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143]} </code></pre> <p>I want to sort it in decreasing order based on the values in the corresponding list. </p> <p>I have to print out to the screen a summary in decreasing order of ranking, where the ranking is according to the criteria 1-6 (as given in the list) in that order (compare item 1, if equal compare item 2, if equal compare item 3 etc., noting that for items 5 and 6 the comparison is reversed).</p> <p>My output should be as below:</p> <pre><code>Djokovic 3 1 13 142 16 143 Murray 2 2 16 143 13 142 Williams 0 1 2 15 1 12 Muguruza 0 0 1 12 2 15 </code></pre> <p>Here Williams and Muguruza had the same value for the element at index 0 of the list so they are compared on the basis of the value at the next index in the list.</p> <p>How can I do this in Python?</p>
-3
2016-08-24T15:52:51Z
39,128,010
<p>You can just compare Python lists with <code>&gt;</code> <code>&lt;</code> <code>==</code>: they compare element by element, which is exactly the tie-breaking you want. Also, <code>operator</code> is a great way to sort dicts based on values. Note <code>reverse=True</code> for the decreasing order:</p> <pre><code>import operator

data = {'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12], 'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143]}

sorted_list = sorted(data.items(), key=operator.itemgetter(1), reverse=True)
for i in sorted_list:
    print(i[0], " ".join(str(j) for j in i[1]))
</code></pre> <p>Returns</p> <pre><code>Djokovic 3 1 13 142 16 143
Murray 2 2 16 143 13 142
Williams 0 1 2 15 1 12
Muguruza 0 0 1 12 2 15
</code></pre>
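One caveat not covered by plain list comparison: the question says the comparison for items 5 and 6 is reversed (it happens not to change the order for this particular data). A key that negates those two items handles it explicitly; a sketch:

```python
data = {'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12],
        'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143]}

# Descending on items 1-4; items 5 and 6 are negated so that, under the
# overall reverse=True, they effectively compare in ascending order.
ranked = sorted(data.items(),
                key=lambda kv: (kv[1][0], kv[1][1], kv[1][2], kv[1][3],
                                -kv[1][4], -kv[1][5]),
                reverse=True)
for name, stats in ranked:
    print(name, " ".join(str(s) for s in stats))
```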
2
2016-08-24T15:59:17Z
[ "python" ]
How do I sort a dictionary in decreasing order based on values of the list as its value?
39,127,884
<p>I have a dictionary as below:</p> <pre><code>{'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12], 'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143]} </code></pre> <p>I want to sort it in decreasing order based on the values in the corresponding list. </p> <p>I have to print out to the screen a summary in decreasing order of ranking, where the ranking is according to the criteria 1-6 (as given in the list) in that order (compare item 1, if equal compare item 2, if equal compare item 3 etc., noting that for items 5 and 6 the comparison is reversed).</p> <p>My output should be as below:</p> <pre><code>Djokovic 3 1 13 142 16 143 Murray 2 2 16 143 13 142 Williams 0 1 2 15 1 12 Muguruza 0 0 1 12 2 15 </code></pre> <p>Here Williams and Muguruza had the same value for the element at index 0 of the list so they are compared on the basis of the value at the next index in the list.</p> <p>How can I do this in Python?</p>
-3
2016-08-24T15:52:51Z
39,128,157
<p>You could do</p> <pre><code>d = {'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12], 'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143]}

for v, k in sorted(((v, k) for k, v in d.items()), reverse=True):
    print(k, "\t".join(str(x) for x in v))
</code></pre> <p>Note <code>reverse=True</code> for the decreasing order, and that the list of ints has to be converted to strings before joining.</p>
0
2016-08-24T16:05:59Z
[ "python" ]
How do I sort a dictionary in decreasing order based on values of the list as its value?
39,127,884
<p>I have a dictionary as below:</p> <pre><code>{'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12], 'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143]} </code></pre> <p>I want to sort it in decreasing order based on the values in the corresponding list. </p> <p>I have to print out to the screen a summary in decreasing order of ranking, where the ranking is according to the criteria 1-6 (as given in the list) in that order (compare item 1, if equal compare item 2, if equal compare item 3 etc., noting that for items 5 and 6 the comparison is reversed).</p> <p>My output should be as below:</p> <pre><code>Djokovic 3 1 13 142 16 143 Murray 2 2 16 143 13 142 Williams 0 1 2 15 1 12 Muguruza 0 0 1 12 2 15 </code></pre> <p>Here Williams and Muguruza had the same value for the element at index 0 of the list so they are compared on the basis of the value at the next index in the list.</p> <p>How can I do this in Python?</p>
-3
2016-08-24T15:52:51Z
39,128,252
<p>Here's a possible answer:</p> <pre><code>import operator data = { 'Muguruza': [0, 0, 1, 12, 2, 15], 'Williams': [0, 1, 2, 15, 1, 12], 'Murray': [2, 2, 16, 143, 13, 142], 'Djokovic': [3, 1, 13, 142, 16, 143] } output = reversed(sorted(data.items(), key=operator.itemgetter(1))) print("\n".join([(i[0] + " " + " ".join([str(j) for j in i[1]])) for i in output])) </code></pre>
1
2016-08-24T16:09:59Z
[ "python" ]
How to reverse the order of rows in my dataframe using Python, But not reverse the index when saving it into another dataframe
39,127,900
<pre><code>[4 8 3 6] </code></pre> <p>into:</p> <pre><code>[6 3 8 4] </code></pre> <p>Python adds the index values 0, 1, 2, 3 to the dataframe, so when I reverse the rows the index values also tag along, so the index values of [6,3,8,4] are 3, 2, 1, 0</p> <p>The problem is that when I plot it, the graph is not ordered in the right way. So instead of getting: <a href="http://i.stack.imgur.com/Ylnjk.png" rel="nofollow">Yahoo Finance data graph</a></p> <p>I get: </p> <p><a href="http://i.stack.imgur.com/qA4rg.png" rel="nofollow">Python output graph</a></p>
0
2016-08-24T15:53:26Z
39,127,960
<p>Seems like reset_index would solve that:</p> <pre><code>df.iloc[::-1].reset_index(drop=True) </code></pre> <hr> <pre><code>df = pd.DataFrame({'A': [4, 8, 3, 6]}) df Out: A 0 4 1 8 2 3 3 6 df.iloc[::-1].reset_index(drop=True) Out: A 0 6 1 3 2 8 3 4 </code></pre>
1
2016-08-24T15:56:38Z
[ "python", "dataframe" ]
twisted autobahn websocket being initialized twice with wss
39,127,928
<p>I have some websocket protocols implemented with Twisted, they work fine when I connect using "ws", but when I enable secure websockets, the <code>__init__</code> method is called twice. To be more specific, it is called once, then the connection apparently fails, with connectionLost being called, then the <code>__init__</code> is called again, and this time the connection stays open.</p> <p>The code below exemplifies it. When I connect with wss, the log line in the <code>__init__</code> of the websocket protocol is called twice, but this doesn't happen with plain websockets.</p> <pre><code>import socket
from datetime import datetime

from twisted.internet import reactor
from twisted.internet.ssl import DefaultOpenSSLContextFactory
from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory, listenWS
import txaio
txaio.use_twisted()

CERT_KEY = "certificate.key"
CERT_PATH = "certificate.crt"


def log(msg):
    print("{}: {}".format(str(datetime.now()), msg))


class TestProtocol(WebSocketServerProtocol):
    def __init__(self):
        super(TestProtocol, self).__init__()
        log("Test protocol init")

    def connectionLost(self, reason):
        WebSocketServerProtocol.connectionLost(self, reason)
        log("Connection closed: Reason is {}".format(reason))


class TestProtocolFactory(WebSocketServerFactory):
    protocol = TestProtocol


def init_websocket_protocol(factory_cls, port):
    try:
        key, crt = CERT_KEY, CERT_PATH
        context_factory = DefaultOpenSSLContextFactory(key, crt)
        connection_string = "wss://localhost:{}".format(str(port))
        factory = factory_cls(connection_string)
        listenWS(factory, contextFactory=context_factory)
        log("Port {} bound to test websocket server".format(str(port)))
    except socket.error as e:
        log("Server was unable to bind to a new port: {}".format(str(e)))


def main():
    init_websocket_protocol(TestProtocolFactory, 9000)
    reactor.run()


if __name__ == '__main__':
    main()
</code></pre>
0
2016-08-24T15:54:47Z
39,134,379
<p>The recommended API these days is to use endpoints. Also, <code>twisted.internet.ssl.CertificateOptions</code> is the preferred API for TLS connections. So with those changes your code above would look like this:</p> <pre><code>from datetime import datetime

from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory

from twisted.internet.ssl import CertificateOptions, PrivateCertificate, Certificate, KeyPair
from twisted.internet.endpoints import SSL4ServerEndpoint
from twisted.internet.defer import Deferred
from twisted.internet.task import react

from OpenSSL import crypto

CERT_KEY = "certificate.key"
CERT_PATH = "certificate.crt"


def log(msg):
    print("{}: {}".format(str(datetime.now()), msg))


class TestProtocol(WebSocketServerProtocol):
    def __init__(self):
        super(TestProtocol, self).__init__()
        log("Test protocol init")

    def connectionLost(self, reason):
        WebSocketServerProtocol.connectionLost(self, reason)
        log("Connection closed: Reason is {}".format(reason))


class TestProtocolFactory(WebSocketServerFactory):
    protocol = TestProtocol


def init_websocket_protocol(reactor, port):
    with open(CERT_KEY) as key_file, open(CERT_PATH) as cert_file:
        key = KeyPair.load(key_file.read(), crypto.FILETYPE_PEM).original
        cert = Certificate.loadPEM(cert_file.read()).original
    ctx = CertificateOptions(
        privateKey=key,
        certificate=cert,
    )
    return SSL4ServerEndpoint(reactor, port, ctx)


def main(reactor):
    ep = init_websocket_protocol(reactor, 9000)
    ep.listen(TestProtocolFactory())
    # react() runs the reactor itself; return a Deferred that never fires
    # to keep serving, instead of calling reactor.run() a second time
    return Deferred()


if __name__ == '__main__':
    react(main)
</code></pre> <p>When I run this code and point Firefox at it, it connects once. What does the browser-side code you're using look like?</p>
1
2016-08-24T23:13:47Z
[ "python", "websocket", "twisted", "autobahn" ]
Raise error for undefined attributes in Jinja templates in Flask
39,127,940
<p>By default Flask renders empty values for undefined attributes in Jinja templates. I want to raise an error instead. How can I change this behavior in Flask?</p>
0
2016-08-24T15:55:33Z
39,127,941
<p>Set the undefined behavior on the Flask application's Jinja environment:</p> <pre><code>from flask import Flask from flask import render_template import jinja2 APP = Flask(__name__) APP.jinja_env.undefined = jinja2.StrictUndefined </code></pre> <p>And now you can render the template as always, in my case it's a JSON file:</p> <pre><code>render_template('template.json', **attributes_dict) </code></pre> <p>And now we get an error if an attribute is missing:</p> <pre><code>*** jinja2.exceptions.UndefinedError: 'attribute' is undefined </code></pre>
0
2016-08-24T15:55:33Z
[ "python", "flask", "jinja2" ]
Is there a working excel sheet module for Python 3.4?
39,127,944
<p>I've been looking around for a python module to work with excel sheets. So far I've come across the following:</p> <ul> <li>openpyxl</li> <li>xlsxwriter</li> <li>xlrd</li> <li>xlwt</li> <li>xlutils</li> </ul> <p>The issue I'm running into is that any time I import these modules I get an error saying that they don't exist. Example: <code>ImportError: No module named 'xlutilx'</code></p> <p>This is the first time I've ever had this issue with importing modules with Python so I'm not sure how to fix it.</p> <p>Thanks for your time.</p>
-1
2016-08-24T15:55:42Z
39,128,057
<p>Most of these modules are still working with Python 3. All you need to do is to actually install them on your computer.</p> <p>If you are not sure what to do, I suggest using a prepackaged Python distribution such as Anaconda which already includes most of them: <a href="https://docs.continuum.io/anaconda/excel" rel="nofollow">https://docs.continuum.io/anaconda/excel</a></p> <p>Note that such a distribution is actually separated from your current installation. See the start of the user guide for a more complete explanation.</p>
1
2016-08-24T16:01:21Z
[ "python", "excel" ]
How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying?
39,128,100
<p>For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework.</p> <p>I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the database, where information is instead retrieved from the cache (memcached or Redis).</p> <p>I require my app to be realtime, so any update I make to the database must be instantly available to any visitor to the site. Therefore I do not want to cache views/routes/html.</p> <p>I want to cache the entire database. However, because there are so many different variables when it comes to querying, I am not sure how to structure this. For example, if I were to cache every query and then later need to update a product in the database, I would basically need to flush the entire cache, which isn't ideal for a large web app.</p> <p>What I would prefer is to cache individual rows within the database. The problem is, how do I structure this so I can flush the cache appropriately when an update is made to the database? Also, how can I map all of this together from the cache?</p> <p>I hope this makes sense.</p>
2
2016-08-24T16:03:19Z
39,128,415
<p>I had this exact question myself, with a PHP project, though. My solution was to use ElasticSearch as an intermediate cache between the application and database.</p> <p>The trick to this is the ORM. I designed it so that when Entity.save() is called it is first stored in the database, then the complete object (with all references) is pushed to ElasticSearch and only then the transaction is committed and the flow is returned back to the caller.</p> <p>This way I maintained full functionality of a relational database (atomic changes, transactions, constraints, triggers, etc.) and still have all entities cached with all their references (parent and child relations) together with the ability to invalidate individual cached objects.</p> <p>Hope this helps.</p>
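A minimal, framework-free sketch of that save() flow, with sqlite standing in for the relational database and a plain dict standing in for ElasticSearch (all names and fields are illustrative, not from the original project):

```python
import sqlite3

cache = {}  # stands in for ElasticSearch: id -> fully denormalized document

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

def save_product(pid, name, price):
    # 1. commit to the relational store first (constraints, transactions, ...)
    with db:
        db.execute("INSERT OR REPLACE INTO product VALUES (?, ?, ?)",
                   (pid, name, price))
    # 2. only then push the complete object into the cache, so readers never
    #    see a cached object the database rejected
    cache[pid] = {"id": pid, "name": name, "price": price}

def get_product(pid):
    # the common path: served entirely from the cache, no SQL at all
    if pid in cache:
        return cache[pid]
    row = db.execute("SELECT id, name, price FROM product WHERE id = ?",
                     (pid,)).fetchone()
    if row is None:
        return None
    cache[pid] = {"id": row[0], "name": row[1], "price": row[2]}
    return cache[pid]

save_product(1, "widget", 9.99)
save_product(1, "widget", 8.49)  # an update overwrites just this cached row
print(get_product(1)["price"])
```

This answers the invalidation question in the post: because the cache is keyed per object, an update replaces exactly one cached entry rather than forcing a full flush.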
0
2016-08-24T16:19:36Z
[ "python", "database", "caching", "redis", "memcached" ]
How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying?
39,128,100
<p>For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework.</p> <p>I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the database, where information is instead retrieved from the cache (memcached or Redis).</p> <p>I require my app to be realtime, so any update I make to the database must be instantly available to any visitor to the site. Therefore I do not want to cache views/routes/html.</p> <p>I want to cache the entire database. However, because there are so many different variables when it comes to querying, I am not sure how to structure this. For example, if I were to cache every query and then later need to update a product in the database, I would basically need to flush the entire cache, which isn't ideal for a large web app.</p> <p>What I would prefer is to cache individual rows within the database. The problem is, how do I structure this so I can flush the cache appropriately when an update is made to the database? Also, how can I map all of this together from the cache?</p> <p>I hope this makes sense.</p>
2
2016-08-24T16:03:19Z
39,154,770
<p>So a free eBook called "Redis in Action" by Josiah Carlson answered all of my questions. It is quite long, but after reading through, I have a fairly solid understanding of how to structure a caching architecture. It gives real world examples, such as a social network and a shopping site with tons of traffic. I will need to read through it again once or twice to fully understand. A great book!</p> <p>Link: <a href="https://redislabs.com/ebook/redis-in-action" rel="nofollow">Redis in Action</a></p>
0
2016-08-25T21:09:31Z
[ "python", "database", "caching", "redis", "memcached" ]
Average of numpy array ignoring specified value
39,128,145
<p>I have a number of 1-dimensional numpy ndarrays containing the path length between a given node and all other nodes in a network for which I would like to calculate the average. The matter is complicated though by the fact that if no path exists between two nodes the algorithm returns a value of 2147483647 for that given connection. If I leave this value untreated it would obviously grossly inflate my average as a typical path length would be somewhere between 1 and 3 in my network.</p> <p>One option of dealing with this would be to loop through all elements of all arrays and replace <code>2147483647</code> with <code>NaN</code> and then use <code>numpy.nanmean</code> to find the average though that is probably not the most efficient method of going about it. Is there a way of calculating the average with numpy just ignoring all values of <code>2147483647</code>?</p> <p>I should add that, I could have up to several million arrays with several million values to average over so any performance gain in how the average is found will make a real difference.</p>
1
2016-08-24T16:05:18Z
39,128,230
<p>Why not use your usual numpy filtering for this?</p> <pre><code>m = my_array[my_array != 2147483647].mean() </code></pre> <p>By the way, if you really want speed, your whole algorithm description seems certainly naive and could be improved by a lot.</p> <p>Oh and I guess that you are calculating the mean because you have rigorously checked that the underlying distribution is normal so that it means something, aren't you?</p>
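Another spelling of the same filter worth noting (an addition for comparison, not part of the original answer): numpy's masked arrays, which keep the original shape while excluding the sentinel from reductions:

```python
import numpy as np

NO_PATH = 2147483647  # sentinel the path-length algorithm returns for "no path"

arr = np.array([1, 2, NO_PATH, 3, 2, NO_PATH])

# masked_equal hides every occurrence of the sentinel; .mean() then
# averages only the unmasked entries
avg = np.ma.masked_equal(arr, NO_PATH).mean()
print(avg)
```

Boolean indexing is usually faster, but the masked form is convenient when you want to run several reductions (mean, max, std) over the same filtered view.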
3
2016-08-24T16:09:12Z
[ "python", "arrays", "performance", "numpy", "graph-tool" ]
Average of numpy array ignoring specified value
39,128,145
<p>I have a number of 1-dimensional numpy ndarrays containing the path length between a given node and all other nodes in a network for which I would like to calculate the average. The matter is complicated though by the fact that if no path exists between two nodes the algorithm returns a value of 2147483647 for that given connection. If I leave this value untreated it would obviously grossly inflate my average as a typical path length would be somewhere between 1 and 3 in my network.</p> <p>One option of dealing with this would be to loop through all elements of all arrays and replace <code>2147483647</code> with <code>NaN</code> and then use <code>numpy.nanmean</code> to find the average though that is probably not the most efficient method of going about it. Is there a way of calculating the average with numpy just ignoring all values of <code>2147483647</code>?</p> <p>I should add that, I could have up to several million arrays with several million values to average over so any performance gain in how the average is found will make a real difference.</p>
1
2016-08-24T16:05:18Z
39,128,324
<pre><code>np.nanmean(np.where(my_array == 2147483647, np.nan, my_array)) </code></pre> <p><strong>Timings</strong></p> <pre><code>a = np.random.randn(100000) a[::10] = 2147483647 %timeit np.nanmean(np.where(a == 2147483647, np.nan, a)) 1000 loops, best of 3: 639 µs per loop %timeit a[a != 2147483647].mean() 1000 loops, best of 3: 259 µs per loop import pandas as pd %timeit pd.Series(a).ne(2147483647).mean() 1000 loops, best of 3: 493 µs per loop </code></pre>
1
2016-08-24T16:13:58Z
[ "python", "arrays", "performance", "numpy", "graph-tool" ]
Average of numpy array ignoring specified value
39,128,145
<p>I have a number of 1-dimensional numpy ndarrays containing the path length between a given node and all other nodes in a network for which I would like to calculate the average. The matter is complicated though by the fact that if no path exists between two nodes the algorithm returns a value of 2147483647 for that given connection. If I leave this value untreated it would obviously grossly inflate my average as a typical path length would be somewhere between 1 and 3 in my network.</p> <p>One option of dealing with this would be to loop through all elements of all arrays and replace <code>2147483647</code> with <code>NaN</code> and then use <code>numpy.nanmean</code> to find the average though that is probably not the most efficient method of going about it. Is there a way of calculating the average with numpy just ignoring all values of <code>2147483647</code>?</p> <p>I should add that, I could have up to several million arrays with several million values to average over so any performance gain in how the average is found will make a real difference.</p>
1
2016-08-24T16:05:18Z
39,129,048
<p>One way would be to get the sum for all elements in one go and then remove the contribution from the invalid ones. Finally, to get the average value itself, we divide by the number of valid elements. So, we would have an implementation like so -</p> <pre><code>def mean_ignore_num(arr,num): # Get count of invalid ones invc = np.count_nonzero(arr==num) # Get the average value for all numbers and remove contribution from num return (arr.sum() - invc*num)/float(arr.size-invc) </code></pre> <p>Verify results -</p> <pre><code>In [191]: arr = np.full(10,2147483647).astype(np.int32) ...: arr[1] = 5 ...: arr[4] = 4 ...: In [192]: arr.max() Out[192]: 2147483647 In [193]: arr.sum() # Extends beyond int32 max limit, so no overflow Out[193]: 17179869185 In [194]: arr[arr != 2147483647].mean() Out[194]: 4.5 In [195]: mean_ignore_num(arr,2147483647) Out[195]: 4.5 </code></pre> <p>Runtime test -</p> <pre><code>In [38]: arr = np.random.randint(0,9,(10000)) In [39]: arr[arr != 7].mean() Out[39]: 3.6704609489462414 In [40]: mean_ignore_num(arr,7) Out[40]: 3.6704609489462414 In [41]: %timeit arr[arr != 7].mean() 10000 loops, best of 3: 102 µs per loop In [42]: %timeit mean_ignore_num(arr,7) 10000 loops, best of 3: 36.6 µs per loop </code></pre>
1
2016-08-24T16:54:58Z
[ "python", "arrays", "performance", "numpy", "graph-tool" ]
Run python script right after RPi boot
39,128,161
<p>I'm relatively new to raspberry pi (5 days using it) and I've just managed to run my python script successfully (called dogcare.py). Now I'm trying to execute this script right after my raspberry is turned on. I've been doing some research and I found different ways to do it:</p> <ul> <li>using /etc/profile</li> <li>using /etc/rc.local</li> <li>using crontab </li> <li>using /etc/init.d </li> <li>using systemd </li> </ul> <p>But none of these ways are working for me. </p> <p><strong>Setup environment:</strong><br> Hardware: RaspberryPi 2 Model B<br> Software: Raspbian or NOOBs (not sure)</p> <p><strong>Context:</strong><br> Since for my project I need to run meet.jit.si, I followed this guide <a href="http://www.instructables.com/id/Video-Calling-on-Raspberry-Pi-3/?ALLSTEPS" rel="nofollow">http://www.instructables.com/id/Video-Calling-on-Raspberry-Pi-3/?ALLSTEPS</a> and it has a step that sets a Chromium website to start right after the RPi is turned on. (Currently this is working fine.)</p> <p>My python script uses the requests library in order to use HTTP GET with an external website application I've been working on.</p> <p><strong>Main problem:</strong> </p> <p>I need both things to run when my raspberry is turned on: the chromium website with meet.jit.si and my python script. Current situation: the chromium website runs after my RPi is turned on, but my script doesn't.</p> <p>I'd appreciate any help!</p>
0
2016-08-24T16:06:12Z
39,135,112
<p>I have done a similar thing with my Raspi 2 as well, which involved sending myself an email with the ip address of the pi so I could easily ssh/vnc to it.</p> <p>My steps involved making a shell script which ran the python program.</p> <pre><code>#!/bin/sh cd pythonfiledirectory sudo python pythonfile.py cd / </code></pre> <p>Then I made it executable with the following command:</p> <pre><code>chmod 777 file.sh </code></pre> <p>Now edit your crontab to run the file on startup.</p> <p>In your terminal, type:</p> <pre><code>sudo crontab -e </code></pre> <p>Inside of the crontab write:</p> <pre><code>@reboot sh file.sh </code></pre> <p>You could add a log file if you wanted to debug and see why it's not working by making a log directory and changing the crontab entry to (note the redirection must point at a file inside the directory, not the directory itself):</p> <pre><code>@reboot sh file.sh &gt;/logdirectory/cron.log 2&gt;&amp;1 </code></pre> <p>This is what made it work for me. If it doesn't work, first make sure you can run your .sh file directly, and try the crontab with some other files to debug the problem.</p>
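<p>If crontab still refuses to cooperate, a systemd unit (one of the options the question lists) is another route. A sketch only; the service name, script path, and user below are assumptions you will need to adapt:</p>

```ini
# /etc/systemd/system/dogcare.service  (name and paths are assumptions)
[Unit]
Description=Run dogcare.py at boot
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/python /home/pi/dogcare.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

<p>Enable it with <code>sudo systemctl enable dogcare.service</code>. Since the script makes HTTP requests, ordering it after <code>network-online.target</code> keeps it from starting before the network is up.</p>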
0
2016-08-25T00:55:27Z
[ "python", "linux", "raspberry-pi" ]
Graphviz installation error
39,128,173
<p>I am installing the graphviz library or package for Python using the command</p> <pre><code>pip install graphviz </code></pre> <p>but I am facing the problem shown in the images below. Please, can anyone help me fix this issue?</p> <p><img src="http://i.stack.imgur.com/dxmYu.png" alt="Error message appear during installation"> <img src="http://i.stack.imgur.com/l6w4j.png" alt="Error message appear during installation"></p>
0
2016-08-24T16:06:32Z
39,129,248
<p>Use the Visual Studio command prompt instead of the regular cmd shell, or install the Microsoft Visual C++ Compiler for Python 2.7:</p> <p><a href="https://www.microsoft.com/en-us/download/details.aspx?id=44266" rel="nofollow">https://www.microsoft.com/en-us/download/details.aspx?id=44266</a></p>
0
2016-08-24T17:07:26Z
[ "python", "installation", "graphviz" ]
Large number of Unicode characters not rendering correctly in a kivy TextInput
39,128,198
<p>Question: How can I get all Unicode characters to render correctly in a TextInput using Kivy?</p> <h2>More details below</h2> <p>I'm generating random Unicode characters in a range between 0x0200 and 0x9990, which is a massive range. The issue is that a large portion of the characters will not render correctly in a TextInput; to be more specific, less than half will work. </p> <p>Whatever doesn't render ends up looking like a small rectangle with an x through it, yet when I copy and paste it into another display source it works fine. I've run the code through idle and it displays fine there as well; the issue seems to be with kivy. Any suggestions as to why this is happening?</p> <hr> <pre><code>import random import kivy from kivy.uix.textinput import TextInput from kivy.core.window import Window from kivy.uix.widget import Widget from kivy.uix.button import Button from kivy.app import App kivy.require('1.9.1') class testclass(object): def example(self, event): k_len = list() complete = '' for i in range(32): k_len.append(random.randint(0x0200, 0x9990)) for i in k_len: if i != 32: complete += chr(i) result.text = complete t = testclass() Root = Widget(size = (600, 200)) buttonOne = Button(text = 'click me', pos = (1,170), size = (120,30)) buttonOne.bind(on_press = t.example) result = TextInput(hint_text = 'Output: ', size = (600, 50), pos = (0, 0), multiline = (True)) Root.add_widget(buttonOne) Root.add_widget(result) class testappApp(App): def build(self): return Root Window.size = (600, 200) if __name__ == '__main__': testappApp().run() </code></pre> <p>This code will only work if you have kivy set up; you can tweak it to work in idle, but as I stated the code works as intended, it's just not displaying correctly within kivy :) </p>
0
2016-08-24T16:07:29Z
39,128,575
<p>Your font does not seem to support these characters - switch to another one with support for that range (see <a href="https://en.wikipedia.org/wiki/Unicode_block" rel="nofollow">https://en.wikipedia.org/wiki/Unicode_block</a> for more info on what needs to be there)</p>
2
2016-08-24T16:27:35Z
[ "python", "kivy", "python-unicode" ]
How to max regex non greedy working backwards
39,128,213
<p>I always assumed regex worked like this, but I guess I never hit a case like this until now and I'm not sure of the best way to tackle it.</p> <p>String to consider:</p> <pre><code>apple apple apple cat </code></pre> <p>I want to use something like <code>apple.*?cat</code>; however, this matches from the first apple to the cat when I really want the last apple and cat.</p> <p>Please keep in mind this is just an example; I'm looking for a generalized way to do this (i.e. telling me to just match one newline between apple and cat won't work in my real case).</p>
1
2016-08-24T16:08:09Z
39,128,260
<p>You can use this negative lookahead based on <a href="http://www.rexegg.com/regex-quantifiers.html#tempered_greed" rel="nofollow">tempered greedy token</a> regex in python:</p> <pre><code>reg = re.compile(r'apple(?:(?!apple).)*cat', re.DOTALL) </code></pre> <p><a href="https://regex101.com/r/sB9aG7/1" rel="nofollow">RegEx Demo</a></p> <p><code>(?:(?!apple).)*</code> will match 0 or more any character that don't have <code>apple</code> at next position thus making sure we don't have <code>apple</code> in our match. Note that negative lookahead will be asserted for each character in the match.</p>
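<p>A quick check of this pattern against the sample input from the question (assuming, as shown there, that the three <code>apple</code> lines and the <code>cat</code> line are newline separated):</p>

```python
import re

text = "apple\napple\napple\ncat"
reg = re.compile(r'apple(?:(?!apple).)*cat', re.DOTALL)

# Earlier "apple"s fail because the tempered dot refuses to cross the
# next "apple", so only the last one can reach "cat".
match = reg.search(text)
print(repr(match.group()))  # 'apple\ncat'
```
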
3
2016-08-24T16:10:28Z
[ "python", "regex" ]
PyEZ RPC options format differ between get_configuration and other calls
39,128,268
<p>When using RPC calls in PyEZ we add the parameters as named arguments, like <code>rpc.get_interface_information(terse="True", interface_name="xe-0/0/0")</code>; however, for configuration the options need to be within a dictionary, like <code>rpc.get_configuration({"inherit":"inherit", "groups":"groups"})</code>.</p> <p>What's the reason for these differences?</p>
1
2016-08-24T16:10:52Z
39,130,451
<p>The best way to describe it is this: With non-configuration rpcs, each of the items is it's own element and in PyEZ we use the parameters to determine that we are referencing elements.</p> <pre><code>&lt;get-interface-information&gt; &lt;routing-instance&gt;routing-instance&lt;/routing-instance&gt; &lt;extensive/&gt; &lt;statistics/&gt; &lt;media/&gt; &lt;detail/&gt; &lt;terse/&gt; &lt;brief/&gt; &lt;descriptions/&gt; &lt;snmp-index&gt;snmp-index&lt;/snmp-index&gt; &lt;switch-port&gt;switch-port&lt;/switch-port&gt; &lt;interface-name&gt;interface-name&lt;/interface-name&gt; &lt;/get-interface-information&gt; </code></pre> <p>In the case of the get-configuration rpc, all of the items you are referencing are actually attributes of the get-configuration tag itself, not elements defined in the rpc. </p> <pre><code> &lt;get-configuration [changed="changed"] [commit-scripts="( apply | apply-no-transients | view )"] [compare="rollback" [rollback="[0-49]"]] [database="(candidate | committed)"] [database-path=$junos-context/commit-context/database-path] [format="( text | xml )"] [inherit="( defaults | inherit )" [groups="groups"] [interface-ranges="interface-ranges"]] [(junos:key | key )="key"] &gt; &lt;!-- tag elements for the configuration element to display --&gt; &lt;/get-configuration&gt; </code></pre> <p>So, to know whether an rpc (which we create dynamically in PyEZ) is referencing an element or an attribute is the usage of parameters or a dictionary, respectively. </p> <p>Hope that helps. </p>
2
2016-08-24T18:20:12Z
[ "python" ]
PyEZ RPC options format differ between get_configuration and other calls
39,128,268
<p>When using RPC calls in PyEZ we add the parameters as named arguments, like <code>rpc.get_interface_information(terse="True", interface_name="xe-0/0/0")</code>; however, for configuration the options need to be within a dictionary, like <code>rpc.get_configuration({"inherit":"inherit", "groups":"groups"})</code>.</p> <p>What's the reason for these differences?</p>
1
2016-08-24T16:10:52Z
39,132,110
<p>Adding to Edward's answer, I believe the PyEZ RPC calls are implemented using reflection (<code>__call__</code> method), so today it is not aware of valid RPC calls or their args. The way to make it aware would be to load the Netconf schema dynamically from the device and use that to map the named arg to a tag or an element.<br><br> A potential issue from trying to abstract this calling convention from the user is what to do when there is a tag and an element with the same name for the same RPC – not sure if that is the case today or whether there are rules to prevent this in the schemas, but in that case the user of the call should be able to control what goes in the RPC doc IMHO.</p>
0
2016-08-24T20:01:40Z
[ "python" ]
In Python 3, convert np.array object type to float type, with variable number of object element
39,128,514
<p>I have a np.array with dtype as object. Each element here is a np.array with dtype as float and shape as (2,2) --- in maths, it is a 2-by-2 matrix. My aim is to obtain one 2-dimensional matrix by converting all the object-type elements into float-type elements. This can be better presented by the following example.</p> <pre><code>dA = 2 # dA is the dimension of the following A, here use 2 as example only A = np.empty((dA,dA), dtype=object) # A is a np.array with dtype as object A[0,0] = np.array([[1,1],[1,1]]) # each element in A is a 2-by-2 matrix A[0,1] = A[0,0]*2 A[1,0] = A[0,0]*3 A[1,1] = A[0,0]*4 </code></pre> <p>My aim is to have one matrix B (the dimension of B is 2*dA-by-2*dA). The form of B in maths should be</p> <pre><code>B = 1 1 2 2 1 1 2 2 3 3 4 4 3 3 4 4 </code></pre> <p>If dA is fixed at 2, then things can be easier, because I can hard-code</p> <pre><code>a00 = A[0,0] a01 = A[0,1] a10 = A[1,0] a11 = A[1,1] B0 = np.hstack((a00,a01)) B1 = np.hstack((a10,a11)) B = np.vstack((B0,B1)) </code></pre> <p>But in reality, dA is a variable; it can be 2 or any other integer. Then I don't know how to do it. I think nested for loops can help, but maybe you have brilliant ideas. It would be great if there is something like the cell2mat function in MATLAB, because here you can see A[i,j] as a cell in MATLAB.</p> <p>Thanks in advance.</p>
1
2016-08-24T16:24:03Z
39,128,686
<p>Here's a quick way.</p> <p>Your <code>A</code>:</p> <pre><code>In [137]: A Out[137]: array([[array([[1, 1], [1, 1]]), array([[2, 2], [2, 2]])], [array([[3, 3], [3, 3]]), array([[4, 4], [4, 4]])]], dtype=object) </code></pre> <p>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bmat.html" rel="nofollow"><code>numpy.bmat</code></a>, but convert <code>A</code> to a python list first, so <code>bmat</code> does what we want:</p> <pre><code>In [138]: B = np.bmat(A.tolist()) In [139]: B Out[139]: matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]) </code></pre> <p>The result is actually a <code>numpy.matrix</code>. If you need a regular numpy array, use the <code>.A</code> attribute of the <code>matrix</code> object:</p> <pre><code>In [140]: B = np.bmat(A.tolist()).A In [141]: B Out[141]: array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]) </code></pre> <hr> <p>Here's an alternative. (It still uses <code>A.tolist()</code>.)</p> <pre><code>In [164]: np.swapaxes(A.tolist(), 1, 2).reshape(4, 4) Out[164]: array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]) </code></pre> <p>In the general case, you would need something like:</p> <pre><code>In [165]: np.swapaxes(A.tolist(), 1, 2).reshape(A.shape[0]*dA, A.shape[1]*dA) Out[165]: array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]) </code></pre>
2
2016-08-24T16:33:42Z
[ "python", "arrays", "matlab", "numpy" ]
In Python 3, convert np.array object type to float type, with variable number of object element
39,128,514
<p>I have a np.array with dtype as object. Each element here is a np.array with dtype as float and shape as (2,2) --- in maths, it is a 2-by-2 matrix. My aim is to obtain one 2-dimensional matrix by converting all the object-type elements into float-type elements. This can be better presented by the following example.</p> <pre><code>dA = 2 # dA is the dimension of the following A, here use 2 as example only A = np.empty((dA,dA), dtype=object) # A is a np.array with dtype as object A[0,0] = np.array([[1,1],[1,1]]) # each element in A is a 2-by-2 matrix A[0,1] = A[0,0]*2 A[1,0] = A[0,0]*3 A[1,1] = A[0,0]*4 </code></pre> <p>My aim is to have one matrix B (the dimension of B is 2*dA-by-2*dA). The form of B in maths should be</p> <pre><code>B = 1 1 2 2 1 1 2 2 3 3 4 4 3 3 4 4 </code></pre> <p>If dA is fixed at 2, then things can be easier, because I can hard-code</p> <pre><code>a00 = A[0,0] a01 = A[0,1] a10 = A[1,0] a11 = A[1,1] B0 = np.hstack((a00,a01)) B1 = np.hstack((a10,a11)) B = np.vstack((B0,B1)) </code></pre> <p>But in reality, dA is a variable; it can be 2 or any other integer. Then I don't know how to do it. I think nested for loops can help, but maybe you have brilliant ideas. It would be great if there is something like the cell2mat function in MATLAB, because here you can see A[i,j] as a cell in MATLAB.</p> <p>Thanks in advance.</p>
1
2016-08-24T16:24:03Z
39,129,026
<p>Your <code>vstack/hstack</code> could be written more compactly, and generally, as</p> <pre><code>In [132]: np.vstack((np.hstack(a) for a in A)) Out[132]: array([[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]) </code></pre> <p>since <code>for a in A</code> iterates on the <code>rows</code> of <code>A</code>.</p> <p>Warren suggests <code>np.bmat</code>, which is fine. But if you look at the <code>bmat</code> code, you'll see that it is just doing this kind of nested <code>concatenate</code> (expressed as a row loop with <code>arr_rows.append(np.concatenate...)</code>).</p>
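<p>If your NumPy is recent enough (<code>np.block</code> was added in 1.13), there is also a one-call assembler that is the closest analogue of MATLAB's <code>cell2mat</code>; a sketch using the arrays from the question:</p>

```python
import numpy as np

dA = 2
A = np.empty((dA, dA), dtype=object)
A[0, 0] = np.array([[1, 1], [1, 1]])
A[0, 1] = A[0, 0] * 2
A[1, 0] = A[0, 0] * 3
A[1, 1] = A[0, 0] * 4

# np.block stitches a nested list of blocks into one 2-D array.
B = np.block(A.tolist())
print(B.shape)  # (4, 4)
```

<p>This generalizes to any <code>dA</code>, since <code>A.tolist()</code> always yields the nested list-of-arrays layout that <code>np.block</code> expects.</p>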
2
2016-08-24T16:53:32Z
[ "python", "arrays", "matlab", "numpy" ]
Unable to merge multiIndexed pandas dataframes
39,128,620
<p>I believe I am ultimately looking for a way to change the dtype of data frame indices. Please allow me to explain:</p> <p>Each df is multi-indexed on (the same) four levels. One level consists of mixed labels of integers, integers and letters (like D8), and just letters. </p> <p>However, for df1, the integers within the index labels are surrounded by quotation marks, while for df2, the same integer labels are free of any quotes; i.e.,</p> <pre><code>df1.index.levels[1] Index(['Z5', '02', '1C', '26', '2G', '2S', '30', '46', '48', '5M', 'CSA', etc...'], dtype='object', name='BMDIV') df2.index.levels[1] Index([ 26, 30, 46, 48, 72, '1C', '5M', '7D', '7Y', '8F', '8J', 'AN', 'AS', 'C3', 'CA', etc. dtype='object', name='BMDIV') </code></pre> <p>When I try to merge these tables </p> <pre><code>df_merge = pd.merge(df1, df2, how='left', left_index=True, right_index=True) </code></pre> <p>I get: </p> <blockquote> <p>TypeError: type object argument after * must be a sequence, not map</p> </blockquote> <p>Is there a way to change, for example, the type of label in df2 so that the numbers are in quotes and therefore presumably match the corresponding labels in df1?</p>
1
2016-08-24T16:30:13Z
39,128,892
<p>One way to change the level values is to build a new MultiIndex and re-assign it to <code>df.index</code>:</p> <pre><code>import pandas as pd df = pd.DataFrame( {'index':[ 26, 30, 46, 48, 72, '1C', '5M', '7D', '7Y', '8F', '8J', 'AN', 'AS', 'C3', 'CA'], 'foo':1, 'bar':2}) df = df.set_index(['index', 'foo']) level_values = [df.index.get_level_values(i) for i in range(df.index.nlevels)] level_values[0] = level_values[0].astype(str) df.index = pd.MultiIndex.from_arrays(level_values) </code></pre> <p>which makes the level values strings:</p> <pre><code>In [53]: df.index.levels[0] Out[53]: Index(['1C', '26', '30', '46', '48', '5M', '72', '7D', '7Y', '8F', '8J', 'AN', 'AS', 'C3', 'CA'], dtype='object', name='index') </code></pre> <p>Alternatively, you could avoid the somewhat low-level messiness by using <code>reset_index</code> and <code>set_index</code>:</p> <pre><code>import pandas as pd df = pd.DataFrame( {'index':[ 26, 30, 46, 48, 72, '1C', '5M', '7D', '7Y', '8F', '8J', 'AN', 'AS', 'C3', 'CA'], 'foo':1, 'bar':2}) df = df.set_index(['index', 'foo']) df = df.reset_index('index') df['index'] = df['index'].astype(str) df = df.set_index('index', append=True) df = df.swaplevel(0, 1, axis=0) </code></pre> <p>which again produces string-valued index level values:</p> <pre><code>In [67]: df.index.levels[0] Out[67]: Index(['1C', '26', '30', '46', '48', '5M', '72', '7D', '7Y', '8F', '8J', 'AN', 'AS', 'C3', 'CA'], dtype='object', name='index') </code></pre> <hr> <p>Of these two options, <code>using_MultiIndex</code> is faster:</p> <pre><code>N = 1000 def make_df(N): df = pd.DataFrame( {'index': np.random.choice(np.array( [26, 30, 46, 48, 72, '1C', '5M', '7D', '7Y', '8F', '8J', 'AN', 'AS', 'C3', 'CA'], dtype='O'), size=N), 'foo':1, 'bar':2}) df = df.set_index(['index', 'foo']) return df def using_MultiIndex(df): level_values = [df.index.get_level_values(i) for i in range(df.index.nlevels)] level_values[0] = level_values[0].astype(str) df.index =
pd.MultiIndex.from_arrays(level_values) return df def using_reset_index(df): df = df.reset_index('index') df['index'] = df['index'].astype(str) df = df.set_index('index', append=True) df = df.swaplevel(0, 1, axis=0) return df In [81]: %%timeit df = make_df(1000) ....: using_MultiIndex(df) ....: 1000 loops, best of 3: 693 µs per loop In [82]: %%timeit df = make_df(1000) ....: using_reset_index(df) ....: 100 loops, best of 3: 2.09 ms per loop </code></pre>
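<p>For this particular task (casting one level's labels to <code>str</code>), there is also a shorter route through <code>MultiIndex.set_levels</code>; a sketch on a smaller frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'index': [26, 30, '1C', '5M'], 'foo': 1, 'bar': 2})
df = df.set_index(['index', 'foo'])

# Rebuild only level 0 as strings, leaving the other level untouched.
df.index = df.index.set_levels(df.index.levels[0].astype(str), level=0)
print(df.index.levels[0])
```
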
1
2016-08-24T16:45:27Z
[ "python", "pandas" ]
Problems when opening xlsx with openpyxl
39,128,640
<p>I have a xlsx file and I tried to load this file using openpyxl</p> <pre><code>from openpyxl import load_workbook wb = load_workbook('/home/file_path/file.xlsx') </code></pre> <p>But I get this error:</p> <pre><code>"wb = load_workbook(new_file)"): expected string or buffer </code></pre> <p>new_file is a variable with the path of the xlsx file trying to open. Does anybody knows why this happens or how I should change to read the file? Thanks!</p> <p><strong>Update</strong> More details about the error</p> <pre><code>/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/reader/worksheet.py:322: UserWarning: Unknown extension is not supported and will be removed warn(msg) /home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/reader/worksheet.py:322: UserWarning: Conditional Formatting extension is not supported and will be removed warn(msg) Traceback (most recent call last): File "/vagrant/vagrant_conf/pycharm-debug.egg/pydevd_comm.py", line 1071, in doIt result = pydevd_vars.evaluateExpression(self.thread_id, self.frame_id, self.expression, self.doExec) File "/vagrant/vagrant_conf/pycharm-debug.egg/pydevd_vars.py", line 344, in evaluateExpression Exec(expression, updated_globals, frame.f_locals) File "/vagrant/vagrant_conf/pycharm-debug.egg/pydevd_exec.py", line 3, in Exec exec exp in global_vars, local_vars File "&lt;string&gt;", line 1, in &lt;module&gt; File "/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/reader/excel.py", line 252, in load_workbook wb._named_ranges = list(read_named_ranges(archive.read(ARC_WORKBOOK), wb)) File "/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/workbook/names/named_range.py", line 130, in read_named_ranges if external_range(node_text): File "/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/workbook/names/named_range.py", line 112, in external_range m = EXTERNAL_RE.match(range_string) TypeError: expected string or buffer </code></pre>
-2
2016-08-24T16:31:18Z
39,128,741
<p>The syntax is:</p> <pre><code>wb = load_workbook(filename='file.xlsx', read_only=True) </code></pre> <p>The <code>read_only</code> keyword is not required.</p>
0
2016-08-24T16:36:35Z
[ "python", "xlsx", "openpyxl" ]
return Find_element_by_name as a string In Python
39,128,641
<p>I need some help from all of you, so I have something like this:</p> <pre><code>element = driver.find_element_by_name("SiteMinderVarForm") print (element) </code></pre> <p>When I execute the program I receive:</p> <blockquote> <p>selenium.webdriver.remote.webelement.WebElement (session="d49d6df9305f2e92eb81aed5c0ed848b", element="0.6436298007036831-4")</p> </blockquote> <p>And I need a string as the result. </p> <p>I'm trying to automatically log in to a web site. If the login fails because of an incorrect password or username I need to show a pop up with an error message. So I'm trying to get the name of the form and check if it is equal to the actual name. If it is equal, that means the process is successful, and if not, the process fails.</p> <p>If you have any better ideas to do that please let me know.</p>
1
2016-08-24T16:31:28Z
39,129,433
<p>From your question, it seems like you want the text value contained in the webelement. In the Python bindings you can use the <code>element.get_attribute("value")</code> method (the camel-case <code>getAttribute()</code> form you will find in many examples belongs to the Java bindings). If you want to get the name attribute of the webelement with name SiteMinderVarForm:</p> <pre><code>element = driver.find_element_by_name("SiteMinderVarForm") value = element.get_attribute("name") print(value) </code></pre> <p>Similar to <a href="http://stackoverflow.com/questions/7852287/using-selenium-web-driver-to-retrieve-value-of-a-html-input">Using Selenium Web Driver to retrieve value of a HTML input</a><br> More examples at: <a href="https://www.seleniumeasy.com/selenium-tutorials/how-to-get-attribute-values-using-webdriver" rel="nofollow">Get Attribute values using Webdriver</a></p> <p>You may also be able to use the <code>element.text</code> property.</p> <pre><code>element = driver.find_element_by_name("SiteMinderVarForm") value = element.text print(value) </code></pre> <p><a href="http://stackoverflow.com/questions/32307702/difference-b-w-gettext-and-getattribute-in-selenium-webdriver">Difference between getText() and getAttribute()</a></p>
0
2016-08-24T17:18:36Z
[ "python", "string", "selenium", "selenium-webdriver", "return" ]
pandas module to trim columns in python
39,128,643
<p>Any idea why the code below can't keep the first column of my csv file? I would like to keep several columns in a new csv file, the first column included. Even when I include the name of the first column for the new file, I get an error:</p> <blockquote> <p>"Type" not index.</p> </blockquote> <pre><code>import pandas as pd f = pd.read_csv("1.csv") keep_col = ['Type','Pol','Country','User Site Code','PG','Status'] new_f = f[keep_col] new_f.to_csv("2.csv", index=False) </code></pre> <p>Thanks a lot.</p>
0
2016-08-24T16:31:34Z
39,128,748
<p>You are better off specifying the columns to read from your csv file:</p> <pre><code>pd.read_csv('1.csv', usecols=keep_col).to_csv("2.csv", index=False) </code></pre> <p>Note that <code>usecols</code> selects columns by name, while <code>names</code> would instead relabel whatever columns are read. Do you have any special characters in your first column?</p>
0
2016-08-24T16:37:07Z
[ "python", "csv", "pandas" ]
pandas module to trim columns in python
39,128,643
<p>Any idea why the code below can't keep the first column of my csv file? I would like to keep several columns in a new csv file, the first column included. Even when I include the name of the first column for the new file, I get an error:</p> <blockquote> <p>"Type" not index.</p> </blockquote> <pre><code>import pandas as pd f = pd.read_csv("1.csv") keep_col = ['Type','Pol','Country','User Site Code','PG','Status'] new_f = f[keep_col] new_f.to_csv("2.csv", index=False) </code></pre> <p>Thanks a lot.</p>
0
2016-08-24T16:31:34Z
39,129,063
<p>Try <code>f.columns.values.tolist()</code> and check the output of the first column. It sounds like there is an encoding issue when you are reading the CSV. You can try specifying the "encoding" option in your <code>pd.read_csv()</code> to see if that will get rid of the extra characters at the front. Otherwise, you can use <code>f.rename(columns={'F48FBFBFType':'Type'})</code> to change whatever the current name of your first column is to simply be 'Type'.</p>
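<p>If the mystery prefix turns out to be a UTF-8 byte order mark (a very common cause of exactly this kind of mangled first header), the <code>utf-8-sig</code> codec strips it at read time. A sketch with made-up in-memory data:</p>

```python
import io
import pandas as pd

# Simulated file contents: a UTF-8 BOM sits in front of the header row.
raw = '\ufeffType,Pol,Country\nA,B,C\n'.encode('utf-8')

# 'utf-8-sig' consumes the BOM, so the first column name comes out clean.
f = pd.read_csv(io.BytesIO(raw), encoding='utf-8-sig')
print(list(f.columns))  # ['Type', 'Pol', 'Country']
```
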
0
2016-08-24T16:56:05Z
[ "python", "csv", "pandas" ]
pymetar.py metar weather library stopped working
39,128,651
<p>I was using Tobias Klausmann's pymetar.py to fetch airport weather METAR reports, and it stopped working. According to <a href="http://www.nws.noaa.gov/om/notification/scn16-16wngccb.htm" rel="nofollow">http://www.nws.noaa.gov/om/notification/scn16-16wngccb.htm</a>, they shut down the service at <a href="http://weather.noaa.gov/pub/data/observations/metar/" rel="nofollow">http://weather.noaa.gov/pub/data/observations/metar/</a>.<br> The service <a href="http://weather.noaa.gov/pub/data/observations/metar/decoded/" rel="nofollow">http://weather.noaa.gov/pub/data/observations/metar/decoded/</a> was used on line 1047.</p>
0
2016-08-24T16:31:55Z
39,128,652
<p>You can fix it by changing line 1047 in pymetar.py to <a href="http://tgftp.nws.noaa.gov/data/observations/metar/decoded/" rel="nofollow">http://tgftp.nws.noaa.gov/data/observations/metar/decoded/</a> Hope that helps you!</p>
0
2016-08-24T16:31:55Z
[ "python", "weather", "airport" ]
Error while using Graphlab Create Jupyter
39,128,663
<p>I have recently upgraded the GraphLab Create version that I had. While running code using the Jupyter console I came up with the following error:</p> <pre><code>from __future__ import division import graphlab import math import string </code></pre> <blockquote> <p>Error: ACTION REQUIRED: Dependencies libstdc++-6.dll and libgcc_s_seh-1.dll not found. 1. Ensure user account has write permission to C:\Anaconda3\envs\gl-env\lib\site-packages\graphlab 2. Run graphlab.get_dependencies() to download and install them. 3. Restart Python and import graphlab again.</p> </blockquote> <p>I am not a CS person, and am not sure why this is coming up. It would help if someone could suggest the steps. I have got both Python 2.7 and 3.5 versions and Anaconda 2 and 3.</p> <p>Also, uploading files above 25 MB gives errors. I am not too sure of the cause, since it used to be OK before upgrading GraphLab Create. I have tried uninstalling and reinstalling Python and Anaconda but nothing worked.</p>
1
2016-08-24T16:32:32Z
39,132,310
<p>Firstly, make sure Jupyter notebook is CLOSED.</p> <ol> <li>Open the GraphLab Create Launcher and go to the 'TERMINAL' button.</li> <li>Type in <code>import graphlab</code> (there may be an error message, just ignore it).</li> <li>Now type in <code>graphlab.get_dependencies()</code></li> </ol> <p>The terminal will install all of the proper dependencies. Just wait for it to be finished. You can then close the terminal window and open the Jupyter Notebook again. Just try to run your code in the notebook again, it should work.</p> <p>Hope this helps!</p>
0
2016-08-24T20:14:07Z
[ "python", "ipython", "jupyter-notebook", "graphlab" ]
Transitive reduction - Error with code - Python
39,128,698
<p>So I am trying to write code to do transitive reduction of an acyclic graph. So the elements are: </p> <blockquote> <p>(3, 5), (5, 2), (2, 1), (4, 2), (3, 1), (4, 1)</p> </blockquote> <p>This is what I have written so far: </p> <pre><code>graph = [[3, 5],[5, 2],[2, 1],[4, 2],[3, 1],[4, 1]] for i in range(len(graph)): for j in range(len(graph)): for k in range(len(graph)): if [i,j] in graph and [j,k] in graph: a = [i,k] graph.pop(a) print(graph) </code></pre> <p>After running I am expecting to get the following with (4,1) removed:</p> <pre><code>&gt;&gt; (3, 5), (5, 2), (2, 1), (4, 2), (3, 1) </code></pre> <p>But instead it returns:</p> <pre><code>&gt;&gt; (3, 5), (5, 2), (2, 1), (4, 2), (3, 1), (4, 1) </code></pre> <p>I can't figure out what I am doing wrong. If someone can point out the error, it would be great!</p> <p>P.S: Transitive reduction is removing the redundant edges of a graph. For example:</p> <p>if ( A -> B ) and ( B -> C ), then ( A -> C ), in other words: if A is connected to B and B is connected to C, then A is also connected to C. And in this case ( A -> C ) is redundant because A can reach C through B and therefore should be removed.</p>
0
2016-08-24T16:34:19Z
39,129,455
<p>I improved your code: I added the condition <code>if a in graph:</code> because in some cases the transitive edge <code>[i,k]</code> doesn't actually exist in the graph. Also, the function <code>pop()</code> removes an element by index, not by value like <code>remove()</code> does.</p> <pre><code>graph = [[3, 5],[5, 2],[2, 1],[4, 2],[3, 1],[4, 1]] for i in range(len(graph)): for j in range(len(graph)): for k in range(len(graph)): if [i,j] in graph and [j,k] in graph: a = [i,k] if a in graph: graph.remove(a) print(graph) </code></pre> <p>I hope this can help you.</p>
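<p>One more point worth noting: the triple loop only catches shortcuts over a single intermediate node. In the sample graph, <code>(3, 1)</code> is also redundant (3 reaches 1 via 5 and 2), so a full transitive reduction would drop it as well. A reachability-based sketch that handles paths of any length:</p>

```python
def transitive_reduction(edges):
    """Drop every edge (u, v) where v stays reachable from u without it."""
    edges = [list(e) for e in edges]  # work on a copy

    def reachable(src, dst, skip):
        # Iterative depth-first search that ignores the edge `skip`.
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            for u, v in edges:
                if [u, v] != skip and u == node:
                    stack.append(v)
        return False

    for edge in list(edges):
        if reachable(edge[0], edge[1], skip=edge):
            edges.remove(edge)
    return edges

graph = [[3, 5], [5, 2], [2, 1], [4, 2], [3, 1], [4, 1]]
print(transitive_reduction(graph))  # [[3, 5], [5, 2], [2, 1], [4, 2]]
```

<p>This follows the question's edge-list convention for simplicity; for large graphs an adjacency-list representation would be the better fit.</p>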
1
2016-08-24T17:20:29Z
[ "python", "list", "graph", "transitive-closure" ]
Implementing HWID checking system for Python scripts?
39,128,733
<p>Let's say I was selling a software and I didn't want it to be leaked (of course using Python wouldn't be the greatest as all code is open but let's just go with it), I would want to get a unique ID that only the user's PC has and store it within a list in my script. Now every time someone executed the script it would iterate through the list and check to see if that unique ID matches with one from the list.</p> <p>Would this even be possible and if so how would one implement it in Python?</p>
-1
2016-08-24T16:36:17Z
39,129,406
<p>Python has a unique id library <code>uuid</code>. <a href="https://docs.python.org/3.5/library/uuid.html" rel="nofollow">https://docs.python.org/3.5/library/uuid.html</a></p> <pre><code>import uuid # Create a uuid customer_id = str(uuid.uuid4()) software_ids = [customer_id] # Store in a safe secure place can_run = customer_id in software_ids print(can_run) </code></pre>
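<p>One caveat worth adding: <code>uuid.uuid4()</code> is random, so it yields a different value on every call and says nothing about the machine unless you generate it once and store it. If you want an identifier derived from the hardware itself, <code>uuid.getnode()</code> returns the network card's MAC address as a 48-bit integer (falling back to a random number if no MAC can be read):</p>

```python
import uuid

# Stable across runs on the same machine, as long as a MAC is readable.
hw_id = uuid.getnode()
print(hex(hw_id))  # machine specific
```

<p>You could hash this (e.g. with <code>hashlib.sha256</code>) before shipping it in your allow-list, so customers' raw MAC addresses never leave their machines.</p>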
0
2016-08-24T17:17:08Z
[ "python" ]
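One caveat worth adding to the answer above: <code>uuid.uuid4()</code> produces a fresh random value on every call, so it only identifies a customer if it is generated once and stored. If the goal is an ID tied to the user's machine, the same module offers <code>uuid.getnode()</code>, which returns the MAC address as a 48-bit integer (falling back to a random, per-process-cached number when no MAC can be read). A hedged sketch, not a tamper-proof licensing scheme:

```python
import uuid

def machine_id():
    """Best-effort hardware-ish ID: the MAC address as a 48-bit integer.

    Caveat: uuid.getnode() can fall back to a random value if no MAC is
    found, and MAC addresses can be spoofed, so this is not tamper-proof.
    """
    return uuid.getnode()

allowed_ids = [machine_id()]          # pretend this list ships with the script
can_run = machine_id() in allowed_ids
print(can_run)                        # True
```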
Downloading a song through python-requests
39,128,738
<p>I was trying to make a script to download songs from the internet. I first tried to download the song using the "requests" library, but I was unable to play the song. Then I did the same using the "urllib2" library, and I was able to play the song this time.</p> <p>Can't we use the "requests" library to download songs? If yes, how?</p> <p>Code using requests:</p> <pre><code>import requests doc = requests.get("http://gaana99.com/fileDownload/Songs/0/28768.mp3") f = open("movie.mp3","wb") f.write(doc.text) f.close() </code></pre> <p>Code using urllib2:</p> <pre><code>import urllib2 mp3file = urllib2.urlopen("http://gaana99.com/fileDownload/Songs/0/28768.mp3") output = open('test.mp3','wb') output.write(mp3file.read()) output.close() </code></pre>
0
2016-08-24T16:36:28Z
39,128,806
<p>Use <code>doc.content</code> to save <a href="http://docs.python-requests.org/en/master/user/quickstart/#binary-response-content" rel="nofollow">binary data</a>:</p> <pre><code>import requests doc = requests.get('http://gaana99.com/fileDownload/Songs/0/28768.mp3') with open('movie.mp3', 'wb') as f: f.write(doc.content) </code></pre> <p><strong>Explanation</strong></p> <p>A MP3 file is only binary data, you cannot retrieve its <em>textual</em> part. When you deal with plain text, <code>doc.text</code> is ideal, but for any other binary format, you have to access bytes with <code>doc.content</code>.</p> <p>You can check the used encoding, when you <code>get</code> a plain text response, <code>doc.encoding</code> is set, else it is empty:</p> <pre><code>&gt;&gt;&gt; doc = requests.get('http://gaana99.com/fileDownload/Songs/0/28768.mp3') &gt;&gt;&gt; doc.encoding # nothing &gt;&gt;&gt; doc = requests.get('http://www.example.org') &gt;&gt;&gt; doc.encoding ISO-8859-1 </code></pre>
3
2016-08-24T16:40:55Z
[ "python", "httprequest", "urllib2" ]
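A small Python 3 demonstration of why the bytes route round-trips while the text route corrupts binary data (the fake MP3 header bytes here are made up for illustration):

```python
import os
import tempfile

payload = b"ID3\x03\x00" + bytes(range(256))  # stand-in for raw MP3 bytes (doc.content)

# Writing the raw bytes in binary mode round-trips exactly:
path = os.path.join(tempfile.mkdtemp(), "demo.mp3")
with open(path, "wb") as f:
    f.write(payload)
with open(path, "rb") as f:
    roundtrip = f.read()

# Simulating the doc.text route: decoding arbitrary bytes to str and
# re-encoding does NOT round-trip, which is why the first script
# produced an unplayable file.
mangled = payload.decode("utf-8", errors="replace").encode("utf-8")

print(roundtrip == payload)   # True
print(mangled == payload)     # False
```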
Python Key Error With _
39,128,761
<p>I'm getting a weird key error with Python dicts. My key is "B19013_001E" and I've named my dict "sf_tracts" with a nested dict "properties". Here is my code:</p> <pre><code>x = "B19013_001E" for tract in sf_tracts: print tract["properties"][x] </code></pre> <p>With this, I get a KeyError: "B19013_001E"</p> <p>However if I change the code to this, the values get printed:</p> <pre><code>x = "B19013_001E" for tract in sf_tracts: for key in tract["properties"]: if key == "B19013_001E": print tract["properties"][x] </code></pre> <p>What's the difference?</p> <p>-edit- I believe the issue is the underscore as other keys can be printed. How do I access this key?</p> <p>Thanks</p>
-1
2016-08-24T16:38:06Z
39,128,891
<p>You are assuming that the key exists in <em>all</em> dictionaries that the <code>tract in sf_tracts</code> loop produces. That assumption is incorrect.</p> <p>Your second piece of code happens to work because you are essentially testing for the key to exist, albeit expensively. You could instead do this:</p> <pre><code>for tract in sf_tracts: if x in tract["properties"]: print tract["properties"][x] </code></pre> <p>or you could use:</p> <pre><code>for tract in sf_tracts: print tract["properties"].get(x, 'Key is not present') </code></pre> <p>There is otherwise nothing special about a string key with an underscore in the value. An underscore makes no difference to how such keys are treated.</p>
3
2016-08-24T16:45:23Z
[ "python", "dictionary" ]
Python Key Error With _
39,128,761
<p>I'm getting a weird key error with Python dicts. My key is "B19013_001E" and I've named my dict "sf_tracts" with a nested dict "properties". Here is my code:</p> <pre><code>x = "B19013_001E" for tract in sf_tracts: print tract["properties"][x] </code></pre> <p>With this, I get a KeyError: "B19013_001E"</p> <p>However if I change the code to this, the values get printed:</p> <pre><code>x = "B19013_001E" for tract in sf_tracts: for key in tract["properties"]: if key == "B19013_001E": print tract["properties"][x] </code></pre> <p>What's the difference?</p> <p>-edit- I believe the issue is the underscore as other keys can be printed. How do I access this key?</p> <p>Thanks</p>
-1
2016-08-24T16:38:06Z
39,128,893
<p>Some of your tracts must be missing that particular key. In the first case, you're asking every tract to print the key, while in the second you're limiting the print operation to only those that have the key.</p>
1
2016-08-24T16:45:31Z
[ "python", "dictionary" ]
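The point above can be shown with a toy stand-in for the data, where one tract simply lacks the key (the dictionaries here are hypothetical):

```python
sf_tracts = [                                  # hypothetical stand-in data
    {"properties": {"B19013_001E": 28.0, "name": "Tract A"}},
    {"properties": {"name": "Tract B"}},       # this tract is missing the key
]

# dict.get returns None (or a chosen default) instead of raising KeyError:
values = [tract["properties"].get("B19013_001E") for tract in sf_tracts]
print(values)   # [28.0, None]
```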
Is it necessary to divide a file into chunks for encryption
39,128,855
<p>I was reading the code for a <a href="https://github.com/z7ev3n/ransomware/blob/master/ransomcrypto.py" rel="nofollow">ransomware</a>. As per the code, the author divided the files into 64 kb chunks to encrypt. I am unable to understand why.</p>
1
2016-08-24T16:43:12Z
39,128,936
<p>The reason is unknown; it was a choice made by the attack code's author.</p> <p>It is not necessary to limit file sizes to 64KB using current encryption methods such as AES. Most implementations handle files of any size, although it is wise to limit the size to 2^68 bytes for AES.</p>
0
2016-08-24T16:47:39Z
[ "python", "encryption", "cryptography", "aes", "pycrypto" ]
Is it necessary to divide a file into chunks for encryption
39,128,855
<p>I was reading the code for a <a href="https://github.com/z7ev3n/ransomware/blob/master/ransomcrypto.py" rel="nofollow">ransomware</a>. As per the code, the author divided the files into 64 kb chunks to encrypt. I am unable to understand why.</p>
1
2016-08-24T16:43:12Z
39,131,306
<p>If you look at the code, the encryptor reads 1024 * blocksize of bytes (actually 16 KiB) as a single chunk and uses the same cipher object to encrypt each chunk separately. </p> <p>This must be done in order to be able to encrypt large files, because some files are simply too large to be read in full into the memory, then encrypted and then written back. That alone means that the free memory must be more than three times higher than the size of the file that would need to be encrypted.</p> <p>Since PyCrypto doesn't have a stream-based implementation of its ciphers, this is the closest that fulfills the same task by maintaining a small memory footprint. </p> <p>Generally, the encryption of each chunk would produce independent ciphertext chunks that would need to be read back in the same chunked fashion as they were written, but this is not necessary here. AES-CBC XORs the current plaintext block with the previous ciphertext block. If it's the first block, then the IV is used as the previous ciphertext block. Since the IV is never reset on the cipher object it will always hold previous ciphertext block. The result is that the produced ciphertext is actually equivalent to encryption as a single large chunk.</p> <hr> <p>For reference, I'm talking about this:</p> <pre><code>def encrypt(in_file, out_file, password, key_length=32): bs = AES.block_size salt = Random.new().read(bs - len('Salted__')) key, iv = derive_key_and_iv(password, salt, key_length, bs) cipher = AES.new(key, AES.MODE_CBC, iv) out_file.write('Salted__' + salt) finished = False while not finished: chunk = in_file.read(1024 * bs) if len(chunk) == 0 or len(chunk) % bs != 0: padding_length = (bs - len(chunk) % bs) or bs chunk += padding_length * chr(padding_length) finished = True out_file.write(cipher.encrypt(chunk)) </code></pre>
1
2016-08-24T19:11:10Z
[ "python", "encryption", "cryptography", "aes", "pycrypto" ]
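The equivalence claimed in the answer above (chunked CBC encryption with a persistent cipher object equals one-shot encryption) can be demonstrated without any crypto library by swapping AES for a toy one-byte-XOR "block cipher"; the chaining logic is the same. This is purely illustrative and NOT real cryptography:

```python
BLOCK = 4
KEY = 0x5A

def cbc_encrypt(data, iv):
    """CBC-style chaining with a toy XOR 'block cipher' (NOT real crypto)."""
    prev = iv
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = bytes(b ^ p for b, p in zip(data[i:i + BLOCK], prev))  # XOR with previous ciphertext
        enc = bytes(b ^ KEY for b in block)                            # stand-in for AES
        out += enc
        prev = enc                                                     # carried state, like the cipher object
    return bytes(out)

iv = b"\x01\x02\x03\x04"
msg = b"sixteen bytes!!!"            # 16 bytes, a multiple of BLOCK

one_shot = cbc_encrypt(msg, iv)

# Chunked encryption, carrying the last ciphertext block forward as the
# next IV -- the role PyCrypto's persistent cipher object plays -- gives
# byte-identical output:
chunked = cbc_encrypt(msg[:8], iv)
chunked += cbc_encrypt(msg[8:], chunked[-BLOCK:])

print(chunked == one_shot)   # True
```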
Python - Drop row if two columns are NaN
39,128,856
<p>This is an extension to <a href="http://stackoverflow.com/questions/13413590/how-to-drop-rows-of-pandas-dataframe-whose-value-of-certain-column-is-nan">this question</a>, where OP wanted to know how to drop rows where the values in a single column are NaN.</p> <p>I'm wondering how I can drop rows where the values in <strong>2</strong> (or more) columns are <strong>both</strong> NaN. Using the second answer's created Data Frame:</p> <pre><code>In [1]: df = pd.DataFrame(np.random.randn(10,3)) In [2]: df.ix[::2,0] = np.nan; df.ix[::4,1] = np.nan; df.ix[::3,2] = np.nan; In [3]: df Out[3]: 0 1 2 0 NaN NaN NaN 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 4 NaN NaN 0.050742 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 8 NaN NaN 0.637482 9 -0.310130 0.078891 NaN </code></pre> <p>If I use the <code>drop.na()</code> command, specifically the <code>drop.na(subset=[1,2])</code>, then it completes an "or" type drop and leaves:</p> <pre><code>In[4]: df.dropna(subset=[1,2]) Out[4]: 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 5 -1.250970 0.030561 -2.678622 7 0.049896 -0.308003 0.823295 </code></pre> <p>What I want is an "and" type drop, where it drops rows where there is an <code>NaN</code> in column index 1 <strong>and</strong> 2. This would leave:</p> <pre><code> 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 4 NaN NaN 0.050742 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 8 NaN NaN 0.637482 9 -0.310130 0.078891 NaN </code></pre> <p>where only the first row is dropped.</p> <p>Any ideas?</p> <p>EDIT: changed data frame values for consistency</p>
2
2016-08-24T16:43:13Z
39,128,913
<p>Either of the following two will work:</p> <pre><code>df.dropna(subset=[1, 2], how='all') </code></pre> <p>or </p> <pre><code>df.dropna(subset=[1, 2], thresh=1) </code></pre>
3
2016-08-24T16:46:26Z
[ "python", "pandas", "dataframe" ]
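A runnable check of the difference, on a small hypothetical frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [np.nan, 2.7, 0.7],
    "b": [np.nan, -1.5, 0.9],
    "c": [np.nan, -0.8, np.nan],
})

or_style = df.dropna(subset=["b", "c"])              # drops any row with a NaN in b OR c
and_style = df.dropna(subset=["b", "c"], how="all")  # drops only rows where b AND c are both NaN

print(len(or_style), len(and_style))   # 1 2
```

`thresh=1` on the same subset behaves like `how="all"` here: a row survives if it has at least one non-NaN value among the subset columns.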
Python - Drop row if two columns are NaN
39,128,856
<p>This is an extension to <a href="http://stackoverflow.com/questions/13413590/how-to-drop-rows-of-pandas-dataframe-whose-value-of-certain-column-is-nan">this question</a>, where OP wanted to know how to drop rows where the values in a single column are NaN.</p> <p>I'm wondering how I can drop rows where the values in <strong>2</strong> (or more) columns are <strong>both</strong> NaN. Using the second answer's created Data Frame:</p> <pre><code>In [1]: df = pd.DataFrame(np.random.randn(10,3)) In [2]: df.ix[::2,0] = np.nan; df.ix[::4,1] = np.nan; df.ix[::3,2] = np.nan; In [3]: df Out[3]: 0 1 2 0 NaN NaN NaN 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 4 NaN NaN 0.050742 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 8 NaN NaN 0.637482 9 -0.310130 0.078891 NaN </code></pre> <p>If I use the <code>drop.na()</code> command, specifically the <code>drop.na(subset=[1,2])</code>, then it completes an "or" type drop and leaves:</p> <pre><code>In[4]: df.dropna(subset=[1,2]) Out[4]: 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 5 -1.250970 0.030561 -2.678622 7 0.049896 -0.308003 0.823295 </code></pre> <p>What I want is an "and" type drop, where it drops rows where there is an <code>NaN</code> in column index 1 <strong>and</strong> 2. This would leave:</p> <pre><code> 0 1 2 1 2.677677 -1.466923 -0.750366 2 NaN 0.798002 -0.906038 3 0.672201 0.964789 NaN 4 NaN NaN 0.050742 5 -1.250970 0.030561 -2.678622 6 NaN 1.036043 NaN 7 0.049896 -0.308003 0.823295 8 NaN NaN 0.637482 9 -0.310130 0.078891 NaN </code></pre> <p>where only the first row is dropped.</p> <p>Any ideas?</p> <p>EDIT: changed data frame values for consistency</p>
2
2016-08-24T16:43:13Z
39,128,984
<p>Pass <code>how='all'</code> to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow"><code>dropna()</code></a> method:</p> <pre><code>df.dropna(subset=[1,2], how='all') </code></pre>
2
2016-08-24T16:50:14Z
[ "python", "pandas", "dataframe" ]
How to generate and save POJO from H2O using Python
39,128,865
<p>I have a model created in H2O using Python. I want to generate a POJO of that model, and save it. </p> <p>Say my model is called model_rf. </p> <p>I have tried: </p> <pre><code>h2o.save_model(model_rf, path='./pojo_test', force=False) </code></pre> <p>This creates a directory called "pojo_test", which contains a whole bunch of binary files. I want a Java file though, something like model_rf.java, that is the POJO itself. </p> <p>I tried: </p> <pre><code>h2o.download_pojo(model_rf, path='./pojo_test_2', get_jar = True) </code></pre> <p>Which gave the error message: </p> <pre><code>IOError: [Errno 2] No such file or directory: u'./pojo_test_2/model_rf.java' </code></pre> <p>What am I missing? Probably a stupid question but I cannot for the life of me figure this out. </p>
0
2016-08-24T16:43:55Z
39,129,724
<p>Everything looks fine; it just looks like you need to change the <code>path</code> you used.</p> <p>Instead of using the directory that <code>h2o.save_model</code> created, use a directory that you know exists and for which you know the path. As a first test you could just save to your desktop, for example use </p> <pre><code>h2o.download_pojo(model_rf, path = '/Users/your_user_name/Desktop/', get_jar = True) </code></pre> <p>where you need to replace your_user_name (assuming you are using a Mac).</p> <p>Here's an example you can try from scratch (shut down h2o first with <code>h2o.cluster().shutdown()</code>):</p> <pre><code> import h2o h2o.init() iris_df = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/iris/iris.csv") from h2o.estimators.glm import H2OGeneralizedLinearEstimator predictors = iris_df.columns[0:4] response_col = "C5" train,valid,test = iris_df.split_frame([.7,.15], seed =1234) glm_model = H2OGeneralizedLinearEstimator(family="multinomial") glm_model.train(predictors, response_col, training_frame = train, validation_frame = valid) h2o.download_pojo(glm_model, path = '/Users/your_user_name/Desktop/', get_jar = True) </code></pre> <p>again where you need to replace <code>your_user_name</code> (assuming you are using a Mac).</p> <p>What might have happened: it looks like the first time you saved an H2O model to disk with <code>h2o.save_model</code>, a directory was created in the location where your original h2o cluster was running (check whether you are connecting to an h2o cluster from different locations), and the second time you tried to save the model with <code>download_pojo</code> it looked at your current directory and saw that 'pojo_test_2' didn't exist there.</p> <p>When you run <code>h2o.save_model</code> it will print out the full path to where it created a new directory. Check whether that path is the same as your current directory. </p>
0
2016-08-24T17:37:26Z
[ "python", "pojo", "h2o" ]
pyquery (lxml) not finding a tag in a well-structured XML document?
39,128,909
<p>I have an XML file that looks like <a href="https://clinicaltrials.gov/ct2/show/NCT00636649?displayxml=true" rel="nofollow">this</a>. The relevant bit is this: </p> <pre><code>&lt;reference&gt; &lt;citation&gt;Vander Wal JS, Gang CH, Griffing GT, Gadde KM. Escitalopram for treatment of night eating syndrome: a 12-week, randomized, placebo-controlled trial. J Clin Psychopharmacol. 2012 Jun;32(3):341-5. doi: 10.1097/JCP.0b013e318254239b.&lt;/citation&gt; &lt;PMID&gt;22544016&lt;/PMID&gt; &lt;/reference&gt; </code></pre> <p>I am trying to find the value of the <code>PMID</code> field, using PyQuery to parse the XML:</p> <pre><code> from pyquery import PyQuery as pq text = open(f, 'r').read() d = pq(text) data = {} data['nct_id'] = d('nct_id').text() print d('reference') reference = d('reference') print reference('PMID') data['pmid'] = reference('PMID').text() print data['PMID'] </code></pre> <p>Why isn't this working? In the console I see the full content of <code>reference</code> from the first print statement, followed by two empty values:</p> <pre><code>&lt;reference&gt; &lt;citation&gt;Vander Wal JS, Gang CH, Griffing GT, Gadde KM. Escitalopram for treatment of night eating syndrome: a 12-week, randomized, placebo-controlled trial. J Clin Psychopharmacol. 2012 Jun;32(3):341-5. doi: 10.1097/JCP.0b013e318254239b.&lt;/citation&gt; &lt;PMID&gt;22544016&lt;/PMID&gt; &lt;/reference&gt; </code></pre> <p>I can find other leaf nodes in the document (like <code>nct_id</code>) just fine using <code>.find()</code>, as the example code shows.</p> <p>Is it that PyQuery doesn't like upper-case tags?</p>
1
2016-08-24T16:46:10Z
39,129,070
<p>You can specify the parser to use and it will work:</p> <pre><code>d = pq(text, parser='xml') </code></pre>
0
2016-08-24T16:56:51Z
[ "python", "lxml", "pyquery" ]
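The likely cause is that the default lenient (HTML-style) parsing lowercases tag names, so the case-sensitive <code>PMID</code> selector no longer matches; forcing the XML parser preserves case. For what it's worth, if pyquery is unavailable, the standard library's ElementTree parses true XML and keeps tag case too (the citation text below is shortened for illustration):

```python
import xml.etree.ElementTree as ET

xml_text = """<reference>
  <citation>Vander Wal JS, et al. Escitalopram for treatment of
  night eating syndrome.</citation>
  <PMID>22544016</PMID>
</reference>"""

ref = ET.fromstring(xml_text)
pmid = ref.findtext("PMID")   # a real XML parser keeps tag case, so 'PMID' matches
print(pmid)                   # 22544016
```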
Kernel crashes when increasing iterations
39,128,973
<p>I am running a Python script using Spyder 2.3.9. I have a fairly large script and when running it through with (300x600) iterations (a loop inside another loop), everything appears to be working fine and takes approximately 40 minutes. But when I increase the number to (500x600) iterations, after 2 hours, the output yields:</p> <pre><code>It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console. </code></pre> <p>I've been trying to go through the code but don't see anything that might be causing this in particular. I am using Python 2.7.12 64bits, Qt 4.8.7, PyQt4 (API v2) 4.11.4. (Anaconda2-4.0.0-MacOSX-x86_64)</p> <p>I'm not entirely sure what additional information is pertinent, but if you have any suggestions or questions, I'd be happy to read them. </p>
1
2016-08-24T16:49:34Z
39,129,149
<p><a href="https://github.com/spyder-ide/spyder/issues/3114" rel="nofollow">https://github.com/spyder-ide/spyder/issues/3114</a></p> <p>It seems this issue has been opened on their GitHub profile, should be addressed soon given the repo record.</p> <p>Some possible solutions:</p> <ol> <li><p>It may be helpful, if possible, to modify your script for faster convergence. Very often, for most practical purposes, the incremental value of iterations after a certain point is negligible. </p></li> <li><p>An upgrade or downgrade of the Spyder environment may help. </p></li> <li><p>Check your local firewall for blocked connections to 127.0.0.1 from pythonw.exe. </p></li> <li><p>If nothing works, try using Spyder on Ubuntu.</p></li> </ol>
0
2016-08-24T17:01:12Z
[ "python", "python-2.7", "anaconda", "spyder" ]
Program works, but TypeError: 'int' object is not subscriptable is received. Why?
39,129,057
<p>This program checks whether a string is a palindrome and returns True if it is. It works fine when I run it in the Python IDLE. But it is not accepted on the online testing site, which returns this error: "<strong>TypeError: 'int' object is not subscriptable</strong>"</p> <pre><code>string="racecar" def is_palindrome(string): if string == string[::-1]: return True else: return False </code></pre> <p>Why is this? To my knowledge I'm not working with ints.</p>
0
2016-08-24T16:55:29Z
39,129,517
<p>You obviously want your function to only operate on strings, and unless you're using a quite new version of Python supporting type hints, you can't really tell Python that. So just cast to string right off the bat. Almost anything can be cast to a string and then your function will work no matter what it is passed, or at least throw a decent error. </p> <pre><code>def is_palindrome(string): string = str(string) # add this line if string == string[::-1]: return True else: return False </code></pre>
0
2016-08-24T17:24:39Z
[ "python", "string", "slice", "typeerror" ]
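The error message suggests the testing site also calls the function with an int, and an int is not subscriptable, so <code>string[::-1]</code> fails. A compact variant folding in the cast, exercised with both kinds of input:

```python
def is_palindrome(value):
    s = str(value)          # ints are not subscriptable, so cast to str first
    return s == s[::-1]

print(is_palindrome("racecar"))   # True
print(is_palindrome(12321))       # True  (works for ints after the cast)
print(is_palindrome("python"))    # False
```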
Pandas realization of leave one out encoding for categorical features
39,129,085
<p>I have recently watched a video from Owen Zhang, Kaggle rank 1 competitor: <a href="https://youtu.be/LgLcfZjNF44" rel="nofollow">https://youtu.be/LgLcfZjNF44</a> where he explains a technique for encoding categorical features as numerical ones, called leave-one-out encoding. What he does to a categorical feature is associate a value with each observation, which is the average of the response for all other observations with the same category.</p> <p>I've been trying to implement this strategy in Python using pandas. Although I have managed to build working code, my data set contains tens of millions of rows, so its performance is very slow. If someone could suggest a faster solution I'd be very grateful.</p> <p>This is my code so far:</p> <pre><code>def categ2numeric(data, train=True): def f(series): indexes = series.index.values pomseries = pd.Series() for i, index in enumerate(indexes): pom = np.delete(indexes, i) pomseries.loc[index] = series[pom].mean() series = pomseries return series if train: categ = data.groupby(by=['Cliente_ID'])['Demanda_uni_equil'].apply(f) </code></pre> <p>And I need to turn this Series:</p> <pre><code> 159812 28.0 464556 83.0 717223 45.0 1043801 21.0 1152917 7.0 Name: 26, dtype: float32 </code></pre> <p>into this:</p> <pre><code> 159812 39.00 464556 25.25 717223 34.75 1043801 40.75 1152917 44.25 dtype: float64 </code></pre> <p>Or, mathematically, the element with index 159812 is equal to the average of all the other elements:</p> <p>39 = (83 + 45 + 21 + 7) / 4</p>
1
2016-08-24T16:57:38Z
39,129,252
<p>Replace each element of the Series with the difference between the sum of the Series and the element, then divide by the length of the Series minus 1. Assuming <code>s</code> is your Series:</p> <pre><code>s = (s.sum() - s)/(len(s) - 1) </code></pre> <p>The resulting output:</p> <pre><code>159812 39.00 464556 25.25 717223 34.75 1043801 40.75 1152917 44.25 </code></pre>
3
2016-08-24T17:07:51Z
[ "python", "pandas", "categorical-data" ]
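The one-liner above reproduces the question's expected output exactly; here it is run on the series from the question:

```python
import pandas as pd

s = pd.Series([28.0, 83.0, 45.0, 21.0, 7.0],
              index=[159812, 464556, 717223, 1043801, 1152917])

# Leave-one-out mean: (total - own value) / (count - 1)
loo = (s.sum() - s) / (len(s) - 1)
print(loo.tolist())   # [39.0, 25.25, 34.75, 40.75, 44.25]
```

For the grouped case, the same expression can be applied per group, e.g. via `groupby(...).transform(...)`; note that groups of size 1 would divide by zero and need special handling.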
Pandas realization of leave one out encoding for categorical features
39,129,085
<p>I have recently watched a video from Owen Zhang kaggle rank 1 competitor: <a href="https://youtu.be/LgLcfZjNF44" rel="nofollow">https://youtu.be/LgLcfZjNF44</a> where he explains a technique of encoding categorical features to numerical which is called leave one out encoding. What he does to a categorical feature is associate a value with each observation, which is the average of the response for all other observations with same category.</p> <p>I've been trying to implement this strategy in python using pandas. Although I have managed to build a successful code the fact that my data set is of size of tens of millions its performance is very slow. If someone could bring up a faster solution I'd be very grateful.</p> <p>This is my code so far:</p> <pre><code>def categ2numeric(data, train=True): def f(series): indexes = series.index.values pomseries = pd.Series() for i, index in enumerate(indexes): pom = np.delete(indexes, i) pomseries.loc[index] = series[pom].mean() series = pomseries return series if train: categ = data.groupby(by=['Cliente_ID'])['Demanda_uni_equil'].apply(f) </code></pre> <p>And I need to turn this Series:</p> <pre><code> 159812 28.0 464556 83.0 717223 45.0 1043801 21.0 1152917 7.0 Name: 26, dtype: float32 </code></pre> <p>to this:</p> <pre><code> 159812 39.00 464556 25.25 717223 34.75 1043801 40.75 1152917 44.25 dtype: float64 </code></pre> <p>Or mathematically element with index 159812 is equal to the average of all the other elements or:</p> <p>39 = (83 + 45 + 21 + 7) / 4</p>
1
2016-08-24T16:57:38Z
39,140,923
<p>With help from @root, I have found that the fastest solution to this problem is this kind of approach:</p> <pre><code>cs = train.groupby(by=['Cliente_ID'])['Demanda_uni_equil'].sum() cc = train['Cliente_ID'].value_counts() boolean = (cc == 1) index = boolean[boolean == True].index.values cc.loc[boolean] += 1 cs.loc[index] *= 2 train = train.join(cs.rename('sum'), on=['Cliente_ID']) train = train.join(cc.rename('count'), on=['Cliente_ID']) train['Cliente_IDloo'] = (train['sum'] - train['Demanda_uni_equil'])/(train['count'] - 1) del train['sum'], train['count'] </code></pre> <p>I've found that using the apply method with a callable function takes 2 minutes, while this approach takes only 1 second, though it is a bit cumbersome.</p>
0
2016-08-25T09:01:27Z
[ "python", "pandas", "categorical-data" ]
Add a delay while curl is downloading python script and then pipe to execute script?
39,129,116
<p>I just created a rickroll prank to play on friends and family. I want to be able to download the file from github using a curl command which works. My issue is that when I use a pipe and try to execute the script it does it right after curl gets executed and before it downloads the file.</p> <p>This is the command I am trying to run: </p> <pre><code>curl -L -O https://raw.githubusercontent.com/krish-penumarty/RickRollPrank/master/rickroll.py | python rickroll.py </code></pre> <p>I have tried to run it using the sleep command as well, but haven't had any luck.</p> <pre><code>(curl -L -O https://raw.githubusercontent.com/krish-penumarty/RickRollPrank/master/rickroll.py; sleep 10) | python rickroll.py </code></pre>
-1
2016-08-24T16:59:31Z
39,132,600
<p>Expanding on my comment.</p> <p>There are several ways to chain commands using most shell languages (here I assume sh / bash dialect).</p> <ul> <li>The most basic: <code>;</code> will just run each command sequentially, starting the next one as the previous one completes.</li> <li>Conditional chaining, <code>&amp;&amp;</code> works as <code>;</code> but aborts the chain as soon as a command returns an error (any non-0 <a href="http://linux.die.net/man/3/exit" rel="nofollow">return code</a>).</li> <li>Conditional chaining, <code>||</code> works as <code>&amp;&amp;</code> but aborts the chain as soon as a command succeeds (returns 0).</li> </ul> <p>What you tried to do here is neither of those, it's <a href="https://en.wikipedia.org/wiki/Pipeline_(Unix)" rel="nofollow">piping</a>. Triggered by <code>|</code>, it causes commands on its sides to be run at once, with the standard output of the left-hand one being fed into the standard input of the right-hand one.</p> <p>Your second example doesn't work either, because it causes two sequences to be run in parallel:</p> <ul> <li>First sequence is the <code>curl</code>, followed by a <code>sleep</code> once it finishes.</li> <li>Second sequence is the <code>python</code> command, run simultaneously with anything written by the first sequence redirected as its input.</li> </ul> <p>So fix it: <code>command1 &amp;&amp; command2</code>, will run <code>curl</code>, wait for it to complete, and only run <code>python</code> if <code>curl</code> succeeded.</p> <p>And again, you can use your example to show how harmful it can be to run commands one doesn't fully understand. Have your script write “All your files have been deleted” in red, it can be good for educating people on that subject.</p>
1
2016-08-24T20:34:55Z
[ "python", "linux", "curl", "pipe" ]
How to read Certification details in django Request Object?
39,129,133
<p>We are trying to integrate with the Azure store. The Azure store calls our API (built with Django REST framework) and sends certificate details along with the request payload.</p> <p>But I cannot see the certificate details (X509Certificate) in the Django request header, body, cookies, or session.</p> <p>Can anyone suggest a way to read the X509Certificate2 certificate sent by Azure from the Django request object?</p>
0
2016-08-24T17:00:30Z
39,132,438
<p>I don't know anything about X509 but a quick Google reveals <a href="https://github.com/novafloss/django-x509" rel="nofollow">this library</a> which will probably help you.</p>
1
2016-08-24T20:24:11Z
[ "python", "django", "django-rest-framework", "x509certificate", "azure-store" ]
Python counts duplicates as uniques in csv file
39,129,165
<p>I've written a script that takes the HTML table of executed offenders in Texas (can't post the link due to restrictions, but it can be found in the code for getcsv.py) and converts it into a csv file. Another script then counts up the races of each person. However, I've been having an issue where it counts all but one of both the White and Hispanic entries, then counts the remaining one separately. This: <code> [('White', 237), ('Black', 196), ('Hispanic', 100), ('Other', 2), ('White ', 1), ('Hispanic ', 1)] </code> is the result.</p> <p>This is the script that downloads the csv file (getcsv.py)</p> <pre><code>import csv from bs4 import BeautifulSoup from urllib.request import urlopen soup = BeautifulSoup(urlopen('http://www.tdcj.state.tx.us/death_row/dr_executed_offenders.html'), "html.parser") table = soup.find('table') headers = [header.text for header in table.find_all('th')] rows = [] for row in table.find_all('tr'): rows.append([val.text for val in row.find_all('td')]) with open('new.csv', 'w', encoding="utf8", newline='') as f: writer = csv.writer(f) writer.writerow(headers) writer.writerows(row for row in rows if row) </code></pre> <p>This is the script that takes the races (analyse.py)</p> <pre><code>import csv import collections race = collections.Counter() with open('new.csv') as input_file: next(input_file) for row in csv.reader(input_file, delimiter=','): race[row[8]] += 1 racecom = race.most_common() print ('Number of white people executed: %s' % race['White']) print ('Number of black people executed: %s' % race['Black']) print ('Number of Hispanic people executed: %s' % race['Hispanic']) print ('Number of Other people executed: %s' % race['Other']) print (racecom) </code></pre> <p>However, when I use a csv file generated by convertcsv.org the problem disappears, so I am fairly sure it's getcsv.py that has the fault.</p> <p>The generated file can be downloaded at <a href="https://www.dropbox.com/s/gz0kob2miejqucq/actual.csv?dl=0" rel="nofollow">https://www.dropbox.com/s/gz0kob2miejqucq/actual.csv?dl=0</a> as actual.csv and the auto downloaded one can be found at <a href="https://www.dropbox.com/s/chkycm21konvcw0/new.csv?dl=0" rel="nofollow">https://www.dropbox.com/s/chkycm21konvcw0/new.csv?dl=0</a> as new.csv.</p>
0
2016-08-24T17:01:50Z
39,129,256
<p>Whitespace is significant. You have to strip it away if the keys are supposed to compare equal:</p> <pre><code>with open('new.csv') as input_file: next(input_file) race = collections.Counter(row[8].strip() for row in csv.reader(input_file, delimiter=',')) </code></pre>
2
2016-08-24T17:08:07Z
[ "python", "csv" ]
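The effect of the stray whitespace, and of stripping it, can be seen on a few made-up values:

```python
import collections

# Hypothetical raw values as they might come out of the scraped table cells,
# two of them with a stray trailing space:
races = ["White", "Black", "White ", "Hispanic", "Hispanic "]

raw = collections.Counter(races)                    # "White" and "White " count separately
cleaned = collections.Counter(r.strip() for r in races)

print(raw["White"], cleaned["White"])               # 1 2
```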
Tivix - angular-django-registration-auth with Ionic?
39,129,184
<p>I'm trying to create a user registration with Ionic. I installed Tivix's django backend, and the angular-django in the front end. The thing is, I'm trying to incorporate it into Ionic. </p> <p><a href="http://imgur.com/a/rpR65" rel="nofollow">Here's my app structure in Visual Studio</a></p> <p><a href="http://imgur.com/a/r4TUo" rel="nofollow">In my index.html I have included the files</a></p> <p>I'm just having trouble understanding how I would include the app <strong>"angularDjangoRegistrationAuthApp"</strong> in my index.html.</p> <p><a href="http://imgur.com/a/0FoVe" rel="nofollow">I tried doing this</a> in my index.html file, but when I click the Sign Up button nothing happens. If anyone has ever worked with Tivix's module before and can point me in the right direction, it would help a lot.</p>
0
2016-08-24T17:03:04Z
39,184,507
<p>Solved. <a href="https://creator.ionic.io/" rel="nofollow">Ionic Creator is your friend.</a></p>
0
2016-08-27T18:37:29Z
[ "python", "angularjs", "django", "ionic-framework", "django-rest-auth" ]
how to use *and* in pandas loc API
39,129,224
<p>I try to use <em>and</em> in the <code>.loc</code> API:</p> <pre><code>df = pd.DataFrame(dict(age=[99, 33, 33, 22, 33, 44], aa2=[199, 3, 43, 22, 23, 54], nom=['a', 'z', 'f', 'b', 'p', 'a'],)) df.loc[df.age&gt;30] # aa2 age nom # 0 199 99 a # 1 3 33 z # 2 43 33 f # 4 23 33 p # 5 54 44 a </code></pre> <p>But I get this error:</p> <pre><code>df.loc[df.age&gt;30 and (df.age &gt; df.aa2)] # --------------------------------------------------------------------------- # ValueError Traceback (most recent call last) # &lt;ipython-input-13-930dff789922&gt; in &lt;module&gt;() # ----&gt; 1 df.loc[df.age&gt;30 and (df.age &gt; df.aa2)] # # /site-packages/pandas/core/generic.pyc in __nonzero__(self) # 729 raise ValueError("The truth value of a {0} is ambiguous. " # 730 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." # --&gt; 731 .format(self.__class__.__name__)) # 732 # 733 __bool__ = __nonzero__ # # ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>For now I do this instead:</p> <pre><code>df.loc[df.age&gt;30].loc[(df.age &gt; df.aa2)] # aa2 age nom # 1 3 33 z # 4 23 33 p </code></pre>
2
2016-08-24T17:05:52Z
39,129,262
<pre><code>&gt;&gt;&gt; df.loc[(df.age&gt;30) &amp; (df.age &gt; df.aa2)] aa2 age nom 1 3 33 z 4 23 33 p </code></pre>
3
2016-08-24T17:08:41Z
[ "python", "pandas" ]
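A note on why the parentheses in the accepted answer matter: `&` binds more tightly than the comparison operators, so dropping them changes what gets compared. Plain integers demonstrate the same precedence rule that applies to pandas Series (a minimal sketch, no pandas needed):

```python
# & binds tighter than >, so "x > 30 & mask" parses as "x > (30 & mask)".
# That is why each comparison in the pandas expression needs its own
# parentheses. Plain integers show the same precedence rule:
without_parens = 2 > 1 & 0   # parsed as 2 > (1 & 0) -> 2 > 0 -> True
with_parens = (2 > 1) & 0    # comparison grouped first -> True & 0 -> 0

print(without_parens)  # True
print(with_parens)     # 0
```

The same grouping rule is the reason `df.loc[(df.age > 30) & (df.age > df.aa2)]` works while the unparenthesized form would be misparsed before pandas ever sees it.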
how to use *and* in pandas loc API
39,129,224
<p>I try to use <em>and</em> in a <code>.loc</code> API::</p> <pre><code>df = pd.DataFrame(dict(age=[99, 33, 33, 22, 33, 44], aa2=[199, 3, 43, 22, 23, 54], nom=['a', 'z', 'f', 'b', 'p', 'a'],)) df.loc[df.age&gt;30] # aa2 age nom # 0 199 99 a # 1 3 33 z # 2 43 33 f # 4 23 33 p # 5 54 44 a </code></pre> <p>But I get this error::</p> <pre><code>df.loc[df.age&gt;30 and (df.age &gt; df.aa2)] # --------------------------------------------------------------------------- # ValueError Traceback (most recent call last) # &lt;ipython-input-13-930dff789922&gt; in &lt;module&gt;() # ----&gt; 1 df.loc[df.age&gt;30 and (df.age &gt; df.aa2)] # # /site-packages/pandas/core/generic.pyc in __nonzero__(self) # 729 raise ValueError("The truth value of a {0} is ambiguous. " # 730 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." # --&gt; 731 .format(self.__class__.__name__)) # 732 # 733 __bool__ = __nonzero__ # # ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>For now I do this ;(::</p> <pre><code>df.loc[df.age&gt;30].loc[(df.age &gt; df.aa2)] # aa2 age nom # 1 3 33 z # 4 23 33 p </code></pre>
2
2016-08-24T17:05:52Z
39,131,686
<p>I prefer the <a href="https://pandas-docs.github.io/pandas-docs-travis/indexing.html#the-query-method-experimental" rel="nofollow">query()</a> method, which is a bit nicer and more readable:</p> <pre><code>In [3]: df.query('age &gt; 30 and age &gt; aa2') Out[3]: aa2 age nom 1 3 33 z 4 23 33 p </code></pre> <p>PS: this doesn't answer your question directly (@M.Klugerford <a href="http://stackoverflow.com/a/39129262/5741205">has already shown you how to do this using <code>.loc[]</code></a>), but it gives you a better (in my personal opinion) alternative</p>
2
2016-08-24T19:34:43Z
[ "python", "pandas" ]
How to make Django rest_framework Session Authentication case insensitive?
39,129,289
<p>I'm using Django rest framework basic authentication with the following code:</p> <pre><code>class MyBasicAuthentication(BasicAuthentication): def authenticate_header(self, request): return 'xBasic realm="%s"' % self.www_authenticate_realm class AuthView(APIView): authentication_classes = (MyBasicAuthentication,) serializer_class = UserSerializer def post(self, request, *args, **kwargs): user = authenticate(username=request.user.username, password=request.user.password) login(request, user) response = get_user_basic_info(request.user) return Response(response) </code></pre> <p>It is working fine, but I need to make this authentication case insensitive for the username. Any suggestions?</p>
0
2016-08-24T17:10:02Z
39,129,486
<p>How about just choosing all upper or all lower case characters for the username stored in the database? E.g. for all uppercase:</p> <pre><code>user = authenticate(username=str(request.user.username).upper(), password=request.user.password) </code></pre> <p>This way the input can be mixed case but the output is standardized.</p>
0
2016-08-24T17:22:53Z
[ "python", "django", "authentication", "django-rest-framework", "case-insensitive" ]
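A framework-agnostic way to get the same effect without rewriting stored data is to normalize both sides before comparing. This is only a sketch of the idea in plain Python (in Django itself a case-insensitive queryset lookup such as `username__iexact` plays this role):

```python
def usernames_match(stored, supplied):
    # casefold() is a more aggressive lower() intended for caseless matching.
    return stored.casefold() == supplied.casefold()

print(usernames_match("Alice", "ALICE"))     # True
print(usernames_match("Alice", "Bob"))       # False
print(usernames_match("STRASSE", "straße"))  # True: casefold maps ß to ss
```

Normalizing at comparison time, rather than at storage time, also avoids having to migrate existing usernames.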
How to make Django rest_framework Session Authentication case insensitive?
39,129,289
<p>I'm using Django rest framework basic authentication with the following code:</p> <pre><code>class MyBasicAuthentication(BasicAuthentication): def authenticate_header(self, request): return 'xBasic realm="%s"' % self.www_authenticate_realm class AuthView(APIView): authentication_classes = (MyBasicAuthentication,) serializer_class = UserSerializer def post(self, request, *args, **kwargs): user = authenticate(username=request.user.username, password=request.user.password) login(request, user) response = get_user_basic_info(request.user) return Response(response) </code></pre> <p>It is working fine, but I need to make this authentication case insensitive for the username. Any suggestions?</p>
0
2016-08-24T17:10:02Z
39,129,918
<p>The simplest way of doing this is standardizing the way you store the usernames (in upper or lower case, as @Upsampled answered), which means the initial authentication will not run into a case mismatch. If you want to change all existing users to a particular case, I would recommend this answer: <a href="http://stackoverflow.com/a/33456271">http://stackoverflow.com/a/33456271</a></p>
0
2016-08-24T17:48:39Z
[ "python", "django", "authentication", "django-rest-framework", "case-insensitive" ]
TypeError: object of type 'bool' has no len() - Odoo v9 community
39,129,302
<p>I still have this error on another part of the code:</p> <pre><code>class invoice(models.Model): _inherit = "account.invoice" @api.multi def send_xml_file(self): # haciendolo para efacturadelsur solamente por ahora host = 'https://www.efacturadelsur.cl' post = '/ws/DTE.asmx' # HTTP/1.1 url = host + post _logger.info('URL to be used %s' % url) # client = Client(url) # _logger.info(client) _logger.info('len (como viene): %s' % len(self.sii_xml_request)) response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': len( self.sii_xml_request)}, body=self.sii_xml_request) _logger.info(response.status) _logger.info(response.data) self.sii_xml_response = response.data self.sii_result = 'Enviado' </code></pre> <p>Before in my previous <a href="http://stackoverflow.com/questions/39113494/typeerror-object-of-type-bool-has-no-len-odoo-v9">question</a> the error was solved on this line:</p> <pre><code>_logger.info('len (como viene): %s' % (len(self.sii_xml_request) if self.sii_xml_request else '') </code></pre> <p>Now I have it again on the next one, I've tried with a conditional like before, but I still can't solve it, must be related to syntax or something, the error is on this sentence:</p> <pre><code> response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': len( self.sii_xml_request)}, body=self.sii_xml_request) </code></pre> <p>Specifically on <code>self.sii_xml_request)}, body=self.sii_xml_request)</code> there's the <code>sii_xml_request</code> object again, I think is just a matter to add the conditional, since the field is empty...</p> <p>But I still can't make it work properly, is this solvable in a similar fashion as my previous question?</p> <p>Thanks in advance!</p> <p><strong>EDIT</strong></p> <p>It is not a duplicate since this is another line of code, and a very very similar way to solve it won't apply here, it is a slightly 
different syntax.</p> <p><strong>SECOND EDIT</strong></p> <p>This is how it looks right now, the conditional is on every <code>len</code> of this function</p> <pre><code>@api.multi def send_xml_file(self): # haciendolo para efacturadelsur solamente por ahora host = 'https://www.efacturadelsur.cl' post = '/ws/DTE.asmx' # HTTP/1.1 url = host + post _logger.info('URL to be used %s' % url) # client = Client(url) # _logger.info(client) _logger.info('len (como viene): %s' % len(self.sii_xml_request)if self.sii_xml_request else '') #if self.sii_xml_request: response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': (len( self.sii_xml_request) if self.sii_xml_request else '')}, body=self.sii_xml_request) #else ''(len(self.sii_xml_request) if self.sii_xml_request else '') _logger.info(response.status) _logger.info(response.data) self.sii_xml_response = response.data self.sii_result = 'Enviado' </code></pre>
0
2016-08-24T17:10:40Z
39,131,549
<p>To avoid dragging on the conversation in the comments, I am going to take a crack at an actual answer.</p> <p>It seems like your object <code>self.sii_xml_request</code> can be either a). a string, or b). a boolean (<code>True</code> or <code>False</code>) (though please correct me if I am wrong).</p> <p>You are getting an error, because you are trying to take the <code>len()</code> of that object to get an idea of the length of the request, but when that object is <code>True</code> or <code>False</code> this will fail, because <code>bool</code> objects don't have a <code>__len__</code> attribute. You tried solving this based on a previous question by doing this instead:</p> <pre><code>(len(self.sii_xml_request) if self.sii_xml_request else '') </code></pre> <p>This will only work if <code>self.sii_xml_request</code> only ever returns a string or <code>False</code> (or something that is equivalent to <code>False</code> like <code>None</code> or <code>0</code> or <code>[]</code>, etc.), because if it returns <code>True</code>, then it will once again try to get the <code>len()</code> of the object which doesn't work. </p> <p>Doing:</p> <pre><code>(len(self.sii_xml_request) if not isinstance(self.sii_xml_request, bool) else '') </code></pre> <p>Might work, but I don't know what decides whether <code>self.sii_xml_request</code> returns <code>True</code>, <code>False</code> or some string, and you may want to handle <code>True</code> and <code>False</code> differently. Also, you probably never want to have content length be <code>''</code> because it will normally be an integer, so if anything you should have it be <code>0</code> if <code>self.sii_xml_request</code> is <code>False</code>. If you want to handle them the same try what I have above. Otherwise, you could define a variable <code>content_length</code> earlier, and set it accordingly based on the value of <code>self.sii_xml_request</code>.
For example:</p> <pre><code>if isinstance(self.sii_xml_request, bool): content_length = int(self.sii_xml_request) # 1 if True else 0 else: content_length = len(self.sii_xml_request) ... response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': content_length}, body=self.sii_xml_request) </code></pre>
1
2016-08-24T19:26:45Z
[ "python", "openerp", "odoo-9" ]
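The branching in the answer can be folded into a small helper so every call site stays readable. This is an illustration only; `safe_len` is not part of Odoo or the question's code:

```python
def safe_len(value):
    # Odoo hands back False for empty fields, and bool has no __len__,
    # so treat booleans specially before falling through to len().
    if isinstance(value, bool):
        return int(value)  # True -> 1, False -> 0, mirroring the answer's choice
    return len(value)

print(safe_len("<xml/>"))  # 6
print(safe_len(False))     # 0
print(safe_len(True))      # 1
```

With such a helper, both the logging call and the `Content-Length` header can use `safe_len(self.sii_xml_request)` without repeating the conditional.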
TypeError: object of type 'bool' has no len() - Odoo v9 community
39,129,302
<p>I still have this error on another part of the code:</p> <pre><code>class invoice(models.Model): _inherit = "account.invoice" @api.multi def send_xml_file(self): # haciendolo para efacturadelsur solamente por ahora host = 'https://www.efacturadelsur.cl' post = '/ws/DTE.asmx' # HTTP/1.1 url = host + post _logger.info('URL to be used %s' % url) # client = Client(url) # _logger.info(client) _logger.info('len (como viene): %s' % len(self.sii_xml_request)) response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': len( self.sii_xml_request)}, body=self.sii_xml_request) _logger.info(response.status) _logger.info(response.data) self.sii_xml_response = response.data self.sii_result = 'Enviado' </code></pre> <p>Before in my previous <a href="http://stackoverflow.com/questions/39113494/typeerror-object-of-type-bool-has-no-len-odoo-v9">question</a> the error was solved on this line:</p> <pre><code>_logger.info('len (como viene): %s' % (len(self.sii_xml_request) if self.sii_xml_request else '') </code></pre> <p>Now I have it again on the next one, I've tried with a conditional like before, but I still can't solve it, must be related to syntax or something, the error is on this sentence:</p> <pre><code> response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': len( self.sii_xml_request)}, body=self.sii_xml_request) </code></pre> <p>Specifically on <code>self.sii_xml_request)}, body=self.sii_xml_request)</code> there's the <code>sii_xml_request</code> object again, I think is just a matter to add the conditional, since the field is empty...</p> <p>But I still can't make it work properly, is this solvable in a similar fashion as my previous question?</p> <p>Thanks in advance!</p> <p><strong>EDIT</strong></p> <p>It is not a duplicate since this is another line of code, and a very very similar way to solve it won't apply here, it is a slightly 
different syntax.</p> <p><strong>SECOND EDIT</strong></p> <p>This is how it looks right now, the conditional is on every <code>len</code> of this function</p> <pre><code>@api.multi def send_xml_file(self): # haciendolo para efacturadelsur solamente por ahora host = 'https://www.efacturadelsur.cl' post = '/ws/DTE.asmx' # HTTP/1.1 url = host + post _logger.info('URL to be used %s' % url) # client = Client(url) # _logger.info(client) _logger.info('len (como viene): %s' % len(self.sii_xml_request)if self.sii_xml_request else '') #if self.sii_xml_request: response = pool.urlopen('POST', url, headers={ 'Content-Type': 'application/soap+xml', 'charset': 'utf-8', 'Content-Length': (len( self.sii_xml_request) if self.sii_xml_request else '')}, body=self.sii_xml_request) #else ''(len(self.sii_xml_request) if self.sii_xml_request else '') _logger.info(response.status) _logger.info(response.data) self.sii_xml_response = response.data self.sii_result = 'Enviado' </code></pre>
0
2016-08-24T17:10:40Z
39,193,597
<p>When you get 'bool' in this kind of error message, it means you are calling a function on an empty field, so before you call any function, check that the field has a value. In Odoo an empty field contains <code>False</code>, not <code>None</code>. I have had this error many times, and every time it turned out I was calling the function on an empty field.</p>
1
2016-08-28T16:48:25Z
[ "python", "openerp", "odoo-9" ]
Why does printing a dataframe break python when constructed from numpy empty_like
39,129,419
<pre><code>import numpy as np import pandas as pd </code></pre> <p>consider numpy array <code>a</code></p> <pre><code>a = np.array([None, None], dtype=object) print(a) [None None] </code></pre> <p>And <code>dfa</code></p> <pre><code>dfa = pd.DataFrame(a) print(dfa) 0 0 None 1 None </code></pre> <p>Now consider numpy array <code>b</code></p> <pre><code>b = np.empty_like(a) print(b) [None None] </code></pre> <p>It appears the same as <code>a</code></p> <pre><code>(a == b).all() True </code></pre> <h1><strong><em>THIS! CRASHES MY PYTHON!!</em></strong> BE CAREFUL!!!</h1> <pre><code>dfb = pd.DataFrame(b) # Fine so far print(dfb.values) [[None] [None]] </code></pre> <p>However</p> <pre><code>print(dfb) # BOOM!!! </code></pre>
10
2016-08-24T17:17:52Z
39,131,997
<p>As reported <a href="https://github.com/pydata/pandas/issues/14082">here,</a> this is a bug, which is fixed in the master branch of <code>pandas</code> / the upcoming <code>0.19.0</code> release.</p>
7
2016-08-24T19:54:46Z
[ "python", "pandas", "numpy" ]
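Independent of the pandas display bug, `np.empty_like` on an object array makes no promise about its contents, so filling the array explicitly removes any doubt before handing it to a DataFrame. A numpy-only sketch (the crash itself was purely in pandas's repr code):

```python
import numpy as np

a = np.array([None, None], dtype=object)
b = np.empty_like(a)  # contents are formally unspecified for a fresh array
b.fill(None)          # make every slot explicitly None

print(b.shape == a.shape)         # True
print(all(x is None for x in b))  # True
```

With the contents pinned down like this, `b` is element-for-element identical to `a`, not merely equal under `==`.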
What is the correct format to upgrade pip3 when the default pip is pip2?
39,129,450
<p>I develop for both <code>Python 2</code> and <code>3.</code><br> Thus, I have to use both <code>pip2</code> and <code>pip3.</code></p> <p>When using <code>pip3 -</code> I receive this upgrade request (last two lines):</p> <pre><code>$ pip3 install arrow Requirement already satisfied (use --upgrade to upgrade): arrow in c:\program files (x86)\python3.5.1\lib\site-packages Requirement already satisfied (use --upgrade to upgrade): python-dateutil in c:\program files (x86)\python3.5.1\lib\site-packages (from arrow) Requirement already satisfied (use --upgrade to upgrade): six&gt;=1.5 in c:\program files (x86)\python3.5.1\lib\site-packages (from python-dateutil-&gt;arrow) You are using pip version 7.1.2, however version 8.1.2 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. </code></pre> <p>My default <code>pip</code> is for <code>Python 2,</code> namely:</p> <pre><code>$ python -m pip install --upgrade pip Requirement already up-to-date: pip in /usr/lib/python2.7/site-packages </code></pre> <p>However, none of the following <em>explicit</em> commands succeed in upgrading the <code>Python 3 pip:</code></p> <pre><code>$ python -m pip3 install --upgrade pip3 /bin/python: No module named pip3 $ python -m pip install --upgrade pip3 Collecting pip3 Could not find a version that satisfies the requirement pip3 (from versions: ) No matching distribution found for pip3 $ python -m pip install --upgrade pip3.4 Collecting pip3.4 Could not find a version that satisfies the requirement pip3.4 (from versions: ) No matching distribution found for pip3.4 </code></pre> <h2>What is the correct command to upgrade pip3 when it is not the default pip?</h2> <p>Environment:</p> <pre><code>$ python3 -V Python 3.4.3 $ uname -a CYGWIN_NT-6.1-WOW 2.5.2(0.297/5/3) 2016-06-23 14:27 i686 Cygwin </code></pre>
2
2016-08-24T17:20:10Z
39,129,467
<p>Just use the <code>pip3</code> command you already have:</p> <pre><code>pip3 install --upgrade pip </code></pre> <p>The installed <em>project</em> is called <code>pip</code>, always. The <code>pip3</code> command is tied to your Python 3 installation and is an alias for <code>pip</code>, but the latter is shadowed by the <code>pip</code> command in your Python 2 setup.</p> <p>You can do it with the associated Python binary too; if it is executable as <code>python3</code>, then use that:</p> <pre><code>python3 -m pip install --upgrade pip </code></pre> <p>Again, the project is called <code>pip</code>, and so is the module that is installed into your <code>site-packages</code> directory, so stick to that name for the <code>-m</code> command-line option and for the <code>install</code> command.</p>
7
2016-08-24T17:21:11Z
[ "python", "python-3.x", "cygwin", "pip", "python-3.4" ]
Python 3: How can I align the format of this data when it prints
39,129,603
<p>I would like the format of this data to me uniform for the user to read but I cannot with everything I have tried. Code here :</p> <pre><code>def runModel(): global valueJuvenile, valueAdult, valueSenile,total, values values = '' total = valueSenile + valueJuvenile + valueAdult values += 'G' + ', ' values += 'Juv' + ', ' values += 'Adu' + ', ' values += 'Sen' + ', ' values += 'Tot' + ', ' values += '\n0' + ', ' values += str(valueJuvenile) + ', ' values += str(valueAdult) + ', ' values += str(valueSenile) + ', ' values += str(firstTotal) + ', ' for n in range(1,numNewGen): if n != 0: values += '\n' values += str(n)+', ' valueJuvenile = round(valueAdult * birthRate * valueJuvenileSR,3) valueAdult = round(valueJuvenile * valueAdultSR,3) valueSenile = round(valueSenile + valueAdult * valueSenileSR,3) total = round(valueSenile + valueJuvenile + valueAdult,3) values += str(valueSenile) + ', ' values += str(valueJuvenile) + ', ' values += str(valueJuvenile) + ', ' values += str(total) print(values) print("Model has been ran!") input('\nPlease press Enter to return to menu...') menu() </code></pre> <p>and here is the outcome in the shell:</p> <p>values set:</p> <pre><code>Please enter the amount of Juveniles (1 = 1000) 1 Please enter the amount of Adults (1 = 1000) 1 Please enter the amount of Seniles (1 = 1000) 1 Please enter the survival rate for Juveniles 1 Please enter the survival rate for Adults 1 Please enter the survival rate for Seniles 1 Please enter the birth rate of GreenFlies 1 Please enter the number of new generations 11 Please enter at what population breakpoint would you like the disease to trigger 11 Running model! G, Juv, Adu, Sen, Tot, 0, 1.0, 1.0, 1.0, 3.0, 1, 2.0, 1.0, 1.0, 4.0 2, 3.0, 1.0, 1.0, 5.0 3, 4.0, 1.0, 1.0, 6.0 4, 5.0, 1.0, 1.0, 7.0 5, 6.0, 1.0, 1.0, 8.0 6, 7.0, 1.0, 1.0, 9.0 7, 8.0, 1.0, 1.0, 10.0 8, 9.0, 1.0, 1.0, 11.0 9, 10.0, 1.0, 1.0, 12.0 10, 11.0, 1.0, 1.0, 13.0 Model has been ran! 
</code></pre> <p>As you can see, at row 0 the data isn't in the same position as the others. This also happens at row 10: because the extra digit is one more character, it shifts everything one place to the right, and I'd like to know how to format this.</p> <p>For context, this is meant to be a program where the user sets variables and those variables are adjusted by other variables; for example, the value of juveniles is affected by the survival rate of juveniles for the next generation. In the output shell, G corresponds to the number of generations, with 0 being the original values set by the user, through to generation 10, which has already been affected by the other variables 10 times by that point. Juv, Adu and Sen correspond to juvenile, adult and senile.</p>
1
2016-08-24T17:29:39Z
39,129,734
<p>The <a href="https://docs.python.org/3.4/library/stdtypes.html#str.format" rel="nofollow"><code>format</code></a> function will do the trick.</p> <p>Here is an example of how to use it:</p> <pre><code>&gt;&gt;&gt; integer_var = 5 &gt;&gt;&gt; float_var = 3.5 &gt;&gt;&gt; print('{:6d} {:6.2f}'.format(integer_var, float_var)) 5 3.50 </code></pre> <p>For integer variables use <code>d</code>, for float variables <code>f</code>. The first number after the <code>:</code> is the width the number should take. In the case of floating point numbers, the number after the <code>.</code> is the number of decimal places to show.</p>
2
2016-08-24T17:38:01Z
[ "python", "python-3.x", "formatting" ]
Python 3: How can I align the format of this data when it prints
39,129,603
<p>I would like the format of this data to me uniform for the user to read but I cannot with everything I have tried. Code here :</p> <pre><code>def runModel(): global valueJuvenile, valueAdult, valueSenile,total, values values = '' total = valueSenile + valueJuvenile + valueAdult values += 'G' + ', ' values += 'Juv' + ', ' values += 'Adu' + ', ' values += 'Sen' + ', ' values += 'Tot' + ', ' values += '\n0' + ', ' values += str(valueJuvenile) + ', ' values += str(valueAdult) + ', ' values += str(valueSenile) + ', ' values += str(firstTotal) + ', ' for n in range(1,numNewGen): if n != 0: values += '\n' values += str(n)+', ' valueJuvenile = round(valueAdult * birthRate * valueJuvenileSR,3) valueAdult = round(valueJuvenile * valueAdultSR,3) valueSenile = round(valueSenile + valueAdult * valueSenileSR,3) total = round(valueSenile + valueJuvenile + valueAdult,3) values += str(valueSenile) + ', ' values += str(valueJuvenile) + ', ' values += str(valueJuvenile) + ', ' values += str(total) print(values) print("Model has been ran!") input('\nPlease press Enter to return to menu...') menu() </code></pre> <p>and here is the outcome in the shell:</p> <p>values set:</p> <pre><code>Please enter the amount of Juveniles (1 = 1000) 1 Please enter the amount of Adults (1 = 1000) 1 Please enter the amount of Seniles (1 = 1000) 1 Please enter the survival rate for Juveniles 1 Please enter the survival rate for Adults 1 Please enter the survival rate for Seniles 1 Please enter the birth rate of GreenFlies 1 Please enter the number of new generations 11 Please enter at what population breakpoint would you like the disease to trigger 11 Running model! G, Juv, Adu, Sen, Tot, 0, 1.0, 1.0, 1.0, 3.0, 1, 2.0, 1.0, 1.0, 4.0 2, 3.0, 1.0, 1.0, 5.0 3, 4.0, 1.0, 1.0, 6.0 4, 5.0, 1.0, 1.0, 7.0 5, 6.0, 1.0, 1.0, 8.0 6, 7.0, 1.0, 1.0, 9.0 7, 8.0, 1.0, 1.0, 10.0 8, 9.0, 1.0, 1.0, 11.0 9, 10.0, 1.0, 1.0, 12.0 10, 11.0, 1.0, 1.0, 13.0 Model has been ran! 
</code></pre> <p>As you can see, at row 0 the data isn't in the same position as the others. This also happens at row 10: because the extra digit is one more character, it shifts everything one place to the right, and I'd like to know how to format this.</p> <p>For context, this is meant to be a program where the user sets variables and those variables are adjusted by other variables; for example, the value of juveniles is affected by the survival rate of juveniles for the next generation. In the output shell, G corresponds to the number of generations, with 0 being the original values set by the user, through to generation 10, which has already been affected by the other variables 10 times by that point. Juv, Adu and Sen correspond to juvenile, adult and senile.</p>
1
2016-08-24T17:29:39Z
39,129,840
<p>Use <code>format</code>. For example:</p> <pre><code>def runModel(valueJuvenile, valueAdult, valueSenile,total, values): header = ['G', 'Juv', 'Adu', 'Sen', 'Tot'] format = "{:5}, {:5}, {:5}, {:5}, {:5}".format values = [format(*header)] values.append(format(0, valueJuvenile, valueAdult, valueSenile, firstTotal)) for n in range(1, numNewGen): valueJuvenile = round(valueAdult * birthRate * valueJuvenileSR,3) valueAdult = round(valueJuvenile * valueAdultSR,3) valueSenile = round(valueSenile + valueAdult * valueSenileSR,3) total = round(valueSenile + valueJuvenile + valueAdult,3) values.append(format(n, valueJuvenile, valueAdult, valueSenile, total)) print('\n'.join(values)) print("Model has been ran!") input('\nPlease press Enter to return to menu...') menu() </code></pre>
2
2016-08-24T17:43:13Z
[ "python", "python-3.x", "formatting" ]
Python 3: How can I align the format of this data when it prints
39,129,603
<p>I would like the format of this data to me uniform for the user to read but I cannot with everything I have tried. Code here :</p> <pre><code>def runModel(): global valueJuvenile, valueAdult, valueSenile,total, values values = '' total = valueSenile + valueJuvenile + valueAdult values += 'G' + ', ' values += 'Juv' + ', ' values += 'Adu' + ', ' values += 'Sen' + ', ' values += 'Tot' + ', ' values += '\n0' + ', ' values += str(valueJuvenile) + ', ' values += str(valueAdult) + ', ' values += str(valueSenile) + ', ' values += str(firstTotal) + ', ' for n in range(1,numNewGen): if n != 0: values += '\n' values += str(n)+', ' valueJuvenile = round(valueAdult * birthRate * valueJuvenileSR,3) valueAdult = round(valueJuvenile * valueAdultSR,3) valueSenile = round(valueSenile + valueAdult * valueSenileSR,3) total = round(valueSenile + valueJuvenile + valueAdult,3) values += str(valueSenile) + ', ' values += str(valueJuvenile) + ', ' values += str(valueJuvenile) + ', ' values += str(total) print(values) print("Model has been ran!") input('\nPlease press Enter to return to menu...') menu() </code></pre> <p>and here is the outcome in the shell:</p> <p>values set:</p> <pre><code>Please enter the amount of Juveniles (1 = 1000) 1 Please enter the amount of Adults (1 = 1000) 1 Please enter the amount of Seniles (1 = 1000) 1 Please enter the survival rate for Juveniles 1 Please enter the survival rate for Adults 1 Please enter the survival rate for Seniles 1 Please enter the birth rate of GreenFlies 1 Please enter the number of new generations 11 Please enter at what population breakpoint would you like the disease to trigger 11 Running model! G, Juv, Adu, Sen, Tot, 0, 1.0, 1.0, 1.0, 3.0, 1, 2.0, 1.0, 1.0, 4.0 2, 3.0, 1.0, 1.0, 5.0 3, 4.0, 1.0, 1.0, 6.0 4, 5.0, 1.0, 1.0, 7.0 5, 6.0, 1.0, 1.0, 8.0 6, 7.0, 1.0, 1.0, 9.0 7, 8.0, 1.0, 1.0, 10.0 8, 9.0, 1.0, 1.0, 11.0 9, 10.0, 1.0, 1.0, 12.0 10, 11.0, 1.0, 1.0, 13.0 Model has been ran! 
</code></pre> <p>As you can see, at row 0 the data isn't in the same position as the others. This also happens at row 10: because the extra digit is one more character, it shifts everything one place to the right, and I'd like to know how to format this.</p> <p>For context, this is meant to be a program where the user sets variables and those variables are adjusted by other variables; for example, the value of juveniles is affected by the survival rate of juveniles for the next generation. In the output shell, G corresponds to the number of generations, with 0 being the original values set by the user, through to generation 10, which has already been affected by the other variables 10 times by that point. Juv, Adu and Sen correspond to juvenile, adult and senile.</p>
1
2016-08-24T17:29:39Z
39,129,899
<p>You can use C style format strings, or the .format method.</p> <p>As an example using C style format strings, you can do:</p> <pre><code>print("%2i, %5.1f, %5.1f, %5.1f" % (3, 11.2, 5.0, 77.5)) print("%2i, %5.1f, %5.1f, %5.1f" % (11, 12.7, 11.4, 8.1)) </code></pre> <p>This would give the following results:</p> <pre><code> 3, 11.2, 5.0, 77.5 11, 12.7, 11.4, 8.1 </code></pre> <p>The basic rule is that %5.1f makes a floating point number take 5 characters, including one decimal digit. The padding characters will be spaces. So if you wanted to take 8 characters and have 3 decimal digits for a float, you could do %8.3f. Or if you wanted 4 characters and no decimal points for a float, you could do %4.0f.</p> <p>Similarly, for an integer %2i makes an integer take up 2 characters width. If you wanted it to take 6 characters for an integer, you could do %6i. If you wanted zero padding, you could add a zero to the format string like this: %06i. Note that %i and %d are equivalent.</p>
1
2016-08-24T17:47:27Z
[ "python", "python-3.x", "formatting" ]
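All three answers come down to the same idea: give every column a fixed width so a two-digit generation number no longer shifts the row. A compact sketch using the question's sample values:

```python
# Right-align each column to a fixed width; widths are chosen to fit
# the largest value in the sample data.
header_fmt = "{:>2}, {:>5}, {:>5}, {:>5}, {:>5}"
row_fmt = "{:>2d}, {:>5.1f}, {:>5.1f}, {:>5.1f}, {:>5.1f}"

lines = [header_fmt.format("G", "Juv", "Adu", "Sen", "Tot")]
for g, juv, adu, sen, tot in [(0, 1.0, 1.0, 1.0, 3.0), (10, 11.0, 1.0, 1.0, 13.0)]:
    lines.append(row_fmt.format(g, juv, adu, sen, tot))

print("\n".join(lines))
# Every line has the same width, so the columns stay aligned:
#  G,   Juv,   Adu,   Sen,   Tot
#  0,   1.0,   1.0,   1.0,   3.0
# 10,  11.0,   1.0,   1.0,  13.0
```

Because the padding absorbs the extra digit, generation 10 lines up under generation 0 instead of pushing everything one place to the right.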
xpath query not grabbing any information - python
39,129,791
<p>I am trying to grab some information off <a href="http://www.mountainproject.com/v/my-other-woman-is-a-hand-crack/107465606" rel="nofollow">this page </a></p> <p>but there are three pieces of data I have been unable to grab. The first is the grade, which is the '5.6' next to the YDS near the top. The second is FFA: P.Adamson, M.Peck July 2008, listed next to FA:. The third is Trad, 30', listed next to type.</p> <p>The xpath queries I have are</p> <pre><code>grade = tree.xpath('//[@id="rspCol800"]/div[1]/div[1]/span/table/tbody/tr[2]/td[2]/text()') length = tree.xpath('//*[@id="rspCol800"]/div[1]/div[1]/span/table/tbody/tr[1]/td[2]/text()') first_ascent = tree.xpath('.//*[@id="rspCol800"]/div[1]/div[1]/span/table/tbody/tr[3]/td[2]/text()/text()') </code></pre> <p>I have tried to grab grade from a few different places on the page, as it is listed in a couple places but no such luck yet. Any help would be greatly appreciated</p>
0
2016-08-24T17:40:47Z
39,131,649
<p>Not sure what framework you are using but this seems to work:</p> <pre><code>from lxml import html import urllib2 req = urllib2.Request('http://www.mountainproject.com/v/my-other-woman-is-a-hand-crack/107465606') response = urllib2.urlopen(req) data = response.read() tree = html.fromstring(data) grade = tree.xpath('//div[@id="rspCol800"]/div[1]/div[1]/span/table/tr[2]/td[2]/span/text()')[1] length = tree.xpath('//*[@id="rspCol800"]/div[1]/div[1]/span/table/tr[1]/td[2]/text()')[0] first_ascent = tree.xpath('.//*[@id="rspCol800"]/div[1]/div[1]/span/table/tr[3]/td[2]/text()')[0] print grade, length, first_ascent </code></pre>
0
2016-08-24T19:32:30Z
[ "jquery", "python", "html", "css", "xpath" ]
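Positional paths like `tr[2]/td[2]` break as soon as the page adds or reorders a row; matching on the label cell is sturdier. A stdlib sketch against a made-up miniature of such a stats table (the markup below is invented for illustration and is not the real page's HTML):

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the route page's label/value table.
snippet = """
<table>
  <tr><td>Elevation:</td><td>30'</td></tr>
  <tr><td>YDS:</td><td>5.6</td></tr>
  <tr><td>FA:</td><td>P.Adamson, M.Peck July 2008</td></tr>
</table>
"""

root = ET.fromstring(snippet)
# Build a label -> value dict instead of relying on row positions.
stats = {row[0].text.rstrip(":"): row[1].text for row in root.iter("tr")}

print(stats["YDS"])  # 5.6
print(stats["FA"])   # P.Adamson, M.Peck July 2008
```

ElementTree only supports a subset of XPath, so against the real (possibly non-well-formed) page an HTML-tolerant parser such as `lxml.html` would still be the practical choice; the label-keyed lookup idea carries over unchanged.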
Sort list of mixed strings based on digits
39,129,846
<p>How do I sort this list via the numerical values? Is a regex required to remove the numbers or is there a more Pythonic way to do this?</p> <pre><code>to_sort ['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar'] </code></pre> <p>Desired output is as follows:</p> <pre><code>1-bar 2-bar bar-3 foo-4 foobar-5 6-foo 7-bar foo-11 12-foo </code></pre>
6
2016-08-24T17:43:34Z
39,129,897
<p>One solution is the following regex extraction:</p> <pre><code>sorted(l, key=lambda x: int(re.search('\d+', x).group(0))) </code></pre> <hr> <pre><code>&gt;&gt;&gt; l ['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar'] &gt;&gt;&gt; sorted(l, key=lambda x: int(re.search('\d+', x).group(0))) ['1-bar', '2-bar', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar', 'foo-11', '12-foo'] </code></pre> <p>The <code>key</code> is the extracted digit (converted to <code>int</code> to avoid sorting lexographically).</p>
10
2016-08-24T17:47:18Z
[ "python" ]
Sort list of mixed strings based on digits
39,129,846
<p>How do I sort this list via the numerical values? Is a regex required to remove the numbers or is there a more Pythonic way to do this?</p> <pre><code>to_sort ['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar'] </code></pre> <p>Desired output is as follows:</p> <pre><code>1-bar 2-bar bar-3 foo-4 foobar-5 6-foo 7-bar foo-11 12-foo </code></pre>
6
2016-08-24T17:43:34Z
39,129,940
<p>If you don't want to use regex</p> <pre><code>&gt;&gt;&gt; l = ['12-foo', '1-bar', '2-bar', 'foo-11', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar'] &gt;&gt;&gt; sorted(l, key = lambda x: int(''.join(filter(str.isdigit, x)))) ['1-bar', '2-bar', 'bar-3', 'foo-4', 'foobar-5', '6-foo', '7-bar', 'foo-11', '12-foo'] </code></pre>
4
2016-08-24T17:50:01Z
[ "python" ]
Why can I connect to Azure MS SQL with tsql but not pymssql?
39,129,928
<p>Where I am today:</p> <pre><code>TDSVER=7.3 tsql -H example.database.windows.net -U me -D ExampleDB -p 1433 -P notreallymypassword </code></pre> <p>This does not:</p> <pre><code>&gt;&gt;&gt; import pymssql &gt;&gt;&gt; pymssql.connect('example.database.windows.net', user='me', password='notreallymypassword', database='ExampleDB', tds_version='7.3') </code></pre> <p>It fails with </p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "pymssql.pyx", line 635, in pymssql.connect (pymssql.c:10734) File "_mssql.pyx", line 1902, in _mssql.connect (_mssql.c:21821) File "_mssql.pyx", line 577, in _mssql.MSSQLConnection.__init__ (_mssql.c:6214) File "_mssql.pyx", line 1704, in _mssql._tds_ver_str_to_constant (_mssql.c:18845) _mssql.MSSQLException: unrecognized tds version: 7.3 </code></pre> <p>Okay. Well, that's... strange. So I went back and tried the <code>tsql</code> using <code>TDSVER=7.2</code>, which seemed to work fine.</p> <p>Trying to connect with <code>tds_version='7.2'</code> gives me:</p> <pre><code>Traceback (most recent call last): File "pymssql.pyx", line 635, in pymssql.connect (pymssql.c:10734) File "_mssql.pyx", line 1902, in _mssql.connect (_mssql.c:21821) File "_mssql.pyx", line 637, in _mssql.MSSQLConnection.__init__ (_mssql.c:6581) File "_mssql.pyx", line 1630, in _mssql.maybe_raise_MSSQLDatabaseException (_mssql.c:17524) _mssql.MSSQLDatabaseException: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (datawhse.database. 
windows.net:1433)\n') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "pymssql.pyx", line 641, in pymssql.connect (pymssql.c:10824) pymssql.OperationalError: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (datawhse.database.windo ws.net:1433)\n') </code></pre> <p>So, what gives?</p> <p><strong>Update 1:</strong> pyodbc <em>also</em> fails to connect:</p> <pre><code>conn = pyodbc.connect('SERVER=example.database.windows.net;Driver=FreeTDS;UID=me@example.database.windows.net;PWD=notmyrealpassword;' , ansi=True) </code></pre> <p>My ~/.odbcinst.ini:</p> <pre><code>[FreeTDS] Description = MS SQL driver Driver = /usr/lib64/libtdsodbc.so.0 Driver64 = /usr/lib64/libtdsodbc.so.0 Setup = /usr/lib64/libtdsS.so.0 Setup64 = /usr/lib64/libtdsS.so.0 UsageCount = 1 CPTimeout = CPReuse = Trace = Yes </code></pre> <p>And this output:</p> <pre><code>⚘ odbcinst -j unixODBC 2.3.1 DRIVERS............: /etc/odbcinst.ini SYSTEM DATA SOURCES: /etc/odbc.ini FILE DATA SOURCES..: /etc/ODBCDataSources USER DATA SOURCES..: /home/me/.odbc.ini SQLULEN Size.......: 8 SQLLEN Size........: 8 SQLSETPOSIROW Size.: 8 </code></pre>
1
2016-08-24T17:49:33Z
39,131,020
<p>Your connection string doesn't look right. It should be something like:</p> <pre><code>pymssql.connect(server='example.database.windows.net', user='me@example', password='notreallymypassword', database='ExampleDB') </code></pre> <p>Note that in your example call to <code>connect()</code>, you are missing a <code>server=</code> parameter; you only had the full server name.</p>
0
2016-08-24T18:53:36Z
[ "python", "azure", "sql-azure", "pymssql" ]
Why can I connect to Azure MS SQL with tsql but not pymssql?
39,129,928
<p>Where I am today:</p> <pre><code>TDSVER=7.3 tsql -H example.database.windows.net -U me -D ExampleDB -p 1433 -P notreallymypassword </code></pre> <p>This does not:</p> <pre><code>&gt;&gt;&gt; import pymssql &gt;&gt;&gt; pymssql.connect('example.database.windows.net', user='me', password='notreallymypassword', database='ExampleDB', tds_version='7.3') </code></pre> <p>It fails with </p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "pymssql.pyx", line 635, in pymssql.connect (pymssql.c:10734) File "_mssql.pyx", line 1902, in _mssql.connect (_mssql.c:21821) File "_mssql.pyx", line 577, in _mssql.MSSQLConnection.__init__ (_mssql.c:6214) File "_mssql.pyx", line 1704, in _mssql._tds_ver_str_to_constant (_mssql.c:18845) _mssql.MSSQLException: unrecognized tds version: 7.3 </code></pre> <p>Okay. Well, that's... strange. So I went back and tried the <code>tsql</code> using <code>TDSVER=7.2</code>, which seemed to work fine.</p> <p>Trying to connect with <code>tds_version='7.2'</code> gives me:</p> <pre><code>Traceback (most recent call last): File "pymssql.pyx", line 635, in pymssql.connect (pymssql.c:10734) File "_mssql.pyx", line 1902, in _mssql.connect (_mssql.c:21821) File "_mssql.pyx", line 637, in _mssql.MSSQLConnection.__init__ (_mssql.c:6581) File "_mssql.pyx", line 1630, in _mssql.maybe_raise_MSSQLDatabaseException (_mssql.c:17524) _mssql.MSSQLDatabaseException: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (datawhse.database. 
windows.net:1433)\n') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "pymssql.pyx", line 641, in pymssql.connect (pymssql.c:10824) pymssql.OperationalError: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (datawhse.database.windo ws.net:1433)\n') </code></pre> <p>So, what gives?</p> <p><strong>Update 1:</strong> pyodbc <em>also</em> fails to connect:</p> <pre><code>conn = pyodbc.connect('SERVER=example.database.windows.net;Driver=FreeTDS;UID=me@example.database.windows.net;PWD=notmyrealpassword;' , ansi=True) </code></pre> <p>My ~/.odbcinst.ini:</p> <pre><code>[FreeTDS] Description = MS SQL driver Driver = /usr/lib64/libtdsodbc.so.0 Driver64 = /usr/lib64/libtdsodbc.so.0 Setup = /usr/lib64/libtdsS.so.0 Setup64 = /usr/lib64/libtdsS.so.0 UsageCount = 1 CPTimeout = CPReuse = Trace = Yes </code></pre> <p>And this output:</p> <pre><code>⚘ odbcinst -j unixODBC 2.3.1 DRIVERS............: /etc/odbcinst.ini SYSTEM DATA SOURCES: /etc/odbc.ini FILE DATA SOURCES..: /etc/ODBCDataSources USER DATA SOURCES..: /home/me/.odbc.ini SQLULEN Size.......: 8 SQLLEN Size........: 8 SQLSETPOSIROW Size.: 8 </code></pre>
1
2016-08-24T17:49:33Z
39,398,758
<p>It looks like Gord was right: the problem was that the <code>pymssql</code> wheel <em>does not have SSL bindings</em>.</p> <p>I uninstalled it:</p> <pre><code>python -m pip uninstall pymssql </code></pre> <p>Then installed it from source:</p> <pre><code>python -m pip install --no-binary pymssql pymssql </code></pre> <p>This required me to install a few dependencies. But now I can connect with</p> <pre><code>pymssql.connect('example.database.windows.net', user='me', password='notreallymypassword', database='ExampleDB', tds_version='7.2') </code></pre>
1
2016-09-08T19:26:55Z
[ "python", "azure", "sql-azure", "pymssql" ]
if allwords in title: match
39,130,043
<p>Using python3, i have a list of words like: <code>['foot', 'stool', 'carpet']</code> </p> <p>these lists vary in length from 1-6 or so. i have thousands and thousands of strings to check, and it is required to make sure that all three words are present in a title. where: <code>'carpet stand upon the stool of foot balls.'</code> is a correct match, as all the words are present here, even though they are out of order.</p> <p>ive wondered about this for a long time, and the only thing i could think of was some sort of iteration like:</p> <pre><code>for word in list: if word in title: match! </code></pre> <p>but this give me results like <code>'carpet cleaner'</code> which is incorrect. i feel as though there is some sort of shortcut to do this, but i cant seem to figure it out without using excessive<code>list(), continue, break</code> or other methods/terminology that im not yet familiar with. etc etc.</p>
1
2016-08-24T17:56:40Z
39,130,067
<p>You can use <a href="https://docs.python.org/3/library/functions.html#all" rel="nofollow"><code>all()</code></a>:</p> <pre><code>words = ['foot', 'stool', 'carpet'] title = "carpet stand upon the stool of foot balls." matches = all(word in title for word in words) </code></pre> <p>Or, inverse the logic with not <a href="https://docs.python.org/3/library/functions.html#any" rel="nofollow"><code>any()</code></a> and <code>not in</code>:</p> <pre><code>matches = not any(word not in title for word in words) </code></pre>
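<p>One caveat: plain <code>word in title</code> is a substring test, so <code>'foot'</code> would also match <code>'football'</code>. If whole-word matching matters, a hedged sketch is to tokenize the title first and use set containment:</p>

```python
# Sketch: whole-word matching via set containment. Assumes simple
# punctuation; a regex with \b word boundaries would be more robust.
words = ['foot', 'stool', 'carpet']
title = "carpet stand upon the stool of foot balls."
tokens = set(title.replace('.', ' ').split())
matches = set(words) <= tokens  # True only if every word appears whole
print(matches)  # True
```

<p>With this approach, <code>'foot'</code> no longer matches a title that only contains <code>'football'</code>, while the substring test would.</p>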
5
2016-08-24T17:58:08Z
[ "python", "list", "python-3.x", "matching", "word" ]
Python: Inserting string into Mysql Database
39,130,049
<p>I am using Python with Mysql.</p> <p>I have a string that looks something like this:</p> <p>s = "1.xxxxxxxxx 2.xxxxxxxxx 3.xxxxxxxxx"</p> <p>Now I want to insert this string in db with one line in one row. For example</p> <ol> <li>xxxxxxx</li> <li>xxxxxxx</li> <li>xxxxxxx</li> </ol> <p>Each line gets inserted into a separate row.</p> <p>How can I do this?</p> <p>This is the code for making calls to database</p> <pre><code>def test(sql): cursor = db.cursor() try: cursor.execute(sql) db.commit() except: db.rollback() db.close() </code></pre> <p>This is the code that I am using to extract something from a pdf</p> <pre><code>def extract(): string =convert_pdf_to_txt("sample.pdf") lines = list(filter(bool, string.split('1.'))) Data = {} for i in range(len(lines)): if 'References' in lines[i]: Data = (lines[i + 1]) # print (Data) x= "INSERT INTO `ref` (`Reference`) VALUES" + '(' + '"{}"'.format(Data) + ')' test(x) </code></pre>
1
2016-08-24T17:57:02Z
39,130,773
<p>The first thing you need to do is properly split the string. In this case, if your example data is correct, you can accomplish this by splitting on the space character:</p> <pre><code>lines = string.split(' ') </code></pre> <p>Now, if I understand your code correctly, we need to insert only the lines that contain the string "References". Python provides easy syntax for this:</p> <pre><code>for i in lines: if 'References' in i: x = "INSERT INTO `ref` (`Reference`) VALUES ('{}')".format(i) test(x) </code></pre> <p>(Note the quotes around the inserted value — string values must be quoted in SQL.) This assumes that your test() function is correct (it looks to be, but I don't have a way of testing it right now).</p> <p>Also, make sure you trust the data you are inserting, as you are not doing input checking or sanitization. This is probably fine if you're just trying to process a PDF that you've already looked at to ensure it's formatted correctly.</p> <p>Editing to mention one more thing that struck me: if splitting the string on space doesn't handle your input data correctly, you may need to look at a more sophisticated way of identifying and splitting the entries. Regular expressions may help or may be more trouble than they're worth. It's hard to say without a more complete test data set.</p>
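<p>On the sanitization point: a safer pattern is a parameterized query, sketched here under the assumption that <code>db</code> is a MySQLdb/pymysql-style connection (the <code>%s</code> placeholder is their paramstyle). The table and column names come from the question; the connection object itself is assumed, not created here:</p>

```python
# Hedged sketch: let the driver quote the value instead of
# formatting it into the SQL string yourself.
def insert_reference(db, value):
    cursor = db.cursor()
    # The driver escapes `value`, avoiding injection and quoting bugs.
    cursor.execute("INSERT INTO `ref` (`Reference`) VALUES (%s)", (value,))
    db.commit()
```

<p>This also sidesteps the broken-SQL problem when the extracted text itself contains quote characters.</p>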
0
2016-08-24T18:39:33Z
[ "python", "mysql" ]
Button Widget Not Getting Destroyed While Trying To Remove From The Grid
39,130,055
<p>a. Have a scenario where I wanted to remove a button after few clicks. b. But when the button reaches the last click, its not getting destroyed. Code as given below:</p> <pre><code>from tkinter import * class test_button: def __init__(self, master): self.master = master self.next_button = None if not (self.next_button): self.next_button = Button(root, background="orange red", activebackground="orangered3", text="Next Test Config", command=self.next_button_code).grid(row=1, column=1) def next_button_code(self): if self.next_button: self.next_button.destroy(); self.next_button = None # Top Local Variables root = Tk() # Top Level Default Codes my_gui = test_button(root) root.mainloop() </code></pre> <p>Am I missing anything ? Kindly drop in your comments !!</p>
0
2016-08-24T17:57:23Z
39,130,467
<p>Change</p> <pre><code>self.next_button = Button(root, background="orange red", activebackground="orangered3", text="Next Test Config", command=self.next_button_code).grid(row=1, column=1) </code></pre> <p>to:</p> <pre><code>self.next_button = Button(root, background="orange red", activebackground="orangered3", text="Next Test Config", command=self.next_button_code) self.next_button.grid(row=1, column=1) </code></pre> <p>The reason: <code>grid()</code> returns <code>None</code>, so the chained call was storing <code>None</code> in <code>self.next_button</code>, which meant <code>next_button_code</code> never reached the <code>destroy()</code> call.</p>
1
2016-08-24T18:21:09Z
[ "python", "python-2.7", "tkinter" ]
Fast/Memory-Conserving Manner to remove vectors from array if they are too close in Euclidean Space
39,130,072
<p>In order to make clustering a more feasible task, I want to remove items from an array if they have another item which is within some threshold in n-dimensional euclidean space. The input data into this truncation is an array of pixel-wise feature vectors. My first thought was to compute the pairwise euclidean distance matrix between all the items and then operate on them as such:</p> <pre><code>indices = list(range(len(X))) dist_matrix = euclidean_distances(X,X) index = 0 while True: deletion = np.where(dist_matrix[index]&lt;=threshold)[0] indices = [i for i in indices if i==index or i not in deletion] try: index = indices[indices.index(index) + 1] except IndexError: break dictionary = [] for index in indices: dictionary.append(X[index]) </code></pre> <p>However, this leads to a Memory Error for my large dataset when creating the distance matrix with sklearn.metrics.pairwise.euclidean_distances. What is an effective, memory-conservative manner to perform this operation? I've realized that the computation of this distance matrix is what is causing problems in the clustering algorithm, so I would like to be able to avoid the computation of such a large distance matrix by truncating the input array.</p>
0
2016-08-24T17:58:23Z
39,133,784
<p>Depending on the number of dimensions n, the number of points N, the size of the problem in each dimension L, and your acceptable separation distance d, one option would be to grid your space into boxes of side d and retain at most one point within each grid box. The memory requirement would change from O(N^2) to O((L/d)^n), and the running time from O(N^2) to O(N + (L/d)^n), so it might be more efficient if L/d and n are not too large.</p> <p>Alternatively, it might be practical to use the following algorithm:</p> <pre><code> for each point p in points for each point q in points if p &lt;&gt; q and p.dist(q) &lt; Dmin q.delete </code></pre> <p>This should be O(N^2) running time and O(1) extra memory.</p>
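<p>The pseudocode above can be written out in plain Python; this sketch keeps a point only if it is at least <code>dmin</code> away from every point already kept (O(N^2) time, O(N) output memory):</p>

```python
import math

def prune_close(points, dmin):
    # Keep a point only if it's at least dmin from every kept point.
    kept = []
    for p in points:
        if all(math.dist(p, q) >= dmin for q in kept):
            kept.append(p)
    return kept

print(prune_close([(0, 0), (0.1, 0), (5, 5)], 1.0))  # [(0, 0), (5, 5)]
```

<p>Unlike the raw pseudocode (which can delete both points of a close pair), this variant always retains the first point it sees in each cluster.</p>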
0
2016-08-24T22:10:31Z
[ "python", "arrays", "algorithm", "matrix", "memory-management" ]
Fast/Memory-Conserving Manner to remove vectors from array if they are too close in Euclidean Space
39,130,072
<p>In order to make clustering a more feasible task, I want to remove items from an array if they have another item which is within some threshold in n-dimensional euclidean space. The input data into this truncation is an array of pixel-wise feature vectors. My first thought was to compute the pairwise euclidean distance matrix between all the items and then operate on them as such:</p> <pre><code>indices = list(range(len(X))) dist_matrix = euclidean_distances(X,X) index = 0 while True: deletion = np.where(dist_matrix[index]&lt;=threshold)[0] indices = [i for i in indices if i==index or i not in deletion] try: index = indices[indices.index(index) + 1] except IndexError: break dictionary = [] for index in indices: dictionary.append(X[index]) </code></pre> <p>However, this leads to a Memory Error for my large dataset when creating the distance matrix with sklearn.metrics.pairwise.euclidean_distances. What is an effective, memory-conservative manner to perform this operation? I've realized that the computation of this distance matrix is what is causing problems in the clustering algorithm, so I would like to be able to avoid the computation of such a large distance matrix by truncating the input array.</p>
0
2016-08-24T17:58:23Z
39,135,221
<p>A KD-tree should be faster/more efficient than your base approach, and scipy has a nice implementation for your use case. The runtime is O(nlog(n)), and while I'm not sure about the memory use, one would assume it only stores the pairs you'll want to delete.</p> <p>Penguino's grid-sampling recommendation is probably even faster from a runtime perspective, but using scipy requires less additional coding on your part.</p> <p><a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.KDTree.query_ball_tree.html#scipy.spatial.KDTree.query_ball_tree" rel="nofollow">http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.KDTree.query_ball_tree.html#scipy.spatial.KDTree.query_ball_tree</a></p>
0
2016-08-25T01:11:20Z
[ "python", "arrays", "algorithm", "matrix", "memory-management" ]
Fast/Memory-Conserving Manner to remove vectors from array if they are too close in Euclidean Space
39,130,072
<p>In order to make clustering a more feasible task, I want to remove items from an array if they have another item which is within some threshold in n-dimensional euclidean space. The input data into this truncation is an array of pixel-wise feature vectors. My first thought was to compute the pairwise euclidean distance matrix between all the items and then operate on them as such:</p> <pre><code>indices = list(range(len(X))) dist_matrix = euclidean_distances(X,X) index = 0 while True: deletion = np.where(dist_matrix[index]&lt;=threshold)[0] indices = [i for i in indices if i==index or i not in deletion] try: index = indices[indices.index(index) + 1] except IndexError: break dictionary = [] for index in indices: dictionary.append(X[index]) </code></pre> <p>However, this leads to a Memory Error for my large dataset when creating the distance matrix with sklearn.metrics.pairwise.euclidean_distances. What is an effective, memory-conservative manner to perform this operation? I've realized that the computation of this distance matrix is what is causing problems in the clustering algorithm, so I would like to be able to avoid the computation of such a large distance matrix by truncating the input array.</p>
0
2016-08-24T17:58:23Z
39,149,590
<p>By Penguino's suggestion, I've gridded the hyperrectangle containing my points. Since the number of dimensions is undefined (I have multiple vector lengths that I'm dealing with at different points), I can't actually partition the space without recursion (I'm being slightly lazy with this), so I've created a workaround based upon splitting each dimension into <em>d</em> chunks. I'm selecting a number of points in each chunk based upon the relative density of that chunk. This operation is slightly costly, but a decent workaround in terms of memory conservation. Another workaround is to chunk the list of vectors and compute the pairwise distance matrix for each chunk and recursively build up the reduced vector list.</p> <pre><code>def ndimcube_grid(X,ndim,d): dictionary = [] for n in range(ndim): maximum = np.amax(X[:,n]) minimum = np.amin(X[:,n]) chunk = (maximum - minimum)/d iterate = minimum lengths = [] while iterate &lt; maximum: a = np.where(X[:,n] &lt; (iterate + chunk))[0] b = np.where(X[:,n] &gt;= iterate)[0] indices = list(set(a) &amp; set(b)) lengths.append(len(indices)) iterate += chunk min_length = np.amin([length for length in lengths if not(length == 0)]) iterate = minimum while iterate &lt; maximum: a = np.where(X[:,n] &lt; (iterate + chunk))[0] b = np.where(X[:,n] &gt;= iterate)[0] indices = list(set(a) &amp; set(b)) size = int(np.round(len(indices)/min_length)) MAX_DENSITY = 25 size = np.minimum(size,MAX_DENSITY) if size &gt; 0: selections = np.random.choice(indices,size=size) try: for selection in selections: if len(dictionary) &gt; 0: if np.amin(euclidean_distances(dictionary,X[selection]))&gt;ndim: dictionary.append(X[selection]) else: dictionary.append(X[selection]) except TypeError: dictionary.append(selections) iterate += chunk return np.array(dictionary) </code></pre>
0
2016-08-25T15:46:07Z
[ "python", "arrays", "algorithm", "matrix", "memory-management" ]
How can I import data from this .dat file
39,130,145
<p>I have data in a file that looks like this:</p> <p><img src="http://i.stack.imgur.com/8p9zG.png" alt="screenshot of the file"></p> <p>As you can see the data is very neat, but it is not separated in a concise fashion, but rather a variable number of spaces between columns and some columns left blank. This makes it import incorrectly into, for example, Excel. I have tried import functions in spyder and sage. I did not create the file.</p>
-2
2016-08-24T18:02:24Z
39,130,709
<p>Try importing it and see what you get:</p> <pre><code>with open("filename.dat", "r") as f: media = f.readlines() for row in media: print(row) # do something with each row </code></pre> <p>On second thought, it looks like it may be tab-delimited:</p> <pre><code>import csv with open("filename.dat", "rb") as f: csv_file = csv.reader(f, delimiter='\t') for row in csv_file: print(row) # do something with each row </code></pre>
0
2016-08-24T18:36:14Z
[ "python", "excel", "data-files" ]
How can I import data from this .dat file
39,130,145
<p>I have data in a file that looks like this:</p> <p><img src="http://i.stack.imgur.com/8p9zG.png" alt="screenshot of the file"></p> <p>As you can see the data is very neat, but it is not separated in a concise fashion, but rather a variable number of spaces between columns and some columns left blank. This makes it import incorrectly into, for example, Excel. I have tried import functions in spyder and sage. I did not create the file.</p>
-2
2016-08-24T18:02:24Z
39,132,491
<p>This is a fixed width file, so using pandas with python is the way to go:</p> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_fwf.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_fwf.html</a></p>
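<p>If pulling in pandas isn't an option, fixed-width lines can also be sliced with the standard library. The column offsets below are hypothetical — they must be adjusted to the real file's layout:</p>

```python
# Hedged sketch: slice each line at fixed column offsets.
# (0, 10), (10, 25), (25, 40) are made-up boundaries for illustration.
colspecs = [(0, 10), (10, 25), (25, 40)]
line = "2016-08-24 some_value      42           "
fields = [line[a:b].strip() for a, b in colspecs]
print(fields)  # ['2016-08-24', 'some_value', '42']
```

<p>Stripping each slice handles both the variable padding and the blank columns mentioned in the question (a blank column simply becomes an empty string).</p>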
0
2016-08-24T20:27:58Z
[ "python", "excel", "data-files" ]
How to retrieve a GET variable with a hash in it
39,130,162
<p>I have the following variable that is passed in a url:</p> <p><a href="http://localhost:8000/diff/?Platform=Comcast&amp;PlatformID=7066191365225244112#S8" rel="nofollow">http://localhost:8000/diff/?Platform=Comcast&amp;PlatformID=7066191365225244112#S8</a></p> <p>I need to be able to extract the following:</p> <pre><code>Platform = Comcast PlatformID = 7066191365225244112#S8 </code></pre> <p>However, django will escape the hash in the request.GET dictionary. Here is what it shows:</p> <pre><code>GET:&lt;QueryDict: {u'PlatformID': [u'7066191365225244112'], u'Platform': [u'Comcast']}&gt;, </code></pre> <p>How would I capture the full variable here, including the <code>#S8</code> at the end?</p>
2
2016-08-24T18:03:40Z
39,130,195
<p>Escape it in the query string, basically replace the hash with %23.</p> <p>To answer your question in the comments, yes, there is a template tag that can help you with this. The syntax is</p> <pre><code>{% url 'some-url-name' arg1=v1 arg2=v2 %} </code></pre> <p>Or, if you want to escape in code, you have several options in <a href="https://docs.djangoproject.com/en/1.10/ref/utils/#module-django.utils.encoding" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/utils/#module-django.utils.encoding</a></p>
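<p>For completeness, a sketch of the escaping with the standard library: <code>quote</code> percent-encodes the hash on the way out, and query-string parsing (Django's QueryDict behaves like <code>parse_qs</code> here) decodes it back:</p>

```python
from urllib.parse import quote, urlparse, parse_qs

# Percent-encode the value so '#' survives as part of the query string
# instead of being treated as a fragment delimiter by the browser.
value = "7066191365225244112#S8"
url = ("http://localhost:8000/diff/?Platform=Comcast&PlatformID="
       + quote(value, safe=""))
print(url.rsplit("=", 1)[1])                        # 7066191365225244112%23S8
print(parse_qs(urlparse(url).query)["PlatformID"])  # ['7066191365225244112#S8']
```

<p>Without the encoding, everything after <code>#</code> is a URL fragment and never reaches the server at all — which is why the QueryDict showed a truncated value.</p>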
1
2016-08-24T18:05:46Z
[ "python", "django" ]
Django - get file from db by its url
39,130,304
<p>I am trying to implement a secure access to files. This is my view serving static files:</p> <pre><code>from django.views.static import serve class ServeStatic(LoginRequiredMixin, View): login_url = reverse_lazy("login") def dispatch(self, request, *args, **kwargs): if request.user == Document.objects.get(url=self.kwargs["path"]).owner: return serve(request, kwargs["path"], kwargs["file_root"]) else: return HttpResponseNotFound() </code></pre> <p>The point is that I cannot get the condition to work. Is it even possible to get a file by its URL, or I need to choose a very different approach of serving files?</p> <p>My Document model: </p> <pre><code> class Document(models.Model): owner = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_("Owner"), related_name="owner") document = models.FileField(upload_to=file_upload_handler, verbose_name=_("Document")) </code></pre> <p>Thanks for your ideas!</p>
0
2016-08-24T18:12:18Z
39,130,728
<p>First of all, I would pass cleaned (validated) data to <code>dispatch</code> — I'm not saying you aren't doing that. Next, I wouldn't expose the part of the file path that locates files on your machine in the URL. Since you're using a <code>Document</code> model, you can change the URL to something that reveals nothing about the document — say, a UUID:</p> <pre><code>import uuid print uuid.uuid4().get_hex() # python 2.7 print(uuid.uuid4().hex) # python 3.x </code></pre> <p>Then look up the proper document in the database by the UUID sent in the URL:</p> <pre><code> class Document(models.Model): owner = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_("Owner"), related_name="owner") uid = models.CharField(max_length=32, null=False) document = models.FileField(upload_to=file_upload_handler, verbose_name=_("Document")) </code></pre> <p>You can then query it like this:</p> <pre><code>class ServeStatic(LoginRequiredMixin, View): login_url = reverse_lazy("login") def dispatch(self, request, *args, **kwargs): try: doc = Document.objects.get(uid=self.kwargs["uid"], owner=request.user) except Document.DoesNotExist: return HttpResponseNotFound() return serve(request, doc.document.name, kwargs["file_root"]) </code></pre> <p>Hope that helps.</p>
1
2016-08-24T18:37:02Z
[ "python", "django" ]
matplotlib figure does not continue program flow after close event triggered inside tk app
39,130,376
<p>I've come across a really annoying difference between how windows and mac handles a python tk window and matplotlib figure close_event.</p> <p>My problem is thus, </p> <ol> <li>I am trying to load a matplotlib figure from a tk button event. </li> <li>I want the figure to show, and block the tk UI program flow while the plot is active, and capturing user events until the plot is closed.</li> <li>After the plot is closed the tk app should continue.</li> </ol> <p>Minimal example app showing issue.</p> <pre><code>from Tkinter import * from matplotlib import pyplot as plt class Plotter: def __init__(self): self.fig = plt.figure() self.fig.canvas.mpl_connect('close_event', self.dispose) plt.plot(1, 2, 'r*') plt.show() print "done with plotter" def dispose(self, event): plt.close('all') print "disposed" if __name__ == '__main__': def pressed(): print 'button pressed' Plotter() print 'YAY' root = Tk() button = Button(root, text='Press', command=pressed) button.pack(pady=20, padx=20) root.mainloop() </code></pre> <p>Sadly, I found this works as expected in windows but not on mac using the same versions of python2.7, matplotlib (1.5.2). Apart from the fact that this is not good UI practice, it bothers me that there is a difference on Mac and Windows for this piece of code. I would appreciate any feedback that would help with this issue, in the mean time i'll start work on implementing the plotter on a thread which is non-blocking and passing the result back to the main app when closed.</p>
1
2016-08-24T18:15:30Z
39,232,044
<p>You can use <code>plt.ion()</code> to turn on Matplotlib's interactive mode, but this by itself will cause the program to continue without blocking the flow. To manually block the flow, use <code>self.fig.canvas.start_event_loop_default()</code> and <code>self.fig.canvas.stop_event_loop()</code> to pause the program flow until events are captured. </p> <p>Implemented in your minimal example:</p> <pre><code>from Tkinter import * from matplotlib import pyplot as plt class Plotter: def __init__(self): plt.ion() self.fig = plt.figure() self.fig.canvas.mpl_connect('close_event', self.dispose) self.fig.canvas.mpl_connect('button_press_event', self.on_mouse_click) plt.plot(1, 2, 'r*') plt.show() self.fig.canvas.start_event_loop_default() print "done with plotter" def dispose(self, event): self.fig.canvas.stop_event_loop() print "disposed" def on_mouse_click(self, event): print 'mouse clicked!' if __name__ == '__main__': def pressed(): print 'button pressed' Plotter() print 'YAY' root = Tk() button = Button(root, text='Press', command=pressed) button.pack(pady=20, padx=20) root.mainloop() </code></pre>
0
2016-08-30T15:50:26Z
[ "python", "osx", "matplotlib", "tk" ]
UnicodeDecodeError On Unicode File Read
39,130,476
<p>I have a problem where, when I execute a script which involved reading in data from a file that contains unicode code points, everything works fine. But when it is executed via another application, it is raising the following error:</p> <blockquote> <p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)</p> </blockquote> <p>I am executing the exact same code using the exact same data file. A sample datafile that replicates the problem is like this:</p> <pre><code>¥ Α © § </code></pre> <p>I called this <code>sample.txt</code> </p> <p>A very simple python script to simply read in and print the file contents:</p> <pre><code>with open("sample.txt") as f: for line in f: print(line) print("Done") </code></pre> <p>This executes fine from the command line; executing via Apache/CGI fails with the above error. </p>
1
2016-08-24T18:21:46Z
39,130,477
<p>A hint to the problem came from the documentation of the <code>open</code> function:</p> <blockquote> <p>In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow">[Link]</a></p> </blockquote> <p>Platform dependent suggested environment variables. So, I inspected what environment variables were set for my shell, and found <code>LANG</code> set to <code>en_US.UTF-8</code>. Dumping the environment variables set by Apache found that <code>LANG</code> was missing. </p> <p>So, apparently when locale cannot be determined, Python uses ASCII as the default file encoding. As a result, the error was encountered when the ordinal was out of range for ASCII. </p> <p>To fix this, I set this environment variable in my CGI script. If the environment variable is somehow missing from a user shell, it can be set via normal methods, or just by:</p> <pre><code>export LANG=en_US.UTF-8 </code></pre> <p>Or whatever preferred encoding is desired. </p> <p>Note, the issue is probably far more noticeable if the locale is missing from a user shell, as text editors like vi will not display characters without it. It was significantly more subtle when only an issue when called from Apache (or some other application).</p>
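<p>A quick way to check which default the interpreter will pick in a given environment (run it both from an interactive shell and from the CGI context to see the difference):</p>

```python
import locale

# What open() falls back to when no encoding= argument is given (Python 3).
print(locale.getpreferredencoding(False))  # e.g. 'UTF-8' with LANG=en_US.UTF-8
```

<p>The printed value depends entirely on the environment, which is exactly the point — under Apache with <code>LANG</code> unset it will differ from the shell.</p>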
1
2016-08-24T18:21:46Z
[ "python", "linux", "python-3.x", "unicode", "cgi" ]
What is the equivalent of list when using numpy?
39,130,629
<p>When I want to copy a list without linking to the same object I have to call list. So:</p> <pre><code>a = [1, 2, 3] b = list(a) a == b True a is b False </code></pre> <p>What would be the equivalent when I have a numpy array</p> <pre><code>import numpy a = numpy.ones(4) b = XXX(a)? </code></pre> <p>Thanks in advance</p>
0
2016-08-24T18:31:12Z
39,130,668
<p>You can use the <code>[:]</code> pattern to copy an array or list (actually, just for lists and not for arrays. See Update below).</p> <pre><code>a = np.ones(4) b = a[:] &gt;&gt;&gt; b array([ 1., 1., 1., 1.]) &gt;&gt;&gt; id(a) 4606143744 &gt;&gt;&gt; id(b) 4606143984 </code></pre> <p><strong>Timings</strong></p> <pre><code>a = np.random.randn(1000000) %timeit -n 1000 a.copy() 1000 loops, best of 3: 1.1 ms per loop %timeit -n 1000 a[:] 1000 loops, best of 3: 659 ns per loop </code></pre> <p><strong>Update</strong></p> <p>I would normally delete an erroneous post like this, but I am leaving it because I believe it is instructive.</p> <p>Works as expected for lists.</p> <pre><code>a = [1, 2, 3] b = a[:] b[2] = 0 &gt;&gt;&gt; a [1, 2, 3] &gt;&gt;&gt; b [1, 2, 0] </code></pre> <p>But as pointed out by @ Divakar, this does not appear to work for Numpy arrays:</p> <pre><code>a = np.array([1, 2, 3]) b = a[:] b[2] = 0 &gt;&gt;&gt; a array([1, 2, 0]) &gt;&gt;&gt; b array([1, 2, 0]) &gt;&gt;&gt; id(a) 4600986400 &gt;&gt;&gt; id(b) 4606142624 </code></pre> <p>Best to explicitly use the <code>copy()</code> method (e.g. <code>b = a.copy()</code>).</p>
-1
2016-08-24T18:33:49Z
[ "python", "numpy" ]
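The view-vs-copy behaviour discussed in the answer above can be verified with `np.shares_memory` rather than comparing `id`s; a quick sketch:

```python
import numpy as np

a = np.ones(4)

view = a[:]       # basic slicing: a *view*, no data is copied
copy = a.copy()   # an independent copy of the buffer

# Mutating the view also mutates `a`; mutating the copy does not.
view[0] = 99.0
copy[1] = -1.0

shares_view = np.shares_memory(a, view)
shares_copy = np.shares_memory(a, copy)
print(shares_view, shares_copy)  # True False
```

`np.shares_memory` checks the underlying buffers directly, so it gives the right answer even though `id(a) != id(view)`.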
What is the equivalent of list when using numpy?
39,130,629
<p>When I want to copy a list without linking to the same object I have to call list. So:</p> <pre><code>a = [1, 2, 3] b = list(a) a == b True a is b False </code></pre> <p>What would be the equivalent when I have a numpy array</p> <pre><code>import numpy a = numpy.ones(4) b = XXX(a)? </code></pre> <p>Thanks in advance</p>
0
2016-08-24T18:31:12Z
39,130,678
<p>Use the <code>copy</code> method:</p> <pre><code>b = a.copy() </code></pre>
3
2016-08-24T18:34:35Z
[ "python", "numpy" ]
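Besides `a.copy()` from the answer above, NumPy offers a couple of equivalent spellings; a short sketch showing that each produces an independent copy:

```python
import numpy as np

a = np.ones(4)

# Three equivalent ways to get an independent copy of the data:
b1 = a.copy()
b2 = np.copy(a)
b3 = np.array(a)   # np.array() copies its input by default

# None of these writes should reach `a`.
b1[0] = b2[1] = b3[2] = 0.0
untouched = bool((a == 1.0).all())
print(untouched)  # True
```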
Performing a calculation to show Python has 53-bit precision
39,130,645
<p>Can anyone provide guidance on how to perform a simple calculation in Python that roughly demonstrates that the language implements 53-bit precision, as per IEEE 754? I don't have much to go on here other than this. I have tried to work off of the canonical example of 0.1 + 0.2, but no luck.</p>
1
2016-08-24T18:31:59Z
39,130,752
<p>Halve <code>eps</code> until adding it to 1.0 no longer changes the result; the number of halvings is the precision in bits:</p> <pre><code>from itertools import count eps = 1.0 for bits in count(): if 1.0 + eps == 1.0: break eps *= 0.5 print(bits) # prints 53 </code></pre>
3
2016-08-24T18:38:29Z
[ "python", "math", "bit", "scientific-computing" ]
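On platforms where CPython floats are IEEE 754 binary64 (which is essentially everywhere), the loop in the answer above can be cross-checked against the values the interpreter already exposes:

```python
import sys

# 53 mantissa bits (one of them implicit), and the machine epsilon
# for binary64 is 2**-52.
mant_dig = sys.float_info.mant_dig
epsilon = sys.float_info.epsilon

print(mant_dig)           # 53
print(epsilon == 2**-52)  # True
```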
Performing a calculation to show Python has 53-bit precision
39,130,645
<p>Can anyone provide guidance on how to perform a simple calculation in Python that roughly demonstrates that the language implements 53-bit precision, as per IEEE 754? I don't have much to go on here other than this. I have tried to work off of the canonical example of 0.1 + 0.2, but no luck.</p>
1
2016-08-24T18:31:59Z
39,130,792
<pre><code>&gt;&gt;&gt; for i in range(100): x = 2**i if float(x) == float(x + 1): print(i) break 53 </code></pre> <p>This works because <code>x</code> is an integer, and Python integers have arbitrary precision, so <code>x + 1</code> is always exact; only the conversion to <code>float</code> can drop the low bit.</p>
4
2016-08-24T18:40:28Z
[ "python", "math", "bit", "scientific-computing" ]
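The boundary the loop in the answer above finds can also be checked directly, without iterating; a minimal sketch:

```python
# At 2**52 the spacing between adjacent floats (the ulp) is exactly 1,
# so 2**52 + 1 is still representable; at 2**53 the ulp grows to 2,
# so 2**53 + 1 rounds back down to 2**53.
below = float(2**52) != float(2**52 + 1)   # still exact
at = float(2**53) == float(2**53 + 1)      # precision exhausted
print(below, at)  # True True
```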
Performing a calculation to show Python has 53-bit precision
39,130,645
<p>Can anyone provide guidance on how to perform a simple calculation in Python that roughly demonstrates that the language implements 53-bit precision, as per IEEE 754? I don't have much to go on here other than this. I have tried to work off of the canonical example of 0.1 + 0.2, but no luck.</p>
1
2016-08-24T18:31:59Z
39,131,499
<p>Just to add a third example to the mix.</p> <p>In IEEE 754, infinity is defined as setting all exponent bits to one and all mantissa bits to zero. With negative infinity, the sign bit is also set to one. This means that the only zero bits will belong to the mantissa.</p> <pre><code>import struct def float_bin(f): "as bin function, but converts floats to their binary representation" bytes_ = struct.pack("d", f) format_ = "{:08b}" * len(bytes_) binary = format_.format(*bytes_[::-1]) return binary mantissa_bits = float_bin(float("-inf")) # mantissa_bits == "1111111111110000000000000000000000000000000000000000000000000000" print(mantissa_bits.count("0")) # prints 52... (rather than 53) </code></pre> <p>Where is the last bit? IEEE 754 clearly specifies 53 bits for the mantissa, but only 52 are stored. This is because the leading bit of the mantissa is implied. All numbers expressed in scientific notation must start with a non-zero digit. For instance, <code>0.123 * 10^5</code> is not valid. Instead, the correct representation is <code>1.23 * 10^4</code>. However, since there is only one non-zero digit in binary, there is only one value this bit could ever take. As such it would be a waste to store this bit explicitly. Meaning, for a 53-bit mantissa, you only need to store 52 bits.</p>
1
2016-08-24T19:23:51Z
[ "python", "math", "bit", "scientific-computing" ]
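The implicit leading bit discussed in the answer above is also visible through `float.hex()`, which prints the stored mantissa with the implied `1.` in front of it; a quick check:

```python
# float.hex() shows a leading "1." (the implicit bit) followed by
# 13 hexadecimal digits = 52 explicitly stored mantissa bits.
h = (1.5).hex()
print(h)  # 0x1.8000000000000p+0

mantissa_hex = h.split(".")[1].split("p")[0]
stored_bits = len(mantissa_hex) * 4
print(stored_bits)  # 52
```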
What is the appropriate method to replace a dataframe with a subset using pandas.dataframe.query method()?
39,130,679
<p>This question is very similar to one I asked here:</p> <p><a href="http://stackoverflow.com/questions/38087204/python-pandas-settingwithcopywarning-copies-vs-new-objects">Python Pandas SettingWithCopyWarning copies vs new objects</a></p> <p>I'd like to understand how I can exclude records within a given dataframe (IE operate on the dataframe and not a view of it) while also having the option of applying additional operations on the results.</p> <p>I'm struggling with understanding how Python is managing reference vs value assignment when operating on Pandas DataFrame objects. I'm working with a dataset that is in a Pandas Dataframe and I'd like to reduce the set down based on certain attribute values. I'd also like to apply additional operations on the results of this operation. The preferred method I'd like to use is the .query() method. Here is a simple example:</p> <pre><code>mydf = pd.DataFrame({'col1':['A','B','C'], 'col2':['x','y','z']}) mydf = mydf.query('col1 == \'A\'') </code></pre> <p>This will conceptually accomplish what I'm looking for; a reduction in the dataset I'm working with based on a query against it. The question I have is this: </p> <p>"Is this the correct application of the query function or should I be doing something else if I have additional operations to perform on 'mydf'"?</p> <p>I've read through <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow">this documentation</a> but still don't understand what pitfalls to watch out for...</p>
2
2016-08-24T18:34:37Z
39,131,469
<p>I think this is the right approach if you don't need the data that was filtered out (reduced). Since <code>query()</code> returns a new DataFrame rather than a view, rebinding <code>mydf</code> to the result is safe, and you can also chain your "additional operations" (which is pretty efficient) like this:</p> <pre><code>mydf = mydf.query('col1 == "A"').func1(...).func2(...).func3(...) </code></pre> <p><a href="https://pandas-docs.github.io/pandas-docs-travis/indexing.html#the-query-method-experimental" rel="nofollow">Here is a link to the documentation with lots of examples of how to use the <code>query()</code> method</a></p>
2
2016-08-24T19:21:30Z
[ "python", "pandas", "dataframe" ]
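A concrete sketch of the chaining the answer above describes — `func1`/`func2`/`func3` there are placeholders, so here `assign` and `reset_index` stand in for them:

```python
import pandas as pd

mydf = pd.DataFrame({"col1": ["A", "A", "B", "C"],
                     "col2": [1, 2, 3, 4]})

# query() returns a new DataFrame, so further methods chain naturally;
# rebinding the name drops the original frame, which sidesteps the
# SettingWithCopyWarning concerns that come from operating on views.
mydf = (mydf.query('col1 == "A"')
            .assign(doubled=lambda d: d["col2"] * 2)
            .reset_index(drop=True))

print(mydf["doubled"].tolist())  # [2, 4]
```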