| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Call python script from vba with wsh
| 39,356,710
|
<p>I need to call a Python script from VBA, which works fine using the shell.</p>
<pre><code>Sub CallPythonScript()
Call Shell("C:\Program Files (x86)\Python27\python.exe C:\Users\Markus\BrowseDirectory.py")
End Sub
</code></pre>
<p>But when I try using wsh (because of the wait functionality) it just won't work anymore. </p>
<pre><code>Sub CallPythonScript()
Dim wsh As Object
Set wsh = VBA.CreateObject("WScript.Shell")
Dim myApp As String: myApp = "C:\Program Files (x86)\Python27\python.exe C:\Users\Markus\BrowseDirectory.py"
Dim waitOnReturn As Boolean: waitOnReturn = True
Dim windowStyle As Integer: windowStyle = 1
wsh.Run """"" & myApp & """"", windowStyle, waitOnReturn
End Sub
</code></pre>
<p>However, I used the same code at home and everything worked out just fine, with the difference that there weren't any blanks in the path. So naturally there must be something wrong with the blanks. Help is greatly appreciated. </p>
| 2
|
2016-09-06T19:46:29Z
| 39,357,131
|
<p>The interpreter cannot possibly distinguish path components containing spaces from separate arguments unless you protect the path with quotes.</p>
<p>A workaround, which is actually a better solution, relies on the fact that Windows associates the <code>.py</code> extension with the installed Python interpreter.</p>
<p>Just do this (like you would launch a <code>.bat</code> file):</p>
<pre><code>Dim myApp As String: myApp = "C:\Users\Markus\BrowseDirectory.py"
</code></pre>
<p>and your code will work no matter where python is installed (and the problem with spaces vanishes)</p>
<p>This little test in excel worked for me and ran <code>pycrust.py</code> which happens to be in my system <code>PATH</code>:</p>
<pre><code>Sub foo()
Dim wsh As Object
Set wsh = VBA.CreateObject("WScript.Shell")
Dim myApp As String: myApp = "pycrust.py"
Dim waitOnReturn As Boolean: waitOnReturn = True
Dim windowStyle As Integer: windowStyle = 1
wsh.Run myApp, windowStyle, waitOnReturn
End Sub
</code></pre>
<p>(I had to simplify your <code>wsh.Run</code> line because it was not working; I suspect that the 256 quotes you added didn't do any good.)</p>
| 1
|
2016-09-06T20:15:10Z
|
[
"python",
"vba",
"excel-vba",
"wsh"
] |
Organize a list of unique integers and create ranges/summaries when possible
| 39,356,754
|
<p>I am trying to set up a function or a class to store unique integers with the ability to add, remove or check for existence. This of course can be easily achieved using a normal set. The tricky part here is the displaying of ranges. I mean, if I have all the numbers between 100 and 20000 I don't want to display a huge list of all the numbers in between, but rather display 100-20000. </p>
<p>Consider the following example:</p>
<pre><code>numbers 3000,3008-3015,3020,3022,3030-3043,3068
</code></pre>
<p>The goal is to create a function or class to add, check or retrieve information about current numbers. Here is how I imagine this: </p>
<pre><code>>>> f_check(3016)
False
>>> f_check(3039)
True
>>> f_add(3016)
3000,3008-3016,3020,3022,3030-3043,3068
>>> f_remove(3039)
3000,3008-3016,3020,3022,3030-3038,3040-3043,3068
>>> f_add(3100)
3000,3008-3016,3020,3022,3030-3038,3040-3043,3068,3100
</code></pre>
<p>and so forth... </p>
<p>Again, pay attention to the range part (3030-3038, 3040-3043) etc. I don't want to display every single entry for the consecutive numbers but rather display a "summary" or range. </p>
<p>Using the same example again, if I add 3021, I'll expect the following result:</p>
<pre><code>>>> f_add(3021)
3000,3008-3016,3020-3022,3030-3043,3068
</code></pre>
<p>Many thanks in advance for your thoughts!</p>
| 0
|
2016-09-06T19:49:16Z
| 39,358,148
|
<p>As people in the comments say, most of your operations are simple list operations. However, if your data contains ranges (e.g. 3008-3015), it becomes a different story. The reason is that you need some sort of decoding of what that range means.
I wrote some simple code using only functions (no classes) that will do just that.</p>
<pre><code>matched = False
</code></pre>
<p>Check whether a value is a single number or a range:</p>
<pre><code>def valueCheck(val):
if "-" in val:
return False
else:
return True
</code></pre>
<p>Split the range into its two values (start, end):</p>
<pre><code>def rangeCheck(val):
if valueCheck(val):
return val, val
else:
return val.split("-")
</code></pre>
<p>Check whether a number matches the content of a list:</p>
<pre><code>def f_check(new_number, numbers_list):
global matched
for i in numbers_list:
rangeCheck(i)
if int(rangeCheck(i)[0]) <= int(new_number) <= int(rangeCheck(i)[1]):
print "{} matches {}".format(new_number, i)
matched = True
break
if not matched:
print "{} doesn't exist in your list".format(new_number)
</code></pre>
<p>Add a number to the list:</p>
<pre><code>def f_add(new_number, numbers_list):
numbers_list.append(new_number)
print numbers_list
</code></pre>
<p>Delete a number from the list:</p>
<pre><code>def f_delete(new_number, numbers_list):
numbers_list.remove(new_number)
print numbers_list
</code></pre>
<p>Assuming your numbers are string values in a list like so:</p>
<pre><code>numbers = ["3000", "3008-3015", "3020", "3022", "3030-3043", "3068"]
</code></pre>
<p>No match:</p>
<pre><code>f_check("2000", numbers)
2000 doesn't exist in your list
</code></pre>
<p>Single match:</p>
<pre><code>f_check("3020", numbers)
3020 matches 3020
</code></pre>
<p>Range match:</p>
<pre><code>f_check("3010", numbers)
3010 matches 3008-3015
</code></pre>
<p>Other simple operations:
Add:</p>
<pre><code>f_add("2000", numbers)
['3000', '3008-3015', '3020', '3022', '3030-3043', '3068', '2000']
</code></pre>
<p>Delete:</p>
<pre><code>f_delete("3068", numbers)
['3000', '3008-3015', '3020', '3022', '3030-3043']
</code></pre>
<p>Please note that a single match can also be done much more easily by using the following:</p>
<pre><code>number = "3020"
if number in numbers:
print "{} matched".format(number)
3020 matched
</code></pre>
<hr>
<p><strong>UPDATE #1</strong></p>
<hr>
<p>To overcome the "problem" you raised of adding a number to existing ranges and/or making a new range if needed, I found a similar question here <a href="http://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list">How to group list of continuous values in ranges</a>, which can solve part of your issue. However, your original encoding (val-val) isn't going to be helpful in this case.
To solve this, you can do the following:</p>
<p><strong>Step 1</strong>, comment these two lines:</p>
<pre><code># rangeCheck(i)
# if int(rangeCheck(i)[0]) <= int(new_number) <= int(rangeCheck(i)[1]):
"""
you will not be using the rangeCheck() or valueCheck() anymore.
"""
</code></pre>
<p><strong>Step 2</strong>, add this line instead of the original IF-statement:</p>
<pre><code>if int(i[0]) <= int(new_number) <= int(i[1]):
</code></pre>
<p><strong>Step 3</strong>, add this function which will flatten your list of numbers</p>
<pre><code>def flatList(numbers_list):
for i in numbers_list:
if len(str(i).split("-")) > 1:
numbers_list.extend(range(int(i.split("-")[0]), int(i.split("-")[1]) + 1))
numbers_list.remove(i)
return numbers_list
</code></pre>
<p><strong>Step 4</strong>, add this function (taken from <a href="http://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list">1</a>) which will group your flat list into ranges of values</p>
<pre><code>from operator import itemgetter
from itertools import groupby
def numbers_group(flatt_list):
flatt_list = [int(i) for i in flatt_list]
ranges = []
for k, g in groupby(enumerate(flatt_list), lambda (i, x): i - x):
group = map(itemgetter(1), g)
ranges.append((group[0], group[-1]))
return ranges
</code></pre>
<p><strong>Usage:</strong></p>
<pre><code>numbers = ["3000", "3008-3015", "3020", "3022", "3030-3043", "3068"]
print flatList(numbers)
['3000', '3020', '3022', '3068', 3008, 3009, 3010, 3011, 3012, 3013, 3014, 3015, 3030, 3031, 3032, 3033, 3034, 3035, 3036, 3037, 3038, 3039, 3040, 3041, 3042, 3043]
print numbers_group(sorted(flatList(numbers)))
[(3008, 3015), (3030, 3043), (3000, 3000), (3020, 3020), (3022, 3022), (3068, 3068)]
numbers = f_add("3021", numbers)
print numbers_group(sorted(flatList(numbers)))
[(3008, 3015), (3030, 3043), (3000, 3000), (3020, 3022), (3068, 3068)]
numbers = f_delete("3021", numbers)
print numbers_group(sorted(flatList(numbers)))
[(3008, 3015), (3030, 3043), (3000, 3000), (3020, 3020), (3022, 3022), (3068, 3068)]
</code></pre>
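<p>For completeness, a small helper (not part of the original answer) could turn those grouped tuples back into the compact string format used in the question; a minimal sketch:</p>
<pre><code>def format_ranges(ranges):
    # ranges is a list of (start, end) tuples as returned by numbers_group()
    parts = []
    for start, end in sorted(ranges):
        parts.append(str(start) if start == end else "{}-{}".format(start, end))
    return ",".join(parts)

print format_ranges(numbers_group(sorted(flatList(numbers))))
# e.g. 3000,3008-3015,3020,3022,3030-3043,3068
</code></pre>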
| 1
|
2016-09-06T21:30:06Z
|
[
"python",
"integer",
"set"
] |
Organize a list of unique integers and create ranges/summaries when possible
| 39,356,754
|
<p>I am trying to set up a function or a class to store unique integers with the ability to add, remove or check for existence. This of course can be easily achieved using a normal set. The tricky part here is the displaying of ranges. I mean, if I have all the numbers between 100 and 20000 I don't want to display a huge list of all the numbers in between, but rather display 100-20000. </p>
<p>Consider the following example:</p>
<pre><code>numbers 3000,3008-3015,3020,3022,3030-3043,3068
</code></pre>
<p>The goal is to create a function or class to add, check or retrieve information about current numbers. Here is how I imagine this: </p>
<pre><code>>>> f_check(3016)
False
>>> f_check(3039)
True
>>> f_add(3016)
3000,3008-3016,3020,3022,3030-3043,3068
>>> f_remove(3039)
3000,3008-3016,3020,3022,3030-3038,3040-3043,3068
>>> f_add(3100)
3000,3008-3016,3020,3022,3030-3038,3040-3043,3068,3100
</code></pre>
<p>and so forth... </p>
<p>Again, pay attention to the range part (3030-3038, 3040-3043) etc. I don't want to display every single entry for the consecutive numbers but rather display a "summary" or range. </p>
<p>Using the same example again, if I add 3021, I'll expect the following result:</p>
<pre><code>>>> f_add(3021)
3000,3008-3016,3020-3022,3030-3043,3068
</code></pre>
<p>Many thanks in advance for your thoughts!</p>
| 0
|
2016-09-06T19:49:16Z
| 39,358,657
|
<p>Since you did not specify what you mean by formatting, I'm assuming you mean sorting a list of numbers based upon their value in ascending order. Also, I modified your list and removed the dashes between certain numbers.</p>
<p>With that being said, what I believe you're asking for is the <code>sorted()</code> function. Straight from the documentation:</p>
<blockquote>
<p>Python lists have a built-in list.sort() method that modifies the list in-place. There is also a sorted() built-in function that builds a new sorted list from an iterable.</p>
</blockquote>
<p>I'd just make a class however, and add the methods that you outlined above. The class would be pretty simple. Here's an example of what I'd do:</p>
<pre><code>class IntList:
def __init__(self, *args):
self.lst = [int(i) for i in args]
self.lst.sort()
def add(self, arg):
self.lst.append(arg)
self.lst.sort()
def remove(self, arg):
self.lst.remove(arg)
self.lst.sort()
def check(self, arg):
return arg in self.lst
</code></pre>
<p>Here is an example usage:</p>
<pre><code>>>> lst = IntList(1, 2, 3, 4, 5)
>>> lst.add(6)
>>> lst.lst
[1, 2, 3, 4, 5, 6]
>>> lst.remove(2)
>>> lst.lst
[1, 3, 4, 5, 6]
>>> lst.check(4)
True
>>> lst.check(90)
False
</code></pre>
<p>If my assumptions are wrong please specify what exactly you mean.</p>
| 0
|
2016-09-06T22:16:57Z
|
[
"python",
"integer",
"set"
] |
Organize a list of unique integers and create ranges/summaries when possible
| 39,356,754
|
<p>I am trying to set up a function or a class to store unique integers with the ability to add, remove or check for existence. This of course can be easily achieved using a normal set. The tricky part here is the displaying of ranges. I mean, if I have all the numbers between 100 and 20000 I don't want to display a huge list of all the numbers in between, but rather display 100-20000. </p>
<p>Consider the following example:</p>
<pre><code>numbers 3000,3008-3015,3020,3022,3030-3043,3068
</code></pre>
<p>The goal is to create a function or class to add, check or retrieve information about current numbers. Here is how I imagine this: </p>
<pre><code>>>> f_check(3016)
False
>>> f_check(3039)
True
>>> f_add(3016)
3000,3008-3016,3020,3022,3030-3043,3068
>>> f_remove(3039)
3000,3008-3016,3020,3022,3030-3038,3040-3043,3068
>>> f_add(3100)
3000,3008-3016,3020,3022,3030-3038,3040-3043,3068,3100
</code></pre>
<p>and so forth... </p>
<p>Again, pay attention to the range part (3030-3038, 3040-3043) etc. I don't want to display every single entry for the consecutive numbers but rather display a "summary" or range. </p>
<p>Using the same example again, if I add 3021, I'll expect the following result:</p>
<pre><code>>>> f_add(3021)
3000,3008-3016,3020-3022,3030-3043,3068
</code></pre>
<p>Many thanks in advance for your thoughts!</p>
| 0
|
2016-09-06T19:49:16Z
| 39,430,955
|
<p>Okay, first of all - thanks everyone for your efforts. Much appreciated.</p>
<p>Here is the solution I ended up with. I have simply used a class to extend a set and modify its output.</p>
<pre><code>class vlans(set):
    def check(self, number):
        if number in self:
            return True

    def __str__(self):
        last = 0
        out = []
        for x in sorted(self):  # sets are unordered, so sort before building ranges
            if len(out) > 0 and last + 1 == int(x):
                out[-1] = out[-1].split("-")[0] + "-" + str(x)
            else:
                out.append(str(x))
            last = int(x)
        return ",".join(out)
</code></pre>
<p>And here is how it works:</p>
<pre><code>>>> f=vlans([3008, 3009, 3010, 3011, 3012, 3013, 3014,
3015, 3020, 3022, 3030, 3031, 3032, 3033, 3034, 3035,
3036, 3037, 3038, 3039, 3040, 3041, 3042, 3000, 3068])
>>>
>>>
>>> print f
3000,3008-3015,3020,3022,3030-3042,3068
>>> f.check(3021)
>>> f.add(3021)
>>> f.check(3021)
True
>>> print f
3000,3008-3015,3020-3022,3030-3042,3068
>>> f.remove(3035)
>>> print f
3000,3008-3015,3020-3022,3030-3034,3036-3042,3068
</code></pre>
<p>Apologies for the vague description of what I needed, but hope it all makes sense now!</p>
| 0
|
2016-09-10T21:56:18Z
|
[
"python",
"integer",
"set"
] |
pythonic way to return object based on parameter type
| 39,356,761
|
<p>I have a simple class that can operate on a variety of base classes (strings, lists, files, ...) handed to it in Python 3.x and return the same type. For now, I am using <code>isinstance</code> for type checking, though I have read all the admonishments against doing so.</p>
<p>My question is: what is the pythonic way of writing this function? 'Duck typing' seems more appropriate for testing that the correct type was used. I considered overloading by passing the type <code>[func(type, input)]</code> or testing with <code>hasattr</code>. I want the code to be consistent with best practices, future-friendly and upgradable by a programmer other than myself. </p>
| 2
|
2016-09-06T19:49:55Z
| 39,356,803
|
<p>Rather than checking the actual <em>type</em> of the object, you should check the <em>behavior</em> of the object.</p>
<p>What I mean by that is rather than</p>
<pre><code>isinstance(foo, str) or isinstance(foo, list) or ...
</code></pre>
<p>You are more interested in whether it is a sequence, meaning it has defined <code>__iter__</code> for example</p>
<pre><code>hasattr(foo, '__iter__')
</code></pre>
<p>In this sense you don't really care what the object is per se, rather you just want the objects to behave a certain way.</p>
<p>Similarly if you wanted the objects to be "index-able" you could check </p>
<pre><code>hasattr(foo, '__getitem__')
</code></pre>
<p>You could follow this pattern to enforce your "duck typing" design.</p>
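<p>Not part of the original answer, but a minimal sketch of what such behaviour-based dispatch might look like (the function name and the set of attributes checked are just illustrative assumptions):</p>
<pre><code>def read_all(obj):
    # File-like objects expose read()
    if hasattr(obj, 'read'):
        return obj.read()
    # Index-able sequences expose __getitem__
    if hasattr(obj, '__getitem__'):
        return obj[:]
    # Anything iterable exposes __iter__
    if hasattr(obj, '__iter__'):
        return list(obj)
    raise TypeError("object does not support the required behaviour")
</code></pre>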
| 2
|
2016-09-06T19:52:38Z
|
[
"python"
] |
Python - Selenium AttributeError: list object has no attribute find_element_by_xpath
| 39,356,818
|
<p>I am attempting to perform some scraping of nutritional data from a website, and everything seems to be going swimmingly so far, until I run into pages that are formatted slightly differently. </p>
<p>Using selenium and a line like this, returns an empty list:</p>
<pre><code>values = browser.find_elements_by_class_name('size-12-fl-oz' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value')
</code></pre>
<p>print would return this:</p>
<pre><code>[]
[]
[]
[]
[]
</code></pre>
<p>But if I define out the element position, then it works fine:</p>
<pre><code>kcal = data.find_elements_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=1]").text
</code></pre>
<p>The issue I have run into is when the elements are not the same from page to page as I iterate. So if a div does not exist in position 9, then an error is thrown. </p>
<p>Now when I go back and try to edit my code to do a <code>try/catch</code>, I am getting:</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'find_element_by_xpath'</p>
</blockquote>
<p>or</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'find_elements_by_xpath'</p>
</blockquote>
<p>Here is the code, with my commented out areas from my testing back and forth. </p>
<pre><code>import requests, bs4, urllib2, csv
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException
browser = webdriver.Firefox()
...
#Loop on URLs to get Nutritional Information from each one.
with open('products.txt') as f:
for line in f:
url = line
# url = 'http://www.tapintoyourbeer.com/index.cfm?id=3'
browser.get(url)
with open("output.csv", "a") as o:
writeFile = csv.writer(o)
browser.implicitly_wait(3)
product_name = browser.find_element_by_tag_name('h1').text.title() #Get product name
size = browser.find_element_by_xpath("(//div[@class='dotted-tab'])").text #Get product size
data = browser.find_elements_by_xpath("//table[@class='beer-data-table']")
# values=[]
# values = browser.find_elements_by_class_name('size-12-fl-oz' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value')
try:
# values = data.find_elements_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])")
kcal = data.find_element_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=1]").text
kj = data.find_element_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=3]").text
fat = data.find_element_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=5]").text
carbs = data.find_element_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=7]").text
protein = data.find_element_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=9]").text
values = [kcal, kj, fat, carbs, protein]
print values
writeFile.writerow([product_name] + [size] + values)
except NoSuchElementException:
print("No Protein listed")
browser.quit()
</code></pre>
<p>I had it working earlier to produce a list, and output to a CSV, but on occasion, the position count would come out wrong. </p>
<pre><code>[u'Budweiser', u'12 FL OZ', u'145.00', u'', u'', u'', u'']
[u"Beck'S", u'12 FL OZ', u'146.00', u'610.86', u'0.00', u'10.40', u'1.80']
[u'Bud Light', u'12 FL OZ', u'110.00', u'460.24', u'0.00', u'6.60', u'0.90']
[u'Michelob Ultra', u'12 FL OZ', u'95.00', u'397.48', u'0.00', u'2.60', u'0.60']
[u'Stella Artois', u'100 ML', u'43.30', u'KCAL/100 ML', u'181.17', u'KJ/100 ML', u'0.00']
</code></pre>
<p>The problems started when position 9 didn't exist on a particular page. </p>
<p>Are there any suggestions on how to fix this headache? Do I need to have cases set up for the different pages & sizes? </p>
<p>I appreciate the help. </p>
| 2
|
2016-09-06T19:53:43Z
| 39,358,595
|
<p>Actually <code>find_elements()</code> returns either a list of <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webelement" rel="nofollow"><code>WebElement</code></a> objects or an empty list. You're storing this result in a list variable named <code>data</code>.</p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'find_element_by_xpath'</p>
<p>AttributeError: 'list' object has no attribute 'find_elements_by_xpath'</p>
</blockquote>
<p>This occurs because you're trying to find a nested <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webelement" rel="nofollow"><code>WebElement</code></a> on the <code>data</code> list, which is why <strong>calling <code>data.find_element_by_xpath()</code> or <code>data.find_elements_by_xpath()</code> is wrong.</strong></p>
<p>Actually <a href="http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webelement.WebElement.find_element" rel="nofollow"><code>find_element()</code></a> or <a href="http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webelement.WebElement.find_elements" rel="nofollow"><code>find_elements()</code></a> searches for elements in the <strong>page context</strong> or in the <strong>context of a <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webelement" rel="nofollow"><code>WebElement</code></a></strong>, not of a <code>list</code>. </p>
<p>So you should take an individual <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webelement" rel="nofollow"><code>WebElement</code></a> from the <code>data</code> list and then find further nested <a href="http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webelement" rel="nofollow"><code>WebElement</code></a>s using that element's context, as below:</p>
<pre><code>if len(data) > 0:
#now find desire element using index
individual_element = data[0]
#now you can find further nested single element using find_element() or list of elements using find_elements() at individual_element context
kcal = individual_element.find_element_by_xpath("(.//div[@class='size-12-fl-oz nutrition-value' or 'size-330-ml hide nutrition-value' or 'size-8-fl-oz nutrition-value'])[position()=1]").text
----------------------------
----------------------------
</code></pre>
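<p>Not part of the original answer, but one way to handle the question's "position 9 doesn't exist on some pages" problem could be to wrap each positional lookup in a try/except; a sketch, assuming the same XPath as above:</p>
<pre><code>from selenium.common.exceptions import NoSuchElementException

xpath = ("(.//div[@class='size-12-fl-oz nutrition-value' or "
         "'size-330-ml hide nutrition-value' or "
         "'size-8-fl-oz nutrition-value'])[position()=%d]")

values = []
if len(data) > 0:
    individual_element = data[0]
    for pos in (1, 3, 5, 7, 9):
        try:
            values.append(individual_element.find_element_by_xpath(xpath % pos).text)
        except NoSuchElementException:
            values.append('')   # keep the row aligned when a value is missing
</code></pre>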
| 1
|
2016-09-06T22:10:44Z
|
[
"python",
"selenium",
"web-scraping"
] |
how to get the elapsed time while interacting with other functions in a text-based application in python
| 39,356,838
|
<p>So I created this project that records football (soccer) stats. Up to now it records only corner and free kicks but I want to record the possession time between the 2 teams. My main problem is that when I try to keep possession time, the other functions won't work. Here's what I've tried:</p>
<pre><code>import time
fk_range = range(0, 1000)
fk_list = list(fk_range)
ck_range = range(0, 1000)
ck_list = list(ck_range)
def body():
while True:
choice = input("")
if choice == "fk":
first_number_fk = fk_list[0]
fk_list.remove(first_number_fk)
number_fk = fk_list[0]
string_number_fk = str(number_fk)
print("Free Kick(s)", string_number_fk)
elif choice == "ck":
first_number_ck = ck_list[0]
ck_list.remove(first_number_ck)
number_ck = ck_list[0]
string_number_ck = str(number_ck)
print("Corner Kick(s): ", string_number_ck)
if choice == "home team":
start_home = time.time()
input()
end_home = time.time()
if choice == "away_team":
start_away = time.time()
input()
end_away = time.time()
elapsed_home = end_home - start_home
elapsed_away = end_away - start_away
elif choice == "q":
print(elapsed_away)
print(elapsed_home)
body()
</code></pre>
<p>This doesn't work, but this is an idea of what I've got. If you didn't understand what I mean, please comment below so I can explain in more detail.</p>
| 0
|
2016-09-06T19:55:01Z
| 39,357,768
|
<p>Given that (as you said in the comments) this code is called repeatedly in a loop, you have two issues.</p>
<ul>
<li><code>elapsed_home</code> is being set in the <code>'away_team'</code> branch instead of the <code>'home_team'</code> branch. Simple enough to fix, just move that up a few lines.</li>
<li>All of your variables are declared inside the function - which means they don't exist outside of it, and they don't persist between calls. You have three (reasonable) options to fix this:
<ol>
<li>Declare them all globally (at the top of the function, put e.g. <code>global elapsed_home</code>).</li>
<li>Pass them all in as arguments every time. Return them all every time.</li>
<li>Use a class to hold them, make it a method on that class instead of a function.</li>
</ol></li>
</ul>
<p>Take your pick. A minimal sketch of the third option follows.</p>
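<p>This sketch (not part of the original answer) shows option 3: the timing state lives on an object, so it persists between calls without globals. The class and method names are just illustrative assumptions:</p>
<pre><code>import time

class PossessionTimer(object):
    def __init__(self):
        self.elapsed_home = 0.0
        self.elapsed_away = 0.0

    def record_home(self):
        # time a spell of home possession; blocks until Enter is pressed
        start = time.time()
        input()
        self.elapsed_home += time.time() - start

    def record_away(self):
        start = time.time()
        input()
        self.elapsed_away += time.time() - start
</code></pre>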
| 1
|
2016-09-06T21:00:34Z
|
[
"python",
"input",
"output",
"text-based"
] |
how to get the elapsed time while interacting with other functions in a text-based application in python
| 39,356,838
|
<p>So I created this project that records football (soccer) stats. Up to now it records only corner and free kicks but I want to record the possession time between the 2 teams. My main problem is that when I try to keep possession time, the other functions won't work. Here's what I've tried:</p>
<pre><code>import time
fk_range = range(0, 1000)
fk_list = list(fk_range)
ck_range = range(0, 1000)
ck_list = list(ck_range)
def body():
while True:
choice = input("")
if choice == "fk":
first_number_fk = fk_list[0]
fk_list.remove(first_number_fk)
number_fk = fk_list[0]
string_number_fk = str(number_fk)
print("Free Kick(s)", string_number_fk)
elif choice == "ck":
first_number_ck = ck_list[0]
ck_list.remove(first_number_ck)
number_ck = ck_list[0]
string_number_ck = str(number_ck)
print("Corner Kick(s): ", string_number_ck)
if choice == "home team":
start_home = time.time()
input()
end_home = time.time()
if choice == "away_team":
start_away = time.time()
input()
end_away = time.time()
elapsed_home = end_home - start_home
elapsed_away = end_away - start_away
elif choice == "q":
print(elapsed_away)
print(elapsed_home)
body()
</code></pre>
<p>This doesn't work, but this is an idea of what I've got. If you didn't understand what I mean, please comment below so I can explain in more detail.</p>
| 0
|
2016-09-06T19:55:01Z
| 39,359,110
|
<p>Check this one</p>
<pre><code>elapsed_home = 0
elapsed_away = 0
start_home = 0
start_away = 0
ball_home = False
ball_away = True  # Assuming that the away team has the starting kick

def body():
    global elapsed_home, elapsed_away, start_home, start_away, ball_home, ball_away
    while True:
        choice = input("")
        ...
        ...
        if choice == "home team":
            if ball_home:
                continue
            else:
                ball_away = False
                ball_home = True
                start_home = time.time()
                elapsed_away += time.time() - start_away
                print("away elapsed: " + str(elapsed_away))
        if choice == "away_team":
            if ball_away:
                continue
            else:
                ball_away = True
                ball_home = False
                start_away = time.time()
                elapsed_home += time.time() - start_home
                print("home elapsed: " + str(elapsed_home))
        ...

# Initialize depending on who has the starting kick
if ball_away:
    start_away = time.time()
else:
    start_home = time.time()
body()
</code></pre>
| 1
|
2016-09-06T23:16:24Z
|
[
"python",
"input",
"output",
"text-based"
] |
Find nth least value in multiple numpy arrays
| 39,356,887
|
<p>In numpy, I can find which 2D array is the least of all 3 2D arrays as follows:</p>
<pre><code>mat_a = np.random.random((5, 5))
mat_b = np.random.random((5, 5))
mat_c = np.random.random((5, 5))
bigmat = np.stack((mat_a, mat_b, mat_c)) # this is a 3, 5, 5 array
minima = np.argmin(bigmat, axis=0) # contains a 5x5 array of 0,1,2 for a,b,c respectively
</code></pre>
<p>How do I extend this, so that it works to find 2nd least of all 3 2D arrays?</p>
<p>-- EDIT:</p>
<p>Expected output is a 5 x 5 numpy array, where each element is represents which of the 3 arrays (mat_a, mat_b, mat_c) is the 2nd least value in bigmat. </p>
<p>So, structure will be same as <code>minima</code>, except <code>minima</code> can only show which array is least.</p>
| 0
|
2016-09-06T19:57:45Z
| 39,357,337
|
<p>I'll bet <code>np.argsort()</code> is the most efficient way.</p>
<p>FYI, here's an interesting thing you can do with array indexing that will also solve your problem (though less efficiently):</p>
<pre><code>max_value = np.max(bigmat)
maskmat = np.array([minima == i for i in xrange(bigmat.shape[0])])
bigmat[maskmat] = max_value # assign each minimum to the array-wide maximum
minima2 = np.argmin(bigmat, axis=0)
</code></pre>
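<p>Not spelled out in the original answer, but a minimal sketch of the <code>np.argsort()</code> route it mentions: position 1 of the sort order along the stacked axis tells you which array holds the second-least value at each cell.</p>
<pre><code>order = np.argsort(bigmat, axis=0)  # sorted indices along the stack axis
minima = order[0]                   # same result as np.argmin(bigmat, axis=0)
second = order[1]                   # 0, 1 or 2 for whichever of a, b, c is 2nd least
</code></pre>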
| 0
|
2016-09-06T20:28:26Z
|
[
"python",
"numpy"
] |
How to catch error from QtMultimedia in Python?
| 39,356,904
|
<p>I created an application in PyQt + QtMultimedia that plays videos. When QtMultimedia cannot find a backend for playing videos (on Linux it's GStreamer) it shows this error in the terminal:</p>
<p><code>defaultServiceProvider::requestService(): no service found for - "org.qt-project.qt.mediaplayer"</code></p>
<p>However, PyQt doesn't throw an exception, so I cannot catch it in Python. Is there a way to detect this error and show a warning to the user?</p>
| 0
|
2016-09-06T19:58:50Z
| 39,370,873
|
<p>The warning is probably shown using <code>qWarning()</code>, so you should be able to use <a href="http://doc.qt.io/qt-5/qtglobal.html#qInstallMessageHandler" rel="nofollow"><code>qInstallMessageHandler</code></a> (part of <code>PyQt5.QtCore</code> in PyQt) to catch them.</p>
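<p>A minimal PyQt5 sketch of what installing such a handler might look like (not from the original answer; the warning text matched here is an assumption):</p>
<pre><code>from PyQt5.QtCore import qInstallMessageHandler

def message_handler(msg_type, context, message):
    # react to the backend-missing warning emitted by QtMultimedia
    if 'no service found' in message:
        print('No multimedia backend available:', message)

qInstallMessageHandler(message_handler)
</code></pre>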
| 0
|
2016-09-07T13:12:00Z
|
[
"python",
"qt",
"pyqt"
] |
How to catch error from QtMultimedia in Python?
| 39,356,904
|
<p>I created an application in PyQt + QtMultimedia that plays videos. When QtMultimedia cannot find a backend for playing videos (on Linux it's GStreamer) it shows this error in the terminal:</p>
<p><code>defaultServiceProvider::requestService(): no service found for - "org.qt-project.qt.mediaplayer"</code></p>
<p>However, PyQt doesn't throw an exception, so I cannot catch it in Python. Is there a way to detect this error and show a warning to the user?</p>
| 0
|
2016-09-06T19:58:50Z
| 39,610,484
|
<p>Take a look at <a href="http://doc.qt.io/qt-5/qmediaplayer.html#mediaStatusChanged" rel="nofollow">the docs</a>.</p>
<p>The problem you describe should be emitting the error signal:</p>
<pre><code>QMediaPlayer::error(QMediaPlayer::Error error)
</code></pre>
<p>where error = QMediaPlayer::ServiceMissingError</p>
<p>So, just connect a slot to that signal and manage it.</p>
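<p>A minimal PyQt5 sketch of that connection (not from the original answer):</p>
<pre><code>from PyQt5.QtMultimedia import QMediaPlayer

player = QMediaPlayer()

def handle_error(error):
    # warn the user when no multimedia backend/service is available
    if error == QMediaPlayer.ServiceMissingError:
        print('Multimedia service is missing - cannot play videos.')

player.error.connect(handle_error)
</code></pre>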
| 0
|
2016-09-21T07:48:37Z
|
[
"python",
"qt",
"pyqt"
] |
I have no idea why the score keeps increasing when the Sprite object is destroyed?
| 39,357,006
|
<p>I am now trying to make a mini game where the user has to dodge pizzas falling from the sky. However, after a pizza touches the bottom of the graphics screen, the user's score keeps increasing infinitely, and I have no idea why it is happening. Please help me fix this problem.</p>
<pre><code>from livewires import games, color
import random
games.init(screen_width = 640, screen_height = 480, fps = 50)
class Pizza(games.Sprite):
image = games.load_image("pizza.bmp")
speed = 2
pizza_count = 0
def __init__(self, x, top):
super(Pizza, self).__init__(image = Pizza.image, x = x,
top = top, dy = Pizza.speed)
def update(self):
if self.bottom > games.screen.height:
Pizza.pizza_count += 1
self.destroy()
if self.left < 0:
self.left = 0
if self.right > games.screen.width:
self.right = games.screen.width
def end_game(self):
end_message = games.Message(value = "Game Over!",
size = 80,
x = games.screen.width/2,
y = games.screen.height/2,
color = color.red,
after_death = games.screen.quit,
lifetime = 5 * games.screen.fps)
games.screen.add(end_message)
class Runner(games.Sprite):
image = games.load_image("chef.bmp")
score = games.Text(value = 0, size = 50, color = color.black,
top = 50, right = games.screen.width - 10)
def __init__(self, x = games.screen.width/2, bottom = games.screen.height):
super(Runner, self).__init__(image = Runner.image,
x = x, bottom = bottom)
games.screen.add(Runner.score)
self.pizza_x = random.randrange(games.screen.width)
pizza = Pizza(x = self.pizza_x, top = 0)
games.screen.add(pizza)
def update(self):
if games.keyboard.is_pressed(games.K_LEFT):
self.x -= 3
if games.keyboard.is_pressed(games.K_RIGHT):
self.x += 3
if self.right > games.screen.width:
self.right = games.screen.width
if self.left < 0:
self.left = 0
for pizza in self.overlapping_sprites:
pizza.destroy()
pizza.end_game()
Runner.score.value += Pizza.pizza_count * 10
Runner.score.right = games.screen.width - 10
games.screen.add(Runner.score)
def main():
background_image = games.load_image("wall.jpg", transparent = False)
games.screen.background = background_image
runner = Runner()
games.screen.add(runner)
games.screen.mainloop()
main()
</code></pre>
| 0
|
2016-09-06T20:05:15Z
| 39,357,790
|
<p>Instances are removed by the garbage collector as soon as there are no more references to them, so there must still be a reference to this one... here:</p>
<pre><code>pizza = Pizza(x = self.pizza_x, top = 0) # pizza is still referring to it!
games.screen.add(pizza)
</code></pre>
<p>Try placing <code>pizza = None</code> under the <code>games.screen.add(pizza)</code> line.
If all is well, the <code>self.destroy()</code> call should remove the last reference and have the intended effect.</p>
<p>It might also be a good idea to use a flag to indicate that scoring has already taken place, just in case the garbage collector isn't fast enough to remove the instance.</p>
<pre><code>def __init__(self, x, top):
    super(Pizza, self).__init__(image = Pizza.image, x = x,
                                top = top, dy = Pizza.speed)
    self.scoreFlag = False

def update(self):
    if self.bottom > games.screen.height:
        if not self.scoreFlag:
            Pizza.pizza_count += 1
            self.scoreFlag = True
        self.destroy()
</code></pre>
| 1
|
2016-09-06T21:02:20Z
|
[
"python",
"object",
"sprite"
] |
NUMPY Implementation of AES significantly slower than pure python
| 39,357,146
|
<p>I am looking at re-implementing the SlowAES code (<a href="http://anh.cs.luc.edu/331/code/aes.py" rel="nofollow">http://anh.cs.luc.edu/331/code/aes.py</a>) to try and take advantage of the native array support of numpy. I'm getting what, to me, is the counter-intuitive result that the pure Python of SlowAES is much, much faster than the same functions implemented using numpy. Here is the clearest example I have.</p>
<p>One of the main operations in AES is Shift Rows, where each row in the 4x4 element byte array is shifted by some number of positions (0 for row 0, 1 for row 1, etc.). The original Python code treats this 4x4 byte state array as a one dimensional 16-element list, then uses slicing to create virtual rows to rotate:</p>
<pre><code>def rotate(word, n):
return word[n:] + word [0:n]
def shiftRows(state):
for i in range(4):
state[i*4:i*4+4] = rotate(state[i*4:i*4+4], -i)
</code></pre>
<p>Running timeit on shiftRows using a list of 16 integers results in a time of 3.47 microseconds.</p>
<p>Re-implementing this same function in numpy, assuming a 4x4 integer input array, would be simply:</p>
<pre><code>def shiftRows(state):
for i in range(4):
state[i] = np.roll(state[i],-i)
</code></pre>
<p>However, timeit shows this to have an execution time of 16.3 microseconds.</p>
<p>I was hoping numpy's optimized array operations might result in somewhat faster code. Where am I going wrong? And is there some approach that would result in a faster AES implementation than pure Python? There are some intermediate results that I want to get at, so pycrypto may not be applicable (though if this is going to be too slow, I may have to take a second look).</p>
<p><em>07 Sep 2016 - Thanks for the answers. To answer the question of "why," I'm looking at running hundreds of thousands, if not millions, of sample plaintext/ciphertext pairs. So, while the time difference for any single encryption makes little difference, any time savings I can get could make a huge difference in the long run.</em></p>
| 2
|
2016-09-06T20:16:17Z
| 39,357,537
|
<p>The simple answer is that there's a lot of overhead in creating arrays, so operations on small lists are usually faster than equivalent ones on arrays. That's especially true if the array version is iterative like the list one. For large arrays, operations using compiled methods will be faster.</p>
<p>These 4 'roll' timings illustrate this:</p>
<p>For a small list:</p>
<pre><code>In [93]: timeit x=list(range(16)); x=x[8:]+x[:8]
100000 loops, best of 3: 2.75 µs per loop
In [94]: timeit y=np.arange(16); y=np.roll(y,8)
The slowest run took 40.90 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 14.5 µs per loop
</code></pre>
<p>for a large one:</p>
<pre><code>In [95]: timeit x=list(range(1000)); x=x[500:]+x[:500]
10000 loops, best of 3: 52.9 µs per loop
In [96]: timeit y=np.arange(1000); y=np.roll(y,500)
The slowest run took 28.91 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.2 µs per loop
</code></pre>
<p>We could further refine the question by extracting the <code>range</code> and <code>arange</code> steps out of the timing loop.</p>
<p>The <code>np.roll</code> operation is essentially:</p>
<pre><code>y[np.concatenate((np.arange(8,16), np.arange(0,8)))]
</code></pre>
<p>That constructs 4 arrays, the 2 <code>arange</code>, the <code>concatenate</code>, and the final indexed array.</p>
| 1
|
2016-09-06T20:43:28Z
|
[
"python",
"numpy",
"aes"
] |
Python: match lists in two lists of lists
| 39,357,414
|
<p>I have two lists of lists. The first is composed of lists formatted as follows:</p>
<pre><code>listInA = [id, a1, a2, a3]
</code></pre>
<p>The second is composed of lists formatted similarly, with the id first:</p>
<pre><code>listInB = [id, b1, b2, b3]
</code></pre>
<p>Neither list is sorted, and they are not of equal lengths. What is the best way to make a list of lists, with each list of the format:</p>
<pre><code>listInC = [id, a1, a2, a3, b1, b2, b3]
</code></pre>
<p>where the id's are matched between both lists? Thanks!</p>
| 0
|
2016-09-06T18:21:20Z
| 39,357,532
|
<p>You can create a dictionary mapping ID to list from the second list of lists using a dict comprehension. Then, create your new list using a list comprehension, appending the matching list's values based on ID.</p>
<pre><code>listA = [
[1, 'a', 'b', 'c'],
[2, 'd', 'e', 'f'],
]
listB = [
[2, 'u', 'v', 'w'],
[1, 'x', 'y', 'z'],
]
b_map = {b[0]: b for b in listB}
print([a + b_map[a[0]][1:] for a in listA])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[
[1, 'a', 'b', 'c', 'x', 'y', 'z'],
[2, 'd', 'e', 'f', 'u', 'v', 'w']
]
</code></pre>
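<p>Since the two lists are not of equal length, some IDs from <code>listA</code> may have no match in <code>listB</code>; a variant (not in the original answer) that simply skips those rows:</p>
<pre><code>merged = [a + b_map[a[0]][1:] for a in listA if a[0] in b_map]
</code></pre>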
| 3
|
2016-09-06T20:43:05Z
|
[
"python",
"join",
"list"
] |
Python: match lists in two lists of lists
| 39,357,414
|
<p>I have two lists of lists. The first is composed of lists formatted as follows:</p>
<pre><code>listInA = [id, a1, a2, a3]
</code></pre>
<p>The second is composed of lists formatted similarly, with the id first:</p>
<pre><code>listInB = [id, b1, b2, b3]
</code></pre>
<p>Neither list is sorted, and they are not of equal lengths. What is the best way to make a list of lists, with each list of the format:</p>
<pre><code>listInC = [id, a1, a2, a3, b1, b2, b3]
</code></pre>
<p>where the id's are matched between both lists? Thanks!</p>
| 0
|
2016-09-06T18:21:20Z
| 39,357,607
|
<p>The fact that the lists are not sorted and are not of equal length increases the difficulty of coming up with an efficient solution to the problem. However, a quick and dirty solution that would work in the end is definitely still feasible.</p>
<p>The ID seems to be first in both lists. Since this is the case, then for each list <code>a</code> in <code>A</code>, we can get the first element of <code>a</code> and check the lists <code>b</code> in <code>B</code>. If the first elements match, then we can create a list including the remaining elements of <code>a</code> and <code>b</code> and append that to <code>C</code>. In short...</p>
<pre><code>def foo(A, B):
C = []
for a in A:
aID = a[0]
for b in B:
if aID == b[0]:
c = [aID, a[1], a[2], a[3], b[1], b[2], b[3]]
C.append(c)
return C
</code></pre>
<p>When dealing with large list sizes for <code>A</code> and <code>B</code>, the efficiency of this solution would drop abysmally, but it should work for reasonably-sized lists.</p>
| 0
|
2016-09-06T20:49:12Z
|
[
"python",
"join",
"list"
] |
How to create a file in a folder
| 39,357,516
|
<p>How can I create a <code>.pck</code> file in a folder in Python?
The name of the folder is <code>example</code>.
I tried:</p>
<pre><code>file = open("\example\file.pck","wb")
</code></pre>
<p>But it says: </p>
<blockquote>
<p>OSError: [Errno 22] Invalid argument: '\example\x0cile.pck'</p>
</blockquote>
<p>EDIT:
Solved! The right command is: <code>file = open("example/file.pck", "wb")</code></p>
| -1
|
2016-09-06T20:41:53Z
| 39,357,538
|
<p>Use forward slashes</p>
<pre><code>open("/example/file.pck", "wb")
</code></pre>
<p>Your problem is likely that backslashes were being interpreted as <a href="https://docs.python.org/2/reference/lexical_analysis.html#string-literals" rel="nofollow">escape sequences</a>. </p>
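<p>For completeness (not part of the original answer), two other ways to keep Windows-style backslashes without them being read as escapes, assuming the <code>example</code> folder sits in the working directory:</p>
<pre><code>open(r"example\file.pck", "wb")   # raw string: backslashes are not escape sequences
open("example\\file.pck", "wb")   # or escape each backslash explicitly
</code></pre>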
| 3
|
2016-09-06T20:43:32Z
|
[
"python"
] |
How to create a file in a folder
| 39,357,516
|
<p>How can I create a <code>.pck</code> file in a folder in Python?
The name of the folder is <code>example</code>.
I tried:</p>
<pre><code>file = open("\example\file.pck","wb")
</code></pre>
<p>But it says: </p>
<blockquote>
<p>OSError: [Errno 22] Invalid argument: '\example\x0cile.pck'</p>
</blockquote>
<p>EDIT:
Solved! The right command is: <code>file = open("example/file.pck", "wb")</code></p>
| -1
|
2016-09-06T20:41:53Z
| 39,357,636
|
<p>You need to use forward slashes, and it also seems that the <code>example</code> folder you are trying to access doesn't exist. That's probably because you wanted to enter a relative address but entered an absolute one. So it should be:</p>
<p><code>open("example/file.pck", "wb")</code></p>
| 0
|
2016-09-06T20:50:31Z
|
[
"python"
] |
How to create a file in a folder
| 39,357,516
|
<p>How can I create a <code>.pck</code> file in a folder in Python?
The name of the folder is <code>example</code>.
I tried:</p>
<pre><code>file = open("\example\file.pck","wb")
</code></pre>
<p>But it says: </p>
<blockquote>
<p>OSError: [Errno 22] Invalid argument: '\example\x0cile.pck'</p>
</blockquote>
<p>EDIT:
Solved! The right command is: <code>file = open("example/file.pck", "wb")</code></p>
| -1
|
2016-09-06T20:41:53Z
| 39,357,666
|
<p>Paths can be tricky, especially if you want to run your code on Unix-based and Windows systems. You can avoid many problems by using <code>os.path</code>, which generates paths that will work on any OS. </p>
<p>To open a new file use the 'w+' option instead of 'rw'.</p>
<p>In your case:</p>
<pre><code>import os
file_path = os.path.join(os.path.curdir, 'example', 'file.pck')  # include the target folder
file = open(file_path,'w+')
</code></pre>
| 0
|
2016-09-06T20:52:51Z
|
[
"python"
] |
Python create dictionary type variable out of row items in wxListCtrl
| 39,357,571
|
<p>In my wxPython app I have a wxListCtrl which I populate with some data. Is there a way I can then use the ListCtrl row items to create a dictionary variable?</p>
<p>Say my list control has 4 rows in it with the columns: Rush(y/n), Subject, ReceivedDateTime.</p>
<p>I want to create a dictionary variable like below:</p>
<pre><code>mydata = {
1 : ("Y", "Subject1", "datetime1"),
2 : ("N", "Subject2", "Datetime2"),
3 : ("N", "Subject3", "datetime3"),
4 : ("Y", "Subject4", "Datetime4")
}
</code></pre>
| 0
|
2016-09-06T20:46:25Z
| 39,383,151
|
<p>Just loop through the rows, and retrieve the data as in:</p>
<pre><code>def get_dict(self):
data = {}
count = self.list_ctrl.GetItemCount()
for row in range(count):
data[row + 1] = tuple([self.list_ctrl.GetItem(itemId=row, col=c).GetText() for c in range(3)])
return data
</code></pre>
| 0
|
2016-09-08T05:26:05Z
|
[
"python",
"dictionary",
"wxpython",
"listctrl"
] |
HOG Descriptor using Python + OpenCV
| 39,357,574
|
<p>I am trying to implement the HOG Descriptor with OpenCV to detect pedestrians in a <a href="https://www.youtube.com/watch?v=p_GKeQvlFlA" rel="nofollow">video</a>. I am currently using the pre-trained file shipped with OpenCV, <code>hogcascade_pedestrians.xml</code>. Unfortunately the documentation on this part is very poor on the internet, although the HOG Descriptor is very effective for human detection. I have been writing code for pedestrian detection with Python, and I have stopped at the following code:</p>
<pre><code>import cv2
import numpy as np
import imutils
VidCap = cv2.VideoCapture('pedestrians.mp4')
HOGCascade = cv2.HOGDescriptor('hogcascade_pedestrians.xml')
HOGCascade.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
while True:
_ , image = VidCap.read()
image = imutils.resize(image, width=700)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=15.0,tileGridSize=(8,8))
gray = clahe.apply(gray)
winStride = (8,8)
padding = (16,16)
scale = 1.05
meanshift = -1
(rects, weights) = HOGCascade.detectMultiScale(gray, winStride=winStride,
padding=padding,
scale=scale,
useMeanshiftGrouping=meanshift)
for (x, y, w, h) in rects:
cv2.rectangle(image, (x, y), (x+w, y+h), (0,200,255), 2)
cv2.imshow('Image', image)
if cv2.waitKey(5) == 27:
break
VidCap.release()
cv2.destroyAllWindows()
</code></pre>
<p>I presume that the script would look something like code written for Haar Cascades, but I have tried that and got errors. Does anyone have any idea how to implement the HOG Descriptor in OpenCV with Python?</p>
<p>I have read the following <a href="http://stackoverflow.com/questions/6090399/get-hog-image-features-from-opencv-python">question</a>, but I get nothing from the second answer. </p>
<p>My problem is that I can't find the way to write the code, as the documentation about this part is very poor.</p>
<p><strong>Note:</strong> I am using OpenCV 3.1.0-dev with Python 2.7.11</p>
| 0
|
2016-09-06T20:46:42Z
| 39,387,128
|
<pre><code>HOGCascade = cv2.HOGDescriptor()
</code></pre>
<p>If you want to use this <code>.xml</code>, you have lots of preparation work to do. </p>
<p>When you finally get a usable descriptor, you should replace the <code>cv2.HOGDescriptor_getDefaultPeopleDetector()</code> in
<code>setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())</code> with it.</p>
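<p>For reference, a minimal sketch (not part of the original answer) of using the built-in default people detector, which the question's loop is already close to; <code>gray</code> and <code>image</code> come from the question's capture loop:</p>
<pre><code>hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

rects, weights = hog.detectMultiScale(gray, winStride=(8, 8),
                                      padding=(16, 16), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 200, 255), 2)
</code></pre>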
| 0
|
2016-09-08T09:21:29Z
|
[
"python",
"python-2.7",
"opencv",
"image-processing",
"opencv3.1"
] |
Import vs import on circular import?
| 39,357,737
|
<p>I read <a href="http://stackoverflow.com/questions/7336802/how-to-avoid-circular-imports-in-python/37126790#37126790">this post</a> about how to prevent circular imports in Python. I don't understand a claim in the post: </p>
<pre><code>import package.a # Absolute import and
import a # Implicit relative import (deprecated, py2 only)
</code></pre>
<p>can avoid circular import but</p>
<pre><code>from ... import ...
#or
import ... as ..
</code></pre>
<p>can't (on python 2.x). </p>
<p>Does anyone know the reason behind this?</p>
| 1
|
2016-09-06T20:58:32Z
| 39,475,551
|
<p>After some searching, I figured out the answer myself.</p>
<p>Essentially, a circular import is a problem for <code>from ... import ...</code> because it returns the imported module only after the module's code has been executed.</p>
<p>To illustrate, assume we have <code>import a</code> in b.py and <code>import b</code> in a.py. For a plain <code>import a</code> or <code>import b</code>, Python simply looks up sys.modules to find a/b, puts a new module in if needed, and returns.</p>
<p>On the other hand, <code>from a import c</code> in b.py runs code like this (pseudo-Python; similar for <code>from b import d</code> in a.py):</p>
<pre><code>if 'a' not in sys.modules:
sys.modules[a] = (A new empty module object)
run every line of code in a.py
add the module a to its parent's namespace
return module 'a'
</code></pre>
<p>We start from a.py: it puts an empty module for b into sys.modules and starts executing b.py, which in turn puts a into sys.modules and starts executing a.py as the module a. That run hits <code>from b import d</code> again, finds b in sys.modules, but b is still empty, so it throws an error: no attribute d.</p>
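<p>A concrete two-file illustration of this sequence (hypothetical file names, not from the original post):</p>
<pre><code># a.py  (run this file)
from b import d    # starts executing b.py while a is still incomplete
c = 1

# b.py
from a import c    # re-imports a as a module; a.py hits 'from b import d' again
d = 2               # while b is still empty, so importing d fails
</code></pre>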
<p>PS1: the referenced post is wrong; <code>import ... as ...</code> is fine for circular imports.</p>
<p>PS2: <code>from a import c</code> behaves the same as <code>import a.c</code> in Python 3. </p>
| 0
|
2016-09-13T17:11:59Z
|
[
"python",
"python-2.7",
"import"
] |
Replace cell value in pandas dataframe where value is 'NaN' with value from another/same dataframe
| 39,357,828
|
<p>The following pandas dataframe df1 was generated:</p>
<pre><code>df1 = pd.DataFrame(data = {'Value': [1.989920, 'NaN', -9.363819, 'NaN'], 'Group-Index' : [6, 6, 7, 7], 'Group-Order' : [2, 2, 2, 2], 'Index' : [221, 225, 222, 222] })
Value Group-Index Group-Order Index
221 1.989920 6 2 221
225 NaN 6 2 225
222 -9.363819 7 2 222
278 NaN 7 2 222
</code></pre>
<p>Beware that the pandas index varies because I've used the dataframe output from my actual project.</p>
<p>and there is a second dataframe df2 available which looks as follows:</p>
<pre><code>df2 = pd.DataFrame({'Value': [1.989920, -9.363819], 'Group-Index' : [6, 7], 'Group-Order' : [2, 2], 'Index' : [221, 222] })
Value Group-Index Group-Order Index
221 1.989920 6 2 221
222 -9.363819 7 2 222
</code></pre>
<ol>
<li><p>How can I search through the Value column in the first dataframe, find all NaN values, and then replace them with the value from the second dataframe where the Group-Index and Group-Order columns are the same in both rows of both dataframes?</p></li>
<li><p>Another solution to my problem would be to copy the value from the row where a value is defined to the NaN cell with matching Group-Index and Group-Order within the same dataframe df1.</p></li>
</ol>
<p>Thus, the result should be:</p>
<pre><code> Value Group-Index Group-Order Index
221 1.989920 6 2 221
225 1.989920 6 2 225
222 -9.363819 7 2 222
278 -9.363819 7 2 222
</code></pre>
| 2
|
2016-09-06T21:05:18Z
| 39,357,939
|
<pre><code>vnull = df1.Value.isnull()
mrg_cols = ['Group-Index', 'Group-Order']
df1.loc[vnull, 'Value'] = df2.merge(df1.loc[vnull, mrg_cols]).Value.values
df1
</code></pre>
<p><a href="http://i.stack.imgur.com/mKHHX.png" rel="nofollow"><img src="http://i.stack.imgur.com/mKHHX.png" alt="enter image description here"></a></p>
| 2
|
2016-09-06T21:13:48Z
|
[
"python",
"pandas"
] |
Get file name before downloading
| 39,357,863
|
<p>I'm trying to build a scraper in python and only want to download new episodes of a podcast. The problem is that I don't know what the file names will be until after the file is downloaded. Is there a way to get the file name before downloading?</p>
<pre><code>def download(path, fileName):
    if(not os.path.exists(fileName)):
        wget.download(path)
</code></pre>
| -2
|
2016-09-06T21:07:52Z
| 39,358,010
|
<p>I'm guessing that the URL to the podcast redirects you to another URL. In that case you can use <code>requests</code> to get the final URL:</p>
<pre><code>import requests
final_url = requests.head(url_to_podcast, allow_redirects=True).url
</code></pre>
<p>and then get the filename from the final url</p>
<pre><code>filename = final_url.split('/')[-1]
</code></pre>
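<p>Putting the two pieces together with the question's <code>download</code> helper might look like this (a sketch, not from the original answer):</p>
<pre><code>import os
import requests
import wget

def download(path):
    # resolve redirects first so the file name is known before downloading
    final_url = requests.head(path, allow_redirects=True).url
    filename = final_url.split('/')[-1]
    if not os.path.exists(filename):
        wget.download(final_url)
</code></pre>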
| -2
|
2016-09-06T21:19:32Z
|
[
"python",
"beautifulsoup",
"wget"
] |
Pandas DENSE RANK
| 39,357,882
|
<p>I'm dealing with pandas dataframe and have a frame like this:</p>
<pre><code>Year Value
2012 10
2013 20
2013 25
2014 30
</code></pre>
<p>I want to make an equivalent of the DENSE_RANK() over (order by year) function, to make an additional column like this:</p>
<pre><code> Year Value Rank
2012 10 1
2013 20 2
2013 25 2
2014 30 3
</code></pre>
<p>How can it be done in pandas?</p>
<p>Thanks!</p>
| 3
|
2016-09-06T21:09:18Z
| 39,357,944
|
<p>You can convert the year to categoricals and then take their codes (adding one because they are zero indexed and you wanted the initial value to start with one per your example).</p>
<pre><code>df['Rank'] = df.Year.astype('category').cat.codes + 1
>>> df
Year Value Rank
0 2012 10 1
1 2013 20 2
2 2013 25 2
3 2014 30 3
</code></pre>
| 2
|
2016-09-06T21:14:05Z
|
[
"python",
"sql",
"pandas"
] |
Pandas DENSE RANK
| 39,357,882
|
<p>I'm dealing with pandas dataframe and have a frame like this:</p>
<pre><code>Year Value
2012 10
2013 20
2013 25
2014 30
</code></pre>
<p>I want to make an equivalent of the DENSE_RANK() over (order by year) function, to make an additional column like this:</p>
<pre><code> Year Value Rank
2012 10 1
2013 20 2
2013 25 2
2014 30 3
</code></pre>
<p>How can it be done in pandas?</p>
<p>Thanks!</p>
| 3
|
2016-09-06T21:09:18Z
| 39,358,100
|
<p>Use <code>pd.Series.rank</code> with <code>method='dense'</code></p>
<pre><code>df['Rank'] = df.Year.rank(method='dense').astype(int)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/67n7I.png" rel="nofollow"><img src="http://i.stack.imgur.com/67n7I.png" alt="enter image description here"></a></p>
| 3
|
2016-09-06T21:26:19Z
|
[
"python",
"sql",
"pandas"
] |
Pandas DENSE RANK
| 39,357,882
|
<p>I'm dealing with pandas dataframe and have a frame like this:</p>
<pre><code>Year Value
2012 10
2013 20
2013 25
2014 30
</code></pre>
<p>I want to make an equivalent of the DENSE_RANK() over (order by year) function, to make an additional column like this:</p>
<pre><code> Year Value Rank
2012 10 1
2013 20 2
2013 25 2
2014 30 3
</code></pre>
<p>How can it be done in pandas?</p>
<p>Thanks!</p>
| 3
|
2016-09-06T21:09:18Z
| 39,368,586
|
<p>The fastest solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow"><code>factorize</code></a> (note that it numbers values by order of first appearance, so it matches a dense rank only when the column is already sorted by <code>Year</code>, as it is here):</p>
<pre><code>df['Rank'] = pd.factorize(df.Year)[0] + 1
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>#len(df)=40k
df = pd.concat([df]*10000).reset_index(drop=True)
In [13]: %timeit df['Rank'] = df.Year.rank(method='dense').astype(int)
1000 loops, best of 3: 1.55 ms per loop
In [14]: %timeit df['Rank1'] = df.Year.astype('category').cat.codes + 1
1000 loops, best of 3: 1.22 ms per loop
In [15]: %timeit df['Rank2'] = pd.factorize(df.Year)[0] + 1
1000 loops, best of 3: 737 µs per loop
</code></pre>
| 0
|
2016-09-07T11:23:35Z
|
[
"python",
"sql",
"pandas"
] |
Performance comparison between type casting procedures
| 39,357,888
|
<p>I am reading lines from a file and want to convert them to an integer list. The file has one million lines, so I was just wondering which procedure will make it faster. Say
<code>lst=['10','12','31','41','15']</code> is the current line which has been read from the file.
I can do the casting using the map function, like
<code>lst1=map(int,lst)</code>,
or using a list comprehension, like
<code>lst2=[int(x) for x in lst]</code>.
Which one would be the most efficient and fastest way to do it?</p>
| -1
|
2016-09-06T21:09:47Z
| 39,358,036
|
<p><code>[int(x, 10) for x in lst]</code> is at least significantly faster than your two ways:</p>
<pre><code>>>> from timeit import timeit
>>> setup = "lst = ['10','12','31','41','15']"
>>> timeit('map(int, lst)', setup)
4.454881024529442
>>> timeit('[int(x) for x in lst]', setup)
4.7153410495946275
>>> timeit('[int(x, 10) for x in lst]', setup)
1.6508196962672343
</code></pre>
<p>Tested with Python 2.7.11 (I'm assuming you use Python 2 because otherwise your <code>lst1=map(...)</code> would be misleading).</p>
| 0
|
2016-09-06T21:21:39Z
|
[
"python",
"casting"
] |
Read CSV file and only keep some lines according to values in list (Python)
| 39,358,016
|
<p>I will reformulate the problem I had previously stated:</p>
<p>I am currently trying to read only about 26 million rows from a file that has about 600 million. I currently have a list of the 26 million row numbers that I am interested in. </p>
<p>My solution is as follows:</p>
<pre><code>## list_ is a list of indices with the number of the 26MM rows
# First, open the output file where i want to copy the 26MM rows
with open(output_file,'w') as g:
# Open the source file with 600MM rows
with open(source_file,'r') as f:
for i,line in enumerate(f):
if i in list_:
g.write(line)
</code></pre>
<p>Given the size of the list and the size of the original file, I am afraid it might take too long to process this file. I am aware that this topic has been covered in other questions, but I don't think other posts have asked when the text files are very large.</p>
<p>Thank you and apologies for the previous confusing post,</p>
| -6
|
2016-09-06T21:20:14Z
| 39,358,121
|
<p>If you only want to check whether a value is in that collection, the best time you can get is O(1) per check. You probably want to use a hash set instead of a list. You can google hash sets in Python to see some examples, like <a href="http://stackoverflow.com/questions/26724002/contains-of-hashsetinteger-in-python">this</a>, or see this doc, <a href="https://docs.python.org/2/library/sets.html" rel="nofollow">sets</a>. </p>
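<p>Applied to the question's loop, the change is small; a sketch (not part of the original answer):</p>
<pre><code>wanted = set(list_)   # O(1) membership tests instead of O(n) scans of a list

with open(output_file, 'w') as g:
    with open(source_file, 'r') as f:
        for i, line in enumerate(f):
            if i in wanted:
                g.write(line)
</code></pre>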
| 0
|
2016-09-06T21:28:26Z
|
[
"python"
] |
Inserting a stop statement
| 39,358,022
|
<p>How can I get this program to stop when the number <strong>13</strong> is entered?</p>
<pre><code>print "\t:-- Enter a Multiplication --: \n"
x = ("Enter the first number: ")
y = ("Enter your second number to multiply by: ")
for total in range(1, 12):
x = input("Enter a number: ")
y = input("Multiplied by this: ")
print "\n TOTAL: "
print x, "X", y, "=", (x * y)
#Exits the program.
raw_input("\t\tPress Enter to Exit")
</code></pre>
| -2
|
2016-09-06T21:20:39Z
| 39,358,597
|
<p>If I understand what you are trying to do here, I think an if statement would be a better solution. Something like this:</p>
<pre><code>print("Enter a multiplication")
x = int(input("Enter a number: "))
y = int(input("Multiplied by this: "))
def multiply(x, y):
if 1 < x < 13 and 1 < y < 13:
answer = x * y
print("Total:")
print("%s X %s = %s" % (x, y, answer))
else:
print("Your number is out of range")
multiply(x, y)
</code></pre>
<p>But to be honest, there are quite a few parts of your code that need some work.</p>
| 0
|
2016-09-06T22:10:51Z
|
[
"python"
] |
Inserting a stop statement
| 39,358,022
|
<p>How can I get this program to stop when the number <strong>13</strong> is entered?</p>
<pre><code>print "\t:-- Enter a Multiplication --: \n"
x = ("Enter the first number: ")
y = ("Enter your second number to multiply by: ")
for total in range(1, 12):
x = input("Enter a number: ")
y = input("Multiplied by this: ")
print "\n TOTAL: "
print x, "X", y, "=", (x * y)
#Exits the program.
raw_input("\t\tPress Enter to Exit")
</code></pre>
| -2
|
2016-09-06T21:20:39Z
| 39,401,248
|
<p>You used a <strong>for</strong> loop; this is appropriate when you know <em>before you enter the loop</em> how many times you want to execute it. This is not the problem you have. Instead, use a <strong>while</strong> loop; this keeps going until a particular condition occurs.</p>
<p>Try something like this:</p>
<pre><code># Get the first input:
x = input("Enter a number (13 to quit): ")
# Check NOW to see whether we have to quit
while x <= 12:
# As long as the first number is acceptable,
# get the second and print the product.
y = input("Multiplied by this: ")
print "\n TOTAL: \n", x, "X", y, "=", (x * y)
# Get another 'x' value before going back to the top.
x = input("Enter a number (13 to quit): ")
# -- Here, you're done with doing products.
# -- Continue as you wish, such as printing a "times table".
</code></pre>
| 0
|
2016-09-08T22:44:35Z
|
[
"python"
] |
python - different encoding results by setting the "encoding" parameter in diff. functions
| 39,358,024
|
<p>I have a function like </p>
<pre><code>f = open('workfile', 'r', encoding='utf-8')
df = pandas.read_csv(...)
</code></pre>
<p>, which opens a csv file. When I set the <code>encoding</code> parameter on the <code>read_csv</code> function, I got different encoding results than when setting the parameter on the <code>open()</code> function: for example, the <strong>€</strong> and <strong>ö</strong> characters were misinterpreted when the parameter was set on <code>read_csv</code>. Can anyone explain why that is?</p>
| 0
|
2016-09-06T21:20:42Z
| 39,358,856
|
<p>This is what happens in your program</p>
<pre><code>f = open('workfile', 'r', encoding='utf-8') # 1
df = pandas.read_csv(f, encoding=e) # 2
</code></pre>
<p>(1) The file is told to decode bytes using the encoding 'utf-8'. If you print the representation of the file handle f it will show something like </p>
<pre><code> <_io.TextIOWrapper name='workfile' mode='r' encoding='utf-8'>
</code></pre>
<p>When you pull text from this wrapper you will get a unicode string.</p>
<p>(2) read_csv() is told to use some encoding e. Therefore it will convert the unicode string back to bytes (doing an implicit encode() -- which will use 'utf-8' on my system), and those bytes are then decoded with the encoding e.</p>
<p>Here is a little test program for illustration</p>
<pre><code>import pandas
for file in ['workfile-utf-8.csv', 'workfile-cp1252.csv']:
for file_encoding in ['utf-8', 'cp1252']:
for pandas_encoding in [None, 'utf-8', 'cp1252']:
with open(file, 'r', encoding=file_encoding) as fp:
try:
print('***', file, fp, pandas_encoding)
df = pandas.read_csv(fp, encoding=pandas_encoding)
print(df)
except Exception as ex:
print(ex)
</code></pre>
<p>The files mentioned are in an encoding that is reflected in their name.</p>
<p>The output should be like this (may depend on your environment)</p>
<pre><code>(1) workfile-utf-8.csv <_io.TextIOWrapper name='workfile-utf-8.csv' mode='r'
encoding='utf-8'> None
a b c
0 Hällo â¬uro Ãl
(2) workfile-utf-8.csv <_io.TextIOWrapper name='workfile-utf-8.csv' mode='r'
encoding='utf-8'> utf-8
a b c
0 Hällo â¬uro Ãl
(3) workfile-utf-8.csv <_io.TextIOWrapper name='workfile-utf-8.csv' mode='r'
encoding='utf-8'> cp1252
a b c
0 Hällo ââ¬uro Ãâl
(4) workfile-utf-8.csv <_io.TextIOWrapper name='workfile-utf-8.csv' mode='r'
encoding='cp1252'> None
a b c
0 Hällo ââ¬uro Ãâl
(5) workfile-utf-8.csv <_io.TextIOWrapper name='workfile-utf-8.csv' mode='r'
encoding='cp1252'> utf-8
a b c
0 Hällo ââ¬uro Ãâl
(6) workfile-utf-8.csv <_io.TextIOWrapper name='workfile-utf-8.csv' mode='r'
encoding='cp1252'> cp1252
a b c
0 HÃÆÃ¤llo âââ¬Å¡Ã¬uro ̢̮â¬âl
(7) workfile-cp1252.csv <_io.TextIOWrapper name='workfile-cp1252.csv'
mode='r' encoding='utf-8'> None
Error tokenizing data. C error: Calling read(nbytes) on source failed. Try engine='python'.
(8) workfile-cp1252.csv <_io.TextIOWrapper name='workfile-cp1252.csv'
mode='r' encoding='utf-8'> utf-8
Error tokenizing data. C error: Calling read(nbytes) on source failed. Try engine='python'.
(9) workfile-cp1252.csv <_io.TextIOWrapper name='workfile-cp1252.csv'
mode='r' encoding='utf-8'> cp1252
Error tokenizing data. C error: Calling read(nbytes) on source failed. Try engine='python'.
(10) workfile-cp1252.csv <_io.TextIOWrapper name='workfile-cp1252.csv'
mode='r' encoding='cp1252'> None
a b c
0 Hällo â¬uro Ãl
(11) workfile-cp1252.csv <_io.TextIOWrapper name='workfile-cp1252.csv'
mode='r' encoding='cp1252'> utf-8
a b c
0 Hällo â¬uro Ãl
(12) workfile-cp1252.csv <_io.TextIOWrapper name='workfile-cp1252.csv'
mode='r' encoding='cp1252'> cp1252
a b c
0 Hällo ââ¬uro Ãâl
</code></pre>
<p>(1) File is utf-8 -> decode utf-8 -> use as is -> ok</p>
<p>(2) File is utf-8 -> decode utf-8 -> (encode w. default) -> (decode utf-8) -> ok here, but not in other environments</p>
<p>(3) File is utf-8 -> decode utf-8 -> (encode w. default) -> (decode cp1252) -> turns Hällo into Hällo, etc.</p>
<p>...</p>
<p>(7) File is cp1252 -> decode utf-8 -> raises UnicodeDecodeError, and leads to an error</p>
<p>...</p>
<p>(11) File is cp1252 -> decode cp1252 -> (encode w. default) -> (decode utf-8) -> ok here, but not in other environments</p>
<p>...</p>
<p>Interesting (and funny to look at) is in particular case (6) turning Hällo,â¬uro, Ãl into HÃÆÃ¤llo,âââ¬Å¡Ã¬uro,̢̮â¬âl</p>
<p>It corresponds to a sequence:</p>
<pre><code>>>> x1 = 'Hällo,â¬uro, Ãl'
>>> x1
'Hällo,â¬uro, Ãl'
>>> x2 = x1.encode()
>>> x2
b'H\xc3\xa4llo,\xe2\x82\xacuro, \xc3\x96l'
>>> x3 = x2.decode('cp1252')
>>> x3
'Hällo,ââ¬uro, Ãâl'
>>> x4 = x3.encode()
>>> x4
b'H\xc3\x83\xc2\xa4llo,\xc3\xa2\xe2\x80\x9a\xc2\xacuro, \xc3\x83\xe2\x80\x93l'
>>> x5 = x4.decode('cp1252')
>>> x5
'HÃÆÃ¤llo,âââ¬Å¡Ã¬uro, ̢̮â¬âl'
</code></pre>
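<p>In practice, specify the encoding in only one place; a minimal sketch of the two non-conflicting usages (assuming the file really is UTF-8):</p>
<pre><code># either let pandas do the decoding ...
df = pandas.read_csv('workfile', encoding='utf-8')

# ... or decode via open() and pass the text handle without a second encoding
with open('workfile', 'r', encoding='utf-8') as f:
    df = pandas.read_csv(f)
</code></pre>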
| 0
|
2016-09-06T22:44:15Z
|
[
"python",
"pandas",
"character-encoding"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,358,118
|
<p>Yes, you can, but only if you convert your <code>range</code> lists to immutable <code>tuple</code>s, so they are hashable and accepted as keys of your dictionary:</p>
<pre><code>stealth_check = {
tuple(range(1, 6)) : 'You are about as stealthy as thunderstorm.',
</code></pre>
<p>EDIT: actually this works as-is in Python 3, where <code>range</code> is an immutable, hashable sequence type (not a <code>list</code>), as L3viathan stated.</p>
<p>But you cannot access the dictionary with a single integer as key, though. Your last line won't work.</p>
<p>I took some time to create a solution which works whatever the values may be (picking a random entry in the dictionary only works as long as the entries are not "weighted" by bigger ranges).</p>
<p>It calls <code>bisect</code> on the sorted keys to find the insertion point, hacks it a bit, and finds the best value in the dictionary, with <code>O(log(N))</code> complexity, which means it can handle a really big list (maybe a little too much here :) but the dictionary is also too much in that case)</p>
<pre><code>from random import randint
import bisect
stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four thresholds.
stealth_check = {
1 : 'You are about as stealthy as thunderstorm.',
6 : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
11 : 'You are quiet, and deliberate, but still you smell.',
16 : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
sorted_keys = sorted(stealth_check.keys())
insertion_point = bisect.bisect_left(sorted_keys,stealth_roll)
# adjust, as bisect returns not exactly what we want
if insertion_point==len(sorted_keys) or sorted_keys[insertion_point]!=stealth_roll:
insertion_point-=1
print(insertion_point,stealth_roll,stealth_check[sorted_keys[insertion_point]])
</code></pre>
| 3
|
2016-09-06T21:28:15Z
|
[
"python",
"dictionary",
"range"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,358,140
|
<p>It is possible on Python 3, and on Python 2 if you use <code>xrange</code> instead of <code>range</code>:</p>
<pre><code>stealth_check = {
xrange(1, 6) : 'You are about as stealthy as thunderstorm.', #...
}
</code></pre>
<p>However, the way you're trying to use it it won't work. You could iterate over the keys, like this:</p>
<pre><code>for key in stealth_check:
if stealth_roll in key:
print stealth_check[key]
break
</code></pre>
<p>Performance of this isn't nice (O(n)) but if it's a small dictionary like you showed it's okay. If you actually want to do that, I'd subclass <code>dict</code> to work like that automatically:</p>
<pre><code>class RangeDict(dict):
def __getitem__(self, item):
if type(item) != range: # or xrange in Python 2
for key in self:
if item in key:
return self[key]
else:
return super().__getitem__(item)
stealth_check = RangeDict({range(1,6): 'thunderstorm', range(6,11): 'tip-toe'})
stealth_roll = 8
print(stealth_check[stealth_roll]) # prints 'tip-toe'
</code></pre>
| 5
|
2016-09-06T21:29:43Z
|
[
"python",
"dictionary",
"range"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,358,170
|
<pre><code>stealth_check = {
0 : 'You are about as stealthy as thunderstorm.',
1 : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
2 : 'You are quiet, and deliberate, but still you smell.',
3 : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
# randint is inclusive at both ends, so cap the upper bound at len - 1
stealth_roll = randint(0, len(stealth_check) - 1)
print stealth_check[stealth_roll]
</code></pre>
| 1
|
2016-09-06T21:33:19Z
|
[
"python",
"dictionary",
"range"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,358,345
|
<p>You can't build a dictionary directly from a range, unless you want the range itself to be the key. I don't think you want that. To get individual entries for each possibility within the range:</p>
<pre><code>stealth_check = dict(
[(n, 'You are about as stealthy as thunderstorm.')
for n in range(1, 6)] +
[(n, 'You tip-toe through the crowd of walkers, while loudly calling them names.')
for n in range(6, 11)] +
[(n, 'You are quiet, and deliberate, but still you smell.')
for n in range(11, 16)] +
[(n, 'You move like a ninja, but attracting a handful of walkers was inevitable.')
for n in range(16, 20)]
)
</code></pre>
<p>When you have a <code>dict</code> indexed by a small range of integers, you really should consider using a <code>list</code> instead:</p>
<pre><code>stealth_check = [None]
stealth_check[1:6] = (6 - 1) * ['You are about as stealthy as thunderstorm.']
stealth_check[6:11] = (11 - 6) * ['You tip-toe through the crowd of walkers, while loudly calling them names.']
stealth_check[11:16] = (16 - 11) * ['You are quiet, and deliberate, but still you smell.']
stealth_check[16:20] = (20 - 16) * ['You move like a ninja, but attracting a handful of walkers was inevitable.']
</code></pre>
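<p>If you prefer a <code>dict</code>, an equivalent and slightly more compact construction is a dict comprehension that expands each range into individual keys (a sketch assuming Python 3):</p>
<pre><code>messages = {
    range(1, 6): 'You are about as stealthy as thunderstorm.',
    range(6, 11): 'You tip-toe through the crowd of walkers, while loudly calling them names.',
    range(11, 16): 'You are quiet, and deliberate, but still you smell.',
    range(16, 20): 'You move like a ninja, but attracting a handful of walkers was inevitable.',
}
stealth_check = {n: text for r, text in messages.items() for n in r}
</code></pre>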
| 2
|
2016-09-06T21:48:15Z
|
[
"python",
"dictionary",
"range"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,358,354
|
<p>This approach will accomplish what you want, and the last line will work (assumes Py3 behavior of <code>range</code> and <code>print</code>):</p>
<pre><code>def extend_dict(d, value, x):
for a in x:
d[a] = value
stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {}
extend_dict(stealth_check,'You are about as stealthy as thunderstorm.',range(1,6))
extend_dict(stealth_check,'You tip-toe through the crowd of walkers, while loudly calling them names.',range(6,11))
extend_dict(stealth_check,'You are quiet, and deliberate, but still you smell.',range(11,16))
extend_dict(stealth_check,'You move like a ninja, but attracting a handful of walkers was inevitable.',range(16,20))
print(stealth_check[stealth_roll])
</code></pre>
<p>BTW, if you're simulating a 20-sided die you need the final index to be 21, not 20 (since 20 is not in range(1, 20)).</p>
| 2
|
2016-09-06T21:48:52Z
|
[
"python",
"dictionary",
"range"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,430,659
|
<p>The following is probably maximally efficient in mapping a randint to one of a set of fixed category strings with fixed probability.</p>
<pre><code>from random import randint
stealth_map = (None, 0,0,0,0,0,0,1,1,1,1,1,2,2,2,2,2,3,3,3,3)
stealth_type = (
'You are about as stealthy as thunderstorm.',
'You tip-toe through the crowd of walkers, while loudly calling them names.',
'You are quiet, and deliberate, but still you smell.',
'You move like a ninja, but attracting a handful of walkers was inevitable.',
)
for i in range(10):
stealth_roll = randint(1, 20)
print(stealth_type[stealth_map[stealth_roll]])
</code></pre>
| 0
|
2016-09-10T21:19:21Z
|
[
"python",
"dictionary",
"range"
] |
Range as dictionary key in Python
| 39,358,092
|
<p>So, I had an idea that I could use a range of numbers as a key for a single value in a dictionary.</p>
<p>I wrote the code below, but I cannot get it to work. Is it even possible?</p>
<pre><code> stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
## not working.
stealth_check = {
range(1, 6) : 'You are about as stealthy as thunderstorm.',
range(6, 11) : 'You tip-toe through the crowd of walkers, while loudly calling them names.',
range(11, 16) : 'You are quiet, and deliberate, but still you smell.',
range(16, 20) : 'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[stealth_roll]
</code></pre>
| 8
|
2016-09-06T21:25:26Z
| 39,781,091
|
<p>Thank you everyone for your responses. I kept hacking away, and I came up with a solution that will suit my purposes quite well. It is most similar to the suggestions of @PaulCornelius.</p>
<pre><code>stealth_roll = randint(1, 20)
# select from a dictionary of 4 responses using one of four ranges.
# only one resolution can be True. # True can be a key value.
def check(i, a, b): # check if i is in the range. # return True or False
if i in range(a, b):
return True
else:
return False
### can assign returned object as dictionary key! # assign key as True or False.
stealth_check = {
check(stealth_roll, 1, 6) :
'You are about as stealthy as a thunderstorm.',
check(stealth_roll, 6, 11) :
'You tip-toe through the crowd of walkers, while loudly calling them names.',
check(stealth_roll, 11, 16) :
'You are quiet, and deliberate, but still you smell.',
check(stealth_roll, 15, 21) :
'You move like a ninja, but attracting a handful of walkers was inevitable.'
}
print stealth_check[True] # print the dictionary value that is True.
</code></pre>
| 0
|
2016-09-29T22:40:05Z
|
[
"python",
"dictionary",
"range"
] |
SQLAlchemy poor performance on iterating
| 39,358,120
|
<p>The following block of code queries a table with ~2000 rows. The loop takes 20s to execute! From my limited knowledge, I don't think I'm doing 2000 queries, just the one, but perhaps I'm not understanding the '.' operator and what it's doing behind the scenes. How can I fix this loop to run more quickly? Is there a way to adjust the top level query s.t. the 2nd for loop is not making a total of 3000 queries (if that's in fact what's going on)?</p>
<p>Here is a test block of code I made to verify that it was in fact this inner loop that was causing the massive time consumption.</p>
<pre><code>block = []
cnt = 0
for blah in dbsession.query(mytable):
tic = time.time()
for a in blah.component:
cnt += 1
block.append(time.time()-tic)
print "block: %s seconds cnt: %s" % (sum(block), cnt)
# block: 20.78191 seconds cnt: 3021
</code></pre>
<p>Using the suggestion from the selected best-answer, the for loop became:</p>
<pre><code>for blah in dbsession.query(mytable).options(joinedload(mytable.componentA)).options(joinedload(mytable.componentB)).options(joinedload(mytable.componentC)):
</code></pre>
<p>which resulted in the inner loops of each component going from 20-25s each, to 0.25 0.59 and 0.11s respectively. The query itself now takes about 18s ... so my total saved time is about 55s.</p>
| 0
|
2016-09-06T21:28:25Z
| 39,358,344
|
<p>Each time you access <code>.component</code> another SQL query is emitted.</p>
<p>You can read more at <a href="http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html" rel="nofollow">Relationship Loading Techniques</a>, but to load it all at once you can change your query to the following:</p>
<pre><code>from sqlalchemy.orm import joinedload
dbsession.query(mytable).options(joinedload('component'))
</code></pre>
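<p><code>options()</code> also accepts several loader options at once; a hedged sketch using the attribute names from the question's edit (componentA/B/C):</p>
<pre><code>from sqlalchemy.orm import joinedload

query = dbsession.query(mytable).options(
    joinedload('componentA'),
    joinedload('componentB'),
    joinedload('componentC'),
)
</code></pre>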
| 2
|
2016-09-06T21:48:12Z
|
[
"python",
"python-2.7",
"sqlalchemy"
] |
Django reverse is giving me some trouble
| 39,358,180
|
<p>My problem is that when reverse is called it throws the following exception</p>
<p>Reverse for '/documentation/' with arguments '(3,)' and keyword arguments '{}' not found. 0 pattern(s) tried: []</p>
<p>Here is my urls.py</p>
<pre><code>url(r'^documentation/$', views.view1),
url(r'^documentation/([0-9])/$', views.documents, name='documentation'),
</code></pre>
<p>here my views.py</p>
<pre><code>def view1(request):
if request.method == 'POST':
profe = request.POST.get('value')
a = value.encode('ascii', 'ignore')
b = int(a)
return HttpResponseRedirect(reverse('documentation', args=(b,)))
else:
return render(request, "documentation.html", info)
def documents(request,valor):
...something...
return render(request, "anotherdoc.html", ..something..)
</code></pre>
<p>and here is my template (documentation.html)</p>
<pre><code>...
<form method="POST" action="">
{% csrf_token %}
{% for p in ps %}
<tr>
<td><button id="boton1" button type="submit" name = "valor" value ="{{p.idp}}" class="btn btn-success btn-sm">{{p.nombre}} {{p.apellido}}</button>
</td>
<td>algo</a> </td>
</tr>
{%endfor%}
</form>
</code></pre>
<p>I want reverse() to give me 127.0.0.1:8000/documentation/1, but I'm having a hard time getting that to work.</p>
<p>Thanks</p>
<p>(Edit with traceback)</p>
<pre><code>Traceback:
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\Pa\De\Nueva carpeta\ag\Ag\scheduler\views.py" in documentation
82. url = reverse('documentation', args=(3,))
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in reverse
600. return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in _reverse_with_prefix
508. (lookup_view_s, args, kwargs, len(patterns), patterns))
Exception Type: NoReverseMatch at /documentation/
Exception Value: Reverse for 'documentation' with arguments '(3,)' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
| -2
|
2016-09-06T21:33:58Z
| 39,358,799
|
<p>Reverse uses the URL name and you declared the name as <code>name='documentation'</code> but we can see you are using <code>'/documentation/'</code> instead:</p>
<pre><code>File "C:\Users\Pa\Desktop\Nueva carpeta\ag\Ag\scheduler\views.py" in documentation
82. url = reverse('/documentation/', args=(3,))
</code></pre>
<p>Change that line to <code>url = reverse('documentation', args=(3,))</code></p>
<p>If your URLs are namespaced (you would know this if another urls.py pulls these URLs in via include() with a namespace), you should use <code>'[namespace]:documentation'</code>.</p>
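<p>A hedged sketch of the namespaced form (<code>'scheduler'</code> below is only a placeholder for whatever namespace your project actually declares):</p>
<pre><code>url = reverse('scheduler:documentation', args=(3,))
</code></pre>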
| -1
|
2016-09-06T22:37:30Z
|
[
"python",
"django"
] |
Ipython notebook matplotlib gui backends dlopen import error on osx
| 39,358,184
|
<p>On OSX 10.11.6 using Anaconda python 2.7 when switching to (any) gui matplotlib backend from <code>%matplotlib inline</code> in jupyter ipython notebook generates:</p>
<blockquote>
<p>ImportError:
dlopen(/Users/.../anaconda2/lib/python2.7/site-packages/PyQt4/QtGui.so,
2): Library not loaded: @rpath/libpng16.16.dylib Referenced from:
/Users/.../anaconda2/lib/libQtGui.4.8.7.dylib Reason:
Incompatible library version: libQtGui.4.dylib requires version 39.0.0
or later, but libpng16.16.dylib provides version 34.0.0</p>
</blockquote>
<p>Any suggestions as to which of the mentioned libraries I am supposed to install/update, or both? And how do I find these for OS X?</p>
| 1
|
2016-09-06T21:34:38Z
| 39,565,789
|
<p>Problem was solved after I did this:</p>
<pre><code>conda update libpng
</code></pre>
<p>Which resulted in:</p>
<pre><code>The following packages will be UPDATED:
libpng: 1.6.17-0 --> 1.6.22-0
</code></pre>
| 0
|
2016-09-19T05:12:58Z
|
[
"python",
"matplotlib",
"pyqt4",
"anaconda",
"jupyter-notebook"
] |
How to install Bloomberg API Library for Python 2.7 on Mac OS X
| 39,358,397
|
<p>I'm trying to setup my Mac OS X system to use the <code>pdblp</code> Python library which requires me to first install the <a href="https://www.bloomberglabs.com/api/libraries/" rel="nofollow">Bloomberg Open API libary for Python</a>. After cloning the git repo and running <code>python setup.py install</code>, I get </p>
<pre><code>File "setup.py", line 20, in <module>
raise Exception("BLPAPI_ROOT environment variable isn't defined")
Exception: BLPAPI_ROOT environment variable isn't defined
</code></pre>
<p>How should I proceed?</p>
| 0
|
2016-09-06T21:52:32Z
| 40,048,982
|
<p>You also need to install the <a href="https://bloomberg.bintray.com/BLPAPI-Experimental-Generic/blpapi_cpp_3.8.1.1-darwin.tar.gz" rel="nofollow">C/C++ libraries</a> and then set BLPAPI_ROOT to the location of the <code>libblpapi3_32.so</code> or <code>libblpapi3_64.so</code> files. For example:</p>
<pre><code>cd /some/directory
wget https://bloomberg.bintray.com/BLPAPI-Experimental-Generic/blpapi_cpp_3.8.1.1-darwin.tar.gz
tar zxvf blpapi_cpp_3.8.1.1-darwin.tar.gz
export BLPAPI_ROOT=/some/directory/blpapi_cpp_3.8.1.1/Darwin
</code></pre>
<p>Then you can proceed with installing the python library.</p>
| 0
|
2016-10-14T17:36:29Z
|
[
"python",
"osx",
"python-2.7",
"bloomberg"
] |
Using pjsip in daemon mode with python twisted
| 39,358,407
|
<p>I am trying to run simple pjsip application in daemon mode. I have combined this library with python twisted. Script works fine when I run it in shell & can make call. But when I use it with twisted's Application framework, I get following error.</p>
<pre><code>Object: {Account <sip:192.168.0.200:5060>}, operation=make_call(), error=Unknown error from audio driver (PJMEDIA_EAUD_SYSERR)
</code></pre>
<p>Most of example applications from documents do not run in daemon mode - <a href="https://trac.pjsip.org/repos/wiki/Python_SIP_Tutorial" rel="nofollow">pjsip examples</a>.</p>
<p>Looks like even pjsua doesn't run in background - <a href="http://www.pjsip.org/pjsua.htm" rel="nofollow">pjsua</a></p>
<p>I am wondering whether it can work in the background at all. I don't understand exactly what the "Unknown error" refers to. Is there a better way to debug this?</p>
<p>Architecture of my application is as follows -</p>
<ol>
<li>Start pjsip lib, initiate pjsip lib, create transport & create userless account.</li>
<li>Create UDP protocol which listens for incoming requests.</li>
<li>Once app gets request, it makes calls to particular sip uri.</li>
</ol>
<p>Everything goes well when I run the app with <a href="https://twistedmatrix.com/documents/16.3.1/api/twisted.internet.interfaces.IReactorUDP.listenUDP.html" rel="nofollow">listenUDP</a> & <code>reactor.run()</code>, but when I try the typical Twisted application setup with <a href="http://twistedmatrix.com/documents/12.2.0/core/howto/application.html#auto4" rel="nofollow">twistd</a> (either <code>listenUDP</code> or <code>UDPServer</code>), the above error pops up.</p>
<p>Am I doing anything wrong ? Any info will be welcomed. </p>
<p>thank you.</p>
| 0
|
2016-09-06T21:52:59Z
| 39,571,824
|
<p>This issue was resolved after I configured the sound devices.</p>
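<p>For reference, a minimal sketch of what configuring the sound device can look like with the classic <code>pjsua</code> Python binding when no audio hardware should be opened in daemon mode (an illustration of the idea, not necessarily your exact setup):</p>
<pre><code>import pjsua as pj

lib = pj.Lib()
lib.init()
lib.create_transport(pj.TransportType.UDP)
lib.set_null_snd_dev()  # use the null sound device so no audio driver is touched
lib.start()
</code></pre>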
| 0
|
2016-09-19T11:13:47Z
|
[
"python",
"twisted",
"sip",
"daemon",
"pjsip"
] |
Running unittest discover ignoring specific directory
| 39,358,442
|
<p>I'm looking for a way of running <code>python -m unittest discover</code>, which will discover tests in, say, directories A, B and C. However, directories A, B and C have directories named <code>dependencies</code> inside each of them, in which there are also some tests which, however, I don't want to run.</p>
<p>Is there a way to run my tests satisfying these constraints without having to create a script for this?</p>
| 1
|
2016-09-06T21:56:28Z
| 39,359,096
|
<p>It would seem that <code>python -m unittest</code> descends into module directories but not into other directories.</p>
<p>I quickly tried the following structure:</p>
<pre><code>temp
+ a
- test_1.py
+ dependencies
- test_a.py
</code></pre>
<p>With the result </p>
<pre><code>>python -m unittest discover -s temp\a
test_1
.
----------------------------------------------------------------------
Ran 1 test in 0.002s
OK
</code></pre>
<p>However, if the directory is a module directory (containing a file <code>__init__.py</code>) the situation is different.</p>
<pre><code>temp
+ a
- __init__.py
- test_1.py
+ dependencies
- __init__.py
- test_a.py
</code></pre>
<p>Here the result was </p>
<pre><code>>python -m unittest discover -s temp\a
test_a
.test_1
.
----------------------------------------------------------------------
Ran 2 tests in 0.009s
OK
</code></pre>
<p>The usefulness of this answer for you now depends on whether it is acceptable for your folder <code>dependencies</code> not to be a module directory.</p>
<p>EDIT: After seeing your comment</p>
<p>Would using <code>pytest</code> be an option? This test runner has many command arguments, one specifically to exclude tests.</p>
<p>See <a href="http://doc.pytest.org/en/latest/example/pythoncollection.html" rel="nofollow">Changing standard (Python) test discovery</a></p>
<p>From their site</p>
<blockquote>
<p>Ignore paths during test collection</p>
<p>You can easily ignore certain test directories and modules during collection by passing the <code>--ignore=path</code> option on the cli. pytest allows multiple <code>--ignore</code> options</p>
</blockquote>
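<p>For the layout in the question, that could look like the following invocation (paths are illustrative):</p>
<pre><code>pytest A B C --ignore=A/dependencies --ignore=B/dependencies --ignore=C/dependencies
</code></pre>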
| -1
|
2016-09-06T23:14:41Z
|
[
"python",
"python-2.7",
"python-unittest"
] |
Why Pandas Series resample function generated an unexpected start_time after resampling?
| 39,358,594
|
<p>I have time series data in pandas, and I want to count the number of records in different bins by using the resample function of pandas with a specified frequency. </p>
<p>The pandas Series looks like the output below if you use:</p>
<pre><code>test_data.head(10)
</code></pre>
<p>You will get the results (only show top 10 data as it is a large Series):</p>
<pre><code> Created
2015-11-29 23:28:50 KBH889
2015-11-29 23:30:43 KBH89U
2015-11-29 23:34:06 KBH88K
2015-11-29 23:38:08 KBH8CC
2015-11-29 23:38:36 KBH83T
2015-11-29 23:40:52 KBH8CF
2015-11-29 23:46:27 KBH8F1
2015-11-29 23:50:01 KBH8DQ
2015-11-29 23:54:29 KBH8FV
2015-11-29 23:58:01 KBH8C6
Name: Order_Number, dtype: object
</code></pre>
<p>Then I use resample function with frequency "4541S" (It has to be 4541s exactly, cannot be changed!)</p>
<pre><code>test_data.Order_Number.resample("4541S").count()
</code></pre>
<p>Results:</p>
<pre><code> Created
2015-11-29 22:42:18 9
2015-11-29 23:57:59 15
2015-11-30 01:13:40 6
2015-11-30 02:29:21 3
2015-11-30 03:45:02 2
Freq: 4541S, Name: Order_Number, dtype: int64
</code></pre>
<p>The resampled result starts with the datetime index <code>2015-11-29 22:42:18</code>, which is an unwanted start time. I want the start to be the minimum value of the series, in which case the resampling should start from <code>2015-11-29 23:28:50</code>. Does anybody know how to do this? I tried to use the <code>base</code> parameter of the resample function to adjust it, but it seems difficult. </p>
<p>Here is the <a href="https://www.dropbox.com/s/ug1yllhajcp8eb5/test.csv?dl=0" rel="nofollow">csv file</a> for your testing. Import it and resample with count().</p>
| 0
|
2016-09-06T22:10:42Z
| 39,400,829
|
<p>I think you are correct in that this should be achieved with the <code>base</code> argument. Unfortunately, I also cannot find a way to easily anchor the resampling with the original first datetimeindex (in your case, '2015-11-29 23:28:50') instead of a calculated one ('2015-11-29 22:42:18').</p>
<p>As a workaround, which might be helpful in your case but not entirely useful for other cases, here's what you can do: get the 'original' datetimeindex from the original data and the 'calculated' first datetimeindex from the resampled data:</p>
<pre><code>original = test_data.index[0]
calculated = test_data.Order_Number.resample('4541S',how='count',base=0).index[0]
</code></pre>
<p>(notice I am using pandas 0.17 resample API - you might need to apply <code>.count()</code> instead of using the 'how' argument)</p>
<p>Then get the difference in seconds between that 'calculated' first datetimeindex and the original first datetimeindex:</p>
<pre><code>base_seconds = int((original - calculated).seconds)
</code></pre>
<p>And finally, use <code>base_seconds</code> as the value for the <code>base</code> argument:</p>
<pre><code>test_data.Order_Number.resample('4541S',how='count',base=base_seconds)
</code></pre>
<p>Which yields</p>
<pre><code>Created
2015-11-29 23:28:50 19
2015-11-30 00:44:31 10
2015-11-30 02:00:12 3
2015-11-30 03:15:53 3
Freq: 4541S, Name: Order_Number, dtype: int64
</code></pre>
<p>I hope this helps.</p>
| 0
|
2016-09-08T22:00:30Z
|
[
"python",
"pandas",
"time-series"
] |
Python Pandas df.ix not performing as expected
| 39,358,602
|
<p>Here is my function:</p>
<pre><code> def clean_zipcodes(df):
df.ix[df['workCountryCode'].str.contains('USA') & \
df['workZipcode'].astype(str).str.len() > 5, 'workZipcode'] = \
df['workZipcode'].astype(int).floordiv(10000)
df.ix[df['contractCountryCode'].str.contains('USA') & \
df['contractZipcode'].astype(str).str.len() > 5, 'contractZipcode'] = \
df['contractZipcode'].astype(int).floordiv(10000)
return df
</code></pre>
<p>Here is my test function of what I expect:</p>
<pre><code>def test_clean_zipcodes():
testDf = pandas.DataFrame({'unique_transaction_id' : ['1', '1', '1'],
'workZipcode' : [838431000, 991631000, 99163],
'contractZipcode' : [838431000, 991631000, 99163],
'workCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF'],
'contractCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF']})
resultDf = pandas.DataFrame({'unique_transaction_id' : ['1', '1', '1'],
'workZipcode' : [83843, 991631000, 99163],
'contractZipcode' : [83843, 991631000, 99163],
'workCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF'],
'contractCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF']})
assert resultDf.equals(clean_zipcodes(testDf))
</code></pre>
<p>Aside from indentations not being proper (didn't convert for SO formatting), the df.ix is not performing as expected. It does not perform any transformations on the contractZipcode or the workZipcode columns. The first row should change to 83843 as noted in the resultDf. </p>
<p>Thanks in advance!</p>
| 0
|
2016-09-06T22:11:08Z
| 39,360,547
|
<pre><code>In [2]: import pandas as pd
In [3]: testDf = pd.DataFrame({'unique_transaction_id' : ['1', '1', '1'],
...: 'workZipcode' : [838431000, 991631000, 99163],
...: 'contractZipcode' : [838431000, 991631000, 99163],
...: 'workCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF'],
...: 'contractCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF']}
...: )
...:
...: resultDf = pd.DataFrame({'unique_transaction_id' : ['1', '1', '1'],
...: 'workZipcode' : [83843, 991631000, 99163],
...: 'contractZipcode' : [83843, 991631000, 99163],
...: 'workCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF'],
...: 'contractCountryCode' : ['USA: STUFF', 'NONE: STUFF', 'USA: STUFF']})
...:
...:
...:
</code></pre>
<p>Note that an empty slice is returned when you try to index like this:</p>
<pre><code>In [4]: testDf.ix[testDf['workCountryCode'].str.contains('USA') &
testDf['workZipcode'].astype(str).str.len() > 5,
'workZipcode']
Out[4]: Series([], Name: workZipcode, dtype: int64)
</code></pre>
<p>If you add parentheses around your different filters:</p>
<pre><code>In [5]: testDf.ix[(testDf['workCountryCode'].str.contains('USA'))
& (testDf['workZipcode'].astype(str).str.len() > 5),
'workZipcode']
Out[5]:
0 838431000
Name: workZipcode, dtype: int64
</code></pre>
<p>you get back what you want. Doesn't matter if you use <code>loc</code> either:</p>
<pre><code>In [6]: testDf.loc[testDf['workCountryCode'].str.contains('USA') &
testDf['workZipcode'].astype(str).str.len() > 5,
'workZipcode']
Out[6]: Series([], Name: workZipcode, dtype: int64)
</code></pre>
<p>So here's the cleaned-up function:
I added a few little lambdas for readability's sake.</p>
<pre><code>In [7]: def clean_zipcodes_loc(df):
...: strlen = lambda x: x.astype(str).str.len()
...: floordiv = lambda x: x.astype(int).floordiv(10000)
...:
...: df.loc[((strlen(df.workZipcode)) > 5) &
...: df.workCountryCode.str.contains("USA"),
...: 'workZipcode'] = floordiv(df.workZipcode)
...:
...: df.loc[((strlen(df.contractZipcode)) > 5) &
...: df.contractCountryCode.str.contains("USA"),
...: 'contractZipcode'] = floordiv(df.contractZipcode)
...:
...: return df
...:
In [8]: clean_zipcodes_loc(testDf) == resultDf
Out[8]:
contractCountryCode contractZipcode unique_transaction_id workCountryCode \
0 True True True True
1 True True True True
2 True True True True
workZipcode
0 True
1 True
2 True
</code></pre>
| 1
|
2016-09-07T02:53:49Z
|
[
"python",
"pandas",
"testing"
] |
expected speedup Numba/CUDA versus Numpy
| 39,358,635
|
<p>I'm new to Numba and CUDA and have done measurements to compare cuda jitted functions to Numpy on a few basic examples. For example,</p>
<pre><code>@cuda.jit("void(float32[:])")
def gpu_computation(array):
pos = cuda.grid(1)
if pos < array.size:
array[pos] = array[pos] ** 2.6
</code></pre>
<p>compared to single threaded</p>
<pre><code>def cpu_computation(array):
array = array ** 2.6
return array
</code></pre>
<p>with </p>
<pre><code>n=1000000
array = np.linspace(0, 100, num=n, dtype=np.float32)
threads per block = 32
blocks per grid = 31250
</code></pre>
<p>I get about a 3x speedup with the GPU. This is also what I get when performing matrix multiplication (both the basic and the smart versions found in the Numba documentation). Optimising the copying to/from the device did not help.</p>
<p>Is this speedup expected? I expected an order of magnitude more. My machine: Mac OSX with GeForcce GTX 775M 2048 MB and CUDA 7.5.30.</p>
| -2
|
2016-09-06T22:14:53Z
| 39,359,102
|
<p>Double precision arithmetic throughput of your GTX 775M is <a href="http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#arithmetic-instructions" rel="nofollow">1/24th of the single precision throughput</a>.
As Python does not have a single precision type, you need to use <a href="http://numba.pydata.org/numba-doc/dev/reference/types.html" rel="nofollow">Numba types</a> to explicitly mark your data as single precision.</p>
<p>Unfortunately there is no way to speed up double precision calculations other than using a different GPU (Tesla lineup or the original, now out-of-production GTX Titan).</p>
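<p>To keep the computation in single precision in the first place, one hedged adjustment is to make the exponent a <code>float32</code> as well, since a bare Python literal like <code>2.6</code> is otherwise treated as a double inside the kernel. This sketch assumes the <code>numba.float32(...)</code> cast is accepted inside a CUDA kernel:</p>
<pre><code>from numba import cuda, float32

@cuda.jit("void(float32[:])")
def gpu_computation(array):
    pos = cuda.grid(1)
    if pos < array.size:
        # cast the constant so the power is evaluated in single precision
        array[pos] = array[pos] ** float32(2.6)
</code></pre>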
| 2
|
2016-09-06T23:15:10Z
|
[
"python",
"cuda",
"numba"
] |
duplicate events action in django admin
| 39,358,644
|
<p>I have a duplicate-records function in my Django admin.py, and in some way it works, but the weird thing is that I had to duplicate this function both outside and inside the ModelAdmin... </p>
<pre><code>def duplicate_event(ModelAdmin, request, queryset):
for object in queryset:
object.id = None
object.save()
duplicate_event.short_description = "Duplicate selected record"
class ProductAdmin(ImageCroppingMixin, admin.ModelAdmin):
model = Product
inlines = [CompositionAssociactionAdmin]
list_display = ("image_img", "code", "name", "price", "discount", "price_offer", "prompt_delivery", "delivery", "promo", "active")
list_editable = ('active',)
fields = (
("name", "code"),
("price", "discount", "price_offer"),
("color", "material"),
("scarpemisura", "cintureLunghezza"),
"size",
("width", "lenght", "depth", "height"),
"volume",
"descrizione", "album",
"image", "slider", "thumb", "thumbdue", "croplibero",
("prompt_delivery", "delivery"),
("slide", "promo"),
"tags", "active", "pub_date"
)
def duplicate_event(ModelAdmin, request, queryset):
for object in queryset:
object.id = None
object.save()
duplicate_event.short_description = "Duplica Record Selezionati"
actions = ['duplicate_event']
</code></pre>
<p>Before that, I had simply tried this:</p>
<pre><code>def duplicate_event(ModelAdmin, request, queryset):
for object in queryset:
object.id = None
object.save()
duplicate_event.short_description = "Duplicate selected record"
class ProductAdmin(ImageCroppingMixin, admin.ModelAdmin):
model = Product
inlines = [CompositionAssociactionAdmin]
list_display = ("image_img", "code", "name", "price", "discount", "price_offer", "prompt_delivery", "delivery", "promo", "active")
list_editable = ('active',)
fields = (
("name", "code"),
("price", "discount", "price_offer"),
("color", "material"),
("scarpemisura", "cintureLunghezza"),
"size",
("width", "lenght", "depth", "height"),
"volume",
"descrizione", "album",
"image", "slider", "thumb", "thumbdue", "croplibero",
("prompt_delivery", "delivery"),
("slide", "promo"),
"tags", "active", "pub_date"
)
actions = ['duplicate_event']
</code></pre>
<p>but I get no action.</p>
<p>So I've tried in this other way:</p>
<pre><code>class ProductAdmin(ImageCroppingMixin, admin.ModelAdmin):
model = Product
inlines = [CompositionAssociactionAdmin]
list_display = ("image_img", "code", "name", "price", "discount", "price_offer", "prompt_delivery", "delivery", "promo", "active")
list_editable = ('active',)
fields = (
("name", "code"),
("price", "discount", "price_offer"),
("color", "material"),
("scarpemisura", "cintureLunghezza"),
"size",
("width", "lenght", "depth", "height"),
"volume",
"descrizione", "album",
"image", "slider", "thumb", "thumbdue", "croplibero",
("prompt_delivery", "delivery"),
("slide", "promo"),
"tags", "active", "pub_date"
)
def duplicate_event(ModelAdmin, request, queryset):
for object in queryset:
object.id = None
object.save()
duplicate_event.short_description = "Duplica Record Selezionati"
actions = ['duplicate_event']
</code></pre>
<p>and I get this error:
<strong>global name 'duplicate_event' is not defined</strong></p>
| 0
|
2016-09-06T22:15:49Z
| 39,358,763
|
<p>Your indentation level is wrong, it should be:</p>
<pre><code>class ProductAdmin(ImageCroppingMixin, admin.ModelAdmin):
def duplicate_event(ModelAdmin, request, queryset):
for object in queryset:
object.id = None
object.save()
duplicate_event.short_description = "Duplica Record Selezionati"
</code></pre>
<p>The last line, which assigns <code>duplicate_event.short_description</code>, should be part of the class body, not part of the method body. The global function is unnecessary. </p>
| 0
|
2016-09-06T22:34:02Z
|
[
"python",
"django",
"backend"
] |
Pandas Insert a row above the Index and the Series data in a Dataframe
| 39,358,661
|
<p>I've run several trials, but nothing seems to work so far.
I have tried <code>df.insert(0, "XYZ", 555)</code>, which seemed to work until it did not, for reasons I am not certain of.</p>
<p>I understand that the issue is that the Index is not considered a Series, and so df.iloc[0] does not allow you to insert data directly above the Index column.</p>
<p>I've also tried manually adding a first index with the value "XYZ" to the list of indices in the dataframe definition, but nothing has worked.</p>
<p>Thanks for your help</p>
<p>A B C D are my columns. range(5) is my index. I am trying to obtain the layout below, with an arbitrary row starting with "type" followed by a list of strings. Thanks!</p>
<pre><code> A B C D
type 'string1' 'string2' 'string3' 'string4'
0
1
2
3
4
</code></pre>
<p>If you use Timestamps as the index, adding a custom single row with its own custom index this way will throw an error:
ValueError: Cannot add integral value to Timestamp without offset. I am guessing it's due to the difference in the operand types, e.g. if I subtract an integer from a Timestamp. How could I fix this in a generic manner? Thanks!</p>
| 2
|
2016-09-06T22:17:43Z
| 39,358,942
|
<p>if you want to insert a row <strong>before</strong> the first row, you can do it this way:</p>
<p>data:</p>
<pre><code>In [57]: df
Out[57]:
id var
0 a 1
1 a 2
2 a 3
3 b 5
4 b 9
</code></pre>
<p>adding one row:</p>
<pre><code>In [58]: df.loc[df.index.min() - 1] = ['z', -1]
In [59]: df
Out[59]:
id var
0 a 1
1 a 2
2 a 3
3 b 5
4 b 9
-1 z -1
</code></pre>
<p>sort index:</p>
<pre><code>In [60]: df = df.sort_index()
In [61]: df
Out[61]:
id var
-1 z -1
0 a 1
1 a 2
2 a 3
3 b 5
4 b 9
</code></pre>
<p>optionally reset your index :</p>
<pre><code>In [62]: df = df.reset_index(drop=True)
In [63]: df
Out[63]:
id var
0 z -1
1 a 1
2 a 2
3 a 3
4 b 5
5 b 9
</code></pre>
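<p>For the DatetimeIndex case mentioned in the question, subtracting a plain integer from a Timestamp is exactly what raises that error; a hedged sketch subtracting a <code>Timedelta</code> instead:</p>
<pre><code>df.loc[df.index.min() - pd.Timedelta(seconds=1)] = ['z', -1]
df = df.sort_index()
</code></pre>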
| 2
|
2016-09-06T22:54:23Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"row"
] |
How to use Django Models in Javascript with AngularJS
| 39,358,684
|
<p>I'm working with django and AngularJS and I want to use my objects I create in Django in my javascript file. </p>
<p>in my models.py (django) I make my object:</p>
<pre><code>from django.db import models
class Cashflow(models.Model):
index = models.FloatField();
value = models.FloatField();
date = models.DateField();
</code></pre>
<p>In my js file I want to use it like:</p>
<pre><code>app = angular.module("coco",[]);
app.controller('cocoCtrl',['$scope',function($scope){
$scope.newCashflow = new Cashflow()
$scope.save = function(cashflow){
$scope.newCashflow.index = cashflow.index;
$scope.newCashflow.value = cashflow.value;
$scope.newCashflow.date = cashflow.date;
</code></pre>
<p>....</p>
<p>but that doesn't work. Am I forgetting something? I get no errors or logs; just nothing happens.</p>
| 0
|
2016-09-06T22:19:59Z
| 39,386,326
|
<p>You need to serialize the model objects to JSON first, so the JavaScript side can use them.</p>
<p>I can show some code for you.</p>
<pre><code>import json
from django.http import HttpResponse
from .models import Cashflow  # adjust the import to your app's models module

def cashflow_list(request):
    results = []
    for i in Cashflow.objects.all():
        # serialize each instance into JSON-friendly values
        results.append({'index': i.index, 'value': i.value, 'date': str(i.date)})
    return HttpResponse(json.dumps(results))
</code></pre>
| 0
|
2016-09-08T08:40:42Z
|
[
"javascript",
"python",
"angularjs",
"django"
] |
Including an AND statement in a Python List Comprehension
| 39,358,699
|
<p>I'm trying to create a function for an array. The array will go in and if the value is 0 it will be put to the back of the list. My function works most of the time, but the only issue is that the Boolean value False keeps getting interpreted as 0. I've created an AND statement to differentiate 0 and False, but it is not working, and I can't figure out why. Can you not have AND statements in list comprehension? </p>
<pre><code>def move_zeros(array):
[array.insert(len(array), array.pop(array.index(x))) for x in array if x == 0 and x is not False]
return array
</code></pre>
<p>I've added two examples. One works; the other doesn't do what I'd like to accomplish. </p>
<p>Input: [1,"a", 2, 0, 3, "b"] Output: [1, "a", 2, 3, "b", 0] This works</p>
<p>Input: [1, False, 2, 0, 3, "b"] Output: [1, 2, 3, "b", False, 0] This doesn't work, because False is being moved to the end of the list when I want to pass over it.</p>
| -1
|
2016-09-06T22:27:51Z
| 39,358,801
|
<p>It is probably a bad idea to try to make a semantic distinction between <code>0</code> and <code>False</code> when filtering a list, but if you have a good reason to do so there are a couple of ways that might work in your situation.</p>
<p>A somewhat hackish approach which works in CPython is to use <code>is</code> rather than <code>==</code>:</p>
<pre><code>def move_zeros(array):
zeros = 0
new_array = []
for x in array:
if not x is 0:
new_array.append(x)
else:
zeros += 1
return new_array + [0]*zeros
</code></pre>
<p>As @chepner points out, this is implementation dependent. A more robust approach which works in Python 3 is to use <code>isinstance()</code>:</p>
<pre><code>def move_zeros(array):
zeros = 0
new_array = []
for x in array:
if x != 0 or isinstance(x,bool):
new_array.append(x)
else:
zeros += 1
return new_array + [0]*zeros
</code></pre>
<p>Using either definition:</p>
<pre><code>>>> move_zeros([1, False, 2, 0, 3, "b"])
[1, False, 2, 3, 'b', 0]
</code></pre>
| 0
|
2016-09-06T22:37:32Z
|
[
"python",
"arrays",
"list-comprehension"
] |
Including an AND statement in a Python List Comprehension
| 39,358,699
|
<p>I'm trying to create a function for an array. The array will go in and if the value is 0 it will be put to the back of the list. My function works most of the time, but the only issue is that the Boolean value False keeps getting interpreted as 0. I've created an AND statement to differentiate 0 and False, but it is not working, and I can't figure out why. Can you not have AND statements in list comprehension? </p>
<pre><code>def move_zeros(array):
[array.insert(len(array), array.pop(array.index(x))) for x in array if x == 0 and x is not False]
return array
</code></pre>
<p>I've added two examples. One works; the other doesn't do what I'd like to accomplish. </p>
<p>Input: [1,"a", 2, 0, 3, "b"] Output: [1, "a", 2, 3, "b", 0] This works</p>
<p>Input: [1, False, 2, 0, 3, "b"] Output: [1, 2, 3, "b", False, 0] This doesn't work, because False is being moved to the end of the list when I want to pass over it.</p>
| -1
|
2016-09-06T22:27:51Z
| 39,358,923
|
<p>A loop alternative to your list comprehension. It still modifies the list. As noted by others <code>x is 0</code> simplifies the distinction between <code>0</code> and <code>False</code>:</p>
<pre><code>In [106]: al=[0, 1, 2, False, True, 3,0,10]
In [107]: for i,x in enumerate(al):
...: if x is 0:
...: value = al.pop(i)
...: al.append(value)
...:
In [108]: al
Out[108]: [1, 2, False, True, 3, 10, 0, 0]
</code></pre>
<p>With side effects like this, a loop is better than a comprehension. A comprehension should be used in the:</p>
<pre><code>return [.... for x in al if ...]
</code></pre>
<p>sense. You could also use enumerate in the comprehension:</p>
<pre><code>return [fun(x, i) for i, x in enumerate(al) if x...]
</code></pre>
<p>In a clear list comprehension, the list will only appear once; tests and returned values will depend only on the iteration variables, not the original list.</p>
<p>===================</p>
<p>Beware that <code>0</code> and <code>False</code> are often treated as the same. For example operations that expect numbers treat <code>False</code> as <code>0</code>, and <code>True</code> as 1. And functions that expect booleans will treat <code>0</code> as <code>False</code>.</p>
<pre><code>In [117]: [x+1 for x in al]
Out[117]: [1, 2, 3, 1, 2, 4, 1, 11]
In [118]: al=[0, 1, 2, False, True, 3,0,10]
In [119]: sum(al)
Out[119]: 17
</code></pre>
<p>=================</p>
<p>Example with <code>and</code> in list comprehension:</p>
<pre><code>In [137]: [x for x in al if x==0]
Out[137]: [0, False, 0]
In [138]: [x for x in al if x==0 and x is not False]
Out[138]: [0, 0]
In [140]: [x for x in al if not (x==0 and x is not False)]
Out[140]: [1, 2, False, True, 3, 10]
</code></pre>
<p>============</p>
<p>Another possible test - of the str representation:</p>
<pre><code>In [143]: [x for x in al if str(x)!='0']
Out[143]: [1, 2, False, True, 3, 10]
</code></pre>
<p>================</p>
<p>Your problem isn't with the test, but with the <code>al.index(x)</code>; it's matching both <code>0</code> and <code>False</code>, and removing the first, regardless of which <code>x</code> is passing your test.</p>
<p>Version with <code>al.index(x)</code>:</p>
<pre><code>In [396]: al=[1,False,2, 0,3,"b"]
In [397]: for x in al:
...: if x ==0 and x is not False:
...: al.append(al.pop(al.index(x)))
...:
In [398]: al
Out[398]: [1, 2, 0, 3, 'b', False]
</code></pre>
<p>Version with enumerate <code>i</code></p>
<pre><code>In [399]: al=[1,False,2, 0,3,"b"]
In [400]: for i,x in enumerate(al):
...: if x ==0 and x is not False:
...: al.append(al.pop(i))
...:
In [401]: al
Out[401]: [1, False, 2, 3, 'b', 0]
</code></pre>
<p>Or in your function:</p>
<pre><code>def move_zeros(array):
[array.insert(len(array), array.pop(i)) for i,x in enumerate(array) if (x == 0 and x is not False)]
return array
In [403]: al=[1,False,2, 0,3,"b"]
In [404]: move_zeros(al)
Out[404]: [1, False, 2, 3, 'b', 0]
</code></pre>
<p>testing the <code>index</code> in isolation:</p>
<pre><code>In [405]: al=[1,False,2, 0,3,"b"]
In [406]: al.index(0)
Out[406]: 1
In [407]: al.index(False)
Out[407]: 1
</code></pre>
| 1
|
2016-09-06T22:52:09Z
|
[
"python",
"arrays",
"list-comprehension"
] |
For a particular ID, check for every 3 mins and set a flag in python
| 39,358,727
|
<p>Suppose I have data as follows. The data has three columns: ID, date and Time.</p>
<pre><code>ID date Time
1 8/20/2016 20:27:25
1 8/20/2016 20:29:04
1 8/20/2016 20:29:05
1 8/20/2016 20:38:38
1 8/20/2016 20:38:38
2 8/20/2016 14:24:53
3 8/20/2016 21:18:37
3 8/20/2016 21:18:37
4 8/20/2016 01:08:34
4 8/20/2016 01:08:35
4 8/20/2016 01:08:35
4 8/20/2016 01:08:35
4 8/20/2016 11:08:35
4 8/20/2016 11:08:35
4 8/20/2016 11:08:35
5 8/20/2016 09:35:17
5 8/20/2016 09:36:05
5 8/20/2016 09:36:19
5 8/20/2016 09:36:21
5 8/20/2016 00:01:59
5 8/20/2016 00:04:59
6 8/20/2016 00:02:13
6 8/20/2016 00:02:17
6 8/20/2016 00:02:19
6 8/20/2016 00:02:21
6 8/20/2016 00:02:24
6 8/20/2016 00:02:26
6 8/20/2016 00:04:27
6 8/20/2016 00:04:27
6 8/20/2016 00:04:28
6 8/20/2016 00:04:30
6 8/20/2016 00:04:35
6 8/20/2016 01:45:23
7 8/20/2016 00:14:30
7 8/20/2016 00:14:33
7 8/20/2016 00:14:47
7 8/20/2016 00:14:56
7 8/20/2016 00:21:56
</code></pre>
<p>For every ID and date, I want to take the first time, mark all entries within the next 3 minutes with flag 1, then set the flag to 2 for the following entries and keep checking in 3-minute windows from there. Basically I want to identify 3-minute sets for every ID; having such a flag would help me analyse the records in 3-minute groups.</p>
<p>the output I want is,</p>
<pre><code>ID date Time flag
1 8/20/2016 20:27:25 1
1 8/20/2016 20:29:04 1
1 8/20/2016 20:29:05 1
1 8/20/2016 20:38:38 2
1 8/20/2016 20:38:38 2
2 8/20/2016 14:24:53 1
3 8/20/2016 21:18:37 1
3 8/20/2016 21:18:37 1
4 8/20/2016 01:08:34 1
4 8/20/2016 01:08:35 1
4 8/20/2016 01:08:35 1
4 8/20/2016 01:08:35 1
4 8/20/2016 11:08:35 2
4 8/20/2016 11:08:35 2
4 8/20/2016 11:08:35 2
5 8/20/2016 09:35:17 1
5 8/20/2016 09:36:05 1
5 8/20/2016 09:36:19 1
5 8/20/2016 09:36:21 1
5 8/20/2016 00:01:59 2
5 8/20/2016 00:04:59 3
6 8/20/2016 00:02:13 1
6 8/20/2016 00:02:17 1
6 8/20/2016 00:02:19 1
6 8/20/2016 00:02:21 1
6 8/20/2016 00:02:24 1
6 8/20/2016 00:02:26 1
6 8/20/2016 00:04:27 2
6 8/20/2016 00:04:27 2
6 8/20/2016 00:04:28 2
6 8/20/2016 00:04:30 2
6 8/20/2016 00:04:35 2
6 8/20/2016 01:45:23 3
7 8/20/2016 00:14:30 1
7 8/20/2016 00:14:33 1
7 8/20/2016 00:14:47 1
7 8/20/2016 00:14:56 1
7 8/20/2016 00:21:56 2
</code></pre>
<p>where for every ID, date, for every 3 minutes, 1 is added to the flag. </p>
<p>I have not been able to attempt this myself as I am new to Python; I apologize for that. If anybody can give me an idea of how to do this, it would be very helpful.</p>
| -1
|
2016-09-06T22:29:40Z
| 39,359,221
|
<p>The first thing to do would be to put the data into a useful data structure: I chose a list of tuples. These records then need to be grouped by their (id, date), which is an easy task using the <code>groupby</code> function of <code>itertools</code>. </p>
<p>If the data came from a database, you could run such grouping in the database itself.</p>
<p>Then all that's left is to loop over each of those groups; whenever the time delta between the reference record and the current one reaches 3 minutes (<code>3*60</code> seconds), increase the <code>flag</code> value and update <code>reference_time</code> so that the window moves along. </p>
<p>Full code below (with the data reduced); you can also see the whole working example at <a href="https://eval.in/636448" rel="nofollow">https://eval.in/636448</a></p>
<p>Note: I believe your reference data has an error in dataset <code>id=6</code>.</p>
<pre><code>from itertools import groupby
from datetime import datetime
data = [
('1', '8/20/2016', '20:27:25'),
# ...
('7', '8/20/2016', '00:21:56')
]
FMT = '%H:%M:%S'
for key, group in groupby(data, lambda x: (x[0],x[1])):
# group data based on (id, date), ignore 'key'
reference_time = None
flag = 1
for cur_id, cur_date, cur_time in group:
# for each group, test the delta-3 condition
if reference_time is None:
# init
reference_time = cur_time
print cur_id, cur_date, cur_time, flag
continue
delta = datetime.strptime(cur_time, FMT) - datetime.strptime(reference_time, FMT)
if delta.seconds >= (3*60):
# check if time diff is >= 3 minutes from start of sequence
# increase flag, and update the reference timestamp
reference_time = cur_time
flag += 1
print cur_id, cur_date, cur_time, flag
</code></pre>
<p>Result:</p>
<pre><code>1 8/20/2016 20:27:25 1
1 8/20/2016 20:29:04 1
1 8/20/2016 20:29:05 1
1 8/20/2016 20:38:38 2
1 8/20/2016 20:38:38 2
2 8/20/2016 14:24:53 1
3 8/20/2016 21:18:37 1
3 8/20/2016 21:18:37 1
4 8/20/2016 01:08:34 1
4 8/20/2016 01:08:35 1
4 8/20/2016 01:08:35 1
4 8/20/2016 01:08:35 1
4 8/20/2016 11:08:35 2
4 8/20/2016 11:08:35 2
4 8/20/2016 11:08:35 2
5 8/20/2016 09:35:17 1
5 8/20/2016 09:36:05 1
5 8/20/2016 09:36:19 1
5 8/20/2016 09:36:21 1
5 8/20/2016 00:01:59 2
5 8/20/2016 00:04:59 3
6 8/20/2016 00:02:13 1
6 8/20/2016 00:02:17 1
6 8/20/2016 00:02:19 1
6 8/20/2016 00:02:21 1
6 8/20/2016 00:02:24 1
6 8/20/2016 00:02:26 1
6 8/20/2016 00:04:27 1
6 8/20/2016 00:04:27 1
6 8/20/2016 00:04:28 1
6 8/20/2016 00:04:30 1
6 8/20/2016 00:04:35 1
6 8/20/2016 01:45:23 2
7 8/20/2016 00:14:30 1
7 8/20/2016 00:14:33 1
7 8/20/2016 00:14:47 1
7 8/20/2016 00:14:56 1
7 8/20/2016 00:21:56 2
</code></pre>
| 1
|
2016-09-06T23:31:32Z
|
[
"python",
"python-2.7",
"python-3.x"
] |
Python: Greyscale image: Make everything white, except for black pixels
| 39,358,729
|
<p>I tried to open (already greyscale) images and change all non-black pixels to white pixels. I implemented the following code:</p>
<pre><code>from scipy.misc import fromimage, toimage
from PIL import Image
import numpy as np
in_path = 'E:\\in.png'
out_path = 'E:\\out.png'
# Open gray-scale image
img = Image.open(in_path).convert('L')
# Just for testing: The image is saved correct
#img.save(out_path)
# Make all non-black colors white
imp_arr = fromimage(img)
imp_arr = (np.ceil(imp_arr / 255.0) * 255.0).astype(int)
# Save the image
img = toimage(imp_arr, mode='L')
img.save(out_path)
</code></pre>
<p>The calculation to make all pixels white, except for the black ones, is quite simple and also very fast. For my use case it is especially important that it runs very fast, which is why I used numpy. For some reason, though, this code does not work with all images.</p>
<p>An example: The following image is the input.</p>
<p><a href="http://i.stack.imgur.com/tQmM1.png" rel="nofollow"><img src="http://i.stack.imgur.com/tQmM1.png" alt="enter image description here"></a></p>
<p>It contains a grey rectangle and also a white border. The output should be a complete white image, but for some reason the output is a black image:</p>
<p><a href="http://i.stack.imgur.com/BY5Fc.png" rel="nofollow"><img src="http://i.stack.imgur.com/BY5Fc.png" alt="enter image description here"></a></p>
<p>With some other images it works quite well. What am I doing wrong? I think floating point shouldn't be a big issue here, because this code does not require high calculation accuracy to work.</p>
<p>Thank you very much</p>
| 0
|
2016-09-06T22:29:50Z
| 39,360,008
|
<p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.toimage.html" rel="nofollow">toimage</a> expects a byte array, so convert to uint8 not int:</p>
<pre><code>imp_arr = (np.ceil(imp_arr / 255.0) * 255.0).astype('uint8')
</code></pre>
<p>It seems to work for <code>int</code> if there is a mix of black and white pixels in the output, but not if they are all white. I can't find any explanation for this in the documentation.</p>
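<p>If the intent is simply "black stays black, everything else becomes white", a minimal sketch of an alternative that avoids the ceiling arithmetic and makes the dtype explicit:</p>
<pre><code>import numpy as np

# keep pure black (0) pixels, turn every other value into white (255),
# and use uint8 so PIL/toimage treats it as 8-bit grayscale
imp_arr = np.where(imp_arr == 0, 0, 255).astype('uint8')
</code></pre>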
| 2
|
2016-09-07T01:29:22Z
|
[
"python",
"image",
"performance"
] |
Disconnecting from WiFi using python (Windows)
| 39,358,765
|
<p>I can connect to a WiFi network using: </p>
<pre><code>process = subprocess.Popen(
'netsh wlan connect {0}'.format(wifi_network),
shell = True,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
stdout, stderr = process.communicate()
</code></pre>
<p>I was wondering if there was a similar way to disconnect from a WiFi network? Either disconnecting from all WiFi networks or disconnecting from a specific network (given a name) would be fine. </p>
| 1
|
2016-09-06T22:34:12Z
| 39,364,152
|
<p>Look at <strong>"netsh wlan disconnect"</strong>: <a href="https://technet.microsoft.com/en-us/library/cc755301(v=ws.10).aspx#bkmk_wlanDisConn" rel="nofollow">Netsh Commands for Wireless Local Area Network</a></p>
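<p>A minimal sketch of calling it the same way the question connects (the plain command disconnects the default wireless interface; an <code>interface="..."</code> argument can be added to target a specific adapter):</p>
<pre><code>import subprocess

process = subprocess.Popen(
    'netsh wlan disconnect',
    shell = True,
    stdout = subprocess.PIPE,
    stderr = subprocess.PIPE)
stdout, stderr = process.communicate()
</code></pre>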
| 0
|
2016-09-07T07:50:17Z
|
[
"python",
"subprocess",
"wifi"
] |
Receiving an error when trying to use the pandas read_html function
| 39,358,775
|
<p>I'm really new to python and pandas so I could be making a simple mistake.</p>
<p>I'm trying to run the code below:</p>
<pre><code>import quandl
import pandas as pd
df3 = pd.read_html('https://simple.wikipedia.org/wiki/List_of_U.S._states')
print(df3)
</code></pre>
<p>I have installed pandas as well as quandl through pip.
When I run the code I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python27\FunwithQuandl.py", line 14, in <module>
df3 = pd.read_html('https://simple.wikipedia.org /wiki/List_of_U.S._states')
File "C:\Python27\lib\site-packages\pandas\io\html.py", line 874, in read_html
parse_dates, tupleize_cols, thousands, attrs, encoding)
File "C:\Python27\lib\site-packages\pandas\io\html.py", line 726, in _parse
parser = _parser_dispatch(flav)
File "C:\Python27\lib\site-packages\pandas\io\html.py", line 685, in _parser_dispatch
raise ImportError("lxml not found, please install it")
ImportError: lxml not found, please install it
</code></pre>
<p>I then tried installing lxml via command prompt and pip and I got a few errors:</p>
<pre><code>Cannot open include file: 'libxml/xpath.h': No such file or directory
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
error: command 'CL\\User\\...Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
</code></pre>
<p>I even tried suggestions on this from <a href="http://stackoverflow.com/questions/33785755/getting-could-not-find-function-xmlcheckversion-in-library-libxml2-is-libxml2">this thread</a>:</p>
<p>such as </p>
<blockquote>
<p>"install lxml from lfd.uci.edu/~gohlke/pythonlibs/#lxml for your
python version. It's a precompiled WHL with required
modules/dependencies.</p>
<p>The site lists several packages, when e.g. using Win32 Python 2.7, use
lxml-3.6.1-cp27-cp27m-win32.whl."</p>
</blockquote>
<p>I have downloaded the whl file from the site suggested but I can't seem to install it.
I have tried using pip and typing out the name of the file but the file isn't being recognized via pip.</p>
<p>I am using Python 2.7 and Windows 7 professional</p>
<p>Thanks for you help.</p>
| 0
|
2016-09-06T22:35:14Z
| 39,378,308
|
<p>I fixed the problem. I moved the file that I wanted to install from <code>Downloads</code> to <code>LocalDisk/Users/My_name</code>. Inside the right directory, <code>PIP</code> was able to locate and install it for some reason. Thanks for the responses.</p>
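<p>For reference, the install step is just pointing <code>pip</code> at the local wheel file from inside the directory that contains it (the exact filename depends on the wheel you downloaded, e.g. the one mentioned in the question):</p>
<pre><code>cd C:\Users\My_name
pip install lxml-3.6.1-cp27-cp27m-win32.whl
</code></pre>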
| 0
|
2016-09-07T20:15:51Z
|
[
"python",
"windows",
"python-2.7",
"pandas",
"pip"
] |
Login to https website using Python
| 39,358,781
|
<p>I'm new to posting on stackoverflow so please don't bite! I had to resort to making an account and asking for help to avoid banging my head on the table any longer...</p>
<p>I'm trying to login to the following website <a href="https://account.socialbakers.com/login" rel="nofollow">https://account.socialbakers.com/login</a> using the requests module in python. It seems as if the requests module is the place to go but the session.post() function isn't working for me. I can't tell if there is something unique about this type of form or the fact the website is https://</p>
<p>The login form is the following:</p>
<pre><code><form action="/login" id="login-form" method="post" novalidate="">
<big class="error-message">
<big>
<strong>
</strong>
</big>
</big>
<div class="item-full">
<label for="">
<span class="label-header">
<span>
Your e-mail address
</span>
</span>
<input id="email" name="email" type="email"/>
</label>
</div>
<div class="item-list">
<div class="item-big">
<label for="">
<span class="label-header">
<span>
Password
</span>
</span>
<input id="password" name="password" type="password"/>
</label>
</div>
<div class="item-small">
<button class="btn btn-green" type="submit">
Login
</button>
</div>
</div>
<p>
<a href="/email/reset-password">
<strong>
Lost password?
</strong>
</a>
</p>
</form>
</code></pre>
<p>Based on the following post <a href="http://stackoverflow.com/questions/11892729/how-to-log-in-to-a-website-using-pythons-requests-module">How to "log in" to a website using Python's Requests module?</a> among others I have tried the following code:</p>
<pre><code>url = 'https://account.socialbakers.com/login'
payload = dict(email = 'Myemail', password = 'Mypass')
with session() as s:
soup = BeautifulSoup(s.get(url).content,'lxml')
p = s.post(url, data = payload, verify=True)
print(p.text)
</code></pre>
<p>This however just gives me the login page again and doesn't seem to log me in</p>
<p>I have checked in the form that I am referring to the correct names of the inputs 'email' and 'password'. I've tried explicitly passing through cookies as well. The verify=True parameter was suggested as a way to deal with the fact the website is https.</p>
<p>I can't work out what isn't working/what is different about this form to the one on the linked post.</p>
<p>Thanks</p>
<p>Edit: Updated p = s.get to p = s.post</p>
| 1
|
2016-09-06T22:35:55Z
| 39,358,943
|
<p>Two things to look out for. One, try to use <code>s.post</code>, and second, check in the browser whether the form is sending any other values by looking at the network tab. </p>
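<p>If you'd rather check programmatically, a quick sketch that lists every input field of the login form (based on the markup quoted in the question there are no hidden inputs, which suggests the extra value is added by JavaScript):</p>
<pre><code>from bs4 import BeautifulSoup
import requests

resp = requests.get('https://account.socialbakers.com/login')
soup = BeautifulSoup(resp.content, 'lxml')
for inp in soup.find('form', id='login-form').find_all('input'):
    print(inp.get('name'), inp.get('type'))
</code></pre>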
| 0
|
2016-09-06T22:54:25Z
|
[
"python",
"python-requests"
] |
Login to https website using Python
| 39,358,781
|
<p>I'm new to posting on stackoverflow so please don't bite! I had to resort to making an account and asking for help to avoid banging my head on the table any longer...</p>
<p>I'm trying to login to the following website <a href="https://account.socialbakers.com/login" rel="nofollow">https://account.socialbakers.com/login</a> using the requests module in python. It seems as if the requests module is the place to go but the session.post() function isn't working for me. I can't tell if there is something unique about this type of form or the fact the website is https://</p>
<p>The login form is the following:</p>
<pre><code><form action="/login" id="login-form" method="post" novalidate="">
<big class="error-message">
<big>
<strong>
</strong>
</big>
</big>
<div class="item-full">
<label for="">
<span class="label-header">
<span>
Your e-mail address
</span>
</span>
<input id="email" name="email" type="email"/>
</label>
</div>
<div class="item-list">
<div class="item-big">
<label for="">
<span class="label-header">
<span>
Password
</span>
</span>
<input id="password" name="password" type="password"/>
</label>
</div>
<div class="item-small">
<button class="btn btn-green" type="submit">
Login
</button>
</div>
</div>
<p>
<a href="/email/reset-password">
<strong>
Lost password?
</strong>
</a>
</p>
</form>
</code></pre>
<p>Based on the following post <a href="http://stackoverflow.com/questions/11892729/how-to-log-in-to-a-website-using-pythons-requests-module">How to "log in" to a website using Python's Requests module?</a> among others I have tried the following code:</p>
<pre><code>url = 'https://account.socialbakers.com/login'
payload = dict(email = 'Myemail', password = 'Mypass')
with session() as s:
soup = BeautifulSoup(s.get(url).content,'lxml')
p = s.post(url, data = payload, verify=True)
print(p.text)
</code></pre>
<p>This however just gives me the login page again and doesn't seem to log me in</p>
<p>I have checked in the form that I am referring to the correct names of the inputs 'email' and 'password'. I've tried explicitly passing through cookies as well. The verify=True parameter was suggested as a way to deal with the fact the website is https.</p>
<p>I can't work out what isn't working/what is different about this form to the one on the linked post.</p>
<p>Thanks</p>
<p>Edit: Updated p = s.get to p = s.post</p>
| 1
|
2016-09-06T22:35:55Z
| 39,359,180
|
<p>The form is not sending the password in clear text; it is encrypting or hashing it before sending it. When you type the password <code>aaaa</code> into the form, over the network it sends</p>
<p><code>b3744bb9a8adb2d67cfdf79095bd84f5e77500a76727e6d73eef460eb806511ba73c9f765d4b3738e0b1399ce4a4c4ac3aed17fff34e0ef4037e9be466adec61</code> </p>
<p>so there is no easy way to log in via the requests library without duplicating this behavior.</p>
| 0
|
2016-09-06T23:27:09Z
|
[
"python",
"python-requests"
] |
Login to https website using Python
| 39,358,781
|
<p>I'm new to posting on stackoverflow so please don't bite! I had to resort to making an account and asking for help to avoid banging my head on the table any longer...</p>
<p>I'm trying to login to the following website <a href="https://account.socialbakers.com/login" rel="nofollow">https://account.socialbakers.com/login</a> using the requests module in python. It seems as if the requests module is the place to go but the session.post() function isn't working for me. I can't tell if there is something unique about this type of form or the fact the website is https://</p>
<p>The login form is the following:</p>
<pre><code><form action="/login" id="login-form" method="post" novalidate="">
<big class="error-message">
<big>
<strong>
</strong>
</big>
</big>
<div class="item-full">
<label for="">
<span class="label-header">
<span>
Your e-mail address
</span>
</span>
<input id="email" name="email" type="email"/>
</label>
</div>
<div class="item-list">
<div class="item-big">
<label for="">
<span class="label-header">
<span>
Password
</span>
</span>
<input id="password" name="password" type="password"/>
</label>
</div>
<div class="item-small">
<button class="btn btn-green" type="submit">
Login
</button>
</div>
</div>
<p>
<a href="/email/reset-password">
<strong>
Lost password?
</strong>
</a>
</p>
</form>
</code></pre>
<p>Based on the following post <a href="http://stackoverflow.com/questions/11892729/how-to-log-in-to-a-website-using-pythons-requests-module">How to "log in" to a website using Python's Requests module?</a> among others I have tried the following code:</p>
<pre><code>url = 'https://account.socialbakers.com/login'
payload = dict(email = 'Myemail', password = 'Mypass')
with session() as s:
soup = BeautifulSoup(s.get(url).content,'lxml')
p = s.post(url, data = payload, verify=True)
print(p.text)
</code></pre>
<p>This however just gives me the login page again and doesn't seem to log me in</p>
<p>I have checked in the form that I am referring to the correct names of the inputs 'email' and 'password'. I've tried explicitly passing through cookies as well. The verify=True parameter was suggested as a way to deal with the fact the website is https.</p>
<p>I can't work out what isn't working/what is different about this form to the one on the linked post.</p>
<p>Thanks</p>
<p>Edit: Updated p = s.get to p = s.post</p>
| 1
|
2016-09-06T22:35:55Z
| 39,359,295
|
<p>Checked the website. It is sending the SHA3 hash of the password instead of sending it as plaintext. You can see this in line 111 of <a href="https://account.socialbakers.com/js/script.js" rel="nofollow">script.js</a>, which is included in the main page as:</p>
<pre><code><script src="/js/script.js"></script>
</code></pre>
<p>inside the <code>head</code> tag.</p>
<p>So you need to replicate this behaviour when sending the <code>POST</code> request. I found the <a href="https://pypi.python.org/pypi/pysha3" rel="nofollow"><code>pysha3</code></a> library, which does the job pretty well.</p>
<p>So first install <a href="https://pypi.python.org/pypi/pysha3" rel="nofollow">pysha3</a> by running <code>pip install pysha3</code> (with <code>sudo</code> if necessary), then run the code below:</p>
<pre><code>import sha3
import hashlib
from requests import session
from bs4 import BeautifulSoup
url = 'https://account.socialbakers.com/login'
myemail = "abhigolu10@gmail.com"
mypassword = hashlib.sha3_512(b"st@ck0verflow").hexdigest() #take SHA3 of password
payload = {'email':myemail, 'password':mypassword}
with session() as s:
soup = BeautifulSoup(s.get(url).content,'lxml')
p = s.post(url, data = payload, verify=True)
print(p.text)
</code></pre>
<p>and you will get the correct logged in page!</p>
| 1
|
2016-09-06T23:40:54Z
|
[
"python",
"python-requests"
] |
capturing a result in a variable
| 39,358,811
|
<p>I am just getting my head around functions in Python (only been learning Python for 6 months) and I'm stuck on some code which is not working. It says total is not defined, nameerror. Reading through some posts I think I need to store total in a variable but I don't know where. Can you do this in the return statement? Not sure where to define total to make it global.</p>
<p>It's a program with several tasks. I'm also struggling with storing a table into a csv file. Here is the code.</p>
<pre><code>import csv
def set_values():
ans1 = float(input('Please enter the first number: '))
ans2 = float(input('Please enter the second number: '))
ans3 = float(input('Please enter the third number: '))
levels = int(input('Please set the amount of levels between 5 and 10: '))
return (ans1, ans2, ans3, levels)
def display_model_values(ans1, ans2, ans3, levels):
print('The outcome for model 1 is ',ans1)
print('The outcome for model 2 is ',ans2)
print('The outcome for model 3 is ',ans3)
print('The number of levels are ',levels)
def run_model(ans1, ans2, ans3, levels):
total = ans1+ans2+ans3
print ("\t","Level","\t","Answer 1","\t","Answer 2","\t","Answer 3","\t","Total")
for i in range (0,levels+1):
print("\t",i,"\t\t",ans1,"\t\t",ans2,"\t\t",ans3,"\t\t",total)
result1 =ans2*ans3
result2 = ans2/ans1
total = ans1+result1+result2
return (i,result1, result2, total)
def export_data(ans1,ans2,ans3,total):
table = [ans1, ans2, ans3,total]
nameoffile = input('what would you like to call the filename')
nameoffile = open(nameoffile+".csv","w")
csv_file_ref = csv.writer(nameoffile)
csv_file_ref.writerow(table)
nameoffile.close()
## with open(nameoffile+'.csv', 'w') as csvfile:
## writer = csv.writer(csvfile)
## writer.writerow(r) for r in table]
choice = ''
count = 0
while choice != 'q':
print('Main Menu')
print ('1)Set Model Values')
print ('2)Display Model Values')
print ('3)Run Model')
print ('4)Export Data')
print ('Q)Quit')
choice = input('Please Enter Choice')
if choice =='1':
ans1, ans2, ans3, levels = set_values()
count = count +1
elif choice == '2':
if count < 1:
print ('you need to choose option 1 first')
else:
display_model_values(ans1,ans2,ans3,levels)
elif choice =='3':
if count < 1:
print('you need to choose option 1 first')
else:
run_model(ans1,ans2,ans3,levels)
elif choice =='4':
if count < 1:
print ('you need to choose option 1 first')
else:
export_data(ans1,ans2,ans3,total)
elif choice == 'Q':
break
else:
print('not an option')
</code></pre>
| 1
|
2016-09-06T22:39:02Z
| 39,358,871
|
<p>On line 83 you are passing the variable <code>total</code> to the function export_data(ans1,ans2,ans3,total), but <code>total</code> is not defined anywhere in your while loop. Assuming that your total should be <code>ans1+ans2+ans3</code> before passing the value,
add the line</p>
<pre><code>total = ans1+ans2+ans3
</code></pre>
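<p>Concretely, a minimal sketch of the option-4 branch from your loop with that line added (everything else stays the same):</p>
<pre><code>    elif choice =='4':
        if count &lt; 1:
            print ('you need to choose option 1 first')
        else:
            total = ans1+ans2+ans3   # define total before passing it on
            export_data(ans1,ans2,ans3,total)
</code></pre>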
<p>That should solve the problem.</p>
| 1
|
2016-09-06T22:46:43Z
|
[
"python",
"function",
"csv"
] |
capturing a result in a variable
| 39,358,811
|
<p>I am just getting my head around functions in Python (only been learning Python for 6 months) and I'm stuck on some code which is not working. It says total is not defined, nameerror. Reading through some posts I think I need to store total in a variable but I don't know where. Can you do this in the return statement? Not sure where to define total to make it global.</p>
<p>It's a program with several tasks. I'm also struggling with storing a table into a csv file. Here is the code.</p>
<pre><code>import csv
def set_values():
ans1 = float(input('Please enter the first number: '))
ans2 = float(input('Please enter the second number: '))
ans3 = float(input('Please enter the third number: '))
levels = int(input('Please set the amount of levels between 5 and 10: '))
return (ans1, ans2, ans3, levels)
def display_model_values(ans1, ans2, ans3, levels):
print('The outcome for model 1 is ',ans1)
print('The outcome for model 2 is ',ans2)
print('The outcome for model 3 is ',ans3)
print('The number of levels are ',levels)
def run_model(ans1, ans2, ans3, levels):
total = ans1+ans2+ans3
print ("\t","Level","\t","Answer 1","\t","Answer 2","\t","Answer 3","\t","Total")
for i in range (0,levels+1):
print("\t",i,"\t\t",ans1,"\t\t",ans2,"\t\t",ans3,"\t\t",total)
result1 =ans2*ans3
result2 = ans2/ans1
total = ans1+result1+result2
return (i,result1, result2, total)
def export_data(ans1,ans2,ans3,total):
table = [ans1, ans2, ans3,total]
nameoffile = input('what would you like to call the filename')
nameoffile = open(nameoffile+".csv","w")
csv_file_ref = csv.writer(nameoffile)
csv_file_ref.writerow(table)
nameoffile.close()
## with open(nameoffile+'.csv', 'w') as csvfile:
## writer = csv.writer(csvfile)
## writer.writerow(r) for r in table]
choice = ''
count = 0
while choice != 'q':
print('Main Menu')
print ('1)Set Model Values')
print ('2)Display Model Values')
print ('3)Run Model')
print ('4)Export Data')
print ('Q)Quit')
choice = input('Please Enter Choice')
if choice =='1':
ans1, ans2, ans3, levels = set_values()
count = count +1
elif choice == '2':
if count < 1:
print ('you need to choose option 1 first')
else:
display_model_values(ans1,ans2,ans3,levels)
elif choice =='3':
if count < 1:
print('you need to choose option 1 first')
else:
run_model(ans1,ans2,ans3,levels)
elif choice =='4':
if count < 1:
print ('you need to choose option 1 first')
else:
export_data(ans1,ans2,ans3,total)
elif choice == 'Q':
break
else:
print('not an option')
</code></pre>
| 1
|
2016-09-06T22:39:02Z
| 39,358,878
|
<p>Your total is defined only within the function <code>run_model</code>. When that function returns, you cannot reference <code>total</code> again, since it's gone <a href="http://sebastianraschka.com/Articles/2014_python_scope_and_namespaces.html" rel="nofollow">out of scope</a>. The variable is now "unbound", and that is exactly what Python is telling you when it says that the name <code>total</code> is not defined. It was defined once, somewhere, but it's long gone.</p>
<p>A simple change to your code would be to have total calculated again, inside the body of the export function, as here:</p>
<pre><code>def export_data(ans1,ans2,ans3):
total = ans1 + ans2 + ans3 # total is available inside of export_data
table = [ans1, ans2, ans3,total]
nameoffile = input('what would you like to call the filename')
nameoffile = open(nameoffile+".csv","w")
csv_file_ref = csv.writer(nameoffile)
csv_file_ref.writerow(table)
nameoffile.close()
</code></pre>
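<p>Since the function no longer takes <code>total</code> as a parameter, the call in your menu loop has to drop that argument as well, otherwise the <code>NameError</code> simply moves to the call site:</p>
<pre><code>        else:
            export_data(ans1,ans2,ans3)
</code></pre>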
<p>These changes should make your code work.</p>
| 1
|
2016-09-06T22:47:17Z
|
[
"python",
"function",
"csv"
] |
capturing a result in a variable
| 39,358,811
|
<p>I am just getting my head around functions in Python (only been learning Python for 6 months) and I'm stuck on some code which is not working. It says total is not defined, nameerror. Reading through some posts I think I need to store total in a variable but I don't know where. Can you do this in the return statement? Not sure where to define total to make it global.</p>
<p>It's a program with several tasks. I'm also struggling with storing a table into a csv file. Here is the code.</p>
<pre><code>import csv
def set_values():
ans1 = float(input('Please enter the first number: '))
ans2 = float(input('Please enter the second number: '))
ans3 = float(input('Please enter the third number: '))
levels = int(input('Please set the amount of levels between 5 and 10: '))
return (ans1, ans2, ans3, levels)
def display_model_values(ans1, ans2, ans3, levels):
print('The outcome for model 1 is ',ans1)
print('The outcome for model 2 is ',ans2)
print('The outcome for model 3 is ',ans3)
print('The number of levels are ',levels)
def run_model(ans1, ans2, ans3, levels):
total = ans1+ans2+ans3
print ("\t","Level","\t","Answer 1","\t","Answer 2","\t","Answer 3","\t","Total")
for i in range (0,levels+1):
print("\t",i,"\t\t",ans1,"\t\t",ans2,"\t\t",ans3,"\t\t",total)
result1 =ans2*ans3
result2 = ans2/ans1
total = ans1+result1+result2
return (i,result1, result2, total)
def export_data(ans1,ans2,ans3,total):
table = [ans1, ans2, ans3,total]
nameoffile = input('what would you like to call the filename')
nameoffile = open(nameoffile+".csv","w")
csv_file_ref = csv.writer(nameoffile)
csv_file_ref.writerow(table)
nameoffile.close()
## with open(nameoffile+'.csv', 'w') as csvfile:
## writer = csv.writer(csvfile)
## writer.writerow(r) for r in table]
choice = ''
count = 0
while choice != 'q':
print('Main Menu')
print ('1)Set Model Values')
print ('2)Display Model Values')
print ('3)Run Model')
print ('4)Export Data')
print ('Q)Quit')
choice = input('Please Enter Choice')
if choice =='1':
ans1, ans2, ans3, levels = set_values()
count = count +1
elif choice == '2':
if count < 1:
print ('you need to choose option 1 first')
else:
display_model_values(ans1,ans2,ans3,levels)
elif choice =='3':
if count < 1:
print('you need to choose option 1 first')
else:
run_model(ans1,ans2,ans3,levels)
elif choice =='4':
if count < 1:
print ('you need to choose option 1 first')
else:
export_data(ans1,ans2,ans3,total)
elif choice == 'Q':
break
else:
print('not an option')
</code></pre>
| 1
|
2016-09-06T22:39:02Z
| 39,436,551
|
<pre><code>def export_data(ans1,ans2,ans3):
    total = ans1 + ans2 + ans3  # total is available inside of export_data
    table = [ans1, ans2, ans3,total]
    nameoffile = input('what would you like to call the filename')
    nameoffile = open(nameoffile+".csv","w")
    csv_file_ref = csv.writer(nameoffile)
    csv_file_ref.writerow(table)
    nameoffile.close()
</code></pre>
<p>Now it exports and there is no error on <code>total</code>.</p>
| 0
|
2016-09-11T13:22:17Z
|
[
"python",
"function",
"csv"
] |
check if an IP is within a range of CIDR in Python
| 39,358,869
|
<p>I know there are some similar questions up here, but they mostly either want to find the range itself (which uses some libraries, like the example that stackoverflow says is a dupe of my question) and is in another language.</p>
<p>I have a way to convert the subnet into the beginning and the end of the range of ip's in a subnet (okay, bad wording, it's simply like<code>1.1.1.1/16 -> (1.1.0.0 , 1.1.255.255)</code>)</p>
<p>I now want to check if <code>1.1.2.2</code> is within this subnet. Can I simply do a <code>></code> and <code><</code> to compare?</p>
<pre><code>ip_range = ('1.1.0.0', '1.1.255.255')
if '1.1.2.2' >= ip_range[0] and '1.1.2.2' <= ip_range[1]:
return True
</code></pre>
<p>When I tested it, it works, but I don't know if it would always work for any ipv4 ip's. I'd assume I'm just comparing ASCII order , so this should always work, but is there any exception?</p>
| 0
|
2016-09-06T22:46:08Z
| 39,358,930
|
<p>Your code compares strings, not numbers. I would suggest using tuples instead:</p>
<pre><code>>>> ip_range = [(1,1,0,0), (1,1,255,255)]
>>> testip = (1,1,2,2)
>>> testip > ip_range[0] and testip < ip_range[1]
True
>>> testip = (1,3,1,1)
>>> testip > ip_range[0] and testip < ip_range[1]
False
</code></pre>
| 3
|
2016-09-06T22:53:11Z
|
[
"python",
"ip"
] |
check if an IP is within a range of CIDR in Python
| 39,358,869
|
<p>I know there are some similar questions up here, but they mostly either want to find the range itself (which uses some libraries, like the example that stackoverflow says is a dupe of my question) and is in another language.</p>
<p>I have a way to convert the subnet into the beginning and the end of the range of ip's in a subnet (okay, bad wording, it's simply like<code>1.1.1.1/16 -> (1.1.0.0 , 1.1.255.255)</code>)</p>
<p>I now want to check if <code>1.1.2.2</code> is within this subnet. Can I simply do a <code>></code> and <code><</code> to compare?</p>
<pre><code>ip_range = ('1.1.0.0', '1.1.255.255')
if '1.1.2.2' >= ip_range[0] and '1.1.2.2' <= ip_range[1]:
return True
</code></pre>
<p>When I tested it, it works, but I don't know if it would always work for any ipv4 ip's. I'd assume I'm just comparing ASCII order , so this should always work, but is there any exception?</p>
| 0
|
2016-09-06T22:46:08Z
| 39,358,935
|
<p>This doesn't work in general, because string comparison uses collating order, not the numerical values of the four fields. For instance, '1.1.2.2' &gt; '1.1.128.1' -- the critical spot is the 5th character, '1' vs '2'.</p>
<p>If you want to compare the fields, try separating into lists:</p>
<pre><code>ip_vals = [int(x) for x in '1.1.2.2'.split('.')]
</code></pre>
<p><code>ip_vals</code> is now a list of the four octet values; convert the range endpoints the same way, compare the lists, and you get the results I think you want.</p>
| 1
|
2016-09-06T22:53:35Z
|
[
"python",
"ip"
] |
check if an IP is within a range of CIDR in Python
| 39,358,869
|
<p>I know there are some similar questions up here, but they mostly either want to find the range itself (which uses some libraries, like the example that stackoverflow says is a dupe of my question) and is in another language.</p>
<p>I have a way to convert the subnet into the beginning and the end of the range of ip's in a subnet (okay, bad wording, it's simply like<code>1.1.1.1/16 -> (1.1.0.0 , 1.1.255.255)</code>)</p>
<p>I now want to check if <code>1.1.2.2</code> is within this subnet. Can I simply do a <code>></code> and <code><</code> to compare?</p>
<pre><code>ip_range = ('1.1.0.0', '1.1.255.255')
if '1.1.2.2' >= ip_range[0] and '1.1.2.2' <= ip_range[1]:
return True
</code></pre>
<p>When I tested it, it works, but I don't know if it would always work for any ipv4 ip's. I'd assume I'm just comparing ASCII order , so this should always work, but is there any exception?</p>
| 0
|
2016-09-06T22:46:08Z
| 39,358,983
|
<p>You can't really do string comparisons on a dot separated list of numbers because your test will simply fail on input say <code>1.1.99.99</code> as <code>'9'</code> is simply greater than <code>'2'</code></p>
<pre><code>>>> '1.1.99.99' < '1.1.255.255'
False
</code></pre>
<p>So instead you can convert the input into tuples of integers through comprehension expression</p>
<pre><code>def convert_ipv4(ip):
return tuple(int(n) for n in ip.split('.'))
</code></pre>
<p>Note the lack of type checking, but if your input is a proper IP address it will be fine. Since you have a 2-tuple of IP addresses, you can create a function that takes both start and end as argument, pass that tuple in through argument list, and return that with just one statement (as Python allows chaining of comparisons). Perhaps like:</p>
<pre><code>def check_ipv4_in(addr, start, end):
return convert_ipv4(start) < convert_ipv4(addr) < convert_ipv4(end)
</code></pre>
<p>Test it out.</p>
<pre><code>>>> ip_range = ('1.1.0.0', '1.1.255.255')
>>> check_ipv4_in('1.1.99.99', *ip_range)
True
</code></pre>
<p>With this method you can lazily expand it to IPv6, though the conversion to and from hex (instead of int) will be needed instead.</p>
| 2
|
2016-09-06T22:59:31Z
|
[
"python",
"ip"
] |
check if an IP is within a range of CIDR in Python
| 39,358,869
|
<p>I know there are some similar questions up here, but they mostly either want to find the range itself (which uses some libraries, like the example that stackoverflow says is a dupe of my question) and is in another language.</p>
<p>I have a way to convert the subnet into the beginning and the end of the range of ip's in a subnet (okay, bad wording, it's simply like<code>1.1.1.1/16 -> (1.1.0.0 , 1.1.255.255)</code>)</p>
<p>I now want to check if <code>1.1.2.2</code> is within this subnet. Can I simply do a <code>></code> and <code><</code> to compare?</p>
<pre><code>ip_range = ('1.1.0.0', '1.1.255.255')
if '1.1.2.2' >= ip_range[0] and '1.1.2.2' <= ip_range[1]:
return True
</code></pre>
<p>When I tested it, it works, but I don't know if it would always work for any ipv4 ip's. I'd assume I'm just comparing ASCII order , so this should always work, but is there any exception?</p>
| 0
|
2016-09-06T22:46:08Z
| 39,359,495
|
<p>In Python 3.3 and later, you should be using the <code>ipaddress</code> module.</p>
<pre><code>from ipaddress import ip_network, ip_address
net = ip_network("1.1.0.0/16")
print(ip_address("1.1.2.2") in net) # True
</code></pre>
| 1
|
2016-09-07T00:06:02Z
|
[
"python",
"ip"
] |
How do I Parse XML Elements and get values with Python 2.7
| 39,358,925
|
<p>API Response:<a href="http://iss.ndl.go.jp/api/opensearch?isbn=9784334770051" rel="nofollow">http://iss.ndl.go.jp/api/opensearch?isbn=9784334770051</a>
Hello, thanks for the help yesterday.
However, when I attempt to get values from the elements I always get an empty result.
I was referred to this <a href="http://stackoverflow.com/questions/14853243/parsing-xml-with-namespace-in-python-via-elementtree">link</a>, however I'm not sure I understand it.
Where did I go wrong to end up with empty values?</p>
<pre><code> #!/usr/bin/env python
# -*- coding: utf-8 -*-
import codecs
import sys
import urllib
import urllib2
import re, pprint
from xml.etree.ElementTree import *
import csv
from xml.dom import minidom
import xml.etree.ElementTree as ET
import shelve
import subprocess
errorCheck = "0"
isbn = raw_input("Enter IBSN Number Please ")
isIsbn = len(isbn)
# ElementTree requires namespace definition to work with XML with namespaces correctly
# It is hardcoded at this point, but this should be constructed from response.
namespaces = {
'dc': 'http://purl.org/dc/elements/1.1/',
'dcndl': 'http://ndl.go.jp/dcndl/terms/',
}
# for prefix, uri in namespaces.iteritems():
# ElementTree.register_namespace(prefix, uri)
if isIsbn == 10 or isIsbn == 13:
errorCheck = 1
url = "http://iss.ndl.go.jp/api/opensearch?isbn=%s" % isbn
req = urllib2.Request(url)
response = urllib2.urlopen(req)
tree = ET.parse(response)
root = tree.getroot()
# root = ET.fromstring(XmlData)
print root.findall('dc:title', namespaces)
print root.findall('dc:title')
print root.findall('dc:identifier', namespaces)
print root.findall('dc:identifier')
print root.findall('identifier')
if errorCheck == "0":
print "It is not ISBN"
# print(root.tag,root.attrib)
# for child in root.find('.//item'):
# print child.text
</code></pre>
| 1
|
2016-09-06T22:52:29Z
| 39,359,979
|
<p>Your code needs a slight modification: add <code>.//</code> to the expression in the <em>findall</em> call. The root node is the <em>rss</em> node, and the <em>dc:title</em> elements are descendants, not direct children, of the <em>rss</em> node, so you need to search through the whole document:</p>
<pre><code>import xml.etree.ElementTree as ET
import requests
url = "http://iss.ndl.go.jp/api/opensearch?isbn=9784334770051"
tree = ET.fromstring(requests.get(url).content)
namespaces = {
'dc': 'http://purl.org/dc/elements/1.1/',
'dcndl': 'http://ndl.go.jp/dcndl/terms/',
}
[t.text for t in tree.findall('.//dc:title', namespaces)]
[i.text for i in tree.findall('.//dc:identifier', namespaces)]
</code></pre>
<p>You can do it very easily using <em>lxml</em>, which maps the namespaces for you and can parse the source straight from the URL:</p>
<pre><code>In [1]: import lxml.etree as et
In [2]: url = "http://iss.ndl.go.jp/api/opensearch?isbn=9784334770051"
In [3]: tree = et.parse(url)
In [4]: nsmap = tree.getroot().nsmap
In [5]: print(tree.xpath("//dc:title/text()", namespaces=nsmap))
[u'\u9244\u8155\u30a2\u30c8\u30e0']
In [6]: print(tree.xpath("//dc:identifier/text()", namespaces=nsmap))
['4334770053', '95078560']
</code></pre>
<p>You can see the path to one of the dc:titles:</p>
<pre><code>In [55]: tree
Out[55]: <Element 'rss' at 0x7f996e8b66d0> # root
In [56]: tree.findall('channel') # child of root so don't need .//
Out[56]: [<Element 'channel' at 0x7f996e131990>]
In [57]: tree.findall('channel/item/dc:title', namespaces) # item is a descendant of rss, item is parent of the dc:title
Out[57]: [<Element '{http://purl.org/dc/elements/1.1/}title' at 0x7f996e131910>]
</code></pre>
<p>Same with the identifiers:</p>
<pre><code>In [58]: tree.findall('channel//item//dc:identifier', namespaces)
Out[58]:
[<Element '{http://purl.org/dc/elements/1.1/}identifier' at 0x7f996e131c50>,
<Element '{http://purl.org/dc/elements/1.1/}identifier' at 0x7f996e131250>]
</code></pre>
| 0
|
2016-09-07T01:25:38Z
|
[
"python",
"xml",
"xml-parsing"
] |
Python - Graphing contents of multiple files
| 39,358,958
|
<p>I have lists of ~10 corresponding input files containing columns of tab separated data approx 300 lines/datapoints each. </p>
<p>I'm looking to plot the contents of each set of data such that I have 2 plots for each set of data: one is simply x vs (y1,y2,y3,...) and one is transformed by a function, e.g. x vs (f(y1), f(y2),f(y3),...).</p>
<p>I am not sure of the best way to achieve it, I thought about using a simple array of filenames then couldn't work out how to store them all without overwriting the data - something like this: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def ReadDataFile(file):
print (file)
x,y = np.loadtxt(file, unpack=True, usecols=(8,9))
return x, y
inputFiles = ['data1.txt','data2.txt','data2.txt',...]
for file in inputFiles:
x1,y1 = ReadDataFile(file) ## ? ##
p1,q1 = function(x1,y1) ## ? ##
plt.figure(1)
plt.plot(x1,y1)
plt.plot(x2,y2)
...
# plt.savefig(...)
plt.figure(2)
plt.plot(p1,q1)
plt.plot(p2,q2)
...
# plt.savefig(...)
plt.show()
</code></pre>
<p>I guess my question is how best to read and store all the data and maintain the ability to access it without needing to put all the code in the read loop. Can I read two data sets into a list of pairs? Is that a thing in Python? If so, how do I access them?</p>
<p>Thanks in advance for any help!
Best regards! </p>
| 2
|
2016-09-06T22:56:29Z
| 39,359,547
|
<p>Basically, I think you should put all your code in the readloop, because that will work easily. There's a slightly different way of using matplotlib that makes it easy to use the existing organization of your data AND write shorter code. Here's a toy, but complete, example: </p>
<pre><code>import matplotlib.pyplot as plt
from numpy.random import random
fig, axs = plt.subplots(2)
for c in 'abc': # In your case, for filename in [file-list]:
x, y = random((2, 5))
axs[0].plot(x, y, label=c) # filename instead of c in your case
axs[1].plot(x, y**2, label=c) # Plot p(x,y), q(x,y) in your case
axs[0].legend() # handy to get this from the data list
fig.savefig('two_plots.png')
</code></pre>
<p><a href="http://i.stack.imgur.com/8eCxU.png" rel="nofollow"><img src="http://i.stack.imgur.com/8eCxU.png" alt="enter image description here"></a></p>
<p>You can also create two figures and plot into each of them explicitly, if you need them in different files for page layout, etc:</p>
<pre><code>import matplotlib.pyplot as plt
from numpy.random import random
fig1, ax1 = plt.subplots(1)
fig2, ax2 = plt.subplots(1)
for c in 'abc': # or, for filename in [file-list]:
x, y = random((2, 5))
ax1.plot(x, y, label=c)
ax2.plot(x, y**2, label=c)
ax1.legend()
ax2.legend()
fig1.savefig('two_plots_1.png')
fig2.savefig('two_plots_2.png')
</code></pre>
| 0
|
2016-09-07T00:12:51Z
|
[
"python",
"matplotlib"
] |
Multithreading to Scrape Yahoo Finance
| 39,358,982
|
<p>I'm running a program to pull some info from Yahoo! Finance. It runs fine as a <code>For</code> loop, however it takes a long time (about 10 minutes for 7,000 inputs) because it has to process each <code>requests.get(url)</code> individually (or am I mistaken about the major bottleneck?)</p>
<p>Anyway, I came across multithreading as a potential solution. This is what I have tried: </p>
<pre><code>import requests
import pprint
import threading
with open('MFTop30MinusAFew.txt', 'r') as ins: #input file for tickers
for line in ins:
ticker_array = ins.read().splitlines()
ticker = ticker_array
url_array = []
url_data = []
data_array =[]
for i in ticker:
url = 'https://query2.finance.yahoo.com/v10/finance/quoteSummary/'+i+'?formatted=true&crumb=8ldhetOu7RJ&lang=en-US&region=US&modules=defaultKeyStatistics%2CfinancialData%2CcalendarEvents&corsDomain=finance.yahoo.com'
url_array.append(url) #loading each complete url at one time
def fetch_data(url):
urlHandler = requests.get(url)
data = urlHandler.json()
data_array.append(data)
pprint.pprint(data_array)
threads = [threading.Thread(target=fetch_data, args=(url,)) for url in url_array]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
fetch_data(url_array)
</code></pre>
<p>The error I get is <code>InvalidSchema: No connection adapters were found for '['https://query2.finance.... [url continues]</code>. </p>
<p>PS. I've also read that using a multithreaded approach to scrape websites is bad/can get you blocked. Would Yahoo! Finance mind if I'm pulling data from a couple thousand tickers at once? Nothing happened when I did them sequentially. </p>
| -1
|
2016-09-06T22:59:26Z
| 39,359,155
|
<p>If you look carefully at the error you will notice that it doesn't show one url but all the urls you appended, enclosed in brackets. Indeed, the last line of your code actually calls your method fetch_data with the full array as a parameter, which doesn't make sense. If you remove this last line the code runs just fine, and your threads are called as expected.</p>
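<p>In other words, the tail of the script should only build, start and join the threads - a sketch of just the part that changes:</p>
<pre><code>threads = [threading.Thread(target=fetch_data, args=(url,)) for url in url_array]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
# no extra fetch_data(url_array) call here - each thread already fetched one url
</code></pre>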
| 2
|
2016-09-06T23:22:57Z
|
[
"python",
"multithreading",
"yahoo-finance"
] |
Align python arrays with missing data
| 39,359,061
|
<p>I have some time series data, say:</p>
<pre><code># [ [time] [ data ] ]
a = [[0,1,2,3,4],['a','b','c','d','e']]
b = [[0,3,4]['f','g','h']]
</code></pre>
<p>and I would like an output with some filler value, lets say None for now:</p>
<pre><code>a_new = [[0,1,2,3,4],['a','b','c','d','e']]
b_new = [[0,1,2,3,4],['f',None,None,'g','h']]
</code></pre>
<p>Is there a built in function in python/numpy to do this (or something like this)? Basically I would like to have all of my time vectors of equal size so I can calculate statistics (np.mean) and deal with the missing data accordingly.</p>
<p>Thanks.</p>
| 4
|
2016-09-06T23:09:43Z
| 39,359,133
|
<p>How about this? (I'm assuming your definition of <code>b</code> was a typo, and I'm also assuming you know in advance how many entries you want.)</p>
<pre><code>>>> b = [[0,3,4], ['f','g','h']]
>>> b_new = [list(range(5)), [None] * 5]
>>> for index, value in zip(*b): b_new[1][index] = value
>>> b_new
[[0, 1, 2, 3, 4], ['f', None, None, 'g', 'h']]
</code></pre>
| 4
|
2016-09-06T23:20:35Z
|
[
"python",
"numpy"
] |
Align python arrays with missing data
| 39,359,061
|
<p>I have some time series data, say:</p>
<pre><code># [ [time] [ data ] ]
a = [[0,1,2,3,4],['a','b','c','d','e']]
b = [[0,3,4]['f','g','h']]
</code></pre>
<p>and I would like an output with some filler value, lets say None for now:</p>
<pre><code>a_new = [[0,1,2,3,4],['a','b','c','d','e']]
b_new = [[0,1,2,3,4],['f',None,None,'g','h']]
</code></pre>
<p>Is there a built in function in python/numpy to do this (or something like this)? Basically I would like to have all of my time vectors of equal size so I can calculate statistics (np.mean) and deal with the missing data accordingly.</p>
<p>Thanks.</p>
| 4
|
2016-09-06T23:09:43Z
| 39,359,776
|
<p>smarx has a fine answer, but <a href="http://pandas.pydata.org/pandas-docs/stable/index.html" rel="nofollow">pandas</a> was made exactly for things like this. </p>
<pre><code># your data
a = [[0,1,2,3,4],['a','b','c','d','e']]
b = [[0,3,4],['f','g','h']]
# make an empty DataFrame (can do this faster but I'm going slow so you see how it works)
df_a = pd.DataFrame()
df_a['time'] = a[0]
df_a['A'] = a[1]
df_a.set_index('time',inplace=True)
# same for b (a faster way this time)
df_b = pd.DataFrame({'B':b[1]}, index=b[0])
# now merge the two Series together (the NaNs are in the right place)
df = pd.merge(df_a, df_b, left_index=True, right_index=True, how='outer')
In [28]: df
Out[28]:
A B
0 a f
1 b NaN
2 c NaN
3 d g
4 e h
</code></pre>
<p>Now the fun is just beginning. Within a DataFrame you can </p>
<ul>
<li><p>compute all of your summary statistics (e.g. <code>df.mean()</code>)</p></li>
<li><p>make plots (e.g. <code>df.plot()</code>)</p></li>
<li><p>slice/dice your data basically however you want (e.g <code>df.groupby()</code>)</p></li>
<li><p>Fill in or drop missing data using a specified method (e.g. <code>df.fillna()</code>), </p></li>
<li><p>take quarterly or monthly averages (e.g. <code>df.resample()</code>) and a lot more. </p></li>
</ul>
<p>If you're just getting started (sorry for the infomercial it you aren't), I recommend reading <a href="http://pandas.pydata.org/pandas-docs/stable/10min.html" rel="nofollow">10 minutes to pandas</a> for a quick overview.</p>
| 1
|
2016-09-07T00:51:29Z
|
[
"python",
"numpy"
] |
Align python arrays with missing data
| 39,359,061
|
<p>I have some time series data, say:</p>
<pre><code># [ [time] [ data ] ]
a = [[0,1,2,3,4],['a','b','c','d','e']]
b = [[0,3,4]['f','g','h']]
</code></pre>
<p>and I would like an output with some filler value, lets say None for now:</p>
<pre><code>a_new = [[0,1,2,3,4],['a','b','c','d','e']]
b_new = [[0,1,2,3,4],['f',None,None,'g','h']]
</code></pre>
<p>Is there a built in function in python/numpy to do this (or something like this)? Basically I would like to have all of my time vectors of equal size so I can calculate statistics (np.mean) and deal with the missing data accordingly.</p>
<p>Thanks.</p>
| 4
|
2016-09-06T23:09:43Z
| 39,363,673
|
<p>Here's a vectorized <em>NumPythonic</em> approach -</p>
<pre><code>def align_arrays(A):
time, data = A
time_new = np.arange(np.max(time)+1)
data_new = np.full(time_new.size, None, dtype=object)
data_new[np.in1d(time_new,time)] = data
return time_new, data_new
</code></pre>
<p>Sample runs -</p>
<pre><code>In [113]: a = [[0,1,2,3,4],['a','b','c','d','e']]
In [114]: align_arrays(a)
Out[114]: (array([0, 1, 2, 3, 4]), array(['a', 'b', 'c', 'd', 'e'], dtype=object))
In [115]: b = [[0,3,4],['f','g','h']]
In [116]: align_arrays(b)
Out[116]: (array([0, 1, 2, 3, 4]),array(['f', None, None, 'g', 'h'],dtype=object))
</code></pre>
| 0
|
2016-09-07T07:24:20Z
|
[
"python",
"numpy"
] |
passing directory with spaces in it from raw_input to os.listdir(path) in Python
| 39,359,071
|
<p>I've looked at a few posts regarding this topic but haven't seen anything specifically fitting my usage. <a href="http://stackoverflow.com/questions/36555950/spaces-in-directory-path-python">This one looks to be close</a> to the same issue. They talk about escaping the escape characters.</p>
<p>My issue is that I want this script to run on both MAC and PC systems - and to be able to process files in a sub-folder that has spaces in the folder name.</p>
<p>Right now I use this code gleaned partially from several different SO posts:</p>
<pre><code>directory = raw_input("Enter the location of the files: ")
path = r"%s" % directory
for file in os.listdir(path):
</code></pre>
<p>I don't quite understand what the second line is doing - so perhaps that's the obvious line that needs tweaking. It works fine with regular folder names but not with spaces in the name.</p>
<p>I have tried using "\ " instead of just " ", - that didn't work - but in any case I'm looking for a code solution. I don't want the user to have to specify an escape character</p>
<p>On Windoze using sub folder name "LAS Data" in response to raw_input prompt (without quotation marks), I get an message that:</p>
<blockquote>
<p>the system can't find the path specified "LAS Data\*.*"</p>
</blockquote>
| 0
|
2016-09-06T23:11:13Z
| 39,359,201
|
<p>Your problem is most likely not with the code you present, but with the input you gave. The message </p>
<blockquote>
<p>the system can't find the path specified "LAS Data\*.*"</p>
</blockquote>
<p>suggest that you (or the user) entered the directory together with a wildcard for files. A directory with the name "LAS Data\*.*" indeed does not exist (unless you did something special to your file system :-).</p>
<p>Try entering just "LAS Data" instead.</p>
<p>The raw string should not be needed either</p>
<pre><code>> python
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:32:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> directory = raw_input("Enter the location of the files: ")
Enter the location of the files: Directory with Spaces
>>> for file in os.listdir(directory):
... print(file)
...
hello-1.txt
hello-2.txt
>>>
</code></pre>
| 2
|
2016-09-06T23:29:30Z
|
[
"python",
"python-2.7",
"escaping"
] |
passing directory with spaces in it from raw_input to os.listdir(path) in Python
| 39,359,071
|
<p>I've looked at a few posts regarding this topic but haven't seen anything specifically fitting my usage. <a href="http://stackoverflow.com/questions/36555950/spaces-in-directory-path-python">This one looks to be close</a> to the same issue. They talk about escaping the escape characters.</p>
<p>My issue is that I want this script to run on both MAC and PC systems - and to be able to process files in a sub-folder that has spaces in the folder name.</p>
<p>Right now I use this code gleaned partially from several different SO posts:</p>
<pre><code>directory = raw_input("Enter the location of the files: ")
path = r"%s" % directory
for file in os.listdir(path):
</code></pre>
<p>I don't quite understand what the second line is doing - so perhaps that's the obvious line that needs tweaking. It works fine with regular folder names but not with spaces in the name.</p>
<p>I have tried using "\ " instead of just " ", - that didn't work - but in any case I'm looking for a code solution. I don't want the user to have to specify an escape character</p>
<p>On Windoze using sub folder name "LAS Data" in response to raw_input prompt (without quotation marks), I get an message that:</p>
<blockquote>
<p>the system can't find the path specified "LAS Data\*.*"</p>
</blockquote>
| 0
|
2016-09-06T23:11:13Z
| 39,359,306
|
<blockquote>
<p>I don't quite understand what the [<code>path = r"%s" % directory</code>] line is doing</p>
</blockquote>
<p>It creates a copy <code>path</code> of <code>directory</code>:</p>
<pre><code>>>> directory = raw_input()
LAS Data
>>> path = r"%s" % directory
>>> path == directory
True
>>> type(path), type(directory)
(<type 'str'>, <type 'str'>)
</code></pre>
<p>Really wondering where you got that from. Seems pretty pointless.</p>
| 1
|
2016-09-06T23:41:41Z
|
[
"python",
"python-2.7",
"escaping"
] |
From stat().st_mtime to datetime?
| 39,359,245
|
<p>What is the most idiomatic/efficient way to convert from a modification time retrieved from <code>stat()</code> call to a <code>datetime</code> object? I came up with the following (python3):</p>
<pre><code>from datetime import datetime, timedelta, timezone
from pathlib import Path
path = Path('foo')
path.touch()
statResult = path.stat()
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
modified = epoch + timedelta(seconds=statResult.st_mtime)
print('modified', modified)
</code></pre>
<p>Seems round a bout, and a bit surprising that I have to hard code the Unix epoch in there. Is there a more direct way?</p>
| 3
|
2016-09-06T23:35:09Z
| 39,359,270
|
<p>Try
<a href="https://docs.python.org/3.5/library/datetime.html#datetime.date.fromtimestamp" rel="nofollow">datetime.fromtimestamp(statResult.st_mtime)</a></p>
<p>e.g.</p>
<pre><code>mod_timestamp = datetime.fromtimestamp(os.path.getmtime(&lt;YOUR_PATH_HERE&gt;))
</code></pre>
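<p>If you want the timezone-aware equivalent of the UTC epoch arithmetic in the question, <code>fromtimestamp</code> also accepts a <code>tz</code> argument:</p>
<pre><code>from datetime import datetime, timezone

modified = datetime.fromtimestamp(statResult.st_mtime, tz=timezone.utc)
</code></pre>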
| 1
|
2016-09-06T23:38:25Z
|
[
"python",
"python-3.x",
"datetime",
"stat",
"pathlib"
] |
Add new columns to pandas dataframe based on other dataframe
| 39,359,272
|
<p>I'm trying to set a new column (two columns in fact) in a pandas dataframe, with data that comes from another dataframe.</p>
<p>I have the following two dataframes (they are example for this purpose, the original dataframes are so much bigger):</p>
<pre><code>In [116]: df0
Out[116]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
In [118]: df1
Out[118]:
A D E
0 2 7 2
1 6 5 5
2 4 3 2
3 0 1 0
4 5 4 6
5 0 1 0
</code></pre>
<p>And I want to have a new dataframe (or added to df0, whatever), as:</p>
<pre><code>df2:
A B C D E
0 0 1 0 1 0
1 2 3 2 7 2
2 4 5 4 3 2
3 5 5 5 4 6
</code></pre>
<p>As you can see, the row with A=6, which is present in df1 but not in df0, does not appear in the resulting dataframe. Also, the row with A=0 is duplicated in df1, but not in the result df2.</p>
<p>Actually, I'm having trouble with the selection method. I can do this:</p>
<pre><code>df1.loc[df1['A'].isin(df0['A'])]
</code></pre>
<p>But I'm not sure how to keep only the unique data (remember that df1 can contain duplicates) and add the two columns to the df2 dataset (or add them to df0).
I've searched here and I don't see how to apply something like groupby, or even map.</p>
<p>Any idea?</p>
<p>Thanks!</p>
| 2
|
2016-09-06T23:38:30Z
| 39,359,665
|
<p>This is a basic application of <code>merge</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">docs</a>), joining on the <code>A</code> column after dropping the duplicated <code>A</code> values from <code>df1</code>:</p>
<pre><code>import pandas as pd

df2 = pd.merge(df0, df1.drop_duplicates(subset='A'), on='A')
</code></pre>
| 2
|
2016-09-07T00:32:47Z
|
[
"python",
"pandas",
"dataframe",
"machine-learning",
"data-science"
] |
Python Google Admin SDK 403 Error
| 39,359,291
|
<p>I am attempting to retrieve all the members of a group but am receiving an error.</p>
<p>Here is my code:</p>
<pre><code># -*- coding: utf-8 -*-
from __future__ import print_function
import httplib2
import os
from apiclient import discovery
import oauth2client
from oauth2client import client
from oauth2client import tools
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/admin-directory_v1-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/admin.directory.group.member.readonly'
CLIENT_SECRET_FILE = 'client_secret.json'
CRED_SAVE = 'cred_save.json'
APPLICATION_NAME = 'Directory API Python Quickstart'
def get_credentials():
"""Gets valid user credentials from storage.
If nothing has been stored, or if the stored credentials are invalid,
the OAuth2 flow is completed to obtain the new credentials.
Returns:
Credentials, the obtained credential.
"""
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
CRED_SAVE)
store = oauth2client.file.Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def get_remote_users(service, http, group="all.faculty@myorg.jp"):
request = service.members().list(groupKey=group).execute()
def main():
"""Shows basic usage of the Google Admin SDK Directory API.
Creates a Google Admin SDK API service object and outputs a list of first
10 users in the domain.
"""
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('admin', 'directory_v1', http=http)
users = get_remote_users(service, http)
if not users:
print('No users in the domain.')
else:
print('Users:')
for user in users:
print('{0} ({1})'.format(user['primaryEmail'],
user['name']['fullName']))
if __name__ == '__main__':
main()
</code></pre>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/user/Documents/workspace/module/groups_members.py", line 84, in <module>
main()
File "/Users/user/Documents/workspace/module/groups_members.py", line 72, in main
users = get_remote_users(service, http)
File "/Users/user/Documents/workspace/module/groups_members.py", line 58, in get_remote_users
request = service.members().list(groupKey=group).execute()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/oauth2client/util.py", line 137, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/googleapiclient/http.py", line 832, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/admin/directory/v1/groups/all.faculty%40myorg.jp/members?alt=json returned "Insufficient Permission">
</code></pre>
<p>Any idea why I'm getting a 403 error? As far as I know I'm in the right scope and I have the correct json auth file stored. I can also do other things like list users with this code and not receive a 403 error.</p>
<p>Perhaps there's some extra authentication I need to do.</p>
<p>Any help would be much appreciated.</p>
<p>Cheers</p>
| 1
|
2016-09-06T23:40:07Z
| 39,360,057
|
<p>I obviously can't read: the comment in my own code already says that if you modify the scopes, you have to delete the previously saved credentials.</p>
<p>Solved this by deleting the cached credentials from <code>~/.credentials/</code>, which forces the OAuth flow to run again and authorize the new scope.</p>
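<p>For reference, a minimal sketch of clearing the cached credential so the flow re-runs (the <code>cred_save.json</code> filename comes from the <code>CRED_SAVE</code> constant in the question; adjust if yours differs):</p>
<pre><code>import os

# Remove the stored OAuth credential so get_credentials() triggers a fresh
# authorization with the new scope the next time the script runs.
cred_path = os.path.join(os.path.expanduser('~'), '.credentials', 'cred_save.json')
if os.path.exists(cred_path):
    os.remove(cred_path)
</code></pre>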
| 0
|
2016-09-07T01:38:47Z
|
[
"python",
"api",
"sdk",
"admin"
] |
Tell python to access resources above the path of the script
| 39,359,359
|
<p>If you run a script from /tmp/myfolder/myscript/, and want to access a resource in myfolder, how do you do that in python?</p>
<p>I did get the file path with <code>__file__</code>, (I was told to not use it because it may not always be populated) but I can't do like in bash, where I "cd .." to get to the previous directory, because Python does not understand that.</p>
<p>I would like to run the python script and no matter where the folder is, it will always go up one level and get the resource that I need.</p>
| 2
|
2016-09-06T23:48:03Z
| 39,359,406
|
<p>Python does understand <code>cd</code>: <code>os.chdir()</code>. I am not sure exactly what you are trying to do, but you can try:</p>
<pre><code>import os
os.chdir('..')
</code></pre>
<p>This will change your working directory to the one above the current one (just like <code>cd ..</code>).</p>
<p>You can also use <code>..</code> for many operations that use paths. For example, if you want to open a file that is up one directory: <code>open('../myfile.txt')</code>, etc. You may also find the answers to <a href="http://stackoverflow.com/questions/21030751/python-imports-from-the-directory-above">this question</a> useful, depending on what your goal is.</p>
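<p>If the lookup should not depend on the current working directory at all, one option is to resolve the path relative to the script itself. This is only a sketch: it relies on <code>__file__</code> (which, as the question notes, may not always be set) and uses a hypothetical resource name:</p>
<pre><code>import os

# Directory containing this script, regardless of where it was launched from
script_dir = os.path.dirname(os.path.abspath(__file__))

# One level up from the script's directory, e.g. /tmp/myfolder for a script
# living in /tmp/myfolder/myscript/
parent_dir = os.path.abspath(os.path.join(script_dir, os.pardir))

resource_path = os.path.join(parent_dir, 'resource.txt')  # hypothetical file name
with open(resource_path) as fh:
    data = fh.read()
</code></pre>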
| 1
|
2016-09-06T23:54:45Z
|
[
"python"
] |
Tell python to access resources above the path of the script
| 39,359,359
|
<p>If you run a script from /tmp/myfolder/myscript/, and want to access a resource in myfolder, how do you do that in python?</p>
<p>I did get the file path with <code>__file__</code>, (I was told to not use it because it may not always be populated) but I can't do like in bash, where I "cd .." to get to the previous directory, because Python does not understand that.</p>
<p>I would like to run the python script and no matter where the folder is, it will always go up one level and get the resource that I need.</p>
| 2
|
2016-09-06T23:48:03Z
| 39,359,419
|
<p>I am unclear about exactly what you are talking about, but I infer that you are talking about intra-package references. You can read more about it <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow">here</a>, but the general concept is you use <code>from .. import module</code> when trying to import a module from another package. </p>
<p>The <code>..</code> in the statement tells the interpreter to go up one package level and look for a module named <code>module</code>. You can also use <code>from . import module</code>, which imports
<code>module</code> from the current package; note that relative imports always use dots (<code>from ..otherpackage import module</code>), not filesystem-style <code>../</code> paths.</p>
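<p>A small illustration of the layout this assumes (the package and module names here are made up):</p>
<pre><code># Hypothetical layout:
#
# mypackage/
#     __init__.py
#     utils/
#         __init__.py
#         helpers.py
#     scripts/
#         __init__.py
#         config.py
#         runner.py
#
# Inside mypackage/scripts/runner.py:
from . import config            # sibling module in mypackage.scripts
from ..utils import helpers     # module from the subpackage one level up

# Relative imports only work when the code runs as part of the package
# (e.g. python -m mypackage.scripts.runner), not as a loose script.
</code></pre>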
| 1
|
2016-09-06T23:56:04Z
|
[
"python"
] |
How to use yaml.load_all with fileinput.input?
| 39,359,390
|
<p>Without resorting to <code>''.join</code>, is there a Pythonic way to use PyYAML's <code>yaml.load_all</code> with <code>fileinput.input()</code> for easy streaming of multiple documents from multiple sources?</p>
<p>I'm looking for something like the following (non-working example):</p>
<pre><code># example.py
import fileinput
import yaml
for doc in yaml.load_all(fileinput.input()):
print(doc)
</code></pre>
<p>Expected output:</p>
<pre><code>$ cat >pre.yaml <<<'--- prefix-doc'
$ cat >post.yaml <<<'--- postfix-doc'
$ python example.py pre.yaml - post.yaml <<<'--- hello'
prefix-doc
hello
postfix-doc
</code></pre>
<p>Of course, <code>yaml.load_all</code> expects either a string, bytes, or a file-like object and <code>fileinput.input()</code> is none of those things, so the above example does not work.</p>
<p>Actual output:</p>
<pre><code>$ python example.py pre.yaml - post.yaml <<<'--- hello'
...
AttributeError: FileInput instance has no attribute 'read'
</code></pre>
<p>You can make the example work with <code>''.join</code>, but that's cheating. I'm looking for a way that does not read the entire stream into memory at once.</p>
<p>We might rephrase the question as <em>Is there some way to emulate a string, bytes, or file-like object that proxies to an underlying iterator of strings?</em> However, I doubt that <code>yaml.load_all</code> actually needs the entire file-like interface, so that phrasing would ask for more than is strictly necessary.</p>
<p>Ideally I'm looking for the minimal adapter that would support something like this:</p>
<pre><code>for doc in yaml.load_all(minimal_adapter(fileinput.input())):
print(doc)
</code></pre>
| 3
|
2016-09-06T23:52:22Z
| 39,360,463
|
<p>The problem with <code>fileinput.input</code> is that the resulting object doesn't have a <code>read</code> method, which is what <code>yaml.load_all</code> is looking for. If you're willing to give up <code>fileinput</code>, you can just write your own class that will do what you want:</p>
<pre><code>import sys
import yaml
class BunchOFiles (object):
def __init__(self, *files):
self.files = files
self.fditer = self._fditer()
self.fd = self.fditer.next()
def _fditer(self):
for fn in self.files:
with sys.stdin if fn == '-' else open(fn, 'r') as fd:
yield fd
def read(self, size=-1):
while True:
data = self.fd.read(size)
if data:
break
else:
try:
self.fd = self.fditer.next()
except StopIteration:
self.fd = None
break
return data
bunch = BunchOFiles(*sys.argv[1:])
for doc in yaml.load_all(bunch):
print doc
</code></pre>
<p>The <code>BunchOFiles</code> class gets you an object with a <code>read</code> method that will happily iterate over a list of files until everything is exhausted. Given the above code and your sample input, we get exactly the output you're looking for.</p>
| 3
|
2016-09-07T02:41:55Z
|
[
"python",
"pyyaml"
] |
How to use yaml.load_all with fileinput.input?
| 39,359,390
|
<p>Without resorting to <code>''.join</code>, is there a Pythonic way to use PyYAML's <code>yaml.load_all</code> with <code>fileinput.input()</code> for easy streaming of multiple documents from multiple sources?</p>
<p>I'm looking for something like the following (non-working example):</p>
<pre><code># example.py
import fileinput
import yaml
for doc in yaml.load_all(fileinput.input()):
print(doc)
</code></pre>
<p>Expected output:</p>
<pre><code>$ cat >pre.yaml <<<'--- prefix-doc'
$ cat >post.yaml <<<'--- postfix-doc'
$ python example.py pre.yaml - post.yaml <<<'--- hello'
prefix-doc
hello
postfix-doc
</code></pre>
<p>Of course, <code>yaml.load_all</code> expects either a string, bytes, or a file-like object and <code>fileinput.input()</code> is none of those things, so the above example does not work.</p>
<p>Actual output:</p>
<pre><code>$ python example.py pre.yaml - post.yaml <<<'--- hello'
...
AttributeError: FileInput instance has no attribute 'read'
</code></pre>
<p>You can make the example work with <code>''.join</code>, but that's cheating. I'm looking for a way that does not read the entire stream into memory at once.</p>
<p>We might rephrase the question as <em>Is there some way to emulate a string, bytes, or file-like object that proxies to an underlying iterator of strings?</em> However, I doubt that <code>yaml.load_all</code> actually needs the entire file-like interface, so that phrasing would ask for more than is strictly necessary.</p>
<p>Ideally I'm looking for the minimal adapter that would support something like this:</p>
<pre><code>for doc in yaml.load_all(minimal_adapter(fileinput.input())):
print(doc)
</code></pre>
| 3
|
2016-09-06T23:52:22Z
| 39,361,580
|
<p>Your <code>minimal_adapter</code> should take a <code>fileinput.FileInput</code> as a parameter and return an object which <code>load_all</code> can use. <code>load_all</code> either takes as an argument a string, but that would require concatenating the input, or it expects the argument to have a <code>read()</code> method.</p>
<p>Since your minimal_adapter needs to preserve some state, I find it clearest/easiest to implement it as an instance of a class that has a <code>__call__</code> method, and have that method return the instance and store its argument for future use. Implemented that way, the class should also have a <code>read()</code> method, as this will be called after handing the instance to <code>load_all</code>:</p>
<pre><code>import fileinput
import ruamel.yaml
class MinimalAdapter:
def __init__(self):
self._fip = None
self._buf = None # storage of read but unused material, maximum one line
def __call__(self, fip):
self._fip = fip # store for future use
self._buf = ""
return self
def read(self, size):
if len(self._buf) >= size:
# enough in buffer from last read, just cut it off and return
tmp, self._buf = self._buf[:size], self._buf[size:]
return tmp
for line in self._fip:
self._buf += line
if len(self._buf) > size:
break
else:
# ran out of lines, return what we have
tmp, self._buf = self._buf, ''
return tmp
tmp, self._buf = self._buf[:size], self._buf[size:]
return tmp
minimal_adapter = MinimalAdapter()
for doc in ruamel.yaml.load_all(minimal_adapter(fileinput.input())):
print(doc)
</code></pre>
<p>With this, running your example invocation exactly gives the output that you want. </p>
<p>This is probably only more memory efficient for larger files. The <code>load_all</code> tries to read 1024-byte blocks at a time (easily found out by putting a print statement in <code>MinimalAdapter.read()</code>) and <code>fileinput</code> does some buffering as well (use <code>strace</code> if you're interested in finding out how it behaves).</p>
<hr>
<p><sub>This was done using <a href="https://pypi.python.org/pypi/ruamel.yaml" rel="nofollow">ruamel.yaml</a> a YAML 1.2 parser, of which I am the author. This should work for PyYAML, of which ruamel.yaml is a derived superset, as well.</sub></p>
| 2
|
2016-09-07T05:06:08Z
|
[
"python",
"pyyaml"
] |
Type and default input value of a Click.option in --help option
| 39,359,420
|
<p>How can I show the default input value of a <code>@click.option</code> in the <code>--help</code> output of a program?</p>
<p>Thank you.</p>
| 0
|
2016-09-06T23:56:06Z
| 39,359,671
|
<p>Pass <code>show_default=True</code> to the <code>click.option</code> decorator when defining an option. This will display the default value in the help text when the program is called with the <code>--help</code> option.
For example -</p>
<pre><code>#hello.py
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.', show_default=True)
@click.option('--name', prompt='Your name',
help='The person to greet.')
def hello(count, name):
"""<insert text that you want to display in help screen> e.g: Simple program that greets NAME for a total of COUNT times."""
for x in range(count):
click.echo('Hello %s!' % name)
if __name__ == '__main__':
hello()
</code></pre>
<p>Now you can see the help screen generated by running <code>python hello.py --help</code> as</p>
<pre><code>$ python hello.py --help
Usage: hello.py [OPTIONS]
<insert text that you want to display in help screen> e.g: Simple program that greets NAME for a total of COUNT times.
Options:
--count INTEGER Number of greetings. [default: 1]
--name TEXT The person to greet.
--help Show this message and exit.
</code></pre>
<p>Hence you can see that the default value of the <code>count</code> option is displayed in the help text of the program. (Reference : <a href="https://github.com/pallets/click/issues/243" rel="nofollow">https://github.com/pallets/click/issues/243</a>)</p>
| 0
|
2016-09-07T00:34:44Z
|
[
"python",
"python-click"
] |
Closing Selenium Browser that was Opened in a Child Process
| 39,359,481
|
<p>Here's the situation:</p>
<p>I create a child process which opens and deals with a webdriver. The child process is finicky and might error, in which case it would close immediately, and control would be returned to the main function. In this situation, however, the browser would still be open (as the child process never completely finished running). How can I close a browser that is initialized in a child process?</p>
<p>Approaches I've tried so far:</p>
<p>1) Initializing the webdriver in the main function and passing it to the child process as an argument.</p>
<p>2) Passing the webdriver between the child and parent process using a queue.</p>
<p>The code:</p>
<pre><code>import multiprocessing

from selenium import webdriver
def foo(queue):
driver = webdriver.Chrome()
queue.put(driver)
# Do some other stuff
# If finicky stuff happens, this driver.close() will not run
driver.close()
if __name__ == '__main__':
queue = multiprocessing.Queue()
p = multiprocessing.Process(target=foo, name='foo', args=(queue,))
    p.start()
    p.join()  # Wait for process to finish
# Try to close the browser if still open
try:
driver = queue.get()
driver.close()
except:
pass
</code></pre>
| 0
|
2016-09-07T00:04:30Z
| 39,397,438
|
<p>I found a solution:</p>
<p>In foo(), get the process ID of the webdriver when you open a new browser. Add the process ID to the queue. Then in the main function, add time.sleep(60) to wait for a minute, then get the process ID from the queue and use a try-except to try and close the particular process ID.</p>
<p>If foo() running in a separate process hangs, then the browser will be closed in the main function after one minute. </p>
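<p>A minimal sketch of this idea, assuming Chrome and reading the pid of the chromedriver service process (whether killing it also closes the browser window can depend on the driver version, so treat this as a fallback rather than a clean shutdown):</p>
<pre><code>import multiprocessing
import os
import signal
import time

from selenium import webdriver


def foo(queue):
    driver = webdriver.Chrome()
    # Hand the chromedriver service pid to the parent before doing anything risky
    queue.put(driver.service.process.pid)
    # ... finicky work that might raise and leave the browser open ...
    driver.quit()


if __name__ == '__main__':
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=foo, name='foo', args=(queue,))
    p.start()

    time.sleep(60)  # give the child a minute to finish on its own
    try:
        pid = queue.get(timeout=5)     # may be empty if the child died very early
        os.kill(pid, signal.SIGTERM)   # fails harmlessly if the process is already gone
    except Exception:
        pass
    p.join()
</code></pre>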
| 0
|
2016-09-08T18:01:20Z
|
[
"python",
"selenium",
"webdriver",
"queue",
"multiprocessing"
] |
not getting expected result from sql query select python
| 39,359,548
|
<p>I am trying to select from a specific row and then column in SQL.
I want to find a specific user_name row and then select the access_id from the row. </p>
<p>Here is all of my code. </p>
<pre><code>import sys, ConfigParser, numpy
import MySQLdb as mdb
from plaid.utils import json
class SQLConnection:
"""Used to connect to a SQL database and send queries to it"""
config_file = 'db.cfg'
section_name = 'Database Details'
_db_name = ''
_hostname = ''
_ip_address = ''
_username = ''
_password = ''
def __init__(self):
config = ConfigParser.RawConfigParser()
config.read(self.config_file)
print "making"
try:
_db_name = config.get(self.section_name, 'db_name')
_hostname = config.get(self.section_name, 'hostname')
_ip_address = config.get(self.section_name, 'ip_address')
_user = config.get(self.section_name, 'user')
_password = config.get(self.section_name, 'password')
except ConfigParser.NoOptionError as e:
print ('one of the options in the config file has no value\n{0}: ' +
'{1}').format(e.errno, e.strerror)
sys.exit()
self.con = mdb.connect(_hostname, _user, _password, _db_name)
self.con.autocommit(False)
self.con.ping(True)
self.cur = self.con.cursor(mdb.cursors.DictCursor)
def query(self, sql_query, values=None):
"""
take in 1 or more query strings and perform a transaction
@param sql_query: either a single string or an array of strings
representing individual queries
@param values: either a single json object or an array of json objects
representing quoted values to insert into the relative query
(values and sql_query indexes must line up)
"""
# TODO check sql_query and values to see if they are lists
# if sql_query is a string
if isinstance(sql_query, basestring):
self.cur.execute(sql_query, values)
self.con.commit()
# otherwise sql_query should be a list of strings
else:
# execute each query with relative values
for query, sub_values in zip(sql_query, values):
self.cur.execute(query, sub_values)
# commit all these queries
self.con.commit
return self.cur.fetchall
def get_plaid_token(self,username):
result= self.query("SELECT access_id FROM `users` WHERE `user_name` LIKE %s",[username])
print type (result)
return result
print SQLConnection().get_plaid_token("test")
</code></pre>
<p>I would like to get the transaction ID, but for some reason "result" returns </p>
<blockquote>
<pre><code>> <bound method DictCursor.fetchall of <MySQLdb.cursors.DictCursor
> object at 0x000000000396F278>>
</code></pre>
</blockquote>
<p>result is also of type <code>"instancemethod"</code></p>
| 0
|
2016-09-07T00:13:07Z
| 39,359,565
|
<p>try changing this line:</p>
<pre><code>return self.cur.fetchall
</code></pre>
<p>to </p>
<pre><code>return self.cur.fetchall()
</code></pre>
<p>Without the parentheses after the method name, you are returning a reference to that method itself, not running the method.</p>
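<p>To see the difference in plain Python (the same applies to the bare <code>self.con.commit</code> in the <code>query</code> method, which never actually commits):</p>
<pre><code>class Demo(object):
    def fetchall(self):
        return [{'access_id': 42}]

d = Demo()
print(d.fetchall)    # <bound method Demo.fetchall of ...> -- just a reference
print(d.fetchall())  # [{'access_id': 42}] -- the parentheses actually call it
</code></pre>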
| 2
|
2016-09-07T00:16:13Z
|
[
"python",
"mysql",
"sql"
] |
How can I reverse parts of sentence in python?
| 39,359,599
|
<p>I have a sentence, let's say:</p>
<p>The quick brown fox jumps over the lazy dog</p>
<p>I want to create a function that takes 2 arguments, a sentence and a list of things to ignore. And it returns that sentence with the reversed words, however it should ignore the stuff I pass to it in a second argument. This is what I have at the moment:</p>
<pre><code>def main(sentence, ignores):
return ' '.join(word[::-1] if word not in ignores else word for word in sentence.split())
</code></pre>
<p>But this will only work if I pass a second list like so:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick', 'lazy']))
</code></pre>
<p>However, I want to pass a list like this:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick brown', 'lazy dog']))
</code></pre>
<p>expected result:
<code>ehT quick brown xof spmuj revo eht lazy dog</code></p>
<p>So basically the second argument (the list) will have parts of the sentence that should be ignored. Not just single words.</p>
<p>Do I have to use regexp for this? I was trying to avoid it...</p>
| 4
|
2016-09-07T00:21:28Z
| 39,359,682
|
<p>Instead of placeholders, why not just initially reverse any phrase that you want to be around the right way, then reverse the whole string:</p>
<pre><code>def main(sentence, ignores):
for phrase in ignores:
reversed_phrase = ' '.join([word[::-1] for word in phrase.split()])
sentence = sentence.replace(phrase, reversed_phrase)
return ' '.join(word[::-1] for word in sentence.split())
print(main('The quick brown fox jumps over the lazy dog', ['quick', 'lazy']))
print(main('The quick brown fox jumps over the lazy dog', ['quick brown', 'lazy dog']))
</code></pre>
<p>returns:</p>
<pre><code>ehT quick nworb xof spmuj revo eht lazy god
ehT quick brown xof spmuj revo eht lazy dog
</code></pre>
| 0
|
2016-09-07T00:36:35Z
|
[
"python",
"python-3.x"
] |
How can I reverse parts of sentence in python?
| 39,359,599
|
<p>I have a sentence, let's say:</p>
<p>The quick brown fox jumps over the lazy dog</p>
<p>I want to create a function that takes 2 arguments, a sentence and a list of things to ignore. And it returns that sentence with the reversed words, however it should ignore the stuff I pass to it in a second argument. This is what I have at the moment:</p>
<pre><code>def main(sentence, ignores):
return ' '.join(word[::-1] if word not in ignores else word for word in sentence.split())
</code></pre>
<p>But this will only work if I pass a second list like so:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick', 'lazy']))
</code></pre>
<p>However, I want to pass a list like this:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick brown', 'lazy dog']))
</code></pre>
<p>expected result:
<code>ehT quick brown xof spmuj revo eht lazy dog</code></p>
<p>So basically the second argument (the list) will have parts of the sentence that should be ignored. Not just single words.</p>
<p>Do I have to use regexp for this? I was trying to avoid it...</p>
| 4
|
2016-09-07T00:21:28Z
| 39,359,689
|
<p>I'm the first person to recommend avoiding regular expressions, but in this case, the complexity of doing without is greater than the complexity added by using them:</p>
<pre><code>import re
def main(sentence, ignores):
# Dedup and allow fast lookup for determining whether to reverse a component
ignores = frozenset(ignores)
# Make a pattern that will prefer matching the ignore phrases, but
# otherwise matches each space and non-space run (so nothing is dropped)
# Alternations match the first pattern by preference, so you'll match
# the ignores phrases if possible, and general space/non-space patterns
# otherwise
pat = r'|'.join(map(re.escape, ignores)) + r'|\S+|\s+'
# Returns the chopped up pieces (space and non-space runs, but ignore phrases stay together
parts = re.findall(pat, sentence)
# Reverse everything not found in ignores and then put it all back together
return ''.join(p if p in ignores else p[::-1] for p in parts)
</code></pre>
| 0
|
2016-09-07T00:37:20Z
|
[
"python",
"python-3.x"
] |
How can I reverse parts of sentence in python?
| 39,359,599
|
<p>I have a sentence, let's say:</p>
<p>The quick brown fox jumps over the lazy dog</p>
<p>I want to create a function that takes 2 arguments, a sentence and a list of things to ignore. And it returns that sentence with the reversed words, however it should ignore the stuff I pass to it in a second argument. This is what I have at the moment:</p>
<pre><code>def main(sentence, ignores):
return ' '.join(word[::-1] if word not in ignores else word for word in sentence.split())
</code></pre>
<p>But this will only work if I pass a second list like so:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick', 'lazy']))
</code></pre>
<p>However, I want to pass a list like this:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick brown', 'lazy dog']))
</code></pre>
<p>expected result:
<code>ehT quick brown xof spmuj revo eht lazy dog</code></p>
<p>So basically the second argument (the list) will have parts of the sentence that should be ignored. Not just single words.</p>
<p>Do I have to use regexp for this? I was trying to avoid it...</p>
| 4
|
2016-09-07T00:21:28Z
| 39,359,805
|
<p>Just another idea, reverse every word and then reverse the ignores right back:</p>
<pre><code>>>> from functools import reduce
>>> def main(sentence, ignores):
def r(s):
return ' '.join(w[::-1] for w in s.split())
return reduce(lambda s, i: s.replace(r(i), i), ignores, r(sentence))
>>> main('The quick brown fox jumps over the lazy dog', ['quick brown', 'lazy dog'])
'ehT quick brown xof spmuj revo eht lazy dog'
</code></pre>
| 0
|
2016-09-07T00:56:25Z
|
[
"python",
"python-3.x"
] |
How can I reverse parts of sentence in python?
| 39,359,599
|
<p>I have a sentence, let's say:</p>
<p>The quick brown fox jumps over the lazy dog</p>
<p>I want to create a function that takes 2 arguments, a sentence and a list of things to ignore. And it returns that sentence with the reversed words, however it should ignore the stuff I pass to it in a second argument. This is what I have at the moment:</p>
<pre><code>def main(sentence, ignores):
return ' '.join(word[::-1] if word not in ignores else word for word in sentence.split())
</code></pre>
<p>But this will only work if I pass a second list like so:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick', 'lazy']))
</code></pre>
<p>However, I want to pass a list like this:</p>
<pre><code>print(main('The quick brown fox jumps over the lazy dog', ['quick brown', 'lazy dog']))
</code></pre>
<p>expected result:
<code>ehT quick brown xof spmuj revo eht lazy dog</code></p>
<p>So basically the second argument (the list) will have parts of the sentence that should be ignored. Not just single words.</p>
<p>Do I have to use regexp for this? I was trying to avoid it...</p>
| 4
|
2016-09-07T00:21:28Z
| 39,364,477
|
<p>I have attempted to solve the issue of overlapping ignore phrases e.g. <code>['brown fox', 'quick brown']</code> raised by @PadraicCunningham.</p>
<p>There's obviously a lot more looping and code feels less pythonic so I'd be interested in feedback on how to improve this.</p>
<pre><code>import re
def _span_combiner(spans):
"""replace overlapping spans with encompasing single span"""
for i, s in enumerate(spans):
start = s[0]
end = s[1]
for x in spans[i:]:
if x[0] < end:
end = x[1]
yield (start, end)
def main(sentence, ignores):
# spans is a start and finish indices for each ignore phrase in order of occurence
spans = sorted(
[[m.span() for m in re.finditer(p, sentence)][0] for p in ignores if p in sentence]
)
# replace overlapping indices with single set of indices encompasing overlapped range
spans = [s for s in _span_combiner(spans)]
# recreate ignore list by slicing sentence with combined spans
ignores = [sentence[s[0]:s[1]] for s in spans]
for phrase in ignores:
reversed_phrase = ' '.join([word[::-1] for word in phrase.split()])
sentence = sentence.replace(phrase, reversed_phrase)
return ' '.join(word[::-1] for word in sentence.split())
if __name__ == "__main__":
print(main('The quick brown fox jumps over the lazy dog', ['quick', 'lazy']))
print(main('The quick brown fox jumps over the lazy dog', ['brown fox', 'lazy dog']))
print(main('The quick brown fox jumps over the lazy dog', ['nonexistent' ,'brown fox', 'quick brown']))
print(main('The quick brown fox jumps over the brown fox', ['brown fox', 'quick brown']))
</code></pre>
<p>results:</p>
<pre><code>ehT quick nworb xof spmuj revo eht lazy god
ehT kciuq brown fox spmuj revo eht lazy dog
ehT quick brown fox spmuj revo eht yzal god
ehT quick brown fox spmuj revo eht brown fox
</code></pre>
| 0
|
2016-09-07T08:07:27Z
|
[
"python",
"python-3.x"
] |
python pandas.Series.str.contains WHOLE WORD
| 39,359,601
|
<p>df (Pandas Dataframe) has three rows.</p>
<pre><code>col_name
"This is Donald."
"His hands are so small"
"Why are his fingers so short?"
</code></pre>
<p>I'd like to extract the row that contains "is" and "small".</p>
<p>If I do</p>
<pre><code>df.col_name.str.contains("is|small", case=False)
</code></pre>
<p>Then it catches "His" as well- which I don't want.</p>
<p>Is the query below the right way to catch the whole word in the Series?</p>
<pre><code>df.col_name.str.contains("\bis\b|\bsmall\b", case=False)
</code></pre>
| 4
|
2016-09-07T00:21:34Z
| 39,359,724
|
<p>Your way (with /b) didn't work for me. I'm not sure why you can't use the logical operator and (&) since I think that's what you actually want. </p>
<p>This is a silly way to do it, but it works:</p>
<pre><code>mask = lambda x: ("is" in x) & ("small" in x)
series_name.apply(mask)
</code></pre>
| 0
|
2016-09-07T00:43:55Z
|
[
"python",
"regex",
"pandas",
"dataframe"
] |
python pandas.Series.str.contains WHOLE WORD
| 39,359,601
|
<p>df (Pandas Dataframe) has three rows.</p>
<pre><code>col_name
"This is Donald."
"His hands are so small"
"Why are his fingers so short?"
</code></pre>
<p>I'd like to extract the row that contains "is" and "small".</p>
<p>If I do</p>
<pre><code>df.col_name.str.contains("is|small", case=False)
</code></pre>
<p>Then it catches "His" as well- which I don't want.</p>
<p>Is the query below the right way to catch the whole word in the Series?</p>
<pre><code>df.col_name.str.contains("\bis\b|\bsmall\b", case=False)
</code></pre>
| 4
|
2016-09-07T00:21:34Z
| 39,359,789
|
<p>No, the regex <code>/bis/b|/bsmall/b</code> will fail because you are using <code>/b</code>, not the <code>\b</code> that means "word boundary".</p>
<p>Change that and you get a match. I would recommend using</p>
<pre><code>\b(is|small)\b
</code></pre>
<p>That regex is a little faster and a little more legible, at least to me.</p>
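<p>When plugging this into pandas, it is worth using a raw string so that <code>\b</code> reaches the regex engine as a word boundary rather than being interpreted as a backspace escape, and a non-capturing group to avoid the match-groups warning from <code>str.contains</code>. A sketch:</p>
<pre><code>mask = df.col_name.str.contains(r"\b(?:is|small)\b", case=False)
df[mask]   # rows whose text contains "is" or "small" as whole words
</code></pre>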
| 4
|
2016-09-07T00:54:05Z
|
[
"python",
"regex",
"pandas",
"dataframe"
] |
python pandas.Series.str.contains WHOLE WORD
| 39,359,601
|
<p>df (Pandas Dataframe) has three rows.</p>
<pre><code>col_name
"This is Donald."
"His hands are so small"
"Why are his fingers so short?"
</code></pre>
<p>I'd like to extract the row that contains "is" and "small".</p>
<p>If I do</p>
<pre><code>df.col_name.str.contains("is|small", case=False)
</code></pre>
<p>Then it catches "His" as well- which I don't want.</p>
<p>Is the query below the right way to catch the whole word in the Series?</p>
<pre><code>df.col_name.str.contains("\bis\b|\bsmall\b", case=False)
</code></pre>
| 4
|
2016-09-07T00:21:34Z
| 39,359,856
|
<p>First, you may want to convert everything to lowercase, remove punctuation and whitespace and then convert the result into a set of words.</p>
<pre><code>import string
df['words'] = [set(words) for words in
df['col_name']
.str.lower()
.str.replace('[{0}]*'.format(string.punctuation), '')
.str.strip()
.str.split()
]
>>> df
col_name words
0 This is Donald. {this, is, donald}
1 His hands are so small {small, his, so, are, hands}
2 Why are his fingers so short? {short, fingers, his, so, are, why}
</code></pre>
<p>You can now use boolean indexing to see if all of your target words are in these new word sets.</p>
<pre><code>target_words = ['is', 'small']
# Convert target words to lower case just to be safe.
target_words = [word.lower() for word in target_words]
df['match'] = df.words.apply(lambda words: all(target_word in words
for target_word in target_words))
print(df)
# Output:
# col_name words match
# 0 This is Donald. {this, is, donald} False
# 1 His hands are so small {small, his, so, are, hands} False
# 2 Why are his fingers so short? {short, fingers, his, so, are, why} False
target_words = ['so', 'small']
target_words = [word.lower() for word in target_words]
df['match'] = df.words.apply(lambda words: all(target_word in words
for target_word in target_words))
print(df)
# Output:
# Output:
# col_name words match
# 0 This is Donald. {this, is, donald} False
# 1 His hands are so small {small, his, so, are, hands} True
# 2 Why are his fingers so short? {short, fingers, his, so, are, why} False
</code></pre>
<p>To extract the matching rows:</p>
<pre><code>>>> df.loc[df.match, 'col_name']
# Output:
# 1 His hands are so small
# Name: col_name, dtype: object
</code></pre>
<p>To make this all into a single statement using boolean indexing:</p>
<pre><code>df.loc[[all(target_word in word_set for target_word in target_words)
for word_set in (set(words) for words in
df['col_name']
.str.lower()
.str.replace('[{0}]*'.format(string.punctuation), '')
.str.strip()
.str.split())], :]
</code></pre>
| 1
|
2016-09-07T01:05:29Z
|
[
"python",
"regex",
"pandas",
"dataframe"
] |
Access all elements at given x, y position in 3-dimensional numpy array
| 39,359,621
|
<pre><code>mat_a = np.random.random((5, 5))
mat_b = np.random.random((5, 5))
mat_c = np.random.random((5, 5))
bigmat = np.stack((mat_a, mat_b, mat_c)) # this is a 3, 5, 5 array
for (x, y, z), value in np.ndenumerate(bigmat):
print (x, y, z)
</code></pre>
<p>In the example above, how can I loop so that I iterate only across the 5 x 5 array and at each position I get 3 values i.e. loop should run 25 times and each time, I get an array with 3 values (one from each of mat_a, mat_b and mat_c)</p>
<ul>
<li>EDIT: Please note that I need to be able to access elements by position later i.e. if <code>bigmat</code> is reshaped, there should be a way to access element based on specific y, z</li>
</ul>
| 3
|
2016-09-07T00:25:13Z
| 39,359,818
|
<p>The required result can be obtained by slicing, e.g.:</p>
<pre><code>for x in range(5):
for y in range(5):
print (bigmat[:,x,y])
</code></pre>
| 1
|
2016-09-07T00:58:33Z
|
[
"python",
"numpy"
] |
Access all elements at given x, y position in 3-dimensional numpy array
| 39,359,621
|
<pre><code>mat_a = np.random.random((5, 5))
mat_b = np.random.random((5, 5))
mat_c = np.random.random((5, 5))
bigmat = np.stack((mat_a, mat_b, mat_c)) # this is a 3, 5, 5 array
for (x, y, z), value in np.ndenumerate(bigmat):
print (x, y, z)
</code></pre>
<p>In the example above, how can I loop so that I iterate only across the 5 x 5 array and at each position I get 3 values i.e. loop should run 25 times and each time, I get an array with 3 values (one from each of mat_a, mat_b and mat_c)</p>
<ul>
<li>EDIT: Please note that I need to be able to access elements by position later i.e. if <code>bigmat</code> is reshaped, there should be a way to access element based on specific y, z</li>
</ul>
| 3
|
2016-09-07T00:25:13Z
| 39,359,850
|
<p>Simon's answer is fine. If you reshape things properly you can get them all in a nice array without any looping. </p>
<pre><code>In [33]: bigmat
Out[33]:
array([[[ 0.51701737, 0.90723012, 0.42534365, 0.3087416 , 0.44315561],
[ 0.3902181 , 0.59261932, 0.21231607, 0.61440961, 0.24910501],
[ 0.63911556, 0.16333704, 0.62123781, 0.6298554 , 0.29012245],
[ 0.95260313, 0.86813746, 0.26722519, 0.14738102, 0.60523372],
[ 0.33189713, 0.6494197 , 0.30269686, 0.47312059, 0.84690451]],
[[ 0.95974972, 0.09659425, 0.06765838, 0.36025411, 0.91492751],
[ 0.92421874, 0.31670119, 0.99623178, 0.30394588, 0.30970197],
[ 0.53590091, 0.04273708, 0.97876218, 0.09686119, 0.78394054],
[ 0.5463358 , 0.29239676, 0.6284822 , 0.96649507, 0.05261606],
[ 0.91733464, 0.77312656, 0.45962704, 0.06446105, 0.58643379]],
[[ 0.75161903, 0.43286354, 0.09633492, 0.52275049, 0.40827006],
[ 0.51816158, 0.05330978, 0.49134325, 0.73652136, 0.14437844],
[ 0.83833791, 0.2072704 , 0.18345275, 0.57282927, 0.7218022 ],
[ 0.56180415, 0.85591746, 0.35482315, 0.94562085, 0.92706479],
[ 0.2994697 , 0.99724253, 0.66386017, 0.0121033 , 0.43448805]]])
</code></pre>
<p>Reshaping things...</p>
<pre><code>new_bigmat = bigmat.T.reshape([25,3])
In [36]: new_bigmat
Out[36]:
array([[ 0.51701737, 0.95974972, 0.75161903],
[ 0.3902181 , 0.92421874, 0.51816158],
[ 0.63911556, 0.53590091, 0.83833791],
[ 0.95260313, 0.5463358 , 0.56180415],
[ 0.33189713, 0.91733464, 0.2994697 ],
[ 0.90723012, 0.09659425, 0.43286354],
[ 0.59261932, 0.31670119, 0.05330978],
[ 0.16333704, 0.04273708, 0.2072704 ],
[ 0.86813746, 0.29239676, 0.85591746],
[ 0.6494197 , 0.77312656, 0.99724253],
[ 0.42534365, 0.06765838, 0.09633492],
[ 0.21231607, 0.99623178, 0.49134325],
[ 0.62123781, 0.97876218, 0.18345275],
[ 0.26722519, 0.6284822 , 0.35482315],
[ 0.30269686, 0.45962704, 0.66386017],
[ 0.3087416 , 0.36025411, 0.52275049],
[ 0.61440961, 0.30394588, 0.73652136],
[ 0.6298554 , 0.09686119, 0.57282927],
[ 0.14738102, 0.96649507, 0.94562085],
[ 0.47312059, 0.06446105, 0.0121033 ],
[ 0.44315561, 0.91492751, 0.40827006],
[ 0.24910501, 0.30970197, 0.14437844],
[ 0.29012245, 0.78394054, 0.7218022 ],
[ 0.60523372, 0.05261606, 0.92706479],
[ 0.84690451, 0.58643379, 0.43448805]])
</code></pre>
<p>Edit: To keep track of indices, you might try the following (open to other ideas here). Each row in <code>xy_index</code> gives your x,y values respectively for the corresponding row in the <code>new_bigmat</code> array. This answer doesn't require any loops. If looping is acceptable you can borrow Simon's suggestion in the comments or <code>np.ndindex</code> as suggested in hpaulj's answer.</p>
<pre><code>row_index, col_index = np.meshgrid(range(5),range(5))
xy_index = np.array([row_index.flatten(), col_index.flatten()]).T
In [48]: xy_index
Out[48]:
array([[0, 0],
[1, 0],
[2, 0],
[3, 0],
[4, 0],
[0, 1],
[1, 1],
[2, 1],
[3, 1],
[4, 1],
[0, 2],
[1, 2],
[2, 2],
[3, 2],
[4, 2],
[0, 3],
[1, 3],
[2, 3],
[3, 3],
[4, 3],
[0, 4],
[1, 4],
[2, 4],
[3, 4],
[4, 4]])
</code></pre>
| 2
|
2016-09-07T01:04:04Z
|
[
"python",
"numpy"
] |
Access all elements at given x, y position in 3-dimensional numpy array
| 39,359,621
|
<pre><code>mat_a = np.random.random((5, 5))
mat_b = np.random.random((5, 5))
mat_c = np.random.random((5, 5))
bigmat = np.stack((mat_a, mat_b, mat_c)) # this is a 3, 5, 5 array
for (x, y, z), value in np.ndenumerate(bigmat):
print (x, y, z)
</code></pre>
<p>In the example above, how can I loop so that I iterate only across the 5 x 5 array and at each position I get 3 values i.e. loop should run 25 times and each time, I get an array with 3 values (one from each of mat_a, mat_b and mat_c)</p>
<ul>
<li>EDIT: Please note that I need to be able to access elements by position later i.e. if <code>bigmat</code> is reshaped, there should be a way to access element based on specific y, z</li>
</ul>
| 3
|
2016-09-07T00:25:13Z
| 39,360,093
|
<p>There is a function that generates all indices for a given shape, <code>ndindex</code>. </p>
<pre><code>for y,z in np.ndindex(bigmat.shape[1:]):
print(y,z,bigmat[:,y,z])
0 0 [ 0 25 50]
0 1 [ 1 26 51]
0 2 [ 2 27 52]
0 3 [ 3 28 53]
0 4 [ 4 29 54]
1 0 [ 5 30 55]
1 1 [ 6 31 56]
...
</code></pre>
<p>For a simple case like this it isn't much easier than the double <code>for range</code> loop. Nor will it be faster; but you asked for an iteration.</p>
<p>Another iterator is <code>itertools.product(range(5),range(5))</code></p>
<p>Timewise, product is pretty good:</p>
<pre><code>In [181]: timeit [bigmat[:,y,z] for y,z in itertools.product(range(5),range(5
...: ))]
10000 loops, best of 3: 26.5 µs per loop
In [191]: timeit [bigmat[:,y,z] for (y,z),v in np.ndenumerate(bigmat[0,...])]
...:
10000 loops, best of 3: 61.9 µs per loop
</code></pre>
<p>transposing and reshaping is the fastest way to get a list (or array) of the triplets - but it does not give the indices as well:</p>
<pre><code>In [198]: timeit list(bigmat.transpose(1,2,0).reshape(-1,3))
100000 loops, best of 3: 15.1 µs per loop
</code></pre>
<p>But the same operation gets the indices from <code>np.mgrid</code> (or <code>np.meshgrid</code>):</p>
<pre><code>np.mgrid[0:5,0:5].transpose(1,2,0).reshape(-1,2)
</code></pre>
<p>(though this is surprisingly slow)</p>
| 3
|
2016-09-07T01:45:27Z
|
[
"python",
"numpy"
] |
Access all elements at given x, y position in 3-dimensional numpy array
| 39,359,621
|
<pre><code>mat_a = np.random.random((5, 5))
mat_b = np.random.random((5, 5))
mat_c = np.random.random((5, 5))
bigmat = np.stack((mat_a, mat_b, mat_c)) # this is a 3, 5, 5 array
for (x, y, z), value in np.ndenumerate(bigmat):
print (x, y, z)
</code></pre>
<p>In the example above, how can I loop so that I iterate only across the 5 x 5 array and at each position I get 3 values i.e. loop should run 25 times and each time, I get an array with 3 values (one from each of mat_a, mat_b and mat_c)</p>
<ul>
<li>EDIT: Please note that I need to be able to access elements by position later i.e. if <code>bigmat</code> is reshaped, there should be a way to access element based on specific y, z</li>
</ul>
| 3
|
2016-09-07T00:25:13Z
| 39,360,877
|
<p>If you don't actually need to stack the arrays, and only want to iterate over all three arrays, element-wise, <em>at once</em>, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nditer.html" rel="nofollow">numpy.nditer</a> works. I'm still fuzzy on all its parameters and I don't know whether it is any faster, so test it on a subset first.</p>
<pre><code>a1 = np.arange(9).reshape(3,3) + 10
a2 = np.arange(9).reshape(3,3) + 20
a3 = np.arange(9).reshape(3,3) + 30
c = np.nditer((a1, a2, a3))
for thing in c:
print(np.array(thing))
>>>
[10 20 30]
[11 21 31]
[12 22 32]
[13 23 33]
[14 24 34]
[15 25 35]
[16 26 36]
[17 27 37]
[18 28 38]
>>>
</code></pre>
| 0
|
2016-09-07T03:42:36Z
|
[
"python",
"numpy"
] |
Single-valued array to rgba array using custom color map in Python
| 39,359,693
|
<p>I have code that, given a normalized array, gives an rgba array using a predefined colormap - <code>jet</code> in the example below.</p>
<pre><code>import numpy as np
from matplotlib import cm
#arr_orig assumed to be normalized, real array.
arr_color = np.uint8(cm.jet(arr_orig) * 255)
</code></pre>
<p>I have also defined a custom rgb colormap, <code>my_colormap</code>, based on code in <a href="http://matplotlib.org/examples/pylab_examples/custom_cmap.html" rel="nofollow">http://matplotlib.org/examples/pylab_examples/custom_cmap.html</a>, and that I gave a name using</p>
<pre><code>cm.register_cmap(name = 'custom_colormap', cmap = my_colormap)
</code></pre>
<p>Unsurprisingly, though, I can't now replace my original code with</p>
<pre><code>arr_color = np.uint8(cm.custom_colormap(arr_orig) * 255)
</code></pre>
<p>When I try this, I get the following error:</p>
<pre><code>AttributeError: 'module' object has no attribute 'custom_colormap'
</code></pre>
<p>What is some code equivalent to</p>
<pre><code>arr_color = np.uint8(cm.custom_colormap(arr_orig) * 255)
</code></pre>
<p>that I can use to make a new array using a custom colormap that I have named?</p>
<p>Thanks for any help.</p>
| -1
|
2016-09-07T00:38:19Z
| 39,360,904
|
<p>The following worked:</p>
<pre><code>arr_color = cm.ScalarMappable(cmap=my_colormap).to_rgba(arr_orig, bytes=True)
</code></pre>
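<p>Since the colormap was also registered under the name <code>'custom_colormap'</code>, it can be looked up by that name and used with the same pattern as <code>cm.jet</code> in the original code. A sketch, assuming the registration from the question has already run:</p>
<pre><code>from matplotlib import cm
import numpy as np

my_map = cm.get_cmap('custom_colormap')       # fetch the registered colormap by name
arr_color = np.uint8(my_map(arr_orig) * 255)  # same usage as cm.jet(arr_orig)
</code></pre>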
| 0
|
2016-09-07T03:46:42Z
|
[
"python",
"matplotlib",
"colors",
"colormap"
] |
Finding all posts without tags in Django Postgres ArrayField
| 39,359,702
|
<p>Just like my title says, I want to see all my posts without any tags. However, neither of the following ORM queries works:</p>
<pre><code>x = PostTagging.objects.filter(tags=[])
x = PostTagging.objects.filter(tags__len=0)
</code></pre>
<p>All I get as a return is:</p>
<pre><code><QuerySet []>
</code></pre>
<p><strong>Here is my model:</strong></p>
<pre><code>class PostTagging(models.Model):
title = models.CharField(max_length=50)
tags = ArrayField(models.CharField(max_length=200), blank=True, null=True)
def __unicode__(self):
return self.title
</code></pre>
<p>Here is my ORM for creating the blank tag:</p>
<pre><code>PostTagging.objects.create(title='Fifth Post')
</code></pre>
| 0
|
2016-09-07T00:40:39Z
| 39,359,735
|
<p><code>PostTagging.objects.filter(tags__isnull=True)</code> is the best way here. Because the rows were created without passing <code>tags</code>, the column is stored as NULL rather than as an empty array, which is why <code>tags=[]</code> and <code>tags__len=0</code> match nothing.</p>
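<p>If some rows might have been saved with an explicit empty list as well as NULL, a sketch covering both cases:</p>
<pre><code>from django.db.models import Q

# Matches rows where tags is NULL or an empty array
PostTagging.objects.filter(Q(tags__isnull=True) | Q(tags__len=0))
</code></pre>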
| 1
|
2016-09-07T00:45:18Z
|
[
"python",
"django",
"postgresql"
] |
Save A Matplotlib Figure to A File Along with Displaying it on The Browser Django
| 39,359,743
|
<p>I am very new to Django and the following is a detailed explanation of the problem I am trying to resolve using Django Python. </p>
<p><strong>Problem:-</strong></p>
<p>Upload a multiple csv files (<strong>five files total</strong>) and then take those files and construct five figures corresponding to the uploaded files. Finally, I want to have a single page in which I ask the user to upload the files and then the user will get a <strong>conformational message for files uploaded successfully</strong>. After that, the files should be displayed only if the user hit the graphing button and should display the five figures on the same webpage. </p>
<p>My attempt thus far:-</p>
<p>A snippet of My views.py:</p>
<pre><code>def Upload(request):
list_doc = []
for count, x in enumerate(request.FILES.getlist('files')):
def multifiles(f):
with open(dir_path + x.name + str(count) + ".csv" , 'wb+') as destination:
list_doc.append(x.name)
for chunk in f.chunks():
destination.write(chunk)
multifiles(x)
docs = list_doc
return render_to_response("myapp/mypage.html", {'upload_num':str(count+1)+" Files uploaded successfully!", "files_list":docs}, context_instance = RequestContext(request))
def graph(request):
plt.figure(num)
plt.plot(x,y, color =colorcurve, label=filename)
ax = plt.subplot(111)
ax.plot(x,y, color =colorcurve, linewidth=1.5)
plt.legend(fontsize = 7.9, bbox_to_anchor=(0.61, 1), loc=6, borderaxespad=0.)
file_saved = plt.savefig(dirpath+filename, format='png')
figure = dirpath+filename+".png"
return render_to_response("myapp/mypage.html", {'file_saved':figure},context_instance = RequestContext(request))
</code></pre>
<p><strong>I am not sure what to use in urls.py? help please?</strong></p>
<p>In html:</p>
<pre><code> <body>
<!-- Uplading files! -->
<p><a href="{% url 'list' %}" align="right">Return to home</a></p>
<p> Please, starting uploading your files <p>
<form action="{% url "UploadFile" %}" method="post" enctype="multipart/form-data">
{% csrf_token %}
<input type="file" name="files" multiple/>
<input type="submit" value="Upload" />
</form>
<form action="{% url "list" %}" method="post" enctype="multipart/form-data">
{% csrf_token %}
<!-- input type="file" name="files" multiple/-->
<input type="submit" value="Graphing" />
</form>
<br>
<div style="overflow:hidden;">
{% load staticfiles %}
<img src="{% static "myapp/afile1-M1" %}" alt="My image" height="400" width="400" float=left hspace="30" border="1.8" />
</code></pre>
<p>When I runserver, it displays the graph into a new window. I want to do a couple of things. One, I want to save the figure into a file. Second, display it in the same browser window without the need to go to another window.</p>
<p>Thanks a lot in advance</p>
| 2
|
2016-09-07T00:46:34Z
| 39,362,441
|
<p>UPDATE:
If you want to draw the graph only when "Graphing" is clicked, give your submit input a name, like this:</p>
<pre><code><input type="submit" value="Graphing" name="graphing"/>
</code></pre>
<p>and then you can check whether it has been clicked in your view:</p>
<pre><code>if request.POST.get('graphing'):
#basically means "if user has clicked the graphing button"
</code></pre>
<p>Hope this helps,</p>
<hr>
<p>Create two views:</p>
<pre><code>from django.http import HttpResponse
from django.shortcuts import render
from matplotlib.backends.backend_agg import FigureCanvasAgg
from pylab import figure, axes, pie, title
import matplotlib.pyplot


def draw_pie(request):
    size = request.GET.get('size')
    return render(request, 'blog/draw_pie.html', {'size': size})


def pies(request, size):
    f = figure(figsize=(6, 6))
    ax = axes([0.1, 0.1, 0.8, 0.8])
    labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
    fracs = [15, int(size), 45, 10]  # size arrives as a string from the URL
    explode = (0, 0.05, 0, 0)
    pie(fracs, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True)
    title('Raining Hogs and Dogs', bbox={'facecolor': '0.8', 'pad': 5})

    canvas = FigureCanvasAgg(f)
    response = HttpResponse(content_type='image/png')
    canvas.print_png(response)
    matplotlib.pyplot.close(f)
    return response
</code></pre>
<p>The first one will contain the form, the second one will draw the graph.</p>
<p>Create two urls:</p>
<pre><code>url(r'^pies/(?P<size>\d+)/$', views.pies, name='pies'),
url(r'^draw_pie/$', views.draw_pie, name='draw_pie'),
</code></pre>
<p>The first one contains a parameter, or as many parameters as you like, to set the variables of the graph. I have set the size of one of the fracs in the pie chart.</p>
<p>The second one is the url for the form.</p>
<p>Create two templates, one empty one called <code>pies.html</code>:</p>
<pre><code>empty
</code></pre>
<p>and one more template called <code>draw_pie.html</code>:</p>
<pre><code> <form method="GET" action="/draw_pie/">
<h1>Pie Size</h1>
<label>
<span>Size: </span>
<input type="number" step = "any" id="searchBox" class="input-medium search-query"
name="size" placeholder="size"><br>
</label>
<input type="submit" class="button" value="Draw Pie" >
</form>
<img src="http://yourwebsite.com/pies/{{ size }}"><br>
</code></pre>
<p>This will give you this result:</p>
<p><a href="http://i.stack.imgur.com/T9xB4.png" rel="nofollow"><img src="http://i.stack.imgur.com/T9xB4.png" alt="enter image description here"></a></p>
<p>If you change the size to 99 and click draw, it will reload the page and creates a new graph using the parameter you just input, then calls the graph directly using the img tag. You can see the parameter being passed in the url also:</p>
<p><a href="http://i.stack.imgur.com/lis1r.png" rel="nofollow"><img src="http://i.stack.imgur.com/lis1r.png" alt="enter image description here"></a></p>
<p>I hope this helps.</p>
| 0
|
2016-09-07T06:17:59Z
|
[
"python",
"django",
"matplotlib"
] |
Incorporating Lmtool into PocketSphinx?
| 39,359,766
|
<p>I am trying to create a simple way to add new keywords into PocketSphinx. The idea is to have a temporary text file from which a script can automatically add a word (or phrase) to the <code>corpus.txt</code>, <code>dictionary.dic</code> and <code>language_model.lm</code>.</p>
<p>Currently the best way to do this seems to be to use lmtool and then replace the aforementioned files with the updated versions. However this presents three problems:</p>
<ol>
<li>Lmtool is slow for large libraries, so the process will become exponentially slower as more words are added.</li>
<li>Lmtool requires a semi-reliable internet connection to work and I'd like to be able to add commands while offline.</li>
<li>This is not the most efficient way to add commands, and won't work with the setup I'm putting together.</li>
</ol>
<p>What I'd like to be able to do is to (if possible) use/create an offline version of lmtool that takes inputs from a temporary text file (<code>input.txt</code>) processes them and prints the contents into three temporary text files (<code>dic.txt</code>, <code>lm.txt</code>, <code>corp.txt</code>). </p>
<p>The last step would be to run a script that will: </p>
<ol>
<li>Take the output in <code>corp.txt</code> and add it to the end of <code>corpus.txt</code>.</li>
<li>Look through <code>dictionary.dic</code> and add any new words in <code>dic.txt</code>.</li>
<li>Somehow modify <code>language_model.lm</code> to include the new terms in <code>lm.txt</code>.</li>
<li>Erase the contents of the three output files.</li>
</ol>
<p>My question is if it is possible to get an offline version of lmtool that is capable of outputting results into specific text files? I know it is possible to automate lmtool (according to their site), but I would like to be able to run the process offline if possible.</p>
<p>Also, has anyone attempted something like this before that I can use as a guide?</p>
<p>I am running pocketsphinx on a raspberry pi and I am aware that it will likely not be able to run lmtool on it's own. My plan is to have lmtool run on a local server and sync files with the pi via wifi/ethernet. </p>
<p>Any help would be appreciated.</p>
| 1
|
2016-09-07T00:49:50Z
| 39,360,291
|
<p>You have few choice, If you want to generate dict and language model locally on Raspberry Pi (Model 2B at least) </p>
<p>For Language Model generation, you can use either</p>
<ol>
<li><p><a href="https://github.com/skerit/cmusphinx/tree/master/cmuclmtk" rel="nofollow">CMUCLMTK</a> or</p></li>
<li><p><a href="http://www.speech.sri.com/projects/srilm/" rel="nofollow">SRILM</a> (SRI Language Modeling Toolkit)</p></li>
</ol>
<p>To compile SRILM on Raspbian you need to tweak some files.
Take a look here <a href="https://github.com/G10DRAS/SRILM-on-RaspberryPi" rel="nofollow">https://github.com/G10DRAS/SRILM-on-RaspberryPi</a></p>
<p>For dictionary generation, you can use either</p>
<ol>
<li><a href="https://github.com/AdolfVonKleist/Phonetisaurus" rel="nofollow">Phonetisaurus</a> with G2P Model available <a href="https://sourceforge.net/projects/cmusphinx/files/G2P%20Models/fst/" rel="nofollow">here</a> (or you can generate FST by yourself using <a href="https://sourceforge.net/projects/cmusphinx/files/G2P%20Models/phonetisaurus-cmudict-split.tar.gz/download" rel="nofollow">phonetisaurus-cmudict-split</a>), or</li>
<li><a href="https://github.com/cmusphinx/g2p-seq2seq" rel="nofollow">g2p-seq2seq</a> (Sequence-to-Sequence G2P toolkit)</li>
</ol>
<p>g2p-seq2seq based on <a href="https://www.tensorflow.org/versions/r0.9/tutorials/seq2seq/index.html" rel="nofollow">TensorFlow</a> which is not officially supported on RaspberryPi. For more details see <a href="https://github.com/samjabrahams/tensorflow-on-raspberry-pi" rel="nofollow">Installing TensorFlow on Raspberry Pi 3</a></p>
<p>For more details (usage, how to compile etc....) please go through the documents of respective toolkit.</p>
| 1
|
2016-09-07T02:15:11Z
|
[
"python",
"pocketsphinx"
] |
Removing white space and colon
| 39,359,816
|
<p>I have a file with a bunch of numbers that have white spaces and colons and I am trying to remove them. As I have seen on this forum, the function <code>line.strip().split()</code> works well to achieve this. Is there a way of removing the white space and colon all in one go? Using the method posted by Lorenzo I have this:</p>
<pre><code>train = []
with open('C:/Users/Morgan Weiss/Desktop/STA5635/DataSets/dexter/dexter_train.data') as train_data:
train.append(train_data.read().replace(' ','').replace(':',''))
size_of_train = np.shape(train)
for i in range(size_of_train[0]):
for j in range(size_of_train[1]):
train[i][j] = int(train[i][j])
print(train)
</code></pre>
<p>Although I get this error:</p>
<pre class="lang-none prettyprint-override"><code>File "C:/Users/Morgan Weiss/Desktop/STA5635/Homework/Homework_1/HW1_Dexter.py", line 11, in <module>
for j in range(size_of_train[1]):
IndexError: tuple index out of range
</code></pre>
| 0
|
2016-09-07T00:58:10Z
| 39,360,196
|
<p>I think the syntax above is not quite correct, but as for your question, you can split each entry on whitespace and then on the colon instead of stripping the characters out one by one.</p>
<p>When reading each line as a string from that file you can do something like, </p>
<pre><code>train = []
with open('/Users/sushant.moon/Downloads/dexter_train.data') as f:
list = f.read().split()
for x in list:
data = x.split(':')
train.append([int(data[0]),int(data[1])])
# this part becomes redundant as i have already converted str to int before i append data to train
size_of_train = np.shape(train)
for i in range(size_of_train[0]):
for j in range(size_of_train[1]):
train[i][j] = int(train[i][j])
</code></pre>
<p>Here I split the file contents on whitespace first and then split each entry on the colon, converting both parts to integers as they are appended.</p>
| 2
|
2016-09-07T02:01:21Z
|
[
"python",
"numpy"
] |