| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Append dataframes in pandas
| 39,352,725
|
<p>I have two dataframes in pandas
(copy from Spyder Variable Explorer)</p>
<p>df1</p>
<pre><code>index 0 1 2 3 4 5 6
0 Loc 0.0 0.0 0.0 0.25 0.0 light
1 Loc 0.0 0.0 0.0 0.25 0.0 light
2 Loc 0.0 0.0 0.0 0.25 0.0 light
3 Loc 0.0 0.0 0.0 0.25 0.0 light
</code></pre>
<p>df2</p>
<pre><code>index 0 1 2 3 4 5 6
0 DCos -0.25 -0.2 0.9 nan nan nan
1 DCos -0.25 0.2 0.9 nan nan nan
2 DCos 0.25 -0.2 0.9 nan nan nan
3 DCos 0.25 0.2 0.9 nan nan nan
</code></pre>
<p>I would like to append dataframe 2 to dataframe 1, to have</p>
<pre><code>index 0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 Loc 0.0 0.0 0.0 0.25 0.0 light DCos -0.25 -0.2 0.9 nan nan nan
1 Loc 0.0 0.0 0.0 0.25 0.0 light DCos -0.25 0.2 0.9 nan nan nan
2 Loc 0.0 0.0 0.0 0.25 0.0 light DCos 0.25 -0.2 0.9 nan nan nan
3 Loc 0.0 0.0 0.0 0.25 0.0 light DCos 0.25 0.2 0.9 nan nan nan
</code></pre>
<p>I have tried</p>
<pre><code>df1.join(df2)
</code></pre>
<p>but df1 was not changed. I know there is an append function described in the documentation, but it only appends rows. Is there a way to append columns?</p>
| 1
|
2016-09-06T15:27:11Z
| 39,353,187
|
<p><strong>Disclaimer</strong>: I do not have 50 rep points to comment. So this answer is just a comment as <a href="http://stackoverflow.com/users/704848/edchum">EdChum</a> has given the correct answer.</p>
<p>You should take a look at the documentation of the various types of concat, merge and join <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-objects" rel="nofollow">here</a>.</p>
<p>If you're trying to concatenate the two dataframes using the index, you simply need:</p>
<pre><code>df3 = pd.concat([df1,df2], axis=1)
</code></pre>
<p>This will place the second dataframe (df2) next to the first one (df1) where the indexes match.</p>
<p>If you want to concatenate without being index-sensitive, try </p>
<pre><code>df3 = pd.concat([df1,df2], axis=1, ignore_index=True)
</code></pre>
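<p>A minimal runnable sketch with simplified stand-in data (with <code>ignore_index=True</code> the column labels are renumbered 0..5):</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame([['Loc', 0.25, 'light']] * 2)
df2 = pd.DataFrame([['DCos', -0.25, 0.9]] * 2)

# columns of df2 are placed to the right of df1, aligned on the row index
df3 = pd.concat([df1, df2], axis=1, ignore_index=True)
print(df3)
</code></pre>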
| 1
|
2016-09-06T15:54:48Z
|
[
"python",
"pandas",
"formatting"
] |
xlwings import error in python 2.7: Can't import name Workbook
| 39,352,768
|
<p>(I know there is an original question regarding this issue, but it differs a bit by having the command run in the home directory. Also, no specific solution is mentioned.)</p>
<p>I am using Anaconda2 python 2.7 (64bit) distribution. I have installed xlwings (version 0.9.3) on it using </p>
<pre><code>pip install xlwings
</code></pre>
<p>Now I am trying to run a very basic command (in Ipython qtconsole):</p>
<pre><code>from xlwings import Workbook
</code></pre>
<p>to which I get error as:</p>
<pre><code>cannot import name workbook
</code></pre>
<p>I tried the same with a python script saved in my home directory, which gave the same error. However, the following command runs fine:</p>
<pre><code>from xlwings import Range, Chart, __version__
</code></pre>
<p>Can anyone point out, what I may be doing wrong? </p>
| 0
|
2016-09-06T15:29:52Z
| 39,354,167
|
<p><code>Workbook</code> has been renamed into <code>Book</code> with the 0.9 release, see the <a href="http://docs.xlwings.org/en/stable/migrate_to_0.9.html#cheat-sheet" rel="nofollow">migration guide</a>, follow the <a href="http://docs.xlwings.org/en/stable/quickstart.html" rel="nofollow">quickstart</a> or simply look at the <a href="http://docs.xlwings.org/en/stable/api.html" rel="nofollow">API</a>.</p>
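<p>A minimal sketch of the equivalent code under the 0.9+ API (assuming Excel is installed and creating a new workbook is acceptable for the example):</p>
<pre><code>import xlwings as xw

wb = xw.Book()            # replaces the old Workbook()
sheet = wb.sheets[0]
sheet.range('A1').value = 'hello'
</code></pre>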
| 1
|
2016-09-06T16:53:12Z
|
[
"python",
"python-2.7",
"xlwings"
] |
Lines to separate groups in seaborn heatmap
| 39,352,932
|
<p>I am plotting data as a Seaborn heatmap in Python. My data is intrinsically grouped into categories, and I'd like to have lines on the plot to indicate where the groups lie on the map. As a simple example, suppose I wanted to modify this plot from the documentation...</p>
<pre><code>import seaborn as sns; sns.set()
flights = sns.load_dataset("flights")
flights = flights.pivot("month", "year", "passengers")
ax = sns.heatmap(flights, cbar=False)
</code></pre>
<p><a href="http://i.stack.imgur.com/RELwf.png" rel="nofollow"><img src="http://i.stack.imgur.com/RELwf.png" alt="enter image description here"></a></p>
<p>Where I wanted to emphasize the comparisons between quarters of the year by making a plot like the one below; how would I do that?</p>
<p><a href="http://i.stack.imgur.com/lO3uP.png" rel="nofollow"><img src="http://i.stack.imgur.com/lO3uP.png" alt="enter image description here"></a></p>
| 1
|
2016-09-06T15:39:44Z
| 39,353,190
|
<p>You want <code>ax.hlines</code>:</p>
<pre><code>ax.hlines([3, 6, 9], *ax.get_xlim())
</code></pre>
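<p>Putting it together with the example from the question (a sketch; the <code>colors</code> argument is an optional extra to make the separators stand out):</p>
<pre><code>import seaborn as sns; sns.set()
flights = sns.load_dataset("flights")
flights = flights.pivot("month", "year", "passengers")
ax = sns.heatmap(flights, cbar=False)
# horizontal separators after each quarter of the year
ax.hlines([3, 6, 9], *ax.get_xlim(), colors='white')
</code></pre>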
| 4
|
2016-09-06T15:54:57Z
|
[
"python",
"matplotlib",
"heatmap",
"seaborn"
] |
Return list element by the value of one of its attributes
| 39,352,979
|
<p>There is a list of objects</p>
<pre><code>l = [obj1, obj2, obj3]
</code></pre>
<p>Each <code>obj</code> is an object of a class and has an <code>id</code> attribute. </p>
<p>How can I return an <code>obj</code> from the list by its <code>id</code>?</p>
<p>P.S. <code>id</code>s are unique. and it is guaranteed that the list contains an object with the requested <code>id</code></p>
| 2
|
2016-09-06T15:42:13Z
| 39,353,030
|
<p>Assuming the <code>id</code> is a hashable object, like a string, you should be using a dictionary, not a list.</p>
<pre><code>l = [obj1, obj2, obj3]
d = {o.id:o for o in l}
</code></pre>
<p>You can then retrieve objects with their keys, e.g. <code>d['ID_39A']</code>.</p>
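<p>If you only need a one-off lookup rather than repeated ones, a generator expression also works; a sketch, where <code>target_id</code> is a placeholder for the id you are searching for (since the question guarantees the id is present, no default is needed):</p>
<pre><code>obj = next(o for o in l if o.id == target_id)
</code></pre>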
| 9
|
2016-09-06T15:45:30Z
|
[
"python",
"python-3.x"
] |
Pie and bar charts in Python
| 39,353,072
|
<p>I have a pandas dataframe like below:</p>
<pre><code>Group id Count
G1 412 52
G1 413 34
G2 412 2832
G2 413 314
</code></pre>
<p>I am trying to build a pie chart in Python - for each Group and id, I need to display the respective count. It should have two splits: one for Group and the other for id. The outer circle should be Group and the inner circle should be id. I have just started with visualisation and am wondering whether there is a python library that can do this.</p>
<p>Can this requirement be achieved using bar charts?</p>
| 2
|
2016-09-06T15:47:49Z
| 39,355,610
|
<p>Have you checked out <a href="https://plot.ly/python/getting-started/" rel="nofollow">plotly</a>?</p>
<p>Pie Chart Specific: <a href="https://plot.ly/python/pie-charts/#pie-chart-using-pie-object" rel="nofollow">Pie Charts with Plotly</a></p>
<p>I will say you <em>do</em> have to make an account, but it's free and easy.</p>
| 1
|
2016-09-06T18:28:57Z
|
[
"python",
"pandas",
"visualization"
] |
Pie and bar charts in Python
| 39,353,072
|
<p>I have a pandas dataframe like below:</p>
<pre><code>Group id Count
G1 412 52
G1 413 34
G2 412 2832
G2 413 314
</code></pre>
<p>I am trying to build a pie chart in Python - for each Group and id, I need to display the respective count. It should have two splits: one for Group and the other for id. The outer circle should be Group and the inner circle should be id. I have just started with visualisation and am wondering whether there is a python library that can do this.</p>
<p>Can this requirement be achieved using bar charts?</p>
| 2
|
2016-09-06T15:47:49Z
| 39,355,803
|
<p>Pandas also integrates with matplotlib, which is somewhat ugly but very convenient:
<a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#pie-plot" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/visualization.html#pie-plot</a></p>
<p>Some more complicated examples:
<a href="http://stackoverflow.com/questions/33019879/hierarchic-pie-donut-chart-from-pandas-dataframe-using-bokeh-or-matplotlib">Hierarchic pie/donut chart from Pandas DataFrame using bokeh or matplotlib</a></p>
<p>My experience is that you have to do a lot of configuration if you want a basic DIY chart with python. Maybe the best approach for you is one of the following choices:
1) try plotly if the data is not sensitive;
2) use excel and integrate python with the chart in excel, which may save you a lot of time because people are already familiar with excel chart templates;
3) DIY with Bokeh, matplotlib or seaborn if you just want to do something simple.</p>
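<p>As a rough sketch of the DIY route with plain matplotlib (hard-coding the counts from the question; the nested rings come from the <code>width</code> wedge property, and labels/colors are left minimal):</p>
<pre><code>import matplotlib.pyplot as plt

# outer ring: totals per Group; inner ring: per (Group, id) counts
group_counts = [52 + 34, 2832 + 314]    # G1, G2
id_counts = [52, 34, 2832, 314]         # G1/412, G1/413, G2/412, G2/413

fig, ax = plt.subplots()
ax.pie(group_counts, radius=1, labels=['G1', 'G2'],
       wedgeprops=dict(width=0.3, edgecolor='w'))
ax.pie(id_counts, radius=0.7,
       wedgeprops=dict(width=0.3, edgecolor='w'))
ax.set(aspect='equal')
plt.show()
</code></pre>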
| 1
|
2016-09-06T18:41:43Z
|
[
"python",
"pandas",
"visualization"
] |
Pie and bar charts in Python
| 39,353,072
|
<p>I have a pandas dataframe like below:</p>
<pre><code>Group id Count
G1 412 52
G1 413 34
G2 412 2832
G2 413 314
</code></pre>
<p>I am trying to build a pie chart in Python - for each Group and id, I need to display the respective count. It should have two splits: one for Group and the other for id. The outer circle should be Group and the inner circle should be id. I have just started with visualisation and am wondering whether there is a python library that can do this.</p>
<p>Can this requirement be achieved using bar charts?</p>
| 2
|
2016-09-06T15:47:49Z
| 39,358,505
|
<p>Tried to comment to agree with MattR - Plotly is great! I've got a few tutorials for how to use it on my website, you can PM me if you'd like a look.</p>
| 0
|
2016-09-06T22:02:22Z
|
[
"python",
"pandas",
"visualization"
] |
Different results with thread in Python 2/3
| 39,353,181
|
<p>Below is the code in Python 3. It always prints 100000. Why? I think it should give different results from run to run.</p>
<pre><code>import time, _thread

global count
count = 0

def test():
    global count
    for i in range(0, 10000):
        count += 1

for i in range(0, 10):
    _thread.start_new_thread(test, ())

time.sleep(5)
print(count)
</code></pre>
<p>Below is the code in Python 2. It always gives a different (random) result.</p>
<pre><code>import time, thread

global count
count = 0

def test():
    global count
    for i in range(0, 10000):
        count += 1

for i in range(0, 10):
    thread.start_new_thread(test, ())

time.sleep(5)
print count
</code></pre>
| 4
|
2016-09-06T15:54:18Z
| 39,353,531
|
<p>CPython 2 allowed threads to switch after a certain number of byte codes had executed; CPython 3.2 changed to allow threads to switch after a certain amount of time has passed. Your <code>test()</code> executes plenty of byte codes, but consumes little time. On my box, under Python 3 the displayed result becomes unpredictable if I add, e.g., this near the start:</p>
<pre><code>import sys
sys.setswitchinterval(sys.getswitchinterval() / 10.0)
</code></pre>
<p>That is, allow threads to switch after 10 times less time (than the default) has elapsed.</p>
<p>Also note that <code>_thread</code> is strongly discouraged in Python 3; that's why the leading underscore was added. Use <code>threading.Thread</code> instead, like:</p>
<pre><code>for i in range(10):
    t = threading.Thread(target=test)
    t.start()
</code></pre>
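<p>If the goal is a deterministic result under either scheduling policy, the usual fix is to serialize the increment with a lock (a minimal sketch):</p>
<pre><code>import threading

count = 0
lock = threading.Lock()

def test():
    global count
    for i in range(10000):
        with lock:          # one thread at a time through the read-modify-write
            count += 1

threads = [threading.Thread(target=test) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 100000 every time
</code></pre>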
| 5
|
2016-09-06T16:14:47Z
|
[
"python",
"python-3.x",
"python-2.x"
] |
Trouble downloading xlsx file from website - Scraping
| 39,353,191
|
<p>I'm trying to write some code which downloads the two latest publications of the Outage Weeks found at the bottom of <a href="http://www.eirgridgroup.com/customer-and-industry/general-customer-information/outage-information/" rel="nofollow">http://www.eirgridgroup.com/customer-and-industry/general-customer-information/outage-information/</a></p>
<p>They are xlsx files, which I'm going to load into Excel afterwards.
It doesn't matter which programming language the code is written in. </p>
<p>My first idea was to use the direct url's, like <a href="http://www.eirgridgroup.com/site-files/library/EirGrid/Outage-Weeks_36(2016)-51(2016)_31%20August.xlsx" rel="nofollow">http://www.eirgridgroup.com/site-files/library/EirGrid/Outage-Weeks_36(2016)-51(2016)_31%20August.xlsx</a>
, and then write some code that guesses the urls of the two latest publications.
But I have noticed some inconsistencies in the url names, so that solution wouldn't work. </p>
<p>Instead it might be a solution to scrape the website and use XPath to download the files. I found out that the two latest publications always have the following XPaths:</p>
<pre><code>/html/body/div[3]/div[3]/div/div/p[5]/a
/html/body/div[3]/div[3]/div/div/p[6]/a
</code></pre>
<p>This is where I need help. I'm new to both XPath and Web Scraping. I have tried stuff like this in Python</p>
<pre><code>from lxml import html
import requests
page = requests.get('http://www.eirgridgroup.com/customer-and-industry/general-customer-information/outage-information/')
tree = html.fromstring(page.content)
v = tree.xpath('/html/body/div[3]/div[3]/div/div/p[5]/a')
</code></pre>
<p>But v seems to be empty.</p>
<p>Any ideas would be greatly appreciated! </p>
| 1
|
2016-09-06T15:55:05Z
| 39,353,663
|
<p>Just use <em>contains</em> to find the <em>hrefs</em> and slice the first two:</p>
<pre><code> tree.xpath('//p/a[contains(@href, "/site-files/library/EirGrid/Outage-Weeks")]/@href')[:2]
</code></pre>
<p>Or doing it all with the xpath using <code>[position() < 3]</code>:</p>
<pre><code>tree.xpath'(//p/a[contains(@href, "site-files/library/EirGrid/Outage-Weeks")])[position() < 3]/@href')
</code></pre>
<p>The files are ordered from latest to oldest so getting the first two gives you the two newest.</p>
<p>To download the files you just need to join each href to the base url and write the content to a file:</p>
<pre><code>from lxml import html
import requests
import os
from urlparse import urljoin # from urllib.parse import urljoin
page = requests.get('http://www.eirgridgroup.com/customer-and-industry/general-customer-information/outage-information/')
tree = html.fromstring(page.content)
v = tree.xpath('(//p/a[contains(@href, "/site-files/library/EirGrid/Outage-Weeks")])[position() < 3]/@href')
for href in v:
    # os.path.basename(href) -> Outage-Weeks_35(2016)-50(2016).xlsx
    with open(os.path.basename(href), "wb") as f:
        f.write(requests.get(urljoin("http://www.eirgridgroup.com", href)).content)
</code></pre>
| 0
|
2016-09-06T16:22:52Z
|
[
"python",
"vba",
"xpath",
"import",
"web-scraping"
] |
Selenium webdriver unable to restart after unexpected exit
| 39,353,210
|
<p>I haven't been able to start up an instance of python's selenium webdriver after my last use a few days ago. According to the error messages, it unexpectedly quit last time I was using it, and now, after restarting my macbook, uninstalling and reinstalling chromedriver/selenium:<br><br>
<code>brew rmtree chromedriver && brew install chromedriver</code><br>
<code>pip uninstall selenium && pip install selenium</code></p>
<p>I'm still in the same place. It seems to be selenium itself, because for both Firefox and Chrome, I'm getting error messages.</p>
<p>What I'm attempting to run on the python3.5 kernel is:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Chrome()
</code></pre>
<p>stacktrace:</p>
<pre><code>File "/Users/myuser/webscraping/env/lib/python3.5/site-packages/selenium/webdriver/chrome/webdriver.py",
line 62, in __init__self.service.start()
File "/Users/myuser/webscraping/env/lib/python3.5/site-packages/selenium/webdriver/common/service.py",
line 86, in start self.assert_process_still_running()
File "/Users/myuser/webscraping/env/lib/python3.5/site-packages/selenium/webdriver/common/service.py",
line 99, in assert_process_still_running % (self.path, return_code)
selenium.common.exceptions.WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: -5
</code></pre>
<p>likewise, for Firefox:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
</code></pre>
<p>and:</p>
<pre><code>File "/Users/myuser/webscraping/env/lib/python3.5/site-packages/selenium/webdriver/firefox/webdriver.py",
line 80, in __init__
self.binary, timeout)
File "/Users/myuser/webscraping/env/lib/python3.5/site-packagesyuser/selenium/webdriver/firefox/extension_connection.py",
line 52, in __init__
self.binary.launch_browser(self.profile, timeout=timeout)
File "/Users/myuser/webscraping/env/lib/python3.5/site-packages/selenium/webdriver/firefox/firefox_binary.py",
line 68, in launch_browser
self._wait_until_connectable(timeout=timeout)
File "/Users/myuser/webscraping/env/lib/python3.5/site-packages/selenium/webdriver/firefox/firefox_binary.py",
line 99, in _wait_until_connectable
"The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect.
If you specified a log_file in the FirefoxBinary constructor, check it for details.
</code></pre>
<p>Should I be looking for some rogue process to kill with <code>ps -e</code> and <code>kill -sigint</code>?</p>
| 0
|
2016-09-06T15:55:58Z
| 39,473,114
|
<p>I discovered that homebrew chromedriver was throwing an error related to symlinking the correct dylib. I fixed the issue by following the steps in <a href="http://stackoverflow.com/questions/17643509/conflict-between-dynamic-linking-priority-in-osx#answer-35070568">this</a> answer to get chromedriver running again, which enabled selenium/chrome webdriver to work as well.</p>
| 0
|
2016-09-13T14:56:37Z
|
[
"python",
"selenium",
"selenium-webdriver",
"selenium-chromedriver"
] |
How to smartly match two data frames using Python (using pandas or other means)?
| 39,353,215
|
<p>I have one pandas dataframe composed of the names of the world's cities as well as countries, to which cities belong,</p>
<pre><code>city.head(3)
city country
0 Qal eh-ye Now Afghanistan
1 Chaghcharan Afghanistan
2 Lashkar Gah Afghanistan
</code></pre>
<p>and another data frame consisting of addresses of the world's universities, which is shown below:</p>
<pre><code>df.head(3)
university
0 Inst Huizhou, Huihzhou 516001, Guangdong, Peop...
1 Guangxi Acad Sci, Nanning 530004, Guangxi, Peo...
2 Shenzhen VisuCA Key Lab SIAT, Shenzhen, People...
</code></pre>
<p>The locations of cities' names are irregularly distributed across rows. I would like to match the city names with the addresses of world's universities. That is, I would like to know which city each university is located in. Hopefully, the city name matched is shown in the same row as the address of each university.</p>
<p>I've tried the following, and it doesn't work because the locations of cities are irregular across the rows.</p>
<pre><code>df['university'].str.split(',').str[0]
</code></pre>
| 4
|
2016-09-06T15:56:18Z
| 39,427,730
|
<p>I would suggest using <code>apply</code>:</p>
<pre><code>city_list = city['city'].tolist()

def match_city(row):
    for c in city_list:
        if c in row['university']:
            return c
    return 'None'

df['city'] = df.apply(match_city, axis=1)
</code></pre>
<p>I assume the addresses of university data is clean enough. If you want to do more advanced checking of matching, you can adjust the <code>match_city</code> function.</p>
| 2
|
2016-09-10T15:42:15Z
|
[
"python",
"pandas",
"dataframe",
"match"
] |
How to smartly match two data frames using Python (using pandas or other means)?
| 39,353,215
|
<p>I have one pandas dataframe composed of the names of the world's cities as well as countries, to which cities belong,</p>
<pre><code>city.head(3)
city country
0 Qal eh-ye Now Afghanistan
1 Chaghcharan Afghanistan
2 Lashkar Gah Afghanistan
</code></pre>
<p>and another data frame consisting of addresses of the world's universities, which is shown below:</p>
<pre><code>df.head(3)
university
0 Inst Huizhou, Huihzhou 516001, Guangdong, Peop...
1 Guangxi Acad Sci, Nanning 530004, Guangxi, Peo...
2 Shenzhen VisuCA Key Lab SIAT, Shenzhen, People...
</code></pre>
<p>The locations of cities' names are irregularly distributed across rows. I would like to match the city names with the addresses of world's universities. That is, I would like to know which city each university is located in. Hopefully, the city name matched is shown in the same row as the address of each university.</p>
<p>I've tried the following, and it doesn't work because the locations of cities are irregular across the rows.</p>
<pre><code>df['university'].str.split(',').str[0]
</code></pre>
| 4
|
2016-09-06T15:56:18Z
| 39,462,320
|
<p>In order to deal with the inconsistent structure of your strings, a good solution is to use regular expressions. I mocked up some data based on your description and created a function to capture the city from the strings. </p>
<p>In my solution I used numpy to output NaN values when there wasn't a match, but you could easily just make it a blank string. I also included a test case where the input was blank in order to display the NaN result.</p>
<pre><code>import re
import numpy as np
import pandas as pd

data = ["Inst Huizhou, Huihzhou 516001, Guangdong, People's Republic of China",
        "Guangxi Acad Sci, Nanning 530004, Guangxi, People's Republic of China",
        "Shenzhen VisuCA Key Lab SIAT, Shenzhen, People's Republic of China",
        "New York University, New York, New York 10012, United States of America",
        ""]

df = pd.DataFrame(data, columns=['university'])

def extract_city(row):
    # capture the text between the first and second comma
    match = re.match(r'^[^,]*,([^,]*),', row)
    if match:
        city = re.sub(r'\d+', '', match.group(1)).strip()
    else:
        city = np.nan
    return city

df.university.apply(extract_city)
</code></pre>
<p>Here's the output: </p>
<pre><code>0 Huihzhou
1 Nanning
2 Shenzhen
3 New York
4 NaN
Name: university, dtype: object
</code></pre>
| 2
|
2016-09-13T04:40:27Z
|
[
"python",
"pandas",
"dataframe",
"match"
] |
How to smartly match two data frames using Python (using pandas or other means)?
| 39,353,215
|
<p>I have one pandas dataframe composed of the names of the world's cities as well as countries, to which cities belong,</p>
<pre><code>city.head(3)
city country
0 Qal eh-ye Now Afghanistan
1 Chaghcharan Afghanistan
2 Lashkar Gah Afghanistan
</code></pre>
<p>and another data frame consisting of addresses of the world's universities, which is shown below:</p>
<pre><code>df.head(3)
university
0 Inst Huizhou, Huihzhou 516001, Guangdong, Peop...
1 Guangxi Acad Sci, Nanning 530004, Guangxi, Peo...
2 Shenzhen VisuCA Key Lab SIAT, Shenzhen, People...
</code></pre>
<p>The locations of cities' names are irregularly distributed across rows. I would like to match the city names with the addresses of world's universities. That is, I would like to know which city each university is located in. Hopefully, the city name matched is shown in the same row as the address of each university.</p>
<p>I've tried the following, and it doesn't work because the locations of cities are irregular across the rows.</p>
<pre><code>df['university'].str.split(',').str[0]
</code></pre>
| 4
|
2016-09-06T15:56:18Z
| 39,462,597
|
<p>My suggestion is to first do some pre-processing that reduces each address to city-level information (you don't need to be exact, but try your best, e.g. by removing numbers), and then merge the dataframes based on text similarity.</p>
<p>You may consider text similarity measures like Levenshtein distance or Jaro-Winkler, which are commonly used to match words.</p>
<p>Here is an example of a text similarity measure (Damerau-Levenshtein):</p>
<pre><code>class DLDistance:
    def __init__(self, s1):
        self.s1 = s1
        self.d = {}
        self.lenstr1 = len(self.s1)
        for i in xrange(-1,self.lenstr1+1):
            self.d[(i,-1)] = i+1

    def distance(self, s2):
        lenstr2 = len(s2)
        for j in xrange(-1,lenstr2+1):
            self.d[(-1,j)] = j+1
        for i in xrange(self.lenstr1):
            for j in xrange(lenstr2):
                if self.s1[i] == s2[j]:
                    cost = 0
                else:
                    cost = 1
                self.d[(i,j)] = min(
                    self.d[(i-1,j)] + 1,      # deletion
                    self.d[(i,j-1)] + 1,      # insertion
                    self.d[(i-1,j-1)] + cost, # substitution
                )
                if i and j and self.s1[i]==s2[j-1] and self.s1[i-1] == s2[j]:
                    self.d[(i,j)] = min(self.d[(i,j)], self.d[i-2,j-2] + cost)  # transposition
        return self.d[self.lenstr1-1,lenstr2-1]

if __name__ == '__main__':
    base = u'abs'
    cmpstrs = [u'abs', u'sdfbasz', u'asdf', u'hfghfg']
    dl = DLDistance(base)
    for s in cmpstrs:
        print "damerau_levenshtein"
        print dl.distance(s)
</code></pre>
<p>Note that this has a high computational complexity, since it computes the distance measure N*M times, where N is the number of rows in the first dataframe and M the number of rows in the second. (To reduce the computational cost, you can truncate the set of candidates by only comparing rows which share the same first character.)</p>
<p>levenshtein distance: <a href="https://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow">https://en.wikipedia.org/wiki/Levenshtein_distance</a></p>
<p>jaro-winkler: <a href="https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance" rel="nofollow">https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance</a></p>
| 2
|
2016-09-13T05:14:26Z
|
[
"python",
"pandas",
"dataframe",
"match"
] |
How to smartly match two data frames using Python (using pandas or other means)?
| 39,353,215
|
<p>I have one pandas dataframe composed of the names of the world's cities as well as countries, to which cities belong,</p>
<pre><code>city.head(3)
city country
0 Qal eh-ye Now Afghanistan
1 Chaghcharan Afghanistan
2 Lashkar Gah Afghanistan
</code></pre>
<p>and another data frame consisting of addresses of the world's universities, which is shown below:</p>
<pre><code>df.head(3)
university
0 Inst Huizhou, Huihzhou 516001, Guangdong, Peop...
1 Guangxi Acad Sci, Nanning 530004, Guangxi, Peo...
2 Shenzhen VisuCA Key Lab SIAT, Shenzhen, People...
</code></pre>
<p>The locations of cities' names are irregularly distributed across rows. I would like to match the city names with the addresses of world's universities. That is, I would like to know which city each university is located in. Hopefully, the city name matched is shown in the same row as the address of each university.</p>
<p>I've tried the following, and it doesn't work because the locations of cities are irregular across the rows.</p>
<pre><code>df['university'].str.split(',').str[0]
</code></pre>
| 4
|
2016-09-06T15:56:18Z
| 39,513,162
|
<p>I think one simple idea would be to create a mapping from any word (or sequence of words) of any address to the full address the word is part of, with the assumption that one of those address words is the city. In a second step we match this with the set of known cities that you have, and anything that is not a known city gets discarded. </p>
<p>A mapping from each single word to address is as simple as: </p>
<pre><code>def address_to_dict(address):
    return {word: address for word in address.split(",")}
</code></pre>
<p>And we can easily extend this to include the set of bi-grams, tri-gram,... so that universities encoded in several words are also collected. See a discussion here: <a href="http://locallyoptimal.com/blog/2013/01/20/elegant-n-gram-generation-in-python/" rel="nofollow">Elegant N-gram Generation in Python</a></p>
<p>We can then apply this to every address we have to obtain one grand mapping from any word to the full address: </p>
<pre><code>word_to_address_mapping = pd.DataFrame(df.university.apply(address_to_dict).tolist()).stack()
word_to_address_mapping = pd.DataFrame(word_to_address_mapping,
                                       columns=["address"])
word_to_address_mapping.index = word_to_address_mapping.index.droplevel(level=0)
word_to_address_mapping
</code></pre>
<p>This yields something like this: </p>
<p><a href="http://i.stack.imgur.com/gaQEA.png" rel="nofollow"><img src="http://i.stack.imgur.com/gaQEA.png" alt="enter image description here"></a></p>
<p>All you have to do then is join this with the actual city list you have: this will automatically discard any entry in <code>word_to_address_mapping</code> which is not a known city, and provide a mapping between university address and their city. </p>
<pre><code># the outer join here should ensure that several universities in the
# same city do not overwrite each other
pd.merge(left=word_to_address_mapping, right=city,
         left_index=True, right_on="city",
         how="outer")
</code></pre>
| 1
|
2016-09-15T14:03:56Z
|
[
"python",
"pandas",
"dataframe",
"match"
] |
How to smartly match two data frames using Python (using pandas or other means)?
| 39,353,215
|
<p>I have one pandas dataframe composed of the names of the world's cities as well as countries, to which cities belong,</p>
<pre><code>city.head(3)
city country
0 Qal eh-ye Now Afghanistan
1 Chaghcharan Afghanistan
2 Lashkar Gah Afghanistan
</code></pre>
<p>and another data frame consisting of addresses of the world's universities, which is shown below:</p>
<pre><code>df.head(3)
university
0 Inst Huizhou, Huihzhou 516001, Guangdong, Peop...
1 Guangxi Acad Sci, Nanning 530004, Guangxi, Peo...
2 Shenzhen VisuCA Key Lab SIAT, Shenzhen, People...
</code></pre>
<p>The locations of cities' names are irregularly distributed across rows. I would like to match the city names with the addresses of world's universities. That is, I would like to know which city each university is located in. Hopefully, the city name matched is shown in the same row as the address of each university.</p>
<p>I've tried the following, and it doesn't work because the locations of cities are irregular across the rows.</p>
<pre><code>df['university'].str.split(',').str[0]
</code></pre>
| 4
|
2016-09-06T15:56:18Z
| 39,576,732
|
<p>Partial matches are prevented in the function below. Country information is also considered while matching cities. To use this function, the university dataframe needs to be split into lists, so that every piece of an address becomes a separate string.</p>
<pre><code>In [22]: def get_city(univ_name_split):
   ....:     # find country from university address
   ....:     for name in univ_name_split:
   ....:         if name in city['country'].values:
   ....:             country = name
   ....:         else:
   ....:             country = None
   ....:     if country:
   ....:         cities = city[city.country == country].city.values
   ....:     else:
   ....:         cities = city['city'].values
   ....:     # find city from university address
   ....:     for name in univ_name_split:
   ....:         if name in cities:
   ....:             return name
   ....:     else:
   ....:         return None
   ....:
In [1]: import pandas as pd
In [2]: city = pd.read_csv('city.csv')
In [3]: df = pd.read_csv('university.csv')
In [4]: # splitting university name and address
In [5]: df_split = df['university'].str.split(',')
In [6]: df_split = df_split.apply(lambda x:[i.strip() for i in x])
In [10]: df
Out[10]:
university
0 Kongu Engineering College, Perundurai, Erode, ...
1 Anna University - Guindy, Chennai, India
2 Birla Institute of Technology and Science, Pil...
In [11]: df_split
Out[11]:
0 [Kongu Engineering College, Perundurai, Erode,...
1 [Anna University - Guindy, Chennai, India]
2 [Birla Institute of Technology and Science, Pi...
Name: university, dtype: object
In [12]: city
Out[12]:
city country
0 Bangalore India
1 Chennai India
2 Coimbatore India
3 Delhi India
4 Erode India
# This function is a shorter version of the one above
In [14]: def get_city(univ_name_split):
   ....:     for name in univ_name_split:
   ....:         if name in city['city'].values:
   ....:             return name
   ....:     else:
   ....:         return None
   ....:
In [15]: df['city'] = df_split.apply(get_city)
In [16]: df
Out[16]:
university city
0 Kongu Engineering College, Perundurai, Erode, ... Erode
1 Anna University - Guindy, Chennai, India Chennai
2 Birla Institute of Technology and Science, Pil... None
</code></pre>
| 0
|
2016-09-19T15:22:18Z
|
[
"python",
"pandas",
"dataframe",
"match"
] |
Color gradient on scatter plot based on values
| 39,353,287
|
<p>I would like to define a color gradient for this image:
<a href="http://i.stack.imgur.com/xjtq5.png" rel="nofollow"><img src="http://i.stack.imgur.com/xjtq5.png" alt="enter image description here"></a></p>
<p>For each direction, I want to define a color gradient from red to blue depending on the density of values or the distance of points with the mode.</p>
| -1
|
2016-09-06T16:00:31Z
| 39,353,738
|
<p>First, compute the desired color for each data point. Then provide the color specification sequence as the <code>c</code> parameter of the <code>scatter</code> function (<a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter" rel="nofollow">http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter</a>). Colormaps (like those here: <a href="http://matplotlib.org/examples/color/colormaps_reference.html" rel="nofollow">http://matplotlib.org/examples/color/colormaps_reference.html</a>) may help with the first step.</p>
<p>Also, if your color is meant to represent density, you should consider finding the right probability distribution (such as the Gaussian one) for the data in a given direction and base the coloring on it.</p>
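<p>A minimal sketch of both steps, assuming a Gaussian kernel density estimate for the density and matplotlib's red-to-blue <code>RdBu</code> colormap:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# toy data; substitute your own x, y
x = np.random.randn(1000)
y = np.random.randn(1000)

# estimate point density and use it as the color value
xy = np.vstack([x, y])
density = gaussian_kde(xy)(xy)

plt.scatter(x, y, c=density, cmap='RdBu')
plt.colorbar(label='density')
plt.show()
</code></pre>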
| 0
|
2016-09-06T16:26:53Z
|
[
"python",
"matplotlib",
"scatter-plot"
] |
How to extract nouns from dataframe
| 39,353,348
|
<p>I want to extract nouns from a dataframe - only nouns.
I do as below:</p>
<pre><code>import pandas as pd
import nltk
from nltk.tag import pos_tag
from nltk import word_tokenize
df = pd.DataFrame({'noun': ['good day', 'good night']})
</code></pre>
<p>I want to get</p>
<pre><code> noun
0 day
1 night
</code></pre>
<p>My code </p>
<pre><code>df['noun'] = df.apply(lambda row: nltk.word_tokenize(row['noun']), axis=1)
noun=[]
for index, row in df.iterrows():
    noun.append([word for word,pos in pos_tag(row) if pos == 'NN'])
df['noun'] = noun
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-194-688cfbb21ec5> in <module>()
1 noun=[]
2 for index, row in df.iterrows():
----> 3 noun.append([word for word,pos in pos_tag(row) if pos == 'NN'])
4 df['noun'] = noun
C:\Users\Edward\Anaconda3\lib\site-packages\nltk\tag\__init__.py in pos_tag(tokens, tagset)
109 """
110 tagger = PerceptronTagger()
--> 111 return _pos_tag(tokens, tagset, tagger)
112
113
C:\Users\Edward\Anaconda3\lib\site-packages\nltk\tag\__init__.py in _pos_tag(tokens, tagset, tagger)
80
81 def _pos_tag(tokens, tagset, tagger):
---> 82 tagged_tokens = tagger.tag(tokens)
83 if tagset:
84 tagged_tokens = [(token, map_tag('en-ptb', tagset, tag)) for (token, tag) in tagged_tokens]
C:\Users\Edward\Anaconda3\lib\site-packages\nltk\tag\perceptron.py in tag(self, tokens)
150 output = []
151
--> 152 context = self.START + [self.normalize(w) for w in tokens] + self.END
153 for i, word in enumerate(tokens):
154 tag = self.tagdict.get(word)
C:\Users\Edward\Anaconda3\lib\site-packages\nltk\tag\perceptron.py in <listcomp>(.0)
150 output = []
151
--> 152 context = self.START + [self.normalize(w) for w in tokens] + self.END
153 for i, word in enumerate(tokens):
154 tag = self.tagdict.get(word)
C:\Users\Edward\Anaconda3\lib\site-packages\nltk\tag\perceptron.py in normalize(self, word)
222 if '-' in word and word[0] != '-':
223 return '!HYPHEN'
--> 224 elif word.isdigit() and len(word) == 4:
225 return '!YEAR'
226 elif word[0].isdigit():
AttributeError: 'list' object has no attribute 'isdigit'
</code></pre>
<p>Please help, how can I improve it?
* Sorry, I had to write some text so that I could insert the whole traceback.
I guess the problem is that I can't convert the list to the needed format?</p>
| 1
|
2016-09-06T16:04:13Z
| 39,355,909
|
<p>The problem is that in your loop, <code>row</code> is a pandas <code>Series</code> rather than a list. You can access the list of words by writing <code>row[0]</code> instead:</p>
<pre><code>>>> for index, row in df.iterrows():
...     noun.append([word for word,pos in pos_tag(row[0]) if pos == 'NN'])
>>> print(noun)
[['day'], ['night']]
</code></pre>
<p>Here you're getting a list of lists, with each list containing the nouns from one sentence. If you really want a flat list (as in the sample result in your question), write <code>noun.extend(...)</code> instead of <code>noun.append</code>. </p>
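<p>For the flat version, a sketch of the full loop with <code>extend</code>:</p>
<pre><code>noun = []
for index, row in df.iterrows():
    noun.extend([word for word, pos in pos_tag(row[0]) if pos == 'NN'])
df['noun'] = noun   # df['noun'] is now ['day', 'night']
</code></pre>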
| 1
|
2016-09-06T18:47:38Z
|
[
"python",
"nltk",
"pos-tagger"
] |
Count of unique values in nested dict python
| 39,353,428
|
<p>I have a dict like:</p>
<pre><code>dict = {
"a": {"Azerbaijan": 20006.0, "Germany": 20016.571428571428},
"b": {"Chad": 13000.0, "South Africa": 3000000.0},
"c": {"Chad": 200061.0, "South Africa": 3000000.0}
}
</code></pre>
<p>And I am trying to get a dict of the counts of the occurrences of each unique country and value.</p>
<p>For example, <code>{"Chad": 2, "South Africa": 2,..}</code>,<code>{"3000000": 2, "13000": 1,..}</code></p>
<p>I am using the code below, which works but is not very smart. Is there a better way to do this without a long iteration cycle, since the actual dict is massive?</p>
<pre><code>seencountries = {}
seenvalues = {}
for key, innerdict in dict.iteritems():
    for country, value in innerdict.iteritems():
        if value not in seenvalues.keys():
            seenvalues[value] = 0
        seenvalues[value]+=1
        if country not in seencountries.keys():
            seencountries[country] = 0
        seencountries[country]+=1
print seencountries
print seenvalues
</code></pre>
| 1
|
2016-09-06T16:08:29Z
| 39,353,620
|
<pre><code>from collections import Counter

seen_countries = Counter()
seen_values = Counter()

for data in your_dicts.itervalues():
    seen_countries += Counter(data.keys())
    seen_values += Counter(data.values())
</code></pre>
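<p>Applied to the sample dict above (a sketch; <code>your_dicts</code> stands in for the original dict, which is better not named <code>dict</code> since that shadows the builtin):</p>
<pre><code>print seen_countries  # Counter({'Chad': 2, 'South Africa': 2, 'Azerbaijan': 1, 'Germany': 1})
print seen_values     # Counter({3000000.0: 2, 20006.0: 1, 20016.571428571428: 1, 13000.0: 1, 200061.0: 1})
</code></pre>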
| 4
|
2016-09-06T16:20:17Z
|
[
"python",
"dictionary",
"iterator",
"set"
] |
Error string to float
| 39,353,705
|
<p>This is my code. I am doing a beginner exercise. When I run the program</p>
<p>I can enter values, but when I exit the loop the following error message appears on this line:</p>
<pre><code> a + = float (input ("Enter the product cost"))
</code></pre>
<p>ValueError: could not convert string to float:</p>
<p>Can someone help me?</p>
<p>Here it goes:</p>
<pre><code>e = 0.25
f = 0.18
a = 0.0
while True:
    a += float(input("Enter the product cost: "))
    if a == "":
        break
b = a*e
c = a*f
d = a+b+c
print ("tax: " + b)
print ("tips: " + c)
print ( "Total: " + d)
</code></pre>
| 1
|
2016-09-06T16:25:13Z
| 39,353,915
|
<p>You're comparing a float to an empty string with <code>if a == ""</code>, after already trying to convert the input. Try the following:</p>
<pre><code>while True:
    a_input = input("Enter the product cost: ")
    if a_input == "":
        break
    try:
        a += float(a_input)
    except ValueError:
        break
</code></pre>
<p>This checks the input first, then converts to float. It saves the string to <code>a_input</code> and checks it before incrementing <code>a</code>. It also uses a <code>try-except</code> block to break the loop if the conversion was unsuccessful. You'll also want to place your calculations <strong>inside</strong> your loop.</p>
| 0
|
2016-09-06T16:37:01Z
|
[
"python",
"python-3.x"
] |
Error string to float
| 39,353,705
|
<p>This is my code. I am doing a beginner exercise. When I run the program</p>
<p>I can enter values, but when I exit the loop the following error message appears on this line:</p>
<pre><code> a + = float (input ("Enter the product cost"))
</code></pre>
<p>ValueError: could not convert string to float:</p>
<p>Can someone help me?</p>
<p>Here it goes:</p>
<pre><code>e = 0.25
f = 0.18
a = 0.0
while True:
    a += float(input("Enter the product cost: "))
    if a == "":
        break
b = a*e
c = a*f
d = a+b+c
print ("tax: " + b)
print ("tips: " + c)
print ( "Total: " + d)
</code></pre>
| 1
|
2016-09-06T16:25:13Z
| 39,353,972
|
<p>There are a couple of issues:</p>
<ol>
<li>the check for a being an empty string <code>("")</code> comes <em>after</em> the attempt to add it to the <code>float</code> value <code>a</code>. You should handle this with an exception to make sure that the input is numerical.</li>
<li>If no one ever enters an empty or invalid string, you get stuck in an infinite loop and nothing prints. That's because your <code>b, c, d</code> calculations and the <code>print</code>s are indented outside the scope of the <code>while</code> loop.</li>
</ol>
<p>This should do what you want:</p>
<pre><code>e = 0.25
f = 0.18
a = 0.0
while True:
    try:
        input_number = float(input("Enter the product cost: "))
    except ValueError:
        print ("Input is not a valid number")
        break
    a += input_number
    b = a*e
    c = a*f
    d = a+b+c
    print ("tax: ", b)
    print ("tips: ", c)
    print ( "Total: ", d)
</code></pre>
| 0
|
2016-09-06T16:41:15Z
|
[
"python",
"python-3.x"
] |
Error string to float
| 39,353,705
|
<p>This is my code. I am doing a beginner exercise. When I run the program</p>
<p>I can enter values, but when I exit the loop the following error message appears on this line:</p>
<pre><code> a + = float (input ("Enter the product cost"))
</code></pre>
<p>ValueError: could not convert string to float:</p>
<p>Can someone help me?</p>
<p>Here it goes:</p>
<pre><code>e = 0.25
f = 0.18
a = 0.0
while True:
    a += float(input("Enter the product cost: "))
    if a == "":
        break
b = a*e
c = a*f
d = a+b+c
print ("tax: " + b)
print ("tips: " + c)
print ( "Total: " + d)
</code></pre>
| 1
|
2016-09-06T16:25:13Z
| 39,357,599
|
<p>You are combining two operations on the same line: the input of a string, and the conversion of the string to a <code>float</code>. If you enter the empty string to end the program, the conversion to <code>float</code> fails with the error message you see; the error contains the string it tried to convert, and it is empty.</p>
<p>Split it into multiple lines:</p>
<pre><code>while True:
    inp = input("Enter the product cost: ")
    if inp == "":
        break
    a += float(inp)
</code></pre>
| 1
|
2016-09-06T20:48:42Z
|
[
"python",
"python-3.x"
] |
Import function from a file in root directory
| 39,353,717
|
<p>My file where I am working is at : <code>folder/src/views/basics.py</code>.</p>
<p>I want to import function in file where is present at : <code>folder/src/server.py</code>.</p>
<p>How do I <strong>import a function from server.py into basics.py</strong>?
Note: I have made an <code>__init__.py</code> file in <code>folder/src/</code> and <code>folder/src/views</code></p>
| -1
|
2016-09-06T16:25:37Z
| 39,353,927
|
<pre><code>from ..server import Function
Function()
</code></pre>
<p>or</p>
<pre><code>from .. import server
server.Function()
</code></pre>
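<p>Note that relative imports like these only work when the code runs as part of the package; a sketch, assuming you launch from the directory containing <code>src</code>:</p>
<pre><code>python -m src.views.basics
</code></pre>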
| 0
|
2016-09-06T16:37:56Z
|
[
"python"
] |
pandas pivot table of sales
| 39,353,758
|
<p>I have a list like below:</p>
<pre><code> saleid upc
0 155_02127453_20090616_135212_0021 02317639000000
1 155_02127453_20090616_135212_0021 00000000000888
2 155_01605733_20090616_135221_0016 00264850000000
3 155_01072401_20090616_135224_0010 02316877000000
4 155_01072401_20090616_135224_0010 05051969277205
</code></pre>
<p>It represents one customer (saleid) and the items he/she got (upc of the item)</p>
<p>What I want is to pivot this table to a form like below:</p>
<pre><code> 02317639000000 00000000000888 00264850000000 02316877000000
155_02127453_20090616_135212_0021 1 1 0 0
155_01605733_20090616_135221_0016 0 0 1 0
155_01072401_20090616_135224_0010 0 0 0 0
</code></pre>
<p>So, columns are unique UPCs and rows are unique SALEIDs.</p>
<p>I read it like this:</p>
<pre><code>tbl = pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str})
tbl.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18570726 entries, 0 to 18570725
Data columns (total 2 columns):
saleid object
upc object
dtypes: object(2)
memory usage: 283.4+ MB
</code></pre>
<p>I have done some steps but not the correct ones!</p>
<pre><code>tbl.pivot_table(columns=['upc'],aggfunc=pd.Series.nunique)
upc 00000000000000 00000000000109 00000000000116 00000000000123 00000000000130 00000000000147 00000000000154 00000000000161 00000000000178 00000000000185 ...
saleid 44950 287 26180 4881 1839 623 3347 7
</code></pre>
<p>EDIT:
I'm using the variation of the solution below:</p>
<pre><code>chunksize = 1000000
f = 0
for chunk in pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str}, chunksize=chunksize):
    print(f)
    t = pd.crosstab(chunk.saleid, chunk.upc)
    t.head(3)
    t.to_csv('tbl_sales_index_converted_' + str(f) + '.csv.bz2',header=True,sep=';',compression='bz2')
    f = f+1
</code></pre>
<p>The original file is far too big to fit into memory after conversion.
The above solution has the problem that not all chunks end up with the same columns, since each chunk only sees part of the original file.</p>
<p>Question 2: is there a way to force all chunks to have the same columns?</p>
| 4
|
2016-09-06T16:28:14Z
| 39,353,909
|
<p><strong><em>Option 1</em></strong></p>
<pre><code>df.groupby(['saleid', 'upc']).size().unstack(fill_value=0)
</code></pre>
<p><a href="http://i.stack.imgur.com/6yvfj.png" rel="nofollow"><img src="http://i.stack.imgur.com/6yvfj.png" alt="enter image description here"></a></p>
<p><strong><em>Option 2</em></strong></p>
<pre><code>pd.crosstab(df.saleid, df.upc)
</code></pre>
<p><a href="http://i.stack.imgur.com/6yvfj.png" rel="nofollow"><img src="http://i.stack.imgur.com/6yvfj.png" alt="enter image description here"></a></p>
<h3>Setup</h3>
<pre><code>from StringIO import StringIO
import pandas as pd
text = """ saleid upc
0 155_02127453_20090616_135212_0021 02317639000000
1 155_02127453_20090616_135212_0021 00000000000888
2 155_01605733_20090616_135221_0016 00264850000000
3 155_01072401_20090616_135224_0010 02316877000000
4 155_01072401_20090616_135224_0010 05051969277205"""
df = pd.read_csv(StringIO(text), delim_whitespace=True, dtype=str)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/qM9sA.png" rel="nofollow"><img src="http://i.stack.imgur.com/qM9sA.png" alt="enter image description here"></a></p>
| 4
|
2016-09-06T16:36:40Z
|
[
"python",
"csv",
"pandas",
"numpy"
] |
pandas pivot table of sales
| 39,353,758
|
<p>I have a list like below:</p>
<pre><code> saleid upc
0 155_02127453_20090616_135212_0021 02317639000000
1 155_02127453_20090616_135212_0021 00000000000888
2 155_01605733_20090616_135221_0016 00264850000000
3 155_01072401_20090616_135224_0010 02316877000000
4 155_01072401_20090616_135224_0010 05051969277205
</code></pre>
<p>It represents one customer (saleid) and the items he/she got (upc of the item)</p>
<p>What I want is to pivot this table to a form like below:</p>
<pre><code> 02317639000000 00000000000888 00264850000000 02316877000000
155_02127453_20090616_135212_0021 1 1 0 0
155_01605733_20090616_135221_0016 0 0 1 0
155_01072401_20090616_135224_0010 0 0 0 0
</code></pre>
<p>So, columns are unique UPCs and rows are unique SALEIDs.</p>
<p>I read it like this:</p>
<pre><code>tbl = pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str})
tbl.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18570726 entries, 0 to 18570725
Data columns (total 2 columns):
saleid object
upc object
dtypes: object(2)
memory usage: 283.4+ MB
</code></pre>
<p>I have done some steps but not the correct ones!</p>
<pre><code>tbl.pivot_table(columns=['upc'],aggfunc=pd.Series.nunique)
upc 00000000000000 00000000000109 00000000000116 00000000000123 00000000000130 00000000000147 00000000000154 00000000000161 00000000000178 00000000000185 ...
saleid 44950 287 26180 4881 1839 623 3347 7
</code></pre>
<p>EDIT:
I'm using the variation of the solution below:</p>
<pre><code>chunksize = 1000000
f = 0
for chunk in pd.read_csv('tbl_sale_items.csv',sep=';',dtype={'saleid': np.str, 'upc': np.str}, chunksize=chunksize):
    print(f)
    t = pd.crosstab(chunk.saleid, chunk.upc)
    t.head(3)
    t.to_csv('tbl_sales_index_converted_' + str(f) + '.csv.bz2',header=True,sep=';',compression='bz2')
    f = f+1
</code></pre>
<p>The original file is far too big to fit into memory after conversion.
The above solution has the problem that not all chunks end up with the same columns, since each chunk only sees part of the original file.</p>
<p>Question 2: is there a way to force all chunks to have the same columns?</p>
| 4
|
2016-09-06T16:28:14Z
| 39,355,438
|
<p>A simple <code>pivot_table()</code> solution:</p>
<pre><code>In [16]: df.pivot_table(index='saleid', columns='upc', aggfunc='size', fill_value=0)
Out[16]:
upc 00000000000888 00264850000000 02316877000000 02317639000000 05051969277205
saleid
155_01072401_20090616_135224_0010 0 0 1 0 1
155_01605733_20090616_135221_0016 0 1 0 0 0
155_02127453_20090616_135212_0021 1 0 0 1 0
</code></pre>
| 2
|
2016-09-06T18:18:08Z
|
[
"python",
"csv",
"pandas",
"numpy"
] |
Deleting Object from QuerySet List in Django with ManyToMany relationship
| 39,353,856
|
<p>I am having trouble deleting an object from a list of objects while using <code>ForeignKey</code> and <code>ManyToMany</code> relationships in Django.</p>
<p>Here is the models I have set up for an Item, List, and the Order (an intermediary model).</p>
<pre><code>class Item(models.Model):
    title_english = models.CharField(max_length=250)
    url = models.CharField(max_length=250)
    img_url = models.CharField(max_length=250)

    def __unicode__(self):
        return self.title_english


class List(models.Model):
    slides = models.ManyToManyField(Item, through='Order')
    size = models.PositiveIntegerField(default=0)

    def incrementSize(self):
        self.size = self.size+1

    def __unicode__(self):
        return "List: " + str(self.slides.all())


class Order(models.Model):
    item = models.ForeignKey(Item, on_delete=models.CASCADE)
    list = models.ForeignKey(List, on_delete=models.CASCADE)
    index = models.PositiveIntegerField()

    def __unicode__(self):
        return str(index) + ": " + str(self.item)

    def appendItemToList(self, item, list):
        self.item = item
        self.list = list
        self.index = list.size
        list.incrementSize()
</code></pre>
<p>I am adding objects to the list (created dynamically from existing objects), through the view like so:</p>
<pre><code>def AddItem(request, pk):
    sourceObj = SourceObject.objects.get(pk=pk)
    lst = List.objects.all()
    if not lst:
        lst = List()
        lst.save()
    else:
        lst = lst[0]
    item = Item(title_english=sourceObj.name_english, url=sourceObj.slug, img_url=sourceObj.media)
    item.save()
    order = Order()
    order.appendItemToList(item, lst)
    order.save()
    lst.save()
    return redirect("some_url")
</code></pre>
<p>Now my issue is, deleting an item added to <code>list</code>. I am having trouble understanding how I can access the target object.</p>
<pre><code>def RemoveItem(request, pk):
    lst = List.objects.all()
    lst.delete()  # deletes the entire list
    # how do I access the target object from here to delete it?
    return redirect("some_url")
</code></pre>
<p>I read through the Django docs referring to "Following relationships backwards" here: <a href="https://docs.djangoproject.com/en/dev/topics/db/queries/#backwards-related-objects" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/db/queries/#backwards-related-objects</a></p>
<p>But I couldn't find a solution that works.</p>
<p>Note: I'm using Django 1.5 and Python 2.7</p>
| 2
|
2016-09-06T16:33:45Z
| 39,363,262
|
<p>If your target is the item referenced by 'pk', you could just use <code>List.objects.get(pk=pk).delete()</code>.</p>
<p>Be sure to put this inside a <code>try: except List.DoesNotExist:</code> block to avoid people triggering 500s by manually inputting random strings in the URL.</p>
<pre><code>try:
    List.objects.get(pk=pk).delete()
except List.DoesNotExist:
    do_something_when_item_does_not_exist()
return redirect("some_url")
</code></pre>
<p>Another option is using the 'get_object_or_404' function to retrieve the item; this will raise an <code>Http404</code> exception if the item given in the URL doesn't exist, which makes sense in my opinion.</p>
<pre><code>item = get_object_or_404(List, pk=pk)
item.delete()
return redirect("some_url")
</code></pre>
<hr>
<p>in case you are looking for a specific item inside a specific list, you'll need two arguments in your URL and in the view</p>
<p>url:</p>
<pre><code>url(r'^(?P<list_id>[\d]+)/(?P<item_id>[\d]+)/delete$', delete, name='whatever')
</code></pre>
<p>view:</p>
<pre><code>def delete(request, list_id, item_id):
    item = get_object_or_404(List, pk=item_id, list_id=list_id)
    item.delete()
    return redirect("some_url")
</code></pre>
| 3
|
2016-09-07T07:04:28Z
|
[
"python",
"django",
"postgresql"
] |
Inheriting from numpy.recarray, __unicode__ issue
| 39,353,917
|
<p>I have made a subclass of a numpy.recarray. The purpose of the class is to provide pretty printing for record arrays while maintaining the record array functionality. </p>
<p>Here is the code:</p>
<pre><code>import numpy as np
import re

class TableView(np.recarray):
    def __new__(cls, array):
        return np.asarray(array).view(cls)

    def __str__(self):
        return self.__unicode__().encode("UTF-8")

    def __repr__(self):
        return self.__unicode__().encode("UTF-8")

    def __unicode__(self):
        options = np.get_printoptions()
        nrows = len(self)
        print "unicode called"
        if nrows > 2*options['edgeitems']:
            if nrows > options['threshold']:
                nrows = 2*options['edgeitems'] + 1
        ncols = len(self.dtype)
        fields = self.dtype.names
        strdata = np.empty((nrows + 1, ncols), dtype='S32')
        strdata[0] = fields
        np_len = np.vectorize(lambda x: len(x))
        maxcolchars = np.empty(ncols, dtype='i4')
        for i, field in enumerate(fields):
            strdata[1:,i] = re.sub('[\[\]]', '', np.array_str(self[field])).split()
            maxcolchars[i] = np.amax(np_len(strdata[:,i]))
        rowformat = ' '.join(["{:>%s}" % maxchars for maxchars in maxcolchars])
        formatrow = lambda row: (rowformat).format(*row)
        strdata = np.apply_along_axis(formatrow, 1, strdata)
        return '\n'.join(strdata)
</code></pre>
<p>Here is how it prints:</p>
<pre><code>In [3]: x = np.array([(22, 2, -1000000000.0, 2000.0), (22, 2, 400.0, 2000.0),
...: (22, 2, 500.0, 2000.0), (44, 2, 800.0, 4000.0), (55, 5, 900.0, 5000.0),
...: (55, 5, 1000.0, 5000.0), (55, 5, 8900.0, 5000.0),
...: (55, 5, 11400.0, 5000.0), (33, 3, 14500.0, 3000.0),
...: (33, 3, 40550.0, 3000.0), (33, 3, 40990.0, 3000.0),
...: (33, 3, 44400.01213545, 3000.0)],
...: dtype=[('subcase', '<i4'), ('id', '<i4'), ('vonmises', '<f4'), ('maxprincipal', '<f4')])
In [6]: TableView(x)
unicode called
Out[6]:
subcase id vonmises maxprincipal
22 2 -1.00000000e+09 2000.
22 2 4.00000000e+02 2000.
22 2 5.00000000e+02 2000.
44 2 8.00000000e+02 4000.
55 5 9.00000000e+02 5000.
55 5 1.00000000e+03 5000.
55 5 8.90000000e+03 5000.
55 5 1.14000000e+04 5000.
33 3 1.45000000e+04 3000.
33 3 4.05500000e+04 3000.
33 3 4.09900000e+04 3000.
33 3 4.44000117e+04 3000.
</code></pre>
<p>But this does not work when i print only one row:</p>
<pre><code>In [7]: TableView(x)[0]
Out[7]: (22, 2, -1000000000.0, 2000.0)
</code></pre>
<p>It works for multiple rows:</p>
<pre><code>In [8]: TableView(x)[0:1]
unicode called
Out[8]:
subcase id vonmises maxprincipal
22 2 -1.00000000e+09 2000.
</code></pre>
<p>Upon further investigation:</p>
<pre><code>In [10]: type(TableView(x)[0])
Out[10]: numpy.record
In [11]: type(TableView(x)[0:1])
Out[11]: __main__.TableView
</code></pre>
<p>How can I make a numpy.record of TableView use the same <strong>__unicode__</strong>?</p>
| 0
|
2016-09-06T16:37:14Z
| 39,354,284
|
<p>Your diagnosis is right. A single element of this class is a <code>record</code>, not a <code>Tableview</code> array. </p>
<p>And indexing with a slice or list, <code>[0:1]</code> or <code>[[0]]</code>, is the immediate solution.</p>
<p>Trying to subclass <code>np.record</code> and changing the elements of the <code>Tableview</code> seems complicated.</p>
<p>You might try customizing the <code>__getitem__</code> method. This is called when you index the array.</p>
<p>It ends with:</p>
<pre><code>else:
    # return a single element
    return obj
</code></pre>
<p>A modified version might return a single-element Tableview instead:</p>
<pre><code> return Tableview([obj])
</code></pre>
<p>But that might produce some sort of endless recursion, keeping you from accessing elements as regular records.</p>
<p>Otherwise you might just want to live with the slice indexing. </p>
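<p>One possible direction (an untested sketch for a method added to <code>TableView</code>, with the same recursion caveat as above) is to wrap single records back into a one-row view:</p>
<pre><code>def __getitem__(self, index):
    obj = super(TableView, self).__getitem__(index)
    if isinstance(obj, np.record):
        # rebuild a one-row structured array so TableView's pretty __repr__ applies
        return TableView(np.array([obj], dtype=self.dtype))
    return obj
</code></pre>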
| 1
|
2016-09-06T17:00:55Z
|
[
"python",
"numpy",
"recarray"
] |
String after last hyphen with 1-N hyphens in Regex (Python)
| 39,353,919
|
<p>Given a pattern (<a href="https://regex101.com/r/iN9hG6/2" rel="nofollow">https://regex101.com/r/iN9hG6/2</a>) which can have any number of hyphens, where I want the text after the last one, how would I request that? I always get the first:</p>
<p><code><details>Fiction - Mystery - Duvall</details></code></p>
<p><code><details>Fiction - Mystery - Horror - Duvall</details></code></p>
<p>Where I want Duvall in each case.</p>
<p>Disclaimer: for anyone following my questions, I realize this looks a lot like</p>
<p><a href="http://stackoverflow.com/questions/39150414/finding-the-last-specific-character-type-in-a-string">Finding the last specific character type in a string</a></p>
<p>but I tried to apply that solution to no avail. Possibly not totally understanding it as a relative Regex newbie, just didn't want the person who did answer that to think I ignored them and was asking for duplicate work.</p>
| 0
|
2016-09-06T16:37:27Z
| 39,353,992
|
<p>Judging by the provided sample input data, this is an XML and should be parsed with <em>specialized</em> tools like <a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow"><code>xml.etree.ElementTree</code></a> or <a href="http://lxml.de/" rel="nofollow"><code>lxml</code></a>. To get to the data after the first hyphen, we'll use <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><code>str.split()</code></a> providing the <code>maxsplit</code> value of 1 and getting the last item of the result:</p>
<pre><code>import xml.etree.ElementTree as ET
data = """
<root>
<details>Fiction - Mystery - Duvall</details>
<details>Fiction - Mystery - Horror - Duvall</details>
</root>"""
root = ET.fromstring(data)
for details in root.findall("details"):
text = details.text
    print(text.rsplit(" - ", 1)[-1])
</code></pre>
<p>Prints:</p>
<pre><code>Duvall
Duvall
</code></pre>
| 0
|
2016-09-06T16:42:06Z
|
[
"python",
"regex"
] |
String after last hyphen with 1-N Hyphens in Regex (Python)
| 39,353,919
|
<p>Given a pattern (<a href="https://regex101.com/r/iN9hG6/2" rel="nofollow">https://regex101.com/r/iN9hG6/2</a>) which can have N # of hyphens, where I want the text after the last one, how would I request that, as I always get the first:</p>
<p><code><details>Fiction - Mystery - Duvall</details></code></p>
<p><code><details>Fiction - Mystery - Horror - Duvall</details></code></p>
<p>Where I want Duvall in each case.</p>
<p>Disclaimer: for anyone following my questions, I realize this looks a lot like</p>
<p><a href="http://stackoverflow.com/questions/39150414/finding-the-last-specific-character-type-in-a-string">Finding the last specific character type in a string</a></p>
<p>but I tried to apply that solution to no avail. Possibly not totally understanding it as a relative Regex newbie, just didn't want the person who did answer that to think I ignored them and was asking for duplicate work.</p>
| 0
|
2016-09-06T16:37:27Z
| 39,353,997
|
<p>I think what you're looking for is this:</p>
<pre><code><details>(?:\w+ - *)*(\w+)<\/details>
</code></pre>
<p>The idea is to match as much as possible inside the <code>(?: )</code> group, which is non-capturing (it doesn't create a capture group), then capture the thing you actually care about - the last token. The example below should give a bit more insight into what the syntax means.</p>
<p><a href="https://regex101.com/r/iN9hG6/2" rel="nofollow">Example</a></p>
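<p>For completeness, a quick usage sketch in Python (just the standard <code>re</code> module; the <code>\/</code> escape from the pattern above is unnecessary in Python):</p>
<pre><code>import re

pattern = r'<details>(?:\w+ - *)*(\w+)</details>'
for line in ['<details>Fiction - Mystery - Duvall</details>',
             '<details>Fiction - Mystery - Horror - Duvall</details>']:
    match = re.search(pattern, line)
    if match:
        print(match.group(1))  # prints Duvall in both cases
</code></pre>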
| 0
|
2016-09-06T16:42:21Z
|
[
"python",
"regex"
] |
String after last hyphen with 1-N Hyphens in Regex (Python)
| 39,353,919
|
<p>Given a pattern (<a href="https://regex101.com/r/iN9hG6/2" rel="nofollow">https://regex101.com/r/iN9hG6/2</a>) which can have N # of hyphens, where I want the text after the last one, how would I request that, as I always get the first:</p>
<p><code><details>Fiction - Mystery - Duvall</details></code></p>
<p><code><details>Fiction - Mystery - Horror - Duvall</details></code></p>
<p>Where I want Duvall in each case.</p>
<p>Disclaimer: for anyone following my questions, I realize this looks a lot like</p>
<p><a href="http://stackoverflow.com/questions/39150414/finding-the-last-specific-character-type-in-a-string">Finding the last specific character type in a string</a></p>
<p>but I tried to apply that solution to no avail. Possibly not totally understanding it as a relative Regex newbie, just didn't want the person who did answer that to think I ignored them and was asking for duplicate work.</p>
| 0
|
2016-09-06T16:37:27Z
| 39,354,020
|
<p>Sometimes the split() function is easier to use than RegEx. </p>
<pre><code>test_string = "<details>Fiction - Mystery - Horror - Duvall</details>"
author = test_string.split("-")[-1][1:-10]  # " Duvall</details>" -> "Duvall"
</code></pre>
| 0
|
2016-09-06T16:44:11Z
|
[
"python",
"regex"
] |
Getting the error unorderable types: int() >= list()
| 39,353,947
|
<p>I am new to Python. I am trying to write Python code for mergesort and I am unable to locate the error.</p>
<pre><code>import math
t = int(input())
def merge(lf,rf):
p=0
q= 0
b=[]
for i in range(len(rf)+len(lf)):
if (p>=len(lf)):
b.append(rf[q:])
break
elif (q>=len(rf)):
b.append(lf[p:])
break
elif (lf[p]>=rf[q]):
b.append(rf[q])
q=q+1
else:
b.append(lf[p])
p=p+1
return b
def sort(a):
if (len(a)>1):
mid = int(len(a)/2)
lf=a[:mid]
rf=a[mid:]
lf=sort(lf)
rf=sort(rf)
a=merge(lf,rf)
print (a)
return a
for i in range(t):
n = int(input())
a = [0]*n
for j in range(n):
a[j]=int(input())
sort(a)
print(a)
</code></pre>
| -1
|
2016-09-06T16:39:09Z
| 39,354,060
|
<p>When you do either <code>b.append(rf[q:])</code> or <code>b.append(lf[p:])</code>, you are adding a <em>list</em> as an element to the list <code>b</code>, which looks like it should be a list of integers.</p>
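<p>A minimal fix, mirroring the two branches from the question, is to use <code>extend</code>, which adds the elements of the slice one by one instead of the slice itself:</p>
<pre><code>if (p>=len(lf)):
    b.extend(rf[q:])   # adds the remaining elements, not a nested list
    break
elif (q>=len(rf)):
    b.extend(lf[p:])
    break
</code></pre>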
| 1
|
2016-09-06T16:46:21Z
|
[
"python"
] |
Getting the error unorderable types: int() >= list()
| 39,353,947
|
<p>I am new to Python. I am trying to write Python code for mergesort and I am unable to locate the error.</p>
<pre><code>import math
t = int(input())
def merge(lf,rf):
p=0
q= 0
b=[]
for i in range(len(rf)+len(lf)):
if (p>=len(lf)):
b.append(rf[q:])
break
elif (q>=len(rf)):
b.append(lf[p:])
break
elif (lf[p]>=rf[q]):
b.append(rf[q])
q=q+1
else:
b.append(lf[p])
p=p+1
return b
def sort(a):
if (len(a)>1):
mid = int(len(a)/2)
lf=a[:mid]
rf=a[mid:]
lf=sort(lf)
rf=sort(rf)
a=merge(lf,rf)
print (a)
return a
for i in range(t):
n = int(input())
a = [0]*n
for j in range(n):
a[j]=int(input())
sort(a)
print(a)
</code></pre>
| -1
|
2016-09-06T16:39:09Z
| 39,354,097
|
<p>This line</p>
<pre><code>b.append(rf[q:])
</code></pre>
<p>appends the list <code>rf[q:]</code> to <code>b</code> as a single item. But that's not what you really want, because <code>b</code> ends up containing sublists of numbers as well as the numbers it's supposed to contain. So you need to add the <em>contents</em> of <code>rf[q:]</code> to <code>b</code>, and you can do that with</p>
<pre><code>b.extend(rf[q:])
</code></pre>
<p>Similar remarks apply to </p>
<pre><code>b.append(lf[p:])
</code></pre>
<p>The error message arises because your code tries to compare numbers in the <code>lf</code> and <code>rf</code> lists with those sublists you accidentally appended.</p>
<p>Also, as Leon mentions in the comments, you need to do</p>
<pre><code>a = sort(a)
</code></pre>
<p>in the 2nd-last line of your script because your <code>sort</code> function doesn't modify the <code>a</code> you pass it.</p>
<hr>
<p>BTW, there's no need for you to import the math module in this script: you aren't calling any of its functions or using any of the constants it defines.</p>
| 2
|
2016-09-06T16:48:30Z
|
[
"python"
] |
Python: StopIteration error being raised iterating through blank csv
| 39,353,985
|
<p>I'm a new python user and have an issue. I apologize in advance if the solution is obvious.</p>
<p>I intend to be able to take a potentially large amount of csv files and jam them into a database which I can then use sql to query for reporting and other sweet stuff and I have the following code:</p>
<pre><code>import csv
# Establishes a db connection and returns connection and cursor obj
# creates dbName.db file in given location
def openDB (dbName,location):
import sqlite3,os
os.chdir(location)
conn = sqlite3.connect(dbName)
c = conn.cursor()
return conn,c
# Uses connection, cursor, csv obj and writes into table
def insertFromCsv (csvObj,connection,cursor,tableName):
c = cursor
# Just added this condition to check for blank files
# but I'm not sure if this is appropriate..
rowCount = sum(1 for row in csvObj)
if rowCount > 0:
csvObj.next()
i = 0
for row in csvObj:
tablerow = ", ".join('"' + value + '"' for value in row)
insertSQL = "INSERT INTO '%s' VALUES (%s)" % (tableName,tablerow)
c.execute(insertSQL)
i += 1
connection.commit()
print '%s rows committed to table %s' % (i, tableName)
# creates the .reader obj
reader = csv.reader(csvFile)
# extract column names from csv header
tableFields = reader.next()
# formats the column names for the INSERT statement coming up
tableFields = ", ".join('"' + field + '"' for field in tableFields)
DB = openDB('foo.db','../bar')
tableName = myTable
insertFromCsv(reader,DB[0],DB[1],myTable)
</code></pre>
<p>insertFromCsv() takes as input a csv file .reader object, sqlite3 database connection and cursor objects, and an output table to create and insert into.</p>
<p>It has been working alright until recently when I tried to input a csv file which consisted of just a header. I got a StopIteration error after calling the .next() method. How can this be avoided/ what am I misunderstanding/overlooking?</p>
<p>I appreciate all the help and welcome any criticism!</p>
| 1
|
2016-09-06T16:41:48Z
| 39,354,128
|
<p>You have exhausted the <code>csvObj</code> iterator on the line before:</p>
<pre><code>rowCount = sum(1 for row in csvObj)
</code></pre>
<p>Once an iterator is exhausted, you can't call <code>next()</code> on it anymore without that raising <code>StopIteration</code>; you have reached the end of the iterator already.</p>
<p>If you want to test for a blank CSV file, read <em>one</em> row with the <a href="https://docs.python.org/2/library/functions.html#next" rel="nofollow"><code>next()</code> function</a>, which can be given a default. <code>next(csvObj, None)</code> would return <code>None</code> rather than propagate the <code>StopIteration</code> exception when the iterator is exhausted for example.</p>
<p>Next, use <em>SQL parameters</em> to create one generic SQL statement, then use <code>cursor.executemany()</code> to have the database pull in all the rows and insert them for you:</p>
<pre><code>header = next(csvObj, None)
if header:
tablerow = ", ".join(['?'] * len(row))
insertSQL = 'INSERT INTO "%s" VALUES (%s)' % (tableName, tablerow)
c.executemany(insertSQL, csvObj)
</code></pre>
<p>The <code>?</code> is a SQL parameter placeholder; <code>executemany()</code> will fill these from each row from <code>csvObj</code>.</p>
<p>It won't matter to the <code>cursor.executemany()</code> call if <code>csvObj</code> actually yields any rows; if only the header exists and nothing more, then no actual <code>INSERT</code> statements are executed.</p>
<p>Note that I used <code>"..."</code> double quotes to correctly quote the table name, see <a href="https://www.sqlite.org/lang_keywords.html" rel="nofollow"><em>SQLite keywords</em></a>; single quotes are for string literal values, not table names.</p>
| 0
|
2016-09-06T16:50:26Z
|
[
"python",
"sqlite",
"csv",
"error-handling",
"sqlite3"
] |
How to download a gmail attachment?
| 39,353,989
|
<p>I am trying to download email attachments from Gmail using python using code shared on link</p>
<p><a href="https://gist.github.com/baali/2633554" rel="nofollow">https://gist.github.com/baali/2633554</a></p>
<p>I want to apply a time filter + subject filter and download the attachments. For example, all files received in the last 24 hours, etc.
Can anyone please share code or reading material on applying advanced filters for email selection?</p>
| 2
|
2016-09-06T16:41:55Z
| 39,354,450
|
<p>Based on the script you linked, add the following lines to filter emails on date and subject:</p>
<pre><code>from datetime import datetime
day = '2016-09-06'
subject = 'Your command is available'
look_for = '(SENTSINCE {0} SUBJECT "{1}")'.format(
datetime.strptime(day, '%Y-%m-%d').strftime('%d-%b-%Y'), subject)
typ, data = imapSession.search(None, look_for)  # replaces line 25 of the linked script
</code></pre>
<p>You will have to customize the variables, but this gives you a working example.
BTW, you should have a look at this <a href="https://gist.github.com/mjseeley/5e182c0c29dde014cfac" rel="nofollow">fork</a>; it seems more up to date.</p>
| 1
|
2016-09-06T17:12:22Z
|
[
"python",
"gmail",
"attachment"
] |
merge two dataframes by row with same index pandas
| 39,354,031
|
<p>Let's say I have the following two dataframes, X1 and X2. I would like
to merge them by row, so that rows sharing the same index are
combined from both dataframes.</p>
<pre><code> A B C D
DATE1 a1 b1 c1 d1
DATE2 a2 b2 c2 d2
DATE3 a3 b3 c3 d3
A B C D
DATE1 f1 g1 h1 i1
DATE2 f2 g2 h2 i2
DATE3 f3 g3 h3 i3
how would i combine them to get
A B C D
DATE1 A1 B1 C1 D1
f1 g1 h1 i1
DATE2 A2 B2 C2 D2
f2 g2 h2 i2
DATE3 A3 B3 C3 D3
f3 g3 h3 i3
</code></pre>
<p>I have tried this so far but this does not seem to work:</p>
<pre><code> d= pd.concat( { idx : [ X1[idx], X2[idx]] for idx, value in appended_data1.iterrows() } , axis =1}
</code></pre>
<p>thanks</p>
| 4
|
2016-09-06T16:44:53Z
| 39,354,165
|
<p><strong><em>Option 1</em></strong></p>
<pre><code>df3 = df1.stack().to_frame('df1')
df3.loc[:, 'df2'] = df2.stack().values
df3 = df3.stack().unstack(1)
df3
</code></pre>
<p><a href="http://i.stack.imgur.com/ZfKUu.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZfKUu.png" alt="enter image description here"></a></p>
<hr>
<p><strong><em>Option 2</em></strong>
<em>Generalized</em></p>
<pre><code>idx = df1.stack().index
dfs = [df1, df2]
dflabels = ['df1', 'df2']
a = np.stack([d.values.flatten() for d in dfs], axis=1)
df3 = pd.DataFrame(a, index=idx, columns=dflabels).stack().unstack(1)
</code></pre>
<hr>
<h3>Setup</h3>
<pre><code>from StringIO import StringIO
import pandas as pd
df1_text = """ A B C D
DATE1 a1 b1 c1 d1
DATE2 a2 b2 c2 d2
DATE3 a3 b3 c3 d3"""
df2_text = """ F G H I
DATE1 f1 g1 h1 i1
DATE2 f2 g2 h2 i2
DATE3 f3 g3 h3 i3"""
df1 = pd.read_csv(StringIO(df1_text), delim_whitespace=True)
df2 = pd.read_csv(StringIO(df2_text), delim_whitespace=True)
</code></pre>
<hr>
<pre><code>df1
</code></pre>
<p><a href="http://i.stack.imgur.com/JhqVP.png" rel="nofollow"><img src="http://i.stack.imgur.com/JhqVP.png" alt="enter image description here"></a></p>
<pre><code>df2
</code></pre>
<p><a href="http://i.stack.imgur.com/HElZs.png" rel="nofollow"><img src="http://i.stack.imgur.com/HElZs.png" alt="enter image description here"></a></p>
| 6
|
2016-09-06T16:53:05Z
|
[
"python",
"pandas",
"dataframe",
"append",
"concatenation"
] |
merge two dataframes by row with same index pandas
| 39,354,031
|
<p>Let's say I have the following two dataframes, X1 and X2. I would like
to merge them by row, so that rows sharing the same index are
combined from both dataframes.</p>
<pre><code> A B C D
DATE1 a1 b1 c1 d1
DATE2 a2 b2 c2 d2
DATE3 a3 b3 c3 d3
A B C D
DATE1 f1 g1 h1 i1
DATE2 f2 g2 h2 i2
DATE3 f3 g3 h3 i3
how would i combine them to get
A B C D
DATE1 A1 B1 C1 D1
f1 g1 h1 i1
DATE2 A2 B2 C2 D2
f2 g2 h2 i2
DATE3 A3 B3 C3 D3
f3 g3 h3 i3
</code></pre>
<p>I have tried this so far but this does not seem to work:</p>
<pre><code> d= pd.concat( { idx : [ X1[idx], X2[idx]] for idx, value in appended_data1.iterrows() } , axis =1}
</code></pre>
<p>thanks</p>
| 4
|
2016-09-06T16:44:53Z
| 39,379,720
|
<p>maybe this solution too could solve your problem:</p>
<pre><code>df3 = pd.concat([df1,df2]).sort_index()
print df3
Out[42]:
A B C D
DATE1 a1 b1 c1 d1
DATE1 f1 g1 h1 i1
DATE2 a2 b2 c2 d2
DATE2 f2 g2 h2 i2
DATE3 a3 b3 c3 d3
DATE3 f3 g3 h3 i3
</code></pre>
| 0
|
2016-09-07T22:06:06Z
|
[
"python",
"pandas",
"dataframe",
"append",
"concatenation"
] |
Nested if statement returns nothing in list of lists
| 39,354,036
|
<p>Here is my list (of lists):</p>
<pre><code>mylist = [ [1, "John", None, "Doe"], [2, "Jane", "group1", "Zee"], [3, "Lex", "group2", "Fee"]]
y = 2
for sublist in mylist:
if sublist[0] == y: # meaning the third list
if sublist[2] == None:
print(sublist[2]) # this should print nothing
else:
print(sublist[2]) #this should print something
</code></pre>
<p>The end result is that nothing prints for this code. </p>
<p>I am trying to do a check for situations where I have a <code>None</code> value in my list (of lists). This method above doesn't seem to work.</p>
<p>I cannot figure out why it refuses to print anything at all, but I assume the nested <code>if sublist[2] == None:</code> may have something to do with it.</p>
| 1
|
2016-09-06T16:45:00Z
| 39,354,279
|
<blockquote>
<p>I am trying to do a check for situations where I have a None value in my list (of lists)</p>
</blockquote>
<p>Then just check for membership with <code>in</code> for every sub list:</p>
<pre><code>for sublist in mylist:
if None in sublist:
# do your check
print("None in list: ", sublist)
</code></pre>
<p>prints:</p>
<pre><code>None in list: [1, 'John', None, 'Doe']
</code></pre>
<p>If you want to just see if a <code>None</code> exists in general just use <code>any</code>:</p>
<pre><code>any(None in sub for sub in mylist)
</code></pre>
<p>which returns <code>True</code> since <code>None</code> is in <code>mylist[0]</code></p>
| 4
|
2016-09-06T17:00:34Z
|
[
"python",
"python-3.x",
"python-3.4"
] |
Nested if statement returns nothing in list of lists
| 39,354,036
|
<p>Here is my list (of lists):</p>
<pre><code>mylist = [ [1, "John", None, "Doe"], [2, "Jane", "group1", "Zee"], [3, "Lex", "group2", "Fee"]]
y = 2
for sublist in mylist:
if sublist[0] == y: # meaning the third list
if sublist[2] == None:
print(sublist[2]) # this should print nothing
else:
print(sublist[2]) #this should print something
</code></pre>
<p>The end result is that nothing prints for this code. </p>
<p>I am trying to do a check for situations where I have a <code>None</code> value in my list (of lists). This method above doesn't seem to work.</p>
<p>I cannot figure out why it refuses to print anything at all, but I assume the nested <code>if sublist[2] == None:</code> may have something to do with it.</p>
| 1
|
2016-09-06T16:45:00Z
| 39,354,435
|
<p>I think I understand what you are trying to accomplish.
You'd like to enumerate through a list of lists, then check the <code>y</code>-th field to see if it contains <code>None</code>.</p>
<p>The problem you are facing is that when you do:</p>
<pre><code>y = 2
</code></pre>
<p>indicating the third field and later you do:</p>
<pre><code>if sublist[0] == y:
</code></pre>
<p>then you are checking if the <strong>first</strong> field (indicated by <code>[0]</code>) of the <code>sublist</code> is equal to <code>2</code></p>
<p>which brings you to the second sublist (the one whose first element is 2), not the third field of each sublist. This is where your code goes wrong.</p>
<p>You need to drop the <code>if sublist[0] == y:</code> completely, and simply check <code>sublist[y]</code></p>
<p>The following code will enumerate your list of lists, then check the <code>y</code>-th field for <code>None</code>:</p>
<pre><code>mylist = [ [1, "John", None, "Doe"], [2, "Jane", "group1", "Zee"], [3, "Lex", "group2", "Fee"]]
y = 2
for sublist in mylist:
if sublist[y] == None:
print(sublist[y]) # this should print nothing
else:
print(sublist[y]) #this should print something
</code></pre>
<p>If you want to check <em>any</em> field containing <code>None</code>, you get something like this:</p>
<pre><code>mylist = [ [1, "John", None, "Doe"], [2, "Jane", "group1", "Zee"], [3, "Lex", "group2", "Fee"]]
for sublist in mylist:
for field in sublist:
if field == None:
print("None detected in: " + str(sublist))
</code></pre>
| 2
|
2016-09-06T17:10:47Z
|
[
"python",
"python-3.x",
"python-3.4"
] |
Permission Denied for Python Keylogger
| 39,354,197
|
<p>I've created a simple python keylogger by following this tutorial: <a href="https://www.youtube.com/watch?v=8BiOPBsXh0g" rel="nofollow">https://www.youtube.com/watch?v=8BiOPBsXh0g</a></p>
<pre><code>import pyHook, pythoncom, sys, logging
file_log = 'C:\\log.txt'
def OnKeyboardEvent(event):
logging.basicConfig(filename=file_log, level=logging.DEBUG, format='%(message)s')
chr(event.Ascii)
logging.log(10,chr(event.Ascii))
return True
hooks_manager = pyHook.HookManager()
hooks_manager.KeyDown = OnKeyboardEvent
hooks_manager.HookKeyboard()
pythoncom.PumpMessages()
</code></pre>
<p>When I run the program, and type something, i get this error in the console:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Adithya1\Documents\pywin and pyhook\Newfolder\pyHook\HookManager.py", line 351, in KeyboardSwitch
return func(event)
File "C:\Users\Adithya1\Documents\pywin and pyhook\New folder\systemdata.pyw", line 6, in OnKeyboardEvent
logging.basicConfig(filename=file_log, level=logging.DEBUG, format='%(message)s')
File "C:\Python27\lib\logging\__init__.py", line 1540, in basicConfig
hdlr = FileHandler(filename, mode)
File "C:\Python27\lib\logging\__init__.py", line 911, in __init__
StreamHandler.__init__(self, self._open())
File "C:\Python27\lib\logging\__init__.py", line 936, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: 'C:\\log.txt'
</code></pre>
<p>It must be to do with the last line, Permission Denied. Any idea what I need to do to fix this? Any way to run it with administrator privileges?</p>
<p>Thanks in advance</p>
| 0
|
2016-09-06T16:54:56Z
| 39,355,017
|
<p><strong>Reposting as an answer for future use</strong></p>
<p>The easiest and, arguably, safest way would be to not write the log to the root of C. Change <code>file_log = 'C:\\log.txt'</code> to something like <code>file_log = 'C:\\Users\\Adithya1\\log.txt'</code> instead.</p>
<p>There are <a href="http://stackoverflow.com/questions/4028904/how-to-get-the-home-directory-in-python">other links</a> for finding the home directory of the user to make this more portable.</p>
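<p>As a sketch, the standard library can build such a path portably:</p>
<pre><code>import os

# expanduser('~') resolves to the current user's home directory
file_log = os.path.join(os.path.expanduser('~'), 'log.txt')
</code></pre>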
| 1
|
2016-09-06T17:48:24Z
|
[
"python"
] |
Python: How to change text of a message widget to a user input
| 39,354,205
|
<p>So I'm making a program which requires a user to enter a value. I want the value to be displayed via the Message widget (or Label widget) and updated whenever a new input is entered.</p>
<pre><code>def Enter():
s = v.get()
print (v.get())
e.delete(0, END)
e.insert(0, "")
#Code, Code, Code
...
# Area To Enter Text
v = StringVar()
e = Entry(root, textvariable=v)
e.pack()
m = Message(root, text = "Your Input")
m.pack()
# Enter Button
b = Button(root, text="OK", command=Enter)
b.pack()
</code></pre>
<p>Is there a way for the value of v to replace the text of the Message widget?</p>
<p>Note:</p>
<p>If I replace <code>text</code> with <code>textvariable</code>, it updates the text after every character key is pressed, whereas I need it to update when the user presses the button.</p>
<hr>
<p>My complete code:</p>
<pre><code>from tkinter import *
import os
# All Functions Below
def callback():
print ("HI")
def Exit():
os._exit(0)
def Enter():
s = e.get()
print (e.get())
m.configure(text=s)
e.delete(0, END)
e.insert(0, "")
def Population():
root = Tk
root.mainloop()
def SurvivalRate():
root = Tk
root.mainloop()
def BirthRate():
root = Tk
root.mainloop()
def NewGen():
root = Tk
root.mainloop()
root = Tk()
generation = 0
menubar = Menu(root)
menubar.add_command(label="Hello!", command=callback)
menubar.add_command(label="Quit!", command=Exit)
# Area To Enter Text
e = Entry(root)
e.pack()
m = Message(root, text = e)
m.pack()
# Enter Button
b = Button(root, text="OK", command=Enter)
b.pack()
Pop = Button(root, text="Population", command=Population)
Pop.pack()
</code></pre>
| 2
|
2016-09-06T16:55:27Z
| 39,354,401
|
<p>Simply add:</p>
<pre><code>m.configure(text=s)
</code></pre>
<p>to your function:</p>
<pre><code>def Enter():
s = v.get()
print (v.get())
m.configure(text=s)
e.delete(0, END)
e.insert(0, "")
</code></pre>
<hr>
<p>As a side-note, you do not necessarily need the <code>StringVar()</code>. The code below will do exactly the same:</p>
<pre><code>def Enter():
s = e.get()
m.configure(text=s)
e.delete(0, END)
e.insert(0, "")
#Code, Code, Code
...
# Area To Enter Text
e = Entry(root)
e.pack()
m = Message(root, text = "Your Input")
m.pack()
# Enter Button
b = Button(root, text="OK", command=Enter)
b.pack()
root.mainloop()
</code></pre>
| 3
|
2016-09-06T17:07:57Z
|
[
"python",
"tkinter"
] |
Unable to import a custom DLL in python
| 39,354,389
|
<p>I am trying to expose a C++ class to Python with <code>boost::python</code>, so I am going through <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/tutorial/tutorial/exposing.html" rel="nofollow">this tutorial</a>. I created a Visual Studio <code>.dll</code> project with this source code:</p>
<pre><code>#include <boost/python.hpp>
using namespace boost::python;
struct World
{
void set(std::string msg) { this->msg = msg; }
std::string greet() { return msg; }
std::string msg;
};
BOOST_PYTHON_MODULE(hello)
{
class_<World>("World")
.def("greet", &World::greet)
.def("set", &World::set)
;
}
</code></pre>
<p>And I built it as a 64-bit dll. The next step in the tutorial says:</p>
<blockquote>
<p>Here, we wrote a C++ class wrapper that exposes the member functions greet and set. Now, after building our module as a shared library, we may use our class World in Python. Here's a sample Python session:</p>
</blockquote>
<pre><code>>>> import hello
>>> planet = hello.World()
>>> planet.set('howdy')
>>> planet.greet()
'howdy'
</code></pre>
<p>However, after launching python in the same directory and typing <code>import hello</code> I get</p>
<pre><code>>>> import hello
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'hello'
>>>
</code></pre>
<p>I have tried renaming the 'dll' files to <code>hello.dll</code>, and also copying <em>every</em> output file (<code>dll</code>, <code>exp</code>, <code>ilk</code>, <code>lib</code>, and <code>pdb</code>) to <code>%PYTHONPATH%\DLLs</code>, yet I am still unable to import the module into python.</p>
<p>Much googling brought me to <a href="http://wolfprojects.altervista.org/articles/dll-in-c-for-python/" rel="nofollow">this article</a> recommending I use <code>ctypes</code> to import the <code>dll</code>. This lets me load the <code>dll</code>, but I am still unable to call the "World" class. For example:</p>
<pre><code>>>> import ctypes
>>> mydll = ctypes.cdll.LoadLibrary("hello")
>>> mydll
<CDLL 'hello', handle 7fef40a0000 at 0x775ba8>
>>> hello = mydll.World()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python35\lib\ctypes\__init__.py", line 360, in __getatt
r__
func = self.__getitem__(name)
File "C:\Program Files\Python35\lib\ctypes\__init__.py", line 365, in __getite
m__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'World' not found
>>>
</code></pre>
<p>So a couple questions:</p>
<ol>
<li><p>Is it possible to import a <code>dll</code> in python <em>without</em> using ctypes? The tutorial seems to indicate that it is, but doesn't provide very much detail about the right way to get the dll into python.</p></li>
<li><p>Which files do I need and where? It seems like I should only need the <code>dll</code> file from visual studio in the working directory of my python shell, but this is clearly not working for me.</p></li>
<li><p>Why can't I call <code>World</code> through ctypes?</p></li>
</ol>
<p>Some more important details: I am using windows 7 64-bit, python 3.5.2 64-bit, and Visual Studio 2015 with Boost 1.61.</p>
| 1
|
2016-09-06T17:07:06Z
| 39,354,682
|
<p>I actually just figured out the answer shortly after posting the question. Thanks to <a href="http://mmmovania.blogspot.com/2013/01/running-c-code-from-python-using.html" rel="nofollow">this blog post</a> I found out that simply renaming the <code>hello.dll</code> to <code>hello.pyd</code> was enough. From some more googling, I <em>think</em> that <code>ctypes</code> is only for C DLLs, not C++, and <em>not</em> with Boost! The point of <code>boost::python</code> is to remove the need for ctypes and make the DLL compatible with Python. So to answer all of my own questions:</p>
<ol>
<li><p>Yes, but it must have a <code>.pyd</code> extension.</p></li>
<li><p>You only need the compiled <code>dll</code> file and the <code>boost_python_vc140...dll</code> (may vary). However, like I said, the <code>dll</code> file must be renamed.</p></li>
<li><p>Because ctypes is not the right tool for loading a <code>boost::python</code> dll!</p></li>
</ol>
| 0
|
2016-09-06T17:26:43Z
|
[
"python",
"dll",
"ctypes",
"boost-python",
"python-c-api"
] |
Local CSV as criteria for SQL where clause against Network Netezza DB in Python
| 39,354,399
|
<p>In Python, I am querying against a Netezza database using SQL, and for one of the variables in the Netezza tables, I'd like to do an inner join from that to the same variable in an external CSV file. Is this possible? </p>
<p>Here is an example shell of my Python code, using the pandas <code>read_sql</code> function:</p>
<pre><code>conn_nz = od.connect("DRIVER={NetezzaSQL};...")
q_test = '''
SELECT
A.var1,
A.var2
FROM
Netezza.tableA A
WHERE
(A.var1 = csv.var1)
; '''
op_data = pd.read_sql(q_test, conn_nz)
op_data
</code></pre>
<p>Is this possible? I'm very new to both SQL and Python.</p>
| 1
|
2016-09-06T17:07:48Z
| 39,417,016
|
<p>Since it's a CSV, your best option is to create an <a href="http://www.ibm.com/support/knowledgecenter/SSULQD_7.2.1/com.ibm.nz.load.doc/c_load_external_tables.html" rel="nofollow">external table</a> in Netezza and join to that.</p>
<pre><code>create external table some_csv (
column1 varchar(10)
,column2 varchar(20)
..
) using (
dataobject '/path/to/file'
delimiter ','
);
</code></pre>
<p>Then you can join to it directly from inside the database.</p>
<pre><code>select
var1
,var2
from
tableA
join some_csv csv using (some_join_column)
</code></pre>
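<p>From Python, you can then read the joined result back with the same <code>pandas</code> call the question uses (a sketch; <code>q_join</code> simply holds the query above, and <code>conn_nz</code> is the existing ODBC connection):</p>
<pre><code>q_join = '''
SELECT var1, var2
FROM tableA
JOIN some_csv csv USING (some_join_column)
'''
op_data = pd.read_sql(q_join, conn_nz)
</code></pre>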
| 0
|
2016-09-09T17:40:36Z
|
[
"python",
"csv",
"netezza"
] |
django: link to detail page from list of returned results
| 39,354,430
|
<p>I'm creating a page that returns a list of movies with basic details after a user search. </p>
<p><strong>After the search, I'd like the user to be able to click on a movie, and get more details about it.</strong> </p>
<p>here's a link to the site: (be gentle, I only started learning this stuff 2 months ago! hah) <a href="http://moniblog.pythonanywhere.com/compare/" rel="nofollow">http://moniblog.pythonanywhere.com/compare/</a></p>
<p>the data is coming from <a href="https://www.themoviedb.org/documentation/api" rel="nofollow">TMDB's api</a> and the initial "generic" search JSON response doesn't have specific details that I'd display on the movie's detail page, but it has an ID that will be used for the specific movie's search.</p>
<p>I'm only using views.py to grab/display the search results, and I'm not sure if this is the right way to go, or if I should use a model, but that's probably a different question.</p>
<p>forms.py:</p>
<pre><code>from django import forms
class MovieSearch(forms.Form):
moviename = forms.CharField(label="Search", max_length=250)
</code></pre>
<p>views.py:</p>
<pre><code>from django.shortcuts import render, get_object_or_404
from django.conf import settings
from .forms import MovieSearch
import tmdbsimple as tmdb
tmdb.API_KEY = settings.TMDB_API_KEY
def search_movie(request):
parsed_data = {'results': []}
if request.method == 'POST':
form = MovieSearch(request.POST)
if form.is_valid():
search = tmdb.Search()
query = form.cleaned_data['moviename']
response = search.movie(query=query)
for movie in response['results']:
parsed_data['results'].append(
{
'title': movie['title'],
'id': movie['id'],
'poster_path': movie['poster_path'],
'release_date': movie['release_date'][:-6],
'popularity': movie['popularity'],
'overview': movie['overview']
}
)
for i in range(2, 5 + 1):
response = search.movie(query=query, page=i)
for movie in response['results']:
parsed_data['results'].append(
{
'title': movie['title'],
'id': movie['id'],
'poster_path': movie['poster_path'],
'release_date': movie['release_date'][:-6],
'popularity': movie['popularity'],
'overview': movie['overview']
}
)
context = {
"form": form,
"parsed_data": parsed_data
}
return render(request, './moviecompare/movies.html', context)
else:
form = MovieSearch()
else:
form = MovieSearch()
return render(request, './moviecompare/compare.html', {"form": form})
</code></pre>
<p>and html:</p>
<pre><code>{% extends 'moviecompare/compare.html' %}
{% block movies_returned %}
<div class="wrap">
<div class="compare-gallery">
{% for key in parsed_data.results|dictsortreversed:'release_date' %}
{% if key.poster_path and key.release_date and key.title and key.overview %}
<div class="gallery-item">
<img src="http://image.tmdb.org/t/p/w185/{{ key.poster_path }}">
<div class="gallery-text">
<div class="gallery-date"><h5><span><i class="material-icons">date_range</i></span> {{ key.release_date }}</h5></div>
<div class="gallery-title"><h3><a href="../detail/{{ key.id }}">{{ key.title }}</a></h3></div>
<div class="gallery-overview">{{ key.overview|truncatechars:80 }}</div>
</div>
</div>
{% endif %}
{% endfor %}
</div>
</div>
{% endblock %}
</code></pre>
<p>I've started playing with the urls.py to get something to work, but no luck so far.</p>
<p>site's urls.py:</p>
<pre><code>urlpatterns = [
url(r'^$', home, name="home"),
url(r'^blog/', include('blog.urls', namespace='blog')),
url(r'^compare/', include('moviecompare.urls', namespace='compare')),
url(r'^movies/', include('moviecompare.urls', namespace='movies')),
</code></pre>
<p>and the app's urls.py:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^', views.search_movie, name='compare'),
url(r'^(?P<movid>[0-9])+$', views.get_movie, name='movies')
]
</code></pre>
<p>edit: adding my 1st (failed) attempt at the movie detail view:</p>
<pre><code>def get_movie(request, movid=None):
instance = get_object_or_404(request, movid=movid)
context = {
'instance': instance
}
return render(request, './moviecompare/detail.html', context)
</code></pre>
| 2
|
2016-09-06T17:10:20Z
| 39,369,899
|
<p>I think you should fix this line in urls.py:</p>
<pre><code>url(r'^(?P<movid>[0-9])+$', views.get_movie, name='movies')
</code></pre>
<p>Move the "+" inside the brackets like that:</p>
<pre><code>url(r'^(?P<movid>[0-9]+)$', views.get_movie, name='movies')
</code></pre>
| 1
|
2016-09-07T12:23:50Z
|
[
"python",
"html",
"django"
] |
django: link to detail page from list of returned results
| 39,354,430
|
<p>I'm creating a page that returns a list of movies with basic details after a user search. </p>
<p><strong>After the search, I'd like the user to be able to click on a movie, and get more details about it.</strong> </p>
<p>here's a link to the site: (be gentle, I only started learning this stuff 2 months ago! hah) <a href="http://moniblog.pythonanywhere.com/compare/" rel="nofollow">http://moniblog.pythonanywhere.com/compare/</a></p>
<p>the data is coming from <a href="https://www.themoviedb.org/documentation/api" rel="nofollow">TMDB's api</a> and the initial "generic" search JSON response doesn't have specific details that I'd display on the movie's detail page, but it has an ID that will be used for the specific movie's search.</p>
<p>I'm only using views.py to grab/display the search results, and I'm not sure if this is the right way to go, or if I should use a model, but that's probably a different question.</p>
<p>forms.py:</p>
<pre><code>from django import forms
class MovieSearch(forms.Form):
moviename = forms.CharField(label="Search", max_length=250)
</code></pre>
<p>views.py:</p>
<pre><code>from django.shortcuts import render, get_object_or_404
from django.conf import settings
from .forms import MovieSearch
import tmdbsimple as tmdb
tmdb.API_KEY = settings.TMDB_API_KEY
def search_movie(request):
parsed_data = {'results': []}
if request.method == 'POST':
form = MovieSearch(request.POST)
if form.is_valid():
search = tmdb.Search()
query = form.cleaned_data['moviename']
response = search.movie(query=query)
for movie in response['results']:
parsed_data['results'].append(
{
'title': movie['title'],
'id': movie['id'],
'poster_path': movie['poster_path'],
'release_date': movie['release_date'][:-6],
'popularity': movie['popularity'],
'overview': movie['overview']
}
)
for i in range(2, 5 + 1):
response = search.movie(query=query, page=i)
for movie in response['results']:
parsed_data['results'].append(
{
'title': movie['title'],
'id': movie['id'],
'poster_path': movie['poster_path'],
'release_date': movie['release_date'][:-6],
'popularity': movie['popularity'],
'overview': movie['overview']
}
)
context = {
"form": form,
"parsed_data": parsed_data
}
return render(request, './moviecompare/movies.html', context)
else:
form = MovieSearch()
else:
form = MovieSearch()
return render(request, './moviecompare/compare.html', {"form": form})
</code></pre>
<p>and html:</p>
<pre><code>{% extends 'moviecompare/compare.html' %}
{% block movies_returned %}
<div class="wrap">
<div class="compare-gallery">
{% for key in parsed_data.results|dictsortreversed:'release_date' %}
{% if key.poster_path and key.release_date and key.title and key.overview %}
<div class="gallery-item">
<img src="http://image.tmdb.org/t/p/w185/{{ key.poster_path }}">
<div class="gallery-text">
<div class="gallery-date"><h5><span><i class="material-icons">date_range</i></span> {{ key.release_date }}</h5></div>
<div class="gallery-title"><h3><a href="../detail/{{ key.id }}">{{ key.title }}</a></h3></div>
<div class="gallery-overview">{{ key.overview|truncatechars:80 }}</div>
</div>
</div>
{% endif %}
{% endfor %}
</div>
</div>
{% endblock %}
</code></pre>
<p>I've started playing with the urls.py to get something to work, but no luck so far.</p>
<p>site's urls.py:</p>
<pre><code>urlpatterns = [
url(r'^$', home, name="home"),
url(r'^blog/', include('blog.urls', namespace='blog')),
url(r'^compare/', include('moviecompare.urls', namespace='compare')),
url(r'^movies/', include('moviecompare.urls', namespace='movies')),
</code></pre>
<p>and the app's urls.py:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^', views.search_movie, name='compare'),
url(r'^(?P<movid>[0-9])+$', views.get_movie, name='movies')
]
</code></pre>
<p>edit: adding my 1st (failed) attempt at the movie detail view:</p>
<pre><code>def get_movie(request, movid=None):
instance = get_object_or_404(request, movid=movid)
context = {
'instance': instance
}
return render(request, './moviecompare/detail.html', context)
</code></pre>
| 2
|
2016-09-06T17:10:20Z
| 39,396,718
|
<p>I was able to get this working by fixing the urlpattern per Roman's answer, along with a few needed tweaks.</p>
<p>in the app urls.py I needed to change the order:</p>
<pre><code>urlpatterns = [
url(r'^(?P<movid>[0-9]+)$', views.get_movie, name='movie_detail'),
url(r'^', views.search_movie, name='compare'),
]
</code></pre>
<p>and in the root urls.py I have:</p>
<pre><code> url(r'^compare/', include('moviecompare.urls', namespace='compare')),
</code></pre>
<p>added a function to the view:</p>
<pre><code>def get_movie(request, movid):
movie = tmdb.Movies(movid)
response = movie.info()
context = {
'response': response
}
return render(request, './moviecompare/detail.html', context)
</code></pre>
<p>and used this to link the detail in the html:</p>
<pre><code>{% url 'compare:movie_detail' movid=key.id %}
</code></pre>
| 0
|
2016-09-08T17:11:19Z
|
[
"python",
"html",
"django"
] |
Why Scrapy returns an Iframe?
| 39,354,465
|
<p>I want to crawl <a href="http://www.ooshop.com/courses-en-ligne/Home.aspx" rel="nofollow">this site</a> with Python-Scrapy.</p>
<p>I tried this:</p>
<pre><code>class Parik(scrapy.Spider):
name = "ooshop"
allowed_domains = ["http://www.ooshop.com/courses-en-ligne/Home.aspx"]
def __init__(self, idcrawl=None, proxy=None, *args, **kwargs):
super(Parik, self).__init__(*args, **kwargs)
self.start_urls = ['http://www.ooshop.com/courses-en-ligne/Home.aspx']
def parse(self, response):
print response.css('body').extract_first()
</code></pre>
<p>but I don't get the first page; I get an empty iframe:</p>
<pre><code>2016-09-06 19:09:24 [scrapy] DEBUG: Crawled (200) <GET http://www.ooshop.com/courses-en-ligne/Home.aspx> (referer: None)
<body>
<iframe style="display:none;visibility:hidden;" src="//content.incapsula.com/jsTest.html" id="gaIframe"></iframe>
</body>
2016-09-06 19:09:24 [scrapy] INFO: Closing spider (finished)
</code></pre>
| 0
|
2016-09-06T17:13:11Z
| 39,354,581
|
<p>The website is protected by Incapsula, a website security service. It's providing your "browser" with a challenge that it must perform before being given a special cookie that gives you access to the website itself.</p>
<p>Fortunately, it's not that hard to bypass. Install <a href="https://github.com/ziplokk1/incapsula-cracker" rel="nofollow">incapsula-cracker</a> and install its downloader middleware:</p>
<pre><code>DOWNLOADER_MIDDLEWARES = {
'incapsula.IncapsulaMiddleware': 900
}
</code></pre>
| 2
|
2016-09-06T17:21:13Z
|
[
"python",
"iframe",
"web-scraping",
"scrapy",
"web-crawler"
] |
Django model, Overriding the save method or custom method with property
| 39,354,500
|
<p>I'm working with some models that have to return a sum of model fields. Is it better to override the save method on the model, or just create a custom method that returns the sum? Are there any performance issues with either of the solutions?</p>
<p>Example 1, overriding the save method.</p>
<pre><code>class SomeModel(models.Model):
integer1 = models.IntegerField()
integer2 = models.IntegerField()
integer3 = models.IntegerField()
sum_integers = models.IntegerField()
def save(self, *args, **kwargs):
self.sum_integers = sum(
[self.integer1, self.integer2, self.integer3])
return super(SomeModel, self).save(*args, **kwargs)
</code></pre>
<p>Example 2, custom method</p>
<pre><code>class SomeModel(models.Model):
integer1 = models.IntegerField()
integer2 = models.IntegerField()
integer3 = models.IntegerField()
@property
def sum_integers(self):
return sum([self.integer1, self.integer2, self.integer3])
</code></pre>
| 5
|
2016-09-06T17:15:22Z
| 39,354,942
|
<p>The answer depends on the way you are going to use sum_integers. If you keep it as a field in the DB, you will be able to make queries on it; with a property that would be very tricky.</p>
<p>On the other hand, if you aren't going to make queries and don't need the value stored (in other words, you need sum_integers only as a data representation),
then you should go with the property.</p>
<p>From the point of view of application performance:
if you are going to run complex operations on thousands of objects, it might be better to store the value in a column, or at least change the property to a <a href="https://docs.djangoproject.com/es/1.10/ref/utils/#django.utils.functional.cached_property" rel="nofollow">cached_property</a> if it is called more than a few times.</p>
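<p>A sketch of that cached variant, applied to Example 2 from the question:</p>
<pre><code>from django.utils.functional import cached_property

class SomeModel(models.Model):
    integer1 = models.IntegerField()
    integer2 = models.IntegerField()
    integer3 = models.IntegerField()

    @cached_property
    def sum_integers(self):
        # computed once per instance, then cached for later accesses
        return sum([self.integer1, self.integer2, self.integer3])
</code></pre>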
<p>As a summary: storing the sum in a DB column is more universal and has few drawbacks, but in some cases the <code>property</code> approach allows you to keep your data model cleaner and saves some space on your disk.</p>
<p>I hope this answers your question. Please feel free to ask if something is unclear.</p>
| 2
|
2016-09-06T17:43:30Z
|
[
"python",
"django",
"django-models"
] |
Django model, Overriding the save method or custom method with property
| 39,354,500
|
<p>I'm working with some models that have to return a sum of model fields. Is it better to override the save method on the model, or just create a custom method that returns the sum? Are there any performance issues with either of the solutions?</p>
<p>Example 1, overriding the save method.</p>
<pre><code>class SomeModel(models.Model):
integer1 = models.IntegerField()
integer2 = models.IntegerField()
integer3 = models.IntegerField()
sum_integers = models.IntegerField()
def save(self, *args, **kwargs):
self.sum_integers = sum(
[self.integer1, self.integer2, self.integer3])
return super(SomeModel, self).save(*args, **kwargs)
</code></pre>
<p>Example 2, custom method</p>
<pre><code>class SomeModel(models.Model):
integer1 = models.IntegerField()
integer2 = models.IntegerField()
integer3 = models.IntegerField()
@property
def sum_integers(self):
return sum([self.integer1, self.integer2, self.integer3])
</code></pre>
| 5
|
2016-09-06T17:15:22Z
| 39,357,001
|
<p>Depends on whether you have to update the fields more or call the sum more.</p>
<p>I am assuming, to make it more generic, that the operation is not only addition but multiple complex calculations involving large numbers.</p>
<p>If you have to get the sum every now and then, it's better to create a model field and set the value on save.</p>
<p>If you have to update it mostly, then normally getting the value on call (the second method) is more apt.</p>
| 2
|
2016-09-06T20:04:58Z
|
[
"python",
"django",
"django-models"
] |
Method to sort values in row in pandas Series?
| 39,354,531
|
<p>Consider the following <code>pandas.Series</code> object:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello there you would like to sort me", "sorted i would like to be", "the yankees played the red sox", "apple apple banana fruit orange cucumber"])
</code></pre>
<p>I would like to sort the values <strong>inside</strong> each row, similar to the following approach:</p>
<pre><code>for row in s.index:
split_words = s.loc[row].split()
split_words.sort()
s.loc[row] = " ".join(split_words)
</code></pre>
<p>I have a huge dataset, however, so vectorization is important here. How can I use the pandas <code>str</code> attribute to accomplish the same, but much quicker?</p>
| 2
|
2016-09-06T17:17:30Z
| 39,354,601
|
<p>Use the string accessor <code>str</code> and <code>split</code>. Then apply <code>sorted</code> and <code>join</code>.</p>
<pre><code>s.str.split().apply(sorted).str.join(' ')
0 hello like me sort there to would you
1 be i like sorted to would
2 played red sox the the yankees
3 apple apple banana cucumber fruit orange
dtype: object
</code></pre>
| 3
|
2016-09-06T17:22:09Z
|
[
"python",
"pandas",
"vector"
] |
Method to sort values in row in pandas Series?
| 39,354,531
|
<p>Consider the following <code>pandas.Series</code> object:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello there you would like to sort me", "sorted i would like to be", "the yankees played the red sox", "apple apple banana fruit orange cucumber"])
</code></pre>
<p>I would like to sort the values <strong>inside</strong> each row, similar to the following approach:</p>
<pre><code>for row in s.index:
split_words = s.loc[row].split()
split_words.sort()
s.loc[row] = " ".join(split_words)
</code></pre>
<p>I have a huge dataset, however, so vectorization is important here. How can I use the pandas <code>str</code> attribute to accomplish the same, but much quicker?</p>
| 2
|
2016-09-06T17:17:30Z
| 39,354,739
|
<p>In my experience, Python lists perform better in these situations. Applying piRSquared's logic, a list comprehension would be:</p>
<pre><code>[' '.join(sorted(sentence.split())) for sentence in s.tolist()]
</code></pre>
<p>For timings I've used Shakespeare's works from <a href="http://norvig.com/ngrams/" rel="nofollow">Peter Norvig's website</a>.</p>
<pre><code>s = pd.read_table('shakespeare.txt', squeeze=True, header=None)
s = pd.Series(s.tolist()*10)
r1 = s.str.split().apply(sorted).str.join(' ')
r2 = pd.Series([' '.join(sorted(sentence.split())) for sentence in s.tolist()])
r1.equals(r2)
Out: True
%timeit s.str.split().apply(sorted).str.join(' ')
1 loop, best of 3: 2.71 s per loop
%timeit pd.Series([' '.join(sorted(sentence.split())) for sentence in s.tolist()])
1 loop, best of 3: 1.95 s per loop
</code></pre>
| 4
|
2016-09-06T17:30:00Z
|
[
"python",
"pandas",
"vector"
] |
What is the equivalent of np.std() in TensorFlow?
| 39,354,566
|
<p>Just looking for the equivalent of np.std() in TensorFlow to calculate the standard deviation of a tensor.</p>
| 1
|
2016-09-06T17:19:49Z
| 39,354,802
|
<p>To get the mean and variance just use tf.nn.moments.</p>
<pre><code>mean, var = tf.nn.moments(x,axes=[1])
</code></pre>
<p>for more on tf.nn.moments params see <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#moments" rel="nofollow">docs</a></p>
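<p>Since the question asks for the standard deviation specifically, take the square root of that variance (a one-line sketch):</p>
<pre><code>std = tf.sqrt(var)
</code></pre>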
| 2
|
2016-09-06T17:34:27Z
|
[
"python",
"tensorflow"
] |
Avoid displaying the UserWarning message of the bs4 library
| 39,354,567
|
<p>Every time I make a soup from the source code of a page with bs4 in Python, the terminal shows:</p>
<pre><code>/usr/local/lib/python3.4/dist-packages/bs4/__init__.py:181: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html5lib"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
The code that caused this warning is on line 229 of the file HibernetBlock.py. To get rid of this warning, change code that looks like this:
BeautifulSoup([your markup])
to this:
BeautifulSoup([your markup], "html5lib")
markup_type=markup_type))
</code></pre>
<p>Is there a way to avoid displaying this?</p>
| -1
|
2016-09-06T17:19:49Z
| 39,354,857
|
<p>While the inclusion of the parser is optional, the warning is telling you that results may vary from system to system if you don't explicitly state which parser you want to use.</p>
<p>The <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser" rel="nofollow">documentation</a> makes it clear that there are advantages and disadvantages to each parser. If you let the module choose "best available" on the system, you may use <code>html5lib</code> on one system and <code>html.parser</code> on another. The two could parse a page differently. This warning is telling you that is a possibility.</p>
<p><a href="http://i.stack.imgur.com/R4p28.png" rel="nofollow"><img src="http://i.stack.imgur.com/R4p28.png" alt="BS Parsers"></a>
<sup>Click for larger view from documentation</sup></p>
<p>To fix your warning, and ensure that all system are parsing the same way, <a href="https://www.python.org/dev/peps/pep-0020/" rel="nofollow">explicitly</a> set the parser you want to use:</p>
<pre><code>BeautifulSoup(html, "html5lib")
</code></pre>
<p>And remember:</p>
<blockquote>
<p>Explicit is better than implicit.</p>
</blockquote>
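<p>If for some reason you cannot change the call site and really just want to silence the message, Python's <code>warnings</code> module can also filter it (a sketch; the explicit-parser fix above is still the better option):</p>
<pre><code>import warnings

# suppress UserWarning messages raised from the bs4 module
warnings.filterwarnings('ignore', category=UserWarning, module='bs4')
</code></pre>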
| 1
|
2016-09-06T17:37:33Z
|
[
"python",
"python-3.x",
"beautifulsoup",
"warnings",
"bs4"
] |
Scraping google news with BeautifulSoup returns empty results
| 39,354,587
|
<p>I am trying to scrape google news using the following code:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import time
from random import randint
def scrape_news_summaries(s):
time.sleep(randint(0, 2)) # relax and don't let google be angry
r = requests.get("http://www.google.co.uk/search?q="+s+"&tbm=nws")
content = r.text
news_summaries = []
soup = BeautifulSoup(content, "html.parser")
st_divs = soup.findAll("div", {"class": "st"})
for st_div in st_divs:
news_summaries.append(st_div.text)
return news_summaries
l = scrape_news_summaries("T-Notes")
#l = scrape_news_summaries("""T-Notes""")
for n in l:
print(n)
</code></pre>
<p>Even though this bit of code was working before, I can't figure out why it's not working anymore. Is it possible that I've been banned by Google, since I only ran the code three or four times? (I tried using Bing News too, with unfortunately empty results...)</p>
<p>Thanks. </p>
| 0
|
2016-09-06T17:21:32Z
| 39,354,870
|
<p>I tried running the code and it works fine on my computer.</p>
<p>You could try printing the status code for the request, and see if it's anything other than 200.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import time
from random import randint
def scrape_news_summaries(s):
time.sleep(randint(0, 2)) # relax and don't let google be angry
r = requests.get("http://www.google.co.uk/search?q="+s+"&tbm=nws")
print(r.status_code) # Print the status code
content = r.text
news_summaries = []
soup = BeautifulSoup(content, "html.parser")
st_divs = soup.findAll("div", {"class": "st"})
for st_div in st_divs:
news_summaries.append(st_div.text)
return news_summaries
l = scrape_news_summaries("T-Notes")
#l = scrape_news_summaries("""T-Notes""")
for n in l:
print(n)
</code></pre>
<p><a href="https://www.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/" rel="nofollow">https://www.scrapehero.com/how-to-prevent-getting-blacklisted-while-scraping/</a> for a list of status code that's a sign you have been banned.</p>
| 2
|
2016-09-06T17:38:12Z
|
[
"python",
"web-scraping",
"beautifulsoup",
"google-news"
] |
Installing MySQLdb on osx El Capitan for Python
| 39,354,602
|
<p>I am trying:</p>
<pre><code>brew install mysql
pip install mysql-python
</code></pre>
<p>I am getting this output:</p>
<pre class="lang-none prettyprint-override"><code>Collecting mysql-python Using cached MySQL-python-1.2.5.zip Installing
collected packages: mysql-python Running setup.py install for
mysql-python ... error
Complete output from command /usr/bin/python -u -c "import setuptools,
tokenize;__file__='/private/var/folders/zf/phyv38gx56jddgzfyqnw0t7h0000gp/T/pip-build-vla6Kt/mysql-python/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__,
'exec'))" install --record
/var/folders/zf/phyv38gx56jddgzfyqnw0t7h0000gp/T/pip-_xy5Mn-record/install-record.txt
--single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.macosx-10.11-intel-2.7
copying _mysql_exceptions.py -> build/lib.macosx-10.11-intel-2.7
creating build/lib.macosx-10.11-intel-2.7/MySQLdb
copying MySQLdb/__init__.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb
copying MySQLdb/converters.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb
copying MySQLdb/connections.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb
copying MySQLdb/cursors.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb
copying MySQLdb/release.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb
copying MySQLdb/times.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb
creating build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/__init__.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/CR.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/ER.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/FLAG.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/REFRESH.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.11-intel-2.7/MySQLdb/constants
running build_ext
building '_mysql' extension
creating build/temp.macosx-10.11-intel-2.7
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv
-DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/usr/local/Cellar/mysql-connector-c/6.1.6/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
-c _mysql.c -o build/temp.macosx-10.11-intel-2.7/_mysql.o
_mysql.c:287:14: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
cmd_argc = PySequence_Size(cmd_args);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~
_mysql.c:317:12: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
groupc = PySequence_Size(groups);
~ ^~~~~~~~~~~~~~~~~~~~~~~
_mysql.c:470:14: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
int j, n2=PySequence_Size(fun);
~~ ^~~~~~~~~~~~~~~~~~~~
_mysql.c:1127:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = mysql_real_escape_string(&(self->connection), out, in, size);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_mysql.c:1129:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = mysql_escape_string(out, in, size);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_mysql.c:1168:9: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = PyString_GET_SIZE(s);
~ ^~~~~~~~~~~~~~~~~~~~
/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/stringobject.h:92:32:
note: expanded from macro 'PyString_GET_SIZE'
#define PyString_GET_SIZE(op) Py_SIZE(op)
^~~~~~~~~~~
/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/object.h:116:56:
note: expanded from macro 'Py_SIZE'
#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size)
~~~~~~~~~~~~~~~~~~~~~~^~~~~~~
_mysql.c:1178:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = mysql_real_escape_string(&(self->connection), out+1, in, size);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_mysql.c:1180:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = mysql_escape_string(out+1, in, size);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
_mysql.c:1274:11: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
if ((n = PyObject_Length(o)) == -1) goto error;
~ ^~~~~~~~~~~~~~~~~~
/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/abstract.h:434:25:
note: expanded from macro 'PyObject_Length'
#define PyObject_Length PyObject_Size
^
_mysql.c:1466:10: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = strlen(buf);
~ ^~~~~~~~~~~
_mysql.c:1468:10: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = strlen(buf);
~ ^~~~~~~~~~~
_mysql.c:1504:11: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = strlen(buf);
~ ^~~~~~~~~~~
_mysql.c:1506:11: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
len = strlen(buf);
~ ^~~~~~~~~~~
_mysql.c:1589:10: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
if (how < 0 || how >= sizeof(row_converters)) {
~~~ ^ ~
14 warnings generated.
In file included from _mysql.c:44:
/usr/local/Cellar/mysql-connector-c/6.1.6/include/my_config.h:176:9:
warning: 'SIZEOF_LONG' macro redefined [-Wmacro-redefined]
#define SIZEOF_LONG 8
^
/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pymacconfig.h:54:17:
note: previous definition is here
# define SIZEOF_LONG 4
^
In file included from _mysql.c:44:
/usr/local/Cellar/mysql-connector-c/6.1.6/include/my_config.h:181:9:
warning: 'SIZEOF_TIME_T' macro redefined [-Wmacro-redefined]
#define SIZEOF_TIME_T 8
^
/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pymacconfig.h:57:17:
note: previous definition is here
# define SIZEOF_TIME_T 4
^
_mysql.c:1589:10: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
if (how < 0 || how >= sizeof(row_converters)) {
~~~ ^ ~
3 warnings generated.
cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. build/temp.macosx-10.11-intel-2.7/_mysql.o -L/usr/local/Cellar/mysql-connector-c/6.1.6/lib -lmysqlclient -o build/lib.macosx-10.11-intel-2.7/_mysql.so
ld: warning: ignoring file /usr/local/Cellar/mysql-connector-c/6.1.6/lib/libmysqlclient.dylib,
file was built for x86_64 which is not the architecture being linked
(i386):
/usr/local/Cellar/mysql-connector-c/6.1.6/lib/libmysqlclient.dylib
running install_lib
copying build/lib.macosx-10.11-intel-2.7/_mysql.so -> /Library/Python/2.7/site-packages
error: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/_mysql.so'
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/zf/phyv38gx56jddgzfyqnw0t7h0000gp/T/pip-build-vla6Kt/mysql-python/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__,
'exec'))" install --record
/var/folders/zf/phyv38gx56jddgzfyqnw0t7h0000gp/T/pip-_xy5Mn-record/install-record.txt
--single-version-externally-managed --compile" failed with error code 1 in
/private/var/folders/zf/phyv38gx56jddgzfyqnw0t7h0000gp/T/pip-build-vla6Kt/mysql-python/
</code></pre>
<p>What am I doing wrong and how can I install it?</p>
| 0
|
2016-09-06T17:22:10Z
| 39,355,286
|
<p>The relevant error message is this:</p>
<pre class="lang-none prettyprint-override"><code>error: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/_mysql.so
</code></pre>
<p>It can't copy to the target directory because of missing permissions.</p>
<p>You can either install it to the local user site directory using</p>
<pre><code>pip install --user mysql-python
</code></pre>
<p>In this case it will only be available to the current user.</p>
<p>Or if you really need to install it for all users on the machine, you can use <code>sudo pip install mysql-python</code> to install as root, but that is usually not recommended.</p>
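<p>A third option worth noting (a sketch going beyond the original answer, assuming <code>virtualenv</code> is installed): install into an isolated virtual environment, which avoids touching the system site-packages entirely:</p>
<pre><code>virtualenv venv
source venv/bin/activate
pip install mysql-python
</code></pre>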
| 0
|
2016-09-06T18:09:17Z
|
[
"python",
"osx",
"python-2.7",
"mysql-python",
"osx-elcapitan"
] |
Best way to read and write two files?
| 39,354,706
|
<p>Folks, I'm looking for suggestions on the best way to deal with the following task: <br>
1. Read data off of a CSV file. <br>
2. Edit an XML file based on the data read in Step 1.</p>
<p>I am a Python noob. So far I am able to read the data off of a CSV file. In my Java world, I would simply pass the "read" data off to a method and iterate over and edit the XML file in that method.<br>
Can I do something similar in Python? Is there a more efficient and cleaner way of achieving the same in Python?</p>
<pre><code>import csv
ifile = open('my-file.csv', "rb")
reader = csv.reader(ifile)
rownum = 0
for row in reader:
#print row
if rownum == 0:
header = row
else:
colnum = 0
name = row[1]
desig = row[5]
print("Name: ", name)
print("Designation: ", desig)
rownum += 1
if rownum == 10:
break
ifile.close()
</code></pre>
| -2
|
2016-09-06T17:28:10Z
| 39,354,786
|
<p>Very similar to your solution, just uses <code>enumerate</code> and <code>with</code> instead of <code>open</code> and <code>close</code>:</p>
<pre><code>import csv
with open('my-file.csv', 'rb') as ifile:
reader = csv.reader(ifile)
for rownum, row in enumerate(reader):
#print row
if rownum == 0:
header = row
else:
colnum = 0
name = row[1]
desig = row[5]
print("Name: ", name)
print("Designation: ", desig)
if rownum == 10:
break
</code></pre>
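<p>Since the question also mentions editing an XML file from the parsed rows, here is a minimal sketch of handing the collected values to a helper that updates an XML file with the standard-library ElementTree (the file name, tag name, and attribute here are assumptions for illustration):</p>
<pre><code>import xml.etree.ElementTree as ET

def update_xml(names, xml_path='my-file.xml'):
    # hypothetical structure: set a 'name' attribute on each <person> element
    tree = ET.parse(xml_path)
    for element, name in zip(tree.getroot().iter('person'), names):
        element.set('name', name)
    tree.write(xml_path)
</code></pre>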
| 0
|
2016-09-06T17:33:23Z
|
[
"python",
"xml",
"csv",
"scripting",
"elementtree"
] |
Best way to read and write two files?
| 39,354,706
|
<p>Folks, I'm looking for suggestions on the best way to deal with the following task: <br>
1. Read data off of a CSV file. <br>
2. Edit an XML file based on the data read in Step 1.</p>
<p>I am a Python noob. So far I am able to read the data off of a CSV file. In my Java world, I would simply pass the "read" data off to a method and iterate over and edit the XML file in that method.<br>
Can I do something similar in Python? Is there a more efficient and cleaner way of achieving the same in Python?</p>
<pre><code>import csv
ifile = open('my-file.csv', "rb")
reader = csv.reader(ifile)
rownum = 0
for row in reader:
#print row
if rownum == 0:
header = row
else:
colnum = 0
name = row[1]
desig = row[5]
print("Name: ", name)
print("Designation: ", desig)
rownum += 1
if rownum == 10:
break
ifile.close()
</code></pre>
| -2
|
2016-09-06T17:28:10Z
| 39,355,865
|
<p>Your question is missing a little bit of clarity (it is not entirely clear what you are seeking).
Anyway, from what I understood, you are looking for an easy way to read a <strong><em>csv</em></strong> file and print the <strong><em>i</em></strong>th columns in a certain format (e.g. Name: ... ).
I am assuming that your file looks like the following:</p>
<pre><code>blah,Name,blahblah,blahblahblah,blahblahblahblah,Designation
whatever,name1,whatever,whatever,whatever,Designation1
whatever,name2,whatever,whatever,whatever,Designation2
whatever,name3,whatever,whatever,whatever,Designation3
whatever,name4,whatever,whatever,whatever,Designation4
whatever,name5,whatever,whatever,whatever,Designation5
whatever,name6,whatever,whatever,whatever,Designation6
</code></pre>
<p>If that is the case, then here is what I would do. I would use the well-known pandas library:</p>
<pre><code>import pandas as pd
</code></pre>
<p>Read the csv file into a dataframe "df"</p>
<pre><code>df = pd.read_csv('my-file.csv')
</code></pre>
<p>The variable header will hold the column names</p>
<pre><code>header = list(df) # the equivilant of your "row[0]" variable
</code></pre>
<p>Method #1 of printing the required data</p>
<pre><code>for i, j in zip(list(df['Name'].values), list(df['Designation'].values)):
print "Name: {} \nDesignation: {}".format(i, j)
</code></pre>
<p>This prints out the following:</p>
<pre><code>Name: name1
Designation: Designation1
Name: name2
Designation: Designation2
Name: name3
Designation: Designation3
Name: name4
Designation: Designation4
Name: name5
Designation: Designation5
Name: name6
Designation: Designation6
</code></pre>
<p>Method #2 of printing the required data </p>
<pre><code>df['Name'] = df['Name'].map('Name: {}'.format)
df['Designation'] = df['Designation'].map('Designation: {}'.format)
print df[['Name', 'Designation']].head(n=10)
</code></pre>
<p>Which will print out the following:</p>
<pre><code>0 Name: name1 Designation: Designation1
1 Name: name2 Designation: Designation2
2 Name: name3 Designation: Designation3
3 Name: name4 Designation: Designation4
4 Name: name5 Designation: Designation5
5 Name: name6 Designation: Designation6
</code></pre>
| 2
|
2016-09-06T18:44:55Z
|
[
"python",
"xml",
"csv",
"scripting",
"elementtree"
] |
BeautifulSoup.find_all for nested divs without class attribute
| 39,354,804
|
<p>I am working with python2 and I wanted to get the content of a div in html page.</p>
<pre><code><div class="lts-txt2">
Some Content
</div>
</code></pre>
<p>If the div class is like above then I can get the content using</p>
<pre><code>BeautifulSoup.find_all('div', attrs={"class": 'lts-txt2'})
</code></pre>
<p>But if the div is like,</p>
<pre><code><div class="lts-txt2">
<div align="justify">
Some Content
</div>
</div>
</code></pre>
<p>then using </p>
<pre><code>BeautifulSoup.find_all('div', attrs={"class": 'lts-txt2'})
</code></pre>
<p>isn't returning the content.
So I tried</p>
<pre><code>BeautifulSoup.find_all('div', attrs={"align": 'justify'})
</code></pre>
<p>But that didn't work either.
How can I solve this problem?</p>
| -1
|
2016-09-06T17:34:29Z
| 39,354,875
|
<p>You can extract all text from the node <em>including nested nodes</em> with the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text" rel="nofollow"><code>Element.get_text()</code> method</a>:</p>
<pre><code>[el.get_text() for el in soup.find_all('div', attrs={"class": 'lts-txt2'})]
</code></pre>
<p>This produces a list with the textual content of each such <code>div</code>, whether or not there is a nested <code>div</code> inside.</p>
<p>You could also use the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector <code>Element.select()</code> function</a> to select the nested div:</p>
<pre><code>soup.select('div.lts-txt2 > div')
</code></pre>
| 0
|
2016-09-06T17:38:31Z
|
[
"python",
"html",
"beautifulsoup"
] |
Extracting string before the quotations
| 39,354,823
|
<p>I am trying to parse a PDF in Python and extract the strings in quotation marks. I am able to extract the text in quotations, but I also want to extract the name before the quotation starts.
For example, consider this:</p>
<p>Ziblatt, Daniel. 2004. "Rethinking the Origins of Federalism: Puzzle, Theory, and Evidence from Nineteenth-Century Europe,"</p>
<p>I am able to extract everything in quotations, but I want the name to be extracted as well.
This is the code I am using. Please help:</p>
<pre><code>def quotes(x):
quoted = re.compile('"[^"]*"')
for value in quoted.findall(x):
print value
</code></pre>
| 0
|
2016-09-06T17:35:25Z
| 39,354,923
|
<p>Capturing data before a double-quote should work:</p>
<pre><code>def quotes(x):
quoted = re.compile('(.+)"[^"]+"')
for value in quoted.findall(x):
print value.strip()
</code></pre>
<p>I get this output:</p>
<pre><code>>>> quotes(text)
Ziblatt, Daniel. 2004.
</code></pre>
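<p>If you want to pair each name with its quote in a single pass, a two-group pattern is another option (a sketch going beyond the answer above; because both groups exclude the quote character, the name group stops at the nearest opening quote):</p>
<pre><code>import re

def names_and_quotes(x):
    # findall returns one (name, quote) tuple per quoted span
    for name, quote in re.findall(r'([^"]+)"([^"]*)"', x):
        print name.strip(), '->', quote
</code></pre>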
| 1
|
2016-09-06T17:42:40Z
|
[
"python",
"extract",
"quotes"
] |
Cross-Origin Request Blocked on POST call to api
| 39,355,023
|
<p>I know there are many other articles on this topic but unfortunately none of the solutions worked for me.</p>
<p>I am running Linux Red Hat 7.2 with Apache 2.4 (httpd). I am working on the server directly as localhost. The API is Python-based, which I have little experience with; I downloaded the program from GitHub: <a href="https://github.com/mozilla/http-observatory" rel="nofollow">mozilla http observatory</a>.</p>
<p>I have tried different settings for several hours and have made no progress on this issue and now seeking further assistance.</p>
<p>The main page I am on before the POST is <a href="http://localhost" rel="nofollow">http://localhost</a></p>
<p><strong>Here is the js (ajax)</strong></p>
<p>A POST XHR call is made to: <a href="http://localhost:57001/api/v1/scan" rel="nofollow">http://localhost:57001/api/v1/scan</a></p>
<pre class="lang-none prettyprint-override"><code>function loadTLSObservatoryResults(rescan, initiateScanOnly) {
'use strict';
var rescan = typeof rescan !== 'undefined' ? rescan : false;
var initiateScanOnly = typeof initiateScanOnly !== 'undefined' ? initiateScanOnly : false;
/*var SCAN_URL = 'https://tls-observatory.services.mozilla.com/api/v1/scan';
var RESULTS_URL = 'https://tls-observatory.services.mozilla.com/api/v1/results';
var CERTIFICATE_URL = 'https://tls-observatory.services.mozilla.com/api/v1/certificate';*/
var SCAN_URL = 'http://localhost:57001/api/v1/scan';
var RESULTS_URL = 'http://localhost:57001/api/v1/results';
var CERTIFICATE_URL = 'http://localhost:57001/api/v1/certificate';
// if it's the first scan through, we need to do a post
if (Observatory.state.third_party.tlsobservatory.scan_id === undefined || rescan) {
// make a POST to initiate the scan
$.ajax({
data: {
rescan: rescan,
target: Observatory.hostname
},
initiateScanOnly: initiateScanOnly,
dataType: 'json',
method: 'POST',
error: function() { errorResults('Scanner unavailable', 'tlsobservatory') },
success: function (data) {
Observatory.state.third_party.tlsobservatory.scan_id = data.scan_id;
if (this.initiateScanOnly) { return; }
loadTLSObservatoryResults(); // retrieve the results
},
url: SCAN_URL
});
</code></pre>
<p><strong>Here is my httpd.conf</strong></p>
<p>basically just included this line:
<code>LoadModule headers_module modules/mod_headers.so</code></p>
<p><strong>Here is my .htaccess file:</strong></p>
<pre class="lang-none prettyprint-override"><code>RewriteEngine On
Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Max-Age "1000"
Header always set Access-Control-Allow-Headers "x-requested-with, Content-Type, origin, authorization, accept, client-security-token"
RewriteCond %{REQUEST_METHOD} OPTIONS
RewriteRule ^(.*)$ $1 [R=200,L]
</code></pre>
<p>I have restarted Apache but I see the same warning in the developer console:</p>
<p><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:57001/api/v1/scan. (Reason: CORS header 'Access-Control-Allow-Origin' missing).</code></p>
| 0
|
2016-09-06T17:48:54Z
| 39,355,142
|
<p>Out of curiosity, are you using a Javascript build environment like Ember or Angular on the client side? Because that will affect your hosted port, and could contribute to cross-origin related errors.</p>
<p>Another thing you could do is to modify your ajax slightly and create a helper function like so:</p>
<pre><code> function myCallbackFunction(data){
Observatory.state.third_party.tlsobservatory.scan_id = data.scan_id;
if (this.initiateScanOnly) { return; }
loadTLSObservatoryResults(); // retrieve the results
}
// make a POST to initiate the scan
$.ajax({
data: {
rescan: rescan,
target: Observatory.hostname
},
initiateScanOnly: initiateScanOnly,
dataType: 'jsonp',
method: 'POST',
error: function() { errorResults('Scanner unavailable', 'tlsobservatory') },
          jsonpCallback: 'myCallbackFunction',
url: SCAN_URL
});
</code></pre>
<p>Try this and let me know if it works for you.</p>
| 0
|
2016-09-06T17:58:25Z
|
[
"jquery",
"python"
] |
Pony ORM - Order by specific order
| 39,355,048
|
<p>Performing a Pony ORM query and attempting to sort the query by three attributes that live on the model. First by song type, which can be one of the five values listed in <code>ssf_type_order_map</code>, then by duration (int), and uuid (string). </p>
<p>For song type, I would like to have the songs sorted in the following order: Full, Full (Instrumental), Shorts, Loops, Stems</p>
<p>If I attempt to sort using the following <code>.order_by()</code> call, it doesn't return any errors but it doesn't sort by the type as I need it in the aforementioned order (duration and UUID sorting works fine though).</p>
<pre><code>song_source_files = self.song_source_files.select(lambda ssf: True).order_by(lambda ssf: (ssf.type, ssf.duration, ssf.uuid))
</code></pre>
<p>This is what I would think would be the ideal query: map the string types to a dict that ranks their ordering.</p>
<pre><code>ssf_type_order_map = {
'Full': 1,
'Full (Instrumental)': 2,
'Shorts': 3,
'Loops': 4,
'Stems': 5
}
song_source_files = self.song_source_files.select(lambda ssf: True).order_by(lambda ssf: (ssf_type_order_map[ssf.type], ssf.duration, ssf.uuid))
</code></pre>
<p>But I get an error when running this saying "Expression <code>ssf_type_order_map</code> has unsupported type 'dict'".</p>
<p>The Pony ORM docs on order_by <a href="https://docs.ponyorm.com/crud.html#sorting-of-query-results" rel="nofollow">here</a> are very vague on using lambdas in this context.</p>
<p><strong>Update - Sept 7th</strong></p>
<p>I've also tried adding the following getter property on the model as follows:</p>
<pre><code>@property
def source_type(self):
ssf_type_order_map = {
'Full': 1,
'Full (Instrumental)': 2,
'Shorts': 3,
'Loops': 4,
'Stems': 5
}
    return ssf_type_order_map[self.type]
</code></pre>
<p>I then try ordering the query as follows:</p>
<pre><code>song_source_files = self.song_source_files.select(lambda ssf: True).order_by(lambda ssf: (ssf.source_type, ssf.duration, ssf.uuid))
</code></pre>
<p>But I receive an error basically saying that the model does not have this property. My assumption, based on a similar issue with Django's ORM, is that you can only access attributes that are present in the database models. </p>
<p>If that is the case with Pony as well, how does one pull off something that I would like to accomplish?</p>
| 3
|
2016-09-06T17:50:29Z
| 39,370,908
|
<p>First, I want to say that Pony distinguishes two types of sub-expressions: external expressions and correlated expressions. External expressions don't depend on the value of the generator loop variable(s), while correlated expressions do. Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>from some_module import f, g
x = 100
query = select(e for e in MyEntity if e.attr > f(x, 200) and g(x, e))
</code></pre>
<p>In this query we have two sub-expressions: the first is <code>f(x, 200)</code> and the second is <code>g(x, e)</code>. The former is considered by Pony as external expression because it doesn't use any loop variable. In that case Pony assumes that it is possible to calculate the value of the expression in Python before query execution, and then translate the expression into a single parameter. For such expressions Pony doesn't impose any restrictions on which Python functions can be used inside them, because a result of such an expression is just a single value evaluated in Python.</p>
<p>The second expression <code>g(x, e)</code> cannot be evaluated in Python, because it depends on the value of the loop variable <code>e</code>. The result of such expression may be different for different table rows. Therefore, Pony needs to translate such expressions to SQL. Not every Python expression can be translated to SQL, and <code>g</code> needs to be a function which Pony specifically know how to translate. Pony defines a subset of Python operations which can be translated. This subset includes arithmetic operations on numeric types, string methods such as <code>startswith</code>, <code>endswith</code>, <code>in</code>, etc., and aggregated functions such as <code>sum</code> and <code>max</code>. </p>
<p>In your code, when you write</p>
<pre class="lang-py prettyprint-override"><code>.order_by(lambda ssf: (ssf_type_order_map[ssf.type], ssf.duration, ssf.uuid))
</code></pre>
<p>the expression <code>ssf_type_order_map[ssf.type]</code> refers to the object variable <code>ssf</code>, and hence will have different values for each table row, so this is a correlated expression and Pony needs to translate it into SQL. Currently Pony doesn't understand how to perform such a specific translation, but in principle this is doable. The result of the translation would be the following SQL <code>CASE</code> statement:</p>
<pre class="lang-sql prettyprint-override"><code>ORDER BY CASE ssf.type
WHEN 'Full' THEN 1
WHEN 'Full (Instrumental)' THEN 2
WHEN 'Shorts' THEN 3
WHEN 'Loops' THEN 4
WHEN 'Stems' THEN 5
ELSE 0
END
</code></pre>
<p>The good news is that you can write such expression in Pony using Python if-expression syntax:</p>
<pre class="lang-py prettyprint-override"><code>(1 if ssf.type == 'Full' else
2 if ssf.type == 'Full (Instrumental)' else
3 if ssf.type == 'Shorts' else
4 if ssf.type == 'Loops' else
5 if ssf.type == 'Stems' else 0)
</code></pre>
<p>At this moment Pony does not support decompiling if-expressions yet, so if you attempt to write such code directly, you will get an exception. As a workaround you need to pass the source of a lambda function as a string. In this case it will be translated just right, because we can directly parse the string to AST without decompiling. So you can write:</p>
<pre class="lang-py prettyprint-override"><code>song_source_files = self.song_source_files.select().order_by("""
lambda ssf: ((1 if ssf.type == 'Full' else
2 if ssf.type == 'Full (Instrumental)' else
3 if ssf.type == 'Shorts' else
4 if ssf.type == 'Loops' else
5 if ssf.type == 'Stems' else 0),
ssf.duration, ssf.uuid)
""")
</code></pre>
<p>This should work perfectly, but I'd recommend solving this problem in another way: we can have the <code>SourceFileType</code> entity with the <code>name</code> and <code>code</code> attributes and then order <code>ssf</code> records by the <code>ssf.type.code</code> value:</p>
<pre class="lang-py prettyprint-override"><code>class SongSourceFile(db.Entity):
name = Required(str)
type = Required(lambda: SourceFileType)
duration = Required(timedelta)
uuid = Required(uuid.UUID, unique=True, default=uuid.uuid4)
class SourceFileType(db.Entity):
name = Required(str)
code = Required(int)
files = Set(lambda: SongSourceFile)
</code></pre>
<p>Then it becomes possible writing the query in the following way:</p>
<pre class="lang-py prettyprint-override"><code>song_source_files = self.song_source_files.select().order_by(
lambda ssf: (ssf.type.code, ssf.duration, ssf.uuid)
)
</code></pre>
<p>I think this approach is more universal, because now you can add other useful attributes to <code>SourceFileType</code> besides <code>name</code> and <code>code</code> and use them in queries too.</p>
| 2
|
2016-09-07T13:13:29Z
|
[
"python",
"sorting",
"lambda",
"ponyorm"
] |
Jenkins git triggered build not blocking
| 39,355,230
|
<p>I am running a build on commit to <code>origin/master</code> on my jenkins server that is deploying resources to Amazon AWS. I am using the Execute Shell section to run a python script that handles all unit testing/linting/validation/deployment and everything blocks fine until it gets to the deployment (<code>deploy.deploy()</code>) where it returns a success right after kickoff, but doesn't complete deploying. How can I make this block?</p>
<p>For reference here is my config:</p>
<p><strong>Execute Shell (Jenkins)</strong>:</p>
<pre><code>export DEPLOY_REGION=us-west-2
. build-tools/get_aws_credentials.sh
python build-tools/kickoff.py
</code></pre>
<p><strong>kickoff.py</strong></p>
<pre><code>if __name__ == "__main__":
build_tools_dir="{}".format("/".join(os.path.abspath(__file__).split("/")[0:-1]))
sys.path.append(build_tools_dir)
base_dir = "/".join(build_tools_dir.split("/")[0:-1])
test_begin = __import__("test_begin")
test_all_templates = __import__("test_all_templates")
deploy = __import__("deploy")
git_plugin = __import__("git_plugin")
retval = test_begin.entrypoint("{}/platform/backend".format(base_dir))
if (retval == "SUCCESS"):
retval = test_all_templates.entrypoint("{}/platform/backend".format(base_dir))
if (retval == "SUCCESS"):
deploy.deploy()
</code></pre>
<p><strong>deploy.py</strong></p>
<pre><code>def deploy():
print(". {}/platform/backend/nu.sh --name jenkinsdeploy --stage dev --keyname greig --debug".format("/".join(os.path.abspath(__file__).split("/")[0:-2])))
returnedcode = subprocess.call("sh {}/platform/backend/nu.sh --name jenkinsdeploy --stage dev --keyname colin_greig --debug".format("/".join(os.path.abspath(__file__).split("/")[0:-2])), shell=True)
if returnedcode == 0:
return "DEPLOY SUCCESS"
return "DEPLOY FAILURE"
</code></pre>
| 2
|
2016-09-06T18:05:08Z
| 39,357,587
|
<p>You can use API calls to AWS to retrieve the status and wait until it becomes 'some status'.</p>
<p>The example below is pseudo-code to illustrate the idea:</p>
<pre><code>import time
import boto3
ec2 = boto3.client('ec2')  # describe_instance_status is a client method, not a resource method
while True:
response = ec2.describe_instance_status(.....)
if response == 'some status':
break
time.sleep(60)
# continue execution
</code></pre>
| 0
|
2016-09-06T20:48:02Z
|
[
"python",
"amazon-web-services",
"jenkins"
] |
Paginating a DynamoDB query in boto3
| 39,355,377
|
<p>How can I loop through all results in a DynamoDB query, if they span more than one page? <a href="http://stackoverflow.com/a/13178258/526495">This answer</a> implies that pagination is built into the query function (at least in v2), but when I try this in v3, my items seem limited:</p>
<pre><code>import boto3
from boto3.dynamodb.conditions import Key, Attr
dynamodb = boto3.resource('dynamodb')
fooTable = dynamodb.Table('Foo')
response = fooTable.query(
KeyConditionExpression=Key('list_id').eq('123')
)
count = 0
for i in response['Items']:
count += 1
print count # Prints a subset of my total items
</code></pre>
| 1
|
2016-09-06T18:14:33Z
| 39,358,086
|
<p><code>ExclusiveStartKey</code> is the parameter you are looking for.
Use the value that was returned as <code>LastEvaluatedKey</code> in the previous operation.</p>
<p>The data type for ExclusiveStartKey must be String, Number or Binary. No set data types are allowed.</p>
<p><a href="http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html#DynamoDB.Client.query" rel="nofollow">http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html#DynamoDB.Client.query</a></p>
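<p>A minimal sketch of that loop with the resource API from the question (same table and key condition assumed):</p>
<pre><code>import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
fooTable = dynamodb.Table('Foo')

items = []
kwargs = {'KeyConditionExpression': Key('list_id').eq('123')}
while True:
    response = fooTable.query(**kwargs)
    items.extend(response['Items'])
    # LastEvaluatedKey disappears from the response on the final page
    if 'LastEvaluatedKey' not in response:
        break
    kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']

print len(items)  # now counts items from every page
</code></pre>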
| 0
|
2016-09-06T21:25:12Z
|
[
"python",
"pagination",
"aws-lambda",
"boto3"
] |
Searching within a datetime object python
| 39,355,483
|
<p>I am trying to locate where the years are within specified bounds for a datetime object. I have tried doing a for loop a few different ways but I cannot seem to get it to work. I know I am able to search for months and years when I convert a datetime object to a pandas array, but unfortunately the software I am using does not have the pandas library and I am unable to download it to my school's server. </p>
<p>Below is how I read in my time data and it works beautifully (it eliminated a lot of time it would have taken to convert my time a different way)</p>
<pre><code>data = netCDF4.Dataset('filename.nc', mode='r')
raw_time = data.variables['time']
time_converted = netCDF4.num2date(raw_time[:], raw_time.units)
</code></pre>
<p>The time_converted variable is a datetime object that appears as follows:</p>
<pre><code> ....
datetime.datetime(2006, 1, 1, 0, 0),
datetime.datetime(2006, 2, 1, 0, 0),
datetime.datetime(2006, 3, 1, 0, 0),
datetime.datetime(2006, 4, 1, 0, 0),
datetime.datetime(2006, 5, 1, 0, 0),
datetime.datetime(2006, 6, 1, 0, 0),
datetime.datetime(2006, 7, 1, 0, 0),
datetime.datetime(2006, 8, 1, 0, 0),
datetime.datetime(2006, 9, 1, 0, 0),
.....
</code></pre>
<p>The loop below is my most recent attempt and it returns the following error:</p>
<pre><code>time = []
for i in time_converted:
if i.year>= 2006 and i.year<2016:
time.append(i)
Type Error: 'int' object is not callable
</code></pre>
<p>Within my loop I have also tried using datetime.datetime.year(i) but that returns:</p>
<pre><code>Type Error: 'getset_descriptor' is not callable
</code></pre>
| 1
|
2016-09-06T18:21:33Z
| 39,358,056
|
<p>Turns out I needed to use the following loop instead:</p>
<pre><code>for i in range(len(time_converted)):
if time_converted[i].year >= 2006 and time_converted[i].year < 2016:
time.append(time_converted[i])
</code></pre>
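<p>For what it's worth, the same filter can also be written as a list comprehension (an equivalent sketch of the loop above):</p>
<pre><code>time = [t for t in time_converted if 2006 <= t.year < 2016]
</code></pre>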
| 0
|
2016-09-06T21:23:11Z
|
[
"python",
"datetime",
"search",
"indexing"
] |
Why the elements of numpy array not same as themselves?
| 39,355,556
|
<p>How do I explain the last line of these?</p>
<pre><code>>>> a = 1
>>> a is a
True
>>> a = [1, 2, 3]
>>> a is a
True
>>> a = np.zeros(3)
>>> a
array([ 0., 0., 0.])
>>> a is a
True
>>> a[0] is a[0]
False
</code></pre>
<p>I always thought that everything is at least "is" that thing itself!</p>
| 3
|
2016-09-06T18:25:37Z
| 39,355,706
|
<p>NumPy doesn't store array elements as Python objects. If you try to access an individual element, NumPy has to create a new wrapper object to represent the element, and it has to do this <em>every time</em> you access the element. The wrapper objects from two accesses to <code>a[0]</code> are different objects, so <code>a[0] is a[0]</code> returns <code>False</code>.</p>
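<p>You can see the fresh wrapper objects directly by inspecting them (a quick illustration, assuming <code>a = np.zeros(3)</code> as in the question):</p>
<pre><code>>>> x = a[0]
>>> y = a[0]
>>> x is y        # two distinct wrapper objects...
False
>>> x == y        # ...that nevertheless compare equal by value
True
>>> type(a[0])    # the wrapper is a NumPy scalar, not a stored Python object
<class 'numpy.float64'>
</code></pre>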
| 7
|
2016-09-06T18:35:58Z
|
[
"python",
"python-3.x",
"numpy"
] |
Speeding up Dijkstra's algorithm to solve a 3D Maze
| 39,355,587
|
<p>I'm trying to write a Python script that can solve 3D mazes, and I'm doing it using Dijkstra's algorithm with a priority queue (from the heapq module). Here is my main function code:</p>
<pre><code>from heapq import *
def dijkstra(start,end,vertices,obstacles):
covered=[]
    s=vertices.index(start)
currentVertex=s
liveDistances={}
for i in range(len(vertices)):
liveDistances[i]=inf
liveDistances[s]=0
next=[[liveDistances[s],s]]
while next:
np,currentVertex=heappop(next)
covered.append(currentVertex)
for u in sons(vertices[currentVertex]):
v=vertices.index(u)
if v in covered:continue
if 1+liveDistances[currentVertex]<liveDistances[v]:
liveDistances[v]=1+liveDistances[currentVertex]
heappush(next,[liveDistances[v],v])
    if liveDistances[vertices.index(end)]!=inf:
        return liveDistances[vertices.index(end)]
else:
return "No path!"
</code></pre>
<p>So basically it's just Dijkstra's applied to a 3D graph.</p>
<p>The program works well, but I'm wondering if it's normal that it solves a 100x100 2D maze in 10 seconds or a 30x30x30 maze in 2 minutes.
Am I implementing something wrong here? Or is that just the expected execution time? Can I improve it?</p>
<p>The reason I'm seeking an improvement is that I'm asked to solve the problem (finding the shortest path in a 3D maze up to 40x40x40) in less than 5 seconds (the time limit).</p>
| 1
|
2016-09-06T18:27:22Z
| 39,355,685
|
<p>Do you know about A* search ("A-star")? It's a modification of Dijkstra's algorithm that can help a great deal when you know something about the geometry of the situation. Unmodified Dijkstra's assumes that any edge could be the start of an astonishingly short path to the goal, but often the geometry of the situation doesn't allow that, or at least makes it unlikely.</p>
<p>The basic idea is that if you are trying to find a shortest path from Denver to Pittsburgh, an edge leading to Salt Lake is not likely to be helpful. So you bias the weights on the heap by adding to them a lower bound on the distance from that node to the goal -- in this 2D case, that would be the straight-line or great-circle distance, since no actual road can be shorter than that. That heuristic tends to push bad choices down further in the heap, so they usually they never get explored.</p>
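<p>To make the idea concrete, the only change to the question's loop would be the priority pushed onto the heap (a sketch; <code>heuristic</code> is an assumed helper returning an admissible lower bound on the remaining distance, e.g. the Manhattan distance in a grid maze):</p>
<pre><code># Dijkstra pushes the tentative distance alone:
heappush(next, [liveDistances[v], v])

# A* biases it with a lower bound on the distance to the goal:
heappush(next, [liveDistances[v] + heuristic(vertices[v], end), v])
</code></pre>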
<p>I can't prove, though, that your way of constructing 3D mazes makes A* search applicable.</p>
| 0
|
2016-09-06T18:34:18Z
|
[
"python",
"algorithm",
"3d",
"dijkstra",
"maze"
] |
Speeding up Dijkstra's algorithm to solve a 3D Maze
| 39,355,587
|
<p>I'm trying to write a Python script that can solve 3D mazes, and I'm doing it using Dijkstra's algorithm with a priority queue (from the heapq module). Here is my main function code:</p>
<pre><code>from heapq import *
def dijkstra(start,end,vertices,obstacles):
covered=[]
    s=vertices.index(start)
currentVertex=s
liveDistances={}
for i in range(len(vertices)):
liveDistances[i]=inf
liveDistances[s]=0
next=[[liveDistances[s],s]]
while next:
np,currentVertex=heappop(next)
covered.append(currentVertex)
for u in sons(vertices[currentVertex]):
v=vertices.index(u)
if v in covered:continue
if 1+liveDistances[currentVertex]<liveDistances[v]:
liveDistances[v]=1+liveDistances[currentVertex]
heappush(next,[liveDistances[v],v])
    if liveDistances[vertices.index(end)]!=inf:
        return liveDistances[vertices.index(end)]
else:
return "No path!"
</code></pre>
<p>So basically it's just Dijkstra's applied to a 3D graph.</p>
<p>The program works well, but I'm wondering if it's normal that it solves a 100x100 2D maze in 10 seconds or a 30x30x30 maze in 2 minutes.
Am I implementing something wrong here? Or is that just the expected execution time? Can I improve it?</p>
<p>The reason I'm seeking an improvement is that I'm asked to solve the problem (finding the shortest path in a 3D maze up to 40x40x40) in less than 5 seconds (the time limit).</p>
| 1
|
2016-09-06T18:27:22Z
| 39,356,377
|
<p>I suspect that a lot of time will be spent in these two lines:</p>
<pre><code>v=vertices.index(u)
if v in covered:continue
</code></pre>
<p>both of these lines are O(n) operations where n is the number of vertices in your graph.</p>
<p>I suggest you replace the first with a dictionary (that maps from your vertex names to vertex indices), and the second by changing <code>covered</code> from a list to a set. </p>
<p>This should make both operations O(1) and could give you several orders of magnitude speed improvement.</p>
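<p>Applied to the question's code, the changes look roughly like this (a sketch assuming the vertex objects are hashable):</p>
<pre><code>index_of = {vertex: i for i, vertex in enumerate(vertices)}  # O(1) name -> index
covered = set()                                              # O(1) membership tests

while next:
    np, currentVertex = heappop(next)
    covered.add(currentVertex)
    for u in sons(vertices[currentVertex]):
        v = index_of[u]            # was vertices.index(u), an O(n) scan
        if v in covered: continue  # was a list scan, also O(n)
        ...                        # rest of the relaxation step unchanged
</code></pre>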
| 1
|
2016-09-06T19:21:16Z
|
[
"python",
"algorithm",
"3d",
"dijkstra",
"maze"
] |
Speeding up Dijkstra's algorithm to solve a 3D Maze
| 39,355,587
|
<p>I'm trying to write a Python script that can solve 3D mazes, and I'm doing it using Dijkstra's algorithm with a priority queue (from the heapq module). Here is my main function code:</p>
<pre><code>from heapq import *
def dijkstra(start,end,vertices,obstacles):
covered=[]
    s=vertices.index(start)
currentVertex=s
liveDistances={}
for i in range(len(vertices)):
liveDistances[i]=inf
liveDistances[s]=0
next=[[liveDistances[s],s]]
while next:
np,currentVertex=heappop(next)
covered.append(currentVertex)
for u in sons(vertices[currentVertex]):
v=vertices.index(u)
if v in covered:continue
if 1+liveDistances[currentVertex]<liveDistances[v]:
liveDistances[v]=1+liveDistances[currentVertex]
heappush(next,[liveDistances[v],v])
    if liveDistances[vertices.index(end)]!=inf:
        return liveDistances[vertices.index(end)]
else:
return "No path!"
</code></pre>
<p>So basically it's just Dijkstra's applied to a 3D graph.</p>
<p>The program works well, but I'm wondering if it's normal that it solves a 100x100 2D maze in 10 seconds or a 30x30x30 maze in 2 minutes.
Am I implementing something wrong here? Or is that just the expected execution time? Can I improve it?</p>
<p>The reason I'm seeking an improvement is that I'm asked to solve the problem (finding the shortest path in a 3D maze up to 40x40x40) in less than 5 seconds (the time limit).</p>
| 1
|
2016-09-06T18:27:22Z
| 39,356,541
|
<p>My initial approach would be to use some basic backtracking:</p>
<pre class="lang-java prettyprint-override"><code>boolean[][][] visited = new boolean[h][w][d];
boolean foundPath = false;
public void path(int y, int x,int z) {
visited[y][x][z] = true;
    if (x == targetx && y == targety && z == targetz) {
foundPath = true;
return;
}
if (!visited[y - 1][x][z] && !foundPath) //if up
path(y - 1, x, z);
if (!visited[y + 1][x][z] && !foundPath) //if down
path(y + 1, x, z);
if (!visited[y][x - 1][z] && !foundPath) //if left
path(y, x - 1, z);
if (!visited[y][x + 1][z] && !foundPath) //if right
path(y, x + 1, z);
if (!visited[y][x][z+1] && !foundPath) //if forward
path(y, x, z + 1);
    if (!visited[y][x][z-1] && !foundPath) //if backward
        path(y, x, z - 1);
if (foundPath) return;
visited[y][x][z] = false;
}
</code></pre>
| 0
|
2016-09-06T19:33:27Z
|
[
"python",
"algorithm",
"3d",
"dijkstra",
"maze"
] |
Disable python in-built function
| 39,355,608
|
<p>How could I disable a python in-built function?</p>
<p>For example, I note that it is possible to reassign or overwrite <code>len()</code> (like <code>len = None</code>), but it is not possible to reassign <code>list.__len__()</code>, which raises:</p>
<blockquote>
<p>TypeError: can't set attributes of built-in/extension type 'list'</p>
</blockquote>
<p>However, even if reassignment were possible it seems <a href="http://stackoverflow.com/questions/20885760/how-to-get-back-an-overridden-python-built-in-function">easy to override</a>. In this example, <code>del len</code> or <code>from builtins import len</code> would restore the original functionality of <code>len()</code>.</p>
<p>The reason I ask is that on <a href="http://codewars.com/" rel="nofollow">Codewars</a> sometimes people want to set a coding challenge for a user to complete while forbidding the use of certain in-built functions. A trivial example could be determining the length of a list without using the length function.</p>
<p>On reflection, thanks to the comments I have already received and <a href="http://stackoverflow.com/questions/3068139/how-can-i-sandbox-python-in-pure-python">this related question</a>, I now realize that a fool-proof solution is very hard, but I'd still be interested in a pragmatic solution that could be useful in this context.</p>
<p>Thank you</p>
| 2
|
2016-09-06T18:28:54Z
| 39,356,040
|
<p><strong>Preface:</strong>
given the various comment conversations... It should be noted that none of these things are sufficient protection for running un-trusted code, and to be absolutely safe, you need a different interpreter with sandboxing specifically built in. <a href="http://stackoverflow.com/a/3068475/3220135">Here</a> is a previous answer discussing that</p>
<p>Given the example of writing a Codewars question, I would sub-class list, string, etc. with custom classes that disable the <code>__len__</code> function, then override the respective constructors with your own custom class. Make sure to provide the test case inputs as instances of the new class, as you cannot override the literal string/list constructors, since they are linked to the interpreter directly.</p>
<p>for example:</p>
<pre><code>oldList = list
class myList(list):
def __len__(self):
raise AttributeError("'len' has been disabled for list objects")
list = myList
</code></pre>
| 1
|
2016-09-06T18:57:56Z
|
[
"python",
"python-3.x"
] |
Python conditionally remove whitespaces
| 39,355,693
|
<p>I want to remove whitespaces in a string that are adjacent to <code>/</code> while keeping the rest of the whitespaces.</p>
<p>For instance, say I have a string <code>98 / 100 xx 3/ 4 and 5/6</code>. The desired result for my example would be <code>98/100 xx 3/4 and 5/6</code>. (So that I could use <code>.split(' ')</code> as a next step and extract those meaningful numbers, i.e. <code>98/100</code>, <code>3/4</code>, and <code>5/6</code>, as my final results.) Note: I only want to look for <code>/</code>; no need to worry about other operators.</p>
<p>I know I should probably use <code>re</code> for this, but I can't figure it out. Any help is appreciated! </p>
<p>---------------------My Approach Below------------------</p>
<p>I used <code>[index_num.end(0) for index_num in re.finditer('/', test)]</code>, where <code>test = '98 / 100 xx 3/ 4 and 5/6'</code> to find the index of the <code>/</code>, then check if the previous or the next is a whitespace. That's not ideal and I believe there are easier ways.</p>
| -1
|
2016-09-06T18:34:58Z
| 39,355,789
|
<p>You can remove whitespace before and after slashes with <code>str.replace</code> instead of regex:</p>
<pre><code>>>> '98 / 100 xx 3/ 4 and 5/6'.replace(' /','/').replace('/ ','/')
'98/100 xx 3/4 and 5/6'
</code></pre>
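<p>If you would rather use <code>re</code> as you planned, <code>re.sub</code> with optional whitespace around the slash does the same thing in one call (note <code>\s*</code> also matches tabs and newlines; use <code>' */ *'</code> if you only mean spaces):</p>
<pre><code>>>> import re
>>> re.sub(r'\s*/\s*', '/', '98 / 100 xx 3/ 4 and 5/6')
'98/100 xx 3/4 and 5/6'
</code></pre>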
| 4
|
2016-09-06T18:40:33Z
|
[
"python",
"regex"
] |
Python conditionally remove whitespaces
| 39,355,693
|
<p>I want to remove whitespaces in a string that are adjacent to <code>/</code> while keeping the rest of the whitespaces.</p>
<p>For instance, say I have a string <code>98 / 100 xx 3/ 4 and 5/6</code>. The desired result for my example would be <code>98/100 xx 3/4 and 5/6</code>. (So that I could use <code>.split(' ')</code> as a next step and extract those meaningful numbers, i.e. <code>98/100</code>, <code>3/4</code>, and <code>5/6</code>, as my final results.) Note: I only want to look for <code>/</code>; no need to worry about other operators.</p>
<p>I know I should probably use <code>re</code> for this, but I can't figure it out. Any help is appreciated! </p>
<p>---------------------My Approach Below------------------</p>
<p>I used <code>[index_num.end(0) for index_num in re.finditer('/', test)]</code>, where <code>test = '98 / 100 xx 3/ 4 and 5/6'</code> to find the index of the <code>/</code>, then check if the previous or the next is a whitespace. That's not ideal and I believe there are easier ways.</p>
| -1
|
2016-09-06T18:34:58Z
| 39,355,798
|
<p>A hint: A regular expression that will find any amount of whitespace (not just space characters, but also tabs, etc.) around a slash is: <code>r'\s*/\s*'</code>. The regex there is between the apostrophes. The period is just the end of my sentence, and the r tells Python to treat the string inside apostrophes as a "raw" string.</p>
<p>If you don't want to find arbitrary whitespace characters, but only the space character itself, the regex is <code>r' */ *'</code>. Note the space in front of the asterisk.</p>
<p>The rest I leave to you, since this sounds like a homework problem.</p>
| 3
|
2016-09-06T18:41:23Z
|
[
"python",
"regex"
] |
Having issue summarising python dataframe to one line per record
| 39,355,717
|
<p>I've got a dataframe in the form:</p>
<pre><code>df = pd.DataFrame({'id':['a', 'a', 'a', 'b','b'],'var':[1,2,3,5,9]})
</code></pre>
<p>and I'm trying to reshape it so that there is one line per 'id' and the values 'var' are displayed across in one line, so 'a' would have 1,2,3 ... 'b' would have '5,9'</p>
<p>I've tried with:</p>
<pre><code>test = pd.crosstab(df.id, df.var) # but it does not work?
</code></pre>
<p>If someone could help me it would be much appreciated</p>
<p>EDIT, I enclose the desired results as a picture here<a href="http://i.stack.imgur.com/NgHBm.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/NgHBm.jpg" alt="enter image description here"></a></p>
| 3
|
2016-09-06T18:36:17Z
| 39,355,758
|
<p><strong>UPDATE:</strong></p>
<pre><code>In [32]: df.groupby('id')['var'].apply(lambda x: x.astype(str).str.cat(sep=',')).reset_index()
Out[32]:
id var
0 a 1,2,3
1 b 5,9
</code></pre>
<p>or having <code>var</code> as a list:</p>
<pre><code>In [29]: df.groupby('id')['var'].apply(list).reset_index()
Out[29]:
id var
0 a [1, 2, 3]
1 b [5, 9]
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>IIUC you can use <code>pivot_table()</code> which is used by <code>crosstab()</code> method internally?</p>
<pre><code>In [26]: df.pivot_table(index='id', columns='var', aggfunc='size', fill_value=0)
Out[26]:
var 1 2 3 5 9
id
a 1 1 1 0 0
b 0 0 0 1 1
</code></pre>
| 2
|
2016-09-06T18:39:09Z
|
[
"python",
"pandas",
"dataframe",
"pivot-table"
] |
Having issue summarising python dataframe to one line per record
| 39,355,717
|
<p>I've got a dataframe in the form:</p>
<pre><code>df = pd.DataFrame({'id':['a', 'a', 'a', 'b','b'],'var':[1,2,3,5,9]})
</code></pre>
<p>and I'm trying to reshape it so that there is one line per 'id' and the values 'var' are displayed across in one line, so 'a' would have 1,2,3 ... 'b' would have '5,9'</p>
<p>I've tried with:</p>
<pre><code>test = pd.crosstab(df.id, df.var) # but it does not work?
</code></pre>
<p>If someone could help me it would be much appreciated</p>
<p>EDIT, I enclose the desired results as a picture here<a href="http://i.stack.imgur.com/NgHBm.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/NgHBm.jpg" alt="enter image description here"></a></p>
| 3
|
2016-09-06T18:36:17Z
| 39,355,875
|
<p>You must supply the correct arguments, like below. (Note that <code>df.var</code> does not give you the column here: <code>var</code> is also the name of the DataFrame variance method, so attribute access picks up the method instead of the column, which is why your original call failed.)</p>
<pre><code>pd.crosstab(index=df['id'], columns=df['var'])
var 1 2 3 5 9
id
a 1 1 1 0 0
b 0 0 0 1 1
</code></pre>
| 3
|
2016-09-06T18:45:42Z
|
[
"python",
"pandas",
"dataframe",
"pivot-table"
] |
How to get current login time in Ubuntu/Linux using python?
| 39,355,742
|
<p>By current login time, I mean the time at which the user logged into the system.</p>
<p>Edit : I only need to get the current login time in hh:mm format, not the username and all</p>
| -1
|
2016-09-06T18:37:44Z
| 39,356,001
|
<pre><code>import subprocess
print subprocess.check_output("who").split()[3]
</code></pre>
<p>Output:</p>
<pre><code>22:23
</code></pre>
<p>Which is in <code>hh:mm</code> format.</p>
| 0
|
2016-09-06T18:55:21Z
|
[
"python",
"ubuntu",
"login"
] |
Select columns from a DataFrame based on values in a row in pandas
| 39,355,767
|
<p>Say I have the same dataframe from <a href="http://stackoverflow.com/questions/25479607/pandas-min-of-selected-row-and-columns">this question</a>:</p>
<pre><code> A0 A1 A2 B0 B1 B2 C0 C1
0 0.84 0.47 0.55 0.46 0.76 0.42 0.24 0.75
1 0.43 0.47 0.93 0.39 0.58 0.83 0.35 0.39
2 0.12 0.17 0.35 0.00 0.19 0.22 0.93 0.73
3 0.95 0.56 0.84 0.74 0.52 0.51 0.28 0.03
4 0.73 0.19 0.88 0.51 0.73 0.69 0.74 0.61
5 0.18 0.46 0.62 0.84 0.68 0.17 0.02 0.53
6 0.38 0.55 0.80 0.87 0.01 0.88 0.56 0.72
</code></pre>
<p>But instead of wanting to return the minimum value of each row (of only B0, B1, B2)</p>
<pre><code> A0 A1 A2 B0 B1 B2 C0 C1 Minimum
0 0.84 0.47 0.55 0.46 0.76 0.42 0.24 0.75 0.42
1 0.43 0.47 0.93 0.39 0.58 0.83 0.35 0.39 0.39
2 0.12 0.17 0.35 0.00 0.19 0.22 0.93 0.73 0.00
3 0.95 0.56 0.84 0.74 0.52 0.51 0.28 0.03 0.51
4 0.73 0.19 0.88 0.51 0.73 0.69 0.74 0.61 0.51
5 0.18 0.46 0.62 0.84 0.68 0.17 0.02 0.53 0.17
6 0.38 0.55 0.80 0.87 0.01 0.88 0.56 0.72 0.01
</code></pre>
<p>I want to return the <em>column name</em> which contains the minimum value of each row (of only B0, B1, B2):</p>
<pre><code> A0 A1 A2 B0 B1 B2 C0 C1 col_of_min
0 0.84 0.47 0.55 0.46 0.76 0.42 0.24 0.75 B2
1 0.43 0.47 0.93 0.39 0.58 0.83 0.35 0.39 B0
2 0.12 0.17 0.35 0.00 0.19 0.22 0.93 0.73 B0
3 0.95 0.56 0.84 0.74 0.52 0.51 0.28 0.03 B2
4 0.73 0.19 0.88 0.51 0.73 0.69 0.74 0.61 B0
5 0.18 0.46 0.62 0.84 0.68 0.17 0.02 0.53 B2
6 0.38 0.55 0.80 0.87 0.01 0.88 0.56 0.72 B1
</code></pre>
<p>What's the best way to do this?</p>
| 2
|
2016-09-06T18:39:35Z
| 39,355,809
|
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html">filter()</a> in conjunction with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmin.html">idxmin()</a> method:</p>
<pre><code>In [40]: x
Out[40]:
A0 A1 A2 B0 B1 B2 C0 C1
0 0.84 0.47 0.55 0.46 0.76 0.42 0.24 0.75
1 0.43 0.47 0.93 0.39 0.58 0.83 0.35 0.39
2 0.12 0.17 0.35 0.00 0.19 0.22 0.93 0.73
3 0.95 0.56 0.84 0.74 0.52 0.51 0.28 0.03
4 0.73 0.19 0.88 0.51 0.73 0.69 0.74 0.61
5 0.18 0.46 0.62 0.84 0.68 0.17 0.02 0.53
6 0.38 0.55 0.80 0.87 0.01 0.88 0.56 0.72
In [41]: x['col_of_min'] = x.filter(like='B').idxmin(axis=1)
In [42]: x
Out[42]:
A0 A1 A2 B0 B1 B2 C0 C1 col_of_min
0 0.84 0.47 0.55 0.46 0.76 0.42 0.24 0.75 B2
1 0.43 0.47 0.93 0.39 0.58 0.83 0.35 0.39 B0
2 0.12 0.17 0.35 0.00 0.19 0.22 0.93 0.73 B0
3 0.95 0.56 0.84 0.74 0.52 0.51 0.28 0.03 B2
4 0.73 0.19 0.88 0.51 0.73 0.69 0.74 0.61 B0
5 0.18 0.46 0.62 0.84 0.68 0.17 0.02 0.53 B2
6 0.38 0.55 0.80 0.87 0.01 0.88 0.56 0.72 B1
</code></pre>
| 5
|
2016-09-06T18:41:59Z
|
[
"python",
"pandas",
"dataframe",
"calculated-columns"
] |
Finding "taxiNumber" program, want to exchange nested for loop for optimization reasons
| 39,355,781
|
<p>I just started programming 1 week ago, so please forgive the chaos you are about to see.
I'm trying to find the first x taxi numbers, but with my "nested for loop" the program takes ages to go through all possibilities.
Taxi number: if there is a number a^3+b^3 that is equal to c^3+d^3, that sum is a taxi number. Example:
12^3+1^3 == 10^3+9^3 == 1729</p>
<p>For me it's a success if I can find around 20 taxi numbers.
Thanks beforehand for any tips or tricks! </p>
<p>Here is my code:</p>
<pre><code>import math
def main():
numbersOfResultsToFind = getNumberOfTaxisToFind()
foundResults = 0
numberToCheck = 1
while(foundResults < numbersOfResultsToFind):
result = getTaxi(numberToCheck)
        if len(result) > 1: #if more than one (a, b) pair
foundResults = foundResults + 1
print(numberToCheck, result)
numberToCheck = numberToCheck + 1
def getNumberOfTaxisToFind():
return int(input("How many taxinumbers do you want to find? "))
def getThirdSquareFloored(value):
value = value**(1/3)
value = math.floor(value) #floor value
return value
def getTaxi(numberToCheck):
result = []
upperLimit = getThirdSquareFloored(numberToCheck)
for a in range(1, upperLimit+1):
for b in range(1, upperLimit+1):
aCubed = a**3
bCubed = b**3
sumCub = aCubed + bCubed
if(sumCub == numberToCheck and a < b):
result.append((a, b))
return result
main()
</code></pre>
| 0
|
2016-09-06T18:40:19Z
| 39,356,772
|
<p>The problem is that the <a href="https://en.wikipedia.org/wiki/Taxicab_number" rel="nofollow"><strong>Taxicab numbers</strong></a> are VERY far apart. You can do some math tricks to solve it. For example, you can be checking the <a href="https://en.wikipedia.org/wiki/Euler%27s_sum_of_powers_conjecture" rel="nofollow">Euler's sum of powers</a>. The problem with that approach is that you will be generating some taxi numbers, but they might not be in order.</p>
<p>As it goes to your code, I have some notes:</p>
<ol>
<li>If you want to take the cubic root, do not do <code>value**(1/3)</code>! Remember that <code>1/3 == 0</code> in Python 2 (in Python 3 it is true division, so that part would be fine there). Use <code>1./3</code> or use <code>numpy</code></li>
<li>Don't use the <code>**(1./3)</code> at all - any power functions are expensive; try replacing them with something precomputed (see the sketch after this list)</li>
<li><strong>Important:</strong> <a href="http://stackoverflow.com/questions/11410798/finding-taxicab-numbers">Read StackOverflow if there is a solution already!</a></li>
</ol>
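<p>As a concrete illustration of the precomputed-values idea from point 2 (a sketch, not taken from the linked answers): build every sum of two cubes once, then keep the sums that can be formed in more than one way.</p>
<pre><code>from collections import defaultdict

def taxi_numbers(limit):
    # map each sum a**3 + b**3 (with a <= b) to the pairs that produce it
    sums = defaultdict(list)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            sums[a**3 + b**3].append((a, b))
    # keep only sums reachable in more than one way, smallest first
    return sorted((n, pairs) for n, pairs in sums.items() if len(pairs) > 1)

print(taxi_numbers(100)[0])  # (1729, [(1, 12), (9, 10)])
</code></pre>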
| 0
|
2016-09-06T19:50:42Z
|
[
"python",
"for-loop",
"nested"
] |
Finding "taxiNumber" program, want to exchange nested for loop for optimization reasons
| 39,355,781
|
<p>I just started programming 1 week ago, so please forgive the chaos you are about to see.
I'm trying to find the first x taxi numbers, but with my "nested for loop" the program takes ages to go through all possibilities.
Taxi number: if there is a number a^3+b^3 that is equal to c^3+d^3, that sum is a taxi number. Example:
12^3+1^3 == 10^3+9^3 == 1729</p>
<p>For me it's a success if I can find around 20 taxi numbers.
Thanks beforehand for any tips or tricks! </p>
<p>Here is my code:</p>
<pre><code>import math
def main():
numbersOfResultsToFind = getNumberOfTaxisToFind()
foundResults = 0
numberToCheck = 1
while(foundResults < numbersOfResultsToFind):
result = getTaxi(numberToCheck)
if len(result) > 1: #if more then one a+b
foundResults = foundResults + 1
print(numberToCheck, result)
numberToCheck = numberToCheck + 1
def getNumberOfTaxisToFind():
return int(input("How many taxinumbers do you want to find? "))
def getThirdSquareFloored(value):
value = value**(1/3)
value = math.floor(value) #floor value
return value
def getTaxi(numberToCheck):
result = []
upperLimit = getThirdSquareFloored(numberToCheck)
for a in range(1, upperLimit+1):
for b in range(1, upperLimit+1):
aCubed = a**3
bCubed = b**3
sumCub = aCubed + bCubed
if(sumCub == numberToCheck and a < b):
result.append((a, b))
return result
main()
</code></pre>
| 0
|
2016-09-06T18:40:19Z
| 39,377,634
|
<pre><code>import math
def main():
numbersOfResultsToFind = getNumberOfTaxisToFind()
foundResults = 0
numberToCheck = 1
while(foundResults < numbersOfResultsToFind):
result = getTaxi(numberToCheck)
if len(result) > 1:
foundResults = foundResults + 1
print(numberToCheck, result)
numberToCheck = numberToCheck + 1
def getNumberOfTaxisToFind():
return int(input("How many taxinumbers do you want to find? "))
def getThirdSquareFloored(value):
value = value**(1./3)
value = math.floor(value)
return value
def getTaxi(numberToCheck):
result = []
upperLimit = getThirdSquareFloored(numberToCheck)
for a in range(1, upperLimit+1):
b = round((numberToCheck-a**3)**(1./3))
if(a**3+b**3 == numberToCheck and a < b):
result.append((a, b))
if len(result) == 2:
break
return result
main()
</code></pre>
<p>Thanks so much for all the help! This is my updated code; it runs a lot quicker. I compute the b value directly from a instead of using a nested for loop. I also added the <code>if len(result) == 2: break</code> at the end of my loop, since there is no reason to keep looking once both pairs have been found.</p>
| 0
|
2016-09-07T19:24:09Z
|
[
"python",
"for-loop",
"nested"
] |
Django model where field is based on another field unless specified otherwise
| 39,355,835
|
<p>Say I have a django model called <strong>Car</strong> and another called <strong>UserCar</strong> which has a related car object.</p>
<pre><code>class Car(models.Model):
name = models.CharField(max_length=100)
mpg = models.DecimalField(max_digits=6, decimal_places=2, null=True)
class UserCar(models.Model):
car = models.ForeignKey('Car')
mpg = models.DecimalField(max_digits=6, decimal_places=2, null=True)
</code></pre>
<p>I would like to override the save function on <strong>UserCar</strong> such that <em>if no value for mpg is specified</em>, the model instance is pre-populated with the value of mpg on the related Car object.</p>
| 2
|
2016-09-06T18:43:19Z
| 39,355,951
|
<p>Try using the following:</p>
<pre><code>class UserCar(models.Model):
... # your fields here
# Override the save function here
def save(self, *args, **kwargs):
if self.mpg is None:
self.mpg = self.car.mpg
super(UserCar, self).save(*args, **kwargs)
</code></pre>
| 2
|
2016-09-06T18:51:06Z
|
[
"python",
"django",
"django-models"
] |
function decorators to specify the first two lines of the def?
| 39,355,848
|
<p>I'm on python 3.5:</p>
<p>I have a repeating pattern in some of my python functions. For a large collection of classes the first two lines are:</p>
<pre><code>obj_a = <..... obtain something I need.....>
obj_b = <..... obtain another thing I need....>
</code></pre>
<p>I'm simplifying it here, but the process of obtaining <code>obj_a</code> and <code>obj_b</code> isn't a one liner... I would like to avoid repeating this code anywhere in a more elegant way than a util function to obtain <code>obj_a</code> and <code>obj_b</code> (e.g. <code>obj_a = getObjectA()</code>...) </p>
<p>Is there any way to take those lines and put them as part of a decorator of a function where I have something like:</p>
<pre><code>@function_where_I_need_my_objects
def foo():
<....do something with obj_a and obj_b already initialized....>
</code></pre>
| 2
|
2016-09-06T18:44:03Z
| 39,355,916
|
<p><code>obj_a</code> and <code>obj_b</code> sound a lot like state.</p>
<pre><code>class Thing:
def __init__(self):
# These could also be class attributes instead
# if they can be initialized when Thing is first
# defined.
self.obj_a = ...
self.obj_b = ...
def foo(self):
        ...  # use self.obj_a and self.obj_b
t = Thing()
t.foo()
</code></pre>
<p>Defining a decorator that returns a closure around <code>obj_a</code> and <code>obj_b</code> is also possible.</p>
<pre><code>def decorator(f):
obj_a = ...
obj_b = ...
    def _():
        ...  # use obj_a and obj_b via the closure
    return _
@decorator
def foo():
...
</code></pre>
<p>However, it is hardly any longer and much more flexible to simply write a function that <em>returns</em> the two objects, and lets the caller use whatever names they like:</p>
<pre><code>def _get_a_b():
a = ...
b = ...
return a, b
def foo():
obj_a, obj_b = _get_a_b()
...
def bar():
a1, a2 = _get_a_b()
...
</code></pre>
| 1
|
2016-09-06T18:48:24Z
|
[
"python",
"python-3.x"
] |
function decorators to specify the first two lines of the def?
| 39,355,848
|
<p>I'm on python 3.5:</p>
<p>I have a repeating pattern in some of my python functions. For a large collection of classes the first two lines are:</p>
<pre><code>obj_a = <..... obtain something I need.....>
obj_b = <..... obtain another thing I need....>
</code></pre>
<p>I'm simplifying it here, but the process of obtaining <code>obj_a</code> and <code>obj_b</code> isn't a one liner... I would like to avoid repeating this code anywhere in a more elegant way than a util function to obtain <code>obj_a</code> and <code>obj_b</code> (e.g. <code>obj_a = getObjectA()</code>...) </p>
<p>Is there any way to take those lines and put them as part of a decorator of a function where I have something like:</p>
<pre><code>@function_where_I_need_my_objects
def foo():
<....do something with obj_a and obj_b already initialized....>
</code></pre>
| 2
|
2016-09-06T18:44:03Z
| 39,355,941
|
<p>You can put those two objects in a base class as:</p>
<pre><code>class TheBaseClass:
obj1 = something
obj2 = something_else
</code></pre>
<p>and inherit this in the functions of classes you need:</p>
<pre><code>class Class1(TheBaseClass):
def func1(self):
a = self.obj1 # make use of it
b = self.obj2 # make use of it
</code></pre>
| -3
|
2016-09-06T18:50:11Z
|
[
"python",
"python-3.x"
] |
function decorators to specify the first two lines of the def?
| 39,355,848
|
<p>I'm on python 3.5:</p>
<p>I have a repeating pattern in some of my python functions. For a large collection of classes the first two lines are:</p>
<pre><code>obj_a = <..... obtain something I need.....>
obj_b = <..... obtain another thing I need....>
</code></pre>
<p>I'm simplifying it here, but the process of obtaining <code>obj_a</code> and <code>obj_b</code> isn't a one-liner. I would like to avoid repeating this code everywhere, in a more elegant way than a utility function for obtaining <code>obj_a</code> and <code>obj_b</code> (e.g. <code>obj_a = getObjectA()</code>...) </p>
<p>Is there any way to take those lines and put them as part of a decorator of a function where I have something like:</p>
<pre><code>@function_where_I_need_my_objects
def foo():
    <....do something with obj_a and obj_b already initialized....>
</code></pre>
| 2
|
2016-09-06T18:44:03Z
| 39,357,241
|
<p>You could do something like the following which creates variables that will effectively become local to the function when it's called. This uses <code>eval()</code> which can be unsafe if used on untrusted input, but that is not the case here where it's being used to execute the compiled byte-code of a decorated function (but <em>caveat emptor</em>).</p>
<pre><code>def local_vars(**kwargs):
    """ Create decorator which will inject specified local variable(s) into
        function before executing it.
    """
    def decorator(fn):
        def decorated():
            return eval(fn.__code__,
                        {k: v() for k, v in kwargs.items()})  # call funcs
        return decorated
    return decorator

def get_object_a(): return 13
def get_object_b(): return 42

# create abbreviation for the long decorator
obj_decorator = local_vars(obj_a=get_object_a, obj_b=get_object_b)

@obj_decorator  # apply decorator
def test():
    print(obj_a)
    print(obj_b)

test()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>13
42
</code></pre>
| -1
|
2016-09-06T20:22:33Z
|
[
"python",
"python-3.x"
] |
function decorators to specify the first two lines of the def?
| 39,355,848
|
<p>I'm on python 3.5:</p>
<p>I have a repeating pattern in some of my python functions. For a large collection of classes the first two lines are:</p>
<pre><code>obj_a = <..... obtain something I need.....>
obj_b = <..... obtain another thing I need....>
</code></pre>
<p>I'm simplifying it here, but the process of obtaining <code>obj_a</code> and <code>obj_b</code> isn't a one-liner. I would like to avoid repeating this code everywhere, in a more elegant way than a utility function for obtaining <code>obj_a</code> and <code>obj_b</code> (e.g. <code>obj_a = getObjectA()</code>...) </p>
<p>Is there any way to take those lines and put them as part of a decorator of a function where I have something like:</p>
<pre><code>@function_where_I_need_my_objects
def foo():
    <....do something with obj_a and obj_b already initialized....>
</code></pre>
| 2
|
2016-09-06T18:44:03Z
| 39,357,412
|
<p>You may consider using <em>callable classes</em> instead of functions; this way you could keep the boilerplate in one place and implement the other stuff as subclasses of that base. </p>
<pre><code>class BaseFoo:
    def __init__(self):
        # self.obj_a = <..... obtain something I need.....>
        # self.obj_b = <..... obtain another thing I need....>
        pass

class MyFoo(BaseFoo):
    def __call__(self, *arg, **kwargs):
        # <....do something with obj_a and obj_b already initialized....>
        pass

foo = MyFoo()
</code></pre>
| 0
|
2016-09-06T20:34:14Z
|
[
"python",
"python-3.x"
] |
Is there a way to change a matching object returned from a regex.search() function?
| 39,355,887
|
<p>I am new to coding and made up a project for myself to start learning, but I couldn't get around this problem. I am trying to make a little tool which converts stuff from the clipboard (for which, for now, I will simply use a string called <code>spam</code>), so that the sentences start with a capital letter, and in which ' i ' is also uppercase, so ' I '. </p>
<p>So what I tried to do was, find a match where there is a ('. ', '? ' or ' i ') and go from there.</p>
<pre><code>spam='this is a string which i want to correct. as you can see.'

def capital(lists): #finds out where to change the text
    dotRegex=re.compile(r'\. ')
    questionRegex=re.compile(r'\? ')
    iRegex=re.compile(r' i ')
    mo1=dotRegex.search(lists)
    mo2=questionRegex.search(lists)
    mo3=iRegex.search(lists)
    if mo1:
        (lists(mo1.start()+2)).upper()
    if mo2:
        (lists(mo1.start()+2)).upper()
    if mo3:
        mo3.upper()

capital(spam)
</code></pre>
<p>This returns the error:</p>
<blockquote>
<p>"(lists(mo1.start()+2)).upper() TypeError: 'str' object is not
callable"</p>
</blockquote>
<p>What I try to do there is find where there is a <code>mo</code>, go 2 indices to the right and change what is there to uppercase. Is there any way to do this? And of course the <code>search()</code> function only returns 1 <code>mo</code>,
<strong>so my question is:</strong> is there a way to work around it when there are multiple matching objects and change them all? I know <code>findall()</code> exists but how can you use that here?</p>
<p>Anyway, I would love some help from anybody, and I am sorry if this code hurts to look at.</p>
| 0
|
2016-09-06T18:46:15Z
| 39,356,587
|
<p>There are several errors (or awkwardnesses) in your code.</p>
<p>Here is a quick code review:</p>
<pre><code>import re

spam = 'this is a string which i want to correct. as you can see.'

def capital(lists):
    # finds out where to change the text.
    dot_regex = re.compile(r'\. ')
    question_regex = re.compile(r'\? ')
    i_regex = re.compile(r' i ')
    mo1 = dot_regex.search(lists)
    mo2 = question_regex.search(lists)
    mo3 = i_regex.search(lists)
    if mo1:
        lists[mo1.start() + 2].upper()
    if mo2:
        lists[mo1.start() + 2].upper()
    if mo3:
        mo3.group().upper()

capital(spam)
</code></pre>
<ol>
<li>According to the <a href="https://www.python.org/dev/peps/pep-0008/#prescriptive-naming-conventions" rel="nofollow">PEP8 conventions</a>, variables should be written in snake case. So I replaced <code>dotRegex</code> by <code>dot_regex</code>.</li>
<li>Since you don't modify them in the function, you can also use module-level variables (constants) for the RegEx: for instance, <code>DOT_REGEX</code>.</li>
<li>Put spaces around binary operators: <code>mo1 = ...</code>.</li>
</ol>
<p>In Python, string indexing/slicing uses the <code>[]</code> operator, so replace <code>lists(mo1.start() + 2)</code> by <code>lists[mo1.start() + 2]</code>. The syntax <code>lists(...)</code> is a function call here.</p>
<p>Remember that in Python, <a href="https://docs.python.org/2/faq/design.html#why-are-python-strings-immutable" rel="nofollow">strings are immutable</a>: you can't modify them, you must create a copy.</p>
<pre><code>foo = "string"
foo[2] = "l" # <- TypeError: 'str' object does not support item assignment
</code></pre>
<p>To answer your question: No, you can't modify a string, but you can use a search/replace with RegEx to do what you want.</p>
<p>Here is a detailed explanation for the <code>dot_regex</code>:</p>
<pre><code>import re

# Search the first letter after a dot (or after the beginning)
dot_regex = re.compile(r"(^|\. )(.)")

def my_upper(mo):
    """ Keep the dot (group #1), turn the letter in uppercase (group #2). """
    return mo.group(1) + mo.group(2).upper()

spam = 'this is a string which i want to correct. as you can see.'
spin = dot_regex.sub(my_upper, spam)
# => This is a string which i want to correct. As you can see.
</code></pre>
<p>You can continue with other RegEx...</p>
<p><em>Note:</em> to match a single "i", you can use <code>r"\bi\b"</code>:</p>
<pre><code># Search a single "i"
i_regex = re.compile(r"\bi\b")
spon = i_regex.sub("I", spin)
print(spon)
# => This is a string which I want to correct. As you can see.
</code></pre>
<p>You are doing a kind of copy-editing, aren't you? ;-)</p>
<p>You can combine the rules for dot and question mark (and exclamation mark too):</p>
<pre><code># Search the first letter after a dot/?/! (or after the beginning)
mark_regex = re.compile(r"(^|[!?.] )(.)")
spam = 'can you see? this is a string which i want to correct. as you can see! yeh!'
spif = mark_regex.sub(my_upper, spam)
# => Can you see? This is a string which i want to correct. As you can see! Yeh!
</code></pre>
<p><strong>TUTORIAL: <a href="https://docs.python.org/3/howto/regex.html" rel="nofollow">Regular Expression HOWTO</a></strong></p>
| 1
|
2016-09-06T19:37:38Z
|
[
"python",
"regex"
] |
How to use end="" snippet to create a bowling scoreboard
| 39,356,123
|
<p>I am trying to create a python program that will read in a Bowling score composed of number of pins knocked out for each toss. I am trying to create an output that looks similar to a bowling scoreboard like this:</p>
<pre><code>  1   2   3   4   5   6   7   8   9   10
+---+---+---+---+---+---+---+---+---+-----+
|8 /|7 2|9 /|X  |- 7|X  |- -|9 /|X  |X 9 /|
| 17| 26| 63| 70| 80| 80| --|100|129|  149|
+---+---+---+---+---+---+---+---+---+-----+
</code></pre>
<p>I have the number of pins knocked down and the scores for each frame in two lists and will iterate through them 10 times to get the ten frames. I have tried different ways, but I am not having much luck. So far I have something like this (assume that <code>frame</code> is accessing the right value in the list).</p>
<pre><code>for frame in range(1, 11):
    if frame <= 9:
        print(" {:d} \n+---\n| \n|{:d}\n+---".format(frame,frame),end="")
    else:
        print(" {:d} \n+-----\n| \n|{:d}\n+-----".format(frame,frame),end="")
</code></pre>
<p>Do you guys have any suggestions? Thank you so much!</p>
| 0
|
2016-09-06T19:03:22Z
| 39,356,451
|
<p>Once you've invoked <code>\n</code>, you can never go back and modify that line again. What you want to do is build 4 strings, and then print all in succession at the end:</p>
<pre><code>def addEndCaps(lines):
    lines[1] += "+"
    lines[2] += "|"
    lines[3] += "|"
    lines[4] += "+"

lines = ["", "", "", "", ""]
for frame in range(1,11):
    lines[0] += "  %d " % frame
    lines[1] += "+---"
    lines[2] += "|   "
    lines[3] += "|   "
    lines[4] += "+---"
addEndCaps(lines)
print("\n".join(lines))
</code></pre>
<p>This produces output like this:</p>
<pre><code>  1   2   3   4   5   6   7   8   9   10
+---+---+---+---+---+---+---+---+---+---+
|   |   |   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |   |   |
+---+---+---+---+---+---+---+---+---+---+
</code></pre>
<p>Obviously you are going to have to do a bit of formatting logic to make sure the hyphens and spaces line up in the case of 2 and 3 digit numbers.</p>
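<p>One hedged sketch of that formatting logic, matching the question's target layout where the 10th frame gets a wider cell (the cell widths are assumptions read off the sample scoreboard):</p>
<pre><code>lines = ["", "", "", "", ""]
for frame in range(1, 11):
    w = 3 if frame <= 9 else 5           # the 10th frame gets a wider cell
    lines[0] += " {:^{w}}".format(frame, w=w)
    lines[1] += "+" + "-" * w
    lines[2] += "|" + " " * w
    lines[3] += "|" + " " * w
    lines[4] += "+" + "-" * w
for i, cap in enumerate(["", "+", "|", "|", "+"]):
    lines[i] += cap
print("\n".join(lines))
</code></pre>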
<p>If you really have your heart set on using <code>end=""</code> (i.e. it's for a school assignment), then you'll have to create 5 <code>for</code> loops: one per line of output.</p>
| 0
|
2016-09-06T19:27:39Z
|
[
"python"
] |
How to remap ids to consecutive numbers quickly
| 39,356,279
|
<p>I have a large csv file with lines that look like</p>
<pre><code>stringa,stringb
stringb,stringc
stringd,stringa
</code></pre>
<p>I need to convert it so the ids are consecutively numbered from 0. In this case the following would work</p>
<pre><code>0,1
1,2
3,0
</code></pre>
<p>My current code looks like:</p>
<pre><code>import csv

names = {}
counter = 0
with open('foo.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        if row[0] in names:
            id1 = names[row[0]]
        else:
            names[row[0]] = counter
            id1 = counter
            counter += 1
        if row[1] in names:
            id2 = names[row[1]]
        else:
            names[row[1]] = counter
            id2 = counter
            counter += 1
        print id1, id2
</code></pre>
<p>Python dicts use a lot of memory sadly and my input is large.</p>
<blockquote>
<p>What can I do when the input is too large for the dict to fit in memory</p>
</blockquote>
<p>I would also be interested if there is a better/faster way to solve this problem in general.</p>
| 6
|
2016-09-06T19:14:20Z
| 39,356,398
|
<p><strong>UPDATE:</strong> here is a memory saving solution, which converts all your string to numerical categories:</p>
<pre><code>In [13]: df
Out[13]:
        c1       c2
0  stringa  stringb
1  stringb  stringc
2  stringd  stringa
3  stringa  stringb
4  stringb  stringc
5  stringd  stringa
6  stringa  stringb
7  stringb  stringc
8  stringd  stringa

In [14]: x = (df.stack()
   ....:        .astype('category')
   ....:        .cat.rename_categories(np.arange(len(df.stack().unique())))
   ....:        .unstack())

In [15]: x
Out[15]:
  c1 c2
0  0  1
1  1  2
2  3  0
3  0  1
4  1  2
5  3  0
6  0  1
7  1  2
8  3  0

In [16]: x.dtypes
Out[16]:
c1    category
c2    category
dtype: object
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>I think you can categorize your columns:</p>
<pre><code>In [63]: big.head(15)
Out[63]:
         c1       c2
0   stringa  stringb
1   stringb  stringc
2   stringd  stringa
3   stringa  stringb
4   stringb  stringc
5   stringd  stringa
6   stringa  stringb
7   stringb  stringc
8   stringd  stringa
9   stringa  stringb
10  stringb  stringc
11  stringd  stringa
12  stringa  stringb
13  stringb  stringc
14  stringd  stringa

In [64]: big.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30000000 entries, 0 to 29999999
Data columns (total 2 columns):
c1    object
c2    object
dtypes: object(2)
memory usage: 457.8+ MB
</code></pre>
<p>So the <code>big</code> DF has 30M rows and its size is approx. 460MiB...</p>
<p>Let's categorize it:</p>
<pre><code>In [65]: cat = big.apply(lambda x: x.astype('category'))

In [66]: cat.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30000000 entries, 0 to 29999999
Data columns (total 2 columns):
c1    category
c2    category
dtypes: category(2)
memory usage: 57.2 MB
</code></pre>
<p>It now takes only 57MiB and looks exactly the same:</p>
<pre><code>In [69]: cat.head(15)
Out[69]:
         c1       c2
0   stringa  stringb
1   stringb  stringc
2   stringd  stringa
3   stringa  stringb
4   stringb  stringc
5   stringd  stringa
6   stringa  stringb
7   stringb  stringc
8   stringd  stringa
9   stringa  stringb
10  stringb  stringc
11  stringd  stringa
12  stringa  stringb
13  stringb  stringc
14  stringd  stringa
</code></pre>
<p>Let's compare its size with a similar numeric DF:</p>
<pre><code>In [67]: df = pd.DataFrame(np.random.randint(0,5,(30000000,2)), columns=list('ab'))

In [68]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30000000 entries, 0 to 29999999
Data columns (total 2 columns):
a    int32
b    int32
dtypes: int32(2)
memory usage: 228.9 MB
</code></pre>
| 2
|
2016-09-06T19:23:04Z
|
[
"python",
"pandas",
"dataframe"
] |
How to remap ids to consecutive numbers quickly
| 39,356,279
|
<p>I have a large csv file with lines that look like</p>
<pre><code>stringa,stringb
stringb,stringc
stringd,stringa
</code></pre>
<p>I need to convert it so the ids are consecutively numbered from 0. In this case the following would work</p>
<pre><code>0,1
1,2
3,0
</code></pre>
<p>My current code looks like:</p>
<pre><code>import csv

names = {}
counter = 0
with open('foo.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        if row[0] in names:
            id1 = names[row[0]]
        else:
            names[row[0]] = counter
            id1 = counter
            counter += 1
        if row[1] in names:
            id2 = names[row[1]]
        else:
            names[row[1]] = counter
            id2 = counter
            counter += 1
        print id1, id2
</code></pre>
<p>Python dicts use a lot of memory sadly and my input is large.</p>
<blockquote>
<p>What can I do when the input is too large for the dict to fit in memory</p>
</blockquote>
<p>I would also be interested if there is a better/faster way to solve this problem in general.</p>
| 6
|
2016-09-06T19:14:20Z
| 39,356,454
|
<p>You can use <code>factorize</code> if you want an array of id's: </p>
<pre><code>df = pd.read_csv(data, header=None, prefix='Col_')
print (pd.factorize(np.hstack(df.values)))
(array([0, 1, 1, 2, 3, 0]), array(['stringa', 'stringb', 'stringc', 'stringd'], dtype=object))
</code></pre>
<hr>
<p><strong>EDIT :</strong> (as per the comment)</p>
<p>You could take the two pieces of the tuple returned by the <code>factorize</code> method and map them onto the entire <code>dataframe</code>, replacing one with the other as shown:</p>
<pre><code>num, letter = pd.factorize(np.hstack(df.values))
df.replace(to_replace=sorted(list(set(letter))), value=sorted(list(set(num))))

   Col_0  Col_1
0      0      1
1      1      2
2      3      0
</code></pre>
| 3
|
2016-09-06T19:27:51Z
|
[
"python",
"pandas",
"dataframe"
] |
How to remap ids to consecutive numbers quickly
| 39,356,279
|
<p>I have a large csv file with lines that look like</p>
<pre><code>stringa,stringb
stringb,stringc
stringd,stringa
</code></pre>
<p>I need to convert it so the ids are consecutively numbered from 0. In this case the following would work</p>
<pre><code>0,1
1,2
3,0
</code></pre>
<p>My current code looks like:</p>
<pre><code>import csv

names = {}
counter = 0
with open('foo.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        if row[0] in names:
            id1 = names[row[0]]
        else:
            names[row[0]] = counter
            id1 = counter
            counter += 1
        if row[1] in names:
            id2 = names[row[1]]
        else:
            names[row[1]] = counter
            id2 = counter
            counter += 1
        print id1, id2
</code></pre>
<p>Python dicts use a lot of memory sadly and my input is large.</p>
<blockquote>
<p>What can I do when the input is too large for the dict to fit in memory</p>
</blockquote>
<p>I would also be interested if there is a better/faster way to solve this problem in general.</p>
| 6
|
2016-09-06T19:14:20Z
| 39,356,608
|
<pre><code>df = pd.DataFrame([['a', 'b'], ['b', 'c'], ['d', 'a']])

# collect the unique values across both columns and sort them
v = df.stack().unique()
v.sort()

# map each unique value to its factorized integer id
f = pd.factorize(v)
m = pd.Series(f[0], f[1])

# apply the mapping to every cell, then restore the original shape
df.stack().map(m).unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/eTlHI.png"><img src="http://i.stack.imgur.com/eTlHI.png" alt="enter image description here"></a></p>
| 7
|
2016-09-06T19:38:55Z
|
[
"python",
"pandas",
"dataframe"
] |
Urllib2: get content of html page
| 39,356,334
|
<p>I need to parse information from some urls:</p>
<pre><code>http://novosibirsk.baza.drom.ru/personal/actual/bulletins
http://drom.ru
http://novosibirsk.baza.drom.ru
http://moscow.drom.ru/volvo/xc70/21914186.html
http://novosibirsk.baza.drom.ru/personal/actual/bulletins
http://novosibirsk.baza.drom.ru/kolpaki-reno-r15-kubera-30227564.html
</code></pre>
<p>And I try to parse some information from them</p>
<pre><code>if 'drom.ru' in url:
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)
    page = response.read()
    soup = BeautifulSoup(page, 'html.parser')
</code></pre>
<p>But it returns empty pages to me.
Where could the problem be?</p>
| 0
|
2016-09-06T19:17:48Z
| 39,356,616
|
<p>Step 1: can you access site from browser? (if no, goto step 4)</p>
<p>Step 2: can you access site from command-line such as wget, curl, etc.? (if no, goto step 4)</p>
<p>Step 3: Check for proxy issues/try a different library like <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a></p>
<p>Step 4: Get it working in the browser/command-line first, then go back to step 1</p>
| 0
|
2016-09-06T19:39:59Z
|
[
"python",
"html",
"urllib2"
] |
Urllib2: get content of html page
| 39,356,334
|
<p>I need to parse information from some urls:</p>
<pre><code>http://novosibirsk.baza.drom.ru/personal/actual/bulletins
http://drom.ru
http://novosibirsk.baza.drom.ru
http://moscow.drom.ru/volvo/xc70/21914186.html
http://novosibirsk.baza.drom.ru/personal/actual/bulletins
http://novosibirsk.baza.drom.ru/kolpaki-reno-r15-kubera-30227564.html
</code></pre>
<p>And I try to parse some information from them</p>
<pre><code>if 'drom.ru' in url:
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)
    page = response.read()
    soup = BeautifulSoup(page, 'html.parser')
</code></pre>
<p>But it returns empty pages to me.
Where could the problem be?</p>
| 0
|
2016-09-06T19:17:48Z
| 39,360,830
|
<p>Using <code>requests</code> will make this easier. If you don't have the <code>requests</code> module installed, install it with <code>pip install requests</code></p>
<pre><code>import requests

if 'drom.ru' in url:
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')  # lxml works faster than html.parser
</code></pre>
| 0
|
2016-09-07T03:35:17Z
|
[
"python",
"html",
"urllib2"
] |
Submitting a form fails, but because views.auth_login (not related to my form) didn't return an HttpResponse object
| 39,356,355
|
<p>OK, so I have a functional auth_login setup. But it's unrelated to my <code>Articles</code> model, and <code>ArticleForm</code> ModelForm. However, when I try to create a new article on the local website, I get an error related to views.auth_login even though auth_login isn't referenced anywhere (to my knowledge) in my Article stuff: <code>The view home.views.auth_login didn't return an HttpResponse object. It returned None instead.</code> Usually, an error like that means that you're not returning an actual response in a view definition, but I do. The real question is why <code>home.views.auth_login</code> is being called instead of <code>home.views.add_article</code>. Here's my code:</p>
<h2>home/models.py</h2>
<pre><code>class Article(models.Model):
    headline = models.CharField(max_length=50)
    content = models.CharField(max_length=1024)

    def __str__(self):
        return self.headline
</code></pre>
<h2>home/forms.py</h2>
<pre><code>from django.contrib.auth.models import User
from .models import Article

class LoginForm(forms.ModelForm):
    class Meta:
        model = User
        fields = ["username", "password"]

class ArticleForm(forms.ModelForm):
    class Meta:
        model = Article
        fields = ['headline', 'content']
</code></pre>
<h2>home/views.py</h2>
<pre><code>def auth_login(request):
    if request.method == "POST":
        username = request.POST['username']
        password = request.POST['password']
        user = authenticate(username=username, password=password)
        if user is not None:
            login(request, user)
            # Redirect to a success page.
            return HttpResponseRedirect('/home/')
        else:
            # Return an 'invalid login' error message.
            return HttpResponse('Invalid username / password. :( Try again? <3')
    else:
        loginform = LoginForm()
        context = {
            'loginform': loginform
        }
        return render(request, 'home/login.html', context)

def add_article(request):
    if request.method == "POST":
        form = ArticleForm(data=request.POST)
        if form.is_valid():
            article = form.save()
            article.save()
            # todo change to view article page
            return HttpResponseRedirect('/home/')
        else:
            return HttpResponse('Invalid Inputs. :( Try again? <3')
    else:
        form = ArticleForm()
        context = {
            'form': form,
        }
        return render(request, 'home/add_article.html', context)
</code></pre>
<h2>home/urls.py</h2>
<pre><code>...
urlpatterns = [
    # match to ''
    # ex: /polls/
    url(r'^$', views.auth_login, name='login'),
    url(r'^home/$', views.index, name='index'),
    url(r'^articles/add/$', views.add_article, name='add_article')
]
</code></pre>
<h2>home/templates/home/add_article.html</h2>
<pre><code><h2> Add an Article </h2>
<form action="/" method="post">
    {% csrf_token %}
    {{ form }}
    <br><br>
    <input type="submit" value="Submit" name="addArticle" class="btn col2"/>
</form>
</code></pre>
<h2>Results / Problem</h2>
<p>When I go to <code>http://127.0.0.1:8000/articles/add</code>, fill out my simple form, and click submit, I get:</p>
<pre><code>The view home.views.auth_login didn't return an HttpResponse object. It returned None instead.
File "/Users/hills/Desktop/code/django-beanstalk/ebenv/lib/python2.7/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/Users/hills/Desktop/code/django-beanstalk/ebenv/lib/python2.7/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/Users/hills/Desktop/code/django-beanstalk/ebenv/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
198. "returned None instead." % (callback.__module__, view_name)
Exception Type: ValueError at /
Exception Value: The view home.views.auth_login didn't return an HttpResponse object. It returned None instead.
</code></pre>
<p>But I can't figure out why <code>home.views.auth_login</code> is being called instead of <code>home.views.add_article</code>. I've tried deleting and recreating all db tables (<code>python manage.py flush</code>, then <code>python manage.py makemigrations</code>, then <code>python manage.py migrate</code>), and I've even tried independently writing an independent Article2 model / form / template set, but I get the same error. --> Any idea what's going on?</p>
| 1
|
2016-09-06T19:19:45Z
| 39,356,515
|
<p>Well your form is submitting to the root of your website, so that's why it's hitting <code>auth_login</code> instead of <code>add_article</code>.</p>
<p>Change <code><form action="/" method="post"></code> to just <code><form method="POST"></code>. I assume the other error (no HttpResponse object) is just a side effect of POSTing to <code>auth_login</code>.</p>
| 2
|
2016-09-06T19:32:03Z
|
[
"python",
"django"
] |
How to add a custom CA Root certificate to the CA Store used by Python in Windows?
| 39,356,413
|
<p>I just installed Python3 from python.org and am having trouble installing packages with <code>pip</code>. By design, there is a man-in-the-middle packet inspection appliance on the network here that inspects all packets (ssl included) by resigning all ssl connections with its own certificate. Part of the GPO pushes the custom root certificate into the Windows Keystore.</p>
<p>When using Java, if I need to access any external https sites, I need to manually update the cacerts in the JVM to trust the Self-Signed CA certificate.</p>
<p>How do I accomplish that for python? Right now, when I try to install packages using <code>pip</code>, understandably, I get wonderful <code>[SSL: CERTIFICATE_VERIFY_FAILED]</code> errors.</p>
<p>I realize I can ignore them using the <code>--trusted-host</code> parameter, but I don't want to do that for every package I'm trying to install.</p>
<p>Is there a way to update the CA Certificate store that python uses?</p>
| 2
|
2016-09-06T19:24:30Z
| 39,358,282
|
<p>Run <code>python -c "import ssl; print(ssl.get_default_verify_paths())"</code> to check the current paths which are used to verify the certificate. Add your company's root certificate to one of those.</p>
<p>The path <code>openssl_capath_env</code> points to the environment variable <code>SSL_CERT_DIR</code>. You need to create the environment variable <code>SSL_CERT_DIR</code> (if it doesn't exist) and point it to a valid folder within your filesystem. In this folder you need to add your certificate and you should be able to use it.</p>
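<p>As a minimal sketch, checking the paths looks like this (the attribute names come from the <code>ssl.DefaultVerifyPaths</code> named tuple; where exactly you place the PEM file depends on your setup):</p>
<pre><code>import ssl

paths = ssl.get_default_verify_paths()
print(paths.openssl_capath_env, "->", paths.openssl_capath)  # e.g. SSL_CERT_DIR
print(paths.openssl_cafile_env, "->", paths.openssl_cafile)  # e.g. SSL_CERT_FILE
</code></pre>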
| 2
|
2016-09-06T21:43:16Z
|
[
"python",
"windows",
"ssl"
] |
Write multiple values in a csv cell without brackets in python
| 39,356,469
|
<p>I have a list of objects. Each object has different attributes, one of which is a list.
I would like to export these objects to a csv file that looks like</p>
<pre><code>a, "1,2,3"
b,"1,2,3"
</code></pre>
<p>so that the csv is readable in an Excel program</p>
<p>My code looks something like this:</p>
<pre><code>final_list = []
for a in list:
    first_value = a.first_value
    list2 = []
    for b in a.list:
        list2.append(b)
    final_list.append(first_value)
    final_list.append(list2)

with open('file') as f:
    writer = csv.writer(f)
    writer.writerows(final_list)
    pass
pass
</code></pre>
<p>which results in csv rows with the brackets and quotes that I don't want</p>
<pre><code>1, "['1','3','4']"
2,"['1','2','3']"
</code></pre>
<p>I need 2 values per row: <code>a</code> and <code>1,2,3</code> as string</p>
| 1
|
2016-09-06T19:28:35Z
| 39,356,543
|
<p>if you do that:</p>
<pre><code> final_list.append(first_value)
final_list.append(list2)
</code></pre>
<p>you create a row with a value and a list.</p>
<p>The <code>csv</code> module performs a <code>str</code> conversion when writing, which explains why you see the list rendered the same way it prints when you debug in Python.</p>
<p>Instead do:</p>
<pre><code>final_list.append(a.first_value)  # string
final_list.append(",".join(a.list))  # composes a comma separated string with the contents of a.list
</code></pre>
<p>result:</p>
<pre><code>a,"1,2,3"
</code></pre>
<p>(quotes protecting the commas so it is seen as a single cell)</p>
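<p>Putting it together, a minimal sketch of the whole loop (<code>objects</code> stands for the question's list of objects; <code>'wb'</code> is the usual open mode for the csv module on Python 2):</p>
<pre><code>import csv

with open('file.csv', 'wb') as f:
    writer = csv.writer(f)
    for a in objects:
        # one row per object: the plain value, then the joined list as one cell
        writer.writerow([a.first_value, ",".join(a.list)])
</code></pre>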
| 2
|
2016-09-06T19:33:35Z
|
[
"python",
"excel",
"csv"
] |
efficient filtering near/inside clusters after they found - python
| 39,356,509
|
<p>Essentially I applied a <code>DBSCAN</code> algorithm (<code>sklearn</code>) with a Euclidean distance on a subset of my original data. I found my clusters and all is fine: except for the fact that I want to keep only values that are far enough from those on which I did not run my analysis. I have a new distance to test such new stuff with and I wanted to understand how to do it <strong>WITHOUT</strong> numerous nested loops.</p>
<p>in a picture:</p>
<p><a href="http://i.stack.imgur.com/ugebW.png" rel="nofollow"><img src="http://i.stack.imgur.com/ugebW.png" alt="enter image description here"></a></p>
<p>my found clusters are in blue whereas the red ones are the points to which I don't want to be near. the crosses are the points belonging to the cluster that are carved out as they are within the new distance I specified.</p>
<p>Now, as much as I could do something of the sort:</p>
<pre><code>for i in red_points:
    for j in blu_points:
        if dist(i,j) < given_dist:
            original_dataframe.remove(j)
</code></pre>
<p>I refuse to believe there isn't a vectorized method. Also, I can't afford to do as above simply because I'll have huge tables to operate upon and I'd like to avoid having my CPU evaporate away.</p>
<p>any and all suggestions welcome</p>
| 1
|
2016-09-06T19:31:43Z
| 39,358,875
|
<p>If you need exact answers, the fastest implementation should be sklearn's pairwise distance calculator:
<a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html</a></p>
<p>If you can accept an approximate answer, you can do better with the k-d tree's <code>query_radius()</code>: <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html</a></p>
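<p>A minimal sketch of the tree-based approach, assuming <code>blue_points</code> and <code>red_points</code> are NumPy arrays of coordinates and <code>given_dist</code> is the cut-off from the question:</p>
<pre><code>from sklearn.neighbors import KDTree

tree = KDTree(red_points)
# count the red neighbours within given_dist of every blue point
counts = tree.query_radius(blue_points, r=given_dist, count_only=True)
keep = blue_points[counts == 0]  # blue points with no red point nearby
</code></pre>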
| 1
|
2016-09-06T22:47:01Z
|
[
"python",
"scikit-learn",
"cluster-computing",
"dbscan"
] |
efficient filtering near/inside clusters after they found - python
| 39,356,509
|
<p>Essentially I applied a <code>DBSCAN</code> algorithm (<code>sklearn</code>) with a Euclidean distance on a subset of my original data. I found my clusters and all is fine: except for the fact that I want to keep only values that are far enough from those on which I did not run my analysis. I have a new distance to test such new stuff with and I wanted to understand how to do it <strong>WITHOUT</strong> numerous nested loops.</p>
<p>in a picture:</p>
<p><a href="http://i.stack.imgur.com/ugebW.png" rel="nofollow"><img src="http://i.stack.imgur.com/ugebW.png" alt="enter image description here"></a></p>
<p>my found clusters are in blue whereas the red ones are the points to which I don't want to be near. the crosses are the points belonging to the cluster that are carved out as they are within the new distance I specified.</p>
<p>Now, as much as I could do something of the sort:</p>
<pre><code>for i in red_points:
    for j in blu_points:
        if dist(i,j) < given_dist:
            original_dataframe.remove(j)
</code></pre>
<p>I refuse to believe there isn't a vectorized method. Also, I can't afford to do as above simply because I'll have huge tables to operate upon and I'd like to avoid having my CPU evaporate away.</p>
<p>any and all suggestions welcome</p>
| 1
|
2016-09-06T19:31:43Z
| 39,361,907
|
<p>Of course you can vectorize this, but it will then still be O(n*m). Better neighbor search algorithms are not vectorized, e.g. the k-d tree and the ball tree.</p>
<p>Both are available in sklearn, and used by the DBSCAN module. Please see the <a href="http://scikit-learn.org/stable/modules/neighbors.html" rel="nofollow"><code>sklearn.neighbors</code></a> package.</p>
| 1
|
2016-09-07T05:37:48Z
|
[
"python",
"scikit-learn",
"cluster-computing",
"dbscan"
] |
Obtain csv-like parse AND line length byte count?
| 39,356,610
|
<p>I'm familiar with the <code>csv</code> Python module, and believe it's necessary in my case, as I have some fields that contain the delimiter (<code>|</code> rather than <code>,</code>, but that's irrelevant) within quotes.</p>
<p>However, I am also looking for the byte-count length of each original row, <em>prior</em> to splitting into columns. I can't count on the data to always quote a column, and I don't know if/when <code>csv</code> will strip off outer quotes, so I don't think (but might be wrong) that simply joining on my delimiter will reproduce the original line string (less CRLF characters). Meaning, I'm not positive the following works:</p>
<pre><code>with open(fname) as fh:
    reader = csv.reader(fh, delimiter="|")
    for row in reader:
        original = "|".join(row)  ## maybe?
</code></pre>
<p>I've tried looking at <code>csv</code> to see if there was anything in there that I could use/monkey-patch for this purpose, but since <code>_csv.reader</code> is a <code>.so</code>, I don't know how to mess around with that.</p>
<p>In case I'm dealing with an XY problem, my ultimate goal is to read through a CSV file, extracting certain fields and their overall file offsets to create a sort of look-up index. That way, later, when I have a list of candidate values, I can check each one's file-offset and <code>seek()</code> there, instead of chugging through the whole file again. As an idea of scale, I might have 100k values to look up across a 10GB file, so re-reading the file 100k times doesn't feel efficient to me. I'm open to other suggestions than the CSV module, but will still need <code>csv</code>-like intelligent parsing behavior. </p>
<p>EDIT: Not sure how to make it more clear than the title and body already explains - simply <code>seek()</code>-ing on a file handle isn't sufficient because I <strong>also</strong> need to parse the lines as a <code>csv</code> in order to pull out additional information.</p>
| 1
|
2016-09-06T19:39:20Z
| 39,358,339
|
<p>Depending on performance requirements and the size of the data, the low-tech solution is to simply read the file twice. Make a first pass where you get the length of each line, and then you can run the data through the csv parser. On my somewhat outdated Mac I can read and count the length of 2-3 million lines in a second, which isn't a huge performance hit.</p>
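<p>A minimal sketch of that two-pass approach (<code>fname</code> and the <code>|</code> delimiter come from the question; it assumes no field contains an embedded newline, so raw lines and parsed rows stay in step):</p>
<pre><code>import csv

# First pass: byte offset and length of every raw line.
offsets = []
pos = 0
with open(fname, 'rb') as fh:
    for line in fh:
        offsets.append((pos, len(line)))
        pos += len(line)

# Second pass: normal csv parsing; row i pairs up with offsets[i].
with open(fname, 'rb') as fh:
    for (offset, length), row in zip(offsets, csv.reader(fh, delimiter="|")):
        pass  # build the lookup index: some field of row -> offset
</code></pre>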
| 2
|
2016-09-06T21:47:51Z
|
[
"python",
"csv"
] |
Obtain csv-like parse AND line length byte count?
| 39,356,610
|
<p>I'm familiar with the <code>csv</code> Python module, and believe it's necessary in my case, as I have some fields that contain the delimiter (<code>|</code> rather than <code>,</code>, but that's irrelevant) within quotes.</p>
<p>However, I am also looking for the byte-count length of each original row, <em>prior</em> to splitting into columns. I can't count on the data to always quote a column, and I don't know if/when <code>csv</code> will strip off outer quotes, so I don't think (but might be wrong) that simply joining on my delimiter will reproduce the original line string (less CRLF characters). Meaning, I'm not positive the following works:</p>
<pre><code>with open(fname) as fh:
    reader = csv.reader(fh, delimiter="|")
    for row in reader:
        original = "|".join(row)  ## maybe?
</code></pre>
<p>I've tried looking at <code>csv</code> to see if there was anything in there that I could use/monkey-patch for this purpose, but since <code>_csv.reader</code> is a <code>.so</code>, I don't know how to mess around with that.</p>
<p>In case I'm dealing with an XY problem, my ultimate goal is to read through a CSV file, extracting certain fields and their overall file offsets to create a sort of look-up index. That way, later, when I have a list of candidate values, I can check each one's file-offset and <code>seek()</code> there, instead of chugging through the whole file again. As an idea of scale, I might have 100k values to look up across a 10GB file, so re-reading the file 100k times doesn't feel efficient to me. I'm open to other suggestions than the CSV module, but will still need <code>csv</code>-like intelligent parsing behavior. </p>
<p>EDIT: Not sure how to make it more clear than the title and body already explains - simply <code>seek()</code>-ing on a file handle isn't sufficient because I <strong>also</strong> need to parse the lines as a <code>csv</code> in order to pull out additional information.</p>
| 1
|
2016-09-06T19:39:20Z
| 39,358,721
|
<p>You can't subclass <code>_csv.reader</code>, but the <em><code>csvfile</code></em> argument to the <code>csv.reader()</code> <a href="https://docs.python.org/3/library/csv.html#csv.reader" rel="nofollow">constructor</a> only has to be a "file-like object". This means you could supply an instance of your own class that does some preprocessing, such as remembering the length of the last line read and its file offset. Here's an implementation showing exactly that. Note that the line length does <em>not</em> include the end-of-line character(s). It also shows how the offsets to each line/row could be stored and used after the file is read.</p>
<pre><code>import csv

class CSVInputFile(object):
    """ File-like object. """
    def __init__(self, file):
        self.file = file
        self.offset = None
        self.linelen = None

    def __iter__(self):
        return self

    def __next__(self):
        offset = self.file.tell()
        data = self.file.readline()
        if not data:
            raise StopIteration
        self.offset = offset
        self.linelen = len(data)
        return data

    next = __next__

offsets = []  # remember where each row starts
fname = 'unparsed.csv'
with open(fname) as fh:
    csvfile = CSVInputFile(fh)
    for row in csv.reader(csvfile, delimiter="|"):
        print('offset: {}, linelen: {}, row: {}'.format(
            csvfile.offset, csvfile.linelen, row))  # file offset and length of row
        offsets.append(csvfile.offset)  # remember where each row started
</code></pre>
| 4
|
2016-09-06T22:29:20Z
|
[
"python",
"csv"
] |
Python script works in bash, but not when called from R via system
| 39,356,621
|
<p>I use Ubuntu 16.04. I'm trying to run a simple python script from R. The script is</p>
<pre><code> import numpy as np
x=1
print(x)
</code></pre>
<p>and is written in a file named code.py. It works fine if I call it in bash via</p>
<pre><code> python3.5 code.py
</code></pre>
<p>However, when I call it in R via</p>
<pre><code> system("python3.5 code.py",intern=TRUE)
</code></pre>
<p>I get a message that says that numpy was not found. Any idea why there is this difference and how I can fix this?</p>
<p>Thanks!</p>
<p><strong>UPDATE</strong></p>
<p>If I run a file with </p>
<pre><code> import sys
print(sys.path)
</code></pre>
<p>I get</p>
<pre><code> [1] "['/home/user/Desktop', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-x86_64-linux-gnu', '/usr/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/dist-packages', '/usr/lib/python3/dist-packages']"
</code></pre>
<p>if I run the file from R, and </p>
<pre><code> ['/home/user/Desktop', '/home/user/anaconda3/lib/python35.zip', '/home/user/anaconda3/lib/python3.5', '/home/user/anaconda3/lib/python3.5/plat-linux', '/home/user/anaconda3/lib/python3.5/lib-dynload', '/home/user/anaconda3/lib/python3.5/site-packages', '/home/user/anaconda3/lib/python3.5/site-packages/Sphinx-1.4.1-py3.5.egg', '/home/user/anaconda3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg']
</code></pre>
<p>if I run the file from the command line.</p>
| 1
|
2016-09-06T19:40:13Z
| 39,395,286
|
<p>The problem is that you have two versions of python3 on your computer: The system default (Ubuntu, I'm assuming), and the one you installed (Anaconda3).</p>
<p>When you run it from the command line, you are using the Anaconda3 environment (which includes numpy and all the other Anaconda modules). When you run it from R, it doesn't know to use the Anaconda environment, and so it just uses your default Python paths (which don't include numpy).</p>
<p>To fix this, invoke your python script in R using the Anaconda python, not the system one:</p>
<p><code>system("/home/user/anaconda3/bin/python3 code.py",intern=TRUE)</code></p>
<p>Alternatively, you could add <code>/home/user/anaconda3/bin/</code> to your <code>PATH</code> environment variable in <code>~/.bashrc</code> so that it chooses anaconda over the system binary.</p>
| 1
|
2016-09-08T15:47:12Z
|
[
"python",
"bash",
"shell"
] |
What is the purpose of using instance of a class as a super/base class?
| 39,356,625
|
<p>What is the purpose of subclassing an instance of a class in Python? Here's an example: </p>
<pre><code>class A:
    def __init__(*args): print(args)

base = A()

class Test(base): pass
</code></pre>
<p>This code works properly under Python, but <code>base</code> is an instance of class <code>A</code>. (1) <strong>Why do we need to subclass an instance of a class?</strong> <strong>Is it related to metaclasses?</strong></p>
<p>From this question:
<a href="http://stackoverflow.com/questions/37876318/what-happens-when-you-inherent-from-a-module-instead-of-a-class-in-python?rq=1">What happens when you inherent from a module instead of a class in Python?</a></p>
<p>I understand that <code>Test(base)</code> will become <code>type(base).__init__</code>, (2) <strong>does this happen at definition time, when the class is defined?</strong> (3) <strong>How does Python know/decide that <code>base</code> is an instance of a class? Is it becuase <code>type(base)</code> doesn't return <code>type</code>?</strong></p>
| 0
|
2016-09-06T19:40:29Z
| 39,358,372
|
<p>Python actually uses <code>type(base)(classname, bases, body)</code> to produce the class object. This is the normal metaclass invocation (unless the class specifies a specific metaclass directly).</p>
<p>For an <em>instance</em> (or a module, which is basically an instance too), that means the class <code>__new__</code> static method is called to produce a new instance, and on that new instance <code>__init__</code> is called. So for your <code>class Test(base): pass</code> syntax, you are essentially creating a new instance of <code>A</code>, when Python executes the <code>class</code> statement.</p>
<p>This isn't really 'definition time'; there is no such phase in Python code. Python loads the bytecode for a module and executes it when it is imported for the first time. Class statements at the top of the module (so not inside functions) are executed at that time. It is at that time then that the <code>type(base)(...)</code> call is executed, yes.</p>
<p>Python doesn't 'know' or 'decide' anything about the base class. Python's philosophy is to trust the developer that writes code in it. The assumption is that you know what you are doing, and the base classes are just treated as if they'll respond correctly. Since <code>type(base)(....)</code> didn't raise an exception, Python just continued.</p>
<p>You can use anything that's callable as a metaclass, really.</p>
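<p>A small sketch makes the mechanism visible with the classes from the question: the <code>class Test(base)</code> statement ends up calling <code>type(base)('Test', (base,), {...})</code>, i.e. <code>A(...)</code>, so <code>Test</code> is bound to a new <em>instance</em> of <code>A</code> rather than to a class:</p>
<pre><code>class A:
    def __init__(*args): print(args)

base = A()

class Test(base):  # runs type(base)('Test', (base,), {...}), i.e. A(...)
    pass

print(type(Test))  # <class '__main__.A'> -- Test is an instance of A
</code></pre>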
| 1
|
2016-09-06T21:50:25Z
|
[
"python",
"class",
"inheritance",
"python-internals"
] |
splitting string using regular expression in python
| 39,356,631
|
<p>Hi guys, I want to split the following strings, which are parsed from a text file, using perhaps a regular expression in Python.</p>
<pre><code>Inside the text file(filename.txt)
iPhone.Case.1.left=1099.0.2
new.phone.newwork=bla.jpg
</code></pre>
<p>I want a function that, when looping through arrayOfStrings, will split it so that the following is displayed</p>
<pre><code>['iPhone','Case','1','left','1099.0.2']
['new','phone','newwork','bla.jpg']
</code></pre>
<p>This is what I have done so far</p>
<pre><code>import re

pattern = '(?<!\d)[\.=]|[\.=](?!\d)'
f = open('filename.txt','rb')
for line in f:
    str_values = re.split(pattern, line.rstrip())
    print str_values
</code></pre>
<p>This is what is being printed</p>
<pre><code>['iPhone', 'Case', '1', 'left', '1099.0.2']
['new', 'phone', 'newwork', 'bla', 'jpg']
</code></pre>
<p>but I want the last array to be </p>
<pre><code>['new','phone','newwork','bla.jpg']
</code></pre>
| 1
|
2016-09-06T19:41:10Z
| 39,356,901
|
<p>Try this :</p>
<pre><code>% python
>>> import re
>>> arrayOfStrings = ["iPhone.Case.1.left=1099.0.2", "new.phone.newwork=bla.jpg"]
>>> def printStuff(arg):
...     for i, x in enumerate(arg):
...         print(x.split('=')[0].split('.') + [x.split('=')[1]])
...
>>> printStuff(arrayOfStrings)
['iPhone', 'Case', '1', 'left', '1099.0.2']
['new', 'phone', 'newwork', 'bla.jpg']
</code></pre>
| 0
|
2016-09-06T19:58:40Z
|
[
"python",
"regex",
"python-2.7"
] |
splitting string using regular expression in python
| 39,356,631
|
<p>Hi guys, I want to split the following strings, which are parsed from a text file, using perhaps a regular expression in Python.</p>
<pre><code>Inside the text file(filename.txt)
iPhone.Case.1.left=1099.0.2
new.phone.newwork=bla.jpg
</code></pre>
<p>I want a function that, when looping through arrayOfStrings, will split it so that the following is displayed</p>
<pre><code>['iPhone','Case','1','left','1099.0.2']
['new','phone','newwork','bla.jpg']
</code></pre>
<p>This is what I have done so far</p>
<pre><code>import re

pattern = '(?<!\d)[\.=]|[\.=](?!\d)'
f = open('filename.txt','rb')
for line in f:
    str_values = re.split(pattern, line.rstrip())
    print str_values
</code></pre>
<p>This is what is being printed</p>
<pre><code>['iPhone', 'Case', '1', 'left', '1099.0.2']
['new', 'phone', 'newwork', 'bla', 'jpg']
</code></pre>
<p>but I want the last array to be </p>
<pre><code>['new','phone','newwork','bla.jpg']
</code></pre>
| 1
|
2016-09-06T19:41:10Z
| 39,356,999
|
<p>If you have regular enough input data that you can always split first at the <code>=</code> character, then split the first half at every <code>.</code> character, I would skip regex entirely, as it is complicated and not very pretty to read.</p>
<p>Here's an example of doing just that:</p>
<pre><code>s = 'new.phone.newwork=bla.jpg'
l = s.split('=')[0].split('.') + s.split('=')[1:]
</code></pre>
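<p>Applied to the file-reading loop from the question, a sketch of the same idea using <code>str.partition</code>, which splits at the first <code>=</code> only:</p>
<pre><code>with open('filename.txt') as f:
    for line in f:
        head, _, tail = line.rstrip().partition('=')
        print(head.split('.') + [tail])
</code></pre>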
| 1
|
2016-09-06T20:04:51Z
|
[
"python",
"regex",
"python-2.7"
] |
Is there a way to turn off Scientific Notation for Mpld3 plugins
| 39,356,654
|
<p>I want to use mpld3's MousePosition plugin to display the pixel location of my cursor. This works great, but I can't figure out how to turn off scientific notation in the plugin. Pixels > 1000 are displayed in scientific notation.</p>
<p>My code: </p>
<pre><code>import cv2
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
from matplotlib.pyplot import imshow, cm

import mpld3
from mpld3 import plugins
mpld3.enable_notebook()

fig, ax = plt.subplots()
cross = cv2.imread("cross.png", 0)
img = cv2.imread('frame_400.png', 0)
res = cv2.matchTemplate(img[2500:, :1200], cv2.resize(cross, (0,0), fx = 2, fy = 2), 3)
pylab.rcParams['figure.figsize'] = (10.0, 10.0)
imshow(res, origin='lower', cmap = cm.gray)
plugins.connect(fig, plugins.MousePosition(fontsize=14))
</code></pre>
| 0
|
2016-09-06T19:42:52Z
| 39,577,557
|
<p>There is a property for display format </p>
<pre><code>plugins.connect(fig, plugins.MousePosition(fmt="f"))
</code></pre>
<p>This will display the mouse position in integer format (a float with no decimal places). <code>fmt=".1f"</code> will display the location with 1 decimal place of precision.</p>
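<p>A hedged sketch of how that slots into the question's setup; only the plugin line changes, and <code>".0f"</code> is one plausible choice for whole pixels (the default <code>fmt</code> is a general format that falls back to exponent notation for large values):</p>
<pre><code>from mpld3 import plugins

# whole-pixel readout instead of the default general format
plugins.connect(fig, plugins.MousePosition(fontsize=14, fmt=".0f"))
</code></pre>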
<p>Ref: <a href="https://mpld3.github.io/_modules/mpld3/plugins.html" rel="nofollow">https://mpld3.github.io/_modules/mpld3/plugins.html</a></p>
| 0
|
2016-09-19T16:10:21Z
|
[
"python",
"matplotlib",
"jupyter-notebook",
"mpld3"
] |
Webscraping the Indian Patent Website for patent data
| 39,356,677
|
<p>I am trying to write a webscraper for the <a href="http://ipindiaservices.gov.in/publicsearch/" rel="nofollow">Indian patent search website</a> to get data about patents. Here is the code that I have so far.</p>
<pre><code>#import the necessary modules
import urllib2
#import the beautifulsoup functions to parse the data
from bs4 import BeautifulSoup
#mention the website that you are trying to scrape
patentsite="http://ipindiaservices.gov.in/publicsearch/"
#Query the website and return the html to the variable 'page'
page = urllib2.urlopen(patentsite)
#Parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page)
print soup
</code></pre>
<p>Unfortunately, either the Indian patent website is not robust, or I am not sure how to proceed further in this regard. </p>
<p>This is the output for the above code.</p>
<pre><code><!--
###################################################################
## ##
## ##
## SIDDHAST.COM ##
## ##
## ##
###################################################################
--><!DOCTYPE HTML>
<html>
<head>
<meta content="IE=edge" http-equiv="X-UA-Compatible"/>
<meta charset="utf-8"/>
<title>:: InPASS - Indian Patent Advanced Search System ::</title>
<link href="resources/ipats-all.css" rel="stylesheet"/>
<script src="app.js" type="text/javascript"></script>
<link href="resources/app.css" rel="stylesheet"/>
</head>
<body></body>
</html>
</code></pre>
<p>What I want is this: suppose I provide a company name, the scraper should get all the patents for that particular company. I want to do other things if I can get this part right, like providing a set of inputs that the scraper will use to find patents. But I am stuck at the part where I am unable to proceed further.</p>
<p>Any pointers on how to get this data will be greatly appreciated.</p>
| 2
|
2016-09-06T19:44:12Z
| 39,359,152
|
<p>You can do this with just <em>requests</em>. The post is to <em><a href="http://ipindiaservices.gov.in/publicsearch/resources/webservices/search.php" rel="nofollow">http://ipindiaservices.gov.in/publicsearch/resources/webservices/search.php</a></em> with one <em>param</em>, <em>_dc</em>, which is a timestamp that we create with <em>time.time</em>.</p>
<p>Each value in <code>"field[]"</code> should match up to each in <code>"fieldvalue[]"</code> and in turn match to <code>"operator[]"</code> whether you chose to <code>*AND*</code> <code>*OR*</code> or <code>*NOT*</code>, the <code>[]</code> after each key specifies that we are passing an array of <em>value(s)</em>, without that nothing would work.:</p>
<pre><code>data = {
    "publication_type_published": "on",
    "publication_type_granted": "on",
    "fieldDate": "APD",
    "datefieldfrom": "19120101",
    "datefieldto": "20160906",
    "operatordate": " AND ",
    "field[]": ["PA"],  # claims, description, patent-number codes go here
    "fieldvalue[]": ["chris*"],  # matching values for ^^ go here
    "operator[]": [" AND "],  # matching sql logic for ^^ goes here
    "page": "1",  # gives you next page results
    "start": "0",  # not sure what effect this actually has.
    "limit": "25"}  # not sure how this relates as len(r.json()[u'record']) stays 25 regardless

import requests
from time import time

post = "http://ipindiaservices.gov.in/publicsearch/resources/webservices/search.php?_dc={}".format(
    str(time()).replace(".", ""))

with requests.Session() as s:
    s.get("http://ipindiaservices.gov.in/publicsearch/")
    s.headers.update({"X-Requested-With": "XMLHttpRequest"})
    r = s.post(post, data=data)
    print(r.json())
</code></pre>
<p>Output will look like the following; I cannot add it all as there is too much data to post:</p>
<pre><code>{u'success': True, u'record': [{u'Publication_Status': u'Published', u'appDate': u'2016/06/16', u'pubDate': u'2016/08/31', u'title': u'ACTUATOR FOR DEPLOYABLE IMPLANT', u'sourceID': u'inpat', u'abstract': u'\n Systems and methods are provided for usin.............
</code></pre>
<p>If you use the record key you get a list of dicts like:</p>
<pre><code>{u'Publication_Status': u'Published', u'appDate': u'2015/01/27', u'pubDate': u'2015/06/26', u'title': u'CORRUGATED PALLET', u'sourceID': u'inpat', u'abstract': u'\n A corrugated paperboard pallet is produced from two flat blanks which comprise a pallet top and a pallet bottom. The two blanks are each folded to produce only two parallel vertically extending double thickness ribs&nbsp;three horizontal panels&nbsp;two vertical side walls and two horizontal flaps. The ribs of the pallet top and pallet bottom lock each other from opening in the center of the pallet by intersecting perpendicularly with notches in the ribs. The horizontal flaps lock the ribs from opening at the edges of the pallet by intersecting perpendicularly with notches&nbsp;and the vertical sidewalls include vertical flaps that open inward defining fork passages whereby the vertical flaps lock said horizontal flaps from opening.\n ', u'Assignee': u'OLVEY Douglas A., SKETO James L., GUMBERT Sean G., DANKO Joseph J., GABRYS Christopher W., ', u'field_of_invention': u'FI10', u'publication_no': u'26/2015', u'patent_no': u'', u'application_no': u'642/DELNP/2015', u'UCID': u'WVJ4NVVIYzFLcUQvVnJsZGczcVRmSS96Vkh3NWsrS1h3Qk43S2xHczJ2WT0%3D', u'Publication_Type': u'A'}
</code></pre>
<p>which is your patent info.</p>
<p>You can see that if we choose a few values in our browser, the values in all <em>fieldvalue</em>, <em>field</em> and <em>operator</em> line up. <code>AND</code> is the default, so you see that for every option:</p>
<p><a href="http://i.stack.imgur.com/V6ni6.png" rel="nofollow"><img src="http://i.stack.imgur.com/V6ni6.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/UwHnE.png" rel="nofollow"><img src="http://i.stack.imgur.com/UwHnE.png" alt="enter image description here"></a></p>
<p>So figure out the code, pick what you want and post.</p>
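<p>For example, a hedged sketch of walking a few result pages inside the same <code>requests.Session()</code> block (the key names are taken from the sample record above; the stop condition is a guess since the total-count field isn't shown):</p>
<pre><code>for page in range(1, 4):  # first three pages, for illustration
    data["page"] = str(page)
    r = s.post(post, data=data)
    for rec in r.json()["record"]:
        print(rec["application_no"], rec["title"])
</code></pre>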
| 4
|
2016-09-06T23:22:29Z
|
[
"python",
"python-2.7",
"web-scraping",
"beautifulsoup"
] |
Call python script from vba with wsh
| 39,356,710
|
<p>I need to call a python script from vba which works fine by using the shell.</p>
<pre><code>Sub CallPythonScript()
    Call Shell("C:\Program Files (x86)\Python27\python.exe C:\Users\Markus\BrowseDirectory.py")
End Sub
</code></pre>
<p>But when I try using wsh (because of the wait functionality) it just won't work anymore. </p>
<pre><code>Sub CallPythonScript()
    Dim wsh As Object
    Set wsh = VBA.CreateObject("WScript.Shell")
    Dim myApp As String: myApp = "C:\Program Files (x86)\Python27\python.exe C:\Users\Markus\BrowseDirectory.py"
    Dim waitOnReturn As Boolean: waitOnReturn = True
    Dim windowStyle As Integer: windowStyle = 1
    wsh.Run """"" & myApp & """"", windowStyle, waitOnReturn
End Sub
</code></pre>
<p>However, I used the same code at home and everything worked out just fine, with the difference that there weren't any blanks in the path. So naturally there must be something wrong with the blanks. Help is greatly appreciated. </p>
| 2
|
2016-09-06T19:46:29Z
| 39,356,767
|
<p>Did you verify that path takes you directly to the interpreter?</p>
<p>Try this,</p>
<pre><code>Dim myApp As String: myApp = "C:\""Program Files (x86)""\Python27\python.exe C:\Users\Markus\BrowseDirectory.py"
</code></pre>
| 0
|
2016-09-06T19:50:33Z
|
[
"python",
"vba",
"excel-vba",
"wsh"
] |