Add yticks to marginals in Seaborn JointGrid
39,111,000
<p>There don't seem to be any examples in the documentation for <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.JointGrid.html" rel="nofollow"><code>JointGrid</code></a> or <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.jointplot.html#seaborn.jointplot" rel="nofollow"><code>JointPlot</code></a> that show <code>yticks</code> in the marginal plots. How can I add <code>yticks</code> to the marginal plots in the <code>seaborn</code> <code>JointGrid</code> plot below?</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns tips = sns.load_dataset('tips') g = sns.JointGrid(x='total_bill', y='tip', data=tips) g = g.plot_joint(plt.scatter, color='#334f6d') g = g.plot_marginals(sns.distplot, color='#418470') </code></pre> <p><a href="http://i.stack.imgur.com/YCDKJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/YCDKJ.png" alt="enter image description here"></a></p> <p>I know what the yticks are from <code>g.ax_marg_x.get_yticks()</code>, but setting them with</p> <pre><code>g.ax_marg_x.set_yticks(g.ax_marg_x.get_yticks()) </code></pre> <p>doesn't seem to do anything, nor do other simple attempts like <code>g.ax_marg_x.yaxis.set_visible(True)</code>.</p>
0
2016-08-23T21:31:40Z
39,111,336
<p>This plots something, but i'm not sure how to interpret these results (or how they were calculated; the right one does not look like a density).</p> <pre><code> import matplotlib.pyplot as plt import seaborn as sns tips = sns.load_dataset('tips') g = sns.JointGrid(x='total_bill', y='tip', data=tips) g = g.plot_joint(plt.scatter, color='#334f6d') g = g.plot_marginals(sns.distplot, color='#418470', ) plt.setp(g.ax_marg_x.get_yticklabels(), visible=True) plt.setp(g.ax_marg_y.get_xticklabels(), visible=True) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/qCwe5.png" rel="nofollow"><img src="http://i.stack.imgur.com/qCwe5.png" alt="enter image description here"></a></p> <p>The relevant part to reverse-engineer is in seaborn's sources @ seaborn/axisgrid.py (<a href="https://github.com/mwaskom/seaborn/blob/0f79211cd8d25bddecbf5a6a024f0f606e8685e2/seaborn/axisgrid.py#L1623" rel="nofollow">link</a>):</p> <pre><code># LINE 1623 # # Set up the subplot grid f = plt.figure(figsize=(size, size)) gs = plt.GridSpec(ratio + 1, ratio + 1) ax_joint = f.add_subplot(gs[1:, :-1]) ax_marg_x = f.add_subplot(gs[0, :-1], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:, -1], sharey=ax_joint) self.fig = f self.ax_joint = ax_joint self.ax_marg_x = ax_marg_x self.ax_marg_y = ax_marg_y # Turn off tick visibility for the measure axis on the marginal plots plt.setp(ax_marg_x.get_xticklabels(), visible=False) plt.setp(ax_marg_y.get_yticklabels(), visible=False) # Turn off the ticks on the density axis for the marginal plots plt.setp(ax_marg_x.yaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_x.yaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_x.get_yticklabels(), visible=False) plt.setp(ax_marg_y.get_xticklabels(), visible=False) ax_marg_x.yaxis.grid(False) ax_marg_y.xaxis.grid(False) </code></pre>
2
2016-08-23T21:59:04Z
[ "python", "matplotlib", "seaborn" ]
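The fix in the answer above works by toggling matplotlib label visibility with `plt.setp`. The same mechanism can be seen without seaborn at all; a minimal matplotlib-only sketch (mirroring what `JointGrid` does internally when it hides the marginal labels):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

# JointGrid hides marginal tick labels via plt.setp(..., visible=False);
# the answer's fix simply flips the same flag back on.
plt.setp(ax.get_yticklabels(), visible=False)
plt.setp(ax.get_yticklabels(), visible=True)

print(all(lbl.get_visible() for lbl in ax.get_yticklabels()))  # True
```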
nth item in List to dictionary python
39,111,016
<p>I am trying to create a dictionary from every nth element of a list with list's original index as key. So for example:</p> <pre><code>l = [1,2,3,4,5,6,7,8,9] </code></pre> <p>Now running </p> <pre><code>dict(enumerate(l)).items() </code></pre> <p>gives me:</p> <pre><code>dict_items([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]) </code></pre> <p>which is what I want. However the problem begins when I want to now select every second value from l to do this, so I try</p> <pre><code>dict(enumerate(l[::2])).items() </code></pre> <p>which gives me</p> <pre><code>dict_items([(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]) </code></pre> <p>but I do not want that, I want to preserve the original index when making a dictionary. What is the best way to do this?</p> <p>I want the following output</p> <pre><code>dict_items([(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]) </code></pre>
1
2016-08-23T21:32:51Z
39,111,054
<p>Use <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice()</code></a> on the <code>enumerate()</code> object:</p> <pre><code>from itertools import islice dict(islice(enumerate(l), None, None, 2)).items() </code></pre> <p><code>islice()</code> gives you a slice on any <em>iterator</em>; the above takes every second element:</p> <pre><code>&gt;&gt;&gt; from itertools import islice &gt;&gt;&gt; l = [1,2,3,4,5,6,7,8,9] &gt;&gt;&gt; dict(islice(enumerate(l), None, None, 2)).items() dict_items([(0, 1), (8, 9), (2, 3), (4, 5), (6, 7)]) </code></pre> <p>(note that the output is as expected, but order is, as always, <a href="https://stackoverflow.com/questions/15479928/why-is-the-order-in-python-dictionaries-and-sets-arbitrary/15479974#15479974">determined by the hash table</a>).</p>
4
2016-08-23T21:34:52Z
[ "python", "dictionary", "enumeration", "python-3.5" ]
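As a side note, the same result can also be had without `itertools`, using a dict comprehension that filters on the index (a sketch, not taken from the answer above):

```python
l = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Keep the original position as the key while taking every second element.
d = {i: v for i, v in enumerate(l) if i % 2 == 0}
print(sorted(d.items()))  # [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
```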
Debugging Python Segmentation Fault occurring in .so file
39,111,037
<p>I am currently running into a bug when accessing a shared object library using ctypes; code is below. The strange thing is it occurs only on rare occasions. I am able to use the API most of the time, but on some occasions it produces a seg fault. </p> <p>Because it doesn't happen frequently, I'd prefer not to use gdb to grab the traceback, because I'd need to run it numerous times. Is there a way in Python to print the traceback or do a core dump so I can debug this bug? How else could I find out what is wrong?</p> <pre><code>client_login = _clientmod.client_login client_login.argtypes = [ ctypes.c_void_p, ctypes.c_int, ctypes.c_uint ] client_login.restype = ctypes.c_int </code></pre> <p>The c_void_p is a handle for the interface.</p> <p>The c_int and c_uint are the login and password, respectively.</p>
0
2016-08-23T21:33:41Z
39,111,635
<p><code>ulimit -c</code> tells the current maximum core size, <code>ulimit -c unlimited</code> sets it to, well, unlimited (you may also want to explore <code>/etc/security/limits.conf</code> and your environment, like <code>~/.bashrc</code>, for that matter).</p> <p>Regarding stack trace print, there's a <a href="https://gist.github.com/toolness/d56c1aab317377d5d17a#file-py_exc_print-py" rel="nofollow">PyExcPrint</a> extension which lets you do that.</p>
0
2016-08-23T22:25:09Z
[ "python", "debugging", "segmentation-fault", "ctypes", "traceback" ]
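Not mentioned in the answer above: the standard library's `faulthandler` module (Python 3.3+) can dump a Python-level traceback when the interpreter crashes on a signal such as SIGSEGV, which is often the quickest first step for rare ctypes crashes:

```python
import faulthandler
import sys

# Install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL; on a
# crash, tracebacks of all threads are written to stderr (or a given file).
faulthandler.enable(file=sys.stderr, all_threads=True)
print(faulthandler.is_enabled())  # True
```

Running the script with `python -X faulthandler script.py` achieves the same without code changes.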
Using python to scrape contents of jsp webpage
39,111,147
<p>Using Python and the requests library, I have a list of zip codes, and from these I would like to compile a list of nearby CVS store addresses for each. I can extract the address field without a problem, but I cannot dynamically generate the next page since there is no "&amp;zip=77098" (or equivalent) in the URL. Each time I visit the page I get a seemingly random "requestid" value. </p> <p><a href="http://www.cvs.com/store-locator/store-locator-landing.jsp?_requestid=1003175" rel="nofollow">http://www.cvs.com/store-locator/store-locator-landing.jsp?_requestid=1003175</a></p> <p>If I copy this link and paste it into another browser it routes me back to my default CVS location. Is there a way to send the zip code in the URL or otherwise dynamically set the location to search for? </p> <p>This is my (not working) code for one zip code. It returns the "default" locations, not the locations specific to the zip in the header:</p> <pre><code>data = {"search":"77098"} urlx = 'http://www.cvs.com/store-locator/store-locator-landing.jsp' cookies = requests.get(urlx).cookies rx = requests.post(urlx, cookies=cookies,data=data, headers={'user-agent':'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}) soupx = BeautifulSoup(rx.content, "lxml-xml") addressList = soupx.findAll("div", { "class" : "address-wrap" }) distanceList = soupx.findAll("span", { "class" : "store-miles" }) </code></pre>
1
2016-08-23T21:43:01Z
39,131,684
<p>There is quite a bit going on here. First you need to get the coordinates for the zip code you enter, so you can use them later to post to the URL that gives you the search results:</p> <pre><code>urlx = 'http://www.cvs.com/store-locator/store-locator-landing.jsp' headers = { 'user-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'} # params for get request to get coordinates for later post. coord_params = { "output": "json", "key": "AkezKYdo-i6Crmr6nW9y0Ce_72T-osA8SwDdbgvfMSrKL47FVwQOpjBRGW_ON5Aq", "$filter": "Cvs_Store_Flag Eq 'Y'"} # This provides the coordinates. coords_url = "https://dev.virtualearth.net/REST/v1/Locations" # The post to get the actual results is to this url. post = "https://www.cvs.com/rest/bean/cvs/store/CvsStoreLocatorServices/getSearchStore" zipcode = "77098" # Template to pass each zip to in your actual loop. template = "{zipcode},US" with requests.Session() as s: s.get(urlx) # Add the query param passing in each zipcode coord_params["query"] = template.format(zipcode=zipcode) js = s.get(coords_url, params=coord_params).json() # Parse latitude and longitude from the returned json. latitude, longitude =(js[u'resourceSets'][0][u'resources'][0]["point"][u'coordinates']) # finally get the search results. results = s.post(post, data={"latitude": latitude, "longitude":longitude}).json() from pprint import pprint as pp pp(results) </code></pre> <p>Output:</p> <pre><code>{u'atgResponse': {u'sc': u'00', u'sd': [{u'ad': u'2111 W. 
ALABAMA STREET', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'0.35', u'id': u'10554', u'nv': u'I', u'nz': u'', u'ph': u'713-874-1085', u'sn': u'MyWeeklyAdStore10554', u'st': u'TX', u'wf': u'Y', u'zp': u'77098'}, {u'ad': u'4700 KIRBY DRIVE', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'0.43', u'id': u'5685', u'nv': u'I', u'nz': u'', u'ph': u'713-533-2200', u'sn': u'MyWeeklyAdStore5685', u'st': u'TX', u'wf': u'Y', u'zp': u'77098'}, {u'ad': u'2910 WESTHEIMER ROAD', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'0.69', u'id': u'7126', u'nv': u'I', u'nz': u'', u'ph': u'713-526-0062', u'sn': u'MyWeeklyAdStore7126', u'st': u'TX', u'wf': u'Y', u'zp': u'77098'}, {u'ad': u'6011 KIRBY DRIVE', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'1.17', u'id': u'7479', u'nv': u'I', u'nz': u'', u'ph': u'713-522-3983', u'sn': u'MyWeeklyAdStore7479', u'st': u'TX', u'wf': u'Y', u'zp': u'77005'}, {u'ad': u'1003 RICHMOND AVENUE', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'1.41', u'id': u'7162', u'nv': u'I', u'nz': u'', u'ph': u'713-807-8491', u'sn': u'MyWeeklyAdStore7162', u'st': u'TX', u'wf': u'Y', u'zp': u'77006'}, {u'ad': u'1001 WAUGH DRIVE, CORNER OF DALLAS AVENUE', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'1.87', u'id': u'5840', u'nv': u'I', u'nz': u'', u'ph': u'713-807-7859', u'sn': u'MyWeeklyAdStore5840', u'st': u'TX', u'wf': u'Y', u'zp': u'77019'}, {u'ad': u'2266 WEST HOLCOMBE BOULEVARD', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'1.93', u'id': u'5833', u'nv': u'I', u'nz': u'', u'ph': u'713-218-2180', u'sn': u'MyWeeklyAdStore5833', u'st': u'TX', u'wf': u'Y', u'zp': u'77030'}, {u'ad': u'4323 SAN FELIPE ST', u'ci': u'HOUSTON', u'cv': u'', u'cz': u'', u'dt': u'2.23', u'id': u'16368', u'nv': u'', u'nz': u'', u'ph': u'713-331-0166', u'sn': u'MyWeeklyAdStore16368', u'st': u'TX', u'wf': u'N', u'zp': u'77027'}, {u'ad': u'1000 ELGIN STREET', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'2.32', u'id': u'784', u'nv': u'I', 
u'nz': u'', u'ph': u'713-526-4478', u'sn': u'MyWeeklyAdStore784', u'st': u'TX', u'wf': u'Y', u'zp': u'77004'}, {u'ad': u'5401 WASHINGTON AVENUE', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'2.46', u'id': u'7476', u'nv': u'I', u'nz': u'', u'ph': u'713-861-3883', u'sn': u'MyWeeklyAdStore7476', u'st': u'TX', u'wf': u'Y', u'zp': u'77007'}, {u'ad': u'3939 BELLAIRE BOULEVARD', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'2.54', u'id': u'6240', u'nv': u'I', u'nz': u'', u'ph': u'832-778-9025', u'sn': u'MyWeeklyAdStore6240', u'st': u'TX', u'wf': u'Y', u'zp': u'77025'}, {u'ad': u'4755 WESTHIEMER ROAD', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'2.58', u'id': u'2988', u'nv': u'I', u'nz': u'', u'ph': u'713-386-1091', u'sn': u'MyWeeklyAdStore2988', u'st': u'TX', u'wf': u'Y', u'zp': u'77027'}, {u'ad': u'402 GRAY STREET', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'2.59', u'id': u'5968', u'nv': u'I', u'nz': u'', u'ph': u'713-982-5527', u'sn': u'MyWeeklyAdStore5968', u'st': u'TX', u'wf': u'Y', u'zp': u'77002'}, {u'ad': u'7900 SOUTH MAIN', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'2.77', u'id': u'7402', u'nv': u'I', u'nz': u'', u'ph': u'713-660-8934', u'sn': u'MyWeeklyAdStore7402', u'st': u'TX', u'wf': u'Y', u'zp': u'77030'}, {u'ad': u'8500 MAIN ST', u'ci': u'HOUSTON', u'cv': u'', u'cz': u'', u'dt': u'2.9', u'id': u'16676', u'nv': u'', u'nz': u'', u'ph': u'713-661-8213', u'sn': u'MyWeeklyAdStore16676', u'st': u'TX', u'wf': u'N', u'zp': u'77025'}, {u'ad': u'5204 RICHMOND AVENUE', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'3.14', u'id': u'3177', u'nv': u'I', u'nz': u'', u'ph': u'713-961-0874', u'sn': u'MyWeeklyAdStore3177', u'st': u'TX', u'wf': u'Y', u'zp': u'77056'}, {u'ad': u'2580 SHEARN ST', u'ci': u'HOUSTON', u'cv': u'', u'cz': u'', u'dt': u'3.26', u'id': u'17181', u'nv': u'', u'nz': u'', u'ph': u'713-331-0377', u'sn': u'MyWeeklyAdStore17181', u'st': u'TX', u'wf': u'N', u'zp': u'77007'}, {u'ad': u'5402 WESTHEIMER ROAD', 
u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'3.39', u'id': u'7753', u'nv': u'I', u'nz': u'', u'ph': u'713-877-1479', u'sn': u'MyWeeklyAdStore7753', u'st': u'TX', u'wf': u'Y', u'zp': u'77056'}, {u'ad': u'917 MAIN STREET', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'3.4', u'id': u'6834', u'nv': u'I', u'nz': u'', u'ph': u'713-982-5565', u'sn': u'MyWeeklyAdStore6834', u'st': u'TX', u'wf': u'Y', u'zp': u'77002'}, {u'ad': u'9838 BUFFALO SPEEDWAY', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'3.9', u'id': u'5258', u'nv': u'I', u'nz': u'', u'ph': u'713-218-4491', u'sn': u'MyWeeklyAdStore5258', u'st': u'TX', u'wf': u'Y', u'zp': u'77025'}, {u'ad': u'3811 OLD SPANISH TRAIL', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'3.95', u'id': u'7198', u'nv': u'I', u'nz': u'', u'ph': u'713-741-7900', u'sn': u'MyWeeklyAdStore7198', u'st': u'TX', u'wf': u'Y', u'zp': u'77021'}, {u'ad': u'5430 BISSONNET STREET, CORNER OF CHIMNEY ROCK, CORNER OF CHIMNEY ROCK', u'ci': u'BELLAIRE', u'cv': u'I', u'cz': u'', u'dt': u'4.2', u'id': u'5273', u'nv': u'I', u'nz': u'', u'ph': u'713-218-2291', u'sn': u'MyWeeklyAdStore5273', u'st': u'TX', u'wf': u'Y', u'zp': u'77401'}, {u'ad': u'300 MEYERLAND PLAZA MALL', u'ci': u'HOUSTON', u'cv': u'', u'cz': u'', u'dt': u'4.33', u'id': u'17091', u'nv': u'', u'nz': u'', u'ph': u'713-292-0031', u'sn': u'MyWeeklyAdStore17091', u'st': u'TX', u'wf': u'N', u'zp': u'77096'}, {u'ad': u'110 WEST 20TH STREET', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'4.87', u'id': u'7092', u'nv': u'I', u'nz': u'', u'ph': u'832-673-7131', u'sn': u'MyWeeklyAdStore7092', u'st': u'TX', u'wf': u'Y', u'zp': u'77008'}, {u'ad': u'6545 WESTHEIMER ROAD', u'ci': u'HOUSTON', u'cv': u'I', u'cz': u'', u'dt': u'5.12', u'id': u'5895', u'nv': u'I', u'nz': u'', u'ph': u'713-243-8050', u'sn': u'MyWeeklyAdStore5895', u'st': u'TX', u'wf': u'Y', u'zp': u'77057'}]}} </code></pre> <p>There is a call to <em><a href="https://dev.virtualearth.net/REST/v1/Locations" 
rel="nofollow">https://dev.virtualearth.net/REST/v1/Locations</a></em>, which is a Bing Maps <a href="https://www.bingmapsportal.com/" rel="nofollow">Microsoft API</a>. I suggest you set up an account and create your own application, which will give you a key; it took literally two minutes for me to do it. As far as I know the free limit is 30k requests per day, so that should be more than enough.</p>
1
2016-08-24T19:34:35Z
[ "python", "web-scraping", "python-requests" ]
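To run the answer's flow over a whole list of zip codes, only the `query` parameter changes between requests. A stdlib-only sketch of building those coordinate-lookup URLs (endpoint taken from the answer; the API key is omitted here):

```python
from urllib.parse import urlencode

# Endpoint from the answer above; a real request also needs the "key" param.
coords_url = "https://dev.virtualearth.net/REST/v1/Locations"
zipcodes = ["77098", "77005", "77006"]

urls = []
for zipcode in zipcodes:
    params = {"query": "{},US".format(zipcode), "output": "json"}
    urls.append(coords_url + "?" + urlencode(params))

print(urls[0])  # https://dev.virtualearth.net/REST/v1/Locations?query=77098%2CUS&output=json
```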
Moving rows of data within pandas dataframe to end of last column
39,111,249
<p>Python newbie, please be gentle. I have data in two "middle sections" of multiple Excel spreadsheets that I would like to isolate into one pandas dataframe. Below is a link to a data screenshot. Within each file, my headers are in Row 4 with data in Rows 5-15, Columns B:O. The headers and data then continue with headers on Row 21, data in Rows 22-30, Columns B:L. I would like to move the headers and data from the second set and append them to the end of the first set of data. </p> <p>This code captures the header from Row 4 and data in Columns B:O but captures all Rows under the header including the second Header and second set of data. How do I move this second set of data and append it after the first set of data?</p> <pre><code>path =r'C:\Users\sarah\Desktop\Original' allFiles = glob.glob(path + "/*.xls") frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_excel(file_,sheetname="Data1", parse_cols="B:O",index_col=None, header=3, skip_rows=3 ) list_.append(df) frame = pd.concat(list_) </code></pre> <p><a href="http://i.stack.imgur.com/hsWum.jpg" rel="nofollow">Screenshot of my data</a></p> <p><a href="http://i.stack.imgur.com/hsWum.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/hsWum.jpg" alt="enter image description here"></a></p>
2
2016-08-23T21:51:16Z
39,112,583
<p>If all of your Excel files have the same number of rows and this is a one-time operation, you could simply hard-code those numbers in your <code>read_excel</code>. If not, it will be a little tricky, but you pretty much follow the same procedure:</p> <pre><code>for file_ in allFiles: top = pd.read_excel(file_, sheetname="Data1", parse_cols="B:O", index_col=None, header=4, skip_rows=3, nrows=14) # Note the nrows kwarg bot = pd.read_excel(file_, sheetname="Data1", parse_cols="B:L", index_col=None, header=21, skip_rows=20, nrows=14) list_.append(top.join(bot, lsuffix='_t', rsuffix='_b')) </code></pre>
1
2016-08-24T00:16:50Z
[ "python", "excel", "pandas", "dataframe" ]
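The `lsuffix`/`rsuffix` arguments in the answer matter because both halves of the sheet can share column names. A tiny sketch of that behavior on made-up frames:

```python
import pandas as pd

top = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
bot = pd.DataFrame({"A": [5, 6]})

# Without suffixes this join would raise on the overlapping column "A".
joined = top.join(bot, lsuffix="_t", rsuffix="_b")
print(list(joined.columns))  # ['A_t', 'B', 'A_b']
```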
Moving rows of data within pandas dataframe to end of last column
39,111,249
<p>Python newbie, please be gentle. I have data in two "middle sections" of multiple Excel spreadsheets that I would like to isolate into one pandas dataframe. Below is a link to a data screenshot. Within each file, my headers are in Row 4 with data in Rows 5-15, Columns B:O. The headers and data then continue with headers on Row 21, data in Rows 22-30, Columns B:L. I would like to move the headers and data from the second set and append them to the end of the first set of data. </p> <p>This code captures the header from Row 4 and data in Columns B:O but captures all Rows under the header including the second Header and second set of data. How do I move this second set of data and append it after the first set of data?</p> <pre><code>path =r'C:\Users\sarah\Desktop\Original' allFiles = glob.glob(path + "/*.xls") frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_excel(file_,sheetname="Data1", parse_cols="B:O",index_col=None, header=3, skip_rows=3 ) list_.append(df) frame = pd.concat(list_) </code></pre> <p><a href="http://i.stack.imgur.com/hsWum.jpg" rel="nofollow">Screenshot of my data</a></p> <p><a href="http://i.stack.imgur.com/hsWum.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/hsWum.jpg" alt="enter image description here"></a></p>
2
2016-08-23T21:51:16Z
39,119,954
<p>You can do it this way:</p> <pre><code>df1 = pd.read_excel(file_,sheetname="Data1", parse_cols="B:O",index_col=None, header=3, skip_rows=3) df2 = pd.read_excel(file_,sheetname="Data1", parse_cols="B:L",index_col=None, header=20, skip_rows=20) # pay attention to `axis=1` df = pd.concat([df1,df2], axis=1) </code></pre>
0
2016-08-24T09:52:47Z
[ "python", "excel", "pandas", "dataframe" ]
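The `axis=1` in the answer is what places the second block of columns beside (rather than below) the first. A minimal illustration on made-up frames:

```python
import pandas as pd

df1 = pd.DataFrame({"B": [1, 2], "C": [3, 4]})
df2 = pd.DataFrame({"D": [5, 6]})

wide = pd.concat([df1, df2], axis=1)   # side by side, aligned on the index
tall = pd.concat([df1, df2])           # default axis=0 stacks rows instead

print(wide.shape)  # (2, 3)
print(tall.shape)  # (4, 3)
```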
Automate google play search items in a list
39,111,251
<p>I am working on a Python project where I need to find out which apps each company owns. For example, I have a list:</p> <pre><code>company_name = ['Airbnb', 'WeFi'] </code></pre> <p>I would like to write a Python function/program to do the following:</p> <p>1 . have it <strong>automatically</strong> search for each item in the list in the Play Store</p> <p>2 . match the company name even if it only matches the first part, e.g. "Airbnb" will match "Airbnb, Inc"</p> <p><a href="http://i.stack.imgur.com/bn3L9.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/bn3L9.jpg" alt="Airbnb Search Page circled"></a></p> <ol start="3"> <li><p>Then it will click into the page and read its category <a href="http://i.stack.imgur.com/Y1C27.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Y1C27.jpg" alt="Airbnb Read category"></a></p></li> <li><p>If the company has more than one app, it will do the same for all apps.</p></li> <li><p>each app's information is stored as <code>tuple = {app name, category}</code></p></li> <li><p>The desired end result will be a list of tuples</p></li> </ol> <p>eg:</p> <pre><code>print(company_name[0]) print(type(company_name[0])) </code></pre> <p>outcome:<br> airbnb<br> tuple</p> <pre><code>print(company_name[0][0]) </code></pre> <p>outcome:<br> [('airbnb','Travel')]</p> <p>This mixes many areas of knowledge and I am a newbie to Python, so please give me some direction on how I should start writing the code. </p> <p>I learned Selenium could automate the "load more" function, but I am not sure exactly which package I should use. </p>
0
2016-08-23T21:51:23Z
39,112,347
<p>I've written a little demo that may help you to achieve your goal. I used requests and Beautiful Soup. It's not exactly what you wanted but it can be adapted easily.</p> <pre><code>import requests import bs4 company_name = "airbnb" def get_company(company_name): r = requests.get("https://play.google.com/store/search?q="+company_name) soup = bs4.BeautifulSoup(r.text, "html.parser") subtitles = soup.findAll("a", {'class':"subtitle"}) dev_urls = [] for title in subtitles: try: text = title.attrs["title"].lower() #Sometimes there is a subtitle without any text on GPlay #Catchs the error except KeyError: continue if company_name in text: url = "https://play.google.com" + title.attrs["href"] dev_urls.append(url) return dev_urls def get_company_apps_url(dev_url): r = requests.get(dev_url) soup = bs4.BeautifulSoup(r.text, "html.parser") titles = soup.findAll("a", {"class":"title"}) return ["https://play.google.com"+title.attrs["href"] for title in titles] def get_app_category(app_url): r = requests.get(app_url) soup = bs4.BeautifulSoup(r.text, "html.parser") developer_name = soup.find("span", {"itemprop":"name"}).text app_name = soup.find("div", {"class":"id-app-title"}).text category = soup.find("span", {"itemprop":"genre"}).text return (developer_name, app_name, category) dev_urls = get_company("airbnb") apps_urls = get_company_apps_url(dev_urls[0]) get_app_category(apps_urls[0]) &gt;&gt;&gt; get_company("airbnb") ['https://play.google.com/store/apps/developer?id=Airbnb,+Inc'] &gt;&gt;&gt; get_company_apps_url("https://play.google.com/store/apps/developer?id=Airbnb,+Inc") ['https://play.google.com/store/apps/details?id=com.airbnb.android'] &gt;&gt;&gt; get_app_category("https://play.google.com/store/apps/details?id=com.airbnb.android") ('Airbnb, Inc', 'Airbnb', 'Travel &amp; Local') </code></pre> <p>My script with google</p> <pre><code>dev_urls = get_company("google") apps_urls = get_company_apps_url(dev_urls[0]) for app in apps_urls: print(get_app_category(app)) 
('Google Inc.', 'Google Duo', 'Communication') ('Google Inc.', 'Google Translate', 'Tools') ('Google Inc.', 'Google Photos', 'Photography') ('Google Inc.', 'Google Earth', 'Travel &amp; Local') ('Google Inc.', 'Google Play Games', 'Entertainment') ('Google Inc.', 'Google Calendar', 'Productivity') ('Google Inc.', 'YouTube', 'Media &amp; Video') ('Google Inc.', 'Chrome Browser - Google', 'Communication') ('Google Inc.', 'Google Cast', 'Tools') ('Google Inc.', 'Google Sheets', 'Productivity') </code></pre>
0
2016-08-23T23:46:33Z
[ "python", "function", "web-scraping", "automation", "web-crawler" ]
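The loose-matching requirement from the question (point 2) is handled in the answer by a substring test on lowercased text; isolated as a sketch:

```python
def company_matches(company, subtitle):
    # Case-insensitive substring test, so "Airbnb" matches "Airbnb, Inc".
    return company.lower() in subtitle.lower()

print(company_matches("Airbnb", "Airbnb, Inc"))  # True
print(company_matches("WeFi", "Google Inc."))    # False
```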
Resizing recommendation On Google DataCloud
39,111,274
<p>I have set up a cluster of 24 high-memory CPUs (1 master: 8 vCPUs and 2 workers: 8 vCPUs). In the recommendation bar at the bottom of the first picture, it is advised to resize the master node to 10 CPUs, so 2 additional CPUs, because the master is overutilized. Nevertheless, the graph in the first picture shows I have not been above a CPU utilization of 12%. </p> <p><a href="http://i.stack.imgur.com/Ly1xN.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ly1xN.png" alt="Google DataProc Console"></a></p> <p>Additionally, when I go to the VM Instances page, another recommendation is made. It is advised to resize my master node from a high-memory one to a standard one, as shown in picture 2. So according to this recommendation I should size my cluster down.</p> <p><a href="http://i.stack.imgur.com/8Kem6.png" rel="nofollow"><img src="http://i.stack.imgur.com/8Kem6.png" alt="Recommendation on VM Instance Page"></a></p> <p>Is there someone who can give me a (logical) explanation of what I should do? I got the impression that my master and workers are not utilized at their full potential, as often a lot of CPU power is not used.</p>
0
2016-08-23T21:54:04Z
39,186,680
<p>Unfortunately, as mentioned in <a href="http://stackoverflow.com/questions/39073032/how-i-dynamically-upgrade-workers-cpu-ram-disk-in-dataproc">this related answer</a>, Dataproc doesn't currently support live reconfiguring of the already-running Hadoop/Spark services when you resize machines through the Google Compute Engine interface. Dataproc is optimized to make it easy to run ephemeral clusters, however, so that the quick cluster deployment time allows you to easily experiment with other cluster shapes or newer Dataproc image versions.</p> <p>For now, to try a new machine size you should create a new Dataproc cluster with the new settings. Looking at your historical CPU usage, I'd say the recommended upgrade from 8 cores to 10 cores probably isn't a strong enough signal as long as the brief periods of CPU overutilization don't appear to be causing any problems to your currently running jobs (in general Dataproc jobs are more likely to "overutilize" CPUs than, say, web frontend instances, and this doesn't necessarily mean you actually want more CPUs).</p> <p>The recommended memory downgrade of the master seems close enough to an <code>n1-standard-8</code> that if it were me I'd just try an n1-standard-8 for the master node next time I deploy the cluster rather than going so fine-grained with a custom machine type.</p> <p>If you really do want to try the custom machine types, Dataproc does support custom machine types when deploying with the <code>gcloud</code> command-line tool. <a href="https://cloud.google.com/dataproc/docs/concepts/custom-machine-types" rel="nofollow">Here are the instructions</a> for specifying a custom mix of CPU/RAM in a Dataproc command.</p>
1
2016-08-27T23:26:15Z
[ "python", "apache-spark", "apache-spark-sql", "google-cloud-dataproc" ]
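A sketch of what redeploying with a different master shape might look like with the `gcloud` tool, per the answer's suggestion (the cluster name and worker settings are placeholders; check the linked docs for the current flags and the custom machine type syntax):

```shell
# Create a fresh cluster with an n1-standard-8 master and the existing
# high-memory workers; custom types use the custom-CPUS-MEMORY_MB syntax.
gcloud dataproc clusters create my-cluster \
    --master-machine-type n1-standard-8 \
    --worker-machine-type n1-highmem-8 \
    --num-workers 2
```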
How do I convert a MultiIndex to type string
39,111,347
<p>consider the MultiIndex <code>idx</code></p> <pre><code>idx = pd.MultiIndex.from_product([range(2013, 2016), range(1, 5)]) </code></pre> <p>When I do</p> <pre><code>idx.to_series().str.join(' ') </code></pre> <p>I get</p> <pre><code>2013 1 NaN 2 NaN 3 NaN 4 NaN 2014 1 NaN 2 NaN 3 NaN 4 NaN 2015 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 </code></pre> <p>This happens because the dtypes of the different levels are <code>int</code> and not <code>str</code>. <code>join</code> expects a <code>str</code>. How do I convert the whole <code>idx</code> to <code>str</code>?</p> <p>I've done</p> <pre><code>join = lambda x, delim=' ': delim.join([str(y) for y in x]) idx.to_series().apply(join, delim=' ') 2013 1 2013 1 2 2013 2 3 2013 3 4 2013 4 2014 1 2014 1 2 2014 2 3 2014 3 4 2014 4 2015 1 2015 1 2 2015 2 3 2015 3 4 2015 4 dtype: object </code></pre> <p>I expect there is a simpler way that I'm overlooking.</p>
4
2016-08-23T22:00:11Z
39,111,524
<p>I'm not sure it's the most elegant way, but it should work:</p> <pre><code>idx.get_level_values(0).astype(str).values + ' ' + idx.get_level_values(1).astype(str).values </code></pre>
2
2016-08-23T22:16:34Z
[ "python", "pandas", "multi-index" ]
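The answer's expression can be sanity-checked on the question's index (a quick sketch, assuming a reasonably recent pandas):

```python
import pandas as pd

idx = pd.MultiIndex.from_product([range(2013, 2016), range(1, 5)])
labels = (idx.get_level_values(0).astype(str).values
          + " "
          + idx.get_level_values(1).astype(str).values)
print(labels[:3].tolist())  # ['2013 1', '2013 2', '2013 3']
```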
How do I convert a MultiIndex to type string
39,111,347
<p>consider the MultiIndex <code>idx</code></p> <pre><code>idx = pd.MultiIndex.from_product([range(2013, 2016), range(1, 5)]) </code></pre> <p>When I do</p> <pre><code>idx.to_series().str.join(' ') </code></pre> <p>I get</p> <pre><code>2013 1 NaN 2 NaN 3 NaN 4 NaN 2014 1 NaN 2 NaN 3 NaN 4 NaN 2015 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 </code></pre> <p>This happens because the dtypes of the different levels are <code>int</code> and not <code>str</code>. <code>join</code> expects a <code>str</code>. How do I convert the whole <code>idx</code> to <code>str</code>?</p> <p>I've done</p> <pre><code>join = lambda x, delim=' ': delim.join([str(y) for y in x]) idx.to_series().apply(join, delim=' ') 2013 1 2013 1 2 2013 2 3 2013 3 4 2013 4 2014 1 2014 1 2 2014 2 3 2014 3 4 2014 4 2015 1 2015 1 2 2015 2 3 2015 3 4 2015 4 dtype: object </code></pre> <p>I expect there is a simpler way that I'm overlooking.</p>
4
2016-08-23T22:00:11Z
39,111,543
<p>Something like this?</p> <pre><code>idx.to_series().apply(lambda x: '{0}-{1}'.format(*x)) </code></pre>
3
2016-08-23T22:17:32Z
[ "python", "pandas", "multi-index" ]
How do I convert a MultiIndex to type string
39,111,347
<p>consider the MultiIndex <code>idx</code></p> <pre><code>idx = pd.MultiIndex.from_product([range(2013, 2016), range(1, 5)]) </code></pre> <p>When I do</p> <pre><code>idx.to_series().str.join(' ') </code></pre> <p>I get</p> <pre><code>2013 1 NaN 2 NaN 3 NaN 4 NaN 2014 1 NaN 2 NaN 3 NaN 4 NaN 2015 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 </code></pre> <p>This happens because the dtypes of the different levels are <code>int</code> and not <code>str</code>. <code>join</code> expects a <code>str</code>. How do I convert the whole <code>idx</code> to <code>str</code>?</p> <p>I've done</p> <pre><code>join = lambda x, delim=' ': delim.join([str(y) for y in x]) idx.to_series().apply(join, delim=' ') 2013 1 2013 1 2 2013 2 3 2013 3 4 2013 4 2014 1 2014 1 2 2014 2 3 2014 3 4 2014 4 2015 1 2015 1 2 2015 2 3 2015 3 4 2015 4 dtype: object </code></pre> <p>I expect there is a simpler way that I'm overlooking.</p>
4
2016-08-23T22:00:11Z
39,112,164
<p>A general solution using <code>starmap</code> from <code>itertools</code>:</p> <pre><code>from itertools import starmap import pandas as pd def flat(midx, sep=''): fstr = sep.join(['{}'] * midx.nlevels) return pd.Index(starmap(fstr.format, midx)) </code></pre> <hr> <h3>Demonstration</h3> <pre><code>midx = pd.MultiIndex.from_product([[1, 2], [3, 4]]) </code></pre> <hr> <pre><code>flat(midx) Index([u'13', u'14', u'23', u'24'], dtype='object') </code></pre> <hr> <pre><code>flat(midx, '_') Index([u'1_3', u'1_4', u'2_3', u'2_4'], dtype='object') </code></pre>
1
2016-08-23T23:22:11Z
[ "python", "pandas", "multi-index" ]
How do I convert a MultiIndex to type string
39,111,347
<p>consider the MultiIndex <code>idx</code></p> <pre><code>idx = pd.MultiIndex.from_product([range(2013, 2016), range(1, 5)]) </code></pre> <p>When I do</p> <pre><code>idx.to_series().str.join(' ') </code></pre> <p>I get</p> <pre><code>2013 1 NaN 2 NaN 3 NaN 4 NaN 2014 1 NaN 2 NaN 3 NaN 4 NaN 2015 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 </code></pre> <p>This happens because the dtypes of the different levels are <code>int</code> and not <code>str</code>. <code>join</code> expects a <code>str</code>. How do I convert the whole <code>idx</code> to <code>str</code>?</p> <p>I've done</p> <pre><code>join = lambda x, delim=' ': delim.join([str(y) for y in x]) idx.to_series().apply(join, delim=' ') 2013 1 2013 1 2 2013 2 3 2013 3 4 2013 4 2014 1 2014 1 2 2014 2 3 2014 3 4 2014 4 2015 1 2015 1 2 2015 2 3 2015 3 4 2015 4 dtype: object </code></pre> <p>I expect there is a simpler way that I'm overlooking.</p>
4
2016-08-23T22:00:11Z
39,138,897
<p>The fastest are list comprehensions:</p> <pre><code>print (['{} {}'.format(i[1], i[0]) for i in idx]) print ([' '.join((str(i[0]), str(i[1]))) for i in idx]) </code></pre> <p><strong>Timings</strong>:</p> <pre><code>In [21]: %timeit (['{} {}'.format(i[1], i[0]) for i in idx]) The slowest run took 4.68 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 7.51 µs per loop In [22]: %timeit ([' '.join((str(i[0]), str(i[1]))) for i in idx]) The slowest run took 6.48 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 9.62 µs per loop In [23]: %timeit (idx.get_level_values(0).astype(str).values + ' ' + idx.get_level_values(1).astype(str).values) The slowest run took 5.91 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 3: 215 µs per loop In [24]: %timeit idx.to_series().apply(lambda x: '{0}-{1}'.format(*x)) The slowest run took 5.43 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 3: 369 µs per loop In [25]: %timeit idx.to_series().str.join(' ') The slowest run took 5.53 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 3: 394 µs per loop </code></pre>
1
2016-08-25T07:13:04Z
[ "python", "pandas", "multi-index" ]
Bokeh web server app at localhost to html file
39,111,369
<p>I have been working with <code>bokeh web server</code>.</p> <p>I created a web app, using my own data and following this example: <a href="https://github.com/bokeh/bokeh/blob/master/examples/app/movies/main.py" rel="nofollow">https://github.com/bokeh/bokeh/blob/master/examples/app/movies/main.py</a></p> <p>I already finished the script and everything went ok. I can see the result using this command: <code>bokeh serve --show main.py</code></p> <p>The modules I used to create the web app were:</p> <pre><code>from os.path import dirname, join from pandas import Series, DataFrame from bokeh.plotting import figure from bokeh.layouts import layout, widgetbox from bokeh.models import ColumnDataSource, HoverTool, Div from bokeh.models.widgets import Slider, Select, TextInput from bokeh.io import curdoc from scipy import stats import numpy as np import pandas </code></pre> <p>However, my goal is to upload the result to my <code>gh-pages</code> branch on github.</p> <p>How can I save the result from <code>bokeh</code> as html file in order to use it in a web page?</p> <p>I tried using <code>show</code> from <code>bokeh.plotting</code>, but it shows the localhost path as the command <code>bokeh serve --show main.py</code> did. </p> <p>Is there an other command I could use?</p> <p>Any suggestions are appreciated! Thanks in advance.</p> <h1>UPDATE</h1> <p>I use this code to get a solution. 
With this code I got a html file as output, but it needs to be improved.</p> <pre><code>from os.path import dirname, join from pandas import Series, DataFrame from bokeh.plotting import figure from bokeh.layouts import layout, widgetbox from bokeh.models import ColumnDataSource, HoverTool, Div from bokeh.models.widgets import Slider, Select, TextInput from bokeh.io import curdoc from bokeh.resources import JSResources from bokeh.embed import file_html from bokeh.util.browser import view from jinja2 import Template from scipy import stats import numpy as np import pandas csvdata = pandas.read_csv('Alimentacion.csv', low_memory = False, encoding = 'latin-1') # Convert amount field into int() def str_to_int(mainList): for item in mainList: newList = [(int(item.replace('$', '').replace(',', '')) / (1000000)) for item in mainList] return newList # Call str_to_int function csvdata['CuantiaInt'] = str_to_int(csvdata['Cuantía']) mean = np.mean(csvdata['CuantiaInt']) # Assing colors to each contract by mean csvdata['color'] = np.where(csvdata['CuantiaInt'] &gt; mean, 'red', 'blue') csvdata['alpha'] = np.where(csvdata['CuantiaInt'] &gt; mean, 0.75, 0.75) # Replace missing values (NaN) with 0 csvdata.fillna(0, inplace=True) csvdata['revenue'] = csvdata.CuantiaInt.apply(lambda x: '{:,d}'.format(int(x))) estados1 = [line.rstrip() for line in open('Estados1.txt')] estados2 = [line.rstrip() for line in open('Estados2.txt')] csvdata.loc[csvdata.Estado.isin(estados1), 'color'] = 'grey' csvdata.loc[csvdata.Estado.isin(estados1), 'alpha'] = 0.75 csvdata.loc[csvdata.Estado.isin(estados2), 'color'] = 'brown' csvdata.loc[csvdata.Estado.isin(estados2), 'alpha'] = 0.75 csvdata['z score'] = stats.zscore(csvdata['CuantiaInt']) csvdata['sigma'] = np.std(csvdata['CuantiaInt']) date_time = pandas.DatetimeIndex(csvdata['Fecha (dd-mm-aaaa)']) newdates = date_time.strftime('%Y') newdates = [int(x) for x in newdates] csvdata['dates'] = newdates csvdata['Dptos'] = csvdata['Loc dpto'] 
csvdata['Entidad'] = csvdata['Entidad Compradora'] csvdata['Proceso'] = csvdata['Tipo de Proceso'] axis_map = { 'Cuantía y promedio': 'z score', 'Cuantía (Millones de pesos)': 'CuantiaInt', 'Desviación estándar': 'sigma', 'Fecha del contrato': 'dates', } desc = Div(text=open(join(dirname(__file__), 'alimentacion.html')).read(), width=800) DptosList = [line.rstrip() for line in open('locdpto.txt')] ProcesosList = [line.rstrip() for line in open('tipoproceso.txt')] EntidadesList = [line.rstrip() for line in open('entidades.txt')] # Create Input controls min_year = Slider(title = 'Año inicial', start = 2012, end = 2015, value = 2013, step = 1) max_year = Slider(title = 'Año final', start = 2012, end = 2015, value = 2014, step = 1) boxoffice = Slider(title = 'Costo del contrato (Millones de pesos)', start = 0, end = 77000, value = 0, step = 2) dptos = Select(title = 'Departamentos', value = 'Todos los departamentos', options = DptosList) proceso = Select(title = 'Tipo de Proceso', value = 'Todos los procesos', options = ProcesosList) entidades = Select(title = 'Entidad Compradora', value = 'Todas las entidades', options = EntidadesList) objeto = TextInput(title='Objeto del contrato') x_axis = Select(title = 'X Axis', options = sorted(axis_map.keys()), value = 'Fecha del contrato') y_axis = Select(title = 'Y Axis', options = sorted(axis_map.keys()), value = 'Cuantía (Millones de pesos)') # Create Column Data Source that will be used by the plot source = ColumnDataSource(data=dict(x=[], y=[], color=[], entidad=[], year=[], revenue=[], alpha=[])) hover = HoverTool(tooltips=[ ("Entidad", "@entidad"), ("Año", "@year"), ("$", "@revenue" + ' Millones de pesos') ]) p = figure(plot_height=500, plot_width=700, title='', toolbar_location=None, tools=[hover]) p.circle(x = 'x', y = 'y', source = source, size = 7, color = 'color', line_color = None, fill_alpha = 'alpha') def select_contracts(): dptos_val = dptos.value proceso_val = proceso.value entidades_val = entidades.value 
objeto_val = objeto.value.strip() selected = csvdata[ (csvdata.dates &gt;= min_year.value) &amp; (csvdata.dates &lt;= max_year.value) &amp; (csvdata.CuantiaInt &gt;= boxoffice.value) ] if dptos_val != 'Todos los departamentos': selected = selected[selected.Dptos.str.contains(dptos_val) == True] if proceso_val != 'Todos los procesos': selected = selected[selected.Proceso.str.contains(proceso_val) == True] if entidades_val != 'Todas las entidades': selected = selected[selected.Entidad.str.contains(entidades_val) == True] if objeto_val != '': selected = selected[selected.Objeto.str.contains(objeto_val) == True] return selected def update(): df = select_contracts() x_name = axis_map[x_axis.value] y_name = axis_map[y_axis.value] p.xaxis.axis_label = x_axis.value p.yaxis.axis_label = y_axis.value p.title.text = '%d contratos seleccionados' % len(df) source.data = dict( x = df[x_name], y = df[y_name], color = df['color'], entidad = df['Entidad'], year = df['dates'], revenue = df["revenue"], alpha = df['alpha'], ) controls = [min_year, max_year, boxoffice, dptos, proceso, entidades, objeto, x_axis, y_axis] for control in controls: control.on_change('value', lambda attr, old, new: update()) sizing_mode = 'fixed' inputs = widgetbox(*controls, sizing_mode=sizing_mode) l = layout([ [desc], [inputs, p], ], sizing_mode=sizing_mode) update() curdoc().add_root(l) curdoc().title = "Contratos" with open('../Contratos/Alimentación/alimentacion.jinja', 'r') as f: template = Template(f.read()) js_resources = JSResources(mode='inline') html = file_html(l, resources=(js_resources, None), title="Contracts", template=template) output_file = '../test.html' with open(output_file, 'w') as f: f.write(html) view(output_file) </code></pre>
1
2016-08-23T22:02:25Z
39,111,847
<p>If your app makes calls to actual python libraries (e.g. <code>numpy</code> and <code>pandas</code> that you show above) in any of its event callbacks (in fact, if it even has any <code>on_change</code> callbacks <em>at all</em>), then it is not possible to make a "standalone HTML file" (i.e. that can be simply uploaded in isolation) that will reproduce its functionality. Specifically: browsers cannot execute python code and do not have <code>numpy</code> or <code>pandas</code>. The main purpose of the Bokeh server is to be <em>the place where python code can run</em>, in response to UI events. You will need to find some actual server somewhere to run and host a Bokeh server. </p> <p>If you have the Bokeh server hosted somewhere permanently, and are asking how to embed a Bokeh app running on it into a static page on <code>gh-pages</code>, then the answer is to use <code>autoload_server</code>; alternatively, embedding the server app URL with an <code>&lt;iframe&gt;</code> works perfectly well. </p>
1
2016-08-23T22:46:34Z
[ "python", "python-3.x", "pandas", "bokeh" ]
Tensorflow: chaining tf.gather() produces IndexedSlices warning
39,111,373
<p>I'm running into an issue where chaining <code>tf.gather()</code> indexing produces the following warning:</p> <pre><code>/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. " </code></pre> <p>The scenario arises when one layer indexes into the input layer, performs some operation on the corresponding slice, and then the next layer indexes into the result. Here's a representative example:</p> <pre><code>import tensorflow as tf ## 10-Dimensional data will be fed to the model X = tf.placeholder( tf.float32, [10, None] ) ## W works with the first 3 features of a sample W = tf.Variable( tf.ones( [5, 3] ) ) Xi = tf.gather( X, [0,1,2] ) mm = tf.matmul( W, Xi ) ## Indexing into the result produces a warning during backprop h = tf.gather( mm, [0,1] ) ... train_step = tf.train.AdamOptimizer(1e-4).minimize( loss ) </code></pre> <p>The warning arises upon definition of <code>train_step</code> and goes away if the second <code>tf.gather()</code> call is taken away. The warning also goes away if <code>X</code> is provided with an explicit number of samples (e.g., <code>[10, 1000]</code>).</p> <p>Thoughts?</p>
0
2016-08-23T22:02:59Z
39,131,434
<p>The gradient <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_grad.py#L277" rel="nofollow">function</a> of the <code>tf.gather</code> operation returns an <code>IndexedSlices</code>-typed value. In your program, the input to the second <code>tf.gather</code> is the result of a <code>tf.matmul</code> (<code>mm</code>). Consequently, the gradient function for matrix multiply is passed an <code>IndexedSlices</code> value.</p> <p>Now, imagine what the gradient function for <code>tf.matmul</code> needs to do. To compute the gradient w.r.t. <code>W</code>, it has to multiply the incoming gradients with the transpose of <code>Xi</code>. In this case, the incoming gradient is an <code>IndexedSlices</code> type, and <code>Xi</code>'s transpose is a dense tensor (<code>Tensor</code>) type. TensorFlow doesn't have an implementation of matrix multiply that can operate on <code>IndexedSlices</code> and <code>Tensor</code>. So it simply converts the <code>IndexedSlices</code> to a <code>Tensor</code> before calling <code>tf.matmul</code>.</p> <p>If you look at the code for that conversion function <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/gradients.py#L53" rel="nofollow">here</a>, you'll notice that it prints out a warning when this sparse-to-dense conversion might result in either a very large dense tensor (<code>_LARGE_SPARSE_NUM_ELEMENTS</code> determines how large) or a dense tensor of unknown size. When you shape your placeholder <code>X</code> with shape <code>[10, None]</code>, this conversion happens on an <code>IndexedSlices</code> with unknown shape (really, only one of the dimensions is unknown, but it is still not possible to determine the resulting shape statically), hence you see the warning printed out.
Once you set the shape of <code>X</code> to <code>[10, 1000]</code>, the shape of <code>IndexedSlices</code> becomes fully specified, <em>AND</em> the resulting dense tensor size is within the threshold, so you don't see the warning printed out.</p> <p>For your computation, if you simply cannot avoid the <code>tf.gather</code> on the result of a <code>tf.matmul</code>, then I wouldn't worry about this warning too much, unless the number of columns in <code>X</code> is extremely large.</p>
2
2016-08-24T19:19:31Z
[ "python", "neural-network", "tensorflow" ]
Python - iterate over JSON results
39,111,376
<p>In order to get <code>energy</code> values, I am trying to iterate over this <code>JSON</code> result:</p> <pre><code>[{u'track_href': u'https://api.spotify.com/v1/tracks/7603o589huckPbiELnUKgu', u'analysis_url': u'https://api.spotify.com/v1/audio-analysis/7603o589huckPbiELnUKgu', u'energy': 0.526, u'liveness': 0.0966, u'tempo': 92.979, u'speechiness': 0.103, u'uri': u'spotify:track:7603o589huckPbiELnUKgu', u'acousticness': 0.176, u'instrumentalness': 0.527, u'time_signature': 4, u'danceability': 0.635, u'key': 1, u'duration_ms': 172497, u'loudness': -8.073, u'valence': 0.267, u'type': u'audio_features', u'id': u'7603o589huckPbiELnUKgu', u'mode': 1},...}] </code></pre> <p>I'm using list comprehension:</p> <pre><code>[x['energy'] for x in features] </code></pre> <p>But I'm getting the following error:</p> <pre><code>print ([x['energy'] for x in features]) TypeError: 'NoneType' object has no attribute '__getitem__' </code></pre> <p>What am I doing wrong here?</p>
0
2016-08-23T22:03:06Z
39,111,484
<p>It is because within your JSON array, at some place instead of dictionary a <code>None</code> value is present. You may eliminate it via:</p> <pre><code>[x['energy'] for x in features if x] </code></pre>
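To sketch why the filter above helps (using made-up sample data, since the full JSON isn't shown), the comprehension can also guard against dicts that lack the <code>energy</code> key:

```python
# Hypothetical sample: one entry is None, one lacks the 'energy' key
features = [
    {'energy': 0.526, 'tempo': 92.979},
    None,
    {'energy': 0.81},
    {'tempo': 120.0},
]

# `if x` skips None entries; the key check skips malformed dicts
energies = [x['energy'] for x in features if x and 'energy' in x]
print(energies)  # [0.526, 0.81]
```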
3
2016-08-23T22:13:20Z
[ "python", "json", "iteration" ]
Replace string by matching using regular expression python
39,111,379
<p>Original String:</p> <pre><code>necessary information Leave a Comment unnecessary information </code></pre> <p>Required String:</p> <pre><code>necessary information </code></pre> <p>I have multiple strings that are in the above mentioned 'Original String' format. I want to remove the 'Leave a Comment' and 'unnecessary information'.</p> <p>Since 'Leave a Comment' is common in all string so I can build a Regular Expression on that? And use that in</p> <pre><code>string.replace( string , pattern ) </code></pre> <p>The pattern here will be a Regular Expression. How can I write a RE for this?</p>
0
2016-08-23T22:03:23Z
39,111,448
<pre><code>&gt;&gt;&gt; s 'necessary information Leave a Comment unnecessary information' &gt;&gt;&gt; re.sub(r'\sLeave a Comment.*', '', s) 'necessary information' </code></pre>
0
2016-08-23T22:09:04Z
[ "python", "regex" ]
Parse out a URL with regex operation in python
39,111,429
<p>I have data as follows,</p> <p>data</p> <pre><code>url http://hostname.com/part1/part2/part3/a+b+c+d http://m.hostname.com/part3.html?nk!e+f+g+h&amp;_junk http://hostname.com/as/ck$st=f+g+h+k+i/ http://www.hostname.com/p-l-k?wod=q+w+e+r+t africa </code></pre> <p>I want to check for the first + symbol in the url and move backward until we find a special character such as / or ? or = or any other special character, start from that, and go on until we find a space, end of line, &amp; or /. My output should be:</p> <pre><code>parsed abcd efgh fghki qwert </code></pre> <p>My aim is to find the first + in the URL, go back until we find a special character, and go forward until we find an end of line, space or &amp; symbol.</p> <p>I am new to regex and still learning it, and since this is a bit complex, I am finding it difficult to write. Can anybody help me in writing a regex in python to parse these out?</p> <p>Thanks</p>
3
2016-08-23T22:07:33Z
39,111,537
<p>The appropriate regex to parse the required characters is <code>((.\+)+.)</code>. I am using JavaScript regex here, but you should be able to implement it in Python as well.</p> <p>This regex will extract <code>a+b+c+d</code> from your first URL. It will need to be processed a little bit more to get <code>abcd</code> from <code>a+b+c+d</code>.</p> <p>I will update this with a Python function in a bit.</p>
1
2016-08-23T22:17:13Z
[ "python", "regex", "python-2.7", "python-3.x", "pyspark" ]
Parse out a URL with regex operation in python
39,111,429
<p>I have data as follows,</p> <p>data</p> <pre><code>url http://hostname.com/part1/part2/part3/a+b+c+d http://m.hostname.com/part3.html?nk!e+f+g+h&amp;_junk http://hostname.com/as/ck$st=f+g+h+k+i/ http://www.hostname.com/p-l-k?wod=q+w+e+r+t africa </code></pre> <p>I want to check for the first + symbol in the url and move backward until we find a special character such as / or ? or = or any other special character, start from that, and go on until we find a space, end of line, &amp; or /. My output should be:</p> <pre><code>parsed abcd efgh fghki qwert </code></pre> <p>My aim is to find the first + in the URL, go back until we find a special character, and go forward until we find an end of line, space or &amp; symbol.</p> <p>I am new to regex and still learning it, and since this is a bit complex, I am finding it difficult to write. Can anybody help me in writing a regex in python to parse these out?</p> <p>Thanks</p>
3
2016-08-23T22:07:33Z
39,111,583
<p>Here is the expression that works for your sample use cases:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; &gt;&gt;&gt; l = [ ... "http://hostname.com/part1/part2/part3/a+b+c+d", ... "http://m.hostname.com/part3.html?nk!e+f+g+h&amp;_junk", ... "http://hostname.com/as/ck$st=f+g+h+k+i/", ... "http://www.hostname.com/p-l-k?wod=q+w+e+r+t africa" ... ] &gt;&gt;&gt; &gt;&gt;&gt; pattern = re.compile(r"[^\w\+]([\w\+]+\+[\w\+]+)(?:[^\w\+]|$)") &gt;&gt;&gt; for item in l: ... print("".join(pattern.search(item).group(1).split("+"))) ... abcd efgh fghki qwert </code></pre> <p>The idea is basically to capture alphanumerics and a plus character that is between the non-alphanumerics and non-plus character or the end of the string. Then, split by plus and join.</p> <p><a href="https://regex101.com/r/aE6rC1/1" rel="nofollow">Regex101 link.</a></p> <p>I have a feeling that it can be further simplified/improved.</p>
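For comparison (not part of either answer above), the back-and-forward scan the question describes can also be written without a regex at all; a rough sketch, with the special-character set chosen from the sample URLs:

```python
def extract(url):
    """Find the first '+', scan back to a special char, forward to the next."""
    i = url.find('+')
    if i == -1:
        return None
    specials = set('/?=&$!# ')
    start = i
    while start > 0 and url[start - 1] not in specials:
        start -= 1                       # walk back to the delimiter
    end = i
    while end < len(url) and url[end] not in specials:
        end += 1                         # walk forward to the delimiter/EOL
    return url[start:end].replace('+', '')

print(extract("http://hostname.com/part1/part2/part3/a+b+c+d"))        # abcd
print(extract("http://m.hostname.com/part3.html?nk!e+f+g+h&_junk"))    # efgh
print(extract("http://hostname.com/as/ck$st=f+g+h+k+i/"))              # fghki
print(extract("http://www.hostname.com/p-l-k?wod=q+w+e+r+t africa"))   # qwert
```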
1
2016-08-23T22:21:09Z
[ "python", "regex", "python-2.7", "python-3.x", "pyspark" ]
how to generate n random integers of k digits
39,111,489
<p>Is there a way to create n random integers of k digits?</p> <p>For example: 2000 random integers comprising of <code>[0, 2, 3]</code>.</p> <p>My trick was to use a random number generator and then assign values based on the ranges.</p> <p>But I was wondering if there is a better way to do this in python?</p> <p>Edit: Example: [0,0,0, 2, 2,3,0,0,2,2,..... 2000 elements] comprising of 0, 2 and 3. My approach: </p> <pre><code> def assign(x): if x&lt; 0.3: return 0 elif x&lt;0.6: return 2 else: return 3 x = np.random.rand(num) x = map(lambda x: assign(x), x) </code></pre>
-2
2016-08-23T22:13:56Z
39,111,517
<p>Now after your edit, it is clear what you want. You want discrete-sampling of some elements within a container.</p> <p>Just prepare your classes and do this:</p> <pre><code>import numpy as np classes = [0, 2, 3] samples = np.random.choice(classes, 2000) </code></pre> <p>If you want some specific probabilities:</p> <pre><code>import numpy as np classes = [0, 2, 3] samples = np.random.choice(classes, 2000, p=[0.3, 0.3, 0.4]) </code></pre> <p>See the <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.choice.html" rel="nofollow">docs</a>.</p> <p>The implementation should be much faster than your approach which is sometimes called roulette-wheel-sampling or linear-search-sampling. There are some possible algorithms mentioned at <a href="https://en.wikipedia.org/wiki/Pseudo-random_number_sampling" rel="nofollow">wiki</a>.</p>
1
2016-08-23T22:16:12Z
[ "python" ]
how to generate n random integers of k digits
39,111,489
<p>Is there a way to create n random integers of k digits?</p> <p>For example: 2000 random integers comprising of <code>[0, 2, 3]</code>.</p> <p>My trick was to use a random number generator and then assign values based on the ranges.</p> <p>But I was wondering if there is a better way to do this in python?</p> <p>Edit: Example: [0,0,0, 2, 2,3,0,0,2,2,..... 2000 elements] comprising of 0, 2 and 3. My approach: </p> <pre><code> def assign(x): if x&lt; 0.3: return 0 elif x&lt;0.6: return 2 else: return 3 x = np.random.rand(num) x = map(lambda x: assign(x), x) </code></pre>
-2
2016-08-23T22:13:56Z
39,111,565
<p>You may achieve it via a <code>list comprehension</code>. In order to show the result I am using <code>20</code>. Change it to <code>2000</code> as per your requirement.</p> <pre><code>&gt;&gt;&gt; import random &gt;&gt;&gt; x = 20 &gt;&gt;&gt; [random.choice([0, 2, 3]) for i in range(x)] [2, 2, 3, 2, 0, 2, 3, 2, 3, 3, 3, 0, 3, 2, 3, 2, 3, 2, 3, 2] </code></pre>
1
2016-08-23T22:19:10Z
[ "python" ]
how to generate n random integers of k digits
39,111,489
<p>Is there a way to create n random integers of k digits?</p> <p>For example: 2000 random integers comprising of <code>[0, 2, 3]</code>.</p> <p>My trick was to use a random number generator and then assign values based on the ranges.</p> <p>But I was wondering if there is a better way to do this in python?</p> <p>Edit: Example: [0,0,0, 2, 2,3,0,0,2,2,..... 2000 elements] comprising of 0, 2 and 3. My approach: </p> <pre><code> def assign(x): if x&lt; 0.3: return 0 elif x&lt;0.6: return 2 else: return 3 x = np.random.rand(num) x = map(lambda x: assign(x), x) </code></pre>
-2
2016-08-23T22:13:56Z
39,111,590
<p>From the sounds of it, it looks like you want to generate a sequence of length <code>n</code> using only the values found within the list <code>k</code>.</p> <p>Python's <code>random.choice</code> function combined with list comprehension is perfect for this.</p> <p>The following function will generate a list of length n with each element being a random element chosen from <code>k</code>.</p> <pre><code>from random import choice def random_choices(n, k): return [choice(k) for _ in xrange(n)] </code></pre> <p>Here is the same thing as simple list comprehension.</p> <pre><code>from random import choice foo = [choice(k) for _ in xrange(n)] </code></pre> <p>*Thanks to Mr.goosberry for pointing out that <code>xrange</code> should be replaced with <code>range</code> in python 3.x.x.</p>
3
2016-08-23T22:21:42Z
[ "python" ]
how to generate n random integers of k digits
39,111,489
<p>Is there a way to create n random integers of k digits?</p> <p>For example: 2000 random integers comprising of <code>[0, 2, 3]</code>.</p> <p>My trick was to use a random number generator and then assign values based on the ranges.</p> <p>But I was wondering if there is a better way to do this in python?</p> <p>Edit: Example: [0,0,0, 2, 2,3,0,0,2,2,..... 2000 elements] comprising of 0, 2 and 3. My approach: </p> <pre><code> def assign(x): if x&lt; 0.3: return 0 elif x&lt;0.6: return 2 else: return 3 x = np.random.rand(num) x = map(lambda x: assign(x), x) </code></pre>
-2
2016-08-23T22:13:56Z
39,111,734
<p>You're willing to use numpy, so I'd recommend you use <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow">np.random.choice</a>, i.e.:</p> <pre><code>import numpy as np N = 2000 print [np.random.choice([0, 2, 3], p=[1/3.0, 1/3.0, 1/3.0]) for x in range(N)] </code></pre>
1
2016-08-23T22:35:37Z
[ "python" ]
how to generate n random integers of k digits
39,111,489
<p>Is there a way to create n random integers of k digits?</p> <p>For example: 2000 random integers comprising of <code>[0, 2, 3]</code>.</p> <p>My trick was to use a random number generator and then assign values based on the ranges.</p> <p>But I was wondering if there is a better way to do this in python?</p> <p>Edit: Example: [0,0,0, 2, 2,3,0,0,2,2,..... 2000 elements] comprising of 0, 2 and 3. My approach: </p> <pre><code> def assign(x): if x&lt; 0.3: return 0 elif x&lt;0.6: return 2 else: return 3 x = np.random.rand(num) x = map(lambda x: assign(x), x) </code></pre>
-2
2016-08-23T22:13:56Z
39,111,891
<pre><code>import numpy as np N = 10 # Generate three vectors of size N zeros = np.zeros((N,), dtype = np.int) twos = np.zeros((N,), dtype = np.int) + 2 threes = np.zeros((N,), dtype = np.int) + 3 # Concatenate and permute them all together df = [] df = np.append(df, [zeros, twos, threes]) np.random.shuffle(df) # shuffles df in place and returns None print(df) </code></pre>
0
2016-08-23T22:50:32Z
[ "python" ]
Python Pyserial - Threading
39,111,502
<p>Python 2.7</p> <p>This is the code I have. Could you please tell me whats wrong. This is the beast I could come up with after studying threading for over two days continuously.</p> <p>The serial communications work when I dont use threading. </p> <pre><code>import threading import time import sys import serial import os import time def Task1(ser): while 1: print "Inside Thread 1" ser.write('\x5A\x03\x02\x02\x02\x09') # Byte ArrayTo Control a MicroProcessing Unit b = ser.read(7) print b.encode('hex') print "Thread 1 still going on" time.sleep(1) def Task2(ser): print "Inside Thread 2" print "I stopped Task 1 to start and execute Thread 2" ser.write('x5A\x03\x02\x08\x02\x0F') c = ser.read(7) print c.encode('hex') print "Thread 2 complete" def Main(): ser = serial.Serial(3, 11520) t1 = threading.Thread(target = Task1, args=[ser]) t2 = threading.Thread(target = Task2, args=[ser]) print "Starting Thread 1" t1.start() print "Starting Thread 2" t2.start() print "=== exiting ===" ser.close() if __name__ == '__main__': Main() </code></pre>
0
2016-08-23T22:15:21Z
39,111,685
<p>You are not properly syncing the threads. I suggest putting the <code>ser</code> object into the global namespace and using a lock, mutex or semaphore to prevent the two threads from accessing the single <code>ser</code> object at the same time. </p> <p>Python Module of the Week explains it best <a href="https://pymotw.com/2/threading/#controlling-access-to-resources" rel="nofollow">here</a></p>
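A minimal sketch of the lock pattern described there, with a shared counter standing in for the shared serial port (in the real code the lock would be acquired around each <code>ser.write()</code>/<code>ser.read()</code> pair):

```python
import threading

lock = threading.Lock()
counter = 0  # stand-in for the shared serial-port resource

def bump(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread at a time touches the resource
            counter += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000 -- no updates lost
```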
0
2016-08-23T22:29:50Z
[ "python", "multithreading", "pyserial" ]
Python Pyserial - Threading
39,111,502
<p>Python 2.7</p> <p>This is the code I have. Could you please tell me whats wrong. This is the beast I could come up with after studying threading for over two days continuously.</p> <p>The serial communications work when I dont use threading. </p> <pre><code>import threading import time import sys import serial import os import time def Task1(ser): while 1: print "Inside Thread 1" ser.write('\x5A\x03\x02\x02\x02\x09') # Byte ArrayTo Control a MicroProcessing Unit b = ser.read(7) print b.encode('hex') print "Thread 1 still going on" time.sleep(1) def Task2(ser): print "Inside Thread 2" print "I stopped Task 1 to start and execute Thread 2" ser.write('x5A\x03\x02\x08\x02\x0F') c = ser.read(7) print c.encode('hex') print "Thread 2 complete" def Main(): ser = serial.Serial(3, 11520) t1 = threading.Thread(target = Task1, args=[ser]) t2 = threading.Thread(target = Task2, args=[ser]) print "Starting Thread 1" t1.start() print "Starting Thread 2" t2.start() print "=== exiting ===" ser.close() if __name__ == '__main__': Main() </code></pre>
0
2016-08-23T22:15:21Z
39,116,187
<p>Example for using lock</p> <pre><code>import threading import time import sys import serial import os def Task1(lck,ser): while 1: print "Inside Thread 1" lck.acquire() ser.write('\x5A\x03\x02\x02\x02\x09') # Byte Array To Control a MicroProcessing Unit b = ser.read(7) lck.release() print b.encode('hex') print "Thread 1 still going on" time.sleep(1) def Task2(lck,ser): print "Inside Thread 2" print "I stopped Task 1 to start and execute Thread 2" lck.acquire() ser.write('\x5A\x03\x02\x08\x02\x0F') c = ser.read(7) lck.release() print c.encode('hex') print "Thread 2 complete" def Main(): ser = serial.Serial(3, 11520) lck = threading.Lock() t1 = threading.Thread(target = Task1, args=[lck, ser]) t2 = threading.Thread(target = Task2, args=[lck, ser]) print "Starting Thread 1" t1.start() print "Starting Thread 2" t2.start() print "=== exiting ===" ser.close() if __name__ == '__main__': Main() </code></pre>
0
2016-08-24T06:46:19Z
[ "python", "multithreading", "pyserial" ]
python, list compare, ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
39,111,589
<p>Here is my python code:</p> <pre><code>def ava_check(nodes_group,child_list): ava_list=nodes_group[:] if nodes_group[1] in child_list: return None else: for a in nodes_group: if a in child_list: ava_list.remove(a) ava_list.remove(nodes_group[nodes_group.index(a)-1]) else: pass </code></pre> <p>The <code>nodes_group</code> is a list like <code>[0.0, (0, 3), 0.0, (0, 2), 0.0, (1, 3)]</code>. The <code>child_list</code> is a list like <code>[(0, 1)]</code>.</p> <p>But when I run the code, there is an error: <code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</code> happened in line <code>if a in child_list:</code>. I have no idea what is the problem here. I tried to search, but they said something about numpy. But I didn't use numpy here, the two inout arguments are just list with tuples. </p> <p>Could you help me with this problem?</p> <p>Thanks very much.</p> <p>UPDATE: Thanks for everyone's solution. Some data (not tuples) in the list nodes_group are from the <code>numpy</code> array. But I store the data in new list. So I checked the data type of the element in the new list using type(), and I found that the type is <code>numpy.float64</code>, which explains why I have this error. So I write a loop to change the type of element in list from <code>numpy.float64</code> to int by just using <code>int()</code>. So the problem is solved. But anyone know whether is a better solution or more pythonic way? Thanks.</p>
0
2016-08-23T22:21:36Z
39,111,823
<p>One (or more) of the values in your <code>nodes_group</code> list is a <code>numpy</code> array, not a float or tuple like you show in your example data. You can't use the test <code>a in some_list</code> if <code>a</code> is an array, because an array's <code>==</code> operator doesn't return a <code>bool</code> value, but rather a Boolean array. The Boolean array raises the exception you see when Python tries to covert it to a single <code>bool</code>.</p>
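A quick way to see this (with a hypothetical stray array, since the real offending element isn't shown), plus a cleaner cleanup than looping with <code>int()</code> — unwrap NumPy scalars with <code>.item()</code>:

```python
import numpy as np

child_list = [(0, 1)]
stray = np.array([0.0, 1.0])      # an array hiding inside nodes_group

try:
    stray in child_list           # `in` compares with ==, yielding a bool array
    outcome = "no error"
except ValueError as err:
    outcome = str(err)
print(outcome)                    # "The truth value of an array ... is ambiguous..."

# More pythonic than an int()-loop: unwrap NumPy scalars via .item()
nodes_group = [np.float64(0.0), (0, 3), np.float64(2.0), (0, 2)]
cleaned = [v.item() if isinstance(v, np.generic) else v for v in nodes_group]
print(cleaned)                    # [0.0, (0, 3), 2.0, (0, 2)] -- plain Python floats
```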
1
2016-08-23T22:44:30Z
[ "python", "arrays", "list", "numpy" ]
How to build model in DynamoDB if each night I need to process the daily records and then delete them?
39,111,598
<p>I need to store some daily information in DynamoDB. Basically, I need to store user actions: UserID, StoreID, ActionID and Timestamp. Each night I would like to process the information generated that day, do some aggregations and some reports, and then I can safely delete those records. How should I model this? I mean the hash key and the sort key... I need to have the full timestamp of each action for the reports, but in order to query DynamoDB I guess it is easier to also save the date only. I have some PKs such as UserID and StoreID, but anyhow I need to process all the data each night, not the data related to one user or one store... Thanks! Patricio</p>
0
2016-08-23T22:22:11Z
39,114,304
<p>You can use RabbitMQ to schedule jobs asynchronously. This would be faster than multiple DB queries. Basically, this tool allows you to create a job queue (containing UserID, StoreID &amp; Timestamp) from which workers can remove jobs (at midnight if you want) and create your reports (or whatever your heart desires).</p> <p>This also allows you to scale your system horizontally across nodes. Your workers can be different machines executing these tasks. You will also be safe if your DB crashes (though you may still have to design redundancy for the machine running the RabbitMQ service).</p> <p>The DB should be used for persistent storage and not as a queue for processing.</p>
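To make the idea concrete, here is a minimal in-process sketch of the producer/worker pattern, using the standard library's <code>queue.Queue</code> as a stand-in for RabbitMQ (in a real setup you would publish to the broker with a client library such as pika instead):

```python
import json
import queue

# Stand-in for the RabbitMQ queue; a production setup would publish
# to the broker instead of an in-process queue.Queue.
job_queue = queue.Queue()

def record_action(user_id, store_id, action_id, timestamp):
    """Producer: enqueue the raw action instead of storing it for later querying."""
    job_queue.put(json.dumps({
        "UserID": user_id,
        "StoreID": store_id,
        "ActionID": action_id,
        "Timestamp": timestamp,
    }))

def nightly_worker():
    """Consumer: drain the queue at midnight and aggregate actions per store."""
    report = {}
    while True:
        try:
            job = json.loads(job_queue.get_nowait())
        except queue.Empty:
            break
        report[job["StoreID"]] = report.get(job["StoreID"], 0) + 1
    return report

record_action("u1", "s1", "view", "2016-08-23T22:00:00Z")
record_action("u2", "s1", "buy",  "2016-08-23T22:05:00Z")
record_action("u1", "s2", "view", "2016-08-23T22:10:00Z")

report = nightly_worker()
print(report)  # {'s1': 2, 's2': 1}
```

Swapping the in-process queue for a broker keeps the same shape: the producer runs whenever an action happens, and the worker (possibly on another machine) drains the queue on a schedule.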
1
2016-08-24T04:13:00Z
[ "python", "database-design", "amazon-dynamodb" ]
PyQt: QListView in connection with QTextEdit
39,111,600
<p>I have a QListView and a QTextEdit on a form and I would like to get them working together, as follows: if a checkbox is checked, the index of the respective item in the QListView should be displayed in the QTextEdit; if the checkbox is unchecked, the value should be deleted from the QTextEdit. The indexes should be displayed cumulatively, delimited by one character (say, a comma), e.g., 0,1,3. </p> <p>Conversely, if a value is typed in the QTextEdit, the respective checkbox should be automatically checked (or none, in case the value entered does not correspond to any index in the QListView).</p> <p><del>I attempted to catch the indices of the selected checkboxes by attaching a handler to the clicked event of the QListView, as below:</del></p> <pre><code>&lt;del&gt;@QtCore.pyqtSlot(QtCore.QModelIndex)
def onclick(self, index):
    editbox.setText(str(index.row()))&lt;/del&gt;
</code></pre> <p><del>but got the error message: "NameError: global name 'self' is not defined".</del></p> <p>Any hints? Thanks in advance for any assistance you can provide!</p> <p>Here is my complete test code:</p> <p>EDIT: I changed the code below to deal properly with event handlers.</p> <pre><code>import sys
from PyQt4 import Qt, QtCore, QtGui

class MainWindow(QtGui.QWidget):
    def __init__(self):
        QtGui.QWidget.__init__(self)

        model = QtGui.QStandardItemModel()
        for n in range(10):
            item = QtGui.QStandardItem('Item %s' % n)
            item.setCheckState(QtCore.Qt.Unchecked)
            item.setCheckable(True)
            model.appendRow(item)

        listview = QtGui.QListView()
        listview.setModel(model)
        listview.clicked.connect(self.onclick)

        self.editbox = QtGui.QTextEdit()
        self.editbox.textChanged.connect(self.onchange)

        grid = QtGui.QGridLayout()
        grid.setRowStretch(0, 6)
        grid.setRowStretch(1, 4)
        grid.addWidget(listview)
        grid.setSpacing(2)
        grid.addWidget(self.editbox)
        self.setLayout(grid)

        self.setGeometry(300, 150, 350, 300)
        self.setWindowTitle("Example")
        self.show()

    #@QtCore.pyqtSlot(QtCore.QModelIndex)
    def onclick(self, index):
        self.editbox.append(str(index.row()))

    def onchange(self):
        print "text in edit box changed"

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    mw = MainWindow()
    sys.exit(app.exec_())
</code></pre>
0
2016-08-23T22:22:16Z
39,111,865
<p>Since you're defining <code>onclick</code> outside of a class definition, there's no <code>self</code> (which refers to an instance of the class) either. Define it as a regular function instead:</p> <pre><code>@QtCore.pyqtSlot(QtCore.QModelIndex)
def onclick(index):
    editbox.setText(str(index.row()))
</code></pre> <p>and connect it to the signal as <code>listview.clicked.connect(onclick)</code>.</p>
0
2016-08-23T22:48:27Z
[ "python", "pyqt4" ]
Could not convert string to float: ...Sometimes
39,111,629
<p>I'm having some strange things happen when converting strings to floats in a loop.</p> <p>So when I use the following code exactly it writes: </p> <pre><code>[1468436874000, 0.00254071495719],
[1468528803000, 0.00341349353996],
[1468688596000, 0.000853373384991],
[1468871365000, 0.00256012015497],
</code></pre> <p>It stops short, there should be about 30 lines more than that and those are the wrong calculations.</p> <p>The function is: </p> <pre><code>def main():
    minprice = pricelist('MIN')
    maxprice = pricelist('MAX')
    avgprice = pricelist('AVG')
    avgsqft = sqftlist()
    datelist = getdates()
    index = fileparser()
    with open('/home/andrew/Desktop/index3.htm', 'w') as file:
        file.writelines(data[:index[0]])
        for date, minprice, maxprice in zip(datelist, minprice, maxprice):
            file.writelines('[%s, %s, %s],\n' % (date, minprice, maxprice))
        file.writelines(data[index[1]:index[2]])
        for date, avgprice in zip(datelist, avgprice):
            file.writelines('[%s, %s],\n' % (date, avgprice))
        file.writelines(data[index[3]:index[4]])
        for date, avgprice, avgsqft in zip(datelist, avgprice, avgsqft):
            file.writelines('[%s, %s],\n' % (date, ((float(avgprice))/(float(avgsqft)))))
        file.writelines(data[index[5]:])
        file.close()
</code></pre> <p>The error is:</p> <pre><code>file.writelines('[%s, %s],\n' % (date, ((float(avgprice))/(float(avgsqft)))))
ValueError: could not convert string to float: .
</code></pre> <p>Oddly, when I comment out the other for loops before it, the result is:</p> <pre><code>[1468436874000, 2.82644376127],
[1468528803000, 2.86702735915],
[1468688596000, 2.8546107764],
[1468871365000, 2.8546107764],
[1468871996000, 2.8546107764],
[1468919656000, 2.85383420662],
[1469004050000, 2.85189704903],
[1469116491000, 2.87361540168],
[1469189815000, 2.86059636119],
[1469276601000, 2.83694745621],
[1469367041000, 2.83903252711],
[1469547497000, 2.83848688853],
[1469649630000, 2.83803033196],
[1469736031000, 2.82327110329],
[1469790030000, 2.82650020338],
[1469876430000, 2.96552660866],
[1470022624000, 2.93407180385],
</code></pre> <p>Moreover, when I use enumerate instead of zip (and make the appropriate changes), it works. I've examined both lists at the fifth item for anything unusual because that's where it's getting hung up, but there is nothing odd there in either list. Since it does work fine with enumerate I'll just do that for now. But I'm new to Python/programming in general and want to understand what exactly is causing this. </p> <p><strong><em>UPDATE</em></strong> Should have included this the first time. </p> <pre><code>#    file.writelines(data[:index[0]+1])
#    for date, minprice, maxprice in zip(datelist, minprice, maxprice):
#        file.writelines('[%s, %s, %s],\n' % (date, minprice, maxprice))
#    file.writelines(data[index[1]:index[2]+1])
#    for date, avgprice in zip(datelist, avgprice):
#        file.writelines('[%s, %s],\n' % (date, avgprice))
#    file.writelines(data[index[3]:index[4]+1])
#    time.sleep(1)
    for date, avgprice, avgsqft in zip(datelist, avgprice, avgsqft):
#        file.writelines(
        print'[%s, %s],\n' % (date, ((float(avgprice))/(float(avgsqft))))
#    file.writelines(data[index[5]:])
#    file.close()
</code></pre> <p>Prints... (correctly)</p> <pre><code>[1468436874000, 2.82644376127],
[1468528803000, 2.86702735915],
[1468688596000, 2.8546107764],
[1468871365000, 2.8546107764],
[1468871996000, 2.8546107764],
[1468919656000, 2.85383420662],
etc...
</code></pre>
0
2016-08-23T22:24:29Z
39,111,888
<p>Debug by printing the values of <code>avgprice</code> and <code>avgsqft</code> in your code. You are getting some string as their value which cannot be converted to a float.</p> <p>In this case the culprit is visible in the function itself: the earlier loop <code>for date, avgprice in zip(datelist, avgprice)</code> reuses the name <code>avgprice</code> as its loop variable, so after that loop finishes, <code>avgprice</code> is no longer the list; it is the last string taken from it. The final loop then calls <code>zip()</code> over that single string, which yields it one <em>character</em> at a time, and <code>float('.')</code> fails as soon as it hits the decimal point. That is also why the loop stops short, and why everything works when you comment out the earlier loops. Use loop variables that do not shadow the lists, e.g. <code>for date, price in zip(datelist, avgprice)</code>.</p>
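A stripped-down reproduction of the shadowing (with made-up values in place of the question's price lists):

```python
datelist = ["2016-07-13", "2016-07-14", "2016-07-15"]
avgprice = ["2.82", "2.86", "2.85"]

# The loop target reuses the name of the list being iterated...
for date, avgprice in zip(datelist, avgprice):
    pass

# ...so afterwards `avgprice` is the last *string*, not the list.
print(avgprice)  # 2.85

# zip() over that string yields one character per iteration,
# and float('.') is exactly what blows up in the original code.
chars = [ch for date, ch in zip(datelist, avgprice)]
print(chars)  # ['2', '.', '8']
```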
0
2016-08-23T22:50:14Z
[ "python", "zip" ]
Hooking Python code using Detours
39,111,709
<p>I built a simple Python gui application ("App.py") that I am trying to hook using detours. My understanding is that Python should use Windows dll's at some point and I am trying to hook these function calls.</p> <p>For that purpose I am using detours withdll.exe :</p> <pre><code>withdll.exe /d:"myDLL.dll" "myprogram.exe"
</code></pre> <p>Because withdll.exe doesn't accept running a program with arguments ("python.exe App.py"), I tried creating a bat file starter.bat as follows:</p> <pre><code>cd appdir
python App.py
</code></pre> <p>And then running:</p> <pre><code>withdll.exe /d:"myDLL.dll" "starter.bat"
</code></pre> <p>However this approach only hooks the background cmd process. </p> <p>Is there a workaround to make detours hook the Python.exe process of my script ?</p>
0
2016-08-23T22:32:20Z
39,275,794
<p>I looked through detours withdll.exe source code and found out that it can take command line arguments; the issue was solved using:</p> <pre><code>withdll.exe /d:"myDLL.dll" "pathtopython/Python.exe" "pathtoscript/myscript.py"
</code></pre>
0
2016-09-01T15:40:24Z
[ "python", "windows", "dll", "code-injection", "detours" ]
Ansible: given a list of ints in a variable, define a second list in which each element is incremented
39,111,787
<p>Let's assume that we have an Ansible variable that is a <code>list_of_ints</code>.</p> <p>I want to define an <code>incremented_list</code>, whose elements are obtained by incrementing the elements of the first list by a fixed amount.</p> <p>For example, if this is the first variable:</p> <pre><code>---
# file: somerole/vars/main.yml
list_of_ints:
  - 1
  - 7
  - 8
</code></pre> <p>assuming an increment of 100, the desired second list would have this content:</p> <pre><code>incremented_list:
  - 101
  - 107
  - 108
</code></pre> <p>I was thinking of something on the lines of:</p> <pre><code>incremented_list: "{{ list_of_ints | map('add', 100) | list }}"
</code></pre> <p>Sadly, Ansible has <a href="http://docs.ansible.com/ansible/playbooks_filters.html#math" rel="nofollow">custom filters for logarithms or powers</a>, but not for basic arithmetic, so I can easily calculate the log10 of those numbers, but not increment them.</p> <p>Any ideas, apart from a pull request on <a href="https://github.com/ansible/ansible/blob/v2.1.1.0-1/lib/ansible/plugins/filter/mathstuff.py" rel="nofollow">https://github.com/ansible/ansible/blob/v2.1.1.0-1/lib/ansible/plugins/filter/mathstuff.py</a> ?</p>
2
2016-08-23T22:40:38Z
39,112,215
<p>This will do it:</p> <pre><code>---
- hosts: localhost
  connection: local
  vars:
    incremented_list: []
    list_of_ints:
      - 1
      - 7
      - 8
    incr: 100
  tasks:
    - set_fact:
        incremented_list: "{{ incremented_list + [ item + incr ] }}"
      no_log: False
      with_items: "{{ list_of_ints }}"
    - name: show cntr
      debug: var=incremented_list
</code></pre>
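For reference, the <code>set_fact</code> loop above accumulates the new list exactly the way this plain-Python sketch does (one appended value per <code>with_items</code> iteration):

```python
list_of_ints = [1, 7, 8]
incr = 100

# Mirrors: incremented_list: "{{ incremented_list + [ item + incr ] }}"
incremented_list = []
for item in list_of_ints:
    incremented_list = incremented_list + [item + incr]

print(incremented_list)  # [101, 107, 108]
```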
2
2016-08-23T23:28:31Z
[ "python", "functional-programming", "ansible", "jinja2" ]
Sort an item in a list of tuples in the values of a dictionary with Python
39,111,798
<p>I'm trying to sort the first item in a list of tuples in the value of a dictionary. </p> <p>Here is my dictionary:</p> <pre><code>d = {'key_': (('2', 'a'), ('3', 'b'), ('4', 'c'), ('1', 'd'))} </code></pre> <p>I want the sorted output to look like this:</p> <pre><code>d2 = {'key_': (('1', 'd'), ('2', 'a'), ('3', 'b'), ('4', 'c'))} </code></pre> <p>I tried sorting the values to a new dictionary, but that doesn't work:</p> <pre><code>d2 = sorted(d.values(), key=lambda x: x[0]) </code></pre>
0
2016-08-23T22:41:41Z
39,111,822
<p><code>d.values()</code> returns the list of all the values present within the dictionary. But here you want to sort the list which is present as the value corresponding to the <code>key_</code> key. So you have to call the sorted function as:</p> <pre><code># Using a tuple, as your example is using a tuple instead of a list
&gt;&gt;&gt; d['key_'] = tuple(sorted(d['key_'], key=lambda x: x[0]))
&gt;&gt;&gt; d
{'key_': (('1', 'd'), ('2', 'a'), ('3', 'b'), ('4', 'c'))}
</code></pre> <p>Alternatively, you may sort a list copy in place with <code>list.sort()</code>. Note that <code>.sort()</code> returns <code>None</code>, so you must not assign its result back to the dictionary:</p> <pre><code>&gt;&gt;&gt; lst = list(d['key_'])
&gt;&gt;&gt; lst.sort(key=lambda x: x[0])
&gt;&gt;&gt; d['key_'] = tuple(lst)
&gt;&gt;&gt; d
{'key_': (('1', 'd'), ('2', 'a'), ('3', 'b'), ('4', 'c'))}
</code></pre>
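An equivalent spelling of the first approach uses <code>operator.itemgetter</code> for the key (same result, arguably clearer than a lambda):

```python
from operator import itemgetter

d = {'key_': (('2', 'a'), ('3', 'b'), ('4', 'c'), ('1', 'd'))}

# Sort the tuples by their first item and store the result back as a tuple.
d['key_'] = tuple(sorted(d['key_'], key=itemgetter(0)))
print(d)  # {'key_': (('1', 'd'), ('2', 'a'), ('3', 'b'), ('4', 'c'))}
```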
2
2016-08-23T22:44:12Z
[ "python", "sorting", "dictionary" ]
pyqt QTreeWidgetItem double click connect
39,111,880
<p>Is it possible to connect a doubleclick event to a QTreeWidgetItem?</p> <p>Something like this:</p> <pre><code>def test(self):
    print("hello")

childItem = QTreeWidgetItem()
childItem.doubleClicked.connect(self.test)
</code></pre>
0
2016-08-23T22:49:32Z
39,112,026
<p>The signal you want is called <code>itemDoubleClicked</code> and belongs to <code>QTreeWidget</code> itself:</p> <pre><code>import sys

from PyQt4 import QtGui

def handler(item, column_no):
    print(item, column_no)

def main():
    app = QtGui.QApplication(sys.argv)
    win = QtGui.QTreeWidget()
    # QTreeWidgetItem takes a list of per-column strings
    items = [QtGui.QTreeWidgetItem(["item: {}".format(i)]) for i in xrange(10)]
    win.insertTopLevelItems(0, items)
    win.itemDoubleClicked.connect(handler)
    win.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
</code></pre>
1
2016-08-23T23:06:14Z
[ "python", "pyqt" ]
Python 3.4 64-bit download
39,111,886
<p>How can I download Anaconda with a previous Python version, like Python 3.4 64-bit?</p> <p>The reason is that the Bloomberg API is only available up to 3.4, and a 3.5 version is not out yet.</p>
0
2016-08-23T22:50:11Z
39,111,945
<p>I recommend installing the newest Anaconda version and using <strong>virtual-environments</strong>. This way, you would set up a <em>Python 3.4</em> environment.</p> <p>This is documented <a href="http://conda.pydata.org/docs/using/envs.html" rel="nofollow">here</a>. There are also <a href="http://conda.pydata.org/docs/py2or3.html" rel="nofollow">these docs</a>, which describe mostly the same approach, but target the python2/3 problem more specifically. (Link mentioned in the comments.)</p> <p>So after installing Anaconda (let's assume conda's binaries are on the path):</p> <pre><code>conda create --name py34 python=3.4
</code></pre> <p>Then it can be used with</p> <pre><code>source activate py34   # linux
activate py34          # windows
</code></pre> <p>During activation (or: while activated), the binaries (python, pip, conda) will be on the path. This means using <code>conda install matplotlib</code> will install to the <code>3.4</code> version!</p> <p>After doing:</p> <pre><code>source activate root   # linux
activate root          # windows
</code></pre> <p>something like <code>conda install matplotlib</code> will install to the base version.</p>
1
2016-08-23T22:56:26Z
[ "python", "anaconda" ]
Adding values to sequentially spaced list in python?
39,111,903
<p>I am fairly new to Python and am confused how to represent the following code from Matlab into Python:</p> <pre><code>P = [2:35,50,100,200]
</code></pre> <p>In Matlab, this will spit out: P = [2,3,...,35,50,100,200] ; however, I can't seem to figure out how to easily add values to a list with sequential numbering as is easily done in Matlab. Any suggestions would be great. Thanks!</p>
0
2016-08-23T22:51:18Z
39,111,921
<p>Vanilla python doesn't have a dedicated syntax for this ... If you're working with lists, you need 2 steps:</p> <pre><code>lst = list(range(2, 36))  # for python2.x, you don't need `list(...)`
lst.extend([50, 100, 200])
</code></pre> <p>If you have the "bleeding edge" (python3.5), <a href="https://www.python.org/dev/peps/pep-0448/" rel="nofollow">you <em>can</em> use unpacking</a>:</p> <pre><code>lst = [*range(2, 36), 50, 100, 200]
</code></pre> <p>If you're using <code>numpy</code>, you can use the <code>r_</code> index trick (which looks somewhat similar to the matlab version):</p> <pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; np.r_[2:36, 100, 200, 500]
array([  2,   3,   4,   5,   6,   7,   8,   9,  10,  11,  12,  13,  14,
        15,  16,  17,  18,  19,  20,  21,  22,  23,  24,  25,  26,  27,
        28,  29,  30,  31,  32,  33,  34,  35, 100, 200, 500])
</code></pre>
5
2016-08-23T22:53:44Z
[ "python", "list", "range" ]
Adding values to sequentially spaced list in python?
39,111,903
<p>I am fairly new to Python and am confused how to represent the following code from Matlab into Python:</p> <pre><code>P = [2:35,50,100,200]
</code></pre> <p>In Matlab, this will spit out: P = [2,3,...,35,50,100,200] ; however, I can't seem to figure out how to easily add values to a list with sequential numbering as is easily done in Matlab. Any suggestions would be great. Thanks!</p>
0
2016-08-23T22:51:18Z
39,111,946
<p>If you're fortunate enough to use Python 3.5, you can use <a href="https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-448" rel="nofollow"><em>additional unpacking generalizations</em></a> (from <a href="https://www.python.org/dev/peps/pep-0448/" rel="nofollow">PEP 448</a>) with <code>range</code>:</p> <pre><code>&gt;&gt;&gt; [*range(2, 36), 50, 100, 200]
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 50, 100, 200]
</code></pre> <p>Note that the last value generated by <code>range</code> is one less than the <code>end</code> argument.</p>
4
2016-08-23T22:56:27Z
[ "python", "list", "range" ]
How do you serve a dynamically downloaded image to a browser using flask?
39,111,906
<p>I'm working with an IP camera. I can use a URL such as this one to grab a static image off the camera:</p> <p><a href="http://Username:Password@IP_of_Camera:Port/streaming/channels/1/picture" rel="nofollow">http://Username:Password@IP_of_Camera:Port/streaming/channels/1/picture</a></p> <p>What I want to do is have python/flask download the image from that URL when the client loads the page, and embed the image into the page using an img tag.</p> <p>If I have a template that looks something like this:</p> <pre><code>&lt;html&gt;
  &lt;head&gt;
    &lt;title&gt;Test&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;img src="{{ image }}"&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>how do I replace the {{ image }} with the downloaded image?</p> <p>Would I use urllib/requests to download the image to flask's static folder, replace {{ image }} with something like <code>{{ url_for('static', filename="temp_image.png") }}</code>, and then delete the image from the static folder when the page loads? Would I download it someplace else instead (other than the static folder)? Or is there some other way to do it that keeps the image in memory?</p> <p>PS. I know it's possible to replace {{ image }} with that URL directly, but that reveals the username/password/IP/port of the camera to the client.</p>
0
2016-08-23T22:51:36Z
39,112,024
<pre><code>import requests

url = "http://Username:Password@IP_of_Camera:Port/streaming/channels/1/picture"
response = requests.get(url)
if response.status_code == 200:
    f = open("/your/static/dir/temp.png", 'wb')
    f.write(response.content)
    f.close()
</code></pre> <p><code>{{ url_for('static', filename="temp.png") }}</code></p> <p>Not sure why you would need to delete it, but I guess you could if you thought that was required.</p>
1
2016-08-23T23:06:08Z
[ "python", "flask" ]
How do you serve a dynamically downloaded image to a browser using flask?
39,111,906
<p>I'm working with an IP camera. I can use a URL such as this one to grab a static image off the camera:</p> <p><a href="http://Username:Password@IP_of_Camera:Port/streaming/channels/1/picture" rel="nofollow">http://Username:Password@IP_of_Camera:Port/streaming/channels/1/picture</a></p> <p>What I want to do is have python/flask download the image from that URL when the client loads the page, and embed the image into the page using an img tag.</p> <p>If I have a template that looks something like this:</p> <pre><code>&lt;html&gt;
  &lt;head&gt;
    &lt;title&gt;Test&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;img src="{{ image }}"&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>how do I replace the {{ image }} with the downloaded image?</p> <p>Would I use urllib/requests to download the image to flask's static folder, replace {{ image }} with something like <code>{{ url_for('static', filename="temp_image.png") }}</code>, and then delete the image from the static folder when the page loads? Would I download it someplace else instead (other than the static folder)? Or is there some other way to do it that keeps the image in memory?</p> <p>PS. I know it's possible to replace {{ image }} with that URL directly, but that reveals the username/password/IP/port of the camera to the client.</p>
0
2016-08-23T22:51:36Z
39,112,044
<p>I would add a masking route on flask that fetches and serves the image directly. Let's say <code>domain.com/image/user1/cam1</code>.</p> <p>Your server would typically make an http request to the camera and once it receives a response, you can straight up serve it as a <code>Response</code> object with the appropriate mimetype.</p> <p>In this case, the image you fetched from the camera resides in your RAM.</p> <pre><code>@app.route('/image/&lt;userID&gt;/&lt;camID&gt;')  # route rules must start with a slash
def fun(userID, camID):
    # fetch the picture from the appropriate cam
    pic = requests.get('http://' +
                       'Username:Password' +   # dynamically replace user id / password / auth
                       '@IP_of_Camera:Port' +  # dynamically replace port / IP
                       '/streaming/channels/1/picture')
    # do processing of pic here..
    # serve the raw bytes (pic itself is a requests.Response, not the image)
    return Response(pic.content, mimetype="image/png")
</code></pre> <p>However, if this image needs to be served over and over again, then you might wanna cache it. In which case, I would pick something closer to your approach. </p> <p>If you want to stream the camera images, it is a whole different ballgame.</p>
3
2016-08-23T23:08:03Z
[ "python", "flask" ]
discord channel link in message
39,111,931
<p>Using discord.py, I am making a bot to send users a direct message if a keyword of their choosing is mentioned.</p> <p>Everything is working, except I just want to add the channel they were mentioned in to the message. Here is my code:</p> <pre><code>print("SENDING MESSAGE")
sender = '{0.author.name}'.format(message)
channel = message.channel.name
server = '{0.server}'.format(message)
await client.send_message(member, server+": #"+channel+": "+sender+": "+msg)
</code></pre> <p>This results in a correct message being composed, but the #channel part of the message is not a clickable link as it would be if I typed it into the chat window myself. Is there a different object I should be feeding into the message?</p>
0
2016-08-23T22:54:27Z
39,119,899
<p>In Discord there are channel mentions. Try it that way: use <code>message.channel.mention</code> instead of <code>message.channel.name</code>, and it should be able to link a channel in a PM or anywhere else.</p> <p>Source: <a href="http://discordpy.readthedocs.io/en/latest/api.html#discord.Channel.mention" rel="nofollow">Discord Documentation</a></p>
1
2016-08-24T09:50:10Z
[ "python", "bots" ]
Differentiating between compressed .gz files and archived tar.gz files properly?
39,112,008
<p>What is the proper way to deal with differentiating between a plain compressed file in gzip or bzip2 format (eg. .gz) and a tarball compressed with gzip or bzip2 (eg. .tar.gz)? Identification using suffix extensions is not a reliable option as it's possible files may end up renamed.</p> <p>Now on the command line I am able to do something like this:</p> <pre><code>bzip2 -dc test.tar.bz2 |head|file -
</code></pre> <p>So I attempted something similar in python with the following function:</p> <pre><code>def get_magic(self, store_file, buffer=False, look_deeper=False):
    # see what we're indexing
    if look_deeper == True:
        m = magic.Magic(mime=True, uncompress=True)
    else:
        m = magic.Magic(mime=True)

    if buffer == False:
        try:
            file_type = m.from_file(store_file)
        except Exception, e:
            raise e
    else:
        try:
            file_type = m.from_buffer(store_file)
        except Exception, e:
            raise e
    return file_type
</code></pre> <p>Then when trying to read a compressed tarball I'll pass in the buffer from elsewhere via:</p> <pre><code>file_buffer = open(file_name).read(8096)
archive_check = self.get_magic(file_buffer, True, True)
</code></pre> <p>Unfortunately this then becomes problematic using the <strong>uncompress</strong> flag in python-magic because it appears that python-magic is expecting me to pass in the entire file even though I only want it to read the buffer. I end up with the exception:</p> <pre><code>bzip2 ERROR: Compressed file ends unexpectedly
</code></pre> <p>Seeing as the files I am looking at can end up being <strong>2M to 20GB in size</strong> this becomes rather problematic. I <em>don't want to read the entire file</em>.</p> <p>Can it be hacked and chop the end of the compressed file off and append it to the buffer? Is it better to ignore the idea of uncompressing the file using python-magic and instead do it before I pass in a buffer to identify via:</p> <pre><code>file_buffer = open(file_name, "r:bz2").read(8096)
</code></pre> <p>Is there a better way?</p>
0
2016-08-23T23:03:19Z
39,112,429
<p>It is very likely a tar file if the uncompressed data at offset 257 is "ustar", <em>or</em> if the uncompressed data in its entirety is 1024 zero bytes (an empty tar file).</p> <p>You can read just the first 1024 bytes of the uncompressed data using <code>z = zlib.decompressobj()</code> or <code>z = bz2.BZ2Decompressor()</code>, and <code>z.decompress()</code>.</p>
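A self-contained sketch of that check (it builds a small in-memory <code>.tar.gz</code> to test against; the file names and chunk sizes are illustrative):

```python
import gzip
import io
import tarfile
import zlib

def looks_like_tar(head):
    """True if the tar magic sits at offset 257, or the data is an empty tar."""
    return head[257:262] == b"ustar" or head[:1024] == b"\0" * 1024

def gzip_head(data, nbytes=1024):
    """Decompress just enough of a gzip/zlib stream to get its first nbytes."""
    z = zlib.decompressobj(zlib.MAX_WBITS | 32)  # 32: auto-detect gzip/zlib header
    out = b""
    for i in range(0, len(data), 4096):
        out += z.decompress(data[i:i + 4096], nbytes - len(out))
        if len(out) >= nbytes or z.eof:
            break
    return out

# A tiny tarball, gzip-compressed, built entirely in memory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
    info = tarfile.TarInfo(name="hello.txt")
    info.size = 5
    tf.addfile(info, io.BytesIO(b"hello"))

print(looks_like_tar(gzip_head(buf.getvalue())))   # True

# A plain gzipped text file is not a tarball.
plain = gzip.compress(b"just some text, not a tar archive. " * 40)
print(looks_like_tar(gzip_head(plain)))            # False
```

The same shape works for bzip2 with <code>bz2.BZ2Decompressor()</code>, except its <code>decompress()</code> only grew a <code>max_length</code> argument in Python 3.5, so on older versions you feed it small compressed chunks and stop once you have 1024 bytes of output.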
0
2016-08-23T23:59:22Z
[ "python", "compression", "archive", "python-magic" ]
Python filtering strings
39,112,053
<p>I'm trying to make a python script to rename the subtitles file (.srt) with the filename of the video file corresponding to that episode, so when I open it in VLC the subtitles are already loaded.</p> <p>So far I've succeeded in getting all the file names from the videos and subs into strings on two separate lists. Now I need to somehow pair the video string of episode 1 with the subtitle corresponding to the same episode.</p> <p>I've tried in lots of different ways, mostly with regular patterns, but none has worked. Here's an example of my code:</p> <pre><code>import glob

videoPaths = glob.glob(r"C:\Users\tobia_000\Desktop\BBT\Video\*.mp4")
subsPaths = glob.glob(r"C:\Users\tobia_000\Desktop\BBT\Subs\*.srt")
#With glob I sort the filenames of video and subs in separate variables

ep1 = []
ep2 = []
ep3 = []
#etc..
</code></pre> <p>This is how the videoPaths and subsPaths variables would look like with the files I have. I don't have them all yet.</p> <pre><code>videoPath = ["The.Big.Bang.Theory.S05E24.HDTV.x264-LOL.mp4",
             "The.Big.Bang.Theory.S05E19.HDTV.x264LOL.mp4",
             "The.Big.Bang.Theory.S05E21.HDTV.x264-LOL.mp4"]

subsPath = ["The Big Bang Theory - 5x19 - The Weekend Vortex.720p HDTV.lol.en.srt",
            "The Big Bang Theory - 5x21 - The Hawking Excitation.HDTV.LOL.en.srt",
            "The Big Bang Theory - 5x24 - The Countdown Reflection.HDTV.LOL.en.srt"]
</code></pre>
-3
2016-08-23T23:08:42Z
39,112,089
<p>You can use <a href="https://docs.python.org/2.7/library/functions.html#zip" rel="nofollow"><code>zip</code></a> to make the pairs, <em>if</em> the sorted lists for <code>videoPaths</code> and <code>subsPaths</code> are correct in that they correspond to each other, i.e. the first episode's video and subtitles are both the first element of each list.</p> <pre><code>episodes = list(zip(videoPaths, subsPaths))
</code></pre> <p>Edit: and since <a href="https://docs.python.org/2/library/glob.html" rel="nofollow">glob</a> returns results in an arbitrary order, sort before that.</p> <pre><code>episodes = list(zip(sorted(videoPaths), sorted(subsPaths)))
</code></pre>
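If you can't trust the two directory listings to sort into the same order, pairing on the parsed season/episode number is safer. A sketch using the question's file names (the regex is a guess that covers both the <code>S05E19</code> and <code>5x19</code> naming styles):

```python
import re

videoPaths = [
    "The.Big.Bang.Theory.S05E24.HDTV.x264-LOL.mp4",
    "The.Big.Bang.Theory.S05E19.HDTV.x264LOL.mp4",
    "The.Big.Bang.Theory.S05E21.HDTV.x264-LOL.mp4",
]
subsPaths = [
    "The Big Bang Theory - 5x19 - The Weekend Vortex.720p HDTV.lol.en.srt",
    "The Big Bang Theory - 5x21 - The Hawking Excitation.HDTV.LOL.en.srt",
    "The Big Bang Theory - 5x24 - The Countdown Reflection.HDTV.LOL.en.srt",
]

def episode_key(name):
    """Return (season, episode) parsed from 'S05E19' or '5x19' style names."""
    m = re.search(r"S(\d+)E(\d+)|(\d+)x(\d+)", name, re.IGNORECASE)
    groups = [g for g in m.groups() if g is not None]
    return int(groups[0]), int(groups[1])

# Index the subtitles by episode, then look each video up.
subs_by_episode = {episode_key(s): s for s in subsPaths}
episodes = [(v, subs_by_episode[episode_key(v)]) for v in videoPaths]

for video, sub in episodes:
    print(video, "->", sub)
```

From each pair you can then build the rename target by swapping the video's extension for <code>.srt</code>.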
2
2016-08-23T23:14:13Z
[ "python" ]
Constructing Zipf Distribution with matplotlib, trying to draw fitted line
39,112,063
<p>I have a list of paragraphs, where I want to run a zipf distribution on their combination. </p> <p>My code is below:</p> <pre><code>from itertools import *
from pylab import *
from collections import Counter
import matplotlib.pyplot as plt

paragraphs = " ".join(targeted_paragraphs)
for paragraph in paragraphs:
    frequency = Counter(paragraph.split())
    counts = array(frequency.values())
    tokens = frequency.keys()
    ranks = arange(1, len(counts)+1)
    indices = argsort(-counts)
    frequencies = counts[indices]
    loglog(ranks, frequencies, marker=".")
    title("Zipf plot for Combined Article Paragraphs")
    xlabel("Frequency Rank of Token")
    ylabel("Absolute Frequency of Token")
    grid(True)
    for n in list(logspace(-0.5, log10(len(counts)-1), 20).astype(int)):
        dummy = text(ranks[n], frequencies[n], " " + tokens[indices[n]],
                     verticalalignment="bottom",
                     horizontalalignment="left")
</code></pre> <p>At first I encountered the following error, and I do not know why:</p> <pre><code>IndexError: index 1 is out of bounds for axis 0 with size 1
</code></pre> <p><strong>PURPOSE</strong> I attempt to draw "a fitted line" in this graph, and assign its value to a variable. However I do not know how to add that. Any help would be much appreciated for both of these issues.</p>
0
2016-08-23T23:10:10Z
39,112,844
<p>I don't know what <code>targeted_paragraphs</code> looks like, but I got your error using:</p> <pre><code>targeted_paragraphs = ['a', 'b', 'c']
</code></pre> <p>Based on that it looks like the problem is in how you set up the <code>for</code> loop. You're indexing <code>ranks</code> and <code>frequencies</code> using a list generated from the length of <code>counts</code>, but that gives you an off-by-one error because (as far as I can tell) <code>ranks</code>, <code>frequencies</code>, and <code>counts</code> should all have the same length. Change the loop index to use <code>len(counts)-1</code> as below:</p> <pre><code>for n in list(logspace(-0.5, log10(len(counts)-1), 20).astype(int)):
    dummy = text(ranks[n], frequencies[n], " " + tokens[indices[n]],
                 verticalalignment="bottom", horizontalalignment="left")
</code></pre>
1
2016-08-24T00:56:29Z
[ "python", "python-2.7", "matplotlib", "itertools" ]
Parallelizing a for loop with map and reduce in spark with pyspark
39,112,109
<p>In my application, I am creating different data-frames from data in different locations on S3, and then trying to merge the dataframes into a single dataframes. Right now I am using a for loop for this. But I have a feeling this can be done in a much more efficient way using map and reduce functions in pyspark. Here's my code:</p> <pre><code>from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext, GroupedData
import pandas as pd
from datetime import datetime

sparkConf = SparkConf().setAppName('myTestApp')
sc = SparkContext(conf=sparkConf)
sqlContext = SQLContext(sc)

filepath = 's3n://my-s3-bucket/report_date='
date_from = pd.to_datetime('2016-08-01',format='%Y-%m-%d')
date_to = pd.to_datetime('2016-08-22',format='%Y-%m-%d')
datelist = pd.date_range(date_from, date_to)
First = True

#THIS is the for-loop I want to get rid of
for dt in datelist:
    date_string = datetime.strftime(dt, '%Y-%m-%d')
    print('Running the pyspark - Data read for the date - '+date_string)
    df = sqlContext.read.format("com.databricks.spark.csv").options(header = "false", inferschema = "true", delimiter = "\t").load(filepath + date_string + '/*.gz')
    if First:
        First=False
        df_Full = df
    else:
        df_Full = df_Full.unionAll(df)
</code></pre>
0
2016-08-23T23:15:49Z
39,112,355
<p>Actually iterative <code>union</code>, although suboptimal, is not the biggest issue here. A much more serious problem is introduced by schema inference (<code>inferschema = "true"</code>).</p> <p>It not only makes data frame creation not lazy but also requires a separate data scan just for inference. If you know the schema up front you should provide it as an argument for <code>DataFrameReader</code>:</p> <pre><code>schema = ...
df = sqlContext.read.format("com.databricks.spark.csv").schema(schema)
</code></pre> <p>otherwise you can extract it from the first <code>DataFrame</code>. Combined with well tuned parallelism it should work just fine, but if the number of files you fetch is large you should also consider a slightly smarter approach than an iterative union. You'll find an example in my answer to <a href="http://stackoverflow.com/q/33743978/1560062">Spark union of multiple RDDs</a>. It is more expensive but has better general properties. </p> <p>Regarding your idea, it is not possible to nest operations on distributed data structures, so if you want to read data inside <code>map</code> you'll have to use the S3 client directly without utilizing <code>SQLContext</code>.</p>
1
2016-08-23T23:48:36Z
[ "python", "apache-spark", "pyspark" ]
Use Selenium to click a 'Load More' button until it doesn't exist (Youtube)
39,112,138
<p>I am trying to write a python script that goes to a youtube channel, clicks on the Videos tab, then scrapes the webpage for content. I am all fine for scraping the content, until... I get to the Load More button. The python script I have manages to click the load more button once but it never presses it again :'( How can I modify the code I have to make it click it again and again until it doesn't exist? That way, I can open up the user's complete channel and get information from every video they have. Thank you.</p> <pre><code>from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
</code></pre> <p>^^ Those are the modules that I have imported. I don't know how to add in the NoSuchElementException into my code either. Here is the code:</p> <pre><code>chrome_path = r"/Users/jack/Desktop/Other/Downloads/Software_and_Programs/chromedriver"
browser = webdriver.Chrome(chrome_path)

YOUTUBER_HOME_PAGE_URL = "https://www.youtube.com/user/Google/videos"
PATIENCE_TIME = 60
LOAD_MORE_BUTTON_XPATH = '//*[@id="browse-itemsprimary"]/li[2]/button/span/span[2]'

def waitForLoad(inputXPath):
    Wait = WebDriverWait(browser, PATIENCE_TIME)
    Wait.until(EC.presence_of_element_located((By.XPATH, inputXPath)))

loadMoreButtonExists = True
while loadMoreButtonExists:
    try:
        waitForLoad(LOAD_MORE_BUTTON_XPATH)
        WebDriverWait(browser, PATIENCE_TIME)
        loadMoreButton = browser.find_element_by_partial_link_text('Load More')
        #loadMoreButton = browser.find_element_by_xpath(LOAD_MORE_BUTTON_XPATH)
        loadMoreButton.click()
    except:
        print 'we have completely loaded every video from this Youtuber. Now we will scrape the video content\n'
        loadMoreButtonExists = False
</code></pre> <p>I have used the xpath way and that still seems not to work. I have that commented out in the code above. It would be so awesome if I could get help on this. I haven't been able to find any good answers. I believe it can be done with selenium, but if not what should I use?</p>
-1
2016-08-23T23:18:51Z
39,304,871
<p>You can use the following code to keep clicking the "Load more" button until it no longer exists:</p> <pre><code>from selenium import webdriver from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time #browser = webdriver.Firefox()#Chrome('./chromedriver.exe') YOUTUBER_HOME_PAGE_URL = "https://www.youtube.com/user/Google/videos" PATIENCE_TIME = 60 LOAD_MORE_BUTTON_XPATH = '//*[@id="browse-itemsprimary"]/li[2]/button/span/span[2]' driver = webdriver.Chrome('./chromedriver.exe') driver.get(YOUTUBER_HOME_PAGE_URL) while True: try: loadMoreButton = driver.find_element_by_xpath("//button[contains(@aria-label,'Load more')]") time.sleep(2) loadMoreButton.click() time.sleep(5) except Exception as e: print e break print "Complete" time.sleep(10) driver.quit() </code></pre>
0
2016-09-03T08:29:20Z
[ "python", "selenium" ]
What's the standard way to document a namedtuple?
39,112,163
<p>Looking to clean up some code by using a namedtuple to hold multiple variables for passing through a number of functions. Below is a simplified example (I actually have a few more arguments).</p> <p>Before:</p> <pre><code>def my_function(session_cass, session_solr, session_mysql, some_var, another): """Blah blah. Args: session_cass (Session): Cassandra session to execute queries with. session_solr (SolrConnection): Solr connection to execute requests with. session_mysql (connection): MySQL connection to execute queries with. some_var (str): Yada yada. another (int): Yada yada. """ </code></pre> <p>After:</p> <pre><code>def my_function(sessions, some_var, another): """Blah blah. Args: sessions (namedtuple): Holds all the database sessions. some_var (str): Yada yada. another (int): Yada yada. """ </code></pre> <p>For docstrings, I've been following the Google style guide, with the addition of types (inspired by <a href="http://stackoverflow.com/questions/3898572/what-is-the-standard-python-docstring-format/8109339#8109339">this post</a>), which I really like because it makes it a lot easier to keep track of what types are coming in.</p> <p>My question is, how would you go about documenting a namedtuple in this scenario? Obviously as it's currently set up, you have no information about the types within the namedtuple. Is there an accepted way to extend the docstring here, or document the namedtuple where it's defined (not shown)?</p> <p>I know you could document a class in this manner, but I'm trying to stay away from using a class, as I don't really have any purpose for it other than to hold the variables.</p>
0
2016-08-23T23:22:09Z
39,112,787
<p>I am not familiar with the Google style guide, but how about this:</p> <p>For a namedtuple, tuple, list, or anything similarly interchangeable, I would go for something like this:</p> <pre><code>def my_function(sessions, some_var, another): """Blah blah. Args: sessions (sequence): A sequence of length n that holds all the database sessions. In position 0 need bla bla In position 1 need ble ble ... In position n-1 need blu blu some_var (str): Yada yada. another (int): Yada yada. """ </code></pre> <p>On the other hand, if I use the attributes of the namedtuple, then maybe something like this:</p> <pre><code>def my_function(sessions, some_var, another): """Blah blah. Args: sessions (object): An object that holds all the database sessions. It needs the following attributes bla_bla is ... ble_ble is ... ... blu_blu is ... some_var (str): Yada yada. another (int): Yada yada. """ </code></pre> <p>For a dictionary, how about this:</p> <pre><code>def my_function(sessions, some_var, another): """Blah blah. Args: sessions (map): A dictionary-like object that holds all the database sessions; it needs the following keys bla_bla is ... ble_ble is ... ... blu_blu is ... some_var (str): Yada yada. another (int): Yada yada. """ </code></pre> <p>or</p> <pre><code>def my_function(sessions, some_var, another): """Blah blah. Args: sessions (customclass): Holds all the database sessions. some_var (str): Yada yada. another (int): Yada yada. """ </code></pre> <p>In each instance, just ask for the minimum functionality that the function needs to work correctly.</p>
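A further option (a sketch only — the `Sessions` name and its field names are assumptions, not from the original post) is to document the fields once, where the namedtuple is defined, by subclassing it and giving the subclass a Google-style docstring; each function can then declare just `sessions (Sessions)`:

```python
from collections import namedtuple

_SessionsBase = namedtuple("Sessions", ["cassandra", "solr", "mysql"])

class Sessions(_SessionsBase):
    """Holds all the database sessions passed between functions.

    Attributes:
        cassandra (Session): Cassandra session to execute queries with.
        solr (SolrConnection): Solr connection to execute requests with.
        mysql (connection): MySQL connection to execute queries with.
    """
    __slots__ = ()  # keep the lightweight, tuple-like memory layout

# Illustrative values only -- real code would pass actual session objects.
sessions = Sessions(cassandra="cass", solr="solr", mysql="mysql")
print(sessions.cassandra)  # cass
```

This keeps every per-function docstring short while the field types stay documented in exactly one place.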
1
2016-08-24T00:49:10Z
[ "python", "code-documentation" ]
How do I pass a scalar via a TensorFlow feed dictionary
39,112,176
<p>My TensorFlow model uses <code>tf.random_uniform</code> to initialize a variable. I would like to specify the range when I begin training, so I created a placeholder for the initialization value.</p> <pre><code>init = tf.placeholder(tf.float32, name="init") v = tf.Variable(tf.random_uniform((100, 300), -init, init), dtype=tf.float32) initialize = tf.initialize_all_variables() </code></pre> <p>I initialize variables at the start of training like so.</p> <pre><code>session.run(initialize, feed_dict={init: 0.5}) </code></pre> <p>This gives me the following error:</p> <pre><code>ValueError: initial_value must have a shape specified: Tensor("Embedding/random_uniform:0", dtype=float32) </code></pre> <p>I cannot figure out the correct <code>shape</code> parameter to pass to <code>tf.placeholder</code>. I would think for a scalar I should do <code>init = tf.placeholder(tf.float32, shape=0, name="init")</code> but this gives the following error:</p> <pre><code>ValueError: Incompatible shapes for broadcasting: (100, 300) and (0,) </code></pre> <p>If I replace <code>init</code> with the literal value <code>0.5</code> in the call to <code>tf.random_uniform</code> it works.</p> <p>How do I pass this scalar initial value via the feed dictionary?</p>
0
2016-08-23T23:24:20Z
39,112,432
<p><strong>TL;DR:</strong> Define <code>init</code> with a scalar shape as follows:</p> <pre><code>init = tf.placeholder(tf.float32, shape=(), name="init") </code></pre> <p>This looks like an unfortunate implementation detail of <a href="https://github.com/tensorflow/tensorflow/blob/cc3153a7a0a23533d14ead34db37e4ccd7892079/tensorflow/python/ops/random_ops.py#L188" rel="nofollow"><code>tf.random_uniform()</code></a>: it currently uses <code>tf.add()</code> and <code>tf.mul()</code> to rescale the random value from [-1, +1] to [<code>minval</code>, <code>maxval</code>], but if the shape of <code>minval</code> or <code>maxval</code> is unknown, <code>tf.add()</code> and <code>tf.mul()</code> can't infer the proper shapes, because there might be broadcasting involved.</p> <p>By defining <code>init</code> with a known shape (where a scalar is <code>()</code> or <code>[]</code>, not <code>0</code>), TensorFlow can draw the proper inferences about the shape of the result of <code>tf.random_uniform()</code>, and your program should work as intended.</p>
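The distinction between shape `()` and shape `(0,)` is a general broadcasting rule rather than anything TensorFlow-specific; the same behavior can be checked with plain NumPy (an illustration only, assuming NumPy is available):

```python
import numpy as np

a = np.zeros((100, 300))

# Shape () is a scalar and broadcasts against anything:
print(np.broadcast(a, 0.5).shape)  # (100, 300)

# Shape (0,) is an *empty 1-D array*, not a scalar, so broadcasting fails
# with the same kind of "incompatible shapes" complaint as in the question:
try:
    np.broadcast(a, np.empty((0,)))
except ValueError as err:
    print("broadcast failed:", err)
```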
1
2016-08-23T23:59:41Z
[ "python", "machine-learning", "tensorflow" ]
Python/NumPy first occurrence of masked subarray
39,112,189
<p>I would like to find the occurrences of a subarray in a numpy array, but with a "wildcard". </p> <pre><code>a = np.array([1, 2, 3, 4, 5]) b = np.ma.array([2, 99, 4], mask=[0, 1, 0]) </code></pre> <p>The idea is that searching for b in a gives a match because 99 is masked.</p> <p>More specifically, I hoped that the method described <a href="http://stackoverflow.com/questions/7100242/python-numpy-first-occurrence-of-subarray">here</a> would work, but it does not:</p> <pre><code>def rolling_window(a, size): shape = a.shape[:-1] + (a.shape[-1] - size + 1, size) strides = a.strides + (a. strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) a = np.array([1, 2, 3, 4, 5]) b = np.array([2, 3, 4]) c = np.ma.array([2, 99, 4], mask=[0, 1, 0]) workingMatch = rolling_window(a, len(b)) == b notWorkingMatch = rolling_window(a, len(c)) == c </code></pre> <p>this results in</p> <pre><code>&gt;&gt;&gt; workingMatch array([[False, False, False], [ True, True, True], [False, False, False]], dtype=bool) &gt;&gt;&gt; notWorkingMatch masked_array(data = [[False False False] [-- -- --] [False False False]], mask = [False True False], fill_value = True) </code></pre> <p>...so no match is found. Why not? (I'd like to learn something) How to make this work?</p>
0
2016-08-23T23:26:06Z
39,112,647
<p>Use <code>np.ma.equal</code> instead of <code>==</code> - see end.</p> <p>========================</p> <p>A masked array consists of a <code>data</code> array and a mask array. Often the masked array is used in other operations by 'filling' the masked values with something innocuous, or by compressing them out. I'm not entirely sure what's going on with this <code>==</code> test, but let's look at the calculations.</p> <p>Your striding produces an array:</p> <pre><code>In [614]: A Out[614]: array([[1, 2, 3], [2, 3, 4], [3, 4, 5]]) In [615]: b Out[615]: array([2, 3, 4]) In [612]: A==b Out[612]: array([[False, False, False], [ True, True, True], [False, False, False]], dtype=bool) </code></pre> <p>The masked array has <code>data</code> and <code>mask</code></p> <pre><code>In [616]: c Out[616]: masked_array(data = [2 -- 4], mask = [False True False], fill_value = 999999) In [617]: c.data Out[617]: array([ 2, 99, 4]) In [618]: c.mask Out[618]: array([False, True, False], dtype=bool) In [619]: (A==c).data Out[619]: array([[False, False, False], [ True, False, True], [False, False, False]], dtype=bool) </code></pre> <p>This <code>data</code> is we'd expect from <code>A==c.data</code>. The center <code>99</code> does not match.</p> <p>But it looks like the mask is applied to the whole boolean array as though <code>c</code> where a column array - it's masking the 2nd row, rather than the 2nd column. </p> <pre><code>In [624]: A==c Out[624]: masked_array(data = [[False False False] [-- -- --] [False False False]], mask = [False True False], fill_value = True) </code></pre> <p>My first impression is that that is an error. 
But I'll have to dig more.</p> <p>The <code>data</code> of <code>A==c</code> is 2d, but the mask is 1d.</p> <p>If I replicated <code>c</code> to 3 rows, then I get the desired results:</p> <pre><code>In [638]: c[None,:]*np.array([1,1,1])[:,None] Out[638]: masked_array(data = [[2 -- 4] [2 -- 4] [2 -- 4]], mask = [[False True False] [False True False] [False True False]], fill_value = 999999) In [639]: c1=c[None,:]*np.array([1,1,1])[:,None] In [640]: A==c1 Out[640]: masked_array(data = [[False -- False] [True -- True] [False -- False]], mask = [[False True False] [False True False] [False True False]], fill_value = True) In [641]: (A==c1).all(axis=1) Out[641]: masked_array(data = [False True False], mask = [False False False], fill_value = True) </code></pre> <p>I don't know if there's a cleaner way of doing this, but it indicates the direction such a solution needs to take.</p> <p>============</p> <p><code>np.ma.equal</code> does what we want (<code>==</code> comparison with correct mask)</p> <pre><code>In [645]: np.ma.equal(A,c) Out[645]: masked_array(data = [[False -- False] [True -- True] [False -- False]], mask = [[False True False] [False True False] [False True False]], fill_value = 999999) In [646]: np.ma.equal(A,c).any(axis=1) Out[646]: masked_array(data = [False True False], mask = [False False False], fill_value = True) </code></pre> <p><code>np.ma.equal</code> is a masked-aware version of <code>np.equal</code>, which is a <code>ufunc</code> version of the element-by-element <code>==</code> operator.</p>
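Putting the pieces above together into one helper (a sketch based on the exploration above; the function names are mine, not the original poster's — `MaskedArray.all()` treats masked entries as True, which is exactly the wildcard behavior wanted here):

```python
import numpy as np

def rolling_window(a, size):
    shape = a.shape[:-1] + (a.shape[-1] - size + 1, size)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

def find_masked_subarray(a, pattern):
    """Start indices where ``pattern`` matches in ``a``; masked = wildcard."""
    windows = rolling_window(a, len(pattern))
    # np.ma.equal keeps the mask aligned with the comparison result, and
    # masked entries count as True under .all(axis=1), i.e. "match anything".
    hits = np.ma.equal(windows, pattern).all(axis=1)
    return np.flatnonzero(np.ma.filled(hits, True))

a = np.array([1, 2, 3, 4, 5])
c = np.ma.array([2, 99, 4], mask=[0, 1, 0])
print(find_masked_subarray(a, c))  # [1]
```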
1
2016-08-24T00:27:06Z
[ "python", "arrays", "numpy" ]
restframework 'tuple' object has no attribute '_meta'
39,112,252
<p>Django throws the next exception:</p> <p>restframework 'tuple' object has no attribute '_meta'</p> <p>Model</p> <pre><code>class BDetail(models.Model): lat = models.FloatField(blank=True, null=True) lng = models.FloatField(blank=True, null=True) class Meta: # managed = False db_table = 'b_detail' </code></pre> <p>View</p> <pre><code>from .models import BDetail from .serializers import BDetailSerializer from rest_framework import viewsets class BDetailList(viewsets.ModelViewSet): queryset = BDetail.objects.all() serializer_class = BDetailSerializer </code></pre> <p>urls</p> <pre><code>from django.conf.urls import url, include from bdetail import views from rest_framework import routers router = routers.DefaultRouter() router.register(r'bdetail', views.BDetailList) urlpatterns = [ url(r'^', include(router.urls), name='bdetail') ] </code></pre> <p>serializers</p> <pre><code>from .models import BDetail from rest_framework import serializers class BDetailSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = BDetail, fields = ('lat', 'lng') </code></pre> <p>Environment:</p> <p>Request Method: GET Request URL: <a href="http://apiix.verinmuebles.dev/v1/bdetail/" rel="nofollow">http://apiix.verinmuebles.dev/v1/bdetail/</a></p> <p>Traceback:</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/django/core/handlers/exception.py" in inner 39. response = get_response(request)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response 187. response = self.process_exception_by_middleware(e, request)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response 185. response = wrapped_callback(request, *callback_args, **callback_kwargs)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/django/views/decorators/csrf.py" in wrapped_view 58. 
return view_func(*args, **kwargs)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/viewsets.py" in view 87. return self.dispatch(request, *args, **kwargs)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch 474. response = self.handle_exception(exc)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/views.py" in handle_exception 434. self.raise_uncaught_exception(exc)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch 471. response = handler(request, *args, **kwargs)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/mixins.py" in list 45. return self.get_paginated_response(serializer.data)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in data 701. ret = super(ListSerializer, self).data</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in data 240. self._data = self.to_representation(self.instance)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation 619. self.child.to_representation(item) for item in iterable</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation 460. fields = self._readable_fields</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/django/utils/functional.py" in <code>__get__</code> 35. res = instance.__dict__[self.name] = self.func(instance)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in _readable_fields 354. 
field for field in self.fields.values()</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in fields 340. for key, value in self.get_fields().items():</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/serializers.py" in get_fields 946. info = model_meta.get_field_info(model)</p> <p>File "/var/www/verinmuebles/current/Env/api/local/lib/python2.7/site-packages/rest_framework/utils/model_meta.py" in get_field_info 36. opts = model._meta.concrete_model._meta</p> <p>Exception Type: AttributeError at /v1/bdetail/ Exception Value: 'tuple' object has no attribute '_meta'</p>
0
2016-08-23T23:33:06Z
39,112,322
<p>You have a trailing <code>,</code> after the name of the <code>BDetail</code> model in your <code>BDetailSerializer</code>. Remove it and your code will work.</p> <p><strong>Suggestion</strong>: Inherit from <code>serializers.ModelSerializer</code> in your <code>BDetailSerializer</code> instead of <code>serializers.HyperlinkedModelSerializer</code>, i.e.:</p> <pre><code>class BDetailSerializer(serializers.ModelSerializer): class Meta: model = BDetail fields = ('lat', 'lng') </code></pre>
0
2016-08-23T23:43:32Z
[ "python", "django", "python-2.7", "django-rest-framework", "django-rest-framework-gis" ]
plotting and saving multiple functions in the same file with matplotlib
39,112,291
<p>I want to save 6 graphs on one page using matplotlib. Each graph is produced by a function call, and I came up with the code below to test before saving:</p> <pre><code>def save_plot (output_DF, path = None): fig = plt.figure(1) sub1 = fig.add_subplot(321) plt.plot(plot_BA(output_DF)) sub2 = fig.add_subplot(322) sub2.plot(plot_merchantableVol(output_DF)) sub3 = fig.add_subplot(323) sub3.plot(plot_topHeight(output_DF)) sub4 = fig.add_subplot(324) sub4.plot(plot_GrTotVol(output_DF)) sub5 = fig.add_subplot(325) sub5.plot(plot_SC(output_DF)) sub6 = fig.add_subplot(326) sub6.plot(plot_N(output_DF)) plt.show() </code></pre> <p>The way it is, I do create a page with 6 empty plots, but I also create 6 separate plots, one for every function I call. plot_BA(output_DF), for example, is a function that I call to read a csv file and create a plot (individually it is working). The others are similar functions and are working as well. It seems that I am missing something to put the graphs in their designated places in fig.</p> <p>Here is one of the functions I am using:</p> <pre><code>def plot_BA(output_DF): BA = output_DF.loc[:,['BA_Aw','BA_Sw', 'BA_Sb','BA_Pl']] BAPlot = BA.plot() plt.xlabel('Year', fontsize=14) plt.ylabel('BA (m2)') return True </code></pre> <p>Any tips?</p>
0
2016-08-23T23:38:16Z
39,113,014
<p>Try something like this:</p> <pre><code>import matplotlib.pyplot as plt import random def function_to_call(func_name, ax): data = range(10) random_x_label = random.choice('abcdefghijk') random_y_label = random.choice('abcdefghijk') random.shuffle(data) # demonstration ax.plot(data) ax.set_xlabel(random_x_label) ax.set_ylabel(random_y_label) ax.set_title(func_name) fig = plt.figure(1) sub1 = fig.add_subplot(321) function_to_call('call_0', sub1) sub2 = fig.add_subplot(322) function_to_call('call_1', sub2) sub3 = fig.add_subplot(323) function_to_call('call_2', sub3) sub4 = fig.add_subplot(324) function_to_call('call_3', sub4) sub5 = fig.add_subplot(325) function_to_call('call_4', sub5) sub6 = fig.add_subplot(326) function_to_call('call_5', sub6) plt.tight_layout() # just to improve spacings plt.show() fig.savefig('output_plot.png') </code></pre> <p>The idea is to <strong>forward your axes-object</strong> (which is one of many inside the whole figure) to your functions. Then you would need the axes-level functions. Notice the difference between <code>plt.xlabel</code> (<strong>figure-level</strong>) and <code>ax.set_xlabel</code> (<strong>axes-level</strong>).</p> <p><a href="http://i.stack.imgur.com/XHmYI.png" rel="nofollow"><img src="http://i.stack.imgur.com/XHmYI.png" alt="enter image description here"></a></p>
0
2016-08-24T01:21:38Z
[ "python", "matplotlib" ]
Python 3.5 ImportError: dynamic module does not define module export function (PyInit_cv2)
39,112,321
<p>This is what I'm getting when I try to import cv2 in the Python 3.5 IDLE. I'm using OpenCV 3.1.0, Python 3.5.2, Ubuntu 16.04.</p> <p>I tried lots of installation methods, but none solved my problem; I had the import working in the terminal, but it stopped working as well. Does anyone have a solution?</p> <pre><code>import cv2 Traceback (most recent call last): File "&lt;pyshell#0&gt;", line 1, in &lt;module&gt; import cv2 ImportError: dynamic module does not define module export function (PyInit_cv2) </code></pre> <p>Edit: I followed the tutorials at these links:</p> <p><a href="http://docs.opencv.org/3.0-last-rst/doc/tutorials/introduction/linux_install/linux_install.html" rel="nofollow">http://docs.opencv.org/3.0-last-rst/doc/tutorials/introduction/linux_install/linux_install.html</a></p> <p><a href="http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/" rel="nofollow">http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/</a></p>
0
2016-08-23T23:43:30Z
39,112,441
<p>For Python 3, the module must provide Python an init entry point (<code>PyInit_cv2</code>), which I guess comes from <code>cv.py</code>. But in my case, this file did not exist. I copied one from <a href="https://code.google.com/p/ctypes-opencv/source/browse/trunk/src/ctypes_opencv/cv.py?r=236" rel="nofollow">google code</a>.</p> <p>If <code>cv.py</code> is not provided, you may get the error <code>ImportError: dynamic module does not define init function (PyInit_cv2)</code> when you <code>import cv2</code> in Python 3 (there is no such problem in Python 2).</p>
0
2016-08-24T00:00:33Z
[ "python", "python-3.x", "opencv", "ubuntu" ]
Putting 2 dimensional numpy arrays into a 3 dimensional array
39,112,372
<p>I want to keep adding numpy arrays to another array in python. let's say I have the following arrays:</p> <pre><code>arraytotal = np.array([]) array1 = np.array([1,1,1,1,1]) array2 = np.array([2,2,2,2,2]) </code></pre> <p>and I want to append array1 and array2 into arraytotal. However, when I use:</p> <pre><code>arraytotal.append[array1] </code></pre> <p>it tells me:</p> <blockquote> <p>'numpy.ndarray' object has no attribute 'append'</p> </blockquote> <p>how can I append array1 and array2 into arraytotal?</p>
-1
2016-08-23T23:51:28Z
39,112,576
<p>You should append the arrays onto a regular python list and then convert the list to a numpy array at the end:</p> <pre><code>import numpy as np total = [] for i in range(5,15): thisArray = np.arange(i) total.append(thisArray) total = np.asarray(total) </code></pre> <p>That loop makes a 2D array; you'd nest loops to produce higher dimensional arrays.</p>
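For the three-dimensional case in the title, the same collect-then-convert idea applies, and `np.stack` does it in one call once the 2-D arrays share a shape (a sketch — the array contents here are illustrative):

```python
import numpy as np

a1 = np.ones((2, 3))
a2 = np.full((2, 3), 2.0)

# One call: stack along a new leading axis -> shape (2, 2, 3)
stacked = np.stack([a1, a2])
print(stacked.shape)  # (2, 2, 3)

# Equivalent incremental pattern: append to a list, convert once at the end.
frames = []
for arr in (a1, a2):
    frames.append(arr)
total = np.asarray(frames)
print(total.shape)  # (2, 2, 3)
```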
-1
2016-08-24T00:16:20Z
[ "python", "arrays", "python-2.7", "numpy", "multidimensional-array" ]
Putting 2 dimensional numpy arrays into a 3 dimensional array
39,112,372
<p>I want to keep adding numpy arrays to another array in python. let's say I have the following arrays:</p> <pre><code>arraytotal = np.array([]) array1 = np.array([1,1,1,1,1]) array2 = np.array([2,2,2,2,2]) </code></pre> <p>and I want to append array1 and array2 into arraytotal. However, when I use:</p> <pre><code>arraytotal.append[array1] </code></pre> <p>it tells me:</p> <blockquote> <p>'numpy.ndarray' object has no attribute 'append'</p> </blockquote> <p>how can I append array1 and array2 into arraytotal?</p>
-1
2016-08-23T23:51:28Z
39,313,960
<p>Unfortunately, there is no way to manipulate arrays quite like that. Instead, make a list with the same name, append the arrays to it, and then convert it to a numpy array, like so:</p> <pre><code>arraytotal = [] array1 = np.array([1,1,1,1,1]) arraytotal.append(array1) arraytotal = np.array(arraytotal) </code></pre>
0
2016-09-04T05:55:37Z
[ "python", "arrays", "python-2.7", "numpy", "multidimensional-array" ]
Django: "referenced before assignment" but only for some variables
39,112,401
<p>I'm writing a small app in Django and I'm keeping the state saved in a few variables I declare outside the methods in views.py. Here is the important part of this file:</p> <pre><code>from app.playerlist import fullList auc_unsold = fullList[:] auc_teams = [] auc_in_progress = [] auc_current_turn = -1 print(auc_in_progress) def auc_action(request): data = json.loads(request.GET["data"]) # ... elif data[0] == "start": random.shuffle(auc_teams) print(auc_unsold) print(auc_in_progress) auc_in_progress = [None, 0, None] print(auc_in_progress) </code></pre> <p>The <code>auc_unsold</code> and <code>auc_teams</code> variables work fine; the <code>auc_in_progress</code> variable is not seen by this method, though, giving the error in the title. If I take out the print statement and let this code assign a value to it, the exception will be thrown somewhere else in the code as soon as I use that variable again.</p> <p>I have tried making another variable and this new one seems to suffer from the same problem as well.</p> <p>What is happening?</p> <hr> <p>Edit: I found a solution: if I write <code>global auc_in_progress</code> just before the print statements, then everything works fine. If I try writing that where I declare the variable above, it doesn't work, though, for some reason.</p> <p>I am unsatisfied with this, because I don't know why this happens and because I dislike using global like that, but eh. Does anyone have an explanation?</p>
-1
2016-08-23T23:55:02Z
39,118,960
<p>You should absolutely not be doing this, either your original code or your proposed solution with <code>global</code>.</p> <p>Anything at module level will be shared across requests, not only for the current user but for <em>all</em> users for that process. So everyone will see the same auction, etc.</p> <p>The reason for your error is because you assign to that variable within your function, which automatically makes it a local variable: see <a href="http://stackoverflow.com/questions/370357/python-variable-scope-error">this question</a> for more details. But the solution recommended there, which is the same as your workaround - ie use <code>global</code> - is not appropriate here; you should store the data somewhere specifically associated with the user, eg the session.</p>
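The scoping rule behind the error can be demonstrated without Django at all (a minimal stand-alone illustration; the variable names are mine):

```python
x = []  # module-level, like auc_in_progress in the question

def reads_only():
    # No assignment to x anywhere in this function, so x is looked up
    # as a global -- this is why reading auc_unsold and auc_teams worked.
    return x

def reads_then_assigns():
    value = x        # UnboundLocalError: the assignment below makes x
    x = [None, 0]    # local for the *entire* function body
    return value

print(reads_only())  # []

try:
    reads_then_assigns()
except UnboundLocalError as err:
    print("error:", err)
```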
1
2016-08-24T09:06:29Z
[ "python", "django" ]
Django Static image File Failed to load resource
39,112,412
<p><strong>FIXED</strong> This was my solution: in the main program's urls.py I had to add these lines:</p> <pre><code>from django.conf import settings from django.conf.urls.static import static urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre> <p>I have a model called Item in which I have an image field. I added a new item with an image to the database and tried loading it inside my index template. However, I don't get an image; when I look at the console it says ("WEBSITENAME/site_media/items/image.jpg 404 (Not Found)").</p> <p>I think the problem lies within the settings.py file, but I can't figure out what exactly I did wrong here.</p> <p><strong>index.html template</strong></p> <pre><code>{% load static %} &lt;div class="item" style="background-image: url('{{ item.img.url }}')"&gt; </code></pre> <p><strong>Model.py</strong></p> <pre><code>class Item(models.Model): name = models.CharField(max_length=200) img = models.ImageField(upload_to='items', default='', blank=True, null=True) def __str__(self): return "%s" % (self.name) </code></pre> <p><strong>Views.py</strong></p> <pre><code>def index(request): latestItems = Item.objects.all().order_by('-id')[:3][::-1] return render(request, 'webshop/index.html', {'latestItems' :latestItems}) </code></pre> <p><strong>settings.py</strong></p> <pre><code>SITE_ROOT = os.path.dirname(os.path.realpath(__file__)) MEDIA_ROOT = os.path.join(SITE_ROOT, 'site_media/') MEDIA_URL = 'site_media/' </code></pre>
0
2016-08-23T23:56:44Z
39,112,499
<p>In your <code>settings.py</code></p> <pre><code>BASE_DIR = os.path.dirname(os.path.dirname(__file__)) STATICFILES_DIRS = ( os.path.join(BASE_DIR, "static"), ) STATIC_URL = '/static/' </code></pre> <p>and read this <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/" rel="nofollow">document</a> </p> <p>With the directory root structure that you have shown, I think the above setting should work. Have not tested it though. Let me know if it works.</p>
0
2016-08-24T00:07:26Z
[ "python", "django", "django-models", "django-templates", "django-views" ]
Django Static image File Failed to load resource
39,112,412
<p><strong>FIXED</strong> This was my solution: in the main program's urls.py I had to add these lines:</p> <pre><code>from django.conf import settings from django.conf.urls.static import static urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre> <p>I have a model called Item in which I have an image field. I added a new item with an image to the database and tried loading it inside my index template. However, I don't get an image; when I look at the console it says ("WEBSITENAME/site_media/items/image.jpg 404 (Not Found)").</p> <p>I think the problem lies within the settings.py file, but I can't figure out what exactly I did wrong here.</p> <p><strong>index.html template</strong></p> <pre><code>{% load static %} &lt;div class="item" style="background-image: url('{{ item.img.url }}')"&gt; </code></pre> <p><strong>Model.py</strong></p> <pre><code>class Item(models.Model): name = models.CharField(max_length=200) img = models.ImageField(upload_to='items', default='', blank=True, null=True) def __str__(self): return "%s" % (self.name) </code></pre> <p><strong>Views.py</strong></p> <pre><code>def index(request): latestItems = Item.objects.all().order_by('-id')[:3][::-1] return render(request, 'webshop/index.html', {'latestItems' :latestItems}) </code></pre> <p><strong>settings.py</strong></p> <pre><code>SITE_ROOT = os.path.dirname(os.path.realpath(__file__)) MEDIA_ROOT = os.path.join(SITE_ROOT, 'site_media/') MEDIA_URL = 'site_media/' </code></pre>
0
2016-08-23T23:56:44Z
39,112,528
<p>you have:</p> <pre><code>upload_to='items' </code></pre> <p>but:</p> <pre><code>{{ item.img.url }} </code></pre> <p>should that be?</p> <pre><code>{{ items.img.url }} </code></pre>
0
2016-08-24T00:10:44Z
[ "python", "django", "django-models", "django-templates", "django-views" ]
Where (and what) can I find in local Google Chrome sql databases?
39,112,419
<p>I'm trying to work my way through the book "Violent Python", and I'm on chapter 3... The exercise walks you through writing some Python scripts to grab Firefox data from the locally stored sql dbs - cool stuff! But now I want to see if I can do the same for Chrome. I've seen on some websites that I should be able to find a "urls" db, but I can't seem to figure out where it is. </p> <p>I've found the Databases.db file, which has tables "databases", "meta", and "sqlite_sequence". Am I on the right track here? I want to find things like internet history, bookmarks, etc.</p> <p>Thanks!</p>
-1
2016-08-23T23:57:19Z
39,112,640
<p>If you are on windows you'll find the sqlite db for chrome history here:</p> <pre><code>C:\Users\%USERNAME%\AppData\Local\Google\Chrome\User Data\Default\History </code></pre> <p>On Linux:</p> <pre><code>/home/$USER/.config/google-chrome/Default/History </code></pre> <p>On MacOS-X:</p> <pre><code>/Users/$USER/Library/Application Support/Google/Chrome/Default/History </code></pre> <p>For more Information look here: <a href="http://www.forensicswiki.org/wiki/Google_Chrome" rel="nofollow">http://www.forensicswiki.org/wiki/Google_Chrome</a></p> <p>I advise you to use: <a href="http://sqlitebrowser.org/" rel="nofollow">http://sqlitebrowser.org/</a> to look through the history file. <a href="http://i.stack.imgur.com/DZdC9.png" rel="nofollow"><img src="http://i.stack.imgur.com/DZdC9.png" alt="enter image description here"></a></p>
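To read the `urls` table from Python, the standard-library `sqlite3` module is enough. The sketch below uses an in-memory stand-in with a simplified schema (the real `History` file has more columns, and it is locked while Chrome is running, so copy it somewhere first):

```python
import sqlite3

# Stand-in for a copied Chrome History file; swap ":memory:" for the path
# to your copy, e.g. "History_copy".
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, "
    "title TEXT, visit_count INTEGER)"
)
conn.execute(
    "INSERT INTO urls (url, title, visit_count) "
    "VALUES ('https://example.com', 'Example Domain', 3)"
)

for url, title, count in conn.execute(
        "SELECT url, title, visit_count FROM urls ORDER BY visit_count DESC"):
    print(count, url, title)
```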
1
2016-08-24T00:25:43Z
[ "python", "sqlite", "google-chrome", "sqlite3" ]
'module' object has no attribute 'TK'
39,112,426
<p>I'm a beginner learning GUI programming.</p> <p>My Python version is 2.7 and I'm using Windows.</p> <p>I've searched for tkinter in the folder; there is only one Python file, which is in <code>C:\python27</code>.</p> <p>Here is my code:</p> <pre><code>import Tkinter as tk class Electronic_Signature_User_Program(tk.TK): def __init__(self,*args,**kwargs): tk.Tk.__init__(self, *args, **kwargs) container = tk.Frame(self) container.pack(side = "top",fill = "both",expand = True) container.grid_rowconfigure(0,weight=1) container.grid_columnconfigure(0,weight=1) self.frames = {} for F in (Loginpage, Login_Confirm): frame = Loginpage(container,self) self.frames[Loginpage] = frame frame.grid(row=0,column=0,sticky="nsew") self.show_frame(Loginpage) def show_frame(self,cont): frame = self.frames[cont] frame.tkraise() class Loginpage(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) button1 = tk.Button(self,text="Login_Confirm",command=lambda:controller.show_frame(Login_Confirm)) button1.pack() class Login_Confirm(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) button2 = tk.Button(self,text="Loginpage",command=lambda:controller.show_frame(Loginpage)) button2.pack() app = Electronic_Signature_User_Program() app.title('UoL 702 Electrinic Signature User Program') app.mainloop() </code></pre>
-2
2016-08-23T23:58:55Z
39,112,442
<p>In the class definition (<code>class Electronic_Signature_User_Program(tk.TK)</code>) you have <code>TK</code> where it should be <code>Tk</code> — attribute names are case-sensitive, and the module has no attribute spelled <code>TK</code>.</p>
0
2016-08-24T00:00:50Z
[ "python", "user-interface", "tkinter" ]
How to include non PyPi packages for virtualenv requirements file?
39,112,466
<p>Is there a way to include packages/modules not available through pip in the requirements file so that the project is portable? </p> <p>The default version of lxml seems to have issues with pypy so I need to use a <a href="https://github.com/aglyzov/lxml" rel="nofollow">custom fork</a>. </p> <p>The problem is I need Heroku (where I deploy this application) to use a custom version of lxml and not the one that's available via pip. Is there any way to do this?</p>
0
2016-08-24T00:02:41Z
39,112,477
<p>You can by using <a href="https://pip.pypa.io/en/latest/user_guide/#id9" rel="nofollow">the listed git packages syntax</a>, you would need to add the following line to your requirements.txt</p> <pre><code>-e git://github.com/aglyzov/lxml.git#egg=lxml </code></pre>
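For example, a requirements.txt can mix ordinary PyPI pins with that editable VCS line (the `requests` pin is purely illustrative; pip also accepts the `git+https` transport, which some hosts require):

```
requests==2.11.1
-e git+https://github.com/aglyzov/lxml.git#egg=lxml
```

Then `pip install -r requirements.txt` — which Heroku's Python buildpack runs for you at deploy time — builds lxml from the fork instead of the PyPI release.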
2
2016-08-24T00:04:50Z
[ "python", "pip", "virtualenv", "pypy" ]
TypeError: unsupported operand type(s) for -: 'str' and 'str'?
39,112,489
<p>I'm new to programming and can't figure out how to fix this error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "/Users/aubreyoleary/Documents/Cashier.py", line 31, in &lt;module&gt; changePennies = int((amountReceived - amountDue) * 100) TypeError: unsupported operand type(s) for -: 'str' and 'str' </code></pre> <p>My code:</p> <pre><code>import math class Cashier: def getDollars(self, x): return x / 100 def getQuarters(self, x): y = x % 100 return y / 25 def getDimes(self, x): y = x % 100 return y % 10 def getNickels(self, x): y = x % 100 return y % 5 def getPennies(self, x): y = x * 1 return y while True: thecashier = Cashier() amountDue = input("Please enter amount due: ") amountReceived = input("Please enter amount received: ") changePennies = int((amountReceived - amountDue) * 100) print(thecashier.getPennies(changePennies)) print(thecashier.getDollars(changePennies)) print(thecashier.getQuarters(changePennies)) print(thecashier.getDimes(changePennies)) print(thecashier.getNickels(changePennies)) choice = input("Do you want to continue &lt;yes&gt; &lt;no&gt;? ") if (choice == "no"): print("Have a nice day. ") break </code></pre>
0
2016-08-24T00:06:26Z
39,112,505
<p>That means <code>'6' - '4'</code> won't work because they are both strings. You first need to convert the string values to numbers:</p> <pre><code>changePennies = int(round((float(amountReceived) - float(amountDue)) * 100, 0)) </code></pre>
1
2016-08-24T00:08:45Z
[ "python", "python-3.x" ]
TypeError: unsupported operand type(s) for -: 'str' and 'str'?
39,112,489
<p>I'm new to programming and can't figure out how to fix this error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "/Users/aubreyoleary/Documents/Cashier.py", line 31, in &lt;module&gt; changePennies = int((amountReceived - amountDue) * 100) TypeError: unsupported operand type(s) for -: 'str' and 'str' </code></pre> <p>My code:</p> <pre><code>import math class Cashier: def getDollars(self, x): return x / 100 def getQuarters(self, x): y = x % 100 return y / 25 def getDimes(self, x): y = x % 100 return y % 10 def getNickels(self, x): y = x % 100 return y % 5 def getPennies(self, x): y = x * 1 return y while True: thecashier = Cashier() amountDue = input("Please enter amount due: ") amountReceived = input("Please enter amount received: ") changePennies = int((amountReceived - amountDue) * 100) print(thecashier.getPennies(changePennies)) print(thecashier.getDollars(changePennies)) print(thecashier.getQuarters(changePennies)) print(thecashier.getDimes(changePennies)) print(thecashier.getNickels(changePennies)) choice = input("Do you want to continue &lt;yes&gt; &lt;no&gt;? ") if (choice == "no"): print("Have a nice day. ") break </code></pre>
0
2016-08-24T00:06:26Z
39,112,512
<p>It is because the data type of <code>amountReceived</code> and <code>amountDue</code> is string. You have to typecast them to <code>float</code> before you perform the arithmetic <code>-</code> operation.</p> <p>Instead of <code>int((amountReceived - amountDue) * 100)</code>, use:</p> <pre><code>changePennies = int((float(amountReceived) - float(amountDue)) * 100) </code></pre>
0
2016-08-24T00:09:21Z
[ "python", "python-3.x" ]
How to write list to csv, with each item on a new row
39,112,491
<p>I am having trouble writing a list of items into a csv file, with each item being on a new row. Here is what I have, it does what I want, except it is putting each letter on a new row...</p> <pre><code>import csv data = ['First Item', 'Second Item', 'Third Item'] with open('output.csv', 'w', newline='') as csvfile: writer = csv.writer(csvfile) for i in data: writer.writerows(i) </code></pre>
0
2016-08-24T00:06:41Z
39,112,550
<p>Use a nested list: <code>writer.writerows([[i]])</code>. Explanation from <a href="http://stackoverflow.com/questions/14134237/writing-data-from-a-python-list-to-csv-row-wise">writing data from a python list to csv row-wise</a>:</p> <blockquote> <p><code>.writerow</code> takes an iterable and uses each element of that iterable for each column. If you use a list with only one element it will be placed in a single column.</p> </blockquote> <p>So, as all you need is a single column, wrap each item in its own one-element list.</p>
2
2016-08-24T00:13:33Z
[ "python", "csv" ]
Jupyter iPython Notebook and Command Line yield different results
39,112,504
<p>I have the following Python 2.7 code: </p> <pre><code>def average_rows2(mat): ''' INPUT: 2 dimensional list of integers (matrix) OUTPUT: list of floats Use map to take the average of each row in the matrix and return it as a list. Example: &gt;&gt;&gt; average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]]) [4.75, 6.25] ''' return map(lambda x: sum(x)/float(len(x)), mat) </code></pre> <p>When I run it in my browser using iPython notebook, I get the following output: </p> <pre><code>[4.75, 6.25] </code></pre> <p>However, when I run the code's file on Command Line (Windows), I get the following error: </p> <pre><code>&gt;python -m doctest Delete.py ********************************************************************** File "C:\Delete.py", line 10, in Delete.average_rows2 Failed example: average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]]) Expected: [4.75, 6.25] Got: &lt;map object at 0x00000228FE78A898&gt; ********************************************************************** </code></pre> <p>Why does the command line toss an error? Is there a better way to structure my function?</p>
1
2016-08-24T00:08:44Z
39,112,581
<p>It seems like your command line is running Python 3. The builtin <code>map</code> returns a list in Python 2, but an iterator (a <code>map</code> object) in Python 3. To turn the latter into a list, apply the <code>list</code> constructor to it:</p> <pre><code># Python 2 average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]]) == [4.75, 6.25] # =&gt; True # Python 3 list(average_rows2([[4, 5, 2, 8], [3, 9, 6, 7]])) == [4.75, 6.25] # =&gt; True </code></pre>
5
2016-08-24T00:16:38Z
[ "python" ]
For Loop at Python
39,112,509
<p>My goal is to create a for-loop code that will return an output like the following:</p> <pre><code>list(indicator.values())[0], list(indicator.values())[1], list(indicator.values())[2], list(indicator.values())[3], ... ... list(indicator.values())[98], list(indicator.values())[99], </code></pre> <p>However when I run the code below, I receive an error message 'TypeError: 'int' object is not iterable'. How can I fix this so I can get the intended result?</p> <pre><code>x = 100 for item in x: list(indicator.values())[item] </code></pre>
-3
2016-08-24T00:09:11Z
39,112,569
<p>Use <code>range()</code> with <code>for</code> to iterate from 0 to 99. For example:</p> <pre><code>for i in range(100): print i # prints the numbers from 0 to 99 </code></pre>
0
2016-08-24T00:15:56Z
[ "python", "for-loop" ]
For Loop at Python
39,112,509
<p>My goal is to create a for-loop code that will return an output like the following:</p> <pre><code>list(indicator.values())[0], list(indicator.values())[1], list(indicator.values())[2], list(indicator.values())[3], ... ... list(indicator.values())[98], list(indicator.values())[99], </code></pre> <p>However when I run the code below, I receive an error message 'TypeError: 'int' object is not iterable'. How can I fix this so I can get the intended result?</p> <pre><code>x = 100 for item in x: list(indicator.values())[item] </code></pre>
-3
2016-08-24T00:09:11Z
39,112,575
<p>Read this <a href="http://pythoncentral.io/pythons-range-function-explained/" rel="nofollow">document</a> and try this code:</p> <pre><code>x = range(0,100) for item in x: list(indicator.values())[item] </code></pre>
0
2016-08-24T00:16:13Z
[ "python", "for-loop" ]
For Loop at Python
39,112,509
<p>My goal is to create a for-loop code that will return an output like the following:</p> <pre><code>list(indicator.values())[0], list(indicator.values())[1], list(indicator.values())[2], list(indicator.values())[3], ... ... list(indicator.values())[98], list(indicator.values())[99], </code></pre> <p>However when I run the code below, I receive an error message 'TypeError: 'int' object is not iterable'. How can I fix this so I can get the intended result?</p> <pre><code>x = 100 for item in x: list(indicator.values())[item] </code></pre>
-3
2016-08-24T00:09:11Z
39,112,595
<p>Use this code:</p> <pre><code>x = 100 for item in range(x): print(list(indicator.values())[item]) </code></pre>
3
2016-08-24T00:19:12Z
[ "python", "for-loop" ]
For Loop at Python
39,112,509
<p>My goal is to create a for-loop code that will return an output like the following:</p> <pre><code>list(indicator.values())[0], list(indicator.values())[1], list(indicator.values())[2], list(indicator.values())[3], ... ... list(indicator.values())[98], list(indicator.values())[99], </code></pre> <p>However when I run the code below, I receive an error message 'TypeError: 'int' object is not iterable'. How can I fix this so I can get the intended result?</p> <pre><code>x = 100 for item in x: list(indicator.values())[item] </code></pre>
-3
2016-08-24T00:09:11Z
39,113,909
<p>For -- in -- is Python's way of iterating (going through) something iterable (something that can be gone through). For example a list and a dictionary are iterable. The <code>range()</code> function creates <strong>a sequence of integers</strong> (a list in Python 2, a lazy range object in Python 3) from the given parameters, making an integer bound (which is not itself iterable) usable in a for loop.</p>
0
2016-08-24T03:26:39Z
[ "python", "for-loop" ]
Python is not recognizing text file that is in the same directory?
39,112,522
<p>My current project tree:</p> <pre><code>redditbot/ -- commands/ ----__init__.py ----comment_cache.txt ----readcomments.py --mainbot.py </code></pre> <p>What I am attempting to do is read the comment_cache.txt file via <code>open('comment_cache.txt')</code>in the readcomments.py file., but for some reason I am getting a FileNotFoundError. Even when I try <code>print(os.path.isfile('comment_cache.txt'))</code>, it just returns false. </p> <p>Am I making a beginner mistake here? Maybe something that just I keep missing?</p> <p>EDIT: I appreciate all the answers/comments, but I believe it is a problem with my Python interpreter itself. I kept moving around the file between the redditbot/ directory and the commands/ package until it just started working. Also for some reason whenever I call <code>print()</code>, PyCharm tells me that it is undefined...</p>
-1
2016-08-24T00:10:13Z
39,114,145
<p>I am assuming that <code>mainbot.py</code> is the entry point from where you run your application, so</p> <p><strong>Problem I</strong></p> <p>based on your project tree, the file should be opened with a path relative to that entry point:</p> <pre><code>open('commands/comment_cache.txt') </code></pre> <p><strong>Problem II</strong></p> <blockquote> <p>I kept moving around the file between the redditbot/ directory and the commands/ package until it just started working.</p> </blockquote> <p>You probably ended up placing the file in the same folder as your <code>mainbot.py</code>.</p> <p><strong>Problem III</strong></p> <blockquote> <p>Also for some reason whenever I call print(), PyCharm tells me that it is undefined...</p> </blockquote> <p>There can be many possibilities behind this; check that the Python interpreter is configured correctly in PyCharm and that the Python libraries are available on the PYTHONPATH for the project.</p>
0
2016-08-24T03:53:53Z
[ "python", "python-3.x", "relative-path", "reddit", "praw" ]
Django Form Not updating
39,112,555
<p>I'm trying to update a form in Django. I have the following: </p> <p><strong>models.py</strong></p> <pre><code>from django.db import models from django.core.urlresolvers import reverse class List(models.Model): def get_absolute_url(self): return reverse('view_list', args=[self.id]) # Create your models here. class Item(models.Model): text = models.TextField(default = '') list = models.ForeignKey(List, default = None) </code></pre> <p><strong>forms.py</strong></p> <pre><code>from django import forms from lists.models import Item EMPTY_ITEM_ERROR = "You can't have an empty list item" class ItemForm(forms.models.ModelForm): class Meta: model = Item fields = ('text',) widgets ={ 'text' : forms.fields.TextInput(attrs={ 'placeholder': 'Enter a to-do item', 'class': 'form-control input-lg', }), } error_messages = { 'text' : { 'required': EMPTY_ITEM_ERROR } } </code></pre> <p>I'm not seeing any changes in the forms.py now that it has been loaded. What I mean is, the page displays the form find, but if I attempt to change, for example, the placeholder value::</p> <pre><code> 'placeholder': 'Enter a to-do item OR DON'T!', </code></pre> <p>The input box doesn't show any changes once the page loads. Is there a manage.py command I need to run? Or some other migration? </p>
0
2016-08-24T00:13:57Z
39,112,700
<p>I'm assuming you are saying that you don't see the changes in your browser. So I suggest you clear your browser's cache and reload the page. You can also open the browser developer tools and enable the setting that disables caching while dev tools is open. Cache can be a pain when you forget it exists.</p> <p>Answering your question at the bottom: you don't need to restart <code>manage.py runserver</code> or use another command. Django's development server automatically reloads with every change in the Python files.</p>
1
2016-08-24T00:34:54Z
[ "python", "django", "forms", "refresh" ]
.build_release/lib/libcaffe.so: undefined reference to `boost::python::import(boost::python::str)'
39,112,578
<p>I get this error with Python2.7 and Ubuntu15.10:</p> <pre><code>jalal@klein:~/computer_vision/py-faster-rcnn/caffe-fast-rcnn$ make -j8 &amp;&amp; make pycaffe CXX/LD -o .build_release/tools/compute_image_mean.bin CXX/LD -o .build_release/tools/upgrade_net_proto_binary.bin CXX/LD -o .build_release/tools/convert_imageset.bin CXX/LD -o .build_release/tools/upgrade_net_proto_text.bin CXX/LD -o .build_release/tools/caffe.bin CXX/LD -o .build_release/tools/extract_features.bin CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin CXX/LD -o .build_release/examples/cpp_classification/classification.bin /usr/bin/ld: warning: libboost_system.so.1.58.0, needed by .build_release/lib/libcaffe.so, may conflict with libboost_system.so.1.61.0 /usr/bin/ld: warning: libboost_thread.so.1.58.0, needed by .build_release/lib/libcaffe.so, may conflict with libboost_thread.so.1.61.0 .build_release/lib/libcaffe.so: undefined reference to `boost::python::throw_error_already_set()' .build_release/lib/libcaffe.so: undefined reference to `boost::python::import(boost::python::str)' .build_release/lib/libcaffe.so: undefined reference to `PyEval_CallFunction' .build_release/lib/libcaffe.so: undefined reference to `typeinfo for boost::python::error_already_set' .build_release/lib/libcaffe.so: undefined reference to `PyErr_Print' </code></pre> <p>How can I fix this? I have boost installed. From <a href="https://github.com/rbgirshick/py-faster-rcnn" rel="nofollow">https://github.com/rbgirshick/py-faster-rcnn</a></p> <p>I already have run:</p> <pre><code>sudo apt-get install build-essential g++ python-dev autotools-dev libicu-dev build-essential libbz2-dev libboost-all-dev </code></pre> <p>And:</p> <pre><code>sudo apt-get install libboost-python-dev </code></pre>
1
2016-08-24T00:16:28Z
39,129,554
<p>Installing Boost from the source worked for me! After downloading the Boost source code from its official website:</p> <pre><code>sudo ./bootstrap.sh --prefix=/usr/local ./b2 sudo ./b2 install </code></pre>
1
2016-08-24T17:26:47Z
[ "python", "c++", "ubuntu", "boost", "caffe" ]
How do I set TensorFlow RNN state when state_is_tuple=True?
39,112,622
<p>I have written an <a href="https://github.com/wpm/tfrnnlm" rel="nofollow">RNN language model using TensorFlow</a>. The model is implemented as an <code>RNN</code> class. The graph structure is built in the constructor, while <code>RNN.train</code> and <code>RNN.test</code> methods run it.</p> <p>I want to be able to reset the RNN state when I move to a new document in the training set, or when I want to run a validation set during training. I do this by managing the state inside the training loop, passing it into the graph via a feed dictionary.</p> <p>In the constructor I define the RNN like so</p> <pre><code> cell = tf.nn.rnn_cell.LSTMCell(hidden_units) rnn_layers = tf.nn.rnn_cell.MultiRNNCell([cell] * layers) self.reset_state = rnn_layers.zero_state(batch_size, dtype=tf.float32) self.state = tf.placeholder(tf.float32, self.reset_state.get_shape(), "state") self.outputs, self.next_state = tf.nn.dynamic_rnn(rnn_layers, self.embedded_input, time_major=True, initial_state=self.state) </code></pre> <p>The training loop looks like this</p> <pre><code> for document in documents: state = session.run(self.reset_state) for x, y in document: _, state = session.run([self.train_step, self.next_state], feed_dict={self.x:x, self.y:y, self.state:state}) </code></pre> <p><code>x</code> and <code>y</code> are batches of training data in a document. The idea is that I pass the latest state along after each batch, except when I start a new document, when I zero out the state by running <code>self.reset_state</code>.</p> <p>This all works. Now I want to change my RNN to use the recommended <code>state_is_tuple=True</code>. However, I don't know how to pass the more complicated LSTM state object via a feed dictionary. Also I don't know what arguments to pass to the <code>self.state = tf.placeholder(...)</code> line in my constructor.</p> <p>What is the correct strategy here? There still isn't much example code or documentation for <code>dynamic_rnn</code> available.</p> <hr> <p>TensorFlow issues <a href="https://github.com/tensorflow/tensorflow/issues/2695" rel="nofollow">2695</a> and <a href="https://github.com/tensorflow/tensorflow/issues/2838" rel="nofollow">2838</a> appear relevant.</p> <p>A <a href="http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/" rel="nofollow">blog post</a> on WILDML addresses these issues but doesn't directly spell out the answer.</p> <p>See also <a href="http://stackoverflow.com/questions/38241410/tensorflow-remember-lstm-state-for-next-batch-stateful-lstm">TensorFlow: Remember LSTM state for next batch (stateful LSTM)</a>.</p>
1
2016-08-24T00:22:34Z
39,917,340
<p>One problem with a Tensorflow placeholder is that you can only feed it with a Python list or Numpy array (I think). So you can't save the state between runs in tuples of LSTMStateTuple.</p> <p>I solved this by saving the state in a tensor like this</p> <p><code>initial_state = np.zeros((num_layers, 2, batch_size, state_size))</code></p> <p>You have two components in an LSTM layer, the <strong>cell state</strong> and the <strong>hidden state</strong>; that's what the "2" comes from. (This article is great: <a href="https://arxiv.org/pdf/1506.00019.pdf" rel="nofollow">https://arxiv.org/pdf/1506.00019.pdf</a>)</p> <p>When building the graph you unpack and create the tuple state like this:</p> <pre><code>state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size]) l = tf.unpack(state_placeholder, axis=0) rnn_tuple_state = tuple( [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0],l[idx][1]) for idx in range(num_layers)] ) </code></pre> <p>Then you get the new state the usual way</p> <pre><code>cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True) cell = tf.nn.rnn_cell.MultiRNNCell([cell] * num_layers, state_is_tuple=True) outputs, state = tf.nn.dynamic_rnn(cell, series_batch_input, initial_state=rnn_tuple_state) </code></pre> <p>It shouldn't be like this... perhaps they are working on a solution.</p>
2
2016-10-07T12:29:24Z
[ "python", "machine-learning", "tensorflow" ]
Save line in file to list
39,112,645
<pre><code>file = input('Name: ') with open(file) as infile: for line in infile: for name in infile: name print(name[line]) </code></pre> <p>So if a user were to pass a file of vertical list of sentences, how would I save each sentence to its own list?</p> <p>Sample input:</p> <pre><code>'hi' 'hello' 'cat' 'dog' </code></pre> <p>Output:</p> <pre><code>['hi'] ['hello'] and so on... </code></pre>
4
2016-08-24T00:26:49Z
39,112,738
<p>I think this is what you need:</p> <pre><code>with open(file) as infile: for line in infile.readlines(): print([line.strip()]) # each stripped line wrapped in its own list </code></pre> <p>If the content of the file is:</p> <pre><code>hi hello cat dog </code></pre> <p>It will <code>print</code>:</p> <pre><code>['hi'] ['hello'] ['cat'] ['dog'] </code></pre>
0
2016-08-24T00:40:59Z
[ "python", "python-3.x" ]
Save line in file to list
39,112,645
<pre><code>file = input('Name: ') with open(file) as infile: for line in infile: for name in infile: name print(name[line]) </code></pre> <p>So if a user were to pass a file of vertical list of sentences, how would I save each sentence to its own list?</p> <p>Sample input:</p> <pre><code>'hi' 'hello' 'cat' 'dog' </code></pre> <p>Output:</p> <pre><code>['hi'] ['hello'] and so on... </code></pre>
4
2016-08-24T00:26:49Z
39,112,753
<pre><code>sentence_lists = [] with open('file') as f: for s in f: sentence_lists.append([s.strip()]) </code></pre> <p><hr> simplified as per <code>idjaw</code>:</p> <pre><code>with open('file') as f: sentence_list = [[s.strip()] for s in f] </code></pre>
3
2016-08-24T00:43:12Z
[ "python", "python-3.x" ]
Save line in file to list
39,112,645
<pre><code>file = input('Name: ') with open(file) as infile: for line in infile: for name in infile: name print(name[line]) </code></pre> <p>So if a user were to pass a file of vertical list of sentences, how would I save each sentence to its own list?</p> <p>Sample input:</p> <pre><code>'hi' 'hello' 'cat' 'dog' </code></pre> <p>Output:</p> <pre><code>['hi'] ['hello'] and so on... </code></pre>
4
2016-08-24T00:26:49Z
39,112,767
<pre><code>&gt;&gt;&gt; [line.split() for line in open('File.txt')] [['hi'], ['hello'], ['cat'], ['dog']] </code></pre> <p>Or, if we want to be more careful about making sure that the file is closed:</p> <pre><code>&gt;&gt;&gt; with open('File.txt') as f: ... [line.split() for line in f] ... [['hi'], ['hello'], ['cat'], ['dog']] </code></pre>
6
2016-08-24T00:45:36Z
[ "python", "python-3.x" ]
Can I create a new column based on when the value changes in another column?
39,112,689
<p>Let s say I have this <code>df</code></p> <pre><code>print(df) DATE_TIME A B 0 10/08/2016 12:04:56 1 5 1 10/08/2016 12:04:58 1 6 2 10/08/2016 12:04:59 2 3 3 10/08/2016 12:05:00 2 2 4 10/08/2016 12:05:01 3 4 5 10/08/2016 12:05:02 3 6 6 10/08/2016 12:05:03 1 3 7 10/08/2016 12:05:04 1 2 8 10/08/2016 12:05:05 2 4 9 10/08/2016 12:05:06 2 6 10 10/08/2016 12:05:07 3 4 11 10/08/2016 12:05:08 3 2 </code></pre> <p>The values in column <code>['A']</code> repeat over time, I need a column though, where they have a new ID each time they change, so that I would have something like the following <code>df</code></p> <pre><code>print(df) DATE_TIME A B C 0 10/08/2016 12:04:56 1 5 1 1 10/08/2016 12:04:58 1 6 1 2 10/08/2016 12:04:59 2 3 2 3 10/08/2016 12:05:00 2 2 2 4 10/08/2016 12:05:01 3 4 3 5 10/08/2016 12:05:02 3 6 3 6 10/08/2016 12:05:03 1 3 4 7 10/08/2016 12:05:04 1 2 4 8 10/08/2016 12:05:05 2 4 5 9 10/08/2016 12:05:06 2 6 5 10 10/08/2016 12:05:07 3 4 6 11 10/08/2016 12:05:08 3 2 6 </code></pre> <p>Is there a way to do this with python? I am still very new to this and hoped to find something that could help me in pandas, but I have not found anything yet. In my original dataframe the values in Column <code>['A']</code> change on irregular intervals approximately every ten minutes and not every two rows like in my example. Has anybody an idea how I could approach this task? Thank you</p>
3
2016-08-24T00:33:07Z
39,112,754
<p>You can use the <em>shift-cumsum</em> pattern.</p> <pre><code>df['C'] = (df.A != df.A.shift()).cumsum() &gt;&gt;&gt; df DATE_TIME A B C 0 10/08/2016 12:04:56 1 5 1 1 10/08/2016 12:04:58 1 6 1 2 10/08/2016 12:04:59 2 3 2 3 10/08/2016 12:05:00 2 2 2 4 10/08/2016 12:05:01 3 4 3 5 10/08/2016 12:05:02 3 6 3 6 10/08/2016 12:05:03 1 3 4 7 10/08/2016 12:05:04 1 2 4 8 10/08/2016 12:05:05 2 4 5 9 10/08/2016 12:05:06 2 6 5 10 10/08/2016 12:05:07 3 4 6 11 10/08/2016 12:05:08 3 2 6 </code></pre> <p>As a side note, this is a popular pattern for grouping. For example, to get the average <code>B</code> value of each such group:</p> <pre><code>df.groupby((df.A != df.A.shift()).cumsum()).B.mean() </code></pre>
5
2016-08-24T00:43:30Z
[ "python", "pandas", "uniqueidentifier" ]
Volume Slider using Tkinter
39,112,692
<p>I'm new to coding, and I am attempting to create a juke box for a school project, but I'm struggling to create a slider that will edit the volume. I'm just unsure where to start to get the volume to actually change as I move the slider. I'm using VLC lib. </p> <pre><code>import vlc import random from tkinter import * import threading song = "" instance = vlc.Instance() def get_songs(): global song global x global songs songs = filedialog.askopenfilenames() x = 0 song = songs[x] print(songs) commence(song) def pause_resume(): player.pause() def commence(song): global player global x player = instance.media_player_new() media = instance.media_new(song) player.set_media(media) player.play() def next_song(): if x &gt;= len(songs): print("Error: Can't go any further") x = 0 return player.stop() song = songs[x] commence(song) window = Tk() window.geometry("600x600") window.title('JukeBox') #pause_button = Button(window, text = "Next", command = next_song) #pause_button.grid(row=1, column = 2) Button(window, text="Start", command=get_songs).grid(column=1,row=1) Button(window, text="Next", command=next_song).grid(column=1,row=2) pause_button = Button(window, text = "Pause/Resume", command = pause_resume) pause_button.grid(row=3, column = 1) menubar = Menu(window) filemenu = Menu(menubar, tearoff=0) filemenu.add_separator() filemenu.add_command(label="Open", command=get_songs()) filemenu.add_command(label="Exit", command=window.destroy) menubar.add_cascade(label="File", menu=filemenu) window.config(menu=menubar) vol = Scale(window,from_ = 0,to = 1,orient = HORIZONTAL ,resolution = .1,) vol.grid(row = 1, column = 2) window.mainloop() </code></pre> <p>I understand I'm not using the best coding practices, but this way I can actually understand what I have written.</p>
0
2016-08-24T00:33:28Z
39,113,802
<p>Set the <code>command</code> parameter when creating the <code>Scale</code> widget. Note that VLC's <code>audio_set_volume()</code> expects an integer percentage from 0 to 100, so a <code>Scale</code> running from 0 to 1 will only ever set the volume to 0% or 1%; create it with <code>from_ = 0, to = 100, resolution = 1</code> (or multiply the value by 100):</p> <pre><code>def set_volume(v): global vol global player # either get the new volume from the given argument v (type: str): # value = int(v) # or get it directly from the Scale widget value = vol.get() player.audio_set_volume(value) vol = Scale(window, from_ = 0, to = 100, orient = HORIZONTAL, resolution = 1, command=set_volume) </code></pre>
0
2016-08-24T03:11:50Z
[ "python", "tkinter", "python-3.5" ]
Volume Slider using Tkinter
39,112,692
<p>I'm new to coding, and I am attempting to create a juke box for a school project, but I'm struggling to create a slider that will edit the volume. I'm just unsure where to start to get the volume to actually change as I move the slider. I'm using VLC lib. </p> <pre><code>import vlc import random from tkinter import * import threading song = "" instance = vlc.Instance() def get_songs(): global song global x global songs songs = filedialog.askopenfilenames() x = 0 song = songs[x] print(songs) commence(song) def pause_resume(): player.pause() def commence(song): global player global x player = instance.media_player_new() media = instance.media_new(song) player.set_media(media) player.play() def next_song(): if x &gt;= len(songs): print("Error: Can't go any further") x = 0 return player.stop() song = songs[x] commence(song) window = Tk() window.geometry("600x600") window.title('JukeBox') #pause_button = Button(window, text = "Next", command = next_song) #pause_button.grid(row=1, column = 2) Button(window, text="Start", command=get_songs).grid(column=1,row=1) Button(window, text="Next", command=next_song).grid(column=1,row=2) pause_button = Button(window, text = "Pause/Resume", command = pause_resume) pause_button.grid(row=3, column = 1) menubar = Menu(window) filemenu = Menu(menubar, tearoff=0) filemenu.add_separator() filemenu.add_command(label="Open", command=get_songs()) filemenu.add_command(label="Exit", command=window.destroy) menubar.add_cascade(label="File", menu=filemenu) window.config(menu=menubar) vol = Scale(window,from_ = 0,to = 1,orient = HORIZONTAL ,resolution = .1,) vol.grid(row = 1, column = 2) window.mainloop() </code></pre> <p>I understand I'm not using the best coding practices, but this way I can actually understand what I have written.</p>
0
2016-08-24T00:33:28Z
39,118,051
<p>My mate was able to help me with this by simply adding self in the required parameters of the function; no clue why it helps. Thanks everyone for trying, much appreciated.</p> <p>(The likely reason it helps: a <code>Scale</code> calls its <code>command</code> callback with the new value as an argument, so the callback must accept one parameter, which here just happens to be named <code>self</code>.)</p> <pre><code>def show_value(self): global player i = vol.get() player.audio_set_volume(i) vol = Scale(window,from_ = 0,to = 100,orient = HORIZONTAL ,resolution = 1,command = show_value) vol.place(x=75, y = 300) vol.set(50) </code></pre>
0
2016-08-24T08:21:51Z
[ "python", "tkinter", "python-3.5" ]
Python script fails using launchd and Selenium
39,112,707
<p>I'm trying to run a simple script using launchd in OS X 10.10.5 but the job fails. I think it has something to do with permissions/privileges not set correctly?</p> <p>This is the error code it throws up:</p> <blockquote> <p>Traceback (most recent call last): File "/Users/John/Documents/AutoRun/OpenTwitter.py", line 7, in driver = webdriver.Firefox() File "/Library/Python/2.7/site-packages/selenium-3.0.0.b2-py2.7.egg/selenium/webdriver/firefox/webdriver.py", line 64, in <strong>init</strong> self.service = Service(executable_path, firefox_binary=self.options.binary_location) File "/Library/Python/2.7/site-packages/selenium-3.0.0.b2-py2.7.egg/selenium/webdriver/firefox/service.py", line 44, in <strong>init</strong> log_file = open(log_path, "a+") IOError: [Errno 13] Permission denied: 'geckodriver.log' Exception AttributeError: "'Service' object has no attribute 'log_file'" in <code>&lt;bound method Service.__del__ of &lt;selenium.webdriver.firefox.service.Service object at 0x10ca6bdd0&gt;&gt;</code> ignored</p> </blockquote> <p>I do get the printed "start script" in the console job.out that I've hardcoded into my script, so I assume launchd is actually starting the script ok, but it's running into a problem with Selenium/Firefox driver? And this is where my permissions issue is coming into play?</p> <p>It runs fine in IDE/run and from the terminal.</p> <p>Here's the test code I'm trying to run:</p> <pre><code>#!/usr/bin/python from selenium import webdriver print("start script") driver = webdriver.Firefox() driver.get("https://twitter.com/search?q=news&amp;src=typd&amp;lang=en") print("twitter open, done") </code></pre> <p>The plist is as follows:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&gt; &lt;plist version="1.0"&gt; &lt;dict&gt; &lt;key&gt;Label&lt;/key&gt; &lt;string&gt;JohnsJob.job&lt;/string&gt; &lt;key&gt;Program&lt;/key&gt; &lt;string&gt;/Users/John/Documents/AutoRun/OpenTwitter.py&lt;/string&gt; &lt;key&gt;StandardErrorPath&lt;/key&gt; &lt;string&gt;/tmp/JohnsJob.job.err&lt;/string&gt; &lt;key&gt;StandardOutPath&lt;/key&gt; &lt;string&gt;/tmp/JohnsJob.job.out&lt;/string&gt; &lt;key&gt;StartCalendarInterval&lt;/key&gt; &lt;array&gt; &lt;dict&gt; &lt;key&gt;Hour&lt;/key&gt; &lt;integer&gt;10&lt;/integer&gt; &lt;key&gt;Minute&lt;/key&gt; &lt;integer&gt;14&lt;/integer&gt; &lt;key&gt;Weekday&lt;/key&gt; &lt;integer&gt;3&lt;/integer&gt; &lt;/dict&gt; &lt;/array&gt; &lt;/dict&gt; &lt;/plist&gt; </code></pre> <p>Note: I change the time for this code to run in launchd so I can test.</p>
1
2016-08-24T00:35:50Z
39,136,015
<p>I've managed to get it to work by putting the geckodriver in /usr/bin.</p> <ol> <li>Move the file to the /usr/bin directory: <code>sudo mv geckodriver /usr/bin</code></li> <li>Go to the /usr/bin directory and run something like <code>chmod a+x geckodriver</code> to mark it executable.</li> </ol> <p>I think the script was having trouble finding the driver on its PATH, which stopped it dead.</p>
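Alternatively (an untested sketch; the driver location below is an assumption, adjust it to wherever geckodriver actually lives), since launchd starts jobs with a minimal environment, the script itself can extend PATH before Selenium starts:

```python
import os

# launchd jobs inherit a minimal environment, so the directory holding
# geckodriver may not be on PATH. Prepend it here (the location is an
# assumption) so Selenium can locate the driver executable.
driver_dir = "/usr/local/bin"
if driver_dir not in os.environ.get("PATH", "").split(os.pathsep):
    os.environ["PATH"] = driver_dir + os.pathsep + os.environ.get("PATH", "")
```

This keeps the fix inside the script, so the plist does not need an `EnvironmentVariables` key.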
0
2016-08-25T02:59:21Z
[ "python", "python-2.7", "selenium", "launchd", "selenium-firefoxdriver" ]
How To Make a Window in Python without GUI toolkits
39,112,734
<p>I'm curious, as making a window with Tkinter really does seem too easy. Does Python have an alert thing similar to JavaScript by default?</p>
-2
2016-08-24T00:40:34Z
39,124,747
<p>There is no graphical alert in python. Tkinter is the default python GUI interface. If all you need is an alert, that's less than a dozen lines of code. Here's a python 3 example:</p> <pre><code>import tkinter as tk
import tkinter.messagebox

def alert(message):
    root = tk.Tk()
    root.withdraw()
    tkinter.messagebox.showwarning("Alert", message)
    root.destroy()

alert("Danger Will Robinson!")
</code></pre>
1
2016-08-24T13:30:30Z
[ "python", "user-interface" ]
Python program fails on Windows but not on Linux
39,112,761
<p>The program below triggers a UnicodeEncodeError on my Windows 10 machine (running Python 3.5.2) but no error at all on my Linux machine (running Python 3.3.2).</p> <pre><code>#!/usr/bin/python
import logging

str = "Antonín Dvořák"
logging.basicConfig(filename='log.txt', level=logging.INFO)
logging.info(str)
</code></pre> <p>On Linux, the log file correctly contains: </p> <pre><code>INFO:root:Antonín Dvořák
</code></pre> <p>On Windows, I get the following error: </p> <p><a href="http://i.stack.imgur.com/oc1v7.png" rel="nofollow"><img src="http://i.stack.imgur.com/oc1v7.png" alt="enter image description here"></a></p> <p>Any ideas on what the possible cause could be for this discrepancy? </p>
4
2016-08-24T00:44:22Z
39,112,928
<p>Instead of a file name, you could pass a stream whose encoding is specified:</p> <pre><code>logging.basicConfig(
    stream=open('log.txt', 'w', encoding='utf-8'),
    level=logging.INFO
)
</code></pre> <p>As for the cause, it's probably trying to open the target file using your current locale's encoding (CP1252, judging by the stack trace).</p>
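The cp1252 explanation can be checked directly in the interpreter: the Czech ř in the name has no mapping in cp1252, while UTF-8 encodes it fine, which is why the same program works under a UTF-8 Linux locale.

```python
# 'ř' (r with caron) does not exist in Windows code page 1252, so encoding
# the name fails there, while UTF-8 handles it without trouble.
name = "Antonín Dvořák"

try:
    name.encode("cp1252")
    cp1252_ok = True
except UnicodeEncodeError:
    cp1252_ok = False

utf8_bytes = name.encode("utf-8")
print(cp1252_ok)  # False
```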
3
2016-08-24T01:09:52Z
[ "python", "python-3.x" ]
Python program fails on Windows but not on Linux
39,112,761
<p>The program below triggers a UnicodeEncodeError on my Windows 10 machine (running Python 3.5.2) but no error at all on my Linux machine (running Python 3.3.2).</p> <pre><code>#!/usr/bin/python
import logging

str = "Antonín Dvořák"
logging.basicConfig(filename='log.txt', level=logging.INFO)
logging.info(str)
</code></pre> <p>On Linux, the log file correctly contains: </p> <pre><code>INFO:root:Antonín Dvořák
</code></pre> <p>On Windows, I get the following error: </p> <p><a href="http://i.stack.imgur.com/oc1v7.png" rel="nofollow"><img src="http://i.stack.imgur.com/oc1v7.png" alt="enter image description here"></a></p> <p>Any ideas on what the possible cause could be for this discrepancy? </p>
4
2016-08-24T00:44:22Z
39,112,951
<p>The default encoding on Windows (cp1252 in your case) is different from Linux (usually utf8), so you have to specify the encoding you want.</p> <p>The code below didn't work in Python 3.3 (it still used cp1252) but did with 3.5, so it looks like a bug in 3.3. I used <code>utf-8-sig</code> because many Windows text editors assume an ANSI encoding (such as cp1252) unless a UTF-8 BOM signature is present.</p> <pre><code>import logging

str = "Antonín Dvořák"
with open('log.txt', 'w', encoding='utf-8-sig') as s:
    logging.basicConfig(stream=s, level=logging.INFO)
    logging.info(str)
</code></pre>
4
2016-08-24T01:12:55Z
[ "python", "python-3.x" ]
Removing every third item from list
39,112,805
<p>I want to remove every 3rd item from list. For Example:</p> <pre><code>list1 = list(['a','b','c','d','e','f','g','h','i','j']) </code></pre> <p>After removing indexes which are multiple of three the list will be:</p> <pre><code>['a','b','d','e','g','h','j'] </code></pre> <p>How can I achieve this?</p>
-3
2016-08-24T00:51:18Z
39,112,824
<pre><code>[v for i, v in enumerate(list1) if (i + 1) % 3 != 0] </code></pre> <p>It seems like you want the third item in the list, which is actually at index 2, gone. This is what the <code>+1</code> is for.</p>
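A quick sanity check of that comprehension against the expected output from the question:

```python
list1 = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']

# keep v unless its 1-based position is a multiple of 3
kept = [v for i, v in enumerate(list1) if (i + 1) % 3 != 0]
print(kept)  # ['a', 'b', 'd', 'e', 'g', 'h', 'j']
```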
1
2016-08-24T00:53:19Z
[ "python" ]
Removing every third item from list
39,112,805
<p>I want to remove every 3rd item from list. For Example:</p> <pre><code>list1 = list(['a','b','c','d','e','f','g','h','i','j']) </code></pre> <p>After removing indexes which are multiple of three the list will be:</p> <pre><code>['a','b','d','e','g','h','j'] </code></pre> <p>How can I achieve this?</p>
-3
2016-08-24T00:51:18Z
39,112,837
<p>You may use <code>enumerate()</code>:</p> <pre><code>&gt;&gt;&gt; x = ['a','b','c','d','e','f','g','h','i','j']
&gt;&gt;&gt; [i for j, i in enumerate(x) if (j+1)%3]
['a', 'b', 'd', 'e', 'g', 'h', 'j']
</code></pre> <p><em>Alternatively</em>, you may create a copy of the list and delete every third value. For example:</p> <pre><code>&gt;&gt;&gt; y = list(x)   # where x is the list from the example above
&gt;&gt;&gt; del y[2::3]   # y[2::3] = ['c', 'f', 'i']
&gt;&gt;&gt; y
['a', 'b', 'd', 'e', 'g', 'h', 'j']
</code></pre>
6
2016-08-24T00:55:00Z
[ "python" ]
How to patch method io.RawIOBase.read with unittest?
39,112,854
<p>I've recently learned about <code>unittest.mock.patch</code> and its variants, and I'd like to use it to unit test for atomicity of a file read function. However, the patch doesn't seem to have any effect.</p> <p>Here's my set-up. The method under scrutiny is roughly like so (abridged):</p> <pre><code>#local_storage.py
def read(path):
    with open(path, "rb") as file_handle:
        result = file_handle.read()
    return result
</code></pre> <p>And the module that performs the unit tests (also abridged):</p> <pre><code>#test/test_local_storage.py
import unittest.mock

import local_storage


def _read_while_writing(io_handle, size=-1):
    """ The patch function, to replace io.RawIOBase.read. """
    _write_something_to(TestLocalStorage._unsafe_target_file)  # Appends "12".
    result = io_handle.read(size)  # Should call the actual read.
    _write_something_to(TestLocalStorage._unsafe_target_file)  # Appends "34".
    return result


class TestLocalStorage(unittest.TestCase):
    _unsafe_target_file = "test.txt"

    def test_read_atomicity(self):
        with open(self._unsafe_target_file, "wb") as unsafe_file_handle:
            unsafe_file_handle.write(b"Test")
        with unittest.mock.patch("io.RawIOBase.read", _read_while_writing):  # &lt;--- This doesn't work!
            result = local_storage.read(TestLocalStorage._unsafe_target_file)  # The actual test.
        self.assertIn(result, [b"Test", b"Test1234"], "Read is not atomic.")
</code></pre> <p>This way, the patch should ensure that every time you try to read it, the file gets modified just before and just after the actual read, as if it happens concurrently, thus testing for atomicity of our read.</p> <p>The unit test currently succeeds, but I've verified with print statements that the patch function doesn't actually get called, so the file never gets the additional writes (it just says "Test"). I've also modified the code so as to be non-atomic on purpose.</p> <p>So my question: <strong>How can I patch the <code>read</code> function of an IO handle inside the local_storage module?</strong> I've read elsewhere that people tend to replace the <code>open()</code> function to return something like a <code>StringIO</code>, but I don't see how that could fix this problem.</p> <p>I need to support Python 3.4 and up.</p>
0
2016-08-24T00:58:45Z
39,183,945
<p>I've finally found a solution myself.</p> <p>The problem is that <code>mock</code> can't mock any methods of objects that are written in C. One of these is the <code>RawIOBase</code> that I was encountering.</p> <p>So indeed the solution was to mock <code>open</code> to return a wrapper around <code>RawIOBase</code>. I couldn't get <code>mock</code> to produce a wrapper for me, so I implemented it myself.</p> <p>There is one pre-defined file that's considered "unsafe". The wrapper writes to this "unsafe" file every time any call is made to the wrapper. This allows for testing the atomicity of file writes, since it writes additional things to the unsafe file while writing. My implementation prevents this by writing to a temporary ("safe") file and then moving that file over the target file.</p> <p>The wrapper has a special case for the <code>read</code> function, because to test atomicity properly it needs to write to the file <em>during</em> the read. So it reads first halfway through the file, then stops and writes something, and then reads on. This solution is currently semi-hardcoded (in where "halfway" falls), but I'll find a way to improve that.</p> <p>You can see my solution here: <a href="https://github.com/Ghostkeeper/Luna/blob/0e88841d19737fb1f4606917f86e3de9b5b9f29b/plugins/storage/localstorage/test/test_local_storage.py" rel="nofollow">https://github.com/Ghostkeeper/Luna/blob/0e88841d19737fb1f4606917f86e3de9b5b9f29b/plugins/storage/localstorage/test/test_local_storage.py</a></p>
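A minimal sketch of the mock-open-with-a-wrapper idea described above (the names are illustrative, not the actual code from the linked repository, and this wrapper only intercepts read and appends around it, rather than logging every call):

```python
import os
import tempfile
import unittest.mock

real_open = open  # capture the real built-in before patching


class TamperingReader:
    """Wraps a real binary file handle; appends to the underlying file
    just before and just after each read, simulating a concurrent writer."""

    def __init__(self, handle, path):
        self._handle = handle
        self._path = path

    def read(self, size=-1):
        with real_open(self._path, "ab") as appender:
            appender.write(b"12")
        data = self._handle.read(size)
        with real_open(self._path, "ab") as appender:
            appender.write(b"34")
        return data

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._handle.close()
        return False


def tampering_open(path, mode="rb", *args, **kwargs):
    handle = real_open(path, mode, *args, **kwargs)
    return TamperingReader(handle, path) if "r" in mode else handle


# Demo: a naive read is not atomic with respect to the tampering writes.
target = os.path.join(tempfile.mkdtemp(), "test.txt")
with real_open(target, "wb") as f:
    f.write(b"Test")

with unittest.mock.patch("builtins.open", tampering_open):
    with open(target) as reader:  # actually returns a TamperingReader
        result = reader.read()

print(result)  # b'Test12' -- the first tampering write leaked into the read
```

Because `builtins.open` is a module-level attribute rather than a C-implemented method, `mock.patch` can replace it, which is what makes this approach work where patching `io.RawIOBase.read` fails.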
0
2016-08-27T17:36:05Z
[ "python", "unit-testing", "mocking", "monkeypatching", "python-unittest.mock" ]
How to find the number of ways to get 21 in Blackjack?
39,112,897
<p>Some assumptions: </p> <ol> <li>One deck of 52 cards is used</li> <li>Picture cards count as 10</li> <li>Aces count as 1 or 11</li> <li>The order is not important (ie. Ace + Queen is the same as Queen + Ace)</li> </ol> <p>I thought I would then just sequentially try all the possible combinations and see which ones add up to 21, but there are way too many ways to mix the cards (52! ways). This approach also does not take into account that order is not important nor does it account for the fact that there are only 4 maximum types of any one card (Spade, Club, Diamond, Heart).</p> <p>Now I am thinking of the problem like this: </p> <p>We have 11 "slots". Each of these slots can have 53 possible things inside them: 1 of 52 cards or no card at all. The reason it is 11 slots is because 11 cards is the maximum amount of cards that can be dealt and still add up to 21; more than 11 cards would have to add up to more than 21. </p> <p>Then the "leftmost" slot would be incremented up by one and all 11 slots would be checked to see if they add up to 21 (0 would represent no card in the slot). If not, the next slot to the right would be incremented, and the next, and so on.</p> <p>Once the first 4 slots contain the same "card" (after four increments, the first 4 slots would all be 1), the fifth slot could not be that number as well since there are 4 numbers of any type. The fifth slot would then become the next lowest number in the remaining available cards; in the case of four 1s, the fifth slot would become a 2 and so on. </p> <p>How would you do approach this?</p>
3
2016-08-24T01:04:57Z
39,113,041
<p>Divide and conquer, leveraging the knowledge that if you have 13 and pick a 10, you only have to look for cards summing to the remaining 3. Be forewarned: this solution might be slow (it took about 180 seconds on my box; it is definitely non-optimal).</p> <pre><code>def sum_to(x, cards):
    if x == 0:  # if there is nothing left to sum to
        yield []
    for i in range(1, 12):  # for each point value 1..11 (inclusive)
        if i &gt; x:
            break  # if i is bigger than what's left we are done
        card_v = 11 if i == 1 else i
        if card_v not in cards:
            continue  # if there is no more of this card
        new_deck = cards[:]  # create a copy of the deck (we do not want to modify the original)
        if i == 1:  # one is clearly an ace...
            new_deck.remove(11)
        else:  # remove the value
            new_deck.remove(i)
        # on the recursive call we need to subtract our recent pick
        for result in sum_to(x - i, new_deck):
            yield [i] + result  # append each further combination to our solutions
</code></pre> <p>Set up your cards as follows:</p> <pre><code>deck = []
for i in range(2, 11):  # two through ten (with 4 of each)
    deck.extend([i]*4)
deck.extend([10]*4)  # jacks
deck.extend([10]*4)  # queens
deck.extend([10]*4)  # kings
deck.extend([11]*4)  # aces
</code></pre> <p>Then just call your function:</p> <pre><code>for combination in sum_to(21, deck):
    print combination
</code></pre> <p>Unfortunately this does allow some duplicates to sneak in; in order to get unique entries you need to change it a little bit.</p> <p>In <code>sum_to</code>, change the last line to:</p> <pre><code>            # sort our solutions so we can later eliminate duplicates
            yield sorted([i] + result)  # append each further combination to our solutions
</code></pre> <p>Then when you get your combinations, you gotta do some deep dark voodoo-style Python:</p> <pre><code>unique_combinations = sorted(set(map(tuple, sum_to(21, deck))), key=len, reverse=0)
for combo in unique_combinations:
    print combo
</code></pre> <p>From this cool question I have learned the following (keep in mind that in real play the dealer and other players would also be removing cards from the same deck):</p> <pre><code>there are 416 unique combinations of a deck of cards that make 21
there are 300433 non-unique combinations!!!
the longest number of ways to make 21 are as follows
with 11 cards there are 1 ways [(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3)]
with 10 cards there are 7 ways
with 9 cards there are 26 ways
with 8 cards there are 54 ways
with 7 cards there are 84 ways
with 6 cards there are 94 ways
with 5 cards there are 83 ways
with 4 cards there are 49 ways
with 3 cards there are 17 ways
with 2 cards there are 1 ways [(10, 11)]
there are 54 ways in which all 4 aces are used in making 21!!
there are 106 ways of making 21 in which NO aces are used !!!
</code></pre> <p>Keep in mind these are often suboptimal plays (i.e. considering A,10 -&gt; 1,10 and hitting).</p>
4
2016-08-24T01:24:25Z
[ "python", "oop", "combinations", "blackjack", "playing-cards" ]
How to find the number of ways to get 21 in Blackjack?
39,112,897
<p>Some assumptions: </p> <ol> <li>One deck of 52 cards is used</li> <li>Picture cards count as 10</li> <li>Aces count as 1 or 11</li> <li>The order is not important (ie. Ace + Queen is the same as Queen + Ace)</li> </ol> <p>I thought I would then just sequentially try all the possible combinations and see which ones add up to 21, but there are way too many ways to mix the cards (52! ways). This approach also does not take into account that order is not important nor does it account for the fact that there are only 4 maximum types of any one card (Spade, Club, Diamond, Heart).</p> <p>Now I am thinking of the problem like this: </p> <p>We have 11 "slots". Each of these slots can have 53 possible things inside them: 1 of 52 cards or no card at all. The reason it is 11 slots is because 11 cards is the maximum amount of cards that can be dealt and still add up to 21; more than 11 cards would have to add up to more than 21. </p> <p>Then the "leftmost" slot would be incremented up by one and all 11 slots would be checked to see if they add up to 21 (0 would represent no card in the slot). If not, the next slot to the right would be incremented, and the next, and so on.</p> <p>Once the first 4 slots contain the same "card" (after four increments, the first 4 slots would all be 1), the fifth slot could not be that number as well since there are 4 numbers of any type. The fifth slot would then become the next lowest number in the remaining available cards; in the case of four 1s, the fifth slot would become a 2 and so on. </p> <p>How would you do approach this?</p>
3
2016-08-24T01:04:57Z
39,113,661
<p>Before worrying about the suits and the different cards with value <code>10</code>, let's figure out how many different value combinations summing to <code>21</code> there are. For example <code>5, 5, 10, 1</code> is one such combination. The following function takes in <code>limit</code>, which is the target value, <code>start</code>, which indicates the lowest value that can be picked, and <code>used</code>, which is the list of picked values:</p> <pre><code>def combinations(limit, start, used):
    # Base case
    if limit == 0:
        return 1

    # Start iteration from the lowest card picked so far
    # so that we're only going to pick cards 3 &amp; 7 in order 3,7
    res = 0
    for i in range(start, min(12, limit + 1)):
        # Aces are at index 1 no matter if value 11 or 1 is used
        index = i if i != 11 else 1

        # There are 16 cards with value of 10 (T, J, Q, K) and 4 with every
        # other value
        available = 16 if index == 10 else 4
        if used[index] &lt; available:
            # Mark the card used and go through combinations starting from
            # current card and limit lowered by the value
            used[index] += 1
            res += combinations(limit - i, i, used)
            used[index] -= 1

    return res

print combinations(21, 1, [0] * 11) # 416
</code></pre> <p>Since we're interested in different card combinations instead of different value combinations, the base case above should be modified to return the number of different card combinations that can be used to generate a value combination. Luckily that's quite an easy task; the <a href="https://en.wikipedia.org/wiki/Binomial_coefficient" rel="nofollow">Binomial coefficient</a> can be used to figure out how many different combinations of <code>k</code> items can be picked from <code>n</code> items.</p> <p>Once the number of different card combinations for each value in <code>used</code> is known, they can just be multiplied with each other for the final result. So for the example of <code>5, 5, 10, 1</code>: value <code>5</code> results in <code>bcoef(4, 2) == 6</code>, value <code>10</code> in <code>bcoef(16, 1) == 16</code> and value <code>1</code> in <code>bcoef(4, 1) == 4</code>. For all the other values <code>bcoef(x, 0)</code> results in <code>1</code>. Multiplying those values results in <code>6 * 16 * 4 == 384</code>, which is then returned:</p> <pre><code>import operator
from math import factorial

def bcoef(n, k):
    return factorial(n) / (factorial(k) * factorial(n - k))

def combinations(limit, start, used):
    if limit == 0:
        combs = (bcoef(4 if i != 10 else 16, x) for i, x in enumerate(used))
        res = reduce(operator.mul, combs, 1)
        return res

    res = 0
    for i in range(start, min(12, limit + 1)):
        index = i if i != 11 else 1
        available = 16 if index == 10 else 4
        if used[index] &lt; available:
            used[index] += 1
            res += combinations(limit - i, i, used)
            used[index] -= 1

    return res

print combinations(21, 1, [0] * 11) # 186184
</code></pre>
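The multiplication in the base case can be sanity-checked in isolation. Note this sketch uses integer division (//) because it targets Python 3, whereas the answer's code is Python 2:

```python
from math import factorial

def bcoef(n, k):
    # number of ways to choose k items from n
    return factorial(n) // (factorial(k) * factorial(n - k))

# For the value combination 5, 5, 10, 1:
#   two 5s out of four available, one 10-valued card out of sixteen,
#   one ace out of four -> 6 * 16 * 4 = 384 card combinations.
ways = bcoef(4, 2) * bcoef(16, 1) * bcoef(4, 1)
print(ways)  # 384
```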
1
2016-08-24T02:52:12Z
[ "python", "oop", "combinations", "blackjack", "playing-cards" ]
How to find the number of ways to get 21 in Blackjack?
39,112,897
<p>Some assumptions: </p> <ol> <li>One deck of 52 cards is used</li> <li>Picture cards count as 10</li> <li>Aces count as 1 or 11</li> <li>The order is not important (ie. Ace + Queen is the same as Queen + Ace)</li> </ol> <p>I thought I would then just sequentially try all the possible combinations and see which ones add up to 21, but there are way too many ways to mix the cards (52! ways). This approach also does not take into account that order is not important nor does it account for the fact that there are only 4 maximum types of any one card (Spade, Club, Diamond, Heart).</p> <p>Now I am thinking of the problem like this: </p> <p>We have 11 "slots". Each of these slots can have 53 possible things inside them: 1 of 52 cards or no card at all. The reason it is 11 slots is because 11 cards is the maximum amount of cards that can be dealt and still add up to 21; more than 11 cards would have to add up to more than 21. </p> <p>Then the "leftmost" slot would be incremented up by one and all 11 slots would be checked to see if they add up to 21 (0 would represent no card in the slot). If not, the next slot to the right would be incremented, and the next, and so on.</p> <p>Once the first 4 slots contain the same "card" (after four increments, the first 4 slots would all be 1), the fifth slot could not be that number as well since there are 4 numbers of any type. The fifth slot would then become the next lowest number in the remaining available cards; in the case of four 1s, the fifth slot would become a 2 and so on. </p> <p>How would you do approach this?</p>
3
2016-08-24T01:04:57Z
39,178,939
<p>So I decided to write a script with which every possible viable hand can be checked. The total number comes out to be 188052. Since I checked every possible combination, this is the exact number (as opposed to an estimate):</p> <pre><code>import itertools as it

big_list = []

def deck_set_up(m):
    special = {8:'a23456789TJQK', 9:'a23456789', 10:'a2345678', 11:'a23'}
    if m in special:
        return [x+y for x,y in list(it.product(special[m], 'shdc'))]
    else:
        return [x+y for x,y in list(it.product('a23456789TJQKA', 'shdc'))]

deck_dict = {'as':1,'ah':1,'ad':1,'ac':1,
             '2s':2,'2h':2,'2d':2,'2c':2,
             '3s':3,'3h':3,'3d':3,'3c':3,
             '4s':4,'4h':4,'4d':4,'4c':4,
             '5s':5,'5h':5,'5d':5,'5c':5,
             '6s':6,'6h':6,'6d':6,'6c':6,
             '7s':7,'7h':7,'7d':7,'7c':7,
             '8s':8,'8h':8,'8d':8,'8c':8,
             '9s':9,'9h':9,'9d':9,'9c':9,
             'Ts':10,'Th':10,'Td':10,'Tc':10,
             'Js':10,'Jh':10,'Jd':10,'Jc':10,
             'Qs':10,'Qh':10,'Qd':10,'Qc':10,
             'Ks':10,'Kh':10,'Kd':10,'Kc':10,
             'As':11,'Ah':11,'Ad':11,'Ac':11}

stop_here = {2:'As', 3:'8s', 4:'6s', 5:'4h', 6:'3c', 7:'3s', 8:'2h',
             9:'2s', 10:'2s', 11:'2s'}

for n in range(2,12):  # n is number of cards in the draw
    combos = it.combinations(deck_set_up(n), n)
    stop_point = stop_here[n]
    while True:
        try:
            pick = combos.next()
        except:
            break
        if pick[0] == stop_point:
            break
        if n &lt; 8:
            if len(set([item.upper() for item in pick])) != n:
                continue
        if sum([deck_dict[card] for card in pick]) == 21:
            big_list.append(pick)
    print n, len(big_list)

# Total number of hands that can equal 21 is 188052
</code></pre> <p>In the output, the first column is the number of cards in the draw, and the second number is the cumulative count. So the number after "3" in the output is the total count of hands that equal 21 for a 2-card draw and a 3-card draw. The lower case a is a low ace (1 point), and the uppercase A is a high ace. I have a line (the one with the set command) to make sure it throws out any hand that has a duplicate card.</p> <p>The script takes 36 minutes to run, so there is definitely a trade-off between execution time and accuracy. The "big_list" contains the solutions (i.e. every hand where the sum is 21):</p> <pre><code>&gt;&gt;&gt; ================== RESTART: C:\Users\JBM\Desktop\bj3.py ==================
2 64
3 2100
4 14804
5 53296
6 111776
7 160132
8 182452
9 187616
10 188048
11 188052   # &lt;-- This is the total count, as these numbers are cumulative
&gt;&gt;&gt;
</code></pre>
1
2016-08-27T08:09:58Z
[ "python", "oop", "combinations", "blackjack", "playing-cards" ]
Parse hierarchical XML tags
39,112,938
<p>Need to parse hierarchical tags from XML and get each tag's value in the desired output.</p> <p><strong>Input</strong></p> <pre><code>&lt;doc&gt;
  &lt;pid id="231"&gt;
    &lt;label key=""&gt;Electronics&lt;/label&gt;
    &lt;desc/&gt;
    &lt;cid id="122"&gt;
      &lt;label key=""&gt;TV&lt;/label&gt;
    &lt;/cid&gt;
    &lt;desc/&gt;
    &lt;cid id="123"&gt;
      &lt;label key=""&gt;Computers&lt;/label&gt;
      &lt;cid id="12433"&gt;
        &lt;label key=""&gt;Lenovo&lt;/label&gt;
      &lt;/cid&gt;
      &lt;desc/&gt;
      &lt;cid id="12434"&gt;
        &lt;label key=""&gt;IBM&lt;/label&gt;
        &lt;desc/&gt;
      &lt;/cid&gt;
      &lt;cid id="12435"&gt;
        &lt;label key=""&gt;Mac&lt;/label&gt;
      &lt;/cid&gt;
      &lt;desc/&gt;
    &lt;/cid&gt;
  &lt;/pid&gt;
  &lt;pid id="7764"&gt;
    &lt;label key=""&gt;Music&lt;/label&gt;
    &lt;desc/&gt;
    &lt;cid id="1224"&gt;
      &lt;label key=""&gt;Play&lt;/label&gt;
      &lt;desc/&gt;
      &lt;cid id="341"&gt;
        &lt;label key=""&gt;PQR&lt;/label&gt;
      &lt;/cid&gt;
      &lt;desc/&gt;
    &lt;/cid&gt;
    &lt;cid id="221"&gt;
      &lt;label key=""&gt;iTunes&lt;/label&gt;
      &lt;cid id="341"&gt;
        &lt;label key=""&gt;XYZ&lt;/label&gt;
      &lt;/cid&gt;
      &lt;desc/&gt;
      &lt;cid id="515"&gt;
        &lt;label key=""&gt;ABC&lt;/label&gt;
      &lt;/cid&gt;
      &lt;desc/&gt;
    &lt;/cid&gt;
  &lt;/pid&gt;
&lt;/doc&gt;
</code></pre> <p><strong>Output</strong></p> <pre><code>Electronics/
Electronics/TV
Electronics/Computers/Lenovo
Electronics/Computers/IBM
Electronics/Computers/Mac
Music/
Music/Play/PQR
Music/iTunes/XYZ
Music/iTunes/ABC
</code></pre> <p><strong>What I have tried (<em>in Python</em>)</strong></p> <pre><code>import xml.etree.ElementTree as ET
import os
import sys
import string

def perf_func(elem, func, level=0):
    func(elem, level)
    for child in elem.getchildren():
        perf_func(child, func, level+1)

def print_level(elem, level):
    print '-'*level + elem.tag

tree = ET.parse('Products.xml')
perf_func(tree.getroot(), print_level)

# Added find logic
root = tree.getroot()
for n in root.findall('doc'):
    l = n.find('label').text
    print l
</code></pre> <p>With the above code, I am able to get the nodes and their levels (just the tag, not the value), and also the first level of all labels. I need some suggestions (Perl/Python) on how to proceed to get the hierarchical structure in the format mentioned in Output.</p>
0
2016-08-24T01:10:23Z
39,113,929
<p>We are going to use 3 pieces: find all of the elements in the order in which they occur, get the depth of each one, build a bread crumb based on the depth and order.</p> <pre><code>from lxml import etree

xml = etree.fromstring(xml_str)
elems = xml.xpath(r'//label')  # xpath expression to find all &lt;label ...&gt; elements

# counts the number of parents to the root element
def get_depth(element):
    depth = 0
    parent = element.getparent()
    while parent is not None:
        depth += 1
        parent = parent.getparent()
    return depth

# build up the bread crumbs by tracking the depth
# when a new element is entered, it replaces the value in the list
# at that level and drops all values to the right
def reduce_by_depth(element_list):
    crumbs = []
    depth = 0
    elem_crumb = ['']*10
    for elem in element_list:
        depth = get_depth(elem)
        elem_crumb[depth] = elem.text
        elem_crumb[depth+1:] = ['']*(10-depth-1)
        # join all the non-empty strings to get the breadcrumb
        crumbs.append('/'.join([e for e in elem_crumb if e]))
    return crumbs

reduce_by_depth(elems)

# output:
# ['Electronics', 'Electronics/TV', 'Electronics/Computers',
#  'Electronics/Computers/Lenovo', 'Electronics/Computers/IBM',
#  'Electronics/Computers/Mac', 'Music', 'Music/Play', 'Music/Play/PQR',
#  'Music/iTunes', 'Music/iTunes/XYZ', 'Music/iTunes/ABC']
</code></pre>
2
2016-08-24T03:28:21Z
[ "python", "xml", "xml-parsing", "ipython" ]
Django isn't serving static files
39,112,961
<p>I'm working with an existing (and previously functional) Django site. We recently upgraded from Django 1.8.13 to 1.10 and our WSGI is Gunicorn. It works fine when hosted from my development machine, but when deployed, all static resources (on the admin and the main site) yield 404's with the message, <code>Directory indexes are not allowed here.</code></p> <p>Our <code>settings.py</code> contains the following:</p> <pre><code>INSTALLED_APPS = (
    ...
    'django.contrib.staticfiles',
    ...
)

DEBUG = True
STATIC_URL = '/static/'
PROJECT_DIR = os.path.dirname(os.path.dirname(__file__))
STATICFILES_DIRS = (
    os.path.join(PROJECT_DIR, 'static'),
)
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static_resources')
</code></pre> <p>The directory structure looks like this:</p> <pre><code>/my-project-name
    /my-project-name
        server.py
        settings.py
        urls.py
        wsgi.py
        ...
    /static
    /static_resources
    manage.py
</code></pre>
0
2016-08-24T01:14:06Z
39,113,150
<p>Try changing <code>os.path.join(PROJECT_DIR, '../static')</code> to <code>os.path.join(PROJECT_DIR, 'static')</code>, and <code>STATIC_ROOT = os.path.join(PROJECT_DIR, '../static_resources')</code> to <code>STATIC_ROOT = os.path.join(PROJECT_DIR, 'static_resources')</code>. That should solve your problem.</p>
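To see why the '../' matters, path normalization shows the two forms resolving to different directories. This sketch uses posixpath so it behaves the same on any platform, and the absolute path is made up purely for illustration:

```python
import posixpath  # posixpath keeps the demo platform-independent

# Hypothetical deploy location, for illustration only.
PROJECT_DIR = "/srv/my-project-name/my-project-name"

with_parent = posixpath.normpath(posixpath.join(PROJECT_DIR, "../static"))
without_parent = posixpath.normpath(posixpath.join(PROJECT_DIR, "static"))

print(with_parent)     # /srv/my-project-name/static
print(without_parent)  # /srv/my-project-name/my-project-name/static
```

So with the question's layout, the '../' variant points one directory above the inner project package, while the plain variant points inside it; whichever one the deployment actually serves must match where the files live.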
0
2016-08-24T01:40:37Z
[ "python", "django", "web", "static", "resources" ]
Django isn't serving static files
39,112,961
<p>I'm working with an existing (and previously functional) Django site. We recently upgraded from Django 1.8.13 to 1.10 and our WSGI is Gunicorn. It works fine when hosted from my development machine, but when deployed, all static resources (on the admin and the main site) yield 404's with the message, <code>Directory indexes are not allowed here.</code></p> <p>Our <code>settings.py</code> contains the following:</p> <pre><code>INSTALLED_APPS = (
    ...
    'django.contrib.staticfiles',
    ...
)

DEBUG = True
STATIC_URL = '/static/'
PROJECT_DIR = os.path.dirname(os.path.dirname(__file__))
STATICFILES_DIRS = (
    os.path.join(PROJECT_DIR, 'static'),
)
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static_resources')
</code></pre> <p>The directory structure looks like this:</p> <pre><code>/my-project-name
    /my-project-name
        server.py
        settings.py
        urls.py
        wsgi.py
        ...
    /static
    /static_resources
    manage.py
</code></pre>
0
2016-08-24T01:14:06Z
39,114,105
<p>Django does not serve static files in production mode (DEBUG=False). On a production deployment that's the job of the webserver. To resolve the problem:</p> <ul> <li>run <code>python manage.py collectstatic</code></li> <li>in your web server configuration point the <code>/static</code> folder to the static folder of Django</li> </ul> <p>Don't just turn DEBUG on, it would be dangerous!</p>
1
2016-08-24T03:48:43Z
[ "python", "django", "web", "static", "resources" ]
Django isn't serving static files
39,112,961
<p>I'm working with an existing (and previously functional) Django site. We recently upgraded from Django 1.8.13 to 1.10 and our WSGI is Gunicorn. It works fine when hosted from my development machine, but when deployed, all static resources (on the admin and the main site) yield 404's with the message, <code>Directory indexes are not allowed here.</code></p> <p>Our <code>settings.py</code> contains the following:</p> <pre><code>INSTALLED_APPS = (
    ...
    'django.contrib.staticfiles',
    ...
)

DEBUG = True
STATIC_URL = '/static/'
PROJECT_DIR = os.path.dirname(os.path.dirname(__file__))
STATICFILES_DIRS = (
    os.path.join(PROJECT_DIR, 'static'),
)
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static_resources')
</code></pre> <p>The directory structure looks like this:</p> <pre><code>/my-project-name
    /my-project-name
        server.py
        settings.py
        urls.py
        wsgi.py
        ...
    /static
    /static_resources
    manage.py
</code></pre>
0
2016-08-24T01:14:06Z
39,132,796
<p>The answer was very subtle. When I upgraded Django to 1.9 and ran the server, it gave the following warning:</p> <p><code>?: (urls.W001) Your URL pattern '^static/(?P&lt;path&gt;.*)$' uses include with a regex ending with a '$'. Remove the dollar from the regex to avoid problems including URLs.</code></p> <p>In <code>urls.py</code>, my <code>urlpatterns</code> list contained:</p> <pre><code>url(r'^static/(?P&lt;path&gt;.*)$', 'django.views.static.serve', {
    'document_root': settings.STATIC_ROOT,
}),
</code></pre> <p>I changed it to:</p> <pre><code>url(r'^static/(?P&lt;path&gt;.*)/', 'django.views.static.serve', {
    'document_root': settings.STATIC_ROOT,
}),
</code></pre> <p>This eliminated the warning but caused static resources to stop loading. It needed to be:</p> <pre><code>url(r'^static/(?P&lt;path&gt;.*)', 'django.views.static.serve', {
    'document_root': settings.STATIC_ROOT,
}),
</code></pre> <p>It's still a mystery to me why this worked on my dev machine (a Macbook), as well as another on the team's dev machine (a windows laptop), but not on our Linux server. But, it works now, so I'm done trying to figure it out.</p>
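The middle pattern's failure can be reproduced with plain re, outside Django (a sketch): the trailing slash forces the greedy .* to backtrack to the last '/', so the URL still matches but the captured path is wrong, which is consistent with the static 404s.

```python
import re

# Pattern with the trailing slash: still matches, but captures 'css'
# instead of 'css/app.css', so the wrong file would be looked up.
bad = re.match(r'^static/(?P<path>.*)/', 'static/css/app.css')
print(bad.group('path'))   # css

# Pattern without the trailing slash captures the full relative path.
good = re.match(r'^static/(?P<path>.*)', 'static/css/app.css')
print(good.group('path'))  # css/app.css
```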
0
2016-08-24T20:49:24Z
[ "python", "django", "web", "static", "resources" ]
removing rows with any column containing NaN, NaTs, and nans
39,112,973
<p>Currently I have data as below:</p> <pre><code>df_all.head()
Out[2]:
   Unnamed: 0 Symbol       Date      Close      Weight
0        4061      A 2016-01-13  36.515889  (0.000002)
1        4062     AA 2016-01-14  36.351784    0.000112
2        4063    AAC 2016-01-15  36.351784  (0.000004)
3        4064    AAL 2016-01-19  36.590483    0.000006
4        4065   AAMC 2016-01-20  35.934062    0.000002

df_all.tail()
Out[3]:
         Unnamed: 0 Symbol Date  Close Weight
1252498    26950320    nan  NaT   9.84    NaN
1252499    26950321    nan  NaT  10.26    NaN
1252500    26950322    nan  NaT   9.99    NaN
1252501    26950323    nan  NaT   9.11    NaN
1252502    26950324    nan  NaT   9.18    NaN

df_all.dtypes
Out[4]:
Unnamed: 0             int64
Symbol                object
Date          datetime64[ns]
Close                float64
Weight                object
dtype: object
</code></pre> <p>As can be seen, I am getting values of nan for Symbol, NaT for Date and NaN for Weight. </p> <p>MY GOAL: I want to remove any row that has ANY column containing nan, NaT or NaN and have a new df_clean be the result.</p> <p>I don't seem to be able to apply the appropriate filter? I am not sure if I have to convert the datatypes first (although I tried this as well)</p>
3
2016-08-24T01:15:45Z
39,113,037
<p>You can use</p> <pre><code>df_clean = df_all.replace({'nan': None}); df_clean = df_clean[~pd.isnull(df_clean).any(axis=1)] </code></pre> <p>The mask is computed on the replaced frame so that the <code>'nan'</code> strings in <code>Symbol</code> are caught as well. This works because <code>isnull</code> recognizes both <code>NaN</code> and <code>NaT</code> as "null" values. </p>
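A small, self-contained sketch of the same idea, with column names borrowed from the question (`np.nan` is used here instead of `None`, since replacing with `None` behaves inconsistently across older pandas versions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Symbol': ['A', 'nan'],
    'Date': pd.to_datetime(['2016-01-13', None]),
    'Close': [36.515889, 9.84],
})
# Turn the literal string 'nan' into a real missing value, then
# drop every row that contains any NaN/NaT.
df_clean = df.replace('nan', np.nan).dropna()
print(list(df_clean['Symbol']))  # ['A']
```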
3
2016-08-24T01:24:12Z
[ "python", "pandas", "dataframe", null ]
removing rows with any column containing NaN, NaTs, and nans
39,112,973
<p>Currently I have data as below:</p> <pre><code>df_all.head() Out[2]: Unnamed: 0 Symbol Date Close Weight 0 4061 A 2016-01-13 36.515889 (0.000002) 1 4062 AA 2016-01-14 36.351784 0.000112 2 4063 AAC 2016-01-15 36.351784 (0.000004) 3 4064 AAL 2016-01-19 36.590483 0.000006 4 4065 AAMC 2016-01-20 35.934062 0.000002 df_all.tail() Out[3]: Unnamed: 0 Symbol Date Close Weight 1252498 26950320 nan NaT 9.84 NaN 1252499 26950321 nan NaT 10.26 NaN 1252500 26950322 nan NaT 9.99 NaN 1252501 26950323 nan NaT 9.11 NaN 1252502 26950324 nan NaT 9.18 NaN df_all.dtypes Out[4]: Unnamed: 0 int64 Symbol object Date datetime64[ns] Close float64 Weight object dtype: object </code></pre> <p>As can be seen, I am getting values of nan in Symbol, NaT for Date and NaN for Weight. </p> <p>MY GOAL: I want to remove any row that has ANY column containing nan, NaT or NaN, and have a new df_clean as the result.</p> <p>I don't seem to be able to apply the appropriate filter. I am not sure if I have to convert the datatypes first (although I tried this as well).</p>
3
2016-08-24T01:15:45Z
39,113,201
<p>Since the string <code>'nan'</code> is not caught by <code>dropna()</code> or <code>isnull()</code>, you need to cast the symbol <code>'nan'</code> to <code>np.nan</code> first.</p> <p>Try this:</p> <pre><code>import numpy as np df["Symbol"] = np.where(df["Symbol"]=='nan', np.nan, df["Symbol"]) df = df.dropna() </code></pre>
0
2016-08-24T01:48:23Z
[ "python", "pandas", "dataframe", null ]
Unique items in list of sets
39,112,992
<p>If I have a list of sets:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] </code></pre> <p>How do I return the unique items?</p> <p>Trying the well-known <code>set()</code> does not work:</p> <pre><code>&gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' </code></pre>
-1
2016-08-24T01:18:30Z
39,113,013
<p>You may achieve this by:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] &gt;&gt;&gt; set(frozenset(s) for s in lst) set([frozenset([1, 2]), frozenset([0, 1])]) </code></pre> <p>Check <a href="https://docs.python.org/2.4/lib/types-set.html" rel="nofollow">Frozen Set</a> document for more information.</p>
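If ordinary mutable sets are needed afterwards, the frozensets convert straight back — a small sketch:

```python
lst = [{1, 2}, {0, 1}, {1, 2}]
# frozensets are hashable, so the duplicates collapse inside set()
unique = set(frozenset(s) for s in lst)
# Convert back to plain (mutable) sets
unique_sets = [set(fs) for fs in unique]
print(len(unique_sets))  # 2
```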
0
2016-08-24T01:21:20Z
[ "python" ]
Unique items in list of sets
39,112,992
<p>If I have a list of sets:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] </code></pre> <p>How do I return the unique items?</p> <p>Trying the well-known <code>set()</code> does not work:</p> <pre><code>&gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' </code></pre>
-1
2016-08-24T01:18:30Z
39,113,025
<p>If by "unique items" you mean unique sets, you could just use <code>frozenset</code>, which is a hashable but immutable version of <code>set</code>. You could either build your sets as <code>frozenset</code> objects initially, or if you need to mutate them, do something like:</p> <pre><code>uniques = set(frozenset(s) for s in lst) </code></pre> <p>Then:</p> <pre><code>&gt;&gt;&gt; uniques set([frozenset([1, 2]), frozenset([0, 1])]) </code></pre>
2
2016-08-24T01:22:46Z
[ "python" ]
Unique items in list of sets
39,112,992
<p>If I have a list of sets:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] </code></pre> <p>How do I return the unique items?</p> <p>Trying the well-known <code>set()</code> does not work:</p> <pre><code>&gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' </code></pre>
-1
2016-08-24T01:18:30Z
39,113,053
<pre><code>def get_unique_sets(l): unique_sets = [] for s in l: if s not in unique_sets: unique_sets.append(s) return unique_sets lst = [{1, 2}, {0, 1}, {1, 2}] print(get_unique_sets(lst)) </code></pre> <p><strong>Output</strong></p> <pre><code>[{1, 2}, {0, 1}] </code></pre>
0
2016-08-24T01:26:41Z
[ "python" ]
Unique items in list of sets
39,112,992
<p>If I have a list of sets:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] </code></pre> <p>How do I return the unique items?</p> <p>Trying the well-known <code>set()</code> does not work:</p> <pre><code>&gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' </code></pre>
-1
2016-08-24T01:18:30Z
39,113,054
<pre><code>&gt;&gt;&gt; reduce(lambda a, b: a.union(b), lst) {0, 1, 2} </code></pre> <p><strong>EDIT</strong></p> <p>Given that the OP appears to want unique subsets:</p> <pre><code>&gt;&gt;&gt; set(tuple(sorted(s)) for s in lst) {(0, 1), (1, 2)} </code></pre>
1
2016-08-24T01:26:46Z
[ "python" ]
Unique items in list of sets
39,112,992
<p>If I have a list of sets:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] </code></pre> <p>How do I return the unique items?</p> <p>Trying the well-known <code>set()</code> does not work:</p> <pre><code>&gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' </code></pre>
-1
2016-08-24T01:18:30Z
39,113,068
<pre><code>In [8]: lst = [{1, 2}, {0, 1}, {1, 2}] In [9]: reduce(set.union, lst) Out[9]: {0, 1, 2} </code></pre> <h3>Edit:</h3> <p>A better version of above:</p> <pre><code>In [1]: lst = [{1, 2}, {0, 1}, {1, 2}] In [2]: set.union(*lst) Out[2]: {0, 1, 2} </code></pre>
2
2016-08-24T01:28:31Z
[ "python" ]
Unique items in list of sets
39,112,992
<p>If I have a list of sets:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] </code></pre> <p>How do I return the unique items?</p> <p>Trying the well-known <code>set()</code> does not work:</p> <pre><code>&gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' </code></pre>
-1
2016-08-24T01:18:30Z
39,113,192
<p>Do your input list items have to be sets or could they be tuples instead?</p> <p><code>set()</code> works on tuples in my test (py 2.7): </p> <pre><code>&gt;&gt;&gt; lst = [(1,2), (0,1), (1,2)] &gt;&gt;&gt; set(lst) set([(1, 2), (0, 1)]) </code></pre> <p>If your input is always a list of sets, you can just do a transformation to tuples before and a transformation back to sets after:</p> <pre><code>&gt;&gt;&gt; lst = [{1, 2}, {0, 1}, {1, 2}] &gt;&gt;&gt; lst [set([1, 2]), set([0, 1]), set([1, 2])] &gt;&gt;&gt; set(lst) TypeError: unhashable type: 'set' &gt;&gt;&gt; lst2 = [tuple(x) for x in lst] &gt;&gt;&gt; lst2 [(1, 2), (0, 1), (1, 2)] &gt;&gt;&gt; lst2 = set(lst2) &gt;&gt;&gt; lst2 set([(1, 2), (0, 1)]) &gt;&gt;&gt; lst2 = list(lst2) &gt;&gt;&gt; lst2 [(1, 2), (0, 1)] &gt;&gt;&gt; lst = [set(x) for x in lst2] &gt;&gt;&gt; lst [set([1, 2]), set([0, 1])] </code></pre> <p>Not sure if this is the best option, but it works in the example case you gave, hope this helps :) </p>
0
2016-08-24T01:47:04Z
[ "python" ]
SQLAlchemy - override orm.Query.count for a database without subselect
39,113,016
<p>I am using sqlalchemy with a database that doesn't support subselects. What that means is that something like this wouldn't work (where <code>Calendar</code> is a model inheriting a declarative base):</p> <pre><code> Calendar.query.filter(uuid=uuid).count() </code></pre> <p>I am trying to override the <code>count</code> method with something like this:</p> <pre><code>def count(self): col = func.count(literal_column("'uuid'")) return self.from_self(col).scalar() </code></pre> <p>However, the <code>from_self</code> bit still does the subselect. I can't do something like this:</p> <pre><code>session.query(sql.func.count(Calendar.uuid)).scalar() </code></pre> <p>Because I want all the filter information from the <code>Query</code>. Is there a way I can get the filter arguments for the current <code>Query</code> without doing the subselect?</p> <p>Thanks~</p>
0
2016-08-24T01:21:54Z
39,165,945
<p>From the SQLAlchemy documentation:</p> <blockquote> <p>For fine grained control over specific columns to count, to skip the usage of a subquery or otherwise control of the FROM clause, or to use other aggregate functions, use func expressions in conjunction with query(), i.e.:</p> </blockquote> <pre><code>from sqlalchemy import func # count User records, without # using a subquery. session.query(func.count(User.id)) # return count of user "id" grouped # by "name" session.query(func.count(User.id)).\ group_by(User.name) from sqlalchemy import distinct # count distinct "name" values session.query(func.count(distinct(User.name))) </code></pre> <p>Source: <a href="http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.count" rel="nofollow">SQLAlchemy (sqlalchemy.orm.query.Query.count)</a></p>
0
2016-08-26T12:06:38Z
[ "python", "sqlalchemy", "crate" ]
Pymel: How do I extract these vector floats from a complex .TXT file?
39,113,050
<p>I am having trouble wrapping my head around how I would extract the float values from a complex text file in Pymel. I am not a programmer, I am an artist; however, I need to create a script for a specific workflow process, and I have only beginner-level knowledge of Python.</p> <p>My goal: to create objects in 3D space with (x,y,z) coordinates parsed from a specific text file from another program.</p> <p>Ex. of text file:</p> <blockquote> <p>point 1 8.740349 -4.640922 103.950059<br> point 2 8.520906 3.447561 116.580496<br> point 3 4.235010 -7.562914 99.632423<br> etc., etc</p> </blockquote> <p>There's much more space in my text file between the point #'s and the vector floats.</p> <p>I want to create a dictionary that I will use to create my objects in my 3D program. </p> <p>For example, </p> <blockquote> <p>myDictionary = {'point 1': [8.740349, -4.640922, 103.950059], etc.}</p> </blockquote> <p>This is my code snippet so far:</p> <pre><code>def createLocators(): global filePath global markerNum global markerCoord print "getting farther! :)" with open(filePath,'r') as readfile: for line in readfile: if "point" in line: Types = line.split(' ') markerNum = [Type[1] for Type in Types] markerCoord = [Type[2] for Type in Types] print markerNum, markerCoord </code></pre> <p>As you can see in the code, the space between the information is long. I figure if I can remove that space I can get two data sets that will be easier to work with. There are also many more lines in the text document that I don't care about, hence the if statement to filter only lines that start with "point". When I run createLocators() to test whether it's splitting up the lines into my two lists, it runs fine, but the print looks empty to me.</p> <p>ex.</p> <blockquote> <p>[' '] [' ']</p> </blockquote> <p>I've tried googling and searching answers here on SO, and searching both Pymel and regular Python documentation for what I'm doing wrong or better approaches, but I have to admit the extent of my knowledge ends here.</p> <p>What am I doing wrong? Is there a better and more efficient way to extract the data I need that I'm missing?</p> <p>Thanks for reading!</p>
0
2016-08-24T01:25:50Z
39,113,119
<p>First, you probably do not want to be splitting on that massive string of spaces. In fact what you almost certainly want is to just use <code>line.split()</code> with no arguments, as this will split apart all text on any kind, and any amount, of whitespace. i.e.:</p> <pre><code>&gt;&gt;&gt; 'A B C\t\n\t D'.split() ['A', 'B', 'C', 'D'] </code></pre> <p>Then, assuming the format you've shown is correct, you should only need <code>Types[2:5]</code>, i.e. the elements at indices 2, 3, and 4 of <code>Types</code>, for the coordinates. </p> <p>Beyond this, you should not be using capital names for local variables, and you should not be using global variables. Use function arguments instead, and rename <code>Types</code> to <code>split_line</code> or something.</p>
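Putting this advice together, a minimal sketch of the parsing step (the sample lines come from the question; the dictionary layout is the one the asker described, and the actual Maya locator creation is left out so this runs anywhere):

```python
def parse_points(lines):
    """Build {'point 1': [x, y, z], ...} from lines like
    'point 1    8.740349   -4.640922   103.950059'."""
    markers = {}
    for line in lines:
        parts = line.split()  # splits on any run of whitespace
        if len(parts) >= 5 and parts[0] == 'point':
            key = '%s %s' % (parts[0], parts[1])
            markers[key] = [float(v) for v in parts[2:5]]
    return markers

sample = [
    'point 1    8.740349   -4.640922   103.950059',
    'point 2    8.520906    3.447561   116.580496',
    'some line we do not care about',
]
print(parse_points(sample)['point 1'])  # [8.740349, -4.640922, 103.950059]
```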
0
2016-08-24T01:35:06Z
[ "python", "vector", "pymel" ]