Dataset schema (column name: type, observed range):

  title           string, length 10 to 172
  question_id     int64, 469 to 40.1M
  question_body   string, length 22 to 48.2k
  question_score  int64, -44 to 5.52k
  question_date   string, length 20
  answer_id       int64, 497 to 40.1M
  answer_body     string, length 18 to 33.9k
  answer_score    int64, -38 to 8.38k
  answer_date     string, length 20
  tags            list
Reading corresponding columns using Python
38,915,758
<p>I have output in the format:</p> <pre><code>Neighbor InQ OutQ Up/Down State 10.230.3.2 0 0 33w5d 1177 10.230.4.2 0 0 33w4d 1175 125.62.173.253 0 0 8w3d 2637 125.62.173.254 0 0 1w3d 2657 </code></pre> <p>I want to read the Neighbor (e.g. 10.230.3.2) if State is >= 0. Likewise, I want to read all the neighbors where the corresponding State column is >= 0.</p> <p>Please suggest how I can do this. Any help is appreciated. Thanks in advance!</p>
0
2016-08-12T10:29:24Z
38,917,796
<p>I've never used Pandas before but I read Nehal's linked documentation and based this solution on <a href="http://stackoverflow.com/a/38916031/1636276">his answer</a>. This is in response to your comment:</p> <blockquote> <p>Can I add the data to df via a text file? Because I have a lot of data like this and manually feeding the data is not possible.</p> </blockquote> <p>All you have to do to make the code below read from your text file is replace the use of <code>StringIO</code> with an actual file handle (e.g. <code>with open("data_table.txt") as f:...</code>).</p> <pre><code>from io import StringIO import pandas as pd DATA = """ Neighbor InQ OutQ Up/Down State 10.230.3.2 0 0 33w5d 1177 10.230.4.2 0 0 33w4d 1175 125.62.173.253 0 0 8w3d 2637 125.62.173.254 0 0 1w3d 2657 111.11.111.111 0 0 1w3d -1 """ def main(): data_io = StringIO(DATA) table = pd.read_table(data_io, sep='\s+') print("Valid neighbours:\n{}\n".format(table[table.State &gt;= 0]['Neighbor'])) print("Invalid neighbours:\n{}".format(table[table.State &lt; 0]['Neighbor'])) if __name__ == '__main__': main() </code></pre> <p><strong>Output</strong></p> <pre class="lang-none prettyprint-override"><code>Valid neighbours: 0 10.230.3.2 1 10.230.4.2 2 125.62.173.253 3 125.62.173.254 Name: Neighbor, dtype: object Invalid neighbours: 4 111.11.111.111 Name: Neighbor, dtype: object </code></pre>
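As the answer notes, the same parsing works against a real file by swapping `StringIO` for a file handle. A minimal sketch of that variant (the temporary file stands in for a hypothetical `data_table.txt`; Python 3 syntax):

```python
import os
import tempfile

import pandas as pd

DATA = """Neighbor InQ OutQ Up/Down State
10.230.3.2 0 0 33w5d 1177
111.11.111.111 0 0 1w3d -1
"""

# Write the sample table to a temp file standing in for data_table.txt.
path = os.path.join(tempfile.mkdtemp(), "data_table.txt")
with open(path, "w") as f:
    f.write(DATA)

# read_table accepts a file path directly; sep=r'\s+' splits on runs
# of whitespace, matching the column-aligned router output.
table = pd.read_table(path, sep=r"\s+")
valid = table[table.State >= 0]["Neighbor"].tolist()
print(valid)  # ['10.230.3.2']
```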
0
2016-08-12T12:17:25Z
[ "python" ]
Yahtzee using Asciitable, need to assign player points
38,915,790
<p>I'm currently working on a Yahtzee board, but I'm running into problems I have a hard time figuring out. My code currently looks like this:</p> <pre><code>from terminaltables import AsciiTable class Player: def __init__(self,name): self.name=name self.ones=0 self.twos=0 self.threes=0 self.fours=0 self.fives=0 self.sixs=0 self.abovesum=0 self.bonus=0 self.onepair=0 self.twopair=0 self.threepair=0 self.fourpair=0 self.smalladder=0 self.bigladder=0 self.house=0 self.chance=0 self.yatzy=0 self.totalsum=0 def __repr__(self): return self.name def __str__(self): return self.name def welcome(): print("Welcome to the yahtzee game!") players = int(input("How many players: ")) rounds=0 spelarlista=[] spelarlista.append("name") while not players==rounds: player=input("What is your name?: ") rounds=rounds+1 spelarlista.append(Player(player)) for i in range(len(spelarlista)): table_data = [spelarlista, ['Ettor',spelarlista[1:i],","], ['Tvåor'], ['Treor'], ['Fyror'], ['femmor'], ['sexor']] table = AsciiTable(table_data) table.inner_row_border = True print(table.table) welcome() </code></pre> <p>Let's say that for names, I type "James", "Alfred" and "Peter"; then I will get three columns, one with each name, which is actually how I want it. The next issue is that under each column, I want to assign player scores. The issue is that if I, for example, go with <code>spelarlista</code> instead of <code>spelarlista[1]</code>, I will just get a list in the first column. Is there a way to make every row in an AsciiTable a header row?</p> <p>Thanks in advance!</p>
1
2016-08-12T10:30:39Z
38,918,578
<p>You need to use a single list of strings/numbers that doesn't contain other lists for each row. Currently <code>['Ettor',spelarlista[1:i],","],</code> looks something like <code>'Ettor',['James', 'Alfred',","],</code> when you want it to look more like <code>'Ettor', 0, 0, 0</code>.</p> <p>(Note that I don't know if I have the logic right, since the code isn't in English! I'm assuming that "Ettor" == "ones".)</p> <p>Since you have a list of <code>Player</code> objects and you want to use the scores from those, you can use list comprehensions to create your rows from the objects, selecting only the attributes you want, like (but see *note below):</p> <pre><code>['Ettor'] + [player.ones for player in spelarlista], ['Tvåor'] + [player.twos for player in spelarlista], ... </code></pre> <p><strong>Some other issues:</strong></p> <p>You should use a for loop instead of a while loop to get the names, as you know how many of them there are. I would use a better variable name, as <code>players</code> sounds like it stores... players (names maybe) - especially since you use <code>player</code> to store each player name. A <code>player</code> variable should be of type <code>Player</code> (class).</p> <pre><code>for i in range(number_of_players): name = input("What is your name?: ") spelarlista.append(Player(name)) </code></pre> <p>*You currently create the variable <code>spelarlista</code> as a list of Player objects with the string "name" in the first position. This is inconsistent and unhelpful. My list comprehensions above assume no "name" at the start. Instead, just add "Name" when you need it in your table, like:</p> <pre><code>table_data = [["Name"] + spelarlista, </code></pre> <p>Your current code creates the table in a for loop, then only uses the final one. The loop is redundant.</p> <p>I hope that helps.</p>
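The flat-row advice above can be sketched end to end. This is only an illustration: the `Player` class is trimmed to the two fields used here, and the scores are hypothetical (Python 3 syntax):

```python
class Player:
    """Trimmed version of the question's Player class (only the fields used here)."""
    def __init__(self, name):
        self.name = name
        self.ones = 0
        self.twos = 0

    def __str__(self):
        return self.name


spelarlista = [Player("James"), Player("Alfred"), Player("Peter")]
spelarlista[0].ones = 3  # hypothetical score, just for illustration

# Each row is one flat list: a label followed by one value per player.
table_data = [
    ["Name"] + [str(p) for p in spelarlista],
    ["Ettor"] + [p.ones for p in spelarlista],
    ["Tvåor"] + [p.twos for p in spelarlista],
]
for row in table_data:
    print(row)
```

These flat rows can then be passed straight to `AsciiTable(table_data)` from terminaltables, as in the question.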
1
2016-08-12T12:58:11Z
[ "python" ]
How to run python script results on Flask
38,915,855
<p>I have created an app.py and index.html file. My problem is that I want to execute a Python script with the input I gathered from POST when submit is clicked, and then display the script output on the same or a different HTML page. I used CGI and Flask. I do not fully know how to proceed. I researched online, but couldn't find anything very helpful. Any help would be appreciated.</p> <p>Here is my code.</p> <pre><code>from flask import Flask, render_template, request, redirect app = Flask(__name__) @app.route("/") def main(): return render_template('index.html') @app.route("/src_code/main.py", methods = ['POST']) def run_app(): id = request.form['id'] name = request.form['name'] url = request.form['url'] if not id or not name or not url: return render_template('index.html') else: #execute the python script. if __name__ == "__main__": app.run() </code></pre> <p><strong>EDIT:</strong></p> <p>I have used the following code to import my function. In the end, though, I received an error when I clicked the submit button on index.html:</p> <pre><code> script_analyze = Analyzer() result = script_analyze.main() return render_template('results.html', data=result) AttributeError: 'WSGIRequestHandler' object has no attribute 'environ' </code></pre> <p>I am unsure why this attribute error is raised.</p>
-1
2016-08-12T10:33:08Z
38,917,884
<p>Since you want to execute another Python script... If you are able to <code>import</code> the other script then you can just use something like the following to call it and store the results - assuming the other script is a value-returning function.</p> <pre><code>from othermodule import function_to_run ... # where you want to call it result = function_to_run() </code></pre> <p>Then you can use <code>render_template</code> as others have said, passing this result as the data to the template (or simply return the result if it's already in the format you want to output with Flask).</p> <p>Does that work, or is the script you want to run something that this wouldn't work for? Let us know more about the script if it's an issue.</p>
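If the script can't be imported (for example, it only runs as a standalone program), one common alternative (not covered in the answer) is to run it with `subprocess` and capture its stdout. A self-contained sketch, where an inline one-liner stands in for the real script:

```python
import subprocess
import sys

# The inline "-c" program stands in for a real script such as main.py;
# check=True raises CalledProcessError if the script exits non-zero.
proc = subprocess.run(
    [sys.executable, "-c", "print('analysis done')"],
    capture_output=True,
    text=True,
    check=True,
)
result = proc.stdout.strip()
print(result)  # analysis done
```

In the Flask view, `result` could then be passed to `render_template` just like the return value of an imported function. (`capture_output` and `text` require Python 3.7+.)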
1
2016-08-12T12:21:56Z
[ "python", "html", "python-2.7", "web", "flask" ]
TensorFlow multithreading StatusNotOK
38,915,892
<p>I'm trying to implement a decoupled training queue in TensorFlow.</p> <p>At the very beginning I'm initializing the graph:</p> <pre><code>def init(self,restore,network_name): self.sess = tf.InteractiveSession() (some other stuff) self.data_a = tf.placeholder(tf.float32, [1,9]) self.data_b = tf.placeholder(tf.int8, [1, 162]) self.q = tf.FIFOQueue(capacity=100000, dtypes=[tf.float32, tf.int8], shapes=[[1,9], [1,162]] ) self.enqueue_op = self.q.enqueue([self.data_a, self.data_b]) self.sess.run(tf.initialize_all_variables()) </code></pre> <p>In the main program I have two threads.</p> <p>The first enqueues new data that is generated by my main program:</p> <pre><code>def load_and_enqueue(self, observations): _data_a = [d[0] for d in observations] _data_b = [d[1] for d in observations] self.sess.run(self.enqueue_op, feed_dict={self.data_a: _data_a, self.data_b:_data_b}) </code></pre> <p>The training function is called by another thread or by the main program; this doesn't matter, because it generates the same error either way:</p> <pre><code>def train(self): tensor_a,tensor_b= self.q.dequeue_many(200) data_a,data_b= self.sess.run([tensor_a,tensor_b]) # do something meaningful </code></pre> <p>After a while, when <code>self.sess.run([tensor_a,tensor_b])</code> is called, I get the following error:</p> <pre><code>return tf_session.TF_Run(session, feed_dict, fetch_list, target_list) tensorflow.python.pywrap_tensorflow.StatusNotOK: Not found: FetchOutputs node FIFOQueue_DequeueMany_39:0: not found </code></pre> <p>I believe it is some sort of race condition but I don't know how to fix it. Any help would be really nice.</p>
0
2016-08-12T10:35:04Z
38,916,953
<p>OK, it appears that <code>tf.Graph</code> was not thread-safe for this operation in version 0.7.1: <a href="https://github.com/tensorflow/tensorflow/commit/acac487ac4ebaa6edb3e3f866d41cbd12546a107" rel="nofollow">https://github.com/tensorflow/tensorflow/commit/acac487ac4ebaa6edb3e3f866d41cbd12546a107</a></p> <p>After upgrading to 0.10.0 the error did not appear anymore.</p>
0
2016-08-12T11:31:39Z
[ "python", "multithreading", "tensorflow" ]
Write a pandas data frame to HDF5
38,915,917
<p>I'm processing a large number of files in Python and need to write the output (one dataframe for each input file) to <code>HDF5</code> directly. I am wondering what the best way is to write a <code>pandas</code> data frame from my script to <code>HDF5</code> in a fast way. I am not sure if any Python module like hdf5 or hadoopy can do this. Any help in this regard will be appreciated.</p>
1
2016-08-12T10:36:43Z
38,918,732
<p>It's difficult to give you a good answer to this rather generic question.</p> <p>It's not clear how you are going to use (read) your HDF5 files - do you want to select data conditionally (using the <code>where</code> parameter)?</p> <p>First of all you need to open a store object:</p> <pre><code>store = pd.HDFStore('/path/to/filename.h5') </code></pre> <p>Now you can write (or append) to the store (I'm using <code>blosc</code> compression here - it's pretty fast and efficient); besides that, I will use the <code>data_columns</code> parameter in order to specify the columns that must be indexed (so you can use these columns in the <code>where</code> parameter later when you read your HDF5 file):</p> <pre><code>for f in files: #read or process each file in/into a separate `df` store.append('df_identifier_AKA_key', df, data_columns=[list_of_indexed_cols], complevel=5, complib='blosc') store.close() </code></pre>
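The payoff of `data_columns` is that those columns become queryable when the store is read back with the `where` parameter. A sketch (zlib compression is used here in case blosc is not installed; requires the PyTables package):

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"score": [5, 15, 25], "name": ["a", "b", "c"]})
path = os.path.join(tempfile.mkdtemp(), "demo.h5")

# data_columns makes 'score' indexed and therefore usable in `where`.
with pd.HDFStore(path, complevel=5, complib="zlib") as store:
    store.append("df", df, data_columns=["score"])

# Only rows matching the condition are read back from disk.
subset = pd.read_hdf(path, "df", where="score > 10")
print(subset["name"].tolist())  # ['b', 'c']
```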
0
2016-08-12T13:04:43Z
[ "python", "hadoop", "pandas", "dataframe" ]
How to remove duplicates and display unique values for a specific column at run time
38,915,997
<pre><code>import re import pandas as pd # my imports from job_processing.utils import * def get_duplication(rule): try: return re.compile(rule.duplication, re.UNICODE) except re.error: raise re.error def run_duplication(rule, df, column): cols = dict() cols["dirty"] = get_column_name(df, column) cols["clean"] = get_unique_column_name(df, "clean") # add a new column for the clean data df.loc[df.duplicated(subset=0, keep='first'),cols["clean"]] = df[cols["dirty"]] # return the dirty dataframe with the clean column appended to the end... return df, df[cols["clean"]].dropna().unique() </code></pre> <p>My original file:</p> <pre><code> 0 1 2 3 4 0 Jason Miller 42 4 25 1 Tina Ali 36 31 57 2 Jake Milner 24 2 62 3 Jason Miller 42 4 25 4 Jake Milner 24 2 62 5 Amy Cooze 73 3 70 6 Jason Miller 42 4 25 7 Jason Miller 42 4 25 8 Jake Milner 24 2 62 9 Jake Miller 42 4 25 </code></pre> <p>My required output is like below:</p> <pre><code> 0 1 2 3 4 0 Jason Miller 42 4 25 1 Tina Ali 36 31 57 2 Jake Milner 24 2 62 5 Amy Cooze 73 3 70 </code></pre> <p>Please review it and advise. Thanks.</p>
-2
2016-08-12T10:40:57Z
38,916,040
<p>It looks like you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a>:</p> <pre><code>df1 = df.drop_duplicates(subset=[0]) print (df1) 0 1 2 3 4 0 Jason Miller 42 4 25 1 Tina Ali 36 31 57 2 Jake Milner 24 2 62 5 Amy Cooze 73 3 70 df = pd.concat([df, df1]) print (df) 0 1 2 3 4 0 Jason Miller 42 4 25 1 Tina Ali 36 31 57 2 Jake Milner 24 2 62 3 Jason Miller 42 4 25 4 Jake Milner 24 2 62 5 Amy Cooze 73 3 70 6 Jason Miller 42 4 25 7 Jason Miller 42 4 25 8 Jake Milner 24 2 62 9 Jake Miller 42 4 25 0 Jason Miller 42 4 25 1 Tina Ali 36 31 57 2 Jake Milner 24 2 62 5 Amy Cooze 73 3 70 </code></pre>
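For reference, a couple of related knobs not shown in the answer: `keep` controls which duplicate survives, and `duplicated()` returns the boolean mask directly. A short sketch on a toy frame:

```python
import pandas as pd

# Toy frame with a duplicate value in column 0.
df = pd.DataFrame({0: ["Jason", "Tina", "Jason"], 1: [42, 36, 43]})

# Keep the first occurrence per value in column 0 (as in the answer).
first = df.drop_duplicates(subset=[0])
print(first[1].tolist())  # [42, 36]

# keep='last' keeps the final occurrence instead.
last = df.drop_duplicates(subset=[0], keep="last")
print(last[1].tolist())  # [36, 43]

# duplicated() marks the rows drop_duplicates(keep='first') would remove.
print(df.duplicated(subset=[0]).tolist())  # [False, False, True]
```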
0
2016-08-12T10:43:05Z
[ "python", "django", "python-3.x", "pandas" ]
Pandas starts generating run-time warning errors and requires a restart
38,916,188
<pre><code>import numpy as np import pandas as pd tempdata = np.random.random(10) myseries = pd.Series(tempdata) newseries = myseries[2:6] </code></pre> <p>Now if I put <strong>newseries</strong> in <strong>pd.Series</strong> and want to re-index it (I know indexing is immutable) with some alphabetic labels, I encounter the following error.</p> <p><strong>According to the pandas documentation, Series can accept data as a Python dict, an ndarray or a scalar value.</strong></p> <pre><code>newseries = pd.Series(newseries, index = ['a','b','c','d']) C:\Users\user110244\Anaconda3\lib\site-packages\pandas\formats\format.py:2191: RuntimeWarning: invalid value encountered in greater has_large_values = (abs_vals &gt; 1e6).any() C:\Users\user110244\Anaconda3\lib\site-packages\pandas\formats\format.py:2192: RuntimeWarning: invalid value encountered in less has_small_values = ((abs_vals &lt; 10**(-self.digits)) &amp; C:\Users\user110244\Anaconda3\lib\site-packages\pandas\formats\format.py:2193: RuntimeWarning: invalid value encountered in greater (abs_vals &gt; 0)).any() </code></pre> <p>This error displays continuously until I restart Python. Even if I try to execute some other simple command, say</p> <pre><code>a = 2 </code></pre> <p>the same error occurs again. I just want to know why this is happening. If I am wrong somewhere, it would be better to display the error once rather than have the same thing recur continuously. Is this a bug? Please explain.</p> <p>My system details are: Python 3.5.2 |Anaconda 4.1.1 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] Win10 Pro</p>
0
2016-08-12T10:50:55Z
38,916,547
<p>I don't get the same error. Since the indices 'a', 'b', etc. are not in the original series, they get a NaN value, and the numbers are lost.</p> <p>I think you want to discard the old index and start with a new one, so you can do:</p> <pre><code>newseries = pd.Series(newseries.values, index = ['a','b','c','d']) </code></pre>
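The difference between the two constructors can be shown directly: passing the old Series realigns by index label (all NaN here, since 'a'..'d' don't exist in it), while `.values` drops the old index first. A sketch with deterministic data:

```python
import numpy as np
import pandas as pd

myseries = pd.Series(np.arange(10, dtype=float))
newseries = myseries[2:6]  # keeps the original integer index 2..5

# Passing the Series realigns on the new labels -> all NaN here.
aligned = pd.Series(newseries, index=["a", "b", "c", "d"])
print(aligned.isna().all())  # True

# Passing .values discards the old index and keeps the numbers.
relabeled = pd.Series(newseries.values, index=["a", "b", "c", "d"])
print(relabeled["a"])  # 2.0
```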
1
2016-08-12T11:09:46Z
[ "python", "python-2.7", "python-3.x", "pandas" ]
Unable to fetch Table from BeautifulSoup
38,916,518
<pre><code>from BeautifulSoup import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.find('table') print table </code></pre> <p>Expected table is not resulted.</p> <p>I want to grab the table below:</p> <p><a href="http://i.stack.imgur.com/GI9wl.png" rel="nofollow"><img src="http://i.stack.imgur.com/GI9wl.png" alt="enter image description here"></a></p>
0
2016-08-12T11:07:18Z
38,916,683
<p>There's more than one table in the HTML. Get the second table with:</p> <pre><code>tables = soup.findAll('table') print tables[1] # the second table </code></pre> <p>Or you can go directly to the table by its CSS class:</p> <pre><code>from bs4 import BeautifulSoup soup = BeautifulSoup(html, 'html.parser') table = soup.find('table', class_='data2_s') print table </code></pre> <p>Note that the second snippet uses <code>bs4</code> (the maintained successor to the old <code>BeautifulSoup</code> import), which supports the <code>class_</code> keyword.</p>
1
2016-08-12T11:17:29Z
[ "python", "beautifulsoup", "urllib2" ]
Unable to fetch Table from BeautifulSoup
38,916,518
<pre><code>from BeautifulSoup import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.find('table') print table </code></pre> <p>Expected table is not resulted.</p> <p>I want to grab the table below:</p> <p><a href="http://i.stack.imgur.com/GI9wl.png" rel="nofollow"><img src="http://i.stack.imgur.com/GI9wl.png" alt="enter image description here"></a></p>
0
2016-08-12T11:07:18Z
38,916,696
<p>First off, use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow"><em>bs4</em></a>; <em>BeautifulSoup 3</em> is no longer maintained. Also, the table you want has the class <code>data2_s</code>; calling <code>find("table")</code> just gets the first table on the page, which is not what you want:</p> <pre><code>from bs4 import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.select_one("table.data2_s") # or table = soup.find("table", class_="data2_s") print table </code></pre> <p>Which gives you:</p> <pre><code>&lt;table class="data2_s"&gt;&lt;caption class="m"&gt;WAKKANAI   WMO Station ID:47401 Lat 45&lt;sup&gt;o&lt;/sup&gt;24.9'N  Lon 141&lt;sup&gt;o&lt;/sup&gt;40.7'E&lt;/caption&gt;&lt;tr&gt;&lt;th scope="col"&gt;Year&lt;/th&gt;&lt;th scope="col"&gt;Jan&lt;/th&gt;&lt;th scope="col"&gt;Feb&lt;/th&gt;&lt;th scope="col"&gt;Mar&lt;/th&gt;&lt;th scope="col"&gt;Apr&lt;/th&gt;&lt;th scope="col"&gt;May&lt;/th&gt;&lt;th scope="col"&gt;Jun&lt;/th&gt;&lt;th scope="col"&gt;Jul&lt;/th&gt;&lt;th scope="col"&gt;Aug&lt;/th&gt;&lt;th scope="col"&gt;Sep&lt;/th&gt;&lt;th scope="col"&gt;Oct&lt;/th&gt;&lt;th scope="col"&gt;Nov&lt;/th&gt;&lt;th scope="col"&gt;Dec&lt;/th&gt;&lt;th scope="col"&gt;Annual&lt;/th&gt;&lt;/tr&gt;&lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1938&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;22.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;10.7&lt;/td&gt;&lt;td 
class="data_0_0_0_0"&gt;3.3&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.8&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1939&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-1.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;13.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;20.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.2&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1940&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;8.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;19.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;15.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;10.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-1.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.3&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1941&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.8&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;8.1&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;12.7&lt;/td&gt;&lt;td 
class="data_0_0_0_0"&gt;16.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;10.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;5.4&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1942&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-7.8&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-8.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.8&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;7.1&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;12.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;18.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;15.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;10.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;2.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;5.6&lt;/td&gt;&lt;/tr&gt; etc................................... </code></pre>
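Once the `data2_s` table is selected, its rows can be flattened into plain Python lists. A sketch on a trimmed, inline sample of the page's markup (bs4 assumed available; Python 3 syntax):

```python
from bs4 import BeautifulSoup

# Trimmed stand-in for the page's data2_s table.
HTML = """
<table class="data2_s">
<tr><th>Year</th><th>Jan</th><th>Feb</th></tr>
<tr><td>1938</td><td>-5.2</td><td>-4.9</td></tr>
<tr><td>1939</td><td>-7.5</td><td>-6.6</td></tr>
</table>
"""

soup = BeautifulSoup(HTML, "html.parser")
table = soup.select_one("table.data2_s")

# One flat list of cell texts per <tr>, headers included.
rows = []
for tr in table.find_all("tr"):
    rows.append([c.get_text(strip=True) for c in tr.find_all(["th", "td"])])

print(rows[0])  # ['Year', 'Jan', 'Feb']
print(rows[1])  # ['1938', '-5.2', '-4.9']
```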
1
2016-08-12T11:18:02Z
[ "python", "beautifulsoup", "urllib2" ]
Python grep code much slower than command line's grep
38,916,645
<p>I'm just grepping some Xliff files for the pattern <code>approved="no"</code>. I have a Shell script and a Python script, and the difference in performance is huge (for a set of 393 files, and a total of 3,686,329 lines, 0.1s user time for the Shell script, and 6.6s for the Python script).</p> <p>Shell: <code>grep 'approved="no"' FILE</code><br> Python:</p> <pre><code>def grep(pattern, file_path): ret = False with codecs.open(file_path, "r", encoding="utf-8") as f: while 1 and not ret: lines = f.readlines(100000) if not lines: break for line in lines: if re.search(pattern, line): ret = True break return ret </code></pre> <p>Any ideas to improve performance with a multiplatform solution?</p> <h2>Results</h2> <p>Here are a couple of results after applying some of the proposed solutions.<br> Tests were run on a RHEL6 Linux machine, with Python 2.6.6.<br> Working set: 393 Xliff files, 3,686,329 lines in total.<br> Numbers are user time in seconds.</p> <p><strong>grep_1</strong> (io, joining 100,000 file lines): 50s<br> <strong>grep_3</strong> (mmap): 0.7s<br> <strong>Shell version</strong> (Linux grep): 0.130s</p>
2
2016-08-12T11:16:13Z
38,917,139
<p>Grep is actually a very clever piece of software; it does not just do a regex search per line. It utilizes the <a href="https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm" rel="nofollow">Boyer-Moore</a> algorithm. See <a href="https://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html" rel="nofollow">here</a> for more information.</p> <p>See <a href="https://github.com/heyhuyen/python-grep" rel="nofollow">here</a> for a Python implementation of grep for more pointers.</p>
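For intuition about why grep is fast: the Boyer-Moore family can skip several characters at once on a mismatch instead of advancing one position at a time. A minimal sketch of the simpler Boyer-Moore-Horspool variant (illustrative only, not how grep is implemented line for line):

```python
def horspool_find(text, pattern):
    """Return the index of the first occurrence of pattern, or -1.

    Uses the bad-character shift table of Boyer-Moore-Horspool: on a
    mismatch, skip by the distance from the last occurrence of the
    window's final character to the end of the pattern.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Default shift is the full pattern length for unseen characters.
    shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_find('grep is fast', 'fast'))  # 8
print(horspool_find('grep is fast', 'slow'))  # -1
```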
1
2016-08-12T11:42:43Z
[ "python", "performance", "grep" ]
Python grep code much slower than command line's grep
38,916,645
<p>I'm just grepping some Xliff files for the pattern <code>approved="no"</code>. I have a Shell script and a Python script, and the difference in performance is huge (for a set of 393 files, and a total of 3,686,329 lines, 0.1s user time for the Shell script, and 6.6s for the Python script).</p> <p>Shell: <code>grep 'approved="no"' FILE</code><br> Python:</p> <pre><code>def grep(pattern, file_path): ret = False with codecs.open(file_path, "r", encoding="utf-8") as f: while 1 and not ret: lines = f.readlines(100000) if not lines: break for line in lines: if re.search(pattern, line): ret = True break return ret </code></pre> <p>Any ideas to improve performance with a multiplatform solution?</p> <h2>Results</h2> <p>Here are a couple of results after applying some of the proposed solutions.<br> Tests were run on a RHEL6 Linux machine, with Python 2.6.6.<br> Working set: 393 Xliff files, 3,686,329 lines in total.<br> Numbers are user time in seconds.</p> <p><strong>grep_1</strong> (io, joining 100,000 file lines): 50s<br> <strong>grep_3</strong> (mmap): 0.7s<br> <strong>Shell version</strong> (Linux grep): 0.130s</p>
2
2016-08-12T11:16:13Z
38,918,877
<p>Python, being an interpreted language vs. a compiled C version of <code>grep</code> will always be slower.</p> <p>Apart from that your Python implementation is <em>not</em> the same as your <code>grep</code> example. It is not returning the matching lines, it is merely testing to see if the pattern matches the characters on any one line. A closer comparison would be:</p> <pre><code>grep -q 'approved="no"' FILE </code></pre> <p>which will return as soon as a match is found and not produce any output.</p> <p>You can substantially speed up your code by writing your <code>grep()</code> function more efficiently:</p> <pre><code>def grep_1(pattern, file_path): with io.open(file_path, "r", encoding="utf-8") as f: while True: lines = f.readlines(100000) if not lines: return False if re.search(pattern, ''.join(lines)): return True </code></pre> <p>This uses <code>io</code> instead of <code>codecs</code> which I found was a little faster. The while loop condition does not need to check <code>ret</code> and you can return from the function as soon as the result is known. There's no need to run <code>re.search()</code> for each individual ilne - just join the lines and perform a single search.</p> <p>At the cost of memory usage you could try this:</p> <pre><code>import io def grep_2(pattern, file_path): with io.open(file_path, "r", encoding="utf-8") as f: return re.search(pattern, f.read()) </code></pre> <p>If memory is an issue you could <code>mmap</code> the file and run the regex search on the <code>mmap</code>:</p> <pre><code>import io import mmap def grep_3(pattern, file_path): with io.open(file_path, "r", encoding="utf-8") as f: return re.search(pattern, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)) </code></pre> <p><code>mmap</code> will efficiently read the data from the file in pages without consuming a lot of memory. 
Also, you'll probably find that <code>mmap</code> runs faster than the other solutions.</p> <hr> <p>Using <code>timeit</code> for each of these functions shows that this is the case:</p> <pre> 10 loops, best of 3: 639 msec per loop # grep() 10 loops, best of 3: 78.7 msec per loop # grep_1() 10 loops, best of 3: 19.4 msec per loop # grep_2() 100 loops, best of 3: 5.32 msec per loop # grep_3() </pre> <p>The file was <code>/usr/share/dict/words</code> containing approx 480,000 lines and the search pattern was <code>zymurgies</code>, which occurs near the end of the file. For comparison, when pattern is near the start of the file, e.g. <code>abaciscus</code>, the times are:</p> <pre> 10 loops, best of 3: 62.6 msec per loop # grep() 1000 loops, best of 3: 1.6 msec per loop # grep_1() 100 loops, best of 3: 14.2 msec per loop # grep_2() 10000 loops, best of 3: 37.2 usec per loop # grep_3() </pre> <p>which again shows that the <code>mmap</code> version is fastest.</p> <hr> <p>Now comparing the <code>grep</code> command with the Python <code>mmap</code> version:</p> <pre><code>$ time grep -q zymurgies /usr/share/dict/words real 0m0.010s user 0m0.007s sys 0m0.003s $ time python x.py grep_3 # uses mmap real 0m0.023s user 0m0.019s sys 0m0.004s </code></pre> <p>Which is not too bad considering the advantages that <code>grep</code> has.</p>
1
2016-08-12T13:11:21Z
[ "python", "performance", "grep" ]
Python grep code much slower than command line's grep
38,916,645
<p>I'm just grepping some Xliff files for the pattern <code>approved="no"</code>. I have a Shell script and a Python script, and the difference in performance is huge (for a set of 393 files, and a total of 3,686,329 lines, 0.1s user time for the Shell script, and 6.6s for the Python script).</p> <p>Shell: <code>grep 'approved="no"' FILE</code><br> Python:</p> <pre><code>def grep(pattern, file_path): ret = False with codecs.open(file_path, "r", encoding="utf-8") as f: while 1 and not ret: lines = f.readlines(100000) if not lines: break for line in lines: if re.search(pattern, line): ret = True break return ret </code></pre> <p>Any ideas to improve performance with a multiplatform solution?</p> <h2>Results</h2> <p>Here are a couple of results after applying some of the proposed solutions.<br> Tests were run on a RHEL6 Linux machine, with Python 2.6.6.<br> Working set: 393 Xliff files, 3,686,329 lines in total.<br> Numbers are user time in seconds.</p> <p><strong>grep_1</strong> (io, joining 100,000 file lines): 50s<br> <strong>grep_3</strong> (mmap): 0.7s<br> <strong>Shell version</strong> (Linux grep): 0.130s</p>
2
2016-08-12T11:16:13Z
38,975,809
<p>Another source of slowness here is calling <code>re.search</code> with a string pattern inside a loop. The <code>re</code> module caches compiled patterns, so the regex isn't literally <em>recompiled</em> for every single line, but the cache lookup on each call still adds overhead.</p> <p>Try instead:</p> <pre><code>pattern = re.compile(pattern) while True: ... if pattern.search(line): ... </code></pre>
0
2016-08-16T13:06:39Z
[ "python", "performance", "grep" ]
Regular expression finditer: search twice on the same symbols
38,916,664
<p>I need to find matches in the text and get its positions. For example, I have to find "hello hello" in the text. When the text is "hello hello world hello hello", it's ok, I get the positions 0-11 and 18-29. But when the text is "hello hello hello world", I get only one position - 0-11. But I have to find the both ones (0-11 and 6-17). I mean, I get</p> <ol> <li><strong>hello hello</strong> hello world</li> </ol> <p>but have to get</p> <ol> <li><p><strong>hello hello</strong> hello world</p></li> <li><p>hello <strong>hello hello</strong> world</p></li> </ol> <p>In another case I have to find the complex pattern: "hello 1,2 beautiful 2,4 world" - it means that between the words "hello" and "beautiful" could be one or two words and between the words "beautiful" and "world" 2, 3 or 4 words. And I have to find all the combinations. </p> <p>This is the pattern: <code>re.compile(u'(^|[\[\]\/\\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@])(hello)(([\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]+[a-zA-Zа-яА-Я$]+(-[a-zA-Zа-яА-Я$]+)*){1,2}[\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]*)(beautiful)(([\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]+[a-zA-Zа-яА-Я$]+(-[a-zA-Zа-яА-Я$]+)*){2,4}[\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]*)(world)($|[\[\]\/\\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@])')</code></p> <p>And the text is "hello very beautiful beautiful very big world world". 
I can get only one combination, but I need to get 4:</p> <ol> <li><p><strong>hello</strong> very <strong>beautiful</strong> beautiful very big <strong>world</strong> world</p></li> <li><p><strong>hello</strong> very beautiful <strong>beautiful</strong> very big <strong>world</strong> world</p></li> <li><p><strong>hello</strong> very <strong>beautiful</strong> beautiful very big world <strong>world</strong></p></li> <li><p><strong>hello</strong> very beautiful <strong>beautiful</strong> very big world <strong>world</strong></p></li> </ol> <p>How can I get all the combinations of the matches when the matches intersect each other?</p> <p>The flag re.DOTALL doesn't help.</p> <pre><code>import re patterns = [ u'(hello)(( [a-z]+ *){1,2})(beautiful)(( [a-z]+ *){2,4})(world)', u'hello hello' ] text = u'hello hello hello world hello very beautiful beautiful very big world world' for p in patterns: print p c = re.compile(p, flags=re.I+re.U) for m in c.finditer(text): print m.start(), m.end() </code></pre> <p>Result is</p> <pre><code>&gt;&gt;&gt; (hello)(( [a-z]+ *){1,2})(beautiful)(( [a-z]+ *){2,4})(world) &gt;&gt;&gt; 24 69 (need 24 69 and 24 69 and 24 75 and 24 75 - because there are two positions of the word "beautiful") &gt;&gt;&gt; hello hello &gt;&gt;&gt; 0 11 (need 0 11 and 6 17) </code></pre> <p>The real examples of the patterns are:</p> <blockquote> <p>u"выйдите на улицы", u"избавить.* от", u"смотрите смотрите", u"смеят.*"</p> </blockquote> <p>And with the distance:</p> <blockquote> <p>имени 0,3 ленина</p> <p>целых 0,5 лет.*</p> <p>целых 0,5 лет.* 0,1 назад</p> </blockquote> <p><strong>UPD</strong></p> <p>The variant <code>u'(?=(hello hello))</code> helps with the patterns without distances between the words. But how can I use it in the pattern with distances, for example <code>(hello) (?:[a-zA-Zа-яА-Я]+ ){1,2}(beautiful) (?:[a-zA-Zа-яА-Я]+ ){2,4}(world)</code> ?</p>
1
2016-08-12T11:16:57Z
38,917,807
<p>I think you can try the <strong>expression below</strong> instead of a regexp; it doesn't look that elegant, but it might solve your problem:</p> <p>Expression:</p> <pre><code> [pos for pos, char in enumerate(string) if string[pos:].find(pattern) == 0]
</code></pre> <p>It returns a list with the positions of the pattern in the string.</p> <pre><code>In [43]: string = "hello very beautiful beautiful very big world world"

In [44]: pattern='hello'

In [45]: [pos for pos, char in enumerate(string) if string[pos:].find(pattern) == 0]
Out[45]: [0]

In [46]: pattern='very'

In [47]: [pos for pos, char in enumerate(string) if string[pos:].find(pattern) == 0]
Out[47]: [6, 31]

In [48]: pattern='world'

In [49]: [pos for pos, char in enumerate(string) if string[pos:].find(pattern) == 0]
Out[49]: [40, 46]

In [50]: pattern='very big'

In [51]: [pos for pos, char in enumerate(string) if string[pos:].find(pattern) == 0]
Out[51]: [31]
</code></pre> <p>Hope this helps.</p>
0
2016-08-12T12:18:15Z
[ "python", "regex" ]
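A note on the scan in the answer above: `str.startswith(pattern, pos)` expresses the same offset-by-offset check a little more directly than slicing plus `find`, and it likewise reports overlapping occurrences. A minimal sketch using the same sample string:

```python
string = "hello very beautiful beautiful very big world world"
pattern = "very"

# Checking every offset by hand reports overlapping occurrences too,
# which a single plain re.finditer() pass would skip past.
positions = [pos for pos in range(len(string))
             if string.startswith(pattern, pos)]
print(positions)  # [6, 31]
```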
Regular expression finditer: search twice on the same symbols
38,916,664
<p>I need to find matches in the text and get its positions. For example, I have to find "hello hello" in the text. When the text is "hello hello world hello hello", it's ok, I get the positions 0-11 and 18-29. But when the text is "hello hello hello world", I get only one position - 0-11. But I have to find the both ones (0-11 and 6-17). I mean, I get</p> <ol> <li><strong>hello hello</strong> hello world</li> </ol> <p>but have to get</p> <ol> <li><p><strong>hello hello</strong> hello world</p></li> <li><p>hello <strong>hello hello</strong> world</p></li> </ol> <p>In another case I have to find the complex pattern: "hello 1,2 beautiful 2,4 world" - it means that between the words "hello" and "beautiful" could be one or two words and between the words "beautiful" and "world" 2, 3 or 4 words. And I have to find all the combinations. </p> <p>This is the pattern: <code>re.compile(u'(^|[\[\]\/\\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@])(hello)(([\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]+[a-zA-Zа-яА-Я$]+(-[a-zA-Zа-яА-Я$]+)*){1,2}[\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]*)(beautiful)(([\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]+[a-zA-Zа-яА-Я$]+(-[a-zA-Zа-яА-Я$]+)*){2,4}[\[\]\/\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@%]*)(world)($|[\[\]\/\\\^\$\.\|\?\*\+\(\)\{\} !&lt;&gt;:;,#@])')</code></p> <p>And the text is "hello very beautiful beautiful very big world world". 
I can get the only one combination, but need to get 4:</p> <ol> <li><p><strong>hello</strong> very <strong>beautiful</strong> beautiful very big <strong>world</strong> world</p></li> <li><p><strong>hello</strong> very beautiful <strong>beautiful</strong> very big <strong>world</strong> world</p></li> <li><p><strong>hello</strong> very <strong>beautiful</strong> beautiful very big world <strong>world</strong></p></li> <li><p><strong>hello</strong> very beautiful <strong>beautiful</strong> very big world <strong>world</strong></p></li> </ol> <p>How can I get all the combination of the matches when the matches intersect each other?</p> <p>The flag re.DOTALL doesn't help.</p> <pre><code>import re patterns = [ u'(hello)(( [a-z]+ *){1,2})(beautiful)(( [a-z]+ *){2,4})(world)', u'hello hello' ] text = u'hello hello hello world hello very beautiful beautiful very big world world' for p in patterns: print p c = re.compile(p, flags=re.I+re.U) for m in c.finditer(text): print m.start(), m.end() </code></pre> <p>Result is</p> <pre><code>&gt;&gt;&gt; (hello)(( [a-z]+ *){1,2})(beautiful)(( [a-z]+ *){2,4})(world) &gt;&gt;&gt; 24 69 (need 24 69 and 24 69 and 24 75 and 24 75 - because there are two positions of the word "beautiful") &gt;&gt;&gt; hello hello &gt;&gt;&gt; 0 11 (need 0 11 and 6 17) </code></pre> <p>The real examples of the patterns is:</p> <blockquote> <p>u"выйдите на улицы", u"избавить.* от", u"смотрите смотрите", u"смеят.*"</p> </blockquote> <p>And with the distance:</p> <blockquote> <p>имени 0,3 ленина</p> <p>целых 0,5 лет.*</p> <p>целых 0,5 лет.* 0,1 назад</p> </blockquote> <p><strong>UPD</strong></p> <p>The variant <code>u'(?=(hello hello))</code> helps with the patterns without distances between the words. But how can I use it in the pattern with distances, for example <code>(hello) (?:[a-zA-Zа-яА-Я]+ ){1,2}(beautiful) (?:[a-zA-Zа-яА-Я]+ ){2,4}(world)</code> ?</p>
1
2016-08-12T11:16:57Z
38,917,809
<p>Your question is still lacking a bit in clarity of what you wish to do, but I'll take a stab at it:</p> <p>Regex to find repetitions without consumption:</p> <pre><code>([a-zA-Zа-яА-Я]+)(?= (\1)) </code></pre> <p>Regex to find <code>hello</code> <code>beautiful</code> and <code>world</code> with specific numbers of words in between:</p> <pre><code>(hello) (?:[a-zA-Zа-яА-Я]+ ){1,2}(beautiful) (?:[a-zA-Zа-яА-Я]+ ){2,4}(world) </code></pre> <h2>Final Update</h2> <p>What you wish to do is not easily done completely in regex in a single run.</p> <p>Easier would be to loop and do different regexes:</p> <pre><code>for i in range(1,3): for j in range(2,5): regStr='(hello) (?:\w+ ){' + str(i) + '}(beautiful) (?:\w+ ){' + str(j) +'}(world)' </code></pre> <p>and then do a second check for duplicates using</p> <pre><code>([a-zA-Zа-яА-Я]+)(?= (\1)) </code></pre>
0
2016-08-12T12:18:22Z
[ "python", "regex" ]
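A runnable sketch of the non-consuming lookahead idea mentioned in the question's update, applied to the "hello hello" case — the whole pattern sits inside `(?=(...))`, so the engine consumes nothing, advances one character at a time, and reports overlapping matches via the capture group:

```python
import re

text = 'hello hello hello world'
# The zero-width lookahead matches without consuming text; group 1
# carries the actual span of each (possibly overlapping) occurrence.
pattern = re.compile(r'(?=(hello hello))')
spans = [(m.start(1), m.end(1)) for m in pattern.finditer(text)]
print(spans)  # [(0, 11), (6, 17)]
```

The same wrapping works with the distance quantifiers: put the full `(hello) (?:\w+ ){1,2}(beautiful)…` pattern inside one `(?=(...))` group to stop it consuming the words between matches.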
how to handle web SQL queries and xml replies in Python
38,916,674
<p>I have a distant database on which I can send SQL select queries through a web service like this:</p> <pre><code>http://aa.bb.cc.dd:85/SQLWEB?query=select+*+from+machine&amp;output=xml_v2 </code></pre> <p>which returns </p> <pre><code>&lt;Query&gt; &lt;SQL&gt;&lt;/SQL&gt; &lt;Fields&gt; &lt;MACHINEID DataType="Integer" DataSize="4"/&gt; &lt;NAME DataType="WideString" DataSize="62"/&gt; &lt;MACHINECLASSID DataType="Integer" DataSize="4"/&gt; &lt;SUBMACHINECLASS DataType="WideString" DataSize="22"/&gt; &lt;DISABLED DataType="Integer" DataSize="4"/&gt; &lt;/Fields&gt; &lt;Record&gt; &lt;MACHINEID&gt;1&lt;/MACHINEID&gt; &lt;NAME&gt;LOADER&lt;/NAME&gt; &lt;MACHINECLASSID&gt;16&lt;/MACHINECLASSID&gt; &lt;SUBMACHINECLASS&gt;A&lt;/SUBMACHINECLASS&gt; &lt;DISABLED&gt;0&lt;/DISABLED&gt; &lt;/Record&gt; &lt;Record&gt; ... &lt;/Record&gt; ... &lt;/Query&gt; </code></pre> <p>Then I need to insert the records into a local SQL database.</p> <p>What's the easiest way ? Thanks !</p>
0
2016-08-12T11:17:13Z
38,916,962
<p><strong>First of all, putting queries in the URL is a horrible idea for security.</strong></p> <p>Use an XML library to parse the reply, then iterate over the records to add them to the db.</p> <pre><code>import xml.etree.ElementTree as ET

tree = ET.parse('xml file')
root = tree.getroot()
# root = ET.fromstring(xml_string) if you start from a string

for record in root.findall('Record'):
    # the fields are child *elements*, so read them with find(...).text
    # rather than get(), which looks up XML attributes
    MACHINEID = record.find('MACHINEID').text
    NAME = record.find('NAME').text
    MACHINECLASSID = record.find('MACHINECLASSID').text
    SUBMACHINECLASS = record.find('SUBMACHINECLASS').text
    DISABLED = record.find('DISABLED').text
    #your code to add this result to the db
</code></pre> <p><a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow">ElementTree XML API</a></p>
0
2016-08-12T11:31:59Z
[ "python", "web-services", "xml-parsing" ]
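A self-contained sketch of the whole round trip for the answer above — parse the web service's XML reply with ElementTree (the fields are child elements, so `find(...).text` is what's needed) and insert the records into a local SQLite table. The table name and the in-memory database are placeholders for whatever local schema you actually use:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A trimmed-down version of the reply shown in the question.
XML = """<Query>
<Record>
<MACHINEID>1</MACHINEID><NAME>LOADER</NAME>
<MACHINECLASSID>16</MACHINECLASSID>
<SUBMACHINECLASS>A</SUBMACHINECLASS><DISABLED>0</DISABLED>
</Record>
</Query>"""

COLS = ["MACHINEID", "NAME", "MACHINECLASSID", "SUBMACHINECLASS", "DISABLED"]

db = sqlite3.connect(":memory:")  # placeholder; point this at your local DB
db.execute("CREATE TABLE machine (machineid, name, machineclassid,"
           " submachineclass, disabled)")

root = ET.fromstring(XML)
for rec in root.findall("Record"):
    # Each field is a child element of <Record>, so use find(...).text
    values = [rec.find(col).text for col in COLS]
    db.execute("INSERT INTO machine VALUES (?,?,?,?,?)", values)
db.commit()

rows = db.execute("SELECT * FROM machine").fetchall()
print(rows)  # [('1', 'LOADER', '16', 'A', '0')]
```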
Facebook data pull: how do I pull data before a specific date in python?
38,916,706
<p>I need to pull facebook data on or before 4th August 2016. I have used </p> <pre><code>import facebook user = 'nbcolympics' graph = facebook.GraphAPI(access_token) profile = graph.get_object(user) posts = graph.get_connections(profile['id'], 'posts') feeds = graph.get_connections(profile['id'], 'feed') </code></pre> <p>But here I get all the posts and feeds. Is there a way to pull data which are before a specific date, say 4th August 2016? I am using Python for programming</p>
0
2016-08-12T11:18:32Z
38,917,403
<p>Based on the suggestion from CBroe, a Unix timestamp can be passed in the main command to extract data for specific dates.</p> <pre><code>from datetime import datetime

until_date_timestamp = int(datetime.strptime('04/08/2016', '%d/%m/%Y').strftime("%s"))

graph = facebook.GraphAPI(access_token)
profile = graph.get_object(user)
posts = graph.get_connections(profile['id'], 'posts', until = until_date_timestamp)
feeds = graph.get_connections(profile['id'], 'feed', until = until_date_timestamp)
</code></pre> <p>If we want to specify a start date as well, then it is:</p> <pre><code>from datetime import datetime

until_date_timestamp = int(datetime.strptime('04/08/2016', '%d/%m/%Y').strftime("%s"))
start_date_timestamp = int(datetime.strptime('01/01/2016', '%d/%m/%Y').strftime("%s"))

graph = facebook.GraphAPI(access_token)
profile = graph.get_object(user)
posts = graph.get_connections(profile['id'], 'posts', since = start_date_timestamp, until = until_date_timestamp)
feeds = graph.get_connections(profile['id'], 'feed', since = start_date_timestamp, until = until_date_timestamp)
</code></pre>
0
2016-08-12T11:57:50Z
[ "python", "facebook", "api" ]
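One detail worth hedging in the answer above: `strftime("%s")` is not one of Python's documented format codes — it happens to work on Linux via the platform's C library and uses the local timezone. A portable way to turn a date string into the Unix timestamp that `since`/`until` expect:

```python
import calendar
from datetime import datetime

def to_timestamp(date_string):
    # calendar.timegm() treats the broken-down time as UTC, so the
    # result does not depend on the machine's local timezone.
    return calendar.timegm(datetime.strptime(date_string, '%d/%m/%Y').timetuple())

until_date_timestamp = to_timestamp('04/08/2016')
print(until_date_timestamp)  # 1470268800
```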
Python library for handling linux's audit.log?
38,916,777
<p>I'm searching for a library that I could import to my python (3.5) code to ease the processing of audit.log (on my CentOS6 it is /var/log/audit/audit.log). I'm thinking about a library that processes the log lines as python arrays for example, enables the querying/filtering in it in a human way without writing all the processes to get the job done.</p> <p>I found out about <a href="https://www.redhat.com/archives/linux-audit/2006-May/msg00174.html" rel="nofollow">audit-python</a>, but it's not in pip list and couldn't find a way to install it for CentOS6. So far no hope of a library handling this widespread audit log.</p> <p>I have been googling for a while but it seems like there is no such library, or is there? Maybe someone would share their code of how they did process the audit.log in python? It would be useful for every sysadmin that uses python.</p>
-1
2016-08-12T11:21:58Z
38,916,954
<p>You can install the package: <code>setroubleshoot-server</code></p> <p>Then look at the file <code>/bin/sealert</code> which is a python program and does a lot of stuff with <code>audit.log</code> based on the flags.</p>
0
2016-08-12T11:31:39Z
[ "python", "linux", "audit" ]
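If you end up processing `/var/log/audit/audit.log` directly in Python rather than through `sealert`, its records are mostly `key=value` pairs, so a small parser gets you queryable dicts without any extra library. A sketch on a simplified, made-up line — a real file interleaves several record types and quotes some values, so treat this as a starting point only:

```python
import shlex

# A simplified example in the key=value shape audit.log records use.
line = 'type=USER_LOGIN msg=audit(1471003200.276:24287): pid=3280 uid=0 res=success'

def parse_audit_line(line):
    """Split one audit.log line into a dict of its key=value fields."""
    fields = {}
    for token in shlex.split(line):  # shlex keeps quoted values intact
        if '=' in token:
            key, _, value = token.partition('=')
            fields[key] = value
    return fields

rec = parse_audit_line(line)
print(rec['type'], rec['uid'], rec['res'])  # USER_LOGIN 0 success
```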
Python library for handling linux's audit.log?
38,916,777
<p>I'm searching for a library that I could import to my python (3.5) code to ease the processing of audit.log (on my CentOS6 it is /var/log/audit/audit.log). I'm thinking about a library that processes the log lines as python arrays for example, enables the querying/filtering in it in a human way without writing all the processes to get the job done.</p> <p>I found out about <a href="https://www.redhat.com/archives/linux-audit/2006-May/msg00174.html" rel="nofollow">audit-python</a>, but it's not in pip list and couldn't find a way to install it for CentOS6. So far no hope of a library handling this widespread audit log.</p> <p>I have been googling for a while but it seems like there is no such library, or is there? Maybe someone would share their code of how they did process the audit.log in python? It would be useful for every sysadmin that uses python.</p>
-1
2016-08-12T11:21:58Z
38,956,161
<p>As I didn't find a library and no one suggested one, I came up with this function, which uses a binary provided by the audit package:</p> <pre><code>def read_audit(before,now,user):
    auparam = " -sc EXECVE"
    cmd = "ausearch -ts " + before.strftime('%H:%M:%S') + " -te " + now.strftime('%H:%M:%S') + " -ua " + user + auparam
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    res = p.stdout.read().decode()
    return res
</code></pre> <p>I call the binary via the subprocess module, so an <code>import subprocess</code> is needed in the header of the code. The function grabs the logs of program executions between the provided times via the <code>ausearch</code> tool.</p>
0
2016-08-15T13:38:09Z
[ "python", "linux", "audit" ]
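A variant of the helper above that avoids `shell=True` — building the argv as a list means the user name and timestamps are passed straight to `ausearch` and can never be interpreted by a shell. The flag values mirror the answer's command; adjust them to your setup:

```python
import subprocess
from datetime import datetime

def audit_cmd(before, now, user):
    """Build the ausearch argv; list form sidesteps shell quoting entirely."""
    return ["ausearch",
            "-ts", before.strftime("%H:%M:%S"),
            "-te", now.strftime("%H:%M:%S"),
            "-ua", user,
            "-sc", "EXECVE"]

cmd = audit_cmd(datetime(2016, 8, 15, 9, 0, 0),
                datetime(2016, 8, 15, 10, 30, 0), "alice")
print(cmd)
# ['ausearch', '-ts', '09:00:00', '-te', '10:30:00', '-ua', 'alice', '-sc', 'EXECVE']
```

On the CentOS box itself you would then run `subprocess.check_output(cmd).decode()` to get the report.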
Python 3 json.loads - json.decoder error
38,916,901
<p>I'm trying to parse some JSON but it doesn't work. I removed the try and except in my code so you can see the error message.</p> <pre><code>import sqlite3
import json
import codecs

conn = sqlite3.connect('geodata.sqlite')
cur = conn.cursor()

cur.execute('SELECT * FROM Locations')
fhand = codecs.open('where.js','w', "utf-8")
fhand.write("myData = [\n")
count = 0
for row in cur :
    data = str(row[1])
    print (data)
    print (type(data))
    #try:
    js = json.loads(data)
    #except: continue

    if not('status' in js and js['status'] == 'OK') : continue

    lat = js["results"][0]["geometry"]["location"]["lat"]
    lng = js["results"][0]["geometry"]["location"]["lng"]
    if lat == 0 or lng == 0 : continue
    where = js['results'][0]['formatted_address']
    where = where.replace("'","")
    try :
        print (where, lat, lng)
        count = count + 1
        if count &gt; 1 : fhand.write(",\n")
        output = "["+str(lat)+","+str(lng)+", '"+where+"']"
        fhand.write(output)
    except:
        continue

fhand.write("\n];\n")
cur.close()
fhand.close()
print (count, "records written to where.js")
print ("Open where.html to view the data in a browser")
</code></pre> <p>My problem is that <code>js = json.loads(data)</code> can't parse it for some reason and I get the following exception:</p> <pre><code> "raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
</code></pre> <p>I thought it was because of the data type, but it's doing a weird thing.
When I ask for <code>type(data)</code> I get <code>str</code>, but when I print <code>data</code> it looks like bytes.</p> <p>Full output for the code:</p> <pre><code>Traceback (most recent call last):
  File "C:/Users/user/Desktop/Courses Online/Coursera/Using Databases with Python/geodata/geodump.py", line 17, in &lt;module&gt;
    js = json.loads(data)
  File "C:\Users\user\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 319, in loads
    return _default_decoder.decode(s)
  File "C:\Users\user\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\user\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
b'{\n   "results" : [\n      {\n         "address_components" : [\n            {\n ...... long json line......
&lt;class 'str'&gt;
</code></pre> <p>I also tried to use <code>decode("utf-8")</code> on <code>data</code>, but I'm getting the following error: <code>'str' object has no attribute 'decode'</code></p>
0
2016-08-12T11:28:04Z
38,916,966
<p>You are converting a <code>bytes</code> value to a string the wrong way here:</p> <pre><code>data = str(row[1])
</code></pre> <p>You forced it to be a <code>str()</code> object, but for <code>bytes</code> objects that'll <em>include</em> the <code>b</code> prefix and quotes, because <code>bytes</code> objects don't have a <code>__str__</code> method, only <code>__repr__</code>, so you get a debug representation.</p> <p>Decode the row <em>without</em> converting to a string:</p> <pre><code>data = row[1].decode('utf8')
</code></pre> <p>You really shouldn't hand-craft JSON / Javascript output in your code either. Just use <code>json.dumps()</code>; if you <em>must</em> use per-row streaming, you can still use <code>json.dump()</code> to create each list entry:</p> <pre><code>import sqlite3
import json

conn = sqlite3.connect('geodata.sqlite')
cur = conn.cursor()
cur.execute('SELECT * FROM Locations')

with open('where.js', 'w', encoding="utf-8") as fhand:
    fhand.write("myData = [\n")
    written = 0
    for row in cur:
        try:
            js = json.loads(row[1].decode('utf8'))
        except json.JSONDecodeError:
            print('Could not decode a row: ', row[1])
            continue
        if js.get('status') != 'OK':
            continue
        lat = js["results"][0]["geometry"]["location"]["lat"]
        lng = js["results"][0]["geometry"]["location"]["lng"]
        if not (lat and lng):
            continue
        where = js['results'][0]['formatted_address']
        where = where.replace("'", "")
        print(where, lat, lng)
        if written:
            fhand.write(",\n")
        json.dump([lat, lng, where], fhand)
        written += 1
    fhand.write("\n];\n")
</code></pre> <p>This uses plain <code>open()</code> (in Python 3, there is never a need to use <code>codecs.open()</code>), uses the file as a context manager, and keeps a separate <code>written</code> counter so the separating commas are only placed between rows that actually made it into the output.</p>
0
2016-08-12T11:32:15Z
[ "python", "python-3.x" ]
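The core pitfall of this question in isolation — calling `str()` on a `bytes` value yields the debug representation (with the `b` prefix and quotes), which is not valid JSON, while `.decode()` yields the actual text:

```python
import json

data = b'{"status": "OK"}'

# str() on bytes produces the repr, which json.loads() cannot parse:
print(str(data))            # b'{"status": "OK"}'
# decode() produces the real text:
print(data.decode('utf8'))  # {"status": "OK"}

print(json.loads(data.decode('utf8'))['status'])  # OK
```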
python pandas get index boundaries from a series of Booleans
38,917,076
<p>I am trying to cut videos based on some characteristics. My current strategy yields a <code>pandas</code> series of booleans for each frame, indexed by timestamp. <code>True</code> to keep it, <code>False</code> to dump it.</p> <p>As I plan to cut videos, I need to extract boundaries from this list, so that I can tell ffmpeg the beginning and end of the parts I want to extract from the main video.</p> <p>To sum up:</p> <p>I have a <code>pandas</code> Series which looks like this:</p> <pre><code>acquisitionTs
0.577331     False
0.611298     False
0.645255     False
0.679218     False
0.716538     False
0.784453      True
0.784453      True
0.818417      True
0.852379      True
0.886336      True
0.920301      True
0.954259     False
...
83.393376    False
83.427345    False
dtype: bool
</code></pre> <p>(truncated for presentation reasons, but the timestamp usually begins at 0)</p> <p>and I need to get the boundaries of the <code>True</code> sequences, so in this example I should get <code>[[t_0,t_1],[t_2,t_3], ... [t_2n-1,t_2n]]</code>, with <code>t_0 = 0.784453</code> and <code>t_1 = 0.920301</code> if I have <code>n</code> different sequences of <code>True</code> in my pandas Series.</p> <p>Now this problem seems very simple; in fact you can just shift the sequence by one and make a xor between the two to get a list of booleans with <code>True</code> marking the boundaries:</p> <pre><code>e = df.shift(periods=1, freq=None, axis=0)^df
print(e[e].index)
</code></pre> <p>(with <code>df</code> being a pandas Series). There is still some work to do, like figuring out whether the first element is a rising edge or a falling edge, but this hack works.</p> <p>However, that doesn't seem very pythonic. In fact, the problem is so simple that I believe there must be, somewhere in <code>pandas</code>, <code>numpy</code> or even <code>python</code>, a prebuilt function for this which would fit nicely in a single function call instead of a hack like the above.
The <code>groupby</code> function seems promising, but I have never used it before.</p> <p>What would be the best way of doing this?</p>
0
2016-08-12T11:38:39Z
38,918,102
<p>You could use <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.ndimage.measurements.label.html" rel="nofollow"><code>scipy.ndimage.label</code></a> to identify the clusters of <code>True</code>s:</p> <pre><code>In [102]: ts Out[102]: 0.069347 False 0.131956 False 0.143948 False 0.224864 False 0.242640 True 0.372599 False 0.451989 False 0.462090 False 0.579956 True 0.588791 True 0.603638 False 0.625107 False 0.642565 False 0.708547 False 0.730239 False 0.741652 False 0.747126 True 0.783276 True 0.896705 True 0.942829 True Name: keep, dtype: bool In [103]: groups, nobs = ndimage.label(ts); groups Out[103]: array([0, 0, 0, 0, 1, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3], dtype=int32) </code></pre> <p>Once you have the <code>groups</code> array, you can find the associated times using <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>groupby/agg</code></a>:</p> <pre><code> result = (df.loc[df['group'] != 0] .groupby('group')['times'] .agg({'start':'first','end':'last'})) </code></pre> <p>For example,</p> <pre><code>import numpy as np import pandas as pd import scipy.ndimage as ndimage np.random.seed(2016) def make_ts(N, ngroups): times = np.random.random(N) times = np.sort(times) idx = np.sort(np.random.randint(N, size=(ngroups,))) arr = np.zeros(N) arr[idx] = 1 arr = arr.cumsum() arr = (arr % 2).astype(bool) ts = pd.Series(arr, index=times, name='keep') return ts def find_groups(ts): groups, nobs = ndimage.label(ts) df = pd.DataFrame({'times': ts.index, 'group': groups}) result = (df.loc[df['group'] != 0] .groupby('group')['times'] .agg({'start':'first','end':'last'})) return result ts = make_ts(20, 5) result = find_groups(ts) </code></pre> <p>yields</p> <pre><code> start end group 1 0.242640 0.242640 2 0.579956 0.588791 3 0.747126 0.942829 </code></pre> <p>To obtain the start and end times as a list of lists you could use:</p> <pre><code>In [125]: result.values.tolist() Out[125]: 
[[0.24264034406127022, 0.24264034406127022], [0.5799564094638113, 0.5887908182432907], [0.7471260123697537, 0.9428288694956402]] </code></pre> <hr> <p>Using <code>ndimage.label</code> is convenient, but note that it is also possible to compute this without <code>scipy</code>:</p> <pre><code>def find_groups_without_scipy(ts): df = pd.DataFrame({'times': ts.index, 'group': (ts.diff() == True).cumsum()}) result = (df.loc[df['group'] % 2 == 1] .groupby('group')['times'] .agg({'start':'first','end':'last'})) return result </code></pre> <p>The main idea here is to find labels for the clusters of <code>True</code>s using <code>(ts.diff() == True).cumsum()</code>. <code>ts.diff() == True</code> gives the same result as <code>ts.shift() ^ ts</code>, but is a bit faster. Taking the cumulative sum (i.e. calling <code>cumsum</code>) treats <code>True</code> as equal to 1 and <code>False</code> as equal to 0, so each time a <code>True</code> is encountered the cumulative sum increases by 1. Thus each cluster gets labeled with a different number:</p> <pre><code>In [111]: (ts.diff() == True).cumsum() Out[111]: 0.069347 0 0.131956 0 0.143948 0 0.224864 0 0.242640 1 0.372599 2 0.451989 2 0.462090 2 0.579956 3 0.588791 3 0.603638 4 0.625107 4 0.642565 4 0.708547 4 0.730239 4 0.741652 4 0.747126 5 0.783276 5 0.896705 5 0.942829 5 Name: keep, dtype: int64 </code></pre>
0
2016-08-12T12:34:19Z
[ "python", "pandas" ]
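For completeness, the boundary extraction itself needs neither scipy nor pandas — a single pass over the (timestamp, flag) pairs collects the start and end time of every run of `True`. A sketch with made-up timestamps in the spirit of the question:

```python
def true_runs(times, flags):
    """Return [(t_start, t_end), ...] for each maximal run of True flags."""
    runs = []
    start = last = None
    for t, f in zip(times, flags):
        if f:
            if start is None:     # rising edge
                start = t
            last = t              # remember the latest True timestamp
        elif start is not None:   # falling edge: close the run
            runs.append((start, last))
            start = None
    if start is not None:         # series ended while still True
        runs.append((start, last))
    return runs

times = [0.716538, 0.784453, 0.818417, 0.852379, 0.920301, 0.954259]
flags = [False, True, True, True, True, False]
print(true_runs(times, flags))  # [(0.784453, 0.920301)]
```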
python pandas get index boundaries from a series of Booleans
38,917,076
<p>I am trying to cut videos based on some characteristics. My current strategy yields a <code>pandas</code> series of booleans for each frame, indexed by timestamp. <code>True</code> to keep it, <code>False</code> to dump it.</p> <p>As I plan to cut videos, I need to extract boundaries from this list, so that I can tell ffmpeg the beginning and end of the parts I want to extract from the main video.</p> <p>To sum up:</p> <p>I have a <code>pandas</code> Series which looks like this:</p> <pre><code>acquisitionTs
0.577331     False
0.611298     False
0.645255     False
0.679218     False
0.716538     False
0.784453      True
0.784453      True
0.818417      True
0.852379      True
0.886336      True
0.920301      True
0.954259     False
...
83.393376    False
83.427345    False
dtype: bool
</code></pre> <p>(truncated for presentation reasons, but the timestamp usually begins at 0)</p> <p>and I need to get the boundaries of the <code>True</code> sequences, so in this example I should get <code>[[t_0,t_1],[t_2,t_3], ... [t_2n-1,t_2n]]</code>, with <code>t_0 = 0.784453</code> and <code>t_1 = 0.920301</code> if I have <code>n</code> different sequences of <code>True</code> in my pandas Series.</p> <p>Now this problem seems very simple; in fact you can just shift the sequence by one and make a xor between the two to get a list of booleans with <code>True</code> marking the boundaries:</p> <pre><code>e = df.shift(periods=1, freq=None, axis=0)^df
print(e[e].index)
</code></pre> <p>(with <code>df</code> being a pandas Series). There is still some work to do, like figuring out whether the first element is a rising edge or a falling edge, but this hack works.</p> <p>However, that doesn't seem very pythonic. In fact, the problem is so simple that I believe there must be, somewhere in <code>pandas</code>, <code>numpy</code> or even <code>python</code>, a prebuilt function for this which would fit nicely in a single function call instead of a hack like the above.
The <code>groupby</code> function seems promising, but I have never used it before.</p> <p>What would be the best way of doing this?</p>
0
2016-08-12T11:38:39Z
38,918,167
<p>I would use a Dataframe rather than a Series (it actually works with a Series as well).</p> <pre><code>df

    acquisitionTs  Value
0        0.577331  False
1        0.611298  False
2        0.645255  False
3        0.679218  False
4        0.716538  False
5        0.784453   True
6        0.784453   True
7        0.818417  False
8        0.852379   True
9        0.886336   True
10       0.920301   True
11       0.954259  False
</code></pre> <p>and I would do:</p> <pre><code>df[df.Value.diff().fillna(False)]

   acquisitionTs  Value
5       0.784453   True
7       0.818417  False
8       0.852379   True
11      0.954259  False
</code></pre> <p>Since you know the first value is False here, you know that 0-4 are False and the state then switches at each listed index (5, 7, 8, 11).</p> <p>The <code>groupby</code> function will not help you here, I think, since it will lose the order of your True/False values (you'll have 2 groups instead of 5 in my example).</p>
1
2016-08-12T12:37:46Z
[ "python", "pandas" ]
Join two lists into one dictionary
38,917,113
<p>These are my lists (both lists have the same length - 43 indices):</p> <pre><code>list1 = [u'UMTS', u'UMTS', u'UMTS', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'LTE'] list2 = [u'60000', u'60000', u'60000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'300000'] </code></pre> <p>I would like to join them into one dictionary:</p> <pre><code>dictionary = { 'UMTS' : 'indices from 0 to 2' 'GSM' : 'indices from 3 to 42' 'LTE' : 'index 43' } </code></pre> <p>Does anyone know how to do it? Is it possible at all? Thanks in advance !!!</p>
0
2016-08-12T11:41:15Z
38,917,184
<p>You can use <code>collections.defaultdict()</code> and <code>zip()</code> function:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict &gt;&gt;&gt; &gt;&gt;&gt; d = defaultdict(list) &gt;&gt;&gt; &gt;&gt;&gt; for i, j in zip(list1, list2): ... d[i].append(j) ... &gt;&gt;&gt; d defaultdict(&lt;type 'list'&gt;, {u'UMTS': [u'60000', u'60000', u'60000'], u'GSM': [u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629'], u'LTE': [u'300000']}) </code></pre>
2
2016-08-12T11:45:12Z
[ "python", "python-2.7" ]
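If what you ultimately want is the *index ranges* from the question ("indices from 0 to 2", etc.) rather than the grouped values, `itertools.groupby` over the consecutive labels gives them directly. A sketch on a compact reconstruction of `list1` (note that if a label reappeared after a gap, its later run would overwrite the earlier one in this simple version):

```python
from itertools import groupby

list1 = ['UMTS'] * 3 + ['GSM'] * 39 + ['LTE']

ranges = {}
pos = 0
for key, group in groupby(list1):      # groups consecutive equal labels
    n = len(list(group))
    ranges[key] = (pos, pos + n - 1)   # first and last index of the run
    pos += n
print(ranges)  # {'UMTS': (0, 2), 'GSM': (3, 41), 'LTE': (42, 42)}
```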
Join two lists into one dictionary
38,917,113
<p>These are my lists (both lists have the same length - 43 indices):</p> <pre><code>list1 = [u'UMTS', u'UMTS', u'UMTS', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'LTE'] list2 = [u'60000', u'60000', u'60000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'300000'] </code></pre> <p>I would like to join them into one dictionary:</p> <pre><code>dictionary = { 'UMTS' : 'indices from 0 to 2' 'GSM' : 'indices from 3 to 42' 'LTE' : 'index 43' } </code></pre> <p>Does anyone know how to do it? Is it possible at all? Thanks in advance !!!</p>
0
2016-08-12T11:41:15Z
38,917,295
<p>You can implement it like this:</p> <pre><code>result = {}
for i in set(list1):
    result.update({i:[]})
for i,j in zip(list1,list2):
    result[i].append(j)
</code></pre> <p><strong>Result</strong> </p> <pre><code>{u'UMTS': [u'60000', u'60000', u'60000'], u'GSM': [u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629'], u'LTE': [u'300000']}
</code></pre> <p><strong>Concept</strong></p> <p>Create a dictionary with the possible values from <code>list1</code> as keys and an empty list as each value. By using <code>set</code>, you get each element only once. Then iterate through <code>list1</code> and <code>list2</code> and append each value directly to the corresponding list.</p>
1
2016-08-12T11:51:44Z
[ "python", "python-2.7" ]
Join two lists into one dictionary
38,917,113
<p>These are my lists (both lists have the same length - 43 indices):</p> <pre><code>list1 = [u'UMTS', u'UMTS', u'UMTS', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'GSM', u'LTE'] list2 = [u'60000', u'60000', u'60000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'300000'] </code></pre> <p>I would like to join them into one dictionary:</p> <pre><code>dictionary = { 'UMTS' : 'indices from 0 to 2' 'GSM' : 'indices from 3 to 42' 'LTE' : 'index 43' } </code></pre> <p>Does anyone know how to do it? Is it possible at all? Thanks in advance !!!</p>
0
2016-08-12T11:41:15Z
38,917,516
<p>Tried with a single for loop (initializing <code>mydict</code> first). Isn't this what you want?</p> <pre><code>&gt;&gt;&gt; mydict = {}
&gt;&gt;&gt; for i in range(0,len(list1)):
...     if(mydict.get(list1[i]) is None):
...             mydict[list1[i]]=[list2[i]]
...     else:
...             mydict[list1[i]].append(list2[i])
... 
&gt;&gt;&gt; mydict
{u'UMTS': [u'60000', u'60000', u'60000'], u'GSM': [u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'120512', u'120512', u'120512', u'120512', u'120512', u'120512', u'118629', u'118629', u'118629', u'120000', u'120000', u'120000', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629', u'118629'], u'LTE': [u'300000']}
</code></pre>
1
2016-08-12T12:03:55Z
[ "python", "python-2.7" ]
How can I create a slice object for Numpy array?
38,917,173
<p>I've tried to find a neat solution to this, but I'm slicing several 2D arrays of the same shape in the same manner. I've tidied it up as much as I can by defining a list containing the 'x,y' center e.g. <code>cpix = [161, 134]</code> What I'd like to do is instead of having to write out the slice three times like so:</p> <pre><code>a1 = array1[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50] a2 = array2[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50] a3 = array3[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50] </code></pre> <p>is just have something predefined (like maybe a mask?) so I can just do a </p> <pre><code>a1 = array1[predefined_2dslice] a2 = array2[predefined_2dslice] a3 = array3[predefined_2dslice] </code></pre> <p>Is this something that numpy supports? </p>
2
2016-08-12T11:44:52Z
38,917,247
<p>Yes you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.s_.html" rel="nofollow"><code>numpy.s_</code></a>:</p> <p>Example:</p> <pre><code>&gt;&gt;&gt; a = np.arange(10).reshape(2, 5) &gt;&gt;&gt; &gt;&gt;&gt; m = np.s_[0:2, 3:4] &gt;&gt;&gt; &gt;&gt;&gt; a[m] array([[3], [8]]) </code></pre> <p>And in this case:</p> <pre><code>my_slice = np.s_[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50] a1 = array1[my_slice] a2 = array2[my_slice] a3 = array3[my_slice] </code></pre> <p>You can also use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow"><code>numpy.r_</code></a> in order to translates slice objects to concatenation along the first axis.</p>
4
2016-08-12T11:48:56Z
[ "python", "arrays", "numpy" ]
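A runnable sketch of the `np.s_` idea from the answer above; the centre, half-width and array contents here are made up for illustration (the question uses `cpix = [161, 134]` and a half-width of 50):

```python
import numpy as np

array1 = np.arange(400).reshape(20, 20)
array2 = array1 * 2
cpix = [12, 9]   # illustrative centre
half = 5         # illustrative half-width

# One reusable slice object, usable on every array of the same shape
window = np.s_[cpix[1] - half:cpix[1] + half, cpix[0] - half:cpix[0] + half]

a1 = array1[window]
a2 = array2[window]
```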
Conditionally extracting rows of CSV files?
38,917,226
<p>I have a CSV file that looks like this:</p> <pre class="lang-none prettyprint-override"><code>Germany,1928,Food Iceland,1943,Oil France,1923,Plastics Russia,1901,Steal South Africa,1932,Silver Russia,1905,Gold Brazil,1901,Platinum </code></pre> <p>I want it to search through the first column and pull a row if it hits the word "Russia."</p> <p>This is what my code looks like currently:</p> <pre><code>import csv import sys with open('country.csv', 'rb') as csvfile: data = csv.reader(csvfile) datalist = [] for row in data: if len (row) != 0: datalist = datalist + [row] csvfile.close() column_names = datalist[0] # LIST OF COLUMNS </code></pre> <p>How would I go about pulling the whole row?</p>
-1
2016-08-12T11:47:48Z
38,917,308
<p>Try <a href="http://book.pythontips.com/en/latest/map_filter.html" rel="nofollow">filter</a>:</p> <pre><code>&gt;&gt;&gt; filter(lambda x: x[0] == 'Russia', datalist) [['Russia', '1901', 'Steal'], ['Russia', '1905', 'Gold']] </code></pre>
1
2016-08-12T11:52:13Z
[ "python", "csv" ]
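Note that in Python 3 `filter` returns a lazy iterator rather than a list; a self-contained sketch combining it with the `csv` module (in-memory data standing in for `country.csv`):

```python
import csv
import io

# stands in for open('country.csv')
data = io.StringIO(
    "Germany,1928,Food\n"
    "Russia,1901,Steal\n"
    "South Africa,1932,Silver\n"
    "Russia,1905,Gold\n"
)

# filter() is lazy in Python 3, so materialise the matches with list()
russia_rows = list(filter(lambda row: row[0] == 'Russia', csv.reader(data)))
```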
Insert in sqlite python doesn't change table
38,917,254
<p>I'm creating a simple class for an SQLite database, and when I insert a new row the table doesn't change.</p> <p>DB.py</p> <pre><code>import sqlite3 class DB: def __init__(self, **kwargs): self.db = sqlite3.connect('passwods.db') self.c = self.db.cursor() self.c.execute('CREATE TABLE IF NOT EXISTS passwords (name, value)') def insert(self, alias, cipher): column = (alias, cipher) self.c.execute('INSERT INTO passwords (name, value) VALUES (?,?)', column) self.db.commit() def get(self, alias): pk = (alias,) self.c.execute('SELECT * FROM passwords WHERE name=?', pk) def getAll(self): self.c.execute('SELECT * FROM passwords') </code></pre> <p>Interactive shell</p> <pre><code>&gt;&gt;&gt; from DB import DB &gt;&gt;&gt; db = DB() &gt;&gt;&gt; db.insert('firstName', 'firstValue') &gt;&gt;&gt; print(db.getAll()) None &gt;&gt;&gt; </code></pre>
2
2016-08-12T11:49:18Z
38,917,917
<p>Your method <code>getAll</code> has no return statement. If you add it, you can see that the table actually changes:</p> <pre><code>def getAll(self): self.c.execute("SELECT * FROM passwords") return self.c.fetchall() </code></pre>
1
2016-08-12T12:23:30Z
[ "python", "sqlite" ]
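A trimmed, runnable version of the class with the missing `return` added, as the answer suggests; it uses an in-memory database instead of the question's file so the example is self-contained:

```python
import sqlite3

class DB:
    def __init__(self):
        # ':memory:' keeps the example self-contained; the question uses a file
        self.db = sqlite3.connect(':memory:')
        self.c = self.db.cursor()
        self.c.execute('CREATE TABLE IF NOT EXISTS passwords (name, value)')

    def insert(self, alias, cipher):
        self.c.execute('INSERT INTO passwords (name, value) VALUES (?, ?)',
                       (alias, cipher))
        self.db.commit()

    def get_all(self):
        self.c.execute('SELECT * FROM passwords')
        # Without this return, the method falls off the end and yields None
        return self.c.fetchall()

db = DB()
db.insert('firstName', 'firstValue')
```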
BeautifulSoup: 'ResultSet' object has no attribute 'find_all'
38,917,302
<pre><code>from bs4 import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.find_all('table', class_='data2_s') rows = table.find_all('tr') print rows rows = table.find_all('tr') AttributeError: 'ResultSet' object has no attribute 'find_all' </code></pre> <p>I want to grasp the table into a CSV file. How to go ahead?</p> <p>This is the table:</p> <pre><code>[&lt;table class="data2_s"&gt;&lt;caption class="m"&gt;WAKKANAI\xa0\xa0\xa0WMO Station ID:47401\xa0Lat\xa045&lt;sup&gt;o&lt;/sup&gt;24.9'N\xa0\xa0Lon\xa0141&lt;sup&gt;o&lt;/sup&gt;40.7'E&lt;/caption&gt;&lt;tr&gt;&lt;th scope="col"&gt;Year&lt;/th&gt;&lt;th scope="col"&gt;Jan&lt;/th&gt;&lt;th scope="col"&gt;Feb&lt;/th&gt;&lt;th scope="col"&gt;Mar&lt;/th&gt;&lt;th scope="col"&gt;Apr&lt;/th&gt;&lt;th scope="col"&gt;May&lt;/th&gt;&lt;th scope="col"&gt;Jun&lt;/th&gt;&lt;th scope="col"&gt;Jul&lt;/th&gt;&lt;th scope="col"&gt;Aug&lt;/th&gt;&lt;th scope="col"&gt;Sep&lt;/th&gt;&lt;th scope="col"&gt;Oct&lt;/th&gt;&lt;th scope="col"&gt;Nov&lt;/th&gt;&lt;th scope="col"&gt;Dec&lt;/th&gt;&lt;th scope="col"&gt;Annual&lt;/th&gt;&lt;/tr&gt;&lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1938&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;22.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;10.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.3&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.8&lt;/td&gt;&lt;/tr&gt;\n&lt;tr class="mtx" 
style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1939&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-1.4&lt;/td&gt;&lt;td] </code></pre>
0
2016-08-12T11:51:56Z
38,917,836
<p>Try this:</p> <pre><code>rows = table[0].find_all('tr') </code></pre> <p>Since <code>find_all</code> appears to return a Python <code>list</code>, you were trying to call <code>find_all</code> on a <code>list</code>, which doesn't have such a method.</p> <p>To get your resulting list(<code>rows</code>) into CSV format.. well, that depends. If you just want it on screen, you could do:</p> <pre><code>','.join(rows) </code></pre> <p>If you wanted it in a file, you would need to open a file and write the above line to it. But there are also Python modules that deal with creating CSV content. It's a different question really.</p>
0
2016-08-12T12:19:49Z
[ "python", "csv", "beautifulsoup" ]
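A minimal runnable illustration of the fix: `find_all` returns a `ResultSet` (a list of `Tag` objects), so index into it before calling `find_all` again. The HTML snippet is a shortened stand-in for the weather table in the question:

```python
from bs4 import BeautifulSoup

# Shortened stand-in for the question's weather table
html = ('<table class="data2_s">'
        '<tr><th>Year</th><th>Jan</th></tr>'
        '<tr><td>1938</td><td>-5.2</td></tr>'
        '</table>')

soup = BeautifulSoup(html, 'html.parser')
tables = soup.find_all('table', class_='data2_s')  # ResultSet: a list of Tags
rows = tables[0].find_all('tr')                    # index first, then find_all
cells = [td.get_text() for td in rows[1].find_all('td')]
```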
Recursively searching for specific filenames in Python 3.x
38,917,455
<p>I am currently using Python 3.x. I am looking to recursively search directories for two specific filenames. I know that each of the two filenames will exist in some of the directories. If one of the files exists, the other will. Once the files are identified, I then want to extract them to a list but somehow link them so that I can process them together later on as I'll want to extract data from each of the two files and analyse that data and then, do the same but for the same files from a different directory if that makes sense. So, in <em>C:\Desktop</em> and <em>C:\MyDocuments</em> for example, each directory will contain the two filenames I'm looking to identify (manifest.plist and info.plist)</p> <p>So far, I have the following which searches based on the file extension and not on the filename:-</p> <pre><code>def find(pattern, path): result = [] for root, dirs, files in os.walk(path): for name in files: if fnmatch.fnmatch(name, pattern): result.append(os.path.join(root, name)) if __name__=='__main__': find("*.plist", test_path) </code></pre> <p>The exact filenames are <em>manifest.plist</em> and <em>info.plist</em>. </p> <p>The above approach works well but it takes an age as it works through the thousands of files in each directory.</p> <p>Is there a way to quickly look for the files based on their specific names and likewise, how am I best to link the two files from each directory in a list? I'm thinking of making <em>result[]</em> list to contain tuples with each tuple containing the paths to the respective <em>info.plist</em> and <em>manifest.plist</em></p> <p>Thanks all</p>
2
2016-08-12T12:00:22Z
38,917,608
<p>You should use the <a href="https://docs.python.org/3/library/glob.html#glob.iglob"><code>glob</code></a> module for that, specifically <code>glob.iglob(pathname, recursive=True)</code> for large directories.</p>
5
2016-08-12T12:08:38Z
[ "python", "python-3.x" ]
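A self-contained sketch of the recursive `glob` approach from the answer, including the pairing into tuples the question asks about; it builds a throwaway directory tree so it can run anywhere (the directory names are made up):

```python
import glob
import os
import tempfile

# Throwaway directory tree standing in for the real folders
root = tempfile.mkdtemp()
for sub in (('backup_a',), ('backup_b', 'nested')):
    d = os.path.join(root, *sub)
    os.makedirs(d)
    for name in ('manifest.plist', 'info.plist'):
        open(os.path.join(d, name), 'w').close()

# recursive=True lets ** match directories at any depth (Python 3.5+)
pattern = os.path.join(root, '**', 'manifest.plist')
manifests = sorted(glob.iglob(pattern, recursive=True))

# The two files always live together, so derive each partner path directly
pairs = [(m, os.path.join(os.path.dirname(m), 'info.plist'))
         for m in manifests]
```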
Iterating through a list of Pandas DF's to then iterate through each DF's row
38,917,604
<p>This may be a slightly insane question... I've got a single Pandas DF of articles which I have then split into multiple DF's so each DF only contains the articles from a particular year. I have then put these variables into a list called <code>box_of_years</code>.</p> <pre><code>indexed_df = article_db.set_index('date') indexed_df = indexed_df.sort_index() year_2004 = indexed_df.truncate(before='2004-01-01', after='2004-12-31') year_2005 = indexed_df.truncate(before='2005-01-01', after='2005-12-31') year_2006 = indexed_df.truncate(before='2006-01-01', after='2006-12-31') year_2007 = indexed_df.truncate(before='2007-01-01', after='2007-12-31') year_2008 = indexed_df.truncate(before='2008-01-01', after='2008-12-31') year_2009 = indexed_df.truncate(before='2009-01-01', after='2009-12-31') year_2010 = indexed_df.truncate(before='2010-01-01', after='2010-12-31') year_2011 = indexed_df.truncate(before='2011-01-01', after='2011-12-31') year_2012 = indexed_df.truncate(before='2012-01-01', after='2012-12-31') year_2013 = indexed_df.truncate(before='2013-01-01', after='2013-12-31') year_2014 = indexed_df.truncate(before='2014-01-01', after='2014-12-31') year_2015 = indexed_df.truncate(before='2015-01-01', after='2015-12-31') year_2016 = indexed_df.truncate(before='2016-01-01', after='2016-12-31') box_of_years = [year_2004, year_2005, year_2006, year_2007, year_2008, year_2009, year_2010, year_2011, year_2012, year_2013, year_2014, year_2015, year_2016] </code></pre> <p>I've written various functions to tokenize, clean up and convert the tokens into a <code>FreqDist</code> object and wrapped those up into a single function called <code>year_prep()</code>. This works fine when I do</p> <pre><code>year_2006 = year_prep(year_2006) </code></pre> <p>...but is there a way I can iterate across every year variable, apply the function and have it transform the same variable, short of just repeating the above for every year? 
</p> <p>I know repeating myself would be the simplest way, but not necessarily the cleanest. I may perhaps have this backwards and do the slicing later on but at that point I feel like the layers of lists will be out of hand as I'm going from a list of years to a list of years, containing a list of articles, containing a list of every word in the article.</p>
2
2016-08-12T12:08:11Z
38,917,870
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.year.html" rel="nofollow"><code>year</code></a> with custom function:</p> <pre><code>import pandas as pd start = pd.to_datetime('2004-02-24') rng = pd.date_range(start, periods=30, freq='50D') df = pd.DataFrame({'Date': rng, 'a':range(30)}) #print (df) def f(x): print (x) #return year_prep(x) #some custom output return x.a + x.Date.dt.month print (df.groupby(df['Date'].dt.year).apply(f)) </code></pre>
2
2016-08-12T12:21:26Z
[ "python", "pandas", "nltk" ]
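A runnable sketch of the `groupby`-by-year idea from the answer; `year_prep` here is a trivial stand-in for the real tokenise/clean/FreqDist pipeline, and the data is invented:

```python
import pandas as pd

article_db = pd.DataFrame({
    'date': pd.to_datetime(['2004-03-01', '2004-07-15', '2005-01-20']),
    'text': ['alpha', 'beta', 'gamma'],
})

def year_prep(frame):
    # trivial stand-in for the tokenise/clean/FreqDist pipeline
    return frame['text'].str.upper()

# One entry per year, no year_2004/year_2005/... variables needed
by_year = {year: year_prep(group)
           for year, group in article_db.groupby(article_db['date'].dt.year)}
```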
Change text color of kivy ListView ListItemButton
38,917,678
<p>I'm trying to change the color of the text displayed on each of my buttons in a python kivy ListView object.</p> <p>My kivy code looks as follows (nothing much in my python code):</p> <pre><code>#:kivy 1.9.1 #:import BoxLayout kivy.uix.boxlayout.BoxLayout #:import ListAdapter kivy.adapters.listadapter.ListAdapter #:import ListView kivy.uix.listview #:import ListItemButton kivy.uix.listview.ListItemButton &lt;ListItemButton&gt;: selected_color: 200, 200, 200, 1 deselected_color: 0, 0, 0, 1 BoxLayout: ListView: adapter: ListAdapter( data=['Home', 'Work', 'Other', 'Custom'], cls=ListItemButton, selection_mode='multiple', allow_empty_selection=True, ) size_hint: (None, None) size: (100, 44) pos_hint: {'center_x': .5, 'center_y': .5} </code></pre> <p>I'm able to successfully change the color of the button when it's selected/deselected (using <code>&lt;ListItemButton&gt;</code>), but I can't seem to find any clear explanation as to how to change the color of the text itself.</p> <p><code>Markup: True</code> would be useful, but I also can't get that to work.</p> <p>Thanks!</p>
0
2016-08-12T12:11:42Z
38,917,882
<p>Found an answer:</p> <p><a href="https://groups.google.com/forum/#!topic/kivy-users/LcLj02Qd2PY" rel="nofollow">https://groups.google.com/forum/#!topic/kivy-users/LcLj02Qd2PY</a></p> <pre><code>&lt;ListItemButton&gt;: selected_color: 200, 200, 200, 1 deselected_color: 0, 0, 0, 1 color: 0, 0, 0, 1 </code></pre>
0
2016-08-12T12:21:51Z
[ "python", "listview", "kivy", "kivy-language" ]
running file opening dialog of server using tkFileDialog on client
38,917,741
<p>I am trying to make a simple system to transfer a file from server to client. I want to show the client a file open dialog using tkFileDialog. The problem is when I run the client &amp; server the dialog box gets opened in server rather than client. I thought about send the object of tkFileDialog via </p> <p><code>con.send(str(tkFileDialog.askopenfilename(initialdir='~', title='Choose a text file/program')))</code></p> <p>If someone could help me out with this it will be really helpful or if someone has a better idea to open server filesystem in client without ssh</p> <p>The full code is </p> <p>server.py:</p> <pre><code>import socket,os import Tkinter,tkFileDialog,tkMessageBox def startServer(portName,ip): if portName != '' and ip != '': tkMessageBox.showinfo('Server Started','Server running!!! ok will be enabled after transfer') port = int(portName) ipName = ip sd = socket.socket() sd.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) #reuse socket sd.bind((ipName, port)) sd.listen(50) con, addr = sd.accept() print ' connection from ' + str(addr) con.send(tkMessageBox.showinfo('connected', 'Connection Successful')) while True: con.send(str(tkFileDialog.askopenfilename(initialdir='~', title='Choose a text file/program'))) fileN = con.recv(1024) if os.path.isfile(fileN): con.send(tkMessageBox.showinfo('Process completed', 'Rerun server to transfer again')) con.send('exists') fileN = str(fileN) #read contents fd = open(fileN, 'r') buff = fd.read() print buff print len(buff) fd.close() #send contents con.send(str(len(buff)-1)) print con.recv(14) #acknowledgement of length received con.send(buff) break else: con.send(tkMessageBox.showerror('Failed', 'Select appropriate file')) con.send('ne') sd.close() else: tkMessageBox.showerror('Failed','Failed to start server. 
Give appropriate inputs') def main(): root = Tkinter.Tk() root.geometry('300x200') root.title('Server') ipLabel = Tkinter.Label(root,text='\nEnter IP Address of server\n') portLabel = Tkinter.Label(root, text='\nEnter Port Address of server\n') ipEntry = Tkinter.Entry(root) portEntry = Tkinter.Entry(root) connectButton = Tkinter.Button(root, text='Run', command=lambda: startServer(portEntry.get(), ipEntry.get()) ) ipLabel.pack() ipEntry.pack() portLabel.pack() portEntry.pack() connectButton.pack() root.mainloop() main() </code></pre> <p>the client.py:</p> <pre><code>import socket import Tkinter,tkMessageBox def establishConnection(ipEntry,portEntry,root): if ipEntry != '' and portEntry != '': port = int(portEntry) ipName = str(ipEntry) sd = socket.socket() sd.connect((ipName, port)) # ack if connection established sd.recv(1024) while True: #file select dialog fileN = sd.recv(1024) sd.send(fileN) #send file name to open sd.recv(1024) #dialog after file selected or not fileAck = sd.recv(6) #acknowledge if file correct if fileAck == 'exists': leng = int(sd.recv(10)) sd.send('length recieved') buff = str(sd.recv(leng)) saveLabel = Tkinter.Label(root, text='\n\nSave with Name\n') saveNameEntry = Tkinter.Entry(root) saveButton = Tkinter.Button(root, text='SAVE', command=lambda: saveFile(saveNameEntry.get(), buff, root)) saveLabel.pack() saveNameEntry.pack() saveButton.pack() break sd.close() def saveFile(fileN, buff, root): fd = open(fileN, 'w') fd.write(buff) fd.close() tkMessageBox.showinfo('Operation Complete','file saved as '+fileN) root.destroy() def main(): root = Tkinter.Tk() root.geometry('300x600') root.title('Client') ipLabel = Tkinter.Label(root,text='\nEnter IP Address of server\n') portLabel = Tkinter.Label(root, text='\nEnter Port Address of server\n') ipEntry = Tkinter.Entry(root) portEntry = Tkinter.Entry(root) connectButton = Tkinter.Button(root, text='Connect', command=lambda: establishConnection(ipEntry.get(), portEntry.get(),root) ) ipLabel.pack() 
ipEntry.pack() portLabel.pack() portEntry.pack() connectButton.pack() root.mainloop() main() </code></pre>
0
2016-08-12T12:14:43Z
38,919,540
<p>The file dialog works only for the system that opens it. You can't use it to pick files on a remote system. </p>
0
2016-08-12T13:45:53Z
[ "python", "python-2.7", "sockets", "tkinter", "operating-system" ]
Understanding VIew evaluation in Django
38,917,766
<p>I build web application with Django REST Framework. There is one simple view, which return reference Information with db fields.<br> <code>resources.py:</code></p> <pre><code>RESOURCES = { 'genres': GenreSerializer(Genre.objects.all(), many=True).data, 'authors': AuthorSerializer(Author.objects.all(), many=True).data, ...... } class ResourceApiView(views.APIView): def get(self, request): params = request.query_params response_dict = {} if params: # Return RESOURSES based on query params for i in params: q = RESOURCES.get(i) if q: response_dict[i] = q else: # Return all RESOURSES response_dict = RESOURCES return Response(response_dict, status=status.HTTP_200_OK ) </code></pre> <p>It works fine, but when I add new object to one the resources querysets. Nothing happens, it show old queries.<br> I tried printed <code>RESOURSES</code> in my module, it printed once and other get requests don't trigger it.<br> Then I move <code>RESOURSES</code> directly in class <code>ResourceApiView</code> and it's behavior same like when <code>RESOURSES</code> where in module. </p> <pre><code>class ResourceApiView(views.APIView): RESOURCES = { 'genres': GenreSerializer(Genre.objects.all(), many=True).data, 'authors': AuthorSerializer(Author.objects.all(), many=True).data, ...... } def get(self, request): ... </code></pre> <p>It work fine only when I put <code>RESOURSES</code> in <code>get</code> method. </p> <pre><code>class ResourceApiView(views.APIView): def get(self, request): RESOURCES = { 'genres': GenreSerializer(Genre.objects.all(), many=True).data, 'authors': AuthorSerializer(Author.objects.all(), many=True).data, ...... } </code></pre> <p>But why is it happening? Why I can't evaluate queries from class attributes for each method call? </p>
1
2016-08-12T12:15:55Z
38,918,358
<p>This is more related to Python than to Django. Let's say you have a file <code>lib.py</code></p> <pre><code>def say_hello(): print "hello" GREETINGS = { "hello": say_hello() } </code></pre> <p>now go to another python file (or the shell) and just import your <code>lib.py</code>, you'll print "hello" to the console because when you import the file it starts resolving the code inside, so it's creating the GREETINGS variable (RESOURCES in your case) and calling the say_hello() method; for you it's executing the query. However Python is smart enough that if you import the file again it will remember that you just imported it before, so it won't load the module again, because it already has the module reference saved.</p> <p>Your query is being executed once when the view was first loaded, and reimporting the view won't make the reference change.</p> <p>The same goes for placing RESOURCES as a class attribute. The code was executed when the class was imported (again you can test it by creating a class in the <code>lib.py</code> example)</p> <p>hope this clarifies :) but maybe the docs explain it better <a href="https://docs.python.org/2/tutorial/modules.html" rel="nofollow">https://docs.python.org/2/tutorial/modules.html</a></p> <p><strong>Note:</strong> I think that the <code>.data</code> on the serializer is actually executing the query. Without it your query and the serializer would just be stored as a reference, because the ORM is lazy. Change your RESOURCES to improve the performance of your endpoint, because right now if you request one single resource (e.g. 'authors') it's still executing ALL the queries ('authors', 'genres', etc.)</p>
2
2016-08-12T12:47:17Z
[ "python", "django", "django-rest-framework" ]
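The import-time-versus-call-time behaviour described above can be demonstrated without Django at all; `run_query` here is a stand-in for the serializer's `.data` evaluation:

```python
calls = []

def run_query():
    # stand-in for GenreSerializer(Genre.objects.all(), many=True).data
    calls.append(1)
    return len(calls)

# Module/class level: the right-hand side is evaluated exactly once, at import
SNAPSHOT = {'genres': run_query()}

def get_view():
    # Function body: re-evaluated on every call, like the working version
    return {'genres': run_query()}

first = get_view()
second = get_view()
```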
How to understand how workers are being used up in gunicorn
38,917,847
<p>I basically would like to know how gunicorn workers work. I have a server with 4 workers on a machine, with 4GB RAM, and 2 CPU and having nginx as the frontend serving requests and reverse proxy. I have simultaneous requests being sent to the server. </p> <ol> <li><p>I wish to know how the workers are being used up, if they are four requests, are they load balanced across the four workers as If 1 requests for each 1 worker?</p></li> <li><p>Also how to check how much memory is used up for each worker? I have set the max requests to 100. So using this 100 max requests. will it reload the entire 4 workers even if 1 worker has reached 100 requests. </p></li> <li><p>How to get more insight of the workers and how the workers memory and no of requests currently in each worker. </p></li> </ol>
0
2016-08-12T12:20:18Z
38,919,375
<p>Short answer: Depends on the worker type and gunicorn configuration.</p> <p>Long answer:</p> <ol> <li>Yes, as long as there are workers available. When gunicorn is started, the <code>-w</code> option configures the number of workers, implementation of which varies depending on worker type. Some worker types use threads, others use event loops and are asynchronous. Performance varies depending on the type - in general async via event loop is preferred as it is lighter on resources and more performant.</li> <li>Each worker is forked from the main gunicorn process. Memory use can be seen with <code>ps</code> thread output, for example <code>ps -fL -p &lt;gunicorn pid&gt;</code>. Max connections is per worker from documentation so only the worker that reaches 100 connections will be reloaded.</li> <li>There is <a href="https://github.com/m0n5t3r/gstats" rel="nofollow">a stats collecting library for gunicorn</a> though I have not used it myself.</li> </ol>
1
2016-08-12T13:37:30Z
[ "python", "linux", "nginx", "flask", "gunicorn" ]
How indexing works in Pandas?
38,917,945
<p>I am new to python. This seems like a basic question to ask. But I really want to understand what is happening here </p> <pre><code>import numpy as np import pandas as pd tempdata = np.random.random(5) myseries_one = pd.Series(tempdata) myseries_two = pd.Series(data = tempdata, index = ['a','b','c','d','e']) myseries_three = pd.Series(data = tempdata, index = [10,11,12,13,14]) myseries_one Out[1]: 0 0.291293 1 0.381014 2 0.923360 3 0.271671 4 0.605989 dtype: float64 myseries_two Out[2]: a 0.291293 b 0.381014 c 0.923360 d 0.271671 e 0.605989 dtype: float64 myseries_three Out[3]: 10 0.291293 11 0.381014 12 0.923360 13 0.271671 14 0.605989 dtype: float64 </code></pre> <p>Indexing first element from each dataframe </p> <pre><code>myseries_one[0] #As expected Out[74]: 0.29129291112626043 myseries_two[0] #As expected Out[75]: 0.29129291112626043 myseries_three[0] KeyError:0 </code></pre> <p>Doubt1 :-Why this is happenening ? Why myseries_three[0] gives me a keyError ? what we meant by calling myseries_one[0] , myseries_one[0] or myseries_three[0] ? Does calling this way mean we are calling by rownames ? </p> <p>Doubt2 :-Is rownames and rownumber in Python works as different as rownames and rownumber in R ?</p> <pre><code>myseries_one[0:2] Out[78]: 0 0.291293 1 0.381014 dtype: float64 myseries_two[0:2] Out[79]: a 0.291293 b 0.381014 dtype: float64 myseries_three[0:2] Out[80]: 10 0.291293 11 0.381014 dtype: float64 </code></pre> <p>Doubt3:- If calling myseries_three[0] meant calling by rownames then how myseries_three[0:3] producing the output ? does myseries_three[0:4] mean we are calling by rownumber ? Please explain and guide. I am migrating from R to python. so its a bit confusing for me.</p>
6
2016-08-12T12:25:27Z
38,920,717
<p>When you are attempting to slice with <code>myseries[something]</code>, the <code>something</code> is often ambiguous. You are highlighting a case of that ambiguity. In your case, pandas is trying to help you out by guessing what you mean.</p> <pre><code>myseries_one[0] #As expected Out[74]: 0.29129291112626043 </code></pre> <p><code>myseries_one</code> has integer labels. It would make sense that when you attempt to slice with an integer that you intend to get the element that is labeled with that integer. It turns out, that you have an element labeled with <code>0</code> an so that is returned to you.</p> <pre><code>myseries_two[0] #As expected Out[75]: 0.29129291112626043 </code></pre> <p><code>myseries_two</code> has string labels. It's highly unlikely that you meant to slice this series with a label of <code>0</code> when labels are all strings. So, pandas assumes that you meant a position of <code>0</code> and returns the first element (thanks pandas, that was helpful).</p> <pre><code>myseries_three[0] KeyError:0 </code></pre> <p><code>myseries_three</code> has integer labels and you are attempting to slice with an integer... perfect. Let's just get that value for you... <code>KeyError</code>. Whoops, that index label does not exist. In this case, it is safer for pandas to fail than to guess that maybe you meant to slice by position. The documentation even suggests that if you want to remove the ambiguity, use <code>loc</code> for label based slicing and <code>iloc</code> for position based slicing.</p> <p>Let's try <code>loc</code></p> <pre><code>myseries_one.loc[0] 0.29129291112626043 myseries_two.loc[0] KeyError:0 myseries_three.loc[0] KeyError:0 </code></pre> <p>Only <code>myseries_one</code> has a label <code>0</code>. 
The other two return <code>KeyError</code>s</p> <p>Let's try <code>iloc</code></p> <pre><code>myseries_one.iloc[0] 0.29129291112626043 myseries_two.iloc[0] 0.29129291112626043 myseries_three.iloc[0] 0.29129291112626043 </code></pre> <p>They all have a position of <code>0</code> and return the first element accordingly.</p> <hr> <p>For the range slicing, pandas decides to be less interpretive and sticks to positional slicing for the integer slice <code>0:2</code>. Keep in mind. Actual real people (the programmers writing pandas code) are the ones making these decisions. When you are attempting to do something that is ambiguous, you may get varying results. To remove ambiguity, use <code>loc</code> and <code>iloc</code>.</p> <p><strong><em><code>iloc</code></em></strong></p> <pre><code>myseries_one.iloc[0:2] 0 0.291293 1 0.381014 dtype: float64 myseries_two.iloc[0:2] a 0.291293 b 0.381014 dtype: float64 myseries_three.iloc[0:2] 10 0.291293 11 0.381014 dtype: float64 </code></pre> <p><strong><em><code>loc</code></em></strong></p> <pre><code>myseries_one.loc[0:2] 0 0.291293 1 0.381014 2 0.923360 dtype: float64 myseries_two.loc[0:2] TypeError: cannot do slice indexing on &lt;class 'pandas.indexes.base.Index'&gt; with these indexers [0] of &lt;type 'int'&gt; myseries_three.loc[0:2] Series([], dtype: float64) </code></pre>
5
2016-08-12T14:44:22Z
[ "python", "pandas" ]
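A compact, runnable summary of the `loc`/`iloc` distinction discussed above, using a short series whose integer labels do not match positions, like `myseries_three`:

```python
import pandas as pd

# Integer labels that do not match positions, like myseries_three
s = pd.Series([0.29, 0.38, 0.92], index=[10, 11, 12])

by_label = s.loc[10]     # label-based: the row labelled 10
by_position = s.iloc[0]  # position-based: the first row

label_slice = s.loc[10:11]   # label slicing includes both endpoints
pos_slice = s.iloc[0:2]      # positional slicing excludes the stop index
```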
Convert HTML into CSV
38,917,958
<p>I want to convert a HTML table as obtained from the script below into a CSV file, but got type error as follows:</p> <blockquote> <p>TypeError: sequence item 0: expected string, Tag found</p> </blockquote> <pre><code>from bs4 import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.find_all('table', class_='data2_s') rows = table[0].find_all('tr') </code></pre> <p>How is the easiest way to convert it into a CSV file? I tried as:</p> <pre><code>fo = open('fo.txt','w') for r in rows: fo.write(str(r.txt) + '\n') fo.close() </code></pre> <p>but it wrote 'none'</p> <p>The HTML is like this:</p> <pre><code>&lt;table class="data2_s"&gt;&lt;caption class="m"&gt;WAKKANAI   WMO Station ID:47401 Lat 45&lt;sup&gt;o&lt;/sup&gt;24.9'N  Lon 141&lt;sup&gt;o&lt;/sup&gt;40.7'E&lt;/caption&gt;&lt;tr&gt;&lt;th scope="col"&gt;Year&lt;/th&gt;&lt;th scope="col"&gt;Jan&lt;/th&gt;&lt;th scope="col"&gt;Feb&lt;/th&gt;&lt;th scope="col"&gt;Mar&lt;/th&gt;&lt;th scope="col"&gt;Apr&lt;/th&gt;&lt;th scope="col"&gt;May&lt;/th&gt;&lt;th scope="col"&gt;Jun&lt;/th&gt;&lt;th scope="col"&gt;Jul&lt;/th&gt;&lt;th scope="col"&gt;Aug&lt;/th&gt;&lt;th scope="col"&gt;Sep&lt;/th&gt;&lt;th scope="col"&gt;Oct&lt;/th&gt;&lt;th scope="col"&gt;Nov&lt;/th&gt;&lt;th scope="col"&gt;Dec&lt;/th&gt;&lt;th scope="col"&gt;Annual&lt;/th&gt;&lt;/tr&gt;&lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1938&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;22.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.5&lt;/td&gt;&lt;td 
class="data_0_0_0_0"&gt;10.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.3&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.8&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1939&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-1.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;13.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;20.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.2&lt;/td&gt;&lt;/tr&gt; </code></pre>
1
2016-08-12T12:26:26Z
38,918,435
<p>Use the <code>csv</code> module from Python to do this. You can obviously write more columns if you want, but the idea is that you're writing a <code>list</code> to the csv file. There are other options that you can specify in the <code>writer()</code> method if you'd like to quote things, escape things, etc.</p> <pre><code>import csv with open('your_csv_name.csv', 'w') as o: w = csv.writer(o) # Headers w.writerow(['tr_content']) # Write the tr text for r in rows: w.writerow([r]) </code></pre>
1
2016-08-12T12:51:22Z
[ "python", "csv", "beautifulsoup" ]
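A sketch of the `csv.writer` approach, writing per-cell strings rather than whole `Tag` objects (passing Tags is what triggered the original `TypeError`); the rows here are hard-coded stand-ins for the `td.get_text()` values of each `tr`:

```python
import csv
import io

# Hard-coded stand-ins for the per-cell td.get_text() values of each row
rows = [
    ['Year', 'Jan', 'Feb'],
    ['1938', '-5.2', '-4.9'],
    ['1939', '-7.5', '-6.6'],
]

out = io.StringIO()          # in-memory file; a real script would open a path
writer = csv.writer(out)
writer.writerows(rows)       # one CSV line per row, handling quoting for you
csv_text = out.getvalue()
```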
Convert HTML into CSV
38,917,958
<p>I want to convert a HTML table as obtained from the script below into a CSV file, but got type error as follows:</p> <blockquote> <p>TypeError: sequence item 0: expected string, Tag found</p> </blockquote> <pre><code>from bs4 import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.find_all('table', class_='data2_s') rows = table[0].find_all('tr') </code></pre> <p>How is the easiest way to convert it into a CSV file? I tried as:</p> <pre><code>fo = open('fo.txt','w') for r in rows: fo.write(str(r.txt) + '\n') fo.close() </code></pre> <p>but it wrote 'none'</p> <p>The HTML is like this:</p> <pre><code>&lt;table class="data2_s"&gt;&lt;caption class="m"&gt;WAKKANAI   WMO Station ID:47401 Lat 45&lt;sup&gt;o&lt;/sup&gt;24.9'N  Lon 141&lt;sup&gt;o&lt;/sup&gt;40.7'E&lt;/caption&gt;&lt;tr&gt;&lt;th scope="col"&gt;Year&lt;/th&gt;&lt;th scope="col"&gt;Jan&lt;/th&gt;&lt;th scope="col"&gt;Feb&lt;/th&gt;&lt;th scope="col"&gt;Mar&lt;/th&gt;&lt;th scope="col"&gt;Apr&lt;/th&gt;&lt;th scope="col"&gt;May&lt;/th&gt;&lt;th scope="col"&gt;Jun&lt;/th&gt;&lt;th scope="col"&gt;Jul&lt;/th&gt;&lt;th scope="col"&gt;Aug&lt;/th&gt;&lt;th scope="col"&gt;Sep&lt;/th&gt;&lt;th scope="col"&gt;Oct&lt;/th&gt;&lt;th scope="col"&gt;Nov&lt;/th&gt;&lt;th scope="col"&gt;Dec&lt;/th&gt;&lt;th scope="col"&gt;Annual&lt;/th&gt;&lt;/tr&gt;&lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1938&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;22.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.5&lt;/td&gt;&lt;td 
class="data_0_0_0_0"&gt;10.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.3&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.8&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1939&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-1.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;13.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;20.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.2&lt;/td&gt;&lt;/tr&gt; </code></pre>
1
2016-08-12T12:26:26Z
38,918,560
<p>Here is another way without using the <code>csv</code> module:</p> <pre><code>fp = open('data.csv','w') for row in rows[:-1]: # Removed the last row as it has empty cells that give an error (this can also be resolved if needed) fp.write(row.get_text(',') + '\n') fp.close() </code></pre> <p>You can open the data.csv file directly.</p> <p>The station details can be obtained with the command below:</p> <pre><code>&gt;&gt;&gt; table = soup.find_all('table', class_='data2_s') &gt;&gt;&gt; print table[0].find_all('caption')[0].get_text().encode('ascii', 'ignore') WAKKANAI WMO Station ID:47401 Lat 45o24.9'N Lon 141o40.7'E </code></pre> <p>Hope this helps.</p>
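One caveat worth adding (not part of the original answer): joining cell text with `get_text(',')` produces a malformed line if any cell itself contains a comma. The `csv` module quotes such cells automatically; a minimal stdlib sketch (Python 3 shown, the sample cells are made up):

```python
import csv
import io

rows = [["Year", "Note"], ["1938", "warm, dry"]]  # hypothetical cells; one contains a comma
buf = io.StringIO()
csv.writer(buf).writerows(rows)

# The comma-containing cell is quoted, so the data survives a round trip
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
print(parsed)  # [['Year', 'Note'], ['1938', 'warm, dry']]
```

A naive `','.join(...)` of the same cells would split "warm, dry" into two fields on read-back.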
0
2016-08-12T12:57:11Z
[ "python", "csv", "beautifulsoup" ]
Convert HTML into CSV
38,917,958
<p>I want to convert an HTML table as obtained from the script below into a CSV file, but got a type error as follows:</p> <blockquote> <p>TypeError: sequence item 0: expected string, Tag found</p> </blockquote> <pre><code>from bs4 import BeautifulSoup import urllib2 url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.find_all('table', class_='data2_s') rows = table[0].find_all('tr') </code></pre> <p>What is the easiest way to convert it into a CSV file? I tried:</p> <pre><code>fo = open('fo.txt','w') for r in rows: fo.write(str(r.txt) + '\n') fo.close() </code></pre> <p>but it wrote 'none'</p> <p>The HTML is like this:</p> <pre><code>&lt;table class="data2_s"&gt;&lt;caption class="m"&gt;WAKKANAI   WMO Station ID:47401 Lat 45&lt;sup&gt;o&lt;/sup&gt;24.9'N  Lon 141&lt;sup&gt;o&lt;/sup&gt;40.7'E&lt;/caption&gt;&lt;tr&gt;&lt;th scope="col"&gt;Year&lt;/th&gt;&lt;th scope="col"&gt;Jan&lt;/th&gt;&lt;th scope="col"&gt;Feb&lt;/th&gt;&lt;th scope="col"&gt;Mar&lt;/th&gt;&lt;th scope="col"&gt;Apr&lt;/th&gt;&lt;th scope="col"&gt;May&lt;/th&gt;&lt;th scope="col"&gt;Jun&lt;/th&gt;&lt;th scope="col"&gt;Jul&lt;/th&gt;&lt;th scope="col"&gt;Aug&lt;/th&gt;&lt;th scope="col"&gt;Sep&lt;/th&gt;&lt;th scope="col"&gt;Oct&lt;/th&gt;&lt;th scope="col"&gt;Nov&lt;/th&gt;&lt;th scope="col"&gt;Dec&lt;/th&gt;&lt;th scope="col"&gt;Annual&lt;/th&gt;&lt;/tr&gt;&lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1938&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-5.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-0.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;11.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.9&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;22.2&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;16.5&lt;/td&gt;&lt;td 
class="data_0_0_0_0"&gt;10.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.3&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-4.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.8&lt;/td&gt;&lt;/tr&gt; &lt;tr class="mtx" style="text-align:right;"&gt;&lt;td style="text-align:center"&gt;1939&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-6.6&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-1.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;4.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;7.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;13.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;20.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;17.4&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;9.7&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;3.0&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;-2.5&lt;/td&gt;&lt;td class="data_0_0_0_0"&gt;6.2&lt;/td&gt;&lt;/tr&gt; </code></pre>
1
2016-08-12T12:26:26Z
38,919,491
<p>This is a job for the csv lib, getting each td inside each row and extracting the text, it will handle where there are missing values in each row:</p> <pre><code>from bs4 import BeautifulSoup import urllib2 import csv url = 'http://www.data.jma.go.jp/obd/stats/etrn/view/monthly_s3_en.php?block_no=47401&amp;view=1' html = urllib2.urlopen(url).read() soup = BeautifulSoup(html) table = soup.select_one("table.data2_s") # python3 just use th.text headers = [th.text.encode("utf-8") for th in table.select("tr th")] with open("out.csv", "w") as f: wr = csv.writer(f) wr.writerow(headers) wr.writerows([[td.text.encode("utf-8") for td in row.find_all("td")] for row in table.select("tr + tr")]) </code></pre> <p>Which matches the table exactly as you see on the page:</p> <pre><code>:~$ cat out.csv Year,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec,Annual 1938,-5.2,-4.9,-0.6,4.7,9.5,11.6,17.9,22.2,16.5,10.7,3.3,-4.7,6.8 1939,-7.5,-6.6,-1.4,4.0,7.5,13.0,17.4,20.0,17.4,9.7,3.0,-2.5,6.2 1940,-6.0,-5.7,-0.5,3.5,8.5,11.0,16.6,19.7,15.6,10.4,3.7,-1.0,6.3 1941,-6.5,-5.8,-2.6,3.6,8.1,11.4,12.7,16.5,16.0,10.0,4.0,-2.9,5.4 1942,-7.8,-8.2,-0.8,3.5,7.1,12.0,17.4,18.4,15.7,10.5,2.5,-2.9,5.6 1943,-4.1,-6.1,-1.1,3.5,6.9,12.9,19.3,21.5,17.5,11.7,1.2,-3.6,6.6 1944,-7.7,-7.9,-2.2,1.7,8.9,13.7,19.0,21.3,16.6,10.8,1.3,-6.0,5.8 1945,-7.8,-6.9,-1.8,3.9,5.5,11.0,13.6,18.7,16.8,11.0,3.9,-4.8,5.3 1946,-6.5,-6.0,-3.3,4.5,7.6,14.9,18.2,22.2,16.9,11.5,4.4,-2.5,6.8 1947,-4.9,-5.5,-2.3,3.7,9.0,11.2,17.1,19.3,15.1,10.6,2.4,-4.6,5.9 1948,-2.7,-4.4,-0.2,6.0,10.7,12.2,16.2,22.0,16.9,11.1,4.2,-0.6,7.6 1949,-2.6,-2.8,-3.4,2.0,9.4,11.8,16.9,20.8,17.8,10.8,3.1,-3.8,6.7 1950,-5.7,-4.8,-1.3,4.0,9.2,14.6,19.3,22.6,16.8,9.0,3.0,-2.9,7.0 1951,-6.7,-6.5,-2.2,3.7,9.5,12.3,16.7,22.3,15.6,10.1,3.7,-0.3,6.5 1952,-5.7,-7.1,-2.4,3.8,8.3,13.1,16.4,19.7,17.0,11.3,0.9,-7.1,5.7 1953,-7.7,-7.3,-0.9,3.6,6.9,11.1,16.8,19.2,17.6,11.2,-0.6,-2.6,5.6 1954,-6.7,-4.1,-2.5,4.0,7.5,11.0,13.7,17.0,17.2,9.5,3.2,-1.8,5.7 
1955,-6.4,-4.8,-1.3,4.7,7.0,12.7,20.3,19.5,15.5,10.6,3.6,-0.4,6.8 1956,-6.1,-4.6,-2.0,5.1,10.8,11.2,13.8,16.3,17.2,12.3,2.8,-2.6,6.2 1957,-3.9,-5.5,-2.9,4.4,9.3,10.9,17.1,18.2,15.5,11.1,5.4,-1.1,6.5 1958,-4.9,-4.9,-2.3,4.4,8.5,12.6,17.5,18.3,16.8,10.6,4.5,-0.5,6.7 1959,-7.3,-2.8,0.8,6.4,9.4,12.7,17.1,18.5,16.2,11.6,2.9,-3.9,6.8 1960,-7.2,-5.2,-1.4,3.5,7.7,10.8,15.9,20.8,18.1,9.7,3.3,-3.9,6.0 1961,-7.7,-5.3,-1.4,5.5,8.7,14.7,19.5,20.0,18.9,10.4,4.1,-1.3,7.2 1962,-4.2,-5.4,-2.5,6.7,10.0,12.9,16.8,17.7,16.6,9.9,2.6,-1.5,6.6 1963,-3.6,-3.7,0.1,5.0,10.4,12.4,16.8,17.1,15.6,10.7,4.3,-1.7,7.0 1964,-4.5,-7.7,-1.3,3.7,9.9,11.9,15.3,17.7,14.9,10.0,3.6,-1.9,6.0 1965,-4.1,-5.7,-2.8,3.2,9.1,13.3,15.2,18.8,15.8,11.4,2.1,-2.6,6.1 1966,-5.0,-5.5,-1.0,3.2,8.1,12.2,15.3,17.5,15.4,11.6,4.1,-4.4,6.0 1967,-6.8,-5.9,-0.7,4.5,10.0,11.4,16.4,20.5,15.5,11.0,1.8,-1.5,6.4 1968,-4.2,-4.7,1.9,5.7,8.9,14.5,17.3,18.1,15.9,9.1,5.3,-0.7,7.3 1969,-7.3,-7.5,-2.5,3.9,7.2,10.6,17.0,16.5,16.1,9.4,2.2,-5.4,5.0 1970,-6.6,-6.0,-4.2,4.6,10.4,12.9,17.4,19.2,16.8,10.5,4.3,-3.3,6.3 1971,-6.3,-6.4,-1.7,4.1,7.6,11.6,15.8,17.2,15.2,11.5,3.4,-2.2,5.8 1972,-5.3,-5.0,-0.6,5.9,9.4,12.8,16.8,20.4,15.7,10.9,1.9,-1.4,6.8 1973,-4.2,-5.3,-2.9,4.2,8.4,12.8,17.0,20.9,17.1,10.4,3.5,-1.9,6.7 1974,-2.6,-4.6,-2.1,4.0,8.4,11.8,16.8,18.8,16.5,10.1,1.9,-5.7,6.1 1975,-4.1,-6.1,-1.5,4.3,8.4,13.7,16.1,20.6,17.3,10.4,3.8,-3.8,6.6 1976,-4.6,-3.5,-1.4,4.0,8.9,11.9,17.5,17.6,15.7,10.2,1.3,-2.0,6.3 1977,-8.3,-7.1,-1.0,3.6,8.0,11.9,18.2,19.1,17.4,11.4,4.5,-1.8,6.3 1978,-6.7,-9.2,-1.6,4.3,9.2,13.5,20.6,21.3,17.4,9.6,3.4,-2.1,6.6 1979,-6.9,-4.5,-2.5,2.7,7.8,13.2,15.8,20.3,16.9,11.3,2.9,-0.1,6.4 1980,-5.4,-7.1,-1.9,1.9,7.8,12.9,15.9,16.5,16.0,10.0,4.3,-0.6,5.9 1981,-5.4,-6.3,-2.6,5.6,8.1,11.8,17.1,18.7,16.0,10.5,0.8,-0.6,6.1 1982,-5.6,-5.3,-0.6,3.7,9.0,11.9,16.9,21.0,17.5,11.4,4.3,-1.0,6.9 1983,-4.2,-7.6,-1.9,6.8,8.2,8.5,14.5,18.9,15.8,8.9,4.8,-2.1,5.9 1984,-4.9,-6.6,-3.3,2.9,7.9,15.5,19.5,20.5,16.6,9.2,2.3,-3.6,6.3 
1985,-8.7,-4.8,-1.4,4.9,8.6,11.7,16.6,21.1,15.7,10.3,2.7,-4.2,6.0 1986,-7.2,-6.5,-2.4,4.6,8.4,11.2,14.4,19.6,16.8,9.1,2.1,-1.9,5.7 1987,-6.4,-5.6,-1.4,4.2,8.6,12.6,17.5,18.0,16.4,11.1,2.0,-3.1,6.2 1988,-4.8,-6.3,-1.8,4.1,8.0,12.6,14.1,20.4,16.1,10.4,2.0,-1.5,6.1 1989,-2.6,-2.4,0.8,4.0,8.2,10.7,18.4,20.4,16.8,10.8,4.8,-1.3,7.4 1990,-5.7,-2.4,1.4,5.7,9.3,13.4,18.9,20.3,17.1,13.3,6.2,1.2,8.2 1991,-1.6,-3.6,-1.5,4.8,10.1,14.3,16.2,19.0,16.6,11.8,3.5,-2.3,7.3 1992,-3.6,-3.6,-0.4,3.7,8.1,12.1,17.6,18.0,14.9,11.1,3.2,-1.2,6.7 1993,-2.7,-3.3,-0.2,3.1,8.6,10.7,15.6,17.6,16.3,11.1,3.7,-1.6,6.6 1994,-6.1,-2.7,-1.3,4.4,10.0,12.8,17.4,21.7,17.5,11.8,4.3,-2.9,7.2 1995,-4.0,-4.0,-0.8,4.8,11.0,12.7,18.4,19.3,16.3,12.3,5.2,-0.6,7.6 1996,-4.6,-4.5,-1.0,3.5,6.9,12.0,15.9,18.7,16.8,10.4,2.3,-2.4,6.2 1997,-3.0,-3.3,-1.5,4.3,7.3,11.7,17.4,17.2,16.1,10.3,6.4,-0.7,6.9 1998,-6.9,-5.1,0.3,5.3,10.1,12.9,15.5,18.1,17.2,12.5,2.0,-2.4,6.6 1999,-4.1,-5.6,-2.6,4.2,8.4,14.5,16.6,21.0,18.3,11.2,3.8,-1.9,7.0 2000,-4.2,-5.6,-2.1,3.5,9.3,12.8,18.9,21.5,17.7,10.6,1.5,-4.1,6.7 2001,-6.3,-7.7,-2.4,4.7,8.5,13.0,17.4,18.7,15.6,10.8,4.0,-4.2,6.0 2002,-3.6,-1.0,0.5,6.8,11.1,12.1,15.7,17.1,17.0,10.8,2.3,-4.4,7.0 2003,-4.7,-5.6,-0.7,5.3,10.1,13.9,14.3,18.4,16.6,11.3,4.5,-1.4,6.8 2004,-3.9,-3.0,-0.5,4.4,10.6,14.6,16.8,19.7,17.8,11.8,5.9,-2.0,7.7 2005,-4.6,-5.7,-1.0,3.9,7.0,14.3,16.7,21.0,17.9,12.6,4.9,-2.3,7.1 2006,-5.5,-4.7,-0.9,2.1,9.3,11.9,18.4,21.6,17.7,11.0,4.5,-1.8,7.0 2007,-3.7,-3.2,-0.7,3.5,7.6,14.3,16.7,20.4,17.0,10.9,3.0,-1.7,7.0 2008,-6.0,-4.8,0.6,6.0,8.3,11.9,17.9,18.8,17.9,11.5,3.8,-0.4,7.1 2009,-2.4,-4.4,0.0,4.5,10.0,12.3,14.8,18.6,16.9,11.4,3.1,-2.2,6.9 2010,-3.4,-4.9,-1.4,3.5,7.3,15.0,18.1,22.4,18.4,11.4,4.8,-1.1,7.5 2011,-5.1,-2.2,-0.6,4.4,6.5,12.8,17.5 ),21.5,18.3,12.1,4.9,-2.3,7.3 2012,-5.4,-6.4,-2.4,4.6,8.9,12.6,17.2,20.4,19.4,11.8,3.8,-3.0,6.8 2013,-5.8,-5.1,-1.3,4.5,7.2,14.0,18.9,20.2,17.6,11.8,5.5,-0.2,7.3 2014,-5.3,-4.2,-1.2,3.9,8.7,13.9,19.2,20.0,16.7,11.0,4.8,-2.3,7.1 
2015,-2.9,-1.7,2.3,5.9,9.9,12.1,17.6,19.0,17.3,10.4,3.7,-0.2,7.8 2016,-5.2,-4.7,0.5,4.3,11.4,12.5,17.4,21.8 ], , , , ,5.2 ] </code></pre> <p>if you want the caption use <code>table.select_one("caption.m").text</code>:</p> <pre><code>with open("out.csv", "w") as f: wr = csv.writer(f) wr.writerow([table.select_one("caption.m").text.encode("utf-8")]) wr.writerow(headers) wr.writerows([[td.text.encode("utf-8") for td in row.find_all("td")] for row in table.select("tr + tr")]) </code></pre> <p>but it might be an idea to use that as the name of the file as opposed to adding it to the csv.</p> <p>If you really want to do it without the csv, use the same logic with <em>str.join</em>:</p> <pre><code>table = soup.select_one("table.data2_s") headers = [th.text.encode("utf-8") for th in table.select("tr th")] with open("out.csv", "w") as f: f.write(",".join(headers) + "\n") f.writelines(",".join([td.text.encode("utf-8") for td in row.find_all("td")]) + "\n" for row in table.select("tr + tr")) </code></pre> <p>If you want to replace the empty cells with <code>N/A</code>:</p> <pre><code>with open("out.csv", "w") as f: f.write(",".join(headers) + "\n") f.writelines(",".join([td.text.encode("utf-8").strip('\xe3\x80\x80') or "N/A" for td in row.find_all("td")]) + "\n" for row in table.select("tr + tr")) </code></pre> <p>Which will change the last row to:</p> <pre><code>2016,-5.2,-4.7,0.5,4.3,11.4,12.5,17.4,21.8 ],N/A,N/A,N/A,N/A,5.2 ] </code></pre> <p>The spaces for missing values are <em>unicode</em> <a href="http://www.fileformat.info/info/unicode/char/3000/index.htm" rel="nofollow">ideographic space</a> characters (<em>u"\u3000" in python</em>) which, when encoded to utf-8, become <code>'\xe3\x80\x80'</code>; we strip those bytes, and if that leaves an empty string we just use <code>"N/A"</code>:</p> <pre><code>In [7]: print u"\u3000"   In [8]: u"\u3000".encode("utf-8") Out[8]: '\xe3\x80\x80' In [9]: u"\u3000".encode("utf-8").strip('\xe3\x80\x80') Out[9]: '' </code></pre>
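As a side note (not part of the original answer, which targets Python 2): in Python 3 the cell text is already unicode, so the same cleanup works directly on the string without the byte-level strip; a minimal sketch with made-up cell values:

```python
# Hypothetical cell texts; "\u3000" is the ideographic space used for missing values
cells = ["21.8\u3000", "\u3000", "5.2"]

cleaned = [c.strip("\u3000") or "N/A" for c in cells]
print(cleaned)  # ['21.8', 'N/A', '5.2']
```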
1
2016-08-12T13:43:34Z
[ "python", "csv", "beautifulsoup" ]
Create new tokens and tuples from existing ones based on conditions
38,917,962
<p>This is very related to a <a href="http://stackoverflow.com/questions/38509239/need-to-remove-items-from-both-a-list-and-a-dictionary-of-tuple-value-pairs-at-s/38512276?noredirect=1#comment65178650_38512276">previous question</a> but I am having difficulties adapting for my use case.</p> <p>I have a sentence: <code>"Forbes Asia 200 Best Under 500 Billion 2011"</code></p> <p>I have tokens like:</p> <pre><code>oldTokens = [u'Forbes', u'Asia', u'200', u'Best', u'Under', u'500', u'Billion', u'2011'] </code></pre> <p>And the indices of where a previous parser has figured out where there should be location or number slots:</p> <pre><code>numberTokenIDs = {(7,): 2011.0, (2,): 200.0, (5,6): 500000000000.00} locationTokenIDs = {(0, 1): u'Forbes Asia'} </code></pre> <p>The token IDs correspond to the index of the tokens where there are locations or numbers, the objective is to obtain a new set of tokens like:</p> <pre><code>newTokens = [u'ForbesAsia', u'200', u'Best', u'Under', u'500Billion', u'2011'] </code></pre> <p>With new number and location tokenIDs perhaps like (to avoid index out of bounds exceptions):</p> <pre><code>numberTokenIDs = {(5,): 2011.0, (1,): 200.0, (4,): 500000000000.00} locationTokenIDs = {(0,): u'Forbes Asia'} </code></pre> <p>Essentially I would like to go through the new reduced set of tokens, and be able to ultimately create a new sentence called:</p> <p><code>"LOCATION_SLOT NUMBER_SLOT Best Under NUMBER_SLOT NUMBER_SLOT"</code></p> <p>via going through the new set of tokens and replacing the correct tokenID with either <code>LOCATION_SLOT</code> or <code>NUMBER_SLOT</code>. 
If I did this with the current set of number and location token IDs, I would get:</p> <p><code>"LOCATION_SLOT LOCATION_SLOT NUMBER_SLOT Best Under NUMBER_SLOT NUMBER_SLOT NUMBER_SLOT".</code></p> <p>How would I do this?</p> <p>Another example is:</p> <pre><code>Location token IDs are: (0, 1) Number token IDs are: (3, 4) </code></pre> <p>Old sampleTokens <code>[u'United', u'Kingdom', u'USD', u'1.240', u'billion']</code></p> <p>Where I want to both delete tokens and change the location and number token IDs, to be able to replace the sentence like:</p> <pre><code>sampleTokens[numberTokenID] = "NUMBER_SLOT" sampleTokens[locationTokenID] = "LOCATION_SLOT" </code></pre> <p>Such that the replaced tokens are <code>[u'LOCATION_SLOT', u'USD', u'NUMBER_SLOT']</code></p> <p>Note: the concatenation should join all the values in the tuple if there is more than one (the tuple could also contain &gt;2 elements, e.g. <code>The United States of America</code>).</p>
1
2016-08-12T12:26:47Z
38,918,244
<p>This should work (if I understood correctly):</p> <pre><code>token_by_index = dict(enumerate(oldTokens)) groups = numberTokenIDs.keys() + locationTokenIDs.keys() for group in groups: token_by_index[group[0]] = ''.join(token_by_index.pop(index) for index in group) newTokens = [token for _, token in sorted(token_by_index.items(), key=lambda (index, _): index)] </code></pre> <p>To find the new token IDs:</p> <pre><code>new_index_by_token = dict(map(lambda (i, t): (t, i), enumerate(newTokens))) numberTokenIDs = {(new_index_by_token[token_by_index[group[0]]],): value for group, value in numberTokenIDs.items()} locationTokenIDs = {(new_index_by_token[token_by_index[group[0]]],): value for group, value in locationTokenIDs.items()} </code></pre>
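The snippets above use Python 2's tuple-unpacking lambdas, which were removed in Python 3. A sketch of the same idea in Python 3, run against the question's sample data (variable names follow the question):

```python
oldTokens = ['Forbes', 'Asia', '200', 'Best', 'Under', '500', 'Billion', '2011']
numberTokenIDs = {(7,): 2011.0, (2,): 200.0, (5, 6): 500000000000.0}
locationTokenIDs = {(0, 1): 'Forbes Asia'}

token_by_index = dict(enumerate(oldTokens))
for group in list(numberTokenIDs) + list(locationTokenIDs):
    # merge each group of indices into the token at the group's first index
    token_by_index[group[0]] = ''.join(token_by_index.pop(i) for i in group)

newTokens = [tok for _, tok in sorted(token_by_index.items())]
print(newTokens)  # ['ForbesAsia', '200', 'Best', 'Under', '500Billion', '2011']

# remap the slot IDs onto the merged token list
new_index_by_token = {tok: i for i, tok in enumerate(newTokens)}
numberTokenIDs = {(new_index_by_token[token_by_index[g[0]]],): v
                  for g, v in numberTokenIDs.items()}
locationTokenIDs = {(new_index_by_token[token_by_index[g[0]]],): v
                    for g, v in locationTokenIDs.items()}
print(numberTokenIDs)    # {(5,): 2011.0, (1,): 200.0, (4,): 500000000000.0}
print(locationTokenIDs)  # {(0,): 'Forbes Asia'}
```

This reproduces the exact output the question asks for; note it assumes every merged token string is unique in the sentence, since `new_index_by_token` is keyed by token text.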
1
2016-08-12T12:42:10Z
[ "python", "loops", "tuples", "token", "tokenize" ]
Read random chunks from text file
38,918,219
<p>For a deep learning model I need to load my data by batches. For every epoch (full iteration over all the data) every row needs to be passed once, but it's important that the data is fed in a random order to the algorithm. My dataset is too big to read it fully in memory. It's sequence data with a variable length, the input format can be changed since it's a dump from a cluster that my other script outputs. Currently it's some meta info per row and then the sequences split by ';'.</p> <p>My current solution is a generator that shuffles all the line numbers, chunks them up in 4 pieces and reads the file, parsing the lines that match the chunk line numbers. It yields batch sized sequences until there is nothing left, and then it parses the next chunk of line numbers. It works, but I feel like there might be a better solution. Who has a better workflow? This is a problem I run into regularly. The problem is that I'm fully scanning the file for every chunk, every epoch. Even though I can get it to work with just 4 chunks, with 30 epochs that is 120 times reading a big file.</p>
0
2016-08-12T12:40:44Z
38,919,973
<p>Build an index of the lines in memory (which requires a single pass through the file, but not all in memory) and then you can access lines randomly and fast. </p> <p>This isn't robust (no validation/range-checking, etc.) but:</p> <pre><code>import sys BUFFER_LEN = 1024 def findNewLines(s): retval = [] lastPos = 0 while True: pos = s.find("\n", lastPos) if pos &gt;= 0: pos += 1 retval.append(pos) lastPos = pos else: break return retval class RandomAccessFile(object): def __init__(self, fileName): self.fileName = fileName self.startPositions = [0] with open(fileName, "rb") as f: looking = True fileOffset = 0 while (looking): bytes = f.read(BUFFER_LEN) if len(bytes) &lt; BUFFER_LEN: looking = False newLines = findNewLines(bytes) for newLine in newLines: self.startPositions.append(fileOffset+newLine) fileOffset += len(bytes) def GetLine(self, index): start, stop = self.startPositions[index],self.startPositions[index+1]-1 with open(self.fileName, "rb") as f: f.seek(start) return f.read((stop-start)-1) raf = RandomAccessFile('/usr/share/dict/words') print raf.GetLine(0) print raf.GetLine(10) print raf.GetLine(456) print raf.GetLine(71015) </code></pre> <p>output is:</p> <pre><code>python indexedFile.py A Aaronic abrim flippantness </code></pre>
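Combined with an index like this, each epoch can visit every line exactly once in a fresh random order without rescanning the file; a small sketch (the `RandomAccessFile` usage is only indicated in comments, since it needs a real data file):

```python
import random

def epoch_order(num_lines, seed=None):
    """Return the line indices for one epoch, in a fresh random order."""
    order = list(range(num_lines))
    random.Random(seed).shuffle(order)
    return order

# Hypothetical usage with the index above:
# raf = RandomAccessFile('data.txt')
# for epoch in range(30):
#     for idx in epoch_order(len(raf.startPositions) - 1):
#         line = raf.GetLine(idx)

print(epoch_order(5, seed=0))  # some permutation of [0, 1, 2, 3, 4]
```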
2
2016-08-12T14:04:45Z
[ "python", "out-of-memory", "deep-learning" ]
Why does setup.py sweep the content of the namespace before installing?
38,918,246
<p>I'm using namespaces with setuptools to distribute a same module in two different repositories. The goal is to get <code>mymodule.one</code> and <code>mymodule.two</code> installed, knowing that the content of <code>one</code> and <code>two</code> comes from different repos. But it looks like two <code>setup.py</code> sweep each other content.</p> <pre><code>├── repo1 │ ├── mymodule │ │ ├── __init__.py │ │ └── one │ │ └── __init__.py │ └── setup.py └── repo2 ├── mymodule │ ├── __init__.py │ └── two │ └── __init__.py └── setup.py </code></pre> <p>The namespace has the <code>__init__.py</code> below:</p> <pre><code>test$ cat repo1/mymodule/__init__.py from pkgutil import extend_path __path__ = extend_path(__path__, __name__) test$ cat repo2/mymodule/__init__.py from pkgutil import extend_path __path__ = extend_path(__path__, __name__) </code></pre> <p>The <code>setup.py</code> declares the same name:</p> <pre><code>test$ cat repo1/setup.py #!/usr/bin/env python from setuptools import setup, find_packages setup(name='mymodule', packages=find_packages()) test$ cat repo2/setup.py #!/usr/bin/env python from setuptools import setup, find_packages setup(name='mymodule', packages=find_packages()) </code></pre> <p>Installing the module from the first package allows to import it successfully:</p> <pre><code>test/repo1$ sudo python3 setup.py install running install Checking .pth file support in /usr/local/lib/python3.4/dist-packages/ /usr/bin/python3 -E -c pass TEST PASSED: /usr/local/lib/python3.4/dist-packages/ appears to support .pth files running bdist_egg running egg_info writing dependency_links to mymodule.egg-info/dependency_links.txt writing mymodule.egg-info/PKG-INFO writing top-level names to mymodule.egg-info/top_level.txt reading manifest file 'mymodule.egg-info/SOURCES.txt' writing manifest file 'mymodule.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py creating build/bdist.linux-x86_64/egg 
creating build/bdist.linux-x86_64/egg/mymodule copying build/lib/mymodule/__init__.py -&gt; build/bdist.linux-x86_64/egg/mymodule creating build/bdist.linux-x86_64/egg/mymodule/one copying build/lib/mymodule/one/__init__.py -&gt; build/bdist.linux-x86_64/egg/mymodule/one byte-compiling build/bdist.linux-x86_64/egg/mymodule/__init__.py to __init__.cpython-34.pyc byte-compiling build/bdist.linux-x86_64/egg/mymodule/one/__init__.py to __init__.cpython-34.pyc creating build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/PKG-INFO -&gt; build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/SOURCES.txt -&gt; build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/dependency_links.txt -&gt; build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/top_level.txt -&gt; build/bdist.linux-x86_64/egg/EGG-INFO zip_safe flag not set; analyzing archive contents... mymodule.__pycache__.__init__.cpython-34: module references __path__ creating 'dist/mymodule-0.0.0-py3.4.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing mymodule-0.0.0-py3.4.egg creating /usr/local/lib/python3.4/dist-packages/mymodule-0.0.0-py3.4.egg Extracting mymodule-0.0.0-py3.4.egg to /usr/local/lib/python3.4/dist-packages Adding mymodule 0.0.0 to easy-install.pth file Installed /usr/local/lib/python3.4/dist-packages/mymodule-0.0.0-py3.4.egg Processing dependencies for mymodule==0.0.0 Finished processing dependencies for mymodule==0.0.0 </code></pre> <p>Here's the import:</p> <pre><code>test/$ ipython3 Python 3.4.3 (default, Oct 14 2015, 20:28:29) [...] 
In [1]: from mymodule import [TAB] extend_path one </code></pre> <p>Now we install the other part of the namespace from the second repository:</p> <pre><code>test/repo2$ sudo python3 setup.py install running install Checking .pth file support in /usr/local/lib/python3.4/dist-packages/ /usr/bin/python3 -E -c pass TEST PASSED: /usr/local/lib/python3.4/dist-packages/ appears to support .pth files running bdist_egg running egg_info creating mymodule.egg-info writing mymodule.egg-info/PKG-INFO writing top-level names to mymodule.egg-info/top_level.txt writing dependency_links to mymodule.egg-info/dependency_links.txt writing manifest file 'mymodule.egg-info/SOURCES.txt' reading manifest file 'mymodule.egg-info/SOURCES.txt' writing manifest file 'mymodule.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py creating build creating build/lib creating build/lib/mymodule copying mymodule/__init__.py -&gt; build/lib/mymodule creating build/lib/mymodule/two copying mymodule/two/__init__.py -&gt; build/lib/mymodule/two creating build/bdist.linux-x86_64 creating build/bdist.linux-x86_64/egg creating build/bdist.linux-x86_64/egg/mymodule copying build/lib/mymodule/__init__.py -&gt; build/bdist.linux-x86_64/egg/mymodule creating build/bdist.linux-x86_64/egg/mymodule/two copying build/lib/mymodule/two/__init__.py -&gt; build/bdist.linux-x86_64/egg/mymodule/two byte-compiling build/bdist.linux-x86_64/egg/mymodule/__init__.py to __init__.cpython-34.pyc byte-compiling build/bdist.linux-x86_64/egg/mymodule/two/__init__.py to __init__.cpython-34.pyc creating build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/PKG-INFO -&gt; build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/SOURCES.txt -&gt; build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/dependency_links.txt -&gt; build/bdist.linux-x86_64/egg/EGG-INFO copying mymodule.egg-info/top_level.txt -&gt; build/bdist.linux-x86_64/egg/EGG-INFO 
zip_safe flag not set; analyzing archive contents... mymodule.__pycache__.__init__.cpython-34: module references __path__ creating dist creating 'dist/mymodule-0.0.0-py3.4.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing mymodule-0.0.0-py3.4.egg removing '/usr/local/lib/python3.4/dist-packages/mymodule-0.0.0-py3.4.egg' (and everything under it) creating /usr/local/lib/python3.4/dist-packages/mymodule-0.0.0-py3.4.egg Extracting mymodule-0.0.0-py3.4.egg to /usr/local/lib/python3.4/dist-packages mymodule 0.0.0 is already the active version in easy-install.pth Installed /usr/local/lib/python3.4/dist-packages/mymodule-0.0.0-py3.4.egg Processing dependencies for mymodule==0.0.0 Finished processing dependencies for mymodule==0.0.0 </code></pre> <p>But trying to import <code>one</code> again fails since <code>two</code> has trashed it: </p> <pre><code>test/$ ipython3 Python 3.4.3 (default, Oct 14 2015, 20:28:29) [...] In [1]: from mymodule import [TAB] extend_path two In [1]: from mymodule import one --------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-1-ddf1c613e57c&gt; in &lt;module&gt;() ----&gt; 1 from mymodule import one ImportError: cannot import name 'one' </code></pre>
0
2016-08-12T12:42:15Z
38,919,020
<p>There are two requirements for using namespaces correctly.</p> <ol> <li><code>__init__.py</code> of the modules declares a namespace</li> <li>setup.py defines a <em>unique</em> name for each module</li> </ol> <p>The contents of both <code>__init__.py</code> files should be:</p> <pre><code>__import__('pkg_resources').declare_namespace(__name__) </code></pre> <p>Then setup.py for the first module:</p> <pre><code>setup(name='mymodule_one', packages=find_packages('.'), namespace_packages=['mymodule']) </code></pre> <p>and for the second module:</p> <pre><code>setup(name='mymodule_two', packages=find_packages('.'), namespace_packages=['mymodule']) </code></pre> <p>As a result, you should be able to install and import both <code>mymodule.one</code> and <code>mymodule.two</code>.</p> <p>The common namespace <code>mymodule</code> allows both modules to be imported using the same name. </p> <p>The <code>name</code> given to setup.py for each module needs to be unique as that is used for the installation path of the module and will overwrite anything that shares it, as you have seen.</p>
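A way to see that the collision is purely an install-time one (not part of the original answer): the pkgutil-style `__init__.py` files from the question really do merge the package across `sys.path` entries, which can be checked without setuptools at all. A sketch using throwaway directories (the on-disk layout mirrors the question's repo1/repo2 tree):

```python
import os
import sys
import tempfile

ns_init = "from pkgutil import extend_path\n__path__ = extend_path(__path__, __name__)\n"
root = tempfile.mkdtemp()

# Two separate "repos", each shipping one half of mymodule
for repo, sub in (("repo1", "one"), ("repo2", "two")):
    pkg = os.path.join(root, repo, "mymodule", sub)
    os.makedirs(pkg)
    with open(os.path.join(root, repo, "mymodule", "__init__.py"), "w") as f:
        f.write(ns_init)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    sys.path.insert(0, os.path.join(root, repo))

import mymodule.one
import mymodule.two  # both halves import, because extend_path merges the two dirs

print(mymodule.one.__name__, mymodule.two.__name__)
```

So the overwrite in the question happens only because both repos installed under the same distribution name, not because the namespace mechanism fails.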
1
2016-08-12T13:20:18Z
[ "python", "python-3.x", "namespaces", "setuptools", "distutils" ]
change color for first level of contourf
38,918,306
<p>I'm using this script to plot a Ramachandran plot (the kind of plot is not really relevant here, what matters is the output graph):</p> <pre><code>#!/usr/bin/python # coding: utf-8 """ Script to plot a heat map of the dihedral angles http://stackoverflow.com/questions/26351621/turn-hist2d-output-into-contours-in-matplotlib """ import sys import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LogNorm # print(sys.argv[1]) frame, x, y = np.loadtxt(sys.argv[1], unpack=True) fig = plt.figure(1) ax = plt.subplot(111) counts, ybins, xbins, image = plt.hist2d(x, y, bins=180, norm=LogNorm()) plt.clf() plt.contourf(counts.transpose(), 10, extent=[xbins.min(),xbins.max(),ybins.min(),ybins.max()]) plt.colorbar() fig.set_size_inches(30, 20) plt.savefig(sys.argv[1], bbox_inches='tight') </code></pre> <p>And here is what I obtain:</p> <p><a href="http://i.stack.imgur.com/QkARi.png" rel="nofollow"><img src="http://i.stack.imgur.com/QkARi.png" alt="final graph"></a></p> <p>That's almost what I want. I would like the first level (0-15 on the color bar), which appears in the darkest purple, to appear white on the graph.</p> <p>Can it be done?</p>
0
2016-08-12T12:44:40Z
38,923,013
<p>Will a mask work, with everything below 15 in white?</p> <pre><code>from numpy import ma counts_masked = ma.masked_where(counts&lt;15, counts) plt.contourf(counts_masked.transpose(), 10, extent=[xbins.min(),xbins.max(),ybins.min(),ybins.max()]) </code></pre> <p>Or, another option might be:</p> <pre><code>levels = np.linspace(15,165,10) # your colorbar range, starting at 15 cmap = plt.get_cmap() cmap.set_under(color='white') plt.contourf(counts.transpose(), levels=levels, cmap=cmap, extent=[xbins.min(),xbins.max(),ybins.min(),ybins.max()]) </code></pre>
0
2016-08-12T16:54:51Z
[ "python", "matplotlib", "graph" ]
change color for first level of contourf
38,918,306
<p>I'm using this script to plot a Ramachandran plot (the kind of plot is not really relevant here, what matters is the output graph):</p> <pre><code>#!/usr/bin/python # coding: utf-8 """ Script to plot a heat map of the dihedral angles http://stackoverflow.com/questions/26351621/turn-hist2d-output-into-contours-in-matplotlib """ import sys import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LogNorm # print(sys.argv[1]) frame, x, y = np.loadtxt(sys.argv[1], unpack=True) fig = plt.figure(1) ax = plt.subplot(111) counts, ybins, xbins, image = plt.hist2d(x, y, bins=180, norm=LogNorm()) plt.clf() plt.contourf(counts.transpose(), 10, extent=[xbins.min(),xbins.max(),ybins.min(),ybins.max()]) plt.colorbar() fig.set_size_inches(30, 20) plt.savefig(sys.argv[1], bbox_inches='tight') </code></pre> <p>And here is what I obtain:</p> <p><a href="http://i.stack.imgur.com/QkARi.png" rel="nofollow"><img src="http://i.stack.imgur.com/QkARi.png" alt="final graph"></a></p> <p>That's almost what I want. I would like the first level (0-15 on the color bar), which appears in the darkest purple, to appear white on the graph.</p> <p>Can it be done?</p>
0
2016-08-12T12:44:40Z
38,927,126
<p>I actually found the answer an hour ago, based on this question: <a href="http://stackoverflow.com/questions/11386054/python-matplotlib-change-default-color-for-values-exceeding-colorbar-range">Python matplotlib change default color for values exceeding colorbar range</a></p> <p>I just wanted to do the opposite, so I set the low limit with the following code:</p> <pre><code>cs = plt.contourf(counts.transpose(), 10, extent=[xbins.min(), xbins.max(), ybins.min(), ybins.max()]) # All bins under 15 are plotted white cs.cmap.set_under('w') cs.set_clim(15) cbar = plt.colorbar(cs) </code></pre>
0
2016-08-12T22:11:51Z
[ "python", "matplotlib", "graph" ]
Concurrency with subprocess module. How can I do this?
38,918,337
<p>The code below works, but each time you run a program, for example Notepad on the target machine, the prompt is stuck until I quit the program.</p> <p>How can I run multiple programs at the same time on the target machine? I suppose it can be achieved with either the threading or subprocess modules, but I still can't quite apply the concept.</p> <p>How can I do this?</p> <pre><code>import socket import time import subprocess # Execute OS commands # creating the reverse connection IP = '192.168.1.33' # IP of the linux netcat client that will be the command center PORT = 443 # we use the https port to confuse the firewall: the outgoing connection will not be blocked def connect(IP,PORT): # connecting to the control center try: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # IP/TCP s.connect((IP,PORT)) s.send('[!] Conexao recebida\n') # msg to check that we connected #s.close() return s except Exception as e: print('Erro de conexao',e ) return None def listen(s): ## when the client is not listening, the connection errors out and closes! I don't want that. The server has to keep trying until it connects! ## version 3!!!!!!!!!! 
# version 4 uses while True ########## infinite loop to receive commands try: while True: data = s.recv(1024) # the control center also sends the "Enter" we type after each command {\n} #print(data) if data[:-1] == '/exit': # everything except the last character, which is the \n s.close() # close the connection exit(0) # 0 means normal execution / no errors else: # execute the commands cmd(s,data) except: main(s) def cmd(s,data): try: proc = subprocess.Popen(data, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) saida = s.send(proc.stdout.read() + proc.stderr.read()) s.send(saida) #print(proc.stdout.read()) except: main(s) def main(s): if s: s.close() while True: s_connected = connect(IP,PORT) if s_connected: listen(s_connected) else: print("deu erro na conexao, tentando de novo!!!") ## just for debug time.sleep(10) #return 0 # not needed s = None main(s) </code></pre>
9
2016-08-12T12:46:02Z
38,919,320
<p>Try something like:</p> <pre><code>import socket
import time
import subprocess
import select

IP = '192.168.1.33'
PORT = 443

def connect(IP,PORT):
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((IP,PORT))
        s.send('[!] Conexao recebida\n')
        return s
    except Exception as e:
        print('Erro de conexao',e )
        return None

def listen(s):
    try:
        # Create a polling object and register socket with it
        socket_poll = select.poll()
        socket_poll.register(s)

        # Create a list of running processes
        processes = []

        while True:
            # If no data is available on the socket, we can use the time to
            # clean up processes which are finished
            if not socket_poll.poll(1):
                for process in tuple(processes):
                    # poll() returns None while the process is still running;
                    # compare against None so an exit code of 0 is not missed
                    if process.poll() is not None:
                        s.send(process.stdout.read() + process.stderr.read())
                        processes.remove(process)
                continue

            data = s.recv(1024)
            if data[:-1] == '/exit':
                s.close()
                exit(0)
            else:
                cmd(s, data, processes)
    except:
        main(s)

def cmd(s, data, processes):
    try:
        proc = subprocess.Popen(data, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # Add new process to the list
        processes.append(proc)
    except:
        main(s)

def main(s):
    if s:
        s.close()
    while True:
        s_connected = connect(IP,PORT)
        if s_connected:
            listen(s_connected)
        else:
            time.sleep(10)

s = None
main(s)
</code></pre> <p>Sorry for removing the Portuguese comments ;)</p>
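The key behavioral difference the answer relies on — `Popen` returns immediately while the child keeps running — can be seen without any sockets (a minimal stand-alone demo, not part of the reverse-shell code above):

```python
import subprocess
import sys

# Start a child that sleeps for a second; Popen returns at once instead of blocking.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(1)"])
print(proc.poll())  # None -- still running, so the prompt stays free

proc.wait()         # only now do we block until the child finishes
print(proc.poll())  # 0 -- the finished child's exit code
```

Note that `poll()` returning `0` for a clean exit is exactly why the cleanup loop above must compare against `None` rather than rely on truthiness.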
7
2016-08-12T13:35:26Z
[ "python", "python-3.x", "python-multithreading", "python-multiprocessing" ]
Pyspark: Using repartitionAndSortWithinPartitions with multiple sort Criteria
38,918,342
<p>Assuming I have the following RDD:</p> <pre><code>rdd = sc.parallelize([('a', (5,1)), ('d', (8,2)), ('2', (6,3)), ('a', (8,2)), ('d', (9,6)), ('b', (3,4)),('c', (8,3))])
</code></pre> <p>How can I use <code>repartitionAndSortWithinPartitions</code> and sort by x[0] and then by x[1][0]? Using the following I sort only by the key (x[0]):</p> <pre><code>Npartitions = sc.defaultParallelism
rdd2 = rdd.repartitionAndSortWithinPartitions(2, lambda x: hash(x) % Npartitions, 2)
</code></pre> <p>A way to do it is the following, but there should be something simpler, I guess:</p> <pre><code>Npartitions = sc.defaultParallelism
partitioned_data = rdd
  .partitionBy(2)
  .map(lambda x:(x[0],x[1][0],x[1][1]))
  .toDF(['letter','number2','number3'])
  .sortWithinPartitions(['letter','number2'],ascending=False)
  .map(lambda x:(x.letter,(x.number2,x.number3)))

&gt;&gt;&gt; partitioned_data.glom().collect()
[[], [(u'd', (9, 6)), (u'd', (8, 2))], [(u'c', (8, 3)), (u'c', (6, 3))], [(u'b', (3, 4))], [(u'a', (8, 2)), (u'a', (5, 1))]]
</code></pre> <p>As can be seen, I have to convert it to a DataFrame in order to use <code>sortWithinPartitions</code>. Is there another way? Using <code>repartitionAndSortWithinPartitions</code>?</p> <p>(It doesn't matter that the data is not globally sorted. I only care that it is sorted inside the partitions.)</p>
1
2016-08-12T12:46:29Z
38,918,964
<p>It is possible but you'll have to include all required information in the composite key:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.rdd import portable_hash

n = 2

def partitioner(n):
    """Partition by the first item in the key tuple"""
    def partitioner_(x):
        return portable_hash(x[0]) % n
    return partitioner_

(rdd
    .keyBy(lambda kv: (kv[0], kv[1][0]))  # Create temporary composite key
    .repartitionAndSortWithinPartitions(
        numPartitions=n, partitionFunc=partitioner(n), ascending=False)
    .map(lambda x: x[1]))  # Drop key (note: there is no partitioner set anymore)
</code></pre> <p>Explained step-by-step:</p> <ul> <li><p><code>keyBy(lambda kv: (kv[0], kv[1][0]))</code> creates a substitute key which consists of the original key and the first element of the value. In other words it transforms:</p> <pre><code>(0, (5,1))
</code></pre> <p>into </p> <pre><code>((0, 5), (0, (5, 1)))
</code></pre> <p>In practice it can be slightly more efficient to simply reshape the data to </p> <pre><code>((0, 5), 1)
</code></pre></li> <li><p><code>partitioner</code> defines a partitioning function based on a hash of the first element of the key, so:</p> <pre><code>partitioner(7)((0, 5))
## 0
partitioner(7)((0, 6))
## 0
partitioner(7)((0, 99))
## 0
partitioner(7)((3, 99))
## 3
</code></pre> <p>as you can see it is consistent and ignores the second bit.</p></li> <li><p>we use the default <code>keyfunc</code> function which is identity (<code>lambda x: x</code>) and depend on the lexicographic ordering defined on Python <code>tuple</code>:</p> <pre><code>(0, 5) &lt; (1, 5)
## True

(0, 5) &lt; (0, 4)
## False
</code></pre></li> </ul> <p>As mentioned before you could reshape the data instead:</p> <pre><code>rdd.map(lambda kv: ((kv[0], kv[1][0]), kv[1][1]))
</code></pre> <p>and drop the final <code>map</code> to improve performance.</p>
3
2016-08-12T13:16:49Z
[ "python", "apache-spark", "pyspark" ]
import opencv in python
38,918,478
<p>I want to install opencv on Windows 7 64 bit and I'm using Python 3.5.1 32-bit. I downloaded the .whl file for opencv as said in the answer in this question: <a href="https://stackoverflow.com/questions/20953273/install-opencv-for-python-3-3">Install opencv for Python 3.3</a> given by @user3731622. But when I try to do: </p> <pre><code>import cv2 </code></pre> <p>I get the following error:</p> <pre><code>ImportError: numpy.core.multiarray failed to import </code></pre> <p>What am I doing wrong ?</p> <p>Thanks</p>
1
2016-08-12T12:53:39Z
38,918,565
<p>Are you sure it's <em>numpy</em> in the error you get? <code>multiarray</code> is a part of <code>numpy</code>. In this post: <a href="http://stackoverflow.com/questions/20518632/importerror-numpy-core-multiarray-failed-to-import">ImportError: numpy.core.multiarray failed to import</a> the solution is to install <code>numpy</code>. Do you have that installed?</p>
0
2016-08-12T12:57:24Z
[ "python", "opencv" ]
Slice DataFrame using pandas Timestamp on DatetimeIndex
38,918,507
<p>I am reading a csv-file into a pandas DataFrame from disk and want to slice/filter the DataFrame based on the index timestamp.</p> <p>This is what I've got so far:</p> <pre><code>INDEX_COL_NAME = 'Zeit' DELIM_SIGN = ';' DECIMAL_SIGN = ',' KEEP_COLUMNS = [-2] ENCODING = 'ISO-8859-1' DATE = (2016, 8, 11) START = (10, 52, 0) END = (10, 53, 0) df = pd.read_csv('data.csv', delimiter=DELIM_SIGN, decimal=DECIMAL_SIGN, index_col=False, parse_dates=[INDEX_COL_NAME], infer_datetime_format=True, encoding=ENCODING) df.set_index(INDEX_COL_NAME, inplace=True) df = df[KEEP_COLUMNS] date = pd.datetime(*DATE) start = date.replace(hour=START[0], minute=START[1], second=START[2]) end = date.replace(hour=END[0], minute=END[1], second=END[2]) </code></pre> <p>The data is as follows (shortened snippet):</p> <pre><code>Zeit;FU_P1;FU_P2;DIR_01;FIR_01;WAAGE_B1.I;WAAGE_B1.T;WAAGE_B1.X;WAAGE_B2.I;WAAGE_B2.T;WAAGE_B2.X;WAAGE_B3.I;WAAGE_B3.T;WAAGE_B3.X;WAAGE_B4.I;WAAGE_B4.T;WAAGE_B4.X;LEITFÄHIGKEIT_1.COND;LEITFÄHIGKEIT_2.COND 11.08.2016 10:51:59; 20,0; 0,00; 991,19;29,21; 0,0;Empty; 239; 1,0;Empty;-11,600; 0,0;Empty;-0,023; 0,0;Empty;-1,776; 0,3;Empty; 11.08.2016 10:52:00; 20,0; 0,00; 991,22;29,11; 0,0;Empty; 239; 1,0;Empty;-11,600; 0,0;Empty;-0,023; 0,0;Empty;-1,787; 0,3;Empty; 11.08.2016 10:52:10; 20,0; 0,00; 991,08;29,24; 0,0;Empty; 239; 1,0;Empty;-11,600; 0,0;Empty;-0,023; 1,0;Empty;-1,840; 0,3;Empty; 11.08.2016 10:52:20; 20,0; 0,00; 990,95;28,95; 0,0;Empty; 239; 1,0;Empty;-11,600; 0,0;Empty;-0,023; 0,0;Empty;-1,947; 0,3;Empty; 11.08.2016 10:52:30; 20,0; 0,00; 990,94;28,96; 0,0;Empty; 238; 1,0;Empty;-11,600; 0,0;Empty;-0,022; 0,0;Empty;-2,059; 0,3;Empty; 11.08.2016 10:52:40; 20,0; 0,00; 990,82;28,91; 0,0;Empty; 238; 1,0;Empty;-11,600; 0,0;Empty;-0,021; 0,0;Empty;-2,155; 0,3;Empty; 11.08.2016 10:52:50; 20,0; 0,00; 990,80;29,37; 0,0;Empty; 238; 1,0;Empty;-11,600; 0,0;Empty;-0,020; 0,0;Empty;-2,249; 0,0;Empty; 11.08.2016 10:53:00; 20,0; 0,00; 990,71;29,15; 0,0;Empty; 239; 
1,0;Empty;-11,600; 0,0;Empty;-0,021; 1,0;Empty;-2,309; 0,5;Empty; 11.08.2016 10:53:01; 20,0; 0,00; 990,78;29,04; 0,0;Empty; 239; 1,0;Empty;-11,600; 0,0;Empty;-0,021; 0,2;Empty;-2,350; 0,5;Empty; </code></pre> <p>However, I am not able to get the desired slice since</p> <pre><code>print(df.ix[start:end])
</code></pre> <p>prints an empty DataFrame.</p> <p>The elements are part of the index as</p> <pre><code>print(df.index)
</code></pre> <p>shows</p> <pre><code>DatetimeIndex(['2016-11-08 10:45:27', '2016-11-08 10:45:28',
               '2016-11-08 10:45:29', '2016-11-08 10:45:30',
               '2016-11-08 10:45:31', '2016-11-08 10:45:32',
               '2016-11-08 10:45:33', '2016-11-08 10:45:34',
               '2016-11-08 10:45:35', '2016-11-08 10:45:36',
               ...
               '2016-11-08 15:59:51', '2016-11-08 15:59:52',
               '2016-11-08 15:59:53', '2016-11-08 15:59:54',
               '2016-11-08 15:59:55', '2016-11-08 15:59:56',
               '2016-11-08 15:59:57', '2016-11-08 15:59:58',
               '2016-11-08 15:59:59', '2016-11-08 16:00:00'],
              dtype='datetime64[ns]', name='Zeit', length=10408, freq=None)
</code></pre> <p>and there are rows for each second from the beginning of the data logging until its end.</p> <p>In addition</p> <pre><code>print(start in df.index)
</code></pre> <p>gives</p> <pre><code>False
</code></pre> <p>which I do not understand either.</p> <p>How can I perform the desired slicing/filtering? What am I missing?</p>
0
2016-08-12T12:55:13Z
38,923,756
<p>The indexing seems to be fine (you can also use <code>pd.Timestamp</code> or just strings for slicing instead of datetime objects).</p> <p>The issue is with day-month order. IIUC, strings <code>11.08.2016</code> are converted to November 8th instead of August 11th. Adding the argument <code>dayfirst=True</code> to <code>pd.read_csv</code> should sort it out.</p>
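A minimal sketch (with made-up two-row data rather than the file from the question) of how `dayfirst=True` changes the parse and makes the slice work:

```python
from io import StringIO

import pandas as pd

CSV = "Zeit;FIR_01\n11.08.2016 10:52:00;29.11\n11.08.2016 10:53:00;29.15\n"

# Without dayfirst=True, "11.08.2016" is parsed as November 8th;
# with it, the string becomes August 11th as intended.
df = pd.read_csv(StringIO(CSV), sep=';', parse_dates=['Zeit'], dayfirst=True)
df.set_index('Zeit', inplace=True)

print(df.index[0])  # 2016-08-11 10:52:00
print(df['2016-08-11 10:52:00':'2016-08-11 10:53:00'])  # both rows, no longer empty
```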
1
2016-08-12T17:46:22Z
[ "python", "datetime", "pandas", "dataframe", "timestamp" ]
Python: extract a 2D array from a 3D array
38,918,514
<p>I have a 3D numpy array (1L, 420L, 580L) the 2nd and 3rd dimension is a gray scale image that I want to display using openCV. How do I pull the 2D array from the 3D array?</p> <p>I created a short routine to do this, but I bet there is a better way.</p> <pre><code># helper function to remove 1st dimension
def pull_image(in_array):
    rows = in_array.shape[1]  # vertical
    cols = in_array.shape[2]  # horizontal
    out_array = np.zeros((rows, cols), np.uint8)  # create new array to hold image data
    for r in xrange(rows):
        for c in xrange(cols):
            out_array[r, c] = in_array[:, r, c]
    return out_array
</code></pre>
0
2016-08-12T12:55:31Z
38,918,887
<p>If you always only have the first dimension == 1, then you could simply reshape the array...</p> <pre><code>if in_array.shape[0] == 1: return in_array.reshape(in_array.shape[1:]) </code></pre> <p>otherwise, you can use numpy's advanced list slicing...</p> <pre><code>else: return in_array[0,:,:] </code></pre>
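For instance, with a small stand-in for the (1, 420, 580) array:

```python
import numpy as np

in_array = np.arange(20, dtype=np.uint8).reshape(1, 4, 5)  # stand-in for (1, 420, 580)

a = in_array.reshape(in_array.shape[1:])  # drop the leading length-1 axis
b = in_array[0, :, :]                     # same result via slicing (returns a view)

print(a.shape)               # (4, 5)
print(np.array_equal(a, b))  # True
```

(`np.squeeze(in_array)` would also remove every length-1 axis in one call.)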
1
2016-08-12T13:12:06Z
[ "python", "arrays", "opencv", "numpy" ]
Decode MULTIPLE objects JSON objects from a file to a python dictionary with (raw_decode)
38,918,525
<p>New to Python :)</p> <p>I want to decode a long JSON file containing multiple objects into a Python dictionary and then process it into a database.</p> <p>Here is my code:</p> <pre><code>file=open('hi.json',encoding='utf-8')

def readin():
    return file.read(2048)

def parse():
    decoder = json.JSONDecoder(strict=False)
    buffer = ''
    for chunk in iter(readin, ''):
        buffer += chunk
        while buffer:
            try:
                result, index = decoder.raw_decode(buffer)
                yield result
                buffer = buffer[index:]
            except ValueError as e:
                print("1",e)
                # Not enough data to decode, read more
                break

def main():
    imputd=parse()
    output = open('output.txt', 'w')
    output.write(json.dumps(next(imputd)))

main()
</code></pre> <p>It works, but only for the first object. Also, instead of writing to a file (output.txt), I want a Python dictionary.</p> <p>Any suggestion please :)</p>
1
2016-08-12T12:56:01Z
38,919,843
<p>You are only fetching the first value returned from <code>imputd</code>, as Rawing said. You need to iterate over the entire sequence it can return.</p> <pre><code>def main():
    imputd = parse()
    objs = list(imputd)  # Get them all, not just the first
    with open("output.txt", "w") as output:
        # Encode the entire list and write it to the output file.
        json.dump(objs, output)
</code></pre> <p>In <code>parse</code>, you need to ensure that <code>buffer</code> begins with a valid JSON object after you skip past the one just parsed. This should mean just trimming any leading whitespace from <code>buffer</code> (assuming the stream contains one object per line, so that <code>buffer[index:] == "\n..."</code>).</p> <pre><code>try:
    result, index = decoder.raw_decode(buffer)
    yield result
    buffer = buffer[index:].lstrip()
</code></pre>
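A small self-contained illustration of why the whitespace strip is needed when walking a buffer of concatenated objects with `raw_decode`:

```python
import json

buffer = '{"a": 1}\n{"b": 2}\n{"c": 3}\n'
decoder = json.JSONDecoder()

objs = []
while buffer:
    result, index = decoder.raw_decode(buffer)  # raises ValueError on a leading "\n"
    objs.append(result)
    buffer = buffer[index:].lstrip()  # skip the newline before the next object

print(objs)  # [{'a': 1}, {'b': 2}, {'c': 3}]
```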
0
2016-08-12T13:58:23Z
[ "python", "json", "object", "decode" ]
Optimizing a binary tree function, huffman trees
38,918,619
<p>So the scenario is someone you know gives you a huffman tree but it's not optimal (I know all huffman trees are optimal; just suppose it were hypothetically not optimal but does follow the huffman style of only leaves having values).</p> <p>The function should improve the tree as much as possible without changing the actual 'shape' of it, with the aid of a dictionary mapping each symbol to the number of occurrences it has in a hypothetical text you are compressing. The function does this by swapping nodes. So the end result won't necessarily be an optimal tree, but it will be improved as much as possible. For example....</p> <pre><code>class Node:
    def __init__(self, item = None, left = None, right = None):
        self.item = item
        self.left = left
        self.right = right

    def __repr__(self):
        return 'Node({}, {}, {})'.format(self.item, self.left, self.right)
</code></pre> <p>dictionary = {54: 12, 101: 34, 29: 22, 65: 3, 20: 13}</p> <p>Your friend gives you...</p> <p>Node(None, Node(None, Node(20), Node(54)), Node(None, Node(65), Node(None, Node(101), Node(29))))</p> <p>or...</p> <pre><code>       None
      /  |  \
  None   |   None
  / \    |   / \
 20  54  |  65  None
         |      / \
         |   101   29
</code></pre> <p>Where the wanted result would be...</p> <p>Node(None, Node(None, Node(20), Node(29)), Node(None, Node(101), Node(None, Node(65), Node(54))))</p> <p>or...</p> <pre><code>       None
      /  |  \
  None   |   None
  / \    |   / \
 20  29  |  101  None
         |       / \
         |     65   54
</code></pre> <p>How do I locate a leaf node, then locate where it's supposed to be, swap it, then do that for all other leaf nodes, while also making sure that the shape of the tree is the same, regardless of whether it's optimal or not? Also this is in python.</p>
1
2016-08-12T13:00:21Z
39,282,167
<p>From the <a href="https://en.wikipedia.org/wiki/Huffman_coding#Compression" rel="nofollow">basic technique</a> of constructing Huffman Trees, the nodes whose values are least probable are the first ones to be linked to a parent node. Those nodes appear deeper within the Huffman Trees than any other nodes in them. From this, we can deduce the fact that the deeper within the tree you go, the less frequent the values you encounter.</p> <p>This analogy is crucial to developing an optimization function, since we don't need to perform all sorts of swapping when we can get it right the first time by: getting a list of all the items in the tree sorted by depth and their matched values in order; and inserting them at their respective depths wherever there are leaves. Here's the solution that I coded:</p> <pre><code>def optimize_tree(tree, dictionary):
    def grab_items(tree):
        if tree.item:
            return [tree.item]
        else:
            return grab_items(tree.left) + grab_items(tree.right)

    def grab_depth_info(tree):
        def _grab_depth_info(tree,depth):
            if tree.item:
                return {depth:1}
            else:
                depth_info_list = [_grab_depth_info(child,depth+1) for child in [tree.left, tree.right]]
                depth_info = depth_info_list[0]
                for depth in depth_info_list[1]:
                    if depth in depth_info:
                        depth_info[depth] += depth_info_list[1][depth]
                    else:
                        depth_info[depth] = depth_info_list[1][depth]
                return depth_info
        return _grab_depth_info(tree,0)

    def make_inverse_dictionary(dictionary):
        inv_dictionary = {}
        for key in dictionary:
            if dictionary[key] in inv_dictionary:
                inv_dictionary[dictionary[key]].append(key)
            else:
                inv_dictionary[dictionary[key]] = [key]
        for key in inv_dictionary:
            inv_dictionary[key].sort()
        return inv_dictionary

    def get_depth_to_items(depth_info,actual_values):
        depth_to_items = {}
        for depth in sorted(depth_info):  # shallow depths must take the most frequent values first
            depth_to_items[depth] = []
            for i in range(depth_info[depth]):
                depth_to_items[depth].append(actual_values[i])
            depth_to_items[depth].sort()
            del actual_values[:depth_info[depth]]  # drop exactly the values consumed at this depth
        return depth_to_items

    def update_tree(tree,depth_to_items,reference):
        def _update_tree(tree,depth,depth_to_items,reference):
            if tree.item:
                tree.item = reference[depth_to_items[depth].pop(0)].pop(0)
            else:
                for child in [tree.left,tree.right]:
                    _update_tree(child,depth+1,depth_to_items,reference)
        _update_tree(tree,0,depth_to_items,reference)

    items = grab_items(tree)
    depth_info = grab_depth_info(tree)
    actual_values = [dictionary[item] for item in items]
    actual_values.sort(reverse=True)
    inv_dictionary = make_inverse_dictionary(dictionary)
    depth_to_items = get_depth_to_items(depth_info,actual_values)
    update_tree(tree,depth_to_items,inv_dictionary)
</code></pre> <h2>Explanation:</h2> <p>The <code>optimize_tree</code> function requires the user to pass in two arguments:</p> <ul> <li><code>tree</code>: the root node of the Huffman Tree.</li> <li><code>dictionary</code>: the dictionary that maps the symbols to their frequency.</li> </ul> <p>The function starts off by defining five inner functions:</p> <ul> <li><code>grab_items</code> is a function that takes in a tree and returns a list of all the items in it.</li> <li><code>grab_depth_info</code> returns a dictionary where the keys are the depth levels and the values are the number of nodes at the level.</li> <li><code>make_inverse_dictionary</code> returns a dictionary that is the inverse of the given dictionary. 
(It can handle cases where a value can be mapped to two keys.)</li> <li><code>get_depth_to_items</code> returns a dictionary where the keys are the depth levels and the values are lists of actual values (from the dictionary) that are supposed to be at that level in order for the tree to be optimized.</li> <li><code>update_tree</code> inserts the items where they are supposed to be in order to make the tree optimized.</li> </ul> <p>Note: <code>grab_depth_info</code> and <code>update_tree</code> have an inner function defined in them so that their functionality can work recursively.</p> <p>These five inner functions are needed for the following algorithm:</p> <ol> <li>First the function grabs a list of items and the depth information from the tree.</li> <li>Then it uses the list of items to grab the list of actual values from the given dictionary, sorting it in descending order. (So that least frequent values are matched with the greatest depth level in step 4.)</li> <li>Next it makes an inverse of the given dictionary, where the keys and values are swapped. 
(This is to help with step 5.)</li> <li>After making those preparations, the function passes the depth info and the list of actual values into the <code>get_depth_to_items</code> function to get a dictionary of depth level to list of values, in order.</li> <li>Finally, the function passes the tree, the dictionary that was made in the previous step, and the inverted dictionary into the <code>update_tree</code> function, which uses its inner function to recursively visit every node in the tree and update the item attribute using the original keys from the inverted dictionary.</li> </ol> <p>The result of using this algorithm is that the tree you passed in ends up in its most optimized state, without changing its actual shape.</p> <hr> <p>I can confirm that this works by executing these lines of code:</p> <pre><code>tree = Node(None, Node(None, Node(20), Node(29)), Node(None, Node(101), Node(None, Node(65), Node(54))))
dictionary = {54: 12, 101: 34, 29: 22, 65: 3, 20: 13}
optimize_tree(tree,dictionary)
print(tree)
</code></pre> <p>And the output of this is:</p> <pre><code>Node(None, Node(None, Node(20, None, None), Node(29, None, None)), Node(None, Node(101, None, None), Node(None, Node(65, None, None), Node(54, None, None))))
</code></pre>
0
2016-09-01T23:13:57Z
[ "python", "function", "optimization", "tree", "huffman-coding" ]
Pandas: pivot table
38,918,623
<p>I have df:</p> <pre><code>ID,url,used_at,active_seconds,domain,search_engine,diff_time,period,code, category 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630?hid=91491&amp;track=fr_same,2016-03-20 23:19:49,6,yandex.ru,None,78.0,515,100.0, Search system 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630?hid=91491&amp;track=fr_same,2016-03-20 23:20:01,26,yandex.ru,None,6.0,515,100.0, Social network 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&amp;track=pieces&amp;gfilter=1801946%3A1871375&amp;exc=1&amp;regprice=9&amp;how=dpop,2016-03-20 23:20:33,14,yandex.ru,None,6.0,515,100.0, Social network 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630/offers?hid=91491&amp;grhow=shop,2016-03-20 23:20:47,2,yandex.ru,None,14.0,515,100.0, Internet shop 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630/offers?hid=91491&amp;grhow=shop,2016-03-20 23:24:05,8,yandex.ru,None,196.0,515,100.0, Internet shop 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalogmodels.xml?hid=91491&amp;CAT_ID=160043&amp;nid=54726&amp;track=pieces,2016-03-20 23:24:13,32,yandex.ru,None,8.0,515,100.0, Search system 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&amp;track=fr_cm_shwall&amp;exc=1&amp;how=dpop,2016-03-20 23:24:45,16,yandex.ru,None,32.0,515,100.0, Internet shop 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalogmodels.xml?hid=91491&amp;CAT_ID=160043&amp;nid=54726&amp;track=pieces,2016-03-20 23:25:01,4,yandex.ru,None,16.0,515,100.0, Search system 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&amp;track=fr_cm_pop&amp;exc=1&amp;how=dpop,2016-03-20 23:25:05,10,yandex.ru,None,4.0,515,100.0, Social network 08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/11153512?hid=91491&amp;track=fr_same,2016-03-21 06:52:44,2,yandex.ru,None,14.0,516,100.0, Internet shop 
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&amp;track=pieces&amp;gfilter=1801946%3A1871375&amp;exc=1&amp;regprice=9&amp;how=dpop,2016-04-04 21:08:41,24,yandex.ru,None,20.0,562,100.0, Internet shop
0bc0898d3fe2e46158621c674effb458,market.yandex.ru/product/12259780?hid=91491&amp;show-uid=56508001849064882783001,2016-02-26 20:34:20,28,yandex.ru,yandex,10.0,1217,100.0, Social network
0bc0898d3fe2e46158621c674effb458,market.yandex.ru/product/12259780?hid=91491&amp;show-uid=56508001849064882783001,2016-02-26 20:34:50,1,yandex.ru,None,2.0,1217,100.0, Internet shop
</code></pre> <p>I need to build a <code>pivot_table</code>. I use</p> <pre><code>table = pd.pivot_table(df, values='domain', index=['ID'], columns=['category'], aggfunc=np.sum)
</code></pre> <p>The problem is that it concatenates the <code>domain</code> values, but I want to count the quantity of unique domains. How can I do that?</p>
3
2016-08-12T13:00:35Z
38,918,957
<p>It looks like you need:</p> <pre><code>table = pd.pivot_table(df, values='domain', index=['ID'], columns=['category'], aggfunc=lambda x: x.nunique())
print (table)

category                          Internet shop  Search system  \
ID
08cd0141663315ce71e0121e3cd8d91f            1.0            1.0
0bc0898d3fe2e46158621c674effb458            1.0            NaN

category                          Social network
ID
08cd0141663315ce71e0121e3cd8d91f             1.0
0bc0898d3fe2e46158621c674effb458             1.0
</code></pre> <p>Another, faster solution:</p> <pre><code>print (df.groupby(['ID','category'])['domain'].nunique().unstack())

category                          Internet shop  Search system  \
ID
08cd0141663315ce71e0121e3cd8d91f            1.0            1.0
0bc0898d3fe2e46158621c674effb458            1.0            NaN

category                          Social network
ID
08cd0141663315ce71e0121e3cd8d91f             1.0
0bc0898d3fe2e46158621c674effb458             1.0
</code></pre>
2
2016-08-12T13:16:14Z
[ "python", "pandas", "dataframe", "unique", "pivot-table" ]
pandas invalid literal for long() with base 10 error
38,918,653
<p>I am trying to do: <code>df['Num_Detections'] = df['Num_Detections'].astype(int)</code></p> <p>And I get the following error: </p> <blockquote> <p>ValueError: invalid literal for long() with base 10: '12.0'</p> </blockquote> <p>My data looks like the following:</p> <pre><code>&gt;&gt;&gt; df['Num_Detections'].head()
Out[6]:
sku_name
DOBRIY MORS GRAPE-CRANBERRY-RASBERRY 1L     12.0
AQUAMINERALE 5.0L                            9.0
DOBRIY PINEAPPLE 1.5L                        2.0
FRUKT.SAD APPLE 0.95L                      154.0
DOBRIY PEACH-APPLE 0.33L                    71.0
Name: Num_Detections, dtype: object
</code></pre> <p>Any idea how to do the conversion correctly?</p> <p>Thanks for the help.</p>
0
2016-08-12T13:01:53Z
38,918,706
<p>There is some value which cannot be converted to <code>int</code>.</p> <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> and get <code>NaN</code> where the problematic value is:</p> <pre><code>df['Num_Detections'] = pd.to_numeric(df['Num_Detections'], errors='coerce')
</code></pre> <p>If you need to check the rows with problematic values, use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with a mask built with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isnull.html" rel="nofollow"><code>isnull</code></a>:</p> <pre><code>print (df[ pd.to_numeric(df['Num_Detections'], errors='coerce').isnull()])
</code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'Num_Detections':[1,2,'a1']})
print (df)
  Num_Detections
0              1
1              2
2             a1

print (df[ pd.to_numeric(df['Num_Detections'], errors='coerce').isnull()])
  Num_Detections
2             a1

df['Num_Detections'] = pd.to_numeric(df['Num_Detections'], errors='coerce')
print (df)
   Num_Detections
0             1.0
1             2.0
2             NaN
</code></pre>
1
2016-08-12T13:03:44Z
[ "python", "pandas", "dataframe", "casting", "int" ]
applying dynamic solution to Pyqt5 connection
38,918,725
<p>Here is one simple code example from Zetcode.com that I am studying:</p> <pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-

"""
ZetCode PyQt5 tutorial

In this example, we create a skeleton
of a calculator using a QGridLayout.

author: Jan Bodnar
website: zetcode.com
last edited: January 2015
"""

import sys
from PyQt5.QtWidgets import (QWidget, QGridLayout, QPushButton, QApplication)


class Example(QWidget):

    def __init__(self):
        super().__init__()

        self.initUI()

    def initUI(self):

        grid = QGridLayout()
        self.setLayout(grid)

        names = ['Cls', 'Bck', '', 'Close',
                 '7', '8', '9', '/',
                 '4', '5', '6', '*',
                 '1', '2', '3', '-',
                 '0', '.', '=', '+']

        positions = [(i,j) for i in range(5) for j in range(4)]

        for position, name in zip(positions, names):

            if name == '':
                continue
            button = QPushButton(name)
            grid.addWidget(button, *position)

        self.move(300, 150)
        self.setWindowTitle('Calculator')
        self.show()


if __name__ == '__main__':

    app = QApplication(sys.argv)
    ex = Example()
    sys.exit(app.exec_())
</code></pre> <p>Now, I am trying to apply the <code>button.setDisable</code> option for every button which is clicked.</p> <p>One way is to create a new list and append each button to the created list. From this list, we could do the following:</p> <pre><code>button_list[0].clicked.connect(self.on_click)
button_list[1].clicked.connect(self.on_click1)
</code></pre> <p>And for each new method, we would then need to define:</p> <pre><code>def on_click(self):
    button_list[0].setEnabled(False)
</code></pre> <p>This is a solution that works. But is there any more dynamic way to solve this issue?</p> <p>Would appreciate any ideas.</p>
1
2016-08-12T13:04:30Z
38,918,997
<p>With <code>lambda</code> or <code>functools.partial</code>, you can do just that:</p> <pre><code>def on_click(self, numb):
    button_list[numb].setEnabled(False)
</code></pre> <p>Invoke with <code>lambda</code>:</p> <pre><code>button_list[1].clicked.connect(lambda: self.on_click(1))
</code></pre> <p>Or with <code>partial</code>:</p> <pre><code>from functools import partial

button_list[1].clicked.connect(partial(self.on_click, 1))
</code></pre>
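One gotcha if you wire the buttons up in a loop: a bare `lambda` closes over the loop variable itself, not its current value, so every callback sees the final value. `partial` (or a default-argument lambda) binds the value at definition time. A quick plain-Python demonstration, with no Qt involved:

```python
from functools import partial

def on_click(numb):
    return numb

# Each lambda closes over the same loop variable i:
lambdas = [lambda: on_click(i) for i in range(3)]
print([f() for f in lambdas])    # [2, 2, 2] -- every lambda sees the last i

# partial captures the value of i at creation time:
partials = [partial(on_click, i) for i in range(3)]
print([f() for f in partials])   # [0, 1, 2] -- each call is bound to its own value
```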
0
2016-08-12T13:18:38Z
[ "python", "user-interface", "dynamic", "pyqt5" ]
Smartsheet Cell History check
38,918,841
<p>My goal is to pull a cell's history if it was modified after a user specified date. However, since the cell history call is so big, I don't really want to request the entire cell history until after I check the dates. This is my code at current. </p> <pre><code># get the cell history
action = smartsheet.Cells.get_cell_history(
    this_sheet.id,
    row.id,
    this_sheet.columns[c].id,
    include_all=True
)

revisions = action.data
for rev in revisions:
    if rev.modified_at &gt; date_of_interest:
        ## print out information from this specific revision
</code></pre> <p>I'd really like to do the date comparison prior to the call for the cell history or at least not call the entire cell history before I know that the information is going to be of interest. Surely there is a better way to do this. Thoughts? </p>
0
2016-08-12T13:09:30Z
38,928,521
<p>You could do a <a href="http://smartsheet-platform.github.io/api-docs/?python#get-sheet" rel="nofollow">GET sheet</a> request first and use that to see the last modified dates of the rows. For rows whose last modified date falls after your given date, you'll know it is worth getting the cell history. This works well if you are looking for data that was modified recently; if you are looking for older data, you would still need to get the cell history.</p>
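The filtering step could look something like this sketch — plain dicts stand in for the SDK's row objects here, and the helper name is made up (with the real SDK you would read `row.modified_at` from the GET sheet response instead):

```python
from datetime import datetime

def rows_worth_checking(rows, cutoff):
    """Return only the rows modified after `cutoff` (hypothetical helper)."""
    return [row for row in rows if row["modified_at"] > cutoff]

# Stand-in data imitating rows from a GET sheet response:
rows = [
    {"id": 1, "modified_at": datetime(2016, 7, 1)},
    {"id": 2, "modified_at": datetime(2016, 8, 12)},
]

recent = rows_worth_checking(rows, datetime(2016, 8, 10))
print([row["id"] for row in recent])  # [2] -- only this row warrants a cell-history call
```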
0
2016-08-13T02:07:55Z
[ "python", "date", "smartsheet-api" ]
Custom formset validation not working in django
38,918,867
<p>In my formset I would like to check each reading against a target; if the reading is larger than the target, do not save it to the db. For some reason I can't get this to work correctly, because it still allows me to save. Any help would be greatly appreciated!</p> <p>All in views.py:</p> <pre><code># custom formset validation
def get_dim_target(target):
    dim = Dimension.objects.values_list('target', flat=True).filter(id=target)
    return dim

# custom formset validation
class BaseInspecitonFormSet(BaseFormSet):
    def insp_clean(self):
        if any(self.errors):
            return
        readings = []
        for form in self.forms:
            dim_id = form.cleaned_data['dimension_id']
            reading = form.cleaned_data['reading']
            target = get_dim_target(dim_id)
            if reading &gt; target:
                raise forms.ValidationError("Reading larger than target")
            readings.append(reading)

# formset
def update_inspection_vals(request, dim_id=None):
    dims_data = Dimension.objects.filter(id=dim_id)
    can_delete = False
    dims = Dimension.objects.get(pk=dim_id)
    sheet_data = Sheet.objects.get(pk=dims.sheet_id)
    serial_sample_number = Inspection_vals.objects.filter(dimension_id=24).values_list('serial_number', flat=True)[0]
    target = Dimension.objects.filter(id=24).values_list('target', flat=True)[0]
    title_head = 'Inspect-%s' % dims.description
    if dims.ref_dim_id == 1:
        inspection_inline_formset = inlineformset_factory(Dimension, Inspection_vals,
                                                          can_delete=False,
                                                          extra=0,
                                                          fields=('reading',),
                                                          widgets={
                                                              'reading': forms.TextInput(attrs={'cols': 80, 'rows': 20})
                                                          })
        if request.method == "POST":
            formset = inspection_inline_formset(request.POST, request.FILES, instance=dims)
            if formset.is_valid():
                new_instance = formset.save(commit=False)
                for n_i in new_instance:
                    n_i.created_at = datetime.datetime.now()
                    n_i.updated_at = datetime.datetime.now()
                    n_i.save()
            else:
                form_errors = formset.errors
                formset.non_form_errors()
            return HttpResponseRedirect(request.META.get('HTTP_REFERER', '/'))
        else:
            formset = inspection_inline_formset(instance=dims,
                                                queryset=Inspection_vals.objects.filter(dimension_id=dim_id).order_by('serial_number'))
    return render(request, 'app/inspection_vals.html', {
        'formset': formset,
        'dim_data': dims_data,
        'title': title_head,
        'dim_description': dims.description,
        'dim_target': dims.target,
        'work_order': sheet_data.work_order,
        'customer_name': sheet_data.customer_name,
        'serial_sample_number': serial_sample_number,
    })
</code></pre> <p>inspection_val.html</p> <pre><code>&lt;h1&gt;Inspection Values&lt;/h1&gt;
&lt;div class="well"&gt;
    &lt;form method="post"&gt;
        {% csrf_token %}
        &lt;table&gt;
            {{ formset.management_form }}
            {% for x in formset.forms %}
            &lt;tr&gt;
                &lt;td&gt;
                    Sample Number {{ forloop.counter0|add:serial_sample_number }}
                &lt;/td&gt;
                &lt;td&gt;
                    {{ x }}
                    {{ x.errors }}
                &lt;/td&gt;
            &lt;/tr&gt;
            {% endfor %}
        &lt;/table&gt;
        &lt;input type="submit" value="Submit Values" class="btn-primary" /&gt;
    &lt;/form&gt;
&lt;/div&gt;
&lt;/div&gt;
</code></pre>
0
2016-08-12T13:10:54Z
38,919,196
<p>The Django docs on <a href="https://docs.djangoproject.com/en/1.10/topics/forms/formsets/#custom-formset-validation" rel="nofollow">custom formset validation</a> show that you should create a <code>clean</code> method.</p> <p>You have named your method <code>insp_clean</code>, so Django will never call it. Rename the method to <code>clean</code>.</p>
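To see why the name matters, here is a minimal stand-in (not Django itself, just a sketch of the hook pattern) showing that the framework's `full_clean()` only ever calls a method named exactly `clean`:

```python
class BaseFormSet:
    """Minimal stand-in for Django's BaseFormSet: the framework's
    full_clean() invokes the hook only when it is named exactly `clean`."""
    def full_clean(self):
        self.clean()              # Django calls self.clean(), never insp_clean

    def clean(self):              # default hook: no formset-wide validation
        pass

class BrokenFormSet(BaseFormSet):
    def insp_clean(self):         # wrong name, so full_clean() never runs it
        self.ran = True

class FixedFormSet(BaseFormSet):
    def clean(self):              # correct name overrides the hook
        self.ran = True

broken, fixed = BrokenFormSet(), FixedFormSet()
broken.ran, fixed.ran = False, False
broken.full_clean()
fixed.full_clean()
print(broken.ran, fixed.ran)      # False True
```

Note also that your method reads `form.cleanded_data`, while Django's attribute is spelled `cleaned_data`, so that line would raise an AttributeError even after the rename.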
1
2016-08-12T13:29:15Z
[ "python", "django", "validation" ]
Finding all the bounding rectangles of all non-transparent regions in PIL
38,918,891
<p>I have a transparent-background image with some non-transparent text.</p> <p>And I want to find all the bounding boxes of each individual word in the text.</p> <p>Here is the code about creating a transparent image and draw some text ("Hello World", for example) , after that, do affine transform and thumbnail it.</p> <pre><code>from PIL import Image, ImageFont, ImageDraw, ImageOps import numpy as np fontcolor = (255,255,255) fontsize = 180 # padding rate for setting the image size of font fimg_padding = 1.1 # check code bbox padding rate bbox_gap = fontsize * 0.05 # Rrotation +- N degree # Choice a font type for output--- font = ImageFont.truetype('Fonts/Bebas.TTF', fontsize) # the text is "Hello World" code = "Hello world" # Get the related info of font--- code_w, code_h = font.getsize(code) # Setting the image size of font--- img_size = int((code_w) * fimg_padding) # Create a RGBA image with transparent background img = Image.new("RGBA", (img_size,img_size),(255,255,255,0)) d = ImageDraw.Draw(img) # draw white text code_x = (img_size-code_w)/2 code_y = (img_size-code_h)/2 d.text( ( code_x, code_y ), code, fontcolor, font=font) # img.save('initial.png') # Transform the image--- img = img_transform(img) # crop image to the size equal to the bounding box of whole text alpha = img.split()[-1] img = img.crop(alpha.getbbox()) # resize the image img.thumbnail((512,512), Image.ANTIALIAS) # img.save('myimage.png') # what I want is to find all the bounding box of each individual word boxes=find_all_bbx(img) </code></pre> <p>Here is the code about affine transform (provided here for those who want to do some experiment)</p> <pre><code>def find_coeffs(pa, pb): matrix = [] for p1, p2 in zip(pa, pb): matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0]*p1[0], -p2[0]*p1[1]]) matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1]*p1[0], -p2[1]*p1[1]]) A = np.matrix(matrix, dtype=np.float) B = np.array(pb).reshape(8) res = np.dot(np.linalg.inv(A.T * A) * A.T, B) return 
np.array(res).reshape(8) def rand_degree(st,en,gap): return (np.fix(np.random.random()* (en-st) * gap )+st) def img_transform(img): width, height = img.size print img.size m = -0.5 xshift = abs(m) * width new_width = width + int(round(xshift)) img = img.transform((new_width, height), Image.AFFINE, (1, m, -xshift if m &gt; 0 else 0, 0, 1, 0), Image.BICUBIC) range_n = width*0.2 gap_n = 1 x1 = rand_degree(0,range_n,gap_n) y1 = rand_degree(0,range_n,gap_n) x2 = rand_degree(width-range_n,width,gap_n) y2 = rand_degree(0,range_n,gap_n) x3 = rand_degree(width-range_n,width,gap_n) y3 = rand_degree(height-range_n,height,gap_n) x4 = rand_degree(0,range_n,gap_n) y4 = rand_degree(height-range_n,height,gap_n) coeffs = find_coeffs( [(x1, y1), (x2, y2), (x3, y3), (x4, y4)], [(0, 0), (width, 0), (new_width, height), (xshift, height)]) img = img.transform((width, height), Image.PERSPECTIVE, coeffs, Image.BICUBIC) return img </code></pre> <p>How to implement <code>find_all_bbx</code> to find the bounding box of each individual word?</p> <p>For example, one of the box can be found in 'H' ( you can download the image to see the partial result).</p> <p><a href="http://i.stack.imgur.com/wh6o7.png" rel="nofollow"><img src="http://i.stack.imgur.com/wh6o7.png" alt="result"></a></p>
1
2016-08-12T13:12:23Z
38,921,416
<p>For what you want to do you need to label the individual words and then compute the bounding box of each object with the same label. The most straightforward approach here is just taking the min and max positions of the pixels that make up each word. The labeling is a little bit more difficult. For example, you could use a morphological operation to combine the letters of the words (e.g. a dilation, <a href="http://pillow.readthedocs.io/en/3.1.x/reference/ImageMorph.html" rel="nofollow">see PIL documentation</a>) and then use <code>ImageDraw.floodfill</code>. Or you could try to anticipate the positions of the words from the position where you first draw the text (<code>code_x</code> and <code>code_y</code>), the chosen font, the size of the letters, and the spacing (this will be trickier, I think).</p>
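If the text stays roughly horizontal after your transforms, a simpler alternative to morphological labeling is to project the alpha mask onto the x-axis and split words at runs of empty columns. This is only a sketch under that assumption; the function name, the gap threshold, and the synthetic test mask below are all made up for illustration:

```python
import numpy as np

def word_bboxes(alpha, min_gap=3):
    """Return one (left, upper, right, lower) box per word, in PIL's
    convention (right/lower exclusive), from a binary alpha mask of
    roughly horizontal text."""
    cols = np.flatnonzero(alpha.any(axis=0))   # x positions containing any text
    if cols.size == 0:
        return []
    # split where the jump between consecutive filled columns is >= min_gap
    breaks = np.flatnonzero(np.diff(cols) >= min_gap)
    starts = np.r_[cols[0], cols[breaks + 1]]
    ends = np.r_[cols[breaks], cols[-1]] + 1
    boxes = []
    for x0, x1 in zip(starts, ends):
        rows = np.flatnonzero(alpha[:, x0:x1].any(axis=1))
        boxes.append((int(x0), int(rows[0]), int(x1), int(rows[-1]) + 1))
    return boxes

# Two synthetic "words" separated by a 5-column gap
mask = np.zeros((10, 30), dtype=bool)
mask[2:8, 1:10] = True     # word 1
mask[3:9, 15:28] = True    # word 2
print(word_bboxes(mask))   # [(1, 2, 10, 8), (15, 3, 28, 9)]
```

In your code, `alpha` would come from `np.array(img.split()[-1]) > 0`; after a strong perspective transform the horizontal-text assumption may break down, and the labeling approaches above are more robust.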
0
2016-08-12T15:23:03Z
[ "python", "numpy", "image-processing", "python-imaging-library", "bounding-box" ]
Project Euler #17 Python - "Number letter counts"
38,918,947
<p>Here is the question:</p> <p>If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.</p> <p>If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?</p> <p>NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.</p> <p>I don't understand why my code won't output the correct answer. I've checked many times and I can't find anything that I've missed. Here is my code:</p> <pre><code>to_19 = [0,3,3,5,4,4,3,5,5,4,3,6,6,8,8,7,7,9,8,8] tens = [0,3,6,6,5,5,5,7,6,6] hundred = 7 thousand = 8 total = 0 for i in range(1,1000): u = i%10 t = int(((i%100)-u) /10) h = int(((i%1000)-(t*10)-u) /100) print(h,t,u) if i &lt; 20: #the number is less than 20 total += to_19[i] elif h != 0 and (t != 0 or u != 0): #the number is over 100 but not a multiple of 100 if t == 0 or t == 1: #the number is between x01 and x19 total += to_19[h] + hundred + 3 + to_19[(t * 10) + u] else: #the number is between x20 and x99 total += to_19[h] + hundred + 3 + tens[t] + to_19[u] elif t == 0 and u == 0: #the number is a multiple of 100 total += to_19[h] + hundred else: #the number is between 20 and 99 total += tens[t] + to_19[u] print(total+thousand) #21121 is wrong </code></pre> <p>Thanks in advance!</p>
-1
2016-08-12T13:16:04Z
38,919,144
<p>The code seems reasonable enough. I didn't put much thought into your definition of <code>u</code>, <code>t</code>, and <code>h</code>, but it seems correct. The only thing I noticed you are missing is the "one" from "one thousand".</p>
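Adding the three letters of that "one" to your 21121 gives 21124, the expected total. For a cross-check, here is a compact, independent rewrite of the same letter-count tables (the `letters` helper is my own naming, not from your code):

```python
to_19 = [0,3,3,5,4,4,3,5,5,4,3,6,6,8,8,7,7,9,8,8]
tens = [0,3,6,6,5,5,5,7,6,6]
hundred, thousand = 7, 8   # len("hundred"), len("thousand")

def letters(n):
    """Letters used to write n (1..1000) in British English, no spaces/hyphens."""
    if n < 20:
        return to_19[n]
    if n < 100:
        return tens[n // 10] + to_19[n % 10]
    if n < 1000:
        h, rest = divmod(n, 100)
        base = to_19[h] + hundred
        return base if rest == 0 else base + 3 + letters(rest)  # +3 for "and"
    return to_19[1] + thousand   # "one" + "thousand" = 11 letters

print(sum(letters(i) for i in range(1, 1001)))  # 21124
```

The two worked examples from the problem statement check out too: `letters(342) == 23` and `letters(115) == 20`.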
2
2016-08-12T13:26:51Z
[ "python", "list", "math" ]
Modify a specific x-axis tick label in python
38,918,971
<p>I am an undergrad and new to Python's pyplot. What I want to do is plot a function against a sequence, for example <code>x = [1,2,3,4,5]</code>.</p> <p>The <code>pyplot.plot</code> function naturally gives a nice figure. But I want to replace the x-axis tick label "2" with a string, say "key point", and at the same time make other tick labels, for example "4" and "5", invisible. How can I achieve this in <code>pyplot</code>?</p> <p>Your help will be highly appreciated.</p>
-2
2016-08-12T13:17:10Z
38,919,537
<p>This is how you do it:</p> <pre><code>from matplotlib import pyplot as plt x = [1,2,3,4,5] y = [1,2,0,2,1] plt.clf() plt.plot(x,y,'o-') ax = plt.gca() # grab the current axis ax.set_xticks([1,2,3]) # choose which x locations to have ticks ax.set_xticklabels([1,"key point",3]) # set the labels to display at those ticks </code></pre> <p>By omitting 4 and 5 from your xtick list, they won't be shown. Note that the labels are applied positionally, so the label list must line up with the tick locations (here the tick at 2 gets the text "key point").</p>
0
2016-08-12T13:45:52Z
[ "python", "matplotlib" ]
Loop in python for moving averages based on condition
38,919,003
<p>i have 5 weeks sales data of product and store combination. Out of the 5 weeks , first three of them can be on promotions. Either 1st is on promotion, or 2nd or 3rd or all three or none of them are on promotions. so in total 8 conditions will be there. now i want to calculate a moving average kind of thing in python using loops. in simple terms i will calculate the average based on the check if a particular week is on promotion then calculate it differently. <strong>(start and end of loops for each week are as follows</strong> .</p> <p><strong>Table 1</strong></p> <pre><code> week week week week week 1 2 3 4 5 loop_start_1 1 1 1 2 3 loop_end_1 3 4 5 5 5 loop_start_2 1 1 2 3 4 loop_end_2 2 3 4 5 5 </code></pre> <p>for week 1 i will take the average of 1st , 2nd and 3rd week, but for week 2 i will take average of 1,2,3,4 and so on. now this condition will change if a week is on promotion. Input data that i have as:</p> <p><strong>Table 2</strong></p> <pre><code>prod_id store_id sales_week1 sales_week2 sales_week3 sales_week4 sales_week5 promo_1 promo_2 promo_3 promo_4 promo_5 12345 22222 40 44 50 20 21 0 0 0 0 0 12346 22222 82 85 51 72 97 1 0 0 0 0 12347 22222 74 113 31 25 19 0 1 0 0 0 12348 22222 74 105 195 216 142 0 0 1 0 0 12349 22222 78 81 23 10 67 1 1 0 0 0 12243 22222 159 190 223 137 89 0 1 1 0 0 12240 22222 591 457 556 278 726 1 0 1 0 0 22240 22222 76 49 84 132 121 1 1 1 0 0 </code></pre> <p>At max there can be 8 different combinations. based on which week out of the first three we have promotion. 
i want the out as follows( moving average based on the loops mentioned below)</p> <p><strong>Table 3</strong></p> <pre><code>prod_id store_id sales_week1 sales_week2 sales_week3 sales_week4 sales_week5 promo_1 promo_2 promo_3 promo_4 promo_5 12345 22222 44.6667 38.5 35 33.75 30.3333 0 0 0 0 0 12346 22222 82 69.3333 76.25 76.25 73.3333 1 0 0 0 0 12347 22222 52.5 113 37.25 25 25 0 1 0 0 0 12348 22222 89.5 131.6667 195 154.3333 179 0 0 1 0 0 12349 22222 79.5 79.5 33.3333 33.3333 33.3333 1 1 0 0 0 12243 22222 159 206.5 206.5 113 113 0 1 1 0 0 12240 22222 591 367.5 556 487 502 1 0 1 0 0 22240 22222 62.5 69.6667 66.5 126.5 126.5 1 1 1 0 0 </code></pre> <p>now i have a dummy code(8 conditions) that i want to convert to loops based on promo condition. Dont want to write this many if conditions. Please help thanks in advance.</p> <pre><code>if promo_1 =1 then: sales_new[1]=sales_old[1] sales_new[2]=average(sales_old[2],sales_old[3],sales_old[4]) sales_new[3]=avergae(sales_old[2],sales_old[3],sales_old[4],sales[5]) sales_new[4]=avergae(sales_old[2],sales_old[3],sales_old[4],sales[5]) sales_new[5]=avergae(sales_old[3],sales_old[4],sales[5]) if promo_2 =1 then: sales_new[1]=average(sales_old[1],sales_old[3]) sales_new[2]=sales_old[2] sales_new[3]=avergae(sales_old[1],sales_old[3],sales_old[4],sales[5]) sales_new[4]=avergae(sales_old[3],sales_old[4],sales[5]) sales_new[5]=avergae(sales_old[3],sales_old[4],sales[5]) if promo_3 =1 then: sales_new[1]=average(sales_old[1],sales_old[2]) sales_new[2]=average(sales_old[1],sales[2],sales[4] sales_new[3]=sales_old[3] sales_new[4]=avergae(sales_old[2],sales_old[4],sales[5]) sales_new[5]=avergae(sales_old[4],sales[5]) if promo_1 =1 and promo_2=1 then: sales_new[1]=average(sales_old[1],sales_old[2]) sales_new[2]=average(sales_old[1],sales_old[2]) sales_new[3]=average(sales_old[3],sales_old[4],sales_old[5]) sales_new[4]=average(sales_old[3],sales_old[4],sales_old[5]) sales_new[5]=average(sales_old[3],sales_old[4],sales_old[5]) if promo_2 =1 
and promo_3=1 then: sales_new[1]=sales_old[1] sales_new[2]=average(sales_old[2],sales_old[3]) sales_new[3]=average(sales_old[2],sales_old[3]) sales_new[4]=average(sales_old[4],sales_old[5]) sales_new[4]=average(sales_old[4],sales_old[5]) if promo_1 =1 and promo_3=1 then: sales_new[1]=sales_old[1] sales_new[2]=average(sales_old[2],sales_old[4]) sales_new[3]=sales_old[3] sales_new[4]=average(sales_old[2],sales_old[4],sales_old[5]) sales_new[4]=average(sales_old[4],sales_old[5]) if promo_1 =1 and promo_2=1 and promo_3=1 then: sales_new[1]=average(sales_old[1],sales_old[2]) sales_new[2]=average(sales_old[1],sales_old[2],sales_old[3]) sales_new[3]=average(sales_old[2],sales[3]) sales_new[4]=average(sales_old[4],sales_old[5]) sales_new[5]=average(sales_old[4],sales_old[5]) if promo_1 =0 and promo_2=0 and promo_3=0 then: sales_new[1]=average(sales_old[1],sales_old[2],sales_old[3]) sales_new[2]=average(sales_old[1],sales_old[2],sales_old[3],sales_old[4]) sales_new[3]=average(sales_old[1],sales_old[2],sales_old[3],sales_old[4],sales_old[5]) sales_new[3]=average(,sales_old[2],sales_old[3],sales_old[4],sales_old[5]) sales_new[5]=average(sales_old[3],sales_old[4],sales_old[5]) </code></pre> <p><strong>Business Rules</strong></p> <p>In this section I provide the business rules to clarify the requirements for my code.</p> <p>At the highest level, my goal is to calculate moving averages of sales. I do this through a script that calculates new sales data (incorporating these averages) from "old" (raw) data.</p> <p>For any week that is <strong>not promoted</strong> we will consider <strong>loop_start_1 and loop_end_1</strong>, but if a week is <strong>promoted</strong> then we will consider <strong>loop_start_2 and loop_end_2</strong></p> <p>One stipulation is that each new week sales data is - <em>by default</em> - derived from a subset of the old week sales data. This is specified in Table 1. 
For example, the new sales data for week 1 is (by default) calculated from the old sales data for weeks 1-3 if week is not promoted.</p> <p>However, each week may or may not be a "promotion" week, and this will affect whether that week's old sales data can be used in calculating the new sales data. <em>This overrides whatever Table 1 <strong>loop_start_1 and loop_end_1</strong> specifies</em>. For promotions we will consider <strong>loop_start_2 and loop_end_2</strong> in Table 1 In particular:</p> <ul> <li>if none of the week are on promotion we will consider loop_start_1 and loop_end_1 else </li> <li>if any week of out the first three are promoted then we will consider loop_start_2 and loop_end_2</li> </ul> <p>For example, if week 1 is on promotion, then we'd need to do the following:</p> <ul> <li><p>Calculate <code>week_1_new = avg(week_1_old)</code>.</p> <ul> <li>Table 1 specifies we use old weeks 1-2.<strong>(loop_start_2 and loop_end_2)</strong></li> <li>Week 1 is on promotion, week 2 isn't - so just take week 1</li> </ul></li> <li><p>Calculate <code>week_2_new = avg(week_2_old, week_3_old, week_4_old)</code></p> <ul> <li>Table 1 specifies we use old weeks 1-4.(loop_start_1 and loop_end_1)</li> <li>Week 1 is on promotion, weeks 2, 3 &amp; 4 aren't - so the majority of weeks aren't on promotion. so promoted sales will be inflated so we would exclude them from calculation of average</li> <li>We consider data only from those weeks that aren't on promotion - 2, 3 &amp; 4.</li> </ul></li> <li>...</li> </ul> <p>On the other hand, if both weeks 1 and 2 were on promotion, then consider how we'd calculate the new sales data for week 1:</p> <ul> <li><code>week_1_new = avg(week_1_old, week_2_old)</code> <ul> <li>Table 1 specifies we use old weeks 1-2.(loop_start_2 and loop_end_2)</li> <li>Weeks 1 &amp; 2 are on promotion - so the majority of weeks <em>are</em> on promotion. 
we take both the weeks for average</li> <li>We consider data only from those weeks that are on promotion - 1 &amp; 2.</li> </ul></li> </ul> <p>On the other hand, if all week 1 and week 3 were on promotion, then consider how we'd calculate the new sales data for week 1:</p> <ul> <li><code>week_1_new = avg(week_1_old)</code> <strong>(loop_start_2 and loop_end_2)</strong> out of 1-2 only 1st week is on promotion. we never add the sales when a product is promoted with the week when it is not promoted. coz this would inflate the average.</li> <li><code>week_2_new = avg(week_2_old,week_4_old)</code><strong>((loop_start_1 and loop_end_1 as out of 1-4 only 2 and 4 are not promoted)</strong></li> <li><p><code>week_3_new = avg(week_3_old)</code></p> <ul> <li>We consider data only from those weeks that are on promotion - 1 &amp; 3 </li> </ul></li> </ul> <p>On the other hand, if all weeks 1 to 3 were on promotion, then consider how we'd calculate the new sales data for week 1:</p> <ul> <li><code>week_1_new = avg(week_1_old,week_2_old)</code>as we need (<strong>loop_start_2 and loop_end_2</strong>)</li> <li><code>week_2_new = avg(week_1_old,week_2_old,week_4_old)</code>as (<strong>loop_start_2 and loop_end_2</strong>) states week 1-3 to be considered for promoted weeeks for week 2</li> <li><code>week_3_new = avg(week_2_old,week_3_old)</code>as (<strong>loop_start_2 and loop_end_2</strong>) states for week 2-4. but out of 2,3,4 only 2 and 3 are promoted.so just take the promoted weeks</li> </ul>
-2
2016-08-12T13:18:58Z
38,919,317
<p>would this make any sense?</p> <pre><code>sales_new[1] = average(sales_old[1], sales_old[2]) sales_new[2] = average(sales_old[1], sales_old[2], sales_old[3]) sales_new[3] = average(sales_old[2], sales_old[3], sales_old[4]) sales_new[4] = average(sales_old[3], sales_old[4], sales_old[5]) sales_new[5] = average(sales_old[4], sales_old[5]) if promo_1: sales_new[1] = sales_old[1] if promo_2: sales_new[2] = sales_old[2] if promo_3: sales_new[3] = sales_old[3] if promo_4: sales_new[4] = sales_old[4] if promo_5: sales_new[5] = sales_old[5] </code></pre> <p>Calculate a 'running' average first, and then overwrite it for each promoted week.</p> <p>Note: your arrays will probably be zero-indexed, so they will start at sales_new[0].</p>
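As a quick sanity check of the windowing itself (no promotion handling yet), the loop_start_1/loop_end_1 windows from Table 1 reproduce the first, promotion-free row of Table 3 exactly:

```python
sales = [40, 44, 50, 20, 21]   # prod 12345 from Table 2: no week promoted
# (start, end) per week from Table 1, rows loop_start_1 / loop_end_1
# (1-based and inclusive, matching the question's notation)
windows = [(1, 3), (1, 4), (1, 5), (2, 5), (3, 5)]

sales_new = [sum(sales[s - 1:e]) / (e - s + 1) for s, e in windows]
print([round(v, 4) for v in sales_new])
# [44.6667, 38.5, 35.0, 33.75, 30.3333]  -- matches Table 3 row 1
```

The promoted cases then reduce to picking the loop_start_2/loop_end_2 windows and filtering each window down to the majority promotion status, per the business rules in the question.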
0
2016-08-12T13:35:24Z
[ "python", "loops", "python-3.x", "for-loop" ]
Loop in python for moving averages based on condition
38,919,003
<p>i have 5 weeks sales data of product and store combination. Out of the 5 weeks , first three of them can be on promotions. Either 1st is on promotion, or 2nd or 3rd or all three or none of them are on promotions. so in total 8 conditions will be there. now i want to calculate a moving average kind of thing in python using loops. in simple terms i will calculate the average based on the check if a particular week is on promotion then calculate it differently. <strong>(start and end of loops for each week are as follows</strong> .</p> <p><strong>Table 1</strong></p> <pre><code> week week week week week 1 2 3 4 5 loop_start_1 1 1 1 2 3 loop_end_1 3 4 5 5 5 loop_start_2 1 1 2 3 4 loop_end_2 2 3 4 5 5 </code></pre> <p>for week 1 i will take the average of 1st , 2nd and 3rd week, but for week 2 i will take average of 1,2,3,4 and so on. now this condition will change if a week is on promotion. Input data that i have as:</p> <p><strong>Table 2</strong></p> <pre><code>prod_id store_id sales_week1 sales_week2 sales_week3 sales_week4 sales_week5 promo_1 promo_2 promo_3 promo_4 promo_5 12345 22222 40 44 50 20 21 0 0 0 0 0 12346 22222 82 85 51 72 97 1 0 0 0 0 12347 22222 74 113 31 25 19 0 1 0 0 0 12348 22222 74 105 195 216 142 0 0 1 0 0 12349 22222 78 81 23 10 67 1 1 0 0 0 12243 22222 159 190 223 137 89 0 1 1 0 0 12240 22222 591 457 556 278 726 1 0 1 0 0 22240 22222 76 49 84 132 121 1 1 1 0 0 </code></pre> <p>At max there can be 8 different combinations. based on which week out of the first three we have promotion. 
i want the out as follows( moving average based on the loops mentioned below)</p> <p><strong>Table 3</strong></p> <pre><code>prod_id store_id sales_week1 sales_week2 sales_week3 sales_week4 sales_week5 promo_1 promo_2 promo_3 promo_4 promo_5 12345 22222 44.6667 38.5 35 33.75 30.3333 0 0 0 0 0 12346 22222 82 69.3333 76.25 76.25 73.3333 1 0 0 0 0 12347 22222 52.5 113 37.25 25 25 0 1 0 0 0 12348 22222 89.5 131.6667 195 154.3333 179 0 0 1 0 0 12349 22222 79.5 79.5 33.3333 33.3333 33.3333 1 1 0 0 0 12243 22222 159 206.5 206.5 113 113 0 1 1 0 0 12240 22222 591 367.5 556 487 502 1 0 1 0 0 22240 22222 62.5 69.6667 66.5 126.5 126.5 1 1 1 0 0 </code></pre> <p>now i have a dummy code(8 conditions) that i want to convert to loops based on promo condition. Dont want to write this many if conditions. Please help thanks in advance.</p> <pre><code>if promo_1 =1 then: sales_new[1]=sales_old[1] sales_new[2]=average(sales_old[2],sales_old[3],sales_old[4]) sales_new[3]=avergae(sales_old[2],sales_old[3],sales_old[4],sales[5]) sales_new[4]=avergae(sales_old[2],sales_old[3],sales_old[4],sales[5]) sales_new[5]=avergae(sales_old[3],sales_old[4],sales[5]) if promo_2 =1 then: sales_new[1]=average(sales_old[1],sales_old[3]) sales_new[2]=sales_old[2] sales_new[3]=avergae(sales_old[1],sales_old[3],sales_old[4],sales[5]) sales_new[4]=avergae(sales_old[3],sales_old[4],sales[5]) sales_new[5]=avergae(sales_old[3],sales_old[4],sales[5]) if promo_3 =1 then: sales_new[1]=average(sales_old[1],sales_old[2]) sales_new[2]=average(sales_old[1],sales[2],sales[4] sales_new[3]=sales_old[3] sales_new[4]=avergae(sales_old[2],sales_old[4],sales[5]) sales_new[5]=avergae(sales_old[4],sales[5]) if promo_1 =1 and promo_2=1 then: sales_new[1]=average(sales_old[1],sales_old[2]) sales_new[2]=average(sales_old[1],sales_old[2]) sales_new[3]=average(sales_old[3],sales_old[4],sales_old[5]) sales_new[4]=average(sales_old[3],sales_old[4],sales_old[5]) sales_new[5]=average(sales_old[3],sales_old[4],sales_old[5]) if promo_2 =1 
and promo_3=1 then: sales_new[1]=sales_old[1] sales_new[2]=average(sales_old[2],sales_old[3]) sales_new[3]=average(sales_old[2],sales_old[3]) sales_new[4]=average(sales_old[4],sales_old[5]) sales_new[4]=average(sales_old[4],sales_old[5]) if promo_1 =1 and promo_3=1 then: sales_new[1]=sales_old[1] sales_new[2]=average(sales_old[2],sales_old[4]) sales_new[3]=sales_old[3] sales_new[4]=average(sales_old[2],sales_old[4],sales_old[5]) sales_new[4]=average(sales_old[4],sales_old[5]) if promo_1 =1 and promo_2=1 and promo_3=1 then: sales_new[1]=average(sales_old[1],sales_old[2]) sales_new[2]=average(sales_old[1],sales_old[2],sales_old[3]) sales_new[3]=average(sales_old[2],sales[3]) sales_new[4]=average(sales_old[4],sales_old[5]) sales_new[5]=average(sales_old[4],sales_old[5]) if promo_1 =0 and promo_2=0 and promo_3=0 then: sales_new[1]=average(sales_old[1],sales_old[2],sales_old[3]) sales_new[2]=average(sales_old[1],sales_old[2],sales_old[3],sales_old[4]) sales_new[3]=average(sales_old[1],sales_old[2],sales_old[3],sales_old[4],sales_old[5]) sales_new[3]=average(,sales_old[2],sales_old[3],sales_old[4],sales_old[5]) sales_new[5]=average(sales_old[3],sales_old[4],sales_old[5]) </code></pre> <p><strong>Business Rules</strong></p> <p>In this section I provide the business rules to clarify the requirements for my code.</p> <p>At the highest level, my goal is to calculate moving averages of sales. I do this through a script that calculates new sales data (incorporating these averages) from "old" (raw) data.</p> <p>For any week that is <strong>not promoted</strong> we will consider <strong>loop_start_1 and loop_end_1</strong>, but if a week is <strong>promoted</strong> then we will consider <strong>loop_start_2 and loop_end_2</strong></p> <p>One stipulation is that each new week sales data is - <em>by default</em> - derived from a subset of the old week sales data. This is specified in Table 1. 
For example, the new sales data for week 1 is (by default) calculated from the old sales data for weeks 1-3 if week is not promoted.</p> <p>However, each week may or may not be a "promotion" week, and this will affect whether that week's old sales data can be used in calculating the new sales data. <em>This overrides whatever Table 1 <strong>loop_start_1 and loop_end_1</strong> specifies</em>. For promotions we will consider <strong>loop_start_2 and loop_end_2</strong> in Table 1 In particular:</p> <ul> <li>if none of the week are on promotion we will consider loop_start_1 and loop_end_1 else </li> <li>if any week of out the first three are promoted then we will consider loop_start_2 and loop_end_2</li> </ul> <p>For example, if week 1 is on promotion, then we'd need to do the following:</p> <ul> <li><p>Calculate <code>week_1_new = avg(week_1_old)</code>.</p> <ul> <li>Table 1 specifies we use old weeks 1-2.<strong>(loop_start_2 and loop_end_2)</strong></li> <li>Week 1 is on promotion, week 2 isn't - so just take week 1</li> </ul></li> <li><p>Calculate <code>week_2_new = avg(week_2_old, week_3_old, week_4_old)</code></p> <ul> <li>Table 1 specifies we use old weeks 1-4.(loop_start_1 and loop_end_1)</li> <li>Week 1 is on promotion, weeks 2, 3 &amp; 4 aren't - so the majority of weeks aren't on promotion. so promoted sales will be inflated so we would exclude them from calculation of average</li> <li>We consider data only from those weeks that aren't on promotion - 2, 3 &amp; 4.</li> </ul></li> <li>...</li> </ul> <p>On the other hand, if both weeks 1 and 2 were on promotion, then consider how we'd calculate the new sales data for week 1:</p> <ul> <li><code>week_1_new = avg(week_1_old, week_2_old)</code> <ul> <li>Table 1 specifies we use old weeks 1-2.(loop_start_2 and loop_end_2)</li> <li>Weeks 1 &amp; 2 are on promotion - so the majority of weeks <em>are</em> on promotion. 
we take both the weeks for average</li> <li>We consider data only from those weeks that are on promotion - 1 &amp; 2.</li> </ul></li> </ul> <p>On the other hand, if all week 1 and week 3 were on promotion, then consider how we'd calculate the new sales data for week 1:</p> <ul> <li><code>week_1_new = avg(week_1_old)</code> <strong>(loop_start_2 and loop_end_2)</strong> out of 1-2 only 1st week is on promotion. we never add the sales when a product is promoted with the week when it is not promoted. coz this would inflate the average.</li> <li><code>week_2_new = avg(week_2_old,week_4_old)</code><strong>((loop_start_1 and loop_end_1 as out of 1-4 only 2 and 4 are not promoted)</strong></li> <li><p><code>week_3_new = avg(week_3_old)</code></p> <ul> <li>We consider data only from those weeks that are on promotion - 1 &amp; 3 </li> </ul></li> </ul> <p>On the other hand, if all weeks 1 to 3 were on promotion, then consider how we'd calculate the new sales data for week 1:</p> <ul> <li><code>week_1_new = avg(week_1_old,week_2_old)</code>as we need (<strong>loop_start_2 and loop_end_2</strong>)</li> <li><code>week_2_new = avg(week_1_old,week_2_old,week_4_old)</code>as (<strong>loop_start_2 and loop_end_2</strong>) states week 1-3 to be considered for promoted weeeks for week 2</li> <li><code>week_3_new = avg(week_2_old,week_3_old)</code>as (<strong>loop_start_2 and loop_end_2</strong>) states for week 2-4. but out of 2,3,4 only 2 and 3 are promoted.so just take the promoted weeks</li> </ul>
-2
2016-08-12T13:18:58Z
38,922,747
<p>At the time of writing, my edit to your question is still pending and you haven't confirmed whether I've understood your business rules correctly. Assuming I have, here's a solution I've tried to put together for you:</p> <pre><code>from typing import Generator, Dict, List, Tuple import numpy TOTAL_WEEKS = 5 def create_default_sales_data_map() -&gt; Dict[int, List[int]]: """ Create and return the default mapping of old week sales data to new averaged week sales data. The mapping is a tuple specifying the (inclusive) lower and upper range of weeks to consider the sales data of by default when calculating the new sales data for the week specified by the dictionary key. This is based on Table 1. :return: the default sales data mapping """ sales_data_map = {1: (1, 3), 2: (1, 4), 3: (1, 5), 4: (2, 5), 5: (3, 5)} return sales_data_map def are_majority_weeks_promoted(promoted_weeks: [bool]) -&gt; bool: """ Return if the majority of weeks are promoted or not. :param promoted_weeks: The promotion status of each week :return: `true` if the majority of weeks are promoted """ return sum(promoted_weeks) &gt;= len(promoted_weeks) / 2 def calculate_new_sales_data(old_sales_data: List[int], promoted_weeks: List[bool]) -&gt; int: """ Calculate new (averaged) sales data for a week based on old (raw) sales data and the promotion status of the relevant weeks. :param old_sales_data: the old week sales data used by default (i.e. 
according to Table 1) :param promoted_weeks: the promotion status of the weeks corresponding to `old_sales_data` :return: the new sales data for a particular week """ majority = are_majority_weeks_promoted(promoted_weeks) relevant_data = [data for i, data in enumerate(old_sales_data) if promoted_weeks[i] == majority] new_sales_data = numpy.mean(relevant_data) return new_sales_data def calculate_all_new_sales_data(complete_old_sales_data: List[int], complete_promoted_weeks: List[bool]) -&gt; Generator[int, None, None]: """ Generates new sales data for all possible weeks based on the supplied sales week data. :param complete_old_sales_data: the complete set of information on old week sales data :param complete_promoted_weeks: the promotion status for all weeks :return: a generator for new sales data """ sales_data_map = create_default_sales_data_map() for week, (lower, upper) in sales_data_map.items(): old_sales_data = complete_old_sales_data[lower-1 : upper] promoted_weeks = complete_promoted_weeks[lower-1 : upper] yield calculate_new_sales_data(old_sales_data, promoted_weeks) def query_user(total_weeks: int) -&gt; Generator[Tuple[int, bool], None, None]: """ Query the user for the old sales data values and promoted weeks and return the results. Each generated tuple contains the old sales data for the week, and whether the week is promoted. :param total_weeks: the number of weeks' worth of data to query the user for. :return: a generator of sales data tuples """ for i in range(total_weeks): while True: try: old_sales_data = int(input("Total sales data for week {}: $".format(i+1))) break except ValueError: continue while True: promotion_status_str = input("Is week promoted (Y/N)? 
").lower() if promotion_status_str in ["y", "n"]: promotion_status = promotion_status_str == "y" break yield old_sales_data, promotion_status def main(): old_sales_data, promoted_weeks = zip(*query_user(TOTAL_WEEKS)) print() for week, new_sales_data in enumerate(calculate_all_new_sales_data(old_sales_data, promoted_weeks), 1): print("Week {} averaged sales data: ${:.2f}".format(week, new_sales_data)) if __name__ == '__main__': main() </code></pre> <p><strong>Input</strong></p> <pre class="lang-none prettyprint-override"><code>Total sales data for week 1: $10 Is week promoted (Y/N)? y Total sales data for week 2: $20 Is week promoted (Y/N)? y Total sales data for week 3: $30 Is week promoted (Y/N)? n Total sales data for week 4: $40 Is week promoted (Y/N)? y Total sales data for week 5: $50 Is week promoted (Y/N)? n </code></pre> <p><strong>Output</strong></p> <pre class="lang-none prettyprint-override"><code>Week 1 averaged sales data: $15.00 Week 2 averaged sales data: $23.33 Week 3 averaged sales data: $23.33 Week 4 averaged sales data: $30.00 Week 5 averaged sales data: $40.00 </code></pre> <p><strong>Explanation</strong></p> <ul> <li>For "new" week 1, Table 1 says to consider "old" weeks 1-3. Weeks 1 &amp; 2 are promoted but week 3 is not, so we only average "old" weeks 1 &amp; 2 - i.e. we average $10 and $20 to get $15.</li> <li>For "new" week 2, by default we consider "old" weeks 1-4. The majority of these weeks (1, 2 &amp; 4) are promoted, so we consider just these and average. The average of $10, $20 and $40 is $23.33.</li> <li>...</li> </ul> <p>We could play around with these values to test the behaviour is correct. For example, let's make it so that the majority of weeks considered by default for "new" sales week 1 data are not promoted:</p> <pre class="lang-none prettyprint-override"><code>Total sales data for week 1: $10 Is week promoted (Y/N)? y Total sales data for week 2: $20 Is week promoted (Y/N)? 
n Total sales data for week 3: $30 Is week promoted (Y/N)? n ... Week 1 averaged sales data: $25.00 </code></pre> <p>In this case, only weeks 2 and 3 are considered.</p> <p>How about if all first three weeks are promoted?</p> <pre><code>Total sales data for week 1: $10 Is week promoted (Y/N)? y Total sales data for week 2: $20 Is week promoted (Y/N)? y Total sales data for week 3: $30 Is week promoted (Y/N)? y ... Week 1 averaged sales data: $20.00 </code></pre> <p>Here, we consider all the weeks, and average the sales data for them.</p>
1
2016-08-12T16:36:07Z
[ "python", "loops", "python-3.x", "for-loop" ]
Regression: Test data for prediction requires class value? in Weka
38,919,132
<p>I am training an M5P tree model (from weka.classifiers) via python-weka-wrapper. Each row in my arff file consists of 6 attributes, with the 6th attribute being the target variable for which the model is being trained. I am using weka.core.converters.ArffLoader to load the arff file for training. After the training, when I want to make predictions with some test data, I create instances and pass them to the built model to predict. In the instances I am passing only the values of the 5 attributes and not the target variable's value. I am getting a Java exception:</p> <blockquote> <p>Traceback (most recent call last): File "C:/Users/Sethuraman/PycharmProjects/Test_printer/m_M5P.py", line 85, in pred_dict1[index + 1] = cls.classify_instance(instance) File "C:\Users\Sethuraman\Anaconda2\lib\site-packages\python_weka_wrapper-0.3.8-py2.7.egg\weka\classifiers.py", line 105, in classify_instance return self.__classify(inst.jobject) File "C:\Users\Sethuraman\Anaconda2\lib\site-packages\javabridge-1.0.14-py2.7-win-amd64.egg\javabridge\jutil.py", line 852, in fn raise JavaException(x) javabridge.jutil.JavaException: Src and Dest differ in # of attributes: 5 != 6</p> </blockquote> <p>Why should I provide the target variable's value? Is it necessary to pass the target value as well? After the training, the model should be the one predicting the target value. If yes, why? If no, how do I deal with it? Please help!</p>
0
2016-08-12T13:26:14Z
38,920,295
<p>If you want <em>validation</em>, you should definitely provide target values; how does the algorithm know how well it's done otherwise? But if you just want it to predict on that set, it seems the best way is to fill the target spot with '?', so that the data still has the 6 attributes, with the target simply marked as unknown. See <a href="http://weka.wikispaces.com/Making+predictions" rel="nofollow">http://weka.wikispaces.com/Making+predictions</a> for more.</p>
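<p>For instance, a test instance with an unknown target would look like this in ARFF form (the attribute names below are made up, since the original schema isn't shown); the trailing <code>?</code> is what marks the class value as missing:</p>

```
@relation test_set
@attribute f1 numeric
@attribute f2 numeric
@attribute f3 numeric
@attribute f4 numeric
@attribute f5 numeric
@attribute target numeric

@data
1.0,2.0,3.0,4.0,5.0,?
```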
0
2016-08-12T14:22:45Z
[ "python", "weka" ]
Regression: Test data for prediction requires class value? in Weka
38,919,132
<p>I am training an M5P tree model (from weka.classifiers) via python-weka-wrapper. Each row in my arff file consists of 6 attributes, with the 6th attribute being the target variable for which the model is being trained. I am using weka.core.converters.ArffLoader to load the arff file for training. After the training, when I want to make predictions with some test data, I create instances and pass them to the built model to predict. In the instances I am passing only the values of the 5 attributes and not the target variable's value. I am getting a Java exception:</p> <blockquote> <p>Traceback (most recent call last): File "C:/Users/Sethuraman/PycharmProjects/Test_printer/m_M5P.py", line 85, in pred_dict1[index + 1] = cls.classify_instance(instance) File "C:\Users\Sethuraman\Anaconda2\lib\site-packages\python_weka_wrapper-0.3.8-py2.7.egg\weka\classifiers.py", line 105, in classify_instance return self.__classify(inst.jobject) File "C:\Users\Sethuraman\Anaconda2\lib\site-packages\javabridge-1.0.14-py2.7-win-amd64.egg\javabridge\jutil.py", line 852, in fn raise JavaException(x) javabridge.jutil.JavaException: Src and Dest differ in # of attributes: 5 != 6</p> </blockquote> <p>Why should I provide the target variable's value? Is it necessary to pass the target value as well? After the training, the model should be the one predicting the target value. If yes, why? If no, how do I deal with it? Please help!</p>
0
2016-08-12T13:26:14Z
38,926,364
<p>You can use the <a href="http://weka.sourceforge.net/doc.dev/weka/filters/unsupervised/attribute/Add.html" rel="nofollow">Add</a> filter to introduce a new attribute. By default, this filter will mark all values of the new attribute as missing ("?"). Just make sure that the name of this new attribute and, in case of nominal class, the order of class labels is exactly the same as in the training data.</p>
1
2016-08-12T20:58:10Z
[ "python", "weka" ]
Optimize 4D Numpy array construction
38,919,189
<p>I have a 4D array <code>data</code> of shape (50,8,2048,256) which are 50 groups containing 8 2048x256 pixel images. <code>times</code> is an array of shape (50,8) giving the time that each image was taken.</p> <p>I calculate a 1st order polynomial fit at each pixel for all images in each group, giving me an array of shape (50,2048,256,2). This is essentially a vector plot for each of the 50 groups. The code I use to store the polynomials is:</p> <pre><code>fits = np.ones((50,2048,256,2)) times = times.reshape(50,8,1).repeat(2048,2).reshape(50,8,2048,1).repeat(256,3) for group in range(50): for xpos in range(2048): for ypos in range(256): px_data = data[:,:,ypos,xpos] fits[group,ypos,xpos,:] = np.polyfit(times[group,:,ypos,xpos],data[group,:,ypos,xpos],1) </code></pre> <p>Now the challenge is that I want to generate an array <code>new_data</code> of shape (50,12,2048,256) where I use the polynomial coefficients from <code>fits</code> and the times from <code>new_time</code> to generate 50 groups of 12 images.</p> <p>I figure I can use something like <code>np.polyval(fits, new_time)</code> to generate the images but I'm very confused with how to phrase it. It should be something like:</p> <pre><code>new_data = np.ones((50,12,2048,256)) for i,(times,fit) in enumerate(zip(new_times,fits)): new_data[i] = np.polyval(fit,times) </code></pre> <p>But I'm getting broadcasting errors. Any assistance would be greatly appreciated!</p> <p><strong>Update</strong> Ok, so I changed the code a bit so that it does work and do exactly what I want, but it is terribly slow with all these loops (~1 minute per group meaning this would take me almost an hour to run!). 
Can anyone suggest a way to optimize this to speed it up?</p> <pre><code># Generate the polynomials for each pixel in each group fits = np.ones((50,2048,256,2)) times = np.arange(0,50*8*grptme,grptme).reshape(50,8) times = times.reshape(50,8,1).repeat(2048,2).reshape(50,8,2048,1).repeat(256,3) for group in range(50): for xpos in range(2048): for ypos in range(256): fits[group,xpos,ypos] = np.polyfit(times[group,:,xpos,ypos],data[group,:,xpos,ypos],1) # Create new array of 12 images per group using the polynomials for each pixel new_data = np.ones((50,12,2048,256)) times = np.arange(0,50*12*grptme,grptme).reshape(50,12) times = times.reshape(50,12,1).repeat(2048,2).reshape(50,12,2048,1).repeat(256,3) for group in range(50): for img in range(12): for xpos in range(2048): for ypos in range(256): new_data[group,img,xpos,ypos] = np.polynomial.polynomial.polyval(times[group,img,xpos,ypos],fits[group,xpos,ypos]) </code></pre>
1
2016-08-12T13:28:47Z
38,930,042
<p>Regarding the speed: I see a lot of loops, which is something that should and often can be avoided thanks to the beauty of numpy. If I understand your problem fully, you want to fit a first order polynomial on 50 groups of 8 data points, 2048 * 256 times. So for the fit the shape of your image does not play a role. My suggestion is therefore to flatten your images, because with <code>np.polyfit</code> you can fit several sets of y-values over the same range of x-values at the same time.</p> <p>From the doc string:</p> <pre><code>x : array_like, shape (M,) x-coordinates of the M sample points ``(x[i], y[i])``. y : array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. </code></pre> <p>So I would go for:</p> <pre><code># Generate the polynomials for each pixel in each group fits = np.ones((50,2048*256,2)) times = np.arange(0,50*8*grptme,grptme).reshape(50,8) data_fit = data.reshape((50,8,2048*256)) for group in range(50): fits[group] = np.polyfit(times[group],data_fit[group],1).T fits_original_shape = fits.reshape((50,2048,256,2)) </code></pre> <p>The transposing is necessary since you want to have the parameters in the last index, but <code>np.polyfit</code> has them first, followed by the different data sets.</p> <p>Then to evaluate it, it is basically the same trick again. One caveat: <code>np.polynomial.polynomial.polyval</code> expects the coefficients ordered lowest degree first, while <code>np.polyfit</code> returns them highest degree first, hence the <code>[::-1]</code>:</p> <pre><code># Create new array of 12 images per group using the polynomials for each pixel new_data = np.zeros((50,12,2048*256)) times = np.arange(0,50*12*grptme,grptme).reshape(50,12) for group in range(50): # reverse the coefficient axis: polyval wants lowest degree first new_data[group] = np.polynomial.polynomial.polyval(times[group],fits[group].T[::-1]).T new_data_original_shape = new_data.reshape((50,12,2048,256)) </code></pre> <p>The two transposes are again needed due to the ordering of the parameters vs.
the different data sets, so that it matches the shapes of your arrays.</p> <p>One could probably also avoid the loop over the groups with some more advanced numpy magic, but even with the loop the code should run much faster already.</p> <p>I hope it helps!</p>
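<p>To sanity-check the column-wise fitting, here is a toy version with made-up shapes (3 "pixels", 4 time points; none of these numbers come from the original data). Note that <code>np.polyfit</code> returns coefficients highest degree first, while <code>np.polynomial.polynomial.polyval</code> expects them lowest degree first, hence the <code>[::-1]</code>:</p>

```python
import numpy as np

# Three "pixels" whose values follow known lines y = slope * t + intercept
times = np.array([0.0, 1.0, 2.0, 3.0])
slopes = np.array([2.0, -1.0, 0.5])
intercepts = np.array([1.0, 4.0, 0.0])
data = times[:, None] * slopes + intercepts      # shape (4, 3): one column per pixel

# A single polyfit call fits every column at once
coeffs = np.polyfit(times, data, 1)              # shape (2, 3), highest degree first
assert np.allclose(coeffs[0], slopes)            # recovered slopes
assert np.allclose(coeffs[1], intercepts)        # recovered intercepts

# Evaluate all fitted lines at new times; polyval wants lowest degree first
new_times = np.array([4.0, 5.0])
predicted = np.polynomial.polynomial.polyval(new_times, coeffs[::-1])  # shape (3, 2)
print(predicted)
```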
0
2016-08-13T06:49:23Z
[ "python", "arrays", "numpy", "optimization", "slice" ]
Exporting data from python to text file using petl package in python
38,919,246
<p>I am trying to extract raw data from a text file and after processing the raw data, I want to export it to another text file. Below is the python code I have written for this process. I am using the "petl" package in python 3 for this purpose. 'locations.txt' is the raw data file.</p> <pre><code>import glob, os from petl import * class ETL(): def __init__(self, input): self.list = input def parse_P(self): personids = None for term in self.list: if term.startswith('P'): personids = term[1:] personid = personids.split(',') return personid def return_location(self): location = None for term in self.list: if term.startswith('L'): location = term[1:] return location def return_location_id(self, location): location = self.return_location() locationid = None def return_country_id(self): countryid = None for term in self.list: if term.startswith('C'): countryid = term[1:] return countryid def return_region_id(self): regionid = None for term in self.list: if term.startswith('R'): regionid = term[1:] return regionid def return_city_id(self): cityid = None for term in self.list: if term.startswith('I'): cityid = term[1:] return cityid print (os.getcwd()) os.chdir("D:\ETL-IntroductionProject") print (os.getcwd()) final_location = [['L','P', 'C', 'R', 'I']] new_location = fromtext('locations.txt', encoding= 'Latin-1') stored_list = [] for identifier in new_location: if identifier[0].startswith('L'): identifier = identifier[0] info_list = identifier.split('_') stored_list.append(info_list) for lst in stored_list: tabling = ETL(lst) location = tabling.return_location() country = tabling.return_country_id() city = tabling.return_city_id() region = tabling.return_region_id() person_list = tabling.parse_P() for person in person_list: table_new = [location, person, country, region, city] final_location.append(table_new) totext(final_location, 'l1.txt') </code></pre> <p>However when I use "totext" function of petl, it throws me an "Assertion Error".</p> <blockquote> 
<p>AssertionError: template is required</p> </blockquote> <p>I am unable to understand what the fault is. Can someone please explain the problem I am facing and what I should be doing?</p>
0
2016-08-12T13:31:37Z
39,088,838
<p>The template parameter to the totext function is not optional: there is no default format for how the rows are written, so you must provide a template. Check the doc for totext here for an example: <a href="https://petl.readthedocs.io/en/latest/io.html#text-files" rel="nofollow">https://petl.readthedocs.io/en/latest/io.html#text-files</a></p> <p>The template describes the format of each row that it writes out, using the field headers as placeholders; you can optionally pass in a prologue to write the header too. A basic template in your case (note the trailing newline, so each row ends up on its own line) would be:</p> <pre><code>table_new_template = "{L} {P} {C} {R} {I}\n" totext(final_location, 'l1.txt', template=table_new_template)</code></pre>
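<p>petl itself is not needed to see how such a template behaves: <code>totext</code> essentially applies <code>str.format</code> to each row, keyed by the header fields. A rough stand-in (the row values below are invented for illustration; the field names come from <code>final_location</code>'s header row) looks like this:</p>

```python
# Stand-in for petl.totext: format each data row with a str.format template
rows = [
    ['L', 'P', 'C', 'R', 'I'],                 # header row
    ['loc1', 'person1', 'c1', 'r1', 'i1'],
    ['loc2', 'person2', 'c2', 'r2', 'i2'],
]
template = "{L} {P} {C} {R} {I}\n"

header, data = rows[0], rows[1:]
text = "".join(template.format(**dict(zip(header, row))) for row in data)
print(text, end="")
```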
0
2016-08-22T21:15:46Z
[ "python", "etl" ]
Error while calling function through C DLL
38,919,272
<p>I have a DLL in which the following is the header and .c file: </p> <pre><code>//demo.h #define vers 0x1 __declspec export void getversion(ulong * version); //demo.c #include"demo.h" void getversion(ulong * version) { *version=(ulong)vers; } </code></pre> <p>The python script that I had written for calling the DLL using ctypes is as below:</p> <pre><code>//python script import ctypes import sys from ctypes import * vers=0x1 def getversion(version): mydll=cdll.LoadLibrary('demo.dll') mydll.getversion.argtypes=[POINTER(c_ulong)] mydll.getversion.restype=None mydll.getversion(version) version=c_ulong() print("Version=",version.value) </code></pre> <p>But the value of the version that I'm getting is 0. The output should be 1 instead. Can anyone help me with the following code? I don't know where am I going wrong?</p>
0
2016-08-12T13:33:11Z
38,919,408
<p>I think this is what you should do:</p> <pre><code>mydll = cdll.demo # just a simpler way to load the dll, not the solution yet ver = c_ulong() mydll.getversion(byref(ver)) print(ver.value) </code></pre>
0
2016-08-12T13:38:43Z
[ "python", "c", "dll", "ctypes" ]
Error while calling function through C DLL
38,919,272
<p>I have a DLL in which the following is the header and .c file: </p> <pre><code>//demo.h #define vers 0x1 __declspec export void getversion(ulong * version); //demo.c #include"demo.h" void getversion(ulong * version) { *version=(ulong)vers; } </code></pre> <p>The python script that I had written for calling the DLL using ctypes is as below:</p> <pre><code>//python script import ctypes import sys from ctypes import * vers=0x1 def getversion(version): mydll=cdll.LoadLibrary('demo.dll') mydll.getversion.argtypes=[POINTER(c_ulong)] mydll.getversion.restype=None mydll.getversion(version) version=c_ulong() print("Version=",version.value) </code></pre> <p>But the value of the version that I'm getting is 0. The output should be 1 instead. Can anyone help me with the following code? I don't know where am I going wrong?</p>
0
2016-08-12T13:33:11Z
38,920,415
<p>Here's a fully working example for windows&amp;vs, if you wanna try on linux you just need to change your datatypes and get rid of the windows headers:</p> <p><strong>demo.h</strong></p> <pre><code>#include &lt;windows.h&gt; ULONG g_version = 0x3; __declspec(dllexport) void set_version(ULONG version); __declspec(dllexport) ULONG get_version(); __declspec(dllexport) void get_version_by_reference(ULONG* version); </code></pre> <p><strong>demo.c</strong></p> <pre><code>// build: cl.exe /D_USRDLL /D_WINDLL demo.c /link /DLL /OUT:demo.dll #include "demo.h" void set_version(ULONG version) { g_version=version; } ULONG get_version() { return g_version; } void get_version_by_reference(ULONG* version) { *version=(ULONG)g_version; } </code></pre> <p><strong>demo.py</strong></p> <pre><code>import ctypes import sys from ctypes import * mydll = cdll.LoadLibrary('demo.dll') # __declspec(dllexport) void set_version(ULONG version); mydll.set_version.restype = None mydll.set_version.argtypes = [c_ulong] # __declspec(dllexport) void get_version(); mydll.get_version.restype = c_ulong mydll.get_version.argtypes = [] # __declspec(dllexport) void get_version_by_reference(ULONG* version); mydll.get_version_by_reference.restype = None mydll.get_version_by_reference.argtypes = [POINTER(c_ulong)] print "Step1, testing get_version" print '-'*80 print mydll.get_version() print "\nStep2, testing set_version &amp; get_version" print '-'*80 mydll.set_version(c_ulong(0x05)) print mydll.get_version() print "\nStep3, testing get_version_by_reference" print '-'*80 vers = c_ulong(0x09) print "before assigning...",vers.value mydll.get_version_by_reference(vers) print "after assigning...",vers.value </code></pre> <p>Hope it helps you to get started.</p>
0
2016-08-12T14:28:32Z
[ "python", "c", "dll", "ctypes" ]
Error while calling function through C DLL
38,919,272
<p>I have a DLL in which the following is the header and .c file: </p> <pre><code>//demo.h #define vers 0x1 __declspec export void getversion(ulong * version); //demo.c #include"demo.h" void getversion(ulong * version) { *version=(ulong)vers; } </code></pre> <p>The python script that I had written for calling the DLL using ctypes is as below:</p> <pre><code>//python script import ctypes import sys from ctypes import * vers=0x1 def getversion(version): mydll=cdll.LoadLibrary('demo.dll') mydll.getversion.argtypes=[POINTER(c_ulong)] mydll.getversion.restype=None mydll.getversion(version) version=c_ulong() print("Version=",version.value) </code></pre> <p>But the value of the version that I'm getting is 0. The output should be 1 instead. Can anyone help me with the following code? I don't know where am I going wrong?</p>
0
2016-08-12T13:33:11Z
38,962,712
<p>Your code isn't calling <code>getversion</code>. Your code creates a default <code>ulong()</code> (which is initialized to zero) and printing it.</p> <pre><code>version=c_ulong() print("Version=",version.value) </code></pre> <p>Here's working code:</p> <pre><code>from ctypes import * mydll = CDLL('demo.dll') mydll.getversion.argtypes = [POINTER(c_ulong)] mydll.getversion.restype = None version=c_ulong() mydll.getversion(byref(version)) # call function with address of version print("Version=",version.value) </code></pre> <p>Here's the DLL I tested with (yours as is won't compile...no definition of <code>ulong</code>) and compiled with Microsoft Visual Studio.</p> <pre><code>__declspec(dllexport) void getversion(unsigned long* version) { *version = 1; } </code></pre>
1
2016-08-15T20:41:12Z
[ "python", "c", "dll", "ctypes" ]
Test if a string contains a combination of a known string and an unknown number in python
38,919,376
<p>If I have a string, for example <code>line = "hello world 2"</code>, where the number <code>2</code> can be any number (so line could be <code>"hello world 4"</code>), is there a way to test for that using the <code>in</code> operator? I am thinking of something like this:</p> <pre><code>if "world %n" in line </code></pre> <p>but that is not correct syntax.</p>
1
2016-08-12T13:37:40Z
38,919,440
<pre><code>import re if re.search(r"world \d", line): ... </code></pre> <p>This uses a quick, simple <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regular expression</a> to check that the line contains the word 'world' followed by a space followed by a digit. </p>
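<p>A quick demonstration of the check (the sample lines here are invented; the raw-string prefix <code>r"..."</code> avoids invalid escape warnings for <code>\d</code>):</p>

```python
import re

lines = ["hello world 2", "hello world 4", "hello world", "worldly 9"]
# re.search scans the whole line for "world", a space, and one digit
matches = [bool(re.search(r"world \d", line)) for line in lines]
print(matches)
```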
0
2016-08-12T13:40:37Z
[ "python", "python-3.x", "find" ]
How to give a name to test?
38,919,394
<p>How do I give a name to a unit test in Django?</p> <p>With RSpec I would do something like </p> <pre><code>it 'has a name!' do # do something end </code></pre> <p>But is it possible with Django, so I can see normal names in PyCharm instead of this:</p> <p><a href="http://i.stack.imgur.com/QJyu2.png" rel="nofollow"><img src="http://i.stack.imgur.com/QJyu2.png" alt="enter image description here"></a></p>
1
2016-08-12T13:38:26Z
38,919,658
<p>You could give the test method a docstring and set the test suite verbosity to 2. The runner will then print the first line of the docstring as a description along with the test, like so:</p> <p><code>&lt;test name&gt; test that so and so happens ... ok</code></p>
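<p>A minimal illustration with plain <code>unittest</code> (which Django's test runner builds on); the first line of the docstring is what gets reported as the test's description:</p>

```python
import unittest

class UserTests(unittest.TestCase):
    def test_has_a_name(self):
        """has a name!"""
        self.assertEqual("abc".upper(), "ABC")

# unittest exposes the docstring's first line as the test description
test = UserTests("test_has_a_name")
print(test.shortDescription())
```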
1
2016-08-12T13:50:17Z
[ "python", "django", "unit-testing", "integration-testing", "django-testing" ]
nginx connection timeout error with python, django and uwsgi
38,919,471
<p>I have setup a django + uwsgi + nginx server according to <a href="https://tghw.com/blog/multiple-django-and-flask-sites-with-nginx-and-uwsgi-emperor" rel="nofollow">https://tghw.com/blog/multiple-django-and-flask-sites-with-nginx-and-uwsgi-emperor</a></p> <p>The pages work fine when they return quickly. But I have a page that takes about 30 seconds to return and on that page nginx logs show a connection timeout error and a blank page is shown on the browser(Not an error page).</p> <p>Here are my configuration files:</p> <h1>nginx conf:</h1> <pre><code>server { listen 8000; server_name localhost; root /var/www/consent_architecture; location / { include uwsgi_params; uwsgi_pass unix:/var/www/run/crediwatch_consent.sock; uwsgi_read_timeout 300; } } </code></pre> <h1>uwsgi.ini</h1> <pre><code>[uwsgi] # Variables base = /var/www/consent_architecture app = wsgi # Generic Config chdir = /var/www/consent_architecture home = %(base)/venv plugins = http, python processes = 5 pythonpath = %(base) socket = /var/www/run/%n.sock module = %(app) logto = /var/log/uwsgi/%n.log </code></pre> <p>uwsgi log shows that the call returned in ~25 seconds:</p> <pre><code>[pid: 18713|app: 0|req: 14/25] 127.0.0.1 () {50 vars in 1412 bytes} [Fri Aug 12 18:52:30 2016] POST /consent_server/confirmation =&gt; generated 750 bytes in 23503 msecs (HTTP/1.1 200) 2 headers in 88 bytes (1 switches on core 0) </code></pre> <p>But the nginx logs shows the following error:</p> <pre><code>2016/08/12 18:53:33 [error] 21203#0: *1 upstream timed out (110: Connection timed out) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /consent_server/confirmation HTTP/1.1", upstream: "uwsgi://unix:/var/www/run/crediwatch_consent.sock:", host: "localhost:8000", referrer: "http://localhost:8000/consent_server/get_details?phone_number=...." </code></pre> <p>The timeout in nginx conf has been set to 5 minutes and uwsgi returns within 30 seconds but nginx doesn't seem to be reading it. 
Anyone have any clue why this is happening?</p> <p>Versions:</p> <p>nginx = 1.4.6, uwsgi = 2.0.13.1 </p>
0
2016-08-12T13:42:30Z
38,920,246
<p>Nginx is telling you that your upstream (uWSGI) has timed-out, so adjusting the timeout in Nginx's config won't help.</p> <p>Try adjusting the internal socket time-out. In your <code>uwsgi.ini</code>, add:</p> <p><code>socket-timeout = 60</code></p> <p>P.S. I see that you've looked at the 14th request out of 25. Are there any other requests that are taking longer that you forgot to include in your log?</p>
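<p>Applied to the <code>uwsgi.ini</code> from the question, the option would sit alongside the existing socket settings; the 60-second value below is just a starting point to tune, not a recommended constant:</p>

```ini
[uwsgi]
base = /var/www/consent_architecture
app = wsgi
chdir = /var/www/consent_architecture
home = %(base)/venv
plugins = http, python
processes = 5
pythonpath = %(base)
socket = /var/www/run/%n.sock
socket-timeout = 60
logto = /var/log/uwsgi/%n.log
```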
0
2016-08-12T14:19:26Z
[ "python", "django", "nginx", "uwsgi" ]
How to skip blank line while taking input from user in python?
38,919,489
<p>I want to skip blank line (no value entered by user). I get this error.</p> <pre><code> Traceback (most recent call last): File "candy3.py", line 16, in &lt;module&gt; main() File "candy3.py", line 5, in main num=input() File "&lt;string&gt;", line 0 ^ SyntaxError: unexpected EOF while parsing </code></pre> <p>My code is:</p> <pre><code>def main(): tc=input() d=0 while(tc!=0): num=input() i=0 count=0 for i in range (0, num): a=input() count=count+a if (count%num==0): print 'YES' else: print 'NO' tc=tc-1 main() </code></pre>
0
2016-08-12T13:43:31Z
38,919,660
<p>Use raw_input, and convert manually. This is also safer. For a full explanation, see <a href="https://en.wikibooks.org/wiki/Python_Programming/Input_and_Output#input.28.29" rel="nofollow">here</a>. For example, you could use the code below to skip anything which is not an integer (testing against <code>None</code> rather than truthiness, so that an input of <code>0</code> is still accepted).</p> <pre><code>x = None while x is None: try: x = int(raw_input()) except ValueError: print 'Invalid Number' </code></pre>
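<p>The same idea written as a reusable helper (Python 3 naming here; under Python 2 you would pass <code>raw_input</code> as the reader). The reader function is injected so the loop can be exercised without a terminal:</p>

```python
def read_int(prompt, read=input):
    """Keep asking until a non-blank, valid integer is entered."""
    while True:
        raw = read(prompt).strip()
        if not raw:
            continue          # blank line: silently ask again
        try:
            return int(raw)
        except ValueError:
            print('Invalid Number')

# Simulate a user typing a blank line, then junk, then a number
answers = iter(["", "abc", "42"])
value = read_int("x: ", read=lambda prompt: next(answers))
print(value)
```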
0
2016-08-12T13:50:22Z
[ "python", "user-input" ]
How to skip blank line while taking input from user in python?
38,919,489
<p>I want to skip blank line (no value entered by user). I get this error.</p> <pre><code> Traceback (most recent call last): File "candy3.py", line 16, in &lt;module&gt; main() File "candy3.py", line 5, in main num=input() File "&lt;string&gt;", line 0 ^ SyntaxError: unexpected EOF while parsing </code></pre> <p>My code is:</p> <pre><code>def main(): tc=input() d=0 while(tc!=0): num=input() i=0 count=0 for i in range (0, num): a=input() count=count+a if (count%num==0): print 'YES' else: print 'NO' tc=tc-1 main() </code></pre>
0
2016-08-12T13:43:31Z
38,919,782
<p>The behaviour you're getting is expected, read the <a href="https://docs.python.org/3/library/functions.html#input" rel="nofollow">input</a> docs.</p> <blockquote> <p>input([prompt])</p> <p>If the prompt argument is present, it is written to standard output without a trailing newline. The function then reads a line from input, converts it to a string (stripping a trailing newline), and returns that. When EOF is read, EOFError is raised</p> </blockquote> <p>Try something like this and the code will be capturing the possible exceptions produced by input function:</p> <pre><code>if __name__ == "__main__": tc = input("How many numbers you want:") d = 0 while(tc != 0): try: num = input("Insert number:") except Exception, e: print "Error (try again),", str(e) continue i = 0 count = 0 for i in range(0, num): try: a = input("Insert number to add to your count:") count = count + a except Exception, e: print "Error (count won't be increased),", str(e) if (count % num == 0): print 'YES' else: print 'NO' tc = tc - 1 </code></pre>
0
2016-08-12T13:55:54Z
[ "python", "user-input" ]
Python - Dealing with Input Prompt in a Subprocess
38,919,496
<p>I'm trying to get a python script on a remotely deployed embedded Linux device to execute an scp command. Executing the command is simple, but if the target server is not listed in the 'known_hosts' file, scp throws a warning that needs to be interacted with. Banging my head against this for days, and I can't solve 2 problems.</p> <p>First, I can't get nonblocking read of responses from the subprocess to function correctly. In the following code, select always returns ( [ ], [ ], [ ] ), even when I know I can read from stderr (assuming a trusted hosts file warning is generated).</p> <pre><code>cmdString = 'scp user@remote.com:file localFile -i ~/.ssh/id_rsa' process = subprocess.Popen(shlex.split(cmdString), shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) while(process.poll() is None): readable, writable, exceptional = select.select([process.stdout], [], [process.stderr], 1) if not (readable or writable or exceptional): # Always hits this condition, although adding an "os.read(...)" here # will return the error prompt from process.stderr. print "timeout condition" else: # Never makes it here for e in exceptional: stderr = os.read(process.stderr.fileno(), 256) print stderr for r in readable: stdout = os.read(process.stdout.fileno(), 256) print stdout </code></pre> <p>Second, I can't get the subprocess to advance beyond the warning by feeding input through an input PIPE. The following code reads the warning code from process.stderr but then hangs until I hit {enter} in my terminal. 
I've tried sending "n", "n\n", and "\n", but none cause the subprocess to continue execution (though all 3 patterns work when entered manually).</p> <pre><code>cmdString = 'scp user@remote.com:file localFile -i ~/.ssh/id_rsa' process = subprocess.Popen(shlex.split(cmdString), shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Correctly grabs warning and displays it stderr = os.read(process.stderr.fileno(), 256) print stderr # Just in case there was some weird race condition or something time.sleep(0.5) # Doesn't ever seem to do anything process.stdin.write('\n') </code></pre> <p>Finally, does it matter? I originally started investigating subprocess and PIPES because I was running scp using "os.system(cmdString)" which blocked my thread and was forcing me to deal with the issue. Now that I'm using subprocess, is it bad to just fire off the command and let it succeed or fail? Will the failed subprocesses eventually die off, or could I eventually end up where I have dozens or hundreds of hidden scp attempts running, but waiting for user input?</p> <p>Thanks!</p>
0
2016-08-12T13:43:50Z
38,920,579
<p>The problem is likely that <code>scp</code> doesn't communicate using stdin/stdout/stderr in this case, but directly via the terminal.</p> <p>You can find a lot of similar questions, as well as ways to deal with it, by searching for something like <code>scp input</code> on stackoverflow.</p> <p>Started subprocesses will only die when the parent "piped" the output (stdout/stderr) and the subprocess tries to write something. In this case, scp will probably keep running because it is using the terminal. These processes are not really hidden, though; you can easily see them with a tool like <code>ps</code> (and kill them with <code>kill</code> or <code>killall</code>).</p>
0
2016-08-12T14:37:53Z
[ "python", "pipe", "subprocess", "popen" ]
How to parse oracle jdbc ezconnect with python
38,919,499
<p>I've tried to find a way to parse, with Python, an Oracle jdbc string that can come in different formats, but didn't find anything.</p> <p>Problem:</p> <p>Input string can be based on different patterns:</p> <ul> <li>jdbc:oracle:thin:@//hostname.example.ru:1521/database.example.ru</li> <li>jdbc:oracle:thin:@hostname:1521:DATABASE</li> </ul> <p>I cannot predict which pattern I will get next time. So I need to use some package that is always able to parse such strings, so that I don't reinvent the wheel.</p> <h2>Update_#0</h2> <p>I still haven't found any package that could split the connection string into its parts without actually connecting to Oracle.</p> <p>For now, I wrote a regex for parsing jdbc oracle ezconnect strings; you can use this to parse ezconnect:</p> <p><code>^jdbc:oracle:thin:((?'username'[a-zA-Z0-9]{1,})([\/](?'password'[a-zA-Z0-9]{1,})){0,1}){0,1}@((\/\/){0,1}(?'hostname'[a-zA-Z0-9\.\-]*)(\:(?'port'\d+)){0,1})(\/(?'service_name'[a-zA-Z\.\-0-9]{1,}(\:(?'server_type'[a-zA-Z]{1,})){0,1}){0,1}(\/(?'instance_name'[a-zA-Z0-9]{1,})){0,1}){0,1}$</code></p> <p>This is an expanded version:</p> <pre><code>^ jdbc:oracle:thin: ( (?'username'[a-zA-Z0-9]{1,}) ([\/] (?'password'[a-zA-Z0-9]{1,}) ){0,1} ){0,1} @ ( (\/\/){0,1} (?'hostname'[a-zA-Z0-9\.\-]{1,}) (\:(?'port'\d+)){0,1} ) (\/ (?'service_name'[a-zA-Z\.\-0-9]{1,} (\: (?'server_type'[a-zA-Z]{1,}) ){0,1} ){0,1} (\/ (?'instance_name'[a-zA-Z0-9]{1,}) ){0,1} ){0,1} $ </code></pre> <p>You can test it <a href="https://regex101.com/" rel="nofollow">here</a> on these lines:</p> <pre><code>jdbc:oracle:thin:@//hostname.example.ru:1521/database.example.ru jdbc:oracle:thin:@sales-server jdbc:oracle:thin:@sales-server:3456 jdbc:oracle:thin:@sales-server/sales jdbc:oracle:thin:@sales-server:80/sales jdbc:oracle:thin:@sales-server/sales:dedicated/inst1 jdbc:oracle:thin:@sales-server//inst1 jdbc:oracle:thin:@sales-server:1521/sales.us.acme.com jdbc:oracle:thin:@//sales-server/sales.us.acme.com
jdbc:oracle:thin:@//sales-server.us.acme.com/sales.us.oracle.com jdbc:oracle:thin:wat@//sales-server.us.acme.com/sales.us.oracle.com jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com/sales.us.oracle.com jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com/sales.us.oracle.com:dedicated/instance jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com//instance jdbc:oracle:thin:@non-ezconnect-string-test:1521:DATABASE </code></pre> <h2>Update_#1</h2> <p>This code is for python:</p> <pre><code>import re jdbc_ezconnect = re.compile("^jdbc:oracle:thin:((?P&lt;username&gt;[a-zA-Z0-9]{1,})([\/](?P&lt;password&gt;[a-zA-Z0-9]{1,})){0,1}){0,1}@(?P&lt;ezdb_name&gt;((\/\/){0,1}(?P&lt;hostname&gt;[a-zA-Z0-9\.\-]{1,})(\:(?P&lt;port&gt;\d+)){0,1})(\/(?P&lt;service_name&gt;[a-zA-Z\.\-0-9]{1,}(\:(?P&lt;server_type&gt;[a-zA-Z]{1,})){0,1}){0,1}(\/(?P&lt;instance_name&gt;[a-zA-Z0-9]{1,})){0,1}){0,1})$", re.MULTILINE) text = [ "jdbc:oracle:thin:@//hostname.example.ru:1521/database.example.ru", "jdbc:oracle:thin:@sales-server", "jdbc:oracle:thin:@sales-server:3456", "jdbc:oracle:thin:@sales-server/sales", "jdbc:oracle:thin:@sales-server:80/sales", "jdbc:oracle:thin:@sales-server/sales:dedicated/inst1", "jdbc:oracle:thin:@sales-server//inst1", "jdbc:oracle:thin:@sales-server:1521/sales.us.acme.com", "jdbc:oracle:thin:@//sales-server/sales.us.acme.com", "jdbc:oracle:thin:@//sales-server.us.acme.com/sales.us.oracle.com", "jdbc:oracle:thin:wat@//sales-server.us.acme.com/sales.us.oracle.com", "jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com/sales.us.oracle.com", "jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com/sales.us.oracle.com:dedicated/instance", "jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com//instance", "jdbc:oracle:thin:@hostname:1521:DATABASE" ] matches = jdbc_ezconnect.search(text[0]) username = matches.group('username') password = matches.group('password') ezdb_name = matches.group('ezdb_name') hostname = matches.group('hostname') port = matches.group('port') service_name 
= matches.group('service_name') server_type = matches.group('server_type') instance_name = matches.group('instance_name') print username, password, ezdb_name, hostname, port, service_name, server_type, instance_name </code></pre> <p>Outputs:</p> <p><code>None None //hostname.example.ru:1521/database.example.ru hostname.example.ru 1521 database.example.ru None None</code></p>
0
2016-08-12T13:43:57Z
38,968,786
<p>I've read the Oracle docs about the EZCONNECT syntax and wrote a regex to parse it. The second regex is a short version of the standard jdbc pattern, so I unified those regexp strings within one class to parse each variant.</p> <p>So here's the jdbc connection string parser:</p> <pre><code># -*- coding: utf-8 -*- import re class JDBCParserError(Exception): pass class JDBCParser: """ A class for parsing jdbc strings. """ # ezconnect patterns jdbc_ezconnect = re.compile("^jdbc:oracle:thin:" "((?P&lt;username&gt;[a-zA-Z0-9]{1,})" "([\/](?P&lt;password&gt;[a-zA-Z0-9]{1,})){0,1}){0,1}" "@" "(?P&lt;ezdb_name&gt;((\/\/){0,1}" "(?P&lt;hostname&gt;[a-zA-Z0-9\.\-]{1,})" "(\:(?P&lt;port&gt;\d+)){0,1})" "(\/(?P&lt;service_name&gt;[a-zA-Z\.\-0-9]{1,}" "(\:(?P&lt;server_type&gt;[a-zA-Z]{1,})){0,1}){0,1}" "(\/(?P&lt;instance_name&gt;[a-zA-Z0-9]{1,})){0,1}){0,1})$") # jdbc standard pattern - host:port:sid jdbc_classic = re.compile("^jdbc:oracle:thin:" "((?P&lt;username&gt;[a-zA-Z0-9]{1,})" "([\/](?P&lt;password&gt;[a-zA-Z0-9]{1,})){0,1}){0,1}" "@" "(?P&lt;connection_string&gt;(" "(?P&lt;hostname&gt;[a-zA-Z0-9\.\-]+)" "(\:(?P&lt;port&gt;\d+)))" "(\:(?P&lt;service_name&gt;[a-zA-Z0-9]+)))$") username = None password = None ezdb_name = None hostname = None port = None service_name = None instance_name = None connection_string = None def __init__(self, jdbc_string): ezconnect_match = self.jdbc_ezconnect.search(jdbc_string) classic_match = self.jdbc_classic.search(jdbc_string) if ezconnect_match or classic_match: if ezconnect_match: self.username = ezconnect_match.group('username') self.password = ezconnect_match.group('password') self.ezdb_name = ezconnect_match.group('ezdb_name') self.hostname = ezconnect_match.group('hostname') self.port = ezconnect_match.group('port') self.service_name = ezconnect_match.group('service_name') self.instance_name = ezconnect_match.group('instance_name') if classic_match: self.username = classic_match.group('username') self.password = classic_match.group('password')
self.connection_string = classic_match.group('connection_string') self.hostname = classic_match.group('hostname') self.port = classic_match.group('port') self.service_name = classic_match.group('service_name') else: raise JDBCParserError("JDBC string not recognized") </code></pre>
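As a quick sanity check, here is a self-contained sketch exercising the ezconnect pattern from the class above (the regex is reproduced verbatim, just with raw strings to avoid invalid-escape warnings, and the sample URL is one of those listed in the question):

```python
import re

# The ezconnect pattern from JDBCParser, standalone for testing.
JDBC_EZCONNECT = re.compile(
    r"^jdbc:oracle:thin:"
    r"((?P<username>[a-zA-Z0-9]{1,})"
    r"([\/](?P<password>[a-zA-Z0-9]{1,})){0,1}){0,1}"
    r"@"
    r"(?P<ezdb_name>((\/\/){0,1}"
    r"(?P<hostname>[a-zA-Z0-9\.\-]{1,})"
    r"(\:(?P<port>\d+)){0,1})"
    r"(\/(?P<service_name>[a-zA-Z\.\-0-9]{1,}"
    r"(\:(?P<server_type>[a-zA-Z]{1,})){0,1}){0,1}"
    r"(\/(?P<instance_name>[a-zA-Z0-9]{1,})){0,1}){0,1})$")

m = JDBC_EZCONNECT.search(
    "jdbc:oracle:thin:wat/wat@//sales-server.us.acme.com/sales.us.oracle.com")
print(m.group('username'))      # wat
print(m.group('hostname'))      # sales-server.us.acme.com
print(m.group('service_name'))  # sales.us.oracle.com
```

The port group is left as `None` here because the sample URL does not carry one; optional groups that fail to participate in the match come back as `None` from `group()`.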
0
2016-08-16T07:19:30Z
[ "python", "oracle", "parsing", "jdbc" ]
python - comparing a newly written file with filecmp.cmp() always returns False?
38,919,521
<p>I must be making a stupid mistake here, because this <em>should</em> be working. I'm thinking the file is staying open or something, and it's driving me nuts. </p> <p>This is for some regression test cases I have where I'm comparing generated output of a script run against mock files to known good output files (key files). </p> <p>Here is a simple example:</p> <pre><code>def run_and_compare(self, key_file, out_file, option): print filecmp.cmp(out_file, key_file) # always True (as long as I've run this before, so the out_file exists already) cmd = './analyze_files.py -f option' with open(out_file, 'wb') as out: subprocess.Popen(cmd.split(), stdout=out, stderr=subprocess.PIPE) print filecmp.cmp(out_file, key_file) # always False time.sleep(5) print filecmp.cmp(out_file, key_file) # always True </code></pre> <p>I really don't want to keep that sleep in the test! How can I be sure the out file is OK to compare without using the sleep? I've tried using out.close(), but it doesn't work, and shouldn't be needed as long as I'm using 'with'. I'm using python 2.6.4 if that matters here.</p>
0
2016-08-12T13:44:58Z
38,919,745
<p>I'd suggest you add a <code>wait</code> to your subprocess to wait for it to complete</p> <pre><code>with open(out_file, 'wb') as out: p=subprocess.Popen(cmd.split(), stdout=out, stderr=subprocess.PIPE) p.wait() </code></pre> <p>If you don't wait, the subprocess starts with the file <code>out</code> as its output and returns immediately (it runs in the background). When you compare both files, one is probably still empty, hence the False.</p> <p>After a while, the subprocess ends, <code>out</code> is no longer used and probably garbage collected, handle closed: your file is valid. (I'm not saying that is exactly what's going on here, but the lack of <code>p.wait()</code> is surely the issue.)</p> <p>Aside from that, I have always wondered why people run subprocesses involving python commands when it's so simple to import them and call their functions directly, thus benefiting from the exception chain and a single process, and avoiding all these inter-process communication issues.</p>
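A runnable sketch of that fix; since the original `analyze_files.py` is not available here, a tiny inline Python command stands in for it, and the stderr pipe is left out to keep the sketch minimal:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the original './analyze_files.py' command.
cmd = [sys.executable, '-c', 'print("expected output")']
out_file = os.path.join(tempfile.mkdtemp(), 'out.txt')

with open(out_file, 'wb') as out:
    p = subprocess.Popen(cmd, stdout=out)
    p.wait()  # block until the child has finished and flushed its output

with open(out_file) as f:
    result = f.read().strip()
print(result)  # expected output
```

With the `p.wait()` in place, the comparison against the key file can run immediately after the `with` block, no sleep needed.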
1
2016-08-12T13:54:26Z
[ "python" ]
python - comparing a newly written file with filecmp.cmp() always returns False?
38,919,521
<p>I must be making a stupid mistake here, because this <em>should</em> be working. I'm thinking the file is staying open or something, and it's driving me nuts. </p> <p>This is for some regression test cases I have where I'm comparing generated output of a script run against mock files to known good output files (key files). </p> <p>Here is a simple example:</p> <pre><code>def run_and_compare(self, key_file, out_file, option): print filecmp.cmp(out_file, key_file) # always True (as long as I've run this before, so the out_file exists already) cmd = './analyze_files.py -f option' with open(out_file, 'wb') as out: subprocess.Popen(cmd.split(), stdout=out, stderr=subprocess.PIPE) print filecmp.cmp(out_file, key_file) # always False time.sleep(5) print filecmp.cmp(out_file, key_file) # always True </code></pre> <p>I really don't want to keep that sleep in the test! How can I be sure the out file is OK to compare without using the sleep? I've tried using out.close(), but it doesn't work, and shouldn't be needed as long as I'm using 'with'. I'm using python 2.6.4 if that matters here.</p>
0
2016-08-12T13:44:58Z
38,920,342
<p>It doesn't matter that you opened the output file object as a context manager. It wouldn't even matter if you explicitly, manually closed the file object.</p> <p>That's because when you hand a Python file object to <code>subprocess.Popen()</code>, all it takes from that file object is the <em>file handle</em>, an integer number that your OS uses to communicate about open files. The subprocess then uses <a href="https://docs.python.org/2/library/os.html#os.dup2" rel="nofollow"><code>os.dup2()</code></a> to clone that filehandle onto the STDOUT file handle of a child process; this is what causes the output of that child process to go to your designated file on disk.</p> <p>Because the file handle is duped, closing the original Python file object (and indirectly, the original OS file handle) won't <em>actually</em> close the file, because that second file handle still keeps it open.</p> <p>The reason that you see the file data appear after waiting a few seconds, is because eventually the subprocess you created will complete and <em>only then</em> is that other, duped file handle closed.</p> <p>Instead of waiting for a few seconds, wait for the subprocess to complete using the <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow"><code>Popen.communicate()</code> method</a>:</p> <pre><code>p = subprocess.Popen(cmd.split(), stdout=open(out_file, 'wb'), stderr=subprocess.PIPE) stdout, stderr = p.communicate() # stdout will always be None </code></pre> <p>I inlined the <code>open()</code> call, because there is no other use for that file object once <code>subprocess.Popen()</code> retrieved the file handle from it. 
You could also use <code>os.open()</code> instead of <code>open()</code> (same arguments) and save yourself creating a Python file object when only a file handle is needed.</p> <p>Don't use <code>p.wait()</code>; because you are using a <em>pipe</em> for the STDERR stream of the child process, you can deadlock the process if you <em>don't read</em> from STDERR but the child process writes a lot of data to it. You'd end up waiting forever.</p>
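To make that concrete, a minimal self-contained sketch (a throwaway inline Python command stands in for the original script, which is not available here):

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for './analyze_files.py': writes to both streams.
cmd = [sys.executable, '-c',
       'import sys; print("to stdout"); sys.stderr.write("to stderr")']
out_file = os.path.join(tempfile.mkdtemp(), 'out.txt')

p = subprocess.Popen(cmd, stdout=open(out_file, 'wb'),
                     stderr=subprocess.PIPE)
stdout, stderr = p.communicate()  # drains STDERR and waits; no deadlock

print(stdout)                  # None: stdout went to the file, not a pipe
print(stderr.decode())         # to stderr
with open(out_file) as f:
    print(f.read().strip())    # to stdout
```

Once `communicate()` returns, the child has exited and the duped handle on the output file is closed, so `filecmp.cmp()` can safely run right away.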
2
2016-08-12T14:25:13Z
[ "python" ]
Correlation: make a graph
38,919,546
<p>I have a dataframe and I am trying to plot a graph. I use</p> <pre><code>df = pd.read_excel('resp1.xlsx') corr = df.corr() sns.set(context="paper", font="monospace") df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0) f, ax = plt.subplots(figsize=(12, 9)) sns.heatmap(corr, vmax=.8, square=True) networks = corr.columns.get_level_values("network") for i, network in enumerate(networks): if i and network != networks[i - 1]: ax.axhline(len(networks) - i, c="w") ax.axvline(i, c="w") f.tight_layout() </code></pre> <p>but it returns <code>ImportError: DLL load failed: %1 is not a valid Win32 application.</code> in PyCharm and <code>KeyError: 'Level network must be same as name (None)'</code> in Spyder (Anaconda). How can I fix that?</p>
0
2016-08-12T13:46:13Z
38,919,697
<p>I'm not sure, but I think those %1 errors are related to mixing 64-bit and 32-bit applications/DLLs. You also have the <code>win32</code> hint. Perhaps you are using a 64-bit Python with a 32-bit package (pandas or another) or the other way around.</p>
0
2016-08-12T13:51:58Z
[ "python", "pandas", "matplotlib", "seaborn" ]
Do the Templates in django need to move to public while using Passenger
38,919,709
<p>Very familiar with python and Flask, but just getting started with Django.</p> <p><strong>ENVIRONMENT</strong></p> <p>Dreamhost, Django 1.9 (I think), python 2.7.3, Passenger </p> <p><strong>PROBLEM STMT</strong></p> <p>I have my app working and can generate an index page using HttpResponse straight from views.py. However, when I try to incorporate templates, I am getting "TemplateDoesNotExist at /"</p> <p>Because I'm new, I'm not sure where to start troubleshooting. Is this a passenger issue or Django?</p> <p><strong>DETAILS</strong></p> <p>I've modified my [DIRS] in settings.py to reference the templates directory.</p> <pre><code>'DIRS': ['CTracker/templates'], </code></pre> <p>I then created the templates directory under CTracker and the 'clientadmin' (the name of my app) directory and finally the index.html file.</p> <pre><code>-CTracker | |-CTracker | | | -templates | | | -clientadmin | | | -index.html |-manage.py </code></pre> <p>My function in views.py uses the following return.</p> <pre><code>return render(request, 'clientadmin/index.html', { 'clients': client, }) </code></pre> <p><strong>Troubleshooting Done</strong></p> <ul> <li>Double checked the files exist and the path to the file appears correct in both the filesystem and the settings.py file.</li> <li>I tried running 'manage.py collectstatics' but 0 files were moved.</li> </ul> <p>Thank you in advance. Any help is appreciated.</p>
0
2016-08-12T13:52:46Z
38,920,236
<p>The items in the <code>DIRS</code> list should be the full path, for example:</p> <pre><code>'DIRS': ['/path/to/CTracker/CTracker/templates'], </code></pre> <p>The Django settings file should have a <code>BASE_DIR</code> defined. You can use this with <code>os.path.join</code> to avoid hardcoding the full path.</p> <pre><code>'DIRS': [os.path.join(BASE_DIR, 'CTracker', 'templates')] </code></pre>
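For illustration, the join then resolves like this; the `/path/to` prefix is only a placeholder, and the `BASE_DIR` expression shown in the comment is what a default Django `settings.py` generates:

```python
import os

# In a default settings.py, BASE_DIR is derived from the settings file:
#   BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
BASE_DIR = '/path/to/CTracker'  # placeholder value for this sketch

template_dir = os.path.join(BASE_DIR, 'CTracker', 'templates')
print(template_dir)  # /path/to/CTracker/CTracker/templates (on POSIX)
```

Using `os.path.join` keeps the setting portable when the project is checked out at a different location.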
1
2016-08-12T14:18:49Z
[ "python", "django", "passenger" ]
Pymongo $in Query Not Working
38,919,756
<p>Seeing some strange behavior in a Pymongo <code>$in</code> query. Looking for records that meet the following query:</p> <pre><code>speciesCollection.find({"SPCOMNAME":{"$in":['paddlefish','lake sturgeon']}}) </code></pre> <p>The query returns no records.</p> <p>If I change it to find_one then it works, returning the last value for Lake Sturgeon. The field is a text field with one value. So I am looking for records that match paddlefish or Lake Sturgeon.</p> <p>It works fine in Mongo Shell like this:</p> <pre><code>speciesCollection.find({SPCOMNAME:{$in: ['paddlefish','lake sturgeon']}},{_id:0}) </code></pre> <p>Here is the result from shell</p> <pre><code>{ "SPECIES_ID" : 1, "SPECIES_AB" : "LKS", "SPCOMNAME" : "lake sturgeon", "SP_SCINAME" : "Acipenser fulvescens" } { "SPECIES_ID" : 101, "SPECIES_AB" : "PAH", "SPCOMNAME" : "paddlefish", "SP_SCINAME" : "Polyodon spathula" } </code></pre> <p>Am I missing something here?</p>
4
2016-08-12T13:54:52Z
38,920,164
<p>I think you have a typo or some other error in your program as I just did a test with your sample data and query and it works - see the GIF</p> <p>Below is my test code which connects to the database called <code>so</code> and the collection <code>speciesCollection</code>, maybe you find the error in yours with it </p> <pre><code>import pymongo client = pymongo.MongoClient('dockerhostlinux1', 30000) db = client.so coll = db.speciesCollection result = coll.find({"SPCOMNAME":{"$in":['paddlefish','lake sturgeon']}}) for doc in result: print(doc) </code></pre> <p><a href="http://i.stack.imgur.com/mnAH5.gif" rel="nofollow"><img src="http://i.stack.imgur.com/mnAH5.gif" alt="GIF"></a></p>
3
2016-08-12T14:14:51Z
[ "python", "mongodb", "pymongo", "pymongo-3.x" ]
Get package version for conda meta.yaml from source file
38,919,840
<p>I'm trying to reorganize my python package versioning so I only have to update the version in one place, preferably a python module or a text file. For all the places I need my version there seems to be a way to load it from the source <code>from mypkg import __version__</code> or at least parse it out of the file as text. I can't seem to find a way to do it with my conda meta.yaml file though. Is there a way to load the version from an external source in the meta.yaml file?</p> <p>I know there are the git environment variables, but I don't want to tag every alpha/beta/rc commit that gets tested through our local conda repository. I could load the python object using <code>!!python/object</code> in pyyaml, but conda doesn't support arbitrary python execution. I don't see a way to do it with any other jinja2 features. I could also write a script to update the version number in more than one place, but I was really hoping to only modify one file as the definitive version number. Thanks for any help.</p>
3
2016-08-12T13:58:13Z
39,009,038
<p>There are lots of ways to get to your endpoint. Here's what conda itself does...</p> <p>The source of truth for conda's version information is <code>__version__</code> in <code>conda/__init__.py</code>. It can be loaded programmatically within python code as <code>from conda import __version__</code> as you suggest. It's also hard-wired into <code>setup.py</code> <a href="https://github.com/conda/conda/blob/5a8e020/setup.py#L56" rel="nofollow">here</a> (note <a href="https://github.com/conda/conda/blob/5a8e020/setup.py#L25-L31" rel="nofollow">this code</a> too), so from the command line <code>python setup.py --version</code> is the canonical way to get that information.</p> <p>In 1.x versions of conda-build, putting a line</p> <pre><code>$PYTHON setup.py --version &gt; __conda_version__.txt </code></pre> <p>in <code>build.sh</code> would set the version for the built package using our source of truth. <strong>The <code>__conda_version__.txt</code> file is deprecated</strong>, however, and it will likely be removed with the release of conda-build 2.0. In recent versions of conda-build, the preferred way to do this is to use <code>load_setup_py_data()</code> within a jinja2 context, which will give you access to all the metadata from <code>setup.py</code>. Specifically, in the <code>meta.yaml</code> file, we'd have something like this</p> <pre><code>package: name: conda version: "{{ load_setup_py_data().version }}" </code></pre> <hr> <p>Now, how the <code>__version__</code> variable is set in <code>conda/__init__.py</code>...</p> <p>What you <a href="https://github.com/conda/conda/blob/5a8e020/conda/__init__.py#L23" rel="nofollow">see in the source code</a> is a call to the <a href="https://github.com/kalefranz/auxlib/blob/8523d3f/auxlib/packaging.py#L144" rel="nofollow"><code>auxlib.packaging.get_version()</code></a> function. 
This function does the following in order</p> <ol> <li>look first for a file <code>conda/.version</code>, and if found return the contents as the version identifier</li> <li>look next for a <code>VERSION</code> environment variable, and if set return the value as the version identifier</li> <li>look last at the <code>git describe --tags</code> output, and return a version identifier if possible (must have git installed, must be a git repo, etc etc)</li> <li>if none of the above yield a version identifier, return <code>None</code></li> </ol> <p>Now there's just one more final trick. In conda's <a href="https://github.com/conda/conda/blob/8523d3f/setup.py#L78-L81" rel="nofollow"><code>setup.py</code> file</a>, we set <code>cmdclass</code> for <code>build_py</code> and <code>sdist</code> to those provided by <code>auxlib.packaging</code>. Basically we have</p> <pre><code>from auxlib import packaging setup( cmdclass={ 'build_py': packaging.BuildPyCommand, 'sdist': packaging.SDistCommand, } ) </code></pre> <p>These special command classes actually modify the <code>conda/__init__.py</code> file in built/installed packages so the <code>__version__</code> variable is hard-coded to a string literal, and doesn't use the <code>auxlib.packaging.get_version()</code> function.</p> <hr> <p>In your case, with not wanting to tag every release, you could use all of the above, and from the command line set the version using a <code>VERSION</code> environment variable. Something like</p> <pre><code>VERSION=1.0.0alpha1 conda build conda.recipe </code></pre> <p>In your <code>build</code> section meta.yaml recipe, you'll need add a <code>script_env</code> key to tell conda-build to pass the <code>VERSION</code> environment variable all the way through to the build environment.</p> <pre><code>build: script_env: - VERSION </code></pre>
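A simplified, hypothetical sketch of that fallback chain (the real implementation is `auxlib.packaging.get_version()`, linked above; this only illustrates the order of the lookups):

```python
import os
import subprocess

def get_version(package_dir):
    """Return a version identifier, trying each source in order."""
    # 1. a `.version` file inside the package
    version_file = os.path.join(package_dir, '.version')
    if os.path.isfile(version_file):
        with open(version_file) as f:
            return f.read().strip()
    # 2. a VERSION environment variable
    if os.environ.get('VERSION'):
        return os.environ['VERSION']
    # 3. `git describe --tags` (needs git installed and a tagged repo)
    try:
        return subprocess.check_output(
            ['git', 'describe', '--tags'],
            stderr=subprocess.STDOUT).decode().strip()
    except (OSError, subprocess.CalledProcessError):
        # 4. none of the sources yielded a version
        return None

os.environ['VERSION'] = '1.0.0alpha1'
print(get_version('/nonexistent'))  # 1.0.0alpha1
```

The last two lines mirror the `VERSION=1.0.0alpha1 conda build conda.recipe` invocation shown below: with no `.version` file present, the environment variable wins.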
1
2016-08-18T02:25:58Z
[ "python", "conda" ]
Second queue is not defined [python]
38,919,877
<p>I have 3 processes running in one script. Process 1 passes data to Process 2, and then Process 2 passes data to Process 3. When I put data into queue2, the error "Global name "queue2" is not defined" occurs, and I am stuck on this error now...</p> <pre><code>if __name__ == '__main__': queue1 = mp.Queue() queue2 = mp.Queue() p1 = mp.Process(target=f2, args=(queue1,)) p1.start() p2 = mp.Process(target=f3, args=(queue2,)) p2.start() f1() def f1(): # do something to a get x queue1.put(x) def f2(q): a = q.get() # do something to a, to produce b queue2.put(b) # error happens here: Global name "queue2" is not defined def f3(q): c = q.get() # keeping processing c... </code></pre>
0
2016-08-12T13:59:36Z
38,920,026
<p>Just as you passed <code>queue1</code> to <code>f2</code>, you also need to pass <code>queue2</code>.</p>
1
2016-08-12T14:07:15Z
[ "python", "multiprocessing" ]
Second queue is not defined [python]
38,919,877
<p>I have 3 processes running in one script. Process 1 passes data to Process 2, and then Process 2 passes data to Process 3. When I put data into queue2, the error "Global name "queue2" is not defined" occurs, and I am stuck on this error now...</p> <pre><code>if __name__ == '__main__': queue1 = mp.Queue() queue2 = mp.Queue() p1 = mp.Process(target=f2, args=(queue1,)) p1.start() p2 = mp.Process(target=f3, args=(queue2,)) p2.start() f1() def f1(): # do something to a get x queue1.put(x) def f2(q): a = q.get() # do something to a, to produce b queue2.put(b) # error happens here: Global name "queue2" is not defined def f3(q): c = q.get() # keeping processing c... </code></pre>
0
2016-08-12T13:59:36Z
38,920,057
<p>You can declare the queues as global:</p> <pre><code>def f2(q): global queue2 a = q.get() queue2.put(b) </code></pre>
0
2016-08-12T14:09:01Z
[ "python", "multiprocessing" ]
Second queue is not defined [python]
38,919,877
<p>I have 3 processes running in one script. Process 1 passes data to Process 2, and then Process 2 passes data to Process 3. When I put data into queue2, the error "Global name "queue2" is not defined" occurs, and I am stuck on this error now...</p> <pre><code>if __name__ == '__main__': queue1 = mp.Queue() queue2 = mp.Queue() p1 = mp.Process(target=f2, args=(queue1,)) p1.start() p2 = mp.Process(target=f3, args=(queue2,)) p2.start() f1() def f1(): # do something to a get x queue1.put(x) def f2(q): a = q.get() # do something to a, to produce b queue2.put(b) # error happens here: Global name "queue2" is not defined def f3(q): c = q.get() # keeping processing c... </code></pre>
0
2016-08-12T13:59:36Z
38,920,137
<p>This works:</p> <pre><code>import multiprocessing as mp queue1 = mp.Queue() queue2 = mp.Queue() def f1(q): x = 5 # do something to a get x q.put(x) def f2(in_queue, out_queue): a = in_queue.get() b = a + 2 # do something to a, to produce b out_queue.put(b) def f3(q): c = q.get() print c f1(queue1) p1 = mp.Process(target=f2, args=(queue1, queue2)) p1.start() p2 = mp.Process(target=f3, args=(queue2,)) p2.start() </code></pre> <p>Your code doesn't return the error you seem to have; it returns "f2 not defined", since when you spawn the process <code>p1</code>, <code>f2</code> is not a defined variable yet. The rule when you fork is that at creation time your processes must see the variables they use, i.e. they must be in the current scope.</p> <p>To put it clearly, at process-spawning time you inherit the current namespace from the parent process.</p>
0
2016-08-12T14:13:13Z
[ "python", "multiprocessing" ]
How to return data from an API in Django?
38,920,003
<p>I'm trying to learn how to use APIs in Django and I want to return some simple data from one within a web page in html. The API is Mozscape and when running it in a terminal one can obtain the score of a website out of 100 like so:</p> <pre><code>from mozscape import Mozscape client = Mozscape( 'api_user_id', 'secret_key') url = 'http://www.google.com' get_da = client.urlMetrics(url, cols=68719476736) print(get_da) </code></pre> <p>and this prints the following</p> <pre><code>{u'pda': 100} </code></pre> <p>the '100' is all I want there. I want a user to enter a url into a form in a page in Django and to get that score int back so I have made the following models, views and form</p> <pre><code>class DomainAuthority(models.Model): url = models.URLField(max_length=300) def __str__(self): return self.url class Meta: verbose_name = 'Domain' verbose_name_plural = 'Domains' </code></pre> <p>views.py</p> <pre><code>def DomainAuthorityView(request): form = DomainAuthorityForm(request.POST or None) if form.is_valid(): new_domain = form.save(commit=False) new_domain.save() return render(request, 'domain_authority.html', {'form': form}) </code></pre> <p>forms.py</p> <pre><code>class DomainAuthorityForm(forms.ModelForm): class Meta: model = DomainAuthority fields = ['url'] </code></pre> <p>so I have the form working and when a url is entered in the html form it's saved in the admin backend but what I don't know how to do now is how to pass that url into the Mozscape API so that I can get the score back.</p> <p>I took a look at the Django rest framework and installed it and followed some quick tutorial videos on Youtube and other places but in those examples they were taking saved Django objects such as blog posts and returning them as JSON data which is not what I want to do.</p> <p>I tried importing the API into the views file and then adding this line into the view</p> <pre><code>get_da = client.urlMetrics(new_domain, cols=68719476736) </code></pre> <p>but then I get this error
after entering the url into the form in the web page</p> <pre><code>&lt;DomainAuthority: https://www.google.com&gt; is not JSON serializable </code></pre> <p>what do I need to do here to pass the user-inputted URLs to the API and return the correct response in a web page?</p> <p>thanks</p> <p>EDIT - UPDATED VIEW as of 19th Aug</p> <pre><code>def DomainAuthorityView(request): form = DomainAuthorityForm(request.POST or None) if form.is_valid(): new_domain = form.save(commit=False) new_domain.save() response = requests.get(new_domain.url, cols=68719476736) #response = requests.get(client.urlMetrics(new_domain.url, cols=68719476736)) json_response = response.json() score = json_response['pda'] return render(request, 'domain_authority_checked.html', {'score': score}) else: return render(request, 'domain_authority.html', {'form': form}) </code></pre> <p>so now it should redirect after successful form completion with url and the url is passed to the API to get the score and then redirects to 'domain_authority_checked.html' with just this</p> <pre><code>{{ score }} </code></pre> <p>so I have two outcomes here, if I pass in 'client.urlMetrics' into response I can load the 'domain_authority.html' but after a URL is input into the form an error page returns with this</p> <pre><code>InvalidSchema at /domainauthority/ No connection adapters were found for '{'pda': 100}' </code></pre> <p>if I don't pass 'client.urlMetrics' to response then Django doesn't know what 'cols' is and returns this</p> <pre><code>TypeError at /domainauthority/ request() got an unexpected keyword argument 'cols' </code></pre>
0
2016-08-12T14:06:09Z
38,920,227
<p>You can use:</p> <pre><code>return HttpResponse(json.dumps(data), content_type='application/json') </code></pre> <p>instead of rendering the form. You only need to import json at the top and create a dict named "data" holding the values you want to return.</p>
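The serialization step itself is plain standard library; for example (note that recent Django versions also provide django.http.JsonResponse, which wraps exactly this):

```python
import json

data = {'score': 100}  # whatever values the view wants to return
body = json.dumps(data)
print(body)  # {"score": 100}
```

That `body` string is what gets handed to `HttpResponse` with the `application/json` content type.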
1
2016-08-12T14:18:23Z
[ "python", "json", "django", "api" ]
How to return data from an API in Django?
38,920,003
<p>I'm trying to learn how to use APIs in Django and I want to return some simple data from one within a web page in html. The API is Mozscape and when running it in a terminal one can obtain the score of a website out of 100 like so:</p> <pre><code>from mozscape import Mozscape client = Mozscape( 'api_user_id', 'secret_key') url = 'http://www.google.com' get_da = client.urlMetrics(url, cols=68719476736) print(get_da) </code></pre> <p>and this prints the following</p> <pre><code>{u'pda': 100} </code></pre> <p>the '100' is all I want there. I want a user to enter a url into a form in a page in Django and to get that score int back so I have made the following models, views and form</p> <pre><code>class DomainAuthority(models.Model): url = models.URLField(max_length=300) def __str__(self): return self.url class Meta: verbose_name = 'Domain' verbose_name_plural = 'Domains' </code></pre> <p>views.py</p> <pre><code>def DomainAuthorityView(request): form = DomainAuthorityForm(request.POST or None) if form.is_valid(): new_domain = form.save(commit=False) new_domain.save() return render(request, 'domain_authority.html', {'form': form}) </code></pre> <p>forms.py</p> <pre><code>class DomainAuthorityForm(forms.ModelForm): class Meta: model = DomainAuthority fields = ['url'] </code></pre> <p>so I have the form working and when a url is entered in the html form it's saved in the admin backend but what I don't know how to do now is how to pass that url into the Mozscape API so that I can get the score back.</p> <p>I took a look at the Django rest framework and installed it and followed some quick tutorial videos on Youtube and other places but in those examples they were taking saved Django objects such as blog posts and returning them as JSON data which is not what I want to do.</p> <p>I tried importing the API into the views file and then adding this line into the view</p> <pre><code>get_da = client.urlMetrics(new_domain, cols=68719476736) </code></pre> <p>but then I get this error
after entering the url into the form in the web page</p> <pre><code>&lt;DomainAuthority: https://www.google.com&gt; is not JSON serializable </code></pre> <p>what do I need to do here to pass the user-inputted URLs to the API and return the correct response in a web page?</p> <p>thanks</p> <p>EDIT - UPDATED VIEW as of 19th Aug</p> <pre><code>def DomainAuthorityView(request): form = DomainAuthorityForm(request.POST or None) if form.is_valid(): new_domain = form.save(commit=False) new_domain.save() response = requests.get(new_domain.url, cols=68719476736) #response = requests.get(client.urlMetrics(new_domain.url, cols=68719476736)) json_response = response.json() score = json_response['pda'] return render(request, 'domain_authority_checked.html', {'score': score}) else: return render(request, 'domain_authority.html', {'form': form}) </code></pre> <p>so now it should redirect after successful form completion with url and the url is passed to the API to get the score and then redirects to 'domain_authority_checked.html' with just this</p> <pre><code>{{ score }} </code></pre> <p>so I have two outcomes here, if I pass in 'client.urlMetrics' into response I can load the 'domain_authority.html' but after a URL is input into the form an error page returns with this</p> <pre><code>InvalidSchema at /domainauthority/ No connection adapters were found for '{'pda': 100}' </code></pre> <p>if I don't pass 'client.urlMetrics' to response then Django doesn't know what 'cols' is and returns this</p> <pre><code>TypeError at /domainauthority/ request() got an unexpected keyword argument 'cols' </code></pre>
0
2016-08-12T14:06:09Z
38,920,286
<p>I suggest this approach:</p> <pre><code>import requests response = requests.get(url) json_response = response.json() score = json_response['key_name'] </code></pre> <p>You can then simply render a template, add the score to the template context and display the value using {{ }}.</p> <p>You may also want to define a rest_framework serializer (otherwise you don't need django_rest_framework) and verify the response against this serializer in order to ensure that you've received what you expected:</p> <pre><code>serializer = MySerializer(data=json_response) if serializer.is_valid(): score = json_response['key_name'] </code></pre>
1
2016-08-12T14:22:33Z
[ "python", "json", "django", "api" ]
numpy vectorize max over segments of array
38,920,043
<p>How can I get rid of the python for loop? t is not uniformly spaced in general (just in the simple example). Solutions using pandas are also fine.</p> <pre><code>import numpy as np n = 100 t = np.arange(n) y = np.arange(n) edges = np.array([2., 5.5, 19, 30, 50, 72, 98]) indices = np.searchsorted(t, edges) maxes = np.zeros(len(edges)-1) for i in range(len(edges)-1): maxes[i] = np.max(y[indices[i]:indices[i+1]]) print(maxes) </code></pre> <p>Update: I think <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html#numpy.ufunc.reduceat" rel="nofollow">reduceat</a> might do it but I don't understand the syntax.</p>
2
2016-08-12T14:08:11Z
38,920,668
<p>reduceat does the job nicely. I didn't know about that functionality 30 minutes ago.</p> <pre><code>maxes = np.maximum.reduceat(y, indices)[:-1] </code></pre>
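Plugging it into the example from the question and checking it against the original loop:

```python
import numpy as np

n = 100
t = np.arange(n)
y = np.arange(n)
edges = np.array([2., 5.5, 19, 30, 50, 72, 98])
indices = np.searchsorted(t, edges)

# reduceat reduces over each segment [indices[i], indices[i+1]); the
# final reduction runs to the end of the array, so drop it with [:-1]
maxes = np.maximum.reduceat(y, indices)[:-1]
print(maxes.tolist())  # [5, 18, 29, 49, 71, 97]
```

This matches the Python-loop version element for element, without any explicit iteration.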
3
2016-08-12T14:41:47Z
[ "python", "numpy", "vectorization" ]
PyInstaller on mac can't find libpython2.7
38,920,076
<p>I am trying to make a binary version of a Python script using PyInstaller 2.0. I am using a basic "hello world" tkinter script but imported a few dependencies that i need for a project to test Pyinstaller out. I am on a mac running Yosemite 10.10.5. This is my script:</p> <pre><code>#!/usr/bin/env python from Tkinter import * import Tix import tkMessageBox from sklearn import linear_model, decomposition, preprocessing from sklearn.preprocessing import Imputer from sklearn.cross_validation import cross_val_score, cross_val_predict from sklearn.neighbors import KDTree import numpy as np import collections import array import math import csv from collections import OrderedDict import matplotlib matplotlib.use("TkAgg") import matplotlib.pyplot as plt import matplotlib.dates as dates from matplotlib.mlab import PCA from mpl_toolkits.mplot3d import Axes3D from scipy.stats import mode import heapq import sqlite3 from sqlite3 import datetime root = Tk() w = Label(root, text="Hello, world!") w.pack() root.mainloop() </code></pre> <p>This runs perfectly. 
However, when I go to build the binary using</p> <pre><code>$pyinstaller -w -F app.py
</code></pre> <p>then I get this error:</p> <pre><code>57665 ERROR: Can not find path ./libpython2.7.dylib (needed by //anaconda/bin/python)
Traceback (most recent call last):
  File "//anaconda/bin/pyinstaller", line 11, in &lt;module&gt;
    sys.exit(run())
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/__main__.py", line 90, in run
    run_build(pyi_config, spec_file, **vars(args))
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/__main__.py", line 46, in run_build
    PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 788, in main
    build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 734, in build
    exec(text, spec_namespace)
  File "&lt;string&gt;", line 16, in &lt;module&gt;
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 212, in __init__
    self.__postinit__()
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/building/datastruct.py", line 178, in __postinit__
    self.assemble()
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 543, in assemble
    self._check_python_library(self.binaries)
  File "//anaconda/lib/python2.7/site-packages/PyInstaller/building/build_main.py", line 626, in _check_python_library
    raise IOError(msg)
IOError: Python library not found: libpython2.7.dylib, Python, .Python
    This would mean your Python installation doesn't come with proper library files.
    This usually happens by missing development package, or unsuitable build parameters of Python installation.

    * On Debian/Ubuntu, you would need to install Python development packages
      * apt-get install python3-dev
      * apt-get install python-dev
    * If you're building Python by yourself, please rebuild your Python with `--enable-shared` (or, `--enable-framework` on Darwin)
</code></pre> <p>Does anyone have any ideas how I can fix this? This error also occurs when I am using the basic hello world example without the extra dependencies. I have the libpython2.7.dylib file in //anaconda/lib and I tried to link it to usr/lib/ using</p> <pre><code>$sudo ln -s /usr/local/lib/libpython2.7.dylib //anaconda/lib/libpython2.7.dylib
</code></pre> <p>however it is not fixing the issue...</p>
1
2016-08-12T14:09:56Z
39,001,061
<p>Firstly, I see you are using conda. I ran into the exact same issue on Mac, specifically:</p> <pre><code>ERROR: Can not find path ./libpython2.7.dylib
</code></pre> <p>trying to deploy an app I put together in a conda environment. </p> <p>After a lot of Googling and reading, I found that the current PyInstaller does not handle dynamic libraries with @rpath references very well. You can confirm that the library reference uses @rpath by running "otool -L" on the Python binary, which for you looks like //anaconda/bin/python (might be a link to //anaconda/bin/python2.7).</p> <p>Fortunately, this was recently addressed on a fork of PyInstaller for conda. The specific patch is at <a href="https://github.com/conda-forge/pyinstaller-feedstock/pull/2" rel="nofollow">https://github.com/conda-forge/pyinstaller-feedstock/pull/2</a></p> <p>What I did to use this forked version is uninstall the PyInstaller that I had downloaded in my conda environment via pip, and then used the instructions from <a href="https://github.com/conda-forge/pyinstaller-feedstock" rel="nofollow">https://github.com/conda-forge/pyinstaller-feedstock</a> to use this fork of PyInstaller in my conda environment. Specifically, these commands:</p> <pre><code>conda config --add channels conda-forge
conda install pyinstaller
</code></pre> <p>So I'd recommend switching to this patched version of PyInstaller specifically for conda environments, and see if it helps you get past the problem like it did for me.</p>
1
2016-08-17T15:46:05Z
[ "python", "exe", "pyinstaller" ]
Showing default required message after customizing template
38,920,133
<p>Hi guys. I'm learning to use Django and I'm facing some issues: I created some forms with required fields and made some changes in the template, and now the default required message is no longer shown when a field is left empty.</p> <p>Here is my code in forms.py:</p> <pre><code>class CreateProjectForm(forms.Form):
    project_name = forms.CharField(required=True)
    project_description = forms.CharField(widget=forms.Textarea(attrs={'style':'resize:none;'}), required=True)
    project_expiration = forms.CharField(required=False)

    def __init__(self, *args, **kwargs):
        super(CreateProjectForm, self).__init__(*args, **kwargs)
        self.fields['project_name'].label = "Project Name:"
        self.fields['project_description'].label = "Project Description:"
        self.fields['project_expiration'].label = "Expiration Date:"
        for name, field in self.fields.items():
            field.widget.attrs['class'] = 'form-control'
</code></pre> <p>And here is my code in the template:</p> <pre><code>&lt;form class="form-horizontal" role="form" action="" method="post"&gt;
    {% csrf_token %}
    {% for field in form %}
    &lt;div class="form-group"&gt;
        &lt;div class="col-sm-4"&gt;
            &lt;label for="{{ field.id_for_label }}" class="control-label"&gt;{{ field.label }}&lt;/label&gt;
        &lt;/div&gt;
        &lt;div class="col-sm-8"&gt;
            {{field}}
        &lt;/div&gt;
    &lt;/div&gt;
    {% endfor %}
    &lt;button type="submit" class="btn btn-primary"&gt;Create Project&lt;/button&gt;
&lt;/form&gt;
</code></pre> <p>I can't understand why everything works if I don't do the for loop in the template, but if I try to iterate over the form it stops showing the messages.</p> <p>Thanks!</p>
0
2016-08-12T14:13:03Z
38,920,447
<p>If you <a href="https://docs.djangoproject.com/en/1.10/topics/forms/#looping-over-the-form-s-fields" rel="nofollow">loop over the form's fields in the template</a>, then you need to include the errors manually with <code>{{ field.errors }}</code>.</p> <pre><code>&lt;form class="form-horizontal" role="form" action="" method="post"&gt;
    {% csrf_token %}
    {% for field in form %}
    &lt;div class="form-group"&gt;
        {{ field.errors }}
        &lt;div class="col-sm-4"&gt;
            &lt;label for="{{ field.id_for_label }}" class="control-label"&gt;{{ field.label }}&lt;/label&gt;
        &lt;/div&gt;
        &lt;div class="col-sm-8"&gt;
            {{field}}
        &lt;/div&gt;
    &lt;/div&gt;
    {% endfor %}
    &lt;button type="submit" class="btn btn-primary"&gt;Create Project&lt;/button&gt;
&lt;/form&gt;
</code></pre> <p>You might find the docs on <a href="https://docs.djangoproject.com/en/1.10/topics/forms/#rendering-form-error-messages" rel="nofollow">rendering form error messages</a> useful as well.</p>
0
2016-08-12T14:30:31Z
[ "python", "django" ]
subprocess.call on long standing process
38,920,161
<p>I'm using subprocess.call to run an external ffmpeg process that usually takes 1hr approx. What I see is that apparently after some time (for example 20min) my program is closed without returning from subprocess.call.</p> <p>Example:</p> <pre><code>import subprocess

try:
    ret = subprocess.call(['ffmpeg', 'param1', 'param2', 'paramN'])
    print(ret)
except:
    print("An exception has occured!")
</code></pre> <p>The <code>print</code> line is never reached.</p>
0
2016-08-12T14:14:46Z
38,921,168
<p>You might be running out of memory. Linux starts killing processes when the system is about to overflow. And it can kill the parent process rather than the offending <code>ffmpeg</code> process. Check your <code>dmesg</code> and syslog records.</p>
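One hedged way to check that theory from the Python side: on POSIX, <code>subprocess.call</code> returns the negated signal number when the child is killed by a signal, and the Linux OOM killer delivers SIGKILL, so a return value of -9 is a strong hint. Note that if the parent process itself is the OOM victim (which, as the answer says, can happen), no Python code gets a chance to run and dmesg is the only evidence. The helper name below is my own, not part of any library:

```python
import signal
import subprocess

def run_and_report(cmd):
    """Run cmd and report how it exited; returns subprocess.call's value."""
    ret = subprocess.call(cmd)
    if ret < 0:
        # On POSIX, a negative return code means the child was killed by
        # a signal; SIGKILL (-9) is what the Linux OOM killer sends.
        print("killed by signal:", signal.Signals(-ret).name)
    else:
        print("exited with status:", ret)
    return ret

# Hypothetical usage with the ffmpeg command from the question:
# run_and_report(['ffmpeg', 'param1', 'param2', 'paramN'])
```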
0
2016-08-12T15:08:39Z
[ "python" ]
Using "-t" option inside a requirements.txt file
38,920,244
<p>When I have, for example, a <code>requirements-dev.txt</code> and a <code>requirements.txt</code>, I know I can have <code>-r requirements.txt</code> inside <code>requirements-dev.txt</code>, for example, and running <code>pip install -r requirements-dev.txt</code> would install packages from both files.</p> <p>That said, I was certain that any install option would work fine inside a requirements file. Turns out that when I place inside a requirements file something like:</p> <p><code>mypackage==1.0.0 -t /path/to/local/dir</code></p> <p>I get:</p> <p><code>pip: error: no such option: -t</code></p> <p>while running <code>pip install mypackage==1.0.0 -t /path/to/local/dir</code> works just fine. For complicated reasons, I need to place multiple packages in one requirements file, where some packages must target one directory, others must target another, and so goes on.</p> <p>Any solutions to make this work?</p>
1
2016-08-12T14:19:21Z
38,921,011
<pre><code>pip install -r requirements.txt -t /path/to/install
</code></pre> <p>This should work; it worked for me.</p> <p>If you want different modules to be installed to different locations, then I think you might have to put them into multiple requirements text files, at least as far as I know.</p>
1
2016-08-12T15:00:08Z
[ "python", "python-2.7", "pip" ]
assigning values in each column to be the sum of that column
38,920,257
<p>I have a DataFrame and I am trying to assign all values in each column to be the sum of that column.</p> <pre><code>x = pd.DataFrame(data = [[1,2],[3,4],[5,6],[7,8],[9,10]],index=[1,2,3,4,5],columns=['a','b'])

x
   a   b
1  1   2
2  3   4
3  5   6
4  7   8
5  9  10
</code></pre> <p>the output should be</p> <pre><code>    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30
</code></pre> <p>I want to use x.apply(f, axis=0), but I do not know how to define a function that converts a column to be the sum of all column values in a lambda function. The following line raises SyntaxError: can't assign to lambda</p> <pre><code>f = lambda x : x[:]= x.sum()
</code></pre>
4
2016-08-12T14:20:23Z
38,920,525
<p>I don't know exactly what you're trying to do, but you can do something with a list comprehension, like <code>f = lambda x : [column.sum() for column in x]</code></p>
0
2016-08-12T14:34:48Z
[ "python", "pandas", "lambda" ]
assigning values in each column to be the sum of that column
38,920,257
<p>I have a DataFrame and I am trying to assign all values in each column to be the sum of that column.</p> <pre><code>x = pd.DataFrame(data = [[1,2],[3,4],[5,6],[7,8],[9,10]],index=[1,2,3,4,5],columns=['a','b'])

x
   a   b
1  1   2
2  3   4
3  5   6
4  7   8
5  9  10
</code></pre> <p>the output should be</p> <pre><code>    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30
</code></pre> <p>I want to use x.apply(f, axis=0), but I do not know how to define a function that converts a column to be the sum of all column values in a lambda function. The following line raises SyntaxError: can't assign to lambda</p> <pre><code>f = lambda x : x[:]= x.sum()
</code></pre>
4
2016-08-12T14:20:23Z
38,920,570
<pre><code>for col in df:
    df[col] = df[col].sum()
</code></pre> <p>or a slower solution that doesn't use looping...</p> <pre><code>df = pd.DataFrame([df.sum()] * len(df))
</code></pre> <p><strong>Timings</strong></p> <p>@jezrael Thanks for the timings. This does them on a larger dataframe and includes the for loop as well. Most of the time is spent creating the dataframe rather than calculating the sums, so the most efficient method that does this appears to be the one from @ayhan that assigns the sum to the values directly:</p> <pre><code>from string import ascii_letters
df = pd.DataFrame(np.random.randn(10000, 52), columns=list(ascii_letters))

# A baseline timing figure to determine sum of each column.
%timeit df.sum()
1000 loops, best of 3: 1.47 ms per loop

# Solution 1 from @Alexander
%%timeit
for col in df:
    df[col] = df[col].sum()
100 loops, best of 3: 21.3 ms per loop

# Solution 2 from @Alexander (without `for loop`, but much slower)
%timeit df2 = pd.DataFrame([df.sum()] * len(df))
1 loops, best of 3: 270 ms per loop

# Solution from @PiRSquared
%timeit df.stack().groupby(level=1).transform('sum').unstack()
10 loops, best of 3: 159 ms per loop

# Solution 1 from @Jezrael
%timeit (pd.DataFrame(np.tile(df.sum().values, (len(df.index),1)), columns=df.columns, index=df.index))
100 loops, best of 3: 2.32 ms per loop

# Solution 2 from @Jezrael
%%timeit
df2 = pd.DataFrame(df.sum().values[np.newaxis,:].repeat(len(df.index), axis=0), columns=df.columns, index=df.index)
100 loops, best of 3: 2.3 ms per loop

# Solution from @ayhan
%time df.values[:] = df.values.sum(0)
CPU times: user 1.54 ms, sys: 485 µs, total: 2.02 ms
Wall time: 1.36 ms   # &lt;&lt;&lt;&lt; FASTEST
</code></pre>
5
2016-08-12T14:37:21Z
[ "python", "pandas", "lambda" ]
assigning values in each column to be the sum of that column
38,920,257
<p>I have a DataFrame and I am trying to assign all values in each column to be the sum of that column.</p> <pre><code>x = pd.DataFrame(data = [[1,2],[3,4],[5,6],[7,8],[9,10]],index=[1,2,3,4,5],columns=['a','b'])

x
   a   b
1  1   2
2  3   4
3  5   6
4  7   8
5  9  10
</code></pre> <p>the output should be</p> <pre><code>    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30
</code></pre> <p>I want to use x.apply(f, axis=0), but I do not know how to define a function that converts a column to be the sum of all column values in a lambda function. The following line raises SyntaxError: can't assign to lambda</p> <pre><code>f = lambda x : x[:]= x.sum()
</code></pre>
4
2016-08-12T14:20:23Z
38,920,899
<p>Another faster numpy solution with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow"><code>numpy.tile</code></a>:</p> <pre><code>print (pd.DataFrame(np.tile(x.sum().values, (len(x.index),1)),
                    columns=x.columns,
                    index=x.index))

    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30
</code></pre> <p>Another solution with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow"><code>numpy.repeat</code></a>:</p> <pre><code>h = pd.DataFrame(x.sum().values[np.newaxis,:].repeat(len(x.index), axis=0),
                 columns=x.columns,
                 index=x.index)
print (h)

    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30

In [431]: %timeit df = pd.DataFrame([x.sum()] * len(x))
1000 loops, best of 3: 786 µs per loop

In [432]: %timeit (pd.DataFrame(np.tile(x.sum().values, (len(x.index),1)), columns=x.columns, index=x.index))
1000 loops, best of 3: 192 µs per loop

In [460]: %timeit pd.DataFrame(x.sum().values[np.newaxis,:].repeat(len(x.index), axis=0),columns=x.columns, index=x.index)
The slowest run took 8.65 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 184 µs per loop
</code></pre>
5
2016-08-12T14:54:07Z
[ "python", "pandas", "lambda" ]
assigning values in each column to be the sum of that column
38,920,257
<p>I have a DataFrame and I am trying to assign all values in each column to be the sum of that column.</p> <pre><code>x = pd.DataFrame(data = [[1,2],[3,4],[5,6],[7,8],[9,10]],index=[1,2,3,4,5],columns=['a','b'])

x
   a   b
1  1   2
2  3   4
3  5   6
4  7   8
5  9  10
</code></pre> <p>the output should be</p> <pre><code>    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30
</code></pre> <p>I want to use x.apply(f, axis=0), but I do not know how to define a function that converts a column to be the sum of all column values in a lambda function. The following line raises SyntaxError: can't assign to lambda</p> <pre><code>f = lambda x : x[:]= x.sum()
</code></pre>
4
2016-08-12T14:20:23Z
38,921,315
<p>Using <code>transform</code></p> <pre><code>x.stack().groupby(level=1).transform('sum').unstack() </code></pre> <p><a href="http://i.stack.imgur.com/Kh785.png" rel="nofollow"><img src="http://i.stack.imgur.com/Kh785.png" alt="enter image description here"></a></p>
3
2016-08-12T15:17:22Z
[ "python", "pandas", "lambda" ]
assigning values in each column to be the sum of that column
38,920,257
<p>I have a DataFrame and I am trying to assign all values in each column to be the sum of that column.</p> <pre><code>x = pd.DataFrame(data = [[1,2],[3,4],[5,6],[7,8],[9,10]],index=[1,2,3,4,5],columns=['a','b'])

x
   a   b
1  1   2
2  3   4
3  5   6
4  7   8
5  9  10
</code></pre> <p>the output should be</p> <pre><code>    a   b
1  25  30
2  25  30
3  25  30
4  25  30
5  25  30
</code></pre> <p>I want to use x.apply(f, axis=0), but I do not know how to define a function that converts a column to be the sum of all column values in a lambda function. The following line raises SyntaxError: can't assign to lambda</p> <pre><code>f = lambda x : x[:]= x.sum()
</code></pre>
4
2016-08-12T14:20:23Z
38,921,627
<p>If your DataFrame consists of numbers, you can directly change its values:</p> <pre><code>df.values[:] = df.sum() </code></pre>
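To see this on the data from the question: it relies on the frame holding a single numeric dtype, so that <code>df.values</code> is a writable view of the underlying block (with mixed dtypes, or under newer pandas copy-on-write behavior, the assignment may silently not mutate the frame).

```python
import pandas as pd

df = pd.DataFrame(data=[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]],
                  index=[1, 2, 3, 4, 5], columns=['a', 'b'])

# Broadcast the 1-D row of column sums over every row of the 2-D block.
df.values[:] = df.sum()
print(df)
```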
4
2016-08-12T15:33:20Z
[ "python", "pandas", "lambda" ]
Given coordinates for the location of players on a football pitch. What is the simplest method for finding groups of players?
38,920,431
<p>I thought something like having a 2d array where the location of players is marked with say a "1" and everywhere else is "0". The algorithm would determine how many groups they form. Definition of a group would be minimum 2 players who are beside each other. Thus, a group would be where a 1 has another 1 immediately around it. How do I search within the array for the 1's? Below is what I have achieved so far.</p> <pre><code>import numpy

a = numpy.array([[0, 1, 0],
                 [0, 0, 0],
                 [0, 1, 1]])
players = numpy.where(a == 1)
</code></pre>
-1
2016-08-12T14:29:20Z
38,929,139
<p>Create links between players that you know to be adjacent to each other and then run <a href="https://en.wikipedia.org/wiki/Disjoint-set_data_structure" rel="nofollow">https://en.wikipedia.org/wiki/Disjoint-set_data_structure</a>. (a.k.a Union-find)</p>
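A minimal sketch of that idea (the function names are my own, and I assume 8-connectivity, i.e. diagonal neighbours also count as "beside each other"): build a disjoint-set over the occupied cells, union each pair of adjacent players, then count components of size two or more.

```python
from collections import Counter

import numpy as np

def find(parent, i):
    """Find the set representative of i, compressing the path as we go."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def count_groups(grid):
    """Count groups of >= 2 mutually adjacent players on a 0/1 grid."""
    grid = np.asarray(grid)
    rows, cols = grid.shape
    cells = [(r, c) for r in range(rows) for c in range(cols) if grid[r, c] == 1]
    index = {cell: i for i, cell in enumerate(cells)}
    parent = list(range(len(cells)))
    for (r, c) in cells:
        for dr in (-1, 0, 1):        # link every occupied neighbour
            for dc in (-1, 0, 1):    # (8-connectivity, by assumption)
                nb = (r + dr, c + dc)
                if nb != (r, c) and nb in index:
                    ra, rb = find(parent, index[(r, c)]), find(parent, index[nb])
                    parent[ra] = rb
    sizes = Counter(find(parent, i) for i in range(len(cells)))
    return sum(1 for size in sizes.values() if size >= 2)

a = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 1, 1]])
# The two adjacent players on the bottom row form one group; the lone
# player at the top is not a group.
print(count_groups(a))
```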
0
2016-08-13T04:18:38Z
[ "python", "algorithm" ]
Using JQuery to get JSON from Flask is returning null sometimes
38,920,585
<p>So I am developing on Oracle Linux Server release 6.6 and I recently switched over to Apache 2.4 for my web server. I am developing a Flask web service, so I was using Flask's local WSGI server to do all my debugging.</p> <p>In the application, I have a form and when you click the submit button, it does a jQuery $.getJSON() call to my Flask backend. Then in the backend, I return JSON data and alert the user of the data. The problem is that most of the time it returns null and doesn't return my data. Other times it returns my data and everything is fine. This worked perfectly when I was using the local WSGI server. It was once I migrated to Apache that this error started happening, so I believe the problem is from Apache.</p> <p>Also, I print out my JSON data before returning, so I know for sure it is not returning null data, but I still get this error.</p> <p>My HTML/JavaScript code</p> <pre><code>&lt;div id="restore_submit" class="restore_submit" style="display:none;"&gt;
    &lt;form onsubmit="restoreDB()"&gt;
        Type the database ID and Snapshot ID of the database you want to restore
        &lt;br&gt;&lt;br&gt;
        RDS ID:&lt;br&gt;
        &lt;input type="text" id="DbName2" name="DbName"&gt;
        &lt;br&gt;
        Snapshot ID:&lt;br&gt;
        &lt;input type="text" id="snapshot_id" name="snapshot_id"&gt;
        &lt;br&gt;&lt;br&gt;
        &lt;input type="submit" value="Restore"&gt;
    &lt;/form&gt;
&lt;/div&gt;

function restoreDB() {
    var DbName = document.getElementById('DbName2').value;
    var snapshot_ID = document.getElementById('snapshot_id').value;
    var check = confirm('Are you sure you want to restore ' + DbName);
    if(check){
        $.getJSON('/restore_rds', {
            Dbname: DbName,
            snapshot_id: snapshot_ID
        }, function(data) {
            if(data){
                if(data &amp;&amp; data.error){
                    alert(DbName + " is being restored using this snapshot_id " + snapshot_ID);
                }
                else{
                    alert(data.error);
                }
            }
            else{
                alert("returned null");
                alert(data);
            }
        });
    }
}
</code></pre> <p>My Python code</p> <pre><code>#This is for the restore script
@app.route('/restore_rds')
#@login_required
def restore_rds():
    DbName = request.args.get('Dbname')
    snapshot_id = request.args.get('snapshot_id')
    error = rds_action.restore_db(DbName, snapshot_id, rds)
    if error:
        error = str(error)
    else:
        error = 'empty'
    print error
    print jsonify(error = error)
    return jsonify(error = error)
</code></pre> <p>EDIT:</p> <p>So I fixed the issue after about a week of digging. It looks like my requests were getting cancelled before returning back to the webpage. So what I did was an e.preventDefault() in my JavaScript function.</p> <p>So in my HTML I changed it to</p> <pre><code>&lt;form onsubmit="restoreDB(event)"&gt;
</code></pre> <p>and in my JavaScript I added this</p> <pre><code>function restoreDB(e) {
    e.preventDefault();
</code></pre> <p>Now the webpage doesn't reload and it has enough time to wait for a response. Probably not the best fix, but I am just happy I fixed it. If anyone recommends anything better, that would be great.</p>
1
2016-08-12T14:38:04Z
39,059,335
<p>So I fixed the issue after about a week of digging. It looks like my requests were getting cancelled before returning back to the webpage. So what I did was an e.preventDefault() in my JavaScript function.</p> <p>So in my HTML I changed it to</p> <p><code>&lt;form onsubmit="restoreDB(event)"&gt;</code></p> <p>and in my JavaScript I added this</p> <p><code>function restoreDB(e) { e.preventDefault();</code></p> <p>Now the webpage doesn't reload and it has enough time to wait for a response. Probably not the best fix, but I am just happy I fixed it. If anyone recommends anything better, that would be great.</p>
0
2016-08-20T22:42:03Z
[ "javascript", "jquery", "python", "apache", "flask" ]
linux detect system shutdown early in python
38,920,677
<p>I've been working on a monitoring script for a Raspberry Pi that I'm running as a headless server. As part of that, I want it to react to a shutdown event. I tried using the <code>signal</code> module, and it does react and call my shutdown routine; however, it happens very late in the shutdown sequence. I'd like to find a way to make it react very quickly after the shutdown request is issued, rather than waiting for the operating system to ask Python to exit.</p> <p>This is running on a Raspberry Pi 1 B, using the latest Jessie Lite image. I'm using Python 3, and my Python script itself is the init script:<br> monitor:</p> <pre><code>#!/usr/bin/python3
### BEGIN INIT INFO
# Provides:          monitor
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start the monitor daemon
# Description:       Start the monitor daemon during system boot
### END INIT INFO

import os, psutil, socket, sys, time
from daemon import Daemon
from RPLCD import CharLCD
from subprocess import Popen, PIPE
import RPi.GPIO as GPIO

GPIO.setwarnings(False)

def get_cpu_temperature():
    process = Popen(['vcgencmd', 'measure_temp'], stdout=PIPE)
    output, _error = process.communicate()
    output = output.decode('utf8')
    return float(output[output.index('=') + 1:output.rindex("'")])

class MyDaemon(Daemon):
    def run(self):
        lcd = CharLCD(pin_rs=7, pin_rw=4, pin_e=8, pins_data=[25, 24, 23, 18],
                      numbering_mode=GPIO.BCM, cols=40, rows=2, dotsize=8)
        while not self.exitflag:
            gw = os.popen("ip -4 route show default").read().split()
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.connect((gw[2], 0))
                ipaddr = s.getsockname()[0]
                lcd.cursor_pos = (0, 0)
                lcd.write_string("IP:" + ipaddr)
                gateway = gw[2]
                lcd.cursor_pos = (1, 0)
                lcd.write_string("GW:" + gateway)
            except IndexError:
                lcd.cursor_pos = (0, 0)
                lcd.write_string("IP:No Network")
                lcd.cursor_pos = (1, 0)
                lcd.write_string("GW:No Network")
            host = socket.gethostname()
            lcd.cursor_pos = (0, 20)
            lcd.write_string("Host:" + host)
            for num in range(10):
                temp = get_cpu_temperature()
                perc = psutil.cpu_percent()
                lcd.cursor_pos = (1, 20)
                lcd.write_string("CPU :{:5.1f}% {:4.1f}\u00DFC".format(perc, temp))
                if (self.exitflag):
                    break
                time.sleep(2)
        lcd.clear()
##        lcd.cursor_pos = (13, 0)
        lcd.write_string("Shutting Down")

if __name__ == "__main__":
    daemon = MyDaemon('/var/run/monitor.pid')
    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            daemon.start()
        elif 'stop' == sys.argv[1]:
            daemon.stop()
        elif 'restart' == sys.argv[1]:
            daemon.restart()
        elif 'run' == sys.argv[1]:
            daemon.run()
        else:
            print("Unknown command")
            sys.exit(2)
        sys.exit(0)
    else:
        print("usage: %s start|stop|restart" % sys.argv[0])
        sys.exit(2)
</code></pre> <p>daemon.py:</p> <pre><code>"""Generic linux daemon base class for python 3.x."""

import sys, os, time, signal

class Daemon:
    """A generic daemon class.

    Usage: subclass the daemon class and override the run() method."""

    def __init__(self, pidfile):
        self.pidfile = pidfile
        self.exitflag = False
        signal.signal(signal.SIGINT, self.exit_signal)
        signal.signal(signal.SIGTERM, self.exit_signal)

    def daemonize(self):
        """Deamonize class. UNIX double fork mechanism."""
        try:
            pid = os.fork()
            if pid &gt; 0:
                # exit first parent
                sys.exit(0)
        except OSError as err:
            sys.stderr.write('fork #1 failed: {0}\n'.format(err))
            sys.exit(1)

        # decouple from parent environment
        os.chdir('/')
        os.setsid()
        os.umask(0)

        # do second fork
        try:
            pid = os.fork()
            if pid &gt; 0:
                # exit from second parent
                sys.exit(0)
        except OSError as err:
            sys.stderr.write('fork #2 failed: {0}\n'.format(err))
            sys.exit(1)

        # redirect standard file descriptors
        sys.stdout.flush()
        sys.stderr.flush()
        si = open(os.devnull, 'r')
        so = open(os.devnull, 'a+')
        se = open(os.devnull, 'a+')
        os.dup2(si.fileno(), sys.stdin.fileno())
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(se.fileno(), sys.stderr.fileno())

        pid = str(os.getpid())
        with open(self.pidfile,'w+') as f:
            f.write(pid + '\n')

    def start(self):
        """Start the daemon."""
        # Check for a pidfile to see if the daemon already runs
        try:
            with open(self.pidfile,'r') as pf:
                pid = int(pf.read().strip())
        except IOError:
            pid = None

        if pid:
            message = "pidfile {0} already exist. Daemon already running?\n"
            sys.stderr.write(message.format(self.pidfile))
            sys.exit(1)

        # Start the daemon
        self.daemonize()
        self.run()

    def stop(self):
        """Stop the daemon."""
        # Get the pid from the pidfile
        try:
            with open(self.pidfile,'r') as pf:
                pid = int(pf.read().strip())
        except IOError:
            pid = None

        if not pid:
            message = "pidfile {0} does not exist. Daemon not running?\n"
            sys.stderr.write(message.format(self.pidfile))
            return  # not an error in a restart

        # Try killing the daemon process
        try:
            while 1:
                os.kill(pid, signal.SIGTERM)
                time.sleep(0.1)
        except OSError as err:
            e = str(err.args)
            if e.find("No such process") &gt; 0:
                if os.path.exists(self.pidfile):
                    os.remove(self.pidfile)
            else:
                print (str(err.args))
                sys.exit(1)

    def restart(self):
        """Restart the daemon."""
        self.stop()
        self.start()

    def exit_signal(self, sig, stack):
        self.exitflag = True
        try:
            os.remove(self.pidfile)
        except FileNotFoundError:
            pass

    def run(self):
        """You should override this method when you subclass Daemon.

        It will be called after the process has been daemonized by
        start() or restart()."""
</code></pre> <p>So, in short: is there any way I can detect a shutdown event as early as possible in the shutdown, no matter how it's called, and preferably be able to detect a reboot as well from within Python?</p>
1
2016-08-12T14:42:10Z
38,920,825
<p>Don't react. <em>Schedule</em>.</p> <p>Unixoid systems have well-established mechanisms for starting and stopping services when starting up and shutting down. Just add one of these to be stopped when your system shuts down; you can typically even define an order at which these shutdown scripts can be called.</p> <p>Now, which of these systems your Linux uses is not known to me. Chances are that you're using either</p> <ul> <li>SysV-style init scripts (classic)</li> <li>systemd (relatively new)</li> <li>upstart (if you're running one of Canonical's misguided experiments)</li> </ul> <p>In either way, there's a lot of examples of such service files on your system; if you're running systemd, a <code>systemctl</code> will show which services are loaded currently, and shows which files you should look into copying and adding as your own service. If you're running a SysV-Style init, look into /etc/init.d for a lot of scripts.</p> <p>You'll find a lot of information how to add and enable init scripts or systemd service files for specific runlevels/system targets.</p>
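To make that concrete for the Raspberry Pi case (Raspbian Jessie uses systemd): a minimal, hypothetical unit file (the paths, unit name, and ordering below are illustrative guesses, not taken from the question) would look roughly like this, saved as e.g. /etc/systemd/system/monitor.service and enabled with systemctl enable monitor:

```ini
[Unit]
Description=LCD monitor daemon
After=network.target

[Service]
Type=forking
PIDFile=/var/run/monitor.pid
ExecStart=/usr/local/bin/monitor start
ExecStop=/usr/local/bin/monitor stop

[Install]
WantedBy=multi-user.target
```

systemd then runs the ExecStop command as part of stopping services early in the shutdown sequence, which is the "react quickly" behaviour the question is after, rather than waiting for a late SIGTERM broadcast.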
1
2016-08-12T14:49:21Z
[ "python", "linux", "python-3.x", "shutdown" ]
unable to catch exception on connection pymysql
38,920,764
<pre><code>@classmethod
def getSwDataPass(self, uid, pw, dbname='swdata', _count=0):
    result = None
    if dbname == '':
        dbname = input('Could not find a database by the name swdata. Please provide the database name : ')
        if dbname == '' or dbname is None:
            print('There was a problem connecting to the database...exiting')
            sys.exit()
    try:
        connection = pymysql.connect(host='localhost',
                                     user=uid,
                                     password=pw,
                                     db='sw_config',
                                     port=5002,
                                     cursorclass=pymysql.cursors.DictCursor)
    except pymysql.err.OperationalError:
        print('Unable to make a connection to the mysql database. Please provide your swdata credentials')
        credentials = self.getrootpass(True)
        result = {"uid": credentials[0], "pwd": credentials[1]}
        continue
    if result is None:
        try:
            with connection.cursor() as cursor:
                cursor.execute("select uid, pwd, dbname from sw_databases where dbname = '{0}'".format(dbname))
                result = cursor.fetchone()
                if result is None:
                    connection.close()
                    if _count &lt; 2:
                        self.getSwDataPass(uid, pw, '', _count + 1)
        finally:
            connection.close()
    return result
</code></pre> <p>When I input invalid credentials for connecting to mysql I get an unhandled exception and the program exits with an error:</p> <pre><code>Traceback (most recent call last):
  File "buildzapp.py", line 248, in &lt;module&gt;
    BuildZapp.main()
  File "buildzapp.py", line 96, in main
    zapp = BuildZapp(manifest, pw, uid, dbname)
  File "buildzapp.py", line 51, in __init__
    swdata_credentials = Utilities.getSwDataPass(self.conf_uid, self.conf_pw)
  File "C:\zapp builder\utilities.py", line 60, in getSwDataPass
    cursorclass=pymysql.cursors.DictCursor)
  File "C:\Python27\Lib\site-packages\pymysql\__init__.py", line 88, in Connect
    return Connection(*args, **kwargs)
  File "C:\Python27\Lib\site-packages\pymysql\connections.py", line 689, in __init__
    self.connect()
  File "C:\Python27\Lib\site-packages\pymysql\connections.py", line 907, in connect
    self._request_authentication()
  File "C:\Python27\Lib\site-packages\pymysql\connections.py", line 1115, in _request_authentication
    auth_packet = self._read_packet()
  File "C:\Python27\Lib\site-packages\pymysql\connections.py", line 982, in _read_packet
    packet.check_error()
  File "C:\Python27\Lib\site-packages\pymysql\connections.py", line 394, in check_error
    err.raise_mysql_exception(self._data)
  File "C:\Python27\Lib\site-packages\pymysql\err.py", line 120, in raise_mysql_exception
    _check_mysql_exception(errinfo)
  File "C:\Python27\Lib\site-packages\pymysql\err.py", line 112, in _check_mysql_exception
    raise errorclass(errno, errorvalue)
pymysql.err.OperationalError: (1045, u"Access denied for user 'root'@'localhost' (using password: YES)")
</code></pre> <p>I've even tried just an open <code>except:</code> to test if I was catching the wrong error; no dice.</p>
1
2016-08-12T14:46:21Z
38,921,491
<p>Try to assign <code>pymysql.cursors.DictCursor</code> to a variable before calling the function:</p> <pre><code>try:
    cursor = pymysql.cursors.DictCursor
    connection = pymysql.connect(host='localhost',
                                 user=uid,
                                 password=pw,
                                 db='sw_config',
                                 port=5002,
                                 cursorclass=cursor)
</code></pre>
1
2016-08-12T15:27:07Z
[ "python", "exception-handling", "pymysql" ]
Python - close windows if exists in order in tkinter
38,920,785
<pre><code>from tkinter import *

class Main(Frame):
    def __init__(self, parent):
        Frame.__init__(self, parent)
        self.parent = parent
        self.initUI()

    def initUI(self):
        global canvas
        self.parent.title('Python')
        self.pack(fill = BOTH, expand = 1)
        canvas = Canvas(self)
        self.Label_My = Label(self, text = 'MyObject')
        self.Label_My.place(x = 0, y = 0)
        canvas.pack(fill = BOTH, expand = 1)
        canvas.update()

class Main2(Frame):
    def __init__(self, parent):
        Frame.__init__(self, parent)
        self.parent = parent
        self.initUI()

    def initUI(self):
        global canvas
        self.parent.title('Python')
        self.pack(fill = BOTH, expand = 1)
        canvas = Canvas(self)
        self.Label_My = Label(self, text = 'MyObject2')
        self.Label_My.place(x = 0, y = 0)
        canvas.pack(fill = BOTH, expand = 1)
        canvas.update()

root = Tk()
ex = Main(root)
root.geometry('700x500')

root2 = Tk()
ex2 = Main2(root2)
root2.geometry('500x500')

def d():
    if root2:
        root2.destroy()
    if root:
        root.destroy()
</code></pre> <p>I created two tkinter windows and I want to close them if they exist, but it prints that "root/root2" is not defined if I don't create them as windows.</p> <p>Also, I find that I have to close "root" first, in that order. If I close "root2" first, it prints "pythonw.exe has stopped working".</p> <p>My solution is to add a "try-except" statement before "if rootx", but I want a better solution.</p>
1
2016-08-12T14:47:17Z
38,921,101
<p>Add your windows to a list as you create them, then destroy the windows in the order that you added them. </p> <pre><code>windows = []

root = Tk()
ex = Main(root)
root.geometry('700x500')
windows.append(root)

root2 = Tk()
ex2 = Main2(root2)
root2.geometry('500x500')
windows.append(root2)

def d():
    for window in windows:
        window.destroy()
</code></pre>
0
2016-08-12T15:05:23Z
[ "python", "class", "tkinter", "window", "try-except" ]