title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Can not use tensorflow in pycharm | 39,220,524 | <p>I can run TensorFlow from the Anaconda command line successfully, but when I run it in PyCharm, it complains that it cannot find tensorflow, even though I do find it in the packages of this Python interpreter. Is there anything else I need to do? Thanks</p>
<p><a href="http://i.stack.imgur.com/gVOJe.png" rel="nofollow"><img src="http://i.stack.imgur.com/gVOJe.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/afyM8.png" rel="nofollow"><img src="http://i.stack.imgur.com/afyM8.png" alt="enter image description here"></a></p>
| 0 | 2016-08-30T06:45:40Z | 39,220,560 | <p>Looking at the console, your <code>.py</code> file is <em>also</em> called <code>tensorflow</code>, so when you do <code>import tensorflow as tf</code>, you end up trying to import things from the same file.</p>
<p>Rename your own file to something else, such as <code>tensorflow_test_1.py</code> or whatnot. :)</p>
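A quick way to confirm this kind of shadowing is to inspect the module's <code>__file__</code> after importing it. The sketch below uses the standard-library <code>json</code> module as a stand-in so it runs without TensorFlow installed; with the question's setup, the same check on <code>tensorflow</code> would have pointed at the user's own <code>tensorflow.py</code>:

```python
# Minimal sketch of diagnosing module shadowing: after importing a module,
# __file__ tells you which file Python actually loaded. If it points into
# your project directory instead of the standard library / site-packages,
# a local file with the same name is masking the real package.
# `json` stands in for tensorflow here so the check runs anywhere.
import json

print(json.__file__)  # expected: a path inside the standard library
```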
| 5 | 2016-08-30T06:47:40Z | [
"python",
"pycharm",
"tensorflow"
] |
incorrect date format when writing df to csv pandas | 39,220,660 | <p>I convert a string to date using pandas.
When I write the DF to CSV, the date comes out like '2016-08-15 instead of plain 2016-08-15, and the ETL tool is then unable to read it as a date. The same is the case for all date fields.</p>
<p>Any suggestion to get the date format correctly ?</p>
<pre><code>df =pd.read_csv(r'/Users/tcssig/Documents/ABP_News_Aug01.csv', parse_dates=['Dates'])
</code></pre>
<pre><code>df.to_csv('/Users/tcssig/Documents/Sarang.csv')
</code></pre>
| -2 | 2016-08-30T06:53:07Z | 39,223,813 | <p>You can try this</p>
<pre><code>df = pd.read_csv(r'/Users/tcssig/Documents/ABP_News_Aug01.csv')
df['date'] = pd.to_datetime(df['date'])
df.to_csv('/Users/tcssig/Documents/Sarang.csv')
</code></pre>
<p>(assuming the name of the date field is 'date')</p>
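If a stray quote still appears, another option is to control the output format explicitly via <code>to_csv</code>'s <code>date_format</code> parameter. A minimal sketch, using a hypothetical in-memory CSV standing in for the ABP_News file:

```python
import pandas as pd
from io import StringIO

# Hypothetical sample data standing in for ABP_News_Aug01.csv.
csv_data = StringIO("Dates,headline\n2016-08-15,a\n2016-08-16,b\n")
df = pd.read_csv(csv_data, parse_dates=['Dates'])

# date_format controls how datetime64 columns are rendered when writing,
# so the ETL tool receives plain 2016-08-15 values.
out = StringIO()
df.to_csv(out, index=False, date_format='%Y-%m-%d')
print(out.getvalue())
```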
| 0 | 2016-08-30T09:32:44Z | [
"python",
"pandas"
] |
python convolution with different dimension | 39,220,929 | <p>I'm trying to implement a convolutional neural network in Python.<br>
However, when I use signal.convolve or np.convolve, it cannot do convolution on X and Y (X is 3-d, Y is 2-d). X are training minibatches, Y are filters.
I don't want to do a for loop over every training vector like:</p>
<pre><code>for i in xrange(X.shape[2]):
result = signal.convolve(X[:,:,i], Y, 'valid')
....
</code></pre>
<p>So, is there any function I can use to do convolution efficiently?</p>
| 4 | 2016-08-30T07:06:42Z | 39,226,689 | <p>Scipy implements standard N-dimensional convolutions, so that the matrix to be convolved and the kernel are both N-dimensional.</p>
<p>A quick fix would be to add an extra dimension to <code>Y</code> so that <code>Y</code> is 3-Dimensional:</p>
<pre><code>result = signal.convolve(X, Y[..., None], 'valid')
</code></pre>
<p>I'm assuming here that the last axis corresponds to the image index as in your example <code>[width, height, image_idx]</code> (or <code>[height, width, image_idx]</code>). If it is the other way around and the images are indexed in the first axis (as it is more common in C-ordering arrays) you should replace <code>Y[..., None]</code> with <code>Y[None, ...]</code>.</p>
<p>The line <code>Y[..., None]</code> will add an extra axis to <code>Y</code>, making it 3-dimensional <code>[kernel_width, kernel_height, 1]</code> and thus, converting it to a valid 3-Dimensional convolution kernel.</p>
<p>NOTE: This assumes that all your input mini-batches have the same <code>width x height</code>, which is standard in CNN's.</p>
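The equivalence between the <code>Y[..., None]</code> trick and the per-image loop can be checked directly. A small sketch, assuming images are stacked along the last axis as in the question:

```python
import numpy as np
from scipy import signal

# Batch of 4 random 8x8 "images" stacked along the last axis, 3x3 kernel.
X = np.random.randn(8, 8, 4)
Y = np.random.randn(3, 3)

# One 3-D 'valid' convolution: the kernel has extent 1 along the batch
# axis, so each X[:, :, i] is effectively convolved with Y independently.
batched = signal.convolve(X, Y[..., None], 'valid')

# Reference: the explicit per-image loop from the question.
looped = np.stack([signal.convolve(X[:, :, i], Y, 'valid')
                   for i in range(X.shape[2])], axis=-1)

print(np.allclose(batched, looped))  # True
```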
<hr>
<p>EDIT: Some timings as @Divakar suggested.</p>
<p>The testing framework is setup as follows:</p>
<pre><code>def test(S, N, K):
""" S: image size, N: num images, K: kernel size"""
a = np.random.randn(S, S, N)
b = np.random.randn(K, K)
valid = [slice(K//2, -K//2+1), slice(K//2, -K//2+1)]
%timeit signal.convolve(a, b[..., None], 'valid')
%timeit signal.fftconvolve(a, b[..., None], 'valid')
%timeit ndimage.convolve(a, b[..., None])[valid]
</code></pre>
<p>Find below the tests for different configurations:</p>
<ul>
<li><p>Varying image size <code>S</code>:</p>
<pre><code>>>> test(100, 50, 11) # 100x100 images
1 loop, best of 3: 909 ms per loop
10 loops, best of 3: 116 ms per loop
10 loops, best of 3: 54.9 ms per loop
>>> test(1000, 50, 11) # 1000x1000 images
1 loop, best of 3: 1min 51s per loop
1 loop, best of 3: 16.5 s per loop
1 loop, best of 3: 5.66 s per loop
</code></pre></li>
<li><p>Varying number of images <code>N</code>:</p>
<pre><code>>>> test(100, 5, 11) # 5 images
10 loops, best of 3: 90.7 ms per loop
10 loops, best of 3: 26.7 ms per loop
100 loops, best of 3: 5.7 ms per loop
>>> test(100, 500, 11) # 500 images
1 loop, best of 3: 9.75 s per loop
1 loop, best of 3: 888 ms per loop
1 loop, best of 3: 727 ms per loop
</code></pre></li>
<li><p>Varying kernel size <code>K</code>:</p>
<pre><code>>>> test(100, 50, 5) # 5x5 kernels
1 loop, best of 3: 217 ms per loop
10 loops, best of 3: 100 ms per loop
100 loops, best of 3: 11.4 ms per loop
>>> test(100, 50, 31) # 31x31 kernels
1 loop, best of 3: 4.39 s per loop
1 loop, best of 3: 220 ms per loop
1 loop, best of 3: 560 ms per loop
</code></pre></li>
</ul>
<p>So, in short, <code>ndimage.convolve</code> is always faster, except when the kernel size is very large (as <code>K = 31</code> in the last test).</p>
| 5 | 2016-08-30T11:47:48Z | [
"python",
"numpy",
"scipy",
"deep-learning"
] |
How to use Python Selenium to get the data? | 39,221,164 | <p>The link is here : <a href="http://www.imei.info/phonedatabase/790-alcatel-9109-mb2/" rel="nofollow">http://www.imei.info/phonedatabase/790-alcatel-9109-mb2/</a></p>
<p>I can already open this website using <code>Selenium-Webdriver</code> in Python. Now I'm trying to get the <code>battery type</code> from this website.</p>
<p><a href="http://i.stack.imgur.com/mSd6Y.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/mSd6Y.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-30T07:18:40Z | 39,221,510 | <p>Use the code below to get the battery type:</p>
<pre><code>batteryType = driver.find_element(By.XPATH, "//div[@class='cc' and contains(text(), 'Li-Ion')]").text
</code></pre>
<p>Use the <code>.text</code> attribute (not a method call) to get the string content of an element. Note that <code>By</code> must be imported from <code>selenium.webdriver.common.by</code>.</p>
| 0 | 2016-08-30T07:37:18Z | [
"python",
"html",
"web",
"selenium-webdriver"
] |
How to use Python Selenium to get the data? | 39,221,164 | <p>The link is here : <a href="http://www.imei.info/phonedatabase/790-alcatel-9109-mb2/" rel="nofollow">http://www.imei.info/phonedatabase/790-alcatel-9109-mb2/</a></p>
<p>I can already open this website using <code>Selenium-Webdriver</code> in Python. Now I'm trying to get the <code>battery type</code> from this website.</p>
<p><a href="http://i.stack.imgur.com/mSd6Y.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/mSd6Y.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-30T07:18:40Z | 39,222,557 | <p>You should try using <code>xpath</code> as below:</p>
<pre><code>type = driver.find_element_by_xpath("//*[text() = 'Battery:']/following-sibling::div").text
</code></pre>
| 0 | 2016-08-30T08:35:11Z | [
"python",
"html",
"web",
"selenium-webdriver"
] |
Why does getaddrinfo not return all IP Addresses? | 39,221,168 | <p>I am trying to get all IP Addresses for: earth.all.vpn.airdns.org
In Python: </p>
<pre><code>def hostnamesToIps(hostnames):
ip_list = []
for hostname in hostnames:
try:
ais = socket.getaddrinfo(hostname, None)
for result in ais:
#print (result[4][0])
ip_list.append(result[-1][0])
except:
eprint("Unable to get IP for hostname: " + hostname)
return list(set(ip_list))
</code></pre>
<p>(FYI: eprint is a function for printing errors.)
The output gives me 29 addresses.</p>
<p>But when i do:</p>
<pre><code>nslookup earth.all.vpn.airdns.org
</code></pre>
<p>I get about 100 entries.</p>
<p>How can I achieve this in Python? Why am I not getting all entries with "getaddrinfo"?</p>
<p>This behaviour only appears on Windows (Python 3). When I execute the code on my Linux machine (Python 2.7), it gives me the same result as nslookup.</p>
<p><strong>info:</strong> As explained in the answer, it does not depend on the system.</p>
<hr>
<p>Without changing anything the results of nslookup and getaddrinfo are the same now.</p>
| 3 | 2016-08-30T07:18:46Z | 39,222,389 | <p>Tools like <code>dig</code>, <code>host</code> and <code>nslookup</code> query the default DNS server directly using UDP/TCP and their own implementation of DNS queries, whereas Python's <code>socket</code> module uses the DNS lookup interface of the operating system, which usually uses a more sophisticated lookup mechanism involving e.g. a DNS cache, a hosts file, domain suffixes, link-local name resolution, etc.</p>
<p><code>strace</code> shows that Python's <code>socket.getaddrinfo</code> ends up using a netlink (AF_NETLINK) socket, to query the system for the DNS lookup (Python 2.7 on Ubuntu 12.04). <code>nslookup</code> however, reads the default DNS server from <code>/etc/resolv.conf</code> and opens a UDP socket on port 53.</p>
<p>I think there are two reason why you are getting a different entry count:</p>
<ol>
<li>The DNS entries are quite volatile and may change at any instant</li>
<li>Python returns cached entries provided by the system's DNS cache whereas <code>nslookup</code> always retrieves "fresh" results.</li>
</ol>
<p>Furthermore, <code>nslookup</code> might produce slightly different DNS queries than the system resolver (producing another answer). That could be checked with Wireshark, but I will leave that for now.</p>
<p><strike>Another issue could be the truncation of DNS responses when UDP is used. If there is a large number of entries, they won't fit in a single UDP packet, so the answer contains a truncation flag. It is then up to the client to re-send the DNS query over a TCP socket in order to retrieve all results.</strike> (Truncated answers are actually empty).</p>
<p><strong>Edit:</strong> A note on caching / volatile</p>
<p>Even if the mismatch is not due to your local DNS cache, it may be due to server-side caching. I tried several DNS servers and all gave different results for that particular name. That means that they are not in sync due to DNS changes within the time-to-live (TTL). </p>
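For completeness, a small sketch of deduplicating whatever the system resolver returns, using <code>localhost</code> so it runs offline; the result can still legitimately differ from a direct <code>nslookup</code> query for the reasons above:

```python
import socket

def resolve_ips(hostname):
    """Return the deduplicated, sorted IPs the system resolver reports.

    This goes through the OS lookup path (cache, hosts file, ...), so the
    result may differ from a direct UDP query such as nslookup's.
    """
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # is the first element of sockaddr for both IPv4 and IPv6.
    return sorted({info[4][0] for info in infos})

print(resolve_ips('localhost'))
```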
| 2 | 2016-08-30T08:27:13Z | [
"python",
"sockets"
] |
dtypes changed when initialising a new DataFrame from another one | 39,221,232 | <p>Let's say that I have a DataFrame df1 with 2 columns: <code>a</code> with dtype <code>bool</code> and <code>b</code> with dtype <code>int64</code>. When I initialise a new DataFrame (<code>df1_bis</code>) from <code>df1</code>, columns <code>a</code> and <code>b</code> are automatically converted into objects, even if I force the dtype of <code>df1_bis</code>:</p>
<pre><code>In [2]: df1 = pd.DataFrame({"a": [True], 'b': [0]})
Out[3]:
a b
0 True 0
In [4]: df1.dtypes
Out[4]:
a bool
b int64
dtype: object
In [5]: df1_bis = pd.DataFrame(df1.values, columns=df1.columns, dtype=df1.dtypes)
Out[6]:
a b
0 True 0
In [7]: df1_bis.dtypes
Out[7]:
a object
b object
dtype: object
</code></pre>
<p>Is there something I'm doing wrong with the <code>dtype</code> argument of DataFrame? </p>
| 2 | 2016-08-30T07:23:15Z | 39,221,301 | <p>For me, this works:</p>
<pre><code>df1_bis = pd.DataFrame(df1, columns=df1.columns, index=df1.index)
#df1_bis = pd.DataFrame(df1)
print (df1_bis)
a b
0 True 0
print (df1_bis.dtypes)
a bool
b int64
dtype: object
</code></pre>
<p>But I think it is better to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow"><code>copy</code></a>:</p>
<pre><code>df1_bis = df1.copy()
</code></pre>
<p>If you want to use <code>dtype</code>, you need to work with <code>Series</code>, because the <code>dtype</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow"><code>DataFrame</code></a> applies to all columns:</p>
<pre><code>df1_bis = pd.DataFrame({'a':pd.Series(df1.a.values, dtype=df1.a.dtypes),
'b':pd.Series(df1.b.values, dtype=df1.b.dtypes)}
, index=df1.index)
print (df1_bis)
a b
0 True 0
print (df1_bis.dtypes)
a bool
b int64
dtype: object
</code></pre>
<hr>
<pre><code>df = pd.DataFrame({"a": [1,5], 'b': [0,4]}, dtype=float)
print (df)
a b
0 1.0 0.0
1 5.0 4.0
print (df.dtypes)
a float64
b float64
dtype: object
</code></pre>
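Another option, assuming a pandas version where <code>astype</code> accepts a per-column mapping, is to rebuild from <code>.values</code> and then restore the original dtypes in one step:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [True], "b": [0]})

# Going through .values produces a single object ndarray, losing the
# per-column dtypes; astype with a column -> dtype mapping restores them.
df1_bis = (pd.DataFrame(df1.values, columns=df1.columns, index=df1.index)
             .astype(dict(df1.dtypes)))

print(df1_bis.dtypes)
```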
| 4 | 2016-08-30T07:26:44Z | [
"python",
"pandas",
"dataframe",
"casting",
"multiple-columns"
] |
dtypes changed when initialising a new DataFrame from another one | 39,221,232 | <p>Let's say that I have a DataFrame df1 with 2 columns: <code>a</code> with dtype <code>bool</code> and <code>b</code> with dtype <code>int64</code>. When I initialise a new DataFrame (<code>df1_bis</code>) from <code>df1</code>, columns <code>a</code> and <code>b</code> are automatically converted into objects, even if I force the dtype of <code>df1_bis</code>:</p>
<pre><code>In [2]: df1 = pd.DataFrame({"a": [True], 'b': [0]})
Out[3]:
a b
0 True 0
In [4]: df1.dtypes
Out[4]:
a bool
b int64
dtype: object
In [5]: df1_bis = pd.DataFrame(df1.values, columns=df1.columns, dtype=df1.dtypes)
Out[6]:
a b
0 True 0
In [7]: df1_bis.dtypes
Out[7]:
a object
b object
dtype: object
</code></pre>
<p>Is there something I'm doing wrong with the <code>dtype</code> argument of DataFrame? </p>
| 2 | 2016-08-30T07:23:15Z | 39,221,562 | <p>It is <code>numpy</code> that is causing the problem. <code>pandas</code> is inferring the types from the numpy array. If you convert to a list, you won't have the problem.</p>
<pre><code>df1_bis = pd.DataFrame(df1.values.tolist(),
columns=df1.columns)
print(df1_bis)
print
print(df1_bis.dtypes)
a b
0 True 0
a bool
b int64
dtype: object
</code></pre>
| 4 | 2016-08-30T07:41:11Z | [
"python",
"pandas",
"dataframe",
"casting",
"multiple-columns"
] |
Grouping by set . To group files with similar headers python | 39,221,335 | <p>I have derived an output from a directory structure which has many CSV files. The headers of these files are manually created and randomly placed. I have to group together all the files that have similar headers.</p>
<pre><code>/A/B/C/D~b1.csv.0 Delim:,
"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"
/A/B/C/D~b2.csv.0 Delim:,
"First Name" "Last Name" Phone Fax "SIC CODE" Company EMAIL
/A/B/C/D~b3.csv.0 Delim:,
"First Name" "Last Name" Company EMAIL Fax "SIC CODE" Phone
/A/B/C/D~b4.csv.0 Delim:,
"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"
/A/B/C/D~c1.csv.0 Delim:,
"Business Type" "Main Markets" Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No" Country Website Email
/A/B/C/D~c2.csv.0 Delim:,
"Business Type" "Main Markets" Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No" Country Website Email
/A/B/C/D~c3.csv.0 Delim:,
"Business Type" "Main Markets" Country Website Email Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No"
</code></pre>
<p>The first part <code>/A/B/C/D</code> is the directory structure followed by a <code>~</code> followed by the Delimiter <code>Delim:,</code> required to parse the file. The next line is the header which was fetched from the file <code>"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"</code></p>
<p>I tried to create a sample code to group similar headers together, something as follows which I know wouldn't have worked:</p>
<pre><code>>>> li = [('abc', set(['a', 'c', 'b'])), ('def', set(['e', 'd', 'f'])), ('ghi', set(['i', 'h', 'g'])), ('jkl', set(['k', 'j', 'l'])), ('mno', set(['m', 'o', 'n'])), ('pqr', set(['q', 'p', 'r'])), ('stu', set(['s', 'u', 't'])), ('vwx', set(['x', 'w', 'v'])), ('ABC', set(['a', 'c', 'b'])), ('DEF', set(['e', 'd', 'f'])), ('GHI', set(['i', 'h', 'g'])), ('JKL', set(['k', 'j', 'l'])), ('MNO', set(['m', 'o', 'n'])), ('PQR', set(['q', 'p', 'r'])), ('STU', set(['s', 'u', 't'])), ('VWX', set(['x', 'w', 'v']))]
>>> for key, group in groupby(li, lambda x: x[1]):
... for l in group:
... print "%s %s." % (l[1], l[0])
</code></pre>
<p>How can I group the sets together?
Any help figuring out how I could group similar header files is appreciated.</p>
| 0 | 2016-08-30T07:28:29Z | 39,222,320 | <p>The following approach works by taking each of your CSV headers and converting them into a list of column entries. These are then sorted and converted to a tuple. This is then used as the key for a default dictionary. Each matching entry is appended to the list along with the original column ordering. </p>
<p>The result is a dictionary which groups together CSV files containing the same column entries. If column entries are not case sensitive, the tuple entries could be converted to lowercase before being used as the key.</p>
<pre><code>from collections import defaultdict
import csv
from StringIO import StringIO
csv_groups = defaultdict(list)
entries = [
["/A/B/C/D~b1.csv.0", "Delim:,", '"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"'],
["/A/B/C/D~b2.csv.0", "Delim:,", '"First Name" "Last Name" Phone Fax "SIC CODE" Company EMAIL'],
["/A/B/C/D~b3.csv.0", "Delim:,", '"First Name" "Last Name" Company EMAIL Fax "SIC CODE" Phone'],
["/A/B/C/D~b4.csv.0", "Delim:,", '"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"'],
["/A/B/C/D~c1.csv.0", "Delim:,", '"Business Type" "Main Markets" Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No" Country Website Email'],
["/A/B/C/D~c2.csv.0", "Delim:,", '"Business Type" "Main Markets" Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No" Country Website Email'],
["/A/B/C/D~c3.csv.0", "Delim:,", '"Business Type" "Main Markets" Country Website Email Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No"']
]
for folder, delim, header in entries:
cols = tuple(sorted(list(csv.reader(StringIO(header), delimiter=' ', skipinitialspace=True))[0]))
csv_groups[cols].append((folder, header))
for csv_type, folders in csv_groups.iteritems():
print csv_type
for folder in folders:
print " ", folder
</code></pre>
<p>This would give you the following grouping based on your data:</p>
<pre class="lang-none prettyprint-override"><code>('Address', 'Business Type', 'Category', 'Company Name', 'Contact Person', 'Country', 'Designation', 'Email', 'Establised', 'Fax No', 'Main Markets', 'No Of Employees', 'Pin Code', 'Telephone no', 'Website')
('/A/B/C/D~c1.csv.0', '"Business Type" "Main Markets" Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No" Country Website Email')
('/A/B/C/D~c2.csv.0', '"Business Type" "Main Markets" Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No" Country Website Email')
('/A/B/C/D~c3.csv.0', '"Business Type" "Main Markets" Country Website Email Establised "No Of Employees" Category "Company Name" "Contact Person" Designation Address "Pin Code" "Telephone no" "Fax No"')
('Company', 'EMAIL', 'Fax', 'First Name', 'Last Name', 'Phone', 'SIC CODE')
('/A/B/C/D~b1.csv.0', '"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"')
('/A/B/C/D~b2.csv.0', '"First Name" "Last Name" Phone Fax "SIC CODE" Company EMAIL')
('/A/B/C/D~b3.csv.0', '"First Name" "Last Name" Company EMAIL Fax "SIC CODE" Phone')
('/A/B/C/D~b4.csv.0', '"First Name" "Last Name" Company EMAIL Phone Fax "SIC CODE"')
</code></pre>
| 1 | 2016-08-30T08:23:14Z | [
"python",
"python-2.7"
] |
No module named unusual_prefix_* | 39,221,442 | <p>I tried to run the <a href="https://github.com/apache/incubator-airflow/blob/master/airflow/example_dags/example_python_operator.py" rel="nofollow">Python Operator Example</a> in my Airflow installation. The installation has deployed the webserver, scheduler and worker on the same machine and runs with no complaints for all non-PythonOperator tasks. The task fails, complaining that the module "unusual_prefix_*" could not be imported, where * is the name of the file containing the DAG. </p>
<p>The full stacktrace:</p>
<pre><code>['/usr/bin/airflow', 'run', 'tutorialpy', 'print_the_context', '2016-08-23T10:00:00', '--pickle', '90', '--local']
[2016-08-29 11:35:20,054] {__init__.py:36} INFO - Using executor CeleryExecutor
[2016-08-29 11:35:20,175] {configuration.py:508} WARNING - section/key [core/fernet_key] not found in config
Traceback (most recent call last):
File "/usr/bin/airflow", line 90, in <module>
args.func(args)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/airflow/bin/cli.py", line 214, in run
DagPickle).filter(DagPickle.id == args.pickle).first()
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2659, in first
ret = list(self[0:1])
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2457, in __getitem__
return list(res)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 86, in instances
util.raise_from_cause(err)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 71, in instances
rows = [proc(row) for row in fetch]
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 428, in _instance
loaded_instance, populate_existing, populators)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 486, in _populate_full
dict_[key] = getter(row)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py", line 1253, in process
return loads(value)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/dill/dill.py", line 260, in loads
return load(file)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/dill/dill.py", line 250, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
klass = self.find_class(module, name)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/dill/dill.py", line 406, in find_class
return StockUnpickler.find_class(self, module, name)
File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
__import__(module)
ImportError: No module named unusual_prefix_tutorialpy
[2016-08-29 11:35:20,498: ERROR/Worker-1] Command 'airflow run tutorialpy print_the_context 2016-08-23T10:00:00 --pickle 90 --local ' returned non-zero exit status 1
[2016-08-29 11:35:20,502: ERROR/MainProcess] Task airflow.executors.celery_executor.execute_command[01152b52-044e-4361-888c-ef2d45983c60] raised unexpected: AirflowException('Celery command failed',)
Traceback (most recent call last):
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/mf-airflow/airflow/lib/python2.7/site-packages/airflow/executors/celery_executor.py", line 45, in execute_command
raise AirflowException('Celery command failed')
AirflowException: Celery command failed
</code></pre>
| 0 | 2016-08-30T07:34:00Z | 39,222,486 | <p>Using the <code>donot_pickle=True</code> option (<a href="https://pythonhosted.org/airflow/cli.html" rel="nofollow">see Documentation</a>) I got rid of the error. Probably not the best solution, but good enough for my scenario.</p>
| 0 | 2016-08-30T08:32:00Z | [
"python",
"pickle",
"airflow"
] |
What is the Redis equivalent of TransactionDB's getRange? | 39,221,464 | <p>The TransactionDB Python API says:</p>
<blockquote>
<p>Database.get_range(begin, end[, limit, reverse, streaming_mode])</p>
<p>Returns all keys k such that begin <= k < end and their associated values as a
list of KeyValue objects. Note the exclusion of end from the range.</p>
<p>This read is fully synchronous.</p>
</blockquote>
<p>I want something equivalent in Redis. I looked at lrange and zrange functions but don't think that they are similar.</p>
| 2 | 2016-08-30T07:35:06Z | 39,222,578 | <p><a href="http://redis.io/topics/data-types#sorted-sets" rel="nofollow">Sorted sets</a> allow you to query by a <a href="http://redis.io/commands/zrange" rel="nofollow">range</a>. If you're storing say an object, you can use the sorted set to get the desired object ID's, then lookup the object information from a hash with <a href="http://redis.io/commands/hget" rel="nofollow">hget</a>/<a href="http://redis.io/commands/hgetall" rel="nofollow">hgetall</a>.</p>
| 0 | 2016-08-30T08:36:20Z | [
"python",
"redis",
"jedis",
"foundationdb"
] |
What is the Redis equivalent of TransactionDB's getRange? | 39,221,464 | <p>The TransactionDB Python API says:</p>
<blockquote>
<p>Database.get_range(begin, end[, limit, reverse, streaming_mode])</p>
<p>Returns all keys k such that begin <= k < end and their associated values as a
list of KeyValue objects. Note the exclusion of end from the range.</p>
<p>This read is fully synchronous.</p>
</blockquote>
<p>I want something equivalent in Redis. I looked at lrange and zrange functions but don't think that they are similar.</p>
| 2 | 2016-08-30T07:35:06Z | 39,234,918 | <p>TL;DR there isn't a direct equivalent and scanning the entire key space is always slow(er) - you should avoid it unless your intent is getting most/all of the keys anyway.</p>
<p>There are two Redis commands that allow you to scan the keyspace - one is called <a href="http://redis.io/commands/scan" rel="nofollow"><code>SCAN</code></a> and the <a href="http://redis.io/commands/keys" rel="nofollow">other one</a> should not be mentioned nor used for anything but development. Unlike what you're after, however, these commands:</p>
<ol>
<li>Do not work on ranges of keys, but rather on glob-like patterns</li>
<li>Do not return the associated values; you have to read them explicitly</li>
</ol>
<p>Generally speaking, you should refrain from practicing such read patterns unless you mean it - in most cases, you want the responses fast and cheap, so a full scan is almost always not the right way.</p>
| 1 | 2016-08-30T18:44:23Z | [
"python",
"redis",
"jedis",
"foundationdb"
] |
How to persist state when the crawler dies abruptly? | 39,221,755 | <p>This question is in reference to <a href="http://stackoverflow.com/questions/39211490/scrapy-spider-does-not-store-state-persistent-state">Scrapy spider does not store state (persistent state)</a></p>
<p>I have followed the following link to persist state of the crawler <a href="http://doc.scrapy.org/en/latest/topics/jobs.html" rel="nofollow">http://doc.scrapy.org/en/latest/topics/jobs.html</a></p>
<p>Now this works perfectly fine when the crawler ends properly, with an interrupt or a Ctrl+C.</p>
<p>I have noticed that the spider does not shut down properly when</p>
<ol>
<li>You hit Ctrl+C multiple times.</li>
<li>The server capacity is hit.</li>
<li>Any other reason due to which it ends abruptly</li>
</ol>
<p>When the spider runs again, it shuts itself down on the first URL crawled.</p>
<p>How can I achieve a persistent state for the crawler when something like the above happens?
Otherwise it ends up crawling the whole bunch of URLs again.</p>
<p>Logs when the spider runs again: </p>
<pre><code>2016-08-30 08:14:11 [scrapy] INFO: Scrapy 1.1.2 started (bot: maxverstappen)
2016-08-30 08:14:11 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'maxverstappen.spiders', 'SPIDER_MODULES': ['maxverstappen.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'maxverstappen'}
2016-08-30 08:14:11 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.spiderstate.SpiderState']
2016-08-30 08:14:11 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-08-30 08:14:11 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-08-30 08:14:12 [scrapy] INFO: Enabled item pipelines:
['maxverstappen.pipelines.MaxverstappenPipeline']
2016-08-30 08:14:12 [scrapy] INFO: Spider opened
2016-08-30 08:14:12 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-08-30 08:14:12 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-08-30 08:14:12 [scrapy] DEBUG: Crawled (200) <GET http://www.inautonews.com/robots.txt> (referer: None)
2016-08-30 08:14:12 [scrapy] DEBUG: Crawled (200) <GET http://www.thecheckeredflag.com/robots.txt> (referer: None)
2016-08-30 08:14:12 [scrapy] DEBUG: Crawled (200) <GET http://www.inautonews.com/> (referer: None)
2016-08-30 08:14:12 [scrapy] DEBUG: Crawled (200) <GET http://www.thecheckeredflag.com/> (referer: None)
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.inautonews.com/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.newsnow.co.uk': <GET http://www.newsnow.co.uk/h/Life+&+Style/Motoring>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.americanmuscle.com': <GET http://www.americanmuscle.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.extremeterrain.com': <GET http://www.extremeterrain.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.autoanything.com': <GET http://www.autoanything.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.bmwcoop.com': <GET http://www.bmwcoop.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.automotorblog.com': <GET http://www.automotorblog.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'twitter.com': <GET https://twitter.com/inautonews>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.facebook.com': <GET https://www.facebook.com/inautonews>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'plus.google.com': <GET https://plus.google.com/+Inautonewsplus>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.histats.com': <GET http://www.histats.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.hamiltonf1site.com': <GET http://www.hamiltonf1site.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.joshwellsracing.com': <GET http://www.joshwellsracing.com/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.jensonbuttonfan.net': <GET http://www.jensonbuttonfan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.fernandoalonsofan.net': <GET http://www.fernandoalonsofan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.markwebberfan.net': <GET http://www.markwebberfan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.felipemassafan.net': <GET http://www.felipemassafan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.nicorosbergfan.net': <GET http://www.nicorosbergfan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.nickheidfeldfan.net': <GET http://www.nickheidfeldfan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.lewishamiltonblog.net': <GET http://www.lewishamiltonblog.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.timoglockfan.net': <GET http://www.timoglockfan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.jarnotrullifan.net': <GET http://www.jarnotrullifan.net/>
2016-08-30 08:14:12 [scrapy] DEBUG: Filtered offsite request to 'www.brunosennafan.net': <GET http://www.brunosennafan.net/>
2016-08-30 08:14:12 [scrapy] INFO: Closing spider (finished)
2016-08-30 08:14:12 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 896,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 35353,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'dupefilter/filtered': 149,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 8, 30, 8, 14, 12, 724932),
'log_count/DEBUG': 28,
'log_count/INFO': 7,
'offsite/domains': 22,
'offsite/filtered': 23,
'request_depth_max': 1,
'response_received_count': 4,
'scheduler/dequeued': 2,
'scheduler/dequeued/disk': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/disk': 2,
'start_time': datetime.datetime(2016, 8, 30, 8, 14, 12, 13456)}
2016-08-30 08:14:12 [scrapy] INFO: Spider closed (finished)
</code></pre>
| 0 | 2016-08-30T07:50:15Z | 39,222,056 | <p>One way of doing this is separating discovery and consumer logic by having two spiders: one to discover product urls, another to consume those urls and return results for every one of them. If for some reason the consumer dies mid-run, it can easily resume the crawl because the discovery queue was not affected by the crash.</p>
<p>There's already a great tool for scrapy by the scrapy guys which does exactly that. It's called <a href="https://github.com/scrapinghub/frontera" rel="nofollow">Frontera</a></p>
<blockquote>
<p>Frontera is a web crawling framework consisting of crawl frontier, and
distribution/scaling primitives, allowing to build a large scale
online web crawler.</p>
<p>Frontera takes care of the logic and policies to follow during the
crawl. It stores and prioritises links extracted by the crawler to
decide which pages to visit next, and capable of doing it in
distributed manner.</p>
</blockquote>
<p>It sounds complicated but it's pretty straightforward. However, if you run something small-scale and one-off, you might just want to approach this manually. You can run the discovery spider and output results to JSON, then parse that JSON in a persistent manner (i.e. popping values out of it) in your consumer spider.</p>
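<p>For the manual approach, here is a minimal sketch of the persistent-queue part (the file name <code>discovered.jl</code> and the one-URL-per-JSON-line format are assumptions for illustration, not something scrapy prescribes):</p>

```python
import json
import os

def pop_next_url(queue_path):
    """Pop the first discovered URL from a JSON-lines queue file.

    The file is rewritten without the popped entry, so if the consumer
    spider crashes mid-run, the remaining URLs are still on disk and the
    crawl can resume where it left off.
    """
    if not os.path.exists(queue_path):
        return None
    with open(queue_path) as f:
        entries = [json.loads(line) for line in f if line.strip()]
    if not entries:
        return None
    head, rest = entries[0], entries[1:]
    with open(queue_path, "w") as f:
        for entry in rest:
            f.write(json.dumps(entry) + "\n")
    return head["url"]
```

<p>The discovery spider would append <code>{"url": ...}</code> items to <code>discovered.jl</code>, and the consumer calls <code>pop_next_url()</code> until it returns <code>None</code>.</p>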
| 0 | 2016-08-30T08:07:18Z | [
"python",
"scrapy",
"web-crawler"
] |
pandas keep numerical part | 39,221,900 | <p>I have a set of data like: </p>
<pre><code> 0 1
0 type 1 type 2
1 type 3 type 4
</code></pre>
<p>How can I transfer it to:</p>
<pre><code> 0 1
0 1 2
1 3 4
</code></pre>
<p>prefer using <code>apply</code> or <code>transform</code> function</p>
| 3 | 2016-08-30T07:58:09Z | 39,221,940 | <p><strong><em>Option 1</em></strong></p>
<pre><code>df.stack().str.replace('type ', '').unstack()
</code></pre>
<p><strong><em>Option 2</em></strong></p>
<pre><code>df.stack().str.split().str[-1].unstack()
</code></pre>
<p><strong><em>Option 3</em></strong></p>
<pre><code># pandas version 0.18.1 +
df.stack().str.extract('(\d+)', expand=False).unstack()
# pandas version 0.18.0 or prior
df.stack().str.extract('(\d+)').unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/u1NLv.png" rel="nofollow"><img src="http://i.stack.imgur.com/u1NLv.png" alt="enter image description here"></a></p>
<hr>
<h3>Timing</h3>
<p><strong><em>conclusion</em></strong> @jezreal's is best. no loops and no stacking.</p>
<p><strong><em>code</em></strong></p>
<p><strong><em>20,000 by 200</em></strong></p>
<pre><code>df_ = df.copy()
df = pd.concat([df_ for _ in range(10000)], ignore_index=True)
df = pd.concat([df for _ in range(100)], axis=1, ignore_index=True)
</code></pre>
<p><a href="http://i.stack.imgur.com/e6v6G.png" rel="nofollow"><img src="http://i.stack.imgur.com/e6v6G.png" alt="enter image description here"></a></p>
| 4 | 2016-08-30T08:00:36Z | [
"python",
"string",
"pandas",
"replace",
"numeric"
] |
pandas keep numerical part | 39,221,900 | <p>I have a set of data like: </p>
<pre><code> 0 1
0 type 1 type 2
1 type 3 type 4
</code></pre>
<p>How can I transfer it to:</p>
<pre><code> 0 1
0 1 2
1 3 4
</code></pre>
<p>prefer using <code>apply</code> or <code>transform</code> function</p>
| 3 | 2016-08-30T07:58:09Z | 39,222,003 | <p>You can use <code>applymap</code> and a regex (<code>import re</code>):</p>
<pre><code>df = df.applymap(lambda x: re.search(r'.*(\d+)', x).group(1))
</code></pre>
<p>If you want the digits as integers:</p>
<pre><code>df = df.applymap(lambda x: int(re.search(r'.*(\d+)', x).group(1)))
</code></pre>
<p>This will work even if you have other text instead of <code>type</code>, and only with integers (ie <code>'type 1.2'</code> will break this code), so you will have to adapt it.</p>
<p>Also note that this code is bound to fail if no number is found (ie <code>'type'</code>). You may want to create a function that can handle these errors instead of the <code>lambda</code>:</p>
<pre><code>def extract_digit(x):
try:
return int(re.search(r'.*(\d+)', x).group(1))
except (ValueError, AttributeError):
# return the existing value
return x
df = df.applymap(lambda x: extract_digit(x))
</code></pre>
| 2 | 2016-08-30T08:04:19Z | [
"python",
"string",
"pandas",
"replace",
"numeric"
] |
pandas keep numerical part | 39,221,900 | <p>I have a set of data like: </p>
<pre><code> 0 1
0 type 1 type 2
1 type 3 type 4
</code></pre>
<p>How can I transfer it to:</p>
<pre><code> 0 1
0 1 2
1 3 4
</code></pre>
<p>prefer using <code>apply</code> or <code>transform</code> function</p>
| 3 | 2016-08-30T07:58:09Z | 39,222,004 | <pre><code>>>> df.apply(lambda x: x.str.replace('type ','').astype(int))
0 1
0 1 2
1 3 4
</code></pre>
<p>remove the <code>.astype(int)</code> if you don't need to convert to int</p>
| 3 | 2016-08-30T08:04:31Z | [
"python",
"string",
"pandas",
"replace",
"numeric"
] |
pandas keep numerical part | 39,221,900 | <p>I have a set of data like: </p>
<pre><code> 0 1
0 type 1 type 2
1 type 3 type 4
</code></pre>
<p>How can I transfer it to:</p>
<pre><code> 0 1
0 1 2
1 3 4
</code></pre>
<p>prefer using <code>apply</code> or <code>transform</code> function</p>
| 3 | 2016-08-30T07:58:09Z | 39,222,073 | <p>Yoou can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow"><code>DataFrame.replace</code></a>:</p>
<pre><code>print (df.replace({'type ': ''}, regex=True))
0 1
0 1 2
1 3 4
</code></pre>
| 3 | 2016-08-30T08:08:10Z | [
"python",
"string",
"pandas",
"replace",
"numeric"
] |
Python classes - run method when any other method is called | 39,221,915 | <p>I am writing a Python app which will use a config file, so I am delegating the control of the config file to a dedicated module, <code>configmanager</code>, and within it a class, <code>ConfigManager</code>.</p>
<p>Whenever a method within <code>ConfigManager</code> is run, which will change my config file in some way, I will need to get the latest version of the file from the disk. Of course, in the spirit of DRY, I should delegate the opening of the config file to its own function.</p>
<p>However, I feel as though explicitly calling a method to get and return the config file in each function that edits it is not very "clean".</p>
<p>Is there a recommended way in Python to run a method, and make a value available to other methods in a class, whenever and <em>before</em> a method is run in that class?</p>
<p>In other words:</p>
<ul>
<li><p>I create <code>ConfigManager.edit_config()</code>.</p></li>
<li><p>Whenever <code>ConfigManager.edit_config()</code> is called, another function <code>ConfigManager.get_config_file()</code> is run.</p></li>
<li><p><code>ConfigManager.get_config_file()</code> makes a value available to the method <code>ConfigManager.edit_config()</code>.</p></li>
<li><p>And <code>ConfigManager.edit_config()</code> now runs, having access to the value given by <code>ConfigManager.get_config_file()</code>.</p></li>
</ul>
<p>I expect to have many versions of <code>edit_config()</code> methods in <code>ConfigManager</code>, hence the desire to DRY my code.</p>
<p>Is there a recommended way of accomplishing something like this? Or should I just create a function to get the config fine, and manually call it each time?</p>
| 0 | 2016-08-30T07:58:50Z | 39,221,993 | <p>The natural way to have:</p>
<blockquote>
<p><code>ConfigManager.get_config_file()</code> makes a value available to the method
<code>ConfigManager.edit_config()</code>.</p>
</blockquote>
<p>is to have <code>get_config_file()</code> <em>return</em> that value.</p>
<p>Just call <code>get_config_file()</code> within <code>edit_config()</code>.</p>
<p>If there are going to be <em>many</em> versions of <code>edit_config()</code>, then a decorator might be the way to go:</p>
<pre><code>def config_editor(func):
    def wrapped(self, *args, **kwargs):
        config_file = self.get_config_file()
        return func(self, config_file, *args, **kwargs)
    return wrapped

class ConfigManager:
    .
    .
    .
    @config_editor
    def edit_config1(self, config_file, arg1):
        ...

    @config_editor
    def edit_config2(self, config_file, arg1, arg2):
        ...

mgr = ConfigManager()
mgr.edit_config1(arg1)
</code></pre>
<p>I don't actually like this: </p>
<p>Firstly, the declaration of <code>edit_config1</code> takes one more argument than the actual usage needs (because the decorator supplies the additional argument).</p>
<p>Secondly, it doesn't actually save all that much boiler plate over:</p>
<pre><code>def edit_config3(self, arg1):
    config_file = self.get_config_file()
</code></pre>
<p>In conclusion, I don't think the decorators save enough repetition to be worth it.</p>
| 2 | 2016-08-30T08:03:48Z | [
"python"
] |
Python classes - run method when any other method is called | 39,221,915 | <p>I am writing a Python app which will use a config file, so I am delegating the control of the config file to a dedicated module, <code>configmanager</code>, and within it a class, <code>ConfigManager</code>.</p>
<p>Whenever a method within <code>ConfigManager</code> is run, which will change my config file in some way, I will need to get the latest version of the file from the disk. Of course, in the spirit of DRY, I should delegate the opening of the config file to its own function.</p>
<p>However, I feel as though explicitly calling a method to get and return the config file in each function that edits it is not very "clean".</p>
<p>Is there a recommended way in Python to run a method, and make a value available to other methods in a class, whenever and <em>before</em> a method is run in that class?</p>
<p>In other words:</p>
<ul>
<li><p>I create <code>ConfigManager.edit_config()</code>.</p></li>
<li><p>Whenever <code>ConfigManager.edit_config()</code> is called, another function <code>ConfigManager.get_config_file()</code> is run.</p></li>
<li><p><code>ConfigManager.get_config_file()</code> makes a value available to the method <code>ConfigManager.edit_config()</code>.</p></li>
<li><p>And <code>ConfigManager.edit_config()</code> now runs, having access to the value given by <code>ConfigManager.get_config_file()</code>.</p></li>
</ul>
<p>I expect to have many versions of <code>edit_config()</code> methods in <code>ConfigManager</code>, hence the desire to DRY my code.</p>
<p>Is there a recommended way of accomplishing something like this? Or should I just create a function to get the config fine, and manually call it each time?</p>
| 0 | 2016-08-30T07:58:50Z | 39,244,597 | <p>Since you get something from disk, you open a file. So, you could use the class with the <code>with</code> "function" of python. </p>
<p>You should check the context managers. With that, you will be able to implement the functionality that you want each time that someone access the config file through the <code>__enter__</code> method and (if it is needed) implement the functionality for stop using the resource with the <code>__exit__</code> method. </p>
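<p>A minimal sketch of what such a context manager could look like (the class name, the <code>key=value</code> file format, and the write-back-on-exit policy are illustrative assumptions, not from the question):</p>

```python
class ConfigFile(object):
    """Context manager that re-reads the config on entry and
    writes changes back on a clean exit."""

    def __init__(self, path):
        self.path = path
        self.data = {}

    def __enter__(self):
        # Always pick up the latest version of the file from disk.
        try:
            with open(self.path) as f:
                for line in f:
                    key, _, value = line.partition("=")
                    self.data[key.strip()] = value.strip()
        except IOError:  # file does not exist yet
            pass
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:  # only persist changes on success
            with open(self.path, "w") as f:
                for key, value in sorted(self.data.items()):
                    f.write("%s=%s\n" % (key, value))
        return False  # do not swallow exceptions
```

<p>Each <code>with ConfigFile(path) as cfg:</code> block then always starts from the latest on-disk state, and changes made inside the block are persisted only if no exception was raised.</p>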
| 0 | 2016-08-31T08:34:26Z | [
"python"
] |
pandas handle column with different date time formats gracefully | 39,221,996 | <p>I have a column with a birthdate. Some are N.A., some <code>01.01.2016</code>, but some contain <code>01.01.2016 01:01:01</code>.
Filtering the N.A. values works fine. But handling the different date formats seems clumsy. Is it possible to have pandas handle these gracefully, e.g. for a birthdate only interpret the date part and not fail?</p>
| 1 | 2016-08-30T08:03:55Z | 39,222,137 | <p><code>pd.to_datetime()</code> will handle multiple formats</p>
<pre><code>>>> ser = pd.Series(['NaT', '01.01.2016', '01.01.2016 01:01:01'])
>>> pd.to_datetime(ser)
0 NaT
1 2016-01-01 00:00:00
2 2016-01-01 01:01:01
dtype: datetime64[ns]
</code></pre>
| 3 | 2016-08-30T08:11:50Z | [
"python",
"date",
"datetime",
"pandas"
] |
Replace NAT dates with data from another column Python Pandas | 39,222,157 | <p>I am facing difficulties replacing NAT dates with data from another column. I am using Python pandas, by the way.</p>
<p>For example</p>
<pre><code> Column1 Column2
02-12-2006 2006
05-12-2005 2005
NAT 2008
15-02-2015 2015
NAT 2001
</code></pre>
<p>I have the valid dates in column1 and only the years in column2. How can I impute the NAT values using the year from column2 in the same row?</p>
<p>My desired result will be</p>
<pre><code> Column1 Column2
02-12-2006 2006
05-12-2005 2005
2008 2008
15-02-2015 2015
2001 2001
</code></pre>
| 2 | 2016-08-30T08:13:17Z | 39,222,205 | <pre><code>df.Column1.combine_first(df.Column2)
0 2006-02-12 00:00:00
1 2005-05-12 00:00:00
2 2008
3 2015-02-15 00:00:00
4 2001
Name: Column1, dtype: object
</code></pre>
<hr>
<h3>Better Answer</h3>
<pre><code>df.Column1 = np.where(df.Column1.notnull(),
df.Column1.dt.strftime('%Y-%m-%d'),
df.Column2)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/ncJSb.png" rel="nofollow"><img src="http://i.stack.imgur.com/ncJSb.png" alt="enter image description here"></a></p>
| 5 | 2016-08-30T08:16:24Z | [
"python",
"pandas"
] |
Django1.11 - URLconf does not appear to have any patterns | 39,222,191 | <p>I'm a new user of Django and I have some difficulties.
I follow a Django 1.8 tutorial but I have Django 1.11. The problem is about urls.py files & patterns.</p>
<p>Here is my code, blog/urls.py:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpattern = [
url(r'^accueil/$', views.home, name='home'),
]
</code></pre>
<p>first/urls.py:</p>
<pre><code>from django.conf.urls import url include
urlpattern = [
url(r'^blog/', include('blog/urls')),
]
</code></pre>
<p>And when I use the runserver command, it displays:</p>
<pre><code>URLconf '<module 'blog.urls'>'does not appear to have any patterns in it
</code></pre>
<p>I read somewhere that patterns was removed in 1.10 and it's old syntax now, but I haven't found the solution.</p>
<p>Any idea ? </p>
| 0 | 2016-08-30T08:15:35Z | 39,222,221 | <p>Django 1.11 is not released. You should use the actual released version, 1.10. Also, you should use the actual tutorial for your version.</p>
<p>However, the problem is that both of your files should define <code>urlpatterns</code> with an s, not <code>urlpattern</code>.</p>
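<p>For reference, a corrected pair of files might look like this (note that the question's first/urls.py also needs a comma in the import and a dotted module path in <code>include()</code> — hedged fixes beyond what the answer mentions):</p>

```python
# blog/urls.py
from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^accueil/$', views.home, name='home'),
]

# first/urls.py
from django.conf.urls import url, include

urlpatterns = [
    url(r'^blog/', include('blog.urls')),
]
```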
| 2 | 2016-08-30T08:17:27Z | [
"python",
"django"
] |
Django1.11 - URLconf does not appear to have any patterns | 39,222,191 | <p>I'm a new user of Django and I have some difficulties.
I follow a Django 1.8 tutorial but I have Django 1.11. The problem is about urls.py files & patterns.</p>
<p>Here is my code, blog/urls.py:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpattern = [
url(r'^accueil/$', views.home, name='home'),
]
</code></pre>
<p>first/urls.py:</p>
<pre><code>from django.conf.urls import url include
urlpattern = [
url(r'^blog/', include('blog/urls')),
]
</code></pre>
<p>And when I use the runserver command, it displays:</p>
<pre><code>URLconf '<module 'blog.urls'>'does not appear to have any patterns in it
</code></pre>
<p>I read somewhere that patterns was removed in 1.10 and it's old syntax now, but I haven't found the solution.</p>
<p>Any idea ? </p>
| 0 | 2016-08-30T08:15:35Z | 39,222,357 | <p>I think your need this page.<a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/" rel="nofollow">URL dispatcher</a></p>
<p>Your code is fault, I think your need to change it:</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^accueil/$', views.home, name='home'),
]
</code></pre>
| 0 | 2016-08-30T08:25:11Z | [
"python",
"django"
] |
Send Email Issue - Google Compute Engine VM | 39,222,288 | <p>I am trying to send an Email from python code. The Code works fine when tested on my local Server. But when I deploy these changes on my Google Compute Engine VM, the Email sending stops and Connection Error starts coming.</p>
<p>Error Trace:</p>
<pre><code>File "/usr/local/lib/python2.7/dist-packages/django/core/mail/message.py", line 342, in send
return self.get_connection(fail_silently).send_messages([self])
File "/usr/local/lib/python2.7/dist-packages/django/core/mail/backends/smtp.py", line 100, in send_messages
new_conn_created = self.open()
File "/usr/local/lib/python2.7/dist-packages/django/core/mail/backends/smtp.py", line 58, in open
self.connection = connection_class(self.host, self.port, **connection_params)
File "/usr/lib/python2.7/smtplib.py", line 256, in __init__
(code, msg) = self.connect(host, port)
File "/usr/lib/python2.7/smtplib.py", line 316, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/lib/python2.7/smtplib.py", line 291, in _get_socket
return socket.create_connection((host, port), timeout)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 110] Connection timed out
</code></pre>
<p>Code: </p>
<pre><code>from django.core.mail import EmailMessage
msg = EmailMessage("Test Sub", "Msg", "abc@abc.in", ["def@abc.in"])
msg.content_subtype = "html"
msg.send()
</code></pre>
<p>Email settings in settings.py File:</p>
<pre><code>EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.zoho.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'abc@abc.in'
EMAIL_HOST_PASSWORD = 'abcxxxabc'
DEFAULT_FROM_EMAIL = 'abc@abc.in'
DEFAULT_TO_EMAIL = 'def@abc.in'
</code></pre>
<p>Can someone please suggest what could be the cause for this behavior ? And how can this issue be resolved ?</p>
<p>Thanks,</p>
| 0 | 2016-08-30T08:21:17Z | 39,222,919 | <p>You can't send mail directly from google cloud, port 587 is blocked. You're gonna have to find another solution; take a look at Mailgun.</p>
| 1 | 2016-08-30T08:55:10Z | [
"python",
"django",
"google-compute-engine"
] |
Connecting signals to slots up the directory tree with PySide | 39,222,294 | <p>I am trying to separate UI components from functional code as much as I can, so my PySide application is structured like this</p>
<pre><code>main.py
package\
app.py
__init__.py
ui\
mainwindow.py
__init__.py
</code></pre>
<p>Main is just a simple starter</p>
<pre><code>if __name__ == '__main__':
import sys
from package import app
sys.exit(app.run())
</code></pre>
<p>and app is where all the functionality should reside. I start the UI from app.py</p>
<pre><code>from PySide import QtCore, QtGui
@QtCore.Slot()
def start_button_clicked():
print "started"
def run():
import sys
from ui.mainwindow import Ui_MainWindow
app = QtGui.QApplication(sys.argv)
MainWindow = QtGui.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>Now, from the user interface I want to connect the emitted signals up to app.py, to avoid having a lot of functionality cluttering the mainwindow file, but the UI file is not aware of app.py. How should I go about doing this and avoid all slots being in mainwindow.py? app.py should easily be able to do stuff on the UI since it has an object reference to it, but I have no clue about the other way around. </p>
| 0 | 2016-08-30T08:21:31Z | 39,238,383 | <p>Create a subclass for the top-level widget from Qt Designer. Using this approach, all of child widgets from Qt Designer will become attributes of the subclass:</p>
<pre><code>import sys
from PySide import QtCore, QtGui
from ui.mainwindow import Ui_MainWindow
class MainWindow(QtGui.QMainWindow, Ui_MainWindow):
def __init__(self):
super(MainWindow, self).__init__()
self.setupUi(self)
self.start_button.clicked.connect(self.start_button_clicked)
def start_button_clicked(self):
print "started"
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec_())
</code></pre>
| 1 | 2016-08-30T23:07:03Z | [
"python",
"pyside",
"signals-slots",
"qt-designer"
] |
Python split up data from print string | 39,222,329 | <p>This is my first post here, so if my post doesn't follow the "standard" you know why.
I am really new to Python and programming, and I am trying to learn as I go.</p>
<p>I am using a script that controls my Husqvarna Automower.</p>
<p>In that script there is a line that I don't understand and whose outcome I would like to change.</p>
<pre><code>print(dict(mow.status()['mowerInfo']))
</code></pre>
<p>When I run the script I get a printout like this</p>
<pre><code>{u'storedTimestamp': u'1472541846629', u'hdop': u'0.0', u'latitude': u'57.57320833333333', u'lastErrorCode': u'0', u'nextStartTimestamp': u'1472587200', u'mowerStatus': u'PARKED_TIMER', u'cachedSettingsUUID': u'c1029c29-ecd5-48bd-a27b-fa98c6985ff0', u'hostMessage': u'0', u'configChangeCounter': u'846', u'longitude': u'12.04773', u'nextStartSource': u'WEEK_TIMER', u'secondsOld': u'-1471069304597', u'gpsStatus': u'USING_GPS_MAP', u'gsmRssi': u'0', u'batteryPercent': u'100', u'connected': u'true', u'operatingMode': u'AUTO', u'lastErrorCodeTimestamp': u'0'}
</code></pre>
<p>I understand that this line executes the "status" function and prints the outcome, but I don't really understand the dict and the ['mowerInfo'] part, or why I can't find any reference to ['mowerInfo'] anywhere else in the script. As I understand it there should be a dictionary in the script, but I can't find it.</p>
<p>And now to the actual question.</p>
<p>Instead of the print command, I would like to get some of the information parsed into variables instead.</p>
<p>For example, I would like to have a variable called mowerStatus that should have the value PARKED_TIMER, and a variable called batteryPercent that should have the value 100.</p>
<p>The script is run by a smart home solution called Indigodomo on a Mac using Python 2.6.</p>
<p>Does anyone know how to do that?</p>
<p>I have modified the script from the original</p>
<p>Here is my modified script (with my credentials XXed out):</p>
<pre><code>import requests
import xmltodict
class API:
_API_IM = 'https://tracker-id-ws.husqvarna.net/imservice/rest/'
_API_TRACK = 'https://tracker-api-ws.husqvarna.net/services/'
def __init__(self):
self.session = requests.Session()
self.device_id = None
self.push_id = None
def login(self, login, password):
request = ("<login>"
" <email>%s</email>"
" <password>%s</password><language>fr-FR</language>"
"</login>") % (login, password)
response = self.session.post(self._API_IM + 'im/login',
data=request, headers={'Content type': 'application/xml'})
response.raise_for_status()
self.session.headers.update({'Session-Token': response.headers['Session-Token']})
self.select_first_robot()
def logout(self):
response = self.session.post(self._API_IM + 'im/logout')
response.raise_for_status()
self.device_id = None
del (self.session.headers['Session-Token'])
def list_robots(self):
response = self.session.get(self._API_TRACK + 'pairedRobots_v2')
response.raise_for_status()
result = xmltodict.parse(response.content)
return result
def select_first_robot(self):
result = self.list_robots()
self.device_id = result['robots']['robot']['deviceId']
def status(self):
response = self.session.get(self._API_TRACK + 'robot/%s/status_v2/' % self.device_id)
response.raise_for_status()
result = xmltodict.parse(response.content)
return result
def geo_status(self):
response = self.session.get(self._API_TRACK + 'robot/%s/geoStatus/' % self.device_id)
response.raise_for_status()
result = xmltodict.parse(response.content)
return result
def get_mower_settings(self):
request = ("<settings>"
" <autoTimer/><gpsSettings/><drivePastWire/>"
" <followWireOut><startPositionId>1</startPositionId></followWireOut>"
" <followWireOut><startPositionId>2</startPositionId></followWireOut>"
" <followWireOut><startPositionId>3</startPositionId></followWireOut>"
" <followWireOut><startPositionId>4</startPositionId></followWireOut>"
" <followWireOut><startPositionId>5</startPositionId></followWireOut>"
" <followWireIn><loopWire>RIGHT_BOUNDARY_WIRE</loopWire></followWireIn>"
" <followWireIn><loopWire>GUIDE_1</loopWire></followWireIn>"
" <followWireIn><loopWire>GUIDE_2</loopWire></followWireIn>"
" <followWireIn><loopWire>GUIDE_3</loopWire></followWireIn>"
" <csRange/>"
" <corridor><loopWire>RIGHT_BOUNDARY_WIRE</loopWire></corridor>"
" <corridor><loopWire>GUIDE_1</loopWire></corridor>"
" <corridor><loopWire>GUIDE_2</loopWire></corridor>"
" <corridor><loopWire>GUIDE_3</loopWire></corridor>"
" <exitAngles/><subareaSettings/>"
"</settings>")
response = self.session.post(self._API_TRACK + 'robot/%s/settings/' % self.device_id,
data=request, headers={'Content-type': 'application/xml'})
response.raise_for_status()
result = xmltodict.parse(response.content)
return result
def settingsUUID(self):
response = self.session.get(self._API_TRACK + 'robot/%s/settingsUUID/' % self.device_id)
response.raise_for_status()
result = xmltodict.parse(response.content)
return result
def control(self, command):
if command not in ['PARK', 'STOP', 'START']:
raise Exception("Unknown command")
request = ("<control>"
" <action>%s</action>"
"</control>") % command
response = self.session.put(self._API_TRACK + 'robot/%s/control/' % self.device_id,
data=request, headers={'Content-type': 'application/xml'})
response.raise_for_status()
def add_push_id(self, id):
request = "id=%s&platform=iOS" % id
response = self.session.post(self._API_TRACK + 'addPushId', data=request,
headers={'Content-type': 'application/x-www-form-urlencoded; charset=UTF-8'})
response.raise_for_status()
self.push_id = id
def remove_push_id(self):
request = "id=%s&platform=iOS" % id
response = self.session.post(self._API_TRACK + 'removePushId', data=request,
headers={'Content-type': 'application/x-www-form-urlencoded; charset=UTF-8'})
response.raise_for_status()
self.push_id = None
if __name__ == '__main__':
retry = 5
while retry > 0:
try:
mow = API()
mow.login("xxx@xxx.com", "xxxxxx")
print(dict(mow.status()['mowerInfo']))
retry = 0
except Exception as ex:
retry -= 1
if retry == 0:
print("[ERROR] Retrying to send the command")
else:
print("[ERROR] Failed to send the command")
exit(1)
print("Done")
mow.logout()
exit(0)
</code></pre>
<p>The original project and script can be found here:</p>
<p><a href="https://github.com/chrisz/pyhusmow" rel="nofollow">https://github.com/chrisz/pyhusmow</a></p>
<p>Thanx Martin</p>
| 0 | 2016-08-30T08:23:48Z | 39,225,338 | <pre><code>dic_info = dict(mow.status()['mowerInfo'])
mowerStatus = dic_info.get('mowerStatus')
batteryPercent = dic_info.get('batteryPercent')
</code></pre>
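<p>Note that every value in the mowerInfo dict arrives as a (unicode) string, so numeric fields need an explicit conversion if you want to compare or calculate with them. A small sketch using a trimmed-down status dict:</p>

```python
# Trimmed-down example of the dict returned by mow.status()['mowerInfo']
info = {u'mowerStatus': u'PARKED_TIMER', u'batteryPercent': u'100'}

mowerStatus = info.get('mowerStatus')                # 'PARKED_TIMER'
batteryPercent = int(info.get('batteryPercent', 0))  # 100 as an int, not a string
```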
| 2 | 2016-08-30T10:42:01Z | [
"python"
] |
Wikipedia table scraping using python | 39,222,441 | <p>I am trying to scrape tables from <a href="https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population" rel="nofollow">wikipedia</a>. I wrote a table scraper using tutorials available on the web, which downloads a table and saves it as a pandas data frame.</p>
<p>This is the code</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
import urllib2
headers = { 'User-Agent' : 'Mozilla/5.0' }
req = urllib2.Request('https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population', None, headers)
html = urllib2.urlopen(req).read()
soup = BeautifulSoup(html, 'lxml') # Parse the HTML as a string
print soup
# Create an object of the first object
table = soup.find("table", {"class":"wikitable sortable jquery-tablesorter"})
print table
rank=[]
country=[]
pop=[]
date=[]
per=[]
source=[]
for row in table.find_all('tr')[1:]:
col=row.find_all('td')
col1=col[0].string.strip()
rank.append(col1)
col2=col[1].string.strip()
country.append(col2)
col3=col[2].string.strip()
pop.append(col2)
col4=col[3].string.strip()
date.append(col4)
col5=col[4].string.strip()
per.append(col5)
col6=col[5].string.strip()
source.append(col6)
columns={'Rank':rank,'Country':country,'Population':pop,'Date':date,'Percentage':per,'Source':source}
# Create a dataframe from the columns variable
df = pd.DataFrame(columns)
df
</code></pre>
<p>But it is not downloading the table. The problem is in this section</p>
<pre><code>table = soup.find("table", {"class":"wikitable sortable jquery-tablesorter"})
print table
</code></pre>
<p>where output is <code>None</code></p>
| 0 | 2016-08-30T08:29:49Z | 39,222,745 | <p>As far as I can see, there is no such element on that page. The main table has <code>"class":"wikitable sortable"</code> but not the <code>jquery-tablesorter</code>.</p>
<p>Make sure you know what element you are trying to select and check if your program sees the same elements you see, then make your selector.</p>
| 0 | 2016-08-30T08:45:29Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
Wikipedia table scraping using python | 39,222,441 | <p>I am trying to scrape tables from <a href="https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population" rel="nofollow">wikipedia</a>. I wrote a table scraper using tutorials available on the web, which downloads a table and saves it as a pandas data frame.</p>
<p>This is the code</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
import urllib2
headers = { 'User-Agent' : 'Mozilla/5.0' }
req = urllib2.Request('https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population', None, headers)
html = urllib2.urlopen(req).read()
soup = BeautifulSoup(html, 'lxml') # Parse the HTML as a string
print soup
# Create an object of the first object
table = soup.find("table", {"class":"wikitable sortable jquery-tablesorter"})
print table
rank=[]
country=[]
pop=[]
date=[]
per=[]
source=[]
for row in table.find_all('tr')[1:]:
col=row.find_all('td')
col1=col[0].string.strip()
rank.append(col1)
col2=col[1].string.strip()
country.append(col2)
col3=col[2].string.strip()
pop.append(col2)
col4=col[3].string.strip()
date.append(col4)
col5=col[4].string.strip()
per.append(col5)
col6=col[5].string.strip()
source.append(col6)
columns={'Rank':rank,'Country':country,'Population':pop,'Date':date,'Percentage':per,'Source':source}
# Create a dataframe from the columns variable
df = pd.DataFrame(columns)
df
</code></pre>
<p>But it is not downloading the table. The problem is in this section</p>
<pre><code>table = soup.find("table", {"class":"wikitable sortable jquery-tablesorter"})
print table
</code></pre>
<p>where output is <code>None</code></p>
| 0 | 2016-08-30T08:29:49Z | 39,222,812 | <p>The docs says you need to specify multiple classes like so:</p>
<pre><code>soup.find("table", class_="wikitable sortable jquery-tablesorter")
</code></pre>
<p>Also, consider using requests instead of urllib2.</p>
| 0 | 2016-08-30T08:48:54Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
How to use Groupby with condition in Python | 39,222,507 | <p>I have a dataframe called merged_df_energy</p>
<pre><code>merged_df_energy.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 11232 entries, 0 to 11231
Data columns (total 17 columns):
TIMESTAMP 11232 non-null datetime64[ns]
P_ACT_KW 11232 non-null int64
PERIODE_TARIF 11232 non-null object
P_SOUSCR 11232 non-null int64
high_energy 11232 non-null int64
medium_energy 11232 non-null int64
low_energy 11232 non-null int64
0ACT_TIME_ETA_PRG_P2REF_RM 11232 non-null int64
0ACT_TIME_ETA_PRG_VDES_RM 11232 non-null int64
0ACT_TIME_ETA_PRG_P3REF_RM 11232 non-null int64
0ACT_TIME_ETA_POMP_RECIRC_N1 11232 non-null int64
0ACT_TIME_ETA_POMP_RECIRC_N2 11232 non-null int64
0ACT_TIME_ETA_POMP_RECIRC_N3 11232 non-null int64
0ACT_TIME_ETA_SURPRES_AIR_N1 11232 non-null int64
0ACT_TIME_ETA_SURPRES_AIR_N2 11232 non-null int64
0ACT_TIME_ETA_SURPRES_AIR_N3 11232 non-null int64
class_energy 11232 non-null object
dtypes: datetime64[ns](1), int64(14), object(2)
memory usage: 1.5+ MB
</code></pre>
<p>with this structure : </p>
<pre><code>TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR high_energy medium_energy low_energy 0ACT_TIME_ETA_PRG_P2REF_RM 0ACT_TIME_ETA_PRG_VDES_RM
0ACT_TIME_ETA_PRG_P3REF_RM 0ACT_TIME_ETA_POMP_RECIRC_N1 0ACT_TIME_ETA_POMP_RECIRC_N2 0ACT_TIME_ETA_POMP_RECIRC_N3
0ACT_TIME_ETA_SURPRES_AIR_N1 0ACT_TIME_ETA_SURPRES_AIR_N2
0ACT_TIME_ETA_SURPRES_AIR_N3 class_energy
2016-05-10 04:30:00 107 HP 250 107 0 0 100 0 0 0 0 0 0 0 0 high
2016-05-10 04:40:00 109 HC 250 109 0 0 0 0 100 0 0 0 0 0 0 high
2016-05-10 04:50:00 106 HP 250 106 0 0 0 0 100 0 0 0 0 0 0 high
</code></pre>
<p>I am trying to calculate the sum of (0ACT_TIME_ETA_PRG_P2REF_RM, 0ACT_TIME_ETA_PRG_VDES_RM, 0ACT_TIME_ETA_PRG_P3REF_RM, 0ACT_TIME_ETA_POMP_RECIRC_N1, 0ACT_TIME_ETA_POMP_RECIRC_N2, 0ACT_TIME_ETA_POMP_RECIRC_N3, 0ACT_TIME_ETA_SURPRES_AIR_N1, 0ACT_TIME_ETA_SURPRES_AIR_N2, 0ACT_TIME_ETA_SURPRES_AIR_N3) grouped by class_energy.</p>
<p>For this I did : </p>
<pre><code>df_F1 = (merged_df_energy.groupby(by=['class_energy'], as_index=False)['0ACT_TIME_ETA_PRG_P2REF_RM', '0ACT_TIME_ETA_PRG_VDES_RM','0ACT_TIME_ETA_PRG_P3REF_RM','0ACT_TIME_ETA_POMP_RECIRC_N1','0ACT_TIME_ETA_POMP_RECIRC_N2', '0ACT_TIME_ETA_POMP_RECIRC_N3', '0ACT_TIME_ETA_SURPRES_AIR_N1', '0ACT_TIME_ETA_SURPRES_AIR_N2', '0ACT_TIME_ETA_SURPRES_AIR_N3' ].sum())
</code></pre>
<p>It works fine, but I would like to know how I can do this with the condition (if PERIODE_TARIF = 'HP')?</p>
 | 2 | 2016-08-30T08:33:03Z | 39,222,610 | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> before <code>groupby</code>:</p>
<pre><code>merged_df_energy1 = merged_df_energy[merged_df_energy.PERIODE_TARIF == 'HP']
cols = ['0ACT_TIME_ETA_PRG_P2REF_RM',
'0ACT_TIME_ETA_PRG_VDES_RM',
'0ACT_TIME_ETA_PRG_P3REF_RM',
'0ACT_TIME_ETA_POMP_RECIRC_N1',
'0ACT_TIME_ETA_POMP_RECIRC_N2',
'0ACT_TIME_ETA_POMP_RECIRC_N3',
'0ACT_TIME_ETA_SURPRES_AIR_N1',
'0ACT_TIME_ETA_SURPRES_AIR_N2',
'0ACT_TIME_ETA_SURPRES_AIR_N3']
df_F1 = (merged_df_energy1.groupby(by=['class_energy'], as_index=False)[cols].sum())
print (df_F1)
class_energy 0ACT_TIME_ETA_PRG_P2REF_RM 0ACT_TIME_ETA_PRG_VDES_RM \
0 high 100 0
0ACT_TIME_ETA_PRG_P3REF_RM 0ACT_TIME_ETA_POMP_RECIRC_N1 \
0 100 0
0ACT_TIME_ETA_POMP_RECIRC_N2 0ACT_TIME_ETA_POMP_RECIRC_N3 \
0 0 0
0ACT_TIME_ETA_SURPRES_AIR_N1 0ACT_TIME_ETA_SURPRES_AIR_N2 \
0 0 0
0ACT_TIME_ETA_SURPRES_AIR_N3
0 0
</code></pre>
<p>EDIT:</p>
<p>If order of columns is never changed, you can use:</p>
<pre><code>cols = merged_df_energy.columns[7:16]
print (cols)
Index(['0ACT_TIME_ETA_PRG_P2REF_RM', '0ACT_TIME_ETA_PRG_VDES_RM',
'0ACT_TIME_ETA_PRG_P3REF_RM', '0ACT_TIME_ETA_POMP_RECIRC_N1',
'0ACT_TIME_ETA_POMP_RECIRC_N2', '0ACT_TIME_ETA_POMP_RECIRC_N3',
'0ACT_TIME_ETA_SURPRES_AIR_N1', '0ACT_TIME_ETA_SURPRES_AIR_N2',
'0ACT_TIME_ETA_SURPRES_AIR_N3'],
dtype='object')
</code></pre>
| 2 | 2016-08-30T08:37:56Z | [
"python",
"pandas",
"dataframe",
"group-by",
"condition"
] |
How to update a plot on Tkinter canvas? | 39,222,641 | <p>I have a tkinter app where I have different frames, each one with a different plot.
I have an import function that allows me to select the data file that I want to plot.</p>
<p>Right now, everything is working well if I import the file right at the start of the program, i.e. as soon as the subplot is created on the canvas the data is shown.</p>
<p>However, if the subplot is created beforehand without any data, when I import the data and call the function to plot it, the canvas does not update the plot. BUT, if I resize the window (or maximize it) the plot is updated.</p>
<p>Below is the code. Any suggestions regarding the code structure are appreciated.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.figure import Figure
import matplotlib.animation as animation
import Tkinter as tk
import ttk
from tkFileDialog import askopenfilename
LARGE_FONT= ("Verdana", 12)
plot_colors = plt.rcParams['axes.color_cycle']
width, height = plt.figaspect(1)
fig_nyquist = Figure(figsize=(width, height), dpi=100)
plot_axes_nyquist = fig_nyquist.add_subplot(111)
fig_bode = Figure(figsize=(width, height), dpi=100)
plot_axes_bode = fig_bode.add_subplot(111)
fig_randles = Figure(figsize=(width, height), dpi=100)
plot_axes_randles = fig_randles.add_subplot(111)
class EISapp(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
tk.Tk.wm_title(self, "EIS + CV Analyser")
container = tk.Frame(self)
container.pack(pady=10,padx=10, side="top", fill="both", expand = True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (menu_page, nyquist_page, bode_page, randles_page):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(menu_page)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
class menu_page(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label = tk.Label(self, text="Menu", font=LARGE_FONT)
label.pack(pady=10,padx=10)
import_button = ttk.Button(self, text="Import EIS data file", command=lambda: import_EIS_data())
import_button.pack()
nyquist_button = ttk.Button(self, text="Nyquist Plot", command=lambda: controller.show_frame(nyquist_page))
nyquist_button.pack()
bode_button = ttk.Button(self, text="Bode Plot", command=lambda: controller.show_frame(bode_page))
bode_button.pack()
randles_button = ttk.Button(self, text="Randles Plot", command=lambda: controller.show_frame(randles_page))
randles_button.pack()
class nyquist_page(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Nyquist Plot", font=LARGE_FONT)
label.pack(pady=10,padx=10)
menu_button = ttk.Button(self, text="Menu", command=lambda: controller.show_frame(menu_page))
menu_button.pack()
refresh_button = ttk.Button(self, text="Refresh", command=lambda: refresh_plots())
refresh_button.pack()
canvas = FigureCanvasTkAgg(fig_nyquist, self)
canvas.show()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2TkAgg(canvas, self)
toolbar.update()
class bode_page(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Bode Plot", font=LARGE_FONT)
label.pack(pady=10,padx=10)
menu_button = ttk.Button(self, text="Menu", command=lambda: controller.show_frame(menu_page))
menu_button.pack()
canvas = FigureCanvasTkAgg(fig_bode, self)
canvas.show()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2TkAgg(canvas, self)
toolbar.update()
class randles_page(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Randles Plot", font=LARGE_FONT)
label.pack(pady=10,padx=10)
menu_button = ttk.Button(self, text="Menu", command=lambda: controller.show_frame(menu_page))
menu_button.pack()
canvas = FigureCanvasTkAgg(fig_randles, self)
canvas.show()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2TkAgg(canvas, self)
toolbar.update()
def import_EIS_data():
global EIS_df
try:
filename = askopenfilename(defaultextension='.txt', filetypes=[('txt file','*.txt'), ('All files','*.*')]) # show an "Open" dialog box and return the path to the selected file
data_table = pd.read_table(filename, index_col=False, skiprows=headers_footers(filename)[0], skip_footer=headers_footers(filename)[1], names=['Temperature', 'Frequency', 'Raw Amplitude', 'Z1', 'Z2', 'Time', 'Gain level'] )
# Convert Frequency values from kHz to Hz
data_table['Frequency'] = data_table['Frequency'] * 1000;
# Delete Unnecessary Columns
data_table = data_table.drop(['Temperature', 'Gain level', 'Raw Amplitude', 'Time'], axis=1); # axis=1 selects "vertical axis" (i.e. columns instead of rows)
# Adds calculated values of impedance modulus and angle
data_table['Z'] = np.sqrt(data_table['Z1']**2 + data_table['Z2']**2);
data_table['Angle'] = np.degrees( np.arctan( -data_table['Z2'] / data_table['Z1'] ) );
EIS_df = EIS_df.append(data_table)
refresh_plots()
except:
quit()
def nyquist_plot(Z1, Z2, plot_axes=None):
if plot_axes == None:
plot_axes = plt.subplot(111)
if not EIS_df.empty:
plot_axes.plot(Z1, Z2)
plot_axes.set_xlabel('$\Re(Z)$')
plot_axes.set_ylabel('$\Im(Z)$')
plot_axes.set_xlim([0, 800]);
plot_axes.set_ylim([-800, 0]);
def bode_plot(freq, Z, angle, imped_axis=None):
if imped_axis == None:
imped_axis = plt.subplot(111)
if not EIS_df.empty:
handle_imped, = imped_axis.plot(freq, Z, label="Impedance")
imped_axis.set_xlabel('$Frequency$ $(Hz)$')
imped_axis.set_ylabel('$|Z|$')
imped_axis.semilogx()
imped_axis.semilogy()
imped_axis.legend(loc=2)
# imped_axis.set_xlim([0, 1E7]);
# imped_axis.set_ylim([1E-1, 1E5]);
angle_axis = imped_axis.twinx();
handle_angle, = angle_axis.plot(freq, angle, plot_colors[1], label="Angle", linestyle='--');
#Configure plot design
angle_axis.set_ylabel(r"$\theta$ $(^{\circ}) $")
# angle_axis.semilogx()
angle_axis.grid('off')
angle_axis.set_ylim([0, 90]);
angle_axis.legend(loc=1, handlelength=3.6)
def randles_plot(freq, Z1, Z2, plot_axes=None):
if plot_axes == None:
plot_axes = plt.subplot(111)
if not EIS_df.empty:
plot_axes.plot(1/(np.pi*np.sqrt(freq)),Z1, label='$\Re(Z)$')
plot_axes.plot(1/(np.pi*np.sqrt(freq)),-Z2, label='$\Im(Z)$')
plot_axes.legend(loc=2)
plot_axes.set_xlabel('$(\sqrt{\omega})^{-1}$')
plot_axes.set_ylabel('$Impedance$')
def refresh_plots():
nyquist_plot(EIS_df.Z1, EIS_df.Z2, plot_axes_nyquist)
fig_nyquist.tight_layout()
bode_plot(EIS_df.Frequency, EIS_df.Z, EIS_df.Angle, plot_axes_bode)
fig_bode.tight_layout()
randles_plot(EIS_df.Frequency, EIS_df.Z1, EIS_df.Z2, plot_axes_randles)
fig_randles.tight_layout()
EIS_df = pd.DataFrame(columns=['Frequency', 'Z1', 'Z2', 'Z', 'Angle'] )
app = EISapp()
app.mainloop()
</code></pre>
| -1 | 2016-08-30T08:39:46Z | 39,225,762 | <p>Call the <code>draw()</code> method of the canvas.</p>
| 0 | 2016-08-30T11:03:06Z | [
"python",
"matplotlib",
"tkinter",
"tkinter-canvas"
] |
python - building a unique DB and counting the number of appearances | 39,222,644 | <p>I'm reading a large file, where each row has 20 numbers.
I want to end up with a 2D-array, where each row will be a unique row from the file, and in addition for each row I will have the number of times it appeared in the file.</p>
<p>So I did this by building rowDB - a list of lists(where each sub-list is a 20 numbers row from the file), and another list that indicates how many times it appeared:</p>
<pre><code>[uniq, idx] = is_unique(rowDB, new_row)
if (uniq):
rowDB.append(new_row)
num_of_occurances.append(1)
else:
num_of_occurances[idx] += 1
</code></pre>
<p>I created this helper function:
it checks if new_row is unique, i.e. does not exist in rowDB.
It returns uniq = True/False, and if False it also returns the index of the row in rowDB. </p>
<pre><code>def is_unique(rowDB, new_row):
for i in range(len(rowDB)):
row_i = rowDB[i]
equal = 1
for j in range (len(row_i)):
if (row_i[j] != new_row[j]):
equal = 0
break
if (equal):
return [False, i]
return [True, 0]
</code></pre>
<p>However when the DB is large it takes lot of time. so My question is what is the most efficient way to perform this? maybe using numpy array instead of lists?
and if so, maybe there is a built in numpy function to check if a row is unique and if not to get the row index? how would you build this DB? Thanks!!!</p>
 | 1 | 2016-08-30T08:39:56Z | 39,223,079 | <p>You could probably use <a href="https://docs.python.org/2/library/collections.html#counter-objects" rel="nofollow">Counter</a> for that; there are some good examples in the <a href="https://docs.python.org/2/library/collections.html#counter-objects" rel="nofollow">official documentation</a>.</p>
| 0 | 2016-08-30T09:01:47Z | [
"python",
"arrays",
"performance",
"numpy",
"unique"
] |
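A rough sketch of the <code>Counter</code> approach suggested above — the variable names mirror the question, but the sample data is made up, and each row must be converted to a tuple because lists are not hashable:

```python
from collections import Counter

# each file row as a tuple of its numbers (shortened to 3 instead of 20 here)
rows = [
    (1, 2, 3),
    (4, 5, 6),
    (1, 2, 3),
]

counts = Counter(rows)                          # one pass: row -> occurrence count
rowDB = list(counts.keys())                     # the unique rows
num_of_occurances = [counts[r] for r in rowDB]  # counts aligned with rowDB
```

This replaces the quadratic scan in `is_unique` with hash lookups, so it stays fast as the DB grows.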
python - building a unique DB and counting the number of appearances | 39,222,644 | <p>I'm reading a large file, where each row has 20 numbers.
I want to end up with a 2D-array, where each row will be a unique row from the file, and in addition for each row I will have the number of times it appeared in the file.</p>
<p>So I did this by building rowDB - a list of lists(where each sub-list is a 20 numbers row from the file), and another list that indicates how many times it appeared:</p>
<pre><code>[uniq, idx] = is_unique(rowDB, new_row)
if (uniq):
rowDB.append(new_row)
num_of_occurances.append(1)
else:
num_of_occurances[idx] += 1
</code></pre>
<p>I created this helper function:
it checks if new_row is unique, i.e. does not exist in rowDB.
It returns uniq = True/False, and if False it also returns the index of the row in rowDB. </p>
<pre><code>def is_unique(rowDB, new_row):
for i in range(len(rowDB)):
row_i = rowDB[i]
equal = 1
for j in range (len(row_i)):
if (row_i[j] != new_row[j]):
equal = 0
break
if (equal):
return [False, i]
return [True, 0]
</code></pre>
<p>However when the DB is large it takes lot of time. so My question is what is the most efficient way to perform this? maybe using numpy array instead of lists?
and if so, maybe there is a built in numpy function to check if a row is unique and if not to get the row index? how would you build this DB? Thanks!!!</p>
| 1 | 2016-08-30T08:39:56Z | 39,223,712 | <p>You may use <code>tuple</code> to save each line data, and build <code>rowDB</code> using <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="nofollow">OrderedDict</a>, which maps line tuple to line number, then <code>is_uniq</code> is a simple and quick check as:</p>
<pre><code>return new_row not in rowDB
</code></pre>
<p><code>is_uniq</code> would be:</p>
<pre><code>def is_uniq(rowDB, new_row):
if new_row in rowDB:
return False, rowDB[new_row]
else:
return True, 0
</code></pre>
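Putting the answer's pieces together, a rough end-to-end sketch of the dict-based bookkeeping (variable names follow the question; the sample rows are made up):

```python
from collections import OrderedDict

rowDB = OrderedDict()        # maps row tuple -> index of its first occurrence
num_of_occurances = []

def add_row(new_row):
    key = tuple(new_row)     # tuples are hashable, plain lists are not
    if key in rowDB:         # O(1) lookup instead of scanning every stored row
        num_of_occurances[rowDB[key]] += 1
    else:
        rowDB[key] = len(num_of_occurances)
        num_of_occurances.append(1)

for row in [[1, 2], [3, 4], [1, 2]]:
    add_row(row)
print(num_of_occurances)     # [2, 1]
```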
| 0 | 2016-08-30T09:28:32Z | [
"python",
"arrays",
"performance",
"numpy",
"unique"
] |
Pandas, return df for which values of a certain column is null | 39,222,718 | <p>I'm working from SQL code, moving it to Python; I have a <code>WHERE</code> condition saying where points Is <code>NULL</code>.</p>
<p>I need to print my dataframe, data, such that the one column in question, points, contains all Null's</p>
<p>eg.</p>
<pre><code>name: age: points:
Sean 12 3
Jack 14 0
Peter 11 2
David 16 0
Paul 15 0
</code></pre>
<p>and i want it to return this:</p>
<pre><code>name: age:
Jack 14
David 16
Paul 15
</code></pre>
<p>any help appreciated</p>
 | 1 | 2016-08-30T08:43:57Z | 39,222,742 | <p>Use <code>boolean indexing</code> if you need to filter where the value is <code>0</code>:</p>
<pre><code>df = pd.DataFrame({'points': {0: 3, 1: 0, 2: 2, 3: 0, 4: 0},
'name': {0: 'Sean', 1: 'Jack', 2: 'Peter', 3: 'David', 4: 'Paul'},
'age': {0: 12, 1: 14, 2: 11, 3: 16, 4: 15}})
print (df)
age name points
0 12 Sean 3
1 14 Jack 0
2 11 Peter 2
3 16 David 0
4 15 Paul 0
df1 = df.loc[df.points == 0, ['name','age']]
print (df1)
name age
1 Jack 14
3 David 16
4 Paul 15
</code></pre>
<p>And if values are <code>NaN</code>:</p>
<pre><code>df = pd.DataFrame({'points': {0: 3, 1: np.nan, 2: 2, 3: np.nan, 4: np.nan},
'name': {0: 'Sean', 1: 'Jack', 2: 'Peter', 3: 'David', 4: 'Paul'},
'age': {0: 12, 1: 14, 2: 11, 3: 16, 4: 15}})
print (df)
age name points
0 12 Sean 3.0
1 14 Jack NaN
2 11 Peter 2.0
3 16 David NaN
4 15 Paul NaN
df1 = df.loc[df.points.isnull(), ['name','age']]
print (df1)
name age
1 Jack 14
3 David 16
4 Paul 15
</code></pre>
| 3 | 2016-08-30T08:45:13Z | [
"python",
"sql",
"pandas",
"dataframe",
null
] |
How to secure Pybossa completely? | 39,222,729 | <p>I want to secure the <code>/project/project1/1/results.json</code> endpoint of pybossa.
This endpoint exposes our results to the public without authentication.</p>
| 0 | 2016-08-30T08:44:20Z | 39,243,742 | <p>I'm the founder of Scifabric, the company that develops PYBOSSA. You can do that using Nginx, as PYBOSSA does not support it out of the box. PYBOSSA has been designed to be an open science/data hub, so all the data is available.</p>
<p>We have a few plugins that have been developed for our customers, that specifically secure some end points. If you want to know more, go to <a href="http://scifabric.com" rel="nofollow">http://scifabric.com</a> and send us an email.</p>
<p>Cheers,</p>
<p>Daniel</p>
| 1 | 2016-08-31T07:51:27Z | [
"python",
"crowdsourcing",
"pybossa"
] |
how to search for elements containing unicode/arabic letters? | 39,222,991 | <p>I am running the below code to find an element containing Unicode Arabic characters. The code works just fine if I replace XXX with English letters; however, if I replace them with Arabic letters it won't. </p>
<p>I checked the html page and it has "< meta charset="utf-8" >" so I set the character set in my Py script at the first line just to make sure the letters are interpreted as expected but still not working.</p>
<p>Any clue is much appreciated.</p>
<p>Thanks</p>
<pre><code># coding=UTF8
from selenium import webdriver
# create a new Firefox session
driver = webdriver.Firefox()
driver.implicitly_wait(10)
driver.get("http://www.norikoptic.com/Product/Women")
print driver.find_element_by_xpath(u"//*[contains(text(), 'XXX')]").text
</code></pre>
 | 0 | 2016-08-30T08:58:13Z | 39,223,201 | <p>Try passing the text to be checked in <code>contains</code> (the replacement for 'XXX') from an external source such as a properties file or an Excel sheet. It should work. </p>
<p>Why is there a 'u' before your xpath in the example you have given?</p>
| 0 | 2016-08-30T09:07:43Z | [
"python",
"selenium",
"xpath",
"unicode"
] |
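A small sketch of the same idea without an external file: keep the Arabic text as an escaped unicode literal, so the source-file encoding can never mangle it (the word and the commented driver call are only illustrative):

```python
# -*- coding: utf-8 -*-
search_text = u"\u0639\u062f\u0633\u06cc"  # an example Arabic word, escaped
xpath = u"//*[contains(text(), '{0}')]".format(search_text)
print(xpath)
# with the live driver from the question:
# print(driver.find_element_by_xpath(xpath).text)
```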
how to search for elements containing unicode/arabic letters? | 39,222,991 | <p>I am running the below code to find an element containing Unicode Arabic characters. The code works just fine if I replace XXX with English letters; however, if I replace them with Arabic letters it won't. </p>
<p>I checked the html page and it has "< meta charset="utf-8" >" so I set the character set in my Py script at the first line just to make sure the letters are interpreted as expected but still not working.</p>
<p>Any clue is much appreciated.</p>
<p>Thanks</p>
<pre><code># coding=UTF8
from selenium import webdriver
# create a new Firefox session
driver = webdriver.Firefox()
driver.implicitly_wait(10)
driver.get("http://www.norikoptic.com/Product/Women")
print driver.find_element_by_xpath(u"//*[contains(text(), 'XXX')]").text
</code></pre>
 | 0 | 2016-08-30T08:58:13Z | 39,223,652 | <p>I think you are not using the correct unicode in the xpath;
check the demo in <code>IPython</code> below</p>
<p>First I selected one node to get the corresponding unicode for that Arabic word; then, using that unicode, I modified the xpath as follows, and this was the output.</p>
<pre><code>In [1]: response.xpath('//li[@class="lensItem"]/a/text()').extract()
Out[1]: [u'\u0639\u062f\u0633\u06cc']
In [2]: response.xpath(u'//a[contains(text(), "\u0639\u062f\u0633\u06cc")]/text()').extract()
Out[2]:
[u'\u0639\u062f\u0633\u06cc',
u'\u0639\u062f\u0633\u06cc',
u'\u0645\u0634\u062e\u0635\u0627\u062a \u0639\u062f\u0633\u06cc \u0622\u0641\u062a\u0627\u0628\u06cc']
In [3]: a = response.xpath(u'//a[contains(text(), "\u0639\u062f\u0633\u06cc")]/text()').extract()
In [4]: for i in a:
...: print i
...:
عدسی
عدسی
مشخصات عدسی آفتابی
</code></pre>
<p><strong>Edit</strong></p>
<p>I have tested the xpath using <code>Scrapy</code>, but this will also work with <code>selenium</code>:</p>
<pre><code>In [6]: driver.find_element_by_xpath(u'//a[contains(text(), "\u0639\u062f\u0633\u06cc")]').text
Out[6]: u'\u0639\u062f\u0633\u06cc'
</code></pre>
<p>I hope this will help you to solve your issues.</p>
| 0 | 2016-08-30T09:26:20Z | [
"python",
"selenium",
"xpath",
"unicode"
] |
Python BS4 remove all div ID's Classes, Styles etc | 39,223,076 | <p>I'm trying to parse HTML content from a site with BS4. I got my HTML fragment, but I need to remove all tag classes, IDs, styles etc. </p>
<p>For example:</p>
<pre><code><div class="applinks">
<div class="appbuttons">
<a href="https://geo.itunes.apple.com/ru/app/cloud-hub-file-manager-document/id972238010?mt=8&amp;at=11l3Ss" rel="nofollow" target="_blank" title="Cloud Hub - File Manager, Document Reader, Clouds Browser and Download Manager">ÐагÑÑзиÑÑ</a>
<span onmouseout="jQuery('.wpappbox-8429dd98d1602dec9a9fc989204dbf7c .qrcode').hide();" onmouseover="jQuery('.wpappbox-8429dd98d1602dec9a9fc989204dbf7c .qrcode').show();">QR-Code</span>
</div>
</div>
</code></pre>
<p>I need to get: </p>
<pre><code><div>
<div>
<a href="https://geo.itunes.apple.com/ru/app/cloud-hub-file-manager-document/id972238010?mt=8&amp;at=11l3Ss" rel="nofollow" target="_blank" title="Cloud Hub - File Manager, Document Reader, Clouds Browser and Download Manager">ÐагÑÑзиÑÑ</a>
<span>QR-Code</span>
</div>
</div>
</code></pre>
<p>My code: </p>
<pre><code># coding: utf-8
import requests
from bs4 import BeautifulSoup
url = "https://lifehacker.ru/2016/08/29/app-store-29-august-2016/"
r = requests.get(url)
soup = BeautifulSoup(r.content)
post_content = soup.find("div", {"class","post-content"})
print post_content
</code></pre>
<p>How can I remove all tag attributes? </p>
| 0 | 2016-08-30T09:01:41Z | 39,223,223 | <pre><code>import requests
from bs4 import BeautifulSoup
url = "https://lifehacker.ru/2016/08/29/app-store-29-august-2016/"
r = requests.get(url)
soup = BeautifulSoup(r.content)
for tag in soup():
for attribute in ["class"]: # You can also add id,style,etc in the list
del tag[attribute]
</code></pre>
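A self-contained illustration of the same loop on a static snippet (no network needed), clearing every attribute rather than a fixed list; the HTML string here is made up:

```python
from bs4 import BeautifulSoup

html = '<div class="applinks" id="x"><span onmouseover="f()">QR-Code</span></div>'
soup = BeautifulSoup(html, "html.parser")

for tag in soup():      # soup() is shorthand for soup.find_all(True)
    tag.attrs = {}      # drop class, id, inline event handlers, everything

print(soup)             # <div><span>QR-Code</span></div>
```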
| 1 | 2016-08-30T09:08:43Z | [
"python",
"html",
"web-scraping",
"tags",
"beautifulsoup"
] |
Python BS4 remove all div ID's Classes, Styles etc | 39,223,076 | <p>I'm trying to parse HTML content from a site with BS4. I got my HTML fragment, but I need to remove all tag classes, IDs, styles etc. </p>
<p>For example:</p>
<pre><code><div class="applinks">
<div class="appbuttons">
<a href="https://geo.itunes.apple.com/ru/app/cloud-hub-file-manager-document/id972238010?mt=8&amp;at=11l3Ss" rel="nofollow" target="_blank" title="Cloud Hub - File Manager, Document Reader, Clouds Browser and Download Manager">ÐагÑÑзиÑÑ</a>
<span onmouseout="jQuery('.wpappbox-8429dd98d1602dec9a9fc989204dbf7c .qrcode').hide();" onmouseover="jQuery('.wpappbox-8429dd98d1602dec9a9fc989204dbf7c .qrcode').show();">QR-Code</span>
</div>
</div>
</code></pre>
<p>I need to get: </p>
<pre><code><div>
<div>
<a href="https://geo.itunes.apple.com/ru/app/cloud-hub-file-manager-document/id972238010?mt=8&amp;at=11l3Ss" rel="nofollow" target="_blank" title="Cloud Hub - File Manager, Document Reader, Clouds Browser and Download Manager">ÐагÑÑзиÑÑ</a>
<span>QR-Code</span>
</div>
</div>
</code></pre>
<p>My code: </p>
<pre><code># coding: utf-8
import requests
from bs4 import BeautifulSoup
url = "https://lifehacker.ru/2016/08/29/app-store-29-august-2016/"
r = requests.get(url)
soup = BeautifulSoup(r.content)
post_content = soup.find("div", {"class","post-content"})
print post_content
</code></pre>
<p>How can I remove all tag attributes? </p>
 | 0 | 2016-08-30T09:01:41Z | 39,224,931 | <p>To remove all attributes from the tags in the scraped data: </p>
<pre><code>import requests
from bs4 import BeautifulSoup
def CleanSoup(content):
for tags in content.findAll(True):
tags.attrs = {}
return content
url = "https://lifehacker.ru/2016/08/29/app-store-29-august-2016/"
r = requests.get(url)
soup = BeautifulSoup(r.content,"html.parser")
post_content = soup.find("div", {"class","post-content"})
post_content = CleanSoup(post_content)
</code></pre>
| 0 | 2016-08-30T10:24:52Z | [
"python",
"html",
"web-scraping",
"tags",
"beautifulsoup"
] |
SSO with Django 1.9 + djangosaml2 + ADFS 2.0 | 39,223,172 | <p>I installed and configured ADFS 2.0 as IdP and a Django project as SP using djangosaml2. The Django project is deployed on IIS 7.5.</p>
<p>django saml2 config:</p>
<pre><code>SAML_CONFIG = {
# full path to the xmlsec1 binary programm
'xmlsec_binary': 'C:\\Program Files\\xmlsec1\\xmlsec1-1.2.20-win32-x86\\bin\\xmlsec1.exe',
# your entity id, usually your subdomain plus the url to the metadata view
'entityid': 'https://sp.corp.com/saml2/metadata/',
# this block states what services we provide
'service': {
# we are just a lonely SP
'sp' : {
'authn_requests_signed': "true",
'name': 'SP',
'name_id_format': NAMEID_FORMAT_EMAILADDRESS,
'endpoints': {
# url and binding to the assetion consumer service view
# do not change the binding or service name
'assertion_consumer_service': [
('https://sp.corp.com/saml2/acs/',
saml2.BINDING_HTTP_POST),
],
# url and binding to the single logout service view
# do not change the binding or service name
'single_logout_service': [
('https://sp.corp.com/saml2/ls/',
saml2.BINDING_HTTP_REDIRECT),
('https://sp.corp.com/saml2/ls/post',
saml2.BINDING_HTTP_POST),
],
},
# attributes that this project need to identify a user
'required_attributes': ['email'],
# attributes that may be useful to have but not required
'optional_attributes': ['surname'],
},
},
# where the remote metadata is stored
'metadata': {
'local': [os.path.join(BASE_DIR, 'FederationMetadata.xml')],
},
# set to 1 to output debugging information
'debug': 1,
# certificate
'key_file': os.path.join(BASE_DIR, 'iispk.pem'), # private part
'cert_file': os.path.join(BASE_DIR, 'iiscert.pem'), # public part
}
</code></pre>
<p>On the ADFS side I add a Relying Party Trust via the url <a href="https://sp.corp.com/saml2/metadata/" rel="nofollow">https://sp.corp.com/saml2/metadata/</a>. Then I add the claim rule <strong>Send LDAP Attributes as Claims</strong> and map E-Mail-Addresses - E-Mail Address, Surname - surname.
After that I go to <a href="https://sp.corp.com/saml2/login/" rel="nofollow">https://sp.corp.com/saml2/login/</a>, enter username and password, and get an ADFS error, which shows up in the Event Log:</p>
<pre><code>Event 364:
Encountered error during federation passive request.
Additional Data
Exception details:
Microsoft.IdentityServer.Protocols.Saml.InvalidNameIdPolicyException: MSIS7012: An error occurred while processing the request. Contact your administrator for details.
   at Microsoft.IdentityServer.Web.FederationPassiveAuthentication.RequestBearerToken(HttpSamlRequestMessage httpSamlRequest, SecurityTokenElement onBehalfOf, String& samlpSessionState, String& samlpAuthenticationProvider)
   at Microsoft.IdentityServer.Web.FederationPassiveAuthentication.BuildSignInResponseCoreWithSerializedToken(String signOnToken, WSFederationMessage incomingMessage)
   at Microsoft.IdentityServer.Web.FederationPassiveAuthentication.SignIn(SecurityToken securityToken)
Event 321
The SAML authentication request had a NameID Policy that could not be satisfied.
Requestor: https://iisserver.corp.com/saml2/metadata/
Name identifier format: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
SPNameQualifier:
Exception details:
MSIS1000: The SAML request contained a NameIDPolicy that was not satisfied by the issued token. Requested NameIDPolicy: AllowCreate: False Format: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress SPNameQualifier: . Actual NameID properties: Format: , NameQualifier: SPNameQualifier: , SPProvidedId: .
This request failed.
User Action
Use the AD FS 2.0 Management snap-in to configure the configuration that emits the required name identifier.
</code></pre>
<p>This has been tormenting me for a few days. How do I fix it? Details are appreciated. Many thanks.</p>
 | 0 | 2016-08-30T09:06:13Z | 39,294,970 | <p>You need to send the NameID claim in the SAML assertion. Since you have not created this claim in the set of issuance rules, ADFS errors out saying that the values of the claims that it would be minting in the security token do not match the requested policy that you have configured (and sent in the SAML request). </p>
<p>See <a href="https://blogs.msdn.microsoft.com/card/2010/02/17/name-identifiers-in-saml-assertions/" rel="nofollow">https://blogs.msdn.microsoft.com/card/2010/02/17/name-identifiers-in-saml-assertions/</a> for how to generate the NameID claim and the format it should be emitted in. </p>
<p>Thanks //Sam
<strong><em>[Twitter: @MrADFS]</em></strong></p>
| 0 | 2016-09-02T14:35:53Z | [
"python",
"django",
"single-sign-on",
"saml-2.0",
"adfs2.0"
] |
How to refresh text in Matplotlib? | 39,223,286 | <p>I wrote this code to read data from an Excel file and plot it. For a certain x-value, I want to know the y-values of all the lines, so I created a slider to change this x-value, but I'm not able to refresh the text which prints the y-value.</p>
<p>The code is this</p>
<pre><code>import numpy as np
from openpyxl import load_workbook as ld
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
wb = ld(filename='example.xlsx')
data = wb['data']
time = wb['time']
row = data.max_row
column = data.max_column
x = np.ones((row, column))
y = np.ones((row, column))
result = np.ones(row)
for i in range(0, row):
for j in range(0, column):
x[i][j] = time.cell(row=i+1, column=j+1).value
y[i][j] = data.cell(row=i+1, column=j+1).value
fig, ax = plt.subplots()
plt.subplots_adjust(left=0.25, bottom=0.25)
plt.plot(x[0], y[0], label='line1')
plt.plot(x[1], y[1], label='line2')
plt.plot(x[2], y[2], label='line3')
line, = plt.plot((np.amin(x), np.amin(x)), (np.amin(y), np.amax(y)))
plt.legend()
plt.grid(True)
axtime = plt.axes([0.25, 0.1, 0.65, 0.03])
stime = Slider(axtime, 'time', np.amin(x), np.amax(x), valinit=np.amin(x))
def y_text(r):
ax.text(10, 8, str(r), style='italic')
def find(t):
global x, y, result
for i in range(0, row):
for j in range(0, column):
if x[i][j] == t or (t < x[i][j] and j == 0) or (t > x[i][j] and j == column):
result[i] = y[i][j]
elif x[i][j] < t < x[i][j+1]:
result[i] = ((t-x[i][j])/(x[i][j+1]-x[i][j]))*(y[i][j+1]-y[i][j])+y[i][j]
return result
def update(val):
line.set_xdata(stime.val)
find(stime.val)
y_text(stime.val)
fig.canvas.draw()
stime.on_changed(update)
plt.show()
</code></pre>
<p>and the result is this</p>
<p><a href="http://i.stack.imgur.com/ZVRRP.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZVRRP.png" alt="enter image description here"></a></p>
<p>As you can see the text is overwritten.</p>
 | 2 | 2016-08-30T09:11:31Z | 39,228,262 | <p>With matplotlib widgets, the update method works best if you create an artist object and adjust its value (like you do with <code>line</code> in your code). For a text object, you can use the <code>set_text</code> and <code>set_position</code> methods to change what it displays and where. As an example,</p>
<pre><code>import numpy as np
import pylab as plt
from matplotlib.widgets import Slider
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)
sax = plt.axes([0.25, 0.1, 0.65, 0.03])
x = np.linspace(0,2.*np.pi,100)
f = np.sin(x)
l, = ax.plot(x, f)
slider1 = Slider(sax, 'amplitude', -0.8, 0.8, valinit=0.8)

# create the Text objects after the slider so slider1.val is available
tpos = int(0.25*x.shape[0])
t1 = ax.text(x[tpos], f[tpos], str(slider1.val))

tpos = int(0.75*x.shape[0])
t2 = ax.text(x[tpos], f[tpos], str(slider1.val))
def update(val):
f = slider1.val*np.sin(x)
l.set_ydata(f)
# update the value of the Text object
tpos = int(0.25*x.shape[0])
t1.set_position((x[tpos], f[tpos]))
t1.set_text(str(slider1.val))
tpos = int(0.75*x.shape[0])
t2.set_position((x[tpos], f[tpos]))
t2.set_text(str(slider1.val))
plt.draw()
slider1.on_changed(update)
plt.show()
</code></pre>
<p>which looks like,</p>
<p><a href="http://i.stack.imgur.com/FgJ5O.png" rel="nofollow"><img src="http://i.stack.imgur.com/FgJ5O.png" alt="enter image description here"></a></p>
<p>The alternative may be to clear and redraw the text each time but this is slower and will be more hassle.</p>
| 0 | 2016-08-30T13:03:02Z | [
"python",
"python-3.x",
"text",
"matplotlib",
"plot"
] |
Executing vty commands in Juniper routers using PyEZ | 39,223,408 | <p>I have a requirement where, a <code>python</code> script running in a Juniper router shell needs to execute some commands in <code>vty</code> console of the FPC. I cannot use <code>vty -c</code> because it may not work properly in all platforms. However, I can use <code>vty fpc0</code> and then execute the command and exit from there.</p>
<p>Is there a way to execute <code>vty</code> command using <code>PyEZ</code>? If yes, please provide the syntax. </p>
| 2 | 2016-08-30T09:16:08Z | 39,223,474 | <p>Using PyEZ StartShell utility we can do something like</p>
<pre><code>from jnpr.junos.utils.start_shell import StartShell
from jnpr.junos import Device
dev = Device(host='xxxx', user='xxxx', password='xxxx')
dev.open()
with StartShell(dev) as ss:
op = ss.run('vty fpc0', 'vty\)#')
print op[1]
op = ss.run('show version', 'vty\)#')
print op[1]
dev.close()
</code></pre>
<p>or even</p>
<pre><code>dev = Device(host='xxxx', user='xxxx', password='xxxx')
dev.open()
with StartShell(dev) as ss:
    op = ss.run('cprod -A fpc0 -c "show version"')
print op[1]
dev.close()
</code></pre>
| 3 | 2016-08-30T09:18:57Z | [
"python",
"junos-automation",
"pyez"
] |
Executing vty commands in Juniper routers using PyEZ | 39,223,408 | <p>I have a requirement where, a <code>python</code> script running in a Juniper router shell needs to execute some commands in <code>vty</code> console of the FPC. I cannot use <code>vty -c</code> because it may not work properly in all platforms. However, I can use <code>vty fpc0</code> and then execute the command and exit from there.</p>
<p>Is there a way to execute <code>vty</code> command using <code>PyEZ</code>? If yes, please provide the syntax. </p>
| 2 | 2016-08-30T09:16:08Z | 39,296,260 | <p>The <code>StartShell</code> class assumes the ability to make a new SSH connection to the target device using port 22. That might not always be a good assumption.</p>
<p>An alternative to using <code>StartShell</code> is to use the RPC equivalent to the <code>request pfe execute</code> command. Here's an example:</p>
<pre><code>>>> resp = dev.rpc.request_pfe_execute(target='fpc0', command='show version')
>>> print resp.text
SENT: Ukern command: show version
GOT:
GOT:
GOT: Juniper Embedded Microkernel Version 15.1F4.15
GOT: Built by builder on 2015-12-23 18:11:49 UTC
GOT: Copyright (C) 1998-2015, Juniper Networks, Inc.
GOT: All rights reserved.
GOT:
GOT:
GOT: RMPC platform (1200Mhz QorIQ P2020 processor, 3584MB memory, 512KB flash)
GOT: Current time : Sep 2 15:35:42.409859
GOT: Elapsed time : 27+19:14:41
LOCAL: End of file
</code></pre>
| 3 | 2016-09-02T15:44:22Z | [
"python",
"junos-automation",
"pyez"
] |
Pyinstaller throwing massive amount of errors yet created executable works | 39,223,416 | <p>Take a look at this output from the console: <a href="http://pastebin.com/Vy5BqfYL" rel="nofollow">http://pastebin.com/Vy5BqfYL</a></p>
<p>My IDE is Pycharm and I'm using Pyinstaller with the single file executable. The PyInstaller is throwing massive amount of errors, yet the exe created seems to be working.</p>
<p>Using Python 3.5.</p>
<p>Should I be concerned?</p>
| 0 | 2016-08-30T09:16:28Z | 39,223,867 | <p>Yes, you should be concerned because the binary will work for you but probably not in all targeted systems.</p>
<p>The 'errors' you are reporting are warnings and not errors. Pyinstaller tells you that it can't find windows CRT. However if the binary works for you: </p>
<ul>
<li><p><em>probably you have the CRT in some place which can not be found by PyInstaller</em>. Check dlls on your system (probably a file search can help). Check PATH environment var and PYTHONPATH.</p></li>
<li><p><em>probably you have some 32bit vs 64bit issue</em>: the python scripts uses a dll from one type while PyInstaller searches for another dll type which you have not... Check it! I saw in your trace that you are using a Windows 7 OS and PyInstaller is searching dlls in system32. Is your OS 64bit and your python version 32bit? This is some kind of dll smell.</p></li>
</ul>
<p><strong>To have a sane and good target binary you should ensure to have all dependencies. Do not rely on Windows updates on your target platforms but prefer packing all dependencies in a single distribution.</strong> </p>
<p>To ensure a software running in all platforms you should pack a binary for 32bit and one for 64bit. Or at least one for 32bits working also in a 64bit environment.</p>
| 0 | 2016-08-30T09:35:25Z | [
"python",
"pyinstaller",
"console.log"
] |
Using a list as a key to match to another | 39,223,560 | <p>I have 4 lists and I need to match them using the first two as a key:</p>
<pre><code>keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]
letters = []
</code></pre>
<p>I want it to work so that <code>letters</code> ends up holding the letter paired with each number in <code>numbers</code>.
It should also work with duplicated numbers or letters.</p>
<p>Expected Output: E, D, B, C, A</p>
| -2 | 2016-08-30T09:22:42Z | 39,223,640 | <p>Using all the four lists, you could do:</p>
<pre><code>>>> keyNumbers = [3, 2, 1, 4, 5]
>>> keyLetters = ['E', 'D', 'C', 'B', 'A']
>>> numbers = [3, 2, 4, 1, 5]
>>> letters = []
>>> temp = dict(zip(keyNumbers,keyLetters))
>>> ','.join([temp[i] for i in numbers])
'E,D,B,C,A'
</code></pre>
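<p>The same <code>dict</code> lookup can also return <code>letters</code> as a list instead of a joined string (a sketch; note that if <code>keyNumbers</code> itself contained duplicates, the last pairing would win, since later keys overwrite earlier ones in the dict):</p>

```python
keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]

# build the number -> letter mapping once, then look each number up
mapping = dict(zip(keyNumbers, keyLetters))
letters = [mapping[n] for n in numbers]
print(letters)  # ['E', 'D', 'B', 'C', 'A']
```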
| 0 | 2016-08-30T09:25:53Z | [
"python"
] |
Using a list as a key to match to another | 39,223,560 | <p>I have 4 lists and I need to match them using the first two as a key:</p>
<pre><code>keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]
letters = []
</code></pre>
<p>I want it to work so that <code>letters</code> ends up holding the letter paired with each number in <code>numbers</code>.
It should also work with duplicated numbers or letters.</p>
<p>Expected Output: E, D, B, C, A</p>
| -2 | 2016-08-30T09:22:42Z | 39,223,776 | <p>You may achieve it by using <code>map</code> function:</p>
<pre><code>>>> keyNumbers = [1, 2, 3, 4, 5]
>>> keyLetters = ['E', 'D', 'C', 'B', 'A']
>>> numbers = [3, 2, 4, 1, 5]
>>> map(lambda x: keyLetters[keyNumbers.index(x)], numbers)
['C', 'D', 'B', 'E', 'A']
</code></pre>
<p>OR, by <code>list comprehension</code> as:</p>
<pre><code>>>> [ keyLetters[keyNumbers.index(i)] for i in numbers]
['C', 'D', 'B', 'E', 'A']
</code></pre>
| 0 | 2016-08-30T09:31:11Z | [
"python"
] |
Using a list as a key to match to another | 39,223,560 | <p>I have 4 lists and I need to match them using the first two as a key:</p>
<pre><code>keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]
letters = []
</code></pre>
<p>I want it to work so that <code>letters</code> ends up holding the letter paired with each number in <code>numbers</code>.
It should also work with duplicated numbers or letters.</p>
<p>Expected Output: E, D, B, C, A</p>
| -2 | 2016-08-30T09:22:42Z | 39,223,842 | <p>You can match the indexes</p>
<pre><code>>>> keyNumbers = [1, 2, 3, 4, 5]
>>> keyLetters = ['E', 'D', 'C', 'B', 'A']
>>> numbers = [3, 2, 4, 1, 5]
>>> letters = [keyLetters[keyNumbers.index(i)] for i in numbers]
>>> letters
['C', 'D', 'B', 'E', 'A']
</code></pre>
| 0 | 2016-08-30T09:34:03Z | [
"python"
] |
Using a list as a key to match to another | 39,223,560 | <p>I have 4 lists and I need to match them using the first two as a key:</p>
<pre><code>keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]
letters = []
</code></pre>
<p>I want it to work so that <code>letters</code> ends up holding the letter paired with each number in <code>numbers</code>.
It should also work with duplicated numbers or letters.</p>
<p>Expected Output: E, D, B, C, A</p>
| -2 | 2016-08-30T09:22:42Z | 39,223,858 | <p>Try this,</p>
<pre><code>In [1]: zip_list = zip(keyNumbers,keyLetters)
In [2]: [zip_list[i-1][1] for i in numbers]
Out[1]: ['C', 'D', 'B', 'E', 'A']
</code></pre>
| 0 | 2016-08-30T09:34:56Z | [
"python"
] |
Using a list as a key to match to another | 39,223,560 | <p>I have 4 lists and I need to match them using the first two as a key:</p>
<pre><code>keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]
letters = []
</code></pre>
<p>I want it to work so that <code>letters</code> ends up holding the letter paired with each number in <code>numbers</code>.
It should also work with duplicated numbers or letters.</p>
<p>Expected Output: E, D, B, C, A</p>
| -2 | 2016-08-30T09:22:42Z | 39,225,458 | <p>You can use this -></p>
<pre><code>keyNumbers = [3, 2, 1, 4, 5]
keyLetters = ['E', 'D', 'C', 'B', 'A']
numbers = [3, 2, 4, 1, 5]
x=dict()
for i,j in enumerate(keyNumbers):
x[j]=keyLetters[i]
y = [x[i] for i in numbers]
print ','.join(y)
</code></pre>
| 0 | 2016-08-30T10:47:58Z | [
"python"
] |
Pandas : Delete rows based on other rows | 39,223,638 | <p>I have a pandas dataframe which looks like that :</p>
<pre><code>qseqid sseqid qstart qend
2 1 125 345
4 1 150 320
3 2 150 450
6 2 25 300
8 2 50 500
</code></pre>
<p>I would like to remove rows based on other rows values with these criterias : A row (r1) must be removed if another row (r2) exist with the same <code>sseqid</code> and <code>r1[qstart] > r2[qstart]</code> and <code>r1[qend] < r2[qend]</code>.</p>
<p>Is this possible with pandas ? </p>
| 6 | 2016-08-30T09:25:46Z | 39,225,130 | <pre><code>df = pd.DataFrame({'qend': [345, 320, 450, 300, 500],
'qseqid': [2, 4, 3, 6, 8],
'qstart': [125, 150, 150, 25, 50],
'sseqid': [1, 1, 2, 2, 2]})
def remove_rows(df):
merged = pd.merge(df.reset_index(), df, on='sseqid')
mask = ((merged['qstart_x'] > merged['qstart_y'])
& (merged['qend_x'] < merged['qend_y']))
df_mask = ~df.index.isin(merged.loc[mask, 'index'].values)
result = df.loc[df_mask]
return result
result = remove_rows(df)
print(result)
</code></pre>
<p>yields</p>
<pre><code> qend qseqid qstart sseqid
0 345 2 125 1
3 300 6 25 2
4 500 8 50 2
</code></pre>
<hr>
<p>The idea is to use <code>pd.merge</code> to form a DataFrame with every pairing of rows
with the same <code>sseqid</code>:</p>
<pre><code>In [78]: pd.merge(df.reset_index(), df, on='sseqid')
Out[78]:
index qend_x qseqid_x qstart_x sseqid qend_y qseqid_y qstart_y
0 0 345 2 125 1 345 2 125
1 0 345 2 125 1 320 4 150
2 1 320 4 150 1 345 2 125
3 1 320 4 150 1 320 4 150
4 2 450 3 150 2 450 3 150
5 2 450 3 150 2 300 6 25
6 2 450 3 150 2 500 8 50
7 3 300 6 25 2 450 3 150
8 3 300 6 25 2 300 6 25
9 3 300 6 25 2 500 8 50
10 4 500 8 50 2 450 3 150
11 4 500 8 50 2 300 6 25
12 4 500 8 50 2 500 8 50
</code></pre>
<p>Each row of merged contains data from two rows of df. You can then compare every two rows using</p>
<pre><code>mask = ((merged['qstart_x'] > merged['qstart_y'])
& (merged['qend_x'] < merged['qend_y']))
</code></pre>
<p>and find the labels in <code>df.index</code> that do not match this condition:</p>
<pre><code>df_mask = ~df.index.isin(merged.loc[mask, 'index'].values)
</code></pre>
<p>and select those rows:</p>
<pre><code>result = df.loc[df_mask]
</code></pre>
<p>Note that this assumes <code>df</code> has a unique index. </p>
| 6 | 2016-08-30T10:33:11Z | [
"python",
"pandas",
"dataframe"
] |
Should I import flask.ext.sqlalchemy or flask_sqlalchemy? | 39,223,807 | <p>When I learned Flask, I found some examples used <code>flask.ext.sqlalchemy</code> and some used <code>flask_sqlalchemy</code>. Which one should be used?</p>
| 0 | 2016-08-30T09:32:25Z | 39,225,099 | <p><a href="http://flask.pocoo.org/docs/0.11/extensiondev/" rel="nofollow">Flask Extension Development</a> guide says:</p>
<blockquote>
<p>As of Flask 0.11, most Flask extensions have transitioned to the new
naming schema. The flask.ext.foo compatibility alias is still in Flask
0.11 but is now deprecated â you should use flask_foo.</p>
</blockquote>
| 2 | 2016-08-30T10:31:44Z | [
"python",
"flask",
"flask-sqlalchemy"
] |
Memory management in numpy arrays,python | 39,223,838 | <p>I get a memory error when processing a very large (>50 GB) file (problem: RAM gets full).</p>
<p>My solution is: I would like to read only 500 kilobytes of data at a time and process it (then delete it from memory and go on to the next 500 KB). Is there any better solution? Or, if this solution seems best, how do I do it with a numpy array?</p>
<p>It is just 1/4th the code(just for an idea)</p>
<pre><code> import h5py
import numpy as np
import sys
import time
import os
hdf5_file_name = r"test.h5"
dataset_name = 'IMG_Data_2'
file = h5py.File(hdf5_file_name,'r+')
dataset = file[dataset_name]
data = dataset.value
dec_array = data.flatten()
........
</code></pre>
<p>I get a memory error at this point itself, as it tries to put all the data into memory.</p>
| 1 | 2016-08-30T09:33:50Z | 39,224,171 | <h2>Quick answer</h2>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html" rel="nofollow">numpy.memmap</a> allows presenting a large file on disk as a numpy array. Don't know if it allows mapping files larger than RAM+swap though. Worth a shot.</li>
<li><a href="http://hilpisch.com/TPQ_Out_of_Memory_Analytics.html" rel="nofollow">Presentation about out-of-memory work with Python</a></li>
</ul>
<h2>Longer answer</h2>
<p>A key question is how much RAM you have (<10GB, >10GB) and what kind of processing you're doing (need to look at each element in the dataset once or need to look at the whole dataset at once).</p>
<p>If it's <10GB and need to look once, then your approach seems like the most decent one. It's a standard way to deal with datasets which are larger than main memory. What I'd do is increase the size of a chunk from 500kb to something closer to the amount of memory you have - perhaps half of physical RAM, but anyway, something in the GB range, but not large enough to cause swapping to disk and interfere with your algorithm. A nice optimisation would be to hold two chunks in memory at one time. One is being processes, while the other is being loaded in parallel from disk. This works because loading stuff from disk is relatively expensive, but it doesn't require much CPU work - the CPU is basically waiting for data to load. It's harder to do in Python, because of the GIL, but numpy and friends should not be affected by that, since they release the GIL during math operations. The <code>threading</code> package might be useful here.</p>
<p>If you have low RAM AND need to look at the whole dataset at once (perhaps when computing some quadratic-time ML algorithm, or even doing random accesses in the dataset), things get more complicated, and you probably won't be able to use the previous approach. Either upgrade your algorithm to a linear one, or you'll need to implement some logic to make the algorithms in numpy etc work with data on disk directly rather than have it in RAM.</p>
<p>If you have >10GB of RAM, you might let the operating system do the hard work for you and increase swap size enough to capture all the dataset. This way everything is loaded into virtual memory, but only a subset is loaded into physical memory, and the operating system handles the transitions between them, so everything looks like one giant block of RAM. How to increase it is OS specific though.</p>
| 1 | 2016-08-30T09:48:44Z | [
"python",
"arrays",
"numpy",
"memory"
] |
Delete string in between special characters in python | 39,223,859 | <p>I have string something like this</p>
<pre><code>mystring = "CBS Network Radio Panel;\ntitle2 New York OCT13W4, Panel Weighting;\n*options; mprint ls=max mprint;\n\n****************************************out; asd; ***hg;"
</code></pre>
<p>I want to delete the string between * and ;
output should be</p>
<pre><code>"CBS Network Radio Panel;\ntitle2 New York OCT13W4, Panel Weighting;\ mprint ls=max mprint;\n\n asd;"
</code></pre>
<p>I have tried this code</p>
<pre><code>re.sub(r'[\*]*[a-z]*;', '', mystring)
</code></pre>
<p>But it's not working.</p>
| 1 | 2016-08-30T09:34:58Z | 39,223,919 | <p>You may use</p>
<pre><code>re.sub(r'\*[^;]*;', '', mystring)
</code></pre>
<p>See the <a href="https://ideone.com/Sq2jWS" rel="nofollow">Python demo</a>:</p>
<pre><code>import re
mystring = "CBS Network Radio Panel;\ntitle2 New York OCT13W4, Panel Weighting;\n*options; mprint ls=max mprint;\n\n****************************************out; asd; ***hg;"
r = re.sub(r'\*[^;]*;', '', mystring)
print(r)
</code></pre>
<p>Output:</p>
<pre><code>CBS Network Radio Panel;
title2 New York OCT13W4, Panel Weighting;
mprint ls=max mprint;
asd;
</code></pre>
<p>The <code>r'\*[^;]*;'</code> pattern matches a literal <code>*</code>, followed with zero or more characters other than <code>;</code> and then a <code>;</code>.</p>
| 3 | 2016-08-30T09:37:32Z | [
"python",
"regex"
] |
How to read a file whose name includes '/' in python? | 39,223,881 | <p>Now I have a file named <code>Land/SeaMask</code> and I want to open it, but it cannot be recognized as a filename by programme, but as a directory, how to do it?</p>
| -1 | 2016-08-30T09:35:57Z | 39,225,526 | <p>First of all I recommend you to find out how Python interpreter displays yours file name. You can do this simply using <a href="https://docs.python.org/2/library/os.html#os.listdir" rel="nofollow"><code>os</code> built-in module</a>:
</p>
<pre><code>import os
os.listdir('path/to/directory')
</code></pre>
<p>You'll get a list of directories and files in directory you passed as argument in <code>listdir</code> method. In this list you can find something like <code>Land:SeaMask</code>. After recognizing this, <code>open('path/to/Land:SeaMask')</code> will work for you.</p>
| 3 | 2016-08-30T10:51:23Z | [
"python",
"file"
] |
Need Consumer and Producer with duplicate filter python | 39,223,923 | <p>I have a script which sends requests to a social media site by doing the following:</p>
<p>It first scrapes the friends of the account inserted.
It then continues to scrape all friends of the accounts found forever (Similar to how search engine crawlers work).
Add them to a consumer queue which then adds them as a friend or send them a message.
All this in 10-30 threads.
I am currently using Queue, and it is not checking whether the accounts it finds are duplicates of previously found accounts. That is my problem. Before changing the source code of the Queue module: is there any similar module with duplicate filtering built in?</p>
| 0 | 2016-08-30T09:37:52Z | 39,224,359 | <p>Python also includes a data type for sets. A set is an unordered collection with no duplicate elements.
Note: to create an empty set you have to use </p>
<pre><code>set()
</code></pre>
<p>There is an ordered set recipe for this which is referred to from the Python 2 Documentation <a href="http://code.activestate.com/recipes/576694/" rel="nofollow">http://code.activestate.com/recipes/576694/</a></p>
<p>This runs on Py2.6 or later and 3.0 or later without any modifications. The interface is almost exactly the same as a normal set, except that initialisation should be done with a list.</p>
<pre><code>OrderedSet([1, 2, 3])
</code></pre>
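<p>If you want to keep the producer/consumer structure rather than switch data types, one option (a sketch, not a drop-in replacement for your exact code; on Python 2 the module is named <code>Queue</code>) is to subclass the queue and remember every item ever enqueued in a set:</p>

```python
import queue      # named "Queue" on Python 2
import threading

class DedupQueue(queue.Queue):
    """A Queue that silently drops items it has already seen."""

    def __init__(self, maxsize=0):
        queue.Queue.__init__(self, maxsize)
        self._seen = set()
        self._seen_lock = threading.Lock()

    def put(self, item, block=True, timeout=None):
        with self._seen_lock:
            if item in self._seen:
                return          # duplicate account: skip it
            self._seen.add(item)
        queue.Queue.put(self, item, block, timeout)

q = DedupQueue()
for account in ["alice", "bob", "alice", "carol", "bob"]:
    q.put(account)
print(q.qsize())  # 3
```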
| 0 | 2016-08-30T09:56:53Z | [
"python"
] |
How SCons hardlinks sources in variant directory? | 39,223,937 | <p>I tested SCons default hardlink behavior. </p>
<p>I am expecting that a modification in the hardlinked file in the variant dir is reflected also in the original file. But this doesn't happen.</p>
<p>Is hardlinking really the default behavior as stated in the SCons docs, or is it just copying the files into the variant directory?</p>
<p><strong>EDIT:</strong> </p>
<p>Details about my implementations/configurations:</p>
<p>I am using VariantDir with one SConscript in the root of the project</p>
<p>Duplicate option is on in VariantDir</p>
<p>No duplicate option in cmd line (default).</p>
<p>I am using my own custom tools. I am using only the program builder from the builders available by default in SCons.</p>
<p>I tried the following flags: hard-soft-copy (the default), soft-hard-copy, hard-copy, soft-copy or copy. </p>
<p>None of them will provide me a real hard link copy.</p>
<p>I expect the following:</p>
<p>When I change a source file in variant dir the change will be reflected in the original file (at the original location) since the file in variant dir is a hard link.</p>
<p>Instead a change in the source file in the variant dir won't be reflected in the original file.</p>
<p><strong>EDIT 2:</strong></p>
<p>After running --debug=duplicate I got for each file considered in the dependency cycle a message similar to this:</p>
<pre><code>dup: relinking variant 'relative_path_to_file' from 'absolute_path'
</code></pre>
| 0 | 2016-08-30T09:38:32Z | 39,229,122 | <p>As stated in the corresponding <a href="https://docs.python.org/2/library/os.html?highlight=os.link#os.link" rel="nofollow">Python reference docs</a>, the underlying <code>os.link</code> method, that SCons tries to find and use internally to create hard links, is not available under Windows. The same holds true for <code>os.symlink</code> (symbolic/soft links), so the only remaining option for SCons under Windows is to actually copy the files.</p>
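<p>A quick way to see the behavior being described, outside of SCons (a sketch; on a POSIX system the <code>os.link</code> branch runs, while on Windows under Python 2 the attribute simply doesn't exist):</p>

```python
import os
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "original.txt")
lnk = os.path.join(d, "variant.txt")
with open(src, "w") as f:
    f.write("first\n")

if hasattr(os, "link"):
    os.link(src, lnk)              # a real hard link
    with open(lnk, "a") as f:      # modify through the "variant" name...
        f.write("second\n")
    with open(src) as f:
        print(f.read())            # ...and the original reflects the change
else:
    print("os.link unavailable; SCons falls back to copying")
```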
| 1 | 2016-08-30T13:39:16Z | [
"python",
"scons"
] |
Python - Reading line by line from file to Tkinter Text | 39,224,328 | <p>I decided I want to do some GUI programming with Python Tkinter, and I wanted to start with a text editor. I've written some code and everything seems to be looking fine, but I have trouble inserting text into the Text widget. It doesn't insert all lines, just a few. Does anyone know the answer? I'm starting with Tkinter and I might move to PyQt. I want to use Python 2.7, so please don't post Python 3.x answers. Thanks</p>
<pre><code>import Tkinter as TK
from tkFileDialog import *
#Callbacks
def clearCallback(event):
print "Clearing..."
textBox.delete("1.0",TK.END)
print "Cleared."
return
def openCallback(event):
print "Trying to open a file..."
fname = askopenfile(mode='r',initialdir="C:\\")
if(fname):
print str(fname)
content = fname.readlines()
print str(content)
for line in content:
textBox.insert(TK.END,line)
fname.close()
return
else:
print "No file was opened."
return
print "---MiniPyE---"
#Base
base = TK.Tk()
base.resizable(height=False,width=False)
base.minsize(650,600)
base.title("MiniPyE")
#Buttons
#Clear Button
clearButton = TK.Button(base,text="New")
clearButton.place(x=16,y=0,height=16,width=48)
clearButton.bind("<Button-1>",clearCallback)
#Open Button
openButton = TK.Button(base,text="Open")
openButton.place(x=16+48,y=0,height=16,width=48)
openButton.bind("<Button-1>",openCallback)
#Text Box
textBox = TK.Text()
textBox.place(x=16,y=16,height = 684,width=650)
#Scrollbar
scrollBar = TK.Scrollbar(base,command=textBox.yview)
scrollBar.place(x=0,y=0,height=584)
#scrollBar.grid(row=0,column=1,sticky='nsew')
base.mainloop()
</code></pre>
 | -1 | 2016-08-30T09:55:36Z | 39,226,463 | <p>I fixed it. The thing was that the height was too big. I changed:<br>
<code>textBox.place(x=16,y=16,height=684,width=650)</code><br>
to<br>
<code>textBox.place(x=16,y=16,height=580)</code><br>
and everything went fine, just as expected.</p>
| 0 | 2016-08-30T11:36:15Z | [
"python",
"python-2.7",
"tkinter",
"textbox"
] |
"Push" attribute from decorator to decorated function in Python | 39,224,335 | <p>Some basic question from beginner. Is there a way to "push" attribute to a decorated function not using function arguments ?</p>
<pre><code>import sys
from functools import wraps
def decorator_(func):
@wraps(func)
def newfunc():
func.some_attr = 'some_attr'
func()
return newfunc
@decorator_
def decorated_function():
# ??? access some_attr ???
print some_attr
def main():
decorated_function()
if __name__ == '__main__':
sys.exit(main())
</code></pre>
<p>Thanks in advance.</p>
| 0 | 2016-08-30T09:55:58Z | 39,224,462 | <p>Depending on whether you use Python 2 or 3 you can inject variables into the <code>globals</code> of a function like this:</p>
<h3>Python 2</h3>
<pre><code>func.func_globals["some_attr"] = "some_value"
</code></pre>
<h3>Python 3</h3>
<pre><code>func.__globals__["some_attr"] = "some_value"
</code></pre>
| 0 | 2016-08-30T10:02:08Z | [
"python",
"decorator"
] |
"Push" attribute from decorator to decorated function in Python | 39,224,335 | <p>Some basic question from beginner. Is there a way to "push" attribute to a decorated function not using function arguments ?</p>
<pre><code>import sys
from functools import wraps
def decorator_(func):
@wraps(func)
def newfunc():
func.some_attr = 'some_attr'
func()
return newfunc
@decorator_
def decorated_function():
# ??? access some_attr ???
print some_attr
def main():
decorated_function()
if __name__ == '__main__':
sys.exit(main())
</code></pre>
<p>Thanks in advance.</p>
| 0 | 2016-08-30T09:55:58Z | 39,224,479 | <p>If you set the attribute on <code>new_func</code> instead, you can access it simply as <code>decorated_function.some_attr</code>:</p>
<pre><code>def decorator_(func):
@wraps(func)
def newfunc():
newfunc.some_attr = 'some_attr'
func()
return newfunc
@decorator_
def decorated_function():
print(decorated_function.some_attr)
</code></pre>
<p>Otherwise, <code>wraps</code> makes the original function available as <code>decorated_function.__wrapped__</code> in Python 3:</p>
<pre><code>def decorator_(func):
@wraps(func)
def newfunc():
func.some_attr = 'some_attr'
func()
return newfunc
@decorator_
def decorated_function():
print(decorated_function.__wrapped__.some_attr)
</code></pre>
<p>In Python 2, the <code>__wrapped__</code> is not set by <code>wraps</code>, so we need to set it up manually:</p>
<pre><code>def decorator_(func):
@wraps(func)
def newfunc():
func.some_attr = 'some_attr'
func()
    newfunc.__wrapped__ = func
return newfunc
</code></pre>
<p>However, this sounds like an XY problem; if you want to pass a value to the <code>decorated_function</code> you should let <code>decorator_</code> pass it as an argument instead.</p>
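<p>For completeness, the argument-passing approach suggested above could look like this (a sketch):</p>

```python
from functools import wraps

def decorator_(func):
    @wraps(func)
    def newfunc():
        # hand the value to the decorated function as a plain argument
        return func('some_attr')
    return newfunc

@decorator_
def decorated_function(some_attr):
    return some_attr

print(decorated_function())  # some_attr
```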
| 0 | 2016-08-30T10:03:05Z | [
"python",
"decorator"
] |
Extracting information from a large specific header formatted file | 39,224,453 | <p>I am new to python. I have a large header formatted input file where header line starts with '>'. My file is like :</p>
<pre><code>>NC_23689
#
# XYZ
# Copyright (c) BLASC
#
# Predicted binding regions
# No. Start End Length
# 1 1 25 25
# 2 39 47 9
#
>68469409
#
# XYZ
# Copyright (c) BLASC
#
# Predicted binding regions
# None.
#
# Prediction profile output:
# Columns:
# 1 - Amino acid number
# 2 - One letter code
# 3 - probability value
# 4 - output
#
1 M 0.1325 0
2 S 0.1341 0
3 S 0.1384 0
>68464675
#
# XYZ
# Copyright (c) BLASC
#
# Predicted binding regions
# No. Start End Length
# 1 13 24 12
# 2 31 53 23
# 3 81 95 15
# 4 115 164 50
#
...
...
</code></pre>
<p>I want to extract each header and its corresponding Start-End value(s) (after Predicted binding regions line) in a (output.txt file). For the above (input.txt), the output will be: </p>
<pre><code>NC_23689: 1-25, 39-47
68464675: 13-24, 31-53, 81-95, 115-164
</code></pre>
<p>I have tried :</p>
<pre><code>with open('input.txt') as infile, open('output.txt', 'w') as outfile:
copy = False
for line in infile:
if line.strip() == ">+":
copy = True
elif line.strip() == "# No. Start End Length":
copy = True
elif line.strip() == "#":
copy = False
elif copy:
outfile.write(line)
</code></pre>
<p>But it gives me:</p>
<pre><code># 1 1 25 25
# 2 39 47 9
# 1 13 24 12
# 2 31 53 23
# 3 81 95 15
# 4 115 164 50
</code></pre>
<p>Which is obviously not right. I get the range but without header descriptors and with some extra values. How can I get my above mentioned output?
Thanks</p>
<p>Ps. I am using python 2.7 in my Windows7 machine.</p>
| 0 | 2016-08-30T10:01:36Z | 39,225,119 | <p>Try this:</p>
<pre><code>with open("file.txt") as f:
first_time = True
for line in f:
line = line.rstrip()
if line.startswith(">"):
if not first_time:
if start_ends:
print("{}: {}".format(header,", ".join(start_ends)))
else:
first_time = False
header = line.lstrip(">")
start_ends = []
        elif len(line.split()) == 5 and "".join(line.split()[1:]).isdigit():
start_ends.append("{}-{}".format(line.split()[2],line.split()[3]))
if start_ends:
print("{}: {}".format(header,", ".join(start_ends)))
# Outputs:
# NC_23689: 1-25, 39-47
# 68464675: 13-24, 31-53, 81-95, 115-164
</code></pre>
| 0 | 2016-08-30T10:32:49Z | [
"python"
] |
Python requests returns "cannot connect to proxy & error 10061" | 39,224,501 | <p>I have developed a desktop client using PyQt4; it connects to my web service via the <a href="http://www.python-requests.org/en/master/" rel="nofollow">requests</a> lib. As you know, <a href="http://www.python-requests.org/en/master/" rel="nofollow">requests</a> is one of the most useful HTTP clients, so I thought there should be no problem. My desktop client worked all right until something strange happened.</p>
<p>I use the following code to send request to my server.</p>
<pre><code>response = requests.get(url, headers = self.getHeaders(), timeout=600, proxies = {}, verify = False)
</code></pre>
<p>where header only includes auth token.</p>
<pre><code> def getHeaders(self, additional = None):
headers = {
'Auth-Token' : HttpBasicClient.UserAuthToken,
}
if additional is not None:
headers.update(additional)
return headers
</code></pre>
<p>I cannot connect to my web service, all the http request pop the same error "'Cannot connect to proxy.', error(10061, '')". For example:</p>
<blockquote>
<p>GET Url: http:// api.fangcloud.com/api/v1/user/timestamp
HTTPSConnectionPool(host='api.fangcloud.com', port=443): Max retries exceeded with url: /api/v1/user/timestamp (Caused by ProxyError('Cannot connect to proxy.', error(10061, '')))</p>
</blockquote>
<p>this API does nothing but return the timestamp of my server. When I copy the url into Chrome in same machine with same environment, it returns correct response. However, my desktop client can only returns error. Is it anything wrong with requests lib?</p>
<p>I googled this problem of connection error 10061 ("No connection could be made because the target machine actively refused it"). This may be caused by the web server rejecting the TCP connection.</p>
<blockquote>
<p>The client sends a SYN packet to the server targeting the port (80 for HTTP). A server that is running a service on port 80 will respond with a SYN ACK, but if it is not, it will respond with a RST ACK. Your client reaches the server, but not the intended service. This is one way a server could âactively refuseâ a connection attempt.</p>
</blockquote>
<p>But why? My client worked all right before, and Chrome still works. I use no proxy on my machine. Is there anything I'm missing?</p>
 | 0 | 2016-08-30T10:04:13Z | 39,280,225 | <p>I notice there is a white space in the URL; is that correct?
I tested it in IPython with requests, and the response was:</p>
<pre><code>{
"timestamp": 1472760770,
"success": true
}
</code></pre>
<p>For HTTP and HTTPS.</p>
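<p>Since the client worked before and Chrome still works, it is also worth checking whether a proxy setting is being inherited from the environment; <code>requests</code> reads these variables unless <code>trust_env</code> is disabled on a Session (a sketch):</p>

```python
import os

# a stale value in any of these can produce ProxyError even when
# no proxy is configured in the OS settings
proxy_vars = ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy")
found = {v: os.environ[v] for v in proxy_vars if v in os.environ}
print(found)  # an empty dict means no proxy is being inherited
```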
| 0 | 2016-09-01T20:17:19Z | [
"python",
"tcp",
"connection",
"python-requests"
] |
Matplotlib PatchCollection to Legend | 39,224,611 | <p>Currently, I am trying to make my own custom legend handler by creating a proxy artist (?) patch using PatchCollections and then following <a href="http://matplotlib.org/users/legend_guide.html" rel="nofollow">http://matplotlib.org/users/legend_guide.html</a> to make a custom handler.</p>
<p>However I am running into a roadblock in trying to implement this into the legend. The legend arguments take in patches, but not PatchCollections.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import matplotlib.patches as mpatches
from matplotlib.path import Path
from matplotlib.collections import PatchCollection
fig = plt.figure()
ax = fig.add_subplot(111)
verts1 = [(0.,0.),(0.,1.),(1.,1.),(0.51,0.51),(0.,0.),(0.,0.),]
codes1 = [Path.MOVETO,Path.LINETO,Path.LINETO,Path.LINETO,Path.MOVETO,Path.CLOSEPOLY,]
path1 = Path(verts1,codes1)
patch1 = mpatches.PathPatch(path1,ls='dashed',ec='red',facecolor="none")
verts2 = [(0.49,0.49),(0.,0.),(1.,0.),(1.,1.),(0.5,0.5),(0.,0.),]
codes2 = [Path.MOVETO,Path.LINETO,Path.LINETO,Path.LINETO,Path.MOVETO,Path.CLOSEPOLY,]
path2 = Path(verts2,codes2)
patch2 = mpatches.PathPatch(path2,ls='solid',edgecolor='red', facecolor="none")
patch = PatchCollection([patch1,patch2],match_original=True)
ax.set_xlim(-2,2)
ax.set_ylim(-2,2)
ax.add_collection(patch)
</code></pre>
<p><a href="http://i.stack.imgur.com/dZdd0.png" rel="nofollow"><img src="http://i.stack.imgur.com/dZdd0.png" alt="Visual"></a></p>
<p>The above is the code to visualise the handler. Basically a rectangle with the upper triangle as dashed lines and the lower as solid</p>
<p>Using,</p>
<pre><code>plt.legend([patch],["hellocello"],loc='upper right')
</code></pre>
<p>Recreates the error. Is there a workaround?</p>
| 0 | 2016-08-30T10:09:08Z | 39,225,101 | <p>From the example in this <a href="http://matplotlib.org/users/legend_guide.html#implementing-a-custom-legend-handler" rel="nofollow">section</a>, it looks like you need to define an object and express all coordinates in terms of the handlebox size,</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import matplotlib.patches as mpatches
from matplotlib.path import Path
from matplotlib.collections import PatchCollection
class AnyObject(object):
pass
class AnyObjectHandler(object):
def legend_artist(self, legend, orig_handle, fontsize, handlebox):
x0, y0 = handlebox.xdescent, handlebox.ydescent
width, height = handlebox.width, handlebox.height
hw = 0.5*width; hh = 0.5*height
verts1 = [(x0,y0),(x0,y0+height),(x0+width,y0+height),((x0+hw)*1.01,(y0+hh)*1.01),(x0,y0),(x0,y0),]
codes1 = [Path.MOVETO,Path.LINETO,Path.LINETO,Path.LINETO,Path.MOVETO,Path.CLOSEPOLY,]
path1 = Path(verts1,codes1)
patch1 = mpatches.PathPatch(path1,ls='dashed',ec='red',facecolor="none")
verts2 = [((x0+hw)*0.99,(y0+hh)*0.99),(x0,y0),(x0+width,y0),(x0+width,y0+height),(x0+hw,y0+hh),(x0,y0),]
codes2 = [Path.MOVETO,Path.LINETO,Path.LINETO,Path.LINETO,Path.MOVETO,Path.CLOSEPOLY,]
path2 = Path(verts2,codes2)
patch2 = mpatches.PathPatch(path2,ls='solid',edgecolor='red', facecolor="none")
patch = PatchCollection([patch1,patch2],match_original=True)
handlebox.add_artist(patch)
return patch
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlim(-2,2)
ax.set_ylim(-2,2)
plt.legend([AnyObject()], ['hellocello'],
handler_map={AnyObject: AnyObjectHandler()})
plt.show()
</code></pre>
<p>This seems to work okay with <code>PatchCollection</code>, at least for me on <code>matplotlib</code> version 1.4.3. The result looks like,</p>
<p><a href="http://i.stack.imgur.com/duO0S.png" rel="nofollow"><img src="http://i.stack.imgur.com/duO0S.png" alt="enter image description here"></a></p>
| 1 | 2016-08-30T10:31:51Z | [
"python",
"matplotlib"
] |
Ending script with if statement | 39,224,706 | <p>I'm very new to Python and want to know how to end a program with an if statement that prints a message, as this does not appear to be happening with my 'stand' variable when it is under < 5000. The if statement bolded ** ** is the code I'm having trouble with and it still prints the message I want, however, the program does not stop with that message but continue going to the next code (countdown variable). Ignore 'countdown'. </p>
<p>Here is a part of my code.</p>
<pre><code> while stand <= 0 or mission > 10000:
try:
stand = int(input("Enter a VALID distance in metres: "))
except ValueError:
print("Please enter a valid distance E.g 6000: ")
**if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
print("This launch site sucks! It must be demolished and rebuilt!")
print("Launch delayed.")**
if stand >= 5000:
print("Fortunately the mission control and viewing stands are situated far enough.")
while countdown == 0 or countdown == "":
print("We need a person to countdown.")
try:
countdown = int(input("How many seconds would you like the countdown to be?: "))
except ValueError:
</code></pre>
| 0 | 2016-08-30T10:14:14Z | 39,224,794 | <p>Use <code>break</code> to exit the loop.</p>
<pre><code>if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
break
</code></pre>
<p>Use <code>exit</code> to exit the program.</p>
<pre><code>import sys
if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
sys.exit(0)
</code></pre>
| 1 | 2016-08-30T10:18:35Z | [
"python",
"variables",
"if-statement",
"while-loop"
] |
Ending script with if statement | 39,224,706 | <p>I'm very new to Python and want to know how to end a program with an if statement that prints a message, as this does not appear to be happening with my 'stand' variable when it is under < 5000. The if statement bolded ** ** is the code I'm having trouble with and it still prints the message I want, however, the program does not stop with that message but continue going to the next code (countdown variable). Ignore 'countdown'. </p>
<p>Here is a part of my code.</p>
<pre><code> while stand <= 0 or mission > 10000:
try:
stand = int(input("Enter a VALID distance in metres: "))
except ValueError:
print("Please enter a valid distance E.g 6000: ")
**if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
print("This launch site sucks! It must be demolished and rebuilt!")
print("Launch delayed.")**
if stand >= 5000:
print("Fortunately the mission control and viewing stands are situated far enough.")
while countdown == 0 or countdown == "":
print("We need a person to countdown.")
try:
countdown = int(input("How many seconds would you like the countdown to be?: "))
except ValueError:
</code></pre>
| 0 | 2016-08-30T10:14:14Z | 39,224,830 | <p>You can put your code in a function. A function can return different values, and the caller can check what the function returned.
You can use an if condition like this:</p>
<pre><code>def func1():
while stand <= 0 or mission > 10000:
try:
stand = int(input("Enter a VALID distance in metres: "))
except ValueError:
print("Please enter a valid distance E.g 6000: ")
        if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
print("This launch site sucks! It must be demolished and rebuilt!")
print("Launch delayed.")
            return 1
if stand >= 5000:
print("Fortunately the mission control and viewing stands are situated far enough.")
while countdown == 0 or countdown == "":
print("We need a person to countdown.")
try:
countdown = int(input("How many seconds would you like the countdown to be?: "))
except ValueError:
r = func1()
if r == 1:
print 'Error'
</code></pre>
| 0 | 2016-08-30T10:20:27Z | [
"python",
"variables",
"if-statement",
"while-loop"
] |
Ending script with if statement | 39,224,706 | <p>I'm very new to Python and want to know how to end a program with an if statement that prints a message, as this does not appear to be happening with my 'stand' variable when it is under < 5000. The if statement bolded ** ** is the code I'm having trouble with and it still prints the message I want, however, the program does not stop with that message but continue going to the next code (countdown variable). Ignore 'countdown'. </p>
<p>Here is a part of my code.</p>
<pre><code> while stand <= 0 or mission > 10000:
try:
stand = int(input("Enter a VALID distance in metres: "))
except ValueError:
print("Please enter a valid distance E.g 6000: ")
**if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
print("This launch site sucks! It must be demolished and rebuilt!")
print("Launch delayed.")**
if stand >= 5000:
print("Fortunately the mission control and viewing stands are situated far enough.")
while countdown == 0 or countdown == "":
print("We need a person to countdown.")
try:
countdown = int(input("How many seconds would you like the countdown to be?: "))
except ValueError:
</code></pre>
| 0 | 2016-08-30T10:14:14Z | 39,225,176 | <p>We can use break statement inside if statement for termination. Instead of break we can write os.exit or sys.exit</p>
<p>os._exit calls the C function _exit() which does an immediate program termination. Note the statement "can never return".</p>
<p>sys.exit() is identical to raise SystemExit(). It raises a Python exception which may be caught by the caller.</p>
<pre><code>import os
if stand > 0 or stand < 5000:
print("ERROR, ERROR")
print("Launch cannot begin, the Mission Control and spectator stands are dangerously close at a distance of {}m.".format(mission))
    os._exit(0)
</code></pre>
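<p>To illustrate the difference (the function and threshold here are illustrative, not from the question): <code>sys.exit</code> raises <code>SystemExit</code>, which an enclosing <code>try</code>/<code>except</code> can catch, while <code>os._exit</code> terminates the process immediately with no cleanup:</p>

```python
import sys

def launch_check(stand):
    # sys.exit raises SystemExit, so callers can intercept it if needed
    if stand < 5000:
        sys.exit(1)
    return "launch ok"

try:
    launch_check(100)
except SystemExit as exc:
    print("caught SystemExit with code", exc.code)

print(launch_check(6000))  # launch ok
```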
| 0 | 2016-08-30T10:35:31Z | [
"python",
"variables",
"if-statement",
"while-loop"
] |
How to assert operators < and >= are not implemented? | 39,224,762 | <p>Initially I was not using the <code>unittest</code> framework, so to test that two objects of the same class are not comparable using the operators <code><</code> and <code>>=</code> I did something like:</p>
<pre><code>try:
o1 < o2
assert False
except TypeError:
pass
</code></pre>
<p>after that, though, I decided to start using the <code>unittest</code> module, so I'm converting my tests to the way tests are written with the same module.</p>
<p>I was trying to accomplish the equivalent thing as above with:</p>
<pre><code>self.assertRaises(TypeError, o1 < o2)
</code></pre>
<p>but this does not quite work, because <code>o1 < o2</code> tries to call the operator <code><</code>, instead of being a reference to a function, which should be called as part of the test. </p>
<p>Is there a way to accomplish what I need without needing to wrap <code>o1 < o2</code> in a function?</p>
| 2 | 2016-08-30T10:17:08Z | 39,224,917 | <p>Use <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaises" rel="nofollow"><code>assertRaises</code></a> as a context manager:</p>
<pre><code>with self.assertRaises(TypeError):
o1 < o2
</code></pre>
<p><a href="http://effbot.org/zone/python-with-statement.htm" rel="nofollow">Here</a> is an explanation of the <code>with</code> statement. <a href="https://docs.python.org/3/reference/datamodel.html#with-statement-context-managers" rel="nofollow">Here</a> are the docs. TL;DR: It allows the execution of a code block with a "context", i.e. things to be set up and disposed before/after the execution, error handling etc.</p>
<p>In the case of <code>assertRaises</code>, its context manager simply checks whether an execption of the required type has been raised, by checking the <code>exc</code> agrument passed to its <code>__exit__</code> method.</p>
| 5 | 2016-08-30T10:24:15Z | [
"python",
"unit-testing",
"python-3.x",
"exception",
"python-unittest"
] |
Eficient way to split a bytes array then convert it to string in Python | 39,224,764 | <p>I have a numpy bytes array containing characters, followed by <code>b''</code>, followed by others characters (including weird characters which raise Unicode errors when decoding):</p>
<pre><code>bytes = numpy.array([b'f', b'o', b'o', b'', b'b', b'a', b'd', b'\xfe', b'\x95', b'', b'\x80', b'\x04', b'\x08' b'\x06'])
</code></pre>
<p>I want to get everything before the first <code>b''</code>.</p>
<p>Currently my code is:</p>
<pre><code>txt = []
for c in bytes:
if c != b'':
txt.append(c.decode('utf-8'))
else:
break
txt = ''.join(txt)
</code></pre>
<p>I suppose there is a more efficient and Pythonic way to do that. Any idea?</p>
| 2 | 2016-08-30T10:17:14Z | 39,225,909 | <p>I like your way, it is explicit, the <code>for</code> loop is understandable by all and it isn't all that slow compared to other approaches.</p>
<p>Some suggestions I'd make would be to change your condition from <code>if c != b''</code> to <code>if c</code> since a non-empty byte object will be truthy and, *don't name your list <code>bytes</code>, you mask the built-in! Name it <code>bt</code> or something similar :-)</p>
<p>Other options include <code>itertools.takewhile</code> which will grab elements from an iterable as long as a predicate holds; your operation would look like:</p>
<pre><code>"".join(s.decode('utf-8') for s in takewhile(bool, bt))
</code></pre>
<p>This is slightly slower but is more compact, if you're a one-liner lover this might appeal to you.</p>
<p>Slightly faster and also compact is using <code>index</code> along with a slice:</p>
<pre><code>"".join(b.decode('utf-8') for b in bt[:bt.index(b'')])
</code></pre>
<p>While compact it also suffers from readability.</p>
<p>In short, I'd go with the for loop since readability counts as very pythonic in my eyes.</p>
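<p>For reference, both compact forms against a small sample (a plain list here; a numpy array of bytes objects iterates the same way):</p>

```python
from itertools import takewhile

bt = [b'f', b'o', b'o', b'', b'b', b'a', b'd']

# takewhile stops at the first falsy element (the empty bytes object)
txt_takewhile = "".join(s.decode('utf-8') for s in takewhile(bool, bt))

# index/slice: everything before the first b''
txt_index = "".join(b.decode('utf-8') for b in bt[:bt.index(b'')])

print(txt_takewhile)  # foo
print(txt_index)      # foo
```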
| 4 | 2016-08-30T11:10:30Z | [
"python",
"python-3.x",
"numpy",
"split"
] |
Shared numpy array with multithreading | 39,224,780 | <p>I have a numpy array(matrix), which I want to fill with calculated values in asynchronously. As a result, I want to have matrix distances with calculated values, but at the end I receive matrix filled with default(-1) value. I understand, that something wrong with sharing distances between threads, but I can't figure out what's exactly wrong.</p>
<pre><code>import numpy as np
import concurrent.futures
data = range(1, 10)
amount = len(data)
default = -1
distances = np.full((amount, amount), default, dtype=np.float32)
def calculate_distance(i, j):
global distances
if i == j:
distances[i][j] = 0
else:
calculated = data[i] + data[j] #doesn't matter how is this calculated
distances[i][j] = calculated
distances[j][i] = calculated
with concurrent.futures.ProcessPoolExecutor() as executor:
for i in range(0, amount):
for j in range(i, amount):
future = executor.submit(calculate_distance, i, j)
result = future.result()
executor.shutdown(True)
print(distances)
</code></pre>
| 0 | 2016-08-30T10:18:04Z | 39,224,918 | <p>You are using a <code>ProcessPoolExecutor</code>. This will fork new processes for performing work. These processes will not share memory, each instead getting a copy of the <code>distances</code> matrix.</p>
<p>Thus any changes to their <em>copy</em> will certainly not be reflected in the original process.</p>
<p>Try using a <code>ThreadPoolExecutor</code> instead. </p>
<p><strong>NOTE:</strong> Globals are generally viewed with distaste ... pass the array into the function instead.</p>
| 0 | 2016-08-30T10:24:16Z | [
"python",
"multithreading"
] |
Using poppler to extract annotations. g_free() / get_color() issue | 39,224,812 | <p>I borrowed this python code <a href="http://stackoverflow.com/questions/1106098/parse-annotations-from-a-pdf">here</a> (first answer by enno groper) to automate the extraction of annotations from pdfs. </p>
<p>I want to make some modifications to the code. Trying to fetch the color of annotations with <code>annot_mapping.annot.get_color()</code> I ran into the first issue. What the command returns is objects like this one <code><PopplerColor at 0x1a85180></code>, rather than rgb values (<a href="https://people.freedesktop.org/~ajohnson/docs/poppler-glib/poppler-PopplerColor.html#PopplerColor" rel="nofollow">promised here</a>). </p>
<p>According to <a href="https://people.freedesktop.org/~ajohnson/docs/poppler-glib/PopplerAnnot.html#poppler-annot-get-color" rel="nofollow">poppler docs</a> <code>poppler_annot_get_color()</code> returns "a new allocated PopplerColor with the color values of poppler_annot , or NULL. It must be freed with g_free() when done".</p>
<p>Is this correct and if yes, how do I achieve this in python?</p>
| 0 | 2016-08-30T10:19:29Z | 39,896,977 | <p><code>annot_mapping.annot.get_color()</code> gives you a PopplerColor, which is a structure with three members (of type guint16): red, green, and blue. For example:</p>
<pre><code>PopplerColor *color = poppler_annot_get_color (annot);
g_printf ("%d\n", color->red);
</code></pre>
<p>gives you the red value of your annotation <code>annot</code>, the gb values can be obtained similarly.</p>
<p>In python this is achieved through <code>annot_mapping.annot.get_color().red</code>, assuming you have <code>import poppler</code></p>
| 0 | 2016-10-06T13:11:50Z | [
"python",
"pdf",
"annotations",
"poppler"
] |
printing corresponding item in list to another list containing numbers | 39,224,866 | <p>My problem is this.
These are the two lists</p>
<pre><code>codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>
<p>How would I find the position of all the 'a' in the codes list. And then print out the corresponding item in the pas list.
This is what the output should be. They should also be sorted with the .sort() function. </p>
<pre><code>1
4
8
11
</code></pre>
<p>I have come up with this code. (That doesnt work)</p>
<pre><code>qwer = [i for i,x in enumerate(codes) if x == common]
qwe = [qwer[i:i+1] for i in range(0, len(qwer), 1)]
print(pas[qwe])
</code></pre>
<p>What would be the best way to get the correct output?</p>
| 0 | 2016-08-30T10:22:01Z | 39,224,933 | <p>There are many ways to achieve it. Your example lists are:</p>
<pre><code>>>> codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
>>> pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>
<p><strong>Approach 1:</strong> Using <code>enumerate</code>:</p>
<pre><code>>>> indices = [pas[i] for i, x in enumerate(codes) if x == "a"]
indices = [1, 4, 8, 11]
</code></pre>
<p><strong>Approach 2</strong>: Using <code>zip</code>:</p>
<pre><code>>>> [p for p, c in zip(pas, codes) if c == 'a']
[1, 4, 8, 11]
</code></pre>
| 1 | 2016-08-30T10:24:56Z | [
"python",
"list"
] |
printing corresponding item in list to another list containing numbers | 39,224,866 | <p>My problem is this.
These are the two lists</p>
<pre><code>codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>
<p>How would I find the position of all the 'a' in the codes list. And then print out the corresponding item in the pas list.
This is what the output should be. They should also be sorted with the .sort() function. </p>
<pre><code>1
4
8
11
</code></pre>
<p>I have come up with this code. (That doesnt work)</p>
<pre><code>qwer = [i for i,x in enumerate(codes) if x == common]
qwe = [qwer[i:i+1] for i in range(0, len(qwer), 1)]
print(pas[qwe])
</code></pre>
<p>What would be the best way to get the correct output?</p>
| 0 | 2016-08-30T10:22:01Z | 39,224,944 | <pre><code>>>> pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
>>> codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
>>> result = sorted(i for i,j in zip(pas,codes) if j=='a')
>>> for i in result:
... print i
...
1
4
8
11
</code></pre>
| 4 | 2016-08-30T10:25:25Z | [
"python",
"list"
] |
printing corresponding item in list to another list containing numbers | 39,224,866 | <p>My problem is this.
These are the two lists</p>
<pre><code>codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>
<p>How would I find the position of all the 'a' in the codes list. And then print out the corresponding item in the pas list.
This is what the output should be. They should also be sorted with the .sort() function. </p>
<pre><code>1
4
8
11
</code></pre>
<p>I have come up with this code. (That doesnt work)</p>
<pre><code>qwer = [i for i,x in enumerate(codes) if x == common]
qwe = [qwer[i:i+1] for i in range(0, len(qwer), 1)]
print(pas[qwe])
</code></pre>
<p>What would be the best way to get the correct output?</p>
| 0 | 2016-08-30T10:22:01Z | 39,225,198 | <p>Just added another way to use numpy:</p>
<pre><code>import numpy as np
codes = np.array(['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l'])
pas = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
index = np.where(codes=='a')
values = pas[index]
In [122]: print(values)
[ 1 4 8 11]
</code></pre>
| 0 | 2016-08-30T10:36:30Z | [
"python",
"list"
] |
printing corresponding item in list to another list containing numbers | 39,224,866 | <p>My problem is this.
These are the two lists</p>
<pre><code>codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
</code></pre>
<p>How would I find the position of all the 'a' in the codes list. And then print out the corresponding item in the pas list.
This is what the output should be. They should also be sorted with the .sort() function. </p>
<pre><code>1
4
8
11
</code></pre>
<p>I have come up with this code. (That doesnt work)</p>
<pre><code>qwer = [i for i,x in enumerate(codes) if x == common]
qwe = [qwer[i:i+1] for i in range(0, len(qwer), 1)]
print(pas[qwe])
</code></pre>
<p>What would be the best way to get the correct output?</p>
| 0 | 2016-08-30T10:22:01Z | 39,226,306 | <pre><code>codes = ['a', 'b', 'c', 'a', 'e', 'f', 'g', 'a', 'i', 'j', 'a', 'l']
pas = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
[pas[index] for index, element in enumerate(codes) if element == "a"]
</code></pre>
| 0 | 2016-08-30T11:28:47Z | [
"python",
"list"
] |
How to set output for logging.handlers.SysLogHandler in YAML file in python | 39,224,892 | <p>I have following config file:</p>
<pre><code>[loggers]
keys=MYLogger
[handlers]
keys=fileHandler,streamHandler,sysHandler
[formatters]
keys=simpleFormatter
[logger_MYLogger]
level=INFO
handlers=fileHandler
qualname=MYLogger
propagate=0
[handler_fileHandler]
class=FileHandler
formatter=simpleFormatter
args=('mylog.log',)
[handler_streamHandler]
class=StreamHandler
formatter=simpleFormatter
args=(sys.stdout,)
[handler_sysHandler]
class=logging.handlers.SysLogHandler
formatter=simpleFormatter
args=('/dev/log',)
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s : %(message)s
datefmt=%Y-%m-%d %H:%M:%S
</code></pre>
<p>I need to convert it to the YAML file. I've done it successfully, except part for sysHandler:</p>
<pre><code> version: 1
formatters:
simpleFormatter:
format: '%(asctime)s - %(name)s - %(levelname)s : %(message)s'
datefmt: '%Y-%m-%d %H:%M:%S'
handlers:
stream:
class: logging.StreamHandler
formatter: simpleFormatter
stream: ext://sys.stdout
file:
class: logging.FileHandler
formatter: simpleFormatter
filename: mylog.log
# sys:
# class: logging.handlers.SysLogHandler
# formatter: simpleFormatter
# stream: /dev/log
loggers:
MYLogger:
level: INFO
handlers: [stream, file]
</code></pre>
<p>How to provide arguments for sysHandler in yaml format? Why it was ok to simply put args in original config file and here I have to specify stream / filename?</p>
| 1 | 2016-08-30T10:23:31Z | 39,243,542 | <p>The logging <a href="https://docs.python.org/3/library/logging.config.html#dictionary-schema-details" rel="nofollow">dictionary schema details</a> have the following to say about handlers:</p>
<pre><code>handlers - the corresponding value will be a dict in which each key is a handler
id and each value is a dict describing how to configure the corresponding Handler
instance.
The configuring dict is searched for the following keys:
class (mandatory). This is the fully qualified name of the handler class.
level (optional). The level of the handler.
formatter (optional). The id of the formatter for this handler.
filters (optional). A list of ids of the filters for this handler.
    All other keys are passed through as keyword arguments to the handler's
    constructor.
</code></pre>
<p>The SysLogHandler has the following <a href="https://docs.python.org/3/library/logging.handlers.html#logging.handlers.SysLogHandler" rel="nofollow">signature</a>:</p>
<pre><code>SysLogHandler(address=('localhost', SYSLOG_UDP_PORT), facility=LOG_USER, socktype=socket.SOCK_DGRAM)
</code></pre>
<p>The key <code>stream</code>, which is not a mandatory or optional key for a handler, is passed as keyword argument to <code>SysLogHandler()</code> and that is not a keyword you can use to instantiate an instance of that class.</p>
<p>The non-mandatory/optional keys for the <a href="https://docs.python.org/3/library/logging.handlers.html#logging.StreamHandler" rel="nofollow">StreamHandler</a> (i.e. <code>stream</code>) and <a href="https://docs.python.org/3/library/logging.handlers.html#logging.FileHandler" rel="nofollow">FileHandler</a> (i.e. <code>filename</code>) match the respective signatures).</p>
<p>I assume you really have to supply to suppy the address with the tuple argument as sequence, to get this accepted:</p>
<pre><code>sys:
class: logging.handlers.SysLogHandler
formatter: simpleFormatter
address: [localhost, /dev/log]
</code></pre>
<p>(Please note that the <code>args</code> entry for <code>[handler_sysHandler]</code> in your INI style config file should start with a tuple. From the <a href="https://docs.python.org/3/library/logging.config.html#configuration-file-format" rel="nofollow">logging documentation</a>:</p>
<pre><code>args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)
</code></pre>
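<p>For context, the YAML file is just a serialization of the dict that <code>logging.config.dictConfig</code> consumes (typically loaded with <code>yaml.safe_load</code>). A stdlib-only sketch of the equivalent dict, omitting the syslog handler since <code>/dev/log</code> is platform-specific:</p>

```python
import logging.config

# Dict equivalent of the YAML config; keys other than
# class/level/formatter/filters are passed to the handler constructor.
config = {
    "version": 1,
    "formatters": {
        "simpleFormatter": {
            "format": "%(asctime)s - %(name)s - %(levelname)s : %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        }
    },
    "handlers": {
        "stream": {
            "class": "logging.StreamHandler",
            "formatter": "simpleFormatter",
            "stream": "ext://sys.stdout",
        }
    },
    "loggers": {
        "MYLogger": {"level": "INFO", "handlers": ["stream"]}
    },
}
logging.config.dictConfig(config)
logging.getLogger("MYLogger").info("configured")
```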
| 0 | 2016-08-31T07:41:29Z | [
"python",
"logging",
"yaml",
"syslog"
] |
Send scapy packets through raw sockets in python | 39,225,108 | <p>Is it possible? If yes? How?</p>
<p>This is my script (it doesn't work):</p>
<pre><code>from scapy.all import *
import socket
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
p=IP(dst="192.168.1.254")/TCP(flags="S", sport=RandShort(), dport=80)
s.connect(("192.168.1.254",80))
s.send(p)
print ("Request sent!")
except:
print ("An error occurred.")
</code></pre>
<p><strong>--UPDATE--</strong></p>
<pre><code>p = bytes(IP(dst="DESTINATIONIP")/TCP(flags="S", sport=RandShort(), dport=80))
while True:
try:
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, sockip, sockport, True)
s = socks.socksocket()
s.connect((DESTINATIONIP,DESTINATIONPORT))
s.send(p)
print ("Request Sent!")
except:
print ("An error occurred.")
</code></pre>
<p>is it possible to send this syn packet but through an http proxy instead of socks?</p>
| 2 | 2016-08-30T10:32:18Z | 39,227,739 | <p>To send a scapy packet using raw sockets you have to convert your packet to raw bytes first. For example a packet crafted using scapy like this:</p>
<pre><code>p = IP(dst="192.168.1.254")/TCP(flags="S", sport=RandShort(),dport=80)
</code></pre>
<p>should be converted to raw bytes with <code>bytes(p)</code>.
This will give you something like:</p>
<pre><code>'E\x00\x00(\x00\x01\x00\x00@\x06\xf6w\xc0\xa8\x01\t\xc0\xa8\x01\xfe\x97%\x00P\x00\x00\x00\x00\x00\x00\x00\x00P\x02 \x00t\x15\x00\x00'
</code></pre>
<p>Then you can send it using raw sockets. So for your example you could modify a little your code like:</p>
<pre><code>from scapy.all import *
import socket
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
p = IP(dst="192.168.1.254")/TCP(flags="S", sport=RandShort(),dport=80)/Raw("Hallo world!")
s.connect(("192.168.1.254",80))
s.send(bytes(p))
print "[+] Request Sent!"
except Exception, e:
raise e
</code></pre>
<p>This should work!</p>
<p><strong>Notice!!!</strong>
Keep in mind that when you use the socket module to communicate with another computer,
the socket machinery automatically constructs your packets (headers, etc.) and sends the content you wish
to send. But when you construct a packet with scapy you craft it from scratch, so
you define its content, headers, layers, etc. So in your example, when you send your packet
you will send all of it as content-payload, even the packet headers (IP header, TCP header).
You can test it by running the sniffer below:</p>
<pre><code>#!/usr/bin/env python
from scapy.all import *
def printer(packet):
if packet.haslayer(Raw):
print packet.getlayer(Raw).load
print "[+] Sniff started"
while True:
sniff(store=0, filter="host 192.168.1.254 and port 80", prn=printer, iface="your_interface_here")
</code></pre>
<p>Well while the sniffer is running try to run <strong>the first piece of code in my post</strong> (as I updated the packet with a raw layer=tcp.payload) and you will observe that not only
the data but the whole packet gets transmitted as data. So you are, in a way, sending the headers twice. That's why sockets have their own send method and scapy has its own.</p>
| 2 | 2016-08-30T12:37:50Z | [
"python",
"sockets",
"python-3.x",
"scapy"
] |
HTMLParser and BeautifulSoup not decoding HTML entities correctly | 39,225,208 | <p>I am trying to decode <code>HTML entities</code> from a section of <code>HTML</code> source code with both <code>HTMLParser</code> and <code>BeautifulSoup</code> </p>
<p>However neither seems to work completely. Namely they don't decode slashes.</p>
<p>My Python version is <code>2.7.11</code> with <code>BeautifulSoup</code> version <code>3.2.1</code></p>
<pre><code>print 'ORIGINAL STRING: %s \n' % original_url_string
#clean up
try:
# Python 2.6-2.7
from HTMLParser import HTMLParser
except ImportError:
# Python 3
from html.parser import HTMLParser
h = HTMLParser()
url_string = h.unescape(original_url_string)
print 'CLEANED WITH html.parser: %s \n' % url_string
decoded = BeautifulSoup( original_url_string,convertEntities=BeautifulSoup.HTML_ENTITIES)
print 'CLEANED WITH BeautifulSoup: %s \n' % decoded.contents
</code></pre>
<p>Gives me an output like:</p>
<pre><code>ORIGINAL STRING: api.soundcloud.com%2Ftracks%2F277561480&#038;show_artwork=true&#038;maxwidth=1050&#038;maxheight=1000
CLEANED WITH html.parser: api.soundcloud.com%2Ftracks%2F277561480&show_artwork=true&maxwidth=1050&maxheight=1000
CLEANED WITH BeautifulSoup: [u'api.soundcloud.com%2Ftracks%2F277561480&show_artwork=true&maxwidth=1050&maxheight=1000']
</code></pre>
<p>What am I missing here? </p>
<p>Should I try to decode the entire <code>HTML</code> page before pulling out the urls?</p>
<p>Is there a better way to do this with Python?</p>
| 0 | 2016-08-30T10:36:44Z | 39,247,147 | <p>Are you trying to decode the slashes from the url or the url's html?</p>
<p>If you re trying to decode the slashes, they are not <a href="https://dev.w3.org/html5/html-author/charref" rel="nofollow">HTML entities</a>, but percent-encoded characters.</p>
<p><code>urllib</code> has the method you need:</p>
<pre><code>import urllib
urllib.unquote(original_url_string)
>>> 'api.soundcloud.com/tracks/277561480&#038;show_artwork=true&#038;maxwidth=1050&#038;maxheight=1000'
</code></pre>
<p>If you want to decode the html, you first have to <code>get</code> it with a package like <code>requests</code> or <code>urllib</code></p>
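<p>On Python 3 the same function lives in <code>urllib.parse</code>:</p>

```python
from urllib.parse import unquote  # Python 3 location of urllib.unquote

s = "api.soundcloud.com%2Ftracks%2F277561480"
print(unquote(s))  # api.soundcloud.com/tracks/277561480
```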
| 0 | 2016-08-31T10:29:41Z | [
"python",
"beautifulsoup",
"html-entities",
"html-parser"
] |
python code- function does not work for arithmatic quiz and password does not work | 39,225,244 | <p>Please could I have some help with my code, as the function does not work for the <code>NumberError()</code> The point is that if when answering the maths quiz you type in a letter by mistake, it will say 'PLEASE TYPE IN A NUMBER', however I cannot get it to work as an error keeps coming up:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Helen/Documents/Computer Science Summer Task/Helen_Summer Tasks-1_2_3.py", line 77, in <module>
StudentAnswer = int(input("Please answer the question: "))
ValueError: invalid literal for int() with base 10: 'g'
</code></pre>
<p>Here is what I have attempted so far:</p>
<pre><code>def NumberError():
if StudentAnswer not in Turns '1,2,3,4,5,6,7,8,9,10':
print('Please enter a NUMBER.')
else:
return ("\nQuestion")
</code></pre>
<p>Also, the password option to access the quiz does not fully work. Here is what I have so far; I would appreciate some help on what I need to do and what I am doing wrong.</p>
<pre><code>from tkinter import *
import tkinter as tk
window = tk.Tk()
import os
#Must Access this to continue.
def checkPassword():
password = "Starfish"
def enteredPassword():
passwordEntry.get()
if password == enteredPassword:
confirmLabel.config(text="Access granted")
else:
confirmLabel.config(text="Access denied")
passwordLabel = tk.Label(window, text="Password:")
passwordEntry = tk.Entry(window, show="*")
button = tk.Button(window, text="Enter", command=checkPassword)
confirmLabel = tk.Label(window)
passwordLabel.pack()
passwordEntry.pack()
button.pack()
confirmLabel.pack()
window.mainloop()
</code></pre>
<p>THIS IS MY OVERALL PIECE OF CODE SO FAR:</p>
<pre><code>from tkinter import *
import tkinter as tk
window = tk.Tk()
import os
#Must Access this to continue.
def checkPassword():
password = "Starfish"
def enteredPassword():
passwordEntry.get()
if password == enteredPassword:
confirmLabel.config(text="Access granted")
else:
confirmLabel.config(text="Access denied")
passwordLabel = tk.Label(window, text="Password:")
passwordEntry = tk.Entry(window, show="*")
button = tk.Button(window, text="Enter", command=checkPassword)
confirmLabel = tk.Label(window)
passwordLabel.pack()
passwordEntry.pack()
button.pack()
confirmLabel.pack()
window.mainloop()
#Summer_Task1.py
import random
def get_Name():
global Name
global Class
#Inputs pupil's class
while True:
Class = input ("Please select your class: A)Class 1 B)Class 2 C)Class 3 [C1/C2/C3]? : ")
# check if d1a is equal to one of the strings, specified in the list
if Class in ['C1', 'C2', 'C3']:
# process the input
print("Thank you.")
# if it was equal - break from the while loop
break
def start_quiz():
print("Welcome to the Numeracy quiz that will test your basic arithmatic skills!")
Name = input("Please enter your first name and surname: ")
print("Hi " + Name + "!" + " Please ANSWER the following NUMERACY QUESTIONS and then PRESS ENTER to work out whether they are RIGHT or WRONG. Please ENTER a NUMBER.")
print("You will receive a total score at the end of the quiz.")
def Questions():
global score
global questionnumber
Score = 0
questionnumber=0
while questionnumber<10:
questionnumber=questionnumber+1
Turns = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
for n in Turns:
Number1 = random.randint(1, 10)
Number2 = random.randint(1, 10)
Number3 = random.randint(1, 3)
if Number3 == 1:
Answer = Number1 + Number2
Sign = " + "
elif Number3 == 2:
Answer = Number1 - Number2
Sign = " - "
elif Number3 == 3:
Answer = Number1 * Number2
Sign = " x "
print("\nQuestion", n, "\n", Number1, Sign, Number2)
StudentAnswer = int(input("Please answer the question: "))
print("The correct answer is:", Answer)
if StudentAnswer == Answer:
Score = Score + 1
print("Well Done, that is correct!")
else:
print("Sorry that is incorrect!")
def NumberError():
if StudentAnswer not in Turns '1,2,3,4,5,6,7,8,9,10':
print('Please enter a NUMBER.')
else:
return ("\nQuestion")
print("\nName:", Name)
print("\nClass:", Class)
print("\nScore:", Score)
</code></pre>
<p>Thanks!</p>
| 0 | 2016-08-30T10:38:05Z | 39,226,396 | <p>You could catch the conversion error instead of letting the cast crash the program. (Note that in Python 3 <code>input()</code> always returns a string, so checking whether it <em>is</em> an <code>int</code> will never succeed; you have to attempt the conversion and handle the <code>ValueError</code>.)</p>
<pre><code>try:
    i = int(input("Please answer the question: "))
except ValueError:
    print('Please enter a NUMBER.')
</code></pre>
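<p>A more robust pattern keeps prompting until the conversion succeeds (Python 3 sketch; <code>ask_number</code> is a hypothetical helper name, and the <code>reader</code> argument is injectable only so the loop can be exercised without a console):</p>

```python
def ask_number(prompt, reader=input):
    # Keep asking until the text converts cleanly to an int.
    while True:
        text = reader(prompt)
        try:
            return int(text)
        except ValueError:
            print('Please enter a NUMBER.')

# In the quiz loop this would replace the bare int(input(...)) call:
# StudentAnswer = ask_number("Please answer the question: ")
```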
| 1 | 2016-08-30T11:32:50Z | [
"python"
] |
Why is ctypes so slow to convert a Python list to a C array? | 39,225,263 | <p>The bottleneck of my code is currently a conversion from a Python list to a C array using ctypes, as described <a href="http://stackoverflow.com/questions/4145775/how-do-i-convert-a-python-list-into-a-c-array-by-using-ctypes">in this question</a>.</p>
<p>A small experiment shows that it is indeed very slow, in comparison of other Python instructions:</p>
<pre><code>import timeit
setup="from array import array; import ctypes; t = [i for i in range(1000000)];"
print(timeit.timeit(stmt='(ctypes.c_uint32 * len(t))(*t)',setup=setup,number=10))
print(timeit.timeit(stmt='array("I",t)',setup=setup,number=10))
print(timeit.timeit(stmt='set(t)',setup=setup,number=10))
</code></pre>
<p>Gives:</p>
<pre><code>1.790962941000089
0.0911122129996329
0.3200237319997541
</code></pre>
<p>I obtained these results with CPython 3.4.2. I get similar times with CPython 2.7.9 and Pypy 2.4.0.</p>
<p>I tried running the above code with <code>perf</code>, commenting out the <code>timeit</code> instructions to run only one at a time. I get these results:</p>
<p><strong>ctypes</strong></p>
<pre><code> Performance counter stats for 'python3 perf.py':
1807,891637 task-clock (msec) # 1,000 CPUs utilized
8 context-switches # 0,004 K/sec
0 cpu-migrations # 0,000 K/sec
59 523 page-faults # 0,033 M/sec
5 755 704 178 cycles # 3,184 GHz
13 552 506 138 instructions # 2,35 insn per cycle
3 217 289 822 branches # 1779,581 M/sec
748 614 branch-misses # 0,02% of all branches
1,808349671 seconds time elapsed
</code></pre>
<p><strong>array</strong></p>
<pre><code> Performance counter stats for 'python3 perf.py':
144,678718 task-clock (msec) # 0,998 CPUs utilized
0 context-switches # 0,000 K/sec
0 cpu-migrations # 0,000 K/sec
12 913 page-faults # 0,089 M/sec
458 284 661 cycles # 3,168 GHz
1 253 747 066 instructions # 2,74 insn per cycle
325 528 639 branches # 2250,011 M/sec
708 280 branch-misses # 0,22% of all branches
0,144966969 seconds time elapsed
</code></pre>
<p><strong>set</strong></p>
<pre><code> Performance counter stats for 'python3 perf.py':
369,786395 task-clock (msec) # 0,999 CPUs utilized
0 context-switches # 0,000 K/sec
0 cpu-migrations # 0,000 K/sec
108 584 page-faults # 0,294 M/sec
1 175 946 161 cycles # 3,180 GHz
2 086 554 968 instructions # 1,77 insn per cycle
422 531 402 branches # 1142,636 M/sec
768 338 branch-misses # 0,18% of all branches
0,370103043 seconds time elapsed
</code></pre>
<p>The code with <code>ctypes</code> has fewer page-faults than the code with <code>set</code> and the same number of branch-misses as the two others. The only thing I see is that there are more instructions and branches (but I still don't know why) and more context switches (but that is certainly a consequence of the longer run time rather than a cause).</p>
<p>I therefore have two questions:</p>
<ol>
<li>Why is ctypes so slow ?</li>
<li>Is there a way to improve performances, either with ctype or with another library?</li>
</ol>
| 8 | 2016-08-30T10:38:40Z | 39,225,698 | <p>While this is not a definitive answer, the problem seems to be the constructor call with <code>*t</code>. Doing the following instead, decreases the overhead significantly:</p>
<pre><code>array = (ctypes.c_uint32 * len(t))()
array[:] = t
</code></pre>
<p>Test:</p>
<pre><code>import timeit
setup="from array import array; import ctypes; t = [i for i in range(1000000)];"
print(timeit.timeit(stmt='(ctypes.c_uint32 * len(t))(*t)',setup=setup,number=10))
print(timeit.timeit(stmt='a = (ctypes.c_uint32 * len(t))(); a[:] = t',setup=setup,number=10))
print(timeit.timeit(stmt='array("I",t)',setup=setup,number=10))
print(timeit.timeit(stmt='set(t)',setup=setup,number=10))
</code></pre>
<p>Output:</p>
<pre><code>1.7090932869978133
0.3084979929990368
0.08278547400186653
0.2775516299989249
</code></pre>
| 3 | 2016-08-30T10:59:48Z | [
"python",
"performance",
"ctypes"
] |
Why is ctypes so slow to convert a Python list to a C array? | 39,225,263 | <p>The bottleneck of my code is currently a conversion from a Python list to a C array using ctypes, as described <a href="http://stackoverflow.com/questions/4145775/how-do-i-convert-a-python-list-into-a-c-array-by-using-ctypes">in this question</a>.</p>
<p>A small experiment shows that it is indeed very slow, in comparison of other Python instructions:</p>
<pre><code>import timeit
setup="from array import array; import ctypes; t = [i for i in range(1000000)];"
print(timeit.timeit(stmt='(ctypes.c_uint32 * len(t))(*t)',setup=setup,number=10))
print(timeit.timeit(stmt='array("I",t)',setup=setup,number=10))
print(timeit.timeit(stmt='set(t)',setup=setup,number=10))
</code></pre>
<p>Gives:</p>
<pre><code>1.790962941000089
0.0911122129996329
0.3200237319997541
</code></pre>
<p>I obtained these results with CPython 3.4.2. I get similar times with CPython 2.7.9 and Pypy 2.4.0.</p>
<p>I tried running the above code with <code>perf</code>, commenting out the <code>timeit</code> instructions to run only one at a time. I get these results:</p>
<p><strong>ctypes</strong></p>
<pre><code> Performance counter stats for 'python3 perf.py':
1807,891637 task-clock (msec) # 1,000 CPUs utilized
8 context-switches # 0,004 K/sec
0 cpu-migrations # 0,000 K/sec
59 523 page-faults # 0,033 M/sec
5 755 704 178 cycles # 3,184 GHz
13 552 506 138 instructions # 2,35 insn per cycle
3 217 289 822 branches # 1779,581 M/sec
748 614 branch-misses # 0,02% of all branches
1,808349671 seconds time elapsed
</code></pre>
<p><strong>array</strong></p>
<pre><code> Performance counter stats for 'python3 perf.py':
144,678718 task-clock (msec) # 0,998 CPUs utilized
0 context-switches # 0,000 K/sec
0 cpu-migrations # 0,000 K/sec
12 913 page-faults # 0,089 M/sec
458 284 661 cycles # 3,168 GHz
1 253 747 066 instructions # 2,74 insn per cycle
325 528 639 branches # 2250,011 M/sec
708 280 branch-misses # 0,22% of all branches
0,144966969 seconds time elapsed
</code></pre>
<p><strong>set</strong></p>
<pre><code> Performance counter stats for 'python3 perf.py':
369,786395 task-clock (msec) # 0,999 CPUs utilized
0 context-switches # 0,000 K/sec
0 cpu-migrations # 0,000 K/sec
108 584 page-faults # 0,294 M/sec
1 175 946 161 cycles # 3,180 GHz
2 086 554 968 instructions # 1,77 insn per cycle
422 531 402 branches # 1142,636 M/sec
768 338 branch-misses # 0,18% of all branches
0,370103043 seconds time elapsed
</code></pre>
<p>The code with <code>ctypes</code> has fewer page-faults than the code with <code>set</code> and the same number of branch-misses as the two others. The only thing I see is that there are more instructions and branches (but I still don't know why) and more context switches (but that is certainly a consequence of the longer run time rather than a cause).</p>
<p>I therefore have two questions:</p>
<ol>
<li>Why is ctypes so slow ?</li>
<li>Is there a way to improve performances, either with ctype or with another library?</li>
</ol>
| 8 | 2016-08-30T10:38:40Z | 39,231,670 | <p>The solution is to use the <code>array</code> module and cast the address or use the from_buffer method...</p>
<pre><code>import timeit
setup="from array import array; import ctypes; t = [i for i in range(1000000)];"
print(timeit.timeit(stmt="v = array('I',t);assert v.itemsize == 4; addr, count = v.buffer_info();p = ctypes.cast(addr,ctypes.POINTER(ctypes.c_uint32))",setup=setup,number=10))
print(timeit.timeit(stmt="v = array('I',t);a = (ctypes.c_uint32 * len(v)).from_buffer(v)",setup=setup,number=10))
print(timeit.timeit(stmt='(ctypes.c_uint32 * len(t))(*t)',setup=setup,number=10))
print(timeit.timeit(stmt='set(t)',setup=setup,number=10))
</code></pre>
<p>It is then many times faster when using Python 3:</p>
<pre><code>$ python3 convert.py
0.08303386811167002
0.08139665238559246
1.5630637975409627
0.3013848252594471
</code></pre>
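<p>One property of the <code>from_buffer</code> variant worth noting (small sketch, separate from the timings above): it shares memory with the <code>array</code> rather than copying it, so writes through either view are visible in the other:</p>

```python
import ctypes
from array import array

v = array('I', [10, 20, 30])
a = (ctypes.c_uint32 * len(v)).from_buffer(v)

a[1] = 99       # write through the ctypes view...
print(v[1])     # ...and the array sees it: 99

v[2] = 77       # write through the array...
print(a[2])     # ...and the ctypes view sees it: 77
```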
| 2 | 2016-08-30T15:33:01Z | [
"python",
"performance",
"ctypes"
] |
How do I replace the contents of a HTML document with random text using BeautifulSoup? | 39,225,299 | <p>Here is code that I've written.</p>
<pre><code> from bs4 import BeautifulSoup
import urllib
import string, random
def readHtml():
sock = urllib.urlopen('1041956_Page1.htm')
soup = BeautifulSoup(sock,'html.parser')
paraTags = soup.find_all('p')
for para in paraTags:
if(para.get_text() is not None):
para.replace_with(randomizeText(para.get_text())
def randomizeText(text):
length = len(text)
newWord = ''.join(random.choice(string.lowercase) for x in range(length-1))
return newWord
if __name__ == "__main__":
readHtml()
</code></pre>
<p>This gives me an error that says </p>
<blockquote>
<p>ValueError: Cannot insert None into a tag.</p>
</blockquote>
<p>I want the text in the BeautifulSoup object to be replaced with random text, and I would like to rebuild the HTML from it. Any help would be appreciated. Thanks!</p>
| 0 | 2016-08-30T10:40:11Z | 39,225,434 | <p>I am not sure, but have you considered the case in which</p>
<pre><code><p></p>
</code></pre>
<p>is empty.</p>
<p>So,</p>
<pre><code>para.get_text()
</code></pre>
<p>will return None.</p>
| 0 | 2016-08-30T10:46:43Z | [
"python",
"html",
"python-2.7",
"beautifulsoup"
] |
How do I replace the contents of a HTML document with random text using BeautifulSoup? | 39,225,299 | <p>Here is code that I've written.</p>
<pre><code> from bs4 import BeautifulSoup
import urllib
import string, random
def readHtml():
sock = urllib.urlopen('1041956_Page1.htm')
soup = BeautifulSoup(sock,'html.parser')
paraTags = soup.find_all('p')
for para in paraTags:
if(para.get_text() is not None):
para.replace_with(randomizeText(para.get_text())
def randomizeText(text):
length = len(text)
newWord = ''.join(random.choice(string.lowercase) for x in range(length-1))
return newWord
if __name__ == "__main__":
readHtml()
</code></pre>
<p>This gives me an error that says </p>
<blockquote>
<p>ValueError: Cannot insert None into a tag.</p>
</blockquote>
<p>I want the text in the BeautifulSoup object to be replaced with random text, and I would like to rebuild the HTML from it. Any help would be appreciated. Thanks!</p>
| 0 | 2016-08-30T10:40:11Z | 39,225,977 | <p>Your <code>randomizeText()</code> isn't returning anything, i.e. it returns <code>None</code>.
Make it:</p>
<pre><code>def randomizeText(text):
length = len(text)
newWord = ''.join(random.choice(string.lowercase) for x in range(length))
print newWord
return newWord
</code></pre>
<p>and <code>replace_with</code> refuses to insert <code>None</code> into a tag, which is exactly the error you are seeing.</p>
<hr>
<p>Also change the line 10 from:</p>
<pre><code>para.string.replace_with(randomizeText(para.get_text()))
</code></pre>
<p>to </p>
<pre><code>para.replace_with(randomizeText(para.get_text()))
</code></pre>
<p>to avoid the - <strong>AttributeError: 'NoneType' object has no attribute 'replace_with'</strong></p>
<hr>
<p>And my above comment </p>
<blockquote>
<p>Your code seems to be fine - you may be getting this because of empty
<code>p</code> block</p>
</blockquote>
<p>is nullified, as I have checked: the length of an empty <code>p</code> block is 1.</p>
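<p>For reference, a Python 3 version of the corrected function (<code>string.lowercase</code> became <code>string.ascii_lowercase</code> in Python 3); the replacement keeps the same length as the input text:</p>

```python
import random
import string

def randomize_text(text):
    # Build a random lowercase string of the same length as the input.
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(len(text)))

print(randomize_text("Hello world"))  # e.g. 'qzlmapwixbe'
```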
| 1 | 2016-08-30T11:13:57Z | [
"python",
"html",
"python-2.7",
"beautifulsoup"
] |
How to extract and encode data of text file in Python 2.7? | 39,225,320 | <p>I know it's been asked a lot and I tried some things but I can't make it right:</p>
<p>I have a text file like this:</p>
<pre><code>From: VENCA <email@infoclientes.venca.es>
Subject: =?ISO-8859-1?Q?=BFMaxi,_midi_o_mini=3F_=A1No_pases_d?=
=?ISO-8859-1?Q?e_largo_porque_esto_te_interesa!?=
Subject: =?UTF-8?Q?Lo_mejor_de_Gmail_est=C3=A9s_donde_est=C3=A9s?=
From: Equipo de Gmail <mail-noreply@google.com>
Subject: =?UTF-8?Q?Tres_consejos_para_sacarle_el_m=C3=A1ximo_partido_a_Gmai?=
From: Equipo de Gmail <mail-noreply@google.com>
Subject: =?UTF-8?Q?Organ=C3=ADzate_mejor_con_la_bandeja_de_entrada_de_Gmail?=
From: Equipo de Gmail <mail-noreply@google.com>
From: VENCA <email@infoclientes.venca.es>
Subject: =?UTF-8?Q?MARINA,_comprueba_que_tus_datos_se?=
=?UTF-8?Q?an_correctos_y_=C2=A1bienvenid@_a_Venca!?=
Subject: =?UTF-8?Q?Nuevo_inicio_de_sesi=C3=B3n_en_Chrome_con_Windows?=
From: Google <no-reply@accounts.google.com>
[...]
</code></pre>
<p>Each pair of From/Subject or Subject/From is what I want to extract, in the format:</p>
<pre><code>From: VENCA <email@infoclientes.venca.es> - Subject: ¿Maxi, midi o mini? ¡No pases de largo porque esto te interesa!
</code></pre>
<p>[...]</p>
<p>So I have to extract each pair (bearing in mind that some subjects span 2, 3... lines), give them the format that I want, and decode the subject from UTF-8, ISO-8859-1 or whatever encoding is declared, to make them understandable.</p>
<p>Thanks a lot !</p>
| 2 | 2016-08-30T10:41:19Z | 39,227,411 | <pre><code>with open('infile.txt') as infile:
try:
while True:
line1 = next(infile).rstrip()
line2 = next(infile).rstrip()
if line2.startswith('From:'):
line1, line2 = line2, line1
print line1, '-', line2
except StopIteration:
pass
</code></pre>
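<p>To actually decode the <code>=?...?=</code> subjects (they are RFC 2047 encoded-words, not plain text), the standard library's <code>email.header.decode_header</code> handles both the ISO-8859-1 and UTF-8 variants. A sketch in Python 3 syntax (the <code>email</code> module exists in 2.7 as well, though its string handling differs slightly):</p>

```python
from email.header import decode_header

def decode_subject(raw):
    # decode_header splits the value into (bytes, charset) chunks;
    # decode each chunk with its declared charset and join them.
    parts = decode_header(raw)
    return ''.join(
        chunk.decode(charset or 'ascii') if isinstance(chunk, bytes) else chunk
        for chunk, charset in parts
    )

print(decode_subject('=?UTF-8?Q?Nuevo_inicio_de_sesi=C3=B3n?='))
# Nuevo inicio de sesión
```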
| 0 | 2016-08-30T12:24:15Z | [
"python",
"python-2.7",
"utf-8",
"decode",
"encode"
] |
Read data from one csv and create another modified csv file | 39,225,482 | <p>I have a csv file called ipValues.csv which contains the following data:</p>
<pre><code>IPs Values
192.168.1.231 c3s8b1p1 c3s8b1p2 c3s4b1p3
192.168.1.179 c1s1b1p1 c1s1b1p2 c1s3b1p1 c3s2b1p2 c3s2b1p3
192.168.1.195 c1s1b2p8
192.168.1.162 c1s4b7p8 c1s1b1p3 c1s1b2p2 c1s1b2p3 c1s1b2p5
192.168.1.179 c1s1b1p1 c1s1b1p2 c1s3b1p1 c3s2b1p2 c3s2b1p3
192.168.1.143 c2s4b1p2 c2s2b1p3 c2s2b1p5 c2s2b1p9
192.168.1.231 c3s8b1p1 c3s8b1p2 c3s4b1p3 c3s7b1p6 c2s2b1p1
192.168.1.187 c5s4b1p2 c4s9b1p3
192.168.1.114 c1s1b1p10 c1s3b6p1 c1s1b10p9 c4s10b1p1
192.168.1.132 c1s1b2p1 c1s10b7p8 c4s9b1p1 c4s9b1p2 c3s6b1p3
192.168.1.164 c1s1b1p5 c1s1b1p9 c1s1b1p8 c1s1b2p2 c3s5b1p2
</code></pre>
<p>I want to create another csv file in the following format:</p>
<pre><code>values Ips
c3s8b1p1 192.168.1.231
c3s8b1p2 192.168.1.231
c3s4b1p3 192.168.1.231
c1s1b1p1 192.168.1.179
c1s1b1p2 192.168.1.179
c1s3b1p1 192.168.1.179
c3s2b1p2 192.168.1.179
c3s2b1p3 192.168.1.179
</code></pre>
<p>and so on...</p>
<p>I know this is difficult to understand and I don't know how else to explain it; sorry for that. Please give me suggestions.</p>
| -1 | 2016-08-30T10:49:13Z | 39,226,443 | <p>The following should get you started and help explain how to do things for use on a small file:</p>
<pre><code>import csv
with open('ipValues.csv', 'rb') as f_input, open('output.csv', 'wb') as f_output:
csv_input = csv.reader(f_input)
csv_output = csv.writer(f_output)
next(csv_input) # skip header
csv_output.writerow(["values", "Ips"])
for row in csv_input:
for entry in row[1:]:
csv_output.writerow([entry, row[0]])
</code></pre>
<p>Giving you a CSV output file as follows:</p>
<pre><code>values,Ips
c3s8b1p1,192.168.1.231
c3s8b1p2,192.168.1.231
c3s4b1p3,192.168.1.231
c1s1b1p1,192.168.1.179
c1s1b1p2,192.168.1.179
c1s3b1p1,192.168.1.179
c3s2b1p2,192.168.1.179
c3s2b1p3,192.168.1.179
c1s1b2p8,192.168.1.195
c1s4b7p8,192.168.1.162
c1s1b1p3,192.168.1.162
c1s1b2p2,192.168.1.162
c1s1b2p3,192.168.1.162
c1s1b2p5,192.168.1.162
c1s1b1p1,192.168.1.179
c1s1b1p2,192.168.1.179
c1s3b1p1,192.168.1.179
c3s2b1p2,192.168.1.179
c3s2b1p3,192.168.1.179
c2s4b1p2,192.168.1.143
c2s2b1p3,192.168.1.143
c2s2b1p5,192.168.1.143
c2s2b1p9,192.168.1.143
c3s8b1p1,192.168.1.231
c3s8b1p2,192.168.1.231
c3s4b1p3,192.168.1.231
c3s7b1p6,192.168.1.231
c2s2b1p1,192.168.1.231
c5s4b1p2,192.168.1.187
c4s9b1p3,192.168.1.187
c1s1b1p10,192.168.1.114
c1s3b6p1,192.168.1.114
c1s1b10p9,192.168.1.114
c4s10b1p1,192.168.1.114
c1s1b2p1,192.168.1.132
c1s10b7p8,192.168.1.132
c4s9b1p1,192.168.1.132
c4s9b1p2,192.168.1.132
c3s6b1p3,192.168.1.132
c1s1b1p5,192.168.1.164
c1s1b1p9,192.168.1.164
c1s1b1p8,192.168.1.164
c1s1b2p2,192.168.1.164
c3s5b1p2,192.168.1.164
</code></pre>
<p>This was tested using Python 2.7. It uses Python's <a href="https://docs.python.org/2/library/csv.html#module-csv" rel="nofollow"><code>csv</code></a> library to parse and create the CSV entries in your files.</p>
| 1 | 2016-08-30T11:35:02Z | [
"python",
"csv"
] |
How to group list with index wise? | 39,225,495 | <p>I have a list of lists whose elements I need to group according to user input (see the <code>split</code> variable in the code) to create a new list.
I have tried, but instead of being merged into groups, the sub-lists stay separate. </p>
<pre><code>split = 3 # user input
data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
z = [] ; y = []
for i,d in enumerate(data):
z.append(d)
if (i+1)%split==0:
y.append(z)
z = []
xx = (y+[z])
print(xx)
[[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]], [[13, 14], [15, 16], [17, 18]], []]
# ^____________________^ ^_______________________^ this needs to be merged
</code></pre>
<p>input :</p>
<pre><code>data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
</code></pre>
<p>expected output :</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 2 | 2016-08-30T10:49:35Z | 39,225,615 | <p>Using numpy:</p>
<pre><code>data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
import numpy as np
print np.array(data).reshape((split,-1)).tolist()
</code></pre>
<p>Output:</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 2 | 2016-08-30T10:55:51Z | [
"python",
"list",
"group"
] |
How to group list with index wise? | 39,225,495 | <p>I have a list of lists whose elements I need to group according to user input (see the <code>split</code> variable in the code) to create a new list.
I have tried, but instead of being merged into groups, the sub-lists stay separate. </p>
<pre><code>split = 3 # user input
data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
z = [] ; y = []
for i,d in enumerate(data):
z.append(d)
if (i+1)%split==0:
y.append(z)
z = []
xx = (y+[z])
print(xx)
[[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]], [[13, 14], [15, 16], [17, 18]], []]
# ^____________________^ ^_______________________^ this needs to be merged
</code></pre>
<p>input :</p>
<pre><code>data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
</code></pre>
<p>expected output :</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 2 | 2016-08-30T10:49:35Z | 39,225,635 | <p>You can try this -></p>
<pre><code>split = 3
data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
z=[]
x=[]
for i,j in enumerate(data):
if i!=0 and i%split==0:
z.append(x)
x=[]
for k in j:
x.append(k)
z.append(x)
print z
</code></pre>
| 1 | 2016-08-30T10:57:04Z | [
"python",
"list",
"group"
] |
How to group list with index wise? | 39,225,495 | <p>I have a list of lists whose elements I need to group according to user input (see the <code>split</code> variable in the code) to create a new list.
I have tried, but instead of being merged into groups, the sub-lists stay separate. </p>
<pre><code>split = 3 # user input
data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
z = [] ; y = []
for i,d in enumerate(data):
z.append(d)
if (i+1)%split==0:
y.append(z)
z = []
xx = (y+[z])
print(xx)
[[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]], [[13, 14], [15, 16], [17, 18]], []]
# ^____________________^ ^_______________________^ this needs to be merged
</code></pre>
<p>input :</p>
<pre><code>data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
</code></pre>
<p>expected output :</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 2 | 2016-08-30T10:49:35Z | 39,225,651 | <p>Here is one way using list comprehension:</p>
<pre><code>>>> sp = 3
>>> fragment = len(data)//sp
>>> [[t for item in data[i:i+fragment] for t in item] for i in range(0, len(data), fragment)]
[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
<p>And here is an <code>itertools</code>-based recipe:</p>
<pre><code>>>> from itertools import islice, chain
>>>
>>> [list(chain.from_iterable(islice(data, i, i+fragment))) for i in range(0, len(data), fragment)]
[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 3 | 2016-08-30T10:57:49Z | [
"python",
"list",
"group"
] |
How to group list with index wise? | 39,225,495 | <p>I have a list of lists whose elements I need to group according to user input (see the <code>split</code> variable in the code) to create a new list.
I have tried, but instead of being merged into groups, the sub-lists stay separate. </p>
<pre><code>split = 3 # user input
data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
z = [] ; y = []
for i,d in enumerate(data):
z.append(d)
if (i+1)%split==0:
y.append(z)
z = []
xx = (y+[z])
print(xx)
[[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]], [[13, 14], [15, 16], [17, 18]], []]
# ^____________________^ ^_______________________^ this needs to be merged
</code></pre>
<p>input :</p>
<pre><code>data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
</code></pre>
<p>expected output :</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 2 | 2016-08-30T10:49:35Z | 39,225,669 | <p>you can just use <code>z.extend(d)</code> instead of <code>append</code></p>
| 0 | 2016-08-30T10:58:38Z | [
"python",
"list",
"group"
] |
How to group list with index wise? | 39,225,495 | <p>I have a list of lists whose elements I need to group according to user input (see the <code>split</code> variable in the code) to create a new list.
I have tried, but instead of being merged into groups, the sub-lists stay separate. </p>
<pre><code>split = 3 # user input
data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
z = [] ; y = []
for i,d in enumerate(data):
z.append(d)
if (i+1)%split==0:
y.append(z)
z = []
xx = (y+[z])
print(xx)
[[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]], [[13, 14], [15, 16], [17, 18]], []]
# ^____________________^ ^_______________________^ this needs to be merged
</code></pre>
<p>input :</p>
<pre><code>data = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12], [13,14], [15,16], [17,18]]
</code></pre>
<p>expected output :</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 2 | 2016-08-30T10:49:35Z | 39,225,752 | <p>You can do this in basic Python like this:</p>
<pre><code>import itertools
flatlist = [*itertools.chain(*data)]
groupsize = int(len(flatlist) / split)
data2 = [flatlist[i:i+groupsize] for i in range(0, len(flatlist), groupsize)]
print(data2)
</code></pre>
<p>And the output is</p>
<pre><code>[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18]]
</code></pre>
| 3 | 2016-08-30T11:02:45Z | [
"python",
"list",
"group"
] |
pdf_multivariate_gauss() function in Python | 39,225,650 | <p>What are the necessary modules for executing the function pdf_multivariate_gauss() in IPython? </p>
<p>I tried to execute the code below, but I get errors like "ImportError" and "NameError".</p>
<p><strong><em>Code:</em></strong></p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib.mlab import bivariate_normal
import parzen_window_est
import pdf_multivariate_gauss ######## ImportError ########
import operator
from mpl_toolkits.mplot3d import Axes3D
##############################################
### Predicted bivariate Gaussian densities ###
##############################################
mu_vec = np.array([0,0])
cov_mat = np.array([[1,0],[0,1]])
x_2Dgauss = np.random.multivariate_normal(mu_vec, cov_mat, 10000)
# generate a range of 400 window widths between 0 < h < 1
h_range = np.linspace(0.001, 1, 400)
# calculate the actual density at the center [0, 0]
mu = np.array([[0],[0]])
cov = np.eye(2)
actual_pdf_val = pdf_multivariate_gauss.pdf_multivariate_gauss(np.array([[0],[0]]), mu, cov)
######## NameError #########
# get a list of the differnces (|estimate-actual|) for different window widths
parzen_estimates = [np.abs(parzen_window_est.parzen_window_est(x_2Dgauss, h=1, center=[0, 0])) for i in h_range]
# get the window width for which |estimate-actual| is closest to 0
min_index, min_value = min(enumerate(parzen_estimates), key=operator.itemgetter(1))
</code></pre>
<p><a href="http://i.stack.imgur.com/uqWUp.png" rel="nofollow">IPython output</a></p>
| -2 | 2016-08-30T10:57:47Z | 39,226,973 | <p>As far as I can tell, there is no such thing as pdf_multivariate_gauss (as pointed out already). There is a python implementation of this in <code>scipy</code>, however: <a href="http://docs.scipy.org/doc/scipy-0.18.0/reference/generated/scipy.stats.multivariate_normal.html" rel="nofollow">scipy.stats.multivariate_normal</a></p>
<p>One would use it like this:</p>
<pre><code>from scipy.stats import multivariate_normal
mvn = multivariate_normal(mu,cov) #create a multivariate Gaussian object with specified mean and covariance matrix
p = mvn.pdf(x) #evaluate the probability density at x
</code></pre>
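<p>As a quick sanity check of what that <code>pdf</code> call should return at the mean: for a standard bivariate normal (zero mean, identity covariance) the density at the origin is <code>1/(2*pi)</code>, which a few lines of plain Python reproduce without scipy:</p>

```python
import math

def std_bivariate_pdf(x, y):
    # N(0, I) density in 2D: exp(-(x^2 + y^2)/2) / (2*pi)
    return math.exp(-0.5 * (x * x + y * y)) / (2.0 * math.pi)

print(std_bivariate_pdf(0.0, 0.0))   # 0.15915494309189535, i.e. 1/(2*pi)
```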
| 1 | 2016-08-30T12:02:13Z | [
"python",
"numpy",
"matplotlib",
"scipy",
"ipython"
] |
OLA API Integration giving error as invalid partner key | 39,225,859 | <p>I wanted to use the OLA APIs for my project, so I followed the <a href="https://developers.olacabs.com/docs/overview" rel="nofollow">official docs</a> of OLA and tried the following using Python requests. This request expects a ride-estimate response from source to destination.</p>
<pre><code>import requests
headers = {'X-APP-TOKEN' : "your_api_token"}
payload= {'pickup_lat': 12.9490936, 'pickup_lng': 77.67773056, 'drop_lat': 12.9190934, 'drop_lng': 77.1777356, 'category': 'micro'}
response = requests.get('https://devapi.olacabs.com/v1/products', params=payload, headers=headers)
print(response.json())
</code></pre>
<p>As mentioned in the docs I have included the <strong>X-APP-TOKEN</strong> in the request header as well. But I'm getting the following issue.</p>
<pre><code>{'code': 'invalid_partner_key', 'message': 'Partner key is not authorized'}
</code></pre>
<p>Any help would be highly appreciated.</p>
| 2 | 2016-08-30T11:08:06Z | 39,592,477 | <p>You need to use the following url while testing(sandbox):</p>
<p><a href="http://sandbox-t.olacabs.com/v1/products" rel="nofollow">http://sandbox-t.olacabs.com/v1/products</a></p>
| 1 | 2016-09-20T11:07:22Z | [
"android",
"python",
"api"
] |
How to convert hex string into Japanese and write it to the file in Python 2.7? | 39,225,973 | <p>I am trying to convert a hex string into Japanese (codec: SHIFT-JIS) and write the Japanese output to a file using Python 2.7. However, all I get is the original hex string in the file. Can someone tell me where I went wrong? Here is the code I use:</p>
<pre><code>fd = open(path,'w')
temp_str ='\x8d\xc5\x82\xe0\x8d\x82\x8b\x4d\x82\xc8\x89\xa4\x82\xc5\x82\xa0\x82\xc1\x82\xbd\x82\xbc\x81\x76\x80\x01\xff\xff'
fd.write(temp_str.encode('shift-jis'))
fd.close()
</code></pre>
<p>All I have got in the file is "\x8d\xc5\x82\xe0\x8d\x82\x8b\x4d\x82\xc8\x89\xa4\x82\xc5\x82\xa0\x82\xc1\x82\xbd\x82\xbc\x81\x76\x80\x01\xff\xff". </p>
| 0 | 2016-08-30T11:13:45Z | 39,226,333 | <p>The string seems to be encoded in UTF-16BE:</p>
<pre><code>>>> print temp_str.decode('utf_16_be')
è·
è è¶èè覤è
è èè½è¼è
¶è
</code></pre>
<p>But it also seems malformed, ie, it was cut halfway. You should first convert the string to Unicode by decoding the bytes:</p>
<pre><code>uni_str = temp_str.decode('utf_16_be')
</code></pre>
<p>And then saving the Unicode string to a file with an encoding that you want:</p>
<pre><code>fd = open(path,'w')
fd.write(uni_str.encode('shift-jis'))
fd.close()
</code></pre>
<p>However, the codec 'shift-jis' doesn't seem to like your string:</p>
<pre><code>>>> print uni_str.encode('shift_jis')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'shift_jis' codec can't encode character u'\u8dc5' in position 0: illegal multibyte sequence
</code></pre>
<p>This is not Japanese, but Chinese:</p>
<pre><code>>>> print uni_str.encode('gb18030')
Ú���f�G���B�i�[��ѿ�d�a�Ï1�9
</code></pre>
<p>'gb18030' is a Chinese codec, <a href="https://docs.python.org/2.4/lib/standard-encodings.html" rel="nofollow">according to Python docs</a>.</p>
<p>Yes, it's gibberish because I don't have a terminal with that codec, but it is the only codec in Python that encodes the string without errors besides UTF-8/16/32.</p>
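The decode-then-encode pattern this answer recommends can be sketched end to end in Python 3 (used here purely for illustration; the thread itself is Python 2, and the UTF-16-BE interpretation is this answer's assumption):

```python
# Sketch of the decode-then-encode approach from this answer, ported to
# Python 3 for illustration (the thread itself uses Python 2).
data = (b'\x8d\xc5\x82\xe0\x8d\x82\x8b\x4d\x82\xc8\x89\xa4\x82\xc5'
        b'\x82\xa0\x82\xc1\x82\xbd\x82\xbc\x81\x76\x80\x01\xff\xff')

# Step 1: bytes -> str, using the codec the data is assumed to be in.
text = data.decode('utf_16_be')
print(len(text))  # 28 bytes -> 14 UTF-16 code units

# Step 2: str -> bytes in the codec the output file should contain.
# As the traceback in this answer shows, Shift-JIS cannot represent
# these characters:
try:
    text.encode('shift_jis')
except UnicodeEncodeError as exc:
    print('cannot encode:', exc.reason)
```

The key point is that the two steps are separate: decoding picks the codec the bytes are already in, encoding picks the codec you want the file to contain.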
| 1 | 2016-08-30T11:30:04Z | [
"python",
"python-2.7"
] |
How to convert hex string into Japanese and write it to the file in Python 2.7? | 39,225,973 | <p>I am trying to convert a hex string into Japanese (codec: SHIFT-JIS) and write the Japanese output to a file using Python 2.7. However, all I got is the original hex string in the file. Can someone tell me where I went wrong? Here is the code I use:</p>
<pre><code>fd = open(path,'w')
temp_str ='\x8d\xc5\x82\xe0\x8d\x82\x8b\x4d\x82\xc8\x89\xa4\x82\xc5\x82\xa0\x82\xc1\x82\xbd\x82\xbc\x81\x76\x80\x01\xff\xff'
fd.write(temp_str.encode('shift-jis'))
fd.close()
</code></pre>
<p>All I have got in the file is "\x8d\xc5\x82\xe0\x8d\x82\x8b\x4d\x82\xc8\x89\xa4\x82\xc5\x82\xa0\x82\xc1\x82\xbd\x82\xbc\x81\x76\x80\x01\xff\xff". </p>
| 0 | 2016-08-30T11:13:45Z | 39,226,418 | <p>Maybe you should open the file in "wb" mode instead of "w"?</p>
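To expand on this suggestion: in Python 3 especially, the result of encode() is a bytes object and must go to a file opened in binary mode. A small round-trip sketch (the sample text and temp path are arbitrary choices for illustration):

```python
# Why 'wb' matters: encoded text is bytes, and bytes should be written
# to a file opened in binary mode (in Python 3, 'w' mode would raise
# TypeError for bytes).
import os
import tempfile

data = 'テスト'.encode('shift_jis')  # str -> Shift-JIS bytes

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'out.txt')
    with open(path, 'wb') as fd:   # 'wb', not 'w'
        fd.write(data)
    with open(path, 'rb') as fd:
        print(fd.read() == data)   # True
```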
| 1 | 2016-08-30T11:33:47Z | [
"python",
"python-2.7"
] |
issue while installing python module (pbr) | 39,225,976 | <p>I just discovered CKAN and I am trying to install it on Ubuntu 14.04. I am installing it from source.</p>
<p>At one step we have to install the Python modules that CKAN requires.</p>
<pre><code>pip install -r /usr/lib/ckan/default/src/ckan/requirements.txt
</code></pre>
<p>I first got an error</p>
<blockquote>
<p>Command python setup.py egg_info failed with error code 1 in
/usr/lib/ckan/default/build/html5lib</p>
</blockquote>
<p>I solved it by upgrading setuptools</p>
<pre><code>pip install --upgrade setuptools
</code></pre>
<p>But now I got a new error with pbr and I do not know what to do</p>
<blockquote>
<p>Command python setup.py egg_info failed with error code 1 in
/usr/lib/ckan/default/build/pbr</p>
</blockquote>
<p>Before ending the installation and displaying that error, I got that message:</p>
<blockquote>
<p>Downloading/unpacking pbr==0.11.0 (from -r
/usr/lib/ckan/default/src/ckan/requirements.txt (line 27)) Running
setup.py egg_info for package pbr
Traceback (most recent call last):
File "", line 14, in
File "/usr/lib/ckan/default/build/pbr/setup.py", line 22, in
**util.cfg_to_args())
File "pbr/util.py", line 261, in cfg_to_args
wrap_commands(kwargs)
File "pbr/util.py", line 482, in wrap_commands
for cmd, _ in dist.get_command_list():
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/setuptools/dist.py",
line 528, in get_command_list
cmdclass = ep.resolve()
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pkg_resources/<strong>init</strong>.py",
line 2255, in resolve
module = <strong>import</strong>(self.module_name, fromlist=['<strong>name</strong>'], level=0)
File "pbr/testr_command.py", line 47, in
from testrepository import commands
ImportError: No module named testrepository
Complete output from command python setup.py egg_info:
Traceback (most recent call last):</p>
<p>File "", line 14, in </p>
<p>File "/usr/lib/ckan/default/build/pbr/setup.py", line 22, in
</p>
<pre><code>**util.cfg_to_args())
</code></pre>
<p>File "pbr/util.py", line 261, in cfg_to_args</p>
<pre><code>wrap_commands(kwargs)
</code></pre>
<p>File "pbr/util.py", line 482, in wrap_commands</p>
<pre><code>for cmd, _ in dist.get_command_list():
</code></pre>
<p>File
"/usr/lib/ckan/default/local/lib/python2.7/site-packages/setuptools/dist.py",
line 528, in get_command_list</p>
<pre><code>cmdclass = ep.resolve()
</code></pre>
<p>File
"/usr/lib/ckan/default/local/lib/python2.7/site-packages/pkg_resources/<strong>init</strong>.py",
line 2255, in resolve</p>
<pre><code>module = __import__(self.module_name, fromlist=['__name__'], level=0)
</code></pre>
<p>File "pbr/testr_command.py", line 47, in </p>
<pre><code>from testrepository import commands
</code></pre>
<p>ImportError: No module named testrepository</p>
</blockquote>
<p>Can someone help me to complete the installation?
Many thanks for your help</p>
| 0 | 2016-08-30T11:13:51Z | 39,279,800 | <p>I ran into this same error trying to install something else with pip (<code>httplib2.ca_certs_locater-0.2.0</code> IIRC).</p>
<p>My issue turned out to be caused by a really old version of <code>pbr</code>, which happened to be the same as the one that's blowing up for you - 0.11.0. In my case, I had what I can only assume was garbage left over from an old installation of <em>something</em>: <code>/usr/local/lib/python2.7/dist-packages/pbr-0.11.0-py2.7.egg</code>.</p>
<p>In my case, since it was my application's <code>requirements.txt</code> file that was blowing up, I just added <code>pbr==1.10.0</code> as a dependency and that fixed the problem. Running <code>pip install pbr</code> would also work.</p>
<p>Also, I'm always in the habit of keeping pip itself up to date, so that might be worth trying too, though it wasn't able to prevent the error for me this time.</p>
| 0 | 2016-09-01T19:49:21Z | [
"python",
"pip",
"ckan"
] |
issue while installing python module (pbr) | 39,225,976 | <p>I just discovered CKAN and I am trying to install it on Ubuntu 14.04. I am installing it from source.</p>
<p>At one step we have to install the Python modules that CKAN requires.</p>
<pre><code>pip install -r /usr/lib/ckan/default/src/ckan/requirements.txt
</code></pre>
<p>I first got an error</p>
<blockquote>
<p>Command python setup.py egg_info failed with error code 1 in
/usr/lib/ckan/default/build/html5lib</p>
</blockquote>
<p>I solved it by upgrading setuptools</p>
<pre><code>pip install --upgrade setuptools
</code></pre>
<p>But now I got a new error with pbr and I do not know what to do</p>
<blockquote>
<p>Command python setup.py egg_info failed with error code 1 in
/usr/lib/ckan/default/build/pbr</p>
</blockquote>
<p>Before ending the installation and displaying that error, I got that message:</p>
<blockquote>
<p>Downloading/unpacking pbr==0.11.0 (from -r
/usr/lib/ckan/default/src/ckan/requirements.txt (line 27)) Running
setup.py egg_info for package pbr
Traceback (most recent call last):
File "", line 14, in
File "/usr/lib/ckan/default/build/pbr/setup.py", line 22, in
**util.cfg_to_args())
File "pbr/util.py", line 261, in cfg_to_args
wrap_commands(kwargs)
File "pbr/util.py", line 482, in wrap_commands
for cmd, _ in dist.get_command_list():
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/setuptools/dist.py",
line 528, in get_command_list
cmdclass = ep.resolve()
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/pkg_resources/<strong>init</strong>.py",
line 2255, in resolve
module = <strong>import</strong>(self.module_name, fromlist=['<strong>name</strong>'], level=0)
File "pbr/testr_command.py", line 47, in
from testrepository import commands
ImportError: No module named testrepository
Complete output from command python setup.py egg_info:
Traceback (most recent call last):</p>
<p>File "", line 14, in </p>
<p>File "/usr/lib/ckan/default/build/pbr/setup.py", line 22, in
</p>
<pre><code>**util.cfg_to_args())
</code></pre>
<p>File "pbr/util.py", line 261, in cfg_to_args</p>
<pre><code>wrap_commands(kwargs)
</code></pre>
<p>File "pbr/util.py", line 482, in wrap_commands</p>
<pre><code>for cmd, _ in dist.get_command_list():
</code></pre>
<p>File
"/usr/lib/ckan/default/local/lib/python2.7/site-packages/setuptools/dist.py",
line 528, in get_command_list</p>
<pre><code>cmdclass = ep.resolve()
</code></pre>
<p>File
"/usr/lib/ckan/default/local/lib/python2.7/site-packages/pkg_resources/<strong>init</strong>.py",
line 2255, in resolve</p>
<pre><code>module = __import__(self.module_name, fromlist=['__name__'], level=0)
</code></pre>
<p>File "pbr/testr_command.py", line 47, in </p>
<pre><code>from testrepository import commands
</code></pre>
<p>ImportError: No module named testrepository</p>
</blockquote>
<p>Can someone help me to complete the installation?
Many thanks for your help</p>
| 0 | 2016-08-30T11:13:51Z | 39,341,117 | <p>This worked for me:</p>
<ol>
<li><p>Uninstall the previous pbr version via <code>pip uninstall pbr</code>.</p></li>
<li><p>Remove the pinned version from the requirements file <code>/usr/lib/ckan/default/src/ckan/requirements.txt</code>: replace the line <code>pbr==0.11.0</code> with just <code>pbr</code>.</p></li>
<li><p>Install the requirements again <code>pip install -r /usr/lib/ckan/default/src/ckan/requirements.txt</code></p></li>
</ol>
| 0 | 2016-09-06T05:16:18Z | [
"python",
"pip",
"ckan"
] |
how to convert header row into new columns in python pandas? | 39,226,024 | <p>I have the following dataframe:</p>
<pre><code>A,B,C
1,2,3
</code></pre>
<p>I have to convert above dataframe like following format:</p>
<pre><code>cols,vals
A,1
B,2
c,3
</code></pre>
<p>How to create column names as a new column in pandas?</p>
| 1 | 2016-08-30T11:16:30Z | 39,226,046 | <p>You can transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': {0: 1}, 'C': {0: 3}, 'B': {0: 2}})
print (df)
A B C
0 1 2 3
print (df.T)
0
A 1
B 2
C 3
df1 = df.T.reset_index()
df1.columns = ['cols','vals']
print (df1)
cols vals
0 A 1
1 B 2
2 C 3
</code></pre>
<p>If <code>DataFrame</code> has more rows, you can use:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': {0: 1, 1: 9, 2: 1},
'C': {0: 3, 1: 6, 2: 7},
'B': {0: 2, 1: 4, 2: 8}})
print (df)
A B C
0 1 2 3
1 9 4 6
2 1 8 7
df.index = 'vals' + df.index.astype(str)
print (df.T)
vals0 vals1 vals2
A 1 9 1
B 2 4 8
C 3 6 7
df1 = df.T.reset_index().rename(columns={'index':'cols'})
print (df1)
cols vals0 vals1 vals2
0 A 1 9 1
1 B 2 4 8
2 C 3 6 7
</code></pre>
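The reshaping that <code>df.T.reset_index()</code> performs on a single-row frame — pairing each column name with its value — can be illustrated without pandas at all (a plain-Python sketch, not the pandas API itself):

```python
# Plain-Python illustration of what transposing a one-row frame produces:
# each column name is paired with its value.
header = ['A', 'B', 'C']
values = [1, 2, 3]

rows = list(zip(header, values))
print(rows)  # [('A', 1), ('B', 2), ('C', 3)]
```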
| 1 | 2016-08-30T11:17:51Z | [
"python",
"python-2.7",
"pandas",
"dataframe",
"transpose"
] |
Tkinter new windows won't close properly | 39,226,126 | <p>I want my GUI to have a 'new window' option that will be just the same as the first one.</p>
<p>The problem is that it also has an exit (quit) button that doesn't work as it should: whenever I open the new window and then press the button, nothing happens on the first click, and the second click closes both windows (if 3 windows are open it closes everything on the third click, and so on).</p>
<p>This is the relevant code:</p>
<pre><code>from Tkinter import *
from ttk import *
class Application(Tk):
def __init__(self):
self.root = Tk()
self.root.geometry("250x150")
self.app = Frame(self.root)
self.app.grid()
self.create_menu()
self.create_widgets()
self.root.mainloop()
def create_menu(self):
menu = Menu(self.root)
self.root.config(menu=menu)
sub_menu = Menu(menu)
menu.add_cascade(label="File", menu=sub_menu)
sub_menu.add_command(label="New", command=self.__init__)
sub_menu.add_command(label="Run", command=self.enter)
sub_menu.add_separator()
sub_menu.add_command(label="Exit", command=self.app.quit)
</code></pre>
<p>I also tried to change:</p>
<pre><code>sub_menu.add_command(label="New", command=self.__init__)
</code></pre>
<p>to:</p>
<pre><code>sub_menu.add_command(label="New", command=self.new_window)
</code></pre>
<p>Where:</p>
<pre><code>def new_window(self):
class App(Application):
Application.__init__(self)
</code></pre>
<p>Both do the same thing.</p>
<p>How can I fix it?</p>
| 1 | 2016-08-30T11:21:27Z | 39,226,445 | <p>In a Tkinter application there may only be one Tk object. If that object is destroyed, whether explicitly or by the garbage collector, Tkinter will be disabled. Use Toplevel for the other windows instead.</p>
<p>Try this instead:</p>
<pre><code>from Tkinter import *
from ttk import *
class Application(object):
def __init__(self, master):
self.root = master
self.root.geometry("250x150")
self.app = Frame(self.root)
self.app.grid()
self.create_menu()
self.create_widgets()
def create_menu(self):
menu = Menu(self.root)
self.root.config(menu=menu)
sub_menu = Menu(menu)
menu.add_cascade(label="File", menu=sub_menu)
sub_menu.add_command(label="New", command=self.new)
sub_menu.add_command(label="Run", command=self.enter)
sub_menu.add_separator()
sub_menu.add_command(label="Exit", command=self.quit)
def new(self):
window = Toplevel(tk)
return Application(window)
def quit(self):
tk.destroy()
tk = Tk()
Application(tk)
tk.mainloop()
</code></pre>
| 1 | 2016-08-30T11:35:11Z | [
"python",
"python-2.7",
"tkinter"
] |
python error AttributeError: 'str' object has no attribute 'setdefault' | 39,226,131 | <p>I'm trying to run the django project using this command.</p>
<pre><code>python manage.py runserver 8080
</code></pre>
<p>But everytime I'm trying to run I faced the such a error.</p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
django.setup()
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/contrib/auth/models.py", line 4, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/contrib/auth/base_user.py", line 49, in <module>
class AbstractBaseUser(models.Model):
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/models/base.py", line 108, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/models/base.py", line 307, in add_to_class
value.contribute_to_class(cls, name)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/models/options.py", line 263, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/__init__.py", line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/utils.py", line 209, in __getitem__
self.ensure_defaults(alias)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/utils.py", line 181, in ensure_defaults
conn.setdefault('ATOMIC_REQUESTS', False)
AttributeError: 'str' object has no attribute 'setdefault'
</code></pre>
<p>I tried python2 (2.7.11) and python3 (3.5.1) with virtualenvwrapper.
I think it's not a bug in the project source, but something missing in the environment configuration.
But I can't figure out what the problem is.
Please help me fix it. </p>
<p>Thanks in advance.</p>
| 1 | 2016-08-30T11:21:41Z | 39,226,434 | <p>Each element in the DATABASES dict must itself be a dict. You have overwritten the 'default' entry to just be a string. </p>
<p>Since you are using sqlite3 and the default database name, you should just revert to the original version of the setting:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
</code></pre>
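The traceback makes sense once you see what Django does with the setting: <code>ensure_defaults()</code> calls <code>setdefault()</code> on <code>DATABASES['default']</code>, which works on a dict but not on a string. A minimal reproduction with plain Python objects (no Django needed):

```python
# DATABASES['default'] as a dict: setdefault works as Django expects.
good = {
    'ENGINE': 'django.db.backends.sqlite3',
    'NAME': 'db.sqlite3',
}
good.setdefault('ATOMIC_REQUESTS', False)
print(good['ATOMIC_REQUESTS'])  # False

# DATABASES['default'] accidentally set to a plain string: same call fails.
bad = 'db.sqlite3'
try:
    bad.setdefault('ATOMIC_REQUESTS', False)
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'setdefault'
```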
| 3 | 2016-08-30T11:34:28Z | [
"python",
"django"
] |
python error AttributeError: 'str' object has no attribute 'setdefault' | 39,226,131 | <p>I'm trying to run the django project using this command.</p>
<pre><code>python manage.py runserver 8080
</code></pre>
<p>But everytime I'm trying to run I faced the such a error.</p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
django.setup()
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/contrib/auth/models.py", line 4, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/contrib/auth/base_user.py", line 49, in <module>
class AbstractBaseUser(models.Model):
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/models/base.py", line 108, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/models/base.py", line 307, in add_to_class
value.contribute_to_class(cls, name)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/models/options.py", line 263, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/__init__.py", line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/utils.py", line 209, in __getitem__
self.ensure_defaults(alias)
File "/Users/admin/.virtualenvs/myprojectname/lib/python2.7/site-packages/django/db/utils.py", line 181, in ensure_defaults
conn.setdefault('ATOMIC_REQUESTS', False)
AttributeError: 'str' object has no attribute 'setdefault'
</code></pre>
<p>I tried python2 (2.7.11) and python3 (3.5.1) with virtualenvwrapper.
I think it's not a bug in the project source, but something missing in the environment configuration.
But I can't figure out what the problem is.
Please help me fix it. </p>
<p>Thanks in advance.</p>
| 1 | 2016-08-30T11:21:41Z | 39,226,549 | <p>It's better if you show us your settings.py to check if it is ok. </p>
<p>Remember to use python3 instead of python. Sometimes I got strange errors for this reason. </p>
<p><code>python3 manage.py runserver 8080</code></p>
| 0 | 2016-08-30T11:40:52Z | [
"python",
"django"
] |
aiohttp failed response.json() with status 500 | 39,226,140 | <p>Server proxies requests. </p>
<pre><code>@asyncio.coroutine
def send_async_request(method, url, data, timeout):
with ClientSession() as session:
response = yield from asyncio.wait_for(
session.request(method, url, data=data), timeout=timeout
)
return response
</code></pre>
<p>Everything works for responses with status code 200.
When a 500 response code comes back, I cannot read the JSON from the response.
Exception ServerDisconnectedError:</p>
<pre><code>response = yield from send_async_request(request.method, url)
response_json = yield from response.json()
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\aiohttp\server.py", line 285, in start
yield from self.handle_request(message, payload)
File "C:\Python34\lib\site-packages\aiohttp\web.py", line 90, in handle_request
resp = yield from handler(request)
File "D:/projects/SeleniumGridDispatcher/trunk/application.py", line 122, in proxy_wd
response_json = yield from response.json()
File "C:\Python34\lib\site-packages\aiohttp\client_reqrep.py", line 764, in json
yield from self.read()
File "C:\Python34\lib\site-packages\aiohttp\client_reqrep.py", line 720, in read
self._content = yield from self.content.read()
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 486, in wrapper
result = yield from func(self, *args, **kw)
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 541, in read
return (yield from super().read(n))
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 261, in read
block = yield from self.readany()
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 486, in wrapper
result = yield from func(self, *args, **kw)
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 549, in readany
return (yield from super().readany())
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 284, in readany
yield from self._waiter
File "C:\Python34\lib\asyncio\futures.py", line 358, in __iter__
yield self # This tells Task to wait for completion.
File "C:\Python34\lib\asyncio\tasks.py", line 290, in _wakeup
future.result()
File "C:\Python34\lib\asyncio\futures.py", line 274, in result
raise self._exception
aiohttp.errors.ServerDisconnectedError
</code></pre>
<p>Help me understand what is going on.
Python: 3.4.4
aiohttp: 0.22.5</p>
| 1 | 2016-08-30T11:21:56Z | 39,228,541 | <p>500 errors may occur at any time especially if the server is under high load or unstable. Make your code more resilient by catching the exception. Then you might retry or just return the response. </p>
<pre><code>@asyncio.coroutine
def send_async_request(method, url, data, timeout):
    with ClientSession() as session:
        response = None
        try:
            response = yield from asyncio.wait_for(
                session.request(method, url, data=data), timeout=timeout
            )
        except Exception as e:
            # 'response' was never assigned if the request failed, so
            # report the exception itself rather than response.status
            print("%s raised '%r'" % (url, e))
            # now you can decide what you want to do
            # either return None anyway or do some handling right here
        return response
</code></pre>
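The shape of this advice can be shown with stdlib asyncio only (no aiohttp). This sketch uses modern async/await syntax rather than the thread's Python 3.4 yield-from style, and <code>asyncio.TimeoutError</code> stands in for aiohttp's <code>ServerDisconnectedError</code>:

```python
# Guarding an awaited call so a failure is caught and handled, instead of
# propagating up through the proxy handler. Stdlib-only sketch; the
# timeout error here stands in for any exception the request may raise.
import asyncio

async def slow_call():
    await asyncio.sleep(0.1)
    return 'ok'

async def guarded(timeout):
    try:
        return await asyncio.wait_for(slow_call(), timeout=timeout)
    except asyncio.TimeoutError:
        # decide here: retry, return a sentinel, or re-raise
        return 'timed out'

print(asyncio.run(guarded(0.01)))  # timed out
print(asyncio.run(guarded(5)))     # ok
```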
| 0 | 2016-08-30T13:14:44Z | [
"python",
"json",
"python-asyncio",
"aiohttp"
] |
aiohttp failed response.json() with status 500 | 39,226,140 | <p>Server proxies requests. </p>
<pre><code>@asyncio.coroutine
def send_async_request(method, url, data, timeout):
with ClientSession() as session:
response = yield from asyncio.wait_for(
session.request(method, url, data=data), timeout=timeout
)
return response
</code></pre>
<p>Everything works for responses with status code 200.
When a 500 response code comes back, I cannot read the JSON from the response.
Exception ServerDisconnectedError:</p>
<pre><code>response = yield from send_async_request(request.method, url)
response_json = yield from response.json()
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\aiohttp\server.py", line 285, in start
yield from self.handle_request(message, payload)
File "C:\Python34\lib\site-packages\aiohttp\web.py", line 90, in handle_request
resp = yield from handler(request)
File "D:/projects/SeleniumGridDispatcher/trunk/application.py", line 122, in proxy_wd
response_json = yield from response.json()
File "C:\Python34\lib\site-packages\aiohttp\client_reqrep.py", line 764, in json
yield from self.read()
File "C:\Python34\lib\site-packages\aiohttp\client_reqrep.py", line 720, in read
self._content = yield from self.content.read()
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 486, in wrapper
result = yield from func(self, *args, **kw)
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 541, in read
return (yield from super().read(n))
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 261, in read
block = yield from self.readany()
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 486, in wrapper
result = yield from func(self, *args, **kw)
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 549, in readany
return (yield from super().readany())
File "C:\Python34\lib\site-packages\aiohttp\streams.py", line 284, in readany
yield from self._waiter
File "C:\Python34\lib\asyncio\futures.py", line 358, in __iter__
yield self # This tells Task to wait for completion.
File "C:\Python34\lib\asyncio\tasks.py", line 290, in _wakeup
future.result()
File "C:\Python34\lib\asyncio\futures.py", line 274, in result
raise self._exception
aiohttp.errors.ServerDisconnectedError
</code></pre>
<p>Help me understand what is going on.
Python: 3.4.4
aiohttp: 0.22.5</p>
| 1 | 2016-08-30T11:21:56Z | 39,232,014 | <p>Please check the <code>Content-Length</code> header of the 500 response.
It looks like aiohttp tries to read the JSON body, but the body is shorter than specified in the headers.</p>
| 0 | 2016-08-30T15:48:36Z | [
"python",
"json",
"python-asyncio",
"aiohttp"
] |
How to match every word in the list having single sentence using python | 39,226,172 | <p>How can I match the case below in Python? I want each and every word in the sentence to be matched against the list. </p>
<pre><code>l1=['there is a list of contents available in the fields']
>>> 'there' in l1
False
>>> 'there is a list of contents available in the fields' in l1
True
</code></pre>
| -1 | 2016-08-30T11:23:21Z | 39,226,230 | <p>Simple way</p>
<pre><code>l1=['there is a list of contents available in the fields']
>>> 'there' in l1[0]
True
</code></pre>
<p>A better way will be to iterate over all elements of the list.</p>
<pre><code>l1=['there is a list of contents available in the fields']
print(bool([i for i in l1 if 'there' in i]))
</code></pre>
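The list-comprehension-plus-bool idiom above is more commonly written with <code>any()</code>, which short-circuits on the first match. Note that <code>in</code> on a string does substring matching, so checking for whole words needs a split:

```python
l1 = ['there is a list of contents available in the fields']

# Substring check across all sentences, stopping at the first match.
print(any('there' in s for s in l1))          # True
print(any('banana' in s for s in l1))         # False

# 'in' matches substrings: 'field' is found inside 'fields'.
print(any('field' in s for s in l1))          # True

# Whole-word check: split each sentence into words first.
print(any('field' in s.split() for s in l1))  # False
```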
| 1 | 2016-08-30T11:25:39Z | [
"python"
] |