How to improve python import speed?
39,100,391
<p>This question has been asked many times on SO (for instance <a href="http://stackoverflow.com/q/16373510/2612235">here</a>), but there is no real answer yet.</p> <p>I am writing a short command line tool that renders templates. It is triggered by a Makefile:</p> <pre><code>i = $(wildcard *.in) o = $(patsubst %.in, %.out, $(i)) all: $(o) %.out: %.in ./script.py -o $@ $&lt; </code></pre> <p>In this dummy example, the Makefile parses every <code>.in</code> file to generate an <code>.out</code> file. It is very convenient for me to use <code>make</code> because I have a lot of other actions to trigger before and after this script. Moreover, I would like to remain as <a href="https://en.wikipedia.org/wiki/KISS_principle" rel="nofollow">KISS</a> as possible.</p> <blockquote> <p>Thus, I want to keep my tool simple, stupid, and process each file separately using the syntax <code>script -o out in</code></p> </blockquote> <p>My script uses the following:</p> <pre><code>#!/usr/bin/env python from jinja2 import Template, nodes from jinja2.ext import Extension import hiyapyco import argparse import re ... </code></pre> <p>The problem is that each execution costs me about 1.2s (~60ms for the processing and ~1140ms for the import directives):</p> <pre><code>$ time ./script.py -o foo.out foo.in real 0m1.625s user 0m0.452s sys 0m1.185s </code></pre> <p>The overall execution of my Makefile for 100 files is ridiculous: ~100 files x 1.2s = 120s.</p> <p>This is not a solution, but it should be.</p> <p>What alternative can I use?</p> <p><strong>EDIT</strong></p> <p>I love Python because of its readable syntax and the size of its community. In this particular case (command line tools), I have to admit Perl is still a decent alternative. The same script written in Perl (which is also an interpreted language) is about 12 times faster (using <code>Text::Xslate</code>).</p> <p>I don't want to promote Perl in any way; I am just trying to solve my biggest issue with Python: it is not yet a suitable language for simple command line tools because of the poor import time.</p>
0
2016-08-23T11:50:33Z
39,100,698
<p>It is not quite easy, but you could turn your program into one that sits in the background and accepts commands to process a file.</p> <p>Another program could feed the processing commands to it and thus make the actual start-up quite cheap.</p>
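One minimal way to sketch this idea (all names here are hypothetical, and the real rendering step is stubbed out): the worker does the expensive imports once at start-up, then loops over "infile outfile" commands fed to it by the thin client program.

```python
# Expensive imports would happen once, up here, e.g.:
# from jinja2 import Template

def render(text):
    # Placeholder for the real template-rendering step.
    return text.upper()

def serve(commands, read_file, write_file):
    """Process 'infile outfile' commands from an iterable of lines.

    read_file/write_file are injected so the loop is easy to test;
    a real background process would read commands from a pipe or
    socket and touch the filesystem instead.
    """
    for line in commands:
        parts = line.split()
        if len(parts) != 2:
            continue  # ignore malformed commands
        infile, outfile = parts
        write_file(outfile, render(read_file(infile)))

if __name__ == "__main__":
    store = {"foo.in": "hello"}   # fake filesystem for the demo
    out = {}
    serve(["foo.in foo.out"], store.__getitem__, out.__setitem__)
    print(out["foo.out"])  # HELLO
```

The point is only that `serve` pays the import cost once, however many commands it handles.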
3
2016-08-23T12:05:33Z
[ "python", "performance", "python-import" ]
39,100,929
<p>Write the template part as a separate process. The first time "script.py" is run it would launch this separate process. Once the process exists, it can be passed the input/output filenames via a named pipe. If the process gets no input for x seconds, it automatically exits. How big x is depends on what your needs are.</p> <p>So the parameters are passed to the long-running process via script.py writing to a named pipe. The imports only occur once (provided the inputs arrive fairly often), and as BPL points out this would make everything run faster.</p>
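The idle-timeout part of this scheme can be sketched like so (an assumption-laden sketch: an anonymous `os.pipe()` stands in for the named pipe, which in real use you would create with `os.mkfifo`): wait up to `timeout` seconds for a command, and signal the caller to exit when none arrives.

```python
import os
import select

def wait_for_command(fd, timeout):
    """Return one command string read from fd, or None after `timeout` idle seconds."""
    ready, _, _ = select.select([fd], [], [], timeout)
    if not ready:
        return None  # no input for `timeout` seconds: caller should exit
    data = os.read(fd, 4096)
    return data.decode().strip() or None

if __name__ == "__main__":
    r, w = os.pipe()  # stand-in for the named pipe
    os.write(w, b"foo.in foo.out\n")
    print(wait_for_command(r, 1.0))   # the queued command
    print(wait_for_command(r, 0.1))   # None: idle, time to exit
```

A real daemon would loop on `wait_for_command` and `break` on `None`.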
1
2016-08-23T12:15:46Z
Why do 4 different languages give 4 different results here?
39,100,460
<p>Consider this (all commands run on a 64-bit Arch Linux system):</p> <ul> <li><p>Perl (v5.24.0)</p> <pre><code>$ perl -le 'print 10190150730169267102/1000%10' 6 </code></pre></li> <li><p><code>awk</code> (GNU Awk 4.1.3)</p> <pre><code>$ awk 'BEGIN{print 10190150730169267102/1000%10}' 6 </code></pre></li> <li><p>R (3.3.1)</p> <pre><code>&gt; (10190150730169267102/1000)%%10 [1] 6 </code></pre></li> <li><p><code>bc</code></p> <pre><code>$ echo 10190150730169267102/1000%10 | bc 7 </code></pre></li> <li><p>Python 2 (2.7.12)</p> <pre><code>&gt;&gt;&gt; print(10190150730169267102/1000%10) 7 </code></pre></li> <li><p>Python 3 (3.5.2)</p> <pre><code>&gt;&gt;&gt; print(10190150730169267102/1000%10) 8.0 </code></pre></li> </ul> <p>So, Perl, <code>gawk</code> and <code>R</code> agree, as do <code>bc</code> and Python 2. Nevertheless, between the 6 tools tested, I got 4 different results. I understand that this has something to do with how very long integers are being rounded, but why do the different tools differ quite so much? I had expected that this would depend on the processor's ability to deal with large numbers, but it seems to depend on internal features (or bugs) of the language.</p> <p>Could someone explain what is going on behind the scenes here? What are the limitations in each language and why do they behave quite so differently?</p>
5
2016-08-23T11:54:05Z
39,100,909
<p>I can answer only for the difference between Python 2 and Python 3. <code>/</code> is integer division in Python 2, while it is true division in Python 3 (that's where the <code>.0</code> in Python 3 comes from: the output is floating point).</p> <p>To summarize:</p> <ul> <li><p>Python 2</p> <pre><code>10190150730169267102/1000%10 </code></pre> <p>equals</p> <pre><code>10190150730169267%10 </code></pre> <p>equals</p> <pre><code>7 </code></pre></li> <li><p>Python 3</p> <pre><code>10190150730169267102/1000%10 </code></pre> <p>equals</p> <pre><code>10190150730169267.102%10 </code></pre> <p>equals</p> <pre><code>7.102 </code></pre></li> </ul> <p>but because of the internal floating-point representation it is (wrongly) computed as 8.0.</p> <p>You may note that the correct answer may be 7 or 7.102, depending on whether we consider the division to be floating-point or integer division. So only Python 2 and bc give correct answers. Python 3 would also give the correct answer with integer division (<code>10190150730169267102//1000%10</code>).</p> <p>Python supports <strong>arbitrarily</strong> large <strong>integers</strong> natively!</p>
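The difference between the two operators is easy to check directly in Python 3:

```python
n = 10190150730169267102

# Floor division keeps exact integers all the way through:
print(n // 1000 % 10)   # 7

# True division produces a float, which cannot hold this many digits exactly:
print(n / 1000 % 10)    # 8.0

# The exact integer quotient, for comparison:
print(n // 1000)        # 10190150730169267
```

So `//` reproduces the Python 2 (and bc) result, while `/` shows the floating-point artefact.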
3
2016-08-23T12:14:59Z
[ "python", "perl", "awk", "rounding", "long-integer" ]
39,106,941
<p>You're seeing different results for two reasons:</p> <ol> <li><p>The division step is doing two different things: in some of the languages you tried, it represents <em>integer</em> division, which discards the fractional part of the result and just keeps the integer part. In others it represents actual mathematical division (which following Python's terminology I'll call "true division" below), returning a floating-point result close to the true quotient.</p></li> <li><p>In some languages (those with support for arbitrary precision), the large numerator value <code>10190150730169267102</code> is being represented exactly; in others, it's replaced by the nearest representable floating-point value.</p></li> </ol> <p>The different combinations of the possibilities in 1. and 2. above give you the different results.</p> <p>In detail: in Perl, awk, and R, we're working with floating-point values and true division. The value <code>10190150730169267102</code> is too large to store in a machine integer, so it's stored in the usual IEEE 754 binary64 floating-point format. That format can't represent that particular value exactly, so what gets stored is the closest value that <em>is</em> representable in that format, which is <code>10190150730169266176.0</code>. Now we divide that approximation by <code>1000</code>, again giving a floating-point result. The exact quotient, <code>10190150730169266.176</code>, is again not exactly representable in the binary64 format, and we get the closest representable float, which happens to be <code>10190150730169266.0</code>. Taking a remainder modulo <code>10</code> gives <code>6</code>.</p> <p>In bc and Python 2, we're working with arbitrary-precision integers and integer division. Both those languages can represent the numerator exactly. 
The division result is then <code>10190150730169267</code> (we're doing <em>integer division</em>, not <em>true division</em>, so the fractional part is discarded), and the remainder modulo <code>10</code> is <code>7</code>. (This is oversimplifying a bit: the format that bc is using internally is somewhat closer to Python's <code>Decimal</code> type than to an arbitrary-precision integer type, but in this case the effect is the same.)</p> <p>In Python 3, we're working with arbitrary-precision integers and true division. The numerator is represented exactly, but the result of the division is the nearest floating-point value to the true quotient. In this case the exact quotient is <code>10190150730169267.102</code>, and the closest representable floating-point value is <code>10190150730169268.0</code>. Taking the remainder of that value modulo <code>10</code> gives <code>8</code>.</p> <p>Summary:</p> <ul> <li>Perl, awk, R: floating-point approximations, true division</li> <li>bc, Python 2: arbitrary-precision integers, integer division</li> <li>Python 3: arbitrary-precision integers, true division</li> </ul>
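These representability claims can be verified directly from a Python 3 prompt:

```python
n = 10190150730169267102

# The numerator is not exactly representable in binary64;
# it rounds to the nearest representable value:
print(int(float(n)))    # 10190150730169266176

# The true quotient 10190150730169267.102 rounds to the nearest float:
print(int(n / 1000))    # 10190150730169268

# Hence the Python 3 result:
print(n / 1000 % 10)    # 8.0
```

Both intermediate values match the ones derived in the answer above.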
9
2016-08-23T17:01:27Z
39,215,829
<p>In Perl 6:</p> <pre><code>➜ ~ perl6 -e 'say(10190150730169267102 div 1000 mod 10)' 7 ➜ ~ perl6 -e 'say(10190150730169267102/1000%10)' 7.102 </code></pre> <p>So, if you are not sure which language is correct, try asking Perl 6. :)</p>
1
2016-08-29T21:58:38Z
GAE/P: Implementing Exponential backoff for RPC calls
39,100,535
<p>I know that exponential backoff is a good thing when RPC calls fail. So far in my GAE/P app I have implemented exponential backoff by using the task queue:</p> <pre><code>deferred.defer(function_that_makes_RPC_call) </code></pre> <p>If the function that does the RPC call raises an exception, then the exponential backoff of the task queue takes care of it, and I don't have to worry about it.</p> <p>A problem, however, is that deferred.defer is itself an RPC call that can fail! I sometimes get this error:</p> <blockquote> <p>DeadlineExceededError: The API call taskqueue.BulkAdd() took too long to respond and was cancelled.</p> </blockquote> <p>So it seems I can no longer be lazy and have to implement my own exponential backoff. :(</p> <p>I'm thinking to put a wrapper around <code>deferred.defer</code> that implements exponential backoff using <a href="https://github.com/litl/backoff" rel="nofollow">backoff</a>, like this:</p> <pre><code>@backoff.on_exception(backoff.expo, (exception1, exception2, ...), max_tries=8) def defer_wrapper(function_that_makes_RPC_call): deferred.defer(function_that_makes_RPC_call) </code></pre> <p>Here, the decorator implements the backoff where a retry happens when one of the enumerated exceptions (e.g., exception1, exception2, ...) is raised.</p> <p>A couple questions about this:</p> <ol> <li>Is this a good solution for implementing exponential backoff?</li> <li>What exceptions do I need to list? Anything other than DeadlineExceededError?</li> </ol> <p>I know it is somewhat redundant to have my own exponential backoff and then submit to a task queue, but I'm thinking that <code>deferred.defer</code> should fail more rarely than other RPC calls and I'd like to respond to the request ASAP.</p>
1
2016-08-23T11:57:56Z
39,101,667
<p>In particular for the <code>DeadlineExceededError</code> in attempts to enqueue a deferred task, I'd just do back-to-back retries instead of using an exponential backoff - the attempts will be spaced 5s apart due to the deadline interval expiration itself anyways, which gives a max of 12 retries before the request itself hits its deadline.</p> <p>Probably a good idea for other types of failures, though.</p>
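A back-to-back retry wrapper along those lines might look like the following sketch (everything here is illustrative: the flaky function and the exception type stand in for <code>deferred.defer</code> and <code>DeadlineExceededError</code>):

```python
def retry(func, exceptions, max_tries=12):
    """Call func(), retrying immediately on the listed exceptions.

    No sleep between attempts: as noted above, for deadline errors the
    ~5s API deadline itself already spaces the attempts out.
    """
    for attempt in range(1, max_tries + 1):
        try:
            return func()
        except exceptions:
            if attempt == max_tries:
                raise  # out of retries: let the caller see the failure

if __name__ == "__main__":
    calls = []

    def flaky():
        calls.append(1)
        if len(calls) < 3:
            raise TimeoutError("simulated deadline")  # stand-in exception
        return "enqueued"

    print(retry(flaky, (TimeoutError,)))  # succeeds on the third attempt
```

In the real app you would call something like `retry(lambda: deferred.defer(fn), (DeadlineExceededError,))`.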
1
2016-08-23T12:47:59Z
[ "python", "google-app-engine", "rpc" ]
Print multiple statements on one line for input
39,100,542
<p>How do I allow the user to input values and receive the answer, whilst keeping the values of "x + xy + y" on one line?</p> <pre><code>print("Calculator") x = input("") xy = input("") y = input("") if xy == "+": print(x+y) elif xy == "-": print(x-y) elif xy == "*": print(x*y) elif xy == "/": print(x/y) </code></pre>
-2
2016-08-23T11:58:20Z
39,100,912
<p>I'd suggest using a single <code>input</code> statement and then using a simple <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expression</a> to parse the string into <code>x</code>, <code>y</code> and the operator. For example, this pattern: <code>(\d+)\s*([-+*/])\s*(\d+)</code>. Here, <code>\d+</code> means "one or more digits", <code>\s*</code> means "zero or more spaces", and <code>[-+*/]</code> means "any of those four symbols". The parts within <code>(...)</code> can later be extracted.</p> <pre><code>import re expr = input() # get one input for entire line m = re.match(r"(\d+)\s*([-+*/])\s*(\d+)", expr) # match expression if m: # check whether we have a match x, op, y = m.groups() # get the stuff within pairs of (...) x, y = int(x), int(y) # don't forget to cast to int! if op == "+": print(x + y) elif ...: # check operators -, *, / ... else: print("Invalid expression") </code></pre> <p>Alternatively to four <code>if/elif</code>, you could also create a dictionary, mapping operator symbols to functions:</p> <pre><code>operators = {"+": lambda n, m: n + m} </code></pre> <p>And then just get the right function from that dict and apply it to the operands:</p> <pre><code> print(operators[op](x, y)) </code></pre>
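Putting the two ideas together, a complete runnable version (Python 3; function and variable names are arbitrary) might look like this:

```python
import re

# Map each operator symbol to the function that applies it.
operators = {
    "+": lambda n, m: n + m,
    "-": lambda n, m: n - m,
    "*": lambda n, m: n * m,
    "/": lambda n, m: n / m,
}

def calculate(expr):
    """Parse 'x op y' and return the result, or None for invalid input."""
    m = re.match(r"(\d+)\s*([-+*/])\s*(\d+)\s*$", expr)
    if not m:
        return None
    x, op, y = m.groups()
    return operators[op](int(x), int(y))

if __name__ == "__main__":
    print(calculate("3 + 4"))   # 7
    print(calculate("10/4"))    # 2.5
    print(calculate("oops"))    # None
```

With this in place the interactive loop is just `print(calculate(input("Calculator: ")))`.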
1
2016-08-23T12:15:09Z
[ "python" ]
39,100,997
<p>Here's another possibility.</p> <pre><code>raw = raw_input("Calculator: ") raw1 = raw.split(" ") x = int(raw1[0]) xy = raw1[1] y = int(raw1[2]) if xy == "+": print raw, "=", x + y elif xy == "-": print raw, "=", x-y elif xy == "/": print raw, "=", x/y elif xy == "*": print raw, "=", x*y </code></pre>
0
2016-08-23T12:18:15Z
39,101,133
<p>You can get input like this:</p> <pre><code>cal = input("Calculator: ").strip().split() x, xy, y = int(cal[0]), cal[1], int(cal[2]) </code></pre> <p>Then you can process your input data.</p>
0
2016-08-23T12:23:52Z
Python: print a string spanning to a given width, but first part left-aligned and second part right-aligned
39,100,548
<p>I would like to output something like:</p> <pre><code>===&gt;&gt;&gt; [FINISHED] Building sources: bla (1h:20m:30s) ===&gt;&gt;&gt; [FINISHED] Building sources: The brown fox jumped... (7h:05m:00s) </code></pre> <p>That is, a string filling a width of <code>N</code> characters, with a first part left-aligned, and a second part right-aligned.</p> <p>I have a <code>print</code> in a function and I just want to get an easy-to-read output. At the end of the execution of the script, I will see a few lines like the ones above.</p> <p>Some more comments:</p> <ul> <li>The two parts of the string together are never going to exceed <code>N</code>.</li> <li><code>N</code> is a constant value.</li> <li>I've already got the two strings, I don't need any date formatting.</li> </ul> <p>Is it possible to do this using Python's string <code>format</code> in a generic way? Something like:</p> <pre><code>N=80 first_part_of_the_string = "===&gt;&gt;&gt; [FINISHED] Building sources: " + some_random_text second_part_of_the_string = my_function_to_convert_time_to_hms(time) print("{&lt;}{:N&gt;}".format(first_part_of_the_string, second_part_of_the_string)) </code></pre>
-3
2016-08-23T11:58:44Z
39,100,754
<p>Assuming you know the width of the line, you can use <code>'{:n}'.format(string)</code> (with <code>n</code> a number) to pad the string with spaces up to length n. This does not shorten the string if it is longer than n, which you state will never be the case.</p> <pre><code>'===&gt;&gt;&gt; [FINISHED] Building sources: {:35} ({})'.format('bla', 'time') </code></pre> <p>In a similar fashion you can format the time by padding with zeroes: <code>{:02}</code></p> <pre><code>hour = 1 minute = 20 second = 30 prefix = '===&gt;&gt;&gt; [FINISHED] Building sources: ' content = 'bla' time = '({:02}h:{:02}m:{:02}s)'.format(hour, minute, second) print '{}{:35} {}'.format(prefix, content, time) </code></pre> <p>printing</p> <pre><code>'===&gt;&gt;&gt; [FINISHED] Building sources: bla (01h:20m:30s)' </code></pre>
1
2016-08-23T12:08:20Z
[ "python", "string", "format", "text-alignment" ]
39,100,972
<p>Just one way, adding the right number of spaces...</p> <pre><code>&gt;&gt;&gt; for a, b in ('fsad', 'trwe'), ('gregrfgsd', '5435234523554'): print a + ' ' * (50 - len(a + b)) + b fsad trwe gregrfgsd 5435234523554 </code></pre>
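The same effect can be had with <code>str.format</code> by computing the right-hand field width from <code>N</code> (a sketch; the helper name is arbitrary):

```python
def span(left, right, width=80):
    """Left-align `left` and right-align `right` within `width` columns."""
    # The nested {} in the spec takes the computed field width as an argument.
    return "{}{:>{}}".format(left, right, width - len(left))

line = span("===>>> [FINISHED] Building sources: bla", "(1h:20m:30s)", 60)
print(line)
print(len(line))  # 60
```

This relies on the stated guarantee that the two parts together never exceed `N`; if they did, the line would simply overflow rather than truncate.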
1
2016-08-23T12:17:16Z
Skip new line while printing with ternary operator in python
39,100,721
<p>To print a statement and to prevent going into the new line one could simply add a comma in the end:</p> <pre><code>print "text", </code></pre> <p>But how can I do the same using a ternary operator? This one causes invalid syntax:</p> <pre><code>print ("A", if True else "B", ) </code></pre>
1
2016-08-23T12:07:09Z
39,100,859
<p>Instead of using ugly hacks, we can define a function, called <code>special_print</code> below, that keeps every printed value on the same line:</p> <pre><code>import sys def special_print(value): sys.stdout.write(value) special_print('ab') special_print('cd') </code></pre> <p>Result:</p> <pre><code>abcd </code></pre> <p>You can even mix normal print with <code>special_print</code>:</p> <pre><code>print('whatever') special_print('ab') special_print('cd') </code></pre> <p>Result:</p> <pre><code>whatever abcd </code></pre> <p>Note that <code>sys.stdout.write</code> only accepts strings, so you may need to wrap other values in <code>str()</code>, but for displaying string values it just works.</p> <p>Hope that helps!</p>
1
2016-08-23T12:12:50Z
[ "python", "python-2.7" ]
39,100,869
<p>You can use the <code>or</code> operator as well, like this:</p> <pre><code>a = None b = 12 print a or b, print "is a number" # prints: 12 is a number </code></pre> <p>Just note that the expression above is evaluated <strong>lazily</strong>: if <code>bool(a)</code> is <code>True</code>, <code>a or b</code> evaluates to <code>a</code> without even looking at <code>b</code>; otherwise it evaluates to <code>b</code>, even if <code>b</code> is falsy too (<strong>so</strong> <code>a or b</code> <strong>is not generally equivalent to</strong> <code>b or a</code>). In the above example, for instance, if <code>b = ""</code> it would just print "is a number" (<a href="https://repl.it/Cq2k/1" rel="nofollow">example</a>).</p>
0
2016-08-23T12:13:13Z
[ "python", "python-2.7" ]
39,100,946
<pre><code>print "A" if True else "B", print "Hi" </code></pre> <p><strong>Output: A Hi</strong></p>
1
2016-08-23T12:16:14Z
39,101,007
<p>I guess you can look at it this as one statement:</p> <pre><code>"A" if True else "B" </code></pre> <p>Then your print statement becomes:</p> <pre><code>print "A" if True else "B", </code></pre> <p>That should print "A" without the newline character (or "B" if the condition is <code>False</code>). </p>
1
2016-08-23T12:18:34Z
39,101,034
<blockquote> <p>[...] to prevent going into the new line one could simply add a comma in the end</p> </blockquote> <p>The solution is in your question already. One could simply add a comma in the end:</p> <pre><code>print "A" if True else "B", </code></pre> <hr> <p>However Python 3 has been out for closer to a <strong>decade</strong> now so I will shamelessly plug the new <code>print</code> function that has much saner syntax:</p> <pre><code>from __future__ import print_function print('A' if True else 'B', end=' ') </code></pre> <p>Future imports/Python 3 solved your problem efficiently, and the strange statement syntax is just a bad memory from past. As a plus, you're now forward-compatible!</p>
5
2016-08-23T12:19:40Z
Optimal way to check all fields in a json data?
39,100,856
<pre class="lang-none prettyprint-override"><code>data = {u'name': None, u'region': None, u'id': u'test1', u'code': None, u'city': None, u'first_name': None, u'state_or_province': None, u'primary_phone': None, u'data2': [{u'status': u'deleted', u'name': u'b1', u'id': u'b1', u'modified_at': u'2016-07-13T10:22:47Z', u'eid': u'0012340', u'device_type': u'Mad box', u'custom_data': None, u'is_managed': True, u'number': None, u'sms_id': u'sms1'}], u'secondary_phone': None, u'sms_id': u'sms1', u'status': u'deleted', u'users': [{u'allow_content': False, u'status': u'deleted', u'sign_in_count': 0, u'max1': {u'TV': u'TV-MA', u'MPAA': u'NC-17', u'Unrated': u'UR'}, u'name': u'', u'subscriber_id': u'b1', u'gender': None, u'ids': [], u'modified_at': u'2016-07-27T10:18:19Z', u'allow_adult_content': False, u'sharing_threshold': {}, u'custom_data': None, u'user_id': u'test@test.com', u'email': None, u'sms_id': u'sms1'}, {u'allow_unrated_content': None, u'status': u'deleted', u'sign_in_count': 0, u'max_rating': {}, u'name': None, u'subscriber_id': u'testsub1', u'gender': None, u'offer_uri_ids': [], u'modified_at': u'2016-07-13T10:13:32Z', u'allow_adult_content': None, u'sharing_threshold': {}, u'custom_data': None, u'user_id': u'test', u'email': u'asdf@god.com', u'sms_id': u'sms1'}], u'ref': u'test@test.com', u'pin_code': None, u'credit_limit': 100.0, u'expiration': None, u'country': None, u'modified_at': u'2016-07-27T10:18:19Z', u'last_change_date': None, u'custom_data': None, u'address_line_1': None, u'address_line_2': None, u'change_id': None, u'billing_codes': []} </code></pre> <p>I need to verify all keys and values for empty data. I tried <code>if/else</code>s, but this takes too much code. I am looking for a better method than the following:</p> <pre><code>parsed_json = [data] if 'name' not in parsed_json[0]: print "name is not present" if parsed_json[0]['name'] == "": print "name value is empty" if 'region' not in parsed_json[0]: print "region is not present" if parsed_json[0]['region'] == "": print "region value is empty" </code></pre>
-1
2016-08-23T12:12:34Z
39,101,700
<p><strong>Python 3.x Code:</strong></p> <pre><code>for key, value in parsed_json[0].items(): if value == "": print(key + " value is empty.") </code></pre> <p>Note: This code does not differentiate error messages by key (each one gets the same error message) and has not been properly escaped. If you're getting this data from untrusted sources (pro tip, they're all untrusted sources), you should have error catching/handling to account for missing/mangled key names.</p>
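Since this data nests dicts inside lists (<code>data2</code>, <code>users</code>), a recursive walk can report empty values at any depth. A Python 3 sketch (the helper name and the notion of "empty" - <code>None</code>, <code>""</code>, <code>[]</code>, <code>{}</code> - are my own choices, not from the question):

```python
def find_empty(obj, path=""):
    """Yield dotted paths of values that are None, "", [] or {}."""
    if isinstance(obj, dict):
        items = obj.items()
    elif isinstance(obj, list):
        items = enumerate(obj)
    else:
        return  # scalar leaf: nothing to descend into
    for key, value in items:
        child = "{}.{}".format(path, key) if path else str(key)
        if value in (None, "", [], {}):
            yield child
        else:
            yield from find_empty(value, child)

sample = {"name": None, "users": [{"email": "", "id": "u1"}], "tags": []}
print(sorted(find_empty(sample)))  # ['name', 'tags', 'users.0.email']
```

Running it over the question's `data` dict would flag every `None`, empty string, empty list, and empty dict with a path showing where it lives.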
0
2016-08-23T12:49:24Z
[ "python", "json", "list", "dictionary", "data-structures" ]
Optimal way to check all fields in a json data?
39,100,856
<pre class="lang-none prettyprint-override"><code>data = {u'name': None, u'region': None, u'id': u'test1', u'code': None, u'city': None, u'first_name': None, u'state_or_province': None, u'primary_phone': None, u'data2': [{u'status': u'deleted', u'name': u'b1', u'id': u'b1', u'modified_at': u'2016-07-13T10:22:47Z', u'eid': u'0012340', u'device_type': u'Mad box', u'custom_data': None, u'is_managed': True, u'number': None, u'sms_id': u'sms1'}], u'secondary_phone': None, u'sms_id': u'sms1', u'status': u'deleted', u'users': [{u'allow_content': False, u'status': u'deleted', u'sign_in_count': 0, u'max1': {u'TV': u'TV-MA', u'MPAA': u'NC-17', u'Unrated': u'UR'}, u'name': u'', u'subscriber_id': u'b1', u'gender': None, u'ids': [], u'modified_at': u'2016-07-27T10:18:19Z', u'allow_adult_content': False, u'sharing_threshold': {}, u'custom_data': None, u'user_id': u'test@test.com', u'email': None, u'sms_id': u'sms1'}, {u'allow_unrated_content': None, u'status': u'deleted', u'sign_in_count': 0, u'max_rating': {}, u'name': None, u'subscriber_id': u'testsub1', u'gender': None, u'offer_uri_ids': [], u'modified_at': u'2016-07-13T10:13:32Z', u'allow_adult_content': None, u'sharing_threshold': {}, u'custom_data': None, u'user_id': u'test', u'email': u'asdf@god.com', u'sms_id': u'sms1'}], u'ref': u'test@test.com', u'pin_code': None, u'credit_limit': 100.0, u'expiration': None, u'country': None, u'modified_at': u'2016-07-27T10:18:19Z', u'last_change_date': None, u'custom_data': None, u'address_line_1': None, u'address_line_2': None, u'change_id': None, u'billing_codes': []} </code></pre> <p>I need to verify all key's and value's for empty data. Tried <code>if/else</code>s, but this takes too much code. 
Looking for a better method than the following:</p> <pre><code>parsed_json = [data] if 'name' not in parsed_json[0]: print "name is not present" if parsed_json[0]['name'] is "": print "name value is empty" if 'region' not in parsed_json[0]: print "region is not present" if parsed_json[0]['region'] is "": print "region value is empty" </code></pre>
-1
2016-08-23T12:12:34Z
39,102,699
<p>To get a filtered dictionary containing only the empty values, you could do this:</p> <p><strong>Python 2.x</strong></p> <pre><code>empty_values = {k: v for k, v in data.iteritems() if not v} </code></pre> <p><strong>Python 3.x</strong></p> <pre><code>empty_values = {k: v for k, v in data.items() if not v} </code></pre>
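For example, on a small made-up subset of the question's data (note that `not v` also treats `0`, `0.0`, `False`, `[]` and `{}` as empty):

```python
# made-up subset of the question's data
data = {"name": None, "id": "test1", "city": "", "credit_limit": 100.0}

empty_values = {k: v for k, v in data.items() if not v}
print(sorted(empty_values))  # ['city', 'name']
```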
0
2016-08-23T13:32:39Z
[ "python", "json", "list", "dictionary", "data-structures" ]
Optimal way to check all fields in JSON data?
39,100,856
<pre class="lang-none prettyprint-override"><code>data = {u'name': None, u'region': None, u'id': u'test1', u'code': None, u'city': None, u'first_name': None, u'state_or_province': None, u'primary_phone': None, u'data2': [{u'status': u'deleted', u'name': u'b1', u'id': u'b1', u'modified_at': u'2016-07-13T10:22:47Z', u'eid': u'0012340', u'device_type': u'Mad box', u'custom_data': None, u'is_managed': True, u'number': None, u'sms_id': u'sms1'}], u'secondary_phone': None, u'sms_id': u'sms1', u'status': u'deleted', u'users': [{u'allow_content': False, u'status': u'deleted', u'sign_in_count': 0, u'max1': {u'TV': u'TV-MA', u'MPAA': u'NC-17', u'Unrated': u'UR'}, u'name': u'', u'subscriber_id': u'b1', u'gender': None, u'ids': [], u'modified_at': u'2016-07-27T10:18:19Z', u'allow_adult_content': False, u'sharing_threshold': {}, u'custom_data': None, u'user_id': u'test@test.com', u'email': None, u'sms_id': u'sms1'}, {u'allow_unrated_content': None, u'status': u'deleted', u'sign_in_count': 0, u'max_rating': {}, u'name': None, u'subscriber_id': u'testsub1', u'gender': None, u'offer_uri_ids': [], u'modified_at': u'2016-07-13T10:13:32Z', u'allow_adult_content': None, u'sharing_threshold': {}, u'custom_data': None, u'user_id': u'test', u'email': u'asdf@god.com', u'sms_id': u'sms1'}], u'ref': u'test@test.com', u'pin_code': None, u'credit_limit': 100.0, u'expiration': None, u'country': None, u'modified_at': u'2016-07-27T10:18:19Z', u'last_change_date': None, u'custom_data': None, u'address_line_1': None, u'address_line_2': None, u'change_id': None, u'billing_codes': []} </code></pre> <p>I need to verify all key's and value's for empty data. Tried <code>if/else</code>s, but this takes too much code. 
Looking for a better method than the following:</p> <pre><code>parsed_json = [data] if 'name' not in parsed_json[0]: print "name is not present" if parsed_json[0]['name'] is "": print "name value is empty" if 'region' not in parsed_json[0]: print "region is not present" if parsed_json[0]['region'] is "": print "region value is empty" </code></pre>
-1
2016-08-23T12:12:34Z
39,104,064
<p>Since a JSON object is a recursive data structure, probably the easiest way to do what you want is recursively:</p> <pre><code>def verify(key, value): if not value: print('"{}" is not present'.format(key)) elif isinstance(value, dict): for k, v in value.items(): verify(k, v) elif isinstance(value, list): for i, element in enumerate(value): verify('{}[{}]'.format(key, i), element) parsed_json = [data] verify('parsed_json', parsed_json[0]) </code></pre> <p>Note: This will consider numerical values like <code>0</code> and <code>0.0</code> as not present. If that's not what you want, you'll need to check for those types and modify the first <code>if</code> accordingly.</p>
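If you do want to keep numeric zeros, one possible shape for that modified check is a small helper replacing the first `if` (a sketch, not part of the original answer):

```python
def is_missing(value):
    # only None, "", [] and {} count as missing; 0, 0.0 and False are kept
    return value is None or value in ("", [], {})

print(is_missing(0))    # False
print(is_missing(0.0))  # False
print(is_missing(""))   # True
print(is_missing({}))   # True
```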
0
2016-08-23T14:32:42Z
[ "python", "json", "list", "dictionary", "data-structures" ]
How do I release memory used by a pandas dataframe?
39,100,971
<p>I have a really large csv file that I opened in pandas as follows....</p> <pre><code>import pandas df = pandas.read_csv('large_txt_file.txt') </code></pre> <p>Once I do this my memory usage increases by 2GB, which is expected because this file contains millions of rows. My problem comes when I need to release this memory. I ran....</p> <pre><code>del df </code></pre> <p>However, my memory usage did not drop. Is this the wrong approach to release memory used by a pandas data frame? If it is, what is the proper way? </p>
10
2016-08-23T12:17:10Z
39,101,264
<p><code>del df</code> will not release the memory if there are any other references to the <code>df</code> object at the time of deletion. You need to delete all references to it to release the memory.</p> <p>In other words, all the names bound to the dataframe must be deleted to trigger garbage collection.</p> <p>Use <a href="https://mg.pov.lt/objgraph/" rel="nofollow">objgraph</a> to check what is holding onto the objects.</p>
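A stdlib-only sketch of the reference-counting behaviour described above, with a plain list standing in for the dataframe:

```python
import sys

df = ["pretend", "dataframe"]    # stand-in for a large object
alias = df                       # a second name bound to the same object
# 2 on CPython: the names `df` and `alias` (getrefcount's own argument is subtracted)
print(sys.getrefcount(df) - 1)
del df                           # removes the *name* `df`; the object survives via `alias`
print(alias[0])                  # pretend
del alias                        # now the last reference is gone and the memory can be reclaimed
```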
0
2016-08-23T12:29:48Z
[ "python", "pandas", "memory" ]
How do I release memory used by a pandas dataframe?
39,100,971
<p>I have a really large csv file that I opened in pandas as follows....</p> <pre><code>import pandas df = pandas.read_csv('large_txt_file.txt') </code></pre> <p>Once I do this my memory usage increases by 2GB, which is expected because this file contains millions of rows. My problem comes when I need to release this memory. I ran....</p> <pre><code>del df </code></pre> <p>However, my memory usage did not drop. Is this the wrong approach to release memory used by a pandas data frame? If it is, what is the proper way? </p>
10
2016-08-23T12:17:10Z
39,101,287
<p>As noted in the comments, there are some things to try: <code>gc.collect</code> (@EdChum) may clear stuff, for example. At least from my experience, these things sometimes work and often don't.</p> <p>There is one thing that always works, however, because it is done at the OS, not language, level.</p> <p>Suppose you have a function that creates a huge intermediate DataFrame, and returns a smaller result (which might also be a DataFrame):</p> <pre><code>def huge_intermediate_calc(something): ... huge_df = pd.DataFrame(...) ... return some_aggregate </code></pre> <p>Then if you do something like</p> <pre><code>import multiprocessing result = multiprocessing.Pool(1).map(huge_intermediate_calc, [something_])[0] </code></pre> <p>then <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow">the function is executed in a different process</a>. When that process completes, the OS reclaims all the resources it used. There's really nothing Python, pandas, or the garbage collector could do to stop that.</p>
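A self-contained toy version of this pattern, with a plain list standing in for the huge intermediate DataFrame:

```python
import multiprocessing

def huge_intermediate_calc(n):
    big = list(range(n))    # stand-in for the huge intermediate object
    return sum(big)         # only this small aggregate is sent back to the parent

if __name__ == "__main__":
    # the worker process exits when the work is done, so the OS reclaims
    # everything it allocated, including `big`
    with multiprocessing.Pool(1) as pool:
        result = pool.map(huge_intermediate_calc, [10])[0]
    print(result)  # 45
```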
4
2016-08-23T12:31:01Z
[ "python", "pandas", "memory" ]
How do I release memory used by a pandas dataframe?
39,100,971
<p>I have a really large csv file that I opened in pandas as follows....</p> <pre><code>import pandas df = pandas.read_csv('large_txt_file.txt') </code></pre> <p>Once I do this my memory usage increases by 2GB, which is expected because this file contains millions of rows. My problem comes when I need to release this memory. I ran....</p> <pre><code>del df </code></pre> <p>However, my memory usage did not drop. Is this the wrong approach to release memory used by a pandas data frame? If it is, what is the proper way? </p>
10
2016-08-23T12:17:10Z
39,377,643
<p>Reducing memory usage in Python is difficult, because <a href="http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm" rel="nofollow">Python does not actually release memory back to the operating system</a>. If you delete objects, then the memory is available to new Python objects, but not <code>free()</code>'d back to the system (<a href="http://stackoverflow.com/q/15455048/509706">see this question</a>).</p> <p>If you stick to numeric numpy arrays, those are freed, but boxed objects are not.</p> <pre><code>&gt;&gt;&gt; import os, psutil, numpy as np &gt;&gt;&gt; def usage(): ... process = psutil.Process(os.getpid()) ... return process.get_memory_info()[0] / float(2 ** 20) ... &gt;&gt;&gt; usage() # initial memory usage 27.5 &gt;&gt;&gt; arr = np.arange(10 ** 8) # create a large array without boxing &gt;&gt;&gt; usage() 790.46875 &gt;&gt;&gt; del arr &gt;&gt;&gt; usage() 27.52734375 # numpy just free()'d the array &gt;&gt;&gt; arr = np.arange(10 ** 8, dtype='O') # create lots of objects &gt;&gt;&gt; usage() 3135.109375 &gt;&gt;&gt; del arr &gt;&gt;&gt; usage() 2372.16796875 # numpy frees the array, but python keeps the heap big </code></pre> <h2>Reducing the Number of Dataframes</h2> <p>Python keeps our memory at a high watermark, but we can reduce the total number of dataframes we create. When modifying your dataframe, prefer <code>inplace=True</code>, so you don't create copies.</p> <p>Another common gotcha is holding on to copies of previously created dataframes in ipython:</p> <pre><code>In [1]: import pandas as pd In [2]: df = pd.DataFrame({'foo': [1,2,3,4]}) In [3]: df + 1 Out[3]: foo 0 2 1 3 2 4 3 5 In [4]: df + 2 Out[4]: foo 0 3 1 4 2 5 3 6 In [5]: Out # Still has all our temporary DataFrame objects! Out[5]: {3: foo 0 2 1 3 2 4 3 5, 4: foo 0 3 1 4 2 5 3 6} </code></pre> <p>You can fix this by typing <code>%reset Out</code> to clear your history.
Alternatively, you can adjust how much history ipython keeps with <code>ipython --cache-size=5</code> (default is 1000).</p> <h2>Reducing Dataframe Size</h2> <p>Wherever possible, avoid using object dtypes.</p> <pre><code>&gt;&gt;&gt; df.dtypes foo float64 # 8 bytes per value bar int64 # 8 bytes per value baz object # at least 48 bytes per value, often more </code></pre> <p>Values with an object dtype are boxed, which means the numpy array just contains a pointer and you have a full Python object on the heap for every value in your dataframe. This includes strings.</p> <p>Whilst numpy supports fixed-size strings in arrays, pandas does not (<a href="https://github.com/pydata/pandas/issues/3209#issuecomment-15659304" rel="nofollow">it's caused user confusion</a>). This can make a significant difference:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; arr = np.array(['foo', 'bar', 'baz']) &gt;&gt;&gt; arr.dtype dtype('S3') &gt;&gt;&gt; arr.nbytes 9 &gt;&gt;&gt; import sys; import pandas as pd &gt;&gt;&gt; s = pd.Series(['foo', 'bar', 'baz']) &gt;&gt;&gt; s.dtype dtype('O') &gt;&gt;&gt; sum(sys.getsizeof(x) for x in s) 120 </code></pre> <p>You may want to avoid using string columns, or find a way of representing string data as numbers.</p> <p>If you have a dataframe that contains many repeated values (NaN is very common), then you can use a <a href="http://pandas.pydata.org/pandas-docs/stable/sparse.html" rel="nofollow">sparse data structure</a> to reduce memory usage:</p> <pre><code>&gt;&gt;&gt; df1.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 39681584 entries, 0 to 39681583 Data columns (total 1 columns): foo float64 dtypes: float64(1) memory usage: 605.5 MB &gt;&gt;&gt; df1.shape (39681584, 1) &gt;&gt;&gt; df1.foo.isnull().sum() * 100.
/ len(df1) 20.628483479893344 # so 20% of values are NaN &gt;&gt;&gt; df1.to_sparse().info() &lt;class 'pandas.sparse.frame.SparseDataFrame'&gt; Int64Index: 39681584 entries, 0 to 39681583 Data columns (total 1 columns): foo float64 dtypes: float64(1) memory usage: 543.0 MB </code></pre> <h2>Viewing Memory Usage</h2> <p>You can view the memory usage (<a href="http://pandas.pydata.org/pandas-docs/stable/faq.html#dataframe-memory-usage" rel="nofollow">docs</a>):</p> <pre><code>&gt;&gt;&gt; df.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 39681584 entries, 0 to 39681583 Data columns (total 14 columns): ... dtypes: datetime64[ns](1), float64(8), int64(1), object(4) memory usage: 4.4+ GB </code></pre> <p>As of pandas 0.17.1, you can also do <code>df.info(memory_usage='deep')</code> to see memory usage including objects.</p>
1
2016-09-07T19:25:24Z
[ "python", "pandas", "memory" ]
How to fix Arabic unicode in a list
39,100,987
<p>I made a database containing Arabic words, and when I fetch the data and print it, it's OK and works well and prints:</p> <pre><code>مشاعر‬ مودة </code></pre> <p>But when I loop over that database, turn it into a list and then print that list to see what's happening, I get this:</p> <pre><code> ['\u202b\u202bمشاعر\u202c', '\u202b\u202bالمودة\u202c'] </code></pre> <p>Here is the code:</p> <pre><code> cors.execute("SELECT * FROM DictContents") # selecting from database self.AraList = [] # empty list to put arabic words in for raw in cors.fetchall(): # fetching data from database rawAra = raw[1] # the database includes more than that, so this index refers to the arabic column print(rawAra) # here is the first print; works fine as I said self.AraList.append(rawAra) print(self.AraList) # here is the other list printing </code></pre> <p>I tried more than one way to fix it before asking, but none of them worked for me.</p>
0
2016-08-23T12:17:44Z
39,113,322
<p>Found a solution: cleaning the strings with a regular expression:</p> <pre><code>import re cors.execute("SELECT * FROM DictContents") self.AraList = [] for raw in cors.fetchall(): rawAra = raw[1] cleanit = re.compile('\w+.*') cleanone = cleanit.search(rawAra) if cleanone: print(cleanone.group()) # prints the clean strings: مشاعر‬ مودة self.AraList.append(cleanone.group()) # add strings to the list to see how it looks print(self.AraList) # prints a much cleaner list than the first one: ['مشاعر\u202c - ', 'المودة\u202c'] </code></pre>
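An alternative sketch: rather than matching the words with `\w+.*`, you could strip the Unicode bidirectional control characters (U+202A through U+202E) that are polluting the strings. The sample value below is taken from the list in the question:

```python
import re

raw = '\u202b\u202b\u0645\u0634\u0627\u0639\u0631\u202c'   # the garbled 'مشاعر' from the list above
clean = re.sub('[\u202a-\u202e]', '', raw)                  # strip LRE/RLE/PDF/LRO/RLO marks
print(clean)       # مشاعر
print(len(clean))  # 5: only the Arabic letters remain
```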
0
2016-08-24T02:03:49Z
[ "python", "python-3.x" ]
When I use scipy.optimize.linprog, how can I restrict x to be an int?
39,101,137
<p>When I solve the problem of Linear Programming, like in the following formula, <strong>I want all the results in x to be of int type</strong></p> <p>Consider the following problem:</p> <p>Minimize: <code>f = -1*x[0] + 4*x[1]</code></p> <p>Subject to: </p> <pre><code>-3*x[0] + 1*x[1] &lt;= 6 1*x[0] + 2*x[1] &lt;= 4 x[1] &gt;= -3 </code></pre> <p>where: <code>-inf &lt;= x[0] &lt;= inf</code></p> <p><em>Next is the Python code:</em></p> <pre><code>&gt;&gt;&gt; c = [-1, 4] &gt;&gt;&gt; A = [[-3, 1], [1, 2]] &gt;&gt;&gt; b = [6, 4] &gt;&gt;&gt; x0_bounds = (None, None) &gt;&gt;&gt; x1_bounds = (-3, None) &gt;&gt;&gt; res = linprog(c, A_ub=A, b_ub=b, bounds=(x0_bounds, x1_bounds), ... options={"disp": True}) &gt;&gt;&gt; print(res) Optimization terminated successfully. Current function value: -11.428571 Iterations: 2 status: 0 success: True fun: -11.428571428571429 x: array([-1.14285714, 2.57142857]) message: 'Optimization terminated successfully.' nit: 2 </code></pre>
0
2016-08-23T12:24:01Z
39,101,617
<p>From the <a href="http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.linprog.html">docs</a>:</p> <blockquote> <p>method : str, optional Type of solver. At this time only ‘simplex’ is supported.</p> </blockquote> <p>Simplex cannot handle integrality constraints so you cannot solve integer programming problems with scipy.optimize.linprog yet. You can try other libraries like <a href="https://pypi.python.org/pypi/PuLP/1.1">PuLP</a>, <a href="http://www.pyomo.org/">Pyomo</a> or <a href="http://cvxopt.org/">CVXOPT</a>.</p>
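Purely as an illustration of what an integer solution looks like (not a substitute for a real MIP solver such as the libraries above), a problem this small can be brute-forced over a bounded integer grid; the search window below is an assumption, since `x[0]` is unbounded in the original problem:

```python
best = None
for x0 in range(-50, 51):              # assumed finite window for the unbounded x[0]
    for x1 in range(-3, 51):           # x[1] >= -3 comes from the constraints
        if -3 * x0 + 1 * x1 <= 6 and 1 * x0 + 2 * x1 <= 4:
            f = -1 * x0 + 4 * x1
            if best is None or f < best[0]:
                best = (f, x0, x1)

print(best)  # (-22, 10, -3)
```

Note that the integer optimum (f = -22 at x = (10, -3)) can differ substantially from the LP relaxation, which is why simply rounding linprog's result is not a safe shortcut.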
5
2016-08-23T12:45:32Z
[ "python", "scipy" ]
add new plot to existing figure
39,101,203
<p>I have a script with some plots (see example code). After some other things I want to add a new plot to an existing one. But when I try that, it adds the plot to the last created figure (now fig2). I can't figure out how to change that...</p> <pre><code>import matplotlib.pylab as plt import numpy as np n = 10 x1 = np.arange(n) y1 = np.arange(n) fig1 = plt.figure() ax1 = fig1.add_subplot(111) ax1.plot(x1,y1) fig1.show() x2 = np.arange(10) y2 = n/x2 # add new data and create new figure fig2 = plt.figure() ax2 = fig2.add_subplot(111) ax2.plot(x2,y2) fig2.show() # do something with data to compare with new data y1_geq = y1 &gt;= y2 y1_a = y1**2 ax1.plot(y1_geq.nonzero()[0],y1[y1_geq],'ro') fig1.canvas.draw </code></pre>
-3
2016-08-23T12:26:42Z
39,106,673
<p>Since your code is not runnable without errors, I'll provide a sample snippet showing how to plot several datasets in the same graph/diagram:</p> <pre><code>import matplotlib.pyplot as plt xvals = [i for i in range(0, 10)] yvals1 = [i**2 for i in range(0, 10)] yvals2 = [i**3 for i in range(0, 10)] f, ax = plt.subplots(1) ax.plot(xvals, yvals1) ax.plot(xvals, yvals2) </code></pre> <p>So the basic idea is to call <code>ax.plot()</code> for all datasets you need to plot into the same plot.</p>
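To address the original problem of plotting into fig1 after fig2 exists: keep a reference to the first Axes and plot on it directly. This is a sketch using the non-interactive Agg backend so it runs headless (drop that line for interactive use); also note that `fig1.canvas.draw` in the question is missing its parentheses:

```python
import matplotlib
matplotlib.use("Agg")                   # headless backend; remove for interactive use
import matplotlib.pyplot as plt

fig1, ax1 = plt.subplots()
ax1.plot([0, 1, 2], [0, 1, 4])          # first figure

fig2, ax2 = plt.subplots()              # a second figure becomes the "current" one
ax2.plot([0, 1, 2], [4, 1, 0])

ax1.plot([0, 1, 2], [0, 1, 8], 'ro')    # still lands on fig1 via the saved Axes
fig1.canvas.draw()                      # note: draw(), not draw
print(len(ax1.lines), len(ax2.lines))   # 2 1
```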
1
2016-08-23T16:45:16Z
[ "python", "matplotlib", "plot" ]
Initially populate FileField in Django-Form
39,101,231
<p>I have a model that describes a Webpage. The <code>source_upload</code> field represents a screenshot of the webpage. </p> <p>For adding site-objects to my application, I use a django class-based <code>CreateView</code>. This works really well. </p> <p>Now I'm trying to add a semi-automatic way of adding sites. You can pass an URL to the view and the view fills the form automatically (and makes a screenshot of the webpage). The user should be able to review all the auto-extracted fields - especially the auto generated screenshot image - change them and hit the save button to add the object to the database and the image (if approved) to its final location.</p> <p>I tried to implement this in the <code>get_initial</code> method of the view. This works quite well except for the screenshot-<code>FileField</code>. The path I set in <code>initial['source_upload']</code> is not shown in the <code>current: &lt;link&gt;</code>part of the FileInput widget of the form.</p> <p><strong>How can I give the filefield an initial value?</strong></p> <hr> <p><strong>models.py</strong></p> <pre><code>class Site(models.Model): def get_source_upload_path(instance, filename): now = datetime.datetime.now() return "appname/sites/{}/{}/{}/site_{}_{}".format(now.year, now.month, now.day, instance.pk, filename) creationDate = models.DateTimeField(auto_now_add=True) last_modifiedDate = models.DateTimeField(auto_now=True) creator = models.ForeignKey('auth.User', related_name='siteCreated') last_modifier = models.ForeignKey('auth.User', related_name='siteLast_modified') date = models.DateTimeField(default=datetime.date.today) title = models.CharField(max_length=240, blank=True) body = models.TextField(max_length=3000) source_url = models.URLField(blank=True) source_upload = models.FileField(upload_to=get_source_upload_path, blank=True) keywords = models.ManyToManyField("Keyword") </code></pre> <p><strong>urls.py</strong></p> <pre><code>url(r'site/add/$', views.SiteCreate.as_view(), 
name='site-add'), url(r'site/add/(?P&lt;source_url&gt;[A-Za-z0-9\-._~:/\[\]@!$&amp;\'\(\)\*\+,;=?#]+)/$', views.SiteCreate.as_view(), name='site-add-fromurl'), </code></pre> <p><strong>forms.py</strong></p> <pre><code>class SiteForm(ModelForm): class Meta: model = Site fields = ['date', 'title', 'body', 'source_url', 'source_upload', 'keywords'] widgets = { 'keywords' : CheckboxSelectMultiple(), } </code></pre> <p><strong>views.py</strong></p> <pre><code>class SiteCreate(LoginRequiredMixin, CreateView): model = Site template_name = 'appname/site_form.html' form_class = SiteForm success_url = reverse_lazy('appname:index') def form_valid(self, form): form.instance.creator = self.request.user form.instance.last_modifier = self.request.user return super(SiteCreate, self).form_valid(form) def get_initial(self): # Get the initial dictionary from the superclass method initial = super(SiteCreate, self).get_initial() try: #get target url from request fullpath = self.request.get_full_path() fullpath = fullpath.split("/") fullpath, querystring = fullpath[3:-1], fullpath[-1] source_domain = fullpath[2] fullpath = "/".join(fullpath) fullpath += querystring source_url = fullpath if (not source_url.startswith("http://") and not source_url.startswith("https://")): print("ERROR: url does not start with http:// or https://") return initial # ... # extract title, date &amp; others with BeautifulSoup # ... #extract screenshot (is there a better way?) 
from selenium import webdriver driver = webdriver.Firefox() driver.get(source_url) tmpfilename = "{}_{}.png".format(get_valid_filename(source_domain), get_valid_filename(title[:30])) now = datetime.datetime.now() tmpfilepath_rel = "appname/sites/tmp/{}/{}/{}/{}".format(now.year, now.month, now.day, tmpfilename) tmpfilepath = settings.MEDIA_ROOT + tmpfilepath_rel folder=os.path.dirname(tmpfilepath) if not os.path.exists(folder): os.makedirs(folder) driver.save_screenshot(tmpfilepath) driver.quit() initial = initial.copy() initial['source_url'] = source_url initial['title'] = title initial['date'] = soup_date initial['body'] = body initial['source_upload'] = tmpfilepath_rel except KeyError as e: print("no valid source_url found. zeige also ganz normales add/new template") except IndexError as e: print("no valid source_url found. zeige also ganz normales add/new template") return initial </code></pre> <p><strong>site_form.html</strong> (Used for Create and Update view)</p> <pre><code>{% extends "appname/base.html" %} {% load staticfiles %} {% block header %} &lt;link rel="stylesheet" type="text/css" href="{% static 'appname/model_forms.css' %}" /&gt; {% endblock %} {% block body %} &lt;form enctype="multipart/form-data" action="" method="post"&gt;{% csrf_token %} &lt;div class="fieldWrapper"&gt; &lt;div class="error"&gt;{{ form.date.errors }}&lt;/div&gt; &lt;div class="label"&gt;{{ form.date.label_tag }}&lt;/div&gt; &lt;div class="field"&gt;{{ form.date }}&lt;br /&gt;{{ form.date.help_text }}&lt;/div&gt; &lt;div class="floatclear"&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class="fieldWrapper"&gt; &lt;div class="error"&gt;{{ form.title.errors }}&lt;/div&gt; &lt;div class="label"&gt;{{ form.title.label_tag }}&lt;/div&gt; &lt;div class="field"&gt;{{ form.title }}&lt;br /&gt;{{ form.title.help_text }}&lt;/div&gt; &lt;div class="floatclear"&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class="fieldWrapper"&gt; &lt;div class="error"&gt;{{ form.body.errors }}&lt;/div&gt; &lt;div 
class="label"&gt;{{ form.body.label_tag }}&lt;/div&gt; &lt;div class="field"&gt;{{ form.body }}&lt;br /&gt;{{ form.body.help_text }}&lt;/div&gt; &lt;div class="floatclear"&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class="fieldWrapper"&gt; &lt;div class="error"&gt;{{ form.source_url.errors }}&lt;/div&gt; &lt;div class="label"&gt;{{ form.source_url.label_tag }}&lt;/div&gt; &lt;div class="field"&gt;{{ form.source_url }}&lt;br /&gt;{{ form.source_url.help_text }}&lt;/div&gt; &lt;div class="floatclear"&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class="fieldWrapper"&gt; &lt;div class="error"&gt;{{ form.source_upload.errors }}&lt;/div&gt; &lt;div class="label"&gt;{{ form.source_upload.label_tag }}&lt;/div&gt; &lt;div class="field"&gt;{{ form.source_upload }}&lt;br /&gt;{{ form.source_upload.help_text }}&lt;/div&gt; &lt;div class="floatclear"&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class="fieldWrapper"&gt; &lt;div class="error"&gt;{{ form.keywords.errors }}&lt;/div&gt; &lt;div class="label"&gt;{{ form.keywords.label_tag }}&lt;/div&gt; &lt;div class="field"&gt; &lt;ul class="checkbox-grid"&gt; {% for kw in form.keywords %} &lt;li&gt; {{ kw.tag }} &lt;label for="{{ kw.id_for_label }}"&gt; {{ kw.choice_label }} &lt;/label&gt; &lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;div class="checkbox_help_text"&gt;&lt;br /&gt;{{ form.keywords.help_text }}&lt;/div&gt; &lt;/div&gt; &lt;div class="floatclear"&gt;&lt;/div&gt; &lt;/div&gt; &lt;input type="submit" value="Save" /&gt; &lt;/form&gt; &lt;div id="ObjectHistory"&gt; {% if site.pk %} &lt;p&gt;Created by: {{ site.creator }}&lt;/p&gt; &lt;p&gt;Created on: {{ site.creationDate }}&lt;/p&gt; &lt;p&gt;Last modified by: {{ site.last_modifier }}&lt;/p&gt; &lt;p&gt;Last modified on: {{ site.last_modifiedDate }}&lt;/p&gt; &lt;p&gt;Now: {% now "Y-m-d H:i:s" %} &lt;a href="{% url 'appname:site-delete' site.pk %}"&gt;&lt;button&gt;delete&lt;/button&gt;&lt;/a&gt;&lt;/p&gt; {% else %} &lt;p&gt;This is a new Site!&lt;/p&gt; &lt;p&gt;Now: {% now "Y-m-d H:i:s" 
%}&lt;/p&gt; {% endif %} &lt;/div&gt; {% endblock %} </code></pre>
2
2016-08-23T12:28:27Z
39,101,582
<p>This is because the value of FileField, as used by your form, isn't just the path to the file - it's an instance of FieldFile (see <a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.fields.files.FieldFile" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/models/fields/#django.db.models.fields.files.FieldFile</a>).</p> <p>I'm not sure if you can instantiate a FieldFile directly, but at least you can do it by instantiating the model (you don't need to save it).</p> <p>In views.py:</p> <pre><code>tmp_site = Site(source_upload=tmpfilepath_rel) initial['source_upload'] = tmp_site.source_upload </code></pre> <p>Alternatively you can manually add a link to the file when rendering the html:</p> <pre><code>&lt;div class="currently"&gt;&lt;a href="{{ form.source_upload.value }}"&gt;{{ form.source_upload.value }}&lt;/a&gt;&lt;/div&gt; </code></pre>
1
2016-08-23T12:44:01Z
[ "python", "django", "django-class-based-views" ]
Why isn't the HTML I get from BeautifulSoup the same as the one I see when I inspect element?
39,101,335
<p>I am making a username scraper and I really can't understand why the HTML is 'disappearing' when I parse it. Let's take this site for example: <a href="http://www.lolking.net/leaderboards#/eune/1" rel="nofollow">http://www.lolking.net/leaderboards#/eune/1</a></p> <p><a href="http://i.stack.imgur.com/q5LwE.png" rel="nofollow"><img src="http://i.stack.imgur.com/q5LwE.png" alt="HTML output"></a></p> <p>See how there is a tbody and a bunch of tables in it? Well when I parse it and output it to the shell the tbody is empty</p> <pre><code> &lt;div style="background: #333; box-shadow: 0 0 2px #000; padding: 10px;"&gt; &lt;table class="lktable" id="leaderboard_table" width="100%"&gt; &lt;thead&gt; &lt;tr&gt; &lt;th style="width: 80px;"&gt; Rank &lt;/th&gt; &lt;th style="width: 80px;"&gt; Change &lt;/th&gt; &lt;th style="width: 100px;"&gt; Tier &lt;/th&gt; &lt;th&gt; Summoner &lt;/th&gt; &lt;th style="width: 150px;"&gt; Top Champions &lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>Why is this happening and how can I fix it?</p>
2
2016-08-23T12:33:38Z
39,101,540
<p>This site needs JavaScript to work. JavaScript is used to populate the table by forming a web request, which probably points to a back-end API. This means that the "raw" HTML, without the effects of any JavaScript, has an empty table.</p> <p>We can actually see this empty table in the background if we visit the site with JavaScript disabled:</p> <p><img src="http://i.imgur.com/MfgckRf.png" alt="Screenshot"></p> <p>BeautifulSoup doesn't cause this JavaScript to execute. Instead, have a look at some alternative libraries which do, such as the more advanced <a href="http://selenium-python.readthedocs.io/" rel="nofollow">Selenium</a>.</p>
1
2016-08-23T12:41:56Z
[ "python", "html", "beautifulsoup" ]
Why isn't the HTML I get from BeautifulSoup the same as the one I see when I inspect element?
39,101,335
<p>I am making a username scraper and I really can't understand why the HTML is 'disappearing' when I parse it. Let's take this site for example: <a href="http://www.lolking.net/leaderboards#/eune/1" rel="nofollow">http://www.lolking.net/leaderboards#/eune/1</a></p> <p><a href="http://i.stack.imgur.com/q5LwE.png" rel="nofollow"><img src="http://i.stack.imgur.com/q5LwE.png" alt="HTML output"></a></p> <p>See how there is a tbody and a bunch of tables in it? Well when I parse it and output it to the shell the tbody is empty</p> <pre><code> &lt;div style="background: #333; box-shadow: 0 0 2px #000; padding: 10px;"&gt; &lt;table class="lktable" id="leaderboard_table" width="100%"&gt; &lt;thead&gt; &lt;tr&gt; &lt;th style="width: 80px;"&gt; Rank &lt;/th&gt; &lt;th style="width: 80px;"&gt; Change &lt;/th&gt; &lt;th style="width: 100px;"&gt; Tier &lt;/th&gt; &lt;th&gt; Summoner &lt;/th&gt; &lt;th style="width: 150px;"&gt; Top Champions &lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>Why is this happening and how can I fix it?</p>
2
2016-08-23T12:33:38Z
39,101,626
<p>As you can see in Chrome Dev Tools, the site sends 2 XHR requests to get the data, and displays it by using JavaScript.</p> <p>Since <code>BeautifulSoup</code> is an HTML <strong>parser</strong>, it will not execute JavaScript. You should use a tool like <a href="http://selenium-python.readthedocs.org/" rel="nofollow"><code>selenium</code></a>, which emulates a real browser.</p> <p>But in this case you might be better off using the API they use to get the data. You can easily see from which URLs they get the data by looking in the 'Network' tab. Reload the page, select <code>XHR</code>, and you can use the info to create your own requests using something like <a href="http://docs.python-requests.org/en/master/" rel="nofollow"><code>Python Requests</code></a>.</p>
0
2016-08-23T12:46:07Z
[ "python", "html", "beautifulsoup" ]
Why isn't the HTML I get from BeautifulSoup the same as the one I see when I inspect element?
39,101,335
<p>I am making a username scraper and I really can't understand why the HTML is 'disappearing' when I parse it. Let's take this site for example: <a href="http://www.lolking.net/leaderboards#/eune/1" rel="nofollow">http://www.lolking.net/leaderboards#/eune/1</a></p> <p><a href="http://i.stack.imgur.com/q5LwE.png" rel="nofollow"><img src="http://i.stack.imgur.com/q5LwE.png" alt="HTML output"></a></p> <p>See how there is a tbody and a bunch of tables in it? Well when I parse it and output it to the shell the tbody is empty</p> <pre><code> &lt;div style="background: #333; box-shadow: 0 0 2px #000; padding: 10px;"&gt; &lt;table class="lktable" id="leaderboard_table" width="100%"&gt; &lt;thead&gt; &lt;tr&gt; &lt;th style="width: 80px;"&gt; Rank &lt;/th&gt; &lt;th style="width: 80px;"&gt; Change &lt;/th&gt; &lt;th style="width: 100px;"&gt; Tier &lt;/th&gt; &lt;th&gt; Summoner &lt;/th&gt; &lt;th style="width: 150px;"&gt; Top Champions &lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>Why is this happening and how can I fix it?</p>
2
2016-08-23T12:33:38Z
39,102,210
<p>You can get all the data in <em>json</em> format; all you need to do is parse a value from a script tag inside the original page source and pass it to "<a href="http://www.lolking.net/leaderboards/some_value_here/eune/1.json" rel="nofollow">http://www.lolking.net/leaderboards/some_value_here/eune/1.json</a>":</p> <pre><code>from bs4 import BeautifulSoup import requests import re patt = re.compile("\$\.get\('/leaderboards/(\w+)/") js = "http://www.lolking.net/leaderboards/{}/eune/1.json" soup = BeautifulSoup(requests.get("http://www.lolking.net/leaderboards#/eune/1").content) script = soup.find("script", text=re.compile("\$\.get\('/leaderboards/")) val = patt.search(script.text).group(1) data = requests.get(js.format(val)).json() </code></pre> <p><code>data</code> gives you json that contains all the player info, like:</p> <pre><code>{'data': [{'division': '1', 'global_ranking': '12', 'league_points': '1217', 'lks': '2961', 'losses': '31', 'most_played_champions': [{'assists': '238', 'champion_id': '236', 'creep_score': '7227', 'deaths': '131', 'kills': '288', 'losses': '5', 'played': '39', 'wins': '34'}, {'assists': '209', 'champion_id': '429', 'creep_score': '5454', 'deaths': '111', 'kills': '204', 'losses': '3', 'played': '27', 'wins': '24'}, {'assists': '155', 'champion_id': '81', 'creep_score': '4800', 'deaths': '103', 'kills': '168', 'losses': '8', 'played': '26', 'wins': '18'}], 'name': 'Sadastyczny', 'previous_ranking': '2', 'profile_icon_id': 7, 'ranking': '1', 'region': 'eune', 'summoner_id': '42893043', 'tier': '6', 'tier_name': 'CHALLENGER', 'wins': '128'}, {'division': '1', 'global_ranking': '30', 'league_points': '1128', 'lks': '2956', 'losses': '180', 'most_played_champions': [{'assists': '928', 'champion_id': '24', 'creep_score': '37601', 'deaths': '1426', 'kills': '1874', 'losses': '64', 'played': '210', 'wins': '146'}, {'assists': '501', 'champion_id': '67', 'creep_score': '16836', 'deaths': '584', 'kills': '662', 'losses': '37', 'played': '90', 'wins': '53'},
{'assists': '124', 'champion_id': '157', 'creep_score': '5058', 'deaths': '205', 'kills': '141', 'losses': '14', 'played': '28', 'wins': '14'}], 'name': 'Richor', 'previous_ranking': '1', 'profile_icon_id': 577, 'ranking': '2', 'region': 'eune', 'summoner_id': '40385818', 'tier': '6', 'tier_name': 'CHALLENGER', 'wins': '254'}, {'division': '1', 'global_ranking': '49', 'league_points': '1051', 'lks': '2953', 'losses': '47', 'most_played_champions': [{'assists': '638', 'champion_id': '117', 'creep_score': '11927', 'deaths': '99', 'kills': '199', 'losses': '7', 'played': '66', 'wins': '59'}, {'assists': '345', 'champion_id': '48', 'creep_score': '8061', 'deaths': '99', 'kills': '192', 'losses': '11', 'played': '43', 'wins': '32'}, {'assists': '161', 'champion_id': '114', 'creep_score': '5584', 'deaths': '64', 'kills': '165', 'losses': '11', 'played': '31', 'wins': '20'}], </code></pre>
1
2016-08-23T13:11:10Z
[ "python", "html", "beautifulsoup" ]
ValueError: Unable to configure handler 'file': [Errno 2] No such file or directory:
39,101,488
<p>I am very new to Python and Django and am currently busy teaching myself through tutorials on www.djangoproject.com. I am using PyCharm and working on OS X El Capitan. I have imported a project from GitHub and created a virtual environment for the project interpreter based on Python 3.5.1. In the venv I installed Django.</p> <p>I then activated the venv.</p> <p>Now I started by trying to execute simple commands in the terminal like <code>python manage.py startapp deonapp</code> and <code>python manage.py runserver</code>, but each time I get the error pasted below. What did I miss? I cannot seem to find the /log/ directory.</p> <pre><code>Traceback (most recent call last): File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 558, in configure handler = self.configure_handler(handlers[name]) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 731, in configure_handler result = factory(**kwargs) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/__init__.py", line 1008, in __init__ StreamHandler.__init__(self, self._open()) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/__init__.py", line 1037, in _open return open(self.baseFilename, self.mode, encoding=self.encoding) FileNotFoundError: [Errno 2] No such file or directory: '/Users/deon/Documents/PyCharmProjects/Developments/deonproject/log/debug.log' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 10, in &lt;module&gt; execute_from_command_line(sys.argv) File "/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File 
"/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 341, in execute django.setup() File "/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/__init__.py", line 22, in setup configure_logging(settings.LOGGING_CONFIG, settings.LOGGING) File "/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/utils/log.py", line 75, in configure_logging logging_config_func(logging_settings) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 795, in dictConfig dictConfigClass(config).configure() File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 566, in configure '%r: %s' % (name, e)) ValueError: Unable to configure handler 'file': [Errno 2] No such file or directory: '/Users/deon/Documents/PyCharmProjects/Developments/deonproject/log/debug.log' </code></pre>
0
2016-08-23T12:39:41Z
39,101,550
<p>For some reason the path to the log file (/Users/deon/Documents/PyCharmProjects/Developments/deonproject/log) does not exist. Make sure that all the directories exist (if not, create them) and create an empty <code>debug.log</code> file (just in case).</p> <p>What happens is that some problem occurs in your code. The logging handler catches this error in order to save it to your log file so that you can analyze it. However, the path to the log file it is trying to open does not exist, so an exception occurs while handling another exception.</p>
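<p>A minimal sketch of that fix (the deonproject path is the one from the traceback; here a temporary directory stands in for it so the snippet is runnable anywhere):</p>

```python
import tempfile
from pathlib import Path

# Stand-in for /Users/deon/Documents/PyCharmProjects/Developments/deonproject
project_root = Path(tempfile.mkdtemp()) / "deonproject"

log_dir = project_root / "log"
log_dir.mkdir(parents=True, exist_ok=True)  # create log/ plus any missing parents
(log_dir / "debug.log").touch()             # create an empty debug.log

print(log_dir.exists())  # True
```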
0
2016-08-23T12:42:12Z
[ "python", "django", "pycharm" ]
TypeError: unorderable types: NoneType() > int()
39,101,535
<p>I am new to python3, and am getting the following error trying to read earthquake data from last day!!</p> <pre><code>Traceback (most recent call last): File "Z:\Python learning\Up and run\Exercise Files\Ch5\jsondata_finished.py", line 54, in &lt;module&gt; main() File "Z:\Python learning\Up and run\Exercise Files\Ch5\jsondata_finished.py", line 49, in main printResults(data) File "Z:\Python learning\Up and run\Exercise Files\Ch5\jsondata_finished.py", line 33, in printResults if (feltReports != None) &amp; (feltReports &gt; 0): TypeError: unorderable types: NoneType() &gt; int() </code></pre> <p>I am unable to identify the error. Here is my CODE:</p> <pre><code>import urllib.request import json def printResults(data): # Use the json module to load the string data into a dictionary theJSON = json.loads(data.decode()) # now we can access the contents of the JSON like any other Python object if "title" in theJSON["metadata"]: print (theJSON["metadata"]["title"]) # output the number of events, plus the magnitude and each event name count = theJSON["metadata"]["count"]; print (str(count) + " events recorded") # for each event, print the place where it occurred for i in theJSON["features"]: print (i["properties"]["place"]) # print the events that only have a magnitude greater than 4 # for i in theJSON["features"]: # if i["properties"]["mag"] &gt;= 4.0: # print "%2.1f" % i["properties"]["mag"], i["properties"]["place"] # print only the events where at least 1 person reported feeling something print ("Events that were felt:") for i in theJSON["features"]: feltReports = i["properties"]["felt"] if (feltReports != None) &amp; (feltReports &gt; 0): print ("%2.1f" % i["properties"]["mag"], i["properties"]["place"], " reported " + str(feltReports) + " times") def main(): # define a variable to hold the source URL # In this case we'll use the free data feed from the USGS # This feed lists all earthquakes for the last day larger than Mag 2.5 urlData = 
"http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson" # Open the URL and read the data webUrl = urllib.request.urlopen(urlData) print (webUrl.getcode()) if (webUrl.getcode() == 200): data = webUrl.read() # print out our customized results printResults(data) else: print ("Received an error from server, cannot retrieve results " + str(webUrl.getcode())) if __name__ == "__main__": main() </code></pre> <p>Please help! I have tried a few things by looking at other users' solutions, but I was still getting the same error over and over again.</p>
0
2016-08-23T12:41:50Z
39,101,613
<p>Use <code>and</code> instead of <code>&amp;</code>: unlike <code>and</code>, the bitwise <code>&amp;</code> operator does not short-circuit, so <code>feltReports &gt; 0</code> is evaluated even when <code>feltReports</code> is <code>None</code>, which raises the <code>TypeError</code> on Python 3. Also, use <code>is not None</code> to check that an object is not <code>None</code>, instead of <code>!= None</code>.</p>
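<p>A quick sketch of the corrected check from the question's loop (the helper name here is hypothetical):</p>

```python
def felt_by_someone(felt_reports):
    # `and` short-circuits, so the `> 0` comparison is only
    # evaluated when felt_reports is not None
    return felt_reports is not None and felt_reports > 0

print(felt_by_someone(None))  # False
print(felt_by_someone(0))     # False
print(felt_by_someone(3))     # True
```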
1
2016-08-23T12:45:24Z
[ "python", "python-3.x" ]
BigQuery script failing for large file
39,101,602
<p>I am trying to load a json file to <strong>GoogleBigquery</strong> using the script at <a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/bigquery/api/load_data_by_post.py" rel="nofollow">https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/bigquery/api/load_data_by_post.py</a> with very little modification. I added </p> <pre><code>,chunksize=10*1024*1024, resumable=True)) </code></pre> <p>to <strong>MediaFileUpload</strong>.</p> <p>The script works fine for a sample file with a few million records. The actual file is about 140 GB with approx 200,000,000 records. <strong>insert_request.execute()</strong> always fails with </p> <pre><code>socket.error: `[Errno 32] Broken pipe` </code></pre> <p>after half an hour or so. How can this be fixed? Each row is less than 1 KB, so it shouldn't be a quota issue. </p>
0
2016-08-23T12:44:51Z
39,111,367
<p>When handling large files don't use streaming, but batch load: Streaming will easily handle up to 100,000 rows per second. That's pretty good for streaming, but not for loading large files.</p> <p>The sample code linked is doing the right thing (batch instead of streaming), so what we see is a different problem: This sample code is trying to load all this data straight into BigQuery, but the uploading through POST part fails.</p> <p>Solution: Instead of loading big chunks of data through POST, stage them in Google Cloud Storage first, then tell BigQuery to read files from GCS.</p> <p><em>Update</em>: Talking to the engineering team, POST should work if you try a smaller <code>chunksize</code>.</p>
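<p>A hedged sketch of that staged approach using the Cloud SDK command-line tools (the bucket, dataset, table and schema names below are placeholders; this is a CLI fragment that requires your own GCP credentials):</p>

```shell
# 1. Stage the big file in Google Cloud Storage instead of POSTing it
gsutil cp big.json gs://my-staging-bucket/big.json

# 2. Tell BigQuery to load directly from GCS
bq load --source_format=NEWLINE_DELIMITED_JSON \
    mydataset.mytable \
    gs://my-staging-bucket/big.json \
    ./schema.json
```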
2
2016-08-23T22:02:18Z
[ "python", "json", "google-bigquery" ]
Django: ValueError: Unable to configure handler - [Errno 2]
39,101,715
<p>I am very new to Python and Django and am currently busy teaching myself through tutorials on www.djangoproject.com. I am using PyCharm and working on OS X El Capitan. I have imported a project from GitHub and created a virtual environment for the project interpreter based on Python 3.5.1. In the venv I installed Django.</p> <p>I then activated the venv.</p> <p>Now I started by trying to execute simple commands in the terminal like <code>python manage.py startapp deonapp</code> and <code>python manage.py runserver</code>, but each time I get the error pasted below. What did I miss? I cannot seem to find the /log/ directory.</p> <pre><code>Traceback (most recent call last): File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 558, in configure handler = self.configure_handler(handlers[name]) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 731, in configure_handler result = factory(**kwargs) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/__init__.py", line 1008, in __init__ StreamHandler.__init__(self, self._open()) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/__init__.py", line 1037, in _open return open(self.baseFilename, self.mode, encoding=self.encoding) FileNotFoundError: [Errno 2] No such file or directory: '/Users/deon/Documents/PyCharmProjects/Developments/deonproject/log/debug.log' </code></pre> <p>During handling of the above exception, another exception occurred:</p> <pre><code>Traceback (most recent call last): File "manage.py", line 10, in &lt;module&gt; execute_from_command_line(sys.argv) File "/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File 
"/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 341, in execute django.setup() File "/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/__init__.py", line 22, in setup configure_logging(settings.LOGGING_CONFIG, settings.LOGGING) File "/Users/deon/Documents/PyCharmProjects/Developments/deonproject/venv/lib/python3.5/site-packages/django/utils/log.py", line 75, in configure_logging logging_config_func(logging_settings) File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 795, in dictConfig dictConfigClass(config).configure() File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/logging/config.py", line 566, in configure '%r: %s' % (name, e)) ValueError: Unable to configure handler 'file': [Errno 2] No such file or directory: '/Users/deon/Documents/PyCharmProjects/Developments/deonproject/log/debug.log' </code></pre>
-1
2016-08-23T12:49:59Z
39,138,528
<p>I managed to fix it. The log directory was not created because it was excluded in the .gitignore file. I completely forgot to look at what was excluded :). I created the directory manually and now it gets past the error. Deon</p>
0
2016-08-25T06:51:52Z
[ "python", "django", "pycharm" ]
Using python to ssh into multiple servers and grab the file with the same postfix
39,101,719
<p>I normally use a bash script to grab all the files onto the local machine and use <code>glob</code> to process all the files. Just wondering what would be the best way to use Python (instead of another bash script) to ssh into each server and process those files?</p> <p>My current program runs as </p> <pre><code> for filename in glob.glob('*-err.txt'): input_open = open (filename, 'rb') for line in input_open: do something </code></pre> <p>My files all have the ending <code>-err.txt</code> and the directories where they reside in the remote server have the same name <code>/documents/err/</code>. I am not able to install third party libraries as I don't have the permission.</p> <p>UPDATE</p> <p>I am trying not to <code>scp</code> the files from the server but to read them on the remote server instead.</p> <p>I want to use a local Python script LOCALLY to read files on the remote server.</p>
0
2016-08-23T12:50:08Z
39,101,822
<p>The simplest way to do it is to use paramiko's SCP support to copy from the remote server over SSH (<a href="http://stackoverflow.com/questions/250283/how-to-scp-in-python">How to scp in python?</a>)</p> <p>If you are not allowed to install any libraries, you can create an SSH key pair so that connecting to the server does not require a password (<a href="https://www.debian.org/devel/passwordlessssh" rel="nofollow">https://www.debian.org/devel/passwordlessssh</a>). You can then do the following for each file:</p> <pre><code>import os os.system('scp user@host:/path/to/file/on/remote/machine /path/to/local/file') </code></pre> <p>Note that using <code>os.system</code> is usually considered less portable than using libraries. If you give the script that uses <code>system('scp ...')</code> to someone who does not have an SSH key pair set up, they will run into problems.</p>
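<p>Since the update says the goal is to read the files in place rather than copy them, here is a standard-library-only sketch that streams each remote file over SSH with <code>subprocess</code>. It assumes a passwordless SSH key pair is already set up; the host and directory names are placeholders:</p>

```python
import subprocess

def run_lines(argv):
    """Run a command and return its stdout as a list of lines."""
    proc = subprocess.run(argv, capture_output=True, text=True, check=True)
    return proc.stdout.splitlines()

def read_remote_err_files(host, remote_dir="/documents/err"):
    """Yield (filename, line) for every *-err.txt file on `host`,
    without copying anything to the local machine."""
    for name in run_lines(["ssh", host, "ls", remote_dir]):
        if name.endswith("-err.txt"):
            for line in run_lines(["ssh", host, "cat", remote_dir + "/" + name]):
                yield name, line

# Usage (placeholder host):
#   for name, line in read_remote_err_files("user@server1"):
#       do_something(name, line)
```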
0
2016-08-23T12:54:39Z
[ "python", "bash", "ssh" ]
Using python to ssh into multiple servers and grab the file with the same postfix
39,101,719
<p>I normally use a bash script to grab all the files onto the local machine and use <code>glob</code> to process all the files. Just wondering what would be the best way to use Python (instead of another bash script) to ssh into each server and process those files?</p> <p>My current program runs as </p> <pre><code> for filename in glob.glob('*-err.txt'): input_open = open (filename, 'rb') for line in input_open: do something </code></pre> <p>My files all have the ending <code>-err.txt</code> and the directories where they reside in the remote server have the same name <code>/documents/err/</code>. I am not able to install third party libraries as I don't have the permission.</p> <p>UPDATE</p> <p>I am trying not to <code>scp</code> the files from the server but to read them on the remote server instead.</p> <p>I want to use a local Python script LOCALLY to read files on the remote server.</p>
0
2016-08-23T12:50:08Z
39,102,308
<p>Looks like you want to use a local Python script remotely. This has been <a href="http://stackoverflow.com/a/22915340/482382">answered here</a>.</p>
0
2016-08-23T13:14:50Z
[ "python", "bash", "ssh" ]
Problems with Python versions and environments on OS X El Capitan
39,101,818
<p>I'm learning Python and I'm using OS X. I've installed Anaconda 3 and set up an env called testenv with Python 3.5. Then I activated the recently created env and installed several packages such as numpy, pandas and opencv3. Nevertheless, when I run the Python shell and type "import numpy" I get the following errors:</p> <pre><code>&gt;&gt;&gt; import numpy Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python2.7/site-packages/numpy/__init__.py", line 180, in &lt;module&gt; from . import add_newdocs File "/usr/local/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in &lt;module&gt; from numpy.lib import add_newdoc File "/usr/local/lib/python2.7/site-packages/numpy/lib/__init__.py", line 8, in &lt;module&gt; from .type_check import * File "/usr/local/lib/python2.7/site-packages/numpy/lib/type_check.py", line 11, in &lt;module&gt; import numpy.core.numeric as _nx File "/usr/local/lib/python2.7/site-packages/numpy/core/__init__.py", line 14, in &lt;module&gt; from . import multiarray ImportError: dlopen(/usr/local/lib/python2.7/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _PyBuffer_Type Referenced from: /usr/local/lib/python2.7/site-packages/numpy/core/multiarray.so Expected in: flat namespace in /usr/local/lib/python2.7/site-packages/numpy/core/multiarray.so </code></pre> <p>The same happens with other packages. Could anybody help me figure out how to fix it?</p> <p>PS: I've also only been using OS X for a few days, so I'm not good with it either.</p>
0
2016-08-23T12:54:26Z
39,116,954
<blockquote> <p>File "/usr/local/lib/python2.7/site-packages/numpy/__init__.py", line 180, in </p> </blockquote> <p>Python is picking up packages from the system's Python installation and not from the packages installed in your virtualenv, i.e. <code>testenv</code>.</p> <p>Be sure that you have activated the virtualenv with something like:</p> <pre><code>source testenv/bin/activate </code></pre> <p>and then try running Python.</p> <p>Also, you said you set it up with <code>Python 3.5</code>, but it's picking up a <code>python2.7</code> path. Let me know if that solves your problem.</p>
0
2016-08-24T07:27:06Z
[ "python", "osx" ]
Port Modbus RTU CRC to python from C#
39,101,926
<p>I'm trying to port the CRC calculation function for Modbus RTU from C# to Python.</p> <p><strong>C#</strong> </p> <pre><code>private static ushort CRC(byte[] data) { ushort crc = 0xFFFF; for (int pos = 0; pos &lt; data.Length; pos++) { crc ^= (UInt16)data[pos]; for (int i = 8; i != 0; i--) { if ((crc &amp; 0x0001) != 0) { crc &gt;&gt;= 1; crc ^= 0xA001; } else { crc &gt;&gt;= 1; } } } return crc; } </code></pre> <p>Which I run like this:</p> <pre><code>byte[] array = { 0x01, 0x03, 0x00, 0x01, 0x00, 0x01 }; ushort u = CRC(array); Console.WriteLine(u.ToString("X4")); </code></pre> <p><strong>Python</strong></p> <pre><code>def CalculateCRC(data): crc = 0xFFFF for pos in data: crc ^= pos for i in range(len(data)-1, -1, -1): if ((crc &amp; 0x0001) != 0): crc &gt;&gt;= 1 crc ^= 0xA001 else: crc &gt;&gt;= 1 return crc </code></pre> <p>Which I run like this:</p> <pre><code>data = bytearray.fromhex("010300010001") crc = CalculateCRC(data) print("%04X"%(crc)) </code></pre> <ul> <li>The result from the C# example is: 0xCAD5.</li> <li>The result from the Python example is: 0x8682.</li> </ul> <p>I know for a fact from other applications that the CRC should be 0xCAD5, as the C# example produces.</p> <p>When I debug both examples step by step, the variable 'crc' has different values after these code lines:</p> <pre><code>crc ^= (UInt16)data[pos]; </code></pre> <p><strong>VS</strong></p> <pre><code>crc ^= pos </code></pre> <p>What am I missing?</p> <p>/Mc_Topaz</p>
0
2016-08-23T12:58:18Z
39,102,893
<p>Your inner loop uses the size of the data array instead of a fixed 8 iterations. Try this:</p> <pre><code>def calc_crc(data): crc = 0xFFFF for pos in data: crc ^= pos for i in range(8): if ((crc &amp; 1) != 0): crc &gt;&gt;= 1 crc ^= 0xA001 else: crc &gt;&gt;= 1 return crc data = bytearray.fromhex("010300010001") crc = calc_crc(data) print("%04X"%(crc)) </code></pre>
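<p>As a quick sanity check, the fixed routine reproduces the 0xCAD5 value the question expects (the function is repeated here so the snippet is self-contained):</p>

```python
def calc_crc(data):
    # Modbus RTU CRC-16: init 0xFFFF, reflected poly 0xA001,
    # always 8 bit-iterations per input byte
    crc = 0xFFFF
    for pos in data:
        crc ^= pos
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

assert calc_crc(bytearray.fromhex("010300010001")) == 0xCAD5
print("%04X" % calc_crc(bytearray.fromhex("010300010001")))  # CAD5
```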
1
2016-08-23T13:41:32Z
[ "c#", "python", "crc", "modbus" ]
pandas views vs copy : the docs says "nobody knows"?
39,101,933
<p>There's lots of questions on StackOverflow about chained indexing and whether a particular operation makes a view or a copy. (for instance, <a href="http://stackoverflow.com/questions/23296282/what-rules-does-pandas-use-to-generate-a-view-vs-a-copy">here</a> or <a href="http://stackoverflow.com/questions/17960511/pandas-subindexing-dataframes-copies-vs-views">here</a>). I still don't fully get it, but the amazing part is the official docs say "nobody knows". (!?!??) Here's an example from the docs; can you tell me if they really meant that, or if they're just being flippant?</p> <p>From <a href="http://pandas-docs.github.io/pandas-docs-travis/indexing.html?highlight=view#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow">http://pandas-docs.github.io/pandas-docs-travis/indexing.html?highlight=view#why-does-assignment-fail-when-using-chained-indexing</a></p> <pre><code>def do_something(df): foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows! # ... many lines here ... foo['quux'] = value # We don't know whether this will modify df or not! return foo </code></pre> <p>Seriously? For that specific example, is it really true that "nobody knows" and this is non-deterministic? Will that really behave differently on two different dataframes? The rules are really that complex? Or did the guy mean there is a definite answer but just that most people aren't aware of it?</p>
1
2016-08-23T12:58:37Z
39,102,097
<p>Here's the core bit of documentation that I think you may have missed:</p> <blockquote> <p>Outside of simple cases, it’s very hard to predict whether it will return a view or a copy (it depends on the memory layout of the array, about which <em>pandas</em> makes no guarantees)</p> </blockquote> <p>So there's an underlying numpy array that has some sort of memory layout. <em>pandas</em> is not concerned with having any sort of knowledge about that. I didn't read the docs too thoroughly besides that, but I assume they have some kind of approach that you <em>should</em> be taking instead, if you're actually wanting to <em>set</em> values.</p>
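<p>A way to sidestep the ambiguity entirely, per the docs' recommendation, is to never rely on getting a view back: take an explicit <code>.copy()</code> when you want an independent frame, or assign through <code>df.loc[...]</code> when you want to modify the original. A small sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'bar': [1, 2], 'baz': [3, 4]})

# Explicit copy: mutating foo can never touch df,
# and no SettingWithCopyWarning is raised
foo = df[['bar', 'baz']].copy()
foo['quux'] = 9

print(list(df.columns))   # ['bar', 'baz']  -- df is untouched
print(list(foo.columns))  # ['bar', 'baz', 'quux']
```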
4
2016-08-23T13:05:40Z
[ "python", "pandas" ]
pandas views vs copy : the docs says "nobody knows"?
39,101,933
<p>There's lots of questions on StackOverflow about chained indexing and whether a particular operation makes a view or a copy. (for instance, <a href="http://stackoverflow.com/questions/23296282/what-rules-does-pandas-use-to-generate-a-view-vs-a-copy">here</a> or <a href="http://stackoverflow.com/questions/17960511/pandas-subindexing-dataframes-copies-vs-views">here</a>). I still don't fully get it, but the amazing part is the official docs say "nobody knows". (!?!??) Here's an example from the docs; can you tell me if they really meant that, or if they're just being flippant?</p> <p>From <a href="http://pandas-docs.github.io/pandas-docs-travis/indexing.html?highlight=view#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow">http://pandas-docs.github.io/pandas-docs-travis/indexing.html?highlight=view#why-does-assignment-fail-when-using-chained-indexing</a></p> <pre><code>def do_something(df): foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows! # ... many lines here ... foo['quux'] = value # We don't know whether this will modify df or not! return foo </code></pre> <p>Seriously? For that specific example, is it really true that "nobody knows" and this is non-deterministic? Will that really behave differently on two different dataframes? The rules are really that complex? Or did the guy mean there is a definite answer but just that most people aren't aware of it?</p>
1
2016-08-23T12:58:37Z
39,103,999
<p>I think I can demonstrate something to clarify your situation. In your example, it will initially be a view, but once you try to modify it by adding a column it turns into a copy. You can test this by looking at the attribute <code>._is_view</code>:</p> <pre><code>In [29]: df = pd.DataFrame(np.random.randn(5,3), columns=list('abc')) def doSomething(df): a = df[['b','c']] print('before ', a._is_view) a['d'] = 0 print('after ', a._is_view) doSomething(df) df before True after False Out[29]: a b c 0 0.108790 0.580745 1.820328 1 1.066503 -0.238707 -0.655881 2 -1.320731 2.038194 -0.894984 3 -0.962753 -3.961181 0.109476 4 -1.887774 0.909539 1.318677 </code></pre> <p>So we can see that <code>a</code> is initially a view on the subsection of the original df, but once you add a column to it this is no longer true, and we can see that the original df is not modified.</p>
5
2016-08-23T14:29:36Z
[ "python", "pandas" ]
How to call all functions with name starting with given prefix?
39,102,240
<p>In Python, how do you write a function that will call all functions in the current file with a given prefix?</p> <p>For example:</p> <pre><code>def prepare(self): # ??? to call prepare_1, prepare_2 def prepare_1(self): def prepare_2(self): </code></pre> <p>How do you write <code>prepare</code> so that it calls all functions whose names start with <code>prepare_</code>?</p>
2
2016-08-23T13:12:28Z
39,102,335
<p>Use <a href="https://docs.python.org/2/library/functions.html#globals" rel="nofollow">globals</a> to access global namespace, <a href="https://docs.python.org/2/library/stdtypes.html?highlight=iteritems#dict.iteritems" rel="nofollow">dict.iteritems</a> to iterate over it and <a href="https://docs.python.org/2/library/functions.html#callable" rel="nofollow">callable</a> and <a href="https://docs.python.org/2/library/stdtypes.html?highlight=startswith#str.startswith" rel="nofollow">str.startswith</a> to identify that function has name you wish and it's callable:</p> <pre><code>def prepare(self): for key, value in globals().iteritems(): if callable(value) and key.startswith('prepare_'): value() def prepare_1(self):print 1 def prepare_2(self):print 2 </code></pre>
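<p>This answer is written for Python 2 (<code>iteritems</code>, <code>print</code> statements). On Python 3 the same idea looks like this, iterating over a snapshot of <code>globals()</code> so the dict isn't mutated mid-iteration:</p>

```python
called = []

def prepare_1():
    called.append(1)

def prepare_2():
    called.append(2)

def prepare():
    # list(...) snapshots globals() in case the called functions add names
    for name, value in list(globals().items()):
        if callable(value) and name.startswith('prepare_'):
            value()

prepare()
print(sorted(called))  # [1, 2]
```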
5
2016-08-23T13:15:56Z
[ "python" ]
How to call all functions with name starting with given prefix?
39,102,240
<p>In Python, how do you write a function that will call all functions in the current file with a given prefix?</p> <p>For example:</p> <pre><code>def prepare(self): # ??? to call prepare_1, prepare_2 def prepare_1(self): def prepare_2(self): </code></pre> <p>How do you write <code>prepare</code> so that it calls all functions whose names start with <code>prepare_</code>?</p>
2
2016-08-23T13:12:28Z
39,102,405
<p>If these functions are methods of a class, use <code>dir(self)</code> to list all attributes of <code>self</code>.</p> <pre><code>class C: def prepare(self): print(dir(self)) for name in dir(self): if name.startswith('prepare_'): method = getattr(self, name) method() def prepare_1(self): print('In prepare_1') def prepare_2(self): print('In prepare_2') C().prepare() </code></pre> <p>Output:</p> <pre><code>['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'prepare', 'prepare_1', 'prepare_2'] In prepare_1 In prepare_2 </code></pre> <hr> <p>Update: if you want to call methods from outside of class C:</p> <pre><code>obj = C() for name in dir(obj): if name.startswith('prepare_'): m = getattr(obj, name) print(m) m() </code></pre> <p>Output:</p> <pre><code>&lt;bound method C.prepare_1 of &lt;__main__.C object at 0x7f347c9dff28&gt;&gt; In prepare_1 &lt;bound method C.prepare_2 of &lt;__main__.C object at 0x7f347c9dff28&gt;&gt; In prepare_2 </code></pre>
2
2016-08-23T13:19:29Z
[ "python" ]
How to call all functions with name starting with given prefix?
39,102,240
<p>In Python, how do you write a function that will call all functions in the current file with a given prefix?</p> <p>For example:</p> <pre><code>def prepare(self): # ??? to call prepare_1, prepare_2 def prepare_1(self): def prepare_2(self): </code></pre> <p>How do you write <code>prepare</code> so that it calls all functions whose names start with <code>prepare_</code>?</p>
2
2016-08-23T13:12:28Z
39,102,834
<p>It's been asked for, so here's a quick hack:</p> <pre><code>class FunctionGroup(object): """ Defines a function group as a list of functions that can be executed sequentially from a single call to the function group. Use @func_group.add def my_func(...): ... to add functions to the function group. `func_group(...)` calls the added functions one by one. It returns a list of the return values from all evaluated functions. Processing terminates when one of the functions raises an exception and the exception is propagated to the caller. """ def __init__(self): self.funcs = [] def add(self, func): self.funcs.append(func) return func def __call__(self, *args, **kwargs): return [ func(*args, **kwargs) for func in self.funcs ] prepare_group = FunctionGroup() </code></pre> <p>Note that the <code>__call__()</code> implementation is rather primitive and does nothing to handle exceptions.</p> <p>Usage example:</p> <pre><code>@prepare_group.add def prepare_1(): print "prep 1" @prepare_group.add def prepare_2(): print "prep 2" prepare_group() </code></pre> <p>It can also be abused to call methods, of course:</p> <pre><code>class C(object): def m(self): pass c = C() prepare_group.add(c.m) </code></pre>
1
2016-08-23T13:38:55Z
[ "python" ]
Project not finding my reusable app's admin template override
39,102,242
<p>I have a reusable app with a directory structure like this:</p> <pre><code>myapp/ myapp/ views.py models.py ...etc templates/ myapp/ template1.html ...etc admin/ index.html test/ ...testing stuff setup.py ...etc </code></pre> <p>I'm overriding the <code>index.html</code> admin template so that I can add some additional links in <code>{% block userlinks %}</code> that will appear in a project's navigation when it uses my app.</p> <p>However, when using my app inside a project, the admin homepage still uses Django's own <code>index.html</code> file. The project itself has a <code>base_site.html</code> that it uses, but the template inheritance diagram (in <code>django-debug-toolbar</code>) looks like this:</p> <pre><code>/path/to/virtualenv/django/contrib/admin/templates/admin/index.html /projectpath/projectname/templates/admin/base_site.html /path/to/virtualenv/django/contrib/admin/templates/admin/base.html </code></pre> <p>...that first entry should be the <code>index.html</code> file in my app's templates directory! Does anyone know why it's not being found? I can post settings if needed.</p>
0
2016-08-23T13:12:32Z
39,113,815
<p>Django's template loader looks for templates in the order that apps are defined in <code>INSTALLED_APPS</code>. In your case you must have defined <code>django.contrib.admin</code> ahead of your app, so Django will always look there first and use the first template it finds. </p> <p>Change it so that your app is first in the list:</p> <pre><code>INSTALLED_APPS = [ 'myapp', 'django.contrib.admin', ... ] </code></pre>
1
2016-08-24T03:13:11Z
[ "python", "django", "django-templates", "django-admin" ]
Numpy: array > 5 yields "The truth value of an array with more than one element is ambiguous"
39,102,439
<p>I know that in numpy you can't simply do conditionals on arrays as it doesn't know how to treat them and that this error is a result of that, however in my case my code is a lot simpler. I have:</p> <pre><code># H and _H are 3x3 arrays with hand-assigned values # uv1 is 3x57600 array of coordinates, hand assigned in a loop HH = np.dot(_H,np.linalg.inv(H)) new_uv = np.dot(HH,uv1) du = uv1[0,:] * new_uv[2,:] u = new_uv[0,:] - du u_greater_5 = u &gt; 5 </code></pre> <p>And the last line gives me the "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()" error. u is of shape (57600,) and I can open up the interactive prompt and the following works fine:</p> <pre><code>&gt;&gt;&gt; a = np.array([1,2,3,4,5]) &gt;&gt;&gt; a &gt; 3 array([False, False, False, True, True], dtype=bool) </code></pre> <p>But the code in the previous block doesn't. I've also tried</p> <pre><code>np.greater(u,5) u[u&gt;5] = 1 </code></pre> <p>But they give the same error as well. Any ideas?</p> <p>Also, I don't know if this is related, but bizarrely, trying to access u[0] gives me a 3-vector of the same 3 values (the first value in u) whereas it should be a scalar? Considering its shape is (57600,) ?</p> <p>Edit: the traceback, as per request:</p> <pre><code>Traceback (most recent call last): File "ros2vid.py", line 333, in &lt;module&gt; process_frames(bag) File "ros2vid.py", line 239, in process_frames u_greater_5 = u &gt; 5 ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>Here's how I generate uv1:</p> <pre><code># frame is an image loaded from a ros bag uv1 = [] im_r = frame.shape[0] im_c = frame.shape[1] for i in range(1, im_r): for j in range(im_c): uv1.append([j, i, 1]) uv1 = np.transpose(np.array(uv1)) </code></pre> <p>and the values of H and _H are just numbers I hard code in by hand. 
Something like:</p> <pre><code>h11 = u0 * cosphi h12 = -alph_u h13 = alph_u + u0 * (-cosphi + cz * sinphi) h21 = -alph_u * sinphi + v0 * cosphi h22 = 0 h23 = alph_v * (sinphi + cz * cosphi) + v0 * (-cosphi + cz * sinphi) h31 = cosphi h32 = 0 h33 = -cosphi + cz * sinphi H = np.array([[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]) </code></pre> <p>All the values up there like u0, cosphi, cz, etc. are simply scalars. I've individually checked them, and _H is assigned similarly as well. I've checked the shapes of H and _H and verified them to be 3x3.</p> <p>Weirdly, I just tried u == 5 and that doesn't give me an error, u > 5 does.</p>
1
2016-08-23T13:21:14Z
39,102,802
<p>I recreated your code with random variables, and it works fine here:</p> <pre><code>import numpy as np H = np.random.rand(3,3)*10 _H = np.random.rand(3,3)*10 uv1 = np.random.rand(3,57600)*10 HH = np.dot(_H,np.linalg.inv(H)) new_uv = np.dot(HH,uv1) du = uv1[0,:] * new_uv[2,:] u = new_uv[0,:] - du u_greater_5 = u &gt; 5 </code></pre> <p>I don't know if it depends on the values of <code>H</code>, <code>_H</code> and <code>uv1</code>. Can you copy-paste my code and check whether the problem persists with it?</p>
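When identical code works with random data but fails with the real data, the first thing worth comparing is the `dtype` of the intermediate arrays: the error in the question is typical of an array whose dtype is `object` rather than a numeric type. A minimal sketch of that check (random stand-in values, not the asker's real matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.random((3, 3)) * 10
_H = rng.random((3, 3)) * 10
uv1 = rng.random((3, 57600)) * 10

HH = _H @ np.linalg.inv(H)
new_uv = HH @ uv1
u = new_uv[0, :] - uv1[0, :] * new_uv[2, :]

# with proper numeric inputs everything stays float64 and the comparison works
print(u.dtype, u.shape)
mask = u > 5
print(mask.dtype)
```

If `u.dtype` prints `object` instead of `float64` on the real data, the problem is in how `H`, `_H` or `uv1` were built, not in the comparison itself.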
1
2016-08-23T13:37:03Z
[ "python", "numpy" ]
Numpy: array > 5 yields "The truth value of an array with more than one element is ambiguous"
39,102,439
<p>I know that in numpy you can't simply do conditionals on arrays as it doesn't know how to treat them and that this error is a result of that, however in my case my code is a lot simpler. I have:</p> <pre><code># H and _H are 3x3 arrays with hand-assigned values # uv1 is 3x57600 array of coordinates, hand assigned in a loop HH = np.dot(_H,np.linalg.inv(H)) new_uv = np.dot(HH,uv1) du = uv1[0,:] * new_uv[2,:] u = new_uv[0,:] - du u_greater_5 = u &gt; 5 </code></pre> <p>And the last line gives me the "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()" error. u is of shape (57600,) and I can open up the interactive prompt and the following works fine:</p> <pre><code>&gt;&gt;&gt; a = np.array([1,2,3,4,5]) &gt;&gt;&gt; a &gt; 3 array([False, False, False, True, True], dtype=bool) </code></pre> <p>But the code in the previous block doesn't. I've also tried</p> <pre><code>np.greater(u,5) u[u&gt;5] = 1 </code></pre> <p>But they give the same error as well. Any ideas?</p> <p>Also, I don't know if this is related, but bizarrely, trying to access u[0] gives me a 3-vector of the same 3 values (the first value in u) whereas it should be a scalar? Considering its shape is (57600,) ?</p> <p>Edit: the traceback, as per request:</p> <pre><code>Traceback (most recent call last): File "ros2vid.py", line 333, in &lt;module&gt; process_frames(bag) File "ros2vid.py", line 239, in process_frames u_greater_5 = u &gt; 5 ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>Here's how I generate uv1:</p> <pre><code># frame is an image loaded from a ros bag uv1 = [] im_r = frame.shape[0] im_c = frame.shape[1] for i in range(1, im_r): for j in range(im_c): uv1.append([j, i, 1]) uv1 = np.transpose(np.array(uv1)) </code></pre> <p>and the values of H and _H are just numbers I hard code in by hand. 
Something like:</p> <pre><code>h11 = u0 * cosphi h12 = -alph_u h13 = alph_u + u0 * (-cosphi + cz * sinphi) h21 = -alph_u * sinphi + v0 * cosphi h22 = 0 h23 = alph_v * (sinphi + cz * cosphi) + v0 * (-cosphi + cz * sinphi) h31 = cosphi h32 = 0 h33 = -cosphi + cz * sinphi H = np.array([[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]) </code></pre> <p>All the values up there like u0, cosphi, cz, etc. are simply scalars. I've individually checked them, and _H is assigned similarly as well. I've checked the shapes of H and _H and verified them to be 3x3.</p> <p>Weirdly, I just tried u == 5 and that doesn't give me an error, u > 5 does.</p>
1
2016-08-23T13:21:14Z
39,104,866
<p>I suspect that <code>u</code> is a one-dimensional array of <em>objects</em>, and the objects are themselves one-dimensional numpy arrays with length 3. I can reproduce the behavior you report as follows. First, create a one-dimensional array of objects (length 3), and fill it with numpy arrays:</p> <pre><code>In [26]: u = np.empty(3, dtype=object) In [27]: u[:] = [np.array([1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])] </code></pre> <p>Check the type and attributes of <code>u</code>. First, it is a numpy array with shape (3,).</p> <pre><code>In [28]: type(u) Out[28]: numpy.ndarray In [29]: u.shape Out[29]: (3,) </code></pre> <p>The first element of <code>u</code> is an array with length 3:</p> <pre><code>In [30]: u[0] Out[30]: array([1, 2, 3]) </code></pre> <p>The <code>dtype</code> of <code>u</code> is <code>dtype('O')</code>, which is numpy's representation for the <code>object</code> data type. (If you <em>print</em> <code>u.dtype</code>, it will show <code>object</code>.)</p> <pre><code>In [31]: u.dtype Out[31]: dtype('O') </code></pre> <p>Now try <code>u &gt; 5</code>:</p> <pre><code>In [32]: u &gt; 5 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-32-7f680a2a9455&gt; in &lt;module&gt;() ----&gt; 1 u &gt; 5 ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p><code>u == 5</code> doesn't generate an error, but it does generate a warning, and returns the <em>scalar</em> False:</p> <pre><code>In [33]: u == 5 /Users/warren/miniconda3/bin/ipython:1: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future. 
#!/bin/bash /Users/warren/miniconda3/bin/python.app Out[33]: False </code></pre> <p>To figure out why <code>u</code> is an object array, you can work backwards, checking the <code>.shape</code> and <code>.dtype</code> attributes of the variables that you use to create <code>u</code>.</p>
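If `u` does turn out to be an object array of sub-arrays, and the sub-arrays all have the same length, it can be flattened back to a numeric array with `np.stack` — though the real fix is finding the scalar (e.g. one of `u0`, `cz`, `cosphi`) that was accidentally an array when `H` was built. A small sketch of the repair:

```python
import numpy as np

# reproduce the suspect situation: a 1-D object array whose elements are arrays
u_bad = np.empty(2, dtype=object)
u_bad[:] = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
print(u_bad.dtype, u_bad.shape)  # object (2,)

# flatten it back into a plain numeric 2-D array
u_fixed = np.stack(u_bad).astype(float)
print(u_fixed.shape)  # (2, 3)
print(u_fixed > 5)    # elementwise comparison now works without error
```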
4
2016-08-23T15:10:59Z
[ "python", "numpy" ]
language file doesn't load automatically in Django
39,102,538
<p>I'm using Python 3 and Django 1.10 for my application, and I am kind of new to Django. I'm planning to support several languages in the Django admin panel. Following the Django documentation, I found out that I have to use a middleware for localization... Here are my settings:</p> <pre><code>MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] LOCALE_PATHS = ( os.path.join(BASE_DIR, 'locale'), ) LANGUAGE_CODE = 'en' ugettext = lambda s: s LANGUAGES = ( ('fa', ugettext('Farsi')), ('en', ugettext('English')), ) </code></pre> <p>When I go to the admin at <code>mylocal/en/admin</code> or <code>mylocal/fa/admin</code>, the interface language changes perfectly. But my language file (<code>.po</code>) always follows <code>LANGUAGE_CODE</code>: when I set <code>LANGUAGE_CODE='fa'</code> it changes to Farsi, but it doesn't switch automatically with the URL. <br> I just want my language files to load based on the URL prefix <code>/en/</code> or <code>/fa/</code>. Here is my <code>urls.py</code> file in case you need to check it:</p> <pre><code>urlpatterns = i18n_patterns( url(r'^admin/', admin.site.urls), ) </code></pre>
1
2016-08-23T13:25:56Z
40,140,545
<p>I have a similar working setup; the main difference seems to be that I'm using <code>ugettext_lazy</code>. That's because I need these strings in my models and settings to be translated when they are accessed, rather than when they are defined (definition happens only once: the strings would only be evaluated on server startup and would not recognize any further changes, e.g. switching the Django admin language). </p> <p>Reference: <a href="https://docs.djangoproject.com/en/1.10/topics/i18n/translation/#lazy-translation" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/i18n/translation/#lazy-translation</a></p> <p>That's what I use (in this special case, German is the default language and I'm translating into English): </p> <h3>project/urls.py</h3> <pre><code>from django.conf.urls import url from django.conf.urls.i18n import i18n_patterns from django.contrib import admin urlpatterns = i18n_patterns( url(r'^admin/', admin.site.urls), ) </code></pre> <h3>project/settings.py</h3> <pre><code>import os from django.utils.translation import ugettext_lazy as _ MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] LANGUAGE_CODE = 'de-de' USE_I18N = True USE_L10N = True LANGUAGES = [ ('de', _('German')), ('en', _('English')), ] LOCALE_PATHS = [ os.path.join(BASE_DIR, 'locale'), ] </code></pre> <h3>app/models.py</h3> <pre><code>from django.db import models from django.utils.translation import ugettext_lazy as _ class Kindergarten(models.Model): stadt = models.CharField(max_length=100, verbose_name=_('Stadt')) class Meta: verbose_name = _('Kindergarten') verbose_name_plural = _('Kindergärten') </code></pre> <h3>Workflow</h3> <pre><code>$ python manage.py makemessages --locale en ... 
edit project/locale/en/LC_MESSAGES/django.po ... $ python manage.py compilemessages </code></pre> <p>Now I can access my translated Django admin (interface + models) via: </p> <ul> <li><a href="http://127.0.0.1:8000/de/admin/app/kindergarten/" rel="nofollow">http://127.0.0.1:8000/de/admin/app/kindergarten/</a></li> <li><a href="http://127.0.0.1:8000/en/admin/app/kindergarten/" rel="nofollow">http://127.0.0.1:8000/en/admin/app/kindergarten/</a></li> </ul> <h3>Notes</h3> <ul> <li>Python 3.5.2</li> <li>Django 1.10.2</li> </ul>
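To see why the lazy variant matters, here is a toy sketch in plain Python (not Django internals — the `LazyString` class and catalog are invented stand-ins) of the difference between resolving a translation at definition time versus at access time:

```python
# current active language, normally set per-request by LocaleMiddleware
_lang = "en"
_catalog = {"en": {"German": "German"}, "de": {"German": "Deutsch"}}

def ugettext(s):
    """Eager lookup: resolves against the language active *right now*."""
    return _catalog[_lang].get(s, s)

class LazyString:
    """Tiny stand-in for ugettext_lazy: defers lookup until str() is called."""
    def __init__(self, s):
        self.s = s
    def __str__(self):
        return ugettext(self.s)

eager = ugettext("German")   # resolved once, at import/definition time
lazy = LazyString("German")  # resolved anew on every str() call

_lang = "de"                 # simulate the middleware switching languages

print(eager)      # German  -- stale, stuck with the startup language
print(str(lazy))  # Deutsch -- follows the currently active language
```

This is exactly why strings evaluated at module import (settings, model fields) need `ugettext_lazy` while strings inside view code can use plain `ugettext`.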
1
2016-10-19T20:04:30Z
[ "python", "django", "python-3.x", "multilingual" ]
Pandas apply function to unique values in column
39,102,563
<p>I've been stuck on a Pandas problem and I can't seem to figure it out. I have a dataframe like this:</p> <pre><code>ref, value, rule, result, new_column a100, 25, high, fail, nan a100, 25, high, pass, nan a100, 25, medium, fail, nan a100, 25, medium, pass, nan a101, 15, high, fail, nan a101, 15, high, pass, nan a102, 20, high, pass, nan </code></pre> <p>I want to add a new column to this dataframe with the following pseudocode</p> <p>For each unique value in ref, if <code>result = fail</code>, then <code>new_column = no</code> for all subsequent rows of the same "ref" value.</p> <p>This is how the new dataframe should look like.</p> <pre><code>ref, value, rule, result, new_column a100, 25, high, fail, no a100, 25, high, pass, no a100, 25, medium, fail, no a100, 25, medium, pass, no a101, 15, high, fail, no a101, 15, high, pass, no a102, 20, high, pass, yes </code></pre> <p>What I've managed to do is the following:</p> <pre><code>ref, value, rule, result, new_column a100, 25, high, fail, no a100, 25, high, pass, yes </code></pre> <p>This is achieved through the <code>df.loc</code> function. But I need the function to apply to unique values, rather than each row.</p>
-1
2016-08-23T13:27:21Z
39,102,649
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow"><code>transform</code></a>:</p> <pre><code>print (df) ref value rule result new_column 0 a100 25 high pass NaN 1 a100 25 high fail NaN 2 a100 25 medium fail NaN 3 a100 25 medium pass NaN 4 a101 15 high fail NaN 5 a101 15 high pass NaN 6 a102 20 high pass NaN df['new_column']=df.groupby('ref')['result'] .transform(lambda x: 'no' if ((x=='fail').any()) else 'yes') print (df) ref value rule result new_column 0 a100 25 high pass no 1 a100 25 high fail no 2 a100 25 medium fail no 3 a100 25 medium pass no 4 a101 15 high fail no 5 a101 15 high pass no 6 a102 20 high pass yes </code></pre> <p>Thank you <a href="http://stackoverflow.com/questions/39102563/pandas-apply-function-to-unique-values-in-column/39102649?noredirect=1#comment65554409_39102649"><code>Jon Clements</code></a> for another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow"><code>replace</code></a>:</p> <pre><code>df['new_column'] = df.groupby('ref')['result'] .transform(lambda L: (L == 'fail').any()) .replace({True: 'no', False: 'yes'}) print (df) ref value rule result new_column 0 a100 25 high pass no 1 a100 25 high fail no 2 a100 25 medium fail no 3 a100 25 medium pass no 4 a101 15 high fail no 5 a101 15 high pass no 6 a102 20 high pass yes </code></pre>
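A variant of the same groupby idea that avoids the Python-level lambda entirely is to aggregate the boolean "is fail" column with the built-in `any` reducer and map the result with `np.where` (column names as in the question, toy data below):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ref": ["a100", "a100", "a100", "a100", "a101", "a101", "a102"],
    "result": ["fail", "pass", "fail", "pass", "fail", "pass", "pass"],
})

# True for every row whose ref group contains at least one 'fail'
any_fail = df["result"].eq("fail").groupby(df["ref"]).transform("any")
df["new_column"] = np.where(any_fail, "no", "yes")
print(df)
```

Since the named `"any"` reducer is vectorized, this should also be faster than the lambda on large frames.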
1
2016-08-23T13:30:30Z
[ "python", "pandas", "dataframe", "unique", "condition" ]
Allow Python.app on El Captain (OS X)
39,102,571
<p><a href="http://i.stack.imgur.com/WKi5J.png" rel="nofollow"><img src="http://i.stack.imgur.com/WKi5J.png" alt="enter image description here"></a></p> <p>I'm using a <code>python</code> executable in a <strong>virtual environment</strong>. I tried doing the whole <code>codesign</code> thing as described <a href="http://wpguru.co.uk/2015/06/how-to-kill-the-accept-incoming-connections-dialogue-on-your-mac-forever/" rel="nofollow">here</a>, including creating the certificate, etc. The command worked, but the result stayed the same. I think it used to work on previous versions of OS X, but I currently use the most recent El Capitan version (10.11.6) and it's not working anymore. Any ideas on how to fix it?</p> <p><strong>EDIT</strong>: I did see <a href="http://stackoverflow.com/questions/34760163/how-to-allow-python-app-to-firewall-on-mac-os-x">this</a> solution, but since my python is in a virtual environment, I'm not sure it applies, unless you guys say otherwise...</p> <p><strong>EDIT 2</strong>: I tried the solution above, didn't work. <strong>I should mention that I am codesigning the python executable in the virtualenv</strong>.</p> <p><strong>EDIT 3</strong>: The thing that ended up working for me was upgrading <code>flask</code> to the current version, (using <code>pip install flask --upgarde</code>), and running the app with <code>export FLASK_APP=app.py; flask run</code> instead of with <code>python app.py</code>. When you run the app with <code>flask run</code>, the annoying dialog box doesn't pop up anymore. No codesigning needed to my knowledge. Hope this helps someone.</p>
11
2016-08-23T13:27:34Z
39,244,070
<p><strong>Why is that happening?</strong></p> <p>So the <code>python</code> executables in El Capitan spawns <code>.../Python.framework/Versions/2.7/Resources/Python.app</code> + some extra magic. The problem is that the framework bundle doesn't have its own signature, and it uses signatures of parent application binaries.</p> <p><strong>How to check?</strong></p> <p>The first thing to check after installing applications from non-Apple-maintained-source-that-might-steal-your-soul, is to check if the application you are installing is <em>restricted</em>:</p> <pre><code>ls -lO /System/Library/Frameworks/Python.framework/Versions/2.7/ csrutil status </code></pre> <p>If it is <strong>restricted</strong> it cannot be removed (even with root) as long as <a href="https://developer.apple.com/library/mac/documentation/Security/Conceptual/System_Integrity_Protection_Guide/ConfiguringSystemIntegrityProtection/ConfiguringSystemIntegrityProtection.html" rel="nofollow">SIP</a> is enabled.</p> <p><strong>What to do?</strong></p> <p>So you have several different options you must try:</p> <ul> <li><p>Pre-Option 0 - <strong>I think you are doing it already</strong>: I am not sure how you are maintaining your virtual environments, so just confirm you are going through the process, like <a href="https://hackercodex.com/guide/python-development-environment-on-mac-osx/" rel="nofollow">here</a>. </p></li> <li><p>Option 1 - <strong>safe, but might not work</strong>: Use <a href="http://brew.sh/" rel="nofollow"><code>brew</code></a> to maintain your executables and <a href="https://pypi.python.org/pypi/pip" rel="nofollow"><code>pip</code></a> to maintain your packages. 
That usually solves the problem immediately, but I am not sure what your case is :) </p></li> <li><p>Option 2 - <strong>dangerous, but will work</strong>: <a href="https://developer.apple.com/library/mac/documentation/Security/Conceptual/System_Integrity_Protection_Guide/ConfiguringSystemIntegrityProtection/ConfiguringSystemIntegrityProtection.html" rel="nofollow">Check</a> and <a href="http://apple.stackexchange.com/questions/208478/how-do-i-disable-system-integrity-protection-sip-aka-rootless-on-os-x-10-11">Disable</a> the SIP. Unless you work in an environment protected by a team of IT guys with years of security experience, I don't suggest it. This option WILL solve the issue, but you are basically getting rid of one of the security layers... GL!</p></li> </ul> <p><strong>UPDATE 1</strong></p> <p>There is another option (not sure if you tried it though)</p> <ul> <li>Option 1.5 - <strong>I have no idea if it will work</strong>: Temporarily apply <em>Option 2</em> (<code>csrutil disable</code>), reboot, go through the <code>codesign</code> process, reboot, and undo <em>Option 2</em> (<code>csrutil enable</code>). I have never tried it, but it doesn't mean you can't :))) <a href="http://stackoverflow.com/questions/34760163/how-to-allow-python-app-to-firewall-on-mac-os-x?noredirect=1&amp;lq=1">Credit goes to this SO answer here</a></li> </ul>
2
2016-08-31T08:08:51Z
[ "python", "osx" ]
python Unicode decode error when accessing records of OrderedDict
39,102,647
<p>using python 3.5.2 on windows (32), I'm reading a DBF file which returns me an OrderedDict.</p> <pre><code>from dbfread import DBF Table = DBF('FME.DBF') for record in Table: print(record) </code></pre> <p>When accessing the first record all is ok until I reach a record which contains diacritics:</p> <pre><code>Traceback (most recent call last): File "getdbe.py", line 3, in &lt;module&gt; for record in Table: File "...\AppData\Local\Programs\Python\Python35-32\lib\site-packages\dbfread\dbf.py", line 311, in _iter_records for field in self.fields] File "...\AppData\Local\Programs\Python\Python35-32\lib\site-packages\dbfread\dbf.py", line 311, in &lt;listcomp&gt; for field in self.fields] File "...\AppData\Local\Programs\Python\Python35-32\lib\site-packages\dbfread\field_parser.py", line 75, in parse return func(field, data) File "...\AppData\Local\Programs\Python\Python35-32\lib\site-packages\dbfread\field_parser.py", line 83, in parseC return decode_text(data.rstrip(b'\0 '), self.encoding) UnicodeDecodeError: 'ascii' codec can't decode byte 0x82 in position 11: ordinal not in range(128) </code></pre> <p>Even if I don't print the record I still have the problem.</p> <p>Any idea ?</p>
1
2016-08-23T13:30:28Z
39,102,922
<p><code>dbfread</code> failed to detect the correct encoding from your DBF file. From the <a href="https://dbfread.readthedocs.io/en/latest/introduction.html#character-encodings" rel="nofollow"><em>Character Encodings</em> section of the documentation</a>:</p> <blockquote> <p><code>dbfread</code> will try to detect the character encoding (code page) used in the file by looking at the <code>language_driver</code> byte. <strong>If this fails it reverts to ASCII</strong>. You can override this by passing <code>encoding='my-encoding'</code>.</p> </blockquote> <p>Emphasis mine.</p> <p>You'll have to pass in an explicit encoding; this will invariably be a Windows codepage. Take a look at the <a href="https://docs.python.org/3/library/codecs.html#standard-encodings" rel="nofollow">supported codecs in Python</a>; you'll have to use one that starts with <code>cp</code> here. If you don't know what codepage to use, you'll have some trial-and-error work to do. Note that some codepages overlap in characters, so even if a codepage appears to produce legible results, you may want to continue searching and trying out different records in your data file to see what fits best.</p>
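One way to narrow the trial-and-error down is to take the offending byte straight from the traceback (0x82 here) and see what it decodes to under a handful of likely DBF codepages; whichever looks right is then passed as, e.g., `DBF('FME.DBF', encoding='cp850')`. The candidate list below is a guess, not something read from the file:

```python
mystery = b"\x82"  # the byte the traceback complained about
candidates = ["cp437", "cp850", "cp852", "cp866", "cp1250", "cp1252"]

for enc in candidates:
    try:
        print(enc, "->", mystery.decode(enc))
    except UnicodeDecodeError:
        print(enc, "-> cannot decode 0x82")
```

For example, 0x82 is "é" in the DOS codepages cp437/cp850 but a low quotation mark in the Windows codepages cp1250/cp1252 — only one of those will make sense in your records.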
0
2016-08-23T13:42:37Z
[ "python", "unicode", "ordereddictionary" ]
Converting Julia JuMP to Python PuLP
39,102,846
<p>I stumbled upon a piece of software that I'd like to convert from Julia to Python (don't have much experience with Julia). The main problem I'm having is I don't understand exactly what is going on in the section I've marked with comments #PROBLEM BELOW/ABOVE</p> <p>skaters_teams is a 180 x 10 matrix (180 skaters and 10 teams) and the team is stored as a binary array where skaters_teams[0] gives the array of player 0 ex:[1, 0, 0, 0, 0, 0, 0, 0, 0, 0]. </p> <pre><code>m = Model(solver=GLPKSolverMIP()) # Variable for skaters in lineup @defVar(m, skaters_lineup[i=1:num_skaters], Bin) # Variable for goalie in lineup @defVar(m, goalies_lineup[i=1:num_goalies], Bin) # One goalie constraint @addConstraint(m, sum{goalies_lineup[i], i=1:num_goalies} == 1) # Eight Skaters constraint @addConstraint(m, sum{skaters_lineup[i], i=1:num_skaters} == 8) # between 2 and 3 centers @addConstraint(m, sum{centers[i]*skaters_lineup[i], i=1:num_skaters} &lt;= 3) @addConstraint(m, 2 &lt;= sum{centers[i]*skaters_lineup[i], i=1:num_skaters}) # between 3 and 4 wingers @addConstraint(m, sum{wingers[i]*skaters_lineup[i], i=1:num_skaters} &lt;= 4) @addConstraint(m, 3&lt;=sum{wingers[i]*skaters_lineup[i], i=1:num_skaters}) # between 2 and 3 defenders @addConstraint(m, 2 &lt;= sum{defenders[i]*skaters_lineup[i], i=1:num_skaters}) @addConstraint(m, sum{defenders[i]*skaters_lineup[i], i=1:num_skaters} &lt;= 3) # Financial Constraint @addConstraint(m, sum{skaters[i,:Salary]*skaters_lineup[i], i=1:num_skaters} + sum{goalies[i,:Salary]*goalies_lineup[i], i=1:num_goalies} &lt;= 50000) # exactly 3 different teams for the 8 skaters constraint @defVar(m, used_team[i=1:num_teams], Bin) #PROBLEM BELOW @addConstraint(m, constr[i=1:num_teams], used_team[i] &lt;= sum{skaters_teams[t, i]*skaters_lineup[t], t=1:num_skaters}) @addConstraint(m, constr[i=1:num_teams], sum{skaters_teams[t, i]*skaters_lineup[t], t=1:num_skaters} &lt;= 6*used_team[i]) #PROBLEM ABOVE @addConstraint(m, sum{used_team[i], i=1:num_teams} 
== 3) </code></pre> <p>Is it looping like so:</p> <pre><code>for i in range(num_teams): for t in range(num_skaters): m += sum(skaters_teams[i][t]*skaters_lineup[t]) &gt;=ut[i] m += sum(skaters_teams[i][t]*skaters_lineup[t]) &lt;=6*ut[i] </code></pre> <p>Also I can't find any documentation on using 3 parameters with <code>@addConstraint</code>. The first is the problem you're adding it to and the third is the constraint you're adding. What is the second?</p> <pre><code>@addConstraint(m, constr[i=1:num_teams], sum{skaters_teams[t, i]*skaters_lineup[t], t=1:num_skaters} &lt;= 6*used_team[i]) </code></pre>
-2
2016-08-23T13:39:14Z
39,108,725
<p>After playing around with it for a bit longer I found a solution. In case anyone has similar issues with pulp and jump here is what I used.</p> <pre><code>for i in range(num_teams): m += sum(x * st[i] for x,st in zip (skaters_lineup, skaters_teams[:])) &gt;= used_team[i] m += sum(x * st[i] for x,st in zip (skaters_lineup, skaters_teams[:])) &lt;= 6*used_team[i] </code></pre>
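For anyone checking the translation, the two linking constraints can be sanity-checked numerically outside the solver: for each team, the number of selected skaters from that team must lie between `used_team[i]` and `6 * used_team[i]`. A toy check with made-up matrices (not the real 180×10 data):

```python
# rows = skaters, columns = teams (binary membership), as in skaters_teams
skaters_teams = [[1, 0], [1, 0], [0, 1], [0, 1]]
skaters_lineup = [1, 1, 0, 1]  # a candidate solution from the solver
used_team = [1, 1]             # the solver's "team is used" indicators

num_skaters, num_teams = 4, 2
picked = [
    sum(skaters_teams[t][i] * skaters_lineup[t] for t in range(num_skaters))
    for i in range(num_teams)
]
print(picked)  # [2, 1]

for i in range(num_teams):
    # used_team[i] <= picked[i]: a team counts as used only if someone is picked
    assert used_team[i] <= picked[i]
    # picked[i] <= 6 * used_team[i]: nobody from a team that is not marked used
    assert picked[i] <= 6 * used_team[i]
```

Together with `sum(used_team) == 3` in the full model, these two inequalities force the 8 skaters to come from exactly 3 teams.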
1
2016-08-23T18:54:31Z
[ "python", "julia-lang", "pulp", "julia-jump" ]
PyCharm set pattern for newly created file
39,102,868
<p>Every time when I create <code>.py</code> file in django project, I need to add two strings </p> <pre><code># -*- coding: utf-8 -*- from __future__ import unicode_literals </code></pre> <p>Is there any way to set some pattern for newly created python files in PyCharm?</p>
2
2016-08-23T13:40:30Z
39,103,414
<p>File->settings->Editor->File and Code Templates->Python Script</p> <p><img src="http://i.stack.imgur.com/uCmiQ.png" alt="enter image description here"></p>
3
2016-08-23T14:03:11Z
[ "python", "pycharm", "jetbrains" ]
Connect and present data from two different tables in django
39,102,912
<p>I'm trying to easily present data from two different tables (classes). I have an <code>Environment</code> class with all the environments details and a <code>Changes</code> class which contain history changes on all my environments.</p> <p>My view is currently showing all my <code>Environment</code> details. I want to add to this view the last change been made on each environment (e.g last modified by: <code>User</code>).</p> <p>My <code>models.py</code> look like this:</p> <pre><code>class System(models.Model): system_name = models.CharField(max_length=40, blank=True) system_id = models.CharField(max_length=100, blank=True) system_clusters = models.ManyToManyField(Cluster, blank=True) system_owner = models.CharField(max_length=20, blank=True) def __str__(self): return self.system_name class Changes(models.Model): date = models.DateTimeField(auto_now_add=True) cluster = models.ForeignKey(System, on_delete=models.CASCADE) user = models.CharField(max_length=20, blank=True) change_reason = models.CharField(max_length=50, blank=True) def __str__(self): return self.date </code></pre> <p>At first, i though to pass a dictionary to my template with the <code>system</code> as a key and a <code>change</code> as a value:</p> <pre><code>last_changes = {} change = Changes.objects.filter(cluster__in=s.system_clusters.all()).order_by('-id')[0] last_changes[s.system_id] = change.change_reason </code></pre> <p>Even though it partially works (I still trying to parse the dict in my template), I feel like this is not the right approach for the task.</p> <p>I'm hoping to reach a result where I can just call <code>system.last_change</code> in my template. Can I add another field for <code>System</code> class that will point to his <code>last_change</code> in the <code>Changes</code> table?</p>
0
2016-08-23T13:42:12Z
39,103,269
<p>You can write a method on System to return the last change for an item:</p> <pre><code>def last_change(self): return self.changes_set.order_by('-date').first() </code></pre> <p>Now you can indeed call <code>system.last_change</code> in the template.</p>
1
2016-08-23T13:57:14Z
[ "python", "django", "django-models" ]
Matplotlib Stacked Histogram Bin Width
39,102,918
<p>When creating a stacked histogram in Matplotlib I noticed that the bin widths shrink. In this simple example:</p> <pre><code>import numpy as np import matplotlib import matplotlib.pylab as plt #Create histograms and plots fig = plt.figure() gs = matplotlib.gridspec.GridSpec(1, 2) h1 = fig.add_subplot(gs[0]) h2 = fig.add_subplot(gs[1]) x = np.random.normal(0, 5, 500) y = np.random.normal(0, 20, 500) bins = np.arange(-60,60, 5) h1.hist([x, y], bins=bins, stacked=True) h2.hist(x, bins=bins, alpha=1) h2.hist(y, bins=bins, alpha=0.5) plt.tight_layout() plt.show() filename = 'sample.pdf' plt.savefig(filename) </code></pre> <p>I get the following output:</p> <p><a href="http://i.stack.imgur.com/jec1A.png" rel="nofollow"><img src="http://i.stack.imgur.com/jec1A.png" alt="enter image description here"></a></p> <p>Notice that the histogram on the left has spacing between each bin even though both the left and right histograms are using the same bins.</p> <p>Is there a way to correct this behavior? I would like the histogram on the left to use the full bin widths such that neighboring bins share an edge.</p>
0
2016-08-23T13:42:26Z
39,104,417
<p>If the first argument of an <code>ax.hist</code> call is a list of lists (or a 2D numpy array, or a combination of the two), matplotlib automatically draws the bars narrower than the bin width, which produces the gaps you see. The <code>rwidth</code> parameter of the <code>ax.hist</code> call sets the relative width of the bars as a fraction of the bin width (it does not change the bins themselves). Setting it to 1 makes the bars span the full bin:</p> <p><code>h1.hist([x, y], bins=bins, stacked=True, rwidth=1)</code></p>
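A quick sketch to confirm the bars really span the whole bin after setting `rwidth=1` — it inspects the width of the drawn rectangles directly (headless Agg backend, random data as in the question):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 5, 500)
y = rng.normal(0, 20, 500)
bins = np.arange(-60, 60, 5)

fig, ax = plt.subplots()
_, _, patches = ax.hist([x, y], bins=bins, stacked=True, rwidth=1)
# each rectangle is now exactly one bin (5 units) wide, so neighbors share edges
print(patches[0][0].get_width())
```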
0
2016-08-23T14:49:53Z
[ "python", "matplotlib" ]
Send email with Office365 python library `python-o365`
39,103,057
<p>OK, so I've been able to send mail and read mail, but now I am trying to attach an attachment to the mail and it doesn't seem to append the document as expected. I don't get any errors, but I also don't get the mail if I attempt to add the attachment.</p> <p>The library I'm using is <a href="https://github.com/Narcolapser/python-o365" rel="nofollow">here</a>.</p> <p>The value returned from the function is <code>True</code>, but an email never arrives. If I remove the <code>m.attachments.append('/path/to/data.xls')</code> line, the email arrives as expected (without an attachment, of course).</p> <p><strong>Code</strong> </p> <pre><code>def sendAddresses(username, password): try: authenticiation = (username, password) m = Message(auth=authenticiation) m.attachments.append('/path/to/data.xls') m.setRecipients("email@address.com") m.setSubject("Test Subject") m.setBody("Test Email") m.sendMessage() except Exception, e: print e return False return True </code></pre>
0
2016-08-23T13:48:31Z
39,160,843
<p>Please debug this way</p> <pre><code>att = Attachment(path=path) att.save(path) m.attachments.append(att) </code></pre>
0
2016-08-26T07:35:43Z
[ "python", "office365" ]
Adding a new row to a MultiIndex pandas DataFrame with both values and lists
39,103,060
<p>I have a MultiIndex <code>DataFrame</code>:</p> <pre><code> predicted_y actual_y predicted_full actual_full subj_id org_clip 123 3 2 5 [1, 2, 3] [4, 5, 6] </code></pre> <p>That I wish to add a new row to:</p> <pre><code> predicted_y actual_y predicted_full actual_full subj_id org_clip 123 3 2 5 [1, 2, 3] [4, 5, 6] 321 4 20 50 [10, 20, 30] [40, 50, 60] # add this row </code></pre> <p>And the following code does it:</p> <pre><code>df.loc[('321', 4),['predicted_y', 'actual_y']] = [20, 50] df.loc[('321', 4),['predicted_full', 'actual_full']] = [[10,20,30], [40,50,60]] </code></pre> <p><strong>But</strong> when trying to add a new row <em>in a single line</em>, I'm getting an error:</p> <pre><code>df.loc[('321', 4),['predicted_y', 'actual_y', 'predicted_full', 'actual_full']] = [20, 50, [10,20,30], [40,50,60]] &gt;&gt;&gt; ValueError: setting an array element with a sequence. </code></pre> <h2>Notes:</h2> <p>I believe it has something (possibly syntactic) to do with me trying to add a row that contains both values and lists. 
All other attempts had raised the same error; see the following examples:</p> <pre><code>df.loc[('321', 4),['predicted_y', 'actual_y', ['predicted_full', 'actual_full']]] = [20, 50, [10,20,30], [40,50,60]] df.loc[('321', 4),['predicted_y', 'actual_y', ['predicted_full'], ['actual_full']]] = [20, 50, [10,20,30], [40,50,60]] df.loc[('321', 4),['predicted_y', 'actual_y', [['predicted_full'], ['actual_full']]]] = [20, 50, [10,20,30], [40,50,60]] df.loc[('321', 4),['predicted_y', 'actual_y', 'predicted_full', 'actual_full']] = [20, 50, np.array([10,20,30]), np.array([40,50,60])] </code></pre> <p>The code to construct the initial <code>DataFrame</code>:</p> <pre><code>df = pd.DataFrame(index=pd.MultiIndex(levels=[[], []], labels=[[], []], names=['subj_id', 'org_clip']), columns=['predicted_y', 'actual_y', 'predicted_full', 'actual_full']) df.loc[('123', 3),['predicted_y', 'actual_y']] = [2, 5] df.loc[('123', 3),['predicted_full', 'actual_full']] = [[1,2,3], [4,5,6]] </code></pre>
2
2016-08-23T13:48:34Z
39,103,254
<p>Make at least one of the sublists an array of dtype <code>object</code>:</p> <pre><code>In [27]: df.loc[('321', 4),['predicted_y', 'actual_y', 'predicted_full', 'actual_full']] = ( [20, 50, np.array((10, 20, 30), dtype='O'), [40, 50, 60]]) In [28]: df Out[28]: predicted_y actual_y predicted_full actual_full subj_id org_clip 123 3 2 5 [1, 2, 3] [4, 5, 6] 321 4 20 50 [10, 20, 30] [40, 50, 60] </code></pre> <hr> <p>Notice that the error </p> <pre><code>ValueError: setting an array element with a sequence. </code></pre> <p>occurs on this line:</p> <pre><code>--&gt; 643 arr_value = np.array(value) </code></pre> <p>and can be reproduced like this</p> <pre><code>In [12]: np.array([20, 50, [10, 20, 30], [40, 50, 60]]) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-12-f6122275ab9f&gt; in &lt;module&gt;() ----&gt; 1 np.array([20, 50, [10, 20, 30], [40, 50, 60]]) ValueError: setting an array element with a sequence. </code></pre> <p>But if one of the sublists is an array of dtype object, then the result is an array of dtype object:</p> <pre><code>In [16]: np.array((20, 50, np.array((10, 20, 30), dtype='O'), (40, 50, 60))) Out[16]: array([20, 50, array([10, 20, 30], dtype=object), (40, 50, 60)], dtype=object) </code></pre> <p>Thus the ValueError can be avoided.</p>
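An alternative that sidesteps the dtype juggling entirely is to build the new row as its own one-row frame and concatenate it with `pd.concat`; list-valued cells pose no problem when they arrive through the DataFrame constructor. A sketch with the question's columns (the initial frame is built directly here rather than via the deprecated `labels=` keyword):

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([("123", 3)], names=["subj_id", "org_clip"])
df = pd.DataFrame(
    {"predicted_y": [2], "actual_y": [5],
     "predicted_full": [[1, 2, 3]], "actual_full": [[4, 5, 6]]},
    index=idx,
)

# build the new row as a one-row frame; each list is wrapped in an outer list
new_idx = pd.MultiIndex.from_tuples([("321", 4)], names=["subj_id", "org_clip"])
new_row = pd.DataFrame(
    {"predicted_y": [20], "actual_y": [50],
     "predicted_full": [[10, 20, 30]], "actual_full": [[40, 50, 60]]},
    index=new_idx,
)

df = pd.concat([df, new_row])
print(df)
```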
2
2016-08-23T13:56:49Z
[ "python", "python-3.x", "pandas", "dataframe", "multi-index" ]
Adding a new row to a MultiIndex pandas DataFrame with both values and lists
39,103,060
<p>I have a MultiIndex <code>DataFrame</code>:</p> <pre><code> predicted_y actual_y predicted_full actual_full subj_id org_clip 123 3 2 5 [1, 2, 3] [4, 5, 6] </code></pre> <p>That I wish to add a new row to:</p> <pre><code> predicted_y actual_y predicted_full actual_full subj_id org_clip 123 3 2 5 [1, 2, 3] [4, 5, 6] 321 4 20 50 [10, 20, 30] [40, 50, 60] # add this row </code></pre> <p>And the following code does it:</p> <pre><code>df.loc[('321', 4),['predicted_y', 'actual_y']] = [20, 50] df.loc[('321', 4),['predicted_full', 'actual_full']] = [[10,20,30], [40,50,60]] </code></pre> <p><strong>But</strong> when trying to add a new row <em>in a single line</em>, I'm getting an error:</p> <pre><code>df.loc[('321', 4),['predicted_y', 'actual_y', 'predicted_full', 'actual_full']] = [20, 50, [10,20,30], [40,50,60]] &gt;&gt;&gt; ValueError: setting an array element with a sequence. </code></pre> <h2>Notes:</h2> <p>I believe it has something (possibly syntactic) to do with me trying to add a row that contains both values and lists. 
All other attempts had raised the same error; see the following examples:</p> <pre><code>df.loc[('321', 4),['predicted_y', 'actual_y', ['predicted_full', 'actual_full']]] = [20, 50, [10,20,30], [40,50,60]] df.loc[('321', 4),['predicted_y', 'actual_y', ['predicted_full'], ['actual_full']]] = [20, 50, [10,20,30], [40,50,60]] df.loc[('321', 4),['predicted_y', 'actual_y', [['predicted_full'], ['actual_full']]]] = [20, 50, [10,20,30], [40,50,60]] df.loc[('321', 4),['predicted_y', 'actual_y', 'predicted_full', 'actual_full']] = [20, 50, np.array([10,20,30]), np.array([40,50,60])] </code></pre> <p>The code to construct the initial <code>DataFrame</code>:</p> <pre><code>df = pd.DataFrame(index=pd.MultiIndex(levels=[[], []], labels=[[], []], names=['subj_id', 'org_clip']), columns=['predicted_y', 'actual_y', 'predicted_full', 'actual_full']) df.loc[('123', 3),['predicted_y', 'actual_y']] = [2, 5] df.loc[('123', 3),['predicted_full', 'actual_full']] = [[1,2,3], [4,5,6]] </code></pre>
2
2016-08-23T13:48:34Z
39,103,475
<p>You can let <code>pd.Series</code> handle the <code>dtypes</code></p> <pre><code>row_to_append = pd.Series([20, 50, [10, 20, 30], [40, 50, 60]]) cols = ['predicted_y', 'actual_y', 'predicted_full', 'actual_full'] df.loc[(321, 4), cols] = row_to_append.values df </code></pre> <p><a href="http://i.stack.imgur.com/nr1r3.png" rel="nofollow"><img src="http://i.stack.imgur.com/nr1r3.png" alt="enter image description here"></a></p>
3
2016-08-23T14:05:49Z
[ "python", "python-3.x", "pandas", "dataframe", "multi-index" ]
when exporting my data into csv, my output is in disorder maybe because of tabs and whitespaces
39,103,075
<pre><code>class Job(Item): a_title = Field() b_url = Field() c_date = Field() d_pub = Field() class stage(Spider): name = 'jobs' start_urls = ['http://www.stagiaire.com/offres-stages.html/'] def parse(self, response): for i in response.css('.info-offre'): title = i.css('.titleads::text').extract() url = i.css('.titleads::attr(href)').extract() date = i.css('.date-offre.tip::text').extract() pub = i.css('.content-1+ .content-1 .date-offre::text').extract() yield Job(a_title=title, b_url=url, c_date=date, d_pub=pub) </code></pre> <p><a href="http://i.stack.imgur.com/oZZmt.jpg" rel="nofollow">this my output</a></p>
1
2016-08-23T13:49:45Z
39,110,723
<p>Since you are not using Scrapy's ItemLoaders, you end up with whole lists in your results where you probably expect single elements. To fix this, use <code>extract_first()</code> instead of <code>extract()</code> to get only the first matched element.</p> <p>In your case it should be:</p> <pre><code>title = i.css('.titleads::text').extract_first('') # defaults to '' url = i.css('.titleads::attr(href)').extract_first('').strip() # get rid of spaces, \n etc. date = i.css('.date-offre.tip::text').extract_first('') pub = i.css('.content-1+ .content-1 .date-offre::text').extract_first('') </code></pre> <p>Actually it seems like you want to use an <a href="http://doc.scrapy.org/en/latest/topics/loaders.html" rel="nofollow">ItemLoader</a> here to clean all fields of newlines etc.</p> <pre><code>from scrapy.loader import ItemLoader from scrapy import Item, Field from scrapy.loader.processors import Compose, TakeFirst class MyItem(Item): title = Field() url = Field() date = Field() pub = Field() class MyItemLoader(ItemLoader): default_item_class = MyItem # this will process every field in the item: take the first element and remove all newlines and trailing spaces default_output_processor = Compose(TakeFirst(), lambda v: v.replace('\n', '').strip()) </code></pre> <p>This might look like a lot, but Item Loaders are just wrappers around item objects which do something when you either put a value in or take it out. In the example above it will process all values, take the first element if it's a list and remove any newlines.</p> <p>Then just create the loader and load in the fields:</p> <pre><code>loader = MyItemLoader(selector=response) loader.add_css('title', '.titleads::text') loader.add_css('url', '.titleads::attr(href)') loader.add_css('date', '.date-offre.tip::text') loader.add_css('pub', '.content-1+ .content-1 .date-offre::text') return loader.load_item() </code></pre>
1
2016-08-23T21:10:46Z
[ "python", "web-scraping", "scrapy" ]
Data manipulation in Pandas/Python
39,103,090
<p>It seems to be simple data manipulation operation. But I am stuck at this.</p> <p>I have a recommendation dataset for a campaign. </p> <pre><code>Masteruserid content 1 100 1 101 1 102 2 100 2 101 2 110 </code></pre> <p>Now for each user we want to recommend atleast 5 content. So for instance Masteruserid 1 has three recommendations, I want to pick remaining two randomly from globally viewed content, which is a separate dataset(list). Then I have to also check for duplicates in case if the randomly picked is already present in the raw dataset. </p> <pre><code>global_content 100 300 301 101 </code></pre> <p>In actual I have around 4000+ Masteruserid's. Now I want assistance in just how to start approaching this.</p>
2
2016-08-23T13:50:11Z
39,103,962
<pre><code>def add_content(df, gc, k=5): n = len(df) gcs = set(gc.squeeze()) if n &lt; k: choices = list(gcs.difference(df.content)) mc = np.random.choice(choices, k - n, replace=False) ids = np.repeat(df.Masteruserid.iloc[-1], k - n) data = dict(Masteruserid=ids, content=mc) return df.append(pd.DataFrame(data), ignore_index=True) return df # groups that already have at least k rows pass through unchanged gb = df.groupby('Masteruserid', group_keys=False) gb.apply(add_content, gc).reset_index(drop=True) </code></pre> <p><a href="http://i.stack.imgur.com/Cc4aA.png" rel="nofollow"><img src="http://i.stack.imgur.com/Cc4aA.png" alt="enter image description here"></a></p>
2
2016-08-23T14:27:45Z
[ "python", "pandas" ]
Data manipulation in Pandas/Python
39,103,090
<p>It seems to be simple data manipulation operation. But I am stuck at this.</p> <p>I have a recommendation dataset for a campaign. </p> <pre><code>Masteruserid content 1 100 1 101 1 102 2 100 2 101 2 110 </code></pre> <p>Now for each user we want to recommend atleast 5 content. So for instance Masteruserid 1 has three recommendations, I want to pick remaining two randomly from globally viewed content, which is a separate dataset(list). Then I have to also check for duplicates in case if the randomly picked is already present in the raw dataset. </p> <pre><code>global_content 100 300 301 101 </code></pre> <p>In actual I have around 4000+ Masteruserid's. Now I want assistance in just how to start approaching this.</p>
2
2016-08-23T13:50:11Z
39,106,711
<p>Try this, using this as recs list, </p> <pre><code>df2['global_content'] 0 100 1 300 2 301 3 101 4 400 5 500 6 401 7 501 recs = pd.DataFrame() recs['content'] = df.groupby('Masteruserid')['content'].apply(lambda x: list(x) + np.random.choice(df2[~df2.isin(list(x))].dropna().values.flatten(), 2, replace=False).tolist()) recs content Masteruserid 1 [100, 101, 102, 300.0, 301.0] 2 [100, 101, 110, 501.0, 301.0] </code></pre>
0
2016-08-23T16:47:36Z
[ "python", "pandas" ]
pandas DataFrame combine_first method converts boolean in floats
39,103,144
<p>I'm running into a strange issue where combine_first method is causing values stored as bool to be upcasted into float64s. Example:</p> <pre><code>In [1]: import pandas as pd In [2]: df1 = pd.DataFrame({"a": [True]}) In [3]: df2 = pd.DataFrame({"b": ['test']}) In [4]: df2.combine_first(df1) Out[4]: a b 0 1.0 test </code></pre> <p>This problem has already been reported in a previous post 3 years ago: <a href="http://stackoverflow.com/q/15349795/3894837">pandas DataFrame combine_first and update methods have strange behavior</a>. This issue was told to be solved but I still have this behaviour under pandas 0.18.1</p> <p>thank you for your help</p>
3
2016-08-23T13:52:43Z
39,104,484
<p>Somewhere along the chain of events to get to a combined dataframe, potential missing values had to be addressed. I'm aware that nothing is missing in your example. <code>None</code> and <code>np.nan</code> are not <code>int</code> or <code>bool</code>. So in order to have a common <code>dtype</code> that can hold a <code>bool</code> and a <code>None</code> or <code>np.nan</code>, it is necessary to cast the column as either <code>object</code> or <code>float</code>. As <code>float</code>, a large number of operations become far more efficient, so it is a decent choice. It obviously isn't the best choice all of the time, but a choice has to be made nonetheless, and pandas tries to infer the best one.</p> <p>A workaround:</p> <p><strong><em>Setup</em></strong></p> <pre><code>df1 = pd.DataFrame({"a": [True]}) df2 = pd.DataFrame({"b": ['test']}) df3 = df2.combine_first(df1) df3 </code></pre> <p><a href="http://i.stack.imgur.com/bMAYf.png"><img src="http://i.stack.imgur.com/bMAYf.png" alt="enter image description here"></a></p> <p><strong><em>Solution</em></strong></p> <pre><code>dtypes = df1.dtypes.combine_first(df2.dtypes) for k, v in dtypes.iteritems(): df3[k] = df3[k].astype(v) df3 </code></pre> <p><a href="http://i.stack.imgur.com/wleEj.png"><img src="http://i.stack.imgur.com/wleEj.png" alt="enter image description here"></a></p>
5
2016-08-23T14:53:07Z
[ "python", "pandas", "dataframe" ]
ord function in python2.7 and python 3.4 are different?
39,103,164
<p>I have been running a script where I use the <code>ord()</code> function and for whatever the reason in python 2.7, it accepts the unicode string character just as it requires and outputs an integer.</p> <p>In python 3.4, this is not so much the case. This is the output of error that is being produced : </p> <pre><code>Traceback (most recent call last): File "udpTransfer.py", line 38, in &lt;module&gt; buf.append(ord(c)) TypeError: ord() expected string of length 1, but int found </code></pre> <p>When I look in both documentations, the ord function is explained to be doing the same exact thing. </p> <p>This is the code that I am using for both python versions:</p> <pre><code>import socket,sys, ast , os, struct from time import ctime import time import csv # creating the udo socket necessary to receive data sock = socket.socket(socket.AF_INET,socket.SOCK_DGRAM) ip = '192.168.10.101' #i.p. of our computer port = 20000 # socket port opened to connect from the matlab udp send data stream server_address = (ip, port) sock.bind(server_address) # bind socket sock.settimeout(2) # sock configuration sock.setblocking(1) print('able to bind') ii = 0 shotNummer = 0 client = '' Array = [] byte = 8192 filename = time.strftime("%d_%m_%Y_%H-%M-%S") filename = filename + '.csv' try : with open(filename,'wb') as csvfile : spamwriter = csv.writer(csvfile, delimiter=',',quotechar='|', quoting=csv.QUOTE_MINIMAL) # spamwriter.writerow((titles)) # as long as data comes in, well take it while True: data,client = sock.recvfrom(byte) buf = [] values = [] for c in data: # print(type(c)) buf.append(ord(c)) if len(buf) == 4 : ### </code></pre> <p>Can anyone explain why python3.4 it says that <code>c</code> is an integer, rather than in Python 2.7 where it is actually a string, just as the <code>ord()</code> function requires?</p>
1
2016-08-23T13:53:27Z
39,103,245
<p>You are passing in an <em>integer</em> to <code>ord()</code> in Python 3. That's because you are iterating over a <code>bytes</code> object in Python 3 (the first element in the tuple return value from <a href="https://docs.python.org/3/library/socket.html#socket.socket.recvfrom" rel="nofollow"><code>socket.recvfrom()</code></a>):</p> <pre><code>&gt;&gt;&gt; for byte in b'abc': ... print(byte) ... 97 98 99 </code></pre> <p>From the <a href="https://docs.python.org/3/library/stdtypes.html#bytes" rel="nofollow"><code>bytes</code> type documentation</a>:</p> <blockquote> <p>While bytes literals and representations are based on ASCII text, bytes objects actually behave like immutable sequences of integers[.]</p> </blockquote> <p>and</p> <blockquote> <p>Since bytes objects are sequences of integers (akin to a tuple), for a bytes object <em>b</em>, <code>b[0]</code> will be an integer [...].</p> </blockquote> <p>In <a href="https://docs.python.org/2/library/socket.html#socket.socket.recvfrom" rel="nofollow">Python 2, <code>socket.recvfrom()</code></a> produces a <code>str</code> object instead, and iteration over such an object gives new one-character string objects, which indeed need to be passed to <code>ord()</code> to be converted to an integer.</p> <p>You could instead use a <code>bytearray()</code> here to get the same integer sequence in both Python 2 and 3:</p> <pre><code>for c in bytearray(data): # c is now in integer in both Python 2 and 3 </code></pre> <p>You don't need to use <code>ord()</code> at all in that case.</p>
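The difference described above can be sketched without any sockets at all; in this minimal example the `data` literal is only a stand-in for whatever `sock.recvfrom()` would have returned:

```python
# In Python 3, iterating a bytes object yields integers directly,
# so no ord() call is needed. bytearray behaves the same way in
# Python 2 and 3, which makes it the portable choice.
data = b'\x01\x02abc'  # stand-in for the bytes from sock.recvfrom()

buf = [c for c in data]        # Python 3: already ints
buf2 = list(bytearray(data))   # same result on Python 2 and 3

print(buf)   # [1, 2, 97, 98, 99]
print(buf2)  # [1, 2, 97, 98, 99]
```

Dropping `ord()` and wrapping the received payload in `bytearray()` is usually the smallest change that keeps a script working on both interpreter versions.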
5
2016-08-23T13:56:34Z
[ "python", "python-2.7", "python-3.x" ]
ord function in python2.7 and python 3.4 are different?
39,103,164
<p>I have been running a script where I use the <code>ord()</code> function and for whatever the reason in python 2.7, it accepts the unicode string character just as it requires and outputs an integer.</p> <p>In python 3.4, this is not so much the case. This is the output of error that is being produced : </p> <pre><code>Traceback (most recent call last): File "udpTransfer.py", line 38, in &lt;module&gt; buf.append(ord(c)) TypeError: ord() expected string of length 1, but int found </code></pre> <p>When I look in both documentations, the ord function is explained to be doing the same exact thing. </p> <p>This is the code that I am using for both python versions:</p> <pre><code>import socket,sys, ast , os, struct from time import ctime import time import csv # creating the udo socket necessary to receive data sock = socket.socket(socket.AF_INET,socket.SOCK_DGRAM) ip = '192.168.10.101' #i.p. of our computer port = 20000 # socket port opened to connect from the matlab udp send data stream server_address = (ip, port) sock.bind(server_address) # bind socket sock.settimeout(2) # sock configuration sock.setblocking(1) print('able to bind') ii = 0 shotNummer = 0 client = '' Array = [] byte = 8192 filename = time.strftime("%d_%m_%Y_%H-%M-%S") filename = filename + '.csv' try : with open(filename,'wb') as csvfile : spamwriter = csv.writer(csvfile, delimiter=',',quotechar='|', quoting=csv.QUOTE_MINIMAL) # spamwriter.writerow((titles)) # as long as data comes in, well take it while True: data,client = sock.recvfrom(byte) buf = [] values = [] for c in data: # print(type(c)) buf.append(ord(c)) if len(buf) == 4 : ### </code></pre> <p>Can anyone explain why python3.4 it says that <code>c</code> is an integer, rather than in Python 2.7 where it is actually a string, just as the <code>ord()</code> function requires?</p>
1
2016-08-23T13:53:27Z
39,103,293
<p>I think the difference is that in Python 3 the <code>sock.recvfrom(...)</code> call returns bytes while Python 2.7 recvfrom returns a string. So <code>ord</code> did not change but what is being passed to ord has changed. </p> <p><a href="https://docs.python.org/2.7/library/socket.html#socket.socket.recvfrom" rel="nofollow">Python 2.7 recvfrom</a></p> <p><a href="https://docs.python.org/3.5/library/socket.html#socket.socket.recvfrom" rel="nofollow">Python 3.5 recvfrom</a></p>
1
2016-08-23T13:58:47Z
[ "python", "python-2.7", "python-3.x" ]
Declaring a Python list of expressions without evaluating each
39,103,207
<p>I have a large number of arithmetic expressions that I store in a list. For example</p> <pre><code>exp_list = [exp1, exp2, ...,exp10000] </code></pre> <p>I also have indices of the few expressions I need to evaluate.</p> <pre><code>inds = [ind1,ind2,...,ind10] exp_selected = [exp_list[i] for i in inds ] </code></pre> <p>Is there a way to avoid having to evaluate all the expressions in exp_list?</p>
0
2016-08-23T13:55:08Z
39,103,338
<p>If those expressions share some pattern and can be created on the fly, it would be better to use a generator instead of building the whole list up front. This is especially true if you don't need to remember the results, but just want to check whether any (or all) of them are true/false.</p>
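A minimal sketch of that idea; the `expensive` function below is a hypothetical stand-in for an arbitrary costly expression, used only to show that a generator defers the work until each item is requested:

```python
evaluated = []

def expensive(n):
    # Record which "expressions" actually ran
    evaluated.append(n)
    return n * n

# A generator expression evaluates nothing yet, unlike a list
# comprehension which would run all 10000 calls immediately.
lazy = (expensive(n) for n in range(10000))
assert evaluated == []

first_three = [next(lazy) for _ in range(3)]
print(first_three)  # [0, 1, 4]
print(evaluated)    # [0, 1, 2] -- only three calls ever happened
```

If the expressions cannot be generated from a pattern, the same laziness can be had by storing callables (lambdas) and invoking only the ones you need, as the other answer shows.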
0
2016-08-23T14:00:25Z
[ "python", "list" ]
Declaring a Python list of expressions without evaluating each
39,103,207
<p>I have a large number of arithmetic expressions that I store in a list. For example</p> <pre><code>exp_list = [exp1, exp2, ...,exp10000] </code></pre> <p>I also have indices of the few expressions I need to evaluate.</p> <pre><code>inds = [ind1,ind2,...,ind10] exp_selected = [exp_list[i] for i in inds ] </code></pre> <p>Is there a way to avoid having to evaluate all the expressions in exp_list?</p>
0
2016-08-23T13:55:08Z
39,103,788
<p>Suppose you decide to store your expressions as lambdas (to avoid them being immediately evaluated); then you could selectively evaluate them with a simple list comprehension:</p> <pre><code>exp_list = [lambda: 1+2, lambda: 3+4, lambda: 5+6, lambda: 7+8] inds = [1, 3] print [exp() for i, exp in enumerate(exp_list) if i in inds] </code></pre> <p>Produces:</p> <pre><code>[7, 15] </code></pre>
1
2016-08-23T14:19:07Z
[ "python", "list" ]
Calculate values w/ statsmodels given a formula and parameters
39,103,234
<p>I have fitted a model given a Poly-3 function and extracted the found parameters</p> <pre><code>model = smf.ols(formula='A ~ B + I(B ** 2.0) + I(B ** 3.0)', data=sp) poly_3 = model.fit() params = poly_3.params.values </code></pre> <p>I want to save the params for later use since I don't want to train the model each time. Params would e.g. be <code>[ 0.09525563, 0.09655527, -0.00946222, 0.00056942]</code></p> <p><strong>How can I then, given the formula, the params and some x-values get fitted values?</strong> I am thinking of sth. like this:</p> <pre><code>OLS.predict(x=range(20), params=params, formula='A ~ B + I(B ** 2.0) + I(B ** 3.0)') </code></pre> <p>I obviously could write the formula in Python itself, but I feel that I don't have to re-invent the wheel here!</p> <p>Thanks!</p>
1
2016-08-23T13:56:08Z
39,195,186
<p>Please refer to <a href="http://stackoverflow.com/questions/16420407/python-statsmodels-ols-how-to-save-learned-model-to-file">Python statsmodels OLS: how to save learned model to file</a>. You can save your model results to a pickle file, then load the model from the pickle file and use it to predict.</p> <p>Let me know if you have any questions.</p>
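A minimal sketch of the pickle round-trip. The parameter list here is a stand-in for the fitted `poly_3.params` from the question (statsmodels results objects pickle the same way and, as far as I recall, also offer `save()`/`load()` convenience methods); the `poly3` helper is hypothetical, spelled out only to show how saved parameters can be reused without re-fitting:

```python
import pickle

# Hypothetical parameter values, standing in for poly_3.params.values
params = [0.09525563, 0.09655527, -0.00946222, 0.00056942]

blob = pickle.dumps(params)    # serialize once, right after fitting
restored = pickle.loads(blob)  # later: reload without re-training

def poly3(b, p):
    # Evaluate A ~ p0 + p1*B + p2*B**2 + p3*B**3 with the saved parameters
    return p[0] + p[1] * b + p[2] * b ** 2 + p[3] * b ** 3

fitted = [poly3(x, restored) for x in range(20)]
print(fitted[0])  # at B = 0 this is just the intercept, 0.09525563
```

In practice you would use `pickle.dump`/`pickle.load` with a file on disk instead of `dumps`/`loads`, but the pattern is identical.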
0
2016-08-28T19:51:02Z
[ "python", "pandas", "statsmodels" ]
How to import own angularjs javascript files automatically / dynamically?
39,103,289
<p>I am trying to construct a web application using a REST API and an Angular Frontend. Apart from importing all the Angular JS framework files and extensions I also have to add script tags for every single javascript file I wrote on my own (which will be a lot for all the controllers). I have heard about different solutions trying to solve this issue but found nothing so far that is up to date / would work without refactoring. I do not use Node.js but rather a Python / Werkzeug based server which is delivering the content and npm / bower to manage javascript packages. </p> <p><em>Therefore how can I import lots of javascript files automatically / is there any tool that can assist me in the process?</em> Assuming this would be a larger application I wouldn't want to load all x javascript files on a single request. </p> <p><strong>Edit</strong>: I am specifically searching for a way how to handle those imports with Angular / without adding a new dependency. RequireJS e.g. needs something like JQuery. Maybe I am missing the point but right now I don't know any trivial solution under the given requirements.</p>
0
2016-08-23T13:58:40Z
39,104,479
<p>You can use webpack to compile a bunch of files together - it doesn't have to be a single page application. This will also solve the reference order issue.</p> <p>This page explains very well different methodologies: <a href="https://webpack.github.io/docs/motivation.html" rel="nofollow">https://webpack.github.io/docs/motivation.html</a></p> <p>I personally prefer commonjs because it lets you use modules from npm</p> <p>Good luck </p>
1
2016-08-23T14:52:57Z
[ "javascript", "python", "angularjs", "rest" ]
Python 3.5 CSV reading multiline field issue
39,103,307
<p>I am trying to parse a CSV file, which has multi-line string in one of the fields (field 6). I can read the field fine, but when I try to process each line from that field, it gives me one character on a new line at a time instead of a line at a time. Any ideas what am I doing wrong?</p> <pre><code>def lookup(ip, ranges_csv): with open(ranges_csv, 'r') as csvIN: l = csv.reader(csvIN, dialect='excel') next(l) # Skip the header row for row in l: for subnet in row[5]: print(subnet) </code></pre> <p>Let's say in field6 we have 192.168.0.0/24 and 192.168.1.0/24 each on newline. I'm getting this output:</p> <pre><code>1 9 2 . 1 6 8 </code></pre> <p>instead of:</p> <pre><code>192.168.0.0/24 192.168.1.0/24 </code></pre>
0
2016-08-23T13:59:13Z
39,103,524
<p>When you are using</p> <pre><code>for subnet in row[5]: print(subnet) </code></pre> <p>you are iterating over a string, which returns a single character each time.</p> <p>It would be more helpful if you could give us an example so we can see exactly what you mean.</p>
0
2016-08-23T14:07:50Z
[ "python", "csv", "reader" ]
Python 3.5 CSV reading multiline field issue
39,103,307
<p>I am trying to parse a CSV file, which has multi-line string in one of the fields (field 6). I can read the field fine, but when I try to process each line from that field, it gives me one character on a new line at a time instead of a line at a time. Any ideas what am I doing wrong?</p> <pre><code>def lookup(ip, ranges_csv): with open(ranges_csv, 'r') as csvIN: l = csv.reader(csvIN, dialect='excel') next(l) # Skip the header row for row in l: for subnet in row[5]: print(subnet) </code></pre> <p>Let's say in field6 we have 192.168.0.0/24 and 192.168.1.0/24 each on newline. I'm getting this output:</p> <pre><code>1 9 2 . 1 6 8 </code></pre> <p>instead of:</p> <pre><code>192.168.0.0/24 192.168.1.0/24 </code></pre>
0
2016-08-23T13:59:13Z
39,103,540
<p>It makes sense: each element in <code>row</code> is a string, and when you iterate a string you get one character at a time.</p> <p>You can split the string to get your desired result:</p> <pre><code>for row in l: for subnet in row[5].split('\r\n'): print(subnet) </code></pre> <p>Note:</p> <p><code>\r\n</code> means newline on Windows; if the file was written on Linux it should be only <code>\n</code>.</p>
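A self-contained sketch of the fix, using a tiny in-memory stand-in for the ranges CSV (the header and field names are invented for illustration). Using `str.splitlines()` instead of a hard-coded separator handles `\n`, `\r\n` and `\r` uniformly, so the same code works for files written on any OS:

```python
import csv
import io

# Minimal stand-in for the ranges file: field 6 holds two subnets
# separated by a newline inside one quoted cell.
raw = 'f1,f2,f3,f4,f5,f6\na,b,c,d,e,"192.168.0.0/24\n192.168.1.0/24"\n'

reader = csv.reader(io.StringIO(raw), dialect='excel')
next(reader)  # skip the header row

subnets = []
for row in reader:
    # splitlines() yields one subnet per embedded line,
    # instead of one character per iteration
    for subnet in row[5].splitlines():
        subnets.append(subnet)

print(subnets)  # ['192.168.0.0/24', '192.168.1.0/24']
```

Note that the `csv` module already handles the newline embedded in the quoted field; the per-character output in the question came purely from iterating the resulting string.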
1
2016-08-23T14:08:32Z
[ "python", "csv", "reader" ]
Call js function in python to use document.getElementById
39,103,321
<p>I've recently asked about a way to call a js function in python. People gave me advice such as using js2py, or pyv8, but the problem is that it does not allow me to use the following js command: </p> <pre><code>document.getElementById("example"); </code></pre> <p>So my question is: Is there a way to call js from a python function and that allows you to use the js command above?</p> <p>Thanks in advance!</p>
1
2016-08-23T13:59:43Z
39,103,584
<p>If <em>calling a js function in python</em> means <em>How can I select a node with a specific id?</em>, then you can use BeautifulSoup for it:</p> <pre><code>from bs4 import BeautifulSoup html_doc = "&lt;html&gt;&lt;head&gt;&lt;title&gt;&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;div id='example'&gt;&lt;/body&gt;&lt;/html&gt;" soup = BeautifulSoup(html_doc, 'html.parser') soup.find(id="example") </code></pre>
1
2016-08-23T14:10:36Z
[ "javascript", "python", "socket.io", "getelementbyid", "django-socketio" ]
Find all the div tags find class = "post-" followed by some numbers?
39,103,486
<p>I want to find all the div tags with class = "post-some number some text " There are multiple div tags e.g.</p> <pre><code>&lt;div class="post-3562 some text"&gt; &lt;div class="post-some text"&gt; &lt;div class="post-some text"&gt; &lt;div class="post-1324 some text"&gt; &lt;div class="post-4540 some text"&gt; &lt;div class="post-some text"&gt; &lt;div class="post-1122 some text"&gt; </code></pre> <p>I only want to get those div tags with class="post-some number"</p> <p>Currently I have written this:</p> <pre><code>allPostsDiv = soup.find_all("div", class_= "post") </code></pre> <p>Is there a way to achieve what I want to do? May be using regular expressions would help? Any help will be much appreciated.</p>
1
2016-08-23T14:06:19Z
39,104,614
<p>The following Regex will match your test cases:</p> <pre><code>/&lt;div +class= *"post-\d+.*&gt;/g </code></pre> <p>Regex Tester link: <a href="https://regex101.com/r/cX1qZ7/1" rel="nofollow">https://regex101.com/r/cX1qZ7/1</a></p>
-1
2016-08-23T15:00:03Z
[ "python", "regex", "beautifulsoup" ]
Find all the div tags find class = "post-" followed by some numbers?
39,103,486
<p>I want to find all the div tags with class = "post-some number some text " There are multiple div tags e.g.</p> <pre><code>&lt;div class="post-3562 some text"&gt; &lt;div class="post-some text"&gt; &lt;div class="post-some text"&gt; &lt;div class="post-1324 some text"&gt; &lt;div class="post-4540 some text"&gt; &lt;div class="post-some text"&gt; &lt;div class="post-1122 some text"&gt; </code></pre> <p>I only want to get those div tags with class="post-some number"</p> <p>Currently I have written this:</p> <pre><code>allPostsDiv = soup.find_all("div", class_= "post") </code></pre> <p>Is there a way to achieve what I want to do? May be using regular expressions would help? Any help will be much appreciated.</p>
1
2016-08-23T14:06:19Z
39,104,963
<p>You can pass in a regular expression as the value of your <code>class_</code> parameter, like so:</p> <pre><code>soup.find_all(name='div', class_=re.compile(r'^post-\d+$')) </code></pre> <p>Full Program:</p> <pre><code>from bs4 import BeautifulSoup import re soup = BeautifulSoup(''' &lt;root&gt; &lt;div class="post-3562 some text"/&gt; &lt;xdiv class="post-9999 some text"/&gt; &lt;div class="post-some text"/&gt; &lt;div class="post-some text"/&gt; &lt;div class="post-1324some text"/&gt; &lt;div class="some post-4540 text"/&gt; &lt;div class="post-some text"/&gt; &lt;div class="some text post-1122"/&gt; &lt;/root&gt;''', 'html.parser') for div in soup.find_all(name='div', class_=re.compile(r'^post-\d+$')): print div </code></pre> <p>Result:</p> <pre><code>&lt;div class="post-3562 some text"&gt;&lt;/div&gt; &lt;div class="some post-4540 text"&gt;&lt;/div&gt; &lt;div class="some text post-1122"&gt;&lt;/div&gt; </code></pre>
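Why the `^` and `$` anchors matter: as the output above shows, BeautifulSoup tests the pattern against each whitespace-separated class token, which can be sketched without bs4 at all. The class strings below are hypothetical, mirroring the variants in the answer's test document:

```python
import re

# Anchored pattern: a token must be exactly "post-" followed by digits
pattern = re.compile(r'^post-\d+$')

class_attrs = [
    'post-3562 some text',
    'post-some text',
    'post-1324some text',   # digits fused to text: no standalone token
    'some post-4540 text',
    'some text post-1122',
]

matching = [attr for attr in class_attrs
            if any(pattern.match(token) for token in attr.split())]
print(matching)
# ['post-3562 some text', 'some post-4540 text', 'some text post-1122']
```

Without the `$` anchor, `post-1324some` would also match, since `\d+` would succeed on the leading digits and the trailing letters would be ignored.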
3
2016-08-23T15:14:45Z
[ "python", "regex", "beautifulsoup" ]
Get the names and count of all buckets in AWS S3
39,103,561
<p>I'm new to Boto3 and AWS API, and I want to get the list of buckets' names and the count of the available buckets in S3. </p> <p>Any help is appreciated.</p>
0
2016-08-23T14:09:30Z
39,110,079
<p>This script will help you to list all the bucket names and also get the count.</p> <pre><code>import boto from boto.s3.connection import OrdinaryCallingFormat conn = boto.connect_s3(calling_format=OrdinaryCallingFormat()) count = 0 print ("Bucket names: ") for bucket in conn.get_all_buckets(): print (bucket.name) count = count + 1 print ("Total count of S3 buckets is ", count) </code></pre> <p>Note: specify your AWS keys in the script if you have not already specified them in the <code>.boto</code> file.</p> <p>Hope it helps!</p>
2
2016-08-23T20:24:30Z
[ "python", "python-3.x", "amazon-web-services", "amazon-s3", "boto3" ]
Get the names and count of all buckets in AWS S3
39,103,561
<p>I'm new to Boto3 and AWS API, and I want to get the list of buckets' names and the count of the available buckets in S3. </p> <p>Any help is appreciated.</p>
0
2016-08-23T14:09:30Z
39,114,716
<p>To get all buckets in your account:</p> <pre><code>import boto3 s3 = boto3.resource('s3') bucket_list = [bucket.name for bucket in s3.buckets.all()] print len(bucket_list) print bucket_list </code></pre>
1
2016-08-24T04:57:30Z
[ "python", "python-3.x", "amazon-web-services", "amazon-s3", "boto3" ]
How to use BeautifulSoup to get only strings from tags that have specific start?
39,103,570
<p>I am scraping usernames and all of them are in the same a tag and their hrefs all start the same, like this:</p> <pre><code>&lt;a href="http://lolprofile.net/summoner/eune/Sadastyczny" class="link5"&gt;Sadastyczny&lt;/a&gt; </code></pre> <p>I tried finding only if they have the class link5 but there are other values that have that class which I don't want to scrape. So is there a way to search for all the tags which have the</p> <pre><code>href="http://lolprofile.net/summoner" </code></pre> <p>in them but not the rest since that obviously is different for every username?</p>
1
2016-08-23T14:09:51Z
39,103,743
<p>As described in the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#the-keyword-arguments" rel="nofollow">BeautifulSoup documentation</a>, you can pass a compiled regular expression as the <code>href</code> filter to match the common URL prefix:</p> <p><code>soup.find_all(href=re.compile("http://lolprofile.net/summoner/*"))</code></p> <p>Don't forget to import the <code>re</code> module!</p>
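A hedged refinement of the pattern above: anchoring with `^` and escaping the literal dots makes the prefix test strict (BeautifulSoup applies such patterns with `search()`, so an unanchored pattern could also match the URL appearing in the middle of a longer string). The check itself can be sketched without bs4, on a few hypothetical hrefs:

```python
import re

# Escaped dots + ^ anchor: only strings that truly begin with the
# summoner URL pass.
pattern = re.compile(r'^http://lolprofile\.net/summoner/')

hrefs = [
    'http://lolprofile.net/summoner/eune/Sadastyczny',
    'http://lolprofile.net/leaderboards',
    'http://example.com/?next=http://lolprofile.net/summoner/x',
]

accepted = [h for h in hrefs if pattern.match(h)]
print(accepted)  # only the first URL survives
```

With bs4 this would read `soup.find_all("a", href=pattern)`, optionally combined with `class_="link5"` to narrow the search further.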
1
2016-08-23T14:17:33Z
[ "python", "beautifulsoup" ]
Trying to adapt TensorFlow's MNIST example gives NAN predictions
39,103,614
<p>I'm playing with TensorFlow, using the 'MNIST for beginners' example (<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py" rel="nofollow">initial code here</a>). I've made some slight adaptions:</p> <pre><code>mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) sess = tf.InteractiveSession() # Create the model x = tf.placeholder(tf.float32, [None, 784]) W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) y = tf.nn.softmax(tf.matmul(x, W) + b) # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10]) cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) fake_images = mnist.train.images.tolist() # Train tf.initialize_all_variables().run() for i in range(10): batch_xs, batch_ys = fake_images, mnist.train.labels train_step.run({x: batch_xs, y_: batch_ys}) # Test trained model print(y.eval({x: mnist.test.images})) </code></pre> <p>Specifically, I'm only running the training step 10 times (I'm not concerned about accuracy, more about speed). I'm also running it on all the data at once (for simplicity). At the end, I'm outputting the predictions TF is making, instead of the accuracy percentage. Here's (some of) the output of the above code:</p> <pre><code> [ 1.08577311e-02 7.29394853e-01 5.02395593e-02 ..., 2.74689011e-02 4.43389975e-02 2.32385024e-02] ..., [ 2.95746652e-03 1.30554764e-02 1.39354384e-02 ..., 9.16484520e-02 9.70732421e-02 2.57733971e-01] [ 5.94450533e-02 1.36338845e-01 5.22132218e-02 ..., 6.91468120e-02 1.95634082e-01 4.83607128e-02] [ 4.46179360e-02 6.66685810e-04 3.84704918e-02 ..., 6.51754031e-04 2.46591796e-03 3.10819712e-03]] </code></pre> <p>Which appears to be the probabilities TF is assigning to each of the possibilities (0-9). 
All is well with the world.</p> <p>My main goal is to adapt this to another use, but first I'd like to make sure I can give it other data. This is what I've tried:</p> <pre><code>fake_images = np.random.rand(55000, 784).astype('float32').tolist() </code></pre> <p>Which, as I understand it, should generate an array of random junk that is structurally the same as the data from MNIST. But making the change above, here's what I get:</p> <pre><code>[[ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] ..., [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan]] </code></pre> <p>Which is clearly much less useful. Looking at each option (<code>mnist.train.images</code> and the <code>np.random.rand</code> option), it looks like both are a <code>list</code> of <code>list</code>s of <code>float</code>s. </p> <p><strong>Why won't TensorFlow accept this array?</strong> Is it simply complaining because it recognizes that there's no way it can learn from a bunch of random data? I would expect not, but I've been wrong before.</p>
1
2016-08-23T14:11:51Z
39,109,819
<p>The real MNIST data is very sparse: most of the values are zero. Your synthetic data is uniformly distributed (see <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.random.rand.html" rel="nofollow">numpy</a>). The trained W and b assume a sparse input. The model you trained may have overfit strongly, with very large W weights connected to particular input pixels to produce good output probabilities (a large post-softmax value needs a large pre-softmax activation). When you feed in your synthetic data, all input magnitudes are suddenly much larger than before, resulting in very large activations everywhere and possibly causing overflow.</p>
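A quick NumPy sketch (my own illustration, not the poster's code) of the overflow this describes: with float32, `exp()` of a large activation overflows to `inf`, and `inf/inf` inside the softmax produces `nan`.

```python
import numpy as np

# Hypothetical activations: modest ones from sparse MNIST-like input,
# huge ones from dense uniform input hitting large trained weights.
small = np.array([2.0, -1.0, 0.5], dtype=np.float32)
large = np.array([2000.0, -1000.0, 500.0], dtype=np.float32)

def naive_softmax(a):
    e = np.exp(a)        # overflows to inf for large activations
    return e / e.sum()

p_small = naive_softmax(small)   # well-behaved probabilities
p_large = naive_softmax(large)   # inf/inf -> nan

print(p_small, np.log(p_small))  # finite
print(p_large)                   # contains nan, and log(nan) stays nan
```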
0
2016-08-23T20:05:42Z
[ "python", "machine-learning", "tensorflow" ]
Trying to adapt TensorFlow's MNIST example gives NAN predictions
39,103,614
<p>I'm playing with TensorFlow, using the 'MNIST for beginners' example (<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py" rel="nofollow">initial code here</a>). I've made some slight adaptions:</p> <pre><code>mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) sess = tf.InteractiveSession() # Create the model x = tf.placeholder(tf.float32, [None, 784]) W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) y = tf.nn.softmax(tf.matmul(x, W) + b) # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10]) cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) fake_images = mnist.train.images.tolist() # Train tf.initialize_all_variables().run() for i in range(10): batch_xs, batch_ys = fake_images, mnist.train.labels train_step.run({x: batch_xs, y_: batch_ys}) # Test trained model print(y.eval({x: mnist.test.images})) </code></pre> <p>Specifically, I'm only running the training step 10 times (I'm not concerned about accuracy, more about speed). I'm also running it on all the data at once (for simplicity). At the end, I'm outputting the predictions TF is making, instead of the accuracy percentage. Here's (some of) the output of the above code:</p> <pre><code> [ 1.08577311e-02 7.29394853e-01 5.02395593e-02 ..., 2.74689011e-02 4.43389975e-02 2.32385024e-02] ..., [ 2.95746652e-03 1.30554764e-02 1.39354384e-02 ..., 9.16484520e-02 9.70732421e-02 2.57733971e-01] [ 5.94450533e-02 1.36338845e-01 5.22132218e-02 ..., 6.91468120e-02 1.95634082e-01 4.83607128e-02] [ 4.46179360e-02 6.66685810e-04 3.84704918e-02 ..., 6.51754031e-04 2.46591796e-03 3.10819712e-03]] </code></pre> <p>Which appears to be the probabilities TF is assigning to each of the possibilities (0-9). 
All is well with the world.</p> <p>My main goal is to adapt this to another use, but first I'd like to make sure I can give it other data. This is what I've tried:</p> <pre><code>fake_images = np.random.rand(55000, 784).astype('float32').tolist() </code></pre> <p>Which, as I understand it, should generate an array of random junk that is structurally the same as the data from MNIST. But making the change above, here's what I get:</p> <pre><code>[[ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] ..., [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan]] </code></pre> <p>Which is clearly much less useful. Looking at each option (<code>mnist.train.images</code> and the <code>np.random.rand</code> option), it looks like both are a <code>list</code> of <code>list</code>s of <code>float</code>s. </p> <p><strong>Why won't TensorFlow accept this array?</strong> Is it simply complaining because it recognizes that there's no way it can learn from a bunch of random data? I would expect not, but I've been wrong before.</p>
1
2016-08-23T14:11:51Z
39,112,876
<p>What is messing you up is that log(softmax) isn't numerically stable.</p> <p><a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/nn.html#softmax_cross_entropy_with_logits" rel="nofollow">The softmax cross entropy with logits loss</a> is numerically stabilised.</p> <p>So you can do:</p> <pre><code>activations = tf.matmul(x, W) + b # note: the labels placeholder is y_, not the softmax output y loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(activations, y_)) # only to get predictions, for accuracy or actual forward use of the model predictions = tf.nn.softmax(activations) </code></pre> <p>I'm too lazy to find the machine learning Stack Exchange articles on log-softmax numerical stability, but you can find them pretty quickly, I'm sure.</p>
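The stabilisation that fused op performs can be sketched in NumPy (an illustration of the log-sum-exp trick, not TensorFlow's actual internals): subtracting the max before exponentiating leaves the result mathematically unchanged but keeps every exponent at or below zero, so nothing overflows.

```python
import numpy as np

def log_softmax_stable(a):
    # shift by the max: every exp() argument is now <= 0, so no overflow
    shifted = a - a.max()
    return shifted - np.log(np.exp(shifted).sum())

a = np.array([2000.0, -1000.0, 500.0], dtype=np.float32)

naive = np.log(np.exp(a) / np.exp(a).sum())   # exp() overflows -> nan
stable = log_softmax_stable(a)                # finite log-probabilities

print(naive)
print(stable)
```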
0
2016-08-24T01:01:46Z
[ "python", "machine-learning", "tensorflow" ]
Restore CDATA during lxml serialization
39,103,615
<p>I know that I can preserve CDATA sections during XML parsing, using the following:</p> <pre><code>from lxml import etree parser = etree.XMLParser(strip_cdata=False) root = etree.XML('&lt;root&gt;&lt;![CDATA[test]]&gt;&lt;/root&gt;', parser) </code></pre> <p>See <a href="http://lxml.de/api.html#cdata" rel="nofollow">APIs specific to lxml.etree</a></p> <p>But, is there a simple way to "restore" CDATA section during serialization? For example, by specifying a list of tag names…</p> <p>For instance, I want to turn:</p> <pre><code>&lt;CONFIG&gt; &lt;BODY&gt;This is a &amp;lt;message&amp;gt;.&lt;/BODY&gt; &lt;/CONFIG&gt; </code></pre> <p>to:</p> <pre><code>&lt;CONFIG&gt; &lt;BODY&gt;&lt;![CDATA[This is a &lt;message&gt;.]]&gt;&lt;/BODY&gt; &lt;/CONFIG&gt; </code></pre> <p>Just by telling that <code>BODY</code> should contains CDATA…</p>
0
2016-08-23T14:11:58Z
39,104,395
<p>Something like this?</p> <pre><code>from lxml import etree parser = etree.XMLParser(strip_cdata=True) root = etree.XML('&lt;root&gt;&lt;x&gt;&lt;![CDATA[&lt;test&gt;]]&gt;&lt;/x&gt;&lt;/root&gt;', parser) print etree.tostring(root) for elem in root.findall('x'): elem.text = etree.CDATA(elem.text) print etree.tostring(root) </code></pre> <p>Produces:</p> <pre><code>&lt;root&gt;&lt;x&gt;&amp;lt;test&amp;gt;&lt;/x&gt;&lt;/root&gt; &lt;root&gt;&lt;x&gt;&lt;![CDATA[&lt;test&gt;]]&gt;&lt;/x&gt;&lt;/root&gt; </code></pre>
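For the "list of tag names" part of the question, the same idea extends with `iter()`. A sketch, where `CDATA_TAGS` is a hypothetical list you would fill in with your own tag names:

```python
from lxml import etree

xml = '<CONFIG><BODY>This is a &lt;message&gt;.</BODY></CONFIG>'
root = etree.fromstring(xml)

CDATA_TAGS = ['BODY']  # tags whose text should be serialized as CDATA

for tag in CDATA_TAGS:
    for elem in root.iter(tag):
        if elem.text:
            elem.text = etree.CDATA(elem.text)

print(etree.tostring(root))
```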
1
2016-08-23T14:49:06Z
[ "python", "lxml", "cdata" ]
Comparing two arrays which have very dispersed values
39,103,641
<p>I have a very sparse array that looks like:</p> <pre><code>Array A: min = -68093253945.0 max=8.54631971208e+13 Array B: min=-1e+15 max = 1.87343e+14 </code></pre> <p>And also each array will have concentration at certain levels e.g. near 2000, near 1m, near 0.05 and so on.</p> <p>I am trying to compare these two arrays in terms of concentration, and want to do so in a way that is invariant to the number of entries in each. I also want to account for huge outliers if possible and maybe compress the bins to be between 0 and 1 or something of this sort.</p> <p>The aim is to make a histogram via:</p> <pre><code>plt.hist(A,alpha=0.5,label='A') # plt.hist passes it's arguments to np.histogram ion() plt.hist(B,alpha=0.5,label='B') plt.title("Histogram of Values") plt.legend(loc='upper right') plt.savefig('valuecomp.png') </code></pre> <p>How do I do this? I have experimented with:</p> <pre><code>A = stats.zscore(A) B = stats.zscore(B) A = preprocessing.scale(A) B = preprocessing.scale(B) A = preprocessing.scale(A, axis=0, with_mean=True, with_std=True, copy=True) B = preprocessing.scale(B, axis=0, with_mean=True, with_std=True, copy=True) </code></pre> <p>And then for my histograms, adding <code>normed=True</code>, <code>range(0,100)</code>. All the methods give me a histogram with a massive vertical chunk near to 0.0 instead of distributing the values smoothly. <code>range(0,100)</code> looks good but it ignores any values like 1m outside of 100.</p> <p>Perhaps I need to remove outliers from my data first and then do a histogram?</p>
1
2016-08-23T14:13:13Z
39,380,663
<p>@sascha's suggestion of using AstroML was a good one, but the <code>knuth</code> and <code>freedman</code> versions seem to take astronomically long (excuse the pun), and the <code>blocks</code> version simply thinned the blocks.</p> <p>I took the sigmoid of each value via <code>from scipy.special import expit</code> and then plotted the histogram that way. Only way I could get this to work.</p>
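A sketch of that sigmoid trick (my reconstruction, not the poster's exact code): `expit` maps every value into (0, 1), so both arrays can share a single histogram range regardless of outliers. Note that values far from zero saturate near 0 or 1, so z-scoring before `expit` may still be worthwhile.

```python
import numpy as np
from scipy.special import expit

rng = np.random.RandomState(0)
# two hypothetical arrays with wildly different ranges
A = rng.normal(loc=2000, scale=5e4, size=1000)
B = rng.normal(loc=-1e6, scale=1e7, size=2000)

A_sq, B_sq = expit(A), expit(B)   # every value is now inside [0, 1]

# both can now share the same bins, e.g.
# plt.hist(A_sq, bins=np.linspace(0, 1, 50), alpha=0.5)
print(A_sq.min(), A_sq.max(), B_sq.min(), B_sq.max())
```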
0
2016-09-07T23:59:34Z
[ "python", "scipy", "histogram", "normalization", "binning" ]
How can I recursively import every file in a directory and run a function in each one?
39,103,649
<p>In essence, I'm trying to make an extension system, where each plugin hooks into the important functions via the respective function in the file. I need a way to run this function and get the return value, by just looping through the "plugins" directory.</p> <p>Any way I could do this?</p>
-2
2016-08-23T14:13:37Z
39,103,784
<p>You can import files dynamically using <code>__import__</code>.</p> <p>So you just need to iterate the folder looking for <code>.py</code> files (not <code>.pyc</code>) and import them (note: the directory must be on <code>sys.path</code>, and the <code>.py</code> extension has to be stripped first):</p> <pre><code>for root, dirs, files in os.walk(src_path): for f in files: if f.endswith('.py'): m = __import__(f[:-3])  # strip the '.py' extension </code></pre> <p><code>m</code> will now be the instance of the module, so if you have a function called <code>my_func</code> in it, you can do:</p> <p><code>m.my_func()</code></p> <p>or if you have the name of the function as a string:</p> <p><code>getattr(m,'my_func')()</code></p>
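On Python 3, `importlib` is often a cleaner fit because it loads straight from a file path without touching `sys.path`. A sketch, where `run()` is a hypothetical hook name that each plugin would be expected to define:

```python
import importlib.util
import os

def load_plugins(plugins_dir):
    """Load every .py file under plugins_dir and call its run() hook."""
    results = {}
    for root, dirs, files in os.walk(plugins_dir):
        for fname in files:
            if fname.endswith('.py'):
                path = os.path.join(root, fname)
                name = fname[:-3]                 # module name without '.py'
                spec = importlib.util.spec_from_file_location(name, path)
                mod = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(mod)      # actually runs the file
                if hasattr(mod, 'run'):           # the plugin hook
                    results[name] = mod.run()
    return results
```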
0
2016-08-23T14:19:00Z
[ "python", "python-3.x" ]
Extract variable names and values using REGEX in Python from a text file
39,103,676
<p>I am trying to read a large text file, containing variable names and corresponding values (see below for small example). Names are all upper case and the value is usually separated by a periods and whitespaces, but if the variable name is too long it is separated by only whitespaces.</p> <pre><code>WATER DEPTH .......... 20.00 M TENSION AT TOUCHDOWN . 382.47 KN TOUCHDOWN X-COORD. ... -206.75 M BOTTOM SLOPE ANGLE ... 0.000 DEG PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M </code></pre> <p>I am able to find the values using the following expression:</p> <pre><code>line = ' PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M \n' re.findall(r"[-+]?\d*\.\d+|\d+", line): ['166.74', '1.72'] </code></pre> <p>But when I try to extract the variable names, using below expression I have leading and trailing whitespaces which I would like to leave out.</p> <pre><code>re.findall('(?&lt;=\s.)[A-Z\s]+', line) [' PROJECTED SPAN LENGTH ', ' PIPE LENGTH GAIN ', ' ', ' \n'] </code></pre> <p>I believe it should have something like ^\s, but I can't get it to work. When successful I'd like to store the data in a dataframe, having the variable names as indices and the values as column. </p>
1
2016-08-23T14:14:20Z
39,103,904
<p>Use <code>[A-Z]{2,}(?:\s+[A-Z]+)*</code></p> <p><code>[A-Z]{2,}</code> looks for uppercase words at least 2 characters long</p> <p><code>(?:\s+[A-Z]+)*</code> is a group for labels made of multiple words</p> <p><strong>EDIT</strong></p> <p>To handle the case in your comment I'd recommend:</p> <pre><code>[A-Z-\/]{2,}(?:\s*[A-Z-\/]+(?:\.)*)* </code></pre> <p>just make sure there is at least one space after the last period in <code>R.O.W.</code> and before the <code>...</code></p> <p><code>[A-Z-\/]{2,}</code> will check for uppercase letters, <code>-</code>, and <code>/</code> of length 2 or greater</p> <p><code>(?:\s*[A-Z-\/]+(?:\.)*)*</code> is a group for multiple words and/or words with periods in them</p>
0
2016-08-23T14:24:43Z
[ "python", "regex" ]
Extract variable names and values using REGEX in Python from a text file
39,103,676
<p>I am trying to read a large text file, containing variable names and corresponding values (see below for small example). Names are all upper case and the value is usually separated by a periods and whitespaces, but if the variable name is too long it is separated by only whitespaces.</p> <pre><code>WATER DEPTH .......... 20.00 M TENSION AT TOUCHDOWN . 382.47 KN TOUCHDOWN X-COORD. ... -206.75 M BOTTOM SLOPE ANGLE ... 0.000 DEG PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M </code></pre> <p>I am able to find the values using the following expression:</p> <pre><code>line = ' PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M \n' re.findall(r"[-+]?\d*\.\d+|\d+", line): ['166.74', '1.72'] </code></pre> <p>But when I try to extract the variable names, using below expression I have leading and trailing whitespaces which I would like to leave out.</p> <pre><code>re.findall('(?&lt;=\s.)[A-Z\s]+', line) [' PROJECTED SPAN LENGTH ', ' PIPE LENGTH GAIN ', ' ', ' \n'] </code></pre> <p>I believe it should have something like ^\s, but I can't get it to work. When successful I'd like to store the data in a dataframe, having the variable names as indices and the values as column. </p>
1
2016-08-23T14:14:20Z
39,103,905
<p>If you ever want to take out leading/trailing white space, you can use the <code>.strip()</code> method.</p> <p><a href="http://www.tutorialspoint.com/python/string_strip.htm" rel="nofollow">Python String strip</a></p> <pre><code>stripped_values = [raw.strip() for raw in re.findall('(?&lt;=\s.)[A-Z\s]+', line)] </code></pre>
0
2016-08-23T14:24:44Z
[ "python", "regex" ]
Extract variable names and values using REGEX in Python from a text file
39,103,676
<p>I am trying to read a large text file, containing variable names and corresponding values (see below for small example). Names are all upper case and the value is usually separated by a periods and whitespaces, but if the variable name is too long it is separated by only whitespaces.</p> <pre><code>WATER DEPTH .......... 20.00 M TENSION AT TOUCHDOWN . 382.47 KN TOUCHDOWN X-COORD. ... -206.75 M BOTTOM SLOPE ANGLE ... 0.000 DEG PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M </code></pre> <p>I am able to find the values using the following expression:</p> <pre><code>line = ' PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M \n' re.findall(r"[-+]?\d*\.\d+|\d+", line): ['166.74', '1.72'] </code></pre> <p>But when I try to extract the variable names, using below expression I have leading and trailing whitespaces which I would like to leave out.</p> <pre><code>re.findall('(?&lt;=\s.)[A-Z\s]+', line) [' PROJECTED SPAN LENGTH ', ' PIPE LENGTH GAIN ', ' ', ' \n'] </code></pre> <p>I believe it should have something like ^\s, but I can't get it to work. When successful I'd like to store the data in a dataframe, having the variable names as indices and the values as column. </p>
1
2016-08-23T14:14:20Z
39,104,657
<p>You can use the following expression along with <code>re.finditer()</code>:</p> <pre><code>(?P&lt;category&gt;[A-Z][A-Z- ]+[A-Z]) [. ]+ (?P&lt;value&gt;-?\d[.\d]+)\ (?P&lt;unit&gt;M|DEG|KN) </code></pre> <p>See <a href="https://regex101.com/r/wW2tL0/3" rel="nofollow"><strong>a demo on regex101.com</strong></a>. <hr> In <code>Python</code> this would be:</p> <pre><code>import re rx = re.compile(r''' (?P&lt;category&gt;[A-Z][A-Z- ]+[A-Z]) [. ]+ (?P&lt;value&gt;-?\d[.\d]+)\ (?P&lt;unit&gt;M|DEG|KN) ''', re.VERBOSE) string = ''' WATER DEPTH .......... 20.00 M TENSION AT TOUCHDOWN . 382.47 KN TOUCHDOWN X-COORD. ... -206.75 M BOTTOM SLOPE ANGLE ... 0.000 DEG PROJECTED SPAN LENGTH 166.74 M PIPE LENGTH GAIN ..... 1.72 M ''' matches = [(m.group('category'), m.group('value'), m.group('unit')) \ for m in rx.finditer(string)] print(matches) # [('WATER DEPTH', '20.00', 'M'), ('TENSION AT TOUCHDOWN', '382.47', 'KN'), ('TOUCHDOWN X-COORD', '-206.75', 'M'), ('BOTTOM SLOPE ANGLE', '0.000', 'DEG'), ('PROJECTED SPAN LENGTH', '166.74', 'M'), ('PIPE LENGTH GAIN', '1.72', 'M')] </code></pre> <p>See <a href="http://ideone.com/4JNIx1" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
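To finish the asker's stated goal (variable names as indices, values as a column), the match tuples can go straight into pandas. A sketch building on the same pattern; the column names here are my own choice:

```python
import re
import pandas as pd

rx = re.compile(r'(?P<category>[A-Z][A-Z- ]+[A-Z])[. ]+'
                r'(?P<value>-?\d[.\d]+) (?P<unit>M|DEG|KN)')

string = '''WATER DEPTH .......... 20.00 M
TENSION AT TOUCHDOWN . 382.47 KN'''

# one row per match, with the category as the index
df = pd.DataFrame(
    [(m.group('category'), float(m.group('value')), m.group('unit'))
     for m in rx.finditer(string)],
    columns=['name', 'value', 'unit'],
).set_index('name')

print(df)
```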
0
2016-08-23T15:01:37Z
[ "python", "regex" ]
how to communicate with a router
39,103,719
<p>I have written a script to perform telnet, which i wanted to use for sending commands to my Device Under Test (router).</p> <p>My Telnet Script:</p> <pre><code>import sys, time, telnetlib sys.path.insert(0, '/tmp') import options def telnet_connect(): HOST = "%s" %options.DUT_telnet_ip PORT = "%s" %options.DUT_telnet_port username = "%s" %options.DUT_telnet_username password = "%s" %options.DUT_telnet_password tn = telnetlib.Telnet(HOST, PORT, 10) time.sleep(5) tn.write("\n") tn.read_until("login:", 2) tn.write(username) tn.read_until("Password:", 2) tn.write(password) tn.write("\n") response = tn.read_until("$", 5) return response def telnet_close(): response = tn.write("exit\n") return response </code></pre> <hr> <p>I want to use this script in another program which will check the version of the Router by telneting it. I expect a script which will call my above function to perform telnet and send other commands viz. "version" or "ls"</p>
1
2016-08-23T14:16:38Z
39,104,175
<p>Try to make this more like a class:</p> <pre><code>import sys, time, telnetlib sys.path.insert(0, '/tmp') class TelnetConnection(): def __init__(self, HOST, PORT): self.tn = telnetlib.Telnet(HOST, PORT, 10) def connect(self, username, password): tn = self.tn tn.write("\n") tn.read_until("login:", 2) tn.write(username) tn.read_until("Password:", 2) tn.write(password) tn.write("\n") response = tn.read_until("$", 5) return response def close(self): tn = self.tn response = tn.write("exit\n") return response # add here any methods you need to send other commands </code></pre> <p>Then you can use it as follows:</p> <pre><code>import options HOST = "%s" %options.DUT_telnet_ip PORT = "%s" %options.DUT_telnet_port username = "%s" %options.DUT_telnet_username password = "%s" %options.DUT_telnet_password connection = TelnetConnection(HOST, PORT) connection.connect(username, password) connection.do_all_operations_you_want() # write your own method for that connection.close() </code></pre>
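A possible refinement is making the class a context manager, so the connection is always closed even when a command raises. This is a sketch only, untested against real hardware, and `send_command` is a hypothetical helper (it assumes a `$` shell prompt):

```python
import telnetlib

class TelnetSession(object):
    def __init__(self, host, port, timeout=10):
        self.host, self.port, self.timeout = host, port, timeout
        self.tn = None

    def __enter__(self):
        # connect lazily, only when the 'with' block is entered
        self.tn = telnetlib.Telnet(self.host, self.port, self.timeout)
        return self

    def send_command(self, command, prompt="$", timeout=5):
        # hypothetical helper: send one command, return output up to the prompt
        self.tn.write(command + "\n")
        return self.tn.read_until(prompt, timeout)

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.tn is not None:
            self.tn.write("exit\n")
            self.tn.close()

# usage sketch:
# with TelnetSession(HOST, PORT) as session:
#     session.send_command(username)    # after logging in as appropriate
#     print(session.send_command("version"))
```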
1
2016-08-23T14:38:00Z
[ "python", "telnet" ]
Dreamhost PyCharm Django Python 3 Launching a Site
39,103,775
<p>I'm new to python/django and created a simple django website using PyCharm. The site works fine running on my computer, but I'm really lost when it comes to taking the site onto my host (Dreamhost). I'm able to upload the files to the correct directories fine, and I believe I made the correct changes for the database to be on mysql. I'm not sure where to go from there though. Right now, the page just shows the files rather than the site. Any direction would be greatly appreciated. Thanks so much!</p>
-4
2016-08-23T14:18:36Z
39,103,882
<p>You should not run your Django application using <code>python manage.py runserver</code> in production. How you run the application depends on the web server (Apache, Nginx) you want to use.</p> <p>For Apache - <a href="https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/" rel="nofollow">https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/</a></p> <p>For Nginx - <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04" rel="nofollow">https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04</a></p> <p>The MySQL database should, of course, exist and have the correct credentials. Or, if you choose to use SQLite, the file should be accessible, readable, and writable by <code>www-user</code> (the user name may be different depending on the Linux distribution)</p>
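On Dreamhost shared hosting specifically, Python apps are typically served through Passenger, which looks for a `passenger_wsgi.py` file in the domain's directory. A sketch of what that file might contain, assuming a project named `myproject` (a placeholder; adjust names and paths to your own site):

```python
# passenger_wsgi.py -- deployment sketch, placed in the domain's web directory
import sys, os

# make the project importable; 'myproject' is a placeholder name
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'myproject'))
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()  # Passenger serves this WSGI callable
```

This is deployment configuration rather than standalone runnable code; it assumes Django is installed for the interpreter Passenger invokes.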
0
2016-08-23T14:23:42Z
[ "python", "mysql", "django", "dreamhost" ]
boto3 and connecting to custom url
39,103,814
<p>I have a test environment that mimics the S3 envrionment, and I want to write some test scripts using boto3. How can I connect to that service? </p> <p>I tried:</p> <pre><code> client = boto3.client('s3', region_name="us-east-1", endpoint_url="http://mymachine") client = boto3.client('iam', region_name="us-east-1", endpoint_url="http://mymachine") </code></pre> <p>Both fail to work.</p> <p>The service is setup to use IAM authentication. </p> <p>My error:</p> <pre><code> botocore.exceptions.NoCredentialsError: Unable to locate credentials </code></pre> <p>Any ideas?</p> <p>Thanks</p>
0
2016-08-23T14:20:12Z
39,105,304
<p>Use the following:</p> <pre><code>import boto3 client = boto3.client( 's3', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY,) </code></pre> <p>Please check this link for more ways to configure AWS credentials: <a href="http://boto3.readthedocs.io/en/latest/guide/configuration.html" rel="nofollow">http://boto3.readthedocs.io/en/latest/guide/configuration.html</a></p>
0
2016-08-23T15:32:30Z
[ "python", "python-2.7", "boto3" ]
boto3 and connecting to custom url
39,103,814
<p>I have a test environment that mimics the S3 envrionment, and I want to write some test scripts using boto3. How can I connect to that service? </p> <p>I tried:</p> <pre><code> client = boto3.client('s3', region_name="us-east-1", endpoint_url="http://mymachine") client = boto3.client('iam', region_name="us-east-1", endpoint_url="http://mymachine") </code></pre> <p>Both fail to work.</p> <p>The service is setup to use IAM authentication. </p> <p>My error:</p> <pre><code> botocore.exceptions.NoCredentialsError: Unable to locate credentials </code></pre> <p>Any ideas?</p> <p>Thanks</p>
0
2016-08-23T14:20:12Z
39,117,142
<p>1. The boto API always looks for credentials to pass on to the services it connects to; there is no way to access AWS resources through boto without an access key and password.</p> <p>If you intend to use some other method, e.g. temporary security credentials, your AWS admin must set up roles etc. to allow the VM instance to connect to AWS using the <a href="http://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html" rel="nofollow">AWS Security Token Service</a>.</p> <p>Otherwise, you must request a restricted credential key from your AWS account admin.</p> <p>2. On the other hand, if you want to mimic S3 and test rapid upload/download of huge amounts of data for development, then you should set up <a href="https://github.com/jubos/fake-s3" rel="nofollow">FakeS3</a>. It will take any dummy access key. However, there are a few drawbacks to FakeS3: you can't set up and test S3 bucket policies.</p> <p>3. Even if you configure your S3 bucket to allow anyone to take the file, that only works through the URL; it is a file access permission, not a bucket access permission.</p>
-1
2016-08-24T07:36:51Z
[ "python", "python-2.7", "boto3" ]
No module named matplotlib with matplotlib installed Python 2.7
39,103,877
<p>I am fairly new to python as well as matplotlib and I can't get it to work. From the code :</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt G=nx.Graph() G.add_node("spam") G.add_edge(1,2) plt.show() </code></pre> <p>I get the error:</p> <pre><code>Traceback (most recent call last): File "test.py2", line 2, in &lt;module&gt; import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot </code></pre> <p>This occurs even though it seems to be installed in:</p> <pre><code>usr/lib/python2.7/dist-packages/matplotlib/ </code></pre> <p>Do you guys have any ideas? Thanks in advance</p>
0
2016-08-23T14:23:19Z
39,104,025
<p>You have 2 pythons installed on your machine: one is the standard python that comes with Mac OS X, and the second is the one you installed with ports (this is the one that has matplotlib installed in its library; the one that comes with Mac OS X does not).</p> <pre><code>/usr/bin/python </code></pre> <p>Is the standard Mac python, and since it doesn't have matplotlib you should always start your script with the one installed with ports.</p> <p>If <code>python your_script.py</code> works, then change the shebang (<code>#!</code>) to:</p> <pre><code>#!/usr/bin/env python </code></pre> <p>Or put the full path to the python interpreter that has matplotlib installed in its library.</p>
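A quick diagnostic (independent of this answer's Mac assumption) is to print which interpreter actually runs the script and where it searches for modules; if matplotlib's install directory is missing from the list, you are running the wrong python:

```python
import sys

print(sys.executable)   # the interpreter actually running this script
print(sys.version)      # its version
for p in sys.path:      # where it looks for modules such as matplotlib
    print(p)
```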
1
2016-08-23T14:30:40Z
[ "python", "python-2.7", "matplotlib" ]
No module named matplotlib with matplotlib installed Python 2.7
39,103,877
<p>I am fairly new to python as well as matplotlib and I can't get it to work. From the code :</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt G=nx.Graph() G.add_node("spam") G.add_edge(1,2) plt.show() </code></pre> <p>I get the error:</p> <pre><code>Traceback (most recent call last): File "test.py2", line 2, in &lt;module&gt; import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot </code></pre> <p>This occurs even though it seems to be installed in:</p> <pre><code>usr/lib/python2.7/dist-packages/matplotlib/ </code></pre> <p>Do you guys have any ideas? Thanks in advance</p>
0
2016-08-23T14:23:19Z
39,105,267
<p>You can check whether <code>usr/lib/python2.7/dist-packages</code> (if you are pretty sure matplotlib is installed here) is in your <code>sys.path</code>.</p> <pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; sys.path </code></pre> <p>If you don't find the path in the list, you can add lines below before importing matplotlib.</p> <pre><code>import sys sys.path.insert(0, '/path/to/matplotlib') </code></pre>
0
2016-08-23T15:30:57Z
[ "python", "python-2.7", "matplotlib" ]
No module named matplotlib with matplotlib installed Python 2.7
39,103,877
<p>I am fairly new to python as well as matplotlib and I can't get it to work. From the code :</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt G=nx.Graph() G.add_node("spam") G.add_edge(1,2) plt.show() </code></pre> <p>I get the error:</p> <pre><code>Traceback (most recent call last): File "test.py2", line 2, in &lt;module&gt; import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot </code></pre> <p>This occurs even though it seems to be installed in:</p> <pre><code>usr/lib/python2.7/dist-packages/matplotlib/ </code></pre> <p>Do you guys have any ideas? Thanks in advance</p>
0
2016-08-23T14:23:19Z
39,105,360
<p>thanks for your help. It appeared the wrong Python version was used. By using</p> <pre><code>alias python=/usr/lib/python </code></pre> <p>it was fixed, but only temporarily.</p> <p>To permanently set the alias correctly, I had to edit <code>~/.bash_aliases</code> and insert:</p> <pre><code>alias python=/usr/bin/python2.7 </code></pre> <p>The other python version installed was 3.0, which was set as the default one, but without the matplotlib library.</p>
0
2016-08-23T15:34:58Z
[ "python", "python-2.7", "matplotlib" ]
How to fire a query with a number of variables from local array
39,103,972
<p>Sorry if title is not properly set. The problem is I want to filter DataFrame by comparing df's column with a couple of values from an array:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) array = np.arange(10) #simple query df.query('A == %d' %array[3]) </code></pre> <p>Above query runs perfectly fine, the below query also runs without issue:</p> <pre><code>df.query('A == [3,4,5]') </code></pre> <p>Logically, below code should work too, because I select values from 3rd to 5-th from <code>array</code>:</p> <pre><code>df.query('A == %d' %array[3:5]) </code></pre> <p>Nevertheless, it gives me an error:</p> <pre><code>TypeError: %d format: a number is required, not numpy.ndarray </code></pre> <p>Kindly suggest the path I should follow. Thank you!</p>
0
2016-08-23T14:28:10Z
39,104,091
<p>This returns an integer, which the <code>%d</code> format accepts:</p> <pre><code>df.query('A == %d' %array[3]) </code></pre> <p>This returns an array (not the same kind of object!), which the <code>%d</code> format refuses:</p> <pre><code>df.query('A == %d' %array[3:5]) </code></pre> <p>I suggest:</p> <pre><code>df.query('A == [%s]' % ",".join([str(a) for a in array[3:6]])) </code></pre> <p>this will send <code>'A == [3,4,5]'</code> to the query</p>
1
2016-08-23T14:33:55Z
[ "python", "pandas" ]
How to fire a query with a number of variables from local array
39,103,972
<p>Sorry if title is not properly set. The problem is I want to filter DataFrame by comparing df's column with a couple of values from an array:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) array = np.arange(10) #simple query df.query('A == %d' %array[3]) </code></pre> <p>Above query runs perfectly fine, the below query also runs without issue:</p> <pre><code>df.query('A == [3,4,5]') </code></pre> <p>Logically, below code should work too, because I select values from 3rd to 5-th from <code>array</code>:</p> <pre><code>df.query('A == %d' %array[3:5]) </code></pre> <p>Nevertheless, it gives me an error:</p> <pre><code>TypeError: %d format: a number is required, not numpy.ndarray </code></pre> <p>Kindly suggest the path I should follow. Thank you!</p>
0
2016-08-23T14:28:10Z
39,104,267
<p><code>numexpr</code> doesn't support slicing so the closest you can get is to create a variable with the required data, then reference it from the query (note that the slicing here creates a view of the original <code>array</code> and doesn't perform a copy):</p> <pre><code>sliced = array[3:6] df.query('A == @sliced') </code></pre>
2
2016-08-23T14:43:01Z
[ "python", "pandas" ]
pandas read_csv on a *.dat file delimited with cedilla not splitting into columns in dataframe
39,104,079
<p>This is my first time working on pandas so pardon my ignorance. My requirement is to download a file from S3 onto Ec2 and put the dat file onto a dataframe. This is how my input file data looked </p> <pre><code>1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012ÇFillerÇ 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011ÇFillerÇ 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011ÇFillerÇ 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014ÇFillerÇ 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011ÇFillerÇ 1Ç74Ç99991Ç4364Ç5555Ç0Ç0Ç1ÇEEÇ014ÇFillerÇ </code></pre> <p>As the data did not seem to have any encoding or so i decided to use the read_Csv with delimiter as cedilla and store in dataframe.</p> <pre><code>iFldDelim = 'Ç' tf = pandas.read_csv(itextfile, iFldDelim, nrows = 5,header=None) </code></pre> <p>But for some reason it is not recognizing the same and puts the data in one column. </p> <pre><code> 0 0 1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012ÇFi... 1 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011ÇFi... 2 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011ÇFi... 3 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014ÇFi... 4 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011ÇFi... </code></pre> <p>The file seems like ASCII and not encoded. I did try using the encoding as UTF-8 and UTF-16 and giving the Unicode value as delimiter that does not work. I also tried to hardcode the delimiter as 'F' instead of cedilla and run the code thinking the file itself might have some encryption/encoding. But that is not the case, i got my output delimited by 'F'.</p> <p>With delimiter as 'F'.</p> <pre><code> 0 1 0 1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012Ç illerÇ 1 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011Ç illerÇ 2 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011Ç illerÇ 3 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014Ç illerÇ 4 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011Ç illerÇ </code></pre> <p>The file i am loading is a huge one usually and this one runs for a long time. 
So i am not sure if i encode the file using codec to UTF-8 and then put in dataframe is a wise option.</p> <p>I tried to create a cedilla delimited file manually and when passed through the same command it worked all fine. I am not able to figure what is going wrong here. Is there a way to figure out if it is encoded?</p> <p>Any advise is greatly appreciated. </p> <p>Thanks, VB</p> <p>Adopting Edchum advise, i used the below,</p> <pre><code>#file location dataPath = "C:/Users/Documents/Pytest/" itextfile = join(dataPath,'sample.dat') fb = open(itextfile, 'r') data = fb.read() print(data) tf=pandas.read_csv(StringIO(data), sep='Ç', header=None) #tf=pandas.read_csv(StringIO(data), sep='\Ç', header=None) print(tf) </code></pre> <p>The data came out like below from the file </p> <pre><code>1Ç71Ç23929Ç44Ç5685Ç0Ç0Ç1ÇaaÇ012ÇFillerÇ 1Ç72Ç23953Ç40Ç3319Ç0Ç0Ç1ÇbbÇ011ÇFillerÇ 1Ç73Ç23957Ç43Ç7323Ç0Ç0Ç1ÇccÇ011ÇFillerÇ 1Ç74Ç24006Ç41Ç6938Ç0Ç0Ç1ÇbbÇ014ÇFillerÇ 1Ç75Ç24140Ç45Ç0518Ç0Ç0Ç1ÇddÇ011ÇFillerÇ Output 0 1 2 3 4 5 6 7 8 9 10 11 0 1 71 23929 44 5685 0 0 1 aa 12 Filler NaN 1 1 72 23953 40 3319 0 0 1 bb 11 Filler NaN 2 1 73 23957 43 7323 0 0 1 cc 11 Filler NaN </code></pre> <p>So using the sep='Ç' instead of sep='\Ç' worked. Not sure why it appears this way when i run the script, coz to the naked eye (when i open the sample.dat file) it appears delimited with cedilla symbol.</p> <p>is there a way to pass the textfile without reading externally onto the pandas.read_csv (StringIO) command? I ask this because i wanted to limit the number of rows i read and put in the Dataframe. Say if i want to have only the first few rows i added a Totrows and to have last few i used skiprows. That way to process the huge file with millions of rows it would take minimal time. If this is not possible i ll use a for loop ofcourse. Just wanted to check if there was a way to do without for loop. </p> <p>Thanks, VB</p>
1
2016-08-23T14:33:22Z
39,104,169
<p>Try passing <code>sep='\Ç'</code> as this works for me:</p> <pre><code>In [35]: import pandas as pd import io t="""1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012ÇFillerÇ 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011ÇFillerÇ 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011ÇFillerÇ 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014ÇFillerÇ 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011ÇFillerÇ 1Ç74Ç99991Ç4364Ç5555Ç0Ç0Ç1ÇEEÇ014ÇFillerÇ""" pd.read_csv(io.StringIO(t), sep='\Ç', header=None) Out[35]: 0 1 2 3 4 5 6 7 8 9 10 11 0 1 70 23929 4341 1111 0 0 1 AA 12 Filler NaN 1 1 75 45555 4324 2222 0 0 1 AA 11 Filler NaN 2 1 76 23957 4334 3333 0 0 1 AA 11 Filler NaN 3 1 72 47776 4344 4444 0 0 1 AB 14 Filler NaN 4 1 73 88880 4354 4444 0 0 1 CD 11 Filler NaN 5 1 74 99991 4364 5555 0 0 1 EE 14 Filler NaN </code></pre>
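A likely root cause of the one-column symptom is an encoding mismatch: if the bytes on disk are decoded with a different encoding than the one used for the `sep=` string, the `Ç` characters no longer match. A minimal sketch (sample data is invented) that writes and reads with an explicit, matching encoding:

```python
import io

import pandas as pd

# Simulate a cedilla-delimited file encoded as latin-1 on disk.
text = ("1Ç70Ç23929Ç4341\n"
        "1Ç75Ç45555Ç4324\n")
raw = text.encode('latin-1')

# Reading back with the *same* encoding lets the separator match.
df = pd.read_csv(io.BytesIO(raw), sep='Ç', header=None, encoding='latin-1')
print(df)
```

If you are unsure of the file's encoding, comparing the first few raw bytes (e.g. `open(path, 'rb').read(40)`) against what you expect is a quick way to spot a mismatch.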
0
2016-08-23T14:37:48Z
[ "python", "pandas", "utf-8" ]
pandas read_csv on a *.dat file delimited with cedilla not splitting into columns in dataframe
39,104,079
<p>This is my first time working on pandas so pardon my ignorance. My requirement is to download a file from S3 onto Ec2 and put the dat file onto a dataframe. This is how my input file data looked </p> <pre><code>1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012ÇFillerÇ 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011ÇFillerÇ 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011ÇFillerÇ 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014ÇFillerÇ 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011ÇFillerÇ 1Ç74Ç99991Ç4364Ç5555Ç0Ç0Ç1ÇEEÇ014ÇFillerÇ </code></pre> <p>As the data did not seem to have any encoding or so i decided to use the read_Csv with delimiter as cedilla and store in dataframe.</p> <pre><code>iFldDelim = 'Ç' tf = pandas.read_csv(itextfile, iFldDelim, nrows = 5,header=None) </code></pre> <p>But for some reason it is not recognizing the same and puts the data in one column. </p> <pre><code> 0 0 1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012ÇFi... 1 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011ÇFi... 2 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011ÇFi... 3 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014ÇFi... 4 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011ÇFi... </code></pre> <p>The file seems like ASCII and not encoded. I did try using the encoding as UTF-8 and UTF-16 and giving the Unicode value as delimiter that does not work. I also tried to hardcode the delimiter as 'F' instead of cedilla and run the code thinking the file itself might have some encryption/encoding. But that is not the case, i got my output delimited by 'F'.</p> <p>With delimiter as 'F'.</p> <pre><code> 0 1 0 1Ç70Ç23929Ç4341Ç1111Ç0Ç0Ç1ÇAAÇ012Ç illerÇ 1 1Ç75Ç45555Ç4324Ç2222Ç0Ç0Ç1ÇAAÇ011Ç illerÇ 2 1Ç76Ç23957Ç4334Ç3333Ç0Ç0Ç1ÇAAÇ011Ç illerÇ 3 1Ç72Ç47776Ç4344Ç4444Ç0Ç0Ç1ÇABÇ014Ç illerÇ 4 1Ç73Ç88880Ç4354Ç4444Ç0Ç0Ç1ÇCDÇ011Ç illerÇ </code></pre> <p>The file i am loading is a huge one usually and this one runs for a long time. 
So i am not sure if i encode the file using codec to UTF-8 and then put in dataframe is a wise option.</p> <p>I tried to create a cedilla delimited file manually and when passed through the same command it worked all fine. I am not able to figure what is going wrong here. Is there a way to figure out if it is encoded?</p> <p>Any advise is greatly appreciated. </p> <p>Thanks, VB</p> <p>Adopting Edchum advise, i used the below,</p> <pre><code>#file location dataPath = "C:/Users/Documents/Pytest/" itextfile = join(dataPath,'sample.dat') fb = open(itextfile, 'r') data = fb.read() print(data) tf=pandas.read_csv(StringIO(data), sep='Ç', header=None) #tf=pandas.read_csv(StringIO(data), sep='\Ç', header=None) print(tf) </code></pre> <p>The data came out like below from the file </p> <pre><code>1Ç71Ç23929Ç44Ç5685Ç0Ç0Ç1ÇaaÇ012ÇFillerÇ 1Ç72Ç23953Ç40Ç3319Ç0Ç0Ç1ÇbbÇ011ÇFillerÇ 1Ç73Ç23957Ç43Ç7323Ç0Ç0Ç1ÇccÇ011ÇFillerÇ 1Ç74Ç24006Ç41Ç6938Ç0Ç0Ç1ÇbbÇ014ÇFillerÇ 1Ç75Ç24140Ç45Ç0518Ç0Ç0Ç1ÇddÇ011ÇFillerÇ Output 0 1 2 3 4 5 6 7 8 9 10 11 0 1 71 23929 44 5685 0 0 1 aa 12 Filler NaN 1 1 72 23953 40 3319 0 0 1 bb 11 Filler NaN 2 1 73 23957 43 7323 0 0 1 cc 11 Filler NaN </code></pre> <p>So using the sep='Ç' instead of sep='\Ç' worked. Not sure why it appears this way when i run the script, coz to the naked eye (when i open the sample.dat file) it appears delimited with cedilla symbol.</p> <p>is there a way to pass the textfile without reading externally onto the pandas.read_csv (StringIO) command? I ask this because i wanted to limit the number of rows i read and put in the Dataframe. Say if i want to have only the first few rows i added a Totrows and to have last few i used skiprows. That way to process the huge file with millions of rows it would take minimal time. If this is not possible i ll use a for loop ofcourse. Just wanted to check if there was a way to do without for loop. </p> <p>Thanks, VB</p>
1
2016-08-23T14:33:22Z
39,779,389
<p>As standard practice you may want to open your documents using the <code>codecs</code> package. This lets you specify the encoding (UTF-16 in many cases), and <code>codecs</code> is good at handling things like line terminators. </p> <p><a href="http://stackoverflow.com/questions/27896214/reading-tab-delimited-file-with-pandas-works-on-windows-but-not-on-mac/38951835#38951835">Reading tab-delimited file with Pandas - works on Windows, but not on Mac</a></p> <pre><code># open for reading with "universal" newline mode and an explicit encoding import codecs doc = codecs.open('document', 'rU', 'UTF-16') df = pandas.read_csv(doc, sep='Ç', nrows=Totrows, skiprows=Skiprows) </code></pre>
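On the follow-up question about limiting rows: `read_csv` accepts `nrows` and `skiprows` directly, so there is no need to `read()` the whole file into a string first. A small sketch with made-up data:

```python
import io

import pandas as pd

data = "1Ça\n2Çb\n3Çc\n4Çd\n5Çe\n"

# First two rows only, and everything after the first three rows.
head = pd.read_csv(io.StringIO(data), sep='Ç', header=None, nrows=2)
tail = pd.read_csv(io.StringIO(data), sep='Ç', header=None, skiprows=3)
print(head)
print(tail)
```

The same keyword arguments work when passing a file path instead of a `StringIO`, which avoids loading millions of rows just to keep a few.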
0
2016-09-29T20:26:35Z
[ "python", "pandas", "utf-8" ]
Python import module fails with __init__.py
39,104,139
<p>I have two folders each containing several python modules:</p> <pre> 1. pyA: /a /b /c 2. pyB: /d /e /f </pre> <p>I have added the <code>__init__.py</code> (empty) to both folders. However when I try to import pyB in pyA, I get the "ImportError: No module named pyB".</p> <p>I have looked through the already existing answers and couldn't find the solution. Any suggestion is highly appreciated.</p>
-2
2016-08-23T14:36:25Z
39,104,664
<p>Unless the parent folder of pyB is on PYTHONPATH, this is expected. Files inside the pyA folder are not aware of where pyB is.</p> <p>Alternatively, add it to the path at runtime:</p> <pre><code>import sys sys.path.append('/path/to/parent/folder/of/pyB') import pyB </code></pre> <p>Or, if you are sure that pyB does not use pyA, you can move pyB inside the pyA folder.</p>
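A self-contained demonstration of the fix above (the package name and attribute are invented; a throwaway `pyB` package is created in a temporary directory so the import actually succeeds):

```python
import os
import sys
import tempfile

# Build a minimal package: <tmpdir>/pyB/__init__.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'pyB')
os.makedirs(pkg)
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write("VALUE = 42\n")

# Append the *parent* of pyB (not pyB itself) to sys.path, then import.
sys.path.append(root)
import pyB

print(pyB.VALUE)
```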
0
2016-08-23T15:01:46Z
[ "python", "import" ]
Post CSV file contents as data in Python
39,104,216
<p>Again, relatively new to python, so this may seem like a no-brainer to some people. My apologies in advance. </p> <p>I would like to know how to open a .csv file and send the contents as data in a post session. </p> <p>Something kind of like this: </p> <pre><code>userData = json.loads(loginResponse.text) sessionToken = userData["sessionId"] print ('Login successful! Attempting to upload file...') # Now try to upload file uploadURL = 'url' headers = { 'token': sessionToken } with open('data.csv', newline='') as csvFile: csvReader = csv.reader(csvFile) uploadResponse = loginAS.post(uploadURL, headers=headers, data='CONTENTS OF CSV FILE') print (uploadResponse.status_code) csvfile.close() </code></pre> <p>I have tried to just open the csv file, but that didn't work. And I've tried </p> <pre><code>data=list(csvReader) </code></pre> <p>But I get a 'too many values to unpack' error. So, any ideas?</p> <p>I'm not sure if it matters, but I am using Python 3.4</p>
0
2016-08-23T14:40:09Z
39,104,448
<p>If you want to pass the file contents as simply a big blob of text, don't use the csv module at all. Just use the normal file read() method:</p> <pre><code>uploadResponse = loginAS.post(..., data=csvFile.read()) </code></pre>
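To see the difference between what `csv.reader` yields and what `read()` returns (and why the latter is what you pass as the request body), here is a small comparison with invented sample data:

```python
import csv
import io

sample = "a,b,c\n1,2,3\n"

# csv.reader parses the text into rows of fields...
parsed = list(csv.reader(io.StringIO(sample)))

# ...while read() hands back the raw text unchanged, ready to post.
raw = io.StringIO(sample).read()

print(parsed)
print(repr(raw))
```

The "too many values to unpack" error came from treating those parsed row lists as if they were the flat text the server expects.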
0
2016-08-23T14:51:16Z
[ "python", "python-3.x", "csv", "post" ]
No relationship of type when opening Word document with Python
39,104,325
<p>When trying to open a <code>.dot</code> file with <code>python-docx</code>, I am getting the error:</p> <pre><code>KeyError: "no relationship of type 'http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument' in collection" </code></pre> <p>This is the code in question:</p> <pre><code>from docx import Document document = Document('file.dot') </code></pre> <p>What is the actual problem here?</p>
0
2016-08-23T14:46:03Z
39,104,723
<p>How did you generate the input file? <a href="https://github.com/python-openxml/python-docx/issues/204" rel="nofollow">Here</a> is an issue reporting this error when the file is saved as a <em>Strict Open XML Document</em>. Try saving it as a standard <em>Word document</em> instead.</p> <p>You can get more information about the relationships inside the file using <a href="https://github.com/python-openxml/opc-diag" rel="nofollow">opc-diag</a>:</p> <pre><code>opc browse &lt;FILE&gt; .rels </code></pre> <p>An idea for trying to fix the bad file:</p> <pre><code># Extract the bad file to a temporary folder unzip &lt;FILE&gt; -d bad-file # Repackage the extracted data into a fresh file opc repackage bad-file new-file.docx # A diff of relationships opc diff-item test.docx test-ok.docx .rels </code></pre>
2
2016-08-23T15:04:40Z
[ "python", "ms-word", "python-docx" ]
How to differ formsets in django when using modelformset_factory?
39,104,332
<p>Let's say I have an Contact object and I want to have two groups of contact Formsets in django(1.8) divided by <strong>fieldset</strong> tag in html template. I use modelformset_factory. Regardless I use one or two different factory functions, fields in these two formsets have same id in html. Since http.Request.body is dictionary, I lose information about one of the two formsets.</p> <pre><code>contacts_formset = modelformset_factory( models.Contact, form=forms.ContactDetailForm, extra=2) contacts_escalation_formset_new = contacts_formset( queryset=models.Contact.objects.none()) contacts_other_formset_new = contacts_formset( queryset=models.Contact.objects.none()) </code></pre> <p>in HTML:</p> <pre><code>input id="id_form-0-name" maxlength="155" name="form-0-name" type="text" input id="id_form-0-name" maxlength="155" name="form-0-name" type="text" </code></pre> <p>For simple django form, there is keyword "prefix=..." . But this factory function does not have this argument. How can I solve it?</p>
0
2016-08-23T14:46:20Z
39,104,492
<p>The <code>modelformset_factory</code> function returns a FormSet class. This FormSet class has an optional <code>prefix</code> argument, similar to Form classes. </p> <pre><code>contacts_escalation_formset_new = contacts_formset( prefix='escalation', queryset=models.Contact.objects.none(), ) contacts_other_formset_new = contacts_formset( prefix='other', queryset=models.Contact.objects.none(), ) </code></pre> <p>See the docs on <a href="https://docs.djangoproject.com/en/1.10/topics/forms/formsets/#using-more-than-one-formset-in-a-view" rel="nofollow">using more than one formset in a view</a> for another example.</p>
1
2016-08-23T14:53:25Z
[ "python", "django", "django-forms", "django-templates", "django-views" ]
Going through all arguments given and checking their values
39,104,401
<p>I am pretty new to Python and I tried to make a simple program.<br> I tried to define a function that takes any given number of arguments and prints all even numbers given.<br> However, I do not know how to know how many numbers were given, and mostly how to go through them, or more precisely, how to make a loop that checks each of them. Thanks in advance!</p> <p>This is my bad code :</p> <pre><code>def even_number_filter(*arg): a = len(sys.argv) for (arg % 2 == 0): print ("\n%d is an even number") % arg even_number_filter (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) </code></pre>
1
2016-08-23T14:49:23Z
39,104,497
<p>You can iterate over <code>arg</code> itself, it is simply a tuple of the arguments</p> <pre><code>def even_number_filter(*arg): for i in arg: if i % 2 == 0: print('{} is an even number'.format(i)) &gt;&gt;&gt; even_number_filter(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) 2 is an even number 4 is an even number 6 is an even number 8 is an even number 10 is an even number </code></pre> <p>For future reference, however, I would discourage this kind of design. Instead I would simply accept some sequence directly such as a <code>list</code> or <code>tuple</code> </p>
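Following the design advice at the end of the answer, a sketch of the same filter taking a sequence directly (and returning the matches instead of printing them):

```python
def even_number_filter(numbers):
    """Return the even numbers from any iterable of integers."""
    return [n for n in numbers if n % 2 == 0]

evens = even_number_filter([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(evens)
```

Returning a list rather than printing makes the function easy to reuse and to test.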
2
2016-08-23T14:53:41Z
[ "python", "loops", "for-loop", "arguments" ]
Going through all arguments given and checking their values
39,104,401
<p>I am pretty new to Python and I tried to make a simple program.<br> I tried to define a function that takes any given number of arguments and prints all even numbers given.<br> However, I do not know how to know how many numbers were given, and mostly how to go through them, or more precisely, how to make a loop that checks each of them. Thanks in advance!</p> <p>This is my bad code :</p> <pre><code>def even_number_filter(*arg): a = len(sys.argv) for (arg % 2 == 0): print ("\n%d is an even number") % arg even_number_filter (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) </code></pre>
1
2016-08-23T14:49:23Z
39,104,668
<p>To do this using set &amp; list comprehension:</p> <pre><code>print {x for x in {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} if x % 2 == 0} print [x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] if x % 2 == 0] </code></pre> <p>Produces:</p> <pre><code>set([8, 2, 4, 10, 6]) [2, 4, 6, 8, 10] </code></pre> <p>Where possible you should try to use comprehension (or generators) over explicit for loops. Not so important as you are just starting out and trying to learn the basics but good to start off with good habits!</p>
1
2016-08-23T15:01:59Z
[ "python", "loops", "for-loop", "arguments" ]
Going through all arguments given and checking their values
39,104,401
<p>I am pretty new to Python and I tried to make a simple program.<br> I tried to define a function that takes any given number of arguments and prints all even numbers given.<br> However, I do not know how to know how many numbers were given, and mostly how to go through them, or more precisely, how to make a loop that checks each of them. Thanks in advance!</p> <p>This is my bad code :</p> <pre><code>def even_number_filter(*arg): a = len(sys.argv) for (arg % 2 == 0): print ("\n%d is an even number") % arg even_number_filter (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) </code></pre>
1
2016-08-23T14:49:23Z
39,104,828
<p>Ideally, take the arguments as an iterable (a list or a generator) rather than varargs. Then, this is the ideal time to use a list comprehension:</p> <pre class="lang-py prettyprint-override"><code>def even_numbers(numbers): return [ num for num in numbers if num % 2 == 0 ] def even_number_filter(numbers): for num in even_numbers(numbers): print "{0} is even".format(num) if __name__ == '__main__': even_number_filter(range(10)) 0 is even 2 is even 4 is even 6 is even 8 is even </code></pre>
0
2016-08-23T15:09:20Z
[ "python", "loops", "for-loop", "arguments" ]