title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Can I do real time adb logcat keyword look up using python? | 38,904,426 | <p>I am trying to run a test on android device and collect the logs using python and ADB. I want my python script to collect the ADB logcat logs and keep monitoring the logs, if any crash keywords(or any required keywords) are found during the run, take adb main radio bug-report and adb dump and save them in to particular folder in PC. </p>
<p>Can you please help with some ideas and documents. </p>
<p>Thank you. </p>
| -1 | 2016-08-11T19:14:45Z | 38,904,485 | <p>Yes, you can do this. Take a look at <em>subprocess</em> - <a href="http://docs.python.org/library/subprocess.html" rel="nofollow">http://docs.python.org/library/subprocess.html</a>. </p>
<p>Once you have the <em>adb</em> output going into your Python script, you can do whatever you want to it - search live logs for keywords and then perform actions as required!</p>
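<p>A minimal sketch of that pattern (the keywords and the reaction step are placeholders you would replace with your own crash signatures and bug-report logic; it assumes <code>adb</code> is on your PATH and a device is attached):</p>

```python
import subprocess

# Hypothetical crash keywords -- replace with whatever you need to catch.
KEYWORDS = ("FATAL EXCEPTION", "ANR in")

def find_hits(lines, keywords=KEYWORDS):
    """Yield each log line that contains one of the keywords."""
    for line in lines:
        if any(kw in line for kw in keywords):
            yield line

# With a device attached, stream `adb logcat` line by line:
#   proc = subprocess.Popen(["adb", "logcat"], stdout=subprocess.PIPE,
#                           universal_newlines=True)
#   for hit in find_hits(proc.stdout):
#       ...collect a bug report here and save it to your folder...
```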
| 1 | 2016-08-11T19:18:49Z | [
"python",
"adb",
"jython"
] |
How do I save a new graph as png with every iteration of a loop | 38,904,461 | <p>I don't know how to save a new graph png for each iteration of a loop using NetworkX. I've borrowed the code from this question: <a href="http://stackoverflow.com/questions/22635538/in-networkx-cannot-save-a-graph-as-jpg-or-png-file">in NetworkX cannot save a graph as jpg or png file</a> and manipulated it a bit. Below is the code:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12,12))
ax = plt.subplot(111)
ax.set_title('Graph - Shapes', fontsize=10)
G = nx.DiGraph()
G.add_node('shape1', level=1)
G.add_node('shape2', level=2)
G.add_node('shape3', level=2)
G.add_node('shape4', level=3)
G.add_edge('shape1', 'shape2')
G.add_edge('shape1', 'shape3')
G.add_edge('shape3', 'shape4')
pos = nx.spring_layout(G)
n = 0
colorses = ['yellow', 'red', 'blue', 'green']
while n < len(colorses):
nx.draw(G, pos, node_size=1500, node_color=colorses[n], font_size=8, font_weight='bold')
plt.tight_layout()
# plt.show()
plt.savefig("Graph.png", format="PNG")
n += 1
</code></pre>
<p>Ideally I would like to have four images each one with different color nodes. Let me know if you need any more information. Thanks!</p>
| 0 | 2016-08-11T19:17:09Z | 38,904,708 | <p>First suggestion, accessing the colors by their index values is not "Pythonic". Instead, use a <code>for</code> loop:</p>
<pre><code>for color in colors:
print(color)
</code></pre>
<p>Your code is overwriting <code>Graph.png</code> on each iteration of the loop. To save a new file for each iteration, simply rename the output file on each iteration. One way to do this is by using the <a href="https://infohost.nmt.edu/tcc/help/pubs/python/web/new-str-format.html" rel="nofollow">format()</a> and <a href="https://docs.python.org/2.3/whatsnew/section-enumerate.html" rel="nofollow">enumerate()</a> functions:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12,12))
ax = plt.subplot(111)
ax.set_title('Graph - Shapes', fontsize=10)
G = nx.DiGraph()
G.add_node('shape1', level=1)
G.add_node('shape2', level=2)
G.add_node('shape3', level=2)
G.add_node('shape4', level=3)
G.add_edge('shape1', 'shape2')
G.add_edge('shape1', 'shape3')
G.add_edge('shape3', 'shape4')
pos = nx.spring_layout(G)
colors = ['yellow', 'red', 'blue', 'green']
for i, color in enumerate(colors):
nx.draw(G, pos, node_size=1500, node_color=color, font_size=8, font_weight='bold')
plt.tight_layout()
plt.savefig('Graph_{}.png'.format(i), format="PNG")
</code></pre>
| 0 | 2016-08-11T19:33:14Z | [
"python",
"matplotlib",
"networkx"
] |
How do I save a new graph as png with every iteration of a loop | 38,904,461 | <p>I don't know how to save a new graph png for each iteration of a loop using NetworkX. I've borrowed the code from this question: <a href="http://stackoverflow.com/questions/22635538/in-networkx-cannot-save-a-graph-as-jpg-or-png-file">in NetworkX cannot save a graph as jpg or png file</a> and manipulated it a bit. Below is the code:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12,12))
ax = plt.subplot(111)
ax.set_title('Graph - Shapes', fontsize=10)
G = nx.DiGraph()
G.add_node('shape1', level=1)
G.add_node('shape2', level=2)
G.add_node('shape3', level=2)
G.add_node('shape4', level=3)
G.add_edge('shape1', 'shape2')
G.add_edge('shape1', 'shape3')
G.add_edge('shape3', 'shape4')
pos = nx.spring_layout(G)
n = 0
colorses = ['yellow', 'red', 'blue', 'green']
while n < len(colorses):
nx.draw(G, pos, node_size=1500, node_color=colorses[n], font_size=8, font_weight='bold')
plt.tight_layout()
# plt.show()
plt.savefig("Graph.png", format="PNG")
n += 1
</code></pre>
<p>Ideally I would like to have four images each one with different color nodes. Let me know if you need any more information. Thanks!</p>
| 0 | 2016-08-11T19:17:09Z | 38,904,716 | <p>Just change the name of the output file</p>
<pre><code>while n < len(colorses):
nx.draw(G, pos, node_size=1500, node_color=colorses[n], font_size=8, font_weight='bold')
plt.tight_layout()
# plt.show()
plt.savefig("Graph" + str(n) +".png", format="PNG")
n += 1
</code></pre>
<p>You should use more descriptive names though. Maybe instead of <code>n</code>, you could refer to a time (remember to <code>import datetime</code> first):</p>

<pre><code> plt.savefig("Graph" + str(datetime.datetime.now()) +".png", format="PNG")
</code></pre>
<p>That isn't great, since the string will have whitespace, but you can tweak it beforehand</p>
| 2 | 2016-08-11T19:33:32Z | [
"python",
"matplotlib",
"networkx"
] |
How to insert a list of values into a column in sqlite3 with python? | 38,904,525 | <p>I can't seem to get <code>executemany</code> to work. I didn't get any errors. But no values got inserted either. The following is a small example: the table prices already has some columns and 3 rows. I added a new column <code>t</code> and inserted the values in <code>vals</code> into <code>t</code>.</p>
<pre><code>vals = [(1,), (2,), (3,)]
cursor.execute("ALTER TABLE prices ADD COLUMN t REAL")
cursor.executemany("INSERT INTO TABLE prices(t) VALUES(?)", vals)
data = cursor.execute("SELECT id, t FROM prices")
for i in data:
print i
</code></pre>
<p>I get</p>
<pre><code>(1, None)
(2, None)
(3, None)
</code></pre>
| -1 | 2016-08-11T19:21:27Z | 38,904,567 | <p>It doesn't look like you're inserting values into the column <code>t</code>. I'm assuming your table has two columns (<code>id</code>, <code>t</code>).
Check your list at the top, which has</p>
<pre><code>vals = [(1,), (2,), (3,)]
</code></pre>
<p>perhaps that's the problem. Could you elaborate a little more?</p>
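<p>For what it's worth, here is a self-contained sketch (table layout invented for illustration) of filling a newly added column on rows that already exist. Two things stand out in the original snippet: <code>INSERT INTO TABLE prices(t)</code> is not valid SQL (the <code>TABLE</code> keyword must go), and <code>INSERT</code> creates <em>new</em> rows, so populating <code>t</code> on the existing rows calls for <code>UPDATE</code>:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE prices (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO prices(id) VALUES(?)", [(1,), (2,), (3,)])

cur.execute("ALTER TABLE prices ADD COLUMN t REAL")
# INSERT would append extra rows; UPDATE fills t on the rows that exist.
cur.executemany("UPDATE prices SET t = ? WHERE id = ?",
                [(1.5, 1), (2.5, 2), (3.5, 3)])

for row in cur.execute("SELECT id, t FROM prices ORDER BY id"):
    print(row)   # (1, 1.5)  (2, 2.5)  (3, 3.5)
```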
| 0 | 2016-08-11T19:24:34Z | [
"python",
"sqlite3"
] |
Python, reading ID3 tags from m4a files | 38,904,591 | <p>I have a large collection of m4a files, all with ID3 tags for genre, artist, and titles. What I would like is to be able to read these tags and store them as strings. </p>
| 0 | 2016-08-11T19:26:11Z | 38,904,773 | <p>For such common tasks, ready-made libraries almost always exist. A quick Google search for <code>python m4a</code> yields:</p>
<ul>
<li><a href="http://mutagen.readthedocs.io/en/latest/" rel="nofollow">Mutagen</a></li>
<li><a href="https://github.com/devsnd/tinytag" rel="nofollow">tinytag</a></li>
</ul>
| 0 | 2016-08-11T19:36:53Z | [
"python",
"id3",
"m4a"
] |
Multi-processing... Running 2 scripts on 2 terminals | 38,904,646 | <p>I have 2 scripts that start multiple processes. Right now I'm opening up two different terminals and running python start.py to start both the scripts. How can I achieve this with one command, or one running one script.</p>
<p>Start.py 1</p>
<pre><code># globals
my_queue = multiprocessing.Manager().Queue() # queue to store our values
stop_event = multiprocessing.Event() # flag which signals processes to stop
my_pool = None
def my_function(foo):
print("starting %s" % foo)
try:
addnews.varfoo)
except Exception,e:
print str(e)
MAX_PROCESSES = 50
my_pool = multiprocessing.Pool(MAX_PROCESSES)
x = Var.objects.order_by('name').values('link')
for t in x:
t = t.values()[0]
my_pool.apply_async(my_function, args=(t,))
my_pool.close()
my_pool.join()
</code></pre>
<p>Start1.py 2</p>
<pre><code># globals
MAX_PROCESSES = 50
my_queue = multiprocessing.Manager().Queue() # queue to store our values
stop_event = multiprocessing.Event() # flag which signals processes to stop
my_pool = None
def my_function(var):
var.run_main(var)
stop_event.set()
def var_scanner():
# Since `t` could have unlimited size we'll put all `t` value in queue
while not stop_event.is_set(): # forever scan `values` for new items
y = Var.objects.order_by('foo).values('foo__foo')
for t in y:
t = t.values()[0]
my_queue.put(t)
try:
var_scanner_process = multiprocessing.Process(target=var_scanner)
var_scanner_process.start()
my_pool = multiprocessing.Pool(MAX_PROCESSES)
#while not stop_event.is_set():
try: # if queue isn't empty, get value from queue and create new process
var = my_queue.get_nowait() # getting value from queue
p = multiprocessing.Process(target=my_function, args=(var,))
p.start()
except Queue.Empty:
print "No more items in queue"
time.sleep(1)
#stop_event.set()
except KeyboardInterrupt as stop_test_exception:
print(" CTRL+C pressed. Stopping test....")
stop_event.set()
</code></pre>
| 0 | 2016-08-11T19:28:49Z | 38,905,089 | <p>You can run the first script in the background on the same terminal, using the <code>&</code> shell modifier.</p>
<pre><code>python start.py &
python start1.py
</code></pre>
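<p>Alternatively, a tiny launcher script can start both with a single command (the script names are assumed to live in the current directory; adjust paths as needed):</p>

```python
import subprocess
import sys

def run_concurrently(args_list):
    """Start one subprocess per argument list and wait for all of them."""
    procs = [subprocess.Popen(args) for args in args_list]
    return [p.wait() for p in procs]

# run_concurrently([[sys.executable, "start.py"],
#                   [sys.executable, "start1.py"]])
```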
| 1 | 2016-08-11T19:58:33Z | [
"python",
"multiprocessing"
] |
Why not run uwsgi instances as root | 38,904,656 | <p>I am reading through the uWSGI documentation and it warns to <em><a href="https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#security-and-availability" rel="nofollow">always avoid running your uWSGI instances as root</a></em>. What is the reason behind this?</p>
<p>Does it matter if it is the only process (besides nginx) running in a docker container, serving up a flask application?</p>
| 0 | 2016-08-11T19:29:09Z | 38,904,680 | <p>In general, security reasoning says that running as root is bad. If there were any kind of bug, for example a code-execution bug that allows anybody to execute arbitrary code, they would be able to destroy your entire system. </p>
<p>If you don't run the process as root, any code execution vulnerabilities would need to be paired with a secondary privilege escalation vulnerability in order to destroy your system. </p>
<p>In a docker container, this is mitigated slightly in that you'll be able to recover your old system relatively easily, however, it is generally still a bad practice or habit to allow processes to run as root as a malicious attacker can and will steal the information that may exist on your server or turn your server into a malware delivery mechanism. </p>
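<p>Worth noting: uWSGI can drop privileges itself via the <code>uid</code>/<code>gid</code> options, so even if it must start as root (e.g. to bind port 80) it does not have to keep running as root (the user/group names below are an example):</p>

```ini
[uwsgi]
uid = www-data
gid = www-data
```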
| 1 | 2016-08-11T19:31:06Z | [
"python",
"nginx",
"docker",
"uwsgi"
] |
Error reformatting string conditionally | 38,904,775 | <p>I had this subsection of my application working yesterday, and I can't seem to find out why it's not working now.</p>
<p>Here's what it should do:</p>
<p>Input: <code>10th ave 501</code></p>
<p>Output: <code>501 10th ave</code></p>
<p>This block of code is supposed to search for one of the target words in a list called <code>patterns</code>, and if there is a number with <code>len <= 4</code> after it, move it to the front of the string. <code>patterns</code> contains words such as <code>ave, street, road, place</code>.</p>
<p>Here is my code:</p>
<pre><code>address = address.split(' ')
for pattern in patterns:
try:
if address[0].isdigit():
continue
location = address.index(pattern) + 1
number_location = address[location]
if 'th' in address[location + 1] or 'floor' in address[location + 1] or '#' in address[location]:
continue
except (ValueError, IndexError):
continue
if number_location.isdigit() and len(number_location) <= 4:
address = [number_location] + address[:location] + address[location+1:]
break
address = ' '.join(address)
print address
</code></pre>
<p>Currently the output is just the exact same as what I inputted. i.e <code>10th ave 501</code> is returning <code>10th ave 501</code>. I feel like it's something relatively obvious that i'm glancing over.</p>
| -1 | 2016-08-11T19:37:10Z | 38,904,930 | <p><code>'10th'.isdigit()</code> is <code>False</code> since:</p>
<blockquote>
<p>Return true if <strong>all</strong> characters in the string are digits (<a href="https://docs.python.org/3/library/stdtypes.html#str.isdigit" rel="nofollow">source</a>)</p>
</blockquote>
<p>If you wish, you can check only the first character:</p>
<pre><code>if number_location[0].isdigit() and len(number_location) <= 4:
</code></pre>
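<p>To see the difference:</p>

```python
print('10th'.isdigit())      # False -- 't' and 'h' are not digits
print('10th'[0].isdigit())   # True
print('501'.isdigit())       # True
```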
| 0 | 2016-08-11T19:47:30Z | [
"python"
] |
What does Python's io.open() use as the default separator? | 38,904,799 | <p>So I have a code that's trying to get all lines of a text file inside <code>message</code>, by doing this:</p>
<pre><code>inbody = False
lines = []
f = io.open(path, 'r', encoding='latin1')
for line in f:
if inbody:
lines.appned(line)
elif line == '\n':
inbody = True
f.close()
message = '\n'.join(lines)
</code></pre>
<p>The aim is to get all the lines of a text file inside <code>message</code>.</p>
<p>The line <code>for line in f</code> indicates that <code>f</code> is iterable, which leads me to believe that <code>io.open()</code> returns an iterable sequence. My question is: what separator does <code>io.open()</code> use to generate this sequence out of a text file?</p>
| -1 | 2016-08-11T19:38:27Z | 38,904,824 | <p>From <a href="https://docs.python.org/2/library/io.html#io.IOBase.readline" rel="nofollow">https://docs.python.org/2/library/io.html#io.IOBase.readline</a>:</p>
<blockquote>
<p>The line terminator is always b'\n' for binary files; for text files,
the newline argument to open() can be used to select the line
terminator(s) recognized.</p>
</blockquote>
| 1 | 2016-08-11T19:40:15Z | [
"python",
"io"
] |
What does Python's io.open() use as the default separator? | 38,904,799 | <p>So I have a code that's trying to get all lines of a text file inside <code>message</code>, by doing this:</p>
<pre><code>inbody = False
lines = []
f = io.open(path, 'r', encoding='latin1')
for line in f:
if inbody:
lines.appned(line)
elif line == '\n':
inbody = True
f.close()
message = '\n'.join(lines)
</code></pre>
<p>The aim is to get all the lines of a text file inside <code>message</code>.</p>
<p>The line <code>for line in f</code> indicates that <code>f</code> is iterable, which leads me to believe that <code>io.open()</code> returns an iterable sequence. My question is: what separator does <code>io.open()</code> use to generate this sequence out of a text file?</p>
| -1 | 2016-08-11T19:38:27Z | 38,904,994 | <p>From <a href="https://docs.python.org/2/library/io.html#io.open" rel="nofollow">https://docs.python.org/2/library/io.html#io.open</a>:</p>
<blockquote>
<p>On input, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newlines mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated.</p>
</blockquote>
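<p>A quick way to observe this (runs on Python 3, where <code>io.open</code> is the built-in <code>open</code>; the default <code>newline=None</code> enables universal newlines):</p>

```python
import io
import os
import tempfile

# Write mixed line endings in binary mode, then read back in text mode.
fd, path = tempfile.mkstemp()
os.close(fd)
with io.open(path, 'wb') as f:
    f.write(b'one\r\ntwo\rthree\n')
with io.open(path, 'r', encoding='latin1') as f:   # newline=None by default
    lines = list(f)
os.remove(path)

print(lines)   # ['one\n', 'two\n', 'three\n'] -- every ending became '\n'
```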
| 0 | 2016-08-11T19:51:36Z | [
"python",
"io"
] |
Python flask Auto Redirect message | 38,904,957 | <p>I have defined multplie service urls , all of which will execute same code.</p>
<pre><code>@service.route('/get_info/env=<path:env>/starttime=<int:starttime>/endtime=<int:endtime>/entire=<entire>/over=<over>/stage=<stage>', defaults={'name': None}, strict_slashes=False)
@service.route('/get_info/dir=<path:dir>/starttime=<int:starttime>/endtime=<int:endtime>/entire=<entire>/over=<over>', defaults={'stage': 'ABC', 'name': None}, strict_slashes=False)
@servicelog.operation('GetInfo')
def get_deployment_info(env, starttime, endtime, finished, stage, fleetwide, hostname):
global _misc
try:
log.info("Dir: %s, Starttime: %d, Endtime: %d, name: %s, finished:%s, Stage:%s, Fleetwide: %s", dir, starttime, endtime, name,over, stage, entire)
result = obj.get_info(dir, starttime, endtime, entire, over, stage, name)
</code></pre>
<p>If I were to invoke the service with below parameters, it works as expected:</p>
<pre><code>http://<endpoint>/get_info/dir=home/user/test/starttime=1470439200/endtime=1470856027/entire=true/over=true
</code></pre>
<p>However if I get invoke the service with below parameters, I get a re-direct message:</p>
<pre><code>http://<endpoint>/get_info/dir=home/user/test/starttime=1470439200/endtime=1470856027/entire=true/over=true/stage=ABC
</code></pre>
<p>The error I get is </p>
<pre><code><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: <a href="http://endpoint/get_info/dir%3Dhome/user/test/starttime%3D1470439200/endtime%3D1470856027/entire%3Dtrue/over%3Dtrue">http://endpoint/get_info/dir%3Dhome/user/test/starttime%3D1470439200/endtime%3D1470856027/entire%3Dtrue/over%3Dtrue</a>. If not click the link.%
</code></pre>
<p>Am I doing something incorrect here? I am not expecting such a message as I have not defined any sort of redirection. It should execute the method irrespective of which url is used.</p>
| 0 | 2016-08-11T19:49:31Z | 38,905,469 | <p>The way you construct the route is not really clean. A GET request with arguments should take the format
<a href="http://server/route?arg1=val1&arg2=val2&" rel="nofollow">http://server/route?arg1=val1&arg2=val2&</a>, etc. I'm pretty sure some special character or some parsing issue is responsible for the error here.</p>
<p>Use request.args.get('arg1') to retrieve the parsed argument 1's value (Don't forget <code>from flask import request</code> at the top) in your GET request handler. To get the whole string use <code>request.query_string</code></p>
<p>Look at <a href="http://stackoverflow.com/questions/11774265/flask-how-do-you-get-a-query-string-from-flask">this</a> SO question.</p>
| 0 | 2016-08-11T20:22:58Z | [
"python",
"redirect",
"flask",
"flask-restful"
] |
How to display the decimal points in python | 38,904,963 | <p>I am trying to write a python script that divides two numbers and then stores the value of the first digit after the decimal point but first I want to print the answer of the division.
example:
31/7 should be 4,428 but when I use </p>
<pre><code>def spiel():
anzahl=int(raw_input("Anzahl der Spieler"))
spieler=int(raw_input("Jeder wievielte spielt"))
if anzahl>0 and spieler>0:
#Quotient berechnen
q=anzahl/spieler
print '%.2f'%q
</code></pre>
<p>I get the answer 4.00, what have I done wrong.</p>
<p>If you could tell me how I can store the first decimal point after the decimal point THAT WOULD BE AMAZING!
Thanks in advance</p>
| 0 | 2016-08-11T19:49:53Z | 38,905,007 | <p>You're dividing an int by an int. You should use floats</p>
<pre><code>def spiel():
anzahl=float(raw_input("Anzahl der Spieler"))
spieler=float(raw_input("Jeder wievielte spielt"))
if anzahl>0 and spieler>0:
#Quotient berechnen
q=anzahl/spieler
print '%.2f'%q
</code></pre>
<p>In response to comment:</p>
<p>There are multiple ways to do that, what's best would depend on your applicaiton:</p>
<pre><code>q = anzahl/spieler
print "{}".format(int(10*(q%1)))
print "{}".format(int((q-int(q))*10))
print "{}".format(str(q).split(".")[1][0])
</code></pre>
<p>and probably some easier ones that I didn't think of</p>
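<p>A quick sanity check of the modulo variant (Python 3 syntax shown; the arithmetic is identical in Python 2 once the division is done in floats):</p>

```python
q = 31 / 7.0                       # float division: 4.428571...
first_decimal = int(10 * (q % 1))  # fractional part 0.4285... -> first digit
print(first_decimal)               # 4
```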
| 0 | 2016-08-11T19:52:34Z | [
"python",
"python-2.7",
"floating-point"
] |
How to display the decimal points in python | 38,904,963 | <p>I am trying to write a python script that divides two numbers and then stores the value of the first digit after the decimal point but first I want to print the answer of the division.
example:
31/7 should be 4,428 but when I use </p>
<pre><code>def spiel():
anzahl=int(raw_input("Anzahl der Spieler"))
spieler=int(raw_input("Jeder wievielte spielt"))
if anzahl>0 and spieler>0:
#Quotient berechnen
q=anzahl/spieler
print '%.2f'%q
</code></pre>
<p>I get the answer 4.00, what have I done wrong.</p>
<p>If you could tell me how I can store the first decimal point after the decimal point THAT WOULD BE AMAZING!
Thanks in advance</p>
| 0 | 2016-08-11T19:49:53Z | 38,905,052 | <p>In Python 2.7,
if you divide two integers the answer will always be an integer; the decimal part is simply truncated.</p>
<p>If you want precise float output, try this:</p>
<pre><code>>>>float(10)/3
>>>3.3333333333333335
</code></pre>
<p>Any of the value should be float.</p>
| 0 | 2016-08-11T19:56:00Z | [
"python",
"python-2.7",
"floating-point"
] |
How to display the decimal points in python | 38,904,963 | <p>I am trying to write a python script that divides two numbers and then stores the value of the first digit after the decimal point but first I want to print the answer of the division.
example:
31/7 should be 4,428 but when I use </p>
<pre><code>def spiel():
anzahl=int(raw_input("Anzahl der Spieler"))
spieler=int(raw_input("Jeder wievielte spielt"))
if anzahl>0 and spieler>0:
#Quotient berechnen
q=anzahl/spieler
print '%.2f'%q
</code></pre>
<p>I get the answer 4.00, what have I done wrong.</p>
<p>If you could tell me how I can store the first decimal point after the decimal point THAT WOULD BE AMAZING!
Thanks in advance</p>
| 0 | 2016-08-11T19:49:53Z | 38,905,282 | <p>The thing that you are doing wrong here is, you are making it believe that operands of division are integers( during division) .</p>
<p>I am sure you want to take two integers. You can do following things:</p>
<ol>
<li><p>Use <code>float()</code> during the division. You do not need to convert both <code>anzahl</code> and <code>spieler</code> to float; converting just <code>spieler</code> is enough:</p>
<p>q = anzahl/float(spieler)</p></li>
</ol>
<p>2. Use the <code>__future__</code> module. Then you do not need to change anything else in your code; just write the following line at the beginning:</p>
<pre><code>from __future__ import division
</code></pre>
<ol start="3">
<li>Use Python 3. In Python 3, <code>/</code> performs non-truncating (true) division and <code>//</code> performs truncating division.</li>
</ol>
| 0 | 2016-08-11T20:11:24Z | [
"python",
"python-2.7",
"floating-point"
] |
How to display the decimal points in python | 38,904,963 | <p>I am trying to write a python script that divides two numbers and then stores the value of the first digit after the decimal point but first I want to print the answer of the division.
example:
31/7 should be 4,428 but when I use </p>
<pre><code>def spiel():
anzahl=int(raw_input("Anzahl der Spieler"))
spieler=int(raw_input("Jeder wievielte spielt"))
if anzahl>0 and spieler>0:
#Quotient berechnen
q=anzahl/spieler
print '%.2f'%q
</code></pre>
<p>I get the answer 4.00, what have I done wrong.</p>
<p>If you could tell me how I can store the first decimal point after the decimal point THAT WOULD BE AMAZING!
Thanks in advance</p>
| 0 | 2016-08-11T19:49:53Z | 38,906,078 | <p>If you divide an integer by an integer you will receive an integer. An integer is a whole number, no decimal points.</p>
<p>If you divide a float/integer by a float you will receive a float. A float is a number that can carry a fractional part after the decimal point.</p>
<p>When you are doing your operations, change <code>spieler</code> to a float. That can be done with <code>float(spieler)</code>.</p>
<p>Your code will look like this:</p>
<pre><code>def spiel():
anzahl=int(raw_input("Anzahl der Spieler"))
    spieler=float(raw_input("Jeder wievielte spielt"))
if anzahl>0 and spieler>0:
#Quotient berechnen
q=anzahl/spieler
print '%.2f'%q
</code></pre>
| 0 | 2016-08-11T21:01:17Z | [
"python",
"python-2.7",
"floating-point"
] |
pygame-How to replace an image with an other one | 38,904,986 | <p>I have this sort of code:</p>
<pre><code>width = 100
height = 50
gameDisplay.blit(button, (width, height))
pygame.display.update()
while True:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP and event.button == 1:
# replace the button's image with another
</code></pre>
<p>Is there any function or something that would let me replace the image with another?</p>
| 0 | 2016-08-11T19:51:23Z | 38,905,297 | <p>To show changes on screen use <code>pygame.display.update()</code>.</p>
<p>Your code should look like this</p>
<pre><code>width = 100
height = 50
gameDisplay.blit(button, (width, height))
while True:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP and event.button == 1:
gameDisplay.blit(new_button, (width, height))
pygame.display.update()
</code></pre>
| 0 | 2016-08-11T20:12:54Z | [
"python",
"python-3.x",
"pygame"
] |
pygame-How to replace an image with an other one | 38,904,986 | <p>I have this sort of code:</p>
<pre><code>width = 100
height = 50
gameDisplay.blit(button, (width, height))
pygame.display.update()
while True:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP and event.button == 1:
# replace the button's image with another
</code></pre>
<p>Is there any function or something that would let me replace the image with another?</p>
| 0 | 2016-08-11T19:51:23Z | 38,907,531 | <p>You cannot 'replace' anything you've drawn. What you do is that you draw new images over the existing ones. Usually, you clear the screen and redraw your images every loop. Here's pseudo-code illustrating what a typical game loop looks like. I'll use <em>screen</em> as variable name instead of <em>gameDisplay</em>, because <em>gameDisplay</em> goes against PEP-8 naming convention.</p>
<pre><code>while True:
handle_time() # Make sure your programs run at constant FPS.
handle_events() # Handle user interactions.
handle_game_logic() # Use the time and interactions to update game objects and such.
screen.fill(background_color) # Clear the screen / Fill the screen with a background color.
screen.blit(image, rect) # Blit an image on some destination. Usually you blit more than one image using pygame.sprite.Group.
pygame.display.update() # Or 'pygame.display.flip()'.
</code></pre>
<p>For your code you probably should do something like this:</p>
<pre><code>rect = pygame.Rect((x_position, y_position), (button_width, button_height))
while True:
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONUP and event.button == 1:
button = new_button # Both should be a pygame.Surface.
gameDisplay.fill(background_color, rect) # Clear the screen.
gameDisplay.blit(button, rect)
pygame.display.update()
</code></pre>
<p>If you only want to update the area where your image is you could pass <em>rect</em> inside the update method, <code>pygame.display.update(rect)</code>.</p>
| 1 | 2016-08-11T23:17:58Z | [
"python",
"python-3.x",
"pygame"
] |
Python error with lambda and alphanumeric words | 38,905,074 | <p>I am really new to python and now I am having an error and do not know why I get this error.</p>
<p>I have 3 lists with words. The lists contains words numeric, literal words and alphanumeric words. These lists are saved in an txt file. Each file can contain words from other lists or new words.</p>
<p>Now I like to compare these lists and copy all words without duplicates in to the new list. So I have one big list, containing all words but no duplicates.</p>
<p>This is my script:</p>
<pre><code>file_a = raw_input("File 1?: ")
file_b = raw_input("File 2?: ")
file_c = raw_input("File_3?: ")
file_new = raw_input("Neue Datei: ")
def compare_files():
with open(file_a, 'r') as a:
with open(file_b, 'r') as b:
with open(file_c, 'r') as c:
with open(file_new, 'w') as new:
difference = set(a).symmetric_difference(b).symmetric_difference(c)
difference.discard('\n')
sortiert = sorted(difference, key=lambda item: (int(item.partition(' ')[0])
if item[0].isdigit() else float('inf'), item))
for line in sortiert:
new.write(line)
k = compare_files()
</code></pre>
<p>When I run the script I get the following error message:</p>
<pre><code>Traceback (most recent call last):
File "TestProject1.py", line 19, in <module>
k = compare_files()
File "TestProject1.py", line 13, in compare_files
sortiert = sorted(difference, key=lambda item: (int(item.partition(' ')[0])
File "TestProject1.py", line 14, in <lambda>
if item[0].isdigit() else float('inf'), item))
ValueError: invalid literal for int() with base 10: '12234thl\n'
</code></pre>
<p>Anyone an idea or something what is wrong in my script?</p>
<p>Thank you for your help :)</p>
| 0 | 2016-08-11T19:57:34Z | 38,906,299 | <p><code>partition</code> on <code>' '</code> (or any other string, for that matter) will not extract the numeral part of the string unless you know the character immediately following the numeral, which is very unlikely. </p>
<p>You can instead use a regular expression to extract the leading numeral part of the string:</p>
<pre><code>import re
p = re.compile(r'^\d+')
def compare_files():
with open(file_a, 'r') as a, open(file_b, 'r') as b, \
open(file_c, 'r') as c, open(file_new, 'w') as new:
difference = set(a).symmetric_difference(b).symmetric_difference(c)
difference.discard('\n')
sortiert = sorted(difference,
key=lambda item: (int(p.match(item).group(0)) \
if item[0].isdigit() \
else float('inf'), item))
for line in sortiert:
new.write(line)
</code></pre>
<p>The pattern <code>'^\d+'</code> should match all numerals from the start of the string and then <code>p.match(item).group(0)</code> returns the numeral as a string which can then <em>casted</em> to integer.</p>
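<p>A quick demonstration of the sort key on its own (the sample lines are invented):</p>

```python
import re

p = re.compile(r'^\d+')

def key(item):
    # Lines starting with a numeral sort by its integer value; all others last.
    return (int(p.match(item).group(0)) if item[0].isdigit()
            else float('inf'), item)

lines = ['12234thl\n', '3abc\n', 'zebra\n', 'apple\n']
print(sorted(lines, key=key))
# ['3abc\n', '12234thl\n', 'apple\n', 'zebra\n']
```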
| 0 | 2016-08-11T21:17:41Z | [
"python",
"python-3.x",
"lambda"
] |
Xpath combine two treepaths | 38,905,108 | <p>I have two calls:</p>
<pre><code>teamName = tree.xpath('//span[@class="name matchwinner"]/text()')
teamMatch = tree.xpath('//a[@class="match"]/@href')
</code></pre>
<p>At the moment i get two arrays, one with team names and one with hrefs. Is there a way to get something like [team1, href1, team2, href2]? </p>
| 1 | 2016-08-11T20:00:32Z | 38,905,173 | <p>I suspect each team names and team matches share the same parent. If this is the case, you can do something like:</p>
<pre><code>for team in tree.xpath("//div[@class='game']"):
teamName = team.xpath('.//span[@class="name matchwinner"]/text()')[0]
teamMatch = team.xpath('.//a[@class="match"]/@href')[0]
print(teamName, teamMatch)
</code></pre>
<p>Or, you can also use <code>zip()</code> to group names and matches together by index:</p>
<pre><code>teamName = tree.xpath('//span[@class="name matchwinner"]/text()')
teamMatch = tree.xpath('//a[@class="match"]/@href')
print(list(zip(teamName, teamMatch)))
</code></pre>
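<p>If you literally need the flat interleaved form <code>[team1, href1, team2, href2]</code>, the zipped pairs can be flattened with <code>itertools.chain</code> (the sample data below stands in for the scraped lists):</p>

```python
from itertools import chain

team_names = ['team1', 'team2']
team_matches = ['href1', 'href2']

# zip pairs the lists by index; chain.from_iterable flattens the pairs.
flat = list(chain.from_iterable(zip(team_names, team_matches)))
print(flat)   # ['team1', 'href1', 'team2', 'href2']
```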
| 0 | 2016-08-11T20:04:36Z | [
"python",
"xpath"
] |
Nothing happens at a part of my script where there should be calculations | 38,905,118 | <p>I just started programming with python 3.5 and in this first program that I am making nothing happens at this part of the program:</p>
<pre><code> def gender():
var2 = input ("What is your gender? ")
v1 = int (var4)
v2 = int (var5)
guyheight = v1+v2+13
girlhiehgt = v1+v2-13
v4 = guyheight/2
v5 = guyheight/2
if var2 == 'boy' or var2 == 'Boy' or var2 == 'guy' or var2 == 'Guy' or var2 == 'male' or var2 == 'Male' or var2 == 'man' or var2 == 'Man':
print ("Your estimated full-fledged height is, {}.".format(v4))
elif var2 == 'girl' or var2 == 'Girl' or var2 == 'female' or var2 == 'Female' or var2 == 'woman' or var2 == 'Woman':
print ("Your estimated full-fledged height is, {}.".format(v5))
else:
print ("Sorry, your gender is invalid. Please re-enter the answer.")
gender()
</code></pre>
| -1 | 2016-08-11T20:01:02Z | 38,905,982 | <p>This answer is based on my guess that <code>gender()</code> is this entire function and you want it to repeat at the end if a valid answer is not given. Try the formatting below.</p>
<pre><code>def gender():
# There's obviously some code up here that eventually gives us var1 - var5
# I'll assume var2 is an input from the user
var2 = input("Enter your gender")
v1 = int(var4)
v2 = int(var5)
guy_height = v1 + v2 + 13
girl_height = v1 + v2 - 13
v4 = guy_height / 2
    v5 = girl_height / 2
if var2.lower() in ('boy', 'guy', 'male'):
print ("Your estimated full-fledged height is, {}.".format(v4))
elif var2.lower() in ('girl', 'female', 'woman'):
print ("Your estimated full-fledged height is, {}.".format(v5))
else:
print ("Sorry, your gender is invalid. Please re-enter the answer.")
gender()
gender()
</code></pre>
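<p>If you prefer, the membership test can also live in a small helper, which keeps the validation separate from the height arithmetic and avoids recursing on bad input. This is only a sketch of the pattern, not part of the original code:</p>

```python
MALE = ('boy', 'guy', 'male', 'man')
FEMALE = ('girl', 'female', 'woman')

def classify_gender(answer):
    # Normalize once, then test membership against each tuple
    a = answer.strip().lower()
    if a in MALE:
        return 'male'
    if a in FEMALE:
        return 'female'
    return None  # caller can loop and re-prompt on None

print(classify_gender('  Boy '))  # male
```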
| -1 | 2016-08-11T20:55:10Z | [
"python",
"python-3.x"
] |
Retrieving RDS tags using boto3 gives an index error. | 38,905,131 | <p>I am trying to retrieve a tags using boto3 but I constantly run into the ListIndex out of range error.</p>
<p>My Code: </p>
<pre><code>rds = boto3.client('rds',region_name='us-east-1')
rdsinstances = rds.describe_db_instances()
for rdsins in rdsinstances['DBInstances']:
rdsname = rdsins['DBInstanceIdentifier']
arn = "arn:aws:rds:%s:%s:db:%s"%(reg,account_id,rdsname)
rdstags = rds.list_tags_for_resource(ResourceName=arn)
if 'MyTag' in rdstags['TagList'][0]['Key']:
print "Tags exist and the value is:%s"%rdstags['TagList'][0]['Value']
</code></pre>
<p>The error that I have is:</p>
<pre><code>Traceback (most recent call last):
File "rdstags.py", line 49, in <module>
if 'MyTag' in rdstags['TagList'][0]['Key']:
IndexError: list index out of range
</code></pre>
<p>I also tried using the for loop by specifying the range, it didn't seem to work either.</p>
<pre><code>for i in range(0,10):
print rdstags['TagList'][i]['Key']
</code></pre>
<p>Any help is appreciated. Thanks!</p>
| 0 | 2016-08-11T20:01:48Z | 38,906,281 | <p>You should iterate over list of tags first and compare <code>MyTag</code> with each item independently:
something like this:</p>
<pre><code> if 'MyTag' in [tag['Key'] for tag in rdstags['TagList']]:
print "Tags exist and.........."
</code></pre>
<p>or better:</p>
<pre><code>for tag in rdstags['TagList']:
if tag['Key'] == 'MyTag':
print "......"
</code></pre>
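<p>Since <code>TagList</code> is just a list of <code>{'Key': ..., 'Value': ...}</code> dicts, another option is to collapse it into a plain dict once, which makes any number of lookups cheap. Shown here with a dummy response instead of a live API call:</p>

```python
# Dummy stand-in for the rds.list_tags_for_resource(...) response
rdstags = {'TagList': [{'Key': 'MyTag', 'Value': 'prod'},
                       {'Key': 'Owner', 'Value': 'dba'}]}

tags = {t['Key']: t['Value'] for t in rdstags['TagList']}
if 'MyTag' in tags:
    print("Tags exist and the value is:%s" % tags['MyTag'])
```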
| 0 | 2016-08-11T21:16:11Z | [
"python",
"amazon-web-services",
"boto3",
"aws-rds"
] |
How to remove first occurrence in string repeatedly python? | 38,905,216 | <p>I am not sure how to continuously change the first occurrence in a list. For instance, I will have a list of numbers </p>
<pre><code>import numpy as np
test=np.array([np.nan,np.nan,1,1,1,np.nan,1,1])
</code></pre>
<p>Which gives me an output of </p>
<pre><code>array([ nan, nan, 1., 1., 1., nan, 1., 1.])
</code></pre>
<p>I want to use a function to change this to </p>
<pre><code>array([ nan, nan, nan, 1., 1., nan, nan, 1.])
</code></pre>
<p>which removes the first instance of 1 each time it starts. </p>
| 0 | 2016-08-11T20:06:56Z | 38,905,248 | <p>You could do something like this -</p>
<pre><code>idx = np.diff(np.append(False,(test==1)).astype(int))==1
test[idx] = np.nan
</code></pre>
<p><strong>Sample runs</strong></p>
<p>1) Sample case from posted question -</p>
<pre><code>In [80]: test
Out[80]: array([ nan, nan, 1., 1., 1., nan, 1., 1.])
In [81]: test[np.diff(np.append(False,(test==1)).astype(int))==1] = np.nan
In [82]: test
Out[82]: array([ nan, nan, nan, 1., 1., nan, nan, 1.])
</code></pre>
<p>2) Special case with a <code>1</code> at the start -</p>
<pre><code>In [72]: test
Out[72]: array([ 1., 1., 0., 1., 0., 0., 1., 1., 0., 1.])
In [73]: test[np.diff(np.append(False,(test==1)).astype(int))==1] = np.nan
In [74]: test
Out[74]: array([ nan, 1., 0., nan, 0., 0., nan, 1., 0., nan])
</code></pre>
| 1 | 2016-08-11T20:09:16Z | [
"python",
"list",
"numpy"
] |
How to remove first occurrence in string repeatedly python? | 38,905,216 | <p>I am not sure how to continuously change the first occurrence in a list. For instance, I will have a list of numbers </p>
<pre><code>import numpy as np
test=np.array([np.nan,np.nan,1,1,1,np.nan,1,1])
</code></pre>
<p>Which gives me an output of </p>
<pre><code>array([ nan, nan, 1., 1., 1., nan, 1., 1.])
</code></pre>
<p>I want to use a function to change this to </p>
<pre><code>array([ nan, nan, nan, 1., 1., nan, nan, 1.])
</code></pre>
<p>which removes the first instance of 1 each time it starts. </p>
| 0 | 2016-08-11T20:06:56Z | 38,905,573 | <p>You can take advantage of the fact that <code>nan</code> times anything is <code>nan</code>.</p>
<pre><code>import numpy as np
test=np.array([np.nan,np.nan,1,1,1,np.nan,1,1])
test[1:] *= test[:-1]
test[0] = np.nan
print test
# [ nan nan nan 1. 1. nan nan 1.]
</code></pre>
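<p>For comparison, the same "blank out the first element of each run of 1s" logic can be written without NumPy at all, which makes the intent explicit (a plain-Python sketch, not from the answers above):</p>

```python
from math import isnan

def mask_run_starts(seq):
    # Replace the first 1 of every consecutive run of 1s with nan
    out = list(seq)
    prev_was_one = False
    for i, v in enumerate(seq):
        is_one = not isnan(v) and v == 1
        if is_one and not prev_was_one:
            out[i] = float('nan')
        prev_was_one = is_one
    return out

nan = float('nan')
print(mask_run_starts([nan, nan, 1, 1, 1, nan, 1, 1]))
# -> [nan, nan, nan, 1, 1, nan, nan, 1]
```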
| 1 | 2016-08-11T20:29:07Z | [
"python",
"list",
"numpy"
] |
parameterizing a query in Postgres using Python | 38,905,241 | <p>I am having a bit of trouble parameterizing an SQL query with Python. Don't exactly know why this error is happening... if the tuple has two members and I am using two parameters in the SQL, how am I getting an off-by-one error?</p>
<p>error message:</p>
<pre><code>File "...\app.py", line 27, in main
rows = User.daily_users_by_pool_name('2016-08-01', '2016-08-02')
File "...\user.py", line 48, in daily_users_by_pool_name
cursor.execute(query, (start_date, end_date))
IndexError: tuple index out of range
</code></pre>
<p>calling function in main:</p>
<pre><code>rows = User.daily_users_by_pool_name('2016-08-01', '2016-08-02')
</code></pre>
<p>method in class User:</p>
<pre><code>from database import ConnectionFromPool
from datetime import datetime
import pandas as pd
import numpy as np
import psycopg2
...
@classmethod #static
def daily_users_by_pool_name(cls, start_date, end_date):
'''returns a Pandas.DataFrame of results'''
query = """
Select foo.dos::date, foo.cust_id
from foo f
join customer c on f.id = c.id
where foo.dos >= %s::DATE
and foo.dos < %s::DATE
and c.cust_name ilike '%_bar'
and c.baz not ilike 'test%' """
with ConnectionFromPool() as cursor:
cursor.execute(query, (start_date, end_date))
return pd.DataFrame(cursor.fetchall(), columns=['foo', 'cust_id'])
</code></pre>
| 1 | 2016-08-11T20:08:45Z | 38,905,667 | <p>Escape the <code>%</code> characters with one more <code>%</code></p>
<pre><code>and c.cust_name ilike '%%_bar'
and c.baz not ilike 'test%%' """
</code></pre>
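<p>The doubling is needed because psycopg2 uses Python's <code>pyformat</code> parameter style, so the query template obeys the same rules as ordinary <code>%</code> string formatting: <code>%%</code> collapses to a literal <code>%</code> while <code>%s</code> stays a placeholder. Plain string formatting shows the mechanism (the SQL fragment is just an illustration):</p>

```python
template = "dos >= %s and baz not ilike 'test%%'"
print(template % ("'2016-08-01'",))
# dos >= '2016-08-01' and baz not ilike 'test%'
```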
| 1 | 2016-08-11T20:35:36Z | [
"python",
"postgresql",
"psycopg2"
] |
Is it possible to get the contents of an S3 file without downloading it using boto3? | 38,905,291 | <p>I am working on a process to dump files from a <code>Redshift</code> database, and would prefer not to have to locally download the files to process the data. I saw that <code>Java</code> has a <code>StreamingObject</code> class that does what I want, but I haven't seen anything similar in <code>boto3</code>.</p>
| 0 | 2016-08-11T20:12:15Z | 38,910,315 | <p>If you have a <code>mybucket</code> S3 bucket, which contains a <code>beer</code> key, here is how to download and fetch the value without storing it in a local file:</p>
<pre><code>import boto3
s3 = boto3.resource('s3')
print s3.Object('mybucket', 'beer').get()['Body'].read()
</code></pre>
| 1 | 2016-08-12T05:06:09Z | [
"python",
"amazon-s3",
"boto3"
] |
Find 1D array from 2D NumPy arrays? | 38,905,322 | <p>Suppose I have </p>
<pre><code>[[array([x1, y1]), z1]
[array([x2, y1]), z2]
......
[array([xn, yn]), zn]
]
</code></pre>
<p>And I want to find the index of <code>array([x5, y5])</code>. How can find effieciently using NumPy?</p>
| 0 | 2016-08-11T20:14:34Z | 38,905,692 | <p>To start off, owing to the mixed data format, I don't think you can extract the arrays in a vectorized manner. Thus, you can use <code>loop comprehension</code> to extract the first element corresponding to the arrays from each list element as a <code>2D</code> array. So, let's say <code>A</code> is the input list, we would have -</p>
<pre><code>arr = np.vstack([a[0] for a in A])
</code></pre>
<p>Then, simply do the comparison in a vectorized fashion using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy's broadcasting feature</code></a>, as it will broadcast that comparison along all the rows and look for all matching rows with <code>np.all(axis=1)</code>. Finally, use <code>np.flatnonzero</code> to get the final indices. Thus, the final piece of the puzzle would be -</p>
<pre><code>idx = np.flatnonzero((arr == search1D).all(1))
</code></pre>
<p>You can read up on the answers to <a href="http://stackoverflow.com/questions/38674027/find-the-row-indexes-of-several-values-in-a-numpy-array"><code>this post</code></a> to see other alternatives to get indices in such a <code>1D</code> array searching in <code>2D</code> array problem.</p>
<p>Sample run -</p>
<pre><code>In [140]: A
Out[140]:
[[array([3, 4]), 11],
[array([2, 1]), 12],
[array([4, 2]), 16],
[array([2, 1]), 21]]
In [141]: search1D = [2,1]
In [142]: arr = np.vstack([a[0] for a in A]) # Extract 2D array
In [143]: arr
Out[143]:
array([[3, 4],
[2, 1],
[4, 2],
[2, 1]])
In [144]: np.flatnonzero((arr == search1D).all(1)) # Finally get indices
Out[144]: array([1, 3])
</code></pre>
| 0 | 2016-08-11T20:37:54Z | [
"python",
"python-2.7",
"numpy"
] |
How to verify that the repeated output has same sets of lines and no extra line present in it | 38,905,370 | <p>I have number of repeated lines in output (which is pre-defined), from which I need to verify that the output has fixed set of lines repeated and no extra lines are present in it.</p>
<pre><code>> **********************************
> timestamp expected
> **********************************
>
> **********************************
> Error expected
> **********************************
>
> **********************************
> timestamp expected
> **********************************
>
> **********************************
> Error expected
> **********************************
>
> **********************************
> timestamp expected
> **********************************
>
> **********************************
> junk not expected
> **********************************
</code></pre>
<p>I want to make sure that if any line other than "timestamp expected" and "error expected" is/are present in output, error message should come saying unexpected line/s present in output.</p>
<p>Can someone please help me to get an idea how can I do this?</p>
| -3 | 2016-08-11T20:17:11Z | 38,905,455 | <p>Build a simple class with a print method that formats your output, verifies its content (and throws an exception if not OK) and counts the two cases.
To be honest from your question it is not clear what you already have at hand (do the lines come from anywhere else?, etc.)</p>
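<p>A minimal sketch of that idea (the class name and the exact allowed lines are assumptions, since the question doesn't show how the lines are produced):</p>

```python
class OutputChecker(object):
    ALLOWED = ('timestamp expected', 'Error expected')

    def __init__(self):
        self.counts = dict((line, 0) for line in self.ALLOWED)
        self.unexpected = []

    def feed(self, raw):
        line = raw.strip(' >*\t')   # drop the '>' and '*' decoration
        if not line:                # skip separator / blank lines
            return
        if line in self.counts:
            self.counts[line] += 1
        else:
            self.unexpected.append(line)

    def ok(self):
        return not self.unexpected

checker = OutputChecker()
for raw in ['> timestamp expected', '> Error expected', '> junk not expected']:
    checker.feed(raw)
print(checker.ok(), checker.unexpected)  # False ['junk not expected']
```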
| 0 | 2016-08-11T20:22:00Z | [
"python",
"regex",
"python-2.7",
"python-3.x"
] |
How to verify that the repeated output has same sets of lines and no extra line present in it | 38,905,370 | <p>I have number of repeated lines in output (which is pre-defined), from which I need to verify that the output has fixed set of lines repeated and no extra lines are present in it.</p>
<pre><code>> **********************************
> timestamp expected
> **********************************
>
> **********************************
> Error expected
> **********************************
>
> **********************************
> timestamp expected
> **********************************
>
> **********************************
> Error expected
> **********************************
>
> **********************************
> timestamp expected
> **********************************
>
> **********************************
> junk not expected
> **********************************
</code></pre>
<p>I want to make sure that if any line other than "timestamp expected" and "error expected" is/are present in output, error message should come saying unexpected line/s present in output.</p>
<p>Can someone please help me to get an idea how can I do this?</p>
| -3 | 2016-08-11T20:17:11Z | 38,905,481 | <p>This is pseudo code. I suggest taking an <code>empty dict</code>, then using a <code>counter</code>.</p>
<pre><code>mystr = '''**********************************
timestamp expected
**********************************
**********************************
Error expected
**********************************
**********************************
timestamp expected
**********************************
**********************************
Error expected
**********************************
**********************************
timestamp expected
**********************************
**********************************
junk not expected
**********************************
My Line
Your Line
XYZ
blah blah
My Line
'''
empty_dict = {}
mystr = mystr.split("\n")
for i in mystr:
if i not in empty_dict:
empty_dict[i] = 1
else:
empty_dict[i] += 1
print empty_dict
</code></pre>
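<p>The standard library already provides this counter as <code>collections.Counter</code>, and once counted you can diff against the set of allowed lines (the two lines from the question; this assumes the <code>*</code> separator rows have been filtered out first):</p>

```python
from collections import Counter

allowed = {'timestamp expected', 'Error expected'}
lines = ['timestamp expected', 'Error expected',
         'timestamp expected', 'junk not expected']

counts = Counter(lines)
unexpected = sorted(line for line in counts if line not in allowed)
print(unexpected)  # ['junk not expected']
```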
| 0 | 2016-08-11T20:23:26Z | [
"python",
"regex",
"python-2.7",
"python-3.x"
] |
Can I change the number of hidden nodes in my Deep Learning model instead of remaking the entire model? | 38,905,396 | <p>I'm trying a brute force grid search to find the optimal number of hidden nodes to have in my TensorFlow DeepLearning model. I'm not too concerned about how long the program will take but I've found that my program runs out of memory because of all the tf.variables it has to make. The code to build my model is as follows:</p>
<pre><code>def hiddenLayer(input_data, num_nodes, num_inputs, layer_num):
#Initialize all weights as the standard deviation of the input
weights = tf.Variable(tf.truncated_normal([num_inputs, num_nodes],
stddev=1.0 / math.sqrt(float(num_inputs))), name='hidden' + str(layer_num) + '_weights')
#Initialize all biases as zero
biases = tf.Variable(tf.zeros([num_nodes]), name='hidden' + str(layer_num) + '_biases')
#Using RELU, return a linear combination of the biases and weights
return tf.nn.relu(tf.matmul(input_data, weights) + biases)
def softmaxOutput(input_data, num_inputs, num_outputs):
#Initialize all weights as the standard deviation of the input
weights = tf.Variable(tf.truncated_normal([num_inputs, num_outputs],
stddev=1.0 / math.sqrt(float(num_inputs))), name='output_weights')
#Initialize all baises as zero
biases = tf.Variable(tf.zeros([num_outputs]), name='output_biases')
#Squash the linear combination using softmax to give you LOGITS
return tf.nn.softmax(tf.matmul(input_data, weights) + biases)
def calculate_loss(logits, labels):
labels = tf.to_int64(labels)
return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits, labels, name = 'xentropy'))
class SingleDNNClassifier(object):
def __init__(self, num_nodes, num_inputs, num_outputs, batch_size, layer_num=1, lr=0.01):
#Defining the features and labels variables
self.x = x = tf.placeholder(tf.float32, shape=(batch_size, num_inputs))
self.y = y = tf.placeholder(tf.int32, shape=(batch_size))
#Run the input data through the first hidden layer
x = hiddenLayer(x, num_nodes, num_inputs, layer_num)
#Run the output from the first hidden layer with softmax to get predicted logits
self.logits = logits = softmaxOutput(x, num_nodes, num_outputs)
#Get the predicted labels
self.predictions = tf.argmax(logits,1)
#Calculate the loss
self.loss = xeloss = calculate_loss(logits, y)
#Define the training operation for this model
self.train_op = tf.train.GradientDescentOptimizer(lr).minimize(xeloss)
def train(sess, model, data, batch_size):
epoch_size = int(data.num_samples/batch_size)
losses = []
for step in xrange(epoch_size):
#The way you get batches now shuffles the data, check to see if this is correct
train_x, train_y = data.next_batch(batch_size)
loss, _ = sess.run([model.loss, model.train_op],
{model.x: train_x, model.y: train_y})
losses.append(loss)
if step % (epoch_size/5) == 5:
tf.logging.info("%.2f: %.3f", step * 1.0 / epoch_size, np.mean(losses))
return np.mean(losses)
def evaluate(sess, model, data, batch_size):
predicted_values = []
actual_values = []
for i in xrange(int(data.num_samples/batch_size)):
val_x, val_y = data.next_batch(batch_size)
predictions = sess.run(model.predictions, {model.x: val_x, model.y: val_y})
predicted_values.append(predictions)
actual_values.append(val_y)
predicted_values = np.concatenate(predicted_values).ravel()
actual_values = np.concatenate(actual_values).ravel()
return roc_auc_score(actual_values, predicted_values)
</code></pre>
<p>As you can see, my model is pretty simple. Just a single layer network with a softmax output. However, I want to know the optimal number of hidden nodes so I am running this code:</p>
<pre><code>sess = tf.InteractiveSession()
hidden_nodes = 100
epochs = 100
increment = 100
max_hidden_nodes = 20000
while (hidden_nodes <= max_hidden_nodes):
print ("HIDDEN NODES: %d\n" %(hidden_nodes))
output_file = open('AUC_SingleLayer/SingleLayer_' + str(hidden_nodes) + '.txt', 'w')
model = SingleDNNClassifier(hidden_nodes,16000,2,100)
tf.initialize_all_variables().run()
for i in range(epochs):
print ("\tEPOCH: %d\n" %(i+1))
train(sess, model, dataset.training, 100)
valid_auc = DLM.evaluate(sess, model, dataset.testing, 100)
output_file.write('EPOCH %d: %.5f' % (i+1, valid_auc))
if (i < epochs-1):
output_file.write('\n')
hidden_nodes += increment
</code></pre>
<p>However, I keep getting an error that my Linux workstation runs out of memory because it has to keep reinitializing the weights and biases variables within my <em>hiddenLayer</em> function. Naturally, as the number of nodes increases, the more memory each variable is going to take. </p>
<p>I'm trying to do something like in this link: <a href="https://github.com/tensorflow/tensorflow/issues/2311" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/2311</a></p>
<p>where I can just pass in the value to a placeholder of an unspecified size for the dimension that I am changing but am not sure if it will work. I'd appreciate any help or direction with this.</p>
<p>TLDR: Need to stop reinitializing variables because I am running out of memory, but the variable's size is going to change for each iteration of my while loop.</p>
| 0 | 2016-08-11T20:18:19Z | 38,964,221 | <p>Even though this isn't a perfect answer, I managed to find a workaround. I wrote a shell script that gets a range of numbers based on my interval using the <em>seq starting_number increment ending_number</em> and fed each value into my python code as a command line parameter. This avoids the memory problem since each iteration ends with closing the python process, thus, avoiding the memory problem.</p>
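<p>In outline, the driver can even stay in Python — each configuration runs in a fresh interpreter, so all of the graph state is reclaimed by the OS when the child process exits. (<code>train_single_layer.py</code> is a hypothetical script that reads the node count from <code>sys.argv[1]</code>; the <code>print</code> stands in for the real <code>subprocess.call(cmd)</code>.)</p>

```python
import subprocess  # used once the print below is swapped for subprocess.call
import sys

for hidden_nodes in range(100, 20001, 100):
    cmd = [sys.executable, 'train_single_layer.py', str(hidden_nodes)]
    print(' '.join(cmd))  # replace with subprocess.call(cmd) to actually run
```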
| 0 | 2016-08-15T22:56:01Z | [
"python",
"machine-learning",
"out-of-memory",
"tensorflow",
"deep-learning"
] |
Mac OS X linking .so file to dynamic library | 38,905,527 | <p>I'm trying to build a python package that is dependent on several libraries including boost, boost-python, and fftw. I am able to compile the package without any errors, but when I try to run the program with the <code>steme --help</code> command, I get an error:</p>
<pre><code>2016-08-11 15:07:55,571:ERROR: ImportError: dlopen(/Users/<>/.ENV/lib/python2.7/site-packages/STEME-1.9.1-py2.7-macosx-10.11-intel.egg/stempy/_release_build/_index.so, 2): Symbol not found: __ZN5boost6python6detail13current_scopeE
Referenced from: /Users/<>/.ENV/lib/python2.7/site-packages/STEME-1.9.1-py2.7-macosx-10.11-intel.egg/stempy/_release_build/_index.so
Expected in: flat namespace
in /Users/<>/.ENV/lib/python2.7/site-packages/STEME-1.9.1-py2.7-macosx-10.11-intel.egg/stempy/_release_build/_index.so
</code></pre>
<p>My understanding is the the file <code>_index.so</code> is looking for the symbol <code>__ZN5boost6python6detail13current_scopeE</code> but can't find it. Next I checked what libraries are linked to the <code>_index.so</code> file using <code>otool -L</code>:</p>
<pre><code>/Users/<>/.ENV/lib/python2.7/site-packages/STEME-1.9.1-py2.7-macosx-10.11-intel.egg/stempy/_release_build/_index.so:
/usr/local/opt/fftw/lib/libfftw3.3.dylib (compatibility version 8.0.0, current version 8.4.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)
</code></pre>
<p>Based on this, what I think is happening is that the <code>boost-python</code> dynamic library was not properly linked to <code>_index.so</code> during compilation. To check that I ran <code>nm -g</code> on the <code>boost-python</code> dylibs (which were installed using <code>brew</code>, along with the other mentioned dependencies). The missing symbol was found in the <code>libboost_python.dylib</code> file:</p>
<p><code>0000000000031d08 S __ZN5boost6python6detail13current_scopeE</code></p>
<p>So, my next question is, how do I link the <code>libboost_python.dylib</code> file (located at <code>/usr/local/Cellar/boost-python/1.61.0/lib</code>) to the <code>_index.so</code> file? Will that even fix the problem? Is there some compiler flag that I need to set to make sure the build is properly linked?</p>
| 0 | 2016-08-11T20:26:07Z | 38,906,047 | <p>Try to include <code>/usr/local/Cellar/boost-python/1.61.0/lib</code> in your dynamic library search path when you launch the binary (on OS X the loader consults <code>DYLD_LIBRARY_PATH</code>, not <code>PATH</code>). Something like</p>
<pre><code>$ export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/Cellar/boost-python/1.61.0/lib
$ ./your-binary
</code></pre>
<p>The link to the library is there; otherwise you could not have found it via <code>otool</code>. I think the program is failing at load time because the shared library cannot be found on the dynamic library search path.</p>
<p>This error seems very similar to the "DLL not found" error type in Windows, which can be fixed by supplying the DLL either in the current dir or in the search path.</p>
| 0 | 2016-08-11T20:58:56Z | [
"python",
"c++",
"osx",
"boost"
] |
How to use the POST method in web.py to update a table in a postgreSQL database? | 38,905,537 | <p>Here's my web.py code so far. I want to use my web server to update a table.</p>
<pre><code>import web
urls = ('/', 'Index')
render = web.template.render('templates/')
app = web.application(urls, globals())
web.config.debug = True
db = web.database(dbn='postgres', db='my_db', user='postgres', pw='******', host='localhost')
class Index:
def GET(self):
drivers = db.select("drivers")
return render.index(drivers)
def POST(self, name):
return "post"
if __name__ == '__main__':app.run()
</code></pre>
<p>Right now it takes what's in the table and prints it out in the browser. But I want to implement some logic that adds numeric values to the table as a while loop runs using the post method. How can I do this? </p>
| 0 | 2016-08-11T20:26:26Z | 38,930,985 | <p>You can do the Postgres insertion as follows,</p>
<pre><code>def POST(self):
sequence_id = db.insert('test1', col1=5, col2=3) # insert(tablename, seqname=None, _test=False, **values)
</code></pre>
<p>Refer <a href="http://lost-theory.org/webpydocs/module-web.db.html" rel="nofollow">this</a> for more details.</p>
<p>You may use,</p>
<pre><code>data = json.loads(web.data()) # convert to dictionary if needed using json.loads
</code></pre>
<p>or</p>
<pre><code>data = web.input().number # number is the key which holds the data
</code></pre>
<p>depending on how you send data to POST method. </p>
<p>eg: If <code>content-type = application/json</code>, use <code>json.loads(web.data())</code>; for form-encoded requests, <code>web.input().number</code> works.</p>
<p>Refer <a href="http://webpy.org/cookbook/" rel="nofollow">this</a> as well.</p>
| 0 | 2016-08-13T08:51:35Z | [
"python",
"postgresql",
"rest",
"web.py"
] |
Unit test C-generating python code | 38,905,560 | <p>I have a project which involves some (fairly simple) C-code generation as part of a build system. In essence, I have some version information associated with a project (embedded C) which I want to expose in my binary, so that I can easily determine what firmware version was programmed to a particular device for debugging purposes.</p>
<p>I'm writing some simplistic python tools to do this, and I want to make sure they're thoroughly tested. In general, this has been fairly straightforward, but I'm unsure what the best strategy is for the code-generation portion. Essentially, I want to make sure that the generated files both:</p>
<ul>
<li>Are syntactically correct</li>
<li>Contain the necessary information</li>
</ul>
<p>The second, I can (I believe) achieve to a reasonable degree with regex matching. The first, however, is something of a bigger task. I could probably use something like <a href="https://github.com/eliben/pycparser" rel="nofollow">pycparser</a> and examine the resulting AST to accomplish both goals, but that seems like an unnecessarily heavyweight solution.</p>
<p>Edit: A dataflow diagram of my build hierarchy</p>
<p><img src="http://i.imgur.com/zlw6RBu.png" alt="dataflow diagram"></p>
| 2 | 2016-08-11T20:27:53Z | 38,919,890 | <p>Thanks for the diagram! Since you are not testing for coverage, if it were me, I would just compile the generated C code and see if it worked :) . You didn't mention your toolchain, but in a Unix-like environment, <code>gcc <whatever build flags> -c generated-file.c || echo 'Oops!'</code> should be sufficient.</p>
<p>Now, it may be that the generated code isn't a freestanding compilation unit. No problem there: write a shim. Example <code>shim.c</code>:</p>
<pre><code>#include <stdio.h>
#include "generated-file.c"
main() {
printf("%s\n", GENERATED_VERSION); //or whatever is in generated-file.c
}
</code></pre>
<p>Then <code>gcc -o shim shim.c && diff <(./shim) "name of a file holding the expected output" || echo 'Oops!'</code> should give you a basic test. (The <code><()</code> is bash <a href="http://wiki.bash-hackers.org/syntax/expansion/proc_subst" rel="nofollow">process substitution</a>.) The file holding the expected results may already be in your git repo, or you might be able to use your Python routine to write it to disk somewhere.</p>
<p><strong>Edit 2</strong> This approach can work even if your actual toolchain isn't amenable to automation. To test syntactic validity of your code, you can use <code>gcc</code> even if you are using a different compiler for your target processor. For example, compiling with <code>gcc -ansi</code> will disable a number of GNU extensions, which means code that compiles with <code>gcc -ansi</code> is more likely to compile on another compiler than is code that compiles with full-on, GNU-extended <code>gcc</code>. See the gcc page on <a href="https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html" rel="nofollow">"C Dialect Options"</a> for all the different flavors you can use (<a href="https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html" rel="nofollow">ditto C++</a>).</p>
<p><strong>Edit</strong> Incidentally, this is the same approach <a href="http://www.ibm.com/developerworks/library/l-debcon/" rel="nofollow">GNU autoconf</a> uses: write a small test program to disk (autoconf calls it <code>conftest.c</code>), compile it, and see if the compilation succeeded. The test program is (preferably) the bare minimum necessary to test if everything is OK. Depending on how complicated your Python is, you might want to test several different aspects of your generated code with respective, different shims.</p>
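<p>If the checks live in your Python test suite anyway, the compile step is easy to wrap (a sketch: it assumes <code>gcc</code> is on <code>PATH</code> and only verifies syntax, not linkage):</p>

```python
import os
import subprocess
import tempfile

def compiles(c_source):
    """Return True if gcc accepts the C source, False if not, None if no gcc."""
    fd, path = tempfile.mkstemp(suffix='.c')
    try:
        os.write(fd, c_source.encode('ascii'))
        os.close(fd)
        with open(os.devnull, 'w') as devnull:
            try:
                rc = subprocess.call(['gcc', '-fsyntax-only', path],
                                     stdout=devnull, stderr=devnull)
            except OSError:
                return None  # gcc not installed
        return rc == 0
    finally:
        os.remove(path)

print(compiles('int answer = 42;'))  # True when gcc is available
```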
| 1 | 2016-08-12T14:00:11Z | [
"python",
"c",
"unit-testing",
"auto-generate"
] |
CSV.writer | Ensure first row in CSV is also last row | 38,905,578 | <p>This code is going through .KAP files on a shared drive and creating CSV's on my local drive consisting of only the coordinate pairs of lat/lon existing within the .KAP files. </p>
<p>Example of coordinate lines within a .KAP file </p>
<pre><code>PLY/1,48.107478621032,-69.733975000000
PLY/2,48.163516399836,-70.032838888053
PLY/3,48.270000002883,-70.032838888053
PLY/4,48.107478621032,-69.733975000000
</code></pre>
<p>These coordinates will later be used to create polygons. </p>
<p><strong>Problem</strong>: About half of the .KAP files will also maintain the first lat/lon coordinate pair as the last lat/lon coordinate pair. (Very useful for closing polygons, see how PLY/1 and PLY/4 above are the same). For some reason some .KAP files do not have this. See where I'm going? </p>
<p>How would I ensure that the first coordinate pair is always recorded as the last coordinate pair in all the output CSV's?</p>
<p>Here's my code so far:</p>
<pre><code>import os
import zipfile
import csv
currentPath = r"K:\BsbProd\BSBCHS\BSB_4_Release"
for zip in [i for i in os.listdir(currentPath) if i.lower().endswith('.zip')]:
zf = zipfile.ZipFile(os.path.join(currentPath, zip), 'r')
for each in [n for n in zf.namelist() if n.lower().endswith('.kap')]:
currentFile = zf.open(each).readlines()
writer = csv.writer(open(r"C:\Users\PalmerDa\Desktop\BSB\{}.csv".format(os.path.basename(each)), "wb"))
for row in currentFile:
if row.startswith('PLY'):
col1, col2 = row.split(",")[1:]
rows = float(col1), float(col2)
writer.writerow(rows)
</code></pre>
| 0 | 2016-08-11T20:29:28Z | 38,905,945 | <p>Just do some kind of check on the last row to see if it is the same as the first, and if not, add a copy of the first row to the end. Maybe something like</p>
<pre><code>for i in range(len(currentFile)):
    row = currentFile[i]
    if row.startswith('PLY'):
        col1, col2 = row.split(",")[1:]
        rows = float(col1), float(col2)
        writer.writerow(rows)
    if i == len(currentFile) - 1 and row != currentFile[0]:
        firstcol1, firstcol2 = currentFile[0].split(',')[1:]
        firstrow = float(firstcol1), float(firstcol2)
        writer.writerow(firstrow)
</code></pre>
<p>Or even easier, just add the first line to the end before you look at any rows after reading lines:</p>
<pre><code>currentFile = zf.open(each).readlines()
if currentFile[-1] != currentFile[0]: currentFile.append(currentFile[0])
</code></pre>
<p>and then you don't need to change the row loop at all. I like the second solution better</p>
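<p>Either way, the closure rule itself is worth isolating in a tiny helper so it can be unit-tested on plain lists of coordinate pairs (a sketch; the name is made up):</p>

```python
def close_ring(coords):
    """Append a copy of the first pair if the ring isn't already closed."""
    if coords and coords[0] != coords[-1]:
        coords.append(coords[0])
    return coords

ring = close_ring([(48.107478, -69.733975), (48.163516, -70.032839),
                   (48.270000, -70.032839)])
print(ring[0] == ring[-1])  # True
```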
| 0 | 2016-08-11T20:52:42Z | [
"python",
"csv",
"writer"
] |
CSV.writer | Ensure first row in CSV is also last row | 38,905,578 | <p>This code is going through .KAP files on a shared drive and creating CSV's on my local drive consisting of only the coordinate pairs of lat/lon existing within the .KAP files. </p>
<p>Example of coordinate lines within a .KAP file </p>
<pre><code>PLY/1,48.107478621032,-69.733975000000
PLY/2,48.163516399836,-70.032838888053
PLY/3,48.270000002883,-70.032838888053
PLY/4,48.107478621032,-69.733975000000
</code></pre>
<p>These coordinates will later be used to create polygons. </p>
<p><strong>Problem</strong>: About half of the .KAP files will also maintain the first lat/lon coordinate pair as the last lat/lon coordinate pair. (Very useful for closing polygons, see how PLY/1 and PLY/4 above are the same). For some reason some .KAP files do not have this. See where I'm going? </p>
<p>How would I ensure that the first coordinate pair is always recorded as the last coordinate pair in all the output CSV's?</p>
<p>Here's my code so far:</p>
<pre><code>import os
import zipfile
import csv
currentPath = r"K:\BsbProd\BSBCHS\BSB_4_Release"
for zip in [i for i in os.listdir(currentPath) if i.lower().endswith('.zip')]:
zf = zipfile.ZipFile(os.path.join(currentPath, zip), 'r')
for each in [n for n in zf.namelist() if n.lower().endswith('.kap')]:
currentFile = zf.open(each).readlines()
writer = csv.writer(open(r"C:\Users\PalmerDa\Desktop\BSB\{}.csv".format(os.path.basename(each)), "wb"))
for row in currentFile:
if row.startswith('PLY'):
col1, col2 = row.split(",")[1:]
rows = float(col1), float(col2)
writer.writerow(rows)
</code></pre>
| 0 | 2016-08-11T20:29:28Z | 38,956,644 | <p>When I had time to do it I came up with this solution to my own question:</p>
<pre><code>import os
import zipfile
import csv
import glob
currentPath = r"K:\BsbProd\BSBCHS\BSB_4_Release"
for zip in glob.glob('{}/*.zip'.format(currentPath)):
zf = zipfile.ZipFile(os.path.join(currentPath, zip), 'r')
for each in [n for n in zf.namelist() if n.lower().endswith('.kap')]:
currentFile = zf.open(each).readlines()
writer = csv.writer(open(r"C:\Users\PalmerDa\Desktop\BSB2{}.csv".format(os.path.basename(each)), "wb"))
plys = [r for r in currentFile if r.startswith('PLY')]
for rr in plys:
col1, col2 = rr.split(",")[1:]
rows = float(col1), float(col2)
writer.writerow(rows)
if plys[0] != plys[-1]:
writer.writerow((float(plys[0].split(",")[1]),float(plys[0].split(",")[2])))
</code></pre>
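<p>The closing-the-ring check in the answer can be exercised on its own, without any ZIP or .KAP files. A minimal sketch (the PLY/1 and PLY/3 records are taken from the question; PLY/2 is illustrative):</p>

```python
def close_ring(pairs):
    """Append the first coordinate pair at the end if it isn't already there."""
    if pairs and pairs[0] != pairs[-1]:
        pairs = pairs + [pairs[0]]
    return pairs

# PLY/ records as they appear in a .KAP file (no closing pair here).
kap_lines = [
    "PLY/1,48.107478621032,-69.733975000000",
    "PLY/2,48.274776256890,-70.255966666667",
    "PLY/3,48.270000002883,-70.032838888053",
]

coords = []
for line in kap_lines:
    if line.startswith("PLY"):
        lat, lon = line.split(",")[1:]
        coords.append((float(lat), float(lon)))

coords = close_ring(coords)
print(coords[0] == coords[-1])  # True: the polygon ring is closed
```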
| 0 | 2016-08-15T14:06:03Z | [
"python",
"csv",
"writer"
] |
Attribute Error in querying with Flask SQL Alchemy | 38,905,712 | <pre><code>from flask import Flask
import requests
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
app = Flask(__name__)
db = SQLAlchemy(app)
migrate = Migrate(app, db)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://root:@localhost/localDB'
class MainResult(db.Model):
__tablename__ = "mainresult"
metadata_key = db.Column(db.String(128), primary_key=True)
individualResults = db.relationship('IndividualResult', backref='mainresult', lazy='dynamic',
cascade="all, delete-orphan")
def __init__(self, metadata_key):
self.metadata_key = metadata_key
class IndividualResult(db.Model):
__tablename__ = "individualresult"
metadata_key = db.Column(db.String(128), db.ForeignKey('mainresult.metadata_key'), primary_key=True)
ts = db.Column(db.String(50), primary_key=True)
def __init__(self, metadata_key, ts):
self.metadata_key = metadata_key
self.ts = ts
@app.route('/')
def hello_world():
# This throws an error.
print IndividualResult.query.get("03d49625b00643058a40eb8672e94f4e")
# Error too
IndividualResult.query.filter_by(metadata_key='03d49625b00643058a40eb8672e94f4e').first()
# This works.
print MainResult.query.get("03d49625b00643058a40eb8672e94f4e")
return 'Hello World!'
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>I have the above code with the data correctly inserted. I am able to query out with MainResult.query, but I am unable to do so with IndividualResult.query.</p>
<p>The error that was returned is,</p>
<p>AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with IndividualResult.query has an attribute 'get'</p>
<p>It's rather peculiar as they are both nearly identical</p>
<p>Full Stacktrace:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 2000, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1991, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1567, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/Park/PycharmProjects/Flask_24Aug/Main_Flask.py", line 170, in hello_world()
print IndividualResult.query.get("03d49625b00643058a40eb8672e94f4e","1470860955")
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 193, in __getattr__
key)
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with IndividualResult.query has an attribute 'get'
</code></pre>
| 0 | 2016-08-11T20:39:03Z | 38,906,392 | <p>When you use <code>get</code> you need to give it the primary key, in the case of your <code>IndividualResult</code> you've defined a composite primary key (a primary key made of multiple columns) so to query that you would need to specify those two columns:</p>
<pre><code>print(IndividualResult.query.get(("03d49625b00643058a40eb8672e94f4e", "TSVALUE")))
</code></pre>
<p>It's also worth adding <code>__repr__</code>'s to your classes to make it easier to see what's going on when you're doing <code>print</code> debugging.</p>
<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
db = SQLAlchemy(app)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///'
class MainResult(db.Model):
__tablename__ = "mainresult"
metadata_key = db.Column(db.String(128), primary_key=True)
individualResults = db.relationship('IndividualResult', backref='mainresult', lazy='dynamic',
cascade="all, delete-orphan")
def __init__(self, metadata_key):
self.metadata_key = metadata_key
def __repr__(self):
return '<MR:{}>'.format(self.metadata_key)
class IndividualResult(db.Model):
__tablename__ = "individualresult"
metadata_key = db.Column(db.String(128), db.ForeignKey('mainresult.metadata_key'), primary_key=True)
ts = db.Column(db.String(50), primary_key=True)
def __init__(self, metadata_key, ts):
self.metadata_key = metadata_key
self.ts = ts
def __repr__(self):
return '<IQ:{}>'.format(self.metadata_key)
@app.route('/')
def hello_world():
print MainResult.query.get("12345")
# <MR:12345>
print IndividualResult.query.get(("12345", "AAA"))
# <IQ:12345>
print IndividualResult.query.filter_by(metadata_key='12345').first()
# <IQ:12345>
return 'Hello World!'
# Populate with some test data:
@app.before_first_request
def data():
db.create_all()
mr = MainResult(metadata_key='12345')
ir = IndividualResult(metadata_key='12345', ts='AAA')
db.session.add(mr)
db.session.add(ir)
db.session.commit()
if __name__ == '__main__':
app.run(debug=True, port=5002)
</code></pre>
| 0 | 2016-08-11T21:25:11Z | [
"python",
"flask",
"sqlalchemy"
] |
python asyncio invalid syntax ubuntu | 38,905,747 | <p>On OS X, my code works fine. Trying the exact code out on ubuntu, I get a syntax error:</p>
<pre><code>ubuntu@home:server$ python3 server.py
File "server.py", line 39
async def hello(websocket, path):
^
SyntaxError: invalid syntax
</code></pre>
<p>I used <code>pip3 install asyncio</code> to install asyncio</p>
<p>I also tried upgrading to python 3.5, but it causes a ton of library errors with other libraries, so far I can't get this working with python 3.5 on ubuntu although it works with python 3.5 on OS X.</p>
| 1 | 2016-08-11T20:41:15Z | 38,905,831 | <p>You have different versions of python on your OS X machine and your ubuntu machine. </p>
<p><a href="https://www.python.org/dev/peps/pep-0492/" rel="nofollow"><code>async def</code> syntax was added in python 3.5</a>. </p>
<p>There's no hope of getting that syntax working on python 3.4. </p>
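<p>A quick way to see which behaviour your interpreter gives you is to ask it whether it accepts the new syntax at all; a small self-contained check:</p>

```python
import sys

SRC = "async def hello(websocket, path):\n    pass\n"

def supports_async_def():
    """True if this interpreter accepts PEP 492 async/await syntax (3.5+)."""
    try:
        compile(SRC, "<check>", "exec")
        return True
    except SyntaxError:
        return False

print(sys.version_info[:2], supports_async_def())
```

<p>Under the Ubuntu machine's 3.4 interpreter this returns False, which is exactly the SyntaxError reported above.</p>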
| 4 | 2016-08-11T20:46:01Z | [
"python",
"python-3.x",
"python-asyncio"
] |
Unable to write multiple simultaneous filesinks from appsrc | 38,905,859 | <p>I'm having trouble saving from appsrc to filesink when I have multiple gstreamer processes running simultaneously. Only one of the gstreamer processes will write correctly while the others will all write nearly empty files. There appears to be write contention during the filesink operation.</p>
<p>Note: I am using gstreamer1.0 (v1.8.2) with python3 (v3.5.2) on MAC OS 10.11.6.</p>
<p>Here's what my code is actually doing:</p>
<p>In the background, I am reading in frames from a single video stream, converting each frame to BGR numpy arrays of size 1920x800x3, and storing each frame in a circular buffer. I have built a "gstreamer_writer function" that reads frames from this circular buffer, converts the frames into a byte stream, and feeds this stream into appsrc. </p>
<p>This works by instantiating a new multiprocess (multiprocessing.Process) and pointing this at the "gstreamer_writer function". This works completely fine for a single multiprocess/function call. Appsrc is correctly fed the byte stream and I save these BGR frames into a mp4 with h264 encoding using the following gstreamer pipeline:</p>
<pre><code>appsrc format=3 name=app emit-signals=true do-timestamp=true is-live=true blocksize=4608000 max-bytes=0 caps=video/x-raw,format=BGR,width=1920,height=800 ! videoconvert ! video/x-raw,format=I420,width=1920,height=800 ! vtenc_h264 ! mp4mux ! filesink location=test1.mp4
</code></pre>
<p>However, if I instantiate two or more multiprocesses and point them at the function only one of filesinks will work properly. For example, if one is writing to "test1.mp4" and the other is writing to "test2.mp4" then one of the videos will be written correctly and the other will fail and write a nearly empty mp4 (~500kb). It's not always the same mp4, 50% of the time test1.mp4 gets written correctly and 50% of the time test2.mp4 gets written correctly. It looks like there is some kind of race condition or write contention that is preventing both mp4s from being written to file properly.</p>
<p>One thing to note is that each multiprocess is accessing the same frames from the same ring buffer. I thought this may have been causing problems with gstreamer. However, if I display the streams with autovideosink instead of writing them to file, I can display as many streams/multiprocesses as I'd like. This means the data is being properly passed through the pipeline and is only failing during the write stage. I tested this using the gstreamer command:</p>
<pre><code>appsrc format=3 name=app emit-signals=true do-timestamp=true is-live=true blocksize=4608000 max-bytes=0 caps=video/x-raw,format=BGR,width=1920,height=800 ! videoconvert ! video/x-raw,format=I420,width=1920,height=800 ! vtenc_h264 ! avdec_h264 ! autovideosink
</code></pre>
<p>If anyone has any suggestions for how I can fix this problem I would appreciate it. I'm hoping it's a simple change but you never know with gstreamer!</p>
<p>Thanks!</p>
| 0 | 2016-08-11T20:47:51Z | 38,911,821 | <p>Too long for comment, and this can solve your problem if I am correct..</p>
<p>Add queue before filesink:</p>
<pre><code>appsrc format=3 name=app emit-signals=true do-timestamp=true is-live=true blocksize=4608000 max-bytes=0 caps=video/x-raw,format=BGR,width=1920,height=800 ! videoconvert ! video/x-raw,format=I420,width=1920,height=800 ! queue ! vtenc_h264 ! mp4mux ! queue ! filesink location=test1.mp4
</code></pre>
<p>Queue has two functions:
+ make a buffer before consuming element (which encoder is - it needs many frames before it can start encoding)
+ separate further processing into new thread - so that filesink would process data in new thread.</p>
<p>HTH</p>
| 0 | 2016-08-12T07:01:40Z | [
"python",
"multiprocessing",
"gstreamer",
"python-gstreamer"
] |
How to make a Histogram from a sqldf in python | 38,905,956 | <p>I wrote the following SQL query in iPython notebook</p>
<pre><code>q = """SELECT *
FROM Northeast ORDER BY counted desc LIMIT 10"""
display(sqldf(q))
df = pysqldf(q)
plt.bar(df)
plt.show()
</code></pre>
<p>I was trying to make a histogram of it but I keep getting errors about it need a high etc. I try it in different ways getting different errors. Is it possible to make out of the result of a sgldf an bar histogram with the city as x axe and population as y axe?</p>
| 0 | 2016-08-11T20:53:27Z | 38,978,224 | <p>The following script should help with what you are trying to do:</p>
<pre><code>import pandas as pd
from pysqldf import SQLDF
import matplotlib.pyplot as plt
##Set up the sqldf function to query from the globals() variables
sqldf = SQLDF(globals())
##Since you're using ipython notebooks, put plots in line with this: %matplotlib inline
##Get some data from Wikipedia: Gets all sorts of information on cities in the Northeast
cities = pd.read_html('https://en.wikipedia.org/wiki/Northeastern_United_States',header = 0)
Northeast = cities[1] ##This where the top 10 cities are
Northeast.head(2) ## Take a look
Northeast.columns = ['Rank', 'MetropolitanArea', 'States', 'Population'] ##Change the columns to look better
##Run your query: This gets city name and Population
df = sqldf.execute('SELECT MetropolitanArea, Population FROM Northeast ORDER BY Population desc LIMIT 10;')
##This get the bar plot
df.plot(x='MetropolitanArea',y='Population',kind='bar')
plt.show()
</code></pre>
<p>Thanks!</p>
| 1 | 2016-08-16T14:56:59Z | [
"python",
"plot",
"histogram",
"sqldf"
] |
how do i install pip on python 3.5.2? | 38,905,980 | <p>i downloaded python 3.5.2, i am planning on making a keylogger i need to install pyhook and pywin but i dont know how. every body recommends me to install it by pip but i dont seem to have that module. i open up the idle and import pip, but it gives me the error message saying i dont have that module installed even though people say pip comes with versions 3.4+.. where and how do i install this pip module? i am on a windows ver. 10, 64 bits, python 3.5. any help is aprecciated.. i am new by the way go easy on me..</p>
| 0 | 2016-08-11T20:55:03Z | 38,906,073 | <p>You need to make sure the <code>pip</code> executable is in your <code>%PATH%</code> variable. For me, the <code>pip</code> executable is located in the <code>Scripts</code> directory of my Python installation. That turned out to be <code>C:\Python34\Scripts</code>. So you should find out where this location is for you and then add it to your path variable.</p>
<p><a href="http://stackoverflow.com/questions/25328818/python-2-7-cannot-pip-on-windows-bash-pip-command-not-found">Useful SO answer</a>.</p>
| 1 | 2016-08-11T21:00:32Z | [
"python"
] |
how do i install pip on python 3.5.2? | 38,905,980 | <p>i downloaded python 3.5.2, i am planning on making a keylogger i need to install pyhook and pywin but i dont know how. every body recommends me to install it by pip but i dont seem to have that module. i open up the idle and import pip, but it gives me the error message saying i dont have that module installed even though people say pip comes with versions 3.4+.. where and how do i install this pip module? i am on a windows ver. 10, 64 bits, python 3.5. any help is aprecciated.. i am new by the way go easy on me..</p>
| 0 | 2016-08-11T20:55:03Z | 40,051,061 | <p>I am using Python 3.4 and was getting the error that it didn't exist. I ran this upgrade and I can now import pyperclip which was one of the packages in that module.</p>
<pre><code>C:\Python35-32\Scripts>python -m pip install --upgrade pip
</code></pre>
<p>Output:</p>
<pre><code>Collecting pip
Downloading pip-8.1.2-py2.py3-none-any.whl (1.2MB)
100% |################################| 1.2MB 259kB/s
Installing collected packages: pip
Found existing installation: pip 8.1.1
Uninstalling pip-8.1.1:
Successfully uninstalled pip-8.1.1
Successfully installed pip-8.1.2
</code></pre>
| -1 | 2016-10-14T19:56:55Z | [
"python"
] |
Trouble serving MEDIA and STATIC files of Django in Apache | 38,905,997 | <p>I am using Django 1.8, python 3.4.3 and Apache 2.4.7, I have tried a lot of configurations and I can't see what I'm doing wrong. When I was using debug mode everything but the images was running perfect, and I solved the images issue by using this on urls.py:</p>
<pre><code>url(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
</code></pre>
<p>Later I turned <code>DEBUG=False</code> and the Images stopped working, thats why started to read about serving Static and Media files on Apache. And with the basic configurations everything seemed to be working right, but the images stopped working again, and after trying some configurations My App CSS and JS just stopped working, so I used <code>collectstatic</code> and after that, also the Admin lost its CSS and Js, but when I turn it <code>True</code> again, CSS and JS starts working again on both, App and Admin. </p>
<p>Now on DEBUG=False nothing is working, admin Static, my app Static files, and media files, nothing.</p>
<p>These are my settings:</p>
<pre><code>BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
PROJECT_ROOT = os.path.realpath(os.path.dirname(__file__))
MEDIA_ROOT = PROJECT_ROOT + '/archivos/'
MEDIA_URL = '/archivos/' #put whatever you want that when url is rendered it will be /archivos/imagename.jpg
DEBUG = True #False
ALLOWED_HOSTS = ['localhost','127.0.0.1']
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
</code></pre>
<p>This is my <code>/etc/apache2/sites-enabled/000-default.conf</code> file:</p>
<pre><code><VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
<VirtualHost *:80>
ServerName django.myproject
Alias /static /home/developer/myproject/static
<Directory /home/developer/myproject/static>
Require all granted
</Directory>
Alias /archivos/ /home/developer/myproject/archivos/
<Directory /home/developer/myproject/archivos>
Require all granted
</Directory>
<Directory /home/developer/myproject/myapp>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess myproject python-path=/home/developer/myproject:/home/developer/myproject/projectvenv/lib/python3.4/site-packages
WSGIProcessGroup myproject
WSGIScriptAlias / /home/developer/myproject/projectsystem/wsgi.py
</VirtualHost>
</code></pre>
<p>If I turn Debug=True, CSS and JS of Admin and App starts working again, only the images (media files) don't. </p>
| 0 | 2016-08-11T20:55:57Z | 38,910,630 | <p>Thanks to the help of user @Nostalg.io (find him on the comments of the question) I have found a solution (well, he found it), there was a lot of bad configurations and permission errors.</p>
<p>First of all, I was using 127.0.0.1:8000 to access to my application, and that was a DJango URL, that's why files weren't served, I had to use the apache one. Also, had two virtualhosts on the same file, I deleted the first one. And as I'm using Ubuntu 14.04, had to add a host to <code>/etc/hosts</code> like this: <code>127.0.0.1 django.myproject</code> notice that this is my server name on Virtualhost. Later tried to enter to <code>django.myproject</code> but I had permission error, for that I used this: <code>chown -R www-data:www-data /home/developer/myproject</code></p>
<p>After that, I still had permission error, that was because this wsgi address:</p>
<pre><code> <Directory /home/developer/myproject/myapp>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
</code></pre>
<p>Didn't matched with this one:</p>
<pre><code> WSGIScriptAlias / /home/developer/myproject/projectsystem/wsgi.py
</code></pre>
<p><code>wsgi.py</code> Is in projectystem folder, so I changed that. </p>
<p>After that, I had 500 error, and it was something about /var/www/, apache couldn't acces to log files, so I added this inside my virtualhost configuration:</p>
<pre><code>ErrorLog ${APACHE_LOG_DIR}/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
</code></pre>
<p>And just to try, I changed <code>/var/www</code> owner by following the instructions given here: <a href="http://stackoverflow.com/questions/21797372/django-errno-13-permission-denied-var-www-media-animals-user-uploads">errno 13 permission denied</a></p>
<p>And also my "media" directory which is "archivos" wasn't well adressed, I changed It's address and now everything is working :D.</p>
<p>I really have to thank @Nostalg.io because he sent to me everything that I have used to solve this problem, and also he took the time to read all my error logs.</p>
| 0 | 2016-08-12T05:33:55Z | [
"python",
"django",
"apache"
] |
Tensorflow: How can I restore model for training? (Python) | 38,906,100 | <p>I want to train a cnn for 20000 steps. In the 100th step I want to save all variables and after that I want to re-run my code restoring model and starting from the 100th step. I am trying to make it work with tensorflow documentation: <a href="https://www.tensorflow.org/versions/r0.10/how_tos/variables/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.10/how_tos/variables/index.html</a> but I can't. Any help?</p>
| 0 | 2016-08-11T21:02:30Z | 38,906,748 | <p>Im stuck in something similar but maybe this <a href="https://github.com/tensorflow/tensorflow/issues/3340" rel="nofollow">link</a> can help you. Im new in tensorflow but i think you cant restore and fit without need to training again you model.</p>
| 0 | 2016-08-11T21:57:54Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow"
] |
Tensorflow: How can I restore model for training? (Python) | 38,906,100 | <p>I want to train a cnn for 20000 steps. In the 100th step I want to save all variables and after that I want to re-run my code restoring model and starting from the 100th step. I am trying to make it work with tensorflow documentation: <a href="https://www.tensorflow.org/versions/r0.10/how_tos/variables/index.html" rel="nofollow">https://www.tensorflow.org/versions/r0.10/how_tos/variables/index.html</a> but I can't. Any help?</p>
| 0 | 2016-08-11T21:02:30Z | 38,907,570 | <p>This functionality is still unstable , and the documentation is outdated so is confusing, what worked for me(this was a suggestion of people from google that works directly on tensorflow) was to use the model_dir parameter on the constructor of my models before training, in this you will tell where to store your model, after training you just instantiate again a model using the same model_dir and it will restore the model from the files and checkpoints generated.</p>
| 0 | 2016-08-11T23:23:36Z | [
"python",
"python-2.7",
"machine-learning",
"tensorflow"
] |
How to make a copy of xml tree in python using ElementTree? | 38,906,104 | <p>I'm using xml.etree.ElementTree to parse xml file. I'm parsing xml file in the following way:</p>
<pre><code>import xml.etree.ElementTree as ET
tree = ET.parse(options.xmlfile)
root = tree.getroot()
</code></pre>
<p>This is my xml file:</p>
<pre><code><rootElement>
<member>
<member_id>439854395435</member_id>
</member>
</rootElement>
</code></pre>
<p>Then I'm saving it:</p>
<pre><code>tree.write(options.outcsvfile)
</code></pre>
<p>How can I make a copy of my tree to produce something like this:</p>
<pre><code><rootElement>
<member>
<member_id>439854395435</member_id>
</member>
<member>
<member_id>439854395435</member_id>
</member>
</rootElement>
</code></pre>
| 0 | 2016-08-11T21:03:05Z | 38,913,589 | <p>You can create a copy of the <code>member</code> element and append it. Example:</p>
<pre><code>import xml.etree.ElementTree as ET
import copy
tree = ET.parse("test.xml")
root = tree.getroot()
# Find element to copy
member1 = tree.find("member")
# Create a copy
member2 = copy.deepcopy(member1)
# Append the copy
root.append(member2)
print ET.tostring(root)
</code></pre>
<p>Output:</p>
<pre><code><rootElement>
<member>
<member_id>439854395435</member_id>
</member>
<member>
<member_id>439854395435</member_id>
</member>
</rootElement>
</code></pre>
| 1 | 2016-08-12T08:40:36Z | [
"python",
"xml",
"elementtree"
] |
Bad input shape with sklearn make_scorer needs_proba=True | 38,906,145 | <p>The code below works fine with:<br>
<code>scorer = make_scorer(roc_auc_score)</code></p>
<p>but gives "ValueError: bad input shape" with:<br>
<code>scorer = make_scorer(roc_auc_score, needs_proba = True)</code></p>
<p>The code is:<br>
<code>clf = GaussianNB()
cv = ShuffleSplit(features.shape[0], n_iter = 10, test_size = 0.2, random_state = 0)
scorer = make_scorer(roc_auc_score, needs_proba = True)
score = cross_val_score(clf, features, labels, cv=cv, scoring=scorer)</code></p>
<p>How would I get around this error so the score is based on probability estimates?</p>
 | 0 | 2016-08-11T21:05:52Z | 38,907,322 | <p>If you're using one of the default scoring metrics you don't need to pass a callable to <code>cross_val_score</code>; you can just call it with the name of the metric you're using (the built-in ROC AUC scorer, which uses probability estimates, is named <code>'roc_auc'</code>):</p>
<pre><code>score = cross_val_score(clf, features, labels, cv=cv, scoring='roc_auc')
</code></pre>
| 0 | 2016-08-11T22:56:59Z | [
"python",
"machine-learning",
"scikit-learn"
] |
ValueError when using scipy least_squares | 38,906,193 | <p>I am trying to fit my data to a function. I've been using this sample code as a guide <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#example-of-solving-a-fitting-problem" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#example-of-solving-a-fitting-problem</a>. My code is the following:</p>
<pre><code>from scipy.optimize import least_squares
import numpy as np
import matplotlib.pyplot as plt
def model(x, u):
return -x[0] * np.sqrt((x[1]/u) - 1)
def fun(x, u, y):
return y - model(x, u)
def jac(x, u, y):
J = np.empty((u.size, x.size))
J[:, 0] = np.sqrt((x[1]/u) - 1)
J[:, 1] = x[0] / (2 * u * np.sqrt((x[1]/u) - 1))
return J
u = np.array(T_h2)
y = np.array(lnR2)
x0 = np.array([0.1,0.2])
res = least_squares(fun, x0, jac=jac, bounds=(0, 100), args=(u, y), verbose=1)
print(res.x)
u_test = T_h2
y_test = model(res.x, u_test)
plt.plot(u, y, 'o', markersize=4, label='data')
plt.plot(u_test, y_test, label='fitted model')
plt.xlabel("u")
plt.ylabel("y")
plt.legend(loc='lower right')
plt.show()
</code></pre>
<p>However, when I run my code I get the error "ValueError: Residuals are not finite in the initial point." How would I fix this?</p>
<p>edit:</p>
<pre><code> T_h2 = [234.382, 234.353, 234.435, 234.709, 235.169, 235.803,
236.661, 237.688, 238.697, 239.658, 240.743, 241.813, 242.784, 243.739,
244.791, 245.675, 246.666, 247.615, 248.579, 249.481, 250.336, 251.311,
252.211, 253.058, 253.976, 254.831, 255.738, 256.599, 257.594, 258.482,
259.279, 260.233, 261.112, 262.103, 263.003, 263.9, 264.764, 265.688,
266.629, 267.491, 268.415, 269.285, 270.188, 271.129, 272., 272.935,
273.773, 274.714, 275.581, 276.549, 277.411, 278.334, 279.276, 280.146,
281.006, 281.905, 282.819, 283.803, 284.681, 285.513, 286.49, 287.324,
288.173, 289.105, 290.039, 290.991, 291.795, 292.694, 293.648, 294.522,
295.398, 296.296, 297.25, 298.134, 299.024, 299.912, 300.808, 301.732,
302.635, 303.603, 304.476, 305.35, 306.223, 307.18, 308.091, 308.938,
309.902, 310.792, 311.663, 312.566, 313.412, 314.284, 315.252, 316.126,
317.002, 317.913, 318.81, 319.669, 320.626, 321.523, 322.417, 323.281,
324.245]
lnR2 = [-16.333025681623091, -14.872111670594926, -14.892057965675207, -15.03694367579511, -14.388711659567424, -16.519631908799834, -14.047440985059174, -13.245512823492424, -12.012664970474015, -11.592570515633696, -11.415244487948224, -11.250423587326582, -11.043358068566182, -10.782270761445371, -10.57008012745084, -10.348870666290271, -10.191384942587591, -10.048855650333838, -9.9256240231933077, -9.7926739093465187, -9.6730532317943059, -9.5334101176483124, -9.3859588951369251, -9.2475985534653571, -9.1166053550752206, -9.0088611502583475, -8.8739120056364289, -8.7650034909964933, -8.6823151628382362, -8.6015380878989749, -8.5167589793011746, -8.4314862875533017, -8.364006279047107, -8.3069822249135825, -8.2571447519527315, -8.2111410588354676, -8.1684964170797887, -8.1396219459464945, -8.1149140801354562, -8.0937213212661057, -8.0742199830459658, -8.057615869538207, -8.0494949879212623, -8.0435977497085211, -8.0409171951906373, -8.0461036780308994, -8.0490116406609502, -8.0525194174270123, -8.0653078013251491, -8.0816100755759432, -8.0974305556597912, -8.1152346160995883, -8.1394956678268393, -8.1664274771185354, -8.1980306181968547, -8.2299693351364844, -8.2652082284364567, -8.3050428664294742, -8.3484319768441626, -8.3927260630797864, -8.4461326801347543, -8.5003378964708105, -8.5595337634985853, -8.6098956222034229, -8.6806395376767043, -8.7463523398937717, -8.819120148846844, -8.8938512284941815, -8.975439857789393, -9.0604311437041982, -9.160016977929974, -9.2544272624693313, -9.360134170694149, -9.48357662093877, -9.5792580093353656, -9.7144993201777972, -9.8715380997132574, -10.027248712603699, -10.177417977875871, -10.352374953002723, -10.517136866838991, -10.715774762340427, -10.913431451028842, -11.123817784132052, -11.345932175131191, -11.567233115011238, -11.775872934970939, -11.992878335292444, -12.21474839185972, -12.382545106102548, -12.45951145012326, -12.697558713087753, -12.870960915450144, -13.122795212623657, -13.096364398875277, 
-13.3741438677707, -13.323032960465998, -13.436772613480292, -13.561709556757362, -14.198404910172693, -13.896250284916482, -13.535947150817048, -14.727538560421378]
</code></pre>
| 0 | 2016-08-11T21:08:35Z | 38,926,045 | <p>The problem is:</p>
<ul>
<li>case A: your initial-point</li>
<li>case B: your function <code>model</code></li>
</ul>
<p>Given the start point <code>x0 = np.array([0.1,0.2])</code> (and also <code>u,y</code>), calling <code>fun(x0, u, y)</code>, the following happens:</p>
<pre><code>np.sqrt((x[1]/u) - 1) # part of model(x, u)
= np.sqrt((0.2 / u) - 1)
= np.sqrt(some_near_zero_vector - 1) # because u much bigger than 0.2
= np.sqrt(some_near_minus_one_vector)
= NaN-vector, which is not finite! # because of negative components in sqrt
</code></pre>
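<p>The infeasibility is easy to confirm with the standard library alone: with <code>x[1] = 0.2</code> and every <code>u</code> above 200, the argument of the square root is negative. A quick sketch (the three <code>u</code> values are taken from <code>T_h2</code> in the question; the suggested replacement start point is my assumption, chosen so that <code>x0[1] > max(u)</code>):</p>

```python
import math

x1 = 0.2                                 # second component of the initial guess x0
u_values = [234.382, 240.743, 250.336]   # first few T_h2 values from the question

for u in u_values:
    arg = (x1 / u) - 1.0
    print(u, arg, "negative -> sqrt would be NaN" if arg < 0 else math.sqrt(arg))

# A feasible start point therefore needs x0[1] > max(T_h2), e.g. x0 = [0.1, 400.0]
print("x1 = 400.0 feasible here:", 400.0 > max(u_values))
```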
| 2 | 2016-08-12T20:31:55Z | [
"python",
"numpy",
"scipy",
"least-squares"
] |
Create bigrams using NLTK from a corpus with multiple lines | 38,906,263 | <p>I am trying to generate bigrams from a corpus with multiple lines. Bigrams are created across line breaks which is a problem because each line represents it's own context and is not related to the subsequent line. That results in semantically incorrect bigrams.</p>
<p><strong>The corpus</strong></p>
<pre><code>Reeves Acrylfarbe 75Ml Ultramarin
Acrylfarbe Deep Peach
Reeves Acrylfarbe 75Ml Grasgrün
Acrylfarbe Antique Go
</code></pre>
<p><strong>Example for problematic bigrams</strong></p>
<blockquote>
<p>'Ultramarin Acrylfarbe', 'Grasgrün Acrylfarbe'</p>
</blockquote>
<p><strong>This is the code I am using:</strong></p>
<pre><code>finder = BigramCollocationFinder.from_words(word_tokenize(corpus))
bigrams = finder.nbest(bigram_measures.likelihood_ratio, 100)
</code></pre>
<p>How can I omit bigrams that span over two lines?</p>
 | 0 | 2016-08-11T21:14:25Z | 38,906,307 | <p>I would use <code>split</code> on '\n' to get a list of lines, process each line separately, and then combine the lists of bigrams.</p>
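<p>The effect of processing each line separately can be demonstrated without NLTK at all; a stdlib-only sketch where plain <code>str.split</code> stands in for <code>word_tokenize</code>:</p>

```python
corpus = """Reeves Acrylfarbe 75Ml Ultramarin
Acrylfarbe Deep Peach
Reeves Acrylfarbe 75Ml Grasgrün
Acrylfarbe Antique Go"""

def line_bigrams(text):
    """Collect bigrams within each line only; nothing spans a line break."""
    result = []
    for line in text.split("\n"):
        tokens = line.split()
        result.extend(zip(tokens, tokens[1:]))
    return result

pairs = line_bigrams(corpus)
print(("Ultramarin", "Acrylfarbe") in pairs)  # False: no cross-line bigram
print(("Reeves", "Acrylfarbe") in pairs)      # True: within-line bigram kept
```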
| 0 | 2016-08-11T21:18:22Z | [
"python",
"nltk"
] |
Create bigrams using NLTK from a corpus with multiple lines | 38,906,263 | <p>I am trying to generate bigrams from a corpus with multiple lines. Bigrams are created across line breaks which is a problem because each line represents it's own context and is not related to the subsequent line. That results in semantically incorrect bigrams.</p>
<p><strong>The corpus</strong></p>
<pre><code>Reeves Acrylfarbe 75Ml Ultramarin
Acrylfarbe Deep Peach
Reeves Acrylfarbe 75Ml Grasgrün
Acrylfarbe Antique Go
</code></pre>
<p><strong>Example for problematic bigrams</strong></p>
<blockquote>
<p>'Ultramarin Acrylfarbe', 'Grasgrün Acrylfarbe'</p>
</blockquote>
<p><strong>This is the code I am using:</strong></p>
<pre><code>finder = BigramCollocationFinder.from_words(word_tokenize(corpus))
bigrams = finder.nbest(bigram_measures.likelihood_ratio, 100)
</code></pre>
<p>How can I omit bigrams that span over two lines?</p>
| 0 | 2016-08-11T21:14:25Z | 38,906,598 | <p>I believe something like this should work:</p>
<pre><code>finder = nltk.BigramCollocationFinder.from_documents([
nltk.word_tokenize(x) for x in corpus.split('\n')])
bigrams = finder.nbest(bigram_measures.likelihood_ratio, 100)
</code></pre>
| 0 | 2016-08-11T21:44:27Z | [
"python",
"nltk"
] |
+= operation with non existing dataframes | 38,906,297 | <p><strong>df_pairs:</strong></p>
<pre><code>city1 city2
0 sfo yyz
1 sfo yvr
2 sfo dfw
3 sfo ewr
</code></pre>
<p><strong>output of df_pairs.to_dict('records'):</strong></p>
<pre><code>[{'city1': 'sfo', 'city2': 'yyz'},
{'city1': 'sfo', 'city2': 'yvr'},
{'city1': 'sfo', 'city2': 'dfw'},
{'city1': 'sfo', 'city2': 'ewr'}]
</code></pre>
<p><strong>data_df:</strong></p>
<pre><code> city 2016-02-02 00:00:00 2016-02-05 00:00:00 2016-02-01 00:00:00 2016-02-04 00:00:00 2016-02-03 00:00:00
0 sfo -33.63 -62.34 -35.70 -31.84 -33.87
1 yyz -24.31 -51.17 -22.07 -31.00 -23.00
2 yvr -24.31 -51.17 -22.07 -31.00 -23.00
3 dfw -32.17 -43.77 -34.84 0.27 -11.49
4 ewr -28.87 -59.66 -28.40 -32.94 -29.06
</code></pre>
<p><strong>output of data_df.to_dict('records')</strong></p>
<pre><code>[{'city': 'sfo',
Timestamp('2016-02-02 00:00:00'): -33.63,
Timestamp('2016-02-05 00:00:00'): -62.34,
Timestamp('2016-02-01 00:00:00'): -35.7,
Timestamp('2016-02-04 00:00:00'): -31.84,
Timestamp('2016-02-03 00:00:00'): -33.87},
{'city': 'yyz',
Timestamp('2016-02-02 00:00:00'): -24.31,
Timestamp('2016-02-05 00:00:00'): -51.17,
Timestamp('2016-02-01 00:00:00'): -22.07,
Timestamp('2016-02-04 00:00:00'): -31.0,
Timestamp('2016-02-03 00:00:00'): -23.0},
{'city': 'yvr',
Timestamp('2016-02-02 00:00:00'): -24.31,
Timestamp('2016-02-05 00:00:00'): -51.17,
Timestamp('2016-02-01 00:00:00'): -22.07,
Timestamp('2016-02-04 00:00:00'): -31.0,
Timestamp('2016-02-03 00:00:00'): -23.0},
{'city': 'dfw',
Timestamp('2016-02-02 00:00:00'): -32.17,
Timestamp('2016-02-05 00:00:00'): -43.77,
Timestamp('2016-02-01 00:00:00'): -34.84,
Timestamp('2016-02-04 00:00:00'): 0.27,
Timestamp('2016-02-03 00:00:00'): -11.49},
{'city': 'ewr',
Timestamp('2016-02-02 00:00:00'): -28.87,
Timestamp('2016-02-05 00:00:00'): -59.66,
Timestamp('2016-02-01 00:00:00'): -28.4,
Timestamp('2016-02-04 00:00:00'): -32.94,
Timestamp('2016-02-03 00:00:00'): -29.06}]
</code></pre>
<p>So I have a df named <code>df_pairs</code>. For every pair in <code>df_pairs</code>, I want to look up city1 and city2 in <code>data_df</code>, subtract one from the other, take the sign of the difference time series, separate positive and negative sign values, separate positive and negative difference values and calculate sums on each across the columns of <code>data_df</code>.</p>
<pre><code>diff_df_sign_pos = diff_df_sign_neg = diff_df_pos = diff_df_neg = 0
for i in range(0,len(data_df.columns)):
a = pd.merge(df_pairs[['city1','city2']], data_df.ix[:, [i]], left_on='city1', right_index=True, how='left').set_index(['city1', 'city2'])
b = pd.merge(df_pairs[['city1','city2']], data_df.ix[:, [i]], left_on='city2', right_index=True, how='left').set_index(['city1', 'city2'])
diff_df = b - a
diff_df_sign = np.sign(diff_df)
diff_df_sign_pos+= diff_df_sign.clip(lower=0)
diff_df_sign_neg+= diff_df_sign.clip(upper=0)
diff_df_pos+= diff_df.clip(lower=0)
diff_df_neg+= diff_df.clip(upper=0)
</code></pre>
<p>If you run the above code, you will see that the final values for <code>diff_df_sign_pos</code>, <code>diff_df_sign_neg</code>, <code>diff_df_pos</code> and <code>diff_df_neg</code> are NaN's. </p>
<p>For example, the end result for <code>diff_df_sign_pos</code> should look like:</p>
<pre><code> 2016-02-03 00:00:00
city1 city2
sfo yyz 5.0
yvr 5.0
dfw 5.0
ewr 4.0
</code></pre>
<p>This tells us that all 5 of the differences between yyz, yvr, dfw and sfo were positive. </p>
| 0 | 2016-08-11T21:17:27Z | 38,907,008 | <p>Why don't you simply do this:</p>
<pre><code>df_city1 = pd.merge(df_pairs['city1'], data_df, left_on='city1', right_on='city', how='left')
df_city2 = pd.merge(df_pairs['city2'], data_df, left_on='city2', right_on='city', how='left')
diff = df_city2.subtract(df_city1, fill_value=0)
pos_sum = diff[diff >= 0].sum(axis=1)
neg_sum = diff[diff < 0].sum(axis=1)
</code></pre>
<p>Instead of looping over all your columns, merging 2*(number of columns) times, not to mention indexing, then that complicated bit with <code>np.sign</code> and <code>.clip</code>... Your <code>df_pairs</code> and <code>data_df</code> have a one-to-one correspondence, right?</p>
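Independent of the pandas machinery, the per-pair arithmetic can be sketched in plain Python, using the sample numbers from the question's <code>data_df</code> (only the <code>sfo</code>/<code>yyz</code> pair shown):

```python
# Plain-Python sketch of the per-pair computation (numbers taken from the
# question's data_df): for each (city1, city2) pair take the column-wise
# differences city2 - city1, then count the positive signs and sum the
# positive and negative parts separately.
data = {
    "sfo": [-33.63, -62.34, -35.70, -31.84, -33.87],
    "yyz": [-24.31, -51.17, -22.07, -31.00, -23.00],
}
pairs = [("sfo", "yyz")]

for c1, c2 in pairs:
    diffs = [b - a for a, b in zip(data[c1], data[c2])]
    sign_pos = sum(1 for d in diffs if d > 0)   # count of positive differences
    pos_sum = sum(d for d in diffs if d > 0)    # sum of positive differences
    neg_sum = sum(d for d in diffs if d < 0)    # sum of negative differences
    print(c1, c2, sign_pos, round(pos_sum, 2), round(neg_sum, 2))  # → sfo yyz 5 45.83 0
```

The count of 5 and the positive sum of 45.83 agree with the expected output in the question, which is a quick way to sanity-check the vectorized pandas version.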
| 1 | 2016-08-11T22:22:04Z | [
"python",
"pandas"
] |
+= operation with non existing dataframes | 38,906,297 | <p><strong>df_pairs:</strong></p>
<pre><code>city1 city2
0 sfo yyz
1 sfo yvr
2 sfo dfw
3 sfo ewr
</code></pre>
<p><strong>output of df_pairs.to_dict('records'):</strong></p>
<pre><code>[{'city1': 'sfo', 'city2': 'yyz'},
{'city1': 'sfo', 'city2': 'yvr'},
{'city1': 'sfo', 'city2': 'dfw'},
{'city1': 'sfo', 'city2': 'ewr'}]
</code></pre>
<p><strong>data_df:</strong></p>
<pre><code> city 2016-02-02 00:00:00 2016-02-05 00:00:00 2016-02-01 00:00:00 2016-02-04 00:00:00 2016-02-03 00:00:00
0 sfo -33.63 -62.34 -35.70 -31.84 -33.87
1 yyz -24.31 -51.17 -22.07 -31.00 -23.00
2 yvr -24.31 -51.17 -22.07 -31.00 -23.00
3 dfw -32.17 -43.77 -34.84 0.27 -11.49
4 ewr -28.87 -59.66 -28.40 -32.94 -29.06
</code></pre>
<p><strong>output of data_df.to_dict('records')</strong></p>
<pre><code>[{'city': 'sfo',
Timestamp('2016-02-02 00:00:00'): -33.63,
Timestamp('2016-02-05 00:00:00'): -62.34,
Timestamp('2016-02-01 00:00:00'): -35.7,
Timestamp('2016-02-04 00:00:00'): -31.84,
Timestamp('2016-02-03 00:00:00'): -33.87},
{'city': 'yyz',
Timestamp('2016-02-02 00:00:00'): -24.31,
Timestamp('2016-02-05 00:00:00'): -51.17,
Timestamp('2016-02-01 00:00:00'): -22.07,
Timestamp('2016-02-04 00:00:00'): -31.0,
Timestamp('2016-02-03 00:00:00'): -23.0},
{'city': 'yvr',
Timestamp('2016-02-02 00:00:00'): -24.31,
Timestamp('2016-02-05 00:00:00'): -51.17,
Timestamp('2016-02-01 00:00:00'): -22.07,
Timestamp('2016-02-04 00:00:00'): -31.0,
Timestamp('2016-02-03 00:00:00'): -23.0},
{'city': 'dfw',
Timestamp('2016-02-02 00:00:00'): -32.17,
Timestamp('2016-02-05 00:00:00'): -43.77,
Timestamp('2016-02-01 00:00:00'): -34.84,
Timestamp('2016-02-04 00:00:00'): 0.27,
Timestamp('2016-02-03 00:00:00'): -11.49},
{'city': 'ewr',
Timestamp('2016-02-02 00:00:00'): -28.87,
Timestamp('2016-02-05 00:00:00'): -59.66,
Timestamp('2016-02-01 00:00:00'): -28.4,
Timestamp('2016-02-04 00:00:00'): -32.94,
Timestamp('2016-02-03 00:00:00'): -29.06}]
</code></pre>
<p>So I have a df named <code>df_pairs</code>. For every pair in <code>df_pairs</code>, I want to look up city1 and city2 in <code>data_df</code>, subtract one from the other, take the sign of the difference time series, separate positive and negative sign values, separate positive and negative difference values and calculate sums on each across the columns of <code>data_df</code>.</p>
<pre><code>diff_df_sign_pos = diff_df_sign_neg = diff_df_pos = diff_df_neg = 0
for i in range(0,len(data_df.columns)):
a = pd.merge(df_pairs[['city1','city2']], data_df.ix[:, [i]], left_on='city1', right_index=True, how='left').set_index(['city1', 'city2'])
b = pd.merge(df_pairs[['city1','city2']], data_df.ix[:, [i]], left_on='city2', right_index=True, how='left').set_index(['city1', 'city2'])
diff_df = b - a
diff_df_sign = np.sign(diff_df)
diff_df_sign_pos+= diff_df_sign.clip(lower=0)
diff_df_sign_neg+= diff_df_sign.clip(upper=0)
diff_df_pos+= diff_df.clip(lower=0)
diff_df_neg+= diff_df.clip(upper=0)
</code></pre>
<p>If you run the above code, you will see that the final values for <code>diff_df_sign_pos</code>, <code>diff_df_sign_neg</code>, <code>diff_df_pos</code> and <code>diff_df_neg</code> are NaN's. </p>
<p>For example, the end result for <code>diff_df_sign_pos</code> should look like:</p>
<pre><code> 2016-02-03 00:00:00
city1 city2
sfo yyz 5.0
yvr 5.0
dfw 5.0
ewr 4.0
</code></pre>
<p>This tells us that all 5 of the differences between yyz, yvr, dfw and sfo were positive. </p>
| 0 | 2016-08-11T21:17:27Z | 38,907,402 | <p>Give this a run:</p>
<p>Take out the initial variables, and get rid of the for loop.</p>
<pre><code>a = pd.merge(df_pairs, data_df, left_on='city1', right_on='city', how='left').set_index(['city1', 'city2'])
b = pd.merge(df_pairs, data_df, left_on='city2', right_on='city', how='left').set_index(['city1', 'city2'])
del a['city']
del b['city']
</code></pre>
<p>Now do each calculation once and sum across each row (axis=1)</p>
<pre><code>diff_df = b - a
diff_df_sign = np.sign(diff_df)
diff_df_sign_pos = diff_df_sign.clip(lower=0).sum(axis=1)
diff_df_sign_neg = diff_df_sign.clip(upper=0).sum(axis=1)
diff_df_pos = diff_df.clip(lower=0).sum(axis=1)
diff_df_neg = diff_df.clip(upper=0).sum(axis=1)
</code></pre>
<p>Does this look like this output you want?</p>
<pre><code>city1 city2
sfo yyz 5
yvr 5
dfw 5
ewr 4
dtype: float64
city1 city2
sfo yyz 0
yvr 0
dfw 0
ewr -1
dtype: float64
city1 city2
sfo yyz 45.83
yvr 45.83
dfw 75.38
ewr 19.55
dtype: float64
city1 city2
sfo yyz 0.0
yvr 0.0
dfw 0.0
ewr -1.1
dtype: float64
</code></pre>
| 0 | 2016-08-11T23:04:42Z | [
"python",
"pandas"
] |
Download and save the file in python | 38,906,310 | <p>I'm downloading the file from my server using the <code>urllib2.urlopen</code> method. I want to know how to open the file and save it to a hard disk location in '/userdata/addon_data/script.tvguide'.</p>
<p>Here is the code:</p>
<pre><code>def All_Channels(self):
global __killthread__
self.getControl(343).setLabel("0%")
try:
# DOWNLOAD THE XML SOURCE HERE
url = ADDON.getSetting('allchannel.url')
data = ''
response = urllib2.urlopen("http://www.example.com/mydb.db")
db_file = response.read()
directory_path = os.path.join('special://userdata/addon_data/script.tvguide', 'mydb.db')
except:
pass
</code></pre>
<p>I just want to download the file <code>mydb.db</code> to store it in my hard disk so I could then open the database.</p>
<p>I have no idea how to store the file in a hard disk after I have downloading them, I have been searching for some information but I couldn't find the answer.</p>
<p>If you can show me an example how I can save the file in a hard drive location after when I have download the file I would be very grateful.</p>
| 0 | 2016-08-11T21:18:29Z | 38,906,362 | <p>For python 2.7 it is <code>urllib.urlretrieve(url,filename)</code> where <code>filename</code> is the place you want to store it.</p>
| 0 | 2016-08-11T21:22:54Z | [
"python",
"python-2.7"
] |
converting/analyzing .csv to xlsx file | 38,906,323 | <p>I am currently attempting to write a script that converts a .csv file to a .xlsx file, then does some data analysis on the new file. I am running into an error when trying to copy the new file I make.</p>
<p>Here is the error:</p>
<pre><code> File "C:/Users/mmunoz/Desktop/Excel Analysis/danielaScript1.0.py", line 40, in <module>
workbookCopy = xlutils.copy(workbook)
TypeError: 'module' object is not callable
</code></pre>
<p><strong>Here is my original code</strong></p>
<pre><code>import os, csv
import glob
from openpyxl.cell import get_column_letter
from xlsxwriter.workbook import Workbook
import xlrd
import statistics as st
import xlutils.copy
position = []
load = []
positionMM = []
force = []
csvfile = input('Please input filename: ')
for csvfile in glob.glob(os.path.join('.', '*.csv')):
workbook = Workbook(csvfile + '.xlsx')
worksheet = workbook.add_worksheet()
with open(csvfile, 'r') as f:
reader = csv.reader(f)
for r, row in enumerate(reader):
for c, col in enumerate(row):
columb_letter = get_column_letter((c+1))
s = col
try:
s= float(s)
except ValueError:
pass
worksheet.write(r, c, s)
#%%
#workbook1 = xlrd.open_workbook(workbook)
workbookCopy = xlutils.copy(workbook)
sheet = workbook.sheet_by_index(0)
numRows = sheet.nrows
numCols = sheet.ncols
for row in range(2, numRows):
position.append(sheet.cell_value(row,2)) #appends values in column to position array
load.append(sheet.cell_value(row,3))#appends values in column to load array
</code></pre>
| 0 | 2016-08-11T21:19:32Z | 38,906,379 | <p>xlutils.copy is a module.</p>
<p>Try this:</p>
<pre><code>from xlutils.copy import copy
</code></pre>
<p>then</p>
<pre><code>workbookCopy = copy(workbook)
</code></pre>
<p>More info: <a href="http://xlutils.readthedocs.io/en/latest/copy.html" rel="nofollow">http://xlutils.readthedocs.io/en/latest/copy.html</a></p>
| 0 | 2016-08-11T21:24:13Z | [
"python",
"excel",
"xlrd"
] |
Jupyter Notebook: How to change spacing between Text objects inside HBox? | 38,906,333 | <p>Say I have the following code in a cell in the Jupyter Notebook:</p>
<pre><code>from ipywidgets.widgets import HBox, Text
from IPython.display import display
text1 = Text(description = "text1", width = 100)
text2 = Text(description = "text2", width = 100)
text3 = Text(description = "text3", width = 100)
text4 = Text(description = "text4", width = 100)
text5 = Text(description = "text5", width = 100)
display(HBox((text1, text2, text3, text4, text5)))
</code></pre>
<p>It produces the following output:</p>
<p><a href="http://i.stack.imgur.com/lxMKr.png" rel="nofollow"><img src="http://i.stack.imgur.com/lxMKr.png" alt="enter image description here"></a></p>
<p>As you can see, there is a ton of unnecessary spacing between the Text objects. How do I change the spacing so that they stay within the cell?</p>
| 2 | 2016-08-11T21:20:24Z | 38,908,772 | <p>It seems that the spacing is due to the width of the <code>widget-text</code> class being specified with a width of 350 pixels. Try importing the <code>HTML</code> module (i.e. <code>from IPython.display import display, HTML</code>) and insert <code>HTML('<style> .widget-text { width: auto; } </style>')</code> into the cell.</p>
<p>You should end up with something like this:</p>
<pre><code>from ipywidgets.widgets import HBox, Text
from IPython.display import display, HTML
text1 = Text(description = "text1", width = 100)
text2 = Text(description = "text2", width = 100)
text3 = Text(description = "text3", width = 100)
text4 = Text(description = "text4", width = 100)
text5 = Text(description = "text5", width = 100)
display(HBox((text1, text2, text3, text4, text5)))
HTML('<style> .widget-text { width: auto; } </style>')
</code></pre>
<p><a href="http://i.stack.imgur.com/htO6i.png" rel="nofollow"><img src="http://i.stack.imgur.com/htO6i.png" alt="widgets styled with CSS"></a></p>
| 1 | 2016-08-12T02:08:07Z | [
"python",
"jupyter",
"ipywidgets"
] |
Jupyter Notebook: How to change spacing between Text objects inside HBox? | 38,906,333 | <p>Say I have the following code in a cell in the Jupyter Notebook:</p>
<pre><code>from ipywidgets.widgets import HBox, Text
from IPython.display import display
text1 = Text(description = "text1", width = 100)
text2 = Text(description = "text2", width = 100)
text3 = Text(description = "text3", width = 100)
text4 = Text(description = "text4", width = 100)
text5 = Text(description = "text5", width = 100)
display(HBox((text1, text2, text3, text4, text5)))
</code></pre>
<p>It produces the following output:</p>
<p><a href="http://i.stack.imgur.com/lxMKr.png" rel="nofollow"><img src="http://i.stack.imgur.com/lxMKr.png" alt="enter image description here"></a></p>
<p>As you can see, there is a ton of unnecessary spacing between the Text objects. How do I change the spacing so that they stay within the cell?</p>
| 2 | 2016-08-11T21:20:24Z | 38,912,027 | <p>Building on Zarak's answer, you can do something like this</p>
<pre><code>from ipywidgets.widgets import HBox, Text
from IPython.display import display, HTML
text1 = Text(description = "text1", width = 100)
text2 = Text(description = "text2", width = 100)
text3 = Text(description = "text3", width = 100)
text4 = Text(description = "text4", width = 100)
text5 = Text(description = "text5", width = 100)
display(HBox((text1, text2, text3, text4, text5)))
display(HTML('<style> .widget-text { width: auto; } </style>'))
print('hi')
</code></pre>
<p>This way, you don't need to have the <code>HTML('<style> .widget-text { width: auto; } </style>')</code> statement as the last statement.</p>
| 2 | 2016-08-12T07:14:44Z | [
"python",
"jupyter",
"ipywidgets"
] |
newline feature of print command in Python | 38,906,355 | <p>I am teaching myself how to program in Python, and I have a question about "print".</p>
<p>I know that the print statement has a newline "\n" feature embedded into it, but I noticed that whenever my code ends with a print statement, for example <code>print("This is the end")</code>, it does not create a newline.</p>
<p>Is it a feature of Python that the newline of the very <strong>last</strong> print statement is suppressed?</p>
<p>p.s. I am using IDLE</p>
| 0 | 2016-08-11T21:22:02Z | 38,906,461 | <p>No, Python has no such feature. You might be confusing "not printing a newline" with "not ending with a blank space".</p>
<p>For example, in bash, it might look like this:</p>
<pre><code>$ cat endline.py
print('line 1')
print('line 2')
$ cat noendline.py
print('line 1')
print('line 2', end='')
$ python3 endline.py
line 1
line 2
$ python3 noendline.py
line 1
line 2$ echo "It did not end in a newline"
It did not end in a newline
$
</code></pre>
<p>Notice how <code>noendline.py</code> exited without any newline and the prompt immediately continued.</p>
<p>This applies in the live interpreter as well:</p>
<pre><code>>>> print('Hello world!')
Hello world!
>>> print('Hello world!', end='')
Hello world!>>> # See?
</code></pre>
<p>Windows (<code>cmd</code>) always appends the end of a program with a newline:</p>
<pre><code>C:\> python endline.py
line 1
line 2
C:\> python noendline.py
line 1
line 2
C:\>
</code></pre>
| 4 | 2016-08-11T21:30:29Z | [
"python"
] |
newline feature of print command in Python | 38,906,355 | <p>I am teaching myself how to program in Python, and I have a question about "print".</p>
<p>I know that the print statement has a newline "\n" feature embedded into it, but I noticed that whenever my code ends with a print statement, for example <code>print("This is the end")</code>, it does not create a newline.</p>
<p>Is it a feature of Python that the newline of the very <strong>last</strong> print statement is suppressed?</p>
<p>p.s. I am using IDLE</p>
| 0 | 2016-08-11T21:22:02Z | 38,906,469 | <p>In python, a <code>\n</code> is added by <a href="https://www.python.org/dev/peps/pep-0278/" rel="nofollow">default</a>. You can specify what end argument you want concatenated to the end of your print statement in <strong><a href="https://docs.python.org/3/whatsnew/3.0.html#print-is-a-function" rel="nofollow">Python 3</a></strong> with the following (<em>output copied from sublime 3 to demonstrate the newline</em>):</p>
<pre><code>print('Hello World', end='Arg goes here.')
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Hello WorldArg goes here.[Finished in 0.0s] # Notice no newline.
</code></pre>
<p>Other wise, a <code>\n</code> is default if no value is passed to the keyword argument <code>end</code>.</p>
<pre><code>print('Hello World')
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Hello World
[Finished in 0.0s] # Newline by default.
</code></pre>
<p>Things work slightly differently in <strong>python 2</strong> as <code>print</code> is not a function (<code>\n</code> <em>is default</em>), however you can emulate this behavior in the following way:</p>
<pre><code>from __future__ import print_function
print('Hello World', end='\n')
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Hello World
[Finished in 0.0s] # Newline specified.
</code></pre>
<p>Here are a few examples in an interactive shell running in Terminal to demonstrate.</p>
<p><strong>In shell (Python 2):</strong></p>
<pre><code>>>> from __future__ import print_function
>>> print('Hello World', end='\n')
Hello World
>>> print('Hello World', end='\n\n')
Hello World
>>> print('Hello World', end='')
Hello World>>>
</code></pre>
| 1 | 2016-08-11T21:31:21Z | [
"python"
] |
How to clear form fields in class-based views after a submit in Django | 38,906,421 | <p>I use <strong>class-based views for password reset</strong> in my Django project. My form has only one field where users can write their email address or username. After a successful submit, users see a success message on the same page. My problem is that after the submit I see an empty field, but at the same time I can still send the data again by clicking the button. <strong>How do I clear the form field after a successful submit?</strong> I am a little bit confused.</p>
<p><strong>views.py</strong></p>
<pre><code>class PasswordResetRequestView(FormView):
template_name = "registration/password_reset.html"
success_url = '/account/password_reset'
form_class = PasswordResetRequestForm
@staticmethod
def validate_email_address(email):
try:
validate_email(email)
return True
except ValidationError:
return False
def post(self, request, *args, **kwargs):
global data
form = self.form_class(request.POST)
try:
if form.is_valid():
data = form.cleaned_data["email_or_username"]
if self.validate_email_address(data) is True:
associated_users = User.objects.filter(Q(email=data) | Q(username=data))
if associated_users.exists():
for user in associated_users:
c = {
'email': user.email,
'domain': request.META['HTTP_HOST'],
'site_name': 'your site',
'uid': urlsafe_base64_encode(force_bytes(user.pk)),
'user': user,
'token': default_token_generator.make_token(user),
'protocol': 'http',
}
subject_template_name = 'registration/password_reset_subject.txt'
email_template_name = 'registration/password_reset_email.html'
subject = loader.render_to_string(subject_template_name, c)
subject = ''.join(subject.splitlines())
email = loader.render_to_string(email_template_name, c)
send_mail(subject, email, DEFAULT_FROM_EMAIL, [user.email], fail_silently=False)
result = self.form_valid(form)
messages.success(request, 'An email has been sent to ' +
data + ". Please check its inbox to continue reseting password.")
return result
result = self.form_invalid(form)
messages.error(request, 'No user is associated with this email address')
return result
else:
associated_users = User.objects.filter(username=data)
if associated_users.exists():
for user in associated_users:
c = {
'email': user.email,
'domain': 'example.info',
'site_name': 'example',
'uid': urlsafe_base64_encode(force_bytes(user.pk)),
'user': user,
'token': default_token_generator.make_token(user),
'protocol': 'http',
}
subject_template_name = 'registration/password_reset_subject.txt'
email_template_name = 'registration/password_reset_email.html'
subject = loader.render_to_string(subject_template_name, c)
# Email subject *must not* contain newlines
subject = ''.join(subject.splitlines())
email = loader.render_to_string(email_template_name, c)
send_mail(subject, email, DEFAULT_FROM_EMAIL, [user.email], fail_silently=False)
result = self.form_valid(form)
messages.success(request, 'Email has been sent to ' +
data + "'s email address. Please check its inbox to continue reseting password.")
return result
result = self.form_invalid(form)
messages.error(request, _('This username does not exist in the system.'))
return result
except Exception as e:
print(e)
return self.form_invalid(form)
</code></pre>
| 0 | 2016-08-11T21:27:22Z | 38,912,189 | <p>First, never ever use global variables. </p>
<p>Second, you can put your post logic into a <code>form.save()</code> method. IMO it is much more appropriate there, and it's close to what <code>Django</code> does inside its <a href="https://github.com/django/django/blob/master/django/contrib/auth/forms.py#L228" rel="nofollow">PasswordResetForm</a>.</p>
<p>This is what your <code>Form</code> class can look like:</p>
<pre><code>class PasswordResetRequestForm(forms.Form):
#We are not using EmailField on purpose
#because you want to treat it
#as a username if its not an email
email = forms.CharField(required=True)
def __init__(self, *args, **kwargs):
#For sending emails we are gonna need the request.
request = kwargs.pop('request')
super(PasswordResetRequestForm, self).__init__(*args, **kwargs)
def clean_email(self):
#You can add form error here if no user exists
email = self.cleaned_data['email']
users = User.objects.filter(Q(email=email)|Q(username=email))
if not users.exists():
self.add_error('email', 'No associated account found')
def send_email(self, user):
c = {
'email': user.email,
'domain': self.request.META['HTTP_HOST'],
'site_name': 'your site',
'uid': urlsafe_base64_encode(force_bytes(user.pk)),
'user': user,
'token': default_token_generator.make_token(user),
'protocol': 'http',
}
subject_template_name = 'registration/password_reset_subject.txt'
email_template_name = 'registration/password_reset_email.html'
subject = loader.render_to_string(subject_template_name, c)
subject = ''.join(subject.splitlines())
email = loader.render_to_string(email_template_name, c)
send_mail(subject, email, DEFAULT_FROM_EMAIL, [user.email], fail_silently=False)
def save(self):
email = self.cleaned_data['email']
users = User.objects.filter(Q(email=email)|Q(username=email))
for user in users:
self.send_email(user)
messages.success(self.request, """An email has been sent to %s.
Please check its inbox to continue reseting password.""" % email
)
</code></pre>
<p>And now in the <code>View</code> you just need to send the <code>request</code> as a form kwarg and call <code>form.save()</code> when <code>form.is_valid()</code>.</p>
<pre><code>class PasswordResetRequestView(FormView):
template_name = "registration/password_reset.html"
success_url = '/account/password_reset'
form_class = PasswordResetRequestForm
def get_form_kwargs(self, *args, **kwargs):
kwargs = super(PasswordResetRequestView, self).get_form_kwargs(*args, **kwargs)
kwargs['request'] = self.request
return kwargs
def form_valid(self, *args, **kwargs):
form = kwargs['form']
form.save()
return super(PasswordResetRequestView, self).form_valid(*args, **kwargs)
</code></pre>
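The "validate, then save" split the answer recommends can be sketched without Django at all (all class and function names below are hypothetical stand-ins, not Django APIs); the point is that the view only orchestrates, the form owns the side effects, and re-rendering the page builds a fresh, empty form:

```python
# Django-free sketch (hypothetical names) of the "validate, then save"
# split: the view only orchestrates, the form owns the side effects.
class PasswordResetForm:
    def __init__(self, data):
        self.data = data
        self.errors = []

    def is_valid(self):
        if not self.data.get("email_or_username"):
            self.errors.append("This field is required.")
        return not self.errors

    def save(self):
        # side effects (looking up users, sending the email) live here
        return "reset mail queued for " + self.data["email_or_username"]

def post_view(request_data):
    form = PasswordResetForm(request_data)
    if form.is_valid():
        return form.save()
    return form.errors

print(post_view({"email_or_username": "alice"}))  # → reset mail queued for alice
print(post_view({}))                              # → ['This field is required.']
```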
| 1 | 2016-08-12T07:23:06Z | [
"python",
"django",
"python-3.x",
"django-views",
"reset-password"
] |
Numerous XML documents: how can I capture occurrence and max length of text | 38,906,437 | <p>I have a large number of XML documents I need to loop over and capture how many times a child node occurs and the max length of the field. I am able to properly parse the XML, and can capture the count and the length of the field. What I am unsure of is what datatype and method would be most effective to capture the analysis of the XML documents. </p>
<p>The output I would like to produce is (not in this format - just the data):</p>
<pre><code>field1: count = 2, maxlength = 4
field2: count = 2, maxlength = 1
</code></pre>
<p>Here is an example:</p>
<pre><code>import xml.etree.ElementTree as ET
#sample data:
xml = ['<data><field1>100</field1><field2>1</field2></data>', '<data><field1>1000</field1><field2>2</field2></data>']
#loop to capture fields and length
for item in xml:
x = ET.fromstring(item)
for child in x:
fieldname = child.tag
fieldlength = len(child.text)
print(fieldname, fieldlength)
</code></pre>
<p>I can count the occurrence using this:</p>
<pre><code>fields = {}
for item in xml:
x = ET.fromstring(item)
for child in x:
if child.tag in fields:
fields[child.tag] += 1
else:
fields[child.tag] = 1
</code></pre>
<p>How would I go about capturing the total number off occurrences of a field and the max length <code>(if fieldlength > maxlength then fieldlength else maxlength)</code>?</p>
| 0 | 2016-08-11T21:28:53Z | 38,906,779 | <p>You should use a defaultdict that if the element does not exist resolves to the default value you specify (see the example below). On each iteration then all you'll have to do is:</p>
<pre><code>max_length[child.tag] = max(max_length[child.tag], len(child.text))
</code></pre>
<p>Full example:</p>
<pre><code># Necessary import
from collections import defaultdict
# Create the default dictionary. The argument is a function that generates
# the default value (that is returned if element is not set). int() returns
# zero, thus we will use that, but you could as well have said
# defaultdict(lambda : 0)
max_length = defaultdict(int)
for item in xml:
x = ET.fromstring(item)
for child in x:
# Assign max. You are fine if max_length[child.tag] do not exist yet
# because defaultdict will resolve it to 0.
max_length[child.tag] = max(max_length[child.tag], len(child.text))
</code></pre>
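Putting both requirements together — occurrence count and max length in one pass — a combined sketch using the sample documents from the question:

```python
# Combined sketch: one pass tracks both the occurrence count and the
# maximum text length per tag, using the question's sample documents.
import xml.etree.ElementTree as ET
from collections import defaultdict

xml_docs = ['<data><field1>100</field1><field2>1</field2></data>',
            '<data><field1>1000</field1><field2>2</field2></data>']

counts = defaultdict(int)
max_length = defaultdict(int)
for item in xml_docs:
    for child in ET.fromstring(item):
        counts[child.tag] += 1
        max_length[child.tag] = max(max_length[child.tag], len(child.text))

for tag in sorted(counts):
    print(f"{tag}: count = {counts[tag]}, maxlength = {max_length[tag]}")
# → field1: count = 2, maxlength = 4
#   field2: count = 2, maxlength = 1
```

This produces exactly the output format the question asks for.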
| 1 | 2016-08-11T22:00:19Z | [
"python",
"xml",
"python-3.x"
] |
Returning multiple values in a FLASK response | 38,906,448 | <p>I have a FLASK response object that I am preparing with two string values as follows: </p>
<pre><code>vioAllScript = encode_utf8(vioAllScript)
vioAllDiv = encode_utf8(vioAllDiv)
vioTup = (vioAllDiv, vioAllScript,)
resp = make_response(vioTup)
return resp
</code></pre>
<p>However, whenever I retrieve this response on the front end, the second value always get trimmed out of the response leaving only the first value. I have tried other options such as <code>resp = make_response(vioAllDiv, vioAllScript)</code> but the same thing always happens. Is there a way to get my response to contain two string values without concatenating them together?</p>
| 0 | 2016-08-11T21:29:32Z | 38,906,553 | <p>Unfortunately those flask interfaces are a bit too overloaded, which can be confusing, but if you dig in the relevant <a href="http://flask.pocoo.org/docs/0.11/api/#flask.Flask.make_response" rel="nofollow">docs</a> you will find this part:</p>
<blockquote>
<p>A tuple in the form (response, status, headers) or (response, headers) where response is any of the types defined here, status is a string or an integer and headers is a list or a dictionary with header values.</p>
</blockquote>
<p>The second argument is not being interpreted by flask as part of the response. </p>
<p>Consider returning something like this instead:</p>
<pre><code>flask.jsonify(vioAllDiv=vioAllDiv, vioAllScript=vioAllScript)
</code></pre>
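A Flask-independent sketch of the underlying idea, using the stdlib (the two string values here are hypothetical stand-ins): `jsonify` essentially serializes a dict, so both strings travel in one JSON payload and neither gets trimmed.

```python
# Flask-independent sketch: jsonify essentially serializes a dict, so both
# strings travel in one JSON payload. The values here are hypothetical.
import json

vio_all_div = "div markup here"
vio_all_script = "script markup here"

payload = json.dumps({"vioAllDiv": vio_all_div, "vioAllScript": vio_all_script})

# On the front end, both values come back out of the single payload:
received = json.loads(payload)
print(received["vioAllDiv"])
print(received["vioAllScript"])
```

On the JavaScript side the client would read `response.vioAllDiv` and `response.vioAllScript` from the parsed JSON instead of treating the body as one string.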
| 0 | 2016-08-11T21:39:42Z | [
"python",
"flask",
"response"
] |
Bundle Cython module with a Python package | 38,906,482 | <p>I'm wrapping a C++ library in Cython, and I would like to distribute this as a Python package. I've been using <a href="http://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html" rel="nofollow">this tutorial</a> as a guide.</p>
<p>Here is how things are organized.</p>
<pre><code>.
├── inc
│   └── Rectangle.h
├── rect
│   ├── __init__.py
│   └── wrapper.pyx
├── setup.py
└── src
    └── Rectangle.cpp
</code></pre>
<p>I've pasted the contents of these files at the bottom of the post, as well as in this <a href="https://github.com/standage/cython-packaging-demo" rel="nofollow">GitHub repo</a>.</p>
<p>I have no problem compiling and installing with <code>python setup.py install</code>, and I can <code>import rect</code> from the interpreter with no problem. But it seems to be an empty class: I can't create a <code>Rectangle</code> object with any of the following:</p>
<ul>
<li><code>Rectangle</code></li>
<li><code>rect.Rectangle</code></li>
<li><code>wrapper.Rectangle</code></li>
<li><code>rect.wrapper.Rectangle</code></li>
</ul>
<p>What am I doing wrong here?</p>
<hr>
<p>Contents of <code>Rectangle.h</code>, copied and pasted from the tutorial.</p>
<pre><code>namespace shapes {
class Rectangle {
public:
int x0, y0, x1, y1;
Rectangle();
Rectangle(int x0, int y0, int x1, int y1);
~Rectangle();
int getArea();
void getSize(int* width, int* height);
void move(int dx, int dy);
};
}
</code></pre>
<p>Contents of <code>Rectangle.cpp</code>.</p>
<pre><code>#include "Rectangle.h"
namespace shapes {
Rectangle::Rectangle() { }
Rectangle::Rectangle(int X0, int Y0, int X1, int Y1) {
x0 = X0;
y0 = Y0;
x1 = X1;
y1 = Y1;
}
Rectangle::~Rectangle() { }
int Rectangle::getArea() {
return (x1 - x0) * (y1 - y0);
}
void Rectangle::getSize(int *width, int *height) {
(*width) = x1 - x0;
(*height) = y1 - y0;
}
void Rectangle::move(int dx, int dy) {
x0 += dx;
y0 += dy;
x1 += dx;
y1 += dy;
}
}
</code></pre>
<p>Cython wrapper code <code>wrapper.pyx</code>.</p>
<pre><code># distutils: language = c++
# distutils: sources = src/Rectangle.cpp
cdef extern from "Rectangle.h" namespace "shapes":
cdef cppclass Rectangle:
Rectangle() except +
Rectangle(int, int, int, int) except +
int x0, y0, x1, y1
int getArea()
void getSize(int* width, int* height)
void move(int, int)
cdef class PyRectangle:
cdef Rectangle c_rect # hold a C++ instance which we're wrapping
def __cinit__(self, int x0, int y0, int x1, int y1):
self.c_rect = Rectangle(x0, y0, x1, y1)
def get_area(self):
return self.c_rect.getArea()
def get_size(self):
cdef int width, height
self.c_rect.getSize(&width, &height)
return width, height
def move(self, dx, dy):
self.c_rect.move(dx, dy)
</code></pre>
<p>The <code>setup.py</code> script I've adapted for this file organization.</p>
<pre><code>from distutils.core import setup, Extension
from Cython.Build import cythonize
setup(
name='rect',
packages=['rect'],
ext_modules=cythonize(Extension(
'Rectangle',
sources=['rect/wrapper.pyx', 'src/Rectangle.cpp'],
include_dirs=['inc/'],
language='c++',
extra_compile_args=['--std=c++11'],
extra_link_args=['--std=c++11']
)),
)
</code></pre>
| 0 | 2016-08-11T21:32:20Z | 38,906,589 | <p>Your import statement acts on <code>__init__.py</code>, which is probably empty.
What your setup script should create is an <code>.so</code> file. Thus you have to run it not with <code>install</code> but with <code>build_ext</code> (and <code>--inplace</code>), which gives you the <code>.so</code> file that can then be imported. But be careful with your names in <code>setup.py</code>: you will end up with several possible items to import when calling <code>import rect</code>. Avoid letting the extension have the same name as the package (or modify the <code>__init__.py</code> file accordingly afterwards).</p>
| 0 | 2016-08-11T21:43:26Z | [
"python",
"cython",
"packaging"
] |
Bundle Cython module with a Python package | 38,906,482 | <p>I'm wrapping a C++ library in Cython, and I would like to distribute this as a Python package. I've been using <a href="http://cython.readthedocs.io/en/latest/src/userguide/wrapping_CPlusPlus.html" rel="nofollow">this tutorial</a> as a guide.</p>
<p>Here is how things are organized.</p>
<pre><code>.
├── inc
│   └── Rectangle.h
├── rect
│   ├── __init__.py
│   └── wrapper.pyx
├── setup.py
└── src
    └── Rectangle.cpp
</code></pre>
<p>I've pasted the contents of these files at the bottom of the post, as well as in this <a href="https://github.com/standage/cython-packaging-demo" rel="nofollow">GitHub repo</a>.</p>
<p>I have no problem compiling and installing with <code>python setup.py install</code>, and I can <code>import rect</code> from the interpreter with no problem. But it seems to be an empty class: I can't create a <code>Rectangle</code> object with any of the following.
- <code>Rectangle</code>
- <code>rect.Rectangle</code>
- <code>wrapper.Rectangle</code>
- <code>rect.wrapper.Rectangle</code></p>
<p>What am I doing wrong here?</p>
<hr>
<p>Contents of <code>Rectangle.h</code>, copied and pasted from the tutorial.</p>
<pre><code>namespace shapes {
class Rectangle {
public:
int x0, y0, x1, y1;
Rectangle();
Rectangle(int x0, int y0, int x1, int y1);
~Rectangle();
int getArea();
void getSize(int* width, int* height);
void move(int dx, int dy);
};
}
</code></pre>
<p>Contents of <code>Rectangle.cpp</code>.</p>
<pre><code>#include "Rectangle.h"
namespace shapes {
Rectangle::Rectangle() { }
Rectangle::Rectangle(int X0, int Y0, int X1, int Y1) {
x0 = X0;
y0 = Y0;
x1 = X1;
y1 = Y1;
}
Rectangle::~Rectangle() { }
int Rectangle::getArea() {
return (x1 - x0) * (y1 - y0);
}
void Rectangle::getSize(int *width, int *height) {
(*width) = x1 - x0;
(*height) = y1 - y0;
}
void Rectangle::move(int dx, int dy) {
x0 += dx;
y0 += dy;
x1 += dx;
y1 += dy;
}
}
</code></pre>
<p>Cython wrapper code <code>wrapper.pyx</code>.</p>
<pre><code># distutils: language = c++
# distutils: sources = src/Rectangle.cpp
cdef extern from "Rectangle.h" namespace "shapes":
cdef cppclass Rectangle:
Rectangle() except +
Rectangle(int, int, int, int) except +
int x0, y0, x1, y1
int getArea()
void getSize(int* width, int* height)
void move(int, int)
cdef class PyRectangle:
cdef Rectangle c_rect # hold a C++ instance which we're wrapping
def __cinit__(self, int x0, int y0, int x1, int y1):
self.c_rect = Rectangle(x0, y0, x1, y1)
def get_area(self):
return self.c_rect.getArea()
def get_size(self):
cdef int width, height
self.c_rect.getSize(&width, &height)
return width, height
def move(self, dx, dy):
self.c_rect.move(dx, dy)
</code></pre>
<p>The <code>setup.py</code> script I've adapted for this file organization.</p>
<pre><code>from distutils.core import setup, Extension
from Cython.Build import cythonize
setup(
name='rect',
packages=['rect'],
ext_modules=cythonize(Extension(
'Rectangle',
sources=['rect/wrapper.pyx', 'src/Rectangle.cpp'],
include_dirs=['inc/'],
language='c++',
extra_compile_args=['--std=c++11'],
extra_link_args=['--std=c++11']
)),
)
</code></pre>
| 0 | 2016-08-11T21:32:20Z | 38,907,323 | <p>In this case, the problem turned out to be that the <code>.pyx</code> file and the extension did not have the same basename. When I renamed <code>wrapper.pyx</code> to <code>Rectangle</code> and re-installed I was able to run the following in the Python interpreter.</p>
<pre><code>>>> import Rectangle
>>> r = Rectangle.PyRectangle(0, 0, 1, 1)
>>> r.get_area()
1
>>>
</code></pre>
| 0 | 2016-08-11T22:57:01Z | [
"python",
"cython",
"packaging"
] |
how to find the complement of two dataframes | 38,906,513 | <p>given two large dataframes, is there any concise and efficient code (avoid using any <code>for loop</code> directly) that allow me to obtain the complement of these two dataframes? </p>
<p>The most straightforward way to me is to compute <code>union-intersection</code>, as shown in the naive example below, but I do not know how to implement this elegantly in <code>pandas</code> or <code>np</code>.</p>
<pre><code>df1= pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
df2= pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
intersection= pd.merge(df1, df2, how='inner',on=['key1', 'key2'])
union=pd.merge(df1, df2, how='outer',on=['key1', 'key2'])
complement=union-intersection
</code></pre>
<p>thanks for any comments and answers</p>
| 2 | 2016-08-11T21:35:36Z | 38,906,805 | <p>Starting with this:</p>
<pre><code>df1= pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
df2= pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
intersection = pd.merge(df1, df2, how='inner',on=['key1', 'key2'])
union = pd.merge(df1, df2, how='outer',on=['key1', 'key2'])
</code></pre>
<p>print union</p>
<pre><code> A B key1 key2 C D
0 A0 B0 K0 K0 C0 D0
1 A1 B1 K0 K1 NaN NaN
2 A2 B2 K1 K0 C1 D1
3 A2 B2 K1 K0 C2 D2
4 A3 B3 K2 K1 NaN NaN
5 NaN NaN K2 K0 C3 D3
</code></pre>
<p>print intersection </p>
<pre><code> A B key1 key2 C D
0 A0 B0 K0 K0 C0 D0
1 A2 B2 K1 K0 C1 D1
2 A2 B2 K1 K0 C2 D2
</code></pre>
<p>For the union minus the intersection, try this:</p>
<pre><code>union[union.isnull().any(axis=1)]
A B key1 key2 C D
1 A1 B1 K0 K1 NaN NaN
4 A3 B3 K2 K1 NaN NaN
5 NaN NaN K2 K0 C3 D3
</code></pre>
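<p>A related sketch (assuming pandas 0.17 or later, which added the <code>indicator</code> parameter to <code>merge</code>) that avoids relying on NaN checks:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
                    'key2': ['K0', 'K1', 'K0', 'K1'],
                    'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3']})
df2 = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
                    'key2': ['K0', 'K0', 'K0', 'K0'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']})

# indicator=True adds a '_merge' column marking each row as
# 'left_only', 'right_only' or 'both'; the complement is every
# row that did not match on both sides
merged = pd.merge(df1, df2, how='outer', on=['key1', 'key2'], indicator=True)
complement = merged[merged['_merge'] != 'both']
print(complement)
```

<p>This also works when the non-matching rows contain legitimate NaN values in their data columns, which the <code>isnull()</code> approach would misclassify.</p>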
| 2 | 2016-08-11T22:02:53Z | [
"python",
"pandas",
"join",
"merge"
] |
django tutorial: where does question_id come from? | 38,906,576 | <p>So I am trying to learn django and following this tutorial: <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">https://docs.djangoproject.com/en/1.10/intro/tutorial01/</a></p>
<p>After following the tutorial and making the poll app, when I look back at the code, I just don't see where this "question_id" comes from. It doesn't appear when creating the model. </p>
<p>Here are the codes for models:</p>
<pre><code>from django.db import models
from django.utils import timezone
import datetime
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __str__(self):
return self.question_text
def was_published_recently(self):
return self.pub_date >= timezone.now() - datetime.timedelta(days=1)
class Choice(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text= models.CharField(max_length=200)
votes = models.IntegerField(default=0)
def __str__(self):
return self.choice_text
</code></pre>
<p>And here in view.py:</p>
<pre><code>from django.shortcuts import render, get_object_or_404
from django.http import HttpResponse, HttpResponseRedirect
from django.http import Http404
from django.template import loader
from django.urls import reverse
from django.views import generic
from .models import Question
from .models import Choice
from django.utils import timezone
def vote(request, question_id):
question = get_object_or_404(Question, pk=question_id)
try:
selected_choice = question.choice_set.get(pk=request.POST['choice'])
except (KeyError, Choice.DoesNotExist):
return render(request, 'polls/detail.html', {
'question': question,
'error_message': "You didn't select a choice.",
})
else:
selected_choice.votes += 1
selected_choice.save()
return HttpResponseRedirect(reverse('polls:results', args=(question.id,)))
</code></pre>
<p>question_id just appears out of nowhere.</p>
<ol>
<li><p>Is it the case that when creating a model, django will automatically create a model_id for each instance? </p></li>
<li><p>Also, another question is why they do "pk=question_id" and use pk thereafter. Does it matter?</p></li>
</ol>
<p>codes in urls.py:</p>
<pre><code>from django.shortcuts import render
from django.conf.urls import url
from . import views
app_name = 'polls'
urlpatterns = [
# ex: /polls/
url(r'^$', views.IndexView.as_view(), name='index'),
# ex: /polls/5/
url(r'^(?P<pk>[0-9]+)/$', views.DetailView.as_view(), name='detail'),
# ex: /polls/5/results/
url(r'^(?P<pk>[0-9]+)/results/$', views.ResultsView.as_view(), name='results'),
# ex: /polls/5/vote/
url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote'),
]
</code></pre>
<ol start="3">
<li><p>Additionally, when using generic view like this: </p>
<p>class DetailView(generic.DetailView):
model = Question
template_name = 'polls/detail.html'</p></li>
</ol>
<p>Could we pass in arguments (like question_id)? </p>
| 0 | 2016-08-11T21:41:31Z | 38,906,662 | <p>In your <code>urlpatterns</code> list in <code>urls.py</code> you have this route:</p>
<pre><code>url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote')
</code></pre>
<p>The url is actually a regex, so one string to specify a whole family of urls.<br>
The <code>question_id</code> is a named part of the regex which defines a parameter of that url. </p>
<p>When the view, i.e. <code>views.vote</code>, is called, the <code>question_id</code> parsed from the url is sent in as a function argument.</p>
<p>For example, if a client visits the url:</p>
<pre><code> /123/vote/
</code></pre>
<p>Then the view function will be called like:</p>
<pre><code> vote(request=request, question_id=123)
</code></pre>
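<p>Under the hood this is ordinary regex group capture — a minimal sketch using Python's <code>re</code> module (not Django itself) of how the named group is extracted:</p>

```python
import re

# The route's named group behaves like any named group in a regex:
# Django matches the requested path against the pattern and passes
# the captures as keyword arguments to the view.
pattern = re.compile(r'^(?P<question_id>[0-9]+)/vote/$')
match = pattern.match('123/vote/')
print(match.groupdict())  # {'question_id': '123'}
```

<p>Note that the captured value arrives as a string, not an integer; the ORM is happy to accept a string for <code>pk</code> lookups.</p>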
<blockquote>
<ol>
<li>Is it the case that when creating a model, django will automatically create a model_id for each instance?</li>
</ol>
</blockquote>
<p>Yes.</p>
<blockquote>
<ol start="2">
<li>Also, another question is why they do "pk=question_id" and use pk thereafter. Does it matter?</li>
</ol>
</blockquote>
<p>Here pk stands for "primary key". It doesn't matter, really, the different names are only a scoping issue. </p>
<p>The function argument name for the <code>vote</code> view is "question_id". The function argument name inside of <code>get_object_or_404</code> and <code>question.choice_set.get</code> methods is 'pk'. </p>
<p>They will just be different names pointing at the same object (the integer <code>123</code>, for example). </p>
| 0 | 2016-08-11T21:49:46Z | [
"python",
"django"
] |
django tutorial: where does question_id come from? | 38,906,576 | <p>So I am trying to learn django and following this tutorial: <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">https://docs.djangoproject.com/en/1.10/intro/tutorial01/</a></p>
<p>After following the tutorial and making the poll app, when I look back at the code, I just don't see where this "question_id" comes from. It doesn't appear when creating the model. </p>
<p>Here are the codes for models:</p>
<pre><code>from django.db import models
from django.utils import timezone
import datetime
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __str__(self):
return self.question_text
def was_published_recently(self):
return self.pub_date >= timezone.now() - datetime.timedelta(days=1)
class Choice(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text= models.CharField(max_length=200)
votes = models.IntegerField(default=0)
def __str__(self):
return self.choice_text
</code></pre>
<p>And here in view.py:</p>
<pre><code>from django.shortcuts import render, get_object_or_404
from django.http import HttpResponse, HttpResponseRedirect
from django.http import Http404
from django.template import loader
from django.urls import reverse
from django.views import generic
from .models import Question
from .models import Choice
from django.utils import timezone
def vote(request, question_id):
question = get_object_or_404(Question, pk=question_id)
try:
selected_choice = question.choice_set.get(pk=request.POST['choice'])
except (KeyError, Choice.DoesNotExist):
return render(request, 'polls/detail.html', {
'question': question,
'error_message': "You didn't select a choice.",
})
else:
selected_choice.votes += 1
selected_choice.save()
return HttpResponseRedirect(reverse('polls:results', args=(question.id,)))
</code></pre>
<p>question_id just appears out of nowhere.</p>
<ol>
<li><p>Is it the case that when creating a model, django will automatically create a model_id for each instance? </p></li>
<li><p>Also, another question is why they do "pk=question_id" and use pk thereafter. Does it matter?</p></li>
</ol>
<p>codes in urls.py:</p>
<pre><code>from django.shortcuts import render
from django.conf.urls import url
from . import views
app_name = 'polls'
urlpatterns = [
# ex: /polls/
url(r'^$', views.IndexView.as_view(), name='index'),
# ex: /polls/5/
url(r'^(?P<pk>[0-9]+)/$', views.DetailView.as_view(), name='detail'),
# ex: /polls/5/results/
url(r'^(?P<pk>[0-9]+)/results/$', views.ResultsView.as_view(), name='results'),
# ex: /polls/5/vote/
url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote'),
]
</code></pre>
<ol start="3">
<li><p>Additionally, when using generic view like this: </p>
<p>class DetailView(generic.DetailView):
model = Question
template_name = 'polls/detail.html'</p></li>
</ol>
<p>Could we pass in arguments (like question_id)? </p>
| 0 | 2016-08-11T21:41:31Z | 38,906,667 | <ol>
<li>Your statement is true you might want to take a look at the relevant Documentation for Django Models here: <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#automatic-primary-key-fields" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/db/models/#automatic-primary-key-fields</a></li>
</ol>
<p>Which notably states:</p>
<blockquote>
<p>By default, Django gives each model the following field:</p>
<pre><code>id = models.AutoField(primary_key=True)
</code></pre>
<p>This is an auto-incrementing primary key.</p>
<p>If you'd like to specify a custom primary key, just specify
<code>primary_key=True</code> on one of your fields. If Django sees you've
explicitly set <code>Field.primary_key</code>, it won't add the automatic <code>id</code>
column.</p>
<p>Each model requires exactly one field to have <code>primary_key=True</code> (either
explicitly declared or automatically added).</p>
</blockquote>
<ol start="2">
<li><p>You might also want to take a look at this question which briefly explains the different between ID's and Primary Keys in Django:
<a href="http://stackoverflow.com/questions/22345711/whats-the-difference-between-model-id-and-model-pk-in-django">What's the difference between Model.id and Model.pk in django?</a></p></li>
<li><p>Why don't you try it and see ;)</p></li>
</ol>
| 0 | 2016-08-11T21:49:58Z | [
"python",
"django"
] |
Unable to run scrypt with Python 3.5 | 38,906,610 | <p>After applying the recommended changes to the setup files (listed <a href="https://bitbucket.org/mhallin/py-scrypt/issues/23/scrypt-for-python-35" rel="nofollow">here</a>), I successfully installed Scrypt on Python 3.5. But I can't figure out how to actually get it to run or to pass its own tests. It complains that "_scrypt" doesn't exist but, per the scrypt directory, it does. My attempt:</p>
<pre><code>C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python setup.py build
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
copying scrypt.py -> build\lib.win-amd64-3.5
running build_ext
building '_scrypt' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\src
creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6
creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib
creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib\crypto
creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib\scryptenc
creating build\temp.win-amd64-3.5\Release\scrypt-1.1.6\lib\util
creating build\temp.win-amd64-3.5\Release\scrypt-windows-stubs
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcsrc/scrypt.c /Fobuild\temp.win-amd64-3.5\Release\src/scrypt.obj
scrypt.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/crypto/crypto_aesctr.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_aesctr.obj
crypto_aesctr.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/crypto/crypto_scrypt-nosse.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_scrypt-nosse.obj
crypto_scrypt-nosse.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/crypto/sha256.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/sha256.obj
sha256.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/scryptenc/scryptenc.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc.obj
scryptenc.c
scrypt-1.1.6/lib/scryptenc/scryptenc.c(111): warning C4244: '=': conversion from 'std::size_t' to 'double', possible loss of data
scrypt-1.1.6/lib/scryptenc/scryptenc.c(172): warning C4101: 'fd': unreferenced local variable
scrypt-1.1.6/lib/scryptenc/scryptenc.c(173): warning C4101: 'lenread': unreferenced local variable
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/scryptenc/scryptenc_cpuperf.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc_cpuperf.obj
scryptenc_cpuperf.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/util/memlimit.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/memlimit.obj
memlimit.c
scrypt-1.1.6/lib/util/memlimit.c(331): warning C4244: '=': conversion from 'double' to 'std::size_t', possible loss of data
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-1.1.6/lib/util/warn.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/warn.obj
warn.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -DHAVE_CONFIG_H -Dinline=__inline -Iscrypt-1.1.6 -Iscrypt-1.1.6/lib -Iscrypt-1.1.6/lib/scryptenc -Iscrypt-1.1.6/lib/crypto -Iscrypt-1.1.6/lib/util -Ic:\OpenSSL-Win64\include -Iscrypt-windows-stubs/include -IC:\Python35\include -IC:\Python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\winrt" /Tcscrypt-windows-stubs/gettimeofday.c /Fobuild\temp.win-amd64-3.5\Release\scrypt-windows-stubs/gettimeofday.obj
gettimeofday.c
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\OpenSSL-Win64\lib /LIBPATH:C:\Python35\libs /LIBPATH:C:\Python35\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\ATLMFC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\um\x64" libeay32.lib advapi32.lib /EXPORT:PyInit__scrypt build\temp.win-amd64-3.5\Release\src/scrypt.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_aesctr.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/crypto_scrypt-nosse.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/crypto/sha256.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/scryptenc/scryptenc_cpuperf.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/memlimit.obj build\temp.win-amd64-3.5\Release\scrypt-1.1.6/lib/util/warn.obj build\temp.win-amd64-3.5\Release\scrypt-windows-stubs/gettimeofday.obj /OUT:build\lib.win-amd64-3.5\_scrypt.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\src\_scrypt.cp35-win_amd64.lib
scrypt.obj : warning LNK4197: export 'PyInit__scrypt' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\src\_scrypt.cp35-win_amd64.lib and object build\temp.win-amd64-3.5\Release\src\_scrypt.cp35-win_amd64.exp
Generating code
Finished generating code
C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python setup.py install
running install
running bdist_egg
running egg_info
creating scrypt.egg-info
writing scrypt.egg-info\PKG-INFO
writing top-level names to scrypt.egg-info\top_level.txt
writing dependency_links to scrypt.egg-info\dependency_links.txt
writing manifest file 'scrypt.egg-info\SOURCES.txt'
reading manifest file 'scrypt.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'scrypt.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
creating build\bdist.win-amd64
creating build\bdist.win-amd64\egg
copying build\lib.win-amd64-3.5\scrypt.py -> build\bdist.win-amd64\egg
copying build\lib.win-amd64-3.5\_scrypt.cp35-win_amd64.pyd -> build\bdist.win-amd64\egg
byte-compiling build\bdist.win-amd64\egg\scrypt.py to scrypt.cpython-35.pyc
creating stub loader for _scrypt.cp35-win_amd64.pyd
byte-compiling build\bdist.win-amd64\egg\_scrypt.py to _scrypt.cpython-35.pyc
creating build\bdist.win-amd64\egg\EGG-INFO
copying scrypt.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO
copying scrypt.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying scrypt.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying scrypt.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO
writing build\bdist.win-amd64\egg\EGG-INFO\native_libs.txt
zip_safe flag not set; analyzing archive contents...
__pycache__._scrypt.cpython-35: module references __file__
creating dist
creating 'dist\scrypt-0.7.1-py3.5-win-amd64.egg' and adding 'build\bdist.win-amd64\egg' to it
removing 'build\bdist.win-amd64\egg' (and everything under it)
Processing scrypt-0.7.1-py3.5-win-amd64.egg
creating c:\python35\lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg
Extracting scrypt-0.7.1-py3.5-win-amd64.egg to c:\python35\lib\site-packages
Adding scrypt 0.7.1 to easy-install.pth file
Installed c:\python35\lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg
Processing dependencies for scrypt==0.7.1
Finished processing dependencies for scrypt==0.7.1
C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python setup.py test
running test
running egg_info
writing dependency_links to scrypt.egg-info\dependency_links.txt
writing top-level names to scrypt.egg-info\top_level.txt
writing scrypt.egg-info\PKG-INFO
reading manifest file 'scrypt.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'scrypt.egg-info\SOURCES.txt'
running build_ext
copying build\lib.win-amd64-3.5\_scrypt.cp35-win_amd64.pyd ->
error: [WinError 126] The specified module could not be found
C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>python
Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scrypt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360\scrypt.py", line 11, in <module>
_scrypt = cdll.LoadLibrary(imp.find_module('_scrypt')[1])
File "C:\Python35\lib\ctypes\__init__.py", line 425, in LoadLibrary
return self._dlltype(name)
File "C:\Python35\lib\ctypes\__init__.py", line 347, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
>>> ^DC:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg
^C
C:\Users\cmcka\Desktop\mhallin-py-scrypt-119842611360>cd C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg
C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg>dir
Volume in drive C is Windows
Volume Serial Number is CA5B-5BBB
Directory of C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg
08/11/2016 02:36 PM <DIR> .
08/11/2016 02:36 PM <DIR> ..
08/11/2016 02:36 PM <DIR> EGG-INFO
08/11/2016 02:36 PM 7,768 scrypt.py
08/11/2016 02:36 PM 31,232 _scrypt.cp35-win_amd64.pyd
08/11/2016 02:36 PM 306 _scrypt.py
08/11/2016 02:36 PM <DIR> __pycache__
3 File(s) 39,306 bytes
4 Dir(s) 120,141,914,112 bytes free
C:\Python35\Lib\site-packages\scrypt-0.7.1-py3.5-win-amd64.egg>
</code></pre>
<p>Any suggestions would be greatly appreciated.
Edit: The error occurs regardless of where the 'python' command is run.</p>
| 0 | 2016-08-11T21:45:33Z | 38,928,070 | <p>After using Process Monitor to track the activity of Scrypt on my working desktop and my non-working laptop, I discovered that it wanted OpenSSL's libeay32.dll. In the beginning, I had chosen to install its DLLs in the /bin directory but, unfortunately, Scrypt doesn't check there. After reinstalling OpenSSL and choosing to install those in the Windows directory, it ran just fine. Scrypt is in need of a serious update.</p>
| 0 | 2016-08-13T00:31:12Z | [
"python",
"python-3.x",
"python-3.5",
"scrypt"
] |
How do I have Splinter to iterate clicks on dropdown menu with python? | 38,906,620 | <p>I am trying to gather some yearly stats from <a href="http://www.atpworldtour.com/en/stats" rel="nofollow">http://www.atpworldtour.com/en/stats</a></p>
<p>The years are on dropdown menu (default selection is 52-weeks).
I succeeded at opening the drop down menu, but my code does not select any year.</p>
<pre><code>browser = Browser('chrome')
browser.visit("http://www.atpworldtour.com/en/stats")
window = browser.windows
window.is_current = True
print("asf")
print(browser.find_by_css(".dropdown-label"))
browser.find_by_css(".dropdown-label").click()
print(browser.find_by_css("ul.dropdown li"))
print(browser.find_by_css("ul.dropdown #2015"))
browser.find_by_css("ul.dropdown li")[1].click()
#browser.find_by_css("ul.dropdown #2015").click() Does not work, either.
</code></pre>
| 0 | 2016-08-11T21:46:18Z | 38,952,680 | <p>This should work</p>
<pre><code>browser.find_by_id("52weeks").first.click()
browser.find_by_id("2010").first.click()
</code></pre>
| 0 | 2016-08-15T09:38:09Z | [
"python",
"web-scraping"
] |
pd.DataFrame() fails when combining 2 lists | 38,906,645 | <p>I have 2 lists like below while my lists actually have 3000 elements:</p>
<pre><code>a=[numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800')]
b=[1,2,3]
</code></pre>
<p>When I tried to combine them to form a dataframe with 2 columns:
<code>c=pd.DataFrame([pd.Series(a), pd.Series(b)])</code>
It says: <code>TypeError: invalid type promotion</code></p>
<p>How can I solve this problem?</p>
| 2 | 2016-08-11T21:48:08Z | 38,906,724 | <p>Try:</p>
<pre><code>c = pd.DataFrame(data={'a':a,'b':b})
</code></pre>
<p>You're getting the error because you are trying to create a pandas dataframe by putting two lists together into one column, while the data types of the elements in those lists don't match. Basically, you're trying to put dates and integers into the same column.</p>
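<p>As a hedged illustration of why the dict form works (a minimal sketch using timezone-free timestamps, since the original <code>+0800</code> offset strings are deprecated in newer numpy):</p>

```python
import numpy as np
import pandas as pd

# Timezone-free timestamps keep the sketch simple; the idea is the same.
a = [np.datetime64('2004-04-12T08:00:00')] * 3
b = [1, 2, 3]

# Each list becomes its own column with its own dtype, so pandas never
# attempts to promote dates and integers into a single type.
c = pd.DataFrame({'a': a, 'b': b})
```

<p>Here <code>c['a']</code> ends up as a <code>datetime64</code> column and <code>c['b']</code> as an integer column.</p>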
<p>I hope this helps.</p>
| 3 | 2016-08-11T21:55:29Z | [
"python",
"pandas",
"dataframe"
] |
pd.DataFrame() fails when combining 2 lists | 38,906,645 | <p>I have 2 lists like below while my lists actually have 3000 elements:</p>
<pre><code>a=[numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800')]
b=[1,2,3]
</code></pre>
<p>When I tried to combine them to form a dataframe with 2 columns:
<code>c=pd.DataFrame([pd.Series(a), pd.Series(b)])</code>
It says: <code>TypeError: invalid type promotion</code></p>
<p>How can I solve this problem?</p>
| 2 | 2016-08-11T21:48:08Z | 38,906,787 | <p>This should work without using a dictionary: </p>
<pre><code>c = pd.DataFrame([a, b]).T
print c
</code></pre>
| 1 | 2016-08-11T22:01:13Z | [
"python",
"pandas",
"dataframe"
] |
pd.DataFrame() fails when combining 2 lists | 38,906,645 | <p>I have 2 lists like below while my lists actually have 3000 elements:</p>
<pre><code>a=[numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800')]
b=[1,2,3]
</code></pre>
<p>When I tried to combine them to form a dataframe with 2 columns:
<code>c=pd.DataFrame([pd.Series(a), pd.Series(b)])</code>
It says: <code>TypeError: invalid type promotion</code></p>
<p>How can I solve this problem?</p>
| 2 | 2016-08-11T21:48:08Z | 38,906,794 | <pre><code>import pandas as pd
import numpy as np
data = {
'a': [
np.datetime64('2004-04-12T08:00:00.000000000+0800'),
np.datetime64('2004-04-12T08:00:00.000000000+0800'),
np.datetime64('2004-04-12T08:00:00.000000000+0800')
],
'b': [1, 2, 3]
}
df = pd.DataFrame(data)
print df
</code></pre>
| 2 | 2016-08-11T22:01:40Z | [
"python",
"pandas",
"dataframe"
] |
pd.DataFrame() fails when combining 2 lists | 38,906,645 | <p>I have 2 lists like below while my lists actually have 3000 elements:</p>
<pre><code>a=[numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800'),numpy.datetime64('2004-04-12T08:00:00.000000000+0800')]
b=[1,2,3]
</code></pre>
<p>When I tried to combine them to form a dataframe with 2 columns:
<code>c=pd.DataFrame([pd.Series(a), pd.Series(b)])</code>
It says: <code>TypeError: invalid type promotion</code></p>
<p>How can I solve this problem?</p>
| 2 | 2016-08-11T21:48:08Z | 38,906,902 | <p>try this: </p>
<pre><code>import pandas as pd
pd.DataFrame(zip(a,b))
0 1
0 2004-04-12 1
1 2004-04-12 2
2 2004-04-12 3
</code></pre>
| 1 | 2016-08-11T22:12:06Z | [
"python",
"pandas",
"dataframe"
] |
Office 365 API - Create Event With Attachment | 38,906,663 | <p>I am using the Python script below to create an event for Office 365, and it works. However, I am having a tough time (after googling all day) finding out how I can also include an attachment with the event that is being generated from this script below. </p>
<p>This below code works for creating the event.</p>
<pre><code># Set the request parameters
url = 'https://outlook.office365.com/api/v1.0/me/events?$Select=Start,End'
user = 'user1@domain.com'
pwd = getpass.getpass('Please enter your AD password: ')
# Create JSON payload
data = {
"Subject": "Testing Outlock Event",
"Body": {
"ContentType": "HTML",
"Content": "Test Content"
},
"Start": "2016-05-23T15:00:00.000Z",
"End": "2016-05-23T16:00:00.000Z",
"Attendees": [
{
"EmailAddress": {
"Address": "user1@domain.com",
"Name": "User1"
},
"Type": "Required" },
{
"EmailAddress": {
"Address": "user2@domain.com",
"Name": "User2"
},
"Type": "Optional" }
]
}
json_payload = json.dumps(data)
# Build the HTTP request
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(url, data=json_payload)
auth = base64.encodestring('%s:%s' % (user, pwd)).replace('\n', '')
request.add_header('Authorization', 'Basic %s' % auth)
request.add_header('Content-Type', 'application/json')
request.add_header('Accept', 'application/json')
request.get_method = lambda: 'POST'
# Perform the request
result = opener.open(request)
</code></pre>
<p>But when I tried to include attachment, it is not working (see below with attachments code)</p>
<pre><code># Set the request parameters
url = 'https://outlook.office365.com/api/v1.0/me/events?$Select=Start,End'
user = 'user1@domain.com'
pwd = getpass.getpass('Please enter your AD password: ')
# Create JSON payload
data = {
"Subject": "Testing Outlock Event",
"Body": {
"ContentType": "HTML",
"Content": "Test Content"
},
"Start": "2016-05-23T15:00:00.000Z",
"End": "2016-05-23T16:00:00.000Z",
"Attendees": [
{
"EmailAddress": {
"Address": "user1@domain.com",
"Name": "User1"
},
"Type": "Required" },
{
"EmailAddress": {
"Address": "user2@domain.com",
"Name": "User2"
},
"Type": "Optional" }
],
"Attachments": [
{
"@odata.type": "#Microsoft.OutlookServices.FileAttachment",
"Name": "menu.txt",
"ContentBytes": "JVBERi0xLjMNCjEgMCBvYPRg=="
}
],
"HasAttachments":"true"
}
json_payload = json.dumps(data)
# Build the HTTP request
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(url, data=json_payload)
auth = base64.encodestring('%s:%s' % (user, pwd)).replace('\n', '')
request.add_header('Authorization', 'Basic %s' % auth)
request.add_header('Content-Type', 'application/json')
request.add_header('Accept', 'application/json')
request.get_method = lambda: 'POST'
# Perform the request
result = opener.open(request)
</code></pre>
<p>Next, I tried to separate the process. So, first I created the event and captured the eventID. Then tried to include the eventID in the URL (see below), but still no luck. </p>
<pre><code>import urllib2
import getpass
import os
import json
import sys
import base64
import traceback
# Set the request parameters
url = 'https://outlook.office.com/api/v1.0/me/events/AAMkADA1OWVjOTkxLTlmYmEtNDAwMS04YWU3LTNkNDE2YjU2OGI1ZABGBBBBBBD_fa49_h8OTJ5eGdjSTEF3BwBOcCSV9aNzSoXurwI4R0IgBBBBBBENAABOcCSV9aNzSoXurwI4R0IgAAHzfZ0mAAA=/attachments'
user = 'user1@domain.com'
pwd = "password123"
# Create JSON payload
data = {
"@odata.type": "#Microsoft.OutlookServices.FileAttachment",
"Name": "menu.txt",
"ContentBytes": "VGVzdCAxMjM0NQ=="
}
print data
json_payload = json.dumps(data)
# Build the HTTP request
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(url, data=json_payload)
auth = base64.encodestring('%s:%s' % (user, pwd)).replace('\n', '')
request.add_header('Authorization', 'Basic %s' % auth)
request.add_header('Content-Type', 'application/json')
request.add_header('Accept', 'application/json')
request.get_method = lambda: 'POST'
# Perform the request
result = opener.open(request).read()
</code></pre>
<p>Any help would be really appreciated.</p>
| 0 | 2016-08-11T21:49:47Z | 38,908,584 | <p>To create an attachment for the event, we need to send another request. Here is the description for <a href="https://msdn.microsoft.com/en-us/office/office365/api/calendar-rest-operations#create-attachments" rel="nofollow">creating the attachment</a> for an existing event.</p>
<p><a href="http://i.stack.imgur.com/kTe6H.png" rel="nofollow"><img src="http://i.stack.imgur.com/kTe6H.png" alt="enter image description here"></a></p>
<p>After we create the attachments, we also need to update the event so that others can see the changes. </p>
<p>Here is an example that creates an attachment for an event:</p>
<p><strong>Create an event</strong>:</p>
<pre><code>POST: https://outlook.office.com/api/v2.0/me/events
authorization: bearer {token}
content-type: application/json
{
"Subject": "Discuss the Calendar REST API",
"Body": {
"ContentType": "HTML",
"Content": "I think it will meet our requirements!"
},
"Start": {
"DateTime": "2016-08-15T14:00:00",
"TimeZone": "Pacific Standard Time"
},
"End": {
"DateTime": "2016-08-15T14:30:00",
"TimeZone": "Pacific Standard Time"
},
"Attendees": [
{
"EmailAddress": {
"Address": "nanyu@o365e3w15.onmicrosoft.com",
"Name": "Nan Yu"
},
"Type": "Required"
}
]
}
</code></pre>
<p><strong>Add attachment</strong></p>
<pre><code>POST: https://outlook.office.com/api/v2.0/me/events/{eventId}/attachments
authorization: bearer {token}
content-type: application/json
{
"@odata.type":"#Microsoft.OutlookServices.FileAttachment",
"Name":"test.txt",
"ContentBytes":"aHR0cDovL2dpb25rdW56LmdpdGh1Yi5pby9jaGFydGlzdC1qcy9leGFtcGxlcy5odG1sDQoNCmh0dHBzOi8vd3d3LnlvdXR1YmUuY29tL3dhdGNoP3Y9MGttZGpxZ085SVkNCjM2OjAyDQpodHRwczovL3d3dy55b3V0dWJlLmNvbS93YXRjaD92PW9FVHY2djlmN3djIA0KNjoxOQ0KDQpodHRwczovL2FuZ3VsYXItdWkuZ2l0aHViLmlvL2Jvb3RzdHJhcC8="
}
</code></pre>
<p>The content bytes are the base64-encoded string of the text file, and here is a sample that converts the text file to a base64 string using C#:</p>
<pre><code>Convert.ToBase64String(File.ReadAllBytes(@"C:\users\user1\desktop\test.txt"))
</code></pre>
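<p>Since the question's script is in Python, a hedged Python equivalent of the C# call above may also be useful (a minimal sketch; the helper name <code>file_to_content_bytes</code> is my own and not part of any Outlook SDK):</p>

```python
import base64
import os
import tempfile

def file_to_content_bytes(path):
    # Read a file and return the base64 string Outlook expects in ContentBytes
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Round-trip check against the question's own example: the bytes
# "Test 12345" should encode to "VGVzdCAxMjM0NQ==".
tmp_path = os.path.join(tempfile.mkdtemp(), "menu.txt")
with open(tmp_path, "wb") as f:
    f.write(b"Test 12345")
content_bytes = file_to_content_bytes(tmp_path)
```

<p>Encoding the bytes "Test 12345" this way reproduces the <code>ContentBytes</code> value <code>VGVzdCAxMjM0NQ==</code> used in the question.</p>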
<p><strong>Update the event</strong></p>
<pre><code>PATCH:https://outlook.office.com/api/v2.0/me/events/{eventid}
authorization: bearer {token}
content-type: application/json
{
"Body": {
"ContentType": "HTML",
"Content": "I think it will meet our requirements!(Update attachments)"
}
}
</code></pre>
| 1 | 2016-08-12T01:44:07Z | [
"python",
"office365"
] |
MaxPlus Python - 3Ds Max - Add Worldspace Modifier | 38,906,782 | <p>I'm trying to learn MaxPlus/Python in 3Ds Max, and I am stuck with creating a worldspace modifier. (Path Deform Binding WSM modifier)</p>
<pre><code>mod = MaxPlus.Factory.CreateWorldSpaceModifier(MaxPlus.ClassIds.PathDeformSpaceWarp)
MaxPlus.ModifierPanel.AddToSelection(mod)
</code></pre>
<p>Above code gives the following error:</p>
<p><code>File "C:\Program Files\Autodesk\3ds Max 2017\MaxPlus.py", line 30534, in CreateWorldSpaceModifier
return _MaxPlus.Factory_CreateWorldSpaceModifier(*args)
RuntimeError: creation failure</code></p>
<p>Not sure why it happens, maybe the class ID is wrong? Adding object space modifiers works like a charm.</p>
<p>My current workaround is using the new pymxs:</p>
<pre><code># pymxs part
rt.execute("meshObj=$")
rt.modpanel.addmodtoselection(rt.SpacePathDeform())
rt.meshObj.modifiers[0].path = rt.s
</code></pre>
<p>But that feels really hacky. Any ideas?</p>
| 1 | 2016-08-11T22:00:42Z | 38,911,802 | <p>Close enough, here you go:</p>
<pre><code>mod = MaxPlus.Factory.CreateWorldSpaceModifier(MaxPlus.Class_ID(0x000110b4, 0x00007f9e))
MaxPlus.ModifierPanel.AddToSelection(mod)
</code></pre>
<p>Apart from looking for #define in the SDK, you can get the classIDs by reading the classID property of an existing object, for example <code>$.modifiers[1].classID</code>.</p>
| 0 | 2016-08-12T07:00:20Z | [
"python",
"3dsmax"
] |
How to retrieve data within an a href tag | 38,906,797 | <p>Hey all i've stumbled across some difficulties whilst web crawling. I am trying to obtain the 70 in this chunk of code embedded in the middle of some html, my question is how would i go about doing so.I've tried various methods but none seem to work. I'm using the BeautifulSoup module and writing in Python 3.The link to the website i'm scraping is conveniently the link in the href if anyone needs it.Thank you in advance.</p>
<pre><code><a href="http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328">London, United Kingdom<span class="temp">70&deg;</span><span class="icon i-33-s"></span></a>
from bs4 import*
import requests
data = requests.get("http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328")
soup = BeautifulSoup(data.text,"html.parser")
</code></pre>
| 1 | 2016-08-11T22:01:46Z | 38,906,891 | <pre><code>from bs4 import BeautifulSoup
import re
import requests
soup = BeautifulSoup(text, "html.parser")
for link in soup.find_all("a"):
    temp = link.find("span", {"class": "temp"})
    if temp:
        print(re.findall(r"[0-9]{1,2}", temp.text))
</code></pre>
<p>I hope this helps you</p>
| -1 | 2016-08-11T22:11:06Z | [
"python",
"beautifulsoup",
"web-crawler",
"href"
] |
How to retrieve data within an a href tag | 38,906,797 | <p>Hey all i've stumbled across some difficulties whilst web crawling. I am trying to obtain the 70 in this chunk of code embedded in the middle of some html, my question is how would i go about doing so.I've tried various methods but none seem to work. I'm using the BeautifulSoup module and writing in Python 3.The link to the website i'm scraping is conveniently the link in the href if anyone needs it.Thank you in advance.</p>
<pre><code><a href="http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328">London, United Kingdom<span class="temp">70&deg;</span><span class="icon i-33-s"></span></a>
from bs4 import*
import requests
data = requests.get("http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328")
soup = BeautifulSoup(data.text,"html.parser")
</code></pre>
| 1 | 2016-08-11T22:01:46Z | 38,907,043 | <p>Assuming that using <code>BeautifulSoup</code> is not a strict requirement, you could do this with the standard-library <code>html.parser</code> module. The example below is tailored to the snippet you posted:
it walks every chunk of character data and prints only the purely numeric one.</p>
<pre><code>from html.parser import HTMLParser
class MyHTMLParser(HTMLParser):
def handle_data(self, data):
if data.isdigit():
print(data)
parser = MyHTMLParser(convert_charrefs=False)  # keep the '70' data separate from the &deg; entity
parser.feed('<a href="http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328">London, United Kingdom<span class="temp">70&deg;</span><span class="icon i-33-s"></span></a>')
</code></pre>
<p>this will output <code>70</code></p>
<p>Could also be done using regex.</p>
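<p>Following up on that remark, a minimal regex sketch (standard library only, applied to the snippet from the question; this is a narrow illustration, not a general HTML parser):</p>

```python
import re

# The snippet from the question, trimmed to the relevant part.
html = ('<a href="http://www.accuweather.com/en/gb/london/ec4a-2/weather-forecast/328328">'
        'London, United Kingdom<span class="temp">70&deg;</span>'
        '<span class="icon i-33-s"></span></a>')

match = re.search(r'<span class="temp">(\d+)&deg;</span>', html)
temp = int(match.group(1))
```

<p>This pulls out the integer 70 without any third-party dependencies.</p>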
| 0 | 2016-08-11T22:26:25Z | [
"python",
"beautifulsoup",
"web-crawler",
"href"
] |
How to retrieve data within an a href tag | 38,906,797 | <p>Hey all i've stumbled across some difficulties whilst web crawling. I am trying to obtain the 70 in this chunk of code embedded in the middle of some html, my question is how would i go about doing so.I've tried various methods but none seem to work. I'm using the BeautifulSoup module and writing in Python 3.The link to the website i'm scraping is conveniently the link in the href if anyone needs it.Thank you in advance.</p>
<pre><code><a href="http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328">London, United Kingdom<span class="temp">70&deg;</span><span class="icon i-33-s"></span></a>
from bs4 import*
import requests
data = requests.get("http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328")
soup = BeautifulSoup(data.text,"html.parser")
</code></pre>
| 1 | 2016-08-11T22:01:46Z | 38,907,108 | <p>This will get you any spans containing a temperature</p>
<pre><code>temps = soup.find_all('span',{'class':'temp'})
</code></pre>
<p>Then loop over it</p>
<pre><code>for span in temps:
temp = span.decode_contents()
# temp looks like "70&deg" or "70\xb0" so parse it
print int(temp[:-1])
</code></pre>
<p>The hard work is probably converting from unicode to ASCII if you are in python2.</p>
<p>But the accu-weather page doesn't have a span with class temp:</p>
<pre><code>In [12]: soup.select('[class~=temp]')
Out[12]:
[<strong class="temp">19<span>\xb0</span></strong>,
<strong class="temp">14<span>\xb0</span></strong>,
<strong class="temp">24<span>\xb0</span></strong>,
<strong class="temp">23<span>\xb0</span></strong>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">17\xb0</h2>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">20\xb0</h2>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">17\xb0</h2>,
<h2 class="temp">19\xb0</h2>,
<h2 class="temp">19\xb0</h2>]
</code></pre>
<p>So it's harder to give you an answer</p>
| 0 | 2016-08-11T22:31:58Z | [
"python",
"beautifulsoup",
"web-crawler",
"href"
] |
How to retrieve data within an a href tag | 38,906,797 | <p>Hey all i've stumbled across some difficulties whilst web crawling. I am trying to obtain the 70 in this chunk of code embedded in the middle of some html, my question is how would i go about doing so.I've tried various methods but none seem to work. I'm using the BeautifulSoup module and writing in Python 3.The link to the website i'm scraping is conveniently the link in the href if anyone needs it.Thank you in advance.</p>
<pre><code><a href="http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328">London, United Kingdom<span class="temp">70&deg;</span><span class="icon i-33-s"></span></a>
from bs4 import*
import requests
data = requests.get("http://www.accuweather.com/en/gb/london/ec4a-2/weather- forecast/328328")
soup = BeautifulSoup(data.text,"html.parser")
</code></pre>
| 1 | 2016-08-11T22:01:46Z | 38,907,252 | <p>You need to add a user-agent to get the correct source, then select using the tag/class names you want:</p>
<pre><code>from bs4 import *
import requests
headers = {"user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"}
data = requests.get("http://www.accuweather.com/en/gb/london/ec4a-2/weather-forecast/328328", headers=headers)
soup = BeautifulSoup(data.content)
print(soup.select_one("span.local-temp").text)
print([span.text for span in soup.select("span.temp")])
</code></pre>
<p>If we run the code, you see we get all we need:</p>
<pre><code>In [17]: headers = {
....: "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36"}
In [18]: data = requests.get("http://www.accuweather.com/en/gb/london/ec4a-2/weather-forecast/328328", headers=headers)
In [19]: soup = BeautifulSoup(data.content, "html.parser")
In [20]: print(soup.find("span", "local-temp").text)
18°C
In [21]: print("\n".join([span.text for span in soup.select("span.temp")]))
18°
31°
30°
25°
</code></pre>
| 0 | 2016-08-11T22:49:06Z | [
"python",
"beautifulsoup",
"web-crawler",
"href"
] |
converting pyspark dataframe to labelled point object | 38,906,803 | <pre><code>df:
[Row(split(value,,)=[u'21.0', u'1',u'2']),Row(split(value,,)=[u'22.0', u'3',u'4'])]
</code></pre>
<p>How can I convert each row of df into a <code>LabeledPoint</code> object, which consists of a label and features, where the first value is the label and the remaining two values are the features?</p>
<pre><code>mycode:
df.map(lambda row:LabeledPoint(row[0],row[1: ]))
</code></pre>
<p>It does not seem to work; I'm new to Spark, so any suggestions would be helpful.</p>
| 1 | 2016-08-11T22:02:40Z | 38,916,771 | <p>If you want to obtain an <code>RDD</code> you need to create a function to parse your <code>Array</code> of <code>String</code>.</p>
<pre class="lang-py prettyprint-override"><code>a = sc.parallelize([([u'21.0', u'1',u'2'],),([u'22.0', u'3',u'4'],)]).toDF(["value"])
a.printSchema()
#root
#|-- value: array (nullable = true)
#| |-- element: string (containsNull = true)
</code></pre>
<p>To achieve this check my function.</p>
<pre class="lang-py prettyprint-override"><code>def parse(l):
l = [float(x) for x in l]
return LabeledPoint(l[0], l[1:])
</code></pre>
<p>After defining such function, <code>map</code> your <code>DataFrame</code> in order to <code>map</code> its internal <code>RDD</code>.</p>
<pre class="lang-py prettyprint-override"><code>a.map(lambda l: parse(l[0])).take(2)
# [LabeledPoint(21.0, [1.0,2.0]), LabeledPoint(22.0, [3.0,4.0])]
</code></pre>
<p>Here you can find the <a href="https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/2485090270202665/2400794110066361/8589256059752547/latest.html" rel="nofollow">published notebook</a> where I tested everything.</p>
<p>PD: If you use <code>toDF</code> you will obtain two columns (features and label).</p>
| 3 | 2016-08-12T11:21:50Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-mllib",
"pyspark-sql"
] |
pyqtgraph facilities to pcolor plot like matplotlib | 38,906,862 | <p>Is there any way to make a pcolor plot in pyqtgraph, like the ones matplotlib can produce,
and is there any good example?<br>
How can we add axes to a pyqtgraph plot? </p>
| 0 | 2016-08-11T22:08:10Z | 38,908,121 | <p>Try this to get started:</p>
<pre><code>import pyqtgraph as pg
x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
pg.plot(x, y, title="Simple Example", labels={'left': 'Here is the y-axis', 'bottom': 'Here is the x-axis'})
</code></pre>
<p>I looked at the documentation <a href="http://www.pyqtgraph.org/documentation/graphicsItems/plotitem.html" rel="nofollow">here</a> to construct that example.</p>
<p>You can also try:</p>
<pre><code>import pyqtgraph.examples
pyqtgraph.examples.run()
</code></pre>
<p>to see more examples</p>
<p>Also see <a href="https://groups.google.com/forum/#!topic/pyqtgraph/pNj7n0v2T2Q" rel="nofollow">this google groups question</a>.</p>
| 0 | 2016-08-12T00:35:35Z | [
"python",
"matplotlib",
"pyqtgraph"
] |
Configuring Pycharm to use Python 1.5.2+ | 38,906,879 | <p>I have to write an application for a GSM Modem from Telit that uses Python 1.5.2+</p>
<p>I have been using PyCharm CE only as an editor for my code but debugging and testing my code directly on modem hardware.</p>
<p>I've tried to configure my project to use Python 1.5.2+ and PyCharm calls it as "Unknown" as seen below:</p>
<p><a href="http://i.stack.imgur.com/Y18Iz.png" rel="nofollow"><img src="http://i.stack.imgur.com/Y18Iz.png" alt="PyCharm Python 1.5.2+ configuration"></a></p>
<p>I also tried use Python Console but PyCharm doesn't support it. </p>
<p>I don't have enough experience with PyCharm, so I only want to confirm that it doesn't support Python 1.5.2+ or I'm doing something wrong. In last case, how can I configure it to use Python 1.5.2+?</p>
| 1 | 2016-08-11T22:09:32Z | 39,332,974 | <p>The eldest Python, which PyCharm <a href="https://www.jetbrains.com/help/pycharm/2016.1/supported-languages.html" rel="nofollow">supports</a>, is 2.4.</p>
| 1 | 2016-09-05T14:40:29Z | <p>The oldest Python that PyCharm <a href="https://www.jetbrains.com/help/pycharm/2016.1/supported-languages.html" rel="nofollow">supports</a> is 2.4.</p>
"python",
"pycharm",
"gsm",
"modem",
"gprs"
] |
Python subprocess stdout.readlines() getting stuck | 38,906,906 | <p>I am trying to use the Python subprocess module to ssh to a server, switch to a superuser, and then run ls and print the folders in the terminal.</p>
<p>My Code:</p>
<pre><code>def sudo_Test():
HOST = 'Host'
PORT = '227'
USER = 'user'
cmd='sudo su - ec2-user;ls'
process = subprocess.Popen(['ssh','-tt','{}@{}'.format(USER, HOST),
'-p',PORT,cmd],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
process.stdin.write("my_password\r\n")
print "stuck here VVV"
result = process.stdout.readlines()
print "finished it"
if not result:
print "Im an error"
err = process.stderr.readlines()
print('ERROR: {}'.format(err))
else:
print "I'm a success"
print result
print sudo_Test()
</code></pre>
<p>Console output when I run this:</p>
<pre><code>dredbounds-computer:folder dredbound$ python terminal_test.py
stuck here VVV
</code></pre>
<p>For some reason the code gets stuck on the line result = process.stdout.readlines(). I have to cntrl+c to exit out of the terminal session when this happens. It works fine if I just do cmd='sudo; ls' instead of cmd='sudo su - ec2-user;ls'.Anyone know what I'm doing wrong or how I can get this to work?</p>
<p>Update:
I changed cmd='sudo su - ec2-user;ls' -> cmd='sudo su - ec2-user ls' in the code above. Now I am getting the following error:</p>
<pre><code>['password\r\n', '\r\n', '/bin/ls: /bin/ls: cannot execute binary file\r\n']
</code></pre>
<p>I'm not sure why it thinks ls is a binary file now, but is there any way I can tell it that it is just a terminal command so it returns a list of directories?</p>
| 0 | 2016-08-11T22:12:36Z | 38,907,042 | <p>The problem is here:</p>
<pre><code>sudo su - ec2-user;ls
</code></pre>
<p><code>sudo su - ec2-user</code> opens a shell and <em>waits</em> for input. You need to send the shell an exit command (or close its stdin) before the shell exits and the <code>ls</code> command is run.</p>
<p><strong>If your goal was to run <code>ls</code> as user <code>ec2-user</code></strong>, then try:</p>
<pre><code>sudo -u ec2-user ls
</code></pre>
<p>In other words, replace</p>
<pre><code>cmd='sudo su - ec2-user;ls'
</code></pre>
<p>With:</p>
<pre><code>cmd='sudo -u ec2-user ls'
</code></pre>
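<p>A side note on the blocking behaviour: <code>readlines()</code> waits for end-of-file, which never arrives while the remote <code>su</code> shell sits at a prompt. The hedged local sketch below (no ssh involved; <code>cat</code> stands in for the remote shell) shows how <code>communicate()</code> closes stdin and reads everything in one step:</p>

```python
import subprocess

# 'cat' echoes its stdin and exits when that stream closes, much like a
# remote shell exits once its input reaches EOF.
proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate(input=b'ls\n')  # closes stdin, then drains output
```

<p>With ssh you would pass the password handling to keys or <code>sshpass</code> rather than stdin, but the EOF principle is the same.</p>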
| 1 | 2016-08-11T22:26:20Z | [
"python",
"linux",
"bash",
"subprocess",
"sudo"
] |
Represent 5 dim data with seaborn/matplotlib in scatter like plots | 38,906,922 | <p>Using seaborn lmplot with the hue option we can represent 3 dimensional data e.g. 3 variables: total_bill, tip and smoker:</p>
<pre><code>>>> import seaborn as sns; sns.set(color_codes=True)
>>> tips = sns.load_dataset("tips")
>>> g = sns.lmplot(x="total_bill", y="tip", hue='smoker', data=tips, fit_reg=False)
</code></pre>
<p>How can I represent one more categorical dimension in the same scatter plot, e.g. by changing the shape of the marker?</p>
<p>Going further, is it possible to represent two more categorical variables using different marker shapes, and a fifth using the size of the marker?</p>
| 0 | 2016-08-11T22:13:52Z | 38,908,499 | <p>You can represent up to five dimensions with faceting:</p>
<pre><code>import seaborn as sns
tips = sns.load_dataset("tips")
sns.lmplot(x="total_bill", y="tip",
hue="time", col="sex", row="smoker",
size=3, data=tips)
</code></pre>
<p><a href="http://i.stack.imgur.com/9YGFp.png" rel="nofollow"><img src="http://i.stack.imgur.com/9YGFp.png" alt="enter image description here"></a></p>
<p>You can also vary the markers, but they are varied with the color and not independently of it. As @JulienD pointed out, it's extremely hard for the human visual system to actually decode patterns from data where three variables are represented independently with color, marker, and size. It's much better practice to make multiple plots with lower dimensional projections. </p>
| 1 | 2016-08-12T01:32:09Z | [
"python",
"matplotlib",
"seaborn"
] |
File Server Upload Python | 38,906,945 | <p><strong>File Server Download Problem Python 2.5.1</strong></p>
<p>So I am working on a file server as a hobby project. I am having some problems though. I can use the client to successfully upload a file to the server, but say the file to upload is 50,000 bytes (50 mbs): it will only upload something like 49,945 bytes, and if I try opening it, it says it's corrupt. If I close the server, it goes to 50,000 and then works. Is there a way to fix this without the server needing to close and reopen?</p>
<p>(Downloading Doesnt Have this Problem)</p>
<p>Full Client Code:
<a href="http://pastebin.com/P2gCqwJM" rel="nofollow">Client</a></p>
<p>Full Server:
<a href="http://pastebin.com/14KwSxyf" rel="nofollow">Server</a></p>
<p>Client Upload Function:</p>
<pre><code>def Uploader(s):
IsReal = True
data = "UploaderReady"
if data == "UploaderReady":
List = []
FilePath = dir_path = os.path.dirname(os.path.realpath(__file__))
List.append(os.listdir(FilePath))
FileUpload = raw_input("Pick a file? -> ")
for Item in List:
if FileUpload == Item:
IsReal = True #checks if item exists
if IsReal == True:
File = open(FileUpload,'rb')
bytestosend = File.read(1024)
FileSize = os.path.getsize(FileUpload)
s.send(FileUpload)
s.send(str(FileSize))
s.send(bytestosend)
while bytestosend != "":
bytestosend = File.read(8192)
s.send(bytestosend)
print"Processing"
File.close()
time.sleep(1.5)
s.send("COMPLETE")
print"File Successfully Uploaded"
time.sleep(2)
print" \n " * 10
Main()
if IsReal == "False":
print"Item doesn't Exist"
time.sleep(2)
print" \n " * 10
s.close()
Main()
</code></pre>
<p>Server Upload Function:</p>
<pre><code>Todo = sock.recv(1024)
if Todo == "U":
print str(addr)+" Uploading"
UploadingThread = threading.Thread(target=Uploader,args=(c,c,))
UploadingThread.start()
def Uploader(c,s):
filename = s.recv(1024)
filesize = s.recv(1024)
f = open(filename,'wb')
totalRecv = 0
while totalRecv < filesize:
FileContent = s.recv(8192)
totalRecv += len(FileContent)
f.write(FileContent)
print"Download Complete"
f.close()
s.close()
</code></pre>
| -2 | 2016-08-11T22:15:29Z | 38,907,247 | <p>You close the client connection on the server side, but never close it on the client side as <a href="http://stackoverflow.com/users/6061947/cory-shay">Cory Shay</a> said.<br>
Instead of closing it though, you need to <code>shutdown</code> the socket and signal that it is done writing with <a href="https://docs.python.org/2/library/socket.html#socket.socket.shutdown" rel="nofollow"><code>s.shutdown(socket.SHUT_WR)</code></a>.</p>
<p>Here's how it should look for the client:</p>
<pre><code>def Uploader(s):
IsReal = True
data = "UploaderReady"
if data == "UploaderReady":
List = []
FilePath = dir_path = os.path.dirname(os.path.realpath(__file__))
List.append(os.listdir(FilePath))
FileUpload = raw_input("Pick a file? -> ")
for Item in List:
if FileUpload == Item:
IsReal = True #checks if item exists
if IsReal == True:
File = open(FileUpload,'rb')
bytestosend = File.read(1024)
FileSize = os.path.getsize(FileUpload)
s.send(FileUpload)
s.send(str(FileSize))
s.send(bytestosend)
while bytestosend != "":
bytestosend = File.read(8192)
s.send(bytestosend)
print"Processing"
s.shutdown(socket.SHUT_WR) # End the writing stream
print(s.recv(1024)) # Expecting the server to say 'upload complete'
            File.close()
            s.close() # close the socket now that the server has acknowledged
            time.sleep(1.5)
print"File Successfully Uploaded"
time.sleep(2)
print" \n " * 10
Main()
</code></pre>
<p>and the server:</p>
<pre><code>def Uploader(c,s):
filename = s.recv(1024)
    filesize = int(s.recv(1024))  # arrives as a string; convert before comparing with totalRecv
f = open(filename,'wb')
totalRecv = 0
while totalRecv < filesize:
FileContent = s.recv(8192)
totalRecv += len(FileContent)
f.write(FileContent)
s.send("Upload Complete!") # Tell client the upload is complete
print"Download Complete"
f.close()
s.close() # Close the socket
</code></pre>
<p>Also, you are passing the server <code>Uploader</code> 2 identical arguments, and only using one, instead you should just pass one:</p>
<pre><code>UploadingThread = threading.Thread(target=Uploader,args=(c,c,))
# should be
UploadingThread = threading.Thread(target=Uploader,args=(c,))
</code></pre>
<p>Similarly, your password thread only needs 2:</p>
<pre><code>c, addr = s.accept()
print"Client Connection: <"+str(addr)+">"
PasswordThread = threading.Thread(target=Password,args=(c,addr))
def Password(c,addr):
c.send("WAITINGPASSWORD")
PASSWORD = "123"
password = c.recv(1024)
</code></pre>
<p>and your checking password function can be simpler:</p>
<pre><code>def Password(c,addr):
password = "123"
c.send("WAITINGPASSWORD")
attempt = c.recv(1024)[::-1]
if attempt == password:
doStuff()
</code></pre>
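<p>A minimal local sketch of the <code>shutdown(SHUT_WR)</code> pattern (using <code>socketpair</code>, so it runs without any network or file transfer) may help show why the upload can finish while the acknowledgement still gets back:</p>

```python
import socket

client, server = socket.socketpair()

client.sendall(b'file contents')
client.shutdown(socket.SHUT_WR)  # EOF for the server's reads; client can still recv

received = b''
while True:
    chunk = server.recv(4096)
    if not chunk:  # recv returns b'' once the peer's write side is shut down
        break
    received += chunk

server.sendall(b'Upload Complete!')  # the acknowledgement still reaches the client
ack = client.recv(1024)

client.close()
server.close()
```

<p>Shutting down only the write side is what lets the server see end-of-file for the upload while the reply channel stays open.</p>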
| 0 | 2016-08-11T22:48:36Z | [
"python",
"file",
"upload",
"server",
"client"
] |
Generating new list of grouped items within a set | 38,907,013 | <p>I'm trying to figure out how to convert the following:</p>
<pre><code>results = [(516L, u'dupe', u'dupe', 106L), (517L, u'dupe', u'dupe', 106L), (518L, u'testing', u'testing', 106L), (519L, u'testing', u'testing', 106L), (523L, u'duplicate', u'duplicate', 88L), (524L, u'duplicate', u'duplicate', 88L)]
</code></pre>
<p>into a new list like this:</p>
<pre><code>results = [
[(516L, u'dupe', u'dupe', 106L), (517L, u'dupe', u'dupe', 106L)],
[(518L, u'testing', u'testing', 106L), (519L, u'testing', u'testing', 106L)],
[(523L, u'duplicate', u'duplicate', 88L), (524L, u'duplicate', u'duplicate', 88L)]
]
</code></pre>
<p>In that they are all grouped by the 3rd index within the set.</p>
<p>I tried something like this:</p>
<pre><code>[list(v) for k,v in groupby(results[3])]
</code></pre>
<p>but that returns the 3rd item, and not the 3rd index. Is groupby the right thing to be using here?</p>
| 0 | 2016-08-11T22:22:33Z | 38,907,052 | <p>Almost there!</p>
<pre><code>[list(v) for k, v in groupby(results, key=lambda x: x[2])]
#                                     ^ grouping key: 0-based index 2
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>[(516L, u'dupe', u'dupe', 106L), (517L, u'dupe', u'dupe', 106L)], [(518L, u'testing', u'testing', 106L), (519L, u'testing', u'testing', 106L)], [(523L, u'duplicate', u'duplicate', 88L), (524L, u'duplicate', u'duplicate', 88L)]]
</code></pre>
<p>If you do not want a lambda, then <code>from operator import itemgetter</code> and then pass in <code>key=itemgetter(2)</code>.</p>
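<p>As a short sketch of the <code>itemgetter</code> variant (written for Python 3, so no <code>L</code> suffixes -- and note that <code>groupby</code> only groups <em>consecutive</em> items, so the input must already be ordered by the grouping key):</p>

```python
from itertools import groupby
from operator import itemgetter

results = [(516, 'dupe', 'dupe', 106), (517, 'dupe', 'dupe', 106),
           (518, 'testing', 'testing', 106), (519, 'testing', 'testing', 106),
           (523, 'duplicate', 'duplicate', 88), (524, 'duplicate', 'duplicate', 88)]

# itemgetter(2) pulls out the element at 0-based index 2 of each tuple
grouped = [list(v) for k, v in groupby(results, key=itemgetter(2))]
print(len(grouped))  # 3
```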
| 2 | 2016-08-11T22:27:00Z | [
"python",
"list-comprehension"
] |
Read txt files sequentially using Python | 38,907,054 | <p>I have a problem reading txt files in order. I would like to read the files in the following order:</p>
<pre><code>0.txt
1.txt
2.txt
...
10.txt
11.txt
...
19.txt
20.txt
21.txt
...
</code></pre>
<p>However, the following code</p>
<pre><code>import os
path = "temp/"
dirs = os.listdir(path)
for filename in sorted(dirs):
    print filename
</code></pre>
<p>returns</p>
<pre><code>0.txt
1.txt
10.txt
11.txt
...
19.txt
2.txt
20.txt
...
</code></pre>
<p>Any suggestion?</p>
| 1 | 2016-08-11T22:27:21Z | 38,907,085 | <p>You are sorting the names lexicographically; instead, you can use a key function to sort them based on their integer value:</p>
<pre><code>for filename in sorted(dirs, key=lambda x: int(x.split('.')[0])):
    print filename
</code></pre>
<p>Note that if one of your file names doesn't follow the proper format, <code>sorted</code> might raise an exception.</p>
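<p>One way to guard against that (a hedged sketch -- the fallback tuple key is just one option): sort purely numeric names by value first, and everything else lexicographically after them:</p>

```python
import re

def natural_key(name):
    # "<n>.txt" names sort first, by integer value;
    # anything else falls back to lexicographic order after them
    m = re.match(r'(\d+)\.txt$', name)
    return (0, int(m.group(1))) if m else (1, name)

files = ['10.txt', '2.txt', 'notes.txt', '1.txt']
print(sorted(files, key=natural_key))  # ['1.txt', '2.txt', '10.txt', 'notes.txt']
```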
| 3 | 2016-08-11T22:30:00Z | [
"python"
] |
How to convert a very large image(tif) file into an array using Python | 38,907,079 | <p>I have a <strong>tif</strong> file of size ~2GB. I want to convert it into a numpy array for further processing.
I tried to open the image using PIL.Image.open("FileName") and then add it to a numpy array, but I am getting the error:</p>
<p><strong>IOError: cannot identify image file</strong> </p>
<p>The file format is correct and also the location is accurately specified. Can you provide some information on why it might be happening? Do you think it has to do with the file size? </p>
| 0 | 2016-08-11T22:29:16Z | 38,907,204 | <p>You can try Scipy:</p>
<pre><code>from scipy import misc
f = misc.face()
misc.imsave('face.png', f) # uses the Image module (PIL)
import matplotlib.pyplot as plt
plt.imshow(f)
plt.show()
</code></pre>
<p>--Source: <a href="http://www.scipy-lectures.org/advanced/image_processing/" rel="nofollow">http://www.scipy-lectures.org/advanced/image_processing/</a></p>
| 0 | 2016-08-11T22:43:24Z | [
"python",
"numpy",
"image-processing"
] |
How to convert a very large image(tif) file into an array using Python | 38,907,079 | <p>I have a <strong>tif</strong> file of size ~2GB. I want to convert it into a numpy array for further processing.
I tried to open the image using PIL.Image.open("FileName") and then add it to a numpy array, but I am getting the error:</p>
<p><strong>IOError: cannot identify image file</strong> </p>
<p>The file format is correct and also the location is accurately specified. Can you provide some information on why it might be happening? Do you think it has to do with the file size? </p>
| 0 | 2016-08-11T22:29:16Z | 38,941,411 | <p><a href="http://www.vips.ecs.soton.ac.uk/index.php?title=VIPS" rel="nofollow">vips</a> has good support for large files, and a <a href="http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/using-from-python.html" rel="nofollow">convenient high-level Python binding</a>, you could try that. </p>
<p>You can load images to memory like this:</p>
<pre><code>$ python
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gi
>>> gi.require_version('Vips', '8.0')
>>> from gi.repository import Vips
>>> im = Vips.Image.new_from_file("summer8x3.tif")
>>> im.width
18008
>>> im.height
22764
>>> y = im.write_to_memory()
>>> type(y)
<type 'str'>
>>> len(y)
1229802336
</code></pre>
<p>And then make a numpy array from that string in the usual way. </p>
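<p>That "usual way" is <code>numpy.frombuffer</code> plus a reshape. A sketch with a synthetic buffer standing in for the <code>write_to_memory()</code> string (with a real vips image you would use its height, width, and band count instead, assuming an 8-bit image):</p>

```python
import numpy as np

# stand-in for write_to_memory(): a 2x3 RGB image, one byte per sample
height, width, bands = 2, 3, 3
y = bytes(bytearray(range(height * width * bands)))

arr = np.frombuffer(y, dtype=np.uint8).reshape(height, width, bands)
print(arr.shape)  # (2, 3, 3)
```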
<p>What kind of further processing are you planning? You might be able to do what you need just using vips. It'd be a lot quicker. </p>
| 1 | 2016-08-14T10:41:08Z | [
"python",
"numpy",
"image-processing"
] |
Find IP from CIDR | 38,907,080 | <p>I am searching for IPs listed in <code>s1</code> in CIDR listed in <code>s2</code>. So basically, I want to expand all elements of <code>s2</code> and search for IPs. Not sure how to get IPs from <code>s1</code>, if use <code>re.compile</code>. I am not getting ip4:, how do I remove the ip4: part</p>
<pre><code>import re
s1 = ["ip4:64.18.10.101", "ip4:66.102.10.10", "spf1", "mx"]
s2 = "v=spf1 ip4:64.18.0.0/20 ip4:66.102.0.0/20 ~all"
list = s2.split(' ')
regex = re.compile("ip.*")
for ip in list:
    if re.search(regex, ip):
        print ip
</code></pre>
| 0 | 2016-08-11T22:29:25Z | 38,907,845 | <p>Finally I wrote this:</p>
<pre><code>#!/usr/bin/python
import os
import re
from netaddr import IPNetwork
import subprocess
import shlex

cmd = '/usr/bin/dig +short -t TXT _netblocks.google.com'
proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
s1, err = proc.communicate()

cmd3 = '/usr/bin/dig +short -t TXT _netblocks3.google.com'
proc = subprocess.Popen(shlex.split(cmd3), stdout=subprocess.PIPE)
s3, err = proc.communicate()

s2 = ''.join([s1, s3])
regex = re.compile(r"\d+\.\d+\.\d+\.\d+\/[0-9][0-9]")
cidrs = re.findall(regex, s2)

# Expand every CIDR block into individual addresses
f2 = open("s1.txt", "w")
for ip in cidrs:
    for ip1 in IPNetwork(ip):
        # print ip1
        f2.write(str(ip1) + '\n')
f2.close()
#######################
f = open("/var/log/iptables.log")
f1 = open("s_out.txt", "w")
pattern = re.compile(r"\bSRC\=\d+\.\d+\.\d+\.\d+")
for line in f:
    for match in re.findall(pattern, line):
        line1 = re.split("=", match)
        f1.write(line1[1] + '\n')
f1.close()
#######################
with open('s_out.txt', 'r') as f1:
    with open('s1.txt', 'r') as f2:
        same = set(f1).intersection(f2)
same.discard('\n')
with open('outfile.txt', 'w') as file_out:
    for line in same:
        file_out.write(line)
uniqlines = set(open('outfile.txt').readlines())
bar = open('output_file.txt', 'w').writelines(uniqlines)
</code></pre>
| 0 | 2016-08-11T23:56:03Z | [
"python"
] |
Trying to extract meta data from news article | 38,907,114 | <p>I am trying to extract the meta tags from a cnn article </p>
<pre><code>import httplib2
from bs4 import BeautifulSoup
http = httplib2.Http()
status, response = http.request('http://www.cnn.com/2016/08/09/health/chagas-sleeping-sickness-leishmaniasis-drug/index.html')
soup = BeautifulSoup(response)
print(soup.select('body > div.pg-right-rail-tall.pg-wrapper.pg__background__image > article > meta'))
</code></pre>
<p>I am trying to narrow it down to just this output</p>
<pre><code><meta content="health" itemprop="articleSection"><meta content="2016-08-09T12:10:24Z" itemprop="dateCreated"><meta content="2016-08-09T12:10:24Z" itemprop="datePublished"><meta content="2016-08-09T12:10:24Z" itemprop="dateModified"><meta content="http://www.cnn.com/2016/08/09/health/chagas-sleeping-sickness-leishmaniasis-drug/index.html" itemprop="url"><meta content="Meera Senthilingam, for CNN" itemprop="author"><meta content="Could one discovery take on three deadly parasites? - CNN.com" itemprop="headline"><meta content="Three seemingly different diseases infect 20 million people each year: Chagas disease, leishmaniasis and African sleeping sickness. But one drug could be developed to fight all three." itemprop="description"><meta content="sleeping sickness, disease, drug, drug development, chagas disease, leishmaniasis, Novartis, health, Could one discovery take on three deadly parasites? - CNN.com" itemprop="keywords"><meta content="http://i2.cdn.turner.com/cnnnext/dam/assets/150812101743-chagas-bug-large-tease.jpg" itemprop="image"><meta content="http://i2.cdn.turner.com/cnnnext/dam/assets/150812101743-chagas-bug-large-tease.jpg" itemprop="thumbnailUrl"><meta content="Could one discovery take on three deadly parasites? " itemprop="alternativeHeadline">
</code></pre>
<p>but for some reason the <code>BeautifulSoup.select()</code> method is returning about 100x as much of the html as I want. I would really appreciate any advice on how to fix this problem.</p>
| 1 | 2016-08-11T22:32:29Z | 38,907,170 | <p>It is the parser/HTML that is the issue; <em>lxml</em> and <em>html5lib</em> give you what you want.</p>
<pre><code>soup = BeautifulSoup(response,"lxml")
</code></pre>
<p>Or:</p>
<pre><code> soup = BeautifulSoup(response,"html5lib")
</code></pre>
<p>If you don't have <em>lxml</em> or <em>html5lib</em> installed, you can install <em>html5lib</em> using pip. <em>lxml</em> is a bit more involved depending on your OS, as it has a few dependencies, but it is definitely worth installing.</p>
<p>You can also simplify your select:</p>
<pre><code>soup.select('div.pg-right-rail-tall.pg-wrapper.pg__background__image meta')
</code></pre>
| 1 | 2016-08-11T22:39:57Z | [
"python",
"beautifulsoup"
] |
What do I need to place in my `__init__.py` files to simplify this package's interface? | 38,907,151 | <p>I have a package that works with the following files:</p>
<pre><code># package/sub1/obj1.py
class Obj1:
    pass


# package/sub2/obj2.py
from ..sub1.obj1 import Obj1

class Obj2(Obj1):
    pass
</code></pre>
<p>Currently, with a file at the same level as the <code>package</code> folder, I need to do this:</p>
<pre><code>from package.sub1.obj1 import Obj1
from package.sub2.obj2 import Obj2
</code></pre>
<p>I want to be able to use the following:</p>
<pre><code>from package.sub1 import Obj1 # can reference package.sub1.Obj1
from package.sub2 import Obj2 # can reference package.sub2.Obj2
from package import * # Can reference both
</code></pre>
<p>I know I need to edit one or more of the three existing <code>__init__.py</code> files, and I <em>think</em> I'll need to set the <code>__all__</code> variable in one or more of them, but I can't seem to figure out what the proper use is for what I want. Am I going about this incorrectly?</p>
| 2 | 2016-08-11T22:37:30Z | 38,907,399 | <p>Assuming you don't have anything else in the <code>__init__.py</code> files except for the following, this should work:</p>
<pre><code># package/sub1/__init__.py
from package.sub1.obj1 import Obj1
# package/sub2/__init__.py
from package.sub2.obj2 import Obj2
# package/__init__.py
from package.sub1 import Obj1
from package.sub2 import Obj2
</code></pre>
<p>After this your 3 use cases should work. You should not need a <code>__all__</code> unless you have other stuff in the <code>__init__.py</code> and keep in mind this only affects the load all syntax <code>from package import *</code>.</p>
<p>One benefit to the <code>package/__init__.py</code> file using the aliased imports for <code>Obj1</code> and <code>Obj2</code> is that you can restructure the entirety of the subpackages and not have to change your <code>package/__init__.py</code> as long as the <code>package/sub*/__init__.py</code> files reflect the new structure.</p>
<p><strong>Edit: Flat vs Nested</strong></p>
<p>(copied from other answer's comment)</p>
<p>It is best for the user of your package that your package appears as flat as possible, but for developers it may be any structure. For example if your package is mainly used via the Obj1 class it would suck to have to do <code>from package.sub1.obj1 import Obj1</code>. On the other hand it would suck as a developer to have to organize all the Obj1 dependencies in one large package/module with the rest of the package. Using the imports the way we've described will make it appear flat while being organized underneath.</p>
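<p>To check all three import styles end-to-end, here is a self-contained sketch (written for Python 3) that writes the layout above into a temporary directory and imports from it:</p>

```python
import os
import sys
import tempfile

# build the package layout described above in a temp directory
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'package')
os.makedirs(os.path.join(pkg, 'sub1'))
os.makedirs(os.path.join(pkg, 'sub2'))

sources = {
    ('sub1', 'obj1.py'): "class Obj1:\n    pass\n",
    ('sub1', '__init__.py'): "from package.sub1.obj1 import Obj1\n",
    ('sub2', 'obj2.py'): "from ..sub1.obj1 import Obj1\n\n\nclass Obj2(Obj1):\n    pass\n",
    ('sub2', '__init__.py'): "from package.sub2.obj2 import Obj2\n",
    ('__init__.py',): "from package.sub1 import Obj1\nfrom package.sub2 import Obj2\n",
}
for parts, src in sources.items():
    with open(os.path.join(pkg, *parts), 'w') as fh:
        fh.write(src)

sys.path.insert(0, root)
from package.sub1 import Obj1
from package.sub2 import Obj2
import package

print(issubclass(Obj2, Obj1))   # True
print(package.Obj1 is Obj1)     # True
```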
| 3 | 2016-08-11T23:04:13Z | [
"python"
] |
What do I need to place in my `__init__.py` files to simplify this package's interface? | 38,907,151 | <p>I have a package that works with the following files:</p>
<pre><code># package/sub1/obj1.py
class Obj1:
    pass


# package/sub2/obj2.py
from ..sub1.obj1 import Obj1

class Obj2(Obj1):
    pass
</code></pre>
<p>Currently, with a file at the same level as the <code>package</code> folder, I need to do this:</p>
<pre><code>from package.sub1.obj1 import Obj1
from package.sub2.obj2 import Obj2
</code></pre>
<p>I want to be able to use the following:</p>
<pre><code>from package.sub1 import Obj1 # can reference package.sub1.Obj1
from package.sub2 import Obj2 # can reference package.sub2.Obj2
from package import * # Can reference both
</code></pre>
<p>I know I need to edit one or more of the three existing <code>__init__.py</code> files, and I <em>think</em> I'll need to set the <code>__all__</code> variable in one or more of them, but I can't seem to figure out what the proper use is for what I want. Am I going about this incorrectly?</p>
| 2 | 2016-08-11T22:37:30Z | 38,907,428 | <p>You could make it work by adding the following to your <code>__init__.py</code> files:</p>
<p>In <code>package/sub1/__init__.py</code>:</p>
<pre><code>from .obj1 import Obj1
</code></pre>
<p>In <code>package/sub2/__init__.py</code>:</p>
<pre><code>from .ob2 import Obj2
</code></pre>
<p>In <code>package/__init__.py</code>:</p>
<pre><code>__all__ = ['sub1', 'sub2']
</code></pre>
<p>Now top-level code can do any of the imports you asked about, and they'll make <code>package.subX.ObjX</code> a valid name.</p>
<p>Is this a good design? Not really. One of the principles of good Python design is that <a href="https://www.python.org/dev/peps/pep-0020/" rel="nofollow">"flat is better than nested"</a>. As <a href="http://stackoverflow.com/users/2357112/user2357112">user2357112</a> commented, unlike some other languages, Python doesn't require each class to have its own file. Since you want the <code>Obj1</code> and <code>Obj2</code> classes to be accessible from the <code>package.sub1</code> and <code>package.sub2</code> namespaces, respectively, it might make more sense to combine the files from the <code>subX</code> folders into a single <code>subX.py</code> file (replacing the subpackage with a single larger module).</p>
| 3 | 2016-08-11T23:06:55Z | [
"python"
] |
KeyProperty or IntegerID Look-up to Model One-to-Many Relationship Google App Engine | 38,907,174 | <p>this may be a super dumb question but here goes:</p>
<p>On App Engine, if I want to model a one-to-many relationship, should I store the referenced entity as a KeyProperty, or should I simply store the numeric ID of the entity (auto-generated by App Engine) as an IntegerProperty?</p>
<p>For example:</p>
<pre><code>class Book(ndb.Model):
    author = ndb.KeyProperty(kind='Author')  # string form: Author is defined below


class Author(ndb.Model):
    name = ndb.StringProperty()
</code></pre>
<p>OR:</p>
<pre><code>class Book(ndb.Model):
    author_id = ndb.IntegerProperty()


class Author(ndb.Model):
    books = ndb.StringProperty()
</code></pre>
<p>In the second example, if I want to get the Author of a Book, I can do:</p>
<pre><code>key = ndb.Key('Author', Book.author_id)
author = key.get()
</code></pre>
<p>Would one way be more efficient than the other? If the application is at a large scale, for example an Author can have 1 million books, and I need to query for all the books for that Author, would one way be more efficient than the other?</p>
<p>I'm leaning toward the second way, simply because I feel it would be less storage to store the ID rather than a complex Key object, especially if I need to output JSON, it would be hard to serialize the Key object</p>
| 1 | 2016-08-11T22:40:17Z | 38,908,326 | <p>There is a method to produce Web-safe strings from keys, but keys do take much more space than IDs. There is no reason to use Keys for references with an exception of some rare use cases when you need to know the parent entity and decide that it's easier to store a full key than pass ID of an entity and ID of a parent (and maybe even parent of a parent) each time.</p>
<p>Also, if a property is included in composite indexes, the difference in size becomes even more significant.</p>
| 0 | 2016-08-12T01:04:35Z | [
"python",
"google-app-engine",
"one-to-many"
] |
Regex - Non grouping and conditional block does not match | 38,907,215 | <p>I have this pattern:</p>
<pre><code>>>> pat = r'(?:.*)?(name)|nombres?'
</code></pre>
<p>When I test:</p>
<pre><code>>>> import re
>>> re.search('nombre', pat).group()
>>> 'nombre'
>>> re.search('name', pat).group()
>>> 'name'
</code></pre>
<p>But</p>
<pre><code>>>> re.search('first_name', pat).group()
>>> *** AttributeError: 'NoneType' object has no attribute 'group'
</code></pre>
| 0 | 2016-08-11T22:44:54Z | 38,907,259 | <p>You have the arguments in the wrong order. The pattern goes first.</p>
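<p>With the arguments swapped back, the original pattern behaves as expected (a quick sketch):</p>

```python
import re

pat = r'(?:.*)?(name)|nombres?'
m = re.search(pat, 'first_name')   # pattern first, then the string to scan
print(m.group(), m.group(1))  # first_name name
```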
| 2 | 2016-08-11T22:49:54Z | [
"python",
"regex"
] |
Regex - Non grouping and conditional block does not match | 38,907,215 | <p>I have this pattern:</p>
<pre><code>>>> pat = r'(?:.*)?(name)|nombres?'
</code></pre>
<p>When I test:</p>
<pre><code>>>> import re
>>> re.search('nombre', pat).group()
>>> 'nombre'
>>> re.search('name', pat).group()
>>> 'name'
</code></pre>
<p>But</p>
<pre><code>>>> re.search('first_name', pat).group()
>>> *** AttributeError: 'NoneType' object has no attribute 'group'
</code></pre>
| 0 | 2016-08-11T22:44:54Z | 38,907,313 | <p>As already answered, the arguments are swapped; it should be:</p>
<pre><code>re.search(pat, 'first_name').group()
</code></pre>
<p>I'd say also that you may want to check if the pattern has actually matched before trying to extract the group match:</p>
<pre><code>result = re.search(pat, 'first_name')
if result:
    print(result.group())
else:
    print("not found")
</code></pre>
| 1 | 2016-08-11T22:56:05Z | [
"python",
"regex"
] |
Detection of irregular shapes using houghcircle function opencv python | 38,907,219 | <p>I am currently doing circle detection on images look like this one, but some of the drops merge and form some irregular shapes(red marks in the original image). I am using houghcircle function in opencv to detect circles. For those irregular shapes, the function can only detect them as several small circles, but I really want the program to consider irregular shape as an entire big shape and get a big circle like I draw in my output image. </p>
<p><a href="http://i.stack.imgur.com/dPj7N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dPj7N.jpg" alt="Original image"></a></p>
<p><a href="http://i.stack.imgur.com/DmQGg.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/DmQGg.jpg" alt="Output image"></a></p>
<p>My code will detect all the circles and get diameters of them.</p>
<p>Here is my code:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt
import numpy as np
from math import pi


def circles(filename, p1, p2, minR, maxR):
    # print(filename)
    img = cv2.imread(filename, 0)
    img = img[0:1000, 0:1360]
    l = len(img)
    w = len(img[1])
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 25,
                               param1=int(p1), param2=int(p2),
                               minRadius=int(minR), maxRadius=int(maxR))
    diameter = open(filename[:-4] + "_diamater.txt", "w")
    diameter.write("Diameters(um)\n")
    for i in circles[0, :]:
        diameter.write(str(i[2] * 1.29 * 2) + "\n")
    diameter.close()

    count = 0
    d = []
    area = []
    for i in circles[0, :]:
        cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
        count += 1
        d += [i[2] * 2]
        area += [i[2] * i[2] * pi * 1.286 * 1.286]
    f = filename.split("/")[-1]
    cv2.imwrite(filename[:-4] + "_circle.jpg", cimg)
    # cv2.imwrite("test3/edge.jpg", edges)
    print "Number of Circles is %d" % count

    diaM = []
    for i in d:
        diaM += [i * 1.286]
    bWidth = range(int(min(diaM)) - 10, int(max(diaM)) + 10, 2)
    txt = '''
    Sample name: %s
    Average diameter(um): %f    std: %f
    Drop counts: %d
    Average coverage per drop(um^2): %f    std: %f
    ''' % (f, np.mean(diaM), np.std(diaM), count, np.mean(area), np.std(area))

    fig = plt.figure()
    fig.suptitle('Histogram of Diameters', fontsize=14, fontweight='bold')
    ax1 = fig.add_axes((.1, .4, .8, .5))
    ax1.hist(diaM, bins=bWidth)
    ax1.set_xlabel('Diameter(um)')
    ax1.set_ylabel('Frequency')
    fig.text(.1, .1, txt)
    plt.savefig(filename[:-4] + '_histogram.jpg')
    plt.clf()

    print "Total area is %d" % (w * l)
    print "Total covered area is %d" % (np.sum(area))
    rt = ("Number of Circles is " + str(count) + "\n" +
          "Coverage percent is " + str(np.divide(np.sum(area), (w * l))) + "\n")
    return rt
</code></pre>
| 4 | 2016-08-11T22:45:16Z | 38,907,373 | <p>If you still want to use the HoughCircles function, you could just see if two circles overlap and make a new circle out of them.</p>
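<p>A hedged sketch of that idea in plain Python (circles as <code>(x, y, r)</code> tuples; the helper names are made up for illustration, and the merge returns the smallest circle enclosing both inputs):</p>

```python
import math

def overlap(c1, c2):
    # circles intersect when the center distance is below the radius sum
    return math.hypot(c2[0] - c1[0], c2[1] - c1[1]) < c1[2] + c2[2]

def merge(c1, c2):
    # smallest circle enclosing two circles (x, y, r)
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d + min(r1, r2) <= max(r1, r2):   # one circle contains the other
        return c1 if r1 >= r2 else c2
    r = (d + r1 + r2) / 2.0
    t = (r - r1) / d                     # slide the center from c1 towards c2
    return (x1 + (x2 - x1) * t, y1 + (y2 - y1) * t, r)

a, b = (0.0, 0.0, 1.0), (1.5, 0.0, 1.0)
if overlap(a, b):
    print(merge(a, b))  # (0.75, 0.0, 1.75)
```

<p>Repeating the merge until no two circles overlap turns each cluster of small Hough circles into one big enclosing circle.</p>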
| 1 | 2016-08-11T23:02:08Z | [
"python",
"opencv",
"image-processing"
] |
Detection of irregular shapes using houghcircle function opencv python | 38,907,219 | <p>I am currently doing circle detection on images look like this one, but some of the drops merge and form some irregular shapes(red marks in the original image). I am using houghcircle function in opencv to detect circles. For those irregular shapes, the function can only detect them as several small circles, but I really want the program to consider irregular shape as an entire big shape and get a big circle like I draw in my output image. </p>
<p><a href="http://i.stack.imgur.com/dPj7N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dPj7N.jpg" alt="Original image"></a></p>
<p><a href="http://i.stack.imgur.com/DmQGg.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/DmQGg.jpg" alt="Output image"></a></p>
<p>My code will detect all the circles and get diameters of them.</p>
<p>Here is my code:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt
import numpy as np
from math import pi


def circles(filename, p1, p2, minR, maxR):
    # print(filename)
    img = cv2.imread(filename, 0)
    img = img[0:1000, 0:1360]
    l = len(img)
    w = len(img[1])
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 25,
                               param1=int(p1), param2=int(p2),
                               minRadius=int(minR), maxRadius=int(maxR))
    diameter = open(filename[:-4] + "_diamater.txt", "w")
    diameter.write("Diameters(um)\n")
    for i in circles[0, :]:
        diameter.write(str(i[2] * 1.29 * 2) + "\n")
    diameter.close()

    count = 0
    d = []
    area = []
    for i in circles[0, :]:
        cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
        count += 1
        d += [i[2] * 2]
        area += [i[2] * i[2] * pi * 1.286 * 1.286]
    f = filename.split("/")[-1]
    cv2.imwrite(filename[:-4] + "_circle.jpg", cimg)
    # cv2.imwrite("test3/edge.jpg", edges)
    print "Number of Circles is %d" % count

    diaM = []
    for i in d:
        diaM += [i * 1.286]
    bWidth = range(int(min(diaM)) - 10, int(max(diaM)) + 10, 2)
    txt = '''
    Sample name: %s
    Average diameter(um): %f    std: %f
    Drop counts: %d
    Average coverage per drop(um^2): %f    std: %f
    ''' % (f, np.mean(diaM), np.std(diaM), count, np.mean(area), np.std(area))

    fig = plt.figure()
    fig.suptitle('Histogram of Diameters', fontsize=14, fontweight='bold')
    ax1 = fig.add_axes((.1, .4, .8, .5))
    ax1.hist(diaM, bins=bWidth)
    ax1.set_xlabel('Diameter(um)')
    ax1.set_ylabel('Frequency')
    fig.text(.1, .1, txt)
    plt.savefig(filename[:-4] + '_histogram.jpg')
    plt.clf()

    print "Total area is %d" % (w * l)
    print "Total covered area is %d" % (np.sum(area))
    rt = ("Number of Circles is " + str(count) + "\n" +
          "Coverage percent is " + str(np.divide(np.sum(area), (w * l))) + "\n")
    return rt
</code></pre>
| 4 | 2016-08-11T22:45:16Z | 38,917,817 | <p>You can use <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minenclosingcircle#minenclosingcircle" rel="nofollow">minEnclosingCircle</a> for this. Find the contours of your image, then apply the function to detect the shapes as circles.</p>
<p>Below is a simple example with C++ code. In your case, I feel you should use a combination of Hough circles and minEnclosingCircle: since some circles in your image are very close to each other, there's a chance that they may be detected as a single contour.</p>
<p>input image:</p>
<p><a href="http://i.stack.imgur.com/VSfBA.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/VSfBA.jpg" alt="input"></a></p>
<p>circles:</p>
<p><a href="http://i.stack.imgur.com/IEtby.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/IEtby.jpg" alt="output"></a></p>
<pre><code>Mat im = imread("circ.jpg");
Mat gr;
cvtColor(im, gr, CV_BGR2GRAY);
Mat bw;
threshold(gr, bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(bw, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
    Point2f center;
    float radius;
    minEnclosingCircle(contours[idx], center, radius);
    circle(im, Point(center.x, center.y), radius, Scalar(0, 255, 255), 2);
}
</code></pre>
| 0 | 2016-08-12T12:18:54Z | [
"python",
"opencv",
"image-processing"
] |
Detection of irregular shapes using houghcircle function opencv python | 38,907,219 | <p>I am currently doing circle detection on images look like this one, but some of the drops merge and form some irregular shapes(red marks in the original image). I am using houghcircle function in opencv to detect circles. For those irregular shapes, the function can only detect them as several small circles, but I really want the program to consider irregular shape as an entire big shape and get a big circle like I draw in my output image. </p>
<p><a href="http://i.stack.imgur.com/dPj7N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/dPj7N.jpg" alt="Original image"></a></p>
<p><a href="http://i.stack.imgur.com/DmQGg.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/DmQGg.jpg" alt="Output image"></a></p>
<p>My code will detect all the circles and get diameters of them.</p>
<p>Here is my code:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt
import numpy as np
from math import pi


def circles(filename, p1, p2, minR, maxR):
    # print(filename)
    img = cv2.imread(filename, 0)
    img = img[0:1000, 0:1360]
    l = len(img)
    w = len(img[1])
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 25,
                               param1=int(p1), param2=int(p2),
                               minRadius=int(minR), maxRadius=int(maxR))
    diameter = open(filename[:-4] + "_diamater.txt", "w")
    diameter.write("Diameters(um)\n")
    for i in circles[0, :]:
        diameter.write(str(i[2] * 1.29 * 2) + "\n")
    diameter.close()

    count = 0
    d = []
    area = []
    for i in circles[0, :]:
        cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
        count += 1
        d += [i[2] * 2]
        area += [i[2] * i[2] * pi * 1.286 * 1.286]
    f = filename.split("/")[-1]
    cv2.imwrite(filename[:-4] + "_circle.jpg", cimg)
    # cv2.imwrite("test3/edge.jpg", edges)
    print "Number of Circles is %d" % count

    diaM = []
    for i in d:
        diaM += [i * 1.286]
    bWidth = range(int(min(diaM)) - 10, int(max(diaM)) + 10, 2)
    txt = '''
    Sample name: %s
    Average diameter(um): %f    std: %f
    Drop counts: %d
    Average coverage per drop(um^2): %f    std: %f
    ''' % (f, np.mean(diaM), np.std(diaM), count, np.mean(area), np.std(area))

    fig = plt.figure()
    fig.suptitle('Histogram of Diameters', fontsize=14, fontweight='bold')
    ax1 = fig.add_axes((.1, .4, .8, .5))
    ax1.hist(diaM, bins=bWidth)
    ax1.set_xlabel('Diameter(um)')
    ax1.set_ylabel('Frequency')
    fig.text(.1, .1, txt)
    plt.savefig(filename[:-4] + '_histogram.jpg')
    plt.clf()

    print "Total area is %d" % (w * l)
    print "Total covered area is %d" % (np.sum(area))
    rt = ("Number of Circles is " + str(count) + "\n" +
          "Coverage percent is " + str(np.divide(np.sum(area), (w * l))) + "\n")
    return rt
</code></pre>
| 4 | 2016-08-11T22:45:16Z | 38,924,239 | <p>When you have such beautiful well separated and contrasted patterns, the easiest way would be to use shape indexes. See <a href="http://www.thibault.biz/Doc/Publications/ShapeAndTextureIndexesIJPRAI2013.pdf" rel="nofollow">this paper</a> or <a href="http://www.thibault.biz/Doc/Publications/PosterWSCG2008_V4.pdf" rel="nofollow">this poster</a>. In both cases you have a list of shape indexes.</p>
<p>Thanks to shape indexes, you can do the following:</p>
<ul>
<li>binarize the image</li>
<li>connected components labeling in order to separate each pattern</li>
<li>compute shape indexes (most of them use basic measures)</li>
<li>classify the pattern according to the shape index values.</li>
</ul>
<p>As in your specific case the round shapes are perfectly round, I would use the following shape indexes:</p>
<ul>
<li>Circularity =&gt; using just radii, so easiest to compute and perfect in your case.</li>
<li>Extension/elongation/stretching by radii =&gt; Perfect in your case, but the minimum enclosing ball computation is not available in all libraries.</li>
<li>Iso-perimetric deficit => really easy to compute, but a little bit less stable than the circularity because of the perimeter.</li>
</ul>
<p>These would also work in your case:</p>
<ul>
<li>Gap inscribed disk</li>
<li>Spreading of Morton</li>
<li>Deficit</li>
<li>Extension by diameters</li>
</ul>
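<p>The iso-perimetric index above is straightforward to compute; a minimal sketch in plain Python (with real contours you would feed it the <code>cv2.contourArea</code> and <code>cv2.arcLength</code> values):</p>

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P**2 -> 1.0 for a perfect disk, lower for merged or elongated blobs
    return 4.0 * math.pi * area / (perimeter ** 2)

r = 5.0
disk = circularity(math.pi * r * r, 2.0 * math.pi * r)
square = circularity(1.0, 4.0)
print(round(disk, 3), round(square, 3))  # 1.0 0.785
```

<p>Thresholding on this index separates round single drops from the merged, irregular blobs.</p>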
| 0 | 2016-08-12T18:20:37Z | [
"python",
"opencv",
"image-processing"
] |
Error when modifying list in python | 38,907,266 | <p>I have had a problem when trying to change a specific item that is in a nested list. The code I have written is in Python 2.7.
This is what I have written:</p>
<pre><code>list_1 = []
list_2 = []

infin = 25
while infin != 0:
    list_1.append((0,0,0))
    infin = infin - 1

infin = 5
while infin != 0:
    list_2.append(list_1)
    infin = infin - 1
</code></pre>
<p>Basically it makes a list that looks like this: </p>
<blockquote>
<p>[[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>]]</p>
</blockquote>
<p>then when I attempt to modify the list by doing this:</p>
<pre><code>list_2[0][0] = (1,1,1)
</code></pre>
<p>Every single list with 25 tuples in it will have (1,1,1) at the start, not just the first. Why? </p>
| 0 | 2016-08-11T22:50:36Z | 38,907,305 | <p>You're not actually appending different instances of lists; instead, you're repeatedly appending a reference to the same list. Use <code>list()</code> to avoid this.</p>
<pre><code>list_1 = []
list_2 = []

infin = 25
while infin != 0:
    list_1.append((0,0,0))
    infin = infin - 1

infin = 5
while infin != 0:
    list_2.append(list(list_1))
    infin = infin - 1
</code></pre>
| 1 | 2016-08-11T22:55:07Z | [
"python",
"list"
] |
Error when modifying list in python | 38,907,266 | <p>I have had a problem when trying to change a specific item that is in a nested list. The code I have written is in Python 2.7.
This is what I have written:</p>
<pre><code>list_1 = []
list_2 = []

infin = 25
while infin != 0:
    list_1.append((0,0,0))
    infin = infin - 1

infin = 5
while infin != 0:
    list_2.append(list_1)
    infin = infin - 1
</code></pre>
<p>Basically it makes a list that looks like this: </p>
<blockquote>
<p>[[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>]]</p>
</blockquote>
<p>then when I attempt to modify the list by doing this:</p>
<pre><code>list_2[0][0] = (1,1,1)
</code></pre>
<p>Every single list with 25 tuples in it will have (1,1,1) at the start, not just the first. Why? </p>
| 0 | 2016-08-11T22:50:36Z | 38,907,309 | <p>Because you are appending the same list <code>list_1</code> 5 times, whatever you change in <code>list_1</code> shows up 5 times -- the same list has been appended 5 times.
If you want new lists, use:</p>
<pre><code>list(list_1)
</code></pre>
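<p>A short sketch of the difference between appending a reference and appending a copy:</p>

```python
list_1 = [(0, 0, 0)] * 3

shared = [list_1, list_1]              # two references to one and the same list
copied = [list(list_1), list(list_1)]  # two independent copies

shared[0][0] = (1, 1, 1)
copied[0][0] = (1, 1, 1)

print(shared[1][0])  # (1, 1, 1) -- the change is visible through every reference
print(copied[1][0])  # (0, 0, 0) -- only the first copy changed
```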
| 2 | 2016-08-11T22:55:27Z | [
"python",
"list"
] |
Error when modifying list in python | 38,907,266 | <p>I have had a problem when trying to change a specific item that is in a nested list. The code I have written is in Python 2.7.
This is what I have written:</p>
<pre><code>list_1 = []
list_2 = []

infin = 25
while infin != 0:
    list_1.append((0,0,0))
    infin = infin - 1

infin = 5
while infin != 0:
    list_2.append(list_1)
    infin = infin - 1
</code></pre>
<p>Basically it makes a list that looks like this: </p>
<blockquote>
<p>[[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>],[<em>25 tuples</em>]]</p>
</blockquote>
<p>then when I attempt to modify the list by doing this:</p>
<pre><code>list_2[0][0] = (1,1,1)
</code></pre>
<p>Every single list with 25 tuples in it will have (1,1,1) at the start, not just the first. Why? </p>
| 0 | 2016-08-11T22:50:36Z | 38,907,315 | <p><code>list_1</code> is an object. You then add the object to another list 5 times, creating not 5 copies of the object, but 5 references to the same object.</p>
<p>You need to copy the list as you create the second list:</p>
<pre><code>infin = 5
while infin != 0:
    list_2.append(list(list_1))
    infin = infin - 1
</code></pre>
| 1 | 2016-08-11T22:56:20Z | [
"python",
"list"
] |
Can anyone tell me what is the type of this language? | 38,907,396 | <pre><code>query = 'GATTACA';
genome = 'TGATTACAGATTACC';

nummatches = 0;

% At every possible offset
for offset = 1:length(genome)-length(query)+1
    % Do all of the characters match?
    if (genome(offset:offset+length(query)-1) == query)
        disp(['Match at offset ', num2str(offset)])
        nummatches = nummatches + 1;
    else
        % Uncomment to see every non-match
        % disp(['No match at offset ', num2str(offset)])
    end
end

disp(['Found ', num2str(nummatches), ' matches of ', query, ' in genome of length ', num2str(length(genome))])

disp(['Expected number of occurrences: ', num2str((length(genome)-length(query)+1)/(4^length(query)))])
</code></pre>
| -3 | 2016-08-11T23:03:58Z | 38,907,476 | <p>It's MATLAB, amigo -- <code>disp</code>, <code>%</code> for comments, and the <code>num2str</code> function are all MATLAB tidbits.</p>
| 0 | 2016-08-11T23:11:14Z | [
"python"
] |