| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Why can't I instantiate an instance in the same module? | 38,989,481 | <p>Suppose my module is <code>myclass.py</code>, and here is the code:</p>
<pre><code>#!/usr/bin/env python
# coding=utf-8

class A(object):
    b = B()
    def __init__(self):
        pass

class B(object):
    pass
</code></pre>
<p>and import it</p>
<pre><code>In [1]: import myclass
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-1-e891426834ac> in <module>()
----> 1 import myclass

/home/python/myclass.py in <module>()
      2 # coding=utf-8
      3
----> 4 class A(object):
      5     b = B()
      6     def __init__(self):

/home/python/myclass.py in A()
      3
      4 class A(object):
----> 5     b = B()
      6     def __init__(self):
      7         pass

NameError: name 'B' is not defined
</code></pre>
<p>I know that if I define class B above class A, it is OK and there is no error. But I don't want to do that; are there any other methods to solve this? I also know that in C there is function declaration. Thank you!</p>
 | 2 | 2016-08-17T06:27:00Z | 38,993,358 | <p>Is there any good reason to do what you are doing? In general this is quite a dangerous pattern in Python.</p>
<p>In your case</p>
<pre><code>class A(object):
    b = B()
    def __init__(self):
        pass
</code></pre>
<p>You are binding an instance of B to the class A, which means that every instance of class A will <strong>share the same instance</strong> of class B. It's a case you must then handle properly.</p>
<p>In general you don't want this. If you want each instance of A to be related to its own instance of B, you must make the assignment inside <code>__init__</code>:</p>
<pre><code>class A(object):
    def __init__(self):
        self.b = B()
</code></pre>
<p>In this case it doesn't matter where class B is defined, since it's instantiated at run time.</p>
<p>Again, beware that the semantics are very different in the two cases (if you know Java, the former is more like defining a <code>static</code> attribute).</p>
<p>About:</p>
<blockquote>
<p>And I know that in C, there is function declaration</p>
</blockquote>
<p>You shouldn't draw too many parallels with a language like C, which is very different in many respects. Most importantly, it's a compiled language: your code is parsed in its entirety before being <em>translated</em> to machine language, which is why you can write function declarations and have your namespace populated regardless of the order in which you define things.</p>
<p>Python is an interpreted language, which basically means that each statement is <em>translated</em> when it's executed, and a class declaration is executed when the module is imported.</p>
<p>So to recap: if you really need a class-bound instance, you have to declare class B before class A; otherwise you <strong>must</strong> instantiate B inside <code>__init__</code>, and then you can declare B wherever you want (since it's only looked up at runtime).</p>
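<p>As a sketch of the first option (class-bound, shared instance), the attribute can also be attached <em>after</em> both classes are defined, since the name <code>B</code> is only looked up when the assignment actually runs:</p>

```python
class A(object):
    def __init__(self):
        pass

class B(object):
    pass

# B exists by the time this line runs, so no NameError:
A.b = B()  # class-bound: every instance of A shares this one B

a1, a2 = A(), A()
print(a1.b is a2.b)  # True
```

<p>This keeps the class-attribute semantics without having to move the definition of <code>B</code> above <code>A</code>.</p>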
| 0 | 2016-08-17T09:54:43Z | [
"python"
] |
celery launches more processes than configured | 38,989,533 | <p>I'm running a celery machine, using redis as the broker with the following configuration:</p>
<pre><code>celery -A project.tasks:app worker -l info --concurrency=8
</code></pre>
<p>When checking the number of celery running processes, I see more than 8.</p>
<p>Is there something that I am missing? Is there a limit for max concurrency?</p>
<p>This problem causes huge memory allocation, and is killing the machine.</p>
| 0 | 2016-08-17T06:30:29Z | 38,989,780 | <p>For the concurrency problem, I have no suggestion.</p>
<p>For the memory problem, you can look at the redis configuration in <code>~/.redis/redis.conf</code>. There is a <code>maxmemory</code> directive which sets a limit on memory use…</p>
<p>See the <a href="http://redis.io/topics/config" rel="nofollow">Redis configuration</a></p>
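<p>For illustration, the relevant directives in <code>redis.conf</code> might look like this (values are only examples; note that for a Celery broker an eviction policy can silently drop task data, so treat this as a sketch, not a recommendation):</p>

```
# Cap Redis memory usage and pick what happens when the cap is reached.
maxmemory 2gb
maxmemory-policy noeviction   # refuse writes instead of evicting task data
```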
| 0 | 2016-08-17T06:43:38Z | [
"python",
"celery"
] |
celery launches more processes than configured | 38,989,533 | <p>I'm running a celery machine, using redis as the broker with the following configuration:</p>
<pre><code>celery -A project.tasks:app worker -l info --concurrency=8
</code></pre>
<p>When checking the number of celery running processes, I see more than 8.</p>
<p>Is there something that I am missing? Is there a limit for max concurrency?</p>
<p>This problem causes huge memory allocation, and is killing the machine.</p>
 | 0 | 2016-08-17T06:30:29Z | 38,994,438 | <p>With the default settings Celery will always start one more process than the number you ask for. This additional process is a kind of bookkeeping process that is used to coordinate the other processes that are part of the worker. It communicates with the rest of Celery, and dispatches the tasks to the processes that actually run them.</p>
<p>Switching to a different pool implementation than the "prefork" default might reduce the number of processes created, but that's opening a new can of worms.</p>
| 0 | 2016-08-17T10:44:19Z | [
"python",
"celery"
] |
Python2 and Python3 both in windows 10 | 38,989,589 | <p>I used Anaconda for python.</p>
<p>python2 is installed in
<code>D:\Python\Anaconda2</code></p>
<p>python3 is installed in
<code>D:\Python\Anaconda3</code></p>
<p>python3 is the default.</p>
<p>I created two environment variables named <code>python2</code> and <code>python3</code>, and selected the respective <code>python.exe</code> from each folder.</p>
<p>my setup.py supports only python2.</p>
<p>When I run the command <code>python setup.py install</code> from cmd, it says it is not supported.
If I rename <code>D:\Python\Anaconda2\python.exe</code> to <code>D:\Python\Anaconda2\python2.exe</code> and change the environment path accordingly, it works. But I don't want to change the file name (as it may break other apps; e.g. conda then says "unable to create process", etc.).</p>
<p>Windows 10 Pro, 64 bit.
setup.py location:
<code>E:\Program Files\IBM\ILOG\CPLEX_Studio1251\cplex\python\x64_win64</code></p>
<ol>
<li><p>How can I overcome this? I want <code>python2 setup.py install</code> to use the Python 2 interpreter and <code>python3 setup.py install</code> the Python 3 interpreter, without renaming.</p></li>
<li><p>How to install setup.py by running <code>D:\Python\Anaconda2\python.exe</code>?</p></li>
</ol>
 | 0 | 2016-08-17T06:33:57Z | 38,989,865 | <p>I'm not sure whether this directly answers your question, but Anaconda manages environments for you. <a href="http://conda.pydata.org/docs/using/envs.html" rel="nofollow">Reference</a></p>
<p>You should be able to type into your Anaconda prompt to create your environment:</p>
<pre><code>conda create --name pyenv python=2.7
</code></pre>
<p>and then list your environments:</p>
<pre><code>conda info --envs
</code></pre>
<p>and lastly activate your environment python 2 or python 3 environments:</p>
<pre><code>activate pyenv
</code></pre>
<p>These separate environments, with their own versions of Python, are saved in the Anaconda folder under the <code>envs</code> folder:</p>
<blockquote>
<p>/Anaconda3/envs/pyenv/python</p>
</blockquote>
| 1 | 2016-08-17T06:48:02Z | [
"python",
"windows",
"anaconda"
] |
Quick Sort in python with last element as pivot | 38,989,629 | <p>I have successfully tested my code. It works with the last element as pivot.
However, when I try to count the total number of comparisons made, it shows an incorrect count.
I am counting through the global variable <strong>tot_comparisons</strong>.</p>
<p>Any suggestions on where I am going wrong?
Is there some silly error that I am making?</p>
<pre><code>def swap(A,i,k):
    temp=A[i]
    print "temp is "
    print temp
    A[i]=A[k]
    A[k]=temp

def partition(A,start,end):
    pivot=A[end]
    pivot_index=start
    #pivot=A[end]
    for i in range(start,end):
        #if A[i]<=pivot:
        if A[i]<pivot:
            print 'HHHHHHHHHHHHHhh'
            swap(A,i,pivot_index)
            pivot_index+=1
    #swap(A,pivot_index,end)
    swap(A,pivot_index,end)
    return pivot_index

def quicksort(A,start,end):
    global tot_comparisons
    if start<end:
        pivot_index=partition(A,start,end)
        tot_comparisons+=end-start
        print "pivot_index"
        print pivot_index
        print "ENDS"
        quicksort(A, start,pivot_index-1)
        #tot_comparisons+=end-pivot_index
        #quicksort(A, pivot_index, end)
        quicksort(A, pivot_index+1, end)

#A=[45,21,23,4,65]
#A=[21,23,19,22,1,3,7,88,110]
#A=[1,22,3,4,66,7]
#A=[1, 3, 7, 19, 21, 22, 23, 88, 110]
#A=[7,2,1,6,8,5,3,4]
temp_list=[]
f=open('temp_list.txt','r')
for line in f:
    temp_list.append(int(line.strip()))
f.close()
print 'list is '
#print temp_list
print 'list ends'
tot_comparisons=0
#quicksort(A, 0, 7)
quicksort(temp_list, 0, 9999)
#quicksort(temp_list, 0, len(temp_list))
print 'hhh'
print temp_list
print tot_comparisons
#print A
</code></pre>
 | 0 | 2016-08-17T06:35:54Z | 38,990,584 | <p>I checked that your quicksort works, though it's slightly different from the algorithm given in popular algorithmic texts, in which the last element is switched to the first and then the partitioning ensues. This may change the ordering of the sort, which has an effect on the number of comparisons.</p>
<p>For example, your code:</p>
<pre><code>def partition(A,start,end):
    pivot=A[end]
    pivot_index=start
    for i in range(start,end):
        if A[i] < pivot:
            swap(A,i,pivot_index)
            pivot_index+=1
    swap(A,pivot_index,end)
    return pivot_index
</code></pre>
<p>can be switched to:</p>
<pre><code>def partition(A,start,end):
    swap(A,start,end)
    pivot=A[start]
    pivot_index=start + 1
    for i in range(start+1,end+1):
        if A[i] < pivot:
            swap(A,i,pivot_index)
            pivot_index+=1
    swap(A,pivot_index-1,start)
    return pivot_index-1
</code></pre>
<blockquote>
<p>Edited based on comment by OP.</p>
</blockquote>
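<p>For reference, here is a minimal sketch that counts the comparisons exactly where they happen, inside <code>partition</code> (one count per loop pass), instead of estimating them in <code>quicksort</code>:</p>

```python
def partition(a, start, end, counter):
    pivot = a[end]
    pi = start
    for i in range(start, end):
        counter[0] += 1                  # one element/pivot comparison
        if a[i] < pivot:
            a[i], a[pi] = a[pi], a[i]
            pi += 1
    a[pi], a[end] = a[end], a[pi]
    return pi

def quicksort(a, start, end, counter):
    if start < end:
        p = partition(a, start, end, counter)
        quicksort(a, start, p - 1, counter)
        quicksort(a, p + 1, end, counter)

data = [7, 2, 1, 6, 8, 5, 3, 4]
count = [0]                              # mutable counter, avoids a global
quicksort(data, 0, len(data) - 1, count)
print(data)       # [1, 2, 3, 4, 5, 6, 7, 8]
print(count[0])   # 14 comparisons for this input
```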
| 1 | 2016-08-17T07:29:15Z | [
"python",
"recursion",
"quicksort"
] |
Quick Sort in python with last element as pivot | 38,989,629 | <p>I have successfully tested my code. It works with the last element as pivot.
However, when I try to count the total number of comparisons made, it shows an incorrect count.
I am counting through the global variable <strong>tot_comparisons</strong>.</p>
<p>Any suggestions on where I am going wrong?
Is there some silly error that I am making?</p>
<pre><code>def swap(A,i,k):
    temp=A[i]
    print "temp is "
    print temp
    A[i]=A[k]
    A[k]=temp

def partition(A,start,end):
    pivot=A[end]
    pivot_index=start
    #pivot=A[end]
    for i in range(start,end):
        #if A[i]<=pivot:
        if A[i]<pivot:
            print 'HHHHHHHHHHHHHhh'
            swap(A,i,pivot_index)
            pivot_index+=1
    #swap(A,pivot_index,end)
    swap(A,pivot_index,end)
    return pivot_index

def quicksort(A,start,end):
    global tot_comparisons
    if start<end:
        pivot_index=partition(A,start,end)
        tot_comparisons+=end-start
        print "pivot_index"
        print pivot_index
        print "ENDS"
        quicksort(A, start,pivot_index-1)
        #tot_comparisons+=end-pivot_index
        #quicksort(A, pivot_index, end)
        quicksort(A, pivot_index+1, end)

#A=[45,21,23,4,65]
#A=[21,23,19,22,1,3,7,88,110]
#A=[1,22,3,4,66,7]
#A=[1, 3, 7, 19, 21, 22, 23, 88, 110]
#A=[7,2,1,6,8,5,3,4]
temp_list=[]
f=open('temp_list.txt','r')
for line in f:
    temp_list.append(int(line.strip()))
f.close()
print 'list is '
#print temp_list
print 'list ends'
tot_comparisons=0
#quicksort(A, 0, 7)
quicksort(temp_list, 0, 9999)
#quicksort(temp_list, 0, len(temp_list))
print 'hhh'
print temp_list
print tot_comparisons
#print A
</code></pre>
 | 0 | 2016-08-17T06:35:54Z | 38,991,390 | <p>I would suggest you declare the global variable at the top of your script; check this working basic example:</p>
<pre><code>inside = 0

def qsort(l):
    global inside
    inside += 1
    if len(l) <= 1:
        return l
    pivot = l[-1]
    return qsort(filter(lambda x: x < pivot, l[:-1])) + [pivot] + qsort(filter(lambda x: x >= pivot, l[:-1]))

import random
l = [random.randint(0,100) for _ in xrange(100)]
print qsort(l)
print inside
</code></pre>
>>> [1, 1, 2, 3, 7, 9, 10, 11, 11, 11, 13, 13, 13, 13, 17, 17, 17, 18, 18, 21, 22, 23, 26, 26, 28, 30, 30, 32, 32, 34, 35, 38, 40, 41, 42, 42, 42, 42, 43, 44, 45, 46, 47, 47, 48, 48, 49, 51, 51, 54, 55, 56, 56, 56, 58, 59, 60, 60, 61, 61, 62, 63, 65, 67, 67, 68, 68, 70, 70, 72, 73, 74, 77, 79, 80, 83, 85, 85, 85, 86, 87, 89, 90, 90, 90, 91, 91, 95, 96, 96, 96, 97, 97, 97, 98, 98, 98, 99, 99, 99]
>>> 135
</code></pre>
| 0 | 2016-08-17T08:13:50Z | [
"python",
"recursion",
"quicksort"
] |
Why can a lambda for collections.defaultdict have no arguments? | 38,989,635 | <p>In python I have the following line of code:</p>
<pre><code>my_var = collections.defaultdict(lambda: collections.defaultdict(collections.defaultdict(float)))
</code></pre>
<p>I don't understand what it is doing. One of the things I don't understand is how we have not specified any variables for the lambda function.</p>
| -2 | 2016-08-17T06:36:05Z | 39,000,442 | <p>The <em><code>default_factory</code></em> argument to <code>defaultdict</code> should be a function that doesn't accept any arguments which creates and returns an object which will become the value of any non-existent entries that get referenced.</p>
<p>One way to provide such a function is to create an anonymous one using an inline <a href="https://docs.python.org/2/tutorial/controlflow.html#lambda-expressions" rel="nofollow"><code>lambda</code> expression</a>. Another way is to use the name of a built-in function that creates and returns one of the built-in types, like <code>float</code>, <code>int</code>, <code>tuple</code>, etc. When any of <em>those</em> are called without an argument, the object returned will have some default value, typically something like zero or empty.</p>
<p>First of all, you can make code like you have more readable by formatting it like this:</p>
<pre><code>my_var = collections.defaultdict(
        lambda: collections.defaultdict(
            collections.defaultdict(float)))
</code></pre>
<p>However, (in either form) it's incorrect, since some of the <code>defaultdict</code> calls don't have callable function arguments. Because Python is an interpreted language you won't know this until you try to use <code>my_var</code>. For example something like:</p>
<pre><code>my_var['level1']['level2']['level3'] = 42.
</code></pre>
<p>would result in:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: first argument must be callable or None
</code></pre>
<p>This can be fixed by adding another <code>lambda</code>:</p>
<pre><code>my_var = collections.defaultdict(
        lambda: collections.defaultdict(
            lambda: collections.defaultdict(float)))  # need lambda here, too
</code></pre>
<p>and afterwards the preceding assignment won't be considered an error.</p>
<p>Since <code>defaultdict</code> is a subclass of the built-in <code>dict</code> class, you can use the <code>json</code> module to pretty-print instances of one:</p>
<pre><code>import collections

my_var = collections.defaultdict(
        lambda: collections.defaultdict(
            lambda: collections.defaultdict(float)))

import json  # for pretty printing dictionaries

# Auto-creates the 3 levels of dictionaries needed.
my_var['level1']['level2']['level3A'] = 42.

# Similar to above but additionally auto-creates a
# my_var['level1']['level2']['level3B'] entry with default value of 0.0
my_var['level1']['level2']['level3C'] = my_var['level1']['level2']['level3B'] + 1

print(json.dumps(my_var, indent=4))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>{
    "level1": {
        "level2": {
            "level3A": 42.0,
            "level3B": 0.0,
            "level3C": 1.0
        }
    }
}
</code></pre>
<p>I think you should be able to understand what it does now, but let me know if not.</p>
| 3 | 2016-08-17T15:14:18Z | [
"python",
"lambda"
] |
Overriding __iadd__ in python for fractions | 38,989,665 | <p>I am trying to override the <code>__iadd__</code> method in python with fractions; this is what I did. Could someone please check whether I did it right? I have seen <a href="https://stackoverflow.com/questions/1047021/overriding-in-python-iadd-method">this</a> and <a href="https://stackoverflow.com/questions/35512684/python-execute-a-string-code-for-fractions">this</a>, but that's not what I want. It should be used from a <code>class</code> perspective.</p>
<p>My <code>__iadd__</code> code:</p>
<pre><code>def __iadd__(self, other):
    """
    Implementation of the '+='
    augmented function
    :param other:
    :return:
    """
    newnum = self.num * other.den + self.den * other.num
    newden = self.den * other.den
    v = Fraction(newnum, newden)
    return v
</code></pre>
<p>This is done in a <code>class Fraction</code> with this structure:</p>
<pre><code>def gcd(m, n):
    while m % n != 0:
        oldm = m
        oldn = n
        m = oldn
        n = oldm % oldn
    return n

class Fraction:
    # initializing variables for class
    def __init__(self, top, bottom):
        # check if entered fraction is an integer
        if isinstance(top, int) and isinstance(bottom, int):
            # reduce the given fractions to lowest term
            common = gcd(top, bottom)
            self.num = abs(top) // common
            self.den = abs(bottom) // common
        else:
            raise TypeError("Please only integers are allowed")

    def __str__(self):
        return str(self.num) + "/" + str(self.den)
</code></pre>
<p>This actually returns the right value when done like this:</p>
<pre><code>f1 = Fraction(1, 2)
f2 = Fraction(8, 10)
f1 += f2
print(f1)
</code></pre>
<p>Also did it by calling an overridden <code>__add__</code> method:</p>
<pre><code>def __iadd__(self, other):
    """
    Implementation of the '+='
    augmented function
    :param other:
    :return:
    """
    if other == 0:
        return self
    else:
        return self.__add__(other)
</code></pre>
<p>The overridden <code>__add__</code>:</p>
<pre><code>def __add__(self, otherfraction):
    newnum = self.num * otherfraction.den + self.den * otherfraction.num
    newden = self.den * otherfraction.den
    return Fraction(newnum, newden)
</code></pre>
| 0 | 2016-08-17T06:37:53Z | 38,990,010 | <ul>
<li>Use <code>__iadd__</code> to increment in-place.</li>
<li>Use <code>__add__</code> to increment and create a new instance.</li>
</ul>
<p>So, you can change your code as follows:</p>
<pre><code>def __iadd__(self, other):
    self.num = self.num * other.den + self.den * other.num
    self.den = self.den * other.den
    return self
</code></pre>
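<p>A quick sketch of the difference (a stripped-down <code>Fraction</code>, not the asker's full class): with the in-place version, every name bound to the object sees the update.</p>

```python
class Fraction(object):
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __iadd__(self, other):           # mutates self and returns it
        self.num = self.num * other.den + self.den * other.num
        self.den = self.den * other.den
        return self

f = Fraction(1, 2)
alias = f                     # second name for the same object
f += Fraction(1, 3)           # 1/2 + 1/3 = 5/6, computed in place
print(alias.num, alias.den)   # 5 6 -- the alias sees the change
print(alias is f)             # True
```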
<p>See also this question: <a href="http://stackoverflow.com/questions/20204230/implementing-add-and-iadd-for-custom-class-in-python">implementing add and iadd for custom class in python?</a></p>
<p>Note that Python has a <a href="https://docs.python.org/2/library/fractions.html" rel="nofollow">Rational numbers</a> module. Check the source code… But <code>Fraction</code> objects are immutable, so <code>__iadd__</code> is not implemented.</p>
| 2 | 2016-08-17T06:57:02Z | [
"python",
"magic-methods"
] |
How to create non linear axis in plot | 38,989,667 | <p>I'm using matplotlib to plot a series of datas and get a result as below</p>
<p><a href="http://i.stack.imgur.com/TmB2a.png" rel="nofollow"><img src="http://i.stack.imgur.com/TmB2a.png" alt="enter image description here"></a></p>
<p>But I'm expecting to have a non linear axis as below.</p>
<p><a href="http://i.stack.imgur.com/noqR5.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/noqR5.jpg" alt="enter image description here"></a></p>
<p>How can I make that kind of plot? Thanks in advance.</p>
 | 0 | 2016-08-17T06:38:07Z | 38,989,812 | <p>You can set the y-axis to logarithmic by writing <code>plt.yscale('log')</code>.</p>
<p>full example:</p>
<pre><code>import matplotlib.pyplot as plt
example = [pow(2,i) for i in range(10)]
plt.plot(example)
plt.yscale('log')
plt.show()
</code></pre>
| 3 | 2016-08-17T06:45:11Z | [
"python",
"matplotlib"
] |
How to create non linear axis in plot | 38,989,667 | <p>I'm using matplotlib to plot a series of datas and get a result as below</p>
<p><a href="http://i.stack.imgur.com/TmB2a.png" rel="nofollow"><img src="http://i.stack.imgur.com/TmB2a.png" alt="enter image description here"></a></p>
<p>But I'm expecting to have a non linear axis as below.</p>
<p><a href="http://i.stack.imgur.com/noqR5.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/noqR5.jpg" alt="enter image description here"></a></p>
<p>How can I make that kind of plot? Thanks in advance.</p>
| 0 | 2016-08-17T06:38:07Z | 38,989,923 | <p>You can use <code>plt.semilogy</code>:</p>
<pre><code>import matplotlib.pyplot as plt
plt.semilogy([i**2 for i in range(100)])
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="http://i.stack.imgur.com/5Sv0b.png" rel="nofollow"><img src="http://i.stack.imgur.com/5Sv0b.png" alt="enter image description here"></a></p>
| 2 | 2016-08-17T06:50:57Z | [
"python",
"matplotlib"
] |
How to install scikit-learn | 38,989,896 | <p>I do not know how to install external modules.
So far I have learned that we can use the pip command to install them,
but for scikit-learn we need to install numpy and matplotlib first.</p>
<p>I need somebody to help me with this:
how do I install these modules using the pip command?</p>
| 0 | 2016-08-17T06:49:29Z | 38,990,089 | <p>Using Python 3.4, I run the following from the command line:</p>
<pre><code>c:\python34\python.exe -m pip install package_name
</code></pre>
<p>So you would substitute "numpy" and "matplotlib" for <code>package_name</code>.</p>
| 0 | 2016-08-17T07:01:53Z | [
"python",
"windows",
"scikit-learn"
] |
Execute python script from C | 38,989,905 | <p>As part of my C code I run a Python script (one of the .dll files runs the script):</p>
<pre><code>getcwd(directory);
ret_value = ShellExecute(NULL,NULL,"myScript.py",NULL,directory,0);
</code></pre>
<p>This is the folder of the program after build.</p>
<p>If I run the .exe from the folder, everything works.</p>
<p><strong>The bug:</strong>
If I search for the program's .exe outside the folder and run it, the script doesn't run.</p>
<p><a href="https://postimg.org/image/aewpx0ofd/" rel="nofollow"><img src="https://s4.postimg.org/417mtrjjh/msg_Vbug.png" alt="msgVbug.png"></a></p>
<p><strong>Search:</strong>
If I run it from here, the script doesn't run.
<a href="https://postimg.org/image/f0aqvfkbd/" rel="nofollow"><img src="https://s4.postimg.org/52zq2dcpp/Capturesearch.png" alt="Capturesearch.png"></a></p>
 | 0 | 2016-08-17T06:50:01Z | 39,041,564 | <p>The GetModuleFileNameW() function retrieves the fully qualified path of the file that contains the specified module.
This way you can find the absolute path of the .dll, then use _chdir to change the current working directory to that path.</p>
| 1 | 2016-08-19T14:28:51Z | [
"python",
"c"
] |
Iteration input and update on dictionary python | 38,989,964 | <p>I have dictionary <strong>x</strong>:</p>
<pre><code>x = {1 :'a', 2 :'b', 3 :'c', 4 :'d', 5 :'e'}
</code></pre>
<p>The program displays these as choices. Next, the user selects one to update, and then enters a value. The program is supposed to add this value to the running total in another dictionary, <strong>y</strong>.</p>
<p>The user chooses a number (a key of dictionary <strong>x</strong>), and then the corresponding value of <strong><em>x</em></strong> becomes a <strong>key</strong> of dictionary <strong><em>y</em></strong>.</p>
<p>Example :</p>
<pre><code>$python mysc.py
1. a
2. b
3. c
4. d
5. e
choose number to input case : **1**
you choose 'a' #now i have a as keys of dictionary *y*
input number : **5**
y = { 'a' : 5 }
1. a
2. b
3. c
4. d
5. e
choose number to input case : **1**
you choose 'a' #now i have a as keys of dictionary *y*
input number : **6**
y = { 'a' : 11 } #values of 'a' change to 11(5+6)
1. a
2. b
3. c
4. d
5. e
choose number to input case : **5**
you choose 'e' #now we add a new key to dictionary *y*
input number : **6**
y = { 'a' : 11, 'e' : 6 }
</code></pre>
<p>Code:</p>
<pre><code>y = {}
while True:
    x = {1 :'a', 2 :'b', 3 :'c', 4 :'d', 5 :'e'}
    n = input('choose number to input keys : ')
    nn = x[n]
    print 'your choose is ',x[n]
    m = input('input number : ')
    y[nn] = m
    print y
</code></pre>
| -1 | 2016-08-17T06:53:39Z | 39,047,696 | <p>This program does have its problems. First of all, a dictionary is overkill for your menu of choices. Since you never change it, you can simply hard-code the choices and print the list each time.</p>
<p>Your <strong>y</strong> dictionary is a collection of running sums -- accumulations -- for the five items. You never got a sum because your code only replaces an existing value with a new one. I've added code to check whether the item is already in <strong>y</strong>; if so, the new code adds the new value to the old.</p>
<pre><code>y = {}
x = " abcde"
while True:
    for i in range(1,len(x)):
        print i, ':', x[i]
    n = input('choose the number of the key you want to update:')
    key_num = x[n]
    print 'your choice is ',x[n]
    m = input('input update quantity : ')
    if key_num in y:
        y[key_num] += m
    else:
        y[key_num] = m
    print y
</code></pre>
<p>Output:</p>
<pre><code>$ python2.7 so.py
1 : a
2 : b
3 : c
4 : d
5 : e
choose the number of the key you want to update:4
your choice is d
input update quantity : 5
{'d': 5}
1 : a
2 : b
3 : c
4 : d
5 : e
choose the number of the key you want to update:1
your choice is a
input update quantity : 5
{'a': 5, 'd': 5}
1 : a
2 : b
3 : c
4 : d
5 : e
choose the number of the key you want to update:4
your choice is d
input update quantity : 6
{'a': 5, 'd': 11}
1 : a
2 : b
3 : c
4 : d
5 : e
choose the number of the key you want to update:
-- I interrupted the program --
</code></pre>
<p>Does this get you moving to the next stage?</p>
| 0 | 2016-08-19T21:08:14Z | [
"python",
"dictionary",
"while-loop"
] |
python bracket notation for dict to setitem method | 38,990,054 | <p>I know that for a <strong>dict</strong> D in python,</p>
<pre><code>D = {0:1, 1: {2:3} }
D[0] = 1
</code></pre>
<p>is equivalent to </p>
<pre><code>D.__setitem__(0,1)
</code></pre>
<p>what about below</p>
<pre><code>D[1][3] = 4
</code></pre>
<p>although it's equivalent to</p>
<pre><code>D[1].__setitem__(3,4)
</code></pre>
<p>I don't want to use bracket notation, how to do that?</p>
 | 0 | 2016-08-17T06:59:26Z | 38,990,098 | <p>It'll be like this:</p>
<pre><code>D.__getitem__(1).__setitem__(3,4)
</code></pre>
<p>Note that <code>__setitem__</code> is called not on the <code>D</code> variable, but on the variable returned by <code>__getitem__</code>.</p>
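<p>A quick check that the chained calls do the same thing as the bracket form:</p>

```python
D = {0: 1, 1: {2: 3}}
E = {0: 1, 1: {2: 3}}

D[1][3] = 4                          # bracket notation
E.__getitem__(1).__setitem__(3, 4)   # the equivalent dunder calls

print(D == E)  # True
print(D)       # {0: 1, 1: {2: 3, 3: 4}}
```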
| 1 | 2016-08-17T07:02:21Z | [
"python",
"dictionary"
] |
python bracket notation for dict to setitem method | 38,990,054 | <p>I know that for a <strong>dict</strong> D in python,</p>
<pre><code>D = {0:1, 1: {2:3} }
D[0] = 1
</code></pre>
<p>is equivalent to </p>
<pre><code>D.__setitem__(0,1)
</code></pre>
<p>what about below</p>
<pre><code>D[1][3] = 4
</code></pre>
<p>although it's equivalent to</p>
<pre><code>D[1].__setitem__(3,4)
</code></pre>
<p>I don't want to use bracket notation, how to do that?</p>
| 0 | 2016-08-17T06:59:26Z | 38,990,167 | <p>Another way, if you absolutely must avoid bracket notation for some reason:</p>
<pre><code>>>> D.get(1).update([(3,4)])
>>> D
{0: 1, 1: {2: 3, 3: 4}}
</code></pre>
| 1 | 2016-08-17T07:06:20Z | [
"python",
"dictionary"
] |
How to deal with data with pandas without influencing nan? | 38,990,192 | <p>I have a string series with some NaN, and I want to replace some characters and then turn it into int (float is OK), but the NaNs should remain NaN. Like:</p>
<pre><code>In [1]: df = pd.DataFrame(["type 12", None, "type13"], columns=['A'])
Out[1]:
     A
0   12
1  NaN
2   13
</code></pre>
<p>Is there any good way to do it?</p>
| -3 | 2016-08-17T07:08:01Z | 38,990,508 | <p>No, unfortunately. You will have to settle for <code>floats</code>. </p>
<pre><code>>>> s = pd.Series(['1', '2', '3', '4', '5'], index=list('abcde'))
>>> s
a    1
b    2
c    3
d    4
e    5
dtype: object
>>> s = s.reindex(['a','b','c','f','u'])
>>> s
a      1
b      2
c      3
f    NaN
u    NaN
dtype: object
>>> s.astype(int)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py", line 2947, in astype
    raise_on_error=raise_on_error, **kwargs)
  File "/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/internals.py", line 2873, in astype
    return self.apply('astype', dtype=dtype, **kwargs)
  File "/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/internals.py", line 2832, in apply
    applied = getattr(b, f)(**kwargs)
  File "/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/internals.py", line 422, in astype
    values=values, **kwargs)
  File "/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/internals.py", line 465, in _astype
    values = com._astype_nansafe(values.ravel(), dtype, copy=True)
  File "/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/common.py", line 2628, in _astype_nansafe
    return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
  File "pandas/lib.pyx", line 937, in pandas.lib.astype_intsafe (pandas/lib.c:16620)
  File "pandas/src/util.pxd", line 60, in util.set_value_at (pandas/lib.c:67979)
ValueError: cannot convert float NaN to integer
</code></pre>
<p>From Pandas <a href="http://pandas.pydata.org/pandas-docs/stable/gotchas.html#nan-integer-na-values-and-na-type-promotions" rel="nofollow">Caveats and Gotchas</a>:</p>
<blockquote>
<p>The special value NaN (Not-A-Number) is used everywhere as the NA
value, and there are API functions isnull and notnull which can be
used across the dtypes to detect NA values.</p>
<p>However, it comes with it a couple of trade-offs which I most
certainly have not ignored... In the absence of high performance NA
support being built into NumPy from the ground up, <strong>the primary
casualty is the ability to represent NAs in integer arrays.</strong></p>
</blockquote>
<p>So work with this:</p>
<pre><code>>>> s.astype(float)
a    1.0
b    2.0
c    3.0
f    NaN
u    NaN
dtype: float64
</code></pre>
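<p>For the frame in the question, one possible approach (using <code>Series.str.extract</code>; the result is <code>float64</code>, so the missing row stays NaN) could be:</p>

```python
import pandas as pd

df = pd.DataFrame(["type 12", None, "type13"], columns=['A'])

# Pull the digits out of each string, then convert; the None row
# propagates through as NaN because the dtype ends up float64.
df['A'] = df['A'].str.extract(r'(\d+)', expand=False).astype(float)
print(df)  # -> column A holds 12.0, NaN, 13.0
```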
| 1 | 2016-08-17T07:25:12Z | [
"python",
"pandas"
] |
Writing string to file from loop if does not already exist | 38,990,204 | <p>This has been asked and answered, and I've read lots of those posts... but for some reason my code isn't working. Hopefully someone can help.</p>
<p>The code matches strings within a variable and then attempts to write those strings to a file if they don't already exist within that file. </p>
<p>Code doesn't work. Any help please?</p>
<pre><code>#this works
str_match = re.findall(r'(https?://[^\s]+)', input)

if str_match:
    with open(datestamp + "_strings.txt", "a+") as text_file:
        for string in str_match:
            #THIS DOES NOT WORK -- WITH OR WITHOUT THE '\n'
            #WITH, ALWAYS SAYS IT EXISTS AND WRITES NOTHING
            if (string + '\n') in text_file:
                print "str exists"
            else:
                print "Doesn't exist"
                text_file.write(string + '\n')
</code></pre>
<p>Without it, it says the string doesn't exist and writes it to the file multiple times.</p>
<pre><code>if string in text_file:
    print "str exists"
else:
    print "Doesn't exist"
    text_file.write(string + '\n')
</code></pre>
<p>If I look at the string that's written using vim, it looks like: mystring$</p>
<p>(the $ is appended at the end of each string -- and no, adding <strong>+"$"</strong> doesn't work)</p>
<p>Any help please?</p>
 | 0 | 2016-08-17T07:08:48Z | 38,990,592 | <p>The problem is that you are iterating through the file once, and the file is not rewound afterwards, so it is only scanned once.</p>
<p>You have to read the data into a <code>set</code> first; then you can loop over the strings (and <code>set</code> membership tests are very fast, since they use hashing and are O(1) on average).</p>
<p>Another problem: if there are duplicates in <code>str_match</code>, a string would be written more than once, so I added uniqueness with a <code>set</code>:</p>
<pre><code>if str_match:
    with open(datestamp + "_strings.txt", "r") as text_file:  # read-only
        lines = set(map(str.rstrip, text_file))  # reads the file, removes \n and \r
    with open(datestamp + "_strings.txt", "a") as text_file:  # append, write only
        for string in set(str_match):
            if string in lines:
                print("str exists")
            else:
                print("Doesn't exist")
                text_file.write(string + '\n')
</code></pre>
<p>Notes:</p>
<ul>
<li>to preserve the order in the file, remove <code>set</code> in the <code>for string</code> loop, and add the string to <code>lines</code> when found.</li>
<li>the first version with <code>\n</code> added would work OK on Linux, but on Windows it would fail because of the <code>\r</code>. Now I <code>rstrip</code> the lines when I put them in the mini-database: no need to add <code>\n</code> when testing, and it is portable</li>
<li>the <code>string$</code> you saw in vim is explained: vim displays the end of each line as <code>$</code> when showing the text (e.g. with <code>:set list</code>). Mystery solved.</li>
</ul>
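<p>To make the fix concrete, here is a self-contained sketch of the read-then-append pattern described above (the file path and URLs are made up for the demo):</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "strings.txt")
with open(path, "w") as fh:
    fh.write("http://a.example\n")            # pre-existing entry

str_match = ["http://a.example", "http://b.example", "http://b.example"]

with open(path, "r") as fh:                    # read-only pass
    lines = set(line.rstrip() for line in fh)

with open(path, "a") as fh:                    # append-only pass
    for string in set(str_match):              # set() drops duplicate matches
        if string not in lines:
            fh.write(string + "\n")
            lines.add(string)

with open(path) as fh:
    final = sorted(line.rstrip() for line in fh)
# final == ['http://a.example', 'http://b.example']
```

<p>Running the middle section a second time leaves the file unchanged, which is exactly the behaviour the question asks for.</p>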
| 2 | 2016-08-17T07:29:52Z | [
"python"
] |
Writing string to file from loop if does not already exist | 38,990,204 | <p>This has been asked and answered, and I've read lots of those posts... but for some reason my code isn't working. Hopefully someone can help.</p>
<p>The code matches strings within a variable and then attempts to write those strings to a file if they don't already exist within that file. </p>
<p>Code doesn't work. Any help please?</p>
<pre><code>#this works
str_match = re.findall(r'(https?://[^\s]+)', input)
if str_match:
    with open(datestamp + "_strings.txt", "a+") as text_file:
for string in str_match:
#THIS DOES NOT WORK -- WITH OR WITHOUT THE '\n'
#WITH, ALWAYS SAYS IT EXISTS AND WRITES NOTHING
if (string + '\n') in text_file:
print "str exists"
else:
print "Doesn't exist"
text_file.write(string + '\n')
</code></pre>
<p>Without it, it says the string doesn't exist and writes it to the file multiple times.</p>
<pre><code>if string in text_file:
print "str exists"
else:
print "Doesn't exist"
text_file.write(string + '\n')
</code></pre>
<p>If I look at the string that's written using vim, it looks like: mystring$</p>
<p>(the $ is appended at the end of each string -- and no, adding <strong>+"$"</strong> doesn't work)</p>
<p>Any help please?</p>
| 0 | 2016-08-17T07:08:48Z | 38,990,622 | <p>The problem here is that files don't (really) support membership tests with the <code>in</code> operator.</p>
<p>The reason why no error is thrown is because files are iterable and thus <code>x in file</code> evaluates to <code>any(x is e or x == e for e in file)</code> (<a href="https://docs.python.org/3/reference/expressions.html#membership-test-operations" rel="nofollow">docs</a>). This operation works only once, because after the first time the file has been exhausted and no more lines can be read (until you write new ones).</p>
<p>The solution to your problem is to read all the lines in the file into a list or set and use that for membership tests:</p>
<pre><code>all_lines= set(text_file)
...
if (string + '\n') in all_lines:
</code></pre>
<hr>
<p>However, this does not explain why <code>if (string + '\n') in text_file:</code> always returns <code>True</code>. In fact it should always (after the first iteration) return <code>False</code>, and that's exactly what happens when I run your code on my machine. There's probably something writing to the file in other parts of your code.</p>
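<p>The exhaustion behaviour is easy to demonstrate with a throwaway file (paths here are temporary, created just for the demo):</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as fh:
    fh.write("alpha\nbeta\n")

with open(path) as fh:
    first = "beta\n" in fh    # True: reads lines until it finds a match
    second = "alpha\n" in fh  # False: the file is already exhausted

print(first, second)   # True False
```

<p>Reading the lines into a set up front avoids this and makes repeated membership tests cheap.</p>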
| 3 | 2016-08-17T07:31:40Z | [
"python"
] |
Python: issue while trying to print a string | 38,990,457 | <pre><code>import os
from os import path
import datetime
from datetime import date, time, timedelta
import time
def main():
# Print the name of the OS
print(os.name)
# Check for item existence and type
print("Item exists: " + (str(path.exists("textfile.txt"))))
#print("Item is a file: " + str(path.isfile("textfile.txt")))
#print("Item is a directory: " + str(path.isdir("textfile.txt")))
if __name__ == "__main__":
main()
</code></pre>
<p>ERROR:</p>
<blockquote>
<p>print ("Item exists: " + (str(path.exists("textfile.txt"))))<br>
TypeError: 'str' object is not callable</p>
</blockquote>
| 0 | 2016-08-17T07:22:42Z | 38,990,544 | <p>According to the error you got, somewhere in your code (in a part that doesn't appear in your post) you assigned a value to the name <code>str</code>, shadowing the built-in type...</p>
<p><strong>ADDED</strong> after reading comments:</p>
<p>This code runs fine. Maybe you run it from an IDE like Spyder, where the shell remembers variables you assigned earlier or in code that was executed before. Try running it from the Windows command prompt ("DOS" shell) and see if the error occurs again. If it doesn't occur there, restart your IDE and you may find it's gone there too</p>
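<p>A minimal reproduction of the shadowing problem (the variable name is the whole point here):</p>

```python
str = "textfile.txt"   # accidentally shadows the built-in type

try:
    str(42)            # tries to call a string -> TypeError
    failed = False
except TypeError:
    failed = True

del str                # remove the shadow; the built-in is visible again
print(failed, str(42))   # True 42
```

<p>In an IDE shell that assignment can linger from an earlier run, which is why restarting the session makes the error disappear.</p>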
| 5 | 2016-08-17T07:27:26Z | [
"python",
"printing"
] |
Django send mail and sms after 6 hours of user sign up | 38,990,472 | <p>I am using django 1.9. I am working on requirement where we need to send users reminder to activate their account by email after x hours h/she has signed on our platform.</p>
<p>I have send_mail task and I have configured <a href="https://github.com/ui/django-rq" rel="nofollow">Django RQ</a> to send mails. </p>
<p>One way I can think of is to set up a cron job using <a href="https://github.com/ui/rq-scheduler" rel="nofollow">Django RQ scheduler</a> which runs every 5 mins to check for users who signed up 6 hours before. </p>
<p>Is there any better way to do this ?</p>
| 0 | 2016-08-17T07:23:21Z | 38,992,398 | <p>Looking at the django-rq documentation, if you have both <code>django-rq</code> and <code>rq-scheduler</code> installed, it seems you don't need to create a periodic job. Just schedule your job during sign on. </p>
<pre><code># From django-rq docs
import django_rq
# Select the queue you want to use, here it uses default queue
scheduler = django_rq.get_scheduler('default')
# HERE: Set the datetime X HOURS from sign on
job = scheduler.enqueue_at(datetime(2020, 10, 10), func)
</code></pre>
<p>Reference: <a href="https://github.com/ui/django-rq" rel="nofollow">django-rq docs Support for RQ Scheduler</a> section</p>
<p>Note: I haven't used Django-RQ myself. </p>
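<p>A sketch of how the schedule time could be computed relative to sign-up; the delay, the function names and the user field below are assumptions for illustration, and only <code>get_scheduler</code>/<code>enqueue_at</code> come from the django-rq docs:</p>

```python
from datetime import datetime, timedelta

REMINDER_DELAY = timedelta(hours=6)

def reminder_time(signed_up_at):
    """Compute when the activation reminder should fire."""
    return signed_up_at + REMINDER_DELAY

# At sign-up you would then schedule it roughly like this (untested sketch):
#   scheduler = django_rq.get_scheduler('default')
#   scheduler.enqueue_at(reminder_time(user.date_joined), send_reminder_mail, user.pk)

print(reminder_time(datetime(2020, 1, 1, 12, 0)))   # 2020-01-01 18:00:00
```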
| 1 | 2016-08-17T09:08:36Z | [
"python",
"django",
"email"
] |
How to combine two columns to make a third column taking property from group by method? | 38,990,565 | <p>I don't know if I can ask this question clearly, but here I try! </p>
<p>I have a classification problem in which I have to predict a person's credit score based on his income group. I have used this code:</p>
<pre><code>dta.groupby(['income_bracket'])['credit_score'].get_values()
</code></pre>
<p>Now I have a data table as usual, which looks like this:</p>
<pre><code>income_bracket credit_scores
'very low' 0.0 2340
1.0 456
'moderate' 0.0 1234
1.0 657
'high' 0.0 54
1.0 657
'very high' 0.0 9
1.0 1234
</code></pre>
<p>Explanation: The data above is saying that, for example, a person with 'very low' income bracket having 0.0 credit score is 2340 and with credit score 1.0 is 456.</p>
<p>Now, is there any way that I can do something like: if a person is in income_bracket, then predict that his credit_score will be MAX(of the credit score in that income bracket)? For example, if someone has an income bracket of 'high', then I can predict his credit_score will be MAX(54,657) = 657 = 1.0 </p>
<p>The desired output I want: for new data with income_group = 'high', predict credit_score = 1 (because I know that in the 'high' income group the max count, 657, belongs to credit score 1.0).</p>
<p>Please help me achieve this.</p>
| 1 | 2016-08-17T07:28:12Z | 38,990,735 | <p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow"><code>idxmax</code></a> to get the index of the row with the maximum <code>val</code> in each group, and then select those rows with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p>
<pre><code>#dta.reset_index(inplace=True)
#dta = dta.reset_index().rename(columns={0: 'val'})
print (dta)
income_bracket credit_score val
0 very low 0.0 2340
1 very low 1.0 456
2 moderate 0.0 1234
3 moderate 1.0 657
4 high 0.0 54
5 high 1.0 657
6 very high 0.0 9
7 very high 1.0 1234
</code></pre>
<pre><code>print (dta.groupby(['income_bracket'], sort=False)['val'].idxmax())
income_bracket
very low 0
moderate 2
high 5
very high 7
Name: val, dtype: int64
#select all columns
print (dta.ix[dta.groupby(['income_bracket'], sort=False)['val'].idxmax()])
income_bracket credit_score val
0 very low 0.0 2340
2 moderate 0.0 1234
5 high 1.0 657
7 very high 1.0 1234
#select columns income_bracket and credit_score
print (dta.ix[dta.groupby(['income_bracket'], sort=False)['val'].idxmax(),
['income_bracket','credit_score']])
income_bracket credit_score
0 very low 0.0
2 moderate 0.0
5 high 1.0
7 very high 1.0
#select column credit_score
print (dta.ix[dta.groupby(['income_bracket'], sort=False)['val'].idxmax(), 'credit_score'])
0 0.0
2 0.0
5 1.0
7 1.0
Name: credit_score, dtype: float64
</code></pre>
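<p>For readers without pandas at hand, the same group-wise argmax idea can be sketched in plain Python (the counts mirror the table in the question):</p>

```python
rows = [("very low",  0.0, 2340), ("very low",  1.0,  456),
        ("moderate",  0.0, 1234), ("moderate",  1.0,  657),
        ("high",      0.0,   54), ("high",      1.0,  657),
        ("very high", 0.0,    9), ("very high", 1.0, 1234)]

best = {}   # income bracket -> (credit score, count) with the largest count
for bracket, score, count in rows:
    if bracket not in best or count > best[bracket][1]:
        best[bracket] = (score, count)

predicted = {bracket: score for bracket, (score, _) in best.items()}
print(predicted["high"])   # 1.0
```

<p>So a new person in the 'high' bracket would be predicted credit_score 1.0, matching the max(54, 657) reasoning in the question.</p>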
| 1 | 2016-08-17T07:38:33Z | [
"python",
"pandas",
"indexing",
"group-by",
"max"
] |
What is the best way to evaluate whether a string is a stringified dictionary or not | 38,990,616 | <p>I could use a regex, but am wondering if there is a better way.</p>
<p>For example, a value might be returned either as:</p>
<p>1.) <code>'{"username": "joe.soap", "password": "pass@word123"}'</code></p>
<p>or</p>
<p>2.) <code>'https://www.url-example.com'</code></p>
<p>In the case of 1.) I want to convert the contents to an actual dictionary. I am happy that I know how to do the conversion. I am stuck on how to identify 1.) without resorting to the use of regex.</p>
<p><strong>EDIT:</strong> Because I was asked, this is how I plan to make the conversion:</p>
<pre><code>import ast
if string_in_question == '{"username": "joe.soap", "password": "pass@word123"}':
    return ast.literal_eval(string_in_question)
else:
    return valid_command_returns
</code></pre>
| 3 | 2016-08-17T07:31:23Z | 38,990,684 | <p>Don't bother with a complex regex, simply try to convert the string to a dictionary.</p>
<p>I'm assuming you are using <code>json.loads</code> to do it. If the string doesn't represent a dictionary <code>json.loads</code> will raise an exception.</p>
<p>Note that if you do use <code>json.loads</code> the conversion will fail if the "keys" are not surrounded with double-quotes, ie trying to convert the string <code>"{'username': 'joe.soap', 'password': 'pass@word123'}"</code> to a dictionary will raise an exception as well.</p>
<pre><code>import json
a = '{"username": "joe.soap", "password": "pass@word123"}'
b = 'https://www.url-example.com'
try:
json.loads(a)
except ValueError:
print("{} is not a dictionary".format(a))
try:
json.loads(b)
except ValueError:
print("{} is not a dictionary".format(b))
</code></pre>
<p>The output of this program will be
<code>https://www.url-example.com is not a dictionary</code></p>
<p><strong>UPDATE</strong>:</p>
<p>When using <code>ast.literal_eval</code> the concept is the same, but you will have to catch <code>SyntaxError</code> instead of <code>ValueError</code>. Note that with <code>literal_eval</code> both single and double quotes are acceptable.</p>
<pre><code>import ast
a = '{"username": "joe.soap", "password": "pass@word123"}'
b = "{'username': 'joe.soap', 'password': 'pass@word123'}"
c = 'https://www.url-example.com'
try:
ast.literal_eval(a)
except SyntaxError:
print("{} is not a dictionary".format(a))
try:
ast.literal_eval(b)
except SyntaxError:
print("{} is not a dictionary".format(b))
try:
ast.literal_eval(c)
except SyntaxError:
print("{} is not a dictionary".format(c))
</code></pre>
<p>Same as before, output is <code>https://www.url-example.com is not a dictionary</code>.</p>
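<p>If both checks are needed in one place, a small helper (my own sketch, not from either module's docs) can catch both exception types and also verify that the parsed value really is a dict:</p>

```python
import ast

def parse_dict(s):
    """Return the dict encoded in s, or None if s is not a dict literal."""
    try:
        value = ast.literal_eval(s)
    except (ValueError, SyntaxError):
        return None
    return value if isinstance(value, dict) else None

print(parse_dict('{"username": "joe.soap"}'))      # {'username': 'joe.soap'}
print(parse_dict('https://www.url-example.com'))   # None
```

<p>Catching both exception types is the safer choice, since <code>ast.literal_eval</code> can raise <code>ValueError</code> as well as <code>SyntaxError</code> depending on the input.</p>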
| 8 | 2016-08-17T07:35:50Z | [
"python",
"dictionary"
] |
History on spyder kills the kernel | 38,990,620 | <p>My Spyder has worked for months but it has suddenly stopped working.
Here is the internal log.</p>
<pre><code>/Users/Name_User/anaconda/lib/python2.7/site-packages/nbformat/current.py:19: UserWarning: nbformat.current is deprecated.
- use nbformat for read/write/validate public API
- use nbformat.vX directly to composing notebooks of a particular version
""")
Traceback (most recent call last):
[ . . . ]
File "/Users/Name_User/anaconda/lib/python2.7/site-packages/spyderlib/widgets/shell.py", line 494, in load_history
if rawhistory[1] != self.INITHISTORY[1]:
IndexError: list index out of range
</code></pre>
<p>I don't understand the meaning of the error. Has anyone ever faced this problem?
Thanks ! </p>
| 0 | 2016-08-17T07:31:37Z | 39,011,842 | <p>The cause of this is that the history file somehow got truncated or otherwise corrupted. The history file is called <code>history.py</code> and it lives in the Spyder configuration directory, which is <code>/Users/Name_User/.spyder2</code> or similar. To solve the problem, delete that file; Spyder will recreate it. </p>
| 0 | 2016-08-18T07:03:05Z | [
"python",
"kernel",
"anaconda",
"spyder"
] |
Using p5's functions without the setup/draw format | 38,990,679 | <p>I'm relatively new to Javascript, and I was tinkering around with the <a href="https://p5js.org/" rel="nofollow">p5</a> library. In Python I can import a single function from a library using the <code>from x import y</code> statement:</p>
<pre><code>from subprocess import check_output
</code></pre>
<p>My question is: is there a way to do the same thing with p5 without using the <code>setup/draw</code> format? Say, for example, I want to use the <code>noise</code> function in one of my scripts; can I import and use that function only? </p>
| 1 | 2016-08-17T07:35:22Z | 38,997,669 | <p>With questions like this, it's best to just put together a super simple test to try things out:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Noise Test</title>
<script src="p5.js" type="text/javascript"></script>
<script>
console.log(noise(100));
</script>
</head>
<body>
Noise Test, check the console.
</body>
</html>
</code></pre>
<p>Here we're loading the <code>p5.js</code> library, and then trying to call the <code>noise()</code> function. If we run that, we get an error:</p>
<pre><code>Did you just try to use p5.js's noise() function? If so, you may want to move it into your sketch's setup() function.
For more details, see: https://github.com/processing/p5.js/wiki/Frequently-Asked-Questions#why-cant-i-assign-variables-using-p5-functions-and-variables-before-setup
index.html:9 Uncaught ReferenceError: noise is not defined
</code></pre>
<p>We can go to that url to read about what's going on:</p>
<blockquote>
<p>In global mode, p5 variable and function names are not available
outside setup(), draw(), mousePressed(), etc. (Except in the case
where they are placed inside functions that are called by one of these
methods.)</p>
<p>The explanation for this is a little complicated, but it has to do
with the way the library is setup in order to support both global and
instance mode. To understand what's happening, let's first look at the
order things happen when a page with p5 is loaded (in global mode).</p>
<ol>
<li><p>Scripts in <code>&lt;head&gt;</code> are loaded.</p></li>
<li><p><code>&lt;body&gt;</code> of the HTML page loads (when this is complete, the onload event fires, which then triggers step 3).</p></li>
<li><p>p5 is started, all functions are added to the global namespace.</p></li>
</ol>
<p>So the issue is that the scripts are loaded and evaluated before p5 is
started, when it's not yet aware of the p5 variables. If we try to
call them here, they will cause an error. However, when we use p5
function calls inside setup() and draw() this is ok, because the
browser doesn't look inside functions when the scripts are first
loaded. This is because the setup() and draw() functions are not
called in the user code, they are only defined, so the stuff inside of
them isn't run or evaluated yet.</p>
<p>It's not until p5 is started up that the setup() function is actually
run (p5 calls it for you), and at this point, the p5 functions exist
in the global namespace.</p>
</blockquote>
<h2>So, no, you can't use <code>p5.js</code> functions without the <code>setup()</code> and <code>draw()</code> functions.</h2>
<p>That being said, you could simply define a callback function that you call from <code>setup()</code>, that way you know the <code>p5.js</code> functions are available:</p>
<pre><code><script src="p5.js" type="text/javascript"></script>
<script>
function doYourStuff(){
console.log(noise(100));
}
function setup(){
doYourStuff();
}
</script>
</code></pre>
<h2>Alternatively, you could use <a href="https://github.com/processing/p5.js/wiki/p5.js-overview#instantiation--namespace" rel="nofollow">instance mode</a>.</h2>
<p>This involves manually creating an instance of <code>p5</code> and using that to call functions directly instead of calling them globally:</p>
<pre><code><script src="p5.js" type="text/javascript"></script>
<script>
var p5 = new p5();
console.log(p5.noise(100));
</script>
</code></pre>
<h2>You could also dig through the source.</h2>
<p>There isn't a good way to import a single function, but you could just do a ctrl+f search of the unminified source for <code>p5.js</code> and look for the <code>noise()</code> function. Then you could copy that into its own file (and any other helper functions it relies on). But that's probably going to be more work than simply using one of the above approaches, and you would also need to comply with p5.js's license if you redistribute the copied code.</p>
| 2 | 2016-08-17T13:12:36Z | [
"javascript",
"python",
"import",
"p5.js"
] |
openCV 3.1.0 Cannot load .dat | 38,990,723 | <p>I moved from python 2.7 to python 3.5</p>
<p>So, I had to use openCV 3.x.x.</p>
<p>The problem is that I cannot load SVM data.</p>
<p>For openCV 2.x.x, I could use</p>
<pre><code>svm.load('filename')
</code></pre>
<p>However, for openCV 3.x.x, there is no load method.</p>
<p>I read <a href="http://stackoverflow.com/questions/38182132/how-to-load-svm-data-from-file-in-opencv-3-1">this</a> article.</p>
<p>But I could not find a method,</p>
<pre><code>cv2.ml.SVM_load()
</code></pre>
<p>I think 3.1.0 is the latest version, and I use it.</p>
<pre><code>>>> cv2.__version__
'3.1.0'
</code></pre>
<p>How can I load svm data?</p>
| 0 | 2016-08-17T07:37:46Z | 38,991,733 | <p>OK, some additional info to go with my <a href="http://stackoverflow.com/a/38956656/3524844">answer</a>:</p>
<p>Use the master branch of OpenCV (currently 3.1.0-dev) </p>
<pre class="lang-py prettyprint-override"><code>>>> import cv2
>>> cv2.__version__
'3.1.0-dev'
</code></pre>
<p>there is the <a href="http://docs.opencv.org/trunk/d1/d2d/classcv_1_1ml_1_1SVM.html#a7b05db6110aec2246f2b31363937539c" rel="nofollow">method</a> you're looking for</p>
<pre><code>SVM_load(...)
SVM_load(filepath) -> retval
</code></pre>
<p>in the output of</p>
<pre><code>>>> help(cv2.ml)
</code></pre>
<p>I use it in production with Python 3.4.3+. </p>
| 0 | 2016-08-17T08:32:29Z | [
"python",
"opencv"
] |
How to resolve attribute error in python | 38,991,025 | <p>To begin, I'll say that I was looking for the answer but couldn't find it, so sorry for such a basic question. I created a program with TTS and a global variable called "list_merge", but most of you said that global variables are BAD, so I decided to put the list in <code>__init__</code> instead. PS. ignore the whitespace; it exists only because I copied the code here.</p>
<p>the error is:
AttributeError: 'Ver2ProjectWithTTS' object has no attribute 'list_merge'</p>
<pre><code>import json
import xml.etree.ElementTree as et
import pyttsx
from openpyxl import load_workbook
class Ver2ProjectWithTTS(object):
def __init__(self):
self.read_json_file()
self.read_xml_file()
self.say_something()
self.list_merge = []
def read_json_file(self):
with open("json-example.json", 'r') as df:
json_data = json.load(df)
df.close()
for k in json_data['sentences']:
text_json = k['text']
speed_json = int(k['speed'])
volume_json = float(k['volume'])
dict_json = {'text': text_json, 'speed': speed_json, 'volume': volume_json}
self.list_merge.append(dict_json)
def read_xml_file(self):
tree = et.parse('xml-example.xml')
root = tree.getroot()
for k in range(0, len(root)):
text_xml = root[k][0].text
speed_xml = int(root[k][1].text)
volume_xml = float(root[k][2].text)
dict_xml = {'text': text_xml, 'speed': speed_xml, 'volume': volume_xml}
self.list_merge.append(dict_xml)
def say_something(self):
for item in self.list_merge:
engine = pyttsx.init()
engine.getProperty('rate')
engine.getProperty('volume')
engine.setProperty('rate', item['speed'])
engine.setProperty('volume', item['volume'])
            engine.say(item['text'])
engine.runAndWait()
if __name__ == '__main__':
a = Ver2ProjectWithTTS()
</code></pre>
<p>I'm getting
AttributeError: 'Ver2ProjectWithTTS' object has no attribute 'list_merge'</p>
<p>Any ideas how to avoid this error? I'm not good at object-oriented programming yet and I just can't move on without fixing this. PS. with the global variable declared before <code>__init__</code> it worked properly.
Thanks for help :)</p>
| 0 | 2016-08-17T07:54:39Z | 38,991,145 | <p>You have to set it first before you use it:</p>
<pre><code>class Ver2ProjectWithTTS(object):
def __init__(self):
# first set it
self.list_merge = []
self.read_json_file()
self.read_xml_file()
self.say_something()
</code></pre>
<p>Anyway don't do any advanced logic in constructors, it's not a good practice. Make a method instead:</p>
<pre><code>class Ver2ProjectWithTTS(object):
def __init__(self):
# first set it
self.list_merge = []
def do_the_job(self):
self.read_json_file()
self.read_xml_file()
self.say_something()
...
instance = Ver2ProjectWithTTS()
instance.do_the_job()
</code></pre>
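<p>The ordering issue can be reproduced with a minimal sketch:</p>

```python
class Broken(object):
    def __init__(self):
        self.use_it()      # runs before the attribute below exists
        self.items = []

    def use_it(self):
        self.items.append(1)

try:
    Broken()
    raised = False
except AttributeError:
    raised = True

print(raised)   # True
```

<p>Swapping the two lines in <code>__init__</code> makes the error disappear, which is exactly the fix shown in the first snippet.</p>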
| 3 | 2016-08-17T08:01:25Z | [
"python",
"python-2.7",
"object",
"text-to-speech"
] |
Pytest assert of returned value from method | 38,991,156 | <p>This works as expected:</p>
<pre><code>def my_method():
    return True
def test_method():
assert my_method()
</code></pre>
<p>But this not:</p>
<pre><code>assert filecmp.cmp(path1, path2)
</code></pre>
<p>Instead I get:</p>
<pre><code>AssertionError: assert <function cmp at 0x1042db840>((((('/Users/vital...my-path
</code></pre>
<p>Of course I can assign result (<code>True</code> or <code>False</code> from <code>filecmp.cmp()</code>) to variable and <code>assert</code> this variable, but why <code>assert</code> works for first method but not for second? And maybe is there a way to <code>assert</code> from <code>filecmp.cmp()</code>?</p>
| 1 | 2016-08-17T08:01:54Z | 38,991,224 | <p>Everything seems right. This looks like the regular py.test output when an <code>assert</code> fails.</p>
<p>Are <code>path1</code> and <code>path2</code> really equal? Try</p>
<pre><code>assert filecmp.cmp(path1, path1)
</code></pre>
<p>to see if the <code>assert</code> statement itself works.</p>
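<p>A quick self-contained check of <code>filecmp.cmp</code> behaviour (temporary files created just for the demo; note the <code>shallow</code> parameter, which defaults to True and then compares <code>os.stat</code> signatures before file contents):</p>

```python
import filecmp
import os
import tempfile

d = tempfile.mkdtemp()
p1 = os.path.join(d, "a.txt")
p2 = os.path.join(d, "b.txt")
p3 = os.path.join(d, "c.txt")

with open(p1, "w") as fh:
    fh.write("same contents\n")
with open(p2, "w") as fh:
    fh.write("same contents\n")
with open(p3, "w") as fh:
    fh.write("different\n")

same = filecmp.cmp(p1, p2, shallow=False)   # True: byte-identical files
diff = filecmp.cmp(p1, p3, shallow=False)   # False
print(same, diff)   # True False
```

<p>If this returns False for your paths, the files really do differ, and the failing assert is correct.</p>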
| 1 | 2016-08-17T08:05:52Z | [
"python",
"python-3.x",
"py.test"
] |
django:OperationalError at /admin/login/ unable to open database file | 38,991,164 | <p>I deploy my django project in IIS 6. When I log in to the admin page using the superuser name and password, it produces the following error:</p>
<pre><code>
Environment:
Request Method: POST
Request URL: http://localhost/admin/login/?next=/admin/
Django Version: 1.9.6
Python Version: 2.7.11
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "D:\python\lib\site-packages\django\core\handlers\base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "D:\python\lib\site-packages\django\core\handlers\base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "D:\python\lib\site-packages\django\views\decorators\cache.py" in _wrapped_view_func
57. response = view_func(request, *args, **kwargs)
File "D:\python\lib\site-packages\django\contrib\admin\sites.py" in login
413. return login(request, **defaults)
File "D:\python\lib\site-packages\django\contrib\auth\views.py" in inner
49. return func(*args, **kwargs)
File "D:\python\lib\site-packages\django\views\decorators\debug.py" in sensitive_post_parameters_wrapper
76. return view(request, *args, **kwargs)
File "D:\python\lib\site-packages\django\utils\decorators.py" in _wrapped_view
149. response = view_func(request, *args, **kwargs)
File "D:\python\lib\site-packages\django\views\decorators\cache.py" in _wrapped_view_func
57. response = view_func(request, *args, **kwargs)
File "D:\python\lib\site-packages\django\contrib\auth\views.py" in login
76. auth_login(request, form.get_user())
File "D:\python\lib\site-packages\django\contrib\auth\__init__.py" in login
110. request.session.cycle_key()
File "D:\python\lib\site-packages\django\contrib\sessions\backends\base.py" in cycle_key
305. self.create()
File "D:\python\lib\site-packages\django\contrib\sessions\backends\db.py" in create
53. self.save(must_create=True)
File "D:\python\lib\site-packages\django\contrib\sessions\backends\db.py" in save
86. obj.save(force_insert=must_create, using=using)
File "D:\python\lib\site-packages\django\db\models\base.py" in save
708. force_update=force_update, update_fields=update_fields)
File "D:\python\lib\site-packages\django\db\models\base.py" in save_base
736. updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "D:\python\lib\site-packages\django\db\models\base.py" in _save_table
820. result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "D:\python\lib\site-packages\django\db\models\base.py" in _do_insert
859. using=using, raw=raw)
File "D:\python\lib\site-packages\django\db\models\manager.py" in manager_method
122. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "D:\python\lib\site-packages\django\db\models\query.py" in _insert
1039. return query.get_compiler(using=using).execute_sql(return_id)
File "D:\python\lib\site-packages\django\db\models\sql\compiler.py" in execute_sql
1060. cursor.execute(sql, params)
File "D:\python\lib\site-packages\django\db\backends\utils.py" in execute
79. return super(CursorDebugWrapper, self).execute(sql, params)
File "D:\python\lib\site-packages\django\db\backends\utils.py" in execute
64. return self.cursor.execute(sql, params)
File "D:\python\lib\site-packages\django\db\utils.py" in __exit__
95. six.reraise(dj_exc_type, dj_exc_value, traceback)
File "D:\python\lib\site-packages\django\db\backends\utils.py" in execute
64. return self.cursor.execute(sql, params)
File "D:\python\lib\site-packages\django\db\backends\sqlite3\base.py" in execute
323. return Database.Cursor.execute(self, query, params)
Exception Type: OperationalError at /admin/login/
Exception Value: unable to open database file
</code></pre>
<p>Here is my settings.py</p>
<pre><code>
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'C:\\inetpub\\wwwroot\\untitled5\\db\\db.sqlite3',
}
}
</code></pre>
<p>I tried to change the db file's permissions, but it just didn't work</p>
| -1 | 2016-08-17T08:02:17Z | 38,991,299 | <p>Use slashes instead of backslashes in the db file path <a href="https://docs.djangoproject.com/en/1.10/ref/settings/#name" rel="nofollow">as is stated here</a>.</p>
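<p>Applied to the settings from the question, that change looks like this (same path as in the question, slashes flipped):</p>

```python
# Forward slashes work on Windows too and avoid backslash-escaping mistakes.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'C:/inetpub/wwwroot/untitled5/db/db.sqlite3',
    }
}
```

<p>If the error persists, also check that the IIS worker process has write permission on both the file and its containing directory, since SQLite creates journal files next to the database.</p>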
| 2 | 2016-08-17T08:09:25Z | [
"python",
"django",
"windows",
"iis-6"
] |
Use a for loop/ accumulator pattern | 38,991,168 | <p>How do I write a function emphasize() that takes a string s as input and prints it with spaces inserted between adjacent letters? This is what I tried</p>
<pre><code>def emphasize (s):
for aWord in s:
print(s.replace [1:-1])
</code></pre>
| 0 | 2016-08-17T08:02:38Z | 38,991,463 | <p>You can use <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow">.join</a> to insert a space between the characters of the string:</p>
<pre><code>def emphasize(s):
print(" ".join(s))
</code></pre>
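<p>For completeness, a runnable check of the one-liner:</p>

```python
def emphasize(s):
    print(" ".join(s))

emphasize("hello")                       # prints: h e l l o
print(" ".join("hello") == "h e l l o") # True
```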
| 1 | 2016-08-17T08:17:53Z | [
"python",
"string"
] |
Numpy Softmax - The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() | 38,991,216 | <p>I am using a softmax function in getting an output from a neural network and getting the minimum value as the output in calculating the error.</p>
<p>However, if the outputs are all the same, e.g. [0, 0, 0], the output of the softmax function is [0.33, 0.33, 0.33].</p>
<p>So when selecting the minimum from this like, </p>
<pre><code> output = softmax(np.dot(hs,HO))
tarminout = np.subtract(target,output)
mine = min(tarminout)
mine = 0.5 * np.power(mine,2)
finalError += mine
</code></pre>
<p>It gives the following error, which I assume is because there is more than one equal minimum value,</p>
<blockquote>
<p>Traceback (most recent call last):<br>
File "ann.py", line 234, in module</p>
<p>learn()<br>
File "ann.py", line 97, in learn</p>
<p>mine = min(tarminout)</p>
<p>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p>
</blockquote>
<p>How can I get past this when there is more than one equal minimum value, by selecting just one of them?</p>
<p>Thanks</p>
| 0 | 2016-08-17T08:05:27Z | 39,022,221 | <p>The answer's buried in the comments above: your error is likely the result of passing a multi-dimensional ndarray to the standard Python min(), which doesn't understand them; min() iterates over the first axis and compares whole rows with <code>&lt;</code>, producing boolean arrays whose truth value is ambiguous.</p>
<p>Way #1: call np.min instead of min</p>
<p>Way #2 (not recommended): flatten your array, min(tarminout.ravel())</p>
<p>Way #1 is preferred, use numpy operators on numpy arrays</p>
| 0 | 2016-08-18T15:33:40Z | [
"python",
"arrays",
"numpy"
] |
How to query XML node using ElementTree in python | 38,991,456 | <p>I have the following example XML tree:</p>
<pre><code><main>
<section>
<list key="capital" value="sydney">
<items>
<item id="abc-123"></item>
<item id="abc-345"></item>
</items>
</list>
<list key="capital" value="tokyo">
<items>
<item id="def-678"></item>
<item id="def-901"></item>
</items>
</list>
</section>
</main>
</code></pre>
<p>Do you know how to run a query that will extract the "items" node under "list" with key="capital" and value="tokyo" (which should extract item nodes with id="def-678" and id="def-901")?</p>
<p>Thanks so much for your help!</p>
| 0 | 2016-08-17T08:17:34Z | 38,991,582 | <p>You can use the XPath expressions that <code>xml.etree</code> supports (see <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#supported-xpath-syntax" rel="nofollow">the documentation</a>) via the <code>find()</code> or <code>findall()</code> methods:</p>
<pre><code>from xml.etree import ElementTree as ET
raw = '''your xml string here'''
root = ET.fromstring(raw)
result = root.findall(".//list[@key='capital'][@value='tokyo']/items/item")
</code></pre>
<p><strong>console test output :</strong></p>
<pre><code>>>> for r in result:
... print ET.tostring(r)
...
<item id="def-678" />
<item id="def-901" />
</code></pre>
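<p>As a small extension (my own addition, using the question's XML with the closing tag fixed), the matched ids can be pulled out with <code>Element.get</code>:</p>

```python
import xml.etree.ElementTree as ET

raw = """<main><section>
<list key="capital" value="sydney"><items>
<item id="abc-123"/><item id="abc-345"/></items></list>
<list key="capital" value="tokyo"><items>
<item id="def-678"/><item id="def-901"/></items></list>
</section></main>"""

root = ET.fromstring(raw)
items = root.findall(".//list[@key='capital'][@value='tokyo']/items/item")
print([item.get("id") for item in items])   # ['def-678', 'def-901']
```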
| 0 | 2016-08-17T08:24:30Z | [
"python",
"xml",
"extract",
"elementtree"
] |
python - upload image into directory, save it and rename it | 38,991,461 | <p>I want to upload an image "camera.jpg" and save it to "static/photo/" directory and then rename the "camera.jpg" file to "niloofar.jpg" file.</p>
<p>How can I do that?</p>
<p><strong>app.py:</strong></p>
<pre><code>from flask import Flask, request, url_for, render_template, make_response, redirect
from flask_sqlalchemy import SQLAlchemy
from werkzeug.utils import secure_filename
UPLOAD_FOLDER = '/home/me/my_flask_project/static/photo/'
ALLOWED_EXTENSIONS = set(['txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'])
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://myusername:mypassword@localhost/mydbname'
db = SQLAlchemy(app)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
@app.route('/upload/', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
#return redirect(url_for('upload_file', filename=filename))
os.rename(UPLOAD_FOLDER + filename, 'niloofar.jpg')
return render_template('upload.html')
if __name__ == '__main__':
app.run()
</code></pre>
<p><strong>upload.html:</strong></p>
<pre><code><!doctype html>
<title>Upload new File</title>
<h1>Upload new File</h1>
<form action="" method=post enctype=multipart/form-data>
<p><input type=file name=file>
<input type=submit value=Upload>
</form>
</code></pre>
<p>I also tried this script below, but it does not even save the image into the "photo" directory:</p>
<pre><code>import os, sys

@app.route('/upload/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        file = request.files['file']
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            #return redirect(url_for('upload_file', filename=filename))
            os.rename(UPLOAD_FOLDER + filename, 'niloofar.jpg')
    return render_template('upload.html')
</code></pre>
<p>What is the problem?</p>
| 0 | 2016-08-17T08:17:43Z | 38,991,671 | <p>Try giving the complete target path in <code>os.rename</code>. </p>
<p><code>os.rename(UPLOAD_FOLDER + filename, UPLOAD_FOLDER+'niloofar.jpg')</code></p>
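<p>To see why the full target path matters: a bare <code>'niloofar.jpg'</code> as the second argument is resolved against the current working directory, not the upload folder. The following self-contained sketch (the folder and file names here are invented for the demo, not taken from the answer) shows the corrected call, using <code>os.path.join</code> instead of string concatenation:</p>

```python
import os
import tempfile

# Stand-in for UPLOAD_FOLDER; a throwaway directory for the demo.
upload_folder = tempfile.mkdtemp()

# Simulate a freshly uploaded file.
uploaded = os.path.join(upload_folder, "camera.jpg")
open(uploaded, "w").close()

# Both arguments carry the full path, so the renamed file stays
# inside the upload folder regardless of the working directory.
os.rename(uploaded, os.path.join(upload_folder, "niloofar.jpg"))

print(os.listdir(upload_folder))
```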
| 1 | 2016-08-17T08:29:18Z | [
"python",
"file",
"upload",
"save",
"rename"
] |
Handling Bounce in Sendgrid smtp API in Python | 38,991,472 | <p>I am sending marketing emails through the SendGrid SMTP API via Python's EmailMultiAlternatives. I want to know how I can handle bounces directly from there to mark particular emails as undeliverable.</p>
<p>The code snippet is:</p>
<pre><code>def send1():
    text_content = 'Hi this is the text version'
    connection = get_connection(host=EMAIL_HOST,
                                port=EMAIL_PORT,
                                username=EMAIL_HOST_USER,
                                password=EMAIL_HOST_PASSWORD,
                                use_tls=EMAIL_USE_TLS)
    connection.open()
    subject = 'Inviting {0} to join the Business Network of SMEs'.format('surya')
    html_content = template.format('Surya')
    from_email = 'sp@abc.com'
    to = 'abc@gmail.com'
    msg = EmailMultiAlternatives(subject, text_content, from_email, [to], connection=connection)
    msg.attach_alternative(html_content, "text/html")
    msg.send()
    connection.close()
</code></pre>
<p>Is it possible to get the response here right after <code>msg.send()</code>, or is there some other way?</p>
| 0 | 2016-08-17T08:18:30Z | 39,147,370 | <p>The best way to respond to events like blocks and bounces is to implement the <a href="https://sendgrid.com/docs/API_Reference/Webhooks/event.html" rel="nofollow">event webhook</a>.</p>
<p>You can also poll for the data via the <a href="https://sendgrid.com/docs/API_Reference/Web_API_v3/bounces.html" rel="nofollow">bounces endpoint</a>.</p>
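<p>For illustration, the webhook POSTs a JSON list of event dicts, each carrying at least an <code>"event"</code> type and an <code>"email"</code> address. The helper below is a hedged sketch, not part of any official SendGrid client, of how such a payload could be reduced to the addresses to mark as undeliverable:</p>

```python
# Sketch of processing a SendGrid event-webhook payload. Only the
# "event"/"email" keys mirror SendGrid's documented format; the function
# name and sample data are invented for the demo.
def collect_undeliverable(events):
    """Return the addresses from events that bounced or were dropped."""
    bad_events = {"bounce", "dropped"}
    return [e["email"] for e in events if e.get("event") in bad_events]

sample_payload = [
    {"email": "ok@example.com", "event": "delivered"},
    {"email": "gone@example.com", "event": "bounce"},
    {"email": "full@example.com", "event": "dropped"},
]

print(collect_undeliverable(sample_payload))
```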
| 0 | 2016-08-25T13:59:10Z | [
"python",
"email",
"smtp",
"sendgrid"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,991,619 | <p>Here: </p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']

seen = []
output = []
for index in range(len(list1)):
    if list2[index] not in seen:
        seen.append(list2[index])
    else:
        output.append(list1[index])

print output
</code></pre>
| 2 | 2016-08-17T08:26:18Z | [
"python",
"list"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,991,663 | <p>An efficient way would be to use a <code>set</code>, which contains all already seen keys. A <code>set</code> will guarantee you an average lookup of <code>O(1)</code>. </p>
<p>So something like this should work:</p>
<pre><code>s = set()
result1 = []
result2 = []
for x, y in zip(list1, list2):
    if y in s:
        result1.append(x)
        result2.append(y)
    else:
        s.add(y)
</code></pre>
<p>Notice, this will create a new list. Shouldn't be a big problem though, since Python doesn't actually copy the strings, but only creates a pointer to the original string. </p>
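<p>As a quick self-contained check, reproducing the loop together with the question's data confirms the expected result:</p>

```python
# The answer's loop, run against the question's lists.
list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']

s = set()
result1, result2 = [], []
for x, y in zip(list1, list2):
    if y in s:          # y was seen before: keep this pair
        result1.append(x)
        result2.append(y)
    else:               # first encounter: remember it and drop the pair
        s.add(y)

print(result1)  # ['e3', 'e5', 'e6']
print(result2)  # ['h1', 'h1', 'h2']
```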
| 7 | 2016-08-17T08:28:53Z | [
"python",
"list"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,991,667 | <p>Just use a <code>set</code> object to lookup if the current value is already seen, like this</p>
<pre><code>>>> list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
>>> list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
>>>
>>> def filterer(l1, l2):
...     r1 = []
...     r2 = []
...     seen = set()
...     for e1, e2 in zip(l1, l2):
...         if e2 not in seen:
...             seen.add(e2)
...         else:
...             r1.append(e1)
...             r2.append(e2)
...     return r1, r2
...
>>> list1, list2 = filterer(list1, list2)
>>> list1
['e3', 'e5', 'e6']
>>> list2
['h1', 'h1', 'h2']
</code></pre>
<hr>
<p>If you are going to consume the elements one-by-one and if the input lists are pretty big, then I would recommend making a generator, like this</p>
<pre><code>>>> def filterer(l1, l2):
...     seen = set()
...     for e1, e2 in zip(l1, l2):
...         if e2 not in seen:
...             seen.add(e2)
...         else:
...             yield e1, e2
...
>>> list(filterer(list1, list2))
[('e3', 'h1'), ('e5', 'h1'), ('e6', 'h2')]
>>>
>>> zip(*filterer(list1, list2))
[('e3', 'e5', 'e6'), ('h1', 'h1', 'h2')]
</code></pre>
| 10 | 2016-08-17T08:29:09Z | [
"python",
"list"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,991,697 | <p>Use a set to keep track of values you've already encountered:</p>
<pre><code>seen = set()
index = 0
while index < len(list1):
    i1, i2 = list1[index], list2[index]
    if i2 in seen:
        index += 1
    else:
        seen.add(i2)
        del list1[index]
        del list2[index]
</code></pre>
| 4 | 2016-08-17T08:30:39Z | [
"python",
"list"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,991,706 | <p>I might be code golfing here, but I find this interesting:</p>
<pre><code>list1_new = [x for i, x in enumerate(list1) if list2[i] in list2[:i]]
print(list1_new)
# prints ['e3', 'e5', 'e6']
</code></pre>
<p>In case you are not familiar with list comprehensions, what happens here is the following (reading from the end):</p>
<ul>
<li>I check whether element <code>i</code> of <code>list2</code> exists in a slice of <code>list2</code> that contains all previous elements, <code>list2[:i]</code>.</li>
<li>If it does, I capture the corresponding element from <code>list1</code> (<code>x</code>) and store it in the new list I am creating, <code>list1_new</code>.</li>
</ul>
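<p>A possible companion, not in the original answer: the same slice test also collects the surviving <code>list2</code> entries. Note that each <code>list2[:i]</code> slice plus membership test makes this approach O(n²), unlike the set-based answers:</p>

```python
list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']

list1_new = [x for i, x in enumerate(list1) if list2[i] in list2[:i]]
# The same condition keeps the matching elements of list2 itself.
list2_new = [y for i, y in enumerate(list2) if y in list2[:i]]

print(list1_new)  # ['e3', 'e5', 'e6']
print(list2_new)  # ['h1', 'h1', 'h2']
```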
| 7 | 2016-08-17T08:31:03Z | [
"python",
"list"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,991,920 | <p>You can try:</p>
<pre><code>>>> list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
>>> list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
>>> repeat = list(set([x for x in list2 if list2.count(x) > 1]))
>>> print repeat
['h2', 'h1']
>>> l1=[]
>>> l2=[]
>>> for single_data in repeat:
...     indices = [i for i, x in enumerate(list2) if x == single_data]
...     del indices[0]
...     for index in indices:
...         l1.append(list1[index])
...         l2.append(list2[index])
>>> print l1
['e6', 'e3', 'e5']
>>> print l2
['h2', 'h1', 'h1']
</code></pre>
| 3 | 2016-08-17T08:43:52Z | [
"python",
"list"
] |
Remove first encountered elements from a list | 38,991,478 | <p>I have two Python lists with the same number of elements. The elements of the first list are unique, the ones in the second list - not necessarily so. For instance</p>
<pre><code>list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']
</code></pre>
<p>I want to remove all the "first encountered" elements from the second list and their corresponding elements from the first list. Basically, this means removing all unique elements <em>and</em> the first element of the duplicates. With the above example, the correct result should be</p>
<pre><code>>>>list1
['e3', 'e5', 'e6']
>>>list2
['h1', 'h1', 'h2']
</code></pre>
<p>That is, the element 'e1' was removed because its corresponding 'h1' was encountered for the first time, 'e2' was removed because 'h2' was seen for the first time, 'e3' was left because 'h1' was already seen, 'e4' was removed because 'h3' was seen for the first time, 'e5' was left because 'h1' was already seen, 'e6' was left because 'h2' was already seen, and 'e7' was removed because 'h4' was seen for the first time.</p>
<p>What would be an efficient way to solve this problem? The lists could contain thousands of elements, so I'd rather not make duplicates of them or run multiple loops, if possible.</p>
| 12 | 2016-08-17T08:18:46Z | 38,996,607 | <p>From a comment:</p>
<blockquote>
<p>I was hoping to avoid that and edit the lists in place</p>
</blockquote>
<p>I don't really advise doing this unless your code actually is running out of memory (or you reasonably expect that it will), but it's certainly possible:</p>
<pre><code>seen = set()
toidx = 0
for first, second in itertools.izip(list1, list2):
    if second in seen:
        list1[toidx] = first
        list2[toidx] = second
        toidx += 1
    else:
        seen.add(second)

del seen
del list1[toidx:]
del list2[toidx:]
</code></pre>
<p>Fans of C++ will recognise this as the erase-remove idiom.</p>
<p>The <code>del</code> might make a copy of the part of the list you're keeping, but at least it does them one at a time instead of needing to have all five collections in memory simultaneously (two input lists, two output lists, and the set <code>seen</code>).</p>
<p>There's no way to truncate a list without that copy being a possibility, so you could instead leave the lists at their full size but remember how many values are usable. In that case you should probably set the unusable values at the end to <code>None</code>, so that any removed elements which aren't referenced from elsewhere can be freed.</p>
<blockquote>
<p>The lists could contain thousands of elements</p>
</blockquote>
<p>If you're using a real computer, as opposed to some microscopic machine etched onto the head of a pin, then thousands of elements is nothing. A list requires approximately 8 bytes per element. Storing the same object in multiple lists doesn't need a copy of the object. So, using two extra lists for the outputs will occupy something of the order of 16 bytes per pair of inputs: 160kB for 10k elements. For scale, the browser I'm writing this answer on is currently using 1GB of RAM. Getting off SO while it runs is a far greater memory optimization than modifying the lists in place ;-)</p>
<p>Reducing memory usage can help with cache performance, though. And if you have hundreds of millions of elements, then in-place modification could be the difference between your code running or failing.</p>
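<p>For reference, a hedged Python 3 translation of the same erase-remove idiom (<code>itertools.izip</code> no longer exists there; the built-in <code>zip</code> is already lazy), run against the question's data:</p>

```python
list1 = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7']
list2 = ['h1', 'h2', 'h1', 'h3', 'h1', 'h2', 'h4']

seen = set()
toidx = 0
for first, second in zip(list1, list2):
    if second in seen:
        # Compact the kept pair to the front; toidx never passes the
        # read position, so overwriting is safe during iteration.
        list1[toidx] = first
        list2[toidx] = second
        toidx += 1
    else:
        seen.add(second)

# Truncate the tails in place once iteration is finished.
del list1[toidx:]
del list2[toidx:]

print(list1)  # ['e3', 'e5', 'e6']
print(list2)  # ['h1', 'h1', 'h2']
```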
| 4 | 2016-08-17T12:24:54Z | [
"python",
"list"
] |
Checking if a string is a prefix of a possible regex match | 38,991,497 | <p>I want to traverse a tree structure, but only those parts that match a wildcard expression, a la python glob, where double asterisk means 'any number of subdirs'.</p>
<p>For example, say my wildcard expression is /*/foo/**/bar/. This would match /a/foo/bar/, /b/foo/note/bar/, but not /a/bar/foo/bar/. You get the idea.</p>
<p>My problem is that when traversing the tree structure, I need to know whether the current dir <em>could possibly</em> match the wildcard expression as a prefix. So I do want to traverse the directory /a/, but not /a/bar/, because I know the latter will never match the wildcard expression.</p>
<p>The wildcard expression I will rewrite to a regular expression, of course.</p>
| 0 | 2016-08-17T08:19:59Z | 38,991,987 | <p>Consider the following code for starters. I assume you have each "directory" in the path and pattern as elements in a pair of lists:</p>
<pre><code>def traverse(pattern_list, path_list):
    if pattern_list[0] == '**':
        traverse_children(pattern_list, path_list[1:])
    if current_matches(pattern_list[0], path_list[0]):
        traverse_children(pattern_list[1:], path_list[1:])
        # Other things you might want to do in the case of a valid prefix

def current_matches(pattern_atom, path_atom):
    return pattern_atom in (path_atom, '*', '**')
</code></pre>
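<p>To make the sketch concrete, here is one possible self-contained variant of the same idea (an assumed implementation, not the author's code): a predicate answering whether the directories seen so far could still be extended to a full match, with <code>'**'</code> absorbing any number of path components:</p>

```python
# Hypothetical helper: pattern and path are lists of directory names.
def could_match(pattern, path):
    if not path:
        return True            # nothing consumed yet rules the pattern out
    if not pattern:
        return False           # path already longer than the pattern allows
    if pattern[0] == '**':
        # '**' either absorbs the next directory or is skipped entirely.
        return could_match(pattern, path[1:]) or could_match(pattern[1:], path)
    if pattern[0] == '*' or pattern[0] == path[0]:
        return could_match(pattern[1:], path[1:])
    return False

pat = ['*', 'foo', '**', 'bar']
print(could_match(pat, ['a']))                 # True: /a/ could still match
print(could_match(pat, ['a', 'bar']))          # False: /a/bar/ never will
print(could_match(pat, ['b', 'foo', 'note']))  # True: /b/foo/note/ could
```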
| 0 | 2016-08-17T08:47:52Z | [
"python",
"regex",
"tree",
"wildcard",
"directory-structure"
] |
Python get query from MySQL, remove the string format | 38,991,558 | <p>I'm currently using an example to select a variable from MySQL via Python. For this I'm using the MySQLdb import.</p>
<p>It all works great, I'm able to get the value from MySQL but when I print the result, it returns as:</p>
<pre><code>('text',)
</code></pre>
<p>Is there a way to get this to just show up as </p>
<pre><code>text
</code></pre>
<p>The code I'm working with is:</p>
<pre><code>try:
    cursor.execute("""SELECT value FROM settings WHERE name='text'""")
    results = cursor.fetchone()
    print results
</code></pre>
<p>Thank you!!</p>
| 0 | 2016-08-17T08:23:26Z | 38,991,617 | <p>Python 3 :</p>
<pre><code>print(results[0])
</code></pre>
<p>Python 2 :</p>
<pre><code>print results[0]
</code></pre>
<p>This will take the first and only element of the tuple, which is <code>'text'</code>, and printing a string will just write <code>text</code> without quotes in the console.</p>
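<p>A minimal stand-alone illustration of the same indexing, using a literal tuple in place of the real cursor result:</p>

```python
# fetchone() returns a one-element tuple for a single-column SELECT.
results = ('text',)
print(results[0])  # text
```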
| 2 | 2016-08-17T08:26:11Z | [
"python",
"mysql"
] |
How to read data from a HTML table and create a csv in python | 38,991,625 | <p>I want to read the data from a url which gives an HTML table as output. After reading the data I need the data in the csv format, just like the page itself.</p>
<p>Below mentioned is the HTML output.</p>
<p><a href="http://i.stack.imgur.com/6Ky2m.jpg" rel="nofollow">enter image description here</a></p>
<p>HTML Source Code: </p>
<pre><code><html>
<body>
<h1> Below are the order details for the recallID. Download as <a href=http://sp-ff-im.nm.flipkart.com:18700/seller_returns/67795/details/download oncontextmenu="return false;">CSV</a><br>
<table border='1'>
<tr>
<th>SellerID</th>
<th>ShipmentID</th>
<th>OrderItemID</th>
<th>OrderId</th>
<th>Quantity</th>
<th>listingID</th>
<th>FSN</th>
<th>SKU</th>
</tr>
<tr>
<td>da473f06039a45e7</td>
<td>S167965494</td>
<td>4579250217234000</td>
<td>OD405792502172340000</td>
<td>1</td>
<td>LSTCOME9YJK7VTFRPTNZR0HFM</td>
<td>COME9YJK7VTFRPTN</td>
<td>Dell 3558 Notebook</td>
</tr>
<tr>
<td>da473f06039a45e7</td>
<td>Mis-shipment</td>
<td>ii_id:242950951</td>
<td>Received from Mis-Shipment</td>
<td>1</td>
<td>LSTCOME9YJK7VTFRPTNZR0HFM</td>
<td>COME9YJK7VTFRPTN</td>
<td>Dell 3558 Notebook</td>
</tr>
</table>
</body>
</html>
</code></pre>
| -3 | 2016-08-17T08:26:38Z | 38,991,721 | <p>I think you can combine these two StackOverflow answers to get what you need:</p>
<p><a href="http://stackoverflow.com/questions/6325216/parse-html-table-to-python-list">html table -> python list</a></p>
<p><a href="http://stackoverflow.com/questions/2084069/create-a-csv-file-with-values-from-a-python-list">python list -> csv</a></p>
| 0 | 2016-08-17T08:31:39Z | [
"python",
"html"
] |
ImportError: name 'Restaurant' is not defined | 38,991,637 | <p>The app I am developing is about online food ordering. A restaurant owner lists his/her restaurant with the menus available in that restaurant. I have designed the models for this scenario, but I am facing a problem in my review models, where I get a NameError: name 'Restaurant' is not defined when importing the Restaurant class.</p>
<p>Code</p>
<p><strong>restaurants/models.py</strong></p>
<pre><code>class Restaurant(models.Model):
    OPEN = 1
    CLOSED = 2
    OPENING_STATUS = (
        (OPEN, 'open'),
        (CLOSED, 'closed'),
    )
    owner = models.ForeignKey(User)
    name = models.CharField(max_length=150, db_index=True)
    slug = models.SlugField(max_length=150, db_index=True)
    address = models.CharField(max_length=100)
    city = models.CharField(max_length=100)
    phone_number = models.PositiveIntegerField()
    owner_email = models.EmailField()
    opening_status = models.IntegerField(choices=OPENING_STATUS, default=OPEN)
    website = models.URLField(max_length=300)
    features = models.ManyToManyField(Choice, related_name="restaurants_features")
    timings = models.ManyToManyField(Choice, related_name="restaurants_timings")
    opening_from = models.TimeField()
    opening_to = models.TimeField()
    facebook_page = models.URLField(max_length=200)
    twitter_handle = models.CharField(max_length=15, blank=True, null=True)
    other_details = models.TextField()
    # votes = models.IntegerField(choices=STARS, default=5)


class Menu(models.Model):
    STARS = (
        (1, 'one'),
        (2, 'two'),
        (3, 'three'),
        (4, 'four'),
        (5, 'five'),
    )
    menu_category = models.ForeignKey(Category, related_name="menu")
    restaurant = models.ForeignKey(Restaurant)
    name = models.CharField(max_length=120, db_index=True)
    slug = models.SlugField(max_length=120, db_index=True)
    image = models.ImageField(upload_to='products/%Y/%m/%d', blank=True)
    description = models.TextField(blank=True)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    stock = models.PositiveIntegerField()
    vote = models.SmallIntegerField(choices=STARS, default=5)
</code></pre>
<p><strong>review/models.py</strong></p>
<pre><code>from restaurants.models import Restaurant  # I am getting an error here

class Review(models.Model):
    STARS = (
        (1, 'one'),
        (2, 'two'),
        (3, 'three'),
        (4, 'four'),
        (5, 'five'),
    )
    vote = models.SmallIntegerField(choices=STARS, default=5)
    user = models.ForeignKey(User)
    restaurant = models.ForeignKey(Restaurant)
    review = models.TextField()
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.vote
</code></pre>
<p>Why am I getting such an error? </p>
<p>Also, one more question: a restaurant has multiple menu items, and users should be able to rate (rating only, no review) each menu item. Is my model OK for such a feature? </p>
| 0 | 2016-08-17T08:27:18Z | 38,991,927 | <p>You probably have a circular import error; the models files are attempting to import each other, and Python can't resolve it.</p>
<p>Note that there is no need to import the actual model if all you want to do is define a relationship; you can use a string value instead. Remove the import from review.models and do this in the definition:</p>
<pre><code>restaurant = models.ForeignKey('restaurants.Restaurant')
</code></pre>
| 0 | 2016-08-17T08:44:25Z | [
"python",
"django",
"python-3.x",
"django-models"
] |
ImportError: name 'Restaurant' is not defined | 38,991,637 | <p>The app I am developing is about online food ordering. A restaurant owner lists his/her restaurant with the menus available in that restaurant. I have designed the models for this scenario, but I am facing a problem in my review models, where I get a NameError: name 'Restaurant' is not defined when importing the Restaurant class.</p>
<p>Code</p>
<p><strong>restaurants/models.py</strong></p>
<pre><code>class Restaurant(models.Model):
    OPEN = 1
    CLOSED = 2
    OPENING_STATUS = (
        (OPEN, 'open'),
        (CLOSED, 'closed'),
    )
    owner = models.ForeignKey(User)
    name = models.CharField(max_length=150, db_index=True)
    slug = models.SlugField(max_length=150, db_index=True)
    address = models.CharField(max_length=100)
    city = models.CharField(max_length=100)
    phone_number = models.PositiveIntegerField()
    owner_email = models.EmailField()
    opening_status = models.IntegerField(choices=OPENING_STATUS, default=OPEN)
    website = models.URLField(max_length=300)
    features = models.ManyToManyField(Choice, related_name="restaurants_features")
    timings = models.ManyToManyField(Choice, related_name="restaurants_timings")
    opening_from = models.TimeField()
    opening_to = models.TimeField()
    facebook_page = models.URLField(max_length=200)
    twitter_handle = models.CharField(max_length=15, blank=True, null=True)
    other_details = models.TextField()
    # votes = models.IntegerField(choices=STARS, default=5)


class Menu(models.Model):
    STARS = (
        (1, 'one'),
        (2, 'two'),
        (3, 'three'),
        (4, 'four'),
        (5, 'five'),
    )
    menu_category = models.ForeignKey(Category, related_name="menu")
    restaurant = models.ForeignKey(Restaurant)
    name = models.CharField(max_length=120, db_index=True)
    slug = models.SlugField(max_length=120, db_index=True)
    image = models.ImageField(upload_to='products/%Y/%m/%d', blank=True)
    description = models.TextField(blank=True)
    price = models.DecimalField(max_digits=10, decimal_places=2)
    stock = models.PositiveIntegerField()
    vote = models.SmallIntegerField(choices=STARS, default=5)
</code></pre>
<p><strong>review/models.py</strong></p>
<pre><code>from restaurants.models import Restaurant  # I am getting an error here

class Review(models.Model):
    STARS = (
        (1, 'one'),
        (2, 'two'),
        (3, 'three'),
        (4, 'four'),
        (5, 'five'),
    )
    vote = models.SmallIntegerField(choices=STARS, default=5)
    user = models.ForeignKey(User)
    restaurant = models.ForeignKey(Restaurant)
    review = models.TextField()
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.vote
</code></pre>
<p>Why am I getting such an error? </p>
<p>Also, one more question: a restaurant has multiple menu items, and users should be able to rate (rating only, no review) each menu item. Is my model OK for such a feature? </p>
| 0 | 2016-08-17T08:27:18Z | 38,993,459 | <p>You need to add the full path to the <code>restaurants</code> package to the system path.</p>
<p>Add the following code in review/models.py:</p>
<pre><code>import sys
sys.path.append('<full path to parent dir of restaurants>')
from restaurants.models import Restaurant
</code></pre>
| 0 | 2016-08-17T09:59:07Z | [
"python",
"django",
"python-3.x",
"django-models"
] |
Making submodules visible to each other inside the same Python module | 38,991,687 | <p>Say I have a folder <code>Awesome_stuff/my_module</code> that contains the following files: <code>my_algorithm.py</code>, <code>my_settings.py</code>, <code>my_utils.py</code>.
The main file <code>my_algorithm.py</code> contains the following lines:</p>
<pre><code># my_algorithm.py
import my_utils as mu
from my_settings import My_settings

def alg():
    # do something

if __name__ == "__main__":
    alg()
</code></pre>
<p>Running <code>python my_algorithm.py</code> does not create any problem. However, things change if I want to install this module in my Python library. In order to do that I added an empty <code>__init__.py</code> file inside the <code>my_module</code> folder and outside the <code>my_module</code> folder I placed a <code>setup.py</code> file that looks like this:</p>
<pre><code>from setuptools import setup

setup(
    name = 'Awesome_stuff',
    version = '1.0.0',
    packages = ['my_module'],
    # # dependencies
    install_requires = ['numpy','scipy', 'numpydoc>=0.5', 'pyomo>=4.3.11388', 'mock>=1.1.3'],
    # # project metadata
    author = 'Me',
    author_email = 'me@mymail.com',
    description = 'This module contains awesome stuff',
    license = 'BSD',
    url = 'http://my_website.com',
    download_url = 'my_download_address.com'
)
</code></pre>
<p>Running <code>python setup.py install</code> generates the egg and the module is installed in my Python library. Now the main folder <code>Awesome_stuff</code> contains: </p>
<pre><code>Awesome_stuff.egg-info
build (folder created during installation)
dist (folder created during installation)
my_module (my original folder plus __init__.py)
setup.py
</code></pre>
<p>In order to execute something equivalent to the original <code>python my_algorithm.py</code> I can now create a new Python file <code>test.py</code> that contains something like <code>from my_module.my_algorithm import *</code> and then executes <code>alg()</code>.</p>
<p>Unfortunately, the line <code>from my_module.my_algorithm import *</code> generates the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.macosx-10.11-intel/egg/my_module/my_algorithm.py", line 25, in <module>
ImportError: No module named my_utils
</code></pre>
<p>How can I fix this without modifying the original three module's files? I can't see what is going wrong here. </p>
<p>Moreover, <code>import my_module</code> does not generate any error. I'm a bit confused here.
Why importing submodules from the same folder worked when the module was a standalone folder and didn't work when it was installed in the Python library?</p>
<p>More general question: what is the correct/suggested way of importing/working with submodules that might depend on each other? Does the file <code>__init__.py</code> might play a role in this case?</p>
<p>Thanks.</p>
<p>OS X El Captain, Python 2.7.10 </p>
<p><strong>EDIT</strong></p>
<p>As an example, I looked at the Python library <a href="http://zen.networkdynamics.org" rel="nofollow">Zen</a>, which is structured in a similar way:</p>
<pre><code>Zen
    build
    zen
        folder1
        folder2
        ...
        __init__.py
        graph.pxd
        graph.pyx
        digraph.pxd
        digraph.pyx
        ...
    Makefile
    setup.py
</code></pre>
<p>In this case <code>digraph.pyx</code> (that overwrites some of the <code>graph.pxd</code> declarations) contains the line <code>from graph cimport *</code>, which of course does not cause any problem. Note that it does not say: <code>from zen.graph cimport *</code>.</p>
<p><strong>LAST EDIT</strong></p>
<p>When using <code>cimport</code> you may define <code>package_data</code> inside <code>setup.py</code> in order to set the path of the <code>*.pxd</code> files. This is why <code>cimport</code> does not need the absolute import in the example above. However, this is not possible with <code>*.py</code> files (to the best of my knowledge), and the only way is to use absolute and relative import. </p>
| 2 | 2016-08-17T08:30:21Z | 38,991,901 | <p>Since you've added an <code>__init__.py</code> file, you've made a package. So now, you have to use that package name when importing from the package:</p>
<pre><code># my_algorithm.py
import my_module.my_utils as mu
from my_module.my_settings import My_settings
def alg():
    pass  # do something
if __name__ == "__main__":
alg()
</code></pre>
| 3 | 2016-08-17T08:42:21Z | [
"python",
"import",
"module"
] |
Python - How to check print outputs of multiple py files? | 38,991,734 | <p>I have a number of test cases in separate .py files that I want to test on a module I've created. All of these files use a py module that I've created and all these py files will print a pre-determined output (some in thousands of lines).</p>
<p>Is there a way to run a .py script that runs these other test .py scripts and checks the outputs? I've looked into doctest and unittests, but these relate to particular functions rather than py scripts?</p>
<p>EDIT: These py files print outputs rather than return values. Some of them also utilize multi-threading. </p>
 | 0 | 2016-08-17T08:32:34Z | 38,991,930 | <p>Try this (note that <code>execfile</code> is Python 2 only; in Python 3 use <code>exec(open(each_file).read(), variables)</code>):</p>
<pre><code>import glob
lst = glob.glob("/home/test/*.py")
for each_file in lst:
    variables = {}  # whatever globals each script needs to run
    execfile(each_file, variables)
</code></pre>
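<p>Since the goal is to check each script's printed output, here is a hedged Python 3 sketch that runs a script and captures what it prints (the <code>/home/test</code> path is the same placeholder as above, and the throwaway demo script stands in for one of your test files):</p>

```python
import glob
import io
import os
import tempfile
from contextlib import redirect_stdout

def run_and_capture(path):
    """Execute a Python script and return everything it printed to stdout."""
    with open(path) as f:
        source = f.read()
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(compile(source, path, "exec"), {"__name__": "__main__"})
    return buf.getvalue()

# Demo on a throwaway script (stands in for one of your /home/test/*.py files):
fd, demo = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("print('hello')\n")
captured = run_and_capture(demo)
os.remove(demo)
print(repr(captured))  # 'hello\n'

# Real usage:
for each_file in glob.glob("/home/test/*.py"):
    print(each_file, "printed", len(run_and_capture(each_file).splitlines()), "lines")
```

Note that <code>exec</code> runs each script in the same process, so scripts that spawn threads or call <code>sys.exit()</code> may need the subprocess-based approach from the other answer instead.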
| 1 | 2016-08-17T08:44:30Z | [
"python"
] |
Python - How to check print outputs of multiple py files? | 38,991,734 | <p>I have a number of test cases in separate .py files that I want to test on a module I've created. All of these files use a py module that I've created and all these py files will print a pre-determined output (some in thousands of lines).</p>
<p>Is there a way to run a .py script that runs these other test .py scripts and checks the outputs? I've looked into doctest and unittests, but these relate to particular functions rather than py scripts?</p>
<p>EDIT: These py files print outputs rather than return values. Some of them also utilize multi-threading. </p>
 | 0 | 2016-08-17T08:32:34Z | 38,992,068 | <p>You can invoke these scripts with:</p>
<p><code>subprocess.Popen(['python', file_name], universal_newlines=True, stdout=stdout, stderr=stderr)</code></p>
<p>Here <code>stdout</code> and <code>stderr</code> are file objects that you open yourself; the child process will write its output into them. Then you can <code>wait()</code> on the process, or <code>sleep()</code> for an appropriate amount of time.</p>
<p>And after that you can open these files and check the results.</p>
<p>Read more about subprocess <a href="https://docs.python.org/2/library/subprocess.html#popen-constructor" rel="nofollow">here</a>.</p>
| 1 | 2016-08-17T08:51:54Z | [
"python"
] |
Using Google App Engine SDK for Python with Python 3 | 38,991,770 | <p>I have both Python 3.5.2 and Python 2.7.12 installed (on Windows). But when I try to deploy using the Google App Engine SDK for Python, I receive this error message:</p>
<pre><code>in <module>
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
NameError: name 'execfile' is not defined
2016-08-17 11:28:50 (Process exited with code 1)
</code></pre>
<p>I can only deploy from the command line:</p>
<pre><code>C:\Python27\python.exe "C:\Program Files (x86)\Google\google_appengine\appcfg.py" update app.yaml
</code></pre>
<p>Is there a way to deploy with Python 2 and not Python 3, without using the command line? And how do I report this bug to Google? I think they could fix it in the Google App Engine SDK.</p>
 | 1 | 2016-08-17T08:34:41Z | 38,992,057 | <p>You need to select the correct Python path; when you have multiple copies of Python, the wrong path might be selected as the default.
<br>
Usually it's <code>C:\Python27\pythonw.exe</code>; however, it may be something else if you changed that during installation.
<br>
Go to the Google App Engine Launcher and change the path to the one you want in Edit/Preferences.</p>
| 1 | 2016-08-17T08:51:17Z | [
"python",
"windows",
"python-2.7",
"python-3.x",
"google-app-engine"
] |
Scikit-learn and pyspark integration | 38,991,799 | <p>I have trained a logistic regression model in sklearn and saved the model to .pkl files. Is there a method of using this pkl file from within spark?</p>
 | 0 | 2016-08-17T08:36:11Z | 38,994,681 | <p>The fact that you are using Spark shouldn't stop you from using external Python libraries.</p>
<p>You can import the sklearn library in your Spark Python code and use the sklearn logistic regression model with the saved pkl file.</p>
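<p>A sketch of the idea (the pickled dictionary below is a dependency-free stand-in for a real trained sklearn estimator, so the snippet runs without sklearn or a SparkContext; the broadcast step is shown only as comments):</p>

```python
import pickle

# Stand-in for trained model parameters; a real workflow pickles the estimator itself.
model_params = {"coef": [0.5, -0.25], "intercept": 0.1}

with open("model.pkl", "wb") as f:
    pickle.dump(model_params, f)

# Inside your PySpark driver code, load it back the same way:
with open("model.pkl", "rb") as f:
    params = pickle.load(f)

def predict(row):
    """Tiny logistic-regression-style decision using the loaded parameters."""
    z = params["intercept"] + sum(c * x for c, x in zip(params["coef"], row))
    return 1 if z > 0 else 0

# In a real PySpark job you would broadcast the loaded params/model, e.g.:
#   bc = sc.broadcast(params)
# and use them inside a map()/udf on the executors.

print(predict([1.0, 0.2]))  # 1
```

The only requirement is that the workers can import whatever classes the pickle refers to (so sklearn must be installed on the executors too).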
| 0 | 2016-08-17T10:55:39Z | [
"python",
"apache-spark",
"scikit-learn",
"pyspark"
] |
Use existing open tab and url in Selenium py | 38,991,897 | <p>Hi, I'm trying to use Selenium (Python) with a URL that is already open in Internet Explorer. I had a look around and I'm not sure if this is possible.</p>
<p>The reason I wouldn't like to open a new browser or tab is that the page changes to different text.</p>
<p>So far my code only opens a new browser.
<strong>CODE</strong></p>
<pre><code>from selenium import webdriver
driver = webdriver.Ie()
driver.get("https://outlook.live.com/owa/")
</code></pre>
 | 0 | 2016-08-17T08:42:01Z | 38,999,589 | <p><a href="http://stackoverflow.com/a/37274741/6335604">This answer</a> helped me with the same problem.</p>
<p>As of now, you cannot access previously opened tabs with Selenium.
But you can try to recreate your session, passing whatever is needed, using the requests library for example.</p>
| 0 | 2016-08-17T14:34:33Z | [
"python",
"python-2.7",
"python-3.x",
"selenium",
"selenium-webdriver"
] |
Is there a way to add a parameter that suppresses the need for other parameters using argparse on Python? | 38,991,939 | <p>I am using argparse in Python for a command-line script. I have this for my script:</p>
<pre><code>parser = argparse.ArgumentParser(prog = 'manageAdam')
parser.add_argument("-s", action='store_true', default=False, help='Shows configuration file')
parser.add_argument("d", type=str, help="device")
parser.add_argument("o", type=str, help="operation")
parser.add_argument("-v", "--value", type=int, nargs='*', help="value or list to send in the operation")
</code></pre>
<p>I would like <code>manageAdam -s</code> to work without asking for the positional arguments, similar to <code>-h</code>, which can be called without any of the defined positional arguments. Is that possible?</p>
 | 0 | 2016-08-17T08:44:56Z | 38,992,060 | <p>No, there is no such built-in way.</p>
<p>You can make all arguments optional with a default value of <code>None</code>, then check that none of them is <code>None</code> and raise <code>argparse.ArgumentError</code> otherwise; if <code>-s</code> is provided, skip the check for the other arguments.</p>
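<p>A sketch of that approach (argument names follow the question; <code>parser.error</code> is used for the failure case, which is the usual way to report a user error from the top level):</p>

```python
import argparse

parser = argparse.ArgumentParser(prog='manageAdam')
parser.add_argument('-s', action='store_true', default=False,
                    help='Shows configuration file')
# the formerly positional arguments, now optional with None defaults
parser.add_argument('-d', '--device')
parser.add_argument('-o', '--operation')

args = parser.parse_args(['-s'])  # simulate: manageAdam -s
if not args.s and (args.device is None or args.operation is None):
    parser.error('device and operation are required unless -s is given')
print(args.s, args.device, args.operation)  # True None None
```

The trade-off is that <code>d</code> and <code>o</code> are no longer positional, so callers must pass them as <code>-d</code>/<code>-o</code> flags.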
| 0 | 2016-08-17T08:51:26Z | [
"python",
"argparse"
] |
Is there a way to add a parameter that suppresses the need for other parameters using argparse on Python? | 38,991,939 | <p>I am using argparse in Python for a command-line script. I have this for my script:</p>
<pre><code>parser = argparse.ArgumentParser(prog = 'manageAdam')
parser.add_argument("-s", action='store_true', default=False, help='Shows configuration file')
parser.add_argument("d", type=str, help="device")
parser.add_argument("o", type=str, help="operation")
parser.add_argument("-v", "--value", type=int, nargs='*', help="value or list to send in the operation")
</code></pre>
<p>I would like <code>manageAdam -s</code> to work without asking for the positional arguments, similar to <code>-h</code>, which can be called without any of the defined positional arguments. Is that possible?</p>
 | 0 | 2016-08-17T08:44:56Z | 38,992,632 | <p>There is no built-in way to do this. You <em>might</em> be able to achieve something by writing some custom <a href="https://docs.python.org/3/library/argparse.html#argparse.Action" rel="nofollow"><code>Action</code></a> classes that track state on the parser, but I believe it will become quite messy and buggy.</p>
<p>I believe the best bet is to simply improve your UI. The <code>-s</code> is <strong>not</strong> an option. It's a separate command that completely alters how your script executes. In such cases you should use the <a href="https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_subparsers" rel="nofollow"><code>subparsers</code></a> functionality, which allows you to introduce sub-commands. This is a better interface than the one you proposed, and is used by a lot of other tools (e.g. Git/Mercurial).</p>
<p>In this case you'd have a <code>config</code> command to handle the configuration and a <code>run</code> (or whatever you want to call it) command to perform the operations on the device:</p>
<pre><code>subparsers = parser.add_subparsers(dest='command')
parser_config = subparsers.add_parser('config', help='Configuration')
parser_run = subparsers.add_parser('run', help='Execute operation on device')
parser_run.add_argument('d', type=str, ...)
parser_run.add_argument('o', type=str, ...)
parser_run.add_argument('-v', type=int, nargs='*', ...)
# later:
args = parser.parse_args()
if args.command == 'config':
print('Configuration')
else:
print('Run operation')
</code></pre>
<p>Used from the command line as:</p>
<pre><code>$ manageAdam config
# or
$ manageAdam run <device> <operation> <values...>
</code></pre>
| 1 | 2016-08-17T09:20:26Z | [
"python",
"argparse"
] |
Web Push notification Payload data is getting null | 38,992,034 | <p>Hi, I'm trying to implement web push notifications for the web. I followed the following <a href="https://serviceworke.rs/push-payload_index_doc.html" rel="nofollow">example</a>, except for the server part, for which I'm using a Python script to connect to GCM/FCM and generate the payload. I'm getting the push event, but <code>event.data</code> is coming back null.</p>
<pre><code>from datetime import datetime
from pyfcm import FCMNotification
pushService = FCMNotification(api_key='xxx')
registrationId=""
messageTitle = "New update available"
messageBody ="some message"
dryRun = False
extraData = {}
extraData['title'] = "nknkn"
</code></pre>
<p>sw.js</p>
<pre><code>self.addEventListener('push',function(e){
console.log('Push Message Received', e.data);
var title = 'Push message';
e.waitUntil(
self.registration.showNotification(title, {
body: 'The Message form data',
icon: 'icon.jpg',
tag: 'my-tag'
}));
});
</code></pre>
| 0 | 2016-08-17T08:50:10Z | 39,045,532 | <p>Both, Google Chrome and Mozilla Firefox currently support payload for push messages, see <a href="https://developer.mozilla.org/ru/docs/Web/API/PushMessageData#Browser_compatibility" rel="nofollow">PushMessageData</a> on MDN. But according to the Push API specification, any payload MUST be encrypted, otherwise a browser will discard it and return <em>null</em> (see <a href="https://www.w3.org/TR/push-api/#the-push-event" rel="nofollow">11.1.6</a>):</p>
<blockquote>
<p>If the push message could not be decrypted for any reason, or if it is not encrypted and contains any payload, discard the message and terminate this process. A push message may be empty if it contains no content, but otherwise push event must not be fired for a push message that was not successfully decrypted using the key pair associated with the push subscription.</p>
</blockquote>
<p>Here is a good article from Google Developers, which explains it with more details: <a href="https://developers.google.com/web/updates/2016/03/web-push-encryption" rel="nofollow">Web Push Payload Encryption</a>. And original draft of the <a href="https://tools.ietf.org/html/draft-thomson-webpush-encryption" rel="nofollow">Message Encryption for Web Push</a>.</p>
<p>I also can suggest you to look at the set of already implemented libraries for WebPush on different languages: <a href="https://github.com/web-push-libs" rel="nofollow">web-push-libs</a>. You can find there a lib written on Python too. And another lib on Java, which can send push messages with a payload to Chrome and Firefox: <a href="https://github.com/MartijnDwars/web-push" rel="nofollow">https://github.com/MartijnDwars/web-push</a>.</p>
| 0 | 2016-08-19T18:23:24Z | [
"python",
"push-notification",
"service-worker",
"payload",
"web-push"
] |
How to convert blob to integer in python? | 38,992,157 | <p>I have a file with little-endian encoded bytes in it. I want to take <code>N</code> bytes, specify the endianness, and convert them into a decimal number using Python (any version). How do I do it correctly?</p>
| 0 | 2016-08-17T08:56:04Z | 38,992,417 | <p>In Python 3 you can use something like this:</p>
<pre><code>int.from_bytes(byte_string, byteorder='little')
</code></pre>
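<p>For example, with four arbitrary little-endian bytes:</p>

```python
data = b'\x01\x02\x03\x04'  # as read from the file, least significant byte first
value = int.from_bytes(data, byteorder='little')
print(value)  # 67305985, i.e. 0x04030201
```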
| 4 | 2016-08-17T09:09:30Z | [
"python",
"endianness"
] |
How to convert blob to integer in python? | 38,992,157 | <p>I have a file with little-endian encoded bytes in it. I want to take <code>N</code> bytes, specify the endianness, and convert them into a decimal number using Python (any version). How do I do it correctly?</p>
| 0 | 2016-08-17T08:56:04Z | 38,992,523 | <p>Using Python 3 (or 2), you can achieve this with the <a href="https://docs.python.org/3/library/struct.html" rel="nofollow">struct</a> library.</p>
<pre><code>import struct

with open('blob.dat', 'rb') as f:
    data = f.read(n)
</code></pre>
<p>Now, you unpack using the appropriate <a href="https://docs.python.org/3/library/struct.html#byte-order-size-and-alignment" rel="nofollow">format specifier string</a>. For example, a big-endian int (use "<i" for the little-endian bytes in the question); note that <code>unpack</code> returns a tuple:</p>
<pre><code>num = struct.unpack(">i", data)[0]
</code></pre>
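<p>Putting it together with concrete (arbitrary) byte values, and showing both byte orders:</p>

```python
import struct

data = b'\x01\x02\x03\x04'             # 4 bytes as read from the file
big = struct.unpack(">i", data)[0]     # big-endian interpretation
little = struct.unpack("<i", data)[0]  # little-endian, as in the question
print(big, little)  # 16909060 67305985
```

Keep in mind that <code>struct</code> only handles fixed C sizes (e.g. <code>"i"</code> is 4 bytes, <code>"q"</code> is 8), so arbitrary <code>N</code> needs a different approach.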
| 0 | 2016-08-17T09:15:26Z | [
"python",
"endianness"
] |
How to convert blob to integer in python? | 38,992,157 | <p>I have a file with little-endian encoded bytes in it. I want to take <code>N</code> bytes, specify the endianness, and convert them into a decimal number using Python (any version). How do I do it correctly?</p>
 | 0 | 2016-08-17T08:56:04Z | 38,995,819 | <p>As Harshad Mulmuley's answer shows, this is easy in Python 3, using the <code>int.from_bytes</code> method. In Python 2, it's a little trickier.</p>
<p>The <code>struct</code> module is designed to handle standard C data types. It won't handle arbitrary length integers (Python 2 <code>long</code> integers), as these are not native to C. But you can convert them using a simple <code>for</code> loop. I expect that this will be significantly slower than the Python 3 way, since Python <code>for</code> loops are slower than looping at C speed, like <code>int.from_bytes</code> (probably) does.</p>
<pre><code>from binascii import hexlify
def int_from_bytes_LE(s):
total = 0
for c in reversed(s):
total = (total << 8) + ord(c)
return total
# Test
data = (
(b'\x01\x02\x03\x04', 0x04030201),
(b'\x01\x02\x03\x04\x05\x06\x07\x08', 0x0807060504030201),
(b'\x01\x23\x45\x67\x89\xab\xcd\xef\x01\x23\x45\x67\x89\xab\xcd\xef',
0xefcdab8967452301efcdab8967452301),
)
for s, u in data:
print hexlify(s), u, int_from_bytes_LE(s)
#print(hexlify(s), u, int.from_bytes(s, 'little'))
</code></pre>
<p><strong>output</strong></p>
<pre><code>01020304 67305985 67305985
0102030405060708 578437695752307201 578437695752307201
0123456789abcdef0123456789abcdef 318753391026855559389420636404904698625 318753391026855559389420636404904698625
</code></pre>
<p>(I put that Python 3 print call in there so you can easily verify that my function gives the same result as <code>int.from_bytes</code>).</p>
<p>If your data is <em>really</em> large and you don't want to waste RAM reversing your byte string you can do it this way:</p>
<pre><code>def int_from_bytes_LE(s):
m = 1
total = 0
for c in s:
total += m * ord(c)
m <<= 8
return total
</code></pre>
<p>Of course, that uses some RAM for <code>m</code>, but it won't be as much as the RAM used for reversing the input string.</p>
| 1 | 2016-08-17T11:49:26Z | [
"python",
"endianness"
] |
Azure SDK Python: tag a particular resource | 38,992,201 | <p>I want to create tag on each resource in Azure using python.</p>
<p>I see this module in the docs:
<a href="http://azure-sdk-for-python.readthedocs.io/en/latest/ref/azure.mgmt.resource.resources.operations.html#azure.mgmt.resource.resources.operations.TagsOperations" rel="nofollow">http://azure-sdk-for-python.readthedocs.io/en/latest/ref/azure.mgmt.resource.resources.operations.html#azure.mgmt.resource.resources.operations.TagsOperations</a></p>
<p>create_or_update: Create a subscription resource tag
list: Get a list of subscription resource tags</p>
<p><strong>Seems like I can only do tag operations on resource group and not resource?</strong></p>
<p>Example:</p>
<p>To add a tag to a resource group: Set-AzureRmResourceGroup
add tags to a resource: Set-AzureRmResource</p>
<p>EDIT:</p>
<p>Thanks for the api lookup code, very neat. But I believe the old api that I manually put should also work. I tried your code with little modification(we might have different Azure SDK, I am using 2.0.0rc5). After adding the api function(very helpful), I still have the same error unfortunately.</p>
<pre><code>from azure.common.credentials import UserPassCredentials
from azure.mgmt.resource.resources import ResourceManagementClient
def resolve_resource_api(client, resource):
""" This method retrieves the latest non-preview api version for
the given resource (unless the preview version is the only available
api version) """
provider = client.providers.get(resource.id.split('/')[6])
rt = next((t for t in provider.resource_types
if t.resource_type == '/'.join(resource.type.split('/')[1:])), None)
#print(rt)
if rt and 'api_versions' in rt.__dict__:
#api_version = [v for v in rt[0].api_versions if 'preview' not in v.lower()]
#return npv[0] if npv else rt[0].api_versions[0]
api_version = [v for v in rt.__dict__['api_versions'] if 'preview' not in v.lower()]
return api_version[0] if api_version else rt.__dict__['api_versions'][0]
credentials = UserPassCredentials(
'****@****.com', # Your new user
'******', # Your password
)
subscription_id= '*****-***-****-****-*******'
resource_client = ResourceManagementClient(credentials,
subscription_id)
for resource in resource_client.resources.list():
#print(resource)
#print(resolve_resource_api(resource_client, resource))
if resource.id.split('/')[4] == 'Build':
#resource.tags = {'foo':'bar'}
if resource.type == 'Microsoft.Web/sites':
print('resource.id: ', resource.id)
print('resource_group_name: ', resource.id.split('/')[4])
print('resource_provider_namespace: ', resource.id.split('/')[6])
print('parent_resource_path: ', '')
print('resource_type: ', str(resource.type).split('/')[-1])
print('resource_name: ', resource.name)
print('api_version: ', resolve_resource_api(resource_client, resource))
resource.tags['test'] = 'test1'
#print(resolve_resource_api(resource_client, resource))
#continue
print(resource)
resource_client.resources.create_or_update(
resource_group_name= resource.id.split('/')[4], # Extract from resource.id
resource_provider_namespace=resource.id.split('/')[6], # Extract from resource.id
parent_resource_path='', # Extract from resource.id
resource_type=str(resource.type).split('/')[-1], # Extract from resource type
resource_name=resource.name,
api_version=resolve_resource_api(resource_client, resource),
parameters=resource
)
print('-'*10)
</code></pre>
<blockquote>
<p>Error
Traceback (most recent call last):
File "C:\Python35-32\Scripts\Azure\temp.py", line 56, in
parameters=resource
File "C:\Python35-32\lib\site-packages\azure\mgmt\resource\resources\operations\resources_operations.py", line 408, in create_or_update
raise exp
msrestazure.azure_exceptions.CloudError: Operation failed with status: 'Bad Request'. Details: 400 Client Error: Bad Request for url: <a href="https://management.azure.com/subscriptions/" rel="nofollow">https://management.azure.com/subscriptions/</a><strong><em>-</em></strong>-***-*****-*******/resourcegroups/Build/providers/Microsoft.Web/sites/build-dev?api-version=2016-03-01</p>
</blockquote>
<p>I worked more and found the I am able to use the create_or_update method in the following way:</p>
<pre><code>from azure.mgmt.resource.resources.models import GenericResource
parameters=GenericResource(
location='West US',
properties={},
)
</code></pre>
<p>And the response error message with your code example says that "The parameter properties has an invalid value". So I am guessing parameters=resource needs to be fixed. I will look more into that.</p>
<p>UPDATE (SOLVED!): </p>
<pre><code>for resource in resource_client.resources.list():
#print(resource)
if resource.id.split('/')[4] == 'Build':
if resource.type == 'Microsoft.Web/sites':
print('resource.id: ', resource.id)
print('resource_group_name: ', resource.id.split('/')[4])
print('resource_provider_namespace: ', resource.id.split('/')[6])
print('parent_resource_path: ', '')
print('resource_type: ', str(resource.type).split('/')[-1])
print('resource_name: ', resource.name)
print('api_version: ', resolve_resource_api(resource_client, resource))
if not resource.tags:
resource.tags = {}
resource.tags['test'] = 'test1'
else:
resource.tags['test'] = 'test1'
# This solves the error 400 Client Error: Bad Request. The parameter properties has an invalid value.
if not resource.properties:
resource.properties = {}
resource_client.resources.create_or_update(
resource_group_name= resource.id.split('/')[4], # Extract from resource.id
resource_provider_namespace=resource.id.split('/')[6], # Extract from resource.id
parent_resource_path='', # Extract from resource.id
resource_type=str(resource.type).split('/')[-1], # Extract from resource type
resource_name=resource.name,
api_version=resolve_resource_api(resource_client, resource),
parameters=resource,
)
print('-'*10)
</code></pre>
<p><strong>For some odd reason, if <code>resource.properties</code> is None, the request does not like it. It has to be {}.</strong></p>
<p>Thank you for your help Travis! I will post more questions as I work on Azure SDK ;)</p>
| 0 | 2016-08-17T08:58:39Z | 39,000,653 | <p>If you are using the Python SDK, you can generally add tags to a resource using that resource's <code>create_or_update</code> method. These methods take an object called <code>parameters</code> which is generally the object type of the resource you are interested in. This is where you will find tags.</p>
<p>For example to tag a virtual network:</p>
<pre><code>from azure.mgmt.network.models import VirtualNetwork
vnet = client.virtual_networks.get(resource_group_name, vnet_name)
vnet.tags = {'a':'b'}
client.virtual_networks.create_or_update(resource_group_name, vnet_name, vnet)
</code></pre>
<p>Additionally, you can tag your resource through Xplat-Cli using (for this example) the <code>azure network vnet set -t {tags}</code> command.</p>
<p>You can tag resource groups using <code>azure group set -t {tags}</code> and resources generically using <code>azure resource set -t {tags}</code>. </p>
<p>Hopefully that helps.</p>
<p><strong>UPDATE (8/26/16)</strong></p>
<p>Getting API versions can be tricky. You would think it would just be part of the generic resource object, but for some reason it's not. However, try something like this:</p>
<pre><code>from azure.common.credentials import UserPassCredentials
from azure.mgmt.resource.resources import ResourceManagementClient
def resolve_resource_api(client, resource):
""" This method retrieves the latest non-preview api version for
the given resource (unless the preview version is the only available
api version) """
provider = client.providers.get(resource.id.split('/')[6])
rt = next((t for t in provider.resource_types if t.resource_type == resource.type), None)
    if rt and rt.api_versions:
        api_versions = [v for v in rt.api_versions if 'preview' not in v.lower()]
        return api_versions[0] if api_versions else rt.api_versions[0]
credentials = UserPassCredentials(
'****@****.com', # Your new user
'******', # Your password
)
subscription_id= '*****-***-****-****-*******'
resource_client = ResourceManagementClient(credentials, subscription_id)
for resource in resource_client.resources.list():
resource.tags['test'] = 'test1'
# avoid error 400 if properties must be set
if not resource.properties:
resource.properties = {}
resource_client.resources.create_or_update(
resource_group_name= resource.id.split('/')[4],
resource_provider_namespace=resource.id.split('/')[6],
parent_resource_path='', # WARNING: this will not work with child resources
resource_type=str(resource.type).split('/')[-1],
resource_name=resource.name,
api_version=resolve_resource_api(resource_client, resource),
parameters=resource
)
</code></pre>
<p>The list operation under client.resources gives a paged list of GenericResource objects for the entire subscription. The way you posted, you looped through the resource groups one by one and then through the resources within each resource group. That will work just fine, and it will avoid you having to extract the resource group name from the ID, but I think this solution is a little cleaner. </p>
<p>The <code>resolve_resource_api</code> method uses the provider namespace and the resource type from the resource ID to look up the available API versions for that resource type using the resource provider get operation. This code (which is missing some validation) will retrieve the most recent API versions that is not a preview version (unless that is the only version available). Just arbitrarily specifying a version in a string is not going to work generally, as the different resources will have different API versions.</p>
<p>Also, your code specifies '' for parent path, so this would not work generally for a child resource.</p>
| 2 | 2016-08-17T15:24:39Z | [
"python",
"azure",
"azure-sdk-python"
] |
Python Web Crawling on JQuery/ Json outputs | 38,992,234 | <p>I am wondering how we can crawl jQuery/JSON outputs using Beautiful Soup or Selenium. Please have a look at <a href="https://www.udemy.com/courses/design/web-design/all-courses/" rel="nofollow">https://www.udemy.com/courses/design/web-design/all-courses/</a>.</p>
<p>The left column gives number of Free and Paid courses, which are being rendered directly from DB / Json, and hence cannot be detected in page source.</p>
<p>I have seen similar output in number on ecommerce websites as well.</p>
<p>Can anyone please guide me to crawl such numbers.</p>
<p>Thanks!</p>
 | 0 | 2016-08-17T09:00:36Z | 38,992,343 | <p>There is an easy way to do this: press F12 to open the developer tools in Chrome. In the Network tab you can track all the API calls made; I found <code>https://www.udemy.com/api-2.0/channels/1654/courses?is_angular_app=true&lang=en</code>, which may be the API you're looking for.</p>
<p>Similarly you can find the API and then call it to receive JSON and extract information from there.</p>
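<p>Once you have the endpoint, the second step is just fetching and parsing JSON. A hedged sketch (the response body below is made up for illustration; inspect the real response in the Network tab to find the actual keys, and in practice the string would come from something like <code>requests.get(api_url).text</code>):</p>

```python
import json

# Hypothetical response body, shaped like a typical course-listing API:
resp_text = '{"count": 1250, "results": [{"title": "Web Design 101", "is_paid": false}]}'

payload = json.loads(resp_text)
print(payload["count"])                  # 1250
print(payload["results"][0]["is_paid"])  # False
```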
| -1 | 2016-08-17T09:05:49Z | [
"jquery",
"python",
"json",
"beautifulsoup"
] |
Tensorflow: Py_func returns unknown shape | 38,992,445 | <p>I have a simple question re the <code>tf.py_func</code> function.</p>
<p>I have an image tensor <code>my_img</code> of shape <code>(1,224,224,3)</code>. To test <code>py_func</code>, I feed the tensor to a python function <code>return_tf</code> that should give back the same tensor (after being converted to a numpy array as per docs). </p>
<p>Here's the code:</p>
<pre><code>def return_tf(x):
return np.array(x)
test = tf.py_func(return_tf,[my_img],[tf.float32])
</code></pre>
<p>But when I checked the shape of the returned tensor called <code>test</code>, I get:</p>
<pre><code>tf.Tensor 'PyFunc:0' shape=unknown dtype=float32
</code></pre>
<p>I am also unable to run <code>eval()</code> on the tensor, since I get the error:</p>
<pre><code>AttributeError: 'list' object has no attribute 'eval'.
</code></pre>
<p>Does anyone know how I could fix the shape of the tensor returned by <code>tf.py_func</code>?</p>
 | 1 | 2016-08-17T09:11:13Z | 38,993,931 | <p>Just found a work-around: since <code>py_func</code> returns a list of tensors, I can do the following:</p>
<pre><code>test = tf.reshape(tf.concat(1, test), [ <<a shape>> ])
</code></pre>
<p>to get a tensor with the desired shape.</p>
| 1 | 2016-08-17T10:21:20Z | [
"python",
"numpy",
"tensorflow"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list with a couple of strings in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
| -2 | 2016-08-17T09:12:55Z | 38,992,528 | <p>Try this:</p>
<pre><code>for i in range(0, len(lst)):
lst[i] = lst[i][7:]
</code></pre>
| 1 | 2016-08-17T09:15:37Z | [
"python",
"string",
"list"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list with a couple of strings in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
| -2 | 2016-08-17T09:12:55Z | 38,992,555 | <p>You can do the following:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
newlst=[]
for i in lst:
newlst.append(i[7:])
print newlst
</code></pre>
<p>I hope that helps.</p>
| 1 | 2016-08-17T09:16:43Z | [
"python",
"string",
"list"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list with a couple of strings in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
 | -2 | 2016-08-17T09:12:55Z | 38,992,562 | <p><code>i[7:]</code> is not in-place; it returns a new string, which you are doing nothing with.</p>
<p>You can create a new list with the required string:</p>
<pre><code>lst = [string[7:] for string in lst]
</code></pre>
<p>Or you can modify the same list:</p>
<pre><code>for idx, string in enumerate(ls):
ls[idx] = string[7:]
</code></pre>
| 1 | 2016-08-17T09:17:12Z | [
"python",
"string",
"list"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list with a couple of strings in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
 | -2 | 2016-08-17T09:12:55Z | 38,992,564 | <p>You're not saving the value of <code>i[7:]</code> anywhere… Just create a new list with the trimmed values:</p>
<pre><code>lst = [i[7:] for i in lst]
</code></pre>
| 1 | 2016-08-17T09:17:17Z | [
"python",
"string",
"list"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list with a couple of strings in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
 | -2 | 2016-08-17T09:12:55Z | 38,992,576 | <p>Try this:</p>
<pre><code>lst = [s[7:] for s in lst]
</code></pre>
| 1 | 2016-08-17T09:17:54Z | [
"python",
"string",
"list"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list with a couple of strings in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
| -2 | 2016-08-17T09:12:55Z | 38,992,593 | <p>You never reassigned <code>lst</code>, which is why the output of <code>print(lst)</code> does not change. Reassign it like this:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
lst = [i[7:] for i in lst]
print(lst)
</code></pre>
<p>returns</p>
<blockquote>
<p>['something', 'smthelse']</p>
</blockquote>
| 1 | 2016-08-17T09:18:32Z | [
"python",
"string",
"list"
] |
how to remove first x characters from a string in a list | 38,992,480 | <p>So I have a list and a couple of string in it. I just want to remove the first 7 characters from each of the strings. How do I do that?</p>
<p>I've tried:</p>
<pre><code>lst = ["1234567something", "1234567smthelse"]
for i in lst:
i [7:]
print lst
</code></pre>
<p>But I get the same list from the beginning...</p>
| -2 | 2016-08-17T09:12:55Z | 38,992,598 | <p>When you write <code>i[7:]</code> you are <strong>not actually editing</strong> the list element; you are just computing a new string without the first 7 characters and discarding it.</p>
<p>You can do this instead :</p>
<pre><code>>>> lst = [e[7:] for e in lst]
>>> lst
['something', 'smthelse']
</code></pre>
<p>This will loop on the elements of your array, and remove the characters from the beginning, as expected.</p>
| 3 | 2016-08-17T09:18:48Z | [
"python",
"string",
"list"
] |
python/pandas - counting unique values in a single DataFrame column and displaying counts as new columns | 38,992,512 | <p>I am starting with data of city transits with an additional column containing the mode of transportation</p>
<pre><code>Orig Dest Type
NY SF Train
NY SF Plane
NO NY Plane
SE NO Plane
SE NO Train
</code></pre>
<p>I want to aggregate it such that each unique value in Type becomes a column with counts of that Type for each unique Orig/Dest pair</p>
<pre><code>Orig Dest Plane Train
NY SF 1 1
NO NY 1 0
SE NO 1 1
</code></pre>
<p>I know some basic aggregation using pd.groupby but can only aggregate so far as to get just basic counts of the Orig/Dest pairs using:</p>
<pre><code>df.groupby(['Orig','Dest'])['Type'].count()
</code></pre>
| 2 | 2016-08-17T09:14:46Z | 38,992,606 | <p>You can use <code>size</code> to count the rows in each <code>Orig</code>/<code>Dest</code>/<code>Type</code> group (<code>nunique</code> would only tell you how many distinct values there are, not how often they occur), then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow"><code>unstack</code></a> the <code>Type</code> level into columns. Last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#changes-to-rename" rel="nofollow"><code>rename_axis</code></a> (new in <code>pandas</code> <code>0.18.0</code>):</p>
<pre><code>print (df.groupby(['Orig','Dest','Type'])
         .size()
         .unstack(fill_value=0)
         .reset_index()
         .rename_axis(None, axis=1))
Orig Dest Plane Train
0 NO NY 1 0
1 NY SF 1 1
2 SE NO 1 1
</code></pre>
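<p>An alternative worth knowing is <code>pd.crosstab</code>, which tabulates the counts directly (the frame below just recreates the sample data from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({'Orig': ['NY', 'NY', 'NO', 'SE', 'SE'],
                   'Dest': ['SF', 'SF', 'NY', 'NO', 'NO'],
                   'Type': ['Train', 'Plane', 'Plane', 'Plane', 'Train']})

# one row per unique Orig/Dest pair, one column per Type, cells are counts
counts = (pd.crosstab([df['Orig'], df['Dest']], df['Type'])
            .reset_index()
            .rename_axis(None, axis=1))
```

<p>Missing combinations are filled with <code>0</code> automatically, so no <code>fillna</code> step is needed.</p>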
| 2 | 2016-08-17T09:19:22Z | [
"python",
"pandas",
"group-by"
] |
Arrow with color gradient in matplotlib | 38,992,615 | <p>Is it possible to make arrows in matplotlib where the color of the arrow is changing gradually from one color to another?</p>
| 0 | 2016-08-17T09:19:41Z | 38,994,432 | <p>I don't think it's possible out of the box, but it's probably hackable.</p>
<ul>
<li>For a start, have a look at <a href="http://matplotlib.org/examples/pylab_examples/multicolored_line.html" rel="nofollow">This example that shows how to do a gradient line</a></li>
<li>or <a href="http://matplotlib.org/examples/pylab_examples/gradient_bar.html" rel="nofollow">This example to draw a gradient bar</a></li>
</ul>
<p>depending on what kind of arrow you want to draw.</p>
| 1 | 2016-08-17T10:44:05Z | [
"python",
"matplotlib"
] |
I need a python regex to tokenize the sentences upon finding a "\\n" | 38,992,647 | <p>I used a document converter to get the text from pdf. The text appears in the form: </p>
<blockquote>
<p>"Hello Programmers\\nToday we will learn how to create a program in
python\\nThefirst task is very easy and the level will exponentially
increase\\nso please bare in mind that this course is not for the
weak hearted\\n"</p>
</blockquote>
<p>I am using NLTK to tokenize the document into sentence upon occurrence of <code>\\n</code>. I have used the below regex, but it doesn't work.</p>
<p>Please excuse me if the regex is wrong. I am new to it and there's no time to learn as I have to deliver the code asap.</p>
<pre><code>from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'^[\n]')
>>> tokens
[]
</code></pre>
<p>..</p>
<pre><code>#tokenizer = RegexpTokenizer('\\n')
>>> tokens
['\n']
>>>
</code></pre>
<p>Even using <code>\\n</code> did not work. Someone please suggest a correct regex.</p>
| 1 | 2016-08-17T09:21:03Z | 38,992,880 | <p>Hey you need to use <code>gaps</code></p>
<pre><code>>>> tokenizer = RegexpTokenizer(r'\\n', gaps=True)
>>> tokenizer.tokenize(s)
['Hello Programmers', 'Today we will learn how to create a program in python', 'Thefirst task is very easy and the level will exponentially increase', 'so please bare in mind that this course is not for the weak hearted']
</code></pre>
<p>A <code>RegexpTokenizer</code> splits a string into substrings using a regular expression. A RegexpTokenizer can use its regexp to match delimiters instead using <code>gaps=True</code></p>
| 1 | 2016-08-17T09:32:51Z | [
"python",
"regex"
] |
I need a python regex to tokenize the sentences upon finding a "\\n" | 38,992,647 | <p>I used a document converter to get the text from pdf. The text appears in the form: </p>
<blockquote>
<p>"Hello Programmers\\nToday we will learn how to create a program in
python\\nThefirst task is very easy and the level will exponentially
increase\\nso please bare in mind that this course is not for the
weak hearted\\n"</p>
</blockquote>
<p>I am using NLTK to tokenize the document into sentence upon occurrence of <code>\\n</code>. I have used the below regex, but it doesn't work.</p>
<p>Please excuse me if the regex is wrong. I am new to it and there's no time to learn as I have to deliver the code asap.</p>
<pre><code>from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'^[\n]')
>>> tokens
[]
</code></pre>
<p>..</p>
<pre><code>#tokenizer = RegexpTokenizer('\\n')
>>> tokens
['\n']
>>>
</code></pre>
<p>Even using <code>\\n</code> did not work. Someone please suggest a correct regex.</p>
| 1 | 2016-08-17T09:21:03Z | 38,993,009 | <p>The most basic solution which may be useful is:</p>
<pre><code>text = "Hello Programmers\\nToday we will learn how to create a program in python\\nThefirst task is very easy and the level will exponentially increase\\nso please bare in mind that this course is not for the weak hearted\\n"
each_line = text.split('\\n')
for i in each_line:
print i
</code></pre>
| 1 | 2016-08-17T09:38:31Z | [
"python",
"regex"
] |
matlabish "strncmp" in python | 38,992,764 | <p>I need to find indices of all occurrences of a particular pattern in a string (or numerical vector). For example, given the boolean list (DataFrame):</p>
<pre><code>z =
15 False
16 False
17 False
18 False
19 False
20 False
21 False
22 False
23 False
24 True
25 True
26 True
27 False
28 False
29 False
30 False
31 False
32 False
33 False
34 False
35 False
36 True
37 False
38 False
39 False
40 True
41 False
42 False
43 False
44 False
45 True
46 True
47 True
48 False
49 False
</code></pre>
<p>I am interested in a function which returns indices of all occurrences of three 'True' in a row, in this example, I should get the index</p>
<pre><code>>> result = some_function(z)
>> print result
>> [24, 45]
</code></pre>
<p>In matlab it is quite easy with the function strcmp, which does exactly what I need. I am sure that there is a similar function in Python.</p>
<p>I tried to use '<code>if ['True', 'True', 'True'] in z</code>:....but I am doing something wrong.</p>
<p><strong>UPD</strong> I found a very simple and general solution to such problems, which works with any datatype:</p>
<pre><code>import numpy
from numpy.lib.stride_tricks import as_strided

def find_subarray_in_array(sub_array, large_array):
    large_array_view = as_strided(large_array,
                                  shape=(len(large_array) - len(sub_array) + 1, len(sub_array)),
                                  strides=(large_array.dtype.itemsize,) * 2)
    return numpy.where(numpy.all(large_array_view == sub_array, axis=1))[0]
</code></pre>
<p>where "sub_array" is the pattern which should be found in the larger array "large_array".</p>
| 1 | 2016-08-17T09:27:28Z | 38,993,552 | <p>I'm assuming here that your inputs are lists:</p>
<pre><code>inds =
[15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46,
47, 48, 49]
bools =
[False,False,False,False,False,False,False,False,False, True, True,
True,False,False,False,False,False,False,False,False,False, True,
False,False,False, True,False,False,False,False, True, True, True,
False,False]
</code></pre>
<p>You then want to check for the pattern [True, True, True]</p>
<pre><code>pattern = [True, True, True]
</code></pre>
<p>The required comparison is then done by:</p>
<pre><code>[inds[i] for i in range(len(bools) - len(pattern) + 1) if bools[i:i+len(pattern)] == pattern]
</code></pre>
<p>Returns:</p>
<blockquote>
<p>[24, 45]</p>
</blockquote>
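<p>If you need this more than once, the same comparison can be wrapped in a small reusable helper (the function name is my own):</p>

```python
def find_pattern(inds, bools, pattern):
    """Return the labels from inds at which pattern starts in bools."""
    n = len(pattern)
    return [inds[i] for i in range(len(bools) - n + 1)
            if bools[i:i + n] == pattern]
```

<p>With the <code>inds</code> and <code>bools</code> lists above, <code>find_pattern(inds, bools, [True, True, True])</code> again returns <code>[24, 45]</code>, and any other pattern length works the same way.</p>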
| 1 | 2016-08-17T10:03:33Z | [
"python",
"dataframe",
"strcmp"
] |
matlabish "strncmp" in python | 38,992,764 | <p>I need to find indices of all occurrences of a particular pattern in a string (or numerical vector). For example, given the boolean list (DataFrame):</p>
<pre><code>z =
15 False
16 False
17 False
18 False
19 False
20 False
21 False
22 False
23 False
24 True
25 True
26 True
27 False
28 False
29 False
30 False
31 False
32 False
33 False
34 False
35 False
36 True
37 False
38 False
39 False
40 True
41 False
42 False
43 False
44 False
45 True
46 True
47 True
48 False
49 False
</code></pre>
<p>I am interested in a function which returns indices of all occurrences of three 'True' in a row, in this example, I should get the index</p>
<pre><code>>> result = some_function(z)
>> print result
>> [24, 45]
</code></pre>
<p>In matlab it is quite easy with the function strcmp, which does exactly what I need. I am sure that there is a similar function in Python.</p>
<p>I tried to use '<code>if ['True', 'True', 'True'] in z</code>:....but I am doing something wrong.</p>
<p><strong>UPD</strong> I found a very simple and general solution to such problems, which works with any datatype:</p>
<pre><code>import numpy
from numpy.lib.stride_tricks import as_strided

def find_subarray_in_array(sub_array, large_array):
    large_array_view = as_strided(large_array,
                                  shape=(len(large_array) - len(sub_array) + 1, len(sub_array)),
                                  strides=(large_array.dtype.itemsize,) * 2)
    return numpy.where(numpy.all(large_array_view == sub_array, axis=1))[0]
</code></pre>
<p>where "sub_array" is the pattern which should be found in the larger array "large_array".</p>
| 1 | 2016-08-17T09:27:28Z | 39,005,217 | <p>Although this can be done using list comprehensions, you lose a lot of the advantage of using numpy arrays or pandas dataframes, specifically that you can vectorize operations. The better approach would be to use <code>numpy.correlate</code>, which allows you to compare two arrays to see how well they match up. You can use this to find all the places where your target (a sequence of three <code>True</code> values) matches up perfectly with the array itself (the correlation is <code>3</code>, so 3 elements match). This finds the center of each match, so to get the start you need to subtract half the target length from the result. So this will do what you want (assuming <code>inds</code> and <code>vals</code> are numpy arrays):</p>
<pre><code>targ = [True, True, True]
corr = np.correlate(vals.astype('int'), targ, mode='same')
matches = np.where(corr == len(targ))[0]-len(targ)//2
result = inds[matches]
</code></pre>
<p>If the indices will always be sequential (such as <code>13,14,15,16,...</code>), you can simplify this to:</p>
<pre><code>targ = [True, True, True]
corr = inds[np.correlate(vals.astype('int'), targ, mode='same') == len(targ)]-len(targ)//2
</code></pre>
| 1 | 2016-08-17T19:56:09Z | [
"python",
"dataframe",
"strcmp"
] |
Trying to run Cloudera Image in Docker | 38,992,850 | <p>I am trying to run cloudera/clusterdock in a docker image for a university project. This is my first time using docker and so far I have been using the instructions on the cloudera website which are a little sparse.</p>
<p>I successfully downloaded Docker and the Cloudera image, and when I run the <code>docker images</code> command I get the following:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cloudera/clusterdock latest 9b4d4f1dda22 7 days ago 467.5 MB
</code></pre>
<p>When I try and run up the container with this image. Using the following command</p>
<pre><code>docker run cloudera/clusterdock:latest /bin/bash
</code></pre>
<p>I get the following message</p>
<pre><code> File "/bin/bash", line 1
SyntaxError: Non-ASCII character '\x80' in file /bin/bash on line 2,
but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
</code></pre>
<p>Having read the pep mentioned pep I know I need to change the encoding in a file but the pep concentrates on python files and I am unaware of having a python file so have no idea where to find it to correct it. Also, having limited knowledge I am uneasy changing the bin/bash file as I know it can affect your machine.</p>
<p>Any help will have to assume I have little knowledge of this as I have little experience.</p>
| 0 | 2016-08-17T09:31:44Z | 38,993,584 | <p>If you look at <a href="https://github.com/cloudera/clusterdock/blob/master/Dockerfile#L54" rel="nofollow">Dockerfile</a> for <code>cloudera/clusterdock:latest</code>, you can see:</p>
<pre><code>ENTRYPOINT ["python"]
</code></pre>
<p>So, when you do <code>docker run cloudera/clusterdock:latest /bin/bash</code>, you are basically doing <code>python /bin/bash</code> inside the container. You will see the same error if you type that in your terminal, normally:</p>
<pre><code>$ python /bin/bash
File "/bin/bash", line 1
SyntaxError: Non-ASCII character '\xe0' in file /bin/bash on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
</code></pre>
<p>You probably wanted to do:</p>
<pre><code>docker run -it --entrypoint=/bin/bash cloudera/clusterdock:latest
</code></pre>
<p>Look at <a href="https://github.com/cloudera/clusterdock/blob/master/clusterdock.sh#L86-L97" rel="nofollow">clusterdock.sh</a> to see how actually the container is supposed to be run.</p>
| 1 | 2016-08-17T10:04:59Z | [
"python",
"bash",
"command-line",
"docker"
] |
Trying to run Cloudera Image in Docker | 38,992,850 | <p>I am trying to run cloudera/clusterdock in a docker image for a university project. This is my first time using docker and so far I have been using the instructions on the cloudera website which are a little sparse.</p>
<p>I successfully downloaded Docker and the Cloudera image, and when I run the <code>docker images</code> command I get the following:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cloudera/clusterdock latest 9b4d4f1dda22 7 days ago 467.5 MB
</code></pre>
<p>When I try and run up the container with this image. Using the following command</p>
<pre><code>docker run cloudera/clusterdock:latest /bin/bash
</code></pre>
<p>I get the following message</p>
<pre><code> File "/bin/bash", line 1
SyntaxError: Non-ASCII character '\x80' in file /bin/bash on line 2,
but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
</code></pre>
<p>Having read the pep mentioned pep I know I need to change the encoding in a file but the pep concentrates on python files and I am unaware of having a python file so have no idea where to find it to correct it. Also, having limited knowledge I am uneasy changing the bin/bash file as I know it can affect your machine.</p>
<p>Any help will have to assume I have little knowledge of this as I have little experience.</p>
| 0 | 2016-08-17T09:31:44Z | 39,042,298 | <p>The associated docs (e.g. the description on the image's Docker Hub page or our blog post) describe that clusterdock is intended to be run by sourcing clusterdock.sh. This is required because the framework controls Docker on the host machine.</p>
| 0 | 2016-08-19T15:05:05Z | [
"python",
"bash",
"command-line",
"docker"
] |
How do you run an elif function in a graph? | 38,992,866 | <p>I have a dataset which has corresponding temperature values, with a column called CO2-rh:</p>
<pre><code>import pandas as pd
df=pd.read_csv('F:/data32.csv',parse_dates=['Date'])
print (df)
Temperature unit unit.1 CO2 flux.1 %Root Resp CO2-Rh
4.5 umol/m2/s mg/cm^2/h 0.001210 26.5 0.000889
4.5 umol/m2/s mg/cm^2/h 0.001339 26.5 0.000984
6.5 umol/m2/s mg/cm^2/h 0.001339 26.5 0.000984
5.3 umol/m2/s mg/cm^2/h 0.001469 26.5 0.001080
4.0 umol/m2/s mg/cm^2/h 0.001598 26.5 0.001175
5.5 umol/m2/s mg/cm^2/h 0.001598 26.5 0.001175
5.0 umol/m2/s mg/cm^2/h 0.001771 26.5 0.001302
5.0 umol/m2/s mg/cm^2/h 0.001944 26.5 0.001429
4.5 umol/m2/s mg/cm^2/h 0.003110 26.5 0.002286
10.3 umol/m2/s mg/cm^2/h 0.001166 26.5 0.000857
9.0 umol/m2/s mg/cm^2/h 0.002030 26.5 0.001492
</code></pre>
<p>I have a dataset which has corresponding temperature values, with a column called CO2-rh. I want to divide it according to the mean temperature: if the temperature is above 8.21, the row should go to dataset "a", while anything equal to or below 8.21 should go into dataset "b" (I feel this is the best way to plot two separate graphs). What can I do?
So far this is what I got:</p>
<pre><code>if df['Temperature']> 8.212312312312307:
plt.plot(df['Temperature'],df['CO2-rh'],linewidth=3)
plt.show()
</code></pre>
| 1 | 2016-08-17T09:32:27Z | 38,993,004 | <p>It looks like need <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html" rel="nofollow"><code>DataFrame.plot</code></a>:</p>
<pre><code>import matplotlib.pyplot as plt
mask = df['Temperature']> 8.212312312312307
df1 = df[mask]
df2 = df[~mask]
print (df1)
Temperature unit unit.1 CO2 flux.1 %Root Resp CO2-Rh
9 10.3 umol/m2/s mg/cm^2/h 0.001166 26.5 0.000857
10 9.0 umol/m2/s mg/cm^2/h 0.002030 26.5 0.001492
print (df2)
Temperature unit unit.1 CO2 flux.1 %Root Resp CO2-Rh
0 4.5 umol/m2/s mg/cm^2/h 0.001210 26.5 0.000889
1 4.5 umol/m2/s mg/cm^2/h 0.001339 26.5 0.000984
2 6.5 umol/m2/s mg/cm^2/h 0.001339 26.5 0.000984
3 5.3 umol/m2/s mg/cm^2/h 0.001469 26.5 0.001080
4 4.0 umol/m2/s mg/cm^2/h 0.001598 26.5 0.001175
5 5.5 umol/m2/s mg/cm^2/h 0.001598 26.5 0.001175
6 5.0 umol/m2/s mg/cm^2/h 0.001771 26.5 0.001302
7 5.0 umol/m2/s mg/cm^2/h 0.001944 26.5 0.001429
8 4.5 umol/m2/s mg/cm^2/h 0.003110 26.5 0.002286
df1[['Temperature','CO2-Rh']].plot(linewidth=3)
df2[['Temperature','CO2-Rh']].plot(linewidth=3)
plt.show()
</code></pre>
| 1 | 2016-08-17T09:38:23Z | [
"python",
"pandas",
"if-statement",
"matplotlib",
"statistics"
] |
Scrapy crawled 0 pages with 200 response status | 38,992,887 | <p>I am testing with Scrapy to crawl web pages. I cannot crawl the pages I want and I cannot find the reason why. Can anyone solve my problem?</p>
<p>P.S. Thanks for the reminder; the previous web page showed an error, so I have changed the path.</p>
<p>total_corner_spider.py</p>
<pre><code>name = "totalcorner"
allowed_domains = ["totalcorner.com"]
start_urls = [
"http://www.totalcorner.com/match/corner_stats/57868009",
]
def parse(self, response):
histories = Selector(response).xpath('//*[@id="home_history_table"]/tbody')
for history in histories:
item = HistoryItem()
item['leagueId'] = history.xpath(
'a[@data-league_id').extract()[0]
yield item
</code></pre>
<p>items.py</p>
<pre><code>from scrapy.item import Item, Field
class HistoryItem(Item):
leagueId = Field()
</code></pre>
<p>after </p>
<pre><code>>>> scrapy crawl totalcorner -o some.json,
</code></pre>
<p>I found that the .json file contains nothing but a "["</p>
<p>after </p>
<pre><code>>>> scrapy crawl totalcorner
</code></pre>
<p>I get the following log from the terminal:</p>
<pre><code>2016-08-17 17:20:50 [scrapy] INFO: Scrapy 1.1.1 started (bot: totalcorner)
2016-08-17 17:20:50 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'totalcorner.spiders', 'SPIDER_MODULES': ['totalcorner.spiders'], 'BOT_NAME': 'totalcorner'}
2016-08-17 17:20:50 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-08-17 17:20:50 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-08-17 17:20:50 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-08-17 17:20:50 [scrapy] INFO: Enabled item pipelines:
[]
2016-08-17 17:20:50 [scrapy] INFO: Spider opened
2016-08-17 17:20:50 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-08-17 17:20:50 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-08-17 17:20:53 [scrapy] DEBUG: Crawled (200) <GET http://www.totalcorner.com/match/corner_stats/57838664> (referer: None)
2016-08-17 17:20:53 [scrapy] INFO: Closing spider (finished)
2016-08-17 17:20:53 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 244,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 5541,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 8, 17, 9, 20, 53, 487371),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 8, 17, 9, 20, 50, 660260)}
2016-08-17 17:20:53 [scrapy] INFO: Spider closed (finished)
</code></pre>
| 0 | 2016-08-17T09:33:18Z | 38,993,387 | <p>I found the reason, a silly mistake. The file name for the spider should be totalcorner_spider.py. My bad. Thanks for your effort.</p>
| 0 | 2016-08-17T09:55:42Z | [
"python",
"scrapy",
"web-crawler"
] |
celery workers not releasing memory after tasks are finished | 38,992,930 | <p>I am running celery with redis as the broker, on centos 6.5 machine, with the following configuration:</p>
<pre><code>app.conf.update(
CELERY_ENABLE_UTC=True,
CELERY_SEND_TASK_SENT_EVENT=True,
CELERY_ACCEPT_CONTENT=['msgpack', 'json', 'application/x-python-serialize'],
CELERY_TASK_SERIALIZER='msgpack',
CELERY_RESULT_SERIALIZER='msgpack',
CELERYD_PREFETCH_MULTIPLIER=1)
</code></pre>
<p>The problem is that even when there are no tasks in the queue and no active tasks at all, the memory is not released.</p>
<p>Any idea might help!</p>
| 0 | 2016-08-17T09:35:15Z | 38,994,639 | <p>Usually, Unix processes do not systematically return to the OS the memory they acquire from it when they no longer need it. So over time the memory usage of a process will increase. </p>
<p>If you are satisfied there's no actual coding or configuration problem affecting your workers, you could restart them periodically.</p>
| 0 | 2016-08-17T10:53:56Z | [
"python",
"linux",
"celery"
] |
insert element with element tree to existing xml | 38,992,981 | <p>I'm trying to find the simplest way to add an element to these item entries using element tree.</p>
<p>I have the below XML output stored in (xmldata). I don't want to write this to a file yet, I just need to add the id so I an further use the data by relating it to a corresponding id in other data.</p>
<p>Where you see </p>
<pre><code> <archived type="bool">False</archived>
</code></pre>
<p>Just above that I want to add</p>
<pre><code> <id>555666</id>
</code></pre>
<p>to all items in the list (same id to all)</p>
<pre><code> <?xml version="1.0" encoding="UTF-8" ?>
<root>
<tasks type="list">
<item type="dict">
<archived type="bool">False</archived>
<budget_spent type="float">0.0</budget_spent>
<billable_hours type="float">0.0</billable_hours>
<billable type="bool">True</billable>
<billable_amount type="float">0.0</billable_amount>
<budget_left type="null"/>
<over_budget_percentage type="null"/>
<task_id type="int">6356</task_id>
<detailed_report_url type="str">/reports/detailed/</detailed_report_url>
<name type="str">Planning</name>
<internal_cost type="float">0.0</internal_cost>
<budget type="null"/>
<budget_spent_percentage type="null"/>
<total_hours type="float">0.0</total_hours>
<over_budget type="null"/>
<billed_rate type="float">0.0</billed_rate>
</item>
<item type="dict">
<archived type="bool">False</archived>
<budget_spent type="float">0.0</budget_spent>
<billable_hours type="float">0.0</billable_hours>
<billable type="bool">True</billable>
<billable_amount type="float">0.0</billable_amount>
<budget_left type="null"/>
<over_budget_percentage type="null"/>
<task_id type="int">6357</task_id>
<detailed_report_url type="str">/detailed/123</detailed_report_url>
<name type="str">Planning</name>
<internal_cost type="float">0.0</internal_cost>
<budget type="null"/>
<budget_spent_percentage type="null"/>
<total_hours type="float">0.0</total_hours>
<over_budget type="null"/>
<billed_rate type="float">0.0</billed_rate>
</item>
</tasks>
</code></pre>
<p>**** update ****</p>
<p>Based on the answer from DAXaholic I've added this:</p>
<pre><code> tree = ET.fromstring(xmldata)
for item in tree.iterfind('tasks/item'):
idtag = ET.Element('id')
idtag.text = '555666'
item.insert(0, idtag)
</code></pre>
<p>not sure how to finish this off so I have the updated data to use.</p>
| 1 | 2016-08-17T09:37:36Z | 38,993,280 | <p>Something like this should give you the idea</p>
<pre><code>root = ET.fromstring(xmldata)
for item in root.iterfind('tasks/item'):
idtag = ET.Element('id')
idtag.text = '555666'
item.insert(0, idtag)
xmldata = ET.tostring(root, encoding="unicode")
</code></pre>
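<p>To round it off, a self-contained run against a trimmed-down version of the document (only the <code>archived</code> field is kept per item here for brevity):</p>

```python
import xml.etree.ElementTree as ET

xmldata = """<root>
  <tasks type="list">
    <item type="dict"><archived type="bool">False</archived></item>
    <item type="dict"><archived type="bool">False</archived></item>
  </tasks>
</root>"""

root = ET.fromstring(xmldata)
for item in root.iterfind('tasks/item'):
    idtag = ET.Element('id')
    idtag.text = '555666'
    item.insert(0, idtag)  # position 0 puts <id> just before <archived>

xmldata = ET.tostring(root, encoding="unicode")
```

<p>After this, every <code>item</code> starts with <code>&lt;id&gt;555666&lt;/id&gt;</code> and <code>xmldata</code> holds the updated string for further use.</p>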
| 1 | 2016-08-17T09:51:07Z | [
"python",
"xml",
"string",
"elementtree"
] |
how to match two columns using one of the columns as reference? | 38,993,072 | <p>I have done some analysis and found a particular pattern, and now I am trying to do some predictions.
I have a data set that predicts the ratings for students with a given number of accidents in their childhood.
My prediction matrix looks something like this:</p>
<pre><code> A
injuries ratings
0 5
1 4.89
2 4.34
3 3.99
4 3.89
5 3.77
</code></pre>
<p>and my dataset looks like this:</p>
<pre><code>B
siblings income injuries total_scoldings_from father
3 12000 4 09
4 34000 5 22
1 23400 3 12
3 24330 1 1
0 12000 1 12
</code></pre>
<p>Now I want to create a column named <strong>predictions</strong> that essentially matches the entries from <code>A</code> to <code>B</code> and returns</p>
<pre><code>siblings income injuries total_scoldings_from_father predictions
3 12000 4 09 3.89
4 34000 5 22 3.77
1 23400 3 12 3.99
3 24330 1 1 4.89
0 12000 1 12 4.89
</code></pre>
<p>Please help.</p>
<p>Also, please suggest a better title, as mine lacks the keywords important for future reference.</p>
| 1 | 2016-08-17T09:40:58Z | 38,993,174 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> if all values for mapping are in DataFrame <code>A</code>:</p>
<pre><code>B['predictions'] = B['injuries'].map(A.set_index('injuries')['ratings'])
print (B)
siblings income injuries total_scoldings_from_father predictions
0 3 12000 4 9 3.89
1 4 34000 5 22 3.77
2 1 23400 3 12 3.99
3 3 24330 1 1 4.89
4 0 12000 1 12 4.89
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>C = pd.merge(B,A)
print (C)
siblings income injuries total_scoldings_from_father ratings
0 3 12000 4 9 3.89
1 4 34000 5 22 3.77
2 1 23400 3 12 3.99
3 3 24330 1 1 4.89
4 0 12000 1 12 4.89
</code></pre>
| 1 | 2016-08-17T09:45:42Z | [
"python",
"pandas",
"dataframe",
"multiple-columns",
"maping"
] |
Running unix checksum command in Python | 38,993,138 | <p>Is there any way we can run the below Unix command in Python?</p>
<pre><code>csum -h SHA1 <filename>
</code></pre>
<p>The checksum produced in python would be stored and supposed to match with the checksum produced by unix command in destination server.<br>
I know we can produce checksum through python itself.<br>
But I was not sure if this would match unix checksum produced on destination server.<br>
So I was thinking if there is anyway we run the same command on both servers to ensure there is no mismatch because of unix and python</p>
| -1 | 2016-08-17T09:43:54Z | 38,993,279 | <p>You can use </p>
<pre><code>import commands
print commands.getstatusoutput('csum -h SHA1 foobar')
(0, 'YOURCHECKSUM')
</code></pre>
<p>Be aware that the <code>commands</code> module was deprecated in Python 2.6 and removed in Python 3; use <code>subprocess.getstatusoutput</code> there instead.</p>
| 0 | 2016-08-17T09:51:04Z | [
"python",
"unix",
"checksum"
] |
Running unix checksum command in Python | 38,993,138 | <p>Is there any way we can run the below Unix command in Python?</p>
<pre><code>csum -h SHA1 <filename>
</code></pre>
<p>The checksum produced in python would be stored and supposed to match with the checksum produced by unix command in destination server.<br>
I know we can produce checksum through python itself.<br>
But I was not sure if this would match unix checksum produced on destination server.<br>
So I was thinking if there is anyway we run the same command on both servers to ensure there is no mismatch because of unix and python</p>
| -1 | 2016-08-17T09:43:54Z | 38,993,285 | <p>You can just call unix from python using <code>subprocess.call</code></p>
<pre><code>import subprocess
subprocess.call("csum -h SHA1 {}".format(filename))
</code></pre>
| 1 | 2016-08-17T09:51:17Z | [
"python",
"unix",
"checksum"
] |
flask: `@after_this_request` not working | 38,993,146 | <p>I want to delete a file after the user downloaded a file which was created by the flask app.</p>
<p>For doing so I found this <a href="http://stackoverflow.com/a/24613980/3991125">answer on SO</a> which did not work as expected and raised an error telling that <code>after_this_request</code> is not defined.</p>
<p>Due to that I had a deeper look into <a href="http://flask.pocoo.org/snippets/53/" rel="nofollow">Flask's documentation providing a sample snippet</a> about how to use that method. So, I extended my code by defining a <code>after_this_request</code> function as shown in the sample snippet.</p>
<p>Executing the code resp. running the server works as expected. However, the file is not removed because <code>@after_this_request</code> is not called which is obvious since <code>After request ...</code> is not printed to Flask's output in the terminal:</p>
<pre><code>#!/usr/bin/env python3
# coding: utf-8
import os
from operator import itemgetter
from flask import Flask, request, redirect, url_for, send_from_directory, g
from werkzeug.utils import secure_filename
UPLOAD_FOLDER = '.'
ALLOWED_EXTENSIONS = set(['csv', 'xlsx', 'xls'])
app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
def after_this_request(func):
if not hasattr(g, 'call_after_request'):
g.call_after_request = []
g.call_after_request.append(func)
return func
@app.route('/', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
if 'file' not in request.files:
flash('No file part')
return redirect(request.url)
file = request.files['file']
if file.filename == '':
flash('No selected file')
return redirect(request.url)
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
file.save(filepath)
@after_this_request
def remove_file(response):
print('After request ...')
os.remove(filepath)
return response
return send_from_directory('.', filename=filepath, as_attachment=True)
return '''
<!doctype html>
<title>Upload a file</title>
<h1>Upload new file</h1>
<form action="" method=post enctype=multipart/form-data>
<p><input type=file name=file>
<input type=submit value=Upload>
</form>
'''
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080, debug=True)
</code></pre>
<p>What do I miss here? How can I ensure calling the function following to the <code>@after_this_request</code> decorator in order to delete the file after it was downloaded by the user?</p>
<p><em>Note: Using Flask version 0.11.1</em></p>
| 0 | 2016-08-17T09:44:26Z | 38,993,208 | <p>Flask 0.9+ does ship a ready-made <code>flask.after_this_request</code> decorator you could import directly, but the snippet you copied predates it: it co-opts the <em>after <strong>every</strong> request</em> hook to handle per-request call-backs.</p>
<p>You did not implement the hook:</p>
<pre><code>@app.after_request
def per_request_callbacks(response):
for func in getattr(g, 'call_after_request', ()):
response = func(response)
return response
</code></pre>
<p>So that hook is run after each and every request, and looks for a list of hooks to call in <code>g.call_after_request</code>. The <code>after_this_request</code> decorator registers a function there. Without the above hook, your <code>after_this_request</code> decorator is essentially useless.</p>
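<p>To see why nothing happened, it can help to strip Flask out entirely. A rough stdlib-only sketch of the same two-part mechanism (the class and function names here are illustrative stand-ins, not Flask's):</p>

```python
class RequestGlobals:
    """Stand-in for flask.g - one fresh instance per request."""
    pass

g = RequestGlobals()

def after_this_request(func):
    # Part 1: only *registers* the callback on the per-request object.
    if not hasattr(g, "call_after_request"):
        g.call_after_request = []
    g.call_after_request.append(func)
    return func

def run_after_request_hooks(response):
    # Part 2: the piece the question was missing - something has to
    # actually iterate over the registered callbacks after each request.
    for func in getattr(g, "call_after_request", ()):
        response = func(response)
    return response
```

<p>Registering a callback without ever calling the second function does exactly nothing, which matches the observed behaviour.</p>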
| 0 | 2016-08-17T09:47:25Z | [
"python",
"python-3.x",
"flask",
"request",
"delete-file"
] |
PyOpenSSL get server certificate from SNI host with unknown hostnames | 38,993,256 | <p>My code sample:</p>
<pre><code> import OpenSSL
import socket
ctx = OpenSSL.SSL.Context(OpenSSL.SSL.SSLv23_METHOD)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection = OpenSSL.SSL.Connection(ctx, s)
connection.connect((str(ip), port))
connection.setblocking(1)
connection.do_handshake()
chain = connection.get_peer_cert_chain()
</code></pre>
<p>The case is that if host has SNI extension I get an error:</p>
<blockquote>
<p>[('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure')]</p>
</blockquote>
<p>I believe that I can overcome this using <code>OpenSSL.SSL.Connection.set_tlsext_host_name(name)</code>, but the hostname is unknown to the user, and I would like to connect to every available hostname.</p>
<p>So my question is: Is there a way to connect to host by ip and retrieve all available hostnames that provide certificates? Or is there a way to just retrieve all certificates from a SNI host?</p>
| 0 | 2016-08-17T09:50:08Z | 39,005,454 | <blockquote>
<p>Is there a way to connect to host by ip and retrieve all available hostnames that provide certificates? Or is there a way to just retrieve all certificates from a SNI host?</p>
</blockquote>
<p>None of this is possible using HTTPS. If you are lucky you can gather information about the possible hostnames somewhere on the server or by sniffing for DNS lookups which resolve to the servers IP or by using other ways to figure out which names belong to this IP address: <a href="http://stackoverflow.com/questions/1069221/how-can-i-find-all-the-domain-names-that-resolve-to-one-ip-address">How can I find all the domain names that resolve to one ip address?</a></p>
| 0 | 2016-08-17T20:12:01Z | [
"python",
"sockets",
"openssl",
"pyopenssl"
] |
Trigger python file under django project from a HTML button | 38,993,259 | <p>My Python script file is under a Django project but in some other folder (say, otherPythons). I want to trigger this Python file on click of an HTML button using JavaScript.</p>
<p>Please no PHP</p>
<p>Thanks in advance</p>
| -2 | 2016-08-17T09:50:19Z | 38,993,300 | <p>Django doesn't care which folder your view code is in. Just define a URL that points to it.</p>
| 0 | 2016-08-17T09:51:59Z | [
"javascript",
"jquery",
"python",
"ajax",
"django"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,037,722 | <p>I think you should consider <a href="http://yaml.org/"><code>YAML</code></a> format. It supports block notation which is <a href="http://www.yaml.org/spec/1.2/spec.html#id2760844">able to preserve newlines</a> like this</p>
<pre><code>data: |
There once was a short man from Ealing
Who got on a bus to Darjeeling
It said on the door
"Please don't spit on the floor"
So he carefully spat on the ceiling
</code></pre>
<p>Also there are plenty of parsers for all kinds of programming languages, including Python <em>(e.g. <a href="http://pyyaml.org/wiki/PyYAMLDocumentation">PyYAML</a>)</em>.</p>
<p>There is also the huge advantage that any valid <a href="http://yaml.org/spec/1.2/spec.html#id2759572">JSON is YAML</a>.</p>
| 21 | 2016-08-19T11:17:49Z | [
"python",
"json",
"format"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,039,087 | <p><code>ini</code> format also supports multiline strings; configparser from Python stdlib can handle it. See <a href="https://docs.python.org/3/library/configparser.html#supported-ini-file-structure" rel="nofollow">https://docs.python.org/3/library/configparser.html#supported-ini-file-structure</a>.</p>
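<p>A minimal sketch of what that looks like in practice (lines indented under a key continue its value, and <code>configparser</code> rejoins them with newlines):</p>

```python
import configparser

# Continuation lines are indented relative to the key;
# the parser joins them back together with "\n".
ini_text = """\
[message]
text = first line
    second line
"""

parser = configparser.ConfigParser()
parser.read_string(ini_text)
print(parser["message"]["text"])
```

<p>This prints the value over two lines, with the newline preserved.</p>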
| 2 | 2016-08-19T12:29:08Z | [
"python",
"json",
"format"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,040,301 | <p>If the files are only used by Python (overlooking the <em>interchange</em>), you could simply put your data in a python script file and import this as a module:</p>
<p>Data</p>
<pre><code>datum_1 = """ lorem
ipsum
dolor
"""
datum_list = [1, """two
liner"""]
datum_dict = {"key": None, "another": [None, 42.13]}
datum_tuple = ("anything", "goes")
</code></pre>
<p>Script</p>
<pre><code>from data import *
d = [e for e in locals() if not e.startswith("__")]
print( d )
for k in d:
print( k, locals()[k] )
</code></pre>
<p>Output</p>
<pre><code>['datum_list', 'datum_1', 'datum_dict', 'datum_tuple']
datum_list [1, 'two\nliner']
datum_1 lorem
ipsum
dolor
datum_dict {'another': [None, 42.13], 'key': None}
datum_tuple ('anything', 'goes')
</code></pre>
<p><hr/>
Update:</p>
<p>Code with dictionary comprehension</p>
<pre><code>from data import *
d = {e:globals()[e] for e in globals() if not e.startswith("__")}
for k in d:
print( k, d[k] )
</code></pre>
| 2 | 2016-08-19T13:27:18Z | [
"python",
"json",
"format"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,040,460 | <p>XML with <a href="https://docs.python.org/3/library/xml.etree.elementtree.html" rel="nofollow">ElementTree</a> (standard library) or <a href="http://lxml.de/" rel="nofollow">lxml</a> if you are OK with the markup overhead:</p>
<p>Data</p>
<pre><code><?xml version="1.0"?>
<data>
<string>Lorem
Ipsum
Dolor
</string>
</data>
</code></pre>
<p>Script</p>
<pre><code>import xml.etree.ElementTree
root = xml.etree.ElementTree.parse('data.xml').getroot()
for child in root:
print(child.tag, child.attrib, child.text)
</code></pre>
<p>Output</p>
<pre><code>string {} Lorem
Ipsum
Dolor
</code></pre>
| 2 | 2016-08-19T13:36:15Z | [
"python",
"json",
"format"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,077,281 | <p>Apropos of your comment:</p>
<blockquote>
<p>I want to use it for configuration. A lot of applications invent
their own configuration language. I want to avoid this. But json and
ConfigParser don't satisfy me. Json does not allow strings with
newlines (only \n) and ConfigParser does not allow nested data
structures. Next thing that I am missing: Validation (But this is a
different topic).</p>
</blockquote>
<p>There are 3 main options: <strong>ConfigParser</strong>, <strong>ConfigObj</strong>, or YAML (<a href="http://pyyaml.org/wiki/PyYAMLDocumentation" rel="nofollow"><strong>PyYAML</strong></a>) - each with its particular pros and cons. All 3 are better than JSON for your use-case, i.e. a configuration file.</p>
<p>Now further, which one is better depends upon what exactly you want to store in your conf file. </p>
<hr>
<p><strong>ConfigObj</strong> - For configuration and validation (your use-case):</p>
<p>ConfigObj is much simpler to use than YAML (and also than ConfigParser). It supports default values and types, and also includes validation (a huge plus over ConfigParser).</p>
<p><a href="http://www.voidspace.org.uk/python/articles/configobj.shtml" rel="nofollow">An Introduction to ConfigObj</a></p>
<blockquote>
<p>When you perform validation, each of the members in your specification
are checked and they undergo a process that converts the values into
the specified type. Missing values that have defaults will be filled
in, and validation returns either True to indicate success or a
dictionary with members that failed validation. The individual checks
and conversions are performed by functions, and adding your own check
function is very easy.</p>
</blockquote>
<p><strong>P.S. Yes, it allows multiline values</strong>.</p>
<hr>
<p>Helpful links:</p>
<p><a href="http://www.blog.pythonlibrary.org/2010/01/01/a-brief-configobj-tutorial/" rel="nofollow">A Brief ConfigObj Tutorial</a></p>
<p><a href="http://configobj.readthedocs.io/en/latest/configobj.html" rel="nofollow">ConfigObj 5 Introduction and Reference</a></p>
<hr>
<p>There are solid SO answers available on the comparison <strong>YAML</strong> vs <strong>ConfigParser</strong> vs <strong>ConfigObj</strong>:</p>
<p><a href="http://stackoverflow.com/questions/3420250/whats-better-configobj-or-configparser">What's better, ConfigObj or ConfigParser?</a></p>
<p><a href="http://stackoverflow.com/questions/3444436/configobj-configparser-vs-using-yaml-for-python-settings-file">ConfigObj/ConfigParser vs. using YAML for Python settings file</a></p>
<hr>
| 4 | 2016-08-22T10:33:56Z | [
"python",
"json",
"format"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,136,805 | <p>Not sure whether I've understood your question correctly, but are you not asking for something like this?</p>
<pre><code>my_config = {
"text": """first line
second line"""
}
print(my_config)
</code></pre>
| 0 | 2016-08-25T04:40:56Z | [
"python",
"json",
"format"
] |
Text based data format which supports multiline strings | 38,993,265 | <p>I search a text based data format which supports multiline strings.</p>
<p>JSON does not allow multiline strings:</p>
<pre><code>>>> import json
>>> json.dumps(dict(text='first line\nsecond line'))
'{"text": "first line\\nsecond line"}'
</code></pre>
<p>My desired output:</p>
<pre><code>{"text": "first line
second line"}
</code></pre>
<p>This question is about input and output. The data format should be editable with an editor like vi, emacs or notepad.</p>
<p>I don't care whether simple quotes <code>"</code> or triple quotes (like in Python) <code>"""</code> get used.</p>
<p>Is there an easy, human-readable textual data interchange format which supports this?</p>
<h1>Use case</h1>
<p>I want to edit data with multiline strings with <code>vi</code>. This is not fun, if the data is in json format.</p>
| 15 | 2016-08-17T09:50:34Z | 39,156,392 | <p>If you're using Python 2, I actually think json can do what you need. You can dump and load json while decoding and encoding it with <code>string-escape</code>:</p>
<pre><code>import json
config_dict = {
'text': 'first line\nsecond line',
}
config_str = json.dumps(config_dict).decode('string-escape')
print config_str
config_dict = json.loads(config_str.encode('string-escape'))
print config_dict
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>{"text": "first line
second line"}
{u'text': u'first line\nsecond line'}
</code></pre>
<p>So, you can use the decoded string to edit your JSON, newlines included, and when reading it, just encode with string-escape to get the dictionary back.</p>
| 1 | 2016-08-25T23:48:54Z | [
"python",
"json",
"format"
] |
Keep initialize value of a dict if no key is present | 38,993,266 | <p>I am trying to parse a pdf metadata like:</p>
<pre><code> fields = ["Author", "Year", "Journal", "Title", "Publisher",
"Page", "Address", "Annote", "Booktitle", "Chapter",
"Crossred", "Edition", "Editor", "HowPublished",
"Institution", "Month", "Note", "Number",
"Organization", "Pages", "School",
"Series", "Type", "Volume", "Doi", "File"]
op=pexif.get_json(filename)
new_op = {"Author":"Unknown"}
print(new_op)
new_op = {
field: str(value) for field in fields
for key, value in op[0].items() if field.lower() in key.lower()
}
print(new_op)
id_auth=new_op["Author"].split()[-1]
id_tit = (new_op["Title"].split()[:2])
</code></pre>
<p>In a few cases, the Author tag is not present, so I have initialized it with <code>Unknown</code>, in the hope that the value will persist if no Author tag is found.
But the <code>new_op = {}</code> assignment is overwriting the old data, so the two <code>print(new_op)</code> calls yield:</p>
<pre><code>{'Author': 'Unknown'}
{'File': '/home/rudra/Downloads', 'Title': 'Formation of bcc non-equilibrium La, Gd and Dy alloys and the magnetic structure of Mg-stabilized [beta] Gd and [beta] Dy', 'Type': 'pdf', 'Page': '140'}
</code></pre>
<p>and throwing a KeyError for id_auth line:</p>
<pre><code>id_auth=new_op["Author"].split()[-1]
KeyError: 'Author'
</code></pre>
<p>I am trying to keep Author = Unknown if no Author key is present in op.
How can I do that?</p>
<p>For reference, Below is a exiftool output:</p>
<pre><code>ExifTool Version Number : 10.20
File Name : Formation of bcc non-equilibrium La Gd and Dy alloys and the mag.pdf
Directory : /home/rudra/Downloads
File Size : 2.2 MB
File Modification Date/Time : 2016:07:20 15:30:48+02:00
File Access Date/Time : 2016:08:16 19:20:21+02:00
File Inode Change Date/Time : 2016:08:16 18:13:30+02:00
File Permissions : rw-rw-r--
File Type : PDF
File Type Extension : pdf
MIME Type : application/pdf
PDF Version : 1.7
Linearized : No
XMP Toolkit : Adobe XMP Core 5.2-c001 63.143651, 2012/04/05-09:01:49
Modify Date : 2015:09:18 07:48:48-07:00
Create Date : 2015:09:18 07:48:48-07:00
Metadata Date : 2015:09:18 07:48:48-07:00
Creator Tool : Appligent AppendPDF Pro 5.5
Document ID : uuid:f06a868b-a105-11b2-0a00-782dad000000
Instance ID : uuid:f06aec42-a105-11b2-0a00-400080adfd7f
Format : application/pdf
Title : Formation of bcc non-equilibrium La, Gd and Dy alloys and the magnetic structure of Mg-stabilized [beta] Gd and [beta] Dy
Producer : Prince 9.0 rev 5 (www.princexml.com)
Appligent : AppendPDF Pro 5.5 Linux Kernel 2.6 64bit Oct 2 2014 Library 10.1.0
Page Count : 140
Creator : Appligent AppendPDF Pro 5.5
</code></pre>
| 0 | 2016-08-17T09:50:35Z | 38,993,377 | <p>There are a bunch of ways of going this, but the simplest is to drop the initial version of the dict and instead check afterwards if Author is present:</p>
<pre><code>new_op = {
field: str(value) for field in fields
for key, value in op[0].items() if field.lower() in key.lower()
}
if 'Author' not in new_op:
new_op['Author'] = 'Unknown'
</code></pre>
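<p>A closely related one-liner is <code>dict.setdefault</code>, which inserts the default only when the key is missing (the sample values below are made up for illustration):</p>

```python
new_op = {"Title": "Some paper"}        # parsed result without an Author
new_op.setdefault("Author", "Unknown")  # key absent, so the default is inserted

has_author = {"Author": "Knuth"}
has_author.setdefault("Author", "Unknown")  # no-op: the existing value is kept
```

<p>Either way, subsequent <code>new_op["Author"]</code> lookups can no longer raise a <code>KeyError</code>.</p>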
| 1 | 2016-08-17T09:55:16Z | [
"python",
"python-3.x"
] |
Keep initialize value of a dict if no key is present | 38,993,266 | <p>I am trying to parse a pdf metadata like:</p>
<pre><code> fields = ["Author", "Year", "Journal", "Title", "Publisher",
"Page", "Address", "Annote", "Booktitle", "Chapter",
"Crossred", "Edition", "Editor", "HowPublished",
"Institution", "Month", "Note", "Number",
"Organization", "Pages", "School",
"Series", "Type", "Volume", "Doi", "File"]
op=pexif.get_json(filename)
new_op = {"Author":"Unknown"}
print(new_op)
new_op = {
field: str(value) for field in fields
for key, value in op[0].items() if field.lower() in key.lower()
}
print(new_op)
id_auth=new_op["Author"].split()[-1]
id_tit = (new_op["Title"].split()[:2])
</code></pre>
<p>In a few cases, the Author tag is not present, so I have initialized it with <code>Unknown</code>, in the hope that the value will persist if no Author tag is found.
But the <code>new_op = {}</code> assignment is overwriting the old data, so the two <code>print(new_op)</code> calls yield:</p>
<pre><code>{'Author': 'Unknown'}
{'File': '/home/rudra/Downloads', 'Title': 'Formation of bcc non-equilibrium La, Gd and Dy alloys and the magnetic structure of Mg-stabilized [beta] Gd and [beta] Dy', 'Type': 'pdf', 'Page': '140'}
</code></pre>
<p>and throwing a KeyError for id_auth line:</p>
<pre><code>id_auth=new_op["Author"].split()[-1]
KeyError: 'Author'
</code></pre>
<p>I am trying to keep Author = Unknown if no Author key is present in op.
How can I do that?</p>
<p>For reference, Below is a exiftool output:</p>
<pre><code>ExifTool Version Number : 10.20
File Name : Formation of bcc non-equilibrium La Gd and Dy alloys and the mag.pdf
Directory : /home/rudra/Downloads
File Size : 2.2 MB
File Modification Date/Time : 2016:07:20 15:30:48+02:00
File Access Date/Time : 2016:08:16 19:20:21+02:00
File Inode Change Date/Time : 2016:08:16 18:13:30+02:00
File Permissions : rw-rw-r--
File Type : PDF
File Type Extension : pdf
MIME Type : application/pdf
PDF Version : 1.7
Linearized : No
XMP Toolkit : Adobe XMP Core 5.2-c001 63.143651, 2012/04/05-09:01:49
Modify Date : 2015:09:18 07:48:48-07:00
Create Date : 2015:09:18 07:48:48-07:00
Metadata Date : 2015:09:18 07:48:48-07:00
Creator Tool : Appligent AppendPDF Pro 5.5
Document ID : uuid:f06a868b-a105-11b2-0a00-782dad000000
Instance ID : uuid:f06aec42-a105-11b2-0a00-400080adfd7f
Format : application/pdf
Title : Formation of bcc non-equilibrium La, Gd and Dy alloys and the magnetic structure of Mg-stabilized [beta] Gd and [beta] Dy
Producer : Prince 9.0 rev 5 (www.princexml.com)
Appligent : AppendPDF Pro 5.5 Linux Kernel 2.6 64bit Oct 2 2014 Library 10.1.0
Page Count : 140
Creator : Appligent AppendPDF Pro 5.5
</code></pre>
| 0 | 2016-08-17T09:50:35Z | 38,993,393 | <p>You are reassigning the <code>new_op</code> dictionary. Instead, after the following assignment</p>
<pre><code>new_op = {
field: str(value) for field in fields
for key, value in op[0].items() if field.lower() in key.lower()
}
</code></pre>
<p>Do this:</p>
<pre><code>if 'Author' not in new_op:  # dict.has_key() was removed in Python 3
    new_op['Author'] = 'Unknown'
</code></pre>
| 0 | 2016-08-17T09:56:12Z | [
"python",
"python-3.x"
] |
Keep initialize value of a dict if no key is present | 38,993,266 | <p>I am trying to parse a pdf metadata like:</p>
<pre><code> fields = ["Author", "Year", "Journal", "Title", "Publisher",
"Page", "Address", "Annote", "Booktitle", "Chapter",
"Crossred", "Edition", "Editor", "HowPublished",
"Institution", "Month", "Note", "Number",
"Organization", "Pages", "School",
"Series", "Type", "Volume", "Doi", "File"]
op=pexif.get_json(filename)
new_op = {"Author":"Unknown"}
print(new_op)
new_op = {
field: str(value) for field in fields
for key, value in op[0].items() if field.lower() in key.lower()
}
print(new_op)
id_auth=new_op["Author"].split()[-1]
id_tit = (new_op["Title"].split()[:2])
</code></pre>
<p>In a few cases, the Author tag is not present, so I have initialized it with <code>Unknown</code>, in the hope that the value will persist if no Author tag is found.
But the <code>new_op = {}</code> assignment is overwriting the old data, so the two <code>print(new_op)</code> calls yield:</p>
<pre><code>{'Author': 'Unknown'}
{'File': '/home/rudra/Downloads', 'Title': 'Formation of bcc non-equilibrium La, Gd and Dy alloys and the magnetic structure of Mg-stabilized [beta] Gd and [beta] Dy', 'Type': 'pdf', 'Page': '140'}
</code></pre>
<p>and throwing a KeyError for id_auth line:</p>
<pre><code>id_auth=new_op["Author"].split()[-1]
KeyError: 'Author'
</code></pre>
<p>I am trying to keep Author = Unknown if no Author key is present in op.
How can I do that?</p>
<p>For reference, Below is a exiftool output:</p>
<pre><code>ExifTool Version Number : 10.20
File Name : Formation of bcc non-equilibrium La Gd and Dy alloys and the mag.pdf
Directory : /home/rudra/Downloads
File Size : 2.2 MB
File Modification Date/Time : 2016:07:20 15:30:48+02:00
File Access Date/Time : 2016:08:16 19:20:21+02:00
File Inode Change Date/Time : 2016:08:16 18:13:30+02:00
File Permissions : rw-rw-r--
File Type : PDF
File Type Extension : pdf
MIME Type : application/pdf
PDF Version : 1.7
Linearized : No
XMP Toolkit : Adobe XMP Core 5.2-c001 63.143651, 2012/04/05-09:01:49
Modify Date : 2015:09:18 07:48:48-07:00
Create Date : 2015:09:18 07:48:48-07:00
Metadata Date : 2015:09:18 07:48:48-07:00
Creator Tool : Appligent AppendPDF Pro 5.5
Document ID : uuid:f06a868b-a105-11b2-0a00-782dad000000
Instance ID : uuid:f06aec42-a105-11b2-0a00-400080adfd7f
Format : application/pdf
Title : Formation of bcc non-equilibrium La, Gd and Dy alloys and the magnetic structure of Mg-stabilized [beta] Gd and [beta] Dy
Producer : Prince 9.0 rev 5 (www.princexml.com)
Appligent : AppendPDF Pro 5.5 Linux Kernel 2.6 64bit Oct 2 2014 Library 10.1.0
Page Count : 140
Creator : Appligent AppendPDF Pro 5.5
</code></pre>
| 0 | 2016-08-17T09:50:35Z | 38,993,394 | <pre><code>try:
id_auth=new_op["Author"].split()[-1]
except KeyError:
id_auth="Unknown"
</code></pre>
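<p>A third option that avoids the exception altogether is <code>dict.get</code> with a fallback value (the sample data here is made up for illustration):</p>

```python
new_op = {}  # pretend no Author field was parsed from the PDF
# get() returns "Unknown" when the key is absent, so split() always works
id_auth = new_op.get("Author", "Unknown").split()[-1]
print(id_auth)  # Unknown
```

<p>When the key is present, <code>get()</code> simply returns the stored value and the fallback is ignored.</p>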
| 0 | 2016-08-17T09:56:17Z | [
"python",
"python-3.x"
] |
Encrypted data in REST Services Response | 38,993,326 | <p>I use Django and Django REST Framework for REST services between the back-end and mobile client apps.</p>
<p>I would like to have some responses with encrypted data. I have to return some sensitive and private data to my client, and I would like to apply an additional security layer (in fact I already use SSL, but I would like to guard against some attacks (like man-in-the-middle) where some unwanted party can see data contained in my responses).</p>
<p>I would like to avoid this, so I thought of adding the encrypted data to my response.</p>
<p>Does that make sense? Is there something similar in Django - REST- Framework?</p>
| 0 | 2016-08-17T09:53:37Z | 39,090,347 | <p>A good encryption libary with various implementations is <a href="https://github.com/google/keyczar" rel="nofollow">Keyczar</a>.</p>
<p>What you would need to do is write a global interceptor on all incoming request to your backend application, and when responses are sent back they are encrypted using the Keyczar library.</p>
<p>On the consumer side (your mobile application) you would need to implement something similar that decrypts the responses from your backend.</p>
<p>BONUS: if you're not doing this already, you probably want to look at using <a href="http://stackoverflow.com/a/23202907/4963159">2-way SSL</a> to ensure that you authenticate the client that calls your backend.</p>
| -1 | 2016-08-22T23:49:29Z | [
"python",
"django",
"encryption",
"django-rest-framework",
"restful-architecture"
] |
zero index for all rows in python dataframe | 38,993,372 | <p>I have a problem with indexing a Python <code>dataframe</code>. I have a dataframe which I fill with a loop. I simplified it like this:</p>
<pre><code>d = pd.DataFrame(columns=['img', 'time', 'key'])
for i in range(5):
image = i
timepoint = i+1
key = i+2
temp = pd.DataFrame({'img':[image], 'timepoint':[timepoint], 'key': [key]})
d = pd.concat([d, temp])
</code></pre>
<p>The problem is that since it shows <code>0</code> as the index for all rows, I cannot access a specific row via <code>.loc[]</code>. Does anybody have any idea how I can fix the problem and get a normal index column?</p>
| 1 | 2016-08-17T09:55:10Z | 38,993,497 | <pre><code>d = d.reset_index(drop=True)
</code></pre>
<p>PS: It's better practice to make a list of rows and then turn it into a DataFrame, much less computationally expensive and it will make a good index instantly.</p>
<p>This list could be a list of lists combined with the columns in your DataFrame init or a list of dictionaries with column names as keys. In your case:</p>
<pre><code>list_of_dicts = []
for i in range(5):
new_row = {'img': i, 'time': i+1, 'key': i+2}
list_of_dicts.append(new_row)
d = pd.DataFrame(list_of_dicts)  # build the frame from the full list, not the last row
</code></pre>
| 0 | 2016-08-17T10:00:24Z | [
"python",
"python-3.x",
"pandas",
"indexing",
"dataframe"
] |
zero index for all rows in python dataframe | 38,993,372 | <p>I have a problem with indexing a Python <code>dataframe</code>. I have a dataframe which I fill with a loop. I simplified it like this:</p>
<pre><code>d = pd.DataFrame(columns=['img', 'time', 'key'])
for i in range(5):
image = i
timepoint = i+1
key = i+2
temp = pd.DataFrame({'img':[image], 'timepoint':[timepoint], 'key': [key]})
d = pd.concat([d, temp])
</code></pre>
<p>The problem is that since it shows <code>0</code> as the index for all rows, I cannot access a specific row via <code>.loc[]</code>. Does anybody have any idea how I can fix the problem and get a normal index column?</p>
| 1 | 2016-08-17T09:55:10Z | 38,993,587 | <p>You may want to use the <code>ignore_index</code> parameter in your concatenation :</p>
<pre><code>d = pd.concat([d, temp], ignore_index=True)
</code></pre>
<p>This gives me the following result :</p>
<pre><code> img key time timepoint
0 0.0 2.0 NaN 1.0
1 1.0 3.0 NaN 2.0
2 2.0 4.0 NaN 3.0
3 3.0 5.0 NaN 4.0
4 4.0 6.0 NaN 5.0
</code></pre>
| 2 | 2016-08-17T10:05:05Z | [
"python",
"python-3.x",
"pandas",
"indexing",
"dataframe"
] |
zero index for all rows in python dataframe | 38,993,372 | <p>I have a problem with indexing a Python <code>dataframe</code>. I have a dataframe which I fill with a loop. I simplified it like this:</p>
<pre><code>d = pd.DataFrame(columns=['img', 'time', 'key'])
for i in range(5):
image = i
timepoint = i+1
key = i+2
temp = pd.DataFrame({'img':[image], 'timepoint':[timepoint], 'key': [key]})
d = pd.concat([d, temp])
</code></pre>
<p>The problem is that since it shows <code>0</code> as the index for all rows, I cannot access a specific row via <code>.loc[]</code>. Does anybody have any idea how I can fix the problem and get a normal index column?</p>
| 1 | 2016-08-17T09:55:10Z | 38,993,861 | <p>I think better is first fill <code>lists</code> by values and then once use <code>DataFrame</code> constructor:</p>
<pre><code>image, timepoint, key = [],[],[]
for i in range(5):
image.append(i)
timepoint.append(i+1)
key.append(i+2)
d = pd.DataFrame({'img':image, 'time':timepoint, 'key': key})
print (d)
img key time
0 0 2 1
1 1 3 2
2 2 4 3
3 3 5 4
4 4 6 5
</code></pre>
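A closely related pattern (my variation, not part of the answer above) is to collect one dict per row and construct the frame in a single call at the end; pandas then assigns a fresh 0..n-1 index automatically:

```python
import pandas as pd

# One dict per row, then a single DataFrame construction at the end
rows = [{'img': i, 'time': i + 1, 'key': i + 2} for i in range(5)]
d = pd.DataFrame(rows)

print(d.index.tolist())  # [0, 1, 2, 3, 4]
print(d.loc[3, 'time'])  # 4
```

Either way, the key point is the same: avoid concatenating single-row frames in a loop, which is both slow and the source of the duplicated index.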
| 0 | 2016-08-17T10:18:01Z | [
"python",
"python-3.x",
"pandas",
"indexing",
"dataframe"
] |
Run Python function with input arguments from command line | 38,993,606 | <p>New to Python, used to use MATLAB.</p>
<p>My function convert.py is:</p>
<pre><code>def convert(a, b):
    factor = 2194.2
    return (a - b) * factor
</code></pre>
<p>How do I run it from the command line with input arguments 'a' and 'b'?
I tried:</p>
<pre><code>python convert.py 32 46
</code></pre>
<p>But got an error.</p>
<p>I did try to find the answer online, found related things but not the answer:</p>
<ol>
<li><a href="http://stackoverflow.com/questions/3987041/python-run-function-from-the-command-line">Python: Run function from the command line</a> (Stack Overflow)</li>
<li><a href="http://stackoverflow.com/questions/1009860/command-line-arguments-in-python">Command Line Arguments In Python</a> (Stack Overflow)</li>
<li><a href="http://www.cyberciti.biz/faq/python-command-line-arguments-argv-example/" rel="nofollow">http://www.cyberciti.biz/faq/python-command-line-arguments-argv-example/</a></li>
<li><a href="http://www.saltycrane.com/blog/2007/12/how-to-pass-command-line-arguments-to/" rel="nofollow">http://www.saltycrane.com/blog/2007/12/how-to-pass-command-line-arguments-to/</a></li>
</ol>
<p>Also, where can I find the answer myself so that I can save this forum for more non-trivial questions?</p>
| -2 | 2016-08-17T10:05:56Z | 38,993,785 | <p>There exists a Python module for this sort of thing called <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow">argparse</a>, which allows you to do really fancy things around command line flags. You don't really need that - you've just got two numbers on the command line. This can be handled really naively.</p>
<p>Python allows you direct access to the command line arguments via a list called <code>sys.argv</code> - you'll need to <code>import sys</code> first. The first element in this list is always the program name, but the second and third will be the numbers you pass in, i.e. <code>sys.argv[1]</code> and <code>sys.argv[2]</code>. For a more complete example:</p>
<pre><code>import sys

if len(sys.argv) < 3:
    print "Didn't supply two numbers"
    sys.exit(1)
a = int(sys.argv[1])
b = int(sys.argv[2])
</code></pre>
<p>Of course you'll need some error checking to make sure they are actually integers/floats. </p>
<p>A bit of extra reading around sys.argv if you're interested <a href="https://docs.python.org/2/library/sys.html#sys.argv" rel="nofollow">here</a></p>
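Putting the pieces together, the question's <code>convert.py</code> might look like the sketch below. The <code>main</code> wrapper, the usage message, and the use of <code>float</code> instead of <code>int</code> are my additions, not part of the original question:

```python
import sys

FACTOR = 2194.2  # conversion factor from the question


def convert(a, b):
    return (a - b) * FACTOR


def main(argv):
    # argv[0] is the script name, argv[1] and argv[2] are the two numbers
    if len(argv) != 3:
        raise SystemExit('usage: convert.py A B')
    a, b = float(argv[1]), float(argv[2])
    print(convert(a, b))

# At the bottom of the real script:
# if __name__ == '__main__':
#     main(sys.argv)
```

With that in place, <code>python convert.py 32 46</code> converts the arguments to numbers, calls <code>convert</code>, and prints the result.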
<p>To be complete, we can give an argparse example as well:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser(description='')
parser.add_argument('numbers', type=float, nargs=2,
help='Things to perform actions on')
args = parser.parse_args()
a = args.numbers[0]
b = args.numbers[1]
print a, b
</code></pre>
| -1 | 2016-08-17T10:14:16Z | [
"python",
"function",
"input",
"command-line",
"arguments"
] |