title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Setting general defaults for named arguments in Python | 39,139,935 | <p>I have the following problem:
I have to set some values for special entities (points, lines, faces, volumes, spheres,...), via an API into a database.</p>
<p>Some values are unique for every entity, others are always the same.
So my idea was to do something like this (SetValues is the API command I have to use to put something into the database):</p>
<pre><code>def CreateLineEntity(ID,Name,Solver,P1,P2,Move='no',Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'P2':P2})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
######################################################################
def CreatePointEntity(ID,Name,Solver,P1,Move='no',Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
</code></pre>
<p>So in every function the default for Move is 'no'.</p>
<p>If the default changes for some reason, I would have to go through the complete code and change the default everywhere.</p>
<p>Is there a more intelligent way to define such defaults?
My goal is to change only one value in the code so that all the defaults in the functions are changed, too.</p>
| 6 | 2016-08-25T08:10:48Z | 39,140,051 | <p>You can create a dictionary for storing default values.</p>
<pre><code>defaults = {"move": "no", "perimeter": 0.0, "gap":"yes"}
def CreatePointEntity(ID,Name,Solver,P1,Move=defaults["move"],Perimeter=defaults["perimeter"],Gap=defaults["gap"]):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
</code></pre>
<p>That way you only need to change the default value inside the dictionary.</p>
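One pitfall worth noting (my addition, not from the answer): Python evaluates default argument values once, at function definition time, so editing the dict after the `def` line has no effect on functions that were already defined. A `None` sentinel resolved at call time avoids this. A minimal sketch (the names are illustrative, not from the question's API):

```python
# Shared defaults looked up at call time instead of at definition time.
defaults = {"move": "no", "perimeter": 0.0, "gap": "yes"}

def create_point(move=None, perimeter=None, gap=None):
    # Resolve any missing argument against the shared defaults dict now,
    # so later edits to `defaults` are picked up by subsequent calls.
    if move is None:
        move = defaults["move"]
    if perimeter is None:
        perimeter = defaults["perimeter"]
    if gap is None:
        gap = defaults["gap"]
    return move, perimeter, gap

print(create_point())        # ('no', 0.0, 'yes')
defaults["move"] = "maybe"   # takes effect on the next call
print(create_point())        # ('maybe', 0.0, 'yes')
```

The sentinel trick only works when `None` is not itself a meaningful value for the parameter, which holds for the string and float defaults here.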
| 2 | 2016-08-25T08:17:34Z | [
"python"
] |
Setting general defaults for named arguments in Python | 39,139,935 | <p>I have the following problem:
I have to set some values for special entities (points, lines, faces, volumes, spheres,...), via an API into a database.</p>
<p>Some values are unique for every entity, others are always the same.
So my idea was to do something like this (SetValues is the API command I have to use to put something into the database):</p>
<pre><code>def CreateLineEntity(ID,Name,Solver,P1,P2,Move='no',Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'P2':P2})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
######################################################################
def CreatePointEntity(ID,Name,Solver,P1,Move='no',Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
</code></pre>
<p>So in every function the default for Move is 'no'.</p>
<p>If the default changes for some reason, I would have to go through the complete code and change the default everywhere.</p>
<p>Is there a more intelligent way to define such defaults?
My goal is to change only one value in the code so that all the defaults in the functions are changed, too.</p>
| 6 | 2016-08-25T08:10:48Z | 39,140,062 | <p>I usually keep a <code>constants.py</code> in all such projects.</p>
<p>Here:<br></p>
<p><strong>constants.py</strong></p>
<pre><code>MOVE = 'no'
</code></pre>
<p><strong>yourFile.py</strong></p>
<pre><code>from constants import MOVE # add as many constants as needed
def CreateLineEntity(ID,Name,Solver,P1,P2,Move=MOVE,Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'P2':P2})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
######################################################################
def CreatePointEntity(ID,Name,Solver,P1,Move=MOVE,Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
</code></pre>
<p>In this case, the hierarchy is as follows:</p>
<pre><code>headFolder
-constants.py
-yourFile.py
</code></pre>
| 3 | 2016-08-25T08:18:25Z | [
"python"
] |
Setting general defaults for named arguments in Python | 39,139,935 | <p>I have the following problem:
I have to set some values for special entities (points, lines, faces, volumes, spheres,...), via an API into a database.</p>
<p>Some values are unique for every entity, others are always the same.
So my idea was to do something like this (SetValues is the API command I have to use to put something into the database):</p>
<pre><code>def CreateLineEntity(ID,Name,Solver,P1,P2,Move='no',Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'P2':P2})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
######################################################################
def CreatePointEntity(ID,Name,Solver,P1,Move='no',Perimeter=0.0,Gap='yes'):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
</code></pre>
<p>So in every function the default for Move is 'no'.</p>
<p>If the default changes for some reason, I would have to go through the complete code and change the default everywhere.</p>
<p>Is there a more intelligent way to define such defaults?
My goal is to change only one value in the code so that all the defaults in the functions are changed, too.</p>
| 6 | 2016-08-25T08:10:48Z | 39,140,114 | <p>How about storing the default values in a separate file as constants?</p>
<p>e.g.:</p>
<p>constants.py:</p>
<pre><code>MOVE = "on"
PERIMETER = "xyz"
...
</code></pre>
<p>Your script:</p>
<pre><code>from constants import *
def CreateLineEntity(ID,Name,Solver,P1,P2,Move=MOVE,Perimeter=PERIMETER,Gap=GAP):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'P2':P2})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
######################################################################
def CreatePointEntity(ID,Name,Solver,P1,Move=MOVE,Perimeter=PERIMETER,Gap=GAP):
SetValues(ID, {'Name':Name})
SetValues(ID, {'P1':P1})
SetValues(ID, {'Solver':Solver})
SetValues(ID, {'Move':Move})
SetValues(ID, {'Perim':Perimeter})
SetValues(ID, {'Gap':Gap})
</code></pre>
| 2 | 2016-08-25T08:20:26Z | [
"python"
] |
How to distribute a Python application with all dependencies | 39,140,078 | <p>I want to distribute a Python application with all its dependencies. The target machine doesn't have an outside connection, so I can't <code>pip install</code> anything
and all packages must be included.</p>
<p>I'm using python 2.7 for my application and the target machine has a different python version. I would like to pack python 2.7 as part of my distribution. </p>
<p>Any ideas?</p>
| 3 | 2016-08-25T08:18:55Z | 39,140,123 | <p>If you want to distribute on Windows you can use py2exe: <a href="http://www.py2exe.org/" rel="nofollow">http://www.py2exe.org/</a>
or the cross-platform PyInstaller: <a href="http://www.pyinstaller.org/" rel="nofollow">http://www.pyinstaller.org/</a></p>
<p>That way you convert your whole application to a single executable file, which includes everything you need to run it.</p>
| 0 | 2016-08-25T08:20:47Z | [
"python",
"python-2.7",
"install"
] |
Python Lazy Loading | 39,140,348 | <p>The following code is going to lazily print the contents of the text file line by line, with each print stopping at '\n'.</p>
<pre><code> with open('eggs.txt', 'rb') as file:
for line in file:
print line
</code></pre>
<p>Is there any configuration to lazily print the contents of a text file, with each print stopping at ', ' ?</p>
<p>(or any other character/string )</p>
<p>I am asking this because I am trying to read a file which contains one single 2.9 GB long line separated by commas. </p>
<p><em>PS. My question is different than this one: <a href="http://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a>
I am asking how to do the stopping at characters other than newlines ('\n')</em></p>
| 3 | 2016-08-25T08:32:25Z | 39,140,601 | <p>I don't think there is a built-in way to achieve this. You will have to use <code>file.read(block_size)</code> to read the file block by block, split each block at commas, and rejoin strings that go across block boundaries manually.</p>
<p>Note that you still might run out of memory if you don't encounter a comma for a long time. (The same problem applies to reading a file line by line, when encountering a very long line.)</p>
<p>Here's an example implementation:</p>
<pre><code>def split_file(file, sep=",", block_size=16384):
last_fragment = ""
while True:
block = file.read(block_size)
if not block:
break
block_fragments = iter(block.split(sep))
last_fragment += next(block_fragments)
for fragment in block_fragments:
yield last_fragment
last_fragment = fragment
yield last_fragment
</code></pre>
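A quick way to sanity-check the generator above (my addition): feed it an in-memory file via `io.StringIO` with a tiny block size, so that fragments spanning block boundaries must be rejoined.

```python
import io

# split_file as defined in the answer, reproduced so this snippet is
# self-contained.
def split_file(file, sep=",", block_size=16384):
    last_fragment = ""
    while True:
        block = file.read(block_size)
        if not block:
            break
        block_fragments = iter(block.split(sep))
        last_fragment += next(block_fragments)
        for fragment in block_fragments:
            yield last_fragment
            last_fragment = fragment
    yield last_fragment

# A block size of 2 forces "ccc" to arrive in pieces across several
# reads, exercising the boundary-rejoining logic.
print(list(split_file(io.StringIO("a,bb,ccc"), block_size=2)))
# ['a', 'bb', 'ccc']
```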
| 2 | 2016-08-25T08:44:39Z | [
"python",
"lazy-loading"
] |
Python Lazy Loading | 39,140,348 | <p>The following code is going to lazily print the contents of the text file line by line, with each print stopping at '\n'.</p>
<pre><code> with open('eggs.txt', 'rb') as file:
for line in file:
print line
</code></pre>
<p>Is there any configuration to lazily print the contents of a text file, with each print stopping at ', ' ?</p>
<p>(or any other character/string )</p>
<p>I am asking this because I am trying to read a file which contains one single 2.9 GB long line separated by commas. </p>
<p><em>PS. My question is different than this one: <a href="http://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a>
I am asking how to do the stopping at characters other than newlines ('\n')</em></p>
| 3 | 2016-08-25T08:32:25Z | 39,140,705 | <p>The following answer can be considered lazy since it is reading the file a character at a time:</p>
<pre><code>def commaBreak(filename):
word = ""
with open(filename) as f:
while True:
char = f.read(1)
if not char:
print "End of file"
yield word
break
elif char == ',':
yield word
word = ""
else:
word += char
</code></pre>
<p>You may choose to do something like this with a larger number of characters, e.g. 1000, read at a time.</p>
| 1 | 2016-08-25T08:51:00Z | [
"python",
"lazy-loading"
] |
Python Lazy Loading | 39,140,348 | <p>The following code is going to lazily print the contents of the text file line by line, with each print stopping at '\n'.</p>
<pre><code> with open('eggs.txt', 'rb') as file:
for line in file:
print line
</code></pre>
<p>Is there any configuration to lazily print the contents of a text file, with each print stopping at ', ' ?</p>
<p>(or any other character/string )</p>
<p>I am asking this because I am trying to read a file which contains one single 2.9 GB long line separated by commas. </p>
<p><em>PS. My question is different than this one: <a href="http://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a>
I am asking how to do the stopping at characters other than newlines ('\n')</em></p>
| 3 | 2016-08-25T08:32:25Z | 39,140,753 | <pre><code>with open('eggs.txt', 'rb') as file:
for line in file:
str_line = str(line)
words = str_line.split(', ')
for word in words:
print(word)
</code></pre>
<p>I'm not completely sure I know what you are asking; is something like this what you mean?</p>
| -1 | 2016-08-25T08:53:09Z | [
"python",
"lazy-loading"
] |
Python Lazy Loading | 39,140,348 | <p>The following code is going to lazily print the contents of the text file line by line, with each print stopping at '\n'.</p>
<pre><code> with open('eggs.txt', 'rb') as file:
for line in file:
print line
</code></pre>
<p>Is there any configuration to lazily print the contents of a text file, with each print stopping at ', ' ?</p>
<p>(or any other character/string )</p>
<p>I am asking this because I am trying to read a file which contains one single 2.9 GB long line separated by commas. </p>
<p><em>PS. My question is different than this one: <a href="http://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a>
I am asking how to do the stopping at characters other than newlines ('\n')</em></p>
| 3 | 2016-08-25T08:32:25Z | 39,140,852 | <p>Using buffered reading from the file (Python 3):</p>
<pre><code>buffer_size = 2**12
delimiter = ','
with open(filename, 'r') as f:
# remember the characters after the last delimiter in the previously processed chunk
remaining = ""
while True:
# read the next chunk of characters from the file
chunk = f.read(buffer_size)
# end the loop if the end of the file has been reached
if not chunk:
break
# add the remaining characters from the previous chunk,
# split according to the delimiter, and keep the remaining
# characters after the last delimiter separately
*lines, remaining = (remaining + chunk).split(delimiter)
# print the parts up to each delimiter one by one
for line in lines:
print(line, end=delimiter)
# print the characters after the last delimiter in the file
if remaining:
print(remaining, end='')
</code></pre>
<p>Note that the way this is currently written, it will just print the original file's contents exactly as they were. This is easily changed though, e.g. by changing the <code>end=delimiter</code> parameter passed to the <code>print()</code> function in the loop.</p>
| 2 | 2016-08-25T08:58:23Z | [
"python",
"lazy-loading"
] |
Python Lazy Loading | 39,140,348 | <p>The following code is going to lazily print the contents of the text file line by line, with each print stopping at '\n'.</p>
<pre><code> with open('eggs.txt', 'rb') as file:
for line in file:
print line
</code></pre>
<p>Is there any configuration to lazily print the contents of a text file, with each print stopping at ', ' ?</p>
<p>(or any other character/string )</p>
<p>I am asking this because I am trying to read a file which contains one single 2.9 GB long line separated by commas. </p>
<p><em>PS. My question is different than this one: <a href="http://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a>
I am asking how to do the stopping at characters other than newlines ('\n')</em></p>
| 3 | 2016-08-25T08:32:25Z | 39,141,071 | <p>It yields one character from the file at a time, which means there is no memory overload.</p>
<pre><code>def lazy_read():
try:
with open('eggs.txt', 'rb') as file:
item = file.read(1)
while item:
if ',' == item:
raise StopIteration
yield item
item = file.read(1)
except StopIteration:
pass
print ''.join(lazy_read())
</code></pre>
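For reference (my addition, not part of the answer): the snippet above is Python 2, and the raise/except dance around `StopIteration` is easy to get wrong in Python 3, where PEP 479 turns a `StopIteration` escaping a generator into a `RuntimeError`. A Python 3 version of the same idea can simply `return`; this hypothetical rework takes a file-like object so it is easy to test:

```python
import io

# Yield one character at a time, stopping at the first separator.
def lazy_read(f, stop=','):
    while True:
        ch = f.read(1)
        if not ch or ch == stop:
            return  # a plain return ends a generator cleanly
        yield ch

print(''.join(lazy_read(io.StringIO("abc,def"))))  # abc
```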
| 0 | 2016-08-25T09:08:07Z | [
"python",
"lazy-loading"
] |
How to get the rank of a column in numpy 2d array? | 39,140,490 | <p>suppose I have an array:</p>
<pre><code>a = np.array([[1,2,3,4],
[4,2,5,6],
[6,5,0,3]])
</code></pre>
<p>I want to get the rank of column 0 in each row (i.e. <code>np.array([0, 1, 3])</code>). Is there any short way to do this?</p>
<p>For a 1d array I can use <code>np.sum(a < a[0])</code> to do this, but what about a 2d array? It seems < cannot broadcast.</p>
| 1 | 2016-08-25T08:39:36Z | 39,140,537 | <p><strong>Approach #1</strong></p>
<p>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow"><code>np.argsort</code></a> along the rows and look for the index <code>0</code> corresponding to the first column to give us a mask of the same shape as the input array. Finally, get the column indices of the matches (True) in the mask for the desired rank output. So, the implementation would be -</p>
<pre><code>np.where(a.argsort(1)==0)[1]
</code></pre>
<p><strong>Approach #2</strong></p>
<p>Another way to get the ranks of all columns in one go, would be a slight modification of the earlier method. The implementation would look like this -</p>
<pre><code>(a.argsort(1)).argsort(1)
</code></pre>
<p>So, to get the rank of first column, index into the first column of it, like so -</p>
<pre><code>(a.argsort(1)).argsort(1)[:,0]
</code></pre>
<hr>
<p><strong>Sample run</strong></p>
<pre><code>In [27]: a
Out[27]:
array([[1, 2, 3, 4],
[4, 2, 5, 6],
[6, 5, 0, 3]])
In [28]: np.where(a.argsort(1)==0)[1]
Out[28]: array([0, 1, 3])
In [29]: (a.argsort(1)).argsort(1) # Ranks for all cols
Out[29]:
array([[0, 1, 2, 3],
[1, 0, 2, 3],
[3, 2, 0, 1]])
In [30]: (a.argsort(1)).argsort(1)[:,0] # Rank for first col
Out[30]: array([0, 1, 3])
In [31]: (a.argsort(1)).argsort(1)[:,1] # Rank for second col
Out[31]: array([1, 0, 2])
</code></pre>
| 1 | 2016-08-25T08:41:43Z | [
"python",
"arrays",
"numpy",
"ranking",
"numpy-broadcasting"
] |
Python: why does time increase 13 times when using list append instead of string concatenation? | 39,140,524 | <p>I have a script. Please find the complete code at Code Review <a href="http://codereview.stackexchange.com/questions/139501/python-script-to-convert-asn-to-hpp">here</a>.</p>
<p>I have removed string concatenation & replaced it with list append as below </p>
<p>Sample old code:</p>
<pre><code>def comment_line(self, line):
self.line = line
line_to_write = ""
line_to_write += "//"+self.line+"\n" #This is the line that I have changed
self.outputfile.write(line_to_write)
</code></pre>
<p>Sample new code:</p>
<pre><code>def comment_line(self, line):
self.line = line
self.output_list.append("//"+self.line+"\n") #This is the line that I have changed
</code></pre>
<p>complete code is at code review <a href="http://codereview.stackexchange.com/questions/139501/python-script-to-convert-asn-to-hpp">here</a></p>
<p>Then I ran the code using the script below to find the execution time:</p>
<pre><code>import time
start = time.time()
import os
t =os.system('python_file.py input_file.asn')
print('It took', time.time()-start, 'seconds.')
</code></pre>
<p>The time taken by the old and new scripts is as below:</p>
<pre><code>Old script took 0.019999980926513672 seconds.
>>> ================================ RESTART ================================
>>>
New script took 0.2999999523162842 seconds.
</code></pre>
<p>Complete new code:</p>
<pre><code>import re
from collections import deque
import sys
import inflection
class Convert(object):
'''To do: add data'''
def __init__(self):
'''To do: add data'''
self.plist = []
self.slist = []
self.tlist = []
self.llist = []
self.lines = []
self.line = None
self.open_braces = []
self.close_braces = []
self.outputfile = None
self.i = None
self.open_brace_list = []
self.close_brace_list = []
self.file_name = None
self.split_character = None
self.length = None
self.enumvariable_flag = None
self.inner_variable_prefix=""
self.output_list=[]
def start_tag(self, split_character, line):
'''To do: add data'''
self.split_character = split_character
self.line = line
self.output_list.append("enum E")
self.inner_variable_prefix = inflection.camelize(inflection.underscore((self.line.split(self.split_character)[0]).replace('-', '_')).lower()).strip()
self.output_list.append(self.inner_variable_prefix)
self.output_list.append("//"+self.line)
self.output_list.append("\n")
self.output_list.append("{\n")
self.enumvariable_flag = True
def end_tag(self,line):
self.line=line
self.output_list.append("};\n")
self.enumvariable_flag = False
def comment_line(self, line):
self.line = line
self.output_list.append("//"+self.line+"\n")
def handle_comment(self, line):
'''To do: add data'''
self.line = line
if (line.strip()).startswith("--")or(re.search(r'(.*)\{(.*)\}(.*)', line)):
self.output_list.append(" ")
self.output_list.append("//"+self.line+"\n")
def handle_inner_element(self, line, index):
'''To do: add data'''
self.line = line
self.index = index
if self.output_list[-1] != " ":
self.output_list.append(" ")
try:
try:
value = (re.findall(r'\d+', self.line.strip().split(' ')[1])[0])
self.output_list.append("e")
self.output_list.append(self.inner_variable_prefix)
self.output_list.append(inflection.camelize((self.line.strip().split(' ')[0]).replace('-', '_')))
self.output_list.append(" = ")
self.output_list.append(value)
if self.index not in self.llist:
self.output_list.append(",")
self.output_list.append("\n")
except Exception as e:
if (self.line.strip().split(' ')[0]).lower() == \
self.line.strip().split(' ')[1].split('-')[0].lower():
self.output_list.append("e")
self.output_list.append(self.inner_variable_prefix)
self.output_list.append(inflection.camelize((
self.line.strip().split(' ')[0].replace('-', '_')).lower()))
if self.index not in self.llist:
self.output_list.append(",")
else:
self.output_list.append("//")
self.output_list.append(self.line)
self.output_list.append("\n")
except Exception as exception:
print(exception)
def generate_lists(self, length, lines):
'''To do: add data'''
self.length = length
self.lines = lines
flag_llist=False
lastl=None
reg1 = r'::=(.*)\n\{'
reg2 = r'{'
reg3 = r'\}'
reg4 = r'(.*)\{(.*)\}(.*)'
for index, line in enumerate(self.lines):
if index < (self.length-1):
val = str(line) + "\n" + str(self.lines[index+1])
else:
val = str(line)
if re.search(reg1, val)and(not re.search(reg4, val)):
self.plist.append(index)
flag_llist=True
else:
val = str(line)
if re.search(reg2, val)and(not re.search(reg4, val)):
if index in self.plist:
pass
else:
self.slist.append(index)
flag_llist=True
if re.search(reg3, val)and(not re.search(reg4, val)):
self.tlist.append(index)
self.llist.append(lastl)
flag_llist=False
elif flag_llist:
try:
value = (re.findall(r'\d+', line.strip().split(' ')[1])[0])
lastl=index
except Exception as e:
pass
try:
if (line.strip().split(' ')[0]).lower() == \
line.strip().split(' ')[1].split('-')[0].lower():
lastl=index
except Exception as e:
pass
return self.plist, self.slist, self.tlist
def add_sub_element(self, open_brace_list, close_brace_list):
'''To do: add data'''
self.open_brace_list = open_brace_list
self.close_brace_list = close_brace_list
self.enumvariable_flag = False
for i in range(1, len(self.open_brace_list)):
for index, line in enumerate(self.lines):
if index == self.open_brace_list[i]-1:
self.start_tag(' ', line)
if (index <= self.close_brace_list[i-1])and\
(index > self.open_brace_list[i])and self.enumvariable_flag:
self.handle_comment(line)
if (self.line.strip()).startswith("}"):
self.end_tag(line)
if self.enumvariable_flag and(not (self.line.strip()).startswith("--"))and\
(not (self.line.strip()).startswith("{")and\
(index <= self.close_brace_list[i-1])and(index > open_brace_list[i])):
self.handle_inner_element(line, index)
def braces_line_no(self, i):
'''To do: add data'''
self.i = i
remaining_slist = [a for a in self.slist if a > self.plist[self.i]]
remaining_tlist = [a for a in self.tlist if a > self.plist[self.i]]
try:
self.open_braces = [b for b in remaining_slist if b < self.plist[self.i+1]]
except Exception as e:
self.open_braces = remaining_slist
try:
self.close_braces = [b for b in remaining_tlist if b < self.plist[self.i+1]]
except Exception as e:
self.close_braces = remaining_tlist
return self.open_braces, self.close_braces
def generate_output(self, file_name):
'''To do: add data'''
self.file_name = file_name
output_file_name = self.file_name.split('.')[0]+".hpp"
self.outputfile = open(output_file_name, 'w')
with open(self.file_name) as f_in:
self.lines = (line.strip() for line in f_in)
self.lines = list(line for line in self.lines if line)
length = len(self.lines)
self.plist, self.slist, self.tlist = self.generate_lists(length, self.lines)
for i in range(len(self.plist)):
self.open_braces, self.close_braces = self.braces_line_no(i)
open_braces_qu = deque(self.open_braces)
for index, line in enumerate(self.lines):
if (not self.enumvariable_flag)and(self.tlist[-1] != self.close_braces[-1]):
if(index > self.close_braces[-1]) and (index < self.slist[self.slist.index(self.open_braces[-1])+1]-1):
self.comment_line(line)
elif self.enumvariable_flag==None and (index < self.plist[0]):
self.comment_line(line)
elif self.close_braces[-1] == self.tlist[-1] and index > self.tlist[-1]:
self.comment_line(line)
if index == self.plist[i]:
self.start_tag('::=', line)
elif len(self.open_braces) == 1 and len(self.close_braces) == 1 and\
self.enumvariable_flag:
self.handle_comment(line)
if (self.line.strip()).startswith("}"):
self.end_tag(line)
if self.enumvariable_flag and(not (line.strip()).startswith("--"))and\
(not (line.strip()).startswith("{")):
self.handle_inner_element(line, index)
elif self.enumvariable_flag and(len(self.open_braces) > 1)and(len(open_braces_qu) > 1):
if self.output_list[-1] != " ":
self.output_list.append(" ")
try:
if index == open_braces_qu[1]-1:
try:
value = (re.findall(r'\d+', line.strip().split(' ')[1])[0])
self.output_list.append("e")
self.output_list.append(self.inner_variable_prefix)
self.output_list.append(inflection.camelize(inflection.underscore(line.strip().split(' ')[0]\
.replace('-', '_')).lower()))
self.output_list.append(" = ")
self.output_list.append(value)
if len(open_braces_qu) > 2:
self.output_list.append(", ")
self.output_list.append("\n")
except Exception as e:
if (line.strip().split(' ')[0]).lower() == line.strip()\
.split(' ')[1].split('-')[0].lower():
self.output_list.append("e")
self.output_list.append(self.inner_variable_prefix)
self.output_list.append(inflection.camelize(inflection.underscore(line.strip().split(' ')[0].replace('-', '_')).lower()))
if len(open_braces_qu) > 2:
self.output_list.append(", ")
else:
self.output_list.append("//")
self.output_list.append(line)
self.output_list.append("\n")
open_braces_qu.popleft()
if len(open_braces_qu) == 1:
self.end_tag(line)
open_braces_qu.popleft()
self.add_sub_element(self.open_braces, self.close_braces)
except Exception as exception:
print(exception)
for data in self.output_list:
self.outputfile.write(data)
self.outputfile.close()
if __name__ == '__main__':
INPUT_FILE_NAME = sys.argv[1]
CON_OBJ = Convert()
CON_OBJ.generate_output(INPUT_FILE_NAME)
</code></pre>
| 0 | 2016-08-25T08:40:59Z | 39,140,734 | <p>The second example contains a string concatenation (<code>"//"+self.line+"\n"</code>) <em>and</em> appending to a list.</p>
<p>That doesn't explain why it's suddenly so much slower; my guess is that the list contains many elements. Appending to long lists can be expensive since Python has to copy the list eventually.</p>
<p>In the original code, you just created a short string and flushed that to a buffer (and eventually to the file system). This operation can be relatively cheap. If you append to a list with millions of elements, Python will eventually run out of space in the underlying data structure and have to copy it into a bigger one. And after adding N more elements, it has to do the same.</p>
<p>In addition to that, your code to measure the timing is not reliable. Background jobs can have a huge influence when you do it this way. Use the <code>timeit</code> module as suggested by cdrake or maybe the shell command <code>time</code> (<code>timeit</code> will be more accurate).</p>
<p><strong>[EDIT]</strong> There are three strategies: String concatenation (SC), list append (LA) and streaming to a file (STF).</p>
<p>SC is efficient when you concatenate short strings and don't keep them around for long. SC becomes ever more inefficient as the string becomes longer because for every append, Python has to copy the whole string.</p>
<p>LA is efficient when you need to keep the data around. Lists allocate N slots. As long as you don't need them all, adding to the list is cheap: You just use one of the free slots. Lists become expensive when you run out of slots because then Python has to copy the list. So they are a bit more efficient than SC but eventually, they suffer from the same underlying problem: Append too much and the copy times will kill you.</p>
<p>STF means you open a file and write the data into the file as you produce it. You only keep small amounts of the data in memory. This is efficient for large amounts of output because you avoid the copying of existing data. The drawback is that this is not efficient for small amounts of data because of the overhead.</p>
<p>Conclusion: Know your data structures. There is no structure which works in every case. All of them have advantages and disadvantages.</p>
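To make the comparison concrete (my addition; exact numbers vary by machine, and CPython even special-cases some in-place string concatenation, so SC is not always slow in practice), the SC and LA strategies can be timed side by side with `timeit`:

```python
import timeit

N = 10_000

def concat():
    # SC: grow a single string by repeated concatenation.
    s = ""
    for _ in range(N):
        s += "x"
    return s

def list_join():
    # LA: collect parts in a list, join once at the end.
    parts = []
    for _ in range(N):
        parts.append("x")
    return "".join(parts)

print("concat   :", timeit.timeit(concat, number=50))
print("list_join:", timeit.timeit(list_join, number=50))
```

Both functions build the same string, so any timing difference is purely down to the building strategy.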
| 1 | 2016-08-25T08:52:24Z | [
"python",
"python-3.x"
] |
Transparent interaction with console (sh,bash,cmd,powershell) | 39,140,632 | <p>Please help me write a simple console application in Python. It should redirect all input to the system shell (bash, Windows cmd, or PowerShell) and print all of the shell's output to the screen.
Simply put, I want to run a terminal from a Python application.</p>
<p>The following code works with some strange behavior: the first 3 times after any key is pressed, it outputs (executes?) some previous commands (maybe from a cache):</p>
<pre><code>#!/bin/python3
import subprocess
import sys
proc = subprocess.Popen(['bash'])
while True:
buff = sys.stdin.readline()
stdoutdata, stderrdata = proc.communicate(buff)
if( stdoutdata ):
print( stdoutdata )
else:
print('n')
break
</code></pre>
| 0 | 2016-08-25T08:47:03Z | 39,145,596 | <p>I think you need</p>
<p><code>proc = subprocess.Popen(['bash'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)</code></p>
<p>From the <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen" rel="nofollow">docs</a>:</p>
<blockquote>
<p>PIPE indicates that a new pipe to the child should be created.
DEVNULL indicates that the special file os.devnull will be used. With
the default settings of None, no redirection will occur; the child's
file handles will be inherited from the parent.</p>
</blockquote>
<p>I don't think you want your bash to be connected to your parent process's stdin directly. That would explain the weirdness.</p>
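A minimal sketch of the suggestion (my example, using <code>echo</code> rather than an interactive bash): with stdout and stderr redirected to pipes, <code>communicate()</code> returns the captured output. Note that <code>communicate()</code> waits for the child to exit, so it can be called only once per process; for an interactive shell you would instead write to <code>proc.stdin</code> and read <code>proc.stdout</code> incrementally.

```python
import subprocess

# Capture a child's output via pipes, as suggested above.
proc = subprocess.Popen(
    ["echo", "hello"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()  # waits for the process to finish
print(out.strip())  # b'hello'
print(err)          # b''
```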
| 0 | 2016-08-25T12:40:12Z | [
"python",
"python-3.x",
"terminal",
"subprocess"
] |
better way of doing dict comprehension | 39,140,658 | <pre><code>UG=[
{"group_member":"myGroup1","user_name":"tom"},
{"group_member":"myGroup2","user_name":"wilson"},
{"group_member":"myGroup1","user_name":"kevin"},
{"group_member":"myGroup2","user_name":"donna"},
{"group_member":"myGroup3","user_name":"john"},
{"group_member":"myGroup1","user_name":"steve"},
{"group_member":"myGroup2","user_name":"jose"},
{"group_member":"myGroup3","user_name":"jags"}]
PG=[
{"group_member":"myGroup1","device_name":"device1"},
{"group_member":"myGroup1","device_name":"device2"},
{"group_member":"myGroup2","device_name":"device1"},
{"group_member":"myGroup1","device_name":"device2"},
{"group_member":"myGroup1","device_name":"device3"},
{"group_member":"myGroup3","device_name":"device1"}]
DG=[
{"device_name":"device1","server":"server1"},
{"device_name":"device2","server":"server2"},
{"device_name":"device3","server":"server3"},
{"device_name":"device4","server":"server4"},
{"device_name":"device5","server":"server5"},
{"device_name":"device6","server":"server6"}
]
</code></pre>
<p>I need to compare the lists and prepare a list of dictionaries with the following condition:</p>
<pre><code>UG[i]['group_member'] == PG[j]['group_member'] && PG[j]['device_name'] == UG[k]['device_name']
</code></pre>
<p>here is my implementation</p>
<pre><code># output array
output = []
for i in DG:
    for j in PG:
        if i["device_name"] == j["device_name"]:
            for k in UG:
                if k["group_member"] == j["group_member"]:
                    output.append({"user_name":k["user_name"],"group_member":k["group_member"],"device_name":j["device_name"],"server":i["server"]})

for m in output:
    print m
</code></pre>
<p>desired output:</p>
<pre><code>[
{'server': 'server1', 'user_name': 'tom', 'group_member': 'myGroup1', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'wilson', 'group_member': 'myGroup2', 'device_name': 'device1'}
{'server': 'server1', 'user_name': 'donna', 'group_member': 'myGroup2', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'jose', 'group_member': 'myGroup2', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'john', 'group_member': 'myGroup3', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'jags', 'group_member': 'myGroup3', 'device_name': 'device1'},
{'server': 'server2', 'user_name': 'tom', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'tom', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server3', 'user_name':'tom', 'group_member': 'myGroup1', 'device_name': 'device3'},
{'server': 'server3', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device3'},
{'server': 'server3', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device3'}
]
</code></pre>
<p>How can I improve my implementation?</p>
| 0 | 2016-08-25T08:48:20Z | 39,141,029 | <p>Yes, it can be done using a list comprehension. But first let's create a functional dict update function, since <code>dict</code>'s update method doesn't return a new dict (but, rather, updates the current one):</p>
<pre><code>def updt(d1, d2):
    d3 = d1.copy()
    d3.update(d2)
    return d3
</code></pre>
<p>Now let's get wild with the list comprehension:</p>
<pre><code>dictlist = [updt(updt(ug, pg), dg)
            for ug in UG for pg in PG for dg in DG
            if ug['group_member'] == pg['group_member']
            and pg['device_name'] == dg['device_name']]
</code></pre>
<p>Here is the result:</p>
<pre><code>for d in dictlist:
    print(d)
</code></pre>
<p>And you have:</p>
<pre><code>
{'user_name': 'tom', 'server': 'server1', 'group_member': 'myGroup1', 'device_name': 'device1'}
{'user_name': 'tom', 'server': 'server2', 'group_member': 'myGroup1', 'device_name': 'device2'}
{'user_name': 'tom', 'server': 'server2', 'group_member': 'myGroup1', 'device_name': 'device2'}
{'user_name': 'tom', 'server': 'server3', 'group_member': 'myGroup1', 'device_name': 'device3'}
{'user_name': 'wilson', 'server': 'server1', 'group_member': 'myGroup2', 'device_name': 'device1'}
{'user_name': 'kevin', 'server': 'server1', 'group_member': 'myGroup1', 'device_name': 'device1'}
{'user_name': 'kevin', 'server': 'server2', 'group_member': 'myGroup1', 'device_name': 'device2'}
{'user_name': 'kevin', 'server': 'server2', 'group_member': 'myGroup1', 'device_name': 'device2'}
{'user_name': 'kevin', 'server': 'server3', 'group_member': 'myGroup1', 'device_name': 'device3'}
{'user_name': 'donna', 'server': 'server1', 'group_member': 'myGroup2', 'device_name': 'device1'}
{'user_name': 'john', 'server': 'server1', 'group_member': 'myGroup3', 'device_name': 'device1'}
{'user_name': 'steve', 'server': 'server1', 'group_member': 'myGroup1', 'device_name': 'device1'}
{'user_name': 'steve', 'server': 'server2', 'group_member': 'myGroup1', 'device_name': 'device2'}
{'user_name': 'steve', 'server': 'server2', 'group_member': 'myGroup1', 'device_name': 'device2'}
{'user_name': 'steve', 'server': 'server3', 'group_member': 'myGroup1', 'device_name': 'device3'}
{'user_name': 'jose', 'server': 'server1', 'group_member': 'myGroup2', 'device_name': 'device1'}
{'user_name': 'jags', 'server': 'server1', 'group_member': 'myGroup3', 'device_name': 'device1'}
</code></pre>
<p>which seems to be what you want, although in a different order.</p>
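<p>On Python 3.5+ (an assumption beyond this answer's original scope), dict unpacking does the same merge without the <code>updt</code> helper; the one-element lists below are stand-ins for the question's data:</p>

```python
# Tiny stand-in data (hypothetical, just to keep the example short).
UG = [{"group_member": "g1", "user_name": "tom"}]
PG = [{"group_member": "g1", "device_name": "d1"}]
DG = [{"device_name": "d1", "server": "s1"}]

# {**ug, **pg, **dg} merges left to right, like updt(updt(ug, pg), dg).
dictlist = [{**ug, **pg, **dg}
            for ug in UG for pg in PG for dg in DG
            if ug['group_member'] == pg['group_member']
            and pg['device_name'] == dg['device_name']]

print(dictlist)
```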
| 2 | 2016-08-25T09:06:16Z | [
"python",
"dictionary",
"dictionary-comprehension"
] |
better way of doing dict comprehension | 39,140,658 | <pre><code>UG=[
{"group_member":"myGroup1","user_name":"tom"},
{"group_member":"myGroup2","user_name":"wilson"},
{"group_member":"myGroup1","user_name":"kevin"},
{"group_member":"myGroup2","user_name":"donna"},
{"group_member":"myGroup3","user_name":"john"},
{"group_member":"myGroup1","user_name":"steve"},
{"group_member":"myGroup2","user_name":"jose"},
{"group_member":"myGroup3","user_name":"jags"}]
PG=[
{"group_member":"myGroup1","device_name":"device1"},
{"group_member":"myGroup1","device_name":"device2"},
{"group_member":"myGroup2","device_name":"device1"},
{"group_member":"myGroup1","device_name":"device2"},
{"group_member":"myGroup1","device_name":"device3"},
{"group_member":"myGroup3","device_name":"device1"}]
DG=[
{"device_name":"device1","server":"server1"},
{"device_name":"device2","server":"server2"},
{"device_name":"device3","server":"server3"},
{"device_name":"device4","server":"server4"},
{"device_name":"device5","server":"server5"},
{"device_name":"device6","server":"server6"}
]
</code></pre>
<p>I need to compare the lists and prepare a list of dictionaries that satisfies the following condition </p>
<pre><code>UG[i]['group_member'] == PG[j]['group_member'] and PG[j]['device_name'] == DG[k]['device_name']
</code></pre>
<p>here is my implementation</p>
<pre><code># output array
output = []
for i in DG:
    for j in PG:
        if i["device_name"] == j["device_name"]:
            for k in UG:
                if k["group_member"] == j["group_member"]:
                    output.append({"user_name":k["user_name"],"group_member":k["group_member"],"device_name":j["device_name"],"server":i["server"]})

for m in output:
    print m
</code></pre>
<p>desired output:</p>
<pre><code>[
{'server': 'server1', 'user_name': 'tom', 'group_member': 'myGroup1', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'wilson', 'group_member': 'myGroup2', 'device_name': 'device1'}
{'server': 'server1', 'user_name': 'donna', 'group_member': 'myGroup2', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'jose', 'group_member': 'myGroup2', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'john', 'group_member': 'myGroup3', 'device_name': 'device1'},
{'server': 'server1', 'user_name': 'jags', 'group_member': 'myGroup3', 'device_name': 'device1'},
{'server': 'server2', 'user_name': 'tom', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'tom', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server2', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device2'},
{'server': 'server3', 'user_name':'tom', 'group_member': 'myGroup1', 'device_name': 'device3'},
{'server': 'server3', 'user_name': 'kevin', 'group_member': 'myGroup1', 'device_name': 'device3'},
{'server': 'server3', 'user_name': 'steve', 'group_member': 'myGroup1', 'device_name': 'device3'}
]
</code></pre>
<p>How can I improve my implementation?</p>
| 0 | 2016-08-25T08:48:20Z | 39,141,345 | <p>If you have thousands of records, here is a generator-based version:</p>
<pre><code>from itertools import product

def produce():
    for dg, pg, ug in product(DG, PG, UG):
        if pg['device_name'] == dg['device_name'] and ug['group_member'] == pg['group_member']:
            item = dg.copy()
            item.update(pg)
            item.update(ug)
            yield item
</code></pre>
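<p>A self-contained usage sketch (the one-element lists are hypothetical stand-ins for the question's data):</p>

```python
from itertools import product

# Hypothetical mini-data, just to show how the generator is consumed.
UG = [{"group_member": "myGroup1", "user_name": "tom"}]
PG = [{"group_member": "myGroup1", "device_name": "device1"}]
DG = [{"device_name": "device1", "server": "server1"}]

def produce():
    for dg, pg, ug in product(DG, PG, UG):
        if pg['device_name'] == dg['device_name'] and ug['group_member'] == pg['group_member']:
            item = dg.copy()
            item.update(pg)
            item.update(ug)
            yield item

# Items are produced lazily, one at a time.
for item in produce():
    print(item)
```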
| 1 | 2016-08-25T09:21:41Z | [
"python",
"dictionary",
"dictionary-comprehension"
] |
AppEngine achieving strong consistency | 39,140,673 | <p>I am trying to achieve strong consistency. Let's call my model <code>PVPPlayer</code>:</p>
<pre><code>class PVPPlayer(ndb.Model):
    points = ndb.IntegerProperty()
</code></pre>
<p>Every key for the model is created like this:</p>
<pre><code>pvp_player = PVPPlayer(key=ndb.Key(Profile, "test_id", PVPPlayer, "test_id"))
</code></pre>
<p>where <code>Profile</code> is parent model:</p>
<pre><code>class Profile(ndb.Model):
    @classmethod
    def build_key(cls, some_id):
        return ndb.Key(cls, some_id)
</code></pre>
<p>I have 2 REST API URLs:</p>
<pre><code>1) update_points
2) get_points
</code></pre>
<p>In 1) I do :</p>
<pre><code># I use a transaction because I have to update all the models in a single batch
@ndb.transactional(xg=True, retries=3)
def some_func(points):
    pvp_player = ndb.Key(Profile, "test_id", PVPPlayer, "test_id").get()
    pvp_player.points += points
    pvp_player.put()
    # update other models here
</code></pre>
<p>In 2) I do:</p>
<pre><code>pvp_player = ndb.Key(Profile, "test_id", PVPPlayer, "test_id").get()
return pvp_player.points
</code></pre>
<p>My flow looks like this:</p>
<pre><code>1) update_points()
2) get_points()
3) update_points()
4) get_points()
...
</code></pre>
<p><strong>Problem</strong>: </p>
<p>Using <code>get()</code> guarantees strong consistency so what I don't understand is why sometimes as the result of <code>get_points()</code> I get stale data like points were not updated at all.</p>
<p><strong>Example</strong>:</p>
<pre><code>POST get_points -> 0
POST sleep 1-3 sec
POST update_points -> 15
POST sleep 1-3 sec
POST get_points -> 15
POST sleep 1-3 sec
POST update_points -> 20
POST sleep 1-3 sec
POST get_points -> 15 !!!
</code></pre>
| 0 | 2016-08-25T08:48:56Z | 39,149,160 | <p>Is there a case you exceed the write limit per entity group, that is one update per second? I think this could break the strong consistency of the entity group as mentioned in the documentation. </p>
| 1 | 2016-08-25T15:25:28Z | [
"python",
"google-app-engine",
"eventual-consistency"
] |
AppEngine achieving strong consistency | 39,140,673 | <p>I am trying to achieve strong consistency. Let's call my model <code>PVPPlayer</code>:</p>
<pre><code>class PVPPlayer(ndb.Model):
    points = ndb.IntegerProperty()
</code></pre>
<p>Every key for the model is created like this:</p>
<pre><code>pvp_player = PVPPlayer(key=ndb.Key(Profile, "test_id", PVPPlayer, "test_id"))
</code></pre>
<p>where <code>Profile</code> is parent model:</p>
<pre><code>class Profile(ndb.Model):
    @classmethod
    def build_key(cls, some_id):
        return ndb.Key(cls, some_id)
</code></pre>
<p>I have 2 REST API URLs:</p>
<pre><code>1) update_points
2) get_points
</code></pre>
<p>In 1) I do :</p>
<pre><code># I use a transaction because I have to update all the models in a single batch
@ndb.transactional(xg=True, retries=3)
def some_func(points):
    pvp_player = ndb.Key(Profile, "test_id", PVPPlayer, "test_id").get()
    pvp_player.points += points
    pvp_player.put()
    # update other models here
</code></pre>
<p>In 2) I do:</p>
<pre><code>pvp_player = ndb.Key(Profile, "test_id", PVPPlayer, "test_id").get()
return pvp_player.points
</code></pre>
<p>My flow looks like this:</p>
<pre><code>1) update_points()
2) get_points()
3) update_points()
4) get_points()
...
</code></pre>
<p><strong>Problem</strong>: </p>
<p>Using <code>get()</code> guarantees strong consistency so what I don't understand is why sometimes as the result of <code>get_points()</code> I get stale data like points were not updated at all.</p>
<p><strong>Example</strong>:</p>
<pre><code>POST get_points -> 0
POST sleep 1-3 sec
POST update_points -> 15
POST sleep 1-3 sec
POST get_points -> 15
POST sleep 1-3 sec
POST update_points -> 20
POST sleep 1-3 sec
POST get_points -> 15 !!!
</code></pre>
| 0 | 2016-08-25T08:48:56Z | 39,164,175 | <p>First check your logs, one of the updates must be failing with error, because your logic is basically correct.</p>
<p>Also double-check all your updates are wrapped in transactions to avoid races. <a href="http://stackoverflow.com/questions/38955748/cloud-datastore-ways-to-avoid-race-conditions">Cloud Datastore: ways to avoid race conditions</a></p>
<hr>
<p>This case is likely not about consistency issues but about stomping updates; check out these links for some interesting cases:</p>
<p><a href="http://engineering.khanacademy.org/posts/transaction-safety.htm" rel="nofollow">http://engineering.khanacademy.org/posts/transaction-safety.htm</a>
<a href="http://engineering.khanacademy.org/posts/user-write-lock.htm" rel="nofollow">http://engineering.khanacademy.org/posts/user-write-lock.htm</a></p>
| 1 | 2016-08-26T10:30:41Z | [
"python",
"google-app-engine",
"eventual-consistency"
] |
Flask can't find the config module in parent directory | 39,140,679 | <p>I'm basing myself on the structure of a web app I found up on github, <a href="https://github.com/nickjj/build-a-saas-app-with-flask" rel="nofollow">here</a>.</p>
<p>My project's structure looks like this:</p>
<pre><code>~/Learning/flask-celery $ tree
.
├── config
│   ├── __init__.py
│   └── settings.py
├── docker-compose.yml
├── Dockerfile
├── requirements.txt
└── web
    ├── app.py
    ├── __init__.py
    ├── static
    └── templates
        └── index.html
</code></pre>
<p>I want my Flask app in <code>web/app.py</code> to load the settings in the <code>config</code> module, as I saw in the github project linked above.</p>
<p>Here's how I'm instantiating the Flask app in <code>web/app.py</code>:</p>
<pre><code>from flask import Flask, request, render_template, session, flash, redirect, url_for, jsonify
[...]
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config.settings')
app.config.from_pyfile('settings.py')
[...]
</code></pre>
<p>The issue I'm getting is:</p>
<pre><code>root@0e221733b3d1:/usr/src/app# python3 web/app.py
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/werkzeug/utils.py", line 427, in import_string
module = __import__(module_name, None, None, [obj_name])
ImportError: No module named 'config'
[...]
</code></pre>
<p>Obviously, Flask can't find the <code>config</code> module in the parent directory, which makes sense to me, but I don't understand how the linked project I'm basing myself on is successfully loading the module with the same tree structure and Flask config code.</p>
<p>In these circumstances, how can I get Flask to load the <code>config</code> module?</p>
| 0 | 2016-08-25T08:49:06Z | 39,141,145 | <p>Without adding the config directory to your path your Python package will not be able to see it.</p>
<p>Your code can only access what is in <code>web</code> by the looks of it.</p>
<p>You can add the config directory to the package like so:</p>
<pre><code>import os
import sys
import inspect

currentdir = os.path.dirname(
    os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
</code></pre>
<p>Then you should be able to import config.</p>
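<p>A minimal sketch wrapping the same idea in a helper (the helper name and the example path are mine, for illustration); outside interactive sessions, <code>__file__</code> is usually equivalent to the <code>inspect</code> dance:</p>

```python
import os
import sys

def add_parent_to_path(current_file):
    # Put the parent of current_file's directory at the front of sys.path.
    currentdir = os.path.dirname(os.path.abspath(current_file))
    parentdir = os.path.dirname(currentdir)
    if parentdir not in sys.path:
        sys.path.insert(0, parentdir)
    return parentdir

# e.g. calling add_parent_to_path(__file__) from web/app.py
# makes `import config` resolvable.
print(add_parent_to_path("/usr/src/app/web/app.py"))
```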
| 2 | 2016-08-25T09:11:58Z | [
"python",
"web-applications",
"flask"
] |
Moving x-axis in matplotlib during real time plot (python) | 39,140,698 | <p>I want to manipulate the x-axis during a real time plot so that at most a number of 10 samples are seen at a time.
It seems like plt.axis() updates just once after the plot has been initialized. Any suggestions? Thanks in advance! </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# Initialize
x_axis_start = 0
x_axis_end = 10
plt.axis([x_axis_start, x_axis_end, 0, 1])
plt.ion()

# Realtime plot
for i in range(100):
    y = np.random.random()
    plt.scatter(i, y)
    plt.pause(0.10)
    # print(i)
    if i%10 == 0 and i>1:
        # print("Axis should update now!")
        plt.axis([x_axis_start+10, x_axis_end+10, 0, 1])
</code></pre>
| 3 | 2016-08-25T08:50:20Z | 39,140,810 | <p>You have to update <code>x_axis_start</code> and <code>x_axis_end</code> in the <code>if</code> statement!</p>
<pre><code>if i%10 == 0 and i>1:
    print("Axis should update now!")
    x_axis_start += 10
    x_axis_end += 10
    plt.axis([x_axis_start, x_axis_end, 0, 1])
</code></pre>
<p>This does the trick! :)</p>
<p>Explanation: you only added 10 to both parameters once. Since <code>x_axis_start</code> and <code>x_axis_end</code> themselves never changed, every call passed the same 0+10 and 10+10 to <code>plt.axis</code>, leaving you with only one visible update.</p>
| 1 | 2016-08-25T08:56:01Z | [
"python",
"matplotlib",
"plot",
"real-time",
"axis"
] |
How to gather information from user input and apply it elsewhere | 39,140,838 | <p>Hi I am new to programming and I am trying to write a code that will gather information from the input and determine if it is a valid alphabet.</p>
<p>This is my code so far</p>
<pre><code>words = []
word = input('Character: ')
while word:
    if word not in words:
        words.append(word)
    word = input('Character: ')
print(''.join(words), 'is a valid alphabetical string.')
Suppose I choose three letters and then press enter on the fourth prompt;
the output of the code will be:
Character:a
Character:b
Character:c
Character:
abc is a valid alphabetical string.
I want to add to this code so that when I type in a character that is not
from the alphabet the code will do something like this.
Character:a
Character:b
Character:c
Character:4
4 is not in the alphabet.
</code></pre>
<p>This is how I want my program to work</p>
<p><a href="http://i.stack.imgur.com/k4H12.png" rel="nofollow"><img src="http://i.stack.imgur.com/k4H12.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/QIAh5.png" rel="nofollow"><img src="http://i.stack.imgur.com/QIAh5.png" alt="enter image description here"></a></p>
| 0 | 2016-08-25T08:57:35Z | 39,140,936 | <p>Use <code>str.isalpha()</code>
It returns <code>True</code> only if all characters in the string are letters.</p>
<p>Example:</p>
<pre><code>>>> 'test'.isalpha()
True
>>> 'test44'.isalpha()
False
>>> 'test test'.isalpha()
False
</code></pre>
<p>In your code:</p>
<pre><code>words = []
word = input('Character: ')
while word:
    if word.isalpha() and word not in words:
        words.append(word)
    word = input('Character: ')
print(words, 'is a valid alphabetical string.')
</code></pre>
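<p>A non-interactive sketch of the same filtering logic (the <code>collect</code> helper and the prepared character list are mine, to make it testable without <code>input()</code>):</p>

```python
def collect(chars):
    # Same loop as above, driven by a prepared list instead of input().
    words = []
    for ch in chars:
        if ch == '':
            break
        if ch.isalpha() and ch not in words:
            words.append(ch)
    return ''.join(words)

print(collect(['a', 'b', 'c', '']))
```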
| 1 | 2016-08-25T09:01:51Z | [
"python"
] |
How to gather information from user input and apply it elsewhere | 39,140,838 | <p>Hi I am new to programming and I am trying to write a code that will gather information from the input and determine if it is a valid alphabet.</p>
<p>This is my code so far</p>
<pre><code>words = []
word = input('Character: ')
while word:
    if word not in words:
        words.append(word)
    word = input('Character: ')
print(''.join(words), 'is a valid alphabetical string.')
Suppose I choose three letters and then press enter on the fourth prompt;
the output of the code will be:
Character:a
Character:b
Character:c
Character:
abc is a valid alphabetical string.
I want to add to this code so that when I type in a character that is not
from the alphabet the code will do something like this.
Character:a
Character:b
Character:c
Character:4
4 is not in the alphabet.
</code></pre>
<p>This is how I want my program to work</p>
<p><a href="http://i.stack.imgur.com/k4H12.png" rel="nofollow"><img src="http://i.stack.imgur.com/k4H12.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/QIAh5.png" rel="nofollow"><img src="http://i.stack.imgur.com/QIAh5.png" alt="enter image description here"></a></p>
| 0 | 2016-08-25T08:57:35Z | 39,140,990 | <p>You can try this out :-</p>
<pre><code>words = []
while 1:
    word = input('Character: ')
    if word != '':
        if word.isalpha():
            if word not in words:
                words.append(word)
        else:
            print(word, "is not in the alphabet.")
            break
    else:
        res = (''.join(words) + ' is a valid alphabetical string.') if (words != []) else "The input was blank."
        print(res)
        break
</code></pre>
| 1 | 2016-08-25T09:04:23Z | [
"python"
] |
How to gather information from user input and apply it elsewhere | 39,140,838 | <p>Hi I am new to programming and I am trying to write a code that will gather information from the input and determine if it is a valid alphabet.</p>
<p>This is my code so far</p>
<pre><code>words = []
word = input('Character: ')
while word:
    if word not in words:
        words.append(word)
    word = input('Character: ')
print(''.join(words), 'is a valid alphabetical string.')
Suppose I choose three letters and then press enter on the fourth prompt;
the output of the code will be:
Character:a
Character:b
Character:c
Character:
abc is a valid alphabetical string.
I want to add to this code so that when I type in a character that is not
from the alphabet the code will do something like this.
Character:a
Character:b
Character:c
Character:4
4 is not in the alphabet.
</code></pre>
<p>This is how I want my program to work</p>
<p><a href="http://i.stack.imgur.com/k4H12.png" rel="nofollow"><img src="http://i.stack.imgur.com/k4H12.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/QIAh5.png" rel="nofollow"><img src="http://i.stack.imgur.com/QIAh5.png" alt="enter image description here"></a></p>
| 0 | 2016-08-25T08:57:35Z | 39,154,251 | <p>You can use a <code>while</code> loop to collect input, then break out of the loop if either the input is empty (the user hit enter without inputting a character) or if the input is not in the alphabet.</p>
<pre><code>letters = []
while True:
    letter = input('Character:')
    if letter == '':
        if letters:
            print('{} is a valid alphabetical string.'.format(''.join(letters)))
        else:
            print('The input was blank.')
        break
    elif letter.isalpha():
        letters.append(letter)
    else:
        print('{} is not in the alphabet.'.format(letter))
        break
</code></pre>
| 1 | 2016-08-25T20:30:18Z | [
"python"
] |
SELECT columns from a variable in Python PYODBC | 39,140,963 | <p>my first question up here, please be kind.
I am new to both python and SQL so I am finding my way.
I am writing a function in python which should select columns from a table in the database with column names coming from a variable (list). Below is what I want the code to look like, obviously it does not work. Is there a way to do it, or I should not bother and instead of a list type column names directly into c.execute? Thank you! Alex</p>
<pre><code>def data_extract1():
    column_list1 = ["column1", "column2"]
    c.execute('SELECT column_list1 FROM myBD')
    for row in c.fetchall():
        print(row)
</code></pre>
| 0 | 2016-08-25T09:03:13Z | 39,142,118 | <p>You can use <code>format()</code> and <code>join()</code> to replace <code>column_list1</code> in your query string with the columns you want.</p>
<pre><code>c.execute('SELECT {} FROM myBD'.format(", ".join(column_list1)))
</code></pre>
<p><code>", ".join(column_list1)</code> creates a comma-separated string from your column list.</p>
<p><code>format()</code> replaces the <code>{}</code> in your query string with that new string.</p>
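<p>A standalone sketch of the string this builds (no database connection needed). Since column names cannot be bound as query parameters, make sure the list only ever comes from trusted code:</p>

```python
column_list1 = ["column1", "column2"]

# ", ".join(...) -> "column1, column2"; format() splices it into the query.
query = 'SELECT {} FROM myBD'.format(", ".join(column_list1))
print(query)
```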
| 1 | 2016-08-25T09:57:39Z | [
"python",
"sql",
"select",
"odbc"
] |
find text that match certain string using regex | 39,141,012 | <p>I would like to find whether the string contains a certain character(s). if it does then output into a file, if it doesn't then output into another file.
My input data looks like this:</p>
<pre><code>exchange security volume
TO AAA 193099
TO AAB 81000
TO AAH 2310
TO AAV 1161144
TO AAVdbh 675000
TO ABC 98050
</code></pre>
<p>So far I have tried this:</p>
<pre><code>for row in data:
if 'a' in row['security'] then .....
</code></pre>
<p>However, I would like to use regex to match the string i.e. if string contains any lower character then ignore.
Thank you so much!</p>
| 2 | 2016-08-25T09:05:12Z | 39,141,221 | <p>You may use <code>filter</code> with <code>lambda</code> function if only upper values are required.</p>
<pre><code>>>> x = 'PrinOnlyUpperCaseLetter'
>>> filter(lambda x: x.isupper(), x)
'POUCL'
</code></pre>
<p>If you want both but in different list:</p>
<pre><code>>>> x = 'PrinOnlyUpperCaseLetter'
>>> upper_list, lower_list = [], []
>>> for i in x:
... if i.isupper():
... upper_list.append(i)
... else:
... lower_list.append(i)
</code></pre>
| 1 | 2016-08-25T09:15:46Z | [
"python",
"regex"
] |
find text that match certain string using regex | 39,141,012 | <p>I would like to find whether the string contains a certain character(s). if it does then output into a file, if it doesn't then output into another file.
My input data looks like this:</p>
<pre><code>exchange security volume
TO AAA 193099
TO AAB 81000
TO AAH 2310
TO AAV 1161144
TO AAVdbh 675000
TO ABC 98050
</code></pre>
<p>So far I have tried this:</p>
<pre><code>for row in data:
if 'a' in row['security'] then .....
</code></pre>
<p>However, I would like to use regex to match the string i.e. if string contains any lower character then ignore.
Thank you so much!</p>
| 2 | 2016-08-25T09:05:12Z | 39,141,283 | <p>try this:</p>
<pre><code>import re

for i in row['security']:
    if re.search(r"[a-z]", i):
        pass  # add to file 1
    else:
        pass  # add to file 2
</code></pre>
| 0 | 2016-08-25T09:18:37Z | [
"python",
"regex"
] |
List most common members in Pandas group? | 39,141,080 | <p>I have a dataframe with columns like this:</p>
<pre><code> id lead_sponsor lead_sponsor_class
02837692 Janssen Research & Development, LLC Industry
02837679 Aarhus University Hospital Other
02837666 Universidad Autonoma de Ciudad Juarez Other
02837653 Universidad Autonoma de Madrid Other
02837640 Beirut Eye Specialist Hospital Other
</code></pre>
<p>I want to find the most common lead sponsors. I can list the size of each group using:</p>
<pre><code>df.groupby(['lead_sponsor', 'lead_sponsor_class']).size()
</code></pre>
<p>which gives me this:</p>
<pre><code>lead_sponsor lead_sponsor_class
307 Hospital of PLA Other 1
3E Therapeutics Corporation Industry 1
3M Industry 4
4SC AG Industry 8
5 Santé Other 1
</code></pre>
<p>But how do I find the top 10 most common groups? If I do:</p>
<pre><code>df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().sort_values(ascending=False).head(10)
</code></pre>
<p>Then I get an error:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'sort_values'</p>
</blockquote>
| 2 | 2016-08-25T09:08:33Z | 39,141,139 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html" rel="nofollow"><code>Series.nlargest</code></a>:</p>
<pre><code>print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().nlargest(10))
</code></pre>
<p>In <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html" rel="nofollow">docs</a> is <strong>Notes</strong>:</p>
<blockquote>
<p>Faster than .sort_values(ascending=False).head(n) for small n relative to the size of the Series object.</p>
</blockquote>
<p>Sample:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': {0: 2837692, 1: 2837679, 2: 2837666, 3: 2837653, 4: 2837640},
'lead_sponsor': {0: 'a', 1: 'a', 2: 'a', 3: 's', 4: 's'},
'lead_sponsor_class': {0: 'Industry', 1: 'Other', 2: 'Other', 3: 'Other', 4: 'Other'}})
print (df)
id lead_sponsor lead_sponsor_class
0 2837692 a Industry
1 2837679 a Other
2 2837666 a Other
3 2837653 s Other
4 2837640 s Other
print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size())
lead_sponsor lead_sponsor_class
a Industry 1
Other 2
s Other 2
dtype: int64
print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().sort_values(ascending=False).head(2))
lead_sponsor lead_sponsor_class
s Other 2
a Other 2
dtype: int64
print (df.groupby(['lead_sponsor', 'lead_sponsor_class']).size().nlargest(2))
lead_sponsor lead_sponsor_class
a Other 2
s Other 2
dtype: int64
</code></pre>
| 2 | 2016-08-25T09:11:48Z | [
"python",
"sorting",
"pandas",
"dataframe",
"series"
] |
Why the test case return True for this python 2.7 coding? | 39,141,173 | <pre><code>def antisymmetric(A):
    #Write your code here
    for i in range(3):
        for j in range(3):
            if A[i][j] == -A[j][i]:
                return True
            else:
                return False

# Test Cases:
print antisymmetric([[0, 1, 2],
                     [-1, 0, -2],
                     [2, 2, 3]])
#>>> False
</code></pre>
<p>Surprisingly, the above test case returns True with this Python 2.7 code. Can anyone tell me the reason?</p>
| -3 | 2016-08-25T09:13:08Z | 39,141,216 | <p>You return <code>True</code> at i=0, j=0; the loops are not executed until the end.</p>
| 0 | 2016-08-25T09:15:31Z | [
"python",
"python-2.7"
] |
Why the test case return True for this python 2.7 coding? | 39,141,173 | <pre><code>def antisymmetric(A):
    #Write your code here
    for i in range(3):
        for j in range(3):
            if A[i][j] == -A[j][i]:
                return True
            else:
                return False

# Test Cases:
print antisymmetric([[0, 1, 2],
                     [-1, 0, -2],
                     [2, 2, 3]])
#>>> False
</code></pre>
<p>Surprisingly, the above test case returns True with this Python 2.7 code. Can anyone tell me the reason?</p>
| -3 | 2016-08-25T09:13:08Z | 39,141,244 | <p>I guess it is because you return from the function after the first test.
You would probably want to return only after you can rule out the possibility of the matrix being antisymmetric, i.e. only return False within the loop, and return True after the loops have run through.</p>
| 1 | 2016-08-25T09:16:55Z | [
"python",
"python-2.7"
] |
Why the test case return True for this python 2.7 coding? | 39,141,173 | <pre><code>def antisymmetric(A):
    #Write your code here
    for i in range(3):
        for j in range(3):
            if A[i][j] == -A[j][i]:
                return True
            else:
                return False

# Test Cases:
print antisymmetric([[0, 1, 2],
                     [-1, 0, -2],
                     [2, 2, 3]])
#>>> False
</code></pre>
<p>Surprisingly, the above test case returns True with this Python 2.7 code. Can anyone tell me the reason?</p>
| -3 | 2016-08-25T09:13:08Z | 39,141,267 | <p>It is returning True at i = 0 and j = 0.</p>
<p>Modified solution:</p>
<pre><code>def antisymmetric(A):
    for i in range(3):
        for j in range(3):
            if A[i][j] != -A[j][i]:
                return False
    return True
</code></pre>
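<p>A sketch generalizing the same fix to any n×n matrix, checked against the question's test case (the <code>n = len(A)</code> generalization is my addition, not part of the original answer):</p>

```python
def antisymmetric(A):
    # Only return False inside the loops; True only once every pair passed.
    n = len(A)
    for i in range(n):
        for j in range(n):
            if A[i][j] != -A[j][i]:
                return False
    return True

print(antisymmetric([[0, 1, 2], [-1, 0, -2], [2, 2, 3]]))
print(antisymmetric([[0, 1], [-1, 0]]))
```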
| 3 | 2016-08-25T09:18:12Z | [
"python",
"python-2.7"
] |
Why the test case return True for this python 2.7 coding? | 39,141,173 | <pre><code>def antisymmetric(A):
    #Write your code here
    for i in range(3):
        for j in range(3):
            if A[i][j] == -A[j][i]:
                return True
            else:
                return False

# Test Cases:
print antisymmetric([[0, 1, 2],
                     [-1, 0, -2],
                     [2, 2, 3]])
#>>> False
</code></pre>
<p>Surprisingly, the above test case returns True with this Python 2.7 code. Can anyone tell me the reason?</p>
| -3 | 2016-08-25T09:13:08Z | 39,141,879 | <p>Your test returns immediately after the first check. To test if the matrix is antisymmetric (skew-symmetric) you need to keep checking until you find a pair (i, j) with <code>A[i][j] != -A[j][i]</code>.</p>
<p>It's almost always better in Python to directly loop over the items in containers rather than using indices. To transpose the matrix we can use the built-in <code>zip</code> function:</p>
<pre><code>def is_antisymmetric(m):
    # Transpose matrix m
    t = zip(*m)
    # Check each row against each column
    for row, col in zip(m, t):
        # Test that each item in the row is the negative
        # of the corresponding column item
        for u, v in zip(row, col):
            if u != -v:
                return False
    return True

# Test
data = (
    # Not anti-symmetric
    [[0, 1, 2],
     [-1, 0, -2],
     [2, 2, 3]],
    # Anti-symmetric
    [[0, 1, 2],
     [-1, 0, -2],
     [-2, 2, 0]],
)

for m in data:
    for row in m:
        print(row)
    print(is_antisymmetric(m))
</code></pre>
<p><strong>output</strong></p>
<pre><code>[0, 1, 2]
[-1, 0, -2]
[2, 2, 3]
False
[0, 1, 2]
[-1, 0, -2]
[-2, 2, 0]
True
</code></pre>
<p>We can make the function much more compact by using a generator expression inside the <code>all</code> function:</p>
<pre><code>def is_antisymmetric(m):
    return all([-u for u in col] == row for row, col in zip(m, zip(*m)))
</code></pre>
<p>The <code>all</code> function stops testing as soon as it finds a row that's not equal to the corresponding column. And the <code>==</code> test also stops comparing the current row with the current column as soon as it finds a mismatch, so this code is equivalent to the earlier version, except that it's a little more efficient. However, it may not be so easy to read if you're not used to generator expressions. :)</p>
<p>FWIW, all of the code in this answer runs on Python 2 and Python 3, and it handles square matrices of any size.</p>
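As a quick self-contained sanity check of the compact version, here it is again with two made-up test matrices (Python 3 syntax):

```python
def is_antisymmetric(m):
    # A matrix is antisymmetric when every row equals the
    # negation of the corresponding column of the transpose
    return all([-u for u in col] == row for row, col in zip(m, zip(*m)))

print(is_antisymmetric([[0, 1], [-1, 0]]))   # True
print(is_antisymmetric([[0, 1, 2],
                        [-1, 0, -2],
                        [2, 2, 3]]))         # False
```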
| 1 | 2016-08-25T09:46:14Z | [
"python",
"python-2.7"
] |
Python multiprocessing - Return a dict | 39,141,236 | <p>I'd like to parallelize a function that returns a flattened list of values (called "keys") in a dict, but I don't understand how to obtain the final result. I have tried:</p>
<pre><code>def toParallel(ht, token):
    keys = []
    words = token[token['hashtag'] == ht]['word']
    for w in words:
        keys.append(checkString(w))
    y = {ht: keys}

num_cores = multiprocessing.cpu_count()
pool = multiprocessing.Pool(num_cores)

token = pd.read_csv('/path', sep=",", header=None, encoding='utf-8')
token.columns = ['word', 'hashtag', 'count']
hashtag = pd.DataFrame(token.groupby(by='hashtag', as_index=False).count()['hashtag'])

result = pd.DataFrame(index=hashtag['hashtag'], columns=range(0, 21))
result = result.fillna(0)

final_result = []
final_result = [pool.apply_async(toParallel, args=(ht,token,)) for ht in hashtag['hashtag']]
</code></pre>
<p>The toParallel function should return a dict with the hashtag as key and a list of keys (where the keys are ints). But if I try to print final_result, I obtain only</p>
<blockquote>
<p>bound method ApplyResult.get of multiprocessing.pool.ApplyResult object at 0x10c4fa950</p>
</blockquote>
<p>How can I do it?</p>
| 1 | 2016-08-25T09:16:27Z | 39,141,643 | <pre><code>final_result = [pool.apply_async(toParallel, args=(ht,token,)) for ht in hashtag['hashtag']]
</code></pre>
<p>You can either use <code>Pool.apply()</code> and get the result right away (in which case you do not need <code>multiprocessing</code> hehe, the function is just there for completeness) or use <code>Pool.apply_async()</code> followed by <code>ApplyResult.get()</code>. <code>Pool.apply_async()</code> is <strong>asynchronous</strong>.</p>
<p>Something like this:</p>
<pre><code>workers = [pool.apply_async(toParallel, args=(ht,token,)) for ht in hashtag['hashtag']]
final_result = [worker.get() for worker in workers]
</code></pre>
<p>Alternatively, you can also use <code>Pool.map()</code> which will do all this for you.</p>
<p>Either way, I recommend you read <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow">the documentation</a> carefully.</p>
<hr>
<p><strong>Addendum:</strong> When answering this question I presumed the OP is using some Unix operating system like Linux or OSX. If you are using Windows, you must not forget to safeguard your parent/worker processes using <code>if __name__ == '__main__'</code>. This is because Windows lacks <code>fork()</code> and so the child process starts at the beginning of the file, and not at the point of forking like in Unix, so you must use an <code>if</code> condition to guide it. See <a href="https://docs.python.org/2/library/multiprocessing.html#windows" rel="nofollow">here</a>.</p>
<hr>
<p>ps: this is unnecessary:</p>
<pre><code>num_cores = multiprocessing.cpu_count()
pool = multiprocessing.Pool(num_cores)
</code></pre>
<p>If you call <code>multiprocessing.Pool()</code> without arguments (or <code>None</code>), it already creates a pool of workers with the size of your cpu count.</p>
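For completeness, here is a minimal self-contained sketch of the apply_async/get pattern described above; `to_upper` is a made-up stand-in for the real `toParallel` worker:

```python
import multiprocessing

def to_upper(word):
    # Hypothetical stand-in for the real worker function
    return {word: word.upper()}

if __name__ == '__main__':
    pool = multiprocessing.Pool()  # no argument: pool size defaults to cpu_count()
    workers = [pool.apply_async(to_upper, args=(w,)) for w in ['a', 'b']]
    final_result = [w.get() for w in workers]  # get() blocks until each result is ready
    pool.close()
    pool.join()
    print(final_result)  # [{'a': 'A'}, {'b': 'B'}]
```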
| 1 | 2016-08-25T09:34:54Z | [
"python",
"dictionary",
"multiprocessing"
] |
Slicing list with different string matching conditions | 39,141,251 | <p>I'd like to slice a list of strings based on substrings <em>possibly</em> contained into its elements:</p>
<pre><code>l = ['Some long text', 'often begins', ' with ',
'impenetrable fog ', 'which ends', ' somewhere further']
startIndex = [u for u, v in enumerate(l) if 'begins' in v][0]
finalIndex = [u for u, v in enumerate(l) if 'ends' in v][0]
</code></pre>
<p>so that I'd get:</p>
<pre><code>' '.join(l[startIndex:finalIndex]) == 'often begins with impenetrable fog'
</code></pre>
<p>My main problem being that the beginning and end conditions used to get indexes are different and should be variable (basic substring containment as above-mentioned, regexes or other methods possible).</p>
<p>First and last elements might need to be stripped out but I guess this is a matter of adjusting indexes by 1.
My code works in the ideal cases but will often fail as structure and contents of <code>l</code> are not very predictable. Absence of one or both elements matching conditions should end up with the final string being <code>None</code>.</p>
<p>Are comprehensions relevant, or mapping a lambda function to apply both conditions?</p>
| 3 | 2016-08-25T09:17:14Z | 39,141,922 | <p>Try:</p>
<pre><code>l = ['Some long text', 'often begins', 'with', 'impenetrable fog', 'which ends', 'somewhere further']
"""
return the index of the phase in 'phases' if phase contains 'word'
if not found, return 'default'
"""
def index(phases, word, default):
    for i, s in enumerate(phases):
        if word in s: return i
    return default
startIndex = index(l, "long", -1)
finalIndex = index(l, "somewhere", len(l))
print(' '.join(l[startIndex+1:finalIndex]))
</code></pre>
| 1 | 2016-08-25T09:48:21Z | [
"python",
"indexing",
"list-comprehension",
"slice"
] |
Slicing list with different string matching conditions | 39,141,251 | <p>I'd like to slice a list of strings based on substrings <em>possibly</em> contained into its elements:</p>
<pre><code>l = ['Some long text', 'often begins', ' with ',
'impenetrable fog ', 'which ends', ' somewhere further']
startIndex = [u for u, v in enumerate(l) if 'begins' in v][0]
finalIndex = [u for u, v in enumerate(l) if 'ends' in v][0]
</code></pre>
<p>so that I'd get:</p>
<pre><code>' '.join(l[startIndex:finalIndex]) == 'often begins with impenetrable fog'
</code></pre>
<p>My main problem being that the beginning and end conditions used to get indexes are different and should be variable (basic substring containment as above-mentioned, regexes or other methods possible).</p>
<p>First and last elements might need to be stripped out but I guess this is a matter of adjusting indexes by 1.
My code works in the ideal cases but will often fail as structure and contents of <code>l</code> are not very predictable. Absence of one or both elements matching conditions should end up with the final string being <code>None</code>.</p>
<p>Are comprehensions relevant, or mapping a lambda function to apply both conditions?</p>
| 3 | 2016-08-25T09:17:14Z | 39,142,006 | <p>Or with <a href="https://docs.python.org/2/library/functions.html#next" rel="nofollow"><code>next()</code></a>:</p>
<pre><code>l = ['Some long text', 'often begins', ' with ', 'impenetrable fog ',
'which ends', ' somewhere further']
startIndex = next((u for u, v in enumerate(l) if 'begins' in v), 0)
finalIndex = next((u for u, v in enumerate(l) if 'ends' in v), 0)
if (startIndex and finalIndex) and (finalIndex > startIndex):
    sentence = ' '.join(l[startIndex:finalIndex])
else:
    sentence = None

print(sentence)
</code></pre>
<p>Similar to a list comprehension, except it doesn't return a list but the first element it finds. If it doesn't find anything, it returns a default element (here <code>0</code>).</p>
<p>This way, if there is no <code>'begins'</code> or no <code>'ends'</code> in your list, you don't have to print anything. It also allows you to check whether the <code>'ends'</code> comes before the <code>'begins'</code>.</p>
<p>I also love list comprehension but sometimes what you need isn't a list.</p>
<p><strong>SOLUTION FOR ADVANCED USERS:</strong></p>
<p>The problem with the use of two comprehension list, is that you check twice your list from start and it will fail when <code>ends</code> comes before start:</p>
<pre><code>l = ['Some long text ends here', 'often begins', ' with ', 'which ends']
^^^
</code></pre>
<p>To avoid this, you might use a generator with <a href="https://docs.python.org/3/reference/expressions.html#examples" rel="nofollow"><code>send()</code></a> to only iterate once on your list.</p>
<pre><code>def get_index(trigger_word):
    for u, v in enumerate(l):
        if trigger_word in v:
            trigger_word = yield u

gen = get_index('begins')
startIndex = gen.send(None)
finalIndex = gen.send('ends')
</code></pre>
<p>Here, the <code>yield</code> allows you to get the index without exiting the function.</p>
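To see `send()` in action on its own, here is a tiny self-contained variant (the names and the sample list are made up for illustration):

```python
def find_indices(items, word):
    # Yield the index of each element containing the current search word;
    # send() swaps in the next word without restarting the scan
    for i, s in enumerate(items):
        if word in s:
            word = yield i

gen = find_indices(['x begins', 'middle', 'y ends'], 'begins')
start = gen.send(None)  # prime the generator; stops at index 0
end = gen.send('ends')  # resume, now scanning for 'ends'
print(start, end)       # 0 2
```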
<p>This is better, but if there is no <code>begins</code> or <code>ends</code> in the list, there will be a <a href="https://docs.python.org/2/library/exceptions.html#exceptions.StopIteration" rel="nofollow">StopIteration</a> exception. To avoid this, you can just run an infinite loop on <code>yield 0</code> instead. Now the complete solution will be:</p>
<pre><code>def get_index(l, trigger_word):
    for u, v in enumerate(l):
        if trigger_word in v:
            trigger_word = yield u
    while True:
        yield 0

def concat_with_trigger_words(l):
    gen = get_index(l, 'begins')
    startIndex = gen.send(None)
    finalIndex = gen.send('ends')
    return ' '.join(l[startIndex:finalIndex]) if (startIndex and finalIndex) else None

# Here are some free lists for your future unit tests ;)
l_orignal = ['Some long text here', 'often begins', ' with ',
             'impenetrable fog ', 'which ends', ' somewhere further']
l_start_with_ends = ['ends', 'often begins', ' with ',
                     'impenetrable fog ', 'which ends', 'begins']
l_none = ['random', 'word']
l_without_begin = ['fog', 'ends here']
l_without_end = ['begins', 'but never' '...']

print(concat_with_trigger_words(l_orignal))         # often begins with impenetrable fog
print(concat_with_trigger_words(l_start_with_ends)) # often begins with impenetrable fog
print(concat_with_trigger_words(l_none))            # None
print(concat_with_trigger_words(l_without_end))     # None
print(concat_with_trigger_words(l_without_begin))   # None
</code></pre>
| 1 | 2016-08-25T09:52:03Z | [
"python",
"indexing",
"list-comprehension",
"slice"
] |
Slicing list with different string matching conditions | 39,141,251 | <p>I'd like to slice a list of strings based on substrings <em>possibly</em> contained into its elements:</p>
<pre><code>l = ['Some long text', 'often begins', ' with ',
'impenetrable fog ', 'which ends', ' somewhere further']
startIndex = [u for u, v in enumerate(l) if 'begins' in v][0]
finalIndex = [u for u, v in enumerate(l) if 'ends' in v][0]
</code></pre>
<p>so that I'd get:</p>
<pre><code>' '.join(l[startIndex:finalIndex]) == 'often begins with impenetrable fog'
</code></pre>
<p>My main problem being that the beginning and end conditions used to get indexes are different and should be variable (basic substring containment as above-mentioned, regexes or other methods possible).</p>
<p>First and last elements might need to be stripped out but I guess this is a matter of adjusting indexes by 1.
My code works in the ideal cases but will often fail as structure and contents of <code>l</code> are not very predictable. Absence of one or both elements matching conditions should end up with the final string being <code>None</code>.</p>
<p>Are comprehensions relevant, or mapping a lambda function to apply both conditions?</p>
| 3 | 2016-08-25T09:17:14Z | 39,142,075 | <pre><code>>>> l = ['Some long text', 'often begins', ' with ',
... 'impenetrable fog ', 'which ends', ' somewhere further']
>>> start, end = 'begins', 'ends'
>>> key_index = {'start': {'word': start, 'index': -1},
'end': {'word': end, 'index': -1}}
>>> for i, val in enumerate(l):
... if key_index['start']['word'] in val:
... key_index['start']['index'] = i
... elif key_index['end']['word'] in val:
... key_index['end']['index'] = i
...
>>> start_index, end_index = key_index['start']['index'], key_index['end']['index']
>>> my_list = l[start_index+1:end_index] if start_index >=0 and end_index >= 0 and start_index+1 < end_index else None
>>> my_list
[' with ', 'impenetrable fog ']
</code></pre>
| 1 | 2016-08-25T09:55:26Z | [
"python",
"indexing",
"list-comprehension",
"slice"
] |
Python 2.7: TypeError: 'float' object has no attribute '__getitem__' | 39,141,466 | <p>I am new to programming and Python so please kindly excuse if this is a silly mistake.</p>
<p>I am trying to run a script where I want to generate sample data based on lognormal distribution and then plot a histogram of that data.</p>
<p>I keep getting error</p>
<p>Here's my code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a = 0.75 + (1.25 - 0.75)*np.random.lognormal(10000)
[n,bins,patches] = plt.hist(a, bins=50, color = 'red',alpha = 0.5, normed = True)
plt.show()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
  File "H:\UQ&M\GUI Demos\WIP\Tester.py", line 10, in <module>
    [n,bins,patches] = plt.hist(a, bins=50, color = 'red',alpha = 0.5, normed = True)
  File "C:\Program Files (x86)\python27\lib\site-packages\matplotlib\pyplot.py", line 2341, in hist
    ret = ax.hist(x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, **kwargs)
  File "C:\Program Files (x86)\python27\lib\site-packages\matplotlib\axes.py", line 7650, in hist
    if isinstance(x, np.ndarray) or not iterable(x[0]):
TypeError: 'float' object has no attribute '__getitem__'
</code></pre>
<p>I have read similar queries on here however I can't seem to find a solution.</p>
<p>Your expert suggestion will be highly appreciated.</p>
<p>Thanks in advance for putting your valuable time looking into this.</p>
| 1 | 2016-08-25T09:26:46Z | 39,141,560 | <p>From the <a href="http://matplotlib.org/api/pyplot_api.html" rel="nofollow">matplotlib API</a>
<code>a</code> should be an array or sequence; if I'm right, in your code it is a single number, not an array.</p>
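A sketch of what the call was probably meant to do, assuming the intent was 10000 samples: `np.random.lognormal` takes `mean`, `sigma` and `size`, so the lone `10000` in the question is parsed as the mean, and a single float comes back.

```python
import numpy as np

# size= makes lognormal return an array instead of a single float
a = 0.75 + (1.25 - 0.75) * np.random.lognormal(mean=0.0, sigma=1.0, size=10000)
print(a.shape)  # (10000,)
```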
| 0 | 2016-08-25T09:30:52Z | [
"python",
"python-2.7",
"python-3.x",
"numpy"
] |
Any better way to calculate product of multiple lists? | 39,141,616 | <p>I need to get all possible combinations based on n-number of input lists and do some stuff to them.</p>
<p>current code example:</p>
<pre><code>import itertools
# example inputs
list_small = [1, 2, 3]
list_medium = [444, 666, 242]
list_huge = [1680, 7559, 5573, 43658, 530, 11772, 284, 50078, 783, 37809, 6740, 37765, 74492, 50078, 783, 37809, 6740, 37765, 74492]
# out of the input list, I need to generate all numbers from 0 to the current list element
# e.g. if I have 6, I need to get [0, 1, 2, 3, 4, 5, 6]
# if I get a list [1, 2, 3], the output will be [[0, 1], [0, 1, 2], [0, 1, 2, 3]]
# I achieved this by doing it with xrange: [x for x in xrange(0, current_list_element + 1)]
# after that, I need to generate all possible combinations using the generated lists
# I managed to do this by using itertools.product()
# print this to get all possible combinations
# print list(itertools.product(*[[x for x in xrange(0, current_list_element + 1)] for current_list_element in list_medium]))
cumulative_sum = 0
for current_combination in itertools.product(*[[x for x in xrange(0, current_list_element + 1)] for current_list_element in list_medium]):
    # now I need to do some calculations to the current combination
    # e.g. get sum of all combinations, this is just an example
    cumulative_sum += sum(current_combination)
    # another example
    # get XOR sum of current combination, more at https://en.wikipedia.org/wiki/Exclusive_or
    print reduce(operator.xor, current_combination, 0)
# runs fast for list_small, then takes some time for list_medium and then takes ages for list_huge
print cumulative_sum
</code></pre>
<p>This works fine for smaller lists, but takes forever for larger lists or throws a RuntimeError. Is there a better way to do this? A better way to get all combinations? Or am I using xrange in some wrong way?</p>
<p>I tried this with Python 2.7 and Pypy 2.</p>
<p>EDIT:
thanks to @famagusta I got rid of xrange, but the problem still remains</p>
<pre><code>import itertools
# example inputs
list_small = [1, 2, 3]
list_medium = [444, 666, 242]
list_huge = [1680, 7559, 5573, 43658, 530, 11772, 284, 50078, 783, 37809, 6740, 37765, 74492, 50078, 783, 37809, 6740, 37765, 74492]
max_element = max(get_input_stones)
combo_list = range(0, max_element + 1)
cumulative_sum = 0
for current_combination in itertools.product(*combo_list):
    # now I need to do some calculations to the current combination
    # e.g. get sum of all combinations, this is just an example
    cumulative_sum += sum(current_combination)
    # another example
    # get XOR sum of current combination, more at https://en.wikipedia.org/wiki/Exclusive_or
    print reduce(operator.xor, current_combination, 0)
# runs fast for list_small, then takes some time for list_medium and then takes ages for list_huge
print cumulative_sum
</code></pre>
| 0 | 2016-08-25T09:33:32Z | 39,141,872 | <p>Generating such nested lists could get you into trouble with memory limitations. Instead of repeatedly generating sublists, you can use just one super list generated from the largest number in the list. Just store the indices where smaller elements would have stopped.</p>
<p>For e.g., [1, 6, 10] - [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [1, 6, 10]</p>
<p>The second list tells you where to stop in the first list to extract sublists of interest for computation</p>
<p>This should save you some space. </p>
<pre><code>list_small = [1, 2, 3]
list_medium = [444, 666, 242]
list_huge = [1680, 7559, 5573, 43658, 530, 11772, 284, 50078, 783, 37809, 6740, 37765, 74492, 50078, 783, 37809, 6740, 37765, 74492]
max_element = max(list_huge) # being lazy here - write a max function
combo_list = range(0, max_element + 1) # xrange does not support slicing
cumulative_sum = 0
for element in list_huge:
    cumulative_sum += sum(combo_list[:element])
print(cumulative_sum)
</code></pre>
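As a side note, each slice sum above has a closed form, `sum(range(e)) == e*(e-1)//2`, so the helper list can be skipped entirely. A quick check with a few made-up values:

```python
# Verify the closed form against the brute-force slice sums
list_huge = [1680, 7559, 530]
brute = sum(sum(range(e)) for e in list_huge)
closed = sum(e * (e - 1) // 2 for e in list_huge)
print(brute == closed)  # True
```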
| 1 | 2016-08-25T09:45:56Z | [
"python",
"python-2.7",
"itertools",
"pypy",
"xrange"
] |
schema free solution to BigQuery Load job | 39,141,642 | <h3>Background</h3>
<p>I studied and found that BigQuery doesn't accept schemas defined by online tools (which have different formats, even though the meaning is the same).
So the problem is: I want to load data (where the number of columns keeps varying and increasing dynamically) into a table which has a fixed schema.</p>
<h3>Thoughts</h3>
<p>What i could do as a workaround is:</p>
<ol>
<li>First check if the data being loaded has extra fields.</li>
<li>If it has, a schema mismatch will occur, so first create a temporary table in BQ and load this data into it using the "autodetect" parameter, which gives me a schema file (in a format which BQ accepts).</li>
<li>Now I can download this schema file and use it to update my existing table in BQ and load it with the appropriate data.</li>
</ol>
<h3>Suggestion</h3>
<p>Any thoughts on this, if there is a better approach please share.</p>
| 0 | 2016-08-25T09:34:52Z | 39,172,452 | <p>We are in the process of releasing a new feature that can update the schema of the destination table within a load/query job. With autodetect and the new feature you can directly load the new data to the existing table, and the schema will be updated as part of the load job. Please stay tuned. The current ETA is 2 weeks.</p>
| 1 | 2016-08-26T18:17:36Z | [
"python",
"google-analytics",
"google-bigquery",
"google-cloud-platform"
] |
Generate numbers from a set of digits | 39,141,782 | <p>I am trying to generate all possible combinations of three numbers derived from a set of numbers.<br>
Let's say I have the digits 1 to 9 (once each) I would like to generate a triple of numbers like <code>14, 983, 7256</code> (but all possible combinations). So every digit can only be used once and all digits have to be used.</p>
<p>My first idea was to generate different sets of digits as a pool for each number like so:</p>
<pre><code>bin_arr = []
for i in range(1, 512):
    bin_arr.append([int(a) for a in ("{0:0b}".format(i))])

>>> bin_arr[257]
[1, 0, 0, 0, 0, 0, 0, 0, 1]
</code></pre>
<p>and <code>compress</code> these with <code>'123456789'</code> but that doesn't seem to go anywhere.</p>
<p>Is there a way to do that in a clever way?</p>
| 0 | 2016-08-25T09:42:02Z | 39,142,501 | <p>This is how I would do it, assuming order matters:</p>
<p>@ outcomes is a list of the desired options e.g. if you want a number with numerals 0-9, use output [0,1, . . ., 9].</p>
<p>@ length is the number of digits in each output number. You used up to 4 digits in your question, I assume there is some upper limit. </p>
<p>Use the function below to generate the permutations for each individual part of the triple and then feed that result back in to generate further permutations.</p>
<pre><code>def gen_permutations(outcomes, length):
    ans = set([()])
    for dummy_idx in range(length):
        temp = set()
        for seq in ans:
            for item in outcomes:  # each possible outcome
                new_seq = list(seq)
                if len(outcomes) == 1 or item in new_seq:
                    continue
                new_seq.append(item)
                temp.add(tuple(new_seq))
        ans = temp
    return ans
</code></pre>
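A quick usage check of the function (repeated here so the snippet runs on its own):

```python
def gen_permutations(outcomes, length):
    # Same function as in the answer: build length-sized permutations iteratively
    ans = set([()])
    for _ in range(length):
        temp = set()
        for seq in ans:
            for item in outcomes:
                new_seq = list(seq)
                if len(outcomes) == 1 or item in new_seq:
                    continue
                new_seq.append(item)
                temp.add(tuple(new_seq))
        ans = temp
    return ans

print(sorted(gen_permutations([1, 2, 3], 2)))
# [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]
```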
| 0 | 2016-08-25T10:14:18Z | [
"python",
"numbers"
] |
Generate numbers from a set of digits | 39,141,782 | <p>I am trying to generate all possible combinations of three numbers derived from a set of numbers.<br>
Let's say I have the digits 1 to 9 (once each) I would like to generate a triple of numbers like <code>14, 983, 7256</code> (but all possible combinations). So every digit can only be used once and all digits have to be used.</p>
<p>My first idea was to generate different sets of digits as a pool for each number like so:</p>
<pre><code>bin_arr = []
for i in range(1, 512):
bin_arr.append([int(a) for a in ("{0:0b}".format(i))])
>>> bin_arr[257]
>>> [1, 0, 0, 0, 0, 0, 0, 0, 1]
</code></pre>
<p>and <code>compress</code> these with <code>'123456789'</code> but that doesnt seem to go anywhere.</p>
<p>Is there a way to do that in a clever way?</p>
| 0 | 2016-08-25T09:42:02Z | 39,143,435 | <p>We can assume one list like [14,983,2567] to be a number sequence 149832567,then we add two commas to it, one after 4 and the other after 3,so we generate a triple of numbers [14,983,2567].</p>
<p>So,how many number sequences can be generated?</p>
<pre><code>In [1]: import itertools
In [2]: a = range(1,10)
In [3]: a
Out[3]: [1, 2, 3, 4, 5, 6, 7, 8, 9]
In [4]: len(list(itertools.permutations(a,9)))
Out[4]: 362880
</code></pre>
<p>When we get a number sequence like 437865192, how many triple numbers can be generated? Combination:</p>
<p>8*7/2 = 28 (pick two gaps between the 9 numbers)</p>
<p>or use <code>itertools.combinations</code></p>
<pre><code>In [8]: len(list(itertools.combinations(list(range(8)),2)))
Out[8]: 28
</code></pre>
<p>Given a sequence, we will get 28 combinations.</p>
<pre><code>In [1]: a = ['2','3','6','4','9','1','7','8','5']
In [2]: import itertools
In [4]: for i in itertools.combinations(range(1,9),2):
   ...:     print [int(''.join(a[:i[0]])), int(''.join(a[i[0]:i[1]])), int(''.join(a[i[1]:]))]
[2, 3, 6491785]
[2, 36, 491785]
[2, 364, 91785]
[2, 3649, 1785]
[2, 36491, 785]
[2, 364917, 85]
[2, 3649178, 5]
[23, 6, 491785]
[23, 64, 91785]
[23, 649, 1785]
[23, 6491, 785]
[23, 64917, 85]
[23, 649178, 5]
[236, 4, 91785]
[236, 49, 1785]
[236, 491, 785]
[236, 4917, 85]
[236, 49178, 5]
[2364, 9, 1785]
[2364, 91, 785]
[2364, 917, 85]
[2364, 9178, 5]
[23649, 1, 785]
[23649, 17, 85]
[23649, 178, 5]
[236491, 7, 85]
[236491, 78, 5]
[2364917, 8, 5]
</code></pre>
<p><strong>so there are 10160640(362880*28) lists that will be generated.</strong></p>
<p><strong>THE FINAL CODE:</strong></p>
<pre><code>In [15]: a=map(lambda x:str(x), range(1,10))
In [16]: a
Out[16]: ['1', '2', '3', '4', '5', '6', '7', '8', '9']
In [17]: result = []
In [18]: for seq in itertools.permutations(a,9):
    ...:     for i in itertools.combinations(range(1,9),2):
    ...:         result.append([int(''.join(seq[:i[0]])), int(''.join(seq[i[0]:i[1]])), int(''.join(seq[i[1]:]))])
    ...:
In [19]: len(result)
Out[19]: 10160640
</code></pre>
| 1 | 2016-08-25T10:58:03Z | [
"python",
"numbers"
] |
Use of Pandas to slice and create Lists | 39,141,791 | <p>After some research on csv / pandas / etc. to manipulate a huge CSV file, I decided to use pandas to slice just the information I need.
Now I am able to get just what I need using a filter, i.e. "Name" = "Greg", where I only see rows where the column Name has Greg.
However, I would now like to create a Python list with all the information of a specific column (i.e. City). How could I do that?
Then I will work just with the list to sort, count, etc.</p>
<p>What I have:</p>
<pre><code>import pandas as pd
all_data = pd.read_csv(
'myfile.csv', # file name
sep=',', # column separator
quotechar='"', # quoting character
encoding='utf-16',
na_values=0, # fill missing values with 0
usecols=[0,1,3], # columns to use
decimal='.') # symbol for decimals
slice1 = all_data[all_data['Name'] == 'Greg']
print (slice1)
</code></pre>
<p>Example of print (slice1):</p>
<p><a href="http://i.stack.imgur.com/3BD2D.png" rel="nofollow"><img src="http://i.stack.imgur.com/3BD2D.png" alt="enter image description here"></a></p>
| 1 | 2016-08-25T09:42:18Z | 39,141,817 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow"><code>tolist</code></a>:</p>
<pre><code>#output is Series - column City
slice1 = all_data.ix[all_data['Name'] == 'Greg', 'City']
#generate list from Series
L = all_data.ix[all_data['Name'] == 'Greg', 'City'].tolist()
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
all_data = pd.DataFrame({'Name':['Greg','Greg','Greg','Adam'],
'Coutry':['US','UK','UK','UK'],
'City':['LA','LD','RE','LB']},
index=[221,564,800,500])
print (all_data)
City Coutry Name
221 LA US Greg
564 LD UK Greg
800 RE UK Greg
500 LB UK Adam
slice1 = all_data.ix[all_data['Name'] == 'Greg', 'City']
print (slice1)
221 LA
564 LD
800 RE
Name: City, dtype: object
L = all_data.ix[all_data['Name'] == 'Greg', 'City'].tolist()
print (L)
['LA', 'LD', 'RE']
</code></pre>
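Note that in later pandas versions `ix` was deprecated; a sketch of the same lookup with `loc` (assuming a reasonably recent pandas):

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Greg', 'Greg', 'Adam'],
                   'City': ['LA', 'LD', 'LB']})

# .loc takes the same boolean-mask-plus-column indexing as .ix did
cities = df.loc[df['Name'] == 'Greg', 'City'].tolist()
print(cities)  # ['LA', 'LD']
```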
| 1 | 2016-08-25T09:43:27Z | [
"python",
"list",
"pandas",
"dataframe",
"condition"
] |
Capitalize first letter of each word in the column Python | 39,141,856 | <p>how do you capitalize the first letter of each word in the column? I am using python pandas by the way. For example, </p>
<pre><code> Column1
The apple
the Pear
Green tea
</code></pre>
<p>My desire result will be:</p>
<pre><code> Column1
The Apple
The Pear
Green Tea
</code></pre>
| 1 | 2016-08-25T09:45:16Z | 39,141,892 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.title.html" rel="nofollow"><code>str.title</code></a>:</p>
<pre><code>print (df.Column1.str.title())
0 The Apple
1 The Pear
2 Green Tea
Name: Column1, dtype: object
</code></pre>
<p>Another very similar method is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.capitalize.html" rel="nofollow"><code>str.capitalize</code></a>, but it uppercases only the first letter of the whole string:</p>
<pre><code>print (df.Column1.str.capitalize())
0 The apple
1 The pear
2 Green tea
Name: Column1, dtype: object
</code></pre>
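The underlying string methods can be checked without pandas; `title()` capitalizes every word, while `capitalize()` only touches the first character of the whole string:

```python
words = ['The apple', 'the Pear', 'Green tea']
print([w.title() for w in words])  # ['The Apple', 'The Pear', 'Green Tea']
print('the apple'.capitalize())    # 'The apple'
```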
| 2 | 2016-08-25T09:46:41Z | [
"python",
"string",
"pandas",
"dataframe",
"capitalization"
] |
Numpy---How to substitute by certain multi-elements in array at the same time? | 39,141,895 | <p>I have a problem substituting data in an array.
Say,</p>
<pre><code>a = [1, 0, 0]
b = [0, 0, 0]
c = [0, 0]
X = numpy.zeros((3, 3, 2))
</code></pre>
<p>and I have a matrix <code>Y</code> with shape (2,3,2), which is not a zero matrix.</p>
<p>Now I want to set these elements of X from Y directly:</p>
<pre><code>X[tuple(numpy.where(a==0)[0]),
tuple(numpy.where(b==0)[0]),
tuple(numpy.where(c==0)[0])] = Y
</code></pre>
<p>I got the error <code>shape mismatch: objects cannot be broadcast to a single shape</code></p>
| 1 | 2016-08-25T09:46:51Z | 39,142,599 | <p>You could use <a href="http://docs.scipy.org/doc/numpy-1.9.0/reference/generated/numpy.ix_.html" rel="nofollow"><code>np.ix_</code></a> to construct index arrays appropriate for indexing <code>X</code>:</p>
<pre><code>import numpy as np
np.random.seed(2016)
a=np.array([1, 0, 0])
b=np.array([0, 0, 0])
c=np.array([0, 0])
X = np.zeros((3,3,2))
Y = np.random.randint(1, 10, size=(2,3,2))
idx = np.ix_(a==0, b==0, c==0)
X[idx] = Y
print(X)
</code></pre>
<p>yields</p>
<pre><code>array([[[ 0.,  0.],
        [ 0.,  0.],
        [ 0.,  0.]],

       [[ 9.,  8.],
        [ 3.,  7.],
        [ 4.,  5.]],

       [[ 2.,  2.],
        [ 3.,  3.],
        [ 9.,  9.]]])
<hr>
<p>Alternatively, you could construct a boolean mask</p>
<pre><code>mask = (a==0)[:,None,None] & (b==0)[None,:,None] & (c==0)[None,None,:]
X[mask] = Y
</code></pre>
<p>Indexing <code>(a==0)</code> as in <code>(a==0)[:,None,None]</code> <a href="http://stackoverflow.com/q/9510252/190597">adds new axes</a> to the 1D boolean array <code>(a==0)</code>. <code>(a==0)[:,None,None]</code> has shape (3,1,1). Similarly, <code>(b==0)[None,:,None]</code> has shape (1,3,1), and <code>(c==0)[None,None,:]</code> has shape (1,1,2).</p>
<p>When combined with <code>&</code> (bitwise-and), the three arrays are <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcasted</a> to one common shape, (3,3,2). Thus, <code>X</code> gets indexed by one boolean array of shape (3,3,2) in</p>
<pre><code>X[mask] = Y
</code></pre>
| 1 | 2016-08-25T10:18:26Z | [
"python",
"arrays",
"numpy"
] |
order of unpacking *args in Python | 39,141,899 | <p>I wonder why the variable <code>last</code> is <code>5</code>
when I do
<pre><code>first, *rest, last = 1,2,3,4,5
</code></pre>
<p>I thought assignment goes from left to right, thus
<code>*rest</code> would be <code>[2,3,4,5]</code>, but it actually is <code>[2,3,4]</code>.
And I thought that last would be empty, or this code would cause an error, but surprisingly it works, and I don't understand why.</p>
| 1 | 2016-08-25T09:47:07Z | 39,141,953 | <p>This syntax is valid only in Python 3; it is called "extended unpacking" and is defined in PEP 3132 - <a href="https://www.python.org/dev/peps/pep-3132/" rel="nofollow">https://www.python.org/dev/peps/pep-3132/</a></p>
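A small self-contained illustration of the rule: the starred name soaks up whatever the plain names on either side do not claim, and may end up empty:

```python
first, *rest, last = 1, 2, 3, 4, 5
print(first, rest, last)  # 1 [2, 3, 4] 5

head, *tail = [10]
print(head, tail)         # 10 []
```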
| 4 | 2016-08-25T09:49:50Z | [
"python",
"python-3.x",
"args"
] |
order of unpacking *args in Python | 39,141,899 | <p>I wonder why the variable <code>last</code> is <code>5</code>
when I do
<pre><code>first, *rest, last = 1,2,3,4,5
</code></pre>
<p>I thought assignment goes from left to right, thus
<code>*rest</code> would be <code>[2,3,4,5]</code>, but it actually is <code>[2,3,4]</code>.
And I thought that last would be empty, or this code would cause an error, but surprisingly it works, and I don't understand why.</p>
| 1 | 2016-08-25T09:47:07Z | 39,141,978 | <p>Just because it's unpacking feature. </p>
<p><code>first</code> and <code>last</code> are just variables, when <code>*rest</code> are arguments, so they get everything between <code>first</code> and <code>last</code> value in tuple (1,2,3,4,5).</p>
<p>If you want to have you wrote:</p>
<pre><code>[2,3,4,5]
</code></pre>
<p>then just use:</p>
<pre><code>first, *args = 1,2,3,4,5
# first --> 1
# args --> [2,3,4,5]
</code></pre>
| 0 | 2016-08-25T09:50:49Z | [
"python",
"python-3.x",
"args"
] |
Improve flatten function in python | 39,141,938 | <p>The given function needs some improvements:</p>
<pre class="lang-py prettyprint-override"><code>def flatten(d, parent_key=''):
items = []
for k, v in d.items():
try:
items.extend(flatten(v, '%s%s.' % (parent_key, k)).items())
except AttributeError:
items.append(('%s%s' % (parent_key, k), v))
return dict(items)
</code></pre>
<p>I want to modify the above function<br>
so that it also flattens lists:</p>
<pre class="lang-json prettyprint-override"><code>{'d': [1, 2, 3]}
</code></pre>
<p>To something like this:</p>
<pre class="lang-json prettyprint-override"><code>{'d[0]': 1, 'd[1]': 3, 'd[2]': 3}
</code></pre>
<p><strong>EDIT:</strong>
This code does it for me, but it's not as sleek as the first one. Any ideas for improvements?</p>
<pre class="lang-py prettyprint-override"><code>def flatten_dict(d):
def items():
for key, value in d.items():
if isinstance(value, dict):
for subkey, subvalue in flatten_dict(value).items():
yield key + "." + subkey, subvalue
elif isinstance(value, list):
for index, val in enumerate(value):
yield key + "[" + str(index) + "]" , value[index]
else:
yield key, value
return dict(items())
</code></pre>
| 0 | 2016-08-25T09:49:09Z | 39,142,214 | <p>Try this,</p>
<pre><code>def flattern(dict_):
result = {}
for key in dict_:
for i,j in enumerate(dict_[key]):
result[key+'['+str(i)+']'] = j
return result
</code></pre>
<p>call function like this,</p>
<pre><code>In [40]: d
Out[40]: {'a': [1, 2, 3, 7], 'd': [1, 2, 3]}
In [41]: flattern(d)
Out[41]: {'a[0]': 1, 'a[1]': 2, 'a[2]': 3, 'a[3]': 7, 'd[0]': 1, 'd[1]': 2, 'd[2]': 3}
</code></pre>
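<p>If the dictionary can mix plain values, nested dicts and lists, the two approaches can be combined into one recursive helper. This is my own sketch (the name <code>flatten_any</code> is made up, and lists containing dicts are not recursed into):</p>

```python
def flatten_any(d, parent_key=''):
    # Flatten nested dicts with dot notation and lists with index notation.
    items = {}
    for key, value in d.items():
        name = parent_key + key
        if isinstance(value, dict):
            items.update(flatten_any(value, name + '.'))
        elif isinstance(value, list):
            for i, item in enumerate(value):
                items['%s[%d]' % (name, i)] = item
        else:
            items[name] = value
    return items

print(flatten_any({'d': [1, 2, 3], 'e': {'f': 7}}))
# {'d[0]': 1, 'd[1]': 2, 'd[2]': 3, 'e.f': 7}
```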
| 0 | 2016-08-25T10:01:21Z | [
"python",
"list",
"dictionary",
"flatten"
] |
Python - method of a class with an optional argument and default value a class member | 39,141,993 | <p>I have something like this (I know this code doesn't work, but it's the closer to what I want to achieve):</p>
<pre><code>class A:
def __init__(self):
self.a = 'a'
def method(self, a=self.a):
print a
myClass = A()
myClass.method('b') # print b
myClass.method() # print a
</code></pre>
<p>What I've done so far, but I do not like it, is:</p>
<pre><code>class A:
def __init__(self):
self.a = 'a'
def method(self, a=None):
if a is None:
a = self.a
print a
myClass = A()
myClass.method('b') # print b
myClass.method() # print a
</code></pre>
| 1 | 2016-08-25T09:51:33Z | 39,142,073 | <p>Default arguments are evaluated at <strong>definition time</strong>. When the class and method are being defined, no instance exists yet, so <code>self.a</code> is not available.</p>
<p>Your working code example is actually the only clean way of achieving this behavior.</p>
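<p>A small demo of that definition-time evaluation (my addition): the default is computed once, when the <code>def</code> statement runs, not on every call:</p>

```python
import time

def stamp(t=time.time()):
    # time.time() ran exactly once, when the function was defined
    return t

a = stamp()
time.sleep(0.01)
b = stamp()
print(a == b)  # True: both calls saw the same default value
```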
| 3 | 2016-08-25T09:55:17Z | [
"python"
] |
Python - method of a class with an optional argument and default value a class member | 39,141,993 | <p>I have something like this (I know this code doesn't work, but it's the closer to what I want to achieve):</p>
<pre><code>class A:
def __init__(self):
self.a = 'a'
def method(self, a=self.a):
print a
myClass = A()
myClass.method('b') # print b
myClass.method() # print a
</code></pre>
<p>What I've done so far, but I do not like it, is:</p>
<pre><code>class A:
def __init__(self):
self.a = 'a'
def method(self, a=None):
if a is None:
a = self.a
print a
myClass = A()
myClass.method('b') # print b
myClass.method() # print a
</code></pre>
| 1 | 2016-08-25T09:51:33Z | 39,142,079 | <p>The default is evaluated at method definition time, i.e. when the interpreter executes the class body, which usually happens only once. Assigning a dynamic value as default can only happen within the method body, and the approach you use is perfectly fine.</p>
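<p>One common variation (my addition, not part of this answer): if <code>None</code> could itself be a meaningful argument, use a private sentinel object instead, so only a truly omitted argument falls back to <code>self.a</code>:</p>

```python
_SENTINEL = object()  # unique marker no caller can accidentally pass

class A:
    def __init__(self):
        self.a = 'a'

    def method(self, a=_SENTINEL):
        if a is _SENTINEL:  # identity check: only the sentinel triggers the default
            a = self.a
        return a

obj = A()
print(obj.method())      # a
print(obj.method(None))  # None is passed through, not replaced
```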
| 2 | 2016-08-25T09:55:46Z | [
"python"
] |
Why do I get different predict results between Xgboost's python and CLI version? | 39,142,015 | <p>Recently, when I tried to use xgboost's CLI version for prediction, I found that its results are much different from the Python version's.</p>
<p>With Python, I predict it like this:</p>
<pre><code>data = xgb.DMatrix(X)
bst = xgb.Booster()
bst.load_model(modelfile)
leafindex = bst.predict(data, pred_leaf=False)
</code></pre>
<p>And use CLI as below:</p>
<pre><code>./xgboost xgboost.conf task=pred model_in=../models/gb.model_depth4_150trees_2016-07-02
</code></pre>
<p>here is my configuration file:</p>
<pre><code># General Parameters, see comment for each definition
# can be gbtree or gblinear
booster = gbtree
# choose logistic regression loss function for binary classification
objective = binary:logistic
# Tree Booster Parameters
# step size shrinkage
eta = 1.0
# minimum loss reduction required to make a further partition
gamma = 1.0
# minimum sum of instance weight(hessian) needed in a child
min_child_weight = 1
# maximum depth of a tree
max_depth = 4
# Task Parameters
# the number of round to do boosting
num_round = 150
# 0 means do not save any model except the final round model
save_period = 0
# The path of training data
data = "agaricus.txt.train"
# The path of validation data, used to monitor training process, here [test] sets name of the validation set
eval[test] = "agaricus.txt.test"
# The path of test data
test:data = "data"
</code></pre>
<p>Python input data format:</p>
<pre><code>8 201 1 2 26 10000.0 8589934592 32 0 0 1000000.0 0
2 3 1 1 50 10000.0 8589934592 32 524288 8 1000000.0 0
2 3 2 2 19 10000.0 8589934592 512 512 8 1000000.0 0
4 24 1 1 23 10000.0 8589934592 8192 0 0 1000000.0 0
1 2 2 3 50 10000.0 8589934592 32 512 8 1000000.0 0
21 1 2 3 48 10000.0 8589934592 32 512 8 1000000.0 0
5 12 1 2 42 10000.0 137438953472 32 512 8 1000000.0 0
2 11 2 2 86 10000.0 0 0 0 0 1000000.0 0
1 10 2 8 99 10000.0 8589934592 32 65536 8 1000000.0 0
2 11 2 8 97 10000.0 8589934592 32 65536 8 1000000.0 0
4 5 1 1 4 10000.0 1073741824 32 0 0 1000000.0 0
...
</code></pre>
<p>CLI input format:</p>
<pre><code>0 1:8 2:201 3:1 4:2 5:26 6:10000.0 7:8589934592 8:32 9:0 10:0 11:1000000.0 12:0
0 1:2 2:3 3:1 4:1 5:50 6:10000.0 7:8589934592 8:32 9:524288 10:8 11:1000000.0 12:0
0 1:2 2:3 3:2 4:2 5:19 6:10000.0 7:8589934592 8:512 9:512 10:8 11:1000000.0 12:0
0 1:4 2:24 3:1 4:1 5:23 6:10000.0 7:8589934592 8:8192 9:0 10:0 11:1000000.0 12:0
0 1:1 2:2 3:2 4:3 5:50 6:10000.0 7:8589934592 8:32 9:512 10:8 11:1000000.0 12:0
0 1:21 2:1 3:2 4:3 5:48 6:10000.0 7:8589934592 8:32 9:512 10:8 11:1000000.0 12:0
0 1:5 2:12 3:1 4:2 5:42 6:10000.0 7:137438953472 8:32 9:512 10:8 11:1000000.0 12:0
...
</code></pre>
<p>The results of python version:</p>
<pre><code>0.138298
0.00288907
0.0114002
0.0477143
0.00185653
0.00455882
0.000503023
0.000817317
0.00332584
0.00178041
0.0666806
0.03003
...
</code></pre>
<p>the CLI version:</p>
<pre><code>0.000100178
0.201246
0.449562
0.0506984
0.451953
0.389587
0.034748
0.992795
0.00348666
0.00661674
0.0186095
0.0260032
0.996163
0.259104
0.552341
0.972762
...
</code></pre>
<p>I used the same model file, and with the CLI version about 40% of the predicted values were higher than 0.5, which was not in accordance with our expectations.</p>
| 1 | 2016-08-25T09:52:26Z | 39,159,108 | <p>Solved!</p>
<p>It seems that a model file trained by the Python version and one trained by the CLI version cannot be used interchangeably.
And even when each version predicts with the model it trained itself, the results still differ slightly, like these:</p>
<pre><code>by python by cli
0.169874 0.222063
0.999997 0.999554
0.00454239 0.000879413
0.0140518 0.00824018
0.0148116 0.00859811
0.000353913 0.000880754
0.0207635 0.019058
0.000916939 0.000579058
0.00109237 0.000286653
0.00247333 0.00272115
0.0650928 0.0319875
0.946068 0.965301
0.997704 0.999615
0.987644 0.991665
0.997242 0.984403
0.948666 0.909703
0.000781899 0.00079996
0.000319449 0.000138011
0.0400793 0.164134
0.00216081 0.000781626
0.023867 0.0323994
</code></pre>
| 0 | 2016-08-26T05:48:52Z | [
"python",
"shell",
"machine-learning",
"xgboost"
] |
Retain Messages until a Subscription is Made using Python + Stomp | 39,142,052 | <p>I am currently writing two scripts to subscribe to a message server using the <em>stomp</em> client library, <em>write.py</em> to write data and <em>read.py</em> to get data.</p>
<p>If I start <em>read.py</em> first and then run <em>write.py</em>, <em>read.py</em> receives the messages correctly. </p>
<p>However, if I run <em>write.py</em> first and then run <em>read.py</em>, <em>read.py</em> does not retrieve any messages previously sent to the server.</p>
<p>Below are relevant parts of the scripts.</p>
<p>How can I ensure that messages put into the queue by <em>write.py</em> are retained until <em>read.py</em> subscribes and retrieves them?</p>
<p><strong>write.py</strong></p>
<pre><code>def writeMQ(msg):
queue = '/topic/test'
conn = stomp.Connection(host_and_ports=[(MQ_SERVER, MQ_PORT)])
try:
conn.start()
conn.connect(MQ_USER, MQ_PASSWD, wait=True)
conn.send(body=msg, destination=queue, persistent=True)
except:
traceback.print_exc()
finally:
conn.disconnect()
return
</code></pre>
<p><strong>read.py</strong></p>
<pre><code>class MyListener(stomp.ConnectionListener):
def on_error(self, headers, message):
print ('received an error {0}'.format(message))
def on_message(self, headers, message):
print ('received an message {0}'.format(message))
def readMQ():
queue = '/topic/test'
conn = stomp.Connection(host_and_ports=[(MQ_SERVER, MQ_PORT)])
try:
conn.set_listener("", MyListener())
conn.start()
conn.connect(MQ_USER, MQ_PASSWD, wait=True)
conn.subscribe(destination=queue, ack="auto", id=1)
stop = raw_input()
except:
traceback.print_exc()
finally:
conn.disconnect()
return
</code></pre>
| 0 | 2016-08-25T09:54:39Z | 39,144,105 | <p>The problem is that the messages are being sent to a topic.</p>
<p>The <a href="https://activemq.apache.org/apollo/documentation/stomp-manual.html#Destination_Types" rel="nofollow">Apollo Documentation</a> describes the difference between topics and queues as follows:</p>
<blockquote>
<p>Queues hold on to unconsumed messages even when there are no subscriptions attached, while a topic will drop messages when there are no connected subscriptions.</p>
</blockquote>
<p>Thus, when <em>read.py</em> is started first and listening, the topic recognizes the subscription and forwards the message. But when <em>write.py</em> is started first, the message is dropped because there is no subscribed client.</p>
<p>So you can use a queue instead of a topic. If the server is able to create a queue silently simply set</p>
<pre><code>queue = '/queue/test'
<p>I don't know which version of stomp is being used, but I cannot find the parameter</p>
<pre><code>send(..., persistent=True)
<p>Anyway persisting is not the right way to go since it still does not allow for messages to simply be retained for a later connection, but saves the messages in case of a server failure.</p>
<p>You can use the</p>
<pre><code>retain:set
</code></pre>
<p>header for topic messages instead.</p>
| 1 | 2016-08-25T11:30:43Z | [
"python",
"python-2.7",
"message-queue",
"stomp"
] |
Solution to avoid hard coding python | 39,142,137 | <p>I want a solution to avoid hard coding in this if condition of my python script:</p>
<pre><code>if (x.get('name')=='location'):
</code></pre>
<p>this is to be used for extracting the location tag from an xml file . user must modify this according to the xml file being used.
so what i must do ?</p>
| -4 | 2016-08-25T09:58:16Z | 39,142,206 | <pre><code>check_tag = raw_input("Enter the tag you wish to search for: ")
if (x.get('name')==check_tag):
</code></pre>
<p>use <code>input()</code> instead of <code>raw_input()</code> if using python 3</p>
| 0 | 2016-08-25T10:01:11Z | [
"python"
] |
Python Grok Learning exception of type IndexError | 39,142,168 | <p>I just started learning Python a few days ago and I have been using Grok Learning. For the challenge I have everything working as far as I can see, but when I submit it I am told "Testing yet another case that starts with a vowel. Your submission raised an exception of type IndexError. This occurred on line 8 of your submission." I am not sure how to solve this or even what I am doing wrong. By the way, I am making a program that checks whether the message starts with a vowel and, if so, repeats the first letter 10 times; if not, it repeats the second letter 10 times.</p>
<pre><code>msg = input("Enter a word: ")
h = " "
half =" "
first = msg[0]
second = msg[1]
msg2 = "gg"
length = len(msg)
third = msg[2]
if first not in "aeiou":
if second != third:
print(msg.replace(msg[1], msg[1] * 10))
elif second == third:
msg2 = third * 6
msg3 = (msg.replace(msg[2], msg2))
msg4 = first + msg3[2:]
print(msg4)
else:
half = first * 10
msg10 = msg[1:length]
print((half) + msg10)
</code></pre>
| 0 | 2016-08-25T09:59:26Z | 39,142,368 | <p><strong>You have recieved only two letters string as input</strong> which means you cannot access <code>msg[2]</code> because there are no such index. To handle that case you can define third value as next:</p>
<pre><code>if len(msg) > 2:
third = msg[2]
else:
third = None
</code></pre>
<p>Or use one liner:</p>
<pre><code>third = msg[2] if len(msg) > 2 else None
</code></pre>
| 0 | 2016-08-25T10:08:18Z | [
"python"
] |
Python Grok Learning exception of type IndexError | 39,142,168 | <p>I just started learning Python a few days ago and I have been using Grok Learning. For the challenge I have everything working as far as I can see, but when I submit it I am told "Testing yet another case that starts with a vowel. Your submission raised an exception of type IndexError. This occurred on line 8 of your submission." I am not sure how to solve this or even what I am doing wrong. By the way, I am making a program that checks whether the message starts with a vowel and, if so, repeats the first letter 10 times; if not, it repeats the second letter 10 times.</p>
<pre><code>msg = input("Enter a word: ")
h = " "
half =" "
first = msg[0]
second = msg[1]
msg2 = "gg"
length = len(msg)
third = msg[2]
if first not in "aeiou":
if second != third:
print(msg.replace(msg[1], msg[1] * 10))
elif second == third:
msg2 = third * 6
msg3 = (msg.replace(msg[2], msg2))
msg4 = first + msg3[2:]
print(msg4)
else:
half = first * 10
msg10 = msg[1:length]
print((half) + msg10)
</code></pre>
| 0 | 2016-08-25T09:59:26Z | 39,142,389 | <p>Well, this probably means the entered message has a length less than 3.
You should check if the message is long enough before trying to access its content, otherwise you'll indeed get an <code>IndexError</code></p>
<pre><code>length = len(msg)
if length > 2:
third = msg[2]
</code></pre>
| 0 | 2016-08-25T10:09:14Z | [
"python"
] |
Define 2D matrix with non-zero elements only in a rectangular subregion | 39,142,199 | <p>Given two specified dimensions <code>N1</code> and <code>N2</code>, and some boundaries <code>ymin, ymax, xmin, xmax</code>, I want to build an <code>N1xN2</code> matrix (be it a numpy array or a plain python list) which is zero everywhere, except in a rectangular subregion specified by its boundaries.</p>
<p>To be more precise, I'm looking for how to implement a function <code>rectangular_sparse_matrix(N1, N2, ymin, ymax, xmin, xmax)</code> such that, for example,</p>
<pre><code>N1 = N2 = 5
rectangular_sparse_matrix(N1, N2, 0, 2, 1, 2)
</code></pre>
<p>returns</p>
<pre><code>[[ 0 1 1 0 0]
[ 0 1 1 0 0]
[ 0 1 1 0 0]
[ 0 0 0 0 0]
[ 0 0 0 0 0]]
</code></pre>
<hr>
<p>The naive way would be to just loop through the elements, like in</p>
<pre><code>def rectangular_sparse_matrix(N1, N2, ymin, ymax, xmin, xmax):
matrix = np.zeros([N1, N2])
for row in range(N1):
for col in range(N2):
if ymin <= row <= ymax and xmin <= col <= xmax:
matrix[row, col] = 1
return matrix
</code></pre>
<p>which does the job.
However, I was wondering if there was some more efficient/easier way to achieve something like this, maybe using some obscure (to me) numpy feature or something.</p>
| 1 | 2016-08-25T10:00:59Z | 39,142,337 | <pre><code>def rectangular_sparse_matrix(N1, N2, ymin, ymax, xmin, xmax):
m = np.zeros([N1, N2])
m[ymin:ymax+1, xmin:xmax+1] = 1
return m
</code></pre>
<p>In this function, numpy array slicing is used.</p>
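<p>For example, reproducing the case from the question (a usage check I added):</p>

```python
import numpy as np

def rectangular_sparse_matrix(N1, N2, ymin, ymax, xmin, xmax):
    m = np.zeros([N1, N2])
    m[ymin:ymax+1, xmin:xmax+1] = 1  # +1 because the boundaries are inclusive
    return m

m = rectangular_sparse_matrix(5, 5, 0, 2, 1, 2)
print(m)
# the 3x2 block of ones sits in rows 0-2, columns 1-2
```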
| 2 | 2016-08-25T10:07:08Z | [
"python",
"numpy",
"matrix",
"sparse-matrix"
] |
SyntaxError: encoding declaration in Unicode string | 39,142,304 | <p>If I try to use a magic comment like <code>#coding=utf-8</code> on top of a file, here's what happens:</p>
<pre><code>Traceback (most recent call last):
File <string>, line 0
SyntaxError: encoding declaration in Unicode string
</code></pre>
<p>I really haven't done anything wrong. Here is the code:</p>
<pre><code>#coding=utf-8
string = raw_input()
chars = {}
for i in string:
if i in chars:
chars[i] += 1
else:
chars[i] = 0
print chars
</code></pre>
<p>I use repl.it.</p>
| 0 | 2016-08-25T10:05:48Z | 39,142,445 | <p>You omitted something from your question: You are using <code>exec</code> to execute this code. And you passed a <em>Unicode object</em> to exec, which means you already have stated that the source is Unicode text:</p>
<pre><code>>>> code = '''\
... # coding=utf8
... print 'hello world!'
... '''
>>> exec code
hello world!
>>> exec code.decode('utf8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 0
SyntaxError: encoding declaration in Unicode string
</code></pre>
<p>You can't use a PEP 263 declaration in Unicode text passed to <code>exec</code>.</p>
<p>If you are using a 'custom' environment like <a href="https://repl.it/languages/python" rel="nofollow">repl.it</a>, then yes, such environments invariably use tricks like <code>exec</code> to execute code, and they load the source code as Unicode from your browser. See the <a href="https://repl.it/Crfs" rel="nofollow">actual code used</a>, which passes JSON-sourced strings to <code>exec</code> (where such strings are always going to be <code>unicode</code> strings).</p>
| 4 | 2016-08-25T10:11:40Z | [
"python",
"unicode"
] |
output on a new line in python curses | 39,142,340 | <p>I am using the curses module in Python to display output in real time by reading a file.
The string messages are output to the console using the addstr() function,
but I am not able to print to a new line wherever I need.</p>
<p>sample code: </p>
<pre><code>import json
import curses
w=curses.initscr()
try:
while True:
with open('/tmp/install-report.json') as json_data:
beta = json.load(json_data)
w.erase()
w.addstr("\nStatus Report for Install process\n=========\n\n")
for a1, b1 in beta.iteritems():
w.addstr("{0} : {1}\n".format(a1, b1))
w.refresh()
finally:
curses.endwin()
</code></pre>
<p>The above does not actually output the strings on a new line (notice the \n in addstr()) with each iteration. Moreover, the script fails with an error if I resize the terminal window. </p>
<pre><code>w.addstr("{0} ==> {1}\n".format(a1, b1))
_curses.error: addstr() returned ERR
</code></pre>
| 1 | 2016-08-25T10:07:20Z | 39,154,432 | <p>There's not enough program to offer more than general advice:</p>
<ul>
<li>you will get an error when printing to the end of the screen if your script does not enable scrolling (see <a href="https://docs.python.org/2/library/curses.html#curses.window.scroll" rel="nofollow"><code>window.scroll</code></a>).</li>
<li>if you resize the terminal window, you will have to read the keyboard to dispose of any <code>KEY_RESIZE</code> (and ignore errors).</li>
</ul>
<p>Regarding the expanded question, these features would be used something like this:</p>
<pre><code>import json
import curses
w=curses.initscr()
w.scrollok(1) # enable scrolling
w.timeout(1) # make 1-millisecond timeouts on `getch`
try:
while True:
with open('/tmp/install-report.json') as json_data:
beta = json.load(json_data)
w.erase()
w.addstr("\nStatus Report for Install process\n=========\n\n")
for a1, b1 in beta.iteritems():
w.addstr("{0} : {1}\n".format(a1, b1))
ignore = w.getch() # wait at most 1msec, then ignore it
finally:
curses.endwin()
</code></pre>
| 1 | 2016-08-25T20:43:45Z | [
"python",
"curses",
"python-curses"
] |
Merging Python Dictionaries and adding similar value of values | 39,142,369 | <p>I have a list of dictionaries like :</p>
<pre><code>[{'A': 2, 'B': u'cat'}, {'A': 1, 'B': u'dog'}, {'A': 3, 'B': u'rabbit'}, {'A': 4, 'B': u'cat'}, {'A': 4, 'B': u'dog'}, {'A': 8, 'B': u'rabbit'}]
</code></pre>
<p>I want to convert it into :</p>
<pre><code>[{'cat':'6'},{'dog':'5'}, {'rabbit':'11'}]
</code></pre>
<p>I tried doing something like this :</p>
<pre><code>super_dict = collections.defaultdict(set)
for d in ss:
for k, v in d.iteritems():
super_dict[k].add(v)
</code></pre>
<p>But it returns : </p>
<pre><code>{'A': set([2, 1, 3, 4, 4, 7]), 'B': set([u'cat', u'dog', u'rabbit'])}
</code></pre>
| -4 | 2016-08-25T10:08:19Z | 39,142,841 | <pre><code>>>> my_list = [{'A': 2, 'B': u'cat'}, {'A': 1, 'B': u'dog'}, {'A': 3, 'B': u'rabbit'}, {'A': 4, 'B': u'cat'}, {'A': 4, 'B': u'dog'}, {'A': 8, 'B': u'rabbit'}]
>>> new_dict = {}
>>> for item in my_list:
... new_dict[item['B']] = new_dict.get(item['B'], 0) + item['A']
...
>>> new_dict
{u'dog': 5, u'rabbit': 11, u'cat': 6}
</code></pre>
| 1 | 2016-08-25T10:29:26Z | [
"python",
"python-2.7",
"dictionary"
] |
Histogram with Python | 39,142,435 | <p>I have a dataframe df_Ratio with this structure </p>
<pre><code>class_energy ACT_TIME_AERATEUR_1_F1 ACT_TIME_AERATEUR_1_F3 ACT_TIME_AERATEUR_1_F5 ACT_TIME_AERATEUR_1_F6 ACT_TIME_AERATEUR_1_F7 ACT_TIME_AERATEUR_1_F8
high 0.166667 0.166667 0.166667 0.166667 0.166667 0.166667
low 0.166964 0.167003 0.167081 0.166935 0.166058 0.165961
medium 0.167268 0.167400 0.167165 0.167334 0.165224 0.165609
</code></pre>
<p>I need to create a histogram concerning only <strong>high</strong> rows :</p>
<p>In the x axis <code>ACT_TIME_AERATEUR_1_F5 ACT_TIME_AERATEUR_1_F6 ACT_TIME_AERATEUR_1_F7 ACT_TIME_AERATEUR_1_F8</code> </p>
<p>and the y axis represents the values : <code>"0.166667 0.166667 0.166667 0.166667 0.166667 0.166667 "</code></p>
<p>Any idea please?</p>
| 0 | 2016-08-25T10:11:23Z | 39,143,971 | <p>I'm not sure if I got everything right, but here is a working example:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.txt', sep=' ')
df = df.T # transpose dataframe
plt.bar(range(len(df.high)-1), df.high[1:], align='center')
plt.xticks(range(len(df.high)-1), df.index[1:], size='small')
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
</code></pre>
<p>Probably you have to adjust the data range by <code>[1:]</code>, depending on how exactly your data looks and on what you want to plot.</p>
| 0 | 2016-08-25T11:25:03Z | [
"python",
"matplotlib",
"plot",
"dataframe",
"histogram"
] |
How to render Form instance in Django CreateView & UpdateView? | 39,142,436 | <p>I'm trying to refactor my code into class-based views, and I'm having trouble understanding how to return/render at the end. I'm trying to render the form with the supplied POST data on success (or fail), with an appropriate confirmation message, so that it can be updated. Here's some example code, with comments showing what I want to return (where I don't know how):</p>
<pre><code>from django.views.generic import CreateView, UpdateView
from my_app.forms import ProductCreateForm
from my_app.models import Product
from django.contrib.auth.models import User
class ProductCreate(CreateView):
"""Simple CreateView to create a Product with ForeignKey relation to User"""
model = Product
form_class = ProductCreateForm
template = 'product_create.html'
def form_valid(self, form):
user = User.objects.filter(username=form.cleaned_data['user_email']).first()
if user is None:
messages.error("Invalid user email specified")
return some http response here...
#How to I render the page again, but with the data already in the
#form we don't have to re-enter it all?
form.save()
messages.info("Successfully saved the product!")
return some http response here...
#How do I redirect to the UpdateView here with the Product instance
#already in the form?
class ProductUpdate(UpdateView):
model = Product
form_class = ProductCreateForm
template = 'product_create.html'
</code></pre>
| 0 | 2016-08-25T10:11:24Z | 39,142,702 | <p>You shouldn't be doing any of that there. The view is already taking care of calling the validation methods and redisplaying the form if validation fails; <code>form_valid</code> is only called if the form is already valid. Your user check should go into the form itself:</p>
<pre><code>class ProductCreateForm(forms.ModelForm):
...
def clean_user_email(self):
user = User.objects.filter(username=self.cleaned_data['user_email']).first()
if user is None:
raise forms.ValidationError("Invalid user email specified")
return user
</code></pre>
<p>For the second part, redirecting to the update view, you can do that by defining the <code>get_success_url</code> method; what it returns depends on the URL you have defined for the ProductUpdate view, but assuming that URL takes an <code>id</code> argument it would be something like this:</p>
<pre><code>class ProductCreate(CreateView):
def get_success_url(self):
return reverse('product_update', kwargs={'id': self.instance.pk})
</code></pre>
<p>That leaves your <code>form_valid</code> method only needing to set the message on success, and you don't even need to do that if you use the <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/messages/#adding-messages-in-class-based-views" rel="nofollow">SuccessMessageMixin</a> from contrib.messages.</p>
| 1 | 2016-08-25T10:22:44Z | [
"python",
"django"
] |
Convert python code with sha to php | 39,142,539 | <p>I've got a problem with converting Python to PHP. I've got the following code in Python: </p>
<pre><code>user = "DdrkmK5uFKmaaeNqfqReMADSUJ4sVSLrV2A8Bvs8"
passing = "K9hvwANSBW5tLYzuWptWMByTtzZZKHzm"
sha = hashlib.sha256()
sha.update(user)
sha.update(passing)
sha_A = [ord(x) for x in sha.digest()]
</code></pre>
<p><code>sha_A</code> is the following array: </p>
<p>[231, 13, 239, 136, 20, 198, 76, 121, 67, 163, 251, 153, 114, 13, 65, 203, 41, 37, 64, 168, 43, 69, 81, 103, 235, 161, 15, 58, 82, 57, 217, 178]</p>
<hr>
<p>I already converted it to php:</p>
<pre><code>$user = "DdrkmK5uFKmaaeNqfqReMADSUJ4sVSLrV2A8Bvs8";
$passing = "K9hvwANSBW5tLYzuWptWMByTtzZZKHzm"
$sha = hash_init("sha256");
$sha = hash_update($sha, $user);
$sha = hash_update($sha, $passing);
$sha_A = [];
$i = 0;
$digest = openssl_digest($sha, "sha256");
$digest = str_split($digest);
foreach ($digest as $x) {
$sha_A[$i] = ord($x);
$i = $i + 1;
}
</code></pre>
<p>But the returned array <code>$sha</code> looks like this one:</p>
<p>[101, 51, 98, 48, 99, 52, 52, 50, 57, 56, 102, 99, 49, 99, 49, 52, 57, 97, 102, 98, 102, 52, 99, 56, 57, 57, 54, 102, 98, 57, 50, 52]</p>
<p>Maybe some of you will find my mistake?</p>
| 0 | 2016-08-25T10:15:50Z | 39,143,015 | <p>I saw few errors in your PHP code.</p>
<p>This is a python snippet:</p>
<pre><code>>>> sha = hashlib.sha256()
>>> sha.update(user)
>>> sha.update(passing)
>>> sha_A = [ord(x) for x in sha.digest()]
[135, 146, 107, 215, 70, 126, 179, 21, 19, 177, 191, 236, 182, 136, 192, 53, 148, 42, 160, 24, 63, 224, 170, 211, 32, 131, 59, 146, 60, 162, 77, 2]
</code></pre>
<p>And the PHP version, corrected:</p>
<pre><code>$ctx = hash_init('sha256');
hash_update($ctx, $user);
hash_update($ctx, $passing);
$digest = hash_final($ctx, true);
$sha_A = [];
foreach (str_split($digest) as $x) {
$sha_A[] = ord($x);
}
[135, 146, 107, 215, 70, 126, 179, 21, 19, 177, 191, 236, 182, 136, 192, 53, 148, 42, 160, 24, 63, 224, 170, 211, 32, 131, 59, 146, 60, 162, 77, 2]
</code></pre>
<p>In your PHP version, <code>$sha = hash_update($sha, $user);</code> was bad because <a href="https://secure.php.net/manual/function.hash-update.php" rel="nofollow">hash_update</a> returns a boolean. The first argument is called the <code>context</code> and is the result of <a href="https://secure.php.net/manual/en/function.hash-init.php" rel="nofollow">hash_init</a>, the second one is the data to hash. Finally, you call <a href="https://secure.php.net/manual/function.hash-final.php" rel="nofollow">hash_final</a> with the last parameter (<code>raw_output</code>) to <code>true</code> to get binary data.</p>
<p>Last error: using <code>openssl_digest</code> on the SHA result was computing the digest of the SHA digest. Funny, isn't it? :)</p>
| 0 | 2016-08-25T10:37:04Z | [
"php",
"python",
"hash",
"sha",
"digest"
] |
is Dataframe.toPandas always on driver node or on worker nodes? | 39,142,549 | <p>Imagine you are loading a large dataset via the SparkContext and Hive, so this dataset is then distributed across your Spark cluster. For instance, observations (values + timestamps) for thousands of variables.</p>
<p>Now you would use some map/reduce methods or aggregations to organize/analyze your data. For instance grouping by variable name.</p>
<p>Once grouped, you could get all observations (values) for each variable as a timeseries Dataframe. If you now use DataFrame.toPandas</p>
<pre><code>def myFunction(data_frame):
data_frame.toPandas()
df = sc.load....
df.groupBy('var_name').mapValues(_.toDF).map(myFunction)
</code></pre>
<ol>
<li>is this converted to a Pandas Dataframe (per Variable) on each
worker node, or</li>
<li>are Pandas Dataframes always on the driver node and the data is therefore transferred from the worker nodes to the driver?</li>
</ol>
| 0 | 2016-08-25T10:16:12Z | 39,155,081 | <p>There is nothing special about Pandas <code>DataFrame</code> in this context.</p>
<ul>
<li>If <code>DataFrame</code> is created by using <code>toPandas</code> method on <code>pyspark.sql.dataframe.DataFrame</code> <a href="http://stackoverflow.com/a/30991297/1560062">this collects data and creates local Python object on the driver</a>.</li>
<li>If <code>pandas.core.frame.DataFrame</code> is created inside executor process (<a href="http://stackoverflow.com/a/34445113/1560062">for example in <code>mapPartitions</code></a>) you simply get <code>RDD[pandas.core.frame.DataFrame]</code>. There is no distinction between Pandas objects and let's say a <code>tuple</code>.</li>
<li>Finally pseudocode in you example couldn't work becasue you cannot create (in a sensible way) Spark <code>DataFrame</code> (I assume this what you mean by <code>_.toDF</code>) inside executor thread.</li>
</ul>
| 1 | 2016-08-25T21:35:57Z | [
"python",
"hadoop",
"pandas",
"apache-spark",
"pyspark"
] |
Python: How to determine the language? | 39,142,778 | <p>I want to get this: </p>
<pre><code>Input text: "ÑÑÌÑÑкий ÑзÑÌк"
Output text: "Russian"
Input text: "䏿"
Output text: "Chinese"
Input text: "ã«ã»ãã"
Output text: "Japanese"
Input text: "Ø§ÙØ¹ÙØ±ÙØ¨ÙÙÙÙØ©"
Output text: "Arabic"
</code></pre>
<p>How can I do it in python? Thanks.</p>
| -1 | 2016-08-25T10:26:00Z | 39,143,059 | <p>Have you had a look at <a href="https://pypi.python.org/pypi/langdetect?" rel="nofollow">langdetect</a>?</p>
<pre><code>from langdetect import detect
lang = detect("Ein, zwei, drei, vier")
print lang
#output: de
</code></pre>
| 3 | 2016-08-25T10:38:59Z | [
"python",
"string",
"parsing"
] |
Python: How to determine the language? | 39,142,778 | <p>I want to get this: </p>
<pre><code>Input text: "ру́сский язы́к"
Output text: "Russian"
Input text: "中文"
Output text: "Chinese"
Input text: "にほんご"
Output text: "Japanese"
Input text: "العربية"
Output text: "Arabic"
</code></pre>
<p>How can I do it in python? Thanks.</p>
| -1 | 2016-08-25T10:26:00Z | 39,143,700 | <p>You can try determining the Unicode group of the characters in the input string to identify the language (Cyrillic for Russian, for example), and then search for language-specific symbols in the text.</p>
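<p>One way to implement that idea with only the standard library (my own hedged sketch; note that script detection is not full language detection, since e.g. Chinese and Japanese both use CJK ideographs):</p>

```python
import unicodedata

def script_of(text):
    # Return the first word of the Unicode name of the first
    # alphabetic character, e.g. 'CYRILLIC', 'ARABIC', 'CJK', 'LATIN'.
    for ch in text:
        if ch.isalpha():
            return unicodedata.name(ch, '').split(' ')[0]
    return None

print(script_of(u'русский'))  # CYRILLIC
print(script_of(u'العربية'))  # ARABIC
print(script_of(u'hello'))    # LATIN
```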
| 0 | 2016-08-25T11:10:34Z | [
"python",
"string",
"parsing"
] |
Accessing the dataLayer (JS variable) when scrapping with python | 39,142,846 | <p>I'm using Beautiful Soup to scrape webpages. I want to access the dataLayer (a JavaScript variable) that is present on this <a href="http://www.allocine.fr/video/player_gen_cmedia=19561982&cfilm=144185.html" rel="nofollow">webpage</a>. How can I retrieve it using Python?
<a href="http://i.stack.imgur.com/p1SeD.png" rel="nofollow"><img src="http://i.stack.imgur.com/p1SeD.png" alt="enter image description here"></a></p>
| 1 | 2016-08-25T10:29:40Z | 39,146,127 | <p>BeautifulSoup is not a JavaScript emulator, so you can't execute JS and get the content of a var. But maybe this var is populated by an AJAX request, and if you send the same request from your Python script you can get those data. </p>
<p>On the other hand, if this data is statically assigned, then you can extract it using string processing and regular expressions.</p>
<p><em>Disclaimer: sorry for the general answer.</em></p>
| 1 | 2016-08-25T13:03:08Z | [
"javascript",
"python",
"python-2.7",
"web-scraping",
"beautifulsoup"
] |
Accessing the dataLayer (JS variable) when scrapping with python | 39,142,846 | <p>I'm using Beautiful Soup to scrape webpages. I want to access the dataLayer (a JavaScript variable) that is present on this <a href="http://www.allocine.fr/video/player_gen_cmedia=19561982&cfilm=144185.html" rel="nofollow">webpage</a>. How can I retrieve it using Python?
<a href="http://i.stack.imgur.com/p1SeD.png" rel="nofollow"><img src="http://i.stack.imgur.com/p1SeD.png" alt="enter image description here"></a></p>
| 1 | 2016-08-25T10:29:40Z | 39,153,525 | <p>You can parse it from the source with the help of <em>re</em> and <em>json.loads</em> to find the correct <em>script tag</em> that contains the json:</p>
<pre><code>from bs4 import BeautifulSoup
import re
from json import loads
url = "http://www.allocine.fr/video/player_gen_cmedia=19561982&cfilm=144185.html"
soup = BeautifulSoup(requests.get(url).content)
script_text = soup.find("script", text=re.compile("var\s+dataLayer")).text.split("= ", 1)[1]
json_data = loads(script_text[:script_text.find(";")])
</code></pre>
<p>Running it you see we get what you want:</p>
<pre><code>In [31]: from bs4 import BeautifulSoup
In [32]: import re
In [33]: from json import loads
In [34]: import requests
In [35]: url = "http://www.allocine.fr/video/player_gen_cmedia=19561982&cfilm=144185.html"
In [36]: soup = BeautifulSoup(requests.get(url).content, "html.parser")
In [37]: script_text = soup.find("script", text=re.compile("var\s+dataLayer")).text.split("= ", 1)[1]
In [38]: json_data = loads(script_text[:script_text.find(";")])
In [39]: json_data
Out[39]:
[{'actor': '403573,19358,22868,612492,418933,436500,46797,729453,66391,16893,211493,249636,18324,483703,1193,165792,231665,114167,139915,155111,258115,119842,610268,166263,597100,134791,520768,149470,734146,633703,684803,763372,673220,748361,178486,241328,517093,765381,693327,196630,758799,220756,550759,737383,263596,174710,118600,663153,463379,740361,702873,659451,779133,779134,779135,779136,779137,779138,779139,779140,779141,779142,779143,779144,779145,779146,779147,779241,779242,779243,779244',
'director': '41198',
'genre': '13025=action&13012=fantastique',
'movie_distributors': 929,
'movie_id': 144185,
'movie_isshowtime': 1,
'movie_label': 'suicide_squad',
'nationality': '5002',
'press_rating': 2,
'releasedate': '2016-08-03',
'site_route': 'moviepage_videos_trailer',
'site_section': 'movie',
'user_activity': 'videowatch',
'user_rating': 3.4,
'video_id': 19561982,
'video_label': 'suicide_squad_bande_annonce_finale_vo',
'video_type_id': 31003,
'video_type_label': 'trailer'}]
</code></pre>
<p>You could also use a regex but in this case using <em>str.find</em> to get the end of the data is sufficient.</p>
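<p>For completeness, a rough regex-based sketch of the same extraction (the inline snippet below is a made-up stand-in for the real page source):</p>

```python
import json
import re

# Made-up stand-in; the real page assigns a JSON array to `dataLayer`.
html = 'var dataLayer = [{"movie_id": 144185}]; console.log(dataLayer);'

match = re.search(r"var\s+dataLayer\s*=\s*(\[.*?\])\s*;", html, re.DOTALL)
data = json.loads(match.group(1)) if match else None
print(data)  # [{'movie_id': 144185}]
```

<p>The non-greedy <code>.*?</code> stops at the first <code>]</code> followed by <code>;</code>, which is enough for a flat array but would break on nested arrays.</p>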
| 1 | 2016-08-25T19:42:14Z | [
"javascript",
"python",
"python-2.7",
"web-scraping",
"beautifulsoup"
] |
python shapely: check if a polygon is a multipolygon | 39,142,876 | <p>How can I check if a polygon entity is actually a multipolygon?
I've tried: </p>
<pre><code>if len(polygon)>1:
</code></pre>
<p>but then get the error:</p>
<pre><code>TypeError: object of type 'Polygon' has no len()
</code></pre>
<p>I've tried <code>Nill</code>, <code>None</code> and others, nothing worked.</p>
| 2 | 2016-08-25T10:30:47Z | 39,145,245 | <p>Ok, this worked for me:</p>
<pre><code>print ('type = ', type(poly))
</code></pre>
<p>outputs with:</p>
<pre><code>type = <class 'shapely.geometry.polygon.Polygon'>
</code></pre>
<p>in case of a polygon, and: </p>
<pre><code>type = <class 'shapely.geometry.multipolygon.MultiPolygon'>
</code></pre>
<p>in case of a multipolygon.</p>
<p>To check whether a variable is a polygon or a multipolygon I did this:</p>
<pre><code>if (isinstance(poly, shapely.geometry.multipolygon.MultiPolygon)):
code...
</code></pre>
| 2 | 2016-08-25T12:23:34Z | [
"python",
"shapely"
] |
Can't load Flask config from parent directory | 39,142,928 | <p>I'm trying to follow the docs here on using configuration files: <a href="http://exploreflask.com/en/latest/configuration.html#the-simple-case" rel="nofollow">http://exploreflask.com/en/latest/configuration.html#the-simple-case</a></p>
<p>I want to use what they call "the simple case" but I want to load the <code>config.py</code> from the parent directory. My project tree looks like this:</p>
<pre><code>~/Learning/test $ tree
.
âââ app
â  âââ __init__.py
âââ config.py
</code></pre>
<p>This is my <code>app/__init__.py</code>:</p>
<pre><code>from flask import Flask
app = Flask(__name__)
app.config.from_object('config')
</code></pre>
<p>This is my <code>config.py</code>:</p>
<pre><code>DEBUG = True
</code></pre>
<p>This is the error I get when I try to run my project:</p>
<pre><code>Traceback (most recent call last):
File "app/__init__.py", line 4, in <module>
app.config.from_object('config')
File "/usr/local/lib/python2.7/dist-packages/flask/config.py", line 163, in from_object
obj = import_string(obj)
File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 443, in import_string
sys.exc_info()[2])
File "/usr/local/lib/python2.7/dist-packages/werkzeug/utils.py", line 418, in import_string
__import__(import_name)
werkzeug.utils.ImportStringError: import_string() failed for 'config'. Possible reasons are:
- missing __init__.py in a package;
- package or module path not included in sys.path;
- duplicated package or module name taking precedence in sys.path;
- missing module, class, function or variable;
Debugged import:
- 'config' not found.
Original exception:
ImportError: No module named config
</code></pre>
<p>I want to keep the <code>config.py</code> in another directory from the Flask app files. How can I make Flask load the <code>config.py</code> from the parent directory here?</p>
| 1 | 2016-08-25T10:33:09Z | 39,146,631 | <p>You can't load it from there because as the error message says:</p>
<blockquote>
<pre><code>- missing __init__.py in a package;
- package or module path not included in sys.path;
</code></pre>
</blockquote>
<p>It's not part of a package that is importable. This may have worked locally because you were running <code>python</code> in your project root so the current directory was implicitly added to the path. Do not rely on this behavior. Do not manually change <code>sys.path</code>.</p>
<p>Instead, Flask's <a href="http://flask.pocoo.org/docs/0.11/api/#flask.Config" rel="nofollow"><code>Config</code></a> has alternate ways to load the config: from a path in an environment variable</p>
<pre class="lang-bash prettyprint-override"><code>export FLASK_CONFIG="/path/to/config.py"
</code></pre>
<pre><code>app.config.from_envvar('FLASK_CONFIG')
</code></pre>
<p>or from a file relative to the <a href="http://flask.pocoo.org/docs/0.11/config/#instance-folders" rel="nofollow">instance folder</a></p>
<pre class="lang-none prettyprint-override"><code>app/
__init__.py
instance/
config.py
</code></pre>
<pre><code>app = Flask(__name__, instance_relative_config=True)
app.config.from_pyfile('config.py')
</code></pre>
| 2 | 2016-08-25T13:26:11Z | [
"python",
"flask"
] |
Read and write postgres script using python | 39,143,038 | <p>I have Postgres tables and I want to run a <code>PostgreSQL</code> script file on these tables using Python and then write the results of the queries to a CSV file. The script file has multiple queries separated by semicolons <code>;</code>. A sample script is shown below</p>
<p><strong>Script file:</strong></p>
<pre><code>--Duplication Check
select p.*, c.name
from scale_polygons_v3 c inner join cartographic_v3 p
on (metaphone(c.name_displ, 20) LIKE metaphone(p.name, 20)) AND c.kind NOT IN (9,10)
where ST_Contains(c.geom, p.geom);
--Area Check
select sp.areaid,sp.name_displ,p.road_id,p.name
from scale_polygons_v3 sp, pak_roads_20162207 p
where st_contains(sp.geom,p.geom) and sp.kind = 1
and p.areaid != sp.areaid;
</code></pre>
<p>When I run the Python code, it executes successfully without any error, but the problem I am facing is in writing the results of the queries to a CSV file: only the result of the last executed query is written. The first query's result is overwritten by the second query, the second by the third, and so on until the last query. </p>
<p><strong>Here is my python code:</strong></p>
<pre><code>import psycopg2
import sys
import csv
import datetime, time
def run_sql_file(filename, connection):
'''
The function takes a filename and a connection as input
and will run the SQL query on the given connection
'''
start = time.time()
file = open(filename, 'r')
sql = s = " ".join(file.readlines())
#sql = sql1[3:]
print "Start executing: " + " at " + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M")) + "\n"
print "Query:\n", sql + "\n"
cursor = connection.cursor()
cursor.execute(sql)
records = cursor.fetchall()
with open('Report.csv', 'a') as f:
writer = csv.writer(f, delimiter=',')
for row in records:
writer.writerow(row)
connection.commit()
end = time.time()
row_count = sum(1 for row in records)
print "Done Executing:", filename
print "Number of rows returned:", row_count
print "Time elapsed to run the query:",str((end - start)*1000) + ' ms'
print "\t ==============================="
def main():
connection = psycopg2.connect("host='localhost' dbname='central' user='postgres' password='tpltrakker'")
run_sql_file("script.sql", connection)
connection.close()
if __name__ == "__main__":
main()
</code></pre>
<p>What is wrong with my code?</p>
| 1 | 2016-08-25T10:37:47Z | 39,144,717 | <p>The simplest approach is to output each query to a different file using <a href="http://initd.org/psycopg/docs/cursor.html#cursor.copy_expert" rel="nofollow"><code>copy_expert</code></a>:</p>
<pre><code>query = '''
select p.*, c.name
from
scale_polygons_v3 c
inner join
cartographic_v3 p on metaphone(c.name_displ, 20) LIKE metaphone(p.name, 20) and c.kind not in (9,10)
where ST_Contains(c.geom, p.geom)
'''
copy = "copy ({}) to stdout (format csv)".format(query)
f = open('Report.csv', 'wb')
cursor.copy_expert(copy, f, size=8192)
f.close()
query = '''
select sp.areaid,sp.name_displ,p.road_id,p.name
from scale_polygons_v3 sp, pak_roads_20162207 p
where st_contains(sp.geom,p.geom) and sp.kind = 1 and p.areaid != sp.areaid;
'''
copy = "copy ({}) to stdout (format csv)".format(query)
f = open('Report2.csv', 'wb')
cursor.copy_expert(copy, f, size=8192)
f.close()
</code></pre>
<p>If you want to append the second output to the same file then just keep the first file object opened.</p>
<p>Notice that it is necessary that <a href="https://www.postgresql.org/docs/current/static/sql-copy.html" rel="nofollow"><code>copy</code></a> writes to <code>stdout</code> so that the output is available to <code>copy_expert</code>.</p>
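<p>If you would rather keep the original single script file, a rough, dependency-free sketch (naive: it ignores semicolons inside string literals, and the helper name is mine) is to split the script into statements first and then run each one through the <code>copy_expert</code> pattern above, one output file per statement:</p>

```python
def split_statements(script):
    # Drop line comments, then split the remainder on semicolons.
    lines = [ln for ln in script.splitlines()
             if not ln.strip().startswith("--")]
    return [s.strip() for s in "\n".join(lines).split(";") if s.strip()]

script = """--Duplication Check
select p.* from cartographic_v3 p;
--Area Check
select sp.areaid from scale_polygons_v3 sp;
"""
for i, stmt in enumerate(split_statements(script)):
    print(i, stmt)  # each stmt can become "copy ({}) to stdout (format csv)"
```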
| 1 | 2016-08-25T11:59:08Z | [
"python",
"postgresql",
"csv"
] |
Read and write postgres script using python | 39,143,038 | <p>I have Postgres tables and I want to run a <code>PostgreSQL</code> script file on these tables using Python and then write the results of the queries to a CSV file. The script file has multiple queries separated by semicolons <code>;</code>. A sample script is shown below</p>
<p><strong>Script file:</strong></p>
<pre><code>--Duplication Check
select p.*, c.name
from scale_polygons_v3 c inner join cartographic_v3 p
on (metaphone(c.name_displ, 20) LIKE metaphone(p.name, 20)) AND c.kind NOT IN (9,10)
where ST_Contains(c.geom, p.geom);
--Area Check
select sp.areaid,sp.name_displ,p.road_id,p.name
from scale_polygons_v3 sp, pak_roads_20162207 p
where st_contains(sp.geom,p.geom) and sp.kind = 1
and p.areaid != sp.areaid;
</code></pre>
<p>When I run the Python code, it executes successfully without any error, but the problem I am facing is in writing the results of the queries to a CSV file: only the result of the last executed query is written. The first query's result is overwritten by the second query, the second by the third, and so on until the last query. </p>
<p><strong>Here is my python code:</strong></p>
<pre><code>import psycopg2
import sys
import csv
import datetime, time
def run_sql_file(filename, connection):
'''
The function takes a filename and a connection as input
and will run the SQL query on the given connection
'''
start = time.time()
file = open(filename, 'r')
sql = s = " ".join(file.readlines())
#sql = sql1[3:]
print "Start executing: " + " at " + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M")) + "\n"
print "Query:\n", sql + "\n"
cursor = connection.cursor()
cursor.execute(sql)
records = cursor.fetchall()
with open('Report.csv', 'a') as f:
writer = csv.writer(f, delimiter=',')
for row in records:
writer.writerow(row)
connection.commit()
end = time.time()
row_count = sum(1 for row in records)
print "Done Executing:", filename
print "Number of rows returned:", row_count
print "Time elapsed to run the query:",str((end - start)*1000) + ' ms'
print "\t ==============================="
def main():
connection = psycopg2.connect("host='localhost' dbname='central' user='postgres' password='tpltrakker'")
run_sql_file("script.sql", connection)
connection.close()
if __name__ == "__main__":
main()
</code></pre>
<p>What is wrong with my code?</p>
| 1 | 2016-08-25T10:37:47Z | 39,145,425 | <p>If you are able to change the SQL script a bit then here is a workaround:</p>
<pre><code>#!/usr/bin/env python
import psycopg2
script = '''
declare cur1 cursor for
select * from (values(1,2),(3,4)) as t(x,y);
declare cur2 cursor for
select 'a','b','c';
'''
print script
conn = psycopg2.connect('');
# Cursors exists and available only inside the transaction
conn.autocommit = False;
# Create cursors from script
conn.cursor().execute(script);
# Read names of cursors
cursors = conn.cursor();
cursors.execute('select name from pg_cursors;')
cur_names = cursors.fetchall()
# Read data from each available cursor
for cname in cur_names:
print cname[0]
cur = conn.cursor()
cur.execute('fetch all from ' + cname[0])
rows = cur.fetchall()
# Here you can save the data to the file
print rows
conn.rollback()
print 'done'
</code></pre>
<p>Disclaimer: I am a total newbie with Python. </p>
| 1 | 2016-08-25T12:32:26Z | [
"python",
"postgresql",
"csv"
] |
Launch a batch file stored in different directory from Python script in another directory | 39,143,132 | <p>I have created a Python script which needs to launch a <strong>.bat</strong> file based on some condition.</p>
<p>Python script location : <strong>\Component\myScript.py</strong></p>
<p>Batch file location : <strong>\Component\MS20160825\toExecute.bat</strong><br>
<em>The batch file internally uses some executables which are in</em> <strong>\Component\bin\</strong></p>
<p>How do I do following :</p>
<ol>
<li><p>Launch .BAT file from Python script so that .BAT executes successfully. BAT file should be able to find executables in <strong>\Component\bin\</strong> directory to perform its task and produces desired result.</p></li>
<li><p>Hold Python script execution until .BAT finished its execution.</p></li>
<li><p>.BAT file has <strong>pause >nul</strong> statement. I need to bypass it, meaning when .BAT is executed from Python script it should not wait for user to press <strong>Enter</strong> rather it should terminate normally after executing second last statement. Because the same .BAT file needs to be executed multiple times.</p></li>
</ol>
| 0 | 2016-08-25T10:42:02Z | 39,143,492 | <p>Should solve all problems you are experiencing! </p>
<pre><code>import subprocess
p = subprocess.Popen('batch.bat', shell=True, stdin=subprocess.PIPE)
stdout, stderr = p.communicate()
</code></pre>
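<p>To also cover point 3 (the <code>pause >nul</code> prompt) without editing the batch file, you can feed a newline to its stdin and then wait. A small sketch of the same pattern (a POSIX <code>read</code> stands in for <code>pause</code> here so it runs anywhere; on Windows you would pass the .bat path instead, and add <code>cwd</code> for point 1):</p>

```python
import subprocess

# `read` blocks until it receives a line, much like `pause` waits for Enter.
p = subprocess.Popen(["sh", "-c", "read line; echo finished"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(input=b"\n")  # send Enter, then block until exit
print(out.decode().strip())  # finished
```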
| 1 | 2016-08-25T11:00:50Z | [
"python",
"batch-file",
"subprocess",
"call",
"popen"
] |
Launch a batch file stored in different directory from Python script in another directory | 39,143,132 | <p>I have created a Python script which needs to launch a <strong>.bat</strong> file based on some condition.</p>
<p>Python script location : <strong>\Component\myScript.py</strong></p>
<p>Batch file location : <strong>\Component\MS20160825\toExecute.bat</strong><br>
<em>The batch file internally uses some executables which are in</em> <strong>\Component\bin\</strong></p>
<p>How do I do following :</p>
<ol>
<li><p>Launch .BAT file from Python script so that .BAT executes successfully. BAT file should be able to find executables in <strong>\Component\bin\</strong> directory to perform its task and produces desired result.</p></li>
<li><p>Hold Python script execution until .BAT finished its execution.</p></li>
<li><p>.BAT file has <strong>pause >nul</strong> statement. I need to bypass it, meaning when .BAT is executed from Python script it should not wait for user to press <strong>Enter</strong> rather it should terminate normally after executing second last statement. Because the same .BAT file needs to be executed multiple times.</p></li>
</ol>
| 0 | 2016-08-25T10:42:02Z | 39,172,632 | <p>Thanks to all for their active suggestion.<br>
With a small correction, the following code worked for me:</p>
<pre><code>import os
import subprocess

batchFileLocation = 'Component\\MS20160825'
batchFileFullPath = os.path.join(batchFileLocation, 'toExecute.bat')
p = subprocess.Popen(os.path.abspath(batchFileFullPath), stdin=subprocess.PIPE, cwd=batchFileLocation)
stdout, stderr = p.communicate()
</code></pre>
<p>Here the <strong>cwd</strong> argument is very important: it must be set to the directory where the batch file is placed; only then does the batch file execute correctly.</p>
<p>With that in place, the batch file is also able to find the binaries (placed in a different directory, <strong>Component\bin\</strong> in this case) required for its execution.</p>
| 0 | 2016-08-26T18:27:08Z | [
"python",
"batch-file",
"subprocess",
"call",
"popen"
] |
Pandas: multiple conditions on strings | 39,143,199 | <p>I'm trying to change my dataframe.
Usually I use something like</p>
<pre><code>df1= df[df.url.str.contains("avito.ru/*/telefony/")]
</code></pre>
<p>But if I want a lot of condition?
I want to write to <code>contains</code> more than 100 strings.
How can I do that?</p>
<p>Dataframe</p>
<pre><code>анонс кинофильмов 2016
анонс кинофильмов 2016
"выборок имеют величину момента сопротивления"
"выборок имеют величину момента сопротивления"
ансамбль 9 человек
ансамбль 9 человек
ансамбль 9 человек
"Времена года в музыке, литературе, живописи"
"Времена года в музыке, литературе, живописи"
"Времена года в музыке, литературе, живописи"
apple iphone
samsumg
facebook
None
None
None
</code></pre>
<p>And some words from list</p>
<pre><code>lst = ['iphone', 'sony', 'alcatel', 'galaxy', 'samsumg']
</code></pre>
<p>Desire output</p>
<pre><code>apple iphone
samsumg
None
None
None
</code></pre>
<p>I mean if a row's string doesn't contain any of those words, I want to delete that row. (But I want to keep the rows with None values.)</p>
| 2 | 2016-08-25T10:46:01Z | 39,143,517 | <p>You can create a pattern by joining <code>|</code> with all your list items and pass this to <code>str.contains</code>:</p>
<pre><code>In [31]:
lst = ['iphone', 'sony', 'alcatel', 'galaxy', 'samsumg','None']
pat = '|'.join(lst)
df[df['url'].str.contains(pat)]
Out[31]:
url
10 apple iphone
11 samsumg
13 None
14 None
15 None
</code></pre>
<p>To handle the missing values, include <code>pd.isnull(df['url'])</code> in the boolean condition:</p>
<pre><code>In [54]:
lst = ['iphone', 'sony', 'alcatel', 'galaxy', 'samsumg']
pat = '|'.join(lst)
df[pd.isnull(df['url']) | df['url'].str.contains(pat) ]
Out[54]:
url
10 apple iphone
11 samsumg
13 NaN
14 NaN
15 NaN
</code></pre>
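<p>One caveat worth adding (illustrative sketch with a made-up frame): if a keyword can ever contain regex metacharacters such as <code>+</code> or <code>(</code>, escape each item before joining; <code>na=False</code> is an alternative way to keep the mask well-defined on missing values:</p>

```python
import re
import pandas as pd

lst = ['iphone', 'sony', 'c++']            # 'c++' would break a raw '|'.join
pat = '|'.join(re.escape(w) for w in lst)

df = pd.DataFrame({'url': ['apple iphone', 'facebook', None]})
out = df[df['url'].isnull() | df['url'].str.contains(pat, na=False)]
print(out)
```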
| 0 | 2016-08-25T11:01:49Z | [
"python",
"pandas"
] |
Getting specific indexed distinct values in nested lists | 39,143,204 | <p>I have a nested list of around 1 million records like:</p>
<pre><code>l = [['a', 'b', 'c', ...], ['d', 'b', 'e', ...], ['f', 'z', 'g', ...],...]
</code></pre>
<p>I want to get the distinct values at the second index of the inner lists, so that my resultant list would be like:</p>
<pre><code>resultant = ['b', 'z', ...]
</code></pre>
<p>I have tried nested loops but it's not fast; any help will be appreciated!</p>
| 2 | 2016-08-25T10:46:14Z | 39,143,255 | <p>Would that work for you? </p>
<pre><code>result = set([inner_list[1] for inner_list in l])
</code></pre>
| 0 | 2016-08-25T10:48:19Z | [
"python",
"nested-lists"
] |
Getting specific indexed distinct values in nested lists | 39,143,204 | <p>I have a nested list of around 1 million records like:</p>
<pre><code>l = [['a', 'b', 'c', ...], ['d', 'b', 'e', ...], ['f', 'z', 'g', ...],...]
</code></pre>
<p>I want to get the distinct values at the second index of the inner lists, so that my resultant list would be like:</p>
<pre><code>resultant = ['b', 'z', ...]
</code></pre>
<p>I have tried nested loops but it's not fast; any help will be appreciated!</p>
| 2 | 2016-08-25T10:46:14Z | 39,143,334 | <p>Since you want the unique items you can use <code>collections.OrderedDict.fromkeys()</code> in order to keep both the order and the uniqueness (it uses a hash table for its keys) and use <code>zip()</code> to get the items at the second index.</p>
<pre><code>from collections import OrderedDict
list(OrderedDict.fromkeys(zip(*my_lists)[1]))
</code></pre>
<p>In python 3.x since <code>zip()</code> returns an iterator you can do this:</p>
<pre><code>colls = zip(*my_lists)
next(colls)  # skip the first column
list(OrderedDict.fromkeys(next(colls)))
</code></pre>
<p>Or use a generator expression within <code>dict.fromkeys()</code>:</p>
<pre><code>list(OrderedDict.fromkeys(i[1] for i in my_lists))
</code></pre>
<p>Demo:</p>
<pre><code>>>> lst = [['a', 'b', 'c'], ['d', 'b', 'e'], ['f', 'z', 'g']]
>>>
>>> list(OrderedDict().fromkeys(sub[1] for sub in lst))
['b', 'z']
</code></pre>
| 1 | 2016-08-25T10:52:15Z | [
"python",
"nested-lists"
] |
Getting specific indexed distinct values in nested lists | 39,143,204 | <p>I have a nested list of around 1 million records like:</p>
<pre><code>l = [['a', 'b', 'c', ...], ['d', 'b', 'e', ...], ['f', 'z', 'g', ...],...]
</code></pre>
<p>I want to get the distinct values at the second index of the inner lists, so that my resultant list would be like:</p>
<pre><code>resultant = ['b', 'z', ...]
</code></pre>
<p>I have tried nested loops but it's not fast; any help will be appreciated!</p>
| 2 | 2016-08-25T10:46:14Z | 39,143,533 | <p>I can think of two options. </p>
<p>Set comprehension:</p>
<pre><code>res = {x[1] for x in l}
</code></pre>
<p>I think numpy arrays work faster than list/set comprehensions, so converting this list to an array and then using array functions can be faster. Here:</p>
<pre><code>import numpy as np
res = np.unique(np.array(l)[:, 1])
</code></pre>
<p>Let me explain: <code>np.array(l)</code> converts the list to a 2d array, then <code>[:, 1]</code> takes the second column (counting from 0), which consists of the second item of each sublist in the original <code>l</code>, and finally <code>np.unique</code> keeps only the unique values.</p>
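<p>A runnable sketch of both options side by side on tiny made-up data (two caveats: <code>np.unique</code> returns the values sorted, and <code>np.array(l)</code> only forms a 2d array when all inner lists have equal length):</p>

```python
import numpy as np

l = [['a', 'b', 'c'], ['d', 'b', 'e'], ['f', 'z', 'g']]

res_set = {x[1] for x in l}            # unordered set of distinct values
res_np = np.unique(np.array(l)[:, 1])  # sorted ndarray of distinct values
print(sorted(res_set))  # ['b', 'z']
print(res_np)           # ['b' 'z']
```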
| 0 | 2016-08-25T11:02:55Z | [
"python",
"nested-lists"
] |
Getting specific indexed distinct values in nested lists | 39,143,204 | <p>I have a nested list of around 1 million records like:</p>
<pre><code>l = [['a', 'b', 'c', ...], ['d', 'b', 'e', ...], ['f', 'z', 'g', ...],...]
</code></pre>
<p>I want to get the distinct values at the second index of the inner lists, so that my resultant list would be like:</p>
<pre><code>resultant = ['b', 'z', ...]
</code></pre>
<p>I have tried nested loops but it's not fast; any help will be appreciated!</p>
| 2 | 2016-08-25T10:46:14Z | 39,143,721 | <p>You can unzip the list of lists and then take the distinct values of the second tuple with <code>set</code>, like below.
This code took 4.05311584473e-06 milliseconds on my laptop: </p>
<pre><code>list(set(zip(*lst)[1]))
</code></pre>
<p>Input : </p>
<pre><code>lst = [['a', 'b', 'c'], ['d', 'b', 'e'], ['f', 'z', 'g']]
</code></pre>
<p>Output:</p>
<pre><code>['b', 'z']
</code></pre>
| 1 | 2016-08-25T11:12:06Z | [
"python",
"nested-lists"
] |
Prevent adding duplicate data from excel forms imported into db | 39,143,221 | <p>I have a model in which data imported from excel files is saved.
I want to prevent duplicate entries: it should check the already existing data to see if they match the ones being imported.</p>
<p>My model</p>
<pre><code>from django.db import models
class UserData(models.Model):
GENDER_CHOICES = (
('Male', 'Male'),
('Female', 'Female'),
)
first_name = models.CharField(max_length=50)
last_name = models.CharField(max_length=50)
age = models.IntegerField()
gender = models.CharField(default='Male', choices=GENDER_CHOICES, max_length=6)
address = models.CharField(max_length=200)
class Meta:
verbose_name_plural = 'User Data'
def __str__(self):
return self.fullname()
</code></pre>
<p>views.py</p>
<pre><code>class UploadFileForm(forms.Form):
file = forms.FileField()
def import_data(request):
if request.method == "POST":
form = UploadFileForm(request.POST,
request.FILES)
if form.is_valid():
request.FILES['file'].save_to_database(
name_columns_by_row=2,
model=UserData,
mapdict=['first_name', 'last_name', 'age', 'gender', 'address'])
return HttpResponse("OK")
else:
return HttpResponseBadRequest()
else:
form = UploadFileForm()
return render_to_response('excel/upload_form.html',
{'form': form},
context_instance=RequestContext(request))
</code></pre>
<p>I tried using <code>unique_together</code> and also tried overriding <code>cleaned_data</code> in the form but still can't prevent duplicates being added to the db.</p>
<p>What will be the best way to achieve this? Thanks</p>
| 0 | 2016-08-25T10:46:47Z | 39,241,437 | <p>1.Create one model form</p>
<pre><code>class UserDataForm(forms.ModelForm):
    # code here
class Meta:
model = UserData
</code></pre>
<p>2. Convert the excel sheet data to dictionary format using:</p>
<pre><code>from pandas import *
xls = ExcelFile('path_to_file.xls')
df = xls.parse(xls.sheet_names[0])
</code></pre>
<p>ref: <a href="http://stackoverflow.com/questions/14196013/python-creating-dictionary-from-excel-data">Python Creating Dictionary from excel data</a>.</p>
<p>3. Instantiate the model form with the dictionary data and check that it validates. If the data is valid, check the db for an existing entry, or use Django's <code>model_name.objects.get_or_create()</code>:</p>
<pre><code>form = UploadFileForm(request.POST,
request.FILES)
if form.is_valid():
# code here to convert to dictionary
data = data_in_dict_form
user_data_form = UserDataForm(data=data)
if user_data_form.is_valid():
obj, created = UserData.objects.get_or_create(**user_data_form.cleaned_data)
        # This will create a new entry if one doesn't exist, and just fetch it from the db if it already exists.
        # You can use any method to create an entry in the database. You can also just .filter().exists() to check whether there is an existing entry; if it returns True you don't need to save it.
# Your Response
else:
# your Response
</code></pre>
<p>Note: if there are multiple rows, each row is a separate entry in the db; you need to loop over the rows and validate and save (or skip) each one. :) </p>
<p>I hope this helps you get the problem resolved. </p>
| 1 | 2016-08-31T05:30:17Z | [
"python",
"django",
"django-forms"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement straight to a file, but I want the program to work normally: display its output to the console, then save the console's contents AFTER all of the program's operations are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,143,606 | <p>Ok, so normally to get this done you would have to override Python's built-in <code>print</code> function. But... there is IPython, which provides some hooks.</p>
<p>First you need to have <code>ipython</code> installed:</p>
<pre><code>#bash
sudo pip install ipython
</code></pre>
<p>(I'm using sudo simply to locate the folder I need to reach; read further)</p>
<p>After ipython installation you'll have ipython extensions folder available, so get to it:</p>
<pre><code>#bash
cd ~/.ipython/extensions/
</code></pre>
<p>and create there let's say a file called <code>print_to_file.py</code>, here is its content:</p>
<pre><code>#python
class PrintWatcher(object):
def __init__(self, ip):
self.shell = ip
def post_execute(self):
with open('/home/turkus/shell.txt', 'a+') as f:
in_len = len(self.shell.user_ns['In'])
i = in_len - 1
in_ = self.shell.user_ns['In'][i]
out = self.shell.user_ns['Out'].get(i, '')
# you can edit this line if you want different input in shell.txt
f.write('{}\n{}\n'.format(in_, out))
def load_ipython_extension(ip):
pw = PrintWatcher(ip)
ip.events.register('post_run_cell', pw.post_execute)
</code></pre>
<p>After saving a file just run:</p>
<pre><code>#bash
ipython profile create
# you will get something like that:
[ProfileCreate] Generating default config file: u'/home/turkus/.ipython/profile_default/ipython_config.py'
</code></pre>
<p>Now get back to setting up our hook. We must open <code>ipython_config.py</code> created under path above and put there some magic (there is a lot of stuff there, so go to the end of file):</p>
<pre><code># some commented lines here
c = get_config()
c.InteractiveShellApp.extensions = [
'print_to_file'
]
</code></pre>
<p>After saving it, you can run <code>ipython</code> and write your code. Every your input will be written in a file under path you provided above, in my case it was:</p>
<pre><code>/home/turkus/shell.txt
</code></pre>
<p><strong>Notes</strong></p>
<p>You can avoid loading your extension every time <code>ipython</code> fires up, by just delete <code>'print_to_file'</code> from <code>c.InteractiveShellApp.extensions</code> list in <code>ipython_config.py</code>. But remember that you can load it anytime you need, just by typing in <code>ipython</code> console:</p>
<pre><code>â ~ ipython
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: %load_ext print_to_file
</code></pre>
<p>Any change in <code>print_to_file.py</code> is being reflected in open ipython shell after using <code>%reload_ext print_to_file</code> command, so you don't have to exit from and fire up it again.</p>
| 2 | 2016-08-25T11:06:01Z | [
"python",
"console",
"save",
"text-files"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement straight to a file, but I want the program to work normally: display its output to the console, then save the console's contents AFTER all of the program's operations are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,143,624 | <p>There is a very obvious but not very elegant solution.</p>
<p>instead of:</p>
<pre><code>print statement 1
calculation
print statement 2
</code></pre>
<p>you can do something like</p>
<pre><code>sexport = ''
calculation
print statement 1
sexport += statement1 + "\n"
calculation
print statement 2
sexport += statement2 + "\n"
</code></pre>
<p>and finally just save <code>sexport</code> to a file.</p>
| -1 | 2016-08-25T11:06:57Z | [
"python",
"console",
"save",
"text-files"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement straight to a file, but I want the program to work normally: display its output to the console, then save the console's contents AFTER all of the program's operations are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,143,791 | <p>I am unsure how you could retrieve the contents of the console for any editor; however, this can be achieved quite simply by replacing your <code>print()</code> statements with <code>.write</code>:</p>
<pre><code>class Writer(object):
    def __init__(self, out_file, overwrite=False):
        self.file_name = out_file
        self.overwrite = overwrite
        self.history = []

    def write(self, statement):
        self.history.append(statement)
        print statement

    def close(self):
        if self.overwrite:
            self.out_file = open(self.file_name, 'wb')
        else:
            self.out_file = open(self.file_name, 'ab')
        for x in self.history:
            self.out_file.write(x + '\n')
        self.out_file.close()
        self.history = []

p = Writer('my_output_file.txt')
p.write('my string to print and save!')
p.close()  # close the writer to save the contents to a file before exiting
</code></pre>
| 1 | 2016-08-25T11:15:15Z | [
"python",
"console",
"save",
"text-files"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement to print to a file directly, but I want the program to work normally, displaying outputs to the console, and then to save its contents AFTER all operations of the program are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,143,822 | <p>Maybe you should create a variable that will log the outputs and then put it into a file.</p>
<p>For ex:</p>
<pre><code>logger = ''

print statement
logger += statement + "\n"  # a newline char so each statement is on its own line

with open('file.txt', 'w') as f:
    f.write(logger)
</code></pre>
| 0 | 2016-08-25T11:17:08Z | [
"python",
"console",
"save",
"text-files"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement to print to a file directly, but I want the program to work normally, displaying outputs to the console, and then to save its contents AFTER all operations of the program are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,145,632 | <p>Now that I understand your question, I think you are looking for the tee command:</p>
<pre><code>python your_program.py | tee output.txt
</code></pre>
<p>This will show you the output both in the console and in output.txt</p>
<p>PS: Since you did not reply to my comment about which OS you use, I assumed that you use either Linux or macOS. It should work on both. I don't know how to do this on Windows...</p>
| 1 | 2016-08-25T12:41:39Z | [
"python",
"console",
"save",
"text-files"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement to print to a file directly, but I want the program to work normally, displaying outputs to the console, and then to save its contents AFTER all operations of the program are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,147,116 | <p>You could override the <code>print</code> function which will still be accessible through the <code>builtins</code> module</p>
<pre><code>import builtins

f = open("logs.txt", "w")

def print(*args, sep=' ', end='\n', **kwargs):
    builtins.print(*args, sep=sep, end=end, **kwargs)
    f.write(sep.join(map(str, args)) + end)
</code></pre>
<hr>
<p>EDIT: A similar solution for Python 2</p>
<pre><code>from __future__ import print_function

class Print:
    def __init__(self, print_function, filename='test', mode='w'):
        self.print_function = print_function
        self.file = open(filename, mode)

    def __call__(self, *args, **kwargs):
        self.print_function(*args, **kwargs)
        kwargs['file'] = self.file
        self.print_function(*args, **kwargs)

print = Print(print, 'logs.txt')
</code></pre>
<p>This creates a <code>print</code> function that you use exactly as the function you import from <code>__future__</code>.<br>
To close the file when everything is done you have to run:</p>
<pre><code>print.file.close()
</code></pre>
| 1 | 2016-08-25T13:47:53Z | [
"python",
"console",
"save",
"text-files"
] |
How to print the console to a text file AFTER the program finishes (Python)? | 39,143,417 | <p>I have a program that outputs many calculations and results to the console through the print statement. I want to write some code to export (or save) all the contents of the console to a simple text file. </p>
<p>I searched StackOverflow and other sites and found some methods to redirect the print statement to print to a file directly, but I want the program to work normally, displaying outputs to the console, and then to save its contents AFTER all operations of the program are done.</p>
<p>I am using PyCharm with Python2.7 if it matters</p>
| 4 | 2016-08-25T10:57:07Z | 39,190,845 | <p>With all thanks and respect to all who contributed to this question. I have finally found a solution to this problem with minimal modifications to my original code. The solution is provided by the member @Status and <a href="http://stackoverflow.com/a/24583265/5820024">here is its link</a> .</p>
<p>Although I searched a lot before posting my question, the answers of the respected members guided me to a more precise search, especially the contributions of @turcus, who did exceptional work, and @Glostas, who opened my eyes to the "tee" which guided me to find the solution I posted (although it does not contain "tee").</p>
<p><strong>The solution</strong>, <em>as in <a href="http://stackoverflow.com/a/24583265/5820024">the mentioned post</a>, with slight modifications</em>:</p>
<p>1- Put the following Class in the program:</p>
<pre><code>import os
import sys

class Logger(object):
    """
    Lumberjack class - duplicates sys.stdout to a log file and it's okay
    source: http://stackoverflow.com/a/24583265/5820024
    """
    def __init__(self, filename="Red.Wood", mode="a", buff=0):
        self.stdout = sys.stdout
        self.file = open(filename, mode, buff)
        sys.stdout = self

    def __del__(self):
        self.close()

    def __enter__(self):
        pass

    def __exit__(self, *args):
        pass

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        self.stdout.flush()
        self.file.flush()
        os.fsync(self.file.fileno())

    def close(self):
        if self.stdout is not None:
            sys.stdout = self.stdout
            self.stdout = None
        if self.file is not None:
            self.file.close()
            self.file = None
</code></pre>
<p>2- At the beginning of the program, before any print statements, put this line:</p>
<pre><code>my_console = Logger('my_console_file.txt') # you can change the file's name
</code></pre>
<p>3- At the end of the program, after all of the print statements, put this line:</p>
<pre><code>my_console.close()
</code></pre>
<p><strong>I tested this, and it works perfectly</strong>, and finally I have a clone of the console's output after the program ends.</p>
<p>With best regards to everybody, and Many thanks to all contributors.</p>
| 0 | 2016-08-28T11:37:07Z | [
"python",
"console",
"save",
"text-files"
] |
OOP - Accessing Object Data | 39,143,577 | <p>Apologies if this is a repeated or simple question.</p>
<p>I have recently been learning Python (having spent many years writing simple MATLAB scripts). I've started exploring Object Oriented Programming and JSON.</p>
<p>I am trying to use an API to collect data from a server. When the objects are returned I'm mostly doing fine with using syntax to access particular data fields. However, I'm struggling with one. I have a row object:</p>
<pre><code>row = {"totalCount": 1, "results": [{"parentObjectId": 887, "contextData": ["Row 1"], "parentObjectType": "sheet", "objectId": 599, "text": "Text", "parentObjectName": "Data", "objectType": "row"}]}
</code></pre>
<p>I am trying to access the "objectId" attribute for the single result (<code>result[0]</code>).</p>
<p>I have tried <code>rowId = row.results[0].objectId</code> but get the error "'SearchResultItem' object has no attribute 'objectId'".</p>
<p>I have also tried <code>rowId = row.results[0]['objectId']</code> but get the error "'SearchResultItem' object has no attribute '<code>__getitem__</code>'".</p>
<p>--- EDIT:</p>
<pre><code>print(reportingRow.results[0]['objectId'])
Traceback (most recent call last):
File "<ipython-input-46-14e026c273e3>", line 1, in <module>
print(reportingRow.results[0]['objectId'])
TypeError: 'SearchResultItem' object has no attribute '__getitem__'
</code></pre>
<p>I am using a tool called Smartsheet. I am using the search_sheet request. The API documentation (<a href="http://smartsheet-platform.github.io/api-docs/#search-sheet" rel="nofollow">http://smartsheet-platform.github.io/api-docs/#search-sheet</a>) says that 'SearchResultItem' is an object containing a number of attributes. It doesn't give much more information.</p>
<p>The Smartsheet models are found here: <a href="https://github.com/smartsheet-platform/smartsheet-python-sdk/tree/master/smartsheet/models" rel="nofollow">https://github.com/smartsheet-platform/smartsheet-python-sdk/tree/master/smartsheet/models</a>. I am currently looking at search_result.py and search_result_item.py to find the answer/clues.</p>
<p>--- END OF EDIT</p>
<p>Thanks for any help!</p>
| 0 | 2016-08-25T11:04:45Z | 39,143,697 | <p>Please try:</p>
<pre><code>rowId = row['results'][0]['objectId']
</code></pre>
| 1 | 2016-08-25T11:10:22Z | [
"python",
"oop",
"smartsheet-api"
] |
OOP - Accessing Object Data | 39,143,577 | <p>Apologies if this is a repeated or simple question.</p>
<p>I have recently been learning Python (having spent many years writing simple MATLAB scripts). I've started exploring Object Oriented Programming and JSON.</p>
<p>I am trying to use an API to collect data from a server. When the objects are returned I'm mostly doing fine with using syntax to access particular data fields. However, I'm struggling with one. I have a row object:</p>
<pre><code>row = {"totalCount": 1, "results": [{"parentObjectId": 887, "contextData": ["Row 1"], "parentObjectType": "sheet", "objectId": 599, "text": "Text", "parentObjectName": "Data", "objectType": "row"}]}
</code></pre>
<p>I am trying to access the "objectId" attribute for the single result (<code>result[0]</code>).</p>
<p>I have tried <code>rowId = row.results[0].objectId</code> but get the error "'SearchResultItem' object has no attribute 'objectId'".</p>
<p>I have also tried <code>rowId = row.results[0]['objectId']</code> but get the error "'SearchResultItem' object has no attribute '<code>__getitem__</code>'".</p>
<p>--- EDIT:</p>
<pre><code>print(reportingRow.results[0]['objectId'])
Traceback (most recent call last):
File "<ipython-input-46-14e026c273e3>", line 1, in <module>
print(reportingRow.results[0]['objectId'])
TypeError: 'SearchResultItem' object has no attribute '__getitem__'
</code></pre>
<p>I am using a tool called Smartsheet. I am using the search_sheet request. The API documentation (<a href="http://smartsheet-platform.github.io/api-docs/#search-sheet" rel="nofollow">http://smartsheet-platform.github.io/api-docs/#search-sheet</a>) says that 'SearchResultItem' is an object containing a number of attributes. It doesn't give much more information.</p>
<p>The Smartsheet models are found here: <a href="https://github.com/smartsheet-platform/smartsheet-python-sdk/tree/master/smartsheet/models" rel="nofollow">https://github.com/smartsheet-platform/smartsheet-python-sdk/tree/master/smartsheet/models</a>. I am currently looking at search_result.py and search_result_item.py to find the answer/clues.</p>
<p>--- END OF EDIT</p>
<p>Thanks for any help!</p>
| 0 | 2016-08-25T11:04:45Z | 39,148,925 | <p>Your library code clearly says, that <code>SearchResultItem</code> has <a href="https://github.com/smartsheet-platform/smartsheet-python-sdk/blob/master/smartsheet/models/search_result_item.py#L108" rel="nofollow">a property <code>.object_id</code></a>.</p>
<pre><code>print(reportingRow.results[0].object_id) # this works just fine
</code></pre>
<p>Your problem is not dictionary/JSON-related because you <strong>are not</strong> using dictionaries. You are using custom objects wrapped around those dictionaries.</p>
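To see why bracket indexing fails on such wrapped objects, here is a tiny stand-in class (not the real Smartsheet SDK model, just an illustration of the pattern): it wraps a dict but exposes only a snake_case property, so attribute access works while subscripting raises the same kind of TypeError as in the question:

```python
class SearchResultItemStandIn(object):
    """Toy stand-in (NOT the real SDK class): wraps a dict but exposes
    only a snake_case property, so bracket indexing fails."""
    def __init__(self, data):
        self._data = data

    @property
    def object_id(self):
        return self._data["objectId"]

item = SearchResultItemStandIn({"objectId": 599})
print(item.object_id)        # attribute access works

try:
    item["objectId"]         # the wrapper defines no __getitem__
except TypeError as exc:
    print("bracket indexing fails:", exc)
```

The real SDK classes follow the same idea on a larger scale: the raw JSON dict is held internally and each field is surfaced as a property.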
| 0 | 2016-08-25T15:13:26Z | [
"python",
"oop",
"smartsheet-api"
] |
Is there a way to look up by index value without handling errors? | 39,143,640 | <p>I have a dataframe that looks like this:</p>
<pre><code>             pmid
id
NCT02835976   NaN
NCT02835885  1235
NCT02835560  1270
NCT02835118   NaN
</code></pre>
<p>Now I want to find the row that matches a particular key. I can do <code>df.loc(x)</code> but I get an error if the ID is not in the index:</p>
<pre><code>KeyError: u'the label [NCT01001741] is not in the [index]'
</code></pre>
<p>Do I really need to write error-handling code? Is there any method in pandas that will simply return <code>None</code> if the key is not in the index?</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label" rel="nofollow">The docs</a> seem to suggest that <code>ix</code> will do this, but also that <code>ix</code> is generally to be avoided. </p>
| 1 | 2016-08-25T11:07:35Z | 39,144,408 | <p>You can try something like following:</p>
<pre><code>df[df.index == 'NCT01001741']
</code></pre>
<p>This will return no error</p>
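Another option, if you want a plain "return None when the key is missing" lookup: <code>Series.get</code> behaves like <code>dict.get</code> and returns a default (<code>None</code>) for a missing label. A small sketch using the data from the question (the column name and labels are reproduced from it):

```python
import pandas as pd

df = pd.DataFrame(
    {"pmid": [None, 1235, 1270, None]},
    index=["NCT02835976", "NCT02835885", "NCT02835560", "NCT02835118"],
)
df.index.name = "id"

# Series.get behaves like dict.get: a missing label returns the default (None)
print(df["pmid"].get("NCT02835885"))
print(df["pmid"].get("NCT01001741"))
```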
| 0 | 2016-08-25T11:45:34Z | [
"python",
"pandas"
] |
run two thread concurrently and both of them manipulate single variable | 39,143,671 | <p>it's my code:</p>
<pre><code>import threading
x=0
class a(threading.Thread):
    def run(self):
        global x
        for i in range(1000000):
            x += 1

class b(threading.Thread):
    def run(self):
        global x
        for i in range(1000000):
            x -= 1

def run():
    t1 = a()
    t2 = b()
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    # after both threads are done
    print x
run()
</code></pre>
<p>I expect it to show me 0 (x=0), but every time I run it the result is different (less than zero).
<br />
What's wrong with it?</p>
| 1 | 2016-08-25T11:08:58Z | 39,143,815 | <p>Race conditions. The actual operation of <code>x += 1</code> is roughly:</p>
<ol>
<li>Load the value of <code>x</code></li>
<li>Compute <code>x + 1</code></li>
<li>Store the computed value to <code>x</code></li>
</ol>
<p>Except with threading, you might get preempted by the other thread after step 1 and before step 3 (whether it's before or after 2 doesn't matter). If the other thread sees the unincremented value, decrements that, then you store your incremented value before it stores the decremented value, you just dropped an increment; if they store before you do, you dropped a decrement.</p>
<p>You need to lock access to shared variables to ensure that the operation behaves atomically:</p>
<pre><code>import threading

x = 0
xlock = threading.Lock()

class a(threading.Thread):
    def run(self):
        global x
        for i in range(1000000):
            with xlock:
                x += 1

class b(threading.Thread):
    def run(self):
        global x
        for i in range(1000000):
            with xlock:
                x -= 1
</code></pre>
<p>This may introduce quite a bit of overhead, so other alternatives that touch the shared variable less may be better options (at the expense of having different behaviors).</p>
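One sketch of such an alternative (illustrative, not the only option): let each thread accumulate into a local variable and touch the shared <code>x</code> only once, under the lock. Locals are invisible to other threads, so no race is possible during the loop, and the result is deterministically 0:

```python
import threading

x = 0
xlock = threading.Lock()

def add_n(n):
    """Do the million additions on a local variable, then publish once."""
    global x
    total = 0
    for _ in range(1000000):
        total += n          # thread-local: no other thread can see `total`
    with xlock:             # a single locked update instead of a million
        x += total

t1 = threading.Thread(target=add_n, args=(1,))
t2 = threading.Thread(target=add_n, args=(-1,))
t1.start(); t2.start()
t1.join(); t2.join()
print(x)   # 0 every run
```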
| 5 | 2016-08-25T11:16:40Z | [
"python",
"multithreading",
"concurrency",
"race-condition"
] |
Google Analytics Reporting API (Python) - How can I obtain previous and next page path? | 39,143,682 | <p>Using the google analytics portal, i can check the previous and next page path for a given path (for an example, see below) [ Behavior > Site Content > All Pages > Navigation Summary)</p>
<p><a href="http://i.stack.imgur.com/7Gxx4.png" rel="nofollow"><img src="http://i.stack.imgur.com/7Gxx4.png" alt="enter image description here"></a></p>
<p>How can I access the path of all pages, and then for each of these pages the previous and next page paths, via the API?
The API supports ga:previousPagePath, but nextPagePath is deprecated.</p>
<p>Here is a snippet of my source code (python).</p>
<pre><code>DIMENSIONS = ['ga:date', 'ga:hour', 'ga:minute', ??, ??]
METRICS = ['ga:pageviews', 'ga:uniquePageviews', 'ga:sessions', 'ga:avgTimeOnPage']

def get_api_traffic_query(service):
    start = '2016-08-24'
    end = '2016-08-24'
    metrics = ','.join(map(str, METRICS))
    dimensions = ','.join(map(str, DIMENSIONS))
    start_index = '1'
    segment = 'sessions::condition::ga:hostname!~mongo|app|help|docs|staging|googleweblight',
    return service.data().ga().get(ids=PROFILE_ID, start_date=start, end_date=end,
                                   metrics=metrics, dimensions=dimensions,
                                   start_index=start_index)
</code></pre>
| 0 | 2016-08-25T11:09:28Z | 39,151,871 | <p>As <a href="http://stackoverflow.com/users/1841839/daimto">DalmTo</a> mentioned just query for the combination of <code>ga:pagePath</code> and <code>ga:previousPagePath</code> <a href="https://developers.google.com/analytics/devguides/reporting/core/dimsmets" rel="nofollow">Dimensions</a>.</p>
<pre><code>DIMENSIONS = ['ga:date', 'ga:hour', 'ga:minute', 'ga:pagePath', 'ga:previousPagePath']
METRICS =['ga:pageviews','ga:uniquePageviews', 'ga:sessions', 'ga:avgTimeOnPage']
</code></pre>
<p>Note: <code>ga:nextPagePath</code> is deprecated and returns the same value as <code>ga:pagePath</code>. Thus it is up to you to reconnect the flow of <code>/path1 -> /path2 -> /path3</code></p>
| 0 | 2016-08-25T17:56:40Z | [
"python",
"navigation",
"google-analytics-api"
] |
how to check whether a list element is in another list but also at the same index | 39,143,692 | <p>I need to check whether two lists have any same elements, but these same elements must also be at same index positions.</p>
<p>I came up with the following ugly solution:</p>
<pre><code>def check_any_at_same_index(list_input_1, list_input_2):
    # set bool value
    check_if_any = 0
    for index, element in enumerate(list_input_1):
        # check if any elements are the same and also at the same index position
        if element == list_input_2[index]:
            check_if_any = 1
    return check_if_any

if __name__ == "__main__":
    list_1 = [1, 2, 4]
    list_2 = [2, 4, 1]
    list_3 = [1, 3, 5]
    # no same elements at same index
    print check_any_at_same_index(list_1, list_2)
    # has same element at index 0
    print check_any_at_same_index(list_1, list_3)
</code></pre>
<p>There must be a better and quicker way to do this; any suggestions?</p>
| 3 | 2016-08-25T11:10:04Z | 39,143,723 | <p>You can use <code>zip()</code> function and a generator expression within <code>any()</code> if you want to check if there are any equal items in same index.</p>
<pre><code>any(i == j for i, j in zip(list_input_1, list_input_2))
</code></pre>
<p>If you want to return that item (the first occurrence) you can use <code>next()</code>:</p>
<pre><code>next((i for i, j in zip(list_input_1, list_input_2) if i == j), None)
</code></pre>
<p>If you want to check the all you can use a simple comparison:</p>
<pre><code>list_input_1 == list_input_2
</code></pre>
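A quick demonstration with the lists from the question:

```python
list_1 = [1, 2, 4]
list_2 = [2, 4, 1]
list_3 = [1, 3, 5]

# no position holds equal elements in list_1 and list_2
print(any(i == j for i, j in zip(list_1, list_2)))   # False

# index 0 holds 1 in both list_1 and list_3
print(any(i == j for i, j in zip(list_1, list_3)))   # True

# first matching element, or None if there is none
print(next((i for i, j in zip(list_1, list_3) if i == j), None))   # 1
```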
| 5 | 2016-08-25T11:12:24Z | [
"python",
"python-2.7"
] |
Fetching parent's function from parent | 39,143,715 | <p>I override a function, but would like to get hold of the parent's function from within the parent.</p>
<pre><code>>>> class a:
...     def __init__(self):
...         print(self.f)
...     def f(self):
...         pass
...
>>> class b(a):
...     def __init__(self):
...         super(b, self).__init__()
...     def f(self):
...         pass
...
>>> b()
<bound method b.f of <__main__.b object at 0x000002E297A96160>>
</code></pre>
<p>I'd like the printout to say <code>a.f</code>.</p>
| -1 | 2016-08-25T11:11:47Z | 39,144,280 | <pre><code>class a(object):
def __init__(self):
pass
def f(self):
print 'Parent Method...'
class b(a):
def __init__(self):
super(b, self).__init__()
a.f(self) #referance the parent class rather than the child class, because the child overrides the parent method.
self.f()
def f(self):
print "Childs Method..."
b()
</code></pre>
| 0 | 2016-08-25T11:38:59Z | [
"python",
"python-3.x",
"inheritance"
] |
Fetching parent's function from parent | 39,143,715 | <p>I override a function, but would like to get hold of the parent's function from within the parent.</p>
<pre><code>>>> class a:
...     def __init__(self):
...         print(self.f)
...     def f(self):
...         pass
...
>>> class b(a):
...     def __init__(self):
...         super(b, self).__init__()
...     def f(self):
...         pass
...
>>> b()
<bound method b.f of <__main__.b object at 0x000002E297A96160>>
</code></pre>
<p>I'd like the printout to say <code>a.f</code>.</p>
| -1 | 2016-08-25T11:11:47Z | 39,144,471 | <p>You could use <a href="https://docs.python.org/3/tutorial/classes.html#private-variables" rel="nofollow">name mangling</a> to make <code>self.__f</code> refer to <code>A.__f</code> from within <code>A</code>'s class definition.</p>
<blockquote>
<p>Name mangling is helpful for letting subclasses override methods without breaking intraclass method calls</p>
</blockquote>
<pre><code>class A:
    def __init__(self):
        self.__f()

    def f(self):
        print('A.f')

    __f = f  # Private copy of A's `f` method

class B(A):
    def __init__(self):
        super(B, self).__init__()

    def f(self):
        print('B.f')

b = B()
b.f()
</code></pre>
<p>prints</p>
<pre><code>A.f
B.f
</code></pre>
| 1 | 2016-08-25T11:48:29Z | [
"python",
"python-3.x",
"inheritance"
] |
What is the point of re-raising exceptions? | 39,143,878 | <p>So I've seen mention elsewhere of using the following to re-raise an exception.</p>
<pre><code>try:
    whatever()
except:
    raise
</code></pre>
<p>What is the purpose of re-raising an exception? Surely an uncaught exception will just raise to the top anyway? i.e:</p>
<pre><code>try:
    int("bad")
except:
    raise
</code></pre>
<p>has identical output to:</p>
<pre><code>int("bad")
</code></pre>
<p>i.e. I get a ValueError in the console.</p>
| 2 | 2016-08-25T11:19:36Z | 39,143,930 | <p>Your example code is pointless, but if you wanted to perform logging or cleanup that only occurs on failure, you could put that between the <code>except:</code> and the <code>raise</code> and you'd do that work and then proceed as if the original exception was bubbling normally.</p>
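A minimal sketch of that pattern (the helper names and the <code>logging</code> call are illustrative, not part of the question's code):

```python
import logging

def whatever():
    int("bad")                  # raises ValueError

def call_with_cleanup():
    try:
        whatever()
    except Exception:
        logging.exception("whatever() failed; cleaning up before re-raising")
        # ...close files, roll back transactions, etc....
        raise                   # re-raises the original ValueError, traceback intact

try:
    call_with_cleanup()
except ValueError as e:
    print("caller still sees the original error:", e)
```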
| 2 | 2016-08-25T11:22:52Z | [
"python",
"exception"
] |
What is the point of re-raising exceptions? | 39,143,878 | <p>So I've seen mention elsewhere of using the following to re-raise an exception.</p>
<pre><code>try:
    whatever()
except:
    raise
</code></pre>
<p>What is the purpose of re-raising an exception? Surely an uncaught exception will just raise to the top anyway? i.e:</p>
<pre><code>try:
    int("bad")
except:
    raise
</code></pre>
<p>has identical output to:</p>
<pre><code>int("bad")
</code></pre>
<p>i.e. I get a ValueError in the console.</p>
| 2 | 2016-08-25T11:19:36Z | 39,144,452 | <p>Imagine the following code.</p>
<p>A little setup: You are responsible for maintaining a huge database of information for example, and any loss of data would be catastrophic!</p>
<pre><code>huge_dictionary = {'lots_of_important': ['stuffs']}

try:
    check_data(new_data)  # make sure the data is in the correct format
    huge_dictionary['lots_of_important'].append(new_data)
except:
    data_writer.backup(huge_dictionary)
    data_writer.close()
    # and any other last-second changes
    raise
</code></pre>
| 1 | 2016-08-25T11:47:47Z | [
"python",
"exception"
] |
Querying usb device by DeviceID with WQL | 39,143,972 | <p>I can get the deviceID from WMI, then I want to use that deviceID to check whether the device is in an enabled\disabled state and whether its status is OK or not; basically I want to use WQL to query that device later on using this usb device's unique DeviceID. Here is a code example that I used and got an exception with:</p>
<pre><code>import wmi
devid = "USB\VID_04F2&PID_B315\6&EF94D1A&0&6"
c = wmi.WMI()
q2 = "SELECT * FROM Win32_PnPEntity WHERE DeviceID = " + devid + " "
dev = c.query(q2)
</code></pre>
<p>When I run this code I get the following error:</p>
<blockquote>
<p>Traceback (most recent call last):<br>
File "", line 1, in <br>
File "C:\Python27\lib\site-packages\wmi.py", line 1009, in query<br>
return [ _wmi_object (obj, instance_of, fields) for obj in self._raw_query(wql) ]<br>
File "C:\Python27\lib\site-packages\win32com\client\util.py", line 84, in next return _get_good_object_(self.<em>iter</em>.next(), resultCLSID = self.resultCLSID)<br>
pywintypes.com_error: (-2147217385, 'OLE error 0x80041017', None, None)</p>
</blockquote>
<p>Probably my wql query is wrong somehow; could you give me an example of the right way to compose the query?</p>
| 0 | 2016-08-25T11:25:04Z | 39,150,317 | <p><code>\</code> is a <a href="https://msdn.microsoft.com/en-us/library/aa394054(v=vs.85).aspx" rel="nofollow">special character</a> in WQL and must be escaped with a backslash so your <code>devid</code> should be this:</p>
<pre><code>devid = "'USB\\VID_04F2&PID_B315\\6&EF94D1A&0&6'"
</code></pre>
<p>Edit: I also noticed that you aren't wrapping the constant so I added a single quote around the value.</p>
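Putting both fixes together, here is a hedged sketch of building the query string programmatically (string construction only, no WMI call is made here):

```python
devid = "USB\\VID_04F2&PID_B315\\6&EF94D1A&0&6"

# WQL treats '\' as an escape character, so every backslash must be doubled,
# and the value must be wrapped in single quotes because it is a string literal
escaped = devid.replace("\\", "\\\\")
q2 = "SELECT * FROM Win32_PnPEntity WHERE DeviceID = '%s'" % escaped
print(q2)
```

The resulting string can then be passed to `c.query(q2)` as in the question.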
| 0 | 2016-08-25T16:23:56Z | [
"python",
"wmi",
"wql"
] |
python request post doesn't submit | 39,144,120 | <p>I have a problem with requests.post: instead of returning the html code with the results, I get back the html code of the starting page. </p>
<pre><code>import requests

def test(pdb):
    URL = "http://capture.caltech.edu/"
    r = requests.post(URL, files={"upfile": open(pdb)})
    content = r.text
    print(content)
    print(r.headers)

def main():
    test("Model.pdb")
</code></pre>
<p>Could it be that I have to define which postmethod I want to use? Because there are two in the html file. If this is the case how do I do that?(I want to use the second one.)</p>
<pre><code><FORM ACTION="result.cgi" METHOD=POST>
<form action="capture_ul.cgi" method="post" enctype="multipart/form-data">
</code></pre>
<p>I am aware that there are similar questions here but the answers there didn't help because the mistake was that params was used instead of files, which shouldn't be a problem here.</p>
<p>Thanks in advance.</p>
| 1 | 2016-08-25T11:31:17Z | 39,144,535 | <p>1 - You are posting to the wrong url, it should be <code>http://capture.caltech.edu/capture_ul.cgi</code>.</p>
<p>2 - There's an hidden field (<code>name='note'</code>) that must be sent (value of an empty string will be enough).</p>
<pre><code>...
def test(pdb):
    URL = "http://capture.caltech.edu/capture_ul.cgi"
    r = requests.post(URL, files={"upfile": open(pdb)}, data={'note': ''})
    content = r.text
    print(content)
    print(r.headers)
...
</code></pre>
| 2 | 2016-08-25T11:51:14Z | [
"python"
] |
Python write to ram file when using command line, ghostscript | 39,144,281 | <p>I want to run this command from python:<br>
<code>gs.exe -sDEVICE=jpeg -dTextAlphaBits=4 -r300 -o a.jpg a.pdf</code><br>
Using ghostscript, to convert pdf to series of images. How do I use the RAM for the input and output files? Is there something like <code>StringIO</code> that gives you a file path?</p>
<p>I noticed there's a python ghostscript library, but it does not seem to give much more over the command line</p>
| 1 | 2016-08-25T11:39:00Z | 39,147,540 | <p>You can't use RAM for the input and output file using the Ghostscript demo code, it doesn't support it. You <strong>can</strong> pipe input from stdin and out to stdout but that's it for the standard code.</p>
<p>You can use the Ghostscript API to feed data from any source, and you can write your own device (or co-opt the display device) to have the page buffer (which is what the input is rendered to) made available elsewhere. Provided you have enough memory to hold the entire page of course.</p>
<p>Doing that will require you to write code to interface with the Ghostscript shared object or DLL of course. Possibly the Python library does this, I wouldn't know not being a Python developer.</p>
<p>I suspect that the pointer from John Coleman is sufficient for your needs though.</p>
| 1 | 2016-08-25T14:07:37Z | [
"python",
"file",
"cmd",
"ghostscript",
"ram"
] |
In this short recursive function `list_sum(aList)`, the finish condition is `if not aList: return 0`. I see no logic in why this condition works | 39,144,343 | <p>I am learning the recursive functions. I completed an exercise, but in a different way than proposed.</p>
<p>"Write a recursive function which takes a list argument and returns the sum of its integers."</p>
<pre><code>L = [0, 1, 2, 3, 4] # The sum of elements will be 10
</code></pre>
<p>My solution is:</p>
<pre><code>def list_sum(aList):
    count = len(aList)
    if count == 0:
        return 0
    count -= 1
    return aList[0] + list_sum(aList[1:])
</code></pre>
<p>The proposed solution is:</p>
<pre><code>def proposed_sum(aList):
    if not aList:
        return 0
    return aList[0] + proposed_sum(aList[1:])
</code></pre>
<p>My solution is very clear in how it works.</p>
<p>The proposed solution is shorter, but it is not clear to me why the function works. How does <code>if not aList</code> even happen? I mean, how would the rest of the code fulfill a <code>not aList</code>, if <code>not aList</code> means it checks for True/False, but how is it True/False here?</p>
<p>I understand that <code>return 0</code> causes the recursion to stop.</p>
<p>As a side note, executing without <code>if not aList</code> throws IndexError: list index out of range.</p>
<p>Also, timeit-1million says my function is slower. It takes 3.32 seconds while the proposed takes 2.26. Which means I gotta understand the proposed solution. </p>
| 0 | 2016-08-25T11:41:54Z | 39,144,457 | <pre><code>not aList
</code></pre>
<p>return True if there is no elements in aList. That if statement in the solution covers edge case and checks if input parameter is not empty list.</p>
| 1 | 2016-08-25T11:48:01Z | [
"python",
"python-2.7",
"recursion"
] |
In this short recursive function `list_sum(aList)`, the finish condition is `if not aList: return 0`. I see no logic in why this condition works | 39,144,343 | <p>I am learning the recursive functions. I completed an exercise, but in a different way than proposed.</p>
<p>"Write a recursive function which takes a list argument and returns the sum of its integers."</p>
<pre><code>L = [0, 1, 2, 3, 4] # The sum of elements will be 10
</code></pre>
<p>My solution is:</p>
<pre><code>def list_sum(aList):
    count = len(aList)
    if count == 0:
        return 0
    count -= 1
    return aList[0] + list_sum(aList[1:])
</code></pre>
<p>The proposed solution is:</p>
<pre><code>def proposed_sum(aList):
    if not aList:
        return 0
    return aList[0] + proposed_sum(aList[1:])
</code></pre>
<p>My solution is very clear in how it works.</p>
<p>The proposed solution is shorter, but it is not clear to me why the function works. How does <code>if not aList</code> even happen? I mean, how would the rest of the code fulfill a <code>not aList</code>, if <code>not aList</code> means it checks for True/False, but how is it True/False here?</p>
<p>I understand that <code>return 0</code> causes the recursion to stop.</p>
<p>As a side note, executing without <code>if not aList</code> throws IndexError: list index out of range.</p>
<p>Also, timeit-1million says my function is slower. It takes 3.32 seconds while the proposed takes 2.26. Which means I gotta understand the proposed solution. </p>
| 0 | 2016-08-25T11:41:54Z | 39,144,494 | <p>On the final call of the function, <code>aList</code> has no elements left: it is an empty list, <code>[]</code>. An empty list is falsy in Python, so <code>not aList</code> evaluates to <code>True</code>. You keep cutting off the first element of the list, so eventually only the empty list remains; when you reach it you know you're done.</p>
<p>If you don't use that condition, the function will try to index an element that doesn't exist, so it throws that error.</p>
| 1 | 2016-08-25T11:49:26Z | [
"python",
"python-2.7",
"recursion"
] |
In this short recursive function `list_sum(aList)`, the finish condition is `if not aList: return 0`. I see no logic in why this condition works | 39,144,343 | <p>I am learning the recursive functions. I completed an exercise, but in a different way than proposed.</p>
<p>"Write a recursive function which takes a list argument and returns the sum of its integers."</p>
<pre><code>L = [0, 1, 2, 3, 4] # The sum of elements will be 10
</code></pre>
<p>My solution is:</p>
<pre><code>def list_sum(aList):
count = len(aList)
if count == 0:
return 0
count -= 1
return aList[0] + list_sum(aList[1:])
</code></pre>
<p>The proposed solution is:</p>
<pre><code>def proposed_sum(aList):
if not aList:
return 0
return aList[0] + proposed_sum(aList[1:])
</code></pre>
<p>My solution is very clear in how it works.</p>
<p>The proposed solution is shorter, but it is not clear for me why does the function work. How does <code>if not aList</code> even happen? I mean, how would the rest of the code fulfill a <code>not aList</code>, if <code>not aList</code> means it checks for True/False, but how is it True/False here?</p>
<p>I understand that <code>return 0</code> causes the recursion to stop.</p>
<p>As a side note, executing without <code>if not aList</code> throws IndexError: list index out of range.</p>
<p>Also, timeit-1million says my function is slower. It takes 3.32 seconds while the proposed takes 2.26. Which means I gotta understand the proposed solution. </p>
| 0 | 2016-08-25T11:41:54Z | 39,144,641 | <p>Python considers as False multiple values:</p>
<ul>
<li>False (of course)</li>
<li>0</li>
<li>None</li>
<li>empty collections (dictionaries, lists, tuples)</li>
<li>empty strings ('', "", '''''', """""", r'', u"", etc...)</li>
<li>any other object whose <a href="https://docs.python.org/2/reference/datamodel.html?highlight=__nonzero__#object.__nonzero__" rel="nofollow">__nonzero__</a> method returns False (renamed <code>__bool__</code> in Python 3)</li>
</ul>
<p>In your case, the list is evaluated as a boolean: an empty list is considered False, and a non-empty list True. So <code>if not aList:</code> is just a shorter way to write <code>if len(aList) == 0:</code></p>
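<p>A small check of the falsy values listed above (this snippet is my addition, not part of the original answer; the question is tagged Python 2.7, where the truth-value hook is <code>__nonzero__</code>, but defining <code>__bool__</code> as an alias keeps it working on Python 3 too):</p>

```python
# Every value in this tuple is falsy, so `not value` is True for all.
for value in (False, 0, None, [], (), {}, ''):
    assert not value

class AlwaysFalse(object):
    def __nonzero__(self):      # Python 2 truth-value hook
        return False
    __bool__ = __nonzero__      # Python 3 name for the same hook

print(bool(AlwaysFalse()))  # False
```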
<hr>
<p>In addition, concerning your follow-up question in the comments, consider the last line of the function:</p>
<pre><code>return aList[0] + proposed_sum(aList[1:])
</code></pre>
<p>This line calls the function again, but with a subset of the original list (the original list minus its first element). Each recursion removes one more element from the list being passed, so after a certain number of recursions the passed list is empty.</p>
| 1 | 2016-08-25T11:55:40Z | [
"python",
"python-2.7",
"recursion"
] |
In this short recursive function `list_sum(aList)`, the finish condition is `if not aList: return 0`. I see no logic in why this condition works | 39,144,343 | <p>I am learning the recursive functions. I completed an exercise, but in a different way than proposed.</p>
<p>"Write a recursive function which takes a list argument and returns the sum of its integers."</p>
<pre><code>L = [0, 1, 2, 3, 4] # The sum of elements will be 10
</code></pre>
<p>My solution is:</p>
<pre><code>def list_sum(aList):
count = len(aList)
if count == 0:
return 0
count -= 1
return aList[0] + list_sum(aList[1:])
</code></pre>
<p>The proposed solution is:</p>
<pre><code>def proposed_sum(aList):
if not aList:
return 0
return aList[0] + proposed_sum(aList[1:])
</code></pre>
<p>My solution is very clear in how it works.</p>
<p>The proposed solution is shorter, but it is not clear for me why does the function work. How does <code>if not aList</code> even happen? I mean, how would the rest of the code fulfill a <code>not aList</code>, if <code>not aList</code> means it checks for True/False, but how is it True/False here?</p>
<p>I understand that <code>return 0</code> causes the recursion to stop.</p>
<p>As a side note, executing without <code>if not aList</code> throws IndexError: list index out of range.</p>
<p>Also, timeit-1million says my function is slower. It takes 3.32 seconds while the proposed takes 2.26. Which means I gotta understand the proposed solution. </p>
| 0 | 2016-08-25T11:41:54Z | 39,144,684 | <p>You are counting the items in the list, and the proposed one check if it's empty with <code>if not aList</code> this is equals to <code>len(aList) == 0</code>, so both of you use the same logic.</p>
<p>But, you're doing <code>count -= 1</code>, this has no sense since when you use recursion, you pass the list quiting one element, so here you lose some time.</p>
<p>According to <a href="http://www.python.org/dev/peps/pep-0008/#programming-recommendations" rel="nofollow">PEP 8</a>, this is the proper way:</p>
<blockquote>
<p>⢠For sequences, (strings, lists, tuples), use the fact that empty
sequences are false.</p>
<pre><code>Yes: if not seq:
if seq:
No: if len(seq)
if not len(seq)
</code></pre>
</blockquote>
<p>Here are my thoughts on why:</p>
<p>The implicit truthiness check is faster than calling <code>len</code>, because <code>len</code> is a function: it works by calling the object's <code>__len__</code> method, and the result then still has to be compared to zero. The truthiness check asks the list directly whether it is empty.</p>
<p>Both approaches discover that there is no item left, but one does it more directly.</p>
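<p>A rough <code>timeit</code> sketch of the two checks (my addition; absolute numbers vary by machine, so no particular timing is claimed):</p>

```python
import timeit

# Compare the two emptiness checks on an empty list.
# The point is that `not seq` skips the len() function call entirely.
t_truthy = timeit.timeit('not seq', setup='seq = []', number=100000)
t_len = timeit.timeit('len(seq) == 0', setup='seq = []', number=100000)
print('not seq       : %.4fs' % t_truthy)
print('len(seq) == 0 : %.4fs' % t_len)
```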
| 1 | 2016-08-25T11:57:18Z | [
"python",
"python-2.7",
"recursion"
] |
In this short recursive function `list_sum(aList)`, the finish condition is `if not aList: return 0`. I see no logic in why this condition works | 39,144,343 | <p>I am learning the recursive functions. I completed an exercise, but in a different way than proposed.</p>
<p>"Write a recursive function which takes a list argument and returns the sum of its integers."</p>
<pre><code>L = [0, 1, 2, 3, 4] # The sum of elements will be 10
</code></pre>
<p>My solution is:</p>
<pre><code>def list_sum(aList):
count = len(aList)
if count == 0:
return 0
count -= 1
return aList[0] + list_sum(aList[1:])
</code></pre>
<p>The proposed solution is:</p>
<pre><code>def proposed_sum(aList):
if not aList:
return 0
return aList[0] + proposed_sum(aList[1:])
</code></pre>
<p>My solution is very clear in how it works.</p>
<p>The proposed solution is shorter, but it is not clear for me why does the function work. How does <code>if not aList</code> even happen? I mean, how would the rest of the code fulfill a <code>not aList</code>, if <code>not aList</code> means it checks for True/False, but how is it True/False here?</p>
<p>I understand that <code>return 0</code> causes the recursion to stop.</p>
<p>As a side note, executing without <code>if not aList</code> throws IndexError: list index out of range.</p>
<p>Also, timeit-1million says my function is slower. It takes 3.32 seconds while the proposed takes 2.26. Which means I gotta understand the proposed solution. </p>
| 0 | 2016-08-25T11:41:54Z | 39,144,703 | <p>For understand this function, let's run it step by step :
step 0 :</p>
<pre><code>L=[0,1,2,3,4]
proposed_sum([0,1,2,3,4])
L != []
return l[0] + proposed_sum([1,2,3,4])
</code></pre>
<p>Step 1, compute proposed_sum([1,2,3,4]):</p>
<pre><code>proposed_sum([1,2,3,4])
aList != []
return aList[0] + proposed_sum([2,3,4])
</code></pre>
<p>Step 2, compute proposed_sum([2,3,4]):</p>
<pre><code>proposed_sum([2,3,4])
aList != []
return aList[0] + proposed_sum([3,4])
</code></pre>
<p>Step 3, compute proposed_sum([3,4]):</p>
<pre><code>proposed_sum([3,4])
aList != []
return aList[0] + proposed_sum([4])
</code></pre>
<p>Step 4, compute proposed_sum([4]):</p>
<pre><code>proposed_sum([4])
aList != []
return aList[0] + proposed_sum([])
</code></pre>
<p>Step 5, compute proposed_sum([]):</p>
<pre><code>proposed_sum([])
aList == []
return 0
</code></pre>
<p>Step 6, substitute the results back up the call chain:</p>
<pre><code>proposed_sum([0,1,2,3,4])
</code></pre>
<p>becomes</p>
<pre><code>0 + 1 + 2 + 3 + 4 + 0 = 10
</code></pre>
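<p>The trace above can be made visible at runtime with an instrumented variant of the function (my addition; the hypothetical <code>depth</code> parameter exists only for indentation):</p>

```python
# A version of proposed_sum that prints each recursive call,
# reproducing the step-by-step trace above.
def traced_sum(aList, depth=0):
    print('  ' * depth + 'traced_sum(%r)' % (aList,))
    if not aList:
        return 0
    return aList[0] + traced_sum(aList[1:], depth + 1)

print('total =', traced_sum([0, 1, 2, 3, 4]))  # total = 10
```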
| 1 | 2016-08-25T11:58:21Z | [
"python",
"python-2.7",
"recursion"
] |