title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
expand plot for readability without expanding lines | 39,267,981 | <p>I am plotting 2 lines and a dot; the X axis is a date range. The dot is most important, but it appears on the boundary of the plot. I want to "expand" the plot further right so that the dot position is more visible.
In other words I want to expand the X axis without adding new values to Y values of lines. However if I just add a few dates to X values of lines I get the "x and y dimensions must be equal" error. I tried to add a few np.NaN values to Y so that dimensions are equal, but then I get an error "integer required".
My plot:
<a href="http://i.stack.imgur.com/BwYp0.png" rel="nofollow"><img src="http://i.stack.imgur.com/BwYp0.png" alt="enter image description here"></a>
My code:</p>
<pre><code>fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
plot_x = train_original.index.values
train_y = train_original.values
ax1.plot(plot_x, train_y, 'grey')
x = np.concatenate([np.array([train_original.index.values[-1]]), test_original.index.values])
y = np.concatenate([np.array([train_original.dropna().values[-1]]), test_original.dropna().values])
ax1.plot(x, y, color='grey')
ax1.plot(list(predicted.index.values), list(predicted.values), 'ro')
ax1.axvline(x=train_end, alpha=0.7, linestyle='--',color='blue')
plt.show()
</code></pre>
| 0 | 2016-09-01T09:39:17Z | 39,268,167 | <p>Just change the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xlim" rel="nofollow">xlim()</a>. Something like:</p>
<pre><code>xmin, xmax = plt.xlim() # return the current xlim
plt.xlim(xmax=xmax+1)
</code></pre>
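A headless check of this pattern (the figure, data values, and Agg backend below are illustrative assumptions, not taken from the question):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [1, 4, 9], "ro")  # stand-in for the question's data
xmin, xmax = ax.get_xlim()           # current limits
ax.set_xlim(xmax=xmax + 0.2 * (xmax - xmin))  # pad the right by 20% of the range
```

Padding by a fraction of the existing range scales better than a fixed `+1`, which matters on date axes where the x units are days.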
| 2 | 2016-09-01T09:47:27Z | [
"python",
"matplotlib",
"plot"
] |
expand plot for readability without expanding lines | 39,267,981 | <p>I am plotting 2 lines and a dot; the X axis is a date range. The dot is most important, but it appears on the boundary of the plot. I want to "expand" the plot further right so that the dot position is more visible.
In other words I want to expand the X axis without adding new values to Y values of lines. However if I just add a few dates to X values of lines I get the "x and y dimensions must be equal" error. I tried to add a few np.NaN values to Y so that dimensions are equal, but then I get an error "integer required".
My plot:
<a href="http://i.stack.imgur.com/BwYp0.png" rel="nofollow"><img src="http://i.stack.imgur.com/BwYp0.png" alt="enter image description here"></a>
My code:</p>
<pre><code>fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
plot_x = train_original.index.values
train_y = train_original.values
ax1.plot(plot_x, train_y, 'grey')
x = np.concatenate([np.array([train_original.index.values[-1]]), test_original.index.values])
y = np.concatenate([np.array([train_original.dropna().values[-1]]), test_original.dropna().values])
ax1.plot(x, y, color='grey')
ax1.plot(list(predicted.index.values), list(predicted.values), 'ro')
ax1.axvline(x=train_end, alpha=0.7, linestyle='--',color='blue')
plt.show()
</code></pre>
| 0 | 2016-09-01T09:39:17Z | 39,268,548 | <p>There are a couple of ways to do this.</p>
<p>An easy, automatic way to do this, without needing knowledge of the existing <code>xlim</code> is to use <a href="http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.margins" rel="nofollow"><code>ax.margins</code></a>. This will add a certain fraction of the data limits to either side of the plot. For example:</p>
<pre><code>ax.margins(x=0.1)
</code></pre>
<p>will add 10% of the current x range to both ends of the plot.</p>
<p>Another method is to explicitly set the x limits using <a href="http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.set_xlim" rel="nofollow"><code>ax.set_xlim</code></a>.</p>
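A quick check of the <code>margins</code> behaviour (the figure and data here are illustrative assumptions, using the headless Agg backend):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [1, 4, 9], "ro")
before = ax.get_xlim()
ax.margins(x=0.1)      # pad both ends by 10% of the data range
after = ax.get_xlim()
```

Since the default margin is 5%, asking for 10% widens the limits on both sides.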
| 3 | 2016-09-01T10:04:04Z | [
"python",
"matplotlib",
"plot"
] |
Combine Rows in Pandas DataFrame | 39,268,010 | <p>I have financial performance Indicators for different companies, one row per year. Now I would like to have all the indicators per company over a specific range of years in one row.</p>
<p>Now my data looks similar to this:</p>
<pre><code>import numpy as np
import pandas as pd
startyear = 2014
endyear = 2015
df = pd.DataFrame(np.array([
['AAPL', 2014, 0.2, 0.4, 1.5],
['AAPL', 2015, 0.3, 0.4, 2.0],
['AAPL', 2016, 0.2, 0.3, 1.5],
['GOGL', 2014, 0.4, 0.5, 0.5],
['GOGL', 2015, 0.6, 0.8, 1.0],
['GOGL', 2016, 0.3, 0.5, 2.0]]),
columns=['Name', 'Year', 'ROE', 'ROA', 'DE'])
newcolumns = (df.columns + [str(startyear)]).append(df.columns + [str(endyear)])
dfnew=pd.DataFrame(columns=newcolumns)
</code></pre>
<p>What I would like to have is (e.g. only years 2014 & 2015):</p>
<pre><code>Name ROE2014 ROA2014 DE2014 ROE2015 ROA2015 DE2015
AAPL 0.2 0.4 1.5 0.3 0.4 2.0
GOOGL 0.4 0.5 0.5 0.6 0.8 1.0
</code></pre>
<p>So far I only managed to get the new column names, but somehow I can't get my head around how to fill this new DataFrame.</p>
| 0 | 2016-09-01T09:40:31Z | 39,271,199 | <p>Probably easier to create the new DataFrame, then adjust the column names:</p>
<pre><code># limit to data you want
dfnew = df[df.Year.isin(['2014', '2015'])]
# set index to 'Name' and pivot 'Year's into the columns
dfnew = dfnew.set_index(['Name', 'Year']).unstack()
# sort the columns by year
dfnew = dfnew.sortlevel(1, axis=1)
# rename columns
dfnew.columns = ["".join(a) for a in dfnew.columns.values]
# put 'Name' back into columns
dfnew = dfnew.reset_index()
</code></pre>
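Note that `sortlevel` was later deprecated; here is a sketch of the same pipeline using `sort_index` as the modern equivalent (an assumption on my part), run on a trimmed version of the question's data:

```python
import numpy as np
import pandas as pd

# np.array coerces the mixed types to strings, as in the question's setup
df = pd.DataFrame(np.array([
    ['AAPL', 2014, 0.2, 0.4, 1.5],
    ['AAPL', 2015, 0.3, 0.4, 2.0],
    ['GOGL', 2014, 0.4, 0.5, 0.5],
    ['GOGL', 2015, 0.6, 0.8, 1.0]]),
    columns=['Name', 'Year', 'ROE', 'ROA', 'DE'])

dfnew = df[df.Year.isin(['2014', '2015'])]          # limit to the wanted years
dfnew = dfnew.set_index(['Name', 'Year']).unstack()  # pivot years into columns
dfnew = dfnew.sort_index(axis=1, level=1)            # group columns by year
dfnew.columns = ["".join(a) for a in dfnew.columns.values]  # e.g. 'ROE2014'
dfnew = dfnew.reset_index()                          # put 'Name' back
```

The result has one row per company with the year suffixed onto each indicator name.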
| 2 | 2016-09-01T12:08:29Z | [
"python",
"dataframe"
] |
How to compare Enums in Python? | 39,268,052 | <p>Since Python 3.4, the <code>Enum</code> class exists.</p>
<p>I am writing a program, where some constants have a specific order and I wonder which way is the most pythonic to compare them:</p>
<pre><code>class Information(Enum):
ValueOnly = 0
FirstDerivative = 1
SecondDerivative = 2
</code></pre>
<p>Now there is a method, which needs to compare a given <code>information</code> of <code>Information</code> with the different enums:</p>
<pre><code>information = Information.FirstDerivative
print(value)
if information >= Information.FirstDerivative:
print(jacobian)
if information >= Information.SecondDerivative:
print(hessian)
</code></pre>
<p>The direct comparison does not work with Enums, so there are three approaches and I wonder which one is preferred:</p>
<p>Approach 1: Use values:</p>
<pre><code>if information.value >= Information.FirstDerivative.value:
...
</code></pre>
<p>Approach 2: Use IntEnum:</p>
<pre><code>class Information(IntEnum):
...
</code></pre>
<p>Approach 3: Not using Enums at all:</p>
<pre><code>class Information:
ValueOnly = 0
FirstDerivative = 1
SecondDerivative = 2
</code></pre>
<p>Each approach works: Approach 1 is a bit more verbose, Approach 2 uses the not-recommended IntEnum class, and Approach 3 seems to be the way one did this before Enum was added. </p>
<p>I tend to use Approach 1, but I am not sure. </p>
<p>Thanks for any advice!</p>
 | 0 | 2016-09-01T09:42:28Z | 39,268,706 | <p>I hadn't encountered Enum before, so I scanned the doc (<a href="https://docs.python.org/3/library/enum.html" rel="nofollow">https://docs.python.org/3/library/enum.html</a>) ... and found OrderedEnum (section 8.13.13.2). Isn't this what you want? From the doc:</p>
<pre><code>>>> class Grade(OrderedEnum):
... A = 5
... B = 4
... C = 3
... D = 2
... F = 1
...
>>> Grade.C < Grade.A
True
</code></pre>
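Worth noting: <code>OrderedEnum</code> is a recipe in the docs, not something you can import from <code>enum</code>. A self-contained version, adapted from that documentation recipe:

```python
from enum import Enum

class OrderedEnum(Enum):
    # ordering is defined by comparing member values
    def __ge__(self, other):
        if self.__class__ is other.__class__:
            return self.value >= other.value
        return NotImplemented
    def __gt__(self, other):
        if self.__class__ is other.__class__:
            return self.value > other.value
        return NotImplemented
    def __le__(self, other):
        if self.__class__ is other.__class__:
            return self.value <= other.value
        return NotImplemented
    def __lt__(self, other):
        if self.__class__ is other.__class__:
            return self.value < other.value
        return NotImplemented

class Grade(OrderedEnum):
    A = 5
    B = 4
    C = 3
    D = 2
    F = 1
```

Comparisons against anything that is not a `Grade` fall through to `NotImplemented`, so the members stay unordered with respect to plain integers.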
| 0 | 2016-09-01T10:10:48Z | [
"python",
"enums",
"compare"
] |
How to compare Enums in Python? | 39,268,052 | <p>Since Python 3.4, the <code>Enum</code> class exists.</p>
<p>I am writing a program, where some constants have a specific order and I wonder which way is the most pythonic to compare them:</p>
<pre><code>class Information(Enum):
ValueOnly = 0
FirstDerivative = 1
SecondDerivative = 2
</code></pre>
<p>Now there is a method, which needs to compare a given <code>information</code> of <code>Information</code> with the different enums:</p>
<pre><code>information = Information.FirstDerivative
print(value)
if information >= Information.FirstDerivative:
print(jacobian)
if information >= Information.SecondDerivative:
print(hessian)
</code></pre>
<p>The direct comparison does not work with Enums, so there are three approaches and I wonder which one is preferred:</p>
<p>Approach 1: Use values:</p>
<pre><code>if information.value >= Information.FirstDerivative.value:
...
</code></pre>
<p>Approach 2: Use IntEnum:</p>
<pre><code>class Information(IntEnum):
...
</code></pre>
<p>Approach 3: Not using Enums at all:</p>
<pre><code>class Information:
ValueOnly = 0
FirstDerivative = 1
SecondDerivative = 2
</code></pre>
<p>Each approach works: Approach 1 is a bit more verbose, Approach 2 uses the not-recommended IntEnum class, and Approach 3 seems to be the way one did this before Enum was added. </p>
<p>I tend to use Approach 1, but I am not sure. </p>
<p>Thanks for any advice!</p>
 | 0 | 2016-09-01T09:42:28Z | 39,269,589 | <p>You should always implement the rich comparison operators if you want to use them with an <code>Enum</code>. Using the <code>functools.total_ordering</code> class decorator, you only need to implement an <code>__eq__</code> method along with a single ordering, e.g. <code>__lt__</code>. Since <code>enum.Enum</code> already implements <code>__eq__</code>, this becomes even easier:</p>
<pre><code>>>> import enum
>>> from functools import total_ordering
>>> @total_ordering
... class Grade(enum.Enum):
... A = 5
... B = 4
... C = 3
... D = 2
... F = 1
... def __lt__(self, other):
... if self.__class__ is other.__class__:
... return self.value < other.value
... return NotImplemented
...
>>> Grade.A >= Grade.B
True
>>> Grade.A >= 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: Grade() >= int()
</code></pre>
<p>Terrible, horrible, ghastly things can happen with <code>IntEnum</code>. It was mostly included for backwards-compatibility sake, enums used to be implemented by subclassing <code>int</code>. From the <a href="https://docs.python.org/3/library/enum.html#intenum" rel="nofollow">docs</a>:</p>
<blockquote>
<p>For the vast majority of code, Enum is strongly recommended, since
IntEnum breaks some semantic promises of an enumeration (by being
comparable to integers, and thus by transitivity to other unrelated
enumerations). It should be used only in special cases where thereâs
no other choice; for example, when integer constants are replaced with
enumerations and backwards compatibility is required with code that
still expects integers.</p>
</blockquote>
<p>Here's an example of why you don't want to do this:</p>
<pre><code>>>> class GradeNum(enum.IntEnum):
... A = 5
... B = 4
... C = 3
... D = 2
... F = 1
...
>>> class Suit(enum.IntEnum):
... spade = 4
... heart = 3
... diamond = 2
... club = 1
...
>>> GradeNum.A >= GradeNum.B
True
>>> GradeNum.A >= 3
True
>>> GradeNum.B == Suit.spade
True
>>>
</code></pre>
| 1 | 2016-09-01T10:52:04Z | [
"python",
"enums",
"compare"
] |
Understand pandas groupby/apply behaviour | 39,268,147 | <p>Let's take the following DataFrame:</p>
<pre><code> location outlook play players temperature
0 Hamburg sunny True 2.00 25.00
1 Berlin sunny True 2.00 25.00
2 Stuttgart NaN True 4.00 19.00
3 NaN NaN NaN nan nan
4 Flensburg overcast False 0.00 33.00
5 Hannover rain NaN 0.00 27.00
6 Heidelberg rain NaN 0.00 21.50
7 Frankfurt overcast True 2.00 26.00
8 Augsburg sunny True 2.00 13.00
9 Koeln sunny True 2.00 16.00
</code></pre>
<p>I run</p>
<pre><code>g = df(by=["outlook", "play"])
def gfunc(x):
print(x)
g.apply(gfunc)
</code></pre>
<p>and this is printed</p>
<pre><code> location outlook play players temperature
4 Flensburg overcast False 0.00 33.00
location outlook play players temperature
4 Flensburg overcast False 0.00 33.00
location outlook play players temperature
7 Frankfurt overcast True 2.00 26.00
location outlook play players temperature
0 Hamburg sunny True 2.00 25.00
1 Berlin sunny True 2.00 25.00
8 Augsburg sunny True 2.00 13.00
9 Koeln sunny True 2.00 16.00
</code></pre>
<p>I don't mind not returning anything; I just want to understand why it prints the exact same output twice and then a couple of different outputs. Shouldn't the output of printing rather be a different subgroup every time? What am I missing?</p>
| 0 | 2016-09-01T09:46:43Z | 39,268,532 | <p>According to the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow">docs</a></p>
<blockquote>
<p>In the current implementation apply calls func twice on the first column/row to decide whether it can take a fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take effect twice for the first column/row.</p>
</blockquote>
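If the side effect (the `print`) is what matters, iterating the groups directly sidesteps the double call entirely. A sketch with a made-up frame in the spirit of the question's data:

```python
import pandas as pd

df = pd.DataFrame({'outlook': ['sunny', 'sunny', 'overcast'],
                   'play': [True, True, False],
                   'players': [2, 2, 0]})

seen = []
for key, group in df.groupby(['outlook', 'play']):
    seen.append(key)  # each group is visited exactly once, unlike apply
```

Each iteration yields a `(key, sub-DataFrame)` pair, with tuple keys when grouping by multiple columns.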
| 3 | 2016-09-01T10:03:17Z | [
"python",
"pandas"
] |
Understand pandas groupby/apply behaviour | 39,268,147 | <p>Let's take the following DataFrame:</p>
<pre><code> location outlook play players temperature
0 Hamburg sunny True 2.00 25.00
1 Berlin sunny True 2.00 25.00
2 Stuttgart NaN True 4.00 19.00
3 NaN NaN NaN nan nan
4 Flensburg overcast False 0.00 33.00
5 Hannover rain NaN 0.00 27.00
6 Heidelberg rain NaN 0.00 21.50
7 Frankfurt overcast True 2.00 26.00
8 Augsburg sunny True 2.00 13.00
9 Koeln sunny True 2.00 16.00
</code></pre>
<p>I run</p>
<pre><code>g = df(by=["outlook", "play"])
def gfunc(x):
print(x)
g.apply(gfunc)
</code></pre>
<p>and this is printed</p>
<pre><code> location outlook play players temperature
4 Flensburg overcast False 0.00 33.00
location outlook play players temperature
4 Flensburg overcast False 0.00 33.00
location outlook play players temperature
7 Frankfurt overcast True 2.00 26.00
location outlook play players temperature
0 Hamburg sunny True 2.00 25.00
1 Berlin sunny True 2.00 25.00
8 Augsburg sunny True 2.00 13.00
9 Koeln sunny True 2.00 16.00
</code></pre>
<p>I don't mind not returning anything; I just want to understand why it prints the exact same output twice and then a couple of different outputs. Shouldn't the output of printing rather be a different subgroup every time? What am I missing?</p>
| 0 | 2016-09-01T09:46:43Z | 39,268,538 | <p>I.. don't know. It's weird. I'm actually able to replicate the problem. </p>
<p>Note that you have a small mistake: you should write <code>df.groupby(["series"])</code> instead of <code>df(by=["series"])</code>.</p>
<pre><code>import seaborn as sns
iris = sns.load_dataset('iris')
</code></pre>
<p>Now this statement prints a part double. </p>
<pre><code>iris.ix[1:100:10].groupby(["species"]).apply(lambda x: print(len(x), '\n***\n', x))
</code></pre>
<p><em>Output</em></p>
<pre><code>5
***
sepal_length sepal_width petal_length petal_width species
1 4.9 3.0 1.4 0.2 setosa
11 4.8 3.4 1.6 0.2 setosa
21 5.1 3.7 1.5 0.4 setosa
31 5.4 3.4 1.5 0.4 setosa
41 4.5 2.3 1.3 0.3 setosa
5
***
sepal_length sepal_width petal_length petal_width species
1 4.9 3.0 1.4 0.2 setosa
11 4.8 3.4 1.6 0.2 setosa
21 5.1 3.7 1.5 0.4 setosa
31 5.4 3.4 1.5 0.4 setosa
41 4.5 2.3 1.3 0.3 setosa
5
***
sepal_length sepal_width petal_length petal_width species
51 6.4 3.2 4.5 1.5 versicolor
61 5.9 3.0 4.2 1.5 versicolor
71 6.1 2.8 4.0 1.3 versicolor
81 5.5 2.4 3.7 1.0 versicolor
91 6.1 3.0 4.6 1.4 versicolor
</code></pre>
<p>What is extra weird, is that if I ask for the name, it doesn't double the print. </p>
<pre><code>iris.ix[1:100:10].groupby(["species"]).apply(lambda x: print(len(x), x.name, '\n***\n', x))
</code></pre>
<p><em>Output</em></p>
<pre><code>5 setosa
***
sepal_length sepal_width petal_length petal_width species
1 4.9 3.0 1.4 0.2 setosa
11 4.8 3.4 1.6 0.2 setosa
21 5.1 3.7 1.5 0.4 setosa
31 5.4 3.4 1.5 0.4 setosa
41 4.5 2.3 1.3 0.3 setosa
5 versicolor
***
sepal_length sepal_width petal_length petal_width species
51 6.4 3.2 4.5 1.5 versicolor
61 5.9 3.0 4.2 1.5 versicolor
71 6.1 2.8 4.0 1.3 versicolor
81 5.5 2.4 3.7 1.0 versicolor
91 6.1 3.0 4.6 1.4 versicolor
</code></pre>
<p>Well. You got me! Looks like a weird bug. </p>
| 1 | 2016-09-01T10:03:33Z | [
"python",
"pandas"
] |
Generator from function prints | 39,268,377 | <p>At the moment I have a little <code>flask</code> project that calls another python file. I'm fully aware that this way is kinda awful, and so, I want to swap it for a function call while maintaining <strong>the prints getting yielded to the website</strong>. </p>
<pre><code>def get_Checks():
root = request.url_root
def func():
yield ("Inicio <br>")
with subprocess.Popen(r"python somefile.py", stdout=subprocess.PIPE, bufsize=1,
universal_newlines=True) as p:
for line in p.stdout:
yield (line + "<br>")
return Response(func())
</code></pre>
<p>I've tried to replace the file call with the function directly, but it just prints to the console.</p>
<p>I really appreciate any help you can provide.</p>
 | 2 | 2016-09-01T09:56:57Z | 39,269,182 | <p>Assuming that all the printing you want to grab is done within the same module, you can monkey-patch the <code>print</code> function of the other module. In the example below, I use a context manager to restore the original print function after the grabbing is done.</p>
<p>This is <code>mod1</code>, the module with the misbehaving function.</p>
<pre><code>def bogus_function():
print('Hello World!')
print('Line 2')
</code></pre>
<p>This is <code>mod2</code>, the module using <code>mod1.bogus_function()</code></p>
<pre><code>import io
import functools
import contextlib
import mod1
@contextlib.contextmanager
def grab_stdout(module, fd):
def monkey_print(*args, **kwargs):
kwargs['file'] = fd
print(*args, **kwargs)
setattr(module, 'print', monkey_print)
try:
yield
finally:
setattr(module, 'print', print)
def line_generator():
fd = io.StringIO()
with grab_stdout(mod1, fd):
mod1.bogus_function()
fd.seek(0)
for line in fd:
yield line.rstrip('\r\n') + '<br>'
for t in enumerate(line_generator()):
print('line %d: %r' % t)
</code></pre>
<p>The <code>grab_stdout()</code> context manager redirects print calls of <code>module</code> to the file-like object <code>fd</code>. In the function <code>line_generator()</code>, <code>grab_stdout()</code> is used to store the print output of <code>bogus_function</code> in the <code>StringIO</code> object <code>fd</code>. The rest should be self-explanatory.</p>
<p>If you don't know exactly whether print is called in other modules in the call tree of the function in question, you can modify <code>grab_stdout</code> as follows:</p>
<pre><code>import builtins
print_orig = builtins.print
@contextlib.contextmanager
def grab_stdout_global(fd):
def monkey_print(*args, **kwargs):
kwargs['file'] = fd
print_orig(*args, **kwargs)
builtins.print = monkey_print
try:
yield
finally:
builtins.print = print_orig
</code></pre>
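When the target function prints to the real `sys.stdout` (the usual case, with no custom `file=` argument), the standard library already provides this capture via `contextlib.redirect_stdout` — a minimal sketch using a stand-in function:

```python
import io
from contextlib import redirect_stdout

def bogus_function():
    print('Hello World!')
    print('Line 2')

fd = io.StringIO()
with redirect_stdout(fd):   # everything printed inside lands in fd
    bogus_function()

# turn the captured output into the '<br>'-suffixed lines the app yields
lines = [line + '<br>' for line in fd.getvalue().splitlines()]
```

This avoids monkey-patching entirely, at the cost of redirecting stdout globally for the duration of the `with` block.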
| 1 | 2016-09-01T10:33:03Z | [
"python",
"python-3.x",
"flask",
"subprocess",
"generator"
] |
Generator from function prints | 39,268,377 | <p>At the moment I have a little <code>flask</code> project that calls another python file. I'm fully aware that this way is kinda awful, and so, I want to swap it for a function call while maintaining <strong>the prints getting yielded to the website</strong>. </p>
<pre><code>def get_Checks():
root = request.url_root
def func():
yield ("Inicio <br>")
with subprocess.Popen(r"python somefile.py", stdout=subprocess.PIPE, bufsize=1,
universal_newlines=True) as p:
for line in p.stdout:
yield (line + "<br>")
return Response(func())
</code></pre>
<p>I've tried to replace the file call with the function directly, but it just prints to the console.</p>
<p>I really appreciate any help you can provide.</p>
| 2 | 2016-09-01T09:56:57Z | 39,271,254 | <p>A simple way would be to temporarily change <code>sys.stdout</code> to a file-like object, call the function, then restore <code>sys.stdout</code>. The output will be available in the file-like object.</p>
<p>Here is a working Flask app that demonstrates the method:</p>
<pre><code>import sys
from io import StringIO
from flask import Flask, request, Response
import somefile
app = Flask(__name__)
@app.route("/")
def hello():
def func():
yield ("Inicio <br>")
try:
_stdout = sys.stdout
sys.stdout = output = StringIO()
somefile.main()
output.seek(0)
for line in output:
sys.stdout = _stdout
yield '{}<br>'.format(line.rstrip())
sys.stdout = output
finally:
sys.stdout.close() # close the StringIO object
sys.stdout = _stdout # restore sys.stdout
return Response(func())
if __name__ == "__main__":
app.run()
</code></pre>
<p>Here a <a href="https://docs.python.org/3/library/io.html?highlight=pipe#io.StringIO" rel="nofollow"><code>io.StringIO</code></a> object is used to collect the standard output produced by the function, and then the lines are yielded from that object. The <code>finally</code> ensures that the original <code>sys.stdout</code> is restored afterwards. There is some additional complexity around the <code>yield</code> statement because <code>yield</code> returns control to the calling code for which stdout must be restored in case the caller also wants to print to stdout.</p>
<p>It's assumed that the function in <code>somefile.py</code> is the "main" function, and that invocation of it is guarded by a <code>if __name__ == '__main__':</code> test, something like this:</p>
<pre><code>def main():
for i in range(10):
print(i)
if __name__ == '__main__':
main()
</code></pre>
| 1 | 2016-09-01T12:11:15Z | [
"python",
"python-3.x",
"flask",
"subprocess",
"generator"
] |
Displaying rows according to Specific Values | 39,268,518 | <p>I want to select several rows according to the attributes in Type column.</p>
<p>Let's pretend I have this dataframe:</p>
<pre><code>Type | Killed | Survive
Dog 1 0
Cat 3 5
Dog 4 1
Cow 2 4
Fish 1 3
</code></pre>
<p>I would like to select the row that has Type = ['Dog', 'Cat', 'Fish']</p>
<p>This would be my desire result:</p>
<pre><code>Type | Killed | Survived
Dog 1 0
Dog 4 1
Cat 3 5
Fish 1 3
</code></pre>
<p>I know that you can use :</p>
<pre><code> df[df['Type'] == 'Dog']
</code></pre>
<p>to get dog only.</p>
<p>but I would like to know how to select more than one type.</p>
<p>I have tried this but it doesn't work:</p>
<pre><code>df[df['Type'] == 'Dog', 'Cat', 'Fish']
</code></pre>
<p>Thanks for helping me guys!</p>
| 1 | 2016-09-01T10:02:53Z | 39,268,580 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a>:</p>
<pre><code>df[df['Type'].isin(['Dog', 'Cat', 'Fish'])]
</code></pre>
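A runnable version with the question's data (reconstructed here as a plain DataFrame):

```python
import pandas as pd

df = pd.DataFrame({'Type': ['Dog', 'Cat', 'Dog', 'Cow', 'Fish'],
                   'Killed': [1, 3, 4, 2, 1],
                   'Survive': [0, 5, 1, 4, 3]})

# keep only rows whose Type is one of the wanted values
result = df[df['Type'].isin(['Dog', 'Cat', 'Fish'])]
```

Note that `isin` keeps the original row order; to get the rows grouped by type as in the desired output, sort the result afterwards.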
| 3 | 2016-09-01T10:05:20Z | [
"python",
"pandas"
] |
Selenium fails to change dropdown selection | 39,268,637 | <p>Here is the isolated HTML code:</p>
<pre><code><span style="position: relative; width: 100%; display: inline-block;">
<select id="0ac88542d16d6200fb983d094f655c76_select" class="form-control">
<option value="display_value">Number</option>
<option value="sys_class_name">Type</option>
</select>
</span>
</code></pre>
<p>Using Python Selenium, click on the dropdown element to expand it:</p>
<pre><code>search_elemets = driver.find_elements_by_xpath("//*[@class='form-control']")
search_elemets[0].click()
</code></pre>
<p>Now let's try different scenarios to change its selection to the desired value..</p>
<p>Option 1</p>
<pre><code>Select(search_elemets[0]) \
.select_by_visible_text("Number")
driver.find_elements_by_tag_name('option')[1].click()
</code></pre>
<p>Option 2</p>
<pre><code>Select(search_elemets[0]) \
.select_by_value("display_value")
driver.find_elements_by_tag_name('option')[1].send_keys(Keys.RETURN)
</code></pre>
<p>Option 3</p>
<pre><code>for option in search_elemets[0].find_elements_by_tag_name('option'):
if option.text == 'Number':
# Use one of the options below
# Option 3.1
Select(search_elemets[0]) \
.select_by_visible_text("Number")
option.click()
break
# Option 3.2
actions = ActionChains(driver)
actions.move_to_element(option)
actions.click(option)
actions.perform()
break
</code></pre>
<p>All attempts result in a visual click performed by the webdriver, but the value is not changed afterwards.</p>
| 0 | 2016-09-01T10:08:03Z | 39,269,522 | <p>Class <code>form-control</code> might give you other elements. Try to locate by partial id</p>
<pre><code>search_elemet = driver.find_element_by_css_selector('[id*="select"]')
Select(search_elemet).select_by_visible_text('Number')
# or
Select(search_elemet).select_by_value('display_value')
</code></pre>
| 0 | 2016-09-01T10:48:54Z | [
"python",
"html",
"selenium",
"selenium-webdriver",
"drop-down-menu"
] |
Selenium fails to change dropdown selection | 39,268,637 | <p>Here is the isolated HTML code:</p>
<pre><code><span style="position: relative; width: 100%; display: inline-block;">
<select id="0ac88542d16d6200fb983d094f655c76_select" class="form-control">
<option value="display_value">Number</option>
<option value="sys_class_name">Type</option>
</select>
</span>
</code></pre>
<p>Using Python Selenium, click on the dropdown element to expand it:</p>
<pre><code>search_elemets = driver.find_elements_by_xpath("//*[@class='form-control']")
search_elemets[0].click()
</code></pre>
<p>Now let's try different scenarios to change its selection to the desired value..</p>
<p>Option 1</p>
<pre><code>Select(search_elemets[0]) \
.select_by_visible_text("Number")
driver.find_elements_by_tag_name('option')[1].click()
</code></pre>
<p>Option 2</p>
<pre><code>Select(search_elemets[0]) \
.select_by_value("display_value")
driver.find_elements_by_tag_name('option')[1].send_keys(Keys.RETURN)
</code></pre>
<p>Option 3</p>
<pre><code>for option in search_elemets[0].find_elements_by_tag_name('option'):
if option.text == 'Number':
# Use one of the options below
# Option 3.1
Select(search_elemets[0]) \
.select_by_visible_text("Number")
option.click()
break
# Option 3.2
actions = ActionChains(driver)
actions.move_to_element(option)
actions.click(option)
actions.perform()
break
</code></pre>
<p>All attempts result in a visual click performed by the webdriver, but the value is not changed afterwards.</p>
 | 0 | 2016-09-01T10:08:03Z | 39,270,297 | <p>@katericata Your HTML is something like: Number, Type, Type2, Type3.</p>
<p>My WebDriver code is as follows; you can replace the dropdown item (Type2, Type3, etc.) with whichever one you want to select.</p>
<p><code>driver.findElement(By.xpath("code']")).sendKeys("Type3");</code> Let me know if there are any concerns.</p>
| 0 | 2016-09-01T11:27:13Z | [
"python",
"html",
"selenium",
"selenium-webdriver",
"drop-down-menu"
] |
Selenium fails to change dropdown selection | 39,268,637 | <p>Here is the isolated HTML code:</p>
<pre><code><span style="position: relative; width: 100%; display: inline-block;">
<select id="0ac88542d16d6200fb983d094f655c76_select" class="form-control">
<option value="display_value">Number</option>
<option value="sys_class_name">Type</option>
</select>
</span>
</code></pre>
<p>Using Python Selenium, click on the dropdown element to expand it:</p>
<pre><code>search_elemets = driver.find_elements_by_xpath("//*[@class='form-control']")
search_elemets[0].click()
</code></pre>
<p>Now let's try different scenarios to change its selection to the desired value..</p>
<p>Option 1</p>
<pre><code>Select(search_elemets[0]) \
.select_by_visible_text("Number")
driver.find_elements_by_tag_name('option')[1].click()
</code></pre>
<p>Option 2</p>
<pre><code>Select(search_elemets[0]) \
.select_by_value("display_value")
driver.find_elements_by_tag_name('option')[1].send_keys(Keys.RETURN)
</code></pre>
<p>Option 3</p>
<pre><code>for option in search_elemets[0].find_elements_by_tag_name('option'):
if option.text == 'Number':
# Use one of the options below
# Option 3.1
Select(search_elemets[0]) \
.select_by_visible_text("Number")
option.click()
break
# Option 3.2
actions = ActionChains(driver)
actions.move_to_element(option)
actions.click(option)
actions.perform()
break
</code></pre>
<p>All tries result in a visual click which is performed by the webdriver, but
the value is not changed afterwards..</p>
 | 0 | 2016-09-01T10:08:03Z | 39,281,752 | <p>I'm not sure why the action chain isn't working; I've used that on styled options and had it work. But if you don't care about mimicking the user behavior exactly as it will occur, you can use this to change the option.</p>
<pre><code> option = driver.find_element_by_xpath("//select[@id='0ac88542d16d6200fb983d094f655c76_select']/option[@value='sys_class_name']")
option.click()
</code></pre>
<p>Or even just</p>
<pre><code> driver.find_element_by_xpath("//select[@id='0ac88542d16d6200fb983d094f655c76_select']/option[@value='sys_class_name']").click()
</code></pre>
| 0 | 2016-09-01T22:22:36Z | [
"python",
"html",
"selenium",
"selenium-webdriver",
"drop-down-menu"
] |
python installations, directories and environments - good resource to understand the basics | 39,268,717 | <p>newbie here. I'm using python (3) on my mac, and although I'm able to write some (basic) scripts, I realise I have lots of confusion around where python is stored, the famous usr/bin directory, where packages are saved, etc.</p>
<p>For example I had pip installed and working fine, but then I installed miniconda and all of a sudden pip was 'managed' (for lack of a better term) by conda, some of the packages I had installed couldn't be found anymore etc.</p>
<p>This highlights just how confused I am with all of this. Can you recommend a good resource that can explain how these things work together? Ideally something for beginners :)</p>
 | 0 | 2016-09-01T10:11:18Z | 39,268,898 | <p>First of all, I suggest looking at: <a href="https://www.python.org/doc/" rel="nofollow">https://www.python.org/doc/</a></p>
<p>Then I suggest looking at: <a href="http://code.tutsplus.com/series/python-from-scratch--net-20566" rel="nofollow">http://code.tutsplus.com/series/python-from-scratch--net-20566</a></p>
| 1 | 2016-09-01T10:19:06Z | [
"python"
] |
python installations, directories and environments - good resource to understand the basics | 39,268,717 | <p>newbie here. I'm using python (3) on my mac, and although I'm able to write some (basic) scripts, I realise I have lots of confusion around where python is stored, the famous usr/bin directory, where packages are saved, etc.</p>
<p>For example I had pip installed and working fine, but then I installed miniconda and all of a sudden pip was 'managed' (for lack of a better term) by conda, some of the packages I had installed couldn't be found anymore etc.</p>
<p>This highlights just how confused I am with all of this. Can you recommend a good resource that can explain how these things work together? Ideally something for beginners :)</p>
 | 0 | 2016-09-01T10:11:18Z | 39,270,243 | <p>I started with Python a month back.
The best thing is to read the Python documentation.</p>
<p>But to begin with, you can try <a href="http://www.tutorialspoint.com/python/index.htm" rel="nofollow">http://www.tutorialspoint.com/python/index.htm</a> or <a href="https://learnpythonthehardway.org/book/ex0.html" rel="nofollow">https://learnpythonthehardway.org/book/ex0.html</a>.
I prefer Learn Python the Hard Way.</p>
<p>For classes, try <a href="https://jeffknupp.com/blog/2014/06/18/improve-your-python-python-classes-and-object-oriented-programming/" rel="nofollow">https://jeffknupp.com/blog/2014/06/18/improve-your-python-python-classes-and-object-oriented-programming/</a>.
After you are familiar with the basics, read the Python documentation.</p>
<p>Refer to these links for setting up Python on Mac:
<a href="http://docs.python-guide.org/en/latest/starting/install/osx/" rel="nofollow">http://docs.python-guide.org/en/latest/starting/install/osx/</a> and
<a href="http://www.pyladies.com/blog/Get-Your-Mac-Ready-for-Python-Programming/" rel="nofollow">http://www.pyladies.com/blog/Get-Your-Mac-Ready-for-Python-Programming/</a></p>
| 0 | 2016-09-01T11:23:53Z | [
"python"
] |
python installations, directories and environments - good resource to understand the basics | 39,268,717 | <p>newbie here. I'm using python (3) on my mac, and although I'm able to write some (basic) scripts, I realise I have lots of confusion around where python is stored, the famous usr/bin directory, where packages are saved, etc.</p>
<p>For example I had pip installed and working fine, but then I installed miniconda and all of a sudden pip was 'managed' (for lack of a better term) by conda, some of the packages I had installed couldn't be found anymore etc.</p>
<p>This highlights just how confused I am with all of this. Can you recommend a good resource that can explain how these things work together? Ideally something for beginners :)</p>
| 0 | 2016-09-01T10:11:18Z | 39,270,661 | <p>Python has a lot of stuff. The very basics you can learn on codecademy.com. </p>
<p>On python-course.org you have some more advanced topics. </p>
<p>If you want to learn something specific you should look at the official Python documentation. </p>
<p>Overall, I don't think you want to use Anaconda. It is better to just install the modules with pip, unless you are only doing scientific work or the like. </p>
| 0 | 2016-09-01T11:43:36Z | [
"python"
] |
Build 2 lists in one go while reading from file, pythonically | 39,268,792 | <p>I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed. </p>
<p>Currently I'm doing an explicit <code>for</code> loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc. </p>
<p>But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice.</p>
<p>My code right now is:</p>
<pre><code>with open('SCC.txt') as data:
for line in data:
line = line.rstrip()
if line:
edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])))
reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
</code></pre>
| 11 | 2016-09-01T10:14:47Z | 39,268,933 | <p>You <em>can't</em> create two lists in one comprehension, so, instead of doing the same operations twice on the two lists, one viable option would be to initialize one of them and then create the second one by reversing each entry in the first one. That way you don't iterate over the file twice.</p>
<p>To that end, you could create the first list <code>edge_list</code> with a comprehension (not sure why you called <code>rstrip</code> <em>again</em> on it):</p>
<pre><code>edge_list = [tuple(map(int, line.split())) for line in data]
</code></pre>
<p>And now go through each entry and reverse it with <code>[::-1]</code> in order to create its reversed sibling <code>reverse_edge_list</code>. </p>
<p><em>Using mock data</em> for <code>edge_list</code>:</p>
<pre><code>edge_list = [(1, 2), (3, 4), (5, 6)]
</code></pre>
<p>Reversing it could look like this:</p>
<pre><code>reverse_edge_list = [t[::-1] for t in edge_list]
</code></pre>
<p>Which now looks like:</p>
<pre><code>reverse_edge_list
[(2, 1), (4, 3), (6, 5)]
</code></pre>
| 5 | 2016-09-01T10:21:10Z | [
"python",
"list",
"python-3.x"
] |
Build 2 lists in one go while reading from file, pythonically | 39,268,792 | <p>I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed. </p>
<p>Currently I'm doing an explicit <code>for</code> loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc. </p>
<p>But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice.</p>
<p>My code right now is:</p>
<pre><code>with open('SCC.txt') as data:
for line in data:
line = line.rstrip()
if line:
edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])))
reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
</code></pre>
| 11 | 2016-09-01T10:14:47Z | 39,269,024 | <p>I would keep your logic as it is the <em>Pythonic</em> approach just not <em>split/rstrip</em> the same line multiple times:</p>
<pre><code>with open('SCC.txt') as data:
for line in data:
spl = line.split()
if spl:
i, j = map(int, spl)
edge_list.append((i, j))
reversed_edge_list.append((j, i))
</code></pre>
<p>Calling <em>rstrip</em> when you have already called it is redundant in itself, and even more so when you are splitting, as <em>split</em> would already remove the whitespace; splitting just once saves a lot of unnecessary work.</p>
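<p>As a quick check of that claim (with a made-up input line): <code>split()</code> with no separator already discards leading and trailing whitespace, including the newline, so a prior <code>rstrip()</code> adds nothing:</p>

```python
# split() with no separator drops surrounding whitespace,
# including the trailing newline, so rstrip() beforehand is redundant.
line = "  12 34 \n"
print(line.split())                           # ['12', '34']
print(line.rstrip().split() == line.split())  # True
```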
<p>You can also use <em>csv.reader</em> to read the data and filter empty rows once you have a single whitespace delimiting:</p>
<pre><code>from csv import reader
with open('SCC.txt') as data:
edge_list, reversed_edge_list = [], []
for i, j in filter(None, reader(data, delimiter=" ")):
i, j = int(i), int(j)
edge_list.append((i, j))
reversed_edge_list.append((j, i))
</code></pre>
<p>Or if there are multiple whitespaces delimiting you can use <code>map(str.split, data)</code>:</p>
<pre><code> for i, j in filter(None, map(str.split, data)):
i, j = int(i), int(j)
</code></pre>
<p>Whatever you choose will be faster than going over the data twice or splitting the sames lines multiple times.</p>
| 11 | 2016-09-01T10:25:31Z | [
"python",
"list",
"python-3.x"
] |
Build 2 lists in one go while reading from file, pythonically | 39,268,792 | <p>I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed. </p>
<p>Currently I'm doing an explicit <code>for</code> loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc. </p>
<p>But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice.</p>
<p>My code right now is:</p>
<pre><code>with open('SCC.txt') as data:
for line in data:
line = line.rstrip()
if line:
edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])))
reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
</code></pre>
| 11 | 2016-09-01T10:14:47Z | 39,269,110 | <p>Maybe not clearer, but shorter:</p>
<pre><code>with open('SCC.txt') as data:
process_line = lambda line, r: (int(line.rstrip().split()[r]), int(line.rstrip().split()[1-r]))
    edge_list, reversed_edge_list = map(list, zip(*[(process_line(line, 0), process_line(line, 1))
for line in data
if line.rstrip()]))
</code></pre>
| 3 | 2016-09-01T10:28:59Z | [
"python",
"list",
"python-3.x"
] |
Build 2 lists in one go while reading from file, pythonically | 39,268,792 | <p>I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed. </p>
<p>Currently I'm doing an explicit <code>for</code> loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc. </p>
<p>But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice.</p>
<p>My code right now is:</p>
<pre><code>with open('SCC.txt') as data:
for line in data:
line = line.rstrip()
if line:
edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])))
reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))
</code></pre>
| 11 | 2016-09-01T10:14:47Z | 39,269,494 | <p>Here comes a solution</p>
<p>A test file:</p>
<pre><code>In[19]: f = ["{} {}".format(i,j) for i,j in zip(xrange(10), xrange(10, 20))]
In[20]: f
Out[20]:
['0 10',
'1 11',
'2 12',
'3 13',
'4 14',
'5 15',
'6 16',
'7 17',
'8 18',
'9 19']
</code></pre>
<p>One liner using comprehension, zip and map:</p>
<pre><code>In[27]: l, l2 = map(list,zip(*[(tuple(map(int, x.split())), tuple(map(int, x.split()))[::-1]) for x in f]))
In[28]: l
Out[28]:
[(0, 10),
(1, 11),
(2, 12),
(3, 13),
(4, 14),
(5, 15),
(6, 16),
(7, 17),
(8, 18),
(9, 19)]
In[29]: l2
Out[29]:
[(10, 0),
(11, 1),
(12, 2),
(13, 3),
(14, 4),
(15, 5),
(16, 6),
(17, 7),
(18, 8),
(19, 9)]
</code></pre>
<p>Explaining, with <code>[(tuple(map(int, x.split())), tuple(map(int, x.split()))[::-1]) for x in f]</code> we build a list containing a pair tuple with the pair tuples and its reversed forms:</p>
<pre><code>In[24]: [(tuple(map(int, x.split())), tuple(map(int, x.split()))[::-1]) for x in f]
Out[24]:
[((0, 10), (10, 0)),
((1, 11), (11, 1)),
((2, 12), (12, 2)),
((3, 13), (13, 3)),
((4, 14), (14, 4)),
((5, 15), (15, 5)),
((6, 16), (16, 6)),
((7, 17), (17, 7)),
((8, 18), (18, 8)),
((9, 19), (19, 9))]
</code></pre>
<p>Applying <code>zip</code> to the unpacked form, we split the tuples inside the main tuple, so we have 2 tuples: one containing the pairs and the other containing the reversed pairs:</p>
<pre><code>In[25]: zip(*[(tuple(map(int, x.split())), tuple(map(int, x.split()))[::-1]) for x in f])
Out[25]:
[((0, 10),
(1, 11),
(2, 12),
(3, 13),
(4, 14),
(5, 15),
(6, 16),
(7, 17),
(8, 18),
(9, 19)),
((10, 0),
(11, 1),
(12, 2),
(13, 3),
(14, 4),
(15, 5),
(16, 6),
(17, 7),
(18, 8),
(19, 9))]
</code></pre>
<p>Almost there, we just use <code>map</code> to transform those tuples into lists.</p>
<p><strong>EDIT:</strong>
as @PadraicCunningham asked, for filtering empty lines, just add an <code>if x</code> in the comprehension <code>[ ... for x in f if x]</code></p>
| 3 | 2016-09-01T10:47:40Z | [
"python",
"list",
"python-3.x"
] |
VALIGN in reportlab TableStyle apparently without effect | 39,268,829 | <p>So, for a while now I have been struggling with this one. I know there are a lot of similar questions with good answers, and I have tried these answers, but the code I have basically reflects the answers given.</p>
<p>I am writing code to automatically generate matching exercises for worksheets. All this information should be in a table. And the text should all be aligned to the top of the cells.</p>
<p>Here is what I have now:</p>
<pre><code>from reportlab.lib.pagesizes import A4
from reportlab.platypus import SimpleDocTemplate, Paragraph, Table, TableStyle
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import cm
document = []
doc = SimpleDocTemplate('example.pdf', pagesize=A4, rightMargin=72, leftMargin=72, topMargin=72)
styles = getSampleStyleSheet()
definitions = []
i, a = 1, 65
table = []
for x in range(1, 10):
line = []
line.append(Paragraph(str(i), styles['BodyText']))
line.append(Paragraph('Vocabulary', styles['BodyText']))
line.append(Paragraph(chr(a), styles['BodyText']))
line.append(Paragraph('Often a multi-line definition of the vocabulary. But then, sometimes something short and sweet.', styles['BodyText']))
table.append(line)
i += 1
a += 1
t = Table(table, colWidths=(1*cm, 4*cm, 1*cm, None))
t.setStyle(TableStyle([
('VALIGN', (1, 1), (-1, -1), 'TOP')
]))
document.append(t)
doc.build(document)
</code></pre>
<p>What am I overlooking?</p>
| 1 | 2016-09-01T10:16:26Z | 39,269,293 | <p>The problem is the way you are indexing the <code>TableStyle</code>. The indexing in Reportlab starts at <code>(0, 0)</code> for first row, first column. So in your case <code>(1, 1)</code> only applies the styling to everything below the first row and right of the first column.</p>
<p>The correct way would be to use:</p>
<pre><code>('VALIGN', (0, 0), (-1, -1), 'TOP')
</code></pre>
<p>This will apply the styling to all cells in the <code>Table</code>.</p>
| 1 | 2016-09-01T10:37:43Z | [
"python",
"formatting",
"reportlab"
] |
Hello, i have written the below code to go out and look on a Cisco device, return output, write to a file and then save the file and close it off | 39,268,831 | <p>Okay, massive edit to better ask what I'm seeking:</p>
<p>I have the below code which, thanks to the answer below, will now look in the .txt file for the IP address, go to the device, and return the commands; it then prints to the requested file and everything works.</p>
<p>What I can't get it to do is loop through a series of IP addresses and return the commands for different devices. The script times out when I add more than one IP to the .txt list. I know the addresses themselves are good, because adding the same address twice also fails, while a single address in the file works seamlessly.</p>
<p>I am seeking a way to loop through 10 IP addresses and run the same commands when all is said and done.</p>
<pre><code>from __future__ import print_function
from netmiko import ConnectHandler
import sys
import time
import select
import paramiko
import re
fd = open(r'C:\Users\NewdayTest.txt','w')
old_stdout = sys.stdout
sys.stdout = fd
platform = 'cisco_ios'
username = 'Username'
password = 'Password'
ip_add_file = open(r'C:\Users\\IPAddressList.txt','r')
for host in ip_add_file:
device = ConnectHandler(device_type=platform, ip=host, username=username, password=password)
output = device.send_command('terminal length 0')
output = device.send_command('enable')
print('##############################################################\n')
print('...................CISCO COMMAND SHOW RUN OUTPUT......................\n')
output = device.send_command('sh run')
print(output)
print('##############################################################\n')
print('...................CISCO COMMAND SHOW IP INT BR OUTPUT......................\n')
output = device.send_command('sh ip int br')
print(output)
print('##############################################################\n')
fd.close()
</code></pre>
| -1 | 2016-09-01T10:16:30Z | 39,269,093 | <p>Keep in mind that every line will be a new IP address.</p>
<p>Also, you're not writing to the ciscoOutput file directly; you can use <code>fd.write('text')</code> for that.</p>
<pre><code>from __future__ import print_function
from netmiko import ConnectHandler
import sys
import time
import select
import paramiko
import re
fd = open(r'C:\Users\LocationOfMyFile\CiscoOutput.txt','w')
old_stdout = sys.stdout
sys.stdout = fd
platform = 'cisco_ios'
username = 'My Username'
password = 'My Password'
ip_add_file = open('file_name.txt','r')
for host in ip_add_file:
device = ConnectHandler(device_type=platform, ip=host, username=username, password=password)
output = device.send_command('show version')
print(output)
output = device.send_command('terminal length 0')
print(output)
output = device.send_command('sh ip int br')
print(output)
output = device.send_command('show interfaces GigabitEthernet0/1')
print(output)
fd.close()
</code></pre>
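<p>One thing worth checking in a loop like the one above (this is an assumption, not something confirmed by the question): iterating over a file yields each line with its trailing newline, so <code>host</code> may still contain <code>'\n'</code> when it reaches <code>ConnectHandler</code>. A minimal, netmiko-free sketch of cleaning the lines first, using made-up addresses:</p>

```python
def read_hosts(lines):
    # Strip whitespace (including newlines) and skip blank lines
    # before handing each address to ConnectHandler.
    return [line.strip() for line in lines if line.strip()]

raw = ["10.0.0.1\n", "10.0.0.2\n", "\n"]  # as read from IPAddressList.txt
print(read_hosts(raw))  # ['10.0.0.1', '10.0.0.2']
```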
| 0 | 2016-09-01T10:28:07Z | [
"python",
"python-3.x",
"cisco",
"cisco-ios"
] |
Selenium Python (Import Error) Python-Selenium-Eclipse | 39,268,865 | <p><strong>After Running the below Command:</strong></p>
<pre><code>from selenium import webdriver
</code></pre>
<p><strong>I get the Following error:</strong></p>
<pre><code>Traceback (most recent call last):
File "C:\Users\tempjatop\workspace\TestPython\Sample.py", line 1, in <module>
from selenium import webdriver
File "C:\Python\lib\site-packages\selenium-3.0.0b2-py3.5.egg\selenium\webdriver\__init__.py", line 25, in <module>
from .safari.webdriver import WebDriver as Safari # noqa
File "C:\Python\lib\site-packages\selenium-3.0.0b2-py3.5.egg\selenium\webdriver\safari\webdriver.py", line 49
executable_path = os.environ.get("SELENIUM_SERVER_JAR")
^
TabError: inconsistent use of tabs and spaces in indentation
</code></pre>
<hr>
<p>I don't know why it is redirecting to the Safari webdriver.
Please suggest any fixes, or tell me if I am doing anything wrong.</p>
| 0 | 2016-09-01T10:17:30Z | 39,283,996 | <p>As the error says:</p>
<blockquote>
<p>TabError: inconsistent use of tabs and spaces in indentation</p>
</blockquote>
<p>Check for indentations in your code and correct the inconsistent ones.</p>
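<p>For reference, the interpreter raises exactly this error when a block mixes tab and space indentation; a small, Selenium-independent reproduction on Python 3:</p>

```python
# A tab-indented line followed by a space-indented line in the same block.
src = "def f():\n\tx = 1\n        y = 2\n"
try:
    compile(src, "<demo>", "exec")
except TabError as exc:
    print("TabError:", exc)
```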
| 0 | 2016-09-02T03:44:30Z | [
"python",
"eclipse",
"selenium",
"testing",
"automation"
] |
Python: how to get rid of spaces in str(dict)? | 39,268,928 | <p>For example, if you use str() on a dict, you get:</p>
<pre><code>>>> str({'a': 1, 'b': 'as df'})
"{'a': 1, 'b': 'as df'}"
</code></pre>
<p>However, I want the string to be like:</p>
<pre><code>"{'a':1,'b':'as df'}"
</code></pre>
<p>How can I accomplish this?</p>
| 3 | 2016-09-01T10:20:57Z | 39,269,016 | <p>You could build the compact string representation yourself:</p>
<pre><code>In [9]: '{' + ','.join('{0!r}:{1!r}'.format(*x) for x in dct.items()) + '}'
Out[9]: "{'b':'as df','a':1}"
</code></pre>
<p>It will leave extra spaces inside string representations of nested <code>list</code>s, <code>dict</code>s etc.</p>
<p>A much better idea is to use the <a href="https://docs.python.org/3/library/json.html#json.dumps" rel="nofollow"><code>json.dumps</code></a> function with appropriate separators:</p>
<pre><code>In [15]: import json
In [16]: json.dumps(dct, separators=(',', ':'))
Out[16]: '{"b":"as df","a":1}'
</code></pre>
<p>This will work correctly regardless of the inner structure of <code>dct</code>.</p>
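<p>A quick check with a nested structure (made-up data); <code>sort_keys</code> is added here only to make the output deterministic:</p>

```python
import json

nested = {'a': 1, 'b': {'c': [1, 2]}}
print(json.dumps(nested, separators=(',', ':'), sort_keys=True))
# {"a":1,"b":{"c":[1,2]}}
```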
| 8 | 2016-09-01T10:25:11Z | [
"python"
] |
Python: how to get rid of spaces in str(dict)? | 39,268,928 | <p>For example, if you use str() on a dict, you get:</p>
<pre><code>>>> str({'a': 1, 'b': 'as df'})
"{'a': 1, 'b': 'as df'}"
</code></pre>
<p>However, I want the string to be like:</p>
<pre><code>"{'a':1,'b':'as df'}"
</code></pre>
<p>How can I accomplish this?</p>
| 3 | 2016-09-01T10:20:57Z | 39,270,269 | <p>There are two naturally occurring space patterns: <code>': '</code> and <code>", "</code>, so I think you can just replace them using <code>replace</code>:</p>
<pre><code>str({'a': 1, 'b': 'as df'}).replace(": ", ":").replace(", ", ",")
</code></pre>
<p>Note: this solution assumes that none of the keys or values of the dictionary contain <code>:</code> or <code>,</code> themselves. </p>
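<p>To see why that assumption matters, a minimal demonstration with a made-up value that itself contains <code>': '</code>:</p>

```python
s = str({'a': 'b: c'})                            # "{'a': 'b: c'}"
compact = s.replace(": ", ":").replace(", ", ",")
print(compact)  # {'a':'b:c'} -- the space inside the value was removed too
```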
| 0 | 2016-09-01T11:25:33Z | [
"python"
] |
Getting error on redirect django | 39,269,025 | <p>What I am trying to do is: </p>
<p>if a user comes with a slug </p>
<p>Case 1</p>
<p>'new': I either create a new box for him or fetch his existing empty box, and redirect him based on the slug of the box</p>
<p>Case 2</p>
<p>anything else works fine</p>
<p>URL pattern is </p>
<pre><code>url(r'^box/manage/(?P<slug>.*)/$', login_required(views.ManageBoxView.as_view()), name='manage_box'),
</code></pre>
<p>I am getting an error </p>
<blockquote>
  <p>'dict' object has no attribute 'has_header'</p>
</blockquote>
<pre><code>class ManageBoxView(TemplateView):
template_name = "art/manage_box.html"
def get(self, request, **kwargs):
if kwargs['slug'] == 'new':
box = Box.get_or_create_empty_box(self.request.user)
return redirect('manage_box', slug= box.slug)
else:
box = Box.objects.get(slug=kwargs['slug'])
return {'box': box, 'drafts': box.drafts}
</code></pre>
<p>Stack Trace</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://localhost:8000/box/manage/10-untitled-admin/
Django Version: 1.7.4
Python Version: 2.7.11
Installed Applications:
('django_admin_bootstrapped',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.admin',
'django.contrib.admindocs',
'django.contrib.messages',
'django.contrib.sites',
'django.contrib.humanize',
'django.contrib.staticfiles',
'gunicorn',
'rest_framework',
'imagekit',
'utils',
'users',
'arts',
'notification',
'connect',
'payment',
'products',
'orders',
'social')
Installed Middleware:
('sslify.middleware.SSLifyMiddleware',
'django.middleware.gzip.GZipMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'raygun4py.middleware.django.Provider')
Traceback:
File "/home/cj/.vrtualenvs/canvs/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
204. response = middleware_method(request, response)
File "/home/cj/.vrtualenvs/canvs/local/lib/python2.7/site-packages/django/contrib/sessions/middleware.py" in process_response
30. patch_vary_headers(response, ('Cookie',))
File "/home/cj/.vrtualenvs/canvs/local/lib/python2.7/site-packages/django/utils/cache.py" in patch_vary_headers
148. if response.has_header('Vary'):
Exception Type: AttributeError at /box/manage/10-untitled-admin/
Exception Value: 'dict' object has no attribute 'has_header'
</code></pre>
| 1 | 2016-09-01T10:25:34Z | 39,269,299 | <p>In the else branch of the get method, you are returning a plain dict instead of a response (hence the <code>'dict' object</code> error). You have to return a response, e.g. <code>return render(request, self.template_name, {'box': box, 'drafts': box.drafts})</code>, instead of <code>return {'box': box, 'drafts': box.drafts}</code>.</p>
| 2 | 2016-09-01T10:38:04Z | [
"python",
"django",
"django-1.7"
] |
Scipy maximum filter for all equal values | 39,269,133 | <p>I want to use scipy's maximum_filter to detect local maxima, but there is one issue. If the filter applies the function to values that are all equal, it returns all of them as local maxima, but I need them reported the opposite way.</p>
<p>An example script:</p>
<pre><code>import numpy as np
import scipy.ndimage as sc
ones_matrix = np.ones((6,6))
max_arr = (ones_matrix == sc.maximum_filter(ones_matrix, 3, mode = 'constant'))
print max_arr
</code></pre>
<p>This returns all <code>True</code>, but I need them as <code>False</code>. How can I do that? Thanks in advance!</p>
<pre><code>[[ True True True True True True]
[ True True True True True True]
[ True True True True True True]
[ True True True True True True]
[ True True True True True True]
[ True True True True True True]]
</code></pre>
| 0 | 2016-09-01T10:30:24Z | 39,269,757 | <p>You could change your sentence:</p>
<pre><code>max_arr = (ones_matrix == sc.maximum_filter(ones_matrix, 3, mode = 'constant'))
</code></pre>
<p>for this one:</p>
<pre><code>max_arr = (ones_matrix != sc.maximum_filter(ones_matrix, 3, mode = 'constant'))
</code></pre>
| 1 | 2016-09-01T11:00:21Z | [
"python",
"scipy",
"filtering"
] |
uwsgi working with nginx, systemd emperor not working with same configs and setup | 39,269,278 | <p>The issue here is that everything is configured, and runs without error, and yet I do not know why it is not working. As you'll see I get no app can't load error, I get no errors at all, in fact it's the most complete without errors I've had since I started. And yet, 500 response. Alot of this is here for completeness as I'll explain. If I take emperor out of the mix, it works, just fine.</p>
<p>I have been trying to deploy an opensource flask app ceph-dash for monitoring via uwsgi/nginx. I succeeded eventually albeit painfully. I have written a gist about my success <a href="https://gist.github.com/Lighiche/a6aec14166d62b4f8f013415a2c1f757" rel="nofollow">https://gist.github.com/Lighiche/a6aec14166d62b4f8f013415a2c1f757</a> </p>
<p>Howver, since there's no daemon management and the likes I thought it best to add to what I've done by using uwsgi emperor and I followed this guide to the letter
<a href="https://chriswarrick.com/blog/2016/02/10/deploying-python-web-apps-with-nginx-and-uwsgi-emperor/" rel="nofollow">https://chriswarrick.com/blog/2016/02/10/deploying-python-web-apps-with-nginx-and-uwsgi-emperor/</a></p>
<p>I am on Centos 7. I have successful run and tested the app on this implementation as you can see in the gist, I actually went out of my way to discover how to configure uwsgi under different scenarios. </p>
<p>However, with the emperor implementation of uwsgi and all its config files, I am getting a 500 Internal Server Error response. I think it's to do with the handoff between nginx and emperor, or with emperor itself, because when I stop the uwsgi service I get a 502 instead. So nginx at least sees the socket file.</p>
<p>WSGI server start log, all green</p>
<pre><code>*** Starting uWSGI 2.0.13.1 (64bit) on [Thu Sep 1 11:16:24 2016] ***
compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-4) on 02 August 2016 21:07:54
os: Linux-3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016
nodename: prdceph-mon00
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /etc/uwsgi.d
detected binary path: /usr/sbin/uwsgi
chdir() to /etc/nginx/sites-enabled/ceph-dash
your processes number limit is 7282
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /var/run/uwsgi/ceph-dash.sock fd 6
Python version: 2.7.5 (default, Aug 18 2016, 15:58:25) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
Python main interpreter initialized at 0xee1030
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 363840 bytes (355 KB) for 4 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0xee1030 pid: 3236 (default app)
mountpoint already configured. skip.
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 3236)
spawned uWSGI worker 1 (pid: 3241, cores: 1)
spawned uWSGI worker 2 (pid: 3242, cores: 1)
spawned uWSGI worker 3 (pid: 3243, cores: 1)
spawned uWSGI worker 4 (pid: 3244, cores: 1)
[pid: 3243|app: 0|req: 1/1] 127.0.0.1 () {34 vars in 429 bytes} [Thu Sep 1 11:17:31 2016] GET / => generated 291 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 84 bytes (1 switches on core 0)
announcing my loyalty to the Emperor...
</code></pre>
<p>WSGI service start log, all green</p>
<pre><code> uwsgi.service - uWSGI Emperor Service
Loaded: loaded (/usr/lib/systemd/system/uwsgi.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-09-01 11:16:24 BST; 2s ago
Process: 3231 ExecStartPre=/bin/chown uwsgi:uwsgi /run/uwsgi (code=exited, status=0/SUCCESS)
Process: 3229 ExecStartPre=/bin/mkdir -p /run/uwsgi (code=exited, status=0/SUCCESS)
Main PID: 3234 (uwsgi)
Status: "The Emperor is governing 1 vassals"
CGroup: /system.slice/uwsgi.service
           ├─3234 /usr/sbin/uwsgi --ini /etc/uwsgi.ini
           ├─3235 /usr/sbin/uwsgi --ini /etc/uwsgi.ini
           ├─3236 /usr/sbin/uwsgi --ini cephdash.ini
           ├─3241 /usr/sbin/uwsgi --ini cephdash.ini
           ├─3242 /usr/sbin/uwsgi --ini cephdash.ini
           ├─3243 /usr/sbin/uwsgi --ini cephdash.ini
           └─3244 /usr/sbin/uwsgi --ini cephdash.ini
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: thunder lock: disabled (you can enable it with --thunder-lock)
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: your mercy for graceful operations on workers is 60 seconds
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: *** Operational MODE: no-workers ***
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: spawned uWSGI master process (pid: 3234)
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: *** Stats server enabled on /run/uwsgi/stats.sock fd: 7 ***
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: *** starting uWSGI Emperor ***
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: *** has_emperor mode detected (fd: 7) ***
Sep 01 11:16:24 prdceph-mon00 uwsgi[3234]: [uWSGI] getting INI configuration from cephdash.ini
Sep 01 11:16:25 prdceph-mon00 uwsgi[3234]: Thu Sep 1 11:16:25 2016 - [emperor] vassal cephdash.ini has been spawned
Sep 01 11:16:25 prdceph-mon00 uwsgi[3234]: Thu Sep 1 11:16:25 2016 - [emperor] vassal cephdash.ini is ready to accept requests
</code></pre>
<p>WSGI server response 500</p>
<pre><code>[pid: 3243|app: 0|req: 1/1] 127.0.0.1 () {34 vars in 429 bytes} [Thu Sep 1 11:17:31 2016] GET / => generated 291 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 84 bytes (1 switches on core 0)
</code></pre>
<p>NGINX server response 500</p>
<pre><code><ip> - - [01/Sep/2016:11:14:16 +0100] "GET / HTTP/1.1" 500 291 "http://<ip>/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" "<my ip>, <ip>"
127.0.0.1 - - [01/Sep/2016:11:17:31 +0100] "GET / HTTP/1.1" 500 291 "-" "Python-urllib/2.7"
</code></pre>
<p>cat /etc/uwsgi.ini</p>
<pre><code>[uwsgi]
uid = uwsgi
gid = nginx
pidfile = /run/uwsgi/uwsgi.pid
emperor = /etc/uwsgi.d
stats = /run/uwsgi/stats.sock
#emperor-tyrant = true
cap = setgid,setuid
</code></pre>
<p>cat /etc/uwsgi.d/cephdash.ini (this is a symlink to the wsgi.ini for the app)</p>
<pre><code>[uwsgi]
chdir = /etc/nginx/sites-enabled/ceph-dash
wsgi-file = /etc/nginx/sites-enabled/ceph-dash/contrib/wsgi/cephdash.wsgi
module = ceph-dash:app
enable-threads = true
master = true
processes = 4
socket = /var/run/uwsgi/ceph-dash.sock
chmod-socket = 777
vacuum = true
uid = uwsgi
gid = nginx
daemonize = /tmp/ceph-dash.log
plugins = python,logfile
logger = file:/tmp/myappuwsgi.log
</code></pre>
<p>cat /etc/nginx/conf.d/cephdash.conf</p>
<pre><code>upstream uwsgi {
server unix:///var/run/uwsgi/ceph-dash.sock;
}
server {
listen 5000;
server_name <hostname>
charset utf-8;
location / {
uwsgi_pass uwsgi;
include /etc/nginx/sites-enabled/ceph-dash/contrib/nginx/uwsgi_params;
}
}
</code></pre>
<hr>
<p>I have uninstalled the uwsgi that I installed via pip. I appear to still have a <code>/sbin/uwsgi</code> (linked from <code>/usr/sbin</code>), which looks to be the one installed for systemd, and a <code>/usr/bin/uwsgi</code>; I don't know how that happened. I changed the <code>.service</code> file to use <code>/usr/bin</code> and have the same error. However, the binary from the rpm does not have the plugins installed, and the one in <code>/usr/bin</code> does. systemd uwsgi now starts with <code>/usr/bin/uwsgi</code>, and as you can see below that binary loads the required plugins, but I get the same internal server error, the same messages from systemd, and startup logs with no errors and no failures. Just a 500.</p>
<pre><code>rpm -ql uwsgi
/etc/uwsgi.d
/etc/uwsgi.ini
/run/uwsgi
/usr/lib/systemd/system/uwsgi.service
/usr/sbin/uwsgi
/usr/share/doc/uwsgi-2.0.13.1
/usr/share/doc/uwsgi-2.0.13.1/CHANGELOG
/usr/share/doc/uwsgi-2.0.13.1/README
/usr/share/doc/uwsgi-2.0.13.1/README.Fedora
/usr/share/licenses/uwsgi-2.0.13.1
/usr/share/licenses/uwsgi-2.0.13.1/LICENSE
python --version
Python 2.7.5
uwsgi --version
2.0.13.1
which uwsgi (POINTING TO THE RIGHT PLACE but systemd is now loading /usr/bin)
/sbin/uwsgi
uwsgi --plugins-list (DOES NOT SHOW THE PLUGINS LOADED)
*** uWSGI loaded generic plugins ***
corerouter
*** uWSGI loaded request plugins ***
100: ping
101: echo
--- end of plugins list ---
*** Starting uWSGI 2.0.13.1 (64bit) on [Thu Sep 8 09:58:40 2016] ***
compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-4) on 02 August 2016 21:07:54
os: Linux-3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016
nodename: prdceph-mon00
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /tmp
detected binary path: /usr/sbin/uwsgi
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7282
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and stdin is not a socket.
/usr/bin/uwsgi --plugins-list
DOES SHOW THE PLUGINS LOADED
*** uWSGI loaded generic plugins ***
gevent
nagios
rrdtool
carbon
corerouter
fastrouter
http
ugreen
syslog
rsyslog
logsocket
router_uwsgi
router_redirect
router_basicauth
zergpool
redislog
mongodblog
router_rewrite
router_http
logfile
router_cache
rawrouter
router_static
sslrouter
cheaper_busyness
transformation_tofile
transformation_gzip
transformation_chunked
transformation_offload
router_memcached
router_redis
router_hash
router_expires
router_metrics
transformation_template
stats_pusher_socket
*** uWSGI loaded request plugins ***
0: python
17: spooler
18: symcall
100: ping
110: signal
111: cache
173: rpc
--- end of plugins list ---
*** Starting uWSGI 2.0.13.1 (64bit) on [Thu Sep 8 09:58:50 2016] ***
compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-4) on 29 August 2016 09:55:26
os: Linux-3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016
nodename: prdceph-mon00
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /tmp
detected binary path: /usr/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7282
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and
</code></pre>
| 0 | 2016-09-01T10:36:57Z | 39,366,640 | <p>uWSGI installed by yum and by pip are different. Systemd will probably use the one installed by <code>yum</code>. When you SSH in and run uWSGI yourself, the system will use the one installed by <code>pip</code> by default.</p>
<p>The main difference between those uWSGI builds is the lack of Python support in the one installed by <code>yum</code>, because that build is modular. You can install the plugin, probably by using <code>yum install uwsgi-python</code> (check your distribution's repositories for the specific package name). Be aware that different Python plugins are needed for different Python versions (2.7 requires a different plugin than 3.5), so adjust the package name to be installed accordingly.</p>
<p>After installing the Python plugin, check that plugin's name (as visible to uWSGI) and adjust the value of <code>plugins</code> in your .ini file. That name will probably match the installed package name without the <code>uwsgi-</code> prefix.</p>
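<p>For reference, a minimal .ini sketch with the plugin declared could look like the following (the plugin name <code>python</code> and the app-specific values are assumptions, adjust them to your own setup):</p>

```ini
[uwsgi]
; load the distro-packaged Python plugin installed above
plugins = python
; everything below is a placeholder for your own app
module = myapp:app
socket = /run/uwsgi/myapp.sock
```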
| 0 | 2016-09-07T09:51:39Z | [
"python",
"nginx",
"flask",
"uwsgi"
] |
Update image of a button depending on its previous image | 39,269,346 | <p>I am trying to program a start/pause button in Tkinter (Pyhton) but the following code doesn't work:</p>
<pre><code>def startpause():
if startpause_button.cget('image')=='start_image':
startpause_button.config(image=pause_image)
else:
startpause_button.config(image=start_image)
return
start_image=ImageTk.PhotoImage(file='start.png')
pause_image=ImageTk.PhotoImage(file='pause.png')
startpause_button=ttk.Button(frame,image=start_image,command = startpause)
</code></pre>
<p>I understand the issue is about cget (which returns <code>('pyimage3',)</code>) and what it returns, but I don't know what to put as the value to check against ("('pyimage3')," doesn't work).</p>
<p>Do you have any idea about this issue?</p>
<p>Thank you.</p>
| 0 | 2016-09-01T10:40:00Z | 39,270,338 | <p><code>startpause_button.cget('image')</code> and <code>'start_image'</code> are two different things, <code>.cget('image')</code> returns the name of the image in a list e.g. <code>('pyimage1',)</code>. This means to compare them you need to take it out of the list with <code>[0]</code> and make sure both variables are strings with <code>str()</code> because <code>'pyimage1'</code> and <code>pyimage1</code> are also two different things</p>
<pre><code>import tkinter.ttk
from tkinter import Tk, PhotoImage

root = Tk()                      # a root window must exist before PhotoImage
frame = tkinter.ttk.Frame(root)
frame.pack()

def startpause():
    # cget('image') returns a name like ('pyimage1',); take it out of the
    # list and compare both sides as strings with the start image's name
    if str(startpause_button.cget('image')[0]) == str(start_image):
        startpause_button.config(image=pause_image)
    else:
        startpause_button.config(image=start_image)

start_image = PhotoImage(file='start.gif')
pause_image = PhotoImage(file='pause.gif')
startpause_button = tkinter.ttk.Button(frame, image=start_image, command=startpause)
startpause_button.pack()
root.mainloop()
</code></pre>
<p>This does work and I've tested it, hope this helps you! :)</p>
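<p>An alternative that avoids comparing Tkinter's internal image names altogether is to track the state yourself. The widget-free sketch below shows just the toggle logic; in the real callback the button would do <code>startpause_button.config(image=state.toggle())</code>:</p>

```python
class ToggleState:
    """Tracks which of two images is current, so the callback never
    has to inspect the widget's internal image name."""
    def __init__(self, start_image, pause_image):
        self.start_image = start_image
        self.pause_image = pause_image
        self.current = start_image

    def toggle(self):
        # flip to the other image and return the new current one
        self.current = (self.pause_image
                        if self.current is self.start_image
                        else self.start_image)
        return self.current

state = ToggleState('start.png', 'pause.png')
print(state.toggle())  # pause.png
print(state.toggle())  # start.png
```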
| 0 | 2016-09-01T11:29:10Z | [
"python",
"image",
"tkinter"
] |
Saving record to rails database from python script | 39,269,461 | <p>I have Rails application with classic forum models like <code>Post</code>, <code>Topic</code>.
Then I have python script using <code>praw</code> to download certain posts and topics from Reddit. (I know that I can use ruby version, but lets say we have to use this python script)</p>
<p>I'd like to put them directly into my rails database.
I can save them in json format and create rake task to upload them into my database, but I think I don't need to add this complexity and should somehowe save them directly to my rails db from this python script.</p>
<p>How can I do this?</p>
| 0 | 2016-09-01T10:45:55Z | 39,270,061 | <p>So you can use this one:</p>
<pre><code>import psycopg2
db_uri = "dbname='template1' user='dbuser' host='localhost' password='dbpass'"
with psycopg2.connect(db_uri) as conn:
cur = conn.cursor()
cur.execute("""SELECT datname from pg_database""")
</code></pre>
| 0 | 2016-09-01T11:15:23Z | [
"python",
"ruby-on-rails"
] |
PyBossa vagrant is not working | 39,269,483 | <p>We are facing issues while configuring PyBossa on following Cloud server.</p>
<pre><code>DigitalOcean
RAM: 8GB
SSD: 80GB
OS: UBUNTU 16.04.1
Arch: 64
</code></pre>
<p>I am trying to configure it using following commands.</p>
<pre><code>apt-get install virtualbox
apt-get install vagrant
git clone --recursive https://github.com/PyBossa/pybossa.git
cd pybossa
vagrant up
</code></pre>
<p>System stuck at "vagrant up" and trace is as follows.</p>
<pre><code># vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'ubuntu/trusty64' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'ubuntu/trusty64' (v0) for provider: virtualbox
default: Downloading: https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box
==> default: Successfully added box 'ubuntu/trusty64' (v0) for 'virtualbox'!
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: pybossa_default_1472726103015_90247
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 5000 (guest) => 5000 (host) (adapter 1)
default: 5001 (guest) => 5001 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
</code></pre>
<p>Tried changing timeout in Vagrant File using </p>
<pre><code>config.vm.boot_timeout = 300
</code></pre>
<p>But still no effect. Can you suggest resolution?</p>
<p>The issue was with the VirtualBox version: it works with 4.0, 4.1, 4.2 and 4.3 only.</p>
<p>I have now downgraded the OS to version 14.04 and VirtualBox to 4.3; it moves further along, but now the problem is this trace.</p>
<pre><code>[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] -- 5000 => 5000 (adapter 1)
[default] -- 5001 => 5001 (adapter 1)
[default] Running 'pre-boot' VM customizations...
[default] Booting VM...
[default] Waiting for machine to boot. This may take a few minutes...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'gurumeditation' state. Please verify everything is configured
properly and try again.
</code></pre>
<p>If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run <code>vagrant up</code> while the
VirtualBox GUI is open.</p>
<p>If you again try this, Vbox is still locked.</p>
<pre><code># vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Clearing any previously set forwarded ports...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["modifyvm", "2b7f90da-d5a5-4782-a6a5-4e3e96838ed3", "--natpf1", "delete", "ssh", "--natpf1", "delete", "tcp5000", "--natpf1", "delete", "tcp5001"]
Stderr: VBoxManage: error: The machine 'pybossa_default_1472733433672_71002' is already locked for a session (or being unlocked)
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component Machine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "LockMachine(a->session, LockType_Write)" at line 471 of file VBoxManageModifyVM.cpp
</code></pre>
<p>Now if you kill the process and run <code>vagrant up --debug</code>, the trace ends like this.</p>
<pre><code>/usr/lib/ruby/vendor_ruby/vagrant/machine.rb:147:in `action'
/usr/lib/ruby/vendor_ruby/vagrant/batch_action.rb:63:in `block (2 levels) in run'
INFO interface: error: The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'gurumeditation' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'gurumeditation' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
</code></pre>
<p>What could be a solution?</p>
| 0 | 2016-09-01T10:47:00Z | 39,287,492 | <p>I've just tested a new installation of PYBOSSA with latest version of VirtualBox (v5.1.4) and Vagrant (v1.8.5) and everything has worked as expected. I don't know exactly what could be wrong, but I run vagrant up in my own laptop, not inside a virtual machine (did you use a virtual machine within Digital Ocean?). Thus, my advice will be the following:</p>
<ul>
<li>Install the latest versions of Vagrant and VirtualBox</li>
<li>Re-run vagrant-up and see what happens.</li>
</ul>
<p>Be sure to clean the previous VM, to start from clean fresh install.</p>
| 0 | 2016-09-02T08:14:46Z | [
"python",
"vagrant",
"pybossa"
] |
PySpark Dataframe identify distinct value on one column based on duplicate values in other columns | 39,269,564 | <p>I have a pyspark dataframe like: where c1,c2,c3,c4,c5,c6 are the columns</p>
<blockquote>
<pre><code> +----------------------------+
|c1 | c2 | c3 | c4 | c5 | c6 |
|----------------------------|
| a | x | y | z | g | h |
| b | m | f | l | n | o |
| c | x | y | z | g | h |
| d | m | f | l | n | o |
| e | x | y | z | g | i |
+----------------------------+
</code></pre>
</blockquote>
<p>I want to extract c1 values for the rows which have same c2,c3,c4,c5 values but different c1 values.
Like, 1st, 3rd & 5th rows have same values for c2,c3,c4 & c5 but different c1 value. So the output should be <strong>a, c & e</strong>.<br>
<strong>(update)</strong>
similarly, 2nd & 4th rows have same values for c2,c3,c4 & c5 but different c1 value. So the output should also contain <strong>b & d</strong> </p>
<p>How can I obtain such result ? I have tried applying groupby but I don't understand how to obtain distinct values for c1. </p>
<p><strong>UPDATE:</strong> </p>
<p>Output should be a Dataframe of c1 values </p>
<pre><code># +-------+
# |c1_dups|
# +-------+
# | a,c,e|
# |    b,d|
# +-------+
</code></pre>
<p><strong>My Approach:</strong> </p>
<pre><code>m = data.groupBy('c2','c3','c4','c5')
</code></pre>
<p>but I'm not understanding how to retrieve the values in m. I'm new to pyspark dataframes hence very much confused</p>
| 1 | 2016-09-01T10:51:00Z | 39,271,137 | <p>This is actually very simple, let's create some data first :</p>
<pre><code>schema = ['c1','c2','c3','c4','c5','c6']
rdd = sc.parallelize(["a,x,y,z,g,h","b,x,y,z,l,h","c,x,y,z,g,h","d,x,f,y,g,i","e,x,y,z,g,i"]) \
.map(lambda x : x.split(","))
df = sqlContext.createDataFrame(rdd,schema)
# +---+---+---+---+---+---+
# | c1| c2| c3| c4| c5| c6|
# +---+---+---+---+---+---+
# | a| x| y| z| g| h|
# | b| x| y| z| l| h|
# | c| x| y| z| g| h|
# | d| x| f| y| g| i|
# | e| x| y| z| g| i|
# +---+---+---+---+---+---+
</code></pre>
<p>Now the fun part, you'll just need to import some functions, group by and explode as following :</p>
<pre><code>from pyspark.sql.functions import *
dupes = (df.groupBy('c2', 'c3', 'c4', 'c5')
           # collect the c1 values as a list and count them at the same time
           .agg(collect_list('c1').alias("c1s"), count('c1').alias("count"))
           # keep only the groups that actually contain duplicates
           .filter(col('count') > 1))
df2 = dupes.select(explode("c1s").alias("c1_dups"))
df2.show()
# +-------+
# |c1_dups|
# +-------+
# | a|
# | c|
# | e|
# +-------+
</code></pre>
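<p>The same group-collect-filter idea can be sanity-checked in plain Python on a small sample before running it on Spark (this is just a sketch of the logic, not Spark code):</p>

```python
from collections import defaultdict

rows = [('a', ('x', 'y', 'z', 'g')), ('b', ('m', 'f', 'l', 'n')),
        ('c', ('x', 'y', 'z', 'g')), ('d', ('m', 'f', 'l', 'n')),
        ('e', ('x', 'y', 'z', 'g'))]

groups = defaultdict(list)        # the key plays the role of (c2, c3, c4, c5)
for c1, key in rows:
    groups[key].append(c1)

# keep only groups with more than one c1, like .filter(col('count') > 1)
dupes = [c1s for c1s in groups.values() if len(c1s) > 1]
print(dupes)  # [['a', 'c', 'e'], ['b', 'd']]
```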
<p>I hope this answers your question.</p>
| 3 | 2016-09-01T12:05:10Z | [
"python",
"apache-spark",
"dataframe",
"pyspark"
] |
calling a function before its definition in python | 39,269,601 | <p>Is there any way to call a function before its definition.</p>
<pre><code>def Insert(value):
"""place value at an available leaf, then bubble up from there"""
heap.append(value)
BubbleUp(len(heap) - 1)
def BubbleUp(position):
print 'something'
</code></pre>
<p>This code shows "unresolved reference BubbleUp"</p>
| 0 | 2016-09-01T10:52:44Z | 39,269,656 | <p>The code here doesn't show anything, least of all an error, because neither of the functions is called. What matters is the location of the call to <code>Insert</code>, and as long as it comes after <code>BubbleUp</code> (and why wouldn't it), there is no issue. Function <em>definitions</em> don't execute the function body, so you can define functions in whatever order you like, as long as you refrain from calling any of them until all necessary functions are defined.</p>
| 3 | 2016-09-01T10:55:42Z | [
"python"
] |
python: trouble with Popen FileNotFoundError: [WinError 2] | 39,269,675 | <p>I've searched for a while and still cannot figure it out... Here's the part of my code that went wrong.</p>
<pre><code>import subprocess as sp
import os
cmd_args = []
cmd_args.append('start ')
cmd_args.append('/wait ')
cmd_args.append(os.path.join(dirpath,filename))
print(cmd_args)
child = sp.Popen(cmd_args)
</code></pre>
<p>And the command prompt threw out this:</p>
<pre><code>['start ', '/wait ', 'C:\\Users\\xxx\\Desktop\\directory\\myexecutable.EXE']
Traceback (most recent call last):
File "InstallALL.py", line 89, in <module>
child = sp.Popen(cmd_args)
File "C:\Python34\lib\subprocess.py", line 859, in __init__
restore_signals, start_new_session)
File "C:\Python34\lib\subprocess.py", line 1114, in _execute_child startupinfo)
FileNotFoundError: [WinError 2]
</code></pre>
<p>It looks like the filepath is wrong with 2 backslashes.</p>
<p>I know if I do </p>
<pre><code>print(os.path.join(dirpath,filename))
</code></pre>
<p>It'll return </p>
<pre><code>C:\Users\xxx\Desktop\directory\myexecutable.EXE
</code></pre>
<p>I'm sure the file is there.
How can I debug this?</p>
| 0 | 2016-09-01T10:57:01Z | 39,269,884 | <p>This is happening because <code>Popen</code> is trying to find the file <code>start</code> instead of the file you want to run.</p>
<p>For example, using <code>notepad.exe</code>:</p>
<pre><code>>>> import subprocess
>>> subprocess.Popen(['C:\\Windows\\System32\\notepad.exe', '/A', 'randomfile.txt']) # '/A' is a command line option
<subprocess.Popen object at 0x03970810>
</code></pre>
<p>This works fine. But if I put the path at the end of the list:</p>
<pre><code>>>> subprocess.Popen(['/A', 'randomfile.txt', 'C:\\Windows\\System32\\notepad.exe'])
Traceback (most recent call last):
File "<pyshell#53>", line 1, in <module>
subprocess.Popen(['/A', 'randomfile.txt', 'C:\\Windows\\System32\\notepad.exe'])
File "C:\python35\lib\subprocess.py", line 950, in __init__
restore_signals, start_new_session)
File "C:\python35\lib\subprocess.py", line 1220, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>Use this instead. The underlying problem is that <code>start</code> is a <code>cmd.exe</code> builtin, not an executable on disk, so <code>Popen</code> cannot find a file named <code>start</code> (also note the trailing spaces in <code>'start '</code> and <code>'/wait '</code>, which would break matching). Run it through <code>cmd /c</code>:</p>
<pre><code>import subprocess as sp
import os

# 'start /wait' only exists inside cmd.exe, so invoke cmd explicitly
cmd_args = ['cmd', '/c', 'start', '/wait', os.path.join(dirpath, filename)]
print(cmd_args)
child = sp.Popen(cmd_args)
</code></pre>
<p>If you do not actually need <code>start /wait</code> semantics, it is simpler to launch the program directly with <code>sp.Popen([os.path.join(dirpath, filename)])</code> and call <code>child.wait()</code>.</p>
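<p>A portable way to see the same failure mode (the first element of the list must be a real executable) is to use the Python interpreter itself as the target, which works on any OS:</p>

```python
import subprocess
import sys

# correct: the executable's path is the first element of the argument list
out = subprocess.check_output([sys.executable, '-c', 'print("hello")'])
result = out.decode().strip()
print(result)  # hello

# wrong: a non-executable first element raises the error from the question
try:
    subprocess.check_output(['no-such-command-xyz', 'whatever'])
    not_found = False
except FileNotFoundError:
    not_found = True
print(not_found)  # True
```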
| 0 | 2016-09-01T11:06:06Z | [
"python"
] |
FFT normalization with numpy | 39,269,804 | <p>Just started working with numpy package and started it with the simple task to compute the FFT of the input signal. Here's the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#Some constants
L = 128
p = 2
X = 20
x = np.arange(-X/2,X/2,X/L)
fft_x = np.linspace(0,128,128, True)
fwhl = 1
fwhl_y = (2/fwhl) \
*(np.log([2])/np.pi)**0.5*np.e**(-(4*np.log([2]) \
*x**2)/fwhl**2)
fft_fwhl = np.fft.fft(fwhl_y, norm='ortho')
ampl_fft_fwhl = np.abs(fft_fwhl)
plt.bar(fft_x, ampl_fft_fwhl, width=.7, color='b')
plt.show()
</code></pre>
<p>Since I work with an exponential function with some constant divided by pi before it, I expect to get the exponential function in Fourier space, where the constant part of the FFT is always equal to 1 (zero frequency).
But the value of that component I get using numpy is larger (it's about 1.13). Here I have an amplitude spectrum which is normalized by 1/(number_of_counts)**0.5 (that's what I read in the numpy documentation). I can't understand what's wrong... Can anybody help me?</p>
<p>Thanks!</p>
<p>[EDITED] It seems like the problem is solved, all you need to get the same result of Fourier integral and of FFT is to multiply FFT by the step (in my case it's X/L). And as for normalization as option of numpy.fft.fft(..., norm='ortho'), it's used only to save the scale of the transform, otherwise you'll need to divide the result of the inverse FFT by the number of samples. Thanks everyone for their help!</p>
| 3 | 2016-09-01T11:02:30Z | 39,270,151 | <p>Here's a possible solution to your problem:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy import fft
from numpy import log, pi, e
# Signal setup
Fs = 150
Ts = 1.0 / Fs
t = np.arange(0, 1, Ts)
ff = 50
fwhl = 1
y = (2 / fwhl) * (log([2]) / pi)**0.5 * e**(-(4 * log([2]) * t**2) / fwhl**2)
# Plot original signal
plt.subplot(2, 1, 1)
plt.plot(t, y, 'k-')
plt.xlabel('time')
plt.ylabel('amplitude')
# Normalized FFT
plt.subplot(2, 1, 2)
n = len(y)
k = np.arange(n)
T = n / Fs
frq = k / T
freq = frq[range(n // 2)]  # integer division so this also runs on Python 3
Y = np.fft.fft(y) / n
Y = Y[range(n // 2)]
plt.plot(freq, abs(Y), 'r-')
plt.xlabel('freq (Hz)')
plt.ylabel('|Y(freq)|')
plt.show()
</code></pre>
<p>With fwhl=1:</p>
<p><a href="http://i.stack.imgur.com/JlzuG.png" rel="nofollow"><img src="http://i.stack.imgur.com/JlzuG.png" alt="enter image description here"></a></p>
<p>With fwhl=0.1:</p>
<p><a href="http://i.stack.imgur.com/c9H3t.png" rel="nofollow"><img src="http://i.stack.imgur.com/c9H3t.png" alt="enter image description here"></a></p>
<p>You can see in the above graphs how the exponential & FFT plots varies when fwhl is close to 0</p>
| 0 | 2016-09-01T11:20:17Z | [
"python",
"numpy",
"fft"
] |
FFT normalization with numpy | 39,269,804 | <p>Just started working with numpy package and started it with the simple task to compute the FFT of the input signal. Here's the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#Some constants
L = 128
p = 2
X = 20
x = np.arange(-X/2,X/2,X/L)
fft_x = np.linspace(0,128,128, True)
fwhl = 1
fwhl_y = (2/fwhl) \
*(np.log([2])/np.pi)**0.5*np.e**(-(4*np.log([2]) \
*x**2)/fwhl**2)
fft_fwhl = np.fft.fft(fwhl_y, norm='ortho')
ampl_fft_fwhl = np.abs(fft_fwhl)
plt.bar(fft_x, ampl_fft_fwhl, width=.7, color='b')
plt.show()
</code></pre>
<p>Since I work with an exponential function with some constant divided by pi before it, I expect to get the exponential function in Fourier space, where the constant part of the FFT is always equal to 1 (zero frequency).
But the value of that component I get using numpy is larger (it's about 1.13). Here I have an amplitude spectrum which is normalized by 1/(number_of_counts)**0.5 (that's what I read in the numpy documentation). I can't understand what's wrong... Can anybody help me?</p>
<p>Thanks!</p>
<p>[EDITED] It seems like the problem is solved, all you need to get the same result of Fourier integral and of FFT is to multiply FFT by the step (in my case it's X/L). And as for normalization as option of numpy.fft.fft(..., norm='ortho'), it's used only to save the scale of the transform, otherwise you'll need to divide the result of the inverse FFT by the number of samples. Thanks everyone for their help!</p>
| 3 | 2016-09-01T11:02:30Z | 39,346,237 | <p>I've finally solved my problem. All you need to bind the FFT to the Fourier integral is to multiply the result of the transform (FFT) by the step (X/L in my case, FFT*X/L); it works in general. In my case it's a bit more complex since I have an extra rule for the function to be transformed. I have to be sure that the area under the curve is equal to 1, because it's a model of a δ function, so since the step is unchangeable, I have to fulfill the step*sum(fwhl_y)=1 condition, that is X/L=1/sum(fwhl_y). So to get the correct result I have to do the following:</p>
<ol>
<li>to calculate FFT <strong>fft_fwhl = np.fft.fft(fwhl_y)</strong></li>
<li>to get rid of phase component which comes due to the symmetry of fwhl_y function, that is the function defined in <strong>[-T/2,T/2]</strong> interval, where T is period and np.fft.fft operation thinks that my function is defined in <strong>[0,T]</strong> interval. So to get amplitude spectrum only (that's what I need) I simply use <strong>np.abs(FFT)</strong></li>
<li>to get the values I expect I should multiply the result I got on previous step by X/L, that is <strong>np.abs(FFT)*X/L</strong></li>
<li>I have an extra condition on the area under the curve, so it's <strong>X/L*sum(fwhl_y)=1</strong> and I finally come to <strong>np.abs(FFT)*X/L = np.abs(FFT)/sum(fwhl_y)</strong></li>
</ol>
<p>Hope it'll help anyone at least. </p>
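<p>The step-scaling rule is easy to check numerically: for the unit-area Gaussian from the question, the continuous Fourier transform at zero frequency equals the area under the curve, i.e. exactly 1 (a sketch, assuming only numpy):</p>

```python
import numpy as np

L, X, fwhl = 128, 20.0, 1.0
step = X / L
x = np.arange(-X / 2, X / 2, step)

# the same unit-area Gaussian as in the question
y = (2 / fwhl) * np.sqrt(np.log(2) / np.pi) * np.exp(-4 * np.log(2) * x**2 / fwhl**2)

# FFT times the sample step approximates the continuous Fourier integral,
# so the zero-frequency bin becomes sum(y) * step, i.e. the area (= 1)
Y = np.abs(np.fft.fft(y)) * step
print(round(Y[0], 6))  # 1.0
```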
| 2 | 2016-09-06T10:05:01Z | [
"python",
"numpy",
"fft"
] |
Trying to use GET on a weather api but python keeps adding a questionmark | 39,269,843 | <p>So I wanted to try some things out with a weather api but I can't seem to get it working. When I run the code, the Python interpreter keeps adding a questionmark to my request so I just get a 404 response, not found.</p>
<p>This is my code:</p>
<pre><code>import requests
from requests.auth import HTTPDigestAuth
import json
url = "http://opendata-download-metfcst.smhi.se"
myResponse = requests.get(url,"api/category/pmp2g/version/2/geotype/point/lon/16.158/lat/58.5812/data.json", verify=True)
if(myResponse.ok):
jData = json.loads(myResponse.content)
print("The response contains {0} properties".format(len(jData)))
print("\n")
for key in jData:
print (key + " : " + jData[key])
else:
myResponse.raise_for_status()
</code></pre>
<p>And this is my error message:</p>
<pre><code>requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://opendata-download-metfcst.smhi.se/?/category/pmp2g/version/2/geotype/point/lon/16.158/lat/58.5812/data.json
</code></pre>
<p>As you can see it replaces the beginning of the request "api" with a questionmark. This is why it can't find the resource. Why does it do this?</p>
| 0 | 2016-09-01T11:04:13Z | 39,269,951 | <p>Try to use:</p>
<pre><code>myResponse = requests.get("{}/api/category/pmp2g/version/2/geotype/point/lon/16.158/lat/58.5812/data.json".format(url))
</code></pre>
<p>Currently you're passing the rest of URL as GET params.</p>
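<p>In other words, build the full URL yourself before calling <code>requests.get</code>; the second positional argument of <code>get</code> is <code>params</code>, which is why the path ends up after a <code>?</code>. A stdlib sketch of the join:</p>

```python
from urllib.parse import urljoin

base = "http://opendata-download-metfcst.smhi.se"
path = "api/category/pmp2g/version/2/geotype/point/lon/16.158/lat/58.5812/data.json"

# join the path onto the host instead of passing it as a second argument
full_url = urljoin(base + "/", path)
print(full_url)
```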
<p>From <code>requests</code> docs:</p>
<pre><code>>>> payload = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.get('http://httpbin.org/get', params=payload)
</code></pre>
<p>You can see that the URL has been correctly encoded by printing the URL:</p>
<pre><code>>>> print(r.url)
http://httpbin.org/get?key2=value2&key1=value1
</code></pre>
| 0 | 2016-09-01T11:09:14Z | [
"python",
"api",
"request"
] |
Using Numpy's random.choice to randomly remove item from list | 39,270,111 | <p>According to the <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow">http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html</a>, using the <code>replace = False</code> with Numpy's <code>random.choice</code> method should make the sample without replacement. However, this does not seem to work for me:</p>
<pre><code>In [33]: import numpy as np
In [34]: arr = range(5)
In [35]: number = np.random.choice(arr, replace = False)
In [36]: arr
Out[36]: [0, 1, 2, 3, 4]
</code></pre>
<p>The array <code>arr</code> is still <code>range(5)</code> after sampling, and not missing a (random) number as I would expect. How could I sample a number from <code>range(5)</code> without replacement? </p>
| -1 | 2016-09-01T11:18:05Z | 39,270,341 | <p>I ended up defining a function using the <code>random</code> library:</p>
<pre><code>import random
def sample_without_replacement(arr):
random.shuffle(arr)
return arr.pop()
</code></pre>
<p>Its use is shown below:</p>
<pre><code>In [51]: arr = range(5)
In [52]: number = sample_without_replacement(arr)
In [53]: number
Out[53]: 4
In [54]: arr
Out[54]: [2, 0, 1, 3]
</code></pre>
<p>Note that the method also shuffles the array in place, but for my purposes that doesn't matter.</p>
| 0 | 2016-09-01T11:29:18Z | [
"python",
"numpy",
"random"
] |
Using Numpy's random.choice to randomly remove item from list | 39,270,111 | <p>According to the <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow">http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html</a>, using the <code>replace = False</code> with Numpy's <code>random.choice</code> method should make the sample without replacement. However, this does not seem to work for me:</p>
<pre><code>In [33]: import numpy as np
In [34]: arr = range(5)
In [35]: number = np.random.choice(arr, replace = False)
In [36]: arr
Out[36]: [0, 1, 2, 3, 4]
</code></pre>
<p>The array <code>arr</code> is still <code>range(5)</code> after sampling, and not missing a (random) number as I would expect. How could I sample a number from <code>range(5)</code> without replacement? </p>
| -1 | 2016-09-01T11:18:05Z | 39,270,413 | <p>As mentioned in one of the comments, np.random.choice selects, with or without replacement, a series of numbers from a sequence, but it does not modify the sequence.</p>
<p><strong>Easy alternative</strong></p>
<pre><code>arr = range(5)
# numbers below will never contain repeated numbers (replace=False)
numbers = np.random.choice(arr, 3, replace=False)
</code></pre>
<p>The behaviour I think you want would be:</p>
<pre><code>arr = range(5)
all_but_one = np.random.choice(arr, len(arr) -1, replace=False)
</code></pre>
<p>so you would select N-1 numbers without replacement (to avoid repetitions), effectively removing a random element from the iterable. </p>
<p><strong>More efficient alternative</strong> </p>
<pre><code>arr = list(range(5))  # list() so that .pop() also works on Python 3
random_index = np.random.randint(0, len(arr))
arr.pop(random_index)
</code></pre>
| 3 | 2016-09-01T11:32:19Z | [
"python",
"numpy",
"random"
] |
Unittesting a function containing an infinite loop | 39,270,120 | <p>Given a function handling requests on a connection the body of the function is a infinite loop:</p>
<pre><code>def handle_connection():
# initialize stuff
...
while True:
# stuff to get the request
...
# stuff to handle the request
...
</code></pre>
<p>How would I unittest this function?</p>
| 0 | 2016-09-01T11:18:23Z | 39,270,192 | <p>You can limit it to run only once while testing, like:</p>
<pre><code>a = 0
while True and not a:
# do your stuff
a = 1
</code></pre>
<p>that will not require you to change indentation,</p>
<p>or output specific content while running to make sure it gets the right values into the variables while running:</p>
<pre><code>while True:
# get request
print(request)
# interact with request
print(data_achieved)
</code></pre>
<p>which will save you adding a variable.</p>
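<p>A complementary pattern is to extract one iteration into its own function and inject the stop condition, so the production loop can stay infinite while a test bounds it (the names here are illustrative, not from the question):</p>

```python
def handle_one_request(get_request, handle):
    # a single iteration of the loop body, unit-testable in isolation
    handle(get_request())

def handle_connection(get_request, handle, running=lambda: True):
    while running():
        handle_one_request(get_request, handle)

# in a test, the injected predicate stops the loop after two iterations
handled = []
ticks = iter([True, True, False])
handle_connection(lambda: 'request', handled.append,
                  running=lambda: next(ticks))
print(handled)  # ['request', 'request']
```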
| 1 | 2016-09-01T11:22:07Z | [
"python",
"unit-testing"
] |
MySQLdb cursor.execute formatter | 39,270,265 | <p>I'm working with Python and MySQLdb library. This is part of a code which has been working for a lot of time.</p>
<p>I was testing if the code executes correctly in other Ubuntu versions since we are planning a SO upgrade.</p>
<p>The following code works fine in Ubuntu 12.04 (our baseline system now with Python 2.7.3), Ubuntu 14.04.5 (Python 2.7.6), but doesn't in Ubuntu 16.04.1 (Python 2.7.12):</p>
<pre><code>def updateOutputPath(path):
try:
# Connect to database
with closing(MySQLdb.connect(host=Constants.database_host,
user=Constants.database_user, passwd=Constants.database_passwd,
db=Constants.database_name)) as db:
# Create a cursor to execute queries
with closing(db.cursor()) as cursor:
# Execute query
cursor.execute('UPDATE configuration SET value=%s WHERE ' +
'property=\'OUTPUT_PATH\'', (path))
# Save changes in the database and close connection
db.commit()
except MySQLdb.Error, e:
print_exception('Database error', e)
print_db_query(cursor)
</code></pre>
<p>In the <code>cursor.execute</code> statement, I get the following error: not all arguments converted during string formatting.</p>
<p>Obviously, I checked that the only argument is a valid string describing a valid path, just as when executed in other SO versions.</p>
<p>I could just create a string and pass it to the <code>cursor.execute</code> statement and the problem would be over, but I am curious about this problem.</p>
<p>Any idea why?</p>
<p>I also think it could be related to the python-mysqldb library version and not to the Python version. Its version is 1.2.3-1 in Ubuntu 12.04, 1.2.3-2 in Ubuntu 14.04, and 1.3.7-1 in Ubuntu 16.04 (I assume this update is related to the usage of Mysql-server 5.7 in this OS version).</p>
| 0 | 2016-09-01T11:25:09Z | 39,270,387 | <p>The parameters passed need to be iterables i.e. list or tuple. So it should be <code>(path,)</code> and not <code>(path)</code></p>
<pre><code>cursor.execute('UPDATE configuration SET value=%s WHERE ' +
'property=\'OUTPUT_PATH\'', (path,))
>>> path = 'hello'
>>> a = (path)
>>> type(a)
<type 'str'>
>>> b = (path,)
>>> type(b)
<type 'tuple'>
>>> for x in a:
... print(x)
...
h
e
l
l
o
>>> for x in b:
... print(x)
...
hello
</code></pre>
<p>If you pass <code>(path</code>) it would be the string which will be iterated as each character of path string, and not as items of the tuple, the correct tuple format is <code>(path,))</code></p>
| 0 | 2016-09-01T11:31:18Z | [
"python",
"mysql",
"ubuntu"
] |
How to output spark data to a csv file with separate columns? | 39,270,584 | <p>My code 1st extracts data using a regex and writes that data to a text file (string format).
I then tried creating a dataframe out of the contents of the text file so that I can have separate columns, which led to an error. (Writing it to a csv file writes the entire thing into just one column.)</p>
<pre><code>with open("C:\\Sample logs\\dataframe.txt",'a') as f:
f.write(str(time))
f.write(" ")
f.write(qtype)
f.write(" ")
f.write(rtype)
f.write(" ")
f.write(domain)
f.write("\n")
new = sc.textFile("C:\\Sample logs\\dataframe.txt").cache() # cause df requires an rdd
lines1 = new.map(lambda x: (x, ))
df = sqlContext.createDataFrame(lines1)
</code></pre>
<p>But i get the following error:</p>
<blockquote>
<p>TypeError: Can not infer schema for type: type 'unicode'</p>
</blockquote>
<p>I tried some other ways but they didn't help. All that I want to do is, after performing the write operation, create a dataframe that has separate columns in order to use groupBy(). </p>
<p>The input in the text file:</p>
<pre><code>1472128348.0 HTTP - tr.vwt.gsf.asfh
1472237494.63 HTTP - tr.sdf.sff.sdfg
1473297794.26 HTTP - tr.asfr.gdfg.sdf
1474589345.0 HTTP - tr.sdgf.gdfg.gdfg
1472038475.0 HTTP - tr.sdf.csgn.sdf
</code></pre>
<p>Expected output in csv format:</p>
<blockquote>
<p>The same thing as above but separated into columns so i can perform
groupby operations.</p>
</blockquote>
| 1 | 2016-09-01T11:40:17Z | 39,274,991 | <p>In order to split each space-separated line into a list of words, you'll need to replace:</p>
<pre><code>lines1 = new.map(lambda x: (x, ))
</code></pre>
<p>with</p>
<pre><code> lines1 = new.map(lambda line: line.split(' '))
</code></pre>
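<p>Outside of Spark, the effect of that map is just plain <code>str.split</code> on each line — a quick sketch of what each record becomes (sample lines taken from the question's log data):</p>

```python
lines = [
    "1472128348.0 HTTP - tr.vwt.gsf.asfh",
    "1472237494.63 HTTP - tr.sdf.sff.sdfg",
]

# Each line becomes a list of fields, i.e. one "row" with separate columns
rows = [line.split(' ') for line in lines]
print(rows[0])  # ['1472128348.0', 'HTTP', '-', 'tr.vwt.gsf.asfh']
```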
<p>I tried it on my machine, and after executing the following</p>
<pre><code>df = sqlContext.createDataFrame(lines1)
</code></pre>
<p>A new DF was created:</p>
<pre><code>df.printSchema()
root
|-- _1: string (nullable = true)
|-- _2: string (nullable = true)
|-- _3: string (nullable = true)
|-- _4: string (nullable = true)
df.show()
+-------------+----+---+-----------------+
| _1| _2| _3| _4|
+-------------+----+---+-----------------+
| 1472128348.0|HTTP| -| tr.vwt.gsf.asfh|
|1472237494.63|HTTP| -| tr.sdf.sff.sdfg|
|1473297794.26|HTTP| -| tr.asfr.gdfg.sdf|
| 1474589345.0|HTTP| -|tr.sdgf.gdfg.gdfg|
| 1472038475.0|HTTP| -| tr.sdf.csgn.sdf|
+-------------+----+---+-----------------+
</code></pre>
<p>You can execute groupBy:</p>
<pre><code>>>> df2 = df.groupBy("_1")
>>> type(df2)
<class 'pyspark.sql.group.GroupedData'>
>>>
</code></pre>
<p>In order to use a schema, you'll first need to define it;
see: <a href="https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html" rel="nofollow">https://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html</a></p>
<p>A schema sample can be found below (you'll need to add fields, and update names, type in order to adopt it to your case)</p>
<pre><code>from pyspark.sql.types import *
schema = StructType([
StructField("F1", StringType(), True),
StructField("F2", StringType(), True),
StructField("F3", StringType(), True),
StructField("F4", StringType(), True)])
df = sqlContext.createDataFrame(rdd, schema)
</code></pre>
<p>Afterwards you'll be able to run it with schema:</p>
<pre><code>df = sqlContext.createDataFrame(lines1,schema)
</code></pre>
<p>And now, you'll have names for the fields:</p>
<pre><code>df.show()
+-------------+----+---+-----------------+
| F1| F2| F3| F4|
+-------------+----+---+-----------------+
| 1472128348.0|HTTP| -| tr.vwt.gsf.asfh|
|1472237494.63|HTTP| -| tr.sdf.sff.sdfg|
|1473297794.26|HTTP| -| tr.asfr.gdfg.sdf|
| 1474589345.0|HTTP| -|tr.sdgf.gdfg.gdfg|
| 1472038475.0|HTTP| -| tr.sdf.csgn.sdf|
+-------------+----+---+-----------------+
</code></pre>
<p>In order to save it to CSV, you'll need to use <code>toPandas()</code> (a PySpark DataFrame method) and <code>to_csv()</code>
(part of python pandas)</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html</a></p>
<pre><code>df.toPandas().to_csv('mycsv.csv')
</code></pre>
<p>the content of the csv file:</p>
<pre><code>cat mycsv.csv
,F1,F2,F3,F4
0,1472128348.0,HTTP,-,tr.vwt.gsf.asfh
1,1472237494.63,HTTP,-,tr.sdf.sff.sdfg
2,1473297794.26,HTTP,-,tr.asfr.gdfg.sdf
3,1474589345.0,HTTP,-,tr.sdgf.gdfg.gdfg
4,1472038475.0,HTTP,-,tr.sdf.csgn.sdf
</code></pre>
<p>Note that you can cast a column using <code>.cast()</code>, e.g. casting F1 to be of type float (adding a new column of type float, and dropping the old column):</p>
<pre><code>df = df.withColumn("F1float", df["F1"].cast("float")).drop("F1")
</code></pre>
| 1 | 2016-09-01T15:00:57Z | [
"python",
"csv",
"apache-spark",
"pyspark",
"apache-spark-sql"
] |
Python alter external variable from within function | 39,270,728 | <p>When I run this it works, but it says </p>
<pre><code>"name 'select_place' is assigned to before global declaration"
</code></pre>
<p>When I get rid of the second global, no comment appears, but as select_place is no longer global it is not readable (if selected) in my last line of code.
I'm really new to python, ideally I'd like a way of not using the global command but after searching i still can't find anything that helps. </p>
<p>My code:</p>
<pre><code>def attempt(x):
if location =='a':
global select_place
select_place = 0
    if location =='b':
global select_place
select_place = 1
place = ([a,b,c,d])
</code></pre>
<p>This is the start of some turtle graphics </p>
<pre><code>def Draw_piece_a(Top_right):
goto(place[select_place])
</code></pre>
| 1 | 2016-09-01T11:47:21Z | 39,270,864 | <p>You need to define the variable at module level first; additionally, one <code>global</code> declaration at the top of the function is enough, and the code can be made clearer:</p>
<pre><code>select_place = False
def attempt(x):
global select_place
if location == 'a':
select_place = 0
elif location == 'b':
select_place = 1
</code></pre>
<p>Also, there is no return value for <code>attempt()</code>, is this what you want?</p>
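<p>If you'd rather avoid <code>global</code> entirely (usually the cleaner option), return the value instead and assign it at the call site — a minimal sketch, assuming <code>location</code> is passed in as a parameter:</p>

```python
def attempt(location):
    # Return the index instead of mutating a global
    if location == 'a':
        return 0
    elif location == 'b':
        return 1
    return None

select_place = attempt('b')
print(select_place)  # 1
```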
| 1 | 2016-09-01T11:52:58Z | [
"python"
] |
Python: Write data to file (excel or txt) with field names | 39,270,738 | <p>What is the best way (or what would you suggest) to do the following thing in Pyhton:</p>
<p>I run several simulations in Python and I want to store some results (each simulation in a new sheet, or new txt file). </p>
<p>Example:</p>
<pre><code>for model in simmodels:
result = simulate(model)
speed = result["speed"]
distance = result["distance"]
</code></pre>
<p>"speed" and "distance" are numpy arrays (1column, 1000lines)</p>
<p>Now i want to store these results with the following structure:</p>
<p>1 column: 1 line "speed" 2-1001 lines --> result-array speed</p>
<p>2 column: 1 line "distance" 2-1001 lines --> result-array distance
etc.</p>
<p>I am new to Python and i found a lot of possibilities (structured arrays in numpy, lists, dictionaries, etc.). I am looking for a pretty simple, straight forward approach :).</p>
<p>Thank you very much for your help</p>
| -1 | 2016-09-01T11:47:54Z | 39,271,040 | <p>Please check link - <a href="https://pymotw.com/2/csv/" rel="nofollow">CSV Module</a> </p>
<p>CSV files can be opened in Excel also.</p>
<p>Please check the below code :</p>
<pre><code>import csv
speed = [10,20]
dist = [100,200]
f = open("out.csv", 'wb')
try:
writer = csv.writer(f)
writer.writerow( ('Speed','Distance') )
for i in range(2):
writer.writerow( (speed[i],dist[i]) )
finally:
f.close()
</code></pre>
<p>Content of out.csv file :</p>
<pre><code>Speed,Distance
10,100
20,200
</code></pre>
<p>Also, if you want exact excel format, you can set dialect as below:</p>
<pre><code>writer = csv.writer(f,dialect='excel')
</code></pre>
<p>Please check dialect section in shared link for more info.</p>
<p>Finally can be use as :</p>
<pre><code>import csv
f = open("out.csv", 'wb')
writer = csv.writer(f,dialect='excel')
writer.writerow( ('Speed','Distance') )
for model in simmodels:
result = simulate(model)
speed = result["speed"]
distance = result["distance"]
writer.writerow( (speed,distance) )
f.close()
</code></pre>
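<p>Note that the <code>'wb'</code> mode above is Python 2 specific. On Python 3 the csv module wants a text-mode file opened with <code>newline=''</code> — a sketch of the same idea (writing to a temporary file here just for illustration):</p>

```python
import csv
import tempfile

speeds = [10, 20]
distances = [100, 200]

# Python 3: open in text mode with newline='' so the csv module
# controls line endings itself
with tempfile.NamedTemporaryFile('w', newline='', suffix='.csv',
                                 delete=False) as f:
    writer = csv.writer(f, dialect='excel')
    writer.writerow(('Speed', 'Distance'))
    for s, d in zip(speeds, distances):
        writer.writerow((s, d))
    name = f.name

with open(name, newline='') as f:
    rows = list(csv.reader(f))
print(rows)  # [['Speed', 'Distance'], ['10', '100'], ['20', '200']]
```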
| 0 | 2016-09-01T12:00:07Z | [
"python",
"export-to-excel"
] |
Python: Call for value inside nested class | 39,270,798 | <p>I'm trying to call for value from class B that is nested in class A and use it in class C.
I'm getting AttributeError:</p>
<pre><code>class A():
class B():
a = 1
class C():
b = 2
c = B.a + b
AttributeError: class B has no attribute 'a'
</code></pre>
<p>I also tried to call From 'A', Pycharm recognize it, but python still get AttributeError:</p>
<pre><code>class A(object):
class B(object):
a = 1
class C(object):
b = 2
c = A.B.a + b
AttributeError: class A has no attribute 'B'
</code></pre>
<p>Does someone have an idea of how to use it?
Thanks</p>
| 0 | 2016-09-01T11:50:20Z | 39,270,913 | <p>The problem is that the class <code>A</code> is not constructed yet while its body is executing. That is, the name <code>A</code> is not bound to a class until the class statement finishes.</p>
<p>Try this workaround:</p>
<pre><code>class A():
class B():
a = 1
</code></pre>
<p>Now create <code>C</code> separately (<code>A</code> is already defined):</p>
<pre><code>class C():
b = 2
c = A.B.a + b
</code></pre>
<p>And reference <code>C</code> from <code>A</code>:</p>
<pre><code>A.C = C
</code></pre>
<p>This can possibly be done via <a class='doc-link' href="http://stackoverflow.com/documentation/python/286/metaclasses#t=201609011200116976592">meta-classes</a>, but could be an over-kill here.</p>
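<p>Putting the workaround together as one runnable sketch:</p>

```python
class A(object):
    class B(object):
        a = 1

# A (and therefore A.B) is fully defined at this point,
# so C's body can safely reference it
class C(object):
    b = 2
    c = A.B.a + b

A.C = C  # re-attach C as a nested class

print(A.C.c)  # 3
```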
| 1 | 2016-09-01T11:55:08Z | [
"python"
] |
Python: Call for value inside nested class | 39,270,798 | <p>I'm trying to call for value from class B that is nested in class A and use it in class C.
I'm getting AttributeError:</p>
<pre><code>class A():
class B():
a = 1
class C():
b = 2
c = B.a + b
AttributeError: class B has no attribute 'a'
</code></pre>
<p>I also tried to call From 'A', Pycharm recognize it, but python still get AttributeError:</p>
<pre><code>class A(object):
class B(object):
a = 1
class C(object):
b = 2
c = A.B.a + b
AttributeError: class A has no attribute 'B'
</code></pre>
<p>Does someone have an idea of how to use it?
Thanks</p>
| 0 | 2016-09-01T11:50:20Z | 39,272,192 | <p>While the class body of A is still being executed, the name A is not yet bound, so you can not access the classes, variables and methods defined in an enclosing class from inside a nested class.</p>
<p>You can try separating the class definitions though as suggested by @Reut Sharabani.</p>
| 0 | 2016-09-01T12:58:21Z | [
"python"
] |
Python: Call for value inside nested class | 39,270,798 | <p>I'm trying to call for value from class B that is nested in class A and use it in class C.
I'm getting AttributeError:</p>
<pre><code>class A():
class B():
a = 1
class C():
b = 2
c = B.a + b
AttributeError: class B has no attribute 'a'
</code></pre>
<p>I also tried to call From 'A', Pycharm recognize it, but python still get AttributeError:</p>
<pre><code>class A(object):
class B(object):
a = 1
class C(object):
b = 2
c = A.B.a + b
AttributeError: class A has no attribute 'B'
</code></pre>
<p>Does someone have an idea of how to use it?
Thanks</p>
| 0 | 2016-09-01T11:50:20Z | 39,272,724 | <p>You can not access the class by its name while the class definition statement is still being executed. </p>
<pre><code>class A(object):
class B(object):
a = 1
class C(object):
b = 2
        c = A.B.a + b # here the class A statement is still being executed, there is no A class yet
</code></pre>
<p>To solve the problem you must defer the execution of those statements:</p>
<ul>
<li>move all those statements into a <a href="https://docs.python.org/3.5/library/functions.html?highlight=classmethod#classmethod" rel="nofollow">classmethod</a></li>
<li>call them after the classes are defined.</li>
</ul>
<hr>
<pre><code>class A(object):
class B(object):
@classmethod
def init(cls):
cls.a = 1
class C(object):
@classmethod
def init(cls):
cls.b = 2
cls.c = A.B.a + cls.b
@classmethod
def init(cls):
cls.B.init()
cls.C.init()
A.init()
</code></pre>
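<p>A condensed, runnable version of that classmethod approach — after <code>A.init()</code> has run, the attributes are in place just as if they had been plain class attributes:</p>

```python
class A(object):
    class B(object):
        @classmethod
        def init(cls):
            cls.a = 1

    class C(object):
        @classmethod
        def init(cls):
            cls.b = 2
            # The name A is bound by the time init() is actually called
            cls.c = A.B.a + cls.b

    @classmethod
    def init(cls):
        cls.B.init()
        cls.C.init()

A.init()
print(A.C.c)  # 3
```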
| 0 | 2016-09-01T13:23:05Z | [
"python"
] |
Python, How to Define a Function whose input is a Variable in the Global Namespace it must then Alter | 39,270,841 | <p>I have been learning to program in python, and came across this question which I have been struggling to solve. The question is as follows:</p>
<p>Write a function f(list, start, end) which takes as arguments a list and two indices and modifies the argument list so that it is equal to the result of the slice expression list[start:end]</p>
<p>I can write a function that splices the list for positive indices, ie:</p>
<pre><code>def f(this_list, start, end):
this_list=this_list[start:end+1]
</code></pre>
<p>But how do I get it to update whatever list the function is pointed to in the global namespace?</p>
<p>So, for instance, if I then get it to run:</p>
<pre><code>x=[1, 2, 3, 4, 5]
f(x, 2, 4)
print x
</code></pre>
<p>it returns the originally defined x, not the updated. So this is because it has only updated the list in the function's namespace, yes? But then how can I get it to update x globally?</p>
| -2 | 2016-09-01T11:51:54Z | 39,271,912 | <p>As someone already mentioned in the comments, you should definitely read <a href="http://nedbatchelder.com/text/names.html" rel="nofollow">Facts and myths about Python names and values</a></p>
<p>The simple solution to your problem is to make a new value and return it.
This will work:</p>
<pre><code>def f(this_list, start, end):
this_list=this_list[start:end+1]
return this_list
</code></pre>
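<p>If the exercise really does require <em>modifying the argument list</em> (so that the caller's <code>x</code> changes without reassignment), slice assignment mutates the existing list object in place. A sketch, following the exercise's <code>list[start:end]</code> wording:</p>

```python
def f(this_list, start, end):
    # Assigning to the full slice replaces the contents of the same object,
    # so every name bound to that list sees the change
    this_list[:] = this_list[start:end]

x = [1, 2, 3, 4, 5]
f(x, 2, 4)
print(x)  # [3, 4]
```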
| 0 | 2016-09-01T12:43:19Z | [
"python",
"list",
"function"
] |
Printing PHP errors with python | 39,270,902 | <p>I'm using the following to run PHP code with python. If the PHP code located in the file.php encounters an error, I can see the error on my terminal, but I want to have the error on the result variable.</p>
<p>How can I catch the error as a string?</p>
<pre><code>proc = subprocess.Popen("php /path/file.php", shell=True, stdout=subprocess.PIPE)
result = proc.stdout.read().decode("utf-8")
</code></pre>
| 0 | 2016-09-01T11:54:50Z | 39,271,382 | <p>The error messages are usually written to <code>stderr</code>, rather than <code>stdout</code>. If that is the case, you can get it from the corresponding attribute.</p>
<pre><code>proc = subprocess.Popen(["php", "/path/file.php"],
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out = proc.stdout.read().decode("utf-8")
err = proc.stderr.read().decode("utf-8")
</code></pre>
| 0 | 2016-09-01T12:17:41Z | [
"php",
"python"
] |
how to split this type of string? | 39,270,944 | <pre><code>row_data=" 'NULL','to_date(to_char(to_date('19700101'',''YYYYMMDD') + interval '1s' * logevent_timestamp_seconds',''YYYY-MM-DD')',''YYYY-MM-DD')','NULL'"
row_data_list = row_data.split("\',\'")
</code></pre>
<p>I want to split the data accordingly into three objects</p>
<ul>
<li>NULL </li>
<li>to_date(to_char(to_date('19700101'',''YYYYMMDD') + interval '1s'* logevent_timestamp_seconds',''YYYY-MM-DD')',''YYYY-MM-DD') </li>
<li>NULL</li>
</ul>
<p>I cannot hard code it so that it can split it. Is there any way to do it, apart from hard coding and detecting it?</p>
<p>I'm reading the data row_data from a file</p>
| -2 | 2016-09-01T11:56:13Z | 39,271,227 | <p>Split the string by <code>,</code>, then trim the <code>'</code> from both sides for every element:</p>
<pre><code>>>> row_data=" 'NULL','to_date(to_char(to_date('19700101'',''YYYYMMDD') + interval '1s' * logevent_timestamp_seconds',''YYYY-MM-DD')',''YYYY-MM-DD')','NULL'"
>>> row_data_list = list(map(lambda x: x[1:-1], row_data.strip().split(',')))
>>> row_data_list
['NULL', "to_date(to_char(to_date('19700101'", "'YYYYMMDD') + interval '1s' * logevent_timestamp_seconds", "'YYYY-MM-DD')", "'YYYY-MM-DD')", 'NULL']
</code></pre>
<p>This will work for every string styled that way, with every number of elements.</p>
| 2 | 2016-09-01T12:09:44Z | [
"python",
"string",
"parsing",
"split"
] |
datetime date value not updating into MySQL database in Python correctly (updates the integer 2005 instead) | 39,270,990 | <p>I am having a weird issue while trying to update a record in my database. Here is the section of the code that doesn't make sense:</p>
<pre><code> """code preceeds"""
self.NextMail = self.get_NextMail()
if self.NextMail != None:
"""debugging"""
print(self.NextMail)
print(type(self.NextMail))
print(record[0]) """this is the primary key for Mailouts table and a 1-1 related students table"""
self.cur.execute("""UPDATE Mailouts
SET NextMail={0}
WHERE StudentID={1}
""".format(self.NextMail,record[0]))
"""code continues..."""
</code></pre>
<p>When I run this, here is what is printed:</p>
<pre><code>2017-01-11
<class 'datetime.date'>
1
</code></pre>
<p>As you can see, the date added to the database should be 2017-01-11. The SQL type for that field is DATE. </p>
<p>When I select and output this table, the NextMail field appears as in the image: <a href="http://i.stack.imgur.com/0x3ow.png" rel="nofollow">screenshot</a></p>
<p>Note that there is no place in my code that ever mentions the year 2005, nor the number 2005. if you go ctrl + f and search 2005, nothing comes up. The database was empty, and the field I was updating was Null before I ran the program. </p>
<p>If I go </p>
<pre><code> self.cur.execute("""SELECT * FROM Mailouts LIMIT 1""")
self.all_data = self.cur.fetchall()
print(self.all_data)
</code></pre>
<p>the record shows the number 2005 in the NextMail field with the type int.</p>
<p>I am using PyQt for GUI
I am using the modules sqlite3, datetime, calendar.</p>
| 1 | 2016-09-01T11:58:02Z | 39,271,267 | <p>Try this instead:</p>
<pre><code>self.cur.execute("""UPDATE Mailouts
SET NextMail=?
WHERE StudentID=?
""", (self.NextMail, record[0]))
</code></pre>
<p>The string formatter will convert your <code>self.NextMail</code> to a string so when it passes the query to the engine, the engine will end up interpreting it as a mathematical operation (ie. 2017-1-11 = 2005).</p>
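<p>You can watch that arithmetic happen directly in sqlite3 — a sketch of both variants:</p>

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
d = datetime.date(2017, 1, 11)

# Unquoted string formatting: SQLite sees the literal 2017-01-11
# and evaluates it as subtraction
cur.execute("SELECT {0}".format(d))
print(cur.fetchone()[0])  # 2005

# Parameter binding: the date arrives as the string '2017-01-11'
cur.execute("SELECT ?", (str(d),))
print(cur.fetchone()[0])  # 2017-01-11
```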
| 1 | 2016-09-01T12:12:29Z | [
"python",
"mysql",
"sql"
] |
Trying to combine elements of 2 lists into 1 | 39,271,077 | <p>I've written a function that should randomly pick 2 lists from the first 9 items of a bigger list and randomly pick values from each list and create a new list of the same length. When testing it on its own the function seems to work properly, but when called as a part of the program it doesn't seem to do anything. Each generation returns the exact same values as the last. I'm really not sure what's wrong here and I would love your help!</p>
<pre><code> w, h = 9, 10
network = [[0 for x in range(w)] for y in range(h)]
def sigmoid(sigin):
return 1 / (1 + math.exp(-sigin))
def netcal(x):
network[x].append(sigmoid((sigmoid(i1*network[x][0]+i2*network[x][1])*network[x][6])+(sigmoid(i1*network[x][2]+i2*network[x][3])*network[x][7])+(sigmoid(i1*network[x][4]+i2*network[x][5])*network[x][8])))
def seed():
b = 0
while b < 10:
y = 0
while y < 9:
network[b][y] = random.random()
y += 1
b += 1
def calall():
c = 0
while c < 9:
netcal(c)
c += 1
def cost():
d = 0
while d < 9:
network[d].append(1 - network[d][9])
print(network[d][10])
d += 1
def sort():
network.sort(key=lambda x: x[-1])
#not working
def evol():
num = random.sample(range(0,10), 2)
evol1 = network[num[0]]
evol2 = network[num[1]]
evol3 = network[9]
i = 0
while i < 9 :
j = random.randrange(0,2)
if j == 0 :
evol3[i] = evol1[i]
else:
evol3[i] = evol2[i]
i += 1
network[9] = evol3
i1=0
i2=1
seed()
t = 0
while t < 1000:
calall()
cost()
sort()
evol()
print('break')
t += 1
</code></pre>
<p>Here's the code I used to test the function:</p>
<pre><code>w, h = 9, 10
network = [[0 for x in range(w)] for y in range(h)]
def seed():
b = 0
while b < 10:
y = 0
while y < 9:
network[b][y] = random.random()
y += 1
b += 1
def sort():
network.sort(key=lambda x: x[-1])
def evol():
num = random.sample(range(0,9), 2)
print(num)
evol1 = network[num[0]]
evol2 = network[num[1]]
evol3 = network[9]
i = 0
while i < 9 :
j = random.randrange(0,2)
if j == 0 :
evol3[i] = evol1[i]
else:
evol3[i] = evol2[i]
i += 1
network[9] = evol3
seed()
sort()
print(network[9])
evol()
print(network[9])
</code></pre>
<p>I just figured it out and I feel incredibly stupid. The output from the code is the cost function for every net. This cost value is output explicitly as network[x][10]. However, when calculating the cost function it is appended to the end of the list, rather than replacing the value. The function was working perfectly fine, but because the cost is appended instead of changing the code outputs a static value. </p>
<p>Scratch that. That definitely was an issue, but after fixing that I'm still faced with the exact same problem. Here's my updated code(edit: I've also made the evol() function output network[9] at the beginning and end of the function and it is showing the weights are changing as they should be. This leads me to believe the issue is with either the calculation of the nets or their output):</p>
<pre><code> w, h = 11, 10
network = [[0 for x in range(w)] for y in range(h)]
def sigmoid(sigin):
return 1 / (1 + math.exp(-sigin))
def netcal(x):
network[x][9] = (sigmoid((sigmoid(i1*network[x][0]+i2*network[x][1])*network[x][6])+(sigmoid(i1*network[x][2]+i2*network[x][3])*network[x][7])+(sigmoid(i1*network[x][4]+i2*network[x][5])*network[x][8])))
def seed():
b = 0
while b < 10:
y = 0
while y < 9:
network[b][y] = random.random()
y += 1
b += 1
def calall():
c = 0
while c < 9:
netcal(c)
c += 1
def cost():
d = 0
while d < 9:
network[d][10] = (1 - network[d][9])
print(network[d][10])
d += 1
def sort():
network.sort(key=lambda x: x[-1])
#not working
def evol():
num = random.sample(range(0,9), 2)
evol1 = network[num[0]]
evol2 = network[num[1]]
evol3 = network[9]
print('before', network[9])
i = 0
while i < 9 :
j = random.randrange(0,2)
if j == 0 :
evol3[i] = evol1[i]
else:
evol3[i] = evol2[i]
i += 1
network[9] = evol3
print('after', network[9])
i1=0
i2=1
seed()
t = 0
while t < 10:
calall()
cost()
sort()
evol()
print('break')
t += 1
</code></pre>
| -1 | 2016-09-01T12:02:07Z | 39,276,776 | <p>There were 2 very stupid issues in the original code. </p>
<ol>
<li>The output of the cost function was reading network[x][10], but new cost values were appended rather than replacing the old value, so the output was just putting out the same variable on a loop.</li>
<li>The calculate and cost loops were both set to end before they would get to the last item in the list, so while the weights were changing those values were never calculated nor displayed.</li>
</ol>
| 0 | 2016-09-01T16:40:06Z | [
"python",
"neural-network",
"genetic-algorithm",
"evolutionary-algorithm"
] |
Pop out the whole dic if element of 1st dic in list is repeated? | 39,271,253 | <p>Here I have a nested list of dicts, where I want to remove the duplicates that are equal to the 1st element of each inner list.
Input :</p>
<pre><code>data = [
[
[{'color': '1'},{'color': '0'},{'color': '2'},{'color': '1'}],
[{'color': '2'},{'color': '3'},{'color': '2'},{'color': '5'}],
[{'color': '1'},{'color': '1'},{'color': '3'},{'color': '3'}]
],
[
[{'color': '1'},{'color': '1'},{'color': '4'},{'color': '4'}],
[{'color': '4'},{'color': '3'},{'color': '1'},{'color': '4'}],
[{'color': '7'},{'color': '1'},{'color': '7'},{'color': '1'}]
]
]
</code></pre>
<p>I have tried and got the expected output. Is there a more Pythonic way to achieve the same?</p>
<p>Code :</p>
<pre><code>new = [] ;
for i in data:
master = []
for j in i:
temp = []
for k in j:
if j[0]['color'] != k['color']:
temp.append(k)
temp.insert(0,j[0])
master.append(temp)
new.append(master)
print(new)
</code></pre>
<p>Expected Output :</p>
<pre><code>data = [
[
[{'color': '1'},{'color': '0'},{'color': '2'}],
[{'color': '2'},{'color': '3'},{'color': '5'}],
[{'color': '1'},{'color': '3'},{'color': '3'}]
],
[
[{'color': '1'},{'color': '4'},{'color': '4'}],
        [{'color': '4'},{'color': '3'},{'color': '1'}],
[{'color': '7'},{'color': '1'},{'color': '1'}]
]
]
</code></pre>
| 1 | 2016-09-01T12:11:13Z | 39,271,434 | <p>No need for all the temporary lists:</p>
<pre><code>for item in data:
for i,subitem in enumerate(item):
item[i] = [item[i][0]] + [dct for dct in item[i][1:]
if dct['color'] != item[i][0]['color']]
</code></pre>
<p>Basically it just iterates through each of the subitems and replaces it with its first item plus the rest of the items that don't have the same value.</p>
<p>Although it's ugly, you can even reduce it to a one-liner (for illustrative purposes only... I wouldn't recommend it because readability):</p>
<pre><code>data = [[[item[i][0]] + [dct for dct in item[i][1:] if dct['color'] != item[i][0]['color']] for i,subitem in enumerate(item)] for item in data]
</code></pre>
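<p>Running the loop version on the first block of the sample input gives the expected result:</p>

```python
data = [
    [
        [{'color': '1'}, {'color': '0'}, {'color': '2'}, {'color': '1'}],
        [{'color': '2'}, {'color': '3'}, {'color': '2'}, {'color': '5'}],
        [{'color': '1'}, {'color': '1'}, {'color': '3'}, {'color': '3'}],
    ]
]

# Keep each sublist's first dict, drop any later dict with the same color
for item in data:
    for i, subitem in enumerate(item):
        item[i] = [item[i][0]] + [dct for dct in item[i][1:]
                                  if dct['color'] != item[i][0]['color']]

print(data[0][0])  # [{'color': '1'}, {'color': '0'}, {'color': '2'}]
```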
| 1 | 2016-09-01T12:20:07Z | [
"python",
"list",
"dictionary"
] |
Python boilerpipe installation issue | 39,271,350 | <p>I am trying to install <a href="https://github.com/misja/python-boilerpipe" rel="nofollow">Python Boilerpipe</a> on my Ubuntu 14. It fails with the following error:</p>
<pre><code> Traceback (most recent call last):
File "setup.py", line 27, in <module>
download_jars(datapath=DATAPATH)
File "setup.py", line 21, in download_jars
tar = tarfile.open(tgz_name, mode='r:gz')
File "/usr/lib/python2.7/tarfile.py", line 1678, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/lib/python2.7/tarfile.py", line 1730, in gzopen
raise ReadError("not a gzip file")
tarfile.ReadError: not a gzip file
</code></pre>
<p>These are the steps I am following:</p>
<ul>
<li>pip install JPype1</li>
<li>pip install charade</li>
<li>git clone
<a href="https://github.com/misja/python-boilerpipe.git" rel="nofollow">https://github.com/misja/python-boilerpipe.git</a></li>
<li>cd python-boilerpipe</li>
<li>sudo python setup.py install</li>
</ul>
| 0 | 2016-09-01T12:16:23Z | 39,286,265 | <p>Found the issue: in setup.py they are looking for the Boilerpipe tar file, and they download it from Google Code, which is not there any more.</p>
<pre><code>def download_jars(datapath, version=boilerpipe_version):
tgz_url = 'https://boilerpipe.googlecode.com/files/boilerpipe-{0}- bin.tar.gz'.format(version)
</code></pre>
<p>So I replaced the same line with the new file location:</p>
<pre><code>tgz_url='https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/boilerpipe/boilerpipe-1.2.0-bin.tar.gz'
</code></pre>
<p>This worked for me.</p>
| 1 | 2016-09-02T07:06:41Z | [
"python",
"ubuntu-14.04",
"boilerpipe"
] |
Count all files in all folders/subfolders with Python | 39,271,372 | <p>Which is the <strong>most efficient way</strong> to count all files in all folders and subfolders in Python? I want to use this on Linux systems.</p>
<p>Example output:</p>
<blockquote>
<p>(Path files)</p>
<p>/ 2</p>
<p>/bin 100</p>
<p>/boot 20</p>
<p>/boot/efi/EFI/redhat 1</p>
<p>....</p>
<p>/root 34</p>
<p>....</p>
</blockquote>
<p>Paths without a file should be ignored.</p>
<p>Thanks.</p>
| 0 | 2016-09-01T12:17:05Z | 39,271,574 | <p>You can do it with <code>os.walk()</code>;</p>
<pre><code>import os
for root, dirs, files in os.walk('/some/path'):
if files:
print('{0} {1}'.format(root, len(files)))
</code></pre>
<p>Note that this will also include hidden files, i.e. those that begin with a dot (<code>.</code>).</p>
| -1 | 2016-09-01T12:27:08Z | [
"python",
"linux",
"file",
"count"
] |
Count all files in all folders/subfolders with Python | 39,271,372 | <p>Which is the <strong>most efficient way</strong> to count all files in all folders and subfolders in Python? I want to use this on Linux systems.</p>
<p>Example output:</p>
<blockquote>
<p>(Path files)</p>
<p>/ 2</p>
<p>/bin 100</p>
<p>/boot 20</p>
<p>/boot/efi/EFI/redhat 1</p>
<p>....</p>
<p>/root 34</p>
<p>....</p>
</blockquote>
<p>Paths without a file should be ignored.</p>
<p>Thanks.</p>
| 0 | 2016-09-01T12:17:05Z | 39,271,575 | <pre><code>import os
print [(item[0], len(item[2])) for item in os.walk('/path') if item[2]]
</code></pre>
<p>It returns a list of tuples of folders/subfolders and files count in <code>/path</code>.</p>
<p>OR</p>
<pre><code>import os
for item in os.walk('/path'):
if item[2]:
print item[0], len(item[2])
</code></pre>
<p>It prints folders/subfolders and files count in <code>/path</code>.</p>
<p>If you want try faster solution, then you had to try to combine:</p>
<pre><code>os.scandir() # from python 3.5.2
</code></pre>
<p>iterate recursively and use:</p>
<pre><code>from itertools import count
counter = count()
counter.next() # returns at first 0, next 1, 2, 3 ...
if counter.next() > 1000:
print 'dir with file count over 1000' # and use continue in for loop
</code></pre>
<p>That may be faster, because the <code>os.walk</code> function does some work that is unnecessary for you.</p>
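<p>For Python 3.5+, a recursive <code>os.scandir()</code> version of the same count could look like this (a sketch — it skips the permission-error handling a real directory walk would need; the demo tree below is made up):</p>

```python
import os
import tempfile

def count_files(path, result=None):
    """Recursively map each directory to its (non-recursive) file count."""
    if result is None:
        result = {}
    n = 0
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            count_files(entry.path, result)
        elif entry.is_file(follow_symlinks=False):
            n += 1
    if n:  # paths without a file are ignored, as requested
        result[path] = n
    return result

# Tiny demo tree: root/a.txt, root/sub/b.txt, root/sub/c.txt, root/empty/
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'sub'))
os.mkdir(os.path.join(root, 'empty'))
for name in ('a.txt', os.path.join('sub', 'b.txt'),
             os.path.join('sub', 'c.txt')):
    open(os.path.join(root, name), 'w').close()

counts = count_files(root)
print(counts[root])                       # 1
print(counts[os.path.join(root, 'sub')])  # 2
```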
| -1 | 2016-09-01T12:27:09Z | [
"python",
"linux",
"file",
"count"
] |
Solving two coupled ODEs by matrix form in Python | 39,271,424 | <p>I want to solve a coupled system of ODEs in matrix form which has the following form:</p>
<blockquote>
<p>y'_n = ((m_n)**2) * y_n+(C * y)_n , m'_n=-4*m_n*y_n </p>
</blockquote>
<p>where <code>C</code> is a matrix, <code>[2 1, -1 3]</code>. </p>
<p>On the other hand I want to solve these equations:</p>
<blockquote>
<p>y'1= m1 ** 2 * y1 + 2 * y1 + y2<br>
y'2= m2 ** 2 * y2 - y1 + 3 * y3<br>
m'1= -4 * m1 * y1 ,<br>
m'2= -4 * m2 * y2<br>
y1(0)=y2(0)=-15. and m1(0)=m2(0)=0.01</p>
</blockquote>
<p>in matrix form and I wrote the following program:</p>
<pre><code>import numpy as np
from pylab import plot,show
from scipy.integrate import odeint
C=np.array([[2,1],[-1,3]])
dt=0.001
def dy_dt(Y,time):
y,m=Y
m=m+dt*(-4.*m*y)
dy=m**2*y+np.dot(C,y)
return dy
m_init=np.ones(2)*0.01
time=np.linspace(0,4,1/dt)
y_init=np.ones(2)*-15.
y_tot=odeint(dy_dt,[y_init,m_init],time)
plot(time,y_tot[0])#y_1
plot(time,y_tot[1])#y_2
plot(time,y_tot[2])#m_1
plot(time,y_tot[3])#m_2
show()
</code></pre>
<p>but I encountered the following error:</p>
<pre><code> y_tot=odeint(dy_dt,[y_init,m_init],time)
File "/usr/lib/python2.7/dist-packages/scipy/integrate/odepack.py", line 215, in odeint
ixpr, mxstep, mxhnil, mxordn, mxords)
ValueError: Initial condition y0 must be one-dimensional.
</code></pre>
| 0 | 2016-09-01T12:19:35Z | 39,272,763 | <p>The initial value to odeint must be an array, not a matrix. Try using <code>y0=np.hstack((y_init, m_init))</code> and put that as the initial value (y0 is the second argument to odeint).</p>
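<p><code>np.hstack</code> turns the two 2-element vectors into one flat 4-element state vector, which is what <code>odeint</code> expects; inside the derivative function you slice it back apart. A NumPy-only sketch of that packing/unpacking (no integration performed here):</p>

```python
import numpy as np

y_init = np.ones(2) * -15.0
m_init = np.ones(2) * 0.01

Y0 = np.hstack((y_init, m_init))
print(Y0.shape)  # (4,)

def dY_dt(Y, t, C=np.array([[2.0, 1.0], [-1.0, 3.0]])):
    y, m = Y[:2], Y[2:]           # unpack the flat state vector
    dy = m ** 2 * y + C.dot(y)    # y'_n = m_n**2 * y_n + (C y)_n
    dm = -4.0 * m * y             # m'_n = -4 * m_n * y_n
    return np.hstack((dy, dm))    # repack as one flat vector

print(dY_dt(Y0, 0.0).shape)  # (4,)
```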
| 1 | 2016-09-01T13:24:22Z | [
"python",
"scipy",
"odeint"
] |
Node with as many children as possible python | 39,271,546 | <p>I want to create a data structure in Python, but since I'm very C-oriented, I need a little bit of help.</p>
<p>In general, I want to create a Node class which will contain the data, a pointer to a sibling, pointers to the children and a pointer to the parent.</p>
<p>this is a way to think of the Node class:</p>
<pre><code> NODE
/ / ... \ \
child_node1 - child_node2 - ... - child_node(N-1) - child_nodeN
</code></pre>
<p>What I'm struggling with so far is:
I want to overload the '+' operator for the Node class so I can do this:</p>
<pre><code>node1 = Node("data1")
node2 = Node("data2", 3)
node1 = node1 + node2
</code></pre>
<p>So basically make the 2 nodes, siblings. </p>
<p>Here's my code: </p>
<pre><code>class Node:
def __init__(self, data = None, numberOfChildren = 0):
'''
Creates a new Node with *numberOfChildren* children.
By default it will be set to 0, meaning it will only create the root of the tree.
No children whatsoever.
'''
self.__sibling_count = 0
self.__parent = Node()
self.__sibling = Node()
self.__data = data
self.__children = []
if numberOfChildren != 0:
'''
The Node has children and we need to initialize them
'''
for i in range(numberOfChildren):
self.__children[i] = Node()
def getParent(self):
return self.__parent
def getData(self):
return self.__data
def getChild(self, i):
'''
Returns the ith child of the current *Node*.
'''
return self.__children[i]
def __add__(self, other):
'''
Overloads the *+* function so that *Node* objects can be added.
The 2 merged *Node* elements will now be siblings.
ex. node1 = Node()
node2 = Node()
node1 = node1 + node2
'''
if self.__sibling_count == 0:
self.__sibling = other
self.__sibling_count += 1
return self
</code></pre>
<p>But when I try to add 2 nodes like this:</p>
<pre><code>node1 = Node()
node2 = Node()
node1 = node1 + node2
</code></pre>
<p>I get a <code>RuntimeError: maximum recursion depth exceeded</code>. Why is this happening?</p>
| 0 | 2016-09-01T12:25:53Z | 39,271,663 | <p>Python recursion is limited to prevent stack overflowing and infinite recursion. There for recursion without break conditions or counter will be stopped after some-many iterations.</p>
<p>Stop creating any more nodes after a number of levels, otherwise python will stop you. <strong>You are activating <code>__init__</code> in the first <code>Node</code>, then in every one of its children, and so on.</strong> This never stops, and triggers this run-time error.</p>
<p>See that as an estimation to how far you can go:</p>
<pre><code>>>> def f(): f()
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
... another 995 times the same
File "<stdin>", line 1, in f
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>You can change <code>__init__</code> to this:</p>
<pre><code>def __init__(self, levels = 5, numberOfChildren = 0, data = None):
'''
Creates a new Node with *numberOfChildren* children.
By default it will be set to 0, meaning it will only create the root of the tree.
No children whatsoever.
'''
self.__sibling_count = 0
# these will only produce **new** nodes. I commented them.
# self.__parent = Node()
# self.__sibling = Node()
self.__data = data
self.__children = []
if numberOfChildren != 0 and levels > 0:
'''
The Node has children and we need to initialize them
'''
for i in range(numberOfChildren):
self.__children.append(Node(levels - 1, numberOfChildren))  # append; indexing the empty list would raise IndexError
</code></pre>
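<p>As a quick, hedged illustration (not from the original answer): you can read the configured ceiling with <code>sys.getrecursionlimit()</code> and measure how much recursion headroom is actually available before the error fires:</p>

```python
import sys

# The configured limit (commonly 1000, but implementation-dependent).
limit = sys.getrecursionlimit()

def headroom(n=0):
    """Recurse until Python stops us, counting successful calls."""
    try:
        return headroom(n + 1)
    except RecursionError:  # RuntimeError on Python 2, as in the question
        return n

measured = headroom()  # always somewhat below the configured limit
```

<p><code>sys.setrecursionlimit()</code> can raise the ceiling, but for a tree like this the real fix is bounding the construction, as shown above.</p>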
| 0 | 2016-09-01T12:31:03Z | [
"python",
"tree"
] |
Node with as many children as possible python | 39,271,546 | <p>I want to create a data structure in Python, but since I'm very C oriented. I need a little bit of help.</p>
<p>In general, I want to create a Node class which will contain the data, a pointer to a sibling, pointers to the children and a pointer to the parent.</p>
<p>this is a way to think of the Node class:</p>
<pre><code> NODE
/ / ... \ \
child_node1 - child_node2 - ... - child_node(N-1) - child_nodeN
</code></pre>
<p>What I'm struggling with so far is:
I want to overload the '+' operator for the Node class so I can do this:</p>
<pre><code>node1 = Node("data1")
node2 = Node("data2", 3)
node1 = node1 + node2
</code></pre>
<p>So basically make the 2 nodes, siblings. </p>
<p>Here's my code: </p>
<pre><code>class Node:
def __init__(self, data = None, numberOfChildren = 0):
'''
Creates a new Node with *numberOfChildren* children.
By default it will be set to 0, meaning it will only create the root of the tree.
No children whatsoever.
'''
self.__sibling_count = 0
self.__parent = Node()
self.__sibling = Node()
self.__data = data
self.__children = []
if numberOfChildren != 0:
'''
The Node has children and we need to initialize them
'''
for i in range(numberOfChildren):
self.__children[i] = Node()
def getParent(self):
return self.__parent
def getData(self):
return self.__data
def getChild(self, i):
'''
Returns the ith child of the current *Node*.
'''
return self.__children[i]
def __add__(self, other):
'''
Overloads the *+* function so that *Node* objects can be added.
The 2 merged *Node* elements will now be siblings.
ex. node1 = Node()
node2 = Node()
node1 = node1 + node2
'''
if self.__sibling_count == 0:
self.__sibling = other
self.__sibling_count += 1
return self
</code></pre>
<p>But when I try to add 2 nodes like this:</p>
<pre><code>node1 = Node()
node2 = Node()
node1 = node1 + node2
</code></pre>
<p>I get a <code>RuntimeError: maximum recursion depth exceeded</code>. Why is this happening?</p>
| 0 | 2016-09-01T12:25:53Z | 39,272,829 | <p>Operator overloading in Python is allowed, but using the <code>+</code> operator for something that is not concatenation or summation is frowned upon. A more Pythonic implementation would be something like this untested fragment:</p>
<pre><code>class Node(object):
def __init__(self, parent=None):
self.parent = None  # must exist before set_parent() reads it
self.children = set()
self.set_parent(parent)
def set_parent(self, parent):
if self.parent is not None and self.parent is not parent:
self.parent.children.remove(self)
self.parent = parent
if parent is not None:
parent.children.add(self)  # keep the new parent's child set in sync
def siblings(self):
if self.parent is None:
return []
return [_ for _ in self.parent.children if _ is not self]
def add_child(self, node):
self.children.add(node)
node.set_parent(self)
def add_sibling(self, node):
assert self.parent, "root node can't have siblings"
self.parent.add_child(node)
</code></pre>
<p>... and so on. Of course you can override the <code>+</code> operator to perform <code>add_sibling</code>, but the gist of it is to rely heavily on the native collections.</p>
<p>If you want to create a node with 3 children, it would be:</p>
<pre><code>root = Node()
nodes = [Node(parent=root) for _ in range(3)]
</code></pre>
| 0 | 2016-09-01T13:26:47Z | [
"python",
"tree"
] |
The fastest way to update (partial sum of elements with complex conditions) the pandas dataframe | 39,271,564 | <p>I try to update a pandas dataframe which has 3 million rows. At the below, I reduced my problem into a more simple problem. In short, it does add values in a cummulative sense. </p>
<p>But this function takes too long for me (more than 10 hours in the real problem). Is there any room for speed-ups? Should I apply the updates only at the end?</p>
<p>Can we update the pandas dataframe in a faster way than with iterrows()?</p>
<p>Can we select multiple rows by their index and then update them?</p>
<pre><code>def set_r(group, i, colname, add):
if colname in group:
prev = group.iloc[i][colname]
if math.isnan(prev):
group.set_value(i, colname, add)
else:
group.set_value(i, colname, prev+add)
else:
group.set_value(i, colname, add)
def set_bl_info(group, i, r, bl_value, timeframe, clorca, bl_criteria):
group.set_value(i, timeframe + '_' + bl_criteria, True)
colname = timeframe + '_' + clorca + '_' + 'bb_count_'+ bl_criteria
set_r(group, i, colname, 1)
def bl_assign(days, bl_key, bl_value, group, bl_p05, bl_p01):
print bl_key
sub_group = group[(group.pledged_date >= bl_value[0]) & (group.pledged_date <= bl_value[1])]
coexisting_icl = sub_group[(sub_group.project_category == bl_value[2]) & (sub_group.cluster == bl_value[3])]
for i, r in coexisting_icl.iterrows():
set_bl_info(group, i, r, bl_value, 'coexisting', 'icl','p1')
# main function
bl_assign(days, bl_key, bl_value, group, bl_p05, bl_p01)
</code></pre>
<p>For more simplicity, my problem is something like below:</p>
<pre><code> A B C
0 0 0 False
1 7 0 True
2 8 0 True
3 5 0 True
</code></pre>
<p>Update B column if C is true with sum of A column's elements</p>
<pre><code> A B C
0 0 0 False
1 7 20 True
2 8 20 True
3 5 20 True
</code></pre>
<p>Then, if D is also true, update B cumulatively with the sum of E</p>
<pre><code> A B C D E
0 0 0 False False 1
1 7 20 True False 1
2 8 20 True True 1
3 5 20 True True 1
A B C D E
0 0 0 False False 1
1 7 20 True False 1
2 8 22 True True 1
3 5 22 True True 1
</code></pre>
| 2 | 2016-09-01T12:26:35Z | 39,273,011 | <blockquote>
<p>Update B column if C is true with sum of A column's elements</p>
</blockquote>
<pre><code>import numpy as np
df['B'] = np.where(df.C, df.A.sum(), 0)
</code></pre>
<blockquote>
<p>Then, if D is also true, update B with the sum of E (using the comment to the question above)</p>
</blockquote>
<pre><code>df.B = df.B + np.where(df.D, (df.E * df.D.astype(int)).sum(), 0)
</code></pre>
<p>So, at the end you have</p>
<pre><code>>>> df
A C B E D
0 0 False 0 1 False
1 7 True 20 1 False
2 8 True 22 1 True
3 5 True 22 1 True
</code></pre>
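<p>For reference, a self-contained version of the two steps, rebuilding the small frame from the question:</p>

```python
import numpy as np
import pandas as pd

# Rebuild the example frame from the question.
df = pd.DataFrame({
    'A': [0, 7, 8, 5],
    'C': [False, True, True, True],
    'D': [False, False, True, True],
    'E': [1, 1, 1, 1],
})

# Step 1: where C is True, B gets the total of column A.
df['B'] = np.where(df.C, df.A.sum(), 0)

# Step 2: where D is True, add the sum of E restricted to rows where D holds.
df['B'] = df['B'] + np.where(df.D, (df.E * df.D.astype(int)).sum(), 0)
```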
| 2 | 2016-09-01T13:34:02Z | [
"python",
"algorithm",
"pandas",
"complexity-theory"
] |
Local server by PythonJS | 39,271,645 | <p>I downloaded this
<a href="https://github.com/PythonJS/pythonjs-demo-server-nodejs" rel="nofollow">demo server</a>.
I followed the instructions:</p>
<blockquote>
<p>First, git clone this repo, and then run: npm install python-js. Now you are ready to run the server, run: ./run-demo.js and then open your browser to localhost:8080.</p>
</blockquote>
<p>Unfortunately I can't run run-demo.js because I have this error</p>
<pre><code>---------------------------
Windows Script Host
---------------------------
Line: 1
Character: 1
Error: Invalid character
Code: 800A03F6
Source: Microsoft JScript - compilation error
</code></pre>
<p>I tried to run this in the node.js console but got only "..." and nothing happened.</p>
<p>This is code of run-demo.js:</p>
<pre><code>#!/usr/bin/env node
var fs = require('fs')
//var pythonjs = require('../PythonJS/pythonjs/python-js')
var pythonjs = require('python-js')
var pycode = fs.readFileSync( './server.py', {'encoding':'utf8'} )
var jscode = pythonjs.translator.to_javascript( pycode )
eval( pythonjs.runtime.javascript + jscode )
</code></pre>
<p>Any ideas? I want to run local server and use PythonJS</p>
| 1 | 2016-09-01T12:30:17Z | 39,271,719 | <p>I don't believe <code>#</code> is a valid character in Javascript. If the <code>run-demo.js</code> file is being delivered to your browser, it certainly won't know what to make of the shebang (<code>#!</code>) line, which is used by the UNIX kernel to determine which executable should be used to process the file.</p>
| 1 | 2016-09-01T12:34:10Z | [
"javascript",
"python",
"node.js",
"server"
] |
Local server by PythonJS | 39,271,645 | <p>I downloaded this
<a href="https://github.com/PythonJS/pythonjs-demo-server-nodejs" rel="nofollow">demo server</a>.
I followed the instructions:</p>
<blockquote>
<p>First, git clone this repo, and then run: npm install python-js. Now you are ready to run the server, run: ./run-demo.js and then open your browser to localhost:8080.</p>
</blockquote>
<p>Unfortunately I can't run run-demo.js because I have this error</p>
<pre><code>---------------------------
Windows Script Host
---------------------------
Line: 1
Character: 1
Error: Invalid character
Code: 800A03F6
Source: Microsoft JScript - compilation error
</code></pre>
<p>I tried to run this in the node.js console but got only "..." and nothing happened.</p>
<p>This is code of run-demo.js:</p>
<pre><code>#!/usr/bin/env node
var fs = require('fs')
//var pythonjs = require('../PythonJS/pythonjs/python-js')
var pythonjs = require('python-js')
var pycode = fs.readFileSync( './server.py', {'encoding':'utf8'} )
var jscode = pythonjs.translator.to_javascript( pycode )
eval( pythonjs.runtime.javascript + jscode )
</code></pre>
<p>Any ideas? I want to run local server and use PythonJS</p>
| 1 | 2016-09-01T12:30:17Z | 39,280,125 | <p>If anyone else is looking for the solution, here it is:</p>
<pre><code>node run-demo.js
</code></pre>
<p>Simple as... ;)</p>
| 0 | 2016-09-01T20:10:04Z | [
"javascript",
"python",
"node.js",
"server"
] |
Why does get_name_by_addr return '' and org_by_addr return None? | 39,271,683 | <p>I am currently testing one of my classes which sets variables with the help of pygeoip.</p>
<p><code>org_by_addr</code> returns <code>None</code> when there is nothing found in the database:</p>
<pre><code>seek_org = self._seek_country(ipnum)
if seek_org == self._databaseSegments:
return None
</code></pre>
<p>While the <code>country_name_by_addr</code> function returns an empty string. </p>
<p>This forces me to check if the return is <code>None</code> and then setting it to <code>''</code> to have the variables uniformly.</p>
<p>Does anybody know, what the reason is to give different returns when there is no entry in the database? </p>
| 0 | 2016-09-01T12:31:57Z | 39,272,349 | <p>Other than the obvious "variable uniformity", what is the point of changing the NoneType to an empty string? There is a reason why </p>
<pre><code>bool ('') == bool (None) == False
</code></pre>
<p>In my opinion this is a stylistic difference. However, when a package has different return types like this, it can hint several things:</p>
<ul>
<li>if the function returns None, you can probably guess that the function would return an instanced object if a match was found in the database.</li>
<li>if instead of returning None, the function returns an empty string, you can at least expect the output of that function to return a valid string when an entry is found in the database.</li>
<li>if the function returns 0 instead of None, you can probably guess that the function would return a number of some sort if an entry was found in the database.</li>
</ul>
<p>So really it's mostly about informing the user in some way about what a valid return type would be.</p>
<p>My final suggestion would be to do away with the traditional thought of "types" when using Python. By that I mean the C philosophy of typing. In python there is a reason that you can say:</p>
<pre><code>if result:
#some code
</code></pre>
<p>And have it be valid across several different "types".</p>
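<p>Along those lines, a tiny (hypothetical) helper can normalize both "no result" conventions to a single value, which is exactly the uniformity the question asks for:</p>

```python
def normalize(value):
    """Collapse the two 'no entry' conventions (None and '') to ''."""
    return value if value else ''

# Both falsy returns behave the same under a truthiness check.
assert bool('') == bool(None) == False
```

<p>so <code>normalize(country_name)</code> and <code>normalize(org)</code> give uniform variables regardless of which convention the library used.</p>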
| 0 | 2016-09-01T13:06:12Z | [
"python",
"geoip",
"maxmind"
] |
Unit-testing a boost::python library in python | 39,271,702 | <p>So I have a shared library created with boost::python (C++).
For the C++ functions inside I have unit-tests that check that they are working.
Now I would like to use unit-test to see if I implemented the python interface correctly.
For this I thought about using the python package <code>unittest</code>.</p>
<p>Now my Folder setup is roughly:</p>
<pre><code>project
|
-- C++ source (library and boost::python stuff)
|
-- build (here the shared library is located)
|
-- Test (here I have the python classes that should test the interface)
</code></pre>
<p>The test folder has some subfolders that mirror the structure of the python interface, containing lots of small python modules testing the different aspects of the library.</p>
<p>So the <strong>question</strong> now:</p>
<blockquote>
<p>How do I <code>import</code> the shared library into the test?</p>
</blockquote>
<p>What I tried so far was in my <code>test_main.py</code></p>
<pre><code>import sys
sys.path.insert(0,'../build')
</code></pre>
<p>But this does not help for the modules inside the test folder. And anyway, hardcoding this path into the test code seems hackish. I also don't want to install an untested library just to find out the tests failed, and then uninstall it again.</p>
| 0 | 2016-09-01T12:32:58Z | 39,272,941 | <p>What you could do is run the tests while you are in the root directory in your case <code>project</code>. You can do <code>python Test/test_name.py</code>. Make sure your build library has a <code>__init__.py</code> file</p>
<p>The only change to the test is you'd have</p>
<pre><code>from build import blah # blah is the component you're testing
#test code here
</code></pre>
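<p>If you would rather keep running tests from inside <code>Test/</code>, a common (hypothetical) variant of the <code>sys.path</code> trick is to resolve the build directory relative to the test file itself, instead of the current working directory:</p>

```python
import os
import sys

# Resolve ../build relative to this test file (fall back to the CWD when
# __file__ is undefined, e.g. in an interactive session).
HERE = os.path.dirname(os.path.abspath(__file__)) if '__file__' in globals() else os.getcwd()
BUILD_DIR = os.path.normpath(os.path.join(HERE, '..', 'build'))

if BUILD_DIR not in sys.path:
    sys.path.insert(0, BUILD_DIR)
```

<p>This way the import works no matter which directory the tests are launched from.</p>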
| 1 | 2016-09-01T13:31:02Z | [
"python",
"c++",
"unit-testing",
"boost",
"out-of-source"
] |
What is the minimum number of swaps required to bubble sort an array? | 39,271,749 | <p>I'm trying to solve the Hackerrank problem <a href="https://www.hackerrank.com/challenges/new-year-chaos?h_r=next-challenge&h_v=zen" rel="nofollow">New Year Chaos</a>:</p>
<p><a href="http://i.stack.imgur.com/DBaq0.png" rel="nofollow"><img src="http://i.stack.imgur.com/DBaq0.png" alt="enter image description here"></a></p>
<p>Further explanation can be found on the page. For example, denoting the 'swapped' queue as <code>q</code>, if <code>q = [2, 1, 5, 3, 4]</code>, then the required number of swaps is 3:</p>
<p><a href="http://i.stack.imgur.com/oXzd0.png" rel="nofollow"><img src="http://i.stack.imgur.com/oXzd0.png" alt="enter image description here"></a></p>
<p>According to the first answer of <a href="https://www.quora.com/How-can-I-efficiently-compute-the-number-of-swaps-required-by-slow-sorting-methods-like-insertion-sort-and-bubble-sort-to-sort-a-given-array" rel="nofollow">https://www.quora.com/How-can-I-efficiently-compute-the-number-of-swaps-required-by-slow-sorting-methods-like-insertion-sort-and-bubble-sort-to-sort-a-given-array</a>, the number of swaps required by bubble sort is equal to the number of inversions in the array. I tried to test this with the following Hackerrank submission:</p>
<pre><code>#!/bin/python
import sys
T = int(raw_input().strip())
for a0 in xrange(T):
n = int(raw_input().strip())
q = map(int,raw_input().strip().split(' '))
# your code goes here
diff = [x - y for x, y in zip(q, range(1,n+1))]
if any([abs(el) > 2 for el in diff]):
print "Too chaotic"
else:
all_pairs = [(q[i], q[j]) for i in range(n) for j in range(i+1, n)]
inversions = [pair[0] > pair[1] for pair in all_pairs]
print inversions.count(True)
</code></pre>
<p>Here is also a version of the code to run locally:</p>
<pre><code>n = 5
q = [2, 1, 5, 3, 4]
diff = [x - y for x, y in zip(q, range(1,n+1))]
if any([abs(el) > 2 for el in diff]):
print "Too chaotic"
else:
all_pairs = [(q[i], q[j]) for i in range(n) for j in range(i+1, n)]
inversion_or_not = [pair[0] > pair[1] for pair in all_pairs]
print inversion_or_not.count(True)
</code></pre>
<p>For the given test case, the script correctly prints the number 3. However, for all the other 'hidden' test cases, it gives the wrong answer:</p>
<p><a href="http://i.stack.imgur.com/J8PJA.png" rel="nofollow"><img src="http://i.stack.imgur.com/J8PJA.png" alt="enter image description here"></a></p>
<p>I've also tried a submission which implements bubble sort:</p>
<pre><code>#!/bin/python
import sys
def swaps_bubble_sort(q):
q = list(q) # Make a shallow copy
swaps = 0
swapped = True
while swapped:
swapped = False
for i in range(n-1):
if q[i] > q[i+1]:
q[i], q[i+1] = q[i+1], q[i]
swaps += 1
swapped = True
return swaps
T = int(raw_input().strip())
for a0 in xrange(T):
n = int(raw_input().strip())
q = map(int,raw_input().strip().split(' '))
# your code goes here
diff = [x - y for x, y in zip(q, range(1,n+1))]
if any([abs(el) > 2 for el in diff]):
print "Too chaotic"
else:
print swaps_bubble_sort(q)
</code></pre>
<p>but with the same (failed) result. Is the minimum number of swaps not equal to the number of inversions or that attained by bubble sort?</p>
| 0 | 2016-09-01T12:35:56Z | 39,272,234 | <p>You just have to count the number of necessary swaps in bubble sort. Here is my code that got accepted.</p>
<pre><code>T = input()
for test in range(T):
n = input()
l = map(int, raw_input().split())
for i,x in enumerate(l):
if x-(i+1) > 2:
print "Too chaotic"
break
else:
counter = 0
while 1:
flag = True
for i in range(len(l)-1):
if l[i] > l[i+1]:
l[i],l[i+1] = l[i+1],l[i]
counter += 1
flag = False
if flag:
break
print counter
</code></pre>
<p>In your first code your approach is <code>O(n^2)</code> which is not appropriate for <code>n = 10^5</code>. In this line </p>
<pre><code>all_pairs = [(q[i], q[j]) for i in range(n) for j in range(i+1, n)]
</code></pre>
<p>you are trying to store <code>10^10</code> tuples in your RAM.</p>
<p>The problem with your second code is that you are using the <code>abs</code> of the elements of diff to decide whether the array is chaotic. However, a person can drift toward the end of the line simply by being bribed by others, and that doesn't violate the rules. So you just have to make sure that no one has moved forward more than two positions, not the other way around.</p>
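<p>Packaged as a function (a sketch of the approach in the answer above, using the one-sided chaos check), it looks like this:</p>

```python
def minimum_bribes(q):
    """Return the number of bubble-sort swaps, or None when some
    person moved forward more than two positions ("Too chaotic")."""
    if any(x - (i + 1) > 2 for i, x in enumerate(q)):
        return None
    q = list(q)  # work on a copy
    swaps = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(q) - 1):
            if q[i] > q[i + 1]:
                q[i], q[i + 1] = q[i + 1], q[i]
                swaps += 1
                swapped = True
    return swaps
```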
| 1 | 2016-09-01T13:00:36Z | [
"python",
"algorithm",
"sorting"
] |
Python:Changing path at the end of if statement | 39,271,783 | <p>My gui currently has a combo box with the option of selecting four different file locations. Once one is selected, every file in that directory will be displayed in a listbox:</p>
<pre><code>def ComboBox(self, event):
current = self.buttonChoice.current()
if (current == 0):
self.lb.delete(0, END)
for i in range(0, length1):
self.lb.insert(END, self.files1[i])
elif (current == 1):
self.lb.delete(0, END)
sys.path.insert(0, sys.path[0]+ "\\folder1")
for i in range(0, length2):
self.lb.insert(END, self.files2[i])
elif (current == 2):
self.lb.delete(0, END)
sys.path.insert(0, sys.path[0]+ "\\folder2")
for i in range(0, length3):
self.lb.insert(END, self.files2[i])
elif (current == 3):
self.lb.delete(0, END)
sys.path.insert(0, sys.path[0]+ "\\folder3")
for i in range(0, length4):
self.lb.insert(END, self.files4[i])
</code></pre>
<p>However my pathing isn't optimal, since the function does not return to the parent directory and is instead stuck in that folder (for example, if current==1 the directory will be in ...\folder1). To get this working I need to move down one directory at the end of each statement. I've looked at related questions and came across <code>os.chdir('..')</code>. For some reason I'm having trouble implementing this. Any ideas of how I can move down one directory at the end of each statement?</p>
| -1 | 2016-09-01T12:37:13Z | 39,271,910 | <p>Indeed you should be using <code>os.chdir</code> and not <code>sys.path.insert</code>.</p>
<p>To give you a full answer, one needs to see the rest of your class. More specifically, one must know what there is in <code>self.lb</code> and <code>self.files</code> and the logic filling it.</p>
| 0 | 2016-09-01T12:43:17Z | [
"python",
"python-3.x",
"tkinter",
"sys"
] |
Python:Changing path at the end of if statement | 39,271,783 | <p>My gui currently has a combo box with the option of selecting four different file locations. Once one is selected, every file in that directory will be displayed in a listbox:</p>
<pre><code>def ComboBox(self, event):
current = self.buttonChoice.current()
if (current == 0):
self.lb.delete(0, END)
for i in range(0, length1):
self.lb.insert(END, self.files1[i])
elif (current == 1):
self.lb.delete(0, END)
sys.path.insert(0, sys.path[0]+ "\\folder1")
for i in range(0, length2):
self.lb.insert(END, self.files2[i])
elif (current == 2):
self.lb.delete(0, END)
sys.path.insert(0, sys.path[0]+ "\\folder2")
for i in range(0, length3):
self.lb.insert(END, self.files2[i])
elif (current == 3):
self.lb.delete(0, END)
sys.path.insert(0, sys.path[0]+ "\\folder3")
for i in range(0, length4):
self.lb.insert(END, self.files4[i])
</code></pre>
<p>However my pathing isn't optimal, since the function does not return to the parent directory and is instead stuck in that folder (for example, if current==1 the directory will be in ...\folder1). To get this working I need to move down one directory at the end of each statement. I've looked at related questions and came across <code>os.chdir('..')</code>. For some reason I'm having trouble implementing this. Any ideas of how I can move down one directory at the end of each statement?</p>
| -1 | 2016-09-01T12:37:13Z | 39,271,914 | <p>How about this?</p>
<pre><code>example_dir = r'C:\Users\****\Desktop\PythonScripts\ResidualCreation'
def move_back_dir(a_dir, steps=1):
return '\\'.join(a_dir.split('\\')[:-steps])
print(move_back_dir(example_dir)) # -> C:\Users\****\Desktop\PythonScripts
print(move_back_dir(example_dir, 2)) # -> C:\Users\****\Desktop
</code></pre>
<hr>
<p>Or, as suggested in the comments, apply <code>os.path.dirname()</code> repeatedly, as follows:</p>
<pre><code>def move_back_dir(a_dir, steps=1):
for i in range(steps):
a_dir = os.path.dirname(a_dir)
return a_dir
print(move_back_dir(example_dir)) # -> C:\Users\****\Desktop\PythonScripts
print(move_back_dir(example_dir, 2)) # -> C:\Users\****\Desktop
</code></pre>
<hr>
<p>If setting the number of folders you want to go back (<code>steps</code> in the example above) is not required, simply do <code>os.path.dirname(filename)</code>.</p>
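<p>Since the example paths are Windows-style, here is a hedged, portable variant of the same idea (using <code>ntpath</code> so the backslash semantics hold on any host OS; the path below is illustrative, not the asker's real one):</p>

```python
import ntpath  # Windows path rules regardless of the platform running the code

example_dir = r'C:\Users\me\Desktop\PythonScripts\ResidualCreation'  # illustrative

def move_back_dir(a_dir, steps=1):
    # Same idea as above: strip one trailing path component per step.
    for _ in range(steps):
        a_dir = ntpath.dirname(a_dir)
    return a_dir
```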
| 1 | 2016-09-01T12:43:29Z | [
"python",
"python-3.x",
"tkinter",
"sys"
] |
Sequence number groupby ID with reset | 39,271,859 | <p>I'm looking for a way to generate a sequence of numbers that resets on every break.</p>
<p>Example </p>
<pre><code>ID VAR
A 0
A 0
A 1
A 1
A 0
A 0
A 1
A 1
B 1
B 1
B 1
B 0
B 0
B 0
B 0
</code></pre>
<p>Each time VAR is 1 and ID is the same as before, the counter increments;
but if ID is not the same or VAR is 0, it starts again from 0.</p>
<p>Desired output </p>
<pre><code>ID VAR DESIRED
A 0 0
A 0 0
A 1 1
A 1 2
A 0 0
A 0 0
A 1 1
A 1 2
B 1 1
B 1 2
B 1 3
B 0 0
B 0 0
B 0 0
B 0 0
</code></pre>
| 0 | 2016-09-01T12:40:53Z | 39,272,237 | <p>You can create an intermediate index, and then <code>groupby</code> this index and <code>ID</code>, cumsumming up on <code>VAR</code>:</p>
<pre><code>df['ix'] = df['VAR'].diff().fillna(0).abs().cumsum()
df['DESIRED'] = df.groupby(['ID','ix'])['VAR'].cumsum()
In [21]: df
Out[21]:
ID VAR ix DESIRED
0 A 0 0 0
1 A 0 0 0
2 A 1 1 1
3 A 1 1 2
4 A 0 2 0
5 A 0 2 0
6 A 1 3 1
7 A 1 3 2
8 B 1 3 1
9 B 1 3 2
10 B 1 3 3
11 B 0 4 0
12 B 0 4 0
13 B 0 4 0
14 B 0 4 0
</code></pre>
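<p>For completeness, the same two lines run end-to-end on the data from the question:</p>

```python
import pandas as pd

# Rebuild the example frame from the question.
df = pd.DataFrame({
    'ID': list('AAAAAAAABBBBBBB'),
    'VAR': [0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0],
})

# Intermediate index: bumps whenever VAR flips, then cumsum per (ID, ix).
df['ix'] = df['VAR'].diff().fillna(0).abs().cumsum()
df['DESIRED'] = df.groupby(['ID', 'ix'])['VAR'].cumsum()
```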
| 1 | 2016-09-01T13:00:39Z | [
"python",
"pandas"
] |
AttributeError on Django | 39,271,876 | <p>I'm stuck with (I think) a dummy error on Django that I can't find where it's the fault.</p>
<p>On "catalog/models.py" I have (it connects to a MySQL database):</p>
<pre><code>from django.db import models
class Application(models.Model):
nameApp = models.CharField(max_length=50)
tarification = models.ForeignKey(Tarification)
</code></pre>
<p>Then, I'm using <em>django-tables2</em> (<a href="https://django-tables2.readthedocs.io/en/latest/pages/table-data.html" rel="nofollow">Doc to fill tables</a>) to make tables on Django, so on my <em>tables.py</em> I have:</p>
<pre><code>import django_tables2 as tables
from catalog.models import AppCost, Application, Tarification
class BillTable(tables.Table):
class Meta:
appName = Application.nameApp
userApp = AppCost.userApp
tarifName = Tarification.nameTarif
tarifCost = Tarification.cost
startTime = AppCost.startTime
finishTime = AppCost.finishTime
totalCost = AppCost.totalCost
# add class="paleblue" to <table> tag
attrs = {'class': 'paleblue'}
</code></pre>
<p>And I get an error when I render my website:</p>
<pre><code>type object 'Application' has no attribute 'nameApp'
</code></pre>
<p>On the line <code>appName = Application.nameApp</code> from <em>BillTable</em></p>
<p>But, looking at "Database" window on <em>Pycharm</em> I see the table schema and it's:</p>
<ul>
<li>catalog_application
<ul>
<li>id</li>
<li>tarification_id</li>
<li>nameApp</li>
<li>other stuff</li>
</ul></li>
</ul>
<p>And looking with MySQL Workbench, the schema looks the same. So why am I getting this error?</p>
<p>Regards.</p>
| 0 | 2016-09-01T12:41:39Z | 39,272,899 | <p>You're very confused about how to use django-tables. You need to specify one model in the Meta class, then just the <code>fields</code> attribute to add a list of fields from that model, as strings, to display. You can't just specify fields from three arbitrary models.</p>
| 1 | 2016-09-01T13:29:19Z | [
"python",
"mysql",
"django"
] |
AttributeError on Django | 39,271,876 | <p>I'm stuck with (I think) a dummy error on Django that I can't find where it's the fault.</p>
<p>On "catalog/models.py" I have (it connects to a MySQL database):</p>
<pre><code>from django.db import models
class Application(models.Model):
nameApp = models.CharField(max_length=50)
tarification = models.ForeignKey(Tarification)
</code></pre>
<p>Then, I'm using <em>django-tables2</em> (<a href="https://django-tables2.readthedocs.io/en/latest/pages/table-data.html" rel="nofollow">Doc to fill tables</a>) to make tables on Django, so on my <em>tables.py</em> I have:</p>
<pre><code>import django_tables2 as tables
from catalog.models import AppCost, Application, Tarification
class BillTable(tables.Table):
class Meta:
appName = Application.nameApp
userApp = AppCost.userApp
tarifName = Tarification.nameTarif
tarifCost = Tarification.cost
startTime = AppCost.startTime
finishTime = AppCost.finishTime
totalCost = AppCost.totalCost
# add class="paleblue" to <table> tag
attrs = {'class': 'paleblue'}
</code></pre>
<p>And I get an error when I render my website:</p>
<pre><code>type object 'Application' has no attribute 'nameApp'
</code></pre>
<p>On the line <code>appName = Application.nameApp</code> from <em>BillTable</em></p>
<p>But, looking at "Database" window on <em>Pycharm</em> I see the table schema and it's:</p>
<ul>
<li>catalog_application
<ul>
<li>id</li>
<li>tarification_id</li>
<li>nameApp</li>
<li>other stuff</li>
</ul></li>
</ul>
<p>And looking with MySQL Workbench, the schema looks the same. So why am I getting this error?</p>
<p>Regards.</p>
| 0 | 2016-09-01T12:41:39Z | 39,276,176 | <p>As Daniel Roseman mentioned above, the code you might be looking for is below; it does not need a new model:</p>
<pre><code>import django_tables2 as tables
from catalog.models import AppCost, Application, Tarification
class AppCostTable(tables.Table):
userApp = tables.Column()
startTime = tables.Column()
finishTime = tables.Column()
totalCost = tables.Column()
class Meta:
model = AppCost
class ApplicationTable(tables.Table):
appName = tables.Column(accessor='nameApp')
class Meta:
model = Application
class TarificationTable(tables.Table):
tarifName = tables.Column(accessor='nameTarif')
tarifCost = tables.Column(accessor='cost')
class Meta:
model = Tarification
class BillTable(AppCostTable, ApplicationTable, TarificationTable, tables.Table):
pass
</code></pre>
<p>If you do not mind having another model, then inside your <strong>catalog.models</strong> you can add a new Bill model:</p>
<pre><code>class Bill(models.Model):
application = models.ForeignKey('Application')
appcost = models.ForeignKey('AppCost')
tarification = models.ForeignKey('Tarification')
</code></pre>
<p>In your table file:</p>
<pre><code>from catalog.models import Bill
class BillTable(tables.Table):
appName = tables.Column(accessor='application.nameApp')
tarifName = tables.Column(accessor='tarification.nameTarif')
tarifCost = tables.Column(accessor='tarification.cost')
userApp = tables.Column(accessor='appcost.userApp')
startTime = tables.Column(accessor='appcost.startTime')
finishTime = tables.Column(accessor='appcost.finishTime')
totalCost = tables.Column(accessor='appcost.totalCost')
class Meta:
model = Bill
</code></pre>
| 1 | 2016-09-01T16:02:09Z | [
"python",
"mysql",
"django"
] |
Flask send stream as response | 39,272,072 | <p>I'm trying to "proxy" my Flask server (i will call it Server#01) with another server(Server#02). It's working well except for one thing : when the Server#01 use send_from_directory(), i don't know how to re-send this file. </p>
<p><strong>My classic "proxy"</strong></p>
<pre><code>result = requests.get(my_path_to_server01)
return Response(stream_with_context(result.iter_content()),
content_type = result.headers['Content-Type'])
</code></pre>
<p>With a file as the response it's taking hours... So I tried many things. The one that works is:</p>
<pre><code>result = requests.get(my_path_to_server01, stream=True)
with open('img.png', 'wb') as out_file:
shutil.copyfileobj(result.raw, out_file)
return send_from_directory('./', 'img.png')
</code></pre>
<p>I would like to "redirect" my response (the <code>result</code> variable), or send/copy a stream of my file. Anyway, I don't want to use a physical file, because it doesn't seem like the proper way to me, and I can imagine all the problems that could happen because of that.</p>
| 0 | 2016-09-01T12:51:43Z | 39,274,008 | <p>There should not be any problem with your "classic" proxy other than that it should use <code>stream=True</code>, and specify a <code>chunk_size</code> for <code>response.iter_content()</code>.</p>
<p>By default <code>chunk_size</code> is 1 byte, so the streaming will be very inefficient and consequently very slow. Trying a larger chunk size, e.g. 10K should yield faster transfers. Here's some code for the proxy.</p>
<pre><code>import requests
from flask import Flask, Response, stream_with_context
app = Flask(__name__)
my_path_to_server01 = 'http://localhost:5000/'
@app.route("/")
def streamed_proxy():
r = requests.get(my_path_to_server01, stream=True)
return Response(r.iter_content(chunk_size=10*1024),
content_type=r.headers['Content-Type'])
if __name__ == "__main__":
app.run(port=1234)
</code></pre>
<p>You don't even need to use <code>stream_with_context()</code> here because you don't need access to the request context within the generator returned by <code>iter_content()</code>.</p>
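<p>To see why the default matters, here is a small, network-free illustration (hypothetical numbers, no Flask or requests involved): draining 1 MB in 1-byte chunks takes about a million Python-level iterations, versus roughly a hundred with 10K chunks:</p>

```python
import io

data = b'x' * (1024 * 1024)  # stand-in for a 1 MB response body

def count_chunks(chunk_size):
    """Count how many reads it takes to drain the buffer."""
    buf = io.BytesIO(data)
    reads = 0
    while buf.read(chunk_size):
        reads += 1
    return reads
```

<p><code>count_chunks(1)</code> is 1,048,576 iterations, while <code>count_chunks(10 * 1024)</code> is only 103, which is why the larger <code>chunk_size</code> makes the proxy so much faster.</p>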
| 1 | 2016-09-01T14:17:14Z | [
"python",
"redirect",
"flask",
"proxy"
] |
Validate Credit Card Number Using Luhn Algorithm Python | 39,272,087 | <p>I am trying to implement Luhn algorithm in Python. Here is my code</p>
<pre><code>def validate(n):
if len(str(n)) > 16:
return False
else:
if len(str(n)) % 2 == 0:
for i in str(n[0::2]):
digit = int(str(n[i])) * 2
while digit > 9:
digit = sum(map(int, str(digit)))
dig_sum = sum(map(int, str(n)))
return True if dig_sum % 10 == 0 else False
elif len(str(n)) % 2 != 0:
for i in str(n[1::2]):
digit = int(str(n[i])) * 2
while digit > 9:
digit = sum(map(int, str(digit)))
dig_sum = sum(map(int, str(n)))
return True if dig_sum % 10 == 0 else False
</code></pre>
<p>I keep getting the error</p>
<pre><code>TypeError: 'int' object has no attribute '__getitem__'
</code></pre>
| 1 | 2016-09-01T12:52:29Z | 39,272,155 | <p>It is hard to tell without the complete error message, but it is likely because in some places you mixed up where the indexing goes and where the string conversion goes, for example in <code>for i in str(n[1::2])</code> and <code>digit = int(str(n[i])) * 2</code>: here <code>n</code> is an <code>int</code>, so <code>n[1::2]</code> and <code>n[i]</code> raise before <code>str()</code> is ever applied.</p>
<p>A good way to handle it is to just create a temporary variable <code>n_str = str(n)</code>, and use it instead of str(n) over and over again.</p>
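<p>For illustration, here is one way the whole check could look once that cleanup is done. This is a sketch of a standard Luhn implementation, not a drop-in fix for the exact code above:</p>

```python
def validate(n):
    # Work on a list of digits instead of converting n over and over
    digits = [int(d) for d in str(n)]
    # Double every second digit starting from the right; a doubled digit
    # above 9 has 9 subtracted (equivalent to summing its two digits)
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0
```

<p>With the classic test number <code>79927398713</code> this returns <code>True</code>, and <code>False</code> for <code>79927398710</code>.</p>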
| 1 | 2016-09-01T12:56:18Z | [
"python",
"algorithm"
] |
load_pem_private_key fails with ecdsa key of size 521 | 39,272,161 | <p>I have the following two ECDSA private key for testing.</p>
<pre><code>from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.backends import default_backend
privateECDSA_openssh521 = b"""-----BEGIN EC PRIVATE KEY-----
MIHcAgEBBEIAjn0lSVF6QweS4bjOGP9RHwqxUiTastSE0MVuLtFvkxygZqQ712oZ
ewMvqKkxthMQgxzSpGtRBcmkL7RqZ94+18qgBwYFK4EEACOhgYkDgYYABAFpX/6B
mxxglwD+VpEvw0hcyxVzLxNnMGzxZGF7xmNj8nlF7M+TQctdlR2Xv/J+AgIeVGmB
j2p84bkV9jBzrUNJEACsJjttZw8NbUrhxjkLT/3rMNtuwjE4vLja0P7DMTE0EV8X
f09ETdku/z/1tOSSrSvRwmUcM9nQUJtHHAZlr5Q0fw==
-----END EC PRIVATE KEY------"""
privateECDSA_openssh384 = b"""-----BEGIN EC PRIVATE KEY-----
MIGkAgEBBDAtAi7I8j73WCX20qUM5hhHwHuFzYWYYILs2Sh8UZ+awNkARZ/Fu2LU
LLl5RtOQpbWgBwYFK4EEACKhZANiAATU17sA9P5FRwSknKcFsjjsk0+E3CeXPYX0
Tk/M0HK3PpWQWgrO8JdRHP9eFE9O/23P8BumwFt7F/AvPlCzVd35VfraFT0o4cCW
G0RqpQ+np31aKmeJshkcYALEchnU+tQ=
-----END EC PRIVATE KEY-----"""
</code></pre>
<p>With these keys, <code>load_pem_private_key(privateECDSA_openssh384, None, default_backend())</code> works fine, but if I do <code>load_pem_private_key(privateECDSA_openssh521, None, default_backend())</code></p>
<p>I get the following error </p>
<pre><code> load_pem_private_key(privateECDSA_openssh521, None, default_backend())
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/primitives/serialization.py", line 20, in load_pem_private_key
return backend.load_pem_private_key(data, password)
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/backends/multibackend.py", line 282, in load_pem_private_key
return b.load_pem_private_key(data, password)
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 1606, in load_pem_private_key
password,
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 1802, in _load_key
self._handle_key_loading_error()
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 1874, in _handle_key_loading_error
raise ValueError("Could not unserialize key data.")
ValueError: Could not unserialize key data.
</code></pre>
<p>I don't understand what is the problem.</p>
| 2 | 2016-09-01T12:56:33Z | 39,293,047 | <p>I tried to load your data and got the following error string: <code>b'bad end line'</code></p>
<p>The footer line of <code>privateECDSA_openssh521</code> has six dashes instead of five. Just fix it:</p>
<pre><code>>>> privateECDSA_openssh521 = b"""-----BEGIN EC PRIVATE KEY-----
... MIHcAgEBBEIAjn0lSVF6QweS4bjOGP9RHwqxUiTastSE0MVuLtFvkxygZqQ712oZ
... ewMvqKkxthMQgxzSpGtRBcmkL7RqZ94+18qgBwYFK4EEACOhgYkDgYYABAFpX/6B
... mxxglwD+VpEvw0hcyxVzLxNnMGzxZGF7xmNj8nlF7M+TQctdlR2Xv/J+AgIeVGmB
... j2p84bkV9jBzrUNJEACsJjttZw8NbUrhxjkLT/3rMNtuwjE4vLja0P7DMTE0EV8X
... f09ETdku/z/1tOSSrSvRwmUcM9nQUJtHHAZlr5Q0fw==
... -----END EC PRIVATE KEY-----"""
>>> load_pem_private_key(privateECDSA_openssh521, None, default_backend())
</code></pre>
<p>returns:</p>
<pre><code><cryptography.hazmat.backends.openssl.ec._EllipticCurvePrivateKey object at 0x109cda128>
</code></pre>
| 2 | 2016-09-02T12:58:39Z | [
"python",
"cryptography"
] |
Python - Inner Class Not Found | 39,272,195 | <p>(Sorry I'm new to Python)</p>
<p>I'm running a python django script as follows:</p>
<p><code>python3 manage.py test</code></p>
<hr>
<pre><code>class Command(NoArgsCommand):
help = 'Help Test'
def handle(self, **options):
gs = self.create_goalscorer(1,"Headed")
class GoalScorerX(object):
id = 0
goal_type = ""
#Constructor
def __init__(self, id, goal_type):
self.id= id
self.goal_type = goal_type
def create_goalscorer(self,id,goal_type):
gs = GoalScorerX(id, goal_type)
return gs
</code></pre>
<p>But I get the error that it can't be found?</p>
<pre><code> gs = GoalScorerX(id, goal_type)
NameError: name 'GoalScorerX' is not defined
</code></pre>
| 0 | 2016-09-01T12:58:27Z | 39,272,262 | <p>Inner classes are very rarely useful in Python. Certainly here there is nothing to be gained by making GoalScorerX an inner class. Move it outside; also note that there is no restriction in Python in the number of classes in a file, so it's fine to have them both as top-level classes.</p>
<p>(Note, you <em>could</em> fix this by referring to the inner class as Command.GoalScorerX - but don't do that.)</p>
| 2 | 2016-09-01T13:01:25Z | [
"python",
"django"
] |
How to get the IP address of the request to a Heroku app? | 39,272,216 | <p>Heroku has a routing system to forward requests to the dynos. My application needs to know where the request came from, but it always gets random addresses on an internal network, probably Heroku's internals.</p>
<p>And I see that in the logs, it (Heroku's router) gets my IP address and forwards the request. Is there a way to get the actual IP address of a request?</p>
<p>My application is written in Python, using Flask</p>
| 1 | 2016-09-01T12:59:45Z | 39,273,565 | <p>Checking Flask's documentation on filtering headers etc., I found that:</p>
<pre><code>request.headers['X-Forwarded-For']
</code></pre>
<p>is where you'll get the client's real IP address.</p>
<hr>
<p>From a deleted comment by OP, this article provides a <a href="http://esd.io/blog/flask-apps-heroku-real-ip-spoofing.html" rel="nofollow">safer solution</a>.</p>
| 0 | 2016-09-01T13:59:40Z | [
"python",
"heroku",
"flask",
"ip-address"
] |
Local Maxima with circular window | 39,272,267 | <p>I am trying to compute a local maxima filter on a matrix, using a circular kernel.
The output should be the cells that are local maxima. For each pixel in the input 'data', I need to check whether it is a local maximum within a circular window, returning 1 if so and 0 otherwise.</p>
<p>I have this code, built upon answers from here:
<a href="http://stackoverflow.com/questions/8647024/how-to-apply-a-disc-shaped-mask-to-a-numpy-array">How to apply a disc shaped mask to a numpy array?</a></p>
<pre><code>import numpy as np
import scipy.ndimage as sc
radius = 2
kernel = np.zeros((2*radius+1, 2*radius+1))
y,x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask2 = x**2 + y**2 <= radius**2
kernel[mask2] = 1
def local_maxima(matrix, window_size):
loc_max = sc.maximum_filter(matrix, window_size, mode='constant')
return loc_max
data = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 4, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1)])
loc_max = sc.filters.generic_filter(data, local_maxima(data, np.shape(kernel)), footprint=kernel)
max_matrix = np.where(loc_max == data, 1, 0)
np.savetxt('.....\Local\Test_Local_Max.txt', max_matrix, delimiter='\t')
</code></pre>
<p>The kernel has this shape:</p>
<pre><code>[[ 0. 0. 1. 0. 0.]
[ 0. 1. 1. 1. 0.]
[ 1. 1. 1. 1. 1.]
[ 0. 1. 1. 1. 0.]
[ 0. 0. 1. 0. 0.]]
</code></pre>
<p>So the search cells will be only the ones that have value 1. The cells with 0 should be excluded from the local maxima search.</p>
<p>But the script gives the error below on line 21:</p>
<blockquote>
<pre><code>RuntimeError: function parameter is not callable
</code></pre>
</blockquote>
<p>Thanks for any help!</p>
| 1 | 2016-09-01T13:01:39Z | 39,273,614 | <p>The second parameter of <code>sc.filters.generic_filter()</code> should be a function, you are passing it the value returned by the <code>local_maxima(data, np.shape(kernel))</code> call, i.e. a matrix.</p>
<p>I'm a bit confused as to what exactly you have done here, but I think you do not need the <code>generic_filter</code> call at all, <code>maximum_filter</code> should do what you want:</p>
<pre><code>import numpy as np
import scipy.ndimage as sc
radius = 2
kernel = np.zeros((2*radius+1, 2*radius+1))
y,x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask2 = x**2 + y**2 <= radius**2
kernel[mask2] = 1
data = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 4, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1)])
loc_max = sc.maximum_filter(data, footprint=kernel, mode='constant')
max_matrix = np.where(loc_max == data, 1, 0)
np.savetxt('.....\Local\Test_Local_Max.txt', max_matrix, delimiter='\t')
</code></pre>
<p>(I do not have python installed on this computer so I have not tested this out, sorry)</p>
<p>Edit:
I've tested it and it seems to give the correct result:</p>
<pre><code>[[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 1, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 1, 0, 0, 1, 1],
[1, 1, 1, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1]]
</code></pre>
| 1 | 2016-09-01T14:01:24Z | [
"python",
"numpy",
"filtering"
] |
Local Maxima with circular window | 39,272,267 | <p>I am trying to compute a local maxima filter on a matrix, using a circular kernel.
The output should be the cells that are local maxima. For each pixel in the input 'data', I need to check whether it is a local maximum within a circular window, returning 1 if so and 0 otherwise.</p>
<p>I have this code, built upon answers from here:
<a href="http://stackoverflow.com/questions/8647024/how-to-apply-a-disc-shaped-mask-to-a-numpy-array">How to apply a disc shaped mask to a numpy array?</a></p>
<pre><code>import numpy as np
import scipy.ndimage as sc
radius = 2
kernel = np.zeros((2*radius+1, 2*radius+1))
y,x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask2 = x**2 + y**2 <= radius**2
kernel[mask2] = 1
def local_maxima(matrix, window_size):
loc_max = sc.maximum_filter(matrix, window_size, mode='constant')
return loc_max
data = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 4, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1)])
loc_max = sc.filters.generic_filter(data, local_maxima(data, np.shape(kernel)), footprint=kernel)
max_matrix = np.where(loc_max == data, 1, 0)
np.savetxt('.....\Local\Test_Local_Max.txt', max_matrix, delimiter='\t')
</code></pre>
<p>The kernel has this shape:</p>
<pre><code>[[ 0. 0. 1. 0. 0.]
[ 0. 1. 1. 1. 0.]
[ 1. 1. 1. 1. 1.]
[ 0. 1. 1. 1. 0.]
[ 0. 0. 1. 0. 0.]]
</code></pre>
<p>So the search cells will be only the ones that have value 1. The cells with 0 should be excluded from the local maxima search.</p>
<p>But the script gives the error below on line 21:</p>
<blockquote>
<pre><code>RuntimeError: function parameter is not callable
</code></pre>
</blockquote>
<p>Thanks for any help!</p>
| 1 | 2016-09-01T13:01:39Z | 39,288,450 | <p>You can use the code below that return <code>1</code> if the cell visited is a local maximum by a circular window defined by <code>kernel</code> (I just used <code>%pylab</code> to plot the results as an illustration):</p>
<pre><code>%pylab
import scipy.ndimage as sc
data = np.array([(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 4, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 1, 1, 1),
(1, 1, 1, 1, 1, 1, 1, 1, 1)])
matshow(data)
colorbar()
</code></pre>
<p><a href="http://i.stack.imgur.com/afEJV.png" rel="nofollow"><img src="http://i.stack.imgur.com/afEJV.png" alt="data"></a></p>
<pre><code>radius = 2
kernel = np.zeros((2*radius+1, 2*radius+1))
y,x = np.ogrid[-radius:radius+1, -radius:radius+1]
mask2 = x**2 + y**2 <= radius**2
kernel[mask2] = 1
matshow(kernel)
colorbar()
</code></pre>
<p><a href="http://i.stack.imgur.com/mZdRZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/mZdRZ.png" alt="kernel"></a></p>
<pre><code>def filter_func(a):
    return a[len(a) // 2] == a.max()  # centre of the footprint; // keeps the index an int on Python 3
out = sc.generic_filter(data, filter_func, footprint=kernel)
matshow(out)
colorbar()
</code></pre>
<p><a href="http://i.stack.imgur.com/jHdsn.png" rel="nofollow"><img src="http://i.stack.imgur.com/jHdsn.png" alt="output"></a></p>
<p>Below is the result with a random input data array:</p>
<pre><code>data = np.random.random(size=data.shape)
matshow(data)
</code></pre>
<p><a href="http://i.stack.imgur.com/CBVgO.png" rel="nofollow"><img src="http://i.stack.imgur.com/CBVgO.png" alt="random array"></a></p>
<pre><code>out = sc.generic_filter(data, filter_func, footprint=kernel)
matshow(out)
colorbar()
</code></pre>
<p><a href="http://i.stack.imgur.com/Eh65W.png" rel="nofollow"><img src="http://i.stack.imgur.com/Eh65W.png" alt="output on random array"></a></p>
| 1 | 2016-09-02T09:03:51Z | [
"python",
"numpy",
"filtering"
] |
pandas row operation to keep only the right most non zero value per row | 39,272,271 | <p>How to keep the rightmost number in each row of a dataframe?</p>
<pre><code>a = [[1, 2, 0], [1, 3, 0], [1, 0, 0]]
df = pd.DataFrame(a, columns=['col1','col2','col3'])
df
col1 col2 col3
row0 1 2 NaN
row1 1 3 0
row2 1 0 0
</code></pre>
<p>Then after transformation</p>
<pre><code> col1 col2 col3
row0 0 2 0
row1 0 3 0
row2 1 0 0
</code></pre>
<p>Based on the suggestion by <a href="http://stackoverflow.com/users/3293881/divakar">divakar</a> I've come up with the following:</p>
<pre><code>import pandas as pd
a = [[1, 2, 0, None],
[1, 3, 0,0],
[1, 0, 0,0],
[1, 0, 0,0],
[1, 0, 0,0],
[0, 0, 0,1]]
df = pd.DataFrame(a, columns=['col1','col2','col3','col4'])
df.fillna(value=0,inplace=True) # Get rid of non numeric items
a
[[1, 2, 0, None],
[1, 3, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[0, 0, 0, 1]]
# Return index of first occurrence of maximum over requested axis.
# 0 or 'index' for row-wise, 1 or 'columns' for column-wise
df.idxmax(1)
0 col2
1 col2
2 col1
3 col1
4 col1
5 col4
dtype: object
</code></pre>
<p>Create a matrix to mask values</p>
<pre><code>numberOfRows = df.shape[0]
df_mask= pd.DataFrame(columns=df.columns,index=np.arange(0, numberOfRows))
df_mask.fillna(value=0,inplace=True) # Get rid of non numeric items
# Add mask entries
for row,col in enumerate(df.idxmax(1)):
df_mask.loc[row,col] = 1
df_result=df*df_mask
df_result
col1 col2 col3 col4
0 0 2 0 0.0
1 0 3 0 0.0
2 1 0 0 0.0
3 1 0 0 0.0
4 1 0 0 0.0
5 0 0 0 1.0
</code></pre>
| 2 | 2016-09-01T13:01:44Z | 39,272,962 | <p>Here is a workaround that requires the use of helper functions:</p>
<pre><code>import pandas as pd
#Helper functions
def last_number(lst):
if all(map(lambda x: x == 0, lst)):
return 0
elif lst[-1] != 0:
return len(lst)-1
else:
return last_number(lst[:-1])
def fill_others(lst):
new_lst = [0]*len(lst)
new_lst[last_number(lst)] = lst[last_number(lst)]
return new_lst
#Data
a = [[1, 2, 0], [1, 3, 0], [1, 0, 0]]
df = pd.DataFrame(a, columns=['col1','col2','col3'])
df.fillna(0, inplace = True)
print df
col1 col2 col3
0 1 2 0
1 1 3 0
2 1 0 0
#Application
print df.apply(lambda x: fill_others(x.values.tolist()), axis=1)
col1 col2 col3
0 0 2 0
1 0 3 0
2 1 0 0
</code></pre>
<p>As their names suggest, the functions get the last number in a given row and fill the other values with zeros.</p>
<p>I hope this helps.</p>
| 2 | 2016-09-01T13:32:00Z | [
"python",
"pandas",
"dataframe"
] |
pandas row operation to keep only the right most non zero value per row | 39,272,271 | <p>How to keep the rightmost number in each row of a dataframe?</p>
<pre><code>a = [[1, 2, 0], [1, 3, 0], [1, 0, 0]]
df = pd.DataFrame(a, columns=['col1','col2','col3'])
df
col1 col2 col3
row0 1 2 NaN
row1 1 3 0
row2 1 0 0
</code></pre>
<p>Then after transformation</p>
<pre><code> col1 col2 col3
row0 0 2 0
row1 0 3 0
row2 1 0 0
</code></pre>
<p>Based on the suggestion by <a href="http://stackoverflow.com/users/3293881/divakar">divakar</a> I've come up with the following:</p>
<pre><code>import pandas as pd
a = [[1, 2, 0, None],
[1, 3, 0,0],
[1, 0, 0,0],
[1, 0, 0,0],
[1, 0, 0,0],
[0, 0, 0,1]]
df = pd.DataFrame(a, columns=['col1','col2','col3','col4'])
df.fillna(value=0,inplace=True) # Get rid of non numeric items
a
[[1, 2, 0, None],
[1, 3, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[0, 0, 0, 1]]
# Return index of first occurrence of maximum over requested axis.
# 0 or 'index' for row-wise, 1 or 'columns' for column-wise
df.idxmax(1)
0 col2
1 col2
2 col1
3 col1
4 col1
5 col4
dtype: object
</code></pre>
<p>Create a matrix to mask values</p>
<pre><code>numberOfRows = df.shape[0]
df_mask= pd.DataFrame(columns=df.columns,index=np.arange(0, numberOfRows))
df_mask.fillna(value=0,inplace=True) # Get rid of non numeric items
# Add mask entries
for row,col in enumerate(df.idxmax(1)):
df_mask.loc[row,col] = 1
df_result=df*df_mask
df_result
col1 col2 col3 col4
0 0 2 0 0.0
1 0 3 0 0.0
2 1 0 0 0.0
3 1 0 0 0.0
4 1 0 0 0.0
5 0 0 0 1.0
</code></pre>
| 2 | 2016-09-01T13:01:44Z | 39,273,030 | <p>Working at NumPy level, here's one vectorized approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p>
<pre><code>np.where(((a!=0).cumsum(1).argmax(1))[:,None] == np.arange(a.shape[1]),a,0)
</code></pre>
<p>Sample run -</p>
<pre><code>In [7]: a # NumPy array
Out[7]:
array([[1, 2, 0],
[1, 3, 0],
[1, 0, 0]])
In [8]: np.where(((a!=0).cumsum(1).argmax(1))[:,None] == np.arange(a.shape[1]),a,0)
Out[8]:
array([[0, 2, 0],
[0, 3, 0],
[1, 0, 0]])
</code></pre>
<hr>
<p>Porting it to <code>pandas</code>, we would have an implementation like so -</p>
<pre><code>idx = (df!=0).values.cumsum(1).argmax(1)
df_out = df*(idx[:,None] == np.arange(df.shape[1]))
</code></pre>
<p>Sample run -</p>
<pre><code>In [19]: df
Out[19]:
col1 col2 col3 col4
0 1 2 0 0.0
1 1 3 0 0.0
2 2 2 2 0.0
3 1 0 0 0.0
4 1 0 0 0.0
5 0 0 0 1.0
In [20]: idx = (df!=0).values.cumsum(1).argmax(1)
In [21]: df*(idx[:,None] == np.arange(df.shape[1]))
Out[21]:
col1 col2 col3 col4
0 0 2 0 0.0
1 0 3 0 0.0
2 0 0 2 0.0
3 1 0 0 0.0
4 1 0 0 0.0
5 0 0 0 1.0
</code></pre>
| 2 | 2016-09-01T13:35:03Z | [
"python",
"pandas",
"dataframe"
] |
pandas row operation to keep only the right most non zero value per row | 39,272,271 | <p>How to keep the rightmost number in each row of a dataframe?</p>
<pre><code>a = [[1, 2, 0], [1, 3, 0], [1, 0, 0]]
df = pd.DataFrame(a, columns=['col1','col2','col3'])
df
col1 col2 col3
row0 1 2 NaN
row1 1 3 0
row2 1 0 0
</code></pre>
<p>Then after transformation</p>
<pre><code> col1 col2 col3
row0 0 2 0
row1 0 3 0
row2 1 0 0
</code></pre>
<p>Based on the suggestion by <a href="http://stackoverflow.com/users/3293881/divakar">divakar</a> I've come up with the following:</p>
<pre><code>import pandas as pd
a = [[1, 2, 0, None],
[1, 3, 0,0],
[1, 0, 0,0],
[1, 0, 0,0],
[1, 0, 0,0],
[0, 0, 0,1]]
df = pd.DataFrame(a, columns=['col1','col2','col3','col4'])
df.fillna(value=0,inplace=True) # Get rid of non numeric items
a
[[1, 2, 0, None],
[1, 3, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[0, 0, 0, 1]]
# Return index of first occurrence of maximum over requested axis.
# 0 or 'index' for row-wise, 1 or 'columns' for column-wise
df.idxmax(1)
0 col2
1 col2
2 col1
3 col1
4 col1
5 col4
dtype: object
</code></pre>
<p>Create a matrix to mask values</p>
<pre><code>numberOfRows = df.shape[0]
df_mask= pd.DataFrame(columns=df.columns,index=np.arange(0, numberOfRows))
df_mask.fillna(value=0,inplace=True) # Get rid of non numeric items
# Add mask entries
for row,col in enumerate(df.idxmax(1)):
df_mask.loc[row,col] = 1
df_result=df*df_mask
df_result
col1 col2 col3 col4
0 0 2 0 0.0
1 0 3 0 0.0
2 1 0 0 0.0
3 1 0 0 0.0
4 1 0 0 0.0
5 0 0 0 1.0
</code></pre>
| 2 | 2016-09-01T13:01:44Z | 39,273,276 | <p>You can fill null values "from the left", and then take the values of the resulting last column:</p>
<pre><code>In [49]: df.fillna(axis=0, method='bfill')['col3']
Out[49]:
0 0.0
1 0.0
2 0.0
Name: col3, dtype: float64
</code></pre>
<p><strong>Full Example</strong></p>
<pre><code>In [50]: a = [[1, 2, None], [1, 3, 0], [0, 0, 0]]
In [51]: df = pd.DataFrame(a, columns=['col1','col2','col3'])
In [52]: df.fillna(axis=0, method='bfill')['col3']
Out[52]:
0 0.0
1 0.0
2 0.0
Name: col3, dtype: float64
</code></pre>
| 1 | 2016-09-01T13:46:43Z | [
"python",
"pandas",
"dataframe"
] |
Unable to locate popup with Chrome in selenium | 39,272,283 | <p>Before marking this as duplicate, please read the points below, as I have tried all the possible solutions given:</p>
<ol>
<li>Tried switching to alert and accepting it & dismissing it. It gets stuck at the accept statement.</li>
<li>Tried sending the ENTER/RETURN key to the popup. Nothing happens</li>
<li>Tried sending ENTER/RETURN key to the window. Nothing happens</li>
<li>Tried printing browser url without switching to alert. Throws an exception of unexpectedalertpresentexception.</li>
<li>Tried printing the text of the alert. Returns None.</li>
<li>Caught the exception and tried printing the alert_text, returns none. Tried printing msg, returns the popup text.</li>
</ol>
<p>Code Below:</p>
<pre><code> browser.get("https://www.cnm.att.com/emfe/QNCreateTicket?reportTbl=0&tblValue=DHEC297003811&stateCodeValue=&serviceidtypeValue=&trunkgrpandmessageValue=&testFocusControlID1=idfield1&tblCktValue=DHEC297003811&searchStringForASE=%25DHEC297003811%25&oorInd=null&ASEState=null&isASE_ADEinEMDB=&isASE_ADEinEMDBNotProvisioned=&aseflow=&adeflow=&isASEinEBTA=&searchStringForEditForASE=S%3A2%3ADHEC297003811&button=&clci=&qnfromscreen=quick+navigate&phone=&isProvForTesting=1&isProvForCCA=1&isProvForIP=0&isProvForPhone=1&fromtransporttfn=yes&transporttfn=&circuit_format=&state=&ccid=&cac=&isValidPhoneForUser=&isPOTSDataFound=&localPhoneInd=&circuitId=")
Submit_Ticket = browser.find_element_by_xpath('//*[@id="bottomsec"]/center/table/tbody/tr[2]/td[2]/a/img')
Submit_Ticket.click()
time.sleep(3)
try:
print browser.title
except UnexpectedAlertPresentException as e:
print "exception "+repr(e)
print "msg"+e.msg
print browser.current_window_handle
b = browser.switch_to.window(browser.current_window_handle)
print b
b.send_keys(Keys.RETURN,Keys.ENTER)
</code></pre>
<p>Output:</p>
<pre><code> exception UnexpectedAlertPresentException()
msgunexpected alert open: {Alert text : Before submitting this ticket, you must select the following fields:
Trouble Type
Outage Condition
Do you have Power to your Equipment
Authorize Testing
Service Impact}
(Session info: chrome=52.0.2743.116)
(Driver info: chromedriver=2.21.371459 (36d3d07f660ff2bc1bf28a75d1cdabed0983e7c4),platform=Windows NT 6.1 SP1 x86)
CDwindow-be1b9858-ddb0-4470-9c11-5f1420a94c82
None
Traceback (most recent call last):
File "C:\Python27\alert checking.py", line 72, in <module>
b.send_keys(Keys.RETURN,Keys.ENTER)
AttributeError: 'NoneType' object has no attribute 'send_keys'
</code></pre>
<p><a href="http://i.stack.imgur.com/sl11Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/sl11Q.png" alt="Popup Alert"></a></p>
<p>When i click on the popup only then the code executes further.</p>
<p><a href="http://i.stack.imgur.com/XDAuj.png" rel="nofollow"><img src="http://i.stack.imgur.com/XDAuj.png" alt="Submit Element"></a></p>
| 0 | 2016-09-01T13:02:30Z | 39,274,619 | <p>Resolved it by upgrading the Chrome Driver to version 2.22, earlier it was 2.21 which is buggy in handling javascript popups.</p>
| 0 | 2016-09-01T14:44:29Z | [
"python",
"selenium-webdriver",
"popup",
"selenium-chromedriver"
] |
ImportError for a custom SimpleTestCase child using python manage.py test app | 39,272,288 | <p>I want to use a generic custom TestCase for my_app that is currently running fine. I have the following simplified directory architecture :</p>
<pre><code>my_app
âââ tests
â âââ __init__.py
â âââ test_views
â âââ __init__.py
â âââ custom_test.py
â âââ registration
â â âââ __init__.py
â â âââ test_login.py
âââ views
|ââ __init__.py
âââ registration
â âââ __init__.py
â âââ login.py
</code></pre>
<p>The CustomTest class look like this :</p>
<pre><code>from django.test import SimpleTestCase
from django.test.client import Client
class CustomTest(SimpleTestCase):
def setUp(self):
self.client = Client()
</code></pre>
<p>The test I want to launch is the following one :</p>
<pre><code>from tests.test_views.custom_test import CustomTest
class Test_Login(CustomTest):
def test_thing(self):
self.assertTrue(True)
</code></pre>
<p>The 'my_app' directory is on the pythonpath and there is an <code>__init__.py</code> file for every module. Django can find errors in test_login.py (for example if I change CustomTest to something that does not exist), so test_login.py is read before my real problem happens and it imports and uses CustomTest successfully. But when I want to launch the test with:</p>
<pre><code>python manage.py test my_app
</code></pre>
<p>I get the following error : </p>
<pre><code>ImportError: Failed to import test module: my_app.tests.test_views.registration.test_login
Traceback (most recent call last):
File "/usr/lib/python2.7/unittest/loader.py", line 254, in _find_tests
module = self._get_module_from_name(name)
File "/usr/lib/python2.7/unittest/loader.py", line 232, in _get_module_from_name
__import__(name)
File "~/workspace/my_app/my_app/tests/test_views/registration/test_login.py", line 7, in <module>
from tests.tests_views.my_app_test.custom_test import CustomTest
ImportError: No module named tests_views.my_app_test.custom_test
</code></pre>
<p>Any idea ?</p>
| 2 | 2016-09-01T13:02:50Z | 39,272,964 | <p>put the application name on the left of import:</p>
<pre><code>from myapp.tests.test_views.custom_test import CustomTest
</code></pre>
| 2 | 2016-09-01T13:32:02Z | [
"python",
"django",
"testing"
] |
Derive from C++ base class in SWIGged Python | 39,272,413 | <p>Note: The corresponding gist is <a href="https://gist.github.com/nschloe/3d8bc8a22a1bea81237c0db2c4af7a1f" rel="nofollow">here</a>.</p>
<hr>
<p>I have an abstract base class and a method that accepts a pointer to the base class, e.g.,</p>
<pre><code>#ifndef MYTEST_HPP
#define MYTEST_HPP
#include <iostream>
#include <memory>
class MyBaseClass {
public:
virtual
double
eval(const double x) const = 0;
};
class Square: public MyBaseClass {
public:
virtual
double
eval(const double x) const
{
return x*x;
}
};
void
mytest(const std::shared_ptr<MyBaseClass> & a) {
std::cout << a->eval(1.0) << std::endl;
std::cout << a->eval(2.0) << std::endl;
std::cout << a->eval(3.0) << std::endl;
}
#endif // MYTEST_HPP
</code></pre>
<p>After SWIGging this with</p>
<pre><code>%module mytest
%{
#define SWIG_FILE_WITH_INIT
#include "mytest.hpp"
%}
%include <std_shared_ptr.i>
%shared_ptr(MyBaseClass);
%shared_ptr(Square);
%include "mytest.hpp"
</code></pre>
<p>I can create <code>Square</code> instances and feed them into <code>mytest</code> from within Python, e.g.,</p>
<pre><code>import mytest
a = mytest.Square()
mytest.mytest(a)
</code></pre>
<p>As expected, this will print</p>
<pre><code>1.0
4.0
9.0
</code></pre>
<p>I'd now like to derive more classes from <code>MyBaseClass</code>, but from Python. Unfortunately, simply doing</p>
<pre><code>class Cube(mytest.MyBaseClass):
def __init__(self):
return
def eval(self, x):
return x*x*x
c = Cube()
mytest.mytest(c)
</code></pre>
<p>results in the error</p>
<pre><code>Traceback (most recent call last):
File "../source/test.py", line 14, in <module>
mytest.mytest(c)
TypeError: in method 'mytest', argument 1 of type 'std::shared_ptr< MyBaseClass > const &'
</code></pre>
<p>Any hints?</p>
| 1 | 2016-09-01T13:09:30Z | 39,273,055 | <p>Got it (via <a href="http://stackoverflow.com/a/9042139/353337">http://stackoverflow.com/a/9042139/353337</a>):</p>
<p>Add the director feature to <code>MyBaseClass</code></p>
<pre><code>%module(directors="1") mytest
%{
#define SWIG_FILE_WITH_INIT
#include "mytest.hpp"
%}
%include <std_shared_ptr.i>
%shared_ptr(MyBaseClass);
%shared_ptr(Square);
%feature("director") MyBaseClass;
%include "mytest.hpp"
</code></pre>
<p>and properly initialize the class in Python</p>
<pre><code>class Cube(mytest.MyBaseClass):
def __init__(self):
mytest.MyBaseClass.__init__(self)
return
def eval(self, x):
return x*x*x
</code></pre>
| 1 | 2016-09-01T13:36:06Z | [
"python",
"c++",
"swig"
] |
uWSGI-Django spending too much time on poll method | 39,272,445 | <p>I am using an Nginx-uWSGI combination for my Django project, but the performance is sub-par compared with the Nginx-Apache-mod_wsgi combination. Apparently uWSGI was taking about 3-5 seconds to respond to requests that should be served in about 300-400ms at most.</p>
<p>When I ran profiling, I realized most of the time is being spend in <code>response.render</code> function in uWSGI Handler. Here are the profiling results -</p>
<p><a href="http://i.stack.imgur.com/UvHV6.png" rel="nofollow"><img src="http://i.stack.imgur.com/UvHV6.png" alt="Profiling result"></a></p>
<p>I am unable to figure out why <code>method poll</code> is consuming more than 90% of the time, even when there is just one request on the server.</p>
<p>Here is my uwsgi configuration</p>
<pre><code>[uwsgi]
uid = www-data
gid = www-data
listen = 10000
socket-timeout = 60
socket-send-timeout = 60
socket-write-timeout = 60
set-placeholder = username=sysadmin
set-placeholder = project_directory=/home/sysadmin/builds/deploy_dir/qa_shine
set-placeholder = ruby_shims_path=/home/sysadmin/.rbenv/shims
socket = /run/%n.sock
chmod-socket = 666
chdir = %(project_directory)
pidfile = /run/%n.pid
wsgi-file = deploy/uwsgi.py
master = true
processes = 1
threads = 1
harakiri = 160
virtualenv = /home/%(username)/Envs/candidate/
stats = 127.0.0.1:9191
vacuum = true
die-on-term = true
daemonize = /var/log/uwsgi/%n.log
env = LANG=en_US.UTF-8
auto-procname = true
env = PATH=%(ruby_shims_path):$(PATH)
env = RUBYPATH=%(ruby_shims_path)/ruby
rbrequire = rubygems
</code></pre>
<p>My nginx upstream configuration</p>
<pre><code>location / {
uwsgi_pass django;
include uwsgi_params;
proxy_buffering off;
proxy_buffers 128 128k;
proxy_buffer_size 128k;
proxy_temp_path /run/ 1 2;
uwsgi_read_timeout 60;
uwsgi_send_timeout 60;
uwsgi_connect_timeout 60;
send_timeout 60;
}
# basic conf
events {
worker_connections 8192;
use epoll;
multi_accept on;
}
</code></pre>
<p>I'm completely lost right now, any help would be appreciated.</p>
| 1 | 2016-09-01T13:11:23Z | 39,483,433 | <p>Finally I fixed the issue. It turned out that I had offline compression disabled while using <code>sass</code> files, so they had to be compiled into <code>css</code> at runtime, which caused a lot of IO. Once I enabled offline compression, response time fell back to 200ms.</p>
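<p>For reference, assuming django-compressor was the library doing the SCSS compilation (the answer does not name it, so treat this as an assumption), offline compression amounts to two settings plus a deploy-time step:</p>

```python
# settings.py -- django-compressor settings (assumed library)
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
# then run `python manage.py compress` at deploy time so the sass -> css
# step happens once, instead of doing IO-heavy compilation per request
```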
| 0 | 2016-09-14T06:00:15Z | [
"python",
"django",
"sockets",
"nginx",
"uwsgi"
] |
make np.vectorize return scalar value on scalar input | 39,272,465 | <p>The following code returns an array instead of the expected float value.</p>
<pre><code>def f(x):
return x+1
f = np.vectorize(f, otypes=[np.float])
>>> f(10.5)
array(11.5)
</code></pre>
<p>Is there a way to force it to return a simple scalar value, not this weird array type, if the input is scalar?</p>
<p>I find it weird that it doesn't do this by default, given that all other ufuncs like np.cos, np.sin etc. do return regular scalars.</p>
<p><strong>Edit</strong>:
This is the code that works:</p>
<pre><code>import numpy as np
import functools
def as_scalar_if_possible(func):
@functools.wraps(func) #this is here just to preserve signature
def wrapper(*args, **kwargs):
return func(*args, **kwargs)[()]
return wrapper
@as_scalar_if_possible
@np.vectorize
def f(x):
return x + 1
</code></pre>
<p><code>print(f(11.5))  # prints 12.5</code></p>
| 1 | 2016-09-01T13:12:16Z | 39,273,064 | <p>The result is technically a 0-d array (its shape is <code>()</code>), not a plain Python float. For instance, <code>np.array(11.5)[0]</code> is not a valid operation and will raise an exception. Still, the returned result will act as a scalar in most circumstances.</p>
<p>eg.</p>
<pre><code>x = np.array(11.5)
print(x + 1) # prints 12.5
print(x < 12) # prints True, rather than [ True]
x[0] # raises IndexError
</code></pre>
<p>If you want to get a "proper" scalar value back then you can just wrap the vectorised function to check the shape of the returned array. This is what numpy ufuncs do behind the scenes.</p>
<p>eg.</p>
<pre><code>import numpy as np
def as_scalar_if_possible(func):
def wrapper(arr):
arr = func(arr)
return arr if arr.shape else np.asscalar(arr)
return wrapper
@as_scalar_if_possible
@np.vectorize
def f(x):
return x + 1
print(f(11.5)) # prints 12.5
</code></pre>
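<p>To see the 0-d behaviour concretely, here is a small runnable sketch of the <code>[()]</code> trick from the question's edit (empty-tuple indexing extracts the scalar from a 0-d array):</p>

```python
import numpy as np

f = np.vectorize(lambda x: x + 1, otypes=[float])

res = f(10.5)
print(res.shape)        # () -- a 0-d array, not a plain float
scalar = res[()]        # indexing with an empty tuple extracts the scalar
print(scalar, type(scalar))
```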
| 1 | 2016-09-01T13:36:36Z | [
"python",
"numpy",
"numpy-ufunc"
] |
Average Time difference in pandas | 39,272,470 | <p>I am trying to calculate the average time difference, in hours/minutes/seconds, grouped by a field - in my example, for each distinct IP address - plus a column containing the row count for each IP.</p>
<p>My dataframe looks like:</p>
<pre><code>date ipAddress
2016-08-08 00:39:00 98.249.244.22
2016-08-08 13:03:00 98.249.244.22
2016-08-20 21:37:00 98.211.135.179
2016-08-21 16:11:00 98.211.135.179
2016-08-21 16:19:00 98.211.135.179
2016-08-25 01:30:00 98.248.215.244
</code></pre>
<p>My desired output:</p>
<pre><code>ipAddress avg_time_diff count
98.249.244.22 avg_diff_1 2
98.211.135.179 avg_diff_2 3
98.248.215.244 0 1
</code></pre>
<p>A reproducible df:</p>
<pre><code>{u'date': {3233: Timestamp('2016-08-08 00:39:00'),
3551: Timestamp('2016-08-08 13:03:00'),
349036: Timestamp('2016-08-20 21:37:00'),
349040: Timestamp('2016-08-21 16:11:00'),
349049: Timestamp('2016-08-21 16:19:00'),
378843: Timestamp('2016-08-25 01:30:00')},
u'ipAddress': {3233: u'98.249.244.22',
3551: u'98.249.244.22',
49036: u'98.211.135.179',
349040: u'98.211.135.179',
349049: u'98.211.135.179',
378843: u'98.248.215.244'}}
</code></pre>
<p>I have no clue where to start; I tried timediff but I am not sure I've understood how it works, or how to iterate over rows as a "window function".
Thanks in advance</p>
| 2 | 2016-09-01T13:12:29Z | 39,272,783 | <p>See <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-different-functions-to-dataframe-columns" rel="nofollow">applying different functions to dataframe columns</a>:</p>
<pre><code>(df.groupby('ipAddress')
.date
.agg({'count': 'count',
'avg_time_diff': lambda group: group.sort_values().diff().mean()}))
# Output
# count avg_time_diff
# ipAddress
# 98.211.135.179 2 00:08:00
# 98.248.215.244 1 NaT
# 98.249.244.22 2 12:24:00
</code></pre>
| 1 | 2016-09-01T13:25:01Z | [
"python",
"datetime",
"pandas"
] |
OpenLabs connector Magento OpenERP 7.0 | 39,272,641 | <p>I'm trying to connect my OpenERP 7.0 to my Magento webSite 1.9.</p>
<p>I'm using the connector developed by openLabs <a href="https://github.com/openlabs/magento_integration" rel="nofollow">https://github.com/openlabs/magento_integration</a></p>
<p>I follow the instructions of <a href="https://openerp-magento-connector.readthedocs.io/en/develop/introduction.html#installation" rel="nofollow">https://openerp-magento-connector.readthedocs.io/en/develop/introduction.html#installation</a></p>
<p>But after 3 restores of my OpenERP, I still have an error when I tried to update my modules to get my Magento module.</p>
<blockquote>
<p>File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/addons/base/module/module.py", line 617, in update_list
handler.load_addons()
File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/addons/web/http.py", line 580, in load_addons
m = <strong>import</strong>('openerp.addons.' + module)
File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/modules/module.py", line 133, in load_module
mod = imp.load_module('openerp.addons.' + module_part, f, path, descr)
File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/addons/magento_integration-develop/<strong>init</strong>.py", line 9, in
import magento_
File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/addons/magento_integration-develop/magento_.py", line 17, in
import magento
ImportError: No module named magento</p>
</blockquote>
<p>I'm thinking this module isn't stable but when I read on forums, I saw people saying it works.</p>
<p>Can some of these people, or someone else, explain to me how they did it? Or suggest another solution? I'm open to any approach that works. (But I don't have the possibility to upgrade my OpenERP to 8.0 or 9.0.)</p>
<p>Thanks</p>
<p>EDIT : To @CZoellner</p>
<p>First, thx for your helpful answers</p>
<p>OK, I solved that problem: my Python lib was installed but my OpenERP installer didn't find it, so I modified the script to add my module's path to sys.path. That error is resolved.</p>
<p>But now I have another error, similar to the first:</p>
<blockquote>
<p>File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/addons/magento_integration-develop/<strong>init</strong>.py", line 10, in
import country
File "/opt/openerp/davidts/appserver-dts/parts/openerp-7.0-20140124-002431/openerp/addons/magento_integration-develop/country.py", line 18, in
import pycountry
ImportError: No module named pycountry</p>
</blockquote>
<p>and sys.path already contains the right path:</p>
<blockquote>
<p>[...
'/usr/local/lib/python2.7/dist-packages/pycountry-1.20-py2.7.egg',
...]</p>
</blockquote>
<p>I've never developed in Python, so I certainly missed something</p>
| 0 | 2016-09-01T13:19:27Z | 39,289,839 | <p>Ok, So I again restoring my snapshot ...</p>
<p>But now, instead of using the installer, I downloaded the libraries manually and installed them one by one.</p>
<p>So I installed the pycountry lib and the magento lib. I updated the files "magento_.py" and "pycountry.py" of the OpenLabs connector to add the path of my libraries to sys.path.</p>
<p>Something like that </p>
<blockquote>
<p>import sys
sys.path.append("/usr/local/lib/python2.7/dist-packages/pycountry-1.20-py2.7.egg/")</p>
</blockquote>
<p>I ran the setup of the OpenLabs connector and then launched a module update in OpenERP.</p>
<p>And it finally works ! whew !</p>
<p>Anyway thx for your useful answers !</p>
| 0 | 2016-09-02T10:10:16Z | [
"python",
"magento",
"openerp",
"magento-1.9",
"openerp-7"
] |
Wrapping method returning c++ std::array<std::string, 4> in cython | 39,272,646 | <p>My method returns <code>std::array<std::string, 4></code> in C++ code. I wrap this code using Cython. I tried to wrap array using memory views. But the result is <code>Invalid base type for memoryview slice: string</code>. So can I wrap my <code>std::array<std::string, 4></code> to use it in python like list of strs?</p>
| 2 | 2016-09-01T13:19:34Z | 39,275,275 | <p>The easiest way is probably just to copy to a Python list.</p>
<p>For the sake of this answer I'm assuming you've wrapped your array similar to <a href="http://stackoverflow.com/a/36402807/4657412">this answer</a> and called it <code>arrstr4</code>. The code then looks something like:</p>
<pre><code>def f():
cdef arrstr4 res = your_cplus_plus_function()
py_res = []
for i in range(4):
py_res.append(res[i]) # take advantage of autoconversion to python string
return py_res
</code></pre>
| 2 | 2016-09-01T15:13:35Z | [
"python",
"c++",
"arrays",
"string",
"cython"
] |
Matplotlib contourf with 3 colors | 39,272,675 | <p>I would like to make a contour plot with 3 distinct colors. So far, my code looks like the following:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
xMin = 0
xMax = 3
xList = np.linspace(xMin, xMax, 10)
X1, X2 = np.meshgrid(xList, xList)
Z = []
# do some processing with Z
# Z now contains 0, 0.5 or 1, e.g. Z = [0, 0, 0, 1, 1, 0.5, 1, 0.5...]
Z = Z.reshape((len(X1), len(X2)))
plt.contourf(X1, X2, Z,alpha=0.5)
</code></pre>
<p>Now I'd like to plot every contour where Z = 0 as red, Z = 0.5 as green and Z = 1 as blue. I do not want to have smooth transitions between red/green/blue, but just a color switch.
I played around with the color and levels option, but it did not really work out as expected. </p>
<p>Is the contour plot the right way to go here?</p>
| 0 | 2016-09-01T13:20:32Z | 39,279,279 | <p>You can control the colors of a contour plot with the colors option but you might want to use imshow to avoid interpolation between the levels. You create a colormap for imshow with discrete levels using <a href="http://matplotlib.org/api/colors_api.html#matplotlib.colors.ListedColormap" rel="nofollow">ListedColormap</a>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors  # for ListedColormap and BoundaryNorm

data = 0*np.ones((20,20))
data[5:15,5:15] = 0.5
data[7:12,8:16] = 1
# contourf plot
fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.contourf(data, [0,0.4,0.9], colors = ['r','g','b'])
ax1.set_aspect('equal')
ax1.set_title('contourf')
# imshow plot
ax2 = fig.add_subplot(2,2,2)
# define colr map
cmap = colors.ListedColormap(['r','g','b'])
bounds = [0, 0.4,0.6, 1.1]
norm = colors.BoundaryNorm(bounds, cmap.N)
ax2.imshow(data, interpolation = 'none', cmap=cmap, norm=norm)
ax2.set_title('imshow')
</code></pre>
<p><a href="http://i.stack.imgur.com/dqsww.png" rel="nofollow"><img src="http://i.stack.imgur.com/dqsww.png" alt="enter image description here"></a></p>
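<p>The level-to-color mapping used by <code>imshow</code> above can be sanity-checked without drawing anything - a quick sketch:</p>

```python
from matplotlib import colors

cmap = colors.ListedColormap(['r', 'g', 'b'])
bounds = [0, 0.4, 0.6, 1.1]
norm = colors.BoundaryNorm(bounds, cmap.N)

# values in [0, 0.4) map to color 0 (red), [0.4, 0.6) to 1 (green),
# and [0.6, 1.1) to 2 (blue)
print(list(norm([0.0, 0.5, 1.0])))
```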
| 1 | 2016-09-01T19:16:17Z | [
"python",
"matplotlib"
] |
Olympus camera kit bluetooth wakeup | 39,272,712 | <p>I'm working on a Python script destined to run on a Raspberry Pi which controls an Olympus Air A01 camera remotely via WiFi. The WiFi control works fine but I would also like for the script to be able to turn the camera on remotely.</p>
<p>As far as I can tell this can only be done through Bluetooth LE but the OPC SDK doesn't give much details regarding how this is done. I think that when developing under iOS/Android the "wakeup" Java method is used for this purpose but again there are no details as to what exactly this method transmits to the camera in order to get it to power up.</p>
<p>I've been experimenting with Bluez/Gatttool and have a list of the camera's services and handles but have no idea which handle does what and what values I should write to it to wake up the camera.</p>
<p>Has anyone been able to turn this camera on through Bluetooth LE without using the OPC SDK?</p>
<p>Thanks!</p>
| 0 | 2016-09-01T13:22:12Z | 39,376,042 | <p>So I ended up imitating the traffic between the Olympus Android App and the camera while turning it on and I am now able to wake up the camera using Gatttool to send the same values.</p>
<p>Here is the minimal Gatttool sequence which wakes up the camera:</p>
<pre><code>sudo gatttool -b 90:B6:86:XX:YY:ZZ -I
connect
primary
char-desc
char-write-req 0x0013 0001
char-write-req 0x0016 0001
char-write-req 0x0019 0001
char-write-req 0x0012 0101090c01023132333435364400
char-write-req 0x0015 0202000000
char-write-req 0x0012 0102040f0101021300
char-write-req 0x0015 0203000000
exit
</code></pre>
<p>Edit:</p>
<p>The same can be achieved in python like so:</p>
<pre><code>import os
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --primary')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-desc')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0013 -n 0001')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0016 -n 0001')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0019 -n 0001')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0012 -n 0101090c01023132333435364400')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0015 -n 0202000000')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0012 -n 0102040f0101021300')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0015 -n 02030000000; sleep 5')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0012 -n 010304140101011700')
os.system('gatttool -b 90:B6:86:XX:YY:ZZ --char-write-req --handle 0x0015 -n 02040000000')
</code></pre>
<p>Replacing 90:B6:86:XX:YY:ZZ by your own MAC address...</p>
<p>At first I tried using Pygatt but wasn't able to perform the primary and char-desc operations from Gatttool so I reverted to calling Gatttool directly through its non-interactive mode.</p>
| 0 | 2016-09-07T17:30:38Z | [
"python",
"raspberry-pi",
"bluetooth-lowenergy",
"olympus-camerakit",
"olympus-air"
] |
Readlines function for an xlsx file works improperly | 39,272,776 | <p>The goal is sentiment classification. The steps are to open 3 xlsx files, read them, process with gensim.doc2vec methods and classify with SGDClassificator. Just try to repeat <a href="https://districtdatalabs.silvrback.com/modern-methods-for-sentiment-analysis#disqus_thread" rel="nofollow">this code on doc2vec</a>. Python 2.7</p>
<pre><code>with open('C:/doc2v/trainpos.xlsx','r') as infile:
pos_reviews = infile.readlines()
with open('C:/doc2v/trainneg.xlsx','r') as infile:
neg_reviews = infile.readlines()
with open('C:/doc2v/unsup.xlsx','r') as infile:
unsup_reviews = infile.readlines()
</code></pre>
<p>But it turned out that the resulting lists are not what they are expected to be:</p>
<pre><code>print 'length of pos_reviews is %s' % len(pos_reviews)
>>> length of pos_reviews is 1
</code></pre>
<p>The files contain 18, 1221 and 2203 rows respectively. I thought that the lists would have the same number of elements.</p>
<p>The next step is to concatenate all the sentences. </p>
<pre><code>y = np.concatenate((np.ones(len(pos_reviews)), np.zeros(len(neg_reviews))))
x_train, x_test, y_train, y_test = train_test_split(np.concatenate((pos_reviews, neg_reviews)), y, test_size=0.2)
</code></pre>
<p>This leads to the situation when x-train, x-test are lists of sentences as they should be while </p>
<pre><code>y_train = [0.]
y_test = [1.]
</code></pre>
<p>After this division every sentence gets a label:</p>
<pre><code>def labelizeReviews(reviews, label_type):
labelized = []
for i,v in enumerate(reviews):
label = '%s_%s'%(label_type,i)
labelized.append(LabeledSentence(v, [label]))
return labelized
x_train = labelizeReviews(x_train, 'TRAIN')
x_test = labelizeReviews(x_test, 'TEST')
unsup_reviews = labelizeReviews(unsup_reviews, 'UNSUP')
</code></pre>
<p>As written in <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow">the numpy documentation</a>, the arrays should be equal in size. But when I reduce the bigger files to 18 lines, nothing changes.
As I searched the forum, no one has a similar error. I've racked my brain over what went wrong and how to fix it. Thanks for the help!</p>
| 0 | 2016-09-01T13:24:46Z | 39,274,171 | <p>Generally you can't read Microsoft Excel files as a text files using methods like <code>readlines</code> or <code>read</code>. You should convert files to another format before (good solution is .csv which can be readed by <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">csv</a> module) or use a special python modules like <a href="https://pythonhosted.org/pyexcel/" rel="nofollow" title="pyexcel">pyexcel</a> and <a href="https://openpyxl.readthedocs.io/en/default/tutorial.html#loading-from-a-file" rel="nofollow" title="openpyxl">openpyxl</a> to read .xlsx files directly.</p>
| 0 | 2016-09-01T14:23:52Z | [
"python",
"xlsx",
"readlines",
"doc2vec"
] |
Replace a portion of a specific line | 39,272,776 | <p>I am making a history file in Python: a user enters a URL, and the program writes the URL to a txt file along with the time the user visited the website.</p>
<p>So my txt is like below:</p>
<pre><code>- google.com 14:30
- yahoo.com 17:06
- apple.com 23:02
</code></pre>
<p>I want to create a function that takes the URL the user wants to edit as a parameter and changes its time to the current time. The problem is that when I run my function, in the txt file I get something like this:</p>
<pre><code>google.com current_time 14:30
</code></pre>
<p>So I want to know how to delete the <code>14:30</code> from the txt file in place.
Thank you!</p>
<p>This is my function :</p>
<pre><code>def Update(link):
for line in fileinput.input("history.txt", inplace=True):
localtime = time.asctime(time.localtime(time.time()))
print line.rstrip().replace(link, link+ localtime)
</code></pre>
| 0 | 2016-09-01T13:24:47Z | 39,272,840 | <p>If you copy the list without the last element will you get the correct answer?</p>
<pre><code>def Update(link):
    for line in fileinput.input("history.txt", inplace=True):
        newline = line[:-1]  # copies the line without its last character
        print newline        # with inplace=True, printed lines are written back
</code></pre>
| 0 | 2016-09-01T13:27:23Z | [
"python"
] |
Replace a portion of a specific line | 39,272,776 | <p>I am making a history file in Python: a user enters a URL, and the program writes the URL to a txt file along with the time the user visited the website.</p>
<p>So my txt is like below:</p>
<pre><code>- google.com 14:30
- yahoo.com 17:06
- apple.com 23:02
</code></pre>
<p>I want to create a function that takes the URL the user wants to edit as a parameter and changes its time to the current time. The problem is that when I run my function, in the txt file I get something like this:</p>
<pre><code>google.com current_time 14:30
</code></pre>
<p>So I want to know how to delete the <code>14:30</code> from the txt file in place.
Thank you!</p>
<p>This is my function :</p>
<pre><code>def Update(link):
for line in fileinput.input("history.txt", inplace=True):
localtime = time.asctime(time.localtime(time.time()))
print line.rstrip().replace(link, link+ localtime)
</code></pre>
| 0 | 2016-09-01T13:24:47Z | 39,273,068 | <p>You can use regular expressions.</p>
<pre><code>import re
import time
import fileinput

def Update(link):
    for line in fileinput.input("history.txt", inplace=True):
        if link in line:
            localtime = time.strftime('%H:%M')  # matches the HH:MM format in the file
            line = re.sub(r'\d{2}:\d{2}', localtime, line)
        print line.rstrip()  # rstrip avoids doubling the newline
</code></pre>
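<p>The substitution itself can be checked in isolation (Python 3 sketch; the history lines are made-up samples):</p>

```python
import re

def update_line(line, link, new_time):
    """Replace the HH:MM timestamp, but only on the line matching `link`."""
    if link in line:
        return re.sub(r"\d{2}:\d{2}", new_time, line)
    return line

print(update_line("- google.com 14:30", "google.com", "15:45"))
print(update_line("- yahoo.com 17:06", "google.com", "15:45"))
```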
| 0 | 2016-09-01T13:36:59Z | [
"python"
] |
Is printing defaultdict supposed to be ugly (non human-readable) by default? | 39,272,862 | <p>Print <code>dict</code> and <code>defaultdict</code>:</p>
<pre><code>>>> d = {'key': 'value'}
>>> print(d)
{'key': 'value'}
>>> dd = defaultdict(lambda: 'value')
>>> dd['key']
'value'
>>> print(dd)
defaultdict(<function <lambda> at 0x7fbd44cb6b70>, {'key': 'value'})
</code></pre>
<p>With nested structure it becomes ugly:</p>
<pre><code>>>> nested_d = {'key1': {'key2': {'key3': 'value'}}}
>>> print(nested_d)
{'key1': {'key2': {'key3': 'value'}}}
>>> def factory():
... return defaultdict(factory)
...
>>> nested_dd = defaultdict(factory)
>>> nested_dd['key1']['key2']['key3'] = 'value'
>>> print(nested_dd)
defaultdict(<function factory at 0x7fbd44cd4ea0>, {'key1': defaultdict(<function factory at 0x7fbd44cd4ea0>, {'key2': defaultdict(<function factory at 0x7fbd44cd4ea0>, {'key3': 'value'})})})
</code></pre>
<p>Were there any reasons for not making it human-readable by default? (UPD: I mean what are the reasons behind not having custom <code>__str__</code> defined for <code>defaultdict</code> by default?)</p>
| -2 | 2016-09-01T13:28:02Z | 39,272,982 | <p><code>repr()</code> output (<code>defaultdict</code> has no <code>__str__</code>, only <code>__repr__</code>) is <em>debugging output</em>. It is not meant to be pretty, it is meant to be <em>functional</em>. It tells you the type, the <code>repr()</code> of the callable that produces the default, and the contents.</p>
<p>From the <a href="https://docs.python.org/3/reference/datamodel.html#object.__repr__" rel="nofollow"><code>__repr__</code> documentation</a>:</p>
<blockquote>
<p>This is typically used for debugging, so it is important that the representation is information-rich and unambiguous.</p>
</blockquote>
<p>Like all datatypes in Python, (except for strings for obvious reasons), no informal (<code>__str__</code>) is defined because it is up to the programmer to decide what output is suitable for their use-cases. No default can be set for that, because use-cases vary so widely. Output for a file has different needs than output to a GUI or to a web-page for example.</p>
<p>In Python 2, convert the object to a plain dictionary first, then use <code>pprint()</code> if you want 'pretty' output:</p>
<pre><code>def todict(d):
if not isinstance(d, dict):
return d
return {k: todict(v) for k, v in d.items()}
pprint(todict(nested_dd))
</code></pre>
<p>In Python 3, <code>pprint</code> supports <code>defaultdict</code> directly:</p>
<pre><code>>>> pprint(nested_dd)
defaultdict(<function factory at 0x105ed2f28>,
{'key1': defaultdict(<function factory at 0x105ed2f28>,
{'key2': defaultdict(<function factory at 0x105ed2f28>,
{'key3': 'value'})})})
</code></pre>
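<p>The convert-then-pretty-print approach can be verified directly - a self-contained sketch of the <code>todict</code> helper above:</p>

```python
from collections import defaultdict
from pprint import pprint

def factory():
    return defaultdict(factory)

def todict(d):
    """Recursively convert (default)dicts to plain dicts."""
    if not isinstance(d, dict):
        return d
    return {k: todict(v) for k, v in d.items()}

nested_dd = defaultdict(factory)
nested_dd["key1"]["key2"]["key3"] = "value"
pprint(todict(nested_dd))
```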
| 2 | 2016-09-01T13:32:47Z | [
"python"
] |
Is printing defaultdict supposed to be ugly (non human-readable) by default? | 39,272,862 | <p>Print <code>dict</code> and <code>defaultdict</code>:</p>
<pre><code>>>> d = {'key': 'value'}
>>> print(d)
{'key': 'value'}
>>> dd = defaultdict(lambda: 'value')
>>> dd['key']
'value'
>>> print(dd)
defaultdict(<function <lambda> at 0x7fbd44cb6b70>, {'key': 'value'})
</code></pre>
<p>With nested structure it becomes ugly:</p>
<pre><code>>>> nested_d = {'key1': {'key2': {'key3': 'value'}}}
>>> print(nested_d)
{'key1': {'key2': {'key3': 'value'}}}
>>> def factory():
... return defaultdict(factory)
...
>>> nested_dd = defaultdict(factory)
>>> nested_dd['key1']['key2']['key3'] = 'value'
>>> print(nested_dd)
defaultdict(<function factory at 0x7fbd44cd4ea0>, {'key1': defaultdict(<function factory at 0x7fbd44cd4ea0>, {'key2': defaultdict(<function factory at 0x7fbd44cd4ea0>, {'key3': 'value'})})})
</code></pre>
<p>Were there any reasons for not making it human-readable by default? (UPD: I mean what are the reasons behind not having custom <code>__str__</code> defined for <code>defaultdict</code> by default?)</p>
| -2 | 2016-09-01T13:28:02Z | 39,273,757 | <p>There's no way to know what, if anything, the author(s) were thinking or even whether they gave it much consideration at all.</p>
<p>For the specific case of nested <code>defaultdict</code>s, as shown your example code:</p>
<pre><code>def factory():
return defaultdict(factory)
nested_dd = defaultdict(factory)
nested_dd['key1']['key2']['key3'] = 'value'
</code></pre>
<p>You can avoid the issue by subclassing <code>dict</code> like this instead:</p>
<pre><code>class Tree(dict):
def __missing__(self, key):
value = self[key] = type(self)()
return value
nested_dd = Tree()
nested_dd['key1']['key2']['key3'] = 'value'
print(nested_dd) # -> {'key1': {'key2': {'key3': 'value'}}}
</code></pre>
<p>Since the subclass doesn't define its own <code>__repr__()</code> or <code>__str__()</code> methods, instances of it will <code>print</code> (and <code>pprint</code>) just like regular <code>dict</code> instances do.</p>
| 1 | 2016-09-01T14:07:27Z | [
"python"
] |
Creating and naming variables from an array | 39,272,885 | <p>I have several arrays:</p>
<pre><code>foo_1 = [URL, 2, 30]
foo_2 = [URL, 4, 1230]
foo_3 = [URL, 11, 980]
foo_4 = [URL, 6, 316]
</code></pre>
<p>... I want to create a function that creates variables and renames them like so:</p>
<pre><code>foo_1Count = foo_1[2]
foo_2Count = foo_2[2]
foo_3Count = foo_3[2]
foo_4Count = foo_4[2]
</code></pre>
<p>I am dealing with a very large set of arrays so creating the variables as such one by one isn't easy. I don't want to use a dictionary if I can help it. Is there a way to use .format() to create a variable name or something simple that I am missing? Thanks!</p>
| -1 | 2016-09-01T13:28:56Z | 39,273,080 | <pre><code>URL = 'www.abc.com'
foo_1 = [URL, 2, 30]
foo_2 = [URL, 4, 1230]
foo_3 = [URL, 11, 980]
foo_4 = [URL, 6, 316]
for i in range(4):
globals()['foo_{}Count'.format(i+1)] = globals()['foo_{}'.format(i+1)][2]
print foo_4Count # 316
</code></pre>
| 0 | 2016-09-01T13:37:33Z | [
"python",
"arrays",
"list"
] |
Creating and naming variables from an array | 39,272,885 | <p>I have several arrays:</p>
<pre><code>foo_1 = [URL, 2, 30]
foo_2 = [URL, 4, 1230]
foo_3 = [URL, 11, 980]
foo_4 = [URL, 6, 316]
</code></pre>
<p>... I want to create a function that creates variables and renames them like so:</p>
<pre><code>foo_1Count = foo_1[2]
foo_2Count = foo_2[2]
foo_3Count = foo_3[2]
foo_4Count = foo_4[2]
</code></pre>
<p>I am dealing with a very large set of arrays so creating the variables as such one by one isn't easy. I don't want to use a dictionary if I can help it. Is there a way to use .format() to create a variable name or something simple that I am missing? Thanks!</p>
| -1 | 2016-09-01T13:28:56Z | 39,273,631 | <p>If you're asking how to rename (understanding this action as creating a new variable and deleting the existing old one), you could manipulate <a href="https://docs.python.org/2/library/functions.html#globals" rel="nofollow">globals</a> like this:</p>
<pre><code>if __name__ == "__main__":
URL = 'www.abc.com'
foo_1 = [URL, 2, 30]
foo_2 = [URL, 4, 1230]
foo_3 = [URL, 11, 980]
foo_4 = [URL, 6, 316]
for i in range(4):
old_name = 'foo_{}'.format(i + 1)
new_name = 'foo_{}Count'.format(i + 1)
globals()[new_name] = globals()[old_name][2]
del globals()[old_name]
print globals()
</code></pre>
<p>Now, I strongly recommend you not doing this! Don't mess with globals manually... Instead, if your variables have a similar structure, <strong>usually</strong> there isn't any good reason to declare them individually, you could pack them like this:</p>
<pre><code>URL = 'www.abc.com'
foos = [
[URL, 2, 30],
[URL, 4, 1230],
[URL, 11, 980],
[URL, 6, 316]
]
foo_counts = [foo[2] for foo in foos]
print foo_counts
</code></pre>
<p>That way, you can iterate over your data nicely without having any cheap trick like messing around with globals.</p>
| 1 | 2016-09-01T14:02:14Z | [
"python",
"arrays",
"list"
] |
Creating and naming variables from an array | 39,272,885 | <p>I have several arrays:</p>
<pre><code>foo_1 = [URL, 2, 30]
foo_2 = [URL, 4, 1230]
foo_3 = [URL, 11, 980]
foo_4 = [URL, 6, 316]
</code></pre>
<p>... I want to create a function that creates variables and renames them like so:</p>
<pre><code>foo_1Count = foo_1[2]
foo_2Count = foo_2[2]
foo_3Count = foo_3[2]
foo_4Count = foo_4[2]
</code></pre>
<p>I am dealing with a very large set of arrays so creating the variables as such one by one isn't easy. I don't want to use a dictionary if I can help it. Is there a way to use .format() to create a variable name or something simple that I am missing? Thanks!</p>
| -1 | 2016-09-01T13:28:56Z | 39,273,985 | <p>Dynamically creating variables is usually not a good idea. Instead, you can just get the attribute from the aggregating list object directly. It's even shorter than <code>foo_1Count</code>:</p>
<pre><code>>>> foo_1 = ["URL", 2, 30]
>>> foo_1[2]
30
</code></pre>
<p>But you might not want to memorize which index was the count. Alternatively, create a function:</p>
<pre><code>>>> def count(foo):
... return foo[2]
...
>>> count(foo_1)
30
</code></pre>
<p>Or make your <code>foo</code> objects dictionaries:</p>
<pre><code>>>> foo_1 = {"url": "URL", "whatever": 2, "count": 30}
>>> foo_1["count"]
30
</code></pre>
<p>Or use <code>collections.namedtuple</code> to make it a bit cleaner:</p>
<pre><code>>>> Foo = collections.namedtuple("Foo", ["url", "whatever", "count"])
>>> foo_1 = Foo("URL", 2, 30)
>>> foo_1.count
30
</code></pre>
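<p>Combining the list-of-records idea with namedtuples gives iterable, readable access - a sketch using the counts from the question:</p>

```python
import collections

Foo = collections.namedtuple("Foo", ["url", "whatever", "count"])

foos = [
    Foo("URL", 2, 30),
    Foo("URL", 4, 1230),
    Foo("URL", 11, 980),
    Foo("URL", 6, 316),
]

counts = [foo.count for foo in foos]
print(counts)
```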
| 0 | 2016-09-01T14:16:20Z | [
"python",
"arrays",
"list"
] |
Timed method in Python | 39,272,925 | <p>How do I have a part of python script(only a method, the whole script runs in 24/7) run everyday at a set-time, exactly at every 20th minutes? Like 12:20, 12:40, 13:00 in every hour.</p>
<p>I can not use cron, I tried periodic execution but that is not as accurate as I would... It depends from the script starting time.</p>
| 2 | 2016-09-01T13:30:24Z | 39,273,308 | <p>You can either call this method in a loop that sleeps for some time:</p>
<pre><code>import time

while True:
    time.sleep(1200)
    my_function()
</code></pre>
<p>or, to be triggered at set times, you could use datetime to compare the current timestamp against the next scheduled execution:</p>
<pre><code>import datetime

function_executed = False
trigger_time = datetime.datetime.now()

def set_trigger_time():
    global function_executed
    function_executed = False
    return datetime.datetime.now() + datetime.timedelta(minutes=20)

while True:
    if function_executed:
        trigger_time = set_trigger_time()
    if datetime.datetime.now() >= trigger_time:  # >= avoids missing the exact instant
        function_executed = True
        my_function()
</code></pre>
<p>I think, however, that having the system call the script would be a nicer solution.</p>
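<p>To hit the exact wall-clock marks the question asks for (12:20, 12:40, 13:00, ...) instead of drifting intervals, one can compute the sleep duration until the next boundary - a sketch, not from the answer above:</p>

```python
import datetime

def seconds_until_next_mark(now=None, interval_minutes=20):
    """Seconds until the next hh:00, hh:20 or hh:40 wall-clock mark."""
    now = now or datetime.datetime.now()
    period = interval_minutes * 60
    elapsed = (now.minute * 60 + now.second) % period + now.microsecond / 1e6
    return period - elapsed

# at 12:25:00 the next mark is 12:40:00, i.e. 900 seconds away
t = datetime.datetime(2016, 9, 1, 12, 25, 0)
print(seconds_until_next_mark(t))
# the main loop would then be: time.sleep(seconds_until_next_mark()); my_function()
```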
| 0 | 2016-09-01T13:48:04Z | [
"python"
] |
Timed method in Python | 39,272,925 | <p>How do I have a part of python script(only a method, the whole script runs in 24/7) run everyday at a set-time, exactly at every 20th minutes? Like 12:20, 12:40, 13:00 in every hour.</p>
<p>I can not use cron, I tried periodic execution but that is not as accurate as I would... It depends from the script starting time.</p>
| 2 | 2016-09-01T13:30:24Z | 39,273,486 | <p>Use for example redis together with the <a href="https://github.com/ui/rq-scheduler" rel="nofollow">rq-scheduler</a> package. You can schedule tasks for a specific time. So you run the first script, save its starting time in a variable, calculate starting time + 20 minutes, and when the current script ends, enqueue the same task again with the proper time.</p>
| 0 | 2016-09-01T13:56:06Z | [
"python"
] |