title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How to draw a heatmap in pandas with items that don't occur in both columns | 39,291,261 | <p>In <a href="http://stackoverflow.com/questions/39279858/how-to-draw-a-graphical-count-table-in-pandas">How to draw a graphical count table in pandas</a> I asked how to draw a heatmap from input data such as:</p>
<pre><code>customer1,customer2
a,b
a,c
a,c
b,a
b,c
b,c
c,c
a,a
b,c
b,c
</code></pre>
<p>The answer was</p>
<pre><code>x = df.pivot_table(index='customer1',columns='customer2',aggfunc='size',fill_value=0)
idx = x.max(axis=1).sort_values(ascending=0).index
sns.heatmap(x[idx].reindex(idx), annot=True)
</code></pre>
<p>This gives a square matrix showing the number of counts for each pair from the two columns.</p>
<p>This solution doesn't work, however, if there are items in the first column which don't appear in the second. For example:</p>
<pre><code>a,b
a,c
c,b
</code></pre>
<p>Gives an error saying that [u,'a'] is not in the Index.</p>
<p>Is there a simple solution?</p>
| 0 | 2016-09-02T11:27:51Z | 39,291,743 | <p>Try this:</p>
<pre><code>In [129]: df
Out[129]:
customer1 customer2
0 a b
1 a c
2 a c
3 b b
4 b c
5 b c
6 c c
7 a b
8 b c
9 b c
In [130]: x = df.pivot_table(index='customer1',columns='customer2',aggfunc='size',fill_value=0)
In [131]: idx = x.max(axis=1).sort_values(ascending=0).index
In [132]: cols = x.max().sort_values(ascending=0).index
In [133]: sns.heatmap(x[cols].reindex(idx), annot=True)
Out[133]: <matplotlib.axes._subplots.AxesSubplot at 0xbb22588>
</code></pre>
<p><a href="http://i.stack.imgur.com/gS2ke.png" rel="nofollow"><img src="http://i.stack.imgur.com/gS2ke.png" alt="enter image description here"></a></p>
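<p>A minimal, self-contained reproduction of the asymmetric case (a made-up three-row sample): ordering rows and columns independently means no label is ever looked up on the wrong axis, so categories missing from one column no longer raise a <code>KeyError</code>.</p>

```python
import pandas as pd

# the asymmetric case from the question: 'a' never appears in customer2,
# 'b' never appears in customer1
df = pd.DataFrame({'customer1': ['a', 'a', 'c'],
                   'customer2': ['b', 'c', 'b']})

x = df.pivot_table(index='customer1', columns='customer2',
                   aggfunc='size', fill_value=0)
idx = x.max(axis=1).sort_values(ascending=False).index    # row order
cols = x.max().sort_values(ascending=False).index         # column order
result = x.loc[idx, cols]
# sns.heatmap(result, annot=True)   # as in the answer above
print(result)
```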
| 1 | 2016-09-02T11:50:49Z | [
"python",
"pandas",
"seaborn"
] |
Is it possible to break in lambda when the expected result is found | 39,291,336 | <p>I am a Python newbie, and have just become very interested in lambda expressions. The problem I have is to find one and only one target element from a list of elements with a lambda filter. In theory, once the target element is found there is no sense in continuing. </p>
<p>With a <code>for</code> loop it is pretty simple to <code>break</code> out, but what about when using <code>lambda</code>? Is it possible to do this at all? I searched on Google, but did not find the expected solution. </p>
| 4 | 2016-09-02T11:31:28Z | 39,291,614 | <p>From <a href="https://docs.python.org/3/library/functions.html#filter" rel="nofollow">https://docs.python.org/3/library/functions.html#filter</a></p>
<blockquote>
<p>Note that <code>filter(function, iterable)</code> is equivalent to the <strong>generator</strong>
expression <code>(item for item in iterable if function(item))</code> if function
is not <code>None</code> and <code>(item for item in iterable if item)</code> if function is
<code>None</code>.</p>
</blockquote>
<p>So in practice your lambda will not be applied to the whole list unless you start consuming elements from the generator. If you only request one item, you get exactly the early-exit behaviour you want.</p>
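<p>This laziness is easy to verify with a predicate that records what it is asked to check (a made-up example):</p>

```python
calls = []

def pred(x):
    calls.append(x)          # record every element the filter touches
    return x % 3 == 0

# next() pulls from the lazy filter only until the first match
first = next(filter(pred, [1, 2, 3, 4, 5]))
print(first, calls)          # 3 [1, 2, 3]
```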
<p>@edit:</p>
<p>To request one object you'd call <code>next(gen)</code></p>
<pre><code>lst = [1,2,3,4,5]
print(next(filter(lambda x: x%3==0, lst)))
</code></pre>
<p>Would output <code>3</code> and not process anything past <code>3</code> in <code>lst</code></p>
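<p>One caveat: when no element matches, <code>next</code> raises <code>StopIteration</code>. Passing a default as the second argument avoids that:</p>

```python
lst = [1, 2, 4, 5]
# the second argument to next() is returned when the iterator is exhausted
result = next(filter(lambda x: x % 3 == 0, lst), None)
print(result)   # None -- nothing is divisible by 3, and no exception
```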
| 1 | 2016-09-02T11:44:34Z | [
"python",
"loops",
"lambda"
] |
Is it possible to break in lambda when the expected result is found | 39,291,336 | <p>I am a Python newbie, and have just become very interested in lambda expressions. The problem I have is to find one and only one target element from a list of elements with a lambda filter. In theory, once the target element is found there is no sense in continuing. </p>
<p>With a <code>for</code> loop it is pretty simple to <code>break</code> out, but what about when using <code>lambda</code>? Is it possible to do this at all? I searched on Google, but did not find the expected solution. </p>
| 4 | 2016-09-02T11:31:28Z | 39,292,696 | <p>With just lambdas it is not possible; you need to either use <a href="https://docs.python.org/3/library/functions.html#filter" rel="nofollow"><code>filter</code></a> (<a href="https://docs.python.org/2/library/itertools.html#itertools.ifilter" rel="nofollow"><code>itertools.ifilter</code></a> in Python 2) or a conditional generator expression, and call <a href="https://docs.python.org/3/library/functions.html#next" rel="nofollow"><code>next</code></a> on it to get the first element out of it. </p>
<p>For example, let's say you want the first multiple of 5 in a list:</p>
<pre><code>>>> test=[1,2,3,6,58,78,50,65,36,79,100]
>>> next( filter(lambda x:x%5==0,test) )
50
>>> next( x for x in test if x%5==0 )
50
>>>
</code></pre>
| 0 | 2016-09-02T12:40:45Z | [
"python",
"loops",
"lambda"
] |
Problems with Tkinter Canvas | 39,291,365 | <p>I'm doing some simple tasks to prepare myself for Python at the university I am going to attend, but I ran into a problem.
When I ran my code for the first time, the tkinter window appeared and the image was drawn, but when I ran it for the second time, the tkinter window did not appear :(
This is the code: </p>
<pre><code>import math, tkinter
canvas = tkinter.Canvas(width=300, height=300)
canvas.pack()
n = int(input('enter n: '))
x0, y0, r = 150, 150, 100
xx, yy = x0+r, y0
uhol = 360/n
for i in range(n):
    rad = uhol/180*math.pi
    x = x0 + r * math.cos(rad)
    y = y0 + r * math.sin(rad)
    canvas.create_line(x, y, xx, yy)
    xx, yy = x, y
    uhol += 360/n
</code></pre>
<p>Few hours before that, I wrote this code and it's working everytime I run it:</p>
<pre><code>import math, tkinter
canvas = tkinter.Canvas(width=300, height=300)
canvas.pack()
x0, y0, r = 150, 150, 100
xx, yy = x0+r, y0
for uhol in range(10, 361, 10):
    rad = uhol/180*math.pi
    x = x0 + r * math.cos(rad)
    y = y0 + r * math.sin(rad)
    canvas.create_line(x, y, xx, yy)
    xx, yy = x, y
</code></pre>
<p>I am using Python 3.5.2.</p>
| -1 | 2016-09-02T11:32:47Z | 39,300,500 | <p>I presume that the second 'first' should be 'second'. There are two other problems.</p>
<p>You did not say <em>how</em> you ran this either time. Exact details can be important.</p>
<p>You did not create and pass to Canvas an instance of tkinter.Tk. Instead, you relied on the default root mechanism. I consider this unreliable and a bad idea. Among other things, you cannot call methods on the hidden root.</p>
<p>I loaded your code into an IDLE editor, changed lines 2 & 3 to</p>
<pre><code>root = tkinter.Tk()
canvas = tkinter.Canvas(root, width=300, height=300)
</code></pre>
<p>hit F5, and it ran. I hit F5 again and it ran. And again, and again.</p>
<p>To run from a command line or by double-clicking the file, you probably need to add this line at the end.</p>
<pre><code>root.mainloop()
</code></pre>
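<p>Pulling those pieces together, a sketch of the corrected script with an explicit root and <code>mainloop</code>; the geometry is factored into a pure function so it can be checked without opening a window (<code>draw</code> only runs when you uncomment the last line):</p>

```python
import math

def polygon_vertices(n, x0=150, y0=150, r=100):
    """Vertices of the regular n-gon drawn by the question's loop."""
    verts = []
    for i in range(1, n + 1):
        rad = (360 / n) * i / 180 * math.pi
        verts.append((x0 + r * math.cos(rad), y0 + r * math.sin(rad)))
    return verts

def draw(n):
    import tkinter
    root = tkinter.Tk()                       # explicit root window
    canvas = tkinter.Canvas(root, width=300, height=300)
    canvas.pack()
    xx, yy = 150 + 100, 150                   # starting point (x0+r, y0)
    for x, y in polygon_vertices(n):
        canvas.create_line(x, y, xx, yy)
        xx, yy = x, y
    root.mainloop()                           # needed outside IDLE

# draw(int(input('enter n: ')))   # uncomment to run interactively
```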
| 0 | 2016-09-02T20:47:52Z | [
"python",
"canvas",
"tkinter"
] |
Can I prevent a C++ dll from loading in Python Ctypes? | 39,291,466 | <p>I'd like to know if it's possible prevent or allow a dll to be loaded by python ctypes, based on whether a condition is true <strong>from within the dll.</strong></p>
<p><em>Some background:</em></p>
<p>My application uses various calculation algorithms, which I prototyped in Python, and then reimplemented in C++ for a speed boost. I still use Python for the application "glue" and GUI. I'm accessing the functions in the dll using a ctypes wrapper. </p>
<p>I now need to secure the software, so that it will only run if a security dongle is present. The open nature of Python makes this difficult, so I'd like to be able to stop a python script loading the dll unless a function which checks the dongle is present returns True.</p>
<p>Example Python wrapper:</p>
<pre><code>from ctypes import cdll, c_int , c_float, c_bool
lib = cdll.LoadLibrary('my.dll')
cpp_sum = lib.sum
cpp_sum.argtypes = [c_int,c_int]
cpp_sum.restype = c_int
def wrapped_sum(value_1,value_2):
    return cpp_sum(value_1,value_2)
</code></pre>
<p>And the code for the my.dll:</p>
<pre><code>#include "stdafx.h"
#include <cmath>
#define DLLEXPORT extern "C" __declspec(dllexport)
DLLEXPORT int sum(int a, int b)
{return a + b;}
//pseudo dongle code:
bool is_dongle_present(){
    if dongle present return true
    else return false
}
</code></pre>
<p>Ideally the dll would fail to load if dongle_is_present returned false. Can anyone help?
Please tell me if this question is unclear!</p>
<p>Many thanks</p>
| 0 | 2016-09-02T11:38:01Z | 39,291,567 | <p>Add a <code>DllMain</code> function to your library.</p>
<blockquote>
<p>An optional entry point into a dynamic-link library (DLL). When the
system starts or terminates a process or thread, it calls the
entry-point function for each loaded DLL using the first thread of the
process. The system also calls the entry-point function for a DLL when
it is loaded or unloaded using the LoadLibrary and FreeLibrary
functions. </p>
</blockquote>
<p>You could prevent dll load by returning <code>FALSE</code> on <code>DLL_PROCESS_ATTACH</code>: </p>
<blockquote>
<p>When the system calls the DllMain function with
the DLL_PROCESS_ATTACH value, the function returns TRUE if it succeeds
or FALSE if initialization fails. If the return value is FALSE when
DllMain is called because the process uses the LoadLibrary function,
LoadLibrary returns NULL. (The system immediately calls your
entry-point function with DLL_PROCESS_DETACH and unloads the DLL.) If
the return value is FALSE when DllMain is called during process
initialization, the process terminates with an error</p>
</blockquote>
<p>See <a href="https://msdn.microsoft.com/ru-ru/library/windows/desktop/ms682583(v=vs.85).aspx" rel="nofollow">DllMain MSDN entry</a> for the additional information.</p>
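<p>On the Python side, a rejected load surfaces as a plain <code>OSError</code> from <code>LoadLibrary</code>, so the wrapper can fail cleanly. A sketch (the path and the message are illustrative, not from the question):</p>

```python
import ctypes

def load_protected_library(path):
    # If DllMain returns FALSE (dongle absent), LoadLibrary fails and
    # ctypes raises OSError -- the same error as for a missing file.
    try:
        return ctypes.cdll.LoadLibrary(path)
    except OSError:
        raise SystemExit('Security dongle not found -- refusing to run.')
```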
| 1 | 2016-09-02T11:42:37Z | [
"python",
"c++",
"c",
"dll",
"ctypes"
] |
How to one-hot-encode from a csv file input | 39,291,475 | <p>I have a csv file which I read in with</p>
<pre><code>import pandas
df = pd.read_csv("inputfile")
</code></pre>
<p>Some of the columns are numerical and some are strings. Let's call one of the numerical columns <code>'num'</code> and one of the string ones <code>'col'</code>. I would like do the following:</p>
<ol>
<li>I would like to be able to one-hot-encode a string column called <code>'col'</code> and result in a sparse matrix with all the features in it.</li>
<li>I would like to one-hot-encode <code>df['num']</code> but only when <code>df['num'] < 100</code>.</li>
</ol>
<p>This is easy to do if the input were in a list of dictionaries. </p>
<p><em>Step 1.</em> happens automatically when you run DictVectorizer</p>
<p><em>Step 2.</em> just need me to iterate over the dictionaries adding a key/value pair for a new string feature <code>'num_cat'</code> when necessary and then run DictVectorizer on the whole new list of dictionaries.</p>
<p>I am stuck on following:</p>
<ul>
<li>I need the matrix that results from the one-hot-encoding to be sparse. pd.get_dummies is very very slow if there are a large number of categories. Does it create a dense matrix and then make it sparse? As a result I don't feel I can use that.</li>
<li>Can I do what I need without first converting the data frame to a list of dictionaries and then running DictVectorizer? If not, is there an easy way to do the conversion?</li>
</ul>
| 2 | 2016-09-02T11:38:22Z | 39,308,115 | <p>Say you start with</p>
<pre><code>In [31]: df = pd.DataFrame({'col': ['foo', 'foo', 'bar', 'bar'], 'num': [1, 1, 3, 213]})
In [32]: df
Out[32]:
col num
0 foo 1
1 foo 1
2 bar 3
3 bar 213
</code></pre>
<p>First, let's take care of <code>col</code>:</p>
<p>If we define</p>
<pre><code>In [33]: d = dict([e[:: -1] for e in enumerate(df.col.unique())])
</code></pre>
<p>Then we can use it to "numerify" <code>col</code>:</p>
<pre><code>In [34]: df.col = df.col.map(d)
In [35]: df
Out[35]:
col num
0 0 1
1 0 1
2 1 3
3 1 213
</code></pre>
<p>Now let's deal with <code>num</code>:</p>
<pre><code>In [36]: import numpy as np
</code></pre>
<p>We'll just make everything over 100 into 100:</p>
<pre><code>In [37]: df.num = np.minimum(df.num.values, 100)
In [38]: df
Out[38]:
col num
0 0 1
1 0 1
2 1 3
3 1 100
</code></pre>
<p>Now for the encoding:</p>
<pre><code>In [49]: from sklearn import preprocessing
In [50]: enc = preprocessing.OneHotEncoder()
In [51]: enc.fit(df.as_matrix()).transform(df.as_matrix()).toarray()
Out[51]:
array([[ 1., 0., 1., 0., 0.],
[ 1., 0., 1., 0., 0.],
[ 0., 1., 0., 1., 0.],
[ 0., 1., 0., 0., 1.]])
</code></pre>
<p>Two things to note:</p>
<ol>
<li><p><code>toarray()</code> makes the matrix dense again; its use is optional, of course.</p></li>
<li><p>By construction, the last column is necessarily the "100 and over" category of <code>num</code>. You can retain it or drop this column, as needed.</p></li>
</ol>
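<p>On the sparsity concern specifically: scikit-learn's <code>OneHotEncoder</code> returns a scipy sparse matrix by default and, in versions 0.20 and later, accepts string columns directly, so no <code>DictVectorizer</code> detour is needed. A sketch under that assumption (<code>clip</code> implements the bucket-everything-at-100 step):</p>

```python
import pandas as pd
from scipy import sparse
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'col': ['foo', 'foo', 'bar', 'bar'],
                   'num': [1, 1, 3, 213]})
df['num'] = df['num'].clip(upper=100)     # one shared bucket for >= 100

enc = OneHotEncoder()                     # sparse output by default
X = enc.fit_transform(df[['col', 'num']])
print(X.shape, sparse.issparse(X))        # never densified
```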
| 1 | 2016-09-03T14:39:34Z | [
"python",
"csv",
"pandas",
"scikit-learn"
] |
python shapely intersection function: clock-wise or counter-clock-wise | 39,291,481 | <p>I have used the Shapely polygon intersection function: </p>
<pre><code>object.intersection(other)
</code></pre>
<p>and get inconsistency in the direction and order of the vertices of the output polygon. </p>
<p>Is there a way to have a systematic set of outputs, or should I run through the output polygon and sort it?</p>
| 0 | 2016-09-02T11:38:32Z | 39,291,987 | <p>You may get better answers on <a href="http://gis.stackexchange.com/">http://gis.stackexchange.com/</a> </p>
<p>Double check that you are using the right DE-9IM method. <a href="https://en.wikipedia.org/wiki/DE-9IM" rel="nofollow">DE-9IM Wikipedia</a> </p>
<p>If I understand you correctly, with the systematic outputs you have multiple LinearRings inside your polygon and you want a separate result for each, versus just having a single Boolean result for the entire polygon intersecting with the other. The easiest (but slower) way is to iterate through your polygon and compare each LinearRing. The faster way is to use R-trees with the Python package <code>rtree</code>. <a href="http://gis.stackexchange.com/a/119935/60045">http://gis.stackexchange.com/a/119935/60045</a></p>
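<p>If it is the winding direction you need to make systematic, shapely itself ships a normaliser: <code>shapely.geometry.polygon.orient</code> rewinds a polygon's rings to a chosen sign, so you can impose a consistent orientation on whatever <code>intersection</code> returns. A small sketch:</p>

```python
from shapely.geometry import Polygon
from shapely.geometry.polygon import orient

# intersection results can come back with either winding; orient()
# normalises the exterior ring to a fixed direction
p = Polygon([(0, 0), (0, 1), (1, 1), (1, 0)])   # clockwise ring
q = orient(p, sign=1.0)                          # force counter-clockwise
print(p.exterior.is_ccw, q.exterior.is_ccw)
```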
| 0 | 2016-09-02T12:03:20Z | [
"python",
"shapely"
] |
How to concatenate multiple column values into a single column in Panda dataframe | 39,291,499 | <p>This question is same to <a href="http://stackoverflow.com/questions/11858472/pandas-combine-string-and-int-columns">this posted</a> earlier. I want to concatenate three columns instead of concatenating two columns:</p>
<p>Here is the combining two columns:</p>
<pre><code>df = DataFrame({'foo':['a','b','c'], 'bar':[1, 2, 3], 'new':['apple', 'banana', 'pear']})
df['combined']=df.apply(lambda x:'%s_%s' % (x['foo'],x['bar']),axis=1)
df
bar foo new combined
0 1 a apple a_1
1 2 b banana b_2
2 3 c pear c_3
</code></pre>
<p>I want to combine three columns with this command but it is not working, any idea?</p>
<pre><code>df['combined']=df.apply(lambda x:'%s_%s' % (x['bar'],x['foo'],x['new']),axis=1)
</code></pre>
| 0 | 2016-09-02T11:39:33Z | 39,291,591 | <p>I think you are missing one <em>%s</em></p>
<pre><code>df['combined']=df.apply(lambda x:'%s_%s_%s' % (x['bar'],x['foo'],x['new']),axis=1)
</code></pre>
| 3 | 2016-09-02T11:43:28Z | [
"python",
"pandas",
"dataframe"
] |
How to concatenate multiple column values into a single column in Panda dataframe | 39,291,499 | <p>This question is same to <a href="http://stackoverflow.com/questions/11858472/pandas-combine-string-and-int-columns">this posted</a> earlier. I want to concatenate three columns instead of concatenating two columns:</p>
<p>Here is the combining two columns:</p>
<pre><code>df = DataFrame({'foo':['a','b','c'], 'bar':[1, 2, 3], 'new':['apple', 'banana', 'pear']})
df['combined']=df.apply(lambda x:'%s_%s' % (x['foo'],x['bar']),axis=1)
df
bar foo new combined
0 1 a apple a_1
1 2 b banana b_2
2 3 c pear c_3
</code></pre>
<p>I want to combine three columns with this command but it is not working, any idea?</p>
<pre><code>df['combined']=df.apply(lambda x:'%s_%s' % (x['bar'],x['foo'],x['new']),axis=1)
</code></pre>
| 0 | 2016-09-02T11:39:33Z | 39,291,596 | <p>you can simply do:</p>
<pre><code>In[17]:df['combined']=df['bar'].astype(str)+'_'+df['foo']+'_'+df['new']
In[17]:df
Out[18]:
bar foo new combined
0 1 a apple 1_a_apple
1 2 b banana 2_b_banana
2 3 c pear 3_c_pear
</code></pre>
| 3 | 2016-09-02T11:43:44Z | [
"python",
"pandas",
"dataframe"
] |
How to concatenate multiple column values into a single column in Panda dataframe | 39,291,499 | <p>This question is same to <a href="http://stackoverflow.com/questions/11858472/pandas-combine-string-and-int-columns">this posted</a> earlier. I want to concatenate three columns instead of concatenating two columns:</p>
<p>Here is the combining two columns:</p>
<pre><code>df = DataFrame({'foo':['a','b','c'], 'bar':[1, 2, 3], 'new':['apple', 'banana', 'pear']})
df['combined']=df.apply(lambda x:'%s_%s' % (x['foo'],x['bar']),axis=1)
df
bar foo new combined
0 1 a apple a_1
1 2 b banana b_2
2 3 c pear c_3
</code></pre>
<p>I want to combine three columns with this command but it is not working, any idea?</p>
<pre><code>df['combined']=df.apply(lambda x:'%s_%s' % (x['bar'],x['foo'],x['new']),axis=1)
</code></pre>
| 0 | 2016-09-02T11:39:33Z | 39,293,567 | <p>Just wanted to make a time comparison for both solutions (for 30K rows DF):</p>
<pre><code>In [1]: df = DataFrame({'foo':['a','b','c'], 'bar':[1, 2, 3], 'new':['apple', 'banana', 'pear']})
In [2]: big = pd.concat([df] * 10**4, ignore_index=True)
In [3]: big.shape
Out[3]: (30000, 3)
In [4]: %timeit big.apply(lambda x:'%s_%s_%s' % (x['bar'],x['foo'],x['new']),axis=1)
1 loop, best of 3: 881 ms per loop
In [5]: %timeit big['bar'].astype(str)+'_'+big['foo']+'_'+big['new']
10 loops, best of 3: 44.2 ms per loop
</code></pre>
<p>a few more options:</p>
<pre><code>In [6]: %timeit big.ix[:, :-1].astype(str).add('_').sum(axis=1).str.cat(big.new)
10 loops, best of 3: 72.2 ms per loop
In [11]: %timeit big.astype(str).add('_').sum(axis=1).str[:-1]
10 loops, best of 3: 82.3 ms per loop
</code></pre>
| 2 | 2016-09-02T13:24:17Z | [
"python",
"pandas",
"dataframe"
] |
Why "ImportError: No module named builtins" appears after importing hive from PyHive package? | 39,291,533 | <p>I have a simple question to ask. I have been trying to execute HIVE queries from Python using the <a href="https://github.com/cloudera/impyla" rel="nofollow">impyla</a> package. But I got stuck on a <a href="http://stackoverflow.com/questions/35854145/impyla-hangs-when-connecting-to-hiveserver2">cursor problem</a>; a question about it has already been asked on Stack Overflow. In that question, a user answered and advised to use <a href="https://github.com/dropbox/PyHive" rel="nofollow">PyHive</a> instead.</p>
<p>Therefore, now I am trying to execute HIVE queries from Python using PyHive. But unluckily, I am stuck at another issue which seems to be not that complicated. As soon as I execute the following line in python I get an error:</p>
<pre><code>In [18]: from pyhive import hive
Traceback (most recent call last):
File "<ipython-input-18-747088b97eb4>", line 1, in <module>
from pyhive import hive
File "build\bdist.win32\egg\pyhive\hive.py", line 13, in <module>
File "build\bdist.win32\egg\pyhive\common.py", line 8, in <module>
ImportError: No module named builtins
</code></pre>
<p>Can anyone indicate where I am going wrong? I have already <strong>successfully</strong> installed the PyHive package on my machine, so I did not expect this to appear. I have been searching a lot for the reason for this error. It would be a great time saver to know the solution. Thank you very much for your time and support. </p>
<p><strong>UPDATE</strong></p>
<p>I am using:</p>
<ol>
<li>Windows 7 (64-bit)</li>
<li>Python 2.7 (32-bit)</li>
<li>Anaconda2 4.1.1 (32-bit)</li>
</ol>
| 0 | 2016-09-02T11:41:00Z | 39,291,894 | <p>In Python 3 the module <strong>__builtin__</strong> was renamed to <strong>builtins</strong>.</p>
<p>It is possible that you have installed a Python 3 package and are trying to run it with Python 2. (On Python 2, installing the <code>future</code> package provides a compatible <code>builtins</code> module.)</p>
| 0 | 2016-09-02T11:58:00Z | [
"python",
"hadoop",
"hive"
] |
How to sort a dictionary with multiple inputs | 39,291,613 | <p>I have a dictionary called language list with 2 items: suburb and language.
Each suburb may have more than 1 language.</p>
<p>This is what is in the dictionary "languagelist":
('Edgecumbe', 'Farsi, English'), ('Junction Triangle', 'English, Mandarin'), ('Guildwood', 'Mandarin, English'), ('Greenmeadows', 'English')</p>
<p>The user enters a suburb eg. Edgecumbe
I need to display the languages for that suburb in alphabetical order eg
Languages: English, Farsi</p>
<p>I can only seem to sort the "suburb" item, but cannot sort the languages for that suburb.</p>
<p>How do I display only the languages for the suburb entered by user?</p>
<p>This is the current line of code to display:</p>
<pre><code>print('Languages: ' + languagelist[suburb])
</code></pre>
<p>But it is not sorted properly. It is displaying:
Languages: Farsi, English</p>
<p>A snip of my current code is: </p>
<pre><code>suburbs={} languagelist={} b=[] suburb=input('Suburb: ')
for line in open('census.txt'): line=line.split(',')
if line[3] not in suburbs: suburbs[line[3]] = line[3] a=line[4]a=a[:-1] languagelist[line[3]]= a elif line[3]
in suburbs: suburbs[line[3]] = suburbs[line[3]] a=line[4] a=a[:-1]
if a not in languagelist[line[3]]: a=line[4] a=a[:-1] languagelist[line[3]]= languagelist[line[3]] + ', ' + a b=languagelist[line[3]] b.split(', ') languagelist[line[3]]=[] languagelist[line[3]]=b
else: continue while suburb: if suburb not in suburbs:
print('No data found for ' + suburb + '.')
else:print('Languages:'+str(sorted(languagelist.suburb)))suburb=input('Suburb: ')
</code></pre>
<p>Any help would be greatly appreciated.</p>
| -1 | 2016-09-02T11:44:32Z | 39,291,890 | <p>Say you start with</p>
<pre><code>languagelist = dict([('Edgecumbe', 'Farsi, English'), ('Junction Triangle', 'English, Mandarin'), ('Guildwood', 'Mandarin, English'), ('Greenmeadows', 'English')])
</code></pre>
<p>Then, for any <code>place</code>, the sorted languages spoken at that place is</p>
<pre><code>', '.join(sorted(languagelist[place].split(', ')))
</code></pre>
<hr>
<p>Of course, it would be <em>much</em> better if you just use a dictionary mapping places into what you want to begin with.</p>
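<p>For instance, a sketch of that dictionary-of-lists shape, using sample data from the question:</p>

```python
# storing the languages as a list per suburb avoids the split/join
# round-trip entirely (sample data from the question)
languagelist = {'Edgecumbe': ['Farsi', 'English'],
                'Greenmeadows': ['English']}
msg = 'Languages: ' + ', '.join(sorted(languagelist['Edgecumbe']))
print(msg)   # Languages: English, Farsi
```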
| 0 | 2016-09-02T11:57:57Z | [
"python",
"sorting",
"dictionary"
] |
How to replace the pattern w{3,3} with w{3} | 39,291,618 | <p>I meant any numbers in this regex expression.</p>
<pre><code>w{1,1} --> w{1}
w{2,2} --> w{2}
</code></pre>
<p>and so on.</p>
| 1 | 2016-09-02T11:44:46Z | 39,291,825 | <p>Find <code>w\{(\d*),\1\}</code> and replace it with <code>w{\1}</code>.</p>
<p>Here is a full example with python code:</p>
<pre><code>import re
re.sub(r'w\{\s*([0-9]+)\s*,\s*\1\s*\}', r'w{\1}', 'w{1,1}')
</code></pre>
<p>Explanation:</p>
<ul>
<li>we have to escape the curly braces: <code>\{</code> and <code>\}</code></li>
<li>we need one or more digits: <code>[0-9]+</code></li>
<li>this number is surrounded with zero or more whitespaces: <code>\s*</code></li>
<li>then the same numbers again, with a <em>backreference</em>: <code>\1</code></li>
<li>finally, we can use the backreference in the replacement too: <code>w{\1}</code></li>
</ul>
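<p>Because of the backreference, only genuinely repeated bounds are rewritten; a quick check over a made-up sample with several quantifiers at once:</p>

```python
import re

pattern = re.compile(r'w\{\s*([0-9]+)\s*,\s*\1\s*\}')
text = 'a w{1,1} b w{22,22} c w{3,4}'
fixed = pattern.sub(r'w{\1}', text)
print(fixed)   # a w{1} b w{22} c w{3,4} -- unequal bounds untouched
```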
| 1 | 2016-09-02T11:54:52Z | [
"python",
"regex",
"python-2.7",
"python-3.x"
] |
Python Azure-Storage 0.33.0 broken with Azure WebApp | 39,291,622 | <p>I have this really weird issue with Flask app crashing when importing azure.storage. So I have this code:</p>
<pre><code>from azure.storage.queue import QueueService
</code></pre>
<p>As soon as I deploy it to Azure, it fails. Any ideas ? I have put both Azure and Azure-Storage in requirements.txt.</p>
<p>What could possibly be wrong? Thanks!</p>
| 1 | 2016-09-02T11:44:56Z | 39,341,273 | <p>Azure-storage 0.33.0 (latest as of now) has a dependency of cryptography package, which fails to install, take a look here:</p>
<p><a href="https://github.com/Azure/azure-storage-python/issues/219" rel="nofollow">https://github.com/Azure/azure-storage-python/issues/219</a></p>
<p>workaround: use earlier version, 0.32.0, for example</p>
| 1 | 2016-09-06T05:32:20Z | [
"python",
"azure",
"flask"
] |
send request body using a decorator in python | 39,291,895 | <p>I have a function in a django project something like this:</p>
<pre><code>class my_class():
    def post(self, request, id, format=None):
        logger.info(
            ''.join(
                ["id"+str(request.get('id')),
                 "name"+str(request.get('name')),
                 "grade"+str(request.get('grade'))]
            )
        )
        row = Student(
            id = request.get('id'),
            name = request.get('name'),
            grade = request.get('grade')
        )
        row.save()
</code></pre>
<p>Now I want to have a decorator(<strong>@logger</strong>) around my this function which logs everything inside (logger.info). i.e I should only add @logger above the function definition and can log all the request body. Can someone help me how can I do this. I am facing problem in sending the request body from <code>post</code> to the decorator.</p>
| 1 | 2016-09-02T11:58:06Z | 39,292,102 | <p>I guess this should suffice, you might want to also read <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=240845" rel="nofollow">this</a>.</p>
<pre><code>class Logger(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, request, *args, **kwargs):
        logger.info(request.GET)
        return self.f(request, *args, **kwargs)
</code></pre>
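<p>A plain-Python sketch of how the class-based decorator wires up, with a dict standing in for the Django request object and an explicit <code>return</code> so the view's response is passed through:</p>

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class Logger(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, request, *args, **kwargs):
        logger.info(request)              # stands in for request.GET
        return self.f(request, *args, **kwargs)

@Logger
def my_view(request):
    return 'ok: %s' % request

print(my_view({'id': 1}))
```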
| 0 | 2016-09-02T12:10:00Z | [
"python",
"django",
"request",
"decorator"
] |
send request body using a decorator in python | 39,291,895 | <p>I have a function in a django project something like this:</p>
<pre><code>class my_class():
    def post(self, request, id, format=None):
        logger.info(
            ''.join(
                ["id"+str(request.get('id')),
                 "name"+str(request.get('name')),
                 "grade"+str(request.get('grade'))]
            )
        )
        row = Student(
            id = request.get('id'),
            name = request.get('name'),
            grade = request.get('grade')
        )
        row.save()
</code></pre>
<p>Now I want to have a decorator(<strong>@logger</strong>) around my this function which logs everything inside (logger.info). i.e I should only add @logger above the function definition and can log all the request body. Can someone help me how can I do this. I am facing problem in sending the request body from <code>post</code> to the decorator.</p>
| 1 | 2016-09-02T11:58:06Z | 39,292,122 | <pre><code>def logger(func):
def decorator(request, *args, **kwargs):
# log here
return func(request, *args, **kwargs)
return decorator
</code></pre>
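<p>Filled in for the question's <code>post</code> method (note the extra <code>self</code> slot, since the decorated function is a method; the dict-style <code>request.get</code> calls mirror the question's usage):</p>

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def logger(func):
    @functools.wraps(func)
    def wrapper(self, request, *args, **kwargs):
        # log the whole request body before delegating to the view
        log.info('id=%s name=%s grade=%s',
                 request.get('id'), request.get('name'), request.get('grade'))
        return func(self, request, *args, **kwargs)
    return wrapper

class MyView(object):
    @logger
    def post(self, request, id=None, format=None):
        return request.get('name')

result = MyView().post({'id': 1, 'name': 'Ada', 'grade': 'A'})
print(result)   # Ada
```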
| 1 | 2016-09-02T12:10:47Z | [
"python",
"django",
"request",
"decorator"
] |
TypeError: 'bytes' object is not callable | 39,292,002 | <h1>Edit:</h1>
<p>I made the very trivial mistake of having bound <code>bytes</code> to something else prior to executing the following code. This question is now entirely trivial and probably doesn't help anyone. Sorry.</p>
<h1>Original question:</h1>
<p>Code:</p>
<pre><code>import sys
print(sys.version)
b = bytes([10, 20, 30, 40])
print(b)
</code></pre>
<p>Output:</p>
<pre><code>3.5.1 (v3.5.1:37a07cee5969, Dec 5 2015, 21:12:44)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-38-21fec5626bc3> in <module>()
1 import sys
2 print(sys.version)
----> 3 b = bytes([10, 20, 30, 40])
4 print(b)
TypeError: 'bytes' object is not callable
</code></pre>
<p>Documentation:</p>
<pre><code>Type: bytes
String form: b'hello world'
Length: 11
Docstring:
bytes(iterable_of_ints) -> bytes
bytes(string, encoding[, errors]) -> bytes
bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer
bytes(int) -> bytes object of size given by the parameter initialized with null bytes
bytes() -> empty bytes object
Construct an immutable array of bytes from:
- an iterable yielding integers in range(256)
- a text string encoded using the specified encoding
- any object implementing the buffer API.
- an integer
</code></pre>
<p>What am I doing wrong?</p>
| -3 | 2016-09-02T12:04:05Z | 39,292,022 | <p>You have assigned <em>a <code>bytes</code> value</em> to the name <code>bytes</code>:</p>
<pre><code>>>> bytes([10, 20, 30, 40])
b'\n\x14\x1e('
>>> bytes = bytes([10, 20, 30, 40])
>>> bytes([10, 20, 30, 40])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'bytes' object is not callable
</code></pre>
<p><code>bytes</code> is now bound to the value <code>b'\n\x14\x1e('</code>, which is not callable. This global is shadowing the built-in. Delete it:</p>
<pre><code>del bytes
</code></pre>
<p>to reveal the built-in again.</p>
| 2 | 2016-09-02T12:05:24Z | [
"python",
"byte"
] |
portalocker does not seem to lock | 39,292,051 | <p>I have a sort of checkpoint file which I sometimes wish to modify from various Python programs. I load the file, try to lock it using portalocker, change it, then unlock and close it.</p>
<p>However, portalocker does not work in the simplest case.
I created a simple file:</p>
<pre><code>$echo "this is something here" >> test
$python
Python 3.5.2 (default, Jul 5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import portalocker
>>> f = open("test",'w')
>>> portalocker.lock(f, portalocker.LOCK_EX)
</code></pre>
<p>Meanwhile I can still open it in another terminal:</p>
<pre><code>$python
Python 3.5.2 (default, Jul 5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> fl = open("test",'w')
>>> fl.write("I can still overwrite this\n")
>>> fl.close()
</code></pre>
<p>Then I close the first one, and check the file:</p>
<pre><code>>>> portalocker.unlock(f)
>>> f.close()
>>>
$ cat test
I can still overwrite this
</code></pre>
<p>What am I doing wrong?</p>
| 2 | 2016-09-02T12:07:23Z | 39,338,942 | <p>The problem is that, by default, Linux uses advisory locks. <a href="https://stackoverflow.com/questions/12062466/mandatory-file-lock-on-linux">To enable mandatory locking (which you are referring to) the filesytem needs to be mounted with the <code>mand</code> option</a>. The advisory locking system actually has several advantages but can be confusing if you're not expecting it.</p>
<p>To make sure your code works properly in both cases I would suggest encapsulating both of the open calls with the locker.</p>
<p>For example, try this in 2 separate Python instances:</p>
<pre><code>import portalocker
with portalocker.Lock('test', truncate=None) as fh:
fh.write('first instance')
print('waiting for your input')
input()
</code></pre>
<p>Now from a second instance:</p>
<pre><code>import portalocker
with portalocker.Lock('test', truncate=None) as fh:
fh.write('second instance')
</code></pre>
<p>Ps: I'm the maintainer of the portalocker package</p>
| 1 | 2016-09-06T00:02:30Z | [
"python",
"python-3.x",
"locking"
] |
How to make thicker stem lines in matplotlib | 39,292,117 | <p>I want to make thicker stem lines in Python when using <code>plt.stem</code>.</p>
<p>Here is my code</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
N = 20
n = np.arange(0, 2*N, 1)
x = np.exp(-n/N)*np.exp(1j * 2*np.pi/N*n)
plt.stem(n,x.real)
plt.show()
</code></pre>
<p>I changed <code>plt.stem(n,x.real,linewidth=10)</code>, but nothing changed. Is there no function to set the <code>linewidth</code> in <code>plt.stem</code>?</p>
| 2 | 2016-09-02T12:10:34Z | 39,292,212 | <p>The documentation of <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.stem"><code>plt.stem</code></a> shows that the function returns all the line objects created by the plot. You can use that to manually make the lines thicker after plotting:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
N = 20
n = np.arange(0, 2*N, 1)
x = np.exp(-n/N)*np.exp(1j * 2*np.pi/N*n)
markers,stems,base = plt.stem(n,x.real)
for stem in stems:
stem.set_linewidth(10)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/nxVg3.png"><img src="http://i.stack.imgur.com/nxVg3.png" alt="result"></a></p>
| 6 | 2016-09-02T12:15:33Z | [
"python",
"stem"
] |
How to make thicker stem lines in matplotlib | 39,292,117 | <p>I want to make thicker stem lines in python when using <code>plt.stem</code>.</p>
<p>Here is my code</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
N = 20
n = np.arange(0, 2*N, 1)
x = np.exp(-n/N)*np.exp(1j * 2*np.pi/N*n)
plt.stem(n,x.real)
plt.show()
</code></pre>
<p>I changed <code>plt.stem(n,x.real,linewidth=10)</code>, but nothing changed. Is there no function to set the <code>linewidth</code> in <code>plt.stem</code>?</p>
| 2 | 2016-09-02T12:10:34Z | 39,292,292 | <p>This can also be modified using <code>plt.setp()</code> as is shown in the matplotlib documentation <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.stem" rel="nofollow">example</a>. The <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.setp" rel="nofollow"><code>plt.setp()</code> method</a> allows you to set the properties of an artist object after it has been created.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0.1, 2*np.pi, 10)
markerline, stemlines, baseline = plt.stem(x, np.cos(x), '-.')
plt.setp(stemlines, 'linewidth', 4)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/RFopi.png" rel="nofollow"><img src="http://i.stack.imgur.com/RFopi.png" alt="enter image description here"></a></p>
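<p>Note for newer matplotlib releases (the LineCollection behaviour became the default around 3.3, so this may not apply to older versions): <code>plt.stem</code> returns a <code>StemContainer</code> whose <code>stemlines</code> attribute is a single <code>LineCollection</code> rather than a list of lines, so the width can be set directly on it:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0.1, 2 * np.pi, 10)
container = plt.stem(x, np.cos(x))

# stemlines is a single LineCollection in recent matplotlib releases
container.stemlines.set_linewidth(4)
width = float(np.atleast_1d(container.stemlines.get_linewidth())[0])
```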
| 4 | 2016-09-02T12:19:51Z | [
"python",
"stem"
] |
Python: indexing letters of string in a list | 39,292,190 | <p>I would like to ask if there is a way how to get exact letters of some string stored in a list? I'm working with DNA strings, get them from FASTA file using BioPython SeqIO and store them as strings in a list. In next step I will convert it to numerical sequence (called genomic signals). But as novice in Python I don't know how to obtain it from the list correctly. Should I use different data type?</p>
<p>In Maltab I used: </p>
<pre><code>a=1+1i;c=-1-1i;g=-1+1i;t=1-1i; %base values definition
for i=1:number of sequences
length_of_sequence(i)=length(sequence{1,i});
temp=zeros(1,length_of_sequence(i),'double');
temp(sequence{i}=='A')=angle(a);
temp(sequence{i}=='C')=angle(c);
temp(sequence{i}=='G')=angle(g);
temp(sequence{i}=='T')=angle(t);
KontigNumS{i,1}=cumsum(temp); %cumulated phase of whole vector
end
</code></pre>
<p>which creates a vector and replaces the zeros with the corresponding values.
I wasn't able to find a similar question. Thanks for replies. </p>
<p>My python code:</p>
<pre><code>#Dependencies
from Bio import SeqIO #fasta loading
import cmath #complex numbers
import numpy as np
#Open FASTA file new variable
lengths=list()
sequences=list()
handle=open("F:\GC_Assembler_Python\xx.fasta","r")
for record in SeqIO.parse(handle, "fasta"):
print(record.id)
print(len(record.seq))
lengths.append(len(record.seq))
sequences.append(str(record.seq))
#Convert to genomic signals
a=complex(1,1)
c=complex(-1,-1)
g=complex(-1,1)
t=complex(1,-1)
I stopped here.
</code></pre>
| 0 | 2016-09-02T12:14:15Z | 39,292,439 | <p>I don't know how MATLAB does it. In Python you can access any position in a string without converting to a list:</p>
<pre><code>DNA = "ACGTACGTACGT"
print(DNA[2])
# outputs "G", the third base
</code></pre>
<p>If you want to store "strings in a list" you can do this:</p>
<pre><code>DNA_list = ["AAAAAA", "CCCCC", "GGGGG", "TTTTT"]
print(DNA_list[0][0])
# outputs "A", the first "A" of the first sequence
print(DNA_list[1][0])
# outputs "C", the first "C" of the second sequence
</code></pre>
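<p>Putting this together with the question's goal, the MATLAB loop can be translated with a per-base angle lookup and <code>numpy.cumsum</code>. This is a sketch with illustrative names, not BioPython-specific code:</p>

```python
import cmath
import numpy as np

# phase angle for each base, matching the MATLAB definitions
angles = {
    'A': cmath.phase(complex(1, 1)),
    'C': cmath.phase(complex(-1, -1)),
    'G': cmath.phase(complex(-1, 1)),
    'T': cmath.phase(complex(1, -1)),
}

def cumulated_phase(sequence):
    # map every base of the string to its angle, then cumulate
    return np.cumsum([angles[base] for base in sequence])

signal = cumulated_phase("ACGT")
# signal is [pi/4, -pi/2, pi/4, 0]
```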
| 1 | 2016-09-02T12:27:46Z | [
"python",
"biopython"
] |
Python: indexing letters of string in a list | 39,292,190 | <p>I would like to ask if there is a way how to get exact letters of some string stored in a list? I'm working with DNA strings, get them from FASTA file using BioPython SeqIO and store them as strings in a list. In next step I will convert it to numerical sequence (called genomic signals). But as novice in Python I don't know how to obtain it from the list correctly. Should I use different data type?</p>
<p>In Maltab I used: </p>
<pre><code>a=1+1i;c=-1-1i;g=-1+1i;t=1-1i; %base values definition
for i=1:number of sequences
length_of_sequence(i)=length(sequence{1,i});
temp=zeros(1,length_of_sequence(i),'double');
temp(sequence{i}=='A')=angle(a);
temp(sequence{i}=='C')=angle(c);
temp(sequence{i}=='G')=angle(g);
temp(sequence{i}=='T')=angle(t);
KontigNumS{i,1}=cumsum(temp); %cumulated phase of whole vector
end
</code></pre>
<p>which creates a vector and replaces the zeros with the corresponding values.
I wasn't able to find a similar question. Thanks for replies. </p>
<p>My python code:</p>
<pre><code>#Dependencies
from Bio import SeqIO #fasta loading
import cmath #complex numbers
import numpy as np
#Open FASTA file new variable
lengths=list()
sequences=list()
handle=open("F:\GC_Assembler_Python\xx.fasta","r")
for record in SeqIO.parse(handle, "fasta"):
print(record.id)
print(len(record.seq))
lengths.append(len(record.seq))
sequences.append(str(record.seq))
#Convert to genomic signals
a=complex(1,1)
c=complex(-1,-1)
g=complex(-1,1)
t=complex(1,-1)
I stopped here.
</code></pre>
| 0 | 2016-09-02T12:14:15Z | 39,292,704 | <p>If you use the following, you can convert any string to a list of its characters: <code>list(the_string)</code></p>
| 0 | 2016-09-02T12:41:06Z | [
"python",
"biopython"
] |
Why crash when derive from QListWidgetItem AND QObject | 39,292,260 | <p>The following minimal example crashes in pyqt 5.7.1 on windows (copy-paste this in a .py file and run):</p>
<pre><code>from PyQt5.QtWidgets import QListWidgetItem, QListWidget, QApplication
from PyQt5.QtCore import QObject, pyqtSlot, pyqtSignal
class MyListItem(QListWidgetItem):
def __init__(self, obj):
QListWidgetItem.__init__(self, 'example')
obj.sig_name_changed.connect(self.__on_list_item_name_changed)
def __on_list_item_name_changed(self, new_name: str):
self.setText(new_name)
class MyListItem2(QListWidgetItem, QObject):
def __init__(self, obj):
QListWidgetItem.__init__(self, 'example')
QObject.__init__(self)
obj.sig_name_changed.connect(self.pyqt_slot)
@pyqtSlot(str)
def __on_list_item_name_changed(self, new_name: str):
self.setText(new_name)
class Data(QObject):
sig_name_changed = pyqtSignal(str)
class SearchPanel(QListWidget):
def __init__(self, parent=None):
QListWidget.__init__(self, parent)
obj = Data()
hit_item = MyListItem(obj) # OK
hit_item = MyListItem2(obj) # crashes
self.addItem(hit_item)
obj.sig_name_changed.emit('new_example')
app = QApplication([])
search = SearchPanel()
search.show()
app.exec()
</code></pre>
<p>Now just comment out the line that says "crashes", and it works fine. Moreover, the list widget shows 'new_example', showing that the signal went through. </p>
<p>Is there a way to make it work with MyListItem2? i.e. I want to be able to decorate the slot with pyqtSlot, which in turn requires (in PyQt 5.7) that I derive item from QObject. </p>
<p>The intent here is that each item in the list has several characteristics that can change (icon, font, text color) based on signals from associated Data instance (each instance actually "lives", in the Qt sense of the term, in a second thread of our application). </p>
| 1 | 2016-09-02T12:18:12Z | 39,297,775 | <p>This has got nothing to do with <code>pyqtSlot</code>.</p>
<p>The actual problem is that you are trying to inherit from two Qt classes, and that is <a href="http://pyqt.sourceforge.net/Docs/PyQt5/gotchas.html#multiple-inheritance" rel="nofollow">not generally supported</a>. The only exceptions to this are Qt classes that implement interfaces, and Qt classes that share a common base-class (e.g. <code>QListWidget</code> and <code>QWidget</code>). However, only the former is <em>offically</em> supported, and there are several provisos regarding the latter (none of which are relevant here).</p>
<p>So a Python class that inherits from both <code>QListWidgetItem</code> and <code>QObject</code> just will not work. The main problem occurs when PyQt tries to access attributes that are not defined by the top-level base-class (even when the attribute does not exist). In earlier PyQt versions, this would simply raise an error:</p>
<pre><code>>>> class MyListItem2(QListWidgetItem, QObject): pass
...
>>> x = MyListItem2()
>>> x.objectName()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: could not convert 'MyListItem2' to 'QObject'
>>> x.foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: could not convert 'MyListItem2' to 'QObject'
</code></pre>
<p>which makes it clear that (in C++ terms) a <code>MyListItem2(QListWidgetItem)</code> cannot be cast to a <code>QObject</code>. Unfortunately, it seems that more recent versions of PyQt5 no longer raise this error, and instead just immediately dump core (which presumably is a bug).</p>
<p>If you really need to use <code>pyqtSlot</code>, one suggestion would be to use composition rather than subclassing. So perhaps something like this:</p>
<pre><code>class ListItemProxy(QObject):
def __init__(self, item, obj):
QObject.__init__(self)
self._item = item
obj.sig_name_changed.connect(self.__on_list_item_name_changed)
@pyqtSlot(str)
def __on_list_item_name_changed(self, new_name: str):
self._item.setText(new_name)
class MyListItem2(QListWidgetItem):
def __init__(self, obj):
QListWidgetItem.__init__(self, 'example')
self._proxy = ListItemProxy(self, obj)
</code></pre>
| 1 | 2016-09-02T17:21:55Z | [
"python",
"multiple-inheritance",
"signals-slots",
"pyqt5"
] |
Pandas DataFrame with continuous index | 39,292,275 | <p>I have the following code:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
{'Index' : ['1', '2', '5','7', '8', '9', '10'],
'Vals' : [1, 2, 3, 4, np.nan, np.nan, 5]})
</code></pre>
<p>This gives me:</p>
<pre><code> Index Vals
0 1 1.0
1 2 2.0
2 5 3.0
3 7 4.0
4 8 NaN
5 9 NaN
6 10 5.0
</code></pre>
<p>But what I want is something like this:</p>
<pre><code> Index Vals
0 1 1.000000
1 2 2.000000
2 3 NaN
3 4 NaN
4 5 3.000000
5 6 NaN
6 7 4.000000
7 8 NaN
8 9 NaN
9 10 5.000000
</code></pre>
<p>I tried to achieve this by creating a new dataframe with a continuous index. Then I would like to assign the values which I already have but how? The only thing I have so far is this:</p>
<pre><code>clean_data = pd.DataFrame({'Index' : range(1,11)})
</code></pre>
<p>Which gives me:</p>
<pre><code> Index
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
</code></pre>
| 3 | 2016-09-02T12:19:01Z | 39,292,535 | <p>So for your example it will look like:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{'Index' : ['1', '2', '5','7', '8', '9', '10'],
'Vals' : [1, 2, 3, 4, np.nan, np.nan, 5]})
df['Index'] = df['Index'].astype(int)
clean_data = pd.DataFrame({'Index' : range(1,11)})
result = clean_data.merge(df,on="Index",how='outer')
</code></pre>
<p>And the result is : </p>
<pre><code> Index Vals
0 1 1.0
1 2 2.0
2 3 NaN
3 4 NaN
4 5 3.0
5 6 NaN
6 7 4.0
7 8 NaN
8 9 NaN
9 10 5.0
</code></pre>
| 3 | 2016-09-02T12:31:56Z | [
"python",
"pandas",
"indexing"
] |
Pandas DataFrame with continuous index | 39,292,275 | <p>I have the following code:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
{'Index' : ['1', '2', '5','7', '8', '9', '10'],
'Vals' : [1, 2, 3, 4, np.nan, np.nan, 5]})
</code></pre>
<p>This gives me:</p>
<pre><code> Index Vals
0 1 1.0
1 2 2.0
2 5 3.0
3 7 4.0
4 8 NaN
5 9 NaN
6 10 5.0
</code></pre>
<p>But what I want is something like this:</p>
<pre><code> Index Vals
0 1 1.000000
1 2 2.000000
2 3 NaN
3 4 NaN
4 5 3.000000
5 6 NaN
6 7 4.000000
7 8 NaN
8 9 NaN
9 10 5.000000
</code></pre>
<p>I tried to achieve this by creating a new dataframe with a continuous index. Then I would like to assign the values which I already have but how? The only thing I have so far is this:</p>
<pre><code>clean_data = pd.DataFrame({'Index' : range(1,11)})
</code></pre>
<p>Which gives me:</p>
<pre><code> Index
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
</code></pre>
| 3 | 2016-09-02T12:19:01Z | 39,293,045 | <p>You can put the <code>Index</code> column in the index (after casting as integer), select the rows <code>1</code> through <code>10</code> (which will create the appropriate <code>NaN</code>s) and reset the index.</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
{'Index' : ['1', '2', '5','7', '8', '9', '10'],
'Vals' : [1, 2, 3, 4, np.nan, np.nan, 5]})
df['Index'] = df['Index'].astype(int)
df = df.set_index('Index').loc[range(1, 11)].reset_index()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> Index Vals
0 1 1.0
1 2 2.0
2 3 NaN
3 4 NaN
4 5 3.0
5 6 NaN
6 7 4.0
7 8 NaN
8 9 NaN
9 10 5.0
</code></pre>
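<p>Note that in newer pandas versions, <code>.loc</code> with labels that are missing from the index raises a <code>KeyError</code>; <code>reindex</code> is the safer equivalent, inserting the missing labels with <code>NaN</code> in one step:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'Index': ['1', '2', '5', '7', '8', '9', '10'],
     'Vals': [1, 2, 3, 4, np.nan, np.nan, 5]})
df['Index'] = df['Index'].astype(int)

# reindex creates rows for the labels 3, 4 and 6 with NaN values
result = df.set_index('Index').reindex(range(1, 11)).reset_index()
```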
| 1 | 2016-09-02T12:58:38Z | [
"python",
"pandas",
"indexing"
] |
How to convert 1-channel numpy matrix to 4-channel monochromatic image | 39,292,401 | <p>I am working on a pyqt project with numpy and cv2. Basically, I want to use a binary numpy mask <code>(1024, 1024)</code> to create a 4 channel monochromatic image <code>(1024, 1024, 4)</code>, where all 1s from the mask are pink and all 0s are invisible. Then I convert the image and show it as overlay in my QScene to highlight some pixels in another image.</p>
<p>My current approach does the job, but is too slow and I'm sure that numpy provides something more convenient.</p>
<pre><code>color = (255, 0, 238, 100)
r = (mask * color[0]).reshape((w*h))
g = (mask * color[1]).reshape((w*h))
b = (mask * color[2]).reshape((w*h))
a = (mask * color[3]).reshape((w*h))
rgba = np.dstack((r, g, b, a)).reshape((w, h, 4))
transposed = np.transpose(rgba, axes=[1, 0, 2])
</code></pre>
<p>Is there a better way to show a mask overlay? I don't insist on using numpy, however, it is important that I can set the color, as I will be needing several colors.</p>
| 2 | 2016-09-02T12:26:04Z | 39,292,493 | <p>Yes! Use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> to clean it up and have a <code>one-liner</code>, like so -</p>
<pre><code>transposed = mask.T[...,None]*color
</code></pre>
<p><strong>Explanation:</strong></p>
<ol>
<li>Use <code>mask.T</code> to do the <code>np.transpose</code> operation done at the end.</li>
<li>Use <code>[...,None]</code> on the transposed array to basically push all its dimensions to the front and create a singleton dim (dim with <code>length=1</code>) as the last axis. For introducing this new axis, we have used an alias for <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow"><code>np.newaxis</code></a> - <code>None</code>. Thus, we would achieve broadcasting for the transposed array along its last axis aligned with the elements of <code>color</code>.</li>
<li>Finally, we perform the element-wise multiplication itself, which in fact would be a <code>broadcasted</code> operation.</li>
</ol>
<hr>
<p>You can perform the creation of new axis part at the start and then use <code>np.transpose</code> or <code>np.swapaxes</code> and this would be closer to your original code. So, alternatively we could have :</p>
<pre><code>transposed = mask[:,:,None].transpose(1,0,2)*color
</code></pre>
<p>and, </p>
<pre><code>transposed = mask[:,:,None].swapaxes(0,1)*color
</code></pre>
| 3 | 2016-09-02T12:30:30Z | [
"python",
"numpy",
"pyqt4"
] |
Slot on list widget item data object never called (in PyQt 5.7) | 39,292,449 | <p>In PyQt 5.5 the following code worked, but not so in PyQt 5.7 (the list shows 'example' rather than 'new example', and indeed debugging shows that the slot is never hit). Does anyone know what is wrong with it: </p>
<pre><code>from PyQt5.QtWidgets import QListWidgetItem, QListWidget, QApplication
from PyQt5.QtCore import QObject, pyqtSlot, pyqtSignal, Qt
class MyListItemData(QObject):
def __init__(self, list_widget_item, obj):
super().__init__()
self.list_widget_item = list_widget_item
obj.sig_name_changed.connect(self.__on_list_item_name_changed)
# @pyqtSlot(str)
def __on_list_item_name_changed(self, new_name: str):
self.list_widget_item.setText(new_name)
class Data(QObject):
sig_name_changed = pyqtSignal(str)
class SearchPanel2(QListWidget):
def __init__(self, parent=None):
QListWidget.__init__(self, parent)
obj = Data()
hit_item = QListWidgetItem('example')
hit_item.setData(Qt.UserRole, MyListItemData(hit_item, obj))
self.addItem(hit_item)
obj.sig_name_changed.emit('new_example')
app = QApplication([])
search = SearchPanel2()
search.show()
app.exec()
</code></pre>
<p>Although probably not the way this was supposed to be done, in PyQt 5.5 it was an acceptable workaround for a PyQt 5.5 bug (that prevented us from simply deriving from QListWidgetItem so the item could be directly connect to signals). </p>
<p><strong><em>Post-answer edit</em></strong></p>
<p>After Ekhumoro answered, I was confronted with a harsh reality: this fixed the example code posted, but not my app, because my app was doing exactly what the solution said to do. So I revisited: in the real app, the items are created later, and the signal for name change is emitted later. Therefore a better minimal example to reproduce my problem would have had the following: </p>
<pre><code>class SearchPanel2(QListWidget):
def __init__(self, obj, parent=None):
QListWidget.__init__(self, parent)
hit_item = QListWidgetItem('example')
data = MyListItemData(hit_item, obj)
hit_item.setData(Qt.UserRole, data) # slot not called
self.addItem(hit_item)
# self.data = data
def emit(self):
obj.sig_name_changed.emit('new_example')
app = QApplication([])
obj = Data()
search = SearchPanel2(obj)
search.show()
QTimer.singleShot(2000, search.emit)
app.exec()
assert search.item(0).text() == 'new_example'
</code></pre>
<p>This fails assertion. The assertion passes if data is kept by strong reference (uncomment last line of init). So it is likely that setData() keeps only a weak reference to its second argument, causing data to get deleted at end of init unless it is stored somewhere.</p>
| 1 | 2016-09-02T12:28:28Z | 39,298,185 | <p>There seems to be some kind of garbage-collection issue. Try this instead:</p>
<pre><code> hit_item = QListWidgetItem('example')
data = MyListItemData(hit_item, obj)
hit_item.setData(Qt.UserRole, data)
</code></pre>
| 1 | 2016-09-02T17:50:26Z | [
"python",
"signals-slots",
"pyqt5",
"qlistwidgetitem"
] |
does the zoom parameter value of the html file change when I zoom-in or zoom-out in the google map? | 39,292,506 | <p>I'm not sure whether my question is clear enough or not. This is my problem:</p>
<p>I am working with the gmplot module for Python and I would like to make the grids of the map scalable according to the zoom value of the Google map. I read the HTML file but, as I know nothing about HTML, I don't know if the values in the file change while I'm interacting with the map through its zoom button or with the "street view little man". If this happens, I could relate both parameters and get what I'm looking for.</p>
<p>Do the parameters of the html code change in "real time" or is it just an initialisating file?</p>
| 0 | 2016-09-02T12:31:06Z | 39,292,848 | <p>Your best bet I think would be to use JavaScript to get the zoom value from the <a href="https://developers.google.com/maps/documentation/javascript/reference?csw=1" rel="nofollow"><code>google.maps.Map</code> object</a>.</p>
<p>You should already have such an object referenced somewhere in your code if you are seeing a map. Maybe something like:</p>
<pre><code>var mapObject = new google.maps.Map(document.getElementById("map"), mapOptions);
</code></pre>
<p>You can then call the <code>getZoom()</code> method on that object to get the zoom level, which should be updated in real time as your map is updated:</p>
<pre><code>mapObject.getZoom();
</code></pre>
<p>I don't know of any way to get the real time value from the html element.</p>
| 1 | 2016-09-02T12:49:08Z | [
"python",
"html",
"google-maps",
"google-maps-api-3"
] |
Python BeautifulSoup replace img src | 39,292,881 | <p>I'm trying to parse HTML content from site, change a href and img src. A href changed successful, but img src don't.</p>
<p>It changed in variable but not in HTML (post_content):</p>
<pre><code><p><img alt="alt text" src="https://lifehacker.ru/wp-content/uploads/2016/08/15120903sa_d2__1471520915-630x523.jpg" title="Title"/></p>
</code></pre>
<p>Not _http://site.ru...</p>
<pre><code><p><img alt="alt text" src="http://site.ru/wp-content/uploads/2016/08/15120903sa_d2__1471520915-630x523.jpg" title="Title"/></p>
</code></pre>
<p>My code</p>
<pre><code>if "app-store" not in url:
r = requests.get("https://lifehacker.ru/2016/08/23/kak-vybrat-trimmer/")
soup = BeautifulSoup(r.content)
post_content = soup.find("div", {"class", "post-content"})
for tag in post_content():
for attribute in ["class", "id", "style", "height", "width", "sizes"]:
del tag[attribute]
for a in post_content.find_all('a'):
a['href'] = a['href'].replace("https://lifehacker.ru", "http://site.ru")
for img in post_content.find_all('img'):
img_urls = img['src']
if "https:" not in img_urls:
img_urls="http:{}".format(img_urls)
thumb_url = img_urls.split('/')
urllib.urlretrieve(img_urls, "/Users/kr/PycharmProjects/education_py/{}/{}".format(folder_name, thumb_url[-1]))
file_url = "/Users/kr/PycharmProjects/education_py/{}/{}".format(folder_name, thumb_url[-1])
data = {
'name': '{}'.format(thumb_url[-1]),
'type': 'image/jpeg',
}
with open(file_url, 'rb') as img:
data['bits'] = xmlrpc_client.Binary(img.read())
response = client.call(media.UploadFile(data))
attachment_url = response['url']
img_urls = img_urls.replace(img_urls, attachment_url)
[s.extract() for s in post_content('script')]
post_content_insert = bleach.clean(post_content)
post_content_insert = post_content_insert.replace('&lt;', '<')
post_content_insert = post_content_insert.replace('&gt;', '>')
print post_content_insert
</code></pre>
| 1 | 2016-09-02T12:50:46Z | 39,292,990 | <p>Looks like you're never assigning <code>img_urls</code> back to <code>img['src']</code>. Try doing that at the end of the block.</p>
<pre><code>img_urls = img_urls.replace(img_urls, attachment_url)
img['src'] = img_urls
</code></pre>
<p>... But first, you need to change your <code>with</code> statement so it uses some name other than <code>img</code> for your file object. Right now you're overshadowing the dom element and you can no longer access it.</p>
<pre><code> with open(file_url, 'rb') as some_file:
data['bits'] = xmlrpc_client.Binary(some_file.read())
</code></pre>
| 1 | 2016-09-02T12:55:29Z | [
"python",
"html",
"href",
"src"
] |
generating XML from config using python | 39,292,951 | <p>Please tell me how I can extract from a *.conf file each [section name] and the value of the first parameter (after the name of the section), and create an XML file from them using Python. I have a very simple config file, but each section has more than one option. Thank you in advance.</p>
| -3 | 2016-09-02T12:53:42Z | 39,293,051 | <p>In the standard library you can find <a href="https://docs.python.org/3/library/configparser.html" rel="nofollow">configparser</a> which helps you read such files and <a href="https://docs.python.org/3/library/xml.etree.elementtree.html" rel="nofollow">xml.etree.ElementTree</a> which helps you build valid XML files (see sections about modifying and building).</p>
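<p>A minimal stdlib-only sketch combining the two (the section and option names here are made up for illustration):</p>

```python
import configparser
import xml.etree.ElementTree as ET

config = configparser.ConfigParser()
config.read_string("[alice]\nusername = 1001\n\n[bob]\nusername = 1002\n")

root = ET.Element("Directory")
for section in config.sections():
    entry = ET.SubElement(root, "Entry", name=section)
    # value of the first parameter after the section name
    entry.text = config.get(section, "username")

xml_text = ET.tostring(root, encoding="unicode")
```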
| 0 | 2016-09-02T12:58:47Z | [
"python",
"xml"
] |
generating XML from config using python | 39,292,951 | <p>Please tell me how I can extract from a *.conf file each [section name] and the value of the first parameter (after the name of the section), and create an XML file from them using Python. I have a very simple config file, but each section has more than one option. Thank you in advance.</p>
| -3 | 2016-09-02T12:53:42Z | 39,320,869 | <pre><code>import xml.etree.cElementTree as ET # on Python 3.3+ use xml.etree.ElementTree instead
import configparser
config = configparser.ConfigParser()
config.read('sipusers.conf')
Main = ET.Element("Main")
ET.SubElement(Main, "TCMIPPhoneDirectory", clearlight="true")
ET.SubElement(Main, "Title").text = "Phonelist"
ET.SubElement(Main, "Prompt").text = "Prompt"
for section in config.sections():
Child = ET.SubElement(Main, "DirectoryEntry")
ET.SubElement(Child, "Name").text = section
ET.SubElement(Child, "Telephone").text = config.get(section,'username')
xml = ET.ElementTree(Main)
xml.write("phonebook.xml")
</code></pre>
| 0 | 2016-09-04T19:48:11Z | [
"python",
"xml"
] |
How to get pandas to throw exception (or otherwise note) if some fields are missing from csv? | 39,293,118 | <p>I am reading csv files with pandas in python. When a record in the file has less than the expected number of fields, then pandas just silently fills in the fields with NaN. Instead, I want an exception to be thrown.</p>
<p>For example, if I have a CSV file like this:</p>
<pre><code>A, B, C
1, 2, 3
4,,6
5
</code></pre>
<p>With pandas I can read this file like this:</p>
<pre><code> input_data = pandas.read_csv(input_filename)
</code></pre>
<p>I want pandas to throw an error on the final line. There should be two more commas on this line (5,,). The missing commas represent missing fields.</p>
<p>Instead of pandas just ignoring the missing commas, is there a way to detect this condition and throw an exception?</p>
<p>Thanks!</p>
| 1 | 2016-09-02T13:02:05Z | 39,293,296 | <p>See <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">documentation</a> </p>
<p>Set <code>error_bad_lines</code> to <code>False</code> :</p>
<pre><code>error_bad_lines : boolean, default True
Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these "bad lines" will be dropped from the DataFrame that is returned. (Only valid with C parser)
</code></pre>
<p>Set <code>warn_bad_lines</code> to <code>True</code>:</p>
<pre><code>warn_bad_lines : boolean, default True
If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output. (Only valid with C parser).
</code></pre>
<p>This will output a warning for each bad line that you have on your <code>dataframe</code> instead of ignoring its presence or not returning a <code>dataframe</code> at all</p>
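<p>Note that <code>error_bad_lines</code> only covers lines with too <em>many</em> fields; lines with too few fields (as in the question) are silently padded with <code>NaN</code>. One workaround is to validate the raw field counts yourself — a rough sketch that ignores the possibility of quoted commas:</p>

```python
import io
import pandas as pd

csv_text = "A,B,C\n1,2,3\n4,,6\n5\n"
df = pd.read_csv(io.StringIO(csv_text))

# pandas pads the short record "5" with NaN, so check the raw lines instead
expected = csv_text.splitlines()[0].count(',')
bad = [lineno for lineno, line in enumerate(csv_text.splitlines()[1:], start=2)
       if line.count(',') != expected]
# bad == [4] here; raise an exception if anything was found
if bad:
    print("lines with missing fields:", bad)
```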
| 0 | 2016-09-02T13:10:59Z | [
"python",
"csv",
"pandas"
] |
Can Python or SQLite search and replace output on the fly? | 39,293,119 | <p>Programs: SQLite database 3.14.1 & Python 2.7</p>
<p>I have a table called transactions. The field "name" has names in it. I can use Python to return the rows from selected fields to a text file based on criteria specified from tranType while adding some strings to it on the fly:</p>
<pre><code>e = open("export.txt", "w+")
sqlF1 = """SELECT name, quantity, item, tDate, tranType FROM transactions WHERE tranType LIKE "%sell%"
"""
c.execute(sqlF1)
for row in c.execute(sqlF1):
e.write('%s bought QTY %s of %s from %s on date %s\n' % row)
e.close()
</code></pre>
<p>This is fine and good. In the name field, sometimes there are first and last names and sometimes it's just the first name. </p>
<p>What I want to do is, while getting each row, check the name - if there is a space in the field (like a first last entry), change the space to a "+" symbol on the fly. I don't want to change the field beforehand and replace all the spaces with pluses.</p>
<p>Can this be done?</p>
<p>What I know/don't know:</p>
<p>Based on my limited knowledge and experience, I can't figure out how to do it. I was thinking the only way I can make this happen is to dump the names into another field and change all the spaces to pluses and then redo the SQL part to grab the different field. </p>
<p>Any advice is greatly appreciated. I'm taking some Python courses soon (and doing some studying on SQLite too, so hopefully I won't be continuously inundating StackOverflow with too many dumb questions.</p>
<p>Thanks!</p>
| 0 | 2016-09-02T13:02:12Z | 39,293,158 | <p>With the <a href="http://www.sqlite.org/lang_corefunc.html#replace" rel="nofollow">replace() function</a>, this can be done on the fly:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT replace(name, ' ', '+'), quantity, item, tDate FROM ...
</code></pre>
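<p>For example, called from Python's built-in <code>sqlite3</code> module (the table and rows here are made up for illustration):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (name TEXT, quantity INTEGER)")
conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [("John Smith", 2), ("Alice", 1)])

# replace() rewrites the value on the fly; the stored rows are untouched
rows = conn.execute(
    "SELECT replace(name, ' ', '+'), quantity "
    "FROM transactions ORDER BY rowid").fetchall()
```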
| 1 | 2016-09-02T13:04:06Z | [
"python",
"sqlite",
"loops",
"replace",
"string-concatenation"
] |
Python - Parsing Json format input | 39,293,190 | <p>I need to make a data parsing that come from another program in JSON format:</p>
<pre><code>import json
input = '''
Array
(
[error] => Array
(
)
[result] => Array
(
[0] => Person Object
(
[arr:Person:private] => Array
(
[cf] => DRGMRO75P03G273O
[first_name] => Mario
[last_name] => Dragoni
[email] => mario.dragoni@yahoo.com
[phone] => 558723
[uid] => dragom
[source] => USRDATA
)
)
)
)
'''
</code></pre>
<p>I tried:</p>
<pre><code>data = json.loads(input)
</code></pre>
<p>But I get:</p>
<pre><code>**ValueError:** No JSON object could be decoded
</code></pre>
<p>Perhaps the fault is due to lack of field separators?</p>
<p>Edit:</p>
<p>The input was generated by a <em>php</em> <strong>print_r</strong>, I replaced it with <strong>json_encode</strong></p>
| -1 | 2016-09-02T13:05:30Z | 39,293,382 | <p>Your function is correct, but the provided JSON string is invalid: the input is in fact a PHP print_r dump of a mixed array and class object, not JSON.</p>
<p>You can parse JSON in Python like this:</p>
<pre><code>import json
j = json.loads('{"one" : "1", "two" : "2", "three" : "3"}')
print j['two']
</code></pre>
| 1 | 2016-09-02T13:15:34Z | [
"python",
"json",
"parsing"
] |
Python Tkinter textbox inserting issue | 39,293,222 | <pre><code>from tkinter import *
import time
class MyClass(object):
def __init__(self):
root = Tk()
button = Button(root, text="Button", command=self.command).pack()
#scrollbar and textbox
scrollbar = Scrollbar(root)
scrollbar.pack(side=RIGHT, fill=Y)
self.tbox = Text(root, wrap=WORD, yscrollcommand=scrollbar.set)
self.tbox.pack(fill=X)
scrollbar.configure(command=self.tbox.yview)
root.mainloop()
def command(self):
time.sleep(2)
self.tbox.insert(END, "Some text1\n")
time.sleep(2)
self.tbox.insert(END, "Some text2\n")
time.sleep(2)
self.tbox.insert(END, "Some text3")
MyClass()
</code></pre>
<p>Is it possible to appear those texts one by one and not all at the same time? I put <code>time.sleep()</code> to prove that its not appearing those separately</p>
<p>EDIT: Here is my code. So the problem is that if I use <code>self.tbox.insert(END, "text")</code> instead of <code>print("text")</code>, that text is not appearing the same way, if I use print, it will appear (prints) instantly of course. I made a website crawler or something like that, so it is very frustrating to wait when the text appears in textbox. And yes, I dont want to use print in this case</p>
<pre><code>from selenium.common.exceptions import NoSuchElementException
from selenium import webdriver
from tkinter import *
phantom_path = r'phantomjs.exe'
driver = webdriver.PhantomJS(phantom_path)
class Crawler(object):
def __init__(self):
self.root = Tk()
self.root.title('Website Crawler')
label1 = Label(self.root, text='Select a website').pack()
self.website = StringVar()
Entry(self.root, textvariable=self.website).pack()
#button which executes the function
button = Button(self.root, text='Crawl', command=self.command)
button.pack()
#scrollbar and textbox
self.scrollbar = Scrollbar(self.root)
self.scrollbar.pack(side=RIGHT, fill=Y)
self.tbox = Text(self.root, wrap=WORD, yscrollcommand=self.scrollbar.set)
self.tbox.pack(fill=X)
self.scrollbar.configure(command=self.tbox.yview)
self.root.mainloop()
def command(self):
url = self.website.get()
link_list = []
link_list2 = []
driver.get(url)
driver.implicitly_wait(5)
self.tbox.insert(END, "Crawling links..\n")
#finds all links on the site and appens them to list
try:
links = driver.find_elements_by_tag_name('a')
for x in links:
x = x.get_attribute('href')
link_list.append(x)
self.tbox.insert(END, str(x)+'\n')
except NoSuchElementException:
self.tbox.insert(END, 'This site have no links\n')
pass
try:
for sites in link_list:
driver.get(sites)
self.tbox.insert(END, "### In "+str(sites)+': ###\n')
links = driver.find_elements_by_tag_name('a')
for y in links:
y = y.get_attribute('href')
link_list.append(y)
self.tbox.insert(END, str(y)+'\n')
except NoSuchElementException:
self.tbox.insert(END, 'This site have no links\n')
pass
self.tbox.insert(END, 'Done\n\n')
Crawler()
</code></pre>
| 1 | 2016-09-02T13:07:22Z | 39,299,436 | <p>time.sleep() is a blocking call. Use <a href="http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.after-method" rel="nofollow">after</a>.</p>
<pre><code>from tkinter import *
import time
class MyClass(object):
def __init__(self):
self.root = Tk()
button = Button(self.root, text="Button", command=self.command).pack()
#scrollbar and textbox
scrollbar = Scrollbar(self.root)
scrollbar.pack(side=RIGHT, fill=Y)
self.tbox = Text(self.root, wrap=WORD, yscrollcommand=scrollbar.set)
self.tbox.pack(fill=X)
scrollbar.configure(command=self.tbox.yview)
self.root.mainloop()
def command(self):
self.root.after(1000, lambda: self.tbox.insert(END, "Some text1\n"))
self.root.after(2000, lambda: self.tbox.insert(END, "Some text2\n"))
self.root.after(3000, lambda: self.tbox.insert(END, "Some text3"))
MyClass()
</code></pre>
<p>Demo:</p>
<p><a href="http://i.stack.imgur.com/iVCR6.gif" rel="nofollow"><img src="http://i.stack.imgur.com/iVCR6.gif" alt="enter image description here"></a></p>
| 1 | 2016-09-02T19:22:04Z | [
"python",
"tkinter",
"textbox",
"python-3.5"
] |
Python Tkinter textbox inserting issue | 39,293,222 | <pre><code>from tkinter import *
import time
class MyClass(object):
def __init__(self):
root = Tk()
button = Button(root, text="Button", command=self.command).pack()
#scrollbar and textbox
scrollbar = Scrollbar(root)
scrollbar.pack(side=RIGHT, fill=Y)
self.tbox = Text(root, wrap=WORD, yscrollcommand=scrollbar.set)
self.tbox.pack(fill=X)
scrollbar.configure(command=self.tbox.yview)
root.mainloop()
def command(self):
time.sleep(2)
self.tbox.insert(END, "Some text1\n")
time.sleep(2)
self.tbox.insert(END, "Some text2\n")
time.sleep(2)
self.tbox.insert(END, "Some text3")
MyClass()
</code></pre>
<p>Is it possible to make those texts appear one by one and not all at the same time? I put <code>time.sleep()</code> in to show that they are not appearing separately.</p>
<p>EDIT: Here is my code. The problem is that if I use <code>self.tbox.insert(END, "text")</code> instead of <code>print("text")</code>, the text does not appear incrementally; if I use print, it appears instantly, of course. I made a website crawler, or something like that, so it is very frustrating to wait for the text to appear in the textbox. And yes, I don't want to use print in this case.</p>
<pre><code>from selenium.common.exceptions import NoSuchElementException
from selenium import webdriver
from tkinter import *
phantom_path = r'phantomjs.exe'
driver = webdriver.PhantomJS(phantom_path)
class Crawler(object):
def __init__(self):
self.root = Tk()
self.root.title('Website Crawler')
label1 = Label(self.root, text='Select a website').pack()
self.website = StringVar()
Entry(self.root, textvariable=self.website).pack()
#button which executes the function
button = Button(self.root, text='Crawl', command=self.command)
button.pack()
#scrollbar and textbox
self.scrollbar = Scrollbar(self.root)
self.scrollbar.pack(side=RIGHT, fill=Y)
self.tbox = Text(self.root, wrap=WORD, yscrollcommand=self.scrollbar.set)
self.tbox.pack(fill=X)
self.scrollbar.configure(command=self.tbox.yview)
self.root.mainloop()
def command(self):
url = self.website.get()
link_list = []
link_list2 = []
driver.get(url)
driver.implicitly_wait(5)
self.tbox.insert(END, "Crawling links..\n")
#finds all links on the site and appends them to the list
try:
links = driver.find_elements_by_tag_name('a')
for x in links:
x = x.get_attribute('href')
link_list.append(x)
self.tbox.insert(END, str(x)+'\n')
except NoSuchElementException:
self.tbox.insert(END, 'This site have no links\n')
pass
try:
for sites in link_list:
driver.get(sites)
self.tbox.insert(END, "### In "+str(sites)+': ###\n')
links = driver.find_elements_by_tag_name('a')
for y in links:
y = y.get_attribute('href')
link_list.append(y)
self.tbox.insert(END, str(y)+'\n')
except NoSuchElementException:
self.tbox.insert(END, 'This site have no links\n')
pass
self.tbox.insert(END, 'Done\n\n')
Crawler()
</code></pre>
| 1 | 2016-09-02T13:07:22Z | 39,300,930 | <p>If you want to show the texts one by one using <code>time.sleep()</code>, you will need to use the <code>threading</code> module, so that the sleeps happen off the main GUI thread.</p>
<pre><code>from tkinter import *
import time
import threading
class Threader(threading.Thread):
def __init__(self, tbox, *args, **kwargs):
threading.Thread.__init__(self, *args, **kwargs)
self.tbox = tbox
self.daemon = True # Stop threads when your program exits.
self.start()
def run(self):
time.sleep(2)
self.tbox.insert(END, "Some text1\n")
time.sleep(2)
self.tbox.insert(END, "Some text2\n")
time.sleep(2)
self.tbox.insert(END, "Some text3")
class MyClass(object):
def __init__(self):
self.root = Tk()
button = Button(self.root, text="Button", command= lambda: Threader(tbox=self.tbox)).pack()
#scrollbar and textbox
scrollbar = Scrollbar(self.root)
scrollbar.pack(side=RIGHT, fill=Y)
self.tbox = Text(self.root, wrap=WORD, yscrollcommand=scrollbar.set)
self.tbox.pack(fill=X)
scrollbar.configure(command=self.tbox.yview)
self.root.mainloop()
MyClass()
</code></pre>
| 0 | 2016-09-02T21:27:35Z | [
"python",
"tkinter",
"textbox",
"python-3.5"
] |
How do I successfully include a definition in an if statement | 39,293,257 | <p>How do I put a function definition together with an <code>if</code> statement so that, when the program is running, it will call the function by itself?</p>
<pre><code>person=int(input("select the number of any option which you would like to execute:"))
if (person)==(1):
print ("please write the value for each class ")
main()
def main ():
juvenile=(input(" number of juveniles: "))
adult= (input(" number of adults:"))
senile=(input("number of seniles:"))
</code></pre>
<p>When I run it, it always gives me an error.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\fenis\Desktop\TEST PAGE 4 GCSE CS CW.py", line 6, in <module>
main()
NameError: name 'main' is not defined
>>>
</code></pre>
| 0 | 2016-09-02T13:09:10Z | 39,293,395 | <p>You can't call <code>main()</code> at a point where <code>main()</code> hasn't been defined. Move your definition of <code>main()</code> to above the place where you try and call it.</p>
<pre><code>def main():
juvenile = input("number of juveniles:")
adult = input("number of adults:")
senile = input("number of seniles:")
person = int(input("select the number of any option which you would like to execute:"))
if person==1:
print ("please write the value for each class ")
main()
</code></pre>
| 1 | 2016-09-02T13:15:58Z | [
"python"
] |
how to efficiently cythonize the "vectorize" function (numpy library) - python | 39,293,273 | <p>as the title suggets, I'd like to efficiently cythonize the <code>numpy.vectorize</code> function, which, to the core, is simplyfying this piece below (the complete function is way too long to post but the majority of the time is spent here):</p>
<pre><code> def func(*vargs):
for _n, _i in enumerate(inds):
the_args[_i] = vargs[_n]
kwargs.update(zip(names, vargs[len(inds):]))
return self.pyfunc(*the_args, **kwargs)
</code></pre>
<p>I have read these guides (<a href="http://cython.readthedocs.io/en/latest/src/tutorial/numpy.html" rel="nofollow">http://cython.readthedocs.io/en/latest/src/tutorial/numpy.html</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html</a>) which are very useful but my knowledge of C is way too narrow to use them to a fraction of their potential.</p>
<p>how would you go about it ? [Python 3.5.1, Cython 0.25a, Numpy 1.10.4]</p>
| 1 | 2016-09-02T13:09:56Z | 39,297,140 | <p>The function you show is just a bit of dancing to deal with <code>kwargs</code>. Note the comment at the head of that block in <code>vectorize.__call__</code>. With simpler arguments it just sets <code>func = self.pyfunc</code>.</p>
<p>The actual work is done in the last line:</p>
<pre><code>self._vectorize_call(func=func, args=vargs)
</code></pre>
<p>which does</p>
<pre><code>outputs = ufunc(*inputs)
< return dtype conversion >
</code></pre>
<p><code>ufunc</code> is, in most cases, <code>frompyfunc(func, len(args), nout)</code>.</p>
<p>So stripped of all this Python cover, it comes down to</p>
<pre><code>np.frompyfunc(your_func, n, m)(args)
</code></pre>
<p>and <code>frompyfunc</code> is a compiled function. I suspect that function uses <code>nditer</code> (the <code>c</code> version) to broadcast the arguments, and feed the values as scalars to <code>your_func</code>. I discussed the use of <code>nditer</code> with <code>cython</code> in another recent SO.</p>
<p>In sum, as long as <code>your_func</code> is an impenetrable (or general) Python function, there's nothing <code>cython</code> can do to improve on this. The iteration is already being handled in compiled code.</p>
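<p>A minimal sketch to illustrate the point above — calling <code>np.frompyfunc</code> directly gives essentially the same result as <code>np.vectorize</code> (note that <code>frompyfunc</code> returns an object-dtype array, so it is cast back here for comparison; the function used is just an arbitrary example):</p>

```python
import numpy as np

def slow_op(a, b):
    # an arbitrary scalar Python function
    return a * a + b

x = np.arange(5)
y = np.arange(5)

# np.vectorize wraps frompyfunc and adds dtype handling
v = np.vectorize(slow_op)(x, y)

# calling frompyfunc directly: 2 inputs, 1 output, object-dtype result
f = np.frompyfunc(slow_op, 2, 1)(x, y).astype(v.dtype)

assert (v == f).all()
print(v)  # [ 0  2  6 12 20]
```

<p>Either way, <code>slow_op</code> is still called once per element from compiled iteration code, which is why cythonizing the wrapper buys little.</p>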
| 2 | 2016-09-02T16:36:44Z | [
"python",
"numpy",
"vectorization",
"cython"
] |
How to join two pandas Series into a single one with interleaved values? | 39,293,328 | <p>I have two pandas.Series...</p>
<pre><code>import pandas as pd
import numpy as np
length = 5
s1 = pd.Series( [1]*length ) # [1, 1, 1, 1, 1]
s2 = pd.Series( [2]*length ) # [2, 2, 2, 2, 2]
</code></pre>
<p>...and I would like to have them joined together in a single Series with the interleaved values from the first 2 series.
Something like: [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]</p>
| 3 | 2016-09-02T13:12:54Z | 39,293,329 | <p>Here we are:</p>
<pre><code>s1.index = range(0,len(s1)*2,2)
s2.index = range(1,len(s2)*2,2)
interleaved = pd.concat([s1,s2]).sort_index()
idx values
0 1
1 2
2 1
3 2
4 1
5 2
6 1
7 2
8 1
9 2
</code></pre>
| 2 | 2016-09-02T13:12:54Z | [
"python",
"pandas",
"numpy"
] |
How to join two pandas Series into a single one with interleaved values? | 39,293,328 | <p>I have two pandas.Series...</p>
<pre><code>import pandas as pd
import numpy as np
length = 5
s1 = pd.Series( [1]*length ) # [1, 1, 1, 1, 1]
s2 = pd.Series( [2]*length ) # [2, 2, 2, 2, 2]
</code></pre>
<p>...and I would like to have them joined together in a single Series with the interleaved values from the first 2 series.
Something like: [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]</p>
| 3 | 2016-09-02T13:12:54Z | 39,293,424 | <p>Here's one using <code>NumPy stacking</code>, <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.vstack.html" rel="nofollow"><code>np.vstack</code></a> -</p>
<pre><code>pd.Series(np.vstack((s1,s2)).ravel('F'))
</code></pre>
| 3 | 2016-09-02T13:17:17Z | [
"python",
"pandas",
"numpy"
] |
How to join two pandas Series into a single one with interleaved values? | 39,293,328 | <p>I have two pandas.Series...</p>
<pre><code>import pandas as pd
import numpy as np
length = 5
s1 = pd.Series( [1]*length ) # [1, 1, 1, 1, 1]
s2 = pd.Series( [2]*length ) # [2, 2, 2, 2, 2]
</code></pre>
<p>...and I would like to have them joined together in a single Series with the interleaved values from the first 2 series.
Something like: [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]</p>
| 3 | 2016-09-02T13:12:54Z | 39,293,470 | <p>Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html" rel="nofollow">np.column_stack</a>:</p>
<pre><code>In[27]:pd.Series(np.column_stack((s1,s2)).flatten())
Out[27]:
0 1
1 2
2 1
3 2
4 1
5 2
6 1
7 2
8 1
9 2
dtype: int64
</code></pre>
| 4 | 2016-09-02T13:19:26Z | [
"python",
"pandas",
"numpy"
] |
AttributeError: 'module' object has no attribute 'version' Canopy | 39,293,375 | <p>Hi, I am going to preface this with: I could just be really dumb, so don't overlook that. But suddenly, when opening Canopy today, I wasn't able to run one of my typical scripts; I get the error AttributeError: 'module' object has no attribute 'version' when trying to load pandas. From what I can gather, it seems to fail when numpy is called through pandas. I checked my working directory for files named numpy.py to see if I idiotically named a file numpy, but failed to find such a file. I also attempted to uninstall and reinstall both numpy and pandas from the package manager in Canopy. Any suggestions?</p>
<pre><code> %run "/Users/jim/Documents/ORAL-PAT-2.5-3.5plotly.py"
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/Users/jim/Documents/ORAL-PAT-2.5-3.5plotly.py in <module>()
1 #import the modules you need
----> 2 import pandas as pd
3 import numpy as np
4 import plotly.plotly as py
5 import plotly.tools as tls
/Users/jim/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/__init__.py in <module>()
20
21 # numpy compat
---> 22 from pandas.compat.numpy_compat import *
23
24 try:
/Users/jim/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/compat/numpy_compat.py in <module>()
13
14 # numpy versioning
---> 15 _np_version = np.version.short_version
16 _np_version_under1p8 = LooseVersion(_np_version) < '1.8'
17 _np_version_under1p9 = LooseVersion(_np_version) < '1.9'
AttributeError: 'module' object has no attribute 'version'
</code></pre>
| 0 | 2016-09-02T13:15:17Z | 39,293,932 | <p>Just had the same problem after downgrading Pandas and upgrading again to fix another issue. This is just a hack, but you could try this:</p>
<p>Open <code>...pandas/compat/numpy_compat.py</code> and replace <code>np.version.short_version</code> with <code>np._np_version</code></p>
<p>Hope that helps!</p>
| 0 | 2016-09-02T13:44:23Z | [
"python",
"numpy",
"canopy"
] |
How to edit dict in views.py from django admin | 39,293,387 | <p>I'm working on a site to better learn the Django framework. I've currently set up views and links to template files to display content on the main page. In my views.py file I've added a dictionary that displays the dict value for each key in the index.html page when it gets rendered:</p>
<p>views.py:</p>
<pre><code>def Index(request):
projectmessage = {
"projectMessage":"This is text from a dictionary value. written in views.py",
"projectTitle":"Title from dict",
"projectText": "Text from dict",
}
return render(request,'wbdev/index.html', context=projectmessage)
</code></pre>
<p>Relevant lines in index.html:</p>
<pre><code> <h3>{{ projectTitle }}</h3>
<p>{{ projectMessage }}</p>
</code></pre>
<p>I'm wondering if this could be made visible on the django admin page so that I can change the dict text directly from the GUI. Could this be done, or am I way off in the sense that this is not what the django admin page is intended for? From what I've read, django admin parses the models.py file to set up text fields and buttons. I've followed the official django tutorial and some of the "How to tango with django" book, but I can't wrap my head around how I should proceed to get the functionality that I want.</p>
<p>I'm sorry for the noob question. I will return to my books and I will probably understand how this works down the line. If anyone could help me with an explanation of how I can achieve this I will be most grateful.</p>
<p>Thank you.</p>
| 0 | 2016-09-02T13:15:42Z | 39,294,020 | <p>You'll probably want to create a Model for Projects, so projects can be saved to a Database and easily displayed in the Admin. </p>
<p>Inside models.py include the following: </p>
<pre><code>class Project(models.Model):
message = models.CharField(max_length=20)
title = models.CharField(max_length=20)
text = models.CharField(max_length=20)
</code></pre>
<p>Inside admin.py if you register the model it should then appear in the admin</p>
<pre><code>from dajngo.contrib import admin
from .models import Project
admin.site.register(Project)
</code></pre>
<p>Finally for your index in views.py you'll want to query the database for the project objects in question before rendering them to the template</p>
<pre><code>def index(request):
projects = Project.objects.all()
return render(request,'wbdev/index.html', context={'projects': projects})
</code></pre>
<p>Inside your template you can then iterate over all the projects in your database like</p>
<pre><code>{% for project in projects %}
{{ project.message }}
{{ project.title }}
{{ project.text }}
{% endfor %}
</code></pre>
| 1 | 2016-09-02T13:48:23Z | [
"python",
"django",
"django-admin"
] |
generate multiple 2 dimensional data from given 2 dimensional data | 39,293,490 | <p>I have a CCD with 512x512 data for a single frame (the reference). I need to generate 100000 frames of 512x512 such that, for each pixel, the sum of its values across the frames gives the actual value of the corresponding pixel in the reference frame. Could you please help with this?</p>
| 0 | 2016-09-02T13:20:34Z | 39,294,683 | <p>I suppose you want to generate <code>100000</code> frames with <strong>random</strong> values? Here is one approach. For simplicity and memory reasons I will take only <code>3x3</code> data instead of <code>512x512</code> and generate only <code>10</code> frames instead of <code>100000</code>.</p>
<pre><code>#I will generate a random reference with size 3x3,
#you can take your original CCD data
reference = np.random.random(size=(3,3))
>>> reference
array([[ 0.68618373, 0.69455787, 0.14494262],
[ 0.83277638, 0.63792746, 0.27089728],
[ 0.21380624, 0.30595052, 0.26136707]])
#Generate 10 random frames of size 3x3
frames = np.random.random(size=(10,3,3))
#Now normalize those random values such that their sum will result
#in your reference
frames = frames/np.sum(frames, axis=0)*reference
#Check whether the sum of all frames equals the reference
>>> np.sum(frames, axis=0)
array([[ 0.68618373, 0.69455787, 0.14494262],
[ 0.83277638, 0.63792746, 0.27089728],
[ 0.21380624, 0.30595052, 0.26136707]])
</code></pre>
<p>However, please consider that the sum over the frames will not equal your reference <strong>exactly</strong>:</p>
<pre><code>>>> np.sum(frames, axis=0)-reference
array([[ 0.00000000e+00, -1.11022302e-16, 0.00000000e+00],
[ 0.00000000e+00, -1.11022302e-16, 0.00000000e+00],
[ -2.77555756e-17, 0.00000000e+00, 5.55111512e-17]])
</code></pre>
<p>This is due to precision reasons and could be a problem depending on your actual task...</p>
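<p>If you need to verify the reconstruction despite this, a floating-point-aware comparison such as <code>np.allclose</code> is the usual tool — a small sketch repeating the idea above:</p>

```python
import numpy as np

reference = np.random.random(size=(3, 3))
frames = np.random.random(size=(10, 3, 3))
frames = frames / np.sum(frames, axis=0) * reference

# exact equality may fail because of rounding, but the frames
# sum to the reference within floating-point tolerance
assert np.allclose(np.sum(frames, axis=0), reference)
```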
| 0 | 2016-09-02T14:20:58Z | [
"python"
] |
Finding largest distance (on number scale) between n pairs of numbers | 39,293,520 | <p>I am trying to write an algorithm that finds the largest distance between two numbers, given n pairs of numbers.</p>
<p>Here is what I have so far.</p>
<p>Wire ints is my example numbers, with first pair being <code>1,10</code> and second pair being <code>1,10</code> and third pair being <code>7,7</code>.</p>
<pre><code>wire_ints = [10, 1, 1, 10, 7, 7]
longest_cases = {}
largest_length = 0
q = 0
y = 0
leftcounter = 0
rightcounter = 1
while q < len(wire_ints):
left_port = wire_ints[leftcounter]
right_port = wire_ints[rightcounter]
length_wire = left_port - right_port
wire_length = abs(length_wire)
leftcounter = leftcounter + 2
rightcounter = rightcounter + 2
q = q + 2
y = y + 1
if not longest_cases:
largest_length = wire_length
longest_cases[wire_length] = y
elif wire_length == largest_length:
longest_cases[wire_length] = y
elif wire_length > largest_length:
largest_length = wire_length
longest_cases.clear()
longest_cases[wire_length] = y
print(longest_cases)
</code></pre>
<p>This currently outputs <code>{9:2}</code>, and its not wrong. <code>9</code> is the greatest distance between any of these pairs of numbers. BUT, I want it to print <code>{9:1, 9:2}</code>.</p>
<p>The key in the dictionary refers to greatest length, and the value refers to the number of the pair in the original array. With the first 2 integers being pair 1, then the second pair 2, etc.</p>
<p>So, as the array has two pairs with the same length, it should output BOTH pairs in the dictionary.</p>
<p>I cannot figure this out.
Help!</p>
| 1 | 2016-09-02T13:21:41Z | 39,293,708 | <p>As far as you can have only unique keys in dictionary, you should either use pair number as a key or use a list:</p>
<pre><code>wire_ints = [10, 1, 1, 10, 7, 7]
longest_dict = []
longest_so_far = 0
for i in range(len(wire_ints)//2):
j = i*2
a, b = wire_ints[j:j+2]
dist = abs(a - b)
pair = [dist, i + 1]
if dist > longest_so_far:
longest_so_far = dist
longest_dict = [pair]
elif dist == longest_so_far:
longest_dict.append(pair)
print(longest_dict)
#=> [[9, 1], [9, 2]]
</code></pre>
| 2 | 2016-09-02T13:32:01Z | [
"python"
] |
Finding largest distance (on number scale) between n pairs of numbers | 39,293,520 | <p>I am trying to write an algorithm that finds the largest distance between two numbers, given n pairs of numbers.</p>
<p>Here is what I have so far.</p>
<p>Wire ints is my example numbers, with first pair being <code>1,10</code> and second pair being <code>1,10</code> and third pair being <code>7,7</code>.</p>
<pre><code>wire_ints = [10, 1, 1, 10, 7, 7]
longest_cases = {}
largest_length = 0
q = 0
y = 0
leftcounter = 0
rightcounter = 1
while q < len(wire_ints):
left_port = wire_ints[leftcounter]
right_port = wire_ints[rightcounter]
length_wire = left_port - right_port
wire_length = abs(length_wire)
leftcounter = leftcounter + 2
rightcounter = rightcounter + 2
q = q + 2
y = y + 1
if not longest_cases:
largest_length = wire_length
longest_cases[wire_length] = y
elif wire_length == largest_length:
longest_cases[wire_length] = y
elif wire_length > largest_length:
largest_length = wire_length
longest_cases.clear()
longest_cases[wire_length] = y
print(longest_cases)
</code></pre>
<p>This currently outputs <code>{9:2}</code>, and its not wrong. <code>9</code> is the greatest distance between any of these pairs of numbers. BUT, I want it to print <code>{9:1, 9:2}</code>.</p>
<p>The key in the dictionary refers to greatest length, and the value refers to the number of the pair in the original array. With the first 2 integers being pair 1, then the second pair 2, etc.</p>
<p>So, as the array has two pairs with the same length, it should output BOTH pairs in the dictionary.</p>
<p>I cannot figure this out.
Help!</p>
| 1 | 2016-09-02T13:21:41Z | 39,293,802 | <p>This takes your initial input and converts it to a list of tuples. Then it calculates the absolute difference between the tuple members and puts that into a new list. Then your final output is created as index_list.</p>
<pre><code>wire_ints = [10, 1, 1, 10, 7, 7]
new_list = [(x,y) for (x,y) in zip(wire_ints[::2], wire_ints[1::2])]
diff_list = [abs(x[0] - x[1]) for x in new_list]
index_list = [(x, index) for (index, x) in enumerate(diff_list) if x == max(diff_list)]
print index_list
</code></pre>
<p>Just realised this could be compressed further</p>
<pre><code>new_list = [abs(x-y) for (x,y) in zip(wire_ints[::2], wire_ints[1::2])]
index_list = [(x, index) for (index, x) in enumerate(new_list) if x == max(new_list)]
</code></pre>
<p>If you desperately want a dictionary, the best approach would probably be storing the max value as a key, and the pair numbers in a list or tuple as the value.</p>
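<p>As a sketch of that approach (duplicate keys like <code>{9:1, 9:2}</code> are impossible in a Python dict), the single longest distance can map to a list of 1-based pair numbers:</p>

```python
wire_ints = [10, 1, 1, 10, 7, 7]

# distance for each consecutive pair
diffs = [abs(x - y) for x, y in zip(wire_ints[::2], wire_ints[1::2])]
longest = max(diffs)

# one key (the longest distance) mapping to all matching pair numbers
result = {longest: [i + 1 for i, d in enumerate(diffs) if d == longest]}
print(result)  # {9: [1, 2]}
```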
| 3 | 2016-09-02T13:36:32Z | [
"python"
] |
Finding largest distance (on number scale) between n pairs of numbers | 39,293,520 | <p>I am trying to write an algorithm that finds the largest distance between two numbers, given n pairs of numbers.</p>
<p>Here is what I have so far.</p>
<p>Wire ints is my example numbers, with first pair being <code>1,10</code> and second pair being <code>1,10</code> and third pair being <code>7,7</code>.</p>
<pre><code>wire_ints = [10, 1, 1, 10, 7, 7]
longest_cases = {}
largest_length = 0
q = 0
y = 0
leftcounter = 0
rightcounter = 1
while q < len(wire_ints):
left_port = wire_ints[leftcounter]
right_port = wire_ints[rightcounter]
length_wire = left_port - right_port
wire_length = abs(length_wire)
leftcounter = leftcounter + 2
rightcounter = rightcounter + 2
q = q + 2
y = y + 1
if not longest_cases:
largest_length = wire_length
longest_cases[wire_length] = y
elif wire_length == largest_length:
longest_cases[wire_length] = y
elif wire_length > largest_length:
largest_length = wire_length
longest_cases.clear()
longest_cases[wire_length] = y
print(longest_cases)
</code></pre>
<p>This currently outputs <code>{9:2}</code>, and its not wrong. <code>9</code> is the greatest distance between any of these pairs of numbers. BUT, I want it to print <code>{9:1, 9:2}</code>.</p>
<p>The key in the dictionary refers to greatest length, and the value refers to the number of the pair in the original array. With the first 2 integers being pair 1, then the second pair 2, etc.</p>
<p>So, as the array has two pairs with the same length, it should output BOTH pairs in the dictionary.</p>
<p>I cannot figure this out.
Help!</p>
| 1 | 2016-09-02T13:21:41Z | 39,294,108 | <p>What you are missing is not keeping distance indexes in a list. You can provide it by the code below:</p>
<pre><code>longest_cases = {}
wire_ints = [10, 1, 1, 10, 7, 7]
leftIndex = 0
rightIndex = 1
largest_length = 0
y = 1
while leftIndex < len(wire_ints):
cur_length = abs(wire_ints[leftIndex] - wire_ints[rightIndex])
if cur_length in longest_cases:
currentList = longest_cases[cur_length]
else:
currentList = []
currentList.append(y)
longest_cases[cur_length] = currentList
leftIndex = leftIndex + 2
rightIndex = rightIndex + 2
y = y + 1
resultList = []
resultCase = 0
for k in longest_cases:
    if k > largest_length:
        largest_length = k
        resultList = longest_cases[k]
        resultCase = k
for index in resultList:
print resultCase, ":", index
</code></pre>
<p>Here is <a href="http://ideone.com/qX65xi" rel="nofollow">the link</a> that you can see the result.</p>
| 0 | 2016-09-02T13:52:24Z | [
"python"
] |
Django : Adding an extra feature to the User model of the auth App | 39,293,543 | <p>I want to add an extra feature to the User model of the auth app, but I don't want to get my hands dirty with the django source code, and I don't want to create a new User model which extends the auth app's User model. Is it possible? If yes, then how?</p>
<p>P.S I have seen django-eav but I want to permanently store the value of that feature in the database and display it in the admin site.</p>
<p>The answers in the question (<a href="http://stackoverflow.com/questions/44109/extending-the-user-model-with-custom-fields-in-django">Extending the User model with custom fields in Django</a>) suggests either extension or substitution. I don't want those </p>
| 0 | 2016-09-02T13:22:52Z | 39,293,717 | <p>There are a few ways to do this, but I'd recommend subclassing Django's Abstract User model. For example:</p>
<pre><code>from django.contrib.auth.models import AbstractUser
from django.db import models
from django.utils.translation import ugettext_lazy as _
class KarmaUser(AbstractUser):
karma = models.PositiveIntegerField(default=0, blank=True)
# Inside project/settings.py
AUTH_USER_MODEL = "profiles.KarmaUser"
</code></pre>
<p>More info here: <a href="https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#specifying-a-custom-user-model" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#specifying-a-custom-user-model</a></p>
<p>or you could try linking back to the user model with a OneToOneField e.g:</p>
<pre><code>from django.conf import settings
class CustomProfile(models.Model):
user = models.OneToOneField(settings.AUTH_USER_MODEL)
karma = models.PositiveIntegerField(default=0, blank=True)
</code></pre>
| 0 | 2016-09-02T13:32:29Z | [
"python",
"django"
] |
How to Filter Out Today's Date in Pandas Dataframe | 39,293,580 | <p>I have the following data frame:</p>
<pre><code>Company Date Value
ABC 08/21/16 00:00:00 500
ABC 08/22/16 00:00:00 600
ABC 08/23/16 00:00:00 650
ABC 08/24/16 00:00:00 625
ABC 08/25/16 00:00:00 675
ABC 08/26/16 00:00:00 680
</code></pre>
<p>If we assume that 26-August-2016 is today's date, then I would like to create a new data frame that effectively excludes the data in the 08/26/16 row.</p>
<p><strong>EDIT:</strong>
Here's my code to do so:</p>
<pre><code>today = time.strftime("%m/%d/%Y")
df.Date = df.Date <> today
</code></pre>
<p>Unfortunately, I see an error message indicating:</p>
<pre><code>'Series' object has no attribute 'Date'
</code></pre>
<p>Any idea how to resolve this?</p>
<p>Thanks!</p>
<p><strong>SOLUTION:</strong></p>
<pre><code>today = time.strftime("%Y-%m-%d")
df = df.loc[(df.Date < today)]
</code></pre>
| 1 | 2016-09-02T13:25:01Z | 39,294,800 | <p>I think you want to filter the dates that appear earlier than your specified date, though the title seems misleading. You must convert the <code>Date</code> column which are of dtype <code>object</code> to <code>datetime64</code>.</p>
<pre><code>In [22]: df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
In [23]: today = datetime.strptime('08/26/16', '%m/%d/%y')
In [24]: today
Out[24]: datetime.datetime(2016, 8, 26, 0, 0)
In [25]: df = df.loc[(df['Date'] < today)]
In [26]: df
Out[26]:
Company Date Value
0 ABC 2016-08-21 500
1 ABC 2016-08-22 600
2 ABC 2016-08-23 650
3 ABC 2016-08-24 625
4 ABC 2016-08-25 675
</code></pre>
| 2 | 2016-09-02T14:27:17Z | [
"python",
"datetime",
"pandas"
] |
Linked-list Data Structure understanding | 39,293,890 | <p>I have a little difficulty understanding one thing about the structure of linked lists. Basically, the nodes of a linked list are created using the following class, and the next reference is obtained by the method getNext(). I have omitted the other methods as they are not relevant to my problem:</p>
<pre><code>class Node:
def __init__(self,initdata):
self.data = initdata
self.next = None
def getNext(self):
return self.next
</code></pre>
<p>Now when creating a linkedlist and trying to find the size of the linked-list:</p>
<pre><code>class UnorderedList:
def __init__(self):
self.head = None
def size(self):
current = self.head
count = 0
while current != None:
count = count + 1
current = current.getNext() <-----
return count
</code></pre>
<p>I do not understand the line shown with an arrow. I know the logic that it tries to traverse to the next node, but getNext() is the method of the "NodeClass". how is it (getNext() method) being used by an object (i.e. current) which is not a NodeClass object? and actually it is an object of the "UnorderedList" class.</p>
| 1 | 2016-09-02T13:41:58Z | 39,294,830 | <p><code>current</code> is not an instance of <code>UnorderedList</code>; it starts out as <code>self.head</code>, which is either <code>None</code> or a <code>Node</code> object. Every call to <code>getNext()</code> returns another <code>Node</code> (or <code>None</code> at the end of the list), so while the loop runs, <code>current</code> always refers to a <code>Node</code>, and the <code>Node</code> methods apply to it. Nodes are added to the <code>UnorderedList</code> using the add method, which sets the head:</p>
<pre><code>def add(self,item):
temp = Node(item)
temp.setNext(self.head)
self.head = temp
</code></pre>
| 1 | 2016-09-02T14:28:43Z | [
"python",
"class",
"data-structures",
"linked-list"
] |
Merge, sum and removing duplicates with pandas | 39,293,946 | <p>I have two different data frames with different sizes just like this:</p>
<pre><code>df_web = (['Event Category', 'ID', 'Total Events',
'Unique Events', 'Event Value', 'Avg. Value'])
df_app = (['Event Category', 'ID', 'Total Events',
'Unique Events', 'Event Value', 'Avg. Value']
</code></pre>
<p>I'm using pandas to try to merge them into a 'df_final', but I want to sum the values of 'Total Events' which have the same 'ID', and in the end I would like to have a 'df_final' without duplicates in the ID.</p>
<p>I tried: </p>
<pre><code>df_final_analysis = df_web.groupby(['Event Category', 'ID', 'Total Events',
'Unique Events', 'Event Value', 'Avg. Value'],
as_index=False)['Total Events'].sum()
</code></pre>
<p>But it doesn't give me the result that I want.</p>
<p>For example:</p>
<p>df_web</p>
<pre><code> Video A 10
Video B 5
Video C 1
Video F 1
Video G 1
Video H 1
</code></pre>
<p>For df_app:</p>
<pre><code> Video A 15
Video D 3
Video C 1
</code></pre>
<p>For the df_final_analysis I want:</p>
<pre><code> Video A 25
Video B 5
Video D 3
Video C 2
Video F 1
Video G 1
Video H 1
</code></pre>
<p>Is there an elegant way to do this?</p>
| 0 | 2016-09-02T13:44:54Z | 39,295,541 | <p>Modified solution from your code using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">pd.concat</a>:</p>
<pre><code>In [46]: df
Out[46]:
0 1 2
0 Video A 10
1 Video B 5
2 Video C 1
3 Video F 1
4 Video G 1
5 Video H 1
In [47]: df1
Out[47]:
0 1 2
0 Video A 15
1 Video D 3
2 Video C 1
In[59]:pd.concat([df,df1]).groupby([0,1],as_index=False)[2].sum()
Out[59]:
0 1 2
0 Video A 25
1 Video B 5
2 Video C 2
3 Video D 3
4 Video F 1
5 Video G 1
6 Video H 1
</code></pre>
<p>using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">pd.merge</a> (note that the outer merge pairs up the duplicate <code>Video C</code> rows instead of stacking them, which is why it comes out as 1.0 below rather than the desired 2; the <code>concat</code> approach above is the safer one):</p>
<pre><code>In [60]: pd.merge(df,df1,how='outer').groupby([0,1],as_index=False)[2].sum()
Out[60]:
0 1 2
0 Video A 25.0
1 Video B 5.0
2 Video C 1.0
3 Video D 3.0
4 Video F 1.0
5 Video G 1.0
6 Video H 1.0
</code></pre>
| 0 | 2016-09-02T15:04:26Z | [
"python",
"pandas"
] |
(Python) How do i search directories and find files that match regex? | 39,293,968 | <p>I recently started getting into Python and I am having a hard time searching through directories and matching files based on a regex that I have created. Basically I want it to scan through all the directories in another directory, find all the files that end with .zip, .rar, or .r01, and then run various commands based on what file it is. </p>
<pre><code>import os, re
rootdir = "/mnt/externa/Torrents/completed"
for subdir, dirs, files in os.walk(rootdir):
if re.search('(w?.zip)|(w?.rar)|(w?.r01)', files):
print "match: " . files
</code></pre>
| -1 | 2016-09-02T13:46:16Z | 39,294,155 | <pre><code>import os
import re
rootdir = "/mnt/externa/Torrents/completed"
regex = re.compile('(.*zip$)|(.*rar$)|(.*r01$)')
for root, dirs, files in os.walk(rootdir):
for file in files:
if regex.match(file):
print(file)
</code></pre>
<p><strong>CODE BELOW ANSWERS QUESTION IN FOLLOWING COMMENT</strong></p>
<blockquote>
<p>That worked really well, is there a way to do this if match is found on regex group 1 and do this if match is found on regex group 2 etc ? - nillenilsson</p>
</blockquote>
<pre><code>import os
import re
regex = re.compile('(.*zip$)|(.*rar$)|(.*r01$)')
for root, dirs, files in os.walk("../Documents"):
    for file in files:
        res = regex.match(file)
if res:
if res.group(1):
print("ZIP",file)
if res.group(2):
print("RAR",file)
if res.group(3):
print("R01",file)
</code></pre>
<p>It might be possible to do this in a nicer way, but this works. </p>
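<p>One possible "nicer way", sketched without regex at all: plain <code>str.endswith</code> checks (the directory path is illustrative):</p>

```python
import os

# Map each interesting extension to a label; endswith avoids regex entirely
EXTS = ('.zip', '.rar', '.r01')

def classify(filename):
    """Return 'ZIP'/'RAR'/'R01' for a matching filename, or None."""
    for ext in EXTS:
        if filename.endswith(ext):
            return ext[1:].upper()
    return None

def find_archives(rootdir):
    for root, dirs, files in os.walk(rootdir):
        for f in files:
            kind = classify(f)
            if kind is not None:
                yield kind, os.path.join(root, f)

print(classify("movie.rar"))  # prints RAR
```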
| 2 | 2016-09-02T13:55:10Z | [
"python",
"linux",
"directory"
] |
Shrink pandas Df by deleting rows through modulo | 39,294,132 | <p>I need to reduce (or select) for example multiple of 4 of the index.
I have a 2MS dataframe and I want to get less data for a future plot, so the idea is to work with 1/4 of the data, leaving only the rows with index 4 - 8 - 16 - 20 - 4*n (or maybe the same but with 5*n).
If someone has any idea I will be grateful.</p>
| 1 | 2016-09-02T13:53:39Z | 39,294,197 | <p>You can use the <code>iloc</code> function, which takes a row/column slice. </p>
<p>From the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html" rel="nofollow">docs</a></p>
<blockquote>
<p>Purely integer-location based indexing for selection by position.</p>
<p>.iloc[] is primarily integer position based (from 0 to length-1 of the
axis), but may also be used with a boolean array.</p>
</blockquote>
<p>So you could write <code>df.iloc[::4, :]</code></p>
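<p>The <code>::4</code> step behaves exactly like Python's built-in sequence slicing, which a plain list illustrates:</p>

```python
# Plain-Python illustration of the step slicing used by df.iloc[::4, :]
rows = list(range(12))
every_fourth = rows[::4]   # keep positions 0, 4, 8
every_fifth = rows[::5]    # keep positions 0, 5, 10
print(every_fourth)  # [0, 4, 8]
```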
| 1 | 2016-09-02T13:57:25Z | [
"python",
"pandas",
"dataframe"
] |
Why does my QStandardItemModel itemFromIndex method return None? (index invalid) | 39,294,136 | <p>I am trying to make a listview with checkboxes that checks the selected boxes when the <strong>enter/return key</strong> is pressed. I do this with an override of the eventfilter for my MainWindow (yes I ought to subclass it, but I couldn't get that working)</p>
<p>In the eventfilter I get a <strong>None</strong> value returned from the <strong>itemFromIndex</strong> method even though I just passed the index through an <strong>.isValid()</strong> check without problems. Obviously I am missing something, but I can't figure it out - is it looking at completely different indices? Is the model not updated? </p>
<p>Any advice on alternate approaches is welcome</p>
<p>This is the method I use to fill the model (<strong>QStandardItemModel</strong>) with items; it's only called when I load a file.</p>
<pre><code> def update_siNLV(self,names,model):
model.clear()
for name in names:
item = Qg.QStandardItem(name)
item.setCheckState(Qc.Qt.Unchecked)
item.setCheckable(True)
model.appendRow(item)
</code></pre>
<p>This is from the <strong>init</strong> method where I create a variable for the selectionmodel and install the eventfilter on my <strong>QListView</strong></p>
<pre><code> self.sigInSelection = self.siNLV.selectionModel()
self.siNLV.installEventFilter(self)
</code></pre>
<p>The <strong>eventFilter</strong> method looks like this, and the filtering part of the method works (I've made it print the selected indices on pressing the enter key)</p>
<pre><code> def eventFilter(self,receiver,event):
if event.type() == QtCore.QEvent.KeyPress:
if event.key() == QtCore.Qt.Key_Return or event.key() == Qc.Qt.Key_Enter:
indexes = self.sigInSelection.selectedIndexes()
for index in indexes:
if index.isValid():
print(str(index.row())+" "+str(index.column()))
item = self.sigInModel.itemFromIndex(index)
item.setCheckState(qtCore.Qt.Checked)
return True
return super(form,self).eventFilter(receiver,event)
</code></pre>
| 0 | 2016-09-02T13:53:57Z | 39,326,817 | <p>As discussed in the comments:</p>
<p>The indices returned by <code>QItemSelectionModel.selectedIndexes()</code> come from the view and relate to the connection between the view and its immediate model. The identity of that model can be found by calling <code>QModelIndex.model()</code> and in this case it is not the model that you want: it is instead a proxy model that is in-between your desired <code>QStandardItemModel</code> and the view.</p>
<p>To get to the model you want you need to use <code>QAbstractProxyModel.mapToSource()</code>. So you might use code something like this:</p>
<pre><code>source_index = self.proxy.mapToSource(index)
item = self.sigInModel.itemFromIndex(source_index)
</code></pre>
<p>More generally you could traverse an arbitrary proxy structure and avoid this hard-coded usage of a single known proxy by code something like:</p>
<pre><code>proxy_model = index.model()
while proxy_model != self.sigInModel:
index = proxy_model.mapToSource(index)
proxy_model = index.model()
item = self.sigInModel.itemFromIndex(index)
</code></pre>
<p>But this is probably overkill in this case where you know there is a simple single proxy.</p>
| 0 | 2016-09-05T08:45:15Z | [
"python",
"qt",
"pyqt",
"qlistview",
"qstandarditemmodel"
] |
Finding most frequent combinations of numbers | 39,294,224 | <p>I have a list of 1000s of 7-number sequences and I want to know which combination of numbers are most frequent, ranging from 2 to 7 numbers.</p>
<p>So, for instance, in this list:</p>
<pre><code>1, 2, 3, 4, 5, 6, 7
1, 2, 4, 5, 6, 8, 9
1, 2, 9, 10, 12, 15, 27
</code></pre>
<p><code>[1, 2]</code> would be the highest scoring sequence in the 2-number category
<code>[1, 2, 4]</code> would be that for the 3-number category
etc.</p>
<p>I have a feeling numpy or another framework could help me with this but I don't have any grasp of statistics and I lack the necessary vocabulary to describe and hence find what I want.</p>
<p>Thanks in advance!</p>
| 0 | 2016-09-02T13:58:37Z | 39,294,789 | <p>You can use a data mining approach in order to achieve your goal: It is called frequent itemset mining.</p>
<p>Indeed, assuming that :</p>
<pre><code>1, 2, 3, 4, 5, 6, 7
1, 2, 4, 5, 6, 8, 9
1, 2, 9, 10, 12, 15, 27
</code></pre>
<p>is your transactions database, where a transaction is a row (for instance : 1, 2, 3, 4, 5, 6, 7), and a transaction contains items which are integers in your case. The goal is then to determine the most frequent itemsets (ie sets of items/integers which occure the most among the transaction database). pymining is a python library for achieving this kind of task (<a href="https://github.com/bartdag/pymining" rel="nofollow">https://github.com/bartdag/pymining</a>) </p>
| 1 | 2016-09-02T14:26:57Z | [
"python",
"numpy"
] |
Naming rotating log files specific name in Python | 39,294,275 | <p>I want to name rotating log files as I want.</p>
<p>For example, if I use RotatingFileHandler, It separates log file when it reaches to specific file size naming <code>log file name + extension numbering</code>, like below.</p>
<pre><code>filename.log #first log file
filename.log.1 #rotating log file1
filename.log.2 #rotating log file2
</code></pre>
<p>However, I want the log handler to name each file after the time it was created.
For example:</p>
<pre><code>09-01-12-20.log #first log file
09-01-12-43.log #rotating log file1
09-01-15-00.log #rotating log file2
</code></pre>
<p>How can I do this ?</p>
<p>Edit---------------------------</p>
<p>I am not asking how to create and name a file.</p>
<p>I want to leverage the Python <code>logging</code> package itself, e.g. by inheriting from and overriding its handlers.</p>
| 1 | 2016-09-02T14:00:49Z | 39,296,094 | <p>Check the following code and see if it helps. As far as I understand your question, if your issue is generating the filename from a timestamp, then this should work for you.</p>
<pre><code>import datetime, time
# This returns the epoch timestamp
epochTime = time.time()
# We generate the timestamp
# as per the need
timeStamp = datetime.datetime\
.fromtimestamp(epochTime)\
.strftime('%Y-%m-%d-%H-%M')
# Create a log file
# use timeStamp as filename
fo = open(timeStamp + ".log", "w")  # text mode, since we write a str
fo.write("Log data")
fo.close()
</code></pre>
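<p>Also worth knowing: the standard library already ships a time-based rotator, <code>logging.handlers.TimedRotatingFileHandler</code>, which appends a timestamp suffix to rotated files. A sketch (the file locations are illustrative):</p>

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()               # illustrative location
logfile = os.path.join(logdir, "app.log")

# Rotate every minute, keep the 5 most recent old files
handler = logging.handlers.TimedRotatingFileHandler(
    logfile, when="M", interval=1, backupCount=5)
# Rotated files get this suffix appended, e.g. app.log.2016-09-02-15-30
handler.suffix = "%Y-%m-%d-%H-%M"

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("first line")
handler.close()

print(os.path.exists(logfile))  # True
```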
| 1 | 2016-09-02T15:36:00Z | [
"python",
"logging"
] |
Naming rotating log files specific name in Python | 39,294,275 | <p>I want to name rotating log files as I want.</p>
<p>For example, if I use RotatingFileHandler, It separates log file when it reaches to specific file size naming <code>log file name + extension numbering</code>, like below.</p>
<pre><code>filename.log #first log file
filename.log.1 #rotating log file1
filename.log.2 #rotating log file2
</code></pre>
<p>However, I want the log handler to name each file after the time it was created.
For example:</p>
<pre><code>09-01-12-20.log #first log file
09-01-12-43.log #rotating log file1
09-01-15-00.log #rotating log file2
</code></pre>
<p>How can I do this ?</p>
<p>Edit---------------------------</p>
<p>I am not asking how to create and name a file.</p>
<p>I want to leverage the Python <code>logging</code> package itself, e.g. by inheriting from and overriding its handlers.</p>
| 1 | 2016-09-02T14:00:49Z | 39,351,132 | <p>I inherit and override <code>RotatingFileHandler</code> of python logging handler.</p>
<p>RotatingFileHandler has <code>self.baseFilename</code> value, the handler will use <code>self.baseFilename</code> to create logFile.(when it creates file first or when rollover happens)</p>
<p><code>self.shouldRollover()</code> method, It checks if the handler should rollover logfile or not.</p>
<p>If this method <code>return 1</code>, it means rollover should happen or <code>return 0</code>.</p>
<p>By overriding them, I define when this handler makes rollover and which name should be used for new log file by rollover.</p>
<p><em>-----------------------------------------Edit-----------------------------------------</em></p>
<p>I post the example code.</p>
<pre><code>import os
import datetime
from logging import handlers
class DailyRotatingFileHandler(handlers.RotatingFileHandler):
def __init__(self, alias, basedir, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=0):
"""
@summary:
Set self.baseFilename to date string of today.
The handler create logFile named self.baseFilename
"""
self.basedir_ = basedir
self.alias_ = alias
self.baseFilename = self.getBaseFilename()
handlers.RotatingFileHandler.__init__(self, self.baseFilename, mode, maxBytes, backupCount, encoding, delay)
def getBaseFilename(self):
"""
@summary: Return logFile name string formatted to "today.log.alias"
"""
self.today_ = datetime.date.today()
basename_ = self.today_.strftime("%Y-%m-%d") + ".log" + '.' + self.alias_
return os.path.join(self.basedir_, basename_)
def shouldRollover(self, record):
"""
@summary:
Rollover happen
1. When the logFile size is get over maxBytes.
2. When date is changed.
@see: BaseRotatingHandler.emit
"""
if self.stream is None:
self.stream = self._open()
if self.maxBytes > 0 :
msg = "%s\n" % self.format(record)
self.stream.seek(0, 2)
if self.stream.tell() + len(msg) >= self.maxBytes:
return 1
if self.today_ != datetime.date.today():
self.baseFilename = self.getBaseFilename()
return 1
return 0
</code></pre>
<p>This DailyRotatingFileHandler will create logfile like</p>
<pre><code>2016-10-05.log.alias
2016-10-05.log.alias.1
2016-10-05.log.alias.2
2016-10-06.log.alias
2016-10-06.log.alias.1
2016-10-07.log.alias.1
</code></pre>
| 0 | 2016-09-06T14:09:56Z | [
"python",
"logging"
] |
Get timestamp in seconds from python's datetime | 39,294,293 | <p>How to get timestamp from the structure datetime? What is right alternative for non-existing <code>datetime.utcnow().timestamp()</code>?</p>
| 0 | 2016-09-02T14:01:46Z | 39,294,363 | <pre><code>import time,datetime
time.mktime(datetime.datetime.today().timetuple())
</code></pre>
| 1 | 2016-09-02T14:06:03Z | [
"python",
"timestamp"
] |
Get timestamp in seconds from python's datetime | 39,294,293 | <p>How to get timestamp from the structure datetime? What is right alternative for non-existing <code>datetime.utcnow().timestamp()</code>?</p>
| 0 | 2016-09-02T14:01:46Z | 39,294,913 | <p>If you don't need to start from a <code>datetime</code> object, you can shorten this to:</p>
<pre><code>import time
print time.time()
</code></pre>
| 0 | 2016-09-02T14:32:29Z | [
"python",
"timestamp"
] |
Get timestamp in seconds from python's datetime | 39,294,293 | <p>How to get timestamp from the structure datetime? What is right alternative for non-existing <code>datetime.utcnow().timestamp()</code>?</p>
| 0 | 2016-09-02T14:01:46Z | 39,296,248 | <p>There is another stupid trick - compute a <code>timedelta</code>: </p>
<pre><code>(datetime.utcnow()-datetime(1970,1,1,0,0,0)).total_seconds()
</code></pre>
<p>found <a href="http://stackoverflow.com/a/7852969/3743145">here</a>. <strong>Better</strong> (note <code>utcfromtimestamp</code>: plain <code>fromtimestamp(0)</code> gives local time and would skew the result in non-UTC timezones):</p>
<pre><code>(datetime.utcnow()-datetime.utcfromtimestamp(0)).total_seconds()
</code></pre>
<p>And this solution contains subseconds.</p>
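<p>For whole seconds, a cleaner stdlib route is <code>calendar.timegm</code>, which, unlike <code>time.mktime</code>, interprets the time tuple as UTC and therefore pairs correctly with <code>utcnow()</code>:</p>

```python
import calendar
from datetime import datetime

# timegm treats the struct_time as UTC, so there is no timezone skew here
ts = calendar.timegm(datetime.utcnow().timetuple())
print(ts)
```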
| 0 | 2016-09-02T15:43:40Z | [
"python",
"timestamp"
] |
Get timestamp in seconds from python's datetime | 39,294,293 | <p>How to get timestamp from the structure datetime? What is right alternative for non-existing <code>datetime.utcnow().timestamp()</code>?</p>
| 0 | 2016-09-02T14:01:46Z | 39,296,289 | <p>If I understand correctly what sort of output you are seeking:</p>
<pre><code>from datetime import datetime
timestamp = datetime.now().strftime("%H:%M:%S")
print(timestamp)
> 11:44:40
</code></pre>
<p>EDIT: It appears I misinterpreted your question; if you are asking for the naive universal time, then galaxyan's answer is concise.</p>
| 0 | 2016-09-02T15:45:38Z | [
"python",
"timestamp"
] |
Python code to run for next 100 days from today | 39,294,317 | <p>I want to run below line of code for 200 days from today.
Suppose today is the 1st day, so my code is- </p>
<pre><code>line = linecache.getline("lines1.txt",1)
print(line)
</code></pre>
<p>Suppose today is the 2nd day, so my code is-</p>
<pre><code>line = linecache.getline("lines1.txt",2)
print(line)
</code></pre>
<p>Suppose today is the 3rd day, so my code becomes-</p>
<pre><code>line = linecache.getline("lines1.txt",3)
print(line)
</code></pre>
<p>I want the above 1,2,3,4,5......200 to be calculated every day. I don't want a loop to run for 200 days. </p>
<p>I just need to get the 1,2,3,4,5......200 count from Python's date system or any other counting system. </p>
| -2 | 2016-09-02T14:03:06Z | 39,294,475 | <p>Set a marker date, then calculate the difference between today and the marker date:</p>
<pre><code>(datetime.datetime(2016,9,2) - datetime.datetime(2016,9,1) ).days
output:
1
</code></pre>
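<p>Putting that together with <code>linecache</code>, a sketch (the start date and filename are placeholders from the question):</p>

```python
import datetime
import linecache

START = datetime.date(2016, 9, 1)   # the day the script treats as "day 1"

def day_number(today=None, start=START):
    """1 on the start date, 2 the next day, and so on."""
    if today is None:
        today = datetime.date.today()
    return (today - start).days + 1

# line = linecache.getline("lines1.txt", day_number())  # enable with a real file
print(day_number(datetime.date(2016, 9, 3)))  # 3
```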
| 1 | 2016-09-02T14:11:31Z | [
"python",
"python-3.x",
"linecache"
] |
How to run a sql query inside a WLST script | 39,294,327 | <p>As it is possible to access a specific weblogic jdbc datasource in wlst, how can we run a sql query on this datasource ?</p>
<p>Here is how I retrieve the specific datasource: </p>
<pre><code># Load the properties file with all necessary values
loadProperties('domain.properties')
# Go online
connect(adminusername, adminpassword,'t3://' + adminurl)
serverRuntime()
dsMBeans = cmo.getJDBCServiceRuntime().getJDBCDataSourceRuntimeMBeans()
oamDS = 'oamDS'
for ds in dsMBeans:
if (ds.getName() == oamDS):
#how to do something like this : ds.query('Select * from AM_REPLICATION_SETTINGS') ?
disconnect()
exit()
</code></pre>
| 0 | 2016-09-02T14:03:42Z | 39,308,688 | <p>Try this WLST command</p>
<pre><code>loadDB('12C', 'myDataSource', 'select name from emp')
</code></pre>
| -1 | 2016-09-03T15:47:41Z | [
"python",
"oracle",
"weblogic",
"wlst"
] |
How to specify argument types? | 39,294,350 | <p>I'm very much a Python newbie, but I've searched for a solution and I'm stumped. </p>
<p>I have defined a function which accepts several arguments:</p>
<pre><code>def func(arg1, arg2, arg3):
</code></pre>
<p>The first argument will be a string but the next two are always going to be integers. I need to construct the following <code>for</code> loop.</p>
<pre><code>for x in range(0, arg2 / 2):
</code></pre>
<p>The problem is that <code>arg2</code> is defaulting to type <code>float</code> and as a result I get the following:<br>
<code>TypeError: 'float' object cannot be interpreted as an integer</code></p>
<p>I have tried: </p>
<pre><code>for x in range(0, int(arg2) / 2):
</code></pre>
<p>But the same thing happens for some reason. How can I specify that <code>arg2</code> should be taken as an integer, or how can I reinterpret it as an integer?</p>
| 0 | 2016-09-02T14:05:32Z | 39,294,414 | <p>Apply the <code>int()</code> conversion (which truncates) to the whole expression:</p>
<pre><code>for x in range(int(arg2 / 2)):
</code></pre>
| 0 | 2016-09-02T14:08:32Z | [
"python",
"python-3.x",
"arguments",
"formats"
] |
How to specify argument types? | 39,294,350 | <p>I'm very much a Python newbie, but I've searched for a solution and I'm stumped. </p>
<p>I have defined a function which accepts several arguments:</p>
<pre><code>def func(arg1, arg2, arg3):
</code></pre>
<p>The first argument will be a string but the next two are always going to be integers. I need to construct the following <code>for</code> loop.</p>
<pre><code>for x in range(0, arg2 / 2):
</code></pre>
<p>The problem is that <code>arg2</code> is defaulting to type <code>float</code> and as a result I get the following:<br>
<code>TypeError: 'float' object cannot be interpreted as an integer</code></p>
<p>I have tried: </p>
<pre><code>for x in range(0, int(arg2) / 2):
</code></pre>
<p>But the same thing happens for some reason. How can I specify that <code>arg2</code> should be taken as an integer, or how can I reinterpret it as an integer?</p>
| 0 | 2016-09-02T14:05:32Z | 39,294,415 | <p>In Python 3.x:</p>
<p>When you <code>arg2 / 2</code> you will get a float. If you want an int try <code>arg2 // 2</code></p>
<p>For example <code>2 / 2 = 1.0</code> but <code>2 // 2 = 1</code> </p>
<p>But you will lose accuracy doing this, but since you want an int I'm assuming you want to round up or down anyways. </p>
<p>In Python 2.x, <code>/</code> is integer division by default (for int operands); to get float division, you had to make sure one of the numbers was a float.</p>
<p>Edited:
As @martineau pointed out, in both Python 2.x and 3.x, if the numerator or denominator is a float, / will do float division and // will produce a float result. To get around this, cast it to int or make sure it's an int...
<code>2.0 / 2 = 1.0</code> and <code>2.0 // 2 = 1.0</code> and <code>2 / 2.0 = 1.0</code> and <code>2 // 2.0 = 1.0</code>
<code>2.5 / 2 = 1.25</code> and <code>2.5 // 2 = 1.0</code> and <code>2 / 2.5 = 0.8</code> and <code>2 // 2.5 = 0.0</code></p>
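<p>A related subtlety when choosing between <code>int(x / 2)</code> and <code>x // 2</code>: they disagree for negative numbers, because <code>int()</code> truncates toward zero while <code>//</code> floors:</p>

```python
# int() truncates toward zero; // rounds toward negative infinity
print(int(-3 / 2))   # -1
print(-3 // 2)       # -2
print(int(3 / 2))    # 1
print(3 // 2)        # 1
```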
| 3 | 2016-09-02T14:08:32Z | [
"python",
"python-3.x",
"arguments",
"formats"
] |
SQLAlchemy GLOB | 39,294,367 | <p>I'm using SQLAlchemy over an SQLite backend and I want to perform the following sort of update: </p>
<pre><code>UPDATE Measurement SET MeasurementCampaign=? WHERE filename GLOB ?
</code></pre>
<p>But I can't find any equivalent GLOB functionality in the SQLAlchemy docs. I have quite a few wildcard expressions and I'd really rather not rewrite them. </p>
<p>I've tried using the SQL directly like this:</p>
<pre><code>session.query(Measurement).\
filter("WHERE filename GLOB {}".format(wildcard)).update({...})
</code></pre>
<p>But I don't think that's right. Should I create a GenericFunction for "GLOB" and if so, how would I go about it?</p>
<p>As I say, I have many wildcards to evaluate, so if I can perform the update on them all in one go so much the better!</p>
| 0 | 2016-09-02T14:06:08Z | 39,315,682 | <p>It turns out you can use the <a href="http://docs.sqlalchemy.org/en/latest/orm/internals.html?highlight=in_#sqlalchemy.orm.interfaces.PropComparator.op" rel="nofollow" title="op">'op'</a> function for exactly this:</p>
<pre><code>session.query(Measurement).filter(Measurement.filename.op('GLOB')(wildcard)).update({...})
</code></pre>
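<p>For reference, <code>GLOB</code> is SQLite's case-sensitive Unix-style wildcard match (<code>*</code> and <code>?</code>). A quick demo of those semantics with the stdlib <code>sqlite3</code> module (table and data are illustrative):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (filename TEXT)")
conn.executemany(
    "INSERT INTO measurement VALUES (?)",
    [("run_001.dat",), ("run_002.dat",), ("calib.dat",)],
)

# GLOB, unlike LIKE, is case-sensitive and uses * / ? wildcards
rows = conn.execute(
    "SELECT filename FROM measurement WHERE filename GLOB ?", ("run_*",)
).fetchall()
print(rows)
conn.close()
```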
| 0 | 2016-09-04T09:55:19Z | [
"python",
"sqlite",
"sqlalchemy"
] |
How can I find start and end occurrence of character in Python | 39,294,469 | <p>I have a dataframe <code>df</code> with the following ids (in <code>Col</code>).
The last occurrence of A/B/C represents the start, and the last occurrence of X is the end. I should ignore any other A,B,C between start and end (e.g. rows 8 and 9).</p>
<p>I have to find start and end records from this data and assign a number to each of these occurrences. The column <code>count</code> is my desired output:</p>
<pre><code> Col ID
P
Q
A
A
A 1
Q 1
Q 1
B 1
C 1
S 1
S 1
X 1
X 1
X 1
Q
Q
R
R
C
C 2
D 2
E 2
B 2
K 2
D 2
E 2
E 2
X 2
X 2
</code></pre>
<p>This code: </p>
<pre><code>lc1 = df.index[df.Col.eq('A') & df.Col.ne(df.Col.shift(-1))]
</code></pre>
<p>would give me an array of all the last occurrences of Index values of 'A', in this case <code>[5]</code>.</p>
<pre><code>lc1 = df.index[df.Col.eq('C') & df.Col.ne(df.Col.shift(-1))] # [20]
lc2 = df.index[df.Col.eq('X') & df.Col.ne(df.Col.shift(-1))] # [14,29]
</code></pre>
<p>I would use <code>iloc</code> to print the count values:</p>
<pre><code>df.iloc[5:14]['count'] = 1
df.iloc[20:29]['count'] = 2
</code></pre>
<p>How can I find the indices of A/B/C together and print the count values of each start and end occurrence?</p>
| 4 | 2016-09-02T14:11:20Z | 39,350,539 | <p>To find your indices of A, B, and C you can do:</p>
<pre><code>df[(df.Col =='A')|(df.Col =='B')|(df.Col =='C')].index
</code></pre>
<p>Print your start counts:</p>
<pre><code>df1 = df[df['count'] != df['count'].shift(+1)]
print df1[df1['count'] != 0]['count']
</code></pre>
<p>Print your end counts:</p>
<pre><code>df2 = df[df['count'] != df['count'].shift(-1)]
print df2[df2['count'] != 0]['count']
</code></pre>
<p>On a side note, calling a column <code>count</code> is a bad idea, because <code>count</code> is a method of the DataFrame and you then get ambiguity when doing <code>df.count</code>.</p>
<p>EDIT: Corrected, since I was answering the wrong question.</p>
| 1 | 2016-09-06T13:42:15Z | [
"python",
"pandas",
"indexing",
"shift",
"find-occurrences"
] |
django-subdomains config localhost | 39,294,559 | <p>I'm having trouble getting mine configured correctly.</p>
<p>Have sites enabled and the site domain for "SITE_ID = 1" (db object 1 domain) set to "mysite.app"</p>
<p>I have these subdomains setup</p>
<pre><code>ROOT_URLCONF = 'mysite.urls'
SUBDOMAIN_URLCONFS = {
None: 'frontend.urls', # no subdomain, e.g. ``example.com``
'www': 'frontend.urls',
'api': 'api.urls',
}
</code></pre>
<p>etc/hosts file</p>
<pre><code>127.0.0.1 api.mysite.app
127.0.0.1 www.mysite.app
127.0.0.1 mysite.app
</code></pre>
<p>api/urls.py</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index),
]
</code></pre>
<p>api/views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
def index(request):
return HttpResponse('API')
</code></pre>
<p>The frontend app's urls and views are identical except for returning the string "FRONTEND" in the HttpResponse object.</p>
<p>I can tell django-subdomains is working because it does go to the "frontend" app when I hit "mysite.app:8000" vs the mysite.urls as seen in the root_url_conf. It displays "FRONTEND"</p>
<p>But no matter what I do I can't get "api.mysite.app:8000" to hit the api urls file to display "API"</p>
<p>Am I missing something? I'm very new to django. Any help is appreciated.</p>
<p>Thanks.</p>
| 0 | 2016-09-02T14:15:32Z | 39,299,755 | <p>Try <a href="http://api.127.0.0.1.xip.io:8000/" rel="nofollow">http://api.127.0.0.1.xip.io:8000/</a> with your dev server running at 127.0.0.1:8000. Since 127.0.0.1 isn't really a domain but an IP, it can't have subdomains. And since your hosts file redirects to 127.0.0.1 not 127.0.0.1.xip.io or similar (you don't have to do it for testing either) you won't be connected.<br>
I haven't used the library that you mentioned but from experience with self-written snippets for subdomains, I'd say this is the case.</p>
| 0 | 2016-09-02T19:48:24Z | [
"python",
"django"
] |
django-subdomains config localhost | 39,294,559 | <p>I'm having trouble getting mine configured correctly.</p>
<p>Have sites enabled and the site domain for "SITE_ID = 1" (db object 1 domain) set to "mysite.app"</p>
<p>I have these subdomains setup</p>
<pre><code>ROOT_URLCONF = 'mysite.urls'
SUBDOMAIN_URLCONFS = {
None: 'frontend.urls', # no subdomain, e.g. ``example.com``
'www': 'frontend.urls',
'api': 'api.urls',
}
</code></pre>
<p>etc/hosts file</p>
<pre><code>127.0.0.1 api.mysite.app
127.0.0.1 www.mysite.app
127.0.0.1 mysite.app
</code></pre>
<p>api/urls.py</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index),
]
</code></pre>
<p>api/views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
def index(request):
return HttpResponse('API')
</code></pre>
<p>The frontend app's urls and views are identical except for returning the string "FRONTEND" in the HttpResponse object.</p>
<p>I can tell django-subdomains is working because it does go to the "frontend" app when I hit "mysite.app:8000" vs the mysite.urls as seen in the root_url_conf. It displays "FRONTEND"</p>
<p>But no matter what I do I can't get "api.mysite.app:8000" to hit the api urls file to display "API"</p>
<p>Am I missing something? I'm very new to django. Any help is appreciated.</p>
<p>Thanks.</p>
| 0 | 2016-09-02T14:15:32Z | 39,299,976 | <p>Simply had to restart the dev server. All was configured correctly. </p>
| 1 | 2016-09-02T20:05:07Z | [
"python",
"django"
] |
Check if values in list exceed threshold a certain amount of times and return index of first exceedance | 39,294,564 | <p>I am searching for a clean and pythonic way of checking if the contents of a list are greater than a given number (first threshold) for a certain number of times (second threshold). If both statements are true, I want to return the index of the first value which exceeds the given threshold.</p>
<p><strong>Example</strong>:</p>
<pre><code># Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
</code></pre>
| 1 | 2016-09-02T14:15:44Z | 39,294,786 | <p>A naive and straightforward approach would be to iterate over the list counting the number of items greater than the first threshold and returning the index of the first match if the count exceeds the second threshold:</p>
<pre><code>def answer(l, thr1, thr2):
count = 0
first_index = None
for index, item in enumerate(l):
if item > thr1:
count += 1
            if first_index is None:  # 'not first_index' would wrongly reset when the first match is at index 0
first_index = index
if count >= thr2: # TODO: check if ">" is required instead
return first_index
thr1 = 4
thr2 = 5
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
print(answer(list1, thr1, thr2)) # prints 3
print(answer(list2, thr1, thr2)) # prints None
</code></pre>
<p>This is probably not quite pythonic, but this solution has a couple of advantages - we <em>keep the index of the first match only</em> and have an <em>early exit out of the loop</em> if we hit the second threshold.</p>
<p>In other words, we have <code>O(k)</code> in the best case and <code>O(n)</code> in the worst case, where <code>k</code> is the number of items before reaching the second threshold; <code>n</code> is the total number of items in the input list.</p>
| 0 | 2016-09-02T14:26:51Z | [
"python",
"list",
"iterator",
"threshold"
] |
Check if values in list exceed threshold a certain amount of times and return index of first exceedance | 39,294,564 | <p>I am searching for a clean and pythonic way of checking if the contents of a list are greater than a given number (first threshold) for a certain number of times (second threshold). If both statements are true, I want to return the index of the first value which exceeds the given threshold.</p>
<p><strong>Example</strong>:</p>
<pre><code># Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
</code></pre>
| 1 | 2016-09-02T14:15:44Z | 39,294,790 | <p>I don't know if I'd call it clean or pythonic, but this should work</p>
<pre><code>def get_index(list1, thr1, thr2):
cnt = 0
    first_element = None
    for i in list1:
        if i > thr1:
            cnt += 1
            if first_element is None:
                first_element = i
    if cnt >= thr2:
return list1.index(first_element)
else:
return "criteria not met"
</code></pre>
| 0 | 2016-09-02T14:27:00Z | [
"python",
"list",
"iterator",
"threshold"
] |
Check if values in list exceed threshold a certain amount of times and return index of first exceedance | 39,294,564 | <p>I am searching for a clean and pythonic way of checking if the contents of a list are greater than a given number (first threshold) for a certain number of times (second threshold). If both statements are true, I want to return the index of the first value which exceeds the given threshold.</p>
<p><strong>Example</strong>:</p>
<pre><code># Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
</code></pre>
| 1 | 2016-09-02T14:15:44Z | 39,294,793 | <p>Try this:</p>
<pre><code>def check_list(testlist):
overages = [x for x in testlist if x > thr1]
if len(overages) >= thr2:
return testlist.index(overages[0])
# This return is not needed. Removing it will not change
# the outcome of the function.
return None
</code></pre>
<p>This uses the fact that you can use if statements in list comprehensions to ignore non-important values.</p>
<p>As mentioned by Chris_Rands in the comments, the <code>return None</code> is unnecessary. Removing it will not change the result of the function.</p>
| 1 | 2016-09-02T14:27:02Z | [
"python",
"list",
"iterator",
"threshold"
] |
Check if values in list exceed threshold a certain amount of times and return index of first exceedance | 39,294,564 | <p>I am searching for a clean and pythonic way of checking if the contents of a list are greater than a given number (first threshold) for a certain number of times (second threshold). If both statements are true, I want to return the index of the first value which exceeds the given threshold.</p>
<p><strong>Example</strong>:</p>
<pre><code># Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
</code></pre>
| 1 | 2016-09-02T14:15:44Z | 39,294,795 | <pre><code>thr1 = 4
thr2 = 5
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
def func(lst):
res = [ i for i,j in enumerate(lst) if j > thr1]
return len(res)>=thr2 and res[0]
</code></pre>
<p>Output:</p>
<pre><code>func(list1)
3
func(list2)
False
</code></pre>
| 0 | 2016-09-02T14:27:05Z | [
"python",
"list",
"iterator",
"threshold"
] |
Check if values in list exceed threshold a certain amount of times and return index of first exceedance | 39,294,564 | <p>I am searching for a clean and pythonic way of checking if the contents of a list are greater than a given number (first threshold) for a certain number of times (second threshold). If both statements are true, I want to return the index of the first value which exceeds the given threshold.</p>
<p><strong>Example</strong>:</p>
<pre><code># Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
</code></pre>
| 1 | 2016-09-02T14:15:44Z | 39,294,893 | <p>If you are looking for a one-liner (or almost)</p>
<pre><code>a = filter(lambda z: z is not None, map(lambda (i, elem) : i if elem > thr1 else None, enumerate(list1)))
print a[0] if len(a) >= thr2 else False
</code></pre>
| 0 | 2016-09-02T14:31:32Z | [
"python",
"list",
"iterator",
"threshold"
] |
Check if values in list exceed threshold a certain amount of times and return index of first exceedance | 39,294,564 | <p>I am searching for a clean and pythonic way of checking if the contents of a list are greater than a given number (first threshold) for a certain number of times (second threshold). If both statements are true, I want to return the index of the first value which exceeds the given threshold.</p>
<p><strong>Example</strong>:</p>
<pre><code># Set first and second threshold
thr1 = 4
thr2 = 5
# Example 1: Both thresholds exceeded, looking for index (3)
list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
# Example 2: Only threshold 1 is exceeded, no index return needed
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
</code></pre>
| 1 | 2016-09-02T14:15:44Z | 39,295,068 | <p>I don't know if it's considered pythonic to abuse the fact that booleans are ints but I like doing like this</p>
<pre><code>def check(l, thr1, thr2):
c = [n > thr1 for n in l]
if sum(c) >= thr2:
return c.index(1)
</code></pre>
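<p>For completeness, a quick check of this against the two example lists from the question:</p>

```python
def check(l, thr1, thr2):
    # booleans are ints: True counts as 1, so sum(c) counts exceedances
    # and c.index(1) finds the first True
    c = [n > thr1 for n in l]
    if sum(c) >= thr2:
        return c.index(1)

list1 = [1, 1, 1, 5, 1, 6, 7, 3, 6, 8]
list2 = [1, 1, 6, 1, 1, 1, 2, 1, 1, 1]
print(check(list1, 4, 5))  # 3
print(check(list2, 4, 5))  # None
```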
| 3 | 2016-09-02T14:40:21Z | [
"python",
"list",
"iterator",
"threshold"
] |
Convert SQL result from self-join to square pandas dataframe | 39,294,602 | <p>Here's a tl;dr version of what I'm after; the details are below:
A SQL query gives me a table with fields [person 1 id], [person 2 id], and [number of times they were in a group together]. I want to convert to a pandas dataframe that's square -- one row per person and one column per person, with the value of each element being number of times they were in a group together. I'm looking for a more elegant way to do that than going through the rows of my result and filling up the dataframe one element at a time.</p>
<hr>
<p>I have a database with a table of assignments, which has a column for the person_id and a column for the assignment_id. It has other stuff too, but for our purposes here, this is what matters:</p>
<pre><code>SELECT person_id, assignment_id FROM assignments;
</code></pre>
<pre>
person_id | assignment_id
----------+--------------
385 | 42
163 | 29
51 | 42
385 | 37
163 | 37
...
</pre>
<p>I want to see how often any two people have been on the same assignment. So I do:</p>
<pre><code>SELECT a1.person_id AS p1_id, a2.person_id AS p2_id, COUNT(*)
FROM assignments AS a1
INNER JOIN assignments AS a2 ON a1.assignment_id = a2.assignment_id AND a1.person_id < a2.person_id
GROUP BY a1.person_id, a2.person_id
</code></pre>
<p>Which gives output like:</p>
<pre>
p1_id | p2_id | count
------+-------+------
51 | 385 | 1
163 | 385 | 1
...
</pre>
<p>Now I'm building a Python script to access the data and want to dump it into a dataframe with a row for each person, a column for each person, and the cell having the number of times they shared an assignment. So the output would be something like this (I don't care what goes in the * cells -- could reasonably be 0 or the number of assignments the person did -- and don't really care whether how the first row and column are formatted):</p>
<pre>
p1_id | p_51 | p_163 | p_385
-------+--------+--------+--------
51 | * | 0 | 1
163 | 0 | * | 1
385 | 1 | 1 | *
</pre>
<p>I'll only have about 20 people, so it wouldn't hurt performance measurably to just set the values one by one, but I'm trying to learn good practice for when I have larger data sets. What's the right way to do something like this?</p>
<p>(I'm open to modifying the SQL query, if that's the best way to handle it.)</p>
| 1 | 2016-09-02T14:17:17Z | 39,296,287 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> after converting the required columns to type <code>str</code> and aggregate by joining them as well as taking their counts.</p>
<pre><code>df[['person_id', 'assignment_id']] = df[['person_id', 'assignment_id']].astype(str)
df = df.groupby(['assignment_id'], as_index=False, sort=False)['person_id'] \
.agg({'col':','.join})['col'] \
.str.split(',').apply(lambda x: sorted(x, reverse=True)) \
.apply(pd.Series).add_prefix('p_id_') \
.set_index('p_id_0', drop=False)
</code></pre>
<p>You could simplify further by using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow"><code>get_dummies</code></a> to obtain the indicator variables for the index, <code>p_id_0</code> as shown:</p>
<pre><code>df1 = pd.get_dummies(df['p_id_1']).add_prefix('p_')
print (df1)
p_163 p_385
p_id_0
51 0.0 1.0
163 0.0 0.0
385 1.0 0.0
df2 = pd.get_dummies(df['p_id_0']).add_prefix('p_')
print (df2)
p_163 p_385 p_51
p_id_0
51 0.0 0.0 1.0
163 1.0 0.0 0.0
385 0.0 1.0 0.0
</code></pre>
<p>Then, concatenating these individual <code>dataframes</code> after mapping all values of the indexed frame to 0's, followed by grouping the columns that share the same name together:</p>
<pre><code>df_final = pd.concat([df1, df2.applymap(lambda x: 0)], axis=1).add_prefix('p_')
print (df_final.groupby(df.columns, axis=1).sum())
p_163 p_385 p_51
p_id_0
51 0.0 1.0 0.0
163 0.0 0.0 0.0
385 1.0 0.0 0.0
</code></pre>
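<p>For reference, the square co-occurrence frame the question asks for can also be built directly from the SQL output with <code>pivot_table</code>. This is a sketch with the two pair rows from the question hard-coded as an assumption:</p>

```python
import pandas as pd

# Pair counts as produced by the SQL self-join in the question (assumed here)
pairs = pd.DataFrame({'p1_id': [51, 163], 'p2_id': [385, 385], 'count': [1, 1]})

# Mirror the pairs so both (p1, p2) orderings appear, then pivot to a square matrix
mirrored = pd.concat([pairs,
                      pairs.rename(columns={'p1_id': 'p2_id', 'p2_id': 'p1_id'})])
people = sorted(set(mirrored['p1_id']) | set(mirrored['p2_id']))
square = (mirrored.pivot_table(index='p1_id', columns='p2_id',
                               values='count', fill_value=0)
                  .reindex(index=people, columns=people, fill_value=0))
print(square)
```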
| 1 | 2016-09-02T15:45:23Z | [
"python",
"postgresql",
"pandas"
] |
Multiple dictionaries causing AttributeError? | 39,294,613 | <p>I have 1 variable that contains multiple dictionaries:</p>
<pre><code>a = {"foo": "foo"}, {"foo2": "foo2"}
</code></pre>
<p>But if I do:</p>
<pre><code>a.get("foo")
</code></pre>
<p>it returns as <code>AttributeError</code>:</p>
<pre class="lang-none prettyprint-override"><code>AttributeError: 'tuple' object has no attribute 'get'
</code></pre>
 | 1 | 2016-09-02T14:18:01Z | 39,294,626 | <p>You're assigning to the variable a tuple of two elements which are dicts. </p>
<p>This:</p>
<pre><code>a = {"foo": "foo"}, {"foo2": "foo2"}
</code></pre>
<p>is equivalent to:</p>
<pre><code>a = ({"foo": "foo"}, {"foo2": "foo2"})
</code></pre>
<p>so you cannot access the dictionary the way you try. </p>
<pre><code>AttributeError: 'tuple' object has no attribute 'get'
</code></pre>
<p>tells you that you are trying to use the <code>get</code> attribute on a tuple, but a tuple doesn't have it.</p>
<p><strong>Solving the issue</strong>:</p>
<p>You can assign to <code>a</code> variable for example one dict:</p>
<pre><code>a = {"foo": "foo", "foo2": "foo2"}
</code></pre>
<p>and in this case you can use:</p>
<pre><code>a.get("foo")
</code></pre>
<p>which will print </p>
<pre><code>foo
</code></pre>
| 3 | 2016-09-02T14:18:38Z | [
"python",
"dictionary",
"tuples",
"attributeerror"
] |
Multiple dictionaries causing AttributeError? | 39,294,613 | <p>I have 1 variable that contains multiple dictionaries:</p>
<pre><code>a = {"foo": "foo"}, {"foo2": "foo2"}
</code></pre>
<p>But if I do:</p>
<pre><code>a.get("foo")
</code></pre>
<p>it returns as <code>AttributeError</code>:</p>
<pre class="lang-none prettyprint-override"><code>AttributeError: 'tuple' object has no attribute 'get'
</code></pre>
 | 1 | 2016-09-02T14:18:01Z | 39,294,752 | <p>There is no "multiple dictionaries" type in Python.
If you define <code>a</code> as:</p>
<pre><code>a = {"foo": "foo"}, {"foo2": "foo2"}
</code></pre>
<p><code>a</code> will be a <code>tuple</code>. So you have to call the element as follow:</p>
<pre><code>a[0].get("foo")
</code></pre>
<p>To use <code>a.get</code> method you have to define <code>a</code> as follow:</p>
<pre><code>a = {"foo": "foo", "foo2": "foo2"}
</code></pre>
<p>Now <code>a.get("foo")</code> call will have as output <code>"foo"</code>.</p>
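<p>A short check of both variants discussed above:</p>

```python
a = {"foo": "foo"}, {"foo2": "foo2"}
print(type(a).__name__)    # tuple -- the comma makes this a tuple of dicts
print(a[0].get("foo"))     # foo   -- index into the tuple first

merged = {"foo": "foo", "foo2": "foo2"}
print(merged.get("foo2"))  # foo2  -- a single dict supports .get directly
```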
| 2 | 2016-09-02T14:24:31Z | [
"python",
"dictionary",
"tuples",
"attributeerror"
] |
Custom date string to Python date object | 39,294,700 | <p>I am using Scrapy to parse data and I am getting dates in <code>Jun 14, 2016
</code> format. I have tried to parse them with <code>datetime.strftime</code>, but without success.</p>
<p>What approach should I use to convert custom date strings, and what should I do in my case?</p>
<p><strong>UPDATE</strong></p>
<p>I want to parse UNIX timestamp to save in database.</p>
| -1 | 2016-09-02T14:22:00Z | 39,295,034 | <p>Something like this should work:</p>
<pre><code>import time
import datetime
datestring = "September 2, 2016"
unixdatetime = int(time.mktime(datetime.datetime.strptime(datestring, "%B %d, %Y").timetuple()))
print(unixdatetime)
</code></pre>
<p>Returns: <code>1472792400</code></p>
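<p>Note that <code>time.mktime</code> interprets the <code>struct_time</code> in the machine's local timezone, so the value above is timezone-dependent. A sketch that pins the timestamp to UTC and handles the question's abbreviated month format ("Jun 14, 2016", matched by <code>%b</code> rather than <code>%B</code>):</p>

```python
from datetime import datetime, timezone

# %b matches an abbreviated month name ("Jun"); %B matches the full name
datestring = "Jun 14, 2016"
dt = datetime.strptime(datestring, "%b %d, %Y").replace(tzinfo=timezone.utc)
print(int(dt.timestamp()))  # 1465862400
```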
| 0 | 2016-09-02T14:38:51Z | [
"python",
"date",
"converter",
"python-3.5"
] |
unite 2 strings from array Python | 39,294,723 | <p>I want to unite 2 strings from an array extracted from a text file:</p>
<pre><code>for n in arrayhere:
for i in arrayhere:
newvariable = n+i
</code></pre>
<p>I also tried <code>x.split()</code> in both for loops and <code>str(n+i)</code>.</p>
<p>When I print <code>newvariable</code> or write it to the text file, the values are not printed or written on the same line.</p>
 | 0 | 2016-09-02T14:23:01Z | 39,294,834 | <p>Add this line to your code prior to looping through the elements in your list (they're not called arrays in Python, but I'll refer to the list as <code>arrayhere</code> just like you did so you can substitute the code easily):</p>
<pre><code>arrayhere = [x.strip('\n') for x in arrayhere]
</code></pre>
<p>This will strip the newline characters from the elements in your list. The <code>\n</code> characters at the end of each element are what's causing your <code>newvariable = n + i</code> to print on separate lines.</p>
<p>This would be an ideal way to read your text file into the list:</p>
<pre><code>with open('/path/to/file') as f:
arrayhere = f.readlines()
arrayhere = [x.strip('\n') for x in arrayhere]
</code></pre>
<p>... and then you can do as you were previously doing:</p>
<pre><code>for n in arrayhere:
for i in arrayhere:
newvariable = n + i
</code></pre>
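<p>A minimal reproduction of the problem and the fix, with assumed sample lines:</p>

```python
# Lines read with readlines() keep their trailing newlines
lines = ["abc\n", "def\n"]
print(repr(lines[0] + lines[1]))      # 'abc\ndef\n' -- prints on two lines

# Stripping the newlines first makes the concatenation a single line
cleaned = [x.strip('\n') for x in lines]
print(repr(cleaned[0] + cleaned[1]))  # 'abcdef'
```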
| 0 | 2016-09-02T14:28:56Z | [
"python",
"arrays",
"string",
"for-loop"
] |
Matplotlib: How to increase colormap/linewidth quality in streamplot? | 39,294,987 | <p>I have the following code to generate a streamplot based on an <code>interp1d</code>-Interpolation of discrete data:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d
# CSV Import
a1array=pd.read_csv('a1.csv', sep=',',header=None).values
rv=a1array[:,0]
a1v=a1array[:,1]
da1vM=a1array[:,2]
a1 = interp1d(rv, a1v)
da1M = interp1d(rv, da1vM)
# Bx and By vector components
def bx(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 0
else:
return x*y/rad**4*(-2*a1(rad)+rad*da1M(rad))/2.87445E-19*1E-12
def by(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 4.02995937E-04/2.87445E-19*1E-12
else:
return -1/rad**4*(2*a1(rad)*y**2+rad*da1M(rad)*x**2)/2.87445E-19*1E-12
Bx = np.vectorize(bx, otypes=[np.float])
By = np.vectorize(by, otypes=[np.float])
# Grid
num_steps = 11
Y, X = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(X, Y)
Vy = By(X, Y)
speed = np.sqrt(Bx(X, Y)**2+By(X, Y)**2)
lw = 2*speed / speed.max()+.5
# Star Radius
circle3 = plt.Circle((0, 0), 16.3473140, color='black', fill=False)
# Plot
fig0, ax0 = plt.subplots(num=None, figsize=(11,9), dpi=80, facecolor='w', edgecolor='k')
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.add_artist(circle3)
cbar=fig0.colorbar(strm.lines,fraction=0.046, pad=0.04)
cbar.set_label('B[GT]', rotation=270, labelpad=8)
cbar.set_clim(0,1500)
cbar.draw_all()
ax0.set_ylim([-25,25])
ax0.set_xlim([-25,25])
ax0.set_xlabel('x [km]')
ax0.set_ylabel('z [km]')
ax0.set_aspect(1)
plt.title('polyEos(0.05,2), M/R=0.2, B_r(0,0)=1402GT', y=1.01)
plt.savefig('MR02Br1402.pdf',bbox_inches=0)
plt.show(fig0)
</code></pre>
<p>I uploaded the csv-file here if you want to try some stuff <a href="https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0">https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0</a>.
Which generates the following plot:
<a href="http://i.stack.imgur.com/pYGPi.png"><img src="http://i.stack.imgur.com/pYGPi.png" alt="StreamPlot"></a></p>
<p>I am actually pretty happy with the result except for one small detail, which I can not figure out: If one looks closely the linewidth and the color change in rather big steps, which is especially visible at the center:</p>
<p><a href="http://i.stack.imgur.com/zG67s.png"><img src="http://i.stack.imgur.com/zG67s.png" alt="Problem"></a></p>
<p>Is there some way/option with which I can decrease the size of these steps, especially to make the colormap smoother?</p>
 | 8 | 2016-09-02T14:36:31Z | 39,352,147 | <p>I think your best bet is to use a colormap other than jet. Perhaps <code>cmap=plt.cm.plasma</code>.</p>
<p><a href="http://i.stack.imgur.com/Vw0Gc.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vw0Gc.png" alt="The same graph with the plasma colormap"></a></p>
<p>Weird-looking graphs obscure understanding of the data.</p>
<p>For data which is ordered in some way, like by the speed vector magnitude in this case, uniform sequential colormaps will always look smoother. The brightness of sequential maps varies monotonically over the color range, removing large perceived color changes over small ranges of data. The uniform maps vary linearly over their whole range, which makes the main features in the data much more visually apparent.</p>
<p><img src="http://matplotlib.org/_images/lightness_00.png" alt="Perceptually Uniform Sequential Color Maps"></p>
<p>The jet colormap spans a very wide variety of brightnesses over its range with an inflexion in the middle. This is responsible for the particularly egregious red to blue transition around the center region of your graph.</p>
<p><img src="http://matplotlib.org/_images/lightness_05.png" alt="Jet and other Non-Sequential Color Maps"></p>
<p><a href="http://matplotlib.org/users/colormaps.html" rel="nofollow">The matplotlib user guide on choosing a color map</a> has a few recommendations about selecting an appropriate map for a given data set.</p>
<p>I don't think there is much else you can do to improve this by just changing parameters in your plot.</p>
<p>The streamplot divides the graph into cells with <code>30*density[x,y]</code> in each direction; at most one streamline goes through each cell. The only setting which directly increases the number of segments is the density of the grid matplotlib uses. Increasing the Y density will decrease the segment length so that the middle region may transition more smoothly. The cost of this is an inevitable cluttering of the graph in regions where the streamlines are horizontal. </p>
<p>You could also try to normalise the speeds differently so that the change is artificially lowered near the center. At the end of the day though it seems like it defeats the point of the graph. The graph should provide a useful view of the data for a human to understand. Using a colormap with strange inflexions or warping the data so that it looks nicer removes some understanding which could otherwise be obtained from looking at the graph.</p>
<p>A more detailed discussion about the issues with colormaps like jet can be found on this <a href="https://mycarta.wordpress.com/2012/05/12/the-rainbow-is-dead-long-live-the-rainbow-part-1/" rel="nofollow">blog</a>.</p>
| 0 | 2016-09-06T14:58:21Z | [
"python",
"numpy",
"matplotlib"
] |
Matplotlib: How to increase colormap/linewidth quality in streamplot? | 39,294,987 | <p>I have the following code to generate a streamplot based on an <code>interp1d</code>-Interpolation of discrete data:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d
# CSV Import
a1array=pd.read_csv('a1.csv', sep=',',header=None).values
rv=a1array[:,0]
a1v=a1array[:,1]
da1vM=a1array[:,2]
a1 = interp1d(rv, a1v)
da1M = interp1d(rv, da1vM)
# Bx and By vector components
def bx(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 0
else:
return x*y/rad**4*(-2*a1(rad)+rad*da1M(rad))/2.87445E-19*1E-12
def by(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 4.02995937E-04/2.87445E-19*1E-12
else:
return -1/rad**4*(2*a1(rad)*y**2+rad*da1M(rad)*x**2)/2.87445E-19*1E-12
Bx = np.vectorize(bx, otypes=[np.float])
By = np.vectorize(by, otypes=[np.float])
# Grid
num_steps = 11
Y, X = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(X, Y)
Vy = By(X, Y)
speed = np.sqrt(Bx(X, Y)**2+By(X, Y)**2)
lw = 2*speed / speed.max()+.5
# Star Radius
circle3 = plt.Circle((0, 0), 16.3473140, color='black', fill=False)
# Plot
fig0, ax0 = plt.subplots(num=None, figsize=(11,9), dpi=80, facecolor='w', edgecolor='k')
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.add_artist(circle3)
cbar=fig0.colorbar(strm.lines,fraction=0.046, pad=0.04)
cbar.set_label('B[GT]', rotation=270, labelpad=8)
cbar.set_clim(0,1500)
cbar.draw_all()
ax0.set_ylim([-25,25])
ax0.set_xlim([-25,25])
ax0.set_xlabel('x [km]')
ax0.set_ylabel('z [km]')
ax0.set_aspect(1)
plt.title('polyEos(0.05,2), M/R=0.2, B_r(0,0)=1402GT', y=1.01)
plt.savefig('MR02Br1402.pdf',bbox_inches=0)
plt.show(fig0)
</code></pre>
<p>I uploaded the csv-file here if you want to try some stuff <a href="https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0">https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0</a>.
Which generates the following plot:
<a href="http://i.stack.imgur.com/pYGPi.png"><img src="http://i.stack.imgur.com/pYGPi.png" alt="StreamPlot"></a></p>
<p>I am actually pretty happy with the result except for one small detail, which I can not figure out: If one looks closely the linewidth and the color change in rather big steps, which is especially visible at the center:</p>
<p><a href="http://i.stack.imgur.com/zG67s.png"><img src="http://i.stack.imgur.com/zG67s.png" alt="Problem"></a></p>
<p>Is there some way/option with which I can decrease the size of these steps, especially to make the colormap smoother?</p>
| 8 | 2016-09-02T14:36:31Z | 39,425,263 | <p>If you don't mind changing the <code>streamplot</code> code (<code>matplotlib/streamplot.py</code>), you could simply decrease the size of the integration steps. Inside <code>_integrate_rk12()</code> the maximum step size is defined as:</p>
<pre><code>maxds = min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)
</code></pre>
<p>If you decrease that, lets say:</p>
<pre><code>maxds = 0.1 * min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)
</code></pre>
<p>I get this result (left = new, right = original):</p>
<p><a href="http://i.stack.imgur.com/MdVDv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/MdVDv.jpg" alt="enter image description here"></a></p>
<p>Of course, this makes the code about 10x slower, and I haven't thoroughly tested it, but it seems to work (as a quick hack) for this example.</p>
<p>About the density (mentioned in the comments): I personally don't see the problem with that. It's not like we are trying to visualize the actual path line of (e.g.) a particle; the density is already some arbitrary (controllable) choice, and yes it is influenced by choices in the integration, but I don't think that it changes the (not quite sure how to call this) required visualization we're after. </p>
<p>The results (density) do seem to converge a bit for decreasing step sizes, this shows the results for decreasing the integration step with a factor {1,5,10,20}:</p>
<p><a href="http://i.stack.imgur.com/uW8xb.png" rel="nofollow"><img src="http://i.stack.imgur.com/uW8xb.png" alt="enter image description here"></a> </p>
| 2 | 2016-09-10T10:46:01Z | [
"python",
"numpy",
"matplotlib"
] |
Matplotlib: How to increase colormap/linewidth quality in streamplot? | 39,294,987 | <p>I have the following code to generate a streamplot based on an <code>interp1d</code>-Interpolation of discrete data:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d
# CSV Import
a1array=pd.read_csv('a1.csv', sep=',',header=None).values
rv=a1array[:,0]
a1v=a1array[:,1]
da1vM=a1array[:,2]
a1 = interp1d(rv, a1v)
da1M = interp1d(rv, da1vM)
# Bx and By vector components
def bx(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 0
else:
return x*y/rad**4*(-2*a1(rad)+rad*da1M(rad))/2.87445E-19*1E-12
def by(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 4.02995937E-04/2.87445E-19*1E-12
else:
return -1/rad**4*(2*a1(rad)*y**2+rad*da1M(rad)*x**2)/2.87445E-19*1E-12
Bx = np.vectorize(bx, otypes=[np.float])
By = np.vectorize(by, otypes=[np.float])
# Grid
num_steps = 11
Y, X = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(X, Y)
Vy = By(X, Y)
speed = np.sqrt(Bx(X, Y)**2+By(X, Y)**2)
lw = 2*speed / speed.max()+.5
# Star Radius
circle3 = plt.Circle((0, 0), 16.3473140, color='black', fill=False)
# Plot
fig0, ax0 = plt.subplots(num=None, figsize=(11,9), dpi=80, facecolor='w', edgecolor='k')
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.add_artist(circle3)
cbar=fig0.colorbar(strm.lines,fraction=0.046, pad=0.04)
cbar.set_label('B[GT]', rotation=270, labelpad=8)
cbar.set_clim(0,1500)
cbar.draw_all()
ax0.set_ylim([-25,25])
ax0.set_xlim([-25,25])
ax0.set_xlabel('x [km]')
ax0.set_ylabel('z [km]')
ax0.set_aspect(1)
plt.title('polyEos(0.05,2), M/R=0.2, B_r(0,0)=1402GT', y=1.01)
plt.savefig('MR02Br1402.pdf',bbox_inches=0)
plt.show(fig0)
</code></pre>
<p>I uploaded the csv-file here if you want to try some stuff <a href="https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0">https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0</a>.
Which generates the following plot:
<a href="http://i.stack.imgur.com/pYGPi.png"><img src="http://i.stack.imgur.com/pYGPi.png" alt="StreamPlot"></a></p>
<p>I am actually pretty happy with the result except for one small detail, which I can not figure out: If one looks closely the linewidth and the color change in rather big steps, which is especially visible at the center:</p>
<p><a href="http://i.stack.imgur.com/zG67s.png"><img src="http://i.stack.imgur.com/zG67s.png" alt="Problem"></a></p>
<p>Is there some way/option with which I can decrease the size of these steps, especially to make the colormap smoother?</p>
 | 8 | 2016-09-02T14:36:31Z | 39,440,194 | <p>I had another look at this and it wasn't as painful as I thought it might be. </p>
<p>Add:</p>
<pre><code> subdiv = 15
points = np.arange(len(t[0]))
interp_points = np.linspace(0, len(t[0]), subdiv * len(t[0]))
tgx = np.interp(interp_points, points, tgx)
tgy = np.interp(interp_points, points, tgy)
tx = np.interp(interp_points, points, tx)
ty = np.interp(interp_points, points, ty)
</code></pre>
<p>after <code>ty</code> is initialised in the trajectories loop (line <code>164</code> in my version). Just substitute whatever number of subdivisions you want for <code>subdiv = 15</code>. All the segments in the streamplot will be subdivided into as many equally sized segments as you choose. The colors and linewidths for each will still be properly obtained from interpolating the data.</p>
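<p>The resampling those lines perform can be seen in isolation on a toy trajectory. This is a stand-alone sketch, independent of the streamplot internals; the array here is just a stand-in for one trajectory's coordinates:</p>

```python
import numpy as np

subdiv = 15
tgx = np.array([0.0, 1.0, 2.0])  # stand-in for one trajectory's x coordinates
points = np.arange(len(tgx))
interp_points = np.linspace(0, len(tgx), subdiv * len(tgx))
# np.interp clamps points beyond the last sample to the final value
resampled = np.interp(interp_points, points, tgx)
print(len(tgx), '->', len(resampled))  # 3 -> 45
```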
<p>It's not as neat as changing the integration step but it does plot exactly the same trajectories.</p>
<p><a href="http://i.stack.imgur.com/Ykrnn.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ykrnn.png" alt="subdiv = 15"></a></p>
| 2 | 2016-09-11T20:10:57Z | [
"python",
"numpy",
"matplotlib"
] |
Matplotlib: How to increase colormap/linewidth quality in streamplot? | 39,294,987 | <p>I have the following code to generate a streamplot based on an <code>interp1d</code>-Interpolation of discrete data:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d
# CSV Import
a1array=pd.read_csv('a1.csv', sep=',',header=None).values
rv=a1array[:,0]
a1v=a1array[:,1]
da1vM=a1array[:,2]
a1 = interp1d(rv, a1v)
da1M = interp1d(rv, da1vM)
# Bx and By vector components
def bx(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 0
else:
return x*y/rad**4*(-2*a1(rad)+rad*da1M(rad))/2.87445E-19*1E-12
def by(x ,y):
rad = np.sqrt(x**2+y**2)
if rad == 0:
return 4.02995937E-04/2.87445E-19*1E-12
else:
return -1/rad**4*(2*a1(rad)*y**2+rad*da1M(rad)*x**2)/2.87445E-19*1E-12
Bx = np.vectorize(bx, otypes=[np.float])
By = np.vectorize(by, otypes=[np.float])
# Grid
num_steps = 11
Y, X = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(X, Y)
Vy = By(X, Y)
speed = np.sqrt(Bx(X, Y)**2+By(X, Y)**2)
lw = 2*speed / speed.max()+.5
# Star Radius
circle3 = plt.Circle((0, 0), 16.3473140, color='black', fill=False)
# Plot
fig0, ax0 = plt.subplots(num=None, figsize=(11,9), dpi=80, facecolor='w', edgecolor='k')
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.add_artist(circle3)
cbar=fig0.colorbar(strm.lines,fraction=0.046, pad=0.04)
cbar.set_label('B[GT]', rotation=270, labelpad=8)
cbar.set_clim(0,1500)
cbar.draw_all()
ax0.set_ylim([-25,25])
ax0.set_xlim([-25,25])
ax0.set_xlabel('x [km]')
ax0.set_ylabel('z [km]')
ax0.set_aspect(1)
plt.title('polyEos(0.05,2), M/R=0.2, B_r(0,0)=1402GT', y=1.01)
plt.savefig('MR02Br1402.pdf',bbox_inches=0)
plt.show(fig0)
</code></pre>
<p>I uploaded the csv-file here if you want to try some stuff <a href="https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0">https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0</a>.
Which generates the following plot:
<a href="http://i.stack.imgur.com/pYGPi.png"><img src="http://i.stack.imgur.com/pYGPi.png" alt="StreamPlot"></a></p>
<p>I am actually pretty happy with the result except for one small detail, which I can not figure out: If one looks closely the linewidth and the color change in rather big steps, which is especially visible at the center:</p>
<p><a href="http://i.stack.imgur.com/zG67s.png"><img src="http://i.stack.imgur.com/zG67s.png" alt="Problem"></a></p>
<p>Is there some way/option with which I can decrease the size of these steps, especially to make the colormap smoother?</p>
 | 8 | 2016-09-02T14:36:31Z | 39,440,338 | <p>You could increase the <code>density</code> parameter to get smoother color transitions,
but then use the <code>start_points</code> parameter to reduce your overall clutter.
The start_points parameter allows you to explicitly choose the location and
number of trajectories to draw. It overrides the default, which is to plot
as many as possible to fill up the entire plot.</p>
<p>But first you need one little fix to your existing code:
According to the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.streamplot" rel="nofollow">streamplot</a> documentation, the X and Y args should be 1d arrays, not 2d arrays as produced by mgrid.
It looks like passing in 2d arrays is supported, but it is undocumented
and it is currently not compatible with the start_points parameter.</p>
<p>Here is how I revised your X, Y, Vx, Vy and speed:</p>
<pre><code># Grid
num_steps = 11
Y = np.linspace(-25, 25, num_steps)
X = np.linspace(0, 25, num_steps)
Ygrid, Xgrid = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(Xgrid, Ygrid)
Vy = By(Xgrid, Ygrid)
speed = np.hypot(Vx, Vy)
lw = 3*speed / speed.max()+.5
</code></pre>
<p>Now you can explicitly set your <code>start_points</code> parameter. The start points are actually
"seed" points. Any given stream trajectory will grow in <em>both</em> directions
from the seed point. So if you put a seed point right in the center of
the example plot, it will grow both up and down to produce a vertical
stream line.</p>
<p>Besides controlling the <em>number</em> of trajectories, using the
<code>start_points</code> parameter also controls the <em>order</em> they are
drawn. This is important when considering how trajectories terminate.
They will either hit the border of the plot, or they will terminate if
they hit a cell of the plot that already has a trajectory. That means
your first seeds will tend to grow longer and your later seeds will tend
to get limited by previous ones. Some of the later seeds may not grow
at all. The default seeding strategy is to plant a seed at every cell,
which is pretty obnoxious if you have a high density. It also orders
them by planting seeds first along the plot borders and spiraling inward.
This may not be ideal for your particular case. I found a very simple
strategy for your example was to just plant a few seeds between those
two points of zero velocity, y=0 and x from -10 to 10. Those trajectories
grow to their fullest and fill in most of the plot without clutter.</p>
<p>Here is how I create the seed points and set the density:</p>
<pre><code>num_streams = 8
stptsy = np.zeros((num_streams,), np.float)
stptsx_left = np.linspace(0, -10.0, num_streams)
stptsx_right = np.linspace(0, 10.0, num_streams)
stpts_left = np.column_stack((stptsx_left, stptsy))
stpts_right = np.column_stack((stptsx_right, stptsy))
density = (3,6)
</code></pre>
<p>And here is how I modify the calls to <code>streamplot</code>:</p>
<pre><code>strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw, density=density,
cmap=plt.cm.jet, start_points=stpts_right)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=density,
cmap=plt.cm.jet, start_points=stpts_left)
</code></pre>
<p>The result basically looks like the original, but with smoother color transitions and only 15 stream lines. (sorry no reputation to inline the image)</p>
| 1 | 2016-09-11T20:29:07Z | [
"python",
"numpy",
"matplotlib"
] |
Indentation Error for a loop | 39,295,051 | <p>I keep getting an IndentationError: expected an indented block. Why is this error occurring?</p>
<pre><code>import arcpy
from arcpy import env
env.workspace = r'D:\Programming\Lab1\lab1.gdb'
env.overwriteOutput = 1
env.qualifiedFieldNames = "UNQUALIFIED"
#list the feature classes
soils = arcpy.ListFeatureClasses()
for soils in arcpy.ListFeatureClasses():
</code></pre>
| -6 | 2016-09-02T14:39:14Z | 39,295,066 | <p>Python is expecting an indented block, which wasn't there:</p>
<pre><code>for soils in arcpy.ListFeatureClasses():
# here should be something
</code></pre>
<p>Providing a body solves the problem: for example, the <code>pass</code> statement, which does nothing but satisfies the parser and resolves the <code>IndentationError</code>.</p>
<pre><code>for soils in arcpy.ListFeatureClasses():
pass
</code></pre>
<p>There is another trick:</p>
<pre><code>for soils in arcpy.ListFeatureClasses(): pass
</code></pre>
<p>which also solves that issue.</p>
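<p>For completeness, the error can be reproduced without arcpy at all by compiling the loop header on its own; this is just an illustration using an empty list in place of <code>arcpy.ListFeatureClasses()</code>:</p>

```python
# a for-loop header with no indented body fails to compile
src_missing_body = "for soils in []:\n"
try:
    compile(src_missing_body, "<demo>", "exec")
    compiled_without_body = True
except IndentationError:
    compiled_without_body = False

# an indented 'pass' statement satisfies the parser
compile("for soils in []:\n    pass\n", "<demo>", "exec")
```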
| 4 | 2016-09-02T14:40:14Z | [
"python"
] |
Indentation Error for a loop | 39,295,051 | <p>I keep getting a IndentationError: expected an indented block. Why is this error occurring?</p>
<pre><code>import arcpy
from arcpy import env
env.workspace = r'D:\Programming\Lab1\lab1.gdb'
env.overwriteOutput = 1
env.qualifiedFieldNames = "UNQUALIFIED"
#list the feature classes
soils = arcpy.ListFeatureClasses()
for soils in arcpy.ListFeatureClasses():
</code></pre>
| -6 | 2016-09-02T14:39:14Z | 39,295,950 | <p>You are missing code inside your for loop.</p>
<p>Try:</p>
<pre><code>for soil in soils:
print(soil)
# or pass
</code></pre>
| 0 | 2016-09-02T15:27:15Z | [
"python"
] |
Find values in array and change them efficiently | 39,295,128 | <p>I am working with a large array with around 300 000 values. It has 100 000 rows and 3 columns. I am doing iterations with this array and if any value in the first column exceeds a limit of lets say 10, I want the number to be replaced. Is there any more efficient way than running something like this?:</p>
<pre><code>for i in range(N):
if array[i][0]>10:
array[i][0] = 0
</code></pre>
<p>I need to repeat this sequence for the other two columns as well, which, along with all my other iterations, makes my code pretty slow.</p>
| 2 | 2016-09-02T14:43:19Z | 39,295,320 | <p>If I understand you correctly, you're looking for something like this:</p>
<pre><code>>>> from numpy import array
>>> a = array([[1,2,3],[4,5,6],[7,8,9]])
>>> a
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
>>> a[a>5]=10 # <--- here the "magic" happens
>>> a
array([[ 1, 2, 3],
[ 4, 5, 10],
[10, 10, 10]])
</code></pre>
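<p>Note that <code>a[a>5]=10</code> tests every element of the array. If, as in the question, only the first column should be checked, the boolean mask can be restricted to that column; a small sketch (the threshold of 10 matches the question, the sample values are made up):</p>

```python
import numpy as np

a = np.array([[12.0, 1.0, 2.0],
              [ 3.0, 4.0, 5.0],
              [99.0, 7.0, 8.0]])

# test only column 0; zero it out where it exceeds the limit
a[a[:, 0] > 10, 0] = 0

# np.where builds a replaced copy instead of mutating in place
b = np.where(a > 10, 0, a)
```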
| 0 | 2016-09-02T14:52:29Z | [
"python",
"arrays"
] |
Find values in array and change them efficiently | 39,295,128 | <p>I am working with a large array with around 300 000 values. It has 100 000 rows and 3 columns. I am doing iterations with this array and if any value in the first column exceeds a limit of lets say 10, I want the number to be replaced. Is there any more efficient way than running something like this?:</p>
<pre><code>for i in range(N):
if array[i][0]>10:
array[i][0] = 0
</code></pre>
<p>I need to repeat this sequence for the other two columns as well, which, along with all my other iterations, makes my code pretty slow.</p>
| 2 | 2016-09-02T14:43:19Z | 39,295,334 | <p>Convert your array to a numpy array (<code>numpy.asarray</code>) then to replace the values you would use the following:</p>
<pre><code>import numpy as np
N = np.asarray(N)
N[N > 10] = 0
</code></pre>
<p><a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.asarray.html" rel="nofollow">numpy.asarray documentation</a></p>
| 2 | 2016-09-02T14:52:56Z | [
"python",
"arrays"
] |
Find values in array and change them efficiently | 39,295,128 | <p>I am working with a large array with around 300 000 values. It has 100 000 rows and 3 columns. I am doing iterations with this array and if any value in the first column exceeds a limit of lets say 10, I want the number to be replaced. Is there any more efficient way than running something like this?:</p>
<pre><code>for i in range(N):
if array[i][0]>10:
array[i][0] = 0
</code></pre>
<p>I need to repeat this sequence for the other two columns as well, which, along with all my other iterations, makes my code pretty slow.</p>
| 2 | 2016-09-02T14:43:19Z | 39,295,368 | <p>I've assumed you may not want to use the same threshold/replacement value for each column. That being the case, you can pack the three items in a list of tuples and iterate through that.</p>
<pre><code>import numpy as np
arr = np.asarray(your_array)  # np.asarray converts; np.ndarray(...) is not a conversion constructor
# Edited with your values, and a greater-than symbol
threshold = 10
column_id = 0
replace_value = 0
arr[arr[:, column_id] > threshold, column_id] = replace_value
</code></pre>
<p>Set <code>threshold</code>, <code>column_id</code> and <code>replace_value</code> as you require.</p>
| 1 | 2016-09-02T14:54:44Z | [
"python",
"arrays"
] |
Gradient clipping appears to choke on None | 39,295,136 | <p>I'm trying to add gradient clipping to my graph. I used the approach recommended here: <a href="http://stackoverflow.com/questions/36498127/how-to-effectively-apply-gradient-clipping-in-tensor-flow">How to effectively apply gradient clipping in tensor flow?</a></p>
<pre><code> optimizer = tf.train.GradientDescentOptimizer(learning_rate)
if gradient_clipping:
gradients = optimizer.compute_gradients(loss)
clipped_gradients = [(tf.clip_by_value(grad, -1, 1), var) for grad, var in gradients]
opt = optimizer.apply_gradients(clipped_gradients, global_step=global_step)
else:
opt = optimizer.minimize(loss, global_step=global_step)
</code></pre>
<p>But when I turn on gradient clipping, I get the following stack trace:</p>
<pre><code><ipython-input-19-be0dcc63725e> in <listcomp>(.0)
61 if gradient_clipping:
62 gradients = optimizer.compute_gradients(loss)
---> 63 clipped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
64 opt = optimizer.apply_gradients(clipped_gradients, global_step=global_step)
65 else:
/home/armence/mlsandbox/venv/lib/python3.4/site-packages/tensorflow/python/ops/clip_ops.py in clip_by_value(t, clip_value_min, clip_value_max, name)
51 with ops.op_scope([t, clip_value_min, clip_value_max], name,
52 "clip_by_value") as name:
---> 53 t = ops.convert_to_tensor(t, name="t")
54
55 # Go through list of tensors, for each value in each tensor clip
/home/armence/mlsandbox/venv/lib/python3.4/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref)
619 for base_type, conversion_func in funcs_at_priority:
620 if isinstance(value, base_type):
--> 621 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
622 if ret is NotImplemented:
623 continue
/home/armence/mlsandbox/venv/lib/python3.4/site-packages/tensorflow/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
178 as_ref=False):
179 _ = as_ref
--> 180 return constant(v, dtype=dtype, name=name)
181
182
/home/armence/mlsandbox/venv/lib/python3.4/site-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name)
161 tensor_value = attr_value_pb2.AttrValue()
162 tensor_value.tensor.CopyFrom(
--> 163 tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
164 dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
165 const_tensor = g.create_op(
/home/armence/mlsandbox/venv/lib/python3.4/site-packages/tensorflow/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape)
344 else:
345 if values is None:
--> 346 raise ValueError("None values not supported.")
347 # if dtype is provided, forces numpy array to be the type
348 # provided if possible.
ValueError: None values not supported.
</code></pre>
<p>How do I solve this problem?</p>
| 0 | 2016-09-02T14:44:00Z | 39,295,309 | <p>So, one option that seems to work is this:</p>
<pre><code> optimizer = tf.train.GradientDescentOptimizer(learning_rate)
if gradient_clipping:
gradients = optimizer.compute_gradients(loss)
def ClipIfNotNone(grad):
if grad is None:
return grad
return tf.clip_by_value(grad, -1, 1)
clipped_gradients = [(ClipIfNotNone(grad), var) for grad, var in gradients]
opt = optimizer.apply_gradients(clipped_gradients, global_step=global_step)
else:
opt = optimizer.minimize(loss, global_step=global_step)
</code></pre>
<p>It looks like <code>compute_gradients</code> returns <code>None</code> instead of a zero tensor when the gradient would be a zero tensor, and <code>tf.clip_by_value</code> does not support a <code>None</code> value. So just don't pass <code>None</code> to it, and preserve the <code>None</code> values.</p>
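<p>The guard pattern itself is framework-independent. Here is a plain-Python sketch (no TensorFlow needed) of how <code>None</code> entries pass through untouched while numeric values are clamped to [-1, 1]; the gradient values and variable names are made up:</p>

```python
def clip_if_not_none(grad, lo=-1.0, hi=1.0):
    # mirror of ClipIfNotNone above: leave None alone, clamp everything else
    if grad is None:
        return None
    return max(lo, min(hi, grad))

grads_and_vars = [(2.5, "w1"), (None, "w2"), (-3.0, "b1"), (0.4, "b2")]
clipped = [(clip_if_not_none(g), v) for g, v in grads_and_vars]
```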
| 0 | 2016-09-02T14:51:22Z | [
"python",
"machine-learning",
"tensorflow"
] |
Can I use functions imported from .py files in Dask/Distributed? | 39,295,200 | <p>I have a question about serialization and imports. </p>
<ul>
<li>should functions have their own imports? <a href="https://docs.continuum.io/anaconda-scale/howto/spark-basic#modify-std-script" rel="nofollow">like I've seen done with PySpark</a></li>
<li>Is the following just plain wrong? Does <code>mod.py</code> need to be a conda/pip package? <code>mod.py</code> was written to a shared filesystem.</li>
</ul>
<p></p>
<pre><code>In [1]: from distributed import Executor
In [2]: e = Executor('127.0.0.1:8786')
In [3]: e
Out[3]: <Executor: scheduler="127.0.0.1:8786" processes=2 cores=2>
In [4]: import socket
In [5]: e.run(socket.gethostname)
Out[5]: {'172.20.12.7:53405': 'n1015', '172.20.12.8:53779': 'n1016'}
In [6]: %%file mod.py
...: def hostname():
...: return 'the hostname'
...:
Overwriting mod.py
In [7]: import mod
In [8]: mod.hostname()
Out[8]: 'the hostname'
In [9]: e.run(mod.hostname)
distributed.utils - ERROR - No module named 'mod'
</code></pre>
| 3 | 2016-09-02T14:46:49Z | 39,295,372 | <h3>Quick Answer</h3>
<p>Upload your mod.py file to all of your workers. You can do this using whatever mechanism you used to set up dask.distributed, or you can use the <a href="http://distributed.readthedocs.io/en/latest/api.html#distributed.executor.Executor.upload_file" rel="nofollow">upload_file</a> method</p>
<pre><code>e.upload_file('mod.py')
</code></pre>
<p>Alternatively, if your function is made in IPython, rather than being part of a module, it will be sent along without a problem.</p>
<h3>Long Answer</h3>
<p>This all has to do with how functions get serialized in Python. Functions from modules are serialized by their module name and function name</p>
<pre><code>In [1]: from math import sin
In [2]: import pickle
In [3]: pickle.dumps(sin)
Out[3]: b'\x80\x03cmath\nsin\nq\x00.'
</code></pre>
<p>So if the client machine wants to refer to the <code>math.sin</code> function, it sends along this bytestring (which you'll notice has <code>'math'</code> and <code>'sin'</code> in it, buried among other bytes) to the worker machine. The worker looks at this bytestring and says "OK great, the function I want is in such and such a module, let me go and find that in my local file system." If the module isn't present, it'll raise an error, much like what you received above.</p>
<p>For dynamically created functions (functions that you make in IPython) it uses a completely different approach, bundling up all of the code. This approach generally works fine.</p>
<p>Generally speaking Dask assumes that the workers and the client all have the same software environment. Typically this is mostly handled by whoever sets up your cluster, using some other tool like Docker. Methods like <code>upload_file</code> are there to fill in the gaps when you have files or scripts that get updated more frequently.</p>
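<p>You can observe both serialization modes with the standard <code>pickle</code> module alone. A module-level function pickles as a reference, while a function created inside another function has no importable path, so plain pickle refuses it (this is the gap that bundling the code fills for dynamically created functions):</p>

```python
import math
import pickle

# module-level: serialized as the reference 'math' + 'sin', not as code
payload = pickle.dumps(math.sin)

def make_local():
    # nested function: it only exists inside make_local's scope
    def local_fn():
        return 42
    return local_fn

# plain pickle cannot serialize a local function by reference
try:
    pickle.dumps(make_local())
    local_pickled = True
except (AttributeError, pickle.PicklingError):
    local_pickled = False
```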
| 2 | 2016-09-02T14:54:49Z | [
"python",
"distributed-computing",
"dask"
] |
Using Flask web app as Windows application | 39,295,219 | <p>We have a web application developed using Flask that runs on a Windows server with clients that connect to it. We now have a use case where it is desired that the server and client be combined onto a laptop so that both server and client code run together and make it appear as a native Windows application.</p>
<p>Basically, we now have two requirements that we did not have before:</p>
<ol>
<li>Must be able to launch the browser from within Python.</li>
<li>Must be able to terminate the Python (Flask) application on browser window close.</li>
</ol>
<p>We have succeeded in item 1. Item 2 remains elusive. We have tried terminating the werkzeug server but the Python code keeps running. Seeking help from those that know.</p>
| 0 | 2016-09-02T14:47:54Z | 39,296,721 | <p>I do not currently have a Windows client here so I cannot exactly test what I am suggesting.</p>
<p>Using <strong>pywinauto</strong> you can check for a Window's name.</p>
<p>You could build a script that checks this in background and kills your Flask application when the requested browser window is not opened.</p>
<pre><code>from pywinauto.findwindows import find_windows
if not find_windows(best_match='YOURWINDOWNAMEHERE'):
# Do your kill
</code></pre>
| 0 | 2016-09-02T16:09:50Z | [
"python",
"flask"
] |
Using Flask web app as Windows application | 39,295,219 | <p>We have a web application developed using Flask that runs on a Windows server with clients that connect to it. We now have a use case where it is desired that the server and client be combined onto a laptop so that both server and client code run together and make it appear as a native Windows application.</p>
<p>Basically, we now have two requirements that we did not have before:</p>
<ol>
<li>Must be able to launch the browser from within Python.</li>
<li>Must be able to terminate the Python (Flask) application on browser window close.</li>
</ol>
<p>We have succeeded in item 1. Item 2 remains elusive. We have tried terminating the werkzeug server but the Python code keeps running. Seeking help from those that know.</p>
| 0 | 2016-09-02T14:47:54Z | 39,351,372 | <p>After reading the docs more thoroughly and experimenting with the implementation, we found the following main code to satisfy the objective.</p>
<pre><code>from multiprocessing import Process, freeze_support
def run_browser():
import webbrowser
chrome = webbrowser.get(r'C:\\Program\ Files\ (x86)\\Google\\Chrome\\Application\\chrome.exe --window-size=500,500 --app=%s')
chrome.open('http://localhost:5000/gui')
def run_app():
from app import webapp
webapp.run() #debug=True) #, use_reloader=False)
if __name__ == '__main__':
freeze_support()
a = Process(target=run_app)
a.daemon = True
a.start()
b = Process(target=run_browser)
b.start()
b.join()
</code></pre>
| 0 | 2016-09-06T14:20:34Z | [
"python",
"flask"
] |
FIWARE Authentication in Python | 39,295,231 | <p>I am trying to authenticate a user using FIWARE. It returns a 404 when I request the token, but I have no problem with the access code request.
My code:</p>
<pre><code>class OAuth2(object):
def __init__(self):
self.client_id = "<client_id>"
self.client_secret = "<client_secret>"
self.site = 'http://0.0.0.0:8000'
self.redirect_uri = "http://192.168.99.101:8000/auth"
self.authorization_url = '/oauth2/authorize'
self.token_url = '/oauth2/token'
def authorize_url(self, **kwargs):
oauth_params = {'response_type': 'code', 'redirect_uri': self.redirect_uri, 'client_id': self.client_id}
oauth_params.update(kwargs)
return "%s%s?%s" % (self.site, quote(self.authorization_url), urlencode(oauth_params))
def get_token(self, code, **kwargs):
url = "%s%s" % (self.site, quote(self.token_url))
data = {'grant_type': 'authorization_code', 'redirect_uri': self.redirect_uri, 'client_id': self.client_id, 'client_secret': self.client_secret, 'code': code}
data.update(kwargs)
response = requests.post(url, data=data)
content = response.content
if isinstance(response.content, str):
try:
content = json.loads(response.content)
except ValueError:
content = parse_qs(response.content)
else:
return content
</code></pre>
<p>In my app, I call authorize_url() to get the code.</p>
<pre><code>@app.route("/authenticate")
def authenticate():
auth_url = auth_app.authorize_url()
return redirect(auth_url)
</code></pre>
<p>After, I get the code by callback url and I call the get_token() method:</p>
<pre><code>@app.route('/auth', methods=['GET', 'POST'])
def auth():
error = request.args.get('error', '')
if error:
return "Error: " + error
code = request.args.get('code')
content = auth_app.get_token(code)
return render_template('index.html', content="content: " + content)
</code></pre>
<p>Github Project: <a href="https://github.com/I-am-Gabi/security-app/tree/master/2-BasicAuthentication/securityapp-ui/web" rel="nofollow">https://github.com/I-am-Gabi/security-app/tree/master/2-BasicAuthentication/securityapp-ui/web</a></p>
<p>OAuth2 class: <a href="https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/oauth_fiware.py" rel="nofollow">https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/oauth_fiware.py</a></p>
<p>App: <a href="https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/app.py" rel="nofollow">https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/app.py</a></p>
<p>Fiware wiki: <a href="https://github.com/ging/fiware-idm/wiki/using-the-fiware-lab-instance" rel="nofollow">https://github.com/ging/fiware-idm/wiki/using-the-fiware-lab-instance</a></p>
| 0 | 2016-09-02T14:48:20Z | 39,295,313 | <p>Try using POST method instead of GET</p>
| 0 | 2016-09-02T14:51:42Z | [
"python",
"authentication",
"fiware"
] |
FIWARE Authentication in Python | 39,295,231 | <p>I am trying to authenticate a user using FIWARE. It returns a 404 when I request the token, but I have no problem with the access code request.
My code:</p>
<pre><code>class OAuth2(object):
def __init__(self):
self.client_id = "<client_id>"
self.client_secret = "<client_secret>"
self.site = 'http://0.0.0.0:8000'
self.redirect_uri = "http://192.168.99.101:8000/auth"
self.authorization_url = '/oauth2/authorize'
self.token_url = '/oauth2/token'
def authorize_url(self, **kwargs):
oauth_params = {'response_type': 'code', 'redirect_uri': self.redirect_uri, 'client_id': self.client_id}
oauth_params.update(kwargs)
return "%s%s?%s" % (self.site, quote(self.authorization_url), urlencode(oauth_params))
def get_token(self, code, **kwargs):
url = "%s%s" % (self.site, quote(self.token_url))
data = {'grant_type': 'authorization_code', 'redirect_uri': self.redirect_uri, 'client_id': self.client_id, 'client_secret': self.client_secret, 'code': code}
data.update(kwargs)
response = requests.post(url, data=data)
content = response.content
if isinstance(response.content, str):
try:
content = json.loads(response.content)
except ValueError:
content = parse_qs(response.content)
else:
return content
</code></pre>
<p>In my app, I call authorize_url() to get the code.</p>
<pre><code>@app.route("/authenticate")
def authenticate():
auth_url = auth_app.authorize_url()
return redirect(auth_url)
</code></pre>
<p>After, I get the code by callback url and I call the get_token() method:</p>
<pre><code>@app.route('/auth', methods=['GET', 'POST'])
def auth():
error = request.args.get('error', '')
if error:
return "Error: " + error
code = request.args.get('code')
content = auth_app.get_token(code)
return render_template('index.html', content="content: " + content)
</code></pre>
<p>Github Project: <a href="https://github.com/I-am-Gabi/security-app/tree/master/2-BasicAuthentication/securityapp-ui/web" rel="nofollow">https://github.com/I-am-Gabi/security-app/tree/master/2-BasicAuthentication/securityapp-ui/web</a></p>
<p>OAuth2 class: <a href="https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/oauth_fiware.py" rel="nofollow">https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/oauth_fiware.py</a></p>
<p>App: <a href="https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/app.py" rel="nofollow">https://github.com/I-am-Gabi/security-app/blob/master/2-BasicAuthentication/securityapp-ui/web/app.py</a></p>
<p>Fiware wiki: <a href="https://github.com/ging/fiware-idm/wiki/using-the-fiware-lab-instance" rel="nofollow">https://github.com/ging/fiware-idm/wiki/using-the-fiware-lab-instance</a></p>
| 0 | 2016-09-02T14:48:20Z | 39,328,038 | <p>Please, check that you are correctly sending the Authorization header</p>
| 0 | 2016-09-05T09:55:29Z | [
"python",
"authentication",
"fiware"
] |
Good way to input PASCAL-VOC 2012 training data and labels with tensorflow | 39,295,263 | <p>I want to do object detection of <a href="http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html#introduction" rel="nofollow">PASCAL-VOC 2012 dataset</a> with tensorflow. </p>
<p>I want to input <strong>the whole image</strong> with <strong>object labels</strong> and the corresponding <strong>bounding boxes</strong> into the tensorflow for training. </p>
<p>Is there any good way to write a data file for tensorflow to read? Or just read the original XML file in tensorflow?</p>
<p>Thank you very much.</p>
<p>Here is an image example:
<a href="http://i.stack.imgur.com/1i0O1.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/1i0O1.jpg" alt="enter image description here"></a> </p>
| 0 | 2016-09-02T14:49:30Z | 39,312,792 | <p>It seems that TF has no support for XML files yet.</p>
<ol>
<li><p>You can try to make batches by yourself and feed them to TF placeholders.
<a href="https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#feeding" rel="nofollow">https://www.tensorflow.org/versions/r0.10/how_tos/reading_data/index.html#feeding</a></p></li>
<li><p>You can write your own file format and your own decoder. Then you can read the file and get its bytes with the <code>tf.decode_raw</code> function and do whatever you want. Related question if you want to read multiple files simultaneously: <a href="http://stackoverflow.com/questions/34340489/tensorflow-read-images-with-labels">Tensorflow read images with labels</a></p></li>
</ol>
<p>I think the first option is easier to implement.</p>
| 0 | 2016-09-04T01:34:40Z | [
"python",
"tensorflow",
"deep-learning"
] |
Why a filename is given closefd parameter of open() Function must be True in Python 3.5.2? | 39,295,337 | <p>In Python 3.5.2, when I give a False value to the closefd parameter of the open() function together with a filename, I get the error below:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Cannot use closefd=False with file name
</code></pre>
<p>The Google Search results for my question (why must closefd of open() be True when a filename is given in Python 3.5.2?) and for similar queries (e.g. "closefd parameter in python") weren't satisfying; the link to the results is below (you may get different results):</p>
<p><a href="https://www.google.com/#q=Why+a+filename+is+given+closefd+parameter+of+open()+function+must+be+True+in+Python+3.5.2%3F" rel="nofollow">https://www.google.com/#q=Why+a+filename+is+given+closefd+parameter+of+open()+function+must+be+True+in+Python+3.5.2%3F</a></p>
<p>In the Python 3.5.2 documentation, there is only a short description of the closefd parameter of the open() function (the link to the open() function in the Python 3.5.2 documentation is <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow">https://docs.python.org/3/library/functions.html#open</a>). The description is below:</p>
<blockquote>
<p>If closefd is False and a file descriptor rather than a filename was given, the underlying file descriptor will be kept open when the file is closed. If a filename is given closefd must be True (the default) otherwise an error will be raised.</p>
</blockquote>
<p>I couldn't find the answer to this question. The Python version banner printed when my interpreter starts is below:</p>
<pre><code>Python 3.5.2 (default, Jul 5 2016, 12:43:10)
[GCC 5.4.0 20160609] on linux
</code></pre>
| 0 | 2016-09-02T14:53:02Z | 39,295,513 | <p>In Python, <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow">open()</a> has two purposes:</p>
<ol>
<li>To open a given file from its name, or</li>
<li>To wrap a file descriptor representing an already open file.</li>
</ol>
<p>The second purpose is useful if you have a file descriptor and you want to use portable Python functions (rather than the <a href="https://docs.python.org/2/library/os.html#file-descriptor-operations" rel="nofollow">os</a> module) to manipulate it. However, in that case, the file descriptor will be closed when the Python file object wrapper is closed, which you may not want (as you may want the file descriptor to remain valid for further uses).</p>
<p>Therefore, the <code>open()</code> function provides an argument that allows you to specify you want the file descriptor to remain open. Since this behavior only makes sense when you pass a file descriptor to <code>open()</code>, an error is raised if you try to use it in conjunction with a file name.</p>
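<p>A small demonstration of the second use case (a sketch using <code>tempfile.mkstemp</code> for a real descriptor): closing the wrapper flushes it, but with <code>closefd=False</code> the underlying descriptor stays usable:</p>

```python
import os
import tempfile

fd, path = tempfile.mkstemp()

# wrap the existing descriptor; closefd=False means wrapper.close()
# will NOT close fd itself
wrapper = open(fd, "w", closefd=False)
wrapper.write("hello")
wrapper.close()  # flushes the buffer; fd stays open

# fd is still valid: rewind and read it back with os-level calls
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 5)
os.close(fd)
os.unlink(path)
```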
| 1 | 2016-09-02T15:02:56Z | [
"python",
"python-3.x"
] |
QTextBrowser won't append link correctly if that link has equals sign inside it | 39,295,342 | <pre><code>class TextBrowser(QtGui.QTextBrowser):
def __init__(self, parent=None):
QtGui.QTextBrowser.__init__(self, parent)
self.setAcceptRichText(True)
self.setOpenExternalLinks(True)
self.insertHtml('<a href=' + 'https://www.google.com/#q=dfsdf'+'>' + 'gg' + '</a>')
self.append('<a href=' + 'https://www.google.com/#q=dfsdf' + '>' + 'gg' + '</a>')
</code></pre>
<p>So anytime I try to append link that has equals sign QTextBrowser will append only part of the link before the sign. <a href="https://www.google.com/#q=dfsdf" rel="nofollow">https://www.google.com/#q=dfsdf</a> will become <a href="https://www.google.com/#q" rel="nofollow">https://www.google.com/#q</a></p>
<pre><code>setHtml()
</code></pre>
<p>works correctly, but I just want to add a clickable link, not clear the whole area to display only the link.
Is there anything I can do about it?</p>
| 0 | 2016-09-02T14:53:11Z | 39,298,997 | <p>Always make sure HTML attribute values are enclosed in double quotes; otherwise special characters like <code>=</code> may be parsed wrongly. The HTML should look like this:</p>
<pre><code> <a href="https://www.google.com/#q=dfsdf">gg</a>
</code></pre>
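<p>If you build the anchor in Python, plain string formatting plus the standard <code>html.escape</code> helper produces a safely quoted attribute value; a small sketch (the URL and label are from the question):</p>

```python
from html import escape

url = "https://www.google.com/#q=dfsdf"
label = "gg"

# unquoted attribute values may be cut off at '=' by lenient parsers
bad = "<a href=" + url + ">" + label + "</a>"

# double-quoted, escaped value survives '=', '&', spaces, etc.
good = '<a href="{}">{}</a>'.format(escape(url, quote=True), escape(label))
```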
| 2 | 2016-09-02T18:50:25Z | [
"python",
"python-2.7",
"qt",
"pyqt",
"qt4"
] |
How to see error for python application in gunicorn | 39,295,358 | <p>I have a <a href="https://falcon.readthedocs.io/en/stable/index.html" rel="nofollow">falcon</a> application which is run with gunicorn. If there are errors in a .py file it gives a traceback, but gunicorn sends only this to the console:</p>
<pre><code>[2016-09-02 17:39:26 +0300] [6927] [INFO] Starting gunicorn 19.6.0
[2016-09-02 17:39:26 +0300] [6927] [INFO] Listening at: http://127.0.0.1:8000 (6927)
[2016-09-02 17:39:26 +0300] [6927] [INFO] Using worker: sync
[2016-09-02 17:39:26 +0300] [6930] [INFO] Booting worker with pid: 6930
</code></pre>
<p>And the only error output is:</p>
<pre><code>[2016-09-02 17:39:29 +0300] [6927] [INFO] Shutting down: Master
[2016-09-02 17:39:29 +0300] [6927] [INFO] Reason: Worker failed to boot.
</code></pre>
<p>How to get full error output? I need to know what causes worker to fail to boot.</p>
| 0 | 2016-09-02T14:53:52Z | 39,300,848 | <p>This is a problem with the gunicorn log (not falcon). Sometimes it won't output the exception that caused the boot failure. So to debug my WSGI <code>app</code>, what I usually do is run the server with the test server provided by <code>wsgiref</code> as follows:</p>
<pre><code>from wsgiref import simple_server
if __name__ == '__main__':
    APP_PORT = 8080  # you may want to change this
    APP_HOST = '0.0.0.0'  # you may want to change this
    httpd = simple_server.make_server(APP_HOST, APP_PORT, app)
    httpd.serve_forever()
</code></pre>
<p>This should help you figure out where the problem is.</p>
| 0 | 2016-09-02T21:19:34Z | [
"python",
"error-handling",
"gunicorn",
"falconframework"
] |
How can I find out what Python distribution I am using from within Python? | 39,295,364 | <p>I'm using an application (QGIS) that can execute Python and can be extended with plugins written in Python. Depending on the platform, the installer of this application brings along its own Python distribution which it installs alongside the application to be used by the application. On other platforms, the installation doesn't bring Python along and the system Python interpreter is used.</p>
<p>Can I find out from within Python (in the application's interactive Python console or from within a plugin) what Python is being used?</p>
| 2 | 2016-09-02T14:54:15Z | 39,295,365 | <p>You can retrieve the path of the Python executable in use with <a href="https://docs.python.org/library/sys.html#sys.executable" rel="nofollow"><code>sys.executable</code></a>:</p>
<pre><code>import sys
print(sys.executable)
# e.g.
# /usr/bin/python2
# for the system Python 2 interpreter on Linux
</code></pre>
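<p>If you also need to tell <em>which</em> distribution that executable belongs to, <code>sys.prefix</code> and the full <code>sys.version</code> banner usually carry enough information (bundled distributions such as Anaconda typically name themselves in the banner); a small sketch:</p>

```python
import sys

interpreter = sys.executable  # path of the running interpreter binary
prefix = sys.prefix           # installation (or virtualenv) directory
banner = sys.version          # long version string, often names the distributor

info = {"executable": interpreter, "prefix": prefix, "version": banner}
```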
| 1 | 2016-09-02T14:54:15Z | [
"python"
] |