title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
pandas: how to get the percentage for each row | 39,243,649 | <p>When I use the pandas <code>value_counts</code> method, I get the data below:</p>
<pre><code>new_df['mark'].value_counts()
1 1349110
2 1606640
3 175629
4 790062
5 330978
</code></pre>
<p>How can I get the percentage for each row like this?</p>
<pre><code>1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
</code></pre>
<p>I need to divide each row by the sum of these data.</p>
| 1 | 2016-08-31T07:46:36Z | 39,243,707 | <p>I think you need:</p>
<pre><code>#if output is Series, convert it to DataFrame
df = df.rename('a').to_frame()
df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
print (df)
a per
1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
</code></pre>
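<p>The same computation can be reproduced end-to-end on the counts from the question (a self-contained sketch):</p>

```python
import pandas as pd

# the counts from the question, as a Series (the usual output of value_counts)
counts = pd.Series([1349110, 1606640, 175629, 790062, 330978],
                   index=[1, 2, 3, 4, 5])

df = counts.rename('a').to_frame()
df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
print(df)
#          a    per
# 1  1349110  31.7%
# 2  1606640  37.8%
# 3   175629   4.1%
# 4   790062  18.6%
# 5   330978   7.8%
```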
<p><strong>Timings</strong>:</p>
<p>It seems it is faster to call <code>value_counts</code> once and use <code>sum</code>, rather than calling <code>value_counts</code> twice:</p>
<pre><code>In [184]: %timeit (jez(s))
10 loops, best of 3: 38.9 ms per loop
In [185]: %timeit (pir(s))
10 loops, best of 3: 76 ms per loop
</code></pre>
<p>Code for timings:</p>
<pre><code>np.random.seed([3,1415])
s = pd.Series(np.random.choice(list('ABCDEFGHIJ'), 1000, p=np.arange(1, 11) / 55.))
s = pd.concat([s]*1000)#.reset_index(drop=True)
def jez(s):
df = s.value_counts()
df = df.rename('a').to_frame()
df['per'] = (df.a * 100 / df.a.sum()).round(1).astype(str) + '%'
return df
def pir(s):
return pd.DataFrame({'a':s.value_counts(),
'per':s.value_counts(normalize=True).mul(100).round(1).astype(str) + '%'})
print (jez(s))
print (pir(s))
</code></pre>
| 3 | 2016-08-31T07:49:40Z | [
"python",
"pandas",
"sum",
"percentage",
"series"
] |
pandas: how to get the percentage for each row | 39,243,649 | <p>When I use the pandas <code>value_counts</code> method, I get the data below:</p>
<pre><code>new_df['mark'].value_counts()
1 1349110
2 1606640
3 175629
4 790062
5 330978
</code></pre>
<p>How can I get the percentage for each row like this?</p>
<pre><code>1 1349110 31.7%
2 1606640 37.8%
3 175629 4.1%
4 790062 18.6%
5 330978 7.8%
</code></pre>
<p>I need to divide each row by the sum of these data.</p>
| 1 | 2016-08-31T07:46:36Z | 39,243,763 | <pre><code>np.random.seed([3,1415])
s = pd.Series(np.random.choice(list('ABCDEFGHIJ'), 1000, p=np.arange(1, 11) / 55.))
s.value_counts()
I 176
J 167
H 136
F 128
G 111
E 85
D 83
C 52
B 38
A 24
dtype: int64
</code></pre>
<hr>
<p>As percent</p>
<pre><code>s.value_counts(normalize=True)
I 0.176
J 0.167
H 0.136
F 0.128
G 0.111
E 0.085
D 0.083
C 0.052
B 0.038
A 0.024
dtype: float64
</code></pre>
<hr>
<p>Per @jezrael's suggestion</p>
<pre><code>counts = s.value_counts()
percent = s.value_counts(normalize=True) \
.mul(100).round(1).astype(str) + '%'
pd.DataFrame({'counts': counts, 'per': percent})
</code></pre>
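<p>On a small deterministic series this produces, for example (a self-contained sketch):</p>

```python
import pandas as pd

s = pd.Series(list('AABBBC'))

counts = s.value_counts()
percent = s.value_counts(normalize=True).mul(100).round(1).astype(str) + '%'
result = pd.DataFrame({'counts': counts, 'per': percent})
print(result)
#    counts    per
# B       3  50.0%
# A       2  33.3%
# C       1  16.7%
```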
<p><a href="http://i.stack.imgur.com/MEwx0.png" rel="nofollow"><img src="http://i.stack.imgur.com/MEwx0.png" alt="enter image description here"></a></p>
| 5 | 2016-08-31T07:52:33Z | [
"python",
"pandas",
"sum",
"percentage",
"series"
] |
Python performance on reading files with extremely long lines | 39,243,710 | <p>I've got a file with around 6MB of data. All of the data are written in a single line. Why is the following command taking more than 15 minutes to finish? Is it normal?</p>
<pre><code>infile = open('file.txt')
outfile = open('out.txt', 'w')
for line in infile.readlines():
outfile.write(line);
</code></pre>
<p><strong>Details:</strong></p>
<p>I'm using Python 2.7.</p>
<p>Output from <strong>wc</strong>: </p>
<ul>
<li>newline count: 2</li>
<li>word count: 3475246 </li>
<li>byte count: 6951140 </li>
</ul>
<p><strong>Evaluation 1:</strong></p>
<p><em>Reference</em> code iterating over the file object directly (suggested by Ahsanul Haque and Daewon Lee):</p>
<pre><code>for line in infile:
    outfile.write(line)
</code></pre>
<p><em>Time: 959.487 secs.</em></p>
| 1 | 2016-08-31T07:49:47Z | 39,244,001 | <p>Try the following code snippet. The <code>readlines()</code> loads all data on memory, which seems to cause a long time.</p>
<pre><code>infile = open('file.txt', 'r')
outfile = open('out.txt', 'w')
for line in infile:
outfile.write(line)
</code></pre>
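<p>Since the input here is essentially one huge line, iterating line by line buys little; a fixed-size chunk copy with the standard library avoids holding the whole line as a single string (a sketch, not from the original answer; the file names are illustrative):</p>

```python
import shutil

# create a demo "one huge line" input file
with open('file.txt', 'w') as f:
    f.write('ABCD ' * 1000)

# copy in 16 KB chunks, regardless of where the newlines are
with open('file.txt', 'rb') as infile, open('out.txt', 'wb') as outfile:
    shutil.copyfileobj(infile, outfile, length=16 * 1024)
```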
<p>With my python 3.5 (64bit) on Windows 10 OS, the following code snippet finished within a few seconds.</p>
<pre><code>import time
start = time.time()
with open("huge_text.txt", "w") as fout:
for i in range(1737623):
fout.write("ABCD ")
fout.write('\n')
for i in range(1737623):
fout.write("EFGH ")
fout.write('\n')
# end of with
infile = open('huge_text.txt', 'r')
outfile = open('out.txt', 'w')
for line in infile:
outfile.write(line)
outfile.close()
infile.close()
end = time.time()
print("Time elapsed: ", end - start)
"""
<Output>
Time elapsed: 1.557690143585205
"""
</code></pre>
| 1 | 2016-08-31T08:05:28Z | [
"python",
"performance",
"file"
] |
Python performance on reading files with extremely long lines | 39,243,710 | <p>I've got a file with around 6MB of data. All of the data are written in a single line. Why is the following command taking more than 15 minutes to finish? Is it normal?</p>
<pre><code>infile = open('file.txt')
outfile = open('out.txt', 'w')
for line in infile.readlines():
outfile.write(line);
</code></pre>
<p><strong>Details:</strong></p>
<p>I'm using Python 2.7.</p>
<p>Output from <strong>wc</strong>: </p>
<ul>
<li>newline count: 2</li>
<li>word count: 3475246 </li>
<li>byte count: 6951140 </li>
</ul>
<p><strong>Evaluation 1:</strong></p>
<p><em>Reference</em> code iterating over the file object directly (suggested by Ahsanul Haque and Daewon Lee):</p>
<pre><code>for line in infile:
    outfile.write(line)
</code></pre>
<p><em>Time: 959.487 secs.</em></p>
| 1 | 2016-08-31T07:49:47Z | 39,244,160 | <p>Try reading the file in chunks:</p>
<pre><code>infile = open('file.txt')
outfile = open('out.txt', 'w')
while True:
    text = infile.read(100)  # or any other chunk size
if not text:
break
    outfile.write(text)
</code></pre>
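<p>The same chunked loop can be written more idiomatically with <code>iter()</code> and a sentinel (a self-contained sketch; the demo file is illustrative):</p>

```python
from functools import partial

# create a demo input file
with open('file.txt', 'w') as f:
    f.write('EFGH ' * 1000)

with open('file.txt') as infile, open('out.txt', 'w') as outfile:
    # iter() keeps calling infile.read(100) until it returns '' (EOF)
    for chunk in iter(partial(infile.read, 100), ''):
        outfile.write(chunk)
```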
| 0 | 2016-08-31T08:14:04Z | [
"python",
"performance",
"file"
] |
Python 3 + Selenium: Clicked the element but nothing happens | 39,243,905 | <p>The element was clicked and I didn't get any error, but the popup (the "add featured photos" popup in Facebook) is still there. It is not closed.</p>
<p>This is html code:</p>
<pre><code><div class="_5lnf uiOverlayFooter _5a8u">
<table class="uiGrid _51mz uiOverlayFooterGrid" cellspacing="0" cellpadding="0">
<tbody>
<tr class="_51mx">
<td class="_51m- prs uiOverlayFooterMessage">
<td class="_51m- uiOverlayFooterButtons _51mw">
<a class="_42ft _4jy0 layerCancel uiOverlayButton _4jy3 _517h _51sy" href="#" role="button">Cancel</a>
<button class="_42ft _4jy0 layerConfirm uiOverlayButton _4jy3 _4jy1 selected _51sy" type="submit" value="1">Save</button>
</td>
</tr>
</tbody>
</table>
</div>
</code></pre>
<p>And this is my code:</p>
<pre><code>driver.find_element_by_xpath(".//button[@class='_42ft _4jy0 layerConfirm uiOverlayButton _4jy3 _4jy1 selected _51sy']")
</code></pre>
<p>How can I click the "Save" button to close the popup? Thank you very much :)</p>
| 1 | 2016-08-31T07:59:57Z | 39,244,260 | <p>Try this:</p>
<pre><code>driver.find_element_by_xpath("//button[text() = 'Save']").click()
</code></pre>
| 1 | 2016-08-31T08:18:35Z | [
"javascript",
"python",
"facebook",
"selenium",
"xpath"
] |
Python 3 + Selenium: Clicked the element but nothing happens | 39,243,905 | <p>The element was clicked and I didn't get any error, but the popup (the "add featured photos" popup in Facebook) is still there. It is not closed.</p>
<p>This is html code:</p>
<pre><code><div class="_5lnf uiOverlayFooter _5a8u">
<table class="uiGrid _51mz uiOverlayFooterGrid" cellspacing="0" cellpadding="0">
<tbody>
<tr class="_51mx">
<td class="_51m- prs uiOverlayFooterMessage">
<td class="_51m- uiOverlayFooterButtons _51mw">
<a class="_42ft _4jy0 layerCancel uiOverlayButton _4jy3 _517h _51sy" href="#" role="button">Cancel</a>
<button class="_42ft _4jy0 layerConfirm uiOverlayButton _4jy3 _4jy1 selected _51sy" type="submit" value="1">Save</button>
</td>
</tr>
</tbody>
</table>
</div>
</code></pre>
<p>And this is my code:</p>
<pre><code>driver.find_element_by_xpath(".//button[@class='_42ft _4jy0 layerConfirm uiOverlayButton _4jy3 _4jy1 selected _51sy']")
</code></pre>
<p>How can I click the "Save" button to close the popup? Thank you very much :)</p>
| 2 | 2016-08-31T07:59:57Z | 39,244,514 | <p>You can scroll to the button before clicking it:</p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
button = driver.find_element_by_xpath(".//button[@class='_42ft _4jy0 layerConfirm uiOverlayButton _4jy3 _4jy1 selected _51sy']")
ActionChains(driver).move_to_element(button).perform()
button.click()
</code></pre>
| 2 | 2016-08-31T08:30:49Z | [
"javascript",
"python",
"facebook",
"selenium",
"xpath"
] |
"CSRF token missing" with PUT/DELETE meethod rest-framework | 39,243,912 | <p>I'm using using django rest framework browsable api with ModelViewSet to do CRUD actions and want to use permissions.IsAuthenticatedOrReadOnly, but when I'm logged and try to DELETE or PUT I get
<code>"detail": "CSRF Failed: CSRF token missing or incorrect."</code></p>
<p>My view looks like this</p>
<pre><code>class objViewSet(viewsets.ModelViewSet):
queryset = obj.objects.all()
serializer_class = objSerializer
permission_classes = (permissions.IsAuthenticatedOrReadOnly,)
</code></pre>
<p>Settings.py</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.AllowAny',
),
</code></pre>
<p>Serializer is just</p>
<pre><code>class ObjSerializer(serializers.ModelSerializer):
class Meta:
model = Obj
</code></pre>
<p>However, when I delete permission_classes (so the default AllowAny applies), it works just fine.</p>
<p><strong>What I want</strong></p>
<p>To be able to PUT/DELETE only when I'm authenticated. I don't know how to send the CSRF token when everything happens automatically (ModelViewSet does all the work).</p>
| 0 | 2016-08-31T08:00:27Z | 39,250,180 | <p>You can remove the CSRF check on individual URLs. Try this in your urls.py:</p>
<pre><code>from django.views.decorators.csrf import csrf_exempt
url(r'^/my_url_to_view/', csrf_exempt(views.my_view_function), name='my_name'),
</code></pre>
| 0 | 2016-08-31T12:51:28Z | [
"python",
"django",
"rest"
] |
"CSRF token missing" with PUT/DELETE meethod rest-framework | 39,243,912 | <p>I'm using using django rest framework browsable api with ModelViewSet to do CRUD actions and want to use permissions.IsAuthenticatedOrReadOnly, but when I'm logged and try to DELETE or PUT I get
<code>"detail": "CSRF Failed: CSRF token missing or incorrect."</code></p>
<p>My view looks like this</p>
<pre><code>class objViewSet(viewsets.ModelViewSet):
queryset = obj.objects.all()
serializer_class = objSerializer
permission_classes = (permissions.IsAuthenticatedOrReadOnly,)
</code></pre>
<p>Settings.py</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.AllowAny',
),
</code></pre>
<p>Serializer is just</p>
<pre><code>class ObjSerializer(serializers.ModelSerializer):
class Meta:
model = Obj
</code></pre>
<p>However, when I delete permission_classes (so the default AllowAny applies), it works just fine.</p>
<p><strong>What I want</strong></p>
<p>To be able to PUT/DELETE only when I'm authenticated. I don't know how to send the CSRF token when everything happens automatically (ModelViewSet does all the work).</p>
| 0 | 2016-08-31T08:00:27Z | 39,251,175 | <p>You might have used SessionAuthentication, which always checks for the CSRF token. You can avoid the checks with csrf_exempt, but let's not do that.</p>
<p>I think you can keep both authentication classes, or just TokenAuthentication, i.e.:</p>
<pre><code>'DEFAULT_AUTHENTICATION_CLASSES': (
# 'rest_framework.authentication.BasicAuthentication',
'rest_framework.authentication.TokenAuthentication',
'rest_framework.authentication.SessionAuthentication',
# 'oauth2_provider.ext.rest_framework.OAuth2Authentication',
# 'rest_framework_social_oauth2.authentication.SocialAuthentication',
),
</code></pre>
<p>And if you are not going for token auth but just session auth, you can always pass the CSRF token via the X-CSRFToken header. Or you can go for csrf_exempt, which will avoid the missing-CSRF issues.</p>
<p>This should work. Also refer the links below.</p>
<p>ref: <a href="http://stackoverflow.com/a/26639895/3520792">http://stackoverflow.com/a/26639895/3520792</a></p>
<p>ref: <a href="http://stackoverflow.com/a/30875830/3520792">http://stackoverflow.com/a/30875830/3520792</a></p>
| 0 | 2016-08-31T13:35:55Z | [
"python",
"django",
"rest"
] |
"CSRF token missing" with PUT/DELETE meethod rest-framework | 39,243,912 | <p>I'm using using django rest framework browsable api with ModelViewSet to do CRUD actions and want to use permissions.IsAuthenticatedOrReadOnly, but when I'm logged and try to DELETE or PUT I get
<code>"detail": "CSRF Failed: CSRF token missing or incorrect."</code></p>
<p>My view looks like this</p>
<pre><code>class objViewSet(viewsets.ModelViewSet):
queryset = obj.objects.all()
serializer_class = objSerializer
permission_classes = (permissions.IsAuthenticatedOrReadOnly,)
</code></pre>
<p>Settings.py</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.AllowAny',
),
</code></pre>
<p>Serializer is just</p>
<pre><code>class ObjSerializer(serializers.ModelSerializer):
class Meta:
model = Obj
</code></pre>
<p>However, when I delete permission_classes (so the default AllowAny applies), it works just fine.</p>
<p><strong>What I want</strong></p>
<p>To be able to PUT/DELETE only when I'm authenticated. I don't know how to send the CSRF token when everything happens automatically (ModelViewSet does all the work).</p>
| 0 | 2016-08-31T08:00:27Z | 39,251,374 | <p>In your REST_FRAMEWORK settings you haven't mentioned the authentication scheme, so DRF uses the default, which includes SessionAuthentication. This scheme makes it mandatory for you to send the CSRF token with your requests. You can overcome this by:</p>
<ol>
<li>To make this setting for the whole project in settings.py add</li>
</ol>
<pre><code>
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.BasicAuthentication',
)}
</code></pre>
<ol start="2">
<li>To make this setting in specific view do the following in your view</li>
</ol>
<pre><code>
class objViewSet(viewsets.ModelViewSet):
queryset = obj.objects.all()
serializer_class = objSerializer
permission_classes = (permissions.IsAuthenticatedOrReadOnly,)
authentication_classes = (BasicAuthentication,)
</code></pre>
<p>source: <a href="http://www.django-rest-framework.org/api-guide/authentication/#sessionauthentication" rel="nofollow">http://www.django-rest-framework.org/api-guide/authentication/#sessionauthentication</a></p>
<p>BTW, csrf token is saved as a cookie called 'csrftoken'. You can retrieve it from HTTP Response and attach it to your request header with the key 'X-CSRFToken'. You can see some details of this on: <a href="https://docs.djangoproject.com/en/dev/ref/csrf/#ajax" rel="nofollow">https://docs.djangoproject.com/en/dev/ref/csrf/#ajax</a></p>
| 0 | 2016-08-31T13:44:01Z | [
"python",
"django",
"rest"
] |
Inclusion templatetag with counter | 39,243,933 | <h1>What I want</h1>
<p>An inclusion template tag that returns the number of times it has been used and doesn't break when using template inheritance. I tried to store the counter in the context, but it does not work as I intended.</p>
<p>base.html</p>
<pre><code>{% block body %}
{% my_tag %}<br>
{% my_tag %}<br>
{% endblock %}
</code></pre>
<p>page.html</p>
<pre><code>{% extends 'base.html' %}
{% block body %}
{{ block.super }}
{% my_tag %}<br>
{% my_tag %}<br>
{% endblock %}
</code></pre>
<p>rendered result:</p>
<pre><code>1
2
3
4
</code></pre>
<h1>What I tried</h1>
<pre><code>@register.inclusion_tag('tagtemplate.html', takes_context=True)
def my_tag(context):
counter = context.get('tag_counter', 1)
ctx = {'tag_counter': counter}
context['tag_counter'] = counter + 1
return ctx
</code></pre>
<p>And result:</p>
<pre><code>1
2
1
2
</code></pre>
<h1>And what worked</h1>
<p>I added middleware which adds the counter to the request:</p>
<pre><code>class TagCounterMiddleware(object):
def process_request(self, request):
request.tag_counter = 1
</code></pre>
<p>and changed template tag</p>
<pre><code>@register.inclusion_tag('tagtemplate.html', takes_context=True)
def my_tag(context):
    request = context['request']
ctx = {'tag_counter': request.tag_counter}
request.tag_counter += 1
return ctx
</code></pre>
<p>Thanks to @SardorbekImomaliev for his suggestion! :)</p>
| 0 | 2016-08-31T08:01:29Z | 39,245,635 | <p>I suggest putting your counter in <code>request</code>. Something like this:</p>
<pre><code># This code wasn't checked
@register.inclusion_tag('tagtemplate.html', takes_context=True)
def my_tag(context):
    request = context['request']
    counter = getattr(request, 'tag_counter', 0)
    request.tag_counter = counter + 1
return context
</code></pre>
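<p>The core idea, storing a mutable counter on the per-request object instead of in the (copied) template context, can be sketched without Django at all; the names below are illustrative, not Django APIs:</p>

```python
class FakeRequest(object):
    """Stands in for Django's HttpRequest in this sketch."""
    pass

def my_tag(request):
    # read the current count, defaulting to 0 on first use
    counter = getattr(request, 'tag_counter', 0) + 1
    request.tag_counter = counter  # survives across tag invocations
    return counter

request = FakeRequest()
print([my_tag(request) for _ in range(4)])  # -> [1, 2, 3, 4]
```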
| 0 | 2016-08-31T09:23:05Z | [
"python",
"django",
"python-2.7",
"django-templates",
"django-views"
] |
update dataframe with series | 39,243,961 | <p>Having a dataframe, I want to update a subset of columns with a series of the same length as the number of columns being updated:</p>
<pre><code>>>> df = pd.DataFrame(np.random.randint(0,5,(6, 2)), columns=['col1','col2'])
>>> df
col1 col2
0 1 0
1 2 4
2 4 4
3 4 0
4 0 0
5 3 1
>>> df.loc[:,['col1','col2']] = pd.Series([0,1])
...
ValueError: shape mismatch: value array of shape (6,) could not be broadcast to indexing result of shape (2,6)
</code></pre>
<p>It fails; however, I am able to do the same thing using a list:</p>
<pre><code>>>> df.loc[:,['col1','col2']] = list(pd.Series([0,1]))
>>> df
col1 col2
0 0 1
1 0 1
2 0 1
3 0 1
4 0 1
5 0 1
</code></pre>
<p>Could you please help me understand why updating with a series fails? Do I have to perform some particular reshaping?</p>
| 4 | 2016-08-31T08:03:21Z | 39,244,289 | <p>When assigning with a pandas object, pandas treats the assignment more "rigorously". A pandas to pandas assignment must pass stricter protocols. Only when you turn it to a list (or equivalently <code>pd.Series([0, 1]).values</code>) did pandas give in and allow you to assign in the way you'd imagine it should work.</p>
<p>That higher standard of assignment requires that the indices line up as well, so even if you had the right shape, it still wouldn't have worked without the correct indices.</p>
<pre><code>df.loc[:, ['col1', 'col2']] = pd.DataFrame([[0, 1] for _ in range(6)])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/W304c.png" rel="nofollow"><img src="http://i.stack.imgur.com/W304c.png" alt="enter image description here"></a></p>
<pre><code>df.loc[:, ['col1', 'col2']] = pd.DataFrame([[0, 1] for _ in range(6)], columns=['col1', 'col2'])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/9DUi8.png" rel="nofollow"><img src="http://i.stack.imgur.com/9DUi8.png" alt="enter image description here"></a></p>
| 3 | 2016-08-31T08:19:36Z | [
"python",
"pandas",
"dataframe"
] |
More effective way of passing several arguments to script? | 39,244,007 | <p>My command line script takes several arguments including strings, ints, floats and lists, and it is becoming a bit tricky to do the call with all arguments:</p>
<pre><code>python myscript.py arg1 arg2 arg3 ... argN
</code></pre>
<p>To avoid having to write all arguments on the command line I have created a text file with all arguments which I simply read line by line to collect arguments. This is working fine, but is this the best practice for doing this? Is there a more effective way?</p>
| 0 | 2016-08-31T08:05:41Z | 39,244,085 | <p>You can use the <code>argparse</code> module. Refer to the following links :)</p>
<p><a href="https://pymotw.com/2/argparse" rel="nofollow">https://pymotw.com/2/argparse</a></p>
<p><a href="http://stackoverflow.com/questions/7427101/dead-simple-argparse-example-wanted-1-argument-3-results">Dead simple argparse example wanted: 1 argument, 3 results</a></p>
| 1 | 2016-08-31T08:09:43Z | [
"python",
"arguments",
"parameter-passing"
] |
More effective way of passing several arguments to script? | 39,244,007 | <p>My command line script takes several arguments including strings, ints, floats and lists, and it is becoming a bit tricky to do the call with all arguments:</p>
<pre><code>python myscript.py arg1 arg2 arg3 ... argN
</code></pre>
<p>To avoid having to write all arguments on the command line I have created a text file with all arguments which I simply read line by line to collect arguments. This is working fine, but is this the best practice for doing this? Is there a more effective way?</p>
| 0 | 2016-08-31T08:05:41Z | 39,244,280 | <p>For me, Python's getopt is the way to go.</p>
<pre><code>#!/usr/bin/python
import sys, getopt
def main(argv):
inputfile = ''
outputfile = ''
try:
opts, args = getopt.getopt(argv,"hi:o:",["ifile=","ofile="])
except getopt.GetoptError:
print 'test.py -i <inputfile> -o <outputfile>'
sys.exit(2)
for opt, arg in opts:
if opt == '-h':
print 'test.py -i <inputfile> -o <outputfile>'
sys.exit()
elif opt in ("-i", "--ifile"):
inputfile = arg
elif opt in ("-o", "--ofile"):
outputfile = arg
print 'Input file is "', inputfile
print 'Output file is "', outputfile
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
<p>Usage:</p>
<pre><code>usage: test.py -i <inputfile> -o <outputfile>
</code></pre>
<p>It's easier to run when you have a name for the argument.</p>
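<p>The option parsing itself can be exercised without the command line (a Python 3 sketch):</p>

```python
import getopt

# getopt collects recognised options until the first non-option argument
opts, args = getopt.getopt(['-i', 'in.txt', '-o', 'out.txt', 'rest'],
                           'hi:o:', ['ifile=', 'ofile='])
print(opts)  # -> [('-i', 'in.txt'), ('-o', 'out.txt')]
print(args)  # -> ['rest']
```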
<p>Ref: <a href="http://www.tutorialspoint.com/python/python_command_line_arguments.htm" rel="nofollow">http://www.tutorialspoint.com/python/python_command_line_arguments.htm</a></p>
| 1 | 2016-08-31T08:19:14Z | [
"python",
"arguments",
"parameter-passing"
] |
Python: Sorting 2d arrays generated in a loop causing errors | 39,244,063 | <p>This program randomly generates weights for several artificial neural networks using a loop, calculates the output of the network and appends that to the end of each array, then appends the cost to the end of each array. When run without sorting the arrays this program does all calculations correctly, but when I try to sort the array by the last item in each array it seems like it breaks the loop even though the sort function is called after all the other functions.</p>
<pre><code>import math
import random
import msvcrt as m  # assuming Windows; provides m.getch() used in wait()
w, h = 9, 10
network = [[0 for x in range(w)] for y in range(h)]
def sigmoid(sigin):
return 1 / (1 + math.exp(-sigin))
def netcal(x):
network[x].append(sigmoid((sigmoid(i1*network[x][0]+i2*network[x][1])*network[x][6])+(sigmoid(i1*network[x][2]+i2*network[x][3])*network[x][7])+(sigmoid(i1*network[x][4]+i2*network[x][5])*network[x][8])))
def seed():
b = 0
while b < 10:
y = 0
while y < 9:
network[b][y] = random.random()
y += 1
b += 1
def calall():
c = 0
while c < 9:
netcal(c)
print(network[c][9])
c += 1
def cost():
d = 0
while d < 9:
network[d].append(1 - network[d][9])
print(network[d][10])
d += 1
def sort():
sorted(network, key=lambda x: x[10])
def wait():
m.getch()
i1=0
i2=1
seed()
calall()
print("break")
cost()
sort()
print(network)
</code></pre>
| 2 | 2016-08-31T08:08:17Z | 39,244,131 | <p>Your current approach sorts the list but does not <code>return</code> the result nor update the network. Note that <code>sorted</code> <em>returns</em> a sorted list. </p>
<p>You want an <em>in-place</em> sort:</p>
<pre><code>def sort():
    network.sort(key=lambda x: x[-1]) # or x[10], the cost appended last
# ^^
</code></pre>
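<p>The difference is easy to see on a toy list (a sketch):</p>

```python
data = [[3], [1], [2]]

new = sorted(data, key=lambda x: x[-1])  # returns a new list
print(data)  # -> [[3], [1], [2]]  (unchanged)
print(new)   # -> [[1], [2], [3]]

data.sort(key=lambda x: x[-1])           # sorts in place
print(data)  # -> [[1], [2], [3]]
```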
| 0 | 2016-08-31T08:12:23Z | [
"python",
"neural-network"
] |
Google App Engine, start / stop continuously running script on button click | 39,244,091 | <p>I am looking for guidance on how to achieve start/stop functionality for a function that makes REST API calls to retrieve data in JSON format continuously, by clicking a button on a rendered HTML page served through <code>webapp2</code> and hosted on GAE.</p>
<p>The current behaviour is that once the HTTP request is completed, the function that was called of course stops (the <code>while self._running == True</code> loop ends), which is normal behaviour according to the GAE documentation.</p>
<p>main.py :</p>
<pre><code>#!/usr/bin/env python
#
import webapp2
from google.appengine.api import urlfetch
from matplotlib.path import Path as mpPath
import json
import base64
import socket
import logging
from threading import Thread
import os
import jinja2
template_dir = os.path.join(os.path.dirname(__file__))
jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir),
extensions=['jinja2.ext.autoescape'], autoescape=True)
# create a UDP socket for sending the commands
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# create the stop / start functionality of a thread that runs infinite until
# thread is terminated
class CMX:
def __init__(self):
self._running = True
def terminate(self):
self._running = False
def run(self):
storedCredentials = False
username = None
password = None
ip_address = 'someip' # commands are send to the broadcast address for addressing the zones(bays)
port = someport # use of port 50011 because no return packet is send back, could cause the
# lights not to execute the command when using broadcast.
# define the boundaries of the different zones
zone = [[(106.03,141.19),(158.94,141.19),(158.94,194.50),(106.03,194.50)],
[(103.76,168),(62.26,168),(62.26,77.86),(103.67,77.86)],
[(106.38,77.86),(191.95,77.86),(191.95,106.52),(106.38,106.52)]]
flag_zone_1 = False
flag_zone_2 = False
flag_zone_3 = False
while self._running == True:
restURL = 'http://someurl'
print restURL
if not storedCredentials:
username = 'username'
password = 'password'
storedCredentials = True
try:
request = urlfetch.fetch(url = restURL, headers={"Authorization": "Basic %s" % base64.b64encode(username +':'+ password)})
<perform actions and other function calls>
.
.
except urlfetch.Error:
logging.exception('Caught exception fetching url')
class Handler(webapp2.RequestHandler):
def write(self, *a, **kw):
self.response.out.write(*a, **kw)
def render_str(self, template, **params):
t = jinja_env.get_template(template)
return t.render(params)
def render(self, template, **kw):
self.write(self.render_str(template, **kw))
class MainPage(Handler):
def get(self):
button = "Start Demo"
running = False
self.render('page.html', button = button, run = running)
def post(self):
startDemo = CMX()
t = Thread(target=startDemo.run, args=())
t.daemon = True
if self.request.get('button') == "Start Demo":
button = "Stop Demo"
running = True
self.render('page.html', button = button, run = running)
t.start()
else:
button = "Start Demo"
running = False
self.render('page.html', button = button, run = running)
startDemo.terminate()
def which_zone(xcoord, ycoord, zone):
point = (xcoord, ycoord)
in_zone_1 = mpPath(zone[0]).contains_point(point)
in_zone_2 = mpPath(zone[1]).contains_point(point)
in_zone_3 = mpPath(zone[2]).contains_point(point)
if in_zone_1 == True:
return "Zone 1"
elif in_zone_2 == True:
return "Zone 2"
elif in_zone_3 == True:
return "Zone 3"
def dim_lights(ip_address, port, control_string, sock):
control_string = control_string + 'S0F10' +'\r'
#sock.sendto(control_string, (ip_address, port))
return control_string
def norm_lights(ip_address, port, control_string, sock):
control_string = control_string + 'S255F10' +'\r'
#sock.sendto(control_string, (ip_address, port))
return control_string
app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
</code></pre>
<p>page.html :</p>
<pre><code>{% extends "base.html" %}
{% block comment %}
{% autoescape true %}
<form method="post">
<div>
<input type="submit" name="button" value="{{button}}">
<input type="hidden" name="run" value="{{run}}">
</div>
<!-- <div>macAddress: <input type="text" name="macAddress"><br>
<input type="submit" value="Submit">
</div> -->
</form>
{% endautoescape %}
{% endblock %}
</code></pre>
| 1 | 2016-08-31T08:10:13Z | 39,252,758 | <p>The start/stop functionality is simple - just make the button control something like an <code>operation_is_stopped</code> flag persisted across requests (in the datastore, for example).</p>
<p>In case you didn't realize it yet your difficulty really comes from achieving the <strong>continuous</strong> operation that you want to control with that button. That's what's not really compatible with GAE - everything in GAE revolves around responding to requests, <em>in a limited amount of time</em>. You can not really have indefinitely-running proceses/threads in GAE.</p>
<p>But in many cases it's possible to implement a long-running, iterative continuous operation (like yours) as a flow of short-lived operations. In GAE that can be easily achieved using the <a href="https://cloud.google.com/appengine/docs/python/taskqueue/" rel="nofollow">task queues</a> - each iteration (in your case the body of the <code>while self._running == True</code> loop) is implemented as a response to a task queue request. </p>
<p>The flow is started by enqueueing a respective task when the "start" action is triggered. The flow is maintained by enqueueing a respective task after processing of a previous task request. And it's stopped by <strong>not</strong> enqueueing a new task :)</p>
<p>Something along these lines:</p>
<pre><code>def post(self): # handler for the "long running" task requests
# need to rebuild/restore your request context every time
...
try:
request = urlfetch.fetch(...)
<perform actions and other function calls>
except:
...
finally:
if not operation_is_stopped:
# enqueue another "long running" task
task = taskqueue.add(...)
</code></pre>
| 0 | 2016-08-31T14:46:10Z | [
"python",
"google-app-engine"
] |
Django CMS absolute url for Elasticsearch | 39,244,232 | <p>I wrote a management command for a Django app like this:</p>
<pre><code>for p in Article.public.all():
data += '{"index": {"_id": "%s", "_type": "article"}}\n' % p.pk
data += json.dumps({
"title": p.title,
"category": p.category.category,
"content_type": p.content_type,
"duration": p.get_audio_duration(),
"thumbnail": p.get_thumbnail(),
"date": datetime.strftime(p.date, '%d.%m.%Y'),
"url": p.get_absolute_url(),
"content": p.content
}) + '\n'
response = requests.put('{}/radio_index/_bulk'.format(settings.ES_URL), data=data)
</code></pre>
<p>The <code>get_absolute_url</code> method looks like this:</p>
<pre><code> def get_absolute_url(self):
return reverse('tv_article_hook:article_tv_detail', kwargs={'slug': self.slug})
</code></pre>
<p>I am using Django CMS apphooks. Apphook urls are:</p>
<pre><code>urlpatterns = patterns('',
url(r'^$', ArticleTvListView.as_view(), name='article_tv_list'),
url(r'^(?P<slug>[^/]+)/$', ArticleTvView.as_view(), name='article_tv_detail')
)
</code></pre>
<p>When I use <code>get_absolute_url</code> in a template or in the REST API it works fine. But when I run the command via <code>manage.py feed_index</code>, the code fails on <code>p.get_absolute_url</code> with this error:</p>
<p><code>django.core.urlresolvers.NoReverseMatch: Reverse for 'article_tv_detail' with arguments '()' and keyword arguments '{'slug': 'some slug'}' not found. 0 pattern(s) tried: []</code></p>
<p>How can I solve this problem?</p>
| 2 | 2016-08-31T08:17:30Z | 39,407,839 | <p>You will need to ensure that Django itself, not just CMS, knows the app's namespace, <code>tv_article_hook</code>.</p>
<p>I suspect that is registered with CMS, but not a namespace which Django knows about. Now depending on what version of Django you're using there's a couple of ways to define the namespace for Django. But have a read of the <a href="https://docs.djangoproject.com/en/1.10/ref/applications/#for-application-authors" rel="nofollow">application docs</a>, essentially you can define your own app config like so;</p>
<pre><code># tv_article/__init__.py
default_app_config = 'tv_article.apps.TVArticleConfig'

# tv_article/apps.py
from django.apps import AppConfig

class TVArticleConfig(AppConfig):
    name = 'tv_article'
    verbose_name = "TV Article"
</code></pre>
<p>With that setup, you should then be able to modify your code like so (removing '_hook' from your CMS namespace as well, because that's a CMS term, but we're just talking about a Django app here):</p>
<pre><code>def get_absolute_url(self):
return reverse('tv_article:article_tv_detail', kwargs={'slug': self.slug})
</code></pre>
| 0 | 2016-09-09T09:06:43Z | [
"python",
"django",
"elasticsearch",
"django-cms"
] |
How to manually invoke wxPython event | 39,244,310 | <p>I have a <code>textCTRL</code> (Wxpython) with event binding to it:</p>
<pre><code>self.x= wx.TextCtrl(self, -1, "")
self.x.Bind(wx.EVT_KILL_FOCUS, self.OnLeavex)
</code></pre>
<p>I want to trigger this event manually.
I read this topic: <a href="http://stackoverflow.com/questions/747781/wxpython-calling-an-event-manually">wxPython: Calling an event manually</a>, but nothing works.</p>
<p>I tried:</p>
<pre><code>wx.PostEvent(self.x.GetEventHandler(), wx.EVT_KILL_FOCUS)
</code></pre>
<p>But it gives:</p>
<blockquote>
<p>TypeError: in method 'PostEvent', expected argument 2 of type 'wxEvent
&'</p>
</blockquote>
<p>I also tried:</p>
<pre><code>self.x.GetEventHandler().ProcessEvent(wx.EVT_KILL_FOCUS)
</code></pre>
<p>Which doesn't work either.</p>
| 1 | 2016-08-31T08:20:32Z | 39,255,739 | <p>The things like <code>wx.EVT_KILL_FOCUS</code> are not the event object needed here. Those are instances of <code>wx.PyEventBinder</code> which, as the name suggests, are used to bind events to handlers. The event object needed for the <code>PostEvent</code> or <code>ProcessEvent</code> functions will be the same type of object as what is received by the event handler functions. In this case it would be an instance of <code>wx.FocusEvent</code>. </p>
<p>When creating the event object you may also need to set the event type if that event class is used with more than one type of event. The binder object has that value to help you know what to use. You usually also need to set the ID to the ID of the window where the event originated. So for your example, it would be done something like this:</p>
<pre><code>evt = wx.FocusEvent(wx.EVT_KILL_FOCUS.evtType, self.x.GetId())
wx.PostEvent(self.x.GetEventHandler(), evt)
</code></pre>
<p>...or...</p>
<pre><code>self.x.GetEventHandler().ProcessEvent(evt)
</code></pre>
| 0 | 2016-08-31T17:34:18Z | [
"python",
"wxpython"
] |
python repeat list elements in an iterator | 39,244,320 | <p>Is there any way to create an iterator that repeats the elements of a list a certain number of times? For example, given a list:</p>
<pre><code>color = ['r', 'g', 'b']
</code></pre>
<p>Is there a way to create an iterator in the form of <code>itertools.repeatlist(color, 7)</code> that can produce the following list?</p>
<pre><code>color_list = ['r', 'g', 'b', 'r', 'g', 'b', 'r']
</code></pre>
| 0 | 2016-08-31T08:21:01Z | 39,244,356 | <p>You can use <a href="https://docs.python.org/2/library/itertools.html#itertools.cycle"><code>itertools.cycle()</code></a> together with <a href="https://docs.python.org/2/library/itertools.html#itertools.islice"><code>itertools.islice()</code></a> to build your <code>repeatlist()</code> function:</p>
<pre><code>from itertools import cycle, islice
def repeatlist(it, count):
return islice(cycle(it), count)
</code></pre>
<p>This returns a new iterator; call <code>list()</code> on it if you must have a list object.</p>
<p>Demo:</p>
<pre><code>>>> from itertools import cycle, islice
>>> def repeatlist(it, count):
... return islice(cycle(it), count)
...
>>> color = ['r', 'g', 'b']
>>> list(repeatlist(color, 7))
['r', 'g', 'b', 'r', 'g', 'b', 'r']
</code></pre>
| 8 | 2016-08-31T08:22:54Z | [
"python",
"itertools"
] |
python repeat list elements in an iterator | 39,244,320 | <p>Is there any way to create an iterator that repeats the elements of a list a certain number of times? For example, given a list:</p>
<pre><code>color = ['r', 'g', 'b']
</code></pre>
<p>Is there a way to create an iterator in the form of <code>itertools.repeatlist(color, 7)</code> that can produce the following list?</p>
<pre><code>color_list = ['r', 'g', 'b', 'r', 'g', 'b', 'r']
</code></pre>
| 0 | 2016-08-31T08:21:01Z | 39,244,434 | <p>Try:</p>
<pre><code>def repeat_list(l, n):
for i in range(n): yield l[i%len(l)]
</code></pre>
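<p>A quick check of the generator (restated here so the snippet is self-contained):</p>

```python
def repeat_list(l, n):
    # yields n items, cycling through l by index
    for i in range(n):
        yield l[i % len(l)]

color = ['r', 'g', 'b']
print(list(repeat_list(color, 7)))  # → ['r', 'g', 'b', 'r', 'g', 'b', 'r']
```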
| -1 | 2016-08-31T08:26:17Z | [
"python",
"itertools"
] |
Python - Sending Arabic E-Mails using smtplib | 39,244,358 | <p>I'm trying to send an email including Arabic and Persian characters, using smtplib. The following is my function:</p>
<pre><code>def send_email (admin, pwd, user, message):
server = smtplib.SMTP('smtp.gmail.com', 587)
server.ehlo()
server.starttls()
server.login(admin, pwd)
server.sendmail(admin, user, message)
server.close()
return True
send_email('sender@example.com', 'example', 'reciever@example.com', 'کاراکتر فارسی و عربی Persian and Arabic Characters')
</code></pre>
<p>and, I get the following error:</p>
<pre><code>msg = _fix_eols(msg).encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
</code></pre>
<p>Any ideas on how to fix it?</p>
| 0 | 2016-08-31T08:22:55Z | 39,245,577 | <p>Try encoding the message with <code>.encode('UTF-8')</code> before passing it to <code>sendmail</code>; hope it'll help.</p>
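<p>A fuller sketch using the standard <code>email</code> package (Python 3 syntax; the addresses and subject here are placeholders): building the message with an explicit UTF-8 charset makes <code>as_string()</code> return pure ASCII, which is what <code>sendmail</code> expects.</p>

```python
from email.header import Header
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    # An explicit UTF-8 charset makes MIMEText base64-encode the body,
    # so the serialized message is plain ASCII -- which is all that
    # smtplib's sendmail() accepts for str input.
    msg = MIMEText(body, "plain", "utf-8")
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = Header(subject, "utf-8")
    return msg

msg = build_message("sender@example.com", "receiver@example.com",
                    "تست", "کاراکتر فارسی و عربی")
wire = msg.as_string()  # ASCII-safe; pass this to server.sendmail()
```

<p>The connection code from the question stays the same; only the message construction changes.</p>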
| 1 | 2016-08-31T09:20:54Z | [
"python",
"email",
"smtplib"
] |
Python - Sending Arabic E-Mails using smtplib | 39,244,358 | <p>I'm trying to send an email including Arabic and Persian characters, using smtplib. The following is my function:</p>
<pre><code>def send_email (admin, pwd, user, message):
server = smtplib.SMTP('smtp.gmail.com', 587)
server.ehlo()
server.starttls()
server.login(admin, pwd)
server.sendmail(admin, user, message)
server.close()
return True
send_email('sender@example.com', 'example', 'reciever@example.com', 'کاراکتر فارسی و عربی Persian and Arabic Characters')
</code></pre>
<p>and, I get the following error:</p>
<pre><code>msg = _fix_eols(msg).encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
</code></pre>
<p>Any ideas on how to fix it?</p>
| 0 | 2016-08-31T08:22:55Z | 39,246,290 | <p>The following code should solve your problem:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import smtplib
import email.mime.text
def send_email (admin, pwd, user, message):
server = smtplib.SMTP('smtp.gmail.com', 587)
server.ehlo()
server.starttls()
server.login(admin, pwd)
server.sendmail(admin, user, message)
server.close()
return True
msg = email.mime.text.MIMEText("پایتون", _charset="UTF-8")
print send_email('send@gmail.com', 'passwd', 'rec@gmail.com', msg.as_string())
</code></pre>
| 0 | 2016-08-31T09:50:46Z | [
"python",
"email",
"smtplib"
] |
Python regular expression(.map file parsing) | 39,244,364 | <p>I want to make a program that parses a .map file, and I can't figure out what regular expression I should use to identify the name of a section.
<br><code>.text 00040000 00000d7e</code><br>
I want to get the <code>.text</code> string from this line. There is no other character (not even whitespace) before it. What regular expression should I use?</p>
| 1 | 2016-08-31T08:23:08Z | 39,244,452 | <p>Simply use <code>split</code>:</p>
<pre><code>yourString.split(' ')[0]
</code></pre>
<p>It will print:</p>
<pre><code>.text
</code></pre>
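<p>If a regular expression is preferred, as the question asks, a minimal sketch:</p>

```python
import re

line = ".text 00040000 00000d7e"
# ^(\S+) anchors at the start of the line and captures the first run of
# non-whitespace characters, i.e. the section name.
m = re.match(r"^(\S+)", line)
section = m.group(1) if m else None
print(section)  # → .text
```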
| 1 | 2016-08-31T08:27:11Z | [
"python",
"expression"
] |
Pyplot move origin with axis | 39,244,380 | <p>I have some code for a plot I want to create:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# data
X = np.linspace(-1, 3, num=50, endpoint=True)
b = 2.0
Y = X + b
# plot stuff
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1)
ax.set_title('linear neuron')
# move axes
ax.spines['left'].set_position(('axes', 0.30))
# ax.spines['left'].set_smart_bounds(True)
ax.yaxis.set_ticks_position('left')
ax.spines['bottom'].set_position(('axes', 0.30))
# ax.spines['bottom'].set_smart_bounds(True)
ax.xaxis.set_ticks_position('bottom')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# title
title = ax.set_title('Linear Neuron', y=1.10)
# axis ticks
# ax.set_xticklabels([0 if item == 0 else '' for item in X])
# ax.set_yticklabels([])
# for tick in ax.xaxis.get_majorticklabels():
# tick.set_horizontalalignment('left')
# ax.tick_params(axis=u'both', which=u'both',length=0)
# axis labels
ax.xaxis.set_label_coords(1.04, 0.30 - 0.025)
ax.yaxis.set_label_coords(0.30 - 0.03, 1.04)
y_label = ax.set_ylabel('output')
y_label.set_rotation(0)
ax.set_xlabel('input')
# ax.get_xaxis().set_visible(False)
# ax.get_yaxis().set_visible(False)
# grid
ax.grid(True)
ax.plot(X, Y, '-', linewidth=1.5)
fig.tight_layout()
fig.savefig('plot.pdf')
</code></pre>
<p>In this plot the x and y axis are moved. However, the origin is not moved with then, as one can see from the ticks and ticklabels.</p>
<p>How can I always move the origin with the x and y axis?</p>
<p>I guess it would be the same as simply looking at another area of the plot, so that the x and y axis are at the lower left but not in the corner as they usually are.</p>
<p>To visualize this:</p>
<p><a href="http://i.stack.imgur.com/l3zQp.png" rel="nofollow"><img src="http://i.stack.imgur.com/l3zQp.png" alt="current situation"></a></p>
<p>What I want:</p>
<p><a href="http://i.stack.imgur.com/J7iyT.png" rel="nofollow"><img src="http://i.stack.imgur.com/J7iyT.png" alt="enter image description here"></a></p>
<p>Where the arrow points to the x and y axis intersection, I want to have the origin, <code>(0|0)</code>. Where the dashed arrow points upwards I want the line to move upwards, so that it is still mathematically at the correct position, when the origin moves.</p>
<p><em>(the final result of the efforts can be found <a href="https://gist.github.com/ZelphirKaltstahl/270df543207174f8ed82c0970d67b62e" rel="nofollow">here</a>)</em></p>
| 0 | 2016-08-31T08:23:50Z | 39,245,835 | <p>You've done a lot of manual tweaking of where each thing goes, so the solution is not very portable. But here it is: remove the <code>ax.spines['bottom'].set_position</code> and <code>ax.xaxis.set_label_coords</code> calls from your original code, and add this instead:</p>
<pre><code>ax.set_ylim(-1, 6)
ax.spines['bottom'].set_position('zero')
xlabel = ax.xaxis.get_label()
lpos = xlabel.get_position()
xlabel.set_position((1.04, lpos[1]))
</code></pre>
<p>The "bring origin up" was really accomplished by just <code>ax.set_ylim</code>, the rest is to get your labels where you want them.</p>
| 1 | 2016-08-31T09:32:03Z | [
"python",
"matplotlib",
"axis-labels"
] |
how to get the data from the table with python selenium if the class and xpath are always changing? | 39,244,407 | <pre><code><table border="0" cellpadding="0" cellspacing="0" class="A490847cc28c94895bcf96e98abdb2b32209xB">
<tbody>
<tr>
<td style="vertical-align:top">
<table border="0" cellpadding="0" cellspacing="0" class="A490847cc28c94895bcf96e98abdb2b32206" cols="8" style="border-collapse:collapse;">
<tbody>
<tr height="0">
<td style="WIDTH:56.91mm;min-width:56.91mm">
</td>
</tr>
<tr valign="top">
<td class="A490847cc28c94895bcf96e98abdb2b32126c Pe5cd046a24fa44ef80e79727141463c6_1_r7">
<div class="A490847cc28c94895bcf96e98abdb2b32126">
Total Tutorials
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32144c Pe5cd046a24fa44ef80e79727141463c6_1_r7 Pe5cd046a24fa44ef80e79727141463c6_1_r6">
<div class="A490847cc28c94895bcf96e98abdb2b32144">
<div style="WIDTH:19.98mm;">
<div class="A490847cc28c94895bcf96e98abdb2b32141">
<span class="A490847cc28c94895bcf96e98abdb2b32140">
Attendance
</span>
</div>
<div class="A490847cc28c94895bcf96e98abdb2b32143">
<span class="A490847cc28c94895bcf96e98abdb2b32142">
%
</span>
</div>
</div>
</div>
</td>
</tr>
<tr valign="top">
<td class="A490847cc28c94895bcf96e98abdb2b32149cl Pe5cd046a24fa44ef80e79727141463c6_1_r5" style="HEIGHT:6.35mm;">
<div class="A490847cc28c94895bcf96e98abdb2b32149">
CSFf
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32153cr">
<div class="A490847cc28c94895bcf96e98abdb2b32153">
9
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32157cr">
<div class="A490847cc28c94895bcf96e98abdb2b32157">
7
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32161cl">
<div class="A490847cc28c94895bcf96e98abdb2b32161">
0
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32165cl">
<div class="A490847cc28c94895bcf96e98abdb2b32165">
0
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32169cr">
<div class="A490847cc28c94895bcf96e98abdb2b32169">
4.0000
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32173cr">
<div class="A490847cc28c94895bcf96e98abdb2b32173">
4.0000
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32177cr Pe5cd046a24fa44ef80e79727141463c6_1_r6">
<div class="A490847cc28c94895bcf96e98abdb2b32177">
84.62
</div>
</td>
</code></pre>
<p>I am working with the Python Selenium WebDriver. I want to loop through the table for the text inside the <code>td</code> inside the <code>tr</code> tag, <a href="http://stackoverflow.com/questions/11533982/how-to-read-table-data-using-selenium-python">like this</a>.<br>The problem here is that the class name is always changing, and the XPath and CSS selector are also changing. An example XPath looks like this: <code>.//*[@id='P825048fc6b084257a601fde4805c8c33_1_oReportCell']/table/tbody/tr[2]/td/table/tbody/tr/td/table/tbody/tr[3]/td[8]/div</code>.<br>But the <code>id</code> is always changing, so I couldn't apply <code>driver.find_element_by_id()</code>. I think regular expressions or BeautifulSoup could be used to solve this. I am a beginner with regex, so is there any way this could be solved?</p>
| -1 | 2016-08-31T08:25:09Z | 39,244,776 | <p>You might be able to use this xpath, not sure if it's 100% correct:</p>
<pre><code>//div[text() = 'CSFf']/ancestor::tr/td[not(descendant::div[text() = 'CSFf'])]/div
</code></pre>
<p>To make it variable:</p>
<pre><code>public String getXpathForTableThing(String searchString) {
return "//div[text() = '"+searchString+"']/ancestor::tr/td[not(descendant::div[text() = '"+searchString+"'])]/div";
}
</code></pre>
<p>I'm not experienced with python but I'm sure you get the idea and can transform this to a python function.</p>
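<p>For reference, a direct Python translation might look like this (the XPath is the one proposed above and is untested against the real page):</p>

```python
def xpath_for_table_row(search_string):
    # Mirrors the Java helper: select the <div>s in the other cells of
    # the row whose first cell's <div> text equals `search_string`.
    return ("//div[text() = '{0}']/ancestor::tr"
            "/td[not(descendant::div[text() = '{0}'])]/div"
            .format(search_string))

print(xpath_for_table_row("CSFf"))
```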
| 0 | 2016-08-31T08:42:53Z | [
"python",
"regex",
"selenium-webdriver",
"beautifulsoup"
] |
how to get the data from the table with python selenium if the class and xpath are always changing? | 39,244,407 | <pre><code><table border="0" cellpadding="0" cellspacing="0" class="A490847cc28c94895bcf96e98abdb2b32209xB">
<tbody>
<tr>
<td style="vertical-align:top">
<table border="0" cellpadding="0" cellspacing="0" class="A490847cc28c94895bcf96e98abdb2b32206" cols="8" style="border-collapse:collapse;">
<tbody>
<tr height="0">
<td style="WIDTH:56.91mm;min-width:56.91mm">
</td>
</tr>
<tr valign="top">
<td class="A490847cc28c94895bcf96e98abdb2b32126c Pe5cd046a24fa44ef80e79727141463c6_1_r7">
<div class="A490847cc28c94895bcf96e98abdb2b32126">
Total Tutorials
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32144c Pe5cd046a24fa44ef80e79727141463c6_1_r7 Pe5cd046a24fa44ef80e79727141463c6_1_r6">
<div class="A490847cc28c94895bcf96e98abdb2b32144">
<div style="WIDTH:19.98mm;">
<div class="A490847cc28c94895bcf96e98abdb2b32141">
<span class="A490847cc28c94895bcf96e98abdb2b32140">
Attendance
</span>
</div>
<div class="A490847cc28c94895bcf96e98abdb2b32143">
<span class="A490847cc28c94895bcf96e98abdb2b32142">
%
</span>
</div>
</div>
</div>
</td>
</tr>
<tr valign="top">
<td class="A490847cc28c94895bcf96e98abdb2b32149cl Pe5cd046a24fa44ef80e79727141463c6_1_r5" style="HEIGHT:6.35mm;">
<div class="A490847cc28c94895bcf96e98abdb2b32149">
CSFf
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32153cr">
<div class="A490847cc28c94895bcf96e98abdb2b32153">
9
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32157cr">
<div class="A490847cc28c94895bcf96e98abdb2b32157">
7
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32161cl">
<div class="A490847cc28c94895bcf96e98abdb2b32161">
0
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32165cl">
<div class="A490847cc28c94895bcf96e98abdb2b32165">
0
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32169cr">
<div class="A490847cc28c94895bcf96e98abdb2b32169">
4.0000
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32173cr">
<div class="A490847cc28c94895bcf96e98abdb2b32173">
4.0000
</div>
</td>
<td class="A490847cc28c94895bcf96e98abdb2b32177cr Pe5cd046a24fa44ef80e79727141463c6_1_r6">
<div class="A490847cc28c94895bcf96e98abdb2b32177">
84.62
</div>
</td>
</code></pre>
<p>I am working with the Python Selenium WebDriver. I want to loop through the table for the text inside the <code>td</code> inside the <code>tr</code> tag, <a href="http://stackoverflow.com/questions/11533982/how-to-read-table-data-using-selenium-python">like this</a>.<br>The problem here is that the class name is always changing, and the XPath and CSS selector are also changing. An example XPath looks like this: <code>.//*[@id='P825048fc6b084257a601fde4805c8c33_1_oReportCell']/table/tbody/tr[2]/td/table/tbody/tr/td/table/tbody/tr[3]/td[8]/div</code>.<br>But the <code>id</code> is always changing, so I couldn't apply <code>driver.find_element_by_id()</code>. I think regular expressions or BeautifulSoup could be used to solve this. I am a beginner with regex, so is there any way this could be solved?</p>
| -1 | 2016-08-31T08:25:09Z | 39,250,488 | <p>The parent table's <code>id</code> is only partially unique: the number changes every time, but <code>ReportCell</code> is always appended to it, so we can use that to find its children like this:</p>
<pre><code>//*[contains(@id,'oReportCell')]/table/tbody/tr[2]/td/table/tbody/tr/td/table/tbody/tr[3]/td[1]
</code></pre>
| 1 | 2016-08-31T13:05:00Z | [
"python",
"regex",
"selenium-webdriver",
"beautifulsoup"
] |
How to resolve complex struct from a JPEG file using Python? | 39,244,494 | <p>The tail of the JPEG file contains a complex structure, such as the following:</p>
<pre><code>FFD8...........................
...............................
............FFD9 1C01 0000 ....
...............................
struct definitions in the C file are:
typedef struct
{
short wYear;
short wMonth;
short wDayOfWeek;
short wDay;
short wHour;
short wMinute;
short wSecond;
short wMilliseconds;
}SYSTEMTIME;
typedef struct
{
int nOcrResult;
char szPlateText[16];
char szPlateColor[8];
char szCarColor[8];
RECT rtPlate;
} OCR_PLATE;
typedef struct
{
unsigned int size;
unsigned char nLane[4];
unsigned char nImageFalgs[4];
unsigned int nRandom[4];
unsigned char nIndex[4];
unsigned char nImageIndex[4];
unsigned char nTotalCount[4];
unsigned char nTrigerNow[4];
unsigned char nCarSpeed[4];
unsigned char nLimitSpeed[4];
unsigned char nDelayFrame[4];
OCR_PLATE struPlate[4];
SYSTEMTIME stTime;
unsigned int szFlags;
} IMAGE_CAPTURE_INFO;
</code></pre>
<p>And in Python, I have wrote some class using ctype library:</p>
<pre><code> class POINT(Structure):
_fields_ = [("x", c_int),("y", c_int)]
class RECT(Structure):
_fields_ = [("left", c_int),("top", c_int),("right", c_int),
("bottom", c_int)]
class OCR_PLATE(Structure):
_fields_ = [("nOcrResult", c_int),
("szPlateText", c_char * 16),
("szPlateColor", c_char * 8),
("szCarColor", c_char * 8),
("rtPlate", RECT)]
class SYSTEMTIME(Structure):
_fields_ = [("wYear", c_short),
("wMonth", c_short),
("wDayOfWeek", c_short),
("wDay", c_short),
("wHour", c_short),
("wMinute", c_short),
("wSecond", c_short),
("wMilliseconds", c_short)]
class IMAGE_CAPTURE_INFO(Structure):
_fields_ = [("size", c_uint),
("nLane", c_ubyte * 4),
("nImageFalgs", c_ubyte * 4),
("nRandom", c_uint * 4),
("nIndex", c_ubyte * 4),
("nImageIndex", c_ubyte * 4),
("nTotalCount", c_ubyte * 4),
("nTrigerNow", c_ubyte * 4),
("nCarSpeed", c_ubyte * 4),
("nLimitSpeed", c_ubyte * 4),
("nDelayFrame", c_ubyte * 4),
("struPlate", OCR_PLATE * 4),
("stTime", SYSTEMTIME),
("szFlags", c_uint)]
</code></pre>
<p>But how do I read the data from the JPEG image file into the above structure?</p>
| 0 | 2016-08-31T08:29:32Z | 39,245,686 | <p>You need to know the offset at which this struct is positioned in the file.
Assuming it is at the end of the file:</p>
<pre><code>data = open("filename.jpg", "rb").read()
offset = len(data) - sizeof(IMAGE_CAPTURE_INFO)
# from_buffer_copy accepts the read-only bytes returned by read();
# from_buffer would require a writable buffer such as a bytearray
myStructure = IMAGE_CAPTURE_INFO.from_buffer_copy(data, offset)
</code></pre>
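<p>To illustrate the pattern with something runnable, here is a toy trailer struct (not the real <code>IMAGE_CAPTURE_INFO</code> layout):</p>

```python
import ctypes

class Trailer(ctypes.Structure):
    _fields_ = [("magic", ctypes.c_uint),
                ("count", ctypes.c_ubyte * 4)]

# Simulate a file whose last bytes are the struct.
blob = b"JPEGDATA" + bytes(Trailer(0x1C01, (ctypes.c_ubyte * 4)(1, 2, 3, 4)))
offset = len(blob) - ctypes.sizeof(Trailer)
# from_buffer_copy works on read-only bytes; from_buffer would need a
# writable buffer such as a bytearray.
t = Trailer.from_buffer_copy(blob, offset)
print(hex(t.magic), list(t.count))  # → 0x1c01 [1, 2, 3, 4]
```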
| 0 | 2016-08-31T09:25:24Z | [
"python",
"struct",
"ctype"
] |
Select several directories with tkinter | 39,244,546 | <p>I have to perform an operation on several directories.</p>
<p>Tkinter offers a dialog for opening one file (askopenfilename) and several files (askopenfilenames), but lacks a dialog for selecting several directories.</p>
<p>What is the quickest way to get to a feasible solution for "askdirectories"?</p>
| -1 | 2016-08-31T08:32:16Z | 39,245,736 | <p>You should be able to use <code>tkFileDialog.askdirectory</code>. Take a look at the docs <a href="http://tkinter.unpythonic.net/wiki/tkFileDialog" rel="nofollow">here</a> :)</p>
<p><strong>EDIT</strong></p>
<p>Perhaps something like this?</p>
<pre><code>from Tkinter import *
import tkFileDialog
root = Tk()
root.geometry('200x200')
root.grid_rowconfigure(0, weight = 1)
root.grid_columnconfigure(0, weight = 1)
dirs = []
def get_directories():
dirs.append(tkFileDialog.askdirectory())
return dirs
b1 = Button(root, text='select directories...', command = get_directories)
b1.pack()
root.mainloop()
</code></pre>
<p>Any thoughts?</p>
| 0 | 2016-08-31T09:27:15Z | [
"python",
"tkinter"
] |
Select several directories with tkinter | 39,244,546 | <p>I have to perform an operation on several directories.</p>
<p>Tkinter offers a dialog for opening one file (askopenfilename) and several files (askopenfilenames), but lacks a dialog for selecting several directories.</p>
<p>What is the quickest way to get to a feasible solution for "askdirectories"?</p>
| -1 | 2016-08-31T08:32:16Z | 39,246,781 | <p>The only way to do this in pure tkinter (short of building a directory-selector widget by hand) is to ask the user for each directory in a separate dialog. You could save the previously used location, so the user won't need to navigate there each time, using code like the one below:</p>
<pre><code>from tkinter import filedialog
dirselect = filedialog.Directory()
dirs = []
while True:
d = dirselect.show()
if not d: break
dirs.append(d)
</code></pre>
<p>Another solution is to use the <code>tkinter.tix</code> extension (now part of the standard library, but it may require installing Tk's Tix on some platforms). Primarily, you'll need the <code>tkinter.tix.DirList</code> widget. It looks as follows (a somewhat old image):</p>
<p><a href="http://i.stack.imgur.com/rrs7l.gif" rel="nofollow"><img src="http://i.stack.imgur.com/rrs7l.gif" alt="img"></a></p>
<p>For more, see <a href="https://docs.python.org/3.5/library/tkinter.tix.html" rel="nofollow">tkinter.tix</a> and <a href="http://tix.sourceforge.net/docs/tix-book/filesel.tex.html" rel="nofollow">Tk Tix</a> docs</p>
| 1 | 2016-08-31T10:13:49Z | [
"python",
"tkinter"
] |
Finding out running anaconda environment (not active on new processes or default) from within python | 39,244,579 | <p>What would be the best way to find the current Anaconda environment from within Python?</p>
<p>The problem is that it is not the default environment: for example calling </p>
<pre><code>import subprocess
subprocess.call(['conda','info'])
</code></pre>
<p>gives me the wrong result (because it creates a new process, which has the default environment)</p>
<p>I run this using Anaconda2 on Windows 7, running the code from PyCharm, but ideally the solution should work "everywhere", or at least for Anaconda.</p>
<p>The location of the python.exe that was used to run my program would give me that information, so using e.g.</p>
<pre><code>>>> import sys
>>> print sys.executable
D:\Anaconda2\envs\py2\python.exe
</code></pre>
<p>is one option. </p>
| 0 | 2016-08-31T08:33:44Z | 39,298,093 | <pre><code>import sys
print sys.version
</code></pre>
<p>Returns: (Something like)</p>
<pre><code>2.7.11 |Anaconda 4.0.0 (64-bit)| (default, Feb 16 2016, 09:58:36) [MSC v.1500 64 bit (AMD64)]
</code></pre>
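<p>Another option is <code>sys.prefix</code>, which points at the root of the interpreter actually running the code (so it is unaffected by the new-process problem that <code>subprocess.call(['conda','info'])</code> has). The basename heuristic below is an assumption based on the standard <code>envs/&lt;name&gt;</code> layout:</p>

```python
import os
import sys

# sys.prefix is the install prefix of the running interpreter; for a
# conda environment it is the environment directory itself, so its
# basename is usually the environment name.
env_name = os.path.basename(sys.prefix)
print(sys.prefix, env_name)
```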
| 0 | 2016-09-02T17:43:53Z | [
"python",
"anaconda"
] |
Serializing image stream using protobuf | 39,244,589 | <p>I have two programs in Ubuntu: a C++ program (the TORCS game) and a Python program. The C++ program continuously generates images. I want to transfer these real-time images into Python (maybe in numpy.ndarray format), so I think serializing the image to a string with Google protobuf and sending the string to the Python client over ZMQ may be a feasible method.</p>
<p><strong>Question</strong>: which value type is suitable for the image (a pointer) in the <code>.proto</code> file? In other words, which value type should I use to replace the <code>string</code> type in the example below?</p>
<pre><code>message my_image{
    repeated string image = 1;
}
</code></pre>
<p>This is the way I write the image to memory (uint8_t* image_data):</p>
<pre><code>glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)image_data);
</code></pre>
<p>Lastly, maybe there is a better way to transfer the image (in memory) to a Python client?</p>
<p>Any suggestions are appreciated.</p>
| 2 | 2016-08-31T08:34:10Z | 39,248,610 | <p>If I had to do this, I would use one of:</p>
<pre><code>message image {
    int32 width = 1;
    int32 height = 2;
    bytes image_data = 3;
}

message image {
    int32 width = 1;
    int32 height = 2;
    bytes red_data = 3;
    bytes green_data = 4;
    bytes blue_data = 5;
}
</code></pre>
<p>Or possibly use an intermediate <code>ScanRow</code> message, composed either of interleaved R, G, B bytes or separated R, G, B bytes. The first version is likely going to be fastest to generate and display.</p>
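<p>If generating protobuf bindings feels heavyweight, the same framing can also be done with a plain fixed-size header via Python's <code>struct</code> module — a sketch of the receiving side only, where the field order and endianness are assumptions that must match whatever the C++ sender writes:</p>

```python
import struct

def decode_frame(payload):
    # Assumed header: width and height as little-endian uint32,
    # followed by width*height*3 bytes of interleaved RGB data.
    width, height = struct.unpack_from("<II", payload, 0)
    pixels = payload[8:8 + width * height * 3]
    return width, height, pixels

# Toy 2x2 frame for illustration.
frame = struct.pack("<II", 2, 2) + bytes(range(12))
w, h, data = decode_frame(frame)
print(w, h, len(data))  # → 2 2 12
```

<p>The pixel bytes can then be reshaped on the Python side with something like <code>numpy.frombuffer(pixels, dtype=numpy.uint8).reshape(height, width, 3)</code>.</p>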
| 1 | 2016-08-31T11:37:52Z | [
"python",
"c++",
"protocol-buffers"
] |
Compute the shortest path with exactly `n` nodes between two points on a meshgrid | 39,244,636 | <p>I have defined the following 3D surface on a grid:</p>
<pre><code>%pylab inline
def muller_potential(x, y, use_numpy=False):
"""Muller potential
Parameters
----------
x : {float, np.ndarray, or theano symbolic variable}
X coordinate. If you supply an array, x and y need to be the same shape,
and the potential will be calculated at each (x,y pair)
y : {float, np.ndarray, or theano symbolic variable}
Y coordinate. If you supply an array, x and y need to be the same shape,
and the potential will be calculated at each (x,y pair)
Returns
-------
potential : {float, np.ndarray, or theano symbolic variable}
Potential energy. Will be the same shape as the inputs, x and y.
Reference
---------
Code adapted from https://cims.nyu.edu/~eve2/ztsMueller.m
"""
aa = [-1, -1, -6.5, 0.7]
bb = [0, 0, 11, 0.6]
cc = [-10, -10, -6.5, 0.7]
AA = [-200, -100, -170, 15]
XX = [1, 0, -0.5, -1]
YY = [0, 0.5, 1.5, 1]
# use symbolic algebra if you supply symbolic quantities
exp = np.exp
value = 0
for j in range(0, 4):
if use_numpy:
value += AA[j] * numpy.exp(aa[j] * (x - XX[j])**2 + bb[j] * (x - XX[j]) * (y - YY[j]) + cc[j] * (y - YY[j])**2)
else: # use sympy
value += AA[j] * sympy.exp(aa[j] * (x - XX[j])**2 + bb[j] * (x - XX[j]) * (y - YY[j]) + cc[j] * (y - YY[j])**2)
return value
</code></pre>
<p>Which gave the following plot:</p>
<pre><code>minx=-1.5
maxx=1.2
miny=-0.2
maxy=2
ax=None
grid_width = max(maxx-minx, maxy-miny) / 50.0
xx, yy = np.mgrid[minx : maxx : grid_width, miny : maxy : grid_width]
V = muller_potential(xx, yy, use_numpy=True)
V = ma.masked_array(V, V>200)
contourf(V, 40)
colorbar();
</code></pre>
<p><a href="http://i.stack.imgur.com/ZD6TK.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZD6TK.png" alt="potential"></a></p>
<p>I wrote the following code to define the shortest path between two points on that grid. The metric I used between two adjacent points of the meshgrid is given by <code>(V[e]-V[cc])**2</code> with <code>cc</code> the current cell and <code>e</code> one of the neighboring cells. The neighbors are defined with a full connectivity: all direct neighbors with diagonal included.</p>
<pre><code>def dijkstra(V):
mask = V.mask
visit_mask = mask.copy() # mask visited cells
m = numpy.ones_like(V) * numpy.inf
connectivity = [(i,j) for i in [-1, 0, 1] for j in [-1, 0, 1] if (not (i == j == 0))]
cc = unravel_index(V.argmin(), m.shape) # current_cell
m[cc] = 0
P = {} # dictionary of predecessors
#while (~visit_mask).sum() > 0:
for _ in range(V.size):
#print cc
neighbors = [tuple(e) for e in asarray(cc) - connectivity
if e[0] > 0 and e[1] > 0 and e[0] < V.shape[0] and e[1] < V.shape[1]]
neighbors = [ e for e in neighbors if not visit_mask[e] ]
tentative_distance = [(V[e]-V[cc])**2 for e in neighbors]
for i,e in enumerate(neighbors):
d = tentative_distance[i] + m[cc]
if d < m[e]:
m[e] = d
P[e] = cc
visit_mask[cc] = True
m_mask = ma.masked_array(m, visit_mask)
cc = unravel_index(m_mask.argmin(), m.shape)
return m, P
def shortestPath(start, end, P):
Path = []
step = end
while 1:
Path.append(step)
if step == start: break
step = P[step]
Path.reverse()
return asarray(Path)
D, P = dijkstra(V)
path = shortestPath(unravel_index(V.argmin(), V.shape), (40,4), P)
</code></pre>
<p>Which gave the following result:</p>
<pre><code>contourf(V, 40)
plot(path[:,1], path[:,0], 'r.-')
</code></pre>
<p><a href="http://i.stack.imgur.com/gdRIK.png" rel="nofollow"><img src="http://i.stack.imgur.com/gdRIK.png" alt="shortest path"></a></p>
<p>The length of the path is 112:</p>
<pre><code>print path.shape[0]
112
</code></pre>
<p>I want to know if it's possible to compute the shortest path between <code>start</code> and <code>end</code> of exact length <code>n</code>, with <code>n</code> an argument given to the function.</p>
<p>Remark: If I change the metric from <code>(V[e]-V[cc])**2</code> to <code>V[e]-V[cc]</code>, which allows negative distances, I obtain the plot below, which looks better as it goes through the local minima as expected:</p>
<p><a href="http://i.stack.imgur.com/mPuye.png" rel="nofollow"><img src="http://i.stack.imgur.com/mPuye.png" alt="shortest path 2"></a></p>
| 4 | 2016-08-31T08:37:00Z | 39,269,175 | <p>As I want to obtain a reasonable path that samples basins in the potential, I wrote the functions below. For completeness, here is the <code>dijkstra</code> function again:</p>
<pre><code>%pylab
def dijkstra(V, start):
mask = V.mask
visit_mask = mask.copy() # mask visited cells
m = numpy.ones_like(V) * numpy.inf
connectivity = [(i,j) for i in [-1, 0, 1] for j in [-1, 0, 1] if (not (i == j == 0))]
cc = start # current_cell
m[cc] = 0
P = {} # dictionary of predecessors
#while (~visit_mask).sum() > 0:
for _ in range(V.size):
#print cc
neighbors = [tuple(e) for e in asarray(cc) - connectivity
if e[0] > 0 and e[1] > 0 and e[0] < V.shape[0] and e[1] < V.shape[1]]
neighbors = [ e for e in neighbors if not visit_mask[e] ]
        tentative_distance = asarray([V[e]-V[cc] for e in neighbors])
for i,e in enumerate(neighbors):
d = tentative_distance[i] + m[cc]
if d < m[e]:
m[e] = d
P[e] = cc
visit_mask[cc] = True
m_mask = ma.masked_array(m, visit_mask)
cc = unravel_index(m_mask.argmin(), m.shape)
return m, P
start, end = unravel_index(V.argmin(), V.shape), (40,4)
D, P = dijkstra(V, start)
def shortestPath(start, end, P):
Path = []
step = end
while 1:
Path.append(step)
if step == start: break
step = P[step]
Path.reverse()
return asarray(Path)
path = shortestPath(start, end, P)
</code></pre>
<p>Which gave the following plot:</p>
<pre><code>contourf(V, 40)
plot(path[:,1], path[:,0], 'r.-')
colorbar()
</code></pre>
<p><a href="http://i.stack.imgur.com/1kQMy.png" rel="nofollow"><img src="http://i.stack.imgur.com/1kQMy.png" alt="shortest path"></a></p>
<p>The basic idea behind the <code>extend_path</code> function is to extend the shortest path by inserting neighbors of nodes on the path that minimize the potential. A set keeps a record of the cells already visited during the extension process.</p>
<pre><code>def get_neighbors(cc, V, visited_nodes):
connectivity = [(i,j) for i in [-1, 0, 1] for j in [-1, 0, 1] if (not (i == j == 0))]
neighbors = [tuple(e) for e in asarray(cc) - connectivity
if e[0] > 0 and e[1] > 0 and e[0] < V.shape[0] and e[1] < V.shape[1]]
neighbors = [ e for e in neighbors if e not in visited_nodes ]
return neighbors
def extend_path(V, path, n):
"""
Extend the given path with n steps
"""
path = [tuple(e) for e in path]
visited_nodes = set()
for _ in range(n):
visited_nodes.update(path)
dist_min = numpy.inf
for i_cc, cc in enumerate(path[:-1]):
neighbors = get_neighbors(cc, V, visited_nodes)
next_step = path[i_cc+1]
next_neighbors = get_neighbors(next_step, V, visited_nodes)
join_neighbors = list(set(neighbors) & set(next_neighbors))
if len(join_neighbors) > 0:
tentative_distance = [ V[e] for e in join_neighbors ]
argmin_dist = argmin(tentative_distance)
if tentative_distance[argmin_dist] < dist_min:
dist_min, new_step, new_step_index = tentative_distance[argmin_dist], join_neighbors[argmin_dist], i_cc+1
path.insert(new_step_index, new_step)
return path
</code></pre>
<p>Below is the result I obtained by extending the shortest path with 250 steps:</p>
<pre><code>path_ext = extend_path(V, path, 250)
print len(path), len(path_ext)
path_ext = numpy.asarray(path_ext)
contourf(V, 40)
plot(path[:,1], path[:,0], 'w.-')
plot(path_ext[:,1], path_ext[:,0], 'r.-')
colorbar()
</code></pre>
<p><a href="http://i.stack.imgur.com/Te0J1.png" rel="nofollow"><img src="http://i.stack.imgur.com/Te0J1.png" alt="extended path"></a></p>
<p>As expected I start to sample the deeper basins first when I increase <code>n</code>, as seen below:</p>
<pre><code>rcParams['figure.figsize'] = 14,8
for i_plot, n in enumerate(range(0,250,42)):
path_ext = numpy.asarray(extend_path(V, path, n))
subplot('23%d'%(i_plot+1))
contourf(V, 40)
plot(path_ext[:,1], path_ext[:,0], 'r.-')
title('%d path steps'%len(path_ext))
</code></pre>
<p><a href="http://i.stack.imgur.com/ArgvD.png" rel="nofollow"><img src="http://i.stack.imgur.com/ArgvD.png" alt="increasing steps"></a></p>
| 1 | 2016-09-01T10:32:29Z | [
"python",
"numpy",
"shortest-path"
] |
How to run a luigi task with spark-submit and pyspark | 39,244,648 | <p>I have a <strong>luigi</strong> python task which includes some pyspark libs. Now I would like to submit this task on mesos with spark-submit. What should I do to run it? Below is my code skeleton:</p>
<pre><code>from pyspark.sql import functions as F
from pyspark import SparkContext
class myClass(SparkSubmitTask):
# date = luigi.DateParameter()
def __init__(self, date):
self.date = date # date is datetime.date.today().isoformat()
def output(self):
def input(self):
def run(self):
# Some functions are using pyspark libs
if __name__ == "__main__":
luigi.run()
</code></pre>
<p>Without luigi, I'm submmitting this task as the following command-line:</p>
<pre><code>/opt/spark/bin/spark-submit --master mesos://host:port --deploy-mode cluster --total-executor-cores 1 --driver-cores 1 --executor-memory 1G --driver-memory 1G my_module.py
</code></pre>
<p>Now the problem is how I can spark-submit the luigi task that includes luigi command-line such as:</p>
<pre><code>luigi --module my_module myClass --local-scheduler --date 2016-01
</code></pre>
<p>One more question is if my_module.py has a required task to finish first, do I need to do something more for it or just set the same as the current command-line?</p>
<p>I really appreciate any hints or suggestions on this. Thanks very much.</p>
| 0 | 2016-08-31T08:37:30Z | 39,296,694 | <p>Luigi has some template Tasks. One of them is called <code>PySparkTask</code>.
You can inherit from this class and override the properties:</p>
<p><a href="https://github.com/spotify/luigi/blob/master/luigi/contrib/spark.py" rel="nofollow">https://github.com/spotify/luigi/blob/master/luigi/contrib/spark.py</a>.</p>
<p>I haven't tested it but based on my experience with luigi I would have try this:</p>
<pre><code>import my_module
class MyPySparkTask(PySparkTask):
date = luigi.DateParameter()
@property
def name(self):
return self.__class__.__name__
@property
def master(self):
return 'mesos://host:port'
@property
def deploy_mode(self):
return 'cluster'
@property
def total_executor_cores(self):
return 1
@property
def driver_cores(self):
return 1
@property
def executor_memory(self):
return '1G'
@property
def driver_memory(self):
return '1G'
def main(self, sc, *args):
my_module.run(sc)
def app_options(self):
return [self.date]
</code></pre>
<p>Then you can run it with:</p>
<pre><code>luigi --module task_module MyPySparkTask --local-scheduler --date 2016-01
</code></pre>
<p>There is also an option to set the properties in a client.cfg file in order to make them the default values for other PySparkTasks:</p>
<pre><code>[spark]
master: mesos://host:port
deploy_mode: cluster
total_executor_cores: 1
driver_cores: 1
executor-memory: 1G
driver-memory: 1G
</code></pre>
| 0 | 2016-09-02T16:08:07Z | [
"python",
"apache-spark",
"pyspark",
"luigi"
] |
Geometric rounding with numpy/quantize? | 39,244,873 | <p>I've got a pandas series of data which is a curve.</p>
<p>I want to round it in such a way as to make it 'stepped'. Furthermore, I want the steps to be roughly within 10% of the present value. (Another way of putting this is I want the steps to increase in increments of 10%, i.e. geometrically).</p>
<p>I've written something that's iterative and slow:</p>
<pre><code>def chunk_trades(A):
try:
last = A[0]
except:
print(A)
raise
new = []
for x in A.iteritems():
if not last or np.abs((x[1]-last)/last) > 0.1:
new.append(x[1])
last = x[1]
else:
new.append(last)
s = pd.Series(new, index=A.index)
return s
</code></pre>
<p>I don't want to use this code.</p>
<p>I'm trying to find a faster, pythonic way of doing this. I've tried using numpy.digitize() but I don't think that's what I'm looking for. Any ideas for how best to approach this?</p>
| 1 | 2016-08-31T08:47:24Z | 39,246,183 | <p>OK, I think the solution should be something like:</p>
<pre><code>np.exp(np.around(np.log(np.abs(j)), decimals=1)) * np.sign(j)
</code></pre>
<p>Map to logarithmic space, do the rounding, transform back. Rounding to one decimal in log space makes adjacent levels differ by a factor of <code>exp(0.1)</code>, i.e. roughly 10%.</p>
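<p>As a runnable sketch of that idea applied to a pandas Series (the sample values here are made up):</p>

```python
import numpy as np
import pandas as pd

def geometric_round(s, decimals=1):
    # Round in log space: values within the same ~10% band collapse
    # onto the same level, giving the 'stepped' curve.
    return np.exp(np.around(np.log(np.abs(s)), decimals=decimals)) * np.sign(s)

s = pd.Series([1.00, 1.04, 1.30, 2.00, -5.00])
stepped = geometric_round(s)
print(stepped)
```

<p>Note that zero values map to <code>-inf</code> in log space, so they would need special handling if they can occur.</p>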
| 1 | 2016-08-31T09:46:20Z | [
"python",
"pandas",
"numpy"
] |
Double IIF Statement in Python | 39,244,955 | <p>I have an SQL code that has a double IIF statement in it, that is, the if true part is another IIF statement and I m trying to recreate this in python using <code>np.where</code></p>
<p>the SQL code is:</p>
<pre><code>IIF ([STATE] Is Null, IIF ([COUNTRY] Is Null,'No Country',[COUNTRY]), [STATE])
</code></pre>
<p>I've tried this:</p>
<pre><code>DF['NewCol'] = np.where(DF['STATE'].isnull(),(np.where(DF['COUNTRY'].isnull(),'No Country','DF['COUNTRY']'),DF['STATE']))
</code></pre>
<p>any help appreciated</p>
| 1 | 2016-08-31T08:51:20Z | 39,244,989 | <p>I think you need:</p>
<pre><code>DF['NewCol'] = np.where(DF['STATE'].isnull(),
np.where(DF['COUNTRY'].isnull(),'No Country', DF['COUNTRY']),DF['STATE'])
</code></pre>
<p>Sample:</p>
<pre><code>DF = pd.DataFrame({'STATE':[np.nan,'a','b',np.nan],
'COUNTRY':[np.nan,'c',np.nan, 'd']})
print (DF)
COUNTRY STATE
0 NaN NaN
1 c a
2 NaN b
3 d NaN
DF['NewCol'] = np.where(DF['STATE'].isnull(),
np.where(DF['COUNTRY'].isnull(),'No Country', DF['COUNTRY']),DF['STATE'])
print (DF)
COUNTRY STATE NewCol
0 NaN NaN No Country
1 c a a
2 NaN b b
3 d NaN d
</code></pre>
<p>If need first in output column values of <code>COUNTRY</code> column:</p>
<pre><code>DF['NewCol'] = np.where(DF['COUNTRY'].isnull(),
np.where(DF['STATE'].isnull(),'No Country', DF['STATE']),DF['COUNTRY'])
print (DF)
COUNTRY STATE NewCol
0 NaN NaN No Country
1 c a c
2 NaN b b
3 d NaN d
</code></pre>
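<p>For this coalesce-style fallback pattern, chained <code>fillna</code> calls are an alternative worth knowing; this sketch reproduces the first sample:</p>

```python
import numpy as np
import pandas as pd

DF = pd.DataFrame({'STATE': [np.nan, 'a', 'b', np.nan],
                   'COUNTRY': [np.nan, 'c', np.nan, 'd']})

# Take STATE where present, fall back to COUNTRY, then to the
# literal 'No Country' -- the same logic as the nested np.where.
DF['NewCol'] = DF['STATE'].fillna(DF['COUNTRY']).fillna('No Country')
print(DF)
```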
| 1 | 2016-08-31T08:52:57Z | [
"python",
"sql",
"pandas",
"if-statement",
null
] |
Django: Random ManyToMany relation is added on User connect | 39,244,969 | <p>Strangest bug I ever encountered</p>
<p>User model is the regular one from <code>contrib.auth.User</code></p>
<p>Let's say I have the following model:</p>
<pre><code>class Region(model.Model):
name = models.CharField(max_length=256)
rakazim = models.ManyToManyField(settings.AUTH_USER_MODEL)
goal = models.IntegerField(default=0)
</code></pre>
<p>and someplace else I have:</p>
<pre><code>user_model = get_user_model()
rakaz = user_model.objects.create_user(username, email, password)
</code></pre>
<p>Then, immediately after the user creation method is called, the "rakaz" instance has a random region connected:</p>
<pre><code>rakaz.region_set.all() = [<random_region>]
</code></pre>
<p>It also sometimes connects to another model that has a similar <code>ManyToManyField</code> to <code>AUTH_USER_MODEL</code></p>
<p>I debugged with pdb into the user creation method (in <code>contrib.auth</code>), and this happens immediately after <code>save</code> is called inside it.</p>
<p>AFAIK it happens only on the staging server, but until I find the reason I'm afraid to deploy to prod..</p>
<p>Django version 1.8.4, server using mariadb on RDS.</p>
<p>I don't use signals in my code (and at all :) ) and can't find relevant third party code doing this, (And if so it would happen on my machine also)</p>
<p>Any Ideas?</p>
| 0 | 2016-08-31T08:52:01Z | 39,251,022 | <p>The issue turned out to be:
I seeded the staging server with data from prod for "region". The <code>dumpdata</code> command dumps the <code>region</code> rows with foreign keys for <code>rakazim</code>. But since the users were actually missing (I didn't copy the users from my prod environment), instead of shouting at me and refusing to <code>loaddata</code> with non-existing foreign keys, <code>mariadb</code> chose to give me a thumbs-up and add random foreign keys each time I created a user (perhaps not random, but according to the imported mapping, not sure).</p>
<p>Lesson learned: use a proper DBMS and not a mysql variant.</p>
| 0 | 2016-08-31T13:30:01Z | [
"python",
"django"
] |
Can't create a virtual environment in the Google Drive folder | 39,244,999 | <p>I'm using Google Drive to keep a copy of my code projects in case my computer dies (I'm also using GitHub, but not on some private projects).</p>
<p>However, when I try to create a virtual environment using <code>virtualenv</code>, I get the following error:</p>
<pre><code>PS C:\users\fchatter\google drive> virtualenv env
New python executable in C:\users\fchatter\google drive\env\Scripts\python.exe
ERROR: The executable "C:\users\fchatter\google drive\env\Scripts\python.exe" could not be run: [Error 5] Access is denied
</code></pre>
<p>Things I've tried:</p>
<ul>
<li><p>I thought it was because the path to the venv included blank spaces, but the command works in other paths with blank spaces. I also tried installing the win32api library, as recommended in the <code>virtualenv</code> docs, but it didn't work. </p></li>
<li><p>running the PowerShell as an administrator. </p></li>
</ul>
<p>Any ideas on how to solve this? My workaround at the moment is to create the venv outside of the Google Drive, which works but is inconvenient.</p>
| 0 | 2016-08-31T08:53:27Z | 39,245,144 | <p>Don't set up a virtual env in a cloud synced folder, nor should you run a python script from such a folder. It's a bad idea. They are not meant for version control. Write access (modifying files) to the folder is limited because in your case Google drive will periodically sync the folder which will prevent exclusive write access to the folder almost always. </p>
<p>TLDR; One cant possibly modify files while they are being synced.</p>
<p>I suggest you stick to <code>git</code> for version control.</p>
| 2 | 2016-08-31T08:59:59Z | [
"python",
"powershell",
"virtualenv",
"drive"
] |
Python: Generic/Templated getters | 39,245,032 | <p>I have a code that simply fetches a user/s from a database</p>
<pre><code>class users:
def __init__(self):
self.engine = create_engine("mysql+pymysql://root:password@127.0.0.1/my_database")
self.connection = self.engine.connect()
self.meta = MetaData(bind=self.connection)
self.users = Table('users', self.meta, autoload=True)
def get_user_by_user_id(self, user_id):
stmt = self.users.select().where(self.users.c.user_id == user_id)
return self.connection.execute(stmt)
def get_user_by_username(self, username):
stmt = self.users.select().where(self.users.c.username == username)
return self.connection.execute(stmt)
def get_users_by_role_and_company(self, role, company):
stmt = self.users.select().where(self.users.c.role == role).where(self.users.c.company == company)
return self.connection.execute(stmt)
</code></pre>
<p>Now, what I want to do is to make the getters generic like so:</p>
<pre><code>class users:
def __init__(self):
self.engine = create_engine("mysql+pymysql://root:password@127.0.0.1/my_database")
self.connection = self.engine.connect()
self.meta = MetaData(bind=self.connection)
self.users = Table('users', self.meta, autoload=True)
def get_user(self, **kwargs):
'''How do I make this generic function?'''
</code></pre>
<p>So, instead of calling something like this:</p>
<pre><code>u = users()
u.get_user_by_user_id(1)
u.get_user_by_username('foo')
u.get_users_by_role_and_company('Admin', 'bar')
</code></pre>
<p>I would just call the generic function like so:</p>
<pre><code>u = users()
u.get_user(user_id=1)
u.get_user(username='foo')
u.get_user(role='Admin', company='bar')
</code></pre>
<hr>
<p>So far, this was what I could think of:</p>
<pre><code>def get_user(**kwargs):
where_clause = ''
for key, value in kwargs.items():
where_clause += '{} == {} AND '.format(key, value)
where_clause = where_clause[:-5] # remove final AND
stmt = "SELECT * FROM {tablename} WHERE {where_clause};".format(tablename='users', where_clause=where_clause)
return self.connection.execute(stmt)
</code></pre>
<p>Is there any way that I could use the ORM style to create the statement?</p>
| 1 | 2016-08-31T08:54:57Z | 39,245,566 | <p>But you did all the hard work.. It's just a matter of combining the functions you created, initializing the input variables and throwing some <code>if</code> statements in the mix.</p>
<pre><code>def get_user(self, user_id=0, username='', role='', company=''):
if user_id:
stmt = self.users.select().where(self.users.c.user_id == user_id)
return self.connection.execute(stmt)
elif username:
stmt = self.users.select().where(self.users.c.username == username)
return self.connection.execute(stmt)
elif role and company:
stmt = self.users.select().where(self.users.c.role == role).where(self.users.c.company == company)
return self.connection.execute(stmt)
else:
print('Not adequate information given. Please enter "ID" or "USERNAME", or "ROLE"&"COMPANY"')
return
</code></pre>
<p>Note that <code>user_id</code> has been initialized to <code>0</code> so that it has a boolean of <code>False</code>. If a 0 id is possible, set it directly to <code>False</code> instead. So, since the input cannot be 'random', is there a reason you want to do it with <code>**kwargs</code>?</p>
<hr>
<p>Alternatively, if the numberof combinations are too many to code, i would go a different route (SQL-injection-valnurable script incoming!) and that is the following:</p>
<pre><code>def get_user(self, query):
form_query = 'SELECT user FROM {} WHERE {}'.format(table_name, query)
# now execute it and return whatever it is you want returned
</code></pre>
<p>You are no longer passing variables to the function but rather a string which will be appended to the query and executed.
<strong>Needless to say you have to be very careful with that.</strong> </p>
| 0 | 2016-08-31T09:19:53Z | [
"python",
"python-3.x",
"sqlalchemy",
"metaprogramming"
] |
Python: Generic/Templated getters | 39,245,032 | <p>I have a code that simply fetches a user/s from a database</p>
<pre><code>class users:
def __init__(self):
self.engine = create_engine("mysql+pymysql://root:password@127.0.0.1/my_database")
self.connection = self.engine.connect()
self.meta = MetaData(bind=self.connection)
self.users = Table('users', self.meta, autoload=True)
def get_user_by_user_id(self, user_id):
stmt = self.users.select().where(self.users.c.user_id == user_id)
return self.connection.execute(stmt)
def get_user_by_username(self, username):
stmt = self.users.select().where(self.users.c.username == username)
return self.connection.execute(stmt)
def get_users_by_role_and_company(self, role, company):
stmt = self.users.select().where(self.users.c.role == role).where(self.users.c.company == company)
return self.connection.execute(stmt)
</code></pre>
<p>Now, what I want to do is to make the getters generic like so:</p>
<pre><code>class users:
def __init__(self):
self.engine = create_engine("mysql+pymysql://root:password@127.0.0.1/my_database")
self.connection = self.engine.connect()
self.meta = MetaData(bind=self.connection)
self.users = Table('users', self.meta, autoload=True)
def get_user(self, **kwargs):
'''How do I make this generic function?'''
</code></pre>
<p>So, instead of calling something like this:</p>
<pre><code>u = users()
u.get_user_by_user_id(1)
u.get_user_by_username('foo')
u.get_users_by_role_and_company('Admin', 'bar')
</code></pre>
<p>I would just call the generic function like so:</p>
<pre><code>u = users()
u.get_user(user_id=1)
u.get_user(username='foo')
u.get_user(role='Admin', company='bar')
</code></pre>
<hr>
<p>So far, this was what I could think of:</p>
<pre><code>def get_user(**kwargs):
where_clause = ''
for key, value in kwargs.items():
where_clause += '{} == {} AND '.format(key, value)
where_clause = where_clause[:-5] # remove final AND
stmt = "SELECT * FROM {tablename} WHERE {where_clause};".format(tablename='users', where_clause=where_clause)
return self.connection.execute(stmt)
</code></pre>
<p>Is there any way that I could use the ORM style to create the statement?</p>
| 1 | 2016-08-31T08:54:57Z | 39,245,616 | <p>Try something like:</p>
<pre><code>def get_user(self, **kwargs):
if 'user_id' in kwargs:
(...)
elif 'username' in kwargs:
(...)
elif all(a in ['role','company'] for a in kwargs):
(...)
else:
(...)
</code></pre>
| 0 | 2016-08-31T09:22:29Z | [
"python",
"python-3.x",
"sqlalchemy",
"metaprogramming"
] |
Python: Generic/Templated getters | 39,245,032 | <p>I have a code that simply fetches a user/s from a database</p>
<pre><code>class users:
def __init__(self):
self.engine = create_engine("mysql+pymysql://root:password@127.0.0.1/my_database")
self.connection = self.engine.connect()
self.meta = MetaData(bind=self.connection)
self.users = Table('users', self.meta, autoload=True)
def get_user_by_user_id(self, user_id):
stmt = self.users.select().where(self.users.c.user_id == user_id)
return self.connection.execute(stmt)
def get_user_by_username(self, username):
stmt = self.users.select().where(self.users.c.username == username)
return self.connection.execute(stmt)
def get_users_by_role_and_company(self, role, company):
stmt = self.users.select().where(self.users.c.role == role).where(self.users.c.company == company)
return self.connection.execute(stmt)
</code></pre>
<p>Now, what I want to do is to make the getters generic like so:</p>
<pre><code>class users:
def __init__(self):
self.engine = create_engine("mysql+pymysql://root:password@127.0.0.1/my_database")
self.connection = self.engine.connect()
self.meta = MetaData(bind=self.connection)
self.users = Table('users', self.meta, autoload=True)
def get_user(self, **kwargs):
'''How do I make this generic function?'''
</code></pre>
<p>So, instead of calling something like this:</p>
<pre><code>u = users()
u.get_user_by_user_id(1)
u.get_user_by_username('foo')
u.get_users_by_role_and_company('Admin', 'bar')
</code></pre>
<p>I would just call the generic function like so:</p>
<pre><code>u = users()
u.get_user(user_id=1)
u.get_user(username='foo')
u.get_user(role='Admin', company='bar')
</code></pre>
<hr>
<p>So far, this was what I could think of:</p>
<pre><code>def get_user(**kwargs):
where_clause = ''
for key, value in kwargs.items():
where_clause += '{} == {} AND '.format(key, value)
where_clause = where_clause[:-5] # remove final AND
stmt = "SELECT * FROM {tablename} WHERE {where_clause};".format(tablename='users', where_clause=where_clause)
return self.connection.execute(stmt)
</code></pre>
<p>Is there any way that I could use the ORM style to create the statement?</p>
| 1 | 2016-08-31T08:54:57Z | 39,262,531 | <p>Fully generalized, so any combination of legal field names is accepted as long as a field of that name exists in the table. The magic is in <code>getattr</code>, which allows us to look up the field we're interested in dynamically (and raises <code>AttributeError</code> if called with a non-existent field name):</p>
<pre><code>def get_user(self, **kwargs):
# Basic query
stmt = self.users.select()
# Query objects can be refined piecemeal, so we just loop and
# add new clauses, assigning back to stmt to build up the query
for field, value in kwargs.items():
stmt = stmt.where(getattr(self.users.c, field) == value)
return self.connection.execute(stmt)
</code></pre>
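<p>The same getattr-driven filtering can be illustrated without a database; the <code>User</code> namedtuple and sample rows below are made up purely for the demonstration:</p>

```python
from collections import namedtuple

User = namedtuple('User', ['user_id', 'username', 'role', 'company'])
rows = [User(1, 'foo', 'Admin', 'bar'),
        User(2, 'baz', 'Admin', 'qux')]

def get_user(rows, **kwargs):
    # Keep rows whose attributes match every keyword given; getattr
    # raises AttributeError for a field name that does not exist.
    return [r for r in rows
            if all(getattr(r, field) == value
                   for field, value in kwargs.items())]

print(get_user(rows, role='Admin', company='bar'))   # only user 1
print(get_user(rows, user_id=2))                     # only user 2
```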
| 1 | 2016-09-01T04:02:50Z | [
"python",
"python-3.x",
"sqlalchemy",
"metaprogramming"
] |
Pass commands from one docker container to another | 39,245,139 | <p>I have a helper container and an app container. </p>
<p>The helper container handles mounting of code via git to a shared mount with the app container. </p>
<p>I need for the helper container to check for a <code>package.json</code> or <code>requirements.txt</code> in the cloned code and if one exists to run <code>npm install</code> or <code>pip install -r requirements.txt</code>, storing the dependencies in the shared mount.
Thing is the npm command and/or the pip command needs to be run from the app container to keep the helper container as generic and as agnostic as possible.</p>
<p>One solution would be to mount the docker socket to the helper container and run <code>docker exec <app container> <command></code>, but what if I have thousands of such apps on a single host.
Will there be issues having hundreds of containers all accessing the docker socket at the same time? And is there a better way to do this? Get commands run on another container?</p>
| 0 | 2016-08-31T08:59:47Z | 39,245,305 | <p>Well there is no "container to container" internal communication layer like "ssh". In this regard, the containers are as standalone as 2 different VMs ( beside the network part in general ).</p>
<p>You might go the usual way, install opensshd-server on the "receiving" server, configure it key-based only. You do not need to export the port to the host, just connect to the port using the docker-internal network. Deploy the ssh private key on the 'caller server' and the public key into .ssh/authorized_keys on the 'receiving server' during container start time ( volume mount ) so you do not keep the secrets in the image (build time).</p>
<p>Probably also create a ssh-alias in .ssh/config and also set HostVerify to no, since the containers could be rebuild. Then do</p>
<pre><code>ssh <alias> your-command
</code></pre>
| 1 | 2016-08-31T09:08:12Z | [
"python",
"node.js",
"linux",
"git",
"docker"
] |
Pass commands from one docker container to another | 39,245,139 | <p>I have a helper container and an app container. </p>
<p>The helper container handles mounting of code via git to a shared mount with the app container. </p>
<p>I need for the helper container to check for a <code>package.json</code> or <code>requirements.txt</code> in the cloned code and if one exists to run <code>npm install</code> or <code>pip install -r requirements.txt</code>, storing the dependencies in the shared mount.
Thing is the npm command and/or the pip command needs to be run from the app container to keep the helper container as generic and as agnostic as possible.</p>
<p>One solution would be to mount the docker socket to the helper container and run <code>docker exec <app container> <command></code>, but what if I have thousands of such apps on a single host.
Will there be issues having hundreds of containers all accessing the docker socket at the same time? And is there a better way to do this? Get commands run on another container?</p>
| 0 | 2016-08-31T08:59:47Z | 39,367,150 | <p>Found that better way I was looking for :-) .</p>
<p>Using supervisord and running the xml rpc server enables me to run something like:</p>
<p><code>supervisorctl -s http://127.0.0.1:9002 -utheuser -pthepassword start uwsgi</code></p>
<p>In the helper container, this will connect to the rpc server running on port 9002 on the app container and execute a program block that may look something like;</p>
<pre><code>[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"]
</code></pre>
<p>This is exactly what I needed!</p>
<p>For anyone who finds this you'll probably need your supervisord.conf on your app container to look sth like:</p>
<pre><code>[supervisord]
nodaemon=true
[supervisorctl]
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[inet_http_server]
port=127.0.0.1:9002
username=user
password=password
[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"]
</code></pre>
<p>You can set up the inet_http_server to listen on a socket, and link the containers to be able to access them by hostname.</p>
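<p>The same start command can also be issued from Python through supervisord's XML-RPC interface, without shelling out to <code>supervisorctl</code>. A minimal sketch, assuming the host, port and credentials from the config above (the function is not invoked here, since it needs a running supervisord):</p>

```python
from xmlrpc.client import ServerProxy  # xmlrpclib in Python 2

def start_program(name, host='127.0.0.1', port=9002,
                  user='user', password='password'):
    # Credentials for inet_http_server go into the URL; /RPC2 is
    # supervisord's XML-RPC endpoint.
    server = ServerProxy('http://%s:%s@%s:%d/RPC2' % (user, password, host, port))
    return server.supervisor.startProcess(name)
```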
| 0 | 2016-09-07T10:16:17Z | [
"python",
"node.js",
"linux",
"git",
"docker"
] |
Plotting Lists in Matplotlib | 39,245,167 | <p>I have to lists of that kind (series and pred_upd):</p>
<p><a href="http://i.stack.imgur.com/o5sES.png" rel="nofollow"><img src="http://i.stack.imgur.com/o5sES.png" alt="enter image description here"></a></p>
<p>I try to put them together on a plot doing that:</p>
<pre><code>az = series.plot(figsize=(12,8), label='o')
ax = pred_upd.plot(style='r--', label='Dynamic Prediction');
ax.legend();
az.legend();
plt.plot()
plt.show()
</code></pre>
<p>I receive error: </p>
<pre><code>-> 2417 if isinstance(data, DataFrame):
2418 if x is not None:
2419 if com.is_integer(x) and not data.columns.holds_integer():
TypeError: isinstance() arg 2 must be a type or tuple of types
</code></pre>
<p>What I do wrong?</p>
| 0 | 2016-08-31T09:01:36Z | 39,246,462 | <p>If you are using <a href="http://matplotlib.org/users/pyplot_tutorial.html" rel="nofollow"><code>matplotlib</code></a>, I think you are using it incorrectly. I cannot really infer what <code>type</code> the variables <code>series</code> and
<code>pred_upd</code> are, but I am assuming they are of type <code>list</code> (from your example).</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot(series)
plt.hold(True)
plt.plot(pred_upd)
plt.show()
</code></pre>
<p>You can put some parameters in there - but that should be the format.</p>
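<p>Note that <code>plt.hold</code> was deprecated and later removed; in current matplotlib, repeated <code>plot</code> calls on the same axes overlay automatically. A sketch with made-up data (the <code>Agg</code> backend line is only needed when running headless):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this for interactive use
import matplotlib.pyplot as plt

series = [1.0, 2.0, 3.0, 2.5]       # made-up sample data
pred_upd = [1.1, 1.9, 3.2, 2.4]

fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(series, label='observed')
ax.plot(pred_upd, 'r--', label='Dynamic Prediction')
ax.legend()
fig.savefig('plot.png')
```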
| 1 | 2016-08-31T09:58:38Z | [
"python",
"list",
"matplotlib",
"plot"
] |
VariableDoesNotExist while rendering: Failed lookup for key [object] | 39,245,224 | <p>I am using django-haystack and elasticsearch on my Ubuntu server and finding that certain search queries just raise an error page, and I have no idea why this is happening..
Any help appreciated! :)</p>
<p>model.py : </p>
<pre><code>class EnActress(models.Model):
name = models.CharField(max_length=100, null=True)
image_urls = models.CharField(max_length=200)
images = models.TextField(null=True)
actress_image = models.TextField(null=True)
class EnMovielist(models.Model):
content_ID = models.CharField(max_length=30)
release_date = models.CharField(max_length=30)
running_time = models.CharField(max_length=10)
actress = models.CharField(max_length=300)
series = models.CharField(max_length=30)
director = models.CharField(max_length=30)
label = models.CharField(max_length=30)
image_urls = models.CharField(max_length=200, null=True)
images = models.TextField(null=True)
image_paths = models.TextField(null=True)
</code></pre>
<p>search.indexes.py : </p>
<pre><code> class EnActressIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.EdgeNgramField(document=True, use_template=True)
en_name = indexes.CharField(model_attr='name')
def get_model(self):
return EnActress
def index_queryset(self, using=None):
return self.get_model().objects.all()
class EnMovielistIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.EdgeNgramField(document=True, use_template=True)
content_ID = indexes.CharField(model_attr='content_ID')
release_date = indexes.CharField(model_attr='release_date')
actress = indexes.CharField(model_attr='actress')
label = indexes.CharField(model_attr='label')
def get_model(self):
return EnMovielist
def index_queryset(self, using=None):
return self.get_model().objects.all()
</code></pre>
<p>This is an Error massage in Django</p>
<pre><code>VariableDoesNotExist at /search/
Failed lookup for key [actress_image] in 'EnMovielist object'
Failed lookup for key [%s] in %r
In template /home/ubuntu/venv/avdict/dmmactress/templates/search/search.html, error at line 122
112 {% load staticfiles %}
113 <table class="table table-striped table-hover" cellspacing="0" id='result_table'>
114 <thead>
115 <tr>
116 <th>phto</th>
117 <th>name</th>
118 </tr>
119 </thead>
120 <tbody>
121 <tr class="active">
122 {% with image='enActress/'|add:result.object.actress_image %}
123 <td><img src="{% static image %}" class="img-circle" class="img-responsive" alt="{{result.object.name}}"></td>
124 <td>{{ result.object.name }}</td>
125 {% endwith %}
126 </tr>
127
128 </tbody>
129 </table>
130 {% empty %}
131 <p>no results.</p>
132 {% endfor %}
</code></pre>
<p>enactress_text.txt file:</p>
<pre><code>{{ object.name }}
{{ object.actress_image }}
</code></pre>
<p>enmovielist_text.txt file:</p>
<pre><code>{{ object.content_ID }}
{{ object.image_paths }}
{{ object.release_date }}
{{ object.running_time }}
{{ object.actress }}
{{ object.series }}
{{ object.director }}
{{ object.label }}
</code></pre>
| 0 | 2016-08-31T09:04:37Z | 39,247,979 | <pre><code>Failed lookup for key [actress_image] in 'EnMovielist object'
</code></pre>
<p>This means you are looking up the <code>actress_image</code> value of an <code>EnMovielist</code> object.</p>
<p>What you are presumably trying to do requires <code>result.object</code> to be an instance of <code>EnActress</code>.</p>
| 0 | 2016-08-31T11:08:46Z | [
"python",
"django",
"elasticsearch"
] |
geodjango check PointField in PolygonField | 39,245,233 | <p>I have two django models:</p>
<pre><code>class Region(models.Model):
geometry = models.PolygonField()
class Position(models.Model):
coordinates = models.PointField()
</code></pre>
<p>I am trying to check if a Position is geographically contained inside a Region:</p>
<pre><code>def check(region, position):
return position.coordinates.intersect(region.geometry)
</code></pre>
<p>But it always returns <strong>False</strong>, even when the Position is contained inside the Region (I am rendering both the PointField and the RegionField with django-leaflet).
I also tried using: </p>
<pre><code>def check(region, position):
return position.coordinates.within(region.geometry)
</code></pre>
<p>but no results so far. Here is the test data that I am using (geojson):</p>
<pre><code>{"coordinates": [46.2071762, 11.1245718], "type": "Point"}
{"coordinates": [[[11.102371215820312, 46.21939582902924], [11.106491088867188, 46.22111800038881], [11.134214401245117, 46.22188999070486], [11.140050888061523, 46.21791115519151], [11.141080856323242, 46.21422899084459], [11.137990951538086, 46.207695510993354], [11.13412857055664, 46.20122065978115], [11.12485885620117, 46.198844376182535], [11.102371215820312, 46.21939582902924]]], "type": "Polygon"}
</code></pre>
<p>Any hint on what the problem could be?
Thanks in advance!</p>
| 1 | 2016-08-31T09:05:00Z | 39,245,996 | <p>Your mistake is logical, I guess: the Point's latitude and longitude should be swapped, since GeoJSON coordinates are ordered <code>[longitude, latitude]</code>.
I mean</p>
<pre><code>{"coordinates": [11.1245718, 46.2071762], "type": "Point"}
</code></pre>
<p>Instead of</p>
<pre><code>{"coordinates": [46.2071762, 11.1245718], "type": "Point"}
</code></pre>
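The coordinate-order bug can be demonstrated without GeoDjango at all. Below is a hedged, pure-Python point-in-polygon sketch (even-odd ray casting, not the GEOS implementation) fed with the question's own GeoJSON coordinates; it shows the swapped point falls outside while the (lon, lat) point falls inside:

```python
def point_in_polygon(pt, poly):
    """Even-odd ray casting: True if pt (x, y) lies inside poly [(x, y), ...]."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray through pt
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside

# Polygon from the question, vertices as (lon, lat)
region = [
    (11.102371215820312, 46.21939582902924), (11.106491088867188, 46.22111800038881),
    (11.134214401245117, 46.22188999070486), (11.140050888061523, 46.21791115519151),
    (11.141080856323242, 46.21422899084459), (11.137990951538086, 46.207695510993354),
    (11.13412857055664, 46.20122065978115), (11.12485885620117, 46.198844376182535),
]
print(point_in_polygon((46.2071762, 11.1245718), region))  # False: swapped order
print(point_in_polygon((11.1245718, 46.2071762), region))  # True: (lon, lat)
```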
| 2 | 2016-08-31T09:39:05Z | [
"python",
"django",
"postgis",
"geodjango"
] |
Create a new list from two dictionaries | 39,245,557 | <p>This is a question about Python. I have the following list of dictionaries:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
</code></pre>
<p>and a dictionary I wanted to compare with:</p>
<pre><code>dictA = {"t": 1, "tid": 2, "gtm": 3}
</code></pre>
<p>I wanted to create a list of dicts that match all the items in <strong>dictA</strong> from <strong>listA</strong> and to include the "id" field as well:</p>
<pre><code>listB = [
{"t": 1, "tid": 2, "gtm": 3, "id": "111"},
{"t": 1, "tid": 2, "gtm": 3, "id": "333"}
]
</code></pre>
<p>I tried doing this:</p>
<pre><code>for k in listA:
for key, value in k.viewitems() & dictA.viewitems():
print key, value
</code></pre>
<p>But it's matching any item in <strong>dictA</strong>.</p>
| 3 | 2016-08-31T09:19:18Z | 39,245,694 | <p>You can use a <a href="https://docs.python.org/2/library/stdtypes.html#dictionary-view-objects" rel="nofollow">dictionary view</a>:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
dictA = {"t": 1, "tid": 2, "gtm": 3}
for k in listA:
if dictA.viewitems() <= k.viewitems():
print k
</code></pre>
<p>And for python 3 use:</p>
<pre><code>if dictA.items() <= k.items():
print(k)
</code></pre>
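To produce exactly the trimmed <code>listB</code> from the question (only dictA's keys plus "id"), the Python 3 subset test can be combined with a comprehension; a small self-contained sketch:

```python
listA = [
    {"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
    {"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
    {"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
    {"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"},
]
dictA = {"t": 1, "tid": 2, "gtm": 3}

# Keep only the matching dicts, trimmed down to dictA's keys plus "id"
listB = [
    {**dictA, "id": d["id"]}
    for d in listA
    if dictA.items() <= d.items()
]
print(listB)
```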
| 2 | 2016-08-31T09:25:34Z | [
"python",
"dictionary"
] |
Create a new list from two dictionaries | 39,245,557 | <p>This is a question about Python. I have the following list of dictionaries:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
</code></pre>
<p>and a dictionary I wanted to compare with:</p>
<pre><code>dictA = {"t": 1, "tid": 2, "gtm": 3}
</code></pre>
<p>I wanted to create a list of dicts that match all the items in <strong>dictA</strong> from <strong>listA</strong> and to include the "id" field as well:</p>
<pre><code>listB = [
{"t": 1, "tid": 2, "gtm": 3, "id": "111"},
{"t": 1, "tid": 2, "gtm": 3, "id": "333"}
]
</code></pre>
<p>I tried doing this:</p>
<pre><code>for k in listA:
for key, value in k.viewitems() & dictA.viewitems():
print key, value
</code></pre>
<p>But it's matching any item in <strong>dictA</strong>.</p>
| 3 | 2016-08-31T09:19:18Z | 39,245,730 | <p>You would need to check the <em>length</em> of the intersection; just checking <code>if dct.viewitems() & dictA.viewitems()</code> would evaluate to True for any intersection:</p>
<pre><code>[dct for dct in listA if len(dct.viewitems() & dictA.viewitems()) == len(dictA)]
</code></pre>
<p>Or just check for a subset, if the items from dictA are a <a href="https://docs.python.org/2/library/sets.html#set-objects" rel="nofollow">subset</a> of each dict:</p>
<pre><code>[dct for dct in listA if dictA.viewitems() <= dct.viewitems()]
</code></pre>
<p>Or reverse the logic looking for a <a href="https://docs.python.org/2/library/sets.html#set-objects" rel="nofollow">superset</a>:</p>
<pre><code> [dct for dct in listA if dct.viewitems() >= dictA.viewitems()]
</code></pre>
| 3 | 2016-08-31T09:27:02Z | [
"python",
"dictionary"
] |
Create a new list from two dictionaries | 39,245,557 | <p>This is a question about Python. I have the following list of dictionaries:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
</code></pre>
<p>and a dictionary I wanted to compare with:</p>
<pre><code>dictA = {"t": 1, "tid": 2, "gtm": 3}
</code></pre>
<p>I wanted to create a list of dicts that match all the items in <strong>dictA</strong> from <strong>listA</strong> and to include the "id" field as well:</p>
<pre><code>listB = [
{"t": 1, "tid": 2, "gtm": 3, "id": "111"},
{"t": 1, "tid": 2, "gtm": 3, "id": "333"}
]
</code></pre>
<p>I tried doing this:</p>
<pre><code>for k in listA:
for key, value in k.viewitems() & dictA.viewitems():
print key, value
</code></pre>
<p>But it's matching any item in <strong>dictA</strong>.</p>
| 3 | 2016-08-31T09:19:18Z | 39,245,841 | <p>For Python 2.7:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
dictA = {"t": 1, "tid": 2, "gtm": 3}
for k in listA:
if all(x in k.viewitems() for x in dictA.viewitems()):
print k
</code></pre>
<p>It gives output as :</p>
<pre><code>{'tid': 2, 'c1': 4, 'id': '111', 't': 1, 'gtm': 3}
{'gtm': 3, 't': 1, 'tid': 2, 'c2': 5, 'c1': 4, 'id': '333'}
</code></pre>
<p>And if you want to create list then instead of <code>print</code>, add dictionary to list As follows:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
dictA = {"t": 1, "tid": 2, "gtm": 3}
ans =[]
for k in listA:
if all(x in k.viewitems() for x in dictA.viewitems()):
ans.append(k)
#print k
print ans
</code></pre>
<p>It gives output:</p>
<pre><code>[{'tid': 2, 'c1': 4, 'id': '111', 't': 1, 'gtm': 3}, {'gtm': 3, 't': 1, 'tid': 2, 'c2': 5, 'c1': 4, 'id': '333'}]
</code></pre>
| 1 | 2016-08-31T09:32:12Z | [
"python",
"dictionary"
] |
Create a new list from two dictionaries | 39,245,557 | <p>This is a question about Python. I have the following list of dictionaries:</p>
<pre><code>listA = [
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "id": "111"},
{"t": 3, "tid": 4, "gtm": 3, "c1": 4, "c2": 5, "id": "222"},
{"t": 1, "tid": 2, "gtm": 3, "c1": 4, "c2": 5, "id": "333"},
{"t": 5, "tid": 6, "gtm": 3, "c1": 4, "c2": 5, "id": "444"}
]
</code></pre>
<p>and a dictionary I wanted to compare with:</p>
<pre><code>dictA = {"t": 1, "tid": 2, "gtm": 3}
</code></pre>
<p>I wanted to create a list of dicts that match all the items in <strong>dictA</strong> from <strong>listA</strong> and to include the "id" field as well:</p>
<pre><code>listB = [
{"t": 1, "tid": 2, "gtm": 3, "id": "111"},
{"t": 1, "tid": 2, "gtm": 3, "id": "333"}
]
</code></pre>
<p>I tried doing this:</p>
<pre><code>for k in listA:
for key, value in k.viewitems() & dictA.viewitems():
print key, value
</code></pre>
<p>But it's matching any item in <strong>dictA</strong>.</p>
| 3 | 2016-08-31T09:19:18Z | 39,246,091 | <p>Try this; <code>all</code> will check the existence of every item of <code>dictA</code> in each dict of <code>listA</code>.</p>
<pre><code>[i for i in listA if all(j in i.items() for j in dictA.items())]
</code></pre>
<p><strong>Result</strong> </p>
<pre><code>[{'c1': 4, 'gtm': 3, 'id': '111', 't': 1, 'tid': 2},
{'c1': 4, 'c2': 5, 'gtm': 3, 'id': '333', 't': 1, 'tid': 2}]
</code></pre>
| 1 | 2016-08-31T09:42:32Z | [
"python",
"dictionary"
] |
Using Python to plot graphs from the mySQL database | 39,245,981 | <p>I am new to Python and MySQL. Initially, I want to generate a few graphs using the data which I have in a MySQL database. At the end I want to generate a pdf report containing those graphs. Any help/advice would be great if you can point me in the right direction. Thanks very much.</p>
| 0 | 2016-08-31T09:38:10Z | 39,246,971 | <p>I am generating graphs from a database with >10 million entries. The aggregating is mostly done by the database itself. Some operations take a lot of time, but it's ok. </p>
<p>The connection to the database is easy with sqlalchemy and mysqlconnector. There are many other connectors for MySQL.</p>
<p>Here is a little test script for sqlalchemy. The tables are generated in this example.</p>
<p>You can find more at <a href="http://www.sqlalchemy.org/" rel="nofollow">http://www.sqlalchemy.org/</a></p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String, Integer, BigInteger, Text, Index
from sqlalchemy.orm.scoping import scoped_session
Base = declarative_base()
class ATestEntity(Base):
__tablename__ = 'a_test_entity_table'
id = Column(String(32), primary_key=True)
astring = Column(String(32))
aint = Column(BigInteger, default=-1)
dialect = "mysql+mysqlconnector"
username = "username"
password = "passwort"
host = "localhost"
port = "3306"
database = "tests"
dbconnector = '%s://%s:%s@%s:%s/%s?charset=utf8mb4&use_unicode=0' % \
(dialect, username, password, host, port, database, )
engine = create_engine(dbconnector)
session_factory = sessionmaker(autocommit=False, autoflush=False)
session_factory.configure(bind=engine)
Base.metadata.create_all(engine)
s = scoped_session(session_factory)
try:
#Create a new entity
a_obj = ATestEntity()
a_obj.id = 'test'
a_obj.astring = 'this is a test'
a_obj.aint = 10
s.add(a_obj)
s.commit()
#Delete a entity
s.query(ATestEntity).filter(ATestEntity.id == 'test').delete()
s.commit()
except:
s.rollback()
raise
</code></pre>
<p>The mathematical calculation outside can be done with numpy and scipy if needed.
<a href="http://www.numpy.org/" rel="nofollow">http://www.numpy.org/</a>
<a href="https://www.scipy.org/" rel="nofollow">https://www.scipy.org/</a></p>
<p>The plot can be done with matplotlib. There are many examples at
<a href="http://matplotlib.org/examples/index.html" rel="nofollow">http://matplotlib.org/examples/index.html</a>.
You can generate pretty diagrams too.</p>
<p>I am generating my reports from generated latex files. But there are many pdf libraries for python too. </p>
<p>It's only one way to get it done.</p>
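The query-then-plot flow described above can be sketched with the stdlib's sqlite3 standing in for MySQL (table and column names are made up for illustration; with a MySQL connector only the connect call changes):

```python
import sqlite3

# Stand-in for the MySQL database: an in-memory table of measurements
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (day TEXT, value REAL)")
con.executemany(
    "INSERT INTO measurements VALUES (?, ?)",
    [("mon", 1.0), ("mon", 3.0), ("tue", 2.0), ("tue", 4.0)],
)

# Let the database do the aggregation, then hand x/y straight to matplotlib
rows = con.execute(
    "SELECT day, AVG(value) FROM measurements GROUP BY day ORDER BY day"
).fetchall()
days = [r[0] for r in rows]
averages = [r[1] for r in rows]
print(days, averages)
# plt.plot(days, averages); plt.savefig('report.pdf') would finish the job
```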
| 0 | 2016-08-31T10:21:52Z | [
"python",
"mysql",
"anaconda",
"spyder"
] |
Pandas Dataframe Line Plot: Show Random Markers | 39,246,115 | <p>I often have dataframes with many observations and want to have a quick glance at the data using a line plot.</p>
<p>The problem is that the colors of the colormap are either repeated over X observations or hard to distinguish e.g. in case of sequential colormaps.</p>
<p>So my idea was to add random markers to the line plot which is where I got stuck.</p>
<p>Here's an example with one markerstyle:</p>
<pre><code># -*- coding: utf-8 -*-
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# dataframe with random data
df = pd.DataFrame(np.random.rand(10, 8))
# plot
df.plot(kind='line', marker='d')
plt.show()
</code></pre>
<p>which delivers:</p>
<p><a href="http://i.stack.imgur.com/BpgIg.png" rel="nofollow"><img src="http://i.stack.imgur.com/BpgIg.png" alt="enter image description here"></a></p>
<p>Is it also possible to draw a (random) marker for each line?</p>
<p>Thanks in advance!</p>
| 1 | 2016-08-31T09:43:29Z | 39,247,062 | <p>First we need to choose a random marker. It can be done via the <code>matplotlib.markers.MarkerStyle.markers</code> dictionary, which contains all available markers. Markers meaning 'nothing' and those starting with 'tick' and 'caret' should be dropped; some more <a href="http://matplotlib.org/api/markers_api.html" rel="nofollow">information</a> about markers. Let's make a list of valid markers and then randomly choose as many of them as we need for plotting the DataFrame, or you could use the second option with <code>filled_markers</code>:</p>
<pre><code>import matplotlib as mpl
import numpy as np
# create valid markers from mpl.markers
valid_markers = ([item[0] for item in mpl.markers.MarkerStyle.markers.items() if
item[1] != 'nothing' and not item[1].startswith('tick')
and not item[1].startswith('caret')])
# use fillable markers
# valid_markers = mpl.markers.MarkerStyle.filled_markers
markers = np.random.choice(valid_markers, df.shape[1], replace=False)
</code></pre>
<p>For example:</p>
<pre><code>In [146]: list(markers )
Out[146]: ['H', '^', 'v', 's', '3', '.', '1', '_']
</code></pre>
<p>Then you could plot your dataframe and set a marker for each line via the <code>set_marker</code> method. Then you could add a legend to your plot:</p>
<pre><code>import pandas as pd
np.random.seed(2016)
df = pd.DataFrame(np.random.rand(10, 8))
ax = df.plot(kind='line')
for i, line in enumerate(ax.get_lines()):
line.set_marker(markers[i])
# for adding legend
ax.legend(ax.get_lines(), df.columns, loc='best')
</code></pre>
<p>Original:</p>
<p><a href="http://i.stack.imgur.com/sP8Sj.png" rel="nofollow"><img src="http://i.stack.imgur.com/sP8Sj.png" alt="enter image description here"></a></p>
<p>Modified:</p>
<p><a href="http://i.stack.imgur.com/qyA96.png" rel="nofollow"><img src="http://i.stack.imgur.com/qyA96.png" alt="enter image description here"></a></p>
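If reproducibility matters more than randomness, cycling a fixed list of filled markers per column avoids both repeats and the unfilled styles. A plotting-free sketch of the bookkeeping (the column names are hypothetical; each line would then get <code>line.set_marker(column_markers[name])</code>):

```python
import itertools

# Hypothetical column names standing in for df.columns
columns = ["c%d" % i for i in range(8)]

# Filled marker styles, cycled so any number of lines gets a marker
marker_cycle = itertools.cycle(["o", "s", "^", "D", "v", "*", "p", "X"])
column_markers = dict(zip(columns, marker_cycle))
print(column_markers)
```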
| 1 | 2016-08-31T10:25:59Z | [
"python",
"pandas",
"matplotlib"
] |
Python Text Cleaning Remove spaces between non alpha numeric characters | 39,246,146 | <p>I have a text-file with alphanumeric and non-alphanumeric characters in the text.</p>
<p>I want to remove any spaces that are between two non-alphanumeric characters.</p>
<p>How can I achieve this efficiently?</p>
<p>Any method/popular libraries are fine.</p>
| -3 | 2016-08-31T09:44:28Z | 39,248,038 | <p>Here's a possible solution to your question:</p>
<pre><code>import re
file = """
7 u p, S a k s F i f t h A v e, A u d i A 4, C a n o n A 7 5
"""
print re.sub(r"([A-Za-z0-9])\ *([A-Za-z0-9])\ *", r"\1\2", file)
</code></pre>
<p>I think <a href="https://docs.python.org/2/library/re.html#re.sub" rel="nofollow">re.sub</a> is a good way to go here.</p>
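One caveat worth checking: since the gaps inside a word and the gaps between words are both single spaces, this substitution also merges word boundaries (e.g. "F i f t h A v e" becomes "FifthAve"); only real delimiters like the commas survive. A sketch verifying the actual behaviour on the sample data:

```python
import re

s = "7 u p, S a k s F i f t h A v e, A u d i A 4, C a n o n A 7 5"
cleaned = re.sub(r"([A-Za-z0-9])\ *([A-Za-z0-9])\ *", r"\1\2", s)
print(cleaned)  # 7up, SaksFifthAve, AudiA4, CanonA75
```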
| 0 | 2016-08-31T11:11:02Z | [
"python",
"parsing",
"text"
] |
Pandas: Fill new column by condition row-wise | 39,246,197 | <pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame([np.random.rand(100),100*[0.1],100*[0.3]]).T
df.columns = ["value","lower","upper"]
df.head()
</code></pre>
<p><a href="http://i.stack.imgur.com/1J2Gm.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/1J2Gm.jpg" alt="enter image description here"></a></p>
<p>How can I create a new column which indicates that <code>value</code> is between <code>lower</code> and <code>upper</code> ?</p>
| 0 | 2016-08-31T09:46:58Z | 39,246,495 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow"><code>between</code></a> for this purpose.</p>
<pre><code>df['new_col'] = df['value'].between(df['lower'], df['upper'])
</code></pre>
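Note that <code>between</code> is inclusive on both bounds by default; for strict bounds you could use <code>(df['lower'] &lt; df['value']) &amp; (df['value'] &lt; df['upper'])</code> instead. A small deterministic sketch of the inclusive behaviour:

```python
import pandas as pd

df = pd.DataFrame({"value": [0.05, 0.2, 0.5],
                   "lower": [0.1, 0.1, 0.1],
                   "upper": [0.3, 0.3, 0.3]})
df["new_col"] = df["value"].between(df["lower"], df["upper"])
print(df["new_col"].tolist())  # [False, True, False]
```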
| 3 | 2016-08-31T10:00:03Z | [
"python",
"pandas"
] |
Check two list of dicts and filter one list to remove dicts that appear in other list | 39,246,218 | <p>I have two lists:</p>
<pre><code>a = [{'val': 'abc', 'locval': {'China':24},'key3': 'meh'},{'val': 'men', 'locval': {'China':24},'key3': 'bla'},{'val': 'men', 'locval': {'India':56},'key3': 'cheh'}]
b = [{'val': 'abc', 'locval': {'China':24},'key3': 'cheh'}, {'val': 'def', 'locval': {'India':56},'key3': 'men'}]
</code></pre>
<p>And want to remove certain items from list A (I don't mind creating a new list) that essentially are the same as the items in list B based on two specific keys - <code>locval</code> and <code>val</code>. For example, the new list should become:</p>
<pre><code>newa = [{'val': 'men', 'locval': {'China':24},'key3': 'bla'},{'val': 'men', 'locval': {'India':56},'key3': 'cheh'}]
</code></pre>
<p>How do I do this?</p>
| 0 | 2016-08-31T09:47:56Z | 39,246,411 | <p>You can make a <em>set</em> of all the pairs of interesting <em>key/value</em> pairings from the <em>b</em> list of dicts then only keep dicts from <em>a</em> that do not have the same <em>key/value</em> pairings:</p>
<pre><code>a = [{'val': 'abc', 'locval': {'China':24},'key3': 'meh'},{'val': 'men', 'locval': {'China':24},'key3': 'bla'},{'val': 'men', 'locval': {'India':56},'key3': 'cheh'}]
b = [{'val': 'abc', 'locval': {'China':24},'key3': 'cheh'}, {'val': 'def', 'locval': {'India':56},'key3': 'men'}]
st = {(tuple(d["locval"].items()), d["val"]) for d in b}
a[:] = (d for d in a if (tuple(d["locval"].items()), d["val"]) not in st)
print(a)
</code></pre>
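The <code>tuple(d["locval"].items())</code> trick is needed because dicts are unhashable and cannot go into a set directly. A self-contained sketch of the same idea, with the items sorted first (an addition relative to the answer above, so that a multi-key <code>locval</code> compares reliably regardless of insertion order):

```python
a = [
    {"val": "abc", "locval": {"China": 24}, "key3": "meh"},
    {"val": "men", "locval": {"China": 24}, "key3": "bla"},
    {"val": "men", "locval": {"India": 56}, "key3": "cheh"},
]
b = [
    {"val": "abc", "locval": {"China": 24}, "key3": "cheh"},
    {"val": "def", "locval": {"India": 56}, "key3": "men"},
]

def key(d):
    # Freeze the nested dict into a hashable, order-independent tuple
    return (tuple(sorted(d["locval"].items())), d["val"])

seen = {key(d) for d in b}
newa = [d for d in a if key(d) not in seen]
print([d["key3"] for d in newa])  # ['bla', 'cheh']
```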
| 1 | 2016-08-31T09:56:31Z | [
"python",
"list",
"dictionary",
"unique"
] |
TCP threaded python | 39,246,259 | <p>I'm learning some networking through Python and came up with this idea of a multithreaded TCP server so I can have multiple clients connected. The problem is that I can only connect one client.</p>
<pre><code>import socket
import os
import threading
from time import sleep
from threading import Thread
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket Preparado...')
def Main():
host = '127.0.0.1'
port = 5000
s.bind((host, port))
print('Enlaze listo...')
print('Escuchando...')
s.listen(5)
c, addr, = s.accept()
os.system('cls')
print('Conexion desde: '+str(addr))
def handle_client(client_socket):
while True:
data = client_socket.recv(1024).decode('utf-8')
if not data: break
print('Client says: ' + data)
print('Sending: ' + data)
client_socket.send(data.encode('utf-8'))
client_socket.close()
if __name__ == '__main__':
while True:
Main()
client_socket, addr = s.accept()
os.system('cls')
print('Conexion desde: '+str(addr))
Thread.start_new_thread(handle_client ,(client_socket,))
s.close()
</code></pre>
<p>Edit: This is my actual code. To test it I open up two Client.py instances and try to connect them to the server. The first Client.py successfully connects (although there are bugs in receiving and sending back info); the second one executes but is not shown in the server output as connected, it just runs and stays like that.</p>
| 1 | 2016-08-31T09:49:38Z | 39,247,892 | <p>You need to create a new thread each time you get a new connection</p>
<pre><code>import socket
import thread
host = '127.0.0.1'
port = 5000
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket Ready...')
s.bind((host, port))
print('Bind Ready...')
print('Listening...')
s.listen(1)
def handle_client(client_socket):
while True:
data = client_socket.recv(1024)
if not data: break
print('Client says: ' + data)
print('Sending: ' + data)
client_socket.send(data)
client_socket.close()
while True:
client_socket, addr = s.accept()
print('Conexion from: '+str(addr))
thread.start_new_thread(handle_client ,(client_socket,))
s.close()
</code></pre>
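The accept-loop-plus-thread-per-client pattern above can be verified end-to-end on the loopback interface. Here is a hedged Python 3 sketch (using the <code>threading</code> module rather than the legacy <code>thread</code> module, and port 0 so the OS picks a free port) with two sequential clients echoed by the same server:

```python
import socket
import threading

def handle_client(conn):
    # Echo loop: send back whatever this client sends until it disconnects
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def serve(server_sock):
    # Accept loop: one handler thread per client
    while True:
        try:
            conn, addr = server_sock.accept()
        except OSError:
            break  # socket closed, stop accepting
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = server.getsockname()[1]
server.listen(5)
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Two clients in a row show the server keeps accepting connections
replies = []
for msg in (b"hello", b"world"):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)
        replies.append(c.recv(1024))
server.close()
print(replies)
```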
| 1 | 2016-08-31T11:04:09Z | [
"python",
"multithreading",
"sockets",
"networking",
"tcp"
] |
TCP threaded python | 39,246,259 | <p>I'm learning some networking through Python and came up with this idea of a multithreaded TCP server so I can have multiple clients connected. The problem is that I can only connect one client.</p>
<pre><code>import socket
import os
import threading
from time import sleep
from threading import Thread
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket Preparado...')
def Main():
host = '127.0.0.1'
port = 5000
s.bind((host, port))
print('Enlaze listo...')
print('Escuchando...')
s.listen(5)
c, addr, = s.accept()
os.system('cls')
print('Conexion desde: '+str(addr))
def handle_client(client_socket):
while True:
data = client_socket.recv(1024).decode('utf-8')
if not data: break
print('Client says: ' + data)
print('Sending: ' + data)
client_socket.send(data.encode('utf-8'))
client_socket.close()
if __name__ == '__main__':
while True:
Main()
client_socket, addr = s.accept()
os.system('cls')
print('Conexion desde: '+str(addr))
Thread.start_new_thread(handle_client ,(client_socket,))
s.close()
</code></pre>
<p>Edit: This is my actual code. To test it I open up two Client.py instances and try to connect them to the server. The first Client.py successfully connects (although there are bugs in receiving and sending back info); the second one executes but is not shown in the server output as connected, it just runs and stays like that.</p>
| 1 | 2016-08-31T09:49:38Z | 39,253,545 | <p>Ok, here's the code solved. I should have said I was working on the Python 3 version. Reading the docs I found it out; here's the code, and below it, the docs.</p>
<pre><code>import socket
import os
import threading
from time import sleep
from threading import Thread
import _thread
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket Preparado...')
def Main():
host = '127.0.0.1'
port = 5000
s.bind((host, port))
print('Enlaze listo...')
print('Escuchando...')
s.listen(1)
def handle_client(client_socket):
while True:
data = client_socket.recv(1024).decode('utf-8')
if not data: break
print('Client says: ' + data)
print('Sending: ' + data)
client_socket.send(data.encode('utf-8'))
client_socket.close()
if __name__ == '__main__':
Main()
while True:
client_socket, addr = s.accept()
os.system('cls')
print('Conexion desde: '+str(addr))
_thread.start_new_thread(handle_client ,(client_socket,))
s.close()
</code></pre>
<p><a href="https://docs.python.org/3/library/threading.html" rel="nofollow">https://docs.python.org/3/library/threading.html</a>
<a href="http://www.tutorialspoint.com/python3/python_multithreading.htm" rel="nofollow">http://www.tutorialspoint.com/python3/python_multithreading.htm</a></p>
<p>The problem was at <code>_thread.start_new_thread(handle_client ,(client_socket,))</code>: just <code>import _thread</code>. I asked some questions here, kept researching, and got it.</p>
<p>Thanks all of you.</p>
<p>WhiteGlove </p>
| 0 | 2016-08-31T15:26:20Z | [
"python",
"multithreading",
"sockets",
"networking",
"tcp"
] |
Get printer status with SNMP using Pysnmp | 39,246,288 | <p>I'm trying to get the status from my printer using the SNMP protocol.
The problem is, I've never used SNMP and I have trouble understanding how I can get a status like (PAPER OUT, RIBBON OUT, etc...).</p>
<p>I configured my printer to enable the SNMP protocol using the community name "public"<br>
I presume SNMP messages are sent on port 161.</p>
<p>I'm using Pysnmp because I want to integrate the python script in my program to listen my printer and display a status if there is a problem with the printer.</p>
<p>For now I've tried this code :</p>
<pre><code>import socket
import random
from struct import pack, unpack
from datetime import datetime as dt
from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp.proto.rfc1902 import Integer, IpAddress, OctetString
ip = '172.20.0.229'
community = 'public'
value = (1,3,6,1,2,1,25,3,5,1,2)
generator = cmdgen.CommandGenerator()
comm_data = cmdgen.CommunityData('server', community, 1) # 1 means version SNMP v2c
transport = cmdgen.UdpTransportTarget((ip, 161))
real_fun = getattr(generator, 'getCmd')
res = (errorIndication, errorStatus, errorIndex, varBinds) \
= real_fun(comm_data, transport, value)
if not errorIndication is None or errorStatus is True:
print "Error: %s %s %s %s" % res
else:
print "%s" % varBinds
</code></pre>
<p>The IP address is the IP of my printer.
The problem is the OID: I don't know what to put in the OID field because I have trouble understanding how OIDs work.</p>
<p>I found this page but I'm not sure it fits all printers ==> <a href="https://secure.n-able.com/webhelp/NC_7-2-0_Cust_en/Content/N-central/Services/Services_PrinterStatus.html" rel="nofollow">click here</a></p>
| 0 | 2016-08-31T09:50:35Z | 39,246,630 | <p>You need your printer-specific MIB file in the common case. E.g., the printer in my office seems not to support either OID from your link. You can also use <code>snmpwalk</code> to get the available OIDs and values on your printer, and once you understand which values you need, you can use them for your specific printer instance.</p>
| 0 | 2016-08-31T10:05:57Z | [
"python",
"snmp",
"zebra"
] |
Cannot import logging.py after I renamed one of my source files logging.py in my Eclipse project | 39,246,374 | <p>I thought I broke Eclipse by renaming one of my source files logging.py. I quickly changed the name to something else but Eclipse cannot find the original Python standard file ... I reinstalled my Anaconda install but it did not correct the problem ... and I later discovered it was likely a Python problem (see Edit 1)</p>
<p>When I ask Eclipse to find or open the file from the 'import logging' line, it seems it cannot find it ...</p>
<p>Any idea to correct this problem without reinstalling Eclipse?</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\ailete\workspace\landema\main.py", line 56, in <module>
import requests
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\__init__.py", line 53, in <module>
from .packages.urllib3.contrib import pyopenssl
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\__init__.py", line 27, in <module>
from . import urllib3
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\__init__.py", line 8, in <module>
from .connectionpool import (
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\connectionpool.py", line 35, in <module>
from .connection import (
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\connection.py", line 43, in <module>
from .util.ssl_ import (
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\util\__init__.py", line 19, in <module>
from .retry import Retry
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\util\retry.py", line 15, in <module>
log = logging.getLogger(__name__)
AttributeError: 'module' object has no attribute 'getLogger'
</code></pre>
<p><strong>Edit 1</strong></p>
<p>The problem happens with the executable generated by pyinstaller too ?! :</p>
<pre><code>...
LOADER: Running main.py
Traceback (most recent call last):
File "<string>", line 43, in <module>
File "c:\users\ailete\appdata\local\temp\pip-build-iulqaq\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 363, in load_module
File "C:\Users\ailete\workspace\landema\entities.py", line 7, in <module>
from config import locale, ribbon_menu
File "c:\users\ailete\appdata\local\temp\pip-build-iulqaq\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 363, in load_module
File "C:\Users\ailete\workspace\landema\config.py", line 7, in <module>
import logging
ImportError: No module named logging
main returned -1
...
</code></pre>
| 0 | 2016-08-31T09:55:00Z | 39,249,890 | <p>First, determine the modules path where the standard-library <code>logging</code> module is placed:</p>
<pre><code>user@host:$ python
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['/usr/lib/python2.7', ... ]
</code></pre>
<p>Second, ensure that there is a logging package. In my OS I have <code>/usr/lib/python2.7/logging</code>. I think you renamed the package accidentally; if so, rename it back.</p>
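A quick way to check which file actually provides the <code>logging</code> name (a sketch that works on both Python 2 and 3): a stray logging.py, or a stale logging.pyc as in this question, early on <code>sys.path</code> shadows the standard library and <code>getLogger</code> disappears.

```python
import logging
import sys

# Which file was imported under the name "logging"?
print(getattr(logging, "__file__", "<frozen/builtin>"))

# False here would mean a shadowed module, not the stdlib one
print(hasattr(logging, "getLogger"))

# The script's own directory is searched first, so that's where to look
print(sys.path[0])
```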
| 1 | 2016-08-31T12:37:42Z | [
"python",
"eclipse",
"rename",
"pyinstaller",
"pyc"
] |
Cannot import logging.py after I renamed one of my source files logging.py in my Eclipse project | 39,246,374 | <p>I thought I broke Eclipse by renaming one of my source files logging.py. I quickly changed the name to something else but Eclipse cannot find the original Python standard file ... I reinstalled my Anaconda install but it did not correct the problem ... and I later discovered it was likely a Python problem (see Edit 1)</p>
<p>When I ask Eclipse to find or open the file from the 'import logging' line, it seems it cannot find it ...</p>
<p>Any idea to correct this problem without reinstalling Eclipse?</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\ailete\workspace\landema\main.py", line 56, in <module>
import requests
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\__init__.py", line 53, in <module>
from .packages.urllib3.contrib import pyopenssl
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\__init__.py", line 27, in <module>
from . import urllib3
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\__init__.py", line 8, in <module>
from .connectionpool import (
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\connectionpool.py", line 35, in <module>
from .connection import (
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\connection.py", line 43, in <module>
from .util.ssl_ import (
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\util\__init__.py", line 19, in <module>
from .retry import Retry
File "C:\Users\ailete\Anaconda2\Lib\site-packages\requests\packages\urllib3\util\retry.py", line 15, in <module>
log = logging.getLogger(__name__)
AttributeError: 'module' object has no attribute 'getLogger'
</code></pre>
<p><strong>Edit 1</strong></p>
<p>The problem happens with the executable generated by pyinstaller too ?! :</p>
<pre><code>...
LOADER: Running main.py
Traceback (most recent call last):
File "<string>", line 43, in <module>
File "c:\users\ailete\appdata\local\temp\pip-build-iulqaq\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 363, in load_module
File "C:\Users\ailete\workspace\landema\entities.py", line 7, in <module>
from config import locale, ribbon_menu
File "c:\users\ailete\appdata\local\temp\pip-build-iulqaq\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 363, in load_module
File "C:\Users\ailete\workspace\landema\config.py", line 7, in <module>
import logging
ImportError: No module named logging
main returned -1
...
</code></pre>
| 0 | 2016-08-31T09:55:00Z | 39,253,125 | <p>Sorry, I solved it myself.
I thought Eclipse deleted the .pyc files when I told it to clean the project, but that is apparently not how it works. So after 2 reinstalls of Anaconda and configuration tweaks for this project (I have others ...), I finally remembered to check that ... the .pyc for the file was still there with the old name 'logging.pyc' ... thanks Eclipse, as always, for wasting my time with your quirks!</p>
| 0 | 2016-08-31T15:04:12Z | [
"python",
"eclipse",
"rename",
"pyinstaller",
"pyc"
] |
Regular expressions issue with python due to values with brackets | 39,246,378 | <p>I have a very large string (300 MB+), and it has some garbage data in it that I need to clean up. I am using Python 2.7 32-bit.</p>
<p>I didn't want to use the string operation <code>replace</code> because the file the user uses is only going to grow over time, so I am trying to use <code>re.sub</code> to replace the value of <code>[lineender]</code> with a new line character like <code>\n</code> or <code>os.linesep</code>.</p>
<p>It seems simple enough to do, so my pattern is:</p>
<p><code>re.sub('\[lineender]\b', os.linesep, text_value)</code></p>
<p>This results in only one value being replaced in the whole string, which is wrong.</p>
<p>Sample Data:</p>
<p><code>s = """A|B|3[lineender]E|F|2M[lineender]"""</code></p>
<p>Any ideas on how I need to modify my regex to get this working?
I basically need to replace the bracket word with a new line character.</p>
| 0 | 2016-08-31T09:55:08Z | 39,246,463 | <p>You need to pass the pattern as a raw string:</p>
<pre><code>re.sub(r'\[lineender\]\b', os.linesep, text_value)
</code></pre>
<p>Alternatively, you'll have to use <code>\\</code> (double backslashes):</p>
<pre><code>re.sub('\\[lineender\\]\\b', os.linesep, text_value)
</code></pre>
| 1 | 2016-08-31T09:58:38Z | [
"python",
"regex",
"python-2.7"
] |
Regular expressions issue with python due to values with brackets | 39,246,378 | <p>I have a very large string (300 MB+), and it has some garbage data in it that I need to clean up. I am using Python 2.7 32-bit.</p>
<p>I didn't want to use the string operation <code>replace</code> because the file the user uses is only going to grow over time, so I am trying to use <code>re.sub</code> to replace the value of <code>[lineender]</code> with a new line character like <code>\n</code> or <code>os.linesep</code>.</p>
<p>It seems simple enough to do, so my pattern is:</p>
<p><code>re.sub('\[lineender]\b', os.linesep, text_value)</code></p>
<p>This results in only one value being replaced in the whole string, which is wrong.</p>
<p>Sample Data:</p>
<p><code>s = """A|B|3[lineender]E|F|2M[lineender]"""</code></p>
<p>Any ideas on how I need to modify my regex to get this working?
I basically need to replace the bracket word with a new line character.</p>
| 0 | 2016-08-31T09:55:08Z | 39,246,511 | <p>Note that <code>\b</code> in a non-raw string literal is a backspace. If you use a word boundary <code>r'\b'</code>, it will require a word char (a letter, digit or an underscore) after <code>]</code>. In your case, I'd remove <code>\b</code> altogether:</p>
<pre><code>re.sub(r'\[lineender]', os.linesep, text_value)
</code></pre>
<p>If you want to make sure there is no word char after <code>]</code>, you may replace <code>\b</code> with <code>\B</code>, but please make sure you are using the <code>r</code> prefix to make your string literal raw.</p>
<p>See <a href="https://ideone.com/8BYN5z" rel="nofollow">Python demo</a>:</p>
<pre><code>import re, os
text_value = """A|B|3[lineender]E|F|2M[lineender]"""
print('"{}"'.format(re.sub(r'\[lineender]', os.linesep, text_value)))
</code></pre>
| 2 | 2016-08-31T10:00:28Z | [
"python",
"regex",
"python-2.7"
] |
Estimate power spectral density of time series DF using scipy.welch | 39,246,401 | <p>I have a pandas DF with datetime index with spacing = 200ms and corresponding values for each index as shown</p>
<pre><code>print(filtered)
2016-07-14 16:31:19.000 -0.010054
2016-07-14 16:31:19.200 -0.011849
2016-07-14 16:31:19.400 -0.009564
2016-07-14 16:31:19.600 -0.001077
[20038 rows x 1 columns]
</code></pre>
<p>I want to compute the power spectral density using scipy.welch function.</p>
<pre><code>f,pxx =welch(filtered.values.flatten(),5)
</code></pre>
<p>But when I run this line of code the power density array pxx is nan</p>
<pre><code>In [897]: pxx
Out[897]:
array([ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
</code></pre>
<p>What is the proper way to run the welch estimation on a time series dataframe and where might I find information on what causes the welch function to output nan?</p>
| 2 | 2016-08-31T02:27:44Z | 39,256,494 | <pre><code>f,pxx =welch(filtered.values.flatten(),5)
</code></pre>
<p>works fine on my side, make sure you have no missing values in your DF and your dtypes are correct (values are floats) first. </p>
<p>this should work </p>
<pre><code>filtered = filtered.astype(float)
filtered = filtered.dropna()
f,pxx =welch(filtered.values.flatten(),5)
</code></pre>
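<p>The underlying reason, sketched with plain Python: NaN propagates through every arithmetic step (including the averaged periodograms inside Welch's method), so a single missing sample poisons the whole estimate.</p>

```python
nan = float('nan')
values = [0.1, nan, 0.3]

total = sum(values)
print(total != total)   # True -- NaN is the only value not equal to itself

# dropping the missing samples first, as dropna() does, fixes it:
clean = [v for v in values if v == v]
print(len(clean))                 # 2
print(sum(clean) == sum(clean))   # True -- a real number again
```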
| 0 | 2016-08-31T18:21:39Z | [
"python",
"scipy",
"pandas"
] |
'unicode' object has no attribute 'utcoffset' | 39,246,594 | <p>In my admin, I am getting errors for only one class, 'unicode' object has no attribute 'utcoffset'. I have looked at a few other similar questions and have been unable to solve it. Any ideas on how to fix it? The traceback is below the class.</p>
<pre><code>class PartRequest(models.Model):
pub_date = models.DateTimeField('date published', default = '2016-08-10', blank=True)
user = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)
part_request_number = models.CharField(_('Part Request Number'),max_length=10, default = number)
serialnumber = models.CharField(_('Serial Number'),max_length=10, default= snumber)
partnumber = models.CharField(_('Part Number'), max_length = 500, default = 'e.g. 002109_1')
build_type = models.ForeignKey(buildtyp, related_name='BuildType', null=True)
project_manager = models.ForeignKey(Person, related_name = 'Manager', null = True)
requester = models.ForeignKey(Person, related_name = 'Requester', null = True)
project_id = models.ForeignKey(Project, related_name = 'Project', null=True)
ordernumber = models.PositiveIntegerField(_('Order Number'), default=0)
description = models.CharField(_('Description'), max_length=500)
quantityrequired = models.PositiveIntegerField(_('Quantity'), default=0)
sensitivity = models.ForeignKey(sens, related_name='Sensitivity', null=True)
build_risk = models.ForeignKey(buildrisk, related_name= 'Risk', null=True)
daterequired = models.DateField(_('Date Required'), default = '2000-08-16')
image_or_pdf_upload = models.FileField(upload_to = upload_location, null=True, blank = True)
material = models.ForeignKey(mat, related_name = 'Material', null=True)
other_requirements = models.CharField(_('Other Requirements'), max_length = 100, default ='')
cert = models.CharField(_('Certificate of Conformity'), max_length=50, default ='')
location = models.ForeignKey(locat, related_name='Location', null=True)
identification_method = models.ForeignKey(MOI, related_name= 'MOI', null=True)
packing = models.CharField(_('Packing Specified by End User'),max_length=100, default = 'Please specify here')
qainfo = models.CharField(_('QA Information with Delivery'),max_length=100, default = 'Please specify here')
shipping = models.CharField(_('Shipping Details'),max_length=100, default = 'Please specify here')
slug = models.SlugField(unique=True, null=True)
def __unicode__(self):
return (self.part_request_number)
def get_absolute_url(self):
return "/buildpage/%s/" %(self.slug)
def create_slug(instance, new_slug=None):
slug = slugify(instance.part_request_number)
if new_slug is not None:
slug = new_slug
qs = PartRequest.objects.filter(slug=slug).order_by("-id")
exists = qs.exists()
if exists:
new_slug = "%s-%s" %(slug, qs.first().id)
return create_slug(instance, new_slug=new_slug)
return slug
def pre_save_receiver(sender, instance, *args, **kwargs):
if not instance.slug:
instance.slug = create_slug(instance)
pre_save.connect(pre_save_receiver, sender = PartRequest)
</code></pre>
<p>Traceback: </p>
<pre><code>Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/admin/buildpage/partrequest/add/?_changelist_filters=requester__id__exact%3D2
Django Version: 1.9.2
Python Version: 2.7.10
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'prodman',
'video',
'tande',
'assets',
'buildpage',
'concept',
'smart_selects')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Template error:
In template /Library/Python/2.7/site-packages/django/contrib/admin/templates/admin/change_form.html, error at line 33
'unicode' object has no attribute 'utcoffset'
23 : {% endblock %}
24 : {% endif %}
25 :
26 : {% block content %}<div id="content-main">
27 : {% block object-tools %}
28 : {% if change %}{% if not is_popup %}
29 : <ul class="object-tools">
30 : {% block object-tools-items %}
31 : <li>
32 : {% url opts|admin_urlname:'history' original.pk|admin_urlquote as history_url %}
33 : <a href="{% add_preserved_filters history_url %}" class="historylink">{% trans "History" %}</a>
34 : </li>
35 : {% if has_absolute_url %}<li><a href="{{ absolute_url }}" class="viewsitelink">{% trans "View on site" %}</a></li>{% endif %}
36 : {% endblock %}
37 : </ul>
38 : {% endif %}{% endif %}
39 : {% endblock %}
40 : <form {% if has_file_field %}enctype="multipart/form-data" {% endif %}action="{{ form_url }}" method="post" id="{{ opts.model_name }}_form" novalidate>{% csrf_token %}{% block form_top %}{% endblock %}
41 : <div>
42 : {% if is_popup %}<input type="hidden" name="{{ is_popup_var }}" value="1" />{% endif %}
43 : {% if to_field %}<input type="hidden" name="{{ to_field_var }}" value="{{ to_field }}" />{% endif %}
Traceback:
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
174. response = self.process_exception_by_middleware(e, request)
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
172. response = response.render()
File "/Library/Python/2.7/site-packages/django/template/response.py" in render
160. self.content = self.rendered_content
File "/Library/Python/2.7/site-packages/django/template/response.py" in rendered_content
137. content = template.render(context, self._request)
File "/Library/Python/2.7/site-packages/django/template/backends/django.py" in render
95. return self.template.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
206. return self._render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in _render
197. return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py" in render
173. return compiled_parent._render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in _render
197. return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py" in render
173. return compiled_parent._render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in _render
197. return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py" in render
69. result = block.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py" in render
69. result = block.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
220. nodelist.append(node.render_annotated(context))
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py" in render
209. return template.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
208. return self._render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in _render
197. return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
220. nodelist.append(node.render_annotated(context))
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
220. nodelist.append(node.render_annotated(context))
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
326. return nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
326. return nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
992. bit = node.render_annotated(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_annotated
959. return self.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
1049. return render_value_in_context(output, context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render_value_in_context
1026. value = force_text(value)
File "/Library/Python/2.7/site-packages/django/utils/encoding.py" in force_text
78. s = six.text_type(s)
File "/Library/Python/2.7/site-packages/django/utils/html.py" in <lambda>
381. klass.__unicode__ = lambda self: mark_safe(klass_unicode(self))
File "/Library/Python/2.7/site-packages/django/forms/boundfield.py" in __str__
43. return self.as_widget()
File "/Library/Python/2.7/site-packages/django/forms/boundfield.py" in as_widget
101. return force_text(widget.render(name, self.value(), attrs=attrs))
File "/Library/Python/2.7/site-packages/django/forms/widgets.py" in render
832. value = self.decompress(value)
File "/Library/Python/2.7/site-packages/django/forms/widgets.py" in decompress
904. value = to_current_timezone(value)
File "/Library/Python/2.7/site-packages/django/forms/utils.py" in to_current_timezone
190. if settings.USE_TZ and value is not None and timezone.is_aware(value):
File "/Library/Python/2.7/site-packages/django/utils/timezone.py" in is_aware
340. return value.utcoffset() is not None
Exception Type: AttributeError at /admin/buildpage/partrequest/add/
Exception Value: 'unicode' object has no attribute 'utcoffset'
</code></pre>
| 0 | 2016-08-31T10:04:34Z | 39,246,873 | <p>The default value for your <code>pub_date</code> field is a string. Since it is a <code>DateTimeField</code>, it should be an instance of <code>datetime.datetime</code> (or a callable such as <code>django.utils.timezone.now</code>).</p>
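<p>A quick stdlib-only way to see the difference (the model itself is not needed): a string simply lacks the <code>utcoffset</code> machinery that <code>timezone.is_aware()</code> calls.</p>

```python
import datetime

bad_default = '2016-08-10'                 # what the model currently passes
print(hasattr(bad_default, 'utcoffset'))   # False -> AttributeError in is_aware()

good_default = datetime.datetime(2016, 8, 10)
print(good_default.utcoffset())            # None -> a naive datetime, handled fine
```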
| 0 | 2016-08-31T10:17:30Z | [
"python",
"sql",
"django",
"unicode"
] |
Boolean filter using a timestamp value on a dataframe in Python | 39,246,615 | <p>I have a dataframe created from a <code>.csv</code> document. Since one of the columns has dates, I have used pandas <code>read_csv</code> with <code>parse_dates</code>:</p>
<pre><code>df = pd.read_csv('CSVdata.csv', encoding = "ISO-8859-1", parse_dates=['Dates_column'])
</code></pre>
<p>The dates range from 2012 to 2016. I want to create a sub-dataframe containing only the rows from 2014.</p>
<p>The only way I have managed to do this, is with two subsequent Boolean filters:</p>
<pre><code>df_a = df[df.Dates_column>pd.Timestamp('2014')] # To create a dataframe from 01/Jan/2014 onwards.
df = df_a[df_a.Dates_column<pd.Timestamp('2015')] # To remove all the values after 01/jan/2015
</code></pre>
<p>Is there a way of doing this in one step, more efficiently?</p>
<p>Many thanks!</p>
| 0 | 2016-08-31T10:05:25Z | 39,246,702 | <p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#dt-accessor" rel="nofollow"><code>dt</code> accessor</a>:</p>
<pre><code>df = df[df.Dates_column.dt.year == 2014]
</code></pre>
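<p>The <code>.dt.year</code> comparison is a single vectorised pass over the column; the same one-comparison idea in plain Python, for illustration (the sample dates are my own):</p>

```python
from datetime import datetime

dates = [datetime(2013, 12, 31), datetime(2014, 6, 1),
         datetime(2014, 11, 5), datetime(2015, 1, 1)]

# one comparison per row, instead of the two-sided Timestamp bounds
rows_2014 = [d for d in dates if d.year == 2014]
print(len(rows_2014))  # 2
```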
| 1 | 2016-08-31T10:09:46Z | [
"python",
"pandas",
"dataframe",
"timestamp"
] |
Calculate percentiles/quantiles for a timeseries with resample or groupby - pandas | 39,246,664 | <p>I have a time series of hourly values and I am trying to derive some basic statistics on a weekly/monthly basis.</p>
<p>If we use the following abstract dataframe, where each column is a time series:</p>
<pre><code>rng = pd.date_range('1/1/2016', periods=2400, freq='H')
df = pd.DataFrame(np.random.randn(len(rng), 4), columns=list('ABCD'), index=rng)
</code></pre>
<p><code>print df[:5]</code> returns:</p>
<pre><code> A B C D
2016-01-01 00:00:00 1.521581 0.102335 0.796271 0.317046
2016-01-01 01:00:00 -0.369221 -0.179821 -1.340149 -0.347298
2016-01-01 02:00:00 0.750247 0.698579 0.440716 0.362159
2016-01-01 03:00:00 -0.465073 1.783315 1.165954 0.142973
2016-01-01 04:00:00 1.995332 1.230331 -0.135243 1.189431
</code></pre>
<p>I can call:</p>
<pre><code>r = df.resample('W-MON')
</code></pre>
<p>and then use: <code>r.min()</code>, <code>r.mean()</code>, <code>r.max()</code>, which all work fine. For instance <code>print r.min()[:5]</code> returns:</p>
<pre><code> A B C D
2016-01-04 -2.676778 -2.450659 -2.401721 -3.209390
2016-01-11 -2.710066 -2.372032 -2.864887 -2.387026
2016-01-18 -2.984805 -2.527528 -3.414003 -2.616434
2016-01-25 -2.625299 -2.947864 -2.642569 -2.262959
2016-02-01 -2.100062 -2.568878 -3.008864 -2.315566
</code></pre>
<p>However, if I try to calculate <strong>percentiles</strong>, using the <strong>quantile</strong> formula, i.e. <code>r.quantile(0.95)</code>, I get one value for each column</p>
<pre><code>A 0.090502
B 0.136594
C 0.058720
D 0.125131
</code></pre>
<p>Is there a way to combine the grouping / resampling using quantiles as arguments?</p>
<p>Thanks</p>
| 1 | 2016-08-31T10:07:50Z | 39,246,854 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tseries.resample.Resampler.apply.html" rel="nofollow"><code>Resampler.apply</code></a>, because <code>Resampler.quantile</code> is <a href="http://pandas.pydata.org/pandas-docs/stable/api.html#resampling" rel="nofollow">not implemented</a> yet:</p>
<pre><code>np.random.seed(1234)
rng = pd.date_range('1/1/2016', periods=2400, freq='H')
df = pd.DataFrame(np.random.randn(len(rng), 4), columns=list('ABCD'), index=rng)
#print (df)
r = df.resample('W-MON')
print (r.apply(lambda x: x.quantile(0.95)))
A B C D
2016-01-04 1.540236 1.925962 1.439512 1.606239
2016-01-11 1.727545 1.520913 1.596961 1.652290
2016-01-18 1.595396 1.669630 1.763577 1.933235
2016-01-25 1.500270 1.604542 1.648790 1.778329
2016-02-01 1.608245 1.791356 1.548159 1.786005
2016-02-08 1.531625 1.408163 1.300414 1.877863
2016-02-15 1.818673 1.613632 1.498623 1.524481
2016-02-22 1.557928 1.566523 1.974486 1.727555
2016-02-29 1.530757 1.529591 1.869422 1.433620
2016-03-07 1.651609 1.452537 1.585765 1.414499
2016-03-14 1.311807 1.717968 1.410036 1.903715
2016-03-21 1.529065 1.693964 1.784480 1.708263
2016-03-28 1.636786 1.405565 1.809235 1.802555
2016-04-04 1.768068 1.564308 1.552492 1.801424
2016-04-11 1.824578 1.794437 1.649749 1.564300
</code></pre>
<hr>
<p>With <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.quantile.html" rel="nofollow"><code>DataFrameGroupBy.quantile</code></a>:</p>
<pre><code>g = df.groupby([pd.TimeGrouper('W-MON')])
print (g.quantile(0.95))
A B C D
2016-01-04 1.540236 1.925962 1.439512 1.606239
2016-01-11 1.727545 1.520913 1.596961 1.652290
2016-01-18 1.595396 1.669630 1.763577 1.933235
2016-01-25 1.500270 1.604542 1.648790 1.778329
2016-02-01 1.608245 1.791356 1.548159 1.786005
2016-02-08 1.531625 1.408163 1.300414 1.877863
2016-02-15 1.818673 1.613632 1.498623 1.524481
2016-02-22 1.557928 1.566523 1.974486 1.727555
2016-02-29 1.530757 1.529591 1.869422 1.433620
2016-03-07 1.651609 1.452537 1.585765 1.414499
2016-03-14 1.311807 1.717968 1.410036 1.903715
2016-03-21 1.529065 1.693964 1.784480 1.708263
2016-03-28 1.636786 1.405565 1.809235 1.802555
2016-04-04 1.768068 1.564308 1.552492 1.801424
2016-04-11 1.824578 1.794437 1.649749 1.564300
</code></pre>
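<p>For reference, the 0.95 value each group receives is the usual linearly interpolated percentile; a stdlib sketch of the same computation for one made-up group (<code>statistics.quantiles</code> needs Python 3.8+):</p>

```python
import statistics

week = [1.0, 2.0, 3.0, 4.0, 5.0]   # one resampled group's values
cuts = statistics.quantiles(week, n=100, method='inclusive')
q95 = cuts[94]   # the 95th percentile cut point
# index (len(week)-1)*0.95 = 3.8 -> interpolate between week[3]=4 and week[4]=5
print(q95)   # 4.8
```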
| 2 | 2016-08-31T10:16:39Z | [
"python",
"pandas",
"group-by",
"resampling",
"quantile"
] |
Normalising pandas data frame using StandardScaler() excluding a particular column | 39,246,676 | <p>So I have a data frame which I formed by merging the training (labeled) and test (unlabelled) data frames. To separate the test data frame again later, I have kept a column with an identifier marking whether the row belonged to training or test.
Now I have to normalize all the values in all the columns except this one column "Sl No.", but I am not finding any way to skip that column.
Here's what I was doing</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
data_norm = data_x_filled.copy() #Has training + test data frames combined to form single data frame
normalizer = StandardScaler()
data_array = normalizer.fit_transform(data_norm)
data_norm = pd.DataFrame(data_array,columns = data_norm.columns).set_index(data_norm.index)
</code></pre>
<p>I just want to exclude the column "Sl No." for normalization but want to retain it after normalization.</p>
| 1 | 2016-08-31T10:08:46Z | 39,247,955 | <p>Try this, it may work (uses <code>numpy</code> as <code>np</code>):</p>
<pre><code>data_norm = data_x_filled.copy() #Has training + test data frames combined to form single data frame
normalizer = StandardScaler()
data_array = normalizer.fit_transform(data_norm.loc[:, data_norm.columns != 'Sl No.'])
data_norm = pd.DataFrame(np.column_stack((data_norm['Sl No.'].values, data_array)), columns = data_norm.columns).set_index(data_norm.index)
</code></pre>
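<p>What <code>StandardScaler</code> does per column -- and how skipping one column by name looks -- sketched with the stdlib only (it uses the population standard deviation, i.e. ddof=0; the tiny table is my own):</p>

```python
import statistics

table = {'Sl No.': [1, 2, 3], 'x': [10.0, 20.0, 30.0]}

for col in table:
    if col == 'Sl No.':   # leave the identifier column untouched
        continue
    mu = statistics.fmean(table[col])
    sd = statistics.pstdev(table[col])   # population std, like StandardScaler
    table[col] = [(v - mu) / sd for v in table[col]]

print(table['Sl No.'])                     # [1, 2, 3] -- retained as-is
print([round(v, 4) for v in table['x']])   # [-1.2247, 0.0, 1.2247]
```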
| 2 | 2016-08-31T11:07:16Z | [
"python",
"pandas",
"scikit-learn",
"normalization"
] |
Converting a numpy array into a c++ native vector | 39,246,725 | <p>I currently have a python C extension which takes a PyObject list and I can parse it using 'PySequence_Fast'.</p>
<p>Is there an equivalent command which would allow me to parse a one dimensional numpy array?</p>
<p>Cheers,
Jack</p>
| 0 | 2016-08-31T10:10:55Z | 39,250,655 | <p>The function <code>PyArray_FROM_OTF</code> converts to a numpy array (unless the argument is already a numpy array when it just returns it with an incremented refcount). See <a href="http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html#converting-an-arbitrary-sequence-object" rel="nofollow">http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html#converting-an-arbitrary-sequence-object</a>. e.g.</p>
<pre><code>PyObject* definitely_numpy_array = PyArray_FROM_OTF(might_be_numpy_array,
                                    NPY_DOUBLE, // you need to specify a type
                                    0 // assorted flags you can add to describe the exact format you want, described in the documentation
                                    );
</code></pre>
<p>This can work on any number of dimensions so if you strictly require 1D you'll have to add a check. It also requires the numpy headers to be included ("numpy/arrayobject.h")</p>
| 1 | 2016-08-31T13:14:08Z | [
"python",
"numpy",
"python-c-api"
] |
AWS lambda to send email with attachements through SES> ParamValidationError: Parameter validation failed | 39,246,789 | <p>I am using AWS lambda to send email with attachments through Amazon SES. The attachment files are stored in an S3 bucket. Has anyone had any experience sending emails with attachments via SES through a Lambda function? </p>
<p>here is my code:</p>
<pre><code>import boto3
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
ses = boto3.client('ses')
s3_client = boto3.client('s3')
to_emails = ["xxx", "xxx"]
COMMASPACE = ', '
def lambda_handler(event, context):
# create raw email
msg = MIMEMultipart()
msg['Subject'] = 'Email subject'
msg['From'] = 'xxx'
msg['To'] = COMMASPACE.join(to_emails) ## joined the array of email strings
# edit: didn't end up using this ^
part = MIMEText('Attached is an important CSV')
msg.attach(part)
file_name = s3_client.download_file('x_file', 'x.jpg', '/tmp/x.jpg')
if file_name:
part = MIMEApplication(open(file_name, "rb").read())
part.add_header('Content-Disposition', 'attachment', filename=file_name)
msg.attach(part)
ses.send_raw_email(RawMessage=msg.as_string(), Source=msg['From'], Destinations=to_emails)
</code></pre>
<blockquote>
<p>found this error:</p>
</blockquote>
<pre><code>Parameter validation failed:
Invalid type for parameter RawMessage, value: Content-Type: multipart/mixed; boundary="===============1951926695068149774=="
MIME-Version: 1.0
Subject: Email subject
From: xxx
To: xxx, xxx
--===============1951926695068149774==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Attached is an important CSV
--===============1951926695068149774==--
, type: <type 'str'>, valid types: <type 'dict'>: ParamValidationError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 33, in lambda_handler
ses.send_raw_email(RawMessage=msg.as_string(), Source=msg['From'], Destinations=to_emails)
File "/var/runtime/botocore/client.py", line 278, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 548, in _make_api_call
api_params, operation_model, context=request_context)
File "/var/runtime/botocore/client.py", line 601, in _convert_to_request_dict
api_params, operation_model)
File "/var/runtime/botocore/validate.py", line 270, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
ParamValidationError: Parameter validation failed:
Invalid type for parameter RawMessage, value: Content-Type: multipart/mixed; boundary="===============1951926695068149774=="
MIME-Version: 1.0
Subject: Email subject
From: xxx
To:xxx, xxx
--===============1951926695068149774==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Attached is an important CSV
--===============1951926695068149774==--
, type: <type 'str'>, valid types: <type 'dict'>
</code></pre>
| 1 | 2016-08-31T10:14:26Z | 39,769,066 | <p>To fix the issue you should use:</p>
<pre><code>ses.send_raw_email(RawMessage={
'Data': msg.as_string(),
},
Source=msg['From'],
Destinations=to_emails
)
</code></pre>
<p>As described in <a href="http://boto3.readthedocs.io/en/latest/reference/services/ses.html#SES.Client.send_raw_email" rel="nofollow">documentation</a></p>
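<p>The shape boto3 validates against can be checked without touching AWS at all, since the message-building part is pure stdlib:</p>

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg['Subject'] = 'Email subject'
msg.attach(MIMEText('Attached is an important CSV'))

# send_raw_email wants a dict with a 'Data' key, not the bare string
raw_message = {'Data': msg.as_string()}
print(isinstance(raw_message, dict))                    # True
print('Subject: Email subject' in raw_message['Data'])  # True
```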
| 0 | 2016-09-29T11:30:09Z | [
"python",
"amazon-web-services",
"amazon-ses",
"amazon-lambda"
] |
How can I make a package from my PyCharm project available to other PyCharm projects on my computer? | 39,246,827 | <p>I am not sure if this is a Python or a PyCharm question. I think that it is quite simple, but I could not find a solution anywhere.</p>
<p>I have started to write a collection of modules with utility stuff. Now I want to import one of my packages in several of my other projects. What do I have to do to be able to simply write <code>import my_module</code>, so that Python finds the right one automatically?</p>
<p>I would like to set this up on a Mac and a PC, so advice on both platforms is appreciated.</p>
| 1 | 2016-08-31T10:15:33Z | 39,246,879 | <p>It's not a PyCharm issue. You have to tell Python where your module lives; to do that, update <code>PYTHONPATH</code>. If you're using bash (OS X, GNU/Linux distros), add this to your <code>~/.bashrc</code> or <code>~/.bash_profile</code>:</p>
<pre><code>export PYTHONPATH="${PYTHONPATH}:/path/to/my_module"
</code></pre>
<p>You can also check out the article <a href="https://support.enthought.com/hc/en-us/articles/204469160-How-do-I-set-PYTHONPATH-and-other-environment-variables-for-Canopy-" rel="nofollow">How do I set PYTHONPATH</a> and the SO question <a href="http://stackoverflow.com/questions/19917492/how-to-use-pythonpath">How to use PYTHONPATH</a> for more info.</p>
<p>To avoid duplicating another answer, here is a link with info on how to update PYTHONPATH on Windows: <a href="http://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7?answertab=votes#tab-top">How to add to the pythonpath in windows 7?</a></p>
<p>After that you should be able to import your module.</p>
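<p>What the <code>PYTHONPATH</code> entry does at interpreter start-up can also be done from inside a script, which is handy for a quick check (the path below is a placeholder):</p>

```python
import sys

extra = '/path/to/my_module'   # placeholder -- use your real location
if extra not in sys.path:
    sys.path.append(extra)     # same effect as the PYTHONPATH entry

print(extra in sys.path)   # True
```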
| 3 | 2016-08-31T10:17:48Z | [
"python",
"pycharm"
] |
List to nested dictionary | 39,246,906 | <p>I have a list of levels like </p>
<pre><code>levels = [["A", "B", "C", "D"], ["A", "B"], ["A", "B", "X"]]
</code></pre>
<p>In response, the nested dictionary should appear like this:</p>
<pre><code> {
"name": "A",
"parent": -1,
"children": [
{
"name": "B",
"parent": "A",
"children": [
{
"name": "C",
"parent": "B",
"children": [
{
"name": "D",
"parent": "C",
}
],
},
{
"name": "X",
"parent": "B"
}
],
}
]
}
</code></pre>
<p>I know I am missing something in the recursion loop.</p>
<p>So far, this is my code:</p>
<pre><code>import csv
import json
class AutoVivification(dict):
"""Implementation of perl's autovivification feature."""
def __getitem__(self, item):
try:
return dict.__getitem__(self, item)
except KeyError:
value = self[item] = type(self)()
return value
def master_tree(data, payload, parent = -1):
if len(data) == 1:
if data[0] not in payload['name']:
payload = {'name':data[0], "parent": parent}
else:
if data[0] in payload['name']:
for k in payload['children']:
master_tree(data[1:], k, data[0])
else:
payload = {'name': data[1], "parent": data[0], 'children':[master_tree(data[1:], payload, data[0])]}
return payload
payload = AutoVivification()
payload = master_tree(["A", "B", "X"], payload)
payload = master_tree(["A", "B", "C", "D"], payload)
print json.dumps(payload, indent=4)
</code></pre>
<p>The problem occurs when the same structure is repeated. For example, A->B->C->D is already present after the 1st iteration; when A->B is passed again it should ideally be skipped, and in the last case just the X node should be added under B.</p>
| 2 | 2016-08-31T10:18:59Z | 39,277,084 | <pre><code>import csv
import json
class AutoVivification(dict):
"""Implementation of perl's autovivification feature."""
def __getitem__(self, item):
try:
return dict.__getitem__(self, item)
except KeyError:
value = self[item] = type(self)()
return value
def master_tree(data, payload, parent = -1):
if len(data) == 1:
if data[0] not in payload['name']:
payload = {'name':data[0], "parent": parent}
else:
if 'name' in payload and data[0] in payload['name']:
get_list = []
for k in payload['children']:
get_list.append(k['name'])
if data[1] in get_list:
master_tree(data[1:], k, data[0])
else:
payload['children'].append(master_tree(data[1:], {'name':data[0], "parent": parent}, data[0]))
else:
payload = {'name': data[0], "parent": parent, 'children':[master_tree(data[1:], payload, data[0])]}
return payload
payload = AutoVivification()
payload = master_tree(["A", "B", "X"], payload)
payload = master_tree(["A", "B", "C", "D"], payload)
payload = master_tree(["A", "B", "Q"], payload)
print json.dumps(payload, indent=4)
</code></pre>
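<p>For comparison, the same merge-by-name idea can be written without the AutoVivification helper; a Python 3 sketch (function and variable names are my own):</p>

```python
import json

def insert_path(siblings, path, parent=-1):
    """Merge one path like ['A', 'B', 'X'] into a list of sibling nodes."""
    if not path:
        return
    name = path[0]
    for node in siblings:
        if node['name'] == name:   # node already exists: descend into it
            insert_path(node.setdefault('children', []), path[1:], name)
            return
    node = {'name': name, 'parent': parent}
    siblings.append(node)
    if path[1:]:
        node['children'] = []
        insert_path(node['children'], path[1:], name)

forest = []
for level in [["A", "B", "C", "D"], ["A", "B"], ["A", "B", "X"]]:
    insert_path(forest, level)

print(json.dumps(forest[0], indent=4))
```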
| 1 | 2016-09-01T16:57:18Z | [
"python",
"tree",
"hierarchy"
] |
"The system cannot find the file specified" error in Spyder | 39,246,978 | <p>I am using python to solve Captcha level 1 on <a href="http://hackthis.co.uk" rel="nofollow">http://hackthis.co.uk</a></p>
<p>here is the code</p>
<pre><code>from PIL import Image
import pytesseract
import requests
from StringIO import StringIO
url = "https://www.hackthis.co.uk/levels/captcha/1"
login = "https://www.hackthis.co.uk/?login"
payload = {"username": "user", "password": "pass"}
def solve(captcha):
pytesseract.image_to_string(Image.open(captcha))
    # other code needs to be written
return "abc";
s = requests.Session() # Start a session
s.post(login, data=payload) # Login
response = s.get(url).text # Get problem data
captcha = s.get("https://www.hackthis.co.uk/levels/extras/captcha1.php")
captcha = Image.open(StringIO(captcha.content))
captcha.save("E:/captcha1.png")
solution = solve("E:/captcha1.png")
payload = {"answer": solution}
s.post(url, data=payload) # Post data
</code></pre>
<p>But I am getting an error.</p>
<p>Complete log is here <a href="http://pastebin.com/7T9aKnPN" rel="nofollow">http://pastebin.com/7T9aKnPN</a></p>
<p>If required, here is subprocess.py <a href="http://pastebin.com/zmkbhgj6" rel="nofollow">http://pastebin.com/zmkbhgj6</a></p>
<p>I tried every other solution on other forums, but none helped.</p>
<p>Thanks in advance :)</p>
<p><strong>EDIT: The problem only occurs when I use the pytesseract.image_to_string() method.</strong></p>
| 0 | 2016-08-31T10:22:13Z | 39,249,311 | <p>Well, the problem was that Tesseract wasn't installed!
I downloaded the Tesseract OCR executables from <a href="https://github.com/tesseract-ocr/tesseract/wiki/Downloads" rel="nofollow">the download page</a>,</p>
<p>then added the downloaded directory to the <code>PATH</code> variable
and installed the VC2015 x86 redistributable (required).
Worked for me :D</p>
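As an alternative to editing <code>PATH</code>, pytesseract can be pointed at the binary directly via its <code>tesseract_cmd</code> setting. A minimal configuration sketch; the install path shown is hypothetical and depends on where the installer actually put tesseract.exe:

```python
import pytesseract

# Hypothetical install location - adjust to wherever tesseract.exe actually lives
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
```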
| 0 | 2016-08-31T12:11:51Z | [
"python",
"spyder"
] |
module.__init__() takes at most 2 arguments error in Python | 39,246,994 | <p>I have 3 files, factory_imagenet.py, imdb.py and imagenet.py</p>
<p><strong>factory_imagenet.py has:</strong></p>
<pre><code>import datasets.imagenet
</code></pre>
<p>It also has a function call as </p>
<pre><code>datasets.imagenet.imagenet(split,devkit_path))
...
</code></pre>
<p><strong>imdb.py has:</strong></p>
<pre><code>class imdb(object):
def __init__(self, name):
self._name = name
...
</code></pre>
<p><strong>imagenet.py has:</strong></p>
<pre><code>import datasets
import datasets.imagenet
import datasets.imdb
</code></pre>
<p>It also has </p>
<pre><code>class imagenet(datasets.imdb):
def __init__(self, image_set, devkit_path=None):
datasets.imdb.__init__(self, image_set)
</code></pre>
<p>All three files are in the datasets folder.</p>
<p>When I am running another script that interacts with these files, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "./tools/train_faster_rcnn_alt_opt.py", line 19, in <module>
from datasets.factory_imagenet import get_imdb
File "/mnt/data2/abhishek/py-faster-rcnn/tools/../lib/datasets/factory_imagenet.py", line 12, in <module>
import datasets.imagenet
File "/mnt/data2/abhishek/py-faster-rcnn/tools/../lib/datasets/imagenet.py", line 21, in <module>
class imagenet(datasets.imdb):
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
</code></pre>
<p>What is the problem here, and what is the intuitive explanation of how to solve such inheritance problems?</p>
| -1 | 2016-08-31T10:22:41Z | 39,247,043 | <p>When you call <code>datasets.imdb.__init__(self, image_set)</code><br>
your <code>imdb.__init__</code> method gets 3 arguments: the two you send, plus the implicit <code>self</code>.</p>
| 0 | 2016-08-31T10:25:11Z | [
"python",
"inheritance"
] |
module.__init__() takes at most 2 arguments error in Python | 39,246,994 | <p>I have 3 files, factory_imagenet.py, imdb.py and imagenet.py</p>
<p><strong>factory_imagenet.py has:</strong></p>
<pre><code>import datasets.imagenet
</code></pre>
<p>It also has a function call as </p>
<pre><code>datasets.imagenet.imagenet(split,devkit_path))
...
</code></pre>
<p><strong>imdb.py has:</strong></p>
<pre><code>class imdb(object):
def __init__(self, name):
self._name = name
...
</code></pre>
<p><strong>imagenet.py has:</strong></p>
<pre><code>import datasets
import datasets.imagenet
import datasets.imdb
</code></pre>
<p>It also has </p>
<pre><code>class imagenet(datasets.imdb):
def __init__(self, image_set, devkit_path=None):
datasets.imdb.__init__(self, image_set)
</code></pre>
<p>All three files are in the datasets folder.</p>
<p>When I am running another script that interacts with these files, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "./tools/train_faster_rcnn_alt_opt.py", line 19, in <module>
from datasets.factory_imagenet import get_imdb
File "/mnt/data2/abhishek/py-faster-rcnn/tools/../lib/datasets/factory_imagenet.py", line 12, in <module>
import datasets.imagenet
File "/mnt/data2/abhishek/py-faster-rcnn/tools/../lib/datasets/imagenet.py", line 21, in <module>
class imagenet(datasets.imdb):
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
</code></pre>
<p>What is the problem here, and what is the intuitive explanation of how to solve such inheritance problems?</p>
| -1 | 2016-08-31T10:22:41Z | 39,247,101 | <blockquote>
<pre><code>module.__init__() takes at most 2 arguments (3 given)
</code></pre>
</blockquote>
<p>This means that you are trying to inherit from a module, not from a class. In fact, <code>datasets.imdb</code> is a module; <code>datasets.imdb.imdb</code> is your class.</p>
<p>You need to change your code so that it looks like this:</p>
<pre><code>class imagenet(datasets.imdb.imdb):
def __init__(self, image_set, devkit_path=None):
datasets.imdb.imdb.__init__(self, image_set)
</code></pre>
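The error itself can be reproduced with any module object, which makes it easy to recognize in the future; a small sketch using the stdlib <code>json</code> module as a stand-in for <code>datasets.imdb</code>:

```python
import json  # any module object will do as a stand-in for datasets.imdb

err = None
try:
    class Broken(json):  # subclassing the module itself, not a class inside it
        pass
except TypeError as e:   # "module(...) takes at most 2 arguments"
    err = e

print(type(err).__name__)  # -> TypeError
```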
| 3 | 2016-08-31T10:27:51Z | [
"python",
"inheritance"
] |
Internal Reference - Auto Increment field | 39,247,220 | <p><a href="http://i.stack.imgur.com/PuAv9.png" rel="nofollow">enter image description here</a></p>
<p>Hello to all,</p>
<p>I need to know how to change the type of the field 'ref' (Internal Reference) from char to an auto-incrementing sequence. You can see the field in the picture. Every time I create a new Contact, the internal reference should auto-increment by 1. For example, the first contact gets Internal Reference 1, and so on.</p>
<p>Thank you,
Igor</p>
| 0 | 2016-08-31T10:33:25Z | 39,250,532 | <p>Create a sequence and write its value to <code>ref</code>:</p>
<p><em>XML code</em>: </p>
<pre><code> <record id="Your_sequence_type_id" model="ir.sequence.type">
<field name="name">Reference</field>
<field name="code">Your_code</field>
</record>
<record id="your_sequence_id" model="ir.sequence">
<field name="name">Reference</field>
<field name="padding">3</field>
<field name="code">Your_code</field>
</record>
</code></pre>
<p><em>Python code</em>: </p>
<pre><code> ref = self.env['ir.sequence'].get(Your_code)
</code></pre>
| 0 | 2016-08-31T13:07:47Z | [
"python",
"xml",
"auto-increment",
"odoo-9"
] |
Python pandas: dataframe grouped by a column(such as, name), and get the value of some columns in each group | 39,247,335 | <p>There is dataframe called as df as following:</p>
<pre><code> name id age text
a 1 1 very good, and I like him
b 2 2 I play basketball with his brother
c 3 3 I hope to get a offer
d 4 4 everything goes well, I think
a 1 1 I will visit china
b 2 2 no one can understand me, I will solve it
c 3 3 I like followers
d 4 4 maybe I will be good
a 1 1 I should work hard to finish my research
b 2 2 water is the source of earth, I agree it
c 3 3 I hope you can keep in touch with me
d 4 4 My baby is very cute, I like him
</code></pre>
<p>You know, there are four names: a, b, c, d, and each name has an id, age, and text. The id and age within each name group are the same, but the text is different; each name has three rows (this is just an example; the real data is large).</p>
<p>I want to get the id and age for each name group. In addition, I want to calculate the character index of 'I' in each text of each group using the function extract_text(text). I mean I want to get the following data, taking the name 'a' as an example: age: 1, id: 1, and the 'I' index in the three rows (example values, not the real ones): 20, 0, 0.</p>
<p>I have tried the following:</p>
<pre><code> import pandas as pd
def extract_text(text):
index_n = None
text_len = len(text)
for i in range(0, text_len, 1):
if text[i] == 'I':
index_n = i
return index_n
df = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'a', 'b', 'c', 'd',
'a', 'b', 'c', 'd'],
'id': [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4],
'age':[1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4],
'text':['very good, and I like him',
'I play basketball with his brother',
'I hope to get a offer',
'everything goes well, I think',
'I will visit china',
'no one can understand me, I will solve it',
'I like followers', 'maybe I will be good',
'I should work hard to finish my research',
'water is the source of earth, I agree it',
'I hope you can keep in touch with me',
'My baby is very cute, I like him']})
id_num = df.groupby('name')['id'].value[0]
id_num = df.groupby('age')['id'].value[0]
index_num = df.groupby('age')['text'].apply(extract_text)
</code></pre>
<p>But there is an error:</p>
<blockquote>
<p>Traceback (most recent call last):File<br>
bot_test_new.py", line 25, in <br>
id_num = df.groupby('name')['id'].value[0]<br>
AttributeError: 'SeriesGroupBy' object has no attribute 'value'</p>
</blockquote>
<p>Please give me a hand; thanks in advance!</p>
| 1 | 2016-08-31T10:38:49Z | 39,247,555 | <p>I'll elaborate a bit more than in the comment. The problem is that extract_text is only able to handle individual strings. However, when you groupby and then apply, you're sending a list with all the strings in the group.</p>
<p>There are two solutions; the first is the one I indicated (sending individual strings):</p>
<pre><code>index_num = df.groupby('age')['text'].apply(lambda x: [extract_text(_) for _ in x])
</code></pre>
<p>The other is changing extract_text so it can handle the list of strings:</p>
<pre><code> def extract_text(list_texts):
list_index = []
for text in list_texts:
index_n = None
text_len = len(text)
for i in range(0, text_len, 1):
if text[i] == 'I':
index_n = i
list_index.append(index_n)
return list_index
</code></pre>
<p>And then continue with:</p>
<pre><code>index_num = df.groupby('age')['text'].apply(extract_text)
</code></pre>
<p>Moreover, you can use <code>text.find("I")</code> instead of your loop inside extract_text. Something like this <code>def extract_text(list_texts): return [text.find("I") for text in list_texts]</code>.</p>
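One caveat worth noting (not in the original answer): <code>str.find</code> returns the index of the <em>first</em> match and <code>-1</code> when the character is absent, whereas the loop in the question keeps the <em>last</em> match and returns <code>None</code> when there is none. For texts containing a single 'I' the results coincide:

```python
text = 'very good, and I like him'
print(text.find('I'))  # index of the first (and only) capital 'I'

def extract_index(t):
    """The loop from the question: keeps the index of the *last* 'I', or None."""
    index_n = None
    for i in range(len(t)):
        if t[i] == 'I':
            index_n = i
    return index_n

print(extract_index(text))          # same value here, since 'I' occurs once
print('no capital here'.find('I'))  # -1, where the loop would give None
```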
| 1 | 2016-08-31T10:48:58Z | [
"python",
"string",
"pandas",
"dataframe",
"group-by"
] |
Python pandas: dataframe grouped by a column(such as, name), and get the value of some columns in each group | 39,247,335 | <p>There is dataframe called as df as following:</p>
<pre><code> name id age text
a 1 1 very good, and I like him
b 2 2 I play basketball with his brother
c 3 3 I hope to get a offer
d 4 4 everything goes well, I think
a 1 1 I will visit china
b 2 2 no one can understand me, I will solve it
c 3 3 I like followers
d 4 4 maybe I will be good
a 1 1 I should work hard to finish my research
b 2 2 water is the source of earth, I agree it
c 3 3 I hope you can keep in touch with me
d 4 4 My baby is very cute, I like him
</code></pre>
<p>You know, there are four names: a, b, c, d, and each name has an id, age, and text. The id and age within each name group are the same, but the text is different; each name has three rows (this is just an example; the real data is large).</p>
<p>I want to get the id and age for each name group. In addition, I want to calculate the character index of 'I' in each text of each group using the function extract_text(text). I mean I want to get the following data, taking the name 'a' as an example: age: 1, id: 1, and the 'I' index in the three rows (example values, not the real ones): 20, 0, 0.</p>
<p>I have tried the following:</p>
<pre><code> import pandas as pd
def extract_text(text):
index_n = None
text_len = len(text)
for i in range(0, text_len, 1):
if text[i] == 'I':
index_n = i
return index_n
df = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'a', 'b', 'c', 'd',
'a', 'b', 'c', 'd'],
'id': [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4],
'age':[1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4],
'text':['very good, and I like him',
'I play basketball with his brother',
'I hope to get a offer',
'everything goes well, I think',
'I will visit china',
'no one can understand me, I will solve it',
'I like followers', 'maybe I will be good',
'I should work hard to finish my research',
'water is the source of earth, I agree it',
'I hope you can keep in touch with me',
'My baby is very cute, I like him']})
id_num = df.groupby('name')['id'].value[0]
id_num = df.groupby('age')['id'].value[0]
index_num = df.groupby('age')['text'].apply(extract_text)
</code></pre>
<p>But there is an error:</p>
<blockquote>
<p>Traceback (most recent call last):File<br>
bot_test_new.py", line 25, in <br>
id_num = df.groupby('name')['id'].value[0]<br>
AttributeError: 'SeriesGroupBy' object has no attribute 'value'</p>
</blockquote>
<p>Please give me a hand; thanks in advance!</p>
| 1 | 2016-08-31T10:38:49Z | 39,248,400 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.find.html" rel="nofollow"><code>str.find</code></a>:</p>
<pre><code>print (df.groupby('age')['text'].apply(lambda x: x.str.find('I').tolist()))
age
1 [15, 0, 0]
2 [0, 26, 30]
3 [0, 0, 0]
4 [22, 6, 22]
Name: text, dtype: object
</code></pre>
<p>If you need <code>id_num</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="nofollow"><code>iloc</code></a>:</p>
<pre><code>id_num = df.groupby('name')['id'].apply(lambda x: x.iloc[0])
print (id_num)
name
a 1
b 2
c 3
d 4
Name: id, dtype: int64
</code></pre>
<p>But it looks like you can simply use:</p>
<pre><code>df['position'] = df['text'].str.find('I')
print (df)
age id name text position
0 1 1 a very good, and I like him 15
1 2 2 b I play basketball with his brother 0
2 3 3 c I hope to get a offer 0
3 4 4 d everything goes well, I think 22
4 1 1 a I will visit china 0
5 2 2 b no one can understand me, I will solve it 26
6 3 3 c I like followers 0
7 4 4 d maybe I will be good 6
8 1 1 a I should work hard to finish my research 0
9 2 2 b water is the source of earth, I agree it 30
10 3 3 c I hope you can keep in touch with me 0
11 4 4 d My baby is very cute, I like him 22
</code></pre>
| 1 | 2016-08-31T11:27:54Z | [
"python",
"string",
"pandas",
"dataframe",
"group-by"
] |
PyQt GUI size on high resolution screens | 39,247,342 | <p>I posted a <a href="http://stackoverflow.com/questions/38546580/tkinter-gui-size-on-high-resolution-screens/38546859#38546859">question</a> a while ago asking about Tkinter backends and subsequently forgot about it, but I've since realised that I'm using the PyQt backend. Is there a fix for that?</p>
<p>Original Question:</p>
<p>So it appears that matplotlib GUI plots (a la plt.show()) don't adapt to monitor resolution and appear tiny on high-resolution screens. Is there a matplotlib+PyQt fix, or do I have to fiddle around somewhere in the Windows settings?</p>
<p>Thanks</p>
<p><a href="http://i.stack.imgur.com/HuLZy.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/HuLZy.jpg" alt="Example"></a></p>
| 0 | 2016-08-31T10:39:09Z | 39,247,516 | <p>While I do not have much experience with PyQt, upon googling I found <a href="http://stackoverflow.com/questions/14726296/how-to-make-pyqt-window-state-to-maximised-in-pyqt">this</a> SO question about setting the window state of a PyQT window to maximised. The accepted answer says to use <code>self.showMaximized()</code> to do so.</p>
| 0 | 2016-08-31T10:46:48Z | [
"python",
"qt",
"user-interface",
"matplotlib",
"pyqt"
] |
How to check if, elif, else conditions at the same time in django template | 39,247,352 | <p>I have 3 categories in the category field. I want to check them in a Django template and assign appropriate URLs for the 3 distinct categories.</p>
<p>I tried:</p>
<pre><code>{% for entry in entries %}
{% if entry.category == 1 %}
<a href="{% url 'member:person-list' %}"><li>{{ entry.category }}</li></a>
{% elif entry.category == 2 %}
<a href="{% url 'member:execomember-list' %}"><li>{{ entry.category}}</li></a>
{% else %}
<a href="{% url 'member:lifemember-list' %}"><li>{{ entry.category}}</li></a>
{% endif %}
{% empty %}
<li>No recent entries</li>
{% endfor %}
</code></pre>
<p>But I know Python only takes the first matching condition in an if. Therefore it gave only one desired result. How do I get all three entries with their correct links?</p>
<p><strong>Edit:</strong></p>
<p>Though Python only takes the first matching if condition, when using elif inside a for loop the conditions are checked again for each entry until the endfor. Therefore my answer below worked fine.</p>
| -6 | 2016-08-31T10:39:31Z | 39,247,780 | <p>This is my working answer:</p>
<pre><code> {% for entry in entries %}
{% if entry.category == 'General Member' %}
<a href="{% url 'member:person-list' %}"><li>{{ entry.category }}</li></a>
{% elif entry.category == 'Executive Committee Member' %}
<a href="{% url 'member:execomember-list' %}"><li>{{ entry.category}}</li></a>
{% else %}
<a href="{% url 'member:person-list' %}"><li>{{ entry.category}}</li></a>
{% endif %}
{% empty %}
<li>No recent entries</li>
{% endfor %}
</code></pre>
<p>Webpage view of output: </p>
<p><a href="http://i.stack.imgur.com/Tilua.png" rel="nofollow"><img src="http://i.stack.imgur.com/Tilua.png" alt="enter image description here"></a></p>
<p>For more clarification, I checked my code in the Django shell. See the snippet of my shell:</p>
<p><a href="http://i.stack.imgur.com/NITIk.png" rel="nofollow"><img src="http://i.stack.imgur.com/NITIk.png" alt="enter image description here"></a></p>
<p>Even if I change the order of the if conditions, the result remains the same. See my shell code with output:</p>
<p><a href="http://i.stack.imgur.com/wVJJs.png" rel="nofollow"><img src="http://i.stack.imgur.com/wVJJs.png" alt="enter image description here"></a></p>
<p>Do you see anything wrong with my code? It fully complies with the Python conditions and gives the expected results. Anybody can check it in their Django shell.</p>
| -1 | 2016-08-31T10:59:58Z | [
"python",
"django"
] |
How to check if, elif, else conditions at the same time in django template | 39,247,352 | <p>I have 3 categories in the category field. I want to check them in a Django template and assign appropriate URLs for the 3 distinct categories.</p>
<p>I tried:</p>
<pre><code>{% for entry in entries %}
{% if entry.category == 1 %}
<a href="{% url 'member:person-list' %}"><li>{{ entry.category }}</li></a>
{% elif entry.category == 2 %}
<a href="{% url 'member:execomember-list' %}"><li>{{ entry.category}}</li></a>
{% else %}
<a href="{% url 'member:lifemember-list' %}"><li>{{ entry.category}}</li></a>
{% endif %}
{% empty %}
<li>No recent entries</li>
{% endfor %}
</code></pre>
<p>But I know Python only takes the first matching condition in an if. Therefore it gave only one desired result. How do I get all three entries with their correct links?</p>
<p><strong>Edit:</strong></p>
<p>Though Python only takes the first matching if condition, when using elif inside a for loop the conditions are checked again for each entry until the endfor. Therefore my answer below worked fine.</p>
| -6 | 2016-08-31T10:39:31Z | 39,268,453 | <p>From what I understand, you want to associate each entry with a URL depending on which of the 3 categories it belongs to. You can refactor this into your <code>Entry</code> model to minimize logic in templates, like:</p>
<pre><code>from django.core.urlresolvers import reverse  # django.urls.reverse in Django >= 2.0

class Entry(models.Model):
    category = ...

    def get_entry_url(self):
        if self.category == 'General Member':
            return reverse('member:person-list')
        elif self.category == 'Executive Committee Member':
            return reverse('member:execomember-list')
        else:
            return reverse('member:person-list')
</code></pre>
<p>Then in template:</p>
<pre><code>{% for entry in entries %}
<a href="{{ entry.get_entry_url }}"><li>{{ entry.category }}</li></a>
{% empty %}
<li>No recent entries</li>
{% endfor %}
</code></pre>
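A dictionary lookup is a slightly tighter variant of the same branching; a plain-Python sketch (route names taken from the question, the helper name is mine):

```python
CATEGORY_ROUTES = {
    'General Member': 'member:person-list',
    'Executive Committee Member': 'member:execomember-list',
}

def route_name_for(category):
    # fall back to the person list, as in the if/elif/else version
    return CATEGORY_ROUTES.get(category, 'member:person-list')

print(route_name_for('Executive Committee Member'))  # -> member:execomember-list
```

Inside the model, <code>get_entry_url</code> would then just be <code>reverse(route_name_for(self.category))</code>.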
| 1 | 2016-09-01T10:00:10Z | [
"python",
"django"
] |
How do I parse the data structure sent by DynamoDB to Lambda using python? | 39,247,445 | <p>I am trying to link DynamoDB with Amazon Lambda such that when an item is inserted in DynamoDB, it will be sent to my Lambda function, which will parse the data structure and perform operations on the data. However, the data posted looks like this:</p>
<pre><code>{
"Records": [
{
"eventID": "ebefd45a0610c7a54654nb09b491c7c",
"eventVersion": "1.1",
"dynamodb": {
"SequenceNumber": "3300000555554005534567541",
"Keys": {
"Id": {
"S": "13"
}
},
"SizeBytes": 100,
"NewImage": {
"url": {
"S": "http:\/\/test.com\/blog\/tips-on-choosing-a-suitable-company-2\/"
},
"db_ref_id": {
"S": "123"
},
"Id": {
"S": "13"
}
},
"ApproximateCreationDateTime": 1472638500,
"StreamViewType": "NEW_AND_OLD_IMAGES"
},
"awsRegion": "us-west-2",
"eventName": "INSERT",
"eventSourceARN": "arn:aws:dynamodb:us-west-2:3454354:table\/urls\/stream\/2016-08-31T07:21:11.412",
"eventSource": "aws:dynamodb"
}
]
}
</code></pre>
<p>I found this piece of code online that does this:</p>
<pre><code>for record in event['Records']:
print(record['eventID'])
</code></pre>
<p>However, I modified my code to this, and it doesn't work:</p>
<pre><code>for record in event['Records']:
print(record['dynamodb']['NewImage']['url']['S'])
</code></pre>
<p>Since I am not very familiar with Python, I'm not sure what the best way of parsing this data structure is. Any suggestions would really help.</p>
| 0 | 2016-08-31T10:43:09Z | 39,247,787 | <p><code>print(event['Records'][0]['eventID'])</code> prints the eventID.</p>
<p>There are many data structures involved here. The outer one is a Python dictionary whose values you can retrieve using keys.
In our case you have only one mapping in the outer dictionary, so, assuming the whole data structure above is assigned to a variable named <code>event</code>, you can do <code>event['Records']</code>. Inside it, you have a list <code>[]</code> whose elements you can only fetch using indexes (e.g. <code>list[1]</code>, <code>list[2]</code>, etc.).</p>
<p>Since, from the format you have, I suppose you might have many records, you could do:</p>
<p><code>for record in event['Records']:
print(record['dynamodb']['NewImage']['url']['S'])</code></p>
<p>This should print what you want. If the data posted above arrives as a string, it first needs to be parsed with Python's <code>json</code> module.</p>
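To illustrate that last point, a small self-contained sketch (using a trimmed-down record, not the full event from the question) showing the <code>json.loads</code> step followed by the same loop:

```python
import json

# A trimmed-down stand-in for the event payload shown in the question
raw = '{"Records": [{"eventID": "ebefd45a", "dynamodb": {"NewImage": {"url": {"S": "http://test.com/blog/"}}}}]}'

event = json.loads(raw)  # only needed when the event arrives as a JSON string
urls = []
for record in event['Records']:
    # .get() avoids a KeyError for records without a NewImage (e.g. REMOVE events)
    new_image = record.get('dynamodb', {}).get('NewImage', {})
    urls.append(new_image.get('url', {}).get('S'))

print(urls)  # -> ['http://test.com/blog/']
```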
| 1 | 2016-08-31T11:00:08Z | [
"python",
"amazon-web-services",
"amazon-dynamodb",
"aws-lambda"
] |
score_cbow_pair in word2vec (gensim) | 39,247,496 | <p>I wanted to output the log-probability during learning of the word and doc vectors in gensim. I have taken a look at the implementation of the score function in the "slow plain numpy" version.</p>
<pre class="lang-py prettyprint-override"><code>def score_cbow_pair(model, word, word2_indices, l1):
l2a = model.syn1[word.point] # 2d matrix, codelen x layer1_size
sgn = (-1.0)**word.code # ch function, 0-> 1, 1 -> -1
lprob = -log(1.0 + exp(-sgn*dot(l1, l2a.T)))
return sum(lprob)
</code></pre>
<p>The score function should make use of the parameters learned during hierarchical softmax training. But in the calculation of the log-probability there is supposed to be a sigmoid function (<a href="http://www-personal.umich.edu/~ronxin/pdf/w2vexp.pdf" rel="nofollow">word2vec Parameter Learning Explained, equation (45)</a>).
So does gensim really calculate the log-probability in <code>lprob</code>, or is it just a score for comparison purposes?</p>
<p>I would have calculated the log-probability as follows:
<code>-log(1.0/(1.0+exp(-sgn*dot(l1, l2a.T))))</code></p>
<p>Is this equation not used because it explodes for values close to zero or is it in general wrong?</p>
| 0 | 2016-08-31T10:45:53Z | 39,357,806 | <p>I've overlooked that the logarithm of the sigmoid function can be rewritten: <code>log(1.0/(1.0+exp(-sgn*dot(l1, l2a.T)))) = log(1)-log(1.0+exp(-sgn*dot(l1, l2a.T))) = -log(1.0+exp(-sgn*dot(l1, l2a.T)))</code></p>
<p>So the code does compute the log-likelihood.</p>
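The identity is easy to sanity-check numerically; a quick sketch with an arbitrary value standing in for <code>sgn*dot(l1, l2a.T)</code>:

```python
from math import exp, log

x = 0.73  # arbitrary stand-in for sgn*dot(l1, l2a.T)
direct = log(1.0 / (1.0 + exp(-x)))  # log of the sigmoid
rewritten = -log(1.0 + exp(-x))      # the form used in score_cbow_pair
print(abs(direct - rewritten))       # agrees up to floating-point error
```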
| 0 | 2016-09-06T21:03:22Z | [
"python",
"numpy",
"probability",
"gensim",
"word2vec"
] |
How to organize data of several datasets into the same dataframe using pandas in Python? | 39,247,527 | <p>I am having trouble keeping some data organized the way I want in a data frame using pandas in Python.</p>
<p>I would like to have a single data frame where the data would be organized in three columns (e.g. <code>Time</code>, <code>V</code> and <code>I</code>).</p>
<p>However, I would like to have the data of different samples in the same data frame so that I could easily select the data from <code>Sample#1</code> or <code>Sample#2</code>.</p>
<p>What I came up to was something like this:</p>
<pre><code>df1 = pd.DataFrame({'Time': np.arange(0,10,0.5), 'V': np.random.rand(20), 'I': np.random.rand(20)})
df1['Sample']= 'sample_1'
df2 = pd.DataFrame({'Time': np.arange(0,10,0.5), 'V': np.random.rand(20), 'I': np.random.rand(20)})
df2['Sample']= 'sample_2'
df = df1.append(df2)
</code></pre>
<p>Notice that I added another column called <code>Sample</code> to keep track of which data corresponds to which sample.</p>
<p>But then I don't know how to select the data for <code>sample_1</code> or <code>sample_2</code> from <code>df</code>.</p>
<p>How can I do this and is this the right way to organize my data? Should I be using <code>MultiIndex</code>?</p>
| 1 | 2016-08-31T10:47:14Z | 39,247,584 | <p>Yes, <code>MultiIndex</code> is one possible solution:</p>
<pre><code>np.random.seed(1)
df1 = pd.DataFrame({'Time': np.arange(0,10,0.5),
'V': np.random.rand(20),
'I': np.random.rand(20)})
np.random.seed(2)
df2 = pd.DataFrame({'Time': np.arange(0,10,0.5),
'V': np.random.rand(20),
'I': np.random.rand(20)})
#print (df1)
#print (df2)
</code></pre>
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> all <code>DataFrame</code>s into one, specifying each source <code>DataFrame</code> in the <code>keys</code> parameter:</p>
<pre><code>print (pd.concat([df1, df2], keys=('sample_1','sample_2')))
I Time V
sample_1 0 0.800745 0.0 0.417022
1 0.968262 0.5 0.720324
2 0.313424 1.0 0.000114
3 0.692323 1.5 0.302333
4 0.876389 2.0 0.146756
5 0.894607 2.5 0.092339
6 0.085044 3.0 0.186260
7 0.039055 3.5 0.345561
8 0.169830 4.0 0.396767
9 0.878143 4.5 0.538817
10 0.098347 5.0 0.419195
11 0.421108 5.5 0.685220
12 0.957890 6.0 0.204452
13 0.533165 6.5 0.878117
14 0.691877 7.0 0.027388
15 0.315516 7.5 0.670468
16 0.686501 8.0 0.417305
17 0.834626 8.5 0.558690
18 0.018288 9.0 0.140387
19 0.750144 9.5 0.198101
sample_2 0 0.505246 0.0 0.435995
1 0.065287 0.5 0.025926
2 0.428122 1.0 0.549662
3 0.096531 1.5 0.435322
4 0.127160 2.0 0.420368
5 0.596745 2.5 0.330335
6 0.226012 3.0 0.204649
7 0.106946 3.5 0.619271
8 0.220306 4.0 0.299655
9 0.349826 4.5 0.266827
10 0.467787 5.0 0.621134
11 0.201743 5.5 0.529142
12 0.640407 6.0 0.134580
13 0.483070 6.5 0.513578
14 0.505237 7.0 0.184440
15 0.386893 7.5 0.785335
16 0.793637 8.0 0.853975
17 0.580004 8.5 0.494237
18 0.162299 9.0 0.846561
19 0.700752 9.5 0.079645
</code></pre>
<p>Selecting data is possible with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow"><code>xs</code></a> - see <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#cross-section" rel="nofollow">cross section</a>:</p>
<pre><code>print (df.xs('sample_1', level=0))
I Time V
0 0.800745 0.0 0.417022
1 0.968262 0.5 0.720324
2 0.313424 1.0 0.000114
3 0.692323 1.5 0.302333
4 0.876389 2.0 0.146756
5 0.894607 2.5 0.092339
6 0.085044 3.0 0.186260
7 0.039055 3.5 0.345561
8 0.169830 4.0 0.396767
9 0.878143 4.5 0.538817
10 0.098347 5.0 0.419195
11 0.421108 5.5 0.685220
12 0.957890 6.0 0.204452
13 0.533165 6.5 0.878117
14 0.691877 7.0 0.027388
15 0.315516 7.5 0.670468
16 0.686501 8.0 0.417305
17 0.834626 8.5 0.558690
18 0.018288 9.0 0.140387
19 0.750144 9.5 0.198101
</code></pre>
<p>If you need to select only some columns:</p>
<pre><code>print (df.xs('sample_1', level=0)[['Time','I']])
Time I
0 0.0 0.800745
1 0.5 0.968262
2 1.0 0.313424
3 1.5 0.692323
4 2.0 0.876389
5 2.5 0.894607
6 3.0 0.085044
7 3.5 0.039055
8 4.0 0.169830
9 4.5 0.878143
10 5.0 0.098347
11 5.5 0.421108
12 6.0 0.957890
13 6.5 0.533165
14 7.0 0.691877
15 7.5 0.315516
16 8.0 0.686501
17 8.5 0.834626
18 9.0 0.018288
19 9.5 0.750144
</code></pre>
<hr>
<p>Another solution is to use <code>IndexSlice</code> - see <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow">using slicers</a>:</p>
<pre><code>idx = pd.IndexSlice
print (df.loc[idx['sample_1',:], ['Time','I']])
Time I
sample_1 0 0.0 0.800745
1 0.5 0.968262
2 1.0 0.313424
3 1.5 0.692323
4 2.0 0.876389
5 2.5 0.894607
6 3.0 0.085044
7 3.5 0.039055
8 4.0 0.169830
9 4.5 0.878143
10 5.0 0.098347
11 5.5 0.421108
12 6.0 0.957890
13 6.5 0.533165
14 7.0 0.691877
15 7.5 0.315516
16 8.0 0.686501
17 8.5 0.834626
18 9.0 0.018288
19 9.5 0.750144
</code></pre>
<p>If you need to remove the first level of the <code>MultiIndex</code>:</p>
<pre><code>idx = pd.IndexSlice
print (df.loc[idx['sample_1',:], ['Time','I']].reset_index(level=0, drop=True))
Time I
0 0.0 0.800745
1 0.5 0.968262
2 1.0 0.313424
3 1.5 0.692323
4 2.0 0.876389
5 2.5 0.894607
6 3.0 0.085044
7 3.5 0.039055
8 4.0 0.169830
9 4.5 0.878143
10 5.0 0.098347
11 5.5 0.421108
12 6.0 0.957890
13 6.5 0.533165
14 7.0 0.691877
15 7.5 0.315516
16 8.0 0.686501
17 8.5 0.834626
18 9.0 0.018288
19 9.5 0.750144
</code></pre>
| 1 | 2016-08-31T10:50:01Z | [
"python",
"pandas",
"indexing",
"dataframe",
"multi-index"
] |
AttributeError when trying to set a window type hint with GTK using Python | 39,247,538 | <p>I'm still really new to python, so this will probably be a dumb question.</p>
<p>I'm trying to learn how to make a GUI using PyGTK (mostly because I use Linux and I'd like to have GTK theming support in my programs). I've started with the simplest window possible, and I've found that since I'm using a tiling window manager the program will be tiled.</p>
<p>This is not a problem, but the first program I wanted to make needs a floating window. I could fix it client-side by modifying the window manager's configuration, but I'd like to do it right and make it work for everyone.</p>
<p><a href="https://faq.i3wm.org/question/61/forcing-windows-as-always-floating.1.html" rel="nofollow">After some research</a> I've found that the way to do it is by setting a window type hint that the window manager will automatically set as "floating". This is what I've tried, using <a href="http://www.pygtk.org/docs/pygtk/class-gtkwindow.html#method-gtkwindow--set-type-hint" rel="nofollow">this</a> as resource:</p>
<pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
win = Gtk.Window()
win.connect("delete-event", Gtk.main_quit)
win.set_type_hint(Gtk.gdk.WINDOW_TYPE_HINT_UTILITY)
win.show_all()
Gtk.main()
</code></pre>
<p>But it doesn't work. I get a traceback.</p>
<pre><code>Traceback (most recent call last):
File "/mnt/storHDD/Programming/Python/python-learning/guitesting.py", line 7, in <module>
win.set_type_hint(Gtk.gdk.WINDOW_TYPE_HINT_UTILITY)
File "/usr/lib/python3.5/site-packages/gi/overrides/__init__.py", line 39, in __getattr__
return getattr(self._introspection_module, name)
File "/usr/lib/python3.5/site-packages/gi/module.py", line 139, in __getattr__
self.__name__, name))
AttributeError: 'gi.repository.Gtk' object has no attribute 'gdk'
</code></pre>
<p>I don't really know what to do from here. I've tried to import Gdk too, but it doesn't seem to change anything. Any idea what I can do to solve this?</p>
| 1 | 2016-08-31T10:48:09Z | 39,248,628 | <p>You need to import <code>Gdk</code>, then use <code>Gdk.WindowTypeHint.UTILITY</code>, not <code>Gtk.gdk.WINDOW_TYPE_HINT_UTILITY</code>:</p>
<pre class="lang-py prettyprint-override"><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk
win = Gtk.Window()
win.connect("delete-event", Gtk.main_quit)
win.set_type_hint(Gdk.WindowTypeHint.UTILITY)
win.show_all()
Gtk.main()
</code></pre>
<p><a href="http://i.stack.imgur.com/TKBll.png" rel="nofollow"><img src="http://i.stack.imgur.com/TKBll.png" alt="enter image description here"></a></p>
<p>see also <a href="https://people.gnome.org/~gcampagna/docs/Gdk-3.0/Gdk.WindowTypeHint.html" rel="nofollow">here</a>.</p>
| 1 | 2016-08-31T11:38:37Z | [
"python",
"user-interface",
"gtk"
] |
Create dataframe column with elif of different types in pandas | 39,247,591 | <p>This is an extension of <a href="http://stackoverflow.com/questions/18194404/create-column-with-elif-in-pandas">question</a>.</p>
<p>I'd like to do some if/elif/else logic to create a dataframe column, pseudocode example:</p>
<pre><code>if Col1 = 'A' and Col2 = 1 then Col3 = 'A1'
else if Col1 = 'A' and Col2 = 0 then Col3 = 'A0'
else Col3 = 'XX'
</code></pre>
<p>Is it ok to mix up the types there? I'm getting this error:</p>
<blockquote>
<p>TypeError: cannot compare a dtyped [int64] array with a scalar of type [bool]</p>
</blockquote>
| 1 | 2016-08-31T10:50:09Z | 39,247,683 | <p>I think you can use:</p>
<pre><code>df['Col3'] = 'XX'
df.loc[(df.Col1 == 'A') & (df.Col2 == 1), 'Col3'] = 'A1'
df.loc[(df.Col1 == 'A') & (df.Col2 == 0), 'Col3'] = 'A0'
</code></pre>
<p>With double <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>df["Col3"] = np.where((df.Col1 == 'A') & (df.Col2 == 1) , "A1",
np.where((df.Col1 == 'A') & (df.Col2 == 0), 'A0', 'XX'))
</code></pre>
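<p>For more than two branches, <code>numpy.select</code> avoids nesting the <code>numpy.where</code> calls. A minimal, self-contained sketch using the same sample frame:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['A', 'B', 'A', 'B'], 'Col2': [1, 1, 0, 0]})

# each condition is paired with the choice at the same position;
# rows matching no condition fall back to the default
conditions = [(df.Col1 == 'A') & (df.Col2 == 1),
              (df.Col1 == 'A') & (df.Col2 == 0)]
choices = ['A1', 'A0']
df['Col3'] = np.select(conditions, choices, default='XX')
print(df['Col3'].tolist())  # ['A1', 'XX', 'A0', 'XX']
```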
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'Col1':['A','B','A','B'],
'Col2':[1,1,0,0]})
print (df)
Col1 Col2
0 A 1
1 B 1
2 A 0
3 B 0
df['Col3'] = 'XX'
df.loc[(df.Col1 == 'A') & (df.Col2 == 1), 'Col3'] = 'A1'
df.loc[(df.Col1 == 'A') & (df.Col2 == 0), 'Col3'] = 'A0'
df["Col4"] = np.where((df.Col1 == 'A') & (df.Col2 == 1) , "A1",
np.where((df.Col1 == 'A') & (df.Col2 == 0), 'A0', 'XX'))
print (df)
Col1 Col2 Col3 Col4
0 A 1 A1 A1
1 B 1 XX XX
2 A 0 A0 A0
3 B 0 XX XX
</code></pre>
| 2 | 2016-08-31T10:55:53Z | [
"python",
"pandas",
"indexing",
"condition",
"multiple-columns"
] |
'PyQt4 module not found' - Cannot use PyQt to open a GUI | 39,247,929 | <p>I am very new to using PyQt4. So far, I have used QtDesigner to create the GUI windows I shall be using for my program. </p>
<p>However, when I run the first bit of Python code to get the user interface to appear, I get an error I cannot find a solution to.</p>
<p>Here's the code:</p>
<pre><code>import sys, os
from PyQt4 import QtCore, QtGui, uic
form_class = uic.loadUiType("HomeScreen.ui") [0]
</code></pre>
<p>All I am trying to do with this is to load a GUI, which is called 'HomeScreen.ui'.</p>
<p>Upon running this code, the python shell returns an error saying:</p>
<pre><code>from PyQt4 import QtCore, QtGui, uic
ImportError: No module named 'PyQt4'
</code></pre>
<p>I have Python 3.5 installed, as well as PyQt v4.11.4 and PyQt5.6.</p>
<p>'QtGui' and 'QtCore' are both saved in a folder called PyQt4 (and PyQt5), which are both stored in the path:
<code>C:\Program Files (x86)\Python\Python35-32\Lib\site-packages</code></p>
<p>I believe this is where third party modules are supposed to be stored, but Python can never find the module and I repeatedly get the same error message.</p>
<p>Any help would be appreciated.</p>
 | 1 | 2016-08-31T11:05:45Z | 39,263,674 | <p>I also work on Windows with Python 3; just download and run the installer executable to install PyQt:</p>
<p><a href="https://sourceforge.net/projects/pyqt/files/PyQt5/PyQt-5.6/" rel="nofollow">https://sourceforge.net/projects/pyqt/files/PyQt5/PyQt-5.6/</a></p>
<p>It works well in my environment.</p>
| 0 | 2016-09-01T05:57:20Z | [
"python",
"python-3.x",
"pyqt",
"pyqt4"
] |
Import packages from current project directory in VScode | 39,248,054 | <p>When I build or debug a particular file in my Python project (which imports a user defined package) I get an import error. How can I solve this problem?</p>
<p>test.py</p>
<pre><code>def sum(a,b):
return a+b
</code></pre>
<p>test2.py</p>
<pre><code>from test import sum
sum(3,4)
</code></pre>
<p>The above code will give an import error <code>cannot import test</code>.</p>
<p>Directory tree</p>
<pre><code>├── graphs
│   ├── Dijkstra's\ Algorithm.py
│   ├── Floyd\ Warshall\ DP.py
│   ├── Kruskal's\ algorithm.py
│   ├── Prim's\ Algoritm.py
│   ├── __init__.py
│   └── graph.py
└── heap
    ├── __init__.py
    ├── heap.py
    └── priority_queue.py
</code></pre>
<p>Trying to import in graphs;</p>
<pre><code>from heap.heap import Heap
</code></pre>
| 0 | 2016-08-31T11:11:47Z | 39,249,192 | <p>About the <code>heap</code> file, make sure that you are running on the project root folder.</p>
<p>If these <code>test.py</code> files are running on the same folder, try to add a <code>__init__.py</code> empty file on this folder. </p>
<blockquote>
<p>The <code>__init__.py</code> files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later (deeper) on the module search path. In the simplest case, <code>__init__.py</code> can just be an empty file, but it can also execute initialisation code for the package or set the <code>__all__</code> variable, described later.</p>
</blockquote>
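<p>A runnable sketch of the same idea: create the <code>heap</code> package layout in a throwaway directory, put the project root on <code>sys.path</code> (which is what running Python from the root folder does), and the dotted import resolves. The <code>Heap</code> class body here is only a placeholder:</p>

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()                 # stand-in for the project root
pkg = os.path.join(root, "heap")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # marks the dir as a package
with open(os.path.join(pkg, "heap.py"), "w") as f:
    f.write("class Heap:\n    pass\n")    # placeholder module contents

sys.path.insert(0, root)                  # equivalent of launching from the root
from heap.heap import Heap
print(Heap.__name__)  # Heap
```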
| 0 | 2016-08-31T12:05:30Z | [
"python",
"vscode"
] |
Extract part of file and print using python | 39,248,101 | <p>I want to extract part of the file content, which starts with ("head"
and ends with ("arraydat".</p>
<p>content of file:</p>
<pre><code>(record
( "head"
(record
( "pname" "C16D")
( "ptype" "probe")
( "sn" "11224")
( "rev" "1")
( "opname" "Kaji")
( "comm"
[ "" ]
)
( "numel" 192)
( "mux" 1)
( "freq" 3400000)
( "date" 63602497416.093)
( "focus" 0.066)
( "probID" 574)
( "te" 0)
( "therm" 0)
( "bipolar" "N/A")
( "maker" "YMS")
( "PartNum" 5418916)
( "numrow" 1)
)
)
( "arraydat"
(record
( "FL6" 1625283.947393933)
( "FH6" 4932875.254089763)
( "FL20" 1283607.261269079)
( "FH20" 5673248.882271254)
( "Fc" 3279079.600741847)
( "BW" 100.8695033187829)
( "PW6" 3.316821935120381E-007)
( "PW20" 9.740000000000003E-007)
( "PW30" 1.456E-006)
( "PW40" 2.628000000000001E-006)
( "LG" -46.35823409318354)
( "TOF" -1.363659434369523E-008)
)
</code></pre>
<p>I need to extract the content of a file after "head" and before "arraydat".
I have tried this code but had no luck.</p>
<pre><code>import re
with open('sample.txt','r') as new_file:
data = new_file.read()
pattern = re.compile(r'\s[(]\s"head"[\s\S]*?\s[(]\s"arraydat"')
stringtext = re.findall(pattern, data)
print(stringtext)
</code></pre>
<p>Output should look like this:</p>
<pre><code> ( "pname" "C16D")
( "ptype" "probe")
( "sn" "11224")
( "rev" "1")
( "opname" "Kaji")
( "comm"
[ "" ]
)
( "numel" 192)
( "mux" 1)
( "freq" 3400000)
( "date" 63602497416.093)
( "focus" 0.066)
( "probID" 574)
( "te" 0)
( "therm" 0)
( "bipolar" "N/A")
( "maker" "YMS")
( "PartNum" 5418916)
( "numrow" 1)
)
)
</code></pre>
| 1 | 2016-08-31T11:13:43Z | 39,248,349 | <pre><code>data = []
read = False
for line in open('test.dat'):
if line.strip() == '( "head"':
read = True
continue
elif line.strip() == '( "arraydat"':
read = False
if read:
data.append(line.rstrip())
print('\n'.join(data))
</code></pre>
| 1 | 2016-08-31T11:25:09Z | [
"python"
] |
Extract part of file and print using python | 39,248,101 | <p>I want to extract part of the file content, which starts with ("head"
and ends with ("arraydat".</p>
<p>content of file:</p>
<pre><code>(record
( "head"
(record
( "pname" "C16D")
( "ptype" "probe")
( "sn" "11224")
( "rev" "1")
( "opname" "Kaji")
( "comm"
[ "" ]
)
( "numel" 192)
( "mux" 1)
( "freq" 3400000)
( "date" 63602497416.093)
( "focus" 0.066)
( "probID" 574)
( "te" 0)
( "therm" 0)
( "bipolar" "N/A")
( "maker" "YMS")
( "PartNum" 5418916)
( "numrow" 1)
)
)
( "arraydat"
(record
( "FL6" 1625283.947393933)
( "FH6" 4932875.254089763)
( "FL20" 1283607.261269079)
( "FH20" 5673248.882271254)
( "Fc" 3279079.600741847)
( "BW" 100.8695033187829)
( "PW6" 3.316821935120381E-007)
( "PW20" 9.740000000000003E-007)
( "PW30" 1.456E-006)
( "PW40" 2.628000000000001E-006)
( "LG" -46.35823409318354)
( "TOF" -1.363659434369523E-008)
)
</code></pre>
<p>I need to extract the content of a file after "head" and before "arraydat".
I have tried this code but had no luck.</p>
<pre><code>import re
with open('sample.txt','r') as new_file:
data = new_file.read()
pattern = re.compile(r'\s[(]\s"head"[\s\S]*?\s[(]\s"arraydat"')
stringtext = re.findall(pattern, data)
print(stringtext)
</code></pre>
<p>Output should look like this:</p>
<pre><code> ( "pname" "C16D")
( "ptype" "probe")
( "sn" "11224")
( "rev" "1")
( "opname" "Kaji")
( "comm"
[ "" ]
)
( "numel" 192)
( "mux" 1)
( "freq" 3400000)
( "date" 63602497416.093)
( "focus" 0.066)
( "probID" 574)
( "te" 0)
( "therm" 0)
( "bipolar" "N/A")
( "maker" "YMS")
( "PartNum" 5418916)
( "numrow" 1)
)
)
</code></pre>
| 1 | 2016-08-31T11:13:43Z | 39,248,713 | <p>Please check the below code:</p>
<pre><code>import re
import csv
data_list= []
record_data = False
comm_line = False
#Open file read data.
#Save one set of data between 'head' and 'arraydat' in single dict
#Append that dict to data list
with open('sample.txt','r') as new_file:
for line in new_file:
if '( "head"' in line:
record_data = True
data_dict = {}
continue
if '( "arraydat"' in line:
data_list.append(data_dict)
record_data = False
#Data from comm line
if "comm" in line and record_data:
comm_line = True
nline = ''
if comm_line:
n = re.match(r'\s*\)\s*$',line)
if n is not None:
comm_line=False
nline = nline + line.strip('\r\n')
line=re.sub(' +',' ',nline)
n = None
else:
nline = nline + line.strip('\r\n')
            continue
if record_data:
line = line.strip()
if line.startswith('(') and line.endswith(')'):
line = line.strip(')(').strip()
#line = re.sub('\"',"",line)
#print line
m = re.match(r'\"(\w+)\"\s+\"*([\w+\W+]+)\"*',line)
if m is not None:
k = m.group(1).strip('"')
v = m.group(2).strip('"')
data_dict[k]=v
print data_list
#Write it to csv
with open('output.csv','wb') as out_file:
writer = csv.DictWriter(out_file, fieldnames=data_list[0].keys())
writer.writeheader()
for data in data_list:
writer.writerow(data)
</code></pre>
<p><strong>Output :</strong></p>
<p><em>On console:</em></p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python b.py
[{'therm': '0', 'probID': '574', 'PartNum': '5418916', 'numrow': '1', 'rev': '1'
, 'ptype': 'probe', 'mux': '1', 'bipolar': 'N/A', 'maker': 'YMS', 'comm': '[ ""
]', 'sn': '11224', 'numel': '192', 'focus': '0.066', 'date': '63602497416.093',
'pname': 'C16D', 'opname': 'Kaji', 'te': '0', 'freq': '3400000'}, {'therm': '0',
'probID': '574', 'PartNum': '5418916', 'numrow': '1', 'rev': '1', 'ptype': 'pro
be', 'mux': '1', 'bipolar': 'N/A', 'maker': 'YMS', 'comm': '[ "" ]', 'sn': '1122
4', 'numel': '192', 'focus': '0.066', 'date': '63602497416.093', 'pname': 'C16D'
, 'opname': 'Dinesh', 'te': '0', 'freq': '3400000'}]
C:\Users\dinesh_pundkar\Desktop>
</code></pre>
<p><em>Content of <strong>output.csv</strong>:</em></p>
<pre><code>therm,probID,PartNum,numrow,rev,ptype,mux,bipolar,maker,comm,sn,numel,focus,date,pname,opname,te,freq
0,574,5418916,1,1,probe,1,N/A,YMS,"[ """" ]",11224,192,0.066,63602497416.093,C16D,Kaji,0,3400000
0,574,5418916,1,1,probe,1,N/A,YMS,"[ """" ]",11224,192,0.066,63602497416.093,C16D,Dinesh,0,3400000
</code></pre>
| 0 | 2016-08-31T11:42:29Z | [
"python"
] |
Extract part of file and print using python | 39,248,101 | <p>I want to extract part of the file content, which starts with ("head"
and ends with ("arraydat".</p>
<p>content of file:</p>
<pre><code>(record
( "head"
(record
( "pname" "C16D")
( "ptype" "probe")
( "sn" "11224")
( "rev" "1")
( "opname" "Kaji")
( "comm"
[ "" ]
)
( "numel" 192)
( "mux" 1)
( "freq" 3400000)
( "date" 63602497416.093)
( "focus" 0.066)
( "probID" 574)
( "te" 0)
( "therm" 0)
( "bipolar" "N/A")
( "maker" "YMS")
( "PartNum" 5418916)
( "numrow" 1)
)
)
( "arraydat"
(record
( "FL6" 1625283.947393933)
( "FH6" 4932875.254089763)
( "FL20" 1283607.261269079)
( "FH20" 5673248.882271254)
( "Fc" 3279079.600741847)
( "BW" 100.8695033187829)
( "PW6" 3.316821935120381E-007)
( "PW20" 9.740000000000003E-007)
( "PW30" 1.456E-006)
( "PW40" 2.628000000000001E-006)
( "LG" -46.35823409318354)
( "TOF" -1.363659434369523E-008)
)
</code></pre>
<p>I need to extract the content of a file after "head" and before "arraydat".
I have tried this code but had no luck.</p>
<pre><code>import re
with open('sample.txt','r') as new_file:
data = new_file.read()
pattern = re.compile(r'\s[(]\s"head"[\s\S]*?\s[(]\s"arraydat"')
stringtext = re.findall(pattern, data)
print(stringtext)
</code></pre>
<p>Output should look like this:</p>
<pre><code> ( "pname" "C16D")
( "ptype" "probe")
( "sn" "11224")
( "rev" "1")
( "opname" "Kaji")
( "comm"
[ "" ]
)
( "numel" 192)
( "mux" 1)
( "freq" 3400000)
( "date" 63602497416.093)
( "focus" 0.066)
( "probID" 574)
( "te" 0)
( "therm" 0)
( "bipolar" "N/A")
( "maker" "YMS")
( "PartNum" 5418916)
( "numrow" 1)
)
)
</code></pre>
| 1 | 2016-08-31T11:13:43Z | 39,248,823 | <pre><code>try:
with open('sample.txt', 'r') as fileObject:
fileData = fileObject.read().split("\n")
fileDataClean = [data.strip() for data in fileData]
startIndex, stopIndex = fileDataClean.index('( "head"'), fileDataClean.index('( "arraydat"')
        resultData = "\n".join(fileData[startIndex:stopIndex])
print(resultData)
except Exception as e:
print(e)
</code></pre>
<p><strong><em>I hope this helps.</em></strong></p>
| 0 | 2016-08-31T11:48:04Z | [
"python"
] |
Factor an integer to something as close to a square as possible | 39,248,245 | <p>I have a function that reads a file byte by byte and converts it to a floating point array. It also returns the number of elements in said array.
Now I want to reshape the array into a 2D array with the shape being as close to a square as possible.</p>
<p>As an example let's look at the number 800:</p>
<p><code>sqrt(800) = 28.427...</code></p>
<p>Now I can figure out by trial and error that <code>25*32</code> would be the solution I am looking for.
I do this by decrementing the <code>sqrt</code> (rounded to the nearest integer) if the result of multiplying the integers is too high, or incrementing them if the result is too low.</p>
<p>I know about algorithms that do this for primes, but this is not a requirement for me. My problem is that even the brute force method I implemented will sometimes get stuck and never finish (which is the reason for my arbitrary limit of iterations):</p>
<pre><code>import math
def factor_int(n):
nsqrt = math.ceil(math.sqrt(n))
factors = [nsqrt, nsqrt]
cd = 0
result = factors[0] * factors[1]
ii = 0
while (result != n or ii > 10000):
if(result > n):
factors[cd] -= 1
else:
factors[cd] += 1
result = factors[0] * factors[1]
print factors, result
cd = 1 - cd
ii += 1
return "resulting factors: {0}".format(factors)
input = 80000
factors = factor_int(input)
</code></pre>
<p>Using the script above, the output gets stuck in a loop printing:</p>
<pre><code>[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
</code></pre>
<p>But I wonder if there are more efficient solutions for this? Certainly I can't be the first to want to do something like this.</p>
| 1 | 2016-08-31T11:20:43Z | 39,248,503 | <pre><code>def factor_int(n):
nsqrt = math.ceil(math.sqrt(n))
solution = False
val = nsqrt
while not solution:
val2 = int(n/val)
if val2 * val == float(n):
solution = True
else:
val-=1
return val, val2, n
</code></pre>
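<p>For reference, a Python 3 sketch of the same search that uses the modulo test <code>n % val == 0</code> instead of the float comparison; for a positive <code>n</code> it counts down from <code>ceil(sqrt(n))</code> and always terminates, at worst at 1 when <code>n</code> is prime:</p>

```python
import math

def factor_int(n):
    # largest divisor of n that is <= ceil(sqrt(n))
    val = math.ceil(math.sqrt(n))
    while n % val != 0:
        val -= 1
    return val, n // val

print(factor_int(800))    # (25, 32)
print(factor_int(80000))  # (250, 320)
```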
<p>try it with:</p>
<pre><code>for x in xrange(10, 20):
print factor_int(x)
</code></pre>
| 1 | 2016-08-31T11:33:26Z | [
"python",
"algorithm",
"cython",
"factoring"
] |
Factor an integer to something as close to a square as possible | 39,248,245 | <p>I have a function that reads a file byte by byte and converts it to a floating point array. It also returns the number of elements in said array.
Now I want to reshape the array into a 2D array with the shape being as close to a square as possible.</p>
<p>As an example let's look at the number 800:</p>
<p><code>sqrt(800) = 28.427...</code></p>
<p>Now I can figure out by trial and error that <code>25*32</code> would be the solution I am looking for.
I do this by decrementing the <code>sqrt</code> (rounded to the nearest integer) if the result of multiplying the integers is too high, or incrementing them if the result is too low.</p>
<p>I know about algorithms that do this for primes, but this is not a requirement for me. My problem is that even the brute force method I implemented will sometimes get stuck and never finish (which is the reason for my arbitrary limit of iterations):</p>
<pre><code>import math
def factor_int(n):
nsqrt = math.ceil(math.sqrt(n))
factors = [nsqrt, nsqrt]
cd = 0
result = factors[0] * factors[1]
ii = 0
while (result != n or ii > 10000):
if(result > n):
factors[cd] -= 1
else:
factors[cd] += 1
result = factors[0] * factors[1]
print factors, result
cd = 1 - cd
ii += 1
return "resulting factors: {0}".format(factors)
input = 80000
factors = factor_int(input)
</code></pre>
<p>Using the script above, the output gets stuck in a loop printing:</p>
<pre><code>[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
</code></pre>
<p>But I wonder if there are more efficient solutions for this? Certainly I can't be the first to want to do something like this.</p>
| 1 | 2016-08-31T11:20:43Z | 39,248,799 | <p>Interesting question, here's a possible solution to your problem:</p>
<pre><code>import math
def min_dist(a, b):
dist = []
for Pa in a:
for Pb in b:
d = math.sqrt(
math.pow(Pa[0] - Pb[0], 2) + math.pow(Pa[1] - Pb[1], 2))
dist.append([d, Pa])
return sorted(dist, key=lambda x: x[0])
def get_factors(N):
if N < 1:
return N
N2 = N / 2
NN = math.sqrt(N)
result = []
for a in range(1, N2 + 1):
for b in range(1, N2 + 1):
if N == (a * b):
result.append([a, b])
result = min_dist(result, [[NN, NN]])
if result:
return result[0][1]
else:
return [N, 1]
for i in range(801):
print i, get_factors(i)
</code></pre>
<p>The key of this method is finding the minimum distance to the cartesian point of [math.sqrt(N), math.sqrt(N)] which meets the requirements N=a*b, a&b integers.</p>
| 1 | 2016-08-31T11:47:03Z | [
"python",
"algorithm",
"cython",
"factoring"
] |
Factor an integer to something as close to a square as possible | 39,248,245 | <p>I have a function that reads a file byte by byte and converts it to a floating point array. It also returns the number of elements in said array.
Now I want to reshape the array into a 2D array with the shape being as close to a square as possible.</p>
<p>As an example let's look at the number 800:</p>
<p><code>sqrt(800) = 28.427...</code></p>
<p>Now I can figure out by trial and error that <code>25*32</code> would be the solution I am looking for.
I do this by decrementing the <code>sqrt</code> (rounded to the nearest integer) if the result of multiplying the integers is too high, or incrementing them if the result is too low.</p>
<p>I know about algorithms that do this for primes, but this is not a requirement for me. My problem is that even the brute force method I implemented will sometimes get stuck and never finish (which is the reason for my arbitrary limit of iterations):</p>
<pre><code>import math
def factor_int(n):
nsqrt = math.ceil(math.sqrt(n))
factors = [nsqrt, nsqrt]
cd = 0
result = factors[0] * factors[1]
ii = 0
while (result != n or ii > 10000):
if(result > n):
factors[cd] -= 1
else:
factors[cd] += 1
result = factors[0] * factors[1]
print factors, result
cd = 1 - cd
ii += 1
return "resulting factors: {0}".format(factors)
input = 80000
factors = factor_int(input)
</code></pre>
<p>Using the script above, the output gets stuck in a loop printing:</p>
<pre><code>[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
[274.0, 292.0] 80008.0
[273.0, 292.0] 79716.0
[273.0, 293.0] 79989.0
[274.0, 293.0] 80282.0
</code></pre>
<p>But I wonder if there are more efficient solutions for this? Certainly I can't be the first to want to do something like this.</p>
| 1 | 2016-08-31T11:20:43Z | 39,527,893 | <p>I think the modulus operator is a good fit for this problem:</p>
<pre><code>import math
def factint(n):
pos_n = abs(n)
max_candidate = int(math.sqrt(pos_n))
for candidate in xrange(max_candidate, 0, -1):
if pos_n % candidate == 0:
break
return candidate, n / candidate
</code></pre>
| 0 | 2016-09-16T09:22:02Z | [
"python",
"algorithm",
"cython",
"factoring"
] |
Bypassing intrusive cookie statement with requests library | 39,248,266 | <p>I'm trying to crawl a website using the <code>requests</code> library. However, the particular website I am trying to access (<a href="http://www.vi.nl/matchcenter/vandaag.shtml" rel="nofollow">http://www.vi.nl/matchcenter/vandaag.shtml</a>) has a very intrusive cookie statement.</p>
<p>I am trying to access the website as follows:</p>
<pre><code>from bs4 import BeautifulSoup as soup
import requests
website = r"http://www.vi.nl/matchcenter/vandaag.shtml"
html = requests.get(website, headers={"User-Agent": "Mozilla/5.0"})
htmlsoup = soup(html.text, "html.parser")
</code></pre>
<p>This returns a web page that consists of just the cookie statement with a big button to accept. If you try accessing this page in a browser, you find that pressing the button redirects you to the requested page. How can I do this using <code>requests</code>?</p>
<p>I considered using <code>mechanize.Browser</code> but that seems a pretty roundabout way of doing it.</p>
| 0 | 2016-08-31T11:21:41Z | 39,248,347 | <p>I have found <a href="http://stackoverflow.com/questions/7164679/how-to-send-cookies-in-a-post-request-with-the-python-requests-library">this</a> SO question which asks how to send cookies in a post using requests. The accepted answer states that the latest build of Requests will build CookieJars for you from simple dictionaries. Below is the POC code included in the original answer.</p>
<pre><code>import requests
cookie = {'enwiki_session': '17ab96bd8ffbe8ca58a78657a918558'}
r = requests.post('http://wikipedia.org', cookies=cookie)
</code></pre>
| -1 | 2016-08-31T11:25:04Z | [
"python",
"cookies",
"beautifulsoup",
"python-requests"
] |
Bypassing intrusive cookie statement with requests library | 39,248,266 | <p>I'm trying to crawl a website using the <code>requests</code> library. However, the particular website I am trying to access (<a href="http://www.vi.nl/matchcenter/vandaag.shtml" rel="nofollow">http://www.vi.nl/matchcenter/vandaag.shtml</a>) has a very intrusive cookie statement.</p>
<p>I am trying to access the website as follows:</p>
<pre><code>from bs4 import BeautifulSoup as soup
import requests
website = r"http://www.vi.nl/matchcenter/vandaag.shtml"
html = requests.get(website, headers={"User-Agent": "Mozilla/5.0"})
htmlsoup = soup(html.text, "html.parser")
</code></pre>
<p>This returns a web page that consists of just the cookie statement with a big button to accept. If you try accessing this page in a browser, you find that pressing the button redirects you to the requested page. How can I do this using <code>requests</code>?</p>
<p>I considered using <code>mechanize.Browser</code> but that seems a pretty roundabout way of doing it.</p>
| 0 | 2016-08-31T11:21:41Z | 39,248,697 | <p>Try setting:</p>
<pre><code>cookies = dict(BCPermissionLevel='PERSONAL')
html = requests.get(website, headers={"User-Agent": "Mozilla/5.0"}, cookies=cookies)
</code></pre>
<p>This will bypass the cookie consent page and will take you straight to the page.</p>
<p><strong>Note</strong>: You could find the above by analyzing the JavaScript code that runs on the cookie consent page; it is a bit obfuscated but it should not be difficult. If you run into the same type of problem again, take a look at what kind of cookies the JavaScript that runs on the button's event handler sets.</p>
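<p>To check what would actually be sent, without any network I/O, the request can be prepared and its headers inspected. A small sketch (the <code>BCPermissionLevel</code> value is the consent cookie used above, verified only for this particular site):</p>

```python
import requests

# build the request but do not send it; .prepare() renders the final headers
req = requests.Request(
    "GET",
    "http://www.vi.nl/matchcenter/vandaag.shtml",
    headers={"User-Agent": "Mozilla/5.0"},
    cookies={"BCPermissionLevel": "PERSONAL"},
)
prepared = req.prepare()
print(prepared.headers["Cookie"])  # BCPermissionLevel=PERSONAL
```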
| -1 | 2016-08-31T11:41:36Z | [
"python",
"cookies",
"beautifulsoup",
"python-requests"
] |
Python: Tornado and persistent database connection | 39,248,278 | <p>I am reading <a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.RequestHandler.initialize" rel="nofollow">tornado documentation</a>. I would like to have a persistent connection to the DB (one that stays up for the application's lifetime) and return data from the DB asynchronously. Where is the best place to do this?</p>
<ul>
<li><code>def initialize</code> ?</li>
<li>handler's <code>__init__</code> method?</li>
<li><code>def prepare</code>?</li>
<li>or other place?</li>
</ul>
<p>Could you provide some examples? </p>
| 0 | 2016-08-31T11:22:19Z | 39,248,853 | <p>The simplest thing is just to make the database connection object a module-level global variable. See this example <a href="http://motor.readthedocs.io/en/stable/tutorial-tornado.html#tornado-application-startup-sequence" rel="nofollow">from the Motor documentation</a>:</p>
<pre><code>db = motor.motor_tornado.MotorClient().test_database
application = tornado.web.Application([
(r'/', MainHandler)
], db=db)
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
</code></pre>
<p>RequestHandlers could simply use the global variable directly. Also, passing the database as the db keyword argument to Application makes it available to request handlers in their "settings" dict:</p>
<pre><code>class MainHandler(tornado.web.RequestHandler):
def get(self):
db = self.settings['db']
</code></pre>
<p>This might make it easier to access the database object from RequestHandlers defined in other files.</p>
| 1 | 2016-08-31T11:48:58Z | [
"python",
"database",
"asynchronous",
"tornado"
] |
Reshape dataframe to have the same index as another dataframe | 39,248,331 | <p>I have two dataframes:</p>
<pre><code>dayData
power_comparison final_average_delta_power calculated_power
1 0.0 0.0 0
2 0.0 0.0 0
3 0.0 0.0 0
4 0.0 0.0 0
5 0.0 0.0 0
7 0.0 0.0 0
</code></pre>
<p>and </p>
<pre><code>historicPower
power
0 0.0
1 0.0
2 0.0
3 -1.0
4 0.0
5 1.0
7 0.0
</code></pre>
<p>I'm trying to reindex the <code>historicPower</code> dataframe to have the same shape as the <code>dayData</code> dataframe (so in this example it would looks like):</p>
<pre><code> power
1 0.0
2 0.0
3 -1.0
4 0.0
5 1.0
7 0.0
</code></pre>
<p>The dataframes in reality will be a lot larger with different shapes. </p>
<p>Any help much appreciated.</p>
| 1 | 2016-08-31T11:24:14Z | 39,248,442 | <p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> if <code>index</code> has no duplicates:</p>
<pre><code>historicPower = historicPower.reindex(dayData.index)
print (historicPower)
power
1 0.0
2 0.0
3 -1.0
4 0.0
5 1.0
7 0.0
</code></pre>
| 2 | 2016-08-31T11:30:34Z | [
"python",
"pandas",
"indexing",
"dataframe",
"reindex"
] |
handle http post request with json by python script on iis | 39,248,376 | <p>I have trouble handling an HTTP POST request with JSON as the body of the request.
I am running IIS with Python as the server-side script.</p>
<p>This is the code that makes the request: </p>
<pre><code>var http = new XMLHttpRequest();
var url = "http://myurl.ext/py/script.py";
http.onreadystatechange = function() {
if(http.readyState == 4 && http.status == 200) {
console.log(http.responseText);
}
}
data = {"field":"value", "number":5}
http.open('POST', url, true);
http.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
http.send(JSON.stringify(data));
</code></pre>
<p>On the server side I have: </p>
<pre><code>import cgi
import http.client
print("Content-Type: text/text")
print("")
print(cgi.parse())
print(http.client.HTTPResponse)
</code></pre>
<p>cgi.parse() gives empty string</p>
<p>http.client.HTTPResponse gives empty string</p>
<p>cgi.FieldStorage() gives empty string, but if I submit a form, it returns values of the input fields. </p>
<p>I want to send JSON data in the background to the script and return some processed values as JSON as well. </p>
| 0 | 2016-08-31T11:26:50Z | 39,250,912 | <p>The <a href="https://docs.python.org/3/library/cgi.html#module-cgi" rel="nofollow"><code>cgi</code></a> module is designed primarily with form processing from a POST request, or query string parsing from a GET request, in mind. As such it does not really provide much that might help you process a JSON request.</p>
<p>Keep in mind that all the CGI script does is read data from the process' environment and its standard input. Thus you can just read the body of the POST from <code>sys.stdin</code>:</p>
<pre><code>#!/usr/bin/env python3
import sys
import json
from pprint import pprint
print('Content-Type: text/plain')
print()
try:
data = json.load(sys.stdin)
print("Received:")
pprint(data)
except json.JSONDecodeError as exc:
print('Failed to decode JSON request: {}'.format(exc))
</code></pre>
<p>All this script does is to decode the standard input as JSON and pretty print it back out in the response.</p>
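<p>The decoding step can be exercised without a web server by standing in for <code>stdin</code>. This sketch mirrors what the script sees when the browser posts <code>{"field":"value", "number":5}</code>:</p>

```python
import io
import json

# stand-in for sys.stdin: the raw POST body exactly as the CGI script receives it
fake_stdin = io.StringIO('{"field": "value", "number": 5}')
data = json.load(fake_stdin)
print(data["field"], data["number"])  # value 5
```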
<p>You might be better off looking at something more usable such as <a href="http://flask.pocoo.org/" rel="nofollow"><code>flask</code></a>, <a href="http://bottlepy.org" rel="nofollow"><code>bottle</code></a>, etc.</p>
| 0 | 2016-08-31T13:25:07Z | [
"python",
"json",
"iis",
"post",
"request"
] |
handle http post request with json by python script on iis | 39,248,376 | <p>I have trouble handling an HTTP POST request with JSON as the body of the request.
I am running IIS with Python as the server-side script.</p>
<p>This is the code that makes the request: </p>
<pre><code>var http = new XMLHttpRequest();
var url = "http://myurl.ext/py/script.py";
http.onreadystatechange = function() {
if(http.readyState == 4 && http.status == 200) {
console.log(http.responseText);
}
}
data = {"field":"value", "number":5}
http.open('POST', url, true);
http.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
http.send(JSON.stringify(data));
</code></pre>
<p>On the server side I have: </p>
<pre><code>import cgi
import http.client
print("Content-Type: text/text")
print("")
print(cgi.parse())
print(http.client.HTTPResponse)
</code></pre>
<p>cgi.parse() gives empty string</p>
<p>http.client.HTTPResponse gives empty string</p>
<p>cgi.FieldStorage() gives empty string, but if I submit a form, it returns values of the input fields. </p>
<p>I want to send JSON data in the background to the script and return some processed values as JSON as well. </p>
| 0 | 2016-08-31T11:26:50Z | 39,286,864 | <p>to make it work you have to explicitly tell how much to read.
<br/></p>
<pre><code>import os
import sys

data = ""
length = int(os.environ.get('CONTENT_LENGTH', 0))
if length != 0:
    for i in range(length):
        data += sys.stdin.read(1)
print(data)
</code></pre>
<p>This is what worked for me.</p>
| 0 | 2016-09-02T07:38:36Z | [
"python",
"json",
"iis",
"post",
"request"
] |
unable to plot two columns from DataFrame after using pandas.read_csv | 39,248,380 | <p>I'm trying to plot two columns that have been read in using pandas.read_csv, the code:-</p>
<pre><code>from pandas import read_csv
from matplotlib import pyplot
data = read_csv('Stats.csv', sep=',')
#data = data.astype(float)
data.plot(x = 1, y = 2)
pyplot.show()
</code></pre>
<p>the csv file snippet:-</p>
<pre><code>1,a4,2000,125,1.9,2.8,25.6
2,a4,7000,125,1.7,2.3,18
3,a2,7000,30,0.84,1.1,8.11
4,a2,5000,30,0.83,1.05,6.87
5,a2,4000,45,2.8,3.48,16.54
</code></pre>
<p>when x = 1 and y = 2 it will plot the second column against the fourth not the third as I expected</p>
<p>When I try to plot the third column against the fourth (x = 2, y = 3) it plots the third against the fifth</p>
<p>I'm trying to plot the third against the fourth right now, when both x and y = 2 it will plot the third column against the fourth but the values are incorrect, what am I missing? is the read_csv changing the order of the columns?</p>
| 2 | 2016-08-31T11:27:01Z | 39,249,150 | <p>Your input CSV has no headers, which doesn't help clarity (see Murali's comment). But I think the problem stems from the column that contains a4 and a2.</p>
<p>This column can be used for the x axis but not for the y axis (non-numeric data on an x axis appears to just be read in order), hence the count offset. So y "reads over" the column at position 1 (0-indexed), but x does not.</p>
<p>Conducting</p>
<pre><code>data.plot(x=1, y=0)
</code></pre>
<p>and</p>
<pre><code>data.plot(x=0, y=1)
</code></pre>
<p>and inspecting the axis helps visualise what's going on.</p>
<p>Bizarrely, this means you can do</p>
<pre><code>data.plot(x=1, y=1)
</code></pre>
<p>to get what you want.</p>
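<p>An alternative that sidesteps the positional ambiguity entirely is to name the columns when reading the headerless file, then plot by label. This is a sketch assuming pandas is available; the column names below are invented for the example and are not from the question:</p>

```python
import io

import pandas as pd

# The sample rows from the question, which have no header line.
raw = """1,a4,2000,125,1.9,2.8,25.6
2,a4,7000,125,1.7,2.3,18
3,a2,7000,30,0.84,1.1,8.11
"""

# Naming the columns up front removes any doubt about which integer
# position maps to which column (the names here are hypothetical).
cols = ['idx', 'code', 'rpm', 'load', 'a', 'b', 'c']
df = pd.read_csv(io.StringIO(raw), header=None, names=cols)

# Now plotting the third column against the fourth is unambiguous:
# df.plot(x='rpm', y='load')
print(df[['rpm', 'load']].values.tolist())
```

Plotting by label (`x='rpm', y='load'`) behaves the same regardless of whether non-numeric columns are present.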
| 0 | 2016-08-31T12:03:31Z | [
"python",
"csv",
"pandas"
] |
Difference between Queue and Sets in Python | 39,248,452 | <p>Suppose there are multiple threads, one function that adds a value to a collection and another function that takes that value. What would the difference be between:</p>
<pre><code>import queue

# named 'scraped' so the container is not shadowed by the functions
scraped = queue.Queue()

def scrape():
    scraped.put('example')

def send():
    example = scraped.get()
    print(example)


scraped = set()

def scrape():
    scraped.add('example')

def send():
    example = scraped.pop()
    print(example)
</code></pre>
<p>Why do people use the queue module (which is 170-180 lines, with if conditions slowing the process) for this situation, when they could use sets, which also give them the advantage of duplicate filtering?</p>
| 1 | 2016-08-31T11:30:55Z | 39,248,636 | <p><code>Queues</code> maintain ordering of possibly non-unique elements. <code>Sets</code>, on the other hand, do not maintain ordering and may not contain duplicates.</p>
<p>In your case you may need to keep a record of each thing scraped and/or the relative order in which it was scraped. In that case, use <code>queues</code>. If you just want a list of the unique things you scraped, and you don't care about the relative order in which you scraped them, use <code>sets</code>. </p>
<p>As <code>@mata</code> points out, a <code>queue</code> should be used if multiple threads are producing and consuming to/from it. <code>Queues</code> implement the blocking functionality needed to work with producer/consumer <code>threads</code>. <code>Queues</code> are thread-safe, <code>sets</code> are not. </p>
<p>In this example from the docs:</p>
<pre><code>from queue import Queue
from threading import Thread

def worker():
while True:
item = q.get()
do_work(item)
q.task_done()
q = Queue()
for i in range(num_worker_threads):
t = Thread(target=worker)
t.daemon = True
t.start()
for item in source():
q.put(item)
q.join() # block until all tasks are done
</code></pre>
<p><code>get</code> in the consumer thread (i.e. <code>worker</code>) blocks until there is something in the <code>queue</code> to get, <code>join</code> in the producer thread blocks until each item that it put into the <code>queue</code> is consumed, and <code>task_done</code> in the consumer thread tells the queue that the item it got has been consumed. </p>
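<p>The producer/consumer pattern described above can be sketched as a small self-contained Python 3 example (the doubling of each item and the three-worker count are arbitrary choices for illustration); a <code>None</code> sentinel is one common way to tell workers to exit:</p>

```python
import queue
import threading

results = []
results_lock = threading.Lock()
q = queue.Queue()


def worker():
    while True:
        item = q.get()          # blocks until something is in the queue
        if item is None:        # sentinel: tell this worker to exit
            q.task_done()
            break
        with results_lock:
            results.append(item * 2)
        q.task_done()           # tell the queue this item is consumed


threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

for item in range(5):
    q.put(item)

q.join()                        # blocks until every put item is processed

for _ in threads:               # one sentinel per worker shuts them down
    q.put(None)
for t in threads:
    t.join()

print(sorted(results))
```

Because <code>get</code>, <code>put</code>, <code>task_done</code>, and <code>join</code> handle the locking internally, no extra synchronization is needed for the queue itself; the lock here only guards the shared results list.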
| 4 | 2016-08-31T11:38:56Z | [
"python",
"multithreading",
"set",
"queue"
] |
KeyboardInterrupt does not work after a cplex model is solved | 39,248,477 | <p>I know there is a bug with KeyboardInterrupts in Python with multiprocessing tasks, but I also know there are some workarounds. Here I can't figure out a solution, because the threads are handled inside the cplex package, which I cannot (and do not want to) change.</p>
<p>Here is a minimal example:</p>
<pre><code>def test_interupt():
"""loops until Ctrl-C is pressed"""
i = 0
try:
while True:
i+=1
except KeyboardInterrupt:
print 'interrupted at i='+str(i)
def solve_dummy_cplex_problem():
"""solves the dummy optimization problem max{x|x<42}"""
import cplex
c = cplex.Cplex()
c.objective.set_sense(c.objective.sense.maximize)
c.variables.add(names=['x'], types=[c.variables.type.continuous])
c.set_problem_type(c.problem_type.LP)
c.linear_constraints.add(rhs=[42], senses='L', names=['cons1'])
c.objective.set_linear( [(0,1)] )
c.linear_constraints.set_coefficients([(0,0,1)])
c.solve()
print c.solution.get_values(0)
test_interupt()
solve_dummy_cplex_problem()
test_interupt()
</code></pre>
<p>When I run this code, I can interrupt the first loop, but not the second one. Once cplex has been called (and presumably multithreaded jobs have been started, but they should already be finished when I hit Ctrl-C for the second time), I get '^C' printed on the screen, but I cannot interrupt the second loop.</p>
<p>Note, however, that the problem does not appear when I type the same calls at a prompt:</p>
<pre><code>In [1]: test_interupt()
^Cinterrupted at i=213655938
In [2]: solve_dummy_cplex_problem()
Freeing MIP data.
Tried aggregator 1 time.
LP Presolve eliminated 1 rows and 1 columns.
All rows and columns eliminated.
Presolve time = 0.00 sec. (0.00 ticks)
42.0
In [3]: test_interupt()
^Cinterrupted at i=35459170
</code></pre>
<p>So, how do I get the same behaviour as at a prompt, but in a script that calls both cplex and the to-be-interrupted loop? Any ideas?</p>
| 1 | 2016-08-31T11:32:27Z | 39,261,525 | <p>I think you may have found a bug in the CPLEX Python API. I'm not sure yet, but I think the default signal handler is not being restored correctly there (i.e., it's corrupted once you're done with CPLEX).</p>
<p>This is an (ugly?) workaround for your specific question:</p>
<pre><code>import signal
def test_interupt(handler):
"""loops until Ctrl-C is pressed"""
signal.signal(signal.SIGINT, handler)
i = 0
try:
while True:
i+=1
except KeyboardInterrupt:
print 'interrupted at i='+str(i)
def solve_dummy_cplex_problem():
"""solves the dummy optimization problem max{x|x<42}"""
import cplex
c = cplex.Cplex()
c.objective.set_sense(c.objective.sense.maximize)
c.variables.add(names=['x'], types=[c.variables.type.continuous])
c.set_problem_type(c.problem_type.LP)
c.linear_constraints.add(rhs=[42], senses='L', names=['cons1'])
c.objective.set_linear( [(0,1)] )
c.linear_constraints.set_coefficients([(0,0,1)])
c.solve()
print c.solution.get_values(0)
defaulthandler = signal.getsignal(signal.SIGINT)
test_interupt(defaulthandler)
solve_dummy_cplex_problem()
test_interupt(defaulthandler)
</code></pre>
<p>This was inspired by <a href="http://bryceboe.com/2010/08/26/python-multiprocessing-and-keyboardinterrupt/" rel="nofollow">this</a> blog post, which may actually answer your larger question (how to deal with KeyboardInterrupt when using multiprocessing).</p>
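<p>The core of the workaround, stripped of the CPLEX-specific parts, is just "save the SIGINT handler before calling into the library, restore it afterwards". Here is a minimal sketch of that pattern; the SIG_IGN line stands in for the library call that clobbers the handler (it is a simulation, not what CPLEX actually does):</p>

```python
import signal

# Save the current SIGINT handler before calling into the library...
saved = signal.getsignal(signal.SIGINT)

# ...the library call would go here; simulate it clobbering the handler
# by installing SIG_IGN (ignore Ctrl-C)...
signal.signal(signal.SIGINT, signal.SIG_IGN)

# ...and restore the saved handler afterwards so Ctrl-C works again.
signal.signal(signal.SIGINT, saved)

print(signal.getsignal(signal.SIGINT) is saved)
```

Note that <code>signal.signal</code> must be called from the main thread; wrapping the save/restore around the offending call (here, <code>c.solve()</code>) keeps the rest of the script interruptible.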
| 0 | 2016-09-01T01:48:22Z | [
"python",
"multithreading",
"cplex",
"keyboardinterrupt"
] |