title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
findAll doesn't work, nested tags | 39,017,243 | <p>I'm parsing through <a href="https://www.wired.com/2016/08/live-debate-whats-right-kind-intersection" rel="nofollow">this</a> page. I need to get text content - which is located in <code>p</code> tags. The general structure of the page is the following:</p>
<pre><code><html>
 <body>
  <article itemprop="articleBody">
   <div...>
    <div...>
     <figure>
      <span..></span>
      <p>THE TEXT</p>
     </figure>
    </div>
   </div>
  </article>
 </body>
</html>
</code></pre>
<p>So the <code>p</code> is not a direct child of <code>article</code> but it is still inside, and <code>findAll</code> should be able to find it. But it doesn't.</p>
<pre><code>articleBody=soupArticle.find("article", {"itemprop":"articleBody"})
textList=articleBody.findAll("p")
print(len(textList)) #gives 0
</code></pre>
<p>What am I doing wrong here?</p>
| -1 | 2016-08-18T11:38:04Z | 39,018,304 | <p>This should get you started: </p>
<pre><code>from bs4 import BeautifulSoup
import mechanize
url = "https://www.wired.com/2016/08/live-debate-whats-right-kind-intersection"
br = mechanize.Browser()
response = br.open(url)
soup = BeautifulSoup(response, 'html.parser')
media = []
for x in soup.findAll("script", {"type": "text/javascript"}):
    media.append(x.get_text().split("*/"))
med = media[4][1].split("<p>")
strin = []
for i, element in enumerate(med):
    strin.append("")
    for char in element:
        strin[i] += char
        if char == "<":
            break
for text in strin:
    print text
</code></pre>
<p>You can encode the texts in 'utf-8' if you want.</p>
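<p>A note on the premise: <code>findAll</code> does search all descendants, not only direct children, so the nesting is not the problem; the article text evidently isn't present as plain <code>p</code> tags in the page's static HTML, which is why this answer digs it out of the <code>script</code> tags instead. Descendant search itself can be sanity-checked with the standard library alone (ElementTree's <code>iter</code> plays the role of <code>findAll</code> here):</p>

```python
import xml.etree.ElementTree as ET

# A self-contained copy of the structure from the question.
xml = ("<article>"
       "<div><div><figure><span/><p>THE TEXT</p></figure></div></div>"
       "</article>")
root = ET.fromstring(xml)

# iter() walks ALL descendants, not just direct children --
# the same search scope BeautifulSoup's findAll("p") uses.
paragraphs = [p.text for p in root.iter("p")]
print(paragraphs)  # ['THE TEXT']
```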
| 0 | 2016-08-18T12:31:10Z | [
"python",
"web-scraping",
"beautifulsoup"
] |
DoxyPy - Member (variable) of namespace is not documented | 39,017,317 | <p>I get the error message <code>warning: Member constant1 (variable) of namespace <file_name> is not documented.</code> for my doxygen (doxypy) documentation. I have documented the file and all functions and classes. But I also have some additional constants in this file which I don't know how to document, and for these I get the warning. The file looks like this:</p>
<pre><code>"""\file
\brief <this is what the file contains>
\author <author>
"""
import numpy as np
constant1 = 24
constant2 = 48
def someFunction():
""" Doxygen documentation of someFunction() """
<body>
</code></pre>
<p>In this example, how do I document <code>constant1</code> and <code>constant2</code>, so that the error message goes away?</p>
| 0 | 2016-08-18T11:41:47Z | 39,018,888 | <p>@albert was right. It works when putting a <code>##</code> in front of the variables. I didn't find a solution using the <code>"""</code> syntax.</p>
<pre><code>"""\file
\brief <this is what the file contains>
\author <author>
"""
import numpy as np
## Doxygen documentation for constant1
constant1 = 24
## Doxygen documentation for constant2
constant2 = 48
def someFunction():
""" Doxygen documentation of someFunction() """
<body>
</code></pre>
| 0 | 2016-08-18T12:57:30Z | [
"python",
"doxygen"
] |
DoxyPy - Member (variable) of namespace is not documented | 39,017,317 | <p>I get the error message <code>warning: Member constant1 (variable) of namespace <file_name> is not documented.</code> for my doxygen (doxypy) documentation. I have documented the file and all functions and classes. But I also have some additional constants in this file which I don't know how to document, and for these I get the warning. The file looks like this:</p>
<pre><code>"""\file
\brief <this is what the file contains>
\author <author>
"""
import numpy as np
constant1 = 24
constant2 = 48
def someFunction():
""" Doxygen documentation of someFunction() """
<body>
</code></pre>
<p>In this example, how do I document <code>constant1</code> and <code>constant2</code>, so that the error message goes away?</p>
| 0 | 2016-08-18T11:41:47Z | 39,019,321 | <p>If you need (for any reason) to have more documentation for those members, you can also split the text across several lines. It will be treated as a <code>"""</code> block.</p>
<pre><code>## Doxygen documentation for constant1. The way which you already found.
constant1 = 24
## Doxygen documentation for constant2.
# If you need to write a lot about constant2, you can split the text into
# several lines in this way.
constant2 = 48
</code></pre>
| 0 | 2016-08-18T13:18:56Z | [
"python",
"doxygen"
] |
array slices (checkio) , python 3 | 39,017,342 | <pre><code>array =[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
def checkio(array):
    if len(array)>0:
        (sum (array[0:len(array):2])) * array [-1]
    else :
        return 0
</code></pre>
<p>The result is always 0. What's the problem?</p>
| -2 | 2016-08-18T11:43:14Z | 39,017,836 | <pre><code>array =[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
def checkio(array):
    if len(array)>0:
        return (sum (array[0:len(array):2])) * array[-1]
    else :
        return 0
</code></pre>
<p>You're missing a return statement for your <code>if</code> block. </p>
| 0 | 2016-08-18T12:07:44Z | [
"python"
] |
to_csv append mode is not appending to next new line | 39,017,406 | <p>I have a csv called <code>test.csv</code> that looks like:</p>
<pre><code>accuracy threshold trainingLabels
abc 0.506 15000
eew 18.12 15000
</code></pre>
<p>And then a dataframe called <code>summaryDF</code> that looks like:</p>
<pre><code>accuracy threshold trainingLabels
def 0.116 342
dja 0.121 1271
</code></pre>
<p>I am doing:</p>
<pre><code>try:
    if os.stat('test.csv').st_size > 0:
        summaryDF.to_csv(path_or_buf=f, encoding='utf-8', mode='a', header=False)
        f.close()
    else:
        print "empty file"
        with open('test.csv', 'w+') as f:
            summaryDF.to_csv(path_or_buf=f, encoding='utf-8')
            f.close()
except OSError:
    print "No file"
    with open('test.csv', 'w+') as f:
        summaryDF.to_csv(path_or_buf=f, encoding='utf-8')
        f.close()
</code></pre>
<p>Because I want my file to be:</p>
<pre><code>accuracy threshold trainingLabels
abc 0.506 15000
eew 18.12 15000
def 0.116 342
dja 0.121 1271
</code></pre>
<p>Instead, it is:</p>
<pre><code>accuracy threshold trainingLabels
abc 0.506 15000
eew 18.12 15000def 0.116 342
dja 0.121 1271
</code></pre>
<p>How can I solve this? I am guessing I should use a CSV writer instead of <code>to_csv</code>, but clearly the append mode is not starting on a new line after the last line of the existing file.</p>
| 0 | 2016-08-18T11:46:08Z | 39,017,897 | <p>Are you using the pandas package? You do not mention that anywhere.</p>
<p>Pandas does not automatically append a new line, and I am not sure how to force it. But you can just do:</p>
<pre><code>f.write('\n')
summaryDF.to_csv(path_or_buf=f, mode='a', ...)
</code></pre>
<hr>
<p>An unrelated bug in your code:</p>
<p>You seem to have a global file object called <code>f</code>.</p>
<p>When you do this:</p>
<pre><code>with open('test.csv', 'w+') as f:
    ...
    f.close()
</code></pre>
<p>The file that you are closing there is the file that you just opened in the <code>with</code> block. You are not closing the global file <code>f</code>, because that variable was shadowed by the <code>f</code> in that scope.</p>
<p>Is this what you want? Either way, it makes no sense. The reason why we use the <code>with</code> scope is to avoid having to close the file explicitly.</p>
<p>You either use:</p>
<pre><code>f = open('filename')
...
f.close()
</code></pre>
<p>OR</p>
<pre><code>with open('filename') as f:
    ...
</code></pre>
<p>You do not close a file opened within a <code>with</code> block. Using a <code>with</code> block has the additional advantage that the file gets closed even if an exception is raised and the following code is not executed.</p>
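<p>The auto-closing behaviour is easy to verify: the file object reports itself closed as soon as the <code>with</code> block exits, with no explicit <code>close()</code> anywhere (a small stdlib demonstration, written against a temporary file):</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.csv")
with open(path, "w") as f:
    f.write("accuracy,threshold\n")

# No f.close() was called, yet the with block already closed the file.
print(f.closed)  # True
```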
| 1 | 2016-08-18T12:11:27Z | [
"python",
"csv",
"append",
"export-to-csv"
] |
common exit status to python,bash,tcl scripts | 39,017,466 | <p>I have a framework that calls Python, Bash, and Tcl modules. Each module should end with a common exit status (SUCCESS=0, FAILURE=1), and the framework has to catch and interpret it. How can I do that?
I thought of declaring a class and importing this class in all the modules,
but how can I import it into the Tcl and Bash modules?</p>
<p>util.py</p>
<pre><code>class returncode:
    success=0
    failure=1
</code></pre>
<p>script.sh</p>
<pre><code> #!/bin/bash
. ../../util.py
catch_error("error","plugin")
</code></pre>
<p>while sourcing it shows error:</p>
<pre><code> ./../util.py: line 1: from: command not found
../../util.py: line 2: import: command not found
../../util.py: line 5: $'\nall the common utility functions for harness and plugins are provided in this module\n': command not found
../../util.py: line 6: syntax error near unexpected token `('
../../util.py: line 6: `def platform():'
./sample.sh: line 4: syntax error near unexpected token `"vam","Dada"'
./sample.sh: line 4: `catch_error("error","plugin")'
</code></pre>
| 0 | 2016-08-18T11:49:24Z | 39,048,730 | <p>Every language describes things differently. It's effectively a non-starter to use the same file as source code for more than one language at once (it can be done, but it's hard work and not worth it). You <em>don't</em> import things from one language into another without really complicated inter-lingual shims, and they're best avoided when doing simple things.</p>
<p>But different languages can do <em>equivalent</em> operations. This is relevant to you.</p>
<p>In your case, you should be aware that the concepts of successful and failing executions you're working with are exactly the standard ones (though there are many other possible ways to fail, since any non-zero exit code is legal), which means that the code can be very simple indeed.
Below, I list for each language both the code for exiting successfully and for exiting with a failure. In every case, it turns out that the default exit code is zero; code only has to say more to use a non-zero code.</p>
<p>Sometimes these will look very similar (the Tcl and Bash code below). <strong><em>Do not think they are the same!</em></strong> They just might be tiny pieces that look alike; a larger program most definitely will not be the same. This just happens to be one of the places where there are similarities. Adding anything extra (like trivial conditional execution) would make the syntaxes look quite different.</p>
<h1>Python</h1>
<pre class="lang-python prettyprint-override"><code>import sys
# Exit with code for success (default)
sys.exit()
</code></pre>
<pre class="lang-python prettyprint-override"><code>import sys
# Exit with code for failure
sys.exit(1)
</code></pre>
<h1>Tcl</h1>
<pre class="lang-tcl prettyprint-override"><code># Exit with code for success (default)
exit
</code></pre>
<pre class="lang-tcl prettyprint-override"><code># Exit with code for failure
exit 1
</code></pre>
<h1>Bash</h1>
<pre class="lang-bash prettyprint-override"><code># Exit with code for success (default)
exit
</code></pre>
<pre class="lang-bash prettyprint-override"><code># Exit with code for failure
exit 1
</code></pre>
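<p>On the framework side, all three languages report through the same mechanism (the process exit code), so the caller needs no per-language glue. A sketch of how a Python framework might catch and interpret it; here the child is another Python process (via <code>sys.executable</code>) only so the example is self-contained, but the exit code of a <code>bash</code> or <code>tclsh</code> child is read exactly the same way:</p>

```python
import subprocess
import sys

SUCCESS, FAILURE = 0, 1

# Stand-ins for the real module invocations, e.g. ['bash', 'script.sh']
# or ['tclsh', 'script.tcl']; any command's exit code is read this way.
ok = subprocess.call([sys.executable, "-c", "import sys; sys.exit(0)"])
bad = subprocess.call([sys.executable, "-c", "import sys; sys.exit(1)"])

print(ok == SUCCESS, bad == FAILURE)  # True True
```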
| 2 | 2016-08-19T22:56:56Z | [
"python",
"bash",
"tcl"
] |
Python grouping JSON object using multiple keys | 39,017,487 | <p>I have this JSON object which has structure as follows (the json object was extracted from pandas dataframe using <code>to_json(orient="records")</code>)</p>
<pre><code>data = [{'month': 'Jan', 'date': '18', 'activity': 'cycling', 'duration': 3},
        {'month': 'Jan', 'date': '18', 'activity': 'reading', 'duration': 3.0},
        {'month': 'Jan', 'date': '19', 'activity': 'scripting', 'duration': 19.5},
        {'month': 'Feb', 'date': '18', 'activity': 'work', 'duration': 22.0},
        {'month': 'Feb', 'date': '19', 'activity': 'cooking', 'duration': 0.7},
        {'month': 'March', 'date': '16', 'activity': 'hiking', 'duration': 8.0}]
</code></pre>
<p>I am trying to group by two fields, <code>month</code> and <code>date</code>.
Expected result:</p>
<pre><code>data = [{
    "month": "Jan",
    "details": [{
        "date": "18",
        "effort": [{
            "activity": "cycling",
            "duration": 3
        }, {
            "activity": "reading",
            "duration": 3.0
        }]
    }, {
        "date": "19",
        "effort": [{
            "activity": "scripting",
            "duration": 19.5
        }]
    }]
}, {
    "month": "Feb",
    "details": [{
        "date": "18",
        "effort": [{
            "activity": "work",
            "duration": 22.0
        }]
    }, {
        "date": "19",
        "effort": [{
            "activity": "cooking",
            "duration": 0.7
        }]
    }]
}, {
    "month": "March",
    "details": [{
        "date": "16",
        "effort": [{
            "activity": "hiking",
            "duration": 8.0
        }]
    }]
}]
</code></pre>
<p>I tried having the data as python dictionary which is extracted from pandas dataframe using <code>to_dict(orient="records")</code></p>
<pre><code>list_ = []
for item in dict_:
    list_.append({
        "month": item["month"],
        "details": [{
            "date": item["date"],
            "efforts": [{
                "activity": item["activity"],
                "duration": item["duration"]
            }]
        }]
    })
json.dumps(list_)
</code></pre>
<p>and the output I got is:</p>
<pre><code>[{
    "month": "Jan",
    "details": [{
        "date": "18",
        "efforts": [{
            "duration": 3,
            "activity": "cycling"
        }]
    }]
}, {
    "month": "Jan",
    "details": [{
        "date": "18",
        "efforts": [{
            "duration": 3.0,
            "activity": "reading"
        }]
    }]
}, {
    "month": "Jan",
    "details": [{
        "date": "19",
        "efforts": [{
            "duration": 19.5,
            "activity": "scripting"
        }]
    }]
}, {
    "month": "Feb",
    "details": [{
        "date": "18",
        "efforts": [{
            "duration": 22.0,
            "activity": "work"
        }]
    }]
}, {
    "month": "Feb",
    "details": [{
        "date": "19",
        "efforts": [{
            "duration": 0.7,
            "activity": "cooking"
        }]
    }]
}, {
    "month": "March",
    "details": [{
        "date": "16",
        "efforts": [{
            "duration": 8.0,
            "activity": "hiking"
        }]
    }]
}]
</code></pre>
<p>I am not handling the merging of values into the existing entries.</p>
<p>I tried using Python as well as JavaScript. Do you have any advice or a solution to the problem? Thanks.</p>
| 0 | 2016-08-18T11:51:02Z | 39,018,346 | <p>This seems to work:</p>
<h1>Code</h1>
<pre><code>data = [{'month': 'Jan', 'date': '18', 'activity': 'cycling', 'duration': 3},
        {'month': 'Jan', 'date': '18', 'activity': 'reading', 'duration': 3.0},
        {'month': 'Jan', 'date': '19', 'activity': 'scripting', 'duration': 19.5},
        {'month': 'Feb', 'date': '18', 'activity': 'work', 'duration': 22.0},
        {'month': 'Feb', 'date': '19', 'activity': 'cooking', 'duration': 0.7},
        {'month': 'March', 'date': '16', 'activity': 'hiking', 'duration': 8.0}]
new_data = []
for item in data:
    effort = {'activity': item['activity'], 'duration': item['duration']}
    for month in new_data:
        if item['month'] == month['month']:
            for date in month['details']:
                if item['date'] == date['date']:
                    date['effort'].append(effort)
                    break
            else:
                month['details'].append({'date': item['date'], 'effort': [effort]})
            break
    else:
        new_data.append({'month': item['month'],
                         'details': [{'date': item['date'], 'effort': [effort]}]})
print new_data
</code></pre>
<h1>Output</h1>
<pre><code>[{'details': [{'date': '18', 'effort': [{'duration': 3, 'activity': 'cycling'}, {'duration': 3.0, 'activity': 'reading'}]}, {'date': '19', 'effort': [{'duration': 19.5, 'activity': 'scripting'}]}], 'month': 'Jan'}, {'details': [{'date': '18', 'effort': [{'duration': 22.0, 'activity': 'work'}]}, {'date': '19', 'effort': [{'duration': 0.7, 'activity': 'cooking'}]}], 'month': 'Feb'}, {'details': [{'date': '16', 'effort': [{'duration': 8.0, 'activity': 'hiking'}]}], 'month': 'March'}]
</code></pre>
<p>Basically, just iterate through each entry: first check whether the month exists, and if it does, check whether the date already exists, then append to the new data accordingly. So if no month exists, you append everything; if no date exists, you append the date details and the new activity; and if the date exists too, you just append the activity.</p>
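<p>As a possible refinement (not part of the answer above): the repeated scans of the months and dates already collected can be avoided by keeping a lookup table next to the output list. A sketch, using the same field names as the question:</p>

```python
def group_records(records):
    out, months, dates = [], {}, {}
    for rec in records:
        effort = {"activity": rec["activity"], "duration": rec["duration"]}
        # Find or create the month entry.
        month = months.get(rec["month"])
        if month is None:
            month = {"month": rec["month"], "details": []}
            months[rec["month"]] = month
            out.append(month)
        # Find or create the date entry within that month.
        key = (rec["month"], rec["date"])
        date = dates.get(key)
        if date is None:
            date = {"date": rec["date"], "effort": []}
            dates[key] = date
            month["details"].append(date)
        date["effort"].append(effort)
    return out
```

<p>Each record is then placed in constant time instead of walking the nested lists, and the input order of months and dates is preserved.</p>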
| 1 | 2016-08-18T12:32:50Z | [
"javascript",
"python",
"json"
] |
Extract files from folder according to timestamp given in their title using python | 39,017,665 | <p>I have a folder containing various files which are named using timestamps like this:</p>
<pre><code>2016.08.06_09.31.53_test
2016.08.06_09.36.23_test1
2016.08.04_10.41.23_test
2016.08.04_10.46.20_test1
</code></pre>
<p>I am trying to write a program that pulls out only the files from this folder whose timestamp lies within a certain time span.
E.g. if the user input is a folder directory and the time span is 6.8.2016, 9 am to 10 pm, the program should give back only the first two files from above.
What is the best way to do so?</p>
| -1 | 2016-08-18T11:58:57Z | 39,018,157 | <p>With the above naming system, it is incredibly simple... Get your files into a list</p>
<pre><code>list_of_files = [file1, file2 , ... ]
list_of_files.sort()
print list_of_files
</code></pre>
<p>That will sort it for you very quickly: <code>.sort()</code> changes the original list in place and puts it into lexicographic order, which with this naming scheme is also chronological order.</p>
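<p>Sorting alone will not filter by the requested time span, though. Because the timestamp is the leading part of each name, it can be parsed with <code>datetime.strptime</code> and compared against the span directly. A sketch assuming the naming scheme shown in the question (the real list of names would come from <code>os.listdir(folder)</code>):</p>

```python
from datetime import datetime

def files_between(names, start, end):
    """Keep names whose leading 'YYYY.MM.DD_HH.MM.SS' stamp lies in [start, end]."""
    kept = []
    for name in names:
        stamp = "_".join(name.split("_")[:2])            # e.g. '2016.08.06_09.31.53'
        when = datetime.strptime(stamp, "%Y.%m.%d_%H.%M.%S")
        if start <= when <= end:
            kept.append(name)
    return kept

names = ["2016.08.06_09.31.53_test", "2016.08.06_09.36.23_test1",
         "2016.08.04_10.41.23_test", "2016.08.04_10.46.20_test1"]
print(files_between(names, datetime(2016, 8, 6, 9), datetime(2016, 8, 6, 22)))
```

<p>For the example span (6.8.2016, 9 am to 10 pm) this keeps exactly the first two names.</p>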
| 0 | 2016-08-18T12:24:07Z | [
"python"
] |
Python for loops has index for counter | 39,017,671 | <p>Hello, I'm a beginner in Python trying to read part of some code with a for loop, but I can't understand it. Does anybody know how this loop gets at each element without an explicit index or loop counter? Thanks</p>
<pre><code>updateNodeNbrs = []
for a in nodeData:
    updateNodeNbrs.append(a[0])
</code></pre>
| -4 | 2016-08-18T11:59:11Z | 39,017,707 | <p>You're iterating <a href="http://www.diveintopython.net/file_handling/for_loops.html" rel="nofollow">directly over the elements</a> of <code>nodeData</code>, so there is no need for an index. The current element is designated by <code>a</code>.</p>
<p>This is equivalent to:</p>
<pre><code>updateNodeNbrs = []
for i in range(len(nodeData)):
    updateNodeNbrs.append(nodeData[i][0])
</code></pre>
<p>Although the original code is more <a href="http://blog.startifact.com/posts/older/what-is-pythonic.html" rel="nofollow">pythonic</a>.</p>
<hr>
<p>If you wanted to make the index appear, you could transform the code with <a href="http://stackoverflow.com/a/22171593/5018771"><code>enumerate</code></a> to:</p>
<pre><code>updateNodeNbrs = []
for i, a in enumerate(nodeData):
    updateNodeNbrs.append(a[0])
</code></pre>
<p>And here, <code>i</code> would be the index of element <code>a</code>, and you could use it in the loop.</p>
| 1 | 2016-08-18T12:01:05Z | [
"python",
"for-loop"
] |
Python for loops has index for counter | 39,017,671 | <p>Hello, I'm a beginner in Python trying to read part of some code with a for loop, but I can't understand it. Does anybody know how this loop gets at each element without an explicit index or loop counter? Thanks</p>
<pre><code>updateNodeNbrs = []
for a in nodeData:
    updateNodeNbrs.append(a[0])
</code></pre>
| -4 | 2016-08-18T11:59:11Z | 39,017,724 | <p>See same question <a href="http://stackoverflow.com/questions/9524209/count-indexes-using-for-in-python">here</a></p>
<p>If you have an existing list and you want to loop over it and keep track of the indices you can use the <code>enumerate</code> function. For example</p>
<pre><code>l = ["apple", "pear", "banana"]
for i, fruit in enumerate(l):
    print "index", i, "is", fruit
</code></pre>
| 0 | 2016-08-18T12:02:00Z | [
"python",
"for-loop"
] |
Django and celery on different servers and celery being able to send a callback to django once a task gets completed | 39,017,678 | <p>I have a Django project where I am using Celery with RabbitMQ to perform a set of async tasks. The setup I have planned goes like this.</p>
<ol>
<li>Django app running on one server.</li>
<li>Celery workers and rabbitmq running from another server.</li>
</ol>
<p>My initial issue: how do I access Django models from the Celery tasks residing on another server?</p>
<p>And assuming I am not able to access the Django models, is there a way, once a task gets completed, to send a callback to the Django application passing values, so that I can update Django's database based on the values passed?</p>
| 2 | 2016-08-18T11:59:32Z | 39,018,282 | <p>Concerning your first question, accessing django models from the workers' server:</p>
<p>Your django app must be available on both <strong>Server A</strong> (serving users) and <strong>Server B</strong> (hosting the celery workers)</p>
<p>Concerning your second question, updating the database based on the values: do you mean the result of the async task? If so, then you have two options:</p>
<ul>
<li>You can just save whatever you need to save from within the task itself, assuming you have access to the database.</li>
<li>You could use a results backend (one of which is through the Django ORM) as mentioned in the official documentation of Celery about <a href="http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results" rel="nofollow" title="Keeping Results">Keeping Results</a></li>
</ul>
| 3 | 2016-08-18T12:30:28Z | [
"python",
"django",
"asynchronous",
"rabbitmq",
"celery"
] |
Django and celery on different servers and celery being able to send a callback to django once a task gets completed | 39,017,678 | <p>I have a Django project where I am using Celery with RabbitMQ to perform a set of async tasks. The setup I have planned goes like this.</p>
<ol>
<li>Django app running on one server.</li>
<li>Celery workers and rabbitmq running from another server.</li>
</ol>
<p>My initial issue: how do I access Django models from the Celery tasks residing on another server?</p>
<p>And assuming I am not able to access the Django models, is there a way, once a task gets completed, to send a callback to the Django application passing values, so that I can update Django's database based on the values passed?</p>
| 2 | 2016-08-18T11:59:32Z | 39,065,804 | <p>I've used the following setup in my application:</p>
<ol>
<li>Task is initiated from Django - information is extracted from the model instance and passed to the task as a dictionary. NB - this will be more future proof as Celery 4 will default to JSON encoding</li>
<li>Remote server runs task and creates a dictionary of results</li>
<li>Remote server then calls an update task that is only listened for by a worker on the Django server.</li>
<li>Django worker read results dictionary and updates model.</li>
</ol>
<p>The Django worker listens to a separate queue, though this isn't strictly necessary. A results backend isn't used; the data needed is just passed to the task.</p>
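<p>Stripped of all Celery specifics, the "dictionary in, dictionary out" pattern from the steps above is just a JSON round trip, which is also why it stays compatible with Celery 4's default JSON encoding (field names here are hypothetical):</p>

```python
import json

def to_payload(instance_fields):
    # Step 1: extract plain values from the model instance before sending.
    return json.dumps(instance_fields)

def run_task(payload):
    # Step 2: what the remote worker receives and does.
    data = json.loads(payload)
    data["result"] = data["x"] * 2        # ...the real work goes here...
    # Step 3: what gets sent back to the update task on the Django server.
    return json.dumps(data)

# Step 4: the Django worker reads the results dict and updates the model.
result = json.loads(run_task(to_payload({"pk": 7, "x": 21})))
print(result["pk"], result["result"])  # 7 42
```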
| 0 | 2016-08-21T15:21:11Z | [
"python",
"django",
"asynchronous",
"rabbitmq",
"celery"
] |
Better way to change pandas date format to remove leading zeros? | 39,017,748 | <p>The DataFrame looks like:</p>
<pre><code> OPENED
0 2004-07-28
1 2010-03-02
2 2005-10-26
3 2006-06-30
4 2012-09-21
</code></pre>
<p>I converted them to my desired format successfully, but it seems very inefficient.</p>
<pre><code> OPENED
0 40728
1 100302
2 51026
3 60630
4 120921
</code></pre>
<p>The code that I used for the date conversion is:</p>
<pre><code>df['OPENED'] = pd.to_datetime(df.OPENED, format='%Y-%m-%d')
df['OPENED'] = df['OPENED'].apply(lambda x: x.strftime('%y%m%d'))
df['OPENED'] = df['OPENED'].apply(lambda i: str(i))
df['OPENED'] = df['OPENED'].apply(lambda s: s.lstrip("0"))
</code></pre>
| 1 | 2016-08-18T12:02:59Z | 39,017,803 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>, then remove the first 2 chars with <code>str[2:]</code>, and finally strip the leading <code>0</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.lstrip.html" rel="nofollow"><code>str.lstrip</code></a>:</p>
<pre><code>print (type(df.ix[0,'OPENED']))
<class 'str'>
print (df.OPENED.dtype)
object
print (df.OPENED.str.replace('-','').str[2:].str.lstrip('0'))
0 40728
1 100302
2 51026
3 60630
4 120921
Name: OPENED, dtype: object
</code></pre>
<p>If dtype is already <code>datetime</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow"><code>strftime</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.lstrip.html" rel="nofollow"><code>str.lstrip</code></a>:</p>
<pre><code>print (type(df.ix[0,'OPENED']))
<class 'pandas.tslib.Timestamp'>
print (df.OPENED.dtype)
datetime64[ns]
print (df.OPENED.dt.strftime('%y%m%d').str.lstrip('0'))
0 40728
1 100302
2 51026
3 60630
4 120921
Name: OPENED, dtype: object
</code></pre>
<p>Thank you <a href="http://stackoverflow.com/questions/39017748/better-way-to-change-pandas-date-format-to-remove-leading-zeros#comment65386375_39017803"><code>Jon Clements</code></a> for the comment:</p>
<pre><code>print (df['OPENED'].apply(lambda L: '{0}{1:%m%d}'.format(L.year % 100, L)))
0 40728
1 100302
2 51026
3 60630
4 120921
Name: OPENED, dtype: object
</code></pre>
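<p>For comparison, the per-value transformation is simple enough to express without pandas at all: format the two-digit year as a plain integer so the leading zero never appears in the first place (a stdlib sketch; for a DataFrame column it could be applied with a single <code>apply</code>):</p>

```python
from datetime import datetime

def compact(date_str):
    # '2004-07-28' -> '40728': two-digit year without its leading zero,
    # followed by zero-padded month and day.
    d = datetime.strptime(date_str, "%Y-%m-%d")
    return "%d%02d%02d" % (d.year % 100, d.month, d.day)

print([compact(s) for s in
       ["2004-07-28", "2010-03-02", "2005-10-26", "2006-06-30", "2012-09-21"]])
# ['40728', '100302', '51026', '60630', '120921']
```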
| 2 | 2016-08-18T12:05:42Z | [
"python",
"datetime",
"pandas",
"dataframe",
"leading-zero"
] |
Number of values lying in a specified range | 39,017,766 | <p>I have a data frame like the one below:</p>
<pre><code>NC_011163.1:1
NC_011163.1:22
NC_011163.1:44
NC_011163.1:65
NC_011163.1:73
NC_011163.1:87
NC_011163.1:104
NC_011163.1:130
NC_011163.1:151
NC_011163.1:172
NC_011163.1:194
NC_011163.1:210
NC_011163.1:235
NC_011163.1:255
NC_011163.1:295
NC_011163.1:320
NC_011163.1:445
NC_011163.1:520
</code></pre>
<p>I would like to scan the data frame using a window of 210 and count the number of values lying in every 210-wide window.</p>
<p>Desired output:</p>
<pre><code>output: Values
NC_011163.1:1-210 12
NC_011163.1:211-420 4
NC_011163.1:421-630 2
</code></pre>
<p>I'd greatly appreciate your inputs to solve this problem.</p>
<p>Thanks</p>
<p>V</p>
| 0 | 2016-08-18T12:03:50Z | 39,018,033 | <p>If you use python and <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a>, you can do:</p>
<p>with your data in a dataframe <code>df</code>:</p>
<pre><code> NC N
0 NC_011163.1 1
1 NC_011163.1 22
2 NC_011163.1 44
3 NC_011163.1 65
4 NC_011163.1 73
5 NC_011163.1 87
6 NC_011163.1 104
7 NC_011163.1 130
8 NC_011163.1 151
9 NC_011163.1 172
10 NC_011163.1 194
11 NC_011163.1 210
12 NC_011163.1 235
13 NC_011163.1 255
14 NC_011163.1 295
15 NC_011163.1 320
16 NC_011163.1 445
17 NC_011163.1 520
df.groupby([df.NC, pd.cut(df.N, range(0,631,210))]).count()
N
NC N
NC_011163.1 (0, 210] 12
(210, 420] 4
(420, 630] 2
</code></pre>
<p>Where:</p>
<ul>
<li><code>pd.cut(df.N, range(0, 631, 210))</code> returns which bin each value in the column <code>N</code> falls into. The bins are defined by the range, which creates 3 bins from the edges <code>[0, 210, 420, 630]</code>.</li>
<li>Then you groupby on:
<ul>
<li>the NC number (so it matches your output but here is useless as there is only one group, but I guess you'll have other chromosomes, hence it will perform the operation per chromosome)</li>
<li>the bins you've just made</li>
</ul></li>
<li><code>count</code> the number of elements in each group.</li>
</ul>
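<p>If only the counts are needed, the same bucketing also works without pandas: integer division by the window width gives each value's bucket index (positions taken from the question):</p>

```python
from collections import Counter

positions = [1, 22, 44, 65, 73, 87, 104, 130, 151, 172, 194, 210,
             235, 255, 295, 320, 445, 520]
width = 210

# Positions 1..210 land in bucket 0, 211..420 in bucket 1, and so on.
counts = Counter((p - 1) // width for p in positions)
for b in sorted(counts):
    print("NC_011163.1:%d-%d\t%d" % (b * width + 1, (b + 1) * width, counts[b]))
```

<p>This prints the three ranges 1-210, 211-420 and 421-630 with counts 12, 4 and 2, matching the desired output.</p>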
| 2 | 2016-08-18T12:18:27Z | [
"python",
"unix",
"pandas",
"awk"
] |
Number of values lying in a specified range | 39,017,766 | <p>I have a data frame like the one below:</p>
<pre><code>NC_011163.1:1
NC_011163.1:22
NC_011163.1:44
NC_011163.1:65
NC_011163.1:73
NC_011163.1:87
NC_011163.1:104
NC_011163.1:130
NC_011163.1:151
NC_011163.1:172
NC_011163.1:194
NC_011163.1:210
NC_011163.1:235
NC_011163.1:255
NC_011163.1:295
NC_011163.1:320
NC_011163.1:445
NC_011163.1:520
</code></pre>
<p>I would like to scan the data frame using a window of 210 and count the number of values lying in every 210-wide window.</p>
<p>Desired output:</p>
<pre><code>output: Values
NC_011163.1:1-210 12
NC_011163.1:211-420 4
NC_011163.1:421-630 2
</code></pre>
<p>I'd greatly appreciate your inputs to solve this problem.</p>
<p>Thanks</p>
<p>V</p>
| 0 | 2016-08-18T12:03:50Z | 39,018,535 | <pre><code>awk -v t=210 'BEGIN{FS=":";t++}{++a[int($2/t)]}
END{for(x in a){printf "%s:%s\t%d\n",$1,t*x"-"(x+1)*t,a[x]}}' file
</code></pre>
<p>will give this output:</p>
<pre><code>NC_011163.1:0-211 12
NC_011163.1:211-422 4
NC_011163.1:422-633 2
</code></pre>
<ul>
<li><p>You don't need to find out what the max value is or how many sections/ranges you have in the result; this command does it for you.</p></li>
<li><p>The code is easy to understand too, I think; most of it is for the output format.</p></li>
</ul>
| 0 | 2016-08-18T12:41:30Z | [
"python",
"unix",
"pandas",
"awk"
] |
Number of values lying in a specified range | 39,017,766 | <p>I have a data frame like the one below:</p>
<pre><code>NC_011163.1:1
NC_011163.1:22
NC_011163.1:44
NC_011163.1:65
NC_011163.1:73
NC_011163.1:87
NC_011163.1:104
NC_011163.1:130
NC_011163.1:151
NC_011163.1:172
NC_011163.1:194
NC_011163.1:210
NC_011163.1:235
NC_011163.1:255
NC_011163.1:295
NC_011163.1:320
NC_011163.1:445
NC_011163.1:520
</code></pre>
<p>I would like to scan the data frame using a window of 210 and count the number of values lying in every 210-wide window.</p>
<p>Desired output:</p>
<pre><code>output: Values
NC_011163.1:1-210 12
NC_011163.1:211-420 4
NC_011163.1:421-630 2
</code></pre>
<p>I'd greatly appreciate your inputs to solve this problem.</p>
<p>Thanks</p>
<p>V</p>
| 0 | 2016-08-18T12:03:50Z | 39,021,734 | <pre><code>$ cat tst.awk
BEGIN { FS=":"; OFS="\t"; endOfRange=210 }
{
    key = $1
    bucket = int((($2-1)/endOfRange)+1)
    cnt[bucket]++
    maxBucket = (bucket > maxBucket ? bucket : maxBucket)
}
END {
    for (bucket=1; bucket<=maxBucket; bucket++) {
        print key ":" ((bucket-1)*endOfRange)+1 "-" bucket*endOfRange, cnt[bucket]+0
    }
}
$ awk -f tst.awk file
NC_011163.1:1-210 12
NC_011163.1:211-420 4
NC_011163.1:421-630 2
</code></pre>
<p>Note that this will work even if you have some ranges with no values in your input data (it will print the range with a count of zero) and it will always print the ranges in numerical order (output order when using the <code>in</code> operator is "random"):</p>
<pre><code>$ cat file
NC_011163.1:1
NC_011163.1:22
NC_011163.1:520
$ awk -f tst.awk file
NC_011163.1:1-210 2
NC_011163.1:211-420 0
NC_011163.1:421-630 1
</code></pre>
| 1 | 2016-08-18T15:10:04Z | [
"python",
"unix",
"pandas",
"awk"
] |
Espeak on windows 7 and python 2.7 | 39,017,808 | <p>To begin, I'll note that there is a similar post here: <a href="http://stackoverflow.com/questions/17547531/how-to-use-espeak-with-python">How to use espeak with python</a>. I was using answers from that post, but I'm still getting errors, so maybe you'll be able to help me fix them.</p>
<pre><code>import subprocess
text = '"Hello world"'
subprocess.call('espeak '+text, shell=True)
</code></pre>
<p>This code gives me an error:</p>
<pre><code>'espeak' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>P.S. I think I installed eSpeak correctly, because in the CMD line I can run:</p>
<pre><code>espeak "text"
</code></pre>
<p>and it will say "text" correctly.</p>
<p>P.S. 2: the answer to this question will probably also be the answer to another question I posted earlier. (<a href="http://stackoverflow.com/questions/39014468/how-to-save-the-output-of-pyttsx-to-wav-file">How to save the output of PyTTSx to wav file</a>)</p>
| 0 | 2016-08-18T12:06:01Z | 39,017,848 | <p>Pass the command and its arguments as a list instead of a single string, as in this <code>ping</code> example; and if <code>espeak</code> is not on the <code>PATH</code> that the Python process sees, give the full path to the executable.</p>
<pre><code>import subprocess
subprocess.call(['ping', '127.0.0.1'], shell=True)
</code></pre>
| 0 | 2016-08-18T12:08:37Z | [
"python",
"python-2.7",
"text-to-speech",
"espeak"
] |
ase.visualize.view misses pygtk | 39,017,925 | <p>On an Ubuntu 16.04 machine with an XFCE desktop I have installed <code>python3</code> and <code>pip</code> with the command <code>sudo apt install python3-pip</code>. I then installed <code>numpy</code> and <code>ase</code> (Atomic Simulation Environment) using <code>sudo -H python3 -m pip install --upgrade numpy ase</code>. No apparent problems. However, running this <code>mwe.m</code>:</p>
<pre><code>from ase import Atoms
from ase.build import fcc111
slab = fcc111('Cu', size=(4, 4, 2), vacuum=10.0)
from ase.visualize import view
view(slab)
</code></pre>
<p>results in the following: </p>
<pre><code>$ python3 mwe.m
$ ImportError: No module named 'pygtk'
To get a full traceback, use: ase-gui --verbose
</code></pre>
<p>The problem is in the <code>view</code> command that depends on <code>ase-gui</code>, which seems to depend on <code>pygtk</code>.
My goal is to go through <a href="https://wiki.fysik.dtu.dk/ase/tutorials/surface.html" rel="nofollow">this tutorial</a>. Of course I'm a novice; any help is appreciated. How can I overcome this problem?</p>
| 0 | 2016-08-18T12:12:44Z | 39,723,931 | <p>Unfortunately, none of the ASE gui functions will work with Python3. PyGTK is only for Python2 and it has been moved to <a href="https://wiki.gnome.org/action/show/Projects/PyGObject?action=show&redirect=PyGObject" rel="nofollow">PyGObject</a> for Python3. This is an <a href="https://gitlab.com/ase/ase/issues/13" rel="nofollow">open issue</a> for the ASE team. Your best bet right now is to use ASE with Python2.</p>
| 0 | 2016-09-27T11:51:20Z | [
"python",
"python-3.x",
"numpy",
"python-3.5"
] |
Deploy caffe regression model | 39,017,998 | <p>I have trained a regression network with <code>caffe</code>. I use <code>"EuclideanLoss"</code> layer in both the train and test phase. I have plotted these and the results look promising. </p>
<p>Now I want to deploy the model and use it. I know that if <code>SoftmaxLoss</code> is used, the final layer must be <code>Softmax</code> in the deploy file. What should this be in the case of <code>Euclidean loss</code>? </p>
| 2 | 2016-08-18T12:15:53Z | 39,018,076 | <p>For deploy you only need to discard the loss layer, in your case the <code>"EuclideanLoss"</code> layer. The output of your net is the <code>"bottom"</code> you fed the loss layer.</p>
<p>For <code>"SoftmaxWithLoss"</code> layer (and <code>"SigmoidCrossEntropy"</code>) you need to <strong>replace</strong> the loss layer, since the loss layer includes an extra layer inside it (for computational reasons).</p>
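<p>Concretely, a sketch of what this looks like in the prototxt (the layer and blob names here are assumptions — match them to your own train net). If the train prototxt ends with</p>
<pre><code>layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "fc_out"
  bottom: "label"
  top: "loss"
}
</code></pre>
<p>then the deploy prototxt simply omits this layer (and the <code>"label"</code> input), and <code>"fc_out"</code> becomes the net's output blob.</p>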
| 1 | 2016-08-18T12:20:16Z | [
"python",
"neural-network",
"deep-learning",
"caffe",
"conv-neural-network"
] |
My Django form data won't save to my database | 39,018,024 | <p>I'm trying to get a simple form to save to my MariaDB database but I can't get it to save when I use the form. If I use the Django Admin interface, I can make changes to the database however.</p>
<p><strong>addstudent.html</strong></p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Add Student</title>
</head>
<body>
<h1>Add a student</h1>
<form id="studentform" method="post" action="/add-student/index/">
{% csrf_token %}
{% for hidden in form.hidden_fields %}
{{ hidden }}
{% endfor %}
{% for field in form.visible_fields %}
{{ field.errors }}
{{ field.help_text }}
{{ field }}
{% endfor %}
<input type="submit" name="submit" value="Create Student" />
</form>
</body>
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def addstudent(request):
if request.method == 'POST':
form = studentform(request.POST)
if form.is_valid():
form.save(commit=True)
return index(request)
else:
print (form.errors)
else:
form = studentform()
return render(request, 'addstudent/addstudent.html', {'form':form})
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class student(models.Model):
last_name = models.CharField(max_length=70)
other_names = models.CharField(max_length=70)
date_of_birth = models.DateField()
</code></pre>
<p><strong>forms.py</strong></p>
<pre><code>DOY = ('1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987',
'1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995',
'1996', '1997', '1998', '1999', '2000', '2001', '2002', '2003',
'2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011',
'2012', '2013', '2014', '2015')
class studentform(forms.ModelForm):
last_name = forms.CharField(max_length=70, help_text="Last name")
other_names = forms.CharField(max_length=70, help_text="Other names")
date_of_birth = forms.DateField(widget = extras.SelectDateWidget(years = DOY ), help_text="Date Of Birth")
class Meta:
model = student
fields = ('last_name', 'other_names', 'date_of_birth',)
</code></pre>
<p>Any ideas as to why it's not saving to the database?</p>
| 0 | 2016-08-18T12:17:44Z | 39,018,207 | <p>Your form action is set to what I assume is the index page, which means it's probably not calling that view at all.</p>
<pre><code><form id="studentform" method="post" action="/add-student/index/">
</code></pre>
<p>The action here should point to the url that links to that particular view (preferably using the <code>url</code> template tag)</p>
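<p>For example, if the URL pattern that routes to this view is named <code>addstudent</code> in your <code>urls.py</code> (the name is an assumption — use whatever your project actually defines):</p>
<pre><code><form id="studentform" method="post" action="{% url 'addstudent' %}">
</code></pre>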
| 2 | 2016-08-18T12:26:54Z | [
"python",
"django",
"django-forms",
"mariadb"
] |
How to add Google places autocomplete to flask? | 39,018,026 | <p>I would like to add the Google places autocomplete library to an input field but am not able to in my flask app (it doesn't give the dropdown suggestions), although it works fine in a standalone HTML file.</p>
| 0 | 2016-08-18T12:17:48Z | 39,018,224 | <p>Have you consider using <a href="https://ubilabs.github.io/geocomplete/" rel="nofollow">Geocomplete</a>? The example code they provide is pretty easy to implement</p>
| 0 | 2016-08-18T12:27:27Z | [
"javascript",
"jquery",
"python",
"flask",
"autocomplete"
] |
BFS, trying to find the longest way between nodes | 39,018,055 | <p>Currently I'm working with an assignment (BFS), where I am supposed to find the longest way between two nodes. Notice that I'm working with both a queue-class and a node-class, the node-class is called <code>Word</code> in my assignment. The words are 3-letter words, and I currently have a method (longestway) that returns the longest path from a given word to its smallest child. </p>
<p>The problem is, I want to implement this so that it returns the longest path from any word in a list to its smallest child, and then returns the longest of all those paths. </p>
<p>The code I have right now is working, but it's taking way too long to run; I need help speeding it up.</p>
<p>My code is currently looking like this:</p>
<pre><code>def printpath1(start):
ordet=longestway(setlista(),start)
path=[]
p=ordet
while p is not None:
path.append(p.ordet)
p=p.parents
path.reverse()
#print (path)
return len(path)
def ordpar(lista):
s=lista
a=[]
for elem in s:
if a[0]<printpath1(elem):
a.pop(0)
a.append(printpath1(elem))
print(a)
</code></pre>
<p><code>printpath1</code> is currently working fine, but <code>ordpar</code> is taking way too long to run, and I need help to speed it up.</p>
| 0 | 2016-08-18T12:19:35Z | 39,018,781 | <p>A good start would be to stop removing elements from the head of the list, since all you do is print, and introduce an offset variable. Also, you're calling <code>printpath1</code> twice with the same argument, doing twice as many calculations as necessary.</p>
<pre><code>a = [0]  # initialise with a sentinel so the first comparison works
offset = 0
for elem in lista:
pp1 = printpath1(elem)
if a[offset] < pp1:
a.append(pp1)
offset += 1
print(a[offset::])
</code></pre>
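<p>If only the single longest length is needed, the running maximum can also be computed directly with <code>max</code>, calling <code>printpath1</code> once per element. A self-contained sketch (with a stand-in for <code>printpath1</code>, since the real one depends on the asker's classes):</p>

```python
# Stand-in for the asker's printpath1: here simply the length of the word.
def printpath1(elem):
    return len(elem)

lista = ["hat", "hatter", "ha"]

# One call per element; max keeps only the largest path length.
longest = max(printpath1(elem) for elem in lista)
print(longest)  # 6
```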
| 0 | 2016-08-18T12:52:27Z | [
"python",
"bfs"
] |
How do the arguments are being passed to test functions in tox/py.test? | 39,018,056 | <p>I'm learning to write tests with tox. How are arguments passed to test functions in tox/py.test? For example, in <code>test_simple_backup_generation</code> from <code>tests/test_backup_cmd.py</code> of the <code>django-backup</code> <a href="https://github.com/django-backup/django-backup" rel="nofollow">extension</a>, there are three arguments: <code>tmpdir</code>, <code>settings</code>, <code>db</code>. I have no idea where they come from. Nothing is said about this in the tox documentation either.</p>
| 0 | 2016-08-18T12:19:38Z | 39,018,666 | <p>These are <a href="http://docs.pytest.org/en/latest/fixture.html" rel="nofollow">pytest fixtures</a> provided by <a href="http://pytest-django.readthedocs.io/en/latest/helpers.html#fixtures" rel="nofollow">pytest-django</a> and <a href="http://doc.pytest.org/en/latest/tmpdir.html" rel="nofollow">pytest itself</a>.</p>
| 1 | 2016-08-18T12:47:35Z | [
"python",
"automated-tests",
"py.test",
"tox"
] |
iterating replacing string in a text | 39,018,138 | <p>I'm writing a program that has to replace the string '+' by '!', and strings '*+' by '!!' in a particular text. As an example, I need to go from:</p>
 <pre><code> some_text = "here is +some*+ text and also +some more*+ text here"
</code></pre>
<p>to</p>
 <pre><code> some_text_new = "here is !some!! text and also !some more!! text here"
</code></pre>
<p>You'll notice that '+' and '*+' enclose particular words in my text. After I run the program, those words need to be enclosed between '!' and '!!' instead.</p>
<p>I wrote the following code but it iterates several times before giving the right output. How can I avoid that iteration?</p>
<pre><code> def many_cues(value):
if has_cue_marks(value) is True:
add_text(value)
#print value
def has_cue_marks(value):
return '+' in value and'+*' in value
def add_text(value):
n = '+'
m = "+*"
text0 = value
for n in text0:
         text1 = text0.replace(n, '!', 3)
print text1
for m in text0:
         text2 = text0.replace(m, '!!', 3)
print text2
</code></pre>
| 0 | 2016-08-18T12:23:24Z | 39,018,214 | <pre><code>>>> x = 'here is +some*+ text and also +some more*+ text here'
>>> x = x.replace('*+','!!')
>>> x
'here is +some!! text and also +some more!! text here'
>>> x = x.replace('+','!')
>>> x
'here is !some!! text and also !some more!! text here'
</code></pre>
<p>The final argument to <a href="http://www.tutorialspoint.com/python/string_replace.htm" rel="nofollow">replace</a> is optional - if you leave it out, it will replace all instances of the word. So, just use replace on the larger substring first so you don't accidentally take out some of the smaller, then use replace on the smaller, and you should be all set.</p>
| 0 | 2016-08-18T12:27:15Z | [
"python",
"str-replace"
] |
iterating replacing string in a text | 39,018,138 | <p>I'm writing a program that has to replace the string '+' by '!', and strings '*+' by '!!' in a particular text. As an example, I need to go from:</p>
 <pre><code> some_text = "here is +some*+ text and also +some more*+ text here"
</code></pre>
<p>to</p>
 <pre><code> some_text_new = "here is !some!! text and also !some more!! text here"
</code></pre>
<p>You'll notice that '+' and '*+' enclose particular words in my text. After I run the program, those words need to be enclosed between '!' and '!!' instead.</p>
<p>I wrote the following code but it iterates several times before giving the right output. How can I avoid that iteration?</p>
<pre><code> def many_cues(value):
if has_cue_marks(value) is True:
add_text(value)
#print value
def has_cue_marks(value):
return '+' in value and'+*' in value
def add_text(value):
n = '+'
m = "+*"
text0 = value
for n in text0:
         text1 = text0.replace(n, '!', 3)
print text1
for m in text0:
         text2 = text0.replace(m, '!!', 3)
print text2
</code></pre>
| 0 | 2016-08-18T12:23:24Z | 39,018,621 | <p>It can be done using regex groups</p>
<pre><code>import re
def replacer(matchObj):
if matchObj.group(1) == '*+':
return '!!'
    elif matchObj.group(2) == '+':
return '!'
text = 'here is +some*+ text and also +some more*+ text here'
replaced = re.sub(r'(\*\+)|(\+)', replacer, text)
</code></pre>
<p>Notice that the order of the groups is important, since the two patterns you want to replace share common characters.</p>
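<p>A shorter variant of the same idea uses a lambda as the replacer; run against the question's sample text it gives the expected result:</p>

```python
import re

text = 'here is +some*+ text and also +some more*+ text here'

# Alternation order matters: try '*+' before a bare '+'.
replaced = re.sub(r'\*\+|\+', lambda m: '!!' if m.group() == '*+' else '!', text)
print(replaced)  # here is !some!! text and also !some more!! text here
```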
| -1 | 2016-08-18T12:45:27Z | [
"python",
"str-replace"
] |
python - lists are being read in from csv as strings | 39,018,150 | <p>I have a dictionary that uses strings (edit) as keys and stores lists of lists as values.</p>
<pre><code>dict = {key1: [[data1],[data2],[data3]], key2: [[data4],[data5]],...etc}
</code></pre>
<p>EDIT: where the data variables are rows containing different data types from a converted pandas DataFrame</p>
<p>Ex.</p>
<pre><code>df = pd.DataFrame()
df['City'] = ['New York','Austin','New Orleans','New Orleans']
df['State'] = ['NY','TX','LA','LA']
df['Latitude'] = [29.12,23.53,34.53,34.53]
df['Time'] = [1.46420e+09,1.47340e+09,1.487820e+09,1.497820e+09]
City State Latitude Time
New York NY 29.12 1.46420e+09
Austin TX 23.53 1.47340e+09
New Orleans LA 34.53 1.487820e+09
New Orleans LA 34.53 1.497820e+09
dict = {}
cities = df['City'].unique()
for c in cities:
temp = df[df['City'] == c]
dict[c] = temp.as_matrix().tolist()
#which outputs this for a given key
dict['New Orleans'] = [['New Orleans' 'LA' 34.53 1.487820e+09],
['New Orleans' 'LA' 34.53 1.497820e+09]]
</code></pre>
<p>I am storing it as a csv using the following:</p>
<pre><code>filename = 'storage.csv'
with open(filename,'w') as f:
w = csv.writer(f)
for key in dict.keys():
w.writerow((key,dict[key]))
</code></pre>
<p>I then read the file back into a dictionary using the following:</p>
<pre><code>reader = csv.reader(open(filename, 'r'))
dict = {}
for key,val in reader:
dict[key] = val
</code></pre>
<p>val comes in looking perfect, except it is now a string. For example, key1 looks like this:</p>
<pre><code>dict[key1] = "[[data1],[data2],[data3]]"
</code></pre>
<p>How can I read the values in as lists, or remove the quotes from the read-in version of val?</p>
| 2 | 2016-08-18T12:23:52Z | 39,018,399 | <p>Your code should look like this:</p>
<pre><code>import csv
import ast
#dict = {1: [[1],[2],[3]], 2: [[4],[5]]}
reader = csv.reader(open("storage.csv", 'r'))
dict = {}
for key,val in reader:
dict[int(key)] = ast.literal_eval(val)
print dict
</code></pre>
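<p>To see <code>ast.literal_eval</code> in isolation — it safely parses the string a csv cell comes back as into real Python objects:</p>

```python
import ast

cell = "[[1], [2], [3]]"        # what csv.reader hands back: a plain string
value = ast.literal_eval(cell)  # parsed back into a list of lists

assert value == [[1], [2], [3]]
print(type(value).__name__)  # list
```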
| 1 | 2016-08-18T12:34:39Z | [
"python",
"csv",
"pandas",
"read-write"
] |
python - lists are being read in from csv as strings | 39,018,150 | <p>I have a dictionary that uses strings (edit) as keys and stores lists of lists as values.</p>
<pre><code>dict = {key1: [[data1],[data2],[data3]], key2: [[data4],[data5]],...etc}
</code></pre>
<p>EDIT: where the data variables are rows containing different data types from a converted pandas DataFrame</p>
<p>Ex.</p>
<pre><code>df = pd.DataFrame()
df['City'] = ['New York','Austin','New Orleans','New Orleans']
df['State'] = ['NY','TX','LA','LA']
df['Latitude'] = [29.12,23.53,34.53,34.53]
df['Time'] = [1.46420e+09,1.47340e+09,1.487820e+09,1.497820e+09]
City State Latitude Time
New York NY 29.12 1.46420e+09
Austin TX 23.53 1.47340e+09
New Orleans LA 34.53 1.487820e+09
New Orleans LA 34.53 1.497820e+09
dict = {}
cities = df['City'].unique()
for c in cities:
temp = df[df['City'] == c]
dict[c] = temp.as_matrix().tolist()
#which outputs this for a given key
dict['New Orleans'] = [['New Orleans' 'LA' 34.53 1.487820e+09],
['New Orleans' 'LA' 34.53 1.497820e+09]]
</code></pre>
<p>I am storing it as a csv using the following:</p>
<pre><code>filename = 'storage.csv'
with open(filename,'w') as f:
w = csv.writer(f)
for key in dict.keys():
w.writerow((key,dict[key]))
</code></pre>
<p>I then read the file back into a dictionary using the following:</p>
<pre><code>reader = csv.reader(open(filename, 'r'))
dict = {}
for key,val in reader:
dict[key] = val
</code></pre>
<p>val comes in looking perfect, except it is now a string. For example, key1 looks like this:</p>
<pre><code>dict[key1] = "[[data1],[data2],[data3]]"
</code></pre>
<p>How can I read the values in as lists, or remove the quotes from the read-in version of val?</p>
| 2 | 2016-08-18T12:23:52Z | 39,018,494 | <p><strong>Edit:</strong> Since you are using a <code>pandas.DataFrame</code> don't use the <code>csv</code> module or <code>json</code> module. Instead, use <a href="http://pandas.pydata.org/pandas-docs/stable/io.html" rel="nofollow"><code>pandas.io</code></a> for both reading and writing.</p>
<hr>
<p><strong>Original Answer:</strong></p>
<p>Short answer: use <code>json</code>.</p>
<p>CSV is fine for saving tables of strings. Anything further than that and you need to manually convert the strings back into Python objects.</p>
<p>If your data has just lists, dictionaries and basic literals like strings and numbers <code>json</code> would be the right tool for this job.</p>
<p>Given:</p>
<pre><code>example = {'x': [1, 2], 'y': [3, 4]}
</code></pre>
<p>Save to file:</p>
<pre><code>with open('f.txt','w') as f:
json.dump(example, f)
</code></pre>
<p>Load from file:</p>
<pre><code>with open('f.txt') as f:
reloaded_example = json.load(f)
</code></pre>
| 3 | 2016-08-18T12:39:33Z | [
"python",
"csv",
"pandas",
"read-write"
] |
Python: how to find elements in array x which have values close to elements in array y? | 39,018,232 | <p>Elements in arrays <em>x</em> and <em>y</em> are floats. I would like to find elements in array <em>x</em> which have values as close as possible to the ones in array <em>y</em> (for each value in array <em>y</em> - one element in array <em>x</em>). Also array <em>x</em> contains >10^6 elements and array <em>y</em> around 10^3, and this is part of a <em>for loop</em> so it should be done preferably fast. </p>
<p>I tried to avoid writing a new for loop, so I did this, but it is very slow for a big y array:</p>
<pre><code>x = np.array([0, 0.2, 1, 2.4, 3, 5]); y = np.array([0, 1, 2]);
diff_xy = x.reshape(1,len(x)) - y.reshape(len(y),1);
diff_xy_abs = np.fabs(diff_xy);
args_x = np.argmin(diff_xy_abs, axis = 1);
x_new = x[args_x]
</code></pre>
<p>I'm new to Python, so any suggestion is welcome! </p>
| 1 | 2016-08-18T12:27:50Z | 39,018,271 | <p>Maybe sort the larger array, then binary-search it for each of the smaller array's values. If a value is found, that is the closest value; if not, the closest candidates sit right next to the insertion point, in neighbouring indexes.</p>
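<p>A sketch of that idea in pure Python with the <code>bisect</code> module, using the question's sample arrays (sorting <code>x</code> once, then one binary search per element of <code>y</code>):</p>

```python
import bisect

def closest_values(x, y):
    # For each value in y, pick the element of x with the smallest
    # absolute difference. Cost: one sort plus len(y) binary searches.
    xs = sorted(x)
    out = []
    for v in y:
        i = bisect.bisect_left(xs, v)
        # The best match is a neighbour of the insertion point.
        candidates = xs[max(i - 1, 0):i + 1]
        out.append(min(candidates, key=lambda c: abs(c - v)))
    return out

print(closest_values([0, 0.2, 1, 2.4, 3, 5], [0, 1, 2]))  # [0, 1, 2.4]
```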
| 0 | 2016-08-18T12:30:03Z | [
"python",
"arrays",
"elements"
] |
Python: how to find elements in array x which have values close to elements in array y? | 39,018,232 | <p>Elements in arrays <em>x</em> and <em>y</em> are floats. I would like to find elements in array <em>x</em> which have values as close as possible to the ones in array <em>y</em> (for each value in array <em>y</em> - one element in array <em>x</em>). Also array <em>x</em> contains >10^6 elements and array <em>y</em> around 10^3, and this is part of a <em>for loop</em> so it should be done preferably fast. </p>
<p>I tried to avoid writing a new for loop, so I did this, but it is very slow for a big y array:</p>
<pre><code>x = np.array([0, 0.2, 1, 2.4, 3, 5]); y = np.array([0, 1, 2]);
diff_xy = x.reshape(1,len(x)) - y.reshape(len(y),1);
diff_xy_abs = np.fabs(diff_xy);
args_x = np.argmin(diff_xy_abs, axis = 1);
x_new = x[args_x]
</code></pre>
<p>I'm new to Python, so any suggestion is welcome! </p>
| 1 | 2016-08-18T12:27:50Z | 39,019,151 | <p>The following gives the desired result.</p>
<pre><code>x[abs((np.tile(x, (len(y), 1)).T - y).T).argmin(axis=1)]
</code></pre>
<p>It <code>tile</code>s <code>x</code> for each element in <code>y</code> (<code>len(y)</code>), transposes (<code>.T</code>) this tiled array, subtracts <code>y</code>, re-transposes it, takes the <code>abs</code>olute value of differences, determines the indexes of the minimum values using <code>argmin</code> (over <code>axis=1</code>), and finally gets the values from these indexes of <code>x</code>.</p>
| 0 | 2016-08-18T13:10:28Z | [
"python",
"arrays",
"elements"
] |
Python: how to find elements in array x which have values close to elements in array y? | 39,018,232 | <p>Elements in arrays <em>x</em> and <em>y</em> are floats. I would like to find elements in array <em>x</em> which have values as close as possible to the ones in array <em>y</em> (for each value in array <em>y</em> - one element in array <em>x</em>). Also array <em>x</em> contains >10^6 elements and array <em>y</em> around 10^3, and this is part of a <em>for loop</em> so it should be done preferably fast. </p>
<p>I tried to avoid writing a new for loop, so I did this, but it is very slow for a big y array:</p>
<pre><code>x = np.array([0, 0.2, 1, 2.4, 3, 5]); y = np.array([0, 1, 2]);
diff_xy = x.reshape(1,len(x)) - y.reshape(len(y),1);
diff_xy_abs = np.fabs(diff_xy);
args_x = np.argmin(diff_xy_abs, axis = 1);
x_new = x[args_x]
</code></pre>
<p>I'm new to Python, so any suggestion is welcome! </p>
| 1 | 2016-08-18T12:27:50Z | 39,019,247 | <p>It comes at the cost of the order of x and y, but does this code meet your performance needs? Note: the same value from x could be used for more than one value of y.</p>
<pre><code>import numpy as np
# x = np.array([0, 0.2, 1, 2.4, 3, 5]);
# y = np.array([0, 1, 2]);
x = np.random.rand(10**6)*5000000
y = (np.random.rand(10**3)*5000000).astype(int)
x_new = np.zeros(len(y)) # Create an 'empty' array for the result
x.sort() # could be skipped if already sorted
y.sort() # could be skipped if already sorted
len_x = len(x)
idx_x = 0
cur_x = x[0]
for idx_y, cur_y in enumerate(y):
while True:
if idx_x == len_x-1:
# If we are at the end of x, the last value is the best value
x_new[idx_y] = cur_x
break
next_x = x[idx_x+1]
if abs(cur_y - cur_x) < abs(cur_y - next_x):
# If the current value of x is better than the next, keep it
x_new[idx_y] = cur_x
break
# Check for the next value
idx_x += 1
cur_x = next_x
print(x_new)
</code></pre>
| 2 | 2016-08-18T13:15:46Z | [
"python",
"arrays",
"elements"
] |
Python list filter performance | 39,018,238 | <p>I'm trying to do</p>
<pre><code>In [21]: l1 = range(1,1000000)
In [22]: l2 = range(100,90000)
In [23]: l1.append(101)
In [24]: print(set([x for x in l1 if l1.count(x) - l2.count(x) == 1]))
</code></pre>
<p>in my python shell this takes ages. Generally my goal is to substract a list from a second one while taking care of duplicates.</p>
<p>e.g</p>
<pre><code>[1,2,2,3] - [2,3] = [1,2]
</code></pre>
<p>I'd be very glad for any hint how to get this done in max 500ms on a regular single core machine.</p>
| 2 | 2016-08-18T12:28:13Z | 39,018,440 | <p><strong>Non order preserving</strong> using <code>collections.Counter</code>:</p>
<pre><code>from collections import Counter
a = Counter([1, 2, 2, 3])
b = Counter([2, 3])
res = list(a - b )
# [1, 2]
</code></pre>
<p>This works because the <code>-</code> method of a <code>Counter</code> removes any elements from the output where the count present in <code>b</code> is equal to or greater than the count in <code>a</code>.</p>
<p><strong>Order preserving</strong> using an <code>OrderedCounter</code>, then manually generate the list, eg:</p>
<pre><code>from collections import Counter, OrderedDict
class OrderedCounter(Counter, OrderedDict):
pass
a = OrderedCounter([3, 2, 2, 1])
b = Counter([2, 3])
res = [k for k, v in a.items() if v - b[k] > 0]
# [2, 1]
</code></pre>
<p>Finally, if the original range contains non-unique values, and you want the elements duplicated the number of times that are left over after the subtraction, then:</p>
<pre><code>from collections import Counter, OrderedDict
class OrderedCounter(Counter, OrderedDict):
pass
a = OrderedCounter([3, 3, 2, 2, 2, 1])
b = Counter([2, 3])
res = [k for k, v in a.items() for _ in range(v - b[k])]
# [3, 2, 2, 1]
</code></pre>
| 5 | 2016-08-18T12:36:30Z | [
"python",
"performance"
] |
Python list filter performance | 39,018,238 | <p>I'm trying to do</p>
<pre><code>In [21]: l1 = range(1,1000000)
In [22]: l2 = range(100,90000)
In [23]: l1.append(101)
In [24]: print(set([x for x in l1 if l1.count(x) - l2.count(x) == 1]))
</code></pre>
<p>in my python shell this takes ages. Generally my goal is to substract a list from a second one while taking care of duplicates.</p>
<p>e.g</p>
<pre><code>[1,2,2,3] - [2,3] = [1,2]
</code></pre>
<p>I'd be very glad for any hint how to get this done in max 500ms on a regular single core machine.</p>
| 2 | 2016-08-18T12:28:13Z | 39,018,453 | <p>I think that I would use <code>Counter</code>, then convert the result of subtracting the two counters to a sorted list:</p>
<pre><code>from collections import Counter
l1 = [1, 2, 2, 3]
l2 = [2, 3]
counter_l1 = Counter(l1)
counter_l2 = Counter(l2)
res = counter_l1 - counter_l2
print(sorted(res))
>> [1, 2]
</code></pre>
| 0 | 2016-08-18T12:37:06Z | [
"python",
"performance"
] |
Python list filter performance | 39,018,238 | <p>I'm trying to do</p>
<pre><code>In [21]: l1 = range(1,1000000)
In [22]: l2 = range(100,90000)
In [23]: l1.append(101)
In [24]: print(set([x for x in l1 if l1.count(x) - l2.count(x) == 1]))
</code></pre>
<p>in my python shell this takes ages. Generally my goal is to substract a list from a second one while taking care of duplicates.</p>
<p>e.g</p>
<pre><code>[1,2,2,3] - [2,3] = [1,2]
</code></pre>
<p>I'd be very glad for any hint how to get this done in max 500ms on a regular single core machine.</p>
| 2 | 2016-08-18T12:28:13Z | 39,018,482 | <p>Count the elements first:</p>
<pre><code>from collections import defaultdict
count1 = defaultdict(int)
count2 = defaultdict(int)
for x in l1:
count1[x] += 1
for x in l2:
count2[x] += 1
print([x for x, count in count1.iteritems() if count - count2[x] == 1])
</code></pre>
<p>(don't forget to convert <code>l1</code> to a set in the last line)</p>
<p>The above code takes 625ms on my machine. (without printing results to stdout)</p>
| 0 | 2016-08-18T12:38:50Z | [
"python",
"performance"
] |
Python list filter performance | 39,018,238 | <p>I'm trying to do</p>
<pre><code>In [21]: l1 = range(1,1000000)
In [22]: l2 = range(100,90000)
In [23]: l1.append(101)
In [24]: print(set([x for x in l1 if l1.count(x) - l2.count(x) == 1]))
</code></pre>
<p>in my python shell this takes ages. Generally my goal is to substract a list from a second one while taking care of duplicates.</p>
<p>e.g</p>
<pre><code>[1,2,2,3] - [2,3] = [1,2]
</code></pre>
<p>I'd be very glad for any hint how to get this done in max 500ms on a regular single core machine.</p>
| 2 | 2016-08-18T12:28:13Z | 39,018,904 | <p>Execution times:</p>
<pre><code>import time
from functools import wraps
from collections import defaultdict
from collections import Counter, OrderedDict
class OrderedCounter(Counter, OrderedDict):pass
def tefn(fn):
t = time.time()
set(fn())
return t - time.time()
def efn(fn):
t = time.time()
fn()
return t - time.time()
def runTime(tp=0, count=10):
def dec(fn):
@wraps(fn)
def wrap():
print(fn)
res = tefn if tp else efn
return sum(res(fn) for _ in range(count)) / count
return wrap
return dec
@runTime(tp=1)
def fnList():
return [x for x in l1 if l1.count(x) - l2.count(x) == 1]
@runTime(tp=1)
def fnIter():
return iter(x for x in l1 if l1.count(x) - l2.count(x) == 1)
@runTime(tp=1)
def fnYield():
for x in l1:
if l1.count(x) - l2.count(x) == 1:
yield x
@runTime()
def fnCounter():
a = Counter(l1)
b = Counter(l2)
return list(a - b )
@runTime()
def fnOrderedCounter():
a = OrderedCounter(l1)
b = Counter(l2)
return [k for k, v in a.items() if v - b[k] > 0]
@runTime()
def fnDefaultdict():
count1 = defaultdict(int)
count2 = defaultdict(int)
for x in l1: count1[x] += 1
for x in l2: count2[x] += 1
return [x for x, count in count1.items() if count - count2[x] == 1]
if __name__ == '__main__':
l1 = range(1, 1000000)
l2 = range(100, 90000)
g = globals()
result = list((fn.__name__, fn()) for fn in (g[f] for f in g if f.startswith('fn')))
result.sort(key=lambda r: r[1], reverse=True)
for e, r in enumerate(result):
print(e, r)
</code></pre>
<p>OUT:</p>
<pre><code><function fnList at 0x02F17540>
<function fnIter at 0x034C8DF8>
<function fnCounter at 0x034C8F18>
<function fnDefaultdict at 0x034CB078>
<function fnYield at 0x034C8E88>
<function fnOrderedCounter at 0x034C8FA8>
0 ('fnYield', -0.8542306900024415)
1 ('fnList', -0.8605266094207764)
2 ('fnIter', -0.8655695915222168)
3 ('fnDefaultdict', -1.054802918434143)
4 ('fnCounter', -1.3413111925125123)
5 ('fnOrderedCounter', -5.433168196678162)
</code></pre>
| 0 | 2016-08-18T12:58:23Z | [
"python",
"performance"
] |
Python list filter performance | 39,018,238 | <p>I'm trying to do</p>
<pre><code>In [21]: l1 = range(1,1000000)
In [22]: l2 = range(100,90000)
In [23]: l1.append(101)
In [24]: print(set([x for x in l1 if l1.count(x) - l2.count(x) == 1]))
</code></pre>
<p>in my python shell this takes ages. Generally my goal is to substract a list from a second one while taking care of duplicates.</p>
<p>e.g</p>
<pre><code>[1,2,2,3] - [2,3] = [1,2]
</code></pre>
<p>I'd be very glad for any hint how to get this done in max 500ms on a regular single core machine.</p>
| 2 | 2016-08-18T12:28:13Z | 39,076,830 | <p>I learned that the merge pattern is even faster (read <a href="http://openbookproject.net/thinkcs/python/english3e/list_algorithms.html#alice-in-wonderland-again" rel="nofollow">http://openbookproject.net/thinkcs/python/english3e/list_algorithms.html#alice-in-wonderland-again</a>):</p>
<pre><code>def substract_list(origin, substract):
    """
    example: db tells us that user bought shares 1, 2, 2, 3, 4 and also sold
    1, 2, 2. we need to know that he now owns 3, 4 for 10m item

    learned from here:
    http://openbookproject.net/thinkcs/python/english3e/list_algorithms.html#alice-in-wonderland-again

    lists need to be sorted and can contain duplicates
    """
    result = []
    xi = 0  # index into origin
    yi = 0  # index into substract

    len_substract = len(substract)
    len_origin = len(origin)

    while True:
        # reached end of substract, append rest of origin, return
        if yi >= len_substract:
            result.extend(origin[xi:])
            return result

        # reached end of origin, return
        if xi >= len_origin:
            return result

        # step through the values
        if origin[xi] == substract[yi]:
            # equal values, next one pls
            yi += 1
            xi += 1
        elif origin[xi] > substract[yi]:
            yi += 1
        else:
            result.append(origin[xi])
            xi += 1
</code></pre>
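<p>A quick self-contained check of the merge approach (the function restated compactly so it can run on its own), including the question's own example:</p>

```python
def substract_list(origin, substract):
    # Same algorithm as above: both lists sorted, duplicates allowed.
    result, xi, yi = [], 0, 0
    while True:
        if yi >= len(substract):        # substract exhausted
            result.extend(origin[xi:])
            return result
        if xi >= len(origin):           # origin exhausted
            return result
        if origin[xi] == substract[yi]:
            xi += 1
            yi += 1
        elif origin[xi] > substract[yi]:
            yi += 1
        else:
            result.append(origin[xi])
            xi += 1

print(substract_list([1, 2, 2, 3, 4], [1, 2, 2]))  # [3, 4]
print(substract_list([1, 2, 2, 3], [2, 3]))        # [1, 2]
```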
<p>Counter did on my machine: 3-4s, merge pattern: 0.3s</p>
| 0 | 2016-08-22T10:12:53Z | [
"python",
"performance"
] |
Python/Tkinter - delete all objects in enclosed area | 39,018,279 | <p>I am trying to make a program where on click the program would delete all objects in enclosed area.</p>
<p>Here is my example code:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
cv = tk.Canvas(root, height=400, width=400)
cv.pack()
cv.create_rectangle(50, 50, 100, 100)
cv.create_line(60, 60, 80, 80)
cv.create_line(60, 80, 80, 60)
def onclick():
todel = cv.find_enclosed(50, 50, 100, 100)
cv.delete(todel)
cv.bind("<Button-1>", onclick())
root.mainloop()
</code></pre>
<p>On click it should delete the two lines in the rectangle, but for some reason it does not. How can I make this happen?</p>
| 0 | 2016-08-18T12:30:23Z | 39,018,409 | <p>You have to apply <code>delete</code> to all items of the list</p>
<pre><code>for d in todel:
cv.delete(d)
</code></pre>
<p>or</p>
<pre><code>any(map(cv.delete,todel))
</code></pre>
| 2 | 2016-08-18T12:35:01Z | [
"python",
"canvas",
"tkinter"
] |
How to improve my coding style for assigning a new value? | 39,018,292 | <p>Is there a better way to write the third line of the following code?</p>
<pre><code>from datetime import date
d = date.today()
d = d.replace(day=1)
</code></pre>
| -1 | 2016-08-18T12:30:46Z | 39,018,598 | <p>How about</p>
<pre><code>d = date.today().replace(day=1)
</code></pre>
| 1 | 2016-08-18T12:44:34Z | [
"python",
"pep8"
] |
How to improve my coding style for assigning a new value? | 39,018,292 | <p>Is there a better way to write the third line of the following code?</p>
<pre><code>from datetime import date
d = date.today()
d = d.replace(day=1)
</code></pre>
| -1 | 2016-08-18T12:30:46Z | 39,018,895 | <p>You did it right.</p>
<p>another suggestion:</p>
<pre><code>current_date=date.today()
fixed_date=date(current_date.year, current_date.month, 1)
</code></pre>
| 0 | 2016-08-18T12:58:01Z | [
"python",
"pep8"
] |
super() gives an error in Python 2 | 39,018,313 | <p>I just started learning Python and I don't quite understand where the problem in this code is. I have a base class Proband with two methods and I want to create a subclass Gesunder and I want to override the attributes idn,artefakte. </p>
<pre><code>import scipy.io
class Proband:
def __init__(self,idn,artefakte):
self.__idn = idn
self.artefakte = artefakte
def getData(self):
path = 'C:\matlab\EKGnurbild_von Proband'+ str(self.idn)
return scipy.io.loadmat(path)
def __eq__(self,neueProband):
return self.idn == neueProband.idn and self.artefakte == neueProband.artefakte
class Gesunder(Proband):
def __init__(self,idn,artefakte,sportler):
super().__init__(self,idn,artefakte)
self.__sportler = sportler
hans = Gesunder(2,3,3)
</code></pre>
 | -1 | 2016-08-18T12:31:25Z | 39,018,362 | <p>In Python 2, <code>super()</code> on its own is invalid. Instead, you must use <code>super(ClassName, self)</code>. Also drop the extra <code>self</code> argument; the bound call supplies it automatically:</p>
<pre><code>super(Gesunder, self).__init__(idn, artefakte)
</code></pre>
| 0 | 2016-08-18T12:33:19Z | [
"python",
"subclass",
"super"
] |
super() gives an error in Python 2 | 39,018,313 | <p>I just started learning Python and I don't quite understand where the problem in this code is. I have a base class Proband with two methods and I want to create a subclass Gesunder and I want to override the attributes idn,artefakte. </p>
<pre><code>import scipy.io
class Proband:
def __init__(self,idn,artefakte):
self.__idn = idn
self.artefakte = artefakte
def getData(self):
path = 'C:\matlab\EKGnurbild_von Proband'+ str(self.idn)
return scipy.io.loadmat(path)
def __eq__(self,neueProband):
return self.idn == neueProband.idn and self.artefakte == neueProband.artefakte
class Gesunder(Proband):
def __init__(self,idn,artefakte,sportler):
super().__init__(self,idn,artefakte)
self.__sportler = sportler
hans = Gesunder(2,3,3)
</code></pre>
 | -1 | 2016-08-18T12:31:25Z | 39,018,371 | <p>The <code>super()</code> call should be modified to (note that the extra <code>self</code> is dropped, since the bound call supplies it):</p>
<pre><code>super(Gesunder, self).__init__(idn, artefakte)
</code></pre>
| 0 | 2016-08-18T12:33:42Z | [
"python",
"subclass",
"super"
] |
super() gives an error in Python 2 | 39,018,313 | <p>I just started learning Python and I don't quite understand where the problem in this code is. I have a base class Proband with two methods and I want to create a subclass Gesunder and I want to override the attributes idn,artefakte. </p>
<pre><code>import scipy.io
class Proband:
def __init__(self,idn,artefakte):
self.__idn = idn
self.artefakte = artefakte
def getData(self):
path = 'C:\matlab\EKGnurbild_von Proband'+ str(self.idn)
return scipy.io.loadmat(path)
def __eq__(self,neueProband):
return self.idn == neueProband.idn and self.artefakte == neueProband.artefakte
class Gesunder(Proband):
def __init__(self,idn,artefakte,sportler):
super().__init__(self,idn,artefakte)
self.__sportler = sportler
hans = Gesunder(2,3,3)
</code></pre>
| -1 | 2016-08-18T12:31:25Z | 39,018,545 | <p>You have 2 problems in your code. In python 2:</p>
<ol>
<li><code>super()</code> takes 2 arguments: the class name, and the instance</li>
<li>in order to use <code>super()</code>, the base class must inherit from <code>object</code></li>
</ol>
<p>So your code becomes:</p>
<pre><code>import scipy.io
class Proband(object):
def __init__(self,idn,artefakte):
self.__idn = idn
self.artefakte = artefakte
def getData(self):
path = 'C:\matlab\EKGnurbild_von Proband'+ str(self.idn)
return scipy.io.loadmat(path)
def __eq__(self,neueProband):
return self.idn == neueProband.idn and self.artefakte == neueProband.artefakte
class Gesunder(Proband):
def __init__(self,idn,artefakte,sportler):
super(Gesunder, self).__init__(idn,artefakte)
self.__sportler = sportler
hans = Gesunder(2,3,3)
</code></pre>
<p>Note that the call to <code>super(Gesunder, self).__init__</code> does not have <code>self</code> as the first argument.</p>
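<p>A stripped-down sketch of the corrected classes (attributes simplified for brevity); the two-argument <code>super(Gesunder, self)</code> form is required in Python 2 and still works in Python 3:</p>

```python
class Proband(object):              # new-style base class, required in Python 2
    def __init__(self, idn, artefakte):
        self.idn = idn
        self.artefakte = artefakte

class Gesunder(Proband):
    def __init__(self, idn, artefakte, sportler):
        # no extra self in the argument list -- the bound call supplies it
        super(Gesunder, self).__init__(idn, artefakte)
        self.sportler = sportler

hans = Gesunder(2, 3, 3)
assert (hans.idn, hans.artefakte, hans.sportler) == (2, 3, 3)
```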
| 2 | 2016-08-18T12:42:14Z | [
"python",
"subclass",
"super"
] |
django test RequestFactory cannot get route parameter to work | 39,018,431 | <p>I have a problem that I don't know how to solve. </p>
<p>urls.py:</p>
<pre><code>urlpatterns = [
url(r'athletes/search$', SearchAthletes.as_view()),
url(r'athletes/([0-9]+)$', ViewAthlete.as_view())
]
</code></pre>
<p>views.py:</p>
<pre><code>class ViewAthlete(APIView):
def get(self, request, id, format=None):
athlete = Athlete.objects.get(id=id)
serializer = AthleteSerializer(athlete)
return Response(serializer.data)
</code></pre>
<p>test.py:</p>
<pre><code>def test_view_athlete(self):
tmp = Athlete.objects.order_by('?')[0]
request = self.factory.get('/_api/v1/athletes/' + str(tmp.id))
request.user = AnonymousUser()
response = ViewAthlete.as_view()(request)
self.assertEquals(response.data.id, tmp.id)
</code></pre>
<p>I keep getting the following error:</p>
<blockquote>
<p>Traceback (most recent call last):
File "/tests.py", line 44, in test_view_athlete
response = ViewAthlete.as_view()(request)</p>
<p>File "/venv/lib/python3.5/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)</p>
<p>File "/venv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)</p>
<p>File "/venv/lib/python3.5/site-packages/rest_framework/views.py", line 474, in dispatch
response = self.handle_exception(exc)</p>
<p>File "/venv/lib/python3.5/site-packages/rest_framework/views.py", line 471, in dispatch
response = handler(request, *args, **kwargs)
TypeError: get() missing 1 required positional argument: 'id'</p>
</blockquote>
<p>To my understanding, the problem is that there is no <code>id</code> parameter passed to the <code>get</code> function of the <code>ViewAthlete</code> view class. What is the reason for this? In the development environment (not testing) it displays the data, but the testing environment doesn't recognize the arguments from the route.</p>
 | 0 | 2016-08-18T12:36:10Z | 39,018,687 | <p>AFAIK <code>urlpatterns</code> are only considered when testing through the full Django request stack, e.g. through <code>django.test.Client</code>, using its <code>get</code>/<code>post</code> methods</p>
<p>When testing your view directly (<code>MyView.as_view()(request)</code>) the whole url resolver logic is bypassed, and then the <code>args</code>/<code>kwargs</code> need to be supplied by the caller (e.g.: <code>MyView.as_view()(request, 'arg1', 'arg2', id='34')</code>)</p>
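<p>A Django-free sketch of the idea, with a hypothetical <code>view</code> function standing in for the view's handler: the URL resolver is what normally extracts <code>id</code> from the path, so a direct call must supply it explicitly:</p>

```python
def view(request, id):             # stand-in for ViewAthlete's get handler
    return {"id": id}

request = object()                 # stand-in for the factory-built request
# view(request)                    # would raise TypeError: missing 'id'
response = view(request, id="42")  # supply what the resolver would have passed
assert response == {"id": "42"}
```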
| 2 | 2016-08-18T12:48:23Z | [
"python",
"django"
] |
django test RequestFactory cannot get route parameter to work | 39,018,431 | <p>I have a problem that I don't know how to solve. </p>
<p>urls.py:</p>
<pre><code>urlpatterns = [
url(r'athletes/search$', SearchAthletes.as_view()),
url(r'athletes/([0-9]+)$', ViewAthlete.as_view())
]
</code></pre>
<p>views.py:</p>
<pre><code>class ViewAthlete(APIView):
def get(self, request, id, format=None):
athlete = Athlete.objects.get(id=id)
serializer = AthleteSerializer(athlete)
return Response(serializer.data)
</code></pre>
<p>test.py:</p>
<pre><code>def test_view_athlete(self):
tmp = Athlete.objects.order_by('?')[0]
request = self.factory.get('/_api/v1/athletes/' + str(tmp.id))
request.user = AnonymousUser()
response = ViewAthlete.as_view()(request)
self.assertEquals(response.data.id, tmp.id)
</code></pre>
<p>I keep getting the following error:</p>
<blockquote>
<p>Traceback (most recent call last):
File "/tests.py", line 44, in test_view_athlete
response = ViewAthlete.as_view()(request)</p>
<p>File "/venv/lib/python3.5/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)</p>
<p>File "/venv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)</p>
<p>File "/venv/lib/python3.5/site-packages/rest_framework/views.py", line 474, in dispatch
response = self.handle_exception(exc)</p>
<p>File "/venv/lib/python3.5/site-packages/rest_framework/views.py", line 471, in dispatch
response = handler(request, *args, **kwargs)
TypeError: get() missing 1 required positional argument: 'id'</p>
</blockquote>
<p>To my understanding, the problem is that there is no <code>id</code> parameter passed to the <code>get</code> function of the <code>ViewAthlete</code> view class. What is the reason for this? In the development environment (not testing) it displays the data, but the testing environment doesn't recognize the arguments from the route.</p>
| 0 | 2016-08-18T12:36:10Z | 39,018,854 | <p>As zsepi says, your URLs aren't used here. To avoid repeating the arguments, rather than calling the view directly you could use the test client to "call" the URL: another advantage of doing this is that the middleware runs, so you don't need to assign the user attribute separately.</p>
<pre><code>response = self.client.get('/_api/v1/athletes/' + str(tmp.id))
</code></pre>
| 0 | 2016-08-18T12:56:03Z | [
"python",
"django"
] |
Padding elements of a numpy array | 39,018,476 | <p>Let's say that I have the following <code>numpy array</code>:</p>
<pre><code>[[1,1,1]
[1,1,1]
[1,1,1]]
</code></pre>
<p>And I need to pad each element in the array with a zero on either side (rather than <code>numpy.pad()</code>, which pads rows and columns). This ends up as follows:</p>
<pre><code>[ [0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0] ]
</code></pre>
<p>Is there a more efficient way to do this than creating an empty array and using nested loops? </p>
<p><strong>Note:</strong> My preference is as fast and memory light as possible. Individual arrays can be up to 12000^2 elements and I'm working with 16 of them at the same time so my margins are pretty thin in 32 bit</p>
<p><strong>Edit:</strong> I should have specified that the padding is not always 1; it must be variable, as I am up-sampling data by a factor dependent on the array with the highest resolution. Given 3 arrays with the shapes (121,121), (1200,1200) and (12010,12010), I need to be able to pad the first two arrays to a shape of (12010,12010). (I know that these numbers don't share common factors; this isn't a problem, as being within an index or two of the actual location is acceptable. This padding is just needed to get them into the same shape; rounding out the numbers by padding the rows at the ends is acceptable.)</p>
<p><strong>Working Solution:</strong> an adjustment of @Kasramvd solution does the trick. Here is the code that fits my application of the problem.</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3],[1, 2, 3], [1, 2, 3]])
print(a)
x, y = a.shape
factor = 3
indices = np.repeat(np.arange(y + 1), 1*factor*2)[1*factor:-1*factor]
a=np.insert(a, indices, 0, axis=1)
print(a)
</code></pre>
<p>results in:</p>
<pre><code> [[1 2 3]
[1 2 3]
[1 2 3]]
[[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]]
</code></pre>
| 3 | 2016-08-18T12:38:35Z | 39,018,798 | <p>Flatten the array, convert each 1 to [0, 1, 0], then reshape again to 3 rows. In the following code the ones array is in var a:</p>
<pre><code>import numpy as np

a = np.ones([3, 3], dtype=int)  # int dtype so the printed output below matches
b = [[0, x, 0] for x in a.ravel()]
c = np.reshape(b, (a.shape[0], -1))
print(c)
</code></pre>
<p>Output:</p>
<pre><code>[[0 1 0 0 1 0 0 1 0]
[0 1 0 0 1 0 0 1 0]
[0 1 0 0 1 0 0 1 0]]
</code></pre>
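<p>For readers without NumPy at hand, the same interleaving can be sketched in pure Python for a single row, with a variable pad length <code>n</code> (this only illustrates the layout; it is far slower than the vectorized approaches for 12000^2 arrays):</p>

```python
# Each value becomes: n zeros, the value, n zeros.
def pad_row(row, n):
    out = []
    for v in row:
        out.extend([0] * n + [v] + [0] * n)
    return out

assert pad_row([1, 1, 1], 1) == [0, 1, 0, 0, 1, 0, 0, 1, 0]
assert pad_row([1, 2, 3], 2) == [0, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 3, 0, 0]
```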
| 1 | 2016-08-18T12:53:10Z | [
"python",
"arrays",
"numpy"
] |
Padding elements of a numpy array | 39,018,476 | <p>Let's say that I have the following <code>numpy array</code>:</p>
<pre><code>[[1,1,1]
[1,1,1]
[1,1,1]]
</code></pre>
<p>And I need to pad each element in the array with a zero on either side (rather than <code>numpy.pad()</code>, which pads rows and columns). This ends up as follows:</p>
<pre><code>[ [0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0] ]
</code></pre>
<p>Is there a more efficient way to do this than creating an empty array and using nested loops? </p>
<p><strong>Note:</strong> My preference is as fast and memory light as possible. Individual arrays can be up to 12000^2 elements and I'm working with 16 of them at the same time so my margins are pretty thin in 32 bit</p>
<p><strong>Edit:</strong> I should have specified that the padding is not always 1; it must be variable, as I am up-sampling data by a factor dependent on the array with the highest resolution. Given 3 arrays with the shapes (121,121), (1200,1200) and (12010,12010), I need to be able to pad the first two arrays to a shape of (12010,12010). (I know that these numbers don't share common factors; this isn't a problem, as being within an index or two of the actual location is acceptable. This padding is just needed to get them into the same shape; rounding out the numbers by padding the rows at the ends is acceptable.)</p>
<p><strong>Working Solution:</strong> an adjustment of @Kasramvd solution does the trick. Here is the code that fits my application of the problem.</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3],[1, 2, 3], [1, 2, 3]])
print(a)
x, y = a.shape
factor = 3
indices = np.repeat(np.arange(y + 1), 1*factor*2)[1*factor:-1*factor]
a=np.insert(a, indices, 0, axis=1)
print(a)
</code></pre>
<p>results in:</p>
<pre><code> [[1 2 3]
[1 2 3]
[1 2 3]]
[[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]]
</code></pre>
| 3 | 2016-08-18T12:38:35Z | 39,018,893 | <p>You can create the related indices with <code>np.repeat</code> based on array's shape, then insert the 0 in that indices.</p>
<pre><code>>>> def padder(arr, n):
... x, y = arr.shape
... indices = np.repeat(np.arange(y+1), n*2)[n:-n]
... return np.insert(arr, indices, 0, axis=1)
...
>>>
>>> padder(a, 1)
array([[0, 1, 0, 0, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 0, 0, 1, 0]])
>>>
>>> padder(a, 2)
array([[0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]])
>>> padder(a, 3)
array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]])
</code></pre>
<p>Aforementioned approach in one line:</p>
<pre><code>np.insert(a, np.repeat(np.arange(a.shape[1] + 1), n*2)[n:-n], 0, axis=1)
</code></pre>
| 1 | 2016-08-18T12:57:56Z | [
"python",
"arrays",
"numpy"
] |
Padding elements of a numpy array | 39,018,476 | <p>Let's say that I have the following <code>numpy array</code>:</p>
<pre><code>[[1,1,1]
[1,1,1]
[1,1,1]]
</code></pre>
<p>And I need to pad each element in the array with a zero on either side (rather than <code>numpy.pad()</code>, which pads rows and columns). This ends up as follows:</p>
<pre><code>[ [0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0] ]
</code></pre>
<p>Is there a more efficient way to do this than creating an empty array and using nested loops? </p>
<p><strong>Note:</strong> My preference is as fast and memory light as possible. Individual arrays can be up to 12000^2 elements and I'm working with 16 of them at the same time so my margins are pretty thin in 32 bit</p>
<p><strong>Edit:</strong> I should have specified that the padding is not always 1; it must be variable, as I am up-sampling data by a factor dependent on the array with the highest resolution. Given 3 arrays with the shapes (121,121), (1200,1200) and (12010,12010), I need to be able to pad the first two arrays to a shape of (12010,12010). (I know that these numbers don't share common factors; this isn't a problem, as being within an index or two of the actual location is acceptable. This padding is just needed to get them into the same shape; rounding out the numbers by padding the rows at the ends is acceptable.)</p>
<p><strong>Working Solution:</strong> an adjustment of @Kasramvd solution does the trick. Here is the code that fits my application of the problem.</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3],[1, 2, 3], [1, 2, 3]])
print(a)
x, y = a.shape
factor = 3
indices = np.repeat(np.arange(y + 1), 1*factor*2)[1*factor:-1*factor]
a=np.insert(a, indices, 0, axis=1)
print(a)
</code></pre>
<p>results in:</p>
<pre><code> [[1 2 3]
[1 2 3]
[1 2 3]]
[[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]]
</code></pre>
| 3 | 2016-08-18T12:38:35Z | 39,018,913 | <p>Here's an approach using <code>zeros-initialization</code> -</p>
<pre><code>def padcols(arr,padlen):
N = 1+2*padlen
m,n = arr.shape
out = np.zeros((m,N*n),dtype=arr.dtype)
out[:,padlen+np.arange(n)*N] = arr
return out
</code></pre>
<p>Sample run -</p>
<pre><code>In [118]: arr
Out[118]:
array([[21, 14, 23],
[52, 70, 90],
[40, 57, 11],
[71, 33, 78]])
In [119]: padcols(arr,1)
Out[119]:
array([[ 0, 21, 0, 0, 14, 0, 0, 23, 0],
[ 0, 52, 0, 0, 70, 0, 0, 90, 0],
[ 0, 40, 0, 0, 57, 0, 0, 11, 0],
[ 0, 71, 0, 0, 33, 0, 0, 78, 0]])
In [120]: padcols(arr,2)
Out[120]:
array([[ 0, 0, 21, 0, 0, 0, 0, 14, 0, 0, 0, 0, 23, 0, 0],
[ 0, 0, 52, 0, 0, 0, 0, 70, 0, 0, 0, 0, 90, 0, 0],
[ 0, 0, 40, 0, 0, 0, 0, 57, 0, 0, 0, 0, 11, 0, 0],
[ 0, 0, 71, 0, 0, 0, 0, 33, 0, 0, 0, 0, 78, 0, 0]])
</code></pre>
<hr>
<h2>Benchmarking</h2>
<p>In this section, I am benchmarking on runtime and memory usage the approach posted in this post : <code>padcols</code> and <a href="http://stackoverflow.com/a/39018893/3293881">@Kasramvd's solution func : <code>padder</code></a> on a decent sized array for various padding lengths.</p>
<p><strong>Timing profiling</strong></p>
<pre><code>In [151]: arr = np.random.randint(10,99,(300,300))
# Representative of original `3x3` sized array just bigger
In [152]: %timeit padder(arr,1)
100 loops, best of 3: 3.56 ms per loop
In [153]: %timeit padcols(arr,1)
100 loops, best of 3: 2.13 ms per loop
In [154]: %timeit padder(arr,2)
100 loops, best of 3: 5.82 ms per loop
In [155]: %timeit padcols(arr,2)
100 loops, best of 3: 3.66 ms per loop
In [156]: %timeit padder(arr,3)
100 loops, best of 3: 7.83 ms per loop
In [157]: %timeit padcols(arr,3)
100 loops, best of 3: 5.11 ms per loop
</code></pre>
<p><strong>Memory profiling</strong></p>
<p>Script used for these memory tests -</p>
<pre><code>import numpy as np
from memory_profiler import profile
arr = np.random.randint(10,99,(300,300))
padlen = 1 # Edited to 1,2,3 for the three cases
n = padlen
@profile(precision=10)
def padder():
x, y = arr.shape
indices = np.repeat(np.arange(y+1), n*2)[n:-n]
return np.insert(arr, indices, 0, axis=1)
@profile(precision=10)
def padcols():
N = 1+2*padlen
m,n = arr.shape
out = np.zeros((m,N*n),dtype=arr.dtype)
out[:,padlen+np.arange(n)*N] = arr
return out
if __name__ == '__main__':
padder()
if __name__ == '__main__':
padcols()
</code></pre>
<p>Memory usage output -</p>
<p>Case # 1:</p>
<pre><code>$ python -m memory_profiler timing_pads.py
Filename: timing_pads.py
Line # Mem usage Increment Line Contents
================================================
8 42.4492187500 MiB 0.0000000000 MiB @profile(precision=10)
9 def padder():
10 42.4492187500 MiB 0.0000000000 MiB x, y = arr.shape
11 42.4492187500 MiB 0.0000000000 MiB indices = np.repeat(np.arange(y+1), n*2)[n:-n]
12 44.7304687500 MiB 2.2812500000 MiB return np.insert(arr, indices, 0, axis=1)
Filename: timing_pads.py
Line # Mem usage Increment Line Contents
================================================
14 42.8750000000 MiB 0.0000000000 MiB @profile(precision=10)
15 def padcols():
16 42.8750000000 MiB 0.0000000000 MiB N = 1+2*padlen
17 42.8750000000 MiB 0.0000000000 MiB m,n = arr.shape
18 42.8750000000 MiB 0.0000000000 MiB out = np.zeros((m,N*n),dtype=arr.dtype)
19 44.6757812500 MiB 1.8007812500 MiB out[:,padlen+np.arange(n)*N] = arr
20 44.6757812500 MiB 0.0000000000 MiB return out
</code></pre>
<p>Case # 2:</p>
<pre><code>$ python -m memory_profiler timing_pads.py
Filename: timing_pads.py
Line # Mem usage Increment Line Contents
================================================
8 42.3710937500 MiB 0.0000000000 MiB @profile(precision=10)
9 def padder():
10 42.3710937500 MiB 0.0000000000 MiB x, y = arr.shape
11 42.3710937500 MiB 0.0000000000 MiB indices = np.repeat(np.arange(y+1), n*2)[n:-n]
12 46.2421875000 MiB 3.8710937500 MiB return np.insert(arr, indices, 0, axis=1)
Filename: timing_pads.py
Line # Mem usage Increment Line Contents
================================================
14 42.8476562500 MiB 0.0000000000 MiB @profile(precision=10)
15 def padcols():
16 42.8476562500 MiB 0.0000000000 MiB N = 1+2*padlen
17 42.8476562500 MiB 0.0000000000 MiB m,n = arr.shape
18 42.8476562500 MiB 0.0000000000 MiB out = np.zeros((m,N*n),dtype=arr.dtype)
19 46.1289062500 MiB 3.2812500000 MiB out[:,padlen+np.arange(n)*N] = arr
20 46.1289062500 MiB 0.0000000000 MiB return out
</code></pre>
<p>Case # 3:</p>
<pre><code>$ python -m memory_profiler timing_pads.py
Filename: timing_pads.py
Line # Mem usage Increment Line Contents
================================================
8 42.3906250000 MiB 0.0000000000 MiB @profile(precision=10)
9 def padder():
10 42.3906250000 MiB 0.0000000000 MiB x, y = arr.shape
11 42.3906250000 MiB 0.0000000000 MiB indices = np.repeat(np.arange(y+1), n*2)[n:-n]
12 47.4765625000 MiB 5.0859375000 MiB return np.insert(arr, indices, 0, axis=1)
Filename: timing_pads.py
Line # Mem usage Increment Line Contents
================================================
14 42.8945312500 MiB 0.0000000000 MiB @profile(precision=10)
15 def padcols():
16 42.8945312500 MiB 0.0000000000 MiB N = 1+2*padlen
17 42.8945312500 MiB 0.0000000000 MiB m,n = arr.shape
18 42.8945312500 MiB 0.0000000000 MiB out = np.zeros((m,N*n),dtype=arr.dtype)
19 47.4648437500 MiB 4.5703125000 MiB out[:,padlen+np.arange(n)*N] = arr
20 47.4648437500 MiB 0.0000000000 MiB return out
</code></pre>
| 6 | 2016-08-18T12:58:55Z | [
"python",
"arrays",
"numpy"
] |
Padding elements of a numpy array | 39,018,476 | <p>Let's say that I have the following <code>numpy array</code>:</p>
<pre><code>[[1,1,1]
[1,1,1]
[1,1,1]]
</code></pre>
<p>And I need to pad each element in the array with a zero on either side (rather than <code>numpy.pad()</code>, which pads rows and columns). This ends up as follows:</p>
<pre><code>[ [0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0]
[0,1,0,0,1,0,0,1,0] ]
</code></pre>
<p>Is there a more efficient way to do this than creating an empty array and using nested loops? </p>
<p><strong>Note:</strong> My preference is as fast and memory light as possible. Individual arrays can be up to 12000^2 elements and I'm working with 16 of them at the same time so my margins are pretty thin in 32 bit</p>
<p><strong>Edit:</strong> I should have specified that the padding is not always 1; it must be variable, as I am up-sampling data by a factor dependent on the array with the highest resolution. Given 3 arrays with the shapes (121,121), (1200,1200) and (12010,12010), I need to be able to pad the first two arrays to a shape of (12010,12010). (I know that these numbers don't share common factors; this isn't a problem, as being within an index or two of the actual location is acceptable. This padding is just needed to get them into the same shape; rounding out the numbers by padding the rows at the ends is acceptable.)</p>
<p><strong>Working Solution:</strong> an adjustment of @Kasramvd solution does the trick. Here is the code that fits my application of the problem.</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3],[1, 2, 3], [1, 2, 3]])
print(a)
x, y = a.shape
factor = 3
indices = np.repeat(np.arange(y + 1), 1*factor*2)[1*factor:-1*factor]
a=np.insert(a, indices, 0, axis=1)
print(a)
</code></pre>
<p>results in:</p>
<pre><code> [[1 2 3]
[1 2 3]
[1 2 3]]
[[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]
[0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 3 0 0 0]]
</code></pre>
| 3 | 2016-08-18T12:38:35Z | 39,028,460 | <p>A problem with time and memory comparison of these methods is that it treats <code>insert</code> as a blackbox. But that function is Python code that can be read and replicated. While it can handle various kinds of inputs, in this case I think it</p>
<ul>
<li>generates a <code>new</code> target array of the right size</li>
<li>calculates the indices of the columns that take the fill value</li>
<li>creates a mask for the old values</li>
<li>copies the fill values to new</li>
<li>copies the old values to new</li>
</ul>
<p>There's no way that <code>insert</code> can be more efficient than <code>Divakar's</code> <code>padcols</code>.</p>
<p>Let's see if I can clearly replicate <code>insert</code>:</p>
<pre><code>In [255]: indices = np.repeat(np.arange(y + 1), 1*factor*2)[1*factor:-1*factor]
In [256]: indices
Out[256]: array([0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3])
In [257]: numnew = len(indices)
In [258]: order = indices.argsort(kind='mergesort')
In [259]: indices[order] += np.arange(numnew)
In [260]: indices
Out[260]:
array([ 0, 1, 2, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 18, 19,
20])
</code></pre>
<p>these are the columns that will take the <code>0</code> fill value.</p>
<pre><code>In [266]: new = np.empty((3,21),a.dtype)
In [267]: new[:,indices] = 0 # fill
# in this case with a lot of fills
# np.zeros((3,21),a.dtype) would be just as good
In [270]: old_mask = np.ones((21,),bool)
In [271]: old_mask[indices] = False
In [272]: new[:,old_mask] = a
In [273]: new
Out[273]:
array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0]])
</code></pre>
<p>The main difference from <code>padcols</code> is this uses a boolean mask for indexing as opposed to column numbers.</p>
| 0 | 2016-08-18T22:28:21Z | [
"python",
"arrays",
"numpy"
] |
How to build Side Bar Menu which shows on mouse hover on pyqt? | 39,018,549 | <p>I have been trying to build an app which has a dynamic side bar. I am using Qt Designer and PyQt5. I tried two ways:</p>
<p>1) Using a stylesheet with hover. But the sidebar (frame) has buttons on it, and when the bar appears I can only see the button the mouse is over.</p>
<p>stylesheetcode:</p>
<pre><code>QFrame {
background-color: rgba(255, 151, 231, 0);
}
QFrame:hover {
background-color:rgba(255, 151, 231, 0.45)
}
QPushButton {
color: rgba(0, 0, 0, 0);
background-color: rgba(255, 151, 231, 0);
}
QPushButton:hover {
background-color:rgba(255, 151, 231, 0.45);
}
</code></pre>
<p>output: (the mouse is over the button)</p>
<p><a href="http://i.stack.imgur.com/Kke2Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/Kke2Q.png" alt="enter image description here"></a></p>
<p>I know why it happens but I don't know how to solve it.</p>
<p>2) Using Events</p>
<p>Using events I can achieve what I want, but I am not sure if I am doing the right thing.</p>
<pre><code> class TelaPrincipal(QDialog, QMainWindow, Ui_Form):
def __init__(self, parent=None):
super(TelaPrincipal, self).__init__(parent)
QDialog.__init__(self, parent)
Ui_Form.__init__(self)
self.setupUi(self)
        # -- auto-hiding side bar --
        # install this EventFilter
qApp.installEventFilter(self)
QtCore.QTimer.singleShot(0, self.frame.hide)
def eventFilter(self, source, event):
# do not hide frame when frame shown
if qApp.activePopupWidget() is None:
if event.type() == QtCore.QEvent.MouseMove:
if self.frame.isHidden():
self.frame.show()
rect = self.geometry()
# set mouse-sensitive zone
rect.setHeight(25)
if rect.contains(event.globalPos()):
self.frame.show()
else:
rect = QtCore.QRect(
self.frame.mapToGlobal(QtCore.QPoint(0, 0)),
self.frame.size())
if not rect.contains(event.globalPos()):
self.frame.hide()
elif event.type() == QtCore.QEvent.Leave and source is self:
self.frame.hide()
return QMainWindow.eventFilter(self, source, event)
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
main = TelaPrincipal()
main.show()
sys.exit(app.exec())
</code></pre>
<p>What do you suggest me ??</p>
| 1 | 2016-08-18T12:42:22Z | 39,079,379 | <p>I spent a little time reading the docs. The simplest way I found was this:</p>
<p>In the class <code>__init__</code> I installed a filter, then I override the <code>eventFilter</code> method.</p>
<pre><code>class MainScreen(QMainWindow, QDialog, Ui_MainScreen):
def __init__(self, parent=None):
super(MainScreen, self).__init__(parent)
QDialog.__init__(self, parent)
QMainWindow.__init__(self, parent)
Ui_MainScreen.__init__(self)
self.setupUi(self)
# ---- side bar ---- #
        # install an event filter
qApp.installEventFilter(self)
# hide the bar
QTimer.singleShot(0, self.plano_barra.hide)
self.plano_central.setMouseTracking(1)
self.plano_botao_barra.setMouseTracking(1)
self.setMouseTracking(1)
self.plano_principal.setMouseTracking(1)
def eventFilter(self, source, event):
if event.type() == QEvent.MouseMove:
if source == self.plano_botao_barra:
print("i am over the button")
self.plano_barra.show()
if source == self.plano_central:
print("i am at the main plan")
self.plano_barra.hide()
return QMainWindow.eventFilter(self, source, event)
</code></pre>
| 0 | 2016-08-22T12:19:07Z | [
"python",
"qt",
"pyqt",
"qt-designer"
] |
Django: passing variable to simple_tag other than id fails | 39,018,599 | <p>Situation is simple: I want to show a specific object (model Block) in a template like this: <code>{% block_by_name editorial as b %} {{ b.title }}</code> or, preferably with a filter like this <code>{{ block.title|get_by_name:editorial }}</code>.</p>
<p>I succeeded with a simple_tag.</p>
<h3>Getting items by id works fine:</h3>
<pre><code># in templatetags
@register.simple_tag
def block_by_id(id=1):
b = Block.objects.get(id=id)
return b
# in html template it get block with id 3 and shows it OK
{% block_by_id 3 as b %} {{ b.title }}
</code></pre>
<p>However, when I want to get blocks by names or tags like below, </p>
<h3>Getting items by name fails</h3>
<pre><code>#
@register.simple_tag
def block_by_name(n="default_name"):
b = Block.objects.get(name=n)
return b
# in html template it fails to get block with name "editorial"
{% block_by_name editorial as b %} {{ b.title }}
</code></pre>
<p>Django shows the error <code>Block matching query does not exist</code>
because it assumes that the variable <code>n</code> is an empty string, though I passed it "editorial".</p>
<p>The traceback:</p>
<pre><code> b = Block.objects.get(name=n)
...
▼ Local vars
Variable    Value
n           ''
</code></pre>
<p>Not sure why this happens.
How can I pass the variable so that it does not disappear?</p>
| 0 | 2016-08-18T12:44:37Z | 39,018,711 | <p>But you didn't pass <code>"editorial"</code>, you passed <code>editorial</code>. That's a variable, which does not exist. Use the string.</p>
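<p>Concretely, quoting the literal in the template makes it a string rather than a lookup of a (missing) context variable. A minimal sketch using the tag from the question:</p>

```
{% block_by_name "editorial" as b %} {{ b.title }}
```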
| 0 | 2016-08-18T12:49:14Z | [
"python",
"django",
"django-templates",
"django-custom-tags"
] |
Sorting separately over different attributes in django | 39,018,728 | <p>I have a mongodb collection of ~80,000 documents. I want to display the top 200 documents per attribute on a webpage, where the user can sort by different attributes by using a dropdown menu on the webpage.</p>
<p>At the moment I'm returning a queryset per attribute and then combining them, but this is super slow: </p>
<pre><code>from itertools import chain
tall_people = People.objects().order_by('-height')[:200]
heavy_people = People.objects().order_by('-weight')[:200]
old_people = People.objects().order_by('-age')[:200]
rich_people = People.objects().order_by('-wealth')[:200]
people = list(set(chain(tall_people, heavy_people, old_people, rich_people)))
for person in people:
do something....
</code></pre>
<p>Is there a more efficient way to do this?</p>
 | 1 | 2016-08-18T12:50:04Z | 39,019,056 | <p>I think (haven't tested) you can use the initial queryset object and sort it by different attributes with Python's <code>sorted</code>; note <code>reverse=True</code> to match the descending <code>order_by('-...')</code>:</p>
<pre><code>all_people = People.objects().all()[:200]
tall_people = sorted(all_people, key=lambda person: person.height, reverse=True)
heavy_people = sorted(all_people, key=lambda person: person.weight, reverse=True)
old_people = sorted(all_people, key=lambda person: person.age, reverse=True)
rich_people = sorted(all_people, key=lambda person: person.wealth, reverse=True)
</code></pre>
<p>This only saves on database I/O overhead.</p>
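<p>If the objects are already in memory, the stdlib's <code>heapq.nlargest</code> gives the descending top-k per attribute without a full sort per key. A small sketch with stand-in objects (two items instead of 200; the attribute values are made up for illustration):</p>

```python
import heapq
from types import SimpleNamespace

# Stand-ins for People documents; only height/weight shown for brevity.
people = [SimpleNamespace(height=h, weight=w)
          for h, w in [(150, 90), (190, 60), (170, 80)]]

tall_people = heapq.nlargest(2, people, key=lambda p: p.height)
heavy_people = heapq.nlargest(2, people, key=lambda p: p.weight)
assert [p.height for p in tall_people] == [190, 170]
assert [p.weight for p in heavy_people] == [90, 80]
```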
| 0 | 2016-08-18T13:06:07Z | [
"python",
"django",
"mongodb",
"mongoengine"
] |
Sorting separately over different attributes in django | 39,018,728 | <p>I have a mongodb collection of ~80,000 documents. I want to display the top 200 documents per attribute on a webpage, where the user can sort by different attributes by using a dropdown menu on the webpage.</p>
<p>At the moment I'm returning a queryset per attribute and then combining them, but this is super slow: </p>
<pre><code>from itertools import chain
tall_people = People.objects().order_by('-height')[:200]
heavy_people = People.objects().order_by('-weight')[:200]
old_people = People.objects().order_by('-age')[:200]
rich_people = People.objects().order_by('-wealth')[:200]
people = list(set(chain(tall_people, heavy_people, old_people, rich_people)))
for person in people:
do something....
</code></pre>
<p>Is there a more efficient way to do this?</p>
| 1 | 2016-08-18T12:50:04Z | 39,019,167 | <p>A <em>slightly</em> more efficient way is to fetch only the IDs for each group, and then do a final query to get the objects:</p>
<pre><code>tall_people = People.objects().values_list('pk', flat=True).order_by('-height')[:200]
heavy_people = People.objects().values_list('pk', flat=True).order_by('-weight')[:200]
old_people = People.objects().values_list('pk', flat=True).order_by('-age')[:200]
rich_people = People.objects().values_list('pk', flat=True).order_by('-wealth')[:200]
people_pks = set(chain(tall_people, heavy_people, old_people, rich_people))
people = People.objects.filter(pk__in=people_pks)
</code></pre>
<p>The difference is that you're only fetching IDs for the first four queries, and then fetching objects in the last query. Currently you are fetching the entire rows in all four queries.</p>
<p>Obviously this has its limitations - passing a list of 200 IDs is probably fine, but doesn't scale well to thousands.</p>
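<p>If the ID set does grow into the thousands, one hedge (my sketch, not part of the answer above) is to chunk the <code>pk__in</code> lookup so each query stays small:</p>

```python
def chunked(seq, size):
    # yield successive fixed-size slices of a sequence
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Hypothetical usage against the queryset from above:
# people = []
# for batch in chunked(list(people_pks), 500):
#     people.extend(People.objects.filter(pk__in=batch))

print(list(chunked(list(range(7)), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```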
| 0 | 2016-08-18T13:11:04Z | [
"python",
"django",
"mongodb",
"mongoengine"
] |
Creating a hierarchy of methods in a class? | 39,018,742 | <p>I have a class with hundreds of methods
and I want to create a hierarchy of them that makes it easy to find a method. </p>
<p>For example</p>
<pre><code>class MyClass:
def SpectrumFrequencyStart()...
def SpectrumFrequencyStop()...
def SpectrumFrequencyCenter()...
def SignalAmplitudedBm()...
</code></pre>
<p>That I want to call using:</p>
<pre><code>MyClassObject.Spectrum.Frequency.Start()
MyClassObject.Spectrum.Frequency.Stop()
MyClassObject.Signal.Amplitude.dBm()
</code></pre>
| -5 | 2016-08-18T12:50:48Z | 39,019,015 | <p>Consider using a dictionary to map your methods to keys (either hierarchical dictionaries, or simply '.' separated keys).</p>
<p>Another option which may be more elegant is <a href="https://docs.python.org/2/library/collections.html#collections.namedtuple" rel="nofollow">namedtuples</a>. Something like:</p>
<pre><code>from collections import namedtuple
MyClassObject = namedtuple('MyClassObject', ['Spectrum', 'Signal'])
MyClassObject.Spectrum = namedtuple('Spectrum', ['Frequency'])
MyClassObject.Spectrum.Frequency = namedtuple('Frequency', ['Start', 'Stop'])
MyClassObject.Spectrum.Frequency.Start = MyClass.SpectrumFrequencyStart
</code></pre>
<ul>
<li>You can automate this by using <a href="https://docs.python.org/2/library/inspect.html" rel="nofollow">inspection</a> and parse the method names by, say camel case, to build the namedtuples automatically.</li>
<li>Pay attention to binding of the methods</li>
</ul>
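<p>A rough sketch of the inspection idea from the first bullet: split the camel-case method names into a path and hang the bound methods off nested namespaces (Python 3's <code>types.SimpleNamespace</code> used here for brevity; the <code>MyClass</code> methods are illustrative, not the asker's real ones):</p>

```python
import re
import types

class MyClass(object):
    def SpectrumFrequencyStart(self):
        return 'start'
    def SpectrumFrequencyStop(self):
        return 'stop'

def build_hierarchy(obj):
    root = types.SimpleNamespace()
    for name in dir(obj):
        if name.startswith('_'):
            continue
        # 'SpectrumFrequencyStart' -> ['Spectrum', 'Frequency', 'Start']
        parts = re.findall(r'[A-Z][a-z0-9]*', name)
        node = root
        for part in parts[:-1]:
            if not hasattr(node, part):
                setattr(node, part, types.SimpleNamespace())
            node = getattr(node, part)
        setattr(node, parts[-1], getattr(obj, name))  # bound method on the leaf
    return root

api = build_hierarchy(MyClass())
print(api.Spectrum.Frequency.Start())  # 'start'
```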
| 0 | 2016-08-18T13:03:52Z | [
"python"
] |
Creating a hierarchy of methods in a class? | 39,018,742 | <p>I have a class with hundreds of methods
and I want to create a hierarchy of them that makes it easy to find a method. </p>
<p>For example</p>
<pre><code>class MyClass:
def SpectrumFrequencyStart()...
def SpectrumFrequencyStop()...
def SpectrumFrequencyCenter()...
def SignalAmplitudedBm()...
</code></pre>
<p>That I want to call using:</p>
<pre><code>MyClassObject.Spectrum.Frequency.Start()
MyClassObject.Spectrum.Frequency.Stop()
MyClassObject.Signal.Amplitude.dBm()
</code></pre>
| -5 | 2016-08-18T12:50:48Z | 39,019,488 | <p>This is just a very bad design.<br>
It's clear that <code>Spectrum</code>, <code>Signal</code>, <code>Frequency</code> (and so on) should all be separate classes with far fewer than "hundreds of methods".<br>
I'm not sure if MyClassObject actually represents something or is effectively just a namespace.<br>
Objects can encapsulate objects of other classes. For example:</p>
<pre><code>class Frequency(object):
def start(self):
pass
def stop(self):
pass
class Spectrum(object):
def __init__(self):
self.frequency = Frequency()
class Amplitude(object):
def dbm(self):
pass
class Signal(object):
def __init__(self):
self.amplitude = Amplitude()
class MyClass(object):
def __init__(self):
self.spectrum = Spectrum()
self.signal = Signal()
my_class_instance = MyClass()
my_class_instance.spectrum.frequency.start()
my_class_instance.spectrum.frequency.stop()
my_class_instance.signal.amplitude.dbm()
</code></pre>
<p>There's a code formatting convention in Python, <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a>, so I applied it in my example. </p>
| 0 | 2016-08-18T13:28:22Z | [
"python"
] |
Exception handler to check if inline script for variable worked | 39,018,875 | <p>I need to add exception handling that considers if line 7 fails because there is no intersection between the query and array brands. I'm new to using exception handlers and would appreciate any advice or solutions. </p>
<p>I have written an example structure for exception handling, but I am not certain whether it will work.</p>
<pre><code>brands = ["apple", "android", "windows"]
query = input("Enter your query: ").lower()
brand = brandSelector(query)
print(brand)
def brandSelector(query):
try:
brand = set(brands).intersection(query.split())
brand = ', '.join(brand)
return brand
except ValueError:
print("Brand does not exist")
#Â Redirect to main script to enter correct brand in query
</code></pre>
| -1 | 2016-08-18T12:56:55Z | 39,018,999 | <p>This is not the best way to do it, but it is <strong>a</strong> way.</p>
<pre><code>def brandSelector(query):
try:
brand = set(brands).intersection(query.split())
brand = ', '.join(brand)
return brand
except ValueError:
print("Brand does not exist")
query = input("Enter your query: ").lower()
        return brandSelector(query)
brands = ["apple", "android", "windows"]
query = input("Enter your query: ").lower()
brand = brandSelector(query)
print(brand)
</code></pre>
<p>Your function is now recursive since it includes a call to itself. What happens is that if the <code>try</code> throws an error, the <code>except</code> gets triggered where the user is prompted to redefine the query. The function is then rerun.</p>
<hr>
<p>If no error is thrown by the <code>intersection()</code> but instead an empty container is returned, you can do the following:</p>
<pre><code>def brandSelector(query):
brand = set(brands).intersection(query.split())
brand = ', '.join(brand)
return brand
brands = ["apple", "android", "windows"]
brand = None
while not brand:
query = input("Enter your query: ").lower()
brand = brandSelector(query)
print(brand)
</code></pre>
<p>Which looks a lot like <strong>Tuan333's</strong> answer.</p>
| 2 | 2016-08-18T13:03:01Z | [
"python",
"arrays",
"function",
"error-handling",
"exception-handling"
] |
Exception handler to check if inline script for variable worked | 39,018,875 | <p>I need to add exception handling that considers if line 7 fails because there is no intersection between the query and array brands. I'm new to using exception handlers and would appreciate any advice or solutions. </p>
<p>I have written an example structure for exception handling, but I am not certain whether it will work.</p>
<pre><code>brands = ["apple", "android", "windows"]
query = input("Enter your query: ").lower()
brand = brandSelector(query)
print(brand)
def brandSelector(query):
try:
brand = set(brands).intersection(query.split())
brand = ', '.join(brand)
return brand
except ValueError:
print("Brand does not exist")
#Â Redirect to main script to enter correct brand in query
</code></pre>
| -1 | 2016-08-18T12:56:55Z | 39,019,092 | <p>When querying input from user, especially when you expect user to input bad data, I tend to put the querying function inside an infinite loop and break out when input data makes sense. As Ev. Kounis points out, there are many ways to do this. Here's a way I would do (untested code):</p>
<pre><code>brands = ["apple", "android", "windows"]
def brandSelector(query):
try:
brand = set(brands).intersection(query.split())
brand = ', '.join(brand)
return brand
except ValueError:
print("Brand does not exist")
        return None
brand = None
while brand is None:
query = input("Enter your query: ").lower()
brand = brandSelector(query)
print(brand)
</code></pre>
<p>So the condition where you can break out of the <code>while</code> loop is when the input makes sense.</p>
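<p>Note that <code>set.intersection()</code> never raises <code>ValueError</code> on a miss; it just returns an empty set. So for the asker's original try/except structure to ever trigger, the function has to raise explicitly. A sketch of that variant (my code, not the answer's):</p>

```python
brands = ["apple", "android", "windows"]

def brand_selector(query):
    brand = set(brands).intersection(query.split())
    if not brand:
        # intersection() returns an empty set on no match, so raise ourselves
        raise ValueError("Brand does not exist")
    return ', '.join(brand)

try:
    print(brand_selector("i love my apple phone"))  # apple
except ValueError as exc:
    print(exc)
```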
| 2 | 2016-08-18T13:07:38Z | [
"python",
"arrays",
"function",
"error-handling",
"exception-handling"
] |
Plotting Coordinate Lines Using Matplotlib | 39,018,945 | <p>I want to plot the co-ordinate lines of a co-ordinate system (e.g. Cartesian co-ords) using matplotlib. </p>
<p>Then I want to transform them using some linear transform (skew, scale, rotate, etc.), and I want to plot this transformed version of the system as well. </p>
<p>I am quite new to matplotlib and I have no idea as to how I could go about doing this. Any suggestions?</p>
<p>Something like this:</p>
<p><a href="http://i.stack.imgur.com/aXgwG.png" rel="nofollow"><img src="http://i.stack.imgur.com/aXgwG.png" alt="enter image description here"></a></p>
<p>Doesn't have to be on the same plot as above, I just want to be able to plot the lines (and shapes and their transformed versions as well).</p>
<p>EDIT: If you instead have a MATLAB solution, I'll take that too. </p>
| 0 | 2016-08-18T13:00:34Z | 39,019,448 | <p>This should get you started on the right track</p>
<pre><code>import matplotlib
import matplotlib.pyplot as plt
xx = range(10)
yy = range(10)
[plt.plot([x,x],[min(yy),max(yy)],color='k') for x in xx]
[plt.plot([min(xx),max(xx)],[y,y],color='k') for y in yy]
</code></pre>
| 1 | 2016-08-18T13:26:09Z | [
"python",
"matlab",
"matplotlib",
"plot"
] |
Plotting Coordinate Lines Using Matplotlib | 39,018,945 | <p>I want to plot the co-ordinate lines of a co-ordinate system (e.g. Cartesian co-ords) using matplotlib. </p>
<p>Then I want to transform them using some linear transform (skew, scale, rotate, etc.), and I want to plot this transformed version of the system as well. </p>
<p>I am quite new to matplotlib and I have no idea as to how I could go about doing this. Any suggestions?</p>
<p>Something like this:</p>
<p><a href="http://i.stack.imgur.com/aXgwG.png" rel="nofollow"><img src="http://i.stack.imgur.com/aXgwG.png" alt="enter image description here"></a></p>
<p>Doesn't have to be on the same plot as above, I just want to be able to plot the lines (and shapes and their transformed versions as well).</p>
<p>EDIT: If you instead have a MATLAB solution, I'll take that too. </p>
| 0 | 2016-08-18T13:00:34Z | 39,020,767 | <p><a href="http://stackoverflow.com/a/39019448/812786">user2539738's answer</a> demonstrates how to draw a grid in a plot. The next step is applying a transform. This is a mathematical operation which can be described as a function of the x and y coordinates. For example, a shear transform like your example images -</p>
<pre><code>def my_transform(x, y):
return (x+y/2, y)
</code></pre>
<p>With this in mind, you can plot the transformed grid. You simply have to calculate the new coordinates:</p>
<pre><code># Transformed grid
for x in xx:
(x1, y1) = my_transform(x, min(yy))
(x2, y2) = my_transform(x, max(yy))
plt.plot([x1,x2],[y1,y2],color='r')
for y in yy:
(x1, y1) = my_transform(min(xx), y)
(x2, y2) = my_transform(max(xx), y)
plt.plot([x1,x2],[y1,y2],color='r')
</code></pre>
<p>This plots the transformed grid in red. The first for loop plots what were the vertical lines of the grid (going from point <code>x, min(yy)</code> to <code>x, max(yy)</code>), and the second plots the horizontal lines. The transform function is applied to the original pairs of points to calculate the new endpoints of the transformed line.</p>
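<p>Since the question mentions general linear transforms (skew, scale, rotate), the same idea can be expressed with a 2×2 matrix and NumPy instead of a hand-written function. A sketch (not part of the answer above); each grid line's two endpoints can then be fed to <code>plt.plot</code> exactly as in the loops:</p>

```python
import numpy as np

shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])  # x' = x + y/2, y' = y  (the shear above)

def transform_points(matrix, xs, ys):
    pts = np.vstack([xs, ys])   # shape (2, n): one column per point
    return matrix.dot(pts)      # apply the linear map to every point at once

x2, y2 = transform_points(shear, [0, 1], [2, 2])
print(x2)  # transformed x coordinates: 1.0 and 2.0
```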
| 1 | 2016-08-18T14:25:36Z | [
"python",
"matlab",
"matplotlib",
"plot"
] |
ctypes using HRESULT(python) | 39,018,998 | <p>I'm writing a DLL which is being called using a python script as below:</p>
<pre><code> //sample.h
#include<stdio.h>
typedef struct _data
{
char * name;
}data,*xdata;
__declspec(dllexport) void getinfo(data xdata,HRESULT *error);
//sample.c
#include<stdio.h>
#include"sample.h"
void get(data xdata,HRESULT *error)
{
//something is being done here
}
</code></pre>
<p>Now, the python script that is used to call the above function is shown as below:</p>
<pre><code>//sample.py
import ctypes
import sys
from ctypes import *
mydll=CDLL('sample.dll')
class data(Structure):
_fields_ = [('name',c_char_p)]
def get():
xdata=data()
error=HRESULT()
mydll=CDLL('sample.dll')
mydll.get.argtypes=[POINTER(data),POINTER(HRESULT)]
mydll.get.restype = None
mydll.get(xdata,error)
return xdata.value,error.value
xdata=get()
error=get()
print "information=",xdata.value
print "error=", error.value
</code></pre>
<p>But the error that I'm getting after running the python script is :</p>
<pre><code>Debug Assertion Failed!
Program:C:\Python27\pythonw.exe
File:minkernel\crts\ucrt\src\appcrt\stdio\fgets.cpp
Expression:stream.valid()
</code></pre>
<p>Can anybody help me in solving the problem? And the python script that I had written, is it the correct way to write it?</p>
| 0 | 2016-08-18T13:02:59Z | 39,068,518 | <p>Per my comments, I suspect the error with <code>fgets()</code> is in code not shown, but there are problems in the Python and C code shown as well. Here's the DLL source I used, making sure to pass a pointer to the data structure:</p>
<pre><code>typedef long HRESULT;
typedef struct _data {
char * name;
} data;
// Make sure to pass a pointer to data.
__declspec(dllexport) void getinfo(data* pdata, HRESULT *error)
{
pdata->name = "Mark";
*error = 0;
}
</code></pre>
<p>And here's the corrected Python code:</p>
<pre><code>from ctypes import *
class data(Structure):
_fields_ = [('name',c_char_p)]
def get():
xdata=data()
error=HRESULT()
mydll=CDLL('sample.dll')
mydll.getinfo.argtypes=[POINTER(data),POINTER(HRESULT)]
mydll.getinfo.restype = None
mydll.getinfo(xdata,error)
return xdata,error
# Correction in next two lines
xdata,error = get()
print "information =",xdata.name
print "error =", error.value
</code></pre>
<p>Output:</p>
<pre><code>information = Mark
error = 0
</code></pre>
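<p>Note that ctypes converts the structure instance to a pointer automatically when the argtype is <code>POINTER(data)</code>; passing <code>byref(xdata)</code> would be the explicit equivalent. The structure handling itself can be exercised without the DLL. A small sketch (Python 3 syntax):</p>

```python
from ctypes import Structure, c_char_p, byref

class Data(Structure):
    _fields_ = [('name', c_char_p)]

d = Data()
d.name = b"Mark"   # c_char_p fields hold bytes
ref = byref(d)     # what ctypes builds implicitly for POINTER(Data) argtypes
print(d.name)      # b'Mark'
```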
| 0 | 2016-08-21T20:22:26Z | [
"python",
"c++",
"dll",
"ctypes"
] |
In Python3, is `urllib.request` thread-safe? | 39,019,045 | <p>The Python 3 <a href="https://docs.python.org/3/library/urllib.request.html" rel="nofollow">documentation</a> doesn't say whether <code>urllib.request</code> is <strong>thread-safe</strong> or not. Is there anything special that I have to look for when using it in multi-threading scripts?</p>
| 0 | 2016-08-18T13:05:43Z | 39,019,876 | <p>According to <code>urllib.request</code>'s source <a href="https://hg.python.org/cpython/file/3.5/Lib/urllib/request.py#l1642" rel="nofollow">here</a>, <a href="https://hg.python.org/cpython/file/3.5/Lib/urllib/request.py#l1978" rel="nofollow">here</a> and <a href="https://hg.python.org/cpython/file/3.5/Lib/urllib/request.py#l1497" rel="nofollow">here</a>, no, it isn't. </p>
<p>In general, Python is <em>not</em> thread safe, so unless explicitly stated that a module is, you should assume it isn't.</p>
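<p>Given that, one defensive pattern (my sketch, not from the sources linked above) is to avoid sharing state between threads at all, e.g. by giving each thread its own opener via <code>threading.local</code> (Python 3 module names shown):</p>

```python
import threading
import urllib.request

_local = threading.local()

def get_opener():
    # lazily build one opener per thread instead of sharing a global one
    if not hasattr(_local, "opener"):
        _local.opener = urllib.request.build_opener()
    return _local.opener

# each thread then calls get_opener().open(url) on its private opener
```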
| 0 | 2016-08-18T13:46:03Z | [
"python",
"python-3.x",
"thread-safety",
"urllib",
"python-multithreading"
] |
How to parse certain text data? | 39,019,060 | <p>I have a text file with such a format: </p>
<p><code>B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground</code>, plus 1000 lines more.</p>
<p>This is how I want it to be:</p>
<p><code>B2100*Door Driver Key Cylinder Switch Failure
B2101*Head Rest Switch Circuit Failure
B2102*Antenna Circuit Short to Ground
B2103*Antenna Not Connected
B2104*Door Passenger Key Cylinder Switch Failure
</code></p>
<p>so that I can copy this data in LibreOffice Calc and it will format it into two columns of code and meaning each.</p>
<p>My thought process:<br>
Apply a regular express over Bxxxx and put an asterisk in front of it (It acts as a delimiter) and a <code>\n</code> before the meaning (I don't know if that will work? ), and remove white-space till next character is encountered.</p>
<p>I am trying to isolate the B2100 and have failed till now. My naive attempt:</p>
<pre><code>import re
text = """B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground B2103 Antenna Not Connected B2104 Door Passenger Key Cylinder Switch Failure B2105 Throttle Position Input Out of Range Low B2106 Throttle Position Input Out of Range High B2107 Front Wiper Motor Relay Circuit Short to Vbatt B2108 Trunk Key Cylinder Switch Failure"""
# text_arr = text.split("\^B[0-9][0-9][0-9][0-9]$\gi");
l = re.compile('\^B[0-9][0-9][0-9][0-9]$\gi').split(text)
print(l)
</code></pre>
<p>This outputs:</p>
<pre><code>['B2100\tDoor Driver Key Cylinder Switch Failure B2101\tHead Rest Switch Circuit Failure B2102\tAntenna Circuit Short to Ground B2103\tAntenna Not Connected B2104\tDoor Passenger Key Cylinder Switch Failure B2105\tThrottle Position Input Out of Range Low B2106\tThrottle Position Input Out of Range High B2107\tFront Wiper Motor Relay Circuit Short to Vbatt B2108\tTrunk Key Cylinder Switch Failure']
</code></pre>
<p><strong>How do I achieve the desired result?</strong> </p>
<p>To break it down further, what I want to do is this:<br>
Break down everything into Code (B1001) and meaning (The text after it) array and then apply each operation (the <code>\n</code> thing) on it individually. If you have better ideas on how to do the whole thing, the better. I would love to hear it.</p>
| 1 | 2016-08-18T13:06:21Z | 39,019,395 | <p>First, your regex <code>'\^B[0-9][0-9][0-9][0-9]$\gi'</code> is wrong:</p>
<ol>
<li>The trailing modifiers (<code>\gi</code>) don't work this way in Python</li>
<li>The <code>^</code> and <code>$</code> mean the beginning and end of the line, which wouldn't match anything in your text</li>
<li>The repeated <code>[0-9]</code> can be replaced with <code>[0-9]{4}</code></li>
<li>If you want to ignore case, use the corresponding flag in Python's <a href="https://docs.python.org/2/library/re.html#re.IGNORECASE" rel="nofollow">regex</a> module</li>
</ol>
<p>With that in mind a simple code to achieve what you want is something like this:</p>
<pre><code>l = [x.strip() for x in re.compile('\s*(B\d{4})\s*', re.IGNORECASE).split(text) if x.strip()]
lines = ['*'.join(l[i:i+2]) for i in range(0,len(l),2)]
</code></pre>
| 0 | 2016-08-18T13:23:17Z | [
"python",
"text-processing"
] |
How to parse certain text data? | 39,019,060 | <p>I have a text file with such a format: </p>
<p><code>B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground</code>, plus 1000 lines more.</p>
<p>This is how I want it to be:</p>
<p><code>B2100*Door Driver Key Cylinder Switch Failure
B2101*Head Rest Switch Circuit Failure
B2102*Antenna Circuit Short to Ground
B2103*Antenna Not Connected
B2104*Door Passenger Key Cylinder Switch Failure
</code></p>
<p>so that I can copy this data in LibreOffice Calc and it will format it into two columns of code and meaning each.</p>
<p>My thought process:<br>
Apply a regular express over Bxxxx and put an asterisk in front of it (It acts as a delimiter) and a <code>\n</code> before the meaning (I don't know if that will work? ), and remove white-space till next character is encountered.</p>
<p>I am trying to isolate the B2100 and have failed till now. My naive attempt:</p>
<pre><code>import re
text = """B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground B2103 Antenna Not Connected B2104 Door Passenger Key Cylinder Switch Failure B2105 Throttle Position Input Out of Range Low B2106 Throttle Position Input Out of Range High B2107 Front Wiper Motor Relay Circuit Short to Vbatt B2108 Trunk Key Cylinder Switch Failure"""
# text_arr = text.split("\^B[0-9][0-9][0-9][0-9]$\gi");
l = re.compile('\^B[0-9][0-9][0-9][0-9]$\gi').split(text)
print(l)
</code></pre>
<p>This outputs:</p>
<pre><code>['B2100\tDoor Driver Key Cylinder Switch Failure B2101\tHead Rest Switch Circuit Failure B2102\tAntenna Circuit Short to Ground B2103\tAntenna Not Connected B2104\tDoor Passenger Key Cylinder Switch Failure B2105\tThrottle Position Input Out of Range Low B2106\tThrottle Position Input Out of Range High B2107\tFront Wiper Motor Relay Circuit Short to Vbatt B2108\tTrunk Key Cylinder Switch Failure']
</code></pre>
<p><strong>How do I achieve the desired result?</strong> </p>
<p>To break it down further, what I want to do is this:<br>
Break down everything into Code (B1001) and meaning (The text after it) array and then apply each operation (the <code>\n</code> thing) on it individually. If you have better ideas on how to do the whole thing, the better. I would love to hear it.</p>
| 1 | 2016-08-18T13:06:21Z | 39,019,521 | <pre><code>import re
text = """B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground B2103 Antenna Not Connected B2104 Door Passenger Key Cylinder Switch Failure B2105 Throttle Position Input Out of Range Low B2106 Throttle Position Input Out of Range High B2107 Front Wiper Motor Relay Circuit Short to Vbatt B2108 Trunk Key Cylinder Switch Failure"""
l = [i for i in re.split('(B[0-9]{4}\s+)', text) if i]
print '\n'.join(['{}*{}'.format(id_.strip(), label.strip()) for id_,label in zip(l[0::2], l[1::2])])
</code></pre>
<p>.split can keep the delimiters after splitting if you include () around your regex. The above produces the output:</p>
<pre><code>B2100*Door Driver Key Cylinder Switch Failure
B2101*Head Rest Switch Circuit Failure
B2102*Antenna Circuit Short to Ground
B2103*Antenna Not Connected
B2104*Door Passenger Key Cylinder Switch Failure
B2105*Throttle Position Input Out of Range Low
B2106*Throttle Position Input Out of Range High
B2107*Front Wiper Motor Relay Circuit Short to Vbatt
B2108*Trunk Key Cylinder Switch Failure
</code></pre>
| 0 | 2016-08-18T13:29:44Z | [
"python",
"text-processing"
] |
How to parse certain text data? | 39,019,060 | <p>I have a text file with such a format: </p>
<p><code>B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground</code>, plus 1000 lines more.</p>
<p>This is how I want it to be:</p>
<p><code>B2100*Door Driver Key Cylinder Switch Failure
B2101*Head Rest Switch Circuit Failure
B2102*Antenna Circuit Short to Ground
B2103*Antenna Not Connected
B2104*Door Passenger Key Cylinder Switch Failure
</code></p>
<p>so that I can copy this data in LibreOffice Calc and it will format it into two columns of code and meaning each.</p>
<p>My thought process:<br>
Apply a regular express over Bxxxx and put an asterisk in front of it (It acts as a delimiter) and a <code>\n</code> before the meaning (I don't know if that will work? ), and remove white-space till next character is encountered.</p>
<p>I am trying to isolate the B2100 and have failed till now. My naive attempt:</p>
<pre><code>import re
text = """B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground B2103 Antenna Not Connected B2104 Door Passenger Key Cylinder Switch Failure B2105 Throttle Position Input Out of Range Low B2106 Throttle Position Input Out of Range High B2107 Front Wiper Motor Relay Circuit Short to Vbatt B2108 Trunk Key Cylinder Switch Failure"""
# text_arr = text.split("\^B[0-9][0-9][0-9][0-9]$\gi");
l = re.compile('\^B[0-9][0-9][0-9][0-9]$\gi').split(text)
print(l)
</code></pre>
<p>This outputs:</p>
<pre><code>['B2100\tDoor Driver Key Cylinder Switch Failure B2101\tHead Rest Switch Circuit Failure B2102\tAntenna Circuit Short to Ground B2103\tAntenna Not Connected B2104\tDoor Passenger Key Cylinder Switch Failure B2105\tThrottle Position Input Out of Range Low B2106\tThrottle Position Input Out of Range High B2107\tFront Wiper Motor Relay Circuit Short to Vbatt B2108\tTrunk Key Cylinder Switch Failure']
</code></pre>
<p><strong>How do I achieve the desired result?</strong> </p>
<p>To break it down further, what I want to do is this:<br>
Break down everything into Code (B1001) and meaning (The text after it) array and then apply each operation (the <code>\n</code> thing) on it individually. If you have better ideas on how to do the whole thing, the better. I would love to hear it.</p>
| 1 | 2016-08-18T13:06:21Z | 39,019,530 | <p>Basically, you want to:</p>
<ul>
<li>Find any Bxxxx strings in the input.</li>
<li>Replace any whitespace before them with a newline.</li>
<li>Replace any whitespace after them with a <code>*</code>.</li>
</ul>
<p>This can all be done with a single <code>re.sub()</code>:</p>
<pre><code>re.sub(r'\s*(B\d{4})\s*', r'\n\1*', text).strip()
</code></pre>
<p>Matching pattern:</p>
<pre><code>\s* # Any amount of whitespace
(B\d{4}) # "B" followed by exactly 4 digits
\s* # Any amount of whitespace
</code></pre>
<p>Replacement pattern:</p>
<pre><code>\n # Newline
\1 # The first parenthesized sequence from the matching pattern (B####)
* # Literal "*"
</code></pre>
<p>The purpose of the <code>strip()</code> is to prune any leading or trailing whitespace, including the newline that will result from the sub of the first B#### sequence.</p>
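<p>Putting the pieces together on (a shortened version of) the question's sample text, runnable as-is:</p>

```python
import re

text = ("B2100 Door Driver Key Cylinder Switch Failure "
        "B2101 Head Rest Switch Circuit Failure "
        "B2102 Antenna Circuit Short to Ground")

result = re.sub(r'\s*(B\d{4})\s*', r'\n\1*', text).strip()
print(result)
# B2100*Door Driver Key Cylinder Switch Failure
# B2101*Head Rest Switch Circuit Failure
# B2102*Antenna Circuit Short to Ground
```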
| 5 | 2016-08-18T13:30:01Z | [
"python",
"text-processing"
] |
How to parse certain text data? | 39,019,060 | <p>I have a text file with such a format: </p>
<p><code>B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground</code>, plus 1000 lines more.</p>
<p>This is how I want it to be:</p>
<p><code>B2100*Door Driver Key Cylinder Switch Failure
B2101*Head Rest Switch Circuit Failure
B2102*Antenna Circuit Short to Ground
B2103*Antenna Not Connected
B2104*Door Passenger Key Cylinder Switch Failure
</code></p>
<p>so that I can copy this data in LibreOffice Calc and it will format it into two columns of code and meaning each.</p>
<p>My thought process:<br>
Apply a regular express over Bxxxx and put an asterisk in front of it (It acts as a delimiter) and a <code>\n</code> before the meaning (I don't know if that will work? ), and remove white-space till next character is encountered.</p>
<p>I am trying to isolate the B2100 and have failed till now. My naive attempt:</p>
<pre><code>import re
text = """B2100 Door Driver Key Cylinder Switch Failure B2101 Head Rest Switch Circuit Failure B2102 Antenna Circuit Short to Ground B2103 Antenna Not Connected B2104 Door Passenger Key Cylinder Switch Failure B2105 Throttle Position Input Out of Range Low B2106 Throttle Position Input Out of Range High B2107 Front Wiper Motor Relay Circuit Short to Vbatt B2108 Trunk Key Cylinder Switch Failure"""
# text_arr = text.split("\^B[0-9][0-9][0-9][0-9]$\gi");
l = re.compile('\^B[0-9][0-9][0-9][0-9]$\gi').split(text)
print(l)
</code></pre>
<p>This outputs:</p>
<pre><code>['B2100\tDoor Driver Key Cylinder Switch Failure B2101\tHead Rest Switch Circuit Failure B2102\tAntenna Circuit Short to Ground B2103\tAntenna Not Connected B2104\tDoor Passenger Key Cylinder Switch Failure B2105\tThrottle Position Input Out of Range Low B2106\tThrottle Position Input Out of Range High B2107\tFront Wiper Motor Relay Circuit Short to Vbatt B2108\tTrunk Key Cylinder Switch Failure']
</code></pre>
<p><strong>How do I achieve the desired result?</strong> </p>
<p>To break it down further, what I want to do is this:<br>
Break down everything into Code (B1001) and meaning (The text after it) array and then apply each operation (the <code>\n</code> thing) on it individually. If you have better ideas on how to do the whole thing, the better. I would love to hear it.</p>
| 1 | 2016-08-18T13:06:21Z | 39,020,006 | <pre><code>import re
import pandas as pd

pat = r"(B\d+)"
zzz = [i for i in re.split(pat, text) if i != '']

pd.DataFrame({'Col1': zzz[::2],
              'Col2': [i.strip() for i in zzz if re.match(pat, i) is None]})
</code></pre>
<p>Output:</p>
<pre><code>    Col1                                     Col2
0  B2100  Door Driver Key Cylinder Switch Failure
1  B2101         Head Rest Switch Circuit Failure
2  B2102          Antenna Circuit Short to Ground
3  B2100  Door Driver Key Cylinder Switch Failure
</code></pre>
| 0 | 2016-08-18T13:50:59Z | [
"python",
"text-processing"
] |
Delete pandas group based on condition | 39,019,252 | <p>I have a pandas dataframe in with several groups and I would like to exclude groups where some conditions (in a specific column) are not met. E.g. delete here group B because they have a non-number value in column "crit1".</p>
<p>I could delete specific columns based on the condition <code>df.loc[:, (df >< 0).any(axis=0)]</code> but then it doesn't delete the whole group. </p>
<p>And somehow I can't make the next step and apply this to the whole group.</p>
<pre><code>name crit1 crit2
A 0.3 4
A 0.7 6
B inf 4
B 0.4 3
</code></pre>
<p>So the result after this filtering (allow only floats) should be:</p>
<pre><code>A 0.3 4
A 0.7 6
</code></pre>
| 2 | 2016-08-18T13:16:02Z | 39,019,380 | <p>You can use <code>groupby</code> and <code>filter</code>, for the example you give you can check if <code>np.inf</code> exists in a group and <code>filter</code> on the condition:</p>
<pre><code>import pandas as pd
import numpy as np
df.groupby('name').filter(lambda g: (g != np.inf).all().all())
# name crit1 crit2
# 0 A 0.3 4
# 1 A 0.7 6
</code></pre>
<p>If the predicate only applies to one column, you can access the column via <code>g.&lt;column&gt;</code>, for example:</p>
<pre><code>df.groupby('name').filter(lambda g: (g.crit1 != np.inf).all())
# name crit1 crit2
# 0 A 0.3 4
# 1 A 0.7 6
</code></pre>
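<p>For the question's literal case of dropping groups whose <code>crit1</code> contains any non-finite value, <code>np.isfinite</code> expresses the intent directly. A sketch with inline data (not from the answer above):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['A', 'A', 'B', 'B'],
                   'crit1': [0.3, 0.7, np.inf, 0.4],
                   'crit2': [4, 6, 4, 3]})

# keep only groups where every crit1 value is a finite float
kept = df.groupby('name').filter(lambda g: np.isfinite(g['crit1']).all())
print(kept['name'].unique())  # ['A']
```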
| 1 | 2016-08-18T13:22:38Z | [
"python",
"pandas",
"filter"
] |
Handling keyboard interrupt in async zmq | 39,019,354 | <p>I'm making a ZeroMQ server in <code>pyzmq</code> using <code>asyncio</code>. I'm trying to gracefully handle stopping the server, but there's very little documentation on the async module and there doesn't seem to be a simple way to handle stopping the current poll/await. Stopping the loop in the <code>.stop</code> method doesn't do much and won't actually exit.</p>
<pre><code>import zmq
import zmq.asyncio
import asyncio
class ZMQHandler():
def __init__(self):
self.loop = zmq.asyncio.ZMQEventLoop()
asyncio.set_event_loop(self.loop)
self.context = zmq.asyncio.Context()
self.socket = self.context.socket(zmq.DEALER)
self.socket.bind('tcp://127.0.0.1:5000')
self.socket.linger = -1
def start(self):
asyncio.ensure_future(self.listen())
self.loop.run_forever()
def stop(self):
print('Stopping')
self.loop.stop()
async def listen(self):
self.raw = await self.socket.recv()
asyncio.ensure_future(self.listen())
</code></pre>
<p>Here's some example code that would start this:</p>
<pre><code>daemon = ZMQHandler()
def signal_handler(num, frame):
daemon.stop()
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
daemon.start()
</code></pre>
<p>How do I gracefully stop this when it's running? When I call <code>self.socket.close()</code>, I get the error <code>zmq.error.ZMQError: Socket operation on non-socket</code>, and if I call <code>self.context.destroy()</code> it basically complains that the sockets weren't terminated cleanly with <code>ETERM</code>.</p>
| 0 | 2016-08-18T13:20:39Z | 39,403,026 | <p>It ended up being a bug in the implementation of <code>pyzmq</code>. The bug was fixed and now calling <code>loop.stop()</code> works as intended.</p>
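<p>For anyone stuck on an older pyzmq, a generic asyncio pattern (my sketch, unrelated to the pyzmq fix) is to hand the loop a stop request thread-safely rather than stopping it from the signal handler directly:</p>

```python
import asyncio

loop = asyncio.new_event_loop()

def request_stop():
    # safe to call from a signal handler or another thread
    loop.call_soon_threadsafe(loop.stop)

loop.call_later(0.01, request_stop)  # simulate a shutdown request
loop.run_forever()                   # returns once stop() takes effect
loop.close()
print("stopped cleanly")
```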
| 0 | 2016-09-09T02:40:52Z | [
"python",
"zeromq",
"python-asyncio",
"pyzmq"
] |
Inserting a file name to Postgres table | 39,019,516 | <p>I have the following code to insert the file name to Postgres table called 'logs'</p>
<pre><code>c = engine.connect()
conn = c.connection
cur = conn.cursor()
cur.execute("SELECT filename from logs" )
rows1 = cur.fetchall()
rows1 = [x[0] for x in rows1]
for root, directories, filenames in os.walk(path):
for filename in filenames:
fname = os.path.join(root,filename)
if os.path.isfile(fname) and fname[-4:] == '.log':
if fname not in rows1:
print fname
cur.execute(""" INSERT INTO logs(filename) VALUES (%(fname)s)""")
conn.commit()
</code></pre>
<p>I am getting the error</p>
<pre><code>ProgrammingError: syntax error at or near "%"
LINE 1: INSERT INTO logs(filename) VALUES (%(fname)s)
</code></pre>
<p>May I know where I am doing wrong?</p>
| 0 | 2016-08-18T13:29:34Z | 39,019,631 | <p>You haven't passed any parameters to the query, so no replacement will be made; the adapter will pass the literal string <code>(%(fname)s)</code> to Postgres.</p>
<pre><code>cur.execute("""INSERT INTO logs(filename) VALUES (%(fname)s)""", {'fname': fname})
</code></pre>
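<p>The same pass-the-values-separately idea can be sanity-checked with the standard library's <code>sqlite3</code> module (its paramstyle is <code>:name</code> rather than psycopg2's <code>%(name)s</code>, but the principle, never interpolating values into the SQL string yourself, is identical):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE logs (filename TEXT)")

for fname in ["a.log", "b.log"]:
    # the driver substitutes :fname safely; the SQL string itself stays constant
    cur.execute("INSERT INTO logs(filename) VALUES (:fname)", {"fname": fname})
conn.commit()

cur.execute("SELECT filename FROM logs ORDER BY filename")
rows = [r[0] for r in cur.fetchall()]
print(rows)
```
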
| 0 | 2016-08-18T13:33:52Z | [
"python",
"postgresql",
"for-loop"
] |
How to calculate variables from worksheet columns using xlrd? | 39,019,564 | <p>I am attempting to calculate all variables of a specific value in a given column from an Excel document. I want to be able to iterate over the column and calculate the total of each instance... e.g. how many students received a grade "A".</p>
<p>Here is what I have so far...</p>
<p>test.xls:</p>
<blockquote>
<p>Name, Class, Grade</p>
<p>James, Math, A</p>
<p>Judy, Math, A</p>
<p>Bill, Social Studies, B</p>
<p>Denice, History, C</p>
<p>Sarah, History, B</p>
</blockquote>
<p>Here is my python script</p>
<pre><code>import xlrd
from collections import Counter
sh = xlrd.open_workbook('test.xls', on_demand = True).sheet_by_index(0) # Open workbook and sheet
for rownum in range(sh.nrows):
grades = str(sh.cell(rownum, 2).value) # Grab all variables in column 2.
print Counter(grades.split('\n')) # Count grades
</code></pre>
<p><strong>Expected output:</strong></p>
<blockquote>
<p>A = 2 </p>
<p>B = 2 </p>
<p>C = 1</p>
</blockquote>
<p><strong>Actual output:</strong></p>
<blockquote>
<p>Counter({'Grade': 1}) </p>
<p>Counter({'A': 1}) </p>
<p>Counter({'A': 1}) </p>
<p>Counter({'B': 1})</p>
<p>Counter({'C': 1})</p>
<p>Counter({'B': 1})</p>
</blockquote>
<p>As each grade is showing in a different list I have been unable to merge/concatenate lists to get a total. Also it is not in the desired output formatting.</p>
| 1 | 2016-08-18T13:31:04Z | 39,019,706 | <pre><code>for rownum in range(sh.nrows):
grades = str(sh.cell(rownum, 2).value) # Grab all variables in column 2.
print Counter(grades.split('\n')) # Count grades
</code></pre>
<p>You are creating a list in every iteration.</p>
<p>You can use a list comprehension to create a single list with all the grades (start at row 1 to skip the header row):</p>
<pre><code>grades = [str(sh.cell(rownum, 2).value) for rownum in range(1, sh.nrows)]
print Counter(grades)
</code></pre>
<p>Or without a comprehension:</p>
<pre><code>grades = []
for rownum in range(1, sh.nrows):  # start at 1 to skip the header row
grades.append(str(sh.cell(rownum, 2).value))
print Counter(grades)
</code></pre>
<p>You would still need to format the content of <code>Counter(grades)</code> to your liking:</p>
<pre><code>res = Counter(grades)
for grade, count in res.iteritems():
print '{} = {}'.format(grade, count)
</code></pre>
<p>Note that:</p>
<ol>
<li><p>I'm not using <code>split</code>.</p></li>
<li><p>The output won't be in any particular order, and in fact might change between consecutive runs of the script.</p></li>
</ol>
| 0 | 2016-08-18T13:37:20Z | [
"python",
"xlrd",
"python-collections"
] |
How to calculate variables from worksheet columns using xlrd? | 39,019,564 | <p>I am attempting to calculate all variables of a specific value in a given column from an Excel document. I want to be able to iterate over the column and calculate the total of each instance... e.g. how many students received a grade "A".</p>
<p>Here is what I have so far...</p>
<p>test.xls:</p>
<blockquote>
<p>Name, Class, Grade</p>
<p>James, Math, A</p>
<p>Judy, Math, A</p>
<p>Bill, Social Studies, B</p>
<p>Denice, History, C</p>
<p>Sarah, History, B</p>
</blockquote>
<p>Here is my python script</p>
<pre><code>import xlrd
from collections import Counter
sh = xlrd.open_workbook('test.xls', on_demand = True).sheet_by_index(0) # Open workbook and sheet
for rownum in range(sh.nrows):
grades = str(sh.cell(rownum, 2).value) # Grab all variables in column 2.
print Counter(grades.split('\n')) # Count grades
</code></pre>
<p><strong>Expected output:</strong></p>
<blockquote>
<p>A = 2 </p>
<p>B = 2 </p>
<p>C = 1</p>
</blockquote>
<p><strong>Actual output:</strong></p>
<blockquote>
<p>Counter({'Grade': 1}) </p>
<p>Counter({'A': 1}) </p>
<p>Counter({'A': 1}) </p>
<p>Counter({'B': 1})</p>
<p>Counter({'C': 1})</p>
<p>Counter({'B': 1})</p>
</blockquote>
<p>As each grade is showing in a different list I have been unable to merge/concatenate lists to get a total. Also it is not in the desired output formatting.</p>
| 1 | 2016-08-18T13:31:04Z | 39,019,931 | <p>You can start by instantiating a <code>Counter</code> and then add grades to it while you iterate:</p>
<pre><code>grades_counter = Counter()
mysheet = xlrd.open_workbook('grades.xls').sheet_by_index(0)
for i in range(1,mysheet.nrows):
    grades_counter += Counter([str(mysheet.row_values(i)[2])])  # wrap in a list so a multi-letter value counts as one grade, not per character
print grades_counter
Counter({'A': 2, 'B': 2, 'C': 1})
</code></pre>
<p>If you are looking to print the output in a more elegant way, you can do the following:</p>
<pre><code>for k,v in grades_counter.items():
print "{} = {}".format(k,v)
</code></pre>
<p>You should get:</p>
<pre><code>A = 2
C = 1
B = 2
</code></pre>
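<p>The order above is arbitrary, since <code>Counter</code> iteration order is not guaranteed here; if the alphabetical listing from the expected output matters, sort the items first. The hard-coded grades list below just stands in for the values read from the sheet:</p>

```python
from collections import Counter

grades = ["A", "A", "B", "C", "B"]  # stand-in for the column values read via xlrd
for grade, count in sorted(Counter(grades).items()):  # sorted() makes the order deterministic
    print("{} = {}".format(grade, count))
# A = 2
# B = 2
# C = 1
```
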
<p>I hope this helps.</p>
| 0 | 2016-08-18T13:48:21Z | [
"python",
"xlrd",
"python-collections"
] |
Duplicated rows when merging dataframes in python | 39,019,591 | <p>I am currently merging 2 dataframes with an outer join, but after merging, I see all the rows are duplicated even when the columns I did the merge upon contain the same values. In detail:</p>
<pre><code>list_1 = pd.read_csv('list_1.csv')
list_2 = pd.read_csv('list_2.csv')
merged_list = pd.merge(list_1 , list_2 , on=['email_address'], how='inner')
</code></pre>
<p>with the following input and results:</p>
<p>list_1:</p>
<pre><code>email_address, name, surname
john.smith@email.com, john, smith
john.smith@email.com, john, smith
elvis@email.com, elvis, presley
</code></pre>
<p>list_2:</p>
<pre><code>email_address, street, city
john.smith@email.com, street1, NY
john.smith@email.com, street1, NY
elvis@email.com, street2, LA
</code></pre>
<p>merged_list:</p>
<pre><code>email_address, name, surname, street, city
john.smith@email.com, john, smith, street1, NY
john.smith@email.com, john, smith, street1, NY
john.smith@email.com, john, smith, street1, NY
john.smith@email.com, john, smith, street1, NY
elvis@email.com, elvis, presley, street2, LA
elvis@email.com, elvis, presley, street2, LA
</code></pre>
<p>My question is, shouldn't it be like this?</p>
<p>merged_list (how I would like it to be :D):</p>
<pre><code>email_address, name, surname, street, city
john.smith@email.com, john, smith, street1, NY
john.smith@email.com, john, smith, street1, NY
elvis@email.com, elvis, presley, street2, LA
</code></pre>
<p>How can I make it so that it becomes like this?
Thanks a lot for your help!</p>
| 2 | 2016-08-18T13:32:00Z | 39,019,766 | <pre><code>list_2_nodups = list_2.drop_duplicates()
pd.merge(list_1 , list_2_nodups , on=['email_address'])
</code></pre>
<h2><a href="http://i.stack.imgur.com/MXtAP.png" rel="nofollow"><img src="http://i.stack.imgur.com/MXtAP.png" alt="enter image description here"></a></h2>
<p>The duplicate rows are expected. Each john smith in <code>list_1</code> matches with each john smith in <code>list_2</code>. I had to drop the duplicates in one of the lists. I chose <code>list_2</code>.</p>
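<p>A quick way to convince yourself of the row arithmetic, using made-up two-row frames that mirror the question:</p>

```python
import pandas as pd

# hypothetical data reproducing the duplicate-key situation
list_1 = pd.DataFrame({"email_address": ["john.smith@email.com"] * 2,
                       "name": ["john"] * 2})
list_2 = pd.DataFrame({"email_address": ["john.smith@email.com"] * 2,
                       "city": ["NY"] * 2})

# 2 matching rows x 2 matching rows -> 4 rows (the duplication observed)
print(len(pd.merge(list_1, list_2, on="email_address")))

# after dropping duplicates on one side: 2 x 1 -> 2 rows
print(len(pd.merge(list_1, list_2.drop_duplicates(), on="email_address")))
```

<p>So the extra rows are the cartesian product of the matching keys, not a merge bug.</p>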
| 1 | 2016-08-18T13:40:45Z | [
"python",
"python-2.7",
"python-3.x",
"pandas",
"merge"
] |
Different grid behavior from inherited Tkinter Text element | 39,019,667 | <p><a href="http://stackoverflow.com/a/21565476/3884408" title="In another answer here">In another answer here</a>, a user created an inherited TextWithVar class that provides instances of the Tkinter.Text element but with textvariable functionality like Tkinter.Entry has. However, in testing, this new class behaves differently than a Text element when when using the Grid manager. To demonstrate, this code is copied from that answer, with the addition of some test calls at the end:</p>
<pre><code>import Tkinter as tk
class TextWithVar(tk.Text):
'''A text widget that accepts a 'textvariable' option'''
def __init__(self, parent, *args, **kwargs):
try:
self._textvariable = kwargs.pop("textvariable")
except KeyError:
self._textvariable = None
tk.Text.__init__(self, *args, **kwargs)
# if the variable has data in it, use it to initialize
# the widget
if self._textvariable is not None:
self.insert("1.0", self._textvariable.get())
# this defines an internal proxy which generates a
# virtual event whenever text is inserted or deleted
self.tk.eval('''
proc widget_proxy {widget widget_command args} {
# call the real tk widget command with the real args
set result [uplevel [linsert $args 0 $widget_command]]
# if the contents changed, generate an event we can bind to
if {([lindex $args 0] in {insert replace delete})} {
event generate $widget <<Change>> -when tail
}
# return the result from the real widget command
return $result
}
''')
# this replaces the underlying widget with the proxy
self.tk.eval('''
rename {widget} _{widget}
interp alias {{}} ::{widget} {{}} widget_proxy {widget} _{widget}
'''.format(widget=str(self)))
# set up a binding to update the variable whenever
# the widget changes
self.bind("<<Change>>", self._on_widget_change)
# set up a trace to update the text widget when the
# variable changes
if self._textvariable is not None:
self._textvariable.trace("wu", self._on_var_change)
def _on_var_change(self, *args):
'''Change the text widget when the associated textvariable changes'''
# only change the widget if something actually
# changed, otherwise we'll get into an endless
# loop
text_current = self.get("1.0", "end-1c")
var_current = self._textvariable.get()
if text_current != var_current:
self.delete("1.0", "end")
self.insert("1.0", var_current)
def _on_widget_change(self, event=None):
'''Change the variable when the widget changes'''
if self._textvariable is not None:
self._textvariable.set(self.get("1.0", "end-1c"))
root = tk.Tk()
text_frame = TextWithVar(root)
text_frame.grid(row=1, column=1)
test_button = tk.Button(root, text='Test')
test_button.grid(row=1, column=1, sticky='NE')
root.mainloop()
root2 = tk.Tk()
frame = tk.Frame(root2)
frame.grid(row=1, column=1)
text_frame2 = TextWithVar(frame)
text_frame2.grid(row=1, column=1)
test_button2 = tk.Button(frame, text='Test')
test_button2.grid(row=1, column=1, sticky='NE')
root2.mainloop()
</code></pre>
<p>In this example, when the TextWithVar element is directly inside root, it acts like it should - the Button element is placed on top of it in the corner. However, when both are inside a Frame, the Button element is nowhere to be seen. Now change both the TextWithVar calls to tk.Text. Both of them work the way they should, with the Button in plain view. According to Bryan, who made the new class, these should work the exact same way, which I tend to agree with. So why do they work differently?</p>
| 0 | 2016-08-18T13:35:33Z | 39,022,345 | <p>It's a bug in <code>TextWithVar</code>, in this line of code:</p>
<pre><code>tk.Text.__init__(self, *args, **kwargs)
</code></pre>
<p>The code needs to include the <code>parent</code> parameter:</p>
<pre><code>tk.Text.__init__(self, parent, *args, **kwargs)
</code></pre>
| 0 | 2016-08-18T15:39:26Z | [
"python",
"tkinter"
] |
How to execute twitter.Api.PostUpdate in loop? | 39,019,773 | <p>This code executes with error:</p>
<pre><code> # some constants and auth before, looks not important
topPosts = reddit.get_subreddit('funny').get_top(limit=3)
for post in topPosts:
twitter.PostUpdate(status = post.title, media = post.url)
</code></pre>
<p>Console log:</p>
<pre><code>Traceback (most recent call last):
File "script.py", line 17, in <module>
twitter.PostUpdate(status = post.title, media = post.url)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twitter/api.py", line 990, in PostUpdate
media_additional_owners)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twitter/api.py", line 1132, in UploadMediaChunked
boundary = bytes("--{0}".format(uuid4()), 'utf-8')
TypeError: str() takes at most 1 argument (2 given)
</code></pre>
<p>If I do just <code>post.label</code> in loop it works perfectly.</p>
<p>If I execute only one (w/o loop) <code>PostUpdate</code> it works perfectly.</p>
<p>I think it's happening because <code>PostUpdate</code> is asynchronous, but can't figure out how to fix it. Please help.</p>
| 0 | 2016-08-18T13:40:58Z | 39,027,335 | <p>This is a bug in <code>python-twitter</code> library and it's fixed in this <a href="https://github.com/bear/python-twitter/pull/347/files#diff-ca47863eef03007ed84692b07b74eda6L1132" rel="nofollow">PR</a>. The problem is that <code>bytes</code> in python2 equals to <code>str</code> and accepts only one argument while in python3 <code>bytes</code> requires encoding as a second argument. </p>
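<p>To see the difference concretely: <code>bytes(text, 'utf-8')</code> is Python-3-only syntax (on Python 2, <code>bytes</code> is just <code>str</code>, whose constructor takes a single argument, hence the <code>TypeError</code>), while <code>str.encode</code> spells the same conversion in a way that runs on both. This snippet only demonstrates the encoding call; it is not a patch for the library itself:</p>

```python
from uuid import uuid4

# works on Python 2 and 3 alike, unlike bytes("...", "utf-8")
boundary = "--{0}".format(uuid4()).encode("utf-8")
print(boundary[:2])
```
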
| 1 | 2016-08-18T20:54:24Z | [
"python",
"twitter",
"praw"
] |
Get attributes from an XML tag using Python | 39,019,779 | <p>I am trying to grab the attributes from an XML tag so I can get the correct data and place it in a CSV file</p>
<p>I have attempted to find some answers, but have found nothing definitive other than how to grab that tag itself, or get the attributes using another language. </p>
<p>Here is what a line from the XML file will look like:</p>
<pre><code><item id="16" class="ItemClass" status="closed"
opened="2008-07-14T22:31:10Z" closed="2008-07-14T23:45:08Z"
url="www.google.com"/>
</code></pre>
<p>The Tag is "item"
The attributes are "id", "class", "status", "opened", "closed", "url"</p>
<p>I can get the tag name, and the items inside the tag <code>(<tag>item inside tag</tag>)</code> using a method found <a href="http://blog.appliedinformaticsinc.com/how-to-parse-and-convert-xml-to-csv-using-python/" rel="nofollow">here</a>:</p>
<pre><code>import xml.etree.ElementTree as ET
import csv
item_head = []
item = []
[...]
name = item.find('item').tag
item_head.append(name)
name = item.find('item').text
item.append(name)
</code></pre>
<p>But I want to get that ATTRIBUTES, not the tag. </p>
<p>I can easily write the header of the file since the items will always contain the same information. So, this would look like:</p>
<pre><code>item_head.append('id')
item_head.append('class')
item_head.append('status')
...and so on...
</code></pre>
<p>But I don't know how to grab the other information, such as id="16"</p>
<p>End result should be something like this:</p>
<pre><code>id, class, status, opened, closed, url
16, ItemClass, closed, 2008-07-14T22:31:10Z, 2008-07-14T23:45:08Z, www.google.com
</code></pre>
<p>Can anyone help me with this?</p>
| 0 | 2016-08-18T13:41:08Z | 39,020,032 | <p>An <a href="https://docs.python.org/2/library/xml.etree.elementtree.html?highlight=elementtree#xml.etree.ElementTree.Element" rel="nofollow">xml.etree.ElementTree.Element</a> also has an <code>attrib</code> property, which is a dictionary containing the element's attributes.</p>
<pre><code>>>> item.attrib['id']
'16'
</code></pre>
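<p>Putting the two together, parsing the tag and writing its attributes out as CSV, might look like the sketch below. The sample string is the line from the question, and <code>io.StringIO</code> just stands in for a real output file:</p>

```python
import csv
import io
import xml.etree.ElementTree as ET

xml = ('<item id="16" class="ItemClass" status="closed" '
       'opened="2008-07-14T22:31:10Z" closed="2008-07-14T23:45:08Z" '
       'url="www.google.com"/>')
item = ET.fromstring(xml)

fields = ['id', 'class', 'status', 'opened', 'closed', 'url']
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(fields)                            # header row
writer.writerow([item.attrib[f] for f in fields])  # one row per <item>

print(out.getvalue())
```

<p>For a whole document you would loop over <code>tree.iter('item')</code> and write one row per element in the same way.</p>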
| 2 | 2016-08-18T13:52:17Z | [
"python",
"xml",
"csv"
] |
Issue with Selenium,Python | 39,019,917 | <p>I have written the following. </p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
bot = webdriver.Firefox()
bot.find_element_by_name("username").send_keys(config['username'])
</code></pre>
<p>When I am using send_keys and happen to be typing at the same instant, then what I typed is also added in the username.<br>
How to avoid this?</p>
<p>Example:</p>
<p>I want to fill the username with "sandeep"
If at the same instant I press 'a', then the username becomes "sandeepa" or something equivalent.</p>
| 1 | 2016-08-18T13:47:55Z | 39,020,817 | <p>I see 2 options:</p>
<ol>
<li><p>Create a hidden input, send keys to it, then copy/paste from the hidden input into the visible one, and finally remove the hidden input. </p></li>
<li><p>Hide the input, then <code>send_keys</code> to it, and afterwards show it again. </p></li>
</ol>
<p>Useful links:</p>
<p><a href="http://stackoverflow.com/questions/11750447/performing-a-copy-and-paste-with-selenium-2">Performing a copy and paste with Selenium 2</a> </p>
<p><a href="http://stackoverflow.com/questions/4176515/webdriver-add-new-element">WebDriver: add new element</a></p>
| 0 | 2016-08-18T14:27:48Z | [
"python",
"selenium"
] |
Issue with Selenium,Python | 39,019,917 | <p>I have written the following. </p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
bot = webdriver.Firefox()
bot.find_element_by_name("username").send_keys(config['username'])
</code></pre>
<p>When I am using send_keys and happen to be typing at the same instant, then what I typed is also added in the username.<br>
How to avoid this?</p>
<p>Example:</p>
<p>I want to fill the username with "sandeep"
If at the same instant I press 'a', then the username becomes "sandeepa" or something equivalent.</p>
| 1 | 2016-08-18T13:47:55Z | 39,020,920 | <p>You can use executeScript method:</p>
<pre><code># call execute_script on the driver instance ("bot" in the question), not on the module
bot.execute_script("document.getElementById('username').setAttribute('value', 'Sandeep')")
</code></pre>
<p><code>JavaScript</code> will perform the text insertion as a single operation.</p>
| 1 | 2016-08-18T14:32:14Z | [
"python",
"selenium"
] |
Count occurrences of certain values in dask.dataframe | 39,019,918 | <p>I have a dataframe like this:</p>
<pre><code>df.head()
day time resource_record
0 27 00:00:00 AAAA
1 27 00:00:00 A
2 27 00:00:00 AAAA
3 27 00:00:01 A
4 27 00:00:02 A
</code></pre>
<p>and want to find out how many occurrences of certain <code>resource_records</code> exist.</p>
<p>My first try was using the Series returned by <code>value_counts()</code>, which seems great, but does not allow me to exclude some labels afterwards, because there is no <code>drop()</code> implemented in <code>dask.Series</code>.</p>
<p>So I tried just to not print the undesired labels:</p>
<pre><code>for row in df.resource_record.value_counts().iteritems():
if row[0] in ['AAAA']:
continue
print('\t{0}\t{1}'.format(row[1], row[0]))
</code></pre>
<p>Which works fine, but what if I ever want to further work on this data and really want it 'cleaned'. So I searched the docs a bit more and found <code>mask()</code>, but this feels a bit clumsy as well:</p>
<pre><code>records = df.resource_record.mask(df.resource_record.map(lambda x: x in ['AAAA'])).value_counts()
</code></pre>
<p>I looked for a method which would allow me to just count individual values, but <code>count()</code> does count all values that are not NaN.</p>
<p>Then I found <code>str.contains()</code>, but I don't know how to handle the undocumented Scalar type I get returned with this code:</p>
<pre><code>print(df.resource_record.str.contains('A').sum())
</code></pre>
<p>Output:</p>
<pre><code>dd.Scalar<series-..., dtype=int64>
</code></pre>
<p>But even after looking at Scalar's code in <code>dask/dataframe/core.py</code> I didn't find a way of getting its value.</p>
<p>How would you efficiently count the occurrences of a certain set of values in your dataframe?</p>
| 0 | 2016-08-18T13:47:55Z | 39,023,088 | <p>One quite nice method I found is this:</p>
<pre><code>counts = df.resource_record.mask(df.resource_record.isin(['AAAA'])).dropna().value_counts()
</code></pre>
<p>First we mask all entries we'd like to remove, which replaces the value with NaN. Then we drop all rows with NaN and finally count the occurrences of the unique values.</p>
<p>This requires <code>df</code> to have no NaN values, which otherwise leads to the row containing NaN being removed as well.</p>
<p>I expect something like</p>
<pre><code>df.resource_record.drop(df.resource_record.isin(['AAAA']))
</code></pre>
<p>would be faster, because I believe drop would run through the dataset once, while mask + dropna runs through the dataset twice. But drop is only implemented for axis=1, and here we need axis=0.</p>
| 0 | 2016-08-18T16:18:27Z | [
"python",
"dask"
] |
Count occurrences of certain values in dask.dataframe | 39,019,918 | <p>I have a dataframe like this:</p>
<pre><code>df.head()
day time resource_record
0 27 00:00:00 AAAA
1 27 00:00:00 A
2 27 00:00:00 AAAA
3 27 00:00:01 A
4 27 00:00:02 A
</code></pre>
<p>and want to find out how many occurrences of certain <code>resource_records</code> exist.</p>
<p>My first try was using the Series returned by <code>value_counts()</code>, which seems great, but does not allow me to exclude some labels afterwards, because there is no <code>drop()</code> implemented in <code>dask.Series</code>.</p>
<p>So I tried just to not print the undesired labels:</p>
<pre><code>for row in df.resource_record.value_counts().iteritems():
if row[0] in ['AAAA']:
continue
print('\t{0}\t{1}'.format(row[1], row[0]))
</code></pre>
<p>Which works fine, but what if I ever want to further work on this data and really want it 'cleaned'. So I searched the docs a bit more and found <code>mask()</code>, but this feels a bit clumsy as well:</p>
<pre><code>records = df.resource_record.mask(df.resource_record.map(lambda x: x in ['AAAA'])).value_counts()
</code></pre>
<p>I looked for a method which would allow me to just count individual values, but <code>count()</code> does count all values that are not NaN.</p>
<p>Then I found <code>str.contains()</code>, but I don't know how to handle the undocumented Scalar type I get returned with this code:</p>
<pre><code>print(df.resource_record.str.contains('A').sum())
</code></pre>
<p>Output:</p>
<pre><code>dd.Scalar<series-..., dtype=int64>
</code></pre>
<p>But even after looking at Scalar's code in <code>dask/dataframe/core.py</code> I didn't find a way of getting its value.</p>
<p>How would you efficiently count the occurrences of a certain set of values in your dataframe?</p>
| 0 | 2016-08-18T13:47:55Z | 39,048,119 | <p>In most cases pandas syntax will work as well with dask, with the necessary addition of <code>.compute()</code> (or <code>dask.compute</code>) to actually perform the action. Until the compute, you are merely constructing the graph which defines the action.</p>
<p>I believe the simplest solution to your question is this:</p>
<pre><code>df[df.resource_record!='AAAA'].resource_record.value_counts().compute()
</code></pre>
<p>Where the expression in the selector square brackets could be some mapping or function.</p>
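<p>Dropping the <code>.compute()</code> gives the plain-pandas equivalent, which is an easy way to sanity-check the selector on a small hypothetical sample before running it through dask:</p>

```python
import pandas as pd

# toy data mirroring the resource_record column from the question
df = pd.DataFrame({"resource_record": ["AAAA", "A", "AAAA", "A", "A"]})

# boolean-mask selection drops the unwanted label, then count what remains
counts = df[df.resource_record != "AAAA"].resource_record.value_counts()
print(counts.to_dict())
```

<p>To exclude several labels at once, replace the comparison with <code>~df.resource_record.isin(['AAAA', ...])</code>.</p>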
| 0 | 2016-08-19T21:47:27Z | [
"python",
"dask"
] |
Working with checkboxes in tkinter (pygubu) | 39,019,935 | <p>I created a window with the GUI maker pygubu. This window contains checkboxes. But my script is not able to recognize if the checkboxes are marked or not.
How do I have to check if the checkboxes are marked or not?
Do I have to use the command or the variable? What is the right syntax?</p>
<p>Below you can see what I have done in pygubu. How do I get the state of my checkbox?
<a href="http://i.stack.imgur.com/W9o5A.jpg" rel="nofollow">http://i.stack.imgur.com/W9o5A.jpg</a> </p>
<p>I tried:</p>
<pre><code>def checker13(self, variable=check13):
self.variable13 = variable
print self.variable13
</code></pre>
<p>This should print the state of the checkbox every time something changes. But I always receive an error. What can I do?</p>
| -1 | 2016-08-18T13:48:24Z | 39,020,843 | <p>As per the example from here <a href="http://www.python-course.eu/tkinter_checkboxes.php" rel="nofollow">http://www.python-course.eu/tkinter_checkboxes.php</a></p>
<pre><code>from tkinter import *
master = Tk()
var1 = IntVar()
Checkbutton(master, text="male", variable=var1).grid(row=0, sticky=W)
var2 = IntVar()
Checkbutton(master, text="female", variable=var2).grid(row=1, sticky=W)
mainloop()
</code></pre>
<p>You need to set a variable that stores the result of the checkbox and then reference that (in this example it's var1 and var2).</p>
<p>Edit: to more specifically answer your question; you can check the result of <code>var1</code> and <code>var2</code> as they will either equal <code>0</code> for not checked or <code>1</code> for checked. Hope this helps.</p>
| 0 | 2016-08-18T14:28:41Z | [
"python",
"checkbox",
"tkinter"
] |
Working with checkboxes in tkinter (pygubu) | 39,019,935 | <p>I created a window with the GUI maker pygubu. This window contains checkboxes. But my script is not able to recognize if the checkboxes are marked or not.
How do I have to check if the checkboxes are marked or not?
Do I have to use the command or the variable? What is the right syntax?</p>
<p>Below you can see what I have done in pygubu. How do I get the state of my checkbox?
<a href="http://i.stack.imgur.com/W9o5A.jpg" rel="nofollow">http://i.stack.imgur.com/W9o5A.jpg</a> </p>
<p>I tried:</p>
<pre><code>def checker13(self, variable=check13):
self.variable13 = variable
print self.variable13
</code></pre>
<p>This should print the state of the checkbox every time something changes. But I always receive an error. What can I do?</p>
| -1 | 2016-08-18T13:48:24Z | 39,020,877 | <p>Thanks for clarifying your question. Now that I know you're using Pygubu to build your GUI, I could look into it a little more. I found the following doc page that could help with your issue:
<a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/ttk-Checkbutton.html" rel="nofollow">http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/ttk-Checkbutton.html</a></p>
<p>The key Checkbutton option to pay attention to is the <code>variable</code> option. In the image you gave, it looks like you tried setting the <code>variable</code> option to <code>var</code>. The doc above says this about the <code>variable</code> option:</p>
<blockquote>
<p>A control variable that tracks the current state of the checkbutton...
Normally you will use an IntVar here, and the off and on values are 0
and 1, respectively.</p>
</blockquote>
<p>Do you have the option to change the <code>variable</code> option to be an <code>IntVar</code>, instead of the <code>int</code> that it currently is in the picture? If not, then that should still be fine. </p>
<p>So, the variable <code>var</code> will store the state of the Checkbutton. In the picture you gave, you set the <code>command</code> option to be <code>checker13</code>. Keep it like that. Now, since I've never used Pygubu (or even Tkinter for that matter), I can't be sure that this code will work, but try something like this:</p>
<pre><code>def checker13(self):
print(self.var)
</code></pre>
<p>If that doesn't work, try this:</p>
<pre><code>def checker13(self):
print(self.variable)
</code></pre>
<p>If one of those works, let me know so I can edit my answer with the working code. If you're still stuck, let me know as well.</p>
| 0 | 2016-08-18T14:30:05Z | [
"python",
"checkbox",
"tkinter"
] |
Working with checkboxes in tkinter (pygubu) | 39,019,935 | <p>I created a window with the GUI maker pygubu. This window contains checkboxes. But my script is not able to recognize if the checkboxes are marked or not.
How do I have to check if the checkboxes are marked or not?
Do I have to use the command or the variable? What is the right syntax?</p>
<p>Below you can see what I have done in pygubu. How do I get the state of my checkbox?
<a href="http://i.stack.imgur.com/W9o5A.jpg" rel="nofollow">http://i.stack.imgur.com/W9o5A.jpg</a> </p>
<p>I tried:</p>
<pre><code>def checker13(self, variable=check13):
self.variable13 = variable
print self.variable13
</code></pre>
<p>This should print the state of the checkbox every time something changes. But I always receive an error. What can I do?</p>
| -1 | 2016-08-18T13:48:24Z | 39,081,698 | <p>Finally I found an inelegant way to do it. It's not very "pretty", but it works:</p>
<pre><code>self.xxx1 = 0
def checker13(self):
if self.xxx1 == 0:
self.xxx1 = self.xxx1+1
else:
self.xxx1 = 0
</code></pre>
<p>Normally the checkbox is not marked and the value is <code>0</code>. If the checkbox gets marked there is an event and the value changes to <code>1</code>. The next event changes the value back to <code>0</code>. By this method I can check the state of the checkbox by checking <code>xxx1</code>.</p>
<p>I knew there are other ways (with <code>IntVar()</code>) but I was specifically searching for a solution with pygubu. If there is a smoother answer feel free to correct me.</p>
<p>EDIT:
to get the value of a variable from pygubu use:</p>
<pre><code>variable = self.builder.get_variable('variable')
variable = variable.get()
</code></pre>
| 0 | 2016-08-22T14:07:47Z | [
"python",
"checkbox",
"tkinter"
] |
How do I access Python code programmatically? For example, getting a list of classes, its docstrings, etc? | 39,019,962 | <p>I'm trying to read python code (specifically, unit tests) as structured objects.</p>
<p>For example.</p>
<pre><code>class ProjectA(unittest.TestCase):
def testB(self):
"""
hello world B
"""
assert False
def testA(self):
"""
hello world
"""
assert False
</code></pre>
<p>I will like to read this code file into an object a dict like this:</p>
<pre><code>{
'classes': [{'ProjectA': [__init__, testA, testB]}]
}
</code></pre>
<p>For which I can read testA's via testA['docstring'].</p>
<hr>
<p>Basically, I'd like to get the structure of python code into an object for which I can parse.</p>
<p>What will something like this be called? (So I can read up about it)</p>
<p>Thank you!</p>
| 2 | 2016-08-18T13:49:19Z | 39,020,263 | <p>You can explore the class using <a href="https://docs.python.org/2/library/inspect.html" rel="nofollow">inspect</a></p>
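<p>For example, <code>inspect.getmembers</code> with a predicate collects the methods and <code>inspect.getdoc</code> their docstrings. Note that on Python 3 the methods of a class are plain functions, hence <code>isfunction</code>; on Python 2 you would use <code>ismethod</code> instead:</p>

```python
import inspect

class ProjectA(object):
    def testA(self):
        """hello world"""

    def testB(self):
        """hello world B"""

# map each method name defined on the class to its (cleaned-up) docstring
methods = {name: inspect.getdoc(func)
           for name, func in inspect.getmembers(ProjectA, inspect.isfunction)}
print(methods)
```

<p>This works on live (imported) objects; if you want to read source files without importing them, see the <code>ast</code> approach in the other answer.</p>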
| 0 | 2016-08-18T14:02:23Z | [
"python"
] |
How do I access Python code programmatically? For example, getting a list of classes, its docstrings, etc? | 39,019,962 | <p>I'm trying to read python code (specifically, unit tests) as structured objects.</p>
<p>For example.</p>
<pre><code>class ProjectA(unittest.TestCase):
def testB(self):
"""
hello world B
"""
assert False
def testA(self):
"""
hello world
"""
assert False
</code></pre>
<p>I will like to read this code file into an object a dict like this:</p>
<pre><code>{
'classes': [{'ProjectA': [__init__, testA, testB]}]
}
</code></pre>
<p>For which I can read testA's via testA['docstring'].</p>
<hr>
<p>Basically, I'd like to get the structure of python code into an object for which I can parse.</p>
<p>What will something like this be called? (So I can read up about it)</p>
<p>Thank you!</p>
| 2 | 2016-08-18T13:49:19Z | 39,020,330 | <p>This is what the <a href="https://docs.python.org/2/library/ast.html" rel="nofollow">ast</a> module is for -- generate abstract syntax trees of Python source:</p>
<pre><code>>>> import ast
>>> source = '''import unittest
...
...
... class ProjectA(unittest.TestCase):
...
... def testB(self):
... """
... hello world B
... """
... assert False
...
... def testA(self):
... """
... hello world
... """
... assert False'''
>>> tree = ast.parse(source)
>>> for node in ast.walk(tree):
... print node
...
<_ast.Module object at 0x103aa5f50>
<_ast.Import object at 0x103b0a810>
<_ast.ClassDef object at 0x103b0a890>
<_ast.alias object at 0x103b0a850>
<_ast.Attribute object at 0x103b0a8d0>
<_ast.FunctionDef object at 0x103b0a950>
<_ast.FunctionDef object at 0x103b0ab10>
<_ast.Name object at 0x103b0a910>
<_ast.Load object at 0x103b02190>
<_ast.arguments object at 0x103b0a990>
<_ast.Expr object at 0x103b0aa10>
<_ast.Assert object at 0x103b0aa90>
<_ast.arguments object at 0x103b0ab50>
<_ast.Expr object at 0x103b0abd0>
<_ast.Assert object at 0x103b0ac50>
<_ast.Load object at 0x103b02190>
<_ast.Name object at 0x103b0a9d0>
<_ast.Str object at 0x103b0aa50>
<_ast.Name object at 0x103b0aad0>
<_ast.Name object at 0x103b0ab90>
<_ast.Str object at 0x103b0ac10>
<_ast.Name object at 0x103b27d50>
<_ast.Param object at 0x103b02410>
<_ast.Load object at 0x103b02190>
<_ast.Param object at 0x103b02410>
<_ast.Load object at 0x103b02190>
</code></pre>
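<p>From that tree it is straightforward to assemble something close to the dict in the question: <code>ClassDef</code> nodes give the classes, their <code>FunctionDef</code> children give the methods, and <code>ast.get_docstring</code> returns each docstring:</p>

```python
import ast

source = '''
class ProjectA(object):
    def testB(self):
        """hello world B"""
        assert False

    def testA(self):
        """hello world"""
        assert False
'''

tree = ast.parse(source)
result = {'classes': []}
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        # collect name -> docstring for every method defined directly in the class body
        methods = {child.name: ast.get_docstring(child)
                   for child in node.body
                   if isinstance(child, ast.FunctionDef)}
        result['classes'].append({node.name: methods})

print(result)
```
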
| 2 | 2016-08-18T14:05:55Z | [
"python"
] |
Using Python's requests lib throwing ProxyError | 39,020,011 | <p>I am still new to python and can't figure out how to handle this error and what to do with it to avoid it.</p>
<p>When I use <code>requests.get('http://www.baidu.com')</code></p>
<pre><code> import requests
header = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64)AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}
h=requests.get('http://www.baidu.com',headers=header)
print h.text
</code></pre>
<p>It throws a <code>ProxyError</code>:</p>
<pre><code>Traceback (most recent call last):
File "D:/freedomcoder/Code/Python/rexx/rexx.py", line 8, in <module>
h = requests.get('http://github.com/kennethreitz/requests/issues/3050',headers=header)
File "C:\Python27\lib\site-packages\requests\api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "C:\Python27\lib\site-packages\requests\api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 596, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 485, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPConnectionPool(host='107.160.9.10', port=80): Max retries exceeded with url: http://www.baidu.com/ (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x02EE4170>: Failed to establish a new connection: [Errno 10061] ',)))
</code></pre>
<p>But when I use <code>requests.get('https://www.baidu.com')</code>, it returns the correct page. I don't know why this is the case.</p>
| 0 | 2016-08-18T13:51:10Z | 39,022,206 | <p>Your code seems fine; the issue is a network connection error, as others have stated.</p>
<p>One way to verify the issue from the CLI (the connection should open):</p>
<pre><code>$ telnet baidu.com 80
Trying 220.181.57.217...
Connected to baidu.com.
Escape character is '^]'.
</code></pre>
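<p>The same reachability check can be scripted from Python as well — a sketch using only the standard library (the host/port are just the ones from the question):</p>

```python
import socket

def can_connect(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# can_connect("baidu.com", 80) should mirror the telnet check above.
```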
| -1 | 2016-08-18T15:32:51Z | [
"python",
"python-requests"
] |
Using Python's requests lib throwing ProxyError | 39,020,011 | <p>I am still new to python and can't figure out how to handle this error or how to avoid it.</p>
<p>When I use <code>requests.get('http://www.baidu.com')</code></p>
<pre><code> import requests
header = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64)AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}
h=requests.get('http://www.baidu.com',headers=header)
print h.text
</code></pre>
<p>It throws a <code>ProxyError</code>:</p>
<pre><code>Traceback (most recent call last):
File "D:/freedomcoder/Code/Python/rexx/rexx.py", line 8, in <module>
h = requests.get('http://github.com/kennethreitz/requests/issues/3050',headers=header)
File "C:\Python27\lib\site-packages\requests\api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "C:\Python27\lib\site-packages\requests\api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 596, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 485, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPConnectionPool(host='107.160.9.10', port=80): Max retries exceeded with url: http://www.baidu.com/ (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x02EE4170>: Failed to establish a new connection: [Errno 10061] ',)))
</code></pre>
<p>But when I use <code>requests.get('https://www.baidu.com')</code>, it returns the correct page. I don't know why this is the case.</p>
| 0 | 2016-08-18T13:51:10Z | 39,033,251 | <p>I dealt with this problem.</p>
<p>Just add <code>proxies={'http': '', 'https': ''}</code>, which overrides any proxy settings that requests picks up from your environment.</p>
<p>for example:</p>
<pre><code>h=requests.get('http://www.baidu.com', proxies={'http':'','https':''})
</code></pre>
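<p>If the bad proxy comes from your environment (<code>HTTP_PROXY</code>/<code>http_proxy</code> variables or system proxy settings), another option is a <code>Session</code> with <code>trust_env</code> disabled — a sketch, not the only fix:</p>

```python
import requests

# A session that ignores proxy settings inherited from the environment,
# which is a common source of unexpected ProxyError for plain http URLs.
session = requests.Session()
session.trust_env = False

# h = session.get('http://www.baidu.com')  # same request as in the question
```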
| 0 | 2016-08-19T07:20:23Z | [
"python",
"python-requests"
] |
Python I/O: Mixing The Datatypes | 39,020,080 | <p>I'm writing a small script which merges a host of JSON files in one directory into a single file. Trouble is, I'm not entirely sure when my data is in which state. TypeErrors abound. Here's the script;</p>
<pre><code>import glob
import json
import codecs
reader = codecs.getreader("utf-8")
for file in glob.glob("/Users/me/Scripts/BagOfJson/*.json"):
#Aha, as binary here
with open(file, "rb") as infile:
data = json.load(reader(infile))
#If I print(data) here, looks like good ol' JSON
with open("test.json", "wb") as outfile:
json.dump(data, outfile, sort_keys = True, indent = 2, ensure_ascii = False)
#Crash
</code></pre>
<p>This script results in the following error;</p>
<pre><code>TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>Which is caused by the json.dump line.</p>
<p>Naive me just deletes the 'b' in 'wb' for the outfile open. That doesn't do the trick.</p>
<p>Maybe this is a lesson to me to use the shell for testing and to make use of the <code>type()</code> function. Still, I'd love it if someone could clear up the logic behind these data swaps for me. I wish it could all be strings...</p>
| 0 | 2016-08-18T13:54:42Z | 39,020,642 | <p>If this is Python 3, removing the <code>b</code> (binary mode) to open the file in <em>text mode</em> should work just fine. You probably want to specify the encoding explicitly:</p>
<pre><code>with open("test.json", "w", encoding='utf8') as outfile:
json.dump(data, outfile, sort_keys = True, indent = 2, ensure_ascii = False)
</code></pre>
<p>rather than rely on a default.</p>
<p>You shouldn't really use <code>codecs.getreader()</code>. The standard <code>open()</code> function can handle UTF-8 files just fine; just open the file in text mode and specify the encoding again:</p>
<pre><code>import glob
import json
for file in glob.glob("/Users/me/Scripts/BagOfJson/*.json"):
with open(file, "r", encoding='utf8') as infile:
data = json.load(infile)
with open("test.json", "w", encoding='utf8') as outfile:
json.dump(data, outfile, sort_keys = True, indent = 2, ensure_ascii = False)
</code></pre>
<p>The above will still re-create <code>test.json</code> for each file in the <code>*.json</code> glob; you can't really put multiple JSON documents in the same file anyway (unless you specifically create <a href="http://jsonlines.org/" rel="nofollow">JSONLines files</a>, which you are not doing here because you are using <code>indent</code>).</p>
<p>You'd need to write to a new filename and then move the new file back over the original <code>file</code> name if you wanted to re-format all JSON files in the glob.</p>
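<p>Since the goal in the question is one merged file, a common approach (a sketch; the output name <code>merged.json</code> is made up) is to collect every document into a single list and dump once:</p>

```python
import glob
import json

merged = []
for path in glob.glob("/Users/me/Scripts/BagOfJson/*.json"):
    with open(path, "r", encoding="utf8") as infile:
        merged.append(json.load(infile))

# One top-level JSON array holding every input document.
with open("merged.json", "w", encoding="utf8") as outfile:
    json.dump(merged, outfile, sort_keys=True, indent=2, ensure_ascii=False)
```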
| 1 | 2016-08-18T14:20:12Z | [
"python",
"json",
"file-io",
"types"
] |
How to use joblib.Memory of cache the output of a member function of a Python Class | 39,020,217 | <p>I would like to cache the output of a member function of a class using <code>joblib.Memory</code> library. Here is a sample code:</p>
<pre><code>import joblib
import numpy as np
mem = joblib.Memory(cachedir='/tmp', verbose=1)
@mem.cache
def my_sum(x):
return np.sum(x)
class TestClass(object):
def __init__(self):
pass
@mem.cache
def my_sum(self, x):
return np.sum(x)
if __name__ == '__main__':
x = np.array([1, 2, 3, 4])
a = TestClass()
print a.my_sum(x) # does not work
print my_sum(x) # works fine
</code></pre>
<p>However, I get the following error:</p>
<pre><code>/nfs/sw/anaconda2/lib/python2.7/site-packages/joblib/memory.pyc in _get_output_dir(self, *args, **kwargs)
512 of the function called with the given arguments.
513 """
--> 514 argument_hash = self._get_argument_hash(*args, **kwargs)
515 output_dir = os.path.join(self._get_func_dir(self.func),
516 argument_hash)
/nfs/sw/anaconda2/lib/python2.7/site-packages/joblib/memory.pyc in _get_argument_hash(self, *args, **kwargs)
505 def _get_argument_hash(self, *args, **kwargs):
506 return hashing.hash(filter_args(self.func, self.ignore,
--> 507 args, kwargs),
508 coerce_mmap=(self.mmap_mode is not None))
509
/nfs/sw/anaconda2/lib/python2.7/site-packages/joblib/func_inspect.pyc in filter_args(func, ignore_lst, args, kwargs)
228 repr(args)[1:-1],
229 ', '.join('%s=%s' % (k, v)
--> 230 for k, v in kwargs.items())
231 )
232 )
ValueError: Wrong number of arguments for my_sum(self, x):
my_sum(array([1, 2, 3, 4]), ) was called.
</code></pre>
<p>Is there a way to cache a member function of a class using Memory or any other decorators?</p>
| 0 | 2016-08-18T14:00:38Z | 39,515,143 | <p>the following excerpt is taken from <a href="https://pythonhosted.org/joblib/memory.html#gotchas" rel="nofollow">https://pythonhosted.org/joblib/memory.html#gotchas</a></p>
<blockquote>
<p>caching methods: you cannot decorate a method at class definition,
because when the class is instantiated, the first argument (self) is
bound, and no longer accessible to the Memory object. The following
code won't work:</p>
<pre><code>class Foo(object):

    @mem.cache # WRONG
    def method(self, args):
        pass
</code></pre>
<p>The right way to do this is to decorate at instantiation time:</p>
<pre><code>class Foo(object):

    def __init__(self, args):
        self.method = mem.cache(self.method)

    def method(self, ...):
        pass
</code></pre>
</blockquote>
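<p>The same instantiation-time wrapping works with any caching decorator that accepts a plain callable; here is a standard-library sketch using <code>functools.lru_cache</code> in place of joblib (so the argument must be hashable, e.g. a tuple rather than a numpy array):</p>

```python
from functools import lru_cache

class Foo(object):
    def __init__(self):
        # Wrap the bound method once `self` exists, mirroring the joblib pattern.
        self.my_sum = lru_cache(maxsize=None)(self.my_sum)

    def my_sum(self, x):
        return sum(x)

f = Foo()
print(f.my_sum((1, 2, 3, 4)))  # 10, computed
print(f.my_sum((1, 2, 3, 4)))  # 10, served from the cache this time
```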
| 1 | 2016-09-15T15:38:24Z | [
"python",
"caching",
"memory",
"decorator",
"joblib"
] |
Python & Scipy: How to fit a von mises distribution? | 39,020,222 | <p>I'm trying to fit a von Mises distribution, from scipy (<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html</a>)</p>
<p>So I tried </p>
<pre><code>from scipy.stats import vonmises
kappa = 3
r = vonmises.rvs(kappa, size=1000)
plt.hist(r, normed=True,alpha=0.2)
</code></pre>
<p>It returns </p>
<p><a href="http://i.stack.imgur.com/5h97k.png" rel="nofollow"><img src="http://i.stack.imgur.com/5h97k.png" alt="enter image description here"></a></p>
<p>However, when I fit the data on it </p>
<pre><code>vonmises.fit(r)
# returns (1.2222011312461918, 0.024913780423670054, 2.4243546157480105e-30)
vonmises.fit(r, loc=0, scale=1)
# returns (1.549290021706847, 0.0013319431181202394, 7.1653626652619939e-29)
</code></pre>
<p>But none of the value returned is the parameter of Von Mises, kappa.</p>
<p>What are the returned values? I feel the second is <code>loc</code>, i.e. the mean value, but I have no idea what the first returned value is.</p>
<p>And how should I fit a von mises distribution?</p>
| 0 | 2016-08-18T14:00:44Z | 39,020,834 | <p>The returned values are <code>kappa</code>, <code>loc</code> and <code>scale</code>. Unfortunately, the von Mises pdf does not seem to lend itself to fitting all three parameters at once. It does fit correctly if you fix the scale:</p>
<pre><code>>>> vonmises.fit(r, fscale=1)
(2.994517240859579, -0.0080482378119089287, 1)
</code></pre>
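<p>Fixing <code>loc</code> as well leaves only <code>kappa</code> free, which tends to be even more stable — a sketch (the seed and tolerance are arbitrary choices):</p>

```python
import numpy as np
from scipy.stats import vonmises

np.random.seed(0)
r = vonmises.rvs(3, size=1000)

# Fix loc and scale so the fit only has to estimate kappa.
kappa, loc, scale = vonmises.fit(r, floc=0, fscale=1)
print(kappa)  # should land near the true value of 3
```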
| 3 | 2016-08-18T14:28:11Z | [
"python",
"numpy",
"scipy"
] |
Why does my Python code skip a while loop? | 39,020,230 | <p>I'm building a guessing program for the user's input in the range 1 - 100.</p>
<p>Why does it skip the second while loop, where I check the user input? It goes straight on with the number 1.</p>
<pre><code>import random
nums_lasted = []
a = 0
while a < 101:
nums_lasted.append(a)
a += 1
secret_num = 1
while secret_num < 0 or secret_num > 100:
try:
secret_num = int(input("My number is"))
except ValueError:
print("No way that was an integer!")
guess_pc = 50
min = 50
max = 101
while True:
print("Is it", guess_pc,"?")
if guess_pc == secret_num:
print("Easy")
break
elif guess_pc > secret_num:
max = guess_pc
nums_lasted.append(guess_pc)
nums_lasted1 = [i for i in nums_lasted if i < guess_pc]
nums_lasted = nums_lasted1
elif guess_pc < secret_num:
min = guess_pc
nums_lasted.append(guess_pc)
nums_lasted1 = [i for i in nums_lasted if i < guess_pc]
nums_lasted = nums_lasted1
guess_pc = random.choice(nums_lasted)
</code></pre>
| 0 | 2016-08-18T14:00:55Z | 39,020,295 | <p>Because the condition <code>secret_num < 0 or secret_num > 100</code> is false for <code>secret_num == 1</code>; 1 is definitely between 0 and 100. You should set <code>secret_num</code> to something outside that range (e.g. larger than 100) for this to work as expected.</p>
| 1 | 2016-08-18T14:04:09Z | [
"python",
"while-loop"
] |
Why does my Python code skip a while loop? | 39,020,230 | <p>I'm building a guessing program for the user's input in the range 1 - 100.</p>
<p>Why does it skip the second while loop, where I check the user input? It goes straight on with the number 1.</p>
<pre><code>import random
nums_lasted = []
a = 0
while a < 101:
nums_lasted.append(a)
a += 1
secret_num = 1
while secret_num < 0 or secret_num > 100:
try:
secret_num = int(input("My number is"))
except ValueError:
print("No way that was an integer!")
guess_pc = 50
min = 50
max = 101
while True:
print("Is it", guess_pc,"?")
if guess_pc == secret_num:
print("Easy")
break
elif guess_pc > secret_num:
max = guess_pc
nums_lasted.append(guess_pc)
nums_lasted1 = [i for i in nums_lasted if i < guess_pc]
nums_lasted = nums_lasted1
elif guess_pc < secret_num:
min = guess_pc
nums_lasted.append(guess_pc)
nums_lasted1 = [i for i in nums_lasted if i < guess_pc]
nums_lasted = nums_lasted1
guess_pc = random.choice(nums_lasted)
</code></pre>
| 0 | 2016-08-18T14:00:55Z | 39,020,298 | <pre><code>secret_num = 1
while secret_num < 0 or secret_num > 100:
</code></pre>
<p>You set <code>secret_num</code> to <code>1</code>. The <code>while</code> body will only run when <code>secret_num</code> is less than 0 or greater than 100, so it will never be executed.</p>
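<p>One way to make the validation loop actually run (an assumption about the intent) is to start from a sentinel outside the valid range; sketched here with the typed values injected as a sequence so it is easy to follow:</p>

```python
def read_secret(inputs):
    """Simulate the question's validation loop over a sequence of typed values."""
    secret_num = -1  # sentinel outside 0..100, so the loop body runs at least once
    inputs = iter(inputs)
    while secret_num < 0 or secret_num > 100:
        try:
            secret_num = int(next(inputs))
        except ValueError:
            print("No way that was an integer!")
    return secret_num

print(read_secret(["abc", "250", "42"]))  # rejects "abc" and 250, accepts 42
```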
| 3 | 2016-08-18T14:04:12Z | [
"python",
"while-loop"
] |
Python web crawler (NameError: name 'spider' is not defined) | 39,020,253 | <p>I'm trying to run an example I found online at <a href="http://www.netinstructions.com/how-to-make-a-web-crawler-in-under-50-lines-of-python-code/" rel="nofollow">http://www.netinstructions.com/how-to-make-a-web-crawler-in-under-50-lines-of-python-code/</a></p>
<p>However, I'm running into issues when running it via the Python 3.5.2 Shell.</p>
<p><code>spider("http://www.dreamhost.com", "secure", 200)</code>
gives me the message:</p>
<pre><code>Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    spider("http://www.dreamhost.com", "secure", 200)
NameError: name 'spider' is not defined
</code></pre>
<pre><code>from html.parser import HTMLParser
from urllib.request import urlopen
from urllib import parse
class LinkParser(HTMLParser):
def handle_starttag(self, tag, attrs):
if tag == 'a':
for (key, value) in attrs:
if key == 'href':
newUrl = parse.urljoin(self.baseUrl, value)
self.links = self.links + [newUrl]
def getLinks(self, url):
self.links = []
self.baseUrl = url
response = urlopen(url)
if response.getheader('Content-Type')=='text/html':
htmlBytes = response.read()
htmlString = htmlBytes.decode("utf-8")
self.feed(htmlString)
return htmlString, self.links
else:
return "",[]
def spider(url, word, maxPages):
pagesToVisit = [url]
numberVisited = 0
foundWord = False
while numberVisited < maxPages and pagesToVisit != [] and not foundWord:
numberVisited = numberVisited +1
url = pagesToVisit[0]
pagesToVisit = pagesToVisit[1:]
try:
print(numberVisited, "Visiting:", url)
parser = LinkParser()
data, links = parser.getLinks(url)
if data.find(word)>-1:
foundWord = True
pagesToVisit = pagesToVisit + links
print(" **Success!**")
except:
print(" **Failed!**")
if foundWord:
print("The word", word, "was found at", url)
else:
print("Word never found")
</code></pre>
| 0 | 2016-08-18T14:02:00Z | 39,021,079 | <p>Yourno,</p>
<p>You have an indentation problem in your code. After the class definition there is no indentation before the methods <code>handle_starttag</code> and <code>getLinks</code>, and indentation is also missing in the <code>if-else</code> section of the function <code>spider</code>. Kindly check your code against the code posted on the link that you provided, and find the updated working code below:</p>
<pre><code>from html.parser import HTMLParser
from urllib.request import urlopen
from urllib import parse
# We are going to create a class called LinkParser that inherits some
# methods from HTMLParser which is why it is passed into the definition
class LinkParser(HTMLParser):
# This is a function that HTMLParser normally has
# but we are adding some functionality to it
def handle_starttag(self, tag, attrs):
# We are looking for the begining of a link. Links normally look
# like <a href="www.someurl.com"></a>
if tag == 'a':
for (key, value) in attrs:
if key == 'href':
# We are grabbing the new URL. We are also adding the
# base URL to it. For example:
# www.netinstructions.com is the base and
# somepage.html is the new URL (a relative URL)
#
# We combine a relative URL with the base URL to create
# an absolute URL like:
# www.netinstructions.com/somepage.html
newUrl = parse.urljoin(self.baseUrl, value)
# And add it to our colection of links:
self.links = self.links + [newUrl]
# This is a new function that we are creating to get links
# that our spider() function will call
def getLinks(self, url):
self.links = []
# Remember the base URL which will be important when creating
# absolute URLs
self.baseUrl = url
# Use the urlopen function from the standard Python 3 library
response = urlopen(url)
# Make sure that we are looking at HTML and not other things that
# are floating around on the internet (such as
# JavaScript files, CSS, or .PDFs for example)
if response.getheader('Content-Type')=='text/html':
htmlBytes = response.read()
# Note that feed() handles Strings well, but not bytes
# (A change from Python 2.x to Python 3.x)
htmlString = htmlBytes.decode("utf-8")
self.feed(htmlString)
return htmlString, self.links
else:
return "",[]
# And finally here is our spider. It takes in an URL, a word to find,
# and the number of pages to search through before giving up
def spider(url, word, maxPages):
pagesToVisit = [url]
numberVisited = 0
foundWord = False
# The main loop. Create a LinkParser and get all the links on the page.
# Also search the page for the word or string
# In our getLinks function we return the web page
# (this is useful for searching for the word)
# and we return a set of links from that web page
# (this is useful for where to go next)
    while numberVisited < maxPages and pagesToVisit != [] and not foundWord:
numberVisited = numberVisited +1
# Start from the beginning of our collection of pages to visit:
url = pagesToVisit[0]
pagesToVisit = pagesToVisit[1:]
try:
print(numberVisited, "Visiting:", url)
parser = LinkParser()
data, links = parser.getLinks(url)
if data.find(word)>-1:
foundWord = True
foundAtUrl = url
# Add the pages that we visited to the end of our collection
# of pages to visit:
pagesToVisit = pagesToVisit + links
print(" **Success!**")
#Added else, so if desired word not found, then make foundWord = False
else:
foundWord = False
except:
print(" **Failed!**")
#Moved this if-else condition block inside while loop, so for every url, it will give us message whether the desired word found or not
if foundWord:
print("The word", word, "was found at", url)
else:
print("Word never found")
spider("http://www.dreamhost.com", "secure", 200)
</code></pre>
<p>Kindly let me know, if you still have any issue/query.</p>
| 0 | 2016-08-18T14:38:37Z | [
"python",
"web-crawler"
] |
How to run Python 3 Pyinstaller when I have Pyinstaller installed for both python 2 and 3? | 39,020,353 | <p>I am on a windows machine. I wrote my application in python 3. I have Pyinstaller installed for both python 2 and 3. How do I call python 3 pyinstaller?</p>
| 1 | 2016-08-18T14:06:55Z | 39,025,222 | <p>I believe all you'd need to do is set the path for your python environment to point to the python3 install location. You'd then just use the <code>pip3 install pyinstaller</code> command and it should run. You can use the command <code>pyinstaller --version</code> to confirm.</p>
| 0 | 2016-08-18T18:32:47Z | [
"python",
"pyinstaller"
] |
Python scipy module import error due to missing ._ufuncs dll | 39,020,361 | <p>I have some troubles with sub-module integrate from scipy in python.
I have a 64 bits architecture, and it seems, according to the first lines of the python interpreter (see below) that I am also using a 64 bit build of Python together with Anaconda.</p>
<p>Below is the problem (I just wrote the minimal code to show what's happening)</p>
<hr>
<pre><code>Python 3.4.3 |Anaconda 2.3.0 (64-bit)| (default, Mar 6 2015, 12:06:10) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
>>> import scipy.integrate
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\*********\Anaconda3\lib\site-packages\scipy\integrate\__init__.py", line 55, in <module>
from .quadrature import *
File "C:\Users\*********\Anaconda3\lib\site-packages\scipy\integrate\quadrature.py", line 10, in <module>
from scipy.special.orthogonal import p_roots
File "C:\Users\*********\Anaconda3\lib\site-packages\scipy\special\__init__.py", line 636, in <module>
from ._ufuncs import *
ImportError: DLL load failed: Le module spécifié est introuvable.
</code></pre>
<hr>
<p>The same happens with import scipy.special</p>
<p>As you can see, scipy can be imported; however, scipy.integrate generates an error. What is odd is that in the folder
...\lib\site-packages\scipy\special, the ._ufuncs.pyd file does appear.
Also, I am using scipy regularly for other purposes, and everything usually works fine. </p>
<p>I am using version 0.18.0 of scipy and pip 1.8.1.
I tried to reinstall scipy with conda but this does not seem to change anything.</p>
<p>It seems that the dll cannot be found. I found a few posts on the internet with a similar issue (including one that advises downloading a "libmmd.dll" into C:\Windows\SysWOW64), but none of the suggested fixes seems to work. My guess is that this is still a problem of 32/64-bit compatibility, as this is the most common problem with python, and I remember having huge problems when first setting up everything a few months ago.</p>
<p>So, following up the initial question, is there a way to know which version (32 bit or 64 bit) of each package or dll is effectively installed/loaded?
Do you have another idea why I get this error message?</p>
<p>Thank you for your answers, this problem is quite frustrating...</p>
| 0 | 2016-08-18T14:07:14Z | 39,021,044 | <p>It appears the DLL load failed because the module specified is irretrievable? </p>
<p>Please see: </p>
<p><a href="http://stackoverflow.com/questions/19019720/importerror-dll-load-failed-1-is-not-a-valid-win32-application-but-the-dlls">ImportError: DLL load failed: %1 is not a valid Win32 application. But the DLL's are there</a></p>
| 0 | 2016-08-18T14:37:19Z | [
"python",
"dll",
"import",
"scipy"
] |
Which Python pattern to use for dynamically selecting and ordering a filtered list of functions? | 39,020,362 | <p>I have a growing list of general purpose re-usable validation checks, similar to this list below:</p>
<pre><code> 1. check_product_name()
2. check_product_price_within_range()
3. check_product_expiration_date()
4. check_product_sizes()
5. check_foo()
6. check_bar()
7. ...
</code></pre>
<p>I also have a growing list of clients. Client A may want to implement validations 1-5, while Client B may want to implement validations 3, 6, and 4 (in that order).</p>
<p>Right now I'm considering a simple approach like below. (Pseudo-code, the actual configuration will be stored within a database)</p>
<pre><code>client_a_validations = ['check_product_name', 'check_product_price_within_range', 'check_product_expiration_date', 'check_product_sizes', 'check_foo']
client_b_validations = ['check_product_expiration_date', 'check_foo', 'check_product_sizes']
</code></pre>
<p>My question is, is there a better more established pattern I could be referencing to solve this problem?</p>
| 0 | 2016-08-18T14:07:14Z | 39,021,143 | <p>As I understand Django can be a good example.
Django have two entities - form field and validator and one field can have multiple validators.</p>
<pre><code>from django import forms
class MyForm(forms.Form):
even_field = forms.IntegerField(validators=[validate_even, max_value])
</code></pre>
<p>So it's similar to your case and you chose a right pattern.</p>
<p>If number of validators really big you can define a group of validators.</p>
<p>For example:</p>
<pre><code>check_all_product_specs = [check_product_name, check_product_price_within_range,
check_product_expiration_date]
client_a_validations = [check_all_product_specs, check_bar]
</code></pre>
<p>And before validation just expand this nested structure to flat list.</p>
| 1 | 2016-08-18T14:41:06Z | [
"python",
"design-patterns"
] |
Python: define a string variable pattern | 39,020,470 | <p>I'm new to python.</p>
<p>I have a program that reads from <code>str(sys.argv[1])</code>:</p>
<pre><code>myprogram.py "some date" # I'd like this in YYYYMMDD format. I.e:
myprogram.py 20160806
if __name__ == '__main__':
if str(sys.argv[1]):
CTRL = str(sys.argv[1])
print "some stuff"
sys.exit()
</code></pre>
<p>I need "some date" in YYYYMMDD format. How could it be possible? I've googled variable mask, variable pattern and nothing came out.</p>
<p>Thanks for your help and patience.</p>
<hr>
<p>UPDATE:</p>
<p>Fortunately all answers helped me! </p>
<p>As the CTRL variable gaves me <em>2016-08-17 00:00:00</em> format, I had to convert it to <em>20160817</em>. Here is the code that worked for me:</p>
<pre><code>if str(sys.argv[1]):
CTRL_args = str(sys.argv[1])
try:
CTRL = str(datetime.datetime.strptime(CTRL_args, "%Y%m%d")).strip().split(" ")[0].replace("-","").replace(" ","").replace(":","")
# do some stuff
except ValueError:
print('Wrong format!')
sys.exit()
</code></pre>
| 0 | 2016-08-18T14:11:54Z | 39,020,712 | <p>If I understand you correctly, you want to pass an argument in YYYYMMDD format.</p>
<p>There is nothing stopping you from doing this with your script.</p>
<p>You can run: <code>python yourscript.py YYYYMMDD</code> and a string "YYYYMMDD" will be stored in your CTRL variable.</p>
<hr>
<p>UPDATE:</p>
<p>The following routine does the checks you are asking for:</p>
<p>Let me know if you need me to explain any of it!</p>
<pre><code>import sys
from datetime import datetime
if __name__ == '__main__':
try:
CTRL = str(sys.argv[1])
except IndexError:
print "You have not specified a date!"
sys.exit()
try:
parced_CTRL = datetime.strptime(CTRL, "%Y%m%d")
except ValueError:
print "Please input date in the YYYYMMDD format!"
sys.exit()
print "Date is in the correct format!"
print "Data = {}".format(parced_CTRL)
</code></pre>
<hr>
<p>UPDATE:</p>
<p>If you want to parse dates in the YYYY-MM-DD format you need to do this instead:</p>
<pre><code>parced_CTRL = datetime.strptime(CTRL, "%Y-%m-%d")
</code></pre>
<p>I think it is self explanatory... ;)</p>
| 0 | 2016-08-18T14:23:36Z | [
"python"
] |
Python: define a string variable pattern | 39,020,470 | <p>I'm new to python.</p>
<p>I have a program that reads from <code>str(sys.argv[1])</code>:</p>
<pre><code>myprogram.py "some date" # I'd like this in YYYYMMDD format. I.e:
myprogram.py 20160806
if __name__ == '__main__':
if str(sys.argv[1]):
CTRL = str(sys.argv[1])
print "some stuff"
sys.exit()
</code></pre>
<p>I need "some date" in YYYYMMDD format. How could it be possible? I've googled variable mask, variable pattern and nothing came out.</p>
<p>Thanks for your help and patience.</p>
<hr>
<p>UPDATE:</p>
<p>Fortunately all answers helped me! </p>
<p>As the CTRL variable gaves me <em>2016-08-17 00:00:00</em> format, I had to convert it to <em>20160817</em>. Here is the code that worked for me:</p>
<pre><code>if str(sys.argv[1]):
CTRL_args = str(sys.argv[1])
try:
CTRL = str(datetime.datetime.strptime(CTRL_args, "%Y%m%d")).strip().split(" ")[0].replace("-","").replace(" ","").replace(":","")
# do some stuff
except ValueError:
print('Wrong format!')
sys.exit()
</code></pre>
| 0 | 2016-08-18T14:11:54Z | 39,020,726 | <p>You need the function <strong>datetime.strptime</strong> with the format string <strong>%Y%m%d</strong>:</p>
<pre><code>import sys
from datetime import datetime
if __name__ == '__main__':
if str(sys.argv[1]):
CTRL = str(sys.argv[1])
try:
print datetime.strptime(CTRL, "%Y%m%d")
except ValueError:
print 'Wrong format'
sys.exit()
</code></pre>
<p>Output:</p>
<pre><code>$ python example.py 20160817
2016-08-17 00:00:00
</code></pre>
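<p>If you want the validated value back as a <code>YYYYMMDD</code> string (rather than the default <code>datetime</code> output above), <code>strftime</code> does the round trip cleanly:</p>

```python
from datetime import datetime

# Parse the command-line style string, then format it back.
parsed = datetime.strptime("20160817", "%Y%m%d")
print(parsed.strftime("%Y%m%d"))  # 20160817
```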
| 2 | 2016-08-18T14:24:03Z | [
"python"
] |
Triple nested dictionary comprehension? | 39,020,492 | <p>Suppose I have a <code>pandas</code> <code>Series</code>, like this:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello go home bye bye", "you can't always get", "what you waaaaaaant", "apple banana carrot munch 123"])
</code></pre>
<p>I want to create a dictionary with individual characters as keys, and their frequencies as values. Creating these dictionaries for words in the past has been easy with the help of <code>collections.Counter</code>:</p>
<pre><code>from collections import Counter
c = Counter(word for row in s for word in row.lower().split())
</code></pre>
<p>However, I'm trying now to store individual characters and am having some issues with triple-nested dict comprehensions. Here's what I have:</p>
<pre><code>c = Counter((letter for letter in word) for word for row in s for word in row.lower().split())
</code></pre>
<p>Which gives me a syntax error. How can I make the equivalent of the following <code>for</code> loop in one line?</p>
<pre><code>d = {}
for row in s:
for word in row.lower().split():
for letter in word:
d[letter] += 1
</code></pre>
| 3 | 2016-08-18T14:12:43Z | 39,020,558 | <p>I think you can use</p>
<pre><code>Counter([j for i in s for j in i])
Counter({'a': 16, ' ': 13, 'e': 6, 'o': 6, 'n': 5, 't': 5, 'y': 5, 'h': 4, 'l': 4, 'c': 3, 'b': 3, 'u': 3, 'w': 3, 'g': 2, 'm': 2, 'p': 2, 'r': 2, "'": 1, '1': 1, '3': 1, '2': 1, 's': 1})
</code></pre>
<p>to get the individual character count.</p>
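<p>The triple-nested comprehension the question aims for also works directly, and skips the spaces by splitting first (the short plain list here stands in for the <code>Series</code>):</p>

```python
from collections import Counter

s = ["hello go home bye bye", "you can't always get"]  # stand-in for the Series
c = Counter(letter
            for row in s
            for word in row.lower().split()
            for letter in word)

print(c['e'])      # 5
print(' ' in c)    # False -- splitting drops the whitespace
```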
| 2 | 2016-08-18T14:16:27Z | [
"python",
"dictionary",
"list-comprehension"
] |
Triple nested dictionary comprehension? | 39,020,492 | <p>Suppose I have a <code>pandas</code> <code>Series</code>, like this:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello go home bye bye", "you can't always get", "what you waaaaaaant", "apple banana carrot munch 123"])
</code></pre>
<p>I want to create a dictionary with individual characters as keys, and their frequencies as values. Creating these dictionaries for words in the past has been easy with the help of <code>collections.Counter</code>:</p>
<pre><code>from collections import Counter
c = Counter(word for row in s for word in row.lower().split())
</code></pre>
<p>However, I'm trying now to store individual characters and am having some issues with triple-nested dict comprehensions. Here's what I have:</p>
<pre><code>c = Counter((letter for letter in word) for word for row in s for word in row.lower().split())
</code></pre>
<p>Which gives me a syntax error. How can I make the equivalent of the following <code>for</code> loop in one line?</p>
<pre><code>d = {}
for row in s:
for word in row.lower().split():
for letter in word:
d[letter] += 1
</code></pre>
| 3 | 2016-08-18T14:12:43Z | 39,020,598 | <p>Just iterate over the characters of each row, calling <em>.lower()</em> as you go, which flattens the list of lists:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello go home bye bye", "you can't always get", "what you waaaaaaant", "apple banana carrot munch 123"])
from collections import Counter
print(Counter(word.lower() for row in s for word in row))
</code></pre>
<p>or chain with map:</p>
<pre><code>from collections import Counter
from itertools import chain
print(Counter(chain.from_iterable(map(str.lower, s))))
</code></pre>
<p>Both would give you:</p>
<pre><code>Counter({'a': 16, ' ': 13, 'e': 6, 'o': 6, 'n': 5, 't': 5, 'y': 5, 'h': 4, 'l': 4, 'c': 3, 'b': 3, 'u': 3, 'w': 3, 'g': 2, 'm': 2, 'p': 2, 'r': 2, "'": 1, '1': 1, '3': 1, '2': 1, 's': 1})
</code></pre>
<p>You can also use <em>apply</em> or <em>s.str.lower()</em></p>
<pre><code>print(Counter(chain.from_iterable(s.apply(str.lower))))
print(Counter(chain.from_iterable(s.str.lower())))
</code></pre>
| 2 | 2016-08-18T14:18:23Z | [
"python",
"dictionary",
"list-comprehension"
] |
Triple nested dictionary comprehension? | 39,020,492 | <p>Suppose I have a <code>pandas</code> <code>Series</code>, like this:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello go home bye bye", "you can't always get", "what you waaaaaaant", "apple banana carrot munch 123"])
</code></pre>
<p>I want to create a dictionary with individual characters as keys, and their frequencies as values. Creating these dictionaries for words in the past has been easy with the help of <code>collections.Counter</code>:</p>
<pre><code>from collections import Counter
c = Counter(word for row in s for word in row.lower().split())
</code></pre>
<p>However, I'm trying now to store individual characters and am having some issues with triple-nested dict comprehensions. Here's what I have:</p>
<pre><code>c = Counter((letter for letter in word) for word for row in s for word in row.lower().split())
</code></pre>
<p>Which gives me a syntax error. How can I make the equivalent of the following <code>for</code> loop in one line?</p>
<pre><code>d = {}
for row in s:
for word in row.lower().split():
for letter in word:
d[letter] += 1
</code></pre>
| 3 | 2016-08-18T14:12:43Z | 39,020,638 | <p>Using pandas:</p>
<pre><code>n [6]: pd.Series(list(''.join(s))).value_counts()
Out[6]:
a 16
13
e 6
o 6
n 5
t 5
y 5
h 4
l 4
u 3
b 3
c 3
w 3
p 2
m 2
r 2
g 2
1 1
s 1
' 1
2 1
3 1
dtype: int64
In [7]: dict(pd.Series(list(''.join(s))).value_counts())
Out[7]:
{' ': 13,
"'": 1,
'1': 1,
'2': 1,
'3': 1,
'a': 16,
'b': 3,
'c': 3,
'e': 6,
'g': 2,
'h': 4,
'l': 4,
'm': 2,
'n': 5,
'o': 6,
'p': 2,
'r': 2,
's': 1,
't': 5,
'u': 3,
'w': 3,
'y': 5}
</code></pre>
| 2 | 2016-08-18T14:20:04Z | [
"python",
"dictionary",
"list-comprehension"
] |
Triple nested dictionary comprehension? | 39,020,492 | <p>Suppose I have a <code>pandas</code> <code>Series</code>, like this:</p>
<pre><code>import pandas as pd
s = pd.Series(["hello go home bye bye", "you can't always get", "what you waaaaaaant", "apple banana carrot munch 123"])
</code></pre>
<p>I want to create a dictionary with individual characters as keys, and their frequencies as values. Creating these dictionaries for words in the past has been easy with the help of <code>collections.Counter</code>:</p>
<pre><code>from collections import Counter
c = Counter(word for row in s for word in row.lower().split())
</code></pre>
<p>However, I'm trying now to store individual characters and am having some issues with triple-nested dict comprehensions. Here's what I have:</p>
<pre><code>c = Counter((letter for letter in word) for word for row in s for word in row.lower().split())
</code></pre>
<p>Which gives me a syntax error. How can I make the equivalent of the following <code>for</code> loop in one line?</p>
<pre><code>d = {}
for row in s:
for word in row.lower().split():
for letter in word:
d[letter] += 1
</code></pre>
| 3 | 2016-08-18T14:12:43Z | 39,020,989 | <p>You want this :</p>
<pre><code>dict(zip([letter for row in s for word in row.lower().split() for letter in word], range(len([letter for row in s for word in row.lower().split() for letter in word]))))
</code></pre>
| 1 | 2016-08-18T14:34:56Z | [
"python",
"dictionary",
"list-comprehension"
] |
Heroku R10 error in Python app | 39,020,501 | <p>I'm having a problem running my Python app in Heroku. My app scrapes a website for weather data and performs a few calculations on it. Pulling the data doesn't take more than a few seconds. When I open my app, my logs show the output of my app, but my app page continues loading until I get an R10 error, then the app page crashes. I can run my app using a one-off dyno from the command line just fine. Here are my logs:</p>
<pre><code> 2016-08-18T13:58:34.073915+00:00 heroku[web.1]: Starting process with command `python WebScraper.py`
2016-08-18T13:58:40.982330+00:00 app[web.1]: http://170.94.200.136/weather/Inversion.aspx
2016-08-18T13:58:41.005187+00:00 app[web.1]: Station Low Temp (F) Time of Low Current Temp (F) Current Time \
2016-08-18T13:58:41.005202+00:00 app[web.1]: 0 0.0 0 74.3 8:49 AM
2016-08-18T13:58:41.005205+00:00 app[web.1]: Arkansas: no inversion and spray OK
2016-08-18T13:59:34.319236+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2016-08-18T13:59:34.319458+00:00 heroku[web.1]: Stopping process with SIGKILL
2016-08-18T13:59:34.443283+00:00 heroku[web.1]: Process exited with status 137
2016-08-18T13:59:34.459932+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-18T13:59:34.460745+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-18T13:59:39.598773+00:00 heroku[web.1]: Starting process with command `python WebScraper.py`
2016-08-18T13:59:46.087192+00:00 app[web.1]: http://170.94.200.136/weather/Inversion.aspx
2016-08-18T13:59:46.099928+00:00 app[web.1]: Station Low Temp (F) Time of Low Current Temp (F) Current Time \
2016-08-18T13:59:46.100543+00:00 app[web.1]: Ashley: strong inversion and no spray suggested
2016-08-18T14:00:07.473438+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=temperature-inversion.herokuapp.com request_id=c477a7b2-d755-475a-ad11-f857764386b6 fwd="199.133.80.68" dyno= connect= service= status=503 bytes=
2016-08-18T14:00:39.933670+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2016-08-18T14:00:39.933712+00:00 heroku[web.1]: Stopping process with SIGKILL
2016-08-18T14:00:40.095100+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-18T14:00:40.079022+00:00 heroku[web.1]: Process exited with status 137
2016-08-18T14:00:42.882673+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=temperature-inversion.herokuapp.com request_id=c5ee4daf-9825-4c53-8f9c-852b9a3eaae2 fwd="199.133.80.68" dyno= connect= service= status=503 bytes=
2016-08-18T14:00:44.549938+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=temperature-inversion.herokuapp.com request_id=f45183be-4b77-429e-91be-dfcf832ec3ca fwd="199.133.80.68" dyno= connect= service= status=503 bytes=
</code></pre>
| 0 | 2016-08-18T14:13:03Z | 39,022,423 | <p>Heroku expects the <code>web</code> process to bind to <code>$PORT</code> within 60 seconds. You need to use a different kind of process (e.g., <code>bot</code>) in your Procfile and scale the different processes (similar to <a href="http://stackoverflow.com/questions/35541027/python-twitter-bot-w-heroku-error-r10-boot-timeout">Python Twitter Bot w/ Heroku Error: R10 Boot Timeout</a>)</p>
| 0 | 2016-08-18T15:43:07Z | [
"python",
"python-2.7",
"heroku"
] |
How do Luigi parameters work? | 39,020,591 | <p>So I have two tasks (let's say TaskA and TaskB). I want both tasks to run hourly, but TaskB requires TaskA. TaskB does not have any parameters, but TaskA has two parameters for the day and the hour. If I run TaskB on the command line, would I need to pass it arguments?</p>
| 0 | 2016-08-18T14:18:01Z | 39,028,831 | <p>In general, you would not need to pass the parameters for Task A to Task B, but Task B would then need to generate the values of those parameters for Task A. If Task B can not generate those parameters, you would have to setup Task B to take those parameters in from the command line, and then pass them through to the Task A constructor in the requires method. </p>
| 1 | 2016-08-18T23:09:31Z | [
"python",
"luigi"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.