| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
returning the element in one list based on the index and maximum value of an element in another list | 38,953,807 | <p>I have a list of lists which I converted into a numpy array:</p>
<pre><code>lsts = ([[1,2,3], ['a','b','a']],
[[4,5,6,7], ['a','a','b','b']],
[[1,2,3],['b','a','b']])
np_lsts = np.array(lsts)
</code></pre>
<p>I want to return the largest element in the first list where a 'b' occurs in the second list. I think I have to use indexes but am stuck!</p>
<p>i.e. I want to return (2, 7, 3) in this case</p>
| 0 | 2016-08-15T10:58:47Z | 38,954,333 | <p>This should be a lot more efficient than the current solutions if the sublists have a lot of elements, since it is vectorized over those sublists:</p>
<pre><code>import numpy_indexed as npi
results = [npi.group_by(k).max(v) for v,k in lsts]
</code></pre>
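<p>As a sanity check without the extra dependency, here is a plain-Python sketch over the question's <code>lsts</code> that picks, for each pair of sublists, the maximum value whose label is <code>'b'</code>:</p>

```python
# Plain-Python sketch (no numpy_indexed needed), using lsts from the question.
lsts = ([[1, 2, 3], ['a', 'b', 'a']],
        [[4, 5, 6, 7], ['a', 'a', 'b', 'b']],
        [[1, 2, 3], ['b', 'a', 'b']])

# For each (values, labels) pair, keep the max value where the label is 'b'.
result = tuple(max(v for v, k in zip(values, labels) if k == 'b')
               for values, labels in lsts)
print(result)  # (2, 7, 3)
```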
| 0 | 2016-08-15T11:37:10Z | [
"python",
"numpy"
] |
Azure Bottle web app and numpy | 38,953,826 | <p>after publishing my Bottle web app, which uses the NumPy library, it says</p>
<blockquote>
<p>The page cannot be displayed because an internal server error has occurred.</p>
</blockquote>
<p>On localhost it works. I used a virtual environment as described in </p>
<blockquote>
<p><a href="http://stackoverflow.com/questions/23831479/use-numpy-scipy-in-azure-web-role">Use numpy & scipy in Azure web role</a></p>
</blockquote>
<p>but it still does not work. Can someone help me with the Azure/Python/NumPy configuration?</p>
| -1 | 2016-08-15T11:00:37Z | 38,967,461 | <p>@Mr.Green, in my experience, you should first refer to the tutorial <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-bottle-app/" rel="nofollow">Creating web apps with Bottle in Azure</a> to make sure your Bottle web app has been deployed correctly on Azure.</p>
<p>Second, if you are using Visual Studio, you can install Python Tools for Visual Studio for remote debugging on Azure; see the wiki page <a href="https://github.com/Microsoft/PTVS/wiki/Azure-Remote-Debugging" rel="nofollow">https://github.com/Microsoft/PTVS/wiki/Azure-Remote-Debugging</a> to learn how.</p>
<p>Finally, and most importantly: according to the <a href="https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-bottle-app/#troubleshooting---package-installation" rel="nofollow">troubleshooting</a> section of the tutorial, you need to download the <code>numpy</code> wheel package from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy</a> and install it for your Python web app so that <code>numpy</code> runs on Azure. Packages like <code>numpy</code> need a compiler when installed via <code>pip</code>, and no compiler is available on the machine running the web app in Azure App Service, so only wheel packages can be installed.</p>
<p>Note: if the web app still does not work, check whether the <code>numpy</code> package path has been appended to the Python system path, and try adding the code below to fix it.</p>
<pre><code>import sys, os
sys.path.append(os.path.join(os.getcwd(), "<numpy-package-path, such as 'site-package'>"))
</code></pre>
| 0 | 2016-08-16T05:51:35Z | [
"python",
"azure",
"numpy",
"azure-web-sites",
"bottle"
] |
Failed to coerce slice entry of type TensorVariable to integer | 38,953,854 | <p>I was following the code in one of the books about deep learning, where the author uses Theano as the library for this kind of network. When I try to run the code: </p>
<pre><code>i = T.lscalar() # mini-batch index
train_mb = theano.function(
[i], cost, updates=updates,
givens={
self.x:
training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
self.y:
training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]
})
</code></pre>
<p>I get the following error: "IndexError: failed to coerce slice entry of type TensorVariable to integer".<br>
The call of that theano function looks like this: </p>
<pre><code>cost_ij = train_mb(minibatch_index)
</code></pre>
<p>So, basically, it looks like <code>i</code> is not evaluated and Python tries to use a TensorVariable instead of a normal integer, even though I pass a normal integer as the function parameter. Can anyone point out what I am doing wrong here?</p>
| 0 | 2016-08-15T11:02:46Z | 39,064,585 | <p>It looks like the same problem I had;<br>
please see <a href="http://stackoverflow.com/a/39064543/6740140">http://stackoverflow.com/a/39064543/6740140</a>.<br>
I hope it helps.</p>
| 0 | 2016-08-21T13:13:26Z | [
"python",
"theano"
] |
pip install from specific setup.py | 38,953,855 | <p>I created a python 3.3 app on RedHat's Openshift cloud service. By default it has setup.py for my project. I'm learning Udemy course called "Build a SaaS app with Flask" (<a href="https://github.com/nickjj/build-a-saas-app-with-flask" rel="nofollow">source code</a>) Now I wanted to use python-click, as recommended by the course. It needs another setup.py for cli project; so to put that file in the project root folder, I renamed it to <strong>setup_cli.py</strong>. Now there are two files: <strong>setup.py</strong> and <strong>setup_cli.py</strong>.
Pip install seems to automatically look into setup.py.</p>
<pre><code># See Dockerfile in github source
pip install --editable <from setup_cli.py>
</code></pre>
<p>Can <code>pip install --editable</code> be used to point to setup_cli.py?</p>
| 1 | 2016-08-15T11:02:52Z | 38,957,066 | <p>it seems that you can't do anything about it :-) - It's hard coded in pip source code :-)</p>
<p>if you try to use this command: <code>pip install -e .</code></p>
<p>it will call a method named <a href="https://github.com/pypa/pip/blob/master/pip/req/req_install.py#L1115" rel="nofollow">parse_editable</a> which will run <a href="https://github.com/pypa/pip/blob/master/pip/req/req_install.py#L1140" rel="nofollow">this</a> line </p>
<pre><code>if not os.path.exists(os.path.join(url_no_extras, 'setup.py')):
raise InstallationError(
"Directory %r is not installable. File 'setup.py' not found." %
url_no_extras
)
</code></pre>
<p>or you may want to use this command <code>pip install -e file:///full/path/to/setup_cli.py</code></p>
<p>but this command also appends a hard coded <code>setup.py</code> to your path :-)</p>
<p>in method <a href="https://github.com/pypa/pip/blob/master/pip/req/req_install.py#L372" rel="nofollow">setup_py</a> there is <a href="https://github.com/pypa/pip/blob/master/pip/req/req_install.py#L387" rel="nofollow">this</a> line:</p>
<pre><code>setup_py = os.path.join(self.setup_py_dir, 'setup.py')
</code></pre>
<p>so, as @cel commented, it seems that <code>python <whatever-setup.py> develop</code> is your only option</p>
| 1 | 2016-08-15T14:28:33Z | [
"python",
"pip",
"setup.py"
] |
Python TypeError: sequence item 0: expected string, list found in message body of email code | 38,953,859 | <p>I am trying to send some data in the body part of my email message. I am calling 2 functions which return a list. I would like to include these in the email body. I am getting the error TypeError: sequence item 0: expected string</p>
<pre><code> Traceback (most recent call last):
File "E:/test_runners 2 edit project in progress add more tests/selenium_regression_test_5_1_1/Email/email_selenium_report.py", line 32, in <module>
report.send_report_summary_from_htmltestrunner_selenium_report2()
File "E:\test_runners 2 edit project in progress add more tests\selenium_regression_test_5_1_1\Email\report.py", line 318, in send_report_summary_from_htmltestrunner_selenium_report2
'\n'.join(extract_testcases_from_report_htmltestrunner()) +
TypeError: sequence item 0: expected string, list found
</code></pre>
<p>My email code is:</p>
<pre><code>def send_report_summary_from_htmltestrunner_selenium_report2():
print extract_only_header_from_summary_from_report_htmltestrunner()
print extract_header_count__from_summary_from_report_htmltestrunner()
all_testcases = list(extract_testcases_from_report_htmltestrunner())
# print all_data
pprint.pprint(all_testcases)
msg = MIMEText("\n ClearCore 5_1_1 Automated GUI Test_IE11_Selenium_VM \n " +
'\n'.join(extract_only_header_from_summary_from_report_htmltestrunner()) +
'\n'.join(extract_header_count__from_summary_from_report_htmltestrunner()) +
'\n'.join(extract_testcases_from_report_htmltestrunner()) +
"\n Report location = : \\storage-1\Testing\Selenium_Test_Report_Results\ClearCore_5_1_1\Selenium VM\IE11 \n")
msg['Subject'] = "ClearCore 5_1_1 Automated GUI Test"
msg['to'] = "riaz.ladhani@company.onmicrosoft.com"
msg['From'] = "system@company.com"
s = smtplib.SMTP()
s.connect(host=SMTP_SERVER)
s.sendmail(msg['From'], msg['To'], msg.as_string())
s.close()
</code></pre>
<p>My 3 functions which return the list are:</p>
<pre><code>def extract_only_header_from_summary_from_report_htmltestrunner():
filename = (r"E:\test_runners 2 edit project\selenium_regression_test_5_1_1\TestReport\ClearCore501_Automated_GUI_TestReport.html")
html_report_part = open(filename,'r')
soup = BeautifulSoup(html_report_part, "html.parser")
table = soup.select_one("#result_table")
#Create list here...
results = []
headers = [td.text for td in table.select_one("#header_row").find_all("td")[1:-1]]
# print(" ".join(headers))
#Don't forget to append header (if you want)
results.append(headers)
return results
def extract_header_count__from_summary_from_report_htmltestrunner():
filename = (r"E:\test_runners 2 edit project\selenium_regression_test_5_1_1\TestReport\ClearCore501_Automated_GUI_TestReport.html")
html_report_part = open(filename,'r')
soup = BeautifulSoup(html_report_part, "html.parser")
table = soup.select_one("#result_table")
#Create list here...
results = []
for row in table.select("tr.passClass"):
#Store row string in variable and append before printing
row_str = " ".join([td.text for td in row.find_all("td")[1:-1]])
results.append(row_str)
# print(row_str)
return results
def extract_testcases_from_report_htmltestrunner():
filename = (r"E:\test_runners 2 edit project\selenium_regression_test_5_1_1\TestReport\ClearCore501_Automated_GUI_TestReport.html")
html_report_part = open(filename,'r')
soup = BeautifulSoup(html_report_part, "html.parser")
for div in soup.select("#result_table tr div.testcase"):
yield div.text.strip().encode('utf-8'), div.find_next("a").text.strip().encode('utf-8')
</code></pre>
<p>How can I include the return values from the functions into the email body?</p>
| -1 | 2016-08-15T11:03:02Z | 38,954,045 | <p>Your function extract_testcases_from_report_htmltestrunner yields tuples, not single strings. So basically, you might end up with something like this:</p>
<pre><code>'\n'.join([('a','b'), ('c', 'd')])
</code></pre>
<p>And this simply does not work in Python.</p>
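<p>A hedged sketch of one way around it: join each tuple into a line first, then join the lines:</p>

```python
# Stand-in data shaped like the generator's output in the question.
pairs = [('a', 'b'), ('c', 'd')]

# Join each tuple into a single string first, then join the resulting lines.
text = '\n'.join(' '.join(pair) for pair in pairs)
print(repr(text))  # 'a b\nc d'
```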
| 0 | 2016-08-15T11:15:21Z | [
"python",
"python-2.7"
] |
Python TypeError: sequence item 0: expected string, list found in message body of email code | 38,953,859 | <p>I am trying to send some data in the body part of my email message. I am calling 2 functions which return a list. I would like to include these in the email body. I am getting the error TypeError: sequence item 0: expected string</p>
<pre><code> Traceback (most recent call last):
File "E:/test_runners 2 edit project in progress add more tests/selenium_regression_test_5_1_1/Email/email_selenium_report.py", line 32, in <module>
report.send_report_summary_from_htmltestrunner_selenium_report2()
File "E:\test_runners 2 edit project in progress add more tests\selenium_regression_test_5_1_1\Email\report.py", line 318, in send_report_summary_from_htmltestrunner_selenium_report2
'\n'.join(extract_testcases_from_report_htmltestrunner()) +
TypeError: sequence item 0: expected string, list found
</code></pre>
<p>My email code is:</p>
<pre><code>def send_report_summary_from_htmltestrunner_selenium_report2():
print extract_only_header_from_summary_from_report_htmltestrunner()
print extract_header_count__from_summary_from_report_htmltestrunner()
all_testcases = list(extract_testcases_from_report_htmltestrunner())
# print all_data
pprint.pprint(all_testcases)
msg = MIMEText("\n ClearCore 5_1_1 Automated GUI Test_IE11_Selenium_VM \n " +
'\n'.join(extract_only_header_from_summary_from_report_htmltestrunner()) +
'\n'.join(extract_header_count__from_summary_from_report_htmltestrunner()) +
'\n'.join(extract_testcases_from_report_htmltestrunner()) +
"\n Report location = : \\storage-1\Testing\Selenium_Test_Report_Results\ClearCore_5_1_1\Selenium VM\IE11 \n")
msg['Subject'] = "ClearCore 5_1_1 Automated GUI Test"
msg['to'] = "riaz.ladhani@company.onmicrosoft.com"
msg['From'] = "system@company.com"
s = smtplib.SMTP()
s.connect(host=SMTP_SERVER)
s.sendmail(msg['From'], msg['To'], msg.as_string())
s.close()
</code></pre>
<p>My 3 functions which return the list are:</p>
<pre><code>def extract_only_header_from_summary_from_report_htmltestrunner():
filename = (r"E:\test_runners 2 edit project\selenium_regression_test_5_1_1\TestReport\ClearCore501_Automated_GUI_TestReport.html")
html_report_part = open(filename,'r')
soup = BeautifulSoup(html_report_part, "html.parser")
table = soup.select_one("#result_table")
#Create list here...
results = []
headers = [td.text for td in table.select_one("#header_row").find_all("td")[1:-1]]
# print(" ".join(headers))
#Don't forget to append header (if you want)
results.append(headers)
return results
def extract_header_count__from_summary_from_report_htmltestrunner():
filename = (r"E:\test_runners 2 edit project\selenium_regression_test_5_1_1\TestReport\ClearCore501_Automated_GUI_TestReport.html")
html_report_part = open(filename,'r')
soup = BeautifulSoup(html_report_part, "html.parser")
table = soup.select_one("#result_table")
#Create list here...
results = []
for row in table.select("tr.passClass"):
#Store row string in variable and append before printing
row_str = " ".join([td.text for td in row.find_all("td")[1:-1]])
results.append(row_str)
# print(row_str)
return results
def extract_testcases_from_report_htmltestrunner():
filename = (r"E:\test_runners 2 edit project\selenium_regression_test_5_1_1\TestReport\ClearCore501_Automated_GUI_TestReport.html")
html_report_part = open(filename,'r')
soup = BeautifulSoup(html_report_part, "html.parser")
for div in soup.select("#result_table tr div.testcase"):
yield div.text.strip().encode('utf-8'), div.find_next("a").text.strip().encode('utf-8')
</code></pre>
<p>How can I include the return values from the functions into the email body?</p>
| -1 | 2016-08-15T11:03:02Z | 38,954,071 | <p>Both <code>extract_only_header_from_summary_from_report_htmltestrunner()</code> and <code>extract_testcases_from_report_htmltestrunner()</code> produce sequences of sequences (the first a list of lists, the other a generator yielding tuples). Neither are suitable for <code>str.join()</code>, which can only take a sequence of <em>strings</em>.</p>
<p>Either join those nested sequences individually, or flatten out the sequences.</p>
<p>You can use a nested list comprehension to join the nested sequences:</p>
<pre><code>'\n'.join([' - '.join(seq) for seq in extract_only_header_from_summary_from_report_htmltestrunner()])
</code></pre>
<p>or just nest the loops to flatten out the nested structures:</p>
<pre><code>'\n'.join([elem
for seq in extract_only_header_from_summary_from_report_htmltestrunner()
for elem in seq])
</code></pre>
<p>The latter can also be achieved with the <a href="https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>itertools.chain.from_iterable()</code> function</a>:</p>
<pre><code>from itertools import chain

'\n'.join(chain.from_iterable(extract_only_header_from_summary_from_report_htmltestrunner()))
</code></pre>
<p>The difference is in what you want the result to be: flatten out if the elements of <code>[('a', 'b'), ('c', 'd')]</code> should all end up on separate lines; use a nested <code>str.join()</code> loop if you need to join <code>'a'</code> and <code>'b'</code> together differently, then join that result with the joined result of <code>'c'</code> and <code>'d'</code> with a newline in between.</p>
<p>Note that <code>extract_only_header_from_summary_from_report_htmltestrunner()</code> appears to create a nested list for just one element:</p>
<pre><code>results = []
headers = [td.text for td in table.select_one("#header_row").find_all("td")[1:-1]]
results.append(headers)
return results
</code></pre>
<p>That's just <em>one</em> <code>results.append()</code> call; you could just as well return <code>headers</code> there and avoid having to unwrap in the first place.</p>
| 2 | 2016-08-15T11:17:39Z | [
"python",
"python-2.7"
] |
List and Numpy array in Python | 38,953,903 | <p>I was trying to one-hot encode data.
The data is a list of length vocabulary_size = 17005207.</p>
<p>To one-hot encode it, I made a list of inputs with num_labels = 100, using the
following code:</p>
<pre><code>inputs = []
for i in range(vocabulary_size):
inputs.append(np.arange(num_labels) == data[i]).astype(np.float32)
</code></pre>
<p>Throws me an Error:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'astype'
</code></pre>
<p>I tried dtype=np.float32 inside the append call, but that was also erroneous.<br>
When I try this :</p>
<pre><code>inputs = []
for i in range(vocabulary_size):
inputs.append(np.arange(num_labels) == data[i])
inputs = np.array(inputs,dtype=np.float32)
</code></pre>
<p>I get the correct answer: a hot-encoded input sequence of vocabulary_size x num_labels.<br><br>
Is there any alternative one-line solution without using NumPy?<br><br>
<strong>Solved</strong>: Can it be done directly using a numpy array (input) with a list (data)? <br><br>
<strong>Info about data :</strong> data = np.ndarray(len(words), dtype=np.int32) <br><br>
<strong>Reformat function:</strong></p>
<pre><code> def reformat(data):
num_labels = vocabulary_size
print (type(data))
data = (np.arange(num_labels) == data[:,None]).astype(np.int32)
return data
print (data,len(data))
return data
</code></pre>
<p><strong>New Question : The dimension of data is (vocabulary_size,)...How to convert data using ravel or reshape into dimension of (1,vocabulary_size)?</strong></p>
| 1 | 2016-08-15T11:06:07Z | 38,954,023 | <p>Not sure whether I've understood correctly what you're asking for, but if what you want is a oneliner, you could transform your already working code into this:</p>
<pre><code>inputs = np.array([np.arange(num_labels) == data[i] for i in range(vocabulary_size)], dtype=np.float32)
</code></pre>
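<p>For reference, a small sketch of the broadcasting trick used in the question's <code>reformat</code> function, with illustrative sizes; it also shows the reshape asked about in the new question:</p>

```python
import numpy as np

# Small illustrative sizes (the question uses vocabulary_size = 17005207).
num_labels = 4
data = np.array([0, 2, 3, 1], dtype=np.int32)   # shape (4,)

# data[:, None] has shape (4, 1); comparing it against np.arange(num_labels)
# broadcasts to a (4, 4) boolean array, i.e. the one-hot encoding.
one_hot = (np.arange(num_labels) == data[:, None]).astype(np.float32)
print(one_hot.shape)  # (4, 4)

# For the reshape question: a (n,) array becomes (1, n) with reshape(1, -1)
# (or data[None, :]); ravel() flattens back to 1-D.
row = data.reshape(1, -1)
print(row.shape)  # (1, 4)
```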
| 0 | 2016-08-15T11:14:13Z | [
"python",
"numpy"
] |
Identify 90 degree projection between two points | 38,953,938 | <p>In <code>python</code> I have two points, A and B in 2D. </p>
<p>I have a user who is traveling between these 2 points (now a vector?). </p>
<p>These points are arbitrarily far apart.</p>
<p>I want to calculate 2 projections(?) at the point halfway between these 2 points, 100m away from the original line, one projection at 90 degrees and the other at 180 degrees.</p>
<p><a href="http://i.stack.imgur.com/D07zj.png" rel="nofollow">Better explained as a picture here</a></p>
<p>In the above image, I have points A and B, while I want to calculate points C and D.</p>
<p>Can somebody help me with this math calculation?</p>
| 0 | 2016-08-15T11:08:15Z | 38,954,994 | <p>Consider the direction vector from A to B:</p>
<pre><code>ab = (x2 - x, y2 - y)
</code></pre>
<p>Then, the vector that is orthogonal to this line is:</p>
<pre><code>orth = (y - y2, x2 - x)
</code></pre>
<p>The length of this vector (and of the direction vector) is:</p>
<pre><code>l = sqrt((x2-x)^2 + (y2-y)^2)
</code></pre>
<p>The midpoint on the line is</p>
<pre><code>m = 1/2 * (x + x2, y + y2)
</code></pre>
<p>Finally, the two points C and D are:</p>
<pre><code>C/D = m +- orth * 100 / l
</code></pre>
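<p>Putting the formulas together, a Python sketch (the coordinates and the 100-unit offset below are example values):</p>

```python
import math

# Example coordinates (assumptions; substitute your own A and B).
x, y = 0.0, 0.0       # point A
x2, y2 = 6.0, 8.0     # point B
d = 100.0             # desired offset from the line

orth = (y - y2, x2 - x)                # orthogonal to the direction vector
l = math.hypot(x2 - x, y2 - y)         # length of AB
m = ((x + x2) / 2.0, (y + y2) / 2.0)   # midpoint of AB

C = (m[0] + orth[0] * d / l, m[1] + orth[1] * d / l)
D = (m[0] - orth[0] * d / l, m[1] - orth[1] * d / l)
print(C, D)  # (-77.0, 64.0) (83.0, -56.0)
```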
| 1 | 2016-08-15T12:22:25Z | [
"python",
"math",
"vector",
"projection"
] |
Pk fields always ends up in resulting query GROUP BY clause | 38,953,962 | <p>I have my model hierarchy defined as follows:</p>
<pre><code>class Meal(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
discount_price = models.DecimalField(blank=False, null=False, decimal_places=2, max_digits=4)
normal_price = models.DecimalField(blank=True, null=True, decimal_places=2, max_digits=4)
available_count = models.IntegerField(blank=False, null=False)
name = models.CharField(blank=False, null=False, max_length=255)
active = models.BooleanField(blank=False, null=False, default=True)
class Order(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
number = models.CharField(max_length=64, blank=True, null=True)
buyer_phone = models.CharField(max_length=32, blank=False, null=False)
buyer_email = models.CharField(max_length=64, blank=False, null=False)
pickup_time = models.DateTimeField(blank=False, null=False)
taken = models.BooleanField(blank=False, null=False, default=False)
class OrderItem(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
order = models.ForeignKey(Order, on_delete=models.CASCADE, related_name='items')
meal = models.ForeignKey(Meal, on_delete=models.CASCADE)
amount = models.IntegerField(blank=False, null=False, default=1)
</code></pre>
<p>I'm trying to get some statistics about orders, and I came up with a Django ORM call that looks like this:</p>
<pre><code>queryset.filter(created_at__range=[date_start, date_end])\
.annotate(price=Sum(F('items__meal__discount_price') * F('items__amount'), output_field=DecimalField()))
.annotate(created_at_date=TruncDate('created_at'))\
.annotate(amount=Sum('items__amount'))\
.values('created_at_date', 'price', 'amount')
</code></pre>
<p>The above, however, doesn't give me the expected results, because for some reason the <code>id</code> column still ends up in the <code>GROUP BY</code> clause of the SQL query. Any help with that?</p>
| 0 | 2016-08-15T11:09:36Z | 38,969,218 | <p>To make it work I had to do the following:</p>
<pre><code>qs.filter(created_at__range=[date_start, date_end])\
.annotate(created_at_date=TruncDate('created_at'))\
.values('created_at_date')\
.annotate(price=Sum(F('items__meal__discount_price') * F('items__amount'),
output_field=DecimalField()))
.annotate(amount=Sum('items__amount'))
</code></pre>
<p>This kind of makes sense: I pull only the <code>created_at</code> field, transform it, and then annotate the result with two other fields.</p>
| 0 | 2016-08-16T07:43:48Z | [
"python",
"sql",
"django",
"django-orm",
"postgresql-9.3"
] |
Python - find closest position and value | 38,954,251 | <p>I'm trying to find the closest point for a given pair of X and Y arrays so as to access its values. In my case, along the X direction (np.arange(0, X.max(), 1)), I would like to extract the values closest to Y = 0 and obtain the corresponding entries of the "values" array:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#My coordinates are given here :
X = np.array([0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4 ,5, 0, 1, 2, 3, 4 ,5])
Y = np.array([-2.5, -1.5, 0, 0, 2, 2.5, 2, 1.5, 1, -1, -1.2,-2.5, 0.2, 0.5, 0, -0.5, -0.1,0.05])
plt.scatter(X,Y);plt.show()
#The corresponding values are :
values = np.array([-1.1, -9, 10, 10, 20, 25, 21, 15, 0, 2, -2,-5, 2, 50, 0, -5, -1,5])
# I thought to use a for loop :
def find_index(x,y):
xi=np.searchsorted(X,x)
yi=np.searchsorted(Y,y)
return xi,yi
for i in arange(float(0),float(X.max()),1):
print i
thisLat, thisLong = find_index(i,0)
print thisLat, thisLong
values[thisLat,thisLong]
</code></pre>
<p>But I obtained an error: "IndexError: too many indices"</p>
| -1 | 2016-08-15T11:30:05Z | 38,954,411 | <p>You can use something faster than a <code>for</code> loop:</p>
<pre><code>import numpy as np
def find_nearest(array, value):
    ''' Find nearest value in an array '''
idx = (np.abs(array-value)).argmin()
return idx
haystack = np.arange(10)
needle = 5.8
idf = find_nearest(haystack, needle)
print haystack[idf] # This will return 6
</code></pre>
<p>This function will return the index of the nearest value in the array provided (such that we don't use global variables). Note that this searches a 1D array, just like your <code>X</code> and <code>Y</code>.</p>
| 2 | 2016-08-15T11:42:07Z | [
"python",
"numpy",
"indexing"
] |
Match everything until optional string (Python regex) | 38,954,311 | <p>I've pounded my head against this issue, and it just seems like I am missing something uber-trivial, so apologies in advance. I have a url which may or may not contain some POST values. I want to match the entire url UNTIL this optional part (not inclusive). So for example:</p>
<pre><code>import re
myurl = r'http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41'
matchObj = re.match(r'(.*?)(&pageinfo=\d+){0,1}', myurl)
print matchObj.groups()
>> ('', None)
# Putting the non-greedy ? outside
matchObj = re.match(r'(.*)?(&pageinfo=\d+){0,1}', myurl)
print matchObj.groups()
>> ('http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41', None)
# The url might also be without the last part, that is
myurl = r'http://myAddress.com/index.aspx?cat=ThisPartChanges'
# I'd like the regex to capture the first part. "ThisPartChanges" might
# be different every time
</code></pre>
<p>What I would like is to get everything until pageNum=\d+, not inclusive.
That is</p>
<pre><code>http://myAddress.com/index.aspx?cat=ThisPartChanges
</code></pre>
<p>I am only interested in the part before &pageNum, and don't care if it exists or not, just want to filter it out somehow so that I can get the real address until cat=....</p>
<p>I've tried all sorts of non-greedy acrobatics, but the part that fails me is that the 2nd part is optional, so there's nothing to 'anchor' the non-greedy match.
Any ideas how to elegantly do this? Only the first part is important. Non-regex solutions are also welcome</p>
<p>Thanks!</p>
| 1 | 2016-08-15T11:35:26Z | 38,954,408 | <p>you may want to take a look at <a href="https://docs.python.org/2/library/urlparse.html" rel="nofollow">https://docs.python.org/2/library/urlparse.html</a></p>
<p>the order in which parameters are passed may change:</p>
<pre><code>?pageNum=41&cat=ThisPartChanges
</code></pre>
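<p>For example, a sketch using Python 3's <code>urllib.parse</code> (in Python 2, as in the question, the module is named <code>urlparse</code>) that strips the optional <code>pageNum</code> parameter regardless of its position:</p>

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

myurl = 'http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41'

parts = urlparse(myurl)
params = parse_qs(parts.query)
params.pop('pageNum', None)  # drop the optional parameter if present

# Rebuild the URL with the remaining query parameters.
base = urlunparse(parts._replace(query=urlencode(params, doseq=True)))
print(base)  # http://myAddress.com/index.aspx?cat=ThisPartChanges
```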
| 3 | 2016-08-15T11:41:43Z | [
"python",
"regex"
] |
Match everything until optional string (Python regex) | 38,954,311 | <p>I've pounded my head against this issue, and it just seems like I am missing something uber-trivial, so apologies in advance. I have a url which may or may not contain some POST values. I want to match the entire url UNTIL this optional part (not inclusive). So for example:</p>
<pre><code>import re
myurl = r'http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41'
matchObj = re.match(r'(.*?)(&pageinfo=\d+){0,1}', myurl)
print matchObj.groups()
>> ('', None)
# Putting the non-greedy ? outside
matchObj = re.match(r'(.*)?(&pageinfo=\d+){0,1}', myurl)
print matchObj.groups()
>> ('http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41', None)
# The url might also be without the last part, that is
myurl = r'http://myAddress.com/index.aspx?cat=ThisPartChanges'
# I'd like the regex to capture the first part. "ThisPartChanges" might
# be different every time
</code></pre>
<p>What I would like is to get everything until pageNum=\d+, not inclusive.
That is</p>
<pre><code>http://myAddress.com/index.aspx?cat=ThisPartChanges
</code></pre>
<p>I am only interested in the part before &pageNum, and don't care if it exists or not, just want to filter it out somehow so that I can get the real address until cat=....</p>
<p>I've tried all sorts of non-greedy acrobatics, but the part that fails me is that the 2nd part is optional, so there's nothing to 'anchor' the non-greedy match.
Any ideas how to elegantly do this? Only the first part is important. Non-regex solutions are also welcome</p>
<p>Thanks!</p>
| 1 | 2016-08-15T11:35:26Z | 38,954,476 | <p>I'd recommend you to avoid regular expressions when it comes to url parsing, use this <a href="https://docs.python.org/2/library/urlparse.html" rel="nofollow">module</a> instead, here's a working example for your problem:</p>
<pre><code>import urlparse
myurl = 'http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41'
parsed = urlparse.urlparse(myurl)
print 'scheme :', parsed.scheme
print 'netloc :', parsed.netloc
print 'path :', parsed.path
print 'params :', parsed.params
print 'query :', parsed.query
print 'fragment:', parsed.fragment
print 'username:', parsed.username
print 'password:', parsed.password
print 'hostname:', parsed.hostname, '(netloc in lower case)'
print 'port :', parsed.port
print urlparse.parse_qs(parsed.query)
</code></pre>
| 2 | 2016-08-15T11:46:56Z | [
"python",
"regex"
] |
Match everything until optional string (Python regex) | 38,954,311 | <p>I've pounded my head against this issue, and it just seems like I am missing something uber-trivial, so apologies in advance. I have a url which may or may not contain some POST values. I want to match the entire url UNTIL this optional part (not inclusive). So for example:</p>
<pre><code>import re
myurl = r'http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41'
matchObj = re.match(r'(.*?)(&pageinfo=\d+){0,1}', myurl)
print matchObj.groups()
>> ('', None)
# Putting the non-greedy ? outside
matchObj = re.match(r'(.*)?(&pageinfo=\d+){0,1}', myurl)
print matchObj.groups()
>> ('http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41', None)
# The url might also be without the last part, that is
myurl = r'http://myAddress.com/index.aspx?cat=ThisPartChanges'
# I'd like the regex to capture the first part. "ThisPartChanges" might
# be different every time
</code></pre>
<p>What I would like is to get everything until pageNum=\d+, not inclusive.
That is</p>
<pre><code>http://myAddress.com/index.aspx?cat=ThisPartChanges
</code></pre>
<p>I am only interested in the part before &pageNum, and don't care if it exists or not, just want to filter it out somehow so that I can get the real address until cat=....</p>
<p>I've tried all sorts of non-greedy acrobatics, but the part that fails me is that the 2nd part is optional, so there's nothing to 'anchor' the non-greedy match.
Any ideas how to elegantly do this? Only the first part is important. Non-regex solutions are also welcome</p>
<p>Thanks!</p>
| 1 | 2016-08-15T11:35:26Z | 38,954,483 | <p>In your case, this could do:</p>
<pre><code>^[^&]+
</code></pre>
<p>More robust:</p>
<pre><code>^[^?]+\?cat=[^&]+
</code></pre>
<p><strong>Example:</strong></p>
<pre><code>In [40]: s = 'http://myAddress.com/index.aspx?cat=ThisPartChanges&pageNum=41'
In [41]: re.search(r'^[^&]+', s).group()
Out[41]: 'http://myAddress.com/index.aspx?cat=ThisPartChanges'
In [42]: re.search(r'^[^?]+\?cat=[^&]+', s).group()
Out[42]: 'http://myAddress.com/index.aspx?cat=ThisPartChanges'
</code></pre>
| 1 | 2016-08-15T11:47:18Z | [
"python",
"regex"
] |
How to make django form field without validation in administration | 38,954,322 | <p>I am learning Django; currently I am working with Django 1.9.
I have made a model named <code>experience</code>, which contains:</p>
<pre><code>from __future__ import unicode_literals
from django.db import models
# Create your models here.
class Experience(models.Model):
designation = models.CharField(max_length=250)
department = models.CharField(max_length=250, null=True)
present = models.BooleanField(default=False)
joining_date = models.DateField()
ending_date = models.DateField(null=True)
def __unicode__(self):
return self.designation
def __str__(self):
return self.designation
</code></pre>
<p>Now, when I go to the experience form, department and ending date are treated as required. But I want <code>department</code> and <code>ending_date</code> to accept null values and not raise any validation error for these two attributes in the <code>admin/experience</code> form.</p>
<p>How to do this ? Please help me.</p>
| 1 | 2016-08-15T11:36:09Z | 38,954,378 | <p>Simply change it to <code>ending_date = models.DateField(null=True, blank=True)</code>, and likewise <code>department = models.CharField(max_length=250, null=True, blank=True)</code>. <code>null=True</code> allows NULL in the database, while <code>blank=True</code> is what makes the admin form stop requiring a value.</p>
| 1 | 2016-08-15T11:39:33Z | [
"python",
"django",
"django-models",
"django-forms",
"django-admin"
] |
Python loop inserting last row only in cassandra | 38,954,357 | <p>I typed a small demo loop in order to insert random values in Cassandra but only the last record is persisted into the database. I am using cassandra-driver from datastax and its object modeling lib. Cassandra version is 3.7 and Python 3.4. Any idea what I am doing wrong?</p>
<pre><code>#!/usr/bin/env python
import datetime
import uuid
from random import randint, uniform

from cassandra.cluster import Cluster
from cassandra.cqlengine import connection, columns
from cassandra.cqlengine.management import sync_table
from cassandra.cqlengine.models import Model
from cassandra.cqlengine.query import BatchQuery


class TestTable(Model):
    _table_name = 'test_table'
    key = columns.UUID(primary_key=True, default=uuid.uuid4())
    type = columns.Integer(index=True)
    value = columns.Float(required=False)
    created_time = columns.DateTime(default=datetime.datetime.now())


def main():
    connection.setup(['127.0.0.1'], 'test', protocol_version = 3)
    sync_table(TestTable)
    for _ in range(10):
        type = randint(1, 3)
        value = uniform(-10, 10)
        row = TestTable.create(type=type, value=value)
        print("Inserted row: ", row.type, row.value)
    print("Done inserting")
    q = TestTable.objects.count()
    print("We have inserted " + str(q) + " rows.")


if __name__ == "__main__":
    main()
</code></pre>
<p>Many thanks!</p>
| 0 | 2016-08-15T11:38:21Z | 38,954,463 | <p>You need to use the method save.</p>
<pre><code>...
row = TestTable(type=type, value=value)
row.save()
...
</code></pre>
<p><a href="http://cqlengine.readthedocs.io/en/latest/topics/models.html#cqlengine.models.Model.save" rel="nofollow">http://cqlengine.readthedocs.io/en/latest/topics/models.html#cqlengine.models.Model.save</a></p>
| 0 | 2016-08-15T11:46:11Z | [
"python",
"cassandra"
] |
Python loop inserting last row only in cassandra | 38,954,357 | <p>I typed a small demo loop in order to insert random values in Cassandra but only the last record is persisted into the database. I am using cassandra-driver from datastax and its object modeling lib. Cassandra version is 3.7 and Python 3.4. Any idea what I am doing wrong?</p>
<pre><code>#!/usr/bin/env python
import datetime
import uuid
from random import randint, uniform

from cassandra.cluster import Cluster
from cassandra.cqlengine import connection, columns
from cassandra.cqlengine.management import sync_table
from cassandra.cqlengine.models import Model
from cassandra.cqlengine.query import BatchQuery


class TestTable(Model):
    _table_name = 'test_table'
    key = columns.UUID(primary_key=True, default=uuid.uuid4())
    type = columns.Integer(index=True)
    value = columns.Float(required=False)
    created_time = columns.DateTime(default=datetime.datetime.now())


def main():
    connection.setup(['127.0.0.1'], 'test', protocol_version = 3)
    sync_table(TestTable)
    for _ in range(10):
        type = randint(1, 3)
        value = uniform(-10, 10)
        row = TestTable.create(type=type, value=value)
        print("Inserted row: ", row.type, row.value)
    print("Done inserting")
    q = TestTable.objects.count()
    print("We have inserted " + str(q) + " rows.")


if __name__ == "__main__":
    main()
</code></pre>
<p>Many thanks!</p>
| 0 | 2016-08-15T11:38:21Z | 38,963,611 | <p>The problem is in the definition of the key column:</p>
<pre><code>key = columns.UUID(primary_key=True, default=uuid.uuid4())
</code></pre>
<p>For the default value it's going to call the <code>uuid.uuid4</code> function once and use that result as the default for all future inserts. Because that's your primary key, all 10 writes will happen to the same primary key.</p>
<p>Instead, drop the parentheses so you are just passing a reference to <code>uuid.uuid4</code> rather than calling it:</p>
<pre><code>key = columns.UUID(primary_key=True, default=uuid.uuid4)
</code></pre>
<p>Now each time you create a row you'll get a new unique UUID value, and therefore a new row in Cassandra.</p>
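<p>The difference between passing <code>uuid.uuid4</code> and <code>uuid.uuid4()</code> can be demonstrated with plain Python, no Cassandra needed. This is only a hedged sketch: the helper <code>make_row</code> mimics how an ORM resolves a column default, it is not part of the driver:</p>

```python
import uuid

def make_row(default):
    # Mimic how an ORM resolves a column default: call it if it is
    # callable, otherwise reuse the stored value for every new row.
    return default() if callable(default) else default

frozen = uuid.uuid4()  # evaluated once, as in default=uuid.uuid4()
print(make_row(frozen) == make_row(frozen))          # -> True: one shared key
print(make_row(uuid.uuid4) == make_row(uuid.uuid4))  # -> False: fresh key per row
```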
| 1 | 2016-08-15T21:51:33Z | [
"python",
"cassandra"
] |
Error when using round() for a variable | 38,954,362 | <p>I never coded before and started with Python-3.5 a few days ago.
After some exercises I tried to play around myself.
Last time I wanted to create a script which stores the input as a variable and rounds it to three decimals. Unfortunately I get an error when I try to do that:</p>
<pre><code>round (spam, 3)
TypeError: type str doesn't define __round__ method
</code></pre>
<p>I tried to look this up in the Q&A but you guys seem to have more complex problems related to this error msg.</p>
<p>So this is what I entered in the file editor when I got the error msg:</p>
<pre><code>print('Pls enter value')
spam = input()
#print(spam)
round(spam, 3)
</code></pre>
<p>when I enter the following in the interactive shell the rounding seems to work though:</p>
<pre><code>>>> spam = 3.666666
>>> round (spam, 3)
3.667
</code></pre>
<p>So why is the same logic working in the shell but not in the File editor ? Thanks in advance!</p>
| 0 | 2016-08-15T11:38:27Z | 38,954,390 | <p>The difference is that in the second case you supply the value of <code>spam</code> using a float literal (that is, <code>spam = 3.666666</code>) while, in the first case you get it from calling <code>input()</code> which isn't exactly the same.</p>
<p>The function <code>input()</code> returns a <code>str</code> instance in Python 3 and, for <code>str</code> types, the <code>round</code> function doesn't make much sense; you need to explicitly transform it to a float by wrapping the result of <code>input()</code> with <code>float()</code>:</p>
<pre><code>spam = float(input()) # change input to 'float' type
</code></pre>
<p>Now, you can call <code>round</code> on it. You <em>do need</em> to be careful that the input you actually supply is indeed transformable to a <code>float</code> or else a <code>ValueError</code> will be raised.</p>
<p>In addition to that, no need to add the <code>print</code> call before <code>input</code>, <code>input</code> has a <code>prompt</code> argument that allows you to specify text before submitting input:</p>
<pre><code>spam = input("Enter valid float number: ")
</code></pre>
<p>You should now get similar results for both cases.</p>
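<p>Putting the pieces together, here is a small hedged sketch that also guards against non-numeric input (the helper name <code>read_float</code> is made up for illustration):</p>

```python
def read_float(text):
    """Convert user-supplied text to a float, with a clearer error."""
    try:
        return float(text)
    except ValueError:
        raise ValueError("not a valid number: %r" % text)

spam = read_float("3.666666")  # in a real script: read_float(input('Pls enter value: '))
print(round(spam, 3))          # -> 3.667
```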
| 2 | 2016-08-15T11:40:20Z | [
"python",
"python-3.x",
"variables",
"rounding",
"typeerror"
] |
Python SQLite Return Default String For Non-Existent Row | 38,954,369 | <p>I have a DB with ID/Topic/Definition columns. When a select query is made, with possibly hundreds of parameters, I would like the fetchall call to also return the topic of any non-existent rows with a default text (i.e. "Not Found").</p>
<p>I realize this could be done in a loop, but that would query the DB every cycle and have a significant performance hit. With the parameters joined by "OR" in a single select statement the search is nearly instantaneous. </p>
<p>Is there a way to get a return of the query (topic) with default text for non-existent rows in SQLite?</p>
<p>Table Structure (named "dictionary")</p>
<pre><code>ID|Topic|Definition
1|wd1|def1
2|wd3|def3
</code></pre>
<p>Sample Query</p>
<pre><code>SELECT Topic,Definition FROM dictionary WHERE Topic = "wd1" or Topic = "wd2" or topic = "wd3"'
</code></pre>
<p>Desired Return</p>
<pre><code>[(wd1, def1), (wd2, "Not Found"), (wd3, def3)]
</code></pre>
| 0 | 2016-08-15T11:39:05Z | 38,955,706 | <p>To get data like <code>wd2</code> out of a query, such data must be in the database in the first place.
You could put it into a temporary table, or use a <a href="http://www.sqlite.org/lang_with.html" rel="nofollow">common table expression</a>.</p>
<p>To include rows without a match, use an <a href="https://en.wikipedia.org/wiki/Join_(SQL)#Outer_join" rel="nofollow">outer join</a>:</p>
<pre class="lang-sql prettyprint-override"><code>WITH Topics(Topic) AS ( VALUES ('wd1'), ('wd2'), ('wd3') )
SELECT Topic,
       IFNULL(Definition, 'Not Found') AS Definition
FROM Topics
LEFT JOIN dictionary USING (Topic);
</code></pre>
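<p>For completeness, here is a runnable sketch of the same idea from Python's built-in <code>sqlite3</code> module, against an in-memory copy of the question's table. Note the join is done on <code>Topic</code>, since the lookup values are topic strings rather than the integer IDs:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dictionary (ID INTEGER, Topic TEXT, Definition TEXT)")
conn.executemany("INSERT INTO dictionary VALUES (?, ?, ?)",
                 [(1, "wd1", "def1"), (2, "wd3", "def3")])

def lookup(conn, topics):
    placeholders = ", ".join("(?)" for _ in topics)  # one CTE row per topic
    sql = ("WITH Topics(Topic) AS (VALUES {}) "
           "SELECT Topic, IFNULL(Definition, 'Not Found') "
           "FROM Topics LEFT JOIN dictionary USING (Topic) "
           "ORDER BY Topic").format(placeholders)
    return conn.execute(sql, topics).fetchall()

print(lookup(conn, ["wd1", "wd2", "wd3"]))
# -> [('wd1', 'def1'), ('wd2', 'Not Found'), ('wd3', 'def3')]
```

<p>With hundreds of parameters this stays a single query, just like the OR-joined version.</p>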
| 0 | 2016-08-15T13:09:11Z | [
"python",
"sqlite"
] |
Defining 500 intial random positions of an rotating object in 2-D by Python | 38,954,401 | <p>I have edited my question. Now I don't want to use loop in my function. The function is for defining initial position of an rotating object in 2-D. I would like to get the output format like this: </p>
<p><code>theta1 theta2 theta3
phi1 phi2 phi3
eta1 eta2 eta3</code></p>
<p>The definition of e inside the function must be something else (my opinion). Can anyone help me to get my desired output?</p>
<pre><code>def randposi(N=500):
    theta = 2*pi* rand()-pi
    phi = arccos(1-2* rand())
    eta = 2*pi*rand()-pi
    r = random.rand(N)
    e = 3*r*array()
    return e
</code></pre>
| 0 | 2016-08-15T11:41:10Z | 38,954,512 | <p>So if we make your function return a tuple:</p>
<pre><code>def randpos():
    theta = 2*pi* rand()-pi
    phi = arccos(1-2* rand())
    eta = 2*pi*rand()-pi
    return (theta, phi, eta)
</code></pre>
<p>then call it 500 times and put the results in a list of tuples.</p>
<pre><code>starting_pos = []
for x in xrange(500):
    starting_pos.append(randpos())
</code></pre>
<p>And you have your solution.</p>
| 0 | 2016-08-15T11:49:11Z | [
"python",
"numpy",
"rotation",
"2d"
] |
Defining 500 intial random positions of an rotating object in 2-D by Python | 38,954,401 | <p>I have edited my question. Now I don't want to use loop in my function. The function is for defining initial position of an rotating object in 2-D. I would like to get the output format like this: </p>
<p><code>theta1 theta2 theta3
phi1 phi2 phi3
eta1 eta2 eta3</code></p>
<p>The definition of e inside the function must be something else (my opinion). Can anyone help me to get my desired output?</p>
<pre><code>def randposi(N=500):
    theta = 2*pi* rand()-pi
    phi = arccos(1-2* rand())
    eta = 2*pi*rand()-pi
    r = random.rand(N)
    e = 3*r*array()
    return e
</code></pre>
| 0 | 2016-08-15T11:41:10Z | 38,954,542 | <p>you can use list comprehension for this, for your case:</p>
<pre><code>def randpos(N=500):
    # your function code here
    ...

desired = 500
init_positions = [randpos() for i in range(desired)]
</code></pre>
| 0 | 2016-08-15T11:50:43Z | [
"python",
"numpy",
"rotation",
"2d"
] |
Defining 500 intial random positions of an rotating object in 2-D by Python | 38,954,401 | <p>I have edited my question. Now I don't want to use loop in my function. The function is for defining initial position of an rotating object in 2-D. I would like to get the output format like this: </p>
<p><code>theta1 theta2 theta3
phi1 phi2 phi3
eta1 eta2 eta3</code></p>
<p>The definition of e inside the function must be something else (my opinion). Can anyone help me to get my desired output?</p>
<pre><code>def randposi(N=500):
    theta = 2*pi* rand()-pi
    phi = arccos(1-2* rand())
    eta = 2*pi*rand()-pi
    r = random.rand(N)
    e = 3*r*array()
    return e
</code></pre>
| 0 | 2016-08-15T11:41:10Z | 38,954,677 | <p>So, you've defined your function f(x) called randpos, it seems this function doesn't accept any input. N is the variable you'll use to iterate over this function, you got few options here:</p>
<p>You can store your values in some list like this:</p>
<pre><code>N = 10
random_positions = [randpos() for i in range(N)]
print random_positions
</code></pre>
<p>If you don't need to store values you just loop through them like this:</p>
<pre><code>for i in range(N):
    print randpos()
</code></pre>
<p>If you prefer instead, you just yield your values like this:</p>
<pre><code>def my_iterator(N=500):
    for i in range(N):
        yield randpos()

for rand_pos in my_iterator(N):
    print rand_pos
</code></pre>
| 0 | 2016-08-15T11:59:16Z | [
"python",
"numpy",
"rotation",
"2d"
] |
Defining 500 intial random positions of an rotating object in 2-D by Python | 38,954,401 | <p>I have edited my question. Now I don't want to use loop in my function. The function is for defining initial position of an rotating object in 2-D. I would like to get the output format like this: </p>
<p><code>theta1 theta2 theta3
phi1 phi2 phi3
eta1 eta2 eta3</code></p>
<p>The definition of e inside the function must be something else (my opinion). Can anyone help me to get my desired output?</p>
<pre><code>def randposi(N=500):
    theta = 2*pi* rand()-pi
    phi = arccos(1-2* rand())
    eta = 2*pi*rand()-pi
    r = random.rand(N)
    e = 3*r*array()
    return e
</code></pre>
| 0 | 2016-08-15T11:41:10Z | 38,958,341 | <p>What about using a random numpy array ?</p>
<p>Something like that:</p>
<pre><code>import numpy as np
N=500
#we create a random array 3xN
r = np.random.rand(3,N)
#theta is row 0, phi row 1, eta row 2
#we apply the same treatment that is in the question to get the right range
#also note that np.pi is just a predefined float
r[0]=2*np.pi*r[0] -np.pi
r[1]=np.arccos(1-2*r[1])
r[2]=2*np.pi*r[2] -np.pi
print(r[0],r[1],r[2])
</code></pre>
| 0 | 2016-08-15T15:44:47Z | [
"python",
"numpy",
"rotation",
"2d"
] |
Cache busting with Django | 38,954,505 | <p>I'm working on a website built with Django.<br>
When I'm doing updates on the static files, the users have to hard refresh the website to get the latest version.<br>
I'm using a CDN server to deliver my static files so using the built-in static storage from Django.<br>
I don't know about the best practices but my idea is to generate a random string when I redeploy the website and have something like <code>style.css?my_random_string</code>.<br>
I don't know how to handle such a global variable through the project (Using Gunicorn in production).<br>
I have a RedisDB running, I can store the random string in it and clear it on redeployment.<br>
I was thinking to have this variable globally available in templates with a <code>context_processors</code>. </p>
<p>What are your thoughts on this ?</p>
| 0 | 2016-08-15T11:48:47Z | 38,954,600 | <p>Django's built-in contrib.staticfiles app already does this for you; see <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/staticfiles/#manifeststaticfilesstorage" rel="nofollow">ManifestStaticFilesStorage</a> and <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/staticfiles/#cachedstaticfilesstorage" rel="nofollow">CachedStaticFilesStorage</a>.</p>
| 0 | 2016-08-15T11:54:14Z | [
"python",
"django",
"caching",
"static",
"cdn"
] |
Cache busting with Django | 38,954,505 | <p>I'm working on a website built with Django.<br>
When I'm doing updates on the static files, the users have to hard refresh the website to get the latest version.<br>
I'm using a CDN server to deliver my static files so using the built-in static storage from Django.<br>
I don't know about the best practices but my idea is to generate a random string when I redeploy the website and have something like <code>style.css?my_random_string</code>.<br>
I don't know how to handle such a global variable through the project (Using Gunicorn in production).<br>
I have a RedisDB running, I can store the random string in it and clear it on redeployment.<br>
I was thinking to have this variable globally available in templates with a <code>context_processors</code>. </p>
<p>What are your thoughts on this ?</p>
| 0 | 2016-08-15T11:48:47Z | 38,956,967 | <p>Here's my work around : </p>
<p>On deployment (from a bash script), I get the shasum of my css style.<br>
I put this variable inside the environment. </p>
<p>I have a context processor for the template engine that will read from the environment.</p>
| 0 | 2016-08-15T14:23:55Z | [
"python",
"django",
"caching",
"static",
"cdn"
] |
How to have catagorical factor variables in python | 38,954,525 | <pre><code> age income student credit_rating Class_buys_computer
0 youth high no fair no
1 youth high no excellent no
2 middle_aged high no fair yes
3 senior medium no fair yes
4 senior low yes fair yes
5 senior low yes excellent no
6 middle_aged low yes excellent yes
7 youth medium no fair no
8 youth low yes fair yes
9 senior medium yes fair yes
10 youth medium yes excellent yes
11 middle_aged medium no excellent yes
12 middle_aged high yes fair yes
13 senior medium no excellent no
</code></pre>
<p>I am using this dataset and wish to have variables like <code>age</code>, <code>income</code>, etc. behave like <code>factor variables</code> in <code>R</code>. How can I do it in Python?</p>
| 1 | 2016-08-15T11:49:37Z | 38,954,579 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>astype</code></a> with parameter <code>category</code>:</p>
<pre><code>cols = ['age','income','student']
for col in cols:
    df[col] = df[col].astype('category')
print (df.dtypes)
age category
income category
student category
credit_rating object
Class_buys_computer object
dtype: object
</code></pre>
<p>If you need to convert all columns:</p>
<pre><code>for col in df.columns:
    df[col] = df[col].astype('category')
print (df.dtypes)
age category
income category
student category
credit_rating category
Class_buys_computer category
dtype: object
</code></pre>
<p>You need the loop, because if you use:</p>
<pre><code>df = df.astype('category')
</code></pre>
<blockquote>
<p>NotImplementedError: > 1 ndim Categorical are not supported at this time</p>
</blockquote>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html" rel="nofollow">Pandas documentation about categorical</a>.</p>
<p>EDIT by comment:</p>
<p>If you need an ordered categorical, use another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Categorical.html" rel="nofollow"><code>pandas.Categorical</code></a>:</p>
<pre><code>df['age']=pd.Categorical(df['age'],categories=["youth","middle_aged","senior"],ordered=True)
print (df.age)
0 youth
1 youth
2 middle_aged
3 senior
4 senior
5 senior
6 middle_aged
7 youth
8 youth
9 senior
10 youth
11 middle_aged
12 middle_aged
13 senior
Name: age, dtype: category
Categories (3, object): [youth < middle_aged < senior]
</code></pre>
<p>Then you can sort DataFrame by column <code>age</code>:</p>
<pre><code>df = df.sort_values('age')
print (df)
age income student credit_rating Class_buys_computer
0 youth high no fair no
1 youth high no excellent no
7 youth medium no fair no
8 youth low yes fair yes
10 youth medium yes excellent yes
2 middle_aged high no fair yes
6 middle_aged low yes excellent yes
11 middle_aged medium no excellent yes
12 middle_aged high yes fair yes
3 senior medium no fair yes
4 senior low yes fair yes
5 senior low yes excellent no
9 senior medium yes fair yes
13 senior medium no excellent no
</code></pre>
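<p>Beyond sorting, an ordered categorical also supports order-aware comparisons, which plain object columns cannot do correctly here. A small hedged sketch:</p>

```python
import pandas as pd

age = pd.Categorical(["youth", "senior", "middle_aged"],
                     categories=["youth", "middle_aged", "senior"],
                     ordered=True)
s = pd.Series(age)
print((s < "senior").tolist())  # -> [True, False, True]
```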
| 1 | 2016-08-15T11:52:58Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
Tensorflow: How can I extract image features from a specific layer of a pre-trained CNN? | 38,954,598 | <p>I have a pre-trained CNN model as a .pb file. I can load the model and extract the final vector from the last layer for all images. Now I would like to extract the vector coming from a specific layer and not the final for my images. I am using an import_graph_def function to load the model and I don't know the names of the layers because <strong>.pb</strong> file is large and I can't open it.</p>
<p>How can I run one part of the model and not the whole in order to get vectors until the layer I want?</p>
| 1 | 2016-08-15T11:54:09Z | 38,958,002 | <p>One approach, other than using tf.Graph.get_operations() as Peter Hawkins mentioned in the comments, is to use TensorBoard to find the name of the layer you would like to extract from.</p>
<p>From there you can just use </p>
<pre><code>graph.get_tensor_by_name("import/layer_name")
</code></pre>
<p>to extract out whichever features you want.</p>
| 1 | 2016-08-15T15:21:40Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow",
"conv-neural-network"
] |
functional field odoo type float , new api | 38,954,686 | <p>I'm trying to make an operation * on a field , i have tried functional fields but it didn't work , now i'm trying this @api.depends api , does it work with odoo 8 ? it still not work </p>
<pre><code>class fleuret(osv.Model):
    _inherit = "mrp.bom.line"
    _columns = {
        'unit_price' : fields.float(string='unit price', related='product_id.lst_price', store=True, readonly=True),
        'amount' : fields.Float(string='price ',store=True, readonly=True,digits=dp.get_precision('Account'),compute='_compute_price'),
    }

    @api.one
    @api.depends('product_qty')
    def _compute_price(self):
        self.amount =(unit_price * self.product_qty)
</code></pre>
| 1 | 2016-08-15T12:00:04Z | 38,954,894 | <pre><code>from openerp import models,fields,api
import openerp.addons.decimal_precision as dp


class fleuret(models.Model):
    _inherit = "mrp.bom.line"

    @api.one
    @api.depends('product_qty')
    def _compute_price(self):
        self.amount = (self.unit_price * self.product_qty)

    unit_price = fields.Float(string='unit price', related='product_id.lst_price', store=True, readonly=True)
    amount = fields.Float(string='price', store=True, digits=dp.get_precision('Account'), compute='_compute_price')
</code></pre>
| 2 | 2016-08-15T12:15:13Z | [
"python",
"api",
"openerp",
"field",
"odoo-8"
] |
functional field odoo type float , new api | 38,954,686 | <p>I'm trying to make an operation * on a field , i have tried functional fields but it didn't work , now i'm trying this @api.depends api , does it work with odoo 8 ? it still not work </p>
<pre><code>class fleuret(osv.Model):
    _inherit = "mrp.bom.line"
    _columns = {
        'unit_price' : fields.float(string='unit price', related='product_id.lst_price', store=True, readonly=True),
        'amount' : fields.Float(string='price ',store=True, readonly=True,digits=dp.get_precision('Account'),compute='_compute_price'),
    }

    @api.one
    @api.depends('product_qty')
    def _compute_price(self):
        self.amount =(unit_price * self.product_qty)
</code></pre>
| 1 | 2016-08-15T12:00:04Z | 38,955,268 | <pre class="lang-py prettyprint-override"><code>from openerp import models, fields, api
import openerp.addons.decimal_precision as dp
class fleuret(models.Model):
_inherit = "mrp.bom.line"
unit_price = fields.Float(
string='unit price', related='product_id.lst_price',
store=True, readonly=True)
amount = fields.Float(
string='price', store=True, digits=dp.get_precision('Account'),
compute='_compute_price')
@api.multi
@api.depends('product_qty', 'unit_price')
def _compute_price(self):
for r in self:
r.amount = r.unit_price * r.product_qty
</code></pre>
| 0 | 2016-08-15T12:40:19Z | [
"python",
"api",
"openerp",
"field",
"odoo-8"
] |
functional field odoo type float , new api | 38,954,686 | <p>I'm trying to make an operation * on a field , i have tried functional fields but it didn't work , now i'm trying this @api.depends api , does it work with odoo 8 ? it still not work </p>
<pre><code>class fleuret(osv.Model):
    _inherit = "mrp.bom.line"
    _columns = {
        'unit_price' : fields.float(string='unit price', related='product_id.lst_price', store=True, readonly=True),
        'amount' : fields.Float(string='price ',store=True, readonly=True,digits=dp.get_precision('Account'),compute='_compute_price'),
    }

    @api.one
    @api.depends('product_qty')
    def _compute_price(self):
        self.amount =(unit_price * self.product_qty)
</code></pre>
| 1 | 2016-08-15T12:00:04Z | 38,955,338 | <pre><code>from openerp.osv import osv, fields
from openerp import models,api, _
import openerp.addons.decimal_precision as dp


class fleuret(osv.Model):
    _inherit = "mrp.bom.line"
    _columns = {
        'unit_price' : fields.float(string='unit price', related='product_id.lst_price', store=True, readonly=True),
        'units_price' : fields.float(string='price ',store=True, readonly=True,digits=dp.get_precision('Account'),compute='_compute_price'),
    }

    @api.one
    @api.depends('product_qty')
    def _compute_price(self):
        self.units_price = (self.unit_price * self.product_qty)
</code></pre>
| 1 | 2016-08-15T12:44:00Z | [
"python",
"api",
"openerp",
"field",
"odoo-8"
] |
Python request.py syntax error | 38,954,852 | <p>I am new to Python 3.4.5 which I am learning online by watching videos with some good knowledge of C. I am trying to download an image via Python which I am unable to do because of this error.</p>
<p>Code:</p>
<pre>import random
import urllib.request

def img(url):
    full='name'+'.jpeg'
    urllib.request.urlretrieve(url,full)

img("http://lorempixel.com/400/200")</pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "image.py", line 2, in <module>
import urllib.request
File "/home/yuvi/pyth/urllib/request.py", line 88, in <module>
import http.client
File "/usr/local/lib/python3.4/http/client.py", line 69, in <module>
import email.parser
File "/usr/local/lib/python3.4/email/parser.py", line 12, in <module>
from email.feedparser import FeedParser, BytesFeedParser
File "/usr/local/lib/python3.4/email/feedparser.py", line 27, in <module>
from email import message
File "/usr/local/lib/python3.4/email/message.py", line 16, in <module>
from email import utils
File "/usr/local/lib/python3.4/email/utils.py", line 31, in <module>
import urllib.parse
File "/home/yuvi/pyth/urllib/parse.py", line 239, in <module>
_DefragResultBase.url.__doc__ = """The URL with no fragment identifier."""
AttributeError: readonly attribute
</code></pre>
| 0 | 2016-08-15T07:09:56Z | 38,954,853 | <p>Try,</p>
<p><code>def img(url): full='name'+'.jpeg';urllib.urlretrieve(url,full)</code></p>
<p>urllib.request does not exist in python 2.x, which seems to be your case<br>
so don't try to import that in the second line of your code</p>
<p>plus you made a type(forgot semicolon) which works as a statement separator while writing inline function statements.
similar to: </p>
<pre><code>def img(url):
    full='name'+'.jpeg'
    urllib.urlretrieve(url,full)
</code></pre>
| 0 | 2016-08-15T07:22:39Z | [
"python"
] |
PyDev debugging: do not open "_pydev_execfile" at the end | 38,955,017 | <p>I am new to both Python and Eclipse.</p>
<p>I am debugging a module file with Eclipse/PyDev. When I click "Step over" or "Step return" at the last line of the file, Eclipse opens the file "_pydev_execfile" where I have to click "Step over" or "Step return" again, before the debugging is terminated. </p>
<p>Does this occur for everyone or just me?</p>
<p>Can I avoid this?</p>
| 9 | 2016-08-15T12:23:34Z | 39,012,273 | <p>In general, you can put <code># @DontTrace</code> at the end of lines that define functions to ignore these functions in the traceback. </p>
<p>In the particular case described in the question, this works as follows: Change the definition of execfile() in _pydev_execfile.py to:</p>
<pre><code>def execfile(file, glob=None, loc=None): # @DontTrace
...
</code></pre>
<p>Afterwards, PyDev ends up opening another file (codecs.py) at the end of debugging. To fix this, you will have to @DontTrace a few more functions in that (but only in that one) function. </p>
| 2 | 2016-08-18T07:28:44Z | [
"python",
"eclipse",
"debugging",
"pydev"
] |
cannot access simple http server | 38,955,047 | <p>I created a simple Simple http server using </p>
<p><code>python -m SimpleHTTPServer</code></p>
<p>I can access it via my laptop or through my phone but when it is accessed through another location it shows Webpage not available or when I try to access it through my browser's VPN it also gives an ERROR: Requested url could not be retrieved.
What seems to be the issue ?
I'm pasting the output of wget here</p>
<pre><code> wget 192.168.43.171:8000
--2016-08-15 17:40:06-- http://192.168.43.171:8000/
Connecting to 192.168.43.171:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 242 [text/html]
Saving to: âindex.htmlâ
index.html 100%[====================>] 242 --.-KB/s in 0s
2016-08-15 17:40:06 (28.8 MB/s) - âindex.htmlâ saved [242/242]
</code></pre>
| -2 | 2016-08-15T12:25:25Z | 38,955,088 | <p>IPv4 addresses starting with 192.168 are part of the local network block.
You cannot reach them from outside your local network; they belong to a private (RFC 1918) address range.</p>
<p>See also: <a href="http://xkcd.com/742" rel="nofollow">http://xkcd.com/742</a></p>
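<p>You can verify this programmatically with the standard library's <code>ipaddress</code> module (Python 3.3+; a hedged sketch):</p>

```python
import ipaddress

addr = ipaddress.ip_address("192.168.43.171")
print(addr.is_private)                                 # -> True
print(addr in ipaddress.ip_network("192.168.0.0/16"))  # -> True
```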
| 0 | 2016-08-15T12:27:54Z | [
"python",
"simplehttpserver"
] |
Downloading thousands of files using python | 38,955,052 | <p>I'm trying to download about 500k small csv files (5kb-1mb) from a list of urls but it is been taking too long to get this done. With the code bellow, I am lucky if I get 10k files a day.</p>
<p>I have tried using the multiprocessing package and a pool to download multiple files simultaneously. This seems to be effective for the first few thousand downloads, but eventually the overall speed goes down. I am no expert, but I assume that the decreasing speed indicates that the server I am trying to download from cannot keep up with this number of requests. Does that makes sense? </p>
<p>To be honest, I am quite lost here and was wondering if there is any piece of advice on how to speed this up. </p>
<pre><code>import urllib2
import pandas as pd
import csv
from multiprocessing import Pool

#import url file
df = pd.read_csv("url_list.csv")

#select only part of the total list to download
list=pd.Series(df[0:10000])

#define a job and set file name as the id string under urls
def job(url):
    file_name = str("test/"+url[46:61])+".csv"
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    f.write(u.read())
    f.close()

#run job
pool = Pool()
url = [ "http://" + str(file_path) for file_path in list]
pool.map(job, url)
</code></pre>
| 0 | 2016-08-15T12:25:39Z | 38,955,902 | <p>You are re-coding the wheel!</p>
<p>How about that :</p>
<pre><code>parallel -a urls.file axel
</code></pre>
<p>Of course you'll have to install <code>parallel</code> and <code>axel</code> for your distribution.</p>
<p><code>axel</code> is a multithreaded counterpart to <code>wget</code></p>
<p><code>parallel</code> allows you to run such commands concurrently.</p>
| -1 | 2016-08-15T13:21:47Z | [
"python",
"web-scraping",
"urllib2",
"python-multiprocessing"
] |
numpy.subtract performs subtraction wrong on 1-dimensional arrays | 38,955,074 | <blockquote>
<p>I read the post <a href="http://stackoverflow.com/questions/588004/is-floating-point-math-broken">is-floating-point-math-broken</a> and I get why it
happens, but I couldn't find a solution that could help me.</p>
<p>How can I do the correct subtraction?</p>
</blockquote>
<p>Python version 2.6.6, Numpy version 1.4.1. </p>
<p>I have two numpy.ndarray each one contain float32 values, origin and new. I'm trying to use numpy.subtract to subtract them but I get the following (odd) result: </p>
<pre><code>>>> import numpy as np
>>> with open('base_R.l_BSREM_S.9.1_001.bin', 'r+') as fid:
...     origin = np.fromfile(fid, np.float32)
>>> with open('new_R.l_BSREM_S.9.1_001.bin', 'r+') as fid:
...     new = np.fromfile(fid, np.float32)
>>> diff = np.subtract(origin, new)
>>> origin[5184939]
0.10000000149011611938
>>> new[5184939]
0.00000000023283064365
>>> diff[5184939]
0.10000000149011611938
</code></pre>
<p>Also when I try to subtract the arrays at 5184939 I get the same result as diff[5184939]</p>
<pre><code>>>> origin[5184939] - new[5184939]
0.10000000149011611938
</code></pre>
<p>But when I do the following I get this results:</p>
<pre><code>>>> 0.10000000149011611938 - 0.00000000023283064365
0.10000000125728548
</code></pre>
<p>and that's not equal to diff[5184939] </p>
<p>How the right subtraction can be done? (0.10000000125728548 is the one that I need)</p>
<p>Please help, and Thanks in advance</p>
| -1 | 2016-08-15T12:27:07Z | 38,959,017 | <p>You might add your Python and numpy versions to the question.</p>
<p>Differences can arise from <code>np.float32</code> v <code>np.float64</code> dtype, the default Python float type, as well as display standards. <code>numpy</code> uses different display rounding than the underlying Python.</p>
<p>The subtraction itself does not differ.</p>
<p>I can reproduce the <code>0.10000000125728548</code> value, which may also display as <code>0.1</code> (at the default 8 decimals).</p>
<p>I'm not sure where the <code>0.10000000149011611938</code> comes from. That looks as though <code>new[5184939]</code> was identically 0, not just something small like <code>0.00000000023283064365</code>.</p>
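<p>To make the precision point concrete, here is a hedged sketch. The printed <code>new</code> value, 0.00000000023283064365, is <code>2**-32</code>, which is smaller than half a unit in the last place of <code>0.1</code> in float32, so the float32 subtraction rounds straight back to <code>0.1</code>:</p>

```python
import numpy as np

a = np.float32(0.1)         # stored as 0.10000000149011612
b = np.float32(2.0 ** -32)  # the tiny value printed in the question

d32 = a - b                          # float32 arithmetic, like diff[5184939]
d64 = np.float64(a) - np.float64(b)  # promote to float64 first, then subtract

print(d32 == a)    # -> True: b vanishes below float32 precision
print(float(d64))  # -> 0.10000000125728548, the value the question expects
```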
| 0 | 2016-08-15T16:27:45Z | [
"python",
"arrays",
"numpy"
] |
Python replace oneliner without using regexp | 38,955,140 | <p>I have my code here:</p>
<pre><code>a = u"\n".join(my_array).replace(u"\n\n", u"\n")
</code></pre>
<p>The problem is that if there are <code>"\n\n\n\n"</code> you are left with <code>"\n\n"</code> and I just want <strong>one</strong> <code>"\n"</code></p>
<p>So I've come up with:</p>
<pre><code>a = u"\n".join(my_array)
while a.find(u"\n\n")>=0:
a = a.replace(u"\n\n", u"\n")
</code></pre>
<p>I was wondering if there's a more elegant way / maybe oneliner <em>without using regexp</em> to do this in Python?</p>
| 1 | 2016-08-15T12:31:11Z | 38,955,240 | <p>perhaps this can help:</p>
<pre><code>u"\n".join(s.replace(u'\n', '') for s in my_array)
</code></pre>
| 0 | 2016-08-15T12:38:36Z | [
"python"
] |
Python replace oneliner without using regexp | 38,955,140 | <p>I have my code here:</p>
<pre><code>a = u"\n".join(my_array).replace(u"\n\n", u"\n")
</code></pre>
<p>The problem is that if there are <code>"\n\n\n\n"</code> you are left with <code>"\n\n"</code> and I just want <strong>one</strong> <code>"\n"</code></p>
<p>So I've come up with:</p>
<pre><code>a = u"\n".join(my_array)
while a.find(u"\n\n")>=0:
a = a.replace(u"\n\n", u"\n")
</code></pre>
<p>I was wondering if there's a more elegant way / maybe oneliner <em>without using regexp</em> to do this in Python?</p>
| 1 | 2016-08-15T12:31:11Z | 38,955,503 | <p>If you really want to do this in one line and without using regular expression, one way to reduce all the sequences of multiple <code>\n</code> to single <code>\n</code> would to be first <code>split</code> by <code>\n</code> and then <code>join</code> all the non-empty segments by a single <code>\n</code>.</p>
<pre><code>>>> a = "foo\n\nbar\n\n\nblub\n\n\n\nbaz"
>>> "\n".join(x for x in a.split("\n") if x)
'foo\nbar\nblub\nbaz'
</code></pre>
<p>Here, <code>a</code> is the <em>entire</em> string, i.e. after you did <code>"\n".join(my_array)</code>, and depending on what <code>my_array</code> originally is, there may be better solutions, e.g. stripping <code>\n</code> from the individual lines prior to joining, but this will work nonetheless.</p>
| 3 | 2016-08-15T12:54:29Z | [
"python"
] |
Python replace oneliner without using regexp | 38,955,140 | <p>I have my code here:</p>
<pre><code>a = u"\n".join(my_array).replace(u"\n\n", u"\n")
</code></pre>
<p>The problem is that if there are <code>"\n\n\n\n"</code> you are left with <code>"\n\n"</code> and I just want <strong>one</strong> <code>"\n"</code></p>
<p>So I've come up with:</p>
<pre><code>a = u"\n".join(my_array)
while a.find(u"\n\n")>=0:
a = a.replace(u"\n\n", u"\n")
</code></pre>
<p>I was wondering if there's a more elegant way / maybe oneliner <em>without using regexp</em> to do this in Python?</p>
| 1 | 2016-08-15T12:31:11Z | 38,955,521 | <p>To convert sequences of newlines to single newlines you can split the string on newlines and then filter out the empty strings before re-joining. Eg,</p>
<pre><code>mystring = u"this\n\nis a\ntest string\n\nwith embedded\n\n\nnewlines\n"
a = u'\n'.join(filter(None, mystring.split(u'\n')))
print '{0!r}\n{1!r}'.format(mystring, a)
</code></pre>
<p><strong>output</strong></p>
<pre><code>u'this\n\nis a\ntest string\n\nwith embedded\n\n\nnewlines\n'
u'this\nis a\ntest string\nwith embedded\nnewlines'
</code></pre>
<p>Note that this eliminates any trailing newlines, but that shouldn't be a big deal.</p>
| 2 | 2016-08-15T12:55:51Z | [
"python"
] |
Python replace oneliner without using regexp | 38,955,140 | <p>I have my code here:</p>
<pre><code>a = u"\n".join(my_array).replace(u"\n\n", u"\n")
</code></pre>
<p>The problem is that if there are <code>"\n\n\n\n"</code> you are left with <code>"\n\n"</code> and I just want <strong>one</strong> <code>"\n"</code></p>
<p>So I've come up with:</p>
<pre><code>a = u"\n".join(my_array)
while a.find(u"\n\n")>=0:
a = a.replace(u"\n\n", u"\n")
</code></pre>
<p>I was wondering if there's a more elegant way / maybe oneliner <em>without using regexp</em> to do this in Python?</p>
| 1 | 2016-08-15T12:31:11Z | 38,955,712 | <p>Using <code>reduce</code> should work:</p>
<pre><code>reduce(lambda x,y: (x+y).replace('\n\n', '\n'), x)
</code></pre>
<p>However, regular expressions would be more elegant:</p>
<pre><code>re.sub('\n+', '\n', x)
</code></pre>
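<p>A self-contained check of both one-liners (note that on Python 3 <code>reduce</code> lives in <code>functools</code>):</p>

```python
from functools import reduce  # a builtin in Python 2, functools in Python 3
import re

s = "foo\n\n\n\nbar\n\nbaz"

# reduce walks the string character by character, collapsing any "\n\n"
# as soon as it appears in the accumulator
collapsed = reduce(lambda acc, ch: (acc + ch).replace('\n\n', '\n'), s)
assert collapsed == "foo\nbar\nbaz"

# the regex form produces the same result in one pass
assert re.sub('\n+', '\n', s) == collapsed
```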
| 1 | 2016-08-15T13:09:48Z | [
"python"
] |
Is there a way to register Jupyter Notebook Progress Bar Widget instead of Text Progress Bar in Dask/Distributed? | 38,955,163 | <p>I know that there is a way to globally register <code>dask.diagnostics.ProgressBar</code>, and while it is quite nice, it breaks my cell outputs. I have also seen a nice <code>distributed.diagnostics.progress</code> function, which can present the execution progress with Jupyter Notebook Progress Bar widget, but it expects to receive futures.</p>
<p>The issue I have here is that <code>dask.diagnostics.ProgressBar</code> uses <code>stdout</code> (so I cannot print anything while I use the Progress Bar), and <code>distributed.diagnostics.progress</code> needs to be called explicitly with Dask/Distributed futures, but I have my functions which compute something and return an instance of a "result" class rather than a Dask/Distributed future.</p>
| 2 | 2016-08-15T12:32:43Z | 38,955,284 | <p>The short answer today is "no". This is possible to build, but would require some effort on your part.</p>
<p>The <code>distributed.diagnostics.progress</code> function operates in the event loop, and so it stops updating when notebook is busy running a cell. There is no way to both have a synchronous dask.compute experience (you give a graph, it gives a result) and also use the IPython widget progressbar produced by <code>distributed.diagnostics.progress</code>.</p>
<p>However, both the single machine and distributed machine schedulers have plugin systems (which is how we built the progressbars) so it should be possible to extend either system if you'd like to build your own:</p>
<ul>
<li><a href="http://dask.readthedocs.io/en/latest/diagnostics.html" rel="nofollow">http://dask.readthedocs.io/en/latest/diagnostics.html</a></li>
<li><a href="http://distributed.readthedocs.io/en/latest/plugins.html" rel="nofollow">http://distributed.readthedocs.io/en/latest/plugins.html</a></li>
</ul>
| 2 | 2016-08-15T12:41:15Z | [
"python",
"dask"
] |
reading names with genfromtxt missing hyphen | 38,955,168 | <p>I am trying to read the following file and I tried to use numpy to load data:</p>
<pre><code>#Frame HIE_21@O-PHE_32@N-H THR_20@O-PHE_32@N-H HIE_21@ND1-PHE_32@N-H
1 0 0 0
2 1 0 0
3 0 0 0
4 0 0 0
5 0 0 0
</code></pre>
<p>If I read the field names from the 1st row starting with the first value, the names are missing a '-' character from the middle:</p>
<pre><code>f1 = np.genfromtxt(fileName1, dtype=None, names=True)
labels = f1.dtype.names[1:]
print labels
> ('HIE_21OPHE_32NH', 'THR_20OPHE_32NH', 'HIE_21ND1PHE_32NH')
</code></pre>
<p>instead of
HIE_21O-PHE_32NH, THR_20O-PHE_32NH, HIE_21ND1-PHE_32NH</p>
<p>Why? How can I retrieve the hyphen?</p>
| 0 | 2016-08-15T12:33:06Z | 38,955,478 | <p>Use the argument <code>deletechars=''</code>:</p>
<pre><code>In [15]: f1 = np.genfromtxt('hyphens.txt', dtype=None, names=True, deletechars='')
In [16]: f1
Out[16]:
array([(1, 0, 0, 0), (2, 1, 0, 0), (3, 0, 0, 0), (4, 0, 0, 0), (5, 0, 0, 0)],
dtype=[('Frame', '<i8'), ('HIE_21@O-PHE_32@N-H', '<i8'), ('THR_20@O-PHE_32@N-H', '<i8'), ('HIE_21@ND1-PHE_32@N-H', '<i8')])
In [17]: f1.dtype.names
Out[17]:
('Frame',
'HIE_21@O-PHE_32@N-H',
'THR_20@O-PHE_32@N-H',
'HIE_21@ND1-PHE_32@N-H')
</code></pre>
| 2 | 2016-08-15T12:52:49Z | [
"python",
"numpy",
"names",
"genfromtxt"
] |
top n columns with highest value in each pandas dataframe row | 38,955,182 | <p>I have the following dataframe:</p>
<pre><code> id p1 p2 p3 p4
1 0 9 1 4
2 0 2 3 4
3 1 3 10 7
4 1 5 3 1
5 2 3 7 10
</code></pre>
<p>I need to reshape the data frame in a way that for each id it will have the top 3 columns with the highest values. The result would be like this:</p>
<pre><code> id top1 top2 top3
1 p2 p4 p3
2 p4 p3 p2
3 p3 p4 p2
4 p2 p3 p4/p1
5 p4 p3 p2
</code></pre>
<p>It shows the top 3 best sellers for every <code>user_id</code>. I have already done it using the <code>dplyr</code> package in R, but I am looking for the pandas equivalent.</p>
| 0 | 2016-08-15T12:33:56Z | 38,955,314 | <p>You can use:</p>
<pre><code>df = df.set_index('id').apply(lambda x: pd.Series(x.sort_values(ascending=False)
.iloc[:3].index,
index=['top1','top2','top3']), axis=1).reset_index()
print (df)
id top1 top2 top3
0 1 p2 p4 p3
1 2 p4 p3 p2
2 3 p3 p4 p2
3 4 p2 p3 p4
4 5 p4 p3 p2
</code></pre>
| 2 | 2016-08-15T12:42:43Z | [
"python",
"pandas",
"dataframe",
"data-analysis"
] |
top n columns with highest value in each pandas dataframe row | 38,955,182 | <p>I have the following dataframe:</p>
<pre><code> id p1 p2 p3 p4
1 0 9 1 4
2 0 2 3 4
3 1 3 10 7
4 1 5 3 1
5 2 3 7 10
</code></pre>
<p>I need to reshape the data frame in a way that for each id it will have the top 3 columns with the highest values. The result would be like this:</p>
<pre><code> id top1 top2 top3
1 p2 p4 p3
2 p4 p3 p2
3 p3 p4 p2
4 p2 p3 p4/p1
5 p4 p3 p2
</code></pre>
<p>It shows the top 3 best sellers for every <code>user_id</code>. I have already done it using the <code>dplyr</code> package in R, but I am looking for the pandas equivalent.</p>
| 0 | 2016-08-15T12:33:56Z | 38,955,365 | <p>You could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow"><code>np.argsort</code></a> to find the indices of the <em>n</em> largest items for each row:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'id': [1, 2, 3, 4, 5],
'p1': [0, 0, 1, 1, 2],
'p2': [9, 2, 3, 5, 3],
'p3': [1, 3, 10, 3, 7],
'p4': [4, 4, 7, 1, 10]})
df = df.set_index('id')
nlargest = 3
order = np.argsort(-df.values, axis=1)[:, :nlargest]
result = pd.DataFrame(df.columns[order],
columns=['top{}'.format(i) for i in range(1, nlargest+1)],
index=df.index)
print(result)
</code></pre>
<p>yields</p>
<pre><code> top1 top2 top3
id
1 p2 p4 p3
2 p4 p3 p2
3 p3 p4 p2
4 p2 p3 p1
5 p4 p3 p2
</code></pre>
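<p>A related idiom (not in either answer, shown here as an alternative sketch) applies <code>Series.nlargest</code> row by row; it is slower than the vectorized <code>argsort</code> approach but reads naturally, and ties are broken by first occurrence:</p>

```python
import pandas as pd

df = pd.DataFrame({'p1': [0, 0], 'p2': [9, 2], 'p3': [1, 3], 'p4': [4, 4]},
                  index=pd.Index([1, 2], name='id'))

# nlargest keeps the 3 biggest values per row; returning a Series from
# apply expands the labels into top1/top2/top3 columns
result = df.apply(lambda row: pd.Series(row.nlargest(3).index.tolist(),
                                        index=['top1', 'top2', 'top3']),
                  axis=1)

assert result.loc[1].tolist() == ['p2', 'p4', 'p3']
assert result.loc[2].tolist() == ['p4', 'p3', 'p2']
```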
| 6 | 2016-08-15T12:46:05Z | [
"python",
"pandas",
"dataframe",
"data-analysis"
] |
Pass the QAction object which calls triggered.connect() as a parameter in the function that is triggered after i click on QAction | 38,955,281 | <p>i am creating list of QAction objects using for loop like this:</p>
<pre><code>class some_class:
self.tabs = []
for self.i in range(0,10):
self.tabs[self.i] = QtGui.QAction("New", self)
self.tabs[self.i].triggered.connect(self.some_function)
def some_function(self):
print self.i
</code></pre>
<p>whenever i click on any of the tabs created, it triggers only tabs[9] and print "9" only. </p>
<p>So how to pass the QAction object itself in the some_function which triggered some_function()</p>
| 0 | 2016-08-15T12:41:09Z | 38,955,762 | <p>Cache the index as a default argument:</p>
<pre><code>for index in range(0, 10):
action = QtGui.QAction("New", self)
action.triggered.connect(
lambda checked, index=index: self.some_function(index))
self.tabs.append(action)
...
def some_function(self, index):
action = self.tabs[index]
print(action.text())
</code></pre>
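<p>The default-argument trick in the <code>lambda</code> is plain Python and has nothing Qt-specific about it — a minimal demonstration of why it is needed:</p>

```python
# late binding: every closure sees the *final* value of the loop variable,
# which is why every QAction in the question printed 9
late = []
for i in range(3):
    late.append(lambda: i)
assert [f() for f in late] == [2, 2, 2]

# default argument: the current value is captured at definition time
bound = []
for i in range(3):
    bound.append(lambda i=i: i)
assert [f() for f in bound] == [0, 1, 2]
```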
| 1 | 2016-08-15T13:13:23Z | [
"python",
"python-2.7",
"pyqt4",
"signals-slots"
] |
Python append json object in file, guard if object does not exist | 38,955,307 | <p>I am reading in a json file using python, and then appending in an array within an object, the shape of this being </p>
<pre><code>"additional_info": {"other_names": ["12.13"]}
</code></pre>
<p>I am appending the array as follows:</p>
<pre><code>data["additional_info"]["other_names"].append('13.9')
with open('jsonfile', 'w') as f:
json.dump(data, f)
</code></pre>
<p>I want to set a guard to check if additional_info and other_names exists in the json file and if it doesn't then to create it. How would I go about doing this?</p>
| 0 | 2016-08-15T12:42:25Z | 38,955,538 | <p>Usually I would use nested <code>try-except</code> to check for each missing key or a <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code></a>, but in this case I think I would go with 2 <code>if</code> statements for the sake of simplicity:</p>
<pre><code>if "additional_info" not in data:
data["additional_info"] = {}
if "other_names" not in data["additional_info"]:
data["additional_info"]["other_names"] = []
data["additional_info"]["other_names"].append('13.9')
with open('jsonfile', 'w') as f:
json.dump(data, f)
</code></pre>
<p>Two use cases:</p>
<pre><code>data = {}
if "additional_info" not in data:
data["additional_info"] = {}
if "other_names" not in data["additional_info"]:
data["additional_info"]["other_names"] = []
data["additional_info"]["other_names"].append('13.9')
print(data)
>> {'additional_info': {'other_names': ['13.9']}}
</code></pre>
<p>And </p>
<pre><code>data = {"additional_info": {"other_names": ["12.13"]}}
if "additional_info" not in data:
data["additional_info"] = {}
if "other_names" not in data["additional_info"]:
data["additional_info"]["other_names"] = []
data["additional_info"]["other_names"].append('13.9')
print(data)
>> {'additional_info': {'other_names': ['12.13', '13.9']}}
</code></pre>
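<p>For what it's worth, the same guard can be written more compactly with <code>dict.setdefault</code> (just an alternative, not required):</p>

```python
data = {}

# setdefault returns the existing value when the key is present,
# otherwise it inserts and returns the supplied default
data.setdefault("additional_info", {}).setdefault("other_names", []).append('13.9')
assert data == {'additional_info': {'other_names': ['13.9']}}

# a second append reuses the existing structures instead of recreating them
data.setdefault("additional_info", {}).setdefault("other_names", []).append('14.2')
assert data["additional_info"]["other_names"] == ['13.9', '14.2']
```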
| 1 | 2016-08-15T12:57:03Z | [
"python",
"json"
] |
Function works once, not the second time around | 38,955,325 | <p>in my program I start with this...</p>
<pre><code>def start():
acu.cont
acu.php
input('Welcome... ( Press Enter )')
acga.Game.game()
start()
</code></pre>
<p>This works fine and start the game normally the first time around. When I finish the game I have this..</p>
<pre><code>again = '1'
print('Play again? ( 1 for yes, 2 for no )')
again = input()
while again == '1':
import Start
Start.start()
else:
print('adios')
raise SystemExit
</code></pre>
<p>After choosing to play again the game displays the welcome message like normal, but pressing enter does nothing. I'm left in limbo.</p>
<p>Any ideas what's going on? Thanks</p>
| -2 | 2016-08-15T12:43:26Z | 38,955,463 | <p>Try changing the <code>while</code> in</p>
<pre><code>while again == '1':
</code></pre>
<p>to <code>if</code>.</p>
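<p>More generally, a replay loop is easier to reason about when the game call sits inside a single loop instead of re-importing the module (a second <code>import Start</code> is a no-op, since Python caches modules). A rough, testable sketch of that structure, with the I/O passed in as functions so it is not tied to <code>input()</code>:</p>

```python
def run_game(play, ask):
    """Keep playing while the player answers '1'."""
    while True:
        play()
        if ask() != '1':
            print('adios')
            break

# simulate a player who replays twice and then quits
played = []
answers = iter(['1', '1', '2'])
run_game(lambda: played.append('round'), lambda: next(answers))
assert played == ['round', 'round', 'round']
```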
| -3 | 2016-08-15T12:52:04Z | [
"python",
"python-3.x"
] |
Property behaviour (getter and setter) for a column, instead of normal attribute behaviour | 38,955,334 | <p>Does SQLAlchemy support something like this:</p>
<pre><code>class Character(Base):
__tablename__ = 'character'
id = Column(Integer, primary_key=True)
level = Column(Integer)
@ColumnProperty(Integer)
def xp(self):
return self._xp
@xp.setter
def xp(self, value):
self._xp = value
while self._xp >= self.xp_to_level_up():
self.level += 1
self._xp -= 100
def __init__(self, level=0, xp=0):
self.level = level
self._xp = xp
</code></pre>
<p>Where <code>id</code>, <code>level</code>, and <code>xp</code> would get stored into the database. So basically a <code>Column</code> but instead of an attribute a property.</p>
| 0 | 2016-08-15T12:43:44Z | 38,955,426 | <p>Just add a <code>_xp</code> column definition, that maps to the <code>xp</code> column in the table, and use a regular <code>@property</code> object to implement your setter logic:</p>
<pre><code>class Character(Base):
__tablename__ = 'character'
id = Column(Integer, primary_key=True)
level = Column(Integer)
_xp = Column('xp', Integer)
def __init__(self, level=0, xp=0):
self.level = level
self._xp = xp
@property
def xp(self):
return self._xp
@xp.setter
def xp(self, value):
self._xp = value
while self._xp >= self.xp_to_level_up():
self.level += 1
self._xp -= 100
</code></pre>
<p>See <a href="http://docs.sqlalchemy.org/en/latest/orm/mapping_columns.html#naming-columns-distinctly-from-attribute-names" rel="nofollow"><em>Naming Columns Distinctly from Attribute Names</em></a>.</p>
<p>The above stores the <code>_xp</code> attribute on the instance as <code>xp</code> in the <code>character</code> table, but your code uses the <code>Character.xp</code> property to read and write the value of <code>_xp</code> indirectly.</p>
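<p>The leveling logic itself is ordinary <code>@property</code> behaviour, independent of SQLAlchemy — a minimal sketch (assuming a flat 100-XP threshold for <code>xp_to_level_up</code>, which the question leaves unspecified):</p>

```python
class Character:
    def __init__(self, level=0, xp=0):
        self.level = level
        self._xp = xp

    def xp_to_level_up(self):
        return 100  # assumed flat threshold for this sketch

    @property
    def xp(self):
        return self._xp

    @xp.setter
    def xp(self, value):
        self._xp = value
        while self._xp >= self.xp_to_level_up():
            self.level += 1
            self._xp -= 100

c = Character()
c.xp = 250          # two level-ups, 50 XP left over
assert (c.level, c.xp) == (2, 50)
```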
| 2 | 2016-08-15T12:49:22Z | [
"python",
"python-3.x",
"properties",
"sqlalchemy"
] |
Select a particular element in a webpage from list using selenium python | 38,955,339 | <p>I am trying to automate through selenium python to select a particular element from a list of element in a webpage. That is in the first page of the webpage I am selecting one list item from 5 items displayed, later I need to deselect them in the next page of the website where all the five numbers are listed. How can i select that particular element which i have select in the first page of the webpage? </p>
<p>The HTML code of the first page list item looks like this -></p>
<pre><code><li class="ng-scope" ng-repeat="line in accountLines">
<label class="list-item inline-group">
<div class="inline-group-addon">
<div class="inline-group-main">
<p class="ng-binding">
</div>
</code></pre>
<p>and i have generated xpath as:</p>
<pre><code>.//*[@id='oobe-login']/body/div[1]/main/div/div[1]/ul/li[1]/label/div[2]/p
.//*[@id='oobe-login']/body/div[1]/main/div/div[1]/ul/li[2]/label/div[2]/p
.//*[@id='oobe-login']/body/div[1]/main/div/div[1]/ul/li[3]/label/div[2]/p
.//*[@id='oobe-login']/body/div[1]/main/div/div[1]/ul/li[4]/label/div[2]/p
.//*[@id='oobe-login']/body/div[1]/main/div/div[1]/ul/li[5]/label/div[2]/p
</code></pre>
<p>and the HTML code of the next page is which lists all the element is:</p>
<pre><code>li class="ng-scope" ng-repeat="line in allLinesList">
<a class="list-item inline-group reverse highlight" ng-click="expandSettings($index)" ng-class="(settings.selectedLine === $index && settings.expanded) ? 'active' : ''" href="">
</li>
</code></pre>
<p>I just do a text check here, which is a hard-coded way to select the number that I selected on the first page. </p>
<p>Can anybody suggest a solution so that the Selenium webdriver clicks each element on the next page and checks if it is registered, and if so, deregisters it? Please help.</p>
| 0 | 2016-08-15T12:44:01Z | 38,957,011 | <p>I'm not sure which webdriver you are using, but the snippet I'm including here has worked for me with the Firefox, Chrome, and CasperJS drivers.</p>
<p>To click an option in a drop down list, you'll want to use something like the following, using one of your XPATH attributes above:</p>
<pre><code>opt_button = driver.wait.until(EC.element_to_be_clickable((By.XPATH, """.//*[@id='oobe-login']/body/div[1]/main/div/div[1]/ul/li[1]/label/div[2]/p""")))
try:
opt_button.click() #This is the part that actually selects the option
print "Clicked the list item!"
except ElementNotVisibleException, s:
print "Could not click the list item..."
print "Error: "+str(s)
</code></pre>
<p>Selenium offers the ability to wait for an element to be loaded and clickable, so telling your driver to wait for that is usually a good way to prevent errors from slow-loading pages.</p>
<p>As for deselecting the option, you would have to have either a list that offers a NULL selection or a script on the page to clear your selection on page load. I'm not great with JavaScript, but I believe there may be a way to inject a script to deselect a list item. You may want to do a Google search for ways to clear list selections with JS, but I am sure there will be conditions that need to be met in order to do this.</p>
| 1 | 2016-08-15T14:26:01Z | [
"python",
"python-2.7",
"selenium",
"selenium-webdriver"
] |
Tricky wide to long conversion in Pandas with multi-index columns | 38,955,407 | <p>I have a dataframe that looks like</p>
<pre><code>stock date type1 type2 volume_A qtit_A volume_B qtit_B
'ABC' '2013-01-01' 1 2 1000 5 2500 6
'ABC' '2013-01-02' 1 3 4000 10 2500 0
</code></pre>
<p>and I would like to reshape it as follows:</p>
<pre><code>stock date type1 type2 volume qtit type
'ABC' '2013-01-01' 1 2 1000 5 A
'ABC' '2013-01-01' 1 2 2500 6 B
'ABC' '2013-01-02' 1 3 4000 10 A
'ABC' '2013-01-02' 1 3 2500 0 B
</code></pre>
<p>where you can see that the columns <code>['volume_A','qtit_A','volume_B','qtit_B']</code> are broken down in <code>['volume','qtit']</code> with a type indicator to remember which type of volume/price we are looking at.</p>
<p>I am struggling to have that done in Pandas in a clean way (using <code>melt</code> or <code>stack()</code> for instance) </p>
<p>Any ideas?
Thanks!</p>
| 0 | 2016-08-15T12:48:05Z | 38,955,534 | <pre><code>pd.lreshape(df.assign(type_A=['A']*len(df), type_B=['B']*len(df)),
{'volume': ['volume_A', 'volume_B'],
'qtit': ['qtit_A', 'qtit_B'],
'type': ['type_A', 'type_B']})
Out:
date stock type1 type2 qtit type volume
0 '2013-01-01' 'ABC' 1 2 5 A 1000
1 '2013-01-02' 'ABC' 1 3 10 A 4000
2 '2013-01-01' 'ABC' 1 2 6 B 2500
3 '2013-01-02' 'ABC' 1 3 0 B 2500
</code></pre>
<p>Assigning two new columns for type may not be necessary considering the output is ordered based on the order of the lists.</p>
| 3 | 2016-08-15T12:56:39Z | [
"python",
"pandas"
] |
Tricky wide to long conversion in Pandas with multi-index columns | 38,955,407 | <p>I have a dataframe that looks like</p>
<pre><code>stock date type1 type2 volume_A qtit_A volume_B qtit_B
'ABC' '2013-01-01' 1 2 1000 5 2500 6
'ABC' '2013-01-02' 1 3 4000 10 2500 0
</code></pre>
<p>and I would like to reshape it as follows:</p>
<pre><code>stock date type1 type2 volume qtit type
'ABC' '2013-01-01' 1 2 1000 5 A
'ABC' '2013-01-01' 1 2 2500 6 B
'ABC' '2013-01-02' 1 3 4000 10 A
'ABC' '2013-01-02' 1 3 2500 0 B
</code></pre>
<p>where you can see that the columns <code>['volume_A','qtit_A','volume_B','qtit_B']</code> are broken down in <code>['volume','qtit']</code> with a type indicator to remember which type of volume/price we are looking at.</p>
<p>I am struggling to have that done in Pandas in a clean way (using <code>melt</code> or <code>stack()</code> for instance) </p>
<p>Any ideas?
Thanks!</p>
| 0 | 2016-08-15T12:48:05Z | 38,955,716 | <p>If you set <code>['date','stock','type1','type2']</code> as the <code>index</code>, then you can split the remaining column labels on <code>'_'</code>, create a MultiIndex from these tuples, and then move the <code>A</code>,<code>B</code> labels into the <code>index</code> using <code>stack</code>. <code>reset_index</code> then produces the desired result by moving the index levels back into columns.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'date': ['2013-01-01', '2013-01-02'],
'qtit_A': [5, 10],
'qtit_B': [6, 0],
'stock': ['ABC', 'ABC'],
'type1': [1, 1],
'type2': [2, 3],
'volume_A': [1000, 4000],
'volume_B': [2500, 2500]})
df = df.set_index(['date','stock','type1','type2'])
df.columns = pd.MultiIndex.from_tuples([col.split('_', 1) for col in df.columns])
result = df.stack(level=1).reset_index()
result = result.rename(columns={'level_4':'type'})
print(result)
</code></pre>
<p>yields:</p>
<pre><code> date stock type1 type2 type qtit volume
0 2013-01-01 ABC 1 2 A 5 1000
1 2013-01-01 ABC 1 2 B 6 2500
2 2013-01-02 ABC 1 3 A 10 4000
3 2013-01-02 ABC 1 3 B 0 2500
</code></pre>
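<p>pandas also ships a purpose-built helper for exactly this wide-to-long pattern — a sketch using <code>pd.wide_to_long</code> (the <code>suffix</code> regex is needed because the suffixes here are letters, not the default digits):</p>

```python
import pandas as pd

df = pd.DataFrame({'stock': ['ABC', 'ABC'],
                   'date': ['2013-01-01', '2013-01-02'],
                   'type1': [1, 1], 'type2': [2, 3],
                   'volume_A': [1000, 4000], 'qtit_A': [5, 10],
                   'volume_B': [2500, 2500], 'qtit_B': [6, 0]})

# the id columns go in i=, the stub names lose their '_A'/'_B' suffix,
# and the suffix itself becomes the new 'type' index level
tidy = pd.wide_to_long(df, stubnames=['volume', 'qtit'],
                       i=['stock', 'date', 'type1', 'type2'],
                       j='type', sep='_', suffix='[AB]').reset_index()

assert len(tidy) == 4
assert sorted(tidy['type'].unique()) == ['A', 'B']
row_a = tidy[(tidy['date'] == '2013-01-01') & (tidy['type'] == 'A')].iloc[0]
assert (row_a['volume'], row_a['qtit']) == (1000, 5)
```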
| 2 | 2016-08-15T13:10:09Z | [
"python",
"pandas"
] |
django post request is not showing javascript blob object in terminal | 38,955,447 | <p>I've been banging my head on the wall for hours with this. I am trying to use <code>recorder.js</code> using the source from this example <a href="https://webaudiodemos.appspot.com/AudioRecorder/" rel="nofollow">https://webaudiodemos.appspot.com/AudioRecorder/</a></p>
<p>Now I want to modify it so that the user will hear the audio played back to them and then the y will have the chance to upload it to the server if they like it. I got the audio to playback by adding the return of the method <code>createObjectURL(blob)</code> to an audio element. GREAT</p>
<p>Now I just have to write a post request and send it to my django instance to handle it in a view...which is where it gets weird. </p>
<p>I used jquery to post an ajax request like this...</p>
<pre><code>$(document).ready(function () {
$("#submit_audio_file").click(function () {
var data = new FormData();
data.append("csrfmiddlewaretoken", $("[name=csrfmiddlewaretoken]").val());
data.append("audio_file", blob, "test");
// Display the key/value pairs
for(var pair of data.entries()) {
console.log(pair[0]+ ', '+ pair[1]);
}
$.ajax({
type: "POST",
url: $("#post_url").val(),
data: data,
processData: false, // prevent jQuery from converting the data
contentType: false, // prevent jquery from changing something else as well
success: function(response) {
alert(response);
}
});
})
});
</code></pre>
<p>when I make this request go through I see this in the console...</p>
<pre><code>csrfmiddlewaretoken, 3YBQrdOUkquRDD5dN0hTJcUXYVFiNpSe
audio_file, [object File]
</code></pre>
<p>and then in my django CBV I put a <code>print request.POST.items()</code> in the first line so I could see what was coming in. In my terminal I see this...</p>
<pre><code> [(u'csrfmiddlewaretoken', u'mymiddlewaretokenvalue')]
</code></pre>
<p>there is no <code>audio_file</code> key in the post request at all. Why would it show in the JavaScript console and then disappear in the Django request?</p>
<p>I also see a potential problem in the future because I think that the javascript console is just printing a string of <code>[object File]</code> which will obviously not do what I want it to. </p>
<p>Any ideas of where to go next?</p>
| 0 | 2016-08-15T12:51:00Z | 38,955,664 | <p>As Daniel Roseman wrote, uploaded files end up in <code>request.FILES</code>. One more thing - from jQuery documentation:</p>
<blockquote>
<p>As of jQuery 1.6 you can pass false to tell jQuery to not set any content type header.</p>
</blockquote>
<p>and Django documentation states:</p>
<blockquote>
<p>Note that request.FILES will only contain data if the request method was POST and the form that posted the request has the attribute enctype="multipart/form-data". Otherwise, request.FILES will be empty.</p>
</blockquote>
| 1 | 2016-08-15T13:06:15Z | [
"javascript",
"jquery",
"python",
"django",
"post"
] |
How to add a duplicate csv column using pandas | 38,955,564 | <p>I have a <code>CSV</code> that has just one column with <code>domains</code>, similar to this:</p>
<pre><code>google.com
yahoo.com
cnn.com
toast.net
</code></pre>
<p>I want to add a duplicate column and add the headers <code>domain</code> and <code>matches</code> so my <code>csv</code> will look like:</p>
<pre><code>domain matching
google.com google.com
yahoo.com yahoo.com
cnn.com cnn.com
toast.net toast.net
</code></pre>
<p>I tried the following in my python script using pandas:</p>
<pre><code>df = read_csv('temp.csv')
df.columns = ['domain', 'matching']
df['matching'] = df['domain']
df.to_csv('temp.csv', index=False)
</code></pre>
<p>but I am getting the following error:</p>
<blockquote>
<p>"ValueError: Length mismatch: Expected axis has 1 elements, new values have 2 elements". </p>
</blockquote>
<p>I assume I need to add another column first? Can I do this using pandas?</p>
| 1 | 2016-08-15T12:58:46Z | 38,955,609 | <p>You can add parameter <code>name</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u"""google.com
yahoo.com
cnn.com
toast.net"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), names=['domain'])
#real data
#df = pd.read_csv('temp.csv', names=['domain'])
print (df)
domain
0 google.com
1 yahoo.com
2 cnn.com
3 toast.net
df['matching'] = df['domain']
print (df.to_csv(index=False))
#real data
#df.to_csv('temp.csv', index=False)
domain,matching
google.com,google.com
yahoo.com,yahoo.com
cnn.com,cnn.com
toast.net,toast.net
</code></pre>
<p>You can modify your solution, but you lost first row, because it is read as column name:</p>
<pre><code>df = pd.read_csv(io.StringIO(temp))
print (df)
#real data
#df = pd.read_csv('temp.csv')
google.com
0 yahoo.com
1 cnn.com
2 toast.net
df.columns = ['domain']
df['matching'] = df['domain']
df.to_csv('temp.csv', index=False)
</code></pre>
<p>But you can add parameter <code>header=None</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> and remove second value from <code>df.columns = ['domain', 'matching']</code>, because first <code>DataFrame</code> has only one column:</p>
<pre><code>import pandas as pd
import io
temp=u"""google.com
yahoo.com
cnn.com
toast.net"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), header=None)
print (df)
#real data
#df = pd.read_csv('temp.csv', header=None)
0
0 google.com
1 yahoo.com
2 cnn.com
3 toast.net
df.columns = ['domain']
df['matching'] = df['domain']
df.to_csv('temp.csv', index=False)
</code></pre>
| 1 | 2016-08-15T13:02:07Z | [
"python",
"csv",
"pandas",
"dataframe"
] |
TensorFlow GPU Epoch Optimization? | 38,955,736 | <p>So this code works, and it gives me a 2x boost over CPU only, but I think its possible to get it faster. I think the issue boils down to this area...</p>
<pre><code>for i in tqdm(range(epochs), ascii=True):
sess.run(train_step, feed_dict={x: train, y_:labels})
</code></pre>
<p>I think what happens is that every epoch, we go back to the CPU for information on what to do next (the for loop) and the for loop pushes back to the GPU. Now the GPU can fit the entire data set and more into memory.</p>
<p>Is it possible, and if so, how, to just have it continually crunch 1000 epochs on the GPU without coming back to the CPU to report its status? Or perhaps control how often it reports status. It would be nice to say crunch 1000 epochs on GPU, then see my train vs validation, then crunch again. But doing it between every epoch is not really helpful.</p>
<p>Thanks,</p>
<p>~David</p>
| 0 | 2016-08-15T13:11:30Z | 38,956,678 | <p>The overhead of <code>session.run</code> is around 100 usec, so if you do 10k steps, this overhead adds around 1 second. If this is significant, then you are doing many small iterations, and are incurring extra overhead in other places. IE, GPU kernel launch overhead is 5x larger than CPU (5 usec vs 1 usec).</p>
<p>Using <code>feed_dict</code> is probably a bigger problem and you could speed things up by using queues/input pipelines.</p>
<p>Also, a robust way to figure out where you are spending time is to profile.
IE, to figure out what fraction of time is due to your <code>for</code> loop, you can do cProfile as follows.</p>
<pre><code>python -m cProfile -o timing.prof myscript.py
snakeviz timing.prof
</code></pre>
<p>To figure out where the time goes inside of TensorFlow <code>run</code>, you can do timeline profiling as described <a href="https://github.com/tensorflow/tensorflow/issues/1824#issuecomment-225754659" rel="nofollow">here</a></p>
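<p>For reference, the cProfile output can also be inspected programmatically with the stdlib <code>pstats</code> module instead of snakeviz — a small sketch with a dummy workload standing in for the training loop:</p>

```python
import cProfile
import io
import pstats

def train_step():               # stand-in for sess.run(...)
    return sum(x * x for x in range(1000))

pr = cProfile.Profile()
pr.enable()
for _ in range(200):
    train_step()
pr.disable()

# sort by cumulative time and keep the top 5 entries
out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats('cumulative').print_stats(5)
report = out.getvalue()
assert 'train_step' in report
```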
| 1 | 2016-08-15T14:08:30Z | [
"python",
"tensorflow"
] |
formset validation not working as intended django | 38,955,756 | <p>I have a formset and I want to check each form to make sure my reading object isn't greater than my target object. For some reason I can't get this to work correctly because if the reading is larger than the target it still saves, also my form errors are not showing up as well. How can I check each form to validate that each reading object is less than my target object any help would be greatly appreciated. </p>
<p>Here is my formset validation in views.py </p>
<pre><code>class BaseInspecitonFormSet(BaseInlineFormSet):
def clean(self):
if any(self.errors):
return
reading = []
for form in self.forms:
dim_id = 24 #form.cleaned_data['dimension_id']
reading_data = form.cleaned_data['reading']
target = get_dim_target(dim_id)
for x_reading in reading:
if int(x_reading) > int(target):
print True
raise forms.ValidationError("Reading larger than target")
else:
print False
reading.append(reading_data)
</code></pre>
<p>here is my get_dim_target function </p>
<pre><code>def get_dim_target(dim_id):
target = Dimension.objects.values_list('target', flat=True).filter(id=dim_id)
return target
</code></pre>
<p>Here is my actual formset in views.py </p>
<pre><code>def update_inspection_vals(request, dim_id=None):
dims_data = Dimension.objects.filter(id=dim_id)
can_delete = False
dims = Dimension.objects.get(pk=dim_id)
sheet_data = Sheet.objects.get(pk=dims.sheet_id)
serial_sample_number = Inspection_vals.objects.filter(dimension_id=24).values_list('serial_number', flat=True)[0]
target = Dimension.objects.filter(id=24).values_list('target', flat=True)[0]
title_head = 'Inspect-%s' % dims.description
if dims.ref_dim_id == 1:
inspection_inline_formset = inlineformset_factory(Dimension, Inspection_vals, formset=BaseInspecitonFormSet, can_delete=False, extra=0, fields=('reading',), widgets={
'reading': forms.TextInput(attrs={'cols': 80, 'rows': 20})
})
if request.method == "POST":
formset = inspection_inline_formset(request.POST, request.FILES, instance=dims)
if formset.is_valid():
print True
new_instance = formset.save(commit=False)
for n_i in new_instance:
n_i.created_at = datetime.datetime.now()
n_i.updated_at = datetime.datetime.now()
n_i.save()
else:
print False
formset.errors
formset.non_form_errors()
return HttpResponseRedirect(request.META.get('HTTP_REFERER', '/'))
else:
formset = inspection_inline_formset(instance=dims, queryset=Inspection_vals.objects.filter(dimension_id=dim_id).order_by('serial_number'))
return render(request, 'app/inspection_vals.html',
{
'formset': formset,
'dim_data': dims_data,
'title':title_head,
'dim_description': dims.description,
'dim_target': dims.target,
'work_order': sheet_data.work_order,
'customer_name': sheet_data.customer_name,
'serial_sample_number': serial_sample_number,
})
</code></pre>
<p>Finally, here is my template:</p>
<pre><code> <h1>Inspeciton Values</h1>
<div class="well">
<form method="post">
{% csrf_token %}
<table>
{{ formset.management_form }}
{% for x in formset.forms %}
<tr>
<td>
Sample Number {{ forloop.counter0|add:serial_sample_number }}
</td>
<td>
{{ x }}
</td>
</tr>
{% endfor %}
</table>
<input type="submit" value="Submit Values" class="btn-primary" />
</form>
</div>
</div>
</code></pre>
| 0 | 2016-08-15T13:12:50Z | 38,974,680 | <p>I figured it out: I had to cast both the reading and the target to <code>int</code>, and then it worked like a charm.</p>
<pre><code>class BaseInspecitonFormSet(BaseInlineFormSet):
def clean(self):
#if any(self.errors):
# return
reading = []
target = []
for form in self.forms:
dim_id = 24 #form.cleaned_data['dimension_id']
reading_data = form.cleaned_data['reading']
target_data = get_dim_target(dim_id)
reading.append(reading_data)
target.append(target_data)
x_t = target[0]
for x_r in reading:
if int(x_r) > int(x_t[0]):
print "Reading larger than target"
raise forms.ValidationError("Reading larger than target",code="bad")
else:
print "Reading is good"
</code></pre>
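<p>The cast matters because values pulled out of a form's <code>cleaned_data</code> arrive as strings when the underlying field is text, and Python compares strings lexicographically (character by character) rather than numerically. A quick illustration:</p>

```python
# Lexicographic string comparison can contradict numeric intuition:
print('9' > '10')    # True  -- '9' sorts after '1'
print('100' < '25')  # True  -- '1' sorts before '2'

# After casting, the comparison is numeric and behaves as expected:
print(int('9') > int('10'))    # False
print(int('100') < int('25'))  # False
```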
| 0 | 2016-08-16T12:13:14Z | [
"python",
"django"
] |
Missing letter č in ReportLab pdf created with Python 3.4 | 38,955,791 | <p>A couple of days ago I started to use ReportLab with Python 3.4. It's a pretty nice package, but I have one big problem that I don't know how to overcome.</p>
<p>Could someone check my code and help me get over this? The problem is connected with the letter č in the Slovenian language. In the title there is no problem, but later in the pdf file I cannot see that letter.</p>
<p>My code is below:</p>
<pre><code>from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.rl_config import defaultPageSize
from reportlab.lib.units import inch
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfgen import canvas
from reportlab.pdfbase.ttfonts import TTFont
pdfmetrics.registerFont(TTFont('Vera', 'Vera.ttf'))
PAGE_HEIGHT=defaultPageSize[1]
PAGE_WIDTH=defaultPageSize[0]
styles = getSampleStyleSheet()
Title = "Izračun pokojnine"
bogustext =("""ččččččččččččččččččč""")
def myPage(canvas, doc):
canvas.saveState()
canvas.setFont('Vera',16)
canvas.drawCentredString(PAGE_WIDTH/2.0, PAGE_HEIGHT-108, Title)
canvas.restoreState()
def go():
doc = SimpleDocTemplate("phello.pdf")
Story = [Spacer(1,2*inch)]
style = styles["Normal"]
p = Paragraph(bogustext, style)
Story.append(p)
Story.append(Spacer(1,0.2*inch))
doc.build(Story, onFirstPage=myPage)
go()
</code></pre>
<p>When I make the pdf file I get this:
<a href="http://i.stack.imgur.com/FNVXF.png" rel="nofollow"><img src="http://i.stack.imgur.com/FNVXF.png" alt="enter image description here"></a></p>
<p>Why is there a difference between the letter č in the title and in the text?</p>
<p>Thanks in advance!</p>
<p>Best regards, David</p>
| 1 | 2016-08-15T13:14:49Z | 39,016,389 | <p>The problem is that in the title you are using <code>Vera</code> as the font, while in the text you are using the default font used by ReportLab, which is <code>Times-Roman</code> (if I remember correctly).</p>
<p>The black boxes you're seeing indicate that the current font (<code>Times-Roman</code>) doesn't have a symbol for the character you are trying to display. So to fix it you will have to change the font of the text to a font that does contain a symbol for č. One way to do this is by creating a new style, like this:</p>
<pre><code>from reportlab.lib.styles import ParagraphStyle

my_style = ParagraphStyle('MyNormal',
                          parent=styles['Normal'],
                          fontName='Vera')
</code></pre>
<p>In some cases it might be easier to replace the missing symbols with the symbol form a fallback font in which case you might want to check out <a href="http://stackoverflow.com/questions/35172207/reportlab-working-with-chinese-unicode-characters/35371704#35371704">this answer I posted earlier this year.</a></p>
| 0 | 2016-08-18T10:54:20Z | [
"python",
"python-3.x",
"pdf-generation",
"reportlab"
] |
Separate process for my time consuming task (flask application) | 38,955,792 | <p>I am having a problem similar to the one in <a href="http://stackoverflow.com/questions/18127128/time-out-issues-with-chrome-and-flask">this question</a>. I have a flask application that takes input from the user (sometimes multiple thousand addresses), then processes it (cleans/geocodes), then returns a results page after everything is done. During this time, the page remains loading. This loading time could potentially be up to 15 minutes, depending on the size of the input. The application can process roughly 300 addresses per minute.</p>
<p>I saw one of the answers say that it could potentially be solved by putting all of the work on a separate process and redirecting the user to sort of a 'Loading Please Wait' page, and then after that is complete, redirect the user to the results page. </p>
<p>I was wondering what all of this would entail.</p>
<p>Here is a simplified version of my code, excluding import statements etc.: (I am currently using gunicorn to serve the application)</p>
<pre><code>app = Flask(__name__)
@app.route("/app")
def index():
return """
<form action="/clean" method="POST"><textarea rows="25" cols="100"
name="addresses"></textarea>
<p><input type="submit"></p>
</form></center></body>"""
@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
return cleanAddress(addresses)
def cleanAddress(addresses):
....
....
.... processes each address 1 by 1,
.... then adds it to list to return to the user
....
....
return "\n".join(cleaned) #cleaned is the list containing the output
</code></pre>
<p>I was told that <code>Celery</code> could be used to help me do this.</p>
<p>Here is my current attempt with Celery. I am still getting the same error where the page times out; however, as usual, I can see from the console that the application is still working...</p>
<pre><code>app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'redis://0.0.0.0:5000'
app.config['CELERY_RESULT_BACKEND'] = 'redis://0.0.0.0:5000'
celery = Celery(app.name, broker = app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
return cleanAddress(addresses)
@celery.task
def cleanAddress(addresses):
....
....
.... processes each address 1 by 1,
.... then adds it to list to return to the user
....
....
return "\n".join(cleaned) #cleaned is the list containing the output
</code></pre>
<p>After the application finishes running, I am given this console error:</p>
<pre><code>Traceback (most recent call last):
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 596, in process_request_thread
self.finish_request(request, client_address)
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 331, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 654, in __init__
self.finish()
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 713, in finish
self.wfile.close()
File "/home/my name/anaconda/lib/python2.7/socket.py", line 283, in close
self.flush()
File "/home/my name/anaconda/lib/python2.7/socket.py", line 307, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
</code></pre>
| -1 | 2016-08-15T13:14:52Z | 38,956,739 | <p>Using celery alone leaves out a bit: it allows you to kick off a background process, but it does not solve your problem of things timing out! My recommended solution is to use something like redis or your database to store the results... when someone visits the "kick off this job" url, they get back a message "starting the process, please check '/results' in 15 minutes or so" and a flag in the db is set to "in work".</p>
<p>The celery process kicks off and stores the results in database somewhere. When it finishes, it sets the flag to "finished".</p>
<p>When someone goes to <code>/results</code> they get the results from the db if the flag is set to "finished" or a message saying "still working" if the flag is "in work".</p>
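<p>A framework-free sketch of that flow (all names here are invented for the example; a dict stands in for Redis/the database, and a plain thread stands in for the Celery worker; in the real app <code>start_job</code> would be called from the <code>/clean</code> view and <code>check_job</code> from <code>/results</code>):</p>

```python
import threading

# In production this would be Redis or a database table;
# a module-level dict stands in for the shared status store here.
JOBS = {}

def clean_addresses(job_id, addresses):
    """Background worker: do the slow cleaning, then flip the flag."""
    cleaned = [a.strip().upper() for a in addresses]  # stand-in for real cleaning
    JOBS[job_id] = {"state": "finished", "result": cleaned}

def start_job(addresses):
    """What the kick-off view would do: mark 'in work', launch the worker."""
    job_id = str(len(JOBS) + 1)
    JOBS[job_id] = {"state": "in work", "result": None}
    threading.Thread(target=clean_addresses, args=(job_id, addresses)).start()
    return job_id  # hand this id back to the client

def check_job(job_id):
    """What the results view would do: report the flag, plus results if done."""
    return JOBS[job_id]
```

<p>The client then polls the results endpoint with the returned id until the state flips to "finished".</p>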
| 0 | 2016-08-15T14:11:06Z | [
"python",
"flask"
] |
Separate process for my time consuming task (flask application) | 38,955,792 | <p>I am having a problem similar to the one in <a href="http://stackoverflow.com/questions/18127128/time-out-issues-with-chrome-and-flask">this question</a>. I have a flask application that takes input from the user (sometimes multiple thousand addresses), then processes it (cleans/geocodes), then returns a results page after everything is done. During this time, the page remains loading. This loading time could potentially be up to 15 minutes, depending on the size of the input. The application can process roughly 300 addresses per minute.</p>
<p>I saw one of the answers say that it could potentially be solved by putting all of the work on a separate process and redirecting the user to sort of a 'Loading Please Wait' page, and then after that is complete, redirect the user to the results page. </p>
<p>I was wondering what all of this would entail.</p>
<p>Here is a simplified version of my code, excluding import statements etc.: (I am currently using gunicorn to serve the application)</p>
<pre><code>app = Flask(__name__)
@app.route("/app")
def index():
return """
<form action="/clean" method="POST"><textarea rows="25" cols="100"
name="addresses"></textarea>
<p><input type="submit"></p>
</form></center></body>"""
@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
return cleanAddress(addresses)
def cleanAddress(addresses):
....
....
.... processes each address 1 by 1,
.... then adds it to list to return to the user
....
....
return "\n".join(cleaned) #cleaned is the list containing the output
</code></pre>
<p>I was told that <code>Celery</code> could be used to help me do this.</p>
<p>Here is my current attempt with Celery. I am still getting the same error where the page times out; however, as usual, I can see from the console that the application is still working...</p>
<pre><code>app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'redis://0.0.0.0:5000'
app.config['CELERY_RESULT_BACKEND'] = 'redis://0.0.0.0:5000'
celery = Celery(app.name, broker = app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)
@app.route("/clean", methods=['POST'])
def dothing():
addresses = request.form['addresses']
return cleanAddress(addresses)
@celery.task
def cleanAddress(addresses):
....
....
.... processes each address 1 by 1,
.... then adds it to list to return to the user
....
....
return "\n".join(cleaned) #cleaned is the list containing the output
</code></pre>
<p>After the application finishes running, I am given this console error:</p>
<pre><code>Traceback (most recent call last):
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 596, in process_request_thread
self.finish_request(request, client_address)
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 331, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 654, in __init__
self.finish()
File "/home/my name/anaconda/lib/python2.7/SocketServer.py", line 713, in finish
self.wfile.close()
File "/home/my name/anaconda/lib/python2.7/socket.py", line 283, in close
self.flush()
File "/home/my name/anaconda/lib/python2.7/socket.py", line 307, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
</code></pre>
| -1 | 2016-08-15T13:14:52Z | 38,957,047 | <p>You're not running the task in the background. Use <code>delay</code> or <code>apply_async</code> to run the task in the background. Calling it directly executes it synchronously.</p>
<pre><code>task = cleanAddress.delay(address)
return jsonify(task.id)
</code></pre>
<p>Respond with the task's id, then poll its state with a separate view to determine whether the results are ready.</p>
<pre><code>from celery.states import SUCCESS
task = cleanAddress.AsyncResult(id)
return jsonify(task.state == SUCCESS)
</code></pre>
<p>Storing the state (and the results) requires both the broker and results backends to be configured. By default, there is no results backend configured, so all state is discarded.</p>
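<p>For reference, a sketch of what that configuration might look like when constructing the Celery app (the URLs are placeholders; note that Redis listens on port 6379 by default, not on Flask's port 5000 as in the question's config):</p>

```python
from celery import Celery

# the broker carries task messages; the backend stores states/results --
# both must be configured for AsyncResult(...).state to return anything useful
celery = Celery('myapp',
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')
```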
| 1 | 2016-08-15T14:27:43Z | [
"python",
"flask"
] |
Local binding of "self" to another variable in instance methods acceptable? | 38,955,807 | <p>Is it acceptable according to the standards of best practice to locally bind <em>self</em> in an instance method to another variable? It comes in handy, especially when testing the method. I would also like to know whether this approach is more efficient if instance attributes are retrieved within loops. Here is an example:</p>
<pre><code>class C:
def __init__(self):
self.a = "some attribute"
def some_function(self):
c = self
for _ in range(10):
print(c.a)
</code></pre>
| 1 | 2016-08-15T13:15:40Z | 38,955,891 | <p>This generally offers no benefit; it is actually counter-intuitive in that it confuses readers as to what your intentions are (<code>self</code> is arguably the most recognizable name in Python).</p>
<p>The only up-side of doing this (albeit, a bit differently) is if you want to eliminate the attribute look-up on <code>self</code> by assigning the attribute to a name (<a href="https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Avoiding_dots..." rel="nofollow">noted as a performance tip in the Python Wiki</a>):</p>
<pre><code>def some_function(self):
# we assign 'self.a' to 'a'
# to avoid looking for 'a' in 'self' for every iteration
a = self.a
for _ in range(10):
print(a)
</code></pre>
<p>which will only help a little, reducing execution time by a minuscule degree. Other than that, you really get no benefit from renaming <code>self</code> like that.</p>
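<p>A minimal illustration of that hoisting pattern (the timing benefit is tiny; the point is that the result is identical either way):</p>

```python
class C:
    def __init__(self):
        self.a = "some attribute"

    def lookup_each_time(self):
        total = 0
        for _ in range(1000):
            total += len(self.a)  # attribute lookup on every iteration
        return total

    def lookup_once(self):
        a = self.a  # one attribute lookup, hoisted out of the loop
        total = 0
        for _ in range(1000):
            total += len(a)  # local-name access is slightly cheaper
        return total

c = C()
print(c.lookup_each_time() == c.lookup_once())  # True
```

<p>Timing the two with <code>timeit</code> shows a difference of only a few microseconds per thousand iterations, which is why this is a micro-optimization rather than a reason to rebind names routinely.</p>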
| 3 | 2016-08-15T13:21:06Z | [
"python",
"python-2.7",
"python-3.x"
] |
Preserve index and column order when combining DataFrames | 38,955,825 | <p>Say we have the following DataFrames:</p>
<pre><code>import pandas as pd
import numpy as np
df1_column_array = [['foo', 'bar'],
['A', 'B']]
df1_column_tuple = list(zip(*df1_column_array))
df1_column_header = pd.MultiIndex.from_tuples(df1_column_tuple)
df1_index_array = [['one','two'],
['0', '1']]
df1_index_tuple = list(zip(*df1_index_array))
df1_index_header = pd.MultiIndex.from_tuples(df1_index_tuple)
df1 = pd.DataFrame(np.random.rand(2,2), columns = df1_column_header, index = df1_index_header)
print(df1)
foo bar
A B
one 1 0.755296 0.101329
two 2 0.925653 0.587948
df2_column_array = [['alpha', 'beta'],
['C', 'D']]
df2_column_tuple = list(zip(*df2_column_array))
df2_column_header = pd.MultiIndex.from_tuples(df2_column_tuple)
df2_index_array = [['three', 'four'],
['3', '4']]
df2_index_tuple = list(zip(*df2_index_array))
df2_index_header = pd.MultiIndex.from_tuples(df2_index_tuple)
df2 = pd.DataFrame(np.random.rand(2,2), columns = df2_column_header, index = df2_index_header)
print(df2)
alpha beta
C D
three 3 0.751013 0.957824
four 4 0.879353 0.045079
</code></pre>
<p>I would like to combine these DataFrames to produce:</p>
<pre><code> foo bar alpha beta
A B C D
one 1 0.755296 0.101329 NaN NaN
two 2 0.925653 0.587948 NaN NaN
three 3 NaN NaN 0.751013 0.957824
four 4 NaN NaN 0.879353 0.045079
</code></pre>
<p>When I try concat, the order of the indices is preserved, but not that of the columns:</p>
<pre><code>df_joined = pd.concat([df1,df2])
print(df_joined)
alpha bar beta foo
C B D A
one 1 NaN 0.101329 NaN 0.755296
two 2 NaN 0.587948 NaN 0.925653
three 3 0.751013 NaN 0.957824 NaN
four 4 0.879353 NaN 0.045079 NaN
</code></pre>
<p>When I try join, the order of the columns is preserved, but not that of the indices:</p>
<pre><code>df_joined = df1.join(df2, how = 'outer')
print(df_joined)
foo bar alpha beta
A B C D
four 4 NaN NaN 0.879353 0.045079
one 1 0.755296 0.101329 NaN NaN
three 3 NaN NaN 0.751013 0.957824
two 2 0.925653 0.587948 NaN NaN
</code></pre>
<p>How can I preserve the order of both columns and indices when combining DataFrames? </p>
<p>Edit 1:
Please note: this is sample data. My real world data does not have convenient labels (e.g. 1,2,3,4) to sort on.</p>
<p>Edit 2:
When applying the proposed solution to my real world data, I get the following error:</p>
<pre><code>Exception: cannot handle a non-unique multi-index!
</code></pre>
| 1 | 2016-08-15T13:16:40Z | 38,955,953 | <p>You can use a hack: first concat to get the <code>MultiIndex</code>, and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> the output of a second <code>concat</code>:</p>
<pre><code>idx = pd.concat([df1,df2]).index
df_joined = pd.concat([df1,df2], axis=1).reindex(idx)
print (df_joined)
foo bar alpha beta
A B C D
one 0 0.269298 0.819375 NaN NaN
two 1 0.574702 0.798920 NaN NaN
three 3 NaN NaN 0.436893 0.822041
four 4 NaN NaN 0.757332 0.271900
</code></pre>
<p>A faster solution is to create <code>DataFrames</code> with <code>MultiIndexes</code>, concat them, and take the <code>index</code>:</p>
<pre><code>idx = pd.concat([pd.DataFrame(df1.index, index=df1.index),
pd.DataFrame(df2.index, index=df2.index)]).index
df_joined = pd.concat([df1,df2], axis=1).reindex(idx)
print (df_joined)
foo bar alpha beta
A B C D
one 0 0.007644 0.341335 NaN NaN
two 1 0.332005 0.449688 NaN NaN
three 3 NaN NaN 0.281876 0.883299
four 4 NaN NaN 0.880252 0.061797
</code></pre>
<p>EDIT1:</p>
<p>The problem with the previous solution is that <code>reindex</code> rejects duplicates.
So if the <code>MultiIndex</code> in the columns is not duplicated, you can use:</p>
<pre><code>print(df1)
foo bar
A B
one 0 0.384705 0.932928
0 0.539197 0.519196
print(df2)
alpha beta
C D
three 3 0.957530 0.985926
four 4 0.479828 0.350042
cols = df1.join(df2, how = 'outer').columns
df_joined = pd.concat([df1,df2]).reindex(columns=cols)
print (df_joined)
foo bar alpha beta
A B C D
one 0 0.384705 0.932928 NaN NaN
0 0.539197 0.519196 NaN NaN
three 3 NaN NaN 0.957530 0.985926
four 4 NaN NaN 0.479828 0.350042
</code></pre>
| 1 | 2016-08-15T13:25:14Z | [
"python",
"pandas"
] |
Python/Biopython: How to search for a reference sequence (string) in a sequence with gaps? | 38,955,872 | <p>I am facing the following problem and have not found a solution yet:</p>
<p>I am working on a tool for sequence analysis which uses a file with reference sequences and tries to find one of these reference sequences in a test sequence.</p>
<p>The problem is that the test sequence might contain gaps (for example: <code>ATG---TCA</code>).
I want my tool to find a specific reference sequence as a substring of the test sequence even if the reference sequence is interrupted by gaps (<code>-</code>) in the test sequence.</p>
<p>For example:</p>
<p>one of my reference sequences:
<code>a = TGTAACGAACGG</code></p>
<p>my test sequence:
<code>b = ACCT**TGT--CGAA-GG**AGT</code></p>
<p>(the corresponding part from the reference sequence is given in bold)</p>
<p>I thought about regular expressions and tried to work my way into them, but if I am not wrong, regular expressions only work the other way round. So I would need to include the gap positions as regular expressions in the reference sequence and then map it against the test sequence.
However, I do not know the positions, the length, or the number of gaps in the test sequence.
My idea was to exchange the gap positions (so all <code>-</code>) in the test sequence string for some kind of regular expression, or for a special character which stands for any other character in the reference sequence. Then I would compare the unmodified reference sequences against my modified test sequence...
Unfortunately I have not found a function in Python for string search, or a type of regular expression, which could do this.</p>
<p>Thank you very much!</p>
| 0 | 2016-08-15T13:19:42Z | 38,955,920 | <p>You could do this: </p>
<pre><code>import re
a = 'TGTAACGAACGG'
b = 'ACCTTGT--CGAA-GGAGT'
temp_b = re.sub(r'[\W_]+', '', b) #removes everything that isn't a number or letter
if a in temp_b:
#do something
</code></pre>
| 0 | 2016-08-15T13:23:08Z | [
"python",
"regex",
"biopython",
"string-search",
"sequence-analysis"
] |
Python/Biopython: How to search for a reference sequence (string) in a sequence with gaps? | 38,955,872 | <p>I am facing the following problem and have not found a solution yet:</p>
<p>I am working on a tool for sequence analysis which uses a file with reference sequences and tries to find one of these reference sequences in a test sequence.</p>
<p>The problem is that the test sequence might contain gaps (for example: <code>ATG---TCA</code>).
I want my tool to find a specific reference sequence as a substring of the test sequence even if the reference sequence is interrupted by gaps (<code>-</code>) in the test sequence.</p>
<p>For example:</p>
<p>one of my reference sequences:
<code>a = TGTAACGAACGG</code></p>
<p>my test sequence:
<code>b = ACCT**TGT--CGAA-GG**AGT</code></p>
<p>(the corresponding part from the reference sequence is given in bold)</p>
<p>I thought about regular expressions and tried to work my way into them, but if I am not wrong, regular expressions only work the other way round. So I would need to include the gap positions as regular expressions in the reference sequence and then map it against the test sequence.
However, I do not know the positions, the length, or the number of gaps in the test sequence.
My idea was to exchange the gap positions (so all <code>-</code>) in the test sequence string for some kind of regular expression, or for a special character which stands for any other character in the reference sequence. Then I would compare the unmodified reference sequences against my modified test sequence...
Unfortunately I have not found a function in Python for string search, or a type of regular expression, which could do this.</p>
<p>Thank you very much!</p>
| 0 | 2016-08-15T13:19:42Z | 38,956,003 | <p>There's good news and there's bad news...</p>
<p>Bad news first: What you are trying to do is not easy and regex is really not the way to do it. In a simple case regex could be made to work (maybe) but it will be inefficient and would not scale.</p>
<p>However, the good news is that this is a well-understood problem in bioinformatics (e.g. see <a href="https://en.wikipedia.org/wiki/Sequence_alignment" rel="nofollow">https://en.wikipedia.org/wiki/Sequence_alignment</a>). Even better news is that there are tools in Biopython that can help you. E.g. <a href="http://biopython.org/DIST/docs/api/Bio.pairwise2-module.html" rel="nofollow">http://biopython.org/DIST/docs/api/Bio.pairwise2-module.html</a></p>
<p><strong>EDIT</strong>
From the discussion below it seems you are saying that 'b' is likely to be very long, but assuming 'a' is still short (12 bases in your example above) I think you can tackle this by iterating over every 12-mer in 'b'. I.e. divide 'b' into sequences that are 12 bases long (obviously you'll end up with a lot!). You can then easily compare the two sequences. If you really want to use regex (and I still advise you <em>not</em> to) then you can replace the '-' with a '.' and do a simple match. E.g. </p>
<pre><code>import re
''' a is the reference '''
a = 'TGTAACGAACGG'
''' b is a 12-mer taken from the sequence of interest; in reality you'll be doing this test for every possible 12-mer in the sequence '''
b = 'TGT--CGAA-GG'
b = b.replace('-', '.')
r = re.compile(b)
m = r.match(a)
print(m)
</code></pre>
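<p>Putting the window iteration and the wildcard match together, a full scan might look like the sketch below (the function name and the return convention are my own, not from the question; each gap character in a window becomes a single-character wildcard):</p>

```python
import re

def find_reference(reference, test_seq, gap='-'):
    """Return the start index in test_seq where reference fits, or -1."""
    k = len(reference)
    for i in range(len(test_seq) - k + 1):
        window = test_seq[i:i + k]
        # a gap may stand for any base; escape everything else
        pattern = ''.join('.' if c == gap else re.escape(c) for c in window)
        if re.fullmatch(pattern, reference):
            return i
    return -1

print(find_reference('TGTAACGAACGG', 'ACCTTGT--CGAA-GGAGT'))  # 4
```

<p>This treats each '-' as exactly one missing base; if gaps can also stand for deletions of variable length, a real alignment tool such as <code>pairwise2</code> is the better fit.</p>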
| 0 | 2016-08-15T13:28:53Z | [
"python",
"regex",
"biopython",
"string-search",
"sequence-analysis"
] |
Import variable from parent directory in python package | 38,955,895 | <p>I'm a bit new to python and I'm trying to learn Flask.</p>
<p>My project structure looks like this:</p>
<pre><code>project/
__init__.py
views/
__init__.py
profile.py
</code></pre>
<p>My <code>project/__init__.py</code> looks like this:</p>
<pre><code>from flask import Flask
from views.profile import app
app = Flask(__name__)
</code></pre>
<p>I'm trying to import <code>app</code> from <code>project/__init__.py</code> into <code>profile.py</code>.</p>
<p>I've tried several approaches, but none of them worked.</p>
| 0 | 2016-08-15T13:21:23Z | 38,974,217 | <p>Finally I found an answer which works:</p>
<p>In my profile.py I put this:</p>
<pre><code>import sys, os.path
sys.path.append(os.path.abspath('../'))
from ccs import app
</code></pre>
| 0 | 2016-08-16T11:52:24Z | [
"python",
"import"
] |
How to convert tuple to a list | 38,955,931 | <p>Following one of the accepted solutions to <a href="http://stackoverflow.com/questions/1815316/why-cant-i-join-this-tuple-in-python">this SO question</a>, I tried to convert a tuple into a list by doing this</p>
<pre><code>mylist = list(mytuple)
</code></pre>
<p>However, I am getting a <code>Tuple object is not iterable</code> error. Is there a way to convert a tuple into a list?</p>
<p>Note, not that it really matters for this question, the tuple in question is a <code>Tuple</code> of <code>ast</code> objects, one being type <code>ast.Str</code>, the other being <code>ast.Name</code>. My ultimate goal is to create a string of the two. However, there might be situations where there are more than two elements in the tuple so I need to be able to iterate over the tuple to check what type of <code>ast</code> object each element is.</p>
<p>This is the error message from the python interpreter</p>
<pre><code>TypeError: 'Tuple' object is not iterable
</code></pre>
<p>This is the code that produced the error</p>
<pre><code>if type(foo) is ast.Tuple:
g = list(foo)
</code></pre>
| -4 | 2016-08-15T13:23:42Z | 38,955,982 | <p>You could use</p>
<pre><code>mylist = [x for x in mytuple]
</code></pre>
| -1 | 2016-08-15T13:26:56Z | [
"python"
] |
How to convert tuple to a list | 38,955,931 | <p>Following one of the accepted solutions to <a href="http://stackoverflow.com/questions/1815316/why-cant-i-join-this-tuple-in-python">this SO question</a>, I tried to convert a tuple into a list by doing this</p>
<pre><code>mylist = list(mytuple)
</code></pre>
<p>However, I am getting a <code>Tuple object is not iterable</code> error. Is there a way to convert a tuple into a list?</p>
<p>Note, not that it really matters for this question, the tuple in question is a <code>Tuple</code> of <code>ast</code> objects, one being type <code>ast.Str</code>, the other being <code>ast.Name</code>. My ultimate goal is to create a string of the two. However, there might be situations where there are more than two elements in the tuple so I need to be able to iterate over the tuple to check what type of <code>ast</code> object each element is.</p>
<p>This is the error message from the python interpreter</p>
<pre><code>TypeError: 'Tuple' object is not iterable
</code></pre>
<p>This is the code that produced the error</p>
<pre><code>if type(foo) is ast.Tuple:
g = list(foo)
</code></pre>
| -4 | 2016-08-15T13:23:42Z | 38,956,546 | <p>You don't have a <code>tuple</code>. You have an <a href="https://docs.python.org/2/library/ast.html#node-classes" rel="nofollow"><code>ast.Node</code> subclass</a>. This <em>does matter</em>: AST nodes represent a parse tree, and although <code>ast.Tuple</code> may, in running Python code, result in a <code>tuple()</code> instance, there the relationship ends. A parse tree is an intermediary stage between source code and bytecode, and only <em>executed</em> bytecode produces actual tuples.</p>
<p>AST node classes only have the attributes documented in the <code>ast</code> module documentation and are not iterable. You can access the <code>_fields</code>, <code>lineno</code> and <code>col_offset</code> attributes, where <code>_fields</code> is an iterable.</p>
<p>For a specific <code>ast.Node</code> subclass, consult the <a href="https://docs.python.org/2/library/ast.html#abstract-grammar" rel="nofollow"><em>Abstract Grammar</em> section</a> to see what other types of nodes are passed in for that node, and by what names those other objects can be accessed. For <code>ast.Tuple</code> that grammar is:</p>
<pre><code>Tuple(expr* elts, expr_context ctx)
</code></pre>
<p>so <code>elts</code> and <code>ctx</code> will be available as two attributes, and <code>elts</code> is a sequence as well. Incidentally, the <code>ast.Node._fields</code> attribute names those attributes as well.</p>
<p>If you are looking for the 'contents' of a tuple as parsed into the tree, look no further than <code>elts</code>; this is already a list:</p>
<pre><code>>>> import ast
>>> tree = ast.parse('("foo", bar)')
>>> tree.body[0].value
<_ast.Tuple object at 0x10262f990>
>>> tree.body[0].value._fields
('elts', 'ctx')
>>> tree.body[0].value.elts
[<_ast.Str object at 0x10262f910>, <_ast.Name object at 0x10262f6d0>]
</code></pre>
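<p>From there, the OP's stated goal (building a string out of the tuple's elements) is a matter of walking <code>elts</code>. A sketch (the <code>getattr</code> fallback is my own addition, covering both the old <code>ast.Str</code> nodes and the <code>ast.Constant</code> nodes that newer Pythons produce):</p>

```python
import ast

tree = ast.parse('("foo", bar)')
node = tree.body[0].value  # the ast.Tuple node

parts = []
for elt in node.elts:
    if isinstance(elt, ast.Name):
        parts.append(elt.id)  # a Name contributes its identifier
    else:
        # a string literal: .value on ast.Constant (3.8+), .s on old ast.Str
        parts.append(getattr(elt, 'value', getattr(elt, 's', '')))

result = ' '.join(parts)
print(result)  # foo bar
```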
| 1 | 2016-08-15T14:01:03Z | [
"python"
] |
Connecting Two Models and Only Displaying in Template When they have a relationship | 38,955,938 | <p>I'm working on a site in Django where I have two models (players and seasons). I would like to display the players on a season page, but only when they are part of that season. Currently, this is what I have in my models file:</p>
<p>models.py</p>
<pre><code>from django.db import models
class Player(models.Model):
pid = models.IntegerField(primary_key=True)
firstname = models.CharField(max_length=50)
lastname = models.CharField(max_length=50)
birthdate = models.DateField()
occupation = models.CharField(max_length=50)
city = models.CharField(max_length=50)
state = models.CharField(max_length=2)
def __str__(self):
name = self.firstname + " " + self.lastname
return name
class Season(models.Model):
sid = models.IntegerField(primary_key=True)
seasonname = models.CharField(max_length=50)
location = models.CharField(max_length=50)
#fsd is film start date
fsd = models.DateField()
#fed is film end date
fed = models.DateField()
#asd is air start date
asd = models.DateField()
#aed is air end date
aed = models.DateField()
def __str__(self):
return self.seasonname
class PxS(models.Model):
# Do I need a primary key on this? PROBABLY -- One to many relationship: one player, potentially multiple seasons
pid = models.ForeignKey('Player', on_delete = models.CASCADE,)
sid = models.ForeignKey('Season', on_delete = models.CASCADE,)
# position they finished in
finishposition = models.IntegerField()
# total number of players that season
totalpositions = models.IntegerField()
def __str__(self):
name = "Player: " + str(self.pid) + " | Season: " + str(self.sid)
return name
</code></pre>
<p>Here is my views file for reference:</p>
<p>views.py</p>
<pre><code>from django.shortcuts import render, get_object_or_404, redirect
from django.views.generic import ListView
from .models import Player, Season, PxS
def home(request):
seasons = Season.objects.order_by('sid')
return render(request, 'webapp/home.html', {'seasons': seasons})
def player(request, pk):
player = get_object_or_404(Player, pk=pk)
return render(request, 'webapp/player.html', {'player': player})
def season(request, pk):
season = get_object_or_404(Season, pk=pk)
return render(
request,
'webapp/season.html',
{'season': season, 'players': Player.objects.all()}
)
def seasons(request):
seasons = Season.objects.order_by('sid')
return render(request, 'webapp/seasons.html', {'seasons': seasons})
</code></pre>
<p>Currently, all players display on all season pages. I just can't figure out how to limit them. I've created a PxS model to link players with seasons based on foreign keys (pid and sid) but not sure how to implement them into the view. Am I missing something super obvious? Also, I believe it's a one to many relationship, because one player can be on multiple seasons. Is my thinking on this correct? Any insight is greatly appreciated.</p>
| 0 | 2016-08-15T13:24:12Z | 38,956,024 | <p>PxS is the through table in a many-to-many relationship. You should define that relationship explicitly:</p>
<pre><code>class Season(models.Model):
...
players = models.ManyToManyField('Player', through='PxS')
</code></pre>
<p>Now, in your <code>season</code> view, rather than sending all players explicitly to the template, you can just send the season; then when you iterate through seasons you can just use <code>s.players.all</code> to get the players for that season.</p>
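<p>For example, the season template could then iterate the related players directly (a sketch; the template path is whatever your <code>webapp/season.html</code> actually is):</p>

```html
{# webapp/season.html (sketch): only players linked to this season #}
{% for player in season.players.all %}
  <p>{{ player }}</p>
{% empty %}
  <p>No players this season.</p>
{% endfor %}
```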
<p>(Note, you shouldn't set primary keys explicitly unless you have a very good reason. Django automatically allocates an id field as the pk, so your PxS model <em>does</em> have one; the only thing you've done by defining the <code>sid</code> and <code>pid</code> pks explicitly is a) renaming them and b) disabling the autoincrement, which you certainly shouldn't do.)</p>
| 1 | 2016-08-15T13:30:22Z | [
"python",
"django"
] |
Elementwise and in python list | 38,955,954 | <p>I have two <code>python lists</code> <code>A</code> and <code>B</code> of equal length each containing only boolean values. Is it possible to get a third list <code>C</code> where <code>C[i] = A[i] and B[i]</code> for <code>0 <= i < len(A)</code> without using loop?</p>
<p>I tried following </p>
<p><code>C = A and B</code></p>
<p>but probably it gives the list <code>B</code></p>
<p>I also tried</p>
<p><code>C = A or B</code></p>
<p>which gives first list</p>
<p>I know it can easily be done using for loop in single line like <code>C = [x and y for x, y in zip(A, B)]</code>.</p>
| 0 | 2016-08-15T13:25:21Z | 38,956,148 | <p>I'd recommend you use numpy to use these kind of predicates over arrays. Now, I don't think you can avoid loops to achieve what you want, but... if you don't consider mapping or enumerating as a form of looping, you could do something like this (C1):</p>
<pre><code>A = [True, True, True, True]
B = [False, False, True, True]
C = [x and y for x, y in zip(A, B)]
C1 = map(lambda (i,x): B[i] and x, enumerate(A))
C2 = [B[i] and x for i,x in enumerate(A)]
print C==C1==C2
</code></pre>
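<p>If you do reach for numpy as recommended, the elementwise <em>and</em> itself is a one-liner (a sketch, assuming numpy is installed):</p>

```python
import numpy as np

A = [True, True, True, True]
B = [False, False, True, True]
# np.logical_and is vectorized, so the elementwise loop runs inside numpy
C = np.logical_and(A, B).tolist()
print(C)  # [False, False, True, True]
```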
| 2 | 2016-08-15T13:37:17Z | [
"python",
"python-2.7"
] |
Elementwise and in python list | 38,955,954 | <p>I have two <code>python lists</code> <code>A</code> and <code>B</code> of equal length each containing only boolean values. Is it possible to get a third list <code>C</code> where <code>C[i] = A[i] and B[i]</code> for <code>0 <= i < len(A)</code> without using loop?</p>
<p>I tried following </p>
<p><code>C = A and B</code></p>
<p>but probably it gives the list <code>B</code></p>
<p>I also tried</p>
<p><code>C = A or B</code></p>
<p>which gives first list</p>
<p>I know it can easily be done using for loop in single line like <code>C = [x and y for x, y in zip(A, B)]</code>.</p>
| 0 | 2016-08-15T13:25:21Z | 38,956,162 | <p>You can do it without an explicit loop by using <code>map</code>, which performs the loop internally, at C speed. Of course, the actual <code>and</code> operation is still happening at Python speed, so I don't think it'll save much time (compared to doing essentially the same thing with Numpy, which can not only do the looping at C speed, it can do the <em>and</em> operation at C speed too. Of course, there's also the overhead of converting between native Python lists & Numpy arrays).</p>
<p>Demo:</p>
<pre><code>from operator import and_
a = [0, 1, 0, 1]
b = [0, 0, 1, 1]
c = map(and_, a, b)
print c
</code></pre>
<p><strong>output</strong></p>
<pre><code>[0, 0, 0, 1]
</code></pre>
<p>Note that the <code>and_</code> function performs a bitwise <em>and</em> operation, but that should be ok since you're operating on boolean values.</p>
| 2 | 2016-08-15T13:38:15Z | [
"python",
"python-2.7"
] |
Elementwise and in python list | 38,955,954 | <p>I have two <code>python lists</code> <code>A</code> and <code>B</code> of equal length each containing only boolean values. Is it possible to get a third list <code>C</code> where <code>C[i] = A[i] and B[i]</code> for <code>0 <= i < len(A)</code> without using loop?</p>
<p>I tried following </p>
<p><code>C = A and B</code></p>
<p>but probably it gives the list <code>B</code></p>
<p>I also tried</p>
<p><code>C = A or B</code></p>
<p>which gives first list</p>
<p>I know it can easily be done using for loop in single line like <code>C = [x and y for x, y in zip(A, B)]</code>.</p>
| 0 | 2016-08-15T13:25:21Z | 38,956,381 | <p>Simple answer: you can't.</p>
<p>Except in the trivial way, which is by calling a function that does this for you, using a loop. If you want this kind of nice syntax you can use libraries as suggested: map, numpy, etc. Or you can write your own function. </p>
<p>If what you are looking for is syntactic convenience, Python does not allow overloading operators for built-in types such as list. </p>
<p>Oh, and you can use recursion, if that's "not a loop" for you. </p>
| 0 | 2016-08-15T13:52:35Z | [
"python",
"python-2.7"
] |
Elementwise and in python list | 38,955,954 | <p>I have two <code>python lists</code> <code>A</code> and <code>B</code> of equal length each containing only boolean values. Is it possible to get a third list <code>C</code> where <code>C[i] = A[i] and B[i]</code> for <code>0 <= i < len(A)</code> without using loop?</p>
<p>I tried following </p>
<p><code>C = A and B</code></p>
<p>but probably it gives the list <code>B</code></p>
<p>I also tried</p>
<p><code>C = A or B</code></p>
<p>which gives first list</p>
<p>I know it can easily be done using for loop in single line like <code>C = [x and y for x, y in zip(A, B)]</code>.</p>
| 0 | 2016-08-15T13:25:21Z | 38,956,709 | <blockquote>
<p>Is it possible to get a third list <code>C</code> where <code>C[i] = A[i] and B[i]</code> for <code>0 <= i < len(A)</code> without using loop?</p>
</blockquote>
<p>Kind of:</p>
<pre><code>class AndList(list):
def __init__(self, A, B):
self.A = A
self.B = B
def __getitem__(self, index):
return self.A[index] and self.B[index]
A = [False, False, True, True]
B = [False, True, False, True]
C = AndList(A, B)
print isinstance(C, list) and all(C[i] == (A[i] and B[i])
for i in range(len(A)))
</code></pre>
<p>Prints <code>True</code>.</p>
| 0 | 2016-08-15T14:09:36Z | [
"python",
"python-2.7"
] |
Reduce running time in Texture analysis using GLCM [Python] | 38,955,970 | <p>I am working on 6641x2720 image to generate its feature images (Haralick features like contrast, second moment etc) using a moving GLCM(Grey level Co-occurrence matrix ) window. But it takes forever to run. <strong>The code works fine, as I have tested it on smaller images.</strong> But, I need to make it run faster. Reducing the dimensions to 25% (1661x680) it takes <strong>30 minutes</strong> to run. How can I make it run faster ? Here's the code:</p>
<pre><code>from skimage.feature import greycomatrix, greycoprops
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import time
start_time = time.time()
img = Image.open('/home/student/python/test50.jpg').convert('L')
y=np.asarray(img, dtype=np.uint8)
#plt.imshow(y, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255)
contrast = np.zeros((y.shape[0], y.shape[1]), dtype = float)
for i in range(0,y.shape[0]):
for j in range(0,y.shape[1]):
if i < 2 or i > (y.shape[0]-3) or j < 2 or j > (y.shape[1]-3):
continue
else:
s = y[(i-2):(i+3), (j-2):(j+3)]
glcm = greycomatrix(s, [1], [0], symmetric = True, normed = True )
contrast[i,j] = greycoprops(glcm, 'contrast')
print("--- %s seconds ---" % (time.time() - start_time))
plt.imshow(contrast, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255)
</code></pre>
| 1 | 2016-08-15T13:26:15Z | 38,960,020 | <p>Filling a GLCM is a linear operation: you just go through all the pixels in your image/window and increment the matching matrix cell. Your issue is that you perform this operation from scratch for every pixel, not just once per image. So in your case, if the image dimensions are Width x Height and the window dimensions are NxN, the total complexity is Width x Height x (NxN + FeaturesComplexity), which is really bad.</p>
<p>There is a much faster solution, but it's trickier to implement. The goal is to reduce the number of matrix-filling operations. The idea is to work row by row with a Forward Front and a Backward Front (a principle already used to build fast mathematical morphology operators, see <a href="https://hal.archives-ouvertes.fr/hal-00692897/document" rel="nofollow">here</a> and <a href="https://www.lrde.epita.fr/~theo/papers/geraud.2010.book.pdf" rel="nofollow">here</a>). When you fill the matrix for two consecutive pixels, you reuse most of the pixels; in fact only the ones on the left and right are different, the backward front and forward front respectively.</p>
<p>Here is an illustration for a GLCM window of dimensions 3x3:</p>
<blockquote>
<p>x1 x2 x3 x4</p>
<p>x5 p1 p2 x6</p>
<p>x7 x8 x9 x10</p>
</blockquote>
<p>When the window is centered on p1, you use the pixels: x1, x2, x3, x5, p2, x7, x8, x9. When the window is centered on p2, you use the pixels: x2, x3, x4, p1, x6, x8, x9, x10. So for p1, you use x1, x5 and x7, but you don't use them for p2; all the other pixels are shared.</p>
<p>The idea of the algorithm is to compute the matrix normally for p1, but when you move to p2, you remove the backward front (x1, x5, x7) and you add the forward front (x4, x6, x10). This dramatically reduces the computation time (linear instead of quadratic, as for the mathematical morphology operations). Here is the algorithm:</p>
<ol>
<li>For each row:</li>
<li>----- Fill the matrix (as usually) for the first pixel in the row and you compute the features</li>
<li>----- For each of the following pixels</li>
<li>----- ----- Add the forward front (new pixels in the window)</li>
<li>----- ----- Remove the backward front (pixels no longer in the window)</li>
<li>----- ----- Compute the features</li>
</ol>
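<p>The steps above can be sketched as follows (only horizontal, distance-1 pairs; the helper names are made up for illustration, this is not the skimage pipeline):</p>

```python
import numpy as np

def glcm_pairs(window, levels):
    # Count horizontal (dx=1) co-occurrence pairs inside the window
    g = np.zeros((levels, levels), dtype=int)
    for row in window:
        for a, b in zip(row[:-1], row[1:]):
            g[a, b] += 1
    return g

def slide_right(g, img, top, left, n):
    # Move an n x n window one column right: subtract the pairs that
    # involved the leaving column (backward front) and add the pairs
    # that involve the entering column (forward front).
    for r in range(top, top + n):
        g[img[r, left], img[r, left + 1]] -= 1          # backward front
        g[img[r, left + n - 1], img[r, left + n]] += 1  # forward front
    return g

levels, n = 4, 3
img = np.random.randint(0, levels, (6, 8))
g = glcm_pairs(img[0:n, 0:n], levels)
g = slide_right(g, img, top=0, left=0, n=n)
# The incremental update matches a matrix rebuilt from scratch
assert np.array_equal(g, glcm_pairs(img[0:n, 1:1 + n], levels))
```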
| 0 | 2016-08-15T17:37:18Z | [
"python",
"image-processing",
"textures",
"skimage"
] |
Spike centered on zero in fast Fourier transform | 38,956,089 | <p>I have time associated data that I would like to perform a Fourier transform on. Data is located at <a href="http://pastebin.com/2i0UGJW9" rel="nofollow">http://pastebin.com/2i0UGJW9</a>. The problem is that the data is not uniformly spaced. To solve this, I attempted to interpolate the data and then perform the Fast Fourier Transform.</p>
<pre><code>import numpy as np
from scipy.fftpack import fft, fftfreq, fftshift
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
x = np.linspace(min(times), max(times), len(times))
y = interp1d(times, data)(x)
yf = fft(y)
xf = fftfreq(len(times), (max(times)-min(times))/len(times))
xf = fftshift(xf)
yplot = fftshift(yf)
plt.figure()
plt.plot(xf, 1.0/len(times) * np.abs(yplot))
plt.grid()
plt.show()
</code></pre>
<p>However, this gives a single spike centered on zero instead of an expected frequency graph. How can I get this to give accurate results?</p>
| 0 | 2016-08-15T13:34:31Z | 38,956,777 | <p>As I don't have enough reputation to post a comment, I'll post my suggestions as an answer and hope one of them does lead to answer.</p>
<h2>Interpolation</h2>
<p>It's probably wiser to interpolate onto a grid that is quite a bit finer than what you are doing. Otherwise your interpolation will smooth the noisy data in an unpredictable fashion. If you want to smooth the data, you'd better do this via the FFT (this might be the whole point of the exercise...) </p>
<p>The time data has a minimum interval of 24, you should probably use an interpolation grid of about half that. Better still, the time intervals are not constant, but they are very regular. After typing <code>print times % 24</code> it seems a good grid to use would be <code>np.arange(min(times), max(times)+1, 24)</code>. Note that the <code>+1</code> is just to include the last time too.</p>
<h2>Non-periodic data</h2>
<p>Your data is not periodic, but the FFT treats it as if it were. This means it sees a large jump between the first and last data points. You should look at the FFT documentation on how to tell it to perform an expansion of the data.</p>
<h2>And of course</h2>
<p>The spike at frequency zero is just a consequence of the fact that your signal does not have mean zero. </p>
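<p>That last point is easy to verify with a synthetic signal (a sketch, assuming numpy): subtracting the mean removes the zero-frequency spike while leaving the real spectral peak intact.</p>

```python
import numpy as np

t = np.linspace(0, 1, 256, endpoint=False)
y = 3.0 + np.sin(2 * np.pi * 10 * t)       # offset of 3 -> big DC bin
spec = np.abs(np.fft.rfft(y))
spec0 = np.abs(np.fft.rfft(y - y.mean()))  # subtract the mean first
print(spec[0], spec0[0])   # ~768 vs ~0: the zero-frequency spike is gone
print(np.argmax(spec0))    # 10: the true signal frequency remains
```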
<p>Hope this was of help.</p>
| 3 | 2016-08-15T14:13:30Z | [
"python",
"numpy",
"scipy",
"fft"
] |
Single Frame Stalling in Pygame | 38,956,240 | <p>I've been building a game and a lengthy way through I noticed movement wasn't absolutely smooth and jittered randomly. I created a mockup program that reproduces this: </p>
<pre><code>import pygame
import sys
pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((800,400))
def create_lines():
screen.fill((255,255,255))
for i in range(0, 300):
pygame.draw.line(screen, (255,0,0), ((i * 15) , 0 ), ((i * 15), 1600),(1))
class moving_object(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.image = pygame.image.load("C:/Users/etc/7.png")
self.rect = self.image.get_rect()
self.rect.x = 0
self.rect.y = 200
def update(self):
self.rect.x += 2
def inputs():
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
#create object, add to group
sprite_group = pygame.sprite.Group()
block = moving_object()
sprite_group.add(block)
while True:
inputs()
create_lines()
sprite_group.draw(screen)
clock.tick(30)
pygame.display.flip()
sprite_group.update()
</code></pre>
<p>Here is a webm demonstrating the issue:
<a href="http://puu.sh/qC8cF/f05eff8a14.webm" rel="nofollow">http://puu.sh/qC8cF/f05eff8a14.webm</a></p>
<p>There is a point where the movement isn't updated for that frame and the next frame after that it catches itself up, doing the missed frame and the additional present frame. This hiccuping is really concerning in a game that revolves around dodging. </p>
<p>I don't know how to fix it, tried changing the order the functions are called, especially tick, flip, draw and update. This isn't screen tearing either, it seems a frame just doesn't happen yet is made up for on the call after so the math is still being done properly... </p>
| 2 | 2016-08-15T13:43:22Z | 38,957,450 | <p>Not sure if I've understood correctly, but what about if you tweak these 2 lines?</p>
<p><code>self.rect.x += 1</code> instead <code>self.rect.x += 2</code></p>
<p><code>clock.tick(60)</code> instead <code>clock.tick(30)</code></p>
<p>Also, you could try making your animation time-based, for example:</p>
<pre><code>import pygame
import sys
import math
pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((800, 600))
def create_lines():
screen.fill((255, 255, 255))
for i in range(0, 300):
pygame.draw.line(
screen, (255, 0, 0), ((i * 15), 0), ((i * 15), 1600), (1))
class moving_object(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.image = pygame.image.load("C:/Users/etc/7.png")
self.rect = self.image.get_rect()
self.rect.x = 0
self.rect.y = 200
def inputs():
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
# create object, add to group
sprite_group = pygame.sprite.Group()
block = moving_object()
sprite_group.add(block)
t = 0
while True:
timedelta = clock.tick(60)
timedelta /= 500
inputs()
create_lines()
block.rect.x = 50 * math.cos(t)
block.rect.y = 100 + 50 * math.sin(t)
sprite_group.draw(screen)
pygame.display.flip()
sprite_group.update()
t += timedelta
</code></pre>
| 0 | 2016-08-15T14:50:32Z | [
"python",
"pygame"
] |
How to find index of an exact word in a string in Python | 38,956,274 | <pre><code>word = 'laugh'
string = 'This is laughing laugh'
index = string.find ( word )
</code></pre>
<p>index is 8, should be 17.
I looked around hard, but could not find an answer.</p>
| 0 | 2016-08-15T13:45:27Z | 38,956,318 | <p>Strings in code are not separated by spaces. If you want to find the space, you must include the space in the word you are searching for. You may find it would actually be more efficient for you to split the string into words then iterate, e.g:</p>
<pre><code>sentence = "This is a laughing laugh"   # avoid shadowing the built-in str
words = sentence.split(" ")
for word in words:
    if word == "laugh":
        DoStuff()   # placeholder for whatever should happen on a match
</code></pre>
<p>As you iterate you can add the length of the current word to an index and when you find the word, break from the loop. Don't forget to account for the spaces!</p>
| 0 | 2016-08-15T13:48:58Z | [
"python",
"find",
"word"
] |
How to find index of an exact word in a string in Python | 38,956,274 | <pre><code>word = 'laugh'
string = 'This is laughing laugh'
index = string.find ( word )
</code></pre>
<p>index is 8, should be 17.
I looked around hard, but could not find an answer.</p>
| 0 | 2016-08-15T13:45:27Z | 38,956,342 | <p>You should use regex (with word boundary) as <code>find</code> returns the <em>first</em> occurrence. Then use the <code>start</code> attribute of the <code>match</code> object to get the starting index.</p>
<pre><code>import re
string = 'This is laughing laugh'
a = re.search(r'\b(laugh)\b', string)
print(a.start())
>> 17
</code></pre>
<p>You can find more info on how it works <a href="https://regex101.com/r/jY0vY1/1" rel="nofollow">here</a>.</p>
| 2 | 2016-08-15T13:50:21Z | [
"python",
"find",
"word"
] |
How to find index of an exact word in a string in Python | 38,956,274 | <pre><code>word = 'laugh'
string = 'This is laughing laugh'
index = string.find ( word )
</code></pre>
<p>index is 8, should be 17.
I looked around hard, but could not find an answer.</p>
| 0 | 2016-08-15T13:45:27Z | 38,956,495 | <p>Here is one approach without regular expressions:</p>
<pre><code>word = 'laugh'
string = 'This is laughing laugh'
words = string.split(' ')
word_index = words.index(word)
index = sum(len(x) + 1 for i, x in enumerate(words)
if i < word_index)
=> 17
</code></pre>
<p>This splits the string into words, finds the index of the matching word, and then sums up the lengths of all preceding words plus one blank character per separator.</p>
<p>You should of course use regular expressions for performance and convenience. The equivalent using the <code>re</code> module is as follows:</p>
<pre><code>r = re.compile(r'\b%s\b' % word, re.I)
m = r.search(string)
index = m.start()
</code></pre>
<p>Here <code>\b</code> means <em>word boundary</em>, see the <a href="https://docs.python.org/2/library/re.html?highlight=re#module-re" rel="nofollow"><code>re</code></a> documentation. Regex can be quite daunting. A great way to test and find regular expressions is using <a href="https://regex101.com/r/oP1nO1/1" rel="nofollow">regex101.com</a></p>
| 0 | 2016-08-15T13:58:44Z | [
"python",
"find",
"word"
] |
How to find index of an exact word in a string in Python | 38,956,274 | <pre><code>word = 'laugh'
string = 'This is laughing laugh'
index = string.find ( word )
</code></pre>
<p>index is 8, should be 17.
I looked around hard, but could not find an answer.</p>
| 0 | 2016-08-15T13:45:27Z | 38,956,536 | <p>try this:</p>
<pre><code>word = 'laugh'
string = 'This is laughing laugh'.split(" ")
index = string.index(word)
</code></pre>
<p>This makes a list containing all the words and then searches for the relevant word. You can then sum the lengths of the elements before that index (plus one per separating space) to recover the character position:</p>
<pre><code>position = 0
for i,word in enumerate(string):
position += (1 + len(word))
if i>=index:
break
print position
</code></pre>
<p>Hope this helps.</p>
| 0 | 2016-08-15T14:00:43Z | [
"python",
"find",
"word"
] |
How to save multiple data sets with the sio.savemat function? | 38,956,278 | <p>I have created a code in python that reads multiple files (reads specific parts of them) and my goal is to save all these data points to matlab format. This is my 'main program' (all the functions are defined before):</p>
| 0 | 2016-08-15T13:45:42Z | 38,957,123 | <p>This <a href="http://stackoverflow.com/questions/29040276/how-to-append-to-mat-file-using-scipy-io-savemat">question</a> is about how to append data using savemat. It looks like that asker says you can only append data to the existing dictionary key though, so it may not solve your problem.</p>
<p>What I recommend is adding the data to a dictionary and then saving it all to a mat file once, at the end.</p>
<p>Initialize a dictionary outside the loop</p>
<pre><code>myDictionary = {}
</code></pre>
<p>You have one line of code that creates the dictionary and saves it.</p>
<pre><code>sio.savemat('Argo_Trajectories.mat', {'data':data})
</code></pre>
<p>Replace that line with two: create a unique key rather than naming them all data and add the data to the dictionary</p>
<pre><code>newkey = 'data%d' % ifl
myDictionary[newkey] = data
</code></pre>
<p>After the loop is finished, save the dictionary (this can be before or after the 2nd <code>print(z)</code> in your function</p>
<pre><code>sio.savemat('savename.mat',myDictionary)
</code></pre>
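<p>Putting those pieces together (a sketch with stand-in data, assuming scipy is available; <code>ifl</code> plays the role of your file-loop counter, and a temp directory is used just to keep the sketch self-contained):</p>

```python
import os
import tempfile
import numpy as np
import scipy.io as sio

myDictionary = {}
for ifl in range(3):                      # stand-in for the file loop
    data = np.arange(ifl, ifl + 5)        # stand-in for one file's data
    myDictionary['data%d' % ifl] = data   # unique key per file
path = os.path.join(tempfile.gettempdir(), 'Argo_Trajectories.mat')
sio.savemat(path, myDictionary)           # save everything once, at the end
loaded = sio.loadmat(path)
print(sorted(k for k in loaded if k.startswith('data')))  # ['data0', 'data1', 'data2']
```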
| 2 | 2016-08-15T14:32:27Z | [
"python",
"matlab",
"loops",
"scipy",
"save"
] |
IP Spoofing in python 3 | 38,956,401 | <p>Is it possible to send a spoofed packet with another ip source?
I've searched on the net and I found out that I need to use scapy library. I have this script that I found:</p>
<pre><code>import sys
from scapy.all import *
if len(sys.argv) != 4:
print ("Usage: ./spoof.py <target> <spoofed_ip> <port>")
sys.exit(1)
target = sys.argv[1]
spoofed_ip = sys.argv[2]
port = int(sys.argv[3])
p1=IP(dst=target,src=spoofed_ip)/TCP(dport=port,sport=5000,flags='S')
send(p1)
print ("Okay, SYN sent. Enter the sniffed sequence number now: ")
seq=sys.stdin.readline()
print ("Okay, using sequence number " + seq)
seq=int(seq[:-1])
p2=IP(dst=target,src=spoofed_ip)/TCP(dport=port,sport=5000,flags='A',
ack=seq+1,seq=1)
send(p2)
print ("Okay, final ACK sent. Check netstat on your target :-)")
</code></pre>
<p>But I don't understand what "Enter the sniffed sequence number now:" means.</p>
<p>Also, is it possible to avoid using scapy, and use socket library instead? If yes, can you tell me the way?</p>
<p>Thanks! I'm a newbie</p>
| 0 | 2016-08-15T13:53:40Z | 39,053,296 | <p>solved on my own using scapy library:</p>
<pre><code>from scapy.all import *
A = "192.168.1.254" # spoofed source IP address
B = "192.168.1.105" # destination IP address
C = RandShort() # source port
D = 80 # destination port
payload = "yada yada yada" # packet payload
while True:
spoofed_packet = IP(src=A, dst=B) / TCP(sport=C, dport=D) / payload
send(spoofed_packet)
</code></pre>
| 0 | 2016-08-20T10:52:24Z | [
"python",
"python-3.x",
"ip",
"spoofing"
] |
django rest change json response design | 38,956,403 | <p>I'm working with the Django rest tutorial and I see that all the responses are return only the models fields, for example:</p>
<pre><code>[
{
"id": 1,
"title": "",
"code": "foo = \"bar\"\n",
"linenos": false,
"language": "python",
"style": "friendly"
}]
</code></pre>
<p>My question is how i can design the response for example:</p>
<pre><code> users:[
{
"id": 1,
"title": "",
"code": "foo = \"bar\"\n",
"linenos": false,
"language": "python",
"style": "friendly"
}]
</code></pre>
| 0 | 2016-08-15T13:53:49Z | 38,957,641 | <p>From what I can tell, you are using the following code, which I have modified to do what you have requested. However, I would suggest that unless you have very good reason to do so, to not modify how the response is given. It leads to complications in creating multiple objects in the future for ModelViewSets, and all list() methods would return different values.</p>
<p>I really don't like the below, but it does answer the question. In addition, you could change the serializer to be a nested serializer, but that's another question on its own.</p>
<pre><code>from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from snippets.models import Snippet
from snippets.serializers import SnippetSerializer
@api_view(['GET', 'POST'])
def snippet_list(request):
"""
List all snippets, or create a new snippet.
"""
if request.method == 'GET':
snippets = Snippet.objects.all()
serializer = SnippetSerializer(snippets, many=True)
return Response({'users': serializer.data})
elif request.method == 'POST':
# Assuming we have modified the below - we have to hack around it
serializer = SnippetSerializer(data=request.data['users'])
if serializer.is_valid():
serializer.save()
return Response({'users': serializer.data}, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>The above should give you the below response:</p>
<pre><code>{
"users": [
{
"id": 1,
"title": "",
"code": "foo=\"bar\"\n",
"linenos": false,
"language": "python",
"style": "friendly"
},
{
"id": 2,
"title": "",
"code": "print\"hello, world\"\n",
"linenos": false,
"language": "python",
"style": "friendly"
}
]
}
</code></pre>
| 0 | 2016-08-15T15:01:25Z | [
"python",
"django",
"django-views",
"django-rest-framework",
"django-serializer"
] |
Extract name entities from web domain address | 38,956,512 | <p>I am working on an NLP problem (in Python 2.7) to extract the location of a news report from the text inside the report. For this task I am using the Clavin API which works well enough. </p>
<p>However I've noticed that the name of the location area is often mentioned in the URL of the report itself and I'd like to find a way to extract this entity from a domain name, to increase the level of accuracy from Clavin by providing an additional named entity in the request.</p>
<p>In an ideal world I'd like to be able to give this input:
<code>
www.britainnews.net
</code></p>
<p>and return this, or a similar, output:
<code>
[www,britain,news,net]
</code></p>
<p>Of course I can use .split() feature to separate the <code>www</code> and <code>net</code> tokens which are unimportant, however I'm stumped as to how to split the middle phrase without an intensive dictionary lookup.</p>
<p>I'm not asking for someone to solve this problem or write any code for me - but this is an open call for suggestions as to the ideal NLP library (if one exists) or any ideas as to how to solve this problem. </p>
| 1 | 2016-08-15T13:59:29Z | 38,957,790 | <p>Check - <a href="http://nbviewer.jupyter.org/url/norvig.com/ipython/How%20to%20Do%20Things%20with%20Words.ipynb" rel="nofollow">Word Segmentation Task</a> from <a href="http://norvig.com/" rel="nofollow">Norvig</a>'s work.</p>
<pre><code>from __future__ import division
from collections import Counter
import re, nltk
WORDS = nltk.corpus.reuters.words()
COUNTS = Counter(WORDS)
def pdist(counter):
"Make a probability distribution, given evidence from a Counter."
N = sum(counter.values())
return lambda x: counter[x]/N
P = pdist(COUNTS)
def Pwords(words):
"Probability of words, assuming each word is independent of others."
return product(P(w) for w in words)
def product(nums):
"Multiply the numbers together. (Like `sum`, but with multiplication.)"
result = 1
for x in nums:
result *= x
return result
def splits(text, start=0, L=20):
"Return a list of all (first, rest) pairs; start <= len(first) <= L."
return [(text[:i], text[i:])
for i in range(start, min(len(text), L)+1)]
def segment(text):
"Return a list of words that is the most probable segmentation of text."
if not text:
return []
else:
candidates = ([first] + segment(rest)
for (first, rest) in splits(text, 1))
return max(candidates, key=Pwords)
print segment('britainnews') # ['britain', 'news']
</code></pre>
<p>More examples at : <a href="http://nbviewer.jupyter.org/url/norvig.com/ipython/How%20to%20Do%20Things%20with%20Words.ipynb" rel="nofollow">Word Segmentation Task</a></p>
| 0 | 2016-08-15T15:09:08Z | [
"python",
"string",
"machine-learning",
"nlp"
] |
Shared hosting setup for on Apache for Django 1.10 | 38,956,640 | <p>We're looking for a clean and isolated way to host several Django sites on a single Apache with vhosts on Ubuntu 14.04.</p>
<p>Following the docs <a href="https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/#using-mod-wsgi-daemon-mode" rel="nofollow">https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/#using-mod-wsgi-daemon-mode</a>
and <a href="http://stackoverflow.com/questions/27832777/where-should-wsgipythonpath-point-in-my-virtualenv/27881007#27881007">Where should WSGIPythonPath point in my virtualenv?</a> , we set the following setup : </p>
<p>Have a global virtualenv for mod_wsgi</p>
<pre><code>virtualenv -p /usr/bin/python3 /home/admin/vhosts_venv
. vhosts_venv/bin/activate
pip install mod-wsgi
sudo /home/admin/vhosts_venv/bin/mod_wsgi-express install-module
sudo vi /etc/apache2/mods-available/wsgi_express.load
</code></pre>
<p>added :</p>
<pre><code>LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi-py34.cpython-34m.so
</code></pre>
<p>Then have a vhost venv and a basic app :</p>
<pre><code>virtualenv -p /usr/bin/python3 /home/admin/vhost1_venv
. vhost1_venv/bin/activate
pip install Django
pip install PyMySQL
django-admin startproject vhost1
cd vhost1
python manage.py startapp main
</code></pre>
<p>Setup host resolution with :</p>
<pre><code>sudo vi /etc/hosts
</code></pre>
<p>updated :</p>
<pre><code>127.0.0.1 localhost vhost1.example.com
</code></pre>
<p>Setup Apache vhost with :</p>
<pre><code><VirtualHost vhost1.example.com:80>
ServerName vhost1.example.com
ServerAlias example.com
ServerAdmin webmaster@localhost
#DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/vhost1_error.log
CustomLog ${APACHE_LOG_DIR}/vhost1_access.log combined
WSGIProcessGroup vhost1.example.com
WSGIScriptAlias / /home/admin/vhost1/vhost1/wsgi.py process-group=vhost1.example.com
WSGIDaemonProcess vhost1.example.com user=www-data group=www-data threads=25 python-path=/home/admin/vhost1:/home/admin/vhost1_venv/lib/python3.4/site-packages:/home/admin/vhosts_venv/lib/python3.4/site-packages
<Directory /home/admin/vhost1>
<Files wsgi.py>
<IfVersion < 2.3>
Order deny,allow
Allow from all
</IfVersion>
<IfVersion >= 2.3>
Require all granted
</IfVersion>
</Files>
</Directory>
</VirtualHost>
</code></pre>
<p>Enabled everything with :</p>
<pre><code>sudo a2enmod wsgi_express
sudo a2ensite vhost1
sudo service apache2 restart
</code></pre>
<p>When testing it, we get 2 answers for a single curl request, delivered in 2 timing (sometime like 0.5s between each) :</p>
<pre><code>curl vhost1.example.com
</code></pre>
<blockquote>
<p>It worked! Congratulations on your first Django-powered page.</p>
<p>Of course, you haven't actually done any work yet. Next, start your
first app by running python manage.py startapp [app_label].</p>
<p>You're seeing this message because you have DEBUG = True in your
Django settings file and you haven't configured any URLs. Get to work!</p>
</blockquote>
<p>Directly followed by:</p>
<blockquote>
<p>Internal Server Error</p>
<p>The server encountered an internal error or misconfiguration and was
unable to complete your request.</p>
<p>Please contact the server administrator at webmaster@localhost to
inform them of the time this error occurred, and the actions you
performed just before this error.</p>
<p>More information about this error may be available in the server error
log. Apache/2.4.7 (Ubuntu) Server at vhost1.example.com Port 80</p>
</blockquote>
<p>In <code>/var/log/apache2/error.log</code>, we get:</p>
<blockquote>
<p>[Mon Aug 15 15:37:42.754139 2016] [core:notice] [pid 18622:tid
140151787534208] AH00051: child pid 18717 exit signal Segmentation
fault (11) , possible coredump in /etc/apache2</p>
</blockquote>
<p>and in <code>/var/log/apache2/vhost1_access.log</code>:</p>
<blockquote>
<p>127.0.0.1 - - [15/Aug/2016:15:37:42 +0200] "GET / HTTP/1.1" 500 2593 "-" "curl/7.35.0"</p>
</blockquote>
<p>How to set it up correctly?</p>
| 1 | 2016-08-15T14:05:49Z | 39,414,058 | <p>To summarize Graham Dumpleton's answer:</p>
<ul>
<li>mod_wsgi 4.5.4 was broken with Python3. 4.5.5 fixed it.</li>
<li>It's possible to set up a Python virtualenv for the virtual hosting (with the <code>mod_wsgi</code> package), but it is recommended to point to the Python virtualenv using the <code>python-home</code> option and not <code>python-path</code>.</li>
<li>It's recommended to set <code>WSGIRestrictEmbedded On</code> according to <a href="http://blog.dscpl.com.au/2009/11/save-on-memory-with-modwsgi-30.html" rel="nofollow">http://blog.dscpl.com.au/2009/11/save-on-memory-with-modwsgi-30.html</a></li>
</ul>
<p><code>/etc/apache2/mods-available/wsgi_express.conf</code> looks like :</p>
<pre><code>WSGIRestrictEmbedded On
</code></pre>
<p><code>/etc/apache2/sites-available/vhost1.conf</code> looks like :</p>
<pre><code><VirtualHost vhost1.example.com:80>
    ServerName vhost1.example.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/vhost1_error.log
    CustomLog ${APACHE_LOG_DIR}/vhost1_access.log combined
    WSGIDaemonProcess vhost1.example.com threads=15 python-home=/home/admin/vhost1_venv python-path=/home/admin/vhost1
    WSGIScriptAlias / /home/admin/vhost1/vhost1/wsgi.py process-group=vhost1.example.com application-group=%{GLOBAL}
    <Directory /home/admin/vhost1>
        <Files wsgi.py>
            <IfVersion < 2.3>
                Order deny,allow
                Allow from all
            </IfVersion>
            <IfVersion >= 2.3>
                Require all granted
            </IfVersion>
        </Files>
    </Directory>
</VirtualHost>
</code></pre>
| 0 | 2016-09-09T14:35:47Z | [
"python",
"django",
"apache",
"mod-wsgi",
"shared-hosting"
] |
How to automatically call for a next iterator inside a loop | 38,956,684 | <p>I recently started scripting and am having difficulties with nested loops. I can't get the iterator object from the first loop passed as input to the second one so that it runs properly.</p>
<p>The problem itself is quite simple. I would like to change the second item ('20') on row 1 in my data to a number from the range and create a file.
So if the first number from the range is 14 then the first line of the file is (L,14,0,0,0,0) and gets a name data1.txt.</p>
<p><strong>Data:</strong></p>
<p>L,1,5.827,20,-4.705,0<br>
L,20,0,0,0,0<br>
L,12,15,0,-6,0</p>
<p><strong>Original Script:</strong></p>
<pre><code>import re
from itertools import islice
import numpy as np

x = np.arange(14,30.5,0.5)
size = x.size
with open('data.txt', 'r') as line:
    for line in islice(line, 1, 2):
        re.sub(r'\s', '', line).split(',')
        nline = line[:2] + line[3:]
x = iter(x)
y = next(x)
for i in x:
    nline = nline[:2] + str(y) + nline[3:]
    with open('data.txt', 'r') as file:
        data = file.readlines()
    data[1] = nline
    for i in range(1,size):
        with open('data%i.txt' %i, 'w') as file:
            file.writelines(data)
</code></pre>
<p>EDITED:</p>
<p>I've made some progress with my script and I'm almost there.</p>
<p>After the first loop I have the output that I need (33 cases). All I would like to do now is write them to 33 unique files, named data1 to data33. What seems to happen, however, is that the second loop iterates through the first loop another 33 times and creates 1089 cases. So what ends up in the files is only the last line of the first loop.</p>
<p>Any suggestions on how to allow the second loop for file creation but disable it for the data?</p>
<p><strong>Updated Script:</strong></p>
<pre><code>import re
from itertools import islice
import numpy as np

x = np.arange(14,30.5,0.5)
size = x.size
with open('data.txt', 'r') as line:
    for line in islice(line, 1, 2):
        re.sub(r'\s', '', line).split(',')
        for i in x:
            y = str(i)
            nline = line[:2] + y + line[4:]
            with open('data.txt', 'r') as file:
                data = file.readlines()
            data[1] = nline
            for i in range(1,size+1):
                with open('data%i.txt' %i, 'w') as file:
                    file.writelines(data)
            print data
</code></pre>
| 1 | 2016-08-15T14:08:37Z | 38,958,573 | <p>It seems you are trying to concatenate a string to a list in <code>nline = nline[:2] + str(y) + nline[3:]</code>. This will yield a type error.</p>
<p>Also, <code>nline[:2]</code> gets the first 2 parts of the list, so you want to change the slices to <code>nline[:1]</code> and <code>nline[2:]</code>.</p>
<p>Try something along the lines of:</p>
<pre><code>import re
from itertools import islice
import numpy as np

x = np.arange(14,30.5,0.5)
size = x.size
with open('data.txt', 'r') as line:
    for line in islice(line, 1, 2):
        re.sub(r'\s', '', line).split(',')
        nline = line[:1] + line[2:]  # not sure what this does, but this might be wrong, change accordingly
x = iter(x)
y = next(x)
temp = str(y)
for i in x:
    nline = nline[:1] + temp + nline[2:]
    with open('data.txt', 'r') as file:
        data = file.readlines()
    data[1] = nline
    for i in range(1,size):
        with open('data%i.txt' %i, 'w') as file:
            file.writelines(data)
</code></pre>
| 1 | 2016-08-15T16:00:03Z | [
"python",
"loops",
"iteration"
] |
Object detection and segmentation Using Python | 38,956,745 | <p>I am an undergraduate student. I am new to image processing and python.</p>
<p>I have many images of plant samples and their descriptions (called labels, which are stuck on the samples) as shown in the figure below. I need to automatically segment only those labels from the sample.</p>
<p>I tried thresholding based on colour, but it failed. Could you please suggest me an example to do this task. I need some ideas or codes to make it completely automatic segmentation.</p>
<p>Please help me if you are experts in image processing and Python, I need your help to complete this task.</p>
<p>The rectangle is detected at the top left, but it should be at the bottom right. Could you please tell me where my mistake is and how to correct it.
I have also given the code below.</p>
| -2 | 2016-08-15T14:11:20Z | 38,957,099 | <p>You can try template matching with a big white rectangle to identify the area where the information is stored.</p>
<p><a href="http://docs.opencv.org/3.1.0/d4/dc6/tutorial_py_template_matching.html#gsc.tab=0" rel="nofollow">http://docs.opencv.org/3.1.0/d4/dc6/tutorial_py_template_matching.html#gsc.tab=0</a></p>
<p>Once that is done, you will be able to recognize characters in this area... Save a small subimage, and with a tool like pytesseract you will be able to read the characters.</p>
<p><a href="https://pypi.python.org/pypi/pytesseract" rel="nofollow">https://pypi.python.org/pypi/pytesseract</a></p>
<p>There are other OCR tools here with some examples:
<a href="https://saxenarajat99.wordpress.com/2014/10/04/optical-character-recognition-in-python/" rel="nofollow">https://saxenarajat99.wordpress.com/2014/10/04/optical-character-recognition-in-python/</a></p>
<p>Good luck !</p>
| 0 | 2016-08-15T14:30:49Z | [
"python",
"opencv",
"image-processing",
"ocr",
"image-segmentation"
] |
Object detection and segmentation Using Python | 38,956,745 | <p>I am an undergraduate student. I am new to image processing and python.</p>
<p>I have many images of plant samples and their descriptions (called labels, which are stuck on the samples) as shown in the figure below. I need to automatically segment only those labels from the sample.</p>
<p>I tried thresholding based on colour, but it failed. Could you please suggest me an example to do this task. I need some ideas or codes to make it completely automatic segmentation.</p>
<p>Please help me if you are experts in image processing and Python, I need your help to complete this task.</p>
<p>The rectangle is detected at the top left, but it should be at the bottom right. Could you please tell me where my mistake is and how to correct it.
I have also given the code below.</p>
| -2 | 2016-08-15T14:11:20Z | 38,972,226 | <p>Why use a color threshold? I tried this with ImageJ and got nice results. I just converted the image to 8-bit and <a href="http://i.stack.imgur.com/5uKTg.png" rel="nofollow">binarised</a> it using a fixed threshold (166 in this case). You can choose the best threshold from the image <a href="http://i.stack.imgur.com/Kz0mA.png" rel="nofollow">histogram</a>.
Then you just need to find your white rectangle region and read the characters like <a href="http://stackoverflow.com/users/3241752/frsecm">FrsECM</a> suggested.</p>
<p>Here's an example in c++:</p>
<pre><code>#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>

using namespace cv;

/// Global variables
int threshold_nvalue = 166;
const int thresh_increment = 2;
int threshold_type = THRESH_BINARY;//1
int const max_value = 255;
int const morph_size = 3;
int const min_blob_size = 1000;

Mat src, src_resized, src_gray, src_thresh, src_morph;

/**
 * @function main
 */
int main(int argc, char** argv)
{
    /// Load an image
    src = imread("C:\\Users\\phili\\Pictures\\blatt.jpg", 1);
    //Resize for displaying it properly
    resize(src, src_resized, Size(600, 968));
    /// Convert the image to Gray
    cvtColor(src_resized, src_gray, COLOR_RGB2GRAY);
    /// Region of interest
    Rect label_rect;
    //Binarization using fixed threshold
    threshold(src_gray, src_thresh, threshold_nvalue, max_value, threshold_type);
    //Erase small objects using morphology
    Mat element = getStructuringElement(0, Size(2 * morph_size + 1, 2 * morph_size + 1), Point(morph_size, morph_size));
    morphologyEx(src_thresh, src_morph, MORPH_CLOSE, element);
    //find white objects and their contours
    std::vector<std::vector<Point> > contours;
    std::vector<Vec4i> hierarchy;
    findContours(src_morph, contours, CV_RETR_TREE, CV_CHAIN_APPROX_NONE, Point(0, 0));
    for (std::vector<std::vector<Point> >::iterator it = contours.begin(); it != contours.end(); ++it)
    {
        //just big blobs
        if (it->size() > min_blob_size)
        {
            //approx contour and check for rectangle
            std::vector<Point> approx;
            approxPolyDP(*it, approx, 0.01*arcLength(*it, true), true);
            if (approx.size() == 4)
            {
                //just for visualization
                drawContours(src_resized, approx, 0, Scalar(0, 255, 255), -1);
                //bounding rect for ROI
                label_rect = boundingRect(approx);
                //exit loop
                break;
            }
        }
    }
    //Region of interest
    Mat label_roi = src_resized(label_rect);
    //OCR comes here...
}
</code></pre>
| 0 | 2016-08-16T10:15:04Z | [
"python",
"opencv",
"image-processing",
"ocr",
"image-segmentation"
] |
Find channel name of message with SlackClient | 38,956,760 | <p>I am trying to print the channel a message was posted to in slack with the python SlackClient. After running this code I only get an ID and not the channel name.</p>
<pre><code>import time
import os
from slackclient import SlackClient

BOT_TOKEN = os.environ.get('SLACK_BOT_TOKEN')

def main():
    # Creates a slackclient instance with bots token
    sc = SlackClient(BOT_TOKEN)
    #Connect to slack
    if sc.rtm_connect():
        print "connected"
        while True:
            # Read latest messages
            for slack_message in sc.rtm_read():
                message = slack_message.get("text")
                print message
                channel = slack_message.get("channel")
                print channels
            time.sleep(1)

if __name__ == '__main__':
    main()
</code></pre>
<p>This is the output:</p>
<pre><code>test
U1K78788H
</code></pre>
| 0 | 2016-08-15T14:12:04Z | 38,970,310 | <p>I'm not sure what you are outputting. Shouldn't "channels" be "channel"? Also, I think this output is the "user" field. The "channel" field should yield an id starting with C or G (<a href="https://api.slack.com/events/message" rel="nofollow">doc</a>). </p>
<pre><code>{
"type": "message",
"channel": "C2147483705",
"user": "U2147483697",
"text": "Hello world",
"ts": "1355517523.000005"
}
</code></pre>
<p>Then, use either the python client to retrieve the channel name, if it stores it (I don't know the Python client), or use the web API method <a href="https://api.slack.com/methods/channels.info" rel="nofollow">channels.info</a> to retrieve the channel name.</p>
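<p>A minimal sketch of that second option, assuming an object that exposes slackclient's <code>api_call(method, **kwargs)</code> interface (the helper name <code>get_channel_name</code> is made up for illustration, not part of the library):</p>

```python
# Sketch: resolve a channel id to its name via the channels.info Web API method.
# `sc` can be any object exposing slackclient's api_call(method, **kwargs);
# get_channel_name is a hypothetical helper name, not part of slackclient.
def get_channel_name(sc, channel_id):
    response = sc.api_call("channels.info", channel=channel_id)
    return response["channel"]["name"]
```

<p>You could then call <code>get_channel_name(sc, channel)</code> inside the read loop instead of printing the raw id.</p>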
| 0 | 2016-08-16T08:44:16Z | [
"python",
"python-2.7",
"slack-api",
"slack"
] |
Find channel name of message with SlackClient | 38,956,760 | <p>I am trying to print the channel a message was posted to in slack with the python SlackClient. After running this code I only get an ID and not the channel name.</p>
<pre><code>import time
import os
from slackclient import SlackClient

BOT_TOKEN = os.environ.get('SLACK_BOT_TOKEN')

def main():
    # Creates a slackclient instance with bots token
    sc = SlackClient(BOT_TOKEN)
    #Connect to slack
    if sc.rtm_connect():
        print "connected"
        while True:
            # Read latest messages
            for slack_message in sc.rtm_read():
                message = slack_message.get("text")
                print message
                channel = slack_message.get("channel")
                print channels
            time.sleep(1)

if __name__ == '__main__':
    main()
</code></pre>
<p>This is the output:</p>
<pre><code>test
U1K78788H
</code></pre>
| 0 | 2016-08-15T14:12:04Z | 39,075,937 | <p>This will always produce a channel id and not channel name. You must call <strong>channels.info</strong> to get the channel name.</p>
<pre><code>import time
import os
from slackclient import SlackClient

BOT_TOKEN = os.environ.get('SLACK_BOT_TOKEN')

def main():
    # Creates a slackclient instance with bots token
    sc = SlackClient(BOT_TOKEN)
    #Connect to slack
    if sc.rtm_connect():
        print "connected"
        while True:
            # Read latest messages
            for slack_message in sc.rtm_read():
                message = slack_message.get("text")
                print message
                channel = slack_message.get("channel")
                print channel
                channel_info = sc.api_call("channels.info", channel=channel)
                print channel_info["channel"]["name"]
            time.sleep(1)

if __name__ == '__main__':
    main()
</code></pre>
<p>This will also print the channel name.
Alternatively, you can store the names of all the channels with their channel_id in a dictionary beforehand, and then get the channel name with the id as key.</p>
| 0 | 2016-08-22T09:31:59Z | [
"python",
"python-2.7",
"slack-api",
"slack"
] |
Pandas add new columns based on splitting another column | 38,956,778 | <p>I have a pandas dataframe like the following:</p>
<pre><code>A B
US,65,AMAZON 2016
US,65,EBAY 2016
</code></pre>
<p>My goal is to get to look like this:</p>
<pre><code>A B country code com
US.65.AMAZON 2016 US 65 AMAZON
US.65.EBAY 2016 US 65 EBAY
</code></pre>
<p>I know this question has been asked before <a href="http://stackoverflow.com/questions/14745022/pandas-dataframe-how-do-i-split-a-column-into-two">here</a> and <a href="http://stackoverflow.com/questions/25789445/pandas-make-new-column-from-string-slice-of-another-column#_=_">here</a> but <strong>none</strong> of them works for me. I have tried: </p>
<pre><code>df['country','code','com'] = df.Field.str.split('.')
</code></pre>
<p>and</p>
<pre><code>df2 = pd.DataFrame(df.Field.str.split('.').tolist(),columns = ['country','code','com','A','B'])
</code></pre>
<p>Am I missing something? Any help is much appreciated.</p>
| 2 | 2016-08-15T14:13:31Z | 38,956,805 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a> with parameter <code>expand=True</code> and add one <code>[]</code> to left side:</p>
<pre><code>df[['country','code','com']] = df.A.str.split(',', expand=True)
</code></pre>
<p>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>replace</code></a> <code>,</code> to <code>.</code>:</p>
<pre><code>df.A = df.A.str.replace(',','.')
print (df)
A B country code com
0 US.65.AMAZON 2016 US 65 AMAZON
1 US.65.EBAY 2016 US 65 EBAY
</code></pre>
<p>Another solution with <code>DataFrame</code> constructor if there are no <code>NaN</code> values:</p>
<pre><code>df[['country','code','com']] = pd.DataFrame([ x.split(',') for x in df['A'].tolist() ])
df.A = df.A.str.replace(',','.')
print (df)
A B country code com
0 US.65.AMAZON 2016 US 65 AMAZON
1 US.65.EBAY 2016 US 65 EBAY
</code></pre>
<p>Also you can use column names in constructor, but then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> is necessary:</p>
<pre><code>df1=pd.DataFrame([x.split(',') for x in df['A'].tolist()],columns= ['country','code','com'])
df.A = df.A.str.replace(',','.')
df = pd.concat([df, df1], axis=1)
print (df)
A B country code com
0 US.65.AMAZON 2016 US 65 AMAZON
1 US.65.EBAY 2016 US 65 EBAY
</code></pre>
| 2 | 2016-08-15T14:15:24Z | [
"python",
"pandas",
"dataframe",
"split",
"multiple-columns"
] |
Pandas add new columns based on splitting another column | 38,956,778 | <p>I have a pandas dataframe like the following:</p>
<pre><code>A B
US,65,AMAZON 2016
US,65,EBAY 2016
</code></pre>
<p>My goal is to get to look like this:</p>
<pre><code>A B country code com
US.65.AMAZON 2016 US 65 AMAZON
US.65.EBAY 2016 US 65 EBAY
</code></pre>
<p>I know this question has been asked before <a href="http://stackoverflow.com/questions/14745022/pandas-dataframe-how-do-i-split-a-column-into-two">here</a> and <a href="http://stackoverflow.com/questions/25789445/pandas-make-new-column-from-string-slice-of-another-column#_=_">here</a> but <strong>none</strong> of them works for me. I have tried: </p>
<pre><code>df['country','code','com'] = df.Field.str.split('.')
</code></pre>
<p>and</p>
<pre><code>df2 = pd.DataFrame(df.Field.str.split('.').tolist(),columns = ['country','code','com','A','B'])
</code></pre>
<p>Am I missing something? Any help is much appreciated.</p>
| 2 | 2016-08-15T14:13:31Z | 38,957,891 | <p>For getting the new columns I would prefer doing it as follows:</p>
<pre><code>df['Country'] = df['A'].apply(lambda x: x.split(',')[0])
df['Code'] = df['A'].apply(lambda x: x.split(',')[1])
df['Com'] = df['A'].apply(lambda x: x.split(',')[2])
</code></pre>
<p>As for the replacement of <strong>,</strong> with a <strong>.</strong> you can use the following:</p>
<pre><code>df['A'] = df['A'].str.replace(',','.')
</code></pre>
| 0 | 2016-08-15T15:14:33Z | [
"python",
"pandas",
"dataframe",
"split",
"multiple-columns"
] |
Wagtail/Django block doesn't render content properly from custom/nested StructBlock template | 38,956,897 | <p>I have a block in the head of my base template that will render "extra" CSS. This CSS will be customized from fields coming in from a Wagtail CMS instance.</p>
<p>So, in the <code>base.html</code> template I have:</p>
<pre><code><head>
{% block extra_css %}{% endblock %}
</head>
<body>
{% block content %}{% endblock %}
</body>
</code></pre>
<p>Then, in my <code>detail.html</code> template, which extends off of the base, I have:</p>
<pre><code>{% block content %}
{% for block in page.body %}
{{ block }}
{% endfor %}
{% endblock %}
</code></pre>
<p><code>body</code> is a <code>StreamField</code> in Wagtail. One of said fields is a custom <code>StructBlock</code>, the model of which is set up like so:</p>
<pre><code>class CalloutBlock(blocks.StructBlock):
    accent_color = blocks.CharBlock(required=False, label='Accent Color', help_text='HEX Value/Color')

    class Meta:
        template = 'inc/blocks/callout.inc.tpl'
</code></pre>
<p>Finally, in that <code>callout.inc.tpl</code> template, I am attempting to render a <code><style></code> tag that <em>should</em> get injected in my <code>extra_css</code> block:</p>
<pre><code>{% block extra_css %}
<style>
.accent_color {
background-color: {{accent_color}} !important;
}
</style>
{% endblock %}
</code></pre>
<p>However, this block is not injected into the <code><head></code> as I'd expected. Instead, it renders in the body, like so, as if the <code>{% block extra_css %}</code> tag were not there at all:</p>
<pre><code><head>
</head>
<body>
<style>
.accent_color {
background-color: {{accent_color}} !important;
}
</style>
</body>
</code></pre>
<p>Is this simply a limitation of Django templates? Is nesting the issue? Or is it because I'm using a custom template at the model level, and that's somehow outside the scope of the parent template parsing?</p>
<p>Django: 1.10<br>
Wagtail: 1.6</p>
| 0 | 2016-08-15T14:20:16Z | 38,972,874 | <p>This is a limitation in the way custom templates for StreamField blocks work. (There's a similar limitation in Django templating in general too, though - the <code>{% block %}</code> mechanism only works in conjunction with <code>{% extends %}</code>, not <code>{% include %}</code>.) The HTML content for the block is rendered in a separate call to the template engine, independently of the outer page template, so there's no way of passing control between the two.</p>
<p>(Note that Wagtail 1.6 introduces the <a href="http://docs.wagtail.io/en/v1.6/releases/1.6.html#include-block-tag-for-improved-streamfield-template-inclusion" rel="nofollow"><code>{% include_block %}</code> tag</a>, which improves the situation a bit by making it possible to pass variables from the outer template's context to the block template. It still won't allow passing control from one to the other, though.)</p>
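<p>As a sketch of that workaround (assuming Wagtail 1.6 and that the block template reads context variables directly), the loop in <code>detail.html</code> could use <code>{% include_block %}</code> so that variables from the page context become visible inside the block template:</p>
<pre><code>{% load wagtailcore_tags %}
{% block content %}
    {% for block in page.body %}
        {% include_block block %}
    {% endfor %}
{% endblock %}
</code></pre>
<p>This still renders each block in its own template pass, so a <code>{% block extra_css %}</code> inside the block template will not be injected into the page's <code><head></code>.</p>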
| 1 | 2016-08-16T10:46:52Z | [
"python",
"css",
"django",
"templates",
"wagtail"
] |
IndexError: index out of range in python | 38,956,910 | <p>I am trying to solve the following question on a competitive coding website where I have to convert '->' to '.' only in the code section but not in the comments.
<a href="https://www.hackerearth.com/problem/algorithm/compiler-version-2/" rel="nofollow">https://www.hackerearth.com/problem/algorithm/compiler-version-2/</a></p>
<p>I have tried to write a solution, but every time I run it, it gives me an IndexError. Any help is much appreciated. Below is my solution:</p>
<pre><code>import copy

temp_list = []
while True:
    string = input()
    if string != "":
        temp_list.append(string)
        string = None
    else:
        break

for i in range(len(temp_list)):
    j = 0
    while j <= (len(temp_list[i]) - 2):
        if string[i][j] == '-' and string[i][j + 1] == '>':
            #print("Hello WOrld")
            temp_string = string[i][:j] + '.' + string[i][j + 2:]
            string[i] = copy.deepcopy(temp_string)
        elif string[i][j] == '/' and string[i][j + 1] == '/':
            #print("Break")
            break
        else:
            #print(j)
            j += 1

for i in temp_list:
    print(i)
</code></pre>
| -1 | 2016-08-15T14:21:06Z | 38,957,085 | <ol>
<li><code>if string</code> is the same as <code>if string != ""</code></li>
<li><code>temp_list</code> is a list so you can loop over it in a more pythonic way <code>for i in temp_list</code></li>
<li><code>string</code> is a variable of type <code>str</code>, so you can't index it like this: <code>string[i][j]</code> (I guess you wanted to use <code>temp_list</code> in those cases)</li>
</ol>
<p>Something like this below should work:</p>
<pre><code>import copy

temp_list = []
while True:
    string = input()
    if string:
        temp_list.append(string)
        string = None
    else:
        break

for i in range(len(temp_list)):
    j = 0
    while j <= (len(temp_list[i]) - 2):
        if temp_list[i][j] == '-' and temp_list[i][j + 1] == '>':
            #print("Hello WOrld")
            temp_string = temp_list[i][:j] + '.' + temp_list[i][j + 2:]
            temp_list[i] = copy.deepcopy(temp_string)
        elif temp_list[i][j] == '/' and temp_list[i][j + 1] == '/':
            #print("Break")
            break
        else:
            #print(j)
            j += 1

for i in temp_list:
    print(i)
</code></pre>
| -1 | 2016-08-15T14:29:51Z | [
"python",
"python-3.x"
] |
Python & MySQL - non GCC module? | 38,956,942 | <p>Are there any MySQL modules that are written purely in Python and don't require GCC to be compiled? MySQLdb is great but it just requires too much for what I'm working with right now.</p>
<p>Thanks</p>
| 0 | 2016-08-15T14:22:26Z | 38,957,681 | <p>This does the trick. All python, no gcc.</p>
<p><a href="https://github.com/mysql/mysql-connector-python" rel="nofollow">https://github.com/mysql/mysql-connector-python</a></p>
| 0 | 2016-08-15T15:03:21Z | [
"python",
"mysql",
"database"
] |
Python splitting of text | 38,956,950 | <p>I have a text file (.txt) with different lines of words
For example: </p>
<pre><code>1,2,3,4,5
11,12,13,14,15
</code></pre>
<p>How can I use the <code>file.read().split()</code> command to just get the 2nd column...so 2 and 12 for example of the text?</p>
| 1 | 2016-08-15T14:22:51Z | 38,956,991 | <p>Assuming no spaces around <code>,</code>:</p>
<pre><code>with open('file.txt') as f:
    for line in f:
        print(line.split(',')[1])
</code></pre>
<p>If there could be whitespaces, use <code>re.split()</code>:</p>
<pre><code>import re

with open('file.txt') as f:
    for line in f:
        print(re.split(r'\s?,\s?', line)[1])
</code></pre>
| 4 | 2016-08-15T14:25:05Z | [
"python"
] |
Python splitting of text | 38,956,950 | <p>I have a text file (.txt) with different lines of words
For example: </p>
<pre><code>1,2,3,4,5
11,12,13,14,15
</code></pre>
<p>How can I use the <code>file.read().split()</code> command to just get the 2nd column...so 2 and 12 for example of the text?</p>
| 1 | 2016-08-15T14:22:51Z | 38,957,009 | <p>First you have to split the file into lines (easily done simply by iterating over it), than you have to extract the 2nd column from each line.</p>
<p>Simple example:</p>
<pre><code>with open("myfile.txt") as myfile:
    for line in myfile:
        print(line.split(",")[1])  # indexing starts at 0
</code></pre>
<p>To accumulate a list of the values:</p>
<pre><code>with open("myfile.txt") as myfile:
    mylist = [line.split(",")[1] for line in myfile]
</code></pre>
| 2 | 2016-08-15T14:25:58Z | [
"python"
] |
Python filter a DBF by a range of date (between two dates) | 38,956,992 | <p>I'm using the <a href="https://pypi.python.org/pypi/dbf/0.96.8" rel="nofollow">dbf</a> library with python3.5.
The DBF table has a column with only dates (no time) and another with just the time. I want to retrieve the records from the last five minutes.</p>
<p>I'm new to this module and currently see just two approaches to get a portion of the data stored in a DBF:</p>
<p>First, with the sympathetic SQL like query:</p>
<pre><code> records = table.query("SELECT * WHERE (SA03 BETWEEN " + beforedfilter + " AND " + nowdfilter + ") AND (SA04 BETWEEN " + beforetfilter + " AND " + nowtfilter + ")")
</code></pre>
<p>This would be a familiar approach, but the records returned are the first records from the file and not those within the given range of time. Is it perhaps because SQL querying is not well supported by the module? Or am I just making a mistake in my query? Another odd thing is that after a few records are printed I get an exception: <code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xce in position 3: ordinal not in range(128)</code>. To my knowledge there are no non-ascii characters in the table.</p>
<p>The other approach is using the module's default way of narrowing down records.. Got stuck with the filtering, as I could use it if I would want to find one specific date and time but for a range, I have no clues how to proceed.</p>
<pre><code>index = table.create_index(lambda rec: rec.SA03)
records = index.search(match=(?!))
</code></pre>
| 1 | 2016-08-15T14:25:06Z | 38,960,067 | <p>The simplest way is to have a filter function that only tracks matching records:</p>
<pre><code># lightly tested
def last_five_minutes(record, date_field, time_field):
now = dbf.DateTime.now()
record_date = record[date_field]
try:
# if time is stored as HH:MM:SS
record_time = dbf.DateTime.strptime(record[time_field], '%H:%M:%S').time()
moment = dbf.DateTime.combine(record_date, record_time)
lapsed = now - moment
except (ValueError, TypeError):
# should log exceptions, not just ignore them
return dbf.DoNotIndex
if lapsed <= datetime.timedelta(seconds=300):
# return value to sort on
return moment
else:
# do not include this record
return dbf.DoNotIndex
</code></pre>
<p>and then use it:</p>
<pre><code>index = table.create_index(
lambda rec: last_five_minutes(rec, 'date_field', 'time_field'))
</code></pre>
| 0 | 2016-08-15T17:40:14Z | [
"python",
"dbf"
] |
Python Pandas Drop Duplicates keep second to last | 38,957,036 | <p>What's the most efficient way to select the second to last of each duplicated set in a pandas dataframe?</p>
<p>For instance I basically want to do this operation:</p>
<pre><code>df = df.drop_duplicates(['Person','Question'],take_last=True)
</code></pre>
<p>But this:</p>
<pre><code>df = df.drop_duplicates(['Person','Question'],take_second_last=True)
</code></pre>
<p>Abstracted question: how to choose which duplicate to keep if duplicate is neither the max nor the min?</p>
| 6 | 2016-08-15T14:27:19Z | 38,957,387 | <p>With groupby.apply:</p>
<pre><code>df = pd.DataFrame({'A': [1, 1, 1, 1, 2, 2, 2, 3, 3, 4],
'B': np.arange(10), 'C': np.arange(10)})
df
Out:
A B C
0 1 0 0
1 1 1 1
2 1 2 2
3 1 3 3
4 2 4 4
5 2 5 5
6 2 6 6
7 3 7 7
8 3 8 8
9 4 9 9
(df.groupby('A', as_index=False).apply(lambda x: x if len(x)==1 else x.iloc[[-2]])
.reset_index(level=0, drop=True))
Out:
A B C
2 1 2 2
5 2 5 5
7 3 7 7
9 4 9 9
</code></pre>
<p>With a different DataFrame, subset two columns:</p>
<pre><code>df = pd.DataFrame({'A': [1, 1, 1, 1, 2, 2, 2, 3, 3, 4],
'B': [1, 1, 2, 1, 2, 2, 2, 3, 3, 4], 'C': np.arange(10)})
df
Out:
A B C
0 1 1 0
1 1 1 1
2 1 2 2
3 1 1 3
4 2 2 4
5 2 2 5
6 2 2 6
7 3 3 7
8 3 3 8
9 4 4 9
(df.groupby(['A', 'B'], as_index=False).apply(lambda x: x if len(x)==1 else x.iloc[[-2]])
.reset_index(level=0, drop=True))
Out:
A B C
1 1 1 1
2 1 2 2
5 2 2 5
7 3 3 7
9 4 4 9
</code></pre>
| 5 | 2016-08-15T14:46:50Z | [
"python",
"pandas"
] |
Python Pandas Drop Duplicates keep second to last | 38,957,036 | <p>What's the most efficient way to select the second to last of each duplicated set in a pandas dataframe?</p>
<p>For instance I basically want to do this operation:</p>
<pre><code>df = df.drop_duplicates(['Person','Question'],take_last=True)
</code></pre>
<p>But this:</p>
<pre><code>df = df.drop_duplicates(['Person','Question'],take_second_last=True)
</code></pre>
<p>Abstracted question: how to choose which duplicate to keep if duplicate is neither the max nor the min?</p>
| 6 | 2016-08-15T14:27:19Z | 38,957,404 | <p>Starting with this: </p>
<pre><code> Person Question
1 0 0
2 1 1
3 2 2
4 3 3
5 4 4
6 0 0
7 1 1
8 2 2
9 3 3
10 4 4
</code></pre>
<p>Try this:
For last..... </p>
<pre><code>df.drop_duplicates(['Person','Question'],take_last=True).tail(1)
Person Question
10 4 4
</code></pre>
<p>For second to last....</p>
<pre><code>df.drop_duplicates(['Person','Question'],take_last=True).nlargest(2,['Person','Question'] , keep='first').tail(1)
Person Question
9 3 3
</code></pre>
| -2 | 2016-08-15T14:47:56Z | [
"python",
"pandas"
] |
Python Pandas Drop Duplicates keep second to last | 38,957,036 | <p>What's the most efficient way to select the second to last of each duplicated set in a pandas dataframe?</p>
<p>For instance I basically want to do this operation:</p>
<pre><code>df = df.drop_duplicates(['Person','Question'],take_last=True)
</code></pre>
<p>But this:</p>
<pre><code>df = df.drop_duplicates(['Person','Question'],take_second_last=True)
</code></pre>
<p>Abstracted question: how to choose which duplicate to keep if duplicate is neither the max nor the min?</p>
| 6 | 2016-08-15T14:27:19Z | 38,965,036 | <p>You could <code>groupby/tail(2)</code> to take the last 2 items, then <code>groupby/head(1)</code> to take the first item from the tail:</p>
<pre><code>df.groupby(['A','B']).tail(2).groupby(['A','B']).head(1)
</code></pre>
<p>If there is only one item in the group, <code>tail(2)</code> returns just the one item.</p>
<hr>
<p>For example,</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(10, size=(10**2, 3)), columns=list('ABC'))
result = df.groupby(['A','B']).tail(2).groupby(['A','B']).head(1)
expected = (df.groupby(['A', 'B'], as_index=False).apply(lambda x: x if len(x)==1 else x.iloc[[-2]]).reset_index(level=0, drop=True))
assert expected.sort_index().equals(result)
</code></pre>
<p>The builtin groupby methods (such as <code>tail</code> and <code>head</code>) are often much faster
than <code>groupby/apply</code> with custom Python functions. This is especially true if there are a lot of groups:</p>
<pre><code>In [96]: %timeit df.groupby(['A','B']).tail(2).groupby(['A','B']).head(1)
1000 loops, best of 3: 1.7 ms per loop
In [97]: %timeit (df.groupby(['A', 'B'], as_index=False).apply(lambda x: x if len(x)==1 else x.iloc[[-2]]).reset_index(level=0, drop=True))
100 loops, best of 3: 17.9 ms per loop
</code></pre>
<hr>
<p>Alternatively, <a href="http://stackoverflow.com/questions/38957036/python-pandas-drop-duplicates-keep-second-to-last/38965036?noredirect=1#comment65284901_38965036">ayhan suggests</a> a nice improvement:</p>
<pre><code>alt = df.groupby(['A','B']).tail(2).drop_duplicates(['A','B'])
assert expected.sort_index().equals(alt)
In [99]: %timeit df.groupby(['A','B']).tail(2).drop_duplicates(['A','B'])
1000 loops, best of 3: 1.43 ms per loop
</code></pre>
| 2 | 2016-08-16T00:45:19Z | [
"python",
"pandas"
] |
Why does Django not delete a one-to-one relationship? | 38,957,082 | <p>I have the following simple relationship:</p>
<pre><code>class User(models.Model):
fields here
class UserProfile(models.Model):
user = models.OneToOneField(User)
</code></pre>
<p>I did the following in the shell:</p>
<pre><code>user = User.objects.create(...)
profile = UserProfile.objects.create(user=user)
user.userprofile
...<UserProfile: UserProfile object>
user.userprofile.delete()
...(1, {'accounts.UserProfile': 1})
user.userprofile
...<UserProfile: UserProfile object>
</code></pre>
<p>From the above, you can see that I create User and UserProfile instances. Then I try to delete the UserProfile instance, and it appears to be deleted. Then I access <code>user.userprofile</code>, and it's as if it was never deleted.</p>
<p>After a little digging into Django's delete method, I realized that when I call <code>user.userprofile.delete()</code>, Django just clears the userprofile's pk and the remaining fields are not touched. What I do not understand is what I should do in order to get the following result:</p>
<pre><code>user.userprofile.delete()
user.userprofile
...RelatedObjectDoesNotExist: User has no userprofile.
</code></pre>
<p>Does anyone have some ideas or code snippets?</p>
| 1 | 2016-08-15T14:29:47Z | 38,957,230 | <p>You can reload the user from the database:</p>
<pre><code>user = User.objects.get(pk=user.pk)
</code></pre>
<p>That will refresh all its attributes including the userprofile.</p>
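<p><em>Editor's note:</em> the stale attribute comes from Django caching the related object on the model instance, so deleting the database row does not affect the already-cached Python object. The class below is a hypothetical sketch of that caching pattern written without Django at all — it is not Django's actual implementation. (On Django ≥ 1.8 there is also <code>user.refresh_from_db()</code>, but whether it clears cached <em>reverse</em> relations depends on the Django version, so re-fetching as above is the safe route.)</p>

```python
class Profile:
    pass

class User:
    def __init__(self, store):
        self._store = store  # stands in for the database

    @property
    def userprofile(self):
        # Cache the related object on first access, like Django does
        if not hasattr(self, '_profile_cache'):
            self._profile_cache = self._store['profile']
        return self._profile_cache

    def refresh_from_db(self):
        # Drop the cached relation so the next access hits the "database"
        if hasattr(self, '_profile_cache'):
            del self._profile_cache

store = {'profile': Profile()}
user = User(store)
cached = user.userprofile          # first access fills the cache
store['profile'] = None            # "delete" the row in the database
assert user.userprofile is cached  # stale: the cached object survives
user.refresh_from_db()
assert user.userprofile is None    # a fresh read sees the deletion
```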
| 3 | 2016-08-15T14:38:40Z | [
"python",
"django",
"django-models",
"django-queryset",
"one-to-one"
] |
extend python not extending correctly | 38,957,127 | <p>I have a list <code>indexlist</code> with the value:</p>
<p><code>[8, 11, 4, 3]</code></p>
<p>This is in the middle of a function, so other relevant values are <code>i=0</code> and <code>endsorted = sorted(indexlist[i+1:])</code></p>
<p>Then I call</p>
<p><code>indexlist[:i+1].extend(endsorted)</code></p>
<p>which returns</p>
<p><code>[8, 11, 4, 3]</code>. </p>
<p>Shouldn't it return <code>[8, 3, 4, 11]</code>? I've checked <code>indexlist[:i+1]</code>, which is <code>[8]</code>, and I've checked <code>endsorted</code> which is <code>[3, 4, 11]</code>. </p>
| 0 | 2016-08-15T14:32:48Z | 38,957,259 | <p>Look on that code:</p>
<pre><code># Init
indexlist = [8, 11, 4, 3]
# Your setup
i = 0
endsorted = sorted(indexlist[i+1:])
# Extending the right list object
new_index_list = indexlist[:i+1]
new_index_list.extend(endsorted)
</code></pre>
<p>The new_index_list value is <code>[8, 3, 4, 11]</code>.</p>
<p>Since I extended the right object and not just a copy of it (when you slice a list, you get a new object, not the original one).</p>
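<p>For completeness, you can also fix the original one-liner with slice assignment, which mutates <code>indexlist</code> in place instead of extending a throwaway copy:</p>

```python
indexlist = [8, 11, 4, 3]
i = 0
endsorted = sorted(indexlist[i+1:])

# Slice assignment writes into the original list object,
# unlike indexlist[:i+1].extend(...), which extends a temporary copy
indexlist[i+1:] = endsorted
print(indexlist)  # [8, 3, 4, 11]
```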
| 0 | 2016-08-15T14:40:04Z | [
"python"
] |
Sanitizers in Python native module | 38,957,154 | <p>I would like to use GCC's sanitizers for a native module.</p>
<p>I use the link options:</p>
<pre><code>-static-libasan -static-libtsan -static-liblsan -static-libubsan -fsanitize=address -lasan -lubsan
</code></pre>
<p>When I load the native module it prints the error message:</p>
<pre><code>ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.
</code></pre>
<p>This suggests that the static flags are not working.
Is it possible to use sanitizers for a shared object only, or is it necessary to link the sanitizers into python3 directly?</p>
| 0 | 2016-08-15T14:34:05Z | 39,309,254 | <p>For some reason linking in <code>-lasan</code> didn't work either. However, <code>LD_PRELOAD</code> worked fine.</p>
<p>Try <code>LD_PRELOAD</code>, if that is feasible for you.</p>
| 0 | 2016-09-03T16:51:53Z | [
"python",
"python-3.x",
"gcc",
"address-sanitizer"
] |
Extract all single {key:value} pairs from dictionary | 38,957,159 | <p>I have a dictionary which maps some keys to 1 or <strong>more</strong> values. </p>
<p>In order to map to more than 1 value, I'm mapping each individual key to a list. How can I get the number of the single pairs? Is there a quick pythonic way to do this?</p>
<p>My dict looks something like this:</p>
<pre><code>>>print dict
{'key1':['value11',value12, ...], 'key2': ['value21'], 'key3':['value31', 'value32']}
</code></pre>
<p>So in the above example, I would expect my output to be <code>1</code></p>
| 0 | 2016-08-15T14:34:25Z | 38,957,246 | <p>You could try something like</p>
<pre><code>len([_ for v in d.values() if len(v) == 1])
</code></pre>
<p>where <code>d</code> is the name of your dictionary (you should avoid using identifiers such as <code>dict</code>, incidentally).</p>
<p>Depending on your interpreter version, you might need to use <code>itervalues</code> instead of <code>values</code>.</p>
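<p>The same count can be written as a generator expression fed to <code>sum()</code>, which avoids building the intermediate list — shown here against the sample dictionary from the question:</p>

```python
d = {'key1': ['value11', 'value12'],
     'key2': ['value21'],
     'key3': ['value31', 'value32']}

# sum() over a generator counts the single-value keys without
# materializing a temporary list of matches
single_pairs = sum(1 for v in d.values() if len(v) == 1)
print(single_pairs)  # 1
```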
| 1 | 2016-08-15T14:39:22Z | [
"python",
"dictionary"
] |