title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Bokeh: Interact with legend label text | 39,049,622 | <p>Is there any way to interactively change legend label text in Bokeh?</p>
<p>I've read <a href="https://github.com/bokeh/bokeh/issues/2274" rel="nofollow">https://github.com/bokeh/bokeh/issues/2274</a> and <a href="http://stackoverflow.com/questions/32253900/how-to-interactively-display-and-hide-lines-in-a-bokeh-plot">How to interactively display and hide lines in a Bokeh plot?</a> but neither are applicable.</p>
<p>I don't need to modify the colors or anything of more complexity than changing the label text but I can't find a way to do it.</p>
| 1 | 2016-08-20T01:24:20Z | 39,071,084 | <p>As of Bokeh <code>0.12.1</code> it does not look like this is currently supported. <code>Legend</code> objects have a <code>legends</code> property that maps the text to a list of glyphs:</p>
<pre><code>{
    "foo": [circle1],
    "bar": [line2, circle2]
}
</code></pre>
<p>Ideally, you could update this <code>legends</code> property to cause a re-render. But looking at <a href="https://github.com/bokeh/bokeh/blob/0.12.1/bokehjs/src/coffee/models/annotations/legend.coffee" rel="nofollow">the source code</a>, it appears the value is only used at initialization; there is no plumbing to force a re-render if the value changes. A possible workaround could be to change the value of <code>legends</code> and then immediately set some other property that <em>does</em> trigger a re-render.</p>
<p>In any case, making this work on update should not be much effort, and it would be a nice PR for a new contributor. I'd encourage you to submit a feature request on the <a href="https://github.com/bokeh/bokeh/issues" rel="nofollow">GitHub issue tracker</a> and, if you are able, a Pull Request to implement it (we are always happy to help new contributors get started and answer questions).</p>
| 0 | 2016-08-22T03:41:05Z | [
"python",
"bokeh"
] |
How to import text data into excel columns based on spaces? | 39,049,645 | <p>I need to write a python code that takes the text between two blank lines in a .txt file and inserts said text into unique columns in Excel, pasting the headers only once. For example:</p>
<pre><code>d1_type:
shape:
2,
order:
false,
relation:
true,
d2_type:
shape:
false,
order:
false,
relation:
true,
encoding_rt:
6641,
verification_rt:
2429,
target:
2,"
</code></pre>
<p>So each cluster of text needs to be in its own column in excel (<em>Also, this page is formatting my text incorrectly--the words following each colon should be on their own line</em>). The main heading (e.g. order, relation, etc.) would ideally only be pasted once to name each column. I'm really at a loss for how to do this. I've googled it for the past 3 hours and made very little progress.</p>
| 0 | 2016-08-20T01:29:41Z | 39,051,479 | <p>You could:</p>
<ol>
<li>Parse each chunk</li>
<li>Convert each chunk into a dictionary</li>
<li>Finally, generate a CSV using your list of dictionaries.</li>
</ol>
<p>You can then open the CSV in Excel. Something like the following.</p>
<h1>Parse Chunks</h1>
<p>Given a filename, generate chunks.</p>
<pre><code>def parse_chunks(filename):
    chunks = []
    with open(filename) as f:
        chunk = []
        for line in f:
            if line.strip().endswith('_type:'):  # a new chunk starts here
                if chunk:
                    chunks.append(chunk)
                chunk = []
            chunk.append(line)
        if chunk:
            chunks.append(chunk)  # keep the final chunk as well
    return chunks
</code></pre>
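<p>A quick way to sanity-check the splitting idea (start a new chunk whenever a line ends in <code>_type:</code>) without touching the filesystem is to drive the same loop from an in-memory string; the sample lines mirror the question:</p>

```python
import io

# In-memory stand-in for the file, mirroring the question's sample data (abbreviated).
sample = io.StringIO(
    "d1_type:\nshape:\n2,\norder:\nfalse,\n"
    "d2_type:\nshape:\nfalse,\n"
)

chunks = []
chunk = []
for line in sample:
    if line.strip().endswith('_type:'):  # a new chunk begins here
        if chunk:
            chunks.append(chunk)
        chunk = []
    chunk.append(line)
if chunk:
    chunks.append(chunk)  # keep the trailing chunk

print(len(chunks))  # 2
```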
<h1>Dictionary-ify</h1>
<p>Given chunks, generate a list of dictionaries.</p>
<pre><code>def dictionarify(chunks):
    data = []
    for chunk in chunks:
        datum = {}
        key = None
        for raw in chunk:  # each chunk is a list of lines, not a string
            line = raw.strip()
            if line.endswith(':'):
                key = line[:-1]
            elif line.endswith(','):
                datum[key] = line[:-1]
            # implicitly ignores blank lines
        data.append(datum)
    return data
</code></pre>
<h1>Generate CSV</h1>
<pre><code>def generate_csv(data, dest):
    headers = set()
    for datum in data:
        for key in datum:
            headers.add(key)
    headers = sorted(headers)  # establish a stable column order
    with open(dest, 'w') as f:
        f.write(','.join(headers) + '\n')
        for datum in data:
            # .get() leaves the cell blank if a record is missing a field
            f.write(','.join(datum.get(key, '') for key in headers) + '\n')
</code></pre>
<p>You should then have a CSV that you can open in Excel.</p>
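<p>As a side note, the standard library's <code>csv.DictWriter</code> can replace the hand-rolled writer above: it writes the header row once, quotes values that contain commas, and fills blanks for missing keys. A minimal sketch (the sample dictionaries simply mirror the data in the question):</p>

```python
import csv

# Example output of the dictionary-building step (illustrative values).
data = [
    {'shape': '2', 'order': 'false', 'relation': 'true'},
    {'shape': 'false', 'order': 'false', 'relation': 'true', 'target': '2'},
]

# Collect every key seen across all records and fix a stable column order.
headers = sorted({key for datum in data for key in datum})

with open('out.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=headers, restval='')
    writer.writeheader()    # header row written once
    writer.writerows(data)  # missing keys become empty cells
```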
| 0 | 2016-08-20T07:05:56Z | [
"python",
"excel"
] |
Python-JSON - How to parse API output? | 39,049,647 | <p>I'm pretty new.</p>
<p>I wrote this python script to make an API call from blockr.io to check the balance of multiple bitcoin addresses.</p>
<p>The contents of btcaddy.txt are bitcoin addresses separated by commas. For this example, let it parse <a href="http://btc.blockr.io/api/v1/address/info/1FQCJjV3JwoQVgRYVRmDUFyZD9rt946Yp9,1BitmixerEiyyp3eTLaCpgBbhYERs48qza,1CyDUaBBhZAwGZEnL266Kb4tRQ1Br9Rj8c" rel="nofollow">this</a>.</p>
<pre><code>import urllib2
import json
btcaddy = open("btcaddy.txt","r")
urlRequest = urllib2.Request("http://btc.blockr.io/api/v1/address/info/" + btcaddy.read())
data = urllib2.urlopen(urlRequest).read()
json_data = json.loads(data)
balance = float(json_data['data''address'])
print balance
raw_input()
</code></pre>
<p>However, it gives me an error. What am I doing wrong? For now, how do I get it to print the balance of the addresses?</p>
| -2 | 2016-08-20T01:30:34Z | 39,049,801 | <p>Your question is clear, but your attempt is not.</p>
<p>You said you have a file with more than one address in it, so first you need to read the addresses out of that file. Since they are comma-separated, split on the commas:</p>
<pre><code>with open("btcaddy.txt", "r") as f:
    addresses = f.read().split(',')
</code></pre>
<p>Now you can iterate over the addresses and make a request for each one. The <code>urllib</code> module is enough for this task.</p>
<pre><code>import json
import urllib
base_url = "http://btc.blockr.io/api/v1/address/info/%s"
for address in addresses:
request = urllib.request.urlopen(base_url % address)
result = json.loads(request.read().decode('utf8'))
print(result)
</code></pre>
<p>HTTP responses arrive as bytes, so you should use <code>decode('utf8')</code> to turn the payload into text before handing it to <code>json.loads</code>. Note also that in Python 3 you must <code>import urllib.request</code>; importing <code>urllib</code> alone does not bring in the <code>request</code> submodule.</p>
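<p>As a side note, the endpoint linked in the question also accepts several comma-separated addresses in one request, in which case <code>data</code> comes back as a list. A sketch of parsing such a response offline (the canned payload below only imitates the fields shown in the linked response, so treat the exact shape as an assumption):</p>

```python
import json

# Canned stand-in for a multi-address blockr.io response (field names
# are assumptions modeled on the response linked in the question).
raw = ('{"status": "success", "data": ['
       '{"address": "1FQC...", "balance": 0.5},'
       '{"address": "1Bit...", "balance": 2.0}]}')

payload = json.loads(raw)
balances = {entry['address']: entry['balance'] for entry in payload['data']}
print(balances)
```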
| 1 | 2016-08-20T02:07:40Z | [
"python",
"json"
] |
Python-JSON - How to parse API output? | 39,049,647 | <p>I'm pretty new.</p>
<p>I wrote this python script to make an API call from blockr.io to check the balance of multiple bitcoin addresses.</p>
<p>The contents of btcaddy.txt are bitcoin addresses separated by commas. For this example, let it parse <a href="http://btc.blockr.io/api/v1/address/info/1FQCJjV3JwoQVgRYVRmDUFyZD9rt946Yp9,1BitmixerEiyyp3eTLaCpgBbhYERs48qza,1CyDUaBBhZAwGZEnL266Kb4tRQ1Br9Rj8c" rel="nofollow">this</a>.</p>
<pre><code>import urllib2
import json
btcaddy = open("btcaddy.txt","r")
urlRequest = urllib2.Request("http://btc.blockr.io/api/v1/address/info/" + btcaddy.read())
data = urllib2.urlopen(urlRequest).read()
json_data = json.loads(data)
balance = float(json_data['data''address'])
print balance
raw_input()
</code></pre>
<p>However, it gives me an error. What am I doing wrong? For now, how do I get it to print the balance of the addresses?</p>
| -2 | 2016-08-20T01:30:34Z | 39,074,912 | <p>You've done multiple things wrong in your code. Here's my fix. I recommend a for loop. </p>
<pre><code>import json
import urllib

addresses = open("btcaddy.txt", "r").read().strip()  # strip() drops a trailing newline that would break the URL
base_url = "http://btc.blockr.io/api/v1/address/info/"
request = urllib.urlopen(base_url + addresses)
result = json.loads(request.read())['data']
for balance in result:
    print balance['address'], ":", balance['balance'], "BTC"
</code></pre>
<p>You don't need the <code>raw_input()</code> at the end, either.</p>
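<p>For reference, the error in the original code comes from <code>json_data['data''address']</code>: Python concatenates adjacent string literals, so this looks up the single (nonexistent) key <code>'dataaddress'</code> instead of indexing twice. A small demonstration with toy data shaped like the API response:</p>

```python
# Adjacent string literals concatenate, so ['data''address'] is one lookup.
assert 'data' 'address' == 'dataaddress'

# What two-level indexing actually looks like (toy data shaped like the
# response: a list of per-address dicts under 'data'):
json_data = {'data': [{'address': '1FQC...', 'balance': 0.5}]}
first = json_data['data'][0]
print(first['address'], first['balance'])
```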
| 1 | 2016-08-22T08:43:47Z | [
"python",
"json"
] |
Using Python to edit the timestamps in a list? Convert POSIX to readable format using a function | 39,049,665 | <p>SECOND EDIT:</p>
<p>Finished snippet for adjusting timezones and converting format. See correct answer below for details leading to this solution.</p>
<pre><code>tzvar = int(input("Enter the number of hours you'd like to add to the timestamp:"))
tzvarsecs = (tzvar*3600)
print (tzvarsecs)
def timestamp_to_str(timestamp):
    return datetime.fromtimestamp(timestamp).strftime('%H:%M:%S %m/%d/%Y')
timestamps = soup('span', {'class': '_timestamp js-short-timestamp '})
dtinfo = [timestamp["data-time"] for timestamp in timestamps]
times = map(int, dtinfo)
adjtimes = [x+tzvarsecs for x in times]
adjtimesfloat = [float(i) for i in adjtimes]
dtinfofloat = [float(i) for i in dtinfo]
finishedtimes = [x for x in map(timestamp_to_str, adjtimesfloat)]
originaltimes = [x for x in map(timestamp_to_str, dtinfofloat)]
</code></pre>
<p>END SECOND EDIT</p>
<hr>
<p>EDIT:</p>
<p>This code allows me to scrape the POSIX time from the HTML file and then add a number of hours entered by the user to the original value. Negative numbers will also work to subtract hours. The user will be working in whole hours as the changes are specifically to adjust for timezones.</p>
<pre><code>tzvar = int(input("Enter the number of hours you'd like to add to the timestamp:"))
tzvarsecs = (tzvar*3600)
print (tzvarsecs)
timestamps = soup('span', {'class': '_timestamp js-short-timestamp '})
dtinfo = [timestamp["data-time"] for timestamp in timestamps]
times = map(int, dtinfo)
adjtimes = [x+tzvarsecs for x in times]
</code></pre>
<p>All that is left is a reverse of a function like the one suggested below. How do I convert each POSIX time in the list to a readable format using a function?</p>
<p>END EDIT</p>
<hr>
<hr>
<p>The code below creates a csv file containing data scraped from a saved Twitter HTML file. </p>
<p>Twitter converts all the timestamps to the user's local time in the browser. I would like to have an input option for the user to adjust the timestamps by a certain number of hours so that the data for the tweet reflects the tweeter's local time. </p>
<p>I'm currently scraping an element called <code>'title'</code> that is a part of each permalink. I could just as easily scrape the POSIX time from each tweet instead.</p>
<pre><code>title="2:29 PM - 28 Sep 2015"
</code></pre>
<p>vs</p>
<pre><code>data-time="1443475777" data-time-ms="1443475777000"
</code></pre>
<p>How would I edit the following piece so it added a variable entered by the user to each timestamp? I don't need help with requesting input, I just need to know how to apply it to the list of timestamps after the input is passed to python.</p>
<pre><code>timestamps = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
datetime = [timestamp["title"] for timestamp in timestamps]
</code></pre>
<hr>
<p>Other questions related to this code/project.</p>
<p><a href="http://stackoverflow.com/questions/35051723/fix-encoding-error-with-loop-in-beautifulsoup4">Fix encoding error with loop in BeautifulSoup4?</a></p>
<p><a href="http://stackoverflow.com/questions/35006682/focusing-in-on-specific-results-while-scraping-twitter-with-python-and-beautiful">Focusing in on specific results while scraping Twitter with Python and Beautiful Soup 4?</a></p>
<p><a href="http://stackoverflow.com/questions/34912889/using-python-to-scrape-nested-divs-and-spans-in-twitter">Using Python to Scrape Nested Divs and Spans in Twitter?</a></p>
<hr>
<p>Full code.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import sys
import csv
import re
from datetime import datetime
from pytz import timezone
url = input("Enter the name of the file to be scraped:")
with open(url, encoding="utf-8") as infile:
    soup = BeautifulSoup(infile, "html.parser")
#url = 'https://twitter.com/search?q=%23bangkokbombing%20since%3A2015-08-10%20until%3A2015-09-30&src=typd&lang=en'
#headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}
#r = requests.get(url, headers=headers)
#data = r.text.encode('utf-8')
#soup = BeautifulSoup(data, "html.parser")
names = soup('strong', {'class': 'fullname js-action-profile-name show-popup-with-id'})
usernames = [name.contents for name in names]
handles = soup('span', {'class': 'username js-action-profile-name'})
userhandles = [handle.contents[1].contents[0] for handle in handles]
athandles = [('@')+abhandle for abhandle in userhandles]
links = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
urls = [link["href"] for link in links]
fullurls = [permalink for permalink in urls]
timestamps = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
datetime = [timestamp["title"] for timestamp in timestamps]
messagetexts = soup('p', {'class': 'TweetTextSize js-tweet-text tweet-text'})
messages = [messagetext for messagetext in messagetexts]
retweets = soup('button', {'class': 'ProfileTweet-actionButtonUndo js-actionButton js-actionRetweet'})
retweetcounts = [retweet.contents[3].contents[1].contents[1].string for retweet in retweets]
favorites = soup('button', {'class': 'ProfileTweet-actionButtonUndo u-linkClean js-actionButton js-actionFavorite'})
favcounts = [favorite.contents[3].contents[1].contents[1].string for favorite in favorites]
images = soup('div', {'class': 'content'})
imagelinks = [src.contents[5].img if len(src.contents) > 5 else "No image" for src in images]
#print (usernames, "\n", "\n", athandles, "\n", "\n", fullurls, "\n", "\n", datetime, "\n", "\n",retweetcounts, "\n", "\n", favcounts, "\n", "\n", messages, "\n", "\n", imagelinks)
rows = zip(usernames,athandles,fullurls,datetime,retweetcounts,favcounts,messages,imagelinks)
rownew = list(rows)
#print (rownew)
newfile = input("Enter a filename for the table:") + ".csv"
with open(newfile, 'w', encoding='utf-8') as f:
    writer = csv.writer(f, delimiter=",")
    writer.writerow(['Usernames', 'Handles', 'Urls', 'Timestamp', 'Retweets', 'Favorites', 'Message', 'Image Link'])
    for row in rownew:
        writer.writerow(row)
</code></pre>
| 1 | 2016-08-20T01:35:45Z | 39,049,954 | <p>Using your code as an example, the variable <code>datetime</code> stores a list of date strings. Let's dissect the process in 3 steps, just for clarity.</p>
<p>Example</p>
<pre><code>>>> datetime = [timestamp["title"] for timestamp in timestamps]
>>> print(datetime)
['2:13 AM - 29 Sep 2015', '2:29 PM - 28 Sep 2015', '8:04 AM - 28 Sep 2015']
</code></pre>
<p><strong>First step:</strong> convert it to a Python <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime" rel="nofollow">datetime object</a>.</p>
<pre><code>>>> datetime_obj = datetime.strptime('2:13 AM - 29 Sep 2015', '%I:%M %p - %d %b %Y')
>>> datetime_obj
datetime.datetime(2015, 9, 29, 2, 13)
</code></pre>
<p><strong>Second step:</strong> convert the datetime object to a Python <a href="https://docs.python.org/2/library/datetime.html#datetime.date.timetuple" rel="nofollow">structured time object</a>.</p>
<pre><code>>>> to_time = datetime_obj.timetuple()
>>> print(to_time)
time.struct_time(tm_year=2015, tm_mon=9, tm_mday=29, tm_hour=2, tm_min=13, tm_sec=0, tm_wday=1, tm_yday=272, tm_isdst=-1)
</code></pre>
<p><strong>Third step:</strong> convert the structured time object to a <a href="https://docs.python.org/2/library/time.html#time.time" rel="nofollow"><code>time</code></a> value using <a href="https://docs.python.org/2/library/time.html#time.mktime" rel="nofollow"><code>time.mktime</code></a>.</p>
<pre><code>>>> timestamp = time.mktime(to_time)
>>> print(timestamp)
1443503580.0
</code></pre>
<p>All together now.</p>
<pre><code>import time
from datetime import datetime
...
def str_to_ts(str_date):
    # %I, not %H: the title strings use a 12-hour clock with an AM/PM marker
    return time.mktime(datetime.strptime(str_date, '%I:%M %p - %d %b %Y').timetuple())

datetimes = [timestamp["title"] for timestamp in timestamps]
times = [str_to_ts(s) for s in datetimes]
</code></pre>
<p>PS: <code>datetime</code> is a bad choice for a variable name, especially in this context: it shadows the <code>datetime</code> class imported above. :-)</p>
<p><strong>Update</strong></p>
<p>To apply a function to each value of the list (note that <code>add_time</code> operates on POSIX timestamps, so convert the title strings with <code>str_to_ts</code> first):</p>
<pre><code>def add_time(timestamp, hours=0, minutes=0, seconds=0):
    return timestamp + seconds + (minutes * 60) + (hours * 60 * 60)

datetimes = [timestamp["title"] for timestamp in timestamps]
times = [add_time(str_to_ts(s), hours=5) for s in datetimes]
</code></pre>
<p><strong>Update 2</strong></p>
<p>To convert a timestamp back to a formatted date string:</p>
<pre><code>def timestamp_to_str(timestamp):
    return datetime.fromtimestamp(timestamp).strftime('%H:%M:%S %m/%d/%Y')
</code></pre>
<p>Example:</p>
<pre><code>>>> from time import time
>>> from datetime import datetime
>>> timestamp_to_str(time())
'17:01:47 08/29/2016'
</code></pre>
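<p>Putting the helpers together, a quick round trip on the sample date from the question: parse the title string, shift it by five hours, and format it back. (Illustrative sketch; <code>%I</code> is used because the titles are 12-hour clock times.)</p>

```python
import time
from datetime import datetime

def str_to_ts(str_date):
    # %I, not %H: the title strings use a 12-hour clock with AM/PM.
    return time.mktime(datetime.strptime(str_date, '%I:%M %p - %d %b %Y').timetuple())

def timestamp_to_str(timestamp):
    return datetime.fromtimestamp(timestamp).strftime('%H:%M:%S %m/%d/%Y')

ts = str_to_ts('2:29 PM - 28 Sep 2015')
shifted = ts + 5 * 3600            # add five hours, as whole seconds
print(timestamp_to_str(shifted))   # '19:29:00 09/28/2015' in most timezones
```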
| 1 | 2016-08-20T02:42:00Z | [
"python",
"web-scraping",
"timestamp",
"list-comprehension"
] |
Using Python to edit the timestamps in a list? Convert POSIX to readable format using a function | 39,049,665 | <p>SECOND EDIT:</p>
<p>Finished snippet for adjusting timezones and converting format. See correct answer below for details leading to this solution.</p>
<pre><code>tzvar = int(input("Enter the number of hours you'd like to add to the timestamp:"))
tzvarsecs = (tzvar*3600)
print (tzvarsecs)
def timestamp_to_str(timestamp):
    return datetime.fromtimestamp(timestamp).strftime('%H:%M:%S %m/%d/%Y')
timestamps = soup('span', {'class': '_timestamp js-short-timestamp '})
dtinfo = [timestamp["data-time"] for timestamp in timestamps]
times = map(int, dtinfo)
adjtimes = [x+tzvarsecs for x in times]
adjtimesfloat = [float(i) for i in adjtimes]
dtinfofloat = [float(i) for i in dtinfo]
finishedtimes = [x for x in map(timestamp_to_str, adjtimesfloat)]
originaltimes = [x for x in map(timestamp_to_str, dtinfofloat)]
</code></pre>
<p>END SECOND EDIT</p>
<hr>
<p>EDIT:</p>
<p>This code allows me to scrape the POSIX time from the HTML file and then add a number of hours entered by the user to the original value. Negative numbers will also work to subtract hours. The user will be working in whole hours as the changes are specifically to adjust for timezones.</p>
<pre><code>tzvar = int(input("Enter the number of hours you'd like to add to the timestamp:"))
tzvarsecs = (tzvar*3600)
print (tzvarsecs)
timestamps = soup('span', {'class': '_timestamp js-short-timestamp '})
dtinfo = [timestamp["data-time"] for timestamp in timestamps]
times = map(int, dtinfo)
adjtimes = [x+tzvarsecs for x in times]
</code></pre>
<p>All that is left is a reverse of a function like the one suggested below. How do I convert each POSIX time in the list to a readable format using a function?</p>
<p>END EDIT</p>
<hr>
<hr>
<p>The code below creates a csv file containing data scraped from a saved Twitter HTML file. </p>
<p>Twitter converts all the timestamps to the user's local time in the browser. I would like to have an input option for the user to adjust the timestamps by a certain number of hours so that the data for the tweet reflects the tweeter's local time. </p>
<p>I'm currently scraping an element called <code>'title'</code> that is a part of each permalink. I could just as easily scrape the POSIX time from each tweet instead.</p>
<pre><code>title="2:29 PM - 28 Sep 2015"
</code></pre>
<p>vs</p>
<pre><code>data-time="1443475777" data-time-ms="1443475777000"
</code></pre>
<p>How would I edit the following piece so it added a variable entered by the user to each timestamp? I don't need help with requesting input, I just need to know how to apply it to the list of timestamps after the input is passed to python.</p>
<pre><code>timestamps = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
datetime = [timestamp["title"] for timestamp in timestamps]
</code></pre>
<hr>
<p>Other questions related to this code/project.</p>
<p><a href="http://stackoverflow.com/questions/35051723/fix-encoding-error-with-loop-in-beautifulsoup4">Fix encoding error with loop in BeautifulSoup4?</a></p>
<p><a href="http://stackoverflow.com/questions/35006682/focusing-in-on-specific-results-while-scraping-twitter-with-python-and-beautiful">Focusing in on specific results while scraping Twitter with Python and Beautiful Soup 4?</a></p>
<p><a href="http://stackoverflow.com/questions/34912889/using-python-to-scrape-nested-divs-and-spans-in-twitter">Using Python to Scrape Nested Divs and Spans in Twitter?</a></p>
<hr>
<p>Full code.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import sys
import csv
import re
from datetime import datetime
from pytz import timezone
url = input("Enter the name of the file to be scraped:")
with open(url, encoding="utf-8") as infile:
    soup = BeautifulSoup(infile, "html.parser")
#url = 'https://twitter.com/search?q=%23bangkokbombing%20since%3A2015-08-10%20until%3A2015-09-30&src=typd&lang=en'
#headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}
#r = requests.get(url, headers=headers)
#data = r.text.encode('utf-8')
#soup = BeautifulSoup(data, "html.parser")
names = soup('strong', {'class': 'fullname js-action-profile-name show-popup-with-id'})
usernames = [name.contents for name in names]
handles = soup('span', {'class': 'username js-action-profile-name'})
userhandles = [handle.contents[1].contents[0] for handle in handles]
athandles = [('@')+abhandle for abhandle in userhandles]
links = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
urls = [link["href"] for link in links]
fullurls = [permalink for permalink in urls]
timestamps = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
datetime = [timestamp["title"] for timestamp in timestamps]
messagetexts = soup('p', {'class': 'TweetTextSize js-tweet-text tweet-text'})
messages = [messagetext for messagetext in messagetexts]
retweets = soup('button', {'class': 'ProfileTweet-actionButtonUndo js-actionButton js-actionRetweet'})
retweetcounts = [retweet.contents[3].contents[1].contents[1].string for retweet in retweets]
favorites = soup('button', {'class': 'ProfileTweet-actionButtonUndo u-linkClean js-actionButton js-actionFavorite'})
favcounts = [favorite.contents[3].contents[1].contents[1].string for favorite in favorites]
images = soup('div', {'class': 'content'})
imagelinks = [src.contents[5].img if len(src.contents) > 5 else "No image" for src in images]
#print (usernames, "\n", "\n", athandles, "\n", "\n", fullurls, "\n", "\n", datetime, "\n", "\n",retweetcounts, "\n", "\n", favcounts, "\n", "\n", messages, "\n", "\n", imagelinks)
rows = zip(usernames,athandles,fullurls,datetime,retweetcounts,favcounts,messages,imagelinks)
rownew = list(rows)
#print (rownew)
newfile = input("Enter a filename for the table:") + ".csv"
with open(newfile, 'w', encoding='utf-8') as f:
    writer = csv.writer(f, delimiter=",")
    writer.writerow(['Usernames', 'Handles', 'Urls', 'Timestamp', 'Retweets', 'Favorites', 'Message', 'Image Link'])
    for row in rownew:
        writer.writerow(row)
</code></pre>
| 1 | 2016-08-20T01:35:45Z | 39,050,310 | <p>This is what I was thinking but not sure if this is what you're after:</p>
<pre><code>>>> timestamps = ["1:00 PM - 28 Sep 2015", "2:00 PM - 28 Sep 2016", "3:00 PM - 29 Sep 2015"]
>>> datetime = dict(enumerate(timestamps))
>>> datetime
{0: '1:00 PM - 28 Sep 2015',
1: '2:00 PM - 28 Sep 2016',
2: '3:00 PM - 29 Sep 2015'}
</code></pre>
| 0 | 2016-08-20T03:54:18Z | [
"python",
"web-scraping",
"timestamp",
"list-comprehension"
] |
Using Python to edit the timestamps in a list? Convert POSIX to readable format using a function | 39,049,665 | <p>SECOND EDIT:</p>
<p>Finished snippet for adjusting timezones and converting format. See correct answer below for details leading to this solution.</p>
<pre><code>tzvar = int(input("Enter the number of hours you'd like to add to the timestamp:"))
tzvarsecs = (tzvar*3600)
print (tzvarsecs)
def timestamp_to_str(timestamp):
    return datetime.fromtimestamp(timestamp).strftime('%H:%M:%S %m/%d/%Y')
timestamps = soup('span', {'class': '_timestamp js-short-timestamp '})
dtinfo = [timestamp["data-time"] for timestamp in timestamps]
times = map(int, dtinfo)
adjtimes = [x+tzvarsecs for x in times]
adjtimesfloat = [float(i) for i in adjtimes]
dtinfofloat = [float(i) for i in dtinfo]
finishedtimes = [x for x in map(timestamp_to_str, adjtimesfloat)]
originaltimes = [x for x in map(timestamp_to_str, dtinfofloat)]
</code></pre>
<p>END SECOND EDIT</p>
<hr>
<p>EDIT:</p>
<p>This code allows me to scrape the POSIX time from the HTML file and then add a number of hours entered by the user to the original value. Negative numbers will also work to subtract hours. The user will be working in whole hours as the changes are specifically to adjust for timezones.</p>
<pre><code>tzvar = int(input("Enter the number of hours you'd like to add to the timestamp:"))
tzvarsecs = (tzvar*3600)
print (tzvarsecs)
timestamps = soup('span', {'class': '_timestamp js-short-timestamp '})
dtinfo = [timestamp["data-time"] for timestamp in timestamps]
times = map(int, dtinfo)
adjtimes = [x+tzvarsecs for x in times]
</code></pre>
<p>All that is left is a reverse of a function like the one suggested below. How do I convert each POSIX time in the list to a readable format using a function?</p>
<p>END EDIT</p>
<hr>
<hr>
<p>The code below creates a csv file containing data scraped from a saved Twitter HTML file. </p>
<p>Twitter converts all the timestamps to the user's local time in the browser. I would like to have an input option for the user to adjust the timestamps by a certain number of hours so that the data for the tweet reflects the tweeter's local time. </p>
<p>I'm currently scraping an element called <code>'title'</code> that is a part of each permalink. I could just as easily scrape the POSIX time from each tweet instead.</p>
<pre><code>title="2:29 PM - 28 Sep 2015"
</code></pre>
<p>vs</p>
<pre><code>data-time="1443475777" data-time-ms="1443475777000"
</code></pre>
<p>How would I edit the following piece so it added a variable entered by the user to each timestamp? I don't need help with requesting input, I just need to know how to apply it to the list of timestamps after the input is passed to python.</p>
<pre><code>timestamps = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
datetime = [timestamp["title"] for timestamp in timestamps]
</code></pre>
<hr>
<p>Other questions related to this code/project.</p>
<p><a href="http://stackoverflow.com/questions/35051723/fix-encoding-error-with-loop-in-beautifulsoup4">Fix encoding error with loop in BeautifulSoup4?</a></p>
<p><a href="http://stackoverflow.com/questions/35006682/focusing-in-on-specific-results-while-scraping-twitter-with-python-and-beautiful">Focusing in on specific results while scraping Twitter with Python and Beautiful Soup 4?</a></p>
<p><a href="http://stackoverflow.com/questions/34912889/using-python-to-scrape-nested-divs-and-spans-in-twitter">Using Python to Scrape Nested Divs and Spans in Twitter?</a></p>
<hr>
<p>Full code.</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import sys
import csv
import re
from datetime import datetime
from pytz import timezone
url = input("Enter the name of the file to be scraped:")
with open(url, encoding="utf-8") as infile:
    soup = BeautifulSoup(infile, "html.parser")
#url = 'https://twitter.com/search?q=%23bangkokbombing%20since%3A2015-08-10%20until%3A2015-09-30&src=typd&lang=en'
#headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}
#r = requests.get(url, headers=headers)
#data = r.text.encode('utf-8')
#soup = BeautifulSoup(data, "html.parser")
names = soup('strong', {'class': 'fullname js-action-profile-name show-popup-with-id'})
usernames = [name.contents for name in names]
handles = soup('span', {'class': 'username js-action-profile-name'})
userhandles = [handle.contents[1].contents[0] for handle in handles]
athandles = [('@')+abhandle for abhandle in userhandles]
links = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
urls = [link["href"] for link in links]
fullurls = [permalink for permalink in urls]
timestamps = soup('a', {'class': 'tweet-timestamp js-permalink js-nav js-tooltip'})
datetime = [timestamp["title"] for timestamp in timestamps]
messagetexts = soup('p', {'class': 'TweetTextSize js-tweet-text tweet-text'})
messages = [messagetext for messagetext in messagetexts]
retweets = soup('button', {'class': 'ProfileTweet-actionButtonUndo js-actionButton js-actionRetweet'})
retweetcounts = [retweet.contents[3].contents[1].contents[1].string for retweet in retweets]
favorites = soup('button', {'class': 'ProfileTweet-actionButtonUndo u-linkClean js-actionButton js-actionFavorite'})
favcounts = [favorite.contents[3].contents[1].contents[1].string for favorite in favorites]
images = soup('div', {'class': 'content'})
imagelinks = [src.contents[5].img if len(src.contents) > 5 else "No image" for src in images]
#print (usernames, "\n", "\n", athandles, "\n", "\n", fullurls, "\n", "\n", datetime, "\n", "\n",retweetcounts, "\n", "\n", favcounts, "\n", "\n", messages, "\n", "\n", imagelinks)
rows = zip(usernames,athandles,fullurls,datetime,retweetcounts,favcounts,messages,imagelinks)
rownew = list(rows)
#print (rownew)
newfile = input("Enter a filename for the table:") + ".csv"
with open(newfile, 'w', encoding='utf-8') as f:
    writer = csv.writer(f, delimiter=",")
    writer.writerow(['Usernames', 'Handles', 'Urls', 'Timestamp', 'Retweets', 'Favorites', 'Message', 'Image Link'])
    for row in rownew:
        writer.writerow(row)
</code></pre>
| 1 | 2016-08-20T01:35:45Z | 39,090,597 | <p>It seems you are looking for <code>datetime.timedelta</code> (<a href="https://docs.python.org/3.5/library/datetime.html#timedelta-objects" rel="nofollow">documentation here</a>). You can convert your inputs into <code>datetime.datetime</code> objects in various ways, for example, </p>
<pre><code>timestamp = datetime.datetime.fromtimestamp(1443475777)
</code></pre>
<p>Then you can perform arithmetic on them with <code>timedelta</code> objects. A <code>timedelta</code> just represents a change in time. You can construct one with an <code>hours</code> argument like so: </p>
<pre><code>delta = datetime.timedelta(hours=1)
</code></pre>
<p>And then <code>timestamp + delta</code> will give you another <code>datetime</code> one hour in the future. Subtraction will work as well, as will other arbitrary time intervals.</p>
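<p>A short, self-contained sketch combining the two (the stamp is the <code>data-time</code> value from the question; the -7 hour offset is just an example of a user-entered timezone adjustment):</p>

```python
from datetime import datetime, timedelta

timestamp = datetime.fromtimestamp(1443475777)  # data-time from the tweet
offset = timedelta(hours=-7)                    # example user-entered adjustment
shifted = timestamp + offset

print(shifted.strftime('%H:%M:%S %m/%d/%Y'))
```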
| 0 | 2016-08-23T00:30:49Z | [
"python",
"web-scraping",
"timestamp",
"list-comprehension"
] |
Is it possible to make a wit.ai bot remember/reuse a context across stories? | 39,049,703 | <p>I am creating a chatbot using Wit.ai and am trying to achieve a more conversational style of interaction. Currently I have several stories that all require a location to function but are somewhat related. Here is an example of how I interact with my bot now:</p>
<pre><code>What is the weather in Los Angeles, CA?
Bot response
How many people live in Los Angeles, CA?
Bot response
</code></pre>
<p>but I would like my chatbot to remember that I am talking about Los Angeles so the interaction would look like this:</p>
<pre><code>What is the weather in Los Angeles, CA?
Bot Response
How many people live there?
Bot Response
</code></pre>
<p>even though 2 different stories are being executed. Currently I was able to accomplish this by adding an extra function (which I use in the same way merge was used) and a singleton to my code that pulls values from entities and stores them for later use according to session info like this: </p>
<pre><code>session_info = {}
def _init_store(session_id):
    global session_info
    print "session info", session_info
    if session_id in session_info:
        pass
    else:
        s_info = {}
        session_info[session_id] = s_info

def get_stored_info(session_id, key):
    global session_info
    try:
        return session_info[session_id][key]
    except:
        return None

def add_stored_info(session_id, key, data):
    _init_store(session_id)
    global session_info
    try:
        session_info[session_id][key] = data
        return True
    except:
        return False
</code></pre>
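<p>As an aside, those three helpers can be collapsed with <code>dict.setdefault</code>; the sketch below is just a compact restatement of the same session store (names are illustrative and not part of the Wit.ai API):</p>

```python
# Compact version of the session store described above.
session_info = {}

def add_stored_info(session_id, key, data):
    # setdefault creates the per-session dict on first use.
    session_info.setdefault(session_id, {})[key] = data

def get_stored_info(session_id, key):
    return session_info.get(session_id, {}).get(key)

add_stored_info('session-1', 'loc', 'Los Angeles, CA')
print(get_stored_info('session-1', 'loc'))  # Los Angeles, CA
```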
<p>I've read through all the docs and am slightly confused by what they say. The docs say this about contexts: </p>
<blockquote>
<p>Converse allows you to build conversational app. It allows you to
predict what your app should do at any given state in the conversation
based on the current context and the user query.</p>
<p>The context is an object you manage to tell Wit.ai about the current
state of the conversation. Wit.ai is able to predict the next action
your bot should take by comparing, among other things, the context
described in your Stories with the one you send to the /converse
endpoint. Wit will never update the context by itself; you have to
manage the context object on your side. There is usually one context
object per session. In addition to helping Wit.ai predict the next
action, the context is used to create dynamic answers in templates.</p>
</blockquote>
<p>I read this to mean that wit will pass around the context object I manage and not make any changes to it, meaning that I am responsible for adding and removing keys from it. However I also found <a href="https://github.com/wit-ai/wit" rel="nofollow">this</a>, which states that "Conversation-aware entity extraction" has yet to be implemented, so I am pretty confused about whether this is doable or not. </p>
<p>Also I have found that when I look at the value of <code>request['context']</code> that is passed in to each of my story execution functions, the value of context is just an empty dictionary no matter what was added or removed before, even though the docs above say that wit never touches your context. </p>
<p>Is this possible to do in wit itself, or is there a wit-approved way to achieve this, or is what I'm doing now the best I can do? If I had to make a guess it would seem like it isn't supported yet, but it seems like such a basic chatbot feature and the docs are ambiguous enough that I also could just be overlooking the proper way to do this. Any help would be greatly appreciated. I am using python in case that is relevant to anyone. </p>
| 2 | 2016-08-20T01:47:14Z | 39,340,512 | <p>It should work the way you described it. Yes Wit doesn't update the context, so if you want to keep/remember something for later, you will have to use a client-side action to store it in the context. In you example, you will store the value of the entity wit/location in a context key, let's say <code>loc</code></p>
<p>In your 'How many people live there?" story, you will have an client-side action that check is context.loc exists, if not, would ask for it via a branch a la <a href="https://wit.ai/docs/recipes#build-a-slot-based-bot" rel="nofollow">https://wit.ai/docs/recipes#build-a-slot-based-bot</a> recipe</p>
| 0 | 2016-09-06T04:09:42Z | [
"python",
"wit.ai"
] |
How to randomly assign values row-wise in a numpy array | 39,049,716 | <p>My google-fu has failed me!
I have a 10x10 numpy array initialized to <code>0</code> as follows:</p>
<pre><code>arr2d = np.zeros((10,10))
</code></pre>
<p>For each row in <code>arr2d</code>, I want to assign 3 random columns to <code>1</code>. I am able to do it using a loop as follows:</p>
<pre><code>for row in arr2d:
rand_cols = np.random.randint(0,9,3)
row[rand_cols] = 1
</code></pre>
<p><strong>output:</strong></p>
<pre><code>array([[ 0., 0., 0., 0., 0., 0., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 1., 0.],
[ 0., 0., 1., 0., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.],
[ 1., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
[ 1., 0., 1., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 1., 0., 1., 0.],
[ 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
</code></pre>
<p>Is there a way to exploit numpy or array indexing/slicing to achieve the same result in a more pythonic/elegant way (preferably in 1 or 2 lines of code)?</p>
| 4 | 2016-08-20T01:50:01Z | 39,049,908 | <p>I'm not sure how good this would be in terms of performance, but it's fairly concise. Note that on Python 3 <code>map</code> is lazy, so you would need to wrap it in <code>list()</code> (or use a plain loop) for the shuffles to actually run.</p>
<pre><code>arr2d[:, :3] = 1
map(np.random.shuffle, arr2d)
</code></pre>
| 0 | 2016-08-20T02:32:06Z | [
"python",
"numpy",
"vectorization"
] |
How to randomly assign values row-wise in a numpy array | 39,049,716 | <p>My google-fu has failed me!
I have a 10x10 numpy array initialized to <code>0</code> as follows:</p>
<pre><code>arr2d = np.zeros((10,10))
</code></pre>
<p>For each row in <code>arr2d</code>, I want to assign 3 random columns to <code>1</code>. I am able to do it using a loop as follows:</p>
<pre><code>for row in arr2d:
rand_cols = np.random.randint(0,9,3)
row[rand_cols] = 1
</code></pre>
<p><strong>output:</strong></p>
<pre><code>array([[ 0., 0., 0., 0., 0., 0., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 1., 0.],
[ 0., 0., 1., 0., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.],
[ 1., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
[ 1., 0., 1., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 1., 0., 1., 0.],
[ 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
</code></pre>
<p>Is there a way to exploit numpy or array indexing/slicing to achieve the same result in a more pythonic/elegant way (preferably in 1 or 2 lines of code)?</p>
| 4 | 2016-08-20T01:50:01Z | 39,050,188 | <p>Use answers from <a href="http://stackoverflow.com/questions/8505651/non-repetitive-random-number-in-numpy">this question</a> to generate non-repeating random numbers. You can use <code>random.sample</code> from Python's <code>random</code> module, or <code>np.random.choice</code>.</p>
<p>So, just a small modification to your code:</p>
<pre><code>>>> import numpy as np
>>> for row in arr2d:
... rand_cols = np.random.choice(range(10), 3, replace=False)
... # Or the python standard lib alternative (use `import random`)
... # rand_cols = random.sample(range(10), 3)
... row[rand_cols] = 1
...
>>> arr2d
array([[ 0., 0., 0., 0., 0., 1., 1., 1., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 0., 0., 1., 1.],
[ 1., 0., 0., 0., 0., 0., 1., 0., 1., 0.],
[ 0., 0., 1., 1., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 1., 0., 1., 1.],
[ 0., 0., 1., 1., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.],
[ 0., 1., 0., 0., 1., 1., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 1., 0., 0., 0., 1., 0.],
[ 0., 0., 1., 1., 0., 0., 0., 0., 1., 0.]])
</code></pre>
<p>I don't think you can really leverage column slicing here to <em>set</em> values to 1, unless you're generating the randomized array from scratch. This is because your column indices are random <em>for each row</em>. You're better off leaving it in the form of a loop for readability.</p>
| 0 | 2016-08-20T03:31:34Z | [
"python",
"numpy",
"vectorization"
] |
How to randomly assign values row-wise in a numpy array | 39,049,716 | <p>My google-fu has failed me!
I have a 10x10 numpy array initialized to <code>0</code> as follows:</p>
<pre><code>arr2d = np.zeros((10,10))
</code></pre>
<p>For each row in <code>arr2d</code>, I want to assign 3 random columns to <code>1</code>. I am able to do it using a loop as follows:</p>
<pre><code>for row in arr2d:
rand_cols = np.random.randint(0,9,3)
row[rand_cols] = 1
</code></pre>
<p><strong>output:</strong></p>
<pre><code>array([[ 0., 0., 0., 0., 0., 0., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 1., 0.],
[ 0., 0., 1., 0., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.],
[ 1., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
[ 1., 0., 1., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 1., 0., 1., 0.],
[ 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
</code></pre>
<p>Is there a way to exploit numpy or array indexing/slicing to achieve the same result in a more pythonic/elegant way (preferably in 1 or 2 lines of code)?</p>
| 4 | 2016-08-20T01:50:01Z | 39,050,972 | <p>Once you have the <code>arr2d</code> initialized with <code>arr2d = np.zeros((10,10))</code>, you can use a vectorized approach with a <code>two-liner</code> like so -</p>
<pre><code># Generate random unique 3 column indices for 10 rows
idx = np.random.rand(10,10).argsort(1)[:,:3]
# Assign them into initialized array
arr2d[np.arange(10)[:,None],idx] = 1
</code></pre>
<p>Or cram everything into a one-liner if you like it that way -</p>
<pre><code>arr2d[np.arange(10)[:,None],np.random.rand(10,10).argsort(1)[:,:3]] = 1
</code></pre>
<p>Sample run -</p>
<pre><code>In [11]: arr2d = np.zeros((10,10)) # Initialize array
In [12]: idx = np.random.rand(10,10).argsort(1)[:,:3]
In [13]: arr2d[np.arange(10)[:,None],idx] = 1
In [14]: arr2d # Verify by manual inspection
Out[14]:
array([[ 0., 1., 0., 1., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 1., 1., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 1., 0., 1., 0., 1.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 1., 1., 0., 0., 0., 1., 0., 0.],
[ 1., 0., 0., 0., 0., 1., 0., 0., 1., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 1., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 1., 0., 1., 0.],
[ 1., 0., 0., 1., 0., 0., 0., 0., 1., 0.],
[ 0., 1., 0., 1., 0., 0., 0., 0., 0., 1.]])
In [15]: arr2d.sum(1) # Verify by counting ones in each row
Out[15]: array([ 3., 3., 3., 3., 3., 3., 3., 3., 3., 3.])
</code></pre>
<p>Note: If you are looking for performance, I would suggest going with a <code>np.argpartition</code> based approach as listed in <a href="http://stackoverflow.com/a/35416369/3293881"><code>this other post</code></a>.</p>
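<p>For reference, a sketch of that <code>np.argpartition</code>-based variant (it only partially orders each row instead of fully sorting it, which is cheaper for large arrays):</p>

```python
import numpy as np

arr2d = np.zeros((10, 10))
# argpartition guarantees the indices of the 3 smallest random values
# end up in the first 3 slots of each row, in arbitrary order --
# enough to pick 3 unique columns per row without a full sort
idx = np.random.rand(10, 10).argpartition(3, axis=1)[:, :3]
arr2d[np.arange(10)[:, None], idx] = 1
```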
| 2 | 2016-08-20T05:58:25Z | [
"python",
"numpy",
"vectorization"
] |
Python Sort not Working | 39,049,752 | <p>I have code to sort a list of tuples:</p>
<pre><code>s = "betty bought a bit of butter but the butter was bitter"
words = s.split()
l = []
k = []
unique_words = sorted(set(words))
for word in unique_words:
k.append(word)
l.append(words.count(word))
z = zip(k,l)
print z
reversed(sorted(z, key=lambda x: x[1]))
print z
</code></pre>
<p>z is the same, list doesn't get sorted or even reversed.</p>
<p>I am trying to sort by the integer value of count.</p>
| 2 | 2016-08-20T01:57:15Z | 39,049,776 | <p>It's almost correct. If you check <code>help(reversed)</code> in a Python REPL you'll find that it returns an iterator over your sorted result, rather than reversing the list in place.</p>
<p>If you want <code>z</code> to store your updated list, reverse-sorted on count, you'll need to reassign <code>z</code>:</p>
<pre><code>z = list(reversed(sorted(z, key=lambda x: x[1])))
</code></pre>
<p>Edit: just to clarify, the outermost <code>list</code> call converts the iterator returned by <code>reversed</code> into an ordinary list of the tuples it yields.</p>
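<p>A quick illustration of what <code>reversed</code> actually returns:</p>

```python
z = [('a', 1), ('b', 3), ('c', 2)]
it = reversed(sorted(z, key=lambda x: x[1]))
# `it` is an iterator object; `z` itself is untouched until we consume it
result = list(it)
```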
| 2 | 2016-08-20T02:03:21Z | [
"python",
"python-2.7",
"sorting"
] |
Python Sort not Working | 39,049,752 | <p>I have code to sort a list of tuples:</p>
<pre><code>s = "betty bought a bit of butter but the butter was bitter"
words = s.split()
l = []
k = []
unique_words = sorted(set(words))
for word in unique_words:
k.append(word)
l.append(words.count(word))
z = zip(k,l)
print z
reversed(sorted(z, key=lambda x: x[1]))
print z
</code></pre>
<p>z is the same, list doesn't get sorted or even reversed.</p>
<p>I am trying to sort by the integer value of count.</p>
| 2 | 2016-08-20T01:57:15Z | 39,049,777 | <p><code>reversed</code> and <code>sorted</code> do not sort in-place; instead they return the newly sorted and reversed object. Change the second to last line to </p>
<pre><code>z = list(reversed(sorted(z, key=lambda x: x[1])))
</code></pre>
<p>and it will work. The <code>list</code> call is because <code>reversed</code> returns an iterator rather than a list (on Python 3, at least).</p>
<p>It might be a bit less verbose to do the following</p>
<pre><code>z = sorted(z, key=lambda x: x[1], reverse=True)
</code></pre>
| 5 | 2016-08-20T02:03:23Z | [
"python",
"python-2.7",
"sorting"
] |
Python Sort not Working | 39,049,752 | <p>I have code to sort a list of tuples:</p>
<pre><code>s = "betty bought a bit of butter but the butter was bitter"
words = s.split()
l = []
k = []
unique_words = sorted(set(words))
for word in unique_words:
k.append(word)
l.append(words.count(word))
z = zip(k,l)
print z
reversed(sorted(z, key=lambda x: x[1]))
print z
</code></pre>
<p>z is the same, list doesn't get sorted or even reversed.</p>
<p>I am trying to sort by the integer value of count.</p>
| 2 | 2016-08-20T01:57:15Z | 39,049,778 | <p>For an in-place sort, you should use <code>z.sort()</code>. </p>
<p>If you insist on using <code>sorted</code>, then send the value back to <code>z</code>.</p>
<p>So, use either,</p>
<pre><code>z.sort(key = lambda x:x[1])
z.reverse()
</code></pre>
<p>Or,</p>
<pre><code>z = reversed(sorted(z, key=lambda x: x[1]))
</code></pre>
<p>Or, a more sophisticated solution could be:</p>
<pre><code>z = sorted(z, key=lambda x: x[1], reverse= True)
</code></pre>
<p>As a matter of fact, you can get the end result more easily by using <code>collections.Counter()</code></p>
<pre><code>from collections import Counter
z = sorted(Counter(s.split()).items(), key = lambda x:x[1], reverse = True)
</code></pre>
<p>Sorting by multiple keys is fine; you can pass them as a tuple. In your case, the solution would be:</p>
<pre><code># first sort by negatives of the second item, then alphabetically.
z = sorted(z, key=lambda x: (-x[1],x[0]))
</code></pre>
<p>Output:</p>
<pre><code>[('butter', 2), ('a', 1), ('betty', 1), ('bit', 1), ('bitter', 1),
('bought', 1), ('but', 1), ('of', 1), ('the', 1), ('was', 1)]
</code></pre>
| 4 | 2016-08-20T02:03:33Z | [
"python",
"python-2.7",
"sorting"
] |
Python Sort not Working | 39,049,752 | <p>I have code to sort a list of tuples:</p>
<pre><code>s = "betty bought a bit of butter but the butter was bitter"
words = s.split()
l = []
k = []
unique_words = sorted(set(words))
for word in unique_words:
k.append(word)
l.append(words.count(word))
z = zip(k,l)
print z
reversed(sorted(z, key=lambda x: x[1]))
print z
</code></pre>
<p>z is the same, list doesn't get sorted or even reversed.</p>
<p>I am trying to sort by the integer value of count.</p>
| 2 | 2016-08-20T01:57:15Z | 39,049,806 | <p>Least changes to your code:</p>
<pre><code>s = "betty bought a bit of butter but the butter was bitter"
words = s.split()
l = []
k = []
unique_words = sorted(set(words))
for word in unique_words:
k.append(word)
l.append(words.count(word))
z = zip(k,l)
print z
z = sorted(z, key=lambda x: x[1] , reverse=True)
print z
</code></pre>
<p>sorted() does not sort in place and it has a built-in reverse option that you omitted. </p>
| 1 | 2016-08-20T02:08:21Z | [
"python",
"python-2.7",
"sorting"
] |
Python Sort not Working | 39,049,752 | <p>I have code to sort a list of tuples:</p>
<pre><code>s = "betty bought a bit of butter but the butter was bitter"
words = s.split()
l = []
k = []
unique_words = sorted(set(words))
for word in unique_words:
k.append(word)
l.append(words.count(word))
z = zip(k,l)
print z
reversed(sorted(z, key=lambda x: x[1]))
print z
</code></pre>
<p>z is the same, list doesn't get sorted or even reversed.</p>
<p>I am trying to sort by the integer value of count.</p>
| 2 | 2016-08-20T01:57:15Z | 39,049,830 | <p>To count the words in the string, you can simply use <code>Counter</code> from <code>collections</code>. Then sort it in the descending order of counts.</p>
<p>Your code can be shortened to</p>
<pre><code>from collections import Counter
s = "betty bought a bit of butter but the butter was bitter"
c = Counter(i for i in s.split())
print sorted(c.items(),key=lambda x:x[1],reverse=True)
</code></pre>
| 1 | 2016-08-20T02:14:10Z | [
"python",
"python-2.7",
"sorting"
] |
Adding jinja template dynamically | 39,049,852 | <p>I have a jinja template that is the only content inside a set of div tags.</p>
<pre><code><div id="section0">
{% include 'temppage.html' %}
</div>
</code></pre>
<p>When I press a button, I want to replace everything between the tags with something else. I was hoping to replace it with another jinja template, "{% include 'realpage.html' %}", but first I am unsure of how to replace the entire section, instead of just replacing a single word. Second, can I even add a jinja template dynamically, or do I need to replace it with a string containing the contents of the file directly.</p>
| 1 | 2016-08-20T02:19:17Z | 39,050,351 | <p>You could use a JS framework (such as Angular, React...) in order to achieve this...I am assuming you are trying to build a single page app?</p>
<p>Otherwise, you will have to rely more on Javascript in order to change the HTML under you div depending on what you click. For example, if you have button 1, 2, 3. Each rendering a different HTML template upon clicking.</p>
<p>Example (using jQuery):</p>
<pre><code>$(document).on('click', '.some-class', function() {
document.getElementById("section0").innerHTML = "something";
});
</code></pre>
<p>fyi: "something" can be an html structure.</p>
| 0 | 2016-08-20T04:01:23Z | [
"javascript",
"python",
"html",
"jinja2"
] |
Adding jinja template dynamically | 39,049,852 | <p>I have a jinja template that is the only content inside a set of div tags.</p>
<pre><code><div id="section0">
{% include 'temppage.html' %}
</div>
</code></pre>
<p>When I press a button, I want to replace everything between the tags with something else. I was hoping to replace it with another jinja template, "{% include 'realpage.html' %}", but first I am unsure of how to replace the entire section, instead of just replacing a single word. Second, can I even add a jinja template dynamically, or do I need to replace it with a string containing the contents of the file directly.</p>
| 1 | 2016-08-20T02:19:17Z | 39,068,358 | <p>As glls said, replacing the content can be done with:</p>
<pre><code>document.getElementById("section0").innerHTML = "something";
</code></pre>
<p>As for adding a jinja template dynamically, you need to replace the innerHTML with a multi-line string containing the jinja template, which is written with backticks, "`". So it would look like this:</p>
<pre><code>document.getElementById("section0").innerHTML = `{% include 'realpage.html' %}`;
</code></pre>
<p>The template is rendered when the page is generated on the server (which is unavoidable as far as I'm aware), so when inspecting the html of the live page, the multi-line string will contain whatever is in the file you are including.</p>
| 0 | 2016-08-21T19:59:52Z | [
"javascript",
"python",
"html",
"jinja2"
] |
Finding the area of intersection of multiple overlapping rectangles in Python | 39,049,929 | <p>I tried using the algorithm shown here: <a href="https://discuss.leetcode.com/topic/15733/my-java-solution-sum-of-areas-overlapped-area" rel="nofollow">https://discuss.leetcode.com/topic/15733/my-java-solution-sum-of-areas-overlapped-area</a></p>
<p>However, that algorithm only deals with finding the areas of only TWO overlapped rectangles. </p>
<p>How would I go about finding the area of the intersection of, say, 3, 4, or 5 overlapping rectangles, if I know the length and breadth of each rectangle?</p>
| 1 | 2016-08-20T02:37:30Z | 39,050,030 | <p><a href="http://toblerity.org/shapely/manual.html" rel="nofollow">Shapely</a> is a good library for stuff like this. </p>
<pre><code>from shapely.geometry import box
# make some rectangles (for demonstration purposes)
rect1 = box(0,0,5,2)
rect2 = box(0.5,0.5,3,3)
rect3 = box(2.5,2.5,4,6)
rect_list = [rect1, rect2, rect3]
# find intersection of rectangles (probably a more elegant way to do this)
for rect in rect_list[1:]:
rect1 = rect1.intersection(rect)
intersection = rect1
</code></pre>
<p>To visualize what's happening here. I plot the rectangles and their intersection:</p>
<pre><code>from matplotlib import pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon
# plot the rectangles before and after merging
patches = PatchCollection([Polygon(a.exterior) for a in rect_list], facecolor='red', linewidth=.5, alpha=.5)
intersect_patch = PatchCollection([Polygon(intersection.exterior)], facecolor='red', linewidth=.5, alpha=.5)
# make figure
fig, ax = plt.subplots(1,2, subplot_kw=dict(aspect='equal'))
ax[0].add_collection(patches, autolim=True)
ax[0].autoscale_view()
ax[0].set_title('separate polygons')
ax[1].add_collection(intersect_patch, autolim=True)
ax[1].set_title('intersection = single polygon')
ax[1].set_xlim(ax[0].get_xlim())
ax[1].set_ylim(ax[0].get_ylim())
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/XAoxD.png" rel="nofollow"><img src="http://i.stack.imgur.com/XAoxD.png" alt="enter image description here"></a></p>
| 2 | 2016-08-20T02:57:37Z | [
"python",
"area",
"rectangles"
] |
Calling .recv(1024) and .send() twice, nothing happening (Python) | 39,049,941 | <p>I am trying to learn socket coding right now, and I wrote a little piece of process-to-process communication.
This is the server code:</p>
<pre><code>import socket
s = socket.socket()
host = socket.gethostname()
port = 17752
s.bind((host, port))
s.listen(5)
while True:
(client, address) = s.accept()
print(address, 'just connected!')
message = input("Would you like to close the connection? (y/n)")
if message == 'y':
message = "False"
client.send(message.encode(encoding="utf_8"))
client.close()
break
elif message == 'n':
print("sending message...")
testing = "Do you want to close the connection?"
client.send(testing.encode(encoding='utf_8'))
print("sent!")
</code></pre>
<p>And the client code:</p>
<pre><code>import socket
client = socket.socket()
host = socket.gethostname()
port = 17752
client.connect((host, port))
while True:
print("awaiting closing message...")
closing = client.recv(1024)
closing = closing.decode(encoding='utf_8')
print("Closing message recieved and decoded")
if closing == 'False':
print("message is false, breaking loop")
break
else:
print("Awaiting message...")
recieved = client.recv(1024)
recieved = recieved.decode(encoding='utf_8')
print("Message recieved and decoded")
print(recieved)
sd = input('(y/n) >')
if sd == 'y':
print("Closing connection")
client.close()
break
print("Sorry, the server closed the connection!")
</code></pre>
<p>What is it meant to do?</p>
<p>It is basically to learn and practice socket coding.
It should be a program that sends data from the Server to the Client with both being able to terminate the connection by answering y or n to the questions.
If both sides keep answering n the program just keeps running.
As soon as someone answers y it terminates either the Server or the client.</p>
<p>Now, I don't know what the heck is wrong there.
If I type 'y' for the Server's question "Would you like to close this connection?" it all works as it should.</p>
<p>If I type 'n' the Server does what it should, but the client does not receive anything. Most of the 'print' statements are for debugging. That's how I know the Server works fine.</p>
<p>What is wrong there? I tried to find it, I couldn't.</p>
<p>I am kinda new to python and new to socket coding. So keep it easy please.
Thanks.</p>
<p>(I run it with Batch scripts under Win10 cmd)
(Since it is Process-to-Process it is probably not called a "Server"?)</p>
| 0 | 2016-08-20T02:39:30Z | 39,050,401 | <p>In you code each <code>connect</code> should have a matching <code>accept</code> on server side.<br>
Your client <code>connect</code>s once per session,
but the server <code>accept</code>s after each message, so at the point where the second <code>recv</code> is invoked the server is already trying to accept another client.
Apparently your server is supposed to handle only one client,
so you can just move the call to <code>accept</code> out of the loop:</p>
<pre><code>s.listen(5)
(client, address) = s.accept()
print(address, 'just connected!')
while True:
message = raw_input("Would you like to close the connection? (y/n)")
</code></pre>
| 0 | 2016-08-20T04:12:42Z | [
"python",
"sockets",
"process",
"send",
"recv"
] |
Sublime Text 3 unable to import python module although importing from command line is possible? | 39,050,042 | <p>When I try to build with python in ST3, I get an import error when I do</p>
<pre><code>import caffe
</code></pre>
<p>but when I simply ran on the terminal, typing</p>
<pre><code>$ python
>>> import caffe
</code></pre>
<p>it works. On my sublime text 3 I still can import other modules like numpy and matplotlib.</p>
<p>This is the sublime python build I found (is this the right location? Why is it not extracted out but instead in a package?):
The directory is: /opt/sublime_text/Packages/Python.sublime-package</p>
<p>and the file python.sublime-build in the Python.sublime-package is:</p>
<pre><code>{
"shell_cmd": "python -u \"$file\"",
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python",
"env": {"PYTHONIOENCODING": "utf-8"},
"variants":
[
{
"name": "Syntax Check",
"shell_cmd": "python -m py_compile \"${file}\"",
}
]
}
</code></pre>
<p>After I checked my python path:</p>
<pre><code>$ python -c "import sys; print '\n'.join(sys.path)"
</code></pre>
<p>my output is: </p>
<pre><code>/home/user/caffe/python
/home/user
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/usr/lib/python2.7/lib-old
/usr/lib/python2.7/lib-dynload
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages/PILcompat
/usr/lib/python2.7/dist-packages/gtk-2.0
/usr/lib/python2.7/dist-packages/wx-3.0-gtk2
</code></pre>
<p>and my dist-packages doesn't have caffe as I installed it in home/user instead.</p>
<p>So I decided to run in the terminal:</p>
<pre><code>export PYTHONPATH=/home/user/caffe/python:$PYTHONPATH
</code></pre>
<p>but checking my python path again, it doesn't seem to get added in. Is this the reason? However, why is it that I can import caffe directly from my terminal but not in ST3? PS: I did add caffe to my user and etc bashrc profile.</p>
<p>Thank you for your help.</p>
| 1 | 2016-08-20T03:00:14Z | 39,050,250 | <p>You can add this before <code>import caffe</code>:</p>
<pre><code>import sys
sys.path.insert(0, '/path_to_caffe_root/python')
</code></pre>
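<p>Alternatively, note that Sublime Text launched from a desktop shortcut usually does not inherit a <code>PYTHONPATH</code> exported in your shell, which would explain the discrepancy between the terminal and ST3. A sketch of a custom build system that sets it explicitly (the caffe path below is an assumption based on your setup):</p>

```json
{
    "shell_cmd": "python -u \"$file\"",
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.python",
    "env": {
        "PYTHONIOENCODING": "utf-8",
        "PYTHONPATH": "/home/user/caffe/python"
    }
}
```

<p>Save it via Tools > Build System > New Build System and select it before building.</p>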
| 1 | 2016-08-20T03:45:06Z | [
"python",
"import",
"sublimetext3",
"sublimetext",
"caffe"
] |
Trigger Break Keyword Using Dictionary Python | 39,050,046 | <p>I'm trying to make do with Python's lack of a switch statement and make code more efficient by using a dictionary, but I can't quite get what I'm looking for. Here's a simplified version of the code I'm working with</p>
<pre><code>def zero():
print("Yo")
def one():
print("Hey")
options = {
0: zero,
1: one,
}
while True:
response = int(input("Number: "))
try:
options.get(response)()
except TypeError:
print("Not a valid response")
</code></pre>
<p>and what I would like to see is some way to break the loop such as <code>2: break</code> that exits the loop. Currently I'm using sys.exit(0), but was wondering if it was possible to use the break keyword</p>
| 2 | 2016-08-20T03:01:13Z | 39,050,056 | <p>This is all you need:</p>
<pre><code>while True:
    response = int(input("Number: "))
    if response == 2:
        break
    elif response not in options:
        print("Not a valid response")
    else:
        options[response]()  # invoke the corresponding function
</code></pre>
<p>Incidentally, storing functions in a dictionary and having to invoke them like this isn't exactly Pythonic. It's a lot nicer to simply explicitly enumerate the behaviour you need with successive <code>if</code>s.</p>
| 0 | 2016-08-20T03:03:34Z | [
"python",
"python-3.x",
"dictionary",
"switch-statement"
] |
Trigger Break Keyword Using Dictionary Python | 39,050,046 | <p>I'm trying to make do with Python's lack of a switch statement and make code more efficient by using a dictionary, but I can't quite get what I'm looking for. Here's a simplified version of the code I'm working with</p>
<pre><code>def zero():
print("Yo")
def one():
print("Hey")
options = {
0: zero,
1: one,
}
while True:
response = int(input("Number: "))
try:
options.get(response)()
except TypeError:
print("Not a valid response")
</code></pre>
<p>and what I would like to see is some way to break the loop such as <code>2: break</code> that exits the loop. Currently I'm using sys.exit(0), but was wondering if it was possible to use the break keyword</p>
| 2 | 2016-08-20T03:01:13Z | 39,050,114 | <p>There are a few ways you could do this.</p>
<p>Here's a more robust version of @BingsF's clever answer (this one won't mask exceptions raised within the selected function):</p>
<pre><code>class LoopBreak(Exception):
pass
def zero():
print("Yo")
def one():
print("Hey")
def two():
raise LoopBreak
options = {
0: zero,
1: one,
2: two
}
while True:
try:
response = int(input("Number: "))
action = options[response]
except (ValueError, KeyError):
print("Not a valid response")
continue
try:
action()
except LoopBreak:
break
</code></pre>
<p>Or you could specify a special flag in your dictionary that will force a break:</p>
<pre><code>def zero():
print("Yo")
def one():
print("Hey")
options = {
0: zero,
1: one,
2: False
}
while True:
try:
response = int(input("Number: "))
action = options[response]
except (ValueError, KeyError):
print("Not a valid response")
continue
if action is False:
break
else:
action()
</code></pre>
<p>Or use a special return value to force a break:</p>
<pre><code>def zero():
print("Yo")
# returns None by default
def one():
print("Hey")
# returns None by default
def two():
return False
options = {
0: zero,
1: one,
2: two
}
while True:
try:
response = int(input("Number: "))
action = options[response]
except (ValueError, KeyError):
print("Not a valid response")
continue
if action() is False:
break
</code></pre>
<p>The following code might be more "Pythonic". It could be infinitesimally slower than the approaches above, because it has to check all the <code>if</code> statements instead of looking up the function in a dictionary via a hash. But it may be easier to read and maintain.</p>
<pre><code>while True:
try:
response = int(input("Number: "))
except ValueError:
response = -1 # error flag
if response == 0:
print("Yo")
elif response == 1:
print("Hey")
elif response == 2:
break
else:
print("Not a valid response")
</code></pre>
| 1 | 2016-08-20T03:17:00Z | [
"python",
"python-3.x",
"dictionary",
"switch-statement"
] |
Trigger Break Keyword Using Dictionary Python | 39,050,046 | <p>I'm trying to make do with Python's lack of a switch statement and make code more efficient by using a dictionary, but I can't quite get what I'm looking for. Here's a simplified version of the code I'm working with</p>
<pre><code>def zero():
print("Yo")
def one():
print("Hey")
options = {
0: zero,
1: one,
}
while True:
response = int(input("Number: "))
try:
options.get(response)()
except TypeError:
print("Not a valid response")
</code></pre>
<p>and what I would like to see is some way to break the loop such as <code>2: break</code> that exits the loop. Currently I'm using sys.exit(0), but was wondering if it was possible to use the break keyword</p>
| 2 | 2016-08-20T03:01:13Z | 39,050,996 | <p>You could define a <code>LoopBreak</code> exception, raise that in a <code>two</code> function, and catch it in the loop to <code>break</code>:</p>
<pre><code>class LoopBreak(Exception):
pass
def zero():
print("Yo")
def one():
print("Hey")
def two():
raise LoopBreak
options = {
0: zero,
1: one,
2: two
}
while True:
response = int(input("Number: "))
try:
options.get(response)()
except TypeError:
print("Not a valid response")
except LoopBreak:
break
</code></pre>
<p>As a point of interest, this is similar to the pattern used natively by Python to stop generators; they raise a <code>StopIteration</code> exception when they run out of values to <code>yield</code>.</p>
<p>EDIT: As @mfripp correctly notes below, this will mask any <code>TypeError</code>s that are raised during execution of <code>zero</code> or <code>one</code>. I would change the main loop to this instead (so you don't have to rely on <code>TypeError</code>):</p>
<pre><code>while True:
response = int(input("Number: "))
action = options.get(response)
if action is None:
print("Not a valid response")
continue
try:
action()
except LoopBreak:
break
</code></pre>
| 2 | 2016-08-20T06:01:21Z | [
"python",
"python-3.x",
"dictionary",
"switch-statement"
] |
Writing a BytesIO object to a file, 'efficiently' | 39,050,095 | <p>So a quick way to write a bytesIO object to a file would be to just use:</p>
<pre><code>with open('myfile.ext', 'wb') as f:
f.write(myBytesIOObj.getvalue())
f.close()
myBytesIOObj.close()
</code></pre>
<p>However, if I wanted to iterate over the myBytesIOObj as opposed to writing it in one chunk, how would I go about it? I'm on Python 2.7.1. Also, if the bytesIO is huge, would it be a more efficient way of writing by iteration?</p>
<p>Thanks</p>
| 1 | 2016-08-20T03:13:48Z | 39,050,559 | <p><code>shutil</code> has a utility that will write the file efficiently. It copies in chunks, defaulting to 16K. Any multiple of 4K chunks should be a good cross platform number. I chose 131072 rather arbitrarily because really the file is written to the OS cache in RAM before going to disk and the chunk size isn't that big of a deal.</p>
<pre><code>import shutil
myBytesIOObj.seek(0)
with open('myfile.ext', 'wb') as f:
shutil.copyfileobj(myBytesIOObj, f, length=131072)
</code></pre>
<p>btw, there was no need to close the file objects at the end: the file is closed automatically when the <code>with</code> statement completes.</p>
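<p>If you do want to iterate yourself (a rough sketch of the kind of loop <code>copyfileobj</code> runs internally — my illustration, not its actual source), it looks like this:</p>

```python
import io

# Hypothetical stand-in for the question's BytesIO object.
my_bytes_io = io.BytesIO(b"x" * 300000)

my_bytes_io.seek(0)
out = io.BytesIO()                    # in real code: open('myfile.ext', 'wb')
while True:
    chunk = my_bytes_io.read(131072)  # read at most 128K at a time
    if not chunk:                     # b'' signals end of stream
        break
    out.write(chunk)
```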
| 2 | 2016-08-20T04:44:26Z | [
"python",
"io",
"bytesio"
] |
Implementing External Source Code Block in Lua | 39,050,119 | <p>I'm looking to write a Lua script and have it also execute something from Python source code as well, as if it went like so:</p>
<pre><code>#!/bin/lua
-- begin lua part
print "Hello"
-- begin python part
Somehow_Executes_Python {
print "Hello" #In python, of course
}
-- End Script
</code></pre>
<p>Getting the idea?
I'm not sure if it's even possible, but if I can somehow implement foreign source code in controlled blocks, that would be great. I've seen other things about calling them from a different file/ link/ source, but I'm looking to have it work directly from inside of the lua source code, not from a different file entirely.</p>
| 2 | 2016-08-20T03:18:04Z | 39,050,157 | <p>There is a python-lua package called Lupa. Here's the <a href="https://pypi.python.org/pypi/lupa" rel="nofollow">documentation</a>. See if that helps.</p>
| 0 | 2016-08-20T03:24:42Z | [
"python",
"lua"
] |
Implementing External Source Code Block in Lua | 39,050,119 | <p>I'm looking to write a Lua script and have it also execute something from Python source code as well, as if it went like so:</p>
<pre><code>#!/bin/lua
-- begin lua part
print "Hello"
-- begin python part
Somehow_Executes_Python {
print "Hello" #In python, of course
}
-- End Script
</code></pre>
<p>Getting the idea?
I'm not sure if it's even possible, but if I can somehow implement foreign source code in controlled blocks, that would be great. I've seen other things about calling them from a different file/ link/ source, but I'm looking to have it work directly from inside of the lua source code, not from a different file entirely.</p>
| 2 | 2016-08-20T03:18:04Z | 39,050,254 | <p>The simplest approach would be something along these lines:</p>
<pre><code>#!/usr/bin/env lua
local python = function(code)
local file = assert(io.popen('python', 'w'))
file:write(code)
file:close()
end
-- begin lua part
print "Hello from Lua"
--begin python part
python [=[
print "Hello from Python"
]=]
-- End Script
</code></pre>
<p>Line-by-line explanation (without code highlighting, it seems that it is broken for Lua on the SO):</p>
<pre>
#!/usr/bin/env lua
-- The above is a more sure-fire way to run Lua on linux from a shebang
-- This function runs python code as follows
local python = function(code)
-- It opens a write pipe to the python executable
local file = assert(io.popen('python', 'w'))
-- pipes the code there
file:write(code)
-- and closes the file
file:close()
-- This is an equivalent of running
-- $ python <code.py
-- in the shell.
end
-- Begin Lua part
-- I added "from Lua" to better see in the output what works or not.
print "Hello from Lua"
-- Begin Python part
-- We're calling our python code running function,
-- passing Lua long string to it. This is equivalent of
-- python('print "Hello from Python"')
python [=[
print "Hello from Python"
]=]
-- End Script
</pre>
<p>I imagine you would like to have at least some interoperability between Lua and Python code. It is a bit more difficult to implement and the way you should do it highly depends on the details of the problem you're actually solving.</p>
<p>The cleanest way would probably be to create a socket pair of one kind or another and to make the Lua and Python code talk over it.</p>
<p>Solutions where you may read a variable or call a function from one VM (say Lua) in another (say Python) and vice-versa usually lead to a mess for a multitude of reasons (I tried a lot of them and implemented several myself).</p>
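<p>For comparison — this is my addition, not part of the original answer — the same "pipe source code to an interpreter's stdin" idea looks like this from the Python side, using <code>subprocess</code>:</p>

```python
import subprocess
import sys

# Equivalent of the Lua helper above: start an interpreter and write the
# program to its stdin. sys.executable is used so the sketch runs
# regardless of what `python` resolves to on PATH.
code = 'print("Hello from piped Python")'
result = subprocess.run(
    [sys.executable, "-"],   # "-" tells Python to read the program from stdin
    input=code,
    capture_output=True,
    text=True,
)
```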
| 2 | 2016-08-20T03:45:52Z | [
"python",
"lua"
] |
How to modify/transform the column of a dataframe? | 39,050,248 | <p>I have an instance of <code>pyspark.sql.dataframe.DataFrame</code> created using </p>
<pre><code>dataframe = sqlContext.sql("select * from table").
</code></pre>
<p>One column is 'arrival_date' and contains a string. </p>
<p>How do I modify this column so as to only take the first 4 characters from it and throw away the rest?</p>
<p>How would I convert the type of this column from string to date? </p>
<p>In graphlab.SFrame, this would be:</p>
<pre><code>dataframe['column_name'] = dataframe['column_name'].apply(lambda x: x[:4] )
</code></pre>
<p>and</p>
<pre><code>dataframe['column_name'] = dataframe['column_name'].str_to_datetime()
</code></pre>
| -2 | 2016-08-20T03:44:39Z | 39,053,815 | <p>To extract first 4 characters from the <code>arrival_date</code> (StringType) column, create a <code>new_df</code> by using <code>UserDefinedFunction</code> (as you cannot modify the columns: they are immutable):</p>
<pre><code>from pyspark.sql.functions import UserDefinedFunction, to_date
old_df = spark.sql("SELECT * FROM table")
udf = UserDefinedFunction(lambda x: str(x)[:4], StringType())
new_df = old_df.select(*[udf(column).alias('arrival_date') if column == 'arrival_date' else column for column in old_df.columns])
</code></pre>
<p>And to convert the <code>arrival_date</code> (StringType) column into a <code>DateType</code> column, use the <code>to_date</code> function as shown below:</p>
<pre><code>new_df = old_df.select(old_df.other_cols_if_any, to_date(old_df.arrival_date).alias('arrival_date'))
</code></pre>
<p>Sources:<br/>
<a href="http://stackoverflow.com/a/29257220/2873538">http://stackoverflow.com/a/29257220/2873538</a> <br/>
<a href="https://databricks.com/blog/2015/09/16/apache-spark-1-5-dataframe-api-highlights.html" rel="nofollow">https://databricks.com/blog/2015/09/16/apache-spark-1-5-dataframe-api-highlights.html</a></p>
| 2 | 2016-08-20T11:51:52Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-sql"
] |
How to modify/transform the column of a dataframe? | 39,050,248 | <p>I have an instance of <code>pyspark.sql.dataframe.DataFrame</code> created using </p>
<pre><code>dataframe = sqlContext.sql("select * from table").
</code></pre>
<p>One column is 'arrival_date' and contains a string. </p>
<p>How do I modify this column so as to only take the first 4 characters from it and throw away the rest?</p>
<p>How would I convert the type of this column from string to date? </p>
<p>In graphlab.SFrame, this would be:</p>
<pre><code>dataframe['column_name'] = dataframe['column_name'].apply(lambda x: x[:4] )
</code></pre>
<p>and</p>
<pre><code>dataframe['column_name'] = dataframe['column_name'].str_to_datetime()
</code></pre>
| -2 | 2016-08-20T03:44:39Z | 39,065,383 | <p>As stated by Orions, you can't modify a column, but you can override it. Also, you shouldn't need to create a user-defined function, as there is a built-in function for extracting substrings:</p>
<pre><code>from pyspark.sql.functions import *
df = df.withColumn("arrival_date", df['arrival_date'].substr(0, 4))
</code></pre>
<p>To convert it to date, you can use <code>to_date</code>, as Orions said:</p>
<pre><code>from pyspark.sql.functions import *
df = df.withColumn("arrival_date", to_date(df['arrival_date'].substr(0, 4)))
</code></pre>
<p>However, if you need to specify the format, you should use <code>unix_timestamp</code>:</p>
<pre><code>from pyspark.sql.functions import *
format = 'yyMM'
col = unix_timestamp(df['arrival_date'].substr(0, 4), format).cast('timestamp')
df = df.withColumn("arrival_date", col)
</code></pre>
<p>All this can be found in the <a href="http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions" rel="nofollow">pyspark documentation</a>.</p>
| 2 | 2016-08-21T14:38:19Z | [
"python",
"apache-spark",
"pyspark",
"apache-spark-sql"
] |
In python, does lock get automatically released when an exception happens? | 39,050,277 | <p>for example:</p>
<pre><code>import threading
lock = threading.Lock()
with lock:
some code that throws an exception
</code></pre>
<p>This is assuming that code that throws an exception isn't wrapped in a try except block.</p>
| 0 | 2016-08-20T03:49:34Z | 39,052,041 | <p>The whole point of using your lock as a context manager (<code>with lock:</code>) is for Python to notify that lock object when an exception occurs.</p>
<p>So yes, the lock will unlock itself when an exception occurs because the <code>with</code> statement ensures it is notified of an exception.</p>
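<p>A quick way to convince yourself (a small demo I am adding, not from the original answer):</p>

```python
import threading

lock = threading.Lock()

try:
    with lock:
        raise ValueError("boom")   # the exception escapes the with block
except ValueError:
    pass

# If the with statement had not released the lock, this non-blocking
# acquire would return False.
reacquired = lock.acquire(blocking=False)
if reacquired:
    lock.release()
```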
| 0 | 2016-08-20T08:19:28Z | [
"python",
"locking"
] |
Reordering rows in Python Pandas Dataframe | 39,050,293 | <p>I appended 3 rows to a dataframe with this code: <code>z = pandas.DataFrame.append(z, [{'Items': 'Foo'}, {'Items': 'Barr'}, {'Items': 'Options'}])</code> and got the below result:</p>
<pre><code> Items Last 12 Months
0 Apple 48.674
1 Dragon Fruit 8.786
2 Milk 2.367
3 Foo 3.336
4 Barr 0.005
5 Options NaN
</code></pre>
<p>Is it possible to move the last 3 rows "<code>Foo, Barr, Options</code>" to become the first 3 rows instead? Without changing the index.. </p>
| 1 | 2016-08-20T03:51:44Z | 39,050,379 | <p>How about <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow">reindex</a>?</p>
<pre><code>In [4]: df
Out[4]:
Items Last 12 Months
0 Apple 48.674
1 Dragon Fruit 8.786
2 Milk 2.367
In [5]: z = pd.DataFrame([{'Items': 'Foo', 'Last 12 Months': 3.336}, {'Items': 'Barr', 'Last12Months': 0.005}, {'Items': 'Options'}])
In [6]: z
Out[6]:
Items Last 12 Months
0 Foo 3.336
1 Barr 0.005
2 Options NaN
In [7]: df = df.append(z, ignore_index = True)
In [8]: df
Out[8]:
Items Last 12 Months
0 Apple 48.674
1 Dragon Fruit 8.786
2 Milk 2.367
3 Foo 3.336
4 Barr 0.005
5 Options NaN
In [9]: prev_len = len(df) - len(z)
In [10]: df = df.reindex(range(prev_len, len(df)) + range(prev_len))
In [11]: df
Out[11]:
Items Last 12 Months
3 Foo 3.336
4 Barr 0.005
5 Options NaN
0 Apple 48.674
1 Dragon Fruit 8.786
2 Milk 2.367
</code></pre>
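<p>Note that <code>range(...) + range(...)</code> only works on Python 2; a Python 3 version of the same reorder (my adaptation of the answer) is:</p>

```python
import pandas as pd

df = pd.DataFrame({'Items': ['Apple', 'Dragon Fruit', 'Milk',
                             'Foo', 'Barr', 'Options'],
                   'Last 12 Months': [48.674, 8.786, 2.367,
                                      3.336, 0.005, None]})

prev_len = len(df) - 3   # the last 3 rows were just appended
# In Python 3, range objects must be converted to lists before concatenating.
new_order = list(range(prev_len, len(df))) + list(range(prev_len))
df = df.reindex(new_order)
```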
| 1 | 2016-08-20T04:06:51Z | [
"python",
"pandas"
] |
Pandas Pivot Table formatting column names | 39,050,318 | <p>I used the <code>pandas.pivot_table</code> function on a pandas dataframe and my output looks like something simillar to this:</p>
<pre><code> Winners Runnerup
year 2016 2015 2014 2016 2015 2014
Country Sport
india badminton
india wrestling
</code></pre>
<p>What I actually needed was some thing like below</p>
<pre><code>Country Sport Winners_2016 Winners_2015 Winners_2014 Runnerup_2016 Runnerup_2015 Runnerup_2014
india badminton 1 1 1 1 1 1
india wrestling 1 0 1 0 1 0
</code></pre>
<p>I have lot of columns and years so I will not be able to manually edit them, so can anyone please advise me on how to do this ?</p>
| 0 | 2016-08-20T03:55:21Z | 39,050,508 | <p>Try this:</p>
<pre><code>df.columns=['{}_{}'.format(x,y) for x,y in zip(df.columns.get_level_values(0),df.columns.get_level_values(1))]
</code></pre>
<p><code>get_level_values</code> is what you need to get only one of the levels of the resulting multiindex.</p>
<p>Side note: you might try working with the data as is. I really hated pandas <code>MultiIndex</code> for a long time, but it's grown on me.</p>
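<p>A self-contained demo of the idea (my toy frame, not the asker's data):</p>

```python
import pandas as pd

# Build a small two-level column index like the pivot table produces.
cols = pd.MultiIndex.from_product([['Winners', 'Runnerup'], [2016, 2015]])
df = pd.DataFrame([[1, 1, 1, 0]], columns=cols)

# Flatten it exactly as in the answer above.
df.columns = ['{}_{}'.format(x, y)
              for x, y in zip(df.columns.get_level_values(0),
                              df.columns.get_level_values(1))]
```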
| 1 | 2016-08-20T04:35:29Z | [
"python",
"pandas",
"dataframe",
"pivot-table",
"data-munging"
] |
Pandas Pivot Table formatting column names | 39,050,318 | <p>I used the <code>pandas.pivot_table</code> function on a pandas dataframe and my output looks like something simillar to this:</p>
<pre><code> Winners Runnerup
year 2016 2015 2014 2016 2015 2014
Country Sport
india badminton
india wrestling
</code></pre>
<p>What I actually needed was some thing like below</p>
<pre><code>Country Sport Winners_2016 Winners_2015 Winners_2014 Runnerup_2016 Runnerup_2015 Runnerup_2014
india badminton 1 1 1 1 1 1
india wrestling 1 0 1 0 1 0
</code></pre>
<p>I have lot of columns and years so I will not be able to manually edit them, so can anyone please advise me on how to do this ?</p>
| 0 | 2016-08-20T03:55:21Z | 39,050,811 | <p>You can also use list comprehension:</p>
<pre><code>df.columns = ['_'.join(col) for col in df.columns]
print (df)
Winners_2016 Winners_2015 Winners_2014 Runnerup_2016 \
Country Sport
india badminton 1 1 1 1
wrestling 1 1 1 1
Runnerup_2015 Runnerup_2014
Country Sport
india badminton 1 1
wrestling 1 1
</code></pre>
<p>Another solution is to convert the <code>columns</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a> and then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.join.html" rel="nofollow"><code>join</code></a>:</p>
<pre><code>df.columns = df.columns.to_series().str.join('_')
print (df)
Winners_2016 Winners_2015 Winners_2014 Runnerup_2016 \
Country Sport
india badminton 1 1 1 1
wrestling 1 1 1 1
Runnerup_2015 Runnerup_2014
Country Sport
india badminton 1 1
wrestling 1 1
</code></pre>
<p>I was really interested in the <strong>timings</strong>:</p>
<pre><code>In [45]: %timeit ['_'.join(col) for col in df.columns]
The slowest run took 7.82 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 4.05 µs per loop
In [44]: %timeit ['{}_{}'.format(x,y) for x,y in zip(df.columns.get_level_values(0),df.columns.get_level_values(1))]
The slowest run took 4.56 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 131 µs per loop
In [46]: %timeit df.columns.to_series().str.join('_')
The slowest run took 4.31 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 452 µs per loop
</code></pre>
| 3 | 2016-08-20T05:32:16Z | [
"python",
"pandas",
"dataframe",
"pivot-table",
"data-munging"
] |
Regular Expressions: Matching Song names in Python 3 | 39,050,359 | <p>I'm currently working on a project to parse data from a music database and I'm creating a search function using regular expressions in python (version 3.5.1).</p>
<p>I would like to create a regular expression to make the song names- songs without characters following the name and songs with feature details - but not songs containing given song's name in the matching song's name(examples may help illustrate my point):</p>
<p>What I'd like to match:</p>
<ol>
<li>Work</li>
<li>Work (ft. Drake)</li>
</ol>
<p>What would NOT like to match:</p>
<ol>
<li>Work it</li>
<li>Workout</li>
</ol>
<p>My current regular expression is ' <strong>/Work(\s(\w+)?/</strong> ' but this matches all 4 example cases.</p>
<p>Can someone help me figure out an expression to accomplish this? </p>
| 0 | 2016-08-20T04:03:08Z | 39,050,517 | <p>Personally, I'd go with something like</p>
<pre><code>^Work(?:\s+\(.+\))?$
</code></pre>
<p>which will match your two provided test cases, but not the two you want to avoid. If you want to make it a but more specific regarding matching who the artist is, you can go with something like</p>
<pre><code>^Work(?:\s+\((?:ft.|featuring).+\))?$
</code></pre>
<p>Which will still match your two cases, but will only match stuff in the brackets that starts with "ft." or "featuring".</p>
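<p>Checked against the four examples from the question (my harness, not part of the original answer):</p>

```python
import re

pattern = re.compile(r'^Work(?:\s+\(.+\))?$')

should_match = ['Work', 'Work (ft. Drake)']
should_not_match = ['Work it', 'Workout']

matches = [bool(pattern.match(s)) for s in should_match]
non_matches = [bool(pattern.match(s)) for s in should_not_match]
```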
| 1 | 2016-08-20T04:36:58Z | [
"python",
"regex",
"python-3.x"
] |
Balanced parentheses parser Python | 39,050,372 | <p>I am trying to make a balanced parentheses parser by using recursion and produce a tree.</p>
<p>For example, if you pass in <code>'()()'</code> the tree would be constructed like this</p>
<p>step 1</p>
<pre><code> B
|
empty
</code></pre>
<p>step 2</p>
<pre><code> B
/ | \ \
( empty ) B
|
empty
</code></pre>
<p>step 3</p>
<pre><code> B
/ | \ \
( empty ) B
/ | \ \
( empty ) B
|
empty
</code></pre>
<p>Right now, my code "kind of" works for a legit input like '()()', but it is supposed to give me False for something like '())('. It is not returning False. Can I get help with this?</p>
<pre><code>class Node:
def __init__(self, label):
self.label = label
self.leftmostChild = None
self.rightSibling = None
def makeNode0(x):
root = Node(None)
root.label = x
return root
def makeNode1(x, t):
root = makeNode0(x)
root.leftmostChild = t
return root
def makeNode4(x, t1, t2, t3, t4):
root = makeNode1(x, t1)
t1.rightSibling = t2
t2.rightSibling = t3
t3.rightSibling = t4
return root
nextTerminal = "())("
def B(index):
firstB = Node(None)
secondB = Node(None)
if index != len(nextTerminal):
if nextTerminal[index] == '(':
index += 1
firstB = B(index)
if firstB is not False and nextTerminal[index] == ')':
index += 1
secondB = B(index)
if secondB is False:
return False
else:
return makeNode4('B', makeNode0('('), firstB, makeNode0(')'), secondB)
else:
return False
else:
return makeNode1('B', makeNode0('emp'))
b = B(0)
</code></pre>
| 0 | 2016-08-20T04:05:51Z | 39,051,121 | <p>I'm going to outline a second approach here, in the hopes that it provides you with some insight into why your current program isn't working. To be frank, I'm not certain what's going on - I initially thought that each right sibling indicated an additional parenthetical statement, but it seems like the tree's structure is hardcoded, regardless of the parentheses. My suggestion would be to start from the below solution and work your way towards creating these trees. </p>
<ol>
<li>Keep track of a variable <code>depth</code>.</li>
<li>For every start parenthesis, increment <code>depth</code> by 1.</li>
<li>For every end parenthesis, decrement <code>depth</code> by 1. If <code>depth</code> is negative, we have encountered an end parenthesis too soon -- return <code>false</code>.</li>
</ol>
<p>After processing all parentheses, check that <code>depth</code> is 0. Otherwise, we had too many start parentheses.</p>
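<p>A direct translation of those steps into code (my sketch, not the answerer's):</p>

```python
def is_balanced(s):
    """Return True if the parentheses in s are balanced."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # an end parenthesis arrived too soon
                return False
    return depth == 0          # leftover depth means unclosed '('
```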
| 1 | 2016-08-20T06:20:29Z | [
"python"
] |
Why is seconds not converting to hours and days | 39,050,447 | <p>I created a function to display how much time has elapsed, but for some reason the conditional statement does not return at specific statements after the first elif. Adding a print statement shows that the function is flowing however as objects are printed out. I am not sure why this is happening as the time elapsed since some posts were made are greater than 3600 seconds in elapsed time. It is therefore converting all elapsed time solely to minutes, but I will say that the time elapsed in seconds is working properly. Am I doing my conversion wrong or is it a syntax or type error? Also, is there a better way to go about this?</p>
<pre><code>def time_diff(self):
if self.date_pub:
now = datetime.datetime.utcnow().replace(tzinfo=utc)
diff = now - self.date_pub
total_time = diff.total_seconds()
if total_time < 60:
return str(int(total_time)) + "s ago."
elif total_time > 60 or total_time <= 3600:
return str(int(total_time / 60)) + "m ago."
elif total_time >= 3600 or total_time <= 86400:
return str(int(total_time / 3600)) + "h ago."
elif total_time >= 86400 or total_time <= 604800:
            return str(int(total_time / 86400)) + "d ago."
else:
return date_pub
</code></pre>
| 0 | 2016-08-20T04:22:28Z | 39,050,488 | <p>You have a bug in the second branch.</p>
<p>When you do
<code>elif total_time > 60 or total_time <= 3600:</code> it will always pass. Since it didn't hit the first branch (<code>total_time < 60</code>), your time is >= 60. And because the second branch joins its checks with <code>or</code>, it is true for every such value: any number is either <= 3600 or > 60, so one side of the <code>or</code> always holds.</p>
<p>And then the rest of them don't even execute.
You should change the <code>or</code>s to <code>and</code>s.</p>
<p>Tip: you shouldn't even check for <code>total_time > 60</code>. Think about it: what does it mean if the code already got there? The first condition failed, therefore you are guaranteed that it will be >= 60.</p>
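<p>You can see the difference in isolation (my example, using a hypothetical value of one day):</p>

```python
total_time = 86400   # one day, in seconds

# With the question's `or`, the "minutes" branch matches a full day:
minutes_branch_or = (total_time > 60 or total_time <= 3600)

# With `and`, it is correctly restricted to the 60..3600 range:
minutes_branch_and = (total_time > 60 and total_time <= 3600)
```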
| 1 | 2016-08-20T04:29:37Z | [
"python",
"django",
"datetime"
] |
Why is seconds not converting to hours and days | 39,050,447 | <p>I created a function to display how much time has elapsed, but for some reason the conditional statement does not return at specific statements after the first elif. Adding a print statement shows that the function is flowing however as objects are printed out. I am not sure why this is happening as the time elapsed since some posts were made are greater than 3600 seconds in elapsed time. It is therefore converting all elapsed time solely to minutes, but I will say that the time elapsed in seconds is working properly. Am I doing my conversion wrong or is it a syntax or type error? Also, is there a better way to go about this?</p>
<pre><code>def time_diff(self):
if self.date_pub:
now = datetime.datetime.utcnow().replace(tzinfo=utc)
diff = now - self.date_pub
total_time = diff.total_seconds()
if total_time < 60:
return str(int(total_time)) + "s ago."
elif total_time > 60 or total_time <= 3600:
return str(int(total_time / 60)) + "m ago."
elif total_time >= 3600 or total_time <= 86400:
return str(int(total_time / 3600)) + "h ago."
elif total_time >= 86400 or total_time <= 604800:
            return str(int(total_time / 86400)) + "d ago."
else:
return date_pub
</code></pre>
| 0 | 2016-08-20T04:22:28Z | 39,050,489 | <p>First up, you should fix the cases where you have gaps, such as <code>60</code>, which will not be caught by <em>any</em> of the <code>if/elif</code> clauses. It should be <code>< 60</code> then <code>>= 60</code> in the first two condition checks.</p>
<p>Additionally, all of those <code>or</code> keywords should <em>definitely</em> be <code>and</code>. Think of what happens for a day (86,400 seconds):</p>
<pre><code>elif total_time > 60 or total_time <= 3600:
return str(int(total_time / 60)) + "m ago."
</code></pre>
<p>Since <code>86,400</code> is greater than <code>60</code>, this will fire, resulting in <code>1440 m ago.</code> being returned. In fact (once you've fixed the gap issue referred to in the first paragraph), since every value is either less-than or greater-than-or-equal-to 60, you'll only ever see seconds or minutes output.</p>
<p>In fact, since the whole <code>if something return else</code> construct is redundant (<code>return</code> means the <code>else</code> is superfluous), you could get away with the simpler:</p>
<pre><code>if total_time < 60:
return str(int(total_time)) + "s ago."
if total_time < 3600:
return str(int(total_time / 60)) + "m ago."
if total_time < 86400:
return str(int(total_time / 3600)) + "h ago."
if total_time < 604800:
return str(int(total_time / 86400)) + "d ago."
return self.date_pub
</code></pre>
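<p>Wrapped up as a standalone function (my paraphrase of the chain above) so the boundaries are easy to check:</p>

```python
def humanize(total_time):
    # Same cascade as above; None stands in for returning self.date_pub.
    if total_time < 60:
        return str(int(total_time)) + "s ago."
    if total_time < 3600:
        return str(int(total_time / 60)) + "m ago."
    if total_time < 86400:
        return str(int(total_time / 3600)) + "h ago."
    if total_time < 604800:
        return str(int(total_time / 86400)) + "d ago."
    return None
```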
| 4 | 2016-08-20T04:30:05Z | [
"python",
"django",
"datetime"
] |
Why is seconds not converting to hours and days | 39,050,447 | <p>I created a function to display how much time has elapsed, but for some reason the conditional statement does not return at specific statements after the first elif. Adding a print statement shows that the function is flowing however as objects are printed out. I am not sure why this is happening as the time elapsed since some posts were made are greater than 3600 seconds in elapsed time. It is therefore converting all elapsed time solely to minutes, but I will say that the time elapsed in seconds is working properly. Am I doing my conversion wrong or is it a syntax or type error? Also, is there a better way to go about this?</p>
<pre><code>def time_diff(self):
if self.date_pub:
now = datetime.datetime.utcnow().replace(tzinfo=utc)
diff = now - self.date_pub
total_time = diff.total_seconds()
if total_time < 60:
return str(int(total_time)) + "s ago."
elif total_time > 60 or total_time <= 3600:
return str(int(total_time / 60)) + "m ago."
elif total_time >= 3600 or total_time <= 86400:
return str(int(total_time / 3600)) + "h ago."
elif total_time >= 86400 or total_time <= 604800:
            return str(int(total_time / 86400)) + "d ago."
else:
return date_pub
</code></pre>
| 0 | 2016-08-20T04:22:28Z | 39,050,490 | <p>You should be using <code>and</code> instead of <code>or</code> within your if statement - when it comes into the first or, it knows "yes I'm over 60 seconds", then doesn't care about the other statement because it's already hit a true. Hence, why it always displays in minutes.</p>
| 1 | 2016-08-20T04:30:07Z | [
"python",
"django",
"datetime"
] |
Select rows where at least one value from the list of columns is not null | 39,050,512 | <p>I have a big dataframe with many columns (like 1000). I have a list of columns (generated by a script ~10). And I would like to select all the rows in the original dataframe where at least one of my list of columns is not null.</p>
<p>So if I would know the number of my columns in advance, I could do something like this:</p>
<pre><code>list_of_cols = ['col1', ...]
df[
df[list_of_cols[0]].notnull() |
df[list_of_cols[1]].notnull() |
...
df[list_of_cols[6]].notnull() |
]
</code></pre>
<p>I can also iterate over the list of cols and create a mask which then I would apply to <code>df</code>, but his looks too tedious. Knowing how powerful is pandas with respect to dealing with nan, I would expect that there is a way easier way to achieve what I want.</p>
| 3 | 2016-08-20T04:36:11Z | 39,050,719 | <p>Starting with this:</p>
<pre><code>data = {'a' : [np.nan,0,0,0,0,0,np.nan,0,0, 0,0,0, 9,9,],
'b' : [np.nan,np.nan,1,1,1,1,1,1,1, 2,2,2, 1,7],
'c' : [np.nan,np.nan,1,1,2,2,3,3,3, 1,1,1, 1,1],
'd' : [np.nan,np.nan,7,9,6,9,7,np.nan,6, 6,7,6, 9,6]}
df = pd.DataFrame(data, columns=['a','b','c','d'])
df
a b c d
0 NaN NaN NaN NaN
1 0.0 NaN NaN NaN
2 0.0 1.0 1.0 7.0
3 0.0 1.0 1.0 9.0
4 0.0 1.0 2.0 6.0
5 0.0 1.0 2.0 9.0
6 NaN 1.0 3.0 7.0
7 0.0 1.0 3.0 NaN
8 0.0 1.0 3.0 6.0
9 0.0 2.0 1.0 6.0
10 0.0 2.0 1.0 7.0
11 0.0 2.0 1.0 6.0
12 9.0 1.0 1.0 9.0
13 9.0 7.0 1.0 6.0
</code></pre>
<p>Rows where not all values are nulls. (Removing row index 0) </p>
<pre><code>df[~df.isnull().all(axis=1)]
a b c d
1 0.0 NaN NaN NaN
2 0.0 1.0 1.0 7.0
3 0.0 1.0 1.0 9.0
4 0.0 1.0 2.0 6.0
5 0.0 1.0 2.0 9.0
6 NaN 1.0 3.0 7.0
7 0.0 1.0 3.0 NaN
8 0.0 1.0 3.0 6.0
9 0.0 2.0 1.0 6.0
10 0.0 2.0 1.0 7.0
11 0.0 2.0 1.0 6.0
12 9.0 1.0 1.0 9.0
13 9.0 7.0 1.0 6.0
</code></pre>
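<p>To restrict the check to a specific list of columns, as the question actually asks (my addition, using a toy frame):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 0, 1],
                   'b': [np.nan, np.nan, 2],
                   'c': [5.0, np.nan, np.nan]})

list_of_cols = ['a', 'b']
# Keep rows where at least one of the listed columns is not null;
# row 0 is dropped even though column 'c' has a value there.
subset = df[df[list_of_cols].notnull().any(axis=1)]
```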
| 1 | 2016-08-20T05:17:27Z | [
"python",
"pandas"
] |
Select rows where at least one value from the list of columns is not null | 39,050,512 | <p>I have a big dataframe with many columns (like 1000). I have a list of columns (generated by a script ~10). And I would like to select all the rows in the original dataframe where at least one of my list of columns is not null.</p>
<p>So if I would know the number of my columns in advance, I could do something like this:</p>
<pre><code>list_of_cols = ['col1', ...]
df[
df[list_of_cols[0]].notnull() |
df[list_of_cols[1]].notnull() |
...
df[list_of_cols[6]].notnull() |
]
</code></pre>
<p>I can also iterate over the list of cols and create a mask which then I would apply to <code>df</code>, but his looks too tedious. Knowing how powerful is pandas with respect to dealing with nan, I would expect that there is a way easier way to achieve what I want.</p>
| 3 | 2016-08-20T04:36:11Z | 39,051,472 | <p>Use the <code>thresh</code> parameter in the <code>dropna()</code> method. By setting <code>thresh=1</code>, you specify that if there is at least 1 non null item, don't drop it.</p>
<pre><code>df = pd.DataFrame(np.random.choice((1., np.nan), (1000, 1000), p=(.3, .7)))
list_of_cols = list(range(10))
df[list_of_cols].dropna(thresh=1).head()
</code></pre>
<p><a href="http://i.stack.imgur.com/u8nQb.png" rel="nofollow"><img src="http://i.stack.imgur.com/u8nQb.png" alt="enter image description here"></a></p>
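<p>A tiny reproducible version of the same thing (my toy data, not the random frame above):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [np.nan, 1.0, np.nan],
                   'y': [np.nan, np.nan, 2.0]})

# thresh=1: keep a row only if it has at least one non-null value,
# so the all-NaN first row is dropped.
kept = df.dropna(thresh=1)
```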
| 2 | 2016-08-20T07:05:13Z | [
"python",
"pandas"
] |
adding multiple columns to pandas simultaneously | 39,050,539 | <p>I'm new to pandas and trying to figure out how to add multiple columns to pandas simultaneously. Any help here is appreciated. Ideally I would like to do this in one step rather than multiple repeated steps...</p>
<pre><code>import pandas as pd
df = {'col_1': [0, 1, 2, 3],
'col_2': [4, 5, 6, 7]}
df = pd.DataFrame(df)
df[[ 'column_new_1', 'column_new_2','column_new_3']] = [np.nan, 'dogs',3] #thought this would work here...
</code></pre>
| 2 | 2016-08-20T04:40:49Z | 39,050,648 | <p>With the use of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">concat</a>:</p>
<pre><code>In [128]: df
Out[128]:
col_1 col_2
0 0 4
1 1 5
2 2 6
3 3 7
In [129]: pd.concat([df, pd.DataFrame(columns = [ 'column_new_1', 'column_new_2','column_new_3'])])
Out[129]:
col_1 col_2 column_new_1 column_new_2 column_new_3
0 0.0 4.0 NaN NaN NaN
1 1.0 5.0 NaN NaN NaN
2 2.0 6.0 NaN NaN NaN
3 3.0 7.0 NaN NaN NaN
</code></pre>
<p>Not very sure of what you wanted to do with <code>[np.nan, 'dogs', 3]</code>. Maybe you now want to set them as default values?</p>
<pre><code>In [142]: df1 = pd.concat([df, pd.DataFrame(columns = [ 'column_new_1', 'column_new_2','column_new_3'])])
In [143]: df1[[ 'column_new_1', 'column_new_2','column_new_3']] = [np.nan, 'dogs', 3]
In [144]: df1
Out[144]:
col_1 col_2 column_new_1 column_new_2 column_new_3
0 0.0 4.0 NaN dogs 3
1 1.0 5.0 NaN dogs 3
2 2.0 6.0 NaN dogs 3
3 3.0 7.0 NaN dogs 3
</code></pre>
| 1 | 2016-08-20T05:00:50Z | [
"python",
"pandas"
] |
adding multiple columns to pandas simultaneously | 39,050,539 | <p>I'm new to pandas and trying to figure out how to add multiple columns to pandas simultaneously. Any help here is appreciated. Ideally I would like to do this in one step rather than multiple repeated steps...</p>
<pre><code>import pandas as pd
df = {'col_1': [0, 1, 2, 3],
'col_2': [4, 5, 6, 7]}
df = pd.DataFrame(df)
df[[ 'column_new_1', 'column_new_2','column_new_3']] = [np.nan, 'dogs',3] #thought this would work here...
</code></pre>
| 2 | 2016-08-20T04:40:49Z | 39,050,916 | <p>I agree, I would have expected your original syntax to work (I've run into that problem before). I think that doesn't work because it is ambiguous how your new data should be reproduced in your array (i.e., if <code>df</code> is 3x3, should the list of values be copied into all rows or all columns?).</p>
<p>Here are several approaches that <em>do</em> work:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
'col_1': [0, 1, 2, 3],
'col_2': [4, 5, 6, 7]
})
</code></pre>
<p>Then one of the following:</p>
<p>(1) Technically this is three steps, but it looks like one step:</p>
<pre><code>df['column_new_1'], df['column_new_2'], df['column_new_3'] = [np.nan, 'dogs', 3]
</code></pre>
<p>(2) I wouldn't necessarily expect DataFrame to work this way, but it does</p>
<pre><code>df[['column_new_1', 'column_new_2', 'column_new_3']] = pd.DataFrame([[np.nan, 'dogs', 3]], index=df.index)
</code></pre>
<p>(3) A little more predictable than the previous one; would work well if you make the data frame with new columns, then combine with the original data frame later:</p>
<pre><code>df = pd.concat(
[
df,
pd.DataFrame(
[[np.nan, 'dogs', 3]],
index=df.index,
columns=['column_new_1', 'column_new_2', 'column_new_3']
)
], axis=1
)
</code></pre>
<p>(4) Similar to the previous, but using <code>join</code> instead of <code>concat</code> (may be less efficient):</p>
<pre><code>df = df.join(pd.DataFrame(
[[np.nan, 'dogs', 3]],
index=df.index,
columns=['column_new_1', 'column_new_2', 'column_new_3']
))
</code></pre>
<p>(5) This is a more "natural" way to create the new data frame than the previous two, but the new columns will be sorted alphabetically:</p>
<pre><code>df = df.join(pd.DataFrame(
{
'column_new_1': np.nan,
'column_new_2': 'dogs',
'column_new_3': 3
}, index=df.index
))
</code></pre>
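<p>Not in the original answer, but worth knowing: on reasonably recent pandas versions, <code>DataFrame.assign</code> can also add several columns in one call, broadcasting scalars down every row. A minimal sketch (column order follows keyword order on Python 3.6+; on older versions it was not guaranteed):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col_1': [0, 1, 2, 3],
                   'col_2': [4, 5, 6, 7]})

# assign() returns a new frame with the extra columns; each scalar
# is broadcast to every row
df = df.assign(column_new_1=np.nan, column_new_2='dogs', column_new_3=3)
```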
| 3 | 2016-08-20T05:49:15Z | [
"python",
"pandas"
] |
adding multiple columns to pandas simultaneously | 39,050,539 | <p>I'm new to pandas and trying to figure out how to add multiple columns to pandas simultaneously. Any help here is appreciated. Ideally I would like to do this in one step rather than multiple repeated steps...</p>
<pre><code>import pandas as pd
df = {'col_1': [0, 1, 2, 3],
'col_2': [4, 5, 6, 7]}
df = pd.DataFrame(df)
df[[ 'column_new_1', 'column_new_2','column_new_3']] = [np.nan, 'dogs',3] #thought this would work here...
</code></pre>
| 2 | 2016-08-20T04:40:49Z | 39,051,352 | <p>Use a list comprehension together with <code>pd.DataFrame</code> and <code>pd.concat</code>:</p>
<pre><code>pd.concat(
[
df,
pd.DataFrame(
[[np.nan, 'dogs', 3] for _ in range(df.shape[0])],
df.index, ['column_new_1', 'column_new_2','column_new_3']
)
], axis=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/Y8IDO.png" rel="nofollow"><img src="http://i.stack.imgur.com/Y8IDO.png" alt="enter image description here"></a></p>
| 1 | 2016-08-20T06:49:14Z | [
"python",
"pandas"
] |
Getting python string equivalence to work like SQL match | 39,050,676 | <p>I am trying to match two strings, <code>Serhat Kılıç</code> and <code>serhat kilic</code>. In SQL this is quite easy, as I can do:</p>
<pre><code>select name from main_creditperson where name = 'serhat kılıç'
union all
select name from main_creditperson where name = 'serhat kilic';
===
name
Serhat Kılıç
Serhat Kılıç
</code></pre>
<p>In other words, both names return the same result. How would I do a string equivalent in python to see that these two names are 'the same' in the SQL sense. I am looking to do something like:</p>
<pre><code>if name1 == name2:
do_something()
</code></pre>
<p>I tried going the <code>unicodedata.normalize('NFKD', input_str)</code> way but it wasn't getting me anywhere. How would I solve this?</p>
| 0 | 2016-08-20T05:06:43Z | 39,050,717 | <p>I found this:</p>
<pre><code>from unidecode import unidecode

def compare_words(str_1, str_2):
    return unidecode(str_1.decode('utf-8')) == str_2
</code></pre>
<p>Tested on Python 2.7:</p>
<pre><code>In[2]: from unidecode import unidecode
In[3]: def compare_words (str_1, str_2):
return unidecode(str_1.decode('utf-8')) == str_2
In[4]: print compare_words('serhat kılıç', 'serhat kilic')
True
</code></pre>
| 0 | 2016-08-20T05:16:42Z | [
"python"
] |
Getting python string equivalence to work like SQL match | 39,050,676 | <p>I am trying to match two strings, <code>Serhat Kılıç</code> and <code>serhat kilic</code>. In SQL this is quite easy, as I can do:</p>
<pre><code>select name from main_creditperson where name = 'serhat kılıç'
union all
select name from main_creditperson where name = 'serhat kilic';
===
name
Serhat Kılıç
Serhat Kılıç
</code></pre>
<p>In other words, both names return the same result. How would I do a string equivalent in python to see that these two names are 'the same' in the SQL sense. I am looking to do something like:</p>
<pre><code>if name1 == name2:
do_something()
</code></pre>
<p>I tried going the <code>unicodedata.normalize('NFKD', input_str)</code> way but it wasn't getting me anywhere. How would I solve this?</p>
| 0 | 2016-08-20T05:06:43Z | 39,050,884 | <p>If you're OK with ASCII for everything, you can check <a href="http://stackoverflow.com/questions/816285/where-is-pythons-best-ascii-for-this-unicode-database">Where is Python's "best ASCII for this Unicode" database?</a>. <a href="https://pypi.python.org/pypi/Unidecode" rel="nofollow"><code>Unidecode</code></a> is rather good; however, it is GPL-licensed, which might be a problem for some projects. Anyway, it would work in your case and in quite a few others, and works on Python 2 and 3 alike (these examples are from Python 3 so that it is easier to see what's going on):</p>
<pre><code>>>> from unidecode import unidecode
>>> unidecode('serhat kılıç')
'serhat kilic'
>>> unidecode('serhat kilic')
'serhat kilic'
>>> # as a bonus it does much more, like
>>> unidecode('å亰')
'Bei Jing '
</code></pre>
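<p>If the GPL license is a concern, a standard-library-only sketch can handle this particular case. Note that <code>unicodedata.normalize('NFKD', ...)</code> alone fails here because the dotless 'ı' has no decomposition; uppercasing first works because its uppercase form is the plain ASCII 'I'. This is a narrow fold for cases like this one, not a general-purpose transliterator like Unidecode:</p>

```python
import unicodedata

def ascii_fold(s):
    # Uppercase first: 'ı' (dotless i) has no NFKD decomposition,
    # but its uppercase form is the ASCII letter 'I'.
    decomposed = unicodedata.normalize('NFKD', s.upper())
    # Drop the combining marks left over from decomposition
    # (e.g. the cedilla split off from 'Ç').
    return decomposed.encode('ascii', 'ignore').decode('ascii')
```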
| 1 | 2016-08-20T05:44:05Z | [
"python"
] |
Python 3 -- range based assignment | 39,050,829 | <p>I have a dictionary:</p>
<pre><code>BATTERY_LEVEL_TRANSFORMS = {(75, 101): "High", (30, 75): "Medium", (0, 30): "Low"}
</code></pre>
<p>and am trying to set a text indicator based on a battery value. Whichever key it is in the range of, the text will be assigned accordingly. Here's what I have:</p>
<pre><code> for level_range, level_text in BATTERY_LEVEL_TRANSFORMS.items():
if msg in range(*level_range):
batt_level_str = level_text
break
else:
batt_level_str = "Error"
</code></pre>
<p>That's adjusted code to make the problem understandable. Is this the proper way to do this? It doesn't seem to be the correct solution, but I can't think of what the correct solution would be (other than the equivalent interval conditionals).</p>
| 0 | 2016-08-20T05:35:22Z | 39,050,974 | <p>I actually think your solution is not bad. Here's my variation on it to make it a bit more readable:</p>
<pre><code>BATTERY_LEVELS = (
(0, 30, 'Low'),
(30, 75, 'Medium'),
(75, 101, 'High'),
)
def get_level_text(level):
for low, high, level_text in BATTERY_LEVELS:
if low <= level <= high:
return level_text
return 'Error'
print(get_level_text(20)) # Low
print(get_level_text(40)) # Medium
print(get_level_text(80)) # High
print(get_level_text(120)) # Error
</code></pre>
<p>Here are the changes and reasoning behind them:</p>
<ol>
<li>Change <code>BATTERY_LEVEL_TRANSFORMS</code>, which is a dictionary of tuples to text, to just a tuple of tuples called <code>BATTERY_LEVELS</code>. Since this structure doesn't change and tuples are immutable, it seems like a simplification. There's also no need to have a dictionary since we just have bunch of associated data with no clear key.</li>
<li>Change <code>msg</code> to <code>level</code> since it seems to be a numeric value.</li>
<li>Now that we have the tuple of tuples, change the loop to have intuitive variable names by using <code>low, high, level_text</code> to store the tuple values.</li>
<li>Change the condition to <code>if low <= level <= high</code> to check the range in a more readable way.</li>
</ol>
| 1 | 2016-08-20T05:58:30Z | [
"python",
"python-3.x",
"range",
"conditional"
] |
Python 3 -- range based assignment | 39,050,829 | <p>I have a dictionary:</p>
<pre><code>BATTERY_LEVEL_TRANSFORMS = {(75, 101): "High", (30, 75): "Medium", (0, 30): "Low"}
</code></pre>
<p>and am trying to set a text indicator based on a battery value. Whichever key it is in the range of, the text will be assigned accordingly. Here's what I have:</p>
<pre><code> for level_range, level_text in BATTERY_LEVEL_TRANSFORMS.items():
if msg in range(*level_range):
batt_level_str = level_text
break
else:
batt_level_str = "Error"
</code></pre>
<p>That's adjusted code to make the problem understandable. Is this the proper way to do this? It doesn't seem to be the correct solution, but I can't think of what the correct solution would be (other than the equivalent interval conditionals).</p>
| 0 | 2016-08-20T05:35:22Z | 39,051,055 | <p>Instead of using a dictionary, you can use simple if/else conditional statements:</p>
<pre><code>battery_level = 40  # assign the battery level variable here

if battery_level < 0 or battery_level > 100:
    battery_label = "Error"
elif battery_level <= 30:
    battery_label = "Low"
elif battery_level <= 75:
    battery_label = "Medium"
else:
    battery_label = "High"

print(battery_label)
</code></pre>
<p>For example, I assigned 40 to the battery level statically; here you can assign a dynamic variable instead.</p>
| -1 | 2016-08-20T06:11:38Z | [
"python",
"python-3.x",
"range",
"conditional"
] |
Python 3 -- range based assignment | 39,050,829 | <p>I have a dictionary:</p>
<pre><code>BATTERY_LEVEL_TRANSFORMS = {(75, 101): "High", (30, 75): "Medium", (0, 30): "Low"}
</code></pre>
<p>and am trying to set a text indicator based on a battery value. Whichever key it is in the range of, the text will be assigned accordingly. Here's what I have:</p>
<pre><code> for level_range, level_text in BATTERY_LEVEL_TRANSFORMS.items():
if msg in range(*level_range):
batt_level_str = level_text
break
else:
batt_level_str = "Error"
</code></pre>
<p>That's adjusted code to make the problem understandable. Is this the proper way to do this? It doesn't seem to be the correct solution, but I can't think of what the correct solution would be (other than the equivalent interval conditionals).</p>
| 0 | 2016-08-20T05:35:22Z | 39,051,076 | <p>One option would be store the breakpoints between levels and level names to two sorted lists and use <a href="https://docs.python.org/2/library/bisect.html#bisect.bisect_right" rel="nofollow"><code>bisect.bisect_right</code></a> to do binary search on the breakpoints. Benefit of this approach would be that level retrieval would be <strong>O(log n)</strong> time complexity although it doesn't really matter when you have only couple levels:</p>
<pre><code>from bisect import bisect_right
LEVELS = [0, 30, 75, 101]
TEXTS = ['Low', 'Medium', 'High']
def get_level(num):
index = bisect_right(LEVELS, num) - 1
return TEXTS[index] if 0 <= index < len(TEXTS) else 'Error'
for x in [-1, 0, 29, 30, 74, 75, 100, 101]:
print('{}: {}'.format(x, get_level(x)))
</code></pre>
<p>Output:</p>
<pre><code>-1: Error
0: Low
29: Low
30: Medium
74: Medium
75: High
100: High
101: Error
</code></pre>
<p>If you need fast retrieval and are willing to utilize more space you could create a dict containing all the valid values so that retrieval would be <strong>O(1)</strong> time complexity:</p>
<pre><code>BATTERY_LEVEL_TRANSFORMS = {(75, 101): "High", (30, 75): "Medium", (0, 30): "Low"}
LEVEL_MAP = {i: level
for (lo, hi), level in sorted(BATTERY_LEVEL_TRANSFORMS.items())
for i in range(lo, hi)}
def get_level(num):
return LEVEL_MAP.get(num, 'Error')
for x in [-1, 0, 29, 30, 74, 75, 100, 101]:
print('{}: {}'.format(x, get_level(x)))
</code></pre>
<p>Output:</p>
<pre><code>-1: Error
0: Low
29: Low
30: Medium
74: Medium
75: High
100: High
101: Error
</code></pre>
<p>Feasibility of above approach obviously depends on how many valid values you have.</p>
| 2 | 2016-08-20T06:15:15Z | [
"python",
"python-3.x",
"range",
"conditional"
] |
Converting binary data to hex in python properly | 39,050,836 | <p>I'm working on a program that uses a BMP and a separate file for the transparency layer. I need to convert them into a PNG from that so I'm using PIL in python to do so. However, I need the data from the transparency file in hex so it can be added to the image. I am using the binascii.hexlify function to do that.</p>
<p>Now, the problem I'm having is for some reason the data, after going through the hexlify function (I've systematically narrowed it down by going through my code piece by piece), looks different than it does in my hex editor and is causing slight distortions in the images. I can't seem to figure out where I am going wrong. </p>
<p><a href="http://i.stack.imgur.com/29Fp8.png" rel="nofollow">Data before processing in Hex editor</a></p>
<p><a href="http://i.stack.imgur.com/j8aQf.png" rel="nofollow">Data after processing in Hex editor</a></p>
<p>Here is the problematic part off my code:</p>
<pre><code>filename = askopenfilename(parent=root)
with open(filename, 'rb') as f:
content = f.read()
f.close()
hexContent = binascii.hexlify(content).decode("utf-8")
</code></pre>
<p>My <a href="http://www.mediafire.com/download/gsokhzvln10oxmo/Input.alp" rel="nofollow">input</a></p>
<p>My <a href="http://www.mediafire.com/download/ei1jbs7pjbl756t/output.alp" rel="nofollow">output</a> (This is hexcontent written to a file. Since I know that it is not going wrong in the writing of the file, and it is also irrelevant to my actual program I did not add that part to the code snippet)</p>
<p>Before anyone asks I tried codecs.encode(content, 'hex') and binascii.b2a_hex(content).</p>
<p>As for how I know that it is this part that is messing up, I printed out binascii.hexlify(content) and found the same part as in the hex editor and it looked identical to what I had got in the end.</p>
<p>Another possibility for where it is going wrong is in the "open(filename, 'rb')" step. I haven't yet thought of a way to test that. So any help or suggestions would be appreciated. If you need one of the files I'm using for testing purposes, I'll gladly add one here.</p>
| 2 | 2016-08-20T05:36:38Z | 39,063,115 | <p>If I understand your question correctly then your desired output should match <a href="http://i.stack.imgur.com/29Fp8.png" rel="nofollow">Data before processing in Hex editor</a>. I can obtain this with the following code:</p>
<pre><code>with open('Input.alp', 'rb') as f:
i = 0
for i, chunk in enumerate(iter(lambda: f.read(16), b'')):
if 688 <= i * 16 <= 736:
print i * 16, chunk.encode('hex')
</code></pre>
<p>Outputs:</p>
<pre><code>688 ffffffffffffffffffffffffffffffff
704 ffffffffffffffffffffffe000000000
720 000000000000000001dfffffffffffff
736 ffffffffffffffffffffffffffffffff
</code></pre>
<p>See <a href="http://stackoverflow.com/questions/25005505/pythonic-way-to-hex-dump-files">this answer</a> for a more detailed explanation.</p>
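<p>One quick way to convince yourself that <code>hexlify</code> itself is lossless (so any distortion must come from a later step, such as how the hex is written back out) is a round-trip check:</p>

```python
import binascii

data = bytes(bytearray([0xFF, 0xFF, 0xE0, 0x00, 0x01, 0xDF]))
hexed = binascii.hexlify(data)        # lowercase ASCII hex digits
roundtrip = binascii.unhexlify(hexed)
```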
| 0 | 2016-08-21T10:18:28Z | [
"python",
"python-2.7",
"binascii"
] |
Unresolved reference in Pycharm for importing modules in parent directory | 39,050,845 | <p>There was a similar question but not quite what I ran into so here we go:</p>
<p>My directory structure:<br>
├── PycharmProjects<br>
│   ├── MyProject<br>
│   │   ├── main.py<br>
│   │   └── ...<br>
│   └── Tools<br>
│       └── web.py </p>
<p>To import web.py functions I use</p>
<pre><code>sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from Tools.web import *
</code></pre>
<p>It works in Pycharm as well as in Idle, however the code analysis in Pycharm shows Tools and its function as 'unresolved references'. How do I solve this issue ?</p>
<p>I already tried:<br>
1. <code>__init__.py</code> in each folder of all levels.<br>
2. add tools folder to Project Structure, mark it Source folder (blue color)</p>
| 0 | 2016-08-20T05:38:15Z | 39,057,260 | <p>I recommend the following, which I do, translated into your terms: put a file called, say, my_pycharm.pth into /Lib/site-packages with the following line.</p>
<pre><code> <drive>:/path/to/PycharmProjects
</code></pre>
<p>This essentially appends the <em>contents</em> of /PycharmProjects to the contents of /site-packages. In other words, when import searches /site-packages for modules, it also searches /PycharmProjects. (I do this for each installed Python version.)</p>
<p>Then make each project in /PycharmProjects a proper package by adding <code>__init__.py</code>. </p>
<p>Now, for instance, main.py can have</p>
<pre><code> from Tools import web
</code></pre>
<p>and it should just work.</p>
<p>If you share your code with someone else, they just need to put it in <em>their</em> site-packages.</p>
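<p>For completeness, the runtime equivalent of the snippet in the question can be wrapped in a small helper (this only fixes imports at runtime; PyCharm's static analysis still needs the .pth or source-root approach above). The path used below is purely illustrative:</p>

```python
import os
import sys

def add_grandparent_to_path(file_path):
    """Insert the directory two levels above *file_path* at the front of
    sys.path, so `from Tools.web import *` resolves at runtime."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(file_path)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root
```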
| 0 | 2016-08-20T18:07:41Z | [
"python",
"ide",
"pycharm",
"python-idle"
] |
"TypeError: Unsupported type <class 'list'> in write()" | 39,050,951 | <p>I wish to print 'out.csv' data in excel file when the condition is not uppercase. But the data in out.csv is list of data instead of string. How do I write the list to excel file without converting it to string? (As I have other file which may need to use list instead of string)</p>
<p>Python version #3.5.1</p>
<pre><code>import xlsxwriter
import csv
import xlwt
f1= open('out.csv')
data=csv.reader(f1)
# Create a new workbook and add a worksheet
workbook = xlsxwriter.Workbook('1.xlsx')
worksheet = workbook.add_worksheet()
# Write some test data.
for module in data:
str1 = ''.join(module)
if str1.isupper():
pass
else:
worksheet.write('A', module)
workbook.close()
</code></pre>
| 3 | 2016-08-20T05:55:42Z | 39,064,138 | <blockquote>
<p>How do I write the list to excel file without converting it to string</p>
</blockquote>
<p>You could either loop over the list and <code>write()</code> out each element or you could use the XlsxWriter <a href="http://xlsxwriter.readthedocs.io/worksheet.html#worksheet-write-row" rel="nofollow"><code>write_row()</code></a> method to write the list in one go.</p>
<p>Something like this:</p>
<pre><code>row = 0
col = 0
for module in data:
str1 = ''.join(module)
if str1.isupper():
pass
else:
worksheet.write_row(row, col, module)
row += 1
</code></pre>
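<p>The uppercase filter itself can be checked separately from XlsxWriter; here is a small sketch with made-up rows standing in for the <code>csv.reader</code> output. Note that a row with no cased characters at all (e.g. pure digits) is kept, because <code>str.isupper()</code> returns False for it:</p>

```python
# Hypothetical rows standing in for csv.reader output
rows = [['ABC', 'DEF'], ['abc', 'def'], ['Mixed', 'Case'], ['123', '456']]

# Keep only the rows whose joined text is not fully uppercase
kept = [row for row in rows if not ''.join(row).isupper()]
```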
| 0 | 2016-08-21T12:22:19Z | [
"python",
"python-3.x",
"xlsxwriter"
] |
TypeError: 'connection' object is not callable in python using mysqldb | 39,050,987 | <pre><code>from flask import Flask
from flask_restful import Resource, Api
from flask_restful import reqparse
from flask_mysqldb import MySQL
mysql = MySQL()
app = Flask(__name__)
# MySQL configurations
app.config['MYSQL_USER'] = 'root'
app.config['MYSQL_PASSWORD'] = ''
app.config['MYSQL_DB'] = 'itemlistdb'
app.config['MYSQL_HOST'] = 'localhost'
mysql.init_app(app)
api = Api(app)
class AuthenticateUser(Resource):
def post(self):
try:
# Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('email', type=str, help='Email address for Authentication')
parser.add_argument('password', type=str, help='Password for Authentication')
args = parser.parse_args()
_userEmail = args['email']
_userPassword = args['password']
conn = mysql.connection
cursor = conn.cursor()
cursor.callproc('sp_AuthenticateUser',(_userEmail,))
data = cursor.fetchall()
if(len(data)>0):
if(str(data[0][2])==_userPassword):
return {'status':200,'UserId':str(data[0][0])}
else:
return {'status':100,'message':'Authentication failure'}
except Exception as e:
return {'error': str(e)}
class GetAllItems(Resource):
def post(self):
try:
# Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('id', type=str)
args = parser.parse_args()
_userId = args['id']
conn = mysql.connection
cursor = conn.cursor()
cursor.callproc('sp_GetAllItems',(_userId,))
data = cursor.fetchall()
items_list=[];
for item in data:
i = {
'Id':item[0],
'Item':item[1]
}
items_list.append(i)
return {'StatusCode':'200','Items':items_list}
except Exception as e:
return {'error': str(e)}
class AddItem(Resource):
def post(self):
try:
# Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('id', type=str)
parser.add_argument('item', type=str)
args = parser.parse_args()
_userId = args['id']
_item = args['item']
print _userId;
conn = mysql.connection
cursor = conn.cursor()
cursor.callproc('sp_AddItems',(_userId,_item))
data = cursor.fetchall()
conn.commit()
return {'StatusCode':'200','Message': 'Success'}
except Exception as e:
return {'error': str(e)}
class CreateUser(Resource):
def post(self):
try:
# Parse the arguments
parser = reqparse.RequestParser()
parser.add_argument('email', type=str, help='Email address to create user')
parser.add_argument('password', type=str, help='Password to create user')
args = parser.parse_args()
_userEmail = args['email']
_userPassword = args['password']
conn = mysql.connection
cursor = conn.cursor()
cursor.callproc('spCreateUser',(_userEmail,_userPassword))
data = cursor.fetchall()
if len(data) is 0:
conn.commit()
return {'StatusCode':'200','Message': 'User creation success'}
else:
return {'StatusCode':'1000','Message': str(data[0])}
except Exception as e:
return {'error': str(e)}
api.add_resource(CreateUser, '/CreateUser')
api.add_resource(AuthenticateUser, '/AuthenticateUser')
api.add_resource(AddItem, '/AddItem')
api.add_resource(GetAllItems, '/GetAllItems')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>It throws the error "connection object is not callable".
I have searched all the related questions posted here on Stack Overflow, but was unable to find a solution. If anyone has the solution, please help me with it.
Thank you.</p>
<p>Note: this code is working now... Thank you</p>
| 0 | 2016-08-20T06:00:18Z | 39,051,006 | <p>Replace</p>
<pre><code>conn = mysql.connect()
cursor = conn.cursor()
</code></pre>
<p>with</p>
<pre><code>conn = mysql.connection
cursor = conn.cursor()
</code></pre>
<p>Read more at the Flask-MySQLdb <a href="http://flask-mysqldb.readthedocs.io/en/latest/" rel="nofollow">docs</a>.</p>
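<p>The underlying error can be reproduced without MySQL at all: <code>connection</code> is a property that already returns a connection object, so adding parentheses calls the returned object instead. A minimal stand-in class (not the real flask_mysqldb code) illustrates this:</p>

```python
class FakeMySQL:
    """Minimal stand-in mimicking flask_mysqldb's property-style access."""
    @property
    def connection(self):
        return "an-open-connection"  # a plain object, not a factory

mysql = FakeMySQL()
conn = mysql.connection       # correct: plain attribute access
try:
    mysql.connection()        # wrong: this calls the *returned* object
    error_message = None
except TypeError as exc:
    error_message = str(exc)
```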
| 2 | 2016-08-20T06:03:36Z | [
"python",
"mysql",
"mysql-python"
] |
JSON string parser without literals | 39,051,049 | <p>How to check if a string like this <code>{:[{},{}]}</code>, without any literals, can be represented as a JSON object or not?</p>
<p>The input comes with the following constraints:</p>
<ol>
<li>A JSON object should start with '{' and end with '}'.</li>
<li>The key and value should be separated by a ':'.</li>
<li>A ',' indicates an additional JSON property.</li>
<li>An array consists only of JSON objects. It cannot contain a "key":"value" pair by itself.</li>
</ol>
<p>And it is to be intrepreted like this:</p>
<pre><code>{
"Key": [{
"Key": "Value"
}, {
"Key": "Value"
}]
}
</code></pre>
| 1 | 2016-08-20T06:10:48Z | 39,051,255 | <p>The syntax spec for JSON <a href="http://www.json.org/" rel="nofollow">can be found here</a>.</p>
<p>It indicates that the <code>[{},{}]</code> is legal, because <code>[]</code> has to contain 0 or more elements separated by <code>,</code>, and <code>{}</code> is a legal element. However, the first part of your example is NOT valid - the <code>:</code> must have a string in front of it. While it is legal for it to be an empty string, it's not legal for it to be null, and the interpretation of a totally missing element is ambiguous.</p>
<p>So. <code>{"":[{},{}]}</code> is legal, but <code>{:[{},{}]}</code> is not.</p>
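<p>This is easy to confirm with Python's own <code>json</code> module, which follows the same grammar:</p>

```python
import json

def parses_as_json(text):
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError is a ValueError subclass
        return False
```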
| 2 | 2016-08-20T06:37:56Z | [
"python",
"json",
"python-2.7"
] |
django retrieve multiple in query parameters | 39,051,142 | <p>My url has <code>http://127.0.0.1:8000/theme/category/?q=check,hello</code>. How do I retrieve the values of the query parameter?</p>
<p>When I try doing <code>query = request.GET.get('q')</code> I only get the <code>check</code> but <code>hello</code> is missing.</p>
<p>Any help in getting both check and hello from query string will be helpful</p>
| 1 | 2016-08-20T06:23:39Z | 39,051,201 | <p>You can use %2C, which is the url-encoded value of ,.</p>
<p>source : <a href="http://stackoverflow.com/questions/8359352/alternative-to-sending-comma-seperated-parameter-by-querystring">Alternative to sending comma seperated parameter by querystring</a></p>
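<p>A sketch of both halves in plain Python: the standard library percent-encodes ',' as %2C, and a comma-joined query value can be split back out into its parts (the helper name below is illustrative):</p>

```python
from urllib.parse import quote

# ',' is percent-encoded as %2C when quoting a query value
encoded = quote('check,hello')

def split_terms(raw):
    """Split a comma-joined query value such as 'check,hello'."""
    return [t for t in raw.split(',') if t]
```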
| 1 | 2016-08-20T06:32:05Z | [
"python",
"django",
"query-parameters"
] |
django retrieve multiple in query parameters | 39,051,142 | <p>My url has <code>http://127.0.0.1:8000/theme/category/?q=check,hello</code>. How do I retrieve the values of the query parameter?</p>
<p>When I try doing <code>query = request.GET.get('q')</code> I only get the <code>check</code> but <code>hello</code> is missing.</p>
<p>Any help in getting both check and hello from query string will be helpful</p>
| 1 | 2016-08-20T06:23:39Z | 39,051,205 | <p>For URLs like this:</p>
<p><a href="http://example.com/blah/?myvar=123&myvar=567" rel="nofollow">http://example.com/blah/?myvar=123&myvar=567</a></p>
<p>You can use <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#django.http.QueryDict.getlist" rel="nofollow"><code>getlist()</code></a> like this:</p>
<pre><code>request.GET.getlist('myvar')
</code></pre>
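<p>For repeated parameters, the standard library's <code>urllib.parse.parse_qs</code> behaves much like <code>getlist()</code>, which makes the list-per-key behaviour easy to see outside Django:</p>

```python
from urllib.parse import parse_qs

# Repeated parameters are collected into a list under one key,
# analogous to request.GET.getlist('myvar')
params = parse_qs('myvar=123&myvar=567')
```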
| 1 | 2016-08-20T06:32:53Z | [
"python",
"django",
"query-parameters"
] |
How to serve media files on django production environment? | 39,051,206 | <p>In my settings.py file:</p>
<pre><code>DEBUG = False
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATIC_URL = '/static/'
LOGIN_URL = '/login/'
MEDIA_URL = '/media/'
</code></pre>
<p>In my urls.py file:</p>
<pre><code>urlpatterns += static(settings.STATIC_URL, document_root = settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root = settings.MEDIA_ROOT)
</code></pre>
<p>When I upload the profile image, it is uploaded to the specified folder, but when I visit the user profile URL I get an error like this in the terminal:</p>
<pre><code>"GET /media/profile_images/a_34.jpg HTTP/1.1" 404 103
</code></pre>
<p>a_34.png is present in /media/profile_images/</p>
<p>Then why is it not showing in the browser, and why am I getting a 404 error?</p>
| 2 | 2016-08-20T06:32:56Z | 39,051,324 | <p>Django is not made to serve media files in a production environment. You must configure this directly in the web server.</p>
<p>For example</p>
<p>If you are using apache web server in production, add the below to your virtualhost configuration</p>
<pre><code>Alias /media/ /path/to/media_file/
<Directory /path/to/media_file/>
Order deny,allow
Allow from all
</Directory>
</code></pre>
<p>If you use Nginx you would have similar configuration.</p>
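<p>For Nginx, a sketch of the equivalent configuration might look like this (the path is a placeholder, mirroring the Apache alias above):</p>

```nginx
# Serve /media/ URLs straight from disk; adjust the path to your MEDIA_ROOT
location /media/ {
    alias /path/to/media_file/;
}
```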
| 1 | 2016-08-20T06:47:03Z | [
"python",
"django"
] |
How to serve media files on django production environment? | 39,051,206 | <p>In my settings.py file:</p>
<pre><code>DEBUG = False
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATIC_URL = '/static/'
LOGIN_URL = '/login/'
MEDIA_URL = '/media/'
</code></pre>
<p>In my urls.py file:</p>
<pre><code>urlpatterns += static(settings.STATIC_URL, document_root = settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root = settings.MEDIA_ROOT)
</code></pre>
<p>When I upload the profile image, it is uploaded to the specified folder, but when I visit the user profile URL I get an error like this in the terminal:</p>
<pre><code>"GET /media/profile_images/a_34.jpg HTTP/1.1" 404 103
</code></pre>
<p>a_34.png is present in /media/profile_images/</p>
<p>Then why is it not showing in the browser, and why am I getting a 404 error?</p>
| 2 | 2016-08-20T06:32:56Z | 39,051,396 | <p>You need to set up a server to serve static content in production. Static content is only served by Django when DEBUG is True. So you need to:</p>
<p>1) Set up a server</p>
<p>2) Point the server's media path to the STATIC_ROOT directory</p>
<p>3) Run Django's collectstatic command to collect all the static files into STATIC_ROOT.
Please refer to</p>
<p><a href="https://docs.djangoproject.com/en/1.10/howto/static-files/" rel="nofollow">https://docs.djangoproject.com/en/1.10/howto/static-files/</a></p>
| 1 | 2016-08-20T06:55:41Z | [
"python",
"django"
] |
How to serve media files on django production environment? | 39,051,206 | <p>In my settings.py file:</p>
<pre><code>DEBUG = False
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATIC_URL = '/static/'
LOGIN_URL = '/login/'
MEDIA_URL = '/media/'
</code></pre>
<p>In my urls.py file:</p>
<pre><code>urlpatterns += static(settings.STATIC_URL, document_root = settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root = settings.MEDIA_ROOT)
</code></pre>
<p>When I upload the profile image, it is uploaded to the specified folder, but when I visit the user profile URL I get an error like this in the terminal:</p>
<pre><code>"GET /media/profile_images/a_34.jpg HTTP/1.1" 404 103
</code></pre>
<p>a_34.png is present in /media/profile_images/</p>
<p>Then why is it not showing in the browser, and why am I getting a 404 error?</p>
| 2 | 2016-08-20T06:32:56Z | 39,051,643 | <p>Django discourages serving media files in production from the app server. Use cloud services like Amazon S3 to serve your media files. See this <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-files-uploaded-by-a-user-during-development" rel="nofollow">Django doc on serving media</a>, then set that path in MEDIA_URL.</p>
| 1 | 2016-08-20T07:28:07Z | [
"python",
"django"
] |
How to serve media files on django production environment? | 39,051,206 | <p>In my settings.py file:</p>
<pre><code>DEBUG = False
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATIC_URL = '/static/'
LOGIN_URL = '/login/'
MEDIA_URL = '/media/'
</code></pre>
<p>In my urls.py file:</p>
<pre><code>urlpatterns += static(settings.STATIC_URL, document_root = settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root = settings.MEDIA_ROOT)
</code></pre>
<p>When I upload the profile image, it is uploaded to the specified folder, but when I visit the user profile URL I get an error like this in the terminal:</p>
<pre><code>"GET /media/profile_images/a_34.jpg HTTP/1.1" 404 103
</code></pre>
<p>a_34.png is present in /media/profile_images/</p>
<p>Then why is it not showing in the browser, and why am I getting a 404 error?</p>
| 2 | 2016-08-20T06:32:56Z | 39,167,220 | <p>You can use Amazon S3 for static and media files. It will be better.</p>
<hr>
<p><strong>Problem with S3 Amazon</strong></p>
<p>Making the S3 bucket appear as part of the file system has terrible performance and fails randomly. When we are copying a lot of files it can take 10, 15, or 20 minutes for the copying to complete, making deployments take a long time when they don't need to. If we send these directly into S3 the same copy command takes about 1 minute to complete.</p>
<p><strong>Solution</strong></p>
<p>Subclass S3BotoStorage twice, one class for static files and the other for media files. This allows us to use different buckets and subdirectories for each type. (see: custom_storage.py)</p>
<p><strong>Update settings</strong></p>
<pre><code>1. AWS_STORAGE_BUCKET_NAME - needs to be the bucket that holds static files and media files
2. MEDIAFILES_BUCKET
3. MEDIAFILES_LOCATION
4. DEFAULT_FILE_STORAGE
5. STATICFILES_BUCKET
6. STATICFILES_LOCATION - the subdirectory under the S3 bucket for the app
7. STATIC_URL
8. STATICFILES_STORAGE
</code></pre>
<hr>
<p>Create <strong>custom_storage.py</strong> with the contents:</p>
<pre><code>from django.utils.deconstruct import deconstructible
from storages.backends.s3boto import S3BotoStorage
from django.conf import settings
@deconstructible
class StaticS3Storage(S3BotoStorage):
bucket_name = settings.STATICFILES_BUCKET
location = settings.STATICFILES_LOCATION
@deconstructible
class MediaS3Storage(S3BotoStorage):
bucket_name = settings.MEDIAFILES_BUCKET
location = settings.MEDIAFILES_LOCATION
</code></pre>
<hr>
<p>Sample <strong>settings.py.tmpl</strong> for updates settings (as mentioned above) based on my stack.json</p>
<pre><code>MEDIAFILES_BUCKET = '<%= @node["apps_data"]["aws"]["buckets"]["bucket-name"] %>'
MEDIAFILES_LOCATION = 'folder_name_for_media_files_in_bucket'
DEFAULT_FILE_STORAGE = 'custom_storage.MediaS3Storage'
# If we're not using our S3 backend storage we need to serve the media files via path
if DEFAULT_FILE_STORAGE == "custom_storage.MediaS3Storage":
MEDIA_URL = 'https://%s.s3-website-us-east-1.amazonaws.com/%s/' % (MEDIAFILES_BUCKET, MEDIAFILES_LOCATION)
else:
MEDIA_URL = '/media/'
STATICFILES_BUCKET = '<%= @node["apps_data"]["aws"]["buckets"]["bucket-name"] %>'
STATICFILES_LOCATION = 'folder_name_for_static_files_in_bucket'
STATICFILES_STORAGE = '<%= @node["deploy_data"]["project_name"]["django_static_files_storage"] %>'
# If we're not using our S3 backend storage we need to serve the static files via path
if STATICFILES_STORAGE == "custom_storage.StaticS3Storage":
STATIC_URL = 'https://%s.s3-website-us-east-1.amazonaws.com/%s/' % (STATICFILES_BUCKET, STATICFILES_LOCATION)
else:
STATIC_URL = '/static/'
</code></pre>
<hr>
<h2>load static from staticfiles Django Template Tag</h2>
<p>Change all uses of <strong>{% load static %}</strong> in templates to <strong>{% load static from staticfiles %}</strong></p>
<p>The "static" from staticfiles can make use of different back ends for files, including an S3 back end or a local file back end. Using "load static" uses the Django template tags library, which doesn't handle different back ends.</p>
<p>Use this in the templates when including a static file and after including "static from staticfiles":
{% static "path/to/the/file.ext" %}
This will figure out the full path to the file or, if it's in S3, it will insert a full URL to the file.</p>
<p><strong>Example</strong></p>
<pre><code>{% load static from staticfiles %}
<link rel="stylesheet" type="text/css" href="{% static 'css/style.css' %}">
</code></pre>
<p><strong>Useful info</strong></p>
<p>'django.contrib.staticfiles.storage.StaticFilesStorage' is the default Django static files backend</p>
<p><strong>References</strong></p>
<p><a href="https://docs.djangoproject.com/en/1.9/howto/static-files/" rel="nofollow">https://docs.djangoproject.com/en/1.9/howto/static-files/</a>
<a href="https://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/" rel="nofollow">https://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/</a></p>
| 0 | 2016-08-26T13:13:32Z | [
"python",
"django"
] |
How to trigger a signal when one widget comes within some distance/area of another widget? | 39,051,247 | <p>I want to create an area around a widget such that if any other widgets come in that area, the widget sends a signal. Calculating the distance between the widget and every other widget might be an option, but the problem is there might be several widgets and it might be tedious. Specifically speaking, the widget is a <code>QLabel</code> and I am using <code>QPoint</code> to place the widgets. Is there an efficient way to solve the problem?</p>
<p><img src="http://i.stack.imgur.com/oArDE.jpg" alt="Please refer the Figure"></p>
| 0 | 2016-08-20T06:37:16Z | 39,067,099 | <p>It seems as though you want <code>Widget1</code> to emit a signal when any other widget (e.g., <code>Widget2</code>) is moved such that it lands within the region of interest around <code>Widget1</code>. One way to do this is override the <em>moving</em> widgets' <code>moveEvent()</code> handler functions to notify <code>Widget1</code> of their new position.</p>
<p>This handler is called after a widget has been moved, and is already at the new position. The simplest thing might be to just stick a custom signal at the top of this, before calling the parent's implementation of the handler. Something like:</p>
<pre><code>void MyWidget::moveEvent(QMoveEvent *ev) {
emit widgetMoved(ev->pos()); // emit signal with this widget's new position
QLabel::moveEvent(ev);
}
</code></pre>
<p>This requires subclassing <code>QLabel</code>, declaring the <code>widgetMoved()</code> signal, and reimplementing the handler. You could also send the <code>this</code> pointer in the signal, so that <code>Widget1</code> knows immediately which widget sent the event. (This isn't necessary, since you can get a pointer to the sending widget with <code>QObject::sender()</code>, but it might be easier.)</p>
<p>Then write a slot in <code>Widget1</code>, connected to this signal, that computes the distance between this moved widget and itself, and emit a signal if that distance is less than whatever the ROI size is.</p>
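<p>The distance check inside that Widget1 slot is just plain geometry. A minimal, toolkit-free sketch (positions are modeled as (x, y) tuples rather than QPoint, since the exact signal payload is up to you):</p>

```python
import math

# Hypothetical helper for the Widget1 slot: is the moved widget inside the ROI?
def within_roi(own_pos, other_pos, radius):
    """Return True if other_pos lies within `radius` pixels of own_pos."""
    dx = other_pos[0] - own_pos[0]
    dy = other_pos[1] - own_pos[1]
    return math.hypot(dx, dy) <= radius

# In the real slot you would compare self.pos() against the position carried
# by the widgetMoved() signal and, if this returns True, emit your custom
# "widget entered ROI" signal.
```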
| 0 | 2016-08-21T17:45:09Z | [
"python",
"qt",
"pyqt",
"widget",
"position"
] |
Filling in missing values in a Pandas DataFrame goes wrong | 39,051,292 | <p>Suppose 'df' is a DataFrame object and 'ca' is one of its columns.</p>
<pre><code>>>> df.ca.value_counts()
0.0 176
1.0 65
2.0 38
3.0 20
? 4
Name: ca, dtype: int64
</code></pre>
<p>As you can see, I have four missing values. I want to fill them in, using the code below:</p>
<pre><code>>>> df.loc[df.ca == '?', 'ca'] = 0.0
0.0 176
1.0 65
2.0 38
3.0 20
0.0 4
Name: ca, dtype: int64
</code></pre>
<p>Why did I get 5 unique values? I want to merge the fifth row into the first row, i.e. </p>
<pre><code>0.0 176 + 4 = 180
1.0 65
2.0 38
3.0 20
</code></pre>
<p>How can I fix it?</p>
| 2 | 2016-08-20T06:42:23Z | 39,051,393 | <p>Because <code>'?'</code> was one of your values, I know that <code>df.ca</code> has <code>dtype</code> <code>object</code> (it holds strings). After you <code>replace('?', 0.)</code> you have both the string <code>'0.0'</code> and the float <code>0.0</code> in the column. Once you convert everything to float, you shouldn't have a problem.</p>
<pre><code>df.ca.replace('?', 0.).astype(float).value_counts()
0.0 180
1.0 65
2.0 38
3.0 20
dtype: int64
</code></pre>
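<p>The underlying Python behaviour can be seen without pandas at all: a string <code>'0.0'</code> and a float <code>0.0</code> never compare equal, so any counting operation keeps them as separate keys (a plain-stdlib illustration, not the pandas code path itself):</p>

```python
from collections import Counter

# A string '0.0' and a float 0.0 never compare equal, so they count separately
mixed = ['0.0', '0.0', 0.0]
counts = Counter(mixed)
print(counts)            # two distinct keys

# After converting everything to float, the keys collapse into one
uniform = Counter(float(v) for v in mixed)
print(uniform)
```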
| 1 | 2016-08-20T06:55:18Z | [
"python",
"pandas",
"dataframe"
] |
Filling in missing values in a Pandas DataFrame goes wrong | 39,051,292 | <p>Suppose 'df' is a DataFrame object and 'ca' is one of its columns.</p>
<pre><code>>>> df.ca.value_counts()
0.0 176
1.0 65
2.0 38
3.0 20
? 4
Name: ca, dtype: int64
</code></pre>
<p>As you can see, I have four missing values. I want to fill them in, using the code below:</p>
<pre><code>>>> df.loc[df.ca == '?', 'ca'] = 0.0
0.0 176
1.0 65
2.0 38
3.0 20
0.0 4
Name: ca, dtype: int64
</code></pre>
<p>Why did I get 5 unique values? I want to merge the fifth row into the first row, i.e. </p>
<pre><code>0.0 176 + 4 = 180
1.0 65
2.0 38
3.0 20
</code></pre>
<p>How can I fix it?</p>
| 2 | 2016-08-20T06:42:23Z | 39,051,421 | <p>The following pretty much works too:</p>
<pre><code>In [193]: df = pd.DataFrame({'ca': [0.0]*176 + [1.0]*65 + [2.0]*38 + [3.0]*20 + ['?']*4})
In [194]: df.ca.value_counts()
Out[194]:
0.0 176
1.0 65
2.0 38
3.0 20
? 4
Name: ca, dtype: int64
In [195]: df.loc[df.ca == '?', 'ca'] = 0.0
In [196]: df.ca.value_counts()
Out[196]:
0.0 180
1.0 65
2.0 38
3.0 20
Name: ca, dtype: int64
</code></pre>
| 0 | 2016-08-20T06:57:33Z | [
"python",
"pandas",
"dataframe"
] |
split a line of data with a constraint | 39,051,433 | <p>When I need to split a line of data I get the following result:</p>
<pre><code>>>> s="MS Dhoni cricket captain 10000"
>>> val=s.split()
>>> print val
['MS', 'Dhoni', 'cricket', 'captain', '10000']
</code></pre>
<p>But I expect code in the below manner:</p>
<pre><code>['MS Dhoni', 'cricket', 'captain', '10000']
</code></pre>
<p>Even though there is a space at that position, it must be skipped so that the first two tokens stay together. How can I modify the code?</p>
| -1 | 2016-08-20T07:00:30Z | 39,051,463 | <p>That code does what you want</p>
<pre><code>import re
s="MS Dhoni cricket captain 10000"
print(re.split(r"\s(?=[a-z0-9])", s))
</code></pre>
<p>output:</p>
<pre><code>['MS Dhoni', 'cricket', 'captain', '10000']
</code></pre>
<p>Explanation: split according to spaces, but only if followed by a lowercase letter or a digit (not consumed in the split operation thanks to the <code>?=</code> construction (lookahead)</p>
<p>BUT this is cheating: had <code>MS Dhoni</code> been in the middle of the string, it wouldn't have worked. You are assuming that Python knows how to recognise a title (Mr, ...) or to group all-capital words together with the word that follows. That knowledge exists only in your mind.</p>
<p>It answers your question, but you have to be more specific if you want the answer to be useful for your projects.</p>
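<p>If the real rule is positional rather than orthographic, e.g. the name is always the first two tokens, a hedged alternative is to split normally and rejoin the head, which avoids relying on capitalisation entirely (this assumes the two-token-name rule holds for every line):</p>

```python
# Assumes the name is always the first two whitespace-separated tokens
s = "MS Dhoni cricket captain 10000"
parts = s.split()
result = [" ".join(parts[:2])] + parts[2:]
print(result)  # ['MS Dhoni', 'cricket', 'captain', '10000']
```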
| 2 | 2016-08-20T07:03:50Z | [
"python",
"split"
] |
SSIM / MS-SSIM for TensorFlow | 39,051,451 | <p>Is there a <strong>SSIM</strong> or even <strong>MS-SSIM</strong> implementation for <strong>TensorFlow</strong>? </p>
<p>SSIM (<em>structural similarity index metric</em>) is a metric to measure image quality or similarity of images. It is inspired by human perception and according to a couple of papers, it is a much better loss-function compared to l1/l2. For example, see <a href="http://arxiv.org/abs/1511.08861" rel="nofollow">Loss Functions for Neural Networks for Image Processing</a>.</p>
<p>Up to now, I could not find an implementation in TensorFlow. And after trying to do it by myself by porting it from C++ or python code (such as <a href="https://github.com/lvchigo/VQMT/blob/master/src/SSIM.cpp" rel="nofollow">Github: VQMT/SSIM</a>), I got stuck on methods like applying Gaussian blur to an image in TensorFlow.</p>
<p>Has someone already tried to implement it by himself?</p>
| 2 | 2016-08-20T07:02:34Z | 39,053,516 | <p>After a deep dive into some other python implemention, I could finally implement a running example in TensorFlow:</p>
<pre><code>import tensorflow as tf
import numpy as np
def _tf_fspecial_gauss(size, sigma):
"""Function to mimic the 'fspecial' gaussian MATLAB function
"""
x_data, y_data = np.mgrid[-size//2 + 1:size//2 + 1, -size//2 + 1:size//2 + 1]
x_data = np.expand_dims(x_data, axis=-1)
x_data = np.expand_dims(x_data, axis=-1)
y_data = np.expand_dims(y_data, axis=-1)
y_data = np.expand_dims(y_data, axis=-1)
x = tf.constant(x_data, dtype=tf.float32)
y = tf.constant(y_data, dtype=tf.float32)
g = tf.exp(-((x**2 + y**2)/(2.0*sigma**2)))
return g / tf.reduce_sum(g)
def tf_ssim(img1, img2, cs_map=False, mean_metric=True, size=11, sigma=1.5):
window = _tf_fspecial_gauss(size, sigma) # window shape [size, size]
K1 = 0.01
K2 = 0.03
L = 1 # depth of image (255 in case the image has a differnt scale)
C1 = (K1*L)**2
C2 = (K2*L)**2
mu1 = tf.nn.conv2d(img1, window, strides=[1,1,1,1], padding='VALID')
mu2 = tf.nn.conv2d(img2, window, strides=[1,1,1,1],padding='VALID')
mu1_sq = mu1*mu1
mu2_sq = mu2*mu2
mu1_mu2 = mu1*mu2
sigma1_sq = tf.nn.conv2d(img1*img1, window, strides=[1,1,1,1],padding='VALID') - mu1_sq
sigma2_sq = tf.nn.conv2d(img2*img2, window, strides=[1,1,1,1],padding='VALID') - mu2_sq
sigma12 = tf.nn.conv2d(img1*img2, window, strides=[1,1,1,1],padding='VALID') - mu1_mu2
if cs_map:
value = (((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*
(sigma1_sq + sigma2_sq + C2)),
(2.0*sigma12 + C2)/(sigma1_sq + sigma2_sq + C2))
else:
value = ((2*mu1_mu2 + C1)*(2*sigma12 + C2))/((mu1_sq + mu2_sq + C1)*
(sigma1_sq + sigma2_sq + C2))
if mean_metric:
value = tf.reduce_mean(value)
return value
def tf_ms_ssim(img1, img2, mean_metric=True, level=5):
weight = tf.constant([0.0448, 0.2856, 0.3001, 0.2363, 0.1333], dtype=tf.float32)
mssim = []
mcs = []
for l in range(level):
ssim_map, cs_map = tf_ssim(img1, img2, cs_map=True, mean_metric=False)
mssim.append(tf.reduce_mean(ssim_map))
mcs.append(tf.reduce_mean(cs_map))
filtered_im1 = tf.nn.avg_pool(img1, [1,2,2,1], [1,2,2,1], padding='SAME')
filtered_im2 = tf.nn.avg_pool(img2, [1,2,2,1], [1,2,2,1], padding='SAME')
img1 = filtered_im1
img2 = filtered_im2
# list to tensor of dim D+1
mssim = tf.pack(mssim, axis=0)
mcs = tf.pack(mcs, axis=0)
value = (tf.reduce_prod(mcs[0:level-1]**weight[0:level-1])*
(mssim[level-1]**weight[level-1]))
if mean_metric:
value = tf.reduce_mean(value)
return value
</code></pre>
<p>And here is how to run it:</p>
<pre><code>import numpy as np
import tensorflow as tf
from skimage import data, img_as_float
image = data.camera()
img = img_as_float(image)
rows, cols = img.shape
noise = np.ones_like(img) * 0.2 * (img.max() - img.min())
noise[np.random.random(size=noise.shape) > 0.5] *= -1
img_noise = img + noise
## TF CALC START
BATCH_SIZE = 1
CHANNELS = 1
image1 = tf.placeholder(tf.float32, shape=[rows, cols])
image2 = tf.placeholder(tf.float32, shape=[rows, cols])
def image_to_4d(image):
image = tf.expand_dims(image, 0)
image = tf.expand_dims(image, -1)
return image
image4d_1 = image_to_4d(image1)
image4d_2 = image_to_4d(image2)
ssim_index = tf_ssim(image4d_1, image4d_2)
msssim_index = tf_ms_ssim(image4d_1, image4d_2)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
tf_ssim_none = sess.run(ssim_index,
feed_dict={image1: img, image2: img})
tf_ssim_noise = sess.run(ssim_index,
feed_dict={image1: img, image2: img_noise})
tf_msssim_none = sess.run(msssim_index,
feed_dict={image1: img, image2: img})
tf_msssim_noise = sess.run(msssim_index,
feed_dict={image1: img, image2: img_noise})
###TF CALC END
print('tf_ssim_none', tf_ssim_none)
print('tf_ssim_noise', tf_ssim_noise)
print('tf_msssim_none', tf_msssim_none)
print('tf_msssim_noise', tf_msssim_noise)
</code></pre>
<p>In case you find some errors, please let me know :)</p>
<p><strong>Edit:</strong>
This implementation only supports grayscale images</p>
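<p>If your inputs are RGB, one common workaround is to convert to luminance before feeding the images in. A per-pixel sketch using the ITU-R BT.601 luma weights (in practice you would vectorise this over the whole image with NumPy or TensorFlow ops; the function name here is my own):</p>

```python
# BT.601 luma weights; pixel is an (r, g, b) tuple with components in [0, 1]
def rgb_to_gray(pixel):
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b
```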
| 3 | 2016-08-20T11:17:43Z | [
"python",
"tensorflow",
"metrics",
"ssim"
] |
how to download a file from s3 bucket with a temporary token in python | 39,051,581 | <p>I have a django web app and I want to allow it to download files from my s3 bucket.
The files are not public. I have an IAM policy to access them.
The problem is that I do <strong>NOT</strong> want to download the file to the Django app server and then serve it to the client; that would mean transferring it twice. I want the client to download it directly from S3.
Also, I don't think it's safe to pass my IAM credentials in an HTTP request, so I think I need to use a temporary token.
I read:
<a href="http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html" rel="nofollow">http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html</a></p>
<p>but I just do not understand how to generate a temporary token on the fly.
A python solution (maybe using boto) would be appreciated.</p>
| 0 | 2016-08-20T07:18:11Z | 39,051,646 | <p>With Boto (2), it should be really easy to generate time-limited download URLs, should your IAM policy have the proper permissions. I am using this approach to serve videos to logged-in users from private S3 bucket.</p>
<pre><code>from boto.s3.connection import S3Connection
conn = S3Connection('<aws access key>', '<aws secret key>')
bucket = conn.get_bucket('mybucket')
key = bucket.get_key('mykey', validate=False)
url = key.generate_url(86400)
</code></pre>
<p>This would generate a download URL for key <code>mykey</code> in the given bucket that is valid for 86400 seconds, i.e. 24 hours. Without <code>validate=False</code>, Boto 2 will first check that the key actually exists in the bucket and, if not, will throw an exception. With these server-controlled files that is often an unnecessary extra step, hence <code>validate=False</code> in the example</p>
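<p>For orientation, the URL that comes back is an ordinary HTTPS link carrying the signature and expiry as query parameters, so the client needs no AWS credentials at all. Inspecting an illustrative, made-up presigned URL with the stdlib shows the structure:</p>

```python
from urllib.parse import urlparse, parse_qs

# Made-up example of what a Boto 2 presigned URL looks like (values are fake)
url = ("https://mybucket.s3.amazonaws.com/mykey"
       "?AWSAccessKeyId=AKIAEXAMPLE&Expires=1471900000&Signature=abc123")

params = parse_qs(urlparse(url).query)
print(sorted(params))  # the signature and expiry travel in the query string
```

<p>Nothing here needs AWS credentials; only the server that signs the URL does.</p>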
<hr>
<p>In Boto3 <a href="http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.generate_presigned_url" rel="nofollow">the API is quite different</a>:</p>
<pre><code>s3 = boto3.client('s3')
# Generate a presigned URL to get 'mykey' from 'mybucket'
url = s3.generate_presigned_url(
ClientMethod='get_object',
Params={
'Bucket': 'mybucket',
'Key': 'mykey'
},
    ExpiresIn=86400
)
</code></pre>
| 2 | 2016-08-20T07:28:17Z | [
"python",
"amazon-s3"
] |
Sending voice input from frontend (html) to backend | 39,051,619 | <p>I am trying to record voice from Chrome browser and have it sent to backend for converting it to text. I tried following articles:</p>
<ol>
<li><p><a href="http://codesamplez.com/programming/html5-web-speech-api" rel="nofollow">http://codesamplez.com/programming/html5-web-speech-api</a></p></li>
<li><p><a href="http://www.labnol.org/software/add-speech-recognition-to-website/19989/" rel="nofollow">http://www.labnol.org/software/add-speech-recognition-to-website/19989/</a></p></li>
<li><p><a href="https://shapeshed.com/html5-speech-recognition-api/" rel="nofollow">https://shapeshed.com/html5-speech-recognition-api/</a></p></li>
</ol>
<p>The 2nd link gives me the following output when I say 'What's your name':
<code>127.0.0.1 - - [20/Aug/2016 03:08:30] "GET /dashboard?q=what%27s+your+name HTTP/1.1" 200 -</code></p>
<p>This actually gives me speech to text. </p>
<p>However, I am stuck at trying to read the speech from browser. For backend I am using Python and Flask. Would appreciate any suggestions. </p>
<p>Thank you.</p>
| -1 | 2016-08-20T07:24:11Z | 39,056,903 | <p>I could do it after following 2nd link in question. I sent post request to the same handle and it could grab it. Thanks.</p>
| 0 | 2016-08-20T17:26:40Z | [
"python",
"html5",
"browser",
"flask"
] |
How to pipe multi-line JSON Objects into separate python invocations | 39,051,621 | <p>I know the basics of piping stdin to downstream processes in the shell and as long as each line is treated individually, or as one single input, I can get my pipelines to work.</p>
<p>But when I want to read 4 lines of stdin, do some processing, read 6 more lines, and do the same, my limited understanding of pipelines becomes an issue. </p>
<p>For example, in the below pipeline, each curl invocation produces an unknown number of lines of output that constitute one JSONObject:</p>
<pre><code>cat geocodes.txt \
| xargs -I% -n 1 curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng='%'&sensor=true' \
| python -c "import json,sys;obj=json.load(sys.stdin);print obj['results'][0]['address_components'][3]['short_name'];"
</code></pre>
<p>How can I consume exactly one JSONObject per <code>python</code> invocation? Note I actually have negligible experience in Python. I actually have more experience with <code>Node.js</code> (would it be better to use Node.js to process the JSON curl output?)</p>
<p>Geocodes.txt would be something like:</p>
<pre><code>51.5035705555556,-3.15153263888889
51.5035400277778,-3.15153477777778
51.5035285833333,-3.15150258333333
51.5033861111111,-3.15140833333333
51.5034980555556,-3.15146016666667
51.5035285833333,-3.15155505555556
51.5035362222222,-3.15156338888889
51.5035362222222,-3.15156338888889
</code></pre>
<p><strong>EDIT</strong>
I have a nasty feeling that the answer is that you need to read line by line and check whether you have a complete object before parsing. Is there a function which will do the hard work for me?</p>
| 0 | 2016-08-20T07:24:32Z | 39,051,915 | <p>I believe this approach would accomplish what you want. First, save your python script in a file, <code>my_script.py</code> for example. Then do the following:</p>
<pre><code>cat geocodes.txt \
| xargs -I% sh -c "curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng='%'&sensor=true' | python my_script.py"
</code></pre>
<p>Where my_script.py is:</p>
<pre><code>import json,sys;obj=json.load(sys.stdin);print obj['results'][0]['address_components'][3]['short_name'];
</code></pre>
<p>Output:</p>
<pre><code>Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
</code></pre>
<p>Seems a bit hacky, I'll admit. </p>
<hr>
<p>ORIGINAL ANSWER</p>
<p>I am no bash wizard, so my instinct is to simply do everything in Python. The following script would illustrate that approach in Python 3:</p>
<pre><code>import urllib.request as request
import urllib.parse as parse
import json
serviceurl = "http://maps.googleapis.com/maps/api/geocode/json?"
with open("geocodes.txt") as f:
for line in f:
url = (serviceurl +
parse.urlencode({'latlng':line, 'sensor':'true'}))
with request.urlopen(url) as response:
bytes_data = response.read()
obj = json.loads(bytes_data.decode('utf-8'))
print(obj['results'][0]['address_components'][3]['short_name'])
</code></pre>
<p>Output:</p>
<pre><code>Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
Cardiff
</code></pre>
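<p>Regarding the question's edit: the stdlib can do the "is this a complete object yet" bookkeeping for you. <code>json.JSONDecoder.raw_decode</code> parses one object out of a buffer and reports where it stopped, so concatenated objects can be peeled off one at a time (a sketch independent of the curl/xargs plumbing above):</p>

```python
import json

def iter_json_objects(text):
    """Yield successive JSON objects from a string of concatenated objects."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # skip whitespace between objects
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        # raw_decode returns (object, index just past the object)
        obj, idx = decoder.raw_decode(text, idx)
        yield obj

stream = '{"results": [1]}\n{"results": [2]}'
print(list(iter_json_objects(stream)))
```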
| 1 | 2016-08-20T08:04:58Z | [
"python",
"json",
"node.js",
"stdin",
"xargs"
] |
How to pipe multi-line JSON Objects into separate python invocations | 39,051,621 | <p>I know the basics of piping stdin to downstream processes in the shell and as long as each line is treated individually, or as one single input, I can get my pipelines to work.</p>
<p>But when I want to read 4 lines of stdin, do some processing, read 6 more lines, and do the same, my limited understanding of pipelines becomes an issue. </p>
<p>For example, in the below pipeline, each curl invocation produces an unknown number of lines of output that constitute one JSONObject:</p>
<pre><code>cat geocodes.txt \
| xargs -I% -n 1 curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng='%'&sensor=true' \
| python -c "import json,sys;obj=json.load(sys.stdin);print obj['results'][0]['address_components'][3]['short_name'];"
</code></pre>
<p>How can I consume exactly one JSONObject per <code>python</code> invocation? Note I actually have negligible experience in Python. I actually have more experience with <code>Node.js</code> (would it be better to use Node.js to process the JSON curl output?)</p>
<p>Geocodes.txt would be something like:</p>
<pre><code>51.5035705555556,-3.15153263888889
51.5035400277778,-3.15153477777778
51.5035285833333,-3.15150258333333
51.5033861111111,-3.15140833333333
51.5034980555556,-3.15146016666667
51.5035285833333,-3.15155505555556
51.5035362222222,-3.15156338888889
51.5035362222222,-3.15156338888889
</code></pre>
<p><strong>EDIT</strong>
I have a nasty feeling that the answer is that you need to read line by line and check whether you have a complete object before parsing. Is there a function which will do the hard work for me?</p>
| 0 | 2016-08-20T07:24:32Z | 39,059,856 | <p>Have a look at:</p>
<p><a href="http://trentm.com/json/#FEATURE-Grouping" rel="nofollow">http://trentm.com/json/#FEATURE-Grouping</a></p>
<pre><code>Grouping can be helpful for "one JSON object per line" formats or for things such as:
$ cat *.json | json -g ...
</code></pre>
<p>To install:</p>
<pre><code>sudo npm install -g json
</code></pre>
<p>I haven't tried this myself so can't verify it works, but it might be that missing link to do what you want (Group JSON)</p>
| 0 | 2016-08-21T00:27:41Z | [
"python",
"json",
"node.js",
"stdin",
"xargs"
] |
How to pipe multi-line JSON Objects into separate python invocations | 39,051,621 | <p>I know the basics of piping stdin to downstream processes in the shell and as long as each line is treated individually, or as one single input, I can get my pipelines to work.</p>
<p>But when I want to read 4 lines of stdin, do some processing, read 6 more lines, and do the same, my limited understanding of pipelines becomes an issue. </p>
<p>For example, in the below pipeline, each curl invocation produces an unknown number of lines of output that constitute one JSONObject:</p>
<pre><code>cat geocodes.txt \
| xargs -I% -n 1 curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng='%'&sensor=true' \
| python -c "import json,sys;obj=json.load(sys.stdin);print obj['results'][0]['address_components'][3]['short_name'];"
</code></pre>
<p>How can I consume exactly one JSONObject per <code>python</code> invocation? Note I actually have negligible experience in Python. I actually have more experience with <code>Node.js</code> (would it be better to use Node.js to process the JSON curl output?)</p>
<p>Geocodes.txt would be something like:</p>
<pre><code>51.5035705555556,-3.15153263888889
51.5035400277778,-3.15153477777778
51.5035285833333,-3.15150258333333
51.5033861111111,-3.15140833333333
51.5034980555556,-3.15146016666667
51.5035285833333,-3.15155505555556
51.5035362222222,-3.15156338888889
51.5035362222222,-3.15156338888889
</code></pre>
<p><strong>EDIT</strong>
I have a nasty feeling that the answer is that you need to read line by line and check whether you have a complete object before parsing. Is there a function which will do the hard work for me?</p>
| 0 | 2016-08-20T07:24:32Z | 39,060,758 | <p>You don't need python or node.js. <code>jq</code> is designed specifically for json filtering UNIX style:</p>
<pre><code>sudo apt-get install jq
</code></pre>
<p>Then:</p>
<pre><code>cat geocodes.txt \
| xargs -I% curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng='%'&sensor=true' \
| jq --unbuffered '.results[0].formatted_address'
</code></pre>
<p>Or, if you want to do this on all your JPG files:</p>
<pre><code>find -iname "**jpg" \
| xargs -n 1 -d'\n' exiftool -q -n -p '$GPSLatitude,$GPSLongitude'
| xargs -I% curl -s 'http://maps.googleapis.com/maps/api/geocode/json?latlng='%'&sensor=true'
| jq --unbuffered '.results[0].formatted_address'
</code></pre>
| 0 | 2016-08-21T04:06:34Z | [
"python",
"json",
"node.js",
"stdin",
"xargs"
] |
Problem with py2neo and Flask query | 39,051,685 | <p>I have an issue in my application: I am trying to run some code in Flask with py2neo. I have the latest version of Neo4j and Python 2.7.</p>
<p>Here is the code for a function in the User class:</p>
<pre><code>class User:
def __init__(self, username):
self.username = username
def find(self):
user = graph.find_one("User", "username", self.username)
def add_challenge(self,challenge_title,total_question_per_user,challengecat,percentage_question,prize,ranks,challenge_status):
query = '''
MATCH (u:User),(p:Prize),(ca:Category)
WHERE u.username = {username} and p.pid = {prize} and ca.catname = {challengecat}
CREATE (ch:Challenge {chid: str(uuid.uuid4()),challenge_title: {challenge_title}, total_question_per_user: {total_question_per_user},challenge_status: {challenge_status},timestamp:timestamp(),date:date()}),
(p)-[:BELONG {rank: {ranks} }]->(ch),(ca)-[:BELONG {percentage_question: {percentage_question} }]->(ch)
'''
return graph.run(query,username=self.username,prize=prize,challengecat=challengecat,challenge_title=challenge_title,total_question_per_user=total_question_per_user,challenge_status=challenge_status,ranks=ranks,percentage_question=percentage_question)
</code></pre>
<p>I am calling it from my view file (I imported the User class there), but when I run this page it shows an error.</p>
<p>This is the code of the view file:</p>
<pre><code>@app.route('/admin/add/challenge', methods = ['GET', 'POST'])
def admin_add_challenge():
if not session.get('username'):
return redirect(url_for('admin_login'))
if request.method == 'POST':
challenge_title = request.form['challenge_title']
total_question_per_user = request.form['total_question_per_user']
challengecat = request.form['challengecat']
percentage_question = request.form['percentage_question']
prize = request.form['prize']
ranks = request.form['ranks']
challenge_status = request.form['challenge_status']
if not challenge_title or not total_question_per_user or not ranks:
if not challenge_title:
flash('Please Enter Challenge')
if not total_question_per_user:
flash('Please Enter Number of question Per Player')
if not ranks:
flash('Please Enter Ranks for win this Challenge')
else:
User(session['username']).add_challenge(challenge_title,total_question_per_user,challengecat,percentage_question,prize,ranks,challenge_status)
flash('Challenge Added successfully')
return redirect(url_for('admin_add_challenge'))
categories = get_categories()
prizes = get_prizes()
return render_template('admin/admin_add_challenge.html',categories=categories,prizes=prizes)
</code></pre>
<p>Here is the error when I submit the challenge form on the page <a href="http://sitename/admin/add/challenge" rel="nofollow">http://sitename/admin/add/challenge</a></p>
<pre><code>ERROR in app: Exception on /admin/add/challenge [POST]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/root/gamepro/ddqcore/views.py", line 430, in admin_add_challenge
User(session['username']).add_challenge(challenge_title,total_question_per_user,challengecat,percentage_question,prize,ranks,challenge_status)
File "/root/gamepro/ddqcore/models.py", line 285, in add_challenge
return graph.run(query,username=self.username,prize=prize,challengecat=challengecat,challenge_title=challenge_title,total_question_per_user=total_question_per_user,challenge_status=challenge_status,ranks=ranks,percentage_question=percentage_question)
File "/usr/local/lib/python2.7/site-packages/py2neo/database/__init__.py", line 731, in run
return self.begin(autocommit=True).run(statement, parameters, **kwparameters)
File "/usr/local/lib/python2.7/site-packages/py2neo/database/__init__.py", line 1277, in run
self.finish()
File "/usr/local/lib/python2.7/site-packages/py2neo/database/__init__.py", line 1296, in finish
self._sync()
File "/usr/local/lib/python2.7/site-packages/py2neo/database/__init__.py", line 1286, in _sync
connection.fetch()
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 337, in fetch
self.acknowledge_failure()
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 284, in acknowledge_failure
fetch()
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 337, in fetch
self.acknowledge_failure()
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 284, in acknowledge_failure
fetch()
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 322, in fetch
raw.writelines(self.channel.chunk_reader())
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 173, in chunk_reader
chunk_header = self._recv(2)
File "/usr/local/lib/python2.7/site-packages/py2neo/packages/neo4j/v1/bolt.py", line 156, in _recv
raise ProtocolError("Server closed connection")
ProtocolError: Server closed connection
49.32.44.55 - - [20/Aug/2016 06:49:05] "POST /admin/add/challenge HTTP/1.1" 500 -
</code></pre>
| 0 | 2016-08-20T07:34:47Z | 39,093,818 | <p>In Python 2.7 with py2neo version 3 we cannot use a query like that; instead, build the nodes and relationships with the py2neo object API, like this:</p>
<pre><code>selector = NodeSelector(graph)
# select() returns a NodeSelection; first() gives the actual node (or None)
selected_user = selector.select("User", username=user).first()
selected_prize = selector.select("Prize", pid=prize).first()
selected_cat = selector.select("Category", catname=challengecat).first()
challenge = Node("Challenge", chid=str(uuid.uuid4()),
                 challenge_title=challenge_title,
                 total_question_per_user=total_question_per_user,
                 challenge_status=challenge_status,
                 timestamp=timestamp(), date=date())  # timestamp()/date() are helpers defined elsewhere
rel = Relationship(selected_user, "ADDED", challenge)
rel1 = Relationship(selected_prize, "BELONG", challenge)
rel2 = Relationship(selected_cat, "BELONG", challenge)
graph.create(rel)
graph.create(rel1)
graph.create(rel2)
</code></pre>
<p>Thanks
CSR</p>
| 0 | 2016-08-23T06:27:55Z | [
"python",
"flask",
"neo4j",
"py2neo"
] |
django model create does not work | 39,051,752 | <p>I added a new model to my app named SocialProfile, which keeps the social-related properties of a user and has a one-to-one relationship with the UserProfile model. This is the SocialProfile model in models.py:</p>
<pre><code>class SocialProfile(models.Model):
profile = models.OneToOneField('UserProfile', on_delete=models.CASCADE)
facebook_profiles = models.ManyToManyField('FacebookContact', related_name='synced_profiles', blank=True)
google_profiles = models.ManyToManyField('GoogleContact', related_name='synced_profiles', blank=True)
hash = models.CharField(max_length=30, unique=True, blank=True)
def save(self, *args, **kwargs):
if not self.pk:
hash = gen_hash(self.id, 30)
while SocialProfile.objects.filter(hash=hash).exists():
hash = gen_hash(self.id, 30)
self.hash = hash
def __str__(self):
return str(self.profile)
</code></pre>
<p>Right now, I keep a record for synced facebook & google profiles. Now, the problem is that creating new objects does not actually add any record in the database. I cannot create instances with scripts or admin. In case of scripts, the following runs without errors but no record is created:</p>
<pre><code>for profile in UserProfile.objects.all():
sp = SocialProfile.objects.create(profile=profile)
print(profile, sp)
SocialProfile.objects.count()
</code></pre>
<p>The prints are done, and look correct and the count() returns 0. I try creating objects in admin, but I get the following error:</p>
<pre><code>"{{socialprofile object}}" needs to have a value for field "socialprofile" before
this many-to-many relationship can be used.
</code></pre>
<p>I think that is a separate problem, because if I comment out the many-to-many relationships it completes without error (but still no new records). I mentioned it in case it helps.</p>
<p>I have checked the database, the tables are there, no new migrations are detected either. </p>
<p>Any help and idea about what could be the problem would be appreciated!</p>
| 1 | 2016-08-20T07:43:03Z | 39,051,940 | <p>You've overwritten the save method so that it never actually saves anything. You need to call the superclass method at the end:</p>
<pre><code>def save(self, *args, **kwargs):
if not self.pk:
...
return super(SocialProfile, self).save(*args, **kwargs)
</code></pre>
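The same failure mode can be reproduced without Django: if an override never delegates to the parent class, the parent's side effect simply never happens. A minimal plain-Python sketch (class names are hypothetical, this is not Django code):

```python
class Base:
    def __init__(self):
        self.saved = False

    def save(self):
        # stand-in for the real persistence work done by the parent
        self.saved = True

class Broken(Base):
    def save(self):
        pass  # forgot to call super().save() -- nothing is "saved"

class Fixed(Base):
    def save(self):
        # custom pre-save logic would go here ...
        return super().save()  # delegate so the real save still runs

b, f = Broken(), Fixed()
b.save()
f.save()
print(b.saved, f.saved)  # False True
```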
| 1 | 2016-08-20T08:07:26Z | [
"python",
"django",
"django-models"
] |
Python Tornado Web Service Cron Restart How? | 39,051,824 | <p>Fail Code:</p>
<pre><code>root = os.path.dirname(__file__)
static_application = tornado.web.Application([
(r"/(.*)", tornado.web.StaticFileHandler,
{"path": root, "default_filename": "Example.html"})
])
if __name__ == "__main__":
print "Starting Server..."
static_application.listen(8080)
tornado.ioloop.IOLoop.instance().start()
</code></pre>
<p>Fail iptables:</p>
<pre><code>Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
1 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8080
</code></pre>
<p>Fail html:</p>
<pre><code><!DOCTYPE HTML>
<html>
<head>
<link href='StyleFile0.css' rel='stylesheet' type='text/css' />
<link href='StyleFile1.css' rel='stylesheet' type='text/css' />
<script src='ScriptFile0.js' type='text/javascript'> </script>
<script src='ScriptFile1.js' type='text/javascript'> </script>
</code></pre>
<p>Fail Cron:</p>
<pre><code> 0 * * * * cd /home/maindude/CodeHome && timeout -k 59m 59m python Process_TornadoService.py
</code></pre>
<p>Fail .js .css browser console "GET" ' s:</p>
<p><a href="http://i.stack.imgur.com/N0mlM.png" rel="nofollow"><img src="http://i.stack.imgur.com/N0mlM.png" alt="ScriptGetFail"></a></p>
<hr>
<p>So I figured out how to host a basic tornado web service.</p>
<p>I spawn up an Amazon server and throw it on there, everything works great. </p>
<p>Then I want to have the service die and restart every hour. </p>
<p>If I host on port 80 -> I need sudo permissions to start service so cron fails</p>
<p>If I host the service on port 8080 -> I have to use iptables </p>
<p>If I use iptables -> my script dependencies in html seem to be mysteriously unavailable. </p>
<hr>
<p>What is the right combo of python, tornado, html, cron, iptables to fix this and get a tornado service to die and restart every hour?</p>
<p><strong>EDIT:</strong></p>
<p>Everything above works just fine to achieve this. </p>
| 0 | 2016-08-20T07:51:32Z | 39,057,303 | <p>Ok - so it turns out -> I was not patient enough... </p>
<p>The above code, cron, html, javascript, and iptables all work flawlessly to achieve a server restart with tornado on port 8080 on an amazon web server. </p>
<p>Epic celebration ensues</p>
| 0 | 2016-08-20T18:13:13Z | [
"python",
"cron",
"tornado",
"iptables"
] |
Deploying Flask in Openshift | 39,051,871 | <p>The following code works without any problem on my system's localhost, but it isn't working on OpenShift.
There is something wrong with my <strong>wsgi.py</strong>. Do I have to pass my username and password using environment variables, or do I need to change the <em>localhost</em>? </p>
<p>The following is the tree of the directory/repository...</p>
<pre><code>myflaskaws
├── requirements.txt
├── setup.py
├── static
│   ├── assets
│   │   └── style.css
│   └── images
│       ├── no.png
│       └── yes.png
├── templates
│   ├── index.html
│   ├── login.html
│   ├── searchlist.html
│   └── update.html
├── test.py
├── test.pyc
└── wsgi.py
</code></pre>
<hr>
<p><strong>wsgi.py</strong></p>
<pre><code>#!/usr/bin/python
import os
virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/'
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
pass
from test import app as application
if __name__ == '__main__':
from wsgiref.simple_server import make_server
httpd = make_server('localhost', 8051, application)
print("Serving at http://localhost:8051/ \n PRESS CTRL+C to Terminate. \n")
httpd.serve_forever()
print("Terminated!!")
</code></pre>
<hr>
<p><strong>test.py</strong></p>
<pre><code>from flask import Flask
app = Flask(__name__)
</code></pre>
<p>PS: I'm not using <code>if __name__ == '__main__':</code> in <strong>test.py</strong></p>
| 1 | 2016-08-20T07:58:56Z | 39,054,128 | <p>Yes, you do need to use Openshift's environment variables to set up the IP and port. </p>
<p>Try adding in the below code to setup the proper IP and port depending if you are on OS or localhost. </p>
<pre><code>import os

if 'OPENSHIFT_APP_NAME' in os.environ:  # are we on OpenShift?
    ip = os.environ['OPENSHIFT_PYTHON_IP']
    port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
else:
    ip = '0.0.0.0'  # local development
    port = 8051

httpd = make_server(ip, port, application)
</code></pre>
| 0 | 2016-08-20T12:25:49Z | [
"python",
"flask",
"openshift",
"pymongo",
"wsgi"
] |
Pygame--pygame can't run a pingpong collision game | 39,051,973 | <pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import pygame
import sys
class MyBallClass(pygame.sprite.Sprite):
def __init__(self, image_file, speed, location):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load(image_file)
self.rect = self.image.get_rect()
self.rect.left, self.rect.top = location
self.speed = speed
def move(self):
global points, score_text
self.rect = self.rect.move(self.speed)
if self.rect.left < 0 or self.rect.right > screen.get_width():
self.speed[0] = -self.speed[0]
if self.rect.top <= 0:
self.speed[1] = -self.speed[1]
points += 1
score_text = font.render(str(points), 1, (0, 0, 0))
class MyPaddleClass(pygame.sprite.Sprite):
def __init__(self, location=[0, 0]):
pygame.sprite.Sprite.__init__(self)
image_surface = pygame.surface.Surface([100, 20])
image_surface.fill([0, 0, 0])
self.image = image_surface.convert()
self.rect = self.image.get_rect()
self.rect.left, self.rect.top = location
pygame.init()
screen = pygame.display.set_mode([640, 480])
clock = pygame.time.Clock()
ball_speed = [3, 4]
myball = MyBallClass("E:\\python file\\blackball.jpg", ball_speed, [50, 50])
ballgroup = pygame.sprite.Group(myball)
paddle = MyPaddleClass([270, 400])
lives = 3
points = 0
font = pygame.font.Font(None, 50)
score_text = font.render(str(points), 1, (0, 0, 0))
textpos = [10, 10]
done = False
while 1:
clock.tick(30)
screen.fill([255, 255, 255])
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
elif event.type == pygame.MOUSEMOTION:
paddle.rect.centerx = event.pos[0]
if pygame.sprite.spritecollide(paddle, ballgroup, False):
myball.speed[1] = -myball.speed[1]
myball.move()
if not done:
screen.blit(myball.image, myball.rect)
screen.blit(paddle.image, paddle.rect)
screen.blit(score_text, textpos)
for i in range(lives):
width = screen.get_width()
screen.blit(myball.image, [width - 40 * i, 20])
pygame.display.flip()
if myball.rect.top <= screen.get_rect().bottom:
# In get_rect(), you cannot leave out brackets
lives -= 1
if lives == 0:
final_text1 = "Game over!"
final_text2 = 'your final score is' + str(points)
ft1_font = pygame.font.Font(None, 70)
ft2_font = pygame.font.Font(None, 50)
ft1_surface = font.render(final_text1, 1, (0, 0, 0))
ft2_surface = font.render(final_text2, 1, (0, 0, 0))
screen.blit(ft1_surface, [screen.get_width() / 2, 100])
screen.blit(ft2_surface, [screen.get_width() / 2, 200])
pygame.display.flip()
done = True
else:
pygame.time.delay(1000)
myball.rect.topleft = [50, 50]
frame_rate = clock.get_fps()
print(frame_rate)
</code></pre>
<p>Here is the pygame window picture of my code:(<a href="http://i.stack.imgur.com/SFyBJ.jpg" rel="nofollow">http://i.stack.imgur.com/SFyBJ.jpg</a>)</p>
<p>Every time I run it, I don't have the time to control the paddle, then it shows game is over. I have been searching for a long time but can't find out why. It seems like the blackball is moving so fast that the game is over in about one second. But I already set the speed to a reasonable range, I am so confused. Can anyone help?</p>
 | 3 | 2016-08-20T08:11:40Z | 39,056,224 | <p>I was able to rebuild your game and fix the problem.</p>
<p>Try inverting the condition here:</p>
<pre><code>if myball.rect.top >= screen.get_rect().bottom:
</code></pre>
<p>and it works fine: I can hit the ball with the bat, make it bounce, ...</p>
<p>I don't know why it took me so long to figure it out: you lose the game if the ball goes off the screen at the bottom. For that, its top y must be greater than the bottom of the window (computer screen coordinates start at 0,0 in the upper left).</p>
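To make the coordinate direction explicit, here is a pygame-free sketch of the corrected check (the constant and function names are made up for illustration):

```python
SCREEN_BOTTOM = 480  # height of the question's 640x480 window

def ball_is_lost(ball_top):
    # pygame's y axis grows downward from (0, 0) at the top-left corner,
    # so the ball has left the window once its top edge reaches or
    # passes the window's bottom edge
    return ball_top >= SCREEN_BOTTOM

print(ball_is_lost(50))   # False -- ball still on screen
print(ball_is_lost(480))  # True  -- ball fell off the bottom
```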
| 1 | 2016-08-20T16:11:09Z | [
"python",
"pygame"
] |
Kivy--Plyer--Android--sending notification while app is not running | 39,052,054 | <p>I am writing a python app in kivy.</p>
<p>the idea is to allow the user to make let the user make notes of bookings for certain dates, and then the program should send them a notification on that day about the booking.</p>
<p>Theres probably a simple way to do this, I am using plyer.</p>
<pre><code>from plyer import notification
notification.notify(title="Kivy Notification",message="Plyer Up and Running!",app_name="Waentjies",app_icon="icon.png",timeout=10)
</code></pre>
<p>This works: I get a notification whenever I call that function. However, I can't find a way to send this notification while the app is not running. I know there are other questions that seem to answer this, but they don't; they simply run an app in the background, which I don't want to do. All I want is something like Clash of Clans' notification when your troops are ready for battle, or Facebook's notification when somebody liked your post.</p>
<p>Any help appreciated with this.</p>
<p>Thanks</p>
| 10 | 2016-08-20T08:21:49Z | 39,202,531 | <p>I think you should take a look at the <a href="https://developer.android.com/reference/android/app/AlarmManager.html" rel="nofollow">Android AlarmManager</a>. If this is what you need, <a href="http://cheparev.com/kivy-receipt-notifications-and-service/" rel="nofollow">here</a> is an example for Kivy.</p>
<blockquote>
<p><strong>AlarmManager</strong></p>
<p>This class provides access to the system alarm services. These allow you to schedule your application to be run at some point in the future. When an alarm goes off, the Intent that had been registered for it is broadcast by the system, automatically starting the target application if it is not already running.</p>
</blockquote>
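Whatever mechanism ends up firing the notification (the AlarmManager in the linked example), it needs an absolute trigger time. As a small Android-free sketch, a hypothetical helper like the one below converts a booking date into the epoch-millisecond value that AlarmManager-style <code>set()</code> calls expect (the helper name and the 9:00 default are assumptions, not part of plyer or the linked example):

```python
import datetime

def trigger_millis(booking_date, hour=9):
    """Epoch milliseconds for `hour` o'clock (local time) on the booking
    date -- the triggerAtMillis value AlarmManager-style APIs expect."""
    dt = datetime.datetime.combine(booking_date, datetime.time(hour))
    return int(dt.timestamp() * 1000)
```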
| 1 | 2016-08-29T09:15:28Z | [
"android",
"python",
"notifications",
"kivy"
] |
Why doesn't this method, redefined at runtime, retain its value? | 39,052,084 | <p>I'm stuck on something I suspect is quite simple, but I just can't wrap my head around it. I'm trying to create a class with a method that can be redefined on the fly. I want to be able to do this for an arbitrary number of instances, but I'm only showing two here in order to keep things simple.</p>
<p>Here is a MWE of my code: </p>
<pre><code>class Foo():
def __init__(self, z):
self.z = z
def f(self, t):
return 0
def f(obj1, obj2, t):
return (obj1.z - obj2.z) * t
a, b = Foo(3), Foo(5)
print(a.f(1), b.f(1)) # --> 0, 0
x, y = a, b
x.f = lambda t: f(x, y, t)
print(a.f(1), b.f(1)) # --> -2, 0
x, y = b, a
x.f = lambda t: f(x, y, t)
print(a.f(1), b.f(1)) # --> 2, 2
</code></pre>
<p>Why does the value of <code>a.f(1)</code> change?</p>
| 1 | 2016-08-20T08:25:28Z | 39,052,133 | <p>It changes because you are modifying the <code>x</code> and <code>y</code> global variables, which are referred to by the <code>f</code> function you defined:</p>
<pre><code>In [2]: a, b = Foo(3), Foo(5)
In [3]: print(a.f(1), b.f(1))
0 0
In [4]: x, y = a, b
...: x.f = lambda t: f(x, y, t)
...: print(a.f(1), b.f(1))
...:
-2 0
In [5]: x, y = b, a
In [6]: print(a.f(1), b.f(1)) # you changed x and y
2 0
</code></pre>
<p>After you swap <code>x</code> and <code>y</code> you have that <code>a.f</code> is the function <code>lambda t: f(x, y, t)</code> which means it calls <code>f(b, a, t)</code> and since <code>b = Foo(5)</code> and <code>a = Foo(3)</code> you have <code>5-3 == 2</code> instead of <code>-2</code>.</p>
<p>If you want to fix the value passed by <code>f</code> and "unlink" it from the global variables you could use default arguments:</p>
<pre><code>In [2]: x, y = a, b
...: x.f = lambda t, x=x, y=y: f(x, y, t)
...:
In [3]: print(a.f(1), b.f(1))
-2 0
In [4]: x, y = b, a
In [5]: print(a.f(1), b.f(1))
-2 0
</code></pre>
<p>Since default values are evaluated at definition time, by using <code>lambda t, x=x, y=y</code> you end up fixing the values of <code>x</code> and <code>y</code> as they were when the function was defined, so that when you subsequently swap them this doesn't affect that function.</p>
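The whole difference can be compressed into a self-contained snippet that mirrors the question's swap:

```python
def f(x, y, t):
    return (x - y) * t

x, y = 3, 5
late = lambda t: f(x, y, t)             # x and y are looked up at call time
bound = lambda t, x=x, y=y: f(x, y, t)  # x and y are frozen at definition time

x, y = y, x  # swap the globals, as in the question

print(late(1))   # 2  -- sees the swapped values
print(bound(1))  # -2 -- still uses the original values
```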
| 0 | 2016-08-20T08:31:17Z | [
"python",
"python-3.x",
"global-variables"
] |
upgrade scipy in redhat | 39,052,087 | <p>I'm trying to upgrade scipy on Redhat 6.7 using pip to local folder:</p>
<pre><code>pip install --user --upgrade scipy
</code></pre>
<p>However, the following error pop out:</p>
<pre><code>You are using pip version 7.1.0, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting scipy
Using cached scipy-0.18.0.tar.gz
Installing collected packages: scipy
Running setup.py install for scipy
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-i0GYOd/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-aq6BzP-record/install-record.txt --single-version-externally-managed --compile --user --prefix=:
Note: if you need reliable uninstall behavior, then install
with pip instead of using `setup.py install`:
- `pip install .` (from a git repo or downloaded source
release)
- `pip install scipy` (last SciPy release on PyPI)
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in /usr/local/lib64
libraries mkl,vml,guide not found in /usr/local/lib
libraries mkl,vml,guide not found in /usr/lib64
libraries mkl,vml,guide not found in /usr/lib
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
libraries lapack_atlas not found in /usr/local/lib64
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/lib64/atlas
numpy.distutils.system_info.atlas_threads_info
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
FOUND:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
language = f77
include_dirs = ['/usr/include']
/usr/lib64/python2.6/site-packages/numpy/distutils/command/config.py:394: DeprecationWarning:
+++++++++++++++++++++++++++++++++++++++++++++++++
Usage of get_output is deprecated: please do not
use it anymore, and avoid configuration checks
involving running executable on the target machine.
+++++++++++++++++++++++++++++++++++++++++++++++++
DeprecationWarning)
customize GnuFCompiler
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':
/* This file is generated from numpy/distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
ATL_buildinfo();
return 0;
}
C compiler: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC
compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/lib64/atlas -llapack -lptf77blas -lptcblas -latlas -o _configtest
ATLAS version 3.8.4 built by mockbuild on Thu Feb 9 08:22:21 EST 2012:
UNAME : Linux x86-010.build.bos.redhat.com 2.6.18-274.17.1.el5 #1 SMP Wed Jan 4 22:45:44 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
INSTFLG : -1 0 -a 1
ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_P4E -DATL_CPUMHZ=3600 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664
F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
CACHEEDGE: 524288
F77 : gfortran, version GNU Fortran (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64
SMC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64
SKC : gcc, version gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -g -Wa,--noexecstack -fPIC -m64
success!
removing: _configtest.c _configtest.o _configtest
FOUND:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
language = f77
define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
include_dirs = ['/usr/include']
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-i0GYOd/scipy/setup.py", line 415, in <module>
setup_package()
File "/tmp/pip-build-i0GYOd/scipy/setup.py", line 411, in setup_package
setup(**metadata)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/core.py", line 152, in setup
config = configuration()
File "/tmp/pip-build-i0GYOd/scipy/setup.py", line 335, in configuration
config.add_subpackage('scipy')
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 957, in add_subpackage
caller_level = 2)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 926, in get_subpackage
caller_level = caller_level + 1)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 863, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy/setup.py", line 9, in configuration
config.add_subpackage('cluster')
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 957, in add_subpackage
caller_level = 2)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 926, in get_subpackage
caller_level = caller_level + 1)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 863, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "scipy/cluster/setup.py", line 24, in configuration
extra_info=blas_opt)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/misc_util.py", line 1419, in add_extension
ext = Extension(**ext_args)
File "/usr/lib64/python2.6/site-packages/numpy/distutils/extension.py", line 45, in __init__
export_symbols)
TypeError: __init__() takes at most 4 arguments (13 given)
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-i0GYOd/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-aq6BzP-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-i0GYOd/scipy
</code></pre>
<p>It seems numpy is out-dated, when I tried to upgrade numpy, similar error message show up. How should I install numpy and scipy in local folder?</p>
| 1 | 2016-08-20T08:25:35Z | 39,052,195 | <p>RedHat 6.7 comes with Python 2.6 and that is no longer supported by the latest version of numpy/scipy. Apart from that, on Linux, you should not mix packages installed with the package manager (using <code>yum</code>, <code>dnf</code>, <code>apt-get</code>, etc.) with packages installed from PyPI using <code>pip</code>.
Since most Linux distributions rely on a working Python installation for all kinds of programs, a <code>pip</code> install can easily break things, even if it is done in a non-global directory.</p>
<p>Instead always use a <a href="https://pypi.python.org/pypi/virtualenv" rel="nofollow"><code>virtualenv</code></a> and install the package that you need there. You can install <code>virtualenv</code> with your package manager, and if you want you can create a virtualenv for <code>virtualenv</code> and upgrade <code>pip</code> and <code>virtualenv</code> there to get rid of the message that pip is outdated and get the latest virtualenv.</p>
<p>This might seem overkill, but it is worth preventing your utilities from breaking. In addition working this way makes it easy to use a newer python than the one provided by the system. Assuming you installed the latest from the 2.7 series in <code>/opt/python/2.7</code>, just provide <code>-p /opt/python/2.7/bin/python</code> to the system <code>virtualenv</code> when you create your initial virtualenv (using virtualenv from the package manager).</p>
| 0 | 2016-08-20T08:38:41Z | [
"python",
"scipy",
"pip"
] |
Serve from /tmp on Heroku with Cherrypy | 39,052,097 | <p>My site writes new <code>.html</code> files into <code>/tmp</code> after the dyno is created.
The <code>cherrypy</code> app is in <code>/app</code> due to Heroku's structure.</p>
<p>This prevents me from routing the <code>.html</code> files created with Cherrypy. Any idea on how to do this?</p>
| -2 | 2016-08-20T08:27:04Z | 39,056,121 | <p>Heroku's <a href="https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem" rel="nofollow">filesystem is ephemeral</a>:</p>
<blockquote>
<p>Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dynoâs lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.</p>
</blockquote>
<p>It's not meant for permanent storage, and anything you write out to disk can disappear at any moment.</p>
<p>If you need to write data out persistently you can <a href="https://devcenter.heroku.com/articles/s3" rel="nofollow">use something like Amazon S3</a> or store it in a database.</p>
<blockquote>
<p>Will it be possible to serve the code directly from db then? Assuming that i write the code into db?</p>
</blockquote>
<p>Yes.</p>
<p>Heroku itself <a href="https://www.heroku.com/postgres" rel="nofollow">provides a PostgreSQL service</a> and <a href="https://elements.heroku.com/addons#data-stores" rel="nofollow">many others are available from the addons marketplace</a>.</p>
| 2 | 2016-08-20T16:01:07Z | [
"python",
"heroku",
"cherrypy"
] |
Modify code to capture values greater than - instead of exact match | 39,052,132 | <p>The following code works well for identifying whether a value is hit or missed over following rows and giving the output column showing the time the condition was met.</p>
<pre><code>import datetime,numpy as np,pandas as pd;
nan = np.nan;
a = pd.DataFrame( {'price': {datetime.time(9, 0): 1, datetime.time(10, 0): 0, datetime.time(11, 0): 3, datetime.time(12, 0): 4, datetime.time(13, 0): 7, datetime.time(14, 0): 6, datetime.time(15, 0): 5, datetime.time(16, 0): 4, datetime.time(17, 0): 0, datetime.time(18, 0): 2, datetime.time(19, 0): 4, datetime.time(20, 0): 7}, 'reversal': {datetime.time(9, 0): nan, datetime.time(10, 0): nan, datetime.time(11, 0): nan, datetime.time(12, 0): nan, datetime.time(13, 0): nan,
datetime.time(14, 0): 6.0, datetime.time(15, 0): nan, datetime.time(16, 0): nan, datetime.time(17, 0): nan, datetime.time(18, 0): nan, datetime.time(19, 0): nan, datetime.time(20, 0): nan}});
a['target_hit_time']=a['target_miss_time']=nan;
a['target1']=a['reversal']+1;
a['target2']=a['reversal']-a['reversal'];
a.sort_index(1,inplace=True);
hits = a.ix[:,:-2].dropna();
for row,hit in hits.iterrows():
forwardRows = [row]<a['price'].index.values
targetHit = a.index.values[(hit['target1']==a['price'].values) & forwardRows][0];
targetMiss = a.index.values[(hit['target2']==a['price'].values) & forwardRows][0];
if targetHit>targetMiss:
a.loc[row,"target_miss_time"] = targetMiss;
else:
a.loc[row,"target_hit_time"] = targetHit;
a
</code></pre>
<p>This image shows the output from the above code which can easily be reproduced by running this code:</p>
<p><a href="http://i.stack.imgur.com/pdsTm.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/pdsTm.jpg" alt="current working code"></a></p>
<p>The issue I have is that when this code is utilised on real data the price may not exactly match and/or may gap though a value. So if we look at the following image:</p>
<p><a href="http://i.stack.imgur.com/q9UGV.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/q9UGV.jpg" alt="desired"></a></p>
<p>We see that <code>target1</code> criteria would be met if we were looking for a value <code>>= 7.5</code> and not just looking for the value <code>7.5</code>. Can anybody help modify the code to achieve this please?</p>
| 4 | 2016-08-20T08:31:04Z | 39,053,138 | <p>Without modifying your code heavily, this is what I came up with:</p>
<pre><code>import numpy as np
for row,hit in hits.iterrows():
print ("row", row)
print ("hit",hit)
forwardRows = a[a.index.values > row]
targetHit = forwardRows[(hit['target1'] <= forwardRows['price'].values)].head(1).index.values
targetMiss = forwardRows[(hit['target2'] >= forwardRows['price'].values)].head(1).index.values
if targetHit>targetMiss:
a.loc[row,"target_miss_time"] = targetMiss
else:
a.loc[row,"target_hit_time"] = targetHit
price reversal target1 target2 target_hit_time target_miss_time
09:00:00 1 NaN NaN NaN NaN NaN
10:00:00 0 NaN NaN NaN NaN NaN
11:00:00 3 NaN NaN NaN NaN NaN
12:00:00 4 NaN NaN NaN NaN NaN
13:00:00 7 NaN NaN NaN NaN NaN
14:00:00 6 6.5 7.5 0.0 [20:00:00] NaN
15:00:00 5 NaN NaN NaN NaN NaN
16:00:00 4 NaN NaN NaN NaN NaN
17:00:00 2 NaN NaN NaN NaN NaN
18:00:00 2 NaN NaN NaN NaN NaN
19:00:00 4 NaN NaN NaN NaN NaN
20:00:00 8 NaN NaN NaN NaN NaN
</code></pre>
<p>This is still to be improved since targetHit and targetMiss return arrays: you need to check whether each array has any elements, and if both do, compare their first elements. Right now it only works if one array is empty.</p>
| 0 | 2016-08-20T10:36:45Z | [
"python",
"pandas"
] |
Modify code to capture values greater than - instead of exact match | 39,052,132 | <p>The following code works well for identifying whether a value is hit or missed over following rows and giving the output column showing the time the condition was met.</p>
<pre><code>import datetime,numpy as np,pandas as pd;
nan = np.nan;
a = pd.DataFrame( {'price': {datetime.time(9, 0): 1, datetime.time(10, 0): 0, datetime.time(11, 0): 3, datetime.time(12, 0): 4, datetime.time(13, 0): 7, datetime.time(14, 0): 6, datetime.time(15, 0): 5, datetime.time(16, 0): 4, datetime.time(17, 0): 0, datetime.time(18, 0): 2, datetime.time(19, 0): 4, datetime.time(20, 0): 7}, 'reversal': {datetime.time(9, 0): nan, datetime.time(10, 0): nan, datetime.time(11, 0): nan, datetime.time(12, 0): nan, datetime.time(13, 0): nan,
datetime.time(14, 0): 6.0, datetime.time(15, 0): nan, datetime.time(16, 0): nan, datetime.time(17, 0): nan, datetime.time(18, 0): nan, datetime.time(19, 0): nan, datetime.time(20, 0): nan}});
a['target_hit_time']=a['target_miss_time']=nan;
a['target1']=a['reversal']+1;
a['target2']=a['reversal']-a['reversal'];
a.sort_index(1,inplace=True);
hits = a.ix[:,:-2].dropna();
for row,hit in hits.iterrows():
forwardRows = [row]<a['price'].index.values
targetHit = a.index.values[(hit['target1']==a['price'].values) & forwardRows][0];
targetMiss = a.index.values[(hit['target2']==a['price'].values) & forwardRows][0];
if targetHit>targetMiss:
a.loc[row,"target_miss_time"] = targetMiss;
else:
a.loc[row,"target_hit_time"] = targetHit;
a
</code></pre>
<p>This image shows the output from the above code which can easily be reproduced by running this code:</p>
<p><a href="http://i.stack.imgur.com/pdsTm.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/pdsTm.jpg" alt="current working code"></a></p>
<p>The issue I have is that when this code is utilised on real data the price may not exactly match and/or may gap though a value. So if we look at the following image:</p>
<p><a href="http://i.stack.imgur.com/q9UGV.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/q9UGV.jpg" alt="desired"></a></p>
<p>We see that <code>target1</code> criteria would be met if we were looking for a value <code>>= 7.5</code> and not just looking for the value <code>7.5</code>. Can anybody help modify the code to achieve this please?</p>
| 4 | 2016-08-20T08:31:04Z | 39,054,755 | <p>Some ifs and thats all :D... </p>
<pre><code>import datetime,numpy as np,pandas as pd;
nan = np.nan;
a = pd.DataFrame( {'price': {datetime.time(9, 0): 1, datetime.time(10, 0): 0, datetime.time(11, 0): 3, datetime.time(12, 0): 4, datetime.time(13, 0): 7, datetime.time(14, 0): 6, datetime.time(15, 0): 5, datetime.time(16, 0): 4, datetime.time(17, 0): 2, datetime.time(18, 0): 2, datetime.time(19, 0): 4, datetime.time(20, 0): 8}, 'reversal': {datetime.time(9, 0): nan, datetime.time(10, 0): nan, datetime.time(11, 0): nan, datetime.time(12, 0): nan, datetime.time(13, 0): nan,
datetime.time(14, 0): 6.0, datetime.time(15, 0): nan, datetime.time(16, 0): nan, datetime.time(17, 0): nan, datetime.time(18, 0): nan, datetime.time(19, 0): nan, datetime.time(20, 0): nan}});
a['target_hit_time']=a['target_miss_time']=nan;
a['target1']=a['reversal']+1;
a['target2']=a['reversal']-a['reversal'];
a.sort_index(1,inplace=True);
hits = a.ix[:,:-2].dropna();
for row,hit in hits.iterrows():
forwardRows = a[a.index.values > row];
targetHit = hit['target1']<=forwardRows['price'].values;
targetMiss = hit['target2']==forwardRows['price'].values;
targetHit = forwardRows[targetHit].head(1).index.values;
targetMiss = forwardRows[targetMiss].head(1).index.values;
targetHit, targetMiss = \
targetHit[0] if targetHit else [], \
targetMiss[0] if targetMiss else [];
goMiss,goHit = False,False
if targetHit and targetMiss:
if targetHit>targetMiss: goMiss=True;
else: goHit=True;
elif targetHit and not targetMiss:goHit = True;
elif not targetHit and targetMiss:goMiss = True;
if goMiss:a.loc[row,"target_miss_time"] = targetMiss;
elif goHit:a.loc[row,"target_hit_time"] = targetHit;
print '#'*50
print a
'''
##################################################
price reversal target1 target2 target_hit_time target_miss_time
09:00:00 1 NaN NaN NaN NaN NaN
10:00:00 0 NaN NaN NaN NaN NaN
11:00:00 3 NaN NaN NaN NaN NaN
12:00:00 4 NaN NaN NaN NaN NaN
13:00:00 7 NaN NaN NaN NaN NaN
14:00:00 6 6.0 7.0 0.0 20:00:00 NaN
15:00:00 5 NaN NaN NaN NaN NaN
16:00:00 4 NaN NaN NaN NaN NaN
17:00:00 2 NaN NaN NaN NaN NaN
18:00:00 2 NaN NaN NaN NaN NaN
19:00:00 4 NaN NaN NaN NaN NaN
20:00:00 8 NaN NaN NaN NaN NaN
'''
</code></pre>
| 1 | 2016-08-20T13:37:10Z | [
"python",
"pandas"
] |
grep No such file or directory with envoy.run | 39,052,191 | <p>I'm trying to build a script that executes some grep searches in my logs and prints the results.
I'm trying to use Envoy because it is easier than subprocess, but when I execute the grep command it gives me back an error of no such file or directory.</p>
<p>The dir structure is easy:</p>
<ul>
<li>. # root of script</li>
<li>test.py # script file</li>
<li>web_logs/log/ # dir that contains log to search in</li>
</ul>
<p>My test.py is easy:</p>
<pre><code>import envoy
def test(value):
search = "grep 'cv="+str(value)+"' ./web_logs/log/log_*"
print(search) #check of the search string
r = envoy.run(search)
print(r.status_code, r.std_out, r.std_err)#check of the command
response = r.std_out
if __name__ == "__main__":
test(2)
</code></pre>
<p>The output is:</p>
<pre><code>grep 'cv=2' ./web_logs/log/log_*
(2, '', 'grep: ./web_logs/log/log_*: No such file or directory\n')
</code></pre>
<p>If i run the same command:</p>
<pre><code>grep 'cv=2' ./web_logs/log/log_*
</code></pre>
<p>I can find the occurrence of the string "cv=2" in the log files.</p>
<p>Where is the error?</p>
<p><strong>Update after the answers</strong>
The problem is in using of * that envoy cannot explode without use of glob module so I using the subprocess as it is and I try to study better the using of glob module to improve envoy.</p>
<p>The new code I used is:</p>
<pre><code>import subprocess
def test(value):
search = "grep 'cv="+str(value)+"' ./web_logs/log/log_*"
print(search) #check of the search string
proc = subprocess.check_output(search, shell=True)
print proc.split('\n')
if __name__ == "__main__":
test(2)
</code></pre>
| 3 | 2016-08-20T08:38:25Z | 39,066,853 | <p>Why it works in terminal but not in envoy is related to globbing (<a href="http://www.tldp.org/LDP/abs/html/globbingref.html" rel="nofollow">bash example</a>).</p>
<p>When you run in your terminal </p>
<pre><code>grep 'cv=2' ./web_logs/log/log_*
</code></pre>
<p>bash will parse the command line and replace the star character with every file name that matches. So if you have <code>./web_logs/log/log_1</code> <code>./web_logs/log/log_2</code> and <code>./web_logs/log/log_foo</code> your command will actually be</p>
<pre><code>grep 'cv=2' ./web_logs/log/log_1 ./web_logs/log/log_2 ./web_logs/log/log_foo
</code></pre>
<p>When you execute the same thing in envoy, things are different: it won't perform the globbing of the files, so it passes grep a literal file named <code>./web_logs/log/log_*</code>, which doesn't exist. This is actually confirmed by the line you pasted in your question.</p>
<pre><code>print r.std_err
'grep: ./web_logs/log/log_*: No such file or directory\n'
</code></pre>
<p>ps: there is a <a href="https://docs.python.org/2/library/glob.html" rel="nofollow">glob module</a> for python</p>
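As a sketch of that glob-based workaround (my own, not part of the original answer; it assumes Python 3's <code>subprocess.run</code> and reuses the question's paths), you can expand the wildcard yourself and call grep without a shell:

```python
import glob
import subprocess

# Expand the wildcard in Python, then pass the real file names to grep
# directly; no shell is involved, so no reliance on shell globbing.
files = glob.glob("./web_logs/log/log_*")
if files:
    proc = subprocess.run(["grep", "cv=2", *files],
                          capture_output=True, text=True)
    print(proc.stdout.splitlines())
```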
| 1 | 2016-08-21T17:19:42Z | [
"python",
"grep",
"envoy"
] |
grep No such file or directory with envoy.run | 39,052,191 | <p>I am trying to build a script that executes some grep searches in my logs and prints the results.
I am trying to use Envoy because it is easier than subprocess, but when I execute the grep command it gives me back a "no such file or directory" error.</p>
<p>The dir structure is easy:</p>
<ul>
<li>. # root of script</li>
<li>test.py # script file</li>
<li>web_logs/log/ # dir that contains log to search in</li>
</ul>
<p>My test.py is easy:</p>
<pre><code>import envoy
def test(value):
search = "grep 'cv="+str(value)+"' ./web_logs/log/log_*"
print(search) #check of the search string
r = envoy.run(search)
print(r.status_code, r.std_out, r.std_err)#check of the command
response = r.std_out
if __name__ == "__main__":
test(2)
</code></pre>
<p>The output is:</p>
<pre><code>grep 'cv=2' ./web_logs/log/log_*
(2, '', 'grep: ./web_logs/log/log_*: No such file or directory\n')
</code></pre>
<p>If I run the same command:</p>
<pre><code>grep 'cv=2' ./web_logs/log/log_*
</code></pre>
<p>I can find the occurrence of the string "cv=2" in the log files.</p>
<p>Where is the error?</p>
<p><strong>Update after the answers</strong>
The problem is that envoy cannot expand the <code>*</code> wildcard without using the glob module, so for now I am using subprocess as it is, and I will study the glob module further to improve the envoy version.</p>
<p>The new code I used is:</p>
<pre><code>import subprocess
def test(value):
search = "grep 'cv="+str(value)+"' ./web_logs/log/log_*"
print(search) #check of the search string
proc = subprocess.check_output(search, shell=True)
print proc.split('\n')
if __name__ == "__main__":
test(2)
</code></pre>
| 3 | 2016-08-20T08:38:25Z | 39,066,988 | <p>@baptistemm is actually right in that since you're not running bash as part of your process the globbing is not working.</p>
<p>However what's happening is a bit deeper.</p>
<p>When you run a sub process it can be done by one of several system services (system calls).</p>
<h1>Short Answer (TLDR;)</h1>
<p>Here's the correct way to do this:</p>
<pre><code>import envoy
def test(value):
search = "/bin/sh -c \"grep 'cv="+str(value)+"' ./web_logs/log/log_*\""
print(search) #check of the search string
r = envoy.run(search)
print(r.status_code, r.std_out, r.std_err)#check of the command
response = r.std_out
if __name__ == "__main__":
test(2)
</code></pre>
<p>Running the command as a shell command will take care of globbing.</p>
<h1>Long answer</h1>
<p>Whenever a sub process is executed, it eventually gets translated into an execve system call (or equivalent).</p>
<p>In the <code>C</code> library there are helper functions such as <code>system(3)</code> and <code>popen(3)</code> which wrap around <code>execve(2)</code> to provide easier ways of executing processes. <code>system</code> launches a shell and passes its argument as-is to the <code>-c</code> option of the shell. <code>popen</code> does extra magic, kinda like what envoy is doing in python.</p>
<p>In envoy, the argument is parsed for <code>|</code> (see <code>def expand_args(command):</code> in the envoy code), and the equivalent of <code>popen</code> is then used to execute the processes. <code>envoy</code> essentially does what the shell does with the <code>|</code> marker (splits things up across the <code>|</code> and then uses popen).</p>
<p>What envoy is NOT doing is interpreting <code>*</code> as the shell does, as in expanding it to match files using a <code>glob</code> function of some sort. Bash does. Thus my answer.</p>
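To see why the literal star survives this kind of parsing, here is a small illustrative sketch (my own, not from the answer) of what a popen-style call receives when a command line is split without a shell:

```python
import shlex

cmd = "grep 'cv=2' ./web_logs/log/log_*"
argv = shlex.split(cmd)  # quote-aware splitting, but no glob expansion
print(argv)  # ['grep', 'cv=2', './web_logs/log/log_*']
# The '*' is passed through verbatim; a shell would have replaced it
# with the matching file names before grep ever ran.
```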
<p>A fun exercise would be for you to contribute code to envoy :-) and make it do the globbing. </p>
| 1 | 2016-08-21T17:34:06Z | [
"python",
"grep",
"envoy"
] |
Breaking out of a for loop inside of python string join() Method | 39,052,197 | <p>My code is :</p>
<pre><code>maxlimit = 5
mystring = ' \r\n '.join([('' if (idx >= maxlimit) else str(name))
for idx,name in enumerate(queryset)])
</code></pre>
<p>How can I break out of the for loop inside the <code>join()</code> method if <code>idx >= maxlimit</code>?</p>
| 1 | 2016-08-20T08:38:48Z | 39,052,276 | <p>Well, you should use the <code>if</code> condition in the list comprehension instead:</p>
<pre><code>mystring = ' \r\n '.join([str(name) for idx, name in enumerate(queryset)
if idx < maxlimit])
</code></pre>
<p>This generates a list of 5 items only.</p>
<p>But then it is easier to just limit the number of items to iterate with <a href="https://docs.python.org/3/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice</code></a>; or, if the queryset supports slicing, just slice it with <code>[:maxlimit]</code>:</p>
<pre><code>from itertools import islice
' \r\n '.join([str(name) for name in islice(queryset, maxlimit)])
</code></pre>
<hr>
<p>Though for a simple application of one function to each element, I often use <a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow"><code>map</code></a> as it requires less typing:</p>
<pre><code>' \r\n '.join(map(str, islice(queryset, maxlimit)))
</code></pre>
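A quick demonstration (with a made-up <code>queryset</code> list standing in for the real one) that this keeps only the first <code>maxlimit</code> items:

```python
from itertools import islice

maxlimit = 5
queryset = ['a', 'b', 'c', 'd', 'e', 'f', 'g']  # stand-in for the real queryset

mystring = ' \r\n '.join(map(str, islice(queryset, maxlimit)))
print(mystring.split(' \r\n '))  # ['a', 'b', 'c', 'd', 'e']
```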
| 3 | 2016-08-20T08:51:12Z | [
"python",
"string",
"for-loop",
"join",
"break"
] |
Bypassing conditions | 39,052,321 | <p>Can anybody understand why the following code fails?</p>
<pre><code>def main(A):
A.sort()
B = A[:]
ll = len(B)
while ll > 1:
for i in range(ll):
for n in range(i + 1, ll):
if (B[i] + B[n]) % 2 == 0:
B.remove(B[n])
B.remove(B[i])
main(B)
return B
if __name__ == '__main__':
result = main([4, 5, 3, 7, 2])
print(result)
</code></pre>
<p>It runs ok until my list has only one value, reaches the "return B" statement, and then it jumps back into the loop again. What am I missing???</p>
| -1 | 2016-08-20T08:57:26Z | 39,052,368 | <p>You are using recursion, calling <code>main(B)</code> again in the loop. When the recursive call returns, the loop from which you called it <em>continues on</em>.</p>
<p>Moreover, you ignore the return value of the recursive calls. Since you use a <em>copy</em> of the list in each invocation of <code>main()</code>, ignoring the return value means you discard all the work the recursive call did.</p>
<p>Last but not least, you are deleting elements from <code>B</code>; you'll run into index errors with your loop as both <code>i</code> and <code>n</code> range to <code>ll</code>, a length that is no longer valid after removing elements. Since you can't update a <code>range()</code> object you are looping over; you'd have to use <code>while</code> loops that test the length each iteration:</p>
<pre><code>i = 0
while i < len(B):
n = i + 1
while n < len(B):
if (B[i] + B[n]) % 2 == 0:
del B[n], B[i]
break
else:
n += 1
else:
i += 1
</code></pre>
<p>The above loop will remove any two numbers from the list that sum up to an even number. Note that when you delete both the <code>n</code> and <code>i</code> values, you can use the <em>index</em>, rather than search for the number with <code>list.remove()</code>.</p>
<p>The inner <code>while</code> loop uses an <code>else</code> suite; this is only executed when you <em>don't</em> break out of the loop, which only happens if we didn't just remove a value at index <code>i</code>. So when you <em>don't</em> find a pairing for <code>B[i]</code>, <code>i</code> is incremented to move on to the next candidate. If you <em>do</em> find a pairing, the value at <code>i</code> is deleted, so now <code>B[i]</code> already references the next value.</p>
<p>I'm not sure why you'd need to recurse after this loop; a recursive call won't find more such pairings, since you test <em>every combination</em> already.</p>
<p>Demo, adding in copying by using <code>sorted()</code>:</p>
<pre><code>>>> def main(A):
... B = sorted(A)
... i = 0
... while i < len(B):
... n = i + 1
... while n < len(B):
... if (B[i] + B[n]) % 2 == 0:
... del B[n], B[i]
... break
... else:
... n += 1
... else:
... i += 1
... return B
...
>>> main([4, 5, 3, 7, 2])
[7]
</code></pre>
| 1 | 2016-08-20T09:03:48Z | [
"python",
"python-3.x",
"if-statement",
"while-loop"
] |
Show if restaurant is open or not in current time and day | 39,052,502 | <p>I have two models, one for the restaurant and another for its operating time. Operating time has a foreign key relation with restaurant, as operating times might differ between days of the week. I want to show whether the restaurant is open or closed at the current time on the current day. Would it be better to code this in views.py or to create a template tag for it? The convention says views should be thin.</p>
<p><strong>models for restaurant and operating time are</strong></p>
<pre><code>class Restaurant(models.Model):
owner = models.ForeignKey(User)
name = models.CharField(max_length=150, db_index=True)
address = models.CharField(max_length=100)
class OperatingTime(models.Model):
MONDAY = 1
TUESDAY = 2
WEDNESDAY = 3
THURSDAY = 4
FRIDAY = 5
SATURDAY = 6
SUNDAY = 7
DAY_IN_A_WEEK = (
(MONDAY, 'Monday'),
(TUESDAY, 'Tuesday'),
(WEDNESDAY, 'Wednesday'),
(THURSDAY, 'Thursday'),
(FRIDAY, 'Friday'),
(SATURDAY, 'Saturday'),
(SUNDAY, 'Sunday'),
)
# HOURS = [(i, i) for i in range(1, 25)]
restaurant = models.ForeignKey(Restaurant,related_name="operating_time")
opening_time = models.TimeField()
closing_time = models.TimeField()
day_of_week = models.IntegerField(choices=DAY_IN_A_WEEK)
def __str__(self):
return '{} ---- {}'.format(self.opening_time, self.closing_time)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def home(request):
restaurant = Restaurant.objects.all()
print('restaurant',restaurant)
operating_time = OperatingTime.objects.all()
print('operating time',operating_time)
for operating_time in operating_time: # Tried to find if restaurant is opened or closed based on opening time & closing time in current time and day for each restaurant
opening = operating_time.opening_time
closing = operating_time.closing_time
print('opening',opening)
current_time = datetime.now()
current_time = current_time.time()
if current_time < closing or opening< current_time:
print('opening')
else:
print('closed')
return render(request, 'restaurant/homepage.html', {'restaurant':restaurant})
</code></pre>
<p><strong>How can I do this? Is it better to code it in the views or to create a template tag?</strong></p>
| 1 | 2016-08-20T09:23:57Z | 39,052,578 | <p>I'm working with django at my work place for about a month now and have been taught to not put any logic into the views. The view should be only to pass on the data from the database or post data to the database. The logic should be done in template tags. </p>
| 0 | 2016-08-20T09:33:45Z | [
"python",
"django",
"python-3.x",
"django-models",
"django-views"
] |
How to write new pooling layer in TensorFlow? | 39,052,545 | <p>I'd like to write an RMS pooling layer in TensorFlow. This is like <code>tf.nn.avg_pool()</code>, but instead of a simple average it should compute the RMS average.</p>
<p>I'd like to write this using numpy initially; I don't care about it running on GPU (but later I would like to write a GPU version)</p>
| 0 | 2016-08-20T09:29:49Z | 39,060,493 | <p>Try</p>
<pre><code>tf.sqrt(tf.nn.avg_pool(tf.square(value - \
tf.reduce_mean(value, reduction_indices=[0, 1, 2])),
<ksize>, <strides>, <padding>))
</code></pre>
<p>Note that l_2 pooling is commonly used and not RMS pooling.</p>
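Since the question asks for a numpy version first, here is a minimal numpy sketch of plain RMS pooling (sqrt of the mean of squares per window). This is my own sketch, not from the answer, and it assumes non-overlapping windows whose size divides the input evenly:

```python
import numpy as np

def rms_pool(x, k):
    # Non-overlapping RMS pooling over a 2-D array:
    # sqrt(mean(x**2)) within each k x k window.
    h, w = x.shape
    windows = x.reshape(h // k, k, w // k, k)
    return np.sqrt((windows ** 2).mean(axis=(1, 3)))

x = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(rms_pool(x, 2))  # [[2.5]], i.e. sqrt((9 + 16 + 0 + 0) / 4)
```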
| 1 | 2016-08-21T03:03:23Z | [
"python",
"neural-network",
"tensorflow"
] |
Django IntegrityError: no default value | 39,052,606 | <p>There is a longer SQL query that I need to run to migrate data from one table to another. It worked fine all the way up to this week when I migrated to Django 1.9.2 and Python 3.5. </p>
<p>The problem is that the table has a field 'last_update' that is NULL by default. The table definition is</p>
<pre><code>last_update = models.DateTimeField("last updated",null=True, auto_now_add=False, auto_now=True)
</code></pre>
<p>I've also checked with MySQL Workbench and the table is indeed set to allow NULLs on that field and the default is NULL.</p>
<p>The query crashes after about 30 minutes with the error message:</p>
<pre><code>django.db.utils.IntegrityError: (1364, "Field 'last_update' doesn't have a default value")
</code></pre>
<p>Very irritating! How can I insert rows to that table?</p>
<p>Per request, here is the table definition:</p>
<pre><code>CREATE TABLE `google_pla_plaproducts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`product_code` varchar(55) CHARACTER SET utf8 NOT NULL,
`min_bid` decimal(20,4) NOT NULL,
`current_bid` decimal(20,4) NOT NULL,
`max_bid` decimal(20,4) NOT NULL,
`creation_date` datetime(6),
`last_update` datetime(6) DEFAULT NULL,
`status_change` datetime(6) DEFAULT NULL,
`starting_bid` decimal(20,4) NOT NULL,
`adgroup_id` int(11) NOT NULL,
`status_id` int(11) NOT NULL,
`product_price` decimal(20,4) NOT NULL,
PRIMARY KEY (`id`,`product_code`,`adgroup_id`),
KEY `google_p_adgroup_id_3b7c9d4ecddd04ba_fk_google_pla_plaadgroup_id` (`adgroup_id`),
KEY `google_pla_plaprod_status_id_2f8113a5ef0dd021_fk_globs_status_id` (`status_id`),
KEY `idx_prod_code` (`product_code`),
CONSTRAINT `google_p_adgroup_id_3b7c9d4ecddd04ba_fk_google_pla_plaadgroup_id` FOREIGN KEY (`adgroup_id`) REFERENCES `google_pla_plaadgroup` (`id`),
CONSTRAINT `google_pla_plaprod_status_id_2f8113a5ef0dd021_fk_globs_status_id` FOREIGN KEY (`status_id`) REFERENCES `globs_status` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=88949 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
</code></pre>
| 1 | 2016-08-20T09:37:17Z | 39,056,251 | <p><code>updated = models.DateTimeField(default=None, auto_now=True, auto_now_add=False)</code></p>
<p>This should work; I have used this before and it works fine for me.</p>
| 0 | 2016-08-20T16:15:02Z | [
"python",
"mysql",
"django",
"django-orm"
] |
Django IntegrityError: no default value | 39,052,606 | <p>There is a longer SQL query that I need to run to migrate data from one table to another. It worked fine all the way up to this week when I migrated to Django 1.9.2 and Python 3.5. </p>
<p>The problem is that the table has a field 'last_update' that is NULL by default. The table definition is</p>
<pre><code>last_update = models.DateTimeField("last updated",null=True, auto_now_add=False, auto_now=True)
</code></pre>
<p>I've also checked with MySQL Workbench and the table is indeed set to allow NULLs on that field and the default is NULL.</p>
<p>The query crashes after about 30 minutes with the error message:</p>
<pre><code>django.db.utils.IntegrityError: (1364, "Field 'last_update' doesn't have a default value")
</code></pre>
<p>Very irritating! How can I insert rows to that table?</p>
<p>Per request, here is the table definition:</p>
<pre><code>CREATE TABLE `google_pla_plaproducts` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`product_code` varchar(55) CHARACTER SET utf8 NOT NULL,
`min_bid` decimal(20,4) NOT NULL,
`current_bid` decimal(20,4) NOT NULL,
`max_bid` decimal(20,4) NOT NULL,
`creation_date` datetime(6),
`last_update` datetime(6) DEFAULT NULL,
`status_change` datetime(6) DEFAULT NULL,
`starting_bid` decimal(20,4) NOT NULL,
`adgroup_id` int(11) NOT NULL,
`status_id` int(11) NOT NULL,
`product_price` decimal(20,4) NOT NULL,
PRIMARY KEY (`id`,`product_code`,`adgroup_id`),
KEY `google_p_adgroup_id_3b7c9d4ecddd04ba_fk_google_pla_plaadgroup_id` (`adgroup_id`),
KEY `google_pla_plaprod_status_id_2f8113a5ef0dd021_fk_globs_status_id` (`status_id`),
KEY `idx_prod_code` (`product_code`),
CONSTRAINT `google_p_adgroup_id_3b7c9d4ecddd04ba_fk_google_pla_plaadgroup_id` FOREIGN KEY (`adgroup_id`) REFERENCES `google_pla_plaadgroup` (`id`),
CONSTRAINT `google_pla_plaprod_status_id_2f8113a5ef0dd021_fk_globs_status_id` FOREIGN KEY (`status_id`) REFERENCES `globs_status` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=88949 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
</code></pre>
| 1 | 2016-08-20T09:37:17Z | 39,064,759 | <p>I ended up modifying the table a few times. I didn't try the suggestion from Rohit, since it sets <code>default=None</code>. It started working again after running multiple DB migrations.</p>
| 0 | 2016-08-21T13:33:05Z | [
"python",
"mysql",
"django",
"django-orm"
] |
python regression on time series | 39,052,643 | <p>I have a matrix containing 8 time series. </p>
<p>I want to build a model between their values at time <code>t</code> and their values at time <code>t-1, t-2,..., t-k</code>.</p>
<p>For simplicity, let us suppose a linear model <code>sk.linear_model.LinearRegression</code> and let the time series be <code>X = np.random.normal(0, 1, (1000, 8))</code>.</p>
<p>How can I write code that uses the previous <code>k</code> values to estimate <code>X(t)</code>? I also want to use the estimated <code>X(t)</code> to predict <code>X(t+1)</code>.</p>
| -1 | 2016-08-20T09:41:29Z | 39,058,255 | <p>The short answer is that if you want to do serious time-series analysis with Python, you should probably use a specific library like <a href="http://statsmodels.sourceforge.net/stable/tsa.html" rel="nofollow">statsmodels.tsa</a>.</p>
<p>However, if you are insistent on using sklearn, you're going to need to set up your initial training set to fit your model to. If you're using k points to predict the next one, and have an 8-dimensional time series, then you should end up with k*8 features and 8 targets for your training set. Here's a sample with k set to 3.</p>
<pre><code>import numpy as np
k = 3 #example
n = 1000
num_series = 8
raw_x = np.random.normal(0, 1, (n, num_series))
y = raw_x[k:]
X = np.empty((n-k, k*num_series))
for i in xrange(n-k):
X[i] = np.ravel(raw_x[i:i+k])
print X.shape, y.shape
</code></pre>
<p>which outputs</p>
<pre><code>(997, 24) (997, 8)
</code></pre>
<p>for the shapes of the features and labels respectively, as desired.</p>
<p>You can then pass this x and y into any of sklearn's models that supports multiple targets (including LinearRegression). If you want to predict future points, you simply need to pass the appropriate data to the <code>predict</code> method of your fitted model.</p>
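To close the loop on the prediction step (not shown in the answer), here is a minimal numpy-only sketch of the same idea: ordinary least squares on the lagged features, then a one-step forecast from the last <code>k</code> points. sklearn's <code>LinearRegression</code> can be swapped in the same way:

```python
import numpy as np

k, n, num_series = 3, 200, 8
rng = np.random.default_rng(0)
raw_x = rng.normal(0, 1, (n, num_series))

y = raw_x[k:]                                                # targets: X(t)
X = np.array([raw_x[i:i + k].ravel() for i in range(n - k)]) # X(t-k)..X(t-1)
X1 = np.hstack([X, np.ones((len(X), 1))])                    # intercept column

W, *_ = np.linalg.lstsq(X1, y, rcond=None)                   # (k*8 + 1, 8)
last = np.append(raw_x[-k:].ravel(), 1.0)                    # newest k points
next_point = last @ W                                        # one-step forecast
print(next_point.shape)  # (8,)
```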
| 0 | 2016-08-20T20:06:52Z | [
"python",
"scikit-learn",
"time-series"
] |
Select from given level of MultiIndex Series | 39,052,813 | <p>How can I select all values where the 'displacement' (second level of MultiIndex) is above a certain value, say > 2?</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
dicts = {}
index = np.linspace(1, 50)
index[2] = 2.0 # Create a duplicate for later testing
for n in range(5):
dicts['test' + str(n)] = pd.Series(np.linspace(0, 20) ** (n / 5),
index=index)
s = pd.concat(dicts, names=('test', 'displacement'))
# Something like this?
s[s.index['displacement'] > 2]
</code></pre>
<p>I tried reading <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">the docs</a> but couldn't work it out, even trying IndexSlice.</p>
<p>Bonus points: how do I select a range, say between 2 and 4?</p>
<p>Thanks in advance for any help.</p>
| 0 | 2016-08-20T10:00:30Z | 39,053,155 | <pre><code>import pandas as pd
import numpy as np
dicts = {}
index = np.linspace(1, 50)
for n in range(5):
dicts['test' + str(n)] = pd.Series(np.linspace(0, 20) ** (n / 5),
index=index)
s = pd.concat(dicts, names=('test', 'displacement'))
displacement = s.index.get_level_values('displacement')
r = s.loc[(displacement > 2) & (displacement < 5)]
</code></pre>
<p>Inspired by <a href="http://stackoverflow.com/a/18103894/268075">http://stackoverflow.com/a/18103894/268075</a></p>
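Since the question mentions <code>IndexSlice</code>, here is a hedged alternative sketch (my own toy data; the index must be sorted first for label slicing to work):

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product(
    [['test0', 'test1'], [1.0, 2.5, 3.0, 4.5]],
    names=('test', 'displacement'))
s = pd.Series(np.arange(8.0), index=idx).sort_index()

# Label-based slice on the second level: displacement in [2, 4]
r = s.loc[pd.IndexSlice[:, 2:4]]
print(sorted(r.index.get_level_values('displacement').unique()))  # [2.5, 3.0]
```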
| 0 | 2016-08-20T10:38:06Z | [
"python",
"pandas"
] |
python - regex on "°F" or "°C", and "-40" like symbols | 39,052,991 | <p>In python I am trying to replace a text file in the following way:</p>
<blockquote>
<p>İmparatorluk zirvesini 15 ve 17'nin arasında, özelikle I. Süleyman
döneminde 10.000'lerde yaşadı.</p>
</blockquote>
<p>-></p>
<blockquote>
<p>"İmparatorluk" "zirvesini" "15" "ve" "17'nin" "arasında",
"özelikle" "I." "Süleyman" "döneminde" "10.000'lerde"
"yaşadı" "."</p>
</blockquote>
<p>With the following code, I can manage to do the conversion above.</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re, io, os
def create_data(txt_file):
with io.open (txt_file, "r", encoding="utf-8") as myfile:
text=myfile.read()
replacer = re.compile(r"([IVXLCDM]+\.|-[\d\.-]+(?:'\w+)?|[\w'-]+|[.,!?;()%])", re.UNICODE)
output_text = replacer.sub(r'"\1"', text).replace('""','" "')
file_name = os.getcwd() + "/" + txt_file[:-4] + ".data"
print file_name
text_file = open(file_name, "w")
text_file.write(output_text.encode('utf8'))
text_file.close()
</code></pre>
<p>But for another text:</p>
<blockquote>
<p>Doğu Anadolu'da sıcaklıklar −30 °C ve −40 °C'ye (−22 °F ve −40 °F)
kadar düşebilir ve kar yılın en az 120 günü yerde kalır.</p>
</blockquote>
<p>the conversion occurs as follows:</p>
<blockquote>
<p>"Doğu" "Anadolu'da" "sıcaklıklar" −"30" °"C" "ve" −"40" °"C'ye"
"("−"22" °"F" "ve" −"40" °"F" ")" "kadar" "düşebilir" "ve" "kar"
"yılın" "en" "az" "120" "günü" "yerde" "kalır" "."</p>
</blockquote>
<p>But I want the conversion to occur as follows:</p>
<blockquote>
<p>"Doğu" "Anadolu'da" "sıcaklıklar" "-30" "°C" "ve" "−40" "°C'ye" "("
"-22" "°F" "ve" "−40" "°F" ")" "kadar" "düşebilir" "ve" "kar" "yılın"
"en" "az" "120" "günü" "yerde" "kalır" "."</p>
</blockquote>
<p>How can I fix my code or regex to achieve that?</p>
<p>Thanks,</p>
| 0 | 2016-08-20T10:19:16Z | 39,053,953 | <p>Regex: <code>^|$|</code> will match the start of the string, the end of the string, or single spaces. You can use that to split the string, then join it with the necessary quotes.</p>
<p>Here's the code I'd use for JavaScript; I hope you can figure out how to do the same in python.</p>
<pre><code>"\"" + string.split(new RegExp("^|$| ", "g")).join("\" \"") + "\"";
</code></pre>
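For completeness, a rough Python translation of the same idea (split on spaces, then wrap every token in quotes) could look like this; it is illustrative only, and it sidesteps the encoding issue by treating the text as Unicode:

```python
# -*- coding: utf-8 -*-
text = u"sıcaklıklar −30 °C ve −40 °C'ye"
quoted = u'"' + u'" "'.join(text.split(u' ')) + u'"'
print(quoted)  # "sıcaklıklar" "−30" "°C" "ve" "−40" "°C'ye"
```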
| 1 | 2016-08-20T12:07:31Z | [
"python",
"regex"
] |
Filtering and Ordering based on multiple request.GET parameters | 39,053,048 | <p>I'm currently stuck in trying to find a solution to my problem. So I have a URL which is like so: </p>
<p><code>https://www.domain.com/forum/topic/</code></p>
<p>In my template view, I have a form and an input which is responsible for searching for posts:</p>
<pre><code><form method="GET" action="">
<div class="input-group">
<input type="text" name="q" placeholder="Search..." value="{{ request.GET.q }}" class="form-control">
<span class="input-group-btn">
<input class="btn btn-secondary" type="submit" value="Search">
</span>
</div>
</form>
</code></pre>
<p>In my <strong>Views.py</strong> the search acts as follows:</p>
<pre><code>def discussion(request, discussion):
topics_list = Topic.objects.all().filter(discussion__url=discussion)
discussion = Discussion.objects.get(url=discussion)
search_query = request.GET.get('q')
if search_query:
topics_list = topics_list.filter(
Q(title__icontains=search_query) |
Q(user__username__icontains=search_query)
)
paginator = Paginator(topics_list, 10)
page = request.GET.get('page')
try:
topics = paginator.page(page)
except PageNotAnInteger:
topics = paginator.page(1)
except EmptyPage:
topics = paginator.page(paginator.num_pages)
context = {'topics': topics, 'discussion': discussion,}
return render(request, 'forum/forum_show_posts.html', context)
</code></pre>
<p>Now when I run the search it works fine; it actually filters the objects based on my query, making the URL appear as:</p>
<p><code>https://www.domain.com/forum/topic/?q=test</code></p>
<p>Now I want to work on an order by for my objects, so I proceeded to modify the discussion view to be:</p>
<pre><code>def discussion(request, discussion):
topics_list = Topic.objects.all().filter(discussion__url=discussion)
discussion = Discussion.objects.get(url=discussion)
search_query = request.GET.get('q')
sort_query = request.GET.get('sort')
if search_query:
topics_list = topics_list.filter(
Q(title__icontains=search_query) |
Q(user__username__icontains=search_query)
)
elif sort_query:
if sort_query == "newest":
topics_list = topics_list.order_by('-timestamp')
if sort_query == "oldest":
topics_list = topics_list.order_by('timestamp')
if sort_query == "name":
topics_list = topics_list.order_by('title')
# sort_query = sort_query.title()
paginator = Paginator(topics_list, 10)
page = request.GET.get('page')
try:
topics = paginator.page(page)
except PageNotAnInteger:
topics = paginator.page(1)
except EmptyPage:
topics = paginator.page(paginator.num_pages)
context = {'topics': topics, 'discussion': discussion, 'sort_value':sort_query,}
return render(request, 'forum/forum_show_posts.html', context)
</code></pre>
<p>and my template to have the corresponding links for each method of ordering:</p>
<pre><code><div class="dropdown-menu">
<a class="dropdown-item disabled" href="#">Sort...</a>
<form method="GET" action="">
<div class="input-group">
<button class="dropdown-item" type="submit" name="sort" value="newest">Newest</button>
<button class="dropdown-item" type="submit" name="sort" value="oldest">Oldest</button>
<button class="dropdown-item" type="submit" name="sort" value="views">Views</button>
<button class="dropdown-item" type="submit" name="sort" value="comments">Comments</button>
<button class="dropdown-item" type="submit" name="sort" value="replies">Replies</button>
<button class="dropdown-item" type="submit" name="sort" value="name">Name</button>
</div>
</form>
</div>
</code></pre>
<p>Now when I actually go ahead and choose to order by Newest or Oldest, it sorts them, making the url appear as:</p>
<p><code>https://www.domain.com/forum/topic/?sort=newest</code></p>
<p>My problem is that let's say I wanted to search for 'test' making the URL</p>
<p><code>https://www.domain.com/forum/topic/?q=test</code></p>
<p>but when I want to sort with the search already, that gets overwritten and instead it just shows all posts, with what I chose to sort it with. How do I get it to sort even with the search already there, and if there is no search still sort it.</p>
<p>From <code>https://www.domain.com/forum/topic/?q=test</code> to <code>https://www.domain.com/forum/topic/?q=test&sort=newest</code>, so it shows the newest of the list of posts with the query 'test'.</p>
| 0 | 2016-08-20T10:27:44Z | 39,053,386 | <p>You need to keep track of your <code>GET</code> params, update your view to be:</p>
<pre><code>def discussion(request, discussion): # <<- view name and var name both are same which might cause issues
search_query = request.GET.get('q', '')
sort = request.GET.get('sort', '')
    direction = request.GET.get('direction', 'asc')  # matches the hidden 'direction' inputs below
if direction not in ['asc', 'desc']:
direction = 'asc'
topics_list = Topic.objects.all().filter(discussion__url=discussion)
discussion = Discussion.objects.get(url=discussion)
if search_query:
topics_list = topics_list.filter(
Q(title__icontains=search_query) |
Q(user__username__icontains=search_query)
)
    if sort:
        # order_by() needs real model field names, so map the UI values
        # ('newest', 'oldest', 'name', ...) to fields first
        field_map = {'newest': '-timestamp', 'oldest': 'timestamp', 'name': 'title'}
        order_by = field_map.get(sort)
        if order_by is None:
            order_by = '{0}{1}'.format('-' if direction == 'desc' else '', sort)
        topics_list = topics_list.order_by(order_by)
# rest of code
# pass search_query, sort and direction in context
context = {
'topics': topics,
'discussion': discussion,
'sort': sort,
'direction': direction,
'search_query': search_query,
}
return render(request, 'forum/forum_show_posts.html', context)
</code></pre>
<p>Now in template keep track of those params in both forms:</p>
<p>Search Form:</p>
<pre><code><form method="GET" action="">
<div class="input-group">
<input type="text" name="q" placeholder="Search..." value="{{ search_query }}" class="form-control">
<span class="input-group-btn">
<input class="btn btn-secondary" type="submit" value="Search">
</span>
</div>
<input type="hidden" name="sort" value="{{ sort }}" />
<input type="hidden" name="direction" value="{{ direction }}" />
</form>
</code></pre>
<p>Sort Form:</p>
<pre><code><div class="dropdown-menu">
<a class="dropdown-item disabled" href="#">Sort...</a>
<form method="GET" action="">
<div class="input-group">
<button class="dropdown-item" type="submit" name="sort" value="newest">Newest</button>
<button class="dropdown-item" type="submit" name="sort" value="oldest">Oldest</button>
<button class="dropdown-item" type="submit" name="sort" value="views">Views</button>
<button class="dropdown-item" type="submit" name="sort" value="comments">Comments</button>
<button class="dropdown-item" type="submit" name="sort" value="replies">Replies</button>
<button class="dropdown-item" type="submit" name="sort" value="name">Name</button>
</div>
        <input type="hidden" name="q" value="{{ search_query }}" />
<input type="hidden" name="direction" value="{{ direction }}" />
</form>
</div>
</code></pre>
| 1 | 2016-08-20T11:03:46Z | [
"python",
"django",
"sorting"
] |
3d image compression with numpy | 39,053,195 | <p>I have a 3d numpy array representing an object with cells as voxels and the voxels having values from 1 to 10. I would like to compress the image (a) to make it smaller and (b) to get a quick idea later on of how complex the image is by compressing it to a minimum level of agreement with the original image.</p>
<p>I have used SVD to do this with 2D images, seeing how many singular values were required, but it seems to have difficulty with 3D ones. If, e.g., I look at the diagonal terms in the S matrix, they are all zero, whereas I was expecting singular values.</p>
<p>Is there any way I can use svd to compress 3D arrays (e.g. flattening in some way)? Or are other methods more appropriate? If necessary I could probably simplify the voxel values to 0 or 1.</p>
| 2 | 2016-08-20T10:42:33Z | 39,056,212 | <p>You could essentially apply the same principle to the 3D data without flattening it. There are some algorithms to separate N-dimensional matrices, such as the CP-ALS (using Alternating Least Squares) and this is implemented in the package <a href="https://github.com/mnick/scikit-tensor" rel="nofollow">sktensor</a>. You can use the package to decompose the tensor given a <em>rank</em>:</p>
<pre><code>from sktensor import dtensor, cp_als
T = dtensor(X)
rank = 5
P, fit, itr, exectimes = cp_als(T, rank, init='random')
</code></pre>
<p>With <code>X</code> being your data. You could then use the weights <code>weights = P.lmbda</code> to reconstruct the original array <code>X</code> and calculate the reconstruction error, as you would do with SVD.</p>
<p>Other decomposition methods for 3D data (or in general tensors) include the <a href="https://en.wikipedia.org/wiki/Tucker_decomposition" rel="nofollow">Tucker Decomposition</a> or the Canonical Decomposition (also available in the same package). </p>
<p>It is not directly a 3D SVD, but all the methods above can be used to analyze the principal components of your data.</p>
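And for the "flattening" route the question itself suggests, a hedged numpy-only sketch: unfold the 3-D array along one mode, run an ordinary SVD, and truncate. This is not a true tensor decomposition, but it gives the singular values the question was looking for:

```python
import numpy as np

X = np.random.rand(8, 8, 8)            # toy voxel volume
M = X.reshape(8, -1)                   # mode-0 unfolding: shape (8, 64)
U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 4                                  # keep the k largest singular values
approx = (U[:, :k] * s[:k]) @ Vt[:k]   # low-rank reconstruction of M
err = np.linalg.norm(M - approx) / np.linalg.norm(M)
print(approx.shape)  # (8, 64)
```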
<p>Find below (just for completeness) an image of the Tucker decomposition:</p>
<p><a href="http://i.stack.imgur.com/2nqzl.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/2nqzl.jpg" alt="enter image description here"></a></p>
<p>And below is another image of the decomposition that CP-ALS (an optimization algorithm) tries to obtain:</p>
<p><a href="http://i.stack.imgur.com/N3IKb.png" rel="nofollow"><img src="http://i.stack.imgur.com/N3IKb.png" alt="enter image description here"></a></p>
<p>Image credits to:</p>
<p>1- <a href="http://www.slideshare.net/KoheiHayashi1/talk-in-jokyonokai-12989223" rel="nofollow">http://www.slideshare.net/KoheiHayashi1/talk-in-jokyonokai-12989223</a></p>
<p>2- <a href="http://www.bsp.brain.riken.jp/~zhougx/tensor.html" rel="nofollow">http://www.bsp.brain.riken.jp/~zhougx/tensor.html</a></p>
| 1 | 2016-08-20T16:10:14Z | [
"python",
"arrays",
"numpy",
"compression",
"svd"
] |
Python redis client zrangebylex | 39,053,223 | <p>I want to perform the following command using the python client of redis</p>
<pre><code>zrangebylex names_sorted_set [a "[a\xff\xff\xff\xff"
</code></pre>
<p>but in my code the following cases happen</p>
<pre><code> name = request.GET.get('name', '')
redis_con = redis.StrictRedis(settings.REDIS_HOST, settings.REDIS_PORT)
min = '[' + name
max = '[' + name + """\xff\xff"""
result = redis_con.zrangebylex('names_sorted_set', min, max)
</code></pre>
<p>The above code gives me this error: <code>'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)</code>...
I tried another piece of code, which is:</p>
<pre><code> redis_con = redis.StrictRedis(settings.REDIS_HOST, settings.REDIS_PORT)
min = '[' + name
max = '[' + name + """\\xff\\xff"""
result = redis_con.zrangebylex('names_sorted_set', min, max)
</code></pre>
<p>and the last one sends this command to redis:</p>
<pre><code>zrangebylex names_sorted_set [a "[a\\xff\\xff\\xff\\xff"
</code></pre>
<p>PS: The code is meant to find all the strings in a sorted set which start with a specific prefix, e.g. <code>a</code>.</p>
| 0 | 2016-08-20T10:44:41Z | 39,053,495 | <p>this one worked</p>
<pre><code> name = request.GET.get('name', '')
redis_con = redis.StrictRedis(settings.REDIS_HOST, settings.REDIS_PORT)
min = '[' + name
max = bytearray('[') + \
bytearray(name, 'utf-8') + \
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff'
result = redis_con.zrangebylex('names_sorted_set', min, max)
</code></pre>
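<p>For reference, a bytes-only variant of the same prefix query might look like the following sketch (assuming Python 3 and redis-py; the actual <code>zrangebylex</code> call is commented out since it needs a live server). Building both range bounds as <code>bytes</code> avoids any implicit ASCII decoding of the <code>\xff</code> sentinel:</p>

```python
name = 'a'

# build both lexicographic range bounds as bytes, so no str/bytes mixing
lo = b'[' + name.encode('utf-8')
hi = b'[' + name.encode('utf-8') + b'\xff'

# result = redis_con.zrangebylex('names_sorted_set', lo, hi)
```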
| 0 | 2016-08-20T11:15:32Z | [
"python",
"redis",
"sortedset"
] |
How to solve ValueError: Index contains duplicate entries, cannot reshape | 39,053,226 | <p>I'm trying to unstack a MultiIndex Series so that I can plot the Series against one another.</p>
<pre><code>import pandas as pd
import numpy as np
dicts = {}
index = np.linspace(1, 50)
index[2] = 2.0
index2 = index.copy()
index2[3] = 3.0
for n in range(5):
if n == 1:
dicts['test' + str(n)] = pd.Series(np.linspace(0, 20) ** (n / 5),
index=index2)
else:
dicts['test' + str(n)] = pd.Series(np.linspace(0, 20) ** (n / 5),
index=index)
s = pd.concat(dicts, names=('test', 'displacement'))
s.unstack(level='test').plot()
</code></pre>
<p>The unstack() in the last line gets <code>ValueError: Index contains duplicate entries, cannot reshape</code>. The other StackOverflow questions all seem to relate to pivot tables, but I'm not trying to aggregate data; simply plot it.</p>
<p>I would like to have 1 plot with 1 line for each test (level 0 of MultiIndex). Each line would be the Series values versus the displacement (level 1 of MultiIndex).</p>
<p>My hack at the moment is:</p>
<pre><code>for test_name, test in s.groupby(level='test'):
test.index = test.index.droplevel()
test.plot()
plt.show()
</code></pre>
<p>Any help would be much appreciated.</p>
| 1 | 2016-08-20T10:44:58Z | 39,068,669 | <p>You can call <code>reset_index()</code> first and then set <code>append=True</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>DF.set_index</code></a>, so the original integer index is kept as an extra level. The resulting MultiIndex is then unique, which avoids the duplicate-entries error during the <code>unstack</code> operation.</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
df = pd.concat(dicts, names=('test', 'displacement')).reset_index()
labels = np.unique(df['test']).tolist()
df.set_index(['test', 'displacement'], append=True, inplace=True)
df.unstack(level='test').plot(figsize=(10,10), use_index=False,
legend=False, title="Grouped Plot")
plt.legend(loc='upper left', fontsize=12, frameon=True, labels=labels)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/TZ1Hb.png" rel="nofollow"><img src="http://i.stack.imgur.com/TZ1Hb.png" alt="Image1"></a></p>
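<p>Alternatively, since the error comes from duplicated <code>(test, displacement)</code> pairs, you could drop the duplicates before unstacking. This is a sketch using <code>Index.duplicated</code> on a small reconstruction of the question's setup; whether silently dropping duplicates is acceptable depends on your data:</p>

```python
import numpy as np
import pandas as pd

index = np.linspace(1, 50)
index[2] = 2.0  # introduce a duplicate displacement, as in the question
s = pd.concat({'test0': pd.Series(np.ones(50), index=index),
               'test1': pd.Series(np.ones(50), index=index)},
              names=('test', 'displacement'))

# keep only the first occurrence of each (test, displacement) pair
deduped = s[~s.index.duplicated(keep='first')]
unstacked = deduped.unstack(level='test')
```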
<hr>
<p>In case you want all the plots to start at the origin, you could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.array_split.html#numpy.array_split" rel="nofollow"><code>array_split</code></a> to split the unstacked <code>dataframe</code> object into equal-sized pieces based on the total number of unique labels, i.e. 5 [<em>Test0</em>–<em>Test4</em>], as follows:</p>
<pre><code>df = pd.concat(dicts, names=('test', 'displacement')).reset_index()
labels = np.unique(df['test']).tolist()
df.set_index(['test', 'displacement'], append=True, inplace=True)
fig, ax = plt.subplots(figsize=(10,10))
for test_sample in range(len(labels)):
np.array_split(df.unstack('test'), len(labels))[test_sample].plot(grid=True,
use_index=False, ax=ax, legend=False, cmap=plt.cm.get_cmap('jet'))
plt.legend(loc='upper left', fontsize=12, frameon=True, labels=labels)
plt.xlim(0,50)
plt.title("Grouped Plot")
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/Ypz8f.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ypz8f.png" alt="Image2"></a></p>
| 0 | 2016-08-21T20:41:07Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] |
ValueError: AES key must be either 16, 24, or 32 bytes long PyCrypto 2.7a1 | 39,053,286 | <p>I'm making a program for my school project and have the problem above.
Here's my code:</p>
<pre><code>def aes():
#aes
os.system('cls')
print('1. Encrypt')
print('2. Decrypt')
c = input('Your choice:')
if int(c) == 1:
#cipher
os.system('cls')
print("Let's encrypt, alright")
print('Input a text to be encrypted')
text = input()
f = open('plaintext.txt', 'w')
f.write(text)
f.close()
BLOCK_SIZE = 32
PADDING = '{'
pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING
EncodeAES = lambda c, s: base64.b64encode(c.encrypt(pad(s)))
secret = os.urandom(BLOCK_SIZE)
f = open('aeskey.txt', 'w')
f.write(str(secret))
f.close()
f = open('plaintext.txt', 'r')
privateInfo = f.read()
f.close()
cipher = AES.new(secret)
encoded = EncodeAES(cipher, privateInfo)
f = open('plaintext.txt', 'w')
f.write(str(encoded))
f.close()
print(str(encoded))
if int(c) == 2:
os.system('cls')
print("Let's decrypt, alright")
f = open('plaintext.txt','r')
encryptedString = f.read()
f.close()
PADDING = '{'
DecodeAES = lambda c, e: c.decrypt(base64.b64decode(e)).rstrip(PADDING)
encryption = encryptedString
f = open('aeskey.txt', 'r')
key = f.read()
f.close()
cipher = AES.new(key)
decoded = DecodeAES(cipher, encryption)
f = open('plaintext.txt', 'w')
f.write(decoded)
f.close()
print(decoded)
</code></pre>
<p>Full error text:</p>
<pre><code>Traceback (most recent call last):
  File "C:/Users/vital/Desktop/Prog/Python/Enc_dec/Enc_dec.py", line 341, in &lt;module&gt;
    aes()
  File "C:/Users/vital/Desktop/Prog/Python/Enc_dec/Enc_dec.py", line 180, in aes
    cipher = AES.new(key)
  File "C:\Users\vital\AppData\Local\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\AES.py", line 179, in new
    return AESCipher(key, *args, **kwargs)
  File "C:\Users\vital\AppData\Local\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\AES.py", line 114, in __init__
    blockalgo.BlockAlgo.__init__(self, _AES, key, *args, **kwargs)
  File "C:\Users\vital\AppData\Local\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 401, in __init__
    self._cipher = factory.new(key, *args, **kwargs)
ValueError: AES key must be either 16, 24, or 32 bytes long
Process finished with exit code 1
</code></pre>
<p>What am I doing wrong?</p>
| -1 | 2016-08-20T10:51:39Z | 39,053,407 | <p>The error is very clear. The key must be exactly of that size. <code>os.urandom</code> will return you the correct key. However this key is a <em>bytes</em> (binary string value). Furthermore, by using <code>str(secret)</code>, the value of <code>repr(secret)</code> is written into the file instead of <code>secret</code>. </p>
<p>What is more confusing is that <code>AES.new</code> allows you to pass the key as Unicode! However, suppose the key was the ASCII bytes <code>1234123412341234</code>. Now, </p>
<pre><code>f.write(str(secret))
</code></pre>
<p>will write <code>b'1234123412341234'</code> to the text file! Instead of 16 bytes, it now contains those 16 bytes + the <code>b</code>, and two <code>'</code> quote characters; 19 bytes in total.</p>
<p>Or if you take a random binary string from <code>os.urandom</code>,</p>
<pre><code>>>> os.urandom(16)
b'\xd7\x82K^\x7fe[\x9e\x96\xcb9\xbf\xa0\xd9s\xcb'
</code></pre>
<p>now, instead of writing 16 bytes <code>D7</code>, <code>82</code>,.. and so forth, it now writes that string into the file. And the error occurs because the decryption tries to use </p>
<pre><code>"b'\\xd7\\x82K^\\x7fe[\\x9e\\x96\\xcb9\\xbf\\xa0\\xd9s\\xcb'"
</code></pre>
<p>as the decryption key, which, when encoded as UTF-8 results in</p>
<pre><code>b"b'\\xd7\\x82K^\\x7fe[\\x9e\\x96\\xcb9\\xbf\\xa0\\xd9s\\xcb'"
</code></pre>
<p>which is a 49-bytes long <code>bytes</code> value.</p>
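<p>You can see the core problem in isolation with a small demo. The exact characters depend on the random key, but the <code>b'...'</code> wrapper always leaks into the string:</p>

```python
import os

secret = os.urandom(16)

# str() of a bytes value produces its repr, including the b'' wrapper
as_text = str(secret)
assert len(secret) == 16
assert as_text[0] == 'b'  # the literal 'b' prefix is now part of the text
assert len(as_text) > 16  # quotes, escapes and the prefix inflate the length
```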
<hr>
<p>You have 2 good choices. Either you continue to write your key to a text file, but convert it to hex, or write the key into a binary file; then the file should be exactly the key length in bytes. I am going for the latter here:</p>
<p>Thus for storing the key, use</p>
<pre><code> with open('aeskey.bin', 'wb') as keyfile:
keyfile.write(secret)
</code></pre>
<p>and</p>
<pre><code> with open('aeskey.bin', 'rb') as keyfile:
key = keyfile.read()
</code></pre>
<hr>
<p>Same naturally applies to the cipher text (that is the encrypted binary), you must write and read it to and from a binary file:</p>
<pre><code> with open('ciphertext.bin', 'wb') as f:
f.write(encoded)
</code></pre>
<p>and</p>
<pre><code> with open('ciphertext.bin', 'rb') as f:
encryptedString = f.read()
</code></pre>
<p>If you want to base64-encode it, do note that <code>base64.b64encode/decode</code> are <code>bytes</code>-in/<code>bytes</code>-out.</p>
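<p>For completeness, the hex-text alternative mentioned earlier could look roughly like this; <code>binascii</code> round-trips the raw key bytes through a printable string that is safe to keep in a text file (on Python 3.5+ you can also use <code>secret.hex()</code> / <code>bytes.fromhex()</code>):</p>

```python
import binascii
import os

secret = os.urandom(32)

# bytes -> printable hex text, safe to store in a text file
hexkey = binascii.hexlify(secret).decode('ascii')

# later: hex text -> the exact original bytes
recovered = binascii.unhexlify(hexkey.encode('ascii'))
assert recovered == secret
```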
<p>By the way, <a href="https://en.wikipedia.org/wiki/Plaintext" rel="nofollow">plaintext</a> is the original, unencrypted text; the encrypted text is called <a href="https://en.wikipedia.org/wiki/Ciphertext" rel="nofollow">ciphertext</a>. AES is a cipher that can encrypt plaintext to ciphertext and decrypt ciphertext to plaintext using a key. </p>
<p>Despite these being called "-text" neither of them is textual data per se, as understood by Python, but they're binary data, and should be represented as <code>bytes</code>.</p>
| 3 | 2016-08-20T11:06:44Z | [
"python",
"python-3.x",
"cryptography",
"pycrypto"
] |
pandas groupby with a lambda parameter | 39,053,348 | <p>I can't understand the code:</p>
<pre><code>pivot = pd.pivot_table(subset, values='count', rows=['date'], cols=['sample'], fill_value=0)
by = lambda x: lambda y: getattr(y, x)
grouped = pivot.groupby([by('year'),by('month')]).sum()
</code></pre>
<p><code>subset</code> in the code is a DataFrame which has a column named "date" (e.g. 2013-02-04 06:20:49.634244), and does not have columns named "year" or "month".</p>
<blockquote>
<p>where I have trouble with</p>
</blockquote>
<ul>
<li><p>I can't figure out the "year" and "month" in:</p>
<pre><code>grouped = pivot.groupby([by('year'),by('month')]).sum()
</code></pre></li>
<li><p>What the meaning of </p>
<pre><code>grouped = pivot.groupby([by('year'),by('month')]).sum()
</code></pre></li>
</ul>
<blockquote>
<p>What I have done:</p>
</blockquote>
<ul>
<li><p>The pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow">documentation</a> says the first parameter of pandas.DataFrame.groupby can be</p>
<blockquote>
<p>by : mapping function / list of functions, dict, Series, or tuple /</p>
</blockquote></li>
<li><p>by = lambda x: lambda y: getattr(y, x)</p></li>
</ul>
<blockquote>
<p>means by('bar') returns a function that returns the attribute 'bar' from an object</p>
</blockquote>
| 0 | 2016-08-20T10:58:53Z | 39,056,523 | <p>If a callable is passed to <code>groupby</code>, it is called on the <code>DataFrame</code>'s index, so this code is grouping by the year and month of a datetimelike index.</p>
<pre><code>In [55]: df = pd.DataFrame({'a': 1.0},
index=pd.date_range('2014-01-01', periods=13, freq='M'))
In [56]: df.groupby([by('year'), by('month')]).sum()
Out[56]:
a
2014 1 1.0
2 1.0
3 1.0
4 1.0
5 1.0
6 1.0
7 1.0
8 1.0
9 1.0
10 1.0
11 1.0
12 1.0
2015 1 1.0
</code></pre>
<p>More explicitly</p>
<pre><code>In [57]: df.groupby([df.index.year, df.index.month]).sum()
Out[57]:
a
2014 1 1.0
2 1.0
3 1.0
4 1.0
5 1.0
6 1.0
7 1.0
8 1.0
9 1.0
10 1.0
11 1.0
12 1.0
2015 1 1.0
</code></pre>
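<p>To see what the nested lambda does in isolation: <code>by('year')</code> just returns a function that fetches the <code>year</code> attribute of whatever it is given, such as each timestamp in the index (a small demo):</p>

```python
import pandas as pd

by = lambda x: lambda y: getattr(y, x)

ts = pd.Timestamp('2013-02-04 06:20:49')
year_of = by('year')  # equivalent to: lambda y: y.year
assert year_of(ts) == 2013
assert by('month')(ts) == 2
```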
| 1 | 2016-08-20T16:46:24Z | [
"python",
"pandas",
"lambda"
] |
recursive import in python | 39,053,360 | <p>in logging.py in my python library there are the lines:</p>
<pre><code>import logging
</code></pre>
<p>and:</p>
<pre><code>from logging import DEBUG, INFO, WARNING, ERROR, CRITICAL
</code></pre>
<p>I don't understand the meaning of importing logging within logging.py, and
also where DEBUG, INFO, WARNING, ERROR, and CRITICAL are defined.</p>
| -1 | 2016-08-20T11:00:27Z | 39,053,405 | <p>It's importing <code>logging</code> from the Python standard library; see the <a href="https://docs.python.org/2/library/logging.html" rel="nofollow">docs</a>.</p>
<p>The log levels (DEBUG, INFO, ...) are defined there as well; see <a href="https://docs.python.org/2/library/logging.html#logging-levels" rel="nofollow">logging levels</a> on that page.</p>
| 0 | 2016-08-20T11:06:29Z | [
"python",
"python-2.7",
"import"
] |
recursive import in python | 39,053,360 | <p>in logging.py in my python library there are the lines:</p>
<pre><code>import logging
</code></pre>
<p>and:</p>
<pre><code>from logging import DEBUG, INFO, WARNING, ERROR, CRITICAL
</code></pre>
<p>I don't understand the meaning of importing logging within logging.py, and
also where DEBUG, INFO, WARNING, ERROR, and CRITICAL are defined.</p>
| -1 | 2016-08-20T11:00:27Z | 39,053,441 | <p>It looks like someone has messed up the imports a little bit? :)</p>
<p>In general, if you do</p>
<p><code>import <module></code></p>
<p>You can refer to <em>all</em> its methods and objects as</p>
<p><code><module>.<method></code></p>
<p>Whereas if you do e.g,</p>
<p><code>from <module> import <method1>, <constant1></code></p>
<p>You can refer to only the ones you explicitly mentioned as:</p>
<p><code><method1></code>, <code><constant1></code> etc.</p>
<p>In this particular case I guess that the code author did not want to use module prefix for logging level constants.</p>
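<p>For this specific case, a quick check shows that the imported names really are plain module-level integer constants in the stdlib <code>logging</code> module, reachable either way:</p>

```python
import logging
from logging import DEBUG, INFO, WARNING, ERROR, CRITICAL

# both access styles refer to the same integer constants
assert logging.DEBUG == DEBUG == 10
assert logging.INFO == INFO == 20
assert logging.CRITICAL == CRITICAL == 50
```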
| 0 | 2016-08-20T11:10:37Z | [
"python",
"python-2.7",
"import"
] |
LCM of large numbers in python | 39,053,393 | <p>I'm using the formula "the product of two numbers is equal to the product of their GCD and LCM".</p>
<p>Here's my code:</p>
<pre><code># Uses python3
import sys
def hcf(x, y):
while(y):
x, y = y, x % y
return x
a,b = map(int,sys.stdin.readline().split())
res=int(((a*b)/hcf(a,b)))
print(res)
</code></pre>
<p>It works great for small numbers. But when i give input as :</p>
<blockquote>
<p>Input:
226553150 1023473145</p>
<p>My output:
46374212988031352</p>
<p>Correct output:
46374212988031350</p>
</blockquote>
<p>Can anyone please tell me where am I going wrong ?</p>
| 0 | 2016-08-20T11:04:26Z | 39,054,602 | <p>Elaborating on the comments. In Python 3, true division, <code>/</code>, converts its arguments to floats. In your example, the true answer of <code>lcm(226553150, 1023473145)</code> is <code>46374212988031350</code>. By looking at <code>bin(46374212988031350)</code> you can verify that this is a 56 bit number. When you compute <code>226553150*1023473145/5</code> (5 is the gcd) you get <code>4.637421298803135e+16</code>. Documentation suggests that such floats only have 53 bits of precision. Since 53 < 56, you have lost information. Using <code>//</code> avoids this. Somewhat counterintuitively, in cases like this it is "true" division which is actually false.</p>
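<p>The difference can be reproduced directly with the numbers from the question (a quick check; <code>math.gcd</code>, available since Python 3.5, is used for the gcd):</p>

```python
import math

a, b = 226553150, 1023473145

exact = a * b // math.gcd(a, b)      # integer floor division: no precision loss
lossy = int(a * b / math.gcd(a, b))  # / goes through a 53-bit float first

assert exact == 46374212988031350
assert lossy != exact                # the float round-trip changed the value
```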
<p>By the way, a useful module when dealing with exact calculations involving large integers is <a href="https://docs.python.org/3/library/fractions.html" rel="nofollow">fractions</a> (*):</p>
<pre><code>from fractions import gcd
def lcm(a,b):
return a*b // gcd(a,b)
>>> lcm(226553150,1023473145)
46374212988031350
</code></pre>
<p>(*) I just noticed that the documentation on <code>fractions</code> says this about its <code>gcd</code>: "Deprecated since version 3.5: Use math.gcd() instead", but I decided to keep the reference to <code>fractions</code> since it is still good to know about it and you might be using a version prior to 3.5.</p>
| 1 | 2016-08-20T13:20:15Z | [
"python",
"python-3.x",
"math",
"greatest-common-divisor",
"lcm"
] |