title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
What is the best way to store a list of functions? | 39,259,411 | <p>I need to store a large number of function rules in Python (around 100,000), to be used later....</p>
<pre><code>def rule1(x,y) :...
def rule2(x,y): ...
</code></pre>
<p>What is the best way to store and manage those function rule instances in a Python structure?</p>
<p>What about using a NumPy array with dtype=np.object?
(Lists perform badly when they become too large...)</p>
<p>The main goal is the fastest access with the minimum memory footprint.</p>
<p>Thanks</p>
| 0 | 2016-08-31T21:37:52Z | 39,259,476 | <p>When you use those rules, you're going to have to reference them <em>somehow</em>. If your example is a hint of the naming convention, then go with a list. Calling them in sequence would be easy via <code>map</code> or in a loop.</p>
<pre><code>rules = [rule1, rule2, ...]
for fn in rules:
fn(arg1, arg2) # this calls rule1 and rule2 with args (as an example)
</code></pre>
<p>If you also need to reference them by name, then use a dict, like:</p>
<pre><code>rules = {'rule1': rule1, 'rule2': rule2, ...}
something = rules['rule5'](arg1, arg2)
# or
for rule in rules: # iterates over the dict's keys
rules[rule](arg1, arg2)
</code></pre>
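<p>A note on maintaining that list: with ~100,000 rules, hand-writing <code>rules = [rule1, rule2, ...]</code> does not scale. A registry decorator collects each rule at definition time instead. This is a standalone sketch, not from the answer; the <code>rule</code> decorator name is an assumption:</p>

```python
# Hypothetical registry pattern: the decorator appends each rule to a
# shared list as it is defined, so no name has to be listed by hand.
rules = []

def rule(fn):
    rules.append(fn)
    return fn

@rule
def rule1(x, y):
    return x + y

@rule
def rule2(x, y):
    return x * y

# Calling them in sequence, as in the answer's loop
print([fn(2, 3) for fn in rules])  # [5, 6]
```

<p>A dict registry (<code>rules[fn.__name__] = fn</code> inside the decorator) gives the by-name variant the same way.</p>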
| 1 | 2016-08-31T21:43:24Z | [
"python"
] |
Porting a python 2 code to Python 3: ICMP Scan with errors | 39,259,570 | <pre><code>import random
import socket
import time
import ipaddress
import struct
from threading import Thread
def checksum(source_string):
sum = 0
count_to = (len(source_string) / 2) * 2
count = 0
while count < count_to:
this_val = ord(source_string[count + 1]) * 256 + ord(source_string[count])
sum = sum + this_val
sum = sum & 0xffffffff
count = count + 2
if count_to < len(source_string):
sum = sum + ord(source_string[len(source_string) - 1])
sum = sum & 0xffffffff
sum = (sum >> 16) + (sum & 0xffff)
sum = sum + (sum >> 16)
answer = ~sum
answer = answer & 0xffff
answer = answer >> 8 | (answer << 8 & 0xff00)
return answer
def create_packet(id):
header = struct.pack('bbHHh', 8, 0, 0, id, 1)
data = 192 * 'Q'
my_checksum = checksum(header + data)
header = struct.pack('bbHHh', 8, 0, socket.htons(my_checksum), id, 1)
return header + data
def ping(addr, timeout=1):
try:
my_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
except Exception as e:
print (e)
packet_id = int((id(timeout) * random.random()) % 65535)
packet = create_packet(packet_id)
my_socket.connect((addr, 80))
my_socket.sendall(packet)
my_socket.close()
def rotate(addr, file_name, wait, responses):
print ("Sending Packets", time.strftime("%X %x %Z"))
for ip in addr:
ping(str(ip))
time.sleep(wait)
print ("All packets sent", time.strftime("%X %x %Z"))
print ("Waiting for all responses")
time.sleep(2)
    # Stop listening
global SIGNAL
SIGNAL = False
ping('127.0.0.1') # Final ping to trigger the false signal in listen
print (len(responses), "hosts found!")
print ("Writing File")
hosts = []
for response in sorted(responses):
ip = struct.unpack('BBBB', response)
ip = str(ip[0]) + "." + str(ip[1]) + "." + str(ip[2]) + "." + str(ip[3])
hosts.append(ip)
file = open(file_name, 'w')
file.write(str(hosts))
print ("Done", time.strftime("%X %x %Z"))
def listen(responses):
global SIGNAL
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
s.bind(('', 1))
print ("Listening")
while SIGNAL:
packet = s.recv(1024)[:20][-8:-4]
responses.append(packet)
print ("Stop Listening")
s.close()
SIGNAL = True
responses = []
ips = '200.131.0.0/20' # Internet network
wait = 0.002 # Adjust this based on your bandwidth (a faster link allows a lower wait)
file_name = 'log1.txt'
ip_network = ipaddress.ip_network(unicode(ips), strict=False)
t_server = Thread(target=listen, args=[responses])
t_server.start()
t_ping = Thread(target=rotate, args=[ip_network, file_name, wait, responses])
t_ping.start()
</code></pre>
<p>I tried:</p>
<p><code>ip_network = ipaddress.ip_network( ips, strict=False)</code> instead of <code>ip_network = ipaddress.ip_network(unicode(ips), strict=False)</code></p>
<p>because of the error: "NameError: name 'unicode' is not defined"</p>
<p>after:</p>
<p>I got: <code>my_checksum = checksum(header + data)</code> -> TypeError: can't concat bytes to str</p>
<p>so I tried:</p>
<p><code>data = bytes(192 * 'Q').encode('utf8')</code> instead of <code>data = 192 * 'Q'</code></p>
<p>Now the error is: "data = bytes(192 * 'Q').encode('utf8') TypeError: string argument without an encoding"</p>
<p>Could anyone help me to port the code to Python 3 ?</p>
| 2 | 2016-08-31T21:52:13Z | 39,262,694 | <pre><code>import random
import socket
import time
import ipaddress
import struct
from threading import Thread
def checksum(source_string):
sum = 0
    count_to = (len(source_string) // 2) * 2  # floor division keeps count_to an int on Python 3
count = 0
while count < count_to:
this_val = source_string[count + 1] * 256 + source_string[count]
sum = sum + this_val
sum = sum & 0xffffffff
count = count + 2
if count_to < len(source_string):
sum = sum + source_string[len(source_string) - 1]
sum = sum & 0xffffffff
sum = (sum >> 16) + (sum & 0xffff)
sum = sum + (sum >> 16)
answer = ~sum
answer = answer & 0xffff
answer = answer >> 8 | (answer << 8 & 0xff00)
return answer
def create_packet(id):
header = struct.pack('bbHHh', 8, 0, 0, id, 1)
data = 192 * b'Q'
my_checksum = checksum(header + data)
header = struct.pack('bbHHh', 8, 0, socket.htons(my_checksum), id, 1)
return header + data
def ping(addr, timeout=1):
try:
my_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
except Exception as e:
print (e)
packet_id = int((id(timeout) * random.random()) % 65535)
packet = create_packet(packet_id)
my_socket.connect((addr, 80))
my_socket.sendall(packet)
my_socket.close()
def rotate(addr, file_name, wait, responses):
print ("Sending Packets", time.strftime("%X %x %Z"))
for ip in addr:
ping(str(ip))
time.sleep(wait)
print ("All packets sent", time.strftime("%X %x %Z"))
print ("Waiting for all responses")
time.sleep(2)
    # Stop listening
global SIGNAL
SIGNAL = False
ping('127.0.0.1') # Final ping to trigger the false signal in listen
print (len(responses), "hosts found!")
print ("Writing File")
hosts = set()
for response in sorted(responses):
ip = struct.unpack('BBBB', response)
ip = str(ip[0]) + "." + str(ip[1]) + "." + str(ip[2]) + "." + str(ip[3])
hosts.add(ip)
with open(file_name, 'w') as file:
file.write('\n'.join(sorted(hosts, key=lambda item: socket.inet_aton(item))))
print ("Done", time.strftime("%X %x %Z"))
def listen(responses):
global SIGNAL
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
s.bind(('', 1))
print ("Listening")
while SIGNAL:
packet = s.recv(1024)[:20][-8:-4]
responses.append(packet)
print ("Stop Listening")
s.close()
SIGNAL = True
responses = []
ips = '192.168.1.0/28' # Internet network
wait = 0.002 # Adjust this based on your bandwidth (a faster link allows a lower wait)
file_name = 'log1.txt'
ip_network = ipaddress.ip_network(ips, strict=False)
t_server = Thread(target=listen, args=[responses])
t_server.start()
t_ping = Thread(target=rotate, args=[ip_network, file_name, wait, responses])
t_ping.start()
</code></pre>
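<p>Two changes carry the port above: the payload becomes a <code>bytes</code> literal, and indexing <code>bytes</code> on Python 3 already yields ints, so the <code>ord()</code> calls are dropped. A quick standalone check of both (not part of the answer's script):</p>

```python
# Payload as bytes, as in the ported create_packet()
data = 192 * b'Q'
assert isinstance(data, bytes)
assert len(data) == 192

# On Python 3, indexing bytes yields an int, so ord() is unnecessary
assert data[0] == ord('Q') == 81
print(type(data), data[0])
```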
| 1 | 2016-09-01T04:24:16Z | [
"python",
"python-2.7",
"python-3.x"
] |
Scrapy indexing in order | 39,259,612 | <p>I'm currently creating a custom web crawler with Scrapy and trying to index the fetched content with Elasticsearch.
It works fine so far, but I'm only capable of adding content to the search index in the order the crawler filters HTML tags.
So for example with</p>
<pre><code>sel.xpath("//div[@class='article']/h2//text()").extract()
</code></pre>
<p>I can get all the content from all h2 tags inside a div with the class "article", so far so good. The next elements that get inside the index are from all h3 tags, naturally:</p>
<pre><code>sel.xpath("//div[@class='article']/h3//text()").extract()
</code></pre>
<p>But the problem here is that the entire order of the text on a site would get messed up like that, since all headlines would get indexed first and only then their child nodes get the chance, which is kind of fatal for a search index.
Does anyone have a tip on how to properly get all the content from a page in the right order? (It doesn't have to be XPath, just something that works with Scrapy.)</p>
| 0 | 2016-08-31T21:55:49Z | 39,261,738 | <p>I guess you could solve the issue with something like this:</p>
<pre><code># Select multiple targeting nodes at once
sel_raw = '|'.join([
"//div[@class='article']/h2",
"//div[@class='article']/h3",
# Whatever else you want to select here
])
for sel in sel.xpath(sel_raw):
# Extract the texts for later use
texts = sel.xpath('self::*//text()').extract()
if sel.xpath('self::h2'):
# A h2 element. Do something with texts
pass
elif sel.xpath('self::h3'):
# A h3 element. Do something with texts
pass
</code></pre>
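<p>The key point is that a union XPath (<code>|</code>) yields the matched nodes in document order, so headings stay interleaved exactly as they appear on the page. A runnable illustration of the same dispatch-by-tag idea using only the standard library (no Scrapy; the class and the sample HTML are invented for the demo):</p>

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collect h2/h3 text inside <div class="article"> in document order."""
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.current = None    # heading tag we are currently inside, if any
        self.items = []        # (tag, text) pairs, in page order

    def handle_starttag(self, tag, attrs):
        if tag == 'div' and ('class', 'article') in attrs:
            self.in_article = True
        elif self.in_article and tag in ('h2', 'h3'):
            self.current = tag

    def handle_endtag(self, tag):
        if tag == 'div':
            self.in_article = False
        elif tag == self.current:
            self.current = None

    def handle_data(self, data):
        if self.current and data.strip():
            self.items.append((self.current, data.strip()))

p = HeadingCollector()
p.feed("<div class='article'><h2>Title</h2><h3>Sub</h3><h2>Next</h2></div>")
print(p.items)  # [('h2', 'Title'), ('h3', 'Sub'), ('h2', 'Next')]
```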
| 1 | 2016-09-01T02:20:03Z | [
"python",
"html",
"xpath",
"scrapy"
] |
PyQt QSortFilterProxyModel index from wrong model passed to mapToSource? | 39,259,659 | <p>I want to get the integer stored in <code>[(1, 'cb'), (3, 'cd'), (7, 'ca'), (11, 'aa'), (22, 'bd')]</code> when I select the drop down auto complete item. </p>
<p>Because I used a QSortFilterProxyModel, when using down key to select the item, the index is from the proxy model.</p>
<p>I read in the documentation that I should use <code>mapToSource</code> to get the index in original model, but here I got an error message <code>index from wrong model passed to mapToSource</code> and the <code>index.row()</code> is always -1. What am I missing? Thanks!</p>
<p>The error is:</p>
<pre><code>row in proxy model 0
QSortFilterProxyModel: index from wrong model passed to mapToSource
row in original model -1
</code></pre>
<p>code:</p>
<pre><code>from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
import re
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
class MyModel(QStandardItemModel):
def __init__(self, parent=None):
super(MyModel, self).__init__(parent)
def data(self, index, role):
symbol = self.symbol_data[index.row()]
if role == Qt.DisplayRole:
return symbol[1]
elif role == Qt.UserRole:
return symbol[0]
def setup(self, data):
self.symbol_data = data
for line, name in data:
item = QStandardItem(name)
self.appendRow(item)
class MyGui(QDialog):
def __init__(self, parent=None):
super(MyGui, self).__init__(parent)
symbols = [(1, 'cb'), (3, 'cd'), (7, 'ca'), (11, 'aa'), (22, 'bd')]
model = MyModel()
model.setup(symbols)
layout = QVBoxLayout(self)
self.line = QLineEdit(self)
layout.addWidget(self.line)
self.setLayout(layout)
completer = CustomQCompleter()
completer.setModel(model)
completer.setCaseSensitivity(Qt.CaseInsensitive)
completer.setCompletionMode(QCompleter.UnfilteredPopupCompletion)
completer.setWrapAround(False)
self.line.setCompleter(completer)
self.completer = completer
self.completer.highlighted[QModelIndex].connect(self.test)
# qApp.processEvents()
# QTimer.singleShot(0, self.completer.complete)
self.line.textChanged[QString].connect(self.pop)
def pop(self, *x):
text = x[0]
self.completer.splitPath(text)
QTimer.singleShot(0, self.completer.complete)
self.line.setFocus()
def test(self, index):
print 'row in proxy model', index.row()
print 'row in original model', self.completer.model().mapToSource(index).row()
# print 'line in original model:',
# self.completer.model().sourceModel().symbol_data[x[0].row()][0]
class CustomQCompleter(QCompleter):
def __init__(self, parent=None):
super(CustomQCompleter, self).__init__(parent)
self.local_completion_prefix = ""
self.source_model = None
self.first_down = True
def setModel(self, model):
self.source_model = model
self._proxy = QSortFilterProxyModel(
self, filterCaseSensitivity=Qt.CaseInsensitive)
self._proxy.setSourceModel(model)
super(CustomQCompleter, self).setModel(self._proxy)
def splitPath(self, path):
self.local_completion_prefix = str(path)
self._proxy.setFilterFixedString(path)
return ""
def eventFilter(self, obj, event):
if event.type() == QEvent.KeyPress:
'This is used to mute the connection to clear lineedit'
if event.key() in (Qt.Key_Down, Qt.Key_Up):
curIndex = self.popup().currentIndex()
if event.key() == Qt.Key_Down:
if curIndex.row() == self._proxy.rowCount()-1:
print 'already last row', curIndex.row()
if self._proxy.rowCount() == 1:
pass
else:
return True
else:
if curIndex.row() == 0:
print 'already first row'
return True
if curIndex.row() == 0 and self.first_down:
print 'already row 0 first'
self.popup().setCurrentIndex(curIndex)
self.first_down = False
return True
super(CustomQCompleter, self).eventFilter(obj, event)
return False
if __name__ == '__main__':
app = QApplication(sys.argv)
gui = MyGui()
gui.show()
sys.exit(app.exec_())
</code></pre>
<p>Update:
This is resolved, thanks to help from Avaris in #pyqt. It turns out that I can do this to map the index to the original model:</p>
<pre><code>proxy_index= self.completer.completionModel().mapToSource(index)
print 'original row:', self.completer.model().mapToSource(proxy_index).row()
</code></pre>
<p>or even better:</p>
<pre><code>print 'data:', index.data(Qt.UserRole).toPyObject()
</code></pre>
<p>because: "
completionModel() is actually a proxy model on .model()</p>
<p>you don't need to mess with mapToSource for that. index.data(Qt.UserRole) should give you that number regardless of which index is returned</p>
<p>just an fyi, you rarely need to use mapToSource outside of a (proxy) model. it's mostly for internal use. a proper proxy should forward all relevant queries from the source. so you can use the proxy as if you're using the source one
-Avaris
"</p>
| 2 | 2016-08-31T21:59:40Z | 39,260,260 | <p>Pasting the correct code here for reference:</p>
<pre><code>from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
import re
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
class MyModel(QStandardItemModel):
def __init__(self, parent=None):
super(MyModel, self).__init__(parent)
def data(self, index, role):
symbol = self.symbol_data[index.row()]
if role == Qt.DisplayRole:
return symbol[1]
elif role == Qt.UserRole:
return symbol[0]
def setup(self, data):
self.symbol_data = data
for line, name in data:
item = QStandardItem(name)
self.appendRow(item)
class MyGui(QDialog):
def __init__(self, parent=None):
super(MyGui, self).__init__(parent)
symbols = [(1, 'cb'), (3, 'cd'), (7, 'ca'), (11, 'aa'), (22, 'bd')]
model = MyModel()
model.setup(symbols)
layout = QVBoxLayout(self)
self.line = QLineEdit(self)
layout.addWidget(self.line)
self.setLayout(layout)
completer = CustomQCompleter()
completer.setModel(model)
completer.setCaseSensitivity(Qt.CaseInsensitive)
completer.setCompletionMode(QCompleter.UnfilteredPopupCompletion)
completer.setWrapAround(False)
self.line.setCompleter(completer)
self.completer = completer
self.completer.highlighted[QModelIndex].connect(self.test)
# QTimer.singleShot(0, self.completer.complete)
self.line.textChanged[QString].connect(self.pop)
def pop(self, *x):
text = x[0]
self.completer.splitPath(text)
QTimer.singleShot(0, self.completer.complete)
self.line.setFocus()
def test(self, index):
print 'row in completion model', index.row()
print 'data:', index.data(Qt.UserRole).toPyObject()
class CustomQCompleter(QCompleter):
def __init__(self, parent=None):
super(CustomQCompleter, self).__init__(parent)
self.local_completion_prefix = ""
self.source_model = None
self.first_down = True
def setModel(self, model):
self.source_model = model
self._proxy = QSortFilterProxyModel(
self, filterCaseSensitivity=Qt.CaseInsensitive)
self._proxy.setSourceModel(model)
super(CustomQCompleter, self).setModel(self._proxy)
def splitPath(self, path):
self.local_completion_prefix = str(path)
self._proxy.setFilterFixedString(path)
return ""
def eventFilter(self, obj, event):
if event.type() == QEvent.KeyPress:
'This is used to mute the connection to clear lineedit'
if event.key() in (Qt.Key_Down, Qt.Key_Up):
curIndex = self.popup().currentIndex()
if event.key() == Qt.Key_Down:
if curIndex.row() == self._proxy.rowCount()-1:
print 'already last row', curIndex.row()
if self._proxy.rowCount() == 1:
pass
else:
return True
else:
if curIndex.row() == 0:
print 'already first row'
return True
if curIndex.row() == 0 and self.first_down:
print 'already row 0 first'
self.popup().setCurrentIndex(curIndex)
self.first_down = False
return True
super(CustomQCompleter, self).eventFilter(obj, event)
return False
if __name__ == '__main__':
app = QApplication(sys.argv)
gui = MyGui()
gui.show()
sys.exit(app.exec_())
</code></pre>
| 0 | 2016-08-31T22:54:57Z | [
"python",
"autocomplete",
"pyqt",
"qsortfilterproxymodel",
"qcompleter"
] |
Python - Facebook Open Graph API Error: An active access token must be used to query information about the current user | 39,259,687 | <p>I am trying to fetch the details of a facebook page. For ex: <a href="https://graph.facebook.com/rio2016/posts?access-token=xxxxx" rel="nofollow">https://graph.facebook.com/rio2016/posts?access-token=xxxxx</a>. Now I generated a token in graph API explorer. If I use that token in Graph API explorer then I am able to get the data from above url. In fact, if I type the above url with the generated token in browser, I still get the data. But I am writing a python script in flask to achieve the same and I get the following error:</p>
<pre><code>'message': 'An active access token must be used to query information about the current user.', 'fbtrace_id': 'Hnp1lPb3dXu', 'code': 2500, 'type': 'OAuthException'
</code></pre>
<p>Following is my python script:</p>
<pre><code>def fb_crawler():
key = 'xyz'
parameters = {'access-token': key}
r = requests.get('https://graph.facebook.com/rio2016/posts', params = parameters)
result = r.json()
for i, v in result.items():
print(i)
print(v)
return result
</code></pre>
<p>It should simply be a GET request; in the browser the same URL returns the desired result, but not when used from a Python script. Can someone please guide me here on what is going wrong? Thank you.</p>
| 0 | 2016-08-31T22:02:36Z | 39,261,063 | <p>Thank you. Finally I figured out the error in the code. Now I can fetch data with GET requests.</p>
<p>Revised Code:</p>
<pre><code>def fb_crawler():
key = 'xyz'
parameters = {'access_token': key}
r = requests.get('https://graph.facebook.com/rio2016/posts', params = parameters)
result = r.json()
</code></pre>
<p>The error: the parameter name is <code>access_token</code>, not <code>access-token</code>.</p>
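<p>A minimal standalone illustration of the difference (the token value is a placeholder): the Graph API reads the query key <code>access_token</code>, so a request sent with <code>access-token</code> arrives effectively unauthenticated.</p>

```python
from urllib.parse import urlencode

wrong = urlencode({'access-token': 'xyz'})  # key the API ignores
right = urlencode({'access_token': 'xyz'})  # key the API expects
print(wrong)  # access-token=xyz
print(right)  # access_token=xyz
```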
| 0 | 2016-09-01T00:38:47Z | [
"python",
"facebook",
"facebook-graph-api"
] |
Multiple print functions in list comprehension | 39,259,743 | <p>The goal of this post is to put multiple print functions throughout a list comprehension to visually understand what's happening within. </p>
<p>Important notes:</p>
<ul>
<li>This <strong>should not</strong> be used for anything other than educational purposes and trying to understand code. </li>
<li>If you are using Python 2.x, you need to add a future import (it's in the code I pasted) or else print won't work. Only functions work in list comprehension. Print in 2.x does not operate as a function. Or...just switch to Python 3.x. </li>
</ul>
<p>This was the original question:</p>
<pre><code> ## Using future to switch Print to a function
from __future__ import print_function
reg = []
for x in [1,2,3]:
for y in [3,1,4]:
print('looping through',x,'then',y)
if x == y:
print('success',x,y)
reg.append((x,y))
print(reg)
</code></pre>
<p>Here's the equivalent list comprehension with no print statements.</p>
<pre><code> from __future__ import print_function
comp = [(x,y) for x in [1,2,3] for y in [3,1,4] if x == y]
print(comp)
</code></pre>
<p>So is there any way to put in a bunch of print statements so both code print the same things?</p>
<hr>
<h3>Edit with solution to original question:</h3>
<p>Using the methods in the comments - I've figured it out!</p>
<p>So say you want to convert this.</p>
<pre><code> from __future__ import print_function
x = 1
y = 2
z = 1
n = 2
[[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if a + b + c != n]
</code></pre>
<p>Adding print statements to print each loop, showing if it failed or not.</p>
<pre><code> from __future__ import print_function
x = 1
y = 2
z = 1
n = 2
[
[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if
(print('current loop is',a,b,c) or a + b + c != n)
and
(print('condition true at',a,b,c) or True)
]
</code></pre>
<p>So really the only thing that was changed was the conditional at the end.</p>
<pre><code> (a + b + c != n)
</code></pre>
<p>to</p>
<pre><code> (print('current loop is',a,b,c) or a + b + c != n)
and
(print('condition true at',a,b,c) or True)
</code></pre>
<hr>
<h3>Additional Information:</h3>
<p>So there's good stuff in the comment section that I think would help others as well. I'm a visual learner so this website was great.</p>
<ul>
<li><a href="http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/#colored-comprehension" rel="nofollow">http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/#colored-comprehension</a></li>
</ul>
<p>(credits to Tadhg McDonald-Jensen)</p>
| 2 | 2016-08-31T22:07:46Z | 39,259,826 | <p>I think you shouldn't run debug code inside list comprehensions; that said, if you want to do so, you can wrap your code inside a function like this:</p>
<pre><code>from __future__ import print_function
def foo(x, y):
print('looping through', x, 'then', y)
if x == y:
print('success', x, y)
return (x, y)
comp = [foo(x, y) for x in [1, 2, 3] for y in [3, 1, 4] if x == y]
print(comp)
</code></pre>
| 3 | 2016-08-31T22:15:45Z | [
"python",
"python-2.7",
"python-3.x",
"for-loop",
"list-comprehension"
] |
Multiple print functions in list comprehension | 39,259,743 | <p>The goal of this post is to put multiple print functions throughout a list comprehension to visually understand what's happening within. </p>
<p>Important notes:</p>
<ul>
<li>This <strong>should not</strong> be used for anything other than educational purposes and trying to understand code. </li>
<li>If you are using Python 2.x, you need to add a future import (it's in the code I pasted) or else print won't work. Only functions work in list comprehension. Print in 2.x does not operate as a function. Or...just switch to Python 3.x. </li>
</ul>
<p>This was the original question:</p>
<pre><code> ## Using future to switch Print to a function
from __future__ import print_function
reg = []
for x in [1,2,3]:
for y in [3,1,4]:
print('looping through',x,'then',y)
if x == y:
print('success',x,y)
reg.append((x,y))
print(reg)
</code></pre>
<p>Here's the equivalent list comprehension with no print statements.</p>
<pre><code> from __future__ import print_function
comp = [(x,y) for x in [1,2,3] for y in [3,1,4] if x == y]
print(comp)
</code></pre>
<p>So is there any way to put in a bunch of print statements so both code print the same things?</p>
<hr>
<h3>Edit with solution to original question:</h3>
<p>Using the methods in the comments - I've figured it out!</p>
<p>So say you want to convert this.</p>
<pre><code> from __future__ import print_function
x = 1
y = 2
z = 1
n = 2
[[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if a + b + c != n]
</code></pre>
<p>Adding print statements to print each loop, showing if it failed or not.</p>
<pre><code> from __future__ import print_function
x = 1
y = 2
z = 1
n = 2
[
[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if
(print('current loop is',a,b,c) or a + b + c != n)
and
(print('condition true at',a,b,c) or True)
]
</code></pre>
<p>So really the only thing that was changed was the conditional at the end.</p>
<pre><code> (a + b + c != n)
</code></pre>
<p>to</p>
<pre><code> (print('current loop is',a,b,c) or a + b + c != n)
and
(print('condition true at',a,b,c) or True)
</code></pre>
<hr>
<h3>Additional Information:</h3>
<p>So there's good stuff in the comment section that I think would help others as well. I'm a visual learner so this website was great.</p>
<ul>
<li><a href="http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/#colored-comprehension" rel="nofollow">http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/#colored-comprehension</a></li>
</ul>
<p>(credits to Tadhg McDonald-Jensen)</p>
| 2 | 2016-08-31T22:07:46Z | 39,259,997 | <p>List comprehension was introduced with <a href="https://www.python.org/dev/peps/pep-0202/" rel="nofollow">PEP 202</a> which states:</p>
<blockquote>
<p>It is proposed to allow conditional construction of list literals
<strong>using for and if clauses.</strong> They would nest in the same way for loops
and if statements nest now.</p>
</blockquote>
<p>List comprehension was designed to replace constructs that formed a list using only <code>for</code> loops, <code>if</code> conditionals and the <code>.append</code> method once per iteration. No additional structure is possible in list comprehensions, so unless you stick your prints into one of the allowed components you cannot add them.</p>
<p>That being said, putting a print statement in the conditional - while technically possible - is highly <strong>not</strong> recommended.</p>
<pre><code>[a for a in x if print("this is a bad way to test", a) or True]  # print() returns None, so "or True" keeps every element
</code></pre>
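<p>Why the conditional trick works at all: <code>print()</code> returns <code>None</code>, which is falsy, so <code>print(...) or expr</code> prints and then evaluates <code>expr</code>, and <code>print(...) or True</code> prints without changing the condition. A self-contained check on Python 3, using the question's own data:</p>

```python
# print() always returns None, so it can be chained with `or`
assert print('print returns None') is None

comp = [(x, y) for x in [1, 2, 3] for y in [3, 1, 4]
        if (print('looping through', x, 'then', y) or x == y)
        and (print('success', x, y) or True)]
print(comp)  # [(1, 1), (3, 3)]
```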
| 1 | 2016-08-31T22:30:14Z | [
"python",
"python-2.7",
"python-3.x",
"for-loop",
"list-comprehension"
] |
Multiple print functions in list comprehension | 39,259,743 | <p>The goal of this post is to put multiple print functions throughout a list comprehension to visually understand what's happening within. </p>
<p>Important notes:</p>
<ul>
<li>This <strong>should not</strong> be used for anything other than educational purposes and trying to understand code. </li>
<li>If you are using Python 2.x, you need to add a future import (it's in the code I pasted) or else print won't work. Only functions work in list comprehension. Print in 2.x does not operate as a function. Or...just switch to Python 3.x. </li>
</ul>
<p>This was the original question:</p>
<pre><code> ## Using future to switch Print to a function
from __future__ import print_function
reg = []
for x in [1,2,3]:
for y in [3,1,4]:
print('looping through',x,'then',y)
if x == y:
print('success',x,y)
reg.append((x,y))
print(reg)
</code></pre>
<p>Here's the equivalent list comprehension with no print statements.</p>
<pre><code> from __future__ import print_function
comp = [(x,y) for x in [1,2,3] for y in [3,1,4] if x == y]
print(comp)
</code></pre>
<p>So is there any way to put in a bunch of print statements so both code print the same things?</p>
<hr>
<h3>Edit with solution to original question:</h3>
<p>Using the methods in the comments - I've figured it out!</p>
<p>So say you want to convert this.</p>
<pre><code> from __future__ import print_function
x = 1
y = 2
z = 1
n = 2
[[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if a + b + c != n]
</code></pre>
<p>Adding print statements to print each loop, showing if it failed or not.</p>
<pre><code> from __future__ import print_function
x = 1
y = 2
z = 1
n = 2
[
[a,b,c] for a in range(x+1) for b in range(y+1) for c in range(z+1) if
(print('current loop is',a,b,c) or a + b + c != n)
and
(print('condition true at',a,b,c) or True)
]
</code></pre>
<p>So really the only thing that was changed was the conditional at the end.</p>
<pre><code> (a + b + c != n)
</code></pre>
<p>to</p>
<pre><code> (print('current loop is',a,b,c) or a + b + c != n)
and
(print('condition true at',a,b,c) or True)
</code></pre>
<hr>
<h3>Additional Information:</h3>
<p>So there's good stuff in the comment section that I think would help others as well. I'm a visual learner so this website was great.</p>
<ul>
<li><a href="http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/#colored-comprehension" rel="nofollow">http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/#colored-comprehension</a></li>
</ul>
<p>(credits to Tadhg McDonald-Jensen)</p>
| 2 | 2016-08-31T22:07:46Z | 39,260,152 | <p>You need to evaluate your <code>print</code> function, but the return value isn't useful since it's always <code>None</code>. You can use <code>and</code>/<code>or</code> to combine it with another expression.</p>
<pre><code>comp = [(x,y) for x in [1,2,3] for y in [3,1,4] if (print('looping through',x,'then',y) or x == y) and (print('success', x, y) or True)]
</code></pre>
<p>I really hope you're only doing this for educational purposes, because it's ugly as heck. Just because you <em>can</em> do something doesn't mean you <em>should</em>.</p>
| 2 | 2016-08-31T22:45:06Z | [
"python",
"python-2.7",
"python-3.x",
"for-loop",
"list-comprehension"
] |
Regular Expression to match "\\r" | 39,259,751 | <p>I'm having trouble writing a regex that matches these inputs: <br>
1.<code>\\r</code><br>
2.<code>\\rSomeString</code>
<br>
I need a regex that matches <code>\\r</code></p>
| 0 | 2016-08-31T22:08:03Z | 39,259,790 | <p>Escape the backslashes twice. Strings interpret <code>\</code> as a special character marker.</p>
<p>Use <code>\\\\r</code> instead. <code>\\</code> is actually interpreted as just <code>\</code>.</p>
<p>EDIT: So as per the comments you want any string that starts with <code>\\r</code> with any string after it. The regex pattern is as follows:</p>
<pre><code>(\\\\r\S*)
</code></pre>
<p><code>\\\\r</code> is the string you want at the start and <code>\S*</code> says any non-white space (<code>\S</code>) can come after any number of times (<code>*</code>).</p>
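<p>A quick standalone check of the pattern (assuming the inputs literally contain two backslash characters before the <code>r</code>):</p>

```python
import re

# Four backslashes in a raw string match two literal backslashes
pattern = re.compile(r'\\\\r\S*')

assert pattern.fullmatch(r'\\r')            # matches "\\r" alone
assert pattern.fullmatch(r'\\rSomeString')  # and with a suffix
print(pattern.findall(r'\\r \\rSomeString'))
```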
| 6 | 2016-08-31T22:12:29Z | [
"python",
"regex"
] |
Regular Expression to match "\\r" | 39,259,751 | <p>I'm having trouble writing a regex that matches these inputs: <br>
1.<code>\\r</code><br>
2.<code>\\rSomeString</code>
<br>
I need a regex that matches <code>\\r</code></p>
| 0 | 2016-08-31T22:08:03Z | 39,260,378 | <p>A literal backslash in Python can be matched with <code>r'\\'</code> (note the use of the raw string literal!). You have two literal backslashes, thus, you need 4 backslashes (in a raw string literal) before <code>r</code>. </p>
<p>Since you may have any characters after <code>\\r</code>, you may use</p>
<pre><code>import re
p = re.compile(r'\\\\r\S*')
test_str = r"\\r \\rtest"
print(p.findall(test_str))
</code></pre>
<p>See <a href="http://ideone.com/kUK56Q" rel="nofollow">Python demo</a></p>
<p><strong>Pattern description</strong>:</p>
<ul>
<li><code>\\\\</code> - 2 backslashes</li>
<li><code>r</code> - a literal <code>r</code></li>
<li><code>\S*</code> - zero or more non-whitespace characters.</li>
</ul>
<p><em>Variations</em>:</p>
<ul>
<li>If the characters after <code>r</code> can only be alphanumerics or underscore, use <code>\w*</code> instead of <code>\S*</code></li>
<li>If you want to only match <code>\\r</code> before non-word chars, add a <code>\B</code> non-word boundary before the backslashes in the pattern.</li>
</ul>
| 1 | 2016-08-31T23:09:16Z | [
"python",
"regex"
] |
Regular Expression to match "\\r" | 39,259,751 | <p>I'm having trouble writing a regex that matches these inputs: <br>
1.<code>\\r</code><br>
2.<code>\\rSomeString</code>
<br>
I need a regex that matches <code>\\r</code></p>
| 0 | 2016-08-31T22:08:03Z | 39,273,397 | <p>You can fine-tune your <em>regular expressions</em> on-line, e.g. at <a href="https://regex101.com/#python" rel="nofollow">this site</a>.</p>
| 0 | 2016-09-01T13:52:01Z | [
"python",
"regex"
] |
How do I display thread/process Id in Python PDB? | 39,259,805 | <p>In Python, is there a way to display thread/process ID while debugging using PDB?</p>
<p>I was looking at <a href="https://docs.python.org/2/library/pdb.html" rel="nofollow">https://docs.python.org/2/library/pdb.html</a> but could not find anything related to it?</p>
| 1 | 2016-08-31T22:13:57Z | 39,266,060 | <pre><code>(pdb) import os,threading; os.getpid(), threading.current_thread().ident
</code></pre>
<p>If you need to do this often, it would be convenient to add an alias in your <code>.pdbrc</code> file:</p>
<pre><code>alias tid import os,threading;; p os.getpid(), threading.current_thread().ident
</code></pre>
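<p>Outside the debugger, the same two calls can be sanity-checked in a plain script:</p>

```python
import os
import threading

pid = os.getpid()                        # process id
tid = threading.current_thread().ident   # thread id
print(pid, tid)
```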
| 1 | 2016-09-01T08:08:54Z | [
"python",
"pdb"
] |
'method' object is not iterable from calling splitlines from a read method | 39,259,819 | <p>Currently trying to setup my setup.py file for pip installation or simply python setup.py develop installation, however when I run the code, it faults saying that the method is not iterable, regarding to my read function.</p>
<p>The code is as follows for setup.py</p>
<pre><code>from setuptools import setup, find_packages
import os
def read(filename):
with open(os.path.join(os.path.dirname(__file__), filename)) as f:
return f.read()
setup(
name='example',
version='1.0',
description='example',
url='https://github.com/example',
author='Example',
author_email='iam@example.com',
packages=find_packages,
install_requires=read('requirements.txt').splitlines(),
zip_safe=False
)
</code></pre>
<p>And finally the error I receive is,</p>
<pre><code>Traceback (most recent call last):
File "setup.py", line 16, in <module>
install_requires=read('requirements.txt').splitlines()
File "C:\Python35\lib\distutils\core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "C:\Python35\lib\site-packages\setuptools\dist.py", line 272, in __init__
_Distribution.__init__(self,attrs)
File "C:\Python35\lib\distutils\dist.py", line 281, in __init__
self.finalize_options()
File "C:\Python35\lib\site-packages\setuptools\dist.py", line 327, in finalize_options
ep.load()(self, ep.name, value)
File "C:\Python35\lib\site-packages\setuptools\dist.py", line 161, in check_packages
for pkgname in value:
TypeError: 'method' object is not iterable
</code></pre>
| -3 | 2016-08-31T22:15:17Z | 39,260,031 | <p>This line is missing the call parens you need:</p>
<pre><code>packages=find_packages,
</code></pre>
<p>Change it to:</p>
<pre><code>packages=find_packages(),
</code></pre>
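<p>To see why the bare name fails, note that <code>find_packages</code> without parens is just a function object, while calling it returns the list setuptools expects (assumes setuptools is importable):</p>

```python
from setuptools import find_packages

print(callable(find_packages))            # True: the bare name is only a function object
print(isinstance(find_packages(), list))  # True: calling it returns a list of package names
```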
| 1 | 2016-08-31T22:33:21Z | [
"python",
"django"
] |
How to get "clean" match results in Python | 39,259,830 | <p>I am a total noob, coding for the first time and trying to learn by doing.
I'm using this:</p>
<pre><code>import re
f = open('aaa.txt', 'r')
string=f.read()
c = re.findall(r"Guest last name: (.*)", string)
print "Dear Mr.", c
</code></pre>
<p>that returns </p>
<pre><code>Dear Mr. ['XXXX']
</code></pre>
<p>I was wondering, is there any way to get the result like</p>
<pre><code>Dear Mr. XXXX
</code></pre>
<p>instead? </p>
<p>Thanks in advance.</p>
| 0 | 2016-08-31T22:16:01Z | 39,259,856 | <p>You need to take the first item in the list</p>
<pre><code>print "Dear Mr.", c[0]
</code></pre>
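<p>If the pattern might not match, <code>c</code> will be an empty list and <code>c[0]</code> raises an <code>IndexError</code>; a small guard (shown here in Python 3 syntax) avoids that:</p>

```python
import re

text = "Guest last name: Smith\n"
c = re.findall(r"Guest last name: (.*)", text)
name = c[0] if c else "Guest"   # fall back instead of raising IndexError
print("Dear Mr.", name)
```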
| 0 | 2016-08-31T22:18:29Z | [
"python",
"regex",
"python-2.7",
"match"
] |
How to get "clean" match results in Python | 39,259,830 | <p>I am a total noob, coding for the first time and trying to learn by doing.
I'm using this:</p>
<pre><code>import re
f = open('aaa.txt', 'r')
string=f.read()
c = re.findall(r"Guest last name: (.*)", string)
print "Dear Mr.", c
</code></pre>
<p>that returns </p>
<pre><code>Dear Mr. ['XXXX']
</code></pre>
<p>I was wondering, is there any way to get the result like</p>
<pre><code>Dear Mr. XXXX
</code></pre>
<p>instead? </p>
<p>Thanks in advance.</p>
| 0 | 2016-08-31T22:16:01Z | 39,259,890 | <p>Yes use <code>re.search</code> if you only expect one match:</p>
<pre><code>re.search(r"Guest last name: (.*)", string).group(1)
</code></pre>
<p>findall is if you expect multiple matches. You probably want to also add <code>?</code> to your regex <code>(.*?)</code> for a non-greedy capture but you also probably want to be a little more specific and capture up to the next possible character after the name/phrase you want.</p>
| 0 | 2016-08-31T22:21:58Z | [
"python",
"regex",
"python-2.7",
"match"
] |
Creating a more consistant, randomly generated, world space for a text-based RPG | 39,259,846 | <p>Currently, I can create a randomized world within a 2-D array. However, I feel it is too random. Here is the class I'm currently working with:</p>
<p><code>from random import randint, choice, randrange</code></p>
<pre><code>class WorldSpace(object):
def __init__(self, row, col, world_array):
self.row = row # Number of lists to be created.
self.col = col # Number of indexes inside of each row.
self.world_array = world_array
</code></pre>
<p>The <code>WorldSpace</code> method that creates the world:</p>
<pre><code>@classmethod
def generate(cls, autogen):
print 'Starting world generation...'
print
if autogen is True:
row_amt = 75
col_amt = 75
else:
row_amt = input('Please specify row length of world array.: ')
col_amt = input('Please specify column length of world array.: ')
if (row_amt or col_amt) == 0:
print 'Invalid world values!'
cls.generateWorld(False)
world_array = [[' ']*col_amt for _ in xrange(row_amt)]
print 'Created world...'
return cls(row_amt, col_amt, world_array)
</code></pre>
<p>A method that modifies the world -- currently only creates <code>forests</code> though in my full segment of code, a series of oceans and mountains are formed as well:</p>
<pre><code>def modify_world(self, autogen):
if autogen is True:
# Forests:
count = randint(6, 10)
while count > 0:
a = randint(1, (self.row / randint(2, 6)))
b = randint(1, (self.col / randint(2, 6)))
row_val = randint(5, self.row)
count_val = randint(5, 15)
self.genLine_WObj(a, b, row_val, 't', count_val)
count -=1
print('\n'.join([''.join(['{:1}'.format(item) for item in row])
for row in self.world_array]))
inp = input('')
if inp != '':
return
</code></pre>
<p>And the method that actually creates the <code>forest</code> tiles:</p>
<pre><code>def genLine_WObj(self, a, b, row_val, char, count):
# 'genLine_WObj' - Generate Line(like) world object.
# Used to place lines of certain characters with psuedo-randomized
# width and length onto the world array.
while count != 0:
row_col_dict = {row_val: (a, b)}
for row in row_col_dict.keys():
startPos, endPos = row_col_dict[row]
for i in range(startPos, endPos):
self.world_array[row][i] = char
b += choice([0, 1])
a += choice([0, 0, 0, 0, 1])
row_val -= 1
count -= 1
</code></pre>
<p>Now to actually run the program:</p>
<pre><code>world = WorldSpace.generate(True)
world.modify_world(True)
</code></pre>
<p>However, while this works ~20-30% of the time, sometimes it will generate small forests, or small pairs of <code>t</code> characters, when it should be creating forests all around the map. How can I improve my code to make the randomized generation more consistent?</p>
| 2 | 2016-08-31T22:17:46Z | 39,260,480 | <p>Fixed:</p>
<ol>
<li>Sometimes your <code>a</code> is bigger than your <code>b</code> and forest is not generated.</li>
<li>All your forests tend to be at the left side of map.</li>
</ol>
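<p>Fix 1 stems from how <code>range()</code> behaves: in the original <code>genLine_WObj</code>, whenever <code>a</code> ends up larger than <code>b</code>, the inner <code>for i in range(startPos, endPos)</code> loop runs zero times and no tiles are drawn:</p>

```python
# range() yields nothing when start > stop, so no forest tiles get placed
print(list(range(5, 3)))   # []
print(list(range(3, 5)))   # [3, 4]
```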
<p>Inline comments are added for changed lines.</p>
<pre><code>def modify_world(self, autogen):
if autogen is True:
# Forests:
count = randint(6, 10)
while count > 0:
a = randrange(self.col) # begin of forest
width = self.col / randint(2, 6) # initial width of forest
b = int(a + width) # end of forest
row_val = randint(5, self.row)
count_val = randint(5, 15)
self.genLine_WObj(a, b, row_val, 't', count_val)
count -=1
print('\n'.join([''.join(['{:1}'.format(item) for item in row])
for row in self.world_array]))
inp = input('')
if inp != '':
return
def genLine_WObj(self, a, b, row_val, char, count):
# 'genLine_WObj' - Generate Line(like) world object.
# Used to place lines of certain characters with psuedo-randomized
# width and length onto the world array.
while count != 0:
row_col_dict = {row_val: (a, b)}
for row in row_col_dict.keys():
startPos, endPos = row_col_dict[row]
for i in range(startPos, min(self.col, endPos)): # added min
self.world_array[row][i] = char
b += choice([0, 1])
a += choice([0, 0, 0, 0, 1])
row_val -= 1
count -= 1
</code></pre>
<p>And I would change <code>width</code> of forest to something mapsize-independent. E. g.:</p>
<pre><code>width = randint(2, 15)
</code></pre>
<p>But that depends on your goals.</p>
| 0 | 2016-08-31T23:21:18Z | [
"python",
"arrays",
"python-2.7",
"oop"
] |
Make telnet communication using python | 39,259,858 | <p>It is my first question here :D</p>
<p>I have to send some commands from my personal computer (Linux-Red Hat) to a server (Robot Controller). I saw that the controller have a Ethernet protocol that allows sending commands using telnet communication.</p>
<p>My question: is it possible to make telnet connection, send commands and read the output using python? If it is, can you help me?</p>
<p>Thank you.</p>
| 0 | 2016-08-31T22:18:33Z | 39,260,021 | <p>In python they have telnetlib, which allows telnet communication. I'm pretty sure that this is what you're looking for. You can find the docs at <a href="https://docs.python.org/2/library/telnetlib.html" rel="nofollow">https://docs.python.org/2/library/telnetlib.html</a> Here is a basic way to logon to a windows server and get the directory listing (Courtesy of the docs pages above)</p>
<pre><code>import telnetlib
HOST = "localhost"
user = "username"
password = "password"
tn = telnetlib.Telnet(HOST)
tn.read_until("login: ")
tn.write(user + "\n")
if password:
tn.read_until("Password: ")
tn.write(password + "\n")
tn.write("dir\n")
tn.write("exit\n")
print tn.read_all()
</code></pre>
| 1 | 2016-08-31T22:32:40Z | [
"python",
"telnet",
"ethernet"
] |
Python language/syntax usage | 39,259,859 | <p>New to python but ran into something I don't understand. The following line of code:</p>
<pre><code>diff = features[0:] - test[0] # <- this is what I don't get
</code></pre>
<p>is used thusly:</p>
<pre><code>x = diff[i]
</code></pre>
<p>to return the element-wise difference between <code>features[i]</code> and <code>test[0]</code>. Can anyone point to a language reference for this or explain it? I understand how the result can be developed using "def" or "lambda" but I don't understand the construction.</p>
<p>thanks!</p>
| 1 | 2016-08-31T22:18:49Z | 39,259,974 | <p><code>feature</code> appears to be a <a href="http://www.numpy.org/" rel="nofollow">Numpy</a> array. Numpy arrays 'broadcast' scalar operations to the whole array.</p>
<pre><code>import numpy as np
asd = np.full([10,10], 10, np.int64)
asd /= 5
print asd # prints a 10x10 array of 2s
</code></pre>
| 2 | 2016-08-31T22:28:11Z | [
"python"
] |
Python language/syntax usage | 39,259,859 | <p>New to python but ran into something I don't understand. The following line of code:</p>
<pre><code>diff = features[0:] - test[0] # <- this is what I don't get
</code></pre>
<p>is used thusly:</p>
<pre><code>x = diff[i]
</code></pre>
<p>to return the element-wise difference between <code>features[i]</code> and <code>test[0]</code>. Can anyone point to a language reference for this or explain it? I understand how the result can be developed using "def" or "lambda" but I don't understand the construction.</p>
<p>thanks!</p>
| 1 | 2016-08-31T22:18:49Z | 39,259,984 | <p>The answer depends on what <code>features[0:]</code> and <code>test[0]</code> evaluate to.
If <code>test[0]</code> is a number and <code>features[0:]</code> is a numpy array, then you may be using numpy to subtract a number from each element of an array: </p>
<pre><code>>>> import numpy
>>> array = numpy.array([49, 51, 53, 56])
>>> array - 13
array([36, 38, 40, 43])
</code></pre>
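<p>The same broadcasting works row-wise on 2-D arrays, which matches the question's <code>diff = features[0:] - test[0]</code> pattern (array values below are made up):</p>

```python
import numpy as np

features = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # made-up values
test = np.array([[0.5, 1.0]])
diff = features[0:] - test[0]   # test[0] is broadcast against every row
print(diff[1])                  # element-wise difference for row 1: [2.5 3. ]
```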
| 3 | 2016-08-31T22:29:06Z | [
"python"
] |
cx_Oracle ignores order by clause | 39,259,912 | <p>I've created complex query builder in my project, and during tests stumbled upon strange issue: same query with the same plan produces different results on different clients: cx_Oracle ignores order by clause, while Oracle SQLDeveloper Studio process query correctly, however in both cases order by present in both plans.</p>
<p>Query in question is:</p>
<pre><code>select *
from
(
select
a.*,
ROWNUM tmp__rnum
from
(
select base.*
from
(
select id
from
(
(
select
profile_id as id,
surname as sort__col
from names
)
/* here usually are several other subqueries chained by unions */
)
group by id
order by min(sort__col) asc
) tmp
left join (profiles) base
on tmp.id = base.id
where exists
(
select t.object_id
from object_rights t
where
t.object_id = base.id
and t.subject_id = :a__subject_id
and t.rights in ('r','w')
)
) a
where ROWNUM < :rows_to
)
where tmp__rnum >= :rows_from
</code></pre>
<p>and plan from cx_Oracle in case I missed anything:</p>
<pre><code>{'operation': 'SELECT STATEMENT', 'position': 9225, 'cardinality': 2164, 'time': 1, 'cost': 9225, 'depth': 0, 'bytes': 84396, 'optimizer': 'ALL_ROWS', 'id': 0, 'cpu_cost': 1983805801},
{'operation': 'VIEW', 'position': 1, 'filter_predicates': '"TMP__RNUM">=TO_NUMBER(:ROWS_FROM)', 'parent_id': 0, 'object_instance': 1, 'cardinality': 2164SEL$1', 'projection': '"from$_subquery$_001"."ID"[NUMBER,22], "from$_subquery$_001"."CREATION_TIME"[TIMESTAMP,11], "TMP__RNUM"[NUMBER,22]', 'time': 1, 'cost': 9225, 'depth': 1, 'bytes': 84396, 'id': 1, 'cpu_cost': 1983805801},
{'operation': 'COUNT', 'position': 1, 'filter_predicates': 'ROWNUM<TO_NUMBER(:ROWS_TO)', 'parent_id': 1, 'projection': '"BASE"."ID"[NUMBER,22], "BASE"."CREATION_TIME"[TIMESTAMP,11], ROWNUM[8]', 'options': 'STOPKEY', 'depth': 2, 'id': 2,
{'operation': 'HASH JOIN', 'position': 1, 'parent_id': 2, 'access_predicates': '"TMP"."ID"="BASE"."ID"', 'cardinality': 2164, 'projection': '(#keys=1) "BASE"."ID"[NUMBER,22], "BASE"."CREATION_TIME"[TIMESTAMP,11]', 'time': 1, 'cost': 9225, 'depth': 3, 'bytes': 86560, 'id': 3, 'cpu_cost': 1983805801},
{'operation': 'JOIN FILTER', 'position': 1, 'parent_id': 3, 'object_owner': 'SYS', 'cardinality': 2219, 'projection': '"BASE"."ID"[NUMBER,22], "BASE"."CREATION_TIME"[TIMESTAMP,11]', 'object_name': ':BF0000', 'time': 1, 'cost': 662, 'options': 'CREATE', 'depth': 4, 'bytes': 59913, 'id': 4, 'cpu_cost': 223290732},
{'operation': 'HASH JOIN', 'position': 1, 'parent_id': 4, 'access_predicates': '"T"."OBJECT_ID"="BASE"."ID"', 'cardinality': 2219, 'projection': '(#keys=1) "BASE"."ID"[NUMBER,22], "BASE"."CREATION_TIME"[TIMESTAMP,11]', 'time': 1, 'cost': 662, 'options': 'RIGHT SEMI', 'depth': 5, 'bytes': 59913, 'id': 5, 'cpu_cost': 223290732},
{'operation': 'TABLE ACCESS', 'position': 1, 'filter_predicates': '"T"."SUBJECT_ID"=TO_NUMBER(:A__SUBJECT_ID) AND ("T"."RIGHTS"=\'r\' OR "T"."RIGHTS"=\'w\')', 'parent_id': 5, 'object_type': 'TABLE', 'object_instance': 8, 'cardinality': 2219, 'projection': '"T"."OBJECT_ID"[NUMBER,22]', 'object_name': 'OBJECT_RIGHTS', 'time': 1, 'cost': 5, 'options': 'FULL', 'depth': 6, 'bytes': 24409, 'optimizer': 'ANALYZED', 'id': 6, 'cpu_cost': 1823386},
{'operation': 'TABLE ACCESS', 'position': 2, 'parent_id': 5, 'object_type': 'TABLE', 'object_instance': 6, 'cardinality': 753862, 'projection': '"BASE"."ID"[NUMBER,22], "BASE"."CREATION_TIME"[TIMESTAMP,11]', 'object_name': 'PROFILES', 'time': 1, 'cost': 654, 'options': 'FULL', 'depth': 6, 'bytes': 12061792, 'optimizer': 'ANALYZED', 'id': 7, 'cpu_cost': 145148296},
{'operation': 'VIEW', 'position': 2, 'parent_id': 3, 'object_instance': 3, 'cardinality': 735296, 'projection': '"TMP"."ID"[NUMBER,22]', 'time': 1, 'cost': 8559, 'depth': 4, 'bytes': 9558848, 'id': 8, 'cpu_cost': 1686052619},
{'operation': 'SORT', 'position': 1, 'parent_id': 8, 'cardinality': 735296, 'projection': '(#keys=1) MIN("SURNAME")[50], "PROFILE_ID"[NUMBER,22]', 'time': 1, 'cost': 8559, 'options': 'ORDER BY', 'temp_space': 18244000, 'depth': 5, 'bytes': 10294144, 'id': 9, 'cpu_cost': 1686052619},
{'operation': 'HASH', 'position': 1, 'parent_id': 9, 'cardinality': 735296, 'projection': '(#keys=1; rowset=200) "PROFILE_ID"[NUMBER,22], MIN("SURNAME")[50]', 'time': 1, 'cost': 8559, 'options': 'GROUP BY', 'temp_space': 18244000, 'depth': 6, 'bytes': 10294144, 'id': 10, 'cpu_cost': 1686052619},
{'operation': 'JOIN FILTER', 'position': 1, 'parent_id': 10, 'object_owner': 'SYS', 'cardinality': 756586, 'projection': '(rowset=200) "PROFILE_ID"[NUMBER,22], "SURNAME"[VARCHAR2,50]', 'object_name': ':BF0000', 'time': 1, 'cost': 1202, 'options': 'USE', 'depth': 7, 'bytes': 10592204, 'id': 11, 'cpu_cost': 190231639},
{'operation': 'TABLE ACCESS', 'position': 1, 'filter_predicates': 'SYS_OP_BLOOM_FILTER(:BF0000,"PROFILE_ID")', 'parent_id': 11, 'object_type': 'TABLE', 'object_instance': 5, 'cardinality': 756586, 'projection': '(rowset=200) "PROFILE_ID"[NUMBER,22], "SURNAME"[VARCHAR2,50]', 'object_name': 'NAMES', 'time': 1, 'cost': 1202, 'options': 'FULL', 'depth': 8, 'bytes': 10592204, 'optimizer': 'ANALYZED', 'id': 12, 'cpu_cost': 190231639}
</code></pre>
<p>cx_Oracle output (appears to be ordered by id):</p>
<pre><code>ID, Created, rownum
(1829, 2016-08-24, 1)
(2438, 2016-08-24, 2)
</code></pre>
<p>SQLDeveloper Output (ordered by surname, as expected):</p>
<pre><code>ID, Created, rownum
(518926, 2016-08-28, 1)
(565556, 2016-08-29, 2)
</code></pre>
| 2 | 2016-08-31T22:23:36Z | 39,260,412 | <p>I don't see an <code>ORDER BY</code> clause that would affect the ordering of the results of the query. In SQL, the only way to guarantee the ordering of a result set is to have an <code>ORDER BY</code> clause for the outer-most <code>SELECT</code>.</p>
<p>In almost all cases, an <code>ORDER BY</code> in a subquery is not necessarily respected (Oracle makes an exception when there are <code>rownum</code> comparisons in the next level of the query -- and even that is now out of date with the support of <code>FETCH FIRST <n> ROWS</code>).</p>
<p>So, there is no reason to expect that an <code>ORDER BY</code> in the <em>innermost</em> subquery would have any effect, particularly with the <code>JOIN</code> that then happens.</p>
<p>Suggestions:</p>
<ul>
<li>Move the <code>ORDER BY</code> to the outermost query.</li>
<li>Use <code>FETCH FIRST</code> syntax, if you are using Oracle 12c+.</li>
<li>Move the <code>ORDER BY</code> <em>after</em> the <code>JOIN</code>.</li>
<li>Use <code>ROW_NUMBER()</code> instead of <code>rownum</code>.</li>
</ul>
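<p>A hedged sketch of the reshaped query, with pagination done via <code>ROW_NUMBER()</code> in the outermost <code>SELECT</code> (it assumes <code>sort__col</code> is carried up through the join, which the original query does not do):</p>

```python
# Hedged sketch (not the question's exact builder output): let ROW_NUMBER()
# impose the order in the outermost query so it survives in every client.
paged_query = """
select * from (
    select a.*, ROW_NUMBER() OVER (ORDER BY a.sort__col ASC) as tmp__rnum
    from ( /* tmp joined to base, with tmp.sort__col selected */ ) a
)
where tmp__rnum >= :rows_from and tmp__rnum < :rows_to
"""
# cursor.execute(paged_query, rows_from=1, rows_to=51)  # cx_Oracle named binds
```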
| 4 | 2016-08-31T23:13:16Z | [
"python",
"sql",
"oracle",
"cx-oracle"
] |
Using regex to select html content | 39,259,989 | <p>I have a file with several instances of rows with this structure:</p>
<pre><code><tr>
<td style="width:25%;">
<span class="results_title_text">DUNS:</span> <span class="results_body_text"> 012361296</span>
</td>
<td style="width:25%;">
</td>
<!-- label as CAGE when US Territory is listed as Country -->
<td style="width:27%;">
<span class="results_title_text">CAGE Code:</span> <span class="results_body_text">HELLO</span>
</td>
<td style="width:15%" rowspan="2">
<input type="button" value="View Details" title="View Details for Rascal X-Press, Inc." class="center" style="height:25px; width:90px; vertical-align:middle; margin:7px 3px 7px 3px;" onClick="viewEntry('4420848', '1472652382619')" />
</td>
</tr>
</code></pre>
<p>I want to select only those <code><span class="results_body_text"></code> that are preceeded by <code><span class="results_title_text">DUNS:</span></code> so in this case I would only return the span that contains <code>012361296</code> and not the one that contains <code>HELLO</code></p>
<p>How can I do this using a regular expression or anything else? I have tried the "starts with" regex format, but I am failing to see what string I would be parsing in that case. I eventually want to parse the regex into a <code>re.compile()</code> compile function in python.</p>
| 0 | 2016-08-31T22:29:24Z | 39,260,413 | <p>Use a positive lookbehind. Since positive look(ahead|behind)s aren't included in the resulting match, they come very handy in parsing stuff at specific locations.</p>
<pre><code>(?<=<span class="results_title_text">\s*DUNS:\s*</span>\s*)<span class="results_body_text">\s*[\u0000-\uFFFF]*?\s*</span>
</code></pre>
<p>If the lookbehind throws an error (Python's <code>re</code> module only supports fixed-width lookbehinds, so it will here), you can just do it without a lookbehind:</p>
<pre><code><span class="results_title_text">\s*DUNS:\s*</span>\s*<span class="results_body_text">\s*[\u0000-\uFFFF]*?\s*</span>
</code></pre>
<p>and then extract exactly what you want by passing the result(s) to another regex, which is basically a subset of the above regex:</p>
<pre><code><span class="results_body_text">\s*[\u0000-\uFFFF]*?\s*</span>
</code></pre>
<p>Also, I placed <code>\s*</code> at points where one can put an arbitrary amount of whitespace.</p>
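<p>In Python, a capture group on the non-lookbehind version avoids the second pass entirely (sample HTML taken from the question):</p>

```python
import re

html = ('<span class="results_title_text">DUNS:</span> '
        '<span class="results_body_text"> 012361296</span>')
m = re.search(r'<span class="results_title_text">\s*DUNS:\s*</span>\s*'
              r'<span class="results_body_text">\s*(.*?)\s*</span>', html)
print(m.group(1))   # 012361296
```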
| 0 | 2016-08-31T23:13:18Z | [
"python",
"html",
"regex",
"data-cleaning"
] |
Using regex to select html content | 39,259,989 | <p>I have a file with several instances of rows with this structure:</p>
<pre><code><tr>
<td style="width:25%;">
<span class="results_title_text">DUNS:</span> <span class="results_body_text"> 012361296</span>
</td>
<td style="width:25%;">
</td>
<!-- label as CAGE when US Territory is listed as Country -->
<td style="width:27%;">
<span class="results_title_text">CAGE Code:</span> <span class="results_body_text">HELLO</span>
</td>
<td style="width:15%" rowspan="2">
<input type="button" value="View Details" title="View Details for Rascal X-Press, Inc." class="center" style="height:25px; width:90px; vertical-align:middle; margin:7px 3px 7px 3px;" onClick="viewEntry('4420848', '1472652382619')" />
</td>
</tr>
</code></pre>
<p>I want to select only those <code><span class="results_body_text"></code> that are preceeded by <code><span class="results_title_text">DUNS:</span></code> so in this case I would only return the span that contains <code>012361296</code> and not the one that contains <code>HELLO</code></p>
<p>How can I do this using a regular expression or anything else? I have tried the "starts with" regex format, but I am failing to see what string I would be parsing in that case. I eventually want to parse the regex into a <code>re.compile()</code> compile function in python.</p>
| 0 | 2016-08-31T22:29:24Z | 39,265,429 | <p>Using pyparsing to process HTML allows you to gloss over things like unexpected whitespace, extra/missing attributes, tags in upper or lower case. Assuming you have read your HTML source into a variable <code>html</code>, this pyparsing code will extract the target value:</p>
<pre><code>from pyparsing import makeHTMLTags, SkipTo
span,end_span = makeHTMLTags("span")
patt = span + 'DUNS:' + end_span + span + SkipTo(end_span)("results_body") + end_span
print(patt.searchString(html)[0].results_body)
</code></pre>
<p>prints:</p>
<pre><code>012361296
</code></pre>
| 0 | 2016-09-01T07:35:12Z | [
"python",
"html",
"regex",
"data-cleaning"
] |
Rename a particular instance in multiple files with the file name? | 39,260,133 | <p>I had to create about 500 copies of a xml file in the directory, which I managed to get done. As a part of the next problem is that I want to rename particular text in the file. How can I go about doing it?</p>
<p>This is what I have:
1000.xml, 1001.xml, 1002.xml...</p>
<p>1000.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<addresses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation='test.xsd'>
<address>
<name>Joe Tester</name>
<street>Baker street 5</street>
<id>1000</id>
</address>
<count>1000</count>
</code></pre>
<p></p>
<p>Essentially, this is copied to all the other files, but with a numerical and chronological name. How do I replace this "1000" with the "file name"? So, the new file should be -
1001.xml:</p>
<pre><code> <?xml version="1.0" encoding="UTF-8"?>
<addresses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation='test.xsd'>
<address>
<name>Joe Tester</name>
<street>Baker street 5</street>
<id>1001</id>
</address>
<count>1001</count>
</addresses>
</code></pre>
<p>I could do only this - <code>sed -i '' -e 's/1000/1001/g' $(find . -type f)</code> which will replace all the 1000 with 1001, but not the file name.</p>
| -2 | 2016-08-31T22:43:28Z | 39,260,362 | <p>After deciphering your question I see you want to change actual content in the xml file i.e the id or some other node's text to the name of the file so use an xml parser like <a href="http://lxml.de/" rel="nofollow"><em>lxml</em></a></p>
<pre><code>from glob import iglob
import lxml.etree as et
for fle in iglob("[0-9][0-9][0-9][0-9].xml"):
tree = et.parse(fle)
id_ = tree.find(".//id").text = fle
tree.write(fle, encoding="utf-8")
</code></pre>
<p>If you want to change the count also use:</p>
<pre><code> for fle in iglob("[0-9][0-9][0-9][0-9].xml"):
tree = et.parse(fle)
id_, count = tree.find(".//id"), tree.find(".//count")
id_.text = count.text = fle
tree.write(fle, encoding="utf-8")
</code></pre>
<p>Whatever text you want to set to the file name just look for the node with find and set the text use the <em>node.text = ...</em> logic. If you want to use the name ignoring the extension just split:</p>
<pre><code>for fle in iglob("[0-9][0-9][0-9][0-9].xml"):
tree = et.parse(fle)
id_, count = tree.find(".//id"), tree.find(".//count")
id_.text = count.text = fle.split(".")[0]
tree.write(fle, encoding="utf-8")
</code></pre>
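<p>If lxml is not available, the stdlib <code>xml.etree.ElementTree</code> supports the same <code>find</code>/<code>.text</code> pattern; here an inline XML string stands in for one of the files:</p>

```python
import xml.etree.ElementTree as ET

xml_text = ("<addresses><address><name>Joe Tester</name>"
            "<id>1000</id></address><count>1000</count></addresses>")
root = ET.fromstring(xml_text)
root.find(".//id").text = root.find(".//count").text = "1001"
print(ET.tostring(root, encoding="unicode"))
```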
| 2 | 2016-08-31T23:07:48Z | [
"python",
"bash",
"perl"
] |
Rename a particular instance in multiple files with the file name? | 39,260,133 | <p>I had to create about 500 copies of a xml file in the directory, which I managed to get done. As a part of the next problem is that I want to rename particular text in the file. How can I go about doing it?</p>
<p>This is what I have:
1000.xml, 1001.xml, 1002.xml...</p>
<p>1000.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<addresses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation='test.xsd'>
<address>
<name>Joe Tester</name>
<street>Baker street 5</street>
<id>1000</id>
</address>
<count>1000</count>
</code></pre>
<p></p>
<p>Essentially, this is copied to all the other files, but with a numerical and chronological name. How do I replace this "1000" with the "file name"? So, the new file should be -
1001.xml:</p>
<pre><code> <?xml version="1.0" encoding="UTF-8"?>
<addresses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation='test.xsd'>
<address>
<name>Joe Tester</name>
<street>Baker street 5</street>
<id>1001</id>
</address>
<count>1001</count>
</addresses>
</code></pre>
<p>I could do only this - <code>sed -i '' -e 's/1000/1001/g' $(find . -type f)</code> which will replace all the 1000 with 1001, but not the file name.</p>
| -2 | 2016-08-31T22:43:28Z | 39,263,222 | <p>Try your sed command in a loop-</p>
<pre><code>for i in {1000..1500} #or whatever your maximum number is
do
sed -i "s/1000/"$i"/g" "$i".xml
done
</code></pre>
| 1 | 2016-09-01T05:21:37Z | [
"python",
"bash",
"perl"
] |
Rename a particular instance in multiple files with the file name? | 39,260,133 | <p>I had to create about 500 copies of a xml file in the directory, which I managed to get done. As a part of the next problem is that I want to rename particular text in the file. How can I go about doing it?</p>
<p>This is what I have:
1000.xml, 1001.xml, 1002.xml...</p>
<p>1000.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<addresses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation='test.xsd'>
<address>
<name>Joe Tester</name>
<street>Baker street 5</street>
<id>1000</id>
</address>
<count>1000</count>
</code></pre>
<p></p>
<p>Essentially, this is copied to all the other files, but with a numerical and chronological name. How do I replace this "1000" with the "file name"? So, the new file should be -
1001.xml:</p>
<pre><code> <?xml version="1.0" encoding="UTF-8"?>
<addresses xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation='test.xsd'>
<address>
<name>Joe Tester</name>
<street>Baker street 5</street>
<id>1001</id>
</address>
<count>1001</count>
</addresses>
</code></pre>
<p>I could do only this - <code>sed -i '' -e 's/1000/1001/g' $(find . -type f)</code> which will replace all the 1000 with 1001, but not the file name.</p>
| -2 | 2016-08-31T22:43:28Z | 39,269,104 | <p>You've tagged it <code>perl</code> so here's how I'd do it:</p>
<pre><code>#!/usr/bin/perl
use strict;
use warnings;
use XML::Twig;
#iterate the files.
foreach my $xml_file ( glob "*.xml" ) {
#regex match the number for the XML.
my ( $file_num ) = $xml_file =~ m/(\d+).xml/;
#create an XML::Twig, and set it to 'indented' output.
XML::Twig -> new ( pretty_print => 'indented',
#matches elements and runs the subroutine on 'it'. ($_) is the
#current element in this context.
twig_handlers => { 'address/id' => sub { $_ -> set_text($file_num) },
'count' => sub { $_ -> set_text($file_num) },
#parsefile_inplace reads and writes back any changes to the file
#as it goes.
} ) -> parsefile_inplace($xml_file);
}
</code></pre>
<p>This uses <code>XML::Twig</code>, which allows you do an in place edit. It does this via the element handlers, which upon hitting a suitable match, replaces the content with the right numeric value for the file.</p>
<p>I've opted to replace the defined content for <code>address/id</code> and <code>count</code>, rather than just doing straight search and replace, because then ... you don't have to worry about <code>1000</code> showing up anywhere else in the content. (Like the address). </p>
| 3 | 2016-09-01T10:28:53Z | [
"python",
"bash",
"perl"
] |
Adding default parameter that is object to a function that is inside a class | 39,260,134 | <p>I am trying to make an applet for mate-panel in Linux. I have two modules, Window.py and Applet.py. I get this error: </p>
<pre><code>Traceback (most recent call last):
File "./Applet.py", line 37, in WindowButtonsAppletFactory
WindowButtonsApplet(applet)
File "./Applet.py", line 23, in WindowButtonsApplet
CloseButton.connect("Clicked",WindowActions().win_close(SelectedWindow))
File "WindowButtonsApplet/Window.py", line 23, in win_close
Window.close(Timestamp)
AttributeError: 'WindowActions' object has no attribute 'close'
** (Applet.py:2191): WARNING **: need to free the control here
</code></pre>
<p>And I don't know why, since the wnck library has an attribute named close.
The second thing I want to ask is: why do I need to init the class every time I call it?</p>
<p>Here is the code from:
Applet.py</p>
<pre><code>#!/bin/env python3
import gi
import Window
gi.require_version("Gtk","3.0")
gi.require_version("MatePanelApplet", "4.0")
from gi.repository import Gtk
from gi.repository import MatePanelApplet
from Window import *
def WindowButtonsApplet(applet):
Box = Gtk.Box("Horizontal")
CloseButton = Gtk.Button("x")
MinButton = Gtk.Button("_")
UmaximizeButton = Gtk.Button("[]")
SelectedWindow = WindowActions().active_window()
CloseButton.connect("Clicked",WindowActions().win_close(SelectedWindow))
Box.pack_start(CloseButton)
applet.add(Box)
applet.show_all()
# Hack for transparent background
applet.set_background_widget(applet)
def WindowButtonsAppletFactory(applet, iid,data):
if iid != "WindowButtonsApplet":
return False
WindowButtonsApplet(applet)
return True
# Mate panel procedure to load the applet on panel
MatePanelApplet.Applet.factory_main("WindowButtonsAppletFactory", True,
MatePanelApplet.Applet.__gtype__,
WindowButtonsAppletFactory, None)
</code></pre>
<p>Window.py</p>
<pre><code>#!/usr/bin/env python3
import time
import gi
gi.require_version("Gtk","3.0")
gi.require_version("Wnck","3.0")
from gi.repository import Gtk
from gi.repository import Wnck
class WindowActions:
DefaultScreen = Wnck.Screen.get_default()
DTimestamp = int(time.time())
def active_window(self,Screen=DefaultScreen):
Screen.force_update()
self.ActiveWindow = Screen.get_active_window()
return self.ActiveWindow
def win_close(Window,Timestamp=DTimestamp):
Window.close(Timestamp)
def win_minimize(self,Window):
Window.minimize()
def win_umaximize(self,Window):
self.Window.maximize()
</code></pre>
| 2 | 2016-08-31T22:43:34Z | 39,261,926 | <p>You're missing the reference to <code>self</code> in:</p>
<pre><code>def win_close(Window,Timestamp=DTimestamp):
Window.close(Timestamp)
</code></pre>
<p>as a result a <code>WindowActions</code> instance is passed, which doesn't define a close method, and your actual selected window is passed to <code>Timestamp</code>.</p>
<p>Add <code>self</code> to your method definition and that should solve it.</p>
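<p>For readers unfamiliar with why this fails: when a method is defined without <code>self</code>, the first positional parameter silently absorbs the instance. A minimal, GTK-free sketch of the same mistake (the <code>Window</code>/<code>Actions</code> classes here are hypothetical stand-ins, not the asker's real ones):</p>

```python
class Window:
    def close(self, timestamp):
        return "closed at {}".format(timestamp)

class Actions:
    # Buggy: no `self`, so calling a.win_close_buggy(w) binds the
    # Actions instance to `window` and the real window to `timestamp`.
    def win_close_buggy(window, timestamp=0):
        return isinstance(window, Actions)

    # Fixed: with `self` present, `window` receives the actual argument.
    def win_close_fixed(self, window, timestamp=0):
        return window.close(timestamp)

a, w = Actions(), Window()
print(a.win_close_buggy(w))  # True: the instance sneaked into `window`
print(a.win_close_fixed(w))  # closed at 0
```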
| 1 | 2016-09-01T02:46:57Z | [
"python",
"python-3.x",
"gtk3",
"pygobject",
"wnck"
] |
Scrapy not executing in the correct order | 39,260,143 | <p>I am currently working on a web-crawler that is supposed to visit a list of websites in a directory, visit the sites' CSS stylesheets, check for an @media tag (a basic way of checking for responsive design, I know there are other corner cases to consider), and print all websites that do not use responsive design to a file.</p>
<p>I am fairly certain that my method of actually checking the CSS for an @media tag works fine, but the spider is not visiting all the CSS files before deciding whether or not it has found one with an @media tag. I have a test file that logs debugging output as the program progresses, and it shows odd patterns such as finishing checking all CSS files and then printing out what it found in the files, which shouldn't happen.</p>
<p>I was hoping someone could look at my code and help me determine why this isn't happening in the order I want it to. For reference, the goal is:</p>
<ol>
<li>Visit a website from the list</li>
<li>Visit every CSS file in the head element of that site's HTML</li>
<li>If an @media tag is found, we're done and the site uses responsive design</li>
<li>If not, continue checking more CSS files</li>
<li>If no CSS file contains an @media tag, the site does not use responsive design and should be added to the list</li>
</ol>
<p>Here's my code (not everything works perfectly - for example, the program times out because I haven't worked out using TimeOutError yet, but for the most part, I feel like this should do its job of correctly evaluating websites, and it is not doing that):</p>
<pre><code>import scrapy
import re
import os.path
from scrapy.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from twisted.internet.error import TimeoutError
import time
class LCCISpider(CrawlSpider):
name = "lcci"
start_urls = ["http://www.lancasterchamber.com/busdirectory.aspx?mode=category"]
#Calls parse_item for every category link on main page
rules = (Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@id="catListingResults"]/table/tr')),
callback = 'parse_item', follow = True),)
website_list = []
found_media = False
#Called for each category
def parse_item(self, response):
#For each site on the page, calls parse_website
sites = response.xpath('//div[@id="busListingResults"]/table/tr')
for site in sites:
urls = site.xpath('.//td/a[4]/@href').extract()
for url in urls:
if len(url) == 0:
continue
else:
new_site = response.urljoin(url)
yield scrapy.Request(new_site, callback=self.parse_website,
errback=self.errback_website)
def parse_website(self, response):
f = open('output2.txt', 'a')
f.write("NOW VISITING")
f.flush()
f.write(response.url)
f.flush()
f.write("\n")
f.flush()
f.close()
#reset found_media to false for each website
self.found_media = False
#for every link in the header, check potential css for @media tag
for href in response.css("head > link::attr('href')"):
url = response.urljoin(href.extract())
#if @media tag has not been found, continue checking css
if self.found_media == False:
#Call check_css for the url of the css file
yield scrapy.Request(url, callback=self.check_css,
errback=self.errback_website)
f = open('output2.txt', 'a')
f.write("step\n")
f.flush()
f.close()
else:
break
#if no @media tag is found in any link in the header, add the url to the website_list
if self.found_media == False:
#self.website_list.append(response.url)
f = open('output2.txt', 'a')
f.write("No @media tag in")
f.flush()
f.write(response.url)
f.flush()
f.write("\n")
f.flush()
f.close()
f = open('outputfalse2.txt', 'a')
f.write(response.url)
f.write("\n")
f.close()
else:
f = open('outputtrue.txt', 'a')
f.write(reponse.url)
f.write("\n")
f.close()
def check_css(self, response):
#Just a way of converting url into a string, the ".txt" is otherwise meaningless
string = str(response.url)
f = open('output2.txt', 'a')
f.write("Checking CSS in ")
f.write(response.url)
f.write("\n")
f.flush()
f.close()
#only perform regex search if it's a .css file
if (string[-4:] == ".css"):
media_match = re.search(r'@media', response.body, flags=0)
if media_match != None:
f = open('output2.txt', 'a')
f.write("found @media tag in " + response.url + "\n")
f.flush()
#If an @media tag is found, set found_media to True
self.found_media = True
f.close()
else:
f = open('output2.txt', 'a')
f.write("not css")
f.flush()
f.close()
def errback_website(self, failure):
if failure.check(TimeoutError):
request = failure.request
self.logger.error = ('TimeoutError on %s', request.url)
</code></pre>
| 0 | 2016-08-31T22:44:13Z | 39,266,767 | <p>I skimmed through and couldn't help but make this work. This is fully cleaned-up code.
Very little has changed in terms of logic.
What it does right now is:</p>
<ol>
<li>Connect to the website</li>
<li>Get all categories</li>
<li>Get all websites from categories</li>
<li>Connect to first page of every website</li>
<li>Look for <code>.css</code> links</li>
<li>Connect to <code>.css</code> links; if the <code>@media</code> regex matches, yield an item containing the css url and the website url</li>
</ol>
<p>The only problem here is that, because of the asynchronous nature of scrapy, you end up with lots of duplicates, since you might crawl multiple .css files at a time. For that we can use a simple pipeline to detect and drop duplicates.
For future reference, you shouldn't debug with file writes. Take a look at scrapy shell; you can even use it inside of <code>parse</code> to open up a shell during the crawl, like:</p>
<pre><code>def parse(self, response):
inspect_response(response, self)
</code></pre>
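<p>As an aside, the duplicate handling in the pipeline of the working spider below boils down to remembering already-seen keys in a set. A standalone sketch of that idea, with hypothetical item dicts:</p>

```python
class DupeFilter:
    # Minimal stand-in for the DropItem-based Scrapy pipeline: remember
    # every website already emitted and drop repeats.
    def __init__(self):
        self.known_websites = set()

    def process_item(self, item):
        if item["website"] in self.known_websites:
            return None  # a real Scrapy pipeline would raise DropItem here
        self.known_websites.add(item["website"])
        return item

f = DupeFilter()
print(f.process_item({"website": "a.com"}))  # first time: item passes through
print(f.process_item({"website": "a.com"}))  # None (duplicate dropped)
```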
<p>Here's the working spider:</p>
<pre><code>import re
from scrapy.spiders import CrawlSpider, Rule
from scrapy.exceptions import DropItem
from scrapy.linkextractors import LinkExtractor
from twisted.internet.error import TimeoutError
from scrapy import Request
class DupePipeline(object):
def __init__(self):
self.known_websites = set()
def process_item(self, item, spider):
if item['website'] in self.known_websites:
raise DropItem('duplicate')
self.known_websites.add(item['website'])
return item
class LCCISpider(CrawlSpider):
name = "lcci"
start_urls = ["http://www.lancasterchamber.com/busdirectory.aspx?mode=category"]
custom_settings = {
'ROBOTSTXT_OBEY': False,
'ITEM_PIPELINES': {
'myproject.spiders.spider.DupePipeline': 666,
}
}
# Calls parse_item for every category link on main page
rules = (Rule(LinkExtractor(restrict_xpaths=['//div[@id="catListingResults"]/table/tr']),
callback='parse_item', follow=True),) # why follow?
# Called for each category
def parse_item(self, response):
# For each site on the page, calls parse_website
sites = response.xpath('//div[@id="busListingResults"]/table/tr')
for site in sites:
urls = site.xpath('.//td/a[4]/@href').extract()
for url in urls:
if not url:
continue
new_site = response.urljoin(url)
yield Request(new_site,
callback=self.parse_website,
errback=self.errback_website)
def parse_website(self, response):
# for every link in the header, check potential css for @media tag
for href in response.css("head > link::attr('href')").extract():
if not href.endswith('.css'): # only css files
continue
yield Request(response.urljoin(href),
meta={'website': response.url},
callback=self.check_css,
errback=self.errback_website)
def check_css(self, response):
media_match = re.search(r'@media', response.body, flags=0)
if media_match:
# return item!
yield {'url': response.url,
'website': response.meta['website']}
def errback_website(self, failure):
if failure.check(TimeoutError):
request = failure.request
self.logger.error = ('TimeoutError on %s', request.url)
</code></pre>
<p>Results after running for a few minutes with <code>scrapy crawl lcci -o test.json</code>: <a href="http://pastebin.com/raw/kfsTKqUY" rel="nofollow">http://pastebin.com/raw/kfsTKqUY</a></p>
| 1 | 2016-09-01T08:45:37Z | [
"python",
"css",
"web",
"scrapy",
"web-crawler"
] |
Allure with Pytest: no decorators for Title and Description | 39,260,159 | <p>I did try to use these decorators in a test, but the compiler have issues because the decorators for both the Title and Description are not recognized.</p>
<p>I did use</p>
<pre><code>@allure.feature("feature1")
@allure.story("story1")
</code></pre>
<p>Without issues, but </p>
<pre><code>@allure.description("test")
@allure.title("The test title")
</code></pre>
<p>does not exist. Is this a limitation of the python port of the allure tool? The documentation does not mention that there is neither a description nor a title decorator, but the Allure page show that these are in fact legit commands that you can use for Allure.</p>
<p>My assumption was that the porting of Allure for Python would include everything; but I start to think that probably it has only the basics, like story, step and such.</p>
| 1 | 2016-08-31T22:45:41Z | 39,268,514 | <p>author of <code>allure-python</code> here.</p>
<p>You are right, there are no such decorators as <code>description</code> or <code>title</code>.
The reason is that <code>allure-python</code> collects the test title and description through <code>pytest</code>'s native means -- for bare python tests they are collected from the test function name and docstring, respectively.</p>
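<p>In other words, with bare pytest tests the function name and docstring are the data the adaptor reads. A quick sketch of the underlying attributes (plain Python, no allure required):</p>

```python
def test_login():
    """Checks that a user can log in with valid credentials."""
    assert True

# Per the explanation above, the report title comes from the function
# name and the description from the docstring; plain Python exposes both:
print(test_login.__name__)  # test_login
print(test_login.__doc__)   # Checks that a user can log in with valid credentials.
```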
<p>Overall, <code>allure-python</code> is not a 1-to-1 port of the Java version of allure but rather an adaptor that converts <code>pytest</code>'s own structures and mechanisms for allure report generation. Historically, only those parts of allure that are missing from native pytest (like steps) are implemented explicitly.</p>
<p>However, if you would feel more comfortable with those decorators feel free to open a pull request to add their implementation.</p>
<p>Best, Ivan.</p>
| 1 | 2016-09-01T10:02:42Z | [
"python",
"allure"
] |
Flask-WTF: CSRF token missing | 39,260,241 | <p>What seemed like a simple bug - a form submission that won't go through due to a "CSRF token missing" error - has turned into a day of hair pulling. I have gone through every SO article related to Flask or Flask-WTF and missing CSRF tokens, and nothing seems to be helping. </p>
<p>Here are the details:</p>
<p>Following <a href="http://stackoverflow.com/a/21501593/5578714" title="Martijin's guidelines">Martijin's guidelines</a> to an earlier question:</p>
<blockquote>
<p>The Flask-WTF CSRF infrastructure rejects a token if:</p>
<p>1) the token is missing. Not the case here, you can see the token in the form.</p>
</blockquote>
<p>The token is definitely present in my form, and being POST'ed successfully</p>
<blockquote>
<p>2) it is too old (default expiration is set to 3600 seconds, or an hour).
Set the TIME_LIMIT attribute on forms to override this. Probably not the
case here.</p>
</blockquote>
<p>Also OK for me - the token is well within the default expiration time </p>
<blockquote>
<p>3) if no 'csrf_token' key is found in the current session. You can
apparently see the session token, so that's out too.</p>
</blockquote>
<p>In my case, session['csrf_token'] is properly set and seen by Flask</p>
<blockquote>
<p>4) If the HMAC signature doesn't match; the signature is based on the
random value set in the session under the 'csrf_token' key, the
server-side secret, and the expiry timestamp in the token.</p>
</blockquote>
<p><strong>This is my problem</strong>. The HMAC comparison between the submitted form's CSRF and the session CSRF fails. And yet I don't know how to solve it. I've been desperate enough (as with the other questioner) to dig into Flask-WTF code and set debugging messages to find out what's going on. As best I can tell, it's working like this:</p>
<p>1) <code>generate_csrf_token()</code> in "form.py" (Flask-WTF) wants to generates a CSRF token. So it calls:</p>
<p>2) <code>generate_csrf()</code> in "csrf.py". That function generates a new session['csrf_token'] if one does not exist. <strong>In my case, this always happens</strong> - although other session variables appear to persist between requests, my debugging shows that I <strong>never</strong> have a 'csrf_token' in my session at the start of a request. Is this normal?</p>
<p>3) The generated token is returned and presumably incorporated into the form variable when I render hidden fields on the template. (again, debugging shows that this token is present in the form and properly submitted and received)</p>
<p>4) Next, the form is submitted.</p>
<p>5) Now, <code>validate_csrf</code> in csrf.py is called. But since another request has taken place, and generate_csrf() has generated a new session CSRF token, the two timestamps for the two tokens (in session and from the form) will not match. And since the CSRF is made up in part by expiration dates, therefore validation fails.</p>
<p>I suspect the problem is in step #2, where a new token is being generated for every request. But I have no clue why <strong>other</strong> variables in my session are persisting from request to request, but not "csrf_token".</p>
<p>There is no weirdness going on with SECRET_KEY or WTF_CSRF_SECRET_KEY either (they are properly set).</p>
<p>Anyone have any ideas?</p>
| -1 | 2016-08-31T22:52:44Z | 39,262,931 | <p>I figured it out. It appears to be a cookie/session size limit (which is probably beyond Flask's control) and a silent discarding of session variables when the limit is hit (which seems more like a bug).</p>
<p>Here's an example:</p>
<p><strong>templates/hello.html</strong></p>
<pre><code><p>{{ message|safe }}</p>
<form name="loginform" method="POST">
{{ form.hidden_tag() }}
{{ form.submit_button() }}
</form>
</code></pre>
<p><strong>myapp.py</strong></p>
<pre><code>from flask import Flask, make_response, render_template, session
from flask_restful import Resource, Api
from flask_wtf import csrf, Form
from wtforms import SubmitField
app = Flask(__name__)
app.secret_key = '5accdb11b2c10a78d7c92c5fa102ea77fcd50c2058b00f6e'
api = Api(app)
num_elements_to_generate = 500
class HelloForm(Form):
submit_button = SubmitField('Submit This Form')
class Hello(Resource):
def check_session(self):
if session.get('big'):
message = "session['big'] contains {} elements<br>".format(len(session['big']))
else:
message = "There is no session['big'] set<br>"
message += "session['secret'] is {}<br>".format(session.get('secret'))
message += "session['csrf_token'] is {}<br>".format(session.get('csrf_token'))
return message
def get(self):
myform = HelloForm()
session['big'] = list(range(num_elements_to_generate))
session['secret'] = "A secret phrase!"
csrf.generate_csrf()
message = self.check_session()
return make_response(render_template("hello.html", message=message, form=myform), 200, {'Content-Type': 'text/html'})
def post(self):
csrf.generate_csrf()
message = self.check_session()
return make_response("<p>This is the POST result page</p>" + message, 200, {'Content-Type': 'text/html'})
api.add_resource(Hello, '/')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>Run this with <code>num_elements_to_generate</code> set to 500 and you'll get something like this:</p>
<pre><code>session['big'] contains 500 elements
session['secret'] is 'A secret phrase!'
session['csrf_token'] is a6acb57eb6e62876a9b1e808aa1302d40b44b945
</code></pre>
<p>and a "Submit This Form" button. Click the button, and you'll get:</p>
<pre><code>This is the POST result page
session['big'] contains 500 elements
session['secret'] is 'A secret phrase!'
session['csrf_token'] is a6acb57eb6e62876a9b1e808aa1302d40b44b945
</code></pre>
<p>All well and good. But now change <code>num_elements_to_generate</code> to 3000, clear your cookies, rerun the app and access the page. You'll get something like:</p>
<pre><code>session['big'] contains 3000 elements
session['secret'] is 'A secret phrase!'
session['csrf_token'] is 709b239857fd68a4649deb864868897f0dc0a8fd
</code></pre>
<p>and a "Submit This Form" button. Click the button, and this time you'll get:</p>
<pre><code>This is the POST result page
There is no session['big'] set
session['secret'] is 'None'
session['csrf_token'] is 13553dce0fbe938cc958a3653b85f98722525465
</code></pre>
<p>3,000 digits stored in the session variable is too much, so the session variables do not persist between requests. Interestingly they DO exist in the session on the first page (no matter how many elements you generate), but they will not survive to the next request. And Flask-WTF, since it does not see a <code>csrf_token</code> in the session when the form is posted, generates a new one. If this was a form validation step, the CSRF validation would fail.</p>
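<p>A rough way to see the size difference (the 4093-byte limit and the plain-JSON proxy for Flask's signed cookie payload are assumptions, but the order of magnitude is what matters):</p>

```python
import json

COOKIE_LIMIT = 4093  # typical per-cookie browser limit in bytes (an assumption)

def fits_in_cookie(session_data):
    # Plain JSON length as a rough proxy for the session cookie payload
    # Flask would sign and send; real payloads differ but scale similarly.
    return len(json.dumps(session_data)) <= COOKIE_LIMIT

print(fits_in_cookie({"big": list(range(500))}))   # True: roughly 2.4 KB fits
print(fits_in_cookie({"big": list(range(3000))}))  # False: roughly 17 KB is far too big
```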
<p>This seems to be a known Flask (or Werkzeug) bug, <a href="https://github.com/pallets/werkzeug/pull/780" rel="nofollow">with a pull request here</a>. I'm not sure why Flask isn't generating a warning here - unless it is somehow technically unfeasible, it's an unexpected and unpleasant surprise that it is silently failing to keep the session variables when the cookie is too big.</p>
| 1 | 2016-09-01T04:55:45Z | [
"python",
"session",
"flask",
"wtforms",
"flask-wtforms"
] |
Faster RCNN:libcudart.so.7.0: cannot open shared object file: No such file or directory | 39,260,247 | <p>I get the following error when I run the demo from <a href="https://github.com/rbgirshick/py-faster-rcnn/tree/master" rel="nofollow">https://github.com/rbgirshick/py-faster-rcnn/tree/master</a>, and all the other steps before the demo have been completed successfully:</p>
<pre><code>mona@pascal:~/computer_vision/py-faster-rcnn$ ./tools/demo.py
Traceback (most recent call last):
File "./tools/demo.py", line 18, in <module>
    from fast_rcnn.test import im_detect
File "/home/mona/computer_vision/py-faster-rcnn/tools/../lib/fast_rcnn/test.py", line 17, in <module>
from fast_rcnn.nms_wrapper import nms
File "/home/mona/computer_vision/py-faster-rcnn/tools/../lib/fast_rcnn/nms_wrapper.py", line 9, in <module>
from nms.gpu_nms import gpu_nms
ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory
</code></pre>
<p>I have the following system settings:</p>
<pre><code>CuDNN V4
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17
$ uname -a
Linux pascal 3.13.0-62-generic #102-Ubuntu SMP Tue Aug 11 14:29:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$ lspci | grep -i nvidia
03:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)
83:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
$ gcc --version
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
</code></pre>
<p>What is the issue and how can it be solved?</p>
| -1 | 2016-08-31T22:53:05Z | 39,260,794 | <p>While not a neat solution, I ended up changing the paths to use CUDA 7.0. For whatever reason, it seems Faster RCNN is currently not compatible with CUDA 7.5 on Ubuntu 14.04. On Ubuntu 15.10 I had it working with CUDA 7.5 with the exact same settings!</p>
| 1 | 2016-09-01T00:02:01Z | [
"python",
"cuda",
"shared-libraries",
"caffe",
"cudnn"
] |
Faster RCNN:libcudart.so.7.0: cannot open shared object file: No such file or directory | 39,260,247 | <p>I get the following error when I run the demo from <a href="https://github.com/rbgirshick/py-faster-rcnn/tree/master" rel="nofollow">https://github.com/rbgirshick/py-faster-rcnn/tree/master</a>, and all the other steps before the demo have been completed successfully:</p>
<pre><code>mona@pascal:~/computer_vision/py-faster-rcnn$ ./tools/demo.py
Traceback (most recent call last):
File "./tools/demo.py", line 18, in <module>
    from fast_rcnn.test import im_detect
File "/home/mona/computer_vision/py-faster-rcnn/tools/../lib/fast_rcnn/test.py", line 17, in <module>
from fast_rcnn.nms_wrapper import nms
File "/home/mona/computer_vision/py-faster-rcnn/tools/../lib/fast_rcnn/nms_wrapper.py", line 9, in <module>
from nms.gpu_nms import gpu_nms
ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory
</code></pre>
<p>I have the following system settings:</p>
<pre><code>CuDNN V4
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17
$ uname -a
Linux pascal 3.13.0-62-generic #102-Ubuntu SMP Tue Aug 11 14:29:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
$ lspci | grep -i nvidia
03:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)
83:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
$ gcc --version
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
</code></pre>
<p>What is the issue and how can it be solved?</p>
| -1 | 2016-08-31T22:53:05Z | 39,290,077 | <p>Try this; it worked for me:</p>
<pre><code>export LD_LIBRARY_PATH=/usr/local/cuda/lib64/
</code></pre>
| 0 | 2016-09-02T10:22:12Z | [
"python",
"cuda",
"shared-libraries",
"caffe",
"cudnn"
] |
getting http debug info | 39,260,276 | <p>I'm going through Dive into Python3. When I get to the chapter on http web services section 14.4, I can't seem to duplicate the following output in the python3 shell. Here's what the sample code looks like:</p>
<pre><code>from http.client import HTTPConnection
HTTPConnection.debuglevel = 1
from urllib.request import urlopen
response = urlopen('http://diveintopython3.org/examples/feed.xml')
send: b'GET /examples/feed.xml HTTP/1.1
Host: diveintopython3.org
Accept-Encoding: identity
User-Agent: Python-urllib/3.1'
Connection: close
reply: 'HTTP/1.1 200 OK'
…further debugging information omitted…
</code></pre>
<p>When I enter this in ipython3, the final command gives no output. So why am I not getting the debug info in the example? After entering the above code, response.debuglevel == 0. I'm using python3.5.2.</p>
| 1 | 2016-08-31T22:55:56Z | 39,260,376 | <p>The final command should not give any output; what you probably want is:</p>
<pre><code>print(response.read())
</code></pre>
<p>Further, as I wrote in the comments above:
note that in Python 3 the old <code>urllib2</code> module was merged into <code>urllib.request</code> (which you are already importing from), see: docs.python.org/3/library/urllib.request.html</p>
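<p>If the goal is to see the verbose request/response trace from the book, an alternative in Python 3 to patching <code>HTTPConnection.debuglevel</code> is to install an opener whose handler has <code>debuglevel</code> set (a sketch; the trace only prints once a request is actually made):</p>

```python
import urllib.request

# Install a global opener that prints the raw HTTP exchange for every
# subsequent urllib.request.urlopen() call.
handler = urllib.request.HTTPHandler(debuglevel=1)
opener = urllib.request.build_opener(handler)
urllib.request.install_opener(opener)
# urllib.request.urlopen('http://example.com/')  # would print send/reply lines
```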
| 1 | 2016-08-31T23:09:11Z | [
"python",
"python-3.x",
"urllib",
"http.client"
] |
Inspecting Python Objects | 39,260,407 | <p>I am looking at code given to me by a co-worker who no longer works with us.</p>
<p>I have a list variable called rx.</p>
<pre><code>>> type(rx)
<type 'list'>
</code></pre>
<p>When I go to look inside rx[0] I get this:</p>
<pre><code>>> rx[0]
<Thing.thing.stuff.Rx object at 0x10e1e1c10>
</code></pre>
<p>Can anyone translate what this means? And, more importantly, how can I see what is inside this object within the rx list?</p>
<p>Any help is appreciated.</p>
| -2 | 2016-08-31T23:12:41Z | 39,260,637 | <p>Start with <a href="https://docs.python.org/3/library/functions.html#help" rel="nofollow">help</a>: <code>help(rx[0])</code></p>
<pre><code># example python object
class Employee:
"""Common base class for all employees."""
empCount = 0
help(Employee)
</code></pre>
<p>Output:</p>
<pre><code>Help on class Employee in module __main__:
class Employee(builtins.object)
| Common base class for all employees.
|
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| empCount = 0
</code></pre>
<p>If that's not enough info check out the <a href="https://docs.python.org/3/library/inspect.html" rel="nofollow">inspect module</a>.</p>
<p>Inspect has a lot of methods that might be useful, like <strong>getmembers</strong> and <strong>getdoc</strong>:</p>
<pre><code>import inspect
inspect.getdoc(Employee) # 'Common base class for all employees.'
for name, data in inspect.getmembers(Employee):
if name == '__builtins__':
continue
print('%s :' % name, repr(data))
</code></pre>
<p>Output:</p>
<pre><code>__class__ : <class 'type'>
__delattr__ : <slot wrapper '__delattr__' of 'object' objects>
__dict__ : mappingproxy({'__module__': '__main__', '__dict__': <attribute '__dict__' of 'Employee' objects>, '__weakref__': <attribute '__weakref__' of 'Employee' objects>, 'empCount': 0, '__doc__': 'Common base class for all employees.'})
__dir__ : <method '__dir__' of 'object' objects>
__doc__ : 'Common base class for all employees.'
__eq__ : <slot wrapper '__eq__' of 'object' objects>
__format__ : <method '__format__' of 'object' objects>
__ge__ : <slot wrapper '__ge__' of 'object' objects>
__getattribute__ : <slot wrapper '__getattribute__' of 'object' objects>
__gt__ : <slot wrapper '__gt__' of 'object' objects>
__hash__ : <slot wrapper '__hash__' of 'object' objects>
__init__ : <slot wrapper '__init__' of 'object' objects>
__le__ : <slot wrapper '__le__' of 'object' objects>
__lt__ : <slot wrapper '__lt__' of 'object' objects>
__module__ : '__main__'
__ne__ : <slot wrapper '__ne__' of 'object' objects>
__new__ : <built-in method __new__ of type object at 0x108a69d20>
__reduce__ : <method '__reduce__' of 'object' objects>
__reduce_ex__ : <method '__reduce_ex__' of 'object' objects>
__repr__ : <slot wrapper '__repr__' of 'object' objects>
__setattr__ : <slot wrapper '__setattr__' of 'object' objects>
__sizeof__ : <method '__sizeof__' of 'object' objects>
__str__ : <slot wrapper '__str__' of 'object' objects>
__subclasshook__ : <built-in method __subclasshook__ of type object at 0x7faa994086e8>
__weakref__ : <attribute '__weakref__' of 'Employee' objects>
empCount : 0
</code></pre>
| 0 | 2016-08-31T23:42:39Z | [
"python",
"object",
"indexing"
] |
Input to a three dimensional array in python | 39,260,427 | <p>I'm new at coding in python and I was wondering if there is some way I can transcript this code in C to python:</p>
<pre><code>for (K=1; K<3; K++)
for (I=0; I<3; I++)
for (J=0; J<7; J++){
printf("Year: %d\tProvince: %d\tMonth: %d: ", K, I, J);
scanf("%f", &A[I][J][K]);
}
</code></pre>
<p>This is all I have done so far, the only thing missing is how to input data in the three dimensional array</p>
<pre><code>for k in range(1,3):
for i in range(1,3):
for j in range(1,7):
print("Year: " + str(k) + " Province: " + str(i) + " Month: " + str(j) + ": ")
</code></pre>
| 0 | 2016-08-31T23:15:14Z | 39,261,012 | <p>You could use numpy. Example assuming you want float64s; change the types as needed.</p>
<pre><code>import numpy as np
arr = np.empty([2, 2, 6], np.float64)
for k in range(2):
for i in range(2):
for j in range(6):
print("Year: " + str(k) + " Province: " + str(i) + " Month: " + str(j) + ": ")
arr[k][i][j] = np.float64(raw_input())
</code></pre>
| 0 | 2016-09-01T00:31:29Z | [
"python",
"c",
"arrays",
"input"
] |
Input to a three dimensional array in python | 39,260,427 | <p>I'm new at coding in python and I was wondering if there is some way I can transcript this code in C to python:</p>
<pre><code>for (K=1; K<3; K++)
for (I=0; I<3; I++)
for (J=0; J<7; J++){
printf("Year: %d\tProvince: %d\tMonth: %d: ", K, I, J);
scanf("%f", &A[I][J][K]);
}
</code></pre>
<p>This is all I have done so far, the only thing missing is how to input data in the three dimensional array</p>
<pre><code>for k in range(1,3):
for i in range(1,3):
for j in range(1,7):
print("Year: " + str(k) + " Province: " + str(i) + " Month: " + str(j) + ": ")
</code></pre>
| 0 | 2016-08-31T23:15:14Z | 39,281,712 | <p>If you want to do that in just standard, native Python, this should work:</p>
<pre><code>import pprint # To display "3d array" in a recognizable format
A = [[[0 for _ in range(2)] for _ in range(2)] for _ in range(6)]
for k in range(2):
for i in range(2):
for j in range(6):
print("Year: {0} Province: {1} Month: {2}".format(k,i,j))
A[j][i][k] = raw_input()
pprint.pprint(A) # for an easy to read 2x2x6 format display
</code></pre>
<p>After inputting some numbers (eg. 1-24), gives output: </p>
<pre><code>[[['1', '13'], ['7', '19']],
[['2', '14'], ['8', '20']],
[['3', '15'], ['9', '21']],
[['4', '16'], ['10', '22']],
[['5', '17'], ['11', '23']],
[['6', '18'], ['12', '24']]]
</code></pre>
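<p>One pitfall worth noting with the nested-list approach: the comprehension above is required, because building rows with <code>*</code> replication aliases the same inner list. A small demonstration:</p>

```python
rows_aliased = [[0] * 3] * 2              # both rows are the SAME list object
rows_fresh = [[0] * 3 for _ in range(2)]  # comprehension makes independent rows

rows_aliased[0][0] = 99
rows_fresh[0][0] = 99
print(rows_aliased[1][0])  # 99: the write leaked into the "other" row
print(rows_fresh[1][0])    # 0: rows are independent
```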
| 0 | 2016-09-01T22:18:28Z | [
"python",
"c",
"arrays",
"input"
] |
Check if a string contains all words of another string in the same order python? | 39,260,436 | <p>I want to check if a string contains all of the substring's words and retains their order. At the moment I am using the following code; however, it is very basic and seems inefficient, and there is likely a much better way of doing it. I'd really appreciate it if you could tell me what a more efficient solution would be. Sorry for the noob question; I am new to programming and wasn't able to find a good solution.</p>
<pre><code>def check(main, sub_split):
n=0
while n < len(sub_split):
result = True
if sub_split[n] in main:
the_start = main.find(sub_split[n])
main = main[the_start:]
else:
result=False
n += 1
return result
a = "I believe that the biggest castle in the world is Prague Castle "
b= "the biggest castle".split(' ')
print check(a, b)
</code></pre>
<p>Update: interesting. First of all, thank you all for your answers, and for pointing out some of the spots that my code missed. I have been trying the different solutions posted here and in the links; I will add an update on how they compare and accept an answer then.</p>
<p><strong>Update:</strong>
Again, thank you all for the great solutions; every one of them was a major improvement over my code. I checked the suggestions against my requirements over 100,000 checks and got the following results:</p>
<ul>
<li>Padraic Cunningham - consistently under 0.4 secs (though it gives some false positives when searching for full words only)</li>
<li>galaxyan - 0.65 secs; 0.75 secs</li>
<li>friendly dog - 0.70 secs</li>
<li>John1024 - 1.3 secs (highly accurate, but seems to take extra time)</li>
</ul>
| 0 | 2016-08-31T23:16:04Z | 39,260,527 | <p>If you just want to check whether any word is contained in the other string, there is no need to check all of them; you just need to find one and return <code>True</code>.<br>
Membership checks against a set are O(1) on average.</p>
<pre><code>a = "I believe that the biggest castle in the world is Prague Castle "
b = "the biggest castle"
def check(a,b):
setA,lstB = set( a.split() ), b.split()
if len(setA) < len(lstB): return False
for item in lstB:
if item in setA:
return True
return False
print check(a,b)
</code></pre>
<p>if you donot care the speed</p>
<pre><code>def check(a,b):
setA,lstB = set( a.split() ), b.split()
return len(setA) >= len(lstB) and any( 1 for item in lstB if item in setA)
</code></pre>
<p>for speed and time complexity: <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">link</a></p>
| -1 | 2016-08-31T23:27:49Z | [
"python",
"string",
"python-2.7",
"substring"
] |
Check if a string contains all words of another string in the same order python? | 39,260,436 | <p>I want to check if a string contains all of the substring's words and retains their order. At the moment I am using the following code; however, it is very basic and seems inefficient, and there is likely a much better way of doing it. I'd really appreciate it if you could tell me what a more efficient solution would be. Sorry for the noob question; I am new to programming and wasn't able to find a good solution.</p>
<pre><code>def check(main, sub_split):
n=0
while n < len(sub_split):
result = True
if sub_split[n] in main:
the_start = main.find(sub_split[n])
main = main[the_start:]
else:
result=False
n += 1
return result
a = "I believe that the biggest castle in the world is Prague Castle "
b= "the biggest castle".split(' ')
print check(a, b)
</code></pre>
<p>Update: interesting; first of all, thank you all for your answers. Also thank you for pointing out some of the spots that my code missed. I have been trying different solutions posted here and in the links; I will add an update on how they compare and accept an answer then.</p>
<p><strong>update:</strong>
Again, thank you all for the great solutions; every one of them was a major improvement over my code. I checked the suggestions against my requirements over 100000 checks and got the following results;
suggestions by:
Padraic Cunningham - consistently under 0.4 secs (though gives some false positives when searching for only full words);
galaxyan - 0.65 secs; 0.75 secs
friendly dog - 0.70 secs
John1024 - 1.3 secs (Highly accurate, but seems to take extra time)</p>
| 0 | 2016-08-31T23:16:04Z | 39,260,537 | <p>Let's define your <code>a</code> string and reformat your <code>b</code> string into a regex:</p>
<pre><code>>>> a = "I believe that the biggest castle in the world is Prague Castle "
>>> b = r'\b' + r'\b.*\b'.join(re.escape(word) for word in "the biggest castle".split(' ')) + r'\b'
</code></pre>
<p>This tests to see if the words in b appear in the same order in a:</p>
<pre><code>>>> import re
>>> bool(re.search(b, a))
True
</code></pre>
<p>Caveat: If speed is important, a non-regex approach may be faster.</p>
<h3>How it works</h3>
<p>The key thing here is the reformulation of the string into a regex:</p>
<pre><code>>>> b = r'\b' + r'\b.*\b'.join(re.escape(word) for word in "the biggest castle".split(' ')) + r'\b'
>>> print(b)
\bthe\b.*\bbiggest\b.*\bcastle\b
</code></pre>
<p><code>\b</code> matches only at word boundaries. This means, for example, that the word <code>the</code> will never be confused with the word <code>there</code>. Further, this regex requires that all the words be present in the target string in the same order.</p>
<p>If <code>a</code> contains a match to the regex <code>b</code>, then <code>re.search(b, a)</code> returns a match object. Otherwise, it returns <code>None</code>. Thus, <code>bool(re.search(b, a))</code> returns <code>True</code> only if a match was found.</p>
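<p>To see the word-boundary behaviour in isolation, here is a small self-contained check (the sample strings are made up for the demo):</p>

```python
import re

# Same shape of pattern as built above for "the biggest castle"
pattern = r'\bthe\b.*\bbiggest\b.*\bcastle\b'

# "the" matches only as a whole word, so "there" does not count
print(bool(re.search(pattern, "I think the biggest castle is here")))
print(bool(re.search(pattern, "there is a place called biggestcastle")))
```

<p>The first call prints <code>True</code> and the second <code>False</code>, because <code>\bthe\b</code> refuses to match inside <code>there</code> and <code>\bcastle\b</code> refuses to match inside <code>biggestcastle</code>.</p>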
<h3>Example with punctuation</h3>
<p>Because word boundaries treat punctuation as not word characters, this approach is not confused by punctuation:</p>
<pre><code>>>> a = 'From here, I go there.'
>>> b = 'here there'
>>> b = r'\b' + r'\b.*\b'.join(re.escape(word) for word in b.split(' ')) + r'\b'
>>> bool(re.search(b, a))
True
</code></pre>
| 1 | 2016-08-31T23:28:58Z | [
"python",
"string",
"python-2.7",
"substring"
] |
Check if a string contains all words of another string in the same order python? | 39,260,436 | <p>I want to check if a string contains all of the substring's words and retains their order; at the moment I am using the following code; However it is very basic, seems inefficient and likely there is a much better way of doing it. I'd really appreciate if you could tell me what a more efficient solution would be. Sorry for a noob question, I am new to the programming and wasn't able to find a good solution</p>
<pre><code>def check(main, sub_split):
n=0
while n < len(sub_split):
result = True
if sub_split[n] in main:
the_start = main.find(sub_split[n])
main = main[the_start:]
else:
result=False
n += 1
return result
a = "I believe that the biggest castle in the world is Prague Castle "
b= "the biggest castle".split(' ')
print check(a, b)
</code></pre>
<p>Update: interesting; first of all, thank you all for your answers. Also thank you for pointing out some of the spots that my code missed. I have been trying different solutions posted here and in the links; I will add an update on how they compare and accept an answer then.</p>
<p><strong>update:</strong>
Again, thank you all for the great solutions; every one of them was a major improvement over my code. I checked the suggestions against my requirements over 100000 checks and got the following results;
suggestions by:
Padraic Cunningham - consistently under 0.4 secs (though gives some false positives when searching for only full words);
galaxyan - 0.65 secs; 0.75 secs
friendly dog - 0.70 secs
John1024 - 1.3 secs (Highly accurate, but seems to take extra time)</p>
| 0 | 2016-08-31T23:16:04Z | 39,260,663 | <p>You can simplify your search by passing the index of the previous match + 1 to <em>find</em>, you don't need to slice anything:</p>
<pre><code>def check(main, sub_split):
ind = -1
for word in sub_split:
ind = main.find(word, ind+1)
if ind == -1:
return False
return True
a = "I believe that the biggest castle in the world is Prague Castle "
b= "the biggest castle".split(' ')
print check(a, b)
</code></pre>
<p>If <em>ind</em> is ever <em>-1</em> then a word was not found after the previous match, so you return False; if you get through all the words, then they all appear in the string in order.</p>
<p>For exact words you could do something similar with lists:</p>
<pre><code>def check(main, sub_split):
lst, ind = main.split(), -1
for word in sub_split:
try:
ind = lst.index(word, ind + 1)
except ValueError:
return False
return True
</code></pre>
<p>And to handle punctuation, you could first strip it off:</p>
<pre><code>from string import punctuation
def check(main, sub_split):
ind = -1
lst = [w.strip(punctuation) for w in main.split()]
    for word in (w.strip(punctuation) for w in sub_split):
try:
ind = lst.index(word, ind + 1)
except ValueError:
return False
return True
</code></pre>
<p>Of course some words are valid with punctuation but that is more a job for nltk or you may actually want to find matches including any punctuation.</p>
| 2 | 2016-08-31T23:45:42Z | [
"python",
"string",
"python-2.7",
"substring"
] |
How to pass data to scrapinghub? | 39,260,455 | <p>I'm trying to run a scrapy spider on scrapinghub, and I want to pass in some data. I'm using their API to run the spider:</p>
<p><a href="http://doc.scrapinghub.com/api/jobs.html#jobs-run-json" rel="nofollow">http://doc.scrapinghub.com/api/jobs.html#jobs-run-json</a></p>
<p>They have an option for <code>job_settings</code>, which seems relevant, but I can't figure out how to access the <code>job_settings</code> data in my <code>Spider</code> class. What is the correct approach here?</p>
| 1 | 2016-08-31T23:18:42Z | 39,261,783 | <p>This <code>job_settings</code> shall be merged directly into the <a href="http://doc.scrapy.org/en/latest/topics/settings.html" rel="nofollow">Scrapy settings</a>, with a higher precedence (of <code>40</code>, IIRC).</p>
<p>The Scrapy settings could be accessed via a <code>.settings</code> attribute of a spider instance, e.g. you could use <code>self.settings</code> if <code>self</code> is a <code>scrapy.Spider</code> instance.</p>
| 3 | 2016-09-01T02:25:31Z | [
"python",
"scrapy",
"scrapinghub"
] |
AttributeError: 'ErrorbarContainer' object has no attribute 'set_ylim' | 39,260,561 | <p>I am plotting the results of some experiments with error bars. I'd like to be able to set the y limit in the case of results with extreme outliers that aren't interesting. This code:</p>
<pre><code>axes = plt.errorbar(feature_data[feature_data.num_unique[feature_of_interest] > 1].index, chi_square_y, yerr=chi_square_y_error, fmt = 'o')
axes.set_ylim([-.2, .2])
plt.plot((min(feature_data[feature_data.num_unique[feature_of_interest] > 1].index), max(feature_data[feature_data.num_unique[feature_of_interest] > 1].index)), (0, 0), 'r--', linewidth = 2)
</code></pre>
<p>produces this error:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-79-794286dd3c29> in <module>()
18 rcParams['figure.figsize'] = 10, 5
19 axes = plt.errorbar(feature_data[feature_data.num_unique[feature_of_interest] > 1].index, chi_square_y, yerr=chi_square_y_error, fmt = 'o')
---> 20 axes.set_ylim([-.2, .2])
21 plt.plot((min(feature_data[feature_data.num_unique[feature_of_interest] > 1].index), max(feature_data[feature_data.num_unique[feature_of_interest] > 1].index)), (0, 0), 'r--', linewidth = 2)
AttributeError: 'ErrorbarContainer' object has no attribute 'set_ylim'
</code></pre>
<p>How can I set the y limits?</p>
<p>Thanks!</p>
| -1 | 2016-08-31T23:32:10Z | 39,309,727 | <p>Simply use the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.ylim" rel="nofollow">matplotlib.pyplot.ylim()</a> function.</p>
<p>Your example is not self-contained, so I cannot check that the below code actually works, but at least the mentioned error will be fixed:</p>
<pre><code>plt.errorbar(feature_data[feature_data.num_unique[feature_of_interest] > 1].index, chi_square_y, yerr=chi_square_y_error, fmt = 'o')
plt.ylim(-.2, .2)
plt.plot((min(feature_data[feature_data.num_unique[feature_of_interest] > 1].index), max(feature_data[feature_data.num_unique[feature_of_interest] > 1].index)), (0, 0), 'r--', linewidth = 2)
</code></pre>
| 3 | 2016-09-03T17:47:24Z | [
"python",
"matplotlib"
] |
AttributeError: 'ErrorbarContainer' object has no attribute 'set_ylim' | 39,260,561 | <p>I am plotting the results of some experiments with error bars. I'd like to be able to set the y limit in the case of results with extreme outliers that aren't interesting. This code:</p>
<pre><code>axes = plt.errorbar(feature_data[feature_data.num_unique[feature_of_interest] > 1].index, chi_square_y, yerr=chi_square_y_error, fmt = 'o')
axes.set_ylim([-.2, .2])
plt.plot((min(feature_data[feature_data.num_unique[feature_of_interest] > 1].index), max(feature_data[feature_data.num_unique[feature_of_interest] > 1].index)), (0, 0), 'r--', linewidth = 2)
</code></pre>
<p>produces this error:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-79-794286dd3c29> in <module>()
18 rcParams['figure.figsize'] = 10, 5
19 axes = plt.errorbar(feature_data[feature_data.num_unique[feature_of_interest] > 1].index, chi_square_y, yerr=chi_square_y_error, fmt = 'o')
---> 20 axes.set_ylim([-.2, .2])
21 plt.plot((min(feature_data[feature_data.num_unique[feature_of_interest] > 1].index), max(feature_data[feature_data.num_unique[feature_of_interest] > 1].index)), (0, 0), 'r--', linewidth = 2)
AttributeError: 'ErrorbarContainer' object has no attribute 'set_ylim'
</code></pre>
<p>How can I set the y limits?</p>
<p>Thanks!</p>
| -1 | 2016-08-31T23:32:10Z | 39,346,791 | <p>Since this is a bounty question I'll try to get a bit into more detail here. </p>
<p><code>plt.errorbar</code> does not return an Axes object (which has the <code>set_ylim</code> method), but rather an <code>ErrorbarContainer</code> holding <code>(plotline, caplines, barlinecols)</code>. I suspect you may have expected the Axes object since this is what <code>pandas.DataFrame.plot</code> returns.</p>
<p>When working directly with matplotlib's pyplot you have two options:</p>
<p><strong>Option 1</strong> - use pyplot directly, without dealing with the axes:</p>
<pre><code>plt.errorbar( ... )
plt.ylim([-.2, .2])
</code></pre>
<p>Using <code>plt</code> will set properties on the last subplot selected (by default there is only one). You may prefer this method when dealing with a single figure.</p>
<p><strong>Option 2</strong> - get an axes object from <code>subplots</code>:</p>
<pre><code>fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.errorbar( ... )
ax.set_ylim([-.2, .2])
</code></pre>
<p>This is my preferred method, partly because it allows setting the figure size without setting it globally in <code>rcParams</code>. It has a few other advantages which I won't get into here.</p>
<p>Notice that when using <code>plt</code> the method is <code>ylim</code> and when using the Axes object it's <code>set_ylim</code>. This is true for many other properties such as titles, labels, etc.</p>
| 1 | 2016-09-06T10:29:54Z | [
"python",
"matplotlib"
] |
Generate a normal distribution of dates within a range | 39,260,616 | <p>I have a date range - say between <code>1925-01-01</code> and <code>1992-01-01</code>. I'd like to generate a list of <code>x</code> dates between that range, and have those <code>x</code> dates generated follow a 'normal' (bell curve - see image) distribution.</p>
<p>There are many many answers on stackoverflow about doing this with integers (using <code>numpy</code>, <code>scipy</code>, etc), but I can't find a solid example with dates</p>
<p><a href="http://i.stack.imgur.com/dC2UB.png" rel="nofollow"><img src="http://i.stack.imgur.com/dC2UB.png" alt="enter image description here"></a></p>
| 0 | 2016-08-31T23:39:35Z | 39,262,071 | <p>As per @sascha's comment, a conversion from the dates to a time value does the job:</p>
<pre><code>#!/usr/bin/env python3
import time
import numpy
_DATE_RANGE = ('1925-01-01', '1992-01-01')
_DATE_FORMAT = '%Y-%m-%d'
_EMPIRICAL_SCALE_RATIO = 0.15
_DISTRIBUTION_SIZE = 1000
def main():
time_range = tuple(time.mktime(time.strptime(d, _DATE_FORMAT))
for d in _DATE_RANGE)
distribution = numpy.random.normal(
loc=(time_range[0] + time_range[1]) * 0.5,
scale=(time_range[1] - time_range[0]) * _EMPIRICAL_SCALE_RATIO,
size=_DISTRIBUTION_SIZE
)
date_range = tuple(time.strftime(_DATE_FORMAT, time.localtime(t))
for t in numpy.sort(distribution))
print(date_range)
if __name__ == '__main__':
main()
</code></pre>
<p>Note that instead of the <code>_EMPIRICAL_SCALE_RATIO</code>, you could (should?) use <code>scipy.stats.truncnorm</code> to generate a <a href="https://en.wikipedia.org/wiki/Truncated_normal_distribution" rel="nofollow">truncated normal distribution</a>.</p>
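<p>As a stdlib-only alternative to <code>scipy.stats.truncnorm</code>, one could truncate by simple rejection sampling (a sketch; it gets inefficient if the bounds sit far out in the tails):</p>

```python
import random

def truncated_gauss(mu, sigma, lo, hi):
    """Draw from N(mu, sigma), redrawing until the value lands in [lo, hi]."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

# e.g. keep every generated value inside a fixed interval
samples = [truncated_gauss(0.0, 1.0, -2.0, 2.0) for _ in range(1000)]
print(all(-2.0 <= s <= 2.0 for s in samples))  # True
```

<p>In the date script above, this could stand in for the raw <code>numpy.random.normal</code> draw, with <code>lo</code> and <code>hi</code> set to the two <code>time_range</code> endpoints.</p>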
| 1 | 2016-09-01T03:05:37Z | [
"python",
"date",
"numpy",
"gaussian",
"normal-distribution"
] |
Is there a way to apply decorators to Python data structures? | 39,260,619 | <p>From what I got so far, decorators applies to callables(functions and classes).
Wondering if there is a way to apply a decorator to a dictionary?(It can be any data structure, in fact).
The problem I try to solve goes along the following lines:
I have a lot of dictionaries which, from time to time, might be valid or not.</p>
<p>I want to mark them with a <em>decorator-like</em> construct, as shown in the skeleton below:</p>
<pre><code>class Data(object):
def invalide(self):
return False
def valide(self, some_dict):
return some_dict
@Data.invalide
dict_1 = {...}
@Data.valide
dict_2 = {...}
@Data.valide
dict_3 = {...}
...
...
@Data.invalide
dict_n = {...}
</code></pre>
<p>...so, when I call <code>some_function(dict_x)</code>, it would know to do one thing or the other based on the <code>valide</code>, <code>invalide</code> marks.</p>
<h2>-------------Later on--------------------</h2>
<p>I ended up implementing the following:</p>
<pre><code>In [2]: class Data(object):
...:
...: @classmethod
...: def valide(self, func):
...: #print "Data is valide."
...: return func()
...:
...: @classmethod
...: def invalide(self, func):
...: #print "Data is invalide."
...: return False
...:
In [3]: @Data.valide
...: def dict1():
...: return {"a": 1, "b": 2, "c": 3}
...:
...: @Data.invalide
...: def dict2():
...: return {"d": 4, "e": 4, "f": 6}
...:
...: def run_funct(some_dict):
...: return some_dict
...:
In [4]: print(run_funct(dict1))
...: print type(run_funct(dict1))
...:
{'a': 1, 'c': 3, 'b': 2}
<type 'dict'>
In [5]: print(run_funct(dict2))
...: print type(run_funct(dict2))
...:
False
<type 'bool'>
</code></pre>
| 1 | 2016-08-31T23:40:04Z | 39,260,668 | <p>Decorators, to my knowledge, can only be applied to callables (functions, classes etc.)</p>
<p>If I were in your place, I'd probably separate the logic that decides between valid/invalid into its own function.</p>
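<p>For instance, a minimal sketch of that idea (the validity rule here is invented for the demo):</p>

```python
# Hypothetical rule standing in for the valide/invalide marks:
# a dict counts as valid when none of its values are None.
def is_valide(some_dict):
    return all(v is not None for v in some_dict.values())

def some_function(some_dict):
    # do one thing or the other based on validity
    return some_dict if is_valide(some_dict) else False

print(some_function({"a": 1, "b": 2}))  # {'a': 1, 'b': 2}
print(some_function({"a": None}))       # False
```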
| 0 | 2016-08-31T23:46:22Z | [
"python"
] |
Is there a way to apply decorators to Python data structures? | 39,260,619 | <p>From what I got so far, decorators applies to callables(functions and classes).
Wondering if there is a way to apply a decorator to a dictionary?(It can be any data structure, in fact).
The problem I try to solve goes along the following lines:
I have a lot of dictionaries which, from time to time, might be valid or not.</p>
<p>I want to mark them with a <em>decorator-like</em> construct, as shown in the skeleton below:</p>
<pre><code>class Data(object):
def invalide(self):
return False
def valide(self, some_dict):
return some_dict
@Data.invalide
dict_1 = {...}
@Data.valide
dict_2 = {...}
@Data.valide
dict_3 = {...}
...
...
@Data.invalide
dict_n = {...}
</code></pre>
<p>...so, when I call <code>some_function(dict_x)</code>, it would know to do one thing or the other based on the <code>valide</code>, <code>invalide</code> marks.</p>
<h2>-------------Later on--------------------</h2>
<p>I ended up implementing the following:</p>
<pre><code>In [2]: class Data(object):
...:
...: @classmethod
...: def valide(self, func):
...: #print "Data is valide."
...: return func()
...:
...: @classmethod
...: def invalide(self, func):
...: #print "Data is invalide."
...: return False
...:
In [3]: @Data.valide
...: def dict1():
...: return {"a": 1, "b": 2, "c": 3}
...:
...: @Data.invalide
...: def dict2():
...: return {"d": 4, "e": 4, "f": 6}
...:
...: def run_funct(some_dict):
...: return some_dict
...:
In [4]: print(run_funct(dict1))
...: print type(run_funct(dict1))
...:
{'a': 1, 'c': 3, 'b': 2}
<type 'dict'>
In [5]: print(run_funct(dict2))
...: print type(run_funct(dict2))
...:
False
<type 'bool'>
</code></pre>
| 1 | 2016-08-31T23:40:04Z | 39,261,880 | <p>you could cobble something together with properties and decorators:</p>
<pre><code>def valid(func):
return property(lambda self: func(self))
def invalid(func):
return property(lambda self: False)
class A:
@valid
def dict1(self):
return dict(a=4, b=5)
@invalid
def dict2(self):
return dict(c=6, d=7)
</code></pre>
<p>usage would then be:</p>
<pre><code>a = A()
a.dict1
a.dict2
</code></pre>
<p>I can't say I would recommend doing this, but it should meet your needs.</p>
| 0 | 2016-09-01T02:39:09Z | [
"python"
] |
Python: Is it possible to skip values in tuple assignment? | 39,260,653 | <p>I have a function which splits string in two parts at first encountered colon (skipping parts enclosed in brackets). This function returns tuple of three elements: index where the colon was encountered, part before colon and part after colon:</p>
<pre><code>def split_on_colon(str):
colon_ptr = find_separator(str, 0, ':')
if colon_ptr == -1:
return (colon_ptr, str, None)
return (colon_ptr, str[:colon_ptr], str[colon_ptr+1:])
</code></pre>
<p>I call it this way:</p>
<pre><code>def substitute_expression(expression):
# Split function and arguments
colon_ptr, func, args = split_on_colon(expression)
...
</code></pre>
<p>But sometimes the caller doesn't care about the <code>colon_ptr</code> part.</p>
<p>Is there any simple construction in Python that would allow throwing away part of a tuple in an assignment, so that it wouldn't waste memory or clutter the variable namespace?</p>
<p>Something like this:</p>
<pre><code>Ignore, func, args = split
</code></pre>
| 0 | 2016-08-31T23:44:54Z | 39,260,685 | <p>You can use <code>_</code>, which is conventionally used for values you want to ignore (it is an ordinary variable name, not special syntax). Your statement will look like this:</p>
<pre><code>_, func, args = split
</code></pre>
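<p>For example, with a stand-in for <code>split_on_colon</code> (the return values are made up):</p>

```python
def split_on_colon_demo():
    # stand-in for the real split_on_colon
    return (5, "func_part", "args_part")

_, func, args = split_on_colon_demo()
print(func, args)  # func_part args_part
```

<p>Note that <code>_</code> still receives the value; the underscore just signals to readers that you don't intend to use it.</p>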
| 1 | 2016-08-31T23:48:49Z | [
"python"
] |
Python: Is it possible to skip values in tuple assignment? | 39,260,653 | <p>I have a function which splits string in two parts at first encountered colon (skipping parts enclosed in brackets). This function returns tuple of three elements: index where the colon was encountered, part before colon and part after colon:</p>
<pre><code>def split_on_colon(str):
colon_ptr = find_separator(str, 0, ':')
if colon_ptr == -1:
return (colon_ptr, str, None)
return (colon_ptr, str[:colon_ptr], str[colon_ptr+1:])
</code></pre>
<p>I call it this way:</p>
<pre><code>def substitute_expression(expression):
# Split function and arguments
colon_ptr, func, args = split_on_colon(expression)
...
</code></pre>
<p>But sometimes the caller doesn't care about the <code>colon_ptr</code> part.</p>
<p>Is there any simple construction in Python that would allow throwing away part of a tuple in an assignment, so that it wouldn't waste memory or clutter the variable namespace?</p>
<p>Something like this:</p>
<pre><code>Ignore, func, args = split
</code></pre>
| 0 | 2016-08-31T23:44:54Z | 39,260,742 | <p>The best way for refusing of consuming extra memory is to handle this within your function. You can use a flag as an argument for your function then based on this flag you can decide to return 2 or 3 items.</p>
<pre><code>def split_on_colon(my_str, flag):
colon_ptr = find_separator(my_str, 0, ':')
if flag:
if colon_ptr == -1:
return (my_str, None)
return (my_str[:colon_ptr], my_str[colon_ptr+1:])
else:
if colon_ptr == -1:
return (colon_ptr, my_str, None)
return (colon_ptr, my_str[:colon_ptr], my_str[colon_ptr+1:])
</code></pre>
| 2 | 2016-08-31T23:54:36Z | [
"python"
] |
Why am I unable to join this thread in python? | 39,260,728 | <p>I am writing a multithreading class. The class has a <code>parallel_process()</code> function that is overridden with the parallel task. The data to be processed is put in the <code>queue</code>. The <code>worker()</code> function in each thread keeps calling <code>parallel_process()</code> until the <code>queue</code> is empty. Results are put in the <code>results</code> Queue object. The class definition is: </p>
<pre><code>import threading
try:
from Queue import Queue
except ImportError:
from queue import Queue
class Parallel:
def __init__(self, pkgs, common=None, nthreads=1):
self.nthreads = nthreads
self.threads = []
self.queue = Queue()
self.results = Queue()
self.common = common
for pkg in pkgs:
self.queue.put(pkg)
def parallel_process(self, pkg, common):
pass
def worker(self):
while not self.queue.empty():
pkg = self.queue.get()
self.results.put(self.parallel_process(pkg, self.common))
self.queue.task_done()
return
def start(self):
for i in range(self.nthreads):
t = threading.Thread(target=self.worker)
t.daemon = False
t.start()
self.threads.append(t)
def wait_for_threads(self):
print('Waiting on queue to empty...')
self.queue.join()
print('Queue processed. Joining threads...')
for t in self.threads:
t.join()
print('...Thread joined.')
def get_results(self):
results = []
print('Obtaining results...')
while not self.results.empty():
results.append(self.results.get())
return results
</code></pre>
<p>I use it to create a parallel task: </p>
<pre><code> class myParallel(Parallel): # return square of numbers in a list
def parallel_process(self, pkg, common):
return pkg**2
p = myParallel(range(50),nthreads=4)
p.start()
p.wait_for_threads()
r = p.get_results()
print('FINISHED')
</code></pre>
<p>However all threads do not join every time the code is run. Sometimes only 2 join, sometimes no thread joins. I do not think I am blocking the threads from finishing. What reason could there be for <code>join()</code> to not work here?</p>
| 3 | 2016-08-31T23:52:46Z | 39,260,844 | <p>This statement may lead to errors:</p>
<pre><code>while not self.queue.empty():
pkg = self.queue.get()
</code></pre>
<p>With multiple threads pulling items from the queue, there's no guarantee that <code>self.queue.get()</code> will return a valid item, even if you check if the queue is empty beforehand. Here is a possible scenario:</p>
<ol>
<li>Thread 1 checks the queue and the queue is not empty, control proceeds into the <code>while</code> loop.</li>
<li>Control passes to Thread 2, which also checks the queue, finds it is not empty and enters the <code>while</code> loop. Thread 2 <code>gets</code> an item from the queue. The queue is now empty.</li>
<li>Control passes back to Thread 1, which <code>gets</code> an item from the queue; but the queue is now empty, so the blocking <code>get()</code> hangs forever (with <code>get_nowait()</code>, an <code>Empty</code> exception would be raised instead).</li>
</ol>
<p>You should just use a <code>try/except</code> to get an item from the queue</p>
<pre><code>from Queue import Empty  # Python 3: from queue import Empty

try:
pkg = self.queue.get_nowait()
except Empty:
pass
</code></pre>
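<p>As a self-contained illustration of that pattern (outside the class, so it can be run directly):</p>

```python
try:
    from Queue import Queue, Empty   # Python 2
except ImportError:
    from queue import Queue, Empty   # Python 3

q = Queue()
q.put("pkg")

def drain(q):
    """Pull items until get_nowait() raises Empty -- no empty() pre-check."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except Empty:
            return items

print(drain(q))  # ['pkg']
print(drain(q))  # []
```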
| 2 | 2016-09-01T00:08:35Z | [
"python",
"multithreading"
] |
Why am I unable to join this thread in python? | 39,260,728 | <p>I am writing a multithreading class. The class has a <code>parallel_process()</code> function that is overridden with the parallel task. The data to be processed is put in the <code>queue</code>. The <code>worker()</code> function in each thread keeps calling <code>parallel_process()</code> until the <code>queue</code> is empty. Results are put in the <code>results</code> Queue object. The class definition is: </p>
<pre><code>import threading
try:
from Queue import Queue
except ImportError:
from queue import Queue
class Parallel:
def __init__(self, pkgs, common=None, nthreads=1):
self.nthreads = nthreads
self.threads = []
self.queue = Queue()
self.results = Queue()
self.common = common
for pkg in pkgs:
self.queue.put(pkg)
def parallel_process(self, pkg, common):
pass
def worker(self):
while not self.queue.empty():
pkg = self.queue.get()
self.results.put(self.parallel_process(pkg, self.common))
self.queue.task_done()
return
def start(self):
for i in range(self.nthreads):
t = threading.Thread(target=self.worker)
t.daemon = False
t.start()
self.threads.append(t)
def wait_for_threads(self):
print('Waiting on queue to empty...')
self.queue.join()
print('Queue processed. Joining threads...')
for t in self.threads:
t.join()
print('...Thread joined.')
def get_results(self):
results = []
print('Obtaining results...')
while not self.results.empty():
results.append(self.results.get())
return results
</code></pre>
<p>I use it to create a parallel task: </p>
<pre><code> class myParallel(Parallel): # return square of numbers in a list
def parallel_process(self, pkg, common):
return pkg**2
p = myParallel(range(50),nthreads=4)
p.start()
p.wait_for_threads()
r = p.get_results()
print('FINISHED')
</code></pre>
<p>However all threads do not join every time the code is run. Sometimes only 2 join, sometimes no thread joins. I do not think I am blocking the threads from finishing. What reason could there be for <code>join()</code> to not work here?</p>
| 3 | 2016-08-31T23:52:46Z | 39,260,919 | <p>@Brendan Abel identified the cause. I'd like to suggest a different solution: <code>queue.join()</code> is usually a Bad Idea too. Instead, create a unique value to use as a sentinel:</p>
<pre><code>class Parallel:
_sentinel = object()
</code></pre>
<p>At the end of <code>__init__()</code>, add one sentinel to the queue for each thread:</p>
<pre><code> for i in range(nthreads):
self.queue.put(self._sentinel)
</code></pre>
<p>Change the start of <code>worker()</code> like so:</p>
<pre><code> while True:
pkg = self.queue.get()
if pkg is self._sentinel:
break
</code></pre>
<p>By the construction of the queue, it won't go empty until each thread has seen its sentinel value, so there's no need to mess with the unpredictable <code>queue.empty()</code>.</p>
<p>Also remove the <code>queue.join()</code> and <code>queue.task_done()</code> cruft.</p>
<p>This will give you reliable code that's easy to modify for fancier scenarios. For example, if you want to add more work items <em>while</em> the threads are running, fine - just write another method to say "I'm done adding work items now", and move the loop adding sentinels into that.</p>
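<p>Putting the pieces together, a minimal self-contained version of the sentinel pattern (names are illustrative; the logic mirrors the steps above):</p>

```python
import threading

try:
    from Queue import Queue   # Python 2
except ImportError:
    from queue import Queue   # Python 3

_sentinel = object()
work, results = Queue(), Queue()

def worker():
    while True:
        pkg = work.get()
        if pkg is _sentinel:
            break
        results.put(pkg ** 2)

nthreads = 4
for pkg in range(50):
    work.put(pkg)
for _ in range(nthreads):      # one sentinel per thread
    work.put(_sentinel)

threads = [threading.Thread(target=worker) for _ in range(nthreads)]
for t in threads:
    t.start()
for t in threads:              # joins reliably: every thread sees a sentinel
    t.join()

out = []
while not results.empty():
    out.append(results.get())
print(sorted(out) == [i ** 2 for i in range(50)])  # True
```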
| 2 | 2016-09-01T00:18:37Z | [
"python",
"multithreading"
] |
Web2py comparing part of a request.vars element | 39,260,734 | <p>I have a form with a table with rows containing SELECTs with _names with IDs attached, like this:</p>
<pre><code>TD_list.append(TD(SELECT(lesson_reg_list, _name='lesson_reg_' + str(student[4]))))
</code></pre>
<p>When the form is submitted I want to extract both the student[4] value <em>and</em> the value held by request.vars.lesson_reg_student[4].</p>
<p>I've tried something like:</p>
<pre><code>for item in request.vars:
if item[0:9] == "lesson_reg":
enrolment_id = int(item[10:])
code = request.vars.item
</code></pre>
<p>I also tried treating request.vars like a dictionary by using:</p>
<pre><code>for key, value in request.vars:
if key[0:9] == "lesson_reg":
enrolment_id = int(key[10:])
code = value
</code></pre>
<p>but then I got 'too many values to unpack'. How do I retrieve the value of a request.vars item when the last part of its name could be any number, plus a substring of the item name itself?</p>
<p>Thanks in advance for helping me.</p>
| 0 | 2016-08-31T23:53:35Z | 39,295,714 | <p>In Python, when slicing, the ending index is <em>excluded</em>, so your two slices should be <code>0:10</code> and <code>0:11</code>. To simplify, you can also use <code>.startswith</code> for the first one:</p>
<pre><code>for item in request.vars:
if item.startswith('lesson_reg'):
enrolment_id = int(item[11:])
code = request.vars.item
</code></pre>
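<p>Using a plain dict as a stand-in for <code>request.vars</code> (keys and values invented for the demo), the extraction logic can be checked:</p>

```python
# Hypothetical submitted form variables
request_vars = {"lesson_reg_42": "CODE_A", "_formkey": "xyz"}

prefix = "lesson_reg_"
for item in request_vars:
    if item.startswith(prefix):
        enrolment_id = int(item[len(prefix):])
        code = request_vars[item]   # index with the variable, not attribute access

print(enrolment_id, code)  # 42 CODE_A
```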
| 0 | 2016-09-02T15:14:35Z | [
"python",
"variables",
"request",
"substring",
"web2py"
] |
Is Spark's KMeans unable to handle bigdata? | 39,260,820 | <p>KMeans has several parameters for its <a href="http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeans.train" rel="nofollow">training</a>, with initialization mode is defaulted to kmeans||. The problem is that it marches quickly (less than 10min) to the first 13 stages, but then <strong>hangs completely</strong>, without yielding an error!</p>
<p><strong>Minimal Example</strong> which reproduces the issue (it will succeed if I use 1000 points or random initialization):</p>
<pre><code>from pyspark.context import SparkContext
from pyspark.mllib.clustering import KMeans
from pyspark.mllib.random import RandomRDDs
if __name__ == "__main__":
sc = SparkContext(appName='kmeansMinimalExample')
# same with 10000 points
data = RandomRDDs.uniformVectorRDD(sc, 10000000, 64)
C = KMeans.train(data, 8192, maxIterations=10)
sc.stop()
</code></pre>
<p>The job does nothing (it doesn't succeed, fail or progress..), as shown below. There are no active/failed tasks in the Executors tab. Stdout and Stderr Logs don't have anything particularly interesting:</p>
<p><a href="http://i.stack.imgur.com/aggpL.png" rel="nofollow"><img src="http://i.stack.imgur.com/aggpL.png" alt="enter image description here"></a></p>
<p>If I use <code>k=81</code>, instead of 8192, it will succeed:</p>
<p><a href="http://i.stack.imgur.com/zBKqG.png" rel="nofollow"><img src="http://i.stack.imgur.com/zBKqG.png" alt="enter image description here"></a></p>
<p>Notice that the two calls of <code>takeSample()</code>, <a href="http://stackoverflow.com/questions/38986395/sparkkmeans-calls-takesample-twice">should not be an issue</a>, since there were called twice in the random initialization case.</p>
<p>So, what is happening? Is Spark's Kmeans <strong>unable to scale</strong>? Does anybody know? Can you reproduce?</p>
<hr>
<p>If it was a memory issue, <a href="https://gsamaras.wordpress.com/code/memoryoverhead-issue-in-spark/" rel="nofollow">I would get warnings and errors, as I had been before</a>. </p>
<p>Note: placeybordeaux's comments are based on the execution of the job in <em>client mode</em>, where the driver's configurations are invalidated, causing the exit code 143 and such (see edit history), not in cluster mode, where there is <em>no error reported at all</em>, the application <em>just hangs</em>.</p>
<hr>
<p>From zero323: <a href="http://stackoverflow.com/questions/35512139/why-is-spark-mllib-kmeans-algorithm-extremely-slow">Why is Spark Mllib KMeans algorithm extremely slow?</a> is related, but I think he witnesses some progress, while mine hangs, I did leave a comment...</p>
<p><a href="http://i.stack.imgur.com/osDrR.png" rel="nofollow"><img src="http://i.stack.imgur.com/osDrR.png" alt="enter image description here"></a></p>
| 5 | 2016-09-01T00:05:51Z | 39,304,813 | <p>I think the 'hanging' is because your executors keep dying. As I mentioned in a side conversation, this code runs fine for me, locally and on a cluster, in Pyspark and Scala. However, it takes a lot longer than it should. It is almost all time spent in k-means|| initialization.</p>
<p>I opened <a href="https://issues.apache.org/jira/browse/SPARK-17389" rel="nofollow">https://issues.apache.org/jira/browse/SPARK-17389</a> to track two main improvements, one of which you can use now. Edit: really, see also <a href="https://issues.apache.org/jira/browse/SPARK-11560" rel="nofollow">https://issues.apache.org/jira/browse/SPARK-11560</a></p>
<p>First, there are some code optimizations that would speed up the init by about 13%.</p>
<p>However, most of the issue is that it defaults to 5 steps of k-means|| init, when it seems that 2 is almost always just as good. You can set the number of initialization steps to 2 to see a speedup, especially in the stage that's hanging now.</p>
<p>In my (smaller) test on my laptop, init time went from 5:54 to 1:41 with both changes, mostly due to setting init steps.</p>
| 3 | 2016-09-03T08:20:45Z | [
"python",
"apache-spark",
"bigdata",
"k-means",
"apache-spark-mllib"
] |
Alembic not handling column_types.PasswordType : Flask+SQLAlchemy+Alembic | 39,260,850 | <p><strong>Background</strong></p>
<p>I'm trying to use a PostgreSQL back-end instead of Sqlite in this <a href="https://github.com/frol/flask-restplus-server-example" rel="nofollow">Flask + RESTplus server example</a>.</p>
<p>I faced an issue with the PasswordType db column type. In order to make it work, I had to change the following code in <a href="https://github.com/frol/flask-restplus-server-example/blob/master/app/modules/users/models.py#L46" rel="nofollow">app/modules/users/models.py</a></p>
<pre><code>password = db.Column(
column_types.PasswordType(
max_length=128,
schemes=('bcrypt', )
),
nullable=False
)
</code></pre>
<p>to </p>
<pre><code>password = db.Column(db.String(length=128), nullable=False)
</code></pre>
<p>which is really bad as passwords will be stored in clear text... and I need your help!</p>
<p>After changing <a href="https://github.com/frol/flask-restplus-server-example/blob/master/tasks/app/db_templates/flask/script.py.mako#L13-15" rel="nofollow">lines 13 to 15 in tasks/app/db_templates/flask/script.py.mako</a> to </p>
<pre><code>from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils
${imports if imports else ""}
</code></pre>
<p>I get the following error message, apparently related to passlib:</p>
<pre><code>2016-08-31 23:18:52,751 [INFO] [alembic.runtime.migration] Context impl PostgresqlImpl.
2016-08-31 23:18:52,752 [INFO] [alembic.runtime.migration] Will assume transactional DDL.
2016-08-31 23:18:52,759 [INFO] [alembic.runtime.migration] Running upgrade -> 99b329343a41, empty message
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 359, in dialect_impl
return dialect._type_memos[self]['impl']
File "/usr/local/lib/python3.5/weakref.py", line 365, in __getitem__
return self.data[ref(key)]
KeyError: <weakref at 0x7fde70d524a8; to 'PasswordType' at 0x7fde70a840b8>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/invoke", line 11, in <module>
sys.exit(program.run())
File "/usr/local/lib/python3.5/site-packages/invoke/program.py", line 270, in run
self.execute()
File "/usr/local/lib/python3.5/site-packages/invoke/program.py", line 381, in execute
executor.execute(*self.tasks)
File "/usr/local/lib/python3.5/site-packages/invoke/executor.py", line 113, in execute
result = call.task(*args, **call.kwargs)
File "/usr/local/lib/python3.5/site-packages/invoke/tasks.py", line 111, in __call__
result = self.body(*args, **kwargs)
File "/usr/src/app/tasks/app/run.py", line 35, in run
context.invoke_execute(context, 'app.db.upgrade')
File "/usr/src/app/tasks/__init__.py", line 72, in invoke_execute
results = Executor(namespace, config=context.config).execute((command_name, kwargs))
File "/usr/local/lib/python3.5/site-packages/invoke/executor.py", line 113, in execute
result = call.task(*args, **call.kwargs)
File "/usr/local/lib/python3.5/site-packages/invoke/tasks.py", line 111, in __call__
result = self.body(*args, **kwargs)
File "/usr/src/app/tasks/app/db.py", line 177, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/usr/local/lib/python3.5/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/usr/local/lib/python3.5/site-packages/alembic/script/base.py", line 407, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/local/lib/python3.5/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python3.5/site-packages/alembic/util/compat.py", line 68, in load_module_py
module_id, path).load_module(module_id)
File "<frozen importlib._bootstrap_external>", line 385, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 806, in load_module
File "<frozen importlib._bootstrap_external>", line 665, in load_module
File "<frozen importlib._bootstrap>", line 268, in _load_module_shim
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "migrations/env.py", line 93, in <module>
run_migrations_online()
File "migrations/env.py", line 86, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/local/lib/python3.5/site-packages/alembic/runtime/environment.py", line 797, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/local/lib/python3.5/site-packages/alembic/runtime/migration.py", line 312, in run_migrations
step.migration_fn(**kw)
File "/usr/src/app/migrations/versions/99b329343a41_.py", line 47, in upgrade
sa.UniqueConstraint('username')
File "<string>", line 8, in create_table
File "<string>", line 3, in create_table
File "/usr/local/lib/python3.5/site-packages/alembic/operations/ops.py", line 1098, in create_table
return operations.invoke(op)
File "/usr/local/lib/python3.5/site-packages/alembic/operations/base.py", line 318, in invoke
return fn(self, operation)
File "/usr/local/lib/python3.5/site-packages/alembic/operations/toimpl.py", line 101, in create_table
operations.impl.create_table(table)
File "/usr/local/lib/python3.5/site-packages/alembic/ddl/impl.py", line 194, in create_table
self._exec(schema.CreateTable(table))
File "/usr/local/lib/python3.5/site-packages/alembic/ddl/impl.py", line 118, in _exec
return conn.execute(construct, *multiparams, **params)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 962, in _execute_ddl
compiled = ddl.compile(dialect=dialect)
File "<string>", line 1, in <lambda>
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/elements.py", line 494, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/ddl.py", line 26, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 190, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 213, in process
return obj._compiler_dispatch(self, **kwargs)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch
return meth(self, **kw)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 2157, in visit_create_table
and not first_pk)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 213, in process
return obj._compiler_dispatch(self, **kwargs)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch
return meth(self, **kw)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 2188, in visit_create_column
first_pk=first_pk
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/dialects/postgresql/base.py", line 1580, in get_column_specification
impl_type = column.type.dialect_impl(self.dialect)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 361, in dialect_impl
return self._dialect_info(dialect)['impl']
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 403, in _dialect_info
impl = self._gen_dialect_impl(dialect)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 763, in _gen_dialect_impl
typedesc = self.load_dialect_impl(dialect).dialect_impl(dialect)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy_utils/types/password.py", line 194, in load_dialect_impl
impl = postgresql.BYTEA(self.length)
File "/usr/local/lib/python3.5/site-packages/sqlalchemy_utils/types/password.py", line 168, in length
self._max_length = self.calculate_max_length()
File "/usr/local/lib/python3.5/site-packages/sqlalchemy_utils/types/password.py", line 176, in calculate_max_length
for name in self.context.schemes():
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 2714, in __getattribute__
self._lazy_init()
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 2708, in _lazy_init
super(LazyCryptContext, self).__init__(**kwds)
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1707, in __init__
self.load(kwds)
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1896, in load
config = _CryptConfig(source)
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1019, in __init__
self._init_options(source)
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1097, in _init_options
key, value = norm_context_option(key, value)
File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1162, in _norm_context_option
raise KeyError("unknown CryptContext keyword: %r" % (key,))
KeyError: "unknown CryptContext keyword: 'length'"
</code></pre>
<p>Any idea? Thanks in advance for your help!</p>
| 2 | 2016-09-01T00:09:32Z | 39,276,707 | <p>I see that you are having a new migration <code>99b329343a41_.py</code> (it doesn't exist in my Flask-RESTplus-example-server). Please, review your new migration and <strong>remove</strong> everything related to PasswordType. It is a bug in SQLAlchemy-Utils, which doesn't play nice with Alembic: <a href="https://github.com/kvesteri/sqlalchemy-utils/issues/106" rel="nofollow">https://github.com/kvesteri/sqlalchemy-utils/issues/106</a></p>
| 1 | 2016-09-01T16:35:39Z | [
"python",
"postgresql",
"flask-sqlalchemy",
"alembic",
"passlib"
] |
Using Pandas dataframes to write to complex format layout | 39,260,884 | <p>I am trying to create a file which has a very specific format, which means it is very difficult for me to operate and save with pandas alone.</p>
<p>Consider this:</p>
<pre><code>FILE = open('writeFileTest' + ".trc", "w")
# Print header information
FILE.write('A\tB\tC\n')
FILE.write('\t\tD\tE\tF\tG\n')
</code></pre>
<p>This will produce some headers which look roughly as so:</p>
<pre><code>A B C
D E F G
</code></pre>
<p>Now lets say that I also have a pandas dataframe that looks like this e.g:</p>
<pre><code>>>>import pandas as pd
>>>import numpy as np
>>>pd.DataFrame(np.random.randn(5, 6))
0 1 2 3 4 5
0 0.215413 0.075976 0.516593 1.699469 1.382774 -0.604032
1 0.156343 0.918240 0.728018 -0.975881 -1.034713 -1.920139
2 1.486848 -0.762764 -0.232464 1.824197 -0.321638 0.187009
3 -1.125282 -0.419082 1.025092 1.381589 0.369712 0.043958
4 -0.118296 0.699864 0.796202 -0.560172 -1.046126 0.398537
</code></pre>
<p>How do I combine both to produce this:</p>
<pre><code>A B C
D E F G
0.215413 0.075976 0.516593 1.699469 1.382774 -0.604032
0.156343 0.918240 0.728018 -0.975881 -1.034713 -1.920139
1.486848 -0.762764 -0.232464 1.824197 -0.321638 0.187009
-1.125282 -0.419082 1.025092 1.381589 0.369712 0.043958
0.118296 0.699864 0.796202 -0.560172 -1.046126 0.398537
</code></pre>
<p>My actual file is obviously a lot more complicated than this, which is why I cannot use pandas, and my data is much, much larger. Hence, how can one combine both approaches? I should also mention that everything is separated by <code>\t</code> tabs. </p>
<p>EDIT:</p>
<p>There is also the possibility of doing all of this in pure pandas (I think, but I obviously do not know how, hence this question). </p>
<p>Then instead we would treat this whole thing:</p>
<pre><code>A B C
D E F G
0.215413 0.075976 0.516593 1.699469 1.382774 -0.604032
0.156343 0.918240 0.728018 -0.975881 -1.034713 -1.920139
1.486848 -0.762764 -0.232464 1.824197 -0.321638 0.187009
-1.125282 -0.419082 1.025092 1.381589 0.369712 0.043958
0.118296 0.699864 0.796202 -0.560172 -1.046126 0.398537
</code></pre>
<p>as a pandas dataframe <code>foo</code> upon which we would do <code>foo.to_csv(foo.csv,sep='\t')</code> and then change the file extension afterwards. But then one would instead have to deal with empty dataframe cells and ensure that pandas treats them as empty upon save.</p>
| 1 | 2016-09-01T00:14:05Z | 39,261,339 | <p>You could open the file, write the header manually and then dump the data frame. Try this:</p>
<pre><code>import pandas as pd
import numpy as np
data = np.random.randint(0,10, (4,6))
df = pd.DataFrame(data, columns=list('abcdef'))
header1 = 'A\tB\tC\t\t\t\n'
header2 = '\t\tD\tE\tF\tG\n'
with open('./out.tsv','w') as fp:
fp.write(header1)
fp.write(header2)
df.to_csv(fp, sep='\t', header=False, index=False)
</code></pre>
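<p>If pandas is not available (or the surrounding layout gets even more irregular), the same output can be produced with the standard library alone. This is only a sketch of the same idea using the <code>csv</code> module; the header strings and tab delimiter match the answer above, and the sample rows are made up:</p>

```python
import csv
import io

header1 = ['A', 'B', 'C', '', '', '']   # first header line: A\tB\tC\t\t\t
header2 = ['', '', 'D', 'E', 'F', 'G']  # second, offset header line
rows = [[0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
        [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]]

buf = io.StringIO()  # stands in for a real file handle
writer = csv.writer(buf, delimiter='\t', lineterminator='\n')
writer.writerow(header1)  # write the two header lines first...
writer.writerow(header2)
writer.writerows(rows)    # ...then dump the data block below them

print(buf.getvalue())
```

<p>With a real file, replace the <code>StringIO</code> buffer with <code>open('out.trc', 'w', newline='')</code>; the structure of the output is the same.</p>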
| 2 | 2016-09-01T01:21:39Z | [
"python",
"pandas"
] |
Python3 failing big division? | 39,260,903 | <p>Straight to the point...</p>
<p>By using a website like <a href="http://www.javascripter.net/math/calculators/100digitbigintcalculator.htm" rel="nofollow">this</a></p>
<p>I can perform the division</p>
<pre><code>521867179249063104431771532319014802773340606735040694338278785354627691365941164783141053870302908448515826940518729762675345436448874794321999883308020735915961604709858819996385388187935859640654596335746881134780531452843909715448234514762462143856204913946601253808724104934316876333775771684458187648281797991584927160155639951080324566002195236407608721860154059967443327355489731291105400056189691357913203235154988726468260641765071983123570916184780526935910110174741817085928010767101123823291935770762480197142805725028939936563200000000000000000000000000000000000000000000000000
</code></pre>
<p>over</p>
<pre><code>788657867364790503552363213932185062295135977687173263294742533244359449963403342920304284011984623904177212138919638830257642790242637105061926624952829931113462857270763317237396988943922445621451664240254033291864131227428294853277524242407573903240321257405579568660226031904170324062351700858796178922222789623703897374720000000000000000000000000000000000000000000000000
</code></pre>
<p>which becomes</p>
<pre><code>661715556065930365627163346132458831897321703017638669364788134708891795956726411057801285583913163781806953211915554723373931451847059830252175887712457307547649354135460619296383882957897161889636280577155889117185
</code></pre>
<p>When I perform the same operation in Python, I get</p>
<pre><code>661715556065930359197186982471212353583889520695638616110586544529019733927881270928911271177789723412658535123976620081599644618300461938312003378924403625257962695989873267296634026151114440358844358704274123784192
</code></pre>
<p>The first 16 numbers are correct, the rest is just a mess. How could this be? I'm using <code>int(a/b)</code> to remove the scientific notification <code>+e**</code>, perhaps this is causing it...</p>
| -3 | 2016-09-01T00:16:35Z | 39,260,942 | <pre><code> Python 3.4.4 |Continuum Analytics, Inc.| (default, Feb 16 2016, 09:54:04) [MSC v.1600 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.
>>> a = 521867179249063104431771532319014802773340606735040694338278785354627691365941164783141053870302908448515826940518729762675345436448874794321999883308020735915961604709858819996385388187935859640654596335746881134780531452843909715448234514762462143856204913946601253808724104934316876333775771684458187648281797991584927160155639951080324566002195236407608721860154059967443327355489731291105400056189691357913203235154988726468260641765071983123570916184780526935910110174741817085928010767101123823291935770762480197142805725028939936563200000000000000000000000000000000000000000000000000
>>> b = 788657867364790503552363213932185062295135977687173263294742533244359449963403342920304284011984623904177212138919638830257642790242637105061926624952829931113462857270763317237396988943922445621451664240254033291864131227428294853277524242407573903240321257405579568660226031904170324062351700858796178922222789623703897374720000000000000000000000000000000000000000000000000
>>> a // b
661715556065930365627163346132458831897321703017638669364788134708891795956726411057801285583913163781806953211915554723373931451847059830252175887712457307547649354135460619296383882957897161889636280577155889117185
</code></pre>
<p>In <strong>python3</strong>, <strong>integer-division</strong> is done by <code>//</code> operator.</p>
<p>Here's the relevant <a href="https://www.python.org/dev/peps/pep-0238/" rel="nofollow">PEP 238</a></p>
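<p>To make the distinction concrete, here is a small runnable sketch (not part of the original answer) contrasting the two operators and showing why <code>int(a/b)</code> loses digits for large integers:</p>

```python
# In Python 3, / always performs true (float) division, while // floors.
a = 7
b = 2

true_div = a / b    # float result
floor_div = a // b  # integer result for int operands

print(true_div)   # 3.5
print(floor_div)  # 3

# For very large integers, // keeps exact arbitrary-precision arithmetic,
# whereas int(a / b) first rounds a / b to a 53-bit float and loses precision.
big = 10 ** 40
assert big // 3 == int("3" * 40)     # exact: forty 3s
assert int(big / 3) != big // 3      # float rounding already corrupted the digits
```

<p>This is exactly why the question's <code>int(a/b)</code> agrees with the website only in the leading ~16 digits: the float intermediate carries about 16 significant decimal digits.</p>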
| 4 | 2016-09-01T00:21:31Z | [
"python"
] |
Serializers and ViewSets inheritance of PostgreSQL multitable models? | 39,260,904 | <p>My environment is based on Django 1.10 & PostgreSQL 9.5 and;</p>
<p>I have a childtable which inherits from a parenttable. For each, I have a Django model class, ChildModel and ParentModel respectively:</p>
<pre><code>class ParentModel(models.Model):
id = models.AutoField(primary_key=True)
owner = models.ForeignKey('auth.User', related_name='parents')
class Meta:
managed = False
db_table = 'parenttable'
class ChildModel(ParentModel):
name = models.CharField(unique=True, max_length=255)
abbreviation = models.CharField(unique=True, max_length=2)
class Meta:
managed = False
db_table = 'childtable'
</code></pre>
<p>Then, I made a serializer for each of these, and one for django.contrib.auth.models.User:</p>
<pre><code>class UserSerializer(serializers.HyperlinkedModelSerializer):
parents = serializers.PrimaryKeyRelatedField(many=True, queryset=ParentModel.objects.all())
class Meta:
model = User
fields = (['id', 'username', 'parents'])
class ParentModelSerializer(serializers.HyperlinkedModelSerializer):
owner = serializers.IntegerField(source='parent.id', read_only=True, allow_null=False)
class Meta:
model = ParentModel
fields = (['id', 'owner'])
class ChildModelSerializer(ParentModelSerializer):
class Meta:
model = ChildModel
fields = (['name', 'abbreviation'])
</code></pre>
<p>Hence, I made three ViewSets, one for each serializer:</p>
<pre><code>class UserViewSet(viewsets.ReadOnlyModelViewSet):
queryset = User.objects.all()
serializer_class = UserSerializer
class ParentModelViewSet(viewsets.ModelViewSet):
queryset = ParentModel.objects.all()
serializer_class = ParentModelSerializer
permission_classes = (permissions.IsAuthenticated, IsOwnerOrReadOnly)
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
class ChildModelViewSet(ParentModelViewSet):
queryset = ChildModel.objects.all()
serializer_class = ChildModelSerializer
permission_classes = (permissions.IsAuthenticated, IsOwnerOrReadOnly)
</code></pre>
<p>Is it possible to have inheritance among serializers and viewsets? I tried the example above, but every time I try to POST a new ChildModel, an exception is thrown showing that all ParentModel fields have NULL values (except for those from the ChildModel fields).</p>
| 0 | 2016-09-01T00:16:46Z | 39,263,694 | <p>It isn't inheritance, but you can do this like:</p>
<pre><code>class ChildModelSerializer(ParentModelSerializer):
class Meta:
model = ChildModel
fields = ParentModelSerializer.Meta.fields + ['name', 'abbreviation']
</code></pre>
<p>That's how I do it.</p>
| 0 | 2016-09-01T05:59:17Z | [
"python",
"django",
"postgresql",
"django-models",
"django-rest-framework"
] |
Adding a check list and remove from list in Python | 39,260,914 | <p>So the instruction for the assignment is this</p>
<blockquote>
<p>Print a simple set of instructions which will offer users a choice of
keys to open a door.</p>
</blockquote>
<p>So the goals that I think will accomplish this are:</p>
<blockquote>
<p>make an inventory [rainbow keys]</p>
<p>print the inventory ( you have these keys)</p>
<p>ask to guess key that will open the door</p>
<p>it will check the inventory and if it's a <strong>red</strong> key it will print "open"</p>
<p>else will print keep guessing and remove the key from the inventory</p>
</blockquote>
<p>This is what I have so far. I haven't been able to figure out how to add and check the inventory.</p>
<pre><code>keepGuess = True
correctKey = "red"
while keepGuess:
guess = raw_input("Guess the key to open the door: ")
if guess == correctKey:
print ("You may enter")
keepGuess = False
else:
print ("Keep guessing")
</code></pre>
<p>Thanks for helping me.
Here's the end result: </p>
<pre><code>keepGuess = True
correctKey = "blue"
keys = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
print keys
print
while keepGuess:
guess = raw_input("Which key will open the door? ")
if guess == correctKey:
print ("You may enter")
keepGuess = False
else:
if guess in keys != "blue":
keys.remove(guess)
if guess not in keys:
print
print ("The key didn't open the door.")
print ("Keep guessing")
print
print keys
print
</code></pre>
<p>Which prints out this</p>
<pre><code>['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
Which key will open the door? red
The key didn't open the door.
Keep guessing
['orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
Which key will open the door? red
Which key will open the door? blue
You may enter
</code></pre>
| -1 | 2016-09-01T00:17:47Z | 39,260,970 | <p>Without having more information about your error checking, I can't be sure I'm solving the problem. However, I would think that you want to maintain a simple list of keys, such as:</p>
<pre><code>door_key_inv = ["red", "yellow", "paisley-print chartreuse"]
</code></pre>
<p>You start the list as <strong>[]</strong> (i.e. empty) and add keys as they're found.</p>
<p>Now, when the user enters a guess, you have to make two checks:</p>
<ol>
<li><p>Is this key colour in inventory? If so, go to step 2; if not, print a warning.</p>
<p><strong>if guess in door_key_inv:</strong></p></li>
<li><p>Is this the correct key? If so, open the door and break the loop</p></li>
<li>Loop back to get the next guess.</li>
</ol>
<p>Is that what you needed?</p>
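<p>The numbered steps above can be sketched as follows. This is a minimal, hedged example: the guesses are fed in from a list instead of <code>raw_input</code> so it can run non-interactively, and the key names are the illustrative ones from the inventory above:</p>

```python
door_key_inv = ["red", "yellow", "paisley-print chartreuse"]
correct_key = "red"

def try_keys(guesses):
    """Run steps 1-3 against a preset sequence of guesses."""
    messages = []
    for guess in guesses:              # step 3: loop back for the next guess
        if guess not in door_key_inv:  # step 1: is this key colour in inventory?
            messages.append("You don't have a %s key." % guess)
        elif guess == correct_key:     # step 2: is this the correct key?
            messages.append("The door opens!")
            break                      # open the door and break the loop
        else:
            messages.append("The %s key doesn't fit." % guess)
    return messages

print(try_keys(["blue", "yellow", "red"]))
```

<p>In the real assignment you would replace the preset list with repeated <code>raw_input</code> calls inside the loop.</p>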
| 0 | 2016-09-01T00:25:52Z | [
"python",
"list",
"python-2.7"
] |
Adding a check list and remove from list in Python | 39,260,914 | <p>So the instruction for the assignment is this</p>
<blockquote>
<p>Print a simple set of instructions which will offer users a choice of
keys to open a door.</p>
</blockquote>
<p>So the goals that I think will accomplish this are:</p>
<blockquote>
<p>make an inventory [rainbow keys]</p>
<p>print the inventory ( you have these keys)</p>
<p>ask to guess key that will open the door</p>
<p>it will check the inventory and if it's a <strong>red</strong> key it will print "open"</p>
<p>else will print keep guessing and remove the key from the inventory</p>
</blockquote>
<p>This is what I have so far. I haven't been able to figure out how to add and check the inventory.</p>
<pre><code>keepGuess = True
correctKey = "red"
while keepGuess:
guess = raw_input("Guess the key to open the door: ")
if guess == correctKey:
print ("You may enter")
keepGuess = False
else:
print ("Keep guessing")
</code></pre>
<p>Thanks for helping me.
Here's the end result: </p>
<pre><code>keepGuess = True
correctKey = "blue"
keys = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
print keys
print
while keepGuess:
guess = raw_input("Which key will open the door? ")
if guess == correctKey:
print ("You may enter")
keepGuess = False
else:
if guess in keys != "blue":
keys.remove(guess)
if guess not in keys:
print
print ("The key didn't open the door.")
print ("Keep guessing")
print
print keys
print
</code></pre>
<p>Which prints out this</p>
<pre><code>['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
Which key will open the door? red
The key didn't open the door.
Keep guessing
['orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
Which key will open the door? red
Which key will open the door? blue
You may enter
</code></pre>
| -1 | 2016-09-01T00:17:47Z | 39,261,013 | <p>You're super close. You can just initialize an empty list to store the inventory. When someone guesses a key you just append it to the list. Of course we will check to see if the guessed key is already in the inventory, and if it is, we won't add it.</p>
<pre><code>keepGuess = True
correctKey = "red"
inventory = []
while keepGuess:
guess = raw_input("Guess the key to open the door: ")
if guess == correctKey:
print ("You may enter")
inventory.append(guess)
keepGuess = False
else:
if guess not in inventory:
inventory.append(guess)
else:
print ("You have already added this key to your inventory.")
print ("Keep guessing")
</code></pre>
<p>Here's a test:</p>
<pre><code>Guess the key to open the door: blue
Keep guessing
Guess the key to open the door: blue
You have already added this key to your inventory.
Keep guessing
Guess the key to open the door: red
You may enter
</code></pre>
| 0 | 2016-09-01T00:31:37Z | [
"python",
"list",
"python-2.7"
] |
download stock price history from yahoo finance with python 3.5 | 39,260,918 | <p>I am currently seeking a way to load multiple years of stock price history from yahoo finance. I will have a 100+ ticker symbols and I will be downloading data from 1985 to current date. I want the Open, High, Low, Close, Adj Close, Volume loaded into individual DataFrames (pandas) with name of the data frame named as the current ticker. </p>
<p>My problem is that my <code>ticker</code> variable will not work, and even if it did work, how can I store this data?</p>
import pandas as pd
import pandas_datareader.data as web
import datetime
#This represents the start and end date for the data
start = datetime.datetime(1985, 1, 1)
end = datetime.datetime(2016, 1, 27)
ticker = ['aapl','tvix','ugaz']
i= 0
while i < len(ticker):
f = web.DataReader(ticker, 'yahoo', start, end)
print(f)
i+=1
</code></pre>
<p>I want to be able to permanently store all this data so that, for example, if I load 30 years of price history today for 100 ticker symbols, then tomorrow I will only have to append one day of data, not all 30 years. Also, a dataframe seems the most efficient way to organize the data, but I am not quite sure; I want to do machine learning and data analysis. </p>
| 0 | 2016-09-01T00:18:32Z | 39,261,236 | <p>You can use the <code>pandas_datareader</code> module to download the data that you want if it is available on yahoo. You can <code>pip install</code> it or install it any other way you prefer.</p>
<p>The following is a sample script that gets the data for a small list of symbols:</p>
<pre><code>from pandas_datareader import data as dreader
symbols = ['GOOG', 'AAPL', 'MMM', 'ACN', 'A', 'ADP']
pnls = {i:dreader.DataReader(i,'yahoo','1985-01-01','2016-09-01') for i in symbols}
</code></pre>
<p>This saves the data in a python dictionary which allows you access the data by using the <code>.get</code> method.</p>
<p>For example, if you wanted to get the data for <code>GOOG</code>, you can do:</p>
<pre><code>print(pnls.get('GOOG').head())
</code></pre>
<p>This gets the following:</p>
<pre><code> Open High Low Close Volume \
Date
2004-08-19 100.000168 104.060182 95.960165 100.340176 44871300
2004-08-20 101.010175 109.080187 100.500174 108.310183 22942800
2004-08-23 110.750191 113.480193 109.050183 109.400185 18342800
2004-08-24 111.240189 111.600192 103.570177 104.870176 15319700
2004-08-25 104.960181 108.000187 103.880180 106.000184 9232100
Adj Close
Date
2004-08-19 50.119968
2004-08-20 54.100990
2004-08-23 54.645447
2004-08-24 52.382705
2004-08-25 52.947145
</code></pre>
<p>Please keep in mind that if a given ticker did not have data for a time interval, the returned data will omit those years or days (obviously).</p>
<h3>Edit:</h3>
<p>The main problem with your script (among many other things) is that you are making the same call multiple times (<code>len(ticker)</code> times) with the same output being returned over and over again. This is because the <code>DataReader()</code> function, when provided with a <code>list</code>, will go out and get a data.frame for each element in that <code>list</code>. So, when you write:</p>
<pre><code>i = 0
while i < len(ticker):
f = web.DataReader(ticker, 'yahoo', start, end)
print(f)
i+=1
</code></pre>
<p>You are essentially saying: with <code>i = 0</code>, I want you to keep making the <code>web.DataReader(ticker, 'yahoo', start, end)</code> call and getting the same data and then printing it until (through incrementing) the value of <code>i</code> reaches the length of my list of tickers. You could have just gotten the same thing with <code>f = web.DataReader(ticker, 'yahoo', start, end)</code>. But even with that, you would not know which data.frame is for which ticker.</p>
<p>Also, one other thing that I find quite shocking is that you want to name every data.frame that gets returned, when you're dealing with 100+ tickers. Why on <strong><em>God's Green Earth</em></strong> would you want to have a 100+ names in your namespace? You can very easily centralize all those names into one <code>dictionary</code> (like I recommended above) and access any data.frame from it whenever you please by using <code>.get()</code> and providing the ticker.</p>
<p>Recap: 1) No need for a <code>while-loop</code>. 2) You have no way of knowing which data.frame is for which ticker if you just provide the whole list of tickers to the <code>DataReader()</code> function. 3) You don't want to have to name 100+ data.frames.</p>
<p>Suggestion: Simplify this whole thing by just using a dictionary (the tickers are the <code>keys</code> and the returned data will be the <code>values</code>) and a list comprehension to loop through the tickers and get their data. This can make your script much easier to read and way shorter, I suspect.</p>
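<p>The pattern being suggested here (one dictionary keyed by ticker instead of 100+ separate variable names) looks like this in isolation. The <code>fetch</code> function below is a made-up stand-in for <code>web.DataReader</code>, returning plain tuples instead of a data frame, so the sketch needs no network access:</p>

```python
def fetch(ticker):
    # Stand-in for web.DataReader(ticker, 'yahoo', start, end):
    # returns a list of (date, close) rows instead of a DataFrame.
    return [('2016-08-31', 100.0), ('2016-09-01', 101.5)]

symbols = ['aapl', 'tvix', 'ugaz']

# One comprehension, one namespace entry, mapping ticker -> data:
pnls = {ticker: fetch(ticker) for ticker in symbols}

# Access any ticker's data by name, no 100+ variables required:
print(pnls.get('aapl'))
print(sorted(pnls))  # ['aapl', 'tvix', 'ugaz']
```

<p>Swapping the stub for the real <code>web.DataReader</code> call gives exactly the <code>pnls</code> dictionary used in the answer above.</p>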
<h3>Edit 2:</h3>
<p>This is an attempt to provide a solution to the desire to run a job once and then increment the data every day with the latest prices available. This is not the only way of doing it, but I think it's a good way. First, you will need to make a script that gets the data for all your tickers from 1985 to yesterday. This script will need to be run after market hours (or late at night) to capture the latest prices. This can be very similar to the script I have above. The only thing you would need to add is a couple of lines of code to save the data on your machine in your current working directory. The following should do:</p>
<pre><code>from pandas_datareader import data as dreader
symbols = ['GOOG', 'AAPL', 'MMM', 'ACN', 'A', 'ADP']
pnls = {i:dreader.DataReader(i,'yahoo','1985-01-01','2016-09-01') for i in symbols}
for df_name in pnls:
pnls.get(df_name).to_csv("{}_data.csv".format(df_name), index=True, header=True)
</code></pre>
<p>Then, you can write another script, which just grabs data for the same tickers for just today. It, too, would need to run at night (before midnight) so that it can capture data from today.</p>
<pre><code>from datetime import datetime
from pandas_datareader import data as dreader
from pandas_datareader._utils import RemoteDataError
symbols = ['GOOG', 'AAPL', 'MMM', 'ACN', 'A', 'ADP']
try:
pnls = {i:dreader.DataReader(i,'yahoo',datetime.today(),datetime.today()) for i in symbols}
except RemoteDataError:
pnls = None
if pnls is not None:
for df_name in pnls:
with open("{}_data.csv".format(df_name),"a") as outfile:
pnls.get(df_name).to_csv(outfile,header=False)
else:
print("No data available yet. Please run later.")
</code></pre>
<p>This second script should append the latest prices to every data file saved previously using the first script.</p>
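<p>To illustrate the append mechanics with a toy example (hypothetical miniature data, not real quotes): writing the header once, then appending with <code>header=False</code>, keeps each file readable as one continuous history:</p>

```python
import pandas as pd

# First run: write the history with the header
hist = pd.DataFrame({"Close": [1.0, 2.0]},
                    index=pd.to_datetime(["2016-08-30", "2016-08-31"]))
hist.index.name = "Date"
hist.to_csv("GOOG_data.csv", index=True, header=True)

# Daily run: append one new row without repeating the header
today = pd.DataFrame({"Close": [3.0]}, index=pd.to_datetime(["2016-09-01"]))
today.index.name = "Date"
with open("GOOG_data.csv", "a") as outfile:
    today.to_csv(outfile, header=False)

# Reading the file back yields the full, continuous history
combined = pd.read_csv("GOOG_data.csv", index_col="Date", parse_dates=True)
print(len(combined))  # 3 rows: two days of history plus the appended day
```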
<p><strong><em>Please note that I am working with the assumption that Yahoo (or any other data source) will have the current day's prices available as soon as markets close wherever you are.</em></strong></p>
<p>I hope this helps.</p>
| 3 | 2016-09-01T01:05:38Z | [
"python"
] |
Opening file in python using open() using "/" or r'filepath' gives error | 39,261,134 | <p>All I'm trying to is open a file to read</p>
<pre><code>file1 = open (r"C:\Users\Javier\Downloads\BodyFat.txt", "r" )
</code></pre>
<p>I get the error message:</p>
<pre><code>OSError: [Errno 22] Invalid argument: '
\u202aC:\\Users\\Javier\\Downloads\\BodyFat.txt'
</code></pre>
<p>If I proceed to manually change:</p>
<pre><code>file1 = open ("C:/Users/Javier/Downloads/BodyFat.txt", "r" )
</code></pre>
<p>I get the error:</p>
<pre><code>OSError: [Errno 22] Invalid argument: '
\u202aC:/Users/Javier/Downloads/BodyFat.txt'
</code></pre>
<p>Edit 1: " C... " to "C..."</p>
<p>Edit 2: Fixed problem with @spectras input about the invalid character, thank you </p>
<p>Edit 2.1: for some reason (below) works</p>
<pre><code> "/Users/Javier/Downloads/BodyFat.txt", "r"
</code></pre>
| -1 | 2016-09-01T00:51:12Z | 39,261,253 | <p>Replace </p>
<pre><code>file1 = open ("C:/Users/Javier/Downloads/BodyFat.txt", "r" )
</code></pre>
<p>with</p>
<pre><code>file1 = open ("/Users/Javier/Downloads/BodyFat.txt", "r" )
</code></pre>
<p>Python is looking for a directory, and <code>C:</code> is not being recognized as one.</p>
| -1 | 2016-09-01T01:07:40Z | [
"python",
"file"
] |
How can tkinter open with file directory of specific folder (also containing .py script) on any system? | 39,261,178 | <p>In other words if I pass off the folder to someone containing a .txt and .py, and the script is run on their machine from this folder, how can I ensure the file dialog will open with this folder to select the .txt without knowing the absolute path? Referring to <code>initialdir=</code>??</p>
<pre><code>from tkinter import filedialog
from tkinter import *
root = Tk()
root.withdraw()
root.filename = filedialog.askopenfilename(initialdir='/python', title="Select file",
filetypes=[("Text Files", "*.txt")])
print(root.filename)
</code></pre>
 | 0 | 2016-09-01T00:57:51Z | 39,270,250 | <p>You should use the <code>os</code> module and <code>os.getcwd()</code> to find the current working directory, which will be the folder containing the .py file as long as the script is launched from that folder.</p>
<pre><code>import os
from tkinter import filedialog
import tkinter as tk
root = tk.Tk()
root.withdraw()
root.filename = filedialog.askopenfilename(initialdir=os.getcwd(), title="Select file",
filetypes=[("Text Files", "*.txt")])
print(root.filename)
</code></pre>
<p>I would also suggest doing <code>import tkinter as tk</code> instead as importing everything may lead to naming conflicts if you are not careful, plus it's much easier to determine that what you are referring to came from the <code>tkinter</code> module when you prefix it with <code>tk</code></p>
| 0 | 2016-09-01T11:24:32Z | [
"python",
"tkinter"
] |
Ensure list of dicts has a dict with key for each key in list | 39,261,233 | <p>Context:
I'm using an Ajax call to return some complex JSON from a python module. I have to use a list of keys and confirm that a list of single-item dicts contains a dict with each key.</p>
<p>Example:</p>
<pre><code>mylist=['this', 'that', 'these', 'those']
mydictlist=[{'this':1},{'that':2},{'these':3}]
</code></pre>
<p>How do I know that mydictlist is missing the "those" key? Once I know that, I can append {'those':4} to mylist. Simply checking for "those" won't work since the list is dynamic. The data structure cannot change.</p>
<p>Thanks.</p>
| 0 | 2016-09-01T01:05:29Z | 39,261,307 | <p>The most straightforward way is to iterate over both the containers and check:</p>
<pre><code>for key in mylist:
if not any(key in dic for dic in mydictlist):
print key, "missing"
</code></pre>
<p>However, if you have a lot of keys and/or dictionaries, this is not going to be efficient: it iterates over <code>mydictlist</code> once for each element in <code>mylist</code>, which is O(n*m). Instead, consider a set operation:</p>
<pre><code>print set(mylist).difference(*mydictlist)
</code></pre>
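<p>For example (the Python 3 spelling of the same idea, with the sample data from the question):</p>

```python
mylist = ['this', 'that', 'these', 'those']
mydictlist = [{'this': 1}, {'that': 2}, {'these': 3}]

# Each dict contributes its keys; difference removes them all at once
missing = set(mylist).difference(*mydictlist)
print(missing)  # {'those'}
```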
| 0 | 2016-09-01T01:16:19Z | [
"python"
] |
Ensure list of dicts has a dict with key for each key in list | 39,261,233 | <p>Context:
I'm using an Ajax call to return some complex JSON from a python module. I have to use a list of keys and confirm that a list of single-item dicts contains a dict with each key.</p>
<p>Example:</p>
<pre><code>mylist=['this', 'that', 'these', 'those']
mydictlist=[{'this':1},{'that':2},{'these':3}]
</code></pre>
<p>How do I know that mydictlist is missing the "those" key? Once I know that, I can append {'those':4} to mylist. Simply checking for "those" won't work since the list is dynamic. The data structure cannot change.</p>
<p>Thanks.</p>
| 0 | 2016-09-01T01:05:29Z | 39,261,309 | <p>Simple code is to convert your search list to a set, then use differencing to determine what you're missing:</p>
<pre><code>missing = set(mylist).difference(*mydictlist)
</code></pre>
<p>which gets you <code>missing</code> of <code>{'those'}</code>.</p>
<p>Since the named <code>set</code> methods can take multiple arguments (and they need not be <code>set</code>s themselves), you can just unpack all the <code>dict</code>s as arguments to <code>difference</code> to subtract all of them from your <code>set</code> of desired keys at once.</p>
<p>If you do need to handle duplicates (to make sure you see each of the <code>keys</code> in <code>mylist</code> at least that many time in <code>mydictlist</code>'s keys, so <code>mylist</code> might contain a value twice which must occur twice in the <code>dict</code>s), you can use <code>collections</code> and <code>itertools</code> to get remaining counts:</p>
<pre><code>from collections import Counter
from itertools import chain
c = Counter(mylist)
c.subtract(chain.from_iterable(mydictlist))
# In Python 3.3+, unary plus is the easiest way to remove 0/negative counts
c = +c
# In pre-3.3 Python, use this instead for the same effect, slightly less efficiently:
# c += Counter()
</code></pre>
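<p>For example, with the sample data from the question, the remaining counts identify the missing keys:</p>

```python
from collections import Counter
from itertools import chain

mylist = ['this', 'that', 'these', 'those']
mydictlist = [{'this': 1}, {'that': 2}, {'these': 3}]

c = Counter(mylist)
c.subtract(chain.from_iterable(mydictlist))
c = +c  # drop keys whose count fell to zero or below

print(c)  # Counter({'those': 1}) -- 'those' is still needed once
```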
| 2 | 2016-09-01T01:16:47Z | [
"python"
] |
Ensure list of dicts has a dict with key for each key in list | 39,261,233 | <p>Context:
I'm using an Ajax call to return some complex JSON from a python module. I have to use a list of keys and confirm that a list of single-item dicts contains a dict with each key.</p>
<p>Example:</p>
<pre><code>mylist=['this', 'that', 'these', 'those']
mydictlist=[{'this':1},{'that':2},{'these':3}]
</code></pre>
<p>How do I know that mydictlist is missing the "those" key? Once I know that, I can append {'those':4} to mylist. Simply checking for "those" won't work since the list is dynamic. The data structure cannot change.</p>
<p>Thanks.</p>
| 0 | 2016-09-01T01:05:29Z | 39,281,164 | <p>The pandas package is a great way to handle list of dicts problems. It takes all the keys and makes them column headers, values with similar keys populate the same column.</p>
<p>Check this out:</p>
<pre><code>import pandas as pd
mydictlist=[{'this':1},{'that':2},{'these':3}]
# Convert data to a DataFrame
df = pd.DataFrame(mydictlist)
# List all the column header names and check if any of the key words are missing
df.columns
</code></pre>
| 0 | 2016-09-01T21:31:08Z | [
"python"
] |
Flask session variable not persisting between requests | 39,261,260 | <p>Using the app below and Flask 0.11.1, I navigated to the routes associated with the following function calls, with the given results: </p>
<ul>
<li>create(): '1,2,3' # OK</li>
<li>remove(1) : '2,3' # OK</li>
<li>remove(2) : '1,3' # expected '3'</li>
<li>maintain(): '1,2,3' # expected '1,3' or '3'</li>
</ul>
<p> </p>
<pre><code>from flask import Flask, session
app = Flask(__name__)
@app.route('/')
def create():
session['list'] = ['1','2','3']
return ",".join(session['list'])
@app.route('/m')
def maintain():
return ",".join(session['list'])
@app.route('/r/<int:id>')
def remove(id):
session['list'].remove(str(id))
return ",".join(session['list'])
if __name__ == '__main__':
app.secret_key = "123"
app.run()
</code></pre>
<p>This question is similar in theme to <a href="http://stackoverflow.com/questions/18139910/using-session-in-flask-app">this question</a>, <a href="http://stackoverflow.com/questions/18709213/flask-session-not-persisting">this</a>, and <a href="http://stackoverflow.com/questions/7100315/flask-session-member-not-persisting-across-requests">this one</a>, but I'm setting the secret key and not regenerating it, and my variable is certainly not larger than the 4096 bytes allowed for cookies. Perhaps I'm missing some more basic understanding about Flask session variables?</p>
| 0 | 2016-09-01T01:08:36Z | 39,261,335 | <p>Flask uses a <a href="https://github.com/wklken/pyutils/blob/master/dict/CallbackDict.py" rel="nofollow">CallbackDict</a> to track modifications to sessions.</p>
<p>It will only register modifications when you set or delete a key. Here, you modify the values in place, which it will not detect. Try this:</p>
<pre><code>@app.route('/r/<int:id>')
def remove(id):
val = session['list']
val.remove(str(id))
session['list'] = val
return ",".join(session['list'])
</code></pre>
<p>…and same with other changes, or you might try to set <code>session.modified = True</code> yourself instead of triggering the detection.</p>
| 0 | 2016-09-01T01:21:01Z | [
"python",
"session",
"flask"
] |
Flask session variable not persisting between requests | 39,261,260 | <p>Using the app below and Flask 0.11.1, I navigated to the routes associated with the following function calls, with the given results: </p>
<ul>
<li>create(): '1,2,3' # OK</li>
<li>remove(1) : '2,3' # OK</li>
<li>remove(2) : '1,3' # expected '3'</li>
<li>maintain(): '1,2,3' # expected '1,3' or '3'</li>
</ul>
<p> </p>
<pre><code>from flask import Flask, session
app = Flask(__name__)
@app.route('/')
def create():
session['list'] = ['1','2','3']
return ",".join(session['list'])
@app.route('/m')
def maintain():
return ",".join(session['list'])
@app.route('/r/<int:id>')
def remove(id):
session['list'].remove(str(id))
return ",".join(session['list'])
if __name__ == '__main__':
app.secret_key = "123"
app.run()
</code></pre>
<p>This question is similar in theme to <a href="http://stackoverflow.com/questions/18139910/using-session-in-flask-app">this question</a>, <a href="http://stackoverflow.com/questions/18709213/flask-session-not-persisting">this</a>, and <a href="http://stackoverflow.com/questions/7100315/flask-session-member-not-persisting-across-requests">this one</a>, but I'm setting the secret key and not regenerating it, and my variable is certainly not larger than the 4096 bytes allowed for cookies. Perhaps I'm missing some more basic understanding about Flask session variables?</p>
| 0 | 2016-09-01T01:08:36Z | 39,261,342 | <p>From <a href="http://flask.pocoo.org/docs/0.11/api/#flask.session.modified" rel="nofollow">the doc</a>:</p>
<blockquote>
<p>Be advised that modifications on mutable structures are not picked up automatically, in that situation you have to explicitly set the [<code>modified</code> attribute] to <code>True</code> yourself. </p>
</blockquote>
<p>Try:</p>
<pre><code>session['list'].remove(str(id))
session.modified = True
</code></pre>
| 0 | 2016-09-01T01:21:47Z | [
"python",
"session",
"flask"
] |
Unable to run a basic GraphFrames example | 39,261,370 | <p>Trying to run a simple GraphFrame example using pyspark.</p>
<p>spark version : 2.0</p>
<p>graphframe version : 0.2.0</p>
<p>I am able to import graphframes in Jupyter:</p>
<pre><code>from graphframes import GraphFrame
GraphFrame
graphframes.graphframe.GraphFrame
</code></pre>
<p>I get this error when I try and create a GraphFrame object:</p>
<pre><code>---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-23-2bf19c66804d> in <module>()
----> 1 gr_links = GraphFrame(df_web_page, df_parent_child_link)
/Users/roopal/software/graphframes-release-0.2.0/python/graphframes/graphframe.pyc in __init__(self, v, e)
60 self._sc = self._sqlContext._sc
61 self._sc._jvm.org.apache.spark.ml.feature.Tokenizer()
---> 62 self._jvm_gf_api = _java_api(self._sc)
63 self._jvm_graph = self._jvm_gf_api.createGraph(v._jdf, e._jdf)
64
/Users/roopal/software/graphframes-release-0.2.0/python/graphframes/graphframe.pyc in _java_api(jsc)
32 def _java_api(jsc):
33 javaClassName = "org.graphframes.GraphFramePythonAPI"
---> 34 return jsc._jvm.Thread.currentThread().getContextClassLoader().loadClass(javaClassName) \
35 .newInstance()
36
/Users/roopal/software/spark-2.0.0-bin-hadoop2.7/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
931 answer = self.gateway_client.send_command(command)
932 return_value = get_return_value(
--> 933 answer, self.gateway_client, self.target_id, self.name)
934
935 for temp_arg in temp_args:
/Users/roopal/software/spark-2.0.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/Users/roopal/software/spark-2.0.0-bin-hadoop2.7/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
310 raise Py4JJavaError(
311 "An error occurred while calling {0}{1}{2}.\n".
--> 312 format(target_id, ".", name), value)
313 else:
314 raise Py4JError(
Py4JJavaError: An error occurred while calling o138.loadClass.
: java.lang.ClassNotFoundException: org.graphframes.GraphFramePythonAPI
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
</code></pre>
<p>The python code tries to read the java class (in the jar) I guess, but can't seem to find it.
Any suggestions how to fix this?</p>
 | 1 | 2016-09-01T01:25:36Z | 39,262,445 | <p>Make sure that <code>PYSPARK_SUBMIT_ARGS</code> is updated to include "--packages graphframes:graphframes:0.2.0-spark2.0" in your kernel.json (~/.ipython/kernels//kernel.json).</p>
<p>You probably already looked at the following <a href="https://developer.ibm.com/clouddataservices/2016/07/15/intro-to-apache-spark-graphframes/" rel="nofollow">link</a>. It has more details on Jupyter setup. Basically, pyspark has to be supplied the graphframes.jar.</p>
| 0 | 2016-09-01T03:52:25Z | [
"python",
"apache-spark",
"pyspark",
"jupyter",
"graphframes"
] |
Count the number of objects OpenCV - Python | 39,261,378 | <p>This is my first project in Python(3.5.1) and OpenCV (3), so I'm sorry for my mistakes.
I've some pictures like these ones:
<a href="https://s12.postimg.org/ox8gw5l8d/gado.jpg" rel="nofollow">https://s12.postimg.org/ox8gw5l8d/gado.jpg</a></p>
<p>I need to count how many white objects there are in this image. I tried to use SimpleBlobDetector but it didn't work as I was expecting.</p>
<pre><code># Standard imports
import cv2
import numpy as np;
# Read image
im = cv2.imread("C:/opencvTests/cattle.jpg", cv2.IMREAD_GRAYSCALE)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
#filter by color
params.filterByColor = True
params.blobColor = 255
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.08
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3 :
detector = cv2.SimpleBlobDetector(params)
else :
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imwrite("C:/opencvTests/blobSave.jpg",im_with_keypoints)
print("Total of objects")
print(len(keypoints))
</code></pre>
<p>Any help would be really appreciate! Thanks in advance </p>
 | 0 | 2016-09-01T01:26:32Z | 39,263,052 | <p>You will probably end up with a wrong count if you continue working on this image as-is. Perform some pre-processing, such as morphological operations, to remove noise and to separate the objects from each other. After this, make use of "findContours", an inbuilt OpenCV function. Then read the length of the result of "findContours"; this will give you the count of the objects.</p>
| 2 | 2016-09-01T05:06:49Z | [
"python",
"opencv"
] |
Count the number of objects OpenCV - Python | 39,261,378 | <p>This is my first project in Python(3.5.1) and OpenCV (3), so I'm sorry for my mistakes.
I've some pictures like these ones:
<a href="https://s12.postimg.org/ox8gw5l8d/gado.jpg" rel="nofollow">https://s12.postimg.org/ox8gw5l8d/gado.jpg</a></p>
<p>I need to count how many white objects there are in this image. I tried to use SimpleBlobDetector but it didn't work as I was expecting.</p>
<pre><code># Standard imports
import cv2
import numpy as np;
# Read image
im = cv2.imread("C:/opencvTests/cattle.jpg", cv2.IMREAD_GRAYSCALE)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
#filter by color
params.filterByColor = True
params.blobColor = 255
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.08
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3 :
detector = cv2.SimpleBlobDetector(params)
else :
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imwrite("C:/opencvTests/blobSave.jpg",im_with_keypoints)
print("Total of objects")
print(len(keypoints))
</code></pre>
<p>Any help would be really appreciate! Thanks in advance </p>
 | 0 | 2016-09-01T01:26:32Z | 39,273,744 | <p>I'm getting very close to the answer; I believe I just need to tune some parameters according to the image.
If someone needs something in this regard, this is my code:</p>
<pre><code># Standard imports
import cv2
import numpy as np;
# Read image
im = cv2.imread("C:/opencvTests/original.jpg", cv2.IMREAD_GRAYSCALE)
#Apply treshold
ret,im = cv2.threshold(im,240,255,cv2.THRESH_BINARY)
kernel = np.ones((6,6),np.uint8)
erosion = cv2.erode(im,kernel,iterations = 1)
opening = cv2.morphologyEx(im, cv2.MORPH_OPEN, kernel)
im = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
#filter by color
params.filterByColor = True
params.blobColor = 255
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.08
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3 :
detector = cv2.SimpleBlobDetector(params)
else :
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imwrite("C:/opencvTests/keypoints.jpg",im_with_keypoints)
print("Total of objects")
print(len(keypoints))
</code></pre>
<p>Thank you very much!</p>
| 0 | 2016-09-01T14:06:53Z | [
"python",
"opencv"
] |
How to wait until all threads are ready before starting them | 39,261,387 | <p>I have this working script:</p>
<pre><code>import socket, threading
def loop():
global threads
get_host = "GET " + url + " HTTP/1.1\r\nHost: " + url + "\r\n"
accept = "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-US,en;q=0.5\r\nAccept-Encoding: gzip, deflate\r\n"
connection = "Connection: Keep-Alive\r\n"
useragent = "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36\r\n"
request = get_host + useragent + accept + connection + "\r\n"
for x in range(800):
send().start()
class send(threading.Thread):
def run(self):
self.requestdefault()
def requestdefault(self):
while True:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((str(url), int(urlport)))
s.send (str.encode(request))
print ("Request sent!")
except:
pass
loop()
</code></pre>
<p>How can I make the program wait until all threads are ready before starting them and sending the requests? </p>
 | 0 | 2016-09-01T01:27:57Z | 39,262,113 | <p>Maybe you need a notification mechanism. I think you should try an event object, like the snippet below: </p>
<pre><code>import threading
import time
event=threading.Event()
def main_thread():
time.sleep(10)
event.set() # notify, enable worker thread run
def worker_thread():
event.wait()
    print(threading.current_thread().name + " running\n")
if __name__=='__main__':
    for i in range(0, 2):
t=threading.Thread(target=worker_thread)
t.start()
main_thread()
</code></pre>
| 0 | 2016-09-01T03:10:29Z | [
"python",
"multithreading",
"python-3.x"
] |
Writing to a file doesn't work if pywinauto throw an exception | 39,261,482 | <p>I'm new in Python so can not definitely say where the problem is: in PyWinAuto or in my knowledge of Python.</p>
<p>I run the next script Windows (Python 3.5.2):</p>
<pre><code>#!/usr/bin/env python3
import os
import sys
import pywinauto
def testLicenseForm():
app = pywinauto.Application().Start('Calc.exe')
try:
LicenseForm = app['Nonsense name']
LicenseForm.OK.Click()
# raise pywinauto.findbestmatch.MatchError
# raise pywinauto.timings.TimeoutError
except (pywinauto.timings.TimeoutError, pywinauto.findbestmatch.MatchError) as e:
f = open('R:\Temp\diagnostic\log.errors', 'w')
f.write('Exception raised')
sys.exit('Error in script'.format(__file__))
if __name__ == '__main__':
testLicenseForm()
</code></pre>
<p>The problem is the log.errors is created, but empty. If I change the code like this:</p>
<pre><code># LicenseForm.OK.Click()
raise pywinauto.findbestmatch.MatchError
</code></pre>
<p>the log.errors file is created and contains the expected text in it.
Not sure where the problem is. How to change the script to write some info to the file if pywinauto throws an exception.</p>
 | 0 | 2016-09-01T01:42:25Z | 39,265,616 | <p><code>f.write</code> doesn't guarantee that the data is written until you close the file (<code>f.close()</code>) or call <code>f.flush()</code>. But I'd recommend the following way:</p>
<pre><code>with open('R:\Temp\diagnostic\log.errors', 'w') as f:
f.write('Exception raised')
</code></pre>
<p>This context manager will close the file automatically when exiting the <code>with</code> block. The file is guaranteed to be closed even if an exception is raised inside the <code>with</code>.</p>
| 0 | 2016-09-01T07:45:32Z | [
"python",
"windows",
"fwrite",
"pywinauto"
] |
Emulating namespaces in Python | 39,261,496 | <p>I understand that Python has what it calls "namespaces", but I'm looking to write some code along the lines of <code>security.crypto.hash.md5("b")</code> and such. JavaScript doesn't have namespaces, but their behavior can be replicated using object notation:</p>
<pre><code>var stdlib = {
math: {
constants: {
e: 2.718281828,
pi: 3.141596535,
silver: 2.414213562
},
operations: {
add: function(a, b) {
return a + b;
},
subtract: function(a, b) {
return a - b;
},
multiply: function(a, b) {
return a * b;
},
divide: function(a, b) {
return a / b;
}
},
abs: function(a) {
if (a < 0) {
return 0 - a;
} else {
return a;
}
},
sgn: function(a) {
if (a > 0) {
return 1;
} else if (a < 0) {
return -1;
} else {
return 0;
}
}
}
}
</code></pre>
<p>This allows for writing code like:</p>
<pre><code>var i = 0;
var j = 1;
var n = stdlib.math.operations.add(i, j);
var constants = Object.create(stdlib.math.constants);
var ops = Object.create(stdlib.math.operations);
var z = ops.multiply(n, constants.e);
</code></pre>
<p>In Python, modules are separated into namespaces by placing them in folders, so <code>os</code> and <code>os.path</code> represent <code>$ROOT/os</code> and <code>$ROOT/os/path</code>, respectively. Is there a way to avoid that and do something that emulates namespaces like the above trick with JavaScript?</p>
 | 1 | 2016-09-01T01:43:48Z | 39,261,591 | <p>If all you're after is the dot notation, you can always create classes. See <a href="http://stackoverflow.com/questions/3576596/is-it-a-good-idea-to-using-class-as-a-namespace-in-python">Is it a good idea to using class as a namespace in Python</a>.</p>
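<p>As a rough sketch (names are illustrative, mirroring the JavaScript example), nested classes give the same dot notation:</p>

```python
class stdlib:
    class math:
        class constants:
            e = 2.718281828
            pi = 3.141592653

        class operations:
            @staticmethod
            def add(a, b):
                return a + b

            @staticmethod
            def multiply(a, b):
                return a * b


print(stdlib.math.operations.add(1, 2))  # 3
print(stdlib.math.constants.pi)          # 3.141592653
```

<p>Since Python 3.3 there is also <code>types.SimpleNamespace</code>, a lightweight object with attribute access, if you prefer building the tree at runtime instead of with class statements.</p>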
| 1 | 2016-09-01T01:58:29Z | [
"python",
"namespaces"
] |
Webbrowser() reading through a text file for URLS | 39,261,609 | <p>I am trying to write a script to automate browsing to my most commonly visited websites. I have put the websites into a list and am trying to open it using the <code>webbrowser()</code> module in Python. My code looks like the following at the moment:</p>
<pre><code>import webbrowser
f = open("URLs", "r")
list = f.readline()
for line in list:
webbrowser.open_new_tab(list)
</code></pre>
<p>This only reads the first line from my file "URLs" and opens it in the browser. Could any one please help me understand how I can achieve reading through the entire file and also opening the URLs in different tabs?</p>
<p>Also other options that can help me achieve the same.</p>
| 0 | 2016-09-01T02:00:17Z | 39,261,692 | <p>You have two main problems. </p>
<p>The first problem you have is that you are using <code>readline</code> and not <code>readlines</code>. <code>readline</code> will give you the first line in the file, while <code>readlines</code> gives you a list of your file contents. </p>
<p>Take this file as an example: </p>
<pre><code># urls.txt
http://www.google.com
http://www.imdb.com
</code></pre>
<p>Also, get into the habit of using a context manager, as it will close the file for you once you have finished reading from it. Right now you are leaving your file open; for what you are doing there is no real danger, but it is a bad habit. </p>
<p><a href="https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects" rel="nofollow">Here</a> is the information from the documentation on files. There is a mention about best practices with handling files and using <code>with</code>. </p>
<p>The second problem in your code is that, when you are iterating over <code>list</code> (which you should not use as a variable name, since it shadows the builtin <code>list</code>), you are passing <code>list</code> in to your <code>webbrowser</code> call. This is definitely not what you are trying to do. You want to pass the loop variable <code>line</code> instead.</p>
<p>So, taking all this in to mind, your final solution will be: </p>
<pre><code>import webbrowser
with open("urls.txt") as f:
for url in f:
webbrowser.open_new_tab(url.strip())
</code></pre>
<p>Note the <code>strip</code> that is called in order to ensure that newline characters are removed. </p>
| 1 | 2016-09-01T02:13:00Z | [
"python",
"python-2.7",
"python-webbrowser"
] |
Webbrowser() reading through a text file for URLS | 39,261,609 | <p>I am trying to write a script to automate browsing to my most commonly visited websites. I have put the websites into a list and am trying to open it using the <code>webbrowser()</code> module in Python. My code looks like the following at the moment:</p>
<pre><code>import webbrowser
f = open("URLs", "r")
list = f.readline()
for line in list:
webbrowser.open_new_tab(list)
</code></pre>
<p>This only reads the first line from my file "URLs" and opens it in the browser. Could any one please help me understand how I can achieve reading through the entire file and also opening the URLs in different tabs?</p>
<p>Also other options that can help me achieve the same.</p>
| 0 | 2016-09-01T02:00:17Z | 39,261,698 | <p>You're not reading the file properly. You're only reading the first line. Also, assuming you were reading the file properly, you're still trying to open <code>list</code>, which is incorrect. You should be trying to open <code>line</code>.</p>
<p>This should work for you:</p>
<pre><code>import webbrowser
with open('file name goes here') as f:
all_urls = f.read().split('\n')
for each_url in all_urls:
webbrowser.open_new_tab(each_url)
</code></pre>
<p>My answer is assuming that you have the URLs 1 per line in the text file. If they are separated by spaces, simply change the line to <code>all_urls = f.read().split(' ')</code>. If they're separated in another way just change the line to split accordingly.</p>
| 0 | 2016-09-01T02:13:53Z | [
"python",
"python-2.7",
"python-webbrowser"
] |
Uninstall Python 3.5.2 from Redhat Linux | 39,261,624 | <p>Linux is already preinstalled with Python 2.7, however I installed Python 3.5.2 thinking that I need it but actually I don't. So I want to safely and completely remove it from the system, how can I do it? </p>
<p>I previously installed Python 3.5.2 using the commands below</p>
<pre><code>wget https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tgz
tar -xvf Python-3.5.2.tgz
cd Python-3.5.2
./configure
make install
ls /usr/local/bin   # python 3.5.2 is stored at this location
</code></pre>
 | 0 | 2016-09-01T02:02:49Z | 39,261,660 | <p>There should be an <code>uninstall</code> script, but Python really does not have one. If you didn't change any options in the first place (I mean during configure and make) you can do the following: </p>
<p>//<em>be careful with</em> <strong>-rf</strong>//</p>
<pre><code>sudo rm -rf /usr/local/bin/python3* /usr/local/bin/pydoc3 /usr/local/lib/python3.5 /usr/local/include/python3.5m /usr/local/lib/pkgconfig/python3.pc /usr/local/lib/libpython3.5m.a
</code></pre>
| 0 | 2016-09-01T02:08:17Z | [
"python",
"linux",
"redhat",
"uninstall",
"package-management"
] |
Yield both items and callback request in scrapy | 39,261,636 | <p>Disclaimer: I'm pretty new to both Python and Scrapy.</p>
<p>I'm trying to get my spider to gather urls from the start url, follow those gathered urls and both:</p>
<ol>
<li>scrape the next page for specific items (and eventually return them)</li>
<li>gather more specific urls from the next page and follow these urls.</li>
</ol>
<p>I want to be able to continue this process of yielding both items and callback requests, but I am not quite sure how to do it.
Currently my code only returns urls, and no items. I'm obviously doing something wrong. Any feedback would be greatly appreciated.</p>
<pre><code>class VSSpider(scrapy.Spider):
name = "vs5"
allowed_domains = ["votesmart.org"]
start_urls = [
"https://votesmart.org/officials/WA/L/washington-state-legislative#.V8M4p5MrKRv",
]
def parse(self, response):
sel = Selector(response)
#this gathers links to the individual legislator pages, it works
for href in response.xpath('//h5/a/@href'):
url = response.urljoin(href.extract())
yield scrapy.Request(url, callback=self.parse1)
def parse1(self, response):
sel = Selector(response)
items = []
#these xpaths are on the next page that the spider should follow, when it first visits an individual legislator page
for sel in response.xpath('//*[@id="main"]/section/div/div/div'):
item = LegislatorsItems()
item['current_office'] = sel.xpath('//tr[1]/td/text()').extract()
item['running_for'] = sel.xpath('//tr[2]/td/text()').extract()
items.append(item)
#this is the xpath to the biography of the legislator, which it should follow and scrape next
for href in response.xpath('//*[@id="folder-bio"]/@href'):
url = response.urljoin(href.extract())
yield scrapy.Request(url, callback=self.parse2, meta={'items': items})
def parse2(self, response):
sel = Selector(response)
items = response.meta['items']
#this is an xpath on the biography page
for sel in response.xpath('//*[@id="main"]/section/div[2]/div/div[3]/div/'):
item = LegislatorsItems()
item['tester'] = sel.xpath('//div[2]/div[2]/ul/li[3]').extract()
items.append(item)
return items
</code></pre>
<p>Thanks!</p>
| 3 | 2016-09-01T02:04:49Z | 39,267,248 | <p>There are 2 levels of your problem.<br></p>
<p><strong>1.</strong> Bio url is not available with JS disabled. Turn off JS in your browser and check this page:
<a href="https://votesmart.org/candidate/126288/derek-stanford" rel="nofollow">https://votesmart.org/candidate/126288/derek-stanford</a></p>
<p>You should see the tag with an empty href and the correct url hidden in a comment. </p>
<pre><code><a href="#" class="folder" id="folder-bio">
<!--<a href='/candidate/biography/126288/derek-stanford' itemprop="url" class='more'>
See Full Biographical and Contact Information</a>-->
</code></pre>
<p>For extracting bio url, you can get this comment with xpath selector "/comment()", and then extract url with regexp. </p>
<p>Or, if url structure is common for all pages, just form url yourself: replace "/candidate/" in link with "/candidate/biography/".</p>
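<p>As a sketch of the comment-plus-regexp idea — the HTML here is just the snippet quoted above, not a live response; in the spider you would run the same regexp over the comment node extracted with the <code>comment()</code> xpath (or over <code>response.body</code>):</p>

```python
import re

# Snippet of the page source quoted above (hypothetical standalone string).
html = """<a href="#" class="folder" id="folder-bio">
<!--<a href='/candidate/biography/126288/derek-stanford' itemprop="url" class='more'>
See Full Biographical and Contact Information</a>-->"""

# Pull the href out of the commented-out anchor.
match = re.search(r"<!--<a href='([^']+)'", html)
bio_path = match.group(1) if match else None
print(bio_path)  # /candidate/biography/126288/derek-stanford
```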
<blockquote>
<p><strong>NB!</strong> If you face unexpected issues, one of the first actions is to disable JS and look at the page as Scrapy sees it. Test all selectors.</p>
</blockquote>
<hr>
<p><strong>2.</strong> Your usage of items is very complicated. If "one item = one person", you should just define one item in "parse_person" and pass it to "parse_bio". </p>
<p>Take a look at the updated code. I rewrote some parts while finding the issue. Notes:</p>
<ul>
<li>You don't need (in most cases) to create an "items" list and append items to it. Scrapy manages items itself.</li>
<li>"sel = Selector(response)" serves no purpose in your code; you can throw it away.</li>
</ul>
<p>This code is tested with Scrapy 1.0 and Python 3.5, though it should work with earlier versions too.</p>
<pre><code>from scrapy import Spider, Request
class VSSpider(Spider):
name = "vs5"
allowed_domains = ["votesmart.org"]
start_urls = ["https://votesmart.org/officials/WA/L/washington-state-legislative"]
def parse(self, response):
for href in response.css('h5 a::attr(href)').extract():
person_url = response.urljoin(href)
yield Request(person_url, callback=self.parse_person)
def parse_person(self, response): # former "parse1"
# define item, one for both parse_person and bio function
item = LegislatorsItems()
# extract text from left menu table and populate to item
desc_rows = response.css('.span-abbreviated td::text').extract()
if desc_rows:
item['current_office'] = desc_rows[0]
item['running_for'] = desc_rows[1] if len(desc_rows) > 1 else None
# create right bio url and pass item to it
bio_url = response.url.replace('votesmart.org/candidate/',
'votesmart.org/candidate/biography/')
return Request(bio_url, callback=self.parse_bio, meta={'item': item})
def parse_bio(self, response): # former "parse2"
# get item from meta, add "tester" data and return
item = response.meta['item']
item['tester'] = response.css('.item.first').xpath('//li[3]').extract()
print(item) # for python 2: print item
return item
</code></pre>
| 1 | 2016-09-01T09:06:48Z | [
"python",
"callback",
"scrapy"
] |
wxpython toolbar not shown in os x | 39,261,653 | <p>I have a python script for GUI using wxpython. It works perfectly fine in Windows; however, when I run the script in OS X, the toolbar is not shown (I installed wxPython from its official website and used the cocoa version, and I am using OS X 10.10 and Python 2.7). Following is the part of the code regarding the toolbar:</p>
<pre><code>self.toolBar = wx.ToolBar(self, -1, style=wx.TB_HORIZONTAL|wx.TB_FLAT|wx.TB_DOCKABLE)
self.myChoice = ComboBoxWithHelp(self.toolBar, wx.NewId(), size=(200, -1), value=..., choices=..., style=wx.CB_DROPDOWN,)
self.toolBar.AddControl(self.myChoice)
iconname = 'icons/new.png'
self.toolBar.AddSimpleTool(1, wx.Image(iconname, wx.BITMAP_TYPE_PNG).ConvertToBitmap(), 'New', 'New')
...
self.toolBar.Realize()
self.SetToolBar(self.toolBar)
</code></pre>
<p>Nothing is shown below the menu bar; however, the space is left there. Did I install wxPython incorrectly, or am I using the function incorrectly?</p>
<p>By the way, the above code also works for Ubuntu.</p>
| 1 | 2016-09-01T02:07:22Z | 39,277,524 | <p>What is <code>self</code> in this code? Toolbars are a little different on OSX, and can be a bit tricky, so there may be some issues if <code>self</code> is not a <code>wx.Frame</code> or a class derived from <code>wx.Frame</code>. This is because the native toolbars are actually part of the frame rather than an independent widget. It should be switching to a non-native toolbar if the parent is not the frame, but then you'll need to manage its size and layout yourself. </p>
<p>If <code>self</code> is already the frame then you may want to try not specifying the style flags and let it just use the default style, or you can try creating the toolbar like this instead and let the frame create it:</p>
<pre><code>self.toolBar = self.CreateToolBar()
</code></pre>
| 0 | 2016-09-01T17:24:13Z | [
"python",
"wxpython"
] |
How to get index of a cell in QTableWidget? | 39,261,734 | <p>I created a table widget and added a context menu to it. When I right-click a cell, I want to get a file directory and put it into the cell. I've got the directory and passed it to a variable, but I failed to display it in the cell, because I can't get the index of the cell. How do I get the index of a cell in a QTableWidget? Is there any other method to figure out this question? I'm using Python and PyQt5.</p>
<p><a href="http://i.stack.imgur.com/gwqxR.jpg" rel="nofollow">screenshot of the table widget and context menu</a></p>
<pre><code>@pyqtSlot()
def on_actionAddFolder_triggered(self):
# TODO: Open filedialog and get directory
filedir = str(QFileDialog.getExistingDirectory(self, "Select Directory"))
return filedir
@pyqtSlot(QPoint)
def on_tableWidget_customContextMenuRequested(self, pos):
# TODO: get directory and display it in the cell
x = self.tableWidget.currentRow
y = self.tableWidget.currentColumn
RightClickMenu = QMenu()
AddFolder = RightClickMenu.addAction('Add Folder')
FolderAction = RightClickMenu.exec_(self.tableWidget.mapToGlobal(pos))
if FolderAction == AddFolder:
NewItem = QTableWidgetItem(self.on_actionAddFolder_triggered())
self.tableWidget.setItem(x,y, NewItem)
</code></pre>
| 0 | 2016-09-01T02:19:19Z | 39,262,128 | <p>Hahaha, I found the mistake!</p>
<pre><code>x = self.tableWidget.currentRow
y = self.tableWidget.currentColumn
</code></pre>
<p>Replace these two lines with:</p>
<pre><code>x = self.tableWidget.currentRow()
y = self.tableWidget.currentColumn()
</code></pre>
<p>then it works.</p>
| 0 | 2016-09-01T03:12:48Z | [
"python",
"pyqt5",
"qtablewidget"
] |
Iterating Through Lists (index out of range) | 39,261,811 | <p>Quick summary of what I am creating: It is a game in which an alien spaceship bounces around the screen (like the dell logo/loading screen) with a certain boundary so that it stays near the top of the screen. Near the bottom of the screen there is a player ship that has to click to shoot the enemy space invaders style, while moving side to side (but at the moment I'm still working on keyboard/mouse working simultaneously because events only do the top of the queue). There are also cows being beamed up from beneath you by the mothership. If you catch the cow you get points. If you fail to dodge one, you lose points and a life. If you catch one with a "net" you get points.</p>
<p>The issue I am having is this error (<code>cowRect = (cow_x[i], cow_y[i], 127, 76)
IndexError: list index out of range</code>), which I think would be caused by the program attempting to iterate through the lists while they are still empty, although it seems like items are in the list when it is "scanning" through it.</p>
<p>Some snippets of my code (It's like 170 lines so I won't post it all):</p>
<p>Beginning-</p>
<pre><code>cowList = []
statusList = []
cow_x = []
cow_y = []
</code></pre>
<p>Inside main loop-</p>
<pre><code>if hits >= hitsNeeded and time.time() >= currentTime + 1:
cowList.append(cownumber)
cownumber += 1
statusList.append(3)
cow_x.append(random.randint(0, 573))
cow_y.append(700)
</code></pre>
<p>Also inside main loop-</p>
<pre><code>for i in statusList:
cowRect = (cow_x[i], cow_y[i], 127, 76)
if cow_y[i] + 111 < 0:
statusList[i] = 0 #offscreen
if cowRect.colliderect(missileRect):
statusList[i] = 1 #exploded
points -= 15
netRect = (net_x, net_y, 127, 127)
if cowRect.colliderect(netRect):
points += 90
screen.blit(milkplus, (cow_x[i], cow_y[i]))
powerup = pygame.mixer.Sound("C:/Python35/powerup.mp3")
powerup.play()
shotNet = 0
statusList[i] = 2 #caught
if cowRect.colliderect(playerRect):
points -= 10
lives -= 1
statusList[i] = 4 #player collision
for i in statusList:
if statusList[i] == 3: #alive
screen.blit(cow, (cow_x[i], cow_y[i]))
cow_y[i] -= cowSpeed
</code></pre>
<p>Yes, I do realize that I don't really need to have 4 states of cow, it just helps to keep my head organized (that goes for some other things in here too).</p>
<p>I apologize if I have made some mistake, I haven't been on here in a very long time.</p>
| 0 | 2016-09-01T02:29:52Z | 39,262,705 | <p>The problem you are seeing is due to the way that <code>for</code> loops work in Python. They iterate through the contents of the list, not the list indexes. As mentioned in the comments, you <em>could</em> fix the error by simply doing <code>for i in range(len(statusList))</code>, but I want to suggest that you rather use a slightly different strategy to encode the information about the cows, by having a list of cows rather than four lists about the cows.</p>
<pre><code>class Cow:
    def __init__(self, x, y):
self.status = 'alive'
self.x = x
self.y = y
</code></pre>
<p>Beginning-</p>
<pre><code>cows = []
</code></pre>
<p>Inside main loop-</p>
<pre><code>if hits >= hitsNeeded and time.time() >= currentTime + 1:
cows.append(Cow(random.randint(0, 573), 700))
</code></pre>
<p>Also inside main loop-</p>
<pre><code>for cow in cows:
    cowRect = pygame.Rect(cow.x, cow.y, 127, 76)  # a Rect, not a tuple, so colliderect works
if cow.y + 111 < 0:
cow.status = 'offscreen'
if cowRect.colliderect(missileRect):
cow.status = 'exploded'
points -= 15
    netRect = pygame.Rect(net_x, net_y, 127, 127)
if cowRect.colliderect(netRect):
points += 90
screen.blit(milkplus, (cow.x, cow.y))
powerup = pygame.mixer.Sound("C:/Python35/powerup.mp3")
powerup.play()
shotNet = 0
cow.status = 'caught' #caught
if cowRect.colliderect(playerRect):
points -= 10
lives -= 1
cow.status = 'collision'
for cow in cows:
if cow.status == 'alive':
screen.blit(cow_pic, (cow.x, cow.y))
cow.y -= cowSpeed
</code></pre>
<p>This will make things easier in the future. If you are uncomfortable with classes, you can get similar behaviour with <code>collections.namedtuple</code> (note, though, that namedtuples are immutable, so you could not update <code>cow.status</code> in place).</p>
| 2 | 2016-09-01T04:26:16Z | [
"python",
"python-3.x",
"pygame"
] |
How to fire a custom (python) service and continue in powershell using ansible? | 39,261,819 | <p>I am trying to start a python service in a windows host using ansible. I have tried using both Start-Job and Start-Process as follows. But I am not able to get the exact results.</p>
<p>Using Start-Job</p>
<pre><code>Start-Job -ScriptBlock {Start-Process C:\Users\voiceqa\ansitest\Scripts\python.exe C:\Users\voiceqa\ansitest\Scripts\run_wireshark_service.py -PassThru -RedirectStandardError C:\Users\voiceqa\error.txt -RedirectStandardOutput C:\Users\voiceqa\output.txt -NoNewWindow}
</code></pre>
<p>The problem with this is that as soon as Ansible comes out of the session, Start-Job stops, which in turn kills the process that it is running.</p>
<p>Using Start-Process</p>
<pre><code>Start-Process powershell -ArgumentList "C:\Users\voiceqa\ansitest\Scripts\python.exe C:\Users\voiceqa\ansitest\Scripts\run_wireshark_service.py" -WindowStyle Hidden -RedirectStandardError C:\Users\voiceqa\error.txt -RedirectStandardOutput C:\Users\voiceqa\output.txt -PassThru
Start-Process C:\Users\voiceqa\ansitest\Scripts\python.exe -ArgumentList "C:\Users\voiceqa\ansitest\Scripts\run_wireshark_service.py" -WindowStyle Hidden -RedirectStandardError C:\Users\voiceqa\error.txt -RedirectStandardOutput C:\Users\voiceqa\output.txt -PassThru -UseNewEnvironment| Export-Clixml -Path C:\Users\voiceqa\wiresharkservice.xml
Start-Process C:\Users\voiceqa\ansitest\Scripts\python.exe C:\Users\voiceqa\ansitest\Scripts\run_wireshark_service.py -PassThru -NoNewWindow
</code></pre>
<p>I have tried all these. All of them have the same issue: Ansible waits for these commands to finish (which they won't), as shown.</p>
<p><a href="http://i.stack.imgur.com/EHwU3.png" rel="nofollow"><img src="http://i.stack.imgur.com/EHwU3.png" alt="Here"></a>
All I need is to fire this python service and continue with the rest of the work. How can I achieve this functionality? Any help would be welcome.</p>
| 0 | 2016-09-01T02:30:28Z | 39,277,521 | <p>It's tricky to do in the currently-released versions of Ansible. Even if you managed to background the tasks via <code>raw:</code> (which is possible with some hoop-jumping), WinRM won't let the command complete until all the processes exit (WinRM runs everything under a Windows job object). You have to escape the job.</p>
<p>Ansible 2.2 will have async, win_shell and win_command, but async isn't currently the right thing, since it leaves a watchdog process running that will kill the child process after the async timeout has elapsed. I've been testing a "breakaway" option for command/shell that will allow you to do what you're wanting (not sure if it'll be ready for primetime by 2.2 module freeze though). </p>
<p>If you're really running a service, I'd suggest setting it up as a Windows service (either directly, via sc, or using something like NSSM).</p>
<p>If that's a no-go, you can run background processes in the current version of Ansible (so long as you don't need access to stdin/stdout/stderr of the launched process) via <code>raw:</code> and WMI like this: </p>
<p><code>raw: ([wmiclass]"Win32_Process").Create("myprocess.exe /option1 /option2")</code></p>
| 1 | 2016-09-01T17:24:07Z | [
"python",
"powershell",
"ansible",
"ansible-playbook"
] |
How do I unload an instance of a class in python | 39,261,906 | <p>Ok, I think it better to give an example of why I would want to do this.
Say I had a game.
There are some objects that are enemies;
these enemies have a class that is running (health, weapons, etc.).
If a player kills an enemy, I want to unload that enemy's data from the game (removing the enemy's instance of the class). How would I go about doing this?</p>
<p>EDIT:</p>
<p>Ok, using del does not help. Here is a basic example. Imagine obj1 is an enemy, and after i=10 imagine that the enemy dies.
I have the code to delete the obj; however, the function Update() that is in obj1 is still being called, as the shell still prints "AAAAAAAAAAAA" even though the shell prints d as True.</p>
<pre><code>import threading
import time
import sched
#import Queue
#setting up some OneInstance Var's
starttime=time.time()
condition = threading.Condition()
lock = threading.Lock()
#This is the tick method for calling the update threads in the child classes (set at 20 ticks/s)
#The tick is run in its own thread so it can not be interrupted
#The tick method is part of the Base module but not part of the base class, as there needs to be ONLY ONE INSTANCE of it
def Tick(cond):
while True:
with cond:
            cond.notifyAll() #This clears the block on the threads every 20th of a second
time.sleep(0.05 - ((time.time() - starttime) % 0.05)) #This is the 20th of a second part
tickThread=threading.Thread(name = 'ThreadTick', target=Tick, args=(condition,)) #This sets up the Tick in its own thread
tickThread.start()#This runs said thread
#BaseClass is the class all other classes inherit from
class BaseClass(object):
def __init__(self, name):
global condition
global lock
self.name = name
t=threading.Thread(name = 'Thread'+self.name, target=self.UpdateHandler, args=(condition,lock,))
t.start()
def BaseClassType(self):
"""Returns a string of the childs type"""
pass
def UpdateHandler(self, cond, lock):
#this part handles when to call the update.
        #it is also needed so that it can be run in a thread.
while True:
            self.lock = lock #this is just passed down every tick so that threads can lock themselves when writing to the shell
self.Update() #Calls the update on all child classes
with cond:
                cond.wait() #This makes all the threads wait (besides the tick thread) for the next tick when they are done.
def Update(self):
#This is a place holder.
        #It stops classes that don't have an Update function from crashing when called
pass
def Unload(self):
        #this method will terminate the thread.... Don't know if this is even possible in Python yet
        #and then remove any active instances of itself before removing the class instance
pass
    #This method is made so that I did not have to rewrite self.lock.blablabla each time I wanted to print to the shell
def Print (self, message):
self.lock.acquire()
print(message)
self.lock.release()
#---------------------------------------------------------------------------------
class ChildClassA(BaseClass):
def __init__(self, name):
super(ChildClassA, self).__init__(name)
def BaseClassType(self):
return 'ChildClassA'
def Update(self):
self.Print('AAAAAAAAAAAAAAAAAAAAAAA')
#----------------------------------------------------------------------------------
class ChildClassB(BaseClass):
def __init__(self, name):
super(ChildClassB, self).__init__(name)
def Update(self):
        #print(self.name, "This is a completely different action")
self.Print('BBBBBBBBBBBBBBBBBBB')
self.Hi()
def Hi(self):
self.Print("Hi")
#----------------------------------------------------------------------------------
#works now just like in unity
class ChildClassC(BaseClass): #the new class
def __init__(self, name): #this is the equivalent of start()
        super(ChildClassC, self).__init__(name) #this is the only extra bit, but it's just to pass some name data through the classes
    def Update(self): #this is literally the same as update() except it has self in the parameters
self.Print("CCCCCCCCCCCCCCCCCCCCCCCCCC")
#--------------------------------------------------------------------------------
obj1 = ChildClassA('Obj1')
obj2 = ChildClassB('Obj2')
obj3 = ChildClassC('Obj3')
i=0
d=False
while True:
if i >=10:
if d==False:
del obj1
d = True
i=i+1
print("D: ", d)
print("I: ", i)
</code></pre>
| 0 | 2016-09-01T02:43:11Z | 39,262,344 | <p>It's not clear what you are asking. But I am guessing it is one of the following:</p>
<ul>
<li>You are from a C background where objects are allocated and freed, and Python is your first garbage-collected language. If you remove all references to it (e.g. remove it from the list of sprites or whatnot), garbage collection will automatically remove it from memory. </li>
<li>If you are asking how it is literally removed from a game scene, you will have to be more specific as to what game framework you are using or other existing code. </li>
</ul>
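<p>A minimal sketch of the first point (not the OP's game code): once the only reference to an enemy is dropped from the list that holds it, CPython reclaims the object. A <code>weakref</code> lets us observe that without keeping the object alive:</p>

```python
import gc
import weakref

class Enemy:
    def __init__(self, name):
        self.name = name

enemies = [Enemy("grunt"), Enemy("boss")]
probe = weakref.ref(enemies[0])  # watch the first enemy without owning it

assert probe() is not None       # still referenced by the list
del enemies[0]                   # "kill" the enemy: drop the only strong reference
gc.collect()                     # not strictly needed in CPython, but explicit
assert probe() is None           # the instance has been reclaimed
print(len(enemies))              # 1
```

<p>Note this only works if the list really held the <em>last</em> reference; anything else still pointing at the object (another list, a running thread's <code>self</code>) keeps it alive.</p>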
| 1 | 2016-09-01T03:40:52Z | [
"python",
"class"
] |
How can I get a list of all permutations of two column combination based on another column's value? | 39,261,932 | <p>My end goal is to create a <a href="https://bl.ocks.org/mbostock/4062045" rel="nofollow">Force-Directed graph</a> with d3 that shows clusters of users that utilize certain features in my applications. To do this, I need to create a set of "links" that have the following format (taken from the above link):</p>
<pre><code>{"source": "Napoleon", "target": "Myriel", "value": 1}
</code></pre>
<p>To get to this step though, I start with a pandas dataframe that looks like this. How can I generate a list of permutations of <code>APP_NAME</code>/<code>FEAT_ID</code> combinations for each <code>USER_ID</code>?</p>
<pre><code> APP_NAME FEAT_ID USER_ID CNT
280 app1 feature1 user1 114
2622 app2 feature2 user1 8
1698 app2 feature3 user1 15
184 app3 feature4 user1 157
2879 app2 feature5 user1 7
3579 app2 feature6 user1 5
232 app2 feature7 user1 136
295 app2 feature8 user1 111
2620 app2 feature9 user1 8
2047 app3 feature10 user2 11
3395 app2 feature2 user2 5
3044 app2 feature11 user2 6
3400 app2 feature12 user2 5
</code></pre>
<p>Expected Results:</p>
<p>Based on the above dataframe, I'd expect <code>user1</code> and <code>user2</code> to generate the following permutations</p>
<pre><code>user1:
app1-feature1 -> app2-feature2, app2-feature3, app3-feature4, app2-feature5, app2-feature6, app2-feature7, app2-feature8, app2-feature9
app2-feature2 -> app2-feature3, app3-feature4, app2-feature5, app2-feature6, app2-feature7, app2-feature8, app2-feature9
app2-feature3 -> app3-feature4, app2-feature5, app2-feature6, app2-feature7, app2-feature8, app2-feature9
app3-feature4 -> app2-feature5, app2-feature6, app2-feature7, app2-feature8, app2-feature9
app2-feature5 -> app2-feature6, app2-feature7, app2-feature8, app2-feature9
app2-feature6 -> app2-feature7, app2-feature8, app2-feature9
app2-feature7 -> app2-feature8, app2-feature9
app2-feature8 -> app2-feature9
user2:
app3-feature10 -> app2-feature2, app2-feature11, app2-feature12
app2-feature2 -> app2-feature11, app2-feature12
app2-feature11 -> app2-feature12
</code></pre>
<p>From this, I'd expect to be able to generate the expected inputs to D3, which would look like this for <code>user2</code>. </p>
<pre><code>{"source": "app3-feature10", "target": "app2-feature2"}
{"source": "app3-feature10", "target": "app2-feature11"}
{"source": "app3-feature10", "target": "app2-feature12"}
{"source": "app2-feature2", "target": "app2-feature11"}
{"source": "app2-feature2", "target": "app2-feature12"}
{"source": "app2-feature11", "target": "app2-feature12"}
</code></pre>
<p>How can I generate a list of permutations of <code>APP_NAME</code>/<code>FEAT_ID</code> combinations for each <code>USER_ID</code> in my dataframe?</p>
| 1 | 2016-09-01T02:47:34Z | 39,263,113 | <p>I would look at making some tuples out of your dataframe and then using something like <code>itertools.permutations</code> to create all the permutations, and then from there, craft your dictionaries as you need:</p>
<pre><code>import itertools
allUserPermutations = {}
groupedByUser = df.groupby('USER_ID')
for k, g in groupedByUser:
requisiteColumns = g[['APP_NAME', 'FEAT_ID']]
# tuples out of dataframe rows
userCombos = [tuple(x) for x in requisiteColumns.values]
# this is a generator obj
userPermutations = itertools.permutations(userCombos, 2)
# create a list of specified dicts for the current user
    userPermutations = [{'source': s, 'target': tar} for s, tar in userPermutations]
# store the current users specified dicts
allUserPermutations[k] = userPermutations
</code></pre>
<p>If the permutations don't return the desired behavior, you could try some other combinatoric generators <a href="https://docs.python.org/dev/library/itertools.html#module-itertools" rel="nofollow">found here</a>. Hopefully, this kind of strategy works (I don't have a pandas-enabled REPL to test it, at the moment). Best of luck!</p>
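<p>For what it's worth, the expected output in the question lists each unordered pair only once (A→B but never B→A), which matches <code>itertools.combinations</code> rather than <code>permutations</code>. A standalone sketch with user2's rows from the question:</p>

```python
from itertools import combinations

# user2's (APP_NAME, FEAT_ID) pairs, taken from the question
user2 = [('app3', 'feature10'), ('app2', 'feature2'),
         ('app2', 'feature11'), ('app2', 'feature12')]

links = [{'source': '-'.join(s), 'target': '-'.join(t)}
         for s, t in combinations(user2, 2)]

for link in links:
    print(link)    # e.g. {'source': 'app3-feature10', 'target': 'app2-feature2'}
print(len(links))  # 4 choose 2 = 6
```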
| 1 | 2016-09-01T05:11:48Z | [
"python",
"pandas"
] |
MinGW or Cygwin for Python development on Windows? | 39,261,962 | <p>I am using vanilla Python 3.x on Windows, using PyCharm as my IDE.</p>
<p>Although this is working, I need to install different packages and modules, and I did notice that Windows has no pip, from what I can tell.</p>
<p>I am familiar with Python on Linux, and most of the time it is a matter of using pip to install new packages and modules for Python; but on Windows it seems more daunting and complex.</p>
<p>I was told to use either MinGW or Cygwin, so I can be in a full unix environment, using terminal commands that I am used to, and the unix console; although I do not get the difference as it relates to Python.</p>
<p>The code that I write will eventually be deployed on Linux, OSX and Windows; would it matter if I use Cygwin or MinGW? If this is too much trouble, it is probably easier to just install Ubuntu as a VM and work there; but I am hoping that I can do the same on a standard W10 machine.</p>
| 1 | 2016-09-01T02:51:23Z | 39,262,364 | <p>In PyCharm you also have access to a terminal in which you can run <code>pip install</code>. <a href="https://www.jetbrains.com/help/pycharm/2016.1/working-with-embedded-local-terminal.html" rel="nofollow">Check this link out</a></p>
| 1 | 2016-09-01T03:43:15Z | [
"python",
"cygwin",
"mingw"
] |
MinGW or Cygwin for Python development on Windows? | 39,261,962 | <p>I am using vanilla Python 3.x on Windows, using PyCharm as my IDE.</p>
<p>Although this is working, I need to install different packages and modules, and I did notice that Windows has no pip, from what I can tell.</p>
<p>I am familiar with Python on Linux, and most of the time it is a matter of using pip to install new packages and modules for Python; but on Windows it seems more daunting and complex.</p>
<p>I was told to use either MinGW or Cygwin, so I can be in a full unix environment, using terminal commands that I am used to, and the unix console; although I do not get the difference as it relates to Python.</p>
<p>The code that I write will eventually be deployed on Linux, OSX and Windows; would it matter if I use Cygwin or MinGW? If this is too much trouble, it is probably easier to just install Ubuntu as a VM and work there; but I am hoping that I can do the same on a standard W10 machine.</p>
| 1 | 2016-09-01T02:51:23Z | 39,262,887 | <p>I'd comment with this if I had the rep to do so, as it fails to stay within the scope of your question. I'm guessing you've heard of <a href="https://www.continuum.io/downloads" rel="nofollow">Anaconda</a> but using <code>conda</code> and many commands associated with it, you can manage packages and virtual environments, etc. If the problem is Python on Windows (as opposed to Python in a Linux environment on Windows), I recommend checking it out, if you haven't heard of it already!</p>
| 0 | 2016-09-01T04:50:06Z | [
"python",
"cygwin",
"mingw"
] |
OSError: [Errno 78] Function not implemented Flask-Assets | 39,261,974 | <p>I get the following error when accessing a URL with Flask-Assets which is supposed to render and minify css. </p>
<pre><code>ERROR 2016-09-01 02:45:00,096 app.py:1587] Exception on /SomeFile [GET]
Traceback (most recent call last):
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/vinay/App-Engine/zion-alpha/app/routes/home_routes.py", line 14, in show_file
return render_template('main.html')
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/templating.py", line 134, in render_template
context, ctx.app)
File "/Users/vinay/App-Engine/zion-alpha/lib/flask/templating.py", line 116, in _render
rv = template.render(context)
File "/Users/vinay/App-Engine/zion-alpha/lib/jinja2/environment.py", line 989, in render
return self.environment.handle_exception(exc_info, True)
File "/Users/vinay/App-Engine/zion-alpha/lib/jinja2/environment.py", line 754, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Users/vinay/App-Engine/zion-alpha/app/templates/main.html", line 6, in top-level template code
{% assets "css_all" %}
OSError: [Errno 78] Function not implemented
INFO 2016-09-01 02:45:00,116 module.py:788] default: "GET /SomeFile HTTP/1.1" 500 291
INFO 2016-09-01 02:45:00,853 module.py:402] [default] Detected file changes:
/Users/vinay/App-Engine/zion-alpha/app
</code></pre>
<p>The following is the code containing Flask-Assets</p>
<pre><code>def create_app():
"""Create the Flask App"""
app = Flask(__name__)
configure_blueprints(app)
css = Bundle('css/main.css',
'css/main2.css',
filters="cssmin",
output="static/css/min.css"
)
assets = Environment(app)
assets.register('css_all', css)
return app
</code></pre>
<p>HTML</p>
<pre><code>{% assets "css_all" %}
<link rel="stylesheet" href="{{ ASSET_URL }}"/>
{% endassets %}
</code></pre>
<p>Folder structure</p>
<pre><code>.
├── admin
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── routes
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── admin_routes.py
│   │   └── admin_routes.pyc
│   ├── static
│   │   └── css
│   │       ├── admin-2.css
│   │       └── admin.css
│   └── templates
│       └── adminindex.html
├── app
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── routes
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── home_routes.py
│   │   └── home_routes.pyc
│   ├── static
│   │   └── css
│   │       ├── main.css
│   │       └── main2.css
│   └── templates
│       └── main.html
├── app.yaml
├── appengine_config.py
└── appengine_config.pyc
</code></pre>
<p>4 directories, 9 files</p>
| 0 | 2016-09-01T02:53:03Z | 39,818,969 | <p>This seems to be a problem with <code>flask-assets</code> and Google App Engine because GAE disallows file creation at run-time.</p>
<p>Possible duplicate of: <a href="http://stackoverflow.com/questions/17150096/gae-flask-webassets-throws-an-expection-on-extends-base-html">GAE: Flask/webassets throws an expection on {% extends "base.html" %}</a></p>
<p>Either minify the css before deploying to GAE, or find a different method of serving the minified css either at runtime (not preferred) or build time (preferred).</p>
<p>One possibility is to create the minified css on the first run (build time), which can then be uploaded to a Google storage bucket.</p>
<p>Another option is to create the minified css at runtime and keep it stored in memcache.</p>
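<p>As a sketch of the build-time option — a real project should reuse the same <code>cssmin</code> package that the flask-assets filter wraps; this stdlib-only stand-in with a deliberately crude minifier just illustrates producing <code>min.css</code> before deploying so nothing is written on App Engine at runtime:</p>

```python
import re

def naive_minify(css: str) -> str:
    """Deliberately crude stand-in for a real minifier such as cssmin:
    strip comments, collapse whitespace, and tighten punctuation."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)    # drop /* comments */
    css = re.sub(r"\s+", " ", css)                     # collapse whitespace
    return re.sub(r"\s*([{};:,])\s*", r"\1", css).strip()

# At build time you would concatenate main.css/main2.css, write the result to
# static/css/min.css, and deploy that file with the app.
print(naive_minify("body {  color: red; }  /* note */"))  # body{color:red;}
```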
| 0 | 2016-10-02T16:10:15Z | [
"python",
"flask-assets"
] |
Removing duplicates without set() | 39,262,034 | <p>I have a .txt file of IPs, Times, Search Queries, and Websites accessed. I used a for loop to break them up into respective indices of a list; I then placed all these lists into a larger list. </p>
<p>When printed it may look like this...</p>
<pre><code>['4.16.159.114', '08:13:37', 'french-english dictionary', 'humanities.uchicago.edu/forms_unrest/FR-ENG.html\n']
['4.16.186.203', '00:13:54', 's.e.t.i.', 'www.seti.net/\n']
['4.16.189.59', '05:48:58', 'which is better http upload or ftp upload', 'www.ewebtribe.com/htmlhelp/uploading.htm\n']
['4.16.189.59', '06:50:49', 'cgi perl tutorial', 'www.cgi101.com/class/\n']
['4.16.189.59', '07:16:28', 'cgi perl tutorial', 'www.free-ed.net/fr03/lfc/course%20030207_01/\n']
</code></pre>
<p>My code for getting to here looks like so, which is just me scraping this data from a text file, and putting it into a list, then writing to another text file.</p>
<pre><code>import io
f = io.open(r'C:\Users\Ryan Asher\Desktop\%23AlltheWeb_2001.txt', encoding="Latin-1")
p = io.open(r'C:\Users\Ryan Asher\Desktop\workfile.txt', 'w')
sweet = []
for line in f:
x = line.split(" ")
lbreak = x[0].split("\t")
sweet.append(lbreak)
for item in sweet:
p.write("%s\n" % item)
</code></pre>
<p>My issue here is the 3rd index in each list within the sweet list, i.e. [2], which is the search query (french-english dictionary, s.e.t.i., etc.). I do not want multiples in the 'sweet' list.</p>
<p>So where it says 'cgi perl tutorial' twice, I need to get rid of the second 'cgi perl tutorial' entry, and leave only the first one within the sweet list.</p>
<p>I don't think I can use set for this, because I only want it to apply to the 3rd index of search queries, and I don't want it to get rid of duplicates of the same IP, or one of the others.</p>
| 2 | 2016-09-01T03:00:08Z | 39,262,109 | <p>Use a dict, with the query as the key and the entire list as the value. Something like this (untested):</p>
<pre><code>sweet = {}
for line in f:
...
query = lbreak[2]
if query not in sweet:
sweet[query] = lbreak
</code></pre>
<p>If you wanted the last instance of each query instead of the first, you could just lose the <code>if</code>, and do the assignment unconditionally.</p>
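A minimal sketch of that "keep the last occurrence" variant, using made-up sample rows (the file-reading parts from the question are omitted):

```python
rows = [
    ['4.16.189.59', '06:50:49', 'cgi perl tutorial', 'www.cgi101.com/class/'],
    ['4.16.189.59', '07:16:28', 'cgi perl tutorial', 'www.free-ed.net/'],
]
sweet = {}
for lbreak in rows:
    sweet[lbreak[2]] = lbreak  # unconditional assignment keeps the last occurrence
```

Here `sweet['cgi perl tutorial']` ends up holding the 07:16:28 row, the later of the two.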
| 1 | 2016-09-01T03:09:49Z | [
"python",
"duplicates"
] |
Removing duplicates without set() | 39,262,034 | <p>I have a .txt file of IPs, Times, Search Queries, and Websites accessed. I used a for loop to break them up into respective indices of a list, I then placed all these lists, into a larger list. </p>
<p>When printed it may look like this...</p>
<pre><code>['4.16.159.114', '08:13:37', 'french-english dictionary', 'humanities.uchicago.edu/forms_unrest/FR-ENG.html\n']
['4.16.186.203', '00:13:54', 's.e.t.i.', 'www.seti.net/\n']
['4.16.189.59', '05:48:58', 'which is better http upload or ftp upload', 'www.ewebtribe.com/htmlhelp/uploading.htm\n']
['4.16.189.59', '06:50:49', 'cgi perl tutorial', 'www.cgi101.com/class/\n']
['4.16.189.59', '07:16:28', 'cgi perl tutorial', 'www.free-ed.net/fr03/lfc/course%20030207_01/\n']
</code></pre>
<p>My code for getting to here looks like so, which is just me scraping this data from a text file, and putting it into a list, then writing to another text file.</p>
<pre><code>import io
f = io.open(r'C:\Users\Ryan Asher\Desktop\%23AlltheWeb_2001.txt', encoding="Latin-1")
p = io.open(r'C:\Users\Ryan Asher\Desktop\workfile.txt', 'w')
sweet = []
for line in f:
x = line.split(" ")
lbreak = x[0].split("\t")
sweet.append(lbreak)
for item in sweet:
p.write("%s\n" % item)
</code></pre>
<p>My issue here is the 3rd element of each list within the sweet list, i.e. index [2], which is the search query (french-english dictionary, s.e.t.i, etc.). I do not want multiples in the 'sweet' list.</p>
<p>So where it says 'cgi perl tutorial' but twice, I need to get rid of the other search of 'cgi perl tutorial', and only leave the first one, within the sweet list.</p>
<p>I don't think I can use set for this, because I only want it to apply to the 3rd index of search queries, and I don't want it to get rid of duplicates of the same IP, or one of the others.</p>
| 2 | 2016-09-01T03:00:08Z | 39,262,127 | <p>Add <code>lbreak[2]</code> to a set, and only append lines whose <code>lbreak[2]</code> is not already in the set, something like:</p>
<pre><code>sweet = []
seen = set()
for line in f:
x = line.split(" ")
lbreak = x[0].split("\t")
if lbreak[2] not in seen:
sweet.append(lbreak)
seen.add(lbreak[2])
</code></pre>
| 3 | 2016-09-01T03:12:38Z | [
"python",
"duplicates"
] |
Combining Indices of Two Sorted Lists to Form a Third SuperSorted List | 39,262,084 | <p>I have two lists <code>['AAPL', 'MMM', 'AMAT']</code> and <code>['AMAT', 'AAPL', 'MMM']</code> and I want to create a third list based on the positions of each string in its respective list.</p>
<p>For example:
<code>AAPL</code> ranks 1st + 2nd = total 3, <code>MMM</code> ranks 2nd and 3rd = total 5, <code>AMAT</code> ranks 1st and 3rd = total 4. </p>
<p>Final list would be (by increasing cumulative position) <code>['AAPL', 'AMAT', 'MMM']</code>. </p>
<p>I don't even know where to begin with this. </p>
| 0 | 2016-09-01T03:07:23Z | 39,262,136 | <p>Just take one and do sort it by cumulative position.</p>
<pre><code>>>> a, b = ['AAPL', 'MMM', 'AMAT'], ['AMAT', 'AAPL', 'MMM']
>>> sorted(a, key=lambda x: a.index(x) + b.index(x))
['AAPL', 'AMAT', 'MMM']
</code></pre>
| 1 | 2016-09-01T03:13:42Z | [
"python",
"list"
] |
Combining Indices of Two Sorted Lists to Form a Third SuperSorted List | 39,262,084 | <p>I have two lists <code>['AAPL', 'MMM', 'AMAT']</code> and <code>['AMAT', 'AAPL', 'MMM']</code> and I want to create a third list based on the positions of each string in its respective list.</p>
<p>For example:
<code>AAPL</code> ranks 1st + 2nd = total 3, <code>MMM</code> ranks 2nd and 3rd = total 5, <code>AMAT</code> ranks 1st and 3rd = total 4. </p>
<p>Final list would be (by increasing cumulative position) <code>['AAPL', 'AMAT', 'MMM']</code>. </p>
<p>I don't even know where to begin with this. </p>
| 0 | 2016-09-01T03:07:23Z | 39,262,163 | <p>You can use a dictionary in order to save the items with sum of their indices, then create the expected list based on indices:</p>
<pre><code>>>> from collections import defaultdict
>>>
>>> d = defaultdict(int)
>>>
>>> for (ind1, j), (ind2, t) in zip(enumerate(a), enumerate(b)):
... d[j] += ind1
... d[t] += ind2
...
>>> sorted(d.keys(), key=lambda x: d[x])
['AAPL', 'AMAT', 'MMM']
</code></pre>
<p>If the lists don't have the same length you can use <code>itertools.zip_longest</code> (in Python 2, <code>izip_longest</code>) in order to <code>zip</code> the enumerate objects.</p>
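A sketch of that unequal-length case with <code>zip_longest</code> (the second list here is shortened for illustration); the exhausted list yields <code>None</code>, which is skipped:

```python
from itertools import zip_longest

a = ['AAPL', 'MMM', 'AMAT']
b = ['AMAT', 'AAPL']          # shorter list (made-up example)
totals = {}
for i, pair in enumerate(zip_longest(a, b)):
    for s in pair:
        if s is not None:     # fillvalue from the exhausted list
            totals[s] = totals.get(s, 0) + i
result = sorted(totals, key=totals.get)
```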
| 4 | 2016-09-01T03:16:26Z | [
"python",
"list"
] |
why this function get wrong result at some larger variable value | 39,262,094 | <p>I'm using the GNU Scientific Library (GSL) to define the <em>parabolic cylinder U function</em>. <a href="http://mathworld.wolfram.com/ParabolicCylinderFunction.html" rel="nofollow">Here</a> is the definition of this U function. There is a little difference between my definition and MathWorld's, because I want to use mine with the Gauss-Hermite integration method to calculate the infinite-interval integration, so I dismiss the exponential part. Here is my code:</p>
<pre><code>#include <gsl/gsl_errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_sf.h>
#include <math.h>
#include <string.h>
double ParabolicCylinderU(double a, double x)
{
double res;
double tmp1,tmp2;
tmp1=sqrt(M_PI)/(gsl_sf_gamma(3./4.+a/2.)*pow(2,a/2.+1./4.))*gsl_sf_hyperg_1F1(a/2.+1./4.,1./2.,pow(x,2)/2.);
tmp2=sqrt(M_PI)/(gsl_sf_gamma(1./4.+a/2.)*pow(2,a/2.-1./4.))*x*gsl_sf_hyperg_1F1(a/2.+3./4.,3./2.,pow(x,2)/2.);
res=tmp1-tmp2;
return res;
}
</code></pre>
<p>I use <code>scipy.special.pbdv</code> to check whether my definition is correct. For smaller <code>x</code> values the result agrees with <code>pbdv</code>, but when the value gets larger the result goes wrong. How can I solve this problem? Here is part of the output:</p>
<pre><code>5.3599999999999994 10.720004342260813
5.4000000000000004 10.801040023779478
5.4399999999999995 10.879249134730191
5.4800000000000004 10.961176175487582
5.5199999999999996 11.036212437780495
5.5600000000000005 11.115912986845069
5.5999999999999996 11.189683877125612
5.6400000000000006 11.289942688341265
5.6799999999999997 11.385711218186625
5.7200000000000006 11.379394963160701
5.7599999999999998 11.532254417763568
5.8000000000000007 11.575985086141165
5.8399999999999999 11.881533311150061
5.8800000000000008 11.896642911301599
5.9199999999999999 11.639708082791794
5.9600000000000009 11.550981603033073
6.0000000000000000 10.455916671990396
6.0399999999999991 9.3887694230651952
...
9.6799999999999997 5.1646711481042656E+024
9.7199999999999989 -1.5183962001815768E+025
9.7600000000000016 -3.5383625277571119E+025
9.8000000000000007 4.2306292738245936E+025
9.8399999999999999 -1.3554993237535422E+026
9.8799999999999990 -3.9599339761206319E+026
9.9200000000000017 4.4052629528738374E+026
9.9600000000000009 3.7902904410228328E+027
10.000000000000000 -1.6696086530684606E+021
</code></pre>
<p>The first column is the <code>x</code> value, the second column is the parabolic function value. It goes wrong at about <code>5.83999999</code>, because if we input <code>ParabolicCylinderU(-0.999999-1./2.,sqrt(2)*x)</code>, this should be nearly a straight line. The following is my examination Python code.</p>
<pre><code>#!/usr/bin/env python
from math import *
import numpy as np
from scipy import special
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
nz= 0.999998779431
x=np.linspace(0,10,250,endpoint=True)
d,dv=special.pbdv(nz,sqrt(2.)*(x))
plt.figure()
d=d*np.exp(np.power(x,2)/2)*np.power(2,nz/2.)
plt.plot(x,d)
plt.plot(x,np.zeros(x.shape[0]))
plt.savefig("scipy_parabolic.png")
</code></pre>
| 1 | 2016-09-01T03:08:27Z | 39,262,433 | <p>You can get <code>pbdv</code> source code from <a href="https://github.com/scipy/scipy/blob/master/scipy/special/specfun/specfun.f" rel="nofollow">specfun scipy code</a>
Look for <code>SUBROUTINE PBDV</code>; if you do not understand Fortran, you can convert it to C via <a href="http://www.netlib.org/f2c" rel="nofollow">f2c</a></p>
| 0 | 2016-09-01T03:51:06Z | [
"python",
"c",
"gsl"
] |
Tkinter bind widgets below a rectangle widget to a mouse event | 39,262,191 | <p>I hope I am explaining the problem correctly.
My example below is able to move two images defined on a canvas. The problem is that I want a rectangle, also defined on the canvas, on top of the images. When I do that using .tag_raise, the event triggered by mouse drag is triggered by the rectangle, not the images. </p>
<p>I tried using bing_class but that did not work. I tried to define a separate canvas for the rectangle but it has to overlay the main canvas and I got stuck.</p>
<p>How to keep the rectangle on top but bind the images to my mouse drag event?</p>
<pre><code>import Tkinter as tk # for Python2
import PIL.Image, PIL.ImageTk
win = tk.Tk()
canvas = tk.Canvas(win, height = 500, width = 500)
#Create a rectangle with stipples on top of the images
rectangle = canvas.create_rectangle(0, 0, 400, 300, fill = "gray", stipple = "gray12")
#Create two images
SPRITE = PIL.Image.open("image.jpg")
imagePIL = SPRITE.resize((100, 100))
imagePI = PIL.ImageTk.PhotoImage(imagePIL)
image1 = canvas.create_image(100, 100, image = imagePI, tags = "image")
image2 = canvas.create_image(200, 200, image = imagePI, tags = "image")
#Callback
# Here I select image1 or image2 depending on where I click, and
# drag them on the canvas. The problem is when I put the rectangle
# on top using tag_raise (see below).
def callback(event):
id = canvas.find_withtag(tk.CURRENT)
canvas.coords(id, (event.x, event.y))
#Binding
canvas.bind("<B1-Motion>", callback)
#Place the rectangle on top of all
canvas.pack()
# This is the problem. I want to have the rectangle on top and be able to use the callback
#canvas.tag_raise(rectangle)
canvas.mainloop()
</code></pre>
<p><strong>SOLUTION:</strong> I enhanced Nehal's answer with the following code. His answer had a glitch, by which images could be switched. In my enhancement I solve it by storing a lock for each image so that, while dragging an image around on the canvas, the same image is dragged. When I move e.g. image1 over image2 I notice that image1 does not completely move over image2, which is fine for me. </p>
<pre><code>import Tkinter as tk # for Python2
import PIL.Image, PIL.ImageTk
win = tk.Tk()
canvas = tk.Canvas(win, height = 500, width = 500)
#Create a rectangle with stipples on top of the images
rectangle = canvas.create_rectangle(0, 0, 400, 300, fill = "gray", stipple = "gray12")
#Create two images
SPRITE = PIL.Image.open("image.jpg")
imagePIL = SPRITE.resize((100, 100))
imagePI = PIL.ImageTk.PhotoImage(imagePIL)
image1 = canvas.create_image(100, 100, image = imagePI, tags = "image")
image2 = canvas.create_image(200, 200, image = imagePI, tags = "image")
images = [image1, image2]
locks = [True, True]
def getImage(x, y):
for image in images:
curr_x, curr_y = canvas.coords(image)
x1 = curr_x - imagePI.width()/2
x2 = curr_x + imagePI.width()/2
y1 = curr_y - imagePI.height()/2
y2 = curr_y + imagePI.height()/2
if (x1 <= x <= x2) and (y1 <= y <= y2):
return image
#Callback
# Here I select image1 or image2 depending on where I click, and
# drag them on the canvas.
def callback(event):
id = getImage(event.x, event.y)
if id:
if locks[images.index(id)] is False: #Hold on to the image on which I originally clicked
canvas.coords(id, (event.x, event.y))
def mouseClick(event):
id = getImage(event.x, event.y)
if id:
locks[images.index(id)] = False
print(locks)
def mouseRelease(event):
id = getImage(event.x, event.y)
if id:
locks[images.index(id)] = True
print(locks)
#Binding
canvas.bind("<ButtonPress-1>", mouseClick) #unlock the image to move it
canvas.bind("<ButtonRelease-1>", mouseRelease) #lock the image
canvas.bind("<B1-Motion>", callback)
#Place the rectangle on top of all
canvas.pack()
# This was the original problem
canvas.tag_raise(rectangle)
canvas.mainloop()
</code></pre>
| 1 | 2016-09-01T03:19:54Z | 39,263,169 | <p>I don't know a <code>tkinter</code> specific way to do this, however, you can try to get the coordinates of the closest image and play with them. Like this:</p>
<pre><code>import Tkinter as tk # for Python2
import PIL.Image, PIL.ImageTk
win = tk.Tk()
canvas = tk.Canvas(win, height = 500, width = 500)
#Create a rectangle with stipples on top of the images
rectangle = canvas.create_rectangle(0, 0, 400, 300, fill = "gray", stipple = "gray12")
#Create two images
SPRITE = PIL.Image.open("image.jpg")
imagePIL = SPRITE.resize((100, 100))
imagePI = PIL.ImageTk.PhotoImage(imagePIL)
image1 = canvas.create_image(100, 100, image = imagePI, tags = "image")
image2 = canvas.create_image(200, 200, image = imagePI, tags = "image")
images = [image1, image2]
def getImage(x, y):
for image in images:
curr_x, curr_y = canvas.coords(image)
x1 = curr_x - imagePI.width()/2
x2 = curr_x + imagePI.width()/2
y1 = curr_y - imagePI.height()/2
y2 = curr_y + imagePI.height()/2
if (x1 <= x <= x2) and (y1 <= y <= y2):
return image
#Callback
# Here I select image1 or image2 depending on where I click, and
# drag them on the canvas. The problem is when I put the rectangle
# on top using tag_raise (see below).
def callback(event):
id = getImage(event.x, event.y)
if id:
canvas.coords(id, (event.x, event.y))
#Binding
canvas.bind("<B1-Motion>", callback)
#Place the rectangle on top of all
canvas.pack()
# This is the problem. I want to have the rectangle on top and be able to use the callback
canvas.tag_raise(rectangle)
canvas.mainloop()
</code></pre>
<p><a href="http://i.stack.imgur.com/H1GeH.gif" rel="nofollow"><img src="http://i.stack.imgur.com/H1GeH.gif" alt="enter image description here"></a></p>
| 2 | 2016-09-01T05:17:01Z | [
"python",
"user-interface",
"canvas",
"tkinter",
"tkinter-canvas"
] |
Rolling standard deviation with Pandas, and NaNs | 39,262,195 | <p>I have data that looks like this:</p>
<pre><code>1472698113000000000 -28.84
1472698118000000000 -26.69
1472698163000000000 -27.65
1472698168000000000 -26.1
1472698238000000000 -27.33
1472698243000000000 -26.47
1472698248000000000 -25.24
1472698253000000000 -25.53
1472698283000000000 -27.3
...
</code></pre>
<p>This is a time series that grows. Each time it grows, I attempt to get the rolling standard deviation of the set, using <code>pandas.rolling_std</code>. Each time, the result includes NaNs, which I cannot use (I am trying to insert the result into InfluxDB, and it complains when it sees the NaNs.)</p>
<p>I've experimented with different window sizes. I am doing this on different series, of varying rates of growth and current sizes (some just a couple of measurements long, some hundreds or thousands).</p>
<p>Simply, I just want to have a rolling standard deviation in InfluxDB so that I can graph it and watch how the source data is changing over time, with respect to its mean. How can I overcome this NaN problem?</p>
| 1 | 2016-09-01T03:20:35Z | 39,262,557 | <p>If you are doing something like</p>
<p><code>df.rolling(5).std()</code></p>
<p>and getting</p>
<pre><code>0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 5.032395e+10 1.037386
5 5.345559e+10 0.633024
6 4.263215e+10 0.967352
7 3.510698e+10 0.822879
8 1.767767e+10 0.971972
</code></pre>
<p>You can strip away the NaNs by using <code>.dropna()</code>.</p>
<p><code>df.rolling(5).std().dropna()</code>:</p>
<pre><code>4 5.032395e+10 1.037386
5 5.345559e+10 0.633024
6 4.263215e+10 0.967352
7 3.510698e+10 0.822879
8 1.767767e+10 0.971972
</code></pre>
| 1 | 2016-09-01T04:06:36Z | [
"python",
"pandas",
"influxdb",
"standard-deviation"
] |
Is my adaptation of point-in-polygon (jordan curve theorem) in python correct? | 39,262,210 | <p><strong>Problem</strong></p>
<p>I recently found a need to determine if my points are inside of a polygon. So I learned about <a href="https://sidvind.com/wiki/Point-in-polygon:_Jordan_Curve_Theorem" rel="nofollow">this</a> approach in C++ and adapted it to python. However, the C++ code I was studying isn't quite right I think? I believe I have fixed it, but I am not quite sure so I was hoping folks brighter than me might help me caste some light on this? </p>
<p>The theorem is super simple and the idea is like this, given an nth closed polygon you draw an arbitrary line, if your point is inside, you line will intersect with the edges an odd number of times. Otherwise, you will be even and it is outside the polygon. Pretty freaking cool. </p>
<p>I had the following test cases: </p>
<pre class="lang-python prettyprint-override"><code> polygon_x = [5, 5, 11, 10]
polygon_y = [5, 10, 5, 10]
test1_x = 6
test1_y = 6
result1 = point_in_polygon(test1_x, test1_y, polygon_x, polygon_y)
print(result1)
test2_x = 13
test2_y = 5
result2 = point_in_polygon(test2_x, test2_y, polygon_x, polygon_y)
print(result2)
</code></pre>
<p>The above would give me both false if I defined it as follows: </p>
<pre><code> if polygon_x[i] < polygon_x[(i+1) % length]:
temp_x = polygon_x[i]
temp_y = polygon_x[(i+1) % length]
else:
temp_x = polygon_x[(i+1) % length]
temp_y = polygon_x[i]
</code></pre>
<p>This is wrong! I should be getting <code>true</code> for <code>result1</code> and then <code>false</code> for <code>result2</code>. So clearly, something is funky. </p>
<p>The code I was reading in C++ makes sense except for the above. In addition, it failed my test case which made me think that <code>temp_y</code> should be defined with <code>polygon_y</code> and not <code>polygon_x</code>. Sure enough, when I did this, my test case for <code>(6,6)</code> passes. It still fails when my points are on the line, but as long as I am inside the polygon, it will pass. Expected behavior. </p>
<h2>Polygon code adopted to python</h2>
<pre><code> def point_in_polygon(self, target_x, target_y, polygon_x, polygon_y):
print(polygon_x)
print(polygon_y)
#Variable to track how many times ray crosses a line segment
crossings = 0
temp_x = 0
temp_y = 0
length = len(polygon_x)
for i in range(0,length):
if polygon_x[i] < polygon_x[(i+1) % length]:
temp_x = polygon_x[i]
temp_y = polygon_y[(i+1) % length]
else:
temp_x = polygon_x[(i+1) % length]
temp_y = polygon_y[i]
print(str(temp_x) + ", " + str(temp_y))
#check
if target_x > temp_x and target_x <= temp_y and (target_y < polygon_y[i] or target_y <= polygon_y[(i+1)%length]):
eps = 0.000001
dx = polygon_x[(i+1) % length] - polygon_x[i]
dy = polygon_y[(i+1) % length] - polygon_y[i]
k = 0
if abs(dx) < eps:
k = 999999999999999999999999999
else:
k = dy/dx
m = polygon_y[i] - k * polygon_x[i]
y2 = k*target_x + m
if target_y <= y2:
crossings += 1
print(crossings)
if crossings % 2 == 1:
return True
else:
return False
</code></pre>
<p><strong>Summary</strong></p>
<p>Can someone please explain to me what the <code>temp_x</code> and <code>temp_y</code> approaches are doing? Also, if my fix for redefining the <code>temp_x</code> for <code>polygon_x</code> and <code>temp_y</code> for <code>polygon_y</code> is the correct approach? I doubt it. Here is why. </p>
<p>What is going on for <code>temp_x</code> and <code>temp_y</code> doesn't quite make sense to me. For <code>i = 0</code>, clearly <code>polygon_x[0] < polygon_x[1]</code> is <code>false</code>, so we get <code>temp_x[1] = 5</code> and <code>temp_y[0] = 5</code>. That is <code>(5,5)</code>. This just happens to be one of my pairs. However, suppose I feed the algorithm my points out of order (by axis, pairwise integrity is always a must), something like: </p>
<pre><code>x = [5, 10, 10, 5]
y = [10,10, 5, 5]
</code></pre>
<p>In this case, when <code>i = 0</code>, we get <code>temp_x[1] = 10</code> and <code>temp_y[0] = 10</code>. Okay, by coincidence <code>(10,10)</code>. I also tested points against the "corrected" algorithm <code>(9,9)</code> and it is still inside. In short, I am trying to find a counterexample, for why my fix won't work, but I can't. If this is working, I need to understand what the method is doing and hope someone could help explain it to me? </p>
<p>Regardless, if I am right or wrong, I would appreciate it if someone could help shed some better light on this problem. I'm even open to solving the problem in a more efficient way for n-polygons, but I want to make sure I am understanding code correctly. As a coder, I am uncomfortable with a method that doesn't quite make sense. </p>
<p>Thank you so much for listening to my thoughts above. Any suggestions greatly welcomed. </p>
| 4 | 2016-09-01T03:22:37Z | 39,266,511 | <p>You've misunderstood what the <code>x1</code> and <code>x2</code> values in the linked C++ code are for, and that misunderstanding has caused you to pick inappropriate variable names in Python. Both of the variables contain <code>x</code> values, so <code>temp_y</code> is a very misleading name. Better names might be <code>min_x</code> and <code>max_x</code>, since they are the minimum and maximum of the <code>x</code> values of the vertices that make up the polygon edge. A clearer version of the code might be written as:</p>
<pre><code>for i in range(length):
min_x = min(polygon_x[i], polygon_x[(i+1)%length])
max_x = max(polygon_x[i], polygon_x[(i+1)%length])
    if min_x < target_x <= max_x:
# ...
</code></pre>
<p>This is perhaps a little less efficient than the C++ style of code, since calling both <code>min</code> and <code>max</code> will compare the values twice.</p>
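If that double comparison matters, a conditional expression gets both values with a single comparison, mirroring the C++ style (sample endpoint values shown for illustration):

```python
x1, x2 = 10, 5                                    # sample edge endpoints
min_x, max_x = (x1, x2) if x1 < x2 else (x2, x1)  # one comparison, both values
```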
<p>Your comment about the order of the points suggests that there's a further misunderstanding going on, which might explain the unexpected results you were seeing when setting <code>temp_y</code> to a value from <code>polygon_x</code>. The order of the coordinates in the polygon lists is important, as the edges go from one coordinate pair to the next around the list (with the last pair of coordinates connecting to the first pair to close the polygon). If you reorder them, the edges of the polygon will get switched around.</p>
<p>The example coordinates you give in your code (<code>polygon_x = [5, 5, 11, 10]</code> and <code>polygon_y = [5, 10, 5, 10]</code>) don't describe a normal kind of polygon. Rather, you get a (slightly lopsided) bow-tie shape, with two diagonal edges crossing each other like an <code>X</code> in the middle. That's not actually a problem for this algorithm though.</p>
<p>However, the first point you're testing lies exactly on one of those diagonal edges (the one that wraps around the list, from the last vertex, <code>(10, 10)</code> to the first, <code>(5, 5)</code>). Whether the code will decide it's inside or outside of the polygon likely comes down to floating point rounding and the choice of operator between <code><</code> or <code><=</code>. Either answer could be considered "correct" in this situation.</p>
<p>When you reordered the coordinates later in the question (and incidentally change an <code>11</code> to a <code>10</code>), you turned the bow-tie into a square. Now the <code>(6, 6)</code> test is fully inside the shape, and so the code will work if you don't assign a <code>y</code> coordinate to the second <code>temp</code> variable.</p>
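For reference, a compact even-odd ray-casting sketch of the same idea (this is a standalone rewrite, not the questioner's exact method; it casts a horizontal ray to the right and toggles on each edge crossing):

```python
def point_in_polygon(tx, ty, xs, ys):
    """Even-odd test: True if (tx, ty) is inside the polygon (xs, ys)."""
    inside = False
    n = len(xs)
    for i in range(n):
        j = (i + 1) % n
        # does edge (i, j) straddle the horizontal line y = ty?
        if (ys[i] > ty) != (ys[j] > ty):
            # x coordinate where the edge crosses that horizontal line
            cross_x = xs[i] + (ty - ys[i]) * (xs[j] - xs[i]) / (ys[j] - ys[i])
            if tx < cross_x:          # crossing lies to the right of the point
                inside = not inside
    return inside
```

With the square from the reordered coordinates (`xs = [5, 10, 10, 5]`, `ys = [10, 10, 5, 5]`), `(6, 6)` tests inside and `(13, 5)` tests outside. Points exactly on an edge remain ambiguous, as discussed above.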
| 3 | 2016-09-01T08:32:41Z | [
"python",
"python-3.x",
"polygon",
"intersection",
"points"
] |
Order Model by Dependent Field | 39,262,417 | <p>I have 2 models, PlayerPick and Game. PlayerPick has a foreign key to the game model, showing which game the pick is for. If I want to order all of the picks by the game time, what would be the best way to do so? Is there a way to automatically add the gameTime field to the player pick model? Please see the code below</p>
<pre><code>class PlayerPick(models.Model):
player_profile = models.ForeignKey('PlayerProfile')
team = models.ForeignKey('Team')
class Game(models.Model):
team1 = models.ForeignKey('Team', related_name="game_set_team1")
team2 = models.ForeignKey('Team', related_name="game_set_team2")
time = models.DateTimeField(null=True, blank=True)
</code></pre>
| 0 | 2016-09-01T03:49:25Z | 39,262,710 | <pre><code>class PlayerPick(models.Model):
player_profile = models.ForeignKey('PlayerProfile')
team = models.ForeignKey(Game)
class Game(models.Model):
team1 = models.ForeignKey('Team', related_name="game_set_team1")
team2 = models.ForeignKey('Team', related_name="game_set_team2")
time = models.DateTimeField(null=True, blank=True)
</code></pre>
<p>You can do it like so</p>
<pre><code>PlayerPick.objects.all().order_by('team__time')  # follow the foreign key and sort by the related Game's time
</code></pre>
<p>But you can always define a custom model Manager that if you will access that query regularly it will make your life easier. Please check:
<a href="http://blog.kevinastone.com/django-model-behaviors.html" rel="nofollow">http://blog.kevinastone.com/django-model-behaviors.html</a></p>
| 0 | 2016-09-01T04:26:43Z | [
"python",
"django",
"order",
"field",
"models"
] |
request.get python url-path instead of query argument passing in | 39,262,434 | <p>I am trying to pass in parameters into my REST API get request, like this:</p>
<p><code>parameters = {'key':value}</code></p>
<p><code>response = requests.get('some url', params= parameters)</code></p>
<p>but the API that I am using uses a url-path instead of a query argument. I want it like:</p>
<p><code>/api/resource/parametervalue</code></p>
<p>and it comes out like:</p>
<p><code>/api/resource?parameter=value</code></p>
<p>I've searched all over the web to find if I can somehow change what response.get takes in, but I can't seem to find it. What should I do?</p>
<p>Thank you!</p>
| 0 | 2016-09-01T03:51:12Z | 39,262,503 | <p>You can try it like this:
when what you need is a URL of the form /api/resource/parametervalue, build the path yourself instead of passing <code>params</code>:</p>
<pre><code># if parameters is the plain string value:
response = requests.get('some url' + '/' + parameters)   # url: /api/resource/parametervalue
# if parameters is a dict, requests builds a query string:
response = requests.get('some url', params=parameters)   # url: /api/resource?parameter=value
</code></pre>
| 0 | 2016-09-01T03:59:41Z | [
"python",
"rest",
"api",
"get"
] |
request.get python url-path instead of query argument passing in | 39,262,434 | <p>I am trying to pass in parameters into my REST API get request, like this:</p>
<p><code>parameters = {'key':value}</code></p>
<p><code>response = requests.get('some url', params= parameters)</code></p>
<p>but the API that I am using uses a url-path instead of a query argument. I want it like:</p>
<p><code>/api/resource/parametervalue</code></p>
<p>and it comes out like:</p>
<p><code>/api/resource?parameter=value</code></p>
<p>I've searched all over the web to find if I can somehow change what response.get takes in, but I can't seem to find it. What should I do?</p>
<p>Thank you!</p>
| 0 | 2016-09-01T03:51:12Z | 39,267,513 | <p>The <code>params</code> optional argument will prepend a <code>?</code> before the query string as defined in the source code <a href="https://github.com/kennethreitz/requests/blob/master/requests/models.py#L69" rel="nofollow" title="here">here</a>.</p>
<p>You're better off modifying the URL before you call <code>requests.get</code></p>
<pre><code>>>> parameters = ('param', 'value')
>>> base_url = 'http://host/api/resource'
>>> query = ''.join((parameters[0], parameters[1]))
>>> url = '/'.join((base_url, query))
</code></pre>
<p>So <code>url</code> becomes:</p>
<pre><code>>>> url
'http://host/api/resource/paramvalue'
>>> response = requests.get(url)
</code></pre>
<p>If you have different paths in the same host that you use at different times you could even break down <code>base_url</code> into <code>host</code> and <code>path</code> and then join <code>host</code>, <code>path</code> and <code>query</code>. Remember the <code>join</code> method takes just one iterable argument.</p>
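If the parameter value may contain characters that are not URL-safe (spaces, slashes, etc.), the standard library can percent-encode the path segment before joining; a sketch with a made-up host and value:

```python
from urllib.parse import quote, urljoin

base_url = 'http://host/api/resource/'           # trailing slash so urljoin appends
value = 'param value'                            # made-up value containing a space
url = urljoin(base_url, quote(value, safe=''))   # -> http://host/api/resource/param%20value
```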
| 0 | 2016-09-01T09:18:15Z | [
"python",
"rest",
"api",
"get"
] |
Installing cuDNN for Theano without root access | 39,262,468 | <p>Can I install <a href="https://developer.nvidia.com/cudnn" rel="nofollow">cuDNN</a> locally without root access ?</p>
<p>I don't have root access to a linux machine I am using (the distro is openSuse), but I have CUDA 7.5 already installed. </p>
<p>I am using Theano and I need cuDNN to improve the speed of the operations on the GPU.</p>
<p>I downloaded <code>cudnn-7.5-linux-x64-v5.1</code> from Nvidia and as per the instructions I need to copy the CuDNN archive content to CUDA installation folder, i.e. (cuda/lib64/ and cuda/include/). But that would require me to have root access.</p>
<p>Is it possible that I extract the cudnn archive locally and provide theano with the path to the cudnn library ?</p>
| 0 | 2016-09-01T03:55:49Z | 39,275,036 | <p>You could copy the entire CUDA SDK to your home and tell Theano and others that they should use your local copy of CUDA by adding/modifying these environment variables in your <code>~/.bashrc</code></p>
<pre><code>export CUDA_ROOT=~/program/cuda-7.5
export CUDA_HOME=~/program/cuda-7.5
export PATH=${CUDA_HOME}/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib64/nvidia:${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
</code></pre>
<p>Then you could simply extract cuDNN to your local CUDA SDK dir <code>~/program/cuda-7.5/</code></p>
| 1 | 2016-09-01T15:03:14Z | [
"python",
"linux",
"gpu",
"theano-cuda"
] |
PyCharm - always show inspections | 39,262,493 | <p>PyCharm displays little bars on the scroll bar for things like code warnings. This feature is called "inspection".</p>
<p>If you move the mouse cursor over a bar, it shows a preview of the code annotated with the inspection.</p>
<p>I find this really fiddly, and I'd actually like full inspection notices to be displayed <em>all the time</em> in the normal editor, just like it appears in the small preview.</p>
<p>Is there any way I can achieve this?</p>
<p><a href="http://i.stack.imgur.com/NMRlg.png" rel="nofollow"><img src="http://i.stack.imgur.com/NMRlg.png" alt="screenshot showing an example PyCharm inspection"></a></p>
| 3 | 2016-09-01T03:58:42Z | 39,416,640 | <p>According to <a href="https://www.jetbrains.com/help/pycharm-edu/3.0/inspection-tool-window.html" rel="nofollow">this PyCharm's documentation</a> there seems to be an <em>Inspection Tool Window</em> which <em>displays inspection results on separate tabs.</em>.</p>
<p>You can access the tool window through menu <strong>Code | Inspect Code</strong>.</p>
<p>I just tried it and it showed a tab like this:</p>
<p><a href="http://i.stack.imgur.com/GlKFl.png" rel="nofollow"><img src="http://i.stack.imgur.com/GlKFl.png" alt="enter image description here"></a></p>
| 1 | 2016-09-09T17:13:47Z | [
"python",
"ide",
"pycharm",
"lint"
] |
Preserve quotes and also add data with quotes in Ruamel | 39,262,556 | <p>I am using Ruamel to preserve quote styles in human-edited YAML files.</p>
<p>I have example input data as:</p>
<pre><code>---
a: '1'
b: "2"
c: 3
</code></pre>
<p>I read in data using:</p>
<pre><code>def read_file(f):
with open(f, 'r') as _f:
return ruamel.yaml.round_trip_load(_f.read(), preserve_quotes=True)
</code></pre>
<p>I then edit that data:</p>
<pre><code>data = read_file('in.yaml')
data['foo'] = 'bar'
</code></pre>
<p>I write back to disk using:</p>
<pre><code>def write_file(f, data):
with open(f, 'w') as _f:
_f.write(ruamel.yaml.dump(data, Dumper=ruamel.yaml.RoundTripDumper, width=1024))
write_file('out.yaml', data)
</code></pre>
<p>And the output file is:</p>
<pre><code>a: '1'
b: "2"
c: 3
foo: bar
</code></pre>
<p>Is there a way I can enforce hard quoting of the string 'bar' without also enforcing that quoting style throughout the rest of the file?</p>
<p>(Also, can I stop it from deleting the three dashes <code>---</code> ?)</p>
| 2 | 2016-09-01T04:06:33Z | 39,263,202 | <p>In order to preserve quotes (and literal block style) for string scalars, ruamel.yaml¹, in round-trip mode, represents these scalars as <code>SingleQuotedScalarString</code>, <code>DoubleQuotedScalarString</code> and <code>PreservedScalarString</code>. The class definitions for these very thin wrappers can be found in <code>scalarstring.py</code>.
When serializing, such instances are written "as they were read", although the representer sometimes falls back to double quotes when things get difficult, as double quoting can represent any string.</p>
<p>To get this behaviour when adding new key-value pairs (or when updating an existing pair), you just have to create these instances yourself:</p>
<pre><code>import sys
import ruamel.yaml
from ruamel.yaml.scalarstring import SingleQuotedScalarString, DoubleQuotedScalarString
yaml_str = """\
---
a: '1'
b: "2"
c: 3
"""
data = ruamel.yaml.round_trip_load(yaml_str, preserve_quotes=True)
data['foo'] = SingleQuotedScalarString('bar')
data.yaml_add_eol_comment('# <- single quotes added', 'foo', column=20)
ruamel.yaml.round_trip_dump(data, sys.stdout, explicit_start=True)
</code></pre>
<p>gives:</p>
<pre><code>---
a: '1'
b: "2"
c: 3
foo: 'bar' # <- single quotes added
</code></pre>
<p>The <code>explicit_start=True</code> argument recreates the (superfluous) document start marker. Whether such a marker was in the original file or not is not "known" by the top-level dictionary object, so you have to re-add it by hand.</p>
<p>Please note that without <code>preserve_quotes</code>, there would be (single) quotes around the values <code>1</code> and <code>2</code> anyway to make sure they are seen as string scalars and not as integers.</p>
<hr>
<p>¹ <sub>Of which I am the author.</sub></p>
| 2 | 2016-09-01T05:20:26Z | [
"python",
"ruamel.yaml"
] |
Pandas plot sharex=False does not behave as expected | 39,262,630 | <p>I am trying to plot histograms of a couple of series from a dataframe. Series have different maximum values:</p>
<pre><code>df[[
'age_sent', 'last_seen', 'forum_reply', 'forum_cnt', 'forum_exp', 'forum_quest'
]].max()
</code></pre>
<p>returns:</p>
<pre><code>age_sent 1516.564016
last_seen 986.790035
forum_reply 137.000000
forum_cnt 155.000000
forum_exp 13.000000
forum_quest 10.000000
</code></pre>
<p>When I tried to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html" rel="nofollow">plot histograms</a> I used <code>sharex=False, subplots=True</code>, but it looks like the <code>sharex</code> property is ignored:</p>
<pre><code>df[[
'age_sent', 'last_seen', 'forum_reply', 'forum_cnt', 'forum_exp', 'forum_quest'
]].plot.hist(figsize=(20, 10), logy=True, sharex=False, subplots=True)
</code></pre>
<p><a href="http://i.stack.imgur.com/ymcoa.png" rel="nofollow"><img src="http://i.stack.imgur.com/ymcoa.png" alt="enter image description here"></a></p>
<hr>
<p>I can clearly plot each of them separately, but this is less desirable. Also I would like to know what I am doing wrong.</p>
<hr>
<p>The data I have is too big to be included, but it is easy to create something similar:</p>
<pre><code>ttt = pd.DataFrame({'a': pd.Series(np.random.uniform(1, 1000, 100)), 'b': pd.Series(np.random.uniform(1, 10, 100))})
</code></pre>
<p>Now I have:</p>
<pre><code>ttt.plot.hist(logy=True, sharex=False, subplots=True)
</code></pre>
<p>Check the x axis. I want it to be this way (but using one command with subplots).</p>
<pre><code>ttt['a'].plot.hist(logy=True)
ttt['b'].plot.hist(logy=True)
</code></pre>
| 3 | 2016-09-01T04:15:56Z | 39,275,085 | <p>The <code>sharex</code> option (most likely) just falls through to mpl and sets whether panning / zooming one axes changes the other.</p>
<p>The issue you are having is that the same bins are being used for all of the histograms (which is enforced by <a href="https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L2053" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L2053</a> if I am understanding the code correctly) because pandas assumes that if you plot multiple histograms then you are probably plotting columns of similar data, so using the same binning makes them comparable.</p>
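<p>The gist of the fix is to let each axes compute its own bins instead of sharing one set across columns. A minimal matplotlib-only sketch of that idea (hypothetical data; <code>bins='auto'</code> needs numpy >= 1.11):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, just so the sketch runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = {"a": rng.uniform(1, 1000, 100), "b": rng.uniform(1, 10, 100)}

fig, axes = plt.subplots(len(data), 1, tight_layout=True)
for ax, (name, values) in zip(axes, data.items()):
    # bins are computed per column here, not once for the whole frame
    ax.hist(values, bins="auto", log=True, label=name)
    ax.legend()
```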
<p>Assuming you have mpl >= 1.5 and numpy >= 1.11 you should write yourself a little helper function like</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import numpy as np
plt.ion()
def make_hists(df, fig_kwargs=None, hist_kwargs=None,
style_cycle=None):
'''
Parameters
----------
df : pd.DataFrame
Datasource
fig_kwargs : dict, optional
kwargs to pass to `plt.subplots`
        defaults to {'figsize': (4, 1.5*len(df.columns)),
                     'tight_layout': True}
hist_kwargs : dict, optional
Extra kwargs to pass to `ax.hist`, defaults
        to `{'log': True, 'bins': 'auto'}`
style_cycle : cycler
Style cycle to use, defaults to
mpl.rcParams['axes.prop_cycle']
Returns
-------
fig : mpl.figure.Figure
The figure created
ax_list : list
The mpl.axes.Axes objects created
arts : dict
maps column names to the histogram artist
'''
if style_cycle is None:
style_cycle = mpl.rcParams['axes.prop_cycle']
if fig_kwargs is None:
fig_kwargs = {}
if hist_kwargs is None:
hist_kwargs = {}
hist_kwargs.setdefault('log', True)
    # bins='auto' requires numpy >= 1.11
hist_kwargs.setdefault('bins', 'auto')
cols = df.columns
fig_kwargs.setdefault('figsize', (4, 1.5*len(cols)))
fig_kwargs.setdefault('tight_layout', True)
fig, ax_lst = plt.subplots(len(cols), 1, **fig_kwargs)
arts = {}
for ax, col, sty in zip(ax_lst, cols, style_cycle()):
h = ax.hist(col, data=df, **hist_kwargs, **sty)
ax.legend()
arts[col] = h
return fig, list(ax_lst), arts
dist = [1, 2, 5, 7, 50]
col_names = ['weibull $a={}$'.format(alpha) for alpha in dist]
test_df = pd.DataFrame(np.random.weibull(dist,
(10000, len(dist))),
columns=col_names)
make_hists(test_df)
</code></pre>
<p><a href="http://i.stack.imgur.com/8aiH1.png" rel="nofollow"><img src="http://i.stack.imgur.com/8aiH1.png" alt="enter image description here"></a></p>
| 2 | 2016-09-01T15:05:06Z | [
"python",
"pandas",
"matplotlib"
] |
How can I either deny a login or registration for a user based on the current session | 39,262,664 | <p>How can I deny access to the login page, or redirect an active user to the logged-in screen?</p>
<p>I want users to be able to access pages only when the current session allows it.</p>
<p>Can this be done directly in the HTML code, or only in views?</p>
| 0 | 2016-09-01T04:21:07Z | 39,262,946 | <p>If I understand you correctly, you mean that the user shouldn't see the login and signup links (maybe in the topbar). This can be done in the template as:</p>
<pre><code>{% if user.is_authenticated %}
<!-- show logout link/button -->
{% else %}
<!-- show login and signup links/buttons -->
{% endif %}
</code></pre>
<p>You should additionally take care of the login view in your Python code for consistency (maybe security too?). I think you can do this (not tested):</p>
<pre><code>def login(request):
# check if it's a GET/POST method
if request.user.is_authenticated():
return redirect(reverse('your_homepage_for_example'))
# do login
</code></pre>
| 0 | 2016-09-01T04:56:51Z | [
"python",
"html",
"django",
"redirect",
"views"
] |
How can I add a constant percentage to each wedge of a pie chart using matplotlib | 39,262,783 | <p>Code snippet looks like this</p>
<pre><code>#df is dataframe with 6 rows, with each row index being the label of the sector
plt.pie(df.iloc[:,0], labels= df.index) #plot the first column of dataframe as a pie chart
</code></pre>
<p>It generates a pie chart like this:</p>
<p><a href="http://i.stack.imgur.com/z9PXx.png" rel="nofollow">Pie Chart with the 6 sectors.</a></p>
<p>As you can see, the Kitchen & Entertainment sectors are very small. I want to make each sector span a minimum angle of 10 degrees (1/36 of the circle).</p>
<p>This would effectively mean fitting the data over 300 degrees (10 degrees for each sector, and we have 6 - Lighting, Entertainment, Kitchen, Cooling, Fridge, Others)</p>
| 2 | 2016-09-01T04:36:33Z | 39,265,862 | <blockquote>
<p>I want to make each sector have a minimum value of 10 degree</p>
</blockquote>
<p>Solving this for the general case is difficult and requires further definition. This introduces a skew into the results, and to decide between different algorithms, you'd need to specify a metric defining how close some skewed result is to the original.</p>
<p>However, there is a very simple thing you can do which will guarantee your requirement. Say the length of your DataFrame is <em>n</em>, the sum of its elements is <em>s</em>, and you want the result to have no less than <em>α</em> (in your case, <em>α = 1./36</em>). If you would simply add some <em>x</em> to all elements, then the new sum would be </p>
<p><em>s + nx</em></p>
<p>Assuming that no element is negative, the smallest element would have weight at least <em>x / (s + nx)</em>. </p>
<p>Solving for </p>
<p><em>x / (s + nx) = α</em></p>
<p>Gives</p>
<p><em>x = s α / (1 - αn)</em>.</p>
<p>So, if the Series you wish to pie-chart is <code>df.a</code>, you could do</p>
<pre><code>alpha = 1. / 36  # the minimum relative weight
x = max(df.a.sum() * alpha / (1 - alpha * len(df)), 0)
</code></pre>
<p>and then plot instead <code>df.a + x</code>.</p>
<p><strong>Example</strong></p>
<pre><code>alpha = 1. / 36
df = pd.DataFrame({'a': [1, 2, 3, 40, 40, 50, 50, 500]})
x = max(df.a.sum() * alpha / (1 - alpha * len(df)), 0)
</code></pre>
<p>You can check the smallest relative weight of <code>df.a + x</code></p>
<pre><code>(df.a + x).min() / (df.a + x).sum()
</code></pre>
<p>is indeed greater than <em>α</em>.</p>
| 2 | 2016-09-01T07:58:24Z | [
"python",
"pandas",
"matplotlib"
] |