Q:
Python. Regular expression
How can I find everything that comes after the symbols #TR= and is inside [ ], using the re module? For example: #TR=[ dfgg dfgddfg dgfgf dgdgdg dfgfg ]
A:
import re
txt = '#TR=[ dfgg ] a kuku #TR=[ala ma kota]'
If you want to search for just the first occurrence of this pattern, use:
matches = re.search(r'#TR=\[([^\]]*)\]', txt)
if matches:
    print(repr(matches.group(1)))
' dfgg '
If you want to find all occurrences in the text, use:
matches = re.findall(r'#TR=\[([^\]]*)\]', txt)
if matches:
    print(matches)
[' dfgg ', 'ala ma kota']
Remember to check whether the characters you are searching for have special meaning in regular expressions (like [ or ]). If they are special, escape them with the backslash: \[.
Also remember that, by default, regular expressions are "greedy", which means they try to match as much text as possible; so if you use .* (which matches any character except a newline) instead of [^\]]* (which matches everything up to the next ] and stops before it), too much text could be matched:
matches = re.findall(r'#TR=\[(.*)\]', txt)
if matches:
    print(matches)
[' dfgg ] a kuku #TR=[ala ma kota']
You can also use the "non-greedy" modifier ? in your pattern, after the qualifier (*, +) which enables the "the-less-characters-the-better" matching (use *?, +?). The result could be more readable:
r'#TR=\[(.*?)\]'
instead of:
r'#TR=\[([^\]]*)\]'
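For example, the non-greedy pattern gives the same result as the character-class version on the sample text above:

```python
import re

txt = '#TR=[ dfgg ] a kuku #TR=[ala ma kota]'
# non-greedy: each .*? stops at the first following ]
print(re.findall(r'#TR=\[(.*?)\]', txt))  # [' dfgg ', 'ala ma kota']
```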
There's a great online tool to test your patterns as-you-type: RegExr by Grant Skinner.
A:
import re
# compile the regex
exp = re.compile(r'.*\[(.*)\].*')
txt = r"#TR=[ dfgg dfgddfg dgfgf dgdgdg dfgfg ]"
match = exp.match(txt)
# grab the text between the square brackets
result = match.group(1)
A:
(?<=#TR=\[)[^]]*(?=\])
Q:
Exploitable Python Functions
This question is similar to Exploitable PHP Functions.
Tainted data comes from the user, or more specifically an attacker. When a tainted variable reaches a sink function, then you have a vulnerability. For instance a function that executes a sql query is a sink, and GET/POST variables are sources of taint.
What are all of the sink functions in Python? I am looking for functions that introduce a vulnerability or software weakness. I am particularly interested in Remote Code Execution vulnerabilities. Are there whole classes/modules that contain dangerous functionality? Do you have any examples of interesting Python vulnerabilities?
A:
eval and exec are the classics. However, open and file can be abused too:
open('/proc/kcore', 'w').write('0' * 1000 * 1000 * 1000)
Then there are the os, sys, subprocess, and dircache modules. Pretty much anything that touches the filesystem or can be used to turn data into executable code (like os.system) is going to be on the list.
As S. Lott pointed out in the comments, writing to the filesystem and executing arbitrary external programs aren't Python-specific. However, they are worth security auditors' consideration. Most of these functions can be safely used without too much concern for security. eval and exec, on the other hand, are great big red flags. Using them safely requires meticulous care.
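A quick illustration of why eval is such a red flag, and of the stdlib's ast.literal_eval as the safe alternative when all you need to parse are literals:

```python
import ast

# eval will happily execute arbitrary expressions...
print(eval("__import__('os').getcwd()"))  # runs os.getcwd()

# ...while ast.literal_eval only accepts Python literals
print(ast.literal_eval("[1, 2, 3]"))  # [1, 2, 3]
try:
    ast.literal_eval("__import__('os').getcwd()")
except ValueError:
    print("rejected")  # non-literal input is refused
```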
A:
right from the pickle documentation:
Warning
The pickle module is not intended to be secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
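A sketch of why that warning matters: during unpickling, a crafted payload can make pickle call any callable with attacker-chosen arguments. Here the callable is a harmless eval of os.getcwd, but it could just as easily be os.system:

```python
import pickle

class Malicious(object):
    # __reduce__ tells pickle how to rebuild the object; an attacker
    # controls both the callable and its arguments
    def __reduce__(self):
        return (eval, ("__import__('os').getcwd()",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # executes the expression above
print(result)  # the current working directory -- arbitrary code ran
```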
A:
I tend toward the paranoid when looking for this kind of thing, all the more so because I tend to do a lot of metaprogramming.
most side effect commands (which other posts cover)
file manipulation (open, tarfile, zipfile, ...)
network calls (urllib2, socket, ...)
data serialization/persistence (pickle, shelve, ...)
process/thread management (subprocess, os.fork, os.kill, ...)
builtins
getattr
setattr
delattr
eval
exec
execfile
__import__
And probably others I'm forgetting. I'm also wary of user input going through functions where I'm modifying sys.path, sys.modules, etc.
A:
The subprocess module supersedes these now-deprecated ways of executing commands/processes:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
There is also exec which will execute python code and eval which will "evaluate" an expression and can be used to manipulate variables.
A:
The input function (in Python 2), which evaluates the given string and returns the result, has some restrictions, but may still be exploitable.
Q:
Is it possible to implement a Ruby-like internal DSL in Python?
Is it possible to implement an internal DSL in a language without macros? Has anyone succeeded in implementing a Ruby-like internal DSL in Python?
I am trying to develop a simple state machine with a more intuitive syntax like:
start -> Event -> Next ->Action
A:
I am having a bit of trouble grokking your question.
AFAIU, you are asking
Can you implement a Ruby-like internal DSL in a language without macros?
And the answer to that is obviously "Yes", since Ruby doesn't have macros.
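Ruby-style internal DSLs lean on blocks, method_missing and operator overloading, and Python has analogues for most of these (operator overloading, decorators, __getattr__). Python has no -> operator, but a transition chain close to the one in the question can be approximated with >>; a minimal sketch, with illustrative names:

```python
class State(object):
    def __init__(self, name):
        self.name = name
        self.next = None

    # overload >> so transitions read left-to-right, DSL-style
    def __rshift__(self, other):
        self.next = other
        return other  # returning the right operand allows chaining

start = State('start')
event = State('Event')
nxt = State('Next')
action = State('Action')

start >> event >> nxt >> action  # builds the chain in one line
print(start.next.name)  # Event
```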
Q:
How to upload multiple files using HTML5 on Google App Engine (Python)
This is my code, upload.py:
class MyModel(db.Model):
    data = db.BlobProperty(required=True)
    mimetype = db.StringProperty(required=True)

class BaseRequestHandler(webapp.RequestHandler):
    def render_template(self, filename, template_args=None):
        if not template_args:
            template_args = {}
        path = os.path.join(os.path.dirname(__file__), 'templates', filename)
        self.response.out.write(template.render(path, template_args))

class upload(BaseRequestHandler):
    def get(self):
        self.render_template('index.html',)
    def post(self):
        file = self.request.POST['file']
        obj = MyModel(data=file.value, mimetype=file.type)
        obj.put()
        o=file
        #self.response.out.write(''.join('%s: %s <br/>' % (a, getattr(o, a)) for a in dir(o)))
        #return
        file_url = "http://%s/download/%d/%s" % (self.request.host, obj.key().id(), file.filename)
        self.response.out.write("Your uploaded file is now available at <a href='%s'>%s</a>" % (file_url,file_url,))
and the index.html is :
<form enctype="multipart/form-data" action="/" method="post">
<input type="file" name="file" multiple="true" />
<input type="submit" />
</form>
You can use file = self.request.POST['file'] to get one file, but with an HTML5 multi-file input, how do I get all the uploaded files in Python on POST?
Thanks
Update: it works now. I followed this article: Receive multi file post with google app engine
class Download_file(db.Model):
    data = db.BlobProperty(required=True)
    mimetype = db.StringProperty(required=True)
    download_url = db.StringProperty()

class BaseRequestHandler(webapp.RequestHandler):
    def render_template(self, filename, template_args=None):
        if not template_args:
            template_args = {}
        path = os.path.join(os.path.dirname(__file__), 'templates', filename)
        self.response.out.write(template.render(path, template_args))

class upload(BaseRequestHandler):
    def get(self):
        files=Download_file.all()
        self.render_template('index.html',{'files':files})
    def post(self):
        files = self.request.POST.multi.__dict__['_items']
        #self.response.out.write(files)
        for file in files:
            file=file[1]
            obj = Download_file(data=file.value, mimetype=file.type)
            obj.put()
            file_url = "http://%s/download/%d/%s" % (self.request.host, obj.key().id(), file.filename)
            file_url = "<a href='%s'>%s</a>" % (file_url,file_url,)
            obj.download_url=file_url
            obj.put()
            self.response.out.write("Your uploaded file is now available at %s </br>" % (file_url))

class download(BaseRequestHandler):
    def get(self,id,filename):
        #id=self.request.get('id')
        entity = Download_file.get_by_id(int(id))
        self.response.headers['Content-Type'] = entity.mimetype
        self.response.out.write(entity.data)
A:
AFAIK, the request.POST['file'] should be a dictionary of files, i.e. POST['file'] should have keys as names of the files uploaded and the value should be the contents of the files respectively, i.e. POST['file']['avatar.png'] = ... # raw image data.
I don't know what functionality is provided by the GAE HTTP Request class, but it should be consistent with this. Whatever the case, it's definitely in self.request, somewhere!
EDIT:
Ok, I just noticed GAE creates a file object for you, my guessing is this should work:
for file in POST['file']:
    obj = MyModel(data=file.value, mimetype=file.type)
    obj.put()
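For what it's worth, the webob-style MultiDict that App Engine's request object uses exposes a getall() method, which is a cleaner route than reaching into POST.multi.__dict__. Sketched here with a minimal stand-in class so it runs anywhere; on GAE you would call self.request.POST.getall('file') instead:

```python
# Minimal stand-in for a webob-style MultiDict, for illustration only
class MultiDict(object):
    def __init__(self, pairs):
        self._pairs = list(pairs)

    def getall(self, key):
        # every value posted under this field name, in order
        return [v for k, v in self._pairs if k == key]

post = MultiDict([('file', 'a.png'), ('file', 'b.png'), ('name', 'x')])
print(post.getall('file'))  # ['a.png', 'b.png']
```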
A:
You probably want to use the blobstore service for this; I wrote a series of posts (1, 2, 3) covering how to do multiple-file upload to the blobstore, using an upload widget.
Q:
Mail send-receive in Google App Engine (reply_to field)
I am reading about mail send/receive in GAE and I have a question about how to use reply_to and the form of the email address replied to.
My register.py simply writes message.sender to database:
class User(db.Model):
    userEmail = db.StringProperty()
    userEmailContent = db.StringProperty()

class Register(InboundMailHandler):
    def receive(self, message):
        newUser = User(userEmail = message.sender)
        db.put(newUser)

application = webapp.WSGIApplication([
    Register.mapping()
], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
From incoming.py I am replying to an applicant's mail with this email:
mail.send_mail(sender="<az@example.com>",
               to=message.sender,
               body="reply to this email to register",
               reply_to=/_ah/mail/register@hello-1-world.appspotmail.com)
I am imagining that when the applicant replies to this email register.py will handle the email and write the applicant's email address to the database. I am not sure how to test this in dev server. Before deploying the app I wanted to ask advice about the correct email address to assign to reply_to and if this is the correct way of handling this. Thanks.
A:
The reply_to address should be a canonical email address without the /_ah/mail/ prefix, and it follows the same restrictions as the sender address.
The sender address of a message must be the email address of an administrator for the application, the Google Account email address of the current user who is signed in, or any valid email receiving address for the app.
To test it on your dev server, you could configure sendmail and send a mail from your program.
Once received, clicking reply from your mail client should show the reply_to mail address set in your code.
A:
mail.send_mail(sender="<az@example.com>",
               to=message.sender,
               body="reply to this email to register",
               reply_to="register@hello-1-world.appspotmail.com")
Q:
How to write PIL image filter for plain pgm format?
How can I write a filter for the Python Imaging Library for the plain ASCII PGM format (P2)? The problem is that the basic PIL filters assume a constant number of bytes per pixel.
My goal is to open feep.pgm with Image.open(). See http://netpbm.sourceforge.net/doc/pgm.html or below.
Alternative solution is that I find other well documented ascii grayscale format that is supported by PIL and all major graphics programs. Any suggestions?
feep.pgm:
P2
# feep.pgm
24 7
15
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 3 3 3 3 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 15 15 15 0
0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 15 0
0 3 3 3 0 0 0 7 7 7 0 0 0 11 11 11 0 0 0 15 15 15 15 0
0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 0 0
0 3 0 0 0 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
edit: Thanks for the answer, it works... but I need a solution that uses Image.open(). Most Python programs out there use PIL for graphics manipulation (google: python image open), so I need to be able to register a filter with PIL. Then I can use any software that uses PIL; right now I'm thinking mostly of scipy-, pylab-, etc.-dependent programs.
edit Ok, I think I got it now. Below is the wrapper pgm2pil.py:
import Image
import numpy

def pgm2pil(filename):
    try:
        inFile = open(filename)

        header = None
        size = None
        maxGray = None
        data = []

        for line in inFile:
            stripped = line.strip()
            if not stripped or stripped[0] == '#':  # skip blank lines and comments
                continue
            elif header == None:
                if stripped != 'P2': return None
                header = stripped
            elif size == None:
                size = map(int, stripped.split())
            elif maxGray == None:
                maxGray = int(stripped)
            else:
                for item in stripped.split():
                    data.append(int(item.strip()))

        data = numpy.reshape(data, (size[1],size[0]))/float(maxGray)*255
        return numpy.flipud(data)
    except:
        pass
    return None

def imageOpenWrapper(fname):
    pgm = pgm2pil(fname)
    if pgm is not None:
        return Image.fromarray(pgm)
    return origImageOpen(fname)

origImageOpen = Image.open
Image.open = imageOpenWrapper
There is a slight upgrade to misha's answer: the original Image.open has to be saved in order to prevent a never-ending loop. Otherwise, when pgm2pil returns None, the wrapper would call Image.open, which is now the wrapper itself, which would fail and call itself again, and so on...
Below is the test function (feep_false.pgm is a malformed pgm e.g. "P2" -> "FOO" and lena.pgm is just the image file):
import pgm2pil
import pylab

try:
    pylab.imread('feep_false.pgm')
except IOError:
    pass
else:
    raise ValueError("feep_false should fail")

pylab.subplot(2,1,1)
a = pylab.imread('feep.pgm')
pylab.imshow(a)
pylab.subplot(2,1,2)
b = pylab.imread('lena.png')
pylab.imshow(b)
pylab.show()
A:
The way I currently deal with this is through numpy:
Read image into a 2D numpy array. You don't need to use numpy, but I've found it easier to use than the regular Python 2D arrays
Convert 2D numpy array into PIL.Image object using PIL.Image.fromarray
If you insist on using PIL.Image.open, you could write a wrapper that attempts to load a PGM file first (by looking at the header). If it's a PGM, load the image using the steps above, otherwise just hand off responsibility to PIL.Image.open.
Here's some code that I use to get a PBM image into a numpy array.
import re
import numpy

def pbm2numpy(filename):
    """
    Read a PBM into a numpy array. Only supports ASCII PBM for now.
    """
    fin = None
    debug = True
    try:
        fin = open(filename, 'r')

        while True:
            header = fin.readline().strip()
            if header.startswith('#'):
                continue
            elif header == 'P1':
                break
            elif header == 'P4':
                assert False, 'Raw PBM reading not implemented yet'
            else:
                #
                # Unexpected header.
                #
                if debug:
                    print 'Bad mode:', header
                return None

        rows, cols = 0, 0
        while True:
            header = fin.readline().strip()
            if header.startswith('#'):
                continue
            match = re.match('^(\d+) (\d+)$', header)
            if match == None:
                if debug:
                    print 'Bad size:', repr(header)
                return None
            cols, rows = match.groups()
            break

        rows = int(rows)
        cols = int(cols)
        assert (rows, cols) != (0, 0)
        if debug:
            print 'Rows: %d, cols: %d' % (rows, cols)

        #
        # Initialise a 2D numpy array
        #
        result = numpy.zeros((rows, cols), numpy.int8)

        pxs = []

        #
        # Read to EOF.
        #
        while True:
            line = fin.readline().strip()
            if line == '':
                break
            for c in line:
                if c == ' ':
                    continue
                pxs.append(int(c))

        if len(pxs) != rows*cols:
            if debug:
                print 'Insufficient image data:', len(pxs)
            return None

        for r in range(rows):
            for c in range(cols):
                #
                # Index into the numpy array and set the pixel value.
                #
                result[r, c] = pxs[r*cols + c]

        return result
    finally:
        if fin != None:
            fin.close()
            fin = None

    return None
You will have to modify it slightly to fit your purposes, namely:
Deal with P2 (ASCII, greyscale) instead of P1 (ASCII, bilevel).
Use a different container if you're not using numpy. Normal Python 2D arrays will work just fine.
EDIT
Here is how I would handle a wrapper:
def pgm2pil(fname):
    #
    # This method returns a PIL.Image. Use the pbm2numpy function above as a
    # guide. If it can't load the image, it returns None.
    #
    pass

def wrapper(fname):
    pgm = pgm2pil(fname)
    if pgm is not None:
        return pgm
    return PIL.Image.open(fname)

#
# This is the line that "adds" the wrapper
#
PIL.Image.open = wrapper
I didn't write pgm2pil because it's going to be very similar to pbm2numpy. The only difference will be that it's storing the result in a PIL.Image as opposed to a numpy array. I also didn't test the wrapper code (sorry, a bit short on time at the moment) but it's a fairly common approach so I expect it to work.
Now, it sounds like you want other applications that use PIL for image loading to be able to handle PGMs. It's possible using the above approach, but you need to be sure that the above wrapper code gets added before the first call to PIL.Image.open. You can make sure that happens by adding the wrapper source code to the PIL source code (if you have access).
Q:
Is there a Python module that can output to the terminal like Linux "top" or "more"?
Is there such a module in Python?
Basically, I need to print database table records to the terminal in Python, with support for page-up and page-down key presses.
I also need a highlighted title bar at the top of the screen, like "top"'s output, and the output should auto-scale when the terminal window is resized.
Thanks!
A:
Take a look at this, might be helpful:
http://docs.python.org/library/curses.html
Also, take a look at this question:
Text Progress Bar in the Console
and a link from it:
http://nadiana.com/animated-terminal-progress-bar-in-python
A:
Python's curses module is a wrapper around GNU's ncurses C library, which is what top uses.
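A minimal pager-style sketch using curses: the title bar is drawn with reverse video, PgUp/PgDn scroll, and the geometry is re-read on every redraw so the view adapts to window resizes. Key handling and layout details here are illustrative, not a polished implementation:

```python
import curses

def pager(stdscr, lines):
    top = 0  # index of the first visible row
    while True:
        stdscr.erase()
        height, width = stdscr.getmaxyx()  # re-read each draw, so the
        page = height - 1                  # view adapts to resizes
        # highlighted title bar, top-style
        stdscr.addstr(0, 0, "records".ljust(width - 1)[:width - 1],
                      curses.A_REVERSE)
        for i, line in enumerate(lines[top:top + page]):
            stdscr.addstr(i + 1, 0, line[:width - 1])
        stdscr.refresh()
        key = stdscr.getch()
        if key == curses.KEY_NPAGE:    # page-down
            top = min(top + page, max(len(lines) - page, 0))
        elif key == curses.KEY_PPAGE:  # page-up
            top = max(top - page, 0)
        elif key in (ord('q'), ord('Q')):
            break

rows = ["row %d" % n for n in range(200)]
# Needs a real terminal; wrapper() handles setup/teardown:
# curses.wrapper(pager, rows)
```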
Q:
A shorter way to express this? C = A if A else B
I find myself repeating this a lot:
val = x if x else y
Sometimes x goes several levels deep into a class or dictionary so it gets very long:
val = obj.elements[0].something if obj.elements[0].something else y
It looks ugly and forces me to type a lot more. Any known ways to shorten this? Perhaps a builtin like this exists?
val = first_try(x, y)
I guess I could easily write my own but was hoping for a built in.
first_try = lambda x,y: x if x else y
A:
The or operator returns the first argument that converts to True:
val = x or y
E.g.:
>>> None or 'OK'
'OK'
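One caveat worth noting: or falls through on any falsy value (0, '', empty containers), not just None, so if falsy values are legitimate data, test for None explicitly:

```python
x = 0                              # falsy, but maybe a meaningful value
print(x or 10)                     # 10 -- the 0 is silently discarded
print(x if x is not None else 10)  # 0 -- only None falls through
```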
A:
This to me seems like a case of trying to make code a little too terse. I'd probably do something like this:
x = a.b.c.d[0]
val = x if x else y
One extra line, and a whole lot less to digest at once.
A:
Use getattr with the optional default argument. In your long example above, this would be:
val = getattr(obj.elements[0], 'something', y)
Q:
Problem getting emacs to recognize mix of python and html code
I started working on "Django 1.0 Web Site Development" and have gotten my server to work, but when I'm editing my views.py file, I have difficulty editing the Python code in Emacs.
The problem seems to be a line that contains the triple quotes ('''):
def main_page(request):
    output = u'''
    <html>
    [more lines here]
    </html>
    ''' % (
        u'Django Bookmarks',
        u'Welcome to Django Bookmarks',
        u'Where you can store and share bookmarks!'
    )
    return HttpResponse(output)
Emacs indentation gets all screwed up after the u''' line, and thus I get errors when I try to load the page that is generated from this.
I've tried several modes (including an nxhtml mumamo mode), but so far I can get Emacs to recognize only Python code or only HTML, not both at the same time. Short of editing the file as raw text and managing the tabs myself, is there another way?
A:
Take the hint.
You're doing it wrong. Get to the templates business in chapter 3 before even thinking about tools to support what you're doing.
Stop trying to embed HTML in your view functions and you'll find that emacs is no longer confused. Move as quickly as possible to the next chapter. Stop messing with emacs and get going on learning Django. Move further forward in the book.
Put HTML in template files, which are almost pure HTML with a few extra {{variable}} and {% tag %} things thrown in.
You don't have an "emacs formatting" problem.
You have a "using Django incorrectly" problem.
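To make the separation concrete, here is the template idea sketched with a tiny stand-in renderer so it runs anywhere. In a real Django project the template lives in its own file (which Emacs edits happily in an HTML mode), the view stays pure Python, and Django's own engine does the substitution; all names below are illustrative:

```python
import re

# This string would live in templates/main_page.html in a real project
TEMPLATE = """\
<html>
  <head><title>{{ title }}</title></head>
  <body><h1>{{ heading }}</h1></body>
</html>
"""

def render(template, context):
    # minimal {{ variable }} substitution, standing in for Django's engine
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: str(context.get(m.group(1), '')), template)

page = render(TEMPLATE, {'title': 'Django Bookmarks',
                         'heading': 'Welcome to Django Bookmarks'})
print(page)
```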
Q:
Resource use in file read/write with python, options and considerations
I'm developing in python, still new to the game, and I want to make sure I tackle this problem correctly. I'll gladly accept all advice.
Imagine trying to use data stored across multiple flat files, in many cases with a combined size exceeding 20-35 gb. In the most general case, these files may be delimited (csv, tab) or may simply be fixed width. The goal is to take these files, or some subset of each file, parse the input (each column represents a variable in data), and send it to some destination (which could be local or remote sql, some other local file in a variety of output formats including text or some proprietary data format such as stata's .dta)
The goal is to use available system resources to conduct this operation in the quickest way possible ( i guess in terms of kb per second? )
Questions:
Is there an efficiency gain from using compiled C to do the read operations? If so, which library should I learn how to use? And, should c also do the parsing and output?
If the file comes in a .zip or .gz, should the entire file be decompressed prior to reading and writing, or do you leave it compressed and use a library capable of reading from the compressed file?
Should the program use multi-threading? I imagine reading some subset of a file (say, n lines at a time), parsing and outputting in, say, j threads. It's not obvious that reading the file one line at a time is optimal... and it seems like the optimal number of threads and processes will depend upon the resources available and the size of the job.
So, maybe the code should be 'smart' enough to optimally determine how many threads to use and how much work each thread should be expected to do.
How does one measure and compare efficiency between different methods?
Should the program be able to do this dynamically and choose input output methods based on their performance? (Will method A always strictly dominate method B, or do idiosyncratic changes in the deployment environment matter)
Just to be clear, I'm willing to accept almost any level of code inefficiency in exchange for non-negligible improvements in resource efficiency
I hope these questions provide a clear idea of what I'm trying to figure out. My programming experience is mostly confined to scientific/statistical packages, so if any of my questions boil down to 'rtm', please be gentle and suggest the appropriate manual.
A:
Is there an efficiency gain from using compiled C to do the read operations?
Not really. The limitation will be I/O bandwidth and Python uses the underlying C libraries.
If the file comes in a .zip or .gz, should the entire file be decompressed prior to reading and writing, or do you leave it compressed and use a library capable of reading from the compressed file?
First, get everything else to work well. Don't try to finesse this up front. Python's ZipFile implementation can process CSV format files by opening ZIP archive members without expanding them.
Is this faster? You can't know in advance. You can only know by building it and measuring what you built. Don't wring your hands. It's only a few lines of code. Build both.
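For reference, here is a minimal sketch of reading CSV rows straight out of a ZIP archive member without expanding it to disk first; the file and member names are made up:

```python
# A minimal sketch: stream a CSV member out of a ZIP archive without extracting it.
# Names here (data.csv, etc.) are hypothetical.
import csv
import io
import zipfile

def rows_from_zip(zip_path, member_name):
    """Yield parsed CSV rows from one member of a ZIP archive."""
    with zipfile.ZipFile(zip_path) as zf:
        # zf.open() streams the member's bytes; TextIOWrapper decodes them
        # to text so csv.reader can consume the stream lazily.
        with io.TextIOWrapper(zf.open(member_name), encoding="utf-8", newline="") as f:
            for row in csv.reader(f):
                yield row
```

Because this is a generator, memory stays flat even for multi-gigabyte members; whether it beats decompress-then-read is exactly the kind of thing you measure, as the answer says.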
Should the program use multithreading?
No.
Use OS-level multi-processing.
python something.py source.zip | python part2.py | python part3.py | python part4.py >result
This will be amazingly fast and -- without much work -- will use all the available OS resources.
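Each stage in such a pipeline is just a filter that reads stdin and writes stdout. A minimal sketch of one stage, with a placeholder transform standing in for the real parsing work:

```python
# Hedged sketch of one pipeline stage: a plain stdin-to-stdout filter.
# Each stage runs as its own OS process, so the stages execute in parallel.
import sys

def transform(line):
    # Placeholder for the real per-line parsing; here it just uppercases.
    return line.upper()

def run_filter(infile, outfile):
    for line in infile:
        outfile.write(transform(line))

if __name__ == "__main__":
    run_filter(sys.stdin, sys.stdout)
```

Keeping the loop in a function (rather than at module level) also makes the stage easy to test without spawning processes.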
How does one measure and compare efficiency between different methods?
Ummm... That's a silly question. You build it and measure it. Elapsed time is as good a measure as anything else. If you're confused, use a stop watch. Seriously. There's no magic.
Should the program be able to do this dynamically and choose input output methods based on their performance?
No.
(Will method A always strictly dominate method B, or do idiosyncratic changes in the deployment environment matter)
Yes. And Yes. Some methods are always more efficient. However, an OS is hellishly complex, so nothing substitutes for simple, flexible, componentized design.
Build simple pieces that can be flexibly recombined.
Don't hand-wring in advance. Design the right data structure and algorithm when you can. When you can't, just pick something sensible and move on. Building something and tuning is much easier than fretting over details only to find out that they never mattered.
Build Something.
Measure.
Find the bottleneck.
Optimize only the proven bottlenecks.
| Resource use in file read/write with python, options and considerations | I'm developing in python, still new to the game, and I want to make sure I tackle this problem correctly. I'll gladly accept all advice.
Imagine trying to use data stored across multiple flat files, in many cases with a combined size exceeding 20-35 gb. In the most general case, these files may be delimited (csv, tab) or may simply be fixed width. The goal is to take these files, or some subset of each file, parse the input (each column represents a variable in data), and send it to some destination (which could be local or remote sql, some other local file in a variety of output formats including text or some proprietary data format such as stata's .dta)
The goal is to use available system resources to conduct this operation in the quickest way possible ( i guess in terms of kb per second? )
Questions:
Is there an efficiency gain from using compiled C to do the read operations? If so, which library should I learn how to use? And, should c also do the parsing and output?
If the file comes in a .zip or .gz, should the entire file be decompressed prior to reading and writing, or do you leave it compressed and use a library capable of reading from the compressed file?
Should the program use multi-threading? I imagine reading some subset of a file (say, n lines at a time), parsing and outputting in, say, j threads. It's not obvious that reading the file one line at a time is optimal... and it seems like the optimal number of threads and processes will depend upon the resources available and the size of the job.
So, maybe the code should be 'smart' enough to optimally determine how many threads to use and how much work each thread should be expected to do.
How does one measure and compare efficiency between different methods?
Should the program be able to do this dynamically and choose input output methods based on their performance? (Will method A always strictly dominate method B, or do idiosyncratic changes in the deployment environment matter)
Just to be clear, I'm willing to accept almost any level of code inefficiency in exchange for non-negligible improvements in resource efficiency
I hope these questions provide a clear idea of what I'm trying to figure out. My programming experience is mostly confined to scientific/statistical packages, so if any of my questions boil down to 'rtm', please be gentle and suggest the appropriate manual.
| [
"\nIs there an efficiency gain from using compiled C to do the read operations? \n\nNot really. The limitation will be I/O bandwidth and Python uses the underlying C libraries.\n\nIf the file comes in a .zip or .gz, should the entire file be decompressed prior to reading and writing, or do you leave it compressed ... | [
2
] | [] | [] | [
"mysql",
"performance",
"python",
"python_multithreading"
] | stackoverflow_0004273259_mysql_performance_python_python_multithreading.txt |
Q:
SQLite date storage and conversion
I am having design problems with date storage/retrieval using Python and SQLite.
I understand that a SQLite date column stores dates as text in ISO format
(ie. '2010-05-25'). So when I display a British date (eg. on a web-page) I
convert the date using
datetime.datetime.strptime(mydate,'%Y-%m-%d').strftime('%d/%m/%Y')
However, when it comes to writing-back data to the table, SQLite is very
forgiving and is quite happy to store '25/06/2003' in a date field, but this
is not ideal because
I could be left with a mixture of date formats in the same
column,
SQLite's date functions only work with ISO format.
Therefore I need to convert the date string back to ISO format before
committing, but then I would need a generic function which checks data about to
be written in all date fields and converts to ISO if necessary. That sounds a
bit tedious to me, but maybe it is inevitable.
Are there simpler solutions? Would it be easier to change the date field to a
10-character field and store 'dd/mm/yyyy' throughout the table? This way no
conversion is required when reading or writing from the table, and I could use
datetime() functions if I needed to perform any date-arithmetic.
How have other developers overcome this problem? Any help would be appreciated.
For the record, I am using SQLite3 with Python 3.1.
A:
If you set detect_types=sqlite3.PARSE_DECLTYPES in sqlite3.connect,
then the connection will try to convert sqlite data types to Python data types
when you draw data out of the database.
This is a very good thing, since it's much nicer to work with datetime objects than
with random date-like strings which you then have to parse with
datetime.datetime.strptime or dateutil.parser.parse.
Unfortunately, using detect_types does not stop sqlite from accepting
strings as DATE data, but you will get an error when you try to
draw the data out of the database (if it was inserted in some format other than YYYY-MM-DD)
because the connection will fail to convert it to a datetime.date object:
conn=sqlite3.connect(':memory:',detect_types=sqlite3.PARSE_DECLTYPES)
cur=conn.cursor()
cur.execute('CREATE TABLE foo(bar DATE)')
# Unfortunately, this is still accepted by sqlite
cur.execute("INSERT INTO foo(bar) VALUES (?)",('25/06/2003',))
# But you won't be able to draw the data out later because parsing will fail
try:
cur.execute("SELECT * FROM foo")
except ValueError as err:
print(err)
# invalid literal for int() with base 10: '25/06/2003'
conn.rollback()
But at least the error will alert you to the fact that you've inserted
a string for a DATE when you really should be inserting datetime.date objects:
cur.execute("INSERT INTO foo(bar) VALUES (?)",(datetime.date(2003,6,25),))
cur.execute("SELECT ALL * FROM foo")
data=cur.fetchall()
data = list(zip(*data))[0]
print(data)
# (datetime.date(2003, 6, 25),)
You may also insert strings as DATE data as long as you use the YYYY-MM-DD format. Notice that although you inserted a string, it comes back out as a datetime.date object:
cur.execute("INSERT INTO foo(bar) VALUES (?)",('2003-06-25',))
cur.execute("SELECT ALL * FROM foo")
data=cur.fetchall()
data = list(zip(*data))[0]
print(data)
# (datetime.date(2003, 6, 25), datetime.date(2003, 6, 25))
So if you are disciplined about inserting only datetime.date objects into the DATE field, then you'll have no problems later when drawing the data out.
If your users are input-ing date data in various formats, check out dateutil.parser.parse. It may be able to help you convert those various strings into datetime.datetime objects.
A:
Note that SQLite itself does not have a native date/time type. As @unutbu answered, you can make the pysqlite/sqlite3 module try to guess (and note that it really is a guess) which columns/values are dates/times. SQL expressions will easily confuse it.
SQLite does have a variety of date time functions and can work with various strings, numbers in both unixepoch and julian format, and can do transformations. See the documentation:
http://www.sqlite.org/lang_datefunc.html
You may find it more convenient to get SQLite to do the date/time work you need instead of importing the values into Python and using Python libraries to do it. Note that you can put constraints in the SQL table definition, for example requiring that a string value be present, be a certain length, etc.
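As an illustrative sketch (the table and column names are invented here), the standard-library sqlite3 module can hand both the date arithmetic and the dd/mm/yyyy formatting off to SQLite itself:

```python
# Sketch: let SQLite do the date work, so only final results cross into Python.
# Table/column names are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (name TEXT, happened DATE)")
cur.execute("INSERT INTO events VALUES ('start', '2010-05-25')")

# Days between the stored date and a fixed date, computed inside SQLite
cur.execute("SELECT julianday('2010-06-01') - julianday(happened) FROM events")
delta_days = cur.fetchone()[0]

# Reformat to a British-style dd/mm/yyyy string inside SQLite
cur.execute("SELECT strftime('%d/%m/%Y', happened) FROM events")
british = cur.fetchone()[0]
```

Note that both functions only behave this way because the stored value is in ISO format, which is the argument for keeping the column ISO-only in the first place.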
| SQLite date storage and conversion | I am having design problems with date storage/retrieval using Python and SQLite.
I understand that a SQLite date column stores dates as text in ISO format
(ie. '2010-05-25'). So when I display a British date (eg. on a web-page) I
convert the date using
datetime.datetime.strptime(mydate,'%Y-%m-%d').strftime('%d/%m/%Y')
However, when it comes to writing-back data to the table, SQLite is very
forgiving and is quite happy to store '25/06/2003' in a date field, but this
is not ideal because
I could be left with a mixture of date formats in the same
column,
SQLite's date functions only work with ISO format.
Therefore I need to convert the date string back to ISO format before
committing, but then I would need a generic function which checks data about to
be written in all date fields and converts to ISO if necessary. That sounds a
bit tedious to me, but maybe it is inevitable.
Are there simpler solutions? Would it be easier to change the date field to a
10-character field and store 'dd/mm/yyyy' throughout the table? This way no
conversion is required when reading or writing from the table, and I could use
datetime() functions if I needed to perform any date-arithmetic.
How have other developers overcome this problem? Any help would be appreciated.
For the record, I am using SQLite3 with Python 3.1.
| [
"If you set detect_types=sqlite3.PARSE_DECLTYPES in sqlite3.connect,\nthen the connection will try to convert sqlite data types to Python data types\nwhen you draw data out of the database.\nThis is a very good thing since its much nicer to work with datetime objects than\nrandom date-like strings which you then ha... | [
17,
5
] | [] | [] | [
"python",
"sqlite"
] | stackoverflow_0004272908_python_sqlite.txt |
Q:
Automatically echo the result of an assignment statement in IPython
Is there a way to make IPython automatically echo the result of an assignment statement?
For example, in MATLAB, ending an assignment statement without a semicolon prints the result of the assignment, and putting a semicolon at the end of the statement suppresses any output.
>> b=1+2
b =
3
>> b=1+2;
>>
I want to be able to do something similar in IPython. However, currently I have to use two separate statements if I want to see the assignment result:
In [32]: b=1+2
In [33]: b
Out[33]: 3
A:
Assignment is purely a statement in Python, so you'd have to compile the code, walk the AST, find the assignment, and then print the variable's repr() after running it.
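A minimal sketch of that approach in modern Python, using the ast module; the function name and behavior are illustrative, not IPython's actual mechanism:

```python
# Hedged sketch of the approach described above: parse the statement,
# run it, then collect the repr() of any plain-name assignment targets.
import ast

def exec_and_echo(source, namespace):
    """Execute `source` in `namespace`, then report reprs of assigned names."""
    tree = ast.parse(source)
    exec(compile(tree, "<input>", "exec"), namespace)
    echoed = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):  # only echo simple `name = ...` targets
                    echoed.append((target.id, repr(namespace[target.id])))
    return echoed
```

So exec_and_echo("b = 1 + 2", {}) would report the name b along with the repr of its new value, which is essentially the MATLAB-style echo the question asks for.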
| Automatically echo the result of an assignment statement in IPython | Is there a way to make IPython automatically echo the result of an assignment statement?
For example, in MATLAB, ending an assignment statement without a semicolon prints the result of the assignment, and putting a semicolon at the end of the statement suppresses any output.
>> b=1+2
b =
3
>> b=1+2;
>>
I want to be able to do something similar in IPython. However, currently I have to use two separate statements if I want to see the assignment result:
In [32]: b=1+2
In [33]: b
Out[33]: 3
| [
"Assignment is purely a statement in Python, so you'd have to compile the code, walk the AST, find the assignment, and then print the variable's repr() after running it.\n"
] | [
0
] | [] | [] | [
"ipython",
"python"
] | stackoverflow_0004273695_ipython_python.txt |
Q:
Python Virtualenv
When creating a virtual environment with --no-site-packages, do I need to install MySQL and the MySQLdb adapter (which are in my global site-packages) in order to use them in my virtual project environment?
A:
You can also (on UNIX) symlink specific packages from the Python site-packages into your virtualenv's site-packages.
A:
Yes, you will need to install them separately for this virtualenv.
See: http://virtualenv.openplans.org/#the-no-site-packages-option
If you build with virtualenv --no-site-packages ENV it will not inherit any packages from global site package directory.
If you look at the files inside the virtualenv's lib directory, you can see that they are brought in via symlinks.
drwxr-xr-x 3 ashish staff 102 Nov 24 20:52 ..
lrwxr-xr-x 1 ashish staff 85 Nov 24 20:52 UserDict.py -> /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/UserDict.py
.....
So you could definitely add packages manually by creating these symlinks for specific package.
Adam Vandenberg correctly answers that.
| Python Virtualenv | When creating a virtual environment with --no-site-packages, do I need to install MySQL and the MySQLdb adapter (which are in my global site-packages) in order to use them in my virtual project environment?
| [
"You can also (on UNIX) symlink specific packages from the Python site-packages into your virtualenv's site-packages.\n",
"Yes. you will need to install then exclusively for this virtualenv.\nSee: http://virtualenv.openplans.org/#the-no-site-packages-option\nIf you build with virtualenv --no-site-packages ENV it ... | [
5,
0
] | [] | [] | [
"python",
"virtualenv"
] | stackoverflow_0004273729_python_virtualenv.txt |
Q:
Python - When duplicate items are identified, which object does a set or frozenset take?
I have a user-defined class MyClass that has a __hash__ and __eq__ implementation that ensures that, for example:
>>> a = MyClass([100, 99, 98, 97])
>>> b = MyClass([99, 98, 97, 100])
>>> a.__hash__() == b.__hash__()
True
>>> a == b
True
Question: if I do the following:
>>> x = [a, b]
>>> set(x)
can I count on set keeping a? Is the set __init__ iterating through x in order? Or do I need to worry about it taking b randomly?
Thanks,
Mike
A:
In hash-based containers like these, both __hash__ and __eq__ are used.
If the hashes are equal and __eq__ says the objects are equal, then the first one encountered in the iterable is kept. When the container gets to the next one, it checks that it already holds an equal object and discards the new one.
>>> class Same(object):
... def __init__(self, value):
... self.value = value
... def __hash__(self):
... return 42
... def __eq__(self, other):
... return True
... def __repr__(self):
... return 'Same(%r)' % self.value
>>> set([Same(2), Same(1)])
set([Same(2)])
>>> set([Same(1), Same(2)])
set([Same(1)])
With a dict, it becomes more interesting:
>>> {Same(1): 1, Same(2): 2}
{Same(1): 2}
>>> {Same(1): 2, Same(2): 1}
{Same(1): 1}
>>> {Same(2): 1, Same(2): 2}
{Same(2): 2}
>>> {Same(2): 2, Same(2): 1}
{Same(2): 1}
>>> {Same(2): 2, Same(2): 1}
{Same(2): 1}
You should be able to guess what's happening here. It stores the first item, then the hash/equality of the second is the same; however, it's got a different value, so it stores that. The value is overwritten always, whether they match or not:
>>> {Same(1): Same(2), Same(3): Same(4)}
{Same(1): Same(4)}
I hope that helps.
A:
set (and dict) check not only the equality of the hashes, but also the equality of the objects themselves into account.
A:
I believe that set() requires both hash and eq to be overridden. In this case, you could have hash(a) == hash(b) but still have a != b, assuming you defined eq in such a fashion
| Python - When duplicate items are identified, which object does a set or frozenset take? | I have a user-defined class MyClass that has a __hash__ and __eq__ implementation that ensures that, for example:
>>> a = MyClass([100, 99, 98, 97])
>>> b = MyClass([99, 98, 97, 100])
>>> a.__hash__() == b.__hash__()
True
>>> a == b
True
Question: if I do the following:
>>> x = [a, b]
>>> set(x)
can I count on set keeping a? Is the set __init__ iterating through x in order? Or do I need to worry about it taking b randomly?
Thanks,
Mike
| [
"In these cases of hash-based things, it uses both __hash__ and __eq__.\nIf __hash__ and __eq__ are both the same, then the first one it gets to in the iterable is taken. When it gets to the next, it checks if it already has it and decides yes.\n>>> class Same(object):\n... def __init__(self, value):\n... ... | [
3,
1,
1
] | [] | [] | [
"hash",
"python",
"set"
] | stackoverflow_0004273686_hash_python_set.txt |
Q:
Python re.sub use non-greedy mode (.*?) with end of string ($) it comes greedy!
Code:
str = '<br><br />A<br />B'
print(re.sub(r'<br.*?>\w$', '', str))
It is expected to return <br><br />A, but it returns an empty string ''!
Any suggestion?
A:
Greediness (and laziness) is resolved from left to right, not the other way around. A lazy quantifier basically means "match as little as possible, and take more only if the rest of the pattern fails". Here's what's going on:
The regex engine matches <br at the start of the string.
.*? is ignored for now, it is lazy.
Try to match >, and succeeds.
Try to match \w and fails. Now it's interesting - the engine starts backtracking, and sees the .*? rule. In this case, . can match the first >, so there's still hope for that match.
This keeps happening until the regex reaches the slash. Then >\w can match, but $ fails. Again, the engine comes back to the lazy .*? rule and keeps matching, until it matches the whole string <br><br />A<br />B.
Luckily, there's an easy solution: by using <br[^>]*>\w$ instead, you don't allow the match to cross a closing bracket, so only the last occurrence can match.
Strictly speaking, this doesn't work well for HTML, because tag attributes can contain > characters, but I assume it's just an example.
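A quick demonstration of the difference, using the string from the question:

```python
# [^>]*, unlike .*?, cannot step past a closing '>', so only the final
# <br ...> followed by a word character at end-of-string can match.
import re

s = '<br><br />A<br />B'
print(re.sub(r'<br.*?>\w$', '', s))    # the over-eager original: removes everything
print(re.sub(r'<br[^>]*>\w$', '', s))  # removes only the trailing <br />B
```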
A:
The non-greedy match doesn't start later in the string like that. It still matches the first <br and then lazily matches the rest, but the $ anchor forces it to expand all the way to the end of the string.
To make it work the way you wanted, use
/<br[^<]*?>\w$/
but usually, it is not recommended to use regex to parse HTML, as some attribute's value can have < or > in it.
| Python re.sub use non-greedy mode (.*?) with end of string ($) it comes greedy! | Code:
str = '<br><br />A<br />B'
print(re.sub(r'<br.*?>\w$', '', str))
It is expected to return <br><br />A, but it returns an empty string ''!
Any suggestion?
| [
"Greediness works from left to right, but not otherwise. It basically means \"don't match unless you failed to match\". Here's what's going on:\n\nThe regex engine matches <br at the start of the string.\n.*? is ignored for now, it is lazy.\nTry to match >, and succeeds.\nTry to match \\w and fails. Now it's intere... | [
7,
1
] | [] | [] | [
"python",
"regex",
"regex_greedy"
] | stackoverflow_0004273987_python_regex_regex_greedy.txt |
Q:
Dynamically calling nested functions based on arguments
If I have the following Python class:
class Test(object):
funcs = {
"me" : "action",
"action": "action",
"say" : "say",
"shout" : "say"
}
def dispatch(self, cmd):
def say():
print "Nested Say"
def action():
print "Nested Action"
# The line below gets the function name as a string,
# How can I call the nested function based on the string?
Test.funcs.get(cmd, "say")
I would like to be able to do the following:
>>> Test().dispatch("me")
Nested Action
>>> Test().dispatch("say")
Nested Say
Any suggestions as to how I can go about this?
A:
I would probably do something like this:
def register(dict_, *names):
def dec(f):
m_name = f.__name__
for name in names:
dict_[name] = m_name
return f
return dec
class Test(object):
commands = {}
@register(commands, 'foo', 'fu', 'fOo')
def _handle_foo(self):
print 'foo'
@register(commands, 'bar', 'BaR', 'bAR')
def _do_bar(self):
print 'bar'
def dispatch(self, cmd):
try:
return getattr(self, self.commands[cmd])()
except (KeyError, AttributeError):
# Command doesn't exist. Handle it somehow if you want to
# The AttributeError should actually never occur unless a method gets
            # deleted from the class
            pass
Now, the class exposes a dict whose keys are commands for testing membership. All the methods and the dictionary are created only once.
t = Test()
if 'foo' in t.commands:
t.dispatch('foo')
for cmd in t.commands:
# Obviously this will call each method with multiple commands dispatched to it once
# for each command
t.dispatch(cmd)
Etc.
A:
class Test(object):
def dispatch(self, cmd):
def say():
print "Nested Say"
def action():
print "Nested Action"
funcs = {
"me" : action,
"action": action,
"say" : say,
"shout" : say
}
funcs.get(cmd, say)()
Or, keeping your current structure:
class Test(object):
funcs = {
"me" : "action",
"action": "action",
"say" : "say",
"shout" : "say"
}
def dispatch(self, cmd):
def say():
print "Nested Say"
def action():
print "Nested Action"
locals()[Test.funcs.get(cmd, "say")]()
I find this design a bit odd, though. Why should the class-level dict know about dispatch's local functions?
| Dynamically calling nested functions based on arguments | If I have the following Python class:
class Test(object):
funcs = {
"me" : "action",
"action": "action",
"say" : "say",
"shout" : "say"
}
def dispatch(self, cmd):
def say():
print "Nested Say"
def action():
print "Nested Action"
# The line below gets the function name as a string,
# How can I call the nested function based on the string?
Test.funcs.get(cmd, "say")
I would like to be able to do the following:
>>> Test().dispatch("me")
Nested Action
>>> Test().dispatch("say")
Nested Say
Any suggestions as to how I can go about this?
| [
"I would probably do something like this:\ndef register(dict_, *names):\n def dec(f):\n m_name = f.__name__\n for name in names:\n dict_[name] = m_name\n return f\n return dec\n\nclass Test(object):\n\n commands = {}\n\n @register(commands, 'foo', 'fu', 'fOo')\n def _h... | [
4,
2
] | [] | [] | [
"closures",
"nested",
"python"
] | stackoverflow_0004273998_closures_nested_python.txt |
Q:
How can you convert a list of poorly formed data pairs into a dict?
I'd like to take a string containing tuples that are not well separated and convert them into a dictionary.
s = "banana 4 apple 2 orange 4"
d = {'banana':'4', 'apple':'2', 'orange':'4'}
I'm running into a problem because the space is used to separate the values as well as the pairs. What's the right trick?
A:
Simplistic, but it serves the purpose here:
Use split()
>>> s = "banana 4 apple 2 orange 4"
>>> s.split()
['banana', '4', 'apple', '2', 'orange', '4']
>>>
Group them (some error checking required here)
>>> x = s.split()
>>> k = [(x[t], x[t+1]) for t in range(0, len(x) - 1, 2)]
>>> k
[('banana', '4'), ('apple', '2'), ('orange', '4')]
>>>
Create a dictionary out of it
>>> dict(k)
{'orange': '4', 'banana': '4', 'apple': '2'}
>>>
A:
>> s = "banana 4 apple 2 orange 4"
>> lst = s.split()
>> dict(zip(lst[::2], lst[1::2]))
A:
Call .split(), get the elements 2 at a time, and pass it to dict().
| How can you convert a list of poorly formed data pairs into a dict? | I'd like to take a string containing tuples that are not well separated and convert them into a dictionary.
s = "banana 4 apple 2 orange 4"
d = {'banana':'4', 'apple':'2', 'orange':'4'}
I'm running into a problem because the space is used to separate the values as well as the pairs. What's the right trick?
| [
"Simplistic but serves the solution here:\nUse split()\n>>> s = \"banana 4 apple 2 orange 4\"\n>>> s.split()\n['banana', '4', 'apple', '2', 'orange', '4']\n>>> \n\nGroup them ( Some error checks required here)\n>>> k = [(x[t], x[t+1]) for t in range(0, len(x) -1, 2)]\n>>> k\n[('banana', '4'), ('apple', '2'), ('oran... | [
5,
3,
0
] | [
"I don't know python but it should be possible to convert the string into an array , then iterate thru the array to create a dictionary by alternating name & value.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0004274207_python.txt |
Q:
How to order a Dictionary based on the value?
I have a Dictionary like
tel = {'jack': 18, 'sape': 4139, 'sar': 2139, 'mape': 319, 'sipe': 29}
I want to order it based on the value in descending order like
tel = {'sape': 4139, 'sar': 2139, 'mape': 319, 'sipe': 29, 'jack': 18}
I searched the web a lot and found only unrelated results. Is there a built-in function for this, or some other way?
A:
There is no way to order a dict. Either use an ordered dictionary, or some other ordered sequence.
import operator
sorted(tel.iteritems(), key=operator.itemgetter(1), reverse=True)
A:
In python 2.7 or 3.1, you can use OrderedDict:
from collections import OrderedDict
from operator import itemgetter
tel = {'jack': 18, 'sape': 4139, 'sar': 2139, 'mape': 319, 'sipe': 29}
newtel = OrderedDict(sorted(tel.items(), key=itemgetter(1), reverse=True))
More examples can be found in the What's New doc.
EDIT: Now using itemgetter instead of lambda, to please those who care. Apologies to those who found it more clear the old way.
A:
import operator
sorted(my_dict.iteritems(), key=operator.itemgetter(1))
and that can be useful.
| How to order a Dictionary based on the value? | I have a Dictionary like
tel = {'jack': 18, 'sape': 4139, 'sar': 2139, 'mape': 319, 'sipe': 29}
I want to order it based on the value in descending order like
tel = {'sape': 4139, 'sar': 2139, 'mape': 319, 'sipe': 29, 'jack': 18}
I searched the web a lot and found only unrelated results. Is there a built-in function for this, or some other way?
| [
"There is no way to order a dict. Either use an ordered dictionary, or some other ordered sequence.\nsorted(tel.iteritems(), key=operator.itemgetter(1), reverse=True)\n\n",
"In python 2.7 or 3.1, you can use OrderedDict:\nfrom collections import OrderedDict\nfrom operator import itemgetter\ntel = {'jack': 18, 'sa... | [
3,
3,
1
] | [] | [] | [
"dictionary",
"django",
"python"
] | stackoverflow_0004274554_dictionary_django_python.txt |
Q:
Mercurial Extension with no/default options
Say I want an extension that I can execute as follows: hg sayhi
I tried the following, but it tells me there are invalid arguments:
def sayhi(ui, repo, node, **opts):
"""Says Hello"""
ui.write("hi")
cmdtable = {
"sayhi": (sayhi, [], '')
}
It seems no matter what I do, I need to give it an option like hg sayhi s.
Is there anyway to do this?
A:
Ok, got a fix. I removed the node parameter from the method signature and it works.
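For completeness, a sketch of what the fixed extension might look like. The synopsis string is illustrative, and ui and repo are supplied by Mercurial at run time:

```python
# Hedged sketch of the fixed extension: without the `node` parameter,
# `hg sayhi` matches the (function, options, synopsis) entry and needs
# no extra command-line argument.
def sayhi(ui, repo, **opts):
    """Says Hello"""
    ui.write("hi\n")

cmdtable = {
    "sayhi": (sayhi, [], 'hg sayhi'),
}
```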
| Mercurial Extension with no/default options | Say I want an extension that I can execute as follows: hg sayhi
I tried the following, but it tells me there are invalid arguments:
def sayhi(ui, repo, node, **opts):
"""Says Hello"""
ui.write("hi")
cmdtable = {
"sayhi": (sayhi, [], '')
}
It seems no matter what I do, I need to give it an option like hg sayhi s.
Is there anyway to do this?
| [
"Ok, got a fix. I removed the node parameter from the method signature and it works.\n"
] | [
5
] | [] | [] | [
"mercurial_extension",
"python"
] | stackoverflow_0004274517_mercurial_extension_python.txt |
Q:
Cannot add new items into python dictionary
Hi I'm new to python. I am trying to add different key value pairs to a dictionary depending on different if statements like the following:
def getContent(file):
for line in file:
content = {}
if line.startswith(titlestart):
line = line.replace(titlestart, "")
line = line.replace("]]></title>", "")
content["title"] = line
elif line.startswith(linkstart):
line = line.replace(linkstart, "")
line = line.replace("]]>", "")
content["link"] = line
elif line.startswith(pubstart):
line = line.replace(pubstart, "")
line = line.replace("</pubdate>", "")
content["pubdate"] = line
return content
print getContent(list)
However, this always returns the empty dictionary {}.
I thought it was a variable-scope issue at first, but that doesn't seem to be it. I feel like this is a very simple question, but I'm not sure what to google to find the answer.
Any help would be appreciated.
A:
You reinitialize content for every line; move the initialization outside of the loop:
def getContent(file):
content = {}
for line in file:
etc.
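For illustration, here is a fuller sketch of the corrected function. The start markers below are hypothetical stand-ins for the ones defined elsewhere in the original code:

```python
# A fuller sketch of the fix: `content` is created once, before the loop.
# The marker strings are made-up stand-ins for titlestart/linkstart above.
TITLE_START = "<title><![CDATA["
LINK_START = "<link><![CDATA["

def get_content(lines):
    content = {}          # one dict for the whole file, not one per line
    for line in lines:
        if line.startswith(TITLE_START):
            content["title"] = line.replace(TITLE_START, "").replace("]]></title>", "")
        elif line.startswith(LINK_START):
            content["link"] = line.replace(LINK_START, "").replace("]]>", "")
    return content
```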
| Cannot add new items into python dictionary | Hi I'm new to python. I am trying to add different key value pairs to a dictionary depending on different if statements like the following:
def getContent(file):
for line in file:
content = {}
if line.startswith(titlestart):
line = line.replace(titlestart, "")
line = line.replace("]]></title>", "")
content["title"] = line
elif line.startswith(linkstart):
line = line.replace(linkstart, "")
line = line.replace("]]>", "")
content["link"] = line
elif line.startswith(pubstart):
line = line.replace(pubstart, "")
line = line.replace("</pubdate>", "")
content["pubdate"] = line
return content
print getContent(list)
However, this always returns the empty dictionary {}.
I thought it was variable scope issue at first but that doesn't seem to be it. I feel like this is a very simple question but I'm not sure what to google to find the answer.
Any help would be appreciated.
| [
"You reinitialize content for every line, move the initialization outside of the loop:\ndef getContent(file)\n\n content = {}\n\n for line in file:\n\netc.\n"
] | [
8
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0004275132_dictionary_python.txt |
Q:
Should I use Perl or Python for network monitoring?
I want to have some work done on the Network front, pinging numerous computers on a LAN and retrieving data about the response time. Which would be the most useful and productive to work with: Perl or Python?
A:
I agree that it is pretty subjective which programming language you use - essentially I would rather get the job done as quickly and efficiently as possible which making it supportable - so that depends on your infrastructure...
Can I suggest that you look at Nagios rather than re-inventing the wheel yourself?
While Nagios might require a greater learning curve in terms of configuration, it will be worth it in the long run, and if you can't find a plugin to suit your requirements, then it is easy to write your own. Joel Spolsky has written an interesting article on this.
A:
Well, I work in both Perl and Python, and my day job is supporting a network monitoring software. Most of the import points have already been covered, but I'll consolidate/reiterate here:
Don't reinvent the wheel - there are dozens of network monitoring solutions that you can use to perform ping tests and analyze collected data. See for example
Nagios
Zenoss
OpenNMS
PyNMS
If you insist on doing this yourself, this can be done in either Perl or Python - use the one you know best. If you're planning on parsing a lot of text, it will be easier to do this "quick and dirty" in Perl than it will be in Python. Both can do it, but Python requires an OOP approach and it just isn't as easy as Perl's inline regex syntax.
Use libraries - many, many people have done this task before you so look around for a suitable lib like Net::Ping in Perl or the icmplib in Python or this ping.py code.
Use threads or asynchronous pings - otherwise pinging is going to take forever for example see this recipe using threads to run pings simultaneously. This is particularly easy to do in Python using either approach, so this is one place Python will be easier to work with IMO than using Perl.
A:
Go with Perl.
You'll have access to a nice Ping object, Net::Ping and storing the results in a database is pretty easy.
A:
Either one should work just fine. If you don't have experience with either, flip a coin. No language is inherently productive; languages allow people to be productive. Different people will benefit differently from different languages.
In general, though, when you know your specific task and need to choose a tool, look for the libraries that would make your life easy. For Perl, check out the Comprehensive Perl Archive Network. There are modules for just every networking thing you might need.
Python probably has very similar tools and libraries; I just don't know what they are.
A:
I know Perl better than Python, so my choice would fall on Perl. That said, I'd argue that on low level tasks (like pinging computers on a network and things like that) they are rather equivalent. Python may have a better object-oriented support but for scripting (that happens to be what you need) the power of Perl is quite obvious. The large pool of tested modules (some of them are even object oriented) that you find on CPAN usually can do everything you need and they can even scale well if you use them appropriately.
A:
I don't know Python, so I can't comment on what it offers, and I agree with those who suggest Nagios or other existing systems.
However, if you decide to roll your own system with Perl, Consider using POE. POE is a cooperative multitasking and networking framework.
POE has a steep learning curve. But you will be repaid for you effort very quickly. POE will provide a solid foundation to build from. Much of the client code you will need is already available on CPAN.
A:
Whichever you know better or are more comfortable using. They both can do the job and do it well, so it is your preference.
A:
Right now I've experimented with the approach of creating some simple unit tests for network services using various TAP libraries (mainly bash+netcat+curl and perl). The advantage is that you write a single script that you can use for both unit and network testing.
The display is done via TAP::Harness::HTML.
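If you do roll your own in Python, a minimal sketch of a threaded ping sweep might look like this (it shells out to the system `ping` binary; the `-c`/`-W` flags assume Linux, and this is an illustration, not any of the tools listed above):

```python
import subprocess
import threading

def ping(host, results, count=1, timeout=2):
    """Ping one host via the system ping binary and record reachability."""
    # -c / -W are the Linux flags; Windows uses -n / -w instead.
    cmd = ["ping", "-c", str(count), "-W", str(timeout), host]
    try:
        rc = subprocess.call(cmd, stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
    except OSError:  # ping binary not found
        rc = -1
    results[host] = (rc == 0)

def ping_sweep(hosts):
    """Ping all hosts concurrently; return {host: reachable}."""
    results = {}
    threads = [threading.Thread(target=ping, args=(h, results))
               for h in hosts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Running the pings in threads keeps the total time close to the slowest single ping rather than the sum of all of them.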
| Should I use Perl or Python for network monitoring? | I want to have some work done on the Network front, pinging numerous computers on a LAN and retrieving data about the response time. Which would be the most useful and productive to work with: Perl or Python?
| [
"I agree that it is pretty subjective which programming language you use - essentially I would rather get the job done as quickly and efficiently as possible which making it supportable - so that depends on your infrastructure...\nCan I suggest that you look at Nagios rather than re-inventing the wheel yourself? \n... | [
22,
14,
12,
7,
3,
2,
1,
1
] | [
"I'd say that if you need something quick and dirty that's up and running by this afternoon, then perl is probably the better language.\nHowever for developing solid application that's easy to maintain and extend and that you can build on over time, I'd go with python.\nThis is of course assuming you know both lang... | [
-1
] | [
"networking",
"perl",
"python",
"scripting_language"
] | stackoverflow_0000491380_networking_perl_python_scripting_language.txt |
Q:
my url pattern throwing an error
my url pattern is www.example.com/[any digit or number] say www.eaxmple.com/slug
my urls.py is:
url(r'^(?P<[a-z0-9>+)/$', mymethod),
But is saying:- bad character in group name
another query is that i want to process the slug coming with the url. Based on that value i will render different template. Is it possible? How?
A:
I don't know python, so can't say what the url function does, but your regex should probably look like this:
url(r'^(?P<someName>[a-z0-9]+)/$', mymethod)
This will capture the [a-z0-9] group of characters under the name "someName".
BTW: What about upper case characters?
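The corrected pattern can be checked directly with Python's re module before wiring it into urls.py; the value captured by the named group is what Django would pass to the view as a keyword argument, so the view can pick a template per slug (template naming below is purely illustrative):

```python
import re

# Corrected pattern: the group name goes in <...> right after ?P, and the
# character class [a-z0-9] keeps its closing bracket.
pattern = re.compile(r'^(?P<slug>[a-z0-9]+)/$')

def get_slug(path):
    """Return the captured slug for a matching path, else None."""
    match = pattern.match(path)
    return match.group('slug') if match else None
```

In the view you could then do something like `template = "page_%s.html" % slug` and render it -- the name scheme is an assumption, not part of the question.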
| my url pattern throwing an error | my url pattern is www.example.com/[any digit or number] say www.eaxmple.com/slug
my urls.py is:
url(r'^(?P<[a-z0-9>+)/$', mymethod),
But is saying:- bad character in group name
another query is that i want to process the slug coming with the url. Based on that value i will render different template. Is it possible? How?
| [
"I don't know python, so can't say what the url function does, but your regex should probably look like this:\nurl(r'^(?P<someName>[a-z0-9]+)/$', mymethod)\n\nThis will capture the the [a-z0-9] Group of characters under the name \"someName\".\nBTW: What about upper case characters?\n"
] | [
7
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004275545_django_python.txt |
Q:
call a function in Python syntax
Hey im writing a small program in python 2.6 and i have defined
2 helper functions which does almost all i want, for instance
def helper1:
...
def helper2:
...
Now my problem is that i want to make a new function that gather the two functions in one function so i dont have to write (in shell):
list(helper1(helper2(argument1,argument2)))
but instead just
function(argument1,argument2)
Is there any short way around that? Im all new to python, or do you need more code-sample to be able to answer?
Thanx in advance for any hints or help
A:
def function(arg1, arg2):
return list(helper1(helper2(arg1, arg2)))
should work.
A:
function = lambda x, y: list(helper1(helper2(x, y)))
A:
This is an example of the higher order function compose. It's handy to have laying around
def compose(*functions):
""" Returns the composition of functions"""
functions = reversed(functions)
def composition(*args, **kwargs):
func_iter = iter(functions)
ret = next(func_iter)(*args, **kwargs)
for f in func_iter:
ret = f(ret)
return ret
return composition
You can now write your function as
function1 = compose(list, helper1, helper2)
function2 = compose(tuple, helper3, helper4)
function42 = compose(set, helper4, helper2)
etc.
| call a function in Python syntax | Hey im writing a small program in python 2.6 and i have defined
2 helper functions which does almost all i want, for instance
def helper1:
...
def helper2:
...
Now my problem is that i want to make a new function that gather the two functions in one function so i dont have to write (in shell):
list(helper1(helper2(argument1,argument2)))
but instead just
function(argument1,argument2)
Is there any short way around that? Im all new to python, or do you need more code-sample to be able to answer?
Thanx in advance for any hints or help
| [
"def function(arg1, arg2):\n return list(helper1(helper2(arg1, arg2)))\n\nshould work.\n",
"function = lambda x, y: list(helper1(helper2(x, y)))\n\n",
"This is an example of the higher order function compose. It's handy to have laying around\ndef compose(*functions):\n \"\"\" Returns the composition of fu... | [
8,
2,
2
] | [] | [] | [
"python",
"syntax"
] | stackoverflow_0004275764_python_syntax.txt |
Q:
Regular expression for Indian landline Phone numbers
Indian landline phone numbers are in one of the following format.
080 25478965
0416-2565478
08172-268032
Whats the simplest Regex to accommodate all these. The space between the city code and the phone number can be a whitespace or a "-". It would be great if the regex can accommodate the case without the separator between the city code and the phone number.
A:
For these three formats exactly:
^\d{3}([ -]\d\d|\d[ -]\d|\d\d[ -])\d{6}$
Another, more liberal option, is to allow a space or a dash after each digit (except maybe the last):
^(\d[ -]?){10}\d$
A:
I'd probably just use /^(\d+[ \-]+\d+)$/ if you don't need to verify the number of digits etc. If you'd like to get both parts separated: /^(\d+)[ \-]+(\d+)$/
A:
^[\d]{3,4}[\-\s]*[\d]{6,7}$
This would match [3-4 digits][zero to many dashes and whitespaces][6-7 digits]
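As a quick sanity check, the first answer's strict pattern can be exercised with Python's re module against the three sample formats:

```python
import re

# Pattern from the first answer: 3-digit prefix, one separator (space or
# dash) somewhere in the next three digits, then six more digits.
pattern = re.compile(r'^\d{3}([ -]\d\d|\d[ -]\d|\d\d[ -])\d{6}$')

samples = ['080 25478965', '0416-2565478', '08172-268032']
matches = [bool(pattern.match(s)) for s in samples]
```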
| Regular expression for Indian landline Phone numbers | Indian landline phone numbers are in one of the following format.
080 25478965
0416-2565478
08172-268032
Whats the simplest Regex to accommodate all these. The space between the city code and the phone number can be a whitespace or a "-". It would be great if the regex can accommodate the case without the separator between the city code and the phone number.
| [
"For these three formats exactly:\n^\\d{3}([ -]\\d\\d|\\d[ -]\\d|\\d\\d[ -])\\d{6}$\n\nAnother, more liberal option, is to allow a space or a dash after each digit (except maybe the last):\n^(\\d[ -]?){10}\\d$\n\n",
"I'd probably just use /^(\\d+[ \\-]+\\d+)$/ if you don't need to verify the number of digits etc.... | [
4,
2,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0004276057_python_regex.txt |
Q:
How to download a webpage that require username and password?
For example, I want to download this page after inserting username and password:
http://forum.ubuntu-it.org/
I have tried with wget but it doesn't work.
Is there a solution with python ?
You can test with these username and password:
username: johnconnor
password: hellohello
A:
You can use the urllib2 module and with that it is possible do to basic and form based authentication (with cookies support).
Here is a nice tutorial on your issue.
A:
Try the mechanize module. It's basically a programmatic browser interface.
A:
Like @robert says, use mechanize.
To get you started:
from mechanize import Browser
b = Browser()
b.open("http://forum.ubuntu-it.org/index.php")
b.select_form(nr=0)
b["user"] = "johnconnor"
b["passwrd"] = "hellohello"
b.submit()
response = b.response().read()
if "Salve <b>johnconnor</b>" in response:
print "Logged in!"
| How to download a webpage that require username and password? | For example, I want to download this page after inserting username and password:
http://forum.ubuntu-it.org/
I have tried with wget but it doesn't work.
Is there a solution with python ?
You can test with these username and password:
username: johnconnor
password: hellohello
| [
"You can use the urllib2 module and with that it is possible do to basic and form based authentication (with cookies support).\nHere is a nice tutorial on your issue.\n",
"Try the mechanize module. It's basically a programmatic browser interface.\n",
"Like @robert says, use mechanize. \nTo get you started:\nfr... | [
2,
1,
1
] | [] | [] | [
"python",
"urllib",
"urllib2"
] | stackoverflow_0004275955_python_urllib_urllib2.txt |
Q:
os.walk exclude .svn folders
I got a script that I want to use to change a repeated string throughout a project folder structure. Once changed then I can check this into SVN. However when I run my script it goes into the .svn folders which I want it to ingore. How can I achieve this? Code below, thanks.
import os
import sys
replacement = "newString"
toReplace = "oldString"
rootdir = "pathToProject"
for root, subFolders, files in os.walk(rootdir):
print subFolders
if not ".svn" in subFolders:
for file in files:
fileParts = file.split('.')
if len(fileParts) > 1:
if not fileParts[len(fileParts)-1] in ["dll", "suo"]:
fpath = os.path.join(root, file)
with open(fpath) as f:
s = f.read()
s = s.replace(toReplace, replacement)
with open(fpath, "w") as f:
f.write(s)
print "DONE"
A:
Try this:
for root, subFolders, files in os.walk(rootdir):
if '.svn' in subFolders:
subFolders.remove('.svn')
And then continue processing.
A:
Err... what?
When topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again.
for root, subFolders, files in os.walk(rootdir):
try:
subFolders.remove('.svn')
except ValueError:
pass
dosomestuff()
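A complete sketch combining the in-place pruning trick with the original replacement loop (the extension check is simplified with os.path.splitext, and the skip lists are parameters you can adjust):

```python
import os

def replace_in_tree(rootdir, old, new, skip_dirs=('.svn',),
                    skip_exts=('.dll', '.suo')):
    """Replace old->new in text files under rootdir, pruning skip_dirs."""
    for root, subfolders, files in os.walk(rootdir):
        # Modifying the list in place stops os.walk from descending
        # into the pruned directories (topdown walk only).
        subfolders[:] = [d for d in subfolders if d not in skip_dirs]
        for name in files:
            if os.path.splitext(name)[1] in skip_exts:
                continue
            fpath = os.path.join(root, name)
            with open(fpath) as f:
                s = f.read()
            with open(fpath, "w") as f:
                f.write(s.replace(old, new))
```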
| os.walk exclude .svn folders | I got a script that I want to use to change a repeated string throughout a project folder structure. Once changed then I can check this into SVN. However when I run my script it goes into the .svn folders which I want it to ingore. How can I achieve this? Code below, thanks.
import os
import sys
replacement = "newString"
toReplace = "oldString"
rootdir = "pathToProject"
for root, subFolders, files in os.walk(rootdir):
print subFolders
if not ".svn" in subFolders:
for file in files:
fileParts = file.split('.')
if len(fileParts) > 1:
if not fileParts[len(fileParts)-1] in ["dll", "suo"]:
fpath = os.path.join(root, file)
with open(fpath) as f:
s = f.read()
s = s.replace(toReplace, replacement)
with open(fpath, "w") as f:
f.write(s)
print "DONE"
| [
"Try this:\nfor root, subFolders, files in os.walk(rootdir):\n if '.svn' in subFolders:\n subFolders.remove('.svn')\n\nAnd then continue processing. \n",
"Err... what?\n\nWhen topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only re... | [
31,
8
] | [] | [] | [
"python"
] | stackoverflow_0004276255_python.txt |
Q:
How to convert Python threading code to multiprocessing code?
I need to convert a threading application to a multiprocessing application for multiple reasons (GIL, memory leaks). Fortunately the threads are quite isolated and only communicate via Queue.Queues. This primitive is also available in multiprocessing so everything looks fine. Now before I enter this minefield I'd like to get some advice on the upcoming problems:
How to ensure that my objects can be transfered via the Queue? Do I need to provide some __setstate__?
Can I rely on put returning instantly (like with threading Queues)?
General hints/tips?
Anything worthwhile to read apart from the Python documentation?
A:
Answer to part 1:
Everything that has to pass through a multiprocessing.Queue (or Pipe or whatever) has to be picklable. This includes basic types such as tuples, lists and dicts. Also classes are supported if they are top-level and not too complicated (check the details). Trying to pass lambdas around will fail however.
Answer to part 2:
A put consists of two parts: It takes a semaphore to modify the queue and it optionally starts a feeder thread. So if no other Process tries to put to the same Queue at the same time (for instance because there is only one Process writing to it), it should be fast. For me it turned out to be fast enough for all practical purposes.
Partial answer to part 3:
The plain multiprocessing.queue.Queue lacks a task_done method, so it cannot be used as a drop-in replacement directly. (A subclass provides the method.)
The old processing.queue.Queue lacks a qsize method and the newer multiprocessing version is inaccurate (just keep this in mind).
Since file descriptors are normally inherited on fork, care needs to be taken to close them in the right processes.
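A quick way to verify point 1 up front is to try pickling each object you plan to send (a small helper of my own, not part of the multiprocessing API):

```python
import pickle

def is_picklable(obj):
    """Return True if obj can be pickled -- a prerequisite for sending
    it across a multiprocessing.Queue to another process."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False
```

Tuples, lists, dicts, and instances of simple top-level classes pass; lambdas, open file handles, and similar objects do not.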
| How to convert Python threading code to multiprocessing code? | I need to convert a threading application to a multiprocessing application for multiple reasons (GIL, memory leaks). Fortunately the threads are quite isolated and only communicate via Queue.Queues. This primitive is also available in multiprocessing so everything looks fine. Now before I enter this minefield I'd like to get some advice on the upcoming problems:
How to ensure that my objects can be transfered via the Queue? Do I need to provide some __setstate__?
Can I rely on put returning instantly (like with threading Queues)?
General hints/tips?
Anything worthwhile to read apart from the Python documentation?
| [
"Answer to part 1:\nEverything that has to pass through a multiprocessing.Queue (or Pipe or whatever) has to be picklable. This includes basic types such as tuples, lists and dicts. Also classes are supported if they are top-level and not too complicated (check the details). Trying to pass lambdas around will fail ... | [
5
] | [] | [] | [
"gil",
"multiprocessing",
"multithreading",
"python"
] | stackoverflow_0004097227_gil_multiprocessing_multithreading_python.txt |
Q:
help to convert python ctypes struct to 64bit
I found this script attached to a thread in the fontforge-users mailing list. It does exactly what I want. However, it seems only to work on 32bit systems, and I would really like to use it on my 64bit system.
I've done a little reading but I can't work out how I should modify this script (presumably the stuct?) to make it work under a 64bit architecture. Can anyone help?
Cheers!
#!/usr/bin/python
# vim:ts=8:sw=4:expandtab:encoding=utf-8
# Export named font from PDF file using fontforge and ctypes
import sys
from ctypes import *
STRING = c_char_p
real = c_longdouble
# We need the `map` attribute of SplineFont, so declare an incomplete struct.
# see: http://sourceforge.net/projects/wqy/files/misc/
# file: fontforge-bindctypes-0.1.tar.bz2
class splinefont(Structure):
pass
SplineFont = splinefont
splinefont._fields_ = [
('fontname', STRING),
('fullname', STRING),
('familyname', STRING),
('weight', STRING),
('copyright', STRING),
('filename', STRING),
('defbasefilename', STRING),
('version', STRING),
('italicangle', real),
('upos', real),
('uwidth', real),
('ascent', c_int),
('descent', c_int),
('uniqueid', c_int),
('glyphcnt', c_int),
('glyphmax', c_int),
('glyphs', POINTER(c_void_p)),
('changed', c_uint, 1),
('changed_since_autosave', c_uint, 1),
('changed_since_xuidchanged', c_uint, 1),
('display_antialias', c_uint, 1),
('display_bbsized', c_uint, 1),
('dotlesswarn', c_uint, 1),
('onlybitmaps', c_uint, 1),
('serifcheck', c_uint, 1),
('issans', c_uint, 1),
('isserif', c_uint, 1),
('hasvmetrics', c_uint, 1),
('loading_cid_map', c_uint, 1),
('dupnamewarn', c_uint, 1),
('encodingchanged', c_uint, 1),
('multilayer', c_uint, 1),
('strokedfont', c_uint, 1),
('new', c_uint, 1),
('compacted', c_uint, 1),
('backedup', c_uint, 2),
('use_typo_metrics', c_uint, 1),
('weight_width_slope_only', c_uint, 1),
('save_to_dir', c_uint, 1),
('head_optimized_for_cleartype', c_uint, 1),
('ticked', c_uint, 1),
('internal_temp', c_uint, 1),
('complained_about_spiros', c_uint, 1),
('use_xuid', c_uint, 1),
('use_uniqueid', c_uint, 1),
('fv', c_void_p),
('metrics', c_void_p),
('uni_interp', c_int),
('for_new_glyphs', c_void_p),
('map', c_void_p),
# ...
]
def main():
if len(sys.argv) != 3:
print "Usage: %s doc.pdf fontname" % sys.argv[0]
sys.exit(2)
pdfname = sys.argv[1]
fontname = sys.argv[2]
fontfile = fontname + '.ttf'
# ctypes functions
libc = CDLL("libc.so.6")
libc.fopen.restype = c_void_p
libc.fopen.argtypes = [c_char_p, c_char_p]
lib_ff = CDLL('libfontforge.so.1')
# SplineFont *_SFReadPdfFont(FILE *pdf,char *filename,
# char *select_this_font, enum openflags openflags)
lib_ff._SFReadPdfFont.argtypes = [c_void_p, c_char_p, c_char_p, c_int]
lib_ff._SFReadPdfFont.restype = POINTER(SplineFont)
# int GenerateScript(SplineFont *sf, char *filename, char *bitmaptype,
# int fmflags, int res, char *subfontdefinition, struct sflist *sfs,
# EncMap *map, NameList *rename_to,int layer)
lib_ff.GenerateScript.argtypes = [POINTER(SplineFont), c_char_p, c_char_p,
c_int, c_int, c_char_p, c_void_p, c_void_p, c_void_p, c_int]
lib_ff.GenerateScript.restype = c_int
# need to somehow initialize libfontforge or it will segfault somewhere.
lib_ff.doinitFontForgeMain()
fobj = libc.fopen(pdfname, "rb")
if not fobj:
print "%s not found" % pdfname
sys.exit(1)
font = lib_ff._SFReadPdfFont(fobj, pdfname, fontname, 0)
ret = 0
if bool(font):
ret = lib_ff.GenerateScript(font, fontfile, None, -1, -1, None, None,
font.contents.map, None, 1)
if ret:
print 'Font export to "%s".' % fontfile
else:
print "** Error ** Failed to export font!!"
if __name__ == '__main__':
main()
A:
The question is whether FONTFORGE_CONFIG_USE_LONGDOUBLE is defined or not in /usr/include/fontforge/config.h. If it is defined, then the code's definition is correct. On my amd64 linux installation, neither FONTFORGE_CONFIG_USE_LONGDOUBLE nor FONTFORGE_CONFIG_USE_DOUBLE are defined, so I needed to change
real = c_float
With that change, it works fine.
| help to convert python ctypes struct to 64bit | I found this script attached to a thread in the fontforge-users mailing list. It does exactly what I want. However, it seems only to work on 32bit systems, and I would really like to use it on my 64bit system.
I've done a little reading but I can't work out how I should modify this script (presumably the stuct?) to make it work under a 64bit architecture. Can anyone help?
Cheers!
#!/usr/bin/python
# vim:ts=8:sw=4:expandtab:encoding=utf-8
# Export named font from PDF file using fontforge and ctypes
import sys
from ctypes import *
STRING = c_char_p
real = c_longdouble
# We need the `map` attribute of SplineFont, so declare an incomplete struct.
# see: http://sourceforge.net/projects/wqy/files/misc/
# file: fontforge-bindctypes-0.1.tar.bz2
class splinefont(Structure):
pass
SplineFont = splinefont
splinefont._fields_ = [
('fontname', STRING),
('fullname', STRING),
('familyname', STRING),
('weight', STRING),
('copyright', STRING),
('filename', STRING),
('defbasefilename', STRING),
('version', STRING),
('italicangle', real),
('upos', real),
('uwidth', real),
('ascent', c_int),
('descent', c_int),
('uniqueid', c_int),
('glyphcnt', c_int),
('glyphmax', c_int),
('glyphs', POINTER(c_void_p)),
('changed', c_uint, 1),
('changed_since_autosave', c_uint, 1),
('changed_since_xuidchanged', c_uint, 1),
('display_antialias', c_uint, 1),
('display_bbsized', c_uint, 1),
('dotlesswarn', c_uint, 1),
('onlybitmaps', c_uint, 1),
('serifcheck', c_uint, 1),
('issans', c_uint, 1),
('isserif', c_uint, 1),
('hasvmetrics', c_uint, 1),
('loading_cid_map', c_uint, 1),
('dupnamewarn', c_uint, 1),
('encodingchanged', c_uint, 1),
('multilayer', c_uint, 1),
('strokedfont', c_uint, 1),
('new', c_uint, 1),
('compacted', c_uint, 1),
('backedup', c_uint, 2),
('use_typo_metrics', c_uint, 1),
('weight_width_slope_only', c_uint, 1),
('save_to_dir', c_uint, 1),
('head_optimized_for_cleartype', c_uint, 1),
('ticked', c_uint, 1),
('internal_temp', c_uint, 1),
('complained_about_spiros', c_uint, 1),
('use_xuid', c_uint, 1),
('use_uniqueid', c_uint, 1),
('fv', c_void_p),
('metrics', c_void_p),
('uni_interp', c_int),
('for_new_glyphs', c_void_p),
('map', c_void_p),
# ...
]
def main():
if len(sys.argv) != 3:
print "Usage: %s doc.pdf fontname" % sys.argv[0]
sys.exit(2)
pdfname = sys.argv[1]
fontname = sys.argv[2]
fontfile = fontname + '.ttf'
# ctypes functions
libc = CDLL("libc.so.6")
libc.fopen.restype = c_void_p
libc.fopen.argtypes = [c_char_p, c_char_p]
lib_ff = CDLL('libfontforge.so.1')
# SplineFont *_SFReadPdfFont(FILE *pdf,char *filename,
# char *select_this_font, enum openflags openflags)
lib_ff._SFReadPdfFont.argtypes = [c_void_p, c_char_p, c_char_p, c_int]
lib_ff._SFReadPdfFont.restype = POINTER(SplineFont)
# int GenerateScript(SplineFont *sf, char *filename, char *bitmaptype,
# int fmflags, int res, char *subfontdefinition, struct sflist *sfs,
# EncMap *map, NameList *rename_to,int layer)
lib_ff.GenerateScript.argtypes = [POINTER(SplineFont), c_char_p, c_char_p,
c_int, c_int, c_char_p, c_void_p, c_void_p, c_void_p, c_int]
lib_ff.GenerateScript.restype = c_int
# need to somehow initialize libfontforge or it will segfault somewhere.
lib_ff.doinitFontForgeMain()
fobj = libc.fopen(pdfname, "rb")
if not fobj:
print "%s not found" % pdfname
sys.exit(1)
font = lib_ff._SFReadPdfFont(fobj, pdfname, fontname, 0)
ret = 0
if bool(font):
ret = lib_ff.GenerateScript(font, fontfile, None, -1, -1, None, None,
font.contents.map, None, 1)
if ret:
print 'Font export to "%s".' % fontfile
else:
print "** Error ** Failed to export font!!"
if __name__ == '__main__':
main()
| [
"The question is whether FONTFORGE_CONFIG_USE_LONGDOUBLE is defined or not in /usr/include/fontforge/config.h. If it is defined, then the code's definition is correct. On my amd64 linux installation, neither FONTFORGE_CONFIG_USE_LONGDOUBLE nor FONTFORGE_CONFIG_USE_DOUBLE are defined, so I needed to change\nreal = ... | [
2
] | [] | [] | [
"32bit_64bit",
"ctypes",
"python",
"struct"
] | stackoverflow_0004275556_32bit_64bit_ctypes_python_struct.txt |
Q:
Tried to embed python in a visual studio 2010 c++ file, exits with code 1
I am trying to embed some python code in a C++ application I am developing with MS Visual Studio C++ 2010. But when I run the program, it exits with code 0x01 when I call Py_Initialize().
I don't know how to find out what went wrong. The help file says Py_Initialize can't return an error value; it only fails fatally.
But, why did it fail?
I am using a self-compiled python27_d.dll, which i created with the msvs project files in the source downloads from python.org.
A:
Is there simple 'hello world' type example of the Py_Initilize code in the python sdk you can start with?
That will at least tell you if you have the compiler environment setup correctly, or if the error is in your usage.
A:
Well, i finally found out what went wrong.
I did compile my python27_d.dll with the same VC10 as my program itself.
But my program is normally compiled as a 64-bit executable. I just forgot to compile the DLL for x64, too. I didn't think this would lead to such annoying behaviour, as I believed I would get a linker error then.
| Tried to embed python in a visual studio 2010 c++ file, exits with code 1 | I am trying to embed some python code in a C++ application I am developing with MS Visual Studio C++ 2010. But when I run the program, it exits with code 0x01 when I call Py_Initialize().
I don't know how to find out what went wrong. The help file says Py_Initialize can't return an error value; it only fails fatally.
But, why did it fail?
I am using a self-compiled python27_d.dll, which i created with the msvs project files in the source downloads from python.org.
| [
"Is there simple 'hello world' type example of the Py_Initilize code in the python sdk you can start with?\nThat will at least tell you if you have the compiler environment setup correctly, or if the error is in your usage.\n",
"Well, i finally found out what went wrong.\nI did compile my python27_d.dll with the... | [
0,
0
] | [] | [] | [
"c++",
"python",
"python_c_api",
"python_embedding",
"visual_studio_2010"
] | stackoverflow_0004216988_c++_python_python_c_api_python_embedding_visual_studio_2010.txt |
Q:
Nested conversation list with django
Hay I have a model like this
class Photo(models.Model):
    # Photo object fields...
class PhotoThread(models.Model):
photo = models.ForeignKey(Photo)
message = models.TextField(blank=True)
reply_to = models.ForeignKey('self', related_name='replies', null=True, blank=True)
votes = models.IntegerField()
As you can see a Thread object has a reply_to field, so that Threads can become children of other Threads.
I can do stuff like -
photo = Photo.objects.get(pk=1)
threads = photo.photothread_set.all()
This will get the threads to a Photo, however, this system allows replies to also have replies.
How would i go about looping through all replies and getting replies for those (all the way down to the maxiumum number of replies we have).
I want to display this as a nested HTML list. Also i want to be able to order all Threads and replies by the 'votes' field.
Thanks
A:
Welcome to recursion. Here's a common solution
def thread_plus_replies( someThread ):
    return someThread, [ thread_plus_replies(r) for r in someThread.replies.order_by('votes') ]
This kind of thing returns a list of 2-tuples for each thread and all of it's replies.
If a reply has no subsidiary threads, the follow-on list is empty. This can get clunky, so some folks like to optimize it.
def thread_plus_replies( someThread ):
    if someThread.replies.count() == 0:
        return someThread
    return someThread, [ thread_plus_replies(r) for r in someThread.replies.all() ]
Sticking with the first one, each thread is a 2-tuple. We can decorate the 2-tuple with HTML.
def make_html( thread_results ):
    head, tail = thread_results
    children = "".join( make_html(t) for t in tail )
    return "<ul><li>{0}</li>{1}</ul>".format( head, children )
This will give you nested <ul> tags for your nested threads.
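Stripped of the ORM, the same recursive gather-then-render shape can be exercised with plain dicts standing in for the model objects (the dict keys are purely illustrative stand-ins for the model fields):

```python
def gather(thread):
    """Return (message, [nested results]), with replies ordered by votes.

    `thread` is a hypothetical dict: {'message', 'votes', 'replies'}.
    """
    replies = sorted(thread['replies'], key=lambda t: t['votes'])
    return thread['message'], [gather(r) for r in replies]

def to_html(result):
    """Render one (head, tail) result as nested <ul> lists."""
    head, tail = result
    children = ''.join(to_html(t) for t in tail)
    return '<ul><li>{0}</li>{1}</ul>'.format(head, children)
```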
A:
Write an inclusion template tag that uses itself in its template.
| Nested conversation list with django | Hay I have a model like this
class Photo(models.Model):
    # Photo object fields...
class PhotoThread(models.Model):
photo = models.ForeignKey(Photo)
message = models.TextField(blank=True)
reply_to = models.ForeignKey('self', related_name='replies', null=True, blank=True)
votes = models.IntegerField()
As you can see a Thread object has a reply_to field, so that Threads can become children of other Threads.
I can do stuff like -
photo = Photo.objects.get(pk=1)
threads = photo.photothread_set.all()
This will get the threads to a Photo, however, this system allows replies to also have replies.
How would i go about looping through all replies and getting replies for those (all the way down to the maxiumum number of replies we have).
I want to display this as a nested HTML list. Also i want to be able to order all Threads and replies by the 'votes' field.
Thanks
| [
"Welcome to recursion. Here's a common solution\ndef thread_plus_replies( someThread ):\n return someThread, [ thread_plus_replies(r) for r in someThread.replies.ordered('votes').all() ]\n\nThis kind of thing returns a list of 2-tuples for each thread and all of it's replies. \nIf a reply has no subsidiary thr... | [
2,
1
] | [] | [] | [
"django",
"foreign_keys",
"list",
"nested",
"python"
] | stackoverflow_0004277243_django_foreign_keys_list_nested_python.txt |
Q:
Which text editor for python development?
Possible Duplicate:
What IDE to use for Python?
Which is the preferred text editor for Python development: Vim, Emacs, or another?
A:
There is no preferred editor for anything. It's whatever your personal preference is. I personally prefer emacs with python-mode because it makes indenting easier (and indenting is kind of important in python), but anything will work.
A:
VIM! And not for Python only.
A:
The best editor is the one that works for you. Period! If you are comfortable with gedit, use that. If you are comfortable with emacs, use that. The word best is subjective, unless you are looking for specific features that vary from one to another.
| Which text editor for python development? |
Possible Duplicate:
What IDE to use for Python?
Which is the preferred text editor for Python development: Vim, Emacs, or another?
| [
"There is no preferred editor for anything. It's whatever your personal preference is. I personally prefer emacs with python-mode because it makes indenting easier (and indenting is kind of important in python), but anything will work.\n",
"VIM! And not for Python only.\n",
"The best editor is the one that work... | [
0,
0,
0
] | [] | [] | [
"editor",
"ide",
"python",
"text"
] | stackoverflow_0004278223_editor_ide_python_text.txt |
Q:
CPython from Java?
I need to call CPython code from Java. What tools/APIs/libraries exist out there to help me do this?
Jython is not an option since the Python code is heavily dependent upon numpy.
edit 1: The main() function should be Java, not Python (i.e. I need to embed CPython into Java, not vice versa.)
edit 2: I should also mention that I'll be passing large numeric arrays between Java and Python and therefore a solution that brings the two into the same process space would be preferable (but not mandatory.)
A:
You can take a look at using Jepp to embed CPython into Java. Read documentation here.
edit: For windows the project has prebuilt binaries for Python 2.4, 2.5, and 2.6. For Linux/Unix systems, you have to build it yourself.
| CPython from Java? | I need to call CPython code from Java. What tools/APIs/libraries exist out there to help me do this?
Jython is not an option since the Python code is heavily dependent upon numpy.
edit 1: The main() function should be Java, not Python (i.e. I need to embed CPython into Java, not vice versa.)
edit 2: I should also mention that I'll be passing large numeric arrays between Java and Python and therefore a solution that brings the two into the same process space would be preferable (but not mandatory.)
| [
"You can take a look at using Jepp to embed CPython into Java. Read documentation here.\nedit: For windows the project has prebuilt binaries for Python 2.4, 2.5, and 2.6. For Linux/Unix systems, you have to build it yourself.\n"
] | [
4
] | [
"You probably want to read the docs on embedding a CPython interpreter. Also, on how to load native libraries in Java (was that called JNI?)\n"
] | [
-1
] | [
"cpython",
"java",
"jython",
"numpy",
"python"
] | stackoverflow_0004278182_cpython_java_jython_numpy_python.txt |
Q:
Python GUI with custom render/drawing
I am looking for a Python GUI library that I can rewrite the rendering / drawing.
It has to support basic widgets (buttons, combo boxes, list boxes, text editors, scrolls,), layout management, event handling
The thing that I am looking for is to use my custom Direct3D and OpenGL renderer for all of the GUI's drawing / rendering.
edit suggested by S.Lott: I need to use this GUI for a 3D editor, since I have to drag and drop a lot of things from the GUI elements to the 3d render area, I wanted to use a GUI system that renders with Direct3D (preferred) or OpenGL. It also has to have a nice look. It is difficult to achieve this with GUI's like WPF, since WPF does not have a handle. Also it needs to be absolutely free for commercial use.
edit: I would also like to use the rendering context I initialized for the 3d part in my application
A:
I don't know what are you working at, so maybe this is not what you're looking for, but:
Have you considered using Blender + its Game Engine?
It supports Python scripting, and provides some APIs to create "standard" GUIs too, while allowing you to do a lot of cool stuff with 3d models. This could be especially useful if your application does a lot of 3d models manipulation..
Then you can "compile" it (it just builds the all-in-one package containing all the dependencies, in a way similar to what py2exe does) for any platform you need.
A:
You can use Qt Scene Framework with OpenGL rendering. There are many examples on Nokia site.
A:
The best Python GUI toolkit is wxPython (the Python binding for wxWidgets).
This is not merely my opinion, see also: wxPython quotes
wxPython is the best and most mature
cross-platform GUI toolkit, given a
number of constraints. The only reason
wxPython isn't the standard Python GUI
toolkit is that Tkinter was there
first. -- Guido van Rossum
I can't say how easy or hard it would be to add your own renderer.
A:
There are OpenGL bindings in Python that will get you 3D rendering. Personally, I'd use wxpython as your 'gui' manager and use the bindings to do opengl for the rest. Wx has the necessary demos (check the wxpython demos installation) and information in their GLCanvas demos.
Another sample code is here too.
A:
You might find PyClutter useful.
| Python GUI with custom render/drawing | I am looking for a Python GUI library that I can rewrite the rendering / drawing.
It has to support basic widgets (buttons, combo boxes, list boxes, text editors, scrolls,), layout management, event handling
The thing that I am looking for is to use my custom Direct3D and OpenGL renderer for all of the GUI's drawing / rendering.
edit suggested by S.Lott: I need to use this GUI for a 3D editor, since I have to drag and drop a lot of things from the GUI elements to the 3d render area, I wanted to use a GUI system that renders with Direct3D (preferred) or OpenGL. It also has to have a nice look. It is difficult to achieve this with GUI's like WPF, since WPF does not have a handle. Also it needs to be absolutely free for commercial use.
edit: I would also like to use the rendering context I initialized for the 3d part in my application
| [
"I don't know what are you working at, so maybe this is not what you're looking for, but:\nHave you considered using Blender + its Game Engine?\nIt supports Python scripting, and provides some APIs to create \"standard\" GUIs too, while allowing you to do a lot of cool stuff with 3d models. This could be especially... | [
2,
1,
1,
0,
0
] | [] | [] | [
"directx",
"opengl",
"python",
"user_interface"
] | stackoverflow_0004203614_directx_opengl_python_user_interface.txt |
Q:
Vix API in python
I need to import VIX api libs in ubuntu using ctypes module.
When I do:
vix = CDLL('libvix.so')
It fails: "cannot open..."
What's the problem? libvix.so and the Python module are in the same directory.
thanks!!
A:
If the rest of the error is something like
OSError: libvix.so: cannot open shared object file: No such file or directory
Then you probably need to put the location of the libvix.so in the LD_LIBRARY_PATH environment variable.
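For example, run `export LD_LIBRARY_PATH=/path/to/dir` in the shell before launching Python. An alternative that avoids the environment variable entirely is to pass an explicit path to CDLL; a minimal sketch (the helper name and its default directory are illustrative, not part of ctypes):

```python
import os
from ctypes import CDLL

def load_local_lib(name, directory=None):
    """Load a shared library by explicit path instead of relying on
    LD_LIBRARY_PATH. By default, look next to this script."""
    if directory is None:
        directory = os.path.dirname(os.path.abspath(__file__))
    # CDLL accepts a full path; a bare name is searched only in the
    # standard system library locations (plus LD_LIBRARY_PATH).
    return CDLL(os.path.join(directory, name))
```

With this, `vix = load_local_lib('libvix.so')` would find the library sitting next to the script without touching the environment.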
| Vix API in python | I need to import VIX api libs in ubuntu using ctypes module.
When I do:
vix = CDLL('libvix.so')
It fails: "cannot open..."
What's the problem? libvix.so and the Python module are in the same directory.
thanks!!
| [
"If the rest of the error is something like \nOSError: libvix.so: cannot open shared object file: No such file or directory\n\nThen you probably need to put the location of the libvix.so in the LD_LIBRARY_PATH environment variable.\n"
] | [
0
] | [] | [] | [
"python",
"vix"
] | stackoverflow_0004277895_python_vix.txt |
Q:
Python: Selecting numbers with associated probabilities
Possible Duplicates:
Random weighted choice
Generate random numbers with a given (numerical) distribution
I have a list of lists which contains a series of numbers and their associated probabilities.
prob_list = [[1, 0.5], [2, 0.25], [3, 0.05], [4, 0.01], [5, 0.09], [6, 0.1]]
for example in prob_list[0] the number 1 has a probability of 0.5 associated with it. So you would expect 1 to show up 50% of the time.
How do I add weight to the numbers when I select them?
NOTE: the amount of numbers in the list can vary from 6 - 100
EDIT
In the list I have 6 numbers with their associated probabilities. I want to select two numbers based on their probability.
No number can be selected twice. If "2" is selected it can not be selected again.
A:
I'm going to assume the probabilities all add up to 1. If they don't, you're going to have to scale them accordingly so that they do.
First generate a uniform random variable [0, 1] using random.random(). Then pass through the list, summing the probabilities. The first time the sum exceeds the random number, return the associated number. This way, if the uniform random variable generated falls within the range (0.5, 0.75] in your example, 2 will be returned, thus giving it the required 0.25 probability of being returned.
import random
import sys
def pick_random(prob_list):
r, s = random.random(), 0
for num in prob_list:
s += num[1]
if s >= r:
return num[0]
print >> sys.stderr, "Error: shouldn't get here"
Here's a test showing it works:
import collections
count = collections.defaultdict(int)
for i in xrange(10000):
count[pick_random(prob_list)] += 1
for n in count:
print n, count[n] / 10000.0
which outputs:
1 0.498
2 0.25
3 0.0515
4 0.0099
5 0.0899
6 0.1007
EDIT: Just saw the edit in the question. If you want to select two distinct numbers, you can repeat the above until your second number chosen is distinct. But this will be terribly slow if one number has a very high (e.g. 0.99999999) probability associated with it. In this case, you could remove the first number from the list and rescale the probabilities so that they sum to 1 before selecting the second number.
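A sketch of that remove-and-rescale approach (the function name and structure are my own, not from the answer above):

```python
import random

def pick_two(prob_list):
    """Pick two distinct numbers from [[value, prob], ...] pairs.

    After the first pick, the chosen entry is removed and the remaining
    probabilities are rescaled so they sum to 1 again before the second pick.
    """
    remaining = [pair[:] for pair in prob_list]  # copy so the input survives
    picks = []
    for _ in range(2):
        r, s = random.random(), 0.0
        chosen = len(remaining) - 1  # fall back to the last entry (float rounding)
        for i, (value, prob) in enumerate(remaining):
            s += prob
            if s >= r:
                chosen = i
                break
        value, prob = remaining.pop(chosen)
        picks.append(value)
        if remaining:
            total = sum(p for _, p in remaining)
            remaining = [[v, p / total] for v, p in remaining]
    return picks
```

Because the first pick is popped from the working copy, the second pick can never repeat it, and the rescaling keeps the remaining weights proportional to their original probabilities.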
A:
Here's something that appears to work and meet all your specifications (and subjectively it seems pretty fast). Note that your constraint that the second number not be the same as the first throws the probabilities off for selecting it. That issue is effectively ignored by the code below and it just enforces the restriction (in other words the probability of what the second number is won't be that given for each number in the prob_list).
import random
prob_list = [[1, 0.5], [2, 0.25], [3, 0.05], [4, 0.01], [5, 0.09], [6, 0.1]]
# create a list with the running total of the probabilities
acc = 0.0
acc_list = [acc]
for t in prob_list:
acc += t[1]
acc_list.append(acc)
TOLERANCE = .000001
def approx_eq(v1, v2):
return abs(v1-v2) <= TOLERANCE
def within(low, value, high):
""" Determine if low >= value <= high (approximately) """
return (value > low or approx_eq(low, value)) and \
(value < high or approx_eq(high, value))
def get_selection():
""" Find which weighted interval a random selection falls in """
interval = -1
rand = random.random()
for i in range(len(acc_list)-1):
if within(acc_list[i], rand, acc_list[i+1]):
interval = i
break
if interval == -1:
raise AssertionError('no interval for {:.6}'.format(rand))
return interval
def get_two_different_nums():
sel1 = get_selection()
sel2 = sel1
while sel2 == sel1:
sel2 = get_selection()
return prob_list[sel1][0], prob_list[sel2][0]
A:
This might be what you're looking for. Extension to a solution in Generate random numbers with a given (numerical) distribution. Removes the selected item from the distribution, updates the probabilities and returns selected item, updated distribution. Not proven to work, but should give a good impression of the idea.
def random_distr(l):
assert l # don't accept empty lists
r = random.uniform(0, 1)
s = 0
for i in xrange(len(l)):
item, prob = l[i]
s += prob
if s >= r:
l.pop(i) # remove the item from the distribution
break
else: # Might occur because of floating point inaccuracies
l.pop()
# update probabilities based on new domain
d = 1 - prob
for i in xrange(len(l)):
l[i][1] /= d
return item, l
dist = [[1, 0.5], [2, 0.25], [3, 0.05], [4, 0.01], [5, 0.09], [6, 0.1]]
while dist:
val, dist = random_distr(dist)
print val
A:
Maybe the problem is just related to the data structure. It would be easier if you had a dictionary instead of a list of lists:
prob_list = { 1:0.5, 2:0.25, 3:0.05, 4:0.01, 5:0.09, 6:0.1}
This way, you can obtain the weight corresponding to the number:
import random
number = -1
while number not in prob_list:
number = random.randint( 1, len( prob_list ) )
weight = prob_list[ number ]
print( number, weight )
| Python: Selecting numbers with associated probabilities |
Possible Duplicates:
Random weighted choice
Generate random numbers with a given (numerical) distribution
I have a list of lists which contains a series of numbers and their associated probabilities.
prob_list = [[1, 0.5], [2, 0.25], [3, 0.05], [4, 0.01], [5, 0.09], [6, 0.1]]
for example in prob_list[0] the number 1 has a probability of 0.5 associated with it. So you would expect 1 to show up 50% of the time.
How do I add weight to the numbers when I select them?
NOTE: the amount of numbers in the list can vary from 6 - 100
EDIT
In the list I have 6 numbers with their associated probabilities. I want to select two numbers based on their probability.
No number can be selected twice. If "2" is selected it can not be selected again.
| [
"I'm going to assume the probabilities all add up to 1. If they don't, you're going to have to scale them accordingly so that they do.\nFirst generate a uniform random variable [0, 1] using random.random(). Then pass through the list, summing the probabilities. The first time the sum exceeds the random number, retu... | [
5,
3,
1,
1
] | [] | [] | [
"probability",
"python",
"random",
"statistics"
] | stackoverflow_0004276787_probability_python_random_statistics.txt |
Q:
making a python gui for ffdshow
I was thinking that for a learning project for myself, I would try to make a GUI for ffdshow on Linux, using Tkinter. I wanted to make sure this project would be feasible first, before I get halfway through and run into something that can't be done in Python.
Basic idea is to have a single GUI window with a bunch of drop down boxes that have the various presets (like format or bitrate), as well as a text box where a custom number can be entered if applicable. Then when all the options are selected, the user hits the Start button on the GUI and it shows a progress little bar with a percentage. All the options selected would just send the relevant selections as cli arguments for ffdshow, and begin the conversion progress (essentially turning all the user's input into a single perfect cli command).
Is all this doable with python and tkinter? and is it something that a relative newb with only very basic tkinter experience could pull off with books and other python resources?
Thanks
A:
That is precisely the type of thing that python and Tkinter excel at. And yes, a relative newbie can easily do a task like that.
| making a python gui for ffdshow | I was thinking that for a learning project for myself, I would try to make a GUI for ffdshow on Linux, using Tkinter. I wanted to make sure this project would be feasible first, before I get halfway through and run into something that can't be done in Python.
Basic idea is to have a single GUI window with a bunch of drop down boxes that have the various presets (like format or bitrate), as well as a text box where a custom number can be entered if applicable. Then when all the options are selected, the user hits the Start button on the GUI and it shows a progress little bar with a percentage. All the options selected would just send the relevant selections as cli arguments for ffdshow, and begin the conversion progress (essentially turning all the user's input into a single perfect cli command).
Is all this doable with python and tkinter? and is it something that a relative newb with only very basic tkinter experience could pull off with books and other python resources?
Thanks
| [
"That is precisely the type of thing that python and Tkinter excel at. And yes, a relative newbie can easily do a task like that.\n"
] | [
1
] | [] | [] | [
"ffdshow",
"python",
"tkinter",
"user_interface"
] | stackoverflow_0004278461_ffdshow_python_tkinter_user_interface.txt |
Q:
Removing unknown characters from a text file
I have a large number of files containing data I am trying to process using a Python script.
The files are in an unknown encoding, and if I open them in Notepad++ they contain numerical data separated by a load of 'null' characters (represented as NULL in white on black background in Notepad++).
In order to handle this, I separate the file by the null character \x00 and retrieve only numerical values using the following script:
stripped_data=[]
for root,dirs,files in os.walk(PATH):
for rawfile in files:
(dirName, fileName)= os.path.split(rawfile)
(fileBaseName, fileExtension)=os.path.splitext(fileName)
h=open(os.path.join(root, rawfile),'r')
line=h.read()
for raw_value in line.split('\x00'):
try:
test=float(raw_value)
stripped_data.append(raw_value.strip())
except ValueError:
pass
However, there are sometimes other unrecognised characters in the file (as far as I have found, always at the very beginning) - these show up in Notepad++ as 'EOT', 'SUB' and 'ETX'. They seem to interfere with the processing of the file in Python - the file appears to end at those characters, even though there is clearly more data visible in Notepad++.
How can I remove all non-ASCII characters from these files prior to processing?
A:
You are opening the file in text mode. That means that the first Ctrl-Z character is considered as an end-of-file character. Specify 'rb' instead of 'r' in open().
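A minimal sketch of the fix (the helper name is illustrative; bytes literals are shown Python 3 style, and plain strings behave the same way in Python 2):

```python
def read_binary_values(path):
    """Read a file in binary mode so control characters like Ctrl-Z
    (0x1A) do not truncate the data on Windows, then split on NULs."""
    h = open(path, 'rb')  # 'rb', not 'r': no early end-of-file
    try:
        data = h.read()
    finally:
        h.close()
    # Drop the empty chunks produced by consecutive NUL separators.
    return [chunk for chunk in data.split(b'\x00') if chunk]
```

The rest of the original script (the `float()` check per chunk) then works unchanged on the returned list.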
A:
I don't know if this will work for sure, but you could try using the IO methods in the codec module:
import codecs

inFile = codecs.open(<SAME ARGS AS 'OPEN'>, 'utf-8')
for line in inFile:
    do_stuff()
You can treat the inFile just like a normal FILE object.
This may or may not help you, but it probably will.
[EDIT]
Basically you'll replace: h=open(os.path.join(root, rawfile),'r') with h=codecs.open(os.path.join(root, rawfile),'r', 'utf-8')
A:
The file.read() function will read until EOF.
Since, as you said, it stops too early, you want to continue reading the file even after hitting an EOF character.
Make sure to stop when you have read the entire file. You can do this by checking the position in the file via file.tell() when hitting an EOF and stopping when you hit the file-size (read file-size prior to reading).
As this is rather complex you may want to use file.next and iterate over bytes.
To remove non-ascii characters you can either use a white-list for specific characters or check the read Byte against a range your define.
E.g. is the Byte between x30 and x39 (a number) -> keep it / save it somewhere / add it to a string.
See an ASCII table.
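A sketch of that whitelist idea (the character set chosen here, digits plus dot and sign, is an assumption about what counts as numeric in these files):

```python
def extract_numbers(data):
    """Keep runs of number-like characters (0-9, '.', '+', '-')
    and treat every other byte as a separator."""
    keep = set('0123456789.+-')
    numbers, current = [], []
    for ch in data:
        if ch in keep:
            current.append(ch)
        elif current:
            # A non-whitelisted byte ends the current run.
            numbers.append(''.join(current))
            current = []
    if current:
        numbers.append(''.join(current))
    return numbers
```

This sidesteps the NUL/EOT/SUB/ETX characters entirely, since anything outside the whitelist just terminates the current number.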
| Removing unknown characters from a text file | I have a large number of files containing data I am trying to process using a Python script.
The files are in an unknown encoding, and if I open them in Notepad++ they contain numerical data separated by a load of 'null' characters (represented as NULL in white on black background in Notepad++).
In order to handle this, I separate the file by the null character \x00 and retrieve only numerical values using the following script:
stripped_data=[]
for root,dirs,files in os.walk(PATH):
for rawfile in files:
(dirName, fileName)= os.path.split(rawfile)
(fileBaseName, fileExtension)=os.path.splitext(fileName)
h=open(os.path.join(root, rawfile),'r')
line=h.read()
for raw_value in line.split('\x00'):
try:
test=float(raw_value)
stripped_data.append(raw_value.strip())
except ValueError:
pass
However, there are sometimes other unrecognised characters in the file (as far as I have found, always at the very beginning) - these show up in Notepad++ as 'EOT', 'SUB' and 'ETX'. They seem to interfere with the processing of the file in Python - the file appears to end at those characters, even though there is clearly more data visible in Notepad++.
How can I remove all non-ASCII characters from these files prior to processing?
| [
"You are opening the file in text mode. That means that the first Ctrl-Z character is considered as an end-of-file character. Specify 'rb' instead of 'r' in open().\n",
"I don't know if this will work for sure, but you could try using the IO methods in the codec module:\nimport codec\n\ninFile = codec.open(<SAME ... | [
5,
1,
1
] | [] | [] | [
"character_encoding",
"python",
"windows"
] | stackoverflow_0004278636_character_encoding_python_windows.txt |
Q:
ImportError: cannot import name output
iam using fabric 0.9.1 version on windows to do some deployment related stuff.
But the moment iam about to run "fab hello" iam facing the following error
D:\pythonscripts>fab hello
Traceback (most recent call last):
File "C:\Python26\Scripts\fab-script.py", line 8, in <module>
load_entry_point('fabric==0.9.1', 'console_scripts', 'fab')()
File "build\bdist.win-amd64\egg\pkg_resources.py", line 318, in load_entry_poi
nt
File "build\bdist.win-amd64\egg\pkg_resources.py", line 2221, in load_entry_po
int
File "build\bdist.win-amd64\egg\pkg_resources.py", line 1954, in load
File "build\bdist.win-amd64\egg\fabric\main.py", line 17, in <module>
File "build\bdist.win-amd64\egg\fabric\api.py", line 9, in <module>
File "build\bdist.win-amd64\egg\fabric\context_managers.py", line 12, in <modu
le>
File "build\bdist.win-amd64\egg\fabric\state.py", line 9, in <module>
File "build\bdist.win-amd64\egg\fabric\network.py", line 19, in <module>
File "build\bdist.win-amd64\egg\fabric\utils.py", line 21, in abort
ImportError: cannot import name output
Any clue on how to resolve this error?
A:
It seems to be this issue: http://code.fabfile.org/issues/show/194, probably it's not Fabric related but PyCrypto or Python64. If it is PyCrypto then the easiest thing is to download a binary version from http://www.voidspace.org.uk/python/modules.shtml#pycrypto and install it and download pywin32 from http://sourceforge.net/projects/pywin32/files/ and install it as well.
| ImportError: cannot import name output | I am using Fabric version 0.9.1 on Windows to do some deployment-related stuff.
But the moment I am about to run "fab hello" I am facing the following error:
D:\pythonscripts>fab hello
Traceback (most recent call last):
File "C:\Python26\Scripts\fab-script.py", line 8, in <module>
load_entry_point('fabric==0.9.1', 'console_scripts', 'fab')()
File "build\bdist.win-amd64\egg\pkg_resources.py", line 318, in load_entry_poi
nt
File "build\bdist.win-amd64\egg\pkg_resources.py", line 2221, in load_entry_po
int
File "build\bdist.win-amd64\egg\pkg_resources.py", line 1954, in load
File "build\bdist.win-amd64\egg\fabric\main.py", line 17, in <module>
File "build\bdist.win-amd64\egg\fabric\api.py", line 9, in <module>
File "build\bdist.win-amd64\egg\fabric\context_managers.py", line 12, in <modu
le>
File "build\bdist.win-amd64\egg\fabric\state.py", line 9, in <module>
File "build\bdist.win-amd64\egg\fabric\network.py", line 19, in <module>
File "build\bdist.win-amd64\egg\fabric\utils.py", line 21, in abort
ImportError: cannot import name output
Any clue on how to resolve this error?
| [
"It seems to be this issue: http://code.fabfile.org/issues/show/194, probably it's not Fabric related but PyCrypto or Python64. If it is PyCrypto then the easiest thing is to download a binary version from http://www.voidspace.org.uk/python/modules.shtml#pycrypto and install it and download pywin32 from http://sour... | [
1
] | [] | [] | [
"fabric",
"python"
] | stackoverflow_0003051387_fabric_python.txt |
Q:
Tracing an IP address in Python
For a college project for my course on Introduction to Programming, I decided to make a small software that traces the IP address and puts them nicely on a GUI (PyQt). Not a big deal I know, but still I like the idea.
So I Googled around and found MaxMind's IP and their free offering and the pygeoip, which is an API for the MaxMind GeoIP databases. Pretty cool, eh!
But the downside is that to query their database, I have to download individual databases for country city. This is not good cause I have to make the end user download additional files (in MBs) just to look up an IP address.
So I am wondering, is there another method of doing this? How do I trace IP addresses? Note that I need them down to the city level, if possible. Something like this guy aruljohn.com/track.pl
Thanks!
A:
I would have preferred "pygeoip", because it allows you to develop a complete solution locally. Of course, you will need to keep the database.
If you do not want to keep the database locally, you will have to depend on an external service to query for location of an IP. This will keep your solution small but dependent on this service.
For this check out: ipinfodb.com
http://ipinfodb.com/ip_location_api.php
They provide JSON and XML APIs interface which should be sufficiently easy to build.
Check out more information at : http://ipinfo.info/html/geolocation_2.php
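As a sketch, parsing such a JSON response might look like the following. Note that the field names ('countryName', 'cityName') and any request URL are assumptions, not the service's documented API, so check the ipinfodb documentation for the real parameters:

```python
import json

def parse_location(json_text):
    """Pull country and city out of a geolocation JSON response.

    The field names used here are assumptions; match them to whatever
    the service actually returns."""
    info = json.loads(json_text)
    return info.get('countryName'), info.get('cityName')
```

The fetch itself would be a plain HTTP GET (e.g. `urllib2.urlopen(api_url).read()` in Python 2) against a URL built per the service docs, with the body passed to `parse_location`.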
A:
I have an even better idea. Why don't you make a very simple web app which does the actual lookup, and your PyQt client would do an HTTP request to it? Or maybe in that case you don't even need a client. Just make a web page that takes an IP address and shows the city.
| Tracing an IP address in Python | For a college project for my course on Introduction to Programming, I decided to make a small software that traces the IP address and puts them nicely on a GUI (PyQt). Not a big deal I know, but still I like the idea.
So I Googled around and found MaxMind's IP and their free offering and the pygeoip, which is an API for the MaxMind GeoIP databases. Pretty cool, eh!
But the downside is that to query their database, I have to download individual databases for country city. This is not good cause I have to make the end user download additional files (in MBs) just to look up an IP address.
So I am wondering, is there another method of doing this? How do I trace IP addresses? Note that I need them down to the city level, if possible. Something like this guy aruljohn.com/track.pl
Thanks!
| [
"I would have preferred \"pygeoip\", because it allows you to develop a complete solution locally. Of course, you will need to keep the database.\nIf you do not want to keep the database locally, you will have to depend on an external service to query for location of an IP. This will keep your solution small but de... | [
2,
1
] | [] | [] | [
"ip_address",
"python"
] | stackoverflow_0004279583_ip_address_python.txt |
Q:
coordinates changed when migrating from pygame+rabbyt to pyglet+rabbyt
I'm working on a 2D game and decided to switch from SDL to OpenGL. I took rabbyt as an opengl wrapper for rendering my sprites and using pymunk (chipmunk) for my physics. I used pygame for creating the window and rabbyt for drawing the sprites on the screen.
I discovered that with pygame+rabbyt the (0,0) coordinate is in the middle of the screen. I liked that fact, because the coordinate representation in the physics engine were the same as in my graphics engine (I don't have to recalculate the coordinates when rendering the sprites).
Then I switched to pyglet because I wanted to draw lines with OpenGL - and discovered that suddenly the (0,0) coordinate was at the bottom left of the screen.
I suspected that that has something to do with the glViewport function, but only rabbyt executes that function, pyglet touches it only when the window is resized.
How can I set the (0,0) coordinate at the middle of the Screen?
I'm not very familiar with OpenGL and couldn't find anything after several hours googling and trial&error... I hope someone can help me :)
Edit: Some additional information :)
This is my pyglet screen initialization code:
self.window = Window(width=800, height=600)
rabbyt.set_viewport((800,600))
rabbyt.set_default_attribs()
This is my pygame screen initialization code:
display = pygame.display.set_mode((800,600), \
pygame.OPENGL | pygame.DOUBLEBUF)
rabbyt.set_viewport((800, 600))
rabbyt.set_default_attribs()
Edit 2: I looked at the sources of pyglet and pygame and didn't discover anything in the screen initialization code that has something to do with the OpenGL viewport... Here is the source of the two rabbyt functions:
def set_viewport(viewport, projection=None):
"""
``set_viewport(viewport, [projection])``
Sets how coordinates map to the screen.
``viewport`` gives the screen coordinates that will be drawn to. It
should be in either the form ``(width, height)`` or
``(left, top, right, bottom)``
``projection`` gives the sprite coordinates that will be mapped to the
screen coordinates given by ``viewport``. It too should be in one of the
two forms accepted by ``viewport``. If ``projection`` is not given, it
will default to the width and height of ``viewport``. If only the width
and height are given, ``(0, 0)`` will be the center point.
"""
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
if len(viewport) == 4:
l, t, r, b = viewport
else:
l, t = 0, 0
r, b = viewport
for i in (l,t,r,b):
if i < 0:
raise ValueError("Viewport values cannot be negative")
glViewport(l, t, r-l, b-t)
if projection is not None:
if len(projection) == 4:
l, t, r, b = projection
else:
w,h = projection
l, r, t, b = -w/2, w/2, -h/2, h/2
else:
w,h = r-l, b-t
l, r, b, t = -w/2, w/2, -h/2, h/2
glOrtho(l, r, b, t, -1, 1)
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()
def set_default_attribs():
"""
``set_default_attribs()``
Sets a few of the OpenGL attributes that sprites expect.
Unless you know what you are doing, you should call this at least once
before rendering any sprites. (It is called automatically in
``rabbyt.init_display()``)
"""
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)
glEnable(GL_BLEND)
#glEnable(GL_POLYGON_SMOOTH)
Thanks,
Steffen
A:
As l33tnerd suggested the origin can be placed at the center with glTranslatef...
I added the following below my screen initialization code:
pyglet.gl.glTranslatef(width/2, height/2, 0)
Thanks!
Q:
I need a good web development framework for Python
I'm a C/C++ developer and I also have experience developing web apps with C#, ASP.NET MVC and fluent nhibernate. I'm looking for non-MS alternatives for web development and I'm really interested in python so I went out after Django but I've been told that Django makes it difficult for me to personalize my HTML (not sure if this is accurate).
What I'm looking for is a Python web development framework that is integrated with an ORM, is able to generate the interfaces BUT provides an easy way for me to customize the interface to create an AJAX intensive app
A:
go for django.
does all you wanted,
has perfect docs and even free book,
partially runs on appengine,
has really large user base,
it is mature:
db sharding, (With model router)
xss protection in forms
memcache,
localisation,
well tested support for unicode,
really easy to learn because of the level of its documentation.
A:
I'm using Flask (a very minimal web framework) and SQLAlchemy as my ORM. I'm exceedingly happy with it. Disclaimer: I'm only using this for personal projects at the moment, though I do plan to launch a web app in the next 6 months using this setup.
A:
Various options in Python you can look at -
Django (obviously!)
Pylons
Nagare
Flask
Django is really good. And no, your info is not correct: HTML templates are really easy to edit.
Also this is from a developer of Nagare -
Ajax without to write any Javascript
code or the use of continuations makes
a Web application looks like a desktop
one. In fact we have often found that
developers like you, without prior Web
experiences, can be quicker to get
Nagare because they have nothing to
"unlearn".
I am going deeper into this framework since you said that your app is AJAX intensive. From what I have heard, Nagare makes it easy to do so...
All these frameworks are really good. Some are really good in some areas, others not. So may be explore them all & see which best suits your purpose.
A:
I'm in agreement with the rest of the answers and think that Django is by-far the best choice as a "complete framework" and I think their template system is second-to-none.
If you are looking to create an ajax intensive application, I'd suggest checking out django-piston (http://bitbucket.org/jespern/django-piston/wiki/Home). Piston is a REST API framework built on top of Django. I've used it for a number of ajax intensive applications and have found its workflow to be incredibly clean, quick and flexible.
If you are wanting to go a bit slimmer and lighter-weight though, I'd suggest checking out web.py (http://webpy.org/) or Tornado (http://www.tornadoweb.org/).
A:
For Web applications development, we're using Nagare, coming with YUI for AJAX communications.
Having a look to Nagare might be an option.
A:
I would definitely look into Pylons which is very thoroughly documented and has sql alchemy (one of the best python ORM's) baked in. Plus it's easy to setup and learn.
I currently am working with a framework called restish, a flavor of Pylons that (surprise, surprise) puts the focus on sticking to RESTful web design. I don't think it's exactly what you're looking for in that it lacks good documentation and any form of an ORM.
Just a side note: I'm pretty sure Pylons uses Mako templating by default, which gives you excellent control over the HTML.
Q:
danger of recursive functions
Often people say that it's not recommended to use recursive functions in python (recursion depth restrictions, memory consumption, etc)
I took a permutation example from this question.
def all_perms(str):
if len(str) <=1:
yield str
else:
for perm in all_perms(str[1:]):
for i in range(len(perm)+1):
yield perm[:i] + str[0:1] + perm[i:]
Afterwards I transformed it into a non-recursive version (I'm a python newbie)
def not_recursive(string):
perm = [string[0]]
for e in string[1:]:
perm_next = []
for p in perm:
perm_next.extend(p[:i] + e + p[i:] for i in range(len(p) + 1))
perm = perm_next
for p in perm:
yield p
And I compared them
before=time()
print len([p for p in all_perms("1234567890")])
print "time of all_perms %i " % (time()-before)
before=time()
print len([p for p in not_recursive("1234567890")])
print "time of not_recursive %i " % (time()-before)
before=time()
print len([p for p in itertools.permutations("1234567890")])
print "time of itertools.permutations %i " % (time()-before)
The results are quite interesting. The recursive function is the fastest at 5 sec, then the non-recursive one at 8 sec, then the built-in at 35 sec.
So are recursive functions that bad in Python? What is wrong with the built-in itertools.permutations?
A:
Recursion is good for problems that lend themselves to clean, clear, recursive implementations. But like all programming you must perform some algorithm analysis to understand the performance characteristics. In the case of recursion, besides number of operations you must also estimate the maximum stack depth.
Most problems occur when new programmers assume recursion is magical and don't realize there is a stack underneath making it possible. New programmers have also been known to allocate memory and never free it, and other mistakes, so recursion is not unique in this danger.
A:
Recursion is "bad" in Python because it is usually slower than an iterative solution, and because Python's stack depth is not unlimited (and there's no tail call optimization). For a sum function, yes, you probably want unlimited depth since it would be perfectly reasonable to want to sum a list of a million numbers, and the performance delta will become an issue with such a large number of items. In that case you should not use recursion.
If you are walking a DOM tree read from an XML file, on the other hand, you are unlikely to exceed Python's recursion depth (1000 by default). It certainly could, but as a practical matter, it probably won't. When you know what kinds of data you'll be working with ahead of time, you can be confident you won't overflow the stack.
A recursive tree walk is, in my opinion, much more natural to write and read than an iterative one, and the recursion overhead is generally a small part of the running time. If it really matters to you that it takes 16 seconds instead of 14, throwing PyPy at it might be a better use of your time.
Recursion seems a natural fit for the problem you posted and if you think the code is easier to read and maintain that way, and the performance is adequate, then go for it.
I grew up writing code on computers that, as a practical matter, limited recursion depth to about 16, if it was provided at all, so 1000 seems luxurious to me. :-)
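The default depth limit mentioned above is easy to see directly. This small sketch catches RuntimeError, which also covers Python 3's RecursionError (a subclass of RuntimeError), so it works on older Pythons too:

```python
import sys

def recurse(n):
    # recurse n frames deep, then unwind
    return 0 if n == 0 else 1 + recurse(n - 1)

limit = sys.getrecursionlimit()  # 1000 by default in CPython
try:
    recurse(limit + 100)
except RuntimeError:  # RecursionError on Python 3.5+
    print("hit the recursion limit")
```

The limit can be raised with sys.setrecursionlimit, but a value set too high risks overflowing the C stack and crashing the interpreter, which is why an iterative rewrite is usually preferred for unbounded depth.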
A:
Your timings are completely wrong:
def perms1(str):
if len(str) <=1:
yield str
else:
for perm in perms1(str[1:]):
for i in range(len(perm)+1):
yield perm[:i] + str[0:1] + perm[i:]
def perms2(string):
perm = [string[0]]
for e in string[1:]:
perm_next = []
for p in perm:
perm_next.extend(p[:i] + e + p[i:] for i in range(len(p) + 1))
perm = perm_next
for p in perm:
yield p
s = "01235678"
import itertools
perms3 = itertools.permutations
Now test it with timeit:
thc:~$ for i in 1 2 3; do echo "perms$i:"; python -m timeit -s "import permtest as p" "list(p.perms$i(p.s))"; done
perms1:
10 loops, best of 3: 23.9 msec per loop
perms2:
10 loops, best of 3: 39.1 msec per loop
perms3:
100 loops, best of 3: 5.64 msec per loop
As you can see itertools.permutations is by far the fastest, followed by the recursive version.
But both the pure Python functions had no chance to be fast, because they do costly operations such as slicing and concatenating strings a la perm[:i] + str[0:1] + perm[i:]
A:
I can't reproduce your timing results (in Python 2.6.1 on Mac OS X):
>>> import itertools, timeit
>>> timeit.timeit('list(all_perms("0123456789"))',
... setup='from __main__ import all_perms',
... number=1)
2.603626012802124
>>> timeit.timeit('list(itertools.permutations("0123456789"))', number=1)
1.6111600399017334
Q:
Python, how to setup hooks for tracing I/O Events
My app downloads files, creates files as final/intermediate data. I would like to setup a hook (outside the app), to alert/log whenever my app does any I/O events - like writing a file, deleting a file, downloading a file from the file server. I use the urllib to retrieve fits files from the data servers.
A:
If you know where the file will be downloaded to, one solution could be to use inotify. In particular, pyinotify seems interesting. I don't know if CentOS has a recent enough version of the Linux kernel for this to work though.
A:
If you want a list of your process file operations, you can use FileMon or ProcMon from SysInternals.
Edit: for Linux, you can use strace.
Q:
how to customize django admin?
I have a two field in models.py
password_protected = models.BooleanField(default=False)
password = models.CharField(max_length=50)
I want to write admin.py in such a manner:
-- if password_protected is True: Then the password field should be enable.
-- if password_protected is False: Then the password field should be disabled.
A:
You can try what @luc suggested, but you can also try giving the password field a widget in the admin form (note this has to be a form field; models.CharField does not take a widget argument):
password = forms.CharField(required=False, widget=forms.TextInput())
-- and adjust its attributes:
form = MyForm(request.POST)
if form.is_valid():
# do some nice stuff here
else:
if form['password_protected'].data:
form.fields['password_protected'].widget.attrs['disabled'] = 'disabled'
Note that specifying that widget for password field should be unnecessary, as fields have default widgets, but I added it in case it's required to modify widget attributes.
A:
I think that the easier way is to implement this behavior with javascript and jquery. The fields can be accessed by their name with jquery (for example something like $("input[name=password]");.
Then you can insert your js file into the admin by adding Media class into the admin class.
I hope it helps
Q:
Ordering from a method
Hey all, I have a simple model like this
class Article(models.Model):
upvotes = models.ManyToManyField(User, related_name='article_upvotes')
downvotes = models.ManyToManyField(User, related_name='article_downvotes')
def votes(self):
return self.upvotes.count() - self.downvotes.count()
With the view i can do things like
article_votes = article.votes
Am i able to order by the votes function? Something like
article = Article.objects.order_by('votes')
EDIT
I'm not near my dev system at the moment, so the syntax might be a little off.
A:
You can sort the list after the query returns the results:
articles_by_votes = sorted(articles, key=lambda a: a.votes())
sorted takes a list and sorts it. You can provide a custom function that takes an element and returns the value to use when comparing elements. lambda a: a.votes() is an anonymous function that takes an article and returns the number of votes on the article.
If you are going to retrieve all the articles anyway, there's no downside to this solution. On the other hand, if you wanted only the top 10 articles by votes, then you're pulling all the articles from the db instead of letting the db do the sort, and only returning the top ten. Compared to a pure SQL solution, this is retrieving much more data from the database.
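Outside of Django, the same key-function pattern looks like this (a standalone sketch with a stand-in Article class, not the actual models):

```python
class Article(object):
    def __init__(self, name, upvotes, downvotes):
        self.name = name
        self.upvotes = upvotes
        self.downvotes = downvotes

    def votes(self):
        return self.upvotes - self.downvotes

articles = [Article("a", 5, 1), Article("b", 2, 0), Article("c", 9, 9)]

# highest score first: sorted() calls votes() once per element
ranked = sorted(articles, key=lambda a: a.votes(), reverse=True)
print([a.name for a in ranked])  # ['a', 'b', 'c']
```

The key function is called once per element, so any computed score works here, at the cost of pulling every row out of the database first.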
A:
This is a faster version of what Ned Batchelder suggested - as it does the counting in the database:
articles = list(Article.objects.annotate(upvote_num=models.Count('upvotes'), downvote_num=models.Count('downvotes')))
articles.sort(key=lambda a: a.upvote_num - a.downvote_num)
You can also do this completely inside the database:
articles = Article.objects.raw("""
SELECT DISTINCT id from articles_article,
COUNT(DISTINCT upv) AS num_upvotes,
COUNT(DISTINCT downv) AS num_downvotes,
(num_upvotes - num_downvotes) AS score
INNER JOIN [article_user_upvotes_m2m_table_name] AS upv
ON upv.article_id = id
INNER JOIN [article_user_downvotes_m2m_table_name] AS downv
ON downv.article_id = id
GROUP BY id
ORDER BY score
""")
-- but I'm not sure if the double join is a good idea in your case. Also, I'm not sure if all those DISCTINCTs are needed. It's quite likely that this query can be rewritten in some better way, but I don't have an idea at the moment..
Q:
Is there an easier way to package with Python?
I tried to package a django app today. It's a big baby, and with the setup file, I have to manually write all packages and sub-packages in the 'packages' parameter. Then I have to find a way to copy fixtures, HTML / CSS / image files, documentation, etc.
It's a terrible way to work. We are computer scientists, we automate; doing this makes no sense.
And what when I change my app structure ? I have to rewrite the setup.py.
Is there a better way? Some tool to automate that? I can't believe a language that values developer time like Python makes packaging such a chore.
I want to be able to eventually install the app using a simple pip install. I know about Buildout, but it's not much simpler, and is not pip friendly.
A:
At the very least, if you use setuptools (an alternative to the stdlib's distutils) you get an awesome function called find_packages() which, when run from the package root, returns a list of package names in dot-notation suitable for the packages parameter.
Here is an example:
# setup.py
from setuptools import find_packages, setup
setup(
#...
packages=find_packages(exclude='tests'),
#...
)
p.s. Packaging sucks in every language and every system. It sucks no matter how you slice it.
A:
I've been through this pain myself today. I used the following, yoinked straight from Django's setup.py, which walks the app's filesystem looking for packages and data files (assuming you never mix the two):
import os
from distutils.command.install import INSTALL_SCHEMES
def fullsplit(path, result=None):
"""
Split a pathname into components (the opposite of os.path.join) in a
platform-neutral way.
"""
if result is None:
result = []
head, tail = os.path.split(path)
if head == '':
return [tail] + result
if head == path:
return result
return fullsplit(head, [tail] + result)
# Tell distutils to put the data_files in platform-specific installation
# locations. See here for an explanation:
# http://groups.google.com/group/comp.lang.python/browse_thread/thread/35ec7b2fed36eaec/2105ee4d9e8042cb
for scheme in INSTALL_SCHEMES.values():
scheme['data'] = scheme['purelib']
# Compile the list of packages available, because distutils doesn't have
# an easy way to do this.
packages, data_files = [], []
root_dir = os.path.dirname(__file__)
if root_dir != '':
os.chdir(root_dir)
myapp_dir = 'myapp'
for dirpath, dirnames, filenames in os.walk(myapp_dir):
# Ignore dirnames that start with '.'
for i, dirname in enumerate(dirnames):
if dirname.startswith('.'): del dirnames[i]
if '__init__.py' in filenames:
packages.append('.'.join(fullsplit(dirpath)))
elif filenames:
data_files.append([dirpath, [os.path.join(dirpath, f) for f in filenames]])
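For reference, the fullsplit helper from the snippet above can be exercised on its own (reproduced here so the example is self-contained):

```python
import os

def fullsplit(path, result=None):
    # platform-neutral opposite of os.path.join (same code as above)
    if result is None:
        result = []
    head, tail = os.path.split(path)
    if head == '':
        return [tail] + result
    if head == path:
        return result
    return fullsplit(head, [tail] + result)

print(fullsplit(os.path.join('myapp', 'templates', 'admin')))
# ['myapp', 'templates', 'admin']
```

Joining the result back with '.' is what produces the dotted package names that distutils expects.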
A:
I think the tool you are looking for is Buildout. There lots of places where you can learn more about it, from SlideShare to Pycon videos.
Other similar or related tools which you might want to check out include virtualenv, Fabric, and PIP.
Q:
Python's underlying hash data structure for dictionaries
I am building a very large dictionary and I am performing many checks to see if a key is in the structure, then adding it if it is unique or incrementing a counter if it is a duplicate.
Python uses a hash table to store dictionaries (not to be confused with a cryptographic hash function). Lookups are O(1), but if the hash table is full it has to be resized and rehashed, which is very expensive.
My question is: would I be better off using an AVL binary search tree, or is a hash table good enough?
A:
The only way to be sure would be to implement both and check, but my informed guess is that the dictionary will be faster, because a binary search tree has cost O(log(n)) for lookup and insertion, and I think that except under the most pessimal of situations (such as massive hash collisions) the hash table's O(1) lookup will outweigh the occasional resize.
If you take a look at the Python dictionary implementation, you'll see that:
a dictionary starts out with 8 entries (PyDict_MINSIZE);
a dictionary with 50,000 or fewer entries quadruples in size when it grows;
a dictionary with more than 50,000 entries doubles in size when it grows;
key hashes are cached in the dictionary, so they are not recomputed when the dictionary is resized.
(The "NOTES ON OPTIMIZING DICTIONARIES" are worth reading too.)
So if your dictionary has 1,000,000 entries, I believe that it will be resized eleven times (8 → 32 → 128 → 512 → 2048 → 8192 → 32768 → 131072 → 262144 → 524288 → 1048576 → 2097152) at a cost of 2,009,768 extra insertions during the resizes. This seems likely to be much less than the cost of all the rebalancing involved in 1,000,000 insertions into an AVL tree.
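The growth pattern is easy to observe indirectly with sys.getsizeof. Note that the thresholds in modern CPython differ from the 2.x numbers quoted above, and exact sizes vary by version and platform, so treat the count as illustrative of amortized growth rather than exact:

```python
import sys

d = {}
resizes = []
last = sys.getsizeof(d)
for i in range(100000):
    d[i] = None
    size = sys.getsizeof(d)
    if size != last:  # the backing hash table just grew
        resizes.append((len(d), size))
        last = size

print(len(resizes))  # only a handful of resizes for 100,000 insertions
```

The point stands either way: resizes are rare and geometric, so their total cost is amortized O(1) per insertion.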
A:
What's the ratio of items vs unique items?
What's the expected number of unique items?
If a hash bucket fills, then extending should just be a matter of some memory reallocation, not rehashing.
Testing a counting dict should be very quick and easy to do.
Note also the counter class available since python 2.7
http://docs.python.org/library/collections.html#counter-objects
http://svn.python.org/view?view=rev&revision=68559
A:
Python dictionaries are highly optimized. Python makes various special-case optimizations that the Python developers cater for in the CPython dictionary implementation.
In CPython, all PyDictObject's are optimized for dictionaries containing only string keys.
Python's dictionary makes an effort to never be more than 2/3rds full.
The book "Beautiful Code" discusses this all.
The eighteenth chapter is Python's Dictionary Implementation: Being All Things to All People by Andrew Kuchling.
It is much better to use it than to try a hand-crafted custom implementation, which would have to replicate all these optimizations to get anywhere near the main CPython implementation of dictionary lookups.
A:
You would have to implement your own data structures in C to stand a reasonable chance of beating the built-in structures.
Also you can avoid some of the overhead by using get, avoiding find existing elements twice.
Or collections.Counter if you are using python 2.7+.
def increment(map, key):
map[key] = map.get(key,0)+1
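For reference, the collections.Counter class mentioned above does the same bookkeeping for you:

```python
from collections import Counter

counts = Counter()
for word in ["spam", "egg", "spam", "spam"]:
    counts[word] += 1  # missing keys default to 0

print(counts.most_common(1))  # [('spam', 3)]
```

Counter is a dict subclass, so lookups stay O(1) and you also get helpers like most_common() for free.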
A:
Using a dict is O(1). As the dict grows, reallocation is sometimes required, but that is amortized O(1)
If your other algorithm is O(log n), the simple dict will always beat it as the dataset grows larger.
If you use any type of tree, I would expect a O(log n) component in there somewhere.
Not only is a hash table good enough, it is better
Q:
Python: how to cut off sequences of more than 2 equal characters in a string
I'm looking for an efficient way to change a string such that all sequences of more than 2 equal characters are cut off after the first 2.
Some input->output examples are:
hellooooooooo -> helloo
woooohhooooo -> woohhoo
I'm currently looping over the characters, but it's a bit slow. Does anyone have another solution (regexp or something else)
EDIT: current code:
word_new = ""
for i in range(0,len(word)-2):
if not word[i] == word[i+1] == word[i+2]:
word_new = word_new+word[i]
for i in range(len(word)-2,len(word)):
word_new = word_new + word[i]
A:
Edit: after applying helpful comments
import re
def ReplaceThreeOrMore(s):
# pattern to look for three or more repetitions of any character, including
# newlines.
pattern = re.compile(r"(.)\1{2,}", re.DOTALL)
return pattern.sub(r"\1\1", s)
(original response here)
Try something like this:
import re
# look for a character followed by at least one repetition of itself.
pattern = re.compile(r"(\w)\1+")
# a function to perform the substitution we need:
def repl(matchObj):
char = matchObj.group(1)
return "%s%s" % (char, char)
>>> pattern.sub(repl, "Foooooooooootball")
'Football'
A:
The following code (unlike other regexp-based answers) does exactly what you say that you want: replace all sequences of more than 2 equal characters by 2 of the same.
>>> import re
>>> text = 'the numberr offf\n\n\n\ntheeee beast is 666 ...'
>>> pattern = r'(.)\1{2,}'
>>> repl = r'\1\1'
>>> re.sub(pattern, repl, text, flags=re.DOTALL)
'the numberr off\n\nthee beast is 66 ..'
>>>
You may not really want to apply this treatment to some or all of: digits, punctuation, spaces, tabs, newlines etcccc. In that case you need to replace the . by a more restrictive sub-pattern.
For example:
ASCII letters: [A-Za-z]
Any letters, depending on the locale: [^\W\d_] in conjunction with the re.LOCALE flag
A:
Also using a regex, but without a function:
import re
expr = r'(.)\1{2,}'
replace_by = r'\1\1'
mystr1 = 'hellooooooo'
print re.sub(expr, replace_by, mystr1)
mystr2 = 'woooohhooooo'
print re.sub(expr, replace_by, mystr2)
A:
I don't really know python regexp but you could adapt this one:
s/((.)\2)\2+/$1/g;
A:
I post my code, it's not regex but since you mentioned "or something else"...
def removeD(input):
if len(input) < 3: return input
output = input[0:2]
for i in range (2, len(input)):
if not input[i] == input[i-1] == input[i-2]:
output += input[i]
return output
it's not as elegant as bgporter's (no joke, I really like his more than mine!) but, at least on my system, timing reports that it always performs faster.
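Another non-regex option (my own addition, not from the answers above) is itertools.groupby, which collapses each run of equal characters to at most two:

```python
from itertools import groupby

def squeeze(s, max_run=2):
    # keep at most `max_run` copies of each consecutive character
    return "".join(ch * min(len(list(run)), max_run)
                   for ch, run in groupby(s))

print(squeeze("hellooooooooo"))  # helloo
print(squeeze("woooohhooooo"))   # woohhoo
```

groupby yields one (character, run) pair per maximal run, so this is a single linear pass over the string.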
Q:
Is there a more pythonic way to find the point in a list which is closest to another point?
I have a list of 2d points, and would like to find the one which is closest to a given point. The code (get_closest_point()) below does what I want. But is there a nicer way to do this in python?
class Circle(object):
def __init__(self, pos):
self.position = pos
class Point(object):
..
def compute_distance_to(self, p):
..
class SomeClient(object):
..
def get_closest_point(self, points, p1):
closest = (None, float(sys.maxint))
for p2 in points:
distance = p2.compute_distance_to(p1)
if distance < closest[1]:
closest = (p2, distance)
return closest[0]
def get_closest_circle(self, circles, p1):
closest = (None, float(sys.maxint))
for c in circles:
distance = c.position.compute_distance_to(p1)
if distance < closest[1]:
closest = (c, distance)
return closest[0]
A:
You can use the key argument to the min() function:
Edit: after some consideration, this should be a method of your Point class, and i'll fix some other obvious deficiencies:
class Point(object):
def get_closest_point(self, points):
return min(points, key=self.compute_distance_to)
or, to do this with a more elaborate case, say a list of instances with a loc attribute,
min(items, key= lambda item: p1.compute_distance_to(item.loc))
and so on
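As a self-contained illustration of the same min(..., key=...) idea with plain tuples (the points here are made up):

```python
import math

points = [(0, 0), (3, 4), (1, 1)]
target = (2, 2)

# min() returns the element for which the key function is smallest,
# here the Euclidean distance to `target`.
closest = min(points, key=lambda p: math.hypot(p[0] - target[0],
                                               p[1] - target[1]))
print(closest)  # (1, 1)
```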
| Is there a more pythonic way to find the point in a list which is closest to another point? | I have a list of 2d points, and would like to find the one which is closest to a given point. The code (get_closest_point()) below does what I want. But is there a nicer way to do this in python?
class Circle(object):
def __init__(self, pos):
self.position = pos
class Point(object):
..
def compute_distance_to(self, p):
..
class SomeClient(object):
..
def get_closest_point(self, points, p1):
closest = (None, float(sys.maxint))
for p2 in points:
distance = p2.compute_distance_to(p1)
if distance < closest[1]:
closest = (p2, distance)
return closest[0]
def get_closest_circle(self, circles, p1):
closest = (None, float(sys.maxint))
for c in circles:
distance = c.position.compute_distance_to(p1)
if distance < closest[1]:
closest = (c, distance)
return closest[0]
| [
"You can use the key argument to the min() function:\nEdit: after some consideration, this should be a method of your Point class, and i'll fix some other obvious deficiencies:\nclass Point(object):\n def get_closest_point(self, points):\n return min(points, key=self.compute_distance_to)\n\nor, to do this... | [
19
] | [] | [] | [
"python"
] | stackoverflow_0004280554_python.txt |
Q:
receiving a linux signal and interacting with threads
hello to you all :)
I have a program that has n threads (could be a lot) and they do a pretty extensive job. My problem is that sometimes people turn off or reboot the server (the program runs all day on the company servers). I know that there is a way to make a handler for Linux signals; what I want to know is how to interact with all the threads, making them run a function and then stop working. Is there a way to do that?
sorry the bad english :P
A:
The best way of handling this is not requiring any shutdown actions at all.
For example, your signal handler for (e.g.) SIGTERM or SIGQUIT can just call _exit and quit the process with no clean-up.
Under Linux (with non-ancient threads) when one thread calls _exit (or exit if you really want) other threads get stopped too - whatever they were in the middle of doing.
This would be good as it implements a crash-only design.
Crash-only design for a server is based on the principle that the machine may crash at any point, so you need to be able to recover from such a failure anyway, so just make it the normal way of quitting. No extra code should be required as your server should be robust enough anyway.
A:
About the only thing you can do is set a global variable from your signal handler, and have your threads check its value periodically.
A:
As others have already mentioned, signal handlers can get messy (due to the restrictions, particularly in multi-threaded programs), so it's better to choose another option:
have a dedicated thread for handling signals via sigwaitinfo - the bad news, though, is that python doesn't appear to support that out of the box.
use the Linux-specific signalfd to handle signals (either in a separate thread or integrated into some event loop) - at least there is a python-signalfd module you can use.
As there is no need to install signal handlers here, there is no restriction on what you can do when you are notified of a signal, and it should be easy to shut down the other threads in your program cleanly.
| receiving a linux signal and interacting with threads | hello to you all :)
I have a program that has n threads (could be a lot) and they do a pretty extensive job. My problem is that sometimes people turn off or reboot the server (the program runs all day on the company servers). I know that there is a way to make a handler for Linux signals; what I want to know is how to interact with all the threads, making them run a function and then stop working. Is there a way to do that?
sorry the bad english :P
| [
"The best way of handling this is not requiring any shutdown actions at all.\nFor example, your signal handler for (e.g.) SIGTERM or SIGQUIT can just call _exit and quit the process with no clean-up.\nUnder Linux (with non-ancient threads) when one thread calls _exit (or exit if you really want) other threads get s... | [
3,
1,
1
] | [] | [] | [
"linux",
"multithreading",
"python",
"signals"
] | stackoverflow_0004280432_linux_multithreading_python_signals.txt |
Q:
raw_input causing EOFError after creating exe with py2exe
After creating an exe from a script with py2exe raw_input() is causing an EOFError.
How can I avoid this?
File "test.py", line 143, in main
raw_input("\nPress ENTER to continue ")
EOFError: EOF when reading a line
A:
>>> help(raw_input)
Help on built-in function raw_input in module __builtin__:
raw_input(...)
raw_input([prompt]) -> string
Read a string from standard input. The trailing newline is stripped.
If the user hits EOF (Unix: Ctl-D, Windows: Ctl-Z+Return), raise EOFError.
On Unix, GNU readline is used if enabled. The prompt string, if given,
is printed without a trailing newline before reading.
what's wrong? what do you type on the keyboard?
edit (reported comment up here):
My guess is that you used py2exe with the "windows" argument, meaning that no console is opened - without a console there is no stdin for raw_input to use. You can instead use the "console" argument in your setup.py, and your exe will open a console window allowing raw_input to work
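A hedged sketch of what that setup.py change looks like (py2exe-specific; the script name is hypothetical):

```python
# setup.py -- uses the "console" target so the exe gets a console window
# (and therefore a usable stdin for raw_input).
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(console=['test.py'])
```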
| raw_input causing EOFError after creating exe with py2exe | After creating an exe from a script with py2exe raw_input() is causing an EOFError.
How can I avoid this?
File "test.py", line 143, in main
raw_input("\nPress ENTER to continue ")
EOFError: EOF when reading a line
| [
">>> help(raw_input)\nHelp on built-in function raw_input in module __builtin__:\n\nraw_input(...)\n raw_input([prompt]) -> string\n\n Read a string from standard input. The trailing newline is stripped.\n If the user hits EOF (Unix: Ctl-D, Windows: Ctl-Z+Return), raise EOFError.\n On Unix, GNU readlin... | [
4
] | [] | [] | [
"eoferror",
"py2exe",
"python"
] | stackoverflow_0004280889_eoferror_py2exe_python.txt |
Q:
Python web application: How to keep state
I wrote a WSGI compatible web application using web.py that loads a few dozen MB data into memory during startup.
It works quite well with the web.py integrated server.
However, using Apache 2 + mod_wsgi, every single request reloads the data, essentially starting the program again. Due to the loading time of several seconds, this is unbearable.
Is it inherent to mod_wsgi or can it be configured? What are my alternatives?
A:
"Is it inherent to mod_wsgi?" No. It's inherent in HTTP
Since you didn't post your mod_wsgi configuration, it's impossible to say what you did wrong.
I can only guess that you didn't use daemon mode.
See http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#Defining_Process_Groups for more information on daemon mode.
This may not be the best solution. It may be better (far, far better) to use a proper database. Without actual code examples, and more details, this is all just random guessing.
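For reference, a minimal daemon-mode configuration looks roughly like this (the directives are real mod_wsgi directives; the names, paths, and process counts are illustrative, not from the question):

```apache
# Load the app once per daemon process instead of per request.
WSGIDaemonProcess myapp processes=2 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myapp/app.wsgi
```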
| Python web application: How to keep state | I wrote a WSGI compatible web application using web.py that loads a few dozen MB data into memory during startup.
It works quite well with the web.py integrated server.
However, using Apache 2 + mod_wsgi, every single request reloads the data, essentially starting the program again. Due to the loading time of several seconds, this is unbearable.
Is it inherent to mod_wsgi or can it be configured? What are my alternatives?
| [
"\"Is it inherent to mod_wsgi?\" No. It's inherent in HTTP \nSince you didn't post your mod_wsgi configuration, it's impossible to say what you did wrong.\nI can only guess that you didn't use daemon mode.\nSee http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#Defining_Process_Groups for more informatio... | [
1
] | [] | [] | [
"apache",
"mod_wsgi",
"python",
"web.py"
] | stackoverflow_0004281102_apache_mod_wsgi_python_web.py.txt |
Q:
What specific issue do the Zope folks try to solve with TAL, TALES & METAL
TAL, TALES and METAL are all three parts of the Zope templating language. The thing that I don't understand is why so much trouble: I don't understand the spirit of ZTL. Any tips?
One more question: is there a standalone library that tries to achieve the same thing as ZTL, but outside the Zope ecosystem?
A:
The core idea of tal/tales is to have proper valid (x)html. All the template functionality is in attributes or namespaced elements. HTML editors should work just fine with these templates. Let's give an example. First tal/tales:
<ul>
<li tal:repeat="customer customers">
<a href=""
tal:attributes="href customer.url"
tal:content="customer.name">
Sample customer name
</a>
</li>
</ul>
And in Django's template language, just as an example:
<ul>
{% for customer in customers %}
<li>
<a href="{{ customer.url }}">
{{ customer.name }}
</a>
</li>
{% endfor %}
</ul>
Which one's better? Open question. One plays nice with your html editor, the other makes the non-html statements clearer. Anyway, making it proper html is the main idea behind tal/tales!
A:
Your last question: http://zpt.sourceforge.net/
Since the other question isn't that specific, I'm not sure there's a definitive answer to this, unless one of the original developers answers.
Zope Page Templates is the templating system making use of TAL/TALES/METAL, and the specific issue it tries to solve is the same as with many other templating systems: produce valid HTML. In the case of ZPT it is possible to create also any flavour of XML. At the time of its creation, it had some outstanding properties:
the templates itself could be used in designing tools like Dr*beep*mw*beep*ver or Fr*beeb*ntp*beep*ge without modification
the nested structure of XML/XHTML was ensured (invalid structured XML wouldn't work)
the templates themselves could be nested, mixed and matched
pure python implementation (rather clean code) and embedded python expressions
in the meantime the web has caught up and there are many alternatives available
| What specific issue do the Zope folks try to solve with TAL, TALES & METAL | TAL, TALES and METAL are all three parts of the Zope templating language. The thing that I don't understand is why so much trouble: I don't understand the spirit of ZTL. Any tips?
One more question: is there a standalone library that tries to achieve the same thing as ZTL, but outside the Zope ecosystem?
| [
"The core idea of tal/tales is to have proper valid (x)html. All the template functionality is in attributes or namespaced elements. HTML editors should work just fine with these templates. Let's give an example. First tal/tales:\n<ul>\n <li tal:repeat=\"customer customers\">\n <a href=\"\"\n tal:attri... | [
4,
1
] | [] | [] | [
"python",
"template_metal",
"template_tal",
"zope",
"zpt"
] | stackoverflow_0004281275_python_template_metal_template_tal_zope_zpt.txt |
Q:
Python framework for visualising/animating particles in cartesian space
I have data representing the position of particles for multiple time steps and need to create an animation showing the movement of these particles.
Are there any frameworks or toolkits (ideally Python based) out there that do something like this out of the box, or at least something that makes it easy to quickly plot sprites/3d-objects and animate them across multiple time steps?
For the first stage a simple 2D animation is sufficient. However I would like to have the option of extending it further to support 3D and user interaction (changing view, animation control, exporting animation to file, etc).
Just to clarify, I'm not looking to render a complicated scene. Something like the following would do:
This particular image is a screenshot of a single frame for a similar data set.
A:
Pyprocessing is a Python treatment of the processing Java animation library. The processing development environment includes some very nice examples of implementing a particle system.
A:
Houdini by Side Effects Software is an industry-grade 3D animation application with excellent Python bindings, Python expressions and general support. It would be simple to import your data, and Houdini even has a Python shell within the application for tinkering.
After you've imported it you can take advantage of the full range of animation and visualisation tools and the excellent bundled renderer, "Mantra".
There is a free "apprentice" edition with very few restrictions, and various levels of paid licenses.
A:
We've used pyOGRE, which are Python bindings to the OGRE library, which describes itself as:
What Is OGRE? OGRE (Object-Oriented Graphics Rendering Engine) is a scene-oriented, flexible 3D engine written in C++ designed to make it easier and more intuitive for developers to produce applications utilising hardware-accelerated 3D graphics. The class library abstracts all the details of using the underlying system libraries like Direct3D and OpenGL and provides an interface based on world objects and other intuitive classes.
A:
In 2D why don't you just use matplotlib to do scatter plots of the frames from your simulation.
For example
import numpy as np
import matplotlib.pyplot as plt
# Just some sample data but I'm assuming that you
# can get your data into vectors like this.
x = np.random.randn(100)
y = np.random.randn(100)
plt.figure()
plt.plot(x,y, '.')
plt.savefig('frame0000.png')
You can then make a video from the frames.
As for 3D - you could try matplotlib's mlab or mplot3D. From my experience mlab is a bit trickier to get going. Comment on this post if you need more help with using matplotlib.
http://www.scipy.org/Cookbook/Matplotlib/mplot3D
http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/mlab.html
A:
Have a look at PyODE. That will help with the physics part. You're on your own with the graphics.
| Python framework for visualising/animating particles in cartesian space | I have data representing the position of particles for multiple time steps and need to create an animation showing the movement of these particles.
Are there any frameworks or toolkits (ideally Python based) out there that do something like this out of the box, or at least something that makes it easy to quickly plot sprites/3d-objects and animate them across multiple time steps?
For the first stage a simple 2D animation is sufficient. However I would like to have the option of extending it further to support 3D and user interaction (changing view, animation control, exporting animation to file, etc).
Just to clarify, I'm not looking to render a complicated scene. Something like the following would do:
This particular image is a screenshot of a single frame for a similar data set.
| [
"Pyprocessing is a Python treatment of the processing Java animation library. The processing development environment includes some very nice examples of implementing a particle system.\n",
"Houdini by Side Effects Software is an industry-grade 3D animation application with excellent Python bindings, Python expre... | [
4,
2,
2,
2,
1
] | [] | [] | [
"3d",
"animation",
"opengl",
"python",
"visualization"
] | stackoverflow_0004277564_3d_animation_opengl_python_visualization.txt |
Q:
Best Python IDE on Linux
Possible Duplicate:
What IDE to use for Python?
Can anybody suggest me good Python IDE for Linux?
A:
Probably the new PyCharm from the makers of IntelliJ and ReSharper.
A:
I haven't played around with it much but eclipse/pydev feels nice.
| Best Python IDE on Linux |
Possible Duplicate:
What IDE to use for Python?
Can anybody suggest me good Python IDE for Linux?
| [
"Probably the new PyCharm from the makers of IntelliJ and ReSharper.\n",
"I haven't played around with it much but eclipse/pydev feels nice.\n"
] | [
21,
8
] | [] | [] | [
"python"
] | stackoverflow_0004281612_python.txt |
Q:
List Search Optimization
first = [(1, text, text, 1, 2, 3),
(1, text, text, 1, 0, 3), ... (6054, text, text, 2, 2, 3)]
second = (1, 2, 3, 4, 5 ... 5412)
Is there a faster way to do this:
data = [x for x in first if x[0] in second]
A:
Try this:
first = [(1, text, text, 1, 2, 3),
(1, text, text, 1, 0, 3), ... (1054, text, text, 2, 2, 3)]
second = (1, 2, 3, 4, 5 ... 5412)
second_set = set(second)
data = [x for x in first if x[0] in second_set]
Assume first has m elements and second has n elements.
Sets are hashed, so searching them is close to O(1) giving an overall efficiency of O(m). Searching second as a list is O(n) giving an overall efficiency of O(m * n).
A:
Maybe you want just this instead of the in check:
data = [x for x in first if 1 <= x[0] <= 5412]
| List Search Optimization | first = [(1, text, text, 1, 2, 3),
(1, text, text, 1, 0, 3), ... (6054, text, text, 2, 2, 3)]
second = (1, 2, 3, 4, 5 ... 5412)
Is there a faster way to do this:
data = [x for x in first if x[0] in second]
| [
"Try this:\nfirst = [(1, text, text, 1, 2, 3), \n (1, text, text, 1, 0, 3), ... (1054, text, text, 2, 2, 3)]\nsecond = (1, 2, 3, 4, 5 ... 5412)\nsecond_set = set(second)\ndata = [x for x in first if x[0] in second_set]\n\nAssume first has m elements and second has n elements.\nSets are hashed, so searching ... | [
6,
1
] | [] | [] | [
"optimization",
"python"
] | stackoverflow_0004281679_optimization_python.txt |
Q:
PyNotify not working from cron?
I've written a script that uses pynotify to give an alert. it works just fine when I run it (python script.py), but when run from cron with 00 * * * * myname python ~/scripts/script.py, it doesn't work! I haven't a clue why. Here's the snippet:
if os.path.isfile(os.path.expanduser('~/.thumbnails/normal')+'/'+thumbnail):
n = pynotify.Notification(video_file[0], 'finished download', os.path.expanduser('~/.thumbnails')+'/'+thumbnail)
else:
n = pynotify.Notification(video_file[0], 'finished download', '/usr/share/icons/gnome/48x48/mimetypes/gnome-mime-application-x-shockwave-flash.png')
print n
n.show()
directing the output to ~/log.file gives: <pynotify.Notification object at 0x16d4e60 (NotifyNotification at 0x13804e0)> and no errors, so i'm not quite sure where else to look.
A:
I'm not that deep into cron jobs, but I know a bit about pynotify. It uses libnotify and some DBUS stuff, so somewhere it makes the call to the DBUS and iirc it also passes the display id on which the notification should be shown.
Now, by default cron's don't work with GUI applications, you have to specify a display for them to use:
00 * * * * myname env DISPLAY=:0 python ~/scripts/script.py
This will make the cron use the current display (Desktop).
If you're running on Ubuntu this page might be of interest for you:
https://help.ubuntu.com/community/CronHowto
| PyNotify not working from cron? | I've written a script that uses pynotify to give an alert. it works just fine when I run it (python script.py), but when run from cron with 00 * * * * myname python ~/scripts/script.py, it doesn't work! I haven't a clue why. Here's the snippet:
if os.path.isfile(os.path.expanduser('~/.thumbnails/normal')+'/'+thumbnail):
n = pynotify.Notification(video_file[0], 'finished download', os.path.expanduser('~/.thumbnails')+'/'+thumbnail)
else:
n = pynotify.Notification(video_file[0], 'finished download', '/usr/share/icons/gnome/48x48/mimetypes/gnome-mime-application-x-shockwave-flash.png')
print n
n.show()
directing the output to ~/log.file gives: <pynotify.Notification object at 0x16d4e60 (NotifyNotification at 0x13804e0)> and no errors, so i'm not quite sure where else to look.
| [
"I'm not that deep into cron jobs, but I know a bit about pynotify. It uses libnotify and some DBUS stuff, so somewhere it makes the call to the DBUS and iirc it also passes the display id on which the notification should be shown.\nNow, by default cron's don't work with GUI applications, you have to specify a disp... | [
5
] | [] | [] | [
"cron",
"pynotify",
"python"
] | stackoverflow_0004281821_cron_pynotify_python.txt |
Q:
What's the relationship between numbers.Integral and int in builtins module in Python 3
I am a newbie for Python, and I am a little confused about the relationship between numbers.Integral and int in the builtins module.
Then my questions are:
What is the relationship between numbers.Integral and int? Is it similar to the relationship between Integer and int in Java (Integer is just a wrapper class for int)?
When to use numbers.Integral?
Since none of the types defined in the numbers module can be instantiated, in what kind of scenarios do we need to use these types?
Only one case I know we can use Integral:
# check if x is an integer
isinstance(x, Integral)
From the Python language reference:
"numbers.Number: These are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions."
But it seems the arithmetic operators and arithmetic built-in functions don't return a type from numbers; the result is still a builtins type.
>>> a,b=123,5
>>> c=a*b
>>> print(c.__class__)
<class 'int'>
>>> d=abs(a)
>>> print(d.__class__)
<class 'int'>
I am using Python 3.2a3.
A:
Since you sound like you know Java: numbers.Integral defines the "interface" (it's really a generalization of interfaces, called "Abstract Base Class" (ABC)) for integral numbers. It's not a concrete type that you can instantiate.
The only builtin type that implements this interface is int.
With isinstance( obj, numbers.Integral) you can test if obj implements a interface:
>>> isinstance(3, numbers.Integral)
True
>>> isinstance(3.0, numbers.Integral)
False
>>> isinstance(3+0j, numbers.Integral)
False
If you wanted to write a custom class that behaves like a integral number you would inherit from numbers.Integral -- or if you wanted to invent a new type of number you could register it there.
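For example, registering a custom class with the ABC (a toy sketch; note that register() only records the "is-a" claim, it does not verify that the class really implements the integer protocol):

```python
import numbers

class MyInt:
    """Toy integer wrapper, for illustration only."""
    def __init__(self, value):
        self.value = int(value)

# Tell the ABC machinery to treat MyInt as an Integral.
numbers.Integral.register(MyInt)

print(isinstance(MyInt(3), numbers.Integral))  # True
```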
| What's the relationship between numbers.Integral and int in builtins module in Python 3 | I am a newbie for Python, and I am a little confused about the relationship between numbers.Integral and int in the builtins module.
Then my questions are:
What is the relationship between numbers.Integral and int? Is it similar to the relationship between Integer and int in Java (Integer is just a wrapper class for int)?
When to use numbers.Integral?
Since none of the types defined in the numbers module can be instantiated, in what kind of scenarios do we need to use these types?
Only one case I know we can use Integral:
# check if x is an integer
isinstance(x, Integral)
From the Python language reference:
"numbers.Number: These are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions."
But it seems the arithmetic operators and arithmetic built-in functions don't return a type from numbers; the result is still a builtins type.
>>> a,b=123,5
>>> c=a*b
>>> print(c.__class__)
<class 'int'>
>>> d=abs(a)
>>> print(d.__class__)
<class 'int'>
I am using Python 3.2a3.
| [
"Since you sound like you know Java: numbers.Integral defines the \"interface\" (it's really a generalization of interfaces, called \"Abstract Base Class\" (ABC)) for integral numbers. It's not a concrete type that you can instantiate.\nThe only builtin type that implements this interface is int. \nWith isinstance(... | [
3
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0004281955_python_python_3.x.txt |
Q:
Unexpected output when filtering Python list: What am I doing wrong?
I am trying to filter a list, here is the code:
test=['aaa','bbb','ccc','ddd','footer','header']
def rm_hf(x): return x != 'footer'
filter(rm_hf,test)
Results in :
>>>['aaa','bbb','ccc','ddd','header']
That is expected result, finding 'footer' in the list and remove it.
Now I want to remove both 'header' and 'footer' so I do this:
test2=['aaa','bbb','ccc','ddd','footer','header']
def rm_hf2(x): return x != 'footer' or x != 'header'
filter(rm_hf2,test2)
Results in :
>>>['aaa','bbb','ccc','ddd','footer','header']
Now that's weird, it just gives out both 'footer','header' instead of filtering them?
What did I do wrong? I think my logic is correct...
A:
Your logic is correct because you think like a human. Your computer does not. He reads every element from your list, then stumbles upon 'footer'. "Is footer different than footer?", he says. "NO! It's the same string! It's evaluated to false. Let's see the next condition". "Is footer different than header? YES!" The condition is therefore False or True, which obviously evaluates to true.
You want a and, not a or :
def rm_hf2(x): return x != 'footer' and x != 'header'
You could also use a tuple and the in keyword, which is more readable :
def rm_hf2(x): return x not in ('footer', 'header')
It's important that you understand what's really going on with "and" and "or", though. And let's be honest : if something isn't working as you think it should, the problem most likely lies in your own code, and not in Python language itself.
A:
my logic is correct
Actually, no it isn't, as highlighted in other answers.
A far neater way to achieve the desired outcome is to use list comprehensions, viz:
test = ['aaa', 'bbb', 'ccc', 'ddd', 'footer', 'header']
undesirable = ['footer', 'header']
[_ for _ in test if _ not in undesirable]
From the documentation:
Note that filter(function, iterable) is equivalent to [item for item in iterable if function(item)] if function is not None and [item for item in iterable if item] if function is None.
That said, there's no time like the present to brush-up on your Boolean logic!
Were you to unit test your code, you would quickly find out that your second filtration function is not doing what you expect. Here is a simplistic example:
$ cat 4281875.py
#!/usr/bin/env python
import unittest
def rm_hf2(x): return x != 'footer' or x != 'header'
class test_rm_hft(unittest.TestCase):
def test_aaa_is_not_filtered(self):
self.assertTrue(rm_hf2('aaa'))
def test_footer_is_filtered_out(self):
self.assertFalse(rm_hf2('footer'))
if __name__ == '__main__':
unittest.main()
$ ./4281875.py
.F
======================================================================
FAIL: test_footer_is_filtered_out (__main__.test_rm_hft)
----------------------------------------------------------------------
Traceback (most recent call last):
File "./4281875.py", line 13, in test_footer_is_filtered_out
self.assertFalse(rm_hf2('footer'))
AssertionError
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (failures=1)
A:
What everybody else said, plus:
When you have several items that you want to exclude, use a set instead of a chain of ands or a tuple:
# do once
blacklist = set(['header', 'footer'])
# as needed
filter(lambda x: x not in blacklist, some_iterable)
Rationale: Looking through a tuple takes time proportional to the position of the found item; failure takes the same time as the last item. Looking up an item in a set takes the same time for all items, and for failure. Sets usually win for a large number of items. It all depends on the probability that each item will be searched, and what the probability of failure is. Tuples can win even with a large collection when there's a high probability of a few items (they should be put at the front of the tuple) and a low chance of failure.
A:
you can also use a list comprehension instead of filter.
test = ['aaa','bbb','ccc','ddd','footer','header']
filtered_test = [x for x in test if x not in ('footer', 'header')]
or a generator expression (depending on your needs)
test = ['aaa','bbb','ccc','ddd','footer','header']
filtered_test = (x for x in test if x not in ('footer', 'header'))
| Unexpected output when filtering Python list: What am I doing wrong? | I am trying to filter a list, here is the code:
test=['aaa','bbb','ccc','ddd','footer','header']
def rm_hf(x): return x != 'footer'
filter(rm_hf,test)
Results in :
>>>['aaa','bbb','ccc','ddd','header']
That is expected result, finding 'footer' in the list and remove it.
Now I want to remove both 'header' and 'footer' so I do this:
test2=['aaa','bbb','ccc','ddd','footer','header']
def rm_hf2(x): return x != 'footer' or x != 'header'
filter(rm_hf2,test2)
Results in :
>>>['aaa','bbb','ccc','ddd','footer','header']
Now that's weird, it just gives out both 'footer','header' instead of filtering them?
What did I do wrong? I think my logic is correct...
| [
"Your logic is correct because you think like a human. Your computer does not. He reads every element from your list, then stumbles upon 'footer'. \"Is footer different than footer?\", he says. \"NO! It's the same string! It's evaluated to false. Let's see the next condition\". \"Is footer different than header? YE... | [
7,
4,
2,
1
] | [] | [] | [
"filtering",
"list",
"logic",
"python",
"sequence"
] | stackoverflow_0004281875_filtering_list_logic_python_sequence.txt |
Q:
Python MySQLDB executemany() works on mac, does nothing on linux
I run a number of SQL scripts in this manner:
db_conn = (created earlier)
cursor = db_conn.cursor()
script_file = open(join(script_path, script_name))
script_text = script_file.read()
script_file.close()
num_rows = cursor.executemany(script_text, None)
This works like a charm on my Mac, but fails on Linux, executemany(...) simply does nothing and returns None. Connection settings are fine: They are identical on both systems and I can execute the SQL scripts manually, i.e. using the mysql command line client. Also, MySQLDB.execute(...) works fine with shorter SQL statements, but then presumably fails on the changed delimiter in a stored procedure definition (reports an SQL error around the DELIMITER line anyway).
Has anyone ever come across something similar before ?
Is anyone using executemany() successfully on Linux ?
Versions:
Mac OS X 10.6.4
mysql Ver 14.14 Distrib 5.1.39, for apple-darwin9.5.0 (i386) using readline 5.1
MySQL_python-1.2.3-py2.6-macosx-10.6-universal
Kubuntu 10.10
mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (i686) using readline 6.1
MySQL_python-1.2.3-py2.6-linux-i686
(the default Kubuntu package is 1.2.2, so I upgraded manually)
(for some reason, there's an extra empty line after Kubuntu 10.10 I can't seem to get rid of, possibly a stackoverflow bug...)
A:
Check if you upgraded properly to 1.2.3 ... I remember reading that 1.2.2 had a bug that caused problems with executemany.
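For contrast, the DB-API executemany contract is a single parameterized statement plus a sequence of parameter tuples, not a whole SQL script with None. A minimal sketch, illustrated with the stdlib sqlite3 module rather than MySQLdb (table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE t (a INTEGER, b TEXT)')

# executemany runs the statement once per parameter tuple.
cur.executemany('INSERT INTO t (a, b) VALUES (?, ?)',
                [(1, 'x'), (2, 'y'), (3, 'z')])

count = cur.execute('SELECT COUNT(*) FROM t').fetchone()[0]
print(count)  # 3
```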
| Python MySQLDB executemany() works on mac, does nothing on linux | I run a number of SQL scripts in this manner:
db_conn = (created earlier)
cursor = db_conn.cursor()
script_file = open(join(script_path, script_name))
script_text = script_file.read()
script_file.close()
num_rows = cursor.executemany(script_text, None)
This works like a charm on my Mac, but fails on Linux, executemany(...) simply does nothing and returns None. Connection settings are fine: They are identical on both systems and I can execute the SQL scripts manually, i.e. using the mysql command line client. Also, MySQLDB.execute(...) works fine with shorter SQL statements, but then presumably fails on the changed delimiter in a stored procedure definition (reports an SQL error around the DELIMITER line anyway).
Has anyone ever come across something similar before ?
Is anyone using executemany() successfully on Linux ?
Versions:
Mac OS X 10.6.4
mysql Ver 14.14 Distrib 5.1.39, for apple-darwin9.5.0 (i386) using readline 5.1
MySQL_python-1.2.3-py2.6-macosx-10.6-universal
Kubuntu 10.10
mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (i686) using readline 6.1
MySQL_python-1.2.3-py2.6-linux-i686
(the default Kubuntu package is 1.2.2, so I upgraded manually)
(for some reason, there's an extra empty line after Kubuntu 10.10 I can't seem to get rid of, possibly a stackoverflow bug...)
| [
"Check if you upgraded properly to 1.2.3 ... I remember reading that 1.2.2 had a bug that caused problems with executemany.\n"
] | [
0
] | [] | [] | [
"linux",
"macos",
"mysql",
"python"
] | stackoverflow_0004200691_linux_macos_mysql_python.txt |
Q:
Populate templates using list,detail generic views
I am creating a list of users present in my database, they are being displayed in user_list.html template through the use of generic views, but my models inherit many of its properties from other classes in the model. Now I want that when a user clicks on his name he should be redirected to the user_detail.html page and he should get his details here.
The details are to be picked up from the database, but currently it is just picking the values from the same model for which the queryset is defined.
my views.py looks like
from django.contrib.auth.models import User
from django.shortcuts import render_to_response, get_object_or_404
from django.views.generic.list_detail import object_list, object_detail
from contacts.models import *
def employee_list(request, queryset=None, **kwargs):
if queryset is None:
queryset = Employee.objects.all()
return object_list(
request,
queryset=queryset,
paginate_by=20,
**kwargs)
def employee_detail(request, employee_id):
return object_detail(
request,
queryset= Employee.objects.all(),
# extra_context ={"EC_list": EmergencyContact.objects.all()},
object_id=employee_id)
urls.py
from contacts.views import employees
urlpatterns = patterns('',
url(r'^$',
employees.employee_list,
name='contacts_employee_list'),
url(r'^(?P<employee_id>\d+)/$',
employees.employee_detail,
name='contacts_employee_detail'),
my employee_detail.html looks like
{% block title %} Employee details {% endblock %}
{% block heading1%}<h1> Employee's Details </h1>{% endblock %}
{% block right_menu %}
{% if object %}
<ul>
<li> Name:{{ object.full_name }}</li>
<li> Contact No.: {{ object.phone_number }}</li>
<!-- <li> Refrence Contact No.: {{ EC_list.contact }}</li> -->
<li> Blood Group: {{ object.blood_type }}</li>
<li> Martial Status: {{ object.martial_status }}</li>
<li> Nationality: {{ object.about }}</li>
<!-- <li> Relationship: {{ EC_list.relationship }}</li>
<li>Course: {{ object.course }}</li> -->
</ul>
{% else %}
No Registered user present.
{% endif %}
{% endblock %}
So please help me figure out how I can display all the data of the employee which is present in the other models. Thank you!
A:
If i understand you correctly, you want to display information about the employee which is stored in other models.
I assume you know that you can pre-filter in the view and send an extra context variable with a queryset. Using your existing line extra_context={"EC_list": EmergencyContact.objects.all()} selects too many. I assume your EmergencyContact model has a foreign key to Employee
(employee = ForeignKey(Employee, related_name='emergency_contacts')). In this case you must pass your extra context filtered.
def employee_detail(request, employee_id):
return object_detail(
request,
queryset= Employee.objects.all(),
extra_context ={"EC_list": EmergencyContact.objects.filter(employee__pk=employee_id)},
object_id=employee_id)
this will filter the list to just the emergency contacts you need.
{% block title %} Employee details {% endblock %}
{% block heading1%}<h1> Employee's Details </h1>{% endblock %}
{% block right_menu %}
{% if object %}
<ul>
<li> Name:{{ object.full_name }}</li>
<li> Contact No.: {{ object.phone_number }}</li>
<li> Blood Group: {{ object.blood_type }}</li>
<li> Martial Status: {{ object.martial_status }}</li>
<li> Nationality: {{ object.about }}</li>
<li> Course: {{ object.course }}</li>
<li> Emergency Contacts:
<ul>
{% for EC in EC_list %}
<li> Name: {{ EC.name }} </li>
<li> Contact No.: {{ EC.contact }}</li>
<li> Relationship: {{ EC.relationship }}</li>
{% endfor %}
</ul>
</li>
</ul>
{% else %}
No Registered user present.
{% endif %}
{% endblock %}
Of course, this is only one way to do it. if you don't need any fancy filters on emergency contacts, you can use foreign key reverse lookups within the template. i.e. get rid of the extra_context for EC_list and replace the contact rendering function with this:
<li> Emergency Contacts:
<ul>
{% for EC in object.emergency_contacts.all %}
<li> Name: {{ EC.name }} </li>
<li> Contact No.: {{ EC.contact }}</li>
<li> Relationship: {{ EC.relationship }}</li>
{% endfor %}
</ul>
</li>
Remember that we have employee = ForeignKey(Employee, related_name='emergency_contacts') as a foreign key from EmergencyContact to Employee. Not only does this declaration add the employee field to EmergencyContact, but it also adds a reverse accessor to Employee with the name 'emergency_contacts'. This accessor is a related manager; calling .all on it (as the template does) returns a queryset of all Emergency Contacts linked to the current employee.
Let me know if you have any questions or need links to documentation
EDIT: for readability's sake, consider setting the template_object_name parameter of the generic view.
def employee_detail(request, employee_id):
return object_detail(request, queryset= Employee.objects.all(),
object_id=employee_id, template_object_name='employee')
------------------------------------------------------------------------------------------
<li> Name: {{ employee.full_name }}</li>
<li> Emergency Contacts:
<ul>
{% for EC in employee.emergency_contacts.all %}
<li> Name: {{ EC.name }} </li>
<li> Contact No.: {{ EC.contact }}</li>
<li> Relationship: {{ EC.relationship }}</li>
{% endfor %}
</ul>
</li>
| Populate templates using list,detail generic views | I am creating a list of users present in my database, they are being displayed in user_list.html template through the use of generic views, but my models inherit many of its properties from other classes in the model. Now I want that when a user clicks on his name he should be redirected to the user_detail.html page and he should get his details here.
The details are to be picked up from the database, but currently it is just picking the values from the same model for which the queryset is defined.
my views.py looks like
from django.contrib.auth.models import User
from django.shortcuts import render_to_response, get_object_or_404
from django.views.generic.list_detail import object_list, object_detail
from contacts.models import *
def employee_list(request, queryset=None, **kwargs):
if queryset is None:
queryset = Employee.objects.all()
return object_list(
request,
queryset=queryset,
paginate_by=20,
**kwargs)
def employee_detail(request, employee_id):
return object_detail(
request,
queryset= Employee.objects.all(),
# extra_context ={"EC_list": EmergencyContact.objects.all()},
object_id=employee_id)
urls.py
from contacts.views import employees
urlpatterns = patterns('',
url(r'^$',
employees.employee_list,
name='contacts_employee_list'),
url(r'^(?P<employee_id>\d+)/$',
employees.employee_detail,
name='contacts_employee_detail'),
my employee_detail.html looks like
{% block title %} Employee details {% endblock %}
{% block heading1%}<h1> Employee's Details </h1>{% endblock %}
{% block right_menu %}
{% if object %}
<ul>
<li> Name:{{ object.full_name }}</li>
<li> Contact No.: {{ object.phone_number }}</li>
<!-- <li> Refrence Contact No.: {{ EC_list.contact }}</li> -->
<li> Blood Group: {{ object.blood_type }}</li>
<li> Martial Status: {{ object.martial_status }}</li>
<li> Nationality: {{ object.about }}</li>
<!-- <li> Relationship: {{ EC_list.relationship }}</li>
<li>Course: {{ object.course }}</li> -->
</ul>
{% else %}
No Registered user present.
{% endif %}
{% endblock %}
So please help me figure out how I can display all the data of the employee which is present in the other models. Thank you!
| [
"If i understand you correctly, you want to display information about the employee which is stored in other models.\nI assume you know that you can pre-filter in the view and send an extra context variable with a queryset. using your existing line extra_context =\"EC_list\": EmergencyContact.objects.all() selects t... | [
0
] | [] | [] | [
"django",
"generics",
"python",
"views"
] | stackoverflow_0004274666_django_generics_python_views.txt |
Q:
Django: How to "extend" Group creation using Signals?
I am not sure if I am asking the right question. But is it possible to add "behaviors" when creating a Group in Django? I want to create directories/files after creating a Group in the admin panel.
The "additional" behaviors (creating the directory/file) can happen after the Group was successfully added in the database or after the POST was successful (?).
Thanks!
Wenbert
A:
Yes. Catch the post_save signal for the Group model, then do your processing in there.
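A minimal sketch of that wiring (assuming Django's stock Group model; the target directory and function name are illustrative, and os.makedirs will raise if the directory already exists):

```python
import os

from django.contrib.auth.models import Group
from django.db.models.signals import post_save

GROUP_DIR_ROOT = '/srv/app/groups'  # illustrative path

def create_group_directory(sender, instance, created, **kwargs):
    # Only act when the Group row is first inserted, not on later edits.
    if created:
        os.makedirs(os.path.join(GROUP_DIR_ROOT, instance.name))

post_save.connect(create_group_directory, sender=Group)
```

Put the connect call somewhere that is imported at startup (e.g. your app's models.py).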
| Django: How to "extend" Group creation using Signals? | I am not sure if I am asking the right question. But is it possible to add "behaviors" when creating a Group in Django? I want to create directories/files after creating a Group in the admin panel.
The "additional" behaviors (creating the directory/file) can happen after the Group was successfully added in the database or after the POST was successful (?).
Thanks!
Wenbert
| [
"Yes. Catch the post_save signal for the Group model, then do your processing in there.\n"
] | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004282805_django_python.txt |
Q:
Using Django-openid-auth
I'm trying to implement openid login with Django and having some trouble. The library I'm trying to use now is Django-openid-auth. I haven't found any django+openid libraries that have much documentation. How can I actually go about using this as a login and store the information I need for my app based on the users that come in from openid? How does this store the information shared from the openid provider and does this library already store it?
A:
The OpenID identity verification process most commonly uses the
following steps:
The user enters their OpenID into a field on the consumer's site and hits a login button.
The consumer site discovers the user's OpenID provider using
the Yadis protocol.
The consumer site sends the browser a redirect to the OpenID
provider. This is the authentication request as described in
the OpenID specification.
The OpenID provider's site sends the browser a redirect back to
the consumer site. This redirect contains the provider's
response to the authentication request.
Your web app needs to keep track of:

- the user's identity URL and the list of endpoints discovered for that URL
- the state of relationships with servers, i.e. shared secrets (associations) with servers and nonces seen on signed messages

This information should persist from one session to the next and should not be bound to a particular user-agent.
hope this helps:D
| Using Django-openid-auth | I'm trying to implement openid login with Django and having some trouble. The library I'm trying to use now is Django-openid-auth. I haven't found any django+openid libraries that have much documentation. How can I actually go about using this as a login and store the information I need for my app based on the users that come in from openid? How does this store the information shared from the openid provider and does this library already store it?
| [
"The OpenID identity verification process most commonly uses the\n following steps:\n\nThe user enters their OpenID into a field on the consumer'ssite, and hits a login button.\nThe consumer site discovers the user's OpenID provider using\nthe Yadis protocol.\nThe consumer site sends the browser a redirect to the... | [
4
] | [] | [] | [
"django",
"openid",
"python"
] | stackoverflow_0004282056_django_openid_python.txt |
Q:
Catch PyGTK TreeView reorder
I have a simple gtk.TreeView with a gtk.ListStore model and set_reorderable(True). I want to catch the signal/event emitted when the user reorders the list through drag & drop, but the documentation does not help much:
"The application can listen to these changes by connecting to the model's signals"
So I tried to connect the model (ListStore) signals... but surprise! ListStore has no signals, so you are dispatched to TreeModel signals; then I tried to connect with the TreeModel "rows-reordered" signal with no luck.
How should I catch the list reorder performed by the user?
A:
There is no way to do that in PyGTK currently. "rows-reordered" is the correct signal, but it is impossible to derive any information from it in PyGTK other than "somehow reordered". In C GTK+ you could use the same signal and get the required information in callback, but not in Python.
A:
I too had this problem and the docs are unclear. But here's what I found
'rows-reordered' signal is emitted when you have
tvcolumn.set_sort_column_id(0)
However you still bind the signal to the treemodel.
treestore = gtk.TreeStore(str, object)
treestore.connect("rows-reordered", self.rows_r)
That will cause the visible column header to become clickable. When you click on the column header, it will reorder the items in the tree in ascending order and then in descending order if you click it again, and back and forth.
Here's a simple code you can test and see what i mean.
import pygtk
pygtk.require('2.0')
import gtk
class BasicTreeViewExample:
def __init__(self):
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.set_title("Treeview")
window.set_size_request(200, 200)
window.connect("destroy", lambda w: gtk.main_quit())
treestore = gtk.TreeStore(str)
treestore.connect("rows-reordered", self.rows_reordered)
for i in range(4):
piter = treestore.append(None, ['Item %i' % i])
treeview = gtk.TreeView(treestore)
tvcolumn = gtk.TreeViewColumn('Click Me!')
treeview.append_column(tvcolumn)
cell = gtk.CellRendererText()
tvcolumn.pack_start(cell, True)
tvcolumn.add_attribute(cell, 'text', 0)
# This allows the column header ("Click me!") to be clickable and sort/order items
tvcolumn.set_sort_column_id(0)
window.add(treeview)
window.show_all()
def rows_reordered(self, a, b, c, d):
print a
print b
print c
print d
def main():
gtk.main()
if __name__ == "__main__":
tvexample = BasicTreeViewExample()
main()
| Catch PyGTK TreeView reorder | I have a simple gtk.TreeView with a gtk.ListStore model and set_reorderable(True). I want to catch the signal/event emitted when the user reorders the list through drag & drop, but the documentation does not help much:
"The application can listen to these changes by connecting to the model's signals"
So I tried to connect the model (ListStore) signals... but surprise! ListStore has no signals, so you are dispatched to TreeModel signals; then I tried to connect with the TreeModel "rows-reordered" signal with no luck.
How should I catch the list reorder performed by the user?
| [
"There is no way to do that in PyGTK currently. \"rows-reordered\" is the correct signal, but it is impossible to derive any information from it in PyGTK other than \"somehow reordered\". In C GTK+ you could use the same signal and get the required information in callback, but not in Python.\n",
"I too had this... | [
3,
0
] | [] | [] | [
"gtk",
"gtktreeview",
"pygtk",
"python"
] | stackoverflow_0002831779_gtk_gtktreeview_pygtk_python.txt |
Q:
Finding the consecutive substring match
I have say two strings;
str1="wild animals are trying to escape the deserted jungle to the sandy island"
str2="people are trying to escape from the smoky mountain to the sandy road"
In order to find the match between these two strings, kgrams of a certain length (here 10) are produced, their hashes are computed, and the hashes of the two strings are compared. Say, for example, the matching kgrams from these two strings are:
['aretryingt', 'etryingtoe', 'ngtoescape', 'tothesandy']
Please suggest an efficient way of finding the consecutive substring (kgram) match from these kgrams. In the above case the actual answer would be
"aretryingtoescape"
Thanking you in advance!!!
A:
First make yourself a coverage mask consisting of 0 and 1 (or other characters if you prefer), then find the longest run of 1s with itertools.groupby().
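A rough illustration of that idea using the question's data (variable names are mine): mark every character covered by some matching kgram, then take the longest covered run.

```python
from itertools import groupby

# str2 from the question with spaces removed
text = "peoplearetryingtoescapefromthesmokymountaintothesandyroad"
kgrams = ['aretryingt', 'etryingtoe', 'ngtoescape', 'tothesandy']

# Build the coverage mask: True where some kgram covers the character.
mask = [False] * len(text)
for k in kgrams:
    start = text.find(k)
    while start != -1:  # handle every (possibly overlapping) occurrence
        for i in range(start, start + len(k)):
            mask[i] = True
        start = text.find(k, start + 1)

# Find the longest run of covered characters.
runs = [list(g) for covered, g in groupby(range(len(text)), key=lambda i: mask[i]) if covered]
longest = max(runs, key=len)
print(text[longest[0]:longest[-1] + 1])  # -> aretryingtoescape
```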
A:
Code following Ignacio's idea:
#!/usr/bin/env python
from itertools import groupby
str1 = 'wild animals are trying to escape the deserted jungle to the sandy island'
str2 = 'people are trying to escape from the smoky mountain to the sandy road'
words = ['aretryingt', 'etryingtoe', 'ngtoescape', 'tothesandy']
def solve(strings, words):
s = min([ s.replace(' ', '') for s in strings ], key=len)
coverage = [False]*len(s)
for w in words:
p = s.find(w)
if p >= 0:
for i in range(len(w)):
coverage[p+i] = True
return max([ ''.join([ y[1] for y in g ]) for k, g in groupby(enumerate(s), key=lambda x: coverage[x[0]]) if k ], key=len)
print solve([str1, str2], words)
| Finding the consecutive substring match | I have say two strings;
str1="wild animals are trying to escape the deserted jungle to the sandy island"
str2="people are trying to escape from the smoky mountain to the sandy road"
In order to find the match between these two strings, kgrams of a certain length (here 10) are produced, their hashes are computed, and the hashes of the two strings are compared. Say, for example, the matching kgrams from these two strings are:
['aretryingt', 'etryingtoe', 'ngtoescape', 'tothesandy']
Please suggest an efficient way of finding the consecutive substring (kgram) match from these kgrams. In the above case the actual answer would be
"aretryingtoescape"
Thanking you in advance!!!
| [
"First make yourself a coverage mask consisting of 0 and 1 (or other characters if you prefer), then find the longest run of 1s with itertools.groupby().\n",
"Code following Ignacio's idea:\n#!/usr/bin/env python\n\nfrom itertools import groupby\n\nstr1 = 'wild animals are trying to escape the deserted jungle to ... | [
2,
0
] | [] | [] | [
"algorithm",
"python"
] | stackoverflow_0004280202_algorithm_python.txt |
Q:
Python process file and return specific items
Say I have a dict; it is a dictionary with dictionaries as values:
dict = {'Leon':{'name':'Leon L','follow':['Apple', 'PPy','Jack','Tommy']},'Jack':{'name':'Jack Y','follow':['Apple','Cruise','Jay']},'Tommy':{'name':'Tommy T','follow':['Hill']},'Apple':{'name':'Apple A','follow':['Jack']},'Cruise':{'name':'Cruise L','follow':['Jay']}}
follow means the users that are followed by this user, for example:
Leon follows Apple, PPy, Jack, Tommy
and i have query file that has tasks in it. We need to accomplish the task and return a list of usernames (the usernames are the keys in dict, eg 'Leon','Jack','Tommy'), the format of that file is:
SEARCH
Leon
follow
follow-by # there might be many more follow, and follow-by
FILTER
name-include Leon
follow Apple # format: keyword follow, a space and a username. same apply to follow-by username
follow-by Leon # there might be more name-include, , follow username, follow by username
the meaning of the query file:
The SEARCH and FILTER are keywords. The line after SEARCH is the starting username (we need to put it in the list, lets name the list user_list). The search specification has the 2 steps (in this case):
create a list that has Leon in it, call this list L1
follow says to replace each person p in L1 with the people who follow p; then we get L2
follow-by says to replace each person p in L2 with people who are followed by p (people who are followed by p are in the follow list of the user profile). Then we get L3
The filter specification (in this example):
for each person p in L3, if p's name has 'Leon' in it, the user is kept in the list.Then we get L4
for each person p in L4, if p follows Apple, then p is kept in the list, we get L5
for each person p in L5, if p is followed by Leon, p is kept in the list. Then we get our final list. We need to return the final list
Can anyone help me to write a program that can accomplish the task?
A:
I would suggest using a state machine, or more specifically pushdown automata. You would initialize the machine with the data dict, then for every line in the input file, you will transition into a new state and do the work specified, storing the result in the stack. Every step can access the data returned from the previous step.
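For reference, a rough self-contained sketch of that approach (the helper names and my reading of the question's follow/follow-by semantics are assumptions): the current mode acts as the machine state, and each line of the query transitions or transforms the working list.

```python
data = {
    'Leon':   {'name': 'Leon L',   'follow': ['Apple', 'PPy', 'Jack', 'Tommy']},
    'Jack':   {'name': 'Jack Y',   'follow': ['Apple', 'Cruise', 'Jay']},
    'Tommy':  {'name': 'Tommy T',  'follow': ['Hill']},
    'Apple':  {'name': 'Apple A',  'follow': ['Jack']},
    'Cruise': {'name': 'Cruise L', 'follow': ['Jay']},
}

def followers(user):
    # Everyone whose follow list contains `user`.
    return [u for u, info in data.items() if user in info['follow']]

def run_query(lines):
    users, mode = [], None
    for line in lines:
        line = line.strip()
        if line == 'SEARCH':
            mode = 'search-start'
        elif line == 'FILTER':
            mode = 'filter'
        elif mode == 'search-start':
            users = [line]                      # line after SEARCH is the seed user
            mode = 'search'
        elif mode == 'search':
            new = []
            for p in users:
                if line == 'follow':            # replace p with people who follow p
                    new.extend(followers(p))
                elif line == 'follow-by':       # replace p with people p follows
                    new.extend(data.get(p, {}).get('follow', []))
            users = new
        elif mode == 'filter':
            key, _, arg = line.partition(' ')
            if key == 'name-include':
                users = [p for p in users if p in data and arg in data[p]['name']]
            elif key == 'follow':
                users = [p for p in users if p in data and arg in data[p]['follow']]
            elif key == 'follow-by':
                users = [p for p in users if p in data[arg]['follow']]
    return users

query = ['SEARCH', 'Apple', 'follow', 'follow-by', 'FILTER', 'follow Apple']
print(run_query(query))  # -> ['Jack']
```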
| Python process file and return specific items | Say I have a dict; it is a dictionary with dictionaries as values:
dict = {'Leon':{'name':'Leon L','follow':['Apple', 'PPy','Jack','Tommy']},'Jack':{'name':'Jack Y','follow':['Apple','Cruise','Jay']},'Tommy':{'name':'Tommy T','follow':['Hill']},'Apple':{'name':'Apple A','follow':['Jack']},'Cruise':{'name':'Cruise L','follow':['Jay']}}
follow means the users that are followed by this user, for example:
Leon follows Apple, PPy, Jack, Tommy
and i have query file that has tasks in it. We need to accomplish the task and return a list of usernames (the usernames are the keys in dict, eg 'Leon','Jack','Tommy'), the format of that file is:
SEARCH
Leon
follow
follow-by # there might be many more follow, and follow-by
FILTER
name-include Leon
follow Apple # format: keyword follow, a space and a username. same apply to follow-by username
follow-by Leon # there might be more name-include, , follow username, follow by username
the meaning of the query file:
The SEARCH and FILTER are keywords. The line after SEARCH is the starting username (we need to put it in the list, lets name the list user_list). The search specification has the 2 steps (in this case):
create a list that has Leon in it, call this list L1
follow says to replace each person p in L1 with the people who follow p; then we get L2
follow-by says to replace each person p in L2 with people who are followed by p (people who are followed by p are in the follow list of the user profile). Then we get L3
The filter specification (in this example):
for each person p in L3, if p's name has 'Leon' in it, the user is kept in the list.Then we get L4
for each person p in L4, if p follows Apple, then p is kept in the list, we get L5
for each person p in L5, if p is followed by Leon, p is kept in the list. Then we get our final list. We need to return the final list
Can anyone help me to write a program that can accomplish the task?
| [
"I would suggest using a state machine, or more specifically pushdown automata. You would initialize the machine with the data dict, then for every line in the input file, you will transition into a new state and do the work specified, storing the result in the stack. Every step can access the data returned from th... | [
0
] | [] | [] | [
"file",
"python",
"python_twitter",
"twitter"
] | stackoverflow_0004281615_file_python_python_twitter_twitter.txt |
Q:
Convert a space-delimited tree to useful dict in python
I have an output (a list) of items, like such:
Root
Branch1
LeafA
LeafB
Branch2
LeafC
LeafZ
LeafD
They are all two-space delimited.
I want to build a logical representation of this list without the leading spaces, and retain the parent-child relationship.
A final possible result:
aDict = {
'Root': None,
'Branch1': 'Root',
'LeafA': 'Branch1',
... so on and so forth
}
Ultimately, I want to iterate through the dictionary and retrieve the Key and parent, as well as another value from another dict based on Key.
A:
Try this:
tree = """Root
Branch1
LeafA
LeafB
Branch2
LeafC
LeafZ
LeafD"""
aDict = {}
iDict = {}
for line in tree.split("\n"):
key = line.lstrip(" ")
indent = (len(line) - len(key)) / 2
if indent == 0:
aDict[key] = None
else:
aDict[key] = iDict[indent - 1]
iDict[indent] = key
print aDict
# {'LeafD': 'Branch2', 'LeafA': 'Branch1', 'Branch2': 'Root', 'LeafC': 'Branch2', 'LeafB': 'Branch1', 'Branch1': 'Root', 'Root': None, 'LeafZ': 'LeafC'}
A:
I guess this solves the problem:
#!/usr/bin/env python
def f(txt):
stack = []
ret = {}
for line in txt.split('\n'):
a = line.split(' ')
level = len(a) - 1
key = a[-1]
stack = stack[:level]
ret[key] = stack[-1] if len(stack) > 0 else None
stack.append(key)
return ret
print f("""Root
Branch1
LeafA
LeafB
Branch2
LeafC
LeafZ
LeafD""")
print f("""Root1
Branch1
LeafA
LeftZ
Branch2
LeftB
Root2
Branch3""")
Output:
{'LeafD': 'Branch2', 'LeafA': 'Branch1', 'Branch2': 'Root', 'LeafC': 'Branch2', 'LeafB': 'Branch1', 'Branch1': 'Root', 'Root': None, 'LeafZ': 'LeafC'}
{'LeafA': 'Branch1', 'Branch2': 'Root1', 'Branch1': 'Root1', 'LeftZ': 'LeafA', 'LeftB': 'Branch2', 'Branch3': 'Root2', 'Root1': None, 'Root2': None}
| Convert a space-delimited tree to useful dict in python | I have an output (a list) of items, like such:
Root
Branch1
LeafA
LeafB
Branch2
LeafC
LeafZ
LeafD
They are all two-space delimited.
I want to build a logical representation of this list without the leading spaces, and retain the parent-child relationship.
A final possible result:
aDict = {
'Root': None,
'Branch1': 'Root',
'LeafA': 'Branch1',
... so on and so forth
}
Ultimately, I want to iterate through the dictionary and retrieve the Key and parent, as well as another value from another dict based on Key.
| [
"Try this:\ntree = \"\"\"Root\n Branch1\n LeafA\n LeafB\n Branch2\n LeafC\n LeafZ\n LeafD\"\"\"\n\naDict = {}\niDict = {}\nfor line in tree.split(\"\\n\"):\n key = line.lstrip(\" \")\n indent = (len(line) - len(key)) / 2\n if indent == 0:\n aDict[key] = None\n else:\n aD... | [
4,
0
] | [] | [] | [
"python",
"whitespace"
] | stackoverflow_0004270970_python_whitespace.txt |
Q:
Including Psyco Files in distribution
I'm developing a game server in Python. Since this server is distributed to other people who don't have much coding experience (or common sense), I try to include any modules I can in a pre-reqs folder in the project, so users can run the code without installing things. I tried out Psyco on my program and noticed a speed increase. I tried including the files from the psyco folder in the pre-reqs folder, but my users got errors about psyco not being installed. So now I'm wondering: is it possible to include psyco with my package, and if so, what files and other things need to be included?
A:
The canonical way to distribute Python packages is to use distutils (for now anyway). You can specify psyco as a dependency and when people install your package, it will get pulled in.
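A minimal sketch of such a setup script (project name and version are placeholders; note that declaring install_requires needs setuptools rather than plain distutils):

```python
from setuptools import setup, find_packages

setup(
    name='mygameserver',            # placeholder project name
    version='0.1',
    packages=find_packages(),
    install_requires=['psyco'],     # fetched automatically when users install
)
```

Users then run `python setup.py install` (or easy_install your package) and the dependency is resolved for them.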
| Including Psyco Files in distribution | I'm developing a game server in Python. Since this server is distributed to other people who don't have much coding experience (or common sense), I try to include any modules I can in a pre-reqs folder in the project, so users can run the code without installing things. I tried out Psyco on my program and noticed a speed increase. I tried including the files from the psyco folder in the pre-reqs folder, but my users got errors about psyco not being installed. So now I'm wondering: is it possible to include psyco with my package, and if so, what files and other things need to be included?
| [
"The canonical way to distribute Python packages is to use distutils (for now anyway). You can specify psyco as a dependency and when people install your package, it will get pulled in. \n"
] | [
0
] | [] | [] | [
"distribution",
"packaging",
"psyco",
"python",
"windows"
] | stackoverflow_0004282474_distribution_packaging_psyco_python_windows.txt |
Q:
Installing python modules in GNU/Linux
installing python modules in GNU/Linux. Are there any good PDFs on installing modules? I would like to install some of these Python: 50 modules for all needs. I tried PIL http://effbot.org/downloads/Imaging-1.1.7.tar.gz but it did not work.
PS: what does community wiki mean?
A:
Most of those are probably already available as packages in your Linux distribution. You didn't mention which one you used. Typically, "apt-get" or "yum" would cover most current distributions. Read the man pages for those tools, and use the search features to find packages containing the name "python", or the names of the packages in the list.
If the one you want is not there, install "setuptools", then use "easy_install" to fetch them from the Python package index (PyPI).
Only if the above fail you should build from source. That will require a build environment, typically involving installing "dev" or "development" packages, e.g "python-dev" on some distros. You may also need some other library -dev packages.
Once you have the required development packages installed, the distutils (or setuptools) standard method to build from source should be used.
The command
$ python setup.py build
should work. If it doesn't you may need more -dev packages. Check the error messages.
If it does build, then use "sudo python setup.py install" to install it.
Without more info, it's hard to give a more specific answer.
A:
On a Debian or Ubuntu system, the easiest way to install Python packages, if available, is apt-get. The easy_install system contains more Python packages, but the apt-get packages are specifically tuned for your system.
They always start with 'python-'.
For example,
sudo apt-get install python-imaging
| Installing python modules in GNU/Linux | installing python modules in GNU/Linux. Are there any good PDFs on installing modules? I would like to install some of these Python: 50 modules for all needs. I tried PIL http://effbot.org/downloads/Imaging-1.1.7.tar.gz but it did not work.
PS: what does community wiki mean?
| [
"Most of those are probably already available as packages in your Linux distribution. You didn't mention which one you used. Typically, \"apt-get\" or \"yum\" would cover most current distributions. Read the man pages for those tools, and use the search features to find packages containing the name \"python\", or t... | [
4,
0
] | [] | [] | [
"linux",
"module",
"python"
] | stackoverflow_0003759810_linux_module_python.txt |
Q:
How can I start a python subprocess command on linux shell from within Django web app?
I have a web app which reads and displays log files generated by python test files. Currently I can launch the python files from cli and it works fine.
However I would like to be able to launch python tests from within the application.
At the minute I have a "Launch test" button which calls a function in the views.py file
def launch_tests(request, test_txt_file):
test_list_file = test_txt_file
launcher2.main(test_list_file)
return render_to_response('front_page.html')
The text file contains the names of the .py files to be executed
In launcher2.main
import os,time, string, pdb, sys
import getopt,subprocess
import pdb
bi_python_home = '//belfiler1/scratch-disk/conor.doherty/dev/python/bi_python'
def main(test_list_file):
# if argument passed in for test list file
if test_list_file != None:
test_list = bi_python_home + "/launcher/" + test_list_file
else:
# default
test_list = bi_python_home + "/launcher/test_list.txt"
tests_dir = bi_python_home + "/tests/"
log_dir =bi_python_home + "/logs/"
month = '%Y%m'
day = '%d'
try :
for test_to_run in open(test_list):
sub_dir=os.path.dirname(test_to_run)
test_filename = os.path.basename(test_to_run)
# Create log directory for date if doesn't exist
cur_log_dir = log_dir + time.strftime(month, time.localtime()) + '/' + time.strftime(month+day, time.localtime()) + '/' + sub_dir
if not os.path.exists(cur_log_dir):
print "creating cur_log_dir " + cur_log_dir
os.makedirs(cur_log_dir)
full_path = string.rstrip(tests_dir + test_to_run)
#print ' full_path is "' + full_path + '"'
cmd = 'python ' + full_path
if os.path.isfile(full_path) == True :
print'\n'*2
print "Processing file " + test_to_run,
log_timestamp = time.strftime("%Y%m%d_%H%M%S", time.localtime())
log_filename = cur_log_dir + '/' + string.rstrip(test_filename) + "_" + log_timestamp + '.log'
print 'log file to use is' + log_filename
# Redirect stdout and stderr to logfile
cmd = string.rstrip(cmd) + ' > ' + log_filename
popen_obj = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = popen_obj.communicate()
print 'executing: ' + cmd
print 'stderr is: ', stderr
# if need to debug , remove stream redirection &> above and print stdout, stderr
#else :
#print 'ignoring ' , full_path
except IOError, err:
print str(err)
sys.exit(2)
The code works fine apart from one thing: when I kick off the subprocess, it seems to be executed on Windows by default. Since the tests use a module called pexpect, I need it to be executed on Linux.
I keep thinking there has to be a simple solution but so far I have had no luck.
Any help would be greatly appreciated,
Thanks
A:
There is not enough information here to give a proper answer, but a common problem is that the web front-end (e.g. Apache) is running as a different user than the one you develop under. Therefore the web handler doesn't have the permissions to read the files in your home directory. It probably also lacks some important environment variables, such as HOME.
Check that out, first. What do the web server logs say?
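A quick way to confirm the user/environment mismatch is to log the effective user and environment from inside the Django process before launching anything, and to spawn the test with an explicit interpreter and environment rather than whatever the web server inherited. A minimal sketch (the `/tmp` HOME fallback is an assumption; adjust it to your setup):

```python
import getpass
import os
import subprocess
import sys

def describe_runtime():
    """Report the identity/environment the current process runs under.

    Under Apache/mod_wsgi this is typically www-data (or apache), not
    your login user, and HOME may be missing entirely.
    """
    return {
        'user': getpass.getuser(),
        'home': os.environ.get('HOME'),
        'path': os.environ.get('PATH'),
    }

def run_test_script(script_path, log_path):
    """Run one test script, redirecting its output to a log file."""
    env = dict(os.environ)
    env.setdefault('HOME', '/tmp')  # assumption: give $HOME-needing tools somewhere writable
    with open(log_path, 'w') as log:
        # sys.executable pins the interpreter the server itself runs,
        # instead of whatever 'python' resolves to on the server's PATH
        proc = subprocess.Popen([sys.executable, script_path],
                                stdout=log, stderr=subprocess.STDOUT, env=env)
        return proc.wait()
```

Logging `describe_runtime()` once from the view usually makes the permission problem obvious in the server logs.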
| How can I start a python subprocess command on linux shell from within Django web app? |
| [
"There is not enough information here to give a proper answer, but a common problem here is that the web front-end (e.g. apache) is running as a different user than what you run under. Therefore the web handler doesn't have the permissions to ready the files in your home directory. It probably also lacks some impor... | [
1
] | [] | [] | [
"django",
"django_views",
"linux",
"python"
] | stackoverflow_0003750181_django_django_views_linux_python.txt |
Q:
Globally accessible object across all Celery workers / memory cache in Django
I have pretty standard Django+Rabbitmq+Celery setup with 1 Celery task and 5 workers.
The task uploads the same (I simplify a bit) big file (~100 MB) asynchronously to a number of remote PCs.
All is working fine, at the expense of using lots of memory, since every task/worker loads that big file into memory separately.
What I would like to do is to have some kind of cache, accessible to all tasks, i.e. load the file only once. Django caching based on locmem would be perfect, but as the documentation says, "each process will have its own private cache instance", and I need this cache accessible to all workers.
I tried to play with Celery signals as described in #2129820, but that's not what I need.
So the question is: is there a way I can define something global in Celery (like a class based on dict, where I could load the file or something)? Or is there a Django trick I could use in this situation?
Thanks.
A:
Why not simply stream the upload(s) from disk instead of loading the whole file in memory ?
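The streaming idea amounts to reading and sending the file in fixed-size chunks, so each worker only ever holds one chunk in memory instead of the whole ~100 MB file. A sketch (the `send` callback stands in for whatever transport uploads to the remote PC):

```python
def stream_file(path, send, chunk_size=1024 * 1024):
    """Read `path` in chunk_size pieces and hand each to `send`.

    Peak memory per worker is one chunk (1 MB by default) rather
    than the full file.
    """
    total = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:          # end of file
                break
            send(chunk)
            total += len(chunk)
    return total
```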
A:
It seems to me that what you need is a memcached backend for Django. That way each task in Celery will have access to it.
A:
Maybe you can use threads instead of processes for this particular task. Since threads all share the same memory, you only need one copy of the data in memory, but you still get parallel execution.
( this means not using Celery for this task )
| Globally accessible object across all Celery workers / memory cache in Django |
| [
"Why not simply stream the upload(s) from disk instead of loading the whole file in memory ?\n",
"It seems to me that what you need is memcached backed for django. That way each task in Celery will have access to it.\n",
"Maybe you can use threads instead of processes for this particular task. Since threads al... | [
2,
1,
0
] | [] | [] | [
"caching",
"celery",
"django",
"global_variables",
"python"
] | stackoverflow_0002500799_caching_celery_django_global_variables_python.txt |
Q:
Want to remove illegal tags from mp3 file using Python
How can I achieve this in my mp3 files?
Artist:www.xyz.com ----> Artist:
Artist:free downloads,free music,xyzhi.com ----> Artist:
Artist:Kurukuru Kan (Amma Na) - www.musicxyx.com - ® Danaa collections ® ----> Artist: Kurukuru Kan (Amma Na)
Artist: Nan Pogiren - - ® Danna collections ® ----> Artist:Nan Pogiren
I have been using Mutagen to access ID3 tags. How to manipulate the strings in the tags to achieve the above?
A:
First you'll need a library to understand the MP3 format and allow you to edit the tags, maybe: http://id3-py.sourceforge.net/
Beyond that, you'll just need to work on the string replacements.
For the ones you specified (including weird space requirements):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
EXPECTED = {
'Artist:www.xyz.com':'Artist:',
'Artist:free downloads,free music,xyzhi.com':'Artist:',
'Artist:Kurukuru Kan (Amma Na) - www.musicxyx.com - ® Danaa collections ®':'Artist: Kurukuru Kan (Amma Na)',
'Artist: Nan Pogiren - - ® Danna collections ®':'Artist:Nan Pogiren'}
import re
def process(instr):
assert instr.startswith("Artist:")
mo = re.match(r"^(Artist:)( ?)(.*?) - .*$",instr)
if mo:
spc = mo.group(2)
if spc == " ":
spc = ""
else:
spc = " "
return "Artist:"+spc+mo.group(3)
return "Artist:"
for (instr,outstr) in EXPECTED.iteritems():
print process(instr),outstr,process(instr) == outstr
assert process(instr) == outstr
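Tying this back to Mutagen: the cleanup can live in one small function applied to the tag value before saving. A sketch — the spam/trailer patterns below generalise the four examples above and are an assumption about what other junk looks like:

```python
import re

# Values that are pure spam: web addresses, "free music" keyword lists.
SPAM = re.compile(r'(www\.|\.com|free downloads|free music)', re.IGNORECASE)
# Everything from the first " - " onward is treated as an appended trailer.
TRAILER = re.compile(r'\s+-\s.*$')

def clean_artist(value):
    """Strip site/spam trailers from an ID3 artist value."""
    kept = TRAILER.sub('', value).strip()
    return '' if SPAM.search(kept) else kept

# With Mutagen (hypothetical file name):
# from mutagen.easyid3 import EasyID3
# tags = EasyID3('song.mp3')
# tags['artist'] = [clean_artist(tags['artist'][0])]
# tags.save()
```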
| Want to remove illegal tags from mp3 file using Python |
| [
"First you'll need a library to understand the MP3 format and allow you to edit the tags, maybe: http://id3-py.sourceforge.net/\nBeyond that, you'll just need to work on the string replacements.\nFor the ones you specified (including weird space requirements):\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nEXPECT... | [
0
] | [] | [] | [
"mp3",
"python",
"regex",
"string"
] | stackoverflow_0004283868_mp3_python_regex_string.txt |
Q:
check files for equality
What's the most elegant way to check two files for equality in Python?
Checksum? Byte comparison? Assume the files won't be larger than 100-200 MB.
A:
What about filecmp module? It can do file comparison in many different ways with different tradeoffs.
And even better, it is part of the standard library:
http://docs.python.org/library/filecmp.html
A:
use hashlib to get the md5 of each file, and compare the results.
#! /bin/env python
import hashlib

def filemd5(filename, block_size=2**20):
    f = open(filename, 'rb')  # binary mode, so bytes aren't mangled on Windows
    md5 = hashlib.md5()
    while True:
        data = f.read(block_size)
        if not data:
            break
        md5.update(data)
    f.close()
    return md5.digest()

if __name__ == "__main__":
    a = filemd5('/home/neo/todo')
    b = filemd5('/home/neo/todo2')
    print(a == b)
Update: As of Python 2.1 there is a filecmp module that does just what you want, and has methods to compare directories too. I never knew about this module, I'm still learning Python myself :-)
>>> import filecmp
>>> filecmp.cmp('undoc.rst', 'undoc.rst')
True
>>> filecmp.cmp('undoc.rst', 'index.rst')
False
A:
Ok, this might need two separate answers.
If you have many files to compare, go for the checksum and cache the checksum for each file. To be sure, compare matching files byte for byte afterwards.
If you have only two files, go directly for byte comparison because you have to read the file anyway to compute the checksum.
In both cases, use the file size as an early way of checking for inequality.
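The two-file case from this answer can be sketched as a chunked byte comparison with the size check as the early exit, so neither file is fully loaded:

```python
import os

def files_equal(path_a, path_b, chunk_size=64 * 1024):
    """True iff both files have byte-identical contents."""
    if os.path.getsize(path_a) != os.path.getsize(path_b):
        return False                      # cheap early exit on size
    with open(path_a, 'rb') as fa, open(path_b, 'rb') as fb:
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if a != b:
                return False
            if not a:                     # both files exhausted together
                return True
```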
A:
Before attempting any of the other solutions, you might want to do os.path.getsize(...) on both files.
If that differs, there is no need to compare bytes or calculate checksum.
Of course, this only helps if the filesize isn't fixed.
Example:
def foo(f1, f2):
if not os.path.getsize(f1) == os.path.getsize(f2):
return False # Or similar
... # Checksumming / byte-comparing / whatever
| check files for equality |
| [
"What about filecmp module? It can do file comparison in many different ways with different tradeoffs.\nAnd even better, it is part of the standard library:\nhttp://docs.python.org/library/filecmp.html\n",
"use hashlib to get the md5 of each file, and compare the results. \n#! /bin/env python\nimport hashlib\ndef... | [
9,
6,
4,
1
] | [
"I would do checksum with MD5 (for example) instead of byte comaprasion plus the date check and depend on you needs name check.\n",
"What about shelling out to cmp?\nimport commands\nstatus, output = commands.getstatusoutput(\"/usr/bin/cmp file1 file2\")\nif (status == 0):\n print \"files are same\"\nelif (statu... | [
-2,
-2
] | [
"equality",
"file",
"python"
] | stackoverflow_0004283639_equality_file_python.txt |
Q:
Do i need Node.js in Python like I would with PHP?
I have been using PHP for some time now. And I have been thinking about learning Node.js to go along with it to use the non blocking idea for creating an online game or app. There is quite a bit of info on using the two together. Using Node as part of the back end of a game could really speed up some aspects of the game, especially if the game allows users to play against each other in real time.
Well, lately I have also been looking into learning Python (yes I have a lot of time on my hands). There are many features about it over PHP which I really like. But for the use of Node.js to do the background work like I was considering with PHP, I cannot find much information at all. I have also noticed that Python does have some threading features. As I am still very new to Python's world, would I even need Node.js in Python? Can Python handle these kind of features that Node.js can? Or would there still be benefits to using Node, or would I actually need it.
As a side note, since I started looking up Python, I also discovered Twisted which seems to be another framework like Node. But Twisted is written in Python. So in either of the cases above would Twisted be better (aside from the fact that Twisted seems to have been out longer and is more stable than Node). I just mean in general is it worth using at all, either Node or Twisted? And if so is one better than the other with Python?
Sorry for the big question, but I'm just so unsure and n00b in this area. Thanks.
So as it stands, after reading the helpful answers, I see the following options:
PHP + JS
Python + Twisted
Python + PyJamas
Python + Node.js
Node.js
Twisted
I already know PHP and am comfortable with it, and am currently learning JS. This was a major possible route for me. But I was also leaning away from PHP to Python because of the general features of the language that I liked.
This option I thought might be more plausible than #3, using Twisted to handle the networking port to allow the player to play live with each other.
This just makes it so you don't have to learn JS, which to me doesn't seem like that big of a deal. I've already started studying it and it's not that hard to learn. But like was mentioned in a question, mixing things like ; and {} could potentially have some issues.
Like #2, but with Node.js. Mostly I see the addition of Node to handle the networking aspect to let the players be able to play in a live/real-time game. And the majority of the code would be in Python.
Sole Node.js was a consideration as well as it is the single language route. But it doesn't have the same benefits of learning and using Python either (it being a general scripting language I can use in, and out of web design. A big reason I wanted to learn and use it.).
Same as #5, but I was not considering a Twisted-only route until reading the comments. While it seems plausible, it doesn't really involve one of the two languages I want to learn, Python and Node.
The above seems to be the main routes I can go. Now I'm still not really sure which route to go. I really want to learn both Node and Python. So it seems I may just need to learn the two separately. But I still need to pick a choice for this project. And so far I am liking numbers 2 and 5, with 6 a close to 5 since Node and Twisted have some very similar functionality. And 1 as a mention because that's what I already know. But I was wanting to learn something new anyways. So still, really numbers 2 and 5. Or 4 as it's similar to 2. Hah, I still need to do some homework. Maybe it deserves another question.
EDIT (9-19-2012): I just wanted to update, to say that I am using mostly Node.js currently for development. And plan on using Redis for PubSub functionality to give the appearance of real time page updates, as I do not need true real time as in games, or in paired content editing.
A:
While Python can definitely be used for Asynchronous Programming, it doesn't feel natural, even with Twisted, if you compare it to Node.js it just doesn't look or feel that nice.
Since you're planing on doing a real-time Web Game, you'll most likely will end up using WebSockets.
WebSockets are based on HTTP and use the upgrade header to initiate the bi-directional connection, that means, that you can easily have both your normal Server and your WebSockets run on port 80, if you need a lot of fall backs to support older Browsers, then there's always the almighty Socket.IO.
Depending on how complicated your front-end will be I'd rather go with express.js or just write my own stuff.
Having both the front-end and the game in the same process has (obviously) a lot of advantages, you can fetch a lot of information without the need of having to query the database.
Another big "feature" is, that you don't have to context switch between the client logic, and the server's logic. That might seems like a small benefit at first, but besides the fact that you won't type ; in Python and don't forget your {} in JS after having worked continuously on either side for a couple of hours, you will also be able to re-use code between Server and Client. Again that might look like a small benefit at first, but good multi player games always run a lot of stuff on the client, just to compensate the lag, also Python and JavaScript are quite different in place so having to re-write portions of JS in Python takes time and may even introduce bugs.
(Now on to the shameless plugs...)
I've done 2 multi player games with Node.js already, although the have no HTTP front end both games run quite a lot of JS on the Client:
Multiplayer Asteroids/Geometry Wars Crossover
RTS Concept (a bit like Eufloria)
Also, while JSON seems to fit perfectly for sending the data between Browser and client, you'll soon find out that it uses a ton of bandwidth, since I encountered the same problem I've written some specialized library which saves up to 45% traffic:
BiSON.js
Again, having JavaScript both on the Server and the Client lets one re-use code and therefore save development time.
So to sum it all up I would strongly suggest to go with Node.js:
Re-usable code, less context switching therefore shorter development time
V8 is faster than Python in many cases.
No concurrency issues, everything is async by default.
Node.js is the next big thing, jump on the bandwagon now.
It's JavaScript! ;)
A:
I don't think it's so much that it's better because it's Python-on-Python, but because you can do both the game part and the web part in Twisted.
EDIT:
Also, Pyjamas.
A:
If you like callback-oriented programming, twisted and nodejs are the thing for you. Otherwise, you could take a look at gevent. It is similar to twisted/nodejs in that it is an asynchronous framework, but it allows you to write code just like you would do in a threaded approach.
It achieves this by doing coroutines-based magic behind the scenes.
A:
The whole point of using Node.js is its strengths, which are well documented at http://nodejs.org/#about. While you can certainly use a server-side language and a frontend stack for your needs, I think writing all code in one language will be a huge productivity boost.
If I were you, I'd attempt to write most of my code in one language as much as possible. So I do not think you should try to use Node.js together with Python (Twisted or Tornado); they seem to have some overlap.
Just imagine the coolness of writing all of your code in JavaScript. ;)
A:
It sounds to me like you're talking about having a system in place to do some kind of processing in the background that you want to do asynchronously. If that's the case you might consider using some kind of queueing system. This way you can put a message into the queue until it gets processed by a pool of worker processes.
Celery makes this pretty easy to do, but getting RabbitMQ (or another message broker) configured correctly might be a bit of a pain if you haven't done that before.
| Do i need Node.js in Python like I would with PHP? |
| [
"While Python can definitely be used for Asynchronous Programming, it doesn't feel natural, even with Twisted, if you compare it to Node.js it just doesn't look or feel that nice.\nSince you're planing on doing a real-time Web Game, you'll most likely will end up using WebSockets.\nWebSockets are based on HTTP and ... | [
9,
2,
2,
1,
0
] | [] | [] | [
"node.js",
"nonblocking",
"php",
"python",
"twisted"
] | stackoverflow_0004276205_node.js_nonblocking_php_python_twisted.txt |
Q:
How to use urllib2 to open an 'infinite' jpg?
I'm using urllib2 open to download some web pages. Unfortunately one page is an infinite stream (a set of live video frames) and urllib2 will not timeout because the open call succeeds, while the 'read' call hangs forever. Example code:
res = opener.open(encoded, timeout=timeout)
log('opened', url)
contents = res.read()
log('never get here')
Any tips on avoiding/killing these connections?
A:
This sounds like a job for generators!
Imagine you have an infinite text file... let's call it test.txt. Now open('test.txt').read() will hang the machine and eventually crash, so why not yield line by line from this infinite stream in a generator, e.g.
def yield_line(file):
with open(file) as inp:
for line in inp:
yield line
now when initialised yield_line becomes an iterable object so this becomes legal
out = open('out.txt', 'w')
for line in yield_line('test.txt'):
out.write(line.replace('1','2'))
Now consider that a URL can operate in much the same way as a file, so you can just yield line by line from the URL (note that in Python 2 the urllib response object is not a context manager itself, so wrap it in contextlib.closing):
import urllib
from contextlib import closing

def yield_url(url):
    with closing(urllib.urlopen(url)) as inp:
        for line in inp:
            yield line
Edit:
Timeout example
out = open('out.txt', 'w')
for count, line in enumerate(yield_line('test.txt')):
if count == 444: #timeout value :D
break
out.write(line.replace('1','2'))
A:
Using the generator approach mentioned by Jacob I integrated a "kill switch".
from datetime import datetime

starttime = datetime.now()
res = opener.open(url, timeout=timeout)
contents = ''
for item in res:
    contents += item
    if (datetime.now() - starttime).seconds > timeout:
        raise IOError('timeout')
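A complementary guard is to cap the number of bytes read, since an "infinite" MJPEG stream can defeat a pure wall-clock check if data arrives quickly. A sketch that works with any file-like response object:

```python
def read_capped(response, max_bytes=10 * 1024 * 1024, chunk_size=8192):
    """Read a file-like HTTP response, raising once max_bytes is exceeded."""
    chunks = []
    total = 0
    while True:
        chunk = response.read(chunk_size)
        if not chunk:                 # normal end of a finite response
            return b''.join(chunks)
        total += len(chunk)
        if total > max_bytes:
            raise IOError('response exceeded %d bytes' % max_bytes)
        chunks.append(chunk)
```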
A:
You should examine the header to detect if it is multipart, and then drop or read the content.
A:
You can put a timeout on the operation as a whole i.e. the function where those lines you mentioned are defined.
| How to use urllib2 to open an 'infinite' jpg? |
| [
"This sounds like a job for generators!\nImagine you have an infinate text file... lets call it test.txt now open('test.txt').read() will hang the machine and eventually crash, so why not yield line for line in this infinate stream in a generator e.g.\ndef yield_line(file):\n with open(file) as inp:\n for... | [
3,
3,
2,
0
] | [] | [] | [
"download",
"python",
"sockets",
"urllib2"
] | stackoverflow_0004284480_download_python_sockets_urllib2.txt |
Q:
Python (pygame): How can I delete a line?
I'm using pygame to create a small scene. Right now, I'm working with lines.
I have an array of lines which are drawn to the screen and when a line is deleted from the array, I would like the line to disappear from the screen.
The problem I've found is that the line is drawn on the screen and remains static. I can't find a way to reset the screen (I'm using a JPEG as the background).
Is there a way to remove a drawn line from the screen?
Thanks
A:
Although it does not seem very efficient, I think that the easiest and also the best way of doing it is by redrawing everything. In many many games the screen is totally redrawn at every frame, even without 3D cards (remember the old Doom games?). So drawing a few lines over a background will be very fast, even in python.
I would imagine something like that:
import pygame
import random
SCREEN_WIDTH = 320
SCREEN_HEIGHT = 200
class Line(object):
def __init__(self, start_pos, end_pos, color, width):
object.__init__(self)
self.start_pos = start_pos
self.end_pos = end_pos
self.color = color
self.width = width
def CreateRandomLine():
rnd = random.randrange
start_pos = (rnd(SCREEN_WIDTH), rnd(SCREEN_HEIGHT))
end_pos = (rnd(SCREEN_WIDTH), rnd(SCREEN_HEIGHT))
color = (rnd(255), rnd(255), rnd(255))
width = rnd(10) + 1
return Line(start_pos, end_pos, color, width)
def DrawScene(screen_surface, background_image, lines):
screen_surface.blit(background_image, (0, 0))
for line in lines:
pygame.draw.line(screen_surface, line.color, \
line.start_pos, line.end_pos, line.width)
pygame.init()
screen_surface = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))
background_image = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT))
background_image.fill((200, 100, 200)) # I kinda like purple.
# Alternatively, if you have a file for your background:
# background_image = pygame.image.load('background.png')
# background_image.convert()
lines = []
for i in range(10):
lines.append(CreateRandomLine())
for frame_id in range(10):
del lines[0] # Remove the oldest line, the one at index 0.
lines.append(CreateRandomLine()) # Add a new line.
DrawScene(screen_surface, background_image, lines)
pygame.display.flip()
pygame.time.wait(1000) # Wait one second between frames.
This script displays random lines on a background. There are 10 frames, each frame lasts one second. Between each frame, the first line is removed from the list of lines and a new line is added.
Just remove the pygame.time.wait and see how fast it goes :D.
A:
If you use screen.fill([0, 0, 0]) it will fill the whole surface with that colour, erasing anything drawn on it.
You can then re-blit your background image and redraw only the lines you still want.
A:
You'll probably need to run the command to update the display, as explained in the documentation. The command to update the full display surface is:
pygame.display.update()
It may be possible to pass an argument to the function to only update the required part, which would be more efficient, but without the full code I can't tell you what that argument would be.
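For the partial-update route, pygame.display.update() accepts a rectangle (or a list of them), so you only repaint the region the deleted line occupied. The rect arithmetic can be kept pygame-free; the commented lines show the assumed usage against an existing screen and background surface:

```python
def dirty_rect(start_pos, end_pos, width):
    """Bounding box (x, y, w, h) of a line segment, padded for stroke width."""
    xs = (start_pos[0], end_pos[0])
    ys = (start_pos[1], end_pos[1])
    return (min(xs) - width, min(ys) - width,
            abs(xs[0] - xs[1]) + 2 * width,
            abs(ys[0] - ys[1]) + 2 * width)

# Assumed usage (screen and background are existing pygame surfaces):
# rect = pygame.Rect(dirty_rect(line.start_pos, line.end_pos, line.width))
# screen.blit(background, rect, area=rect)  # restore just that patch
# pygame.display.update(rect)               # refresh only the dirty region
```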
| Python (pygame): How can I delete a line? |
| [
"Although it does not seem very efficient, I think that the easiest and also the best way of doing it is by redrawing everything. In many many games the screen is totally redrawn at every frame, even without 3D cards (remember the old Doom games?). So drawing a few lines over a background will be very fast, even ... | [
4,
1,
0
] | [] | [] | [
"line",
"pygame",
"python"
] | stackoverflow_0004276342_line_pygame_python.txt |
Q:
Python: Recursion
although there's quite a bit of information about recursion on the web, I haven't found anything that I was able to apply to my problem. I am still very new to programming so please excuse me if my question is rather trivial.
Thanks for helping out :)
This is what I want to end up with:
listVariations(listOfItems, numberOfDigits)
>>> listVariations(['a', 'b', 'c'], 1)
>>> ['a', 'b', 'c']
>>> listVariations(['a', 'b', 'c'], 2)
>>> ['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
>>> listVariations(['a', 'b', 'c'], 3)
>>> ['aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']
but so far I was only able to come up with a function where I need to specify/know the number of digits in advance. This is ugly and wrong:
list = ['a', 'b', 'c']
def listVariations1(list):
variations = []
for i in list:
variations.append(i)
return variations
def listVariations2(list):
variations = []
for i in list:
for j in list:
variations.append(i+j)
return variations
def listVariations3(list):
variations = []
for i in list:
for j in list:
for k in list:
variations.append(i+j+k)
return variations
oneDigitList = listVariations1(list)
twoDigitList = listVariations2(list)
threeDigitList = listVariations3(list)
This is probably very easy, but I couldn't come up with a good way to concatenate the strings when the function calls itself.
Thanks for your effort :)
A:
You can use the product() function in itertools
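A sketch of that approach; product(items, repeat=n) yields exactly the ordered variations the question asks for:

```python
from itertools import product

def listVariations(items, n):
    # Every n-length ordered variation with repetition.
    return [''.join(p) for p in product(items, repeat=n)]

print(listVariations(['a', 'b', 'c'], 2))
# ['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
```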
A:
from itertools import combinations_with_replacement
Note that this yields only the sorted combinations ('aa', 'ab', 'ac', 'bb', 'bc', 'cc'); it never produces 'ba', so for the exact output in the question use itertools.product instead.
A:
You can use itertools.permutations for this, though note that it never repeats an element, so 'aa' will not appear in the output.
from itertools import permutations
def listVariations(listOfItems, numberOfDigits):
return [''.join(x) for x in permutations(listOfItems, numberOfDigits)]
If you do want to implement something similar with recursive function call, you can do it like this:
def permute(seq, n):
for i in xrange(len(seq)):
head, tail = seq[i:i+1], seq[0:i]+seq[i+1:]
if n == 1:
yield head
else:
if tail:
for sub_seq in permute(tail, n-1):
yield head + sub_seq
else:
yield head
a_list = ['a', 'b', 'c']
list(permute(''.join(a_list), 2))
a_str = 'abc'
list(permute(a_str, 2))
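Since the question asks about recursion specifically, here is one possible recursive sketch that produces the same variations-with-repetition output as the desired listVariations:

```python
def list_variations(items, n):
    # Base case: a single empty string.  Recursive case: prepend each
    # item to every (n-1)-length variation of the same items.
    if n == 0:
        return ['']
    return [head + tail
            for head in items
            for tail in list_variations(items, n - 1)]

print(list_variations(['a', 'b', 'c'], 2))
# ['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
```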
| Python: Recursion | although there's quite a bit of information about recursion on the web, I haven't found anything that I was able to apply to my problem. I am still very new to programming so please excuse me if my question is rather trivial.
Thanks for helping out :)
This is what I want to end up with:
listVariations(listOfItems, numberOfDigits)
>>> listVariations(['a', 'b', 'c'], 1)
>>> ['a', 'b', 'c']
>>> listVariations(['a', 'b', 'c'], 2)
>>> ['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
>>> listVariations(['a', 'b', 'c'], 3)
>>> ['aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']
but so far I was only able to come up with a function where I need to specify/know the number of digits in advance. This is ugly and wrong:
list = ['a', 'b', 'c']
def listVariations1(list):
variations = []
for i in list:
variations.append(i)
return variations
def listVariations2(list):
variations = []
for i in list:
for j in list:
variations.append(i+j)
return variations
def listVariations3(list):
variations = []
for i in list:
for j in list:
for k in list:
variations.append(i+j+k)
return variations
oneDigitList = listVariations1(list)
twoDigitList = listVariations2(list)
threeDigitList = listVariations3(list)
This is probably very easy, but I couldn't come up with a good way to concatenate the strings when the function calls itself.
Thanks for your effort :)
| [
"You can use the product() function in itertools\n",
"from itertools import combinations_with_replacement\n\nThis function does exactly what you want.\n",
"You can use itertools.permutations to do this.\nfrom itertools import permutations\n\ndef listVariations(listOfItems, numberOfDigits):\n return [''.join(... | [
5,
0,
0
] | [] | [] | [
"concatenation",
"python",
"recursion"
] | stackoverflow_0004284719_concatenation_python_recursion.txt |
Q:
Which is the best python framework for developing Facebook application as on now
We are developing Facebook applications in Python. My query is: which is the best Python framework for developing Facebook applications as of now, one which will run easily with GAE as well as with Django?
Thank you very much.
** edit **
I think my query was misunderstood. We are looking for the best Python-based Facebook framework which can easily be used in Django applications as well as GAE applications.
It would be good if you could provide reasons for recommending a particular framework.
I DON'T WANT TO RUN DJANGO ON GAE.
A:
Let's remove the facebook buzzword from the equation. If you need/want to leverage google's infrastructure, you use appengine. There are projects for django that allow you to access appengine services at a higher level of abstraction such as the datastore, blobstore etc...
http://www.allbuttonspressed.com/projects/django-nonrel
I wouldn't call it the best solution, but if the Django style of development is what you are comfortable with, it's a good option. I personally prefer using Google's model API for the ORM and using something else for routing requests to views. bfg/pyramid is a good option if your routing fits the notion of an object graph, and you can get instance-level security fairly easily that way, if that's something you need.
conclusion: it all depends on what you need to do. :)
| Which is the best python framework for developing Facebook application as on now | We are developing Facebook applications in Python. My query is: which is the best Python framework for developing Facebook applications as of now, one which will run easily with GAE as well as with Django?
Thank you very much.
** edit **
I think my query was misunderstood. We are looking for the best Python-based Facebook framework which can easily be used in Django applications as well as GAE applications.
It would be good if you could provide reasons for recommending a particular framework.
I DON'T WANT TO RUN DJANGO ON GAE.
| [
"Let's remove the facebook buzzword from the equation. If you need/want to leverage google's infrastructure, you use appengine. There are projects for django that allow you to access appengine services at a higher level of abstraction such as the datastore, blobstore etc... \nhttp://www.allbuttonspressed.com/projec... | [
0
] | [] | [] | [
"django",
"facebook",
"facebook_graph_api",
"python"
] | stackoverflow_0004282855_django_facebook_facebook_graph_api_python.txt |
Q:
Grabbing all files in a query using Python
I'm processing a large number of files using Python. The data files are related to each other through their file names.
If I were to use a CMD command to do this (in Windows), it would look something like:
DIR filePrefix_??.txt
And this would return all the file names I would need for that group.
Is there a similar function that I can use in Python?
A:
Have a look at the glob module.
glob.glob("filePrefix_??.txt")
returns a list of matching file names.
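A self-contained sketch of the glob approach (the file names are created in a temporary directory purely for the demo):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ('filePrefix_01.txt', 'filePrefix_02.txt', 'other.txt'):
        open(os.path.join(d, name), 'w').close()
    # `?` matches exactly one character, just like in DIR patterns.
    matches = sorted(glob.glob(os.path.join(d, 'filePrefix_??.txt')))
    names = [os.path.basename(m) for m in matches]

print(names)
# ['filePrefix_01.txt', 'filePrefix_02.txt']
```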
| Grabbing all files in a query using Python | I'm processing a large number of files using Python. The data files are related to each other through their file names.
If I were to use a CMD command to do this (in Windows), it would look something like:
DIR filePrefix_??.txt
And this would return all the file names I would need for that group.
Is there a similar function that I can use in Python?
| [
"Have a look at the glob module.\nglob.glob(\"filePrefix_??.txt\")\n\nreturns a list of matching file names.\n"
] | [
5
] | [] | [] | [
"python"
] | stackoverflow_0004285567_python.txt |
Q:
Django forms
I had asked a question pertaining to this. But I think it would be better to ask my question directly.
I have a "User" table with manytomany relationship with two other tables "Domain" and "Groups".
So in the admin interface I see the Groups and Domains as 2 ModelMultipleChoiceFields.
But I want to present them on the UI in a more user friendly way.
I would like to show each available choice in Domain and Group with a checkbox that is selected/unselected depending on the User properties.
Was wondering if I can do this in admin or I need to write my own view independent of admin.
A:
I think the built-in widget CheckboxSelectMultiple does what you want. If it doesn't, you're going to have to create your own widget. The documentation for widgets is a good place to start. The easiest way to start is by copying an existing widget from the Django source and modifying it.
A:
Well, to be precise, it's the widget that Django admin chooses to show for a ManyToManyField.
Here, in this case, it's the SelectMultiple widget, which you feel is less user friendly.
The easy part is, you can always choose the widget while using your own ModelForm.
But in case you want that in the Django admin, you need a roundtrip. Check this out.
from django.forms import widgets
from django.contrib import admin
class MyModelAdmin(admin.ModelAdmin):
def formfield_for_manytomany(self, db_field, request=None, **kwargs):
"""
Get a form Field for a ManyToManyField.
"""
# If it uses an intermediary model, don't show field in admin.
if db_field.rel.through is not None:
return None
if db_field.name in self.raw_id_fields:
kwargs['widget'] = admin.widgets.ManyToManyRawIdWidget(db_field.rel)
kwargs['help_text'] = ''
elif db_field.name in (list(self.filter_vertical) + list(self.filter_horizontal)):
kwargs['widget'] = admin.widgets.FilteredSelectMultiple(db_field.verbose_name, (db_field.name in self.filter_vertical))
else:
kwargs['widget'] = widgets.CheckboxSelectMultiple()
kwargs['help_text'] = ''
return db_field.formfield(**kwargs)
Now, define your routine admin in admin.py as
class SomeModelAdmin(MyModelAdmin):
search_fields = ['foo', 'bar']
list_display = ('foo',)
ordering = ('-bar',)
admin.site.register(SomeModel, SomeModelAdmin)
You will get checkboxes in DJango Admin now. Of course you will need some CSS changes.
A:
The admin actually uses a ModelForm by default, so you need to override it.
from django import forms
from django.forms import widgets
class DomainForm(forms.ModelForm):
field2 = YourField(widget=widgets.CheckboxSelectMultiple)
class Meta:
model = Domain
fields = ('field1', 'field2')
So, in this case I have overridden the default field2 field type.
A:
I'm not completely sure I understand what you are attempting to do, but perhaps something like filter_horizontal would do what you want.
A:
You can change django admin interface field widget
from django.forms import widgets
class UserAdmin(admin.ModelAdmin):
model = User
def formfield_for_manytomany(self, db_field, request=None, **kwargs):
if db_field.name == 'domain' or db_field.name == 'groups':
kwargs['widget'] = widgets.CheckboxSelectMultiple()
# or just make all the manytomany fields as checkbox
kwargs['widget'] = widgets.CheckboxSelectMultiple()
return db_field.formfield(**kwargs)
# for other field
def formfield_for_dbfield(self, db_field, **kwargs):
.....
return super(UserAdmin, self).formfield_for_dbfield(db_field, **kwargs)
admin.site.register(User, UserAdmin)
| Django forms | I had asked a question pertaining to this. But I think it would be better to ask my question directly.
I have a "User" table with manytomany relationship with two other tables "Domain" and "Groups".
So in the admin interface I see the Groups and Domains as 2 ModelMultipleChoiceFields.
But I want to present them on the UI in a more user friendly way.
I would like to show each available choice in Domain and Group with a checkbox that is selected/unselected depending on the User properties.
Was wondering if I can do this in admin or I need to write my own view independent of admin.
| [
"I think the built-in widget CheckboxSelectMultiple does what you want. If it doesn't, you're going to have to create your own widget. The documentation for widgets is a good place to start. The easiest way to start is by copying an existing widget from the Django source and modifying it.\n",
"Well to be precise ... | [
2,
2,
1,
0,
0
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0000627772_django_django_admin_python.txt |
Q:
Google App Engine - How to get a list of all app administrators?
I want to programmatically retrieve a list of all the app administrators from within the app. However, I've found no APIs in User Service section that can accomplish this. Is there any way or any undocumented API to do this?
A:
No, there is not an API to programmatically get a list of administrators for your app. However, you could create an augmented user model which includes extra information like this.
I'm not sure why you need a list of all admins, but if you just wanted to e-mail them then you could use the send_mail_to_admins() function.
| Google App Engine - How to get a list of all app administrators? | I want to programmatically retrieve a list of all the app administrators from within the app. However, I've found no APIs in User Service section that can accomplish this. Is there any way or any undocumented API to do this?
| [
"No, there is not an API to programatically get a list of administrators for your app. However, you could create an augmented user model which includes extra information like this.\nI'm not sure why you need a list of all admins, but if you just wanted to e-mail them then you could use the send_mail_to_admins() fu... | [
2
] | [] | [] | [
"django",
"google_app_engine",
"python"
] | stackoverflow_0004285723_django_google_app_engine_python.txt |
Q:
Test out small snippets of Django code
I am still in the development phase of a Django App. Before even writing my views.py, I test them out to see if my models are correctly defined. This I do it in the terminal by invoking
python manage.py shell
But oh so often I make some syntax error prompting me to abort the shell ctrl-D & retype everything. This process is taking forever. It would be better if I could write all this in some file just for my trials & if all's well copy it to views.py.
What's the process for this? Is it as simple as creating a trial.py in my app directory. Won't I have to import the Django env? What's the best way to do this?
A:
How about writing unit tests? You can execute them easily with one command. You can probably get started by reading the django manual chapter on testing
A:
By all means create a trial.py for simple experimentation, then after doing
python manage.py shell
you can do
>>> import trial
and then invoke the code in trial, directly from the prompt, e.g.
trial.myfunc()
If you need to change things you can just save your changed trial.py and do
reload(trial)
Of course, you may need to recreate any existing objects in the interactive session in order to make use of your changes.
This should be seen as complementary to writing unit tests (as per Jani's answer), but I do find this approach useful for trying things out using iterative refinement.
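One way to picture the save-and-reload cycle from that answer (Python 3, where reload moved to importlib; the trial module is written to a temporary directory purely for this demo, and bytecode caching is disabled so the edited source is always recompiled):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True        # always recompile from source
scratch = tempfile.mkdtemp()
path = os.path.join(scratch, 'trial.py')

with open(path, 'w') as f:
    f.write('def myfunc():\n    return 1\n')
sys.path.insert(0, scratch)

import trial
first = trial.myfunc()

# "Edit" trial.py, then reload instead of restarting the shell.
with open(path, 'w') as f:
    f.write('def myfunc():\n    return 2\n')
trial = importlib.reload(trial)
second = trial.myfunc()

print(first, second)                  # 1 2
```

In an actual manage.py shell session the file would simply sit next to your project code; only the reload call is needed after each edit.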
A:
Create your file-for-tests in the Django project directory and add the path to your project to sys.path:
import os
import sys
sys.path.append(os.path.realpath(os.path.dirname(__file__)))
After that you'll be able to import any module from the project (e.g. models from models.py or just functions from views.py) and use your favorite IDE with it's editor and shell.
A:
For simpler cases you can use django-extensions add-on that contains shell_plus management command that is similar to the standard shell command but preloads all models.
A:
You need to set the module used for settings.
A:
I also had this problem before.
Maybe you could install IPython, which has a magic command called %save.
It will save what you typed during the session into a file.
IPython is a very charming tool which can take the place of the standard Python prompt perfectly. It also has other wonderful features!
And in Django, if you have IPython installed, running python manage.py shell will invoke IPython directly.
| Test out small snippets of Django code | I am still in the development phase of a Django App. Before even writing my views.py, I test them out to see if my models are correctly defined. This I do it in the terminal by invoking
python manage.py shell
But oh so often I make some syntax error prompting me to abort the shell ctrl-D & retype everything. This process is taking forever. It would be better if I could write all this in some file just for my trials & if all's well copy it to views.py.
What's the process for this? Is it as simple as creating a trial.py in my app directory. Won't I have to import the Django env? What's the best way to do this?
| [
"How about writing unit tests? You can execute them easily with one command. You can probably get started by reading the django manual chapter on testing\n",
"By all means create a trial.py for simple experimentation, then after doing\npython manage.py shell\n\nyou can do\n>>> import trial\n\nand then invoke the ... | [
5,
2,
1,
1,
1,
0
] | [] | [] | [
"django",
"django_models",
"django_testing",
"django_views",
"python"
] | stackoverflow_0004279885_django_django_models_django_testing_django_views_python.txt |
Q:
pythonic way to collect specific data from complex dict
I need to collect some data from a complex dict based on dot-notation key names.
for example
sample data
data = {
'name': {'last': 'smith', 'first': 'bob'},
'address':{'city': 'NY', 'state': 'NY'},
'contact':{'phone':{'self':'1234', 'home':'222'}},
'age':38,
'other':'etc'
}
keys = ['contact.phone.self', 'name.last', 'age']
my logic
result = []
for rev_key in keys:
current = data.copy()
rev_key = rev_key.split('.')
while rev_key:
value = rev_key.pop(0)
current = current[value]
result.append(current)
Thanks in advance!
A:
[reduce(dict.get, key.split("."), data) for key in keys]
A:
How about this?
def fetch( some_dict, key_iter ):
for key in key_iter:
subdict= some_dict
for field in key.split('.'):
subdict = subdict[field]
yield subdict
a_dict = {
'name': {'last': 'smith', 'first': 'bob'},
'address':{'city': 'NY', 'state': 'NY'},
'contact':{'phone':{'self':'1234', 'home':'222'}},
'age':38,
'other':'etc'
}
keys = ['contact.phone.self', 'name.last', 'age']
result = list( fetch( a_dict, keys ) )
A:
Here's my crack at it:
>>> def find(tree,cur):
if len(cur)==1:
return tree[cur[0]]
else:
return find(tree[cur[0]],cur[1:])
>>> print [find(data,k.split(".")) for k in keys]
['1234', 'smith', 38]
Of course this will cause stack overflows if the items are too deeply nested (unless you explicitly raise the recursion depth), and I would use a deque instead of a list if this were production code.
A:
Just write a function that gets one key at a time
def getdottedkey(data, dottedkey):
for key in dottedkey.split('.'):
data = data[key]
return data
print [getdottedkey(data, k) for k in keys]
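On Python 3 the reduce builtin used in the first answer moved to functools; a self-contained sketch of the dotted-key lookup over the sample data (using d[k] rather than dict.get so a bad key fails loudly):

```python
from functools import reduce  # reduce lives in functools on Python 3

data = {
    'name': {'last': 'smith', 'first': 'bob'},
    'contact': {'phone': {'self': '1234', 'home': '222'}},
    'age': 38,
}
keys = ['contact.phone.self', 'name.last', 'age']

# Walk one level deeper for each dotted component.
result = [reduce(lambda d, k: d[k], key.split('.'), data) for key in keys]
print(result)
# ['1234', 'smith', 38]
```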
| pythonic way to collect specific data from complex dict | I need to collect some data from a complex dict based on dot-notation key names.
for example
sample data
data = {
'name': {'last': 'smith', 'first': 'bob'},
'address':{'city': 'NY', 'state': 'NY'},
'contact':{'phone':{'self':'1234', 'home':'222'}},
'age':38,
'other':'etc'
}
keys = ['contact.phone.self', 'name.last', 'age']
my logic
result = []
for rev_key in keys:
current = data.copy()
rev_key = rev_key.split('.')
while rev_key:
value = rev_key.pop(0)
current = current[value]
result.append(current)
Thanks in advance!
| [
"[reduce(dict.get, key.split(\".\"), data) for key in keys]\n\n",
"How about this?\ndef fetch( some_dict, key_iter ):\n for key in key_iter:\n subdict= some_dict\n for field in key.split('.'):\n subdict = subdict[field]\n yield subdict\n\na_dict = {\n 'name': {'last': 'smith', ... | [
4,
0,
0,
0
] | [] | [] | [
"coding_style",
"python"
] | stackoverflow_0004095386_coding_style_python.txt |
Q:
Rendering json as dictionary in template
I have a JSON object returned from server. It looks like this :
{"1":{"id":"1","name":"autos"},
"2":{"id":"2","name":"business"},
"3":{"id":"3","name":"cities"},
"4":{"id":"4","name":"drama"},
"5":{"id":"5","name":"movies"},
"6":{"id":"6","name":"finance"},
"7":{"id":"7","name":"electronics"}}
So I'm rendering a template as a string with my JSON included :
<h3>Ugly, raw list. Yuck !</h3>
1: {{ interests }}
<ul>
{% for k,v in interests.items %}
<li>{{k}}. - {{ v }}</li>
{% endfor %}
</ul>
template_name = 'socialauth/interests.html'
html = render_to_string(template_name, RequestContext(request, {'interests': ResultDict,}))
and as a result I'm getting :
<h3>Ugly, raw list. Yuck !</h3>
1: {"1":{"id":"1","name":"autos"},"2":{"id":"2","name":"business"},"3":{"id":"3","name":"cities"},"4":{"id":"4","name":"drama"},"5":{"id":"5","name":"movies"},"6":{"id":"6","name":"finance"},"7":{"id":"7","name":"electronics"}}
<ul>
</ul>
So it looks like my {{ interests }} variable is not treated as a dictionary. But why? What's more, now I'm including the rendered list in the parent template, which is also rendered as a string (because I'm loading it with ajax). And the final result looks as follows:
template:
<div class="connect-twitter" style="background:#f8f8f8">
<div id="likes-list">
{{ likes|safe }}
</div>
<a href="#" class="submit-step-2">Proceed</a>
</div>
result:
Content-Type: text/html; charset=utf-8
{"html": "<h3>Ugly, raw list. Yuck !</h3>\n\n1: {"1":{"id":"1","name":"autos"},"2":{"id":"2","name":"business"},"3":{"id":"3","name":"cities"},"4":{"id":"4","name":"drama"},"5":{"id":"5","name":"movies"},"6":{"id":"6","name":"finance"},"7":{"id":"7","name":"electronics"}}\n\n<ul>\n \n</ul>"}
And when this code is inserted into html it looks just awful :
http://img204.imageshack.us/img204/3858/listaxv.png
What the hell? Why isn't it rendering normally as a string, and why is some 'Content-Type' header added?
A:
It looks like the template variable interests is just a string with the JSON response. The string gets escaped in the template; that's why you end up with all the ". Check whether the response from the server is correctly parsed.
To verify the type, you can use the type class, i.e. type(ResultDict).
A:
Do you do any conversion on the response, like $.parseJSON(string) or eval(string), to convert the response to a JS object?
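If interests really is still a raw JSON string, one likely fix is to parse it in the view before rendering, so the template sees a real dict and .items works in the {% for %} tag; a sketch with a shortened copy of the data:

```python
import json

raw = '{"1": {"id": "1", "name": "autos"}, "2": {"id": "2", "name": "business"}}'

interests = json.loads(raw)           # now a real dict, not a string
names = sorted(v['name'] for v in interests.values())
print(type(interests).__name__, names)
# dict ['autos', 'business']
```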
| Rendering json as dictionary in template | I have a JSON object returned from server. It looks like this :
{"1":{"id":"1","name":"autos"},
"2":{"id":"2","name":"business"},
"3":{"id":"3","name":"cities"},
"4":{"id":"4","name":"drama"},
"5":{"id":"5","name":"movies"},
"6":{"id":"6","name":"finance"},
"7":{"id":"7","name":"electronics"}}
So I'm rendering a template as a string with my JSON included :
<h3>Ugly, raw list. Yuck !</h3>
1: {{ interests }}
<ul>
{% for k,v in interests.items %}
<li>{{k}}. - {{ v }}</li>
{% endfor %}
</ul>
template_name = 'socialauth/interests.html'
html = render_to_string(template_name, RequestContext(request, {'interests': ResultDict,}))
and as a result I'm getting :
<h3>Ugly, raw list. Yuck !</h3>
1: {"1":{"id":"1","name":"autos"},"2":{"id":"2","name":"business"},"3":{"id":"3","name":"cities"},"4":{"id":"4","name":"drama"},"5":{"id":"5","name":"movies"},"6":{"id":"6","name":"finance"},"7":{"id":"7","name":"electronics"}}
<ul>
</ul>
So it looks like my {{ interests }} variable is not treated as a dictionary. But why? What's more, now I'm including the rendered list in the parent template, which is also rendered as a string (because I'm loading it with ajax). And the final result looks as follows:
template:
<div class="connect-twitter" style="background:#f8f8f8">
<div id="likes-list">
{{ likes|safe }}
</div>
<a href="#" class="submit-step-2">Proceed</a>
</div>
result:
Content-Type: text/html; charset=utf-8
{"html": "<h3>Ugly, raw list. Yuck !</h3>\n\n1: {"1":{"id":"1","name":"autos"},"2":{"id":"2","name":"business"},"3":{"id":"3","name":"cities"},"4":{"id":"4","name":"drama"},"5":{"id":"5","name":"movies"},"6":{"id":"6","name":"finance"},"7":{"id":"7","name":"electronics"}}\n\n<ul>\n \n</ul>"}
And when this code is inserted into html it looks just awful :
http://img204.imageshack.us/img204/3858/listaxv.png
What the hell? Why isn't it rendering normally as a string, and why is some 'Content-Type' header added?
| [
"It looks like the template variable interests is just a string with the json response. The string gets escaped in the template, that's why you end up with all the \". Check if the response from the server is correctly parsed.\nTo verify the type, you can use the type class, i.e. type(ResultDict).\n",
"Do you do ... | [
0,
0
] | [] | [] | [
"dictionary",
"django",
"html",
"json",
"python"
] | stackoverflow_0004284890_dictionary_django_html_json_python.txt |
Q:
python how to fetch these string
text=u'<a href="#5" accesskey="5"></a><a href="#1" accesskey="1"><font color="#667755">\ue689</font></a><a href="#2" accesskey="2"><font color="#667755">\ue6ec</font></a><a href="#3" accesskey="3"><font color="#667755">\ue6f6</font></a>'
I am new to Python.
I want to get \ue689, \ue6ec and \ue6f6; how can I fetch these strings using the re module?
Thank you very much!
A:
Regexp is not good tool to work with HTML. Use the Beautiful Soup.
A:
>>> from BeautifulSoup import BeautifulSoup
>>> text=u'<a href="#5" accesskey="5"></a><a href="#1" accesskey="1"><font color="#667755">\ue689</font></a><a href="#2" accesskey="2"><font color="#667755">\ue6ec</font></a><a href="#3" accesskey="3"><font color="#667755">\ue6f6</font></a>'
>>> t = BeautifulSoup(text)
>>> t.findAll(text=True)
[u'\ue689', u'\ue6ec', u'\ue6f6']
A:
Don't use regular expressions to parse HTML. Use BeautifulSoup. Documentation for BeautifulSoup.
A:
If you know that the page will always have that format, use BeautifulSoup parser to find what you need in HTML.
However, sometimes BeautifulSoup may break due to malformed HTML. I'd suggest you use lxml, which is a Python binding of libxml2. It will parse, and usually correct, the malformed HTML.
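That said, for this one fixed snippet a plain re pattern does work; a sketch (a real parser remains the safer tool for arbitrary HTML):

```python
import re

text = (u'<a href="#5" accesskey="5"></a>'
        u'<a href="#1" accesskey="1"><font color="#667755">\ue689</font></a>'
        u'<a href="#2" accesskey="2"><font color="#667755">\ue6ec</font></a>'
        u'<a href="#3" accesskey="3"><font color="#667755">\ue6f6</font></a>')

# Capture whatever sits between each <font ...> tag and its </font>.
found = re.findall(u'<font[^>]*>([^<]*)</font>', text)
print(found)
# ['\ue689', '\ue6ec', '\ue6f6']
```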
| python how to fetch these string | text=u'<a href="#5" accesskey="5"></a><a href="#1" accesskey="1"><font color="#667755">\ue689</font></a><a href="#2" accesskey="2"><font color="#667755">\ue6ec</font></a><a href="#3" accesskey="3"><font color="#667755">\ue6f6</font></a>'
I am new to Python.
I want to get \ue689, \ue6ec and \ue6f6; how can I fetch these strings using the re module?
Thank you very much!
| [
"Regexp is not good tool to work with HTML. Use the Beautiful Soup.\n",
">>> from BeautifulSoup import BeautifulSoup\n>>> text=u'<a href=\"#5\" accesskey=\"5\"></a><a href=\"#1\" accesskey=\"1\"><font color=\"#667755\">\\ue689</font></a><a href=\"#2\" accesskey=\"2\"><font color=\"#667755\">\\ue6ec</font></a><a h... | [
2,
2,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0004283155_python.txt |
Q:
pygame: custom classes inheriting from pygame.Surface
I'm playing with pygame for the first time (and am kind of a newb about python in general), and wondering if anyone could help me with this...
I'm making a little shootem-up game and want to be able to create a class for bad guys. My thought was that the class should inherit from pygame.Surface, but that's giving me all kinds of problems (def, could be me screwing up basic inheritance/class stuff). For example, why doesn't this work(pygame, screen, etc all work fine and are used in other parts of the code, I'm just trying to move functions which I already have working into a class):
class Zombie(pygame.Surface):
x_pos = y_pos = 0
def __init__(self, x, y):
#create zombie
self = pygame.image.load('zombie_1.png')
self = pygame.transform.scale(self,(50, 50))
x_pos = x
y_pos = y
zombie = Zombie(screen.get_width()/3, screen.get_height()/3)
screen.blit(zombie, (zombie.x_pos, zombie.y_pos))
The above produces an error: "pygame.error: display Surface quit"
edit: apparently this is a result of calling a Surface after pygame.display.quit() has been called. Anyone with any experience in pygame want to take a swing at this?
Entire code:
#Initialize
import pygame, sys
pygame.init()
#classes
class Zombie(pygame.Surface):
x_pos = y_pos = 0
def __init__(self, x, y):
#create zombie
self = pygame.image.load('zombie_1.png')
self = pygame.transform.scale(self,(50, 50))
x_pos = x
y_pos = y
def is_hit(mouse_pos):
#grab variables
(mouseX, mouseY) = mouse_pos
print "\nboxW_x, y = " + str(self.x) + ", " + str(self.y) + "\nmouseX, Y = " + str(mouseX) + ", " + str(mouseY)
headshot_x = self.x + (.5 * zombie.get_width())
headshot_y = self.y + (.25 * zombie.get_height())
margin_of_error_x = (zombie.get_width()/float(1000)) * zombie.get_width()
margin_of_error_y = (zombie.get_height()/float(1000)) * zombie.get_height()
print "Headshot_x: " + str(headshot_x) + ", " + str(headshot_y)
print "diff in headshot and actual shot: " + str(mouseX - headshot_x) + ", " + str(mouseY - headshot_y)
print "margin of error x = " + str(margin_of_error_x) + " y = " + str(margin_of_error_y)
print "zombie size: " + str(zombie.get_size())
valid_x = valid_y = False
if abs(mouseX-headshot_x) < margin_of_error_x:
valid_x = True
print "valid x"
if abs(mouseY-headshot_y) < margin_of_error_y:
valid_y = True
print "valid y"
return (valid_x and valid_y)
#list of bad guys
zombie_list = []
#functions (which should later be moved into classes)
def is_hit():
#grab variables
(mouseX, mouseY) = pygame.mouse.get_pos()
print "\nboxW_x, y = " + str(boxW_x) + ", " + str(boxW_y) + "\nmouseX, Y = " + str(mouseX) + ", " + str(mouseY)
headshot_x = boxW_x + (.5 * boxW.get_width())
headshot_y = boxW_y + (.25 * boxW.get_height())
margin_of_error_x = (boxW.get_width()/float(1000)) * boxW.get_width()
margin_of_error_y = (boxW.get_height()/float(1000)) * boxW.get_height()
print "Headshot_x: " + str(headshot_x) + ", " + str(headshot_y)
print "diff in headshot and actual shot: " + str(mouseX - headshot_x) + ", " + str(mouseY - headshot_y)
print "margin of error x = " + str(margin_of_error_x) + " y = " + str(margin_of_error_y)
print "zombie size: " + str(boxW.get_size())
valid_x = valid_y = False
if abs(mouseX-headshot_x) < margin_of_error_x:
valid_x = True
print "valid x"
if abs(mouseY-headshot_y) < margin_of_error_y:
valid_y = True
print "valid y"
return (valid_x and valid_y)
pygame.mouse.set_visible(True)
pygame.mouse.set_cursor(*pygame.cursors.diamond)
#Display
screen = pygame.display.set_mode((640, 640))
pygame.display.set_caption("Zombie Massacre")
#Entities
#background
background = pygame.Surface(screen.get_size())
background = background.convert()
background.fill((0, 0, 0))
#make a zombie
boxW = pygame.image.load('zombie_1.png')
boxW = pygame.transform.scale(boxW,(50, 50))
#set up some box variables
boxW_x = screen.get_width()/3
boxW_y = screen.get_height()/3
#testing zombie class
zombie = Zombie(screen.get_width()/3, screen.get_height()/3)
#Action
#Assign
clock = pygame.time.Clock()
keepGoing = True
#Loop
count = 0;
rotation_vect = 1.01
while keepGoing:
#setup rotation_vect for this pass
if (count % 3) == 0:
rotation_vect = 0 - rotation_vect
#Time
clock.tick(30)
#Events
for event in pygame.event.get():
if event.type == pygame.QUIT:
keepGoing = False
elif event.type is pygame.MOUSEBUTTONDOWN:
#loop through zombie list, using each one's "is_hit()" function
keepGoing = not(is_hit())
#Refresh screen
screen.blit(background, (0,0))
boxW = pygame.transform.rotozoom(pygame.image.load('zombie_1.png'), rotation_vect, 1.01)
boxW = pygame.transform.scale(boxW,(count+50, count+100))
#for zombie in zombies
screen.blit(boxW,(boxW_x+(boxW_x * .1), boxW_y+(boxW_y * .1)))
# error is result of following line
screen.blit(zombie, (zombie.x_pos, zombie.y_pos))
pygame.display.flip()
#increment count
count += 1
A:
You didn't call the inherited constructor:
class Zombie(pygame.Surface):
def __init__(self, x, y):
        pygame.Surface.__init__(self, (50, 50))  # the zombie's width and height
And you're assigning to self. That's not going to work.
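A minimal sketch of the corrected subclass (assumes pygame is installed; a plain Surface can be created without opening a window):

```python
import pygame  # assumes pygame is installed

class Zombie(pygame.Surface):
    def __init__(self, x, y):
        # Initialise the inherited Surface; rebinding `self` inside
        # __init__ has no effect outside the method.
        pygame.Surface.__init__(self, (50, 50))
        self.x_pos = x
        self.y_pos = y

zombie = Zombie(10, 20)
print(zombie.get_size(), zombie.x_pos, zombie.y_pos)
# (50, 50) 10 20
```

Loading and scaling zombie_1.png would then happen onto this surface (or, better, in a Sprite as the last answer shows), rather than by replacing self.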
A:
I see that this question was asked one month ago, but maybe it's not too late...
What I see from your code is that you're basically trying to re-create the Sprite class. Indeed, you are binding an image and a position together.
I did not understand what the gameplay was about (except for the head-shot part :D) but here's an example of how to display zombies and detect when they're being shot at:
import pygame
TITLE = "Zombie Massacre"
SCREEN_WIDTH = 640
SCREEN_HEIGHT = 480
class ZombieSprite(pygame.sprite.Sprite):
def __init__(self, image):
pygame.sprite.Sprite.__init__(self)
self.image = image
self.rect = pygame.Rect((0, 0), image.get_size())
self.defineHeadPos()
def defineHeadPos(self):
# To call each time the size of the rect changes.
head_x_center = self.rect.width / 2
head_y_center = self.rect.height / 4
head_width = self.rect.height / 4
head_height = self.rect.height / 4
head_x_min = head_x_center - head_width / 2
head_y_min = head_y_center - head_height / 2
self.head_rect = pygame.Rect((head_x_min, head_y_min),
(head_width, head_height))
def update(self):
# Here we could move the zombie.
pass
def shoot(self, pos):
x, y = pos
x -= self.rect.left
y -= self.rect.top
if self.head_rect.collidepoint(x, y):
print "Head shot !"
else:
print "Shot."
def ActionShoot(zombies, pos):
print "Shot at %s." % (pos,)
sprites = zombies.get_sprites_at(pos)
if not sprites:
print "Missed."
return
sprite = sprites[-1]
sprite.shoot(pos)
def DrawScene(screen, background, zombies):
screen.blit(background, (0, 0))
zombies.draw(screen)
pygame.init()
screen = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))
pygame.display.set_caption(TITLE)
background = pygame.Surface(screen.get_size())
background = background.convert()
background.fill((32, 32, 32))
image_zombie = pygame.image.load('zombie.png')
image_zombie.convert()
zombies = pygame.sprite.LayeredUpdates()
zombie = ZombieSprite(image_zombie)
zombie.rect.move_ip(0, 0)
zombies.add(zombie)
zombie = ZombieSprite(image_zombie)
zombie.rect.move_ip(400, 100)
zombies.add(zombie)
zombie = ZombieSprite(image_zombie)
zombie.rect.move_ip(300, 250)
zombies.add(zombie)
clock = pygame.time.Clock()
running = True
while running:
clock.tick(30)
zombies.update()
DrawScene(screen, background, zombies)
pygame.display.flip()
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
if event.type == pygame.MOUSEBUTTONDOWN:
ActionShoot(zombies, pygame.mouse.get_pos())
print "Quitting..."
pygame.quit()
It's not a complete game at all but I leave that up to you. The key here is to use the sprites and the groups for displaying a bunch of zombies at different places. You can put the 'intelligence' of the zombies in their update method (move them, zoom them if they get closer to the screen).
Note: I used only one image object for all the zombies. If you start zooming, you want each zombie to have its own image object, which means loading zombie.png for each of them. Otherwise when you zoom one, they all zoom.
| pygame: custom classes inheriting from pygame.Surface | I'm playing with pygame for the first time (and am kind of a newb about python in general), and wondering if anyone could help me with this...
I'm making a little shootem-up game and want to be able to create a class for bad guys. My thought was that the class should inherit from pygame.Surface, but that's giving me all kinds of problems (def, could be me screwing up basic inheritance/class stuff). For example, why doesn't this work(pygame, screen, etc all work fine and are used in other parts of the code, I'm just trying to move functions which I already have working into a class):
class Zombie(pygame.Surface):
x_pos = y_pos = 0
def __init__(self, x, y):
#create zombie
self = pygame.image.load('zombie_1.png')
self = pygame.transform.scale(self,(50, 50))
x_pos = x
y_pos = y
zombie = Zombie(screen.get_width()/3, screen.get_height()/3)
screen.blit(zombie, (zombie.x_pos, zombie.y_pos))
The above produces an error: "pygame.error: display Surface quit"
edit: apparently this is a result of calling a Surface after pygame.display.quit() has been called. Anyone with any experience in pygame want to take a swing at this?
Entire code:
#Initialize
import pygame, sys
pygame.init()
#classes
class Zombie(pygame.Surface):
x_pos = y_pos = 0
def __init__(self, x, y):
#create zombie
self = pygame.image.load('zombie_1.png')
self = pygame.transform.scale(self,(50, 50))
x_pos = x
y_pos = y
def is_hit(mouse_pos):
#grab variables
(mouseX, mouseY) = mouse_pos
print "\nboxW_x, y = " + str(self.x) + ", " + str(self.y) + "\nmouseX, Y = " + str(mouseX) + ", " + str(mouseY)
headshot_x = self.x + (.5 * zombie.get_width())
headshot_y = self.y + (.25 * zombie.get_height())
margin_of_error_x = (zombie.get_width()/float(1000)) * zombie.get_width()
margin_of_error_y = (zombie.get_height()/float(1000)) * zombie.get_height()
print "Headshot_x: " + str(headshot_x) + ", " + str(headshot_y)
print "diff in headshot and actual shot: " + str(mouseX - headshot_x) + ", " + str(mouseY - headshot_y)
print "margin of error x = " + str(margin_of_error_x) + " y = " + str(margin_of_error_y)
print "zombie size: " + str(zombie.get_size())
valid_x = valid_y = False
if abs(mouseX-headshot_x) < margin_of_error_x:
valid_x = True
print "valid x"
if abs(mouseY-headshot_y) < margin_of_error_y:
valid_y = True
print "valid y"
return (valid_x and valid_y)
#list of bad guys
zombie_list = []
#functions (which should later be moved into classes)
def is_hit():
#grab variables
(mouseX, mouseY) = pygame.mouse.get_pos()
print "\nboxW_x, y = " + str(boxW_x) + ", " + str(boxW_y) + "\nmouseX, Y = " + str(mouseX) + ", " + str(mouseY)
headshot_x = boxW_x + (.5 * boxW.get_width())
headshot_y = boxW_y + (.25 * boxW.get_height())
margin_of_error_x = (boxW.get_width()/float(1000)) * boxW.get_width()
margin_of_error_y = (boxW.get_height()/float(1000)) * boxW.get_height()
print "Headshot_x: " + str(headshot_x) + ", " + str(headshot_y)
print "diff in headshot and actual shot: " + str(mouseX - headshot_x) + ", " + str(mouseY - headshot_y)
print "margin of error x = " + str(margin_of_error_x) + " y = " + str(margin_of_error_y)
print "zombie size: " + str(boxW.get_size())
valid_x = valid_y = False
if abs(mouseX-headshot_x) < margin_of_error_x:
valid_x = True
print "valid x"
if abs(mouseY-headshot_y) < margin_of_error_y:
valid_y = True
print "valid y"
return (valid_x and valid_y)
pygame.mouse.set_visible(True)
pygame.mouse.set_cursor(*pygame.cursors.diamond)
#Display
screen = pygame.display.set_mode((640, 640))
pygame.display.set_caption("Zombie Massacre")
#Entities
#background
background = pygame.Surface(screen.get_size())
background = background.convert()
background.fill((0, 0, 0))
#make a zombie
boxW = pygame.image.load('zombie_1.png')
boxW = pygame.transform.scale(boxW,(50, 50))
#set up some box variables
boxW_x = screen.get_width()/3
boxW_y = screen.get_height()/3
#testing zombie class
zombie = Zombie(screen.get_width()/3, screen.get_height()/3)
#Action
#Assign
clock = pygame.time.Clock()
keepGoing = True
#Loop
count = 0;
rotation_vect = 1.01
while keepGoing:
#setup rotation_vect for this pass
if (count % 3) == 0:
rotation_vect = 0 - rotation_vect
#Time
clock.tick(30)
#Events
for event in pygame.event.get():
if event.type == pygame.QUIT:
keepGoing = False
elif event.type is pygame.MOUSEBUTTONDOWN:
#loop through zombie list, using each one's "is_hit()" function
keepGoing = not(is_hit())
#Refresh screen
screen.blit(background, (0,0))
boxW = pygame.transform.rotozoom(pygame.image.load('zombie_1.png'), rotation_vect, 1.01)
boxW = pygame.transform.scale(boxW,(count+50, count+100))
#for zombie in zombies
screen.blit(boxW,(boxW_x+(boxW_x * .1), boxW_y+(boxW_y * .1)))
# error is result of following line
screen.blit(zombie, (zombie.x_pos, zombie.y_pos))
pygame.display.flip()
#increment count
count += 1
| [
"You didn't call the inherited constructor:\nclass Zombie(pygame.Surface):\n def __init__(self, x, y):\n pygame.Surface.__init__(self, size=(w,h))\n\nAnd you're assigning to self. That's not going to work.\n",
"I see that this question was asked one month ago, but maybe it's not too late...\nWhat I see ... | [
4,
3
] | [] | [] | [
"class",
"inheritance",
"pygame",
"python"
] | stackoverflow_0004002019_class_inheritance_pygame_python.txt |
Q:
Write lines longer than 80 chars in output file [Python]
I've got a pretty basic question. I'm using Python to calculate an n×12 vector
y = numpy.array([V1,V2,V3,V4,V5,V6,V7,V8,V9,V10,V11,V12])
which I append after each loop calculation.
My problem is that when I try to save it to a file or print it
Python automatically breaks the result in three lines as my
output usually exceeds 200 chars. Is there a way to suppress
this 80 char/line behavior? Many thanks in advance.
A:
You can use numpy.savetxt() to save an array to a text file while controlling the formatting. To print it to the screen, you have different options to control the linewidth. One would be to call
numpy.set_printoptions(linewidth=200)
to set the linewidth to a higher value.
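A short sketch of both options (the file path and array contents here are placeholders, and a 1×12 array stands in for the appended n×12 result):

```python
import os
import tempfile
import numpy as np

np.set_printoptions(linewidth=200)             # stop print() wrapping at the default 75 chars

y = np.arange(12, dtype=float).reshape(1, 12)  # stand-in for the n x 12 array

path = os.path.join(tempfile.gettempdir(), "vectors.txt")
np.savetxt(path, y, fmt="%.4f")                # savetxt writes one array row per file line
```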
| Write lines longer than 80 chars in output file [Python] | I've got a pretty basic question. I'm using Python to calculate an n×12 vector
y = numpy.array([V1,V2,V3,V4,V5,V6,V7,V8,V9,V10,V11,V12])
which I append after each loop calculation.
My problem is that when I try to save it to a file or print it
Python automatically breaks the result in three lines as my
output usually exceeds 200 chars. Is there a way to supress
this 80 char/line behavior? Many thanks in advance.
| [
"You can use numpy.savetxt() to save an array to a text file while controlling the formatting. To print it to the screen, you have different options to control the linewidth. One would be to call\nnumpy.set_printoptions(linewidth=200)\n\nto set the linewidth to a higher value.\n"
] | [
24
] | [] | [] | [
"file_io",
"numpy",
"python"
] | stackoverflow_0004286544_file_io_numpy_python.txt |
Q:
Find gaps in a sequence of Strings
I have got a sequence of strings - 0000001, 0000002, 0000003.... up to 2 million. They are not contiguous, meaning there are gaps. Say after 0000003 the next string might be 0000006. I need to find out all these gaps. In the above case (0000004, 0000005).
This is what I have done so far -
gaps = list()
total = len(curr_ids)
for i in range(total):
tmp_id = '%s' %(str(i).zfill(7))
if tmp_id in curr_ids:
continue
else:
gaps.append(tmp_id)
return gaps
But as you would have guessed, this is slow since I am using list. If I use a dict, to pre-populate curr_ids it'll be faster. But what's the complexity to populating a hash-table? What's the fastest way to do this.
A:
You could sort the list of ids and then step through it once only:
def find_gaps(ids):
"""Generate the gaps in the list of ids."""
j = 1
for id_i in sorted(ids):
while True:
id_j = '%07d' % j
j += 1
if id_j >= id_i:
break
yield id_j
>>> list(find_gaps(["0000001", "0000003", "0000006"]))
['0000002', '0000004', '0000005']
If the input list is already in order, then you can avoid the sorted (though it does little harm: Python's adaptive mergesort is O(n) if the list is already sorted).
A:
For storing sequence of 2 millions ints you can use bitarray. Here each bit means one integer (the integer of that index in bitarray). Example code:
gaps = []
# bitarray is 0 based
a = bitarray.bitarray(total + 1)
a.setall(False)
for sid in curr_ids:
a[int(sid)] = True
for i in range(1, total):
if not a[i]:
gaps.append('%07d' %(i))
return gaps
A:
seq = *the sequence of strings*
n = 2000000
gaps = set(str(i).zfill(7) for i in range(1,n+1)) - set(seq)
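A concrete run of this set-difference approach on a tiny example (small n for illustration):

```python
seq = ["0000001", "0000003", "0000006"]
n = 6
# ids that should exist, minus ids that do exist
gaps = sorted(set(str(i).zfill(7) for i in range(1, n + 1)) - set(seq))
```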
A:
I would suggest taking ints rather than strings for processing and then making them strings again in the output:
j = 0
n = 2000000
# create a list of int numbers from your strings
foo = [i for i in range(n)]
# creating gaps
foo.remove(1)
foo.remove(50)
for i in foo:
    # print every number skipped before reaching i
    while j < i:
        print '%07d' % j
        j += 1
    j += 1
| Find gaps in a sequence of Strings | I have got a sequence of strings - 0000001, 0000002, 0000003.... upto 2 million. They are not contiguous. Meaning there are gaps. Say after 0000003 the next string might be 0000006. I need to find out all these gaps. In the above case (0000004, 0000005).
This is what I have done so far -
gaps = list()
total = len(curr_ids)
for i in range(total):
tmp_id = '%s' %(str(i).zfill(7))
if tmp_id in curr_ids:
continue
else:
gaps.append(tmp_id)
return gaps
But as you would have guessed, this is slow since I am using list. If I use a dict, to pre-populate curr_ids it'll be faster. But what's the complexity to populating a hash-table? What's the fastest way to do this.
| [
"You could sort the list of ids and then step through it once only:\ndef find_gaps(ids):\n \"\"\"Generate the gaps in the list of ids.\"\"\"\n j = 1\n for id_i in sorted(ids):\n while True:\n id_j = '%07d' % j\n j += 1\n if id_j >= id_i:\n break\n ... | [
11,
3,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0004284740_python.txt |
Q:
How to loop through list in Django Template
I have a list named 'bestforproglist'. I also have a for loop like this:
{% for act in activities %}
<div style="float:left;">{{ act.spotcategoryactivity }}</div>
<div class="progit">
<div class="prog_c" >
<div id="prog_p" style="width:20%;"></div>
</div>
<span id="p_caps">{{ ____________ }}%</span><br/>
</div>
{% endfor %}
In the above code, in place of the underline, how can I get the first item in the list on the loop's first iteration, the second item on the second iteration, and so on...
I tried
<span id="p_caps">{{ mylist[ {{forloop.counter}} ] }}</span><br/>
But it's not working.
A:
What is mylist? If you want to iterate over multiple lists, perhaps you should zip them and pass them into the template? Then you can use something like
{% for x,y in zipped_list %}
and use both the items rather than the indexing thing you're trying.
A:
If it's important to have activities and bestforproglist synchronized like this then it's best to zip() them in the view and then iterate over both of them together.
{% for act, prog in zippedlist %}
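A sketch of the view-side pairing (the variable names echo the question; the data is a placeholder):

```python
# In the view:
activities = ["running", "swimming"]   # placeholder data
bestforproglist = [20, 80]
zipped_list = list(zip(activities, bestforproglist))
# context = {"zipped_list": zipped_list}
# The template then uses:
#   {% for act, prog in zipped_list %} ... {{ prog }}% ... {% endfor %}
```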
A:
In a template, code like
""" mylist[ {{forloop.counter}} ] """ won't work.
Use "." instead.
http://docs.djangoproject.com/en/dev/topics/templates/#variables
This will make clear how to output things in a template.
| How to loop through list in Django Template | I am having a list in the name 'bestforproglist'. Also I had a for loop like this
{% for act in activities %}
<div style="float:left;">{{ act.spotcategoryactivity }}</div>
<div class="progit">
<div class="prog_c" >
<div id="prog_p" style="width:20%;"></div>
</div>
<span id="p_caps">{{ ____________ }}%</span><br/>
</div>
{% endfor %}
in the above code, in the space of underline, how should i have have first item in the list when the loop is in first iteration, the second item in the list when the loop is in second iteration and so on...
I tried
<span id="p_caps">{{ mylist[ {{forloop.counter}} ] }}</span><br/>
But it's not working.
| [
"What is mylist? If you want to iterate over multiple lists, perhaps you should zip them and pass them into the template? Then you can use something like\n {% for x,y in zipped_list %} \n\nand use both the items rather than the indexing thing you're trying.\n",
"If it's important to have activities and bestforpro... | [
5,
1,
0
] | [] | [] | [
"django",
"django_templates",
"html",
"python"
] | stackoverflow_0004274374_django_django_templates_html_python.txt |
Q:
Recursive Error when introducing setattr in python
I am trying to write a simple object in python that will load up settings using ConfigParser, get all the items as a dictionary, then set those as attributes for the object.
This seems to work if I don't include a __setattr__ method. I can call "settings.top_travel" and get the answer back. However, as soon as I try and put a __setattr__, I seem to get an error.
It looks fairly recursive, so I assume Get is calling Set, etc. In the set attribute part, I wish to make it write back to the configuration file. So, whenever one of the settings attributes changes, it gets stored back in the file where it came from.
Below you will find the code and the error.
import ConfigParser
class settingsFile(object):
def __init__(self):
"""
Reloads the configuration file and returns a dictionary with the
settings :
[config]
top_travel = 250
"""
# Create a configuration object and read in the file
configuration = ConfigParser.ConfigParser()
configuration.read('config/config.cfg')
# Return all the "config" section as a list and convert to a dictionary
self.configuration = dict(configuration.items("config"))
def refresh(self):
self.__init__()
def __getattr__(self, attr):
return self.configuration[attr]
def __setattr__(self, attr, value):
print attr, " is now ", value
# Do some clever storing with ConfigParser
if __name__ == "__main__":
settings = settingsFile()
print settings.top_travel
settings.top_travel = 600
print settings.top_travel
Error:
Traceback (most recent call last):
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 52, in <module>
settings = settingsFile()
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 37, in __init__
self.configuration = dict(configuration.items("config"))
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 47, in __setattr__
print self.configuration[attr], " is now ", value
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 44, in __getattr__
return self.configuration[attr]
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 44, in __getattr__
return self.configuration[attr]
......
RuntimeError: maximum recursion depth exceeded
A:
The problem is that setting self.configuration invokes self.__setattr__
You can circumvent that by changing the assignment to a call to __setattr__ of the super class:
class settingsFile(object):
def __init__(self):
...
# Return all the "config" section as a list and convert to a dictionary
object.__setattr__(self, 'configuration', dict(configuration.items("config")))
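Putting the pieces together, a minimal runnable sketch; the ConfigParser persistence is stubbed out with a plain dict, so the file handling is an assumption left to you:

```python
class SettingsFile(object):
    def __init__(self):
        # Bypass our own __setattr__ when creating the backing dict,
        # otherwise __setattr__ -> __getattr__ would recurse.
        object.__setattr__(self, 'configuration', {'top_travel': 250})

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails,
        # i.e. for the setting names stored in the dict.
        try:
            return self.configuration[attr]
        except KeyError:
            raise AttributeError(attr)

    def __setattr__(self, attr, value):
        # Store in the dict; the ConfigParser write-back would go here.
        self.configuration[attr] = value

settings = SettingsFile()
```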
A:
Make __setattr__ exclusive to attributes not starting with '_', and store the configuration in self._configuration, then add a requirement that configuration files don't accept options whose names start with an underscore.
def __setattr__(self, attribute, value):
if attribute.startswith('_'):
super(settingsFile, self).__setattr__(attribute, value)
return
# Clever stuff happens here
| Recursive Error when introducing setattr in python | I am trying to write a simple object in python that will load up settings using ConfigParser, get all the items as a dictionary, then set those as attributes for the object.
This seems to work if I don't include a __setattr__ method. I can call "settings.top_travel" and get the answer back. However, as soon as I try and put a __setattr__, I seem to get an error.
It looks fairly recursive, so I assume Get is calling Set, etc. In the set attribute part, I wish to make it write back to the configuration file. So, whenever one of the settings attributes changes, it gets stored back in the file where it came from.
Below you will find the code and the error.
import ConfigParser
class settingsFile(object):
def __init__(self):
"""
Reloads the configuration file and returns a dictionary with the
settings :
[config]
top_travel = 250
"""
# Create a configuration object and read in the file
configuration = ConfigParser.ConfigParser()
configuration.read('config/config.cfg')
# Return all the "config" section as a list and convert to a dictionary
self.configuration = dict(configuration.items("config"))
def refresh(self):
self.__init__()
def __getattr__(self, attr):
return self.configuration[attr]
def __setattr__(self, attr, value):
print attr, " is now ", value
# Do some clever storing with ConfigParser
if __name__ == "__main__":
settings = settingsFile()
print settings.top_travel
settings.top_travel = 600
print settings.top_travel
Error:
Traceback (most recent call last):
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 52, in <module>
settings = settingsFile()
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 37, in __init__
self.configuration = dict(configuration.items("config"))
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 47, in __setattr__
print self.configuration[attr], " is now ", value
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 44, in __getattr__
return self.configuration[attr]
File "/home/stuff/Documents/Software/Python/dBControllers v2/dBControllers.py", line 44, in __getattr__
return self.configuration[attr]
......
RuntimeError: maximum recursion depth exceeded
| [
"The problem is that setting self.configuration invokes self.__setattr__\nYou can circumvent that by changing the assignment to a call to __setattr__ of the super class:\nclass settingsFile(object):\n\n def __init__(self):\n ...\n # Return all the \"config\" section as a list and convert to a dicti... | [
5,
5
] | [
"Whatever clever stuff you're doing with ConfigParser is recursing infinitely. I can't be certain because I don't see the code, but if you're using recursion, make sure you cover all of your base cases.\n"
] | [
-3
] | [
"configuration",
"python",
"setattribute"
] | stackoverflow_0004286648_configuration_python_setattribute.txt |
Q:
Make unicode from variable containing QString
I have a QPlainTextEdit field with data containing national characters (ISO 8859-2).
tmp = self.ui.field.toPlainText() (QString type)
When I do:
tmp = unicode(tmp, 'iso-8859-2')
I get question marks instead of national characters. How can I convert properly the data in QPlainTextEdit field to unicode?
A:
As was said, QPlainTextEdit.toPlainText() returns a QString, which is UTF-16, whereas the unicode() constructor expects a byte string. Below is a small example:
tmp = self.field.toPlainText()
print 'field.toPlainText: ', tmp
codec0 = QtCore.QTextCodec.codecForName("UTF-16");
codec1 = QtCore.QTextCodec.codecForName("ISO 8859-2");
print 'UTF-16: ', unicode(codec0.fromUnicode(tmp), 'UTF-16')
print 'ISO 8859-2: ', unicode(codec1.fromUnicode(tmp), 'ISO 8859-2')
this code produces following output:
field.toPlainText: test ÖÄ это
китайский: 最主要的
UTF-16: test ÖÄ это китайский: 最主要的
ISO 8859-2: test ÖÄ ??? ?????????:
????
hope this helps, regards
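The question marks are the codec's replacement character for code points ISO 8859-2 cannot represent; the same effect can be shown in plain Python, without Qt (a small illustrative sketch):

```python
text = u"\u044d\u0442\u043e"   # the Cyrillic word from the output above
replaced = text.encode("iso-8859-2", errors="replace")
# every character ISO 8859-2 cannot represent is encoded as b"?"
```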
| Make unicode from variable containing QString | I have QPlainTextEdit field with data containing national characters (iso-8859-2).
tmp = self.ui.field.toPlainText() (QString type)
When I do:
tmp = unicode(tmp, 'iso-8859-2')
I get question marks instead of national characters. How can I convert properly the data in QPlainTextEdit field to unicode?
| [
"As it was said QPlainTextEdit.toPlainText() returns QString which should be UTF-16, whereas unicode() constructor expects a byte string. Below is a small example:\ntmp = self.field.toPlainText()\nprint 'field.toPlainText: ', tmp\n\ncodec0 = QtCore.QTextCodec.codecForName(\"UTF-16\");\ncodec1 = QtCore.QTextCodec.co... | [
3
] | [] | [] | [
"pyqt",
"python",
"qt",
"qt4"
] | stackoverflow_0004281116_pyqt_python_qt_qt4.txt |
Q:
Python go to def
Hi there
First of all, let me say that I'm new to Python and this is for schoolwork, so this should be done without advanced programming and global functions. Using Python 2.6.6 and WingIDE 101.
I need a program to present the user with a menu. The user must pick an option and, according to their pick, the program does what the user wants.
For example, in the code below (it's not the actual code), if the user picks 1 it goes to the sum() function.
def menu():
print "What do you want? "
print " 1 for sum"
print " 2 for subtraction"
pick = raw_input("please insert 1 or 2 ")
if pick == "1":
return sum()
if pick == "2":
return subtraction()
else:
menu()
menu()
def sum():
return 8 + 4
def subtraction():
return 8 - 4
I want to know how, after my pick, to make the program execute a particular function.
Thanks
P.S. - running this gives me this error:
Traceback (most recent call last):
File "/usr/lib/wingide-101-3.2/src/debug/tserver/_sandbox.py", line 12, in
File "/usr/lib/wingide-101-3.2/src/debug/tserver/_sandbox.py", line 7, in menu
TypeError: sum expected at least 1 arguments, got 0
A:
There are lots of things wrong with this, so we will take them up one by one.
sum is a built-in function in Python. So you should not name your function sum. You have to call it something else, like sum_foo. Or _sum.
Also, your code is executed from top to bottom. So suppose you define a function f that calls y, and then call f before y has been defined:
def f():
y()
f()
def y():
print 'Y called'
Results in this error:
NameError: global name 'y' is not defined
Because when your program runs f(), it is not aware of y: y has not been declared at that point.
To fix this, you would do:
def f():
y()
def y():
print 'Y called'
f()
Also, since your sum and subtraction functions return values, save the result in a variable and print it, or print directly inside the function:
def sum():
return 8 + 4
No output
def sum():
print 8 + 4
A:
The error is because you're using the sum function before you declare it. It should raise a NameError, but sum is a builtin, so you're calling that function (which requires at least one argument) and not the function you've written.
To get past this you can call menu() after declaring the sum and subtraction functions.
P.S. It's not a good idea to shadow Python's builtin functions; change your function's name and call menu() later, or you will get a NameError.
A:
sum is a builtin at the point you're calling menu(). If you move this call after defining sum and substraction, it won't give you this error.
A:
Please define the sum and subtraction functions before the call to menu(). As written, you are calling the built-in sum(iterable[, start]) function provided by Python.
A:
You should wrap the call to menu() in a if __name__ == '__main__:' block. At the bottom of your code, just add
if __name__ == '__main__':
menu()
This will help prevent using variables before they're defined.
| Python go to def | Hi there
First of all just let me say that I'm new in Python and this is for a school work so this should be done without advanced programing and global functions. Using Python 2.6.6 and WingIDE 101
I need a program to present the user with a menu. The user must pick an option and accordingly to is pick the program does what the user wants.
For example, in the code bellow (its not the actual code), if the user picks 1 it goes to the sum() function.
def menu():
print "What do you want? "
print " 1 for sum"
print " 2 for subtraction"
pick = raw_input("please insert 1 or 2 ")
if pick == "1":
return sum()
if pick == "2":
return subtraction()
else:
menu()
menu()
def sum():
return 8 + 4
def subtraction():
return 8 - 4
I what to know how do I send, after my pick, the program to execute an determined definition.
Thanks
P.S. - running this gives me this error:
Traceback (most recent call last):
File "/usr/lib/wingide-101-3.2/src/debug/tserver/_sandbox.py", line 12, in
File "/usr/lib/wingide-101-3.2/src/debug/tserver/_sandbox.py", line 7, in menu
TypeError: sum expected at least 1 arguments, got 0
| [
"There are lots of things wrong with this, so we will take up one by one.\nsum is a built-in function in Python. So you cannot name your function sum. You have to call yourself something else, like sum_foo. Or _sum.\nAlso, your code is executed from top to bottom. So if you are calling a function say X in a functio... | [
1,
0,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0004287122_python.txt |
Q:
Sort list of strings by integer suffix
I have a list of strings:
[song_1, song_3, song_15, song_16, song_4, song_8]
I would like to sort them by the # at the end. Unfortunately, since the lower numbers aren't "08" and are "8", they are treated as larger than 15 in lexicographical order.
I know I have to pass a key to the sort function, I saw this somewhere on this site to sort decimal numbers that are strings:
sorted(the_list, key=lambda a: map(int, a.split('.')))
But that was for "1.2, 2.5, 2.3", and I don't have that case. I thought of replacing '.' with '_', but from what I understand it converts both sides to ints, which will fail since the left side of the _ is a string.
EDIT: I forgot to mention that all the prefixes are the same (song in this example)
A:
You're close.
sorted(the_list, key = lambda x: int(x.split("_")[1]))
should do it. This splits on the underscore, takes the second part (i.e. the one after the first underscore), and converts it to integer to use as a key.
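For the list from the question, this gives:

```python
the_list = ["song_1", "song_3", "song_15", "song_16", "song_4", "song_8"]
# sort by the integer after the underscore, not lexicographically
result = sorted(the_list, key=lambda x: int(x.split("_")[1]))
```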
A:
Well, you want to sort by the filename first, then on the int part:
def splitter( fn ):
try:
name, num = fn.rsplit('_',1) # split at the rightmost `_`
return name, int(num)
except ValueError: # no _ in there
return fn, None
sorted(the_list, key=splitter)
A:
sorted(the_list, key = lambda k: int(k.split('_')[1]))
A:
def number_key(name):
parts = re.findall('[^0-9]+|[0-9]+', name)
L = []
for part in parts:
try:
L.append(int(part))
except ValueError:
L.append(part)
return L
sorted(your_list, key=number_key)
| Sort list of strings by integer suffix | I have a list of strings:
[song_1, song_3, song_15, song_16, song_4, song_8]
I would like to sort them by the # at the end, unfortunately since the lower numbers aren't "08" and are "8", they are treated as larger than 15 in lexicographical order.
I know I have to pass a key to the sort function, I saw this somewhere on this site to sort decimal numbers that are strings:
sorted(the_list, key=lambda a:map(int,a.split('.'))
But that was for "1.2, 2.5, 2.3" but I don't have that case. I thought of replacing '.' with '_' but from what I understand it converts both sides to ints, which will fail since the left side of the _ is a string.
EDIT: I forgot to mention that all the prefixes are the same (song in this example)
| [
"You're close.\nsorted(the_list, key = lambda x: int(x.split(\"_\")[1]))\n\nshould do it. This splits on the underscore, takes the second part (i.e. the one after the first underscore), and converts it to integer to use as a key.\n",
"Well, you want to sort by the filename first, then on the int part:\ndef splitt... | [
22,
7,
3,
1
] | [] | [] | [
"python"
] | stackoverflow_0004287209_python.txt |
Q:
NameError in defining python multi-level package
I'm trying to create a simple multi-level package:
test_levels.py
level1/
__init__.py (empty file)
level2/
__init__.py (only contents: __all__ = ["leaf"])
leaf.py
leaf.py:
class Leaf(object):
print("read Leaf class")
pass
if __name__ == "__main__":
x = Leaf()
print("done")
test_levels.py:
from level1.level2 import *
x = Leaf()
Running leaf.py directly works fine, but running test_levels.py returns the output below,
where I was expecting no output:
read Leaf class
Traceback (most recent call last):
File "C:\Dev\intranet\test_levels.py", line 2, in <module>
x = Leaf()
NameError: name 'Leaf' is not defined
Can someone point out what I'm doing wrong?
A:
try to add
from leaf import *
in file level1/level2/__init__.py
upd: as in previous comment, add dot before module name, and remove "__all__" declaration.
$ cat level1/level2/__init__.py
from .leaf import Leaf
$ cat level1/level2/leaf.py
class Leaf:
def __init__(self):
print("hello")
$ cat test.py
from level1.level2 import *
x = Leaf()
$ python test.py
hello
A:
In level1/level2/__init__.py, I think you want to do from leaf import * (or, in Py3k, from .leaf import *).
When you import from level1.level2, you're really importing the __init__.py file in that directory. Since you haven't defined Leaf in there, you don't get it by importing that.
A:
Are you expecting to import all the variable names from all the modules in that package? That's a horrible idea. To do what you want, it should be
from level1.level2.leaf import *
or even better yet to remove the wildcard import, which is usually bad, it should be
from level1.level2.leaf import Leaf
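To see the whole layout work end to end, here is a self-contained sketch that writes the package to a temporary directory and then imports it (directory and class names taken from the question; the use of a temp dir is just for the demo):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "level1", "level2")
os.makedirs(pkg)

# level1/__init__.py can stay empty
open(os.path.join(root, "level1", "__init__.py"), "w").close()
# level2/__init__.py re-exports Leaf from the leaf module
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .leaf import Leaf\n")
with open(os.path.join(pkg, "leaf.py"), "w") as f:
    f.write("class Leaf(object):\n    pass\n")

sys.path.insert(0, root)
from level1.level2 import Leaf

x = Leaf()
```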
| NameError in defining python multi-level package | I'm trying to create a simple multi-level package:
test_levels.py
level1/
__init__.py (empty file)
level2/
__init__.py (only contents: __all__ = ["leaf"])
leaf.py
leaf.py:
class Leaf(object):
print("read Leaf class")
pass
if __name__ == "__main__":
x = Leaf()
print("done")
test_levels.py:
from level1.level2 import *
x = Leaf()
Running leaf.py directly works fine, but running test_levels.py returns the output below,
where I was expecting no output:
read Leaf class
Traceback (most recent call last):
File "C:\Dev\intranet\test_levels.py", line 2, in <module>
x = Leaf()
NameError: name 'Leaf' is not defined
Can someone point out what I'm doing wrong?
| [
"try to add\nfrom leaf import *\n\nin file level1/level2/__init__.py\nupd: as in previous comment, add dot before module name, and remove \"__all__\" declaration.\n$ cat level1/level2/__init__.py\nfrom .leaf import Leaf\n$ cat level1/level2/leaf.py\nclass Leaf:\n def __init__(self):\n print(\"hello\")\n$ ... | [
0,
0,
0
] | [] | [] | [
"nameerror",
"package",
"python"
] | stackoverflow_0004286999_nameerror_package_python.txt |
Q:
What's a good name for this kind of object?
I have an associative array of element IDs to counts of each element.
e.g. (in python):
myObject = { 'a': 5, 'b': 3 }
It should support addition and subtraction. For example:
myObject - { 'a': 3 }
Should evaluate to:
{ 'a': 2, 'b': 3 }
For context, this is supporting a costs system. Each element is a resource type, and the quantity is how much of that resource the entity owns. A user would have one of these objects as an inventory of some kind, but then an item might have one of these as its cost. As a result, the user object can simply subtract the item's cost from their inventory.
I'm just trying to think of a good term for this kind of object.
A:
This collection type is already in Python 2.7 and 3.1. It's called a Counter.
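A minimal sketch of the Counter arithmetic the question describes (the variable names here are illustrative):

```python
from collections import Counter

inventory = Counter({'a': 5, 'b': 3})
cost = Counter({'a': 3})

# Counter supports + and -; subtraction keeps only positive counts
remaining = inventory - cost
print(dict(remaining))  # {'a': 2, 'b': 3}
```

One caveat: `-` silently drops entries whose count falls to zero or below, so an inventory system that must detect overdrafts should check the counts first, or use `Counter.subtract()`, which keeps negative counts in place.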
A:
I'd go for ResourceVector or ResourceTuple I guess.
A:
I would call that a Matrix, without knowing the extent of what you want to do.
| What's a good name for this kind of object? | I have an associative array of element IDs to counts of each element.
e.g. (in python):
myObject = { 'a': 5, 'b': 3 }
It should support addition and subtraction. For example:
myObject - { 'a': 3 }
Should evaluate to:
{ 'a': 2, 'b': 3 }
For context, this is supporting a costs system. Each element is a resource type, and the quantity is how much of that resource the entity owns. A user would have one of these objects as an inventory of some kind, but then an item might have one of these as its cost. As a result, the user object can simply subtract the item's cost from their inventory.
I'm just trying to think of a good term for this kind of object.
| [
"This collection type is already in Python 2.7 and 3.1. It's called a Counter.\n",
"I'd go for ResourceVector or ResourceTuple I guess.\n",
"I would call that a Matrix, without knowing the extent of what you want to do.\n"
] | [
8,
1,
0
] | [] | [] | [
"data_structures",
"naming",
"python",
"terminology"
] | stackoverflow_0004287388_data_structures_naming_python_terminology.txt |
Q:
Gui Library in Python
I am in need of making an interface for Automata Construction. For those who are unaware of what an automaton is, I basically need circles and arrows, extending them on the user interface and then various text to describe it. For example: http://en.wikipedia.org/wiki/File:DFAexample.svg
I was wondering if there is any advanced library in Python which could let me do all of this. Please give me suggestions. I plan to use lots of internal frames and to show the parsing of strings diagrammatically. I had planned to use Jython so I could use the Swing libraries, but I would like to know if there are any other suggestions.
A:
I've used PyQt4 to produce something like this:
http://doc.qt.nokia.com/4.6/graphicsview-diagramscene.html
which is the C++ version. Draw boxes, draw arrows, drag them around and so forth. Actually I think I pretty much ported the C++ version line-by-line to Python. Or someone else did.
A:
PySide (Qt), WxPython, PyGtk
A:
I looked at your picture and have another idea: you can use graphviz and pydot.
| Gui Library in Python | I am in need of making an interface for Automata Construction. For those who are unaware of what an automaton is, I basically need circles and arrows, extending them on the user interface and then various text to describe it. For example: http://en.wikipedia.org/wiki/File:DFAexample.svg
I was wondering if there is any advanced library in Python which could let me do all of this. Please give me suggestions. I plan to use lots of internal frames and to show the parsing of strings diagrammatically. I had planned to use Jython so I could use the Swing libraries, but I would like to know if there are any other suggestions.
| [
"I've used PyQt4 to produce something like this:\nhttp://doc.qt.nokia.com/4.6/graphicsview-diagramscene.html\nwhich is the C++ version. Draw boxes, draw arrows, drag them around and so forth. Actually I think I pretty much ported the C++ version line-by-line to Python. Or someone else did.\n",
"PySide (Qt), WxPyt... | [
2,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0004287469_python.txt |
Q:
Python: How to get an updated Entry text to use in a command bound to it?
Consider the following code:
text = Entry(); text.pack()
def show(e):
print text.get()
text.bind('<Key>', show)
Let's say I put the letters ABC in the Entry, one by one. The output would be:
>>>
>>> A
>>> AB
Note that when pressing A, it prints an empty string. When I press B, it prints A, not AB. If I don't press anything after C, it will never be shown. It seems that the Entry content is only updated after the bound command has returned, so I can't use the actual Entry value in that function.
Is there any way to get an updated Entry value to use inside a bound command?
A:
You could replace the <Key> event with the <KeyRelease> event. That should work.
Here is a list of events: http://infohost.nmt.edu/tcc/help/pubs/tkinter/events.html#event-types
A:
The reason for this has to do with Tk "bindtags". Bindings are associated with tags, and the bindings are processed in tag order. Both widget names and widget classes are tags, and they are processed in that order (widget-specific bindings first, class bindings second).
For that reason, any time you press a key your widget specific binding will fire before the class binding has a chance to modify the widget.
There are many workarounds. The simplest is to bind to <KeyRelease> since the class bindings happen on the key press. There are other solutions that involve either adding or rearranging bind tags, or using the built-in data validation features of the entry widget. Which is best depends on what you're really trying to accomplish.
For more information on the data validation functions, see this question: Interactively validating Entry widget content in tkinter
For a more comprehensive answer, see Tkinter: set StringVar after <Key> event, including the key pressed
| Python: How to get an updated Entry text to use in a command bound to it? | Consider the following code:
text = Entry(); text.pack()
def show(e):
print text.get()
text.bind('<Key>', show)
Let's say I put the letters ABC in the Entry, one by one. The output would be:
>>>
>>> A
>>> AB
Note that when pressing A, it prints an empty string. When I press B, it prints A, not AB. If I don't press anything after C, it will never be shown. It seems that the Entry content is only updated after the bound command has returned, so I can't use the actual Entry value in that function.
Is there any way to get an updated Entry value to use inside a bound command?
| [
"You could replace the <Key> event with the <KeyRelease> event. That should work.\nHere is a list of events: http://infohost.nmt.edu/tcc/help/pubs/tkinter/events.html#event-types\n",
"The reason for this has to do with Tk \"bindtags\". Bindings are associated with tags, and the bindings are processed in tag order... | [
3,
2
] | [] | [] | [
"python",
"tkinter",
"tkinter_entry"
] | stackoverflow_0004287553_python_tkinter_tkinter_entry.txt |
Q:
How to customize the look and feel of django admin
How to customize the look and feel of django admin?
A:
You should create ModelAdmin classes to customize the django admin site.
See this page in the django book for details:
http://www.djangobook.com/en/2.0/chapter06/
A:
try Django Grappelli
Bye!
A:
Have you set up the tables in the DB yet? As per http://docs.djangoproject.com/en/1.2/intro/tutorial01/?
A:
1) ModelAdmin, you can customize the Forms for CRUD operations with this.
2) Overload your Admin Templates
3) Admin Tools and Grappelli provide great options to extend and customize the Django Admin
sorry, I can't post links, but Google can easily help with searches for 1) and 2).
| How to customize the look and feel of django admin | How to customize the look and feel of django admin?
| [
"You should create ModelAdmin classes to customize the django admin site.\nSee this page in the django book for details:\nhttp://www.djangobook.com/en/2.0/chapter06/\n",
"try Django Grappelli\nBye!\n",
"Have you set up the tables in the DB yet? As per http://docs.djangoproject.com/en/1.2/intro/tutorial01/?\n",
... | [
1,
1,
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004285388_django_python.txt |
Q:
Improving run time of simple query
Gurus:
I have a very simple data model relating two different kinds of Users via an Interaction:
# myapp/models.py
class C1(models.Model):
user = models.ForeignKey(User)
class C2(models.Model):
user = models.ForeignKey(User)
class Interaction(models.Model):
c1 = models.ForeignKey(C1)
c2 = models.ForeignKey(C2)
date = models.DateField()
So, an Interaction has a User of class C1, a User of class C2, and a date (in addition to the primary key, automatically an integer); a given pair of users can have many Interactions.
I populated the database with 2000 random Users (1000 of each class), and when I query the Interaction table the run time is too slow (about three seconds, which is unacceptable in a production environment).
Is there something I can do to improve the run time of this search? Should I define the Interaction differently?
Thanks.
A:
If you'd like to store additional information related to your users, Django provides a method to specify a site-specific related model -- termed a "user profile" -- for this purpose.
To make use of this feature, define a model with fields for the additional information you'd like to store, or additional methods you'd like to have available, and also add a OneToOneField from your model to the User model. This will ensure only one instance of your model can be created for each User. For example:
# settings.py
AUTH_PROFILE_MODULE = 'myapp.UserProfile'
# myapp/models.py
class UserProfile(models.Model):
CLASS_CHOICES = (
(0, 'Yellow User'),
(1, 'Green User'),
)
user_class = models.IntegerField(choices=CLASS_CHOICES)
user = models.OneToOneField(User)
class Interaction(models.Model):
u1 = models.ForeignKey(User, related_name='u1s')
u2 = models.ForeignKey(User, related_name='u2s')
date = models.DateField()
Creating a new model and associated table for each User class does not seem like good design.
A:
You have used foreign keys to associate C1, C2 with Users, and called this a one-to-many relationship. However, the relationship between Interaction C1, C2 is not one-to-many because one Interaction can be associated with many Users, and one User can also have many Interactions associated with it. This is called a many-to-many relationship, and is represented in Django models using models.ManyToManyField.
So try changing your models.py file to -
class Interaction(models.Model):
ic1 = models.ManyToManyField(C1)
ic2 = models.ManyToManyField(C2)
date = models.DateField()
See if this helps...
| Improving run time of simple query | Gurus:
I have a very simple data model relating two different kinds of Users via an Interaction:
# myapp/models.py
class C1(models.Model):
user = models.ForeignKey(User)
class C2(models.Model):
user = models.ForeignKey(User)
class Interaction(models.Model):
c1 = models.ForeignKey(C1)
c2 = models.ForeignKey(C2)
date = models.DateField()
So, an Interaction has a User of class C1, a User of class C2, and a date (in addition to the primary key, automatically an integer); a given pair of users can have many Interactions.
I populated the database with 2000 random Users (1000 of each class), and when I query the Interaction table the run time is too slow (about three seconds, which is unacceptable in a production environment).
Is there something I can do to improve the run time of this search? Should I define the Interaction differently?
Thanks.
| [
"If you'd like to store additional information related to your users, Django provides a method to specify a site-specific related model -- termed a \"user profile\" -- for this purpose.\nTo make use of this feature, define a model with fields for the additional information you'd like to store, or additional methods... | [
4,
0
] | [] | [] | [
"database_design",
"django",
"python"
] | stackoverflow_0004287801_database_design_django_python.txt |
Q:
Lazily sample random results in python
Python question. I'm generating a large array of objects, of which I only need a small random sample. Actually generating the objects in question takes a while, so I wonder if it would be possible to somehow skip over those objects that don't need generating and only explicitly create the objects that have been sampled.
In other words, I now have
a = createHugeArray()
s = random.sample(a,len(a)*0.001)
which is rather wasteful. I would prefer something more lazy like
a = createArrayGenerator()
s = random.sample(a,len(a)*0.001)
I don't know if this works. The documentation on random.sample isn't too clear, though it mentions xrange as being very fast - which makes me believe it might work. Converting the array creation to a generator would be a bit of work (my knowledge of generators is very rusty), so I want to know if this works in advance. :)
An alternative I can see is to make a random sample via xrange, and only generate those objects that are actually selected by index. That's not very clean though, because the indices generated are arbitrary and unnecessary, and I would need rather hacky logic to support this in my createHugeArray method.
For bonus points: how does random.sample actually work? Especially, how does it work if it doesn't know the size of the population in advance, as with generators like xrange?
A:
There does not seem to be a way that avoids figuring out how the indices map to your permutations. If you don't know this, how would you create a random object from your array? You could either use the trick with xrange() you suggested yourself, or implement a class defining the __getitem__() and __len__() methods and pass an object of this class as the population argument to random.sample().
Some further comments:
Converting createHugeArray() into a generator won't buy you anything -- random.sample() will just not work anymore. It needs an object supporting len().
So it does need to know the number of elements in the population right from the beginning.
The implementation features two different algorithms and chooses the one that will use less memory. For relatively small k (as in the case at hand) it will simply save the indices already chosen in a set and make a new random choice whenever it hits one of them.
Edit: A completely different approach would be to iterate over all permutations once and decide for each permutation if it should be included. If the total number of permutations is n and you would like to select k of them, you could write
selected = []
for i in xrange(n):
perm = nextPermutation()
if random.random() < float(k-len(selected))/(n-i):
selected.append(perm)
This would choose exactly k permutations randomly.
A:
You could create a list of array indexes with sample and then generate the objects according to the results:
def get_object(index):
return MyClass(index)
or something like this. Then use sample to generate the indexes you need and call this function with those indexes:
objs = map(get_object, random.sample(range(length), int(0.001 * length)))
This is a little indirect as in it only chooses from a list of possible array indexes.
A:
Explaining how random.sample works,
random.sample(container, k) returns k values chosen randomly from the container. The container must support len(), so lists, tuples, strings, sets and xrange objects all work, but an arbitrary generator does not.
e.g. random.sample(xrange(111), 4) will return something like [33, 52, 110, 1]: with k = 4, it picks 4 random numbers from the values 0 up to (but not including) 111.
A:
I'm guessing the function createHugeArray() contains a piece of code that is repeated once for each object that is created. And I'm guessing the objects are generated from some sort of initial value or seed, in which case createHugeArray() looks something like this:
def createHugeArray( list_of_seeds ):
huge_array = []
for i in list_of_seeds:
my_object = makeObject( i )
huge_array.append( my_object )
return huge_array
(I used lists not arrays, but you get the idea.)
To do the random sampling before actually creating the objects, just add a line that generates a random number, and then only create the object if the random number is below a certain threshold. Say you only want one object in a thousand. random.randint(0,999) gives a number from 0 to 999 - so only generate an object if you get zero. The code above becomes:
import random
def createHugeArray( list_of_seeds ):
huge_array = []
for i in list_of_seeds:
die_roll = random.randint(0,999)
if( die_roll == 0 ):
my_object = makeObject( i )
huge_array.append( my_object )
return huge_array
Of course if my guess about how your code works is wrong then this is useless to you, in which case sorry and good luck :-)
| Lazily sample random results in python | Python question. I'm generating a large array of objects, of which I only need a small random sample. Actually generating the objects in question takes a while, so I wonder if it would be possible to somehow skip over those objects that don't need generating and only explicitly create the objects that have been sampled.
In other words, I now have
a = createHugeArray()
s = random.sample(a,len(a)*0.001)
which is rather wasteful. I would prefer something more lazy like
a = createArrayGenerator()
s = random.sample(a,len(a)*0.001)
I don't know if this works. The documentation on random.sample isn't too clear, though it mentions xrange as being very fast - which makes me believe it might work. Converting the array creation to a generator would be a bit of work (my knowledge of generators is very rusty), so I want to know if this works in advance. :)
An alternative I can see is to make a random sample via xrange, and only generate those objects that are actually selected by index. That's not very clean though, because the indices generated are arbitrary and unnecessary, and I would need rather hacky logic to support this in my createHugeArray method.
For bonus points: how does random.sample actually work? Especially, how does it work if it doesn't know the size of the population in advance, as with generators like xrange?
| [
"There does not seem a way that avoids figuring out how the indices map to your permutations. If you don't know this, how would you create a random object from your array? You could either use the trick using xrange() you suggested yourself, or implement a class defining the __getitem__() and __len__() methods an... | [
2,
0,
0,
0
] | [] | [] | [
"lazy_evaluation",
"python",
"random",
"sampling"
] | stackoverflow_0004286862_lazy_evaluation_python_random_sampling.txt |