Q:
How can I make xml.sax use a HTTP proxy for its DTD requests?
It is a known problem that XML parsers often send out HTTP requests to fetch DTDs referenced in documents. Specifically, Python's parser does this. This causes excessive traffic for www.w3.org, which hosts a lot of these DTDs. In turn, this makes XML parsing take a very long time and in some cases time out. This can be a serious problem, as it makes a task seemingly related only to text processing dependent on an unreliable third party.
In order to mitigate this problem (since a real solution is very hard), I'd like to install a caching web proxy locally and ask xml.sax to send its requests via this proxy. I specifically don't want the proxy settings leaking out to other components, so system wide settings are out of the question.
How can I make xml.sax use a HTTP proxy?
I've got:
handler = # instance of a subclass of xml.sax.handler.ContentHandler
parser = xml.sax.make_parser()
parser.setContentHandler(handler)
parser.parse(indata)
return handler.result()
One approach is to use a custom EntityResolver. However, it turns out it is not possible to implement a caching EntityResolver, because it doesn't get enough information.
A:
One quick and dirty way to do this would be to monkey patch saxutils.prepare_input_source. You can pretty much just copy+paste it and tweak the branch that calls urllib.urlopen so that it gets a UrlOpener from urllib2 with your proxy installed.
Unfortunately, I think that this is the only way that you're going to be able to get your literally desired behavior without changing system wide settings or creating your own EntityResolver that could cache results.
The problem is that saxutils.prepare_input_source pretty unambiguously makes a call to urllib.urlopen and with no options for modifying this behavior. So you'd have to route that through your proxy which would affect all other clients of urllib.
By Magnus Hoff: A working monkey-patching implementation:
from xml.sax import saxutils, xmlreader
import urllib2
import urlparse

def make_caching_prepare_input_source(old_prepare_input_source, proxy):
    def caching_prepare_input_source(source, base=None):
        if isinstance(source, xmlreader.InputSource):
            return source
        full_uri = urlparse.urljoin(base or "", source)
        if not full_uri.startswith('http:'):
            args = (source,) if base is None else (source, base)
            return old_prepare_input_source(*args)
        # Fetch the DTD through the proxy instead of via urllib.urlopen
        r = urllib2.Request(full_uri)
        r.set_proxy(proxy, 'http')
        f = urllib2.urlopen(r)
        i = xmlreader.InputSource()
        i.setSystemId(source)
        i.setByteStream(f)
        return i
    return caching_prepare_input_source

def enable_http_proxy(server):
    saxutils.prepare_input_source = make_caching_prepare_input_source(
        saxutils.prepare_input_source,
        server,
    )
Q:
Python Cookbook is for Python 2.4
Python Cookbook 2nd edition is updated for Python 2.4. Is it still ok to study the book using Python version 2.5 or 2.6?
A:
Python 2.4 is too old. In my opinion it will be worth the time and money to find a more recent resource, especially if your time is limited. More recent books will also cover changes in the libraries, including advances in Python web app development, which I don't expect to find in aged resources. Especially for a cookbook, which collects solutions to common problems, being up to date is important.
May I also say that Python is now in version 3, where major changes have been introduced. It will be beneficial to study Python 3, even if you are only planning to use 2.x versions. A great online resource is of course Dive into Python.
A:
Sure, although 2.4 is pretty old now -- not too much has changed, and what has, you can review in the What's New in Python series.
A:
I would say yes. There have been a few big changes since 2.4, but most if not all of the Cookbook will still apply. It also gives you a good idea of idiomatic python.
A:
It depends on what you want to learn out of the book.
Let me guess that you are a newbie. If you are not new to programming (and probably you are not, since you are on SO), then the 2.4 cookbook will be fine. There will be a few changes in the later versions to catch up with, ones that simplify code, introduce new idioms and help you do things in a better/cleaner way, but you can pick those up later on.
If you are new to programming, then may be you should pick up something more recent. It is important to pick up clean coding habits and know your community's idioms.
A:
I find it's a useful reference and still use it. It's full of good general tips and advice much of which still applies to the newer versions of Python. That said, I'd save money and get a used copy.
I found an online version here: http://flylib.com/books/en/2.9.1.2/1/
A:
Well, you can always start with A Byte of Python, an open-source document; small, precise, and to the point. Here's the link:
Regards
Q:
How do I make Django ManyToMany 'through' queries more efficient?
I'm using a ManyToManyField with a 'through' class and this results in a lot of queries when fetching a list of things. I'm wondering if there's a more efficient way.
For example here are some simplified classes describing Books and their several authors, which goes through a Role class (to define roles like "Editor", "Illustrator", etc):
class Person(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)

    @property
    def full_name(self):
        return ' '.join([self.first_name, self.last_name])

class Role(models.Model):
    name = models.CharField(max_length=50)
    person = models.ForeignKey(Person)
    book = models.ForeignKey('Book')  # string reference: Book is defined below

class Book(models.Model):
    title = models.CharField(max_length=255)
    authors = models.ManyToManyField(Person, through='Role')

    @property
    def authors_names(self):
        names = []
        for role in self.role_set.all():
            person_name = role.person.full_name
            if role.name:
                person_name += ' (%s)' % (role.name,)
            names.append(person_name)
        return ', '.join(names)
If I call Book.authors_names() then I can get a string something like this:
John Doe (Editor), Fred Bloggs, Billy Bob (Illustrator)
It works fine but it does one query to get the Roles for the book, and then another query for every Person. If I'm displaying a list of Books, this adds up to a lot of queries.
Is there a way to do this more efficiently, in a single query per Book, with a join? Or is the only way to use something like batch-select?
(For bonus points... my coding of authors_names() looks a bit clunky - is there a way to make it more elegantly Python-esque?)
A:
This is a pattern I come across often in Django. It's really easy to create properties such as your author_name, and they work great when you display one book, but the number of queries explodes when you want to use the property for many books on a page.
Firstly, you can use select_related to prevent the lookup for every person
for role in self.role_set.all().select_related(depth=1):
    person_name = role.person.full_name
    if role.name:
        person_name += ' (%s)' % (role.name,)
    names.append(person_name)
return ', '.join(names)
However, this doesn't solve the problem of looking up the roles for every book.
If you are displaying a list of books, you can look up all the roles for your books in one query, then cache them.
>>> books = Book.objects.filter(**your_kwargs)
>>> roles = Role.objects.filter(book__in=books).select_related(depth=1)
>>> roles_by_book = defaultdict(list)
>>> for role in roles:
...     roles_by_book[role.book].append(role)
You can then access a book's roles through the roles_by_book dictionary.
>>> for book in books:
... book_roles = roles_by_book[book]
You will have to rethink your author_name property to use caching like this.
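The grouping step above can be sketched in plain Python, with no Django required; the row data below is made up for illustration:

```python
from collections import defaultdict

# Sketch of the batching idea: fetch all child rows once, group them by
# their parent key in a single pass, and make later access a dict lookup
# instead of a per-parent query. The rows are hypothetical stand-ins for
# Role objects.
rows = [('book1', 'Editor'), ('book1', 'Illustrator'), ('book2', 'Author')]

roles_by_book = defaultdict(list)
for book, role in rows:
    roles_by_book[book].append(role)
```

Each book's roles are then available as roles_by_book[book] without touching the database again.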
I'll shoot for the bonus points as well.
Add a method to role to render the full name and role name.
class Role(models.Model):
    ...
    @property
    def name_and_role(self):
        out = self.person.full_name
        if self.name:
            out += ' (%s)' % self.name
        return out
The authors_names property collapses to a one-liner similar to Paulo's suggestion:
@property
def authors_names(self):
    return ', '.join(role.name_and_role for role in self.role_set.all())
A:
I would make authors = models.ManyToManyField(Role) and store the full name in Role.alias, because the same person can sign books under distinct pseudonyms.
About the clunkiness, this:
def authors_names(self):
    names = []
    for role in self.role_set.all():
        person_name = role.person.full_name
        if role.name:
            person_name += ' (%s)' % (role.name,)
        names.append(person_name)
    return ', '.join(names)
Could be:
def authors_names(self):
    return ', '.join('%s (%s)' % (role.person.full_name, role.name)
                     for role in self.role_set.all())
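As a side note, the one-liner above always appends the parenthesised role name, even when role.name is empty. A plain-Python sketch (made-up data, no Django) that keeps the original conditional:

```python
# Hypothetical stand-ins for (role.person.full_name, role.name) pairs.
pairs = [('John Doe', 'Editor'), ('Fred Bloggs', ''), ('Billy Bob', 'Illustrator')]

def authors_names(pairs):
    # Append " (role)" only when the role name is non-empty, matching
    # the behaviour of the original loop in the question.
    return ', '.join(
        '%s (%s)' % (name, role) if role else name
        for name, role in pairs
    )
```

With the sample data this yields the string shown in the question: John Doe (Editor), Fred Bloggs, Billy Bob (Illustrator).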
Q:
Django: can't bind an uploaded image to a form ImageField()
here's my problem, I've created this form:
class SettingsForm(forms.Form):
    ...
    logo = forms.ImageField()
    ...
The upload works fine and I managed to display the image but I can't bind it to the form. Here's what I've done:
data = ...
files = {'logo': SimpleUploadedFile('logo.jpg', logo.read())}
form = SettingsForm(data=data, files=files)
the logo object is an ImageFieldFile. I've tested the read method in a shell and it's OK. I get no warnings displaying the page, only "no file chosen".
Thanks for your help. Sorry for the format of this post, I'm new to stackoverflow and to django.
A:
I'm not sure about this, but according to the Django documentation on binding forms, the data and files are not kwargs but args, so try this:
form = SettingsForm(data, files)
Q:
mod_wsgi + apache not multithreaded, why?
WSGI application
# coding: utf-8
import time

def application(environ, start_response):
    status = '200 OK'
    output = str(time.time())
    time.sleep(5)
    output += ' -> ' + str(time.time())
    response_headers = [('Content-type', 'text/html; charset=utf-8'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
Apache VirtualHost
ServerName localhost
WSGIDaemonProcess main user=www-data group=www-data processes=1 threads=5
WSGIScriptAlias / /var/www/main/main.wsgi
WSGIProcessGroup main
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
ErrorLog /var/log/apache2/main_error_log
CustomLog /var/log/apache2/main_log common
When multiple clients connect, they are processed sequentially; there is no multithreading. Why?
A:
This is being dealt with on mod_wsgi mailing list. See:
http://groups.google.com/group/modwsgi/browse_frm/thread/b8aaab6bfc4cca6d
A:
While not exactly an answer: I noticed the serial behavior with a similar setup when testing in a single browser with multiple tabs (I tried Chrome 7 and FF 4).
Wondering if the browser was enforcing the serial-ness, I tried the same experiment with two separate browsers, and it definitely showed the server to be acting multi-threaded.
My setup was:
mod_wsgi 3.3-1
python 3.1.2-2
apache 2.2.17-1
on archlinux x86_64
tests were run with mod_wsgi in embedded mode.
hope it helps.
Q:
How to override Python list(iterator) behaviour?
Running this:
class DontList(object):
    def __getitem__(self, key):
        print 'Getting item %s' % key
        if key == 10: raise KeyError("You get the idea.")
        return None

    def __getattr__(self, name):
        print 'Getting attr %s' % name
        return None
list(DontList())
Produces this:
Getting attr __length_hint__
Getting item 0
Getting item 1
Getting item 2
Getting item 3
Getting item 4
Getting item 5
Getting item 6
Getting item 7
Getting item 8
Getting item 9
Getting item 10
Traceback (most recent call last):
File "list.py", line 11, in <module>
list(DontList())
File "list.py", line 4, in __getitem__
if key == 10: raise KeyError("You get the idea.")
KeyError: 'You get the idea.'
How can I change that so that I'll get [], while still allowing access to those keys [1] etc.?
(I've tried putting in def __length_hint__(self): return 0, but it doesn't help.)
My real use case: (for perusal if it'll be useful; feel free to ignore past this point)
After applying a certain patch to iniparse, I've found a nasty side-effect of my patch: __getattr__ is set on my Undefined class so that it returns a new Undefined object. Unfortunately, this means that list(iniconfig.invalid_section) (where isinstance(iniconfig, iniparse.INIConfig)) is doing this (I put simple prints in __getattr__ and __getitem__):
Getting attr __length_hint__
Getting item 0
Getting item 1
Getting item 2
Getting item 3
Getting item 4
Et cetera ad infinitum.
A:
If you want to override the iteration then just define the __iter__ method in your class
A:
Just raise IndexError instead of KeyError. KeyError is meant for mapping-like classes (e.g. dict), while IndexError is meant for sequences.
If you define the __getitem__() method on your class, Python will automatically generate an iterator from it. And the iterator terminates upon IndexError -- see PEP234.
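A minimal illustration (a hypothetical class, not the one from the question) of iteration ending cleanly on IndexError:

```python
class StopsAtThree(object):
    # list() drives the old sequence protocol: it calls __getitem__ with
    # 0, 1, 2, ... and treats IndexError as the end of the sequence.
    def __getitem__(self, key):
        if key == 3:
            raise IndexError(key)
        return key * 2
```

Here list(StopsAtThree()) evaluates to [0, 2, 4] and stops without propagating an exception.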
A:
As @Sven says, that's the wrong error to raise. But that's not the point; the point is that this is broken because it's not something you should do: preventing __getattr__ from raising AttributeError means that you have overridden Python's default mechanism for testing whether an object has an attribute and replaced it with a new one (ini_defined(foo.bar)).
But Python already has hasattr! Why not use that?
>>> class Foo:
... bar = None
...
>>> hasattr(Foo, "bar")
True
>>> hasattr(Foo, "baz")
False
A:
Override how your class is iterated by implementing an __iter__() method. Iterators signal that they're finished by raising a StopIteration exception, which is part of the normal iterator protocol and is not propagated further. Here's one way of applying that to your example class:
class DontList(object):
    def __getitem__(self, key):
        print 'Getting item %s' % key
        if key == 10: raise KeyError("You get the idea.")
        return None

    def __iter__(self):
        class iterator(object):
            def __init__(self, obj):
                self.obj = obj
                self.index = -1

            def __iter__(self):
                return self

            def next(self):
                if self.index < 9:
                    self.index += 1
                    return self.obj[self.index]
                else:
                    raise StopIteration
        return iterator(self)

list(DontList())
print 'done'

# Getting item 0
# Getting item 1
# ...
# Getting item 8
# Getting item 9
# done
A:
I think that using return iter([]) is the right way, but let's think about how list() works: it gets elements from the iterator one by one and stops as soon as it receives a StopIteration. So you just have to return an empty iterator from __iter__, for example (x for x in xrange(0, 0)), or simply iter([]).
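A minimal sketch of that idea, with a hypothetical class:

```python
class AlwaysEmpty(object):
    # __iter__ takes priority over __getitem__ when iterating, so list()
    # consults it first and immediately receives StopIteration from the
    # empty iterator, while explicit indexing still goes through
    # __getitem__.
    def __getitem__(self, key):
        return None

    def __iter__(self):
        return iter([])
```

Here list(AlwaysEmpty()) gives [], while AlwaysEmpty()[5] still returns None.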
Q:
Is it possible to use celery for synchronous tasks?
Nearly synchronous works, too; basically, I want to delegate the data access and processing behind a web app to a task queue for most jobs. What's the fastest latency that I can consider reasonable for celery tasks?
Update (for clarification)
I guess for clarity I should explain that throughput -- while nice -- is not a necessary issue for me; I won't be needing scaling in that direction for a while, yet. Latency is the only criterion that I'll be evaluating at the moment. I'm content to use task.apply if that's the only way it'll work, but I'd like to farm the work out a bit.
A:
When I say throughput I mean the average latency from sending a task until it's been executed. With roundtrip I mean the average time it takes to send a task, executing it, sending the result back and retrieving the result.
As I said in the comments I currently don't have any official numbers to share, but with
the right configuration Celery is low latency compared to many other solutions, but still it does come with more overhead than executing a function locally. This is something to take into account when designing the granularity of a task[1]
I'm currently writing a performance guide that may be of interest:
http://ask.github.com/celery/userguide/optimizing.html
Feedback welcome, and would like to know about any other performance factors you are interested in.
[1] http://celeryq.org/docs/userguide/tasks.html#granularity
Q:
Make URL point to .py file in App Engine
I'm just now digging into GAE and I see two ways to make a particular URL pull up the right page.
The first way is using handlers:
handlers:
- url: /.*
  script: helloworld.py
The other way is using the following:
application = webapp.WSGIApplication(
    [('/', MainPage),
     ('/sign', Guestbook)],
    debug=True)
Which is better or which is right? I don't fully understand what the second example is doing exactly.
A:
You need to use both. The section in app.yaml tells App Engine where to look for your WSGI application. application = webapp.WSGIApplication(...) sets up your WSGI application using the webapp framework.
update:
app.yaml:
handlers:
- url: /city.*
  script: cityhandler.py
cityhandler.py
application = webapp.WSGIApplication([('/city', ShowCityPage)],
                                     debug=True)
Q:
How do I manually put cookies in a jar?
I'm using Python with urllib2 & cookielib and such to open a URL. This URL sets one cookie in its header and two more in the page with some JavaScript. It then redirects to a different page.
I can parse out all the relevant info for the cookies being set with the javascript, but I can't for the life of me figure out how to get them into the cookie-jar as cookies.
Essentially, when I follow the redirect to the other site, those two cookies have to be accessible by that site.
To be very specific, I'm trying to log in to gomtv.net by using their "log in with a Twitter account" feature in Python.
Anyone?
A:
You can't set cookies for another domain - browsers will not allow it.
Q:
How can I clean stuff up on program exit?
I have a command line program that wants to pickle things when I send it a ctrl-C via the terminal. I have some questions and concerns:
How do I perform this handling? Do I check for a KeyboardInterrupt? Is there a way to implement an exit function?
What if the program is halted in the middle of a write to a structure that I'm writing to? I presume these writes aren't treated atomically, so then how can I keep from writing trash into the pickle file?
A:
You can use atexit for defining an exit handler. Modifications of Python objects will be treated atomically, so you should be fine as long as your code is arranged in a way that your objects are always in a consistent state between (byte code) instructions.
A:
(1) Use the atexit module:
def pickle_things():
    pass

import atexit
atexit.register(pickle_things)
(2) In general, you can't. Imagine someone trips on the power cord while your program is in the middle of a write. It's impossible to guarantee everything gets properly written in all cases.
However, in the KeyboardInterrupt case, the interpreter will make sure to finish whatever it's currently doing before raising that exception, so you should be fine.
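To reduce the risk of a half-written pickle (concern 2), one common technique (a general sketch, not something from the answers above) is to write to a temporary file and then rename it over the target. The sketch uses os.replace (Python 3.3+); on POSIX, os.rename behaves the same way:

```python
import os
import pickle
import tempfile

def save_pickle(obj, path):
    # Write to a temp file in the same directory, then rename it over
    # the target. The rename is atomic on POSIX, so readers see either
    # the old complete pickle or the new one, never a partial write.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as f:
            pickle.dump(obj, f)
        os.replace(tmp, path)  # os.rename on Python 2 / POSIX
    except Exception:
        os.remove(tmp)
        raise
```

Registering save_pickle via atexit then means an interrupt at worst loses the very latest snapshot, not the whole file.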
Q:
shutil.move -> WindowsError: [Error32] The process cannot access the file
I use Python 2.5 and have a problem with shutil.move:
print(srcFile)
print(dstFile)
shutil.move(srcFile, dstFile)
Output:
c:\docume~1\aaa\locals~1\temp\3\tmpnw-sgp
D:\dirtest\d\c\test.txt
...
WindowsError: [Error32] The process cannot access the file because it is being used by
another process: 'c:\\docume~1\\aaa\\locals~1\\temp\\3\\tmpnw-sgp'
I use it on a Windows 2003 Server.
So, what's wrong here? Does anyone know?
Best Regards.
A:
If you want to continue in your script use:
try:
    shutil.move(srcFile, dstFile)
except WindowsError:
    pass
The reason you're getting error 32 is that another process on your computer or server is using that file. You might also want to skip copying temp files, as their contents are usually not important.
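If the file is only locked transiently (for example by a virus scanner or indexer), retrying the move is another option; a sketch, assuming the lock clears within a few attempts:

```python
import shutil
import time

def move_with_retry(src, dst, attempts=3, delay=0.1):
    # Retry the move a few times: error 32 usually means another process
    # holds the file open, often only briefly.
    for attempt in range(attempts):
        try:
            shutil.move(src, dst)
            return True
        except OSError:  # WindowsError is a subclass of OSError
            if attempt == attempts - 1:
                return False
            time.sleep(delay)
```

This returns False rather than raising, so the caller can decide whether a failed move is fatal.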
| shutil.move -> WindowsError: [Error32] The process cannot access the file | I use Python 2.5. and have a problem with shutil.move
print(srcFile)
print(dstFile)
shutil.move(srcFile, dstFile)
Output:
c:\docume~1\aaa\locals~1\temp\3\tmpnw-sgp
D:\dirtest\d\c\test.txt
...
WindowsError: [Error32] The process cannot access the file because it is being used by
another process: 'c:\\docume~1\\aaa\\locals~1\\temp\\3\\tmpnw-sgp'
I use it on a Windows 2003 Server.
So, what's wrong here? Does anyone know?
Best Regards.
| [
"If you want to continue in your script use:\ntry:\n shutil.move(srcFile, dstFile)\nexcept WindowsError:\n pass\n\nThe reason you're getting error 32 is because there is another process on your computer or server that is using that file. You might want to not copy temp files as they are not really important by ... | [
8
] | [] | [] | [
"python"
] | stackoverflow_0004258140_python.txt |
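As an alternative to silently swallowing the error with `pass`, one option (an assumption, not part of the original answer) is to retry the move a few times, since whatever holds the file — for example an antivirus scanner — usually releases it quickly. The attempt count and delay are arbitrary:

```python
import shutil
import time

def move_with_retry(src, dst, attempts=5, delay=1.0):
    # Retry the move in case another process still holds the file
    # open briefly.  WindowsError is a subclass of OSError (and only
    # exists on Windows), so catching OSError works on both platforms.
    for attempt in range(attempts):
        try:
            shutil.move(src, dst)
            return True
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
    return False
```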
Q:
python reporting tool, similar to birtviewer
Can you guys please tell me if building my own BIRT-viewer-like reporting tool in Python is a crazy idea? The company I'm working for now uses birtviewer to generate reports for our clients, but I'm getting frustrated tweaking the code to suit our clients' needs; it's a massive Java codebase, which I don't have any experience with at all. They don't want to mavenize birtviewer, so with every new release I have to manually update my local copy and mavenize it. And the fact that it is really owned by a private company worries me about the future of birtviewer. What do you guys think?
A:
Sure. Write it. Make it open source and give us a git repo to have a little look... Honestly if the problem exists solve it.
| python reporting tool, similar to birtviewer | Can you guys please tell me if building my own BIRT-viewer-like reporting tool in Python is a crazy idea? The company I'm working for now uses birtviewer to generate reports for our clients, but I'm getting frustrated tweaking the code to suit our clients' needs; it's a massive Java codebase, which I don't have any experience with at all. They don't want to mavenize birtviewer, so with every new release I have to manually update my local copy and mavenize it. And the fact that it is really owned by a private company worries me about the future of birtviewer. What do you guys think?
| [
"Sure. Write it. Make it open source and give us a git repo to have a little look... Honestly if the problem exists solve it.\n"
] | [
0
] | [] | [] | [
"python",
"reporting"
] | stackoverflow_0004258624_python_reporting.txt |
Q:
Python: some function for args -> *args (similar like those in functools)
I had these implementations:
def vecAdd(v1, v2): return tuple(map(add, izip(v1,v2)))
def vecMul(v1, f): return tuple(map(mul, izip(v1,repeat(f))))
Those didn't work because add (and mul) is called like add((x,y)), i.e. it only gets one single argument.
Is there some function which basically does the following?
def funccaller_with_exposed_args(func):
return lambda args: func(*args)
Maybe this is overkill and overengineered in this case but in general, this can be very important performance wise if you otherwise could lay out a complete heavy loop into pure C code.
A:
You could do this with itertools.starmap or itertools.imap.
imap is like starmap except that it zips the arguments first.
So instead of calling izip yourself, you could just use imap:
import itertools as it
def vecAdd(v1, v2): return tuple(it.imap(add, v1, v2))
def vecMul(v1, f): return tuple(it.imap(mul, v1, it.repeat(f)))
| Python: some function for args -> *args (similar like those in functools) | I had these implementations:
def vecAdd(v1, v2): return tuple(map(add, izip(v1,v2)))
def vecMul(v1, f): return tuple(map(mul, izip(v1,repeat(f))))
Those didn't work because add (and mul) is called like add((x,y)), i.e. it only gets one single argument.
Is there some function which basically does the following?
def funccaller_with_exposed_args(func):
return lambda args: func(*args)
Maybe this is overkill and overengineered in this case but in general, this can be very important performance wise if you otherwise could lay out a complete heavy loop into pure C code.
| [
"You could do this with itertools.starmap or itertools.imap.\nimap is like starmap except that it zips the arguments first.\nSo instead of calling izip yourself, you could just use imap:\nimport itertools as it\ndef vecAdd(v1, v2): return tuple(it.imap(add, v1, v2))\ndef vecMul(v1, f): return tuple(it.imap(mul, v1,... | [
5
] | [] | [] | [
"functional_programming",
"python"
] | stackoverflow_0004258992_functional_programming_python.txt |
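itertools.starmap, which the answer mentions but does not show, unpacks each tuple into separate arguments. A sketch in Python 3 syntax (itertools.imap was removed in Python 3, where the built-in map is already lazy):

```python
from itertools import starmap, repeat
from operator import add, mul

def vec_add(v1, v2):
    # Each zipped pair (x, y) is unpacked into add(x, y) by starmap.
    return tuple(starmap(add, zip(v1, v2)))

def vec_mul(v1, f):
    return tuple(starmap(mul, zip(v1, repeat(f))))

print(vec_add((1, 2, 3), (4, 5, 6)))   # (5, 7, 9)
```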
Q:
Convert wav to ogg vorbis in Python
How can I transcode a wav file to ogg vorbis format using Python?
I can convert to mp3 using PyMedia 1.3.7.3, but when I set the output stream type to 'ogg', I get the error: oggvorbis_encode_init: init_encoder failed and the script dies.
A:
From PyMedia's website:
OGG( optional with vorbis library )
You need to install Vorbis in order for the OGG encoder to work. Since the old version of your question tells me that you're on windows you can grab it here:
http://www.vorbis.com/setup_windows/
| Convert wav to ogg vorbis in Python | How can I transcode a wav file to ogg vorbis format using Python?
I can convert to mp3 using PyMedia 1.3.7.3, but when I set the output stream type to 'ogg', I get the error: oggvorbis_encode_init: init_encoder failed and the script dies.
| [
"From PyMedia's website: \n\nOGG( optional with vorbis library )\n\nYou need to install Vorbis in order for the OGG encoder to work. Since the old version of your question tells me that you're on windows you can grab it here:\nhttp://www.vorbis.com/setup_windows/\n"
] | [
2
] | [] | [] | [
"mp3",
"ogg",
"oggvorbis",
"python",
"transcoding"
] | stackoverflow_0003982530_mp3_ogg_oggvorbis_python_transcoding.txt |
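If PyMedia's encoder keeps failing even with Vorbis installed, one workaround (an assumption, not part of the original answer) is to shell out to the oggenc command-line tool from vorbis-tools, which must be on the PATH:

```python
import subprocess

def wav_to_ogg(wav_path, ogg_path, quality=5):
    # Assumes the oggenc binary (from vorbis-tools) is installed.
    # -q sets the VBR quality (-1..10); -o names the output file.
    subprocess.check_call(
        ["oggenc", "-q", str(quality), wav_path, "-o", ogg_path])
```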
Q:
import of python/django modules fails on debian vps
Debian VPS with Gigatux.
I'm using Django/Python with mod_wsgi, and I'm using virtualenvs as I hope to host a few different sites, which may well be at different levels.
I'm having an issue getting the site running; right now I can't even run syncdb, as it refuses to import the django package that is inside the site-packages folder.
I've got the statement below in my django.wsgi file, which is called from the WSGI config line in apache2/sites-available/default:
import sys
sys.path.append('/home/shofty/virtualenvs/sitename/lib/python2.5/site-packages')
and I've got quite a few packages in that folder.
However, syncdb won't run.
Now, if I install Django on the VPS without forcing it into a virtualenv, I can run syncdb, but syncdb then fails on the import of modules in INSTALLED_APPS that are in the site-packages but not installed on the VPS. So I know that the statement above isn't working.
I appreciate there may be a more specialised place to ask this question; I just don't know what it is. Tell me if you know somewhere this will get answered. I've spent two days getting this VPS running and, to be honest, I'm ready to give up.
A:
Mixing up environments won't help.
Clearly some packages are installed in the bundled python and some other in the virtualenv.
My suggestion: stick to the virtualenv.
Always work inside the virtualenv by running source /path/to/venv/bin/activate.
Within the WSGI file, enable the virtualenv. You do that not by importing its site-packages, but by asking mod_wsgi to use that Python environment, via the following commands:
activate_this = '/path/to/venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
A:
I worked on getting debian, apache, django, mod_wsgi, and virtualenvs to play nicely a couple weeks ago. Looking at the template I made for our wsgi files I'm using both 'site.addsitedir' and 'sys.path.append' where 'site.addsitedir' points to the site_package and 'sys.path.append' points to a copy of the application on the host. Here is what the first part of the jinja2 template looks like for django.wsgi
import sys
import os
import site
site.addsitedir('{{ site_package }}')
sys.path.append('{{ local_source }}')
...
'local_source' is something like '/home/jdoe/my_project' and 'site_package' is something like '/usr/local/lib/python2.6/site-packages'.
I remember having to play around with it a bit, and I also remember an issue with 'django.wsgi' and 'settings.py' having to be in the same directory. I hope that helps.
| import of python/django modules fails on debian vps | Debian VPS with Gigatux.
I'm using Django/Python with mod_wsgi, and I'm using virtualenvs as I hope to host a few different sites, which may well be at different levels.
I'm having an issue getting the site running; right now I can't even run syncdb, as it refuses to import the django package that is inside the site-packages folder.
I've got the statement below in my django.wsgi file, which is called from the WSGI config line in apache2/sites-available/default:
import sys
sys.path.append('/home/shofty/virtualenvs/sitename/lib/python2.5/site-packages')
and I've got quite a few packages in that folder.
However, syncdb won't run.
Now, if I install Django on the VPS without forcing it into a virtualenv, I can run syncdb, but syncdb then fails on the import of modules in INSTALLED_APPS that are in the site-packages but not installed on the VPS. So I know that the statement above isn't working.
I appreciate there may be a more specialised place to ask this question; I just don't know what it is. Tell me if you know somewhere this will get answered. I've spent two days getting this VPS running and, to be honest, I'm ready to give up.
| [
"Mixing up environments won't help.\nClearly some packages are installed in the bundled python and some other in the virtualenv.\nMy suggestion, stick to virtualenv\nAnd work on the virtualenv, always, by source /path/to/venv/bin/activate\nWithin the wsgi file, enable the virtualenv. You do that not by importing it... | [
1,
0
] | [] | [] | [
"django",
"mod_wsgi",
"python"
] | stackoverflow_0004258277_django_mod_wsgi_python.txt |
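Putting the accepted answer together, a complete django.wsgi for that era (Python 2.5, mod_wsgi, Django 1.x) might look like the sketch below. Every path and the settings module name are placeholders based on the question, not known values:

```python
# django.wsgi -- activate the virtualenv first, then hand off to Django.
activate_this = '/home/shofty/virtualenvs/sitename/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

import os
import sys

sys.path.append('/home/shofty/sites/sitename')        # project directory (placeholder)
os.environ['DJANGO_SETTINGS_MODULE'] = 'sitename.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
```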
Q:
In python, what's the most efficient way to combine 3 dicts and to sort by one of the dict's keys?
Example:
list1 = {'52': {'timestamp':'1234567890', 'name':'Jason Bourne'},
'42': {'timestamp':'2345678891', 'name':'Sarah Parker'}
}
list2 = {'61': {'timestamp':'3456789012', 'name':'Mike Williams'},
'12': {'timestamp':'4567889123', 'name':'Robert Tissone'}
}
list3 = {'56': {'timestamp':'4567890123', 'name':'Peter Blake'},
'71': {'timestamp':'5678891234', 'name':'Alex Cheng'}
}
//Best way to combine list1, list2, and list3 and sort by timestamp
result = [ {'timestamp':'1234567890', 'name':'Jason Bourne'},
{'timestamp':'2345678891', 'name':'Sarah Parker'},
{'timestamp':'3456789012', 'name':'Mike Williams'},
{'timestamp':'4567889123', 'name':'Robert Tissone'},
{'timestamp':'4567890123', 'name':'Peter Blake'},
{'timestamp':'5678891234', 'name':'Alex Cheng'}
]
A:
import itertools
import operator

sorted(itertools.chain(list1.itervalues(), list2.itervalues(),
                       list3.itervalues()), key=operator.itemgetter('timestamp'))
A:
from itertools import chain
from operator import itemgetter
def sortedchain(dicos, key):
return sorted(chain(v for d in dicos for v in d.values()), key=itemgetter(key))
dicos=[list1, list2, list3]
sortedvalues = sortedchain(dicos, 'timestamp')
so that dicos may have any length.
| In python, what's the most efficient way to combine 3 dicts and to sort by one of the dict's keys? | Example:
list1 = {'52': {'timestamp':'1234567890', 'name':'Jason Bourne'},
'42': {'timestamp':'2345678891', 'name':'Sarah Parker'}
}
list2 = {'61': {'timestamp':'3456789012', 'name':'Mike Williams'},
'12': {'timestamp':'4567889123', 'name':'Robert Tissone'}
}
list3 = {'56': {'timestamp':'4567890123', 'name':'Peter Blake'},
'71': {'timestamp':'5678891234', 'name':'Alex Cheng'}
}
//Best way to combine list1, list2, and list3 and sort by timestamp
result = [ {'timestamp':'1234567890', 'name':'Jason Bourne'},
{'timestamp':'2345678891', 'name':'Sarah Parker'},
{'timestamp':'3456789012', 'name':'Mike Williams'},
{'timestamp':'4567889123', 'name':'Robert Tissone'},
{'timestamp':'4567890123', 'name':'Peter Blake'},
{'timestamp':'5678891234', 'name':'Alex Cheng'}
]
| [
"sorted(itertools.chain(list1.itervalues(), list2.itervalues(),\n list3.itervalues()), key=operator.itemgetter('timestamp'))\n\n",
"from itertools import chain\nfrom operator import itemgetter\n\ndef sortedchain(dicos, key):\n return sorted(chain(v for d in dicos for v in d.values()), key=itemgetter(key))\n... | [
8,
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0004258782_dictionary_python.txt |
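A Python 3 compatible sketch, generalised to any number of dicts (dict.itervalues was removed in Python 3; chain.from_iterable flattens the per-dict value iterators):

```python
from itertools import chain
from operator import itemgetter

def merge_sorted(dicts, key):
    # Flatten the inner record dicts of every mapping, then sort once.
    return sorted(chain.from_iterable(d.values() for d in dicts),
                  key=itemgetter(key))

list1 = {'52': {'timestamp': '1234567890', 'name': 'Jason Bourne'},
         '42': {'timestamp': '2345678891', 'name': 'Sarah Parker'}}
list2 = {'61': {'timestamp': '3456789012', 'name': 'Mike Williams'}}

result = merge_sorted([list1, list2], 'timestamp')
```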
Q:
Module uncallable in Python?
I've run into an error with Python. Please see the code examples below. I run event_timer.py and I get the following error message. Both of the files listed below are in the same folder.
Traceback (most recent call last):
File "E:\python\event_timer\event_timer.py", line 7, in
timer = EventTimer()
TypeError: 'module' object is not callable
What am I missing?
event_timer.py:
import EventTimer
timer = EventTimer()
timer.addStep("Preheat Oven", seconds = 10)
timer.addStep("Cook Pizza", seconds = 20)
timer.addStep("Done!")
timer.start()
EventTimer.py:
import time
class Timer:
event = 'Event'
steps = []
def __init__(self, event = None):
if event is not None:
self.event = event
def addStep(self, step, seconds = None, minutes = None, hours = None, days = None):
if seconds is not None:
unit = 'seconds'
amount = seconds
elif minutes is not None:
unit = 'minutes'
amount = minutes
elif hours is not None:
unit = 'hours'
amount = hours
elif days is not None:
unit = 'days'
amount = days
else:
print 'Invalid arguments'
return False
self.steps.append({'unit': unit, 'amount': amount})
return True
def __timeInSeconds(self, unit, amount):
if unit == 'seconds':
return amount
elif unit == 'minutes':
return amount * 60
elif unit == 'hours':
return amount * 60 * 60
elif unit == 'days':
return amount * 60 * 60 * 24
else:
print 'Invalid unit'
return False
def start(self):
if len(self.steps) == 0:
print 'No steps to complete'
return False
print "{0} has started.".format(self.event)
for step in self.steps:
print step.step
time.sleep(self.__timeInSeconds(step.unit, step.amount))
print "Completed"
print 'Event complete'
A:
When you write
import EventTimer
you make a new variable, EventTimer, pointing to a module -- the module that you've just written! Inside that module is a class, Timer. So to make an instance of that class, you do
timer = EventTimer.Timer()
| Module uncallable in Python? | I've run into an error with Python. Please see the code examples below. I run event_timer.py and I get the following error message. Both of the files listed below are in the same folder.
Traceback (most recent call last):
File "E:\python\event_timer\event_timer.py", line 7, in
timer = EventTimer()
TypeError: 'module' object is not callable
What am I missing?
event_timer.py:
import EventTimer
timer = EventTimer()
timer.addStep("Preheat Oven", seconds = 10)
timer.addStep("Cook Pizza", seconds = 20)
timer.addStep("Done!")
timer.start()
EventTimer.py:
import time
class Timer:
event = 'Event'
steps = []
def __init__(self, event = None):
if event is not None:
self.event = event
def addStep(self, step, seconds = None, minutes = None, hours = None, days = None):
if seconds is not None:
unit = 'seconds'
amount = seconds
elif minutes is not None:
unit = 'minutes'
amount = minutes
elif hours is not None:
unit = 'hours'
amount = hours
elif days is not None:
unit = 'days'
amount = days
else:
print 'Invalid arguments'
return False
self.steps.append({'unit': unit, 'amount': amount})
return True
def __timeInSeconds(self, unit, amount):
if unit == 'seconds':
return amount
elif unit == 'minutes':
return amount * 60
elif unit == 'hours':
return amount * 60 * 60
elif unit == 'days':
return amount * 60 * 60 * 24
else:
print 'Invalid unit'
return False
def start(self):
if len(self.steps) == 0:
print 'No steps to complete'
return False
print "{0} has started.".format(self.event)
for step in self.steps:
print step.step
time.sleep(self.__timeInSeconds(step.unit, step.amount))
print "Completed"
print 'Event complete'
| [
"When you write\nimport EventTimer\n\nyou make a new variable, EventTimer, pointing to a module -- the module that you've just written! Inside that module is a class, Timer. So to make an instance of that class, you do\ntimer = EventTimer.Timer()\n\n"
] | [
10
] | [] | [] | [
"python"
] | stackoverflow_0004259180_python.txt |
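The same module-versus-class trap exists in the standard library: the datetime module contains a datetime class, and the fix is identical to the answer above:

```python
import datetime

# Calling the module itself would raise the familiar error:
# TypeError: 'module' object is not callable
d1 = datetime.datetime(2010, 11, 23)   # qualify the class via the module

from datetime import datetime          # or import the class directly
d2 = datetime(2010, 11, 23)

assert d1 == d2
```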
Q:
Python, BeautifulSoup or LXML - Parsing image URL's from HTML using CSS tags
I have searched high and low for a decent explanation of how BeautifulSoup or LXML work. Granted, their documentation is great, but for someone like myself, a python/programming novice, it is difficult to decipher what I am looking for.
Anyways, as my first project, I am using Python to parse an RSS feed for post links - I have accomplished this with Feedparser. My plan is then to scrape each post's images. For the life of me, though, I cannot figure out how to get either BeautifulSoup or LXML to do what I want! I have spent hours reading through the documentation and googling to no avail, so I am here. The following is a line from the Big Picture (my scrapee).
<div class="bpBoth"><a name="photo2"></a><img src="http://inapcache.boston.com/universal/site_graphics/blogs/bigpicture/shanghaifire_11_22/s02_25947507.jpg" class="bpImage" style="height:1393px;width:990px" /><br/><div onclick="this.style.display='none'" class="noimghide" style="margin-top:-1393px;height:1393px;width:990px"></div><div class="bpCaption"><div class="photoNum"><a href="#photo2">2</a></div>In this photo released by China's Xinhua news agency, spectators watch an apartment building on fire in the downtown area of Shanghai on Monday Nov. 15, 2010. (AP Photo/Xinhua) <a href="#photo2">#</a><div class="cf"></div></div></div>
So, according to my understanding of the documentation, I should be able to pass the following:
soup.find("a", { "class" : "bpImage" })
To find all instances with that css class. Well, it doesn't return anything. I'm sure I'm overlooking something trivial so I greatly appreciate your patience.
Thank you very much for your responses!
For future googlers, I'll include my feedparser code:
#! /usr/bin/python
# RSS Feed Parser for the Big Picture Blog
# Import applicable libraries
import feedparser
#Import Feed for Parsing
d = feedparser.parse("http://feeds.boston.com/boston/bigpicture/index")
# Print feed name
print d['feed']['title']
# Determine number of posts and set range maximum
posts = len(d['entries'])
# Collect Post URLs
pointer = 0
while pointer < posts:
e = d.entries[pointer]
print e.link
pointer = pointer + 1
A:
Using lxml, you might do something like this:
import feedparser
import lxml.html as lh
import urllib2
#Import Feed for Parsing
d = feedparser.parse("http://feeds.boston.com/boston/bigpicture/index")
# Print feed name
print d['feed']['title']
# Determine number of posts and set range maximum
posts = len(d['entries'])
# Collect Post URLs
for post in d['entries']:
link=post['link']
print('Parsing {0}'.format(link))
doc=lh.parse(urllib2.urlopen(link))
imgs=doc.xpath('//img[@class="bpImage"]')
for img in imgs:
print(img.attrib['src'])
A:
The code you have posted looks for all a elements with the bpImage class. But your example has the bpImage class on the img element, not the a. You just need to do:
soup.find("img", { "class" : "bpImage" })
A:
Using pyparsing to search for tags is fairly intuitive:
from pyparsing import makeHTMLTags, withAttribute
imgTag,notused = makeHTMLTags('img')
# only retrieve <img> tags with class='bpImage'
imgTag.setParseAction(withAttribute(**{'class':'bpImage'}))
for img in imgTag.searchString(html):
print img.src
| Python, BeautifulSoup or LXML - Parsing image URL's from HTML using CSS tags | I have searched high and low for a decent explanation of how BeautifulSoup or LXML work. Granted, their documentation is great, but for someone like myself, a python/programming novice, it is difficult to decipher what I am looking for.
Anyways, as my first project, I am using Python to parse an RSS feed for post links - I have accomplished this with Feedparser. My plan is then to scrape each post's images. For the life of me, though, I cannot figure out how to get either BeautifulSoup or LXML to do what I want! I have spent hours reading through the documentation and googling to no avail, so I am here. The following is a line from the Big Picture (my scrapee).
<div class="bpBoth"><a name="photo2"></a><img src="http://inapcache.boston.com/universal/site_graphics/blogs/bigpicture/shanghaifire_11_22/s02_25947507.jpg" class="bpImage" style="height:1393px;width:990px" /><br/><div onclick="this.style.display='none'" class="noimghide" style="margin-top:-1393px;height:1393px;width:990px"></div><div class="bpCaption"><div class="photoNum"><a href="#photo2">2</a></div>In this photo released by China's Xinhua news agency, spectators watch an apartment building on fire in the downtown area of Shanghai on Monday Nov. 15, 2010. (AP Photo/Xinhua) <a href="#photo2">#</a><div class="cf"></div></div></div>
So, according to my understanding of the documentation, I should be able to pass the following:
soup.find("a", { "class" : "bpImage" })
To find all instances with that css class. Well, it doesn't return anything. I'm sure I'm overlooking something trivial so I greatly appreciate your patience.
Thank you very much for your responses!
For future googlers, I'll include my feedparser code:
#! /usr/bin/python
# RSS Feed Parser for the Big Picture Blog
# Import applicable libraries
import feedparser
#Import Feed for Parsing
d = feedparser.parse("http://feeds.boston.com/boston/bigpicture/index")
# Print feed name
print d['feed']['title']
# Determine number of posts and set range maximum
posts = len(d['entries'])
# Collect Post URLs
pointer = 0
while pointer < posts:
e = d.entries[pointer]
print e.link
pointer = pointer + 1
| [
"Using lxml, you might do something like this:\nimport feedparser\nimport lxml.html as lh\nimport urllib2\n\n#Import Feed for Parsing\nd = feedparser.parse(\"http://feeds.boston.com/boston/bigpicture/index\")\n\n# Print feed name\nprint d['feed']['title']\n\n# Determine number of posts and set range maximum\nposts ... | [
3,
1,
1
] | [] | [] | [
"beautifulsoup",
"image",
"lxml",
"parsing",
"python"
] | stackoverflow_0004258658_beautifulsoup_image_lxml_parsing_python.txt |
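The question's era used BeautifulSoup 3; in today's bs4 the same class-based lookup, extended from the first match to every match, looks like this sketch (the HTML is a cut-down stand-in for the Big Picture markup):

```python
from bs4 import BeautifulSoup

html = '''<div class="bpBoth">
  <img src="http://example.com/s01.jpg" class="bpImage" />
  <img src="http://example.com/s02.jpg" class="bpImage" />
  <img src="http://example.com/logo.png" class="siteLogo" />
</div>'''

soup = BeautifulSoup(html, "html.parser")
# find() stops at the first hit; find_all() returns every match.
srcs = [img["src"] for img in soup.find_all("img", {"class": "bpImage"})]
```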
Q:
How to design a website on Django platform?
Apologies if my question is misleading. Here's what I mean. In Visual Studio 2010 for example, you could visually design a website or a web application as well as add C# code here and there. How would I design my website without an IDE, just by using Python IDLE and Django platform.
Would I need to use something like Dreamweaver to design the front-end part and then link it with Django? And if that is so, how difficult is the linking process of front-end with back-end?
A:
Django source code is usually edited in a non-GUI application. So, unless you use the code tab, Dreamweaver will be useless.
That is, unless you create a static HTML file first then populate it with dynamic code and separate it into bits.
A:
just by using Python IDLE and Django platform.
Start the webserver
Make a model, a controller, and a view.
Add data to the db (maybe with the admin model)
Load up the page in your browser.
Repeat as necessary
But I would recommend a more powerful text editor than idle if you are really going to do develop a whole site.
A:
Even with an IDE you will just be editing text. Once you get comfortable with the framework, as long as you have debug turned on in your settings, you can do it from any old text editor.
I develop websites using the Eclipse IDE with the Pydev plug-in. The HTML/CSS plugin I have installed always seems to break when there are too many Django template tags in the template, so I just use the built-in text editor.
| How to design a website on Django platform? | Apologies if my question is misleading. Here's what I mean. In Visual Studio 2010 for example, you could visually design a website or a web application as well as add C# code here and there. How would I design my website without an IDE, just by using Python IDLE and Django platform.
Would I need to use something like Dreamweaver to design the front-end part and then link it with Django? And if that is so, how difficult is the linking process of front-end with back-end?
| [
"Django source code is usually edited in a non-GUI application. So, unless you use the code tab, Dreamweaver will be useless. \nThat is, unless you create a static HTML file first then populate it with dynamic code and separate it into bits.\n",
"\njust by using Python IDLE and Django platform. \n\n\nStart the we... | [
8,
2,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004259352_django_python.txt |
Q:
Python setup script extensions, how do you include a .h file?
So I've got a directory that looks something like this:
home\
setup.py
some_python_file.py
ext\
__init__.py
c_file1.c
c_file2.c
ext_header.h
Obviously the header file is necessary to compile the c files, but the problem is that I can't get the setup script to include the header file.
My extension object is something like this:
Extension('ext.the_extension', ['ext/c_file1.c', 'ext/c_file2.c'])
Which works, but doesn't include the header file. If I change it to:
Extension('ext.the_extension', ['ext/c_file1.c', 'ext/c_file2.c', 'ext_header.h'])
It includes the '.h' file but then doesn't build when I run install. Instead it gives an error: error: unknown file type '.h' (from 'ext/ext_header.h')
If I include the header file as a data file like this:
data_files=[('ext', ['ext/ext_header.h'])]
it doesn't work at all, the .h file doesn't even make it into the MANIFEST file.
So my question is, how do you include this extension with the header file so that python setup.py install will build it correctly?
A:
I have a feeling pyfunc is on track for a more standard solution, but I did find another solution on my own. I have no idea if this is a good solution or just a hack, but all I did is add the header file to the MANIFEST.in. The documentation doesn't really make it seem like this is what the MANIFEST.in file is for, but it does work. My MANIFEST.in file now looks like this:
include ext/ext_header.h
Which includes the file and successfully compiles when I run python setup.py install
A:
From the docs,
module1 = Extension('demo',
define_macros = [('MAJOR_VERSION', '1'),
('MINOR_VERSION', '0')],
include_dirs = ['/usr/local/include'],
libraries = ['tcl83'],
library_dirs = ['/usr/local/lib'],
sources = ['demo.c'])
You should provide the include files via "include_dirs".
Why does this not work for you?
| Python setup script extensions, how do you include a .h file? | So I've got a directory that looks something like this:
home\
setup.py
some_python_file.py
ext\
__init__.py
c_file1.c
c_file2.c
ext_header.h
Obviously the header file is necessary to compile the c files, but the problem is that I can't get the setup script to include the header file.
My extension object is something like this:
Extension('ext.the_extension', ['ext/c_file1.c', 'ext/c_file2.c'])
Which works, but doesn't include the header file. If I change it to:
Extension('ext.the_extension', ['ext/c_file1.c', 'ext/c_file2.c', 'ext_header.h'])
It includes the '.h' file but then doesn't build when I run install. Instead it gives an error: error: unknown file type '.h' (from 'ext/ext_header.h')
If I include the header file as a data file like this:
data_files=[('ext', ['ext/ext_header.h'])]
it doesn't work at all, the .h file doesn't even make it into the MANIFEST file.
So my question is, how do you include this extension with the header file so that python setup.py install will build it correctly?
| [
"I have a feeling pyfunc is on track for a more standard solution, but I did find another solution on my own. I have no idea if this is a good solution or just a hack, but all I did is add the header file to the MANIFEST.in. The documentation doesn't really make it seem like this is what the MANIFEST.in file is f... | [
10,
3
] | [] | [] | [
"c",
"installation",
"python",
"setup.py"
] | stackoverflow_0004259170_c_installation_python_setup.py.txt |
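For completeness, distutils' Extension also accepts a depends list for header files; that triggers a rebuild when the header changes, though it does not by itself pull the file into an sdist (MANIFEST.in, as the accepted answer found, still handles that). A sketch assuming the layout from the question:

```python
from distutils.core import setup, Extension

ext = Extension('ext.the_extension',
                sources=['ext/c_file1.c', 'ext/c_file2.c'],
                include_dirs=['ext'],          # where the compiler finds ext_header.h
                depends=['ext/ext_header.h'])  # rebuild when the header changes

setup(name='the_extension', version='0.1', ext_modules=[ext])
```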
Q:
how to find the POST or GET variables posted by mechanize (python)
I'm using mechanize to submit a form like this...
import mechanize
br = mechanize.Browser()
br.open('http://stackoverflow.com')
br.select_form(nr=0)
br['q'] = "test"
br.set_handle_robots(False)
response = br.submit()
print response.info()
print response.read()
Using Firebug I can see that the actual variables posted are:
q test
How can I retrieve these programmatically using my Python script?
Please note I'm not actually scraping SO - just using it as an example!
Also, I know in this case the posted variables are obvious, since there's only the one I specified - often this is not the case!
Thanks :)
A:
You can enable debug mode in mechanize by putting this:
import mechanize
br = mechanize.Browser()
br.set_debug_http(True)
...
Hope this can help :)
A:
print br.form.get_value('q')
| how to find the POST or GET variables posted by mechanize (python) | I'm using mechanize to submit a form like this...
import mechanize
br = mechanize.Browser()
br.open('http://stackoverflow.com')
br.select_form(nr=0)
br['q'] = "test"
br.set_handle_robots(False)
response = br.submit()
print response.info()
print response.read()
Using Firebug I can see that the actual variables posted are:
q test
How can I retrieve these programmatically using my Python script?
Please note I'm not actually scraping SO - just using it as an example!
Also, I know in this case the posted variables are obvious, since there's only the one I specified - often this is not the case!
Thanks :)
| [
"You can enable debug mode in mechanize by putting this:\nimport mechanize\n\nbr = mechanize.Browser()\nbr.set_debug_http(True)\n... \n\nHope this can help :)\n",
"print br.form.get_value('q')\n\n"
] | [
2,
1
] | [] | [] | [
"forms",
"get",
"mechanize",
"post",
"python"
] | stackoverflow_0004186194_forms_get_mechanize_post_python.txt |
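mechanize's form object can also report the exact request it would send. A sketch (untested here; it needs a live network connection, and click_request_data comes from the underlying ClientForm API, so treat the method name as an assumption to verify against your mechanize version):

```python
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.open('http://stackoverflow.com')
br.select_form(nr=0)
br['q'] = "test"

# List every control the form would submit:
for control in br.form.controls:
    print control.name, control.type, control.value

# Ask the form for the exact request submitting it would produce:
url, data, headers = br.form.click_request_data()
print url
print data     # the URL-encoded body, e.g. 'q=test'
```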
Q:
MAC, ethernet id using python
How do you get the correct MAC/Ethernet ID of the local network card using Python?
Most of the articles on Google/Stack Overflow suggest parsing the result of ipconfig /all (Windows) or ifconfig (Linux).
On Windows (2x/XP/7) 'ipconfig /all' works fine, but is this a fail-safe method?
I am new to Linux and I have no idea whether 'ifconfig' is the standard method to get the MAC/Ethernet ID.
I have to implement a license check method in a Python application which is based on the local MAC/Ethernet ID.
There is a special case when you have a VPN or virtualization apps such as VirtualBox installed. In this case you'll get more than one MAC/Ethernet ID. This is not going to be a problem if I have to use the parsing method, but I am not sure.
Cheers
Prashant
A:
import sys
import os
def getMacAddress():
if sys.platform == 'win32':
for line in os.popen("ipconfig /all"):
if line.lstrip().startswith('Physical Address'):
mac = line.split(':')[1].strip().replace('-',':')
break
else:
for line in os.popen("/sbin/ifconfig"):
if line.find('Ether') > -1:
mac = line.split()[4]
break
return mac
Is a cross platform function that will return the answer for you.
A:
On linux, you can access hardware information through sysfs.
>>> ifname = 'eth0'
>>> print open('/sys/class/net/%s/address' % ifname).read()
78:e7:d1:84:b5:ed
This way you avoid the complications of shelling out to ifconfig, and parsing the output.
A:
I have used a socket-based solution; it works well on Linux. (Note that it relies on the fcntl module and the SIOCGIFHWADDR ioctl, which are not available on Windows, so a Windows fallback would still be needed.)
import fcntl
import socket
import struct

def getHwAddr(ifname):
    # 0x8927 is SIOCGIFHWADDR; the hardware address starts at byte 18
    # of the returned ifreq structure.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    info = fcntl.ioctl(s.fileno(), 0x8927, struct.pack('256s', ifname[:15]))
    return ''.join(['%02x:' % ord(char) for char in info[18:24]])[:-1]

getHwAddr("eth0")
Original Source
| MAC, ethernet id using python | How do you get the correct MAC/Ethernet ID of the local network card using Python?
Most of the articles on Google/Stack Overflow suggest parsing the result of ipconfig /all (Windows) or ifconfig (Linux).
On Windows (2x/XP/7) 'ipconfig /all' works fine, but is this a fail-safe method?
I am new to Linux and I have no idea whether 'ifconfig' is the standard method to get the MAC/Ethernet ID.
I have to implement a license check method in a Python application which is based on the local MAC/Ethernet ID.
There is a special case when you have a VPN or virtualization apps such as VirtualBox installed. In this case you'll get more than one MAC/Ethernet ID. This is not going to be a problem if I have to use the parsing method, but I am not sure.
Cheers
Prashant
| [
"import sys\nimport os\n\ndef getMacAddress(): \n if sys.platform == 'win32': \n for line in os.popen(\"ipconfig /all\"): \n if line.lstrip().startswith('Physical Address'): \n mac = line.split(':')[1].strip().replace('-',':') \n break \n else: \n for lin... | [
6,
4,
2
] | [] | [] | [
"ethernet",
"mac_address",
"python"
] | stackoverflow_0004258822_ethernet_mac_address_python.txt |
Q:
What is the state of exception safety in Python?
I know about the with statement for Python resource handling. What other concerns are there for exception safe code in Python?
EDIT: The concern here is with opening files and such. For instance, suppose an init function raises an exception. What is the state of the object being initialized?
A:
For instance, suppose an init function raises an exception. What is the state of the object being initialized?
Hint. When in doubt, actually run an experiment.
>>> class Partial( object ):
...     def __init__( self ):
...         self.a= 1
...         raise Exception
...         self.b= 2
...
>>> p= Partial()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
Exception
>>> p
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'p' is not defined
The statement -- as a whole -- fails. Object not created. Variable not assigned.
Any other questions?
In C++, things are so much more complex. In Python, the object is simply discarded.
A:
If you asking about language constructs:
Use try: except: else:, as it ensures that you won't catch the wrong exceptions.
Think twice before you catch BaseException, Exception or use a bare except:,
as you can easily catch too much:
your spelling errors - NameError, ImportError
the user's attempt to terminate your program - KeyboardInterrupt, SystemExit
errors that indicate an incomplete implementation - NotImplementedError
If you decide to catch generic exceptions, log them using log.exception('your message') (it records the current traceback automatically)
Keep in mind that exceptions in Python are also used for regular flow control (like the StopIteration exception)
Use the new syntax: except MyException as myex: instead of except MyException, myex:. It is easier to read for less experienced Python developers.
Here is an example that catches NameError:
try:
    this_doesn_not_exisit()  # a typo: raises NameError
except Exception:  # Don't do that!
    pass
print "But this line is still printed"
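The try/except/else shape recommended in the first bullet can be sketched like this (a generic example; the function and names are mine):

```python
def parse_port(text):
    """Parse text as a TCP port number; return None if it isn't one."""
    try:
        port = int(text)        # only the risky statement goes inside try
    except ValueError:          # catch the specific, expected exception
        return None
    else:                       # runs only if no exception was raised
        return port if 0 < port < 65536 else None
```

Keeping only the risky statement inside try means an unrelated exception raised elsewhere in the function cannot be silently swallowed by this handler.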
Answer to the edited question:
Regarding files: if you read text files, always use codecs.open instead of open to ensure that you can safely handle Unicode strings.
Regarding __init__ and the state of the object: __init__ is an initializer, so the object already exists when __init__ is called. If you raise an exception, the flow is interrupted and the object won't get stored in any variable, so it will be garbage collected (unless something else kept a reference to it). At least that is my understanding. Think of MyObject() as just a function that returns a value: if you raise an exception you return nothing, the flow is interrupted, and the name you were assigning to isn't modified.
Check this out:
>>> def throw(): raise Exception()
...
>>> a=1
>>> a=throw()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in throw
Exception
>>> a
1
Here is an example to prove that the object is created even if you raise exception in __init__. I wasn't able to post that fragment in the comments to @S.Lott answer:
>>> global_collection=[]
>>> class Partial(object):
...     def __init__(self):
...         self.test="test"
...         global_collection.append(self)
...         raise Exception()
...
>>> x=Partial()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __init__
Exception
>>> global_collection
[<__main__.Partial object at 0xb74f8f6c>]
>>> global_collection[0].test
'test'
UPDATE:
Included comments made by: @Paul McGuire, @martineau, @aaronasterling
A:
Python isn't like C++. Exceptions produce useful backtraces. If you're taking user input, catch appropriate exceptions (like EOFError) but otherwise, you don't have to do anything special to be exception safe. If a program does something unexpected and throws an exception, use the resulting stack dump to debug your problem. If you're going for extremely high availability, you might want to wrap your top level with a try/except in a loop with a log and restart.
Q:
Updating a Django project from 1.2 to 1.3, manage.py not working properly
I decided I wanted to update my Django 1.2 project to Django 1.3 to take advantage of the new static files mechanisms. I deleted my old version of Django, and followed the documentation's instructions for installing the development version from svn.
The changes seem to have taken. That is, python -c "import django; print django.get_version()" yields "1.3 alpha 1 SVN-14686". Yet, I can't seem to take advantage of 1.3 features in my old project. If I do "python manage.py collectstatic --help" I get "Unknown command: 'collectstatic'".
I tried creating a fresh project and doing the same thing, and the collectstatic command worked. I dug into django.core.management, but can't really make a lot of sense of it. The docstring for get_commands() mentions:
The dictionary is cached on the first
call and reused on subsequent calls.
Maybe this is totally irrelevant, but I wonder if my problem has something to do with caching (that is, an old version of the command dictionary is cached, which doesn't have the new 1.3 commands?). Any thoughts?
A:
In order to use a management command, you need to add the application that provides it to INSTALLED_APPS in settings.py. From the docs:
First, you’ll need to make sure that django.contrib.staticfiles is in your INSTALLED_APPS.
That should make the command available.
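For illustration, the relevant settings.py fragment might look like this (the other entries are placeholders for whatever your project already lists):

```python
# settings.py
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    # ... your existing apps ...
    'django.contrib.staticfiles',  # provides manage.py collectstatic
)
```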
Q:
Running a web app in Grails vs Django
I'm currently in the planning stage for a web application and I find myself trying to decide on using Grails or Django. From an operation perspective:
Which ecosystem is easier to maintain (migrations, backup, disaster recovery etc.)? If using grails it'll probably be a typical tomcat + mysql on linux. If django it'll be apache + mysql on linux.
Does django or grails have a better choice of cheap and flexible hosting? Initially it'll probably be low bandwidth requirements. I'm not sure about the exact specs required, but from what I've been reading it seems like django would require far less server resources (even 256MB server is ok) than grails.
A:
You can run Grails in 256 megs of RAM; many members of the community are doing so. That being said, on either platform you want much more RAM than that to make sure you're performant. I might also recommend checking out www.linode.com: you can get quality hosting for a very reasonable cost, and adding a bit of RAM for Grails will not break your budget. Also, if you're interested in cloud-based solutions, Morph is hosting Grails apps.
http://developer.mor.ph/grails
I like Django, but for the maturity of the platform and the amount of quality Java work out there in terms of libraries and frameworks, I chose Grails. In truth I think they are both good solutions, but you cannot deny that your options are much greater with Grails.
A:
With Java hosting you don't need to do all the stupid tricks with Apache or nginx; Jetty itself can host everything you need. That's how the guys at www.mor.ph do it, and they find it to be pretty fast.
The memory usage that way is pretty minimal, I host mine on a 256MB Ubuntu server from RapidXen, so it's about $10/month.
I tried developing in Django, and while it runs all the scripts faster (like bootstrapping or test cases), it's not as well-crafted in my opinion.
A:
I think from an operations perspective things are going to be close enough that you can base your decision on other criteria. If you can afford a virtual private server with at least 256 MB RAM you will be able to deploy Grails applications. If the cost seems like a lot check out Sun. They are really pushing hosting solutions based on their product stack and there are some greats deals available. I have free hosting from Layered Tech for a year through Ostatic.
A:
You can host Grails apps cheaply on EATJ:
http://smithnicholas.wordpress.com/2010/09/20/deploying-your-grails-application-on-eatj/
A:
I think Python tends to have lower hosting requirements (i.e., Grails needs a JVM, and most el-cheapo hosts don't provide one, whereas they usually provide Python support). Plus Google App Engine supports Django (to some extent).
But if you've got the dough, Grails is so much better imho.
Q:
How do I correct a file's extension?
I started programming in Python a couple of days ago and I have a problem I haven't been able to solve yet.
I want to correct a file's extension by guessing its mimetype.
I tried this:
new_file_name = mimetypes.guess_extension(mimetypes.guess_type(file_name))
os.rename(file_name, new_file_name)
Why doesn't it work?
A:
mimetypes uses the existing extension to guess the file type, so it cannot help here. Use magic (libmagic, the library behind the Unix file command) instead to examine the contents.
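If pulling in libmagic is too heavy, the same idea can be sketched with the standard library alone by checking a few well-known leading byte signatures ("magic numbers"). The signature table below is illustrative, not exhaustive:

```python
import os

# Illustrative (not exhaustive) table of leading "magic number" signatures.
SIGNATURES = {
    b'\x89PNG\r\n\x1a\n': '.png',
    b'\xff\xd8\xff': '.jpg',
    b'GIF87a': '.gif',
    b'GIF89a': '.gif',
    b'%PDF-': '.pdf',
}

def sniff_extension(path):
    """Guess an extension from the file's contents, not its name."""
    with open(path, 'rb') as f:
        head = f.read(16)  # longest signature above is 8 bytes
    for sig, ext in SIGNATURES.items():
        if head.startswith(sig):
            return ext
    return None

def fix_extension(path):
    """Rename the file so its extension matches the detected content type."""
    ext = sniff_extension(path)
    if ext is None or path.endswith(ext):
        return path  # unknown type, or the extension is already correct
    new_path = os.path.splitext(path)[0] + ext
    os.rename(path, new_path)
    return new_path
```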
Q:
PHP equivalent of Python's func(*[args])
In Python I can do this:
def f(a, b, c):
print a, b, c
f(*[1, 2, 3])
How do you say this in PHP?
A:
Use call_user_func_array:
call_user_func_array('f', array(1, 2, 3));
If you wanted to call a class method, you'd use array($instance, 'f') instead of 'f' and if it was a static class function you'd use array('ClassName', 'f') or 'ClassName::f'. See the callback type docs for details about that.
Q:
How to get running counts for numpy array values?
OK, I think this will be fairly simple, but my numpy-fu is not quite strong enough. I've got an array A of ints; it's tiled N times. I want a running count of the number of times each element has been used.
For example, the following (I've reshaped the array to make the repetition obvious):
[0, 1, 2, 0, 0, 1, 0] \
[0, 1, 2, 0, 0, 1, 0] ...
would become:
[0, 0, 0, 1, 2, 1, 3] \
[4, 2, 1, 5, 6, 3, 7]
This python code does it, albeit inelegantly and slowly:
def running_counts(ar):
    from collections import defaultdict
    counts = defaultdict(lambda: 0)
    def get_count(num):
        c = counts[num]
        counts[num] += 1
        return c
    return [get_count(num) for num in ar]
I can almost see a numpy trick to make this go, but not quite.
Update
Ok, I've made improvements, but still rely on the above running_counts method. The following speeds things up and feels right-track-ish to me:
def sample_counts(ar, repetitions):
    tile_bins = np.histogram(ar, np.max(ar)+1)[0]
    tile_mult = tile_bins[ar]
    first_steps = running_counts(ar)
    tiled = np.tile(tile_mult, repetitions).reshape(repetitions, -1)
    multiplier = np.reshape(np.arange(repetitions), (repetitions, 1))
    tiled *= multiplier
    tiled += first_steps
    return tiled.ravel()
Any elegant thoughts to get rid of running_counts()? Speed is now OK; it just feels a little inelegant.
A:
Here's my take on it:
def countify2(ar):
    ar2 = np.ravel(ar)
    ar3 = np.empty(ar2.shape, dtype=np.int32)
    uniques = np.unique(ar2)
    myarange = np.arange(ar2.shape[0])
    for u in uniques:
        mask = ar2 == u
        # take only as many running indices as there are occurrences of u
        ar3[mask] = myarange[:mask.sum()]
    return ar3
This method is most effective when there are many more elements than there are unique elements.
Yes, it is similar to Sven's, but I really did write it up long before he posted. I just had to run somewhere.
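For completeness, here is a fully vectorized way to get the running counts without any Python-level loop (my own addition, not from the original answers): stable-sort the values so equal elements are grouped, number each element within its group, then scatter those numbers back to the original positions.

```python
import numpy as np

def running_counts_vec(ar):
    ar = np.ravel(ar)
    order = np.argsort(ar, kind='stable')        # groups equal values, keeps order
    sorted_ar = ar[order]
    # index where each run of equal values begins in the sorted array
    starts = np.r_[0, np.flatnonzero(sorted_ar[1:] != sorted_ar[:-1]) + 1]
    run_lengths = np.diff(np.r_[starts, ar.size])
    # position of each element within its own run of equal values
    within = np.arange(ar.size) - np.repeat(starts, run_lengths)
    out = np.empty(ar.size, dtype=np.intp)
    out[order] = within                          # scatter back to original order
    return out
```

On the example from the question, `running_counts_vec(np.tile([0, 1, 2, 0, 0, 1, 0], 2))` yields `[0, 0, 0, 1, 2, 1, 3, 4, 2, 1, 5, 6, 3, 7]`.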
Q:
Efficient Empirical CDF Computation / Storage
I'm trying to precompute the distributions of several random variables. In particular, these random variables are the results of functions evaluated at locations in a genome, so there will be on the order of 10^8 or 10^9 values for each. The functions are pretty smooth, so I don't think I'll lose much accuracy by only evaluating at every 2nd/10th/100th? base or so, but regardless there will be a large number of samples. My plan is to precompute quantile tables (maybe percentiles) for each function and reference these in the execution of my main program to avoid having to compute these distribution statistics in every run.
But I don't really see how I can easily do this: storing, sorting, and reducing an array of 10^9 floats isn't really feasible, but I can't think of another way that doesn't lose information about the distribution. Is there a way of measuring the quantiles of a sample distribution that doesn't require storing the whole thing in memory?
A:
I agree with @katriealex's comment: ask someone w/ a strong statistics background.
You could easily evaluate min/max/mean/std deviation w/o needing to store any significant amount of memory. (note for mean + std deviation: use Knuth's technique:
delta = x - m[n-1]
m[n] = m[n-1] + 1/n * delta
S[n] = S[n-1] + (x[n] - m[n])*delta
mean = m[n]
std dev = sqrt(S[n]/n)
This avoids the floating-point cancellation/overflow problems encountered in the naive calculation of std dev, e.g. taking S1 = the sum of x[k] and S2 = the sum of x[k]^2 and trying to calculate std deviation = sqrt(S2/N - S1^2/N^2). See also Wikipedia.)
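A minimal streaming implementation of the update rule above (the class and names are mine):

```python
import math

class RunningStats:
    """Streaming mean / std deviation via the Knuth/Welford update rule."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.s = 0.0    # running sum of squared deviations from the mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.s += (x - self.mean) * delta

    def std(self):
        # population standard deviation, sqrt(S[n]/n) as above
        return math.sqrt(self.s / self.n) if self.n else 0.0
```

Memory use is constant no matter how many of the 10^9 values you push through it.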
There are probably other stream-oriented algorithms for computing higher characteristic moments of the distribution, but I don't know what they are.
Or alternatively, you could also use histogramming techniques with enough bins to characterize the distribution.
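The histogramming idea can be sketched like this: with a fixed value range, one pass over the stream fills the bins, and quantiles are read off the cumulative counts with accuracy limited by the bin width. This sketch assumes min/max of the values are known up front:

```python
import numpy as np

def streaming_quantiles(chunks, lo, hi, nbins=10000, qs=(0.25, 0.5, 0.75)):
    """Approximate quantiles of a stream of array chunks via a fixed histogram."""
    edges = np.linspace(lo, hi, nbins + 1)
    counts = np.zeros(nbins, dtype=np.int64)
    total = 0
    for chunk in chunks:              # one pass; memory stays O(nbins)
        counts += np.histogram(chunk, bins=edges)[0]
        total += len(chunk)
    cdf = np.cumsum(counts) / float(total)
    # for each q, report the right edge of the first bin whose cumulative mass >= q
    return [edges[np.searchsorted(cdf, q) + 1] for q in qs]
```

The error of each reported quantile is at most one bin width, (hi - lo) / nbins.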
Q:
python Process._bootstrap() / .start()
How does Python's multiprocessing.Process.start() method work? In particular, is there any documentation for Process._bootstrap(), and if so, where?
A:
I'm not sure you need to mess with _bootstrap. If you are looking for a basic idea on how to use the Process class, I would take a look at:
http://pymotw.com/2/multiprocessing/basics.html
http://pymotw.com/2/multiprocessing/communication.html
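For quick orientation, the basic start() flow looks like this (a minimal sketch: start() spawns a child process whose entry point is _bootstrap(), which in turn calls your run(), i.e. the target):

```python
from multiprocessing import Process, Queue

def worker(q):
    # This body is what run() executes inside the child, after _bootstrap()
    # has finished setting the child process up.
    q.put('hello from the child')

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()       # spawn the child; the child enters _bootstrap()
    print(q.get())  # prints: hello from the child
    p.join()        # wait for the child; _bootstrap()'s return value is its exit code
```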
A:
Here's the code, straight from my Python 2.7 install:
def _bootstrap(self):
    from . import util
    global _current_process

    try:
        self._children = set()
        self._counter = itertools.count(1)
        try:
            sys.stdin.close()
            sys.stdin = open(os.devnull)
        except (OSError, ValueError):
            pass
        _current_process = self
        util._finalizer_registry.clear()
        util._run_after_forkers()
        util.info('child process calling self.run()')
        try:
            self.run()
            exitcode = 0
        finally:
            util._exit_function()
    except SystemExit, e:
        if not e.args:
            exitcode = 1
        elif type(e.args[0]) is int:
            exitcode = e.args[0]
        else:
            sys.stderr.write(e.args[0] + '\n')
            sys.stderr.flush()
            exitcode = 1
    except:
        exitcode = 1
        import traceback
        sys.stderr.write('Process %s:\n' % self.name)
        sys.stderr.flush()
        traceback.print_exc()

    util.info('process exiting with exitcode %d' % exitcode)
    return exitcode
Q:
CoreGraphics not found by Python on Mac OS
I'm trying to run some of the Quartz demos in /Developer/Examples/Quartz/Python with Mac 10.5.8 and Python 2.6. However, I'm getting errors that CoreGraphics isn't found.
Traceback (most recent call last):
File "circle.py", line 38, in <module>
from CoreGraphics import *
ImportError: No module named CoreGraphics
In looking at Apple's documentation, isn't this supposed to be baked in? http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_python/dq_python.html#//apple_ref/doc/uid/TP30001066-CH218-TPXREF101
A:
The CoreGraphics wrapper is supplied by Apple as part of the Apple-supplied Python in OS X. Since in OS X 10.5 there is no Apple-supplied version of Python 2.6 (Apple supplies 2.5 and 2.3 there), you must be using a non-Apple version, possibly from a python.org installer or MacPorts. They will not have this module. Either switch to using the Apple-supplied 2.5 or take a look at using the open-source PyObjC Quartz bindings.
Q:
App Engine: CPU Over Quota on urlfetch()
Hey. I'm quite new to App Engine. I created a web-based Twitter app which is now running on App Engine and I'm constantly hitting my CPU Over Quota limits. I did a little profiling and I found out that every request consists of two urlfetch queries, each one of which takes up to 2 CPU seconds. That time is probably spent waiting, all the rest of the code is done in under 200 ms (including work with the Datastore). The Quota is for 6.5 hours per day and every request of mine takes approx. 4 CPU seconds. I ran out of the free quota this morning in only a few hours.
What is the way around this? I can't make Twitter respond to my API calls quicker, and I cannot cache the results, since every request is for a different Twitter profile.
Any help is appreciated,
Thanks!
A:
I find it confusing that time spent in urlfetch waiting for a remote response is counted towards your CPU quota, given that no actual CPU time is spent.
But assuming that's really the problem, asynchronous requests may be your solution. At a minimum, you can overlap the two urlfetch requests to proceed simultaneously. Perhaps you find other stuff you can do until the response is back.
A:
You should change the design of your application.
Instead of making requests to Twitter from App Engine for every user request:
Do the request in the user's browser with JavaScript if possible.
After a urlfetch, store Twitter's response in the datastore, since a call to the datastore is faster on the next request. If you can cache something in memcache, even better.
Update the stored data regularly with the help of cron jobs and task queue.
Q:
What does the <> operator do in python?
I just came across this here, always used like this:
if string1.find(string2) <> -1:
    pass
What does the <> operator do, and why not use the usual == or in?
Sorry if that has been answered before, search engines don't like punctuation.
A:
http://docs.python.org/reference/expressions.html#notin says:
The [operators] <> and != are equivalent; for consistency with C, != is preferred. [...] The <> spelling is considered obsolescent.
A:
<> is the same as != although the <> form is deprecated. Your code sample could be more cleanly be written as:
if string2 not in string1:
    pass
A:
<> would mean greater than or less than, essentially 'not equal'.
Q:
what is making try: exit early?
In the following function, what is making try: exit early? If I put the same code outside a def block, it works fine.
tiles = ['095D', '094M']
in_file = 'in_file'
out_file = 'out_file'
expression = ''' "FIELDNAME" LIKE 'STUFF' '''

def foobar(in_file, out_file, expression):
    print in_file, out_file, expression
    try:
        print 'this is trying'
        # pretty print tile list, from http://stackoverflow.com/questions/2399112/python-print-delimited-list
        tiles = ','.join(map(str,tiles))
        print 'made it past tiles!'
        print 'From %s \nselecting %s \ninto %s' % (in_file, tiles, out_file)
    except:
        print 'Made it to the except block!'

foobar(in_file, out_file, expression)
Results:
D:\> python xx-debug.py
in_file out_file "FIELDNAME" LIKE 'STUFF'
this is trying
Made it to the except block!
Results with the same code not in a def:
this is trying
made it past tiles!
From in_file
selecting 095D,094M
into out_file
A:
The reason it's not working is because you defined tiles in the global scope. In the function you're assigning to tiles. This makes tiles a local scoped name in the function. This, in turn, means that the code in the function won't look for tiles in the global scope at all.
In the assignment, you're trying to get tiles (this is before it has been assigned locally.) This results in an exception being raised, since you tried to access an unassigned local variable.
The quick fix is to use global:
...
def foobar(in_file, out_file, expression):
    global tiles
    ...
As others said, don't just catch exceptions without doing something with them. When debugging code, you want exceptions to be thrown so you can find and fix the cause! Either remove the try...except, or make the except take an exception and print useful information about it, like this:
try:
    ...
except Exception, e:
    print 'Oh noes!', e
This may be a lot to read, but you will understand Python much better if you do read it:
http://docs.python.org/reference/executionmodel.html
It explains how Python handles variable definitions in the module scope and function scopes, etc. It also covers exceptions.
A:
Exception output:
Traceback (most recent call last):
File "sof.py", line 19, in <module>
foobar(in_file, out_file, expression)
File "sof.py", line 11, in foobar
tiles = ','.join(map(str,tiles))
UnboundLocalError: local variable 'tiles' referenced before assignment
Now, this happens because tiles is actually defined in the global space. So, your function should look like this:
def foobar(in_file, out_file, expression):
    global tiles
    ...
| what is making try: exit early? | in the following function, what is making try: exit early? If I put the same code outside a def block it works fine.
tiles = ['095D', '094M']
in_file = 'in_file'
out_file = 'out_file'
expression = ''' "FIELDNAME" LIKE 'STUFF' '''
def foobar(in_file, out_file, expression):
    print in_file, out_file, expression
    try:
        print 'this is trying'
        #pretty print tile list, from http://stackoverflow.com/questions/2399112/python-print-delimited-list
        tiles = ','.join(map(str,tiles))
        print 'made it past tiles!'
        print 'From %s \nselecting %s \ninto %s' % (in_file, tiles, out_file)
    except:
        print 'Made it to the except block!'
foobar(in_file, out_file, expression)
Results:
D:\> python xx-debug.py
in_file out_file "FIELDNAME" LIKE 'STUFF'
this is trying
Made it to the except block!
Results with the same code not in a def:
this is trying
made it past tiles!
From in_file
selecting 095D,094M
into out_file
| [
"The reason it's not working is because you defined tiles in the global scope. In the function you're assigning to tiles. This makes tiles a local scoped name in the function. This, in turn, means that the code in the function won't look for tiles in the global scope at all.\nIn the assignment, you're trying to get... | [
3,
3
] | [
"That's an interesting bug you have. So there's a module-scoped tiles outside the function, and since you're not using global, you're creating a new tiles variable inside the function. This is fine. Or it would be fine, except that it appears (when I mess around in interactive Python) that the lvalue of the stateme... | [
-1
] | [
"python"
] | stackoverflow_0004262609_python.txt |
Q:
Segmentation Fault (segfault) when using OGR CreateField() in Python
Receiving a segfault when running this very short script in Ubuntu.
from osgeo import ogr, osr
shpfile = 'Census_County_TIGER00_IN.shp'
def cust_field(field):
    '''cust_field(shpfile, field) creates a field definition, which, by calling cust_field(), can be used to create a field using the CreateField() function.
    cust_field() DOES NOT create a field -- it simply creates a "model" for a field, that can then be called later. It's weird, but that's GDAL/OGR, as far as I can tell.'''
    fieldDefn = ogr.FieldDefn(field, ogr.OFTInteger)
    fieldDefn.SetWidth(14)
    fieldDefn.SetPrecision(6)
    return fieldDefn
ds = ogr.Open(shpfile, 1)
lyr = ds.GetLayerByIndex(0)
field = cust_field("Test")
lyr.CreateField(field)
Everything runs smoothly until that last line, when iPython, normal shell Python and the IDLE command line all dump to a segmentation fault. Is this an error on my end or an issue with the underlying C that I'm not addressing properly?
A:
Is this an error on my end or an issue
with the underlying C that I'm not
addressing properly?
It is probably both. GDAL/OGR's bindings do tend to segfault occasionally, when objects go out of scope and are garbage collected. While this is a known bug, it is unlikely to be fixed any time soon.
Chances are you can find a way to work around this. I can't reproduce this segfault with another shapefile on Windows XP, and the following version of GDAL/OGR:
>>> gdal.VersionInfo('')
'GDAL 1.6.0, released 2008/12/04'
You could try temporarily to refactor the cust_field function into the body of the script like this:
from osgeo import ogr, osr
shpfile = 'Census_County_TIGER00_IN.shp'
ds = ogr.Open(shpfile, 1)
lyr = ds.GetLayerByIndex(0)
fieldDefn = ogr.FieldDefn("Test", ogr.OFTInteger)
fieldDefn.SetWidth(14)
fieldDefn.SetPrecision(6)
lyr.CreateField(fieldDefn)
Let me know if this solves your problem.
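The failure mode being described — a wrapper object outliving the garbage-collected owner of its underlying C data — can be illustrated in pure Python, with a weakref standing in for the C-level pointer (this is an analogy, not actual GDAL/OGR code):

```python
import weakref

class DataSource(object):
    """Stand-in for an OGR data source that owns its layers at the C level."""

def get_layer_badly():
    ds = DataSource()
    return weakref.ref(ds)  # like a Layer holding a pointer into ds

lyr = get_layer_badly()     # ds is collected as the function returns...
print(lyr() is None)        # ...so the "layer" dangles: prints True on CPython
```

In the real bindings the dangling pointer is dereferenced in C, hence a segfault rather than a clean Python error; keeping the data source bound to a variable for as long as the layer is used avoids it.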
| Segmentation Fault (segfault) when using OGR CreateField() in Python | Receiving a segfault when running this very short script in Ubuntu.
from osgeo import ogr, osr
shpfile = 'Census_County_TIGER00_IN.shp'
def cust_field(field):
    '''cust_field(shpfile, field) creates a field definition, which, by calling cust_field(), can be used to create a field using the CreateField() function.
    cust_field() DOES NOT create a field -- it simply creates a "model" for a field, that can then be called later. It's weird, but that's GDAL/OGR, as far as I can tell.'''
    fieldDefn = ogr.FieldDefn(field, ogr.OFTInteger)
    fieldDefn.SetWidth(14)
    fieldDefn.SetPrecision(6)
    return fieldDefn
ds = ogr.Open(shpfile, 1)
lyr = ds.GetLayerByIndex(0)
field = cust_field("Test")
lyr.CreateField(field)
Everything runs smoothly until that last line, when iPython, normal shell Python and the IDLE command line all dump to a segmentation fault. Is this an error on my end or an issue with the underlying C that I'm not addressing properly?
| [
"\nIs this an error on my end or an issue\n with the underlying C that I'm not\n addressing properly?\n\nIt is probably both. GDAL/OGR's bindings do tend to segfault occasionally, when objects go out of scope and are garbage collected. While this is a known bug, it is unlikely to be fixed any time soon.\nChances... | [
5
] | [] | [] | [
"osgeo",
"python"
] | stackoverflow_0004262411_osgeo_python.txt |
Q:
Looking for cross-platform rsync-like functionality in python, such as rsync.py
I am implementing backup scripts in python. I'm trying to keep things cross platform. I hear there is a python based rsync implementation: http://pypi.python.org/pypi/rsync.py
But I can't seem to find it anywhere. All of the download links I find are dead. Does anyone know where I could find the rsync.py program?
At the moment I am using unison for Windows but I would like to try rsync.py
A:
Alternative : pysync - implementation of the rsync and related algorithms in pure Python, and a high speed librsync Python extension
http://freshmeat.net/projects/pysync/
Another alternative: http://code.google.com/p/pyrsync/
A:
Here is the algorithm (not sure if it helps you out):
http://code.activestate.com/recipes/577518-rsync-algorithm/
A:
I know that rdiff-backup is written in python and use an rsync-like algorithm. It use librsync. Note that rdiff-backup is not a replacement for rsync, so it will not fill your needs. But you can take a look at librsync and see how rdiff-backup use it.
A:
Here is another rsync implementation in Python:
http://snipperize.todayclose.com/snippet/py/Rsync-Algorithm-In-Python--188001/
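Whichever library you pick, they are all built around rsync's weak rolling checksum, which can be slid along a file one byte at a time instead of rescanning each block. A minimal sketch of the idea (the modulus and packing are simplified compared to what rsync/librsync really use):

```python
M = 1 << 16

def weak_checksum(block):
    """Weak rsync-style checksum of a byte block, as an (a, b) pair."""
    a = sum(block) % M
    b = sum((len(block) - i) * c for i, c in enumerate(block)) % M
    return a, b

def roll(a, b, n, out_byte, in_byte):
    """Slide an n-byte window one byte right: drop out_byte, add in_byte."""
    a = (a - out_byte + in_byte) % M
    b = (b - n * out_byte + a) % M
    return a, b

data = b'the quick brown fox jumps over the lazy dog'
n = 8
a, b = weak_checksum(data[0:n])
print(roll(a, b, n, data[0], data[n]) == weak_checksum(data[1:n + 1]))  # True
```

Rolling the window gives the same value as recomputing from scratch, which is what makes block matching over a whole file cheap.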
| Looking for cross-platform rsync-like functionality in python, such as rsync.py | I am implementing backup scripts in python. I'm trying to keep things cross platform. I hear there is a python based rsync implementation: http://pypi.python.org/pypi/rsync.py
But I can't seem to find it anywhere. All of the download links I find are dead. Does anyone know where I could find the rsync.py program?
At the moment I am using unison for Windows but I would like to try rsync.py
| [
"Alternative : pysync - implementation of the rsync and related algorithms in pure Python, and a high speed librsync Python extension\n\nhttp://freshmeat.net/projects/pysync/\n\nAnother alternative: http://code.google.com/p/pyrsync/\n",
"Here is the algorythm (not sure if it helps you out):\nhttp://code.activesta... | [
14,
3,
1,
0
] | [] | [] | [
"mirroring",
"python",
"rsync",
"scripting"
] | stackoverflow_0004260767_mirroring_python_rsync_scripting.txt |
Q:
Django ModelForm CheckBox Widget
I currently have an issue, and am likely overlooking something very trivial. I have a field in my model that should allow for multiple choices via a checkbox form (it doesn't have to be a checkbox in the admin screen, just in the form area that the end-user will see). Currently I have the field set up like so:
# Type of Media
MEDIA_CHOICES = (
    ('1', 'Magazine'),
    ('2', 'Radio Station'),
    ('3', 'Journal'),
    ('4', 'TV Station'),
    ('5', 'Newspaper'),
    ('6', 'Website'),
)
media_choice = models.CharField(max_length=25,
                                choices=MEDIA_CHOICES)
I need to take that and make a checkbox selectable field in a form out of it though. When I create a ModelForm, it wants to do a drop down box. So I naturally overrode that field, and I get my checkbox that I want. However, when the form's submitted, it would appear that nothing useful is saved when I look at the admin screen. The database does however show that I have a number of things selected, which is a positive sign. However, how can I get that to reflect in the admin screen properly?
Edit: FWIW I'll gladly accept documentation links as answers, because it would seem I'm just glossing over something obvious.
A:
In such a case, the easiest way is to put the choices into a separate model and then use a ManyToMany relationship. After that, you simply override the ModelForm's widget for that field to use forms.CheckboxSelectMultiple and Django will automatically do the right thing. If you insist on using a CharField, you'll probably have to do something like this snippet.
Regarding the second comment: how are you overriding the widget? This is how I do it and it works flawlessly:
class SomeModelForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(SomeModelForm, self).__init__(*args, **kwargs)
        self.fields['some_field'].widget = forms.CheckboxSelectMultiple()
A:
I've just started to look into widget assignment with ModelForms. In a lot of examples I've seen, piquadrat's included, the Form's __init__ method is overridden.
I find this a little confusing, and just overriding the desired field is more natural for me:
class SomeModelForm(forms.ModelForm):
    some_field = forms.MultipleChoiceField(choices=MEDIA_CHOICES,
                                           widget=forms.CheckboxSelectMultiple)

    class Meta:
        model = SomeModel
Note: I'm using Django 1.1.
A:
Using piquadrat's answer worked for me, but needed to add a line to define the queryset for the M2M. See this link.
| Django ModelForm CheckBox Widget | I'm currently have an issue, and likely overlooking something very trivial. I have a field in my model that should allow for multiple choices via a checkbox form (it doesn't have to be a checkbox in the admin screen, just in the form area that the end-user will see). Currently I have the field setup like so:
# Type of Media
MEDIA_CHOICES = (
    ('1', 'Magazine'),
    ('2', 'Radio Station'),
    ('3', 'Journal'),
    ('4', 'TV Station'),
    ('5', 'Newspaper'),
    ('6', 'Website'),
)
media_choice = models.CharField(max_length=25,
                                choices=MEDIA_CHOICES)
I need to take that and make a checkbox selectable field in a form out of it though. When I create a ModelForm, it wants to do a drop down box. So I naturally overrode that field, and I get my checkbox that I want. However, when the form's submitted, it would appear that nothing useful is saved when I look at the admin screen. The database does however show that I have a number of things selected, which is a positive sign. However, how can I get that to reflect in the admin screen properly?
Edit: FWIW I'll gladly accept documentation links as answers, because it would seem I'm just glossing over something obvious.
| [
"In such a case, the easiest way is to put the choices into a separate model and then use a ManyToMany relationship. After that, you simply override the ModelForm's widget for that field to use forms.CheckboxSelectMultiple and Django will automatically do the right thing. If you insist to use a CharField, you'll pr... | [
16,
1,
0
] | [] | [] | [
"django",
"django_forms",
"django_models",
"python"
] | stackoverflow_0001268209_django_django_forms_django_models_python.txt |
Q:
Install Python 3.1 on Mac OS X version 10.5.8
I use Windows. I wrote a Python 3.1 script that my Mac-using friend would like to run, but we can't get Python 3.1 to work right on his Mac. I think the problem is that the Python 3.1 framework is not being installed. Here's exactly what I did and why I think this is the problem.
I downloaded Python 3.1.2 from the Python download page (this file). I opened the file, then opened "Python.mpkg". I proceeded with the install wizard, and I made sure to check the box to install "Shell profile updater" during the wizard. I went to "/Applications/Python 3.1" and double-clicked "Update Shell Profile.command".
Next I selected the python script I wrote and selected "File", "Get Info" in the menu bar.
Under "Open With" I selected "PythonLauncher" from "/Applications/Python 3.1". I then clicked the "Change All" button. Now I double-clicked my program to run it, but it was run by Python 2.5.1 instead of Python 3.1. (I'm sure of this, I wrote a program to "print(sys.version)".)
So now I tried to figure out why the "PythonLauncher" from "/Applications/Python 3.1" is using Python 2.5.1. I opened "PythonLauncher" and found that the interpreter for "Python Script" is "/usr/bin/pythonw". So I went to "/usr/bin/" and discovered that "pythonw" was an alias pointing to "/System/Library/Frameworks/Python.framework/Versions/2.5/bin/pythonw2.5". Obviously this should be version 3.1 instead. So I went to "/System/Library/Frameworks/Python.framework/Versions/" and discovered that the only sub-folders are "2.3" and "2.5". Where's 3.1?
A:
Take a look at PythonBrew. It made installing Python on my Mac a lot easier.
Also this might help:
Is there a Python Equivalent of Ruby's RVM?
A:
Python Launcher.app is a somewhat neglected app; there has been some discussion about removing it altogether because it could be somewhat of a security risk if someone downloads arbitrary Python scripts from the web. It can also be kind of hit or miss if you have multiple Python versions installed. And many people just run Python from a terminal window command line so they don't need Python Launcher.app and that's probably the safest thing to do. To do so, you should first run the Update Shell Profile command in /Applications/Python 3.1 which will ensure that the proper Python framework bin directory is added to your shell path. Then you can just type:
$ python3 /path/to/script.py
That said, you can make Python Launcher work by changing the interpreter path to:
/Library/Frameworks/Python.framework/Versions/3.1/bin/python3
but I discourage you from doing so.
Another better GUI option is to launch IDLE.app and drag-and-drop files onto it in the dock or open them in its File menu.
A:
The versions in /System/Library/Frameworks/Python.framework/... were put there as part of OS X. They should be left alone. (See this question.)
.dmg files downloaded from python.org install to /Library/Frameworks/Python.framework/.... These are user-installed, so you can install/uninstall/move however you like. These installers also create symlinks in /usr/local/bin.
You can add either /Library/.../3.1/bin or /usr/local/bin to your path in your shell if you want python3 to be on your path.
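For example, a one-liner in ~/.bash_profile puts the python.org 3.1 framework first on the path (the framework path below is the python.org installer's default location; adjust the version number as needed):

```shell
# prepend the python.org framework's bin directory to PATH
PYBIN="/Library/Frameworks/Python.framework/Versions/3.1/bin"
export PATH="$PYBIN:$PATH"
echo "$PATH"
```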
Install Python 3.1 on Mac OS X version 10.5.8 | I use Windows. I wrote a Python 3.1 script that my Mac-using friend would like to run, but we can't get Python 3.1 to work right on his Mac. I think the problem is that the Python 3.1 framework is not being installed. Here's exactly what I did and why I think this is the problem.
I downloaded Python 3.1.2 from the Python download page (this file). I opened the file, then opened "Python.mpkg". I proceeded with the install wizard, and I made sure to check the box to install "Shell profile updater" during the wizard. I went to "/Applications/Python 3.1" and double-clicked "Update Shell Profile.command".
Next I selected the python script I wrote and selected "File", "Get Info" in the menu bar.
Under "Open With" I selected "PythonLauncher" from "/Applications/Python 3.1". I then clicked the "Change All" button. Now I double-clicked my program to run it, but it was run by Python 2.5.1 instead of Python 3.1. (I'm sure of this, I wrote a program to "print(sys.version)".)
So now I tried to figure out why the "PythonLauncher" from "/Applications/Python 3.1" is using Python 2.5.1. I opened "PythonLauncher" and found that the interpreter for "Python Script" is "/usr/bin/pythonw". So I went to "/usr/bin/" and discovered that "pythonw" was an alias pointing to "/System/Library/Frameworks/Python.framework/Versions/2.5/bin/pythonw2.5". Obviously this should be version 3.1 instead. So I went to "/System/Library/Frameworks/Python.framework/Versions/" and discovered that the only sub-folders are "2.3" and "2.5". Where's 3.1?
| [
"Take a look at PythonBrew. It made installing Python on my Mac a lot easier.\nAlso this might help:\nIs there a Python Equivalent of Ruby's RVM?\n",
"Python Launcher.app is a somewhat neglected app; there have been some discussion about removing it all together because it could be somewhat of a security risk if ... | [
2,
1,
0
] | [] | [] | [
"installation",
"macos",
"python"
] | stackoverflow_0004261429_installation_macos_python.txt |
Q:
Is it possible to have a Python class decorator with arguments?
What I'd like to do is this:
@add_cache(cache_this, cache_that, cache_this_and_that)
class MyDjangoModel(models.Model):
    blah
But this fails because it seems that the first argument is implicitly the actual class object. Is it possible to get around this or am I forced to use the ugly syntax as opposed to this beautiful syntax?
A:
Your arg_cache definition needs to do something like:
def arg_cache(cthis, cthat, cthisandthat):
    def f(obj):
        obj.cache_this = cthis
        obj.cache_that = cthat
        obj.thisandthat = cthisandthat
        return obj
    return f
@arg_cache(cache_this, cache_that, cache_this_and_that)
...
The example assumes you just want to set some properties on the decorated class. You could of course do something else with the three parameters.
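Here is the same pattern end-to-end as a runnable sketch (two parameters for brevity; all names are illustrative):

```python
def add_cache(cache_this, cache_that):
    def decorate(cls):
        # runs once, at class-definition time, with the class as argument
        cls.cache_this = cache_this
        cls.cache_that = cache_that
        return cls
    return decorate

@add_cache('spam', 'eggs')
class MyModel(object):
    pass

print(MyModel.cache_this, MyModel.cache_that)  # spam eggs
```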
A:
Write a callable that returns an appropriate decorator.
| Is it possible to have a Python class decorator with arguments? | What I'd like to do is this:
@add_cache(cache_this, cache_that, cache_this_and_that)
class MyDjangoModel(models.Model):
    blah
But this fails because it seems that the first argument is implicitly the actual class object. Is it possible to get around this or am I forced to use the ugly syntax as opposed to this beautiful syntax?
| [
"Your arg_cache definition needs to do something like:\ndef arg_cache(cthis, cthat, cthisandthat):\n def f(obj):\n obj.cache_this = cthis\n obj.cache_that = cthat\n obj.thisandthat = cthisandthat\n return obj\n return f\n\n@arg_cache(cache_this, cache_that, cache_this_and_that)\n...\n\nT... | [
10,
2
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0004262892_decorator_python.txt |
Q:
Keeping in-memory data in sync with a file for long running Python script
I have a Python (2.7) script that acts as a server and it will therefore run for very long periods of time. This script has a bunch of values to keep track of which can change at any time based on client input.
What I'm ideally after is something that can keep a Python data structure (with values of types dict, list, unicode, int and float – JSON, basically) in memory, letting me update it however I want (except referencing any of the reference type instances more than once) while also keeping this data up-to-date in a human-readable file, so that even if the power plug was pulled, the server could just start up and continue with the same data.
I know I'm basically talking about a database, but the data I'm keeping will be very simple and probably less than 1 kB most of the time, so I'm looking for the simplest solution possible that can provide me with the described data integrity. Are there any good Python (2.7) libraries that let me do something like this?
A:
Well, since you know we're basically talking about a database, albeit a very simple one, you probably won't be surprised that I suggest you have a look at the sqlite3 module.
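For data this small, sqlite3 can serve as a tiny durable key-value store in about a dozen lines (shown here with an in-memory database; pass a filename instead for persistence — the names are illustrative):

```python
import json
import sqlite3

db = sqlite3.connect(':memory:')  # e.g. 'state.db' for an on-disk store
db.execute('CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)')

def put(key, obj):
    with db:  # runs in a transaction: committed on success, rolled back on error
        db.execute('REPLACE INTO state (key, value) VALUES (?, ?)',
                   (key, json.dumps(obj)))

def get(key):
    row = db.execute('SELECT value FROM state WHERE key = ?', (key,)).fetchone()
    return json.loads(row[0]) if row else None

put('config', {'clients': 3, 'volume': 0.5})
print(get('config'))
```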
A:
Any reason for the human readable requirement?
I would suggest looking at sqlite for a simple database solution, or at pickle for a simple way to serialise objects and write them to disk. Neither is particularly human readable though.
Other options are JSON, or XML as you hinted at - use the built in json module to serialize the objects then write that to disk. When you start up, check for the presence of that file and load the data if required.
From the docs:
>>> import json
>>> print json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4)
{
    "4": 5,
    "6": 7
}
A:
I agree that you don't need a full-blown database, as it seems that all you want is atomic file writes. You need to solve this problem in two parts, serialisation/deserialisation, and the atomic writing.
For the first section, json, or pickle are probably suitable formats for you. JSON has the advantage of being human readable. It doesn't seem as though this the primary problem you are facing though.
Once you have serialised your object to a string, use the following procedure to write a file to disk atomically, assuming a single concurrent writer (at least on POSIX, see below):
import os, platform

backup_filename = "output.back.json"
filename = "output.json"
serialised_str = json.dumps(...)

with open(backup_filename, 'wb') as f:
    f.write(serialised_str)

if platform.system() == 'Windows':
    os.unlink(filename)
os.rename(backup_filename, filename)
While os.rename will overwrite an existing file and is atomic on POSIX, this is sadly not the case on Windows. On Windows, there is the possibility that os.unlink will succeed but os.rename will fail, meaning that you have only backup_filename and no filename. If you are targeting Windows, you will need to consider this possibility when you are checking for the existence of filename.
If there is a possibility of more than one concurrent writer, you will have to consider a synchronisation construct.
A:
Since you mentioned your data is small, I'd go with a simple solution and use the pickle module, which lets you dump a python object into a line very easily.
Then you just set up a Thread that saves your object to a file in defined time intervals.
Not a "libraried" solution, but - if I understand your requirements - simple enough for you not to really need one.
EDIT: you mentioned you wanted to cover the case that a problem occurs during the write itself, effectively making it an atomic transaction. In this case, the traditional way to go is using "Log-based recovery". It is essentially writing a record to a log file saying that "write transaction started" and then writing "write transaction committed" when you're done. If a "started" has no corresponding "commit", then you rollback.
In this case, I agree that you might be better off with a simple database like SQLite. It might be a slight overkill, but on the other hand, implementing atomicity yourself might be reinventing the wheel a little (and I didn't find any obvious libraries that do it for you).
If you do decide to go the crafty way, this topic is covered on the Process Synchronization chapter of Silberschatz's Operating Systems book, under the section "atomic transactions".
A very simple (though maybe not "transactionally perfect") alternative would be just to record to a new file every time, so that if one corrupts you have a history. You can even add a checksum to each file to automatically determine if it's broken.
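The checksum-per-file idea is a few lines with hashlib: write a digest line ahead of the JSON payload and verify it on load (a sketch; the file layout here is just one possible choice, and the rename is atomic on POSIX only):

```python
import hashlib
import json
import os

def save(obj, path):
    payload = json.dumps(obj, sort_keys=True)
    digest = hashlib.sha256(payload.encode('utf-8')).hexdigest()
    tmp = path + '.tmp'
    with open(tmp, 'w') as f:
        f.write(digest + '\n' + payload)
    os.rename(tmp, path)  # atomic replace on POSIX

def load(path):
    with open(path) as f:
        digest, payload = f.read().split('\n', 1)
    if hashlib.sha256(payload.encode('utf-8')).hexdigest() != digest:
        raise ValueError('checksum mismatch: %s is corrupt' % path)
    return json.loads(payload)
```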
A:
You are asking how to implement a database which provides ACID guarantees, but you haven't provided a good reason why you can't use one off-the-shelf. SQLite is perfect for this sort of thing and gives you those guarantees.
However, there is KirbyBase. I've never used it and I don't think it makes ACID guarantees, but it does have some of the characteristics you're looking for.
| Keeping in-memory data in sync with a file for long running Python script | I have a Python (2.7) script that acts as a server and it will therefore run for very long periods of time. This script has a bunch of values to keep track of which can change at any time based on client input.
What I'm ideally after is something that can keep a Python data structure (with values of types dict, list, unicode, int and float – JSON, basically) in memory, letting me update it however I want (except referencing any of the reference type instances more than once) while also keeping this data up-to-date in a human-readable file, so that even if the power plug was pulled, the server could just start up and continue with the same data.
I know I'm basically talking about a database, but the data I'm keeping will be very simple and probably less than 1 kB most of the time, so I'm looking for the simplest solution possible that can provide me with the described data integrity. Are there any good Python (2.7) libraries that let me do something like this?
| [
"Well, since you know we're basically talking about a database, albeit a very simple one, you probably won't be surprised that I suggest you have a look at the sqlite3 module.\n",
"Any reason for the human readable requirement?\nI would suggest looking at sqlite for a simple database solution, or at pickle for a ... | [
5,
2,
2,
1,
0
] | [] | [] | [
"database",
"file",
"memory",
"python",
"python_2.7"
] | stackoverflow_0004262403_database_file_memory_python_python_2.7.txt |
Q:
Running Django tutorial example with mod_wsgi?
I installed wsgi (mod_wsgi), and could run the simple application by calling http://localhost/myapp.
WSGIScriptAlias /myapp /Library/WebServer/Documents/wsgi/scripts/myapp.wsgi
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World! WSGI working perfectly!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
How can I run django example?
I found this example that has the simple example.
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'my.settings' # ???
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
mysite/
    __init__.py
    manage.py
    settings.py
    urls.py
Django generated the three python files, and I see a settings.py that has setup info.
The project is in '/ABC/DEF/mysite' directory.
I modified the code as follows, and named it myapp.wsgi.
import os
import sys
sys.path.append('/ABC/DEF/mysite')
sys.path.append('/ABC/DEF')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Even though I could run the original django example from the tutorial that uses "python manage.py runserver 8080", I got an error running the modified example with mod_wsgi.
The apache2/log file has the following.
[Wed Nov 24 09:02:41 2010] [error] [client 127.0.0.1] ImportError: Could not import settings 'mysite.settings' (Is it on sys.path? Does it have syntax errors?): No module named mysite.settings
As far as I know, the directory/project name is mysite, and it has the settings.py. And the directory is in the sys.path. So, I don't know why mysite.settings is not found.
What's wrong with my myapp.wsgi?
SOLVED
The following code works fine.
The other thing I changed: the django project should not be in my home directory. When I moved the project to the www doc directory, everything worked fine.
import os
import sys
mysite = 'SOMEWHERE/django'
if mysite not in sys.path: sys.path.insert(0, mysite)
mysite = 'SOMEWHERE/scripts'
if mysite not in sys.path: sys.path.insert(0, mysite)
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
A:
What you need to do is add either the project directory or the directory the project is in to sys.path, and then point the settings to mysite.settings if you've done the latter.
EDIT:
"Django settings"
EDIT 2:
How to set the Django settings module
EDIT 3:
The settings module should be the module, not the filename. And if it's 'settings' then you don't need to set it.
| Running Django tutorial example with mod_wsgi? | I installed wsgi (mod_wsgi), and could run the simple application by calling http://localhost/myapp.
WSGIScriptAlias /myapp /Library/WebServer/Documents/wsgi/scripts/myapp.wsgi
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World! WSGI working perfectly!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
How can I run django example?
I found this example that has the simple example.
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'my.settings' # ???
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
mysite/
    __init__.py
    manage.py
    settings.py
    urls.py
Django generated the three python files, and I see a settings.py that has setup info.
The project is in '/ABC/DEF/mysite' directory.
I modified the code as follows, and named it myapp.wsgi.
import os
import sys
sys.path.append('/ABC/DEF/mysite')
sys.path.append('/ABC/DEF')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Even though I could run the original django example from the tutorial that uses "python manage.py runserver 8080", I got an error running the modified example with mod_wsgi.
The apache2/log file has the following.
[Wed Nov 24 09:02:41 2010] [error] [client 127.0.0.1] ImportError: Could not import settings 'mysite.settings' (Is it on sys.path? Does it have syntax errors?): No module named mysite.settings
As far as I know, the directory/project name is mysite, and it has the settings.py. And the directory is in the sys.path. So, I don't know why mysite.settings is not found.
What's wrong with my myapp.wsgi?
SOLVED
The following code works fine.
The other thing I changed: the django project should not be in my home directory. When I moved the project to the www doc directory, everything worked fine.
import os
import sys
mysite = 'SOMEWHERE/django'
if mysite not in sys.path: sys.path.insert(0, mysite)
mysite = 'SOMEWHERE/scripts'
if mysite not in sys.path: sys.path.insert(0, mysite)
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
| [
"What you need to do is add either the project directory or the directory the project is in to sys.path, and then point the settings to mysite.settings if you've done the latter.\nEDIT:\n\"Django settings\"\nEDIT 2:\nHow to set the Django settings module\nEDIT 3:\nThe settings module should be the module, not the f... | [
2
] | [] | [] | [
"django",
"mod_wsgi",
"python"
] | stackoverflow_0004263001_django_mod_wsgi_python.txt |
Q:
Recording Video with FFmpeg gives "The filename, directory name, or volume label syntax is incorrect."
I'm trying to record video (with audio!) in this way:
ffmpeg = "C:\bin\ffmpeg.exe"
cmd = '%s -r 15 -f vfwcap -i 0 c:/output2.mpg' % (ffmpeg)
os.system(cmd)
And I get the error: "The filename, directory name, or volume label syntax is incorrect." I think that this is a problem with vfwcap, but I don't know how to fix it.
Any ideas? Maybe something else is wrong?
A:
I think mermoz must be joking with you. You've got a few problems here. Python uses '\' as an escape character, so it won't find your file unless you either double the backslashes up or switch to forward slashes, as you've then done in your cmd. The syntax of your ffmpeg command line is also completely wrong. You're saying you want to set the frame rate to 15 and format vfwcap to your input file, which is "0." Also you shouldn't use os.system. Use subprocess.Popen and pass your commands as lists. Not sure if this question is serious, but if so, this should start you in the right direction.
A:
isn't it just the small c in "c:/output.mpg" instead of "C:/output.mpg"?
A:
The direct problem is that the \ in the command line is being interpreted as an escape character; use either c:\\ (doubled backslashes) or c:/
As Profane says, you have the output file flags wrong for ffmpeg.
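The subprocess approach the first answer recommends looks roughly like this — passing the command as a list avoids both the backslash-escaping and the quoting problems (the ffmpeg path and flags are the question's and machine-specific, so they are left commented out):

```python
import subprocess

def run(cmd):
    """Run a command given as an argument list; return (exit_code, output)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    return proc.returncode, out

# run([r"C:\bin\ffmpeg.exe", "-r", "15", "-f", "vfwcap",
#      "-i", "0", "C:/output2.mpg"])
```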
| Recording Video with FFmpeg gives "The filename, directory name, or volume label syntax is incorrect." | I'm trying to record video (with audio!) in this way:
ffmpeg = "C:\bin\ffmpeg.exe"
cmd = '%s -r 15 -f vfwcap -i 0 c:/output2.mpg' % (ffmpeg)
os.system(cmd)
And I have the error: "The filename, directory name, or volume label syntax is incorrect." I think that this is a problem with vfwcap, but I don't know how to fix it.
Any ideas? Maybe something else is wrong?
| [
"I think mermoz must be joking with you. You've got a few problems here. Python uses '\\' as an escape character, so it won't find your file unless you either double them up or switch to forward slashes, as you've then done in your cmd. The syntax of your ffmpeg command line is also completely wrong. You're say... | [
1,
0,
0
] | [] | [] | [
"ffmpeg",
"python",
"video_streaming"
] | stackoverflow_0003372820_ffmpeg_python_video_streaming.txt |
Q:
GAE email me errors
Can GAE be configured to bust me an email when there's an error?
A:
I think the best you can do is to have in your main function some code like...:
try:
...normal body of your main goes here...
except:
from google.appengine.api import mail
import sys
mail.send_mail(sender="Your GAE App <yourappname@example.com>",
to="You <bobobobo@example.com>",
subject="GAE App error",
body="""
Your App Engine app raised an exception:
%s
""" % sys.exc_info()[:2])
(of course, you can do better formatting on the exception information, etc, etc).
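For the "better formatting" mentioned above, the standard traceback module gives you the full stack trace instead of just sys.exc_info()[:2]; this sketch only builds the message body (the mail.send_mail call stays as in the answer):

```python
import traceback

def error_report():
    """Build a mail body with the full traceback of the current exception."""
    return "Your App Engine app raised an exception:\n\n%s" % traceback.format_exc()

try:
    1 / 0   # stand-in for the real request handling
except Exception:
    body = error_report()   # pass this as the body= argument to mail.send_mail
```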
A:
Here's an example of using GAE to send an email. You could build on this example to catch exceptions and send an email to yourself....
http://www.fishbonecloud.com/2010/11/automated-email-using-google-app-engine.html
| GAE email me errors | Can GAE be configured to bust me an email when there's an error?
| [
"I think the best you can do is to have in your main function some code like...:\ntry:\n ...normal body of your main goes here...\nexcept:\n from google.appengine.api import mail\n import sys\n\n mail.send_mail(sender=\"Your GAE App <yourappname@example.com>\",\n to=\"You <bobobobo@example.com>\",\... | [
4,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0002118524_google_app_engine_python.txt |
Q:
Google App Engine: Intro to their Data Store API for people with SQL Background?
Does anyone have any good information, aside from the Google App Engine docs provided by Google, that gives a good overview for people with an MS SQL background on porting their knowledge and using the Google App Engine Data Store API effectively.
For Example, if you have a self created Users Table and a Message Table
Where there is a relationship between Users and Message (connected by the UserID), how would this structure be represented in Google App Engine?
SELECT * FROM Users INNER JOIN Message ON Users.ID = Message.UserID
A:
Here is a good link: One to Many Join using Google App Engine.
http://blog.arbingersys.com/2008/04/google-app-engine-one-to-many-join.html
Here is another good link: Many to Many Join using Google App Engine:
http://blog.arbingersys.com/2008/04/google-app-engine-many-to-many-join.html
Here is a good discussion regarding the above two links:
http://groups.google.com/group/google-appengine/browse_thread/thread/e9464ceb131c726f/6aeae1e390038592?pli=1
Personally I find this comment in the discussion very informative about the Google App Engine Data Store:
http://groups.google.com/group/google-appengine/msg/ee3bd373bd31e2c7
At scale you wind up doing a bunch of
things that seem wrong, but that are
required by the numbers we are
running. Go watch the EBay talks. Or
read the posts about how many database
instances FaceBook is running.
The simple truth is, what we learned
about in uni was great for the
business automation apps of small to
medium enterprise applications, where
the load was predictable, and there
was money enough to buy the server
required to handle the load of 50
people doing data entry into an
accounts or business planning and
control app....
Searched around a bit more and came across this Google Doc Article:
http://code.google.com/appengine/articles/modeling.html
App Engine allows the creation of easy
to use relationships between datastore
entities which can represent
real-world things and ideas. Use
ReferenceProperty when you need to
associate an arbitrary number of
repeated types of information with a
single entity. Use key-lists when you
need to allow lots of different
objects to share other instances
between each other. You will find that
these two approaches will provide you
with most of what you need to create
the model behind great applications.
A:
Can I supplement the excellent answer further above with a link to a video:
http://sites.google.com/site/io/building-scalable-web-applications-with-google-app-engine
It's a great talk by Google's Brett Slatkin who talks for an hour about the special way you need to think about your application before you can expect it to scale well. There are some genuine WTFs (such as no count() in db queries) that will cause you to struggle if you are coming from a relational background.
A:
I think this is the basics : Keys and Entity Groups
look for it in appengine docs. (I'm new here so can't post a link)
A:
I have worked on it but am not an expert. Google App Engine is a very good thing and it is the future, as it implements Platform as a Service and Software as a Service. Google App Engine provides a non-relational database, so you can't really write relationships here.
Regards,
Gaurav J
A:
These links are great, but are predominantly python biased, I am using GWT, and therefore have to use the java flavour of GAE, does anyone have any examples of how to achieve these "join" equivalencies in the java version of GAE?
Cheers,
John
A:
The standalone GAE SDK is pretty difficult to use for putting data into and retrieving data from the Google App Engine data store.
"Objectify" is a GAE extension that makes these operations much easier. The Objectify wiki and source code can be found here. I strongly recommend using Objectify in your GAE project.
http://code.google.com/p/objectify-appengine/
Here are a couple of tutorials on using Objectify with the app engine. Follow these tutorials and you will be storing and retrieving data in no time.
http://www.fishbonecloud.com/2010/11/use-objectify-to-store-data-in-google.html
| Google App Engine: Intro to their Data Store API for people with SQL Background? | Does anyone have any good information, aside from the Google App Engine docs provided by Google, that gives a good overview for people with an MS SQL background on porting their knowledge and using the Google App Engine Data Store API effectively.
For Example, if you have a self created Users Table and a Message Table
Where there is a relationship between Users and Message (connected by the UserID), how would this structure be represented in Google App Engine?
SELECT * FROM Users INNER JOIN Message ON Users.ID = Message.UserID
| [
"Here is a good link: One to Many Join using Google App Engine.\nhttp://blog.arbingersys.com/2008/04/google-app-engine-one-to-many-join.html\nHere is another good link: Many to Many Join using Google App Engine:\nhttp://blog.arbingersys.com/2008/04/google-app-engine-many-to-many-join.html\nHere is a good discussion... | [
13,
2,
1,
0,
0,
0
] | [] | [] | [
"google_app_engine",
"gql",
"python",
"sql"
] | stackoverflow_0000739490_google_app_engine_gql_python_sql.txt |
Q:
Screen Scraping in Python
I am currently trying to screen scrape a website to put info into a dictionary. I am using urllib2 and BeautifulSoup. I cannot figure out how to parse the web page's source to get what I want and read it into a dictionary. The info I want is displayed as <title>Nov 24 | 8:00AM | Sole In. Peace Out. </title> in the source code. I am thinking of using a regular expression to read in the line, convert the time and date to a datetime, and then parse the line to read the data into a dictionary. The dictionary output should be something along the lines of
[
{
"date": datetime(2010, 11, 24, 23, 59),
"title": "Sole In. Peace Out.",
}
]
Current Code:
from BeautifulSoup import BeautifulSoup
import re
import urllib2
url = 'http://events.cmich.edu/RssStudentEvents.aspx'
response = urllib2.urlopen(url)
html = response.read()
soup = BeautifulSoup(html)
Sorry for the wall of text, and thank you for your time and help!
A:
Something like this..
titletext = soup.findAll('title')[1].string #assuming it's the second title element.. I've seen worse in html
import datetime
datetext = titletext.split("|")[0].strip()  # strip() so strptime doesn't choke on surrounding spaces
title = titletext.split("|")[2].strip()
date = datetime.datetime.strptime(datetext, "%b %d").replace(year=2010)
the_final_dict = {'date':date,'title':title}
findAll() returns all instances of the search element.. so you can just treat it like any other list.
That should just about do it :)
Edit: small fix
Edit2: fix from comments below
A:
EDIT: I did not realize it's not a HTML page, so take a look at the correction by Chris. The below would work for HTML pages.
You can use:
titleTag = soup.html.head.title
or:
soup.findAll('title')
Take a look here:
http://www.crummy.com/software/BeautifulSoup/documentation.html
A:
>>> soup.findAll('item')[1].title
<title>Nov 24 | 8:00AM | Sole In. Peace Out. </title>
>>> soup.findAll('item')[1].title.text
u'Nov 24 | 8:00AM | Sole In. Peace Out.'
>>> date, _, title = soup.findAll('item')[1].title.text.rpartition(' | ')
>>> date
u'Nov 24 | 8:00AM'
>>> title
u'Sole In. Peace Out.'
>>> from datetime import datetime
>>> date = datetime.strptime(date, "%b %d | %I:%M%p").replace(year=datetime.now().year)
>>> dict(date=date, title=title)
{'date': datetime.datetime(2010, 11, 24, 8, 0), 'title': u'Sole In. Peace Out.'}
Note that that's also including the time of day.
And then, as I think you want all the items,
>>> from datetime import datetime
>>> matches = []
>>> for item in soup.findAll('item'):
... date, _, title = item.title.text.rpartition(' | ')
... matches.append(dict(date=datetime.strptime(date, '%b %d | %I:%M%p').replace(year=datetime.now().year), title=title))
...
>>> from pprint import pprint
>>> pprint(matches)
[{'date': datetime.datetime(2010, 11, 24, 8, 0),
'title': u'The Americana Indian\u2014American Indian in the American Imagination'},
{'date': datetime.datetime(2010, 11, 24, 8, 0),
'title': u'Sole In. Peace Out.'},
...
{'date': datetime.datetime(2010, 12, 8, 8, 0),
'title': u'Apply to be an FYE Mentor'}]
If you wanted more complex year handling you could do it. You get the idea.
Final addition: a generator would be a nice way of using this.
from datetime import datetime
import urllib2
from BeautifulSoup import BeautifulSoup
def whatevers():
soup = BeautifulSoup(urllib2.urlopen('http://events.cmich.edu/RssStudentEvents.aspx').read())
for item in soup.findAll('item'):
date, _, title = item.title.text.rpartition(' | ')
yield dict(date=datetime.strptime(date, '%b %d | %I:%M%p').replace(year=datetime.now().year), title=title)
for match in whatevers():
pass # Use match['date'], match['title']. a namedtuple might also be neat here.
| Screen Scraping in Python | I am currently trying to screen scrape a website to put info into a dictionary. I am using urllib2 and BeautifulSoup. I cannot figure out how to parse the web page's source to get what I want and read it into a dictionary. The info I want is displayed as <title>Nov 24 | 8:00AM | Sole In. Peace Out. </title> in the source code. I am thinking of using a regular expression to read in the line, convert the time and date to a datetime, and then parse the line to read the data into a dictionary. The dictionary output should be something along the lines of
[
{
"date": datetime(2010, 11, 24, 23, 59),
"title": "Sole In. Peace Out.",
}
]
Current Code:
from BeautifulSoup import BeautifulSoup
import re
import urllib2
url = 'http://events.cmich.edu/RssStudentEvents.aspx'
response = urllib2.urlopen(url)
html = response.read()
soup = BeautifulSoup(html)
Sorry for the wall of text, and thank you for your time and help!
| [
"Something like this..\ntitletext = soup.findAll('title')[1].string #assuming it's the second title element.. I've seen worse in html\nimport datetime\ndatetext = titletext.split(\"|\")[0]\ntitle = titletext.split(\"|\")[2]\ndate = datetime.datetime.strptime(datetext,\"%b %d\").replace(year=2010)\nthe_final_dict = ... | [
1,
0,
0
] | [] | [] | [
"python",
"regex",
"screen_scraping"
] | stackoverflow_0004263310_python_regex_screen_scraping.txt |
Q:
Django modeling problem, need a subset of foreign key field
I intend to create an app for categories which will have separate category sets (vocabularies) for pages, gallery, product types etc. So there will need to be two models, vocabulary and category.
The categories/models.py code might be something like this:
class Vocabulary(models.Model):
title = models.CharField()
class Category(models.Model):
title = models.CharField()
vocabulary = models.ForeignKey(Vocabulary)
From my pages, blogs, gallery, etc apps how I will need a ForeignKey field to categories:
class Page(models.Model):
title = models.CharField()
content = models.TextField()
category = models.ForeignKey('categories.Category')
This will of course list all available categories in the admin app. If I have a product I want only the product categories to be available. How can I filter the available categories to a specific vocabulary?
I'm learning Django and not really sure where to begin. Maybe I have the whole model wrong? If there are any apps which already do it please let me know.
A:
Filtering of selection like this is done in the form using a queryset, or in the admin interface with limit_choices_to.
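A minimal sketch of the limit_choices_to route mentioned above — the 'products' vocabulary title is a made-up example, so match it to whatever your Vocabulary rows are actually called:

```python
class Product(models.Model):
    title = models.CharField(max_length=100)
    # Only categories whose vocabulary is named "products" will show up
    # in the admin's select box for this field.
    category = models.ForeignKey(
        'categories.Category',
        limit_choices_to={'vocabulary__title': 'products'},
    )
```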
| Django modeling problem, need a subset of foreign key field | I intend to create an app for categories which will have separate category sets (vocabularies) for pages, gallery, product types etc. So there will need to be two models, vocabulary and category.
The categories/models.py code might be something like this:
class Vocabulary(models.Model):
title = models.CharField()
class Category(models.Model):
title = models.CharField()
vocabulary = models.ForeignKey(Vocabulary)
From my pages, blogs, gallery, etc apps how I will need a ForeignKey field to categories:
class Page(models.Model):
title = models.CharField()
content = models.TextField()
category = models.ForeignKey('categories.Category')
This will of course list all available categories in the admin app. If I have a product I want only the product categories to be available. How can I filter the available categories to a specific vocabulary?
I'm learning Django and not really sure where to begin. Maybe I have the whole model wrong? If there are any apps which already do it please let me know.
| [
"Filtering of selection like this is done in the form using a queryset, or in the admin interface with limit_choices_to.\n"
] | [
7
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0004263421_django_django_models_python.txt |
Q:
Unexpected Indent error in Python
I have a simple piece of code that I'm not understanding where my error is coming from. The parser is barking at me with an Unexpected Indent on line 5 (the if statement). Does anyone see the problem here? I don't.
def gen_fibs():
a, b = 0, 1
while True:
a, b = b, a + b
if len(str(a)) == 1000:
return a
A:
If you just copy+pasted your code, then you used a tab on the line with the if statement. Python interprets a tab as 8 spaces and not 4. Don't ever use tabs with python1 :)
1 Or at least don't ever use tabs and spaces mixed. It's highly advisable to use 4 spaces for consistency with the rest of the python universe.
A:
You're mixing tabs and spaces. Tabs are always considered the same as 8 spaces for the purposes of indenting. Run the script with python -tt to verify.
A:
Check if you aren't mixing tabs with spaces or something, because your code pasted verbatim doesn't produce any errors.
A:
The line with a, b = b, a + b is indented with 8 spaces, and the if line is indented with 4 spaces and a tab. Configure your editor to never ever insert tabs ever.
(Python considers a tab to be 8 spaces, but it's easier just not to use them)
| Unexpected Indent error in Python | I have a simple piece of code that I'm not understanding where my error is coming from. The parser is barking at me with an Unexpected Indent on line 5 (the if statement). Does anyone see the problem here? I don't.
def gen_fibs():
a, b = 0, 1
while True:
a, b = b, a + b
if len(str(a)) == 1000:
return a
| [
"If you just copy+pasted your code, then you used a tab on the line with the if statement. Python interprets a tab as 8 spaces and not 4. Don't ever use tabs with python1 :)\n1 Or at least don't ever use tabs and spaces mixed. It's highly advisable to use 4 spaces for consistency with the rest of the python univers... | [
13,
4,
1,
1
] | [
"Check whether your whitespace in front of every line is correctly typed. You may have typed tabs instead of spaces - try deleting all the space and change it so you use only tabs or only spaces.\n"
] | [
-1
] | [
"python"
] | stackoverflow_0004263485_python.txt |
Q:
question about postgresql bind variables
I was looking at the question and decided to try using the bind variables. I use
sql = 'insert into abc2 (intfield,textfield) values (%s,%s)'
a = time.time()
for i in range(10000):
#just a wrapper around cursor.execute
db.executeUpdateCommand(sql,(i,'test'))
db.commit()
and
sql = 'insert into abc2 (intfield,textfield) values (%(x)s,%(y)s)'
for i in range(10000):
db.executeUpdateCommand(sql,{'x':i,'y':'test'})
db.commit()
Looking at the time taken for the two sets, above it seems like there isn't much time difference. In fact, the second one takes longer. Can someone correct me if I've made a mistake somewhere? using psycopg2 here.
A:
The queries are equivalent in Postgresql.
Bind is Oracle lingo. When you use it, the server saves the query plan so the next execution will be a little faster. prepare does the same thing in Postgres.
http://www.postgresql.org/docs/current/static/sql-prepare.html
psycopg2 supports internal (client-side) parameter binding via cursor.executemany() and cursor.execute(), not server-side prepare.
(But don't call it bind to pg people. Call it prepare or they may not know what you mean:)
A:
IMPORTANT UPDATE :
I've looked into the source of all the Python libraries for connecting to PostgreSQL in FreeBSD ports and can say that only py-postgresql does real prepared statements! But it is Python 3+ only.
also py-pg_queue is a fun lib implementing the official DB protocol (python 2.4+)
You've missed the answer to that question about prepared statements: use them as much as possible. "Bind variables" are a better form of this; let's see:
sql_q = 'insert into abc (intfield, textfield) values (?, ?)' # common form
sql_b = 'insert into abc2 (intfield, textfield) values (:x , :y)' # should have driver and db support
so your test should be this:
sql = 'insert into abc2 (intfield, textfield) values (:x , :y)'
for i in range (10000):
    cur.execute(sql, {'x': i, 'y': 'test'})
or this:
def _data(n):
for i in range (n):
yield (i, 'test')
sql = 'insert into abc2 (intfield, textfield) values (? , ?)'
cur.executemany(sql, _data(10000))
and so on.
UPDATE:
I've just found an interesting recipe for transparently replacing SQL queries with prepared ones, using %(name)s
A:
As far as I know, psycopg2 has never supported server-side parameter binding ("bind variables" in Oracle parlance). Current versions of PostgreSQL do support it at the protocol level using prepared statements, but only a few connector libraries make use of it. The Postgres wiki notes this here. Here are some connectors that you might want to try: (I haven't used these myself.)
pg8000
python-pgsql
py-postgresql
As long as you're using DB-API calls, you probably ought to consider cursor.executemany() instead of repeatedly calling cursor.execute().
Also, binding parameters to their query in the server (instead of in the connector) is not always going to be faster in PostgreSQL. Note this FAQ entry.
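To illustrate the cursor.executemany() suggestion in runnable form — sqlite3 is used here purely because it ships with Python; the DB-API pattern is the same for psycopg2 (with %s placeholders instead of ?):

```python
import sqlite3

def data(n):
    # Generator feeding the parameter tuples, as the DB-API allows.
    for i in range(n):
        yield (i, 'test')

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('create table abc2 (intfield integer, textfield text)')

# One executemany call instead of thousands of execute calls.
cur.executemany('insert into abc2 (intfield, textfield) values (?, ?)',
                data(1000))
conn.commit()

cur.execute('select count(*) from abc2')
count = cur.fetchone()[0]
```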
| question about postgresql bind variables | I was looking at the question and decided to try using the bind variables. I use
sql = 'insert into abc2 (intfield,textfield) values (%s,%s)'
a = time.time()
for i in range(10000):
#just a wrapper around cursor.execute
db.executeUpdateCommand(sql,(i,'test'))
db.commit()
and
sql = 'insert into abc2 (intfield,textfield) values (%(x)s,%(y)s)'
for i in range(10000):
db.executeUpdateCommand(sql,{'x':i,'y':'test'})
db.commit()
Looking at the time taken for the two sets, above it seems like there isn't much time difference. In fact, the second one takes longer. Can someone correct me if I've made a mistake somewhere? using psycopg2 here.
| [
"The queries are equivalent in Postgresql.\nBind is oracle lingo. When you use it will save the query plan so the next execution will be a little faster. prepare does the same thing in Postgres.\nhttp://www.postgresql.org/docs/current/static/sql-prepare.html\npsycopg2 supports an internal 'bind', not prepare with c... | [
10,
7,
3
] | [] | [] | [
"postgresql",
"psycopg2",
"python"
] | stackoverflow_0004263508_postgresql_psycopg2_python.txt |
Q:
How do I perform frequent file reading/writing operations (multi-thread) in python?
Here's my problem: I need to perform frequent reading/writing operations on one file via different threads, and I tend to use threading.RLock().acquire() and threading.RLock().release() to lock the resource (e.g., one txt file). The thing is, you can easily do file writing operations in a certain thread, but how do you get the return value out of the thread? Here's my sample code:
FileLock = threading.RLock()
def FileReadingWriting(self, Input, Mode):
#### Input: things to write
#### Mode: 1 - write to the file
#### 2 - read from the file
FileLock.acquire()
if Mode == 1:
TheFile = open("example.txt", "w")
TheFile.write(Input)
TheFile.close()
elif Mode == 2:
TheFile = open("example.txt", "r")
Content = TheFile.read()
TheFile.close()
return Content #### Obviously this won't work, but if I want to get the content in a thread, how should I code?
FileLock.release()
A:
You should use some variable accessible to both the worker thread and the main thread.
For example, a worker thread can take a reference to some object as input and fill it in when done. The main thread will wait for the worker threads to finish, then check the objects.
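A minimal sketch of that pattern with a shared dict protected by a lock — the upper-casing stands in for your real file read:

```python
import threading

results = {}
results_lock = threading.Lock()

def worker(key, text):
    processed = text.upper()          # stand-in for reading the file
    with results_lock:                # guard the shared structure
        results[key] = processed

t = threading.Thread(target=worker, args=('job1', 'blah'))
t.start()
t.join()                              # main thread waits, then reads results
```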
| How do I perform frequent file reading/writing operations (multi-thread) in python? | Here's my problem: I need to perform frequent reading/writing operations on one file via different threads, and I tend to use threading.RLock().acquire() and threading.RLock().release() to lock the resource (e.g., one txt file). The thing is, you can easily do file writing operations in a certain thread, but how do you get the return value out of the thread? Here's my sample code:
FileLock = threading.RLock()
def FileReadingWriting(self, Input, Mode):
#### Input: things to write
#### Mode: 1 - write to the file
#### 2 - read from the file
FileLock.acquire()
if Mode == 1:
TheFile = open("example.txt", "w")
TheFile.write(Input)
TheFile.close()
elif Mode == 2:
TheFile = open("example.txt", "r")
Content = TheFile.read()
TheFile.close()
return Content #### Obviously this won't work, but if I want to get the content in a thread, how should I code?
FileLock.release()
| [
"You should use some variable accessible both to the worker thread and to the main thread. \nFor example, worker thread can get as an input a reference to some object and fill it when done. Main thread will wait workers threads to finish, then will check the objects.\n"
] | [
0
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0004263562_file_io_python.txt |
Q:
Chopping media stream in HTML5 websocket server for webbased chat/video conference application
We are currently working on a chat + (file sharing +) video conference application using HTML5 websockets. To make our application more accessible we want to implement Adaptive Streaming, using the following sequence:
Raw audio/video data client goes to server
Stream is split into 1 second chunks
Encode stream into varying bandwidths
Client receives manifest file describing available segments
Downloads one segment using normal HTTP
Bandwidth next segment chosen on performance of previous one
Client may select from a number of different alternate streams at a variety of data rates
So.. How do we split our audio/video data in chunks with Python?
We know Microsoft already built the Expression Encoder 2 which enables Adaptive Streaming, but it only supports Silverlight and that's not what we want.
Edit:
There's also a solution called FFmpeg (and for Python a PyFFmpeg wrapper), but it only supports Apple's adaptive streaming.
A:
I think ffmpeg is the main tool you'll want to look at. It's become the best-supported open-source media manipulator. There is a python wrapper for it, though it is also possible to access the command line through the subprocess module.
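A hedged sketch of driving ffmpeg from Python to cut a stream into one-second chunks — the segment-muxer flags (-f segment, -segment_time) are taken from ffmpeg's documentation and differ between ffmpeg versions, so verify them against your build; the input filename is just an example:

```python
import subprocess

def segment_cmd(src, out_pattern='chunk%03d.ts', seconds=1):
    """Build an ffmpeg command that splits `src` into fixed-length chunks."""
    return ['ffmpeg', '-i', src,
            '-f', 'segment', '-segment_time', str(seconds),
            '-c', 'copy', out_pattern]

cmd = segment_cmd('input.webm')
# subprocess.check_call(cmd)   # uncomment when ffmpeg is on your PATH
```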
A:
I've found some nice articles about how other people build a stream segmenter for other platforms, so now we know how to build one in Python.
| Chopping media stream in HTML5 websocket server for webbased chat/video conference application | We are currently working on a chat + (file sharing +) video conference application using HTML5 websockets. To make our application more accessible we want to implement Adaptive Streaming, using the following sequence:
Raw audio/video data client goes to server
Stream is split into 1 second chunks
Encode stream into varying bandwidths
Client receives manifest file describing available segments
Downloads one segment using normal HTTP
Bandwidth next segment chosen on performance of previous one
Client may select from a number of different alternate streams at a variety of data rates
So.. How do we split our audio/video data in chunks with Python?
We know Microsoft already built the Expression Encoder 2 which enables Adaptive Streaming, but it only supports Silverlight and that's not what we want.
Edit:
There's also a solution called FFmpeg (and for Python a PyFFmpeg wrapper), but it only supports Apple's adaptive streaming.
| [
"I think ffmpeg is the main tool you'll want to look at. It's become most well supported open source media manipulator. There is a python wrapper for it. Though it is also possible to access the command line through the subprocess module.\n",
"I've found some nice articles about how other people build a stream... | [
5,
2
] | [] | [] | [
"html",
"python",
"split",
"stream"
] | stackoverflow_0004242081_html_python_split_stream.txt |
Q:
How to see the generated source from a URL page with a python script and not only the source?
I have some urls to parse, and they use some javascript to create content dynamically. So if I want to parse the resulting generated page with python... how can I do that?
Firefox does that well with Web Developer... so I think it's possible... but I don't know where to start...
Thx for help
lo
A:
I've done this by doing a POST of document.body.innerHTML, after the page is loaded, to a CGI script in Python.
For the parsing, BeautifulSoup is a good choice.
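BeautifulSoup isn't always installed, so purely for illustration here is the same idea with the standard-library HTMLParser — collecting the text out of whatever innerHTML the browser POSTed; swap in BeautifulSoup for anything non-trivial:

```python
try:                                  # Python 3
    from html.parser import HTMLParser
except ImportError:                   # Python 2
    from HTMLParser import HTMLParser

class TextCollector(HTMLParser):
    """Accumulate the text content of the posted innerHTML."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

posted = '<div><p>generated by javascript</p></div>'  # body of the POST
parser = TextCollector()
parser.feed(posted)
text = ''.join(parser.chunks)
```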
A:
If you want the generated source you'll need a browser; I don't think you can do it with only python.
| How to see the generated source from a URL page with a python script and not only the source? | I have some urls to parse, and they use some javascript to create content dynamically. So if I want to parse the resulting generated page with python... how can I do that?
Firefox does that well with Web Developer... so I think it's possible... but I don't know where to start...
Thx for help
lo
| [
"I've done this by doing a POST of document.body.innerHTML, after the page is loaded, to a CGI script in Python.\nFor the parsing, BeautifulSoup is a good choice.\n",
"If you want the generated source you'll need a browser; I don't think you can do it with only python.\n"
] | [
2,
0
] | [] | [] | [
"dynamically_generated",
"javascript",
"parsing",
"python",
"url"
] | stackoverflow_0004264076_dynamically_generated_javascript_parsing_python_url.txt |
Q:
Gmail SMTP + XOAuth mystery
I am using python smtplib and xoauth and I am trying to send an email.
I am using the code posted by Google: http://code.google.com/p/google-mail-xoauth-tools/source/browse/trunk/python/xoauth.py
I am actually authenticating against Gmail and i get this reply
reply: '235 2.7.0 Accepted\r\n'
after sending my XOAuth string as expected (http://code.google.com/apis/gmail/oauth/protocol.html#smtp)
When I compose an email I try to send I get the following error
reply: '530-5.5.1 Authentication Required. Learn more at
reply: '530 5.5.1 http://mail.google.com/support/bin/answer.py?answer=14257 f10sm4144741bkl.17\r\n'
Any clue?
A:
The problem is in how you do the SMTP connection; here is a snippet from my code:
smtp_conn = smtplib.SMTP('smtp.googlemail.com', 587)
#smtp_conn.set_debuglevel(True)
smtp_conn.ehlo()
smtp_conn.starttls()
smtp_conn.ehlo()
smtp_conn.docmd('AUTH', 'XOAUTH ' + base64.b64encode(xoauth_string))
You create the xoauth_string as in the example from Google. After that you can use smtp_conn to send your email. If you have any problems let me know. You can find some sample code at https://github.com/PanosJee/xoauth
| Gmail SMTP + XOAuth mystery | I am using python smtplib and xoauth and I am trying to send an email.
I am using the code posted by Google: http://code.google.com/p/google-mail-xoauth-tools/source/browse/trunk/python/xoauth.py
I am actually authenticating against Gmail and i get this reply
reply: '235 2.7.0 Accepted\r\n'
after sending my XOAuth string as expected (http://code.google.com/apis/gmail/oauth/protocol.html#smtp)
When I compose an email I try to send I get the following error
reply: '530-5.5.1 Authentication Required. Learn more at
reply: '530 5.5.1 http://mail.google.com/support/bin/answer.py?answer=14257 f10sm4144741bkl.17\r\n'
Any clue?
| [
"The problem is on how you do the SMTP connection here is a snippet from my code:\n smtp_conn = smtplib.SMTP('smtp.googlemail.com', 587)\n #smtp_conn.set_debuglevel(True)\n smtp_conn.ehlo()\n smtp_conn.starttls()\n smtp_conn.ehlo()\n smtp_conn.docmd('AUTH', 'XOAUTH ' + base64.b64encode(xoauth_stri... | [
3
] | [] | [] | [
"gmail",
"oauth",
"python",
"smtplib"
] | stackoverflow_0003347731_gmail_oauth_python_smtplib.txt |
Q:
Is it possible to export a sqlite3 table to CSV or similar?
Is it possible to export a sqlite3 table to CSV or XLS format? I'm using python 2.7 and sqlite3.
A:
I knocked this very basic script together using a slightly modified example class from the docs; it simply exports an entire table to a CSV file:
import sqlite3
import csv, codecs, cStringIO
class UnicodeWriter:
"""
A CSV writer which will write rows to CSV file "f",
which is encoded in the given encoding.
"""
def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
# Redirect output to a queue
self.queue = cStringIO.StringIO()
self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
self.stream = f
self.encoder = codecs.getincrementalencoder(encoding)()
def writerow(self, row):
self.writer.writerow([unicode(s).encode("utf-8") for s in row])
# Fetch UTF-8 output from the queue ...
data = self.queue.getvalue()
data = data.decode("utf-8")
# ... and reencode it into the target encoding
data = self.encoder.encode(data)
# write to the target stream
self.stream.write(data)
# empty queue
self.queue.truncate(0)
def writerows(self, rows):
for row in rows:
self.writerow(row)
conn = sqlite3.connect('yourdb.sqlite')
c = conn.cursor()
c.execute('select * from yourtable')
writer = UnicodeWriter(open("export.csv", "wb"))
writer.writerows(c)
Hope this helps!
Edit: If you want headers in the CSV, the quick way is to manually add another row before you write the data from the database, e.g:
# Select whichever rows you want in whatever order you like
c.execute('select id, forename, surname, email from contacts')
writer = UnicodeWriter(open("export.csv", "wb"))
# Make sure the list of column headers you pass in are in the same order as your SELECT
writer.writerow(["ID", "Forename", "Surname", "Email"])
writer.writerows(c)
Edit 2: To output pipe-separated columns, register a custom CSV dialect and pass that into the writer, like so:
csv.register_dialect('pipeseparated', delimiter = '|')
writer = UnicodeWriter(open("export.csv", "wb"), dialect='pipeseparated')
Here's a list of the various formatting parameters you can use with a custom dialect.
A:
Yes. Read here: sqlitebrowser
Import and export records as text
Import and export tables from/to CSV files
Import and export databases from/to SQL dump files
A:
External programs: see the sqlite3 documentation for details; the sqlite3 command-line shell can export directly from the shell/command line (.mode csv together with .output yourfile.csv, then your SELECT).
On the fly:
the csv module will help you handle the CSV file format correctly
| It is possible export table sqlite3 table to csv or similiar? | It is possible export sqlite3 table to csv or xls format? I'm using python 2.7 and sqlite3.
| [
"I knocked this very basic script together using a slightly modified example class from the docs; it simply exports an entire table to a CSV file:\nimport sqlite3\nimport csv, codecs, cStringIO\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\", \n which is encoded in t... | [
9,
1,
0
] | [] | [] | [
"python",
"sqlite"
] | stackoverflow_0004264379_python_sqlite.txt |
Q:
How to use pipes in python without blocking?
In python, I want to create a subprocess and read and write data to its stdio.
Lets say I have the following C program that just writes its input to its output.
#include <stdio.h>
int main() {
    char c;
    for(;;) {
        scanf("%c", &c);
        printf("%c", c);
    }
}
In python I should be able to use this using the subprocess module. Something like this:
from subprocess import *
pipe = Popen("thing", stdin=PIPE, stdout=PIPE)
pipe.stdin.write("blah blah blah")
text = pipe.stdout.read(4) # text should == "blah"
However in this case the call to read blocks indefinitely.
How can I do what I'm trying to achieve?
A:
stdout is line-buffered when writing to a terminal, but fully buffered when writing to a pipe so your output isn't being seen immediately.
To flush the buffer, call fflush(stdout); after each printf(). See also this question which is the same except that your subprocess is written in C, and this question which references stdin/stdout behaviour as defined in C99.
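For illustration, here is a hedged Python-only sketch of the same round trip with the buffering fixed on both ends (the child below is a stand-in for the C program; its flush() is the equivalent of fflush(stdout)):

```python
import subprocess
import sys

# Echo child: reads lines from stdin and writes them back, flushing each time.
child = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line)\n"
    "    sys.stdout.flush()\n"
)
pipe = subprocess.Popen([sys.executable, "-c", child],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
pipe.stdin.write("blah blah blah\n")
pipe.stdin.flush()            # flush the parent's side of the pipe too
text = pipe.stdout.read(4)    # no longer blocks
print(text)                   # blah
pipe.stdin.close()
pipe.wait()
```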
A:
I found the pexpect module which does exactly what I need.
| How to use pipes in python without blocking? | In python, I want to create a subprocess and read and write data to its stdio.
| [
"stdout is line-buffered when writing to a terminal, but fully buffered when writing to a pipe so your output isn't being seen immediately.\nTo flush the buffer, call fflush(stdout); after each printf(). See also this question which is the same except that your subprocess is written in C, and this question which re... | [
4,
0
] | [] | [] | [
"pipe",
"popen",
"python",
"subprocess"
] | stackoverflow_0004264576_pipe_popen_python_subprocess.txt |
Q:
assign a retrievable unique ID to a changing lambda list in python?
def a(p): return p + 1
def b(p): return p + 2
def c(p): return p + 3
l= [a,b,c]
import itertools
ll = itertools.combinations(l, 2)
[x for x in ll]
[(<function a at 0x00CBD770>, <function b at 0x00CBD7F0>),
(<function a at 0x00CBD770>, <function c at 0x00BB27F0>),
(<function b at 0x00CBD7F0>, <function c at 0x00BB27F0>)]
Q1: here, how to return a lambda list in simple line(s):
[a(b(1)), # not the result of a(b(1)), but just a lambda object
a(c(1)), # there may also be more than 2 items here, e.g. with itertools.combinations(l, 4)
b(c(1))]
Q2:
suppose I defined another function d
def d(p): return p + 4
l= [a,b,c,d]
ll = itertools.combinations(l, 2)
[(<function a at 0x00CBD770>, <function b at 0x00CBD7F0>),
(<function a at 0x00CBD770>, <function c at 0x00BB27F0>),
(<function a at 0x00CBD770>, <function d at 0x00CBDC70>),
(<function b at 0x00CBD7F0>, <function c at 0x00BB27F0>),
(<function b at 0x00CBD7F0>, <function d at 0x00CBDC70>),
(<function c at 0x00BB27F0>, <function d at 0x00CBDC70>)]
This combination has a different sequence compared with the last one:
ab,ac,ad,bc,bd,cd
=================
ab,ac,bc
But I want to give every possible item a unique ID, no matter whether the list is
l= [a,b,c,d]
or
l= [b,a,c,d]
or
l= [a,b,e,d]
Take "ac" for example: "ac" and every other possible item should always be bound to a unique ID, so that I can then access "ac" through that ID. I think of it as creating an extendable hash table entry for each item.
So, is it possible to assign an int ID or a "HASH" to each lambda item? I also want this mapping to be storable on disk as a file and retrievable later.
Thanks for any idea.
sample to explain Q2
=====================
l= [a,b,c,d]
func_combos = itertools.combinations(l, 2)
compositions = [compose(f1, f2) for f1, f2 in func_combos]
[f(100) for f in compositions] # takes a very long time to finish
[result1,
result2,
result3,
...
]
======== three days later on another machine ======
l= [a,c,b,e,f,g,h]
[f(100) for f in compositions] # takes a very long time to finish
[newresult1,
newresult2,
newresult3,
...
]
But wait: here we can save time. Take "ac" for example:
[result1, tag
result2, tag_for_ac_aka_uniqueID_or_hash
result3, tag
...
]
We just need to check whether the "ac" tag already exists; if it does, we can skip the calculation:
if hash_of(ac) in result.taglist:
    copy old result into new result
A:
Just use a set instead of dicts?
That works for me.
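Since the question's compose() is never defined, here is one hedged sketch of the whole idea: build each composition's ID from the function names, so the ID stays stable no matter how the input list was ordered (the .tag attribute and the dict are my invention):

```python
import itertools

def a(p): return p + 1
def b(p): return p + 2
def c(p): return p + 3

def compose(f, g):
    """Return a callable computing f(g(x)), tagged with a stable ID."""
    fn = lambda x: f(g(x))
    fn.tag = f.__name__ + g.__name__   # e.g. "ab"; storable on disk as a key
    return fn

funcs = sorted([c, a, b], key=lambda fn: fn.__name__)  # order-independent
compositions = {}
for f, g in itertools.combinations(funcs, 2):
    h = compose(f, g)
    compositions[h.tag] = h            # skip the work here if h.tag is already cached

print(sorted(compositions))   # ['ab', 'ac', 'bc']
print(compositions["ab"](1))  # a(b(1)) == 4
```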
| assign a retrievable unique ID to a changing lambda list in python? | def a(p): return p + 1
| [
"just use set to avoid dicts?\nThey are works for me.\n"
] | [
0
] | [] | [] | [
"lambda",
"python",
"uniqueidentifier"
] | stackoverflow_0004264136_lambda_python_uniqueidentifier.txt |
Q:
Check whether the process is being run as a pipe
I have a small Python utility which should be run only as a pipe. I want it to print out the help message when it runs stand alone. How can a process know whether it is being used as a pipe. Comparing sys.stdin and sys.__stdin__ does not work.
A:
You can use isatty:
if sys.stdin.isatty():
It will be True if standard input is a tty, which roughly means it's being used directly, outside a pipe.
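A minimal sketch of such a utility (the usage message and the uppercasing are just for illustration); in the real script you would call run(sys.stdin, sys.stdout):

```python
import io

def run(stream, out):
    """Echo stream to out; refuse to run without a pipe attached."""
    if stream.isatty():
        out.write("usage: some-producer | this-utility\n")
        return 1
    for line in stream:
        out.write(line.upper())
    return 0

# io.StringIO reports isatty() == False, i.e. it behaves like the piped case:
out = io.StringIO()
status = run(io.StringIO("hello\n"), out)
print(status, out.getvalue())  # 0 HELLO
```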
| Check whether the process is being run as a pipe | I have a small Python utility which should be run only as a pipe. I want it to print out the help message when it runs stand alone. How can a process know whether it is being used as a pipe. Comparing sys.stdin and sys.__stdin__ does not work.
| [
"You can use isatty:\nif sys.stdin.isatty():\n\nIt will be True if standard input is a tty, which roughly means it's being used directly, outside a pipe.\n"
] | [
16
] | [] | [] | [
"pipe",
"process",
"python",
"stdio"
] | stackoverflow_0004265057_pipe_process_python_stdio.txt |
Q:
Python Django Forms Multiple Select HTML Output
I'm trying to output the description field from my model
in a django form without joy.
After searching around and not finding what I need
I hope I can ask here.
Here is my models, form, template and template output.
I've abbreviated things to help keep this post concise.
I'm working on the view in this project so the model
has been designed by someone else and I can't really change it.
MODELS:
1)
from django.db import models

class Project(models.Model):
    description = models.TextField(blank = True)
    name = models.CharField(max_length = 255, blank = True)

    def __unicode__(self):
        """ String representation of projects. """
        return unicode(self.name)
2)
from django.db import models

class Share(models.Model):
    description = models.TextField()
    last_access = models.DateTimeField(auto_now = True)
    location = models.URLField(verify_exists = False)
    project = models.ForeignKey('Project', related_name = 'shares')

    def __unicode__(self):
        return unicode(self.location)
FORM:
from django import forms
from models import Share
class ExportForm(forms.Form):
ps = forms.ModelMultipleChoiceField(queryset=Share.objects.filter(project=1),widget=forms.SelectMultiple())
VIEW:
form = ExportForm()
TEMPLATE:
I have this to output the multiple select box:
{{ form.ps }}
TEMPLATE OUTPUT:
<select multiple="multiple" name="ps" id="id_ps">
<option value="11">Share object </option>
<option value="10">Share object </option>
</select>
I've tried several things from searching around but can't
seem to get the 'description' field to appear there rather than 'Share object'.
Any advice much appreciated.
Thanks!
A:
The easiest way to do this is to change the __unicode__ method of the Share model to return description instead of location, but since you say you can't change the model you will need to subclass ModelMultipleChoiceField and override the label_from_instance method.
class MyModelMultipleChoiceField(forms.ModelMultipleChoiceField):
    def label_from_instance(self, obj):
        return obj.description
This is explained in the documentation.
| Python Django Forms Multiple Select HTML Ouput | I'm trying to output the description field from my model
| [
"The easiest way to do this is to change the __unicode__ method of the Share model to return description instead of location, but since you say you can't change the model you will need to subclass ModelMultipleChoiceField and override the label_from_instance method.\nclass MyModelMultipleChoiceField(forms.ModelMul... | [
1
] | [] | [] | [
"django_forms",
"python"
] | stackoverflow_0004265003_django_forms_python.txt |
Q:
Prompt user to save my desired text file name
I wish that when the user clicks on a link, say
http://www.xyz.com/dynamic-text.py
The browser will prompt the user to save the text file.
The following script makes the browser prompt the user to save dynamic-text.py. I want it to prompt the user to save some-file-name.txt instead. How can I do so?
dynamic-text.py
#!c:/Python27/python.exe -u
print "Content-type: text/text"
print
print "This is Text File"
A:
You can send the Content-Disposition HTTP header:
print "Content-Type: text/plain"
print "Content-Disposition: attachment; filename=some-file-name.txt"
print
Also note that you should use the text/plain content type instead of text/text which, to my knowledge, doesn't exist.
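Putting it together, a hedged sketch of the full response the CGI script would emit (the helper function is my own invention; the headers are joined with CRLF, which is what HTTP expects, although printing with plain newlines also works with most servers):

```python
def download_response(body, filename="some-file-name.txt"):
    """Build a text/plain response that triggers a save-as dialog."""
    headers = [
        "Content-Type: text/plain",
        "Content-Disposition: attachment; filename=%s" % filename,
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + body

resp = download_response("This is Text File")
print(resp)
```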
| Prompt user to save my desired text file name | I wish when the user clicks on a link say.
| [
"You can send the Content-Disposition HTTP header:\nprint \"Content-Type: text/plain\"\nprint \"Content-Disposition: attachment; filename=some-file-name.txt\"\nprint\n\nAlso note that you should use the text/plain content type instead of text/text which, to my knowledge, doesn't exist.\n"
] | [
4
] | [] | [] | [
"html",
"http",
"python"
] | stackoverflow_0004265381_html_http_python.txt |
Q:
Create a symlink to a programme
Suppose I am in ~/programming/ass1 and the executable is in ~/programming/ass1/seattle/seattle_repy/repy.py.
I tried to create a symlink like so
ln -s seattle/seattle_repy/repy.py repy
to be able to type
python repy restrictions.test example.1.1.repy
instead of
python seattle/seattle_repy/repy.py restrictions.test example.1.1.repy
But it didn't work (I get "python: can't open file '/home/philipp/Desktop/Uni/NTM/UE/Uebungsblatt 3/safe_check.py': [Errno 2] No such file or directory").
So repy.py can't find safe_check.py.
Is this possible at all?
Cheers,
Philipp
A:
You'll need to frob sys.path to add the path containing the modules, but it's probably easier to make a shell script that calls exec python ~/programming/ass1/seattle/seattle_repy/repy.py.
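For illustration, a hedged sketch of the sys.path route (directory names are taken from the question; the launcher itself is my invention and would sit next to the symlink):

```python
import os
import sys

# Resolve the launcher's own location even when invoked through a symlink,
# then put the real script's directory on the import path so repy.py can
# find sibling modules such as safe_check.
script = sys.argv[0] if sys.argv and sys.argv[0] else "."
here = os.path.dirname(os.path.realpath(script))
repy_dir = os.path.join(here, "seattle", "seattle_repy")
sys.path.insert(0, repy_dir)
```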
A:
Thanks for the tips Ignacio and Mark.
I solved it with the following bash script in ~/programming/ass1
#!/bin/bash
exec python ~/programming/ass1/seattle/seattle_repy/repy.py "$@"
To finally get what I want I copied it to /usr/bin and created a symlink to it:
sudo cp repy.sh /usr/bin/
sudo ln -s /usr/bin/repy.sh /usr/bin/repy
So now I can just say repy restrictions.test example.2.1.repy and it'll work.
| Create a symlink to a programme | Suppose I am in ~/programming/ass1 and the executable is in ~/programming/ass1/seattle/seattle_repy/repy.py.
| [
"You'll need to frob sys.path to add the path containing the modules, but it's probably easier to make a shell script that calls exec python ~/programming/ass1/seattle/seattle_repy/repy.py.\n",
"Thanks for the tips Ignacio and Mark.\nI solved it with the following bash script in ~/programming/ass1\n\n#!/bin/bash\... | [
1,
0
] | [] | [] | [
"python",
"symlink"
] | stackoverflow_0004265624_python_symlink.txt |
Q:
How to include Non-AscII character in python appengine send html mail
My problem is that I want to compose an email in python environment of google appengine.
When I add Greek characters to the body of my message I get:
SyntaxError: Non-ASCII character '\xce'
message.html = """
<html>
<body>
παραδειγμα
</body>
</html>"""
A:
Put this encoding declaration at the top of your file (strictly speaking it's a coding cookie, not a shebang; it goes on the line after the shebang if you have one):
# -*- coding: utf-8 -*-
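A small sketch of the effect (the variable name is illustrative): with the cookie in place, Greek literals in the source are accepted:

```python
# -*- coding: utf-8 -*-
message_html = u"""
<html>
<body>
παραδειγμα
</body>
</html>"""
print(message_html)
```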
| How to include Non-AscII character in python appengine send html mail | My problem is that I want to compose an email in python environment of google appengine.
| [
"Use this shebang:\n# -*- coding: utf-8 -*-\n\n"
] | [
3
] | [] | [] | [
"email",
"google_app_engine",
"html",
"python"
] | stackoverflow_0004265747_email_google_app_engine_html_python.txt |
Q:
Set editable=False only in subclass
I have a field "name" that is automatically constructed from "first_name" and "last_name" in one of the subclasses:
from django.db import models
from django.utils.translation import ugettext_lazy as _
class Actor(models.Model):
    name = models.CharField(_('name'), max_length=60)

class Company(Actor):
    pass

class Person(Actor):
    first_name = models.CharField(_('first name'), max_length=30, blank=True)
    last_name = models.CharField(_('last name'), max_length=30, blank=True)
    email = models.EmailField(_('e-mail address'), unique=True)

    def save(self, *args, **kwargs):
        if self.first_name or self.last_name:
            self.name = (self.first_name + ' ' + self.last_name).strip()
        else:
            self.name = self.email
        super(Person, self).save(*args, **kwargs)
I would like the "name" field to be editable in the Actor and Company models, but not in the Person model. How can I accomplish that?
I can't override the field definition by adding
name = models.CharField(_('name'), max_length=60, editable=False)
to the Person model because Django raises a FieldError ("Local field 'name' in class 'Person' clashes with field of similar name from base class 'Actor'").
A:
Forget about editable and exclude the field in the model's ModelAdmin instead:
from django.contrib import admin
admin.site.register(Person, exclude=['name'])
| Set editable=False only in subclass | I have a field "name" that is automatically constructed from "first_name" and "last_name" in one of the subclasses:
| [
"Forget about editable and exclude the field in the model's ModelAdmin instead:\nfrom django.contrib import admin\n\nadmin.site.register(Person, exclude=['name'])\n\n"
] | [
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004266247_django_python.txt |
Q:
How can I dump raw XML of my request and server's response using suds in python
I'm using suds 0.4 and Python 2.6 to communicate with a remote server.
Its WSDL loads perfectly, but any function call returns an error. Something is wrong with that server.
Now I need a dump of the SOAP structure that is sent to the server, and of its response, in raw SOAP as well.
How can I do that?
A:
Setting the logging for suds.transport to debug will get you the sent and received messages.
For an interactive session, I find this is good:
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
logging.getLogger('suds.xsd.schema').setLevel(logging.DEBUG)
logging.getLogger('suds.wsdl').setLevel(logging.DEBUG)
from suds.client import Client
s = Client('http://someservice?wsdl')
For specifically just the sent and received XML sent to a file, you'll need to play with the logging settings, see http://docs.python.org/library/logging.html
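For example, one way to "play with the logging settings" so that only the raw XML lands in its own file (the filename is made up; note that suds does not even need to be imported to configure its logger by name):

```python
import logging

handler = logging.FileHandler("soap_messages.log", mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

xml_log = logging.getLogger("suds.transport")
xml_log.setLevel(logging.DEBUG)
xml_log.addHandler(handler)
xml_log.propagate = False   # keep the raw XML out of the root logger's output

# Stand-in for what suds.transport would emit during a real call:
xml_log.debug("SENT: <SOAP-ENV:Envelope/>")
handler.flush()
```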
| How can I dump raw XML of my request and server's response using suds in python | i'm using suds 0.4 and python 2.6, to communicate with remote server.
| [
"Setting the logging for suds.transport to debug will get you the sent and received messages.\nFor an interactive session, I find this is good:\nimport logging\nlogging.basicConfig(level=logging.INFO)\nlogging.getLogger('suds.client').setLevel(logging.DEBUG)\nlogging.getLogger('suds.transport').setLevel(logging.DEB... | [
14
] | [] | [] | [
"python",
"soap",
"suds"
] | stackoverflow_0004265602_python_soap_suds.txt |
Q:
Check if a number is rational in Python, for a given fp accuracy
I would like to know a good way of checking if a number x is a rational (two integers n,m exist so that x=n/m) in python.
In Mathematica, this is done by the function Rationalize[6.75] : 27/4
I assume this question has an answer for a given accuracy.
Is there a common algorithm of obtaining these two integers?
A:
In python >= 2.6 there is a as_integer_ratio method on floats:
>>> a = 6.75
>>> a.as_integer_ratio()
(27, 4)
>>> import math
>>> math.pi.as_integer_ratio()
(884279719003555, 281474976710656)
However, due to the way floats are defined in programming languages there are no irrational numbers.
A:
The nature of floating-point numbers means that it makes no sense to check if a floating-point number is rational, since all floating-point numbers are really fractions of the form n / 2e. However, you might well want to know whether there is a simple fraction (one with a small denominator rather than a big power of 2) that closely approximates a given floating-point number.
Donald Knuth discusses this latter problem in The Art of Computer Programming volume II. See the answer to exercise 4.53-39. The idea is to search for the fraction with the lowest denominator within a range, by expanding the endpoints of the range as continued fractions so long as their coefficients are equal, and then when they differ, take the simplest value between them. Here's a fairly straightforward implementation in Python:
from fractions import Fraction
from math import modf

def simplest_fraction_in_interval(x, y):
    """Return the fraction with the lowest denominator in [x,y]."""
    if x == y:
        # The algorithm will not terminate if x and y are equal.
        raise ValueError("Equal arguments.")
    elif x < 0 and y < 0:
        # Handle negative arguments by solving positive case and negating.
        return -simplest_fraction_in_interval(-y, -x)
    elif x <= 0 or y <= 0:
        # One argument is 0, or arguments are on opposite sides of 0, so
        # the simplest fraction in interval is 0 exactly.
        return Fraction(0)
    else:
        # Remainder and coefficient of continued fractions for x and y.
        xr, xc = modf(1/x)
        yr, yc = modf(1/y)
        if xc < yc:
            return Fraction(1, int(xc) + 1)
        elif yc < xc:
            return Fraction(1, int(yc) + 1)
        else:
            return 1 / (int(xc) + simplest_fraction_in_interval(xr, yr))

def approximate_fraction(x, e):
    """Return the fraction with the lowest denominator that differs
    from x by no more than e."""
    return simplest_fraction_in_interval(x - e, x + e)
And here are some results:
>>> approximate_fraction(6.75, 0.01)
Fraction(27, 4)
>>> approximate_fraction(math.pi, 0.00001)
Fraction(355, 113)
>>> approximate_fraction((1 + math.sqrt(5)) / 2, 0.00001)
Fraction(377, 233)
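Incidentally, the standard library's fractions module (Python 2.6+) implements the same "simplest fraction within a tolerance" idea directly via Fraction.limit_denominator:

```python
from fractions import Fraction
import math

print(Fraction(6.75).limit_denominator(100))      # 27/4
print(Fraction(math.pi).limit_denominator(1000))  # 355/113
```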
A:
Any number with a finite decimal expansion is a rational number. You could always solve for instance
5.195181354985216
by saying that it corresponds to
5195181354985216 / 1000000000000000
So since floats and doubles have finite precision they're all rationals.
A:
Python uses floating-point representation rather than rational numbers. Take a look at the standard library fractions module for some details about rational numbers.
Observe, for example, this, to see why it goes wrong:
>>> from fractions import Fraction
>>> 1.1 # Uh oh.
1.1000000000000001
>>> Fraction(1.1) # Will only work in >= Python 2.7, anyway.
Fraction(2476979795053773, 2251799813685248)
>>> Fraction(*1.1.as_integer_ratio()) # Python 2.6 compatible
Fraction(2476979795053773, 2251799813685248)
(Oh, you want to see a case where it works?)
>>> Fraction('1.1')
Fraction(11, 10)
A:
Maybe this will be interesting to you: Best rational approximation
A:
As you noted any floating point number can be converted to a rational number by moving the decimal point and dividing by the appropriate power of ten.
You can then remove the greatest common divisor from the dividend and divisor and check if both of these numbers fit in the data type of your choice.
A:
The problem with real numbers in programming languages is that they are usually defined as functions returning a finite representation given an accuracy (eg. a function which takes n as an argument and returns a floating point number within 2^-n accuracy).
You can definitely turn a rational/integer into a real, but even comparing reals for equality is undecidable (it is essentially the halting problem).
You cannot tell whether a real number x is rational: even in mathematics, it is usually difficult, since you have to find p and q such that x = p/q, and this is often non constructive.
However, given an accuracy window, you can find the "best" rational approximation for that accuracy using, for instance, continued fraction expansion. I believe that is essentially what Mathematica does. But in your example, 6.75 is already rational; try with Pi instead.
| Check if a number is rational in Python, for a given fp accuracy | I would like to know a good way of checking if a number x is a rational (two integers n,m exist so that x=n/m) in python.
| [
"In python >= 2.6 there is a as_integer_ratio method on floats:\n>>> a = 6.75\n>>> a.as_integer_ratio()\n(27, 4)\n>>> import math\n>>> math.pi.as_integer_ratio()\n(884279719003555, 281474976710656)\n\nHowever, due to the way floats are defined in programming languages there are no irrational numbers.\n",
"The nat... | [
20,
10,
7,
5,
4,
1,
0
] | [] | [] | [
"algorithm",
"floating_accuracy",
"floating_point",
"math",
"python"
] | stackoverflow_0004266741_algorithm_floating_accuracy_floating_point_math_python.txt |
Q:
Why is int(50)<str(5) in python 2.x?
In python 3, int(50)<'2' causes a TypeError, and well it should. In python 2.x, however, int(50)<'2' returns True (this is also the case for other number formats, but int exists in both py2 and py3). My question, then, has several parts:
Why does Python 2.x (< 3?) allow this behavior?
(And who thought it was a good idea to allow this to begin with???)
What does it mean that an int is less than a str?
Is it referring to ord / chr?
Is there some binary format which is less obvious?
Is there a difference between '5' and u'5' in this regard?
A:
It works like this1.
>>> float() == long() == int() < dict() < list() < str() < tuple()
True
Numbers compare as less than containers. Numeric types are converted to a common type and compared based on their numeric value. Containers are compared by the alphabetic value of their names.2
From the docs:
CPython implementation detail: Objects of different types except numbers are ordered by their type names; objects of the same types that don't support proper comparison are ordered by their address.
Objects of different built-in types compare alphabetically by the name of their type: int starts with an 'i' and str starts with an 's', so any int is less than any str.
I have no idea.
A drunken master.
It means that a formal order has been introduced on the builtin types.
It's referring to an arbitrary order.
No.
No. strings and unicode objects are considered the same for this purpose. Try it out.
In response to the comment about long < int
>>> int < long
True
You probably meant values of those types though, in which case the numeric comparison applies.
1 This is all on Python 2.6.5
2 Thank to kRON for clearing this up for me. I'd never thought to compare a number to a dict before and comparison of numbers is one of those things that's so obvious that it's easy to overlook.
A:
The reason why these comparisons are allowed, is sorting. Python 2.x can sort lists containing mixed types, including strings and integers -- integers always appear first. Python 3.x does not allow this, for the exact reasons you pointed out.
Python 2.x:
>>> sorted([1, '1'])
[1, '1']
>>> sorted([1, '1', 2, '2'])
[1, 2, '1', '2']
Python 3.x:
>>> sorted([1, '1'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: str() < int()
A:
(And who thought it was a good idea to allow this to begin with???)
I can imagine that the reason might be to allow object from different types to be stored in tree-like structures, which use comparisons internally.
A:
As Aaron said. Breaking it up into your points:
Because it makes sort do something halfway usable where it otherwise would make no sense at all (mixed lists). It's not a good idea generally, but much in Python is designed for convenience over strictness.
Ordered by type name. This means things of the same type group together, where they can be sorted. They should probably be grouped by type class, such as numbers together, but there's no proper type class framework. There may be a few more specific rules in there (probably is one for numeric types), I'd have to check the source.
One is string and the other is unicode. They may have a direct comparison operation, but it's conceivable a non-comparable type would get grouped between them, causing a mess. I don't know if there's code to avoid this.
So, it doesn't make sense in the general case, but occasionally it's helpful.
from random import shuffle
letters=list('abcdefgh')
ints=range(8)
both=ints+letters
shuffle(ints)
shuffle(letters)
shuffle(both)
print sorted(ints+letters)
print sorted(both)
Both print the ints first, then the letters.
As a rule, you don't want to mix types randomly within a program, and apparently Python 3 prevents it where Python 2 tries to make vague sense where none exists. You could still sort by lambda a,b: cmp(repr(a),repr(b)) (or something better) if you really want to, but it appears the language developers agreed it's impractical default behaviour. I expect it varies which gives the least surprise, but it's a lot harder to detect a problem in the Python 2 sense.
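A hedged sketch of that fallback, written so it produces the Python 2 grouping on either version:

```python
mixed = [2, "1", 1, "2"]
try:
    result = sorted(mixed)  # succeeds on Python 2: ints first, then strings
except TypeError:
    # Python 3 refuses the implicit cross-type comparison, so mimic
    # Python 2's type-name-first ordering with an explicit key.
    result = sorted(mixed, key=lambda v: (type(v).__name__, str(v)))
print(result)  # [1, 2, '1', '2']
```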
| Why is int(50)<str(5) in python 2.x? | In python 3, int(50)<'2' causes a TypeError, and well it should. In python 2.x, however, int(50)<'2' returns True (this is also the case for other number formats, but int exists in both py2 and py3). My question, then, has several parts:
Why does Python 2.x (< 3?) allow this behavior?
(And who thought it was a good idea to allow this to begin with???)
What does it mean that an int is less than a str?
Is it referring to ord / chr?
Is there some binary format which is less obvious?
Is there a difference between '5' and u'5' in this regard?
| [
"It works like this1.\n>>> float() == long() == int() < dict() < list() < str() < tuple()\nTrue\n\nNumbers compare as less than containers. Numeric types are converted to a common type and compared based on their numeric value. Containers are compared by the alphabetic value of their names.2\nFrom the docs:\n\nCPyt... | [
8,
6,
1,
1
] | [] | [] | [
"comparison",
"int",
"python",
"python_2.x",
"string"
] | stackoverflow_0004266918_comparison_int_python_python_2.x_string.txt |
Q:
Does ruby have a zip function like python's?
I am a newbie moving from Python to Ruby.
In python there is a feature like the following:
a=range(3)
b=range(3)
for e1, e2 in zip(a, b):
print e1,e2
Is there something that can achieve the same function in ruby?
A:
That is what Array#zip does:
foo = [1,2,3,4]
bar = ['a','b','c','d']
foo.zip(bar) #=> [[1, "a"], [2, "b"], [3, "c"], [4, "d"]]
A:
You mean, like Array#zip?
| Does ruby have a zip function like python's? | I am a newbie moving from Python to Ruby.
In python there is a feature like the following:
a=range(3)
b=range(3)
for e1, e2 in zip(a, b):
print e1,e2
Is there something that can achieve the same function in ruby?
| [
"That is what Array#zip does:\nfoo = [1,2,3,4]\nbar = ['a','b','c','d']\n\nfoo.zip(bar) #=> [[1, \"a\"], [2, \"b\"], [3, \"c\"], [4, \"d\"]]\n\n",
"You mean, like Array#zip?\n"
] | [
7,
0
] | [] | [] | [
"arrays",
"python",
"ruby"
] | stackoverflow_0004267206_arrays_python_ruby.txt |
Q:
Replace a client library with a mock from a test?
Is there a way to replace a client library (which communicates with a remote server) with a mock object from within a unittest?
Here's a diagram to explain what I'm attempting to do
+---------------+
| tests |----{ mock }
+---------------+ |
| |
v |
+---------------+ |
| model | |
+---------------+ |
| |
v |
+---------------+ |
| client-module |<--{replaces}
+---------------+
^
:
:
v
+---------------+
| service |
+---------------+
Since the tests import the model, which imports the client-module, there doesn't seem to be a way to apply the mock to the model's internals.
A:
If model.py does an
import client_module
and doesn't use any features of it at import time, you can do
import model
...
model.client_module = MyMockModule()
where MyMockModule returns suitable mocks for the stuff the real client_module provides. I haven't shown the setUp/tearDown code to take care of this, but hopefully you get the idea.
If model does use stuff from client_module at import time, you'll need to replace sys.modules['client_module'] with the mocked module before importing model.
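A minimal sketch of the sys.modules approach; the module and function names here are hypothetical stand-ins, since the question's real client module isn't shown:

```python
import sys
import types

# Hypothetical stand-in for the real client library; "client_module" and
# "call_service" are illustrative names, not from the actual codebase.
fake_client = types.ModuleType("client_module")
fake_client.call_service = lambda *args, **kwargs: {"status": "ok"}  # canned reply

# Install the fake *before* the model is imported, so the model's
# `import client_module` resolves to it instead of the real library.
sys.modules["client_module"] = fake_client

import client_module  # picks up the fake from sys.modules
assert client_module.call_service() == {"status": "ok"}
```

In a real test you would do the sys.modules swap in setUp and restore the original entry in tearDown, so other tests see the genuine module.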
| Replace a client library with a mock from a test? | Is there a way to replace a client library (which communicates with a remote server) with a mock object from within a unittest?
Here's a diagram to explain what I'm attempting to do
+---------------+
| tests |----{ mock }
+---------------+ |
| |
v |
+---------------+ |
| model | |
+---------------+ |
| |
v |
+---------------+ |
| client-module |<--{replaces}
+---------------+
^
:
:
v
+---------------+
| service |
+---------------+
Since the tests import the model, which imports the client-module, there doesn't seem to be a way to apply the mock to the model's internals.
| [
"If model.py does an\nimport client_module\n\nand doesn't use any features of it at import time, you can do\nimport model\n\n...\n\nmodel.client_module = MyMockModule()\n\nwhere MyMockModule returns suitable mocks for stuff the real client_module provides. I haven't shown setUp/tearDown stuff to take care of this, ... | [
1
] | [] | [] | [
"nose",
"python",
"unit_testing"
] | stackoverflow_0004267007_nose_python_unit_testing.txt |
Q:
wxPython display a login box
I'm trying to build a small application in wxPython (absolute beginner) in which I display a login box before showing the content. I created a frame and, inside the frame, a panel with a FlexGridSizer to hold the login form, but it doesn't show. If I launch the application the login form is invisible; if I resize the application the login box shows. Any idea why? Here's my code so far:
import wx
class AP_App(wx.App):
def OnInit(self):
frame = AP_MainFrame("Test application", (0, 0), (650, 350))
frame.Show()
self.SetTopWindow(frame)
loginPanel = AP_LoginPanel(frame)
self.Bind(wx.EVT_CLOSE, self.OnCloseWindow)
return True
def OnCloseWindow(self, event):
self.Destroy()
class AP_MainFrame(wx.Frame):
def __init__(self, title, pos, size):
wx.Frame.__init__(self, None, -1, title, pos, size)
self.CreateStatusBar()
class AP_LoginPanel(wx.Panel):
def __init__(self, frame):
self.panel = wx.Panel(frame)
self.frame = frame
self.frame.SetStatusText("Authentification required!")
self.showLoginBox()
def showLoginBox(self): #Create the sizer
sizer = wx.FlexGridSizer(rows = 3, cols = 2, hgap = 5, vgap = 15)
# Username
self.txt_Username = wx.TextCtrl(self.panel, 1, size = (150, -1))
lbl_Username = wx.StaticText(self.panel, -1, "Username:")
sizer.Add(lbl_Username,0, wx.LEFT|wx.TOP| wx.RIGHT, 50)
sizer.Add(self.txt_Username,0, wx.TOP| wx.RIGHT, 50)
# Password
self.txt_Password = wx.TextCtrl(self.panel, 1, size=(150, -1), style=wx.TE_PASSWORD)
lbl_Password = wx.StaticText(self.panel, -1, "Password:")
sizer.Add(lbl_Password,0, wx.LEFT|wx.RIGHT, 50)
sizer.Add(self.txt_Password,0, wx.RIGHT, 50)
# Submit button
btn_Process = wx.Button(self.panel, -1, "&Login")
self.panel.Bind(wx.EVT_BUTTON, self.OnSubmit, btn_Process)
sizer.Add(btn_Process,0, wx.LEFT, 50)
self.panel.SetSizer(sizer)
def OnSubmit(self, event):
UserText = self.txt_Username.GetValue()
PasswordText = self.txt_Password.GetValue()
if __name__ == '__main__':
app = AP_App()
app.MainLoop()
A:
I just discovered I'm calling frame.Show() too soon. :)
| wxPython display a login box | I'm trying to build a small application in wxPython (absolute beginner) in which I display a login box before showing the content. I created a frame and, inside the frame, a panel with a FlexGridSizer to hold the login form, but it doesn't show. If I launch the application the login form is invisible; if I resize the application the login box shows. Any idea why? Here's my code so far:
import wx
class AP_App(wx.App):
def OnInit(self):
frame = AP_MainFrame("Test application", (0, 0), (650, 350))
frame.Show()
self.SetTopWindow(frame)
loginPanel = AP_LoginPanel(frame)
self.Bind(wx.EVT_CLOSE, self.OnCloseWindow)
return True
def OnCloseWindow(self, event):
self.Destroy()
class AP_MainFrame(wx.Frame):
def __init__(self, title, pos, size):
wx.Frame.__init__(self, None, -1, title, pos, size)
self.CreateStatusBar()
class AP_LoginPanel(wx.Panel):
def __init__(self, frame):
self.panel = wx.Panel(frame)
self.frame = frame
self.frame.SetStatusText("Authentification required!")
self.showLoginBox()
def showLoginBox(self): #Create the sizer
sizer = wx.FlexGridSizer(rows = 3, cols = 2, hgap = 5, vgap = 15)
# Username
self.txt_Username = wx.TextCtrl(self.panel, 1, size = (150, -1))
lbl_Username = wx.StaticText(self.panel, -1, "Username:")
sizer.Add(lbl_Username,0, wx.LEFT|wx.TOP| wx.RIGHT, 50)
sizer.Add(self.txt_Username,0, wx.TOP| wx.RIGHT, 50)
# Password
self.txt_Password = wx.TextCtrl(self.panel, 1, size=(150, -1), style=wx.TE_PASSWORD)
lbl_Password = wx.StaticText(self.panel, -1, "Password:")
sizer.Add(lbl_Password,0, wx.LEFT|wx.RIGHT, 50)
sizer.Add(self.txt_Password,0, wx.RIGHT, 50)
# Submit button
btn_Process = wx.Button(self.panel, -1, "&Login")
self.panel.Bind(wx.EVT_BUTTON, self.OnSubmit, btn_Process)
sizer.Add(btn_Process,0, wx.LEFT, 50)
self.panel.SetSizer(sizer)
def OnSubmit(self, event):
UserText = self.txt_Username.GetValue()
PasswordText = self.txt_Password.GetValue()
if __name__ == '__main__':
app = AP_App()
app.MainLoop()
| [
"I just discovered I'm calling frame.Show() too soon. :)\n"
] | [
4
] | [] | [] | [
"python",
"wxpython"
] | stackoverflow_0004267651_python_wxpython.txt |
Q:
How to determine whether an object is a sequence
I can think of two ways to determine whether an object is a sequence:
hasattr(object, '__iter__').
And whether calling iter(object) raises a TypeError.
As it is more Pythonic to ask forgiveness than permission, I'd use the second idiom, although I consider it uglier (additionally, raising an exception once you've caught the TypeError to determine that the object isn't a sequence would yield an undesirable "double-exception" stack trace).
Ultimately, is checking that an object defines an __iter__ method exhaustive enough to determine whether an object is a sequence? (In older versions of Python, for example, str didn't define an __iter__ method; I've also heard that some objects can also simply define and use __getitem__ without defining an __iter__ and act like a sequence.) Or is defining __iter__ the contract of a sequence?
A:
Use isinstance(obj, collections.Sequence). Abstract base classes are exactly for this. They didn't exist prior to 2.6 though. In case you're forced to use older versions, you're out of luck and better stick with EAFP.
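This answer predates collections.abc; in modern Python the ABCs live in collections.abc (the old collections aliases were removed in 3.10). A quick sketch of what the Sequence check accepts and rejects:

```python
from collections.abc import Iterable, Sequence

# Lists, tuples and strings satisfy the Sequence contract (ordered, indexable).
assert isinstance([1, 2], Sequence)
assert isinstance((1, 2), Sequence)
assert isinstance("abc", Sequence)

# Sets and iterators are iterable but are *not* sequences,
# which is exactly the distinction __iter__ alone cannot make.
assert not isinstance({1, 2}, Sequence)
assert not isinstance(iter([]), Sequence)
assert isinstance({1, 2}, Iterable)
```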
| How to determine whether an object is a sequence | I can think of two ways to determine whether an object is a sequence:
hasattr(object, '__iter__').
And whether calling iter(object) raises a TypeError.
As it is more Pythonic to ask forgiveness than permission, I'd use the second idiom, although I consider it uglier (additionally, raising an exception once you've caught the TypeError to determine that the object isn't a sequence would yield an undesirable "double-exception" stack trace).
Ultimately, is checking that an object defines an __iter__ method exhaustive enough to determine whether an object is a sequence? (In older versions of Python, for example, str didn't define an __iter__ method; I've also heard that some objects can also simply define and use __getitem__ without defining an __iter__ and act like a sequence.) Or is defining __iter__ the contract of a sequence?
| [
"Use isinstance(obj, collections.Sequence). Abstract base classes are exactly for this. They didn't exist prior to 2.6 though. In case you're forced to use older versions, you're out of luck and better stick with EAFP.\n"
] | [
4
] | [] | [] | [
"collections",
"python",
"python_3.x"
] | stackoverflow_0004267909_collections_python_python_3.x.txt |
Q:
urllib2 not retrieving entire HTTP response
I'm perplexed as to why I'm not able to download the entire contents of some JSON responses from FriendFeed using urllib2.
>>> import urllib2
>>> stream = urllib2.urlopen('http://friendfeed.com/api/room/the-life-scientists/profile?format=json')
>>> stream.headers['content-length']
'168928'
>>> data = stream.read()
>>> len(data)
61058
>>> # We can see here that I did not retrieve the full JSON
... # given that the stream doesn't end with a closing }
...
>>> data[-40:]
'ce2-003048343a40","name":"Vincent Racani'
How can I retrieve the full response with urllib2?
A:
Best way to get all of the data:
fp = urllib2.urlopen("http://www.example.com/index.cfm")
response = ""
while 1:
    data = fp.read()
    if not data:  # read() returns an empty string at EOF
        break
    response += data
print response
The reason is that .read() isn't guaranteed to return the entire response, given the nature of sockets. I thought this was discussed in the documentation (maybe urllib) but I cannot find it.
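The read-until-empty loop can be written as a small helper; here is a sketch using an in-memory stream to stand in for the HTTP response (the same pattern works on the object urlopen returns):

```python
import io

def read_all(stream, chunk_size=8192):
    """Read a file-like object to EOF, accumulating chunks.

    A single read() on a socket-backed response may return less than the
    full body, so we loop until read() returns an empty result."""
    chunks = []
    while True:
        data = stream.read(chunk_size)
        if not data:
            break
        chunks.append(data)
    return b"".join(chunks)

# Demonstrate with an in-memory stream standing in for the response.
payload = b"x" * 100000
assert read_all(io.BytesIO(payload)) == payload
```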
A:
Use tcpdump (or something like it) to monitor the actual network interactions - then you can analyze why the site is broken for some client libraries. Ensure that you repeat multiple times by scripting the test, so you can see if the problem is consistent:
import urllib2
url = 'http://friendfeed.com/api/room/friendfeed-feedback/profile?format=json'
stream = urllib2.urlopen(url)
expected = int(stream.headers['content-length'])
data = stream.read()
datalen = len(data)
print expected, datalen, expected == datalen
The site's working consistently for me so I can't give examples of finding failures :)
A:
Keep calling stream.read() until it's done...
while True:
    data = stream.read()
    if not data:
        break
    # ... do stuff with data
A:
readlines()
also works
| urllib2 not retrieving entire HTTP response | I'm perplexed as to why I'm not able to download the entire contents of some JSON responses from FriendFeed using urllib2.
>>> import urllib2
>>> stream = urllib2.urlopen('http://friendfeed.com/api/room/the-life-scientists/profile?format=json')
>>> stream.headers['content-length']
'168928'
>>> data = stream.read()
>>> len(data)
61058
>>> # We can see here that I did not retrieve the full JSON
... # given that the stream doesn't end with a closing }
...
>>> data[-40:]
'ce2-003048343a40","name":"Vincent Racani'
How can I retrieve the full response with urllib2?
| [
"Best way to get all of the data:\nfp = urllib2.urlopen(\"http://www.example.com/index.cfm\")\n\nresponse = \"\"\nwhile 1:\n data = fp.read()\n if not data: # This might need to be if data == \"\": -- can't remember\n break\n response += data\n\nprint response\n\nThe reason is that .rea... | [
18,
4,
2,
0
] | [] | [] | [
"http",
"python",
"urllib2"
] | stackoverflow_0001824069_http_python_urllib2.txt |
Q:
How to get Phone Number from Latitude & Longitude
I have a bunch of restaurants in an Excel file. All these restaurants have lat/long co-ordinates. Sadly, their phone numbers are missing from this file. If I want to book a reservation at a particular restaurant, I'll have to find the number manually.
Is there some service which legally allows me to return the address along with phone number given Lat/Long which I can automate? Does Google Maps do this? Some links & guidance appreciated here.
A:
Getting the address for a lat/lon pair is called reverse geocoding, and there are web services that can give you an address. For example, the web service at geonames.org offers this. I don't know the possibilities with Google Maps, but this blog post says it is possible.
But you won't get a phone number this way.
A:
Take a look at the Google Places API. You can find by loc/lat and obtain the name and the phone number, among other interesting things, if I'm not wrong.
Note: the Places API may only be used in conjunction with displaying results on a Google map; using Place data without displaying a map for which Place data was requested is prohibited.
A:
I usually head on over to programmableweb.com when I need an API to perform these kinds of lookups.
If I can't find what I'm looking for there I'll probably write a scraper and point it to a site that would have the data I need (in this scenario, possibly Yellow Pages)
You didn't specify any particular programming languages so I'd recommend YQL as an easy entry yet powerful tool for data scraping.
Alternatively, Perl is a great language for scraping, and for the major frameworks like .NET and Java you should find class libraries that make this kind of task a lot more straightforward (e.g. HtmlAgilityPack)
| How to get Phone Number from Latitude & Longitude | I have a bunch of restaurants in an Excel file. All these restaurants have lat/long co-ordinates. Sadly, their phone numbers are missing from this file. If I want to book a reservation at a particular restaurant, I'll have to find the number manually.
Is there some service which legally allows me to return the address along with phone number given Lat/Long which I can automate? Does Google Maps do this? Some links & guidance appreciated here.
| [
"Getting the address of a lat/lon pair is called reverse geocoding, and there are websites who can give an address. For example the webservice at geonames.org is offering this. I don't know the possibilities with google maps. But this blogpost says it is possible.\nBut you won't get a phone number this way. \n",
... | [
2,
2,
1
] | [] | [] | [
"geocoding",
"python",
"reverse_geocoding"
] | stackoverflow_0004267788_geocoding_python_reverse_geocoding.txt |
Q:
How should I jump into the Flex-Python Boat
http://www.artima.com/weblogs/viewpost.jsp?thread=208528
Bruce Eckel talked about using Flex and Python together. Since then, we have had PyAMF and the likes.
It has been almost three years, but googling does not reveal much more than a bunch of articles/comments linking to that article above (or related ones). There is no buzz, no excitement. Not much on SO either.
I am thinking of attempting something using Flex/Python which would require me to be heavily invested in it. What I worry about is that the support system is very weak and activity is almost nonexistent.
I really want to do this. Can anyone direct me towards some useful resource?
A:
An application written in Flex/Flash is server-agnostic, and it should be easy to replace the server-side language with another one. The client application will consume web services exposed by the server (REST/SOAP), or, as an alternative, use remote method invocation. The latter is implemented for the most important languages, as far as I know.
There are some exceptions: if you want to use messaging, the professional solutions are offered mainly by frameworks built on top of Java.
So if you do not rely heavily on messaging, the heavy investment is going to be mainly on the client side, especially if you haven't worked before with so-called "fat" clients, but not on the integration side; it's not so complicated.
Regarding useful Flex resources, my suggestion is to take a look at http://www.adobe.com/devnet/flex.html
| How should I jump into the Flex-Python Boat | http://www.artima.com/weblogs/viewpost.jsp?thread=208528
Bruce Eckel talked about using Flex and Python together. Since then, we have had PyAMF and the likes.
It has been almost three years, but googling does not reveal much more than a bunch of articles/comments linking to that article above (or related ones). There is no buzz, no excitement. Not much on SO either.
I am thinking of attempting something using Flex/Python which would require me to be heavily invested in it. What I worry about is that the support system is very weak and activity is almost nonexistent.
I really want to do this. Can anyone direct me towards some useful resource?
| [
"An application written in Flex/Flash is server agnostic...and it should be easy to replace the server side language with another one. The client application will consume some web services exposed by the server(REST/SOAP), or it can use as an alternative remote method invocation. The last one is implemented for the... | [
2
] | [] | [] | [
"apache_flex",
"pyamf",
"python"
] | stackoverflow_0004266889_apache_flex_pyamf_python.txt |
Q:
retrieving the listening port during buildProtocol
I'm modifying the ServerFactory's buildProtocol method. The factory listens on ports 11000 and 12000, and I have two protocols, one for each port. I'm trying to retrieve the port the client connected on so that I can instantiate the correct protocol.
e.g. a client connects on port 11000, protocol 1 is instantiated; a client connects on port 12000, protocol 2 is instantiated.
I think this can only be done in the buildProtocol stage. Is there a way to determine which port was used to connect? The address parameter passed to buildProtocol is the client address, but I need the server port.
pseudo code:
def buildProtocol(self, address):
if address connects at port 11000:
proto = TransformProtocol()
else:
proto = TransformProtocol2()
proto.factory = self
return proto
A:
I think you may need to use Twisted services.
Your code becomes something like:
from twisted.application import internet, service
from twisted.internet import protocol, reactor
from twisted.protocols import basic
class UpperCaseProtocol(basic.LineReceiver):
def lineReceived(self, line):
self.transport.write(line.upper() + '\r\n')
self.transport.loseConnection()
class LowerCaseProtocol(basic.LineReceiver):
def lineReceived(self, line):
self.transport.write(line.lower() + '\r\n')
self.transport.loseConnection()
class LineCaseService(service.Service):
def getFactory(self, p):
f = protocol.ServerFactory()
f.protocol = p
return f
application = service.Application('LineCase')
f = LineCaseService()
serviceCollection = service.IServiceCollection(application)
internet.TCPServer(11000,f.getFactory(UpperCaseProtocol)).setServiceParent(serviceCollection)
internet.TCPServer(12000,f.getFactory(LowerCaseProtocol)).setServiceParent(serviceCollection)
But, here we have two factory instances.
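The one-factory-per-port idea can be sketched without any Twisted machinery; this is plain Python with illustrative class names standing in for the question's protocols, showing why buildProtocol never needs to inspect the address:

```python
class TransformProtocol:        # stand-in for the question's first protocol
    pass

class TransformProtocol2:       # stand-in for the second protocol
    pass

class PortDispatchingFactory:
    """One factory instance per listening port.

    Because each port gets its own factory, buildProtocol already knows
    which protocol class to instantiate and never has to guess from the
    client address."""

    def __init__(self, protocol_cls):
        self.protocol_cls = protocol_cls

    def buildProtocol(self, address):
        proto = self.protocol_cls()
        proto.factory = self
        return proto

# With Twisted you would pass these to reactor.listenTCP(11000, ...) etc.
factory_11000 = PortDispatchingFactory(TransformProtocol)
factory_12000 = PortDispatchingFactory(TransformProtocol2)

assert isinstance(factory_11000.buildProtocol(None), TransformProtocol)
assert isinstance(factory_12000.buildProtocol(None), TransformProtocol2)
```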
| retrieving the listening port during buildProtocol | I'm modifying the ServerFactory's buildProtocol method. The factory listens on ports 11000 and 12000, and I have two protocols, one for each port. I'm trying to retrieve the port the client connected on so that I can instantiate the correct protocol.
e.g. a client connects on port 11000, protocol 1 is instantiated; a client connects on port 12000, protocol 2 is instantiated.
I think this can only be done in the buildProtocol stage. Is there a way to determine which port was used to connect? The address parameter passed to buildProtocol is the client address, but I need the server port.
pseudo code:
def buildProtocol(self, address):
if address connects at port 11000:
proto = TransformProtocol()
else:
proto = TransformProtocol2()
proto.factory = self
return proto
| [
"I think, you may need to use Twisted services:\nlink text\nYou code become something like:\nfrom twisted.application import internet, service\nfrom twisted.internet import protocol, reactor\nfrom twisted.protocols import basic\n\nclass UpperCaseProtocol(basic.LineReceiver):\n def lineReceived(self, line):\n ... | [
0
] | [] | [] | [
"python",
"twisted"
] | stackoverflow_0004251433_python_twisted.txt |
Q:
How do I use StandardAnalyzer with TermQuery?
I'm trying to produce something similar to what QueryParser in Lucene does, but without the parser, i.e. run a string through StandardAnalyzer, tokenize it and use TermQuery objects in a BooleanQuery to produce a query. My problem is that I only get Tokens from StandardAnalyzer, not Terms. I can convert a Token to a Term by extracting the string from it with Token.term(), but this is 2.4.x-only and it seems backwards, because I need to add the field a second time. What is the proper way of producing a TermQuery with StandardAnalyzer?
I'm using pylucene, but I guess the answer is the same for Java etc. Here is the code I've come up with:
from lucene import *
def term_match(self, phrase):
    query = BooleanQuery()
    sa = StandardAnalyzer()
    for token in sa.tokenStream("contents", StringReader(phrase)):
        term_query = TermQuery(Term("contents", token.term()))
        query.add(term_query, BooleanClause.Occur.SHOULD)
    return query
A:
The established way to get the token text is with token.termText() - that API's been there forever.
And yes, you'll need to specify a field name to both the Analyzer and the Term; I think that's considered normal. 8-)
A:
I've come across the same problem, and, using Lucene 2.9 API and Java, my code snippet looks like this:
final TokenStream tokenStream = new StandardAnalyzer( Version.LUCENE_29 )
        .tokenStream( fieldName , new StringReader( value ) );
final List< String > result = new ArrayList< String >();
try {
    while ( tokenStream.incrementToken() ) {
        final TermAttribute term = ( TermAttribute ) tokenStream.getAttribute( TermAttribute.class );
        result.add( term.term() );
    }
} catch ( final IOException e ) {
    throw new RuntimeException( e );
}
| How do I use StandardAnalyzer with TermQuery? | I'm trying to produce something similar to what QueryParser in Lucene does, but without the parser, i.e. run a string through StandardAnalyzer, tokenize it and use TermQuery objects in a BooleanQuery to produce a query. My problem is that I only get Tokens from StandardAnalyzer, not Terms. I can convert a Token to a Term by extracting the string from it with Token.term(), but this is 2.4.x-only and it seems backwards, because I need to add the field a second time. What is the proper way of producing a TermQuery with StandardAnalyzer?
I'm using pylucene, but I guess the answer is the same for Java etc. Here is the code I've come up with:
from lucene import *
def term_match(self, phrase):
    query = BooleanQuery()
    sa = StandardAnalyzer()
    for token in sa.tokenStream("contents", StringReader(phrase)):
        term_query = TermQuery(Term("contents", token.term()))
        query.add(term_query, BooleanClause.Occur.SHOULD)
    return query
| [
"The established way to get the token text is with token.termText() - that API's been there forever.\nAnd yes, you'll need to specify a field name to both the Analyzer and the Term; I think that's considered normal. 8-)\n",
"I've come across the same problem, and, using Lucene 2.9 API and Java, my code snippet l... | [
2,
0
] | [] | [] | [
"lucene",
"pylucene",
"python"
] | stackoverflow_0001390088_lucene_pylucene_python.txt |
Q:
how to generate various database dumps
I have a CSV file and want to generate dumps of the data for sqlite, mysql, postgres, oracle, and mssql.
Is there a common API (ideally Python based) to do this?
I could use an ORM to insert the data into each database and then export dumps, however that would require installing each database. It also seems a waste of resources - these CSV files are BIG.
I am wary of trying to craft the SQL myself because of the variations with each database. Ideally someone has already done this hard work, but I haven't found it yet.
A:
SQLAlchemy is a database library that (as well as ORM functionality) supports SQL generation in the dialects of the all the different databases you mention (and more).
In normal use, you could create a SQL expression / instruction (using a schema.Table object), create a database engine, and then bind the instruction to the engine, to generate the SQL.
However, the engine is not strictly necessary; the dialects each have a compiler that can generate the SQL without a connection; the only caveat being that you need to stop it from generating bind parameters as it does by default:
from sqlalchemy.sql import expression, compiler
from sqlalchemy import schema, types
import csv
# example for mssql
from sqlalchemy.dialects.mssql import base
dialect = base.dialect()
compiler_cls = dialect.statement_compiler
class NonBindingSQLCompiler(compiler_cls):
def _create_crud_bind_param(self, col, value, required=False):
# Don't do what we're called; return a literal value rather than binding
return self.render_literal_value(value, col.type)
recipe_table = schema.Table("recipe", schema.MetaData(), schema.Column("name", types.String(50), primary_key=True), schema.Column("culture", types.String(50)))
for row in [{"name": "fudge", "culture": "america"}]: # csv.DictReader(open("x.csv", "r")):
insert = expression.insert(recipe_table, row, inline=True)
c = NonBindingSQLCompiler(dialect, insert)
c.compile()
sql = str(c)
print sql
The above example actually works; it assumes you know the target database table schema; it should be easily adaptable to import from a CSV and generate for multiple target database dialects.
A:
I am no database wizard, but AFAIK in Python there's no common API that does what you ask for out of the box. There is PEP 249, which defines an API that should be used by modules accessing databases and that AFAIK is used at least by the MySQL and PostgreSQL Python modules (here and here), and that perhaps could be a starting point.
The road I would follow myself, however, would be another one:
Import the CSV into MySQL (this is just because MySQL is the one I know best and there is tons of material on the net, as for example this very easy recipe, but you could do the same procedure starting from another database).
Generate the MySQL dump.
Process the MySQL dump file in order to modify it to meet SQLite (and others) syntax.
The scripts for processing the dump file can be very compact, although they might be somewhat tricky if you use regexes for parsing the lines. Here's an example MySQL → SQLite script that I simply pasted from this page:
#!/bin/sh
mysqldump --compact --compatible=ansi --default-character-set=binary mydbname |
grep -v ' KEY "' |
grep -v ' UNIQUE KEY "' |
perl -e 'local $/;$_=<>;s/,\n\)/\n\)/gs;print "begin;\n";print;print "commit;\n"' |
perl -pe '
if (/^(INSERT.+?)\(/) {
$a=$1;
s/\\'\''/'\'\''/g;
s/\\n/\n/g;
s/\),\(/\);\n$a\(/g;
}
' |
sqlite3 output.db
You could write your script in python (in which case you should have a look to re.compile for performance).
The rationale behind my choice would be:
I get the heavy-lifting [importing and therefore data consistency checks + generating starting SQL file] done for me by mysql
I only have to have one database installed.
I have full control on what is happening and the possibility to fine-tune the process.
I can structure my script in such a way that it will be very easy to extend it for other databases (basically I would structure it like a parser that recognises individual fields + a set of grammars - one for each database - that I can select via command-line option)
There is much more documentation on the differences between SQL flavours than on single DB import/export libraries.
EDIT: A template-based approach
If for any reason you don't feel confident enough to write the SQL yourself, you could use a sort of template-based script. Here's how I would do it:
Import and generate a dump of the table in all the 4 DB you are planning to use.
For each DB save the initial part of the dump (with the schema declaration and all the rest) and a single insert instruction.
Write a Python script that - for each DB export - will output the "header" of the dump plus the same "saved line" into which you will programmatically replace the values for each line in your CSV file.
The obvious drawback of this approach is that your "template" will only work for one table. The strongest point of it is that writing such script would be extremely easy and quick.
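A minimal, naive sketch of the template idea (the table name and quoting rules here are assumptions; the simplistic single-quote doubling is exactly the kind of dialect detail the SQLAlchemy answer handles properly):

```python
import csv
import io

def csv_to_inserts(csv_text, table, header_sql):
    """Emit a header (schema DDL saved from a real dump) followed by one
    INSERT per CSV row.  Quoting is naive: single quotes are doubled."""
    lines = [header_sql]
    for row in csv.DictReader(io.StringIO(csv_text)):
        cols = ", ".join(row.keys())
        vals = ", ".join("'%s'" % v.replace("'", "''") for v in row.values())
        lines.append("INSERT INTO %s (%s) VALUES (%s);" % (table, cols, vals))
    return "\n".join(lines)

dump = csv_to_inserts(
    "name,culture\nfudge,america\n",
    "recipe",
    "CREATE TABLE recipe (name TEXT, culture TEXT);",
)
assert "INSERT INTO recipe (name, culture) VALUES ('fudge', 'america');" in dump
```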
HTH at least a bit!
A:
You could do this - Create SQL tables from CSV files
or Generate Insert Statements from CSV file
or try this Generate .sql from .csv python
Of course you might need to tweak the scripts mentioned to suite your needs.
| how to generate various database dumps | I have a CSV file and want to generate dumps of the data for sqlite, mysql, postgres, oracle, and mssql.
Is there a common API (ideally Python based) to do this?
I could use an ORM to insert the data into each database and then export dumps, however that would require installing each database. It also seems a waste of resources - these CSV files are BIG.
I am wary of trying to craft the SQL myself because of the variations with each database. Ideally someone has already done this hard work, but I haven't found it yet.
| [
"SQLAlchemy is a database library that (as well as ORM functionality) supports SQL generation in the dialects of the all the different databases you mention (and more).\nIn normal use, you could create a SQL expression / instruction (using a schema.Table object), create a database engine, and then bind the instruct... | [
5,
1,
0
] | [] | [] | [
"csv",
"database",
"mysqldump",
"python"
] | stackoverflow_0004121658_csv_database_mysqldump_python.txt |
Q:
RabbitMQ / Celery with Django hangs on delay/ready/etc - No useful log info
So I just set up Celery and RabbitMQ, created my user, set up the vhost, mapped the user to the vhost, and ran the celery daemon successfully (or so I assume)
(queuetest)corky@corky-server:~/projects/queuetest$ ./manage.py celeryd
celery@corky-server v0.9.5 is starting.
Configuration ->
. broker -> amqp://celery@localhost:5672/
. queues ->
. celery -> exchange:celery (direct) binding:celery
. concurrency -> 2
. loader -> celery.loaders.djangoapp
. logfile -> [stderr]@WARNING
. events -> OFF
. beat -> OFF
Celery has started.
I created a user of "celery" because I wasn't feeling very inventive in this case.
When I try to do one of the simple examples within the celery docs:
>>> from tasks import add
>>> r = add.delay(2, 2)
>>> r
<AsyncResult: 16235ea3-c7d6-4cce-9387-5c6285312c7c>
>>> r.ready()
(hangs for eternity.)
So I checked the FAQ wondering what else could be up and it told me this is a common bug due to user permissions, so I triple checked those, nothing, made another new user, still nothing. If I import DjangoBrokerConnection from carrot.connection and get the information, it matches up with what's in my celery settings. The FAQ stated to check your log file.
My rabbit.log file isn't very helpful in this situation, simply showing:
=INFO REPORT==== 26-Jan-2010::11:58:22 ===
accepted TCP connection on 0.0.0.0:5672 from 127.0.0.1:60572
=INFO REPORT==== 26-Jan-2010::11:58:22 ===
starting TCP connection <0.1120.0> from 127.0.0.1:60572
And so forth. At this point, I'm at a loss as to what else my problem could be. I'm running Ubuntu Jaunty and installed RabbitMQ from apt-get.
Thanks in advance for any help.
A:
I just fixed a really ugly bug that would only show up for new users that would have caused this. (http://github.com/ask/celery/commit/a9c1316b15055b67ee3c38d294461fa82ed6d2b5)
Please pull from the master branch at github. If it still doesn't work you probably have to stop rabbitmq, remove the rabbitmq database directory (usually /var/lib/rabbitmq) and start rabbitmq again.
Really sorry for the inconvenience.
The bug happened because we recently changed the name of the consumer's routing key option to "binding key", but the amqp libraries still use routing_key and we forgot to rewrite the option.
A:
For anyone stumbling upon this: it really does seem to help to remove your /var/lib/rabbitmq, even if the problem seems to go away with updating celery. I was seeing lots of unreliability and unpredictability until I did so.
| RabbitMQ / Celery with Django hangs on delay/ready/etc - No useful log info | So I just setup celery and rabbitmq, created my user, setup the vhost, mapped the user to the vhost, and ran the celery daemon successfully (or so I assume)
(queuetest)corky@corky-server:~/projects/queuetest$ ./manage.py celeryd
celery@corky-server v0.9.5 is starting.
Configuration ->
. broker -> amqp://celery@localhost:5672/
. queues ->
. celery -> exchange:celery (direct) binding:celery
. concurrency -> 2
. loader -> celery.loaders.djangoapp
. logfile -> [stderr]@WARNING
. events -> OFF
. beat -> OFF
Celery has started.
I created a user of "celery" because I wasn't feeling very inventive in this case.
When I try to do one of the simple examples within the celery docs:
>>> from tasks import add
>>> r = add.delay(2, 2)
>>> r
<AsyncResult: 16235ea3-c7d6-4cce-9387-5c6285312c7c>
>>> r.ready()
(hangs for eternity.)
So I checked the FAQ wondering what else could be up and it told me this is a common bug due to user permissions, so I triple checked those, nothing, made another new user, still nothing. If I import DjangoBrokerConnection from carrot.connection and get the information, it matches up with what's in my celery settings. The FAQ stated to check your log file.
My rabbit.log file isn't very helpful in this situation, simply showing:
=INFO REPORT==== 26-Jan-2010::11:58:22 ===
accepted TCP connection on 0.0.0.0:5672 from 127.0.0.1:60572
=INFO REPORT==== 26-Jan-2010::11:58:22 ===
starting TCP connection <0.1120.0> from 127.0.0.1:60572
And so forth. At this point, I'm at a loss as to what else my problem could be. I'm running Ubuntu Jaunty and installed RabbitMQ from apt-get.
Thanks in advance for any help.
| [
"I just fixed a really ugly bug that would only show up for new users that would have caused this. (http://github.com/ask/celery/commit/a9c1316b15055b67ee3c38d294461fa82ed6d2b5)\nPlease pull from the master branch at github. If it still doesn't work you \nprobably have to stop rabbitmq, remove the rabbitmq database... | [
4,
2
] | [] | [] | [
"celery",
"django",
"python",
"rabbitmq"
] | stackoverflow_0002141083_celery_django_python_rabbitmq.txt |
Q:
Resolve variables from a string
I have a string in the following form:
"425x344"
Now I'd like to resolve the first and second numbers from this string as two separate variables. How can I do this? I've created this regex:
regex = re.compile(r"^(\d+x\d+)")
to check if the string form is proper. But what next?
A:
a, b = '425x344'.split('x')
A:
Since you're filtering it with a regex, you can just do
a, b = map(int, s.split('x'))
res = a * b
If you're planning on multiplying, it can be done in one line:
res = eval(s.replace('x', '*'))
or
res = (lambda x, y: x * y)(*map(int, s.split('x')))
With an import and one line, this can be done with
import operator
res = operator.mul(*map(int, s.split('x')))
You'll have to profile them to see which is faster.
A:
Change it to:
regex = re.compile(r"^(\d+)x(\d+)")
Then use regex.match(my_string) to get the MatchObject out, and you can use match.group(1) and match.group(2) to get the variables out.
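A quick runnable sketch of that approach (the variable names are just for illustration):

```python
import re

# Capture the two numbers in separate groups, as described above
regex = re.compile(r"^(\d+)x(\d+)")
match = regex.match("425x344")
if match is not None:
    width, height = int(match.group(1)), int(match.group(2))
    print(width, height)  # 425 344
```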
A:
You probably mean "values", or "literals". 425 is not typically a valid name for a variable, which tend to have symbolic names.
If you change your regular expression to capture the numbers separately:
regex = re.compile(r"^(\d+)x(\d+)")
you can then use code like this:
str = "425x344"
mo = regex.search(str)
if mo is not None:
print "%s=%d" % (str, int(mo.group(1)) * int(mo.group(2)))
to compute the result.
A:
If you are wanting to do it with re, http://effbot.org/zone/xml-scanner.htm might be worth a read. It shows how to properly split each argument in an expr with re
import re
expr = "b = 2 + a*10"
for item in re.findall("\s*(?:(\d+)|(\w+)|(.))", expr):
    print item
| Resolve variables from a string | I have a string in the following form:
"425x344"
Now I'd like to resolve the first and second numbers from this string as two separate variables. How can I do this? I've created this regex:
regex = re.compile(r"^(\d+x\d+)")
to check if the string form is proper. But what next?
| [
"a, b = '425x344'.split('x')\n",
"Since you're filtering it with a regex, you can just do\na, b = map(int, s.split('x'))\nres = a * b\n\nIf you're planning on multiplying, it can be done in one line:\nres = eval(s.replace('x', '*'))\n\nor\nres = (lambda x, y: x * y)(*map(int, s.split('x')))\n\nWith an import and ... | [
5,
2,
1,
0,
0
] | [] | [] | [
"match",
"python",
"regex"
] | stackoverflow_0004268323_match_python_regex.txt |
Q:
Gtk loop or Cron for timer
Well, I have created a python script, which checks the number of uncompleted tasks of tasque and displays it using pynotify periodically. My question is how do I implement this timer. I can think of two things. A Cron job to execute a python script periodically or using a python script which uses a gtk loop to call the specified function for checking periodically.
A:
Cron job. It's more likely to be "in line" with actual time, since it's a more stable and time-tested choice. It's also less demanding on resources than using a loop in Python since it doesn't require a constant Python interpreter process, and is probably better optimized than pyGTK (choice is mature, stable software vs. less mature, less stable).
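For the cron-job route, the crontab entry might look something like this (path and interval are hypothetical). Note that pynotify typically needs the desktop's DISPLAY, which cron does not provide by default, so it may need to be set explicitly:

```
# run the task checker every five minutes
*/5 * * * * DISPLAY=:0 /usr/bin/python /home/user/check_tasks.py
```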
| Gtk loop or Cron for timer | Well, I have created a python script, which checks the number of uncompleted tasks of tasque and displays it using pynotify periodically. My question is how do I implement this timer. I can think of two things. A Cron job to execute a python script periodically or using a python script which uses a gtk loop to call the specified function for checking periodically.
| [
"Cron job. It's more likely to be \"in line\" with actual time, since it's a more stable and time-tested choice. It's also less demanding on resources than using a loop in Python since it doesn't require a constant Python interpreter process, and is probably better optimized than pyGTK (choice is mature, stable sof... | [
2
] | [] | [] | [
"python"
] | stackoverflow_0004268374_python.txt |
Q:
How can I use non ASCII file names in a zipped file and extract them using ZipFile in Python 2.4?
I have a file öl_och_ål_är_gott.txt inside a zip archive named öl_och_ål_är_gott.zip. The archive isn't created using zipfile. It could come from any software capable of creating a zip archive.
src = open(file_path, "rb" )
zip_file = ZipFile(src)
for info in zip_file.infolist():
    print info.filename
...
prints out:
”l_och_†l_„r_gott.txt
How can I force zipfile to represent the name as I want it to be represented?
A:
As the docs state, there is no official file name encoding for ZIP files. If you have Unicode file names (as in your case), you must convert them to byte strings in your desired encoding before passing them.
DOCS
Though I don't know why it doesn't work for you.
>>> src = open('/Desktop/test.zip', 'rb')
>>> zip_file = zipfile.ZipFile(src)
>>> for info in zip_file.infolist():
... print info.filename
...
öl_och_ål_är_gott
On my Ubuntu box.
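One likely explanation (an assumption, since the ZIP format of that era had no official name encoding): most tools stored names in CP437, and Python 2's zipfile hands back those raw bytes, which look garbled when displayed as cp1252. Decoding them yourself often recovers the name — in the question's code that would be info.filename.decode('cp437'). A small demonstration, in modern Python:

```python
# The file name as CP437 bytes, as a DOS-era ZIP tool would store it.
raw = "öl_och_ål_är_gott.txt".encode("cp437")

# Interpreted as cp1252 (e.g. a Windows console), the bytes look garbled
# exactly like the output in the question:
print(raw.decode("cp1252"))  # ”l_och_†l_„r_gott.txt

# Decoding as CP437 recovers the intended name:
print(raw.decode("cp437"))   # öl_och_ål_är_gott.txt
```

Archives created with other tools may use other encodings, so CP437 is a guess worth verifying against your particular zip program.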
| How can I use non ASCII file names in a zipped file and extract them using ZipFile in Python 2.4? | I have a file öl_och_ål_är_gott.txt inside a zip archive named öl_och_ål_är_gott.zip. The archive isn't created using zipfile. It could come from any software capable of creating a zip archive.
src = open(file_path, "rb" )
zip_file = ZipFile(src)
for info in zip_file.infolist():
    print info.filename
...
prints out:
”l_och_†l_„r_gott.txt
How can I force zipfile to represent the name as I want it to be represented?
| [
"As the docs state, there is no official file name encoding for ZIP files. If you have Unicode file names (as in your case), you must convert them to byte strings in your desired encoding before passing them.\nDOCS\nThough I don't know why it doesn't work for you.\n>>> src = open('/Desktop/test.zip', 'rb')\n>>> zip... | [
1
] | [] | [] | [
"character_encoding",
"python",
"python_zipfile"
] | stackoverflow_0004269084_character_encoding_python_python_zipfile.txt |
Q:
mixing 2d and 3d in opengl (using pyglet)
I am trying to mix 2d and 3d in opengl in pyglet, i.e. draw a 3d scene then switch to orthographic projection and draw stuff over the top. I draw the 3d stuff, push the projection matrix to the
stack, do a glOrtho projection matrix, draw the 2d stuff, then pop the previous matrix off the stack.
The 3d stuff draws fine but for some reason the 2d part isn't drawing at all, even on its own.
Here's the code:
class Window(pyglet.window.Window):
    # resolution
    width, height = 1024, 786

    def __init__(self, width, height):
        # initialise window
        super(Window, self).__init__(width, height)
        # set title
        self.set_caption("OpenGL Doss")
        # call update() at 30fps
        pyglet.clock.schedule_interval(self.update, 1 / 30.0)
        glEnable(GL_TEXTURE_2D) # enable textures
        glShadeModel(GL_SMOOTH) # smooth shading of polygons
        glClearColor(0.0, 0.0, 0.0, 0.0)
        glClearDepth(1.0)
        glDepthFunc(GL_LEQUAL)
        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST) # make stuff look nice
        self.world = World() # initialise world
        self.label = pyglet.text.Label('Hello, world',
                                       font_name='Times New Roman',
                                       font_size=20,
                                       width=10, height=10)

    def on_resize(self, width, height):
        print 'on resize'
        if height == 0:
            height = 1
        glViewport(0, 0, width, height) # specify viewport
        # load perspective projection matrix
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        gluPerspective(45, 1.0 * width / height, 0.1, 100.0)
        #glLoadIdentity()

    def on_draw(self):
        self.set3d()
        # draw 3d stuff
        self.world.draw()
        self.set2d()
        # draw 2d stuff
        self.draw2d()
        self.unSet2d()

    def update(self, dt):
        "called at set interval during runtime"
        #maze = self.world.maze
        maze_platform = self.world.maze_platform
        pacman = maze_platform.maze.pacman
        maze_platform.update()
        # send it world pointer
        pacman.update(self.world)

    def on_key_press(self, symbol, modifiers):
        control.press(symbol, modifiers)

    def on_key_release(self, symbol, modifiers):
        control.release(symbol, modifiers)

    def set3d(self):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glEnable(GL_DEPTH_TEST) # enable depth testing
        # reset modelview matrix
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()

    def set2d(self):
        glDisable(GL_DEPTH_TEST)
        # store the projection matrix to restore later
        glMatrixMode(GL_PROJECTION)
        glPushMatrix()
        # load orthographic projection matrix
        glLoadIdentity()
        #glOrtho(0, float(self.width),0, float(self.height), 0, 1)
        far = 8192
        glOrtho(-self.width / 2., self.width / 2., -self.height / 2., self.height / 2., 0, far)
        # reset modelview
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        #glClear(GL_COLOR_BUFFER_BIT)

    def unSet2d(self):
        # load back the projection matrix saved before
        glMatrixMode(GL_PROJECTION)
        glPopMatrix()

    def draw2d(self):
        z = -6
        n = 100
        glTranslatef(0, 0.0, -z)
        glBegin(GL_TRIANGLES)
        glVertex3f(0.0, n, 0.0)
        glVertex3f(-n, -n, 0)
        glVertex3f(n, -n, 0)
        glEnd()


def main():
    window = Window(Window.width, Window.height)
    pyglet.app.run()
    print 'framerate:', pyglet.clock.get_fps(), '(error checking = %s)' % pyglet.options['debug_gl']

if __name__ == '__main__': main()
#command = 'main()'
#cProfile.run(command)
A:
I would recommend that you fully reset the modelview and projection matrices on each render, and then don't use push/pop when you go from 3d to 2d.
However, I suspect that you are using bad coordinates so the scene is drawing outside the clip planes. In partciular I am a tad suspicious of putting the near clipping plane at zero. Normally 2d elements are drawn with z=0.
Try putting the near clip-plane at -1.
I'm also a bit unsure why you're calling glTranslatef(0, 0.0, -z) in draw2d, I wouldn't bother.
A:
In draw2d() try glDisable(GL_TEXTURE_2D) and glColor3ub(255,255,255) before drawing your triangle.
Make sure to re-glEnable(GL_TEXTURE_2D) before calling world.draw() again if it uses textured geometry.
| mixing 2d and 3d in opengl (using pyglet) | I am trying to mix 2d and 3d in opengl in pyglet, i.e. draw a 3d scene then switch to orthographic projection and draw stuff over the top. I draw the 3d stuff, push the projection matrix to the
stack, do a glOrtho projection matrix, draw the 2d stuff, then pop the previous matrix off the stack.
The 3d stuff draws fine but for some reason the 2d part isn't drawing at all, even on its own.
Here's the code:
class Window(pyglet.window.Window):
    # resolution
    width, height = 1024, 786

    def __init__(self, width, height):
        # initialise window
        super(Window, self).__init__(width, height)
        # set title
        self.set_caption("OpenGL Doss")
        # call update() at 30fps
        pyglet.clock.schedule_interval(self.update, 1 / 30.0)
        glEnable(GL_TEXTURE_2D) # enable textures
        glShadeModel(GL_SMOOTH) # smooth shading of polygons
        glClearColor(0.0, 0.0, 0.0, 0.0)
        glClearDepth(1.0)
        glDepthFunc(GL_LEQUAL)
        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST) # make stuff look nice
        self.world = World() # initialise world
        self.label = pyglet.text.Label('Hello, world',
                                       font_name='Times New Roman',
                                       font_size=20,
                                       width=10, height=10)

    def on_resize(self, width, height):
        print 'on resize'
        if height == 0:
            height = 1
        glViewport(0, 0, width, height) # specify viewport
        # load perspective projection matrix
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        gluPerspective(45, 1.0 * width / height, 0.1, 100.0)
        #glLoadIdentity()

    def on_draw(self):
        self.set3d()
        # draw 3d stuff
        self.world.draw()
        self.set2d()
        # draw 2d stuff
        self.draw2d()
        self.unSet2d()

    def update(self, dt):
        "called at set interval during runtime"
        #maze = self.world.maze
        maze_platform = self.world.maze_platform
        pacman = maze_platform.maze.pacman
        maze_platform.update()
        # send it world pointer
        pacman.update(self.world)

    def on_key_press(self, symbol, modifiers):
        control.press(symbol, modifiers)

    def on_key_release(self, symbol, modifiers):
        control.release(symbol, modifiers)

    def set3d(self):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glEnable(GL_DEPTH_TEST) # enable depth testing
        # reset modelview matrix
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()

    def set2d(self):
        glDisable(GL_DEPTH_TEST)
        # store the projection matrix to restore later
        glMatrixMode(GL_PROJECTION)
        glPushMatrix()
        # load orthographic projection matrix
        glLoadIdentity()
        #glOrtho(0, float(self.width),0, float(self.height), 0, 1)
        far = 8192
        glOrtho(-self.width / 2., self.width / 2., -self.height / 2., self.height / 2., 0, far)
        # reset modelview
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        #glClear(GL_COLOR_BUFFER_BIT)

    def unSet2d(self):
        # load back the projection matrix saved before
        glMatrixMode(GL_PROJECTION)
        glPopMatrix()

    def draw2d(self):
        z = -6
        n = 100
        glTranslatef(0, 0.0, -z)
        glBegin(GL_TRIANGLES)
        glVertex3f(0.0, n, 0.0)
        glVertex3f(-n, -n, 0)
        glVertex3f(n, -n, 0)
        glEnd()


def main():
    window = Window(Window.width, Window.height)
    pyglet.app.run()
    print 'framerate:', pyglet.clock.get_fps(), '(error checking = %s)' % pyglet.options['debug_gl']

if __name__ == '__main__': main()
#command = 'main()'
#cProfile.run(command)
| [
"I would recommend that you fully reset the modelview and projection matrices on each render, and then don't use push/pop when you go from 3d to 2d.\nHowever, I suspect that you are using bad coordinates so the scene is drawing outside the clip planes. In partciular I am a tad suspicious of putting the near clippin... | [
2,
0
] | [] | [] | [
"opengl",
"pyglet",
"python"
] | stackoverflow_0004269079_opengl_pyglet_python.txt |
Q:
Combiner function in python hadoop streaming
I have a mapper that outputs key and value, which is sorted and piped into reducer.py.
As the keys are already sorted, before I get to the reducer I want to write a combiner which iterates through the sorted list and outputs key, [v1,v2,v3] pairs which will be used in the reducer.
cat data | python mapper.py | sort | python reducer.py
What's the best mechanism to write a reducer so that I won't need a dictionary containing all keys, which would take a lot of memory to hold the entries.
A:
Use itertools.groupby:
>>> import itertools
>>> import operator
>>> foo = [("a", 1), ("a", 2), ("b", 1), ("c", 1), ("c", 2)]
>>> for group in itertools.groupby(foo, operator.itemgetter(0)):
... print group[0], list(map(operator.itemgetter(1), group[1]))
...
a [1, 2]
b [1]
c [1, 2]
Explanation:
groupby, as the name suggest, groups elements of an iterable into chunks based on some key function. That is, it calls keyfunc on the first element of the iterable, then pulls elements one by one from the iterable until the value of keyfunc changes, at which point it yields all of the elements it has got so far and starts again from the new key. It is also sensible and does not consume more memory than necessary; once values have been yielded they are no longer held by groupby.
Here, we group the elements of the input by operator.itemgetter(0), which is a useful "toolbox" function which maps x to x[0]. In other words, we group by the first element of the tuple, which is a key.
Naturally, you will need to write a custom generator to handle reading the input (from sys.stdin, probably) and yield them one by one. Fortunately, this is pretty easy, using the yield keyword.
Note also that this assumes that the keys are sorted. Naturally, if they are not sorted there is nothing you can do: you would need to look until the end of the input to ensure that you have all values for a given key.
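Following the note above about a custom stdin-reading generator, here is one hedged sketch of a complete combiner (the tab-separated key\tvalue line format is the usual streaming convention, but adjust to your mapper's actual output):

```python
import itertools
import operator

def parse(lines):
    """Yield (key, value) pairs from mapper output lines of the form "key\\tvalue"."""
    for line in lines:
        key, _, value = line.rstrip("\n").partition("\t")
        yield key, value

def combine(pairs):
    """Group consecutive pairs by key; the input must be sorted by key."""
    for key, group in itertools.groupby(pairs, operator.itemgetter(0)):
        yield key, [value for _, value in group]

# Example (in the real combiner, lines would come from sys.stdin):
lines = ["a\t1\n", "a\t2\n", "b\t1\n"]
for key, values in combine(parse(lines)):
    print(key, values)
```

Because groupby is lazy, only one key's values are held in memory at a time, which is exactly the memory behavior the question asks for.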
| Combiner function in python hadoop streaming | I have a mapper that outputs key and value, which is sorted and piped into reducer.py.
As the keys are already sorted, before I get to the reducer I want to write a combiner which iterates through the sorted list and outputs key, [v1,v2,v3] pairs which will be used in the reducer.
cat data | python mapper.py | sort | python reducer.py
What's the best mechanism to write a reducer so that I won't need a dictionary containing all keys, which would take a lot of memory to hold the entries.
| [
"Use itertools.groupby:\n>>> import itertools\n>>> import operator\n>>> foo = [(\"a\", 1), (\"a\", 2), (\"b\", 1), (\"c\", 1), (\"c\", 2)]\n>>> for group in itertools.groupby(foo, operator.itemgetter(0)):\n... print group[0], list(map(operator.itemgetter(1), group[1]))\n...\na [1, 2]\nb [1]\nc [1, 2]\n\n\nExpla... | [
5
] | [] | [] | [
"hadoop",
"mapreduce",
"python"
] | stackoverflow_0004269355_hadoop_mapreduce_python.txt |
Q:
Does Django or mod_wsgi modify sys.path when it's running?
I set up mod_wsgi and checked that it works fine.
I also came up with simple django project, and also checked it works fine with the following command
django-admin.py runserver --settings=mysite.settings
However, when I run the following wsgi,
import os
import sys
mysite = '/Users/smcho/Desktop/django/mysite'
if mysite not in sys.path:
    sys.path.insert(0,'/Users/smcho/Desktop/django/mysite')
django = '/Users/smcho/Desktop/django'
if django not in sys.path:
    sys.path.insert(0,'/Users/smcho/Desktop/django')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
I got this error.
[Wed Nov 24 10:36:45 2010] [error] [client 127.0.0.1] ImportError: Could not import settings 'mysite.settings' (Is it on sys.path? Does it have syntax errors?): No module named mysite.settings
By running following python code with mod_wsgi, I learned that "/Library/Python/2.6/site-packages" is in sys.path.
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World! WSGI working perfectly!\n'*10
    for i in sys.path:
        output += str(i) + '\n'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
And this is the sys.path that I got. It doesn't have the path from PYTHONPATH.
/Library/Python/2.6/site-packages/ply-3.3-py2.6.egg
...
/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload
/Library/Python/2.6/site-packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/PyObjC
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode
So, I copied 'mysite' django project to '/Library/Python/2.6/site-packages', and now it works OK.
What's wrong with this? It's very likely that even though I added my django project directory to sys.path, django doesn't know (or disregards) this path and the paths from PYTHONPATH environment variable.
sys.path.insert(0,'/Users/smcho/Desktop/django/mysite')
sys.path.insert(0,'/Users/smcho/Desktop/django')
Does Django modify sys.path when it's running? If so, how do I prevent this? If not, why doesn't Django know about my appended path?
This is the system that I'm using.
OS : Mac OS X 10.6.5
Python : 2.6.1
Django : 1.2.3
SOLVED
Please refer to my other post. In short, I moved my django project to the www doc directory that is accessible from the web, and everything's OK.
A:
You only check that '/Users/smcho/Desktop/django/mysite' is in sys.path, but if not you add that and the '/Users/smcho/Desktop/django'. To import mysite.settings, Python needs the latter, not the former. So you should be sure to add that.
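In concrete terms (paths copied from the question), a sketch of making sure the *parent* of the project package is on sys.path, since that is what "import mysite.settings" actually resolves against:

```python
import sys

# Hypothetical paths from the question; the parent directory of the
# "mysite" package is the one Python needs for "mysite.settings".
for path in ('/Users/smcho/Desktop/django',
             '/Users/smcho/Desktop/django/mysite'):
    if path not in sys.path:
        sys.path.insert(0, path)
```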
| Does Django or mod_wsgi modify sys.path when it's running? | I set up mod_wsgi and checked that it works fine.
I also came up with simple django project, and also checked it works fine with the following command
django-admin.py runserver --settings=mysite.settings
However, when I run the following wsgi,
import os
import sys
mysite = '/Users/smcho/Desktop/django/mysite'
if mysite not in sys.path:
    sys.path.insert(0,'/Users/smcho/Desktop/django/mysite')
django = '/Users/smcho/Desktop/django'
if django not in sys.path:
    sys.path.insert(0,'/Users/smcho/Desktop/django')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
I got this error.
[Wed Nov 24 10:36:45 2010] [error] [client 127.0.0.1] ImportError: Could not import settings 'mysite.settings' (Is it on sys.path? Does it have syntax errors?): No module named mysite.settings
By running following python code with mod_wsgi, I learned that "/Library/Python/2.6/site-packages" is in sys.path.
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World! WSGI working perfectly!\n'*10
    for i in sys.path:
        output += str(i) + '\n'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
And this is the sys.path that I got. It doesn't have the path from PYTHONPATH.
/Library/Python/2.6/site-packages/ply-3.3-py2.6.egg
...
/Library/Python/2.6/site-packages/nose-0.11.4-py2.6.egg
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload
/Library/Python/2.6/site-packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/PyObjC
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode
So, I copied 'mysite' django project to '/Library/Python/2.6/site-packages', and now it works OK.
What's wrong with this? It's very likely that even though I added my django project directory to sys.path, django doesn't know (or disregards) this path and the paths from PYTHONPATH environment variable.
sys.path.insert(0,'/Users/smcho/Desktop/django/mysite')
sys.path.insert(0,'/Users/smcho/Desktop/django')
Does Django modify sys.path when it's running? If so, how do I prevent this? If not, why doesn't Django know about my appended path?
This is the system that I'm using.
OS : Mac OS X 10.6.5
Python : 2.6.1
Django : 1.2.3
SOLVED
Please refer to my other post. In short, I moved my django project to the www doc directory that is accessible from the web, and everything's OK.
| [
"You only check that '/Users/smcho/Desktop/django/mysite' is in sys.path, but if not you add that and the '/Users/smcho/Desktop/django'. To import mysite.settings, Python needs the latter, not the former. So you should be sure to add that.\n"
] | [
1
] | [] | [] | [
"django",
"mod_wsgi",
"python"
] | stackoverflow_0004269445_django_mod_wsgi_python.txt |
Q:
Should I wait for Django to start supporting Python 3?
I have a website idea that I'm very excited about, and I love Python. So, I'm interested in using Django. However, I started learning Python in version 3.1, and Django currently only supports various 2.x versions. I've searched for information about when Django will start supporting Python 3.x, and gotten mostly articles from a year or two ago that say it will take a year or two. Meanwhile, the Django FAQ says that there are no immediate plans.
I'm reluctant to build in an old version of Python and then either be stuck with it or go through a huge ordeal trying to migrate later. My original solution to this was to start learning PHP instead and then choose a framework, but as it turns out I don't really like PHP. Django it is, then.
If I decide to wait for a 3.x-compatible version, I can teach myself more front-end web development (and more Python) in the meantime. I've only been at this for a few months so I have a lot to learn. But I don't want to wait forever, and I realize that even after a 3.x-compatible version comes out it will take a while for third-party APIs to catch up. What do you all recommend?
A:
No. Don't wait.
Why? Pretty much all django libraries are written for Python 2.x, and if you ever plan on using any of them with Python 3 with the next major release of Django then you'll be waiting not 1 but 3-4 years when everyone starts converting their code.
In this time, you could have already mastered django and could have worked and launched many sites, could've got a Django gig, etc.
Start immediately and don't postpone!
A:
Python 2 will still live for a very long time. Actually, there's no really good reason to use Python 3 right now unless you need Python3 features which are not available as future imports and know that you won't ever need to use 3rd party modules which might not be Python3-compatible.
So the best solution is doing your app now using Python 2 instead of waiting; especially if the app will be useful for you now.
A:
I recommend you learn the frameworks on the old version now, and let 2to3 figure it out when the time comes.
| Should I wait for Django to start supporting Python 3? | I have a website idea that I'm very excited about, and I love Python. So, I'm interested in using Django. However, I started learning Python in version 3.1, and Django currently only supports various 2.x versions. I've searched for information about when Django will start supporting Python 3.x, and gotten mostly articles from a year or two ago that say it will take a year or two. Meanwhile, the Django FAQ says that there are no immediate plans.
I'm reluctant to build in an old version of Python and then either be stuck with it or go through a huge ordeal trying to migrate later. My original solution to this was to start learning PHP instead and then choose a framework, but as it turns out I don't really like PHP. Django it is, then.
If I decide to wait for a 3.x-compatible version, I can teach myself more front-end web development (and more Python) in the meantime. I've only been at this for a few months so I have a lot to learn. But I don't want to wait forever, and I realize that even after a 3.x-compatible version comes out it will take a while for third-party APIs to catch up. What do you all recommend?
| [
"No. Don't wait. \nWhy? Pretty much all django libraries are written for Python 2.x, and if you ever plan on using any of them with Python 3 with the next major release of Django then you'll be waiting not 1 but 3-4 years when everyone starts converting their code.\nIn this time, you could have already mastered dja... | [
14,
3,
2
] | [] | [] | [
"django",
"python",
"python_3.x"
] | stackoverflow_0004270192_django_python_python_3.x.txt |
Q:
Python cumulative histogram of 2D array
I have the following problem. I have a 2D array of N pairs, e.g. x = [[5,2],[10,5],[3,2],...]
(so a set of arrays a = [5,10,3,...] and b = [2,5,2,...])
The first column (a) corresponds to the number of items.
The second column (b) is time taken to obtain the items in column (a).
I want to plot a cumulative histogram of the total time taken to obtain the items.
The x axis will be in bins of array (a), and the y axis should be the sum of the times from array (b) for each bin of (a). i.e. I want to plot "Nr of items"-vs-"Total time to obtain (cumulative)" as opposed to the default "Nr of items"-vs-"Nr of instances in array (a)"
I hope that makes some sense.
A:
Any chance this is what you are talking about?
>>> pairs = [[5,2],[10,5],[3,2]]
>>> a, b = zip(*pairs)
>>> x = list(a)
>>> y = [reduce(lambda c, d: c+d, b[:i], 0) for i in range(1, len(b)+1)]
>>> x
[5, 10, 3]
>>> y
[2, 7, 9]
Here each resulting y value is the sum of all values from b up to and including that index.
A:
I tend to be a big fan of matplotlib (http://matplotlib.sourceforge.net/) these days. It's got lots of built-in functionality for just about every type of plotting you'll want to do.
Here are a whole bunch of examples on how to create histograms (with images and source code available):
http://matplotlib.sourceforge.net/examples/pylab_examples/histogram_demo_extended.html
Here's the documentation of the hist() function itself:
http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.hist
If that's not quite what you want, you can browse the gallery and look for a more fitting plot type. They all have source code available:
http://matplotlib.sourceforge.net/gallery.html
Hopefully that's what you're looking for.
Adding an example. So is this more along the lines of what you're looking for? (Not a histogram really anymore):
If so, here's the code to generate it (x is the sample input):
from pylab import *
x = [[5,2],[10,5],[3,2],[5,99],[10,22],[3,15],[4,30]]
a,b = zip(*x) #Unzip x into a & b as per your example
#Make a dictionary where the key is the item from a and the value
#is the sum of all the corresponding entries in b
sums = {}
for i in range(0,len(a)):
sums[a[i]] = b[i] if not a[i] in sums else sums[a[i]] + b[i]
#Plot it
ylabel('Bins')
xlabel('Total Times')
barh(sums.keys(),sums.values(),align='center')
show()
If not, I will give up and admit I'm still not quite understanding what you want. Good luck!
A:
I'm not sure that's what you want...
x = [[5,2],[10,5],[3,2]]
a,b=zip(*x) #(5, 10, 3),(2, 5, 2)
tmp = []
for i in range(len(a)):
tmp.extend(b[i:i+1]*a[i]) #[2, 2, 2, 2, 2, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 2, 2, 2]
def cum(l):
c=0
for i in range(len(l)):
c+=l[i]
yield c
y=list(cum(tmp)) #[2, 4, 6, 8, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 62, 64, 66]
list(zip(range(1,1+len(y)),y)) #[(1, 2), (2, 4), (3, 6), (4, 8), (5, 10), (6, 15), (7, 20), (8, 25), (9, 30), (10, 35), (11, 40), (12, 45), (13, 50), (14, 55), (15, 60), (16, 62), (17, 64), (18, 66)]
| Python cumulative histogram of 2D array | I have the following problem. I have a 2D array of N pairs. e.g: x = [[5,2],[10,5],[3,2],...]
(so a set of arrays a = [5,10,3,...] and b = [2,5,2,...])
The first column (a) corresponds to the number of items.
The second column (b) is time taken to obtain the items in column (a).
I want to plot a cumulative histogram of the total time taken to obtain the items.
The x axis will be in bins of array (a), and the y axis should be the sum of the times from array (b) for each bin of (a). i.e. I want to plot "Nr of items"-vs-"Total time to obtain (cumulative)" as opposed to the default "Nr of items"-vs-"Nr of instances in array (a)"
I hope that makes some sense.
| [
"Any chance this is what you are talking about?\n>>> pairs = [[5,2],[10,5],[3,2]]\n>>> a, b = zip(*pairs)\n>>> x = list(a)\n>>> y = [reduce(lambda c, d: c+d, b[:i], 0) for i in range(1, len(b)+1)]\n>>> x\n[5, 10, 3]\n>>> y\n[2, 7, 9]\n\nHere the resulting y values is the sum of all values from b up to that index. \... | [
2,
1,
1
] | [] | [] | [
"histogram",
"python"
] | stackoverflow_0004269929_histogram_python.txt |
Q:
How do I require Tkinter with distutils?
I'm trying to compile a program using distutils but I want to make sure that the user has Tkinter installed before installing my package.
My Google searches have failed to turn up any useful info, any clue how I'd do this?
Thanks,
Wayne
A:
You can have a class that inherits from install and then do this:
from distutils.command.install import install
class Install(install):
def run(self):
if not check_dependencies():
# Tkinter was not installed, handle this here
install.run(self) # proceed with the installation
def check_dependencies():
try:
return __import__('Tkinter')
except ImportError:
return None
A:
Unfortunately there is no standard cross-platform way to force Tkinter to be installed. Tkinter is part of the Python standard library so distributors who strip out Tkinter, or other standard library modules, and package them as optional entities are doing so using their own package management tools and, in general, you'd need to know the specific commands for each distribution. The best you can do in general is test for and fail gracefully if Tkinter (or tkinter in Python 3) is not importable, so something like:
import sys
try:
import Tkinter
except ImportError:
sys.exit("Tkinter not found")
A:
Tkinter is in the python standard library, it should always be there.
| How do I require Tkinter with distutils? | I'm trying to compile a program using distutils but I want to make sure that the user has Tkinter installed before installing my package.
My Google searches have failed to turn up any useful info, any clue how I'd do this?
Thanks,
Wayne
| [
"You can have a class that inherits from install and then do this:\nfrom distutils.command.install import install\n\nclass Install(install):\n def run(self):\n if not check_dependencies():\n # Tkinter was not installed, handle this here\n install.run(self) # proceed with the installatio... | [
2,
1,
0
] | [] | [] | [
"distutils",
"python",
"tkinter"
] | stackoverflow_0004269541_distutils_python_tkinter.txt |
Q:
Django: retrieve session or cookies in middleware
can i get the current session or cookie inside a middleware?
I tried but I got:
'WSGIRequest' object has no attribute 'session'
A:
This will work in any middleware that comes after django.contrib.sessions.middleware.SessionMiddleware in settings.py.
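To illustrate, here is a minimal sketch of what such middleware might look like (the class name and session key are invented for the example; this assumes an old-style middleware class listed after SessionMiddleware in MIDDLEWARE_CLASSES):

```python
# Hedged sketch: request.session only exists here because Django's
# SessionMiddleware already ran and attached it to the request.
class SessionInspectMiddleware(object):
    def process_request(self, request):
        # Read/write the session like a dict.
        visits = request.session.get('visits', 0)
        request.session['visits'] = visits + 1
        # Raw cookies are always available on the request object:
        request.session['sessionid_seen'] = 'sessionid' in request.COOKIES
        return None  # None means "continue processing the request"
```

If this middleware is listed before SessionMiddleware instead, `request.session` will not exist yet and you get exactly the `'WSGIRequest' object has no attribute 'session'` error above.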
| Django: retrieve session or cookies in middleware | can i get the current session or cookie inside a middleware?
I tried but I got:
'WSGIRequest' object has no attribute 'session'
| [
"This will work in any middleware that comes after django.contrib.sessions.middleware.SessionMiddleware in settings.py. \n"
] | [
5
] | [] | [] | [
"django",
"django_middleware",
"django_sessions",
"python"
] | stackoverflow_0004270389_django_django_middleware_django_sessions_python.txt |
Q:
How can I restore EXIF data after resizing an image with PIL?
This question has been asked before, but it was answered a few years ago; the answer refers to a broken link and is probably no longer the best method.
pyexiv2 looks like it would do the task, but it has a lot of dependencies for a seemingly simple task.
I'd also like to know what values will no longer be valid for the resized image. Width and Height being the obvious ones.
A:
Just in case someone would like to use pyexiv2, here is a solution: the image is resized using the PIL library, the EXIF data is copied with pyexiv2, and the image-size EXIF fields are set to the new size.
import pyexiv2
import tempfile
from PIL import Image
def resize_image(source_path, dest_path, size):
# resize image
image = Image.open(source_path)
image.thumbnail(size, Image.ANTIALIAS)
image.save(dest_path, "JPEG")
# copy EXIF data
source_image = pyexiv2.Image(source_path)
source_image.readMetadata()
dest_image = pyexiv2.Image(dest_path)
dest_image.readMetadata()
source_image.copyMetadataTo(dest_image)
# set EXIF image size info to resized size
dest_image["Exif.Photo.PixelXDimension"] = image.size[0]
dest_image["Exif.Photo.PixelYDimension"] = image.size[1]
dest_image.writeMetadata()
# resizing local file
resize_image("41965749.jpg", "resized.jpg", (600,400))
| How can I restore EXIF data after resizing an image with PIL? | This question has been asked before, but it was answered a few years ago; the answer refers to a broken link and is probably no longer the best method.
pyexiv2 looks like it would do the task, but it has a lot of dependencies for a seemingly simple task.
I'd also like to know what values will no longer be valid for the resized image. Width and Height being the obvious ones.
| [
"Just in case someone would like to use piexiv2, here is a solution: image is resized using PIL library, EXIF data copied with pyexiv2 and image size EXIF fields are set with new size.\nimport pyexiv2\nimport tempfile\nfrom PIL import Image\n\n\ndef resize_image(source_path, dest_path, size):\n # resize image\n ... | [
3
] | [] | [] | [
"exif",
"python",
"python_imaging_library"
] | stackoverflow_0004029353_exif_python_python_imaging_library.txt |
Q:
Python inheritance - One constructor that calls a method that's overriden in child classes, which method is used?
I'm writing a class in python that has multiple subclasses in it I have:
class Parent:
def __init__(self, parameters):
self.MethodA(parameters)
def MethodA(parameters):
doStuff
class child1(Parent):
def MethodA(parameters):
doOtherStuff
Which method will be used when I make an object of type child1?
A:
Try it and see:
class Parent(object):
def __init__(self, params):
self.method(params)
def method(self, params):
print "Parent's method called with", params
class Child(Parent):
def method(self, params):
print "Child's method called with", params
Child('foo')
outputs:
Child's method called with foo
A:
child1.MethodA() would be called. Methods in most dynamic languages are essentially always virtual since the lookup of self is done at runtime.
A:
It can be usefull for you - method resolution order.
A:
All methods in Python are effectively virtual.
>>> class Parent(object):
def __init__(self):
self.MethodA()
def MethodA(self):
print 'A method'
>>> class child1(Parent):
def MethodA(self):
print 'child1 method'
>>> x = child1()
child1 method
>>> x.MethodA()
child1 method
| Python inheritance - One constructor that calls a method that's overriden in child classes, which method is used? | I'm writing a class in python that has multiple subclasses in it I have:
class Parent:
def __init__(self, parameters):
self.MethodA(parameters)
def MethodA(parameters):
doStuff
class child1(Parent):
def MethodA(parameters):
doOtherStuff
Which method will be used when I make an object of type child1?
| [
"Try it and see:\nclass Parent(object):\n def __init__(self, params):\n self.method(params)\n\n def method(self, params):\n print \"Parent's method called with\", params\n\nclass Child(Parent):\n def method(self, params):\n print \"Child's method called with\", params\n\nChild('foo')\n... | [
3,
1,
1,
0
] | [] | [] | [
"inheritance",
"python"
] | stackoverflow_0004270988_inheritance_python.txt |
Q:
Need to build a Python/Django "dashboard" to replace system emails to admins
For my python/django site I need to build a "dashboard" that will update me on the status of dozens of error/heartbeat/unexpected events going on.
There are a few types of "events" that I'm currently tracking by having the Django site send emails to the admin accounts:
1) Something that normally should happen goes wrong. We synch files to different services and other machines every few hours and I send error emails when this goes wrong.
2) When something that should happen actually happens. Sometimes events in item #1 fail so horribly that they don't even send emails (try: except: around an event should always work, but things can get deleted from the crontab, the system configuration can get knocked askew where things won't run, etc. where I won't even get an error email and the lack of a success/heartbeat email will let me know something that should have happened didn't happen.)
3) When anything unexpected happens. We've made a lot of assumptions on how backend operations will run and if any of these assumptions are violated (e.g. we find two users who have the same email address) we want to know about it. These events aren't necessarily errors, more like warnings to investigate.
So I want to build a dashboard that I can easily update from python/django to give me a bird's eye view of all of these types of activity so I can stop sending hundreds of emails out per week (which is already unmanageble.)
A:
Sounds like you want to create a basic logging system that outputs to a web page.
So you could write something simple app called, say, systemevents that creates an Event record each time something happens on the site. You'd add a signal hook so that anywhere in the rest of the site you could code something like:
from systemevents.signals import record_event
...
try:
# code goes here
except Exception, inst:
record_event("Error occurred while taunting %s: %s" % (obj, inst,), type="Error")
else:
record_event("Successfully taunted %s" % (obj, ), type="Success")
Then you can pretty easily create a view that lists these events.
However, keep in mind that this is adding a layer of complexity that is highly problematic. What if the error lies in your database? Then each time you try to record an error event, another error occurs!
Far better to use something like a built-in logging system to create a text-based log file, then whip up something that can import that text file and lay it out in a somewhat more readable fashion.
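As a sketch of that logging-based alternative (the file path, logger name, and format string are illustrative choices, not anything prescribed by the site's code):

```python
import logging

def make_event_logger(path):
    # Write system events to a plain text file instead of the database,
    # so recording an error never depends on the database being healthy.
    logger = logging.getLogger('systemevents')
    handler = logging.FileHandler(path)
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# Usage:
# logger = make_event_logger('/var/log/systemevents.log')
# logger.info('Successfully taunted %s', some_obj)
# logger.error('Error occurred while taunting %s: %s', some_obj, exc)
```

A dashboard view can then simply parse this file and render it, rather than writing events through the ORM.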
One more tip: in order to change how Django handles exceptions, you have to write a custom view for 500 errors. If using systemevents, you'd write something like:
from django.views.defaults import server_error
def custom_error_view(request):
try:
import sys
type, value, tb = sys.exc_info()
error_message = "" # create an error message from the values above
record_event("Error occurred: %s" % (error_message,), type="Error")
except Exception:
pass
return server_error(request)
Note that none of this code has been tested for correctness. It's just meant as a guide.
A:
Have you tried looking at django-sentry?
http://dcramer.github.com/django-sentry/
| Need to build a Python/Django "dashboard" to replace system emails to admins | For my python/django site I need to build a "dashboard" that will update me on the status of dozens of error/heartbeat/unexpected events going on.
There are a few types of "events" that I'm currently tracking by having the Django site send emails to the admin accounts:
1) Something that normally should happen goes wrong. We synch files to different services and other machines every few hours and I send error emails when this goes wrong.
2) When something that should happen actually happens. Sometimes events in item #1 fail so horribly that they don't even send emails (try: except: around an event should always work, but things can get deleted from the crontab, the system configuration can get knocked askew where things won't run, etc. where I won't even get an error email and the lack of a success/heartbeat email will let me know something that should have happened didn't happen.)
3) When anything unexpected happens. We've made a lot of assumptions on how backend operations will run and if any of these assumptions are violated (e.g. we find two users who have the same email address) we want to know about it. These events aren't necessarily errors, more like warnings to investigate.
So I want to build a dashboard that I can easily update from python/django to give me a bird's eye view of all of these types of activity so I can stop sending hundreds of emails out per week (which is already unmanageble.)
| [
"Sounds like you want to create a basic logging system that outputs to a web page.\nSo you could write something simple app called, say, systemevents that creates an Event record each time something happens on the site. You'd add a signal hook so that anywhere in the rest of the site you could code something like:\... | [
3,
2
] | [] | [] | [
"dashboard",
"django",
"python"
] | stackoverflow_0004268937_dashboard_django_python.txt |
Q:
pygame OpenGL windows not refreshing on Mac OS X 10.6, Python 2.7
I posted this on the pygame mailing list but maybe someone here will have an answer. I can't be sure whether it's a pygame problem or an SDL problem, really.
Essentially, I have some code that uses PyOpenGL and pygame to render rudimentary animations. It works fine under Linux but for some reason, the pygame windows on my Mac don't refresh unless I click outside the window to "unfocus", at which point they refresh once.
To install SDL and pygame I followed the instructions here. I should note that
2D pygame examples such as pygame.examples.aliens work fine, while 3D ones such as pygame.examples.glcube exhibit the same problem as my own code.
PyOpenGL demos work fine, so I'm assuming the problem isn't there.
I also see this printed to the console:
2010-11-12 00:31:51.328 python[75402:903] *** __NSAutoreleaseNoPool():
Object 0x101da6570 of class NSCFData autoreleased with no pool in
place - just leaking
Anyone know what that means?
A:
It turns out some sort of OS X driver glitch causes this when I Ctrl+C a pygame-based application, and the only fix is to reboot.
A:
I am afraid it's not an OpenGL problem. I had the refresh problem on both Windows 7 and Mac OS X 10.4.11. For some strange reason there was no refresh problem on Linux.
What I did to solve this issue was to detect the "unfocus" event, as the window had the refresh problem only when it was unfocused and another window passed in front of it. As soon as I detected the "unfocus" and/or "focus" event, I issued a redraw of the whole window.
By the way, I used none of the OpenGL bindings or techniques, just standard pygame functions, in particular the update function.
| pygame OpenGL windows not refreshing on Mac OS X 10.6, Python 2.7 | I posted this on the pygame mailing list but maybe someone here will have an answer. I can't be sure whether it's a pygame problem or an SDL problem, really.
Essentially, I have some code that uses PyOpenGL and pygame to render rudimentary animations. It works fine under Linux but for some reason, the pygame windows on my Mac don't refresh unless I click outside the window to "unfocus", at which point they refresh once.
To install SDL and pygame I followed the instructions here. I should note that
2D pygame examples such as pygame.examples.aliens work fine, while 3D ones such as pygame.examples.glcube exhibit the same problem as my own code.
PyOpenGL demos work fine, so I'm assuming the problem isn't there.
I also see this printed to the console:
2010-11-12 00:31:51.328 python[75402:903] *** __NSAutoreleaseNoPool():
Object 0x101da6570 of class NSCFData autoreleased with no pool in
place - just leaking
Anyone know what that means?
| [
"It turns out some sort of OS X driver glitch causes this when I Ctrl+C a pygame-based application, and the only fix is to reboot.\n",
"I am afraid its not a OpenGL problem. I had refresh problem in both windows 7 and MACOSX 10.4.11 . For a strange reason no refresh problem on Linux.\nWhat I did to solve this is... | [
1,
0
] | [] | [] | [
"macos",
"osx_snow_leopard",
"pygame",
"python",
"sdl"
] | stackoverflow_0004161926_macos_osx_snow_leopard_pygame_python_sdl.txt |
Q:
Functional programming in Python and C++
Is there any good book for functional programming in Python or C++ ? I need to master functional programming in those languages.
A:
By functional programming, I assume you mean referential transparency (basically, no global state or side-effects), plus things like functions as first-class objects, polymorphic types, partial function application etc.
There is no book that I know of that covers functional programming in C++. You could program without global state and side-effects in C++, and you can pass functions as arguments using function-typed pointers, but you couldn't get partial function application, nor anonymous lambda expressions.
A:
Text Processing in Python uses a functional style, and is what turned me on to functional programming. It's also a great Python/programming book in general, and I highly recommend it.
A:
Mmhh, if you want to learn functional programming you should learn a functional language first to really understand the principles; then you can try to apply them as well as you can, which will be harder.
In Python you can use functions to modify dictionaries, which is pretty functional. Make use of lambda with higher-order functions. You have to avoid classes and inheritance.
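A tiny sketch of those pieces in Python (higher-order functions with lambda, plus partial application from functools; the values are arbitrary examples):

```python
from functools import partial

nums = [1, 2, 3, 4]

# Higher-order functions with lambda:
squares = list(map(lambda n: n * n, nums))        # [1, 4, 9, 16]
evens = list(filter(lambda n: n % 2 == 0, nums))  # [2, 4]

# Partial application:
def add(a, b):
    return a + b

add_ten = partial(add, 10)
result = add_ten(5)  # 15

# "Modifying" a dict functionally: build a new one instead of mutating.
prices = {'apple': 1, 'pear': 2}
doubled = dict((k, v * 2) for k, v in prices.items())
```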
I can't really say much about C++. Maybe you can find some functional data structures and then write functions on those. Look for a library that provides functions like map, reduce, ...
C++0x should support closures and currying (more or less) so things will get better.
In general:
Try to write immutable layers on libraries (be aware that that won't perform too well)
Look for libraries that are written in a functional way
Edit: I would recommend learning Scheme; it's really small and you can pick it up fast. Read something like SICP or The Little Schemer, which teach you recursive thinking.
A:
Only a future version of C++ supports lambdas / anonymous functions. I guess Boost.Lambda supports C++ Functional programming, but it isn't really a first-class citizen of C++ yet.
Books about functional programming typically use functional languages. Like Haskell, Miranda, Lisp, Scheme, OCaml, Javascript and so forth.
EDIT: I'll withhold my opinion on Python for now. It appears I'm mistaken about a few things.
A:
In looking for info on functional programming in Python, I've found this web page to be very useful:
http://www.amk.ca/python/writing/functional
The references part contains a lot of crude information.
If you are looking for a book covering only functional programming in Python, I don't know about it.
C++ is more difficult. It has many of the ingredients (and it is steadily gaining more) for functional programming, but lacks many others. Actually, C++ was never designed for supporting functional programming, so it is typical that you'll be able to work with functional programming in some containers, but need to revert to imperative programming frequently.
A:
While not a book, here is a site that can at least get you started on some things. http://www.ibm.com/developerworks/library/l-prog.html
As far as really understanding functional programming I would suggest something like "The Little Schemer" to get a quick handle on scheme. You can then apply the ideas to python.
| Functional programming in Python and C++ | Is there any good book for functional programming in Python or C++ ? I need to master functional programming in those languages.
| [
"By functional programming, I assume you mean referential transparency (basically, no global state or side-effects), plus things like functions as first-class objects, polymorphic types, partial function application etc.\nThere is no book that I know of that covers functional programming in C++. You could program ... | [
6,
6,
3,
1,
0,
0
] | [
"For Perl, I can recommend \"Higher Order Perl\".\nDon't know about Python or C++ though.\n"
] | [
-1
] | [
"c++",
"functional_programming",
"python"
] | stackoverflow_0004268152_c++_functional_programming_python.txt |
Q:
Writing a Python RegEx to select a sub-set of list items in HTML
I have a web index view of a folder...
<ul><li><a href="/sustainabilitymedia/pics/s5/"> Parent Directory</a></li>
<li><a href="n150850_.jpg"> n150850_.jpg</a></li>
<li><a href="n150850_ss.jpg"> n150850_ss.jpg</a></li>
<li><a href="n150850q.jpg"> n150850q.jpg</a></li>
<li><a href="n150858_.jpg"> n150858_.jpg</a></li>
<li><a href="n150858_ss.jpg"> n150858_ss.jpg</a></li>
<li><a href="n150858q.jpg"> n150858q.jpg</a></li>
<li><a href="n150906_.jpg"> n150906_.jpg</a></li>
<li><a href="n150906_ss.jpg"> n150906_ss.jpg</a></li>
...
The list goes on and on and on. My goal is to grab only the list items ending in _ss.jpg so that I can render out my results and display them nicely on a page for presentation.
I can grab the page with BeautifulSoup but from there, im not sure how to filter out only list items matching a particular pattern. The page is behind Basic Auth which I have solved in a previous question regarding BeautifulSoup. Im happy to not use it either.
Any ideas?
A:
You can do a findAll() using a regex, for example soup_object.findAll('a', {'href': re.compile('.*_ss\.jpg')}).
A:
Brent's exactly right; +1 to him for being so fast.
I had already worked out an example so I figured I'd just post anyway (no need to vote on this):
>>> from BeautifulSoup import BeautifulSoup as bs
>>> from pprint import pprint
>>> import re
>>> markup = '''
... <ul><li><a href="/sustainabilitymedia/pics/s5/"> Parent Directory</a></li>
... <li><a href="n150850_.jpg"> n150850_.jpg</a></li>
... <li><a href="n150850_ss.jpg"> n150850_ss.jpg</a></li>
... <li><a href="n150850q.jpg"> n150850q.jpg</a></li>
... <li><a href="n150858_.jpg"> n150858_.jpg</a></li>
... <li><a href="n150858_ss.jpg"> n150858_ss.jpg</a></li>
... <li><a href="n150858q.jpg"> n150858q.jpg</a></li>
... <li><a href="n150906_.jpg"> n150906_.jpg</a></li>
... <li><a href="n150906_ss.jpg"> n150906_ss.jpg</a></li>'''
>>> soup = bs(markup)
>>> pprint(soup.findAll(href=re.compile('_ss[.]jpg$')))
[<a href="n150850_ss.jpg"> n150850_ss.jpg</a>,
<a href="n150858_ss.jpg"> n150858_ss.jpg</a>,
<a href="n150906_ss.jpg"> n150906_ss.jpg</a>]
Happy Thanksgiving to those who celebrate it.
A:
I would use something like:
import re

data = data.split("\n")
data = filter(lambda x: x.find("_ss.jpg") >= 0, data)
data = map(lambda x: re.search(r'(?<=href=")[^"]*_ss\.jpg(?=")', x).group(0), data)
this should produce a list of the names ending with _ss.jpg.
| Writing a Python RegEx to select a sub-set of list items in HTML | I have a web index view of a folder...
<ul><li><a href="/sustainabilitymedia/pics/s5/"> Parent Directory</a></li>
<li><a href="n150850_.jpg"> n150850_.jpg</a></li>
<li><a href="n150850_ss.jpg"> n150850_ss.jpg</a></li>
<li><a href="n150850q.jpg"> n150850q.jpg</a></li>
<li><a href="n150858_.jpg"> n150858_.jpg</a></li>
<li><a href="n150858_ss.jpg"> n150858_ss.jpg</a></li>
<li><a href="n150858q.jpg"> n150858q.jpg</a></li>
<li><a href="n150906_.jpg"> n150906_.jpg</a></li>
<li><a href="n150906_ss.jpg"> n150906_ss.jpg</a></li>
...
The list goes on and on and on. My goal is to grab only the list items ending in _ss.jpg so that I can render out my results and display them nicely on a page for presentation.
I can grab the page with BeautifulSoup but from there, im not sure how to filter out only list items matching a particular pattern. The page is behind Basic Auth which I have solved in a previous question regarding BeautifulSoup. Im happy to not use it either.
Any ideas?
| [
"You can do a findAll() using a regex, for example soup_object.findAll('a', {'href': re.compile('.*_ss\\.jpg')}).\n",
"Brent's exactly right; +1 to him for being so fast.\nI had already worked out an example so I figured I'd just post anyway (no need to vote on this): \n>>> from BeautifulSoup import BeautifulSou... | [
6,
1,
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"regex"
] | stackoverflow_0004271215_beautifulsoup_html_python_regex.txt |
Q:
App Engine: Is time.sleep() counting towards my quotas?
Hey. I'm working on an App Engine app that involves queries to the Google Maps API for geocoding. Google Maps doesn't like too many requests, so I put a 1-second delay between requests with time.sleep(1).
I noticed that my quotas are running low in my GAE dashboard and decided to run a short test:
import cProfile
import time
def foo():
time.sleep(3)
cProfile.run('foo()')
Which gave me the following output:
4 function calls in 3.003 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 3.003 3.003 <stdin>:1(foo)
1 0.000 0.000 3.003 3.003 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 3.003 3.003 3.003 3.003 {time.sleep}
So it says that it's consuming 3 CPU seconds for a time.sleep(3). Now I'm wondering if calls like these are counted towards the quota limits that GAE provides. And if it does, what is the other way of making delays between API calls for geocoding?
Thanks.
A:
You certainly don't want to be trying to sleep in a system that's designed completely from the ground up to finish requests in the absolute shortest time possible :D
What you could do instead, is create a task for each geocode, (check out the deferred library). You'd want to specify a queue for this task, then just set the rate limit on the queue to whatever you feel the maps geocoder might be comfortable with.
This way every geocode will run, and you'll never go faster than the rate limit you set, and you don't need to do any plumbing.
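For example, the rate cap could live in queue.yaml (the queue name and the 1/s rate are illustrative choices matching the delay the question wanted):

```yaml
queue:
- name: geocode
  rate: 1/s        # at most one Maps geocode request per second
  bucket_size: 1   # no bursting
```

Tasks created with the deferred library and directed at this queue (e.g. via its `_queue` option) would then be throttled by App Engine itself, with no sleeping in request handlers.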
A:
I am fairly certain that queue tasks also count towards your CPU usage in GAE. Regarding sleep(), I don't think there will be a CPU "penalty" from it, but I think it's bad style.
Why sleep at all? In your task, do a single geocoding and simply post another invocation of yourself to the queue in 3 seconds. See the countdown parameter when invoking http://code.google.com/intl/el/appengine/docs/python/taskqueue/functions.html#add
A:
Your experiment proves that the time.sleep time counts against your quota. Have a look at the experimental Task Queue API. If your task isn't user initiated, you could also use Cron tasks, but I don't know if this will work well with so small intervals.
A:
This issue reports that the reporter has not been billed for CPU seconds incurred by time.sleep(), but that they show up in their Appstats. It is very likely Appstats uses cProfile as well. Sleep is important for people trying to build better asynchronous proxies, which could be used for geocoding a larger set of items.
http://code.google.com/p/googleappengine/issues/detail?id=3291
| App Engine: Is time.sleep() counting towards my quotas? | Hey. I'm working on an App Engine app that involves queries to the Google Maps API for geocoding. Google Maps doesn't like too many requests, so I put a 1-second delay between requests with time.sleep(1).
I noticed that my quotas are running low in my GAE dashboard and decided to run a short test:
import cProfile
import time
def foo():
time.sleep(3)
cProfile.run('foo()')
Which gave me the following output:
4 function calls in 3.003 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 3.003 3.003 <stdin>:1(foo)
1 0.000 0.000 3.003 3.003 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 3.003 3.003 3.003 3.003 {time.sleep}
So it says that it's consuming 3 CPU seconds for a time.sleep(3). Now I'm wondering if calls like these are counted towards the quota limits that GAE provides. And if it does, what is the other way of making delays between API calls for geocoding?
Thanks.
| [
"You certainly don't want to be trying to sleep in a system that's designed completely from the ground up to finish requests in the absolute shortest time possible :D\nWhat you could do instead, is create a task for each geocode, (check out the deferred library). You'd want to specify a queue for this task, then ju... | [
17,
2,
1,
1
] | [] | [] | [
"cprofile",
"google_app_engine",
"python"
] | stackoverflow_0004254678_cprofile_google_app_engine_python.txt |
Q:
IPC on Windows between Java and Python secured to the current user
We have a Rest API that requires client certificate authentication. The API is used by this collection of python scripts that a user can run. To make it so that the user doesn't have to enter their password for their client certificate every time they run one of the scripts, we've created this broker process in java that a user can startup and run in the background which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). The scripts communicate with this process and the process just forwards the Rest API calls to the server (adding the certificate credentials).
To do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk in that someone could use the Rest API using another person's certificate by communicating through the broker process port on the other person's machine. We've mitigated the risk somewhat by using java security to only allow connections to the port from localhost. I think though someone in theory could still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current windows user? Or maybe is there another form of IPC I could use that can do authorization using the current windows user?
We're using Java for the broker process just because everyone on our team is much more familiar with Java than python but it could be rewritten in python if that would help.
Edit: Just remembered the other reason for using java for the broker process is that we are stuck with using python v2.6 and at this version https with client certificates doesn't appear to be supported (at least not without using a 3rd party library).
A:
The most simple approach is to use cookie-based access control. Have a file in the user's profile/homedirectory which contains the cookie. Have the Java server generate and save the cookie, and have the Python client scripts send the cookie as the first piece of data on any TCP connection.
This is secure as long as an adversary cannot get the cookie, which then should be protected by file system ACLs.
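A minimal sketch of that cookie scheme in modern Python 3 (the path and helper names are illustrative; note that the POSIX permission bits shown here are only advisory on Windows, where the real protection comes from the profile directory's ACLs, as the answer says):

```python
import os
import secrets
import stat

def write_cookie(path):
    """Broker side: generate a random cookie in a file only the current user can read."""
    cookie = secrets.token_hex(32)
    # create the file with owner-only permission bits before the secret is written
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, stat.S_IRUSR | stat.S_IWUSR)
    with os.fdopen(fd, "w") as f:
        f.write(cookie)
    return cookie

def read_cookie(path):
    """Script side: read the cookie back, to send as the first line on the socket."""
    with open(path) as f:
        return f.read().strip()
```

The broker would call write_cookie at startup and compare each connection's first line against the stored value.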
A:
I think I've come up with a solution inspired by Martin's post above. When the broker process starts up I'll create an mini http server listening on the IPC port. Also during startup I'll write a file containing a randomly generated password (that's different every startup) to the user's home directory so that only the user can read the file (or an administrator but I don't think I need to worry about that). Then I'll lock down the IPC port by requiring all http requests sent there to use the password. It's a bit Rube Goldberg-esque but I think it will work.
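A sketch of the core of that scheme with the Python 3 stdlib (the header name and handler are illustrative, and the actual broker is Java, so this only shows the per-startup token check itself):

```python
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN = secrets.token_hex(16)  # regenerated every time the broker starts

class BrokerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # reject any request that doesn't carry this run's token
        if self.headers.get("X-Broker-Token") != TOKEN:
            self.send_error(403)
            return
        # here the real broker would forward the call with the client certificate
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

The scripts would read the token from the user's home-directory file and attach it to every request.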
| IPC on Windows between Java and Python secured to the current user | We have a Rest API that requires client certificate authentication. The API is used by this collection of python scripts that a user can run. To make it so that the user doesn't have to enter their password for their client certificate every time they run one of the scripts, we've created this broker process in java that a user can start up and run in the background, which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). The scripts communicate with this process and the process just forwards the Rest API calls to the server (adding the certificate credentials).
To do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk in that someone could use the Rest API using another person's certificate by communicating through the broker process port on the other person's machine. We've mitigated the risk somewhat by using java security to only allow connections to the port from localhost. I think though someone in theory could still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current windows user? Or maybe is there another form of IPC I could use that can do authorization using the current windows user?
We're using Java for the broker process just because everyone on our team is much more familiar with Java than python but it could be rewritten in python if that would help.
Edit: Just remembered the other reason for using java for the broker process is that we are stuck with using python v2.6 and at this version https with client certificates doesn't appear to be supported (at least not without using a 3rd party library).
| [
"The most simple approach is to use cookie-based access control. Have a file in the user's profile/homedirectory which contains the cookie. Have the Java server generate and save the cookie, and have the Python client scripts send the cookie as the first piece of data on any TCP connection.\nThis is secure as long ... | [
0,
0
] | [] | [] | [
"ipc",
"java",
"python",
"ssl_certificate",
"windows"
] | stackoverflow_0004257038_ipc_java_python_ssl_certificate_windows.txt |
Q:
Python Undo Unicode
Let's say I have the following two variables:
bob1 = u'bob\xf0\xa4\xad\xa2'
bob2 = 'bob\xf0\xa4\xad\xa2'
How can I get the value of bob1 to be the value of bob2? That is, how do I unroll the unicode formatting but keep the escaped hex value?
If I do this:
bob1.encode('utf8')
'bob\xc3\xb0\xc2\xa4\xc2\xad\xc2\xa2'
That's not right...
Help!
A:
Code points between U+0000 and U+00FF map to the same byte value in the ISO 8859-1 or Latin 1 encodings.
>>> u'bob\xf0\xa4\xad\xa2'.encode('latin-1')
'bob\xf0\xa4\xad\xa2'
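The same fact in Python 3 terms, where the unicode/str distinction becomes str versus bytes:

```python
# Python 3 view: code points <= U+00FF map to the identical byte values in Latin-1
text = 'bob\xf0\xa4\xad\xa2'            # str holding U+00F0 U+00A4 U+00AD U+00A2
data = text.encode('latin-1')
assert data == b'bob\xf0\xa4\xad\xa2'    # same numeric values, now as bytes
assert data.decode('latin-1') == text    # and the round trip is lossless
```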
| Python Undo Unicode | Let's say I have the following two variables:
bob1 = u'bob\xf0\xa4\xad\xa2'
bob2 = 'bob\xf0\xa4\xad\xa2'
How can I get the value of bob1 to be the value of bob2? That is, how do I unroll the unicode formatting but keep the escaped hex value?
If I do this:
bob1.encode('utf8')
'bob\xc3\xb0\xc2\xa4\xc2\xad\xc2\xa2'
That's not right...
Help!
| [
"Code points between U+0000 and U+00FF map to the same byte value in the ISO 8859-1 or Latin 1 encodings.\n>>> u'bob\\xf0\\xa4\\xad\\xa2'.encode('latin-1')\n'bob\\xf0\\xa4\\xad\\xa2'\n\n"
] | [
6
] | [] | [] | [
"encoding",
"python",
"unicode",
"utf_8"
] | stackoverflow_0004271670_encoding_python_unicode_utf_8.txt |
Q:
Django AdminSite - Difficult or Easy Implementation?
Has anyone implemented their own AdminSite? How easy/hard was the basic implementation?
I'm in the midst of building a "cms" that's going to be quite large and decently complex in some areas and I'm wondering if using something like AdminSite would save some time. I'd rather not have to make my own implementation for admin actions and inlines and the like (I know I can just use inline forms but that's not as simple as inlines = [Foo]).
When using a custom AdminSite, is further customization equivalent to customizing the standard Django admin?
A:
You've read the admin site docs. It's a lengthy document, but two main hooks for adding custom functionality is through custom urls and modified standard views in your own AdminSite and ModelAdmin objects. Once you hook those in and the urls get mapped, it's just like building any other Django application, only that the templates aren't yours, so they're are a bit hard to manage and take getting used to. But it allows you to do additional gymnastics, like adding a form wizard to the admin site or splitting everything into multiple forms and rendering them in a single HTML form element in the templates, doing custom handling of GET/POST requests, etc.
I've used it in the past to create views for displaying custom reports and to create custom editing scenarios for the staff. My opinion is that you should KISS as much as possible. The admin site is all about generic views and generic presentation. Do expand, but be cautious if you override template blocks and think twice before you override something that's not wrapped in a block. Certain admin site features have certain presentation assumptions and the JS client app that's shipped with Django makes some too (that's what I've figured when working with adding dynamic inline models way back), so it'd be quite an undertaking if you'd like to roll a completely different presentation.
The answer in any case is YES! The admin site will provide you with more features for managing your model data interactively. I don't know how extensively you'd need to customize the admin, but there are CMSs, dedicated admin apps and admin integrated apps that are a real eye-opener. Django CMS, as I recall, has been praised as the best open-source Django CMS out there, and from what I can see it rolls its own custom change/list views. Rosetta is an admin-site-only app that allows you to edit your translation files interactively and has an exhaustive admin interface! If you shop around on bitbucket and github you'll find many more examples; it should help you figure out how much effort you'd need to put into it.
A:
Both
If you are OK with it not doing exactly what you want its pretty much done for you automatically. If you need fine grain control over certain things it can be hard to customize without knowing the internals of the admin code.
A:
The Django admin is more of a one-size-fits-all kind of UI, which may not be intuitive in some cases.
Customizing its look is easy, but extending it is somewhat hard. You are better off designing your own views in that case.
| Django AdminSite - Difficult or Easy Implementation? | Has anyone implemented their own AdminSite? How easy/hard was the basic implementation?
I'm in the midst of building a "cms" that's going to be quite large and decently complex in some areas and I'm wondering if using something like AdminSite would save some time. I'd rather not have to make my own implementation for admin actions and inlines and the like (I know I can just use inline forms but that's not as simple as inlines = [Foo]).
When using a custom AdminSite, is further customization equivalent to customizing the standard Django admin?
| [
"You've read the admin site docs. It's a lengthy document, but two main hooks for adding custom functionality is through custom urls and modified standard views in your own AdminSite and ModelAdmin objects. Once you hook those in and the urls get mapped, it's just like building any other Django application, only th... | [
2,
0,
0
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0004271089_django_django_admin_python.txt |
Q:
Python & Multiprocessing, breaking a set generation down into sub-processes
I've got to generate a set of strings based on some calculations of other strings. This takes quite a while, and I'm working on a multiprocessor/multicore server, so I figured that I could break these tasks down into chunks and pass them off to different processes.
Firstly I break the first list of strings down into chunks of 10000 each, send each chunk off to a process which creates a new set, then try to obtain a lock and report these back to the master process. However, my master process's set is empty!
Here's some code:
def build_feature_labels(self,strings,return_obj,l):
    feature_labels = set()
    for s in strings:
        feature_labels = feature_labels.union(s.get_feature_labels())
    print "method: ", len(feature_labels)
    l.acquire()
    return_obj.return_feature_labels(feature_labels)
    l.release()
    print "Thread Done"

def return_feature_labels(self,labs):
    self.feature_labels = self.feature_labels.union(labs)
    print "length self", len(self.feature_labels)
    print "length labs", len(labs)

current_pos = 0
lock = multiprocessing.Lock()
while current_pos < len(orig_strings):
    while len(multiprocessing.active_children()) > threads:
        print "WHILE: cpu count", str(multiprocessing.cpu_count())
        T.sleep(30)
    print "number of processes", str(len(multiprocessing.active_children()))
    proc = multiprocessing.Process(target=self.build_feature_labels,args=(orig_strings[current_pos:current_pos+self.MAX_ITEMS],self,lock))
    proc.start()
    current_pos = current_pos + self.MAX_ITEMS

while len(multiprocessing.active_children()) > 0:
    T.sleep(3)
print len(self.feature_labels)
What is strange is that self.feature_labels on the master process is empty, but when it is called from each sub-process it has items. I think I'm taking the wrong approach here (it's how I used to do it in Java!). Is there a better approach?
Thanks in advance.
A:
Consider using a pool of workers: http://docs.python.org/dev/library/multiprocessing.html#using-a-pool-of-workers. This does a lot of the work for you in a map-reduce style and returns the assembled results.
A:
Use a multiprocessing.Pipe, or Queue (or other such object) to pass data between processes. Use a Pipe to pass data between two processes, and a Queue to allow multiple producers and consumers.
Along with the official documentation, there are nice examples to be found in Doug Hellman's multiprocessing tutorial. In particular, it has an example of how to use multiprocessing.Pool to implement a mapreduce-type operation. It might suit your purposes very well.
A:
Why it didn't work: multiprocessing uses processes, and process memory isn't shared. Multiprocessing can set up shared memory or pipes for IPC, but it must be done explicitly. This is how the various suggestions send data back to the master.
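A minimal sketch of the pool-of-workers approach the answers suggest, in modern Python 3 syntax (the original code is Python 2; labels_for_chunk stands in for the unshown s.get_feature_labels(), and the sample data is made up):

```python
import multiprocessing

def labels_for_chunk(chunk):
    """Worker: union the labels of one chunk of strings (runs in a child process)."""
    labels = set()
    for s in chunk:
        labels.update(s.split())  # stand-in for s.get_feature_labels()
    return labels

if __name__ == "__main__":
    orig_strings = ["red blue", "blue green", "green yellow"]
    chunks = [orig_strings[i:i + 2] for i in range(0, len(orig_strings), 2)]
    with multiprocessing.Pool(processes=2) as pool:
        partial = pool.map(labels_for_chunk, chunks)   # map: one result set per chunk
    feature_labels = set().union(*partial)             # reduce: combine in the parent
    assert feature_labels == {"red", "blue", "green", "yellow"}
```

Because pool.map ships each worker's return value back to the parent, no locks or shared objects are needed; the reduce step happens entirely in the master process.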
| Python & Multiprocessing, breaking a set generation down into sub-processes | I've got to generate a set of strings based on some calculations of other strings. This takes quite a while, and I'm working on a multiprocessor/multicore server, so I figured that I could break these tasks down into chunks and pass them off to different processes.
Firstly I break the first list of strings down into chunks of 10000 each, send each chunk off to a process which creates a new set, then try to obtain a lock and report these back to the master process. However, my master process's set is empty!
Here's some code:
def build_feature_labels(self,strings,return_obj,l):
    feature_labels = set()
    for s in strings:
        feature_labels = feature_labels.union(s.get_feature_labels())
    print "method: ", len(feature_labels)
    l.acquire()
    return_obj.return_feature_labels(feature_labels)
    l.release()
    print "Thread Done"

def return_feature_labels(self,labs):
    self.feature_labels = self.feature_labels.union(labs)
    print "length self", len(self.feature_labels)
    print "length labs", len(labs)

current_pos = 0
lock = multiprocessing.Lock()
while current_pos < len(orig_strings):
    while len(multiprocessing.active_children()) > threads:
        print "WHILE: cpu count", str(multiprocessing.cpu_count())
        T.sleep(30)
    print "number of processes", str(len(multiprocessing.active_children()))
    proc = multiprocessing.Process(target=self.build_feature_labels,args=(orig_strings[current_pos:current_pos+self.MAX_ITEMS],self,lock))
    proc.start()
    current_pos = current_pos + self.MAX_ITEMS

while len(multiprocessing.active_children()) > 0:
    T.sleep(3)
print len(self.feature_labels)
What is strange is that self.feature_labels on the master process is empty, but when it is called from each sub-process it has items. I think I'm taking the wrong approach here (it's how I used to do it in Java!). Is there a better approach?
Thanks in advance.
| [
"Consider using a pool of workers: http://docs.python.org/dev/library/multiprocessing.html#using-a-pool-of-workers. This does a lot of the work for you in a map-reduce style and returns the assembled results.\n",
"Use a multiprocessing.Pipe, or Queue (or other such object) to pass data between processes. Use a Pi... | [
2,
1,
0
] | [] | [] | [
"list",
"locks",
"multiprocessing",
"python",
"set"
] | stackoverflow_0004271322_list_locks_multiprocessing_python_set.txt |
Q:
guidance on python scraping packages
I'm still a newcomer to python, so I hope this question isn't inane.
The more I google for web scraping solutions, the more confused I become (unable to see a forest, despite investigating many trees..)
I've been reading documentation on a number of projects, including (but not limited to)
scrapy
mechanize
spynner
but I can't really figure out which hammer I should be trying to use..
There is a specific page I'm trying to crawl (www.schooldigger.com)
It uses ASP, and there's some JavaScript I need to be able to emulate.
I'm aware this sort of problem isn't easily dealt with, so I'd love any guidance.
In addition to some general discussion of the options available (and the relationships between different projects, if possible) I have a couple of specific questions
When using scrapy, is there any way to avoid defining the 'items' to be parsed, and just download the first couple hundred pages or so? I don't actually want to download entire websites, but, I would like to be able to see which pages are being downloaded while developing the scraper.
mechanize, ASP and JavaScript: please see a question I posted but haven't seen any answers to:
https://stackoverflow.com/questions/4249513/emulating-js-in-mechanize
Why not build some sort of utility (either a turbogears application or a browser plug in) that allows a user to select links to follow and items to parse graphically? All i'm suggesting is some sort of gui to sit around a parsing API. I don't know if I have the technical knowledge to create such a project, but I dont see why it isn't possible, in fact, it seems rather feasible given what I know about python. Maybe some feedback about what problems this sort of project would face?
Most importantly, are all web crawlers built 'site specific'? It seems to me that I'm sort of reinventing the wheel in my code.. (but thats probably because I'm not very good at programming)
Anyone have any examples of fully-featured scrapers? There are lots of examples in the documentation, (which ive been studying), but they all seem to focus on simplicity, just for the exposition of package usage, maybe I'd benefit from a more detailed/ complicated example.
thanks for your thoughts.
A:
For full browser interaction you are best to look at using Selenium-RC
This has a python driver and you can script a browser to "test" just about any site on the internet
| guidance on python scraping packages | I'm still a newcomer to python, so I hope this question isn't inane.
The more I google for web scraping solutions, the more confused I become (unable to see a forest, despite investigating many trees..)
I've been reading documentation on a number of projects, including (but not limited to)
scrapy
mechanize
spynner
but I can't really figure out which hammer I should be trying to use..
There is a specific page I'm trying to crawl (www.schooldigger.com)
It uses ASP, and there's some JavaScript I need to be able to emulate.
I'm aware this sort of problem isn't easily dealt with, so I'd love any guidance.
In addition to some general discussion of the options available (and the relationships between different projects, if possible) I have a couple of specific questions
When using scrapy, is there any way to avoid defining the 'items' to be parsed, and just download the first couple hundred pages or so? I don't actually want to download entire websites, but, I would like to be able to see which pages are being downloaded while developing the scraper.
mechanize, ASP and JavaScript: please see a question I posted but haven't seen any answers to:
https://stackoverflow.com/questions/4249513/emulating-js-in-mechanize
Why not build some sort of utility (either a turbogears application or a browser plug in) that allows a user to select links to follow and items to parse graphically? All i'm suggesting is some sort of gui to sit around a parsing API. I don't know if I have the technical knowledge to create such a project, but I dont see why it isn't possible, in fact, it seems rather feasible given what I know about python. Maybe some feedback about what problems this sort of project would face?
Most importantly, are all web crawlers built 'site specific'? It seems to me that I'm sort of reinventing the wheel in my code.. (but thats probably because I'm not very good at programming)
Anyone have any examples of fully-featured scrapers? There are lots of examples in the documentation, (which ive been studying), but they all seem to focus on simplicity, just for the exposition of package usage, maybe I'd benefit from a more detailed/ complicated example.
thanks for your thoughts.
| [
"For full browser interaction you are best to look at using Selenium-RC\nThis has a python driver and you can script a browser to \"test\" just about any site on the internet\n"
] | [
2
] | [] | [] | [
"mechanize",
"python",
"scrape",
"scrapy"
] | stackoverflow_0004270476_mechanize_python_scrape_scrapy.txt |
Q:
Calling urlopen from within cgi script, 'connection refused' error
I have a python cgi-script that checks if a process is active, and starts it if this process is not found. The process itself is a webserver (based on web.py). After I ensure that the process is running, I try to make a url request to it. The idea is to redirect the results of this request to the requester of my cgi script, basically I want to redirect a query to a local webserver that listens to a different port.
This code works fine if I have started the server first (findImgServerProcess returns True), from a shell, not using cgi requests. But if I try to start the process through the cgi-script below, I do get as far as the urllib2.urlopen call, which throws an exception that the connection is refused.
I don't understand why?
If I print the list of processes (r in findImgServerProcess()) I can see the process is there, but why does urllib2.urlopen throw an exception? I have the apache2 webserver set up to use suexec.
Here's the code:
#!/usr/bin/python
import cgi, cgitb
cgitb.enable()
import os, re
import sys
import subprocess
import urllib2

urlbase = "http://localhost:8080/getimage"
imgserver = "/home/martin/public_html/cgi-bin/stlimgservermirror.py" # this is based on web.py

def findImgServerProcess():
    r = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate()[0]
    return re.match(".*%s" % os.path.split(imgserver)[-1], r, re.DOTALL)

def ensureImgServerProcess():
    if findImgServerProcess():
        return True
    os.environ['LD_LIBRARY_PATH'] = '/home/martin/lib'
    fout = open("/dev/null", "w")
    ferr = fout
    subprocess.Popen([sys.executable, imgserver], stdout=fout, stderr=subprocess.STDOUT)
    # now try and find the process
    return findImgServerProcess() != None

def main():
    if not ensureImgServerProcess():
        print "Content-type: text/plain\n"
        print "image server down!"
        return
    form = cgi.FieldStorage()
    if form.has_key("debug"):
        print "Content-type: text/plain\n"
        print os.environ['QUERY_STRING']
    else:
        try:
            img = urllib2.urlopen("%s?%s" % (urlbase, os.environ['QUERY_STRING'])).read()
        except Exception, e:
            print "Content-type: text/plain\n"
            print e
            sys.exit()
        print "Content-type: image/png\n"
        print img

if __name__ == "__main__":
    main()
A:
A possibility is that the subprocess hasn't had an opportunity to completely start up before you try to connect to it. To test this, try adding a time.sleep(5) before you call urlopen.
This isn't ideal, but will at least help you figure out if that's the problem. Down the road, you'll probably want to set up a better way to check that the HTTP daemon is running and keep it running.
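A sturdier readiness check than a fixed sleep is to poll the child's port until a TCP connect succeeds. A stdlib-only sketch in Python 3 syntax (the host and port would be the localhost:8080 that the question's urlbase points at):

```python
import socket
import time

def wait_for_port(host, port, timeout=10.0, interval=0.25):
    """Poll until a TCP connect to (host, port) succeeds, i.e. the child server is up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # server accepted a connection
        except OSError:
            time.sleep(interval)  # not listening yet; retry
    return False
```

The CGI script would call wait_for_port("localhost", 8080) right after Popen and only then issue the urlopen.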
| Calling urlopen from within cgi script, 'connection refused' error | I have a python cgi-script that checks if a process is active, and starts it if this process is not found. The process itself is a webserver (based on web.py). After I ensure that the process is running, I try to make a url request to it. The idea is to redirect the results of this request to the requester of my cgi script, basically I want to redirect a query to a local webserver that listens to a different port.
This code works fine if I have started the server first (findImgServerProcess returns True), from a shell, not using cgi requests. But if I try to start the process through the cgi-script below, I do get as far as the urllib2.urlopen call, which throws an exception that the connection is refused.
I don't understand why?
If I print the list of processes (r in findImgServerProcess()) I can see the process is there, but why does urllib2.urlopen throw an exception? I have the apache2 webserver set up to use suexec.
Here's the code:
#!/usr/bin/python
import cgi, cgitb
cgitb.enable()
import os, re
import sys
import subprocess
import urllib2

urlbase = "http://localhost:8080/getimage"
imgserver = "/home/martin/public_html/cgi-bin/stlimgservermirror.py" # this is based on web.py

def findImgServerProcess():
    r = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate()[0]
    return re.match(".*%s" % os.path.split(imgserver)[-1], r, re.DOTALL)

def ensureImgServerProcess():
    if findImgServerProcess():
        return True
    os.environ['LD_LIBRARY_PATH'] = '/home/martin/lib'
    fout = open("/dev/null", "w")
    ferr = fout
    subprocess.Popen([sys.executable, imgserver], stdout=fout, stderr=subprocess.STDOUT)
    # now try and find the process
    return findImgServerProcess() != None

def main():
    if not ensureImgServerProcess():
        print "Content-type: text/plain\n"
        print "image server down!"
        return
    form = cgi.FieldStorage()
    if form.has_key("debug"):
        print "Content-type: text/plain\n"
        print os.environ['QUERY_STRING']
    else:
        try:
            img = urllib2.urlopen("%s?%s" % (urlbase, os.environ['QUERY_STRING'])).read()
        except Exception, e:
            print "Content-type: text/plain\n"
            print e
            sys.exit()
        print "Content-type: image/png\n"
        print img

if __name__ == "__main__":
    main()
| [
"A possibility is that the subprocess hasn't had an opportunity to completely start up before you try to connect to it. To test this, try adding a time.sleep(5) before you call urlopen. \nThis isn't ideal, but will at least help you figure out if that's the problem. Down the road, you'll probably want to set up a... | [
0
] | [] | [] | [
"cgi",
"python",
"urllib2"
] | stackoverflow_0004267834_cgi_python_urllib2.txt |
Q:
The Python script youtube-dl returns a new file
I'm writing a script which uses youtube-dl in a subprocess, p = subprocess.Popen(cmd), to download a YouTube video. What is the best way to get back the new file (which is determined by %(title)s.%(ext)s variables by youtube-dl --output=TEMPLATE)?
Unfortunately, there is no option to print the new file name which I could redirect to a variable. I could check for the latest created file in the download directory, but that feels unsafe, as I'll be using the information for other commands.
A:
It does print the destination filename on stdout. It looks like this:
[download] Destination: XXX
if you give it options --no-progress --output=XXX.
Or you could make a template for the filename that would allow you to easily recognize a newly downloaded file: %(title)s.%(ext)s-latest-download or something, and rename it later.
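A sketch of wiring that into the asker's subprocess call (the destination-line format is as shown above; the wrapper assumes a youtube-dl binary on PATH and uses modern Python 3 syntax):

```python
import re
import subprocess

# matches the "[download] Destination: ..." line youtube-dl prints
DEST_RE = re.compile(r"^\[download\] Destination: (.+)$", re.MULTILINE)

def parse_destination(output):
    """Pull the output filename out of youtube-dl's stdout, or None if absent."""
    m = DEST_RE.search(output)
    return m.group(1) if m else None

def download(url):
    # run youtube-dl and return the filename it reports (hypothetical wrapper)
    proc = subprocess.run(
        ["youtube-dl", "--no-progress", "--output", "%(title)s.%(ext)s", url],
        capture_output=True, text=True, check=True,
    )
    return parse_destination(proc.stdout)
```

This avoids guessing at the newest file in the download directory: the filename comes straight from the tool's own report.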
| The Python script youtube-dl returns a new file | I'm writing a script which uses youtube-dl in a subprocess, p = subprocess.Popen(cmd), to download a YouTube video. What is the best way to get back the new file (which is determined by %(title)s.%(ext)s variables by youtube-dl --output=TEMPLATE)?
Unfortunately, there is no option to print the new file name which I could redirect to a variable. I could check for the latest created file in the download directory, but that feels unsafe, as I'll be using the information for other commands.
| [
"It does print the destination filename on stdout. It looks like this:\n[download] Destination: XXX\nif you give it options --no-progress --output=XXX.\nOr you could make a template for the filename that would allow you to easily recognize a newly downloaded file: %(title)s.%(ext)s-latest-download or something, and rename it ... | [
2
] | [] | [] | [
"download",
"python",
"youtube"
] | stackoverflow_0004271983_download_python_youtube.txt |
Q:
pydb problems on invocation from MinGW and from Emacs
I am trying to run pydb 1.26 from emacs on a MinGW environment running in windows 7. The python I am currently running is python26 although I have tried this with python25 with the same results. Reading the documentation and looking at the video seems to indicate that in order to start using pydb with GUD all I have to do is: pydb myprogram.py
Unfortunately this is not the case. When I issue "pydb myprogram.py" in a shell inside emacs, I get:
pydb tetris.py
Traceback (most recent call last):
File "c:/MinGW/msys/1.0/local/bin/pydb", line 19, in
import fns
ImportError: No module named fns
I have tried the alternative invocation of
python -t /c/python26/Lib/site-packages/pydb/pydb.py /c/fullpath/myprogram.py
which seems to satisfy all dependencies; however, upon doing this, the OS seems to spawn the python process but it never comes back.
Issuing either of these two invocations directly from emacs ( without the intermediate shell ) generates the same result.
What am I doing wrong? I am sure I had this working before but lost the environment due to a disk crash.
TIA.
A:
I think there is a lot of confusion here. Also a lot of vagueness in terms of how you are set up in terms of how you got emacs, python and pydb installed.
pydb comes with GNU Emacs code that hooks into the Emacs Lisp package GUD (grand unified debugger). Let's leave that aside for the moment since that has nothing to do with your not being able to run pydb say in a emacs shell.
I'm assuming you install pydb from the source since there is no ez_install or egg for that. When you run the "configure" script as well as "make" install, there is a bit of information that's given that is telling you where it is going to install things and what's happening. Some of this is important to keep track of.
By way of example here is some of the information I got when running from a mingw terminal inside a VMWare image of Windows XP:
$ ./autogen.sh
configure.ac:120: installing `./install-sh'
configure.ac:120: installing `./missing'
emacs/Makefile.am:22: installing `./elisp-comp'
...
Running ./configure --enable-maintainer-mode ...
checking for emacs... no
checking for xemacs... no
checking where .elc files should go... ${datadir}/emacs/site-lisp
checking for emacs... no
checking for emacs... no
checking where .elc files should go... (cached) ${datadir}/emacs/site-lisp
checking for a Python interpreter with version >= 2.4.0... python
checking for python... /c/Python27//python
checking for python version... 2.7
checking for python platform... win32
checking for python script directory... ${prefix}\Lib\site-packages
checking for python extension module directory... ${exec_prefix}\Lib\site-packages
...
Now type `make' to compile
$
And when I run "make install" some of the output there is:
$ make install
Making install in test
...
test -z "c:\Python27\lib\site-packages/pydb" || /bin/mkdir -p "c:\Python27\lib\site-packages/pydb"
...
test -z "c:\Python27\lib\site-packages/pydb" || /bin/mkdir -p "c:\Python27\lib\site-packages/pydb"
...
if ! test -d "/usr/local/bin"; then \
test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin"; \
fi
...
From the above, in my case you see that the package is installed in c:/Python27/lib/site-packages. You will probably have Python26 in there since you are using version 2.6.
Also, the above output shows me that the pydb script is installed in /usr/local/bin.
Great. Now I need to make sure c:/Python27/lib/site-packages is set in the PYTHONPATH environment variable which gets fed into sys.path. To see what is in sys.path run:
python -c 'import sys; print sys.path'
Since you were having problems you probably don't find where pydb is installed in sys.path. So to add that here is the export command I used:
$ export PYTHONPATH='c:/Python27/lib/site-packages/pydb;.'
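The PYTHONPATH-to-sys.path hand-off can be sanity-checked programmatically too; a sketch in Python 3 syntax (the directory name is made up and doesn't need to exist):

```python
import os
import subprocess
import sys

# a directory placed on PYTHONPATH shows up in the child interpreter's sys.path
env = dict(os.environ, PYTHONPATH="/tmp/example-site-packages")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    capture_output=True, text=True, env=env, check=True,
).stdout
assert "example-site-packages" in out
```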
Now when I run pydb which is located in /usr/local/bin as "make install" indicated above, I get:
$ /usr/local/bin/pydb test/gcd.py 3 5
(c:\cygwin\home\rocky\src\external-vcs\pydb\test\gcd.py:10): <module>
10 """
Don't worry about the cygwin stuff above, that's just where I have the Python script I want to debug located.
This was done inside a MinGW shell. To get running in an Emacs shell, it depends on how Emacs was compiled.
Finally, after having written all of this, let me say that both pydb and the Emacs code that hooks into GUD are now a bit deprecated. The newer and better debugger is pydbgr, available from http://code.google.com/p/pydbgr/, and the new and better Emacs code is on github.com/rocky/emacs-dbgr . Alas, both of them require other Python or Emacs packages to be installed. On the Python side there are eggs to simplify this if you are not running virtualenv. On the Emacs side, packaging alas takes more steps.
| pydb problems on invocation from MinGW and from Emacs | I am trying to run pydb 1.26 from emacs on a MinGW environment running in windows 7. The python I am currently running is python26 although I have tried this with python25 with the same results. Reading the documentation and looking at the video seems to indicate that in order to start using pydb with GUD all I have to do is: pydb myprogram.py
Unfortunately this is not the case. When I issue "pydb myprogram.py" in a shell inside emacs, I get:
pydb tetris.py
Traceback (most recent call last):
File "c:/MinGW/msys/1.0/local/bin/pydb", line 19, in
import fns
ImportError: No module named fns
I have tried the alternative invocation of
python -t /c/python26/Lib/site-packages/pydb/pydb.py /c/fullpath/myprogram.py
which seems to satisfy all dependencies; however, upon doing this, the OS seems to spawn the python process but it never comes back.
Issuing either of these two invocations directly from emacs ( without the intermediate shell ) generates the same result.
What am I doing wrong? I am sure I had this working before but lost the environment due to a disk crash.
TIA.
| [
"I think there is a lot of confusion here. Also a lot of vagueness in terms of how you are set up in terms of how you got emacs, python and pydb installed.\npydb comes with GNU Emacs code that hooks into the Emacs Lisp package GUD (grand unified debugger). Let's leave that aside for the moment since that has nothin... | [
0
] | [] | [] | [
"emacs",
"mingw",
"python"
] | stackoverflow_0004248558_emacs_mingw_python.txt |