| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
python: determine if a class is nested
| 639,162
|
<p>Suppose you have a python method that gets a type as parameter; is it possible to determine if the given type is a nested class?<br />
E.g. in this example:</p>
<pre><code>def show_type_info(t):
    print t.__name__
    # print outer class name (if any) ...

class SomeClass:
    pass

class OuterClass:
    class InnerClass:
        pass

show_type_info(SomeClass)
show_type_info(OuterClass.InnerClass)
</code></pre>
<p>I would like the call to <code>show_type_info(OuterClass.InnerClass)</code> to show also that InnerClass is defined inside OuterClass.</p>
| 5
|
2009-03-12T15:26:53Z
| 639,461
|
<p>If you do not record it yourself, I do not believe there is any way to determine whether a class is nested. Since a Python class cannot easily be used as a namespace anyway, I would say the best thing to do is simply to use separate modules.</p>
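For reference, later Python versions do record this directly: from Python 3.3 on, a class's <code>__qualname__</code> includes its dotted definition path, so nesting can be detected without any bookkeeping (note a dot also appears for classes defined inside functions). A minimal sketch:

```python
class SomeClass:
    pass

class OuterClass:
    class InnerClass:
        pass

# __qualname__ carries the nesting path (Python 3.3+).
print(SomeClass.__qualname__)              # SomeClass
print(OuterClass.InnerClass.__qualname__)  # OuterClass.InnerClass

def is_nested(cls):
    # True if cls was defined inside another class (or a function).
    return '.' in cls.__qualname__

print(is_nested(SomeClass))              # False
print(is_nested(OuterClass.InnerClass))  # True
```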
| 0
|
2009-03-12T16:37:15Z
|
[
"python",
"introspection",
"inner-classes"
] |
python: determine if a class is nested
| 639,162
|
<p>Suppose you have a python method that gets a type as parameter; is it possible to determine if the given type is a nested class?<br />
E.g. in this example:</p>
<pre><code>def show_type_info(t):
    print t.__name__
    # print outer class name (if any) ...

class SomeClass:
    pass

class OuterClass:
    class InnerClass:
        pass

show_type_info(SomeClass)
show_type_info(OuterClass.InnerClass)
</code></pre>
<p>I would like the call to <code>show_type_info(OuterClass.InnerClass)</code> to show also that InnerClass is defined inside OuterClass.</p>
| 5
|
2009-03-12T15:26:53Z
| 639,492
|
<p>Really a nested class is no different from any other class - it just happens to be defined somewhere else than the top-level namespace (inside another class instead). If we modify the description from "nested" to "non-top-level", then you may be able to come close enough to what you need.</p>
<p>eg:</p>
<pre><code>import inspect

def not_toplevel(cls):
    m = inspect.getmodule(cls)
    return not (getattr(m, cls.__name__, []) is cls)
</code></pre>
<p>This will work for common cases, but it may not do what you want in situations where classes are renamed or otherwise manipulated after definition. For example:</p>
<pre><code>class C:                  # not_toplevel(C) = False
    class B: pass         # not_toplevel(C.B) = True

B = C.B                   # not_toplevel(B) = True
D = C                     # D is defined at the top, but...
del C                     # not_toplevel(D) = True

def getclass():           # not_toplevel(getclass()) = True
    class C: pass
    return C
</code></pre>
| 4
|
2009-03-12T16:46:50Z
|
[
"python",
"introspection",
"inner-classes"
] |
python: determine if a class is nested
| 639,162
|
<p>Suppose you have a python method that gets a type as parameter; is it possible to determine if the given type is a nested class?<br />
E.g. in this example:</p>
<pre><code>def show_type_info(t):
    print t.__name__
    # print outer class name (if any) ...

class SomeClass:
    pass

class OuterClass:
    class InnerClass:
        pass

show_type_info(SomeClass)
show_type_info(OuterClass.InnerClass)
</code></pre>
<p>I would like the call to <code>show_type_info(OuterClass.InnerClass)</code> to show also that InnerClass is defined inside OuterClass.</p>
| 5
|
2009-03-12T15:26:53Z
| 639,588
|
<p>Thank you all for your answers.<br />
I've found a possible solution using metaclasses; I did it more out of stubbornness than real need, and it is written in a way that will not work in Python 3.<br />
I want to share this solution anyway, so I'm posting it here.</p>
<pre><code>#!/usr/bin/env python

class ScopeInfo(type):  # stores scope information
    __outers = {}       # maps each class to its outer class (or None)

    def __init__(cls, name, bases, dict):
        super(ScopeInfo, cls).__init__(name, bases, dict)
        ScopeInfo.__outers[cls] = None
        for v in dict.values():  # iterate objects in the class's dictionary
            for t in ScopeInfo.__outers:
                if v == t:       # is the object an already registered type?
                    ScopeInfo.__outers[t] = cls
                    break

    def FullyQualifiedName(cls):
        c = ScopeInfo.__outers[cls]
        if c is None:
            return "%s::%s" % (cls.__module__, cls.__name__)
        else:
            return "%s.%s" % (c.FullyQualifiedName(), cls.__name__)

__metaclass__ = ScopeInfo

class Outer:
    class Inner:
        class EvenMoreInner:
            pass

print Outer.FullyQualifiedName()
print Outer.Inner.FullyQualifiedName()
print Outer.Inner.EvenMoreInner.FullyQualifiedName()

X = Outer.Inner
del Outer.Inner
print X.FullyQualifiedName()
</code></pre>
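For comparison, Python 3 makes the metaclass unnecessary: the interpreter records the nesting path in <code>__qualname__</code> at class-creation time. A rough Python 3 equivalent of <code>FullyQualifiedName</code> (the helper name is mine, not from the original); like the metaclass version, it reports the path recorded at definition time, so it survives <code>del Outer.Inner</code>:

```python
class Outer:
    class Inner:
        class EvenMoreInner:
            pass

def fully_qualified_name(cls):
    # Same "module::Outer.Inner..." shape as the metaclass version.
    return "%s::%s" % (cls.__module__, cls.__qualname__)

print(fully_qualified_name(Outer.Inner.EvenMoreInner))

X = Outer.Inner
del Outer.Inner
print(fully_qualified_name(X))  # still reports ...::Outer.Inner
```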
| 1
|
2009-03-12T17:14:20Z
|
[
"python",
"introspection",
"inner-classes"
] |
PHP vs. long-running process (Python, Java, etc.)?
| 639,409
|
<p>I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.</p>
<p>As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB</p>
<p>So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?</p>
<p>Thanks for any feedback.</p>
| 4
|
2009-03-12T16:22:12Z
| 639,431
|
<p>PHP is a language like Java etc.; only your executable is the php binary rather than the JVM. You can set a different max runtime for PHP scripts without any problems (if your shared hosting provider lets you do so).</p>
<p>Where your apps run shouldn't depend on the kind of server. It should depend on the resources used by the application (CPU time, RAM) and on what is offered by your server/VServer/shared host.</p>
<p>For performance tuning you should have a look at <a href="http://eaccelerator.net/" rel="nofollow">eAccelerator</a> etc.
Apache also supports modules for connection pooling; see <a href="http://httpd.apache.org/docs/2.2/mod/mod%5Fdbd.html" rel="nofollow">mod_dbd</a>.
If you need to scale (like in a cluster) you can use a distributed memory caching system such as <a href="http://www.danga.com/memcached/" rel="nofollow">memcached</a>.</p>
| -1
|
2009-03-12T16:27:05Z
|
[
"php",
"python"
] |
PHP vs. long-running process (Python, Java, etc.)?
| 639,409
|
<p>I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.</p>
<p>As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB</p>
<p>So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?</p>
<p>Thanks for any feedback.</p>
| 4
|
2009-03-12T16:22:12Z
| 639,435
|
<p>PHP is fine for either use, in my opinion; the performance overheads are rarely noticed. It's usually other processes that delay the program. It's easy to cache PHP programs with something like eAccelerator.</p>
| 1
|
2009-03-12T16:28:02Z
|
[
"php",
"python"
] |
PHP vs. long-running process (Python, Java, etc.)?
| 639,409
|
<p>I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.</p>
<p>As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB</p>
<p>So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?</p>
<p>Thanks for any feedback.</p>
| 4
|
2009-03-12T16:22:12Z
| 639,489
|
<ul>
<li>With <a href="http://www.php.net/manual/en/book.apc.php" rel="nofollow">APC</a>, which is soon to be included in PHP by default, compiled bytecode is kept in RAM.</li>
<li>With mod_php, which is the most popular way to use PHP, the PHP interpreter stays in the web server's memory.</li>
<li>With the <a href="http://www.php.net/manual/en/function.apc-store.php" rel="nofollow">APC data store</a> or <a href="http://www.php.net/manual/en/ref.memcache.php" rel="nofollow">memcache</a>, you can keep persistent objects in RAM instead of, for example, always creating them anew by fetching data from the DB.</li>
</ul>
<p>In a real-life deployment you'd use all of the above.</p>
| 2
|
2009-03-12T16:45:46Z
|
[
"php",
"python"
] |
PHP vs. long-running process (Python, Java, etc.)?
| 639,409
|
<p>I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.</p>
<p>As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB</p>
<p>So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?</p>
<p>Thanks for any feedback.</p>
| 4
|
2009-03-12T16:22:12Z
| 639,537
|
<p>After you apply memcache, opcode caching, and connection pooling, the only real difference between PHP and other options is that PHP is short-lived and process-based, while the other options are typically long-lived and multithreaded.</p>
<p>The advantage PHP has is that it's dirt simple to write scripts. You don't have to worry about memory management (it's always released at the end of the request), and you don't have to worry about concurrency very much.</p>
<p>The major disadvantage, as far as I can see, is that some more advanced (sometimes crazier?) things are harder: pre-computing results, warming caches, reusing existing data, request prioritizing, and asynchronous programming. I'm sure people can think of many more.</p>
<p>Most of the time, though, those disadvantages aren't a big deal. You can scale by adding more machines and using more caching. The average web developer doesn't need to worry about concurrency control or memory management, so taking the minuscule hit from removing them isn't a big deal.</p>
| 2
|
2009-03-12T17:00:05Z
|
[
"php",
"python"
] |
PHP vs. long-running process (Python, Java, etc.)?
| 639,409
|
<p>I'd like to have your opinion about writing web apps in PHP vs. a long-running process using tools such as Django or Turbogears for Python.</p>
<p>As far as I know:
- In PHP, pages are fetched from the hard-disk every time (although I assume the OS keeps files in RAM for a while after they've been accessed)
- Pages are recompiled into opcode every time (although tools from eg. Zend can keep a compiled version in RAM)
- Fetching pages every time means reading global and session data every time, and re-opening connections to the DB</p>
<p>So, I guess PHP makes sense on a shared server (multiple sites sharing the same host) to run apps with moderate use, while a long-running process offers higher performance with apps that run on a dedicated server and are under heavy use?</p>
<p>Thanks for any feedback.</p>
| 4
|
2009-03-12T16:22:12Z
| 640,138
|
<p>As many others have noted, neither PHP nor Django is going to be your bottleneck. Hitting the hard disk for the bytecode in PHP is irrelevant for a heavily trafficked site, because caching will take over at that point. The same is true for Django.</p>
<p>Model/view and user-experience design will have order-of-magnitude effects on performance compared with the language itself.</p>
| 0
|
2009-03-12T19:26:09Z
|
[
"php",
"python"
] |
Running unit tests with Nose inside a Python environment such as Autodesk Maya?
| 639,744
|
<p>I'd like to start creating unit tests for my Maya scripts. These scripts must be run inside the Maya environment and rely on the <code>maya.cmds</code> module namespace.</p>
<p>How can I run Nose tests from inside a running environment such as Maya?</p>
| 6
|
2009-03-12T17:49:21Z
| 640,472
|
<p>Use the mayapy executable included in your Maya install instead of the standard Python executable.</p>
<p>In order for this to work you'll need to run Nose programmatically. Create a Python file called <code>runtests.py</code> and put it next to your test files. In it, include the following code:</p>
<pre><code>import os
os.environ['PYTHONPATH'] = '/path/to/site-packages'
import nose
nose.run()
</code></pre>
<p>Since mayapy loads its own pythonpath, it doesn't know about the site-packages directory where nose is. os.environ is used to set this manually inside the script. Optionally you can set this as a system environment variable as well.</p>
<p>From the command line use the mayapy application to run the <code>runtests.py</code> script:</p>
<blockquote>
<p>/path/to/mayapy.exe runtests.py</p>
</blockquote>
<p>You may need to import the <code>maya.standalone</code> depending on what your tests do.</p>
<pre><code>import maya.standalone
maya.standalone.initialize(name='python')
</code></pre>
| 15
|
2009-03-12T21:03:01Z
|
[
"python",
"unit-testing",
"environment",
"nose",
"maya"
] |
Setting object owner with generic create_object view in django
| 639,792
|
<p>Is it possible to use create_object view to create a new object and automatically asign request.user as foreign key?</p>
<p>P.E:</p>
<pre><code>class Post(models.Model):
text = models.TextField()
author = models.ForeignKey(User)
</code></pre>
<p>What I want is to use create_object and fill author with request.user.</p>
| 10
|
2009-03-12T18:02:08Z
| 640,058
|
<p>If a user is authenticated, their user object is the <code>request.user</code> object.</p>
<p>I'm not familiar with create_object... I'm still a beginner to django and have only just started my first real project with it.</p>
<p>Note that you should check to make sure a user is logged in before using this. This can be done with <code>request.user.is_authenticated()</code>.</p>
| 1
|
2009-03-12T19:08:59Z
|
[
"python",
"django"
] |
Setting object owner with generic create_object view in django
| 639,792
|
<p>Is it possible to use create_object view to create a new object and automatically asign request.user as foreign key?</p>
<p>P.E:</p>
<pre><code>class Post(models.Model):
text = models.TextField()
author = models.ForeignKey(User)
</code></pre>
<p>What I want is to use create_object and fill author with request.user.</p>
| 10
|
2009-03-12T18:02:08Z
| 640,318
|
<p>In many ways, all the solutions to this will be more trouble than they are worth. This one qualifies as a hack: it is possible for a Django update to leave you high and dry if they change the way create_update is implemented. For simplicity's sake, I'll assume that you are trying to set a default user, not silently force the user to be the logged-in user.</p>
<p>Write a context processor:</p>
<pre><code>from django.views.generic.create_update import get_model_and_form_class

def form_user_default(request):
    if request.method == 'GET':
        model, custom_form = get_model_and_form_class(Post, None)
        custom_form.author = request.user
        return {'form': custom_form}
    else:
        return {}
</code></pre>
<p>What this will do is override the form object that create_update passes to the template. What it's technically doing is re-creating the form after the default view has done it. </p>
<p>Then in your url conf:</p>
<pre><code>url(r'pattern_to_match', 'django.views.generic.create_update.create_object', kwargs={'context_processors':form_user_default})
</code></pre>
<p>Again, I had to delve into the source code to figure out how to do this. It might really be best to try writing your own view (but incorporate as many Django custom objects as possible). There's no "simple default" way to do this, because in the django paradigm forms are more closely tied to the model layer than to views, and only views have knowledge of the request object.</p>
| 2
|
2009-03-12T20:22:47Z
|
[
"python",
"django"
] |
Setting object owner with generic create_object view in django
| 639,792
|
<p>Is it possible to use create_object view to create a new object and automatically asign request.user as foreign key?</p>
<p>P.E:</p>
<pre><code>class Post(models.Model):
text = models.TextField()
author = models.ForeignKey(User)
</code></pre>
<p>What I want is to use create_object and fill author with request.user.</p>
| 10
|
2009-03-12T18:02:08Z
| 643,999
|
<p>There is no good way to hook into the saving of an object when using the current Django generic views. Once they are <a href="http://code.djangoproject.com/ticket/6735" rel="nofollow">rewritten as classes</a>, you'll be able to subclass the view and hook in at the proper place without having to rewrite the whole view.</p>
<p>I already use my own class-based generic views for this reason.</p>
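The subclass-and-hook idea can be sketched framework-free (the class and method names here are hypothetical, not Django's): a base "create view" does the generic work and exposes a <code>pre_save</code> hook that a subclass overrides to stamp the owner.

```python
class CreateView:
    """Generic 'create' view: does the common work, exposes a hook."""
    def save(self, form_data, request):
        obj = dict(form_data)        # stand-in for a model instance
        self.pre_save(obj, request)  # the hook subclasses override
        return obj

    def pre_save(self, obj, request):
        pass  # default: add no extra fields

class PostCreateView(CreateView):
    def pre_save(self, obj, request):
        obj['author'] = request['user']  # stamp the owner automatically

view = PostCreateView()
post = view.save({'text': 'hello'}, {'user': 'alice'})
print(post['author'])  # alice
```

This is exactly the shape Django's later class-based generic views took: override one method instead of rewriting the whole view.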
| 0
|
2009-03-13T18:24:55Z
|
[
"python",
"django"
] |
Setting object owner with generic create_object view in django
| 639,792
|
<p>Is it possible to use create_object view to create a new object and automatically asign request.user as foreign key?</p>
<p>P.E:</p>
<pre><code>class Post(models.Model):
text = models.TextField()
author = models.ForeignKey(User)
</code></pre>
<p>What I want is to use create_object and fill author with request.user.</p>
| 10
|
2009-03-12T18:02:08Z
| 792,330
|
<p>I would suggest making a wrapper for create_object, as this author suggests:
<a href="http://www.b-list.org/weblog/2006/nov/16/django-tips-get-most-out-generic-views/" rel="nofollow">http://www.b-list.org/weblog/2006/nov/16/django-tips-get-most-out-generic-views/</a>.
In the view you'll have access to the user info. Afterwards, you will need to use extra_context to pass the user to the template. Finally, in the template you can add a hidden field with the user info. I haven't tried it, but I have been thinking about it for quite some time. Hope this solution suits you!
;) cheers!</p>
| 0
|
2009-04-27T05:23:45Z
|
[
"python",
"django"
] |
Setting object owner with generic create_object view in django
| 639,792
|
<p>Is it possible to use create_object view to create a new object and automatically asign request.user as foreign key?</p>
<p>P.E:</p>
<pre><code>class Post(models.Model):
text = models.TextField()
author = models.ForeignKey(User)
</code></pre>
<p>What I want is to use create_object and fill author with request.user.</p>
| 10
|
2009-03-12T18:02:08Z
| 1,829,358
|
<p>You may want to consider a closure.</p>
<pre><code>from django.forms import ModelForm
from django.views.generic.create_update import create_object, update_object

def make_foo_form(request):
    class FooForm(ModelForm):
        class Meta:
            model = Foo
            fields = ['foo', 'bar']

        def save(self, commit=True):
            f = super(FooForm, self).save(commit=False)
            if not f.pk:
                f.user = request.user
            if commit:
                f.save()
            return f

    return FooForm

def create_foo(request):
    FooForm = make_foo_form(request)
    return create_object(form_class=FooForm)
</code></pre>
<p>There is some inefficiency here, since you need to create the ModelForm object on each request, but it does allow you to inject functionality into the generic view.</p>
<p>You need to decide whether the added complexity of the form creation is worth the simplicity it preserves on the view side.</p>
<p>A benefit here, though, is that this also works with the update case with practically no extra effort:</p>
<pre><code>def update_foo(request, object_id):
    FooForm = make_foo_form(request)
    return update_object(form_class=FooForm, object_id=object_id)
</code></pre>
<p>Obviously, you can use this approach for more complex cases as well.</p>
| 3
|
2009-12-01T22:21:53Z
|
[
"python",
"django"
] |
Python: Incrementally marshal / pickle an object?
| 639,821
|
<p>I have a large object I'd like to serialize to disk. I'm finding <a href="http://www.python.org/doc/2.5.2/lib/module-marshal.html" rel="nofollow">marshal</a> works quite well and is nice and fast. </p>
<p>Right now I'm creating my large object then calling marshal.dump . I'd like to avoid holding the large object in memory if possible - I'd like to dump it incrementally as I build it. Is that possible?</p>
<p>The object is fairly simple, a dictionary of arrays. </p>
| 4
|
2009-03-12T18:08:33Z
| 640,094
|
<p>This very much depends on how you are building the object. Is it an array of sub-objects? You could marshal/pickle each array element as you build it. Is it a dictionary? The same idea applies (marshal/pickle each key).</p>
<p>If it is just a big, complex, hairy object, you might want to marshal-dump each piece of the object, and then apply whatever your 'building' process is when you read it back in.</p>
| 0
|
2009-03-12T19:14:21Z
|
[
"python",
"serialization",
"data-structures",
"memory-management"
] |
Python: Incrementally marshal / pickle an object?
| 639,821
|
<p>I have a large object I'd like to serialize to disk. I'm finding <a href="http://www.python.org/doc/2.5.2/lib/module-marshal.html" rel="nofollow">marshal</a> works quite well and is nice and fast. </p>
<p>Right now I'm creating my large object then calling marshal.dump . I'd like to avoid holding the large object in memory if possible - I'd like to dump it incrementally as I build it. Is that possible?</p>
<p>The object is fairly simple, a dictionary of arrays. </p>
| 4
|
2009-03-12T18:08:33Z
| 641,175
|
<p>You should be able to dump the item piece by piece to the file. The two design questions that need settling are:</p>
<ol>
<li>How are you building the object when you're putting it in memory?</li>
<li>How do you need your data when it comes out of memory?</li>
</ol>
<p>If your build process populates the entire array associated with a given key at a time, you might just dump the key:array pair in a file as a separate dictionary:</p>
<pre><code>big_hairy_dictionary['sample_key'] = pre_existing_array
f = open('central_file', 'ab')
marshal.dump({'sample_key': big_hairy_dictionary['sample_key']}, f)
f.close()
</code></pre>
<p>Then on update, each call to <code>marshal.load</code> on the opened 'central_file' will return a dictionary that you can use to update a central dictionary. But this is really only going to be helpful if, when you need the data back, you want to handle reading 'central_file' once per key.</p>
<p>Alternately, if you are populating arrays element by element in no particular order, maybe try:</p>
<pre><code>big_hairy_dictionary['sample_key'].append(single_element)
f = open('marshaled_files/' + 'sample_key', 'ab')
marshal.dump(single_element, f)
f.close()
</code></pre>
<p>Then, when you load it back, you don't necessarily need to build the entire dictionary to get back what you need; you just call <code>marshal.load</code> on 'marshaled_files/sample_key' until it raises <code>EOFError</code>, and you have everything associated with the key.</p>
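The dump-per-record idea can be sketched with the standard library alone; <code>marshal.dump</code> and <code>marshal.load</code> take an open file object, and <code>load</code> raises <code>EOFError</code> when the records run out (file path here is illustrative):

```python
import marshal
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'incremental.dat')

# Dump records one at a time as they are produced, so the full
# structure never has to be held in memory at once.
f = open(path, 'wb')
for chunk in ([1, 2], [3, 4], [5]):
    marshal.dump(chunk, f)
f.close()

# Read them back one record at a time.
records = []
f = open(path, 'rb')
while True:
    try:
        records.append(marshal.load(f))
    except EOFError:
        break
f.close()

print(records)  # [[1, 2], [3, 4], [5]]
```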
| 0
|
2009-03-13T01:37:54Z
|
[
"python",
"serialization",
"data-structures",
"memory-management"
] |
Python: Incrementally marshal / pickle an object?
| 639,821
|
<p>I have a large object I'd like to serialize to disk. I'm finding <a href="http://www.python.org/doc/2.5.2/lib/module-marshal.html" rel="nofollow">marshal</a> works quite well and is nice and fast. </p>
<p>Right now I'm creating my large object then calling marshal.dump . I'd like to avoid holding the large object in memory if possible - I'd like to dump it incrementally as I build it. Is that possible?</p>
<p>The object is fairly simple, a dictionary of arrays. </p>
| 4
|
2009-03-12T18:08:33Z
| 645,112
|
<p>The bsddb module's 'hashopen' and 'btopen' functions provide a persistent dictionary-like interface. Perhaps you could use one of these, instead of a regular dictionary, to incrementally serialize the arrays to disk?</p>
<pre><code>import bsddb
import marshal
db = bsddb.hashopen('file.db')
db['array1'] = marshal.dumps(array1)
db['array2'] = marshal.dumps(array2)
...
db.close()
</code></pre>
<p>To retrieve the arrays:</p>
<pre><code>db = bsddb.hashopen('file.db')
array1 = marshal.loads(db['array1'])
...
</code></pre>
| 4
|
2009-03-14T00:57:24Z
|
[
"python",
"serialization",
"data-structures",
"memory-management"
] |
Python: Incrementally marshal / pickle an object?
| 639,821
|
<p>I have a large object I'd like to serialize to disk. I'm finding <a href="http://www.python.org/doc/2.5.2/lib/module-marshal.html" rel="nofollow">marshal</a> works quite well and is nice and fast. </p>
<p>Right now I'm creating my large object then calling marshal.dump . I'd like to avoid holding the large object in memory if possible - I'd like to dump it incrementally as I build it. Is that possible?</p>
<p>The object is fairly simple, a dictionary of arrays. </p>
| 4
|
2009-03-12T18:08:33Z
| 645,574
|
<p>If all your object needs to be is a dictionary of lists, then you may be able to use the <a href="http://www.python.org/doc/2.5.2/lib/module-shelve.html" rel="nofollow">shelve module</a>. It presents a dictionary-like interface where the keys and values are stored in a database file instead of in memory. One limitation which may or may not affect you is that keys in Shelf objects must be strings. Value storage will be more efficient if you specify protocol=-1 when creating the Shelf object to have it use a more efficient binary representation.</p>
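A runnable sketch of that approach (the file path is illustrative): each assignment is written to disk as it happens, so the whole dictionary never sits in memory.

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'arrays.db')

# protocol=-1 selects the most efficient pickle representation.
db = shelve.open(path, protocol=-1)
db['array1'] = [1, 2, 3]   # written to disk on assignment
db['array2'] = [4, 5, 6]
db.close()

# Reopen later and read back only what you need.
db = shelve.open(path)
print(sorted(db.keys()))   # ['array1', 'array2']
print(db['array1'])        # [1, 2, 3]
db.close()
```

Note the documented caveat: mutating a stored value in place (e.g. `db['array1'].append(7)`) is not persisted unless you reassign it or open the shelf with `writeback=True`.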
| 4
|
2009-03-14T07:22:53Z
|
[
"python",
"serialization",
"data-structures",
"memory-management"
] |
Validating Oracle dates in Python
| 639,949
|
<p>Our Python CMS stores some date values in a generic "attribute" table's <em>varchar</em> column. Some of these dates are later moved into a table with an actual <em>date</em> column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.</p>
<p>How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?</p>
| 1
|
2009-03-12T18:41:20Z
| 640,036
|
<blockquote>
<p>How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?</p>
</blockquote>
<p>I'd change the approach a bit. Have Python parse the original date input as forgivingly as possible, then output the date in a known-good representation.</p>
<p><a href="http://labix.org/python-dateutil" rel="nofollow">dateutil</a>'s liberal parser may be a good place to start:</p>
<pre><code>import dateutil.parser
d = dateutil.parser.parse('1/2/2003')
d.strftime('%d-%b-%y')
</code></pre>
<p>I'm not sure '%d-%b-%y' is actually still the right date format for Oracle, but it'll probably be something similar, ideally with four-digit years and no reliance on month names. (Trap: %b is locale-dependent so may return unwanted month names on a non-English OS.) Perhaps <code>strftime('%Y-%m-%d')</code> followed by <code>TO_DATE(..., 'YYYY-MM-DD')</code> at the Oracle end is needed?</p>
| 5
|
2009-03-12T19:03:31Z
|
[
"python",
"oracle",
"validation"
] |
Validating Oracle dates in Python
| 639,949
|
<p>Our Python CMS stores some date values in a generic "attribute" table's <em>varchar</em> column. Some of these dates are later moved into a table with an actual <em>date</em> column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.</p>
<p>How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?</p>
| 1
|
2009-03-12T18:41:20Z
| 640,115
|
<p>The format of a date string that Oracle recognizes as a date is a configurable property of the database and as such it's considered bad form to rely on implicit conversions of strings to dates.</p>
<p>Typically Oracle dates format to 'DD-MON-YYYY' but you can't always rely on it being set that way.</p>
<p>Personally I would have the CMS write to this "attribute" table in a standard format like 'YYYY-MM-DD', and then whichever job moves that to a DATE column can explicitly cast the value with to_date( value, 'YYYY-MM-DD' ) and you won't have any problems.</p>
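That validate-early step can be sketched with only the standard library (the helper name and input format are illustrative assumptions): parse the user's input strictly, and emit the fixed 'YYYY-MM-DD' string for the attribute table.

```python
from datetime import datetime

def to_standard_date_string(raw, input_format='%d-%m-%Y'):
    """Parse user input strictly; an invalid date raises ValueError
    here, at entry time, instead of failing later in the migration."""
    return datetime.strptime(raw, input_format).strftime('%Y-%m-%d')

print(to_standard_date_string('03-04-2008'))  # 2008-04-03

try:
    to_standard_date_string('31-02-2008')     # February 31st does not exist
except ValueError:
    print('rejected at entry time')
```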
| 1
|
2009-03-12T19:18:18Z
|
[
"python",
"oracle",
"validation"
] |
Validating Oracle dates in Python
| 639,949
|
<p>Our Python CMS stores some date values in a generic "attribute" table's <em>varchar</em> column. Some of these dates are later moved into a table with an actual <em>date</em> column. If the CMS user entered an invalid date, it doesn't get caught until the migration, when the query fails with an "Invalid string date" error.</p>
<p>How can I use Python to make sure that all dates put into our CMS are valid Oracle string date representations?</p>
| 1
|
2009-03-12T18:41:20Z
| 640,153
|
<p>Validate as early as possible. Why don't you store dates as dates in your Python CMS? </p>
<p>It is difficult to know what date a string like '03-04-2008' represents. Is it 3 April 2008 or 4 March 2008? An American will say 4 March 2008, but a Dutch person will say 3 April 2008.</p>
| -1
|
2009-03-12T19:33:41Z
|
[
"python",
"oracle",
"validation"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure however I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\\projects\\\testFolder\\\subdir\\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 640,032
|
<p>There's a very easy way if you're using Python 2.6: the <a href="http://docs.python.org/library/zipfile.html#zipfile.ZipFile.extractall" rel="nofollow">extractall</a> method.</p>
<p>Since the <code>zipfile</code> module is implemented completely in Python without any C extensions, you can probably copy it out of a 2.6 installation and use it with an older version of Python; you may find this easier than reimplementing the functionality yourself. The function itself is quite short:</p>
<pre><code>def extractall(self, path=None, members=None, pwd=None):
    """Extract all members from the archive to the current working
       directory. `path' specifies a different directory to extract to.
       `members' is optional and must be a subset of the list returned
       by namelist().
    """
    if members is None:
        members = self.namelist()
    for zipinfo in members:
        self.extract(zipinfo, path, pwd)
</code></pre>
| 2
|
2009-03-12T19:03:04Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure however I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\\projects\\\testFolder\\\subdir\\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 640,033
|
<p>It sounds like you are trying to run unzip to extract the zip.</p>
<p>It would be better to use the python <a href="http://docs.python.org/library/zipfile.html" rel="nofollow"><code>zipfile</code></a> module, and therefore do the extraction in python.</p>
<pre><code>import zipfile
def extract(zipfilepath, extractiondir):
zip = zipfile.ZipFile(zipfilepath)
zip.extractall(path=extractiondir)
</code></pre>
| 1
|
2009-03-12T19:03:11Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 640,078
|
<p>The extract and extractall methods are great if you're on Python 2.6. I have to use Python 2.5 for now, so I just need to create the directories if they don't exist. You can get a listing of directories with the <code>namelist()</code> method. The directories will always end with a forward slash (even on Windows) e.g.,</p>
<pre><code>import os, zipfile
z = zipfile.ZipFile('myfile.zip')
for f in z.namelist():
if f.endswith('/'):
os.makedirs(f)
</code></pre>
<p>You probably don't want to do it <em>exactly</em> like that (i.e., you'd probably want to extract the contents of the zip file as you iterate over the namelist), but you get the idea.</p>
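Putting that idea together as a complete sketch (the function name `unzip_with_dirs` and file names are illustrative): extract each file as we walk `namelist()`, creating any missing parent directories along the way.

```python
import os
import zipfile

def unzip_with_dirs(zip_path, dest='.'):
    """Extract zip_path under dest, creating directories as needed.

    A sketch for Pythons without ZipFile.extractall (pre-2.6).
    """
    z = zipfile.ZipFile(zip_path)
    for name in z.namelist():
        # zip entry names use forward slashes regardless of platform
        target = os.path.join(dest, *name.split('/'))
        if name.endswith('/'):
            # directory entry
            if not os.path.isdir(target):
                os.makedirs(target)
        else:
            # file entry: make sure the parent directory exists first
            parent = os.path.dirname(target)
            if parent and not os.path.isdir(parent):
                os.makedirs(parent)
            out = open(target, 'wb')
            out.write(z.read(name))
            out.close()
    z.close()
```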
| 21
|
2009-03-12T19:11:36Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 640,080
|
<p>I tried this out, and can reproduce it. The extractall method, as suggested by other answers, does <strong>not</strong> solve the problem. This seems like a bug in the zipfile module to me (perhaps Windows-only?), unless I'm misunderstanding how zipfiles are structured.</p>
<pre><code>testa\
testa\testb\
testa\testb\test.log
> test.zip
>>> from zipfile import ZipFile
>>> zipTest = ZipFile("C:\\...\\test.zip")
>>> zipTest.extractall("C:\\...\\")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "...\zipfile.py", line 940, in extractall
File "...\zipfile.py", line 928, in extract
File "...\zipfile.py", line 965, in _extract_member
IOError: [Errno 2] No such file or directory: 'C:\\...\\testa\\testb\\test.log'
</code></pre>
<p>If I do a <code>printdir()</code>, I get this (first column):</p>
<pre><code>>>> zipTest.printdir()
File Name
testa/testb/
testa/testb/test.log
</code></pre>
<p>If I try to extract just the first entry, like this:</p>
<pre><code>>>> zipTest.extract("testa/testb/")
'C:\\...\\testa\\testb'
</code></pre>
<p>On disk, this results in the creation of a folder <code>testa</code>, with a <strong>file</strong> <code>testb</code> inside. This is apparently the reason why the subsequent attempt to extract <code>test.log</code> fails; <code>testa\testb</code> is a file, not a folder.</p>
<p>Edit #1: If you extract just the file, then it works:</p>
<pre><code>>>> zipTest.extract("testa/testb/test.log")
'C:\\...\\testa\\testb\\test.log'
</code></pre>
<p>Edit #2: Jeff's code is the way to go; iterate through <code>namelist</code>; if it's a directory, create the directory. Otherwise, extract the file.</p>
| 5
|
2009-03-12T19:11:42Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 641,100
|
<p><strong>Don't</strong> trust extract() or extractall().</p>
<p>These methods blindly extract files to the paths given in their filenames. But ZIP filenames can be anything at all, including dangerous strings like "x/../../../etc/passwd". Extract such files and you could have just compromised your entire server.</p>
<p>Maybe this should be considered a reportable security hole in Python's zipfile module, but any number of zip de-archivers have exhibited the exact same behaviour in the past. To unarchive a ZIP file with folder structure safely, you need in-depth checking of each file path.</p>
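A minimal sketch of that in-depth path checking (the function name and error handling here are illustrative, not from the original answer): resolve every member path against the destination and refuse anything that would escape it before extracting.

```python
import os
import zipfile

def safe_extract(zip_path, dest):
    """Extract only after vetting every member path against dest."""
    dest = os.path.abspath(dest)
    z = zipfile.ZipFile(zip_path)
    for name in z.namelist():
        target = os.path.abspath(os.path.join(dest, name))
        # Refuse any entry that would resolve outside the destination
        if not target.startswith(dest + os.sep):
            z.close()
            raise ValueError('blocked suspicious path: %r' % name)
    z.extractall(dest)
    z.close()
```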
| 14
|
2009-03-13T00:56:12Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 2,346,007
|
<p>Note that zip files can have entries for directories as well as files. When creating archives with the <code>zip</code> command, pass the <code>-D</code> option to disable adding directory entries explicitly to the archive. When Python 2.6's <code>ZipFile.extractall</code> method runs across a directory entry, it seems to create a <em>file</em> in its place. Since archive entries aren't necessarily in order, this causes <code>ZipFile.extractall</code> to fail quite often, as it tries to create a file in a subdirectory of a file. If you've got an archive that you want to use with the Python module, simply extract it and re-zip it with the <code>-D</code> option. Here's a little snippet I've been using for a while to do exactly that:</p>
<pre><code>P=`pwd` &&
Z=`mktemp -d -t zip` &&
pushd $Z &&
unzip $P/<busted>.zip &&
zip -r -D $P/<new>.zip . &&
popd &&
rm -rf $Z
</code></pre>
<p>Replace <code><busted>.zip</code> and <code><new>.zip</code> with real filenames relative to the current directory. Then just copy the whole thing and paste it into a command shell, and it will create a new archive that's ready to rock with Python 2.6. There <em>is</em> a <code>zip</code> command that will remove these directory entries without unzipping but IIRC it behaved oddly in different shell environments or zip configurations.</p>
| 0
|
2010-02-27T02:27:24Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 6,478,402
|
<p>I know it may be a little late to say this, but Jeff is right.
It's as simple as:</p>
<pre><code>import os
from zipfile import ZipFile as zip
def extractAll(zipName):
    z = zip(zipName)
    for f in z.namelist():
        if f.endswith('/'):
            os.makedirs(f)
        else:
            z.extract(f)

if __name__ == '__main__':
    zipList = ['one.zip', 'two.zip', 'three.zip']
    for zipName in zipList:
        extractAll(zipName)
</code></pre>
| 4
|
2011-06-25T14:32:43Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 12,130,069
|
<h2>Filter namelist to exclude the folders</h2>
<p>All you have to do is filter out the <code>namelist()</code> entries ending with <code>/</code> and the problem is resolved: </p>
<pre><code> z.extractall(dest, filter(lambda f: not f.endswith('/'), z.namelist()))
</code></pre>
<p>nJoy!</p>
| 1
|
2012-08-26T12:20:07Z
|
[
"python",
"unzip"
] |
Unzipping directory structure with python
| 639,962
|
<p>I have a zip file which contains the following directory structure:</p>
<pre><code>dir1\dir2\dir3a
dir1\dir2\dir3b
</code></pre>
<p>I'm trying to unzip it and maintain the directory structure; however, I get the error:</p>
<pre><code>IOError: [Errno 2] No such file or directory: 'C:\\projects\\testFolder\\subdir\\unzip.exe'
</code></pre>
<p>where testFolder is dir1 above and subdir is dir2.</p>
<p>Is there a quick way of unzipping the file and maintaining the directory structure?</p>
| 26
|
2009-03-12T18:44:45Z
| 29,133,705
|
<p>If like me, you have to extract a complete zip archive with an older Python release (in my case, 2.4) here's what I came up with (based on Jeff's answer):</p>
<pre><code>import zipfile
import os
def unzip(source_file_path, destination_dir):
destination_dir += '/'
z = zipfile.ZipFile(source_file_path, 'r')
for file in z.namelist():
outfile_path = destination_dir + file
if file.endswith('/'):
os.makedirs(outfile_path)
else:
outfile = open(outfile_path, 'wb')
outfile.write(z.read(file))
outfile.close()
z.close()
</code></pre>
| 0
|
2015-03-18T22:13:22Z
|
[
"python",
"unzip"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 640,016
|
<pre><code>s/\([^)]*\)//
</code></pre>
<p>So in Python, you'd do:</p>
<pre><code>re.sub(r'\([^)]*\)', '', filename)
</code></pre>
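A quick illustration of why the `[^)]*` character class is used rather than a greedy `.*` (the sample string is made up):

```python
import re

s = "a_(x)_b_(y).ext"  # made-up sample with two parenthesized chunks

# [^)]* stops at the first closing paren, so each chunk is removed on its own
each = re.sub(r'\([^)]*\)', '', s)
# a greedy .* runs through to the LAST closing paren, eating the text between
greedy = re.sub(r'\(.*\)', '', s)

# each == 'a__b_.ext', greedy == 'a_.ext'
```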
| 57
|
2009-03-12T18:59:43Z
|
[
"python",
"regex",
"perl"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 640,038
|
<p>If you can stand to use <code>sed</code> (possibly executed from within your program), it'd be as simple as:</p>
<pre><code>sed 's/(.*)//g'
</code></pre>
| 2
|
2009-03-12T19:03:40Z
|
[
"python",
"regex",
"perl"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 640,054
|
<p>I would use:</p>
<pre><code>\([^)]*\)
</code></pre>
| 15
|
2009-03-12T19:08:27Z
|
[
"python",
"regex",
"perl"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 640,250
|
<p>If a path may contain parentheses then the <code>r'\(.*?\)'</code> regex is not enough:</p>
<pre><code>import os, re
def remove_parenthesized_chunks(path, safeext=True, safedir=True):
dirpath, basename = os.path.split(path) if safedir else ('', path)
name, ext = os.path.splitext(basename) if safeext else (basename, '')
name = re.sub(r'\(.*?\)', '', name)
return os.path.join(dirpath, name+ext)
</code></pre>
<p>By default the function preserves parenthesized chunks in directory and extention parts of the path.</p>
<p>Example:</p>
<pre><code>>>> f = remove_parenthesized_chunks
>>> f("Example_file_(extra_descriptor).ext")
'Example_file_.ext'
>>> path = r"c:\dir_(important)\example(extra).ext(untouchable)"
>>> f(path)
'c:\\dir_(important)\\example.ext(untouchable)'
>>> f(path, safeext=False)
'c:\\dir_(important)\\example.ext'
>>> f(path, safedir=False)
'c:\\dir_\\example.ext(untouchable)'
>>> f(path, False, False)
'c:\\dir_\\example.ext'
>>> f(r"c:\(extra)\example(extra).ext", safedir=False)
'c:\\\\example.ext'
</code></pre>
| 3
|
2009-03-12T20:03:48Z
|
[
"python",
"regex",
"perl"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 640,630
|
<pre><code>>>> import re
>>> filename = "Example_file_(extra_descriptor).ext"
>>> p = re.compile(r'\([^)]*\)')
>>> re.sub(p, '', filename)
'Example_file_.ext'
</code></pre>
| 0
|
2009-03-12T21:48:15Z
|
[
"python",
"regex",
"perl"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 640,819
|
<p>If you don't absolutely need to use a regex, <strike>use</strike> consider using Perl's <a href="http://perldoc.perl.org/Text/Balanced.html">Text::Balanced</a> to remove the parentheses.</p>
<pre><code>use Text::Balanced qw(extract_bracketed);
my ($extracted, $remainder, $prefix) = extract_bracketed( $filename, '()', '[^(]*' );
{ no warnings 'uninitialized';
$filename = (defined $prefix or defined $remainder)
? $prefix . $remainder
: $extracted;
}
</code></pre>
<p>You may be thinking, "Why do all this when a regex does the trick in one line?"</p>
<pre><code>$filename =~ s/\([^)]*\)//;
</code></pre>
<p>Text::Balanced handles nested parenthesis. So <code>$filename = 'foo_(bar(baz)buz)).foo'</code> will be extracted properly. The regex based solutions offered here will fail on this string. The one will stop at the first closing paren, and the other will eat them all.</p>
<pre><code>$filename =~ s/\([^)]*\)//;
# returns 'foo_buz)).foo'

$filename =~ s/\(.*\)//;
# returns 'foo_.foo'

# the Text::Balanced example returns 'foo_).foo'
</code></pre>
<p>If either of the regex behaviors is acceptable, use a regex--but document the limitations and the assumptions being made.</p>
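For a Python equivalent of that balanced handling (Text::Balanced is Perl-only), a simple depth counter can be sketched. The name `strip_parens` and the keep-unmatched-closers behaviour are assumptions, chosen to match the 'foo_).foo' result quoted in this answer:

```python
def strip_parens(s):
    """Remove balanced parenthesized chunks by tracking nesting depth.

    Unmatched closing parens are kept, as in the Text::Balanced result.
    """
    out = []
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')' and depth > 0:
            depth -= 1
        elif depth == 0:
            # only characters outside any parenthesized chunk survive
            out.append(ch)
    return ''.join(out)
```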
| 5
|
2009-03-12T22:55:18Z
|
[
"python",
"regex",
"perl"
] |
How can I remove text within parentheses with a regex?
| 640,001
|
<p>I'm trying to handle a bunch of files, and I need to alter them to remove extraneous information in the filenames; notably, I'm trying to remove text inside parentheses. For example:</p>
<pre><code>filename = "Example_file_(extra_descriptor).ext"
</code></pre>
<p>and I want to regex a whole bunch of files where the parenthetical expression might be in the middle or at the end, and of variable length.</p>
<p>What would the regex look like? Perl or Python syntax would be preferred.</p>
| 33
|
2009-03-12T18:56:57Z
| 11,793,027
|
<p>Java code:</p>
<pre><code>Pattern pattern1 = Pattern.compile("(_\\(.*?\\))");
Matcher matcher1 = pattern1.matcher(fileName);
if (matcher1.find()) {
    System.out.println(fileName.replace(matcher1.group(1), ""));
}
</code></pre>
| 0
|
2012-08-03T09:30:47Z
|
[
"python",
"regex",
"perl"
] |
What's the nearest equivalent of Beautiful Soup for Ruby?
| 640,068
|
<p>I love the Beautiful Soup scraping library in Python. It just works. Is there a close equivalent in Ruby? </p>
| 11
|
2009-03-12T19:10:29Z
| 640,104
|
<p><a href="http://wiki.github.com/why/hpricot" rel="nofollow">Hpricot</a>? I don't know what others are using...</p>
| 1
|
2009-03-12T19:16:05Z
|
[
"python",
"ruby",
"beautifulsoup"
] |
What's the nearest equivalent of Beautiful Soup for Ruby?
| 640,068
|
<p>I love the Beautiful Soup scraping library in Python. It just works. Is there a close equivalent in Ruby? </p>
| 11
|
2009-03-12T19:10:29Z
| 640,129
|
<p>There's <a href="https://github.com/scrubber/scrubyt" rel="nofollow">scRUBYt!</a>,
<a href="http://www.crummy.com/software/RubyfulSoup/" rel="nofollow">Rubyful-soup</a> (no longer maintained),
<a href="http://rubyforge.org/projects/mechanize/" rel="nofollow">WWW::Mechanize</a>,
<a href="http://blog.labnotes.org/2006/07/11/scraping-with-style-scrapi-toolkit-for-ruby/" rel="nofollow">scrAPI</a> and a few more.</p>
<p>Or you could just use Hpricot or <a href="http://github.com/tenderlove/nokogiri/tree/master" rel="nofollow">Nokogiri</a> for parsing.</p>
| 4
|
2009-03-12T19:24:00Z
|
[
"python",
"ruby",
"beautifulsoup"
] |
What's the nearest equivalent of Beautiful Soup for Ruby?
| 640,068
|
<p>I love the Beautiful Soup scraping library in Python. It just works. Is there a close equivalent in Ruby? </p>
| 11
|
2009-03-12T19:10:29Z
| 640,135
|
<p><a href="http://wiki.github.com/tenderlove/nokogiri">Nokogiri</a> is another HTML/XML parser. It's faster than hpricot according to <a href="http://gist.github.com/18533">these benchmarks</a>. Nokogiri uses libxml2 and is a drop in replacement for hpricot. It also has css3 selector support which is pretty nice.</p>
<p>Edit: There's a new benchmark comparing nokogiri, libxml-ruby, hpricot and rexml <a href="http://www.rubyinside.com/ruby-xml-performance-benchmarks-1641.html">here</a>.</p>
<p><a href="http://www.ruby-toolbox.com/">Ruby Toolbox</a> has a category on HTML parsers <a href="http://www.ruby-toolbox.com/categories/html_parsing.html">here</a>.</p>
| 8
|
2009-03-12T19:25:16Z
|
[
"python",
"ruby",
"beautifulsoup"
] |
What's the nearest equivalent of Beautiful Soup for Ruby?
| 640,068
|
<p>I love the Beautiful Soup scraping library in Python. It just works. Is there a close equivalent in Ruby? </p>
| 11
|
2009-03-12T19:10:29Z
| 1,718,308
|
<p>This page from <a href="https://www.ruby-toolbox.com/categories/html_parsing" rel="nofollow">Ruby Toolbox</a> includes a chart of the relative popularity of various parsers.</p>
| 2
|
2009-11-11T21:46:08Z
|
[
"python",
"ruby",
"beautifulsoup"
] |
Can I override the html_name for a tabularinline field in the admin interface?
| 640,218
|
<p><strong>Is it possible to override the html naming of fields in TabularInline admin forms so they won't contain dashes?</strong></p>
<p>I'm trying to apply the knowledge obtained <a href="http://jannisleidel.com/2008/11/autocomplete-form-widget-foreignkey-model-fields/" rel="nofollow">here</a> to create a TabularInline admin form that has the auto-complete feature.</p>
<p>It all works except that Django insists in naming the fields in a tabularinline queryset as something in the lines of:</p>
<pre><code>[model]_set-[index]-[field]
</code></pre>
<p>So, if my model is TravelLogClient and my foreign key field is company, the fields in the HTML form for the three entries in the tabularinline queryset will be:</p>
<pre><code>travellogclient_set-0-company
travellogclient_set-1-company
travellogclient_set-2-company
</code></pre>
<p>The problem is that javascript dislikes identifiers with dashes in them. So the javascript fails and the autocomplete doesn't work.</p>
<p>THIS IS ONLY A PROBLEM WITH TABULAR INLINE forms! If I use <a href="http://jannisleidel.com/2008/11/autocomplete-form-widget-foreignkey-model-fields/" rel="nofollow">Jannis' autocomplete example</a> on a non tabular admin form field, it works just fine because the field name doesn't have the "<code>..._set-[index]-...</code>" portion in the HTML and javascript.</p>
<p>Rather than submitting a patch to django's source code changing dashes for underscores on <code>contrib.forms.forms.py</code> and <code>contrib.forms.formsets.py</code>, it occurred to me that it is possible that behavior can be overridden somehow. </p>
<p>Failing that, what is the easiest way to make those dashes in the html_name become underscores instead?</p>
<p>Thanks in advance!</p>
| 0
|
2009-03-12T19:54:32Z
| 643,019
|
<p>Paolo and Guðmundur are right. I modified my usage in the javascript according to Guðmundur's suggestion and things now work as expected - no django intervention needed.</p>
<p>Sorry for the mental lapse...</p>
<p>Thanks!</p>
| 0
|
2009-03-13T14:42:35Z
|
[
"python",
"django",
"django-admin",
"django-forms"
] |
How do I upload a file with mod_python?
| 640,310
|
<p>I want to create a simple file upload form and I must be completely incapable. I've read docs and tutorials,but for some reason, I'm not getting the submitted form data. I wrote the smallest amount of code I could to test and it still isn't working. Any ideas what's wrong?</p>
<pre><code>def index():
html = '''
<html>
<body>
<form id="fileUpload" action="./result" method="post">
<input type="file" id="file"/>
<input type="submit" value="Upload"/>
</form>
</body>
</html>
'''
return html
def result(req):
try: tmpfile = req.form['file']
except:
return "no file!"
</code></pre>
| 3
|
2009-03-12T20:21:02Z
| 640,650
|
<p>Try putting enctype="multipart/form-data" in your form tag, and give the file input a name="file" attribute (req.form is keyed by the name attribute, not the id). Your mistake is not really mod_python related.</p>
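For completeness, a corrected version of the form-serving function from the question could look like this (a sketch; the additions are the enctype and name attributes):

```python
def index():
    # enctype="multipart/form-data" is required for file uploads,
    # and the input needs a name= attribute to show up in req.form
    return '''
    <html>
    <body>
    <form id="fileUpload" action="./result" method="post"
          enctype="multipart/form-data">
        <input type="file" id="file" name="file"/>
        <input type="submit" value="Upload"/>
    </form>
    </body>
    </html>
    '''
```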
| 1
|
2009-03-12T21:54:06Z
|
[
"python",
"upload",
"mod-python"
] |
tell whether python is in -i mode
| 640,389
|
<p>How can you tell whether python has been started with the -i flag?</p>
<p>According to <a href="http://www.wingware.com/psupport/python-manual/2.6/using/cmdline.html" rel="nofollow">the docs</a>, you can check the PYTHONINSPECT variable in os.environ, which is the <em>equivalent</em> of -i. But apparently it doesn't work the same way.</p>
<p>Works:</p>
<pre><code>$ PYTHONINSPECT=1 python -c 'import os; print os.environ["PYTHONINSPECT"]'
</code></pre>
<p>Doesn't work:</p>
<pre><code>$ python -i -c 'import os; print os.environ["PYTHONINSPECT"]'
</code></pre>
<p>The reason I ask is because I have a script that calls sys.exit(-1) if certain conditions fail. This is good, but sometimes I want to manually debug it using -i. I suppose I can just learn to use "PYTHONINSPECT=1 python" instead of "python -i", but it would be nice if there were a universal way of doing this.</p>
| 5
|
2009-03-12T20:38:47Z
| 640,405
|
<p>This specifies <a href="http://mail.python.org/pipermail/python-list/2005-April/319756.html" rel="nofollow">how to programmatically switch your script to interactive mode</a>.</p>
| 0
|
2009-03-12T20:44:32Z
|
[
"python"
] |
tell whether python is in -i mode
| 640,389
|
<p>How can you tell whether python has been started with the -i flag?</p>
<p>According to <a href="http://www.wingware.com/psupport/python-manual/2.6/using/cmdline.html" rel="nofollow">the docs</a>, you can check the PYTHONINSPECT variable in os.environ, which is the <em>equivalent</em> of -i. But apparently it doesn't work the same way.</p>
<p>Works:</p>
<pre><code>$ PYTHONINSPECT=1 python -c 'import os; print os.environ["PYTHONINSPECT"]'
</code></pre>
<p>Doesn't work:</p>
<pre><code>$ python -i -c 'import os; print os.environ["PYTHONINSPECT"]'
</code></pre>
<p>The reason I ask is because I have a script that calls sys.exit(-1) if certain conditions fail. This is good, but sometimes I want to manually debug it using -i. I suppose I can just learn to use "PYTHONINSPECT=1 python" instead of "python -i", but it would be nice if there were a universal way of doing this.</p>
| 5
|
2009-03-12T20:38:47Z
| 640,431
|
<h3>How to set inspect mode programmatically</h3>
<p>The answer from <a href="http://mail.python.org/pipermail/python-list/2005-April/319756.html" rel="nofollow">the link</a> <a href="http://stackoverflow.com/questions/640389/tell-whether-python-is-in-i-mode/640405#640405">@Jweede provided</a> is imprecise. It should be:</p>
<pre><code>import os
os.environ['PYTHONINSPECT'] = '1'
</code></pre>
<h3>How to retrieve whether interactive/inspect flags are set</h3>
<p>Just another variant of <a href="http://stackoverflow.com/questions/640389/tell-whether-python-is-in-i-mode/640534#640534">@Brian's answer</a>:</p>
<pre><code>import os
from ctypes import POINTER, c_int, cast, pythonapi
def in_interactive_inspect_mode():
"""Whether '-i' option is present or PYTHONINSPECT is not empty."""
if os.environ.get('PYTHONINSPECT'): return True
iflag_ptr = cast(pythonapi.Py_InteractiveFlag, POINTER(c_int))
#NOTE: in Python 2.6+ ctypes.pythonapi.Py_InspectFlag > 0
# when PYTHONINSPECT set or '-i' is present
return iflag_ptr.contents.value != 0
</code></pre>
<p>See the Python's <a href="http://svn.python.org/view/python/trunk/Modules/main.c?view=markup" rel="nofollow">main.c</a>.</p>
| 3
|
2009-03-12T20:53:17Z
|
[
"python"
] |
tell whether python is in -i mode
| 640,389
|
<p>How can you tell whether python has been started with the -i flag?</p>
<p>According to <a href="http://www.wingware.com/psupport/python-manual/2.6/using/cmdline.html" rel="nofollow">the docs</a>, you can check the PYTHONINSPECT variable in os.environ, which is the <em>equivalent</em> of -i. But apparently it doesn't work the same way.</p>
<p>Works:</p>
<pre><code>$ PYTHONINSPECT=1 python -c 'import os; print os.environ["PYTHONINSPECT"]'
</code></pre>
<p>Doesn't work:</p>
<pre><code>$ python -i -c 'import os; print os.environ["PYTHONINSPECT"]'
</code></pre>
<p>The reason I ask is because I have a script that calls sys.exit(-1) if certain conditions fail. This is good, but sometimes I want to manually debug it using -i. I suppose I can just learn to use "PYTHONINSPECT=1 python" instead of "python -i", but it would be nice if there were a universal way of doing this.</p>
| 5
|
2009-03-12T20:38:47Z
| 640,534
|
<p>I took a look at the source, and although the variable set when -i is provided is stored in Py_InteractiveFlag, it doesn't look like it gets exposed to python.</p>
<p>However, if you don't mind getting your hands a bit dirty with some low-level ctypes inspecting, I think you can get at the value by:</p>
<pre><code>import ctypes, os
def interactive_inspect_mode():
flagPtr = ctypes.cast(ctypes.pythonapi.Py_InteractiveFlag,
ctypes.POINTER(ctypes.c_int))
return flagPtr.contents.value > 0 or bool(os.environ.get("PYTHONINSPECT",False))
</code></pre>
<p>[Edit] fix typo and also check PYTHONINSPECT (which doesn't set the variable), as pointed out in comments. </p>
| 1
|
2009-03-12T21:21:17Z
|
[
"python"
] |
Python introspection: How to get an 'unsorted' list of object attributes?
| 640,479
|
<p>The following code</p>
<pre><code>import types
class A:
class D:
pass
class C:
pass
for d in dir(A):
if type(eval('A.'+d)) is types.ClassType:
print d
</code></pre>
<p>outputs</p>
<pre><code>C
D
</code></pre>
<p>How do I get it to output in the order in which these classes were defined in the code? I.e.</p>
<pre><code>D
C
</code></pre>
<p>Is there any way other than using inspect.getsource(A) and parsing that?</p>
| 4
|
2009-03-12T21:04:56Z
| 640,503
|
<p>AFAIK, no -- there isn't*. This is because all of a class's attributes are stored in a dictionary (which is, as you know, unordered).</p>
<p>*: it might actually be possible, but that would require either decorators or possibly metaclass hacking. Do either of those interest you?</p>
| 1
|
2009-03-12T21:13:00Z
|
[
"python",
"introspection",
"python-datamodel"
] |
Python introspection: How to get an 'unsorted' list of object attributes?
| 640,479
|
<p>The following code</p>
<pre><code>import types
class A:
class D:
pass
class C:
pass
for d in dir(A):
if type(eval('A.'+d)) is types.ClassType:
print d
</code></pre>
<p>outputs</p>
<pre><code>C
D
</code></pre>
<p>How do I get it to output in the order in which these classes were defined in the code? I.e.</p>
<pre><code>D
C
</code></pre>
<p>Is there any way other than using inspect.getsource(A) and parsing that?</p>
| 4
|
2009-03-12T21:04:56Z
| 640,507
|
<p>No, you can't get those attributes in the order you're looking for. Python attributes are stored in a dict (read: hashmap), which has no awareness of insertion order. </p>
<p>Also, I would avoid the use of eval by simply saying</p>
<pre><code>if type(getattr(A, d)) is types.ClassType:
print d
</code></pre>
<p>in your loop. Note that you can also just iterate through key/value pairs in <code>A.__dict__</code></p>
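As an aside for readers on later versions (an update beyond the Python 2 behaviour discussed here): since Python 3.6 (PEP 520), the class body's definition order is preserved in <code>__dict__</code>, so iterating it directly gives the desired order without any metaclass tricks:

```python
class A:
    class D:
        pass

    class C:
        pass

# On Python 3.6+ the class namespace keeps definition order, so no sorting
# is needed; filter the namespace down to the nested classes.
nested = [name for name, value in A.__dict__.items() if isinstance(value, type)]
# nested == ['D', 'C']
```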
| 5
|
2009-03-12T21:14:07Z
|
[
"python",
"introspection",
"python-datamodel"
] |
Python introspection: How to get an 'unsorted' list of object attributes?
| 640,479
|
<p>The following code</p>
<pre><code>import types
class A:
class D:
pass
class C:
pass
for d in dir(A):
if type(eval('A.'+d)) is types.ClassType:
print d
</code></pre>
<p>outputs</p>
<pre><code>C
D
</code></pre>
<p>How do I get it to output in the order in which these classes were defined in the code? I.e.</p>
<pre><code>D
C
</code></pre>
<p>Is there any way other than using inspect.getsource(A) and parsing that?</p>
| 4
|
2009-03-12T21:04:56Z
| 640,578
|
<p>I'm not trying to be glib here, but would it be feasible for you to organize the classes in your source alphabetically? I find that when there are lots of classes in one file this can be useful in its own right.</p>
| 0
|
2009-03-12T21:33:47Z
|
[
"python",
"introspection",
"python-datamodel"
] |
Python introspection: How to get an 'unsorted' list of object attributes?
| 640,479
|
<p>The following code</p>
<pre><code>import types
class A:
class D:
pass
class C:
pass
for d in dir(A):
if type(eval('A.'+d)) is types.ClassType:
print d
</code></pre>
<p>outputs</p>
<pre><code>C
D
</code></pre>
<p>How do I get it to output in the order in which these classes were defined in the code? I.e.</p>
<pre><code>D
C
</code></pre>
<p>Is there any way other than using inspect.getsource(A) and parsing that?</p>
| 4
|
2009-03-12T21:04:56Z
| 640,638
|
<p>Note that that parsing is already done for you in inspect - take a look at <code>inspect.findsource</code>, which searches the module for the class definition and returns the source and line number. Sorting on that line number (you may also need to split out classes defined in separate modules) should give the right order.</p>
<p>However, this function doesn't seem to be documented, and is just using a regular expression to find the line, so it may not be too reliable.</p>
<p>Another option is to use metaclasses, or some other way to either implicitly or explicitly ordering information to the object. For example:</p>
<pre><code>import itertools, operator
next_id = itertools.count().next
class OrderedMeta(type):
def __init__(cls, name, bases, dct):
super(OrderedMeta, cls).__init__(name, bases, dct)
cls._order = next_id()
# Set the default metaclass
__metaclass__ = OrderedMeta
class A:
class D:
pass
class C:
pass
print sorted([cls for cls in [getattr(A, name) for name in dir(A)]
if isinstance(cls, OrderedMeta)], key=operator.attrgetter("_order"))
</code></pre>
<p>However this is a fairly intrusive change (requires setting the metaclass of any classes you're interested in to OrderedMeta)</p>
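<p>Worth noting: Python 3.6+ preserves class-body definition order (PEP 520), so on those versions the same result no longer needs a metaclass. A sketch:</p>

```python
# Python 3.6+ keeps class attributes in definition order, so the
# nested classes can be listed without a custom metaclass.
class A:
    class D:
        pass
    class C:
        pass

ordered = [name for name, value in vars(A).items() if isinstance(value, type)]
print(ordered)  # ['D', 'C'] -- definition order, not alphabetical
```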
| 8
|
2009-03-12T21:51:30Z
|
[
"python",
"introspection",
"python-datamodel"
] |
Python introspection: How to get an 'unsorted' list of object attributes?
| 640,479
|
<p>The following code</p>
<pre><code>import types
class A:
class D:
pass
class C:
pass
for d in dir(A):
if type(eval('A.'+d)) is types.ClassType:
print d
</code></pre>
<p>outputs</p>
<pre><code>C
D
</code></pre>
<p>How do I get it to output in the order in which these classes were defined in the code? I.e.</p>
<pre><code>D
C
</code></pre>
<p>Is there any way other than using inspect.getsource(A) and parsing that?</p>
| 4
|
2009-03-12T21:04:56Z
| 640,682
|
<p>The <code>inspect</code> module also has the <code>findsource</code> function. It returns a tuple of source lines and line number where the object is defined.</p>
<pre><code>>>> import inspect
>>> import StringIO
>>> inspect.findsource(StringIO.StringIO)[1]
41
>>>
</code></pre>
<p>The <code>findsource</code> function actually searches through the source file and looks for likely candidates if it is given a class-object.</p>
<p>Given a method-, function-, traceback-, frame-, or code-object, it simply looks at the <code>co_firstlineno</code> attribute of the (contained) code-object.</p>
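<p>A quick illustration of that attribute (hypothetical functions; in Python 3 the code object is reached via <code>__code__</code>, in Python 2 via <code>func_code</code>):</p>

```python
def first():
    pass

def second():
    pass

# The compiler records the starting line of every code object,
# so definition order can be recovered by comparing line numbers.
assert first.__code__.co_firstlineno < second.__code__.co_firstlineno
print(first.__code__.co_firstlineno, second.__code__.co_firstlineno)
```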
| 4
|
2009-03-12T22:03:07Z
|
[
"python",
"introspection",
"python-datamodel"
] |
Formatted text in GAE
| 640,733
|
<p>Google app engine question: What is a good way to take formatted text (does not have to be rich text) from the user and then store it in a text or blob property in the datastore? Mainly what I'm looking for is for it to store newlines and strings of spaces, so that the text comes back looking the same as when it was submitted.</p>
| 3
|
2009-03-12T22:22:32Z
| 640,776
|
<p>The text will always "come back" the same as how you put it in. You will lose some formatting when rendering to HTML (as you noticed with line endings and spaces). One solution might be to render the text into a <code>&lt;pre&gt;</code> <a href="http://www.w3schools.com/TAGS/tag%5Fpre.asp" rel="nofollow">element (which implies preformatted text)</a>.</p>
<pre><code><pre>
This text will
be formatted correctly
</pre>
</code></pre>
<p>Another way would be to convert your format into HTML which is well formatted. Typically a Wiki might do this: <em>store the text as markup, and render it to HTML</em>. It's probably exactly what this site is doing with its posts etc. If you do choose this route, I can recommend the <a href="http://code.google.com/p/creoleparser/" rel="nofollow">creoleparser</a> library, which works well on Appengine.</p>
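<p>A minimal sketch of the <code>&lt;pre&gt;</code> approach in modern Python, escaping the user text first so markup characters can't break out of the element (the sample text is made up; Python 3's <code>html.escape</code> is assumed):</p>

```python
import html

user_text = "first line\n    indented reply\nlast line"
# Escape the user input, then wrap it in <pre> so that newlines
# and runs of spaces are preserved when the browser renders it.
rendered = "<pre>{}</pre>".format(html.escape(user_text))
print(rendered)
```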
| 2
|
2009-03-12T22:41:52Z
|
[
"python",
"google-app-engine",
"gae-datastore"
] |
Formatted text in GAE
| 640,733
|
<p>Google app engine question: What is a good way to take formatted text (does not have to be rich text) from the user and then store it in a text or blob property in the datastore? Mainly what I'm looking for is for it to store newlines and strings of spaces, so that the text comes back looking the same as when it was submitted.</p>
| 3
|
2009-03-12T22:22:32Z
| 641,790
|
<p>Other commonly used <em>simplified markups</em> include <a href="http://pypi.python.org/pypi/textile" rel="nofollow">Textile</a> and <a href="http://pypi.python.org/pypi/Markdown" rel="nofollow">Markdown</a>.</p>
| 2
|
2009-03-13T08:00:57Z
|
[
"python",
"google-app-engine",
"gae-datastore"
] |
Writing to the serial port in Vista from Python
| 640,802
|
<p>How do I write to the serial port in Vista from Python? The termios package only seems to support POSIX.</p>
| 5
|
2009-03-12T22:49:56Z
| 640,832
|
<p><a href="http://pyserial.wiki.sourceforge.net/pySerial">pyserial</a> does the trick, you'll need <a href="http://sourceforge.net/projects/pywin32/">python extensions for windows</a> for it to work in windows.</p>
| 9
|
2009-03-12T23:02:11Z
|
[
"python",
"windows"
] |
Writing to the serial port in Vista from Python
| 640,802
|
<p>How do I write to the serial port in Vista from Python? The termios package only seems to support POSIX.</p>
| 5
|
2009-03-12T22:49:56Z
| 641,193
|
<p>Seems like it wasn't any harder than this using <a href="http://pyserial.wiki.sourceforge.net/pySerial">pyserial</a>: </p>
<pre><code>import serial
ser = serial.Serial(0) # open first serial port with 9600,8,N,1
print ser.portstr # check which port was really used
ser.write('hello')
ser.close()
</code></pre>
| 7
|
2009-03-13T01:47:42Z
|
[
"python",
"windows"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 640,964
|
<p>TG2 takes Pylons and changes some defaults - object dispatching instead of Routes, and Genshi instead of Mako. They believe <a href="http://wiki.python.org/moin/TOOWTDI" rel="nofollow">there's only one way to do it</a>, so apps can rely on the same API for any TurboGears website.</p>
<h2>Similarities</h2>
<ul>
<li>TG2 and Django both distinguish between websites and components, so you'll eventually see <a href="http://djangoplugables.com/" rel="nofollow">reusable building blocks</a> for TurboGears, too.</li>
</ul>
<h2>Differences</h2>
<ul>
<li><p>Django uses its own handlers for HTTP, routing, templating, and persistence. Django also has stellar documentation and an established community.</p></li>
<li><p>TurboGears defaults to best-of-breed libraries, which apparently are <a href="http://pythonpaste.org/" rel="nofollow">Paste</a>, object dispatching, <a href="http://genshi.edgewall.org/" rel="nofollow">Genshi</a>, and <a href="http://www.sqlalchemy.org/" rel="nofollow">SqlAlchemy</a>. This philosophy produces a better all-round toolset, but at the risk of instability - because it means throwing away backwards compatibility if and when better libraries appear.</p></li>
</ul>
| 14
|
2009-03-13T00:06:44Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 641,033
|
<p>Besides what Nikhil gave in his answer, I think another minor difference is that Turbogears provides some support for javascript widgets and integration with <a href="http://mochikit.com/" rel="nofollow">Mochikit</a>.</p>
<p>Whereas Django steadfastly remains javascript framework neutral.</p>
<p>(At least this was true with older versions of Turbogears... this might have changed with TG2)</p>
<p><strong>Edit:</strong> I just went over TG2 documentation and see that it did indeed change. Turbogears now uses ToscaWidgets which can use jQuery, ExtJS, Dojo, etc. underneath. This nicely makes it more framework neutral while still providing nice javascript widgets.</p>
<p>This strikes me as a <strong>pro</strong> for Turbogears if you don't have any javascript experience and a <strong>pro</strong> for Django if you are writing a lot of specialized javascript.</p>
| 1
|
2009-03-13T00:33:17Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 641,046
|
<p>Because Django uses its own ORM, it limits you to learning that ORM for that specific web framework. I think using a web framework with a more popular ORM (like SQLAlchemy, which TG uses) increases your employability chances. Just my 2 cents.</p>
| 0
|
2009-03-13T00:37:04Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 641,232
|
<p>One of the most important questions is not just what technical features this platform provides or that platform provides, but the driving philosophy of the open source project and the nature of the community supporting it. </p>
<p>I've got no dog in this fight myself, but I found <a href="http://www.youtube.com/watch?v=fipFKyW2FA4&feature=PlayList&p=D415FAF806EC47A1&index=12" rel="nofollow">Mark Ramm's talk at DjangoCon 2008</a> to be very interesting on this point (Google will yield no end of subsequent discussion, no doubt).</p>
| 1
|
2009-03-13T02:16:20Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 642,874
|
<p>Last I checked, Django has a very poor data implementation. And that's a huge weakness in my book. Django's ORM doesn't allow me to use the power of the underlying database. For example, I can't use compound primary keys, which are important to good db design. It also doesn't support more than a single database, which is not a big deal until you really need it and find that you can't do it without resorting to doing it manually. Lastly, if you have to make changes to your database structure in a team-friendly way, you have to try to choose between a set of 3rd-party migration tools.</p>
<p>Turbogears seems to be more architecturally sound, doing its best to integrate individual tools that are awesome in their own right. And because TG is more of an integrator, you're able to switch out pieces to suit your preferences. Don't like SQLAlchemy? You can use SQLObject. Don't like Genshi templates? You can use Mako or even Django's, although you're not exactly stuck with the default on Django either.</p>
<p>Time for tg2's cons:</p>
<ul>
<li>TG has a much smaller community, and community usually has its benefits.</li>
<li>Django has a much better name. I really like that name ;-)</li>
<li>Django seems simpler for the beginning web developer, with pretty cool admin tools.</li>
<li>TG has decent documentation, but you also need to go to Genshi's site to learn Genshi, SQLAlchemy's site to learn that, etc. Django has great docs.</li>
</ul>
<p>My 2 cents.</p>
| 0
|
2009-03-13T14:06:02Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 703,311
|
<p>TG2 has several advantages that I think are important: </p>
<ul>
<li>Multi-database support</li>
<li>sharding/data partitioning support</li>
<li>longstanding support for aggregates, multi-column primary keys</li>
<li>a transaction system that handles multi-database transactions for you</li>
<li>an admin system that works with all of the above</li>
<li>out-of-the-box support for reusable template snippets</li>
<li>an easy method for creating reusable template tag-libraries</li>
<li>more flexibility in using non-standard components</li>
</ul>
<p>There are more, but I think it's also important to know that Django has some advantages over TG2: </p>
<ul>
<li>Larger, community, more active IRC channel</li>
<li>more re-usable app-components</li>
<li>a bit more developed documentation</li>
</ul>
<p>All of this means that it's a bit easier to get started in Django than TG2, but I personally think the added power and flexibility that you get is worth it. But your needs may always be different. </p>
| 15
|
2009-03-31T22:21:15Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 1,387,123
|
<p>Pros.</p>
<ul>
<li>SQLAlchemy > django ORM</li>
<li>Multiple template languages out of the box (genshi,mako,jinja2)</li>
<li>more WSGI friendly</li>
<li>Object Dispatch > routes > regexp routing. You can get the first 2 with TG2</li>
<li>Almost all components are optional you can keep the core and use any ORM, template, auth library, etc.</li>
<li>Sprox > django forms</li>
</ul>
<p>Cons.</p>
<ul>
<li>Admin is more basic (no inline objects yet!)</li>
<li>fewer third-party apps</li>
<li>"app" system still in the making</li>
<li>given its modularity you need to read documentation from different sources (SQLAlchemy, Genshi or Mako, repoze.who, Pylons, etc.)</li>
</ul>
| 5
|
2009-09-07T00:40:33Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Can anyone point out the pros and cons of TG2 over Django?
| 640,877
|
<p>Django is my favorite python web framework. I've tried out others like pylons, web2py, nevow and others.</p>
<p>But I've never looked into TurboGears with much enthusiasm.</p>
<p>Now with TG2 out of beta I may give it a try. I'd like to know what are some of the pros and cons compared to Django.</p>
| 8
|
2009-03-12T23:26:35Z
| 1,518,126
|
<p>I was struggling with the same question months ago and decided on <strong>Turbogears 2</strong>, and my reasoning was simple: "<em>I'm new to python, I want to learn it not just for web-projects but as a substitute to php for scripting small helpers</em>"</p>
<p>What I didn't like about Django is that it looks like a "closed platform" to me: the ORM, template system, sessions, etc. are all Django's own.</p>
<p>On the other hand, Turbogears 2 uses already well-known open platforms and just glues them together, much like <a href="http://appfuse.org" rel="nofollow">Appfuse</a> does for Java.</p>
<p>With TurboGears 2 I learn SQLAlchemy, which I can use later for small python scripts, or from the python shell to solve common tasks.</p>
<p>Main drawbacks are the lack of complete documentation and the confusing error messages.</p>
<p>Sometimes you have to search very deep to find simple solutions, and the learning curve is steep, but it pays off long term. The error messages were very confusing to me (coming from more than 10 years in Java development). I lost many hours trying to find an "ascii encode error" when the real problem was a module not being imported.</p>
<p>That's my opinion, just remember I'm new to python and I could be wrong about many things stated here.</p>
| 2
|
2009-10-05T03:10:41Z
|
[
"python",
"django",
"turbogears",
"turbogears2"
] |
Email integration
| 640,970
|
<p>I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response.</p>
<p>My question is, how is this done and what is it called?</p>
<p>Thanks</p>
| 12
|
2009-03-13T00:09:14Z
| 641,004
|
<p><strong>Generally:</strong></p>
<p>1) Set up a dedicated email account for the purpose.</p>
<p>2) Have a program monitor the mailbox (let's say fetchmail, since that's what I do).</p>
<p>3) When an email arrives at the account, fetchmail downloads the email, writes it to disk, and calls a script or program you have written with the email file as an argument.</p>
<p>4) Your script or program parses the email and takes an appropriate action.</p>
<p>The part that's usually mysterious to people is the fetchmail part (#2).</p>
<p><strong>Specifically on Mail Servers</strong> <em>(iff you control the mailserver enough to redirect emails to scripts)</em>:</p>
<p>1-3) Configure an address to be piped to a script you have written.</p>
<p>4) Same as above.</p>
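<p>A minimal sketch of the parsing step, assuming a hypothetical raw message handed to the script; the standard library's <code>email</code> module does the heavy lifting:</p>

```python
from email import message_from_string

# Hypothetical raw email as a mail delivery agent might hand it over
raw = """\
From: [email protected]
To: [email protected]
Subject: Re: New message posted

Thanks, that fixed it for me."""

msg = message_from_string(raw)
print(msg["Subject"])             # identifies the thread to update
print(msg.get_payload().strip())  # the reply text to post
```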
| 7
|
2009-03-13T00:20:12Z
|
[
"python",
"django",
"email",
"email-integration"
] |
Email integration
| 640,970
|
<p>I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response.</p>
<p>My question is, how is this done and what is it called?</p>
<p>Thanks</p>
| 12
|
2009-03-13T00:09:14Z
| 641,020
|
<p>A common tool used for this purpose is <a href="http://en.wikipedia.org/wiki/Procmail" rel="nofollow">procmail</a>.</p>
<p>You need to set up a dedicated email address (which is the "from_email" address in your outgoing email). Then your MTA, such as postfix or qmail, will deliver mail to that address to procmail instead of an actual mailbox.</p>
<p>Procmail can then pass the email on to your python script that can do updates in the app. See <a href="http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/" rel="nofollow">standalone django scripts</a> by James Bennett on how to code python scripts that can work with your app.</p>
| 1
|
2009-03-13T00:27:36Z
|
[
"python",
"django",
"email",
"email-integration"
] |
Email integration
| 640,970
|
<p>I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response.</p>
<p>My question is, how is this done and what is it called?</p>
<p>Thanks</p>
| 12
|
2009-03-13T00:09:14Z
| 641,038
|
<p>To see a working example of how to receive emails in python and process them using django, check this: <a href="http://code.google.com/p/jutda-helpdesk/" rel="nofollow">http://code.google.com/p/jutda-helpdesk/</a></p>
| 1
|
2009-03-13T00:34:48Z
|
[
"python",
"django",
"email",
"email-integration"
] |
Email integration
| 640,970
|
<p>I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response.</p>
<p>My question is, how is this done and what is it called?</p>
<p>Thanks</p>
| 12
|
2009-03-13T00:09:14Z
| 641,062
|
<p>From your tags, I'll assume you're wanting to do this in Django.</p>
<p>There's an app out there called <a href="http://code.google.com/p/jutda-helpdesk/" rel="nofollow">jutda-helpdesk</a> that does exactly what you're looking for using poplib, which means you just have to set up a POP3 compatible email address.</p>
<p>Take a look at their <a href="http://code.google.com/p/jutda-helpdesk/source/browse/trunk/management/commands/get%5Femail.py" rel="nofollow">get_email.py</a> to see how they do it. You just run this script from cron.</p>
| 4
|
2009-03-13T00:41:04Z
|
[
"python",
"django",
"email",
"email-integration"
] |
Email integration
| 640,970
|
<p>I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response.</p>
<p>My question is, how is this done and what is it called?</p>
<p>Thanks</p>
| 12
|
2009-03-13T00:09:14Z
| 641,837
|
<p>This is an area where the Rails world is ahead: <a href="http://guides.rubyonrails.org/action%5Fmailer%5Fbasics.html#receiving-emails" rel="nofollow">Rails has built-in support for receiving emails</a>. The mail server configuration, though, is probably just the same.</p>
| 3
|
2009-03-13T08:36:14Z
|
[
"python",
"django",
"email",
"email-integration"
] |
Email integration
| 640,970
|
<p>I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response.</p>
<p>My question is, how is this done and what is it called?</p>
<p>Thanks</p>
| 12
|
2009-03-13T00:09:14Z
| 1,337,275
|
<p>You should take a look at <a href="http://lamsonproject.org">Lamson</a>; it'll enable you to do what you've described, and more besides.</p>
| 5
|
2009-08-26T20:27:59Z
|
[
"python",
"django",
"email",
"email-integration"
] |
Unable to set iPython to use 2.6.1 Python
| 641,000
|
<p>I have installed the newest iPython on my Mac. However, it uses Python version 2.5.1.</p>
<p>I installed the Python 2.6.1 by MacPython package at <a href="http://www.python.org/download/" rel="nofollow">here</a>. </p>
<p><strong>How can I make my iPython to use Python 2.6.1?</strong></p>
<p>I am not sure where exactly the MacPython package installed the newest Python.
The <strong>newest Python should somehow be put on the PATH</strong> so that iPython can use it.</p>
<p><strong>[edit]</strong> after the first answer</p>
<p>I run the following command</p>
<pre><code>$ln -s python python2.6
</code></pre>
<p>I cannot open python2.6 by </p>
<pre><code>python
</code></pre>
| 1
|
2009-03-13T00:19:37Z
| 641,367
|
<p>You should have a python, python2.5 and python2.6, is that correct? If you want to use python2.6 system-wide, the simple solution would be to symlink (ln -s ...) python to python2.6 instead of python2.5.</p>
| 1
|
2009-03-13T03:25:37Z
|
[
"python",
"ipython"
] |
Unable to set iPython to use 2.6.1 Python
| 641,000
|
<p>I have installed the newest iPython on my Mac. However, it uses Python version 2.5.1.</p>
<p>I installed the Python 2.6.1 by MacPython package at <a href="http://www.python.org/download/" rel="nofollow">here</a>. </p>
<p><strong>How can I make my iPython to use Python 2.6.1?</strong></p>
<p>I am not sure where exactly the MacPython package installed the newest Python.
The <strong>newest Python should somehow be put on the PATH</strong> so that iPython can use it.</p>
<p><strong>[edit]</strong> after the first answer</p>
<p>I run the following command</p>
<pre><code>$ln -s python python2.6
</code></pre>
<p>I cannot open python2.6 by </p>
<pre><code>python
</code></pre>
| 1
|
2009-03-13T00:19:37Z
| 2,422,942
|
<p>A good way to get it to work is <a href="http://www.jamesmurty.com/2009/06/05/ipython-with-python-2_6-osx-leopard/" rel="nofollow">here.</a> I needed to restart my terminal before ipython pointed to python2.6. Note the latest ipython distribution is 0.10, not 0.9.</p>
| 4
|
2010-03-11T06:08:52Z
|
[
"python",
"ipython"
] |
Django, Python Loop Logic Problem
| 641,145
|
<p>This works, partially. More information may be needed, however, I thought I would post to get advice on anything obvious that might be wrong here.</p>
<p>The problem is that if activity.get_cost() returns a <code>False</code> value, the function seems to exit entirely, returning <code>None</code>.</p>
<p>What I'd like it to do, of course, is accumulate <code>cost</code> Decimal values in the <code>costs = []</code> and return their sum. Simple, I would have thought... but my novice Python skills are apparently missing something.</p>
<p>More information provided on request. Thank you.</p>
<pre><code>def get_jobrecord_cost(self):
costs = []
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs.append(cost)
if len(costs):
return sum(costs)
else:
return False
</code></pre>
| 1
|
2009-03-13T01:21:36Z
| 641,157
|
<p>I think you can simplify this with:</p>
<pre><code>def get_jobrecord_cost(self):
costs = 0
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs += cost
return costs
</code></pre>
| 2
|
2009-03-13T01:26:41Z
|
[
"python",
"django-models"
] |
Django, Python Loop Logic Problem
| 641,145
|
<p>This works, partially. More information may be needed, however, I thought I would post to get advice on anything obvious that might be wrong here.</p>
<p>The problem is that if activity.get_cost() returns a <code>False</code> value, the function seems to exit entirely, returning <code>None</code>.</p>
<p>What I'd like it to do, of course, is accumulate <code>cost</code> Decimal values in the <code>costs = []</code> and return their sum. Simple, I would have thought... but my novice Python skills are apparently missing something.</p>
<p>More information provided on request. Thank you.</p>
<pre><code>def get_jobrecord_cost(self):
costs = []
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs.append(cost)
if len(costs):
return sum(costs)
else:
return False
</code></pre>
| 1
|
2009-03-13T01:21:36Z
| 641,178
|
<p>I notice you're returning False if all the costs were None; I don't know if there's a specific reason for that, but it does make it a little bit harder to write. If that's not a requirement, you could write it like this:</p>
<pre><code>def get_jobrecord_cost(self):
costs = [activity.get_cost() or 0 for activity in self.activity_set.all()]
return sum(costs)
</code></pre>
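<p>To make the <code>x or 0</code> idiom concrete, with made-up values:</p>

```python
costs_raw = [5, None, 3, False, 2]
# `c or 0` maps every falsy cost (None, False, 0) to 0 before summing
costs = [c or 0 for c in costs_raw]
print(sum(costs))  # 10
```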
| 3
|
2009-03-13T01:39:07Z
|
[
"python",
"django-models"
] |
Django, Python Loop Logic Problem
| 641,145
|
<p>This works, partially. More information may be needed, however, I thought I would post to get advice on anything obvious that might be wrong here.</p>
<p>The problem is that if activity.get_cost() returns a <code>False</code> value, the function seems to exit entirely, returning <code>None</code>.</p>
<p>What I'd like it to do, of course, is accumulate <code>cost</code> Decimal values in the <code>costs = []</code> and return their sum. Simple, I would have thought... but my novice Python skills are apparently missing something.</p>
<p>More information provided on request. Thank you.</p>
<pre><code>def get_jobrecord_cost(self):
costs = []
for activity in self.activity_set.all():
cost = activity.get_cost()
if cost:
costs.append(cost)
if len(costs):
return sum(costs)
else:
return False
</code></pre>
| 1
|
2009-03-13T01:21:36Z
| 641,187
|
<pre><code>def get_jobrecord_cost(self):
    return sum(activity.get_cost() or 0 for activity in self.activity_set.all())
</code></pre>
<p>Depending on how much data you're dealing with, this version is just a bit more efficient than DNS's because it uses a generator comprehension and doesn't require loading up a whole list into memory. It's functionally equivalent to grieve's except the looping happens in C. Note that this doesn't necessarily mean this is <em>better</em>. This approach is obviously more dense and can be less readable.</p>
| 1
|
2009-03-13T01:43:49Z
|
[
"python",
"django-models"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is a module-level multiprocessing-aware logger, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 641,488
|
<p>The only way to deal with this non-intrusively is to:</p>
<ol>
<li>Spawn each worker process such that its log goes to <strong>a different file descriptor</strong> (to disk or to a pipe). Ideally, all log entries should be timestamped.</li>
<li>Your controller process can then do <em>one</em> of the following:
<ul>
<li><strong>If using disk files:</strong> Coalesce the log files at the end of the run, sorted by timestamp</li>
<li><strong>If using pipes (recommended):</strong> Coalesce log entries on-the-fly from all pipes, into a central log file. (E.g., Periodically <a href="http://docs.python.org/library/select.html"><code>select</code></a> from the pipes' file descriptors, perform merge-sort on the available log entries, and flush to centralized log. Repeat.)</li>
</ul></li>
</ol>
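<p>Worth noting: later Python versions (3.2+) ship <code>logging.handlers.QueueHandler</code>, which implements essentially this coalescing idea. Below is a single-process sketch of the pattern; in a real setup the queue would be a <code>multiprocessing.Queue</code> shared with the workers and drained by the parent into one log file:</p>

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue()  # stand-in for a shared multiprocessing.Queue

# Each worker logs through a QueueHandler instead of writing directly
worker_log = logging.getLogger("worker")
worker_log.setLevel(logging.INFO)
worker_log.addHandler(logging.handlers.QueueHandler(log_queue))

worker_log.info("entry from a worker")

# The parent process would drain the queue and write one merged log
record = log_queue.get_nowait()
print(record.getMessage())
```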
| 40
|
2009-03-13T04:39:42Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is a module-level multiprocessing-aware logger, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 641,489
|
<p>Just publish your instance of the logger somewhere. That way, the other modules and clients can use your API to get the logger without having to <code>import multiprocessing</code>.</p>
| 3
|
2009-03-13T04:40:00Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 641,490
|
<p>One alternative is to write the multiprocessing logging to a known file and register an <code>atexit</code> handler that joins those processes and reads the log back on stderr; however, you won't get a real-time flow of the output messages on stderr that way.</p>
| 0
|
2009-03-13T04:40:17Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 642,221
|
<p>Yet another alternative might be the various non-file-based logging handlers in the <a href="http://docs.python.org/library/logging.html" rel="nofollow"><code>logging</code> package</a>: </p>
<ul>
<li><code>SocketHandler</code></li>
<li><code>DatagramHandler</code></li>
<li><code>SyslogHandler</code></li>
</ul>
<p>(and others)</p>
<p>This way, you could easily have a logging daemon somewhere that you could write to safely and would handle the results correctly. (E.g., a simple socket server that just unpickles the message and emits it to its own rotating file handler.)</p>
<p>The <code>SyslogHandler</code> would take care of this for you, too. Of course, you could use your own instance of <code>syslog</code>, not the system one.</p>
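As a rough sketch of the receiving side: the stock `SocketHandler` sends each record as a 4-byte big-endian length prefix followed by a pickled attribute dict, which the daemon can turn back into a `LogRecord`. The framing is shown in-memory here; a real daemon would read these bytes from the socket.

```python
import logging
import pickle
import struct

def frame(record):
    # mimic SocketHandler's wire format: length prefix + pickled record dict
    data = pickle.dumps(record.__dict__, 1)
    return struct.pack('>L', len(data)) + data

def unframe(payload):
    # what a receiving daemon does with one framed record
    (length,) = struct.unpack('>L', payload[:4])
    return logging.makeLogRecord(pickle.loads(payload[4:4 + length]))

rec = logging.LogRecord('demo', logging.INFO, 'app.py', 1, 'hello %s', ('world',), None)
print(unframe(frame(rec)).getMessage())
```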
| 16
|
2009-03-13T11:19:29Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 894,284
|
<p>I just now wrote a log handler of my own that just feeds everything to the parent process via a pipe. I've only been testing it for ten minutes but it seems to work pretty well. </p>
<p>(<strong>Note:</strong> This is hardcoded to <code>RotatingFileHandler</code>, which is my own use case.)</p>
<hr>
<h2>Update: Implementation!</h2>
<p>This now uses a queue for correct handling of concurrency, and also recovers from errors correctly. I've now been using this in production for several months, and the current version below works without issue.</p>
<pre><code>from logging.handlers import RotatingFileHandler
import multiprocessing, threading, logging, sys, traceback

class MultiProcessingLog(logging.Handler):
    def __init__(self, name, mode, maxsize, rotate):
        logging.Handler.__init__(self)

        self._handler = RotatingFileHandler(name, mode, maxsize, rotate)
        self.queue = multiprocessing.Queue(-1)

        t = threading.Thread(target=self.receive)
        t.daemon = True
        t.start()

    def setFormatter(self, fmt):
        logging.Handler.setFormatter(self, fmt)
        self._handler.setFormatter(fmt)

    def receive(self):
        while True:
            try:
                record = self.queue.get()
                self._handler.emit(record)
            except (KeyboardInterrupt, SystemExit):
                raise
            except EOFError:
                break
            except:
                traceback.print_exc(file=sys.stderr)

    def send(self, s):
        self.queue.put_nowait(s)

    def _format_record(self, record):
        # ensure that exc_info and args
        # have been stringified. Removes any chance of
        # unpickleable things inside and possibly reduces
        # message size sent over the pipe
        if record.args:
            record.msg = record.msg % record.args
            record.args = None
        if record.exc_info:
            dummy = self.format(record)
            record.exc_info = None

        return record

    def emit(self, record):
        try:
            s = self._format_record(record)
            self.send(s)
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)

    def close(self):
        self._handler.close()
        logging.Handler.close(self)
</code></pre>
| 65
|
2009-05-21T18:10:33Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 959,727
|
<p>I liked zzzeek's answer. I would just substitute a Queue for the Pipe, since if multiple threads/processes use the same pipe end to generate log messages, they will get garbled.</p>
| 3
|
2009-06-06T13:59:52Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 1,009,560
|
<p>I also like zzzeek's answer but Andre is correct that a queue is required to prevent garbling. I had some luck with the pipe, but did see garbling which is somewhat expected. Implementing it turned out to be harder than I thought, particularly due to running on Windows, where there are some additional restrictions about global variables and stuff (see: <a href="http://stackoverflow.com/questions/765129/hows-python-multiprocessing-implemented-on-windows">How's Python Multiprocessing Implemented on Windows?</a>)</p>
<p>But, I finally got it working. This example probably isn't perfect, so comments and suggestions are welcome. It also does not support setting the formatter or anything other than the root logger. Basically, you have to reinit the logger in each of the pool processes with the queue and set up the other attributes on the logger.</p>
<p>Again, any suggestions on how to make the code better are welcome. I certainly don't know all the Python tricks yet :-)</p>
<pre><code>import multiprocessing, logging, sys, re, os, StringIO, threading, time, Queue

class MultiProcessingLogHandler(logging.Handler):
    def __init__(self, handler, queue, child=False):
        logging.Handler.__init__(self)

        self._handler = handler
        self.queue = queue

        # we only want one of the loggers to be pulling from the queue.
        # If there is a way to do this without needing to be passed this
        # information, that would be great!
        if child == False:
            self.shutdown = False
            self.polltime = 1
            t = threading.Thread(target=self.receive)
            t.daemon = True
            t.start()

    def setFormatter(self, fmt):
        logging.Handler.setFormatter(self, fmt)
        self._handler.setFormatter(fmt)

    def receive(self):
        #print "receive on"
        while (self.shutdown == False) or (self.queue.empty() == False):
            # so we block for a short period of time so that we can
            # check for the shutdown cases.
            try:
                record = self.queue.get(True, self.polltime)
                self._handler.emit(record)
            except Queue.Empty, e:
                pass

    def send(self, s):
        # send just puts it in the queue for the server to retrieve
        self.queue.put(s)

    def _format_record(self, record):
        ei = record.exc_info
        if ei:
            dummy = self.format(record)  # just to get traceback text into record.exc_text
            record.exc_info = None  # to avoid Unpickleable error

        return record

    def emit(self, record):
        try:
            s = self._format_record(record)
            self.send(s)
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)

    def close(self):
        time.sleep(self.polltime + 1)  # give some time for messages to enter the queue.
        self.shutdown = True
        time.sleep(self.polltime + 1)  # give some time for the server to time out and see the shutdown

    def __del__(self):
        self.close()  # hopefully this aids in orderly shutdown when things are going poorly.

def f(x):
    # just a logging command...
    logging.critical('function number: ' + str(x))
    # to make some calls take longer than others, so the output is "jumbled" as real MP programs are.
    time.sleep(x % 3)

def initPool(queue, level):
    """
    This causes the logging module to be initialized with the necessary info
    in pool threads to work correctly.
    """
    logging.getLogger('').addHandler(MultiProcessingLogHandler(logging.StreamHandler(), queue, child=True))
    logging.getLogger('').setLevel(level)

if __name__ == '__main__':
    stream = StringIO.StringIO()
    logQueue = multiprocessing.Queue(100)
    handler = MultiProcessingLogHandler(logging.StreamHandler(stream), logQueue)
    logging.getLogger('').addHandler(handler)
    logging.getLogger('').setLevel(logging.DEBUG)

    logging.debug('starting main')

    # when building the pool on a Windows machine we also have to init the logger
    # in all the instances with the queue and the level of logging.
    pool = multiprocessing.Pool(processes=10, initializer=initPool, initargs=[logQueue, logging.getLogger('').getEffectiveLevel()])  # start worker processes
    pool.map(f, range(0, 50))
    pool.close()

    logging.debug('done')
    logging.shutdown()
    print "stream output is:"
    print stream.getvalue()
</code></pre>
| 6
|
2009-06-17T21:15:15Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 3,253,442
|
<p>A variant of the others that keeps the logging and queue thread separate.</p>
<pre><code>"""sample code for logging in subprocesses using multiprocessing

* Little handler magic - The main process uses loggers and handlers as normal.
* Only a simple handler is needed in the subprocess that feeds the queue.
* Original logger name from subprocess is preserved when logged in main
  process.
* As in the other implementations, a thread reads the queue and calls the
  handlers. Except in this implementation, the thread is defined outside of a
  handler, which makes the logger definitions simpler.
* Works with multiple handlers. If the logger in the main process defines
  multiple handlers, they will all be fed records generated by the
  subprocesses loggers.

tested with Python 2.5 and 2.6 on Linux and Windows
"""

import os
import sys
import time
import traceback
import multiprocessing, threading, logging, sys

DEFAULT_LEVEL = logging.DEBUG

formatter = logging.Formatter("%(levelname)s: %(asctime)s - %(name)s - %(process)s - %(message)s")

class SubProcessLogHandler(logging.Handler):
    """handler used by subprocesses

    It simply puts items on a Queue for the main process to log.
    """
    def __init__(self, queue):
        logging.Handler.__init__(self)
        self.queue = queue

    def emit(self, record):
        self.queue.put(record)

class LogQueueReader(threading.Thread):
    """thread to write subprocesses log records to main process log

    This thread reads the records written by subprocesses and writes them to
    the handlers defined in the main process's handlers.
    """
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.daemon = True

    def run(self):
        """read from the queue and write to the log handlers

        The logging documentation says logging is thread safe, so there
        shouldn't be contention between normal logging (from the main
        process) and this thread.

        Note that we're using the name of the original logger.
        """
        # Thanks Mike for the error checking code.
        while True:
            try:
                record = self.queue.get()
                # get the logger for this record
                logger = logging.getLogger(record.name)
                logger.callHandlers(record)
            except (KeyboardInterrupt, SystemExit):
                raise
            except EOFError:
                break
            except:
                traceback.print_exc(file=sys.stderr)

class LoggingProcess(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue

    def _setupLogger(self):
        # create the logger to use.
        logger = logging.getLogger('test.subprocess')
        # The only handler desired is the SubProcessLogHandler. If any others
        # exist, remove them. In this case, on Unix and Linux the StreamHandler
        # will be inherited.
        for handler in logger.handlers:
            # just a check for my sanity
            assert not isinstance(handler, SubProcessLogHandler)
            logger.removeHandler(handler)
        # add the handler
        handler = SubProcessLogHandler(self.queue)
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        # On Windows, the level will not be inherited. Also, we could just
        # set the level to log everything here and filter it in the main
        # process handlers. For now, just set it from the global default.
        logger.setLevel(DEFAULT_LEVEL)
        self.logger = logger

    def run(self):
        self._setupLogger()
        logger = self.logger
        # and here goes the logging
        p = multiprocessing.current_process()
        logger.info('hello from process %s with pid %s' % (p.name, p.pid))

if __name__ == '__main__':
    # queue used by the subprocess loggers
    queue = multiprocessing.Queue()
    # Just a normal logger
    logger = logging.getLogger('test')
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(DEFAULT_LEVEL)
    logger.info('hello from the main process')
    # This thread will read from the subprocesses and write to the main log's
    # handlers.
    log_queue_reader = LogQueueReader(queue)
    log_queue_reader.start()
    # create the processes.
    for i in range(10):
        p = LoggingProcess(queue)
        p.start()
    # The way I read the multiprocessing warning about Queue, joining a
    # process before it has finished feeding the Queue can cause a deadlock.
    # Also, Queue.empty() is not reliable, so just make sure all processes
    # are finished.
    # active_children joins subprocesses when they're finished.
    while multiprocessing.active_children():
        time.sleep(.1)
</code></pre>
| 11
|
2010-07-15T07:41:56Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 3,412,173
|
<p>I have a solution that's similar to ironhacker's, except that I use <code>logging.exception</code> in some of my code and found that I needed to format the exception before passing it back over the Queue, since tracebacks aren't pickleable:</p>
<pre><code>import logging, traceback, cStringIO

class QueueHandler(logging.Handler):
    def __init__(self, queue):
        logging.Handler.__init__(self)
        self.queue = queue

    def emit(self, record):
        if record.exc_info:
            # can't pass exc_info across processes so just format now
            record.exc_text = self.formatException(record.exc_info)
            record.exc_info = None
        self.queue.put(record)

    def formatException(self, ei):
        sio = cStringIO.StringIO()
        traceback.print_exception(ei[0], ei[1], ei[2], None, sio)
        s = sio.getvalue()
        sio.close()
        if s[-1] == "\n":
            s = s[:-1]
        return s
</code></pre>
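The underlying issue can be checked directly: a traceback object refuses to pickle, while its formatted text pickles fine, which is why the handler above stringifies `exc_info` before queueing (quick check shown in modern Python syntax):

```python
import pickle
import traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    tb = exc.__traceback__
    # the formatted text is a plain string, so it crosses processes safely
    text = ''.join(traceback.format_exception(type(exc), exc, tb))

try:
    pickle.dumps(tb)
    picklable = True
except TypeError:
    # traceback objects cannot be pickled
    picklable = False

print(picklable, 'ZeroDivisionError' in text)
```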
| 1
|
2010-08-05T06:11:37Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 19,397,561
|
<p>All current solutions are too coupled to the logging configuration by using a handler. My solution has the following architecture and features:</p>
<ul>
<li>You can use <em>any</em> logging configuration you want </li>
<li>Logging is done in a daemon thread</li>
<li>Safe shutdown of the daemon by using a context manager</li>
<li>Communication to the logging thread is done by <code>multiprocessing.Queue</code></li>
<li>In subprocesses, <code>logging.Logger</code> (and already defined instances) are patched to send <em>all</em> records to the queue</li>
<li><em>New</em>: format traceback and message before sending to queue to prevent pickling errors</li>
</ul>
<p>Code with usage example and output can be found at the following Gist: <a href="https://gist.github.com/schlamar/7003737">https://gist.github.com/schlamar/7003737</a></p>
| 9
|
2013-10-16T07:31:34Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 22,365,834
|
<p>How about delegating all the logging to another process that reads all log entries from a Queue?</p>
<pre><code>import logging
import multiprocessing

LOG_QUEUE = multiprocessing.JoinableQueue()

class CentralLogger(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue
        self.log = logging.getLogger('some_config')
        self.log.info("Started Central Logging process")

    def run(self):
        while True:
            log_level, message = self.queue.get()
            if log_level is None:
                self.log.info("Shutting down Central Logging process")
                break
            else:
                self.log.log(log_level, message)

central_logger_process = CentralLogger(LOG_QUEUE)
central_logger_process.start()
</code></pre>
<p>Simply share LOG_QUEUE via any of the multiprocess mechanisms or even inheritance and it all works out fine!</p>
| 2
|
2014-03-12T23:13:44Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 29,277,969
|
<p>If you have deadlocks occurring in a combination of locks, threads and forks in the <code>logging</code> module, that is reported in <a href="http://bugs.python.org/issue6721" rel="nofollow">bug report 6721</a> (see also <a href="http://stackoverflow.com/questions/24509650">related SO question</a>).</p>
<p>There is a small fixup solution posted <a href="https://github.com/google/python-atfork/blob/master/atfork/stdlib_fixer.py" rel="nofollow">here</a>.</p>
<p>However, that will just fix any potential deadlocks in <code>logging</code>. It will not prevent output from getting garbled. See the other answers presented here.</p>
| 0
|
2015-03-26T12:04:40Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 32,065,395
|
<p>The python logging cookbook has two complete examples here: <a href="https://docs.python.org/3/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes" rel="nofollow">https://docs.python.org/3/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes</a></p>
<p>It uses <code>QueueHandler</code>, which is new in python 3.2 but easy to copy into your own code (as I did myself in python 2.7) from: <a href="https://gist.github.com/vsajip/591589" rel="nofollow">https://gist.github.com/vsajip/591589</a></p>
<p>Each process puts its logging on the <code>Queue</code>, and then a <code>listener</code> thread or process (one example is provided for each) picks those up and writes them all to a file - no risk of corruption or garbling.</p>
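The same pattern can be demonstrated in a single process with a plain `queue.Queue` (Python 3.2+); the `multiprocessing.Queue` version works identically, just across process boundaries. The capturing handler here is a stand-in for the file handler a real listener would own.

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue()
captured = []

class ListCapture(logging.Handler):
    # stand-in for the single file handler the listener would normally own
    def emit(self, record):
        captured.append(record.getMessage())

# the listener drains the queue in a background thread
listener = QueueListener(log_queue, ListCapture())
listener.start()

logger = logging.getLogger('queue-demo')
logger.setLevel(logging.INFO)
logger.addHandler(QueueHandler(log_queue))  # producers only touch the queue
logger.info('routed through the queue')

listener.stop()  # flushes remaining records and joins the listener thread
print(captured)
```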
| 4
|
2015-08-18T06:47:17Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 34,964,369
|
<p>Below is another solution with a focus on simplicity, for anyone else (like me) who gets here from Google. Logging should be easy! Requires Python 3.2 or higher.</p>
<pre><code>import multiprocessing
import logging
from logging.handlers import QueueHandler, QueueListener
import time
import random

def f(i):
    time.sleep(random.uniform(.01, .05))
    logging.info('function called with {} in worker thread.'.format(i))
    time.sleep(random.uniform(.01, .05))
    return i

def worker_init(q):
    # all records from worker processes go to qh and then into q
    qh = QueueHandler(q)
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    logger.addHandler(qh)

def logger_init():
    q = multiprocessing.Queue()
    # this is the handler for all log records
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(levelname)s: %(asctime)s - %(process)s - %(message)s"))

    # ql gets records from the queue and sends them to the handler
    ql = QueueListener(q, handler)
    ql.start()

    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    # add the handler to the logger so records from this process are handled
    logger.addHandler(handler)

    return ql, q

def main():
    q_listener, q = logger_init()

    logging.info('hello from main thread')
    pool = multiprocessing.Pool(4, worker_init, [q])
    for result in pool.map(f, range(10)):
        pass
    pool.close()
    pool.join()
    q_listener.stop()

if __name__ == '__main__':
    main()
</code></pre>
| 2
|
2016-01-23T13:59:47Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is module-level multiprocessing-aware log, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 37,561,518
|
<p>Below is a class that can be used in a Windows environment; it requires ActivePython.
You can also subclass other logging handlers (StreamHandler, etc.) the same way.</p>
<pre><code>class SyncronizedFileHandler(logging.FileHandler):
    MUTEX_NAME = 'logging_mutex'

    def __init__(self, *args, **kwargs):
        self.mutex = win32event.CreateMutex(None, False, self.MUTEX_NAME)
        return super(SyncronizedFileHandler, self).__init__(*args, **kwargs)

    def emit(self, *args, **kwargs):
        try:
            win32event.WaitForSingleObject(self.mutex, win32event.INFINITE)
            ret = super(SyncronizedFileHandler, self).emit(*args, **kwargs)
        finally:
            win32event.ReleaseMutex(self.mutex)
        return ret
</code></pre>
<p>And here is an example that demonstrates usage:</p>
<pre><code>import logging
import random , time , os , sys , datetime
from string import letters
import win32api , win32event
from multiprocessing import Pool
def f(i):
time.sleep(random.randint(0,10) * 0.1)
ch = random.choice(letters)
logging.info( ch * 30)
def init_logging():
'''
initialize the loggers
'''
formatter = logging.Formatter("%(levelname)s - %(process)d - %(asctime)s - %(filename)s - %(lineno)d - %(message)s")
logger = logging.getLogger()
logger.setLevel(logging.INFO)
file_handler = SyncronizedFileHandler(sys.argv[1])
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
#must be called in the parent and in every worker process
init_logging()
if __name__ == '__main__':
#multiprocessing stuff
pool = Pool(processes=10)
imap_result = pool.imap(f , range(30))
for i , _ in enumerate(imap_result):
pass
</code></pre>
| 1
|
2016-06-01T06:57:48Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is a module-level multiprocessing-aware logger, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 39,355,401
|
<p>Not really a full answer, but to chip in my 2 cents: I think it would be very useful for people browsing this thread to know that QueueHandler and QueueListener are made available on Python 2.7 through the logutils module. </p>
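<p>For reference, here is a minimal sketch of the pattern those classes enable (shown with the Python 3 standard-library <code>logging.handlers</code> versions; on Python 2.7 the same classes are imported from the third-party <code>logutils</code> package instead):</p>

```python
import io
import logging
import logging.handlers
import queue

# Records go into a plain queue via QueueHandler...
log_queue = queue.Queue()
logger = logging.getLogger("queue_demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# ...and a QueueListener drains the queue into the real handler(s).
stream = io.StringIO()
listener = logging.handlers.QueueListener(
    log_queue, logging.StreamHandler(stream))
listener.start()

logger.info("hello through the queue")
listener.stop()  # flushes any remaining records

print(stream.getvalue().strip())  # hello through the queue
```

<p>In a real multiprocessing setup you would use a <code>multiprocessing.Queue</code> and attach the <code>QueueHandler</code> inside each worker process, as the queue-based answers above describe.</p>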
| 0
|
2016-09-06T18:16:11Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is a module-level multiprocessing-aware logger, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 39,475,286
|
<p>Here's my simple hack/workaround... not the most comprehensive, but easily modifiable and, I think, simpler to read and understand than any of the other answers I found before writing this:</p>
<pre><code>import logging
import multiprocessing
class FakeLogger(object):
def __init__(self, q):
self.q = q
def info(self, item):
self.q.put('INFO - {}'.format(item))
def debug(self, item):
self.q.put('DEBUG - {}'.format(item))
def critical(self, item):
self.q.put('CRITICAL - {}'.format(item))
def warning(self, item):
self.q.put('WARNING - {}'.format(item))
def some_other_func_that_gets_logger_and_logs(num):
# notice the name gets discarded
# of course you can easily add this to your FakeLogger class
local_logger = logging.getLogger('local')
local_logger.info('Hey I am logging this: {} and working on it to make this {}!'.format(num, num*2))
local_logger.debug('hmm, something may need debugging here')
return num*2
def func_to_parallelize(data_chunk):
# unpack our args
the_num, logger_q = data_chunk
# since we're now in a new process, let's monkeypatch the logging module
logging.getLogger = lambda name=None: FakeLogger(logger_q)
# now do the actual work that happens to log stuff too
new_num = some_other_func_that_gets_logger_and_logs(the_num)
return (the_num, new_num)
if __name__ == '__main__':
multiprocessing.freeze_support()
m = multiprocessing.Manager()
logger_q = m.Queue()
# we have to pass our data to be parallel-processed
# we also need to pass the Queue object so we can retrieve the logs
parallelable_data = [(1, logger_q), (2, logger_q)]
# set up a pool of processes so we can take advantage of multiple CPU cores
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(processes=pool_size, maxtasksperchild=4)
worker_output = pool.map(func_to_parallelize, parallelable_data)
pool.close() # no more tasks
pool.join() # wrap up current tasks
# get the contents of our FakeLogger object
while not logger_q.empty():
print logger_q.get()
print 'worker output contained: {}'.format(worker_output)
</code></pre>
| 0
|
2016-09-13T16:55:19Z
|
[
"python",
"logging",
"multiprocessing"
] |
How should I log while using multiprocessing in Python?
| 641,420
|
<p>Right now I have a central module in a framework that spawns multiple processes using the Python 2.6 <a href="http://docs.python.org/library/multiprocessing.html?#module-multiprocessing"><code>multiprocessing</code> module</a>. Because it uses <code>multiprocessing</code>, there is a module-level multiprocessing-aware logger, <code>LOG = multiprocessing.get_logger()</code>. Per <a href="http://docs.python.org/library/multiprocessing.html#logging">the docs</a>, this logger has process-shared locks so that you don't garble things up in <code>sys.stderr</code> (or whatever filehandle) by having multiple processes writing to it simultaneously.</p>
<p>The issue I have now is that the other modules in the framework are not multiprocessing-aware. The way I see it, I need to make all dependencies on this central module use multiprocessing-aware logging. That's annoying <em>within</em> the framework, let alone for all clients of the framework. Are there alternatives I'm not thinking of?</p>
| 123
|
2009-03-13T04:02:31Z
| 40,126,988
|
<p>Since we can represent multiprocess logging as many publishers and one subscriber (listener), using <a href="http://zguide.zeromq.org/page:all" rel="nofollow">ZeroMQ</a> to implement PUB-SUB messaging is indeed an option. </p>
<p>Moreover, the <a href="http://pyzmq.readthedocs.io" rel="nofollow">PyZMQ</a> module (the Python bindings for ZMQ) implements <a href="http://pyzmq.readthedocs.io/en/latest/api/zmq.log.handlers.html#zmq.log.handlers.PUBHandler" rel="nofollow">PUBHandler</a>, an object for publishing logging messages over a zmq.PUB socket.</p>
<p>There's a <a href="https://pyfunc.blogspot.co.il/2013/08/centralized-logging-for-distributed.html" rel="nofollow">solution on the web</a> for centralized logging from a distributed application using PyZMQ and PUBHandler, which can easily be adapted for working locally with multiple publishing processes.</p>
<pre><code>formatters = {
logging.DEBUG: logging.Formatter("[%(name)s] %(message)s"),
logging.INFO: logging.Formatter("[%(name)s] %(message)s"),
logging.WARN: logging.Formatter("[%(name)s] %(message)s"),
logging.ERROR: logging.Formatter("[%(name)s] %(message)s"),
logging.CRITICAL: logging.Formatter("[%(name)s] %(message)s")
}
# This one will be used by publishing processes
class PUBLogger:
def __init__(self, host, port=config.PUBSUB_LOGGER_PORT):
self._logger = logging.getLogger(__name__)
self._logger.setLevel(logging.DEBUG)
self.ctx = zmq.Context()
self.pub = self.ctx.socket(zmq.PUB)
self.pub.connect('tcp://{0}:{1}'.format(socket.gethostbyname(host), port))
self._handler = PUBHandler(self.pub)
self._handler.formatters = formatters
self._logger.addHandler(self._handler)
@property
def logger(self):
return self._logger
# This one will be used by listener process
class SUBLogger:
def __init__(self, ip, output_dir="", port=config.PUBSUB_LOGGER_PORT):
self.output_dir = output_dir
self._logger = logging.getLogger()
self._logger.setLevel(logging.DEBUG)
self.ctx = zmq.Context()
self._sub = self.ctx.socket(zmq.SUB)
self._sub.bind('tcp://*:{1}'.format(ip, port))
self._sub.setsockopt(zmq.SUBSCRIBE, "")
handler = handlers.RotatingFileHandler(os.path.join(output_dir, "client_debug.log"), "w", 100 * 1024 * 1024, 10)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s;%(levelname)s - %(message)s")
handler.setFormatter(formatter)
self._logger.addHandler(handler)
@property
def sub(self):
return self._sub
@property
def logger(self):
return self._logger
# And that's the way we actually run things:
# Listener process will forever listen on SUB socket for incoming messages
def run_sub_logger(ip, event):
sub_logger = SUBLogger(ip)
while not event.is_set():
try:
topic, message = sub_logger.sub.recv_multipart(flags=zmq.NOBLOCK)
log_msg = getattr(logging, topic.lower())
log_msg(message)
except zmq.ZMQError as zmq_error:
if zmq_error.errno == zmq.EAGAIN:
pass
# Publisher processes loggers should be initialized as follows:
class Publisher:
def __init__(self, stop_event, proc_id):
self.stop_event = stop_event
self.proc_id = proc_id
self._logger = pub_logger.PUBLogger('127.0.0.1').logger
def run(self):
        self._logger.info("{0} - Sending message".format(self.proc_id))
def run_worker(event, proc_id):
worker = Publisher(event, proc_id)
worker.run()
# Starting the subscriber process so we won't lose the publisher's messages
sub_logger_process = Process(target=run_sub_logger,
                             args=('127.0.0.1', stop_event,))
sub_logger_process.start()
#Starting publisher processes
for i in range(MAX_WORKERS_PER_CLIENT):
processes.append(Process(target=run_worker,
args=(stop_event, i,)))
for p in processes:
p.start()
</code></pre>
| 0
|
2016-10-19T09:11:43Z
|
[
"python",
"logging",
"multiprocessing"
] |
"Pythonic" equivalent for handling switch and multiple string compares
| 641,469
|
<p>Alright, so my title sucked. An example works better:</p>
<pre><code>input = 'check yahoo.com'
</code></pre>
<p>I want to parse input, using the first word as the "command", and the rest of the string as a parameter. Here's the simple version of how my non-Pythonic mind is coding it:</p>
<pre><code>if len(input) > 0:
a = input.split(' ')
if a[0] == 'check':
if len(a) > 1:
do_check(a[1])
elif a[0] == 'search':
if len(a) > 1:
do_search(a[1])
</code></pre>
<p>I like Python because it makes normally complicated things into rather simple things. I'm not too experienced with it, and I am fairly sure there's a much better way to do these things... some way more pythonic. I've seen some examples of people replacing switch statements with dicts and lambda functions, while other people simply recommended if..else nests.</p>
| 8
|
2009-03-13T04:31:55Z
| 641,485
|
<pre><code>dispatch = {
'check': do_check,
'search': do_search,
}
cmd, _, arg = input.partition(' ')
if cmd in dispatch:
dispatch[cmd](arg)
else:
do_default(cmd, arg)
</code></pre>
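<p>For a self-contained illustration of the same dispatch-dict pattern (the <code>do_*</code> handlers here are just stand-ins for whatever the commands actually do):</p>

```python
def do_check(arg):
    return "checking " + arg

def do_search(arg):
    return "searching " + arg

def do_default(cmd, arg):
    return "unknown command: " + cmd

dispatch = {
    'check': do_check,
    'search': do_search,
}

def handle(line):
    # str.partition never raises: with no separator it returns (line, '', '')
    cmd, _, arg = line.partition(' ')
    if cmd in dispatch:
        return dispatch[cmd](arg)
    return do_default(cmd, arg)

print(handle('check yahoo.com'))    # checking yahoo.com
print(handle('search google.com'))  # searching google.com
print(handle('frobnicate x'))       # unknown command: frobnicate
```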
| 31
|
2009-03-13T04:37:58Z
|
[
"conditional",
"python"
] |
"Pythonic" equivalent for handling switch and multiple string compares
| 641,469
|
<p>Alright, so my title sucked. An example works better:</p>
<pre><code>input = 'check yahoo.com'
</code></pre>
<p>I want to parse input, using the first word as the "command", and the rest of the string as a parameter. Here's the simple version of how my non-Pythonic mind is coding it:</p>
<pre><code>if len(input) > 0:
a = input.split(' ')
if a[0] == 'check':
if len(a) > 1:
do_check(a[1])
elif a[0] == 'search':
if len(a) > 1:
do_search(a[1])
</code></pre>
<p>I like Python because it makes normally complicated things into rather simple things. I'm not too experienced with it, and I am fairly sure there's a much better way to do these things... some way more pythonic. I've seen some examples of people replacing switch statements with dicts and lambda functions, while other people simply recommended if..else nests.</p>
| 8
|
2009-03-13T04:31:55Z
| 641,546
|
<p>If you're looking for a one liner 'pythonic' approach to this you can use this:</p>
<pre><code>
def do_check(x): print 'checking for:', x
def do_search(x): print 'searching for:', x
input = 'check yahoo.com'
{'check': do_check}.get(input.split()[0], do_search)(input.split()[1])
# checking for: yahoo.com
input = 'search google.com'
{'check': do_check}.get(input.split()[0], do_search)(input.split()[1])
# searching for: google.com
input = 'foo bar.com'
{'check': do_check}.get(input.split()[0], do_search)(input.split()[1])
# searching for: bar.com
</code></pre>
| 0
|
2009-03-13T05:11:16Z
|
[
"conditional",
"python"
] |
"Pythonic" equivalent for handling switch and multiple string compares
| 641,469
|
<p>Alright, so my title sucked. An example works better:</p>
<pre><code>input = 'check yahoo.com'
</code></pre>
<p>I want to parse input, using the first word as the "command", and the rest of the string as a parameter. Here's the simple version of how my non-Pythonic mind is coding it:</p>
<pre><code>if len(input) > 0:
a = input.split(' ')
if a[0] == 'check':
if len(a) > 1:
do_check(a[1])
elif a[0] == 'search':
if len(a) > 1:
do_search(a[1])
</code></pre>
<p>I like Python because it makes normally complicated things into rather simple things. I'm not too experienced with it, and I am fairly sure there's a much better way to do these things... some way more pythonic. I've seen some examples of people replacing switch statements with dicts and lambda functions, while other people simply recommended if..else nests.</p>
| 8
|
2009-03-13T04:31:55Z
| 641,607
|
<p>This lets you avoid giving each command name twice; function names are used almost directly as command names.</p>
<pre><code>class CommandFunctions:
def c_check(self, arg):
print "checking", arg
def c_search(self, arg):
print "searching for", arg
def c_compare(self, arg1, arg2):
print "comparing", arg1, "with", arg2
def execute(self, line):
words = line.split(' ')
fn = getattr(self, 'c_' + words[0], None)
if fn is None:
import sys
sys.stderr.write('error: no such command "%s"\n' % words[0])
return
fn(*words[1:])
cf = CommandFunctions()
import sys
for line in sys.stdin:
cf.execute(line.strip())
</code></pre>
| 3
|
2009-03-13T05:50:13Z
|
[
"conditional",
"python"
] |
"Pythonic" equivalent for handling switch and multiple string compares
| 641,469
|
<p>Alright, so my title sucked. An example works better:</p>
<pre><code>input = 'check yahoo.com'
</code></pre>
<p>I want to parse input, using the first word as the "command", and the rest of the string as a parameter. Here's the simple version of how my non-Pythonic mind is coding it:</p>
<pre><code>if len(input) > 0:
a = input.split(' ')
if a[0] == 'check':
if len(a) > 1:
do_check(a[1])
elif a[0] == 'search':
if len(a) > 1:
do_search(a[1])
</code></pre>
<p>I like Python because it makes normally complicated things into rather simple things. I'm not too experienced with it, and I am fairly sure there's a much better way to do these things... some way more pythonic. I've seen some examples of people replacing switch statements with dicts and lambda functions, while other people simply recommended if..else nests.</p>
| 8
|
2009-03-13T04:31:55Z
| 642,037
|
<blockquote>
<p>I am fairly sure there's a much better way to do these things... some way more pythonic.</p>
</blockquote>
<p>Not really. Your code is simple, clear, obvious and English-like.</p>
<blockquote>
<p>I've seen some examples of people replacing switch statements with dicts and lambda functions,</p>
</blockquote>
<p>Yes, you've seen them and they're not clear, obvious or English-like. They exist because some people like to wring their hands over the switch statement.</p>
<blockquote>
<p>while other people simply recommended if..else nests.</p>
</blockquote>
<p>Correct. They work. They're simple, clear, ...</p>
<p>Your code is good. Leave it alone. Move on.</p>
| 4
|
2009-03-13T10:08:02Z
|
[
"conditional",
"python"
] |
"Pythonic" equivalent for handling switch and multiple string compares
| 641,469
|
<p>Alright, so my title sucked. An example works better:</p>
<pre><code>input = 'check yahoo.com'
</code></pre>
<p>I want to parse input, using the first word as the "command", and the rest of the string as a parameter. Here's the simple version of how my non-Pythonic mind is coding it:</p>
<pre><code>if len(input) > 0:
a = input.split(' ')
if a[0] == 'check':
if len(a) > 1:
do_check(a[1])
elif a[0] == 'search':
if len(a) > 1:
do_search(a[1])
</code></pre>
<p>I like Python because it makes normally complicated things into rather simple things. I'm not too experienced with it, and I am fairly sure there's a much better way to do these things... some way more pythonic. I've seen some examples of people replacing switch statements with dicts and lambda functions, while other people simply recommended if..else nests.</p>
| 8
|
2009-03-13T04:31:55Z
| 649,034
|
<p>Disregard, I just realized that my answer was similar to one of the other answers - and apparently there's no delete key :)</p>
| 0
|
2009-03-16T02:06:21Z
|
[
"conditional",
"python"
] |
"Pythonic" equivalent for handling switch and multiple string compares
| 641,469
|
<p>Alright, so my title sucked. An example works better:</p>
<pre><code>input = 'check yahoo.com'
</code></pre>
<p>I want to parse input, using the first word as the "command", and the rest of the string as a parameter. Here's the simple version of how my non-Pythonic mind is coding it:</p>
<pre><code>if len(input) > 0:
a = input.split(' ')
if a[0] == 'check':
if len(a) > 1:
do_check(a[1])
elif a[0] == 'search':
if len(a) > 1:
do_search(a[1])
</code></pre>
<p>I like Python because it makes normally complicated things into rather simple things. I'm not too experienced with it, and I am fairly sure there's a much better way to do these things... some way more pythonic. I've seen some examples of people replacing switch statements with dicts and lambda functions, while other people simply recommended if..else nests.</p>
| 8
|
2009-03-13T04:31:55Z
| 991,052
|
<p>Variation on <a href="http://stackoverflow.com/questions/641469/pythonic-equivalent-for-handling-switch-and-multiple-string-compares/641485#641485">@MizardX's answer</a>:</p>
<pre><code>from collections import defaultdict
dispatch = defaultdict(do_default, check=do_check, search=do_search)
cmd, _, arg = input.partition(' ')
dispatch[cmd](arg)
</code></pre>
| 0
|
2009-06-13T17:05:03Z
|
[
"conditional",
"python"
] |
Can 3D OpenGL game written in Python look good and run fast?
| 641,770
|
<p>I am planning to write a simple 3D (isometric view) game in Java using jMonkeyEngine - nothing too fancy, I just want to learn something about OpenGL and writing efficient algorithms (random map-generating ones). </p>
<p>When I was planning what to do, I started wondering about switching to Python. I know that Python didn't come into existence to be a tool for writing 3D games, but is it possible to write good-looking games with this language? </p>
<p>I have in mind 3D graphics, nice effects, and enough free CPU time to power the rest of the game engine. I have seen good-looking Java games - and to be honest, I was rather shocked when I saw the level of detail achieved in Runescape HD. </p>
<p>On the other hand, pygame.org has only 2D games, with some starting 3D projects. Are there any efficient 3D game engines for Python? Is PyOpenGL the only alternative? Are good-looking games in Python just unpopular, or not possible to achieve? </p>
<p>I would be grateful for any information / feedback.</p>
| 12
|
2009-03-13T07:48:58Z
| 641,832
|
<p>If you are worried about 3D performance: Most of the performance-critical parts will be handled by OpenGL (in a C library or even in hardware), so the language you use to drive it should not matter too much.</p>
<p>To really find out if performance is a problem, you'd have to try it. But there is no reason why it cannot work in principle.</p>
<p>At any rate, you could still optimize the critical parts, either in Python or by dropping to C. You still gain Python's benefit for most of the game engine which is less performance-critical.</p>
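<p>A quick way to see this principle outside of OpenGL entirely: the same work is far cheaper when the per-element loop runs in C rather than in the Python interpreter. This is an illustrative micro-benchmark, not a rendering benchmark; absolute numbers will vary by machine:</p>

```python
import timeit

def python_loop(data):
    # per-element work done by the Python interpreter
    total = 0.0
    for x in data:
        total += x
    return total

data = [0.5] * 100000

# sum() runs its loop in C, just as OpenGL runs rendering in C/hardware
t_py = timeit.timeit(lambda: python_loop(data), number=20)
t_c = timeit.timeit(lambda: sum(data), number=20)

assert python_loop(data) == sum(data)  # same result either way
print("interpreted loop: %.4fs  C-level sum(): %.4fs" % (t_py, t_c))
```

<p>The same logic applies to a game: keep the per-vertex/per-pixel work in OpenGL (or a C extension), and the Python-level game logic rarely becomes the bottleneck.</p>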
| 17
|
2009-03-13T08:31:34Z
|
[
"python",
"opengl"
] |
Can 3D OpenGL game written in Python look good and run fast?
| 641,770
|
<p>I am planning to write a simple 3D (isometric view) game in Java using jMonkeyEngine - nothing too fancy, I just want to learn something about OpenGL and writing efficient algorithms (random map-generating ones). </p>
<p>When I was planning what to do, I started wondering about switching to Python. I know that Python didn't come into existence to be a tool for writing 3D games, but is it possible to write good-looking games with this language? </p>
<p>I have in mind 3D graphics, nice effects, and enough free CPU time to power the rest of the game engine. I have seen good-looking Java games - and to be honest, I was rather shocked when I saw the level of detail achieved in Runescape HD. </p>
<p>On the other hand, pygame.org has only 2D games, with some starting 3D projects. Are there any efficient 3D game engines for Python? Is PyOpenGL the only alternative? Are good-looking games in Python just unpopular, or not possible to achieve? </p>
<p>I would be grateful for any information / feedback.</p>
| 12
|
2009-03-13T07:48:58Z
| 641,838
|
<p>You might want to check out <a href="http://www.python-ogre.org/" rel="nofollow">Python-Ogre</a>. I've only just messed with it myself, nothing serious, but it seems pretty good. </p>
| 2
|
2009-03-13T08:36:47Z
|
[
"python",
"opengl"
] |