| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Python C-API Object Allocation
| 573,275
|
<p>I want to use the new and delete operators for creating and destroying my objects.</p>
<p>The problem is that Python seems to break this into several stages: tp_new, tp_init and tp_alloc for creation, and tp_del, tp_free and tp_dealloc for destruction. However, C++ just has new, which allocates and fully constructs the object, and delete, which destructs and deallocates the object.</p>
<p>Which of the Python tp_* methods do I need to provide, and what must they do?</p>
<p>I also want to be able to create the object directly in C++, e.g. <code>PyObject *obj = new MyExtensionObject(args);</code> Will I also need to overload the new operator in some way to support this?</p>
<p>I would also like to be able to subclass my extension types in Python; is there anything special I need to do to support this?</p>
<p>I'm using Python 3.0.1.</p>
<p>EDIT:
OK, tp_init seems to make objects a bit too mutable for what I'm doing (e.g. take a Texture object: changing the contents after creation is fine, but changing fundamental aspects of it such as size, bit depth, etc. will break lots of existing C++ code that assumes those sorts of things are fixed). If I don't implement it, will that simply stop people calling __init__ AFTER it's constructed (or at least ignore the call, like tuple does)? Or should I have some flag that throws an exception or something if tp_init is called more than once on the same object?</p>
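One way to get the "flag that throws" behaviour at the Python level is to record whether initialization has already run; a minimal sketch (the Texture class here is a hypothetical stand-in, not the poster's actual type):

```python
class Texture:
    def __init__(self, size):
        # refuse re-initialization: fundamental attributes stay fixed
        if getattr(self, "_initialized", False):
            raise RuntimeError("Texture is already initialized")
        self.size = size
        self._initialized = True

t = Texture(64)
try:
    t.__init__(128)      # a second __init__ call is rejected
except RuntimeError:
    pass
assert t.size == 64      # size unchanged
```

The same check can live in a tp_init slot: test the flag, set a RuntimeError and return -1 if it is already set.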
<p>Apart from that, I think I've got most of the rest sorted.</p>
<pre><code>extern "C"
{
    // creation + destruction
    PyObject* global_alloc(PyTypeObject *type, Py_ssize_t items)
    {
        return (PyObject*)new char[type->tp_basicsize + items*type->tp_itemsize];
    }

    void global_free(void *mem)
    {
        delete[] (char*)mem;
    }
}

template<class T> class ExtensionType
{
    PyTypeObject *t;
public:
    ExtensionType()
    {
        t = new PyTypeObject();//not sure on this one, what is the "correct" way to create an empty type object
        memset((void*)t, 0, sizeof(PyTypeObject));
        static PyVarObject init = {PyVarObject_HEAD_INIT(NULL, 0)};
        *((PyVarObject*)t) = init;
        t->tp_basicsize = sizeof(T);
        t->tp_itemsize  = 0;
        t->tp_name      = "unknown";
        t->tp_alloc     = (allocfunc) global_alloc;
        t->tp_free      = (freefunc)  global_free;
        t->tp_new       = (newfunc)   T::obj_new;
        t->tp_dealloc   = (destructor)T::obj_dealloc;
        ...
    }
    ...bunch of methods for changing stuff...
    PyObject *Finalise()
    {
        ...
    }
};

template<class T> class PyObjectExtension : public PyObject
{
    ...
    // note: extern "C" is not valid on a member function; a plain static
    // member with a C-compatible signature is enough for the slot pointers
    static PyObject* obj_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
    {
        void *mem = (void*)subtype->tp_alloc(subtype, 0);
        return (PyObject*)new(mem) T(args, kwds); // placement new, needs &lt;new&gt;
    }

    static void obj_dealloc(PyObject *obj)
    {
        ((T*)obj)->~T(); // explicit destructor call, matching the placement new
        obj->ob_type->tp_free(obj);//most of the time this is global_free(obj)
    }
    ...
};

class MyObject : public PyObjectExtension<MyObject>
{
public:
    static PyObject* InitType()
    {
        ExtensionType<MyObject> extType; // "extType()" would declare a function
        ...sets other stuff...
        return extType.Finalise();
    }
    ...
};
</code></pre>
| 5
|
2009-02-21T16:01:45Z
| 573,424
|
<p>The documentation for these is at <a href="http://docs.python.org/3.0/c-api/typeobj.html" rel="nofollow">http://docs.python.org/3.0/c-api/typeobj.html</a> and
<a href="http://docs.python.org/3.0/extending/newtypes.html" rel="nofollow">http://docs.python.org/3.0/extending/newtypes.html</a> describes how to make your own type.</p>
<p>tp_alloc does the low-level memory allocation for the instance. This is equivalent to malloc(), plus initializing the refcount to 1. Python has its own allocator, PyType_GenericAlloc, but a type can implement a specialized allocator.</p>
<p>tp_new is the same as Python's __new__. It's usually used for immutable objects where the data is stored in the instance itself, as compared to a pointer to data. For example, strings and tuples store their data in the instance, instead of using a char * or a PyTuple *.</p>
<p>For this case, tp_new figures out how much memory is needed, based on the input parameters, and calls tp_alloc to get the memory, then initializes the essential fields. tp_new does not need to call tp_alloc. It can for example return a cached object.</p>
<p>tp_init is the same as Python's __init__. Most of your initialization should be in this function.</p>
<p>The distinction between __new__ and __init__ is called <a href="http://groups.google.com/group/comp.lang.python/msg/e78840758484b94d" rel="nofollow">two-stage initialization</a>, or <a href="http://groups.google.com/group/comp.lang.python/browse_thread/thread/b129dc656f4d0c8b/e43ae56909c3c00a?lnk=gst&q=two-phase+init#e43ae56909c3c00a" rel="nofollow">two-phase initialization</a>. </p>
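The same two-stage split is visible from pure Python. A minimal sketch with an immutable tuple subclass (the Point class is illustrative, not from the answer):

```python
class Point(tuple):
    # the data lives in the instance itself, so it must be set in __new__
    # (the analogue of tp_new); by the time __init__ runs, the tuple
    # contents are already frozen
    def __new__(cls, x, y):
        return super().__new__(cls, (x, y))

p = Point(2, 3)
assert p == (2, 3)
```

An `__init__` defined on Point could still run afterwards, but it could no longer change the stored values.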
<p>You say "<em>c++ just has new</em>", but that's not correct. tp_alloc corresponds to a custom arena allocator in C++, __new__ corresponds to a custom type allocator (a factory function), and __init__ is more like the constructor. The last link discusses more about the parallels between C++ and Python style.</p>
<p>Also read <a href="http://www.python.org/download/releases/2.2/descrintro/" rel="nofollow">http://www.python.org/download/releases/2.2/descrintro/</a> for details about how __new__ and __init__ interact.</p>
<p>You write that you want to "create the object directly in c++". That's rather difficult because at the least you'll have to convert any Python exceptions that occurred during object instantiation into a C++ exception. You might try looking at Boost::Python for some help with this task. Or you can use a two-phase initialization. ;)</p>
| 10
|
2009-02-21T17:25:29Z
|
[
"c++",
"python",
"c",
"python-3.x",
"python-c-api"
] |
Any way to create a NumPy matrix with C API?
| 573,487
|
<p>I read what documentation on the NumPy C API I could find, but still wasn't able to find out whether it is possible to construct a matrix object with the C API, not a two-dimensional array. The function is intended to work with math matrices, and I don't want strange results if the user calls matrix multiplication, forgetting to convert this value from an array to a matrix (multiplication and exponentiation being the only differences the matrix subclass has).</p>
| 2
|
2009-02-21T18:12:07Z
| 573,575
|
<p><code>numpy.matrix</code> is an ordinary class defined in <a href="http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/core/defmatrix.py#L154" rel="nofollow">numpy/core/defmatrix.py</a>. You can construct it using C API as any other instance of user-defined class in Python.</p>
| 3
|
2009-02-21T19:06:20Z
|
[
"python",
"numpy",
"python-c-api"
] |
Any way to create a NumPy matrix with C API?
| 573,487
|
<p>I read what documentation on the NumPy C API I could find, but still wasn't able to find out whether it is possible to construct a matrix object with the C API, not a two-dimensional array. The function is intended to work with math matrices, and I don't want strange results if the user calls matrix multiplication, forgetting to convert this value from an array to a matrix (multiplication and exponentiation being the only differences the matrix subclass has).</p>
| 2
|
2009-02-21T18:12:07Z
| 573,576
|
<p>You can call any python callable with the <code>PyObject_Call*</code> functions.</p>
<pre><code>PyObject *numpy = PyImport_ImportModule("numpy");
PyObject *numpy_matrix = PyObject_GetAttrString(numpy, "matrix");
PyObject *my_matrix = PyObject_CallFunction(numpy_matrix, "(s)", "0 0; 0 0");
</code></pre>
<p>This will create a matrix <code>my_matrix</code> of size 2x2.</p>
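For reference, the equivalent construction from the Python side (assuming NumPy is installed; note that `numpy.matrix` is discouraged in modern NumPy in favour of plain 2-D arrays):

```python
import numpy

# same string-form constructor as the C-API call above
my_matrix = numpy.matrix("0 0; 0 0")
assert my_matrix.shape == (2, 2)
```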
<p><em>EDIT:</em> Changed references to <code>numpy.zeros</code>/<code>numpy.ndarray</code> to <code>numpy.matrix</code> instead.</p>
<p>I also found a good tutorial on the subject: <a href="http://tinyurl.com/NumPyExtensions" title="Wayback machine">http://starship.python.net/crew/hinsen/NumPyExtensions.html</a></p>
| 6
|
2009-02-21T19:07:29Z
|
[
"python",
"numpy",
"python-c-api"
] |
Python serialize lexical closures?
| 573,569
|
<p>Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:</p>
<pre><code>def foo(bar, baz) :
def closure(waldo) :
return baz * waldo
return closure
</code></pre>
<p>I'd like to just be able to dump instances of closure to a file and read them back.</p>
<p>Edit:
One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.</p>
| 18
|
2009-02-21T19:03:12Z
| 574,789
|
<p>If you simply use a class with a <code>__call__</code> method to begin with, it should all work smoothly with <code>pickle</code>.</p>
<pre><code>class foo(object):
def __init__(self, bar, baz):
self.baz = baz
def __call__(self,waldo):
return self.baz * waldo
</code></pre>
<p>On the other hand, a hack which converted a closure into an instance of a new class created at runtime would not work, because of the way <code>pickle</code> deals with classes and instances. <code>pickle</code> doesn't store classes; only a module name and class name. When reading back an instance or class it tries to import the module and find the required class in it. If you used a class created on-the-fly, you're out of luck.</p>
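For instance, a class-based replacement for the closure round-trips through pickle without trouble, precisely because pickle can find the class again by its module and name (Multiplier is a hypothetical example, equivalent in spirit to the foo class above):

```python
import pickle

class Multiplier:
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, waldo):
        return self.factor * waldo

f = Multiplier(3)
# pickle stores only the module and class name, then re-imports them on load
g = pickle.loads(pickle.dumps(f))
assert g(4) == 12
```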
| 12
|
2009-02-22T11:42:04Z
|
[
"python",
"serialization",
"closures"
] |
Python serialize lexical closures?
| 573,569
|
<p>Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:</p>
<pre><code>def foo(bar, baz) :
def closure(waldo) :
return baz * waldo
return closure
</code></pre>
<p>I'd like to just be able to dump instances of closure to a file and read them back.</p>
<p>Edit:
One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.</p>
| 18
|
2009-02-21T19:03:12Z
| 584,138
|
<p><a href="http://code.activestate.com/recipes/500261/" rel="nofollow">Recipe 500261: Named Tuples</a> contains a function that defines a class on-the-fly. And this class supports pickling.</p>
<p>Here's the essence:</p>
<pre><code>result.__module__ = _sys._getframe(1).f_globals.get('__name__', '__main__')
</code></pre>
<p>Combined with <a href="http://stackoverflow.com/questions/573569/python-serialize-lexical-closures/574789#574789">@Greg Ball's suggestion</a> to create a new class at runtime it might answer your question.</p>
| 0
|
2009-02-24T23:38:42Z
|
[
"python",
"serialization",
"closures"
] |
Python serialize lexical closures?
| 573,569
|
<p>Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:</p>
<pre><code>def foo(bar, baz) :
def closure(waldo) :
return baz * waldo
return closure
</code></pre>
<p>I'd like to just be able to dump instances of closure to a file and read them back.</p>
<p>Edit:
One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.</p>
| 18
|
2009-02-21T19:03:12Z
| 704,118
|
<p>Yes! I got it (at least I think) -- that is, the more general problem of pickling a function. Python is so wonderful :). I found out most of it through the dir() function and a couple of web searches. It's also wonderful to have it [hopefully] solved; I needed it too.</p>
<p>I haven't done a lot of testing on how robust this co_code thing is (nested fcns, etc.), and it would be nice if someone could look up how to hook Python so functions can be pickled automatically (e.g. they might sometimes be closure args).</p>
<p>Cython module _pickle_fcn.pyx</p>
<pre><code># -*- coding: utf-8 -*-
cdef extern from "Python.h":
object PyCell_New(object value)
def recreate_cell(value):
return PyCell_New(value)
</code></pre>
<p>Python file</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
# author gatoatigrado [ntung.com]
import cPickle, marshal, types
import pyximport; pyximport.install()
import _pickle_fcn
def foo(bar, baz) :
def closure(waldo) :
return baz * waldo
return closure
# really this problem is more about pickling arbitrary functions
# thanks so much to the original question poster for mentioning marshal
# I probably wouldn't have found out how to serialize func_code without it.
fcn_instance = foo("unused?", -1)
code_str = marshal.dumps(fcn_instance.func_code)
name = fcn_instance.func_name
defaults = fcn_instance.func_defaults
closure_values = [v.cell_contents for v in fcn_instance.func_closure]
serialized = cPickle.dumps((code_str, name, defaults, closure_values),
protocol=cPickle.HIGHEST_PROTOCOL)
code_str_, name_, defaults_, closure_values_ = cPickle.loads(serialized)
code_ = marshal.loads(code_str_)
closure_ = tuple([_pickle_fcn.recreate_cell(v) for v in closure_values_])
# reconstructing the globals is like pickling everything :)
# for most functions, it's likely not necessary
# it probably wouldn't be too much work to detect if fcn_instance global element is of type
# module, and handle that in some custom way
# (have the reconstruction reinstantiate the module)
reconstructed = types.FunctionType(code_, globals(),
name_, defaults_, closure_)
print(reconstructed(3))
</code></pre>
<p>cheers, <br />
Nicholas</p>
<p><strong>EDIT</strong> - more robust global handling is necessary for real-world cases. fcn.func_code.co_names lists global names.</p>
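For readers on Python 3: the same idea works with the renamed function attributes (`__code__`, `__closure__`, `__defaults__`) and `types.CellType` (available since 3.8), so the Cython helper for rebuilding cells is no longer needed. A rough sketch, not a robust implementation:

```python
import marshal
import pickle
import types

def foo(bar, baz):
    def closure(waldo):
        return baz * waldo
    return closure

fcn = foo("unused?", -1)

# serialize: marshal the code object, pickle everything else
payload = pickle.dumps((marshal.dumps(fcn.__code__), fcn.__name__,
                        fcn.__defaults__,
                        [c.cell_contents for c in fcn.__closure__]))

# deserialize and rebuild the function
code_str, name, defaults, cell_values = pickle.loads(payload)
cells = tuple(types.CellType(v) for v in cell_values)
rebuilt = types.FunctionType(marshal.loads(code_str), globals(),
                             name, defaults, cells)
assert rebuilt(3) == -3   # baz (-1) * waldo (3)
```

The same caveat about globals applies here: this rebinds the function to the current module's globals, which is only safe when the closure does not depend on globals from its original module.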
| 1
|
2009-04-01T05:01:25Z
|
[
"python",
"serialization",
"closures"
] |
Python serialize lexical closures?
| 573,569
|
<p>Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:</p>
<pre><code>def foo(bar, baz) :
def closure(waldo) :
return baz * waldo
return closure
</code></pre>
<p>I'd like to just be able to dump instances of closure to a file and read them back.</p>
<p>Edit:
One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.</p>
| 18
|
2009-02-21T19:03:12Z
| 2,477,448
|
<pre><code>#!python
import marshal, pickle, new
def dump_func(f):
if f.func_closure:
closure = tuple(c.cell_contents for c in f.func_closure)
else:
closure = None
return marshal.dumps(f.func_code), f.func_defaults, closure
def load_func(code, defaults, closure, globs):
if closure is not None:
closure = reconstruct_closure(closure)
code = marshal.loads(code)
return new.function(code, globs, code.co_name, defaults, closure)
def reconstruct_closure(values):
ns = range(len(values))
src = ["def f(arg):"]
src += [" _%d = arg[%d]" % (n, n) for n in ns]
src += [" return lambda:(%s)" % ','.join("_%d"%n for n in ns), '']
src = '\n'.join(src)
try:
exec src
except:
raise SyntaxError(src)
return f(values).func_closure
if __name__ == '__main__':
def get_closure(x):
def the_closure(a, b=1):
return a * x + b, some_global
return the_closure
f = get_closure(10)
code, defaults, closure = dump_func(f)
dump = pickle.dumps((code, defaults, closure))
code, defaults, closure = pickle.loads(dump)
f = load_func(code, defaults, closure, globals())
some_global = 'some global'
print f(2)
</code></pre>
| 0
|
2010-03-19T13:04:14Z
|
[
"python",
"serialization",
"closures"
] |
Python serialize lexical closures?
| 573,569
|
<p>Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:</p>
<pre><code>def foo(bar, baz) :
def closure(waldo) :
return baz * waldo
return closure
</code></pre>
<p>I'd like to just be able to dump instances of closure to a file and read them back.</p>
<p>Edit:
One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.</p>
| 18
|
2009-02-21T19:03:12Z
| 4,124,868
|
<p>PiCloud has released an open-source (LGPL) pickler which can handle function closure and a whole lot more useful stuff. It can be used independently of their cloud computing infrastructure - it's just a normal pickler. The whole shebang is documented <a href="http://docs.picloud.com/" rel="nofollow">here</a>, and you can download the code via 'pip install cloud'. Anyway, it does what you want. Let's demonstrate that by pickling a closure:</p>
<pre><code>import pickle
from StringIO import StringIO
import cloud
# generate a closure
def foo(bar, baz):
def closure(waldo):
return baz * waldo
return closure
closey = foo(3, 5)
# use the picloud pickler to pickle to a string
f = StringIO()
pickler = cloud.serialization.cloudpickle.CloudPickler(f)
pickler.dump(closey)
#rewind the virtual file and reload
f.seek(0)
closey2 = pickle.load(f)
</code></pre>
<p>Now we have <code>closey</code>, the original closure, and <code>closey2</code>, the one that has been restored from a string serialisation. Let's test 'em.</p>
<pre><code>>>> closey(4)
20
>>> closey2(4)
20
</code></pre>
<p>Beautiful. The module is pure Python; you can open it up and easily see what makes the magic work. (The answer is a lot of code.)</p>
| 16
|
2010-11-08T14:46:46Z
|
[
"python",
"serialization",
"closures"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 573,656
|
<p>If you're using a standard POSIX OS, you use <a href="http://linux.die.net/man/8/cron" rel="nofollow">cron</a>.</p>
<p>If you're using Windows, you use <a href="http://technet.microsoft.com/en-us/library/cc755618.aspx" rel="nofollow">at</a>.</p>
<p>Write a Django management command to </p>
<ol>
<li><p>Figure out what platform they're on.</p></li>
<li><p>Either execute the appropriate "AT" command for your users, <strong>or</strong> update the crontab for your users.</p></li>
</ol>
| 24
|
2009-02-21T20:04:40Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 573,659
|
<p>One solution that I have employed is to do this:</p>
<p>1) Create a <a href="http://docs.djangoproject.com/en/dev/howto/custom-management-commands/#howto-custom-management-commands">custom management command</a>, e.g. </p>
<pre><code>python manage.py my_cool_command
</code></pre>
<p>2) Use <code>cron</code> (on Linux) or <code>at</code> (on Windows) to run my command at the required times.</p>
<p>This is a simple solution that doesn't require installing a heavy AMQP stack. However there are nice advantages to using something like Celery, mentioned in the other answers. In particular, with Celery it is nice to not have to spread your application logic out into crontab files. However the cron solution works quite nicely for a small to medium sized application and where you don't want a lot of external dependencies.</p>
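The matching crontab entry for step 2 is a one-liner; this is a config fragment with hypothetical example paths, not something tied to a particular project layout:

```shell
# run my_cool_command every 30 minutes
*/30 * * * * /path/to/venv/bin/python /path/to/project/manage.py my_cool_command
```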
<p>EDIT:</p>
<p>In later versions of Windows the <code>at</code> command is deprecated (Windows 8, Server 2012 and above); you can use <code>schtasks.exe</code> for the same purpose.</p>
| 219
|
2009-02-21T20:06:38Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 573,685
|
<p>Look at Django Poor Man's Cron, a Django app that makes use of spambots, search engine indexing robots and the like to run scheduled tasks at approximately regular intervals.</p>
<p>See: <a href="http://code.google.com/p/django-poormanscron/">http://code.google.com/p/django-poormanscron/</a></p>
| 12
|
2009-02-21T20:29:47Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 574,245
|
<p>I personally use cron, but the <a href="http://code.google.com/p/django-command-extensions/wiki/JobsScheduling">Jobs Scheduling</a> parts of <a href="https://github.com/django-extensions/django-extensions">django-extensions</a> looks interesting.</p>
| 8
|
2009-02-22T03:18:07Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 621,538
|
<p>Interesting new pluggable Django app: <a href="http://code.google.com/p/django-chronograph/">django-chronograph</a></p>
<p>You only have to add one cron entry which acts as a timer, and you have a very nice Django admin interface into the scripts to run.</p>
| 20
|
2009-03-07T08:32:30Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 1,057,920
|
<p><a href="http://celeryproject.org/">Celery</a> is a distributed task queue, built on AMQP (RabbitMQ). It also handles periodic tasks in a cron-like fashion. Depending on your app, it might be worth a gander.</p>
| 92
|
2009-06-29T11:56:47Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 2,024,459
|
<p>Put the following at the top of your cron.py file:</p>
<pre><code>#!/usr/bin/python
import os, sys
sys.path.append('/path/to/') # the parent directory of the project
sys.path.append('/path/to/project') # these lines only needed if not on path
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproj.settings'
# imports and code below
</code></pre>
| 5
|
2010-01-07T23:26:10Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 6,025,832
|
<p>After the following part of code, I can write anything just like in my views.py :)</p>
<pre><code>#######################################
import os,sys
sys.path.append('/home/administrator/development/store')
os.environ['DJANGO_SETTINGS_MODULE']='store.settings'
from django.core.management import setup_environ
from store import settings
setup_environ(settings)
#######################################
</code></pre>
<p>from
<a href="http://www.cotellese.net/2007/09/27/running-external-scripts-against-django-models/" rel="nofollow">http://www.cotellese.net/2007/09/27/running-external-scripts-against-django-models/</a></p>
| 4
|
2011-05-17T03:09:13Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 7,287,891
|
<p>I had something similar to your problem today.</p>
<p>I didn't want to have it handled by the server through cron (most of the libs were just cron helpers in the end).</p>
<p>So I created a scheduling module and attached it to the <code>__init__</code>.</p>
<p>It's not the best approach, but it helps me keep all the code in a single place, with its execution tied to the main app.</p>
| 2
|
2011-09-02T18:41:33Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 7,945,123
|
<p>I just thought about this rather simple solution:</p>
<ol>
<li>Define a view function <strong>do_work(req, param)</strong> like you would any other view, with URL mapping, returning an <code>HttpResponse</code> and so on.</li>
<li>Set up a cron job with your timing preferences (or using AT or Scheduled Tasks in Windows) which runs <strong>curl <a href="http://localhost/your/mapped/url?param=value" rel="nofollow">http://localhost/your/mapped/url?param=value</a></strong>.</li>
</ol>
<p>You can add parameters by just adding them to the URL.</p>
<p>Tell me what you guys think.</p>
<p><strong>[Update]</strong> I'm now using runjob command from <a href="https://github.com/django-extensions/django-extensions" rel="nofollow">django-extensions</a> instead of curl.</p>
<p>My cron looks something like this:</p>
<pre><code>@hourly python /path/to/project/manage.py runjobs hourly
</code></pre>
<p>... and so on for daily, monthly, etc. You can also set it up to run a specific job.</p>
<p>I find it more manageable and cleaner. It doesn't require mapping a URL to a view. Just define your job class and crontab settings and you're set.</p>
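<p>For illustration, here is a framework-agnostic sketch of the "trigger work via a URL" idea from this answer. In real Django the function would be a view taking an <code>HttpRequest</code> and mapped in <code>urls.py</code>; here the request is modelled as a plain dict of query parameters, which is an assumption for the sketch, not Django's API.</p>

```python
def do_work(params):
    """Run a scheduled job; the 'param' query argument selects which one.

    `params` stands in for request.GET in a real Django view.
    """
    job = params.get("param", "default")
    jobs = {
        "default": "ran default job",
        "cleanup": "ran cleanup job",   # hypothetical job names
    }
    return jobs.get(job, "unknown job")
```

<p>cron (or AT/Scheduled Tasks) then only needs to hit the mapped URL with <code>curl</code>, passing <code>?param=cleanup</code> or similar.</p>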
| 4
|
2011-10-30T13:19:58Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 8,575,485
|
<p>Brian Neal's suggestion of running management commands via cron works well, but if you're looking for something a little more robust (yet not as elaborate as Celery) I'd look into a library like <a href="https://github.com/jgorset/django-kronos">Kronos</a>:</p>
<pre><code># app/cron.py
import kronos
@kronos.register('0 * * * *')
def task():
pass
</code></pre>
| 7
|
2011-12-20T12:30:42Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 9,071,268
|
<p>We've open-sourced what I think is a structured app that Brian's solution above alludes to. We would love any and all feedback!</p>
<p><a href="https://github.com/tivix/django-cron">https://github.com/tivix/django-cron</a></p>
<p>It comes with one management command:</p>
<pre><code>./manage.py runcrons
</code></pre>
<p>That does the job. Each cron is modeled as a class (so it's all OO), each cron runs at a different frequency, and we make sure the same cron type doesn't run in parallel (in case the crons themselves take longer to run than their frequency!).</p>
<p>Thanks!</p>
| 26
|
2012-01-30T21:47:06Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 9,995,875
|
<p>RabbitMQ and Celery have more features and task handling capabilities than Cron. If task failure isn't an issue, and you think you will handle broken tasks in the next call, then Cron is sufficient.</p>
<p>Celery & <a href="https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol" rel="nofollow">AMQP</a> will let you handle the broken task, and it will get executed again by another worker (Celery workers listen for the next task to work on), until the task's <code>max_retries</code> attribute is reached. You can even invoke tasks on failure, like logging the failure, or sending an email to the admin once the <code>max_retries</code> has been reached.</p>
<p>And you can distribute Celery and AMQP servers when you need to scale your application.</p>
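<p>The retry behaviour described above can be sketched without Celery itself; this framework-free loop mirrors what a Celery worker does for a task until <code>max_retries</code> is exhausted (the function names here are made up for illustration):</p>

```python
def run_with_retries(task, max_retries=3, on_failure=None):
    """Re-attempt `task` until it succeeds or max_retries is used up,
    then invoke on_failure (e.g. log or email the admin) and re-raise."""
    attempts = 0
    while True:
        try:
            return task()
        except Exception as exc:
            attempts += 1
            if attempts > max_retries:
                if on_failure is not None:
                    on_failure(exc)
                raise
```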
| 7
|
2012-04-03T14:54:18Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 22,649,515
|
<p>Yes, the methods above are great, and I tried some of them. At last, I found a method like this:</p>
<pre><code>from threading import Timer

INTERVAL = 60 * 60  # seconds between runs

def sync():
    # do something...
    sync_timer = Timer(INTERVAL, sync)
    sync_timer.start()

sync()
</code></pre>
<p>The function simply reschedules itself, <strong>recursively</strong>.</p>
<p>I hope this method can meet your requirement. :)</p>
| 2
|
2014-03-26T01:04:36Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 32,320,532
|
<p>I use Celery to create my periodic tasks. First you need to install it as follows:</p>
<pre><code>pip install django-celery
</code></pre>
<p>Don't forget to register <code>django-celery</code> in your settings, and then you can do something like this:</p>
<pre><code>from celery.decorators import periodic_task
from celery.task.schedules import crontab

@periodic_task(run_every=crontab(minute="0", hour="23"))
def do_every_midnight():
    # your code
    pass
</code></pre>
| 0
|
2015-08-31T21:52:36Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Django - Set Up A Scheduled Job?
| 573,618
|
<p>I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically. </p>
<p>Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.</p>
<p>Does anyone know how to set this up?</p>
<p>To clarify: I know I can set up a <code>cron</code> job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).</p>
<p>I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.</p>
| 299
|
2009-02-21T19:39:59Z
| 38,468,265
|
<p>Although not part of Django, <a href="http://airflow.incubator.apache.org/project.html" rel="nofollow">Airflow</a> is a more recent project (as of 2016) that is useful for task management.</p>
<p>Airflow is a workflow automation and scheduling system that can be used to author and manage data pipelines. A web-based UI provides the developer with a range of options for managing and viewing these pipelines.</p>
<p>Airflow is written in Python and is built using Flask.</p>
<p>Airflow was created by Maxime Beauchemin at Airbnb and open sourced in the spring of 2015. It joined the Apache Software Foundation's incubation program in the winter of 2016. Here is the <a href="https://github.com/apache/incubator-airflow" rel="nofollow">Git project page</a> and some additional <a href="http://nerds.airbnb.com/airflow/" rel="nofollow">background information</a>.</p>
| 0
|
2016-07-19T20:49:43Z
|
[
"python",
"django",
"web-applications",
"scheduled-tasks"
] |
Using wget in python (Error Code Help me)
| 573,914
|
<p>Here's my code.</p>
<pre><code>import os, sys
if len(sys.argv) != 2:
sys.exit(1)
h = os.popen("wget -r %s") % sys.argv[1]
fil = open("links.txt","w")
dir = os.listdir("%s") % sys.argv[1]
for file in dir:
print file.replace("@","?")
fil.write("%s/"+file.replace("@","?")) % sys.argv[1]
fil.write("\n")
h.close()
</code></pre>
<p>Running it like this: python project.py <a href="http://google.com" rel="nofollow">http://google.com</a></p>
<p>gives me this error: </p>
<pre><code>1.py:5 RuntimeWarning: tp_compare didnt return -1 or -2 for exception
h = os.popen("wget -r %s") % sys.argv[1]
Traceback (most recent call last):
File "1.py" line 5, in <module>
h = os.popen("wget -r %s") % sys.argv[1]
TypeError: unsupported operand type<s> for %: 'file' and 'str'
</code></pre>
<p>What am I doing wrong? (Still learning Python.) Any solution/tip?</p>
<p>I won't explain the code; I think you can see what I'm trying to do.</p>
| 0
|
2009-02-21T23:32:24Z
| 573,929
|
<p>I think you want:</p>
<pre><code>h = os.popen("wget -r %s" % sys.argv[1])
</code></pre>
| 1
|
2009-02-21T23:40:19Z
|
[
"python"
] |
Using wget in python (Error Code Help me)
| 573,914
|
<p>Here's my code.</p>
<pre><code>import os, sys
if len(sys.argv) != 2:
sys.exit(1)
h = os.popen("wget -r %s") % sys.argv[1]
fil = open("links.txt","w")
dir = os.listdir("%s") % sys.argv[1]
for file in dir:
print file.replace("@","?")
fil.write("%s/"+file.replace("@","?")) % sys.argv[1]
fil.write("\n")
h.close()
</code></pre>
<p>Running it like this: python project.py <a href="http://google.com" rel="nofollow">http://google.com</a></p>
<p>gives me this error: </p>
<pre><code>1.py:5 RuntimeWarning: tp_compare didnt return -1 or -2 for exception
h = os.popen("wget -r %s") % sys.argv[1]
Traceback (most recent call last):
File "1.py" line 5, in <module>
h = os.popen("wget -r %s") % sys.argv[1]
TypeError: unsupported operand type<s> for %: 'file' and 'str'
</code></pre>
<p>What am I doing wrong? (Still learning Python.) Any solution/tip?</p>
<p>I won't explain the code; I think you can see what I'm trying to do.</p>
| 0
|
2009-02-21T23:32:24Z
| 573,932
|
<p>You're putting the <code>%</code> operator in the wrong place: you need to put it directly after the format string:</p>
<pre><code>h = os.popen("wget -r %s" % sys.argv[1])
...
dir = os.listdir("%s" % sys.argv[1])
...
fil.write(("%s/"+file.replace("@","?")) % sys.argv[1])
</code></pre>
<p>Alternatively, since you're just using <code>%s</code>, just do plain and simple string concatenation:</p>
<pre><code>h = os.popen("wget -r " + sys.argv[1])
...
dir = os.listdir(sys.argv[1])
...
fil.write(sys.argv[1] + "/" + file.replace("@","?"))
</code></pre>
| 1
|
2009-02-21T23:42:11Z
|
[
"python"
] |
Using wget in python (Error Code Help me)
| 573,914
|
<p>Here's my code.</p>
<pre><code>import os, sys
if len(sys.argv) != 2:
sys.exit(1)
h = os.popen("wget -r %s") % sys.argv[1]
fil = open("links.txt","w")
dir = os.listdir("%s") % sys.argv[1]
for file in dir:
print file.replace("@","?")
fil.write("%s/"+file.replace("@","?")) % sys.argv[1]
fil.write("\n")
h.close()
</code></pre>
<p>Running it like this: python project.py <a href="http://google.com" rel="nofollow">http://google.com</a></p>
<p>gives me this error: </p>
<pre><code>1.py:5 RuntimeWarning: tp_compare didnt return -1 or -2 for exception
h = os.popen("wget -r %s") % sys.argv[1]
Traceback (most recent call last):
File "1.py" line 5, in <module>
h = os.popen("wget -r %s") % sys.argv[1]
TypeError: unsupported operand type<s> for %: 'file' and 'str'
</code></pre>
<p>What am I doing wrong? (Still learning Python.) Any solution/tip?</p>
<p>I won't explain the code; I think you can see what I'm trying to do.</p>
| 0
|
2009-02-21T23:32:24Z
| 573,933
|
<ol>
<li><p>h = os.popen("wget -r %s" % sys.argv[1])</p></li>
<li><p>use the subprocess module, os.popen is obsolete</p></li>
<li><p>python has urllib, you can consider using that to have pure python code</p></li>
<li><p>there is pycurl</p></li>
</ol>
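<p>To illustrate points 1 and 2 together: building the command as an argument list and running it with <code>subprocess</code> avoids both the misplaced <code>%</code> and any shell-quoting problems (the helper names here are made up):</p>

```python
import subprocess

def wget_command(url):
    """Build the wget argument list; a list needs no shell quoting."""
    return ["wget", "-r", url]

def mirror(url):
    # subprocess.call replaces the obsolete os.popen for simply
    # running a command and waiting for it to finish
    return subprocess.call(wget_command(url))
```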
| 9
|
2009-02-21T23:42:18Z
|
[
"python"
] |
Docs for the internals of CPython Implementation
| 574,004
|
<p>I am currently in the process of making an embedded system port of the CPython 3.0 Python interpreter and I'm particularly interested in any references or documentation that provides details about the design and structure of code for Release 3.0 or even about any of the 2.x releases.</p>
<p>One useful document I have found so far is this <a href="http://www.python.org/dev/peps/pep-0339/">informational PEP</a> on the implementation - which is a good overview - but is still pretty high level. Hoping to come across something that gives [much] more detail on more of the modules or perhaps even covers something about porting considerations.</p>
| 8
|
2009-02-22T00:16:52Z
| 574,393
|
<p>There's the documentation for the C API, which is essentially the API for the internals of Python. It won't cover porting details, though. The code itself is fairly well documented. You might try reading in and around the area you'll need to modify.</p>
| 8
|
2009-02-22T05:14:50Z
|
[
"python",
"cpython"
] |
Docs for the internals of CPython Implementation
| 574,004
|
<p>I am currently in the process of making an embedded system port of the CPython 3.0 Python interpreter and I'm particularly interested in any references or documentation that provides details about the design and structure of code for Release 3.0 or even about any of the 2.x releases.</p>
<p>One useful document I have found so far is this <a href="http://www.python.org/dev/peps/pep-0339/">informational PEP</a> on the implementation - which is a good overview - but is still pretty high level. Hoping to come across something that gives [much] more detail on more of the modules or perhaps even covers something about porting considerations.</p>
| 8
|
2009-02-22T00:16:52Z
| 575,040
|
<p>Most of the documentation is stored in the minds of various core developers. :) A good resource for you would be the #python-dev IRC channel on freenode where many of them hang out.</p>
<p>There's also some scattered information on the <a href="http://wiki.python.org/moin/" rel="nofollow">Python wiki</a>.</p>
| 1
|
2009-02-22T15:12:04Z
|
[
"python",
"cpython"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 574,135
|
<p>As always: It depends ;-)</p>
<p>When I don't need any Apache features I go with a pure Python webserver like Paste etc. Which one exactly depends on your application, I guess, and can be decided by doing some benchmarks. I always wanted to do some but never got around to it. I guess Spawning might have some advantages in using non-blocking IO out of the box, but I sometimes had problems with it because of the patching it does. </p>
<p>You are always free to put Varnish in front as well, of course.</p>
<p>If Apache is required I usually go with solution 3 so that I can keep the processes separate. You can also more easily move processes to other servers, etc. I simply like to keep things separate.</p>
<p>For static files I am currently using a separate server for a project, which just serves static images/css/js. I am using lighttpd as the webserver, which has great performance (in this case I no longer have Varnish in front).</p>
<p>Another useful tool is <a href="http://supervisord.org/">supervisord</a> for controlling and monitoring these services.</p>
<p>I am additionally using <a href="http://pypi.python.org/pypi/zc.buildout">buildout</a> for managing my deployments and development sandboxes (together with <a href="http://pypi.python.org/pypi/virtualenv">virtualenv</a>).</p>
| 25
|
2009-02-22T01:39:56Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 575,737
|
<p>The absolute easiest thing to deploy is CherryPy. Your web application can also become a standalone webserver. CherryPy is also a fairly fast server considering that it's written in pure Python. With that said, it's not Apache. Thus, I find that CherryPy is a good choice for lower volume webapps.</p>
<p>Other than that, I don't think there's any right or wrong answer to this question. Lots of high-volume websites have been built on the technologies you mention, and I don't think you can go too wrong with any of those ways (although I will say that I agree with mod-wsgi not being up to snuff on non-Apache servers).</p>
<p>Also, I've been using <a href="http://code.google.com/p/isapi-wsgi/">isapi_wsgi</a> to deploy python apps under IIS. It's a less than ideal setup, but it works and you don't always get to choose otherwise when you live in a windows-centric world.</p>
| 13
|
2009-02-22T20:36:33Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 581,621
|
<p>I'm using Google App Engine for an application I'm developing. It runs WSGI applications.
<a href="http://code.google.com/appengine/docs/python/gettingstarted/usingwebapp.html" rel="nofollow">Here's a couple bits of info on it.</a> </p>
<p>This is the first web-app I've ever really worked on, so I don't have a basis for comparison, but if you're a Google fan, you might want to look into it. I've had a lot of fun using it as my framework for learning. </p>
| 4
|
2009-02-24T12:47:38Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 612,607
|
<p>We are using pure Paste for some of our web services. It is easy to deploy (with our internal deployment mechanism; we're not using Paste Deploy or anything like that) and it is nice to minimize the difference between production systems and what's running on developers' workstations. Caveat: we don't expect low latency out of Paste itself because of the heavyweight nature of our requests. In some crude benchmarking we did we weren't getting <em>fantastic</em> results; it just ended up being moot due to the expense of our typical request handler. So far it has worked fine.</p>
<p>Static data has been handled by completely separate (and somewhat "organically" grown) stacks, including the use of S3, Akamai, Apache and IIS, in various ways.</p>
| 1
|
2009-03-04T21:48:27Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 612,622
|
<p>Apache httpd + mod_fcgid using web.py (which is a wsgi application).</p>
<p>Works like a charm.</p>
| 3
|
2009-03-04T21:53:23Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 616,720
|
<p>Apache+mod_wsgi,</p>
<p>Simple and clean (only four lines of webserver config), and easy for other sysadmins to get their heads around.</p>
| 1
|
2009-03-05T21:34:32Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 622,597
|
<blockquote>
<p>I would absolutely love you to bore me with details about the whats and the whys, application specific stuff, etc</p>
</blockquote>
<p>Ho. Well you asked for it!</p>
<p>Like Daniel I personally use Apache with mod_wsgi. It is still new enough that deploying it in some environments can be a struggle, but if you're compiling everything yourself anyway it's pretty easy. I've found it very reliable, even the early versions. Props to Graham Dumpleton for keeping control of it pretty much by himself.</p>
<p>However for me it's essential that WSGI applications work across all possible servers. There is a bit of a hole at the moment in this area: you have the WSGI standard telling you what a WSGI callable (application) does, but there's no standardisation of deployment; no single way to tell the web server where to find the application. There's also no standardised way to make the server reload the application when you've updated it.</p>
<p>The approach I've adopted is to put:</p>
<ul>
<li><p>all application logic in modules/packages, preferably in classes</p></li>
<li><p>all website-specific customisations to be done by subclassing the main Application and overriding members</p></li>
<li><p>all server-specific deployment settings (eg. database connection factory, mail relay settings) as class <code>__init__()</code> parameters</p></li>
<li><p>one top-level "application.py" script that initialises the Application class with the correct deployment settings for the current server, then runs the application in such a way that it can work deployed as a CGI script, a mod_wsgi WSGIScriptAlias (or Passenger, which apparently works the same way), or can be interacted with from the command line</p></li>
<li><p>a helper module that takes care of above deployment issues and allows the application to be reloaded when the modules the application is relying on change</p></li>
</ul>
<p>So what the application.py looks like in the end is something like:</p>
<pre><code>#!/usr/bin/env python
import os.path
basedir= os.path.dirname(__file__)
import MySQLdb
def dbfactory():
return MySQLdb.connect(db= 'myappdb', unix_socket= '/var/mysql/socket', user= 'u', passwd= 'p')
def appfactory():
import myapplication
return myapplication.Application(basedir, dbfactory, debug= False)
import wsgiwrap
ismain= __name__=='__main__'
libdir= os.path.join(basedir, 'system', 'lib')
application= wsgiwrap.Wrapper(appfactory, libdir, 10, ismain)
</code></pre>
<p>The wsgiwrap.Wrapper checks every 10 seconds to see if any of the application modules in libdir have been updated, and if so does some nasty sys.modules magic to unload them all reliably. Then appfactory() will be called again to get a new instance of the updated application.</p>
<p>(You can also use command line tools such as</p>
<pre><code>./application.py setup
./application.py daemon
</code></pre>
<p>to run any setup and background-tasks hooks provided by the application callable, a bit like how distutils works. It also responds to start/stop/restart like an init script.)</p>
<p>Another trick I use is to put the deployment settings for multiple servers (development/testing/production) in the same application.py script, and sniff <code>socket.gethostname()</code> to decide which server-specific bunch of settings to use.</p>
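<p>That trick is only a few lines to sketch (the host names and settings values below are invented placeholders):</p>

```python
import socket

# Hypothetical host names and settings, purely illustrative.
PER_HOST = {
    'devbox': dict(debug=True,  db='myappdb_dev'),
    'www1':   dict(debug=False, db='myappdb'),
}
DEFAULT = dict(debug=True, db='myappdb_dev')

def settings_for(hostname=None):
    """Return the settings bunch for this server, falling back to the
    development defaults for unrecognised hosts."""
    if hostname is None:
        hostname = socket.gethostname()
    return PER_HOST.get(hostname, DEFAULT)
```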
<p>At some point I might package wsgiwrap up and release it properly (possibly under a different name). In the meantime if you're interested, you can see a dogfood-development version at <a href="http://www.doxdesk.com/file/software/py/v/wsgiwrap-0.5.py">http://www.doxdesk.com/file/software/py/v/wsgiwrap-0.5.py</a>.</p>
| 13
|
2009-03-07T22:15:27Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 629,171
|
<h2>TurboGears (2.0)</h2>
<p><a href="http://turbogears.org/2.0/docs/index.html" rel="nofollow">TurboGears 2.0</a> is leaving <em>Beta</em> within the next month (has been in it for plenty of time). 2.0 improves upon 1.0 series and attempts to give you best-of-breed WSGI stack, so it makes some default choices for you, if you want the least fuss.</p>
<p>It has the <code>tg*</code> tools for testing and deployment in the 1.x series, now transformed into <code>paster</code> equivalents in the 2.0 series, which should seem familiar if you've experimented with <code>pylons</code>.</p>
<pre>
tg-admin quickstart -> paster quickstart
tg-admin info       -> paster tginfo
tg-admin toolbox    -> paster toolbox
tg-admin shell      -> paster shell
tg-admin sql create -> paster setup-app development.ini
</pre>
<h2>Pylons</h2>
<p>If you'd like to be more flexible in your WSGI stack (choice of ORM, choice of templater, choice of form library), Pylons is becoming the consolidated choice. This would be <em>my recommended choice</em>, since it offers excellent documentation and allows you to experiment with different components. </p>
<p>It is a pleasure to work with as a result, and works under Apache (production deployment) or stand-alone (helpful for the testing and experimenting stage).</p>
<p>so it follows, you can do both with Pylons:</p>
<ul>
<li>#2 option for testing stage (<code>python</code> standalone) </li>
<li>#4 for scalable production purposes (<code>FastCGI</code>, assuming the database you choose can keep up)</li>
</ul>
<p>The Pylons admin interface is very similar to TurboGears. Here's a toy <em>standalone</em> example:</p>
<pre>
$ paster create -t pylons helloworld
$ cd helloworld
$ paster serve --reload development.ini</pre>
<p>For production-class deployment, a setup guide for <code>Apache + FastCGI + mod_rewrite</code> is available <a href="http://wiki.pylonshq.com/display/pylonscookbook/Production%2BDeployment%2BUsing%2BApache,%2BFastCGI%2Band%2Bmod%5Frewrite" rel="nofollow">here</a>; this will scale up to most needs.</p>
| 4
|
2009-03-10T07:08:23Z
|
[
"python",
"deployment",
"wsgi"
] |
How do YOU deploy your WSGI application? (and why it is the best way)
| 574,068
|
<p>Deploying a WSGI application. There are many ways to skin this cat. I am currently using apache2 with mod-wsgi, but I can see some potential problems with this.</p>
<p>So how can it be done?</p>
<ol>
<li>Apache Mod-wsgi (the other mod-wsgi's seem to not be worth it)</li>
<li>Pure Python web server eg paste, cherrypy, Spawning, Twisted.web</li>
<li>as 2 but with reverse proxy from nginx, apache2 etc, with good static file handling</li>
<li>Conversion to other protocol such as FCGI with a bridge (eg Flup) and running in a conventional web server.</li>
</ol>
<p>More?</p>
<p>I want to know how you do it, and why it is the best way to do it. I would absolutely <strong>love</strong> you to bore me with details about the whats and the whys, application specific stuff, etc. I will upvote any non-insane answer.</p>
| 42
|
2009-02-22T00:58:37Z
| 635,680
|
<p>Nginx reverse proxy and static file sharing + XSendfile + uploadprogress_module. Nothing beats it for the purpose.</p>
<p>On the WSGI side either Apache + mod_wsgi or cherrypy server. I like to use cherrypy wsgi server for applications on servers with less memory and less requests.</p>
<p>Reasoning:</p>
<p>I've done benchmarks with different tools for different popular solutions.</p>
<p>I have more experience with lower level TCP/IP than web development, especially http implementations. I'm more confident that I can recognize a good http server than I can recognize a good web framework.</p>
<p>I know Twisted much more than Django or Pylons. The http stack in Twisted is still not up to this but it will be there.</p>
| 6
|
2009-03-11T18:06:50Z
|
[
"python",
"deployment",
"wsgi"
] |
One view ( frontpage ) for many controllers (sub views)
| 574,140
|
<p><em>Notes:</em> Cannot use Javascript or iframes. In fact I can't trust the client browser to do just about anything but the ultra basics.</p>
<p>I'm rebuilding a legacy PHP4 app as a MVC application, with most of my research currently focused with the Pylon's framework.</p>
<p>One of the first weird issues I've run into and one I've solved in the past by using iframes or better yet javascript is displaying a dynamic collection of "widgets" that are like digest views of a typical controller's index view.</p>
<p>Best way to visualize my problem would be to look at Google's personalized homepage. They solve the problem with Javascript, but for my scenario javascript and pretty much anything above basic XHTML is not possible.</p>
<p>One idea I started working on was to have my Frontpage controller poll a database or other service for the currently activated widgets, then taking a list of tuples/dicts, dynamically instantiate each controller and build a list/dict of render sub-views and pass that to the frontpage view and let it figure things out.</p>
<p>So with pseudocode:</p>
<pre><code>Get request goes to WSGI
WSGI calls pylons
Pylons routes to Frontpage.index()
Frontpage.index()
myViews = list()
    for WidgetController in ActiveWidgets():
myViews.append(subRender(WidgetController, widgetView))
c.subviews = myViews
render(frontpage.mako)
</code></pre>
<p>Weird bits about subRender</p>
<ul>
<li>Dynamically imports controllers via <code>__import__</code> (currently hardcoded to project's namespace :( )</li>
<li>Has a potential to be very expensive (most widget calls can be cached, but one is a user panel)</li>
</ul>
<p>I feel like there has to be a better way or perhaps a mechanism already implemented in WSGI or better yet Pylons to do this, but so far the closest I've found is this utility method: <a href="http://www.pylonshq.com/docs/en/0.9.7/modules/controllers_util/#pylons.controllers.util.forward" rel="nofollow">http://www.pylonshq.com/docs/en/0.9.7/modules/controllers_util/#pylons.controllers.util.forward</a> but it seems a little crazy to build <code>N</code> instances of pylons on top of pylons just to get a collection views.</p>
| 1
|
2009-02-22T01:42:10Z
| 649,506
|
<p>You could use <a href="http://toscawidgets.org/" rel="nofollow">ToscaWidgets</a> to encapsulate your widgets, along with a stored list of the widgets enabled for each user (in database or other service, as you suggest). Pass a list of the enabled ToscaWidgets to the view and the widgets will render themselves (including dynamically adding CSS/JavaScript references to the page if widget requires those resources).</p>
| 0
|
2009-03-16T07:03:57Z
|
[
"python",
"model-view-controller",
"pylons",
"cherrypy"
] |
One view ( frontpage ) for many controllers (sub views)
| 574,140
|
<p><em>Notes:</em> Cannot use Javascript or iframes. In fact I can't trust the client browser to do just about anything but the ultra basics.</p>
<p>I'm rebuilding a legacy PHP4 app as a MVC application, with most of my research currently focused with the Pylon's framework.</p>
<p>One of the first weird issues I've run into and one I've solved in the past by using iframes or better yet javascript is displaying a dynamic collection of "widgets" that are like digest views of a typical controller's index view.</p>
<p>Best way to visualize my problem would be to look at Google's personalized homepage. They solve the problem with Javascript, but for my scenario javascript and pretty much anything above basic XHTML is not possible.</p>
<p>One idea I started working on was to have my Frontpage controller poll a database or other service for the currently activated widgets, then taking a list of tuples/dicts, dynamically instantiate each controller and build a list/dict of render sub-views and pass that to the frontpage view and let it figure things out.</p>
<p>So with pseudocode:</p>
<pre><code>Get request goes to WSGI
WSGI calls pylons
Pylons routes to Frontpage.index()
Frontpage.index()
myViews = list()
    for WidgetController in ActiveWidgets():
myViews.append(subRender(WidgetController, widgetView))
c.subviews = myViews
render(frontpage.mako)
</code></pre>
<p>Weird bits about subRender</p>
<ul>
<li>Dynamically imports controllers via <code>__import__</code> (currently hardcoded to project's namespace :( )</li>
<li>Has a potential to be very expensive (most widget calls can be cached, but one is a user panel)</li>
</ul>
<p>I feel like there has to be a better way or perhaps a mechanism already implemented in WSGI or better yet Pylons to do this, but so far the closest I've found is this utility method: <a href="http://www.pylonshq.com/docs/en/0.9.7/modules/controllers_util/#pylons.controllers.util.forward" rel="nofollow">http://www.pylonshq.com/docs/en/0.9.7/modules/controllers_util/#pylons.controllers.util.forward</a> but it seems a little crazy to build <code>N</code> instances of pylons on top of pylons just to get a collection views.</p>
| 1
|
2009-02-22T01:42:10Z
| 784,476
|
<p>While in most cases I'd recommend what you originally stated, using Javascript to load each widget, since that isn't an option I think you'll need to do something a little different.</p>
<p>In addition to using the approach of trying to have a single front controller go through all the widgets needed and building them, an alternative you might want to consider is making more powerful use of the templating in Mako.</p>
<p>You can actually define small blocks as Mako def's, which of course have full Python power. To avoid polluting your Mako templates with domain logic, make sure to keep that all in your models, and just make calls to the model instances in the Mako def's as needed for that component of the page to build itself.</p>
<p>A huge advantage of this approach is that since Mako def's support cache args, you can actually have components of the page decide how to cache themselves. Maybe the sidebar should be cached for 5 mins, but the top bar changes every hit for example. Also, since the component is triggering the db hit, you'll save db hits when the component caches itself.</p>
<p>ToscaWidgets doesn't have the performance to make it a very feasible option on a larger scale, so I'd stay away from trying that out.</p>
<p>As for some tweaks to your existing idea, make sure not to use actual Pylons controllers for 'widgets', as they carry much more machinery (needed to support WSGI) than you need for building a page up of widgets.</p>
<p>I'd consider having all Widget classes work like so:</p>
<pre><code>class Widget(object):
def process(self):
# Determine if this widget should process a POST aimed at it
# ie, one of the POST args is a widget id indicating the widget
# to handle the POST
def prepare(self):
# Load data from the database if needed in prep for the render
def render(self):
# return the rendered content
def __call__(self):
self.process()
self.prepare()
return self.render()
</code></pre>
<p>Then just have your main Mako template iterate through the widget instances, and call them to render them out.</p>
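<p>To make that pattern concrete, here is a minimal toy subclass plus the controller-side assembly (the class and names below are invented for illustration; a real widget would query the model layer in prepare()):</p>

```python
class Widget(object):
    """Base class following the pattern above."""
    def process(self):
        pass  # handle a POST aimed at this widget, if any
    def prepare(self):
        pass  # load data from the model layer
    def render(self):
        raise NotImplementedError
    def __call__(self):
        self.process()
        self.prepare()
        return self.render()

class HelloWidget(Widget):
    """Toy read-only widget; a real one would hit the database in prepare()."""
    def prepare(self):
        self.greeting = 'Hello'
    def render(self):
        return '<div class="widget">%s</div>' % self.greeting

# In the controller: instantiate the enabled widgets and hand either the
# instances or the rendered fragments to the Mako template.
enabled = [HelloWidget()]
fragments = [w() for w in enabled]
```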
| 6
|
2009-04-24T04:08:31Z
|
[
"python",
"model-view-controller",
"pylons",
"cherrypy"
] |
How to determine number of files on a drive with Python?
| 574,236
|
<p>I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.</p>
<p>I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).</p>
<p>Any ideas?</p>
<p><strong>Edit:</strong> Let me be a bit more specific. =]</p>
<p>I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like <code>rsync -ax --progress</code>, or with the <code>-P</code> option) as it builds its initial file list, and report a percentage and/or ETA back to the user.</p>
<p>This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.</p>
<p>I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.</p>
<pre><code>>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
</code></pre>
<p>The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:</p>
<pre><code>>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
</code></pre>
<p>Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.</p>
<p>Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?</p>
| 5
|
2009-02-22T03:12:46Z
| 574,270
|
<p>The right answer for your purpose is to live without a progress bar once, store the number rsync came up with and assume you have the same number of files as last time for each successive backup.</p>
<p>I didn't believe it, but this seems to work on Linux:</p>
<pre><code>os.statvfs('/').f_files - os.statvfs('/').f_ffree
</code></pre>
<p>This computes the total number of file blocks minus the free file blocks. It seems to show results for the whole filesystem even if you point it at another directory. os.statvfs is implemented on Unix only.</p>
<p>OK, I admit, I didn't actually let the 'slow, correct' way finish before marveling at the fast method. Just a few drawbacks: I suspect <code>.f_files</code> would also count directories, and the result is probably totally wrong. It might work to count the files the slow way, once, and adjust the result from the 'fast' way?</p>
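<p>That adjust-the-fast-number idea can be sketched like so (Unix-only, since os.statvfs is, and the helper names are mine): run the slow walk once, compute the ratio of real files to used inodes, then multiply the instant figure by that ratio on later runs:</p>

```python
import os

def fast_inode_count(path='/'):
    """Instant but crude: used inodes on the filesystem containing path
    (counts directories and other inodes too, hence the inflated figure)."""
    st = os.statvfs(path)
    return st.f_files - st.f_ffree

def slow_file_count(path):
    """Accurate but slow: walk the tree and count plain files."""
    return sum(len(filenames) for _, _, filenames in os.walk(path))

def calibrated_estimate(path, ratio):
    """Scale the instant figure by a ratio measured once with the slow walk."""
    return int(fast_inode_count(path) * ratio)
```

<p>For example, <code>ratio = slow_file_count('/') / float(fast_inode_count('/'))</code> computed once, cached somewhere, and reused on subsequent runs.</p>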
<p>The portable way:</p>
<pre><code>import os
files = sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
</code></pre>
<p><code>os.walk</code> returns a 3-tuple (dirpath, dirnames, filenames) for each directory in the filesystem starting at the given path. This will probably take a long time for <code>"/"</code>, but you knew that already.</p>
<p>The easy way:</p>
<p>Let's face it, nobody knows or cares how many files they really have, it's a humdrum and nugatory statistic. You can add this cool 'number of files' feature to your program with this code:</p>
<pre><code>import random
num_files = random.randint(69000, 4000000)
</code></pre>
<p>Let us know if any of these methods works for you.</p>
<p>See also <a href="http://stackoverflow.com/questions/577761/how-do-i-prevent-pythons-os-walk-from-walking-across-mount-points">http://stackoverflow.com/questions/577761/how-do-i-prevent-pythons-os-walk-from-walking-across-mount-points</a></p>
| 7
|
2009-02-22T03:37:47Z
|
[
"python",
"osx",
"filesystems",
"hard-drive"
] |
How to determine number of files on a drive with Python?
| 574,236
|
<p>I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.</p>
<p>I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).</p>
<p>Any ideas?</p>
<p><strong>Edit:</strong> Let me be a bit more specific. =]</p>
<p>I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like <code>rsync -ax --progress</code>, or with the <code>-P</code> option) as it builds its initial file list, and report a percentage and/or ETA back to the user.</p>
<p>This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.</p>
<p>I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.</p>
<pre><code>>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
</code></pre>
<p>The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:</p>
<pre><code>>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
</code></pre>
<p>Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.</p>
<p>Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?</p>
| 5
|
2009-02-22T03:12:46Z
| 574,278
|
<p>Edit: Spotlight does not track every file, so its metadata will not suffice.</p>
| 0
|
2009-02-22T03:42:55Z
|
[
"python",
"osx",
"filesystems",
"hard-drive"
] |
How to determine number of files on a drive with Python?
| 574,236
|
<p>I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.</p>
<p>I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).</p>
<p>Any ideas?</p>
<p><strong>Edit:</strong> Let me be a bit more specific. =]</p>
<p>I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like <code>rsync -ax --progress</code>, or with the <code>-P</code> option) as it builds its initial file list, and report a percentage and/or ETA back to the user.</p>
<p>This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.</p>
<p>I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.</p>
<pre><code>>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
</code></pre>
<p>The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:</p>
<pre><code>>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
</code></pre>
<p>Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.</p>
<p>Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?</p>
| 5
|
2009-02-22T03:12:46Z
| 577,322
|
<p>If traversing the directory tree is an option (would be slower than querying the drive directly):</p>
<pre><code>import os
dirs = 0
files = 0
for r, d, f in os.walk('/path/to/drive'):
dirs += len(d)
files += len(f)
</code></pre>
| 1
|
2009-02-23T11:23:55Z
|
[
"python",
"osx",
"filesystems",
"hard-drive"
] |
How to determine number of files on a drive with Python?
| 574,236
|
<p>I have been trying to figure out how to retrieve (quickly) the number of files on a given HFS+ drive with python.</p>
<p>I have been playing with os.statvfs and such, but can't quite get anything (that seems helpful to me).</p>
<p>Any ideas?</p>
<p><strong>Edit:</strong> Let me be a bit more specific. =]</p>
<p>I am writing a timemachine-like wrapper around rsync for various reasons, and would like a very fast estimate (does not have to be perfect) of the number of files on the drive rsync is going to scan. This way I can watch the progress from rsync (if you call it like <code>rsync -ax --progress</code>, or with the <code>-P</code> option) as it builds its initial file list, and report a percentage and/or ETA back to the user.</p>
<p>This is completely separate from the actual backup, which is no problem tracking progress. But with the drives I am working on with several million files, it means the user is watching a counter of the number of files go up with no upper bound for a few minutes.</p>
<p>I have tried playing with os.statvfs with exactly the method described in one of the answers so far, but the results do not make sense to me.</p>
<pre><code>>>> import os
>>> os.statvfs('/').f_files - os.statvfs('/').f_ffree
64171205L
</code></pre>
<p>The more portable way gives me around 1.1 million on this machine, which is the same as every other indicator I have seen on this machine, including rsync running its preparations:</p>
<pre><code>>>> sum(len(filenames) for path, dirnames, filenames in os.walk("/"))
1084224
</code></pre>
<p>Note that the first method is instantaneous, while the second one made me come back 15 minutes later to update because it took just that long to run.</p>
<p>Does anyone know of a similar way to get this number, or what is wrong with how I am treating/interpreting the os.statvfs numbers?</p>
| 5
|
2009-02-22T03:12:46Z
| 585,629
|
<p>You could use a number from a previous <code>rsync</code> run. It is quick, portable, and for <code>10**6</code> files and any reasonable backup strategy it will give you <code>1%</code> or better precision.</p>
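<p>A minimal sketch of that bookkeeping (the helper names and cache-file format are invented for illustration):</p>

```python
import os

def load_last_count(cache_path, default=1000000):
    """File count recorded by the previous run, or a default guess."""
    try:
        with open(cache_path) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        return default

def save_count(cache_path, count):
    """Record this run's count for the next run's progress estimate."""
    with open(cache_path, 'w') as f:
        f.write(str(count))
```

<p>Feed load_last_count() to the progress bar as the expected total, and call save_count() with the figure rsync actually reported once the run finishes.</p>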
| 2
|
2009-02-25T11:20:23Z
|
[
"python",
"osx",
"filesystems",
"hard-drive"
] |
Has anyone used SciPy with IronPython?
| 574,604
|
<p>I've been able to use the standard Python modules from IronPython, but I haven't gotten SciPy to work yet. Has anyone been able to use SciPy from IronPython? What did you have to do to make it work?</p>
<p>Update: See <a href="http://www.johndcook.com/blog/2009/03/19/ironclad-ironpytho/">Numerical computing in IronPython with Ironclad</a></p>
<p>Update: Microsoft is <a href="http://www.johndcook.com/blog/2010/07/01/scipy-and-numpy-for-net/">partnering with Enthought</a> to make SciPy for .NET.</p>
| 16
|
2009-02-22T08:57:53Z
| 574,623
|
<p>Anything with components written in C (for example NumPy, which is a component of SciPy) will not work on IronPython as the external language interface works differently. Any C language component will probably not work unless it has been explicitly ported to work with IronPython.</p>
<p>You might have to dig into the individual modules and check to see which ones work or are pure python and find out which if any of the C-based ones have been ported yet.</p>
| 8
|
2009-02-22T09:13:41Z
|
[
"python",
"scipy",
"ironpython",
"python.net"
] |
Has anyone used SciPy with IronPython?
| 574,604
|
<p>I've been able to use the standard Python modules from IronPython, but I haven't gotten SciPy to work yet. Has anyone been able to use SciPy from IronPython? What did you have to do to make it work?</p>
<p>Update: See <a href="http://www.johndcook.com/blog/2009/03/19/ironclad-ironpytho/">Numerical computing in IronPython with Ironclad</a></p>
<p>Update: Microsoft is <a href="http://www.johndcook.com/blog/2010/07/01/scipy-and-numpy-for-net/">partnering with Enthought</a> to make SciPy for .NET.</p>
| 16
|
2009-02-22T08:57:53Z
| 574,919
|
<p>Some of my workmates are working on <a href="http://code.google.com/p/ironclad/" rel="nofollow">Ironclad</a>, a project that will make extension modules for CPython work in IronPython. It's still in development, but parts of numpy, scipy and some other modules already work. You should try it out to see whether the parts of scipy you need are supported. </p>
<p>It's an open-source project, so if you're interested you could even help. In any case, some feedback about what you're trying to do and what parts we should look at next is helpful too.</p>
| 12
|
2009-02-22T13:21:58Z
|
[
"python",
"scipy",
"ironpython",
"python.net"
] |
Include html part in a mail with python libgmail
| 574,861
|
<p>I've a question about its usage: i need to send an html formatted mail. I prepare my message with </p>
<pre><code>ga = libgmail.GmailAccount(USERNAME,PASSWORD)
msg = MIMEMultipart('alternative')
msg.attach(part1)
msg.attach(part2)
...
ga.sendMessage(msg.as_string())
</code></pre>
<p>This way doesn't work; it seems it can't send <code>msg</code> with the sendMessage method.
What is the right way? :D</p>
| 2
|
2009-02-22T12:41:45Z
| 574,938
|
<p>If you refer to <code>libgmail</code> from sourceforge, you need to compose your messages with the <a href="http://docs.python.org/library/email.html#module-email" rel="nofollow">email module</a>.</p>
<p>Generate the HTML message as a <a href="http://docs.python.org/library/email.generator.html" rel="nofollow">MIME document</a>, and include it as a part of a <a href="http://docs.python.org/library/email.mime.html#email.mime.multipart.MIMEMultipart" rel="nofollow">multipart MIME message</a>. When you have a fully constructed multipart MIME message, pass it along as a string to <code>libgmail</code>, using the <code>.as_string()</code> method.</p>
<p>An <a href="http://docs.python.org/library/email-examples.html#id4" rel="nofollow">example in the doc</a> contains the code for a similar requirement:</p>
<pre><code># Create message container - the correct MIME type is multipart/alternative.
msg = MIMEMultipart('alternative')
msg['Subject'] = "Link"
msg['From'] = me
msg['To'] = you
...
# Record the MIME types of both parts - text/plain and text/html.
# ... text and html are strings with appropriate content.
part1 = MIMEText(text, 'plain')
part2 = MIMEText(html, 'html')
# Attach parts into message container.
# According to RFC 2046, the last part of a multipart message, in this case
# the HTML message, is best and preferred.
msg.attach(part1)
msg.attach(part2)
</code></pre>
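<p>Putting those pieces together as one self-contained helper (the function name is mine, and the libgmail send call itself is deliberately omitted, since only the email-module assembly is being illustrated here):</p>

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_html_mail(sender, recipient, subject, text, html):
    """Assemble a multipart/alternative message and return it as a string,
    ready to hand to libgmail's send call."""
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = recipient
    msg.attach(MIMEText(text, 'plain'))
    msg.attach(MIMEText(html, 'html'))  # last part is preferred (RFC 2046)
    return msg.as_string()
```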
| 1
|
2009-02-22T13:40:48Z
|
[
"python",
"html",
"gmail",
"mime",
"libgmail"
] |
How can you extract all 6 letter Latin words to a list?
| 574,952
|
<p>I need to have all 6 letter <a href="http://www.math.ubc.ca/~cass/frivs/latin/latin-dict-full.html" rel="nofollow">Latin words</a> in a list. </p>
<p>I would also like to have words which follow the pattern Xyzzyx in a list.</p>
<p>I have used little Python.</p>
| 1
|
2009-02-22T13:51:36Z
| 574,974
|
<p>Regular expressions are your friend, my friend! Is this homework?</p>
<p>Here's an example that's <em>close</em> to what you want:</p>
<pre><code>egrep "^\w{6}$" /usr/share/dict/words | egrep "(.)(.)(.)\3\2\1"
</code></pre>
<p>I'll leave it as an exercise for the reader to create a latin word list and deal with the uppercase X in the second regex, but the general idea should be evident.</p>
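<p>Since the question mentions Python, the same two filters can be written with the re module, handling the uppercase X by matching case-insensitively:</p>

```python
import re

SIX = re.compile(r'^[A-Za-z]{6}$')
MIRROR = re.compile(r'^(.)(.)(.)\3\2\1$', re.IGNORECASE)

def six_letter_words(words):
    """Keep the 6-letter (ASCII) words from an iterable of word strings."""
    return [w for w in words if SIX.match(w)]

def mirror_words(words):
    """Keep words whose last three letters mirror the first three,
    ignoring case: the Xyzzyx pattern."""
    return [w for w in words if MIRROR.match(w)]
```

<p>Note that Python backreferences honour re.IGNORECASE, which is what lets 'Xyzzyx' match.</p>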
| 5
|
2009-02-22T14:10:50Z
|
[
"python",
"regex",
"data-mining"
] |
How can you extract all 6 letter Latin words to a list?
| 574,952
|
<p>I need to have all 6 letter <a href="http://www.math.ubc.ca/~cass/frivs/latin/latin-dict-full.html" rel="nofollow">Latin words</a> in a list. </p>
<p>I would also like to have words which follow the pattern Xyzzyx in a list.</p>
<p>I have used little Python.</p>
| 1
|
2009-02-22T13:51:36Z
| 575,704
|
<p>Note that unless your list contains all of the nouns' declensions and verbs' conjugations, your program won't produce anything like <em>all</em> the six-letter words in Latin.</p>
<p>For instance, your list probably contains only the nominative case of the nouns. First-declension nouns whose nominative case is five letters long (e.g. <em>mensa</em>) have a six-letter genitive case (e.g. <em>mensae</em>). All of the declensions contain cases where the noun's length is different from its nominative case.</p>
<p>The same's even more true of verbs, each of which has (at least) four principal parts, which can be of varying length, and whose conjugations can be of varying lengths as well. So the first-person singular present tense of <em>lego</em> is four letters long, but its infinitive <em>legere</em> is six; <em>porto</em> is five in the first-person singular but six in the second-person singular, <em>portas</em>.</p>
<p>I suppose it's possible in principle to build an engine that programmatically declines and conjugates Latin words given enough metainformation about each word. Python would actually be a pretty good language to do that in. But that's a much bigger task than just writing a regular expression.</p>
| 0
|
2009-02-22T20:20:48Z
|
[
"python",
"regex",
"data-mining"
] |
Why doesn't Python release file handles after calling file.close()?
| 575,081
|
<p>I am on windows with Python 2.5. I have an open file for writing. I write some data. Call file close. When I try to delete the file from the folder using Windows Explorer, it errors, saying that a process still holds a handle to the file.</p>
<p>If I shutdown python, and try again, it succeeds.</p>
| 5
|
2009-02-22T15:32:53Z
| 575,086
|
<p>It does close them.
Are you sure f.close() is getting called?
I just tested the same scenario and Windows deletes the file for me.</p>
| 4
|
2009-02-22T15:38:49Z
|
[
"python"
] |
Why doesn't Python release file handles after calling file.close()?
| 575,081
|
<p>I am on windows with Python 2.5. I have an open file for writing. I write some data. Call file close. When I try to delete the file from the folder using Windows Explorer, it errors, saying that a process still holds a handle to the file.</p>
<p>If I shutdown python, and try again, it succeeds.</p>
| 5
|
2009-02-22T15:32:53Z
| 575,296
|
<p>Are you handling any exceptions around the file object? If so, make sure the error handling looks something like this:</p>
<pre><code>f = open("hello.txt")
try:
for line in f:
print line
finally:
f.close()
</code></pre>
<p>In considering why you should do this, consider the following lines of code:</p>
<pre><code>f = open('hello.txt')
try:
perform_an_operation_that_causes_f_to_raise_an_exception()
f.close()
except IOError:
pass
</code></pre>
<p>As you can see, f.close will never be called in the above code. The problem is that the above code will also cause f to not get garbage collected. The reason is that f will still be referenced from the saved exception traceback (<code>sys.exc_traceback</code> in Python 2, or <code>sys.last_traceback</code> at the interactive prompt), in which case the only solution is to manually call close on f in a finally block or clear that saved traceback (and I strongly recommend the former).</p>
| 2
|
2009-02-22T17:35:45Z
|
[
"python"
] |
Why doesn't Python release file handles after calling file.close()?
| 575,081
|
<p>I am on windows with Python 2.5. I have an open file for writing. I write some data. Call file close. When I try to delete the file from the folder using Windows Explorer, it errors, saying that a process still holds a handle to the file.</p>
<p>If I shutdown python, and try again, it succeeds.</p>
| 5
|
2009-02-22T15:32:53Z
| 8,688,682
|
<p>Explained in the tutorial:</p>
<pre><code>with open('/tmp/workfile', 'r') as f:
read_data = f.read()
</code></pre>
<p>It works when you're writing or pickling/unpickling, too.</p>
<p>The try/finally block isn't really necessary: that's the Java way of doing things, not the Python way.</p>
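To illustrate the guarantee, here is a minimal, self-contained sketch (the scratch file path is my own, not from the question): the file is closed even when an exception is raised inside the block, so the OS can delete it afterwards.

```python
import os
import tempfile

# Create a scratch file so the example stands alone.
path = os.path.join(tempfile.mkdtemp(), "workfile.txt")
with open(path, "w") as f:
    f.write("hello")

# Even if the body raises, "with" closes the file on the way out,
# so Windows Explorer (or os.remove) can delete it afterwards.
try:
    with open(path) as f:
        data = f.read()
        raise ValueError("simulated failure while the file is open")
except ValueError:
    pass

print(f.closed)   # True: the handle was released despite the exception
os.remove(path)   # succeeds because no handle is held
```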
| 3
|
2011-12-31T14:38:44Z
|
[
"python"
] |
Why doesn't Python release file handles after calling file.close()?
| 575,081
|
<p>I am on windows with Python 2.5. I have an open file for writing. I write some data. Call file close. When I try to delete the file from the folder using Windows Explorer, it errors, saying that a process still holds a handle to the file.</p>
<p>If I shutdown python, and try again, it succeeds.</p>
| 5
|
2009-02-22T15:32:53Z
| 30,957,893
|
<p>I was looking for this, because the same thing happened to me. The question didn't help me, but I think I figured out what happened.</p>
<p>In the original version of the script I wrote, I neglected to add in a 'finally' clause to the file in case of an exception.</p>
<p>I was testing the script from the interactive prompt and got an exception while the file was open. What I didn't realize was that the file object wasn't immediately garbage-collected. After that, when I ran the script (still from the same interactive session), even though the <em>new</em> file objects were being closed, the first one still hadn't been, and so the file handle was still in use, from the perspective of the operating system.</p>
<p>Once I closed the interactive prompt, the problem went away, at which point I remembered that exception occurring while the file was open and realized what had been going on. (Moral: don't try to program on insufficient sleep. : ) )</p>
<p>Naturally, I have no idea if this is what happened in the case of the original poster, and even if the original poster is still around, they may not remember the specific circumstances, but the symptoms are similar, so I thought I'd add this as something to check for, for anyone caught in the same situation and looking for an answer.</p>
| 0
|
2015-06-20T19:13:34Z
|
[
"python"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 575,124
|
<p>Yes, there is.</p>
<p>Okay, now you'll want to know how: just constrain the loop so that z > y. Try</p>
<pre><code>for z in range (y+1, 1000)
</code></pre>
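To see the effect, here is the question's loop with only that one change (a sketch; the bound is lowered to 21 to keep the output short):

```python
# With z starting above y, each unordered pair of legs is seen once,
# so no triple is printed twice.
triples = [(y, z, x)
           for x in range(1, 21)
           for y in range(1, 21)
           for z in range(y + 1, 21)   # z > y removes the mirrored duplicate
           if x * x == y * y + z * z]
print(triples)
```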
| 1
|
2009-02-22T16:05:11Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 575,134
|
<p>You should define x < y < z.</p>
<pre><code>for x in range (1, 1000):
for y in range (x + 1, 1000):
for z in range(y + 1, 1000):
</code></pre>
<p>Another good optimization would be to only use x and y and calculate zsqr = x * x + y * y. If zsqr is a square number (or z = sqrt(zsqr) is a whole number), it is a triplet, else not. That way, you need only two loops instead of three (for your example, that's about 1000 times faster).</p>
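A sketch of that two-loop idea in modern Python (the function name is mine):

```python
import math

def two_loop_triplets(n):
    """Loop over the legs only; accept (x, y, z) when z comes out whole."""
    result = []
    for x in range(1, n):
        for y in range(x + 1, n):
            zsqr = x * x + y * y
            z = math.isqrt(zsqr)          # integer square root (Python 3.8+)
            if z * z == zsqr:             # zsqr is a perfect square
                result.append((x, y, z))
    return result

print(two_loop_triplets(20))
```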
| 12
|
2009-02-22T16:09:18Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 575,199
|
<p>I wrote that program in Ruby, and it is similar to the Python implementation. The important line is:</p>
<pre><code>if x*x == y*y + z*z && gcd(y,z) == 1:
</code></pre>
<p>Then you have to implement a method that return the greatest common divisor (gcd) of two given numbers. A very simple example in Ruby again:</p>
<pre><code>def gcd(a, b)
while b != 0
t = b
b = a%b
a = t
end
return a
end
</code></pre>
<p>The full Ruby method to find the triplets would be:</p>
<pre><code>def find_triple(upper_boundary)
(5..upper_boundary).each {|c|
(4..c-1).each {|b|
(3..b-1).each {|a|
if (a*a + b*b == c*c && gcd(a,b) == 1)
puts "#{a} \t #{b} \t #{c}"
end
}
}
}
end
</code></pre>
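For comparison, here is the same filter in Python, using the standard library's math.gcd instead of a hand-rolled one (my transcription, not the answerer's code):

```python
from math import gcd  # built-in Euclidean algorithm

def find_primitive_triples(upper_boundary):
    """Yield only primitive triples, mirroring the Ruby find_triple above."""
    for c in range(5, upper_boundary + 1):
        for b in range(4, c):
            for a in range(3, b):
                if a * a + b * b == c * c and gcd(a, b) == 1:
                    yield a, b, c

print(list(find_primitive_triples(20)))
```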
| 2
|
2009-02-22T16:44:40Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 575,728
|
<pre><code>def pyth_triplets(n=1000):
"Version 1"
for x in xrange(1, n):
x2= x*x # time saver
for y in xrange(x+1, n): # y > x
z2= x2 + y*y
zs= int(z2**.5)
if zs*zs == z2:
yield x, y, zs
>>> print list(pyth_triplets(20))
[(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 16, 20)]
</code></pre>
<p>V.1 algorithm has monotonically increasing <code>x</code> values.</p>
<h3>EDIT</h3>
<p>It seems this question is still alive :)<br>
Since I came back and revisited the code, I tried a second approach which is almost 4 times as fast (about 26% of CPU time for N=10000) as my previous suggestion since it avoids lots of unnecessary calculations:</p>
<pre><code>def pyth_triplets(n=1000):
"Version 2"
for z in xrange(5, n+1):
z2= z*z # time saver
x= x2= 1
y= z - 1; y2= y*y
while x < y:
x2_y2= x2 + y2
if x2_y2 == z2:
yield x, y, z
x+= 1; x2= x*x
y-= 1; y2= y*y
elif x2_y2 < z2:
x+= 1; x2= x*x
else:
y-= 1; y2= y*y
>>> print list(pyth_triplets(20))
[(3, 4, 5), (6, 8, 10), (5, 12, 13), (9, 12, 15), (8, 15, 17), (12, 16, 20)]
</code></pre>
<p>Note that this algorithm has increasing <code>z</code> values.</p>
<p>If the algorithm were converted to C (where, being closer to the metal, multiplications take more time than additions) one could minimize the necessary multiplications, given the fact that the step between consecutive squares is:</p>
<blockquote>
<p>(x+1)² - x² = (x+1)(x+1) - x² = x² + 2x + 1 - x² = 2x + 1</p>
</blockquote>
<p>so all of the inner <code>x2= x*x</code> and <code>y2= y*y</code> would be converted to additions and subtractions like this:</p>
<pre><code>def pyth_triplets(n=1000):
"Version 3"
for z in xrange(5, n+1):
z2= z*z # time saver
x= x2= 1; xstep= 3
y= z - 1; y2= y*y; ystep= 2*y - 1
while x < y:
x2_y2= x2 + y2
if x2_y2 == z2:
yield x, y, z
x+= 1; x2+= xstep; xstep+= 2
y-= 1; y2-= ystep; ystep-= 2
elif x2_y2 < z2:
x+= 1; x2+= xstep; xstep+= 2
else:
y-= 1; y2-= ystep; ystep-= 2
</code></pre>
<p>Of course, in Python the extra bytecode produced actually <em>slows down</em> the algorithm compared to version 2, but I would bet (without checking :) that V.3 is faster in C.</p>
<p>Cheers everyone :)</p>
| 4
|
2009-02-22T20:32:06Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 575,849
|
<p>Algorithms can be tuned for speed, memory usage, simplicity, and other things.</p>
<p>Here is a <code>pythagore_triplets</code> algorithm tuned for speed, at the cost of memory usage and simplicity. If all you want is speed, this could be the way to go.</p>
<p>Calculation of <code>list(pythagore_triplets(10000))</code> takes 40 seconds on my computer, versus 63 seconds for ΤΖΩΤΖΙΟΥ's algorithm, and possibly days of calculation for Tafkas's algorithm (and all other algorithms which use 3 embedded loops instead of just 2).</p>
<pre><code>def pythagore_triplets(n=1000):
maxn=int(n*(2**0.5))+1 # max int whose square may be the sum of two squares
squares=[x*x for x in xrange(maxn+1)] # calculate all the squares once
reverse_squares=dict([(squares[i],i) for i in xrange(maxn+1)]) # x*x=>x
for x in xrange(1,n):
x2 = squares[x]
for y in xrange(x,n+1):
y2 = squares[y]
z = reverse_squares.get(x2+y2)
if z != None:
yield x,y,z
>>> print list(pythagore_triplets(20))
[(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 16, 20)]
</code></pre>
<p>Note that if you are going to calculate the first billion triplets, then this algorithm will crash before it even starts, because of an out of memory error. So ΤΖΩΤΖΙΟΥ's algorithm is probably a safer choice for high values of n.</p>
<p>BTW, here is Tafkas's algorithm, translated into python for the purpose of my performance tests. Its flaw is to require 3 loops instead of 2.</p>
<pre><code>def gcd(a, b):
while b != 0:
t = b
b = a%b
a = t
return a
def find_triple(upper_boundary=1000):
for c in xrange(5,upper_boundary+1):
for b in xrange(4,c):
for a in xrange(3,b):
if (a*a + b*b == c*c and gcd(a,b) == 1):
yield a,b,c
</code></pre>
| 7
|
2009-02-22T21:31:53Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 576,405
|
<p>Pythagorean Triples make a good example for claiming "<strong><code>for</code> loops considered harmful</strong>", because <code>for</code> loops seduce us into thinking about counting, often the most irrelevant part of a task. </p>
<p>(I'm going to stick with pseudo-code to avoid language biases, and to keep the pseudo-code streamlined, I'll not optimize away multiple calculations of e.g. <code>x * x</code> and <code>y * y</code>.)</p>
<p><strong>Version 1</strong>:</p>
<pre><code>for x in 1..N {
for y in 1..N {
for z in 1..N {
if x * x + y * y == z * z then {
// use x, y, z
}
}
}
}
</code></pre>
<p>is the worst solution. It generates duplicates, and traverses parts of the space that aren't useful (e.g. whenever <code>z < y</code>). Its time complexity is cubic on <code>N</code>.</p>
<p><strong>Version 2</strong>, the first improvement, comes from requiring <code>x < y < z</code> to hold, as in:</p>
<pre><code>for x in 1..N {
for y in x+1..N {
for z in y+1..N {
if x * x + y * y == z * z then {
// use x, y, z
}
}
}
}
</code></pre>
<p>which reduces run time and eliminates duplicated solutions. However, it is still cubic on <code>N</code>; the improvement is just a reduction of the co-efficient of <code>N</code>-cubed.</p>
<p>It is pointless to continue examining increasing values of <code>z</code> after <code>z * z < x * x + y * y</code> no longer holds. That fact motivates <strong>Version 3</strong>, the first step away from brute-force iteration over <code>z</code>:</p>
<pre><code>for x in 1..N {
for y in x+1..N {
z = y + 1
while z * z < x * x + y * y {
z = z + 1
}
if z * z == x * x + y * y and z <= N then {
// use x, y, z
}
}
}
</code></pre>
<p>For <code>N</code> of 1000, this is about 5 times faster than Version 2, but it is <em>still</em> cubic on <code>N</code>.</p>
<p>The next insight is that <code>x</code> and <code>y</code> are the only independent variables; <code>z</code> depends on their values, and the last <code>z</code> value considered for the previous value of <code>y</code> is a good <em>starting</em> search value for the next value of <code>y</code>. That leads to <strong>Version 4</strong>:</p>
<pre><code>for x in 1..N {
y = x+1
z = y+1
while z <= N {
while z * z < x * x + y * y {
z = z + 1
}
if z * z == x * x + y * y and z <= N then {
// use x, y, z
}
y = y + 1
}
}
</code></pre>
<p>which allows <code>y</code> and <code>z</code> to "sweep" the values above <code>x</code> only once. Not only is it over 100 times faster for <code>N</code> of 1000, it is quadratic on <code>N</code>, so the speedup increases as <code>N</code> grows.</p>
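For the curious, Version 4 transcribes directly into Python (a sketch; the function name is mine):

```python
def triples_v4(n):
    """Sweep y and z upward together for each x, as in Version 4 above."""
    result = []
    for x in range(1, n + 1):
        y = x + 1
        z = y + 1
        while z <= n:
            target = x * x + y * y
            while z * z < target:     # z only ever moves forward
                z += 1
            if z * z == target and z <= n:
                result.append((x, y, z))
            y += 1
    return result

print(triples_v4(20))
```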
<p>I've encountered this kind of improvement often enough to be mistrustful of "counting loops" for any but the most trivial uses (e.g. traversing an array).</p>
<p><strong>Update:</strong> Apparently I should have pointed out a few things about V4 that are easy to overlook.</p>
<ol>
<li><p><strong>Both</strong> of the <code>while</code> loops are controlled by the value of <code>z</code> (one directly, the other indirectly through the square of <code>z</code>). The inner <code>while</code> is actually speeding up the outer <code>while</code>, rather than being orthogonal to it. <em>It's important to look at what the loops are doing, not merely to count how many loops there are.</em></p></li>
<li><p>All of the calculations in V4 are strictly integer arithmetic. Conversion to/from floating-point, as well as floating-point calculations, are costly by comparison.</p></li>
<li><p>V4 runs in constant memory, requiring only three integer variables. There are no arrays or hash tables to allocate and initialize (and, potentially, to cause an out-of-memory error).</p></li>
<li><p>The original question allowed all of <code>x</code>, <code>y</code>, and <code>z</code> to vary over the same range. V1..V4 followed that pattern.</p></li>
</ol>
<p>Below is a not-very-scientific set of timings (using Java under Eclipse on my older laptop with other stuff running...), where the "use x, y, z" was implemented by instantiating a Triple object with the three values and putting it in an ArrayList. (For these runs, <code>N</code> was set to 10,000, which produced 12,471 triples in each case.)</p>
<pre><code>Version 4: 46 sec.
using square root: 134 sec.
array and map: 400 sec.
</code></pre>
<p>The "array and map" algorithm is <em>essentially</em>:</p>
<pre><code>squares = array of i*i for i in 1 .. N
roots = map of i*i -> i for i in 1 .. N
for x in 1 .. N
for y in x+1 .. N
z = roots[squares[x] + squares[y]]
if z exists use x, y, z
</code></pre>
<p>The "using square root" algorithm is <em>essentially</em>:</p>
<pre><code>for x in 1 .. N
for y in x+1 .. N
z = (int) sqrt(x * x + y * y)
if z * z == x * x + y * y then use x, y, z
</code></pre>
<p>The actual code for V4 is:</p>
<pre><code>public Collection<Triple> byBetterWhileLoop() {
Collection<Triple> result = new ArrayList<Triple>(limit);
for (int x = 1; x < limit; ++x) {
int xx = x * x;
int y = x + 1;
int z = y + 1;
while (z <= limit) {
int zz = xx + y * y;
while (z * z < zz) {++z;}
if (z * z == zz && z <= limit) {
result.add(new Triple(x, y, z));
}
++y;
}
}
return result;
}
</code></pre>
<p>Note that <code>x * x</code> <em>is</em> calculated in the outer loop (although I didn't bother to cache <code>z * z</code>); similar optimizations are done in the other variations.</p>
<p>I'll be glad to provide the Java source code on request for the other variations I timed, in case I've mis-implemented anything.</p>
| 59
|
2009-02-23T03:00:32Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 577,110
|
<p>Version 5, building on Joel Neely's Version 4.</p>
<p>X can be at most N-2 and Y at most N-1 for the range 1..N. Moreover, since x < y < z <= N, we have 2 * x * x < x * x + y * y = z * z <= N * N, so X can be at most Sqrt( N * N / 2 ) and can start from 3.</p>
<pre><code>MaxX = ( N * N / 2 ) ** 0.5
for x in 3..MaxX {
y = x+1
z = y+1
m = x*x + y*y
k = z * z
while z <= N {
while k < m {
z = z + 1
k = k + (2*z) - 1
}
if k == m and z <= N then {
// use x, y, z
}
y = y + 1
m = m + (2 * y) - 1
}
}
</code></pre>
| 0
|
2009-02-23T10:12:40Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 578,789
|
<p>The previously listed algorithms for generating <a href="http://en.wikipedia.org/wiki/Pythagorean_triplets">Pythagorean triplets</a> are all modifications of the naive approach derived from the basic relationship <code>a^2 + b^2 = c^2</code> where <code>(a, b, c)</code> is a triplet of positive integers. It turns out that Pythagorean triplets satisfy some fairly remarkable relationships that can be used to generate all Pythagorean triplets.</p>
<p><a href="http://en.wikipedia.org/wiki/Euclid">Euclid</a> discovered the first such relationship. He determined that for every Pythagorean triple <code>(a, b, c)</code>, possibly after a reordering of <code>a</code> and <code>b</code> there are relatively prime positive integers <code>m</code> and <code>n</code> with <code>m > n</code>, at least one of which is even, and a positive integer <code>k</code> such that</p>
<pre><code>a = k (2mn)
b = k (m^2 - n^2)
c = k (m^2 + n^2)
</code></pre>
<p>Then to generate Pythagorean triplets, generate relatively prime positive integers <code>m</code> and <code>n</code> of differing parity, and a positive integer <code>k</code> and apply the above formula.</p>
<pre><code>struct PythagoreanTriple {
public int a { get; private set; }
public int b { get; private set; }
public int c { get; private set; }
public PythagoreanTriple(int a, int b, int c) : this() {
this.a = a < b ? a : b;
this.b = b < a ? a : b;
this.c = c;
}
public override string ToString() {
return String.Format("a = {0}, b = {1}, c = {2}", a, b, c);
}
public static IEnumerable<PythagoreanTriple> GenerateTriples(int max) {
var triples = new List<PythagoreanTriple>();
for (int m = 1; m <= max / 2; m++) {
for (int n = 1 + (m % 2); n < m; n += 2) {
if (m.IsRelativelyPrimeTo(n)) {
for (int k = 1; k <= max / (m * m + n * n); k++) {
triples.Add(EuclidTriple(m, n, k));
}
}
}
}
return triples;
}
private static PythagoreanTriple EuclidTriple(int m, int n, int k) {
int msquared = m * m;
int nsquared = n * n;
return new PythagoreanTriple(k * 2 * m * n, k * (msquared - nsquared), k * (msquared + nsquared));
}
}
public static class IntegerExtensions {
private static int GreatestCommonDivisor(int m, int n) {
return (n == 0 ? m : GreatestCommonDivisor(n, m % n));
}
public static bool IsRelativelyPrimeTo(this int m, int n) {
return GreatestCommonDivisor(m, n) == 1;
}
}
class Program {
static void Main(string[] args) {
PythagoreanTriple.GenerateTriples(1000).ToList().ForEach(t => Console.WriteLine(t));
}
}
</code></pre>
<p>The Wikipedia article on <a href="http://en.wikipedia.org/wiki/Formulas_for_generating_Pythagorean_triples">Formulas for generating Pythagorean triples</a> contains other such formulae.</p>
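A compact Python sketch of the same Euclid parametrization, for readers who would rather not run the C# (all names are mine):

```python
from math import gcd

def euclid_triples(maximum):
    """Generate all triples with hypotenuse <= maximum from Euclid's formula."""
    triples = []
    m = 2
    while m * m + 1 <= maximum:                   # smallest hypotenuse for this m
        for n in range(1 + m % 2, m, 2):          # n < m, opposite parity
            if gcd(m, n) == 1:                    # relatively prime
                a, b, c = 2 * m * n, m * m - n * n, m * m + n * n
                k = 1
                while k * c <= maximum:           # scale the primitive triple by k
                    triples.append((k * min(a, b), k * max(a, b), k * c))
                    k += 1
        m += 1
    return sorted(triples)

print(euclid_triples(20))
```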
| 11
|
2009-02-23T18:43:00Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 2,241,194
|
<p>Just checking, but I've been using the following code to make Pythagorean triples. It's very fast (I've tried some of the examples here, though I mostly learned from them, wrote my own, and came back to check here two years later). I think this code correctly finds all Pythagorean triples up to whatever limit you name, and fairly quickly too. I used C++ to write it.</p>
<p><code>ullong</code> is <code>unsigned long long</code>, and I created a couple of helper functions to square and root. My root function basically takes the integer square root of the given number and, if squaring that does not give back the number, returns -1 because the number is not rootable. <code>_square</code> and <code>_root</code> do as the description says; I know of another way to optimize it, but I haven't implemented or tested that yet.</p>
<pre><code>void generate(vector<Triple>& triplist, ullong limit) {
    cout<<"Please wait as triples are being generated."<<endl;
    ullong a, b;
    long long c;   // signed, so the -1 "not a perfect square" sentinel works
    Triple trip;
    time_t timer = time(0);
    for(a = 1; a <= limit; ++a) {
        for(b = a + 1; b <= limit; ++b) {
            c = _root(_square(a) + _square(b));
            if(c != -1 && (ullong)c <= limit) {
                trip.a = a; trip.b = b; trip.c = c;
                triplist.push_back(trip);
            } else if(c != -1 && (ullong)c > limit)
                break;   // sums only grow with b, so a too-long hypotenuse ends the row
        }
    }
    timer = time(0) - timer;
    cout<<"Generated "<<triplist.size()<<" in "<<timer<<" seconds."<<endl;
    cin.get();
    cin.get();
}
</code></pre>
<p>Let me know what you all think. It generates all primitive and non-primitive triples according to the teacher I turned it in for. (she tested it up to 100 if I remember correctly).</p>
<p>The results from the V4 supplied by a previous coder here are:</p>
<p>Below is a not-very-scientific set of timings (using Java under Eclipse on my older laptop with other stuff running...), where the "use x, y, z" was implemented by instantiating a Triple object with the three values and putting it in an ArrayList. (For these runs, N was set to 10,000, which produced 12,471 triples in each case.)</p>
<p>Version 4: 46 sec.
using square root: 134 sec.
array and map: 400 sec.</p>
<p>The results from mine are:</p>
<p>How many triples to generate: 10000<br>
Please wait as triples are being generated.<br>
Generated 12471 in 2 seconds.</p>
<p>That is before I even start optimizing via the compiler. (I remember previously getting 10,000 down to 0 seconds with tons of special options.) My code also generates all the triples with 100,000 as the limit on how high side1, side2, and the hypotenuse can go, in 3.2 minutes (I think the 1,000,000 limit takes an hour).</p>
<p>I modified the code a bit and got the 10,000 limit down to 1 second (no optimizations). On top of that, with careful thinking, mine could be broken down into chunks and threaded over given ranges: for example, divide 100,000 into 4 equal chunks for 3 CPUs (1 extra to hopefully consume CPU time, just in case), with ranges 1 to 25,000, 25,000 to 50,000, 50,000 to 75,000, and 75,000 to the end. I may do that and see if it speeds things up any (I would have the threads premade and not include them in the measured time of the triple function; I'd need a more precise timer and a way to concatenate the vectors). I think that if one 3.4 GHz CPU with 8 GB of RAM at its disposal can do 10,000 as the limit in 1 second, then 3 CPUs should do that in about a third of a second.</p>
| 0
|
2010-02-10T23:41:16Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 8,263,898
|
<p>Substantially faster than any of the solutions so far. Finds triplets via a ternary tree.</p>
<blockquote>
<p><a href="http://mathworld.wolfram.com/PythagoreanTriple.html">Wolfram</a> says:</p>
<p>Hall (1970) and Roberts (1977) prove that is a primitive Pythagorean triple if and only if</p>
<p><code>(a,b,c)=(3,4,5)M</code></p>
<p>where M is a finite product of the matrices U,A,D.</p>
</blockquote>
<p>And there we have a formula to generate every primitive triple.</p>
<p>In the above formula, the hypotenuse is ever growing so it's pretty easy to check for a max length.</p>
<p>In Python:</p>
<pre><code>import numpy as np
def gen_prim_pyth_trips(limit=None):
u = np.mat(' 1 2 2; -2 -1 -2; 2 2 3')
a = np.mat(' 1 2 2; 2 1 2; 2 2 3')
d = np.mat('-1 -2 -2; 2 1 2; 2 2 3')
uad = np.array([u, a, d])
m = np.array([3, 4, 5])
while m.size:
m = m.reshape(-1, 3)
if limit:
m = m[m[:, 2] <= limit]
yield from m
m = np.dot(m, uad)
</code></pre>
<p>If you'd like all triples and not just the primitives:</p>
<pre><code>def gen_all_pyth_trips(limit):
for prim in gen_prim_pyth_trips(limit):
i = prim
for _ in range(limit//prim[2]):
yield i
i = i + prim
</code></pre>
<p><code>list(gen_prim_pyth_trips(10**4))</code> took 2.81 milliseconds to come back with 1593 elements while <code>list(gen_all_pyth_trips(10**4))</code> took 19.8 milliseconds to come back with 12471 elements</p>
<p>For reference, the <a href="http://stackoverflow.com/a/576405/992385">accepted answer</a> <a href="https://gist.github.com/kgullion/6adc2d633038ff852923">(in python)</a> took 38 seconds for 12471 elements.</p>
<p>Just for fun, setting the upper limit to one million <code>list(gen_all_pyth_trips(10**6))</code> returns in 2.66 seconds with 1980642 elements (almost 2 million triples in 3 seconds). <code>list(gen_all_pyth_trips(10**7))</code> brings my computer to its knees as the list gets so large it consumes every last bit of ram. Doing something like <code>sum(1 for _ in gen_all_pyth_trips(10**7))</code> gets around that limitation and returns in 30 seconds with 23471475 elements.</p>
<p>For more information on the algorithm used, check out the articles on <a href="http://mathworld.wolfram.com/PythagoreanTriple.html">Wolfram</a> and <a href="http://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples">Wikipedia</a>.</p>
| 22
|
2011-11-25T00:59:17Z
|
[
"python",
"math"
] |
Generating unique, ordered Pythagorean triplets
| 575,117
|
<p>This is a program I wrote to calculate Pythagorean triplets. When I run the program it prints each set of triplets twice because of the if statement. Is there any way I can tell the program to only print a new set of triplets once? Thanks.</p>
<pre><code>import math
def main():
for x in range (1, 1000):
for y in range (1, 1000):
for z in range(1, 1000):
if x*x == y*y + z*z:
print y, z, x
print '-'*50
if __name__ == '__main__':
main()
</code></pre>
| 17
|
2009-02-22T16:00:34Z
| 21,654,130
|
<p>It should be noted that for a, b, and c you don't need to loop all the way to N.</p>
<p>For a, you only have to loop from 1 to <code>int(sqrt(n**2/2))+1</code>; for b, from <code>a+1</code> to <code>int(sqrt(n**2-a**2))+1</code>; and for c, from <code>int(sqrt(a**2+b**2))</code> to <code>int(sqrt(a**2+b**2))+2</code>.</p>
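A sketch of those tighter bounds in Python (my own transcription; math.isqrt stands in for int(sqrt(...))):

```python
from math import isqrt

def bounded_triples(n):
    """Use the reduced loop ranges: a <= n/sqrt(2), b <= sqrt(n^2 - a^2)."""
    out = []
    for a in range(1, isqrt(n * n // 2) + 1):
        for b in range(a + 1, isqrt(n * n - a * a) + 1):
            csqr = a * a + b * b
            c = isqrt(csqr)               # the only candidate hypotenuse
            if c * c == csqr and c <= n:
                out.append((a, b, c))
    return out

print(bounded_triples(20))
```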
| 0
|
2014-02-09T01:38:41Z
|
[
"python",
"math"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 575,183
|
<p><a href="http://en.wikipedia.org/wiki/Berkeley_DB" rel="nofollow">BerkleyDB</a></p>
| 4
|
2009-02-22T16:35:08Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 575,184
|
<p>BerkeleyDB (it seems that there is an API binding to Python: <a href="http://www.jcea.es/programacion/pybsddb.htm" rel="nofollow">http://www.jcea.es/programacion/pybsddb.htm</a>)</p>
| 2
|
2009-02-22T16:35:12Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 575,193
|
<p>Have you looked at <a href="http://couchdb.apache.org/" rel="nofollow">CouchDB</a>? It's non-relational, data can be migrated with relative ease and it has a Python API in the form of <a href="http://code.google.com/p/couchdb-python/" rel="nofollow">couchdb-python</a>. It does have some fairly unusual dependencies in the form of <a href="http://www.mozilla.org/js/spidermonkey/" rel="nofollow">Spidermonkey</a> and <a href="http://erlang.org/" rel="nofollow">Erlang</a> though.</p>
<p>As for pure python solutions, I don't know how far along <a href="http://quentel.pierre.free.fr/PyDbLite/index.html" rel="nofollow">PyDBLite</a> has come but it might be worth checking out nonetheless.</p>
| 3
|
2009-02-22T16:41:53Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 575,197
|
<p>If you're used to thinking a relational database has to be huge and heavy like PostgreSQL or MySQL, then you'll be pleasantly surprised by SQLite.</p>
<p>It is relational, very small, uses a single file, has Python bindings, requires no extra privileges, and works on Linux, Windows, and many other platforms.</p>
| 3
|
2009-02-22T16:43:43Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 575,307
|
<p>Have you looked at <a href="http://wiki.zope.org/ZODB/FrontPage" rel="nofollow">Zope Object Database</a>?</p>
<p>Also, <a href="http://www.sqlalchemy.org/" rel="nofollow">SQLAlchemy</a> or <a href="http://docs.djangoproject.com/en/dev/" rel="nofollow">Django's ORM</a> layer makes schema management over SQLite almost transparent.</p>
<hr />
<p><strong>Edit</strong></p>
<p>Start with <a href="http://www.sqlalchemy.org/docs/05/ormtutorial.html#define-and-create-a-table" rel="nofollow">http://www.sqlalchemy.org/docs/05/ormtutorial.html#define-and-create-a-table</a>
to see how to create SQL tables and how they map to Python objects.</p>
<p>While your question is vague, your comments seem to indicate that you might want to define the Python objects first, get those to work, then map them to relational schema objects via SQLAlchemy.</p>
| 0
|
2009-02-22T17:43:29Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 576,408
|
<p><a href="http://www.equi4.com/metakit/">Metakit</a> is an interesting non-relational embedded database that supports Python. </p>
<p>Installation requires just copying a single shared library and .py file. It works on Windows, Linux and Mac and is open-source (MIT licensed).</p>
| 5
|
2009-02-23T03:01:27Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 576,473
|
<p>If you're only coming and going from Python you might think about using <a href="http://docs.python.org/library/pickle.html" rel="nofollow">Pickle</a> to serialize the objects. That won't work if you're looking to use other tools to access the same data, of course. It's built into Python, so you shouldn't have any privilege problems, but it's not a true database, so it may not suit the needs of your experiment.</p>
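<p>For illustration only (the data here is made up), pickling round-trips a Python object through bytes; an in-memory buffer stands in for the file a real app would use:</p>

```python
import io
import pickle

# an arbitrary Python object to persist
data = {"name": "experiment", "values": [1, 2, 3]}

# serialize into an in-memory buffer (a real app would dump to a file on disk)
buf = io.BytesIO()
pickle.dump(data, buf)

# deserialize it back
buf.seek(0)
loaded = pickle.load(buf)

print(loaded == data)  # True: an equal but brand-new object
```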
| 0
|
2009-02-23T03:52:37Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
portable non-relational database
| 575,172
|
<p>I want to experiment/play around with non-relational databases, it'd be best if the solution was:</p>
<ul>
<li>portable, meaning it doesn't require an installation. ideally just copy-pasting the directory to someplace would make it work. I don't mind if it requires editing some configuration files or running a configuration tool for first time usage.</li>
<li>accessible from python</li>
<li>works on both windows and linux</li>
</ul>
<p>What can you recommend for me?</p>
<p>Essentially, I would like to be able to install this system on a shared linux server where I have little user privileges.</p>
| 1
|
2009-02-22T16:31:16Z
| 576,570
|
<p>I recommend you consider <a href="http://en.wikipedia.org/wiki/Berkeley_DB">BerkeleyDB</a> <strong>with awareness of the licensing issues.</strong> </p>
<p>I am getting very tired of people recommending BerkeleyDB without qualification - you can only distribute BDB systems under GPL or some unknown and not publicly visible licensing fee from Oracle. </p>
<p>For "local" playing around where it is not in use by external parties, it's probably a good idea. Just be aware that there is a license waiting to bite you.</p>
<p>This is also a reminder that it is a good idea when asking for technology recommendations to say whether or not GPL is acceptable.</p>
<p>From <a href="http://stackoverflow.com/questions/525065/which-embedded-database-capable-of-100-million-records-has-the-best-c-api">my own question</a> about a portable C API database, whilst a range of other products were suggested, none of the embedded ones have Python bindings.</p>
| 7
|
2009-02-23T04:49:08Z
|
[
"python",
"non-relational-database",
"portable-database"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 575,204
|
<p><code>f</code> doesn't actually alter the value of <code>x</code> (which is always the same reference to an instance of a list). Rather, it alters the <em>contents</em> of this list.</p>
<p>In both cases, a <em>copy</em> is passed to the function. But since <code>x</code> is a reference to a list instance, only the reference is copied, not the contents of the list.</p>
<p>In case you are familiar with C, the following approaches the semantics of the Python code:</p>
<pre><code>void f(int n, int* x) {
n = 42;
x[0] = 2;
}
</code></pre>
<p>Here, <code>n</code> is an <code>int</code> and <code>x</code> is an <code>int*</code> but both are passed as a copy to the function. Nonetheless, the memory <em>pointed to</em> by <code>x</code> is the same at the sides of the caller and the callee.</p>
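<p>The same semantics, sketched back in Python for comparison:</p>

```python
def f(n, x):
    n = 42       # rebinds the local name n only; the caller's n is untouched
    x[0] = 2     # mutates the list object that both names refer to

n = 1
x = [0, 1, 2, 3]
f(n, x)
print(n, x)  # 1 [2, 1, 2, 3]
```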
| 11
|
2009-02-22T16:47:13Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 575,254
|
<p>It's because a list is a mutable object. You're not setting <code>x</code> to the value of <code>[0,1,2,3]</code>, you're attaching a label to the object <code>[0,1,2,3]</code>.</p>
<p>You should declare your function f() like this:</p>
<pre><code>def f(n, x=None):
if x is None:
x = []
...
</code></pre>
| 2
|
2009-02-22T17:06:01Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 575,259
|
<p>n is an int (immutable), and a copy is passed to the function, so in the function you are changing the copy.</p>
<p><code>x</code> is a list (mutable), and a copy of <em>the pointer</em> is passed to the function, so <code>x.append(4)</code> changes the contents of the list. However, if you said <code>x = [0,1,2,3,4]</code> in your function, you would not change the contents of <code>x</code> in <code>main()</code>.</p>
| 2
|
2009-02-22T17:07:59Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 575,268
|
<p>I will rename variables to reduce confusion. <em>n</em> -> <em>nf</em> or <em>nmain</em>. <em>x</em> -> <em>xf</em> or <em>xmain</em>:</p>
<pre><code>def f(nf, xf):
nf = 2
xf.append(4)
print 'In f():', nf, xf
def main():
nmain = 1
xmain = [0,1,2,3]
print 'Before:', nmain, xmain
f(nmain, xmain)
print 'After: ', nmain, xmain
main()
</code></pre>
<p>When you call the function <em>f</em>, the Python runtime makes a copy of <em>xmain</em> and assigns it to <em>xf</em>, and similarly assigns a copy of <em>nmain</em> to <em>nf</em>.</p>
<p>In the case of <em>n</em>, the value that is copied is 1.</p>
<p>In the case of <em>x</em> the value that is copied is <strong>not</strong> the literal list <em>[0, 1, 2, 3]</em>. It is a <strong>reference</strong> to that list. <em>xf</em> and <em>xmain</em> are pointing at the same list, so when you modify <em>xf</em> you are also modifying <em>xmain</em>.</p>
<p>If, however, you were to write something like:</p>
<pre><code> xf = ["foo", "bar"]
xf.append(4)
</code></pre>
<p>you would find that <em>xmain</em> has not changed. This is because, in the line <em>xf = ["foo", "bar"]</em>, you have changed <em>xf</em> to point to a <strong>new</strong> list. Any changes you make to this new list will have no effect on the list that <em>xmain</em> still points to.</p>
<p>Hope that helps. :-)</p>
| 2
|
2009-02-22T17:15:36Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 575,337
|
<p>Some answers contain the word "copy" in a context of a function call. I find it confusing.</p>
<p><strong>Python doesn't copy <em>objects</em> you pass during a function call <em>ever</em>.</strong></p>
<p>Function parameters are <em>names</em>. When you call a function Python binds these parameters to whatever objects you pass (via names in a caller scope).</p>
<p>Objects can be mutable (like lists) or immutable (like integers, strings in Python). Mutable object you can change. You can't change a name, you just can bind it to another object.</p>
<p>Your example is not about <a href="https://docs.python.org/2/tutorial/classes.html#python-scopes-and-namespaces">scopes or namespaces</a>, it is about <a href="http://docs.python.org/reference/executionmodel.html#naming-and-binding">naming and binding</a> and <a href="http://docs.python.org/reference/datamodel.html#objects-values-and-types">mutability of an object</a> in Python. </p>
<pre><code>def f(n, x): # these `n`, `x` have nothing to do with `n` and `x` from main()
n = 2 # put `n` label on `2` balloon
x.append(4) # call `append` method of whatever object `x` is referring to.
print 'In f():', n, x
x = [] # put `x` label on `[]` ballon
# x = [] has no effect on the original list that is passed into the function
</code></pre>
<p>Here are nice pictures on <a href="http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables">the difference between variables in other languages and names in Python</a>.</p>
| 109
|
2009-02-22T18:06:13Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 575,887
|
<p>You've got a number of answers already, and I broadly agree with J.F. Sebastian, but you might find this useful as a shortcut:</p>
<p>Any time you see <strong><code>varname =</code></strong>, you're creating a <em>new</em> name binding within the function's scope. Whatever value <code>varname</code> was bound to before is lost <em>within this scope</em>.</p>
<p>Any time you see <strong><code>varname.foo()</code></strong> you're calling a method on <code>varname</code>. The method may alter varname (e.g. <code>list.append</code>). <code>varname</code> (or, rather, the object that <code>varname</code> names) may exist in more than one scope, and since it's the same object, any changes will be visible in all scopes.</p>
<p>[note that the <code>global</code> keyword creates an exception to the first case]</p>
| 11
|
2009-02-22T21:52:14Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 4,354,058
|
<p>Python is a pure pass-by-value language if you think about it the right way. A Python variable stores the location of an object in memory. The Python variable does not store the object itself. When you pass a variable to a function, you are passing a <em>copy</em> of the address of the object being pointed to by the variable. </p>
<p>Contrast these two functions:</p>
<pre><code>def foo(x):
x[0] = 5
def goo(x):
x = []
</code></pre>
<p>Now, when you type into the shell</p>
<pre><code>>>> cow = [3,4,5]
>>> foo(cow)
>>> cow
[5,4,5]
</code></pre>
<p>Compare this to goo.</p>
<pre><code>>>> cow = [3,4,5]
>>> goo(cow)
>>> cow
[3,4,5]
</code></pre>
<p>In the first case, we pass a copy of the address of cow to foo, and foo modifies the state of the object residing there. The object gets modified.</p>
<p>In the second case you pass a copy of the address of cow to goo. Then goo proceeds to change that copy. Effect: none.</p>
<p>I call this the <em>pink house principle</em>. If you make a copy of your address and tell a
painter to paint the house at that address pink, you will wind up with a pink house.
If you give the painter a copy of your address and tell him to change it to a new address,
the address of your house does not change.</p>
<p>This explanation eliminates a lot of confusion: Python passes, by value, the addresses that variables store. </p>
| 0
|
2010-12-04T14:48:00Z
|
[
"python"
] |
In Python, why can a function modify some arguments as perceived by the caller, but not others?
| 575,196
|
<p>I'm new to Python and am trying to understand its approach to variable scope. In this example, why is <code>f()</code> able to alter the value of <code>x</code>, as perceived within <code>main()</code>, but not the value of <code>n</code>?</p>
<pre><code>def f(n, x):
n = 2
x.append(4)
print 'In f():', n, x
def main():
n = 1
x = [0,1,2,3]
print 'Before:', n, x
f(n, x)
print 'After: ', n, x
main()
</code></pre>
<p>Output:</p>
<pre><code>Before: 1 [0, 1, 2, 3]
In f(): 2 [0, 1, 2, 3, 4]
After: 1 [0, 1, 2, 3, 4]
</code></pre>
| 77
|
2009-02-22T16:42:51Z
| 34,829,285
|
<p>Python copies references by value. An object occupies a field in memory, and a reference is associated with that object, but the reference itself also occupies a field in memory; a name/value is associated with a reference. A Python function call always copies the value of the reference, so <code>n</code> becomes a new name inside the function: when you assign to it, it gets a new binding in the callee's frame. The list's name is also copied, but it still refers to the same memory (since you never assign the list a new value). That is the magic in Python!</p>
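<p>You can check this with <code>id()</code>, which returns an object's identity (a quick sketch, not from the answer above):</p>

```python
def inside(x):
    # the function sees the very same list object the caller passed in
    return id(x)

lst = [0, 1, 2, 3]
print(inside(lst) == id(lst))  # True: the reference was copied, not the list
```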
| 0
|
2016-01-16T16:30:33Z
|
[
"python"
] |
How does python close files that have been gc'ed?
| 575,278
|
<p>I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:</p>
<pre><code>>>> f = open('somefile.txt')
>>> del f
</code></pre>
<p>Just out of sheer curiosity, how does this work? I notice that file doesn't include a <code>__del__</code> method.</p>
| 14
|
2009-02-22T17:26:49Z
| 575,292
|
<p>Hence the <strong>with</strong> statement.</p>
<p>For Python 2.5, use</p>
<pre><code>from __future__ import with_statement
</code></pre>
<p>(For Python 2.6 or 3.x, do nothing)</p>
<pre><code>with open( "someFile", "rU" ) as aFile:
# process the file
pass
# At this point, the file was closed by the with statement.
# Bonus, it's also out of scope of the with statement,
# and eligible for GC.
</code></pre>
| 4
|
2009-02-22T17:35:00Z
|
[
"python",
"file",
"garbage-collection",
"del"
] |
How does python close files that have been gc'ed?
| 575,278
|
<p>I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:</p>
<pre><code>>>> f = open('somefile.txt')
>>> del f
</code></pre>
<p>Just out of sheer curiosity, how does this work? I notice that file doesn't include a <code>__del__</code> method.</p>
| 14
|
2009-02-22T17:26:49Z
| 575,303
|
<p>Best guess is that because the file type is a built-in type, the interpreter itself handles closing the file on garbage collection.</p>
<p>Alternatively, you are only checking after the python interpreter has exited, and all "leaked" file handles are closed anyways.</p>
| 0
|
2009-02-22T17:41:07Z
|
[
"python",
"file",
"garbage-collection",
"del"
] |
How does python close files that have been gc'ed?
| 575,278
|
<p>I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:</p>
<pre><code>>>> f = open('somefile.txt')
>>> del f
</code></pre>
<p>Just out of sheer curiosity, how does this work? I notice that file doesn't include a <code>__del__</code> method.</p>
| 14
|
2009-02-22T17:26:49Z
| 575,320
|
<p>In CPython, at least, files are closed when the file object is deallocated. See the <code>file_dealloc</code> function in <code>Objects/fileobject.c</code> in the CPython source. Dealloc methods are sort-of like <code>__del__</code> for C types, except without some of the problems inherent to <code>__del__</code>.</p>
| 19
|
2009-02-22T17:53:46Z
|
[
"python",
"file",
"garbage-collection",
"del"
] |
How does python close files that have been gc'ed?
| 575,278
|
<p>I had always assumed that a file would leak if it was opened without being closed, but I just verified that if I enter the following lines of code, the file will close:</p>
<pre><code>>>> f = open('somefile.txt')
>>> del f
</code></pre>
<p>Just out of sheer curiosity, how does this work? I notice that file doesn't include a <code>__del__</code> method.</p>
| 14
|
2009-02-22T17:26:49Z
| 575,383
|
<p>Python uses reference counting and deterministic destruction in addition to garbage collection. When there are no more references to an object, the object is released immediately. Releasing a file closes it.</p>
<p>This is different from e.g. Java, where there is only nondeterministic garbage collection. This means you cannot know when the object is released, so you have to close the file manually.</p>
<p>Note that reference counting is not perfect. You can have objects with circular references, which are not reachable from the program. That's why Python has garbage collection in addition to reference counting.</p>
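<p>The cycle collector can be observed directly (a sketch using the standard <code>gc</code> module):</p>

```python
import gc

class Node:
    def __init__(self):
        self.other = None

a = Node()
b = Node()
a.other = b
b.other = a   # circular reference: refcounts alone never drop to zero

del a, b
collected = gc.collect()   # the cycle detector finds the unreachable pair
print(collected >= 2)      # True: at least the two Node instances are reclaimed
```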
| 2
|
2009-02-22T18:22:23Z
|
[
"python",
"file",
"garbage-collection",
"del"
] |
Successive calls to cProfile/pstats not updating properly
| 575,325
|
<p>I'm trying to make successive calls to some profiler code; however, on the second call to the function the update time of the profile file changes but the actual profiler stats stay the same. This isn't the code I'm running, but it's as simplified an example as I can come up with that shows the same behaviour.</p>
<p>On running, the first time ctrl+c is pressed it shows stats; the second time it does the same, but rather than being fully updated as expected, only the time changes; the third time the program actually quits. If trying it, ideally wait a few seconds between ctrl+c presses.</p>
<p>Adding profiler.enable() after the 8th line does give full updates between calls; however, it adds a lot of extra profiler data for things I don't want to be profiling.</p>
<p>Any suggestions for a happy medium where I get full updates but without the extra fluff?</p>
<pre><code>import signal, sys, time, cProfile, pstats
call = 0
def sigint_handler(signal, frame):
global call
if call < 2:
profiler.dump_stats("profile.prof")
stats = pstats.Stats("profile.prof")
stats.strip_dirs().sort_stats('cumulative').print_stats()
call += 1
else:
sys.exit()
def wait():
time.sleep(1)
def main_io_loop():
signal.signal(signal.SIGINT, sigint_handler)
while 1:
wait()
profiler = cProfile.Profile()
profiler.runctx("main_io_loop()", globals(), locals())
</code></pre>
| 1
|
2009-02-22T17:56:42Z
| 576,184
|
<p>Calling profiler.dump_stats (implemented in cProfile.py) calls profiler.create_stats, which in turn calls profiler.disable().</p>
<p>You need to call profiler.enable() to make it work again. No, this is not documented.</p>
<p>The following seems to do what you want. Note that I got rid of the intermediate data file since pstats.Stats knows how to get the data from the profiler directly.</p>
<pre><code>import signal, sys, time, pstats, cProfile
call = 0
def sigint_handler(signal, frame):
global call
if call < 2:
stats = pstats.Stats(profiler)
stats.strip_dirs().sort_stats('cumulative').print_stats()
profiler.enable()
call += 1
else:
sys.exit()
def wait():
time.sleep(1)
def main_io_loop():
signal.signal(signal.SIGINT, sigint_handler)
while 1:
wait()
profiler = cProfile.Profile()
profiler.runctx("main_io_loop()", globals(), locals())
</code></pre>
| 1
|
2009-02-23T00:37:41Z
|
[
"python"
] |
Why not always use psyco for Python code?
| 575,385
|
<p><a href="http://psyco.sourceforge.net/">psyco</a> seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way. </p>
<p>Therefore, one has to wonder. Assuming you're always on an x86 architecture (which is where most apps run these days), why not just always use <code>psyco</code> for all Python code? Does it make mistakes sometimes and ruin the correctness of the program? Increase the runtime in some weird cases?</p>
<p>Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better.</p>
<p>Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to.</p>
| 37
|
2009-02-22T18:23:50Z
| 575,394
|
<p>When using pyglet I found that I couldn't use psyco on the entire app without making my app non-functional. I could use it in small sections of math-heavy code, of course, but it wasn't necessary, so I didn't bother.</p>
<p>Also, psyco has done strange things with my profiling results (such as, well, not altering them at all from the non-psyco version). I suspect it doesn't play nice with the profiling code. </p>
<p>I just don't really use it unless I really want the speed, which is not all that often. My priority is algorithm optimization, which generally results in nicer speedups.</p>
| 5
|
2009-02-22T18:26:25Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
<p><a href="http://psyco.sourceforge.net/">psyco</a> seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way. </p>
<p>Therefore, one has to wonder. Assuming you're always on an x86 architecture (which is where most apps run these days), why not just always use <code>psyco</code> for all Python code? Does it make mistakes sometimes and ruin the correctness of the program? Increase the runtime in some weird cases?</p>
<p>Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better.</p>
<p>Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to.</p>
| 37
|
2009-02-22T18:23:50Z
| 575,423
|
<p>It also depends on where your bottleneck is. I am mostly doing web apps, and there the bottlenecks are probably more in IO and the database. So you should know where to optimize.</p>
<p>Also beware that maybe you first should think about your code instead of directly throwing psyco at it. So I agree with Devin that algorithm optimizations should come first, and they might have a smaller chance of unwanted side effects.</p>
| 4
|
2009-02-22T18:39:53Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
<p><a href="http://psyco.sourceforge.net/">psyco</a> seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way. </p>
<p>Therefore, one has to wonder. Assuming you're always on an x86 architecture (which is where most apps run these days), why not just always use <code>psyco</code> for all Python code? Does it make mistakes sometimes and ruin the correctness of the program? Increase the runtime in some weird cases?</p>
<p>Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better.</p>
<p>Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to.</p>
| 37
|
2009-02-22T18:23:50Z
| 575,433
|
<blockquote>
<p>Psyco currently uses a lot of memory.
It only runs on Intel 386-compatible
processors (under any OS) right now.
There are some subtle semantic
differences (i.e. bugs) with the way
Python works; they should not be
apparent in most programs.</p>
</blockquote>
<p>See also the <a href="http://psyco.sourceforge.net/psycoguide/node25.html" rel="nofollow">caveats section</a>. For a concrete example, I noticed that my web app with Cheetah-generated templates and DB I/O gained no appreciable speedup.</p>
| 6
|
2009-02-22T18:45:14Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
<p><a href="http://psyco.sourceforge.net/">psyco</a> seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way. </p>
<p>Therefore, one has to wonder. Assuming you're always on an x86 architecture (which is where most apps run these days), why not just always use <code>psyco</code> for all Python code? Does it make mistakes sometimes and ruin the correctness of the program? Increase the runtime in some weird cases?</p>
<p>Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better.</p>
<p>Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to.</p>
| 37
|
2009-02-22T18:23:50Z
| 575,829
|
<p>Quite simply: "Because the code already runs fast enough".</p>
| 0
|
2009-02-22T21:24:38Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
<p><a href="http://psyco.sourceforge.net/">psyco</a> seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way. </p>
<p>Therefore, one has to wonder. Assuming you're always on an x86 architecture (which is where most apps run these days), why not just always use <code>psyco</code> for all Python code? Does it make mistakes sometimes and ruin the correctness of the program? Increase the runtime in some weird cases?</p>
<p>Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better.</p>
<p>Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to.</p>
| 37
|
2009-02-22T18:23:50Z
| 576,379
|
<p>One should never rely on a magic bullet to fix your problems. Using psyco to make a slow program faster is usually not necessary. Bad algorithms can be rewritten, and parts that <em>require</em> speed can be written in another language. Of course, your question asks why we don't use it for the speed boost anyway, and there's a bit of overhead when you use psyco. Psyco uses memory, and those two lines just sorta <em>feel</em> like overhead when you look at them. As for my personal reason why I don't use psyco: it doesn't support x86_64, which I see as the new up-and-coming architecture (especially with 2038 approaching sooner or later). My alternative is PyPy, but I'm not entirely fond of that either.</p>
| 3
|
2009-02-23T02:43:57Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
| 620,966
|
<p>A couple of other things:</p>
<ol>
<li>It doesn't seem to be very actively maintained.</li>
<li>It can be a memory hog.</li>
</ol>
| 2
|
2009-03-07T00:14:22Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
| 1,437,939
|
<p>1) The memory overhead is the main one, as described in other answers. You also pay the compilation cost, which can be prohibitive if you aren't selective. From the <a href="http://psyco.sourceforge.net/psycoguide/module-psyco.html">user reference</a>:</p>
<blockquote>
<p>Compiling everything is often overkill for medium- or large-sized applications. The drawbacks of compiling too much are in the time spent compiling, plus the amount of memory that this process consumes. It is a subtle balance to keep. </p>
</blockquote>
<p>2) Performance can actually be harmed by Psyco compilation. Again from the user guide (<a href="http://psyco.sourceforge.net/psycoguide/tutknownbugs.html">"known bugs"</a> section):</p>
<blockquote>
<p>There are also performance bugs: situations in which Psyco slows down the code instead of accelerating it. It is difficult to make a complete list of the possible reasons, but here are a few common ones:</p>
<ul>
<li>The built-in <code>map</code> and <code>filter</code> functions must be avoided and replaced by list comprehension. For example, <code>map(lambda x: x*x, lst)</code> should be replaced by the more readable but more recent syntax <code>[x*x for x in lst]</code>.</li>
<li>The compilation of regular expressions doesn't seem to benefit from Psyco. (The execution of regular expressions is unaffected, since it is C code.) Don't enable Psyco on this module; if necessary, disable it explicitly, e.g. by calling <code>psyco.cannotcompile(re.compile)</code>.</li>
</ul>
</blockquote>
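<p>As a concrete illustration of the first bullet above, both forms below compute the same result; the list comprehension is the one psyco could accelerate (a minimal sketch of my own, not taken from the guide):</p>

```python
lst = [1, 2, 3, 4]

# The form the psyco guide warns against: built-in map with a lambda.
squares_map = list(map(lambda x: x * x, lst))

# The preferred, psyco-friendly (and more readable) form.
squares_comp = [x * x for x in lst]

assert squares_map == squares_comp == [1, 4, 9, 16]
```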
<p>3) Finally, there are some relatively obscure situations where using Psyco will actually introduce bugs. Some of them are <a href="http://psyco.sourceforge.net/psycoguide/bugs.html#bugs">listed here</a>.</p>
| 20
|
2009-09-17T10:23:30Z
|
[
"python",
"optimization",
"psyco"
] |
Why not always use psyco for Python code?
| 575,385
|
| 12,466,971
|
<p>psyco is dead and no longer maintained. It is time to find an alternative.</p>
| 2
|
2012-09-17T20:53:44Z
|
[
"python",
"optimization",
"psyco"
] |
How to obtain the keycodes in Python
| 575,650
|
<p>I need to know which key is pressed, but I don't need the character code; I want to know when someone presses the 'A' key, whether the character obtained is 'a' or 'A', and likewise for all other keys.</p>
<p>I can't use PyGame or any other library (including Tkinter), only the Python standard library. And this has to be done in a terminal, not a graphical interface.</p>
<p>I DON'T NEED THE CHARACTER CODE. I NEED TO KNOW THE KEY CODE.</p>
<p>Ex:</p>
<pre><code>ord('a') != ord('A') # 97 != 65
someFunction('a') == someFunction('A') # a_code == A_code
</code></pre>
| 5
|
2009-02-22T20:01:21Z
| 575,656
|
<p>Depending on what you are trying to accomplish, perhaps using a library such as <a href="http://pygame.org">pygame</a> would do what you want. Pygame contains more advanced keypress handling than is normally available with Python's standard libraries.</p>
| 8
|
2009-02-22T20:03:45Z
|
[
"python",
"input",
"keycode"
] |
How to obtain the keycodes in Python
| 575,650
|
| 575,688
|
<p>You probably will have to use <a href="http://wiki.python.org/moin/TkInter" rel="nofollow">Tkinter</a>, which is the 'standard' Python GUI and has been included with Python for many years.</p>
<p>A command-line solution is probably not available, because of the way data passes into and out of command-line processes. GUI programs (of some flavor or another) all receive user input through a (possibly library-wrapped) event stream. Each event is a record of the event's details. For keystroke events, the record may contain a keycode, a modifier-key bitfield, or a text character in some encoding. Which fields are present, and how they are named, depends on the event library you are calling.</p>
<p>Command-line programs receive user input through character streams, and there is no way to catch lower-level data. As myroslav explained in his post, ttys can be in cooked or uncooked mode, the only difference being that in cooked mode the terminal will process (some) control characters for you, like delete and enter, so that the process receives lines of input instead of one character at a time.</p>
<p>Processing anything lower than that requires (OS-dependent) system calls or opening character devices in /dev. Python's standard library provides no standard facility for this.</p>
| 2
|
2009-02-22T20:12:31Z
|
[
"python",
"input",
"keycode"
] |
How to obtain the keycodes in Python
| 575,650
|
| 575,758
|
<p>If you only need to support Windows, you should try <a href="http://docs.python.org/library/msvcrt.html" rel="nofollow">msvcrt</a>.</p>
| 0
|
2009-02-22T20:50:47Z
|
[
"python",
"input",
"keycode"
] |
How to obtain the keycodes in Python
| 575,650
|
| 575,781
|
<p>See the <a href="http://docs.python.org/library/tty.html">tty</a> standard module. It allows switching from the default line-oriented (cooked) mode into a char-oriented (cbreak) mode with <a href="http://docs.python.org/library/tty.html#tty.setcbreak">tty.setcbreak(sys.stdin)</a>. Reading a single char from sys.stdin will then return the next pressed keyboard key (if it generates a code):</p>
<pre><code>import sys
import tty

tty.setcbreak(sys.stdin)
while True:
    print(ord(sys.stdin.read(1)))
</code></pre>
<p><em>Note: solution is Unix (including Linux) only.</em></p>
<p>Edit: On Windows try <a href="http://docs.python.org/library/msvcrt.html#msvcrt.getche">msvcrt.getche()</a>/<a href="http://docs.python.org/library/msvcrt.html#msvcrt.getwche">getwche()</a>. /me has nowhere to try...</p>
<p><hr/></p>
<p>Edit 2: Utilize win32 low-level console API via <a href="http://docs.python.org/library/ctypes.html">ctypes.windll</a> (see <a href="http://stackoverflow.com/questions/239020/how-can-i-call-a-dll-from-a-scripting-language">example at SO</a>) with <code>ReadConsoleInput</code> function. You should filter out keypresses - <code>e.EventType==KEY_EVENT</code> and look for <code>e.Event.KeyEvent.wVirtualKeyCode</code> value. Example of application (not in Python, just to get an idea) can be found at <a href="http://www.benryves.com/tutorials/?t=winconsole&c=4">http://www.benryves.com/tutorials/?t=winconsole&c=4</a>.</p>
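<p>If you use setcbreak in anything longer than a throwaway script, save and restore the terminal settings around it, the same way the raw-mode recipes do. A minimal sketch (the read_one_key helper is mine; it only touches the terminal when stdin really is a tty):</p>

```python
import sys
import termios
import tty

def read_one_key():
    # When stdin is a pipe or a file (not a terminal), just read normally.
    if not sys.stdin.isatty():
        return sys.stdin.read(1)
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)   # save the current tty settings
    try:
        tty.setcbreak(fd)                  # char-at-a-time (cbreak) mode
        return sys.stdin.read(1)
    finally:
        # Always restore the terminal, even if the read raises.
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
```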
| 16
|
2009-02-22T21:02:36Z
|
[
"python",
"input",
"keycode"
] |
How to obtain the keycodes in Python
| 575,650
|
| 591,938
|
<p>The obvious answer:</p>
<pre><code>import string
someFunction = string.upper
ord('a') != ord('A') # 97 != 65
someFunction('a') == someFunction('A') # a_code == A_code
</code></pre>
<p>or, in other (key)words:</p>
<pre><code>char = getch().upper()  # read a char, converting to uppercase
if char == 'Q':
    # quit
    exit = True  # or something
elif char in ['A', 'K']:
    do_something()
</code></pre>
<p>etc...</p>
<p>Here is an implementation of the getch function that works on both Windows and Linux, <a href="http://code.activestate.com/recipes/134892/" rel="nofollow">based on this recipe</a>:</p>
<pre><code>class _Getch(object):
    """Gets a single character from standard input.
    Does not echo to the screen."""
    def __init__(self):
        try:
            self.impl = _GetchWindows()
        except ImportError:
            self.impl = _GetchUnix()

    def __call__(self):
        return self.impl()


class _GetchUnix(object):
    def __init__(self):
        import tty, sys

    def __call__(self):
        import sys, tty, termios
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(sys.stdin.fileno())
            ch = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        return ch


class _GetchWindows(object):
    def __init__(self):
        import msvcrt

    def __call__(self):
        import msvcrt
        return msvcrt.getch()


getch = _Getch()
</code></pre>
| 0
|
2009-02-26T18:59:04Z
|
[
"python",
"input",
"keycode"
] |
How to convert rational and decimal number strings to floats in python?
| 575,925
|
<p>How can I convert strings that can denote decimal or rational numbers to floats?</p>
<pre><code>>>> ["0.1234", "1/2"]
['0.1234', '1/2']
</code></pre>
<p>I'd want [0.1234, 0.5].</p>
<p>eval is what I was thinking but no luck:</p>
<pre><code>>>> eval("1/2")
0
</code></pre>
| 9
|
2009-02-22T22:08:47Z
| 575,932
|
<p>In Python 2, the <code>/</code> operator performs integer division when both operands are ints. Try:</p>
<pre><code>>>> eval("1.0*" + "1/2")
0.5
</code></pre>
<p>Because <code>eval()</code> is potentially dangerous, you should always check precisely what you are passing into it:</p>
<pre><code>>>> import re
>>> s = "1/2"
>>> if re.match(r"\d+/\d+$", s):
... eval("1.0*" + s)
...
0.5
</code></pre>
<p>However, if you go to the trouble of matching the input against a regex in the first place, you might as well use <code>r"(\d+)/(\d+)$"</code> to extract the numerator and denominator, do the division yourself, and entirely avoid <code>eval()</code>:</p>
<pre><code>>>> m = re.match(r"(\d+)/(\d+)$", s)
>>> if m:
... float(m.group(1)) / float(m.group(2))
...
0.5
</code></pre>
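<p>As an alternative that avoids <code>eval()</code> altogether: from Python 2.6 on, the standard <code>fractions</code> module parses the <code>"1/2"</code> form directly, and (on 2.7/3.x) decimal strings as well. A sketch (the parse_number helper is mine, not part of the module):</p>

```python
from fractions import Fraction

def parse_number(s):
    # Fraction("1/2") -> Fraction(1, 2); Fraction("0.1234") -> Fraction(617, 5000)
    return float(Fraction(s))

values = [parse_number(s) for s in ["0.1234", "1/2"]]
print(values)  # [0.1234, 0.5]
```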
| 3
|
2009-02-22T22:13:30Z
|
[
"python",
"rational-numbers"
] |
How to convert rational and decimal number strings to floats in python?
| 575,925
|
| 575,935
|
<p>That's because Python 2 interprets 1 and 2 as integers, so the division truncates. It needs to be 1.0/2.0, or at least one of the operands must be a float.</p>
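<p>In other words, either force float division yourself or let <code>float()</code> handle the plain-decimal case. A minimal sketch of that idea (the to_float helper is mine, not from the answer):</p>

```python
def to_float(s):
    # "1/2" -> divide the two halves as floats; "0.1234" -> plain float()
    if '/' in s:
        num, den = s.split('/', 1)
        return float(num) / float(den)
    return float(s)

print([to_float(s) for s in ["0.1234", "1/2"]])  # [0.1234, 0.5]
```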
| 0
|
2009-02-22T22:15:39Z
|
[
"python",
"rational-numbers"
] |