| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Which version of python is currently best for os x?
| 651,717
|
<p>After going through hell trying to install the latest version of PostgreSQL and psycopg2 today, I'm going for a complete reinstall of Leopard.</p>
<p>I've been sticking with MacPython 2.5 for the past year, but now I'm considering MacPorts, maybe even 2.6.</p>
<p>For me it's most important for Twisted, PIL and psycopg2 to be working without a problem.</p>
<p>Can anyone give some guidelines for what version I should choose, based on experience? </p>
<p>Edit:</p>
<p>OK, I've decided to go without reinstalling the OS. I hacked around to clean up the bad PostgresPlus installation and installed another one. The official Python 2.6.1 package works great; no problem installing it alongside 2.5.2. Psycopg2 works. But, as expected, PIL won't compile.</p>
<p>I guess I'll be switching between the 2.5 from MacPorts and the official 2.6 for different tasks, since I know the MacPorts Python has its issues with some packages.</p>
<p>Another Edit:</p>
<p>I've now compiled PIL. I had to hide the whole MacPorts directory and half the Xcode libraries so it would find the right ones; it wouldn't accept the paths I was feeding it. PIL is notorious for this on Leopard.</p>
| 2
|
2009-03-16T19:00:09Z
| 652,455
|
<p>I am using Python 2.5.1. It's working great for me for general scripting and some CherryPy web projects. </p>
| 0
|
2009-03-16T22:58:35Z
|
[
"python",
"osx"
] |
Which version of python is currently best for os x?
| 651,717
|
<p>After going through hell trying to install the latest version of PostgreSQL and psycopg2 today, I'm going for a complete reinstall of Leopard.</p>
<p>I've been sticking with MacPython 2.5 for the past year, but now I'm considering MacPorts, maybe even 2.6.</p>
<p>For me it's most important for Twisted, PIL and psycopg2 to be working without a problem.</p>
<p>Can anyone give some guidelines for what version I should choose, based on experience? </p>
<p>Edit:</p>
<p>OK, I've decided to go without reinstalling the OS. I hacked around to clean up the bad PostgresPlus installation and installed another one. The official Python 2.6.1 package works great; no problem installing it alongside 2.5.2. Psycopg2 works. But, as expected, PIL won't compile.</p>
<p>I guess I'll be switching between the 2.5 from MacPorts and the official 2.6 for different tasks, since I know the MacPorts Python has its issues with some packages.</p>
<p>Another Edit:</p>
<p>I've now compiled PIL. I had to hide the whole MacPorts directory and half the Xcode libraries so it would find the right ones; it wouldn't accept the paths I was feeding it. PIL is notorious for this on Leopard.</p>
| 2
|
2009-03-16T19:00:09Z
| 866,310
|
<p>I had some trouble installing PIL as well. It compiled and worked with the modification explained in this post: <a href="http://passingcuriosity.com/2009/installing-pil-on-mac-os-x-leopard/" rel="nofollow">http://passingcuriosity.com/2009/installing-pil-on-mac-os-x-leopard/</a>.
After that it worked fine.</p>
| 1
|
2009-05-14T23:02:49Z
|
[
"python",
"osx"
] |
Which version of python is currently best for os x?
| 651,717
|
<p>After going through hell trying to install the latest version of PostgreSQL and psycopg2 today, I'm going for a complete reinstall of Leopard.</p>
<p>I've been sticking with MacPython 2.5 for the past year, but now I'm considering MacPorts, maybe even 2.6.</p>
<p>For me it's most important for Twisted, PIL and psycopg2 to be working without a problem.</p>
<p>Can anyone give some guidelines for what version I should choose, based on experience? </p>
<p>Edit:</p>
<p>OK, I've decided to go without reinstalling the OS. I hacked around to clean up the bad PostgresPlus installation and installed another one. The official Python 2.6.1 package works great; no problem installing it alongside 2.5.2. Psycopg2 works. But, as expected, PIL won't compile.</p>
<p>I guess I'll be switching between the 2.5 from MacPorts and the official 2.6 for different tasks, since I know the MacPorts Python has its issues with some packages.</p>
<p>Another Edit:</p>
<p>I've now compiled PIL. I had to hide the whole MacPorts directory and half the Xcode libraries so it would find the right ones; it wouldn't accept the paths I was feeding it. PIL is notorious for this on Leopard.</p>
| 2
|
2009-03-16T19:00:09Z
| 1,115,314
|
<p>If you're using MacPorts, I recommend installing the <code>python_select</code> package, which facilitates easy switching between different versions, <em>including</em> the built-in Apple versions. It makes life a lot easier.</p>
| 0
|
2009-07-12T04:31:27Z
|
[
"python",
"osx"
] |
Django: Access primary key in models.filefield(upload_to) location
| 651,949
|
<p>I'd like to save my files using the primary key of the entry.</p>
<p>Here is my code:</p>
<pre><code>def get_nzb_filename(instance, filename):
if not instance.pk:
instance.save() # Does not work.
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename)
name = models.CharField(max_length=256)
</code></pre>
<p>I know the first time an object is saved the primary key isn't available, so I'm willing to take the extra hit to save the object just to get the primary key, and then continue on. </p>
<p>The above code doesn't work. It throws the following error:</p>
<pre><code>maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I'm assuming this is an infinite loop. Calling the <code>save</code> method would call the <code>get_nzb_filename</code> method, which would again call the <code>save</code> method, and so on.</p>
<p>I'm using the latest version of the Django trunk.</p>
<p>How can I get the primary key so I can use it to save my uploaded files?</p>
<p><hr /></p>
<p><strong>Update @muhuk:</strong></p>
<p>I like your solution. Can you help me implement it? I've updated my code to the following and the error is <code>'File' object has no attribute 'create'</code>. Perhaps I'm using what you've written out of context?</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save()
return instance
def get_nzb_filename(instance, filename):
if not instance.pk:
create_with_pk(instance)
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename, blank=True, null=True)
name = models.CharField(max_length=256)
</code></pre>
<p>Instead of enforcing the required field in my model I'll do it in my Form class. No problem.</p>
| 9
|
2009-03-16T20:02:38Z
| 652,026
|
<p>It seems you'll need to pre-generate your <code>File</code> models with empty file fields first, then pick one up and save it with the given file object.</p>
<p>You can have a custom manager method like this;</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save() # probably this line is unneeded
return instance
</code></pre>
<p>But this will be troublesome if either of your fields is required. Because you are initially creating a null object, you can't enforce required fields on the model level.</p>
<h3>EDIT</h3>
<p><code>create_with_pk</code> is supposed to be a <a href="http://docs.djangoproject.com/en/dev/topics/db/managers/#adding-extra-manager-methods" rel="nofollow">custom manager method</a>, in your code it is just a regular method. Hence <code>self</code> is meaningless. It is all properly <a href="http://docs.djangoproject.com/en/dev/topics/db/managers/" rel="nofollow">documented</a> with examples.</p>
| 2
|
2009-03-16T20:24:14Z
|
[
"python",
"django",
"file-io"
] |
Django: Access primary key in models.filefield(upload_to) location
| 651,949
|
<p>I'd like to save my files using the primary key of the entry.</p>
<p>Here is my code:</p>
<pre><code>def get_nzb_filename(instance, filename):
if not instance.pk:
instance.save() # Does not work.
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename)
name = models.CharField(max_length=256)
</code></pre>
<p>I know the first time an object is saved the primary key isn't available, so I'm willing to take the extra hit to save the object just to get the primary key, and then continue on. </p>
<p>The above code doesn't work. It throws the following error:</p>
<pre><code>maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I'm assuming this is an infinite loop. Calling the <code>save</code> method would call the <code>get_nzb_filename</code> method, which would again call the <code>save</code> method, and so on.</p>
<p>I'm using the latest version of the Django trunk.</p>
<p>How can I get the primary key so I can use it to save my uploaded files?</p>
<p><hr /></p>
<p><strong>Update @muhuk:</strong></p>
<p>I like your solution. Can you help me implement it? I've updated my code to the following and the error is <code>'File' object has no attribute 'create'</code>. Perhaps I'm using what you've written out of context?</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save()
return instance
def get_nzb_filename(instance, filename):
if not instance.pk:
create_with_pk(instance)
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename, blank=True, null=True)
name = models.CharField(max_length=256)
</code></pre>
<p>Instead of enforcing the required field in my model I'll do it in my Form class. No problem.</p>
| 9
|
2009-03-16T20:02:38Z
| 5,410,715
|
<p>Ty, is there a reason you rolled your own slugify filter?</p>
<p>Django ships with a built-in <code>slugify</code> filter, you can use it like so:</p>
<pre><code>from django.template.defaultfilters import slugify
slug = slugify(some_string)
</code></pre>
<p>Not sure if you were aware it was available to use...</p>
| -1
|
2011-03-23T19:41:58Z
|
[
"python",
"django",
"file-io"
] |
Django: Access primary key in models.filefield(upload_to) location
| 651,949
|
<p>I'd like to save my files using the primary key of the entry.</p>
<p>Here is my code:</p>
<pre><code>def get_nzb_filename(instance, filename):
if not instance.pk:
instance.save() # Does not work.
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename)
name = models.CharField(max_length=256)
</code></pre>
<p>I know the first time an object is saved the primary key isn't available, so I'm willing to take the extra hit to save the object just to get the primary key, and then continue on. </p>
<p>The above code doesn't work. It throws the following error:</p>
<pre><code>maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I'm assuming this is an infinite loop. Calling the <code>save</code> method would call the <code>get_nzb_filename</code> method, which would again call the <code>save</code> method, and so on.</p>
<p>I'm using the latest version of the Django trunk.</p>
<p>How can I get the primary key so I can use it to save my uploaded files?</p>
<p><hr /></p>
<p><strong>Update @muhuk:</strong></p>
<p>I like your solution. Can you help me implement it? I've updated my code to the following and the error is <code>'File' object has no attribute 'create'</code>. Perhaps I'm using what you've written out of context?</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save()
return instance
def get_nzb_filename(instance, filename):
if not instance.pk:
create_with_pk(instance)
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename, blank=True, null=True)
name = models.CharField(max_length=256)
</code></pre>
<p>Instead of enforcing the required field in my model I'll do it in my Form class. No problem.</p>
| 9
|
2009-03-16T20:02:38Z
| 8,655,455
|
<h2>Context</h2>
<p>I had the same issue.
I solved it by giving the current object an id: save the object first, then build the path from it.</p>
<h2>Method</h2>
<ol>
<li>create a custom upload_to function</li>
<li>detect if object has pk</li>
<li>if not, save instance first, retrieve the pk and assign it to the object</li>
<li>generate your path with that</li>
</ol>
<h2>Sample working code:</h2>
<pre><code>class Image(models.Model):
def upload_path(self, filename):
if not self.pk:
i = Image.objects.create()
i.save()
self.id = self.pk = i.id
return "my/path/%s" % str(self.id)
file = models.ImageField(upload_to=upload_path)
</code></pre>
| 1
|
2011-12-28T11:41:13Z
|
[
"python",
"django",
"file-io"
] |
Django: Access primary key in models.filefield(upload_to) location
| 651,949
|
<p>I'd like to save my files using the primary key of the entry.</p>
<p>Here is my code:</p>
<pre><code>def get_nzb_filename(instance, filename):
if not instance.pk:
instance.save() # Does not work.
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename)
name = models.CharField(max_length=256)
</code></pre>
<p>I know the first time an object is saved the primary key isn't available, so I'm willing to take the extra hit to save the object just to get the primary key, and then continue on. </p>
<p>The above code doesn't work. It throws the following error:</p>
<pre><code>maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I'm assuming this is an infinite loop. Calling the <code>save</code> method would call the <code>get_nzb_filename</code> method, which would again call the <code>save</code> method, and so on.</p>
<p>I'm using the latest version of the Django trunk.</p>
<p>How can I get the primary key so I can use it to save my uploaded files?</p>
<p><hr /></p>
<p><strong>Update @muhuk:</strong></p>
<p>I like your solution. Can you help me implement it? I've updated my code to the following and the error is <code>'File' object has no attribute 'create'</code>. Perhaps I'm using what you've written out of context?</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save()
return instance
def get_nzb_filename(instance, filename):
if not instance.pk:
create_with_pk(instance)
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename, blank=True, null=True)
name = models.CharField(max_length=256)
</code></pre>
<p>Instead of enforcing the required field in my model I'll do it in my Form class. No problem.</p>
| 9
|
2009-03-16T20:02:38Z
| 16,574,947
|
<p>You can do this by setting <code>upload_to</code> to a temporary location and by creating a custom save method.</p>
<p>The save method should call super first to generate the primary key (this will save the file to the temporary location). Then you can rename the file using the primary key and move it to its proper location. Call super one more time to save the changes, and you are good to go! This worked well for me when I came across this exact issue.</p>
<p>For example:</p>
<pre><code>class File( models.Model ):
nzb = models.FileField( upload_to='temp' )
def save( self, *args, **kwargs ):
# Call save first, to create a primary key
super( File, self ).save( *args, **kwargs )
nzb = self.nzb
if nzb:
# Create new filename, using primary key and file extension
oldfile = self.nzb.name
dot = oldfile.rfind( '.' )
newfile = str( self.pk ) + oldfile[dot:]
# Create new file and remove old one
if newfile != oldfile:
self.nzb.storage.delete( newfile )
self.nzb.storage.save( newfile, nzb )
self.nzb.name = newfile
self.nzb.close()
self.nzb.storage.delete( oldfile )
# Save again to keep changes
super( File, self ).save( *args, **kwargs )
</code></pre>
| 2
|
2013-05-15T21:03:44Z
|
[
"python",
"django",
"file-io"
] |
Simultaneously inserting and extending a list?
| 652,184
|
<p>Is there a better way of simultaneously inserting and extending a list? Here is an ugly example of how I'm currently doing it (let's say I want to insert '2.4' and '2.6' after the '2' element):</p>
<pre><code>>>> a = ['1', '2', '3', '4']
>>> b = a[:a.index('2')+1] + ['2.4', '2.6'] + a[a.index('2'):]
>>> b
<<< ['1', '2', '2.4', '2.6', '3', '4']
</code></pre>
| 5
|
2009-03-16T21:17:23Z
| 652,200
|
<pre><code>>>> a = ['1', '2', '3', '4']
>>> a
['1', '2', '3', '4']
>>> i = a.index('2') + 1 # after the item '2'
>>> a[i:i] = ['2.4', '2.6']
>>> a
['1', '2', '2.4', '2.6', '3', '4']
>>>
</code></pre>
| 14
|
2009-03-16T21:23:15Z
|
[
"python"
] |
Simultaneously inserting and extending a list?
| 652,184
|
<p>Is there a better way of simultaneously inserting and extending a list? Here is an ugly example of how I'm currently doing it (let's say I want to insert '2.4' and '2.6' after the '2' element):</p>
<pre><code>>>> a = ['1', '2', '3', '4']
>>> b = a[:a.index('2')+1] + ['2.4', '2.6'] + a[a.index('2'):]
>>> b
<<< ['1', '2', '2.4', '2.6', '3', '4']
</code></pre>
| 5
|
2009-03-16T21:17:23Z
| 652,225
|
<p>I'm not entirely clear on what you're doing; if you want to add values, and have the list remain in order, it's cleaner (and probably still faster) to just sort the whole thing:</p>
<pre><code>a.extend(['2.4', '2.6'])
a.sort()
</code></pre>
| 2
|
2009-03-16T21:29:32Z
|
[
"python"
] |
Simultaneously inserting and extending a list?
| 652,184
|
<p>Is there a better way of simultaneously inserting and extending a list? Here is an ugly example of how I'm currently doing it (let's say I want to insert '2.4' and '2.6' after the '2' element):</p>
<pre><code>>>> a = ['1', '2', '3', '4']
>>> b = a[:a.index('2')+1] + ['2.4', '2.6'] + a[a.index('2'):]
>>> b
<<< ['1', '2', '2.4', '2.6', '3', '4']
</code></pre>
| 5
|
2009-03-16T21:17:23Z
| 652,244
|
<p>You can easily insert a single element using <code>list.insert(i, x)</code>, which Python defines as <code>s[i:i] = [x]</code>.</p>
<pre><code>a = ['1', '2', '3', '4']
for elem in reversed(['2.4', '2.6']):
    a.insert(a.index('2')+1, elem)
</code></pre>
<p>If you want to insert a list, you can make your own function that omits the []:</p>
<pre><code>def iextend(lst, i, x):
lst[i:i] = x
a = ['1', '2', '3', '4']
iextend(a, a.index('2')+1, ['2.4', '2.6'])
# a = ['1', '2', '2.4', '2.6', '3', '4']
</code></pre>
| 5
|
2009-03-16T21:34:32Z
|
[
"python"
] |
Simultaneously inserting and extending a list?
| 652,184
|
<p>Is there a better way of simultaneously inserting and extending a list? Here is an ugly example of how I'm currently doing it (let's say I want to insert '2.4' and '2.6' after the '2' element):</p>
<pre><code>>>> a = ['1', '2', '3', '4']
>>> b = a[:a.index('2')+1] + ['2.4', '2.6'] + a[a.index('2'):]
>>> b
<<< ['1', '2', '2.4', '2.6', '3', '4']
</code></pre>
| 5
|
2009-03-16T21:17:23Z
| 652,253
|
<p>Have a look at the <a href="http://docs.python.org/library/bisect.html" rel="nofollow">bisect module</a>. I think it does what you want.</p>
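<p>A minimal sketch of the <code>bisect</code> approach (an assumption on my part, not code from the answer): <code>insort</code> places each element at its sorted position, so the list stays ordered as you add to it. Note this relies on the elements comparing in the desired order; plain string comparison happens to work for this example.</p>

```python
import bisect

a = ['1', '2', '3', '4']
for x in ['2.4', '2.6']:
    # insert x at its sorted position (binary search, then O(n) insert)
    bisect.insort(a, x)

print(a)  # ['1', '2', '2.4', '2.6', '3', '4']
```

<p>Unlike the slice-assignment answers, this never needs the index of '2'; it only needs the list to already be sorted.</p>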
| 2
|
2009-03-16T21:39:56Z
|
[
"python"
] |
Is it possible to create anonymous objects in Python?
| 652,276
|
<p>I'm debugging some Python that takes, as input, a list of objects, each with some attributes.</p>
<p>I'd like to hard-code some test values -- let's say, a list of four objects whose "foo" attribute is set to some number.</p>
<p>Is there a more concise way than this?</p>
<pre><code>x1.foo = 1
x2.foo = 2
x3.foo = 3
x4.foo = 4
myfunc([x1, x2, x3, x4])
</code></pre>
<p>Ideally, I'd just like to be able to say something like:</p>
<pre><code>myfunc([<foo=1>, <foo=2>, <foo=3>, <foo=4>])
</code></pre>
<p>(Obviously, that is made-up syntax. But is there something similar that really works?)</p>
<p>Note: This will never be checked in. It's just some throwaway debug code. So don't worry about readability or maintainability.</p>
| 27
|
2009-03-16T21:50:45Z
| 652,299
|
<p>Have a look at this:</p>
<pre><code>
class MiniMock(object):
def __new__(cls, **attrs):
result = object.__new__(cls)
result.__dict__ = attrs
return result
def print_foo(x):
print x.foo
print_foo(MiniMock(foo=3))
</code></pre>
| 15
|
2009-03-16T21:59:28Z
|
[
"python",
"anonymous-types"
] |
Is it possible to create anonymous objects in Python?
| 652,276
|
<p>I'm debugging some Python that takes, as input, a list of objects, each with some attributes.</p>
<p>I'd like to hard-code some test values -- let's say, a list of four objects whose "foo" attribute is set to some number.</p>
<p>Is there a more concise way than this?</p>
<pre><code>x1.foo = 1
x2.foo = 2
x3.foo = 3
x4.foo = 4
myfunc([x1, x2, x3, x4])
</code></pre>
<p>Ideally, I'd just like to be able to say something like:</p>
<pre><code>myfunc([<foo=1>, <foo=2>, <foo=3>, <foo=4>])
</code></pre>
<p>(Obviously, that is made-up syntax. But is there something similar that really works?)</p>
<p>Note: This will never be checked in. It's just some throwaway debug code. So don't worry about readability or maintainability.</p>
| 27
|
2009-03-16T21:50:45Z
| 652,417
|
<p>I like Tetha's solution, but it's unnecessarily complex.</p>
<p>Here's something simpler:</p>
<pre><code>>>> class MicroMock(object):
... def __init__(self, **kwargs):
... self.__dict__.update(kwargs)
...
>>> def print_foo(x):
... print x.foo
...
>>> print_foo(MicroMock(foo=3))
3
</code></pre>
| 34
|
2009-03-16T22:40:38Z
|
[
"python",
"anonymous-types"
] |
Is it possible to create anonymous objects in Python?
| 652,276
|
<p>I'm debugging some Python that takes, as input, a list of objects, each with some attributes.</p>
<p>I'd like to hard-code some test values -- let's say, a list of four objects whose "foo" attribute is set to some number.</p>
<p>Is there a more concise way than this?</p>
<pre><code>x1.foo = 1
x2.foo = 2
x3.foo = 3
x4.foo = 4
myfunc([x1, x2, x3, x4])
</code></pre>
<p>Ideally, I'd just like to be able to say something like:</p>
<pre><code>myfunc([<foo=1>, <foo=2>, <foo=3>, <foo=4>])
</code></pre>
<p>(Obviously, that is made-up syntax. But is there something similar that really works?)</p>
<p>Note: This will never be checked in. It's just some throwaway debug code. So don't worry about readability or maintainability.</p>
| 27
|
2009-03-16T21:50:45Z
| 17,700,031
|
<p>Non classy:</p>
<pre><code>def mock(**attrs):
r = lambda:0
r.__dict__ = attrs
return r
def test(a, b, c, d):
print a.foo, b.foo, c.foo, d.foo
test(*[mock(foo=i) for i in xrange(1,5)])
# or
test(mock(foo=1), mock(foo=2), mock(foo=3), mock(foo=4))
</code></pre>
| 3
|
2013-07-17T12:40:53Z
|
[
"python",
"anonymous-types"
] |
Is it possible to create anonymous objects in Python?
| 652,276
|
<p>I'm debugging some Python that takes, as input, a list of objects, each with some attributes.</p>
<p>I'd like to hard-code some test values -- let's say, a list of four objects whose "foo" attribute is set to some number.</p>
<p>Is there a more concise way than this?</p>
<pre><code>x1.foo = 1
x2.foo = 2
x3.foo = 3
x4.foo = 4
myfunc([x1, x2, x3, x4])
</code></pre>
<p>Ideally, I'd just like to be able to say something like:</p>
<pre><code>myfunc([<foo=1>, <foo=2>, <foo=3>, <foo=4>])
</code></pre>
<p>(Obviously, that is made-up syntax. But is there something similar that really works?)</p>
<p>Note: This will never be checked in. It's just some throwaway debug code. So don't worry about readability or maintainability.</p>
| 27
|
2009-03-16T21:50:45Z
| 29,480,317
|
<p>I found this: <a href="http://www.hydrogen18.com/blog/python-anonymous-objects.html">http://www.hydrogen18.com/blog/python-anonymous-objects.html</a>, and in my limited testing it seems like it works:</p>
<pre><code>>>> obj = type('',(object,),{"foo": 1})()
>>> obj.foo
1
</code></pre>
| 14
|
2015-04-06T21:54:38Z
|
[
"python",
"anonymous-types"
] |
Is it possible to create anonymous objects in Python?
| 652,276
|
<p>I'm debugging some Python that takes, as input, a list of objects, each with some attributes.</p>
<p>I'd like to hard-code some test values -- let's say, a list of four objects whose "foo" attribute is set to some number.</p>
<p>Is there a more concise way than this?</p>
<pre><code>x1.foo = 1
x2.foo = 2
x3.foo = 3
x4.foo = 4
myfunc([x1, x2, x3, x4])
</code></pre>
<p>Ideally, I'd just like to be able to say something like:</p>
<pre><code>myfunc([<foo=1>, <foo=2>, <foo=3>, <foo=4>])
</code></pre>
<p>(Obviously, that is made-up syntax. But is there something similar that really works?)</p>
<p>Note: This will never be checked in. It's just some throwaway debug code. So don't worry about readability or maintainability.</p>
| 27
|
2009-03-16T21:50:45Z
| 35,059,764
|
<p>Another obvious hack:</p>
<pre><code>class foo1: x=3; y='y'
class foo2: y=5; x=6
print(foo1.x, foo2.y)
</code></pre>
<p>But for your exact use case, calling a function with anonymous objects directly, I don't know any one-liner less verbose than</p>
<pre><code>myfunc(type('', (object,), {'foo': 3},), type('', (object,), {'foo': 4}))
</code></pre>
<p>Ugly, does the job, but not really.</p>
| 2
|
2016-01-28T11:06:48Z
|
[
"python",
"anonymous-types"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 652,327
|
<p>The training process of a back-propagation neural network works by minimizing the error from the optimal result. But having a trained neural network find the minimum of an unknown function would be pretty hard.</p>
<p>If you restrict the problem to a specific function class, it could work, and be pretty quick too. Neural networks are good at finding patterns, if there are any.</p>
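<p>To illustrate the "training works by minimizing the error" point with a deliberately tiny sketch (a hypothetical one-weight model, not a full network): gradient descent on the squared error drives the weight toward the value that fits the data.</p>

```python
# Fit y = w * x to data generated with w_true = 3.0, by minimizing
# the summed squared error with plain gradient descent. This is the
# same minimization principle back-propagation uses, shrunk to one weight.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = 0.0
lr = 0.01
for _ in range(200):
    # gradient of sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad

print(round(w, 4))  # converges to ~3.0
```

<p>Training minimizes the error of the model; it does not, by itself, minimize the function the model represents, which is the distinction the answer is drawing.</p>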
| 1
|
2009-03-16T22:08:40Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 652,348
|
<p>They're pretty bad for the purpose; one of the big problems of neural networks is that they get stuck in local minima. You might want to look into support vector machines instead.</p>
| 0
|
2009-03-16T22:14:42Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 652,351
|
<p>It sounds to me like this is a problem more suited to <a href="http://en.wikipedia.org/wiki/Genetic%5Falgorithm" rel="nofollow">genetic algorithms</a> than neural networks. Neural nets tend to need a bounded problem to solve, requiring training against known data, etc. - whereas genetic algorithms work by finding better and better approximate solutions to a problem without requiring training.</p>
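<p>A toy sketch of the genetic-algorithm idea, under assumed details not in the answer (minimize f(x) = (x - 2)^2, selection plus Gaussian mutation only, no crossover): no training data is needed, just better and better candidates.</p>

```python
import random

def f(x):
    return (x - 2.0) ** 2  # function to minimize; optimum at x = 2

random.seed(0)  # reproducible run
pop = [random.uniform(-10, 10) for _ in range(20)]

for _ in range(100):
    # selection: keep the fittest half of the population
    pop.sort(key=f)
    survivors = pop[:10]
    # mutation: each survivor spawns a slightly perturbed child
    children = [x + random.gauss(0, 0.5) for x in survivors]
    pop = survivors + children

best = min(pop, key=f)
print(round(best, 2))  # close to 2.0
```

<p>Real GAs add crossover and smarter selection, but even this stripped-down loop shows the "no training required" contrast with neural nets.</p>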
| 4
|
2009-03-16T22:15:08Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 652,362
|
<p>Neural networks are classifiers. They separate two classes of data elements. They (usually) learn this separation from pre-classified data elements. Thus, I say: no, unless you stretch them well beyond their intended use.</p>
| 1
|
2009-03-16T22:17:49Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 709,482
|
<p>Actually, you could use an NN to find a function minimum, but it would work best combined with the genetic algorithms mentioned by <a href="http://#652351" rel="nofollow">Erik</a>.</p>
<p>Basically, NNs tend to find solutions which correspond to a local minimum or maximum of a function, but in doing so are pretty precise (to comment on <a href="http://#652362" rel="nofollow">Tetha</a>'s answer stating that NNs are classifiers: you can use one to classify whether an input is a minimum or not).</p>
<p>In contrast, genetic algorithms tend to find a more universal solution across the whole range of possible inputs, but then give you approximate results.</p>
<p>The solution is to combine the two worlds:</p>
<ol>
<li>Get the approximate result from genetic algorithms</li>
<li>Use that result to find the more precise answer using an NN</li>
</ol>
| 0
|
2009-04-02T12:17:53Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 13,611,588
|
<p>Back-propagation works by minimizing the error. However, you can really minimize whatever you want. So, you could use back-prop-like update rules to find the Artificial Neural Network inputs that minimize the output.</p>
<p>This is a big question; sorry for the short answer. I should also add that my suggested approach sounds pretty inefficient compared to more established methods, and would only find a local minimum.</p>
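<p>A sketch of that idea stripped of any NN machinery (my own illustration, with an assumed f(x) = (x - 1)^2 + 0.5): treat the "network" as a fixed differentiable function and apply gradient-descent updates to its <em>input</em>, here with a numerical gradient standing in for back-propagation.</p>

```python
def f(x):
    return (x - 1.0) ** 2 + 0.5  # fixed "network output"; minimum at x = 1

def num_grad(g, x, h=1e-5):
    # central-difference estimate of dg/dx
    return (g(x + h) - g(x - h)) / (2 * h)

x = 5.0   # starting input
lr = 0.1
for _ in range(100):
    x -= lr * num_grad(f, x)  # back-prop-like update rule applied to the input

print(round(x, 3))  # approaches 1.0, a (local) minimum
```

<p>With a real NN you would back-propagate through the trained weights to get the gradient with respect to the input instead of differencing numerically, but the update rule is the same, and so is the limitation: it only descends to the nearest local minimum.</p>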
| 2
|
2012-11-28T18:04:47Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 18,789,493
|
<p>You can teach a NN to approximate a function. If the function is differentiable, or your NN has more than one hidden layer, you can also teach it to give the derivative of the function.</p>
<p>Example:</p>
<pre><code>You can train a 1 input 1 output NN to give output=sin(input)
You can train it also give output=cos(input) which is derivative of sin()
You get a minima/maxima of sin when you equate cos to zero.
Scan for zero output while giving many values from input. 0=cos() -> minima of sin
</code></pre>
<p>When you reach zero output, you know that the input value is a minimum (or maximum) of the function.</p>
<p>Training takes little time; sweeping for the zero takes longer.</p>
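<p>The scan-for-zero step can be sketched without any network at all, using <code>math.cos</code> as a stand-in for the trained derivative model (all names here are illustrative):</p>

```python
import math

# Stand-in for the trained derivative network: here simply cos() itself.
derivative = math.cos

def scan_for_zero(f, lo, hi, steps=100000):
    """Sweep the input range and report sign changes of f (approximate roots)."""
    roots = []
    prev_x, prev_y = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        y = f(x)
        if prev_y == 0 or prev_y * y < 0:
            roots.append((prev_x + x) / 2)   # midpoint of the bracketing step
        prev_x, prev_y = x, y
    return roots

roots = scan_for_zero(derivative, 0, 2 * math.pi)
# cos is zero at pi/2 (maximum of sin) and 3*pi/2 (minimum of sin);
# the sign of the second derivative (-sin) separates the two cases.
minima = [x for x in roots if -math.sin(x) > 0]
```

<p>With a real trained network you would call its prediction instead of <code>math.cos</code>, and the accuracy of the sweep is limited by the step size.</p>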
| 0
|
2013-09-13T15:02:25Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
Can a neural network be used to find a functions minimum(a)?
| 652,283
|
<p>I had been interested in neural networks for a bit and thought about using one in python for a light project that compares various minimization techniques in a time domain (which is fastest).</p>
<p>Then I realized I didn't even know if a NN is good for minimization. What do you think?</p>
| 10
|
2009-03-16T21:53:30Z
| 24,077,082
|
<p>Although this comes a bit too late for the author of this question,
maybe somebody who reads this wants to test some optimization algorithms...</p>
<p>If you are working with regressions in machine learning (NN, SVM, multiple linear regression, k-nearest neighbors) and you want to minimize (maximize) your regression function, this is actually possible, but the efficiency of such algorithms depends on the smoothness (step size, etc.) of the region you are searching in.</p>
<p>In order to construct such "machine learning regressions" you could use <a href="http://scikit-learn.org/stable/" rel="nofollow">scikit-learn</a>. You have to train and validate your regression, for example a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html" rel="nofollow">Support Vector Regression</a>
("fit" method):</p>
<pre><code>SVR.fit(Sm_Data_X,Sm_Data_y)
</code></pre>
<p>Then you have to define a function which returns a prediction of your regression for an array "x".</p>
<pre><code>def fun(x):
return SVR.predict(x)
</code></pre>
<p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow">scipy.optimize.minimize</a> for the optimization. See the examples behind the doc links. </p>
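<p>Whichever optimizer you use, the pattern is the same: wrap the trained model's prediction in a plain Python callable and hand it to a minimizer. A dependency-free sketch, with a hypothetical <code>surrogate()</code> standing in for <code>SVR.predict</code> and a golden-section search standing in for <code>scipy.optimize.minimize</code>:</p>

```python
def surrogate(x):
    """Stand-in for a trained model's predict(): a smooth 1-D regression surface."""
    return (x - 1.7) ** 2 + 0.5

def golden_section(f, lo, hi, tol=1e-6):
    """Derivative-free 1-D minimization; works when the model is a black box."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

x_best = golden_section(surrogate, -10, 10)
```

<p>This re-evaluates the function twice per iteration for brevity; a production golden-section search caches one of the two evaluations.</p>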
| 0
|
2014-06-06T08:08:24Z
|
[
"python",
"artificial-intelligence",
"neural-network",
"minimization"
] |
sorting a list of dictionary values by date in python
| 652,291
|
<p>I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.</p>
<p>ex: </p>
<pre><code>data = "data from database"
list = []
for x in data:
dict = {'title':title, 'date': x.created_on}
list.append(dict)
</code></pre>
<p>I want to sort the list in reverse order by value of 'date'</p>
| 15
|
2009-03-16T21:56:51Z
| 652,302
|
<p>Sort the data (or a copy of the data) directly and build the list of dicts afterwards. Sort using the built-in <code>sorted</code> with an appropriate key function (probably <code>operator.attrgetter</code>).</p>
| 2
|
2009-03-16T22:02:45Z
|
[
"python",
"django"
] |
sorting a list of dictionary values by date in python
| 652,291
|
<p>I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.</p>
<p>ex: </p>
<pre><code>data = "data from database"
list = []
for x in data:
dict = {'title':title, 'date': x.created_on}
list.append(dict)
</code></pre>
<p>I want to sort the list in reverse order by value of 'date'</p>
| 15
|
2009-03-16T21:56:51Z
| 652,347
|
<p>You can do it this way:</p>
<pre><code>list.sort(key=lambda item:item['date'], reverse=True)
</code></pre>
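<p>For example, with <code>datetime.date</code> values (which compare chronologically) the newest entry ends up first:</p>

```python
import datetime

rows = [
    {'title': 'first post',  'date': datetime.date(2009, 1, 5)},
    {'title': 'second post', 'date': datetime.date(2009, 3, 1)},
    {'title': 'third post',  'date': datetime.date(2009, 2, 14)},
]

# Newest first: date objects compare chronologically, so reverse=True suffices.
rows.sort(key=lambda item: item['date'], reverse=True)

titles = [row['title'] for row in rows]
```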
| 35
|
2009-03-16T22:14:17Z
|
[
"python",
"django"
] |
sorting a list of dictionary values by date in python
| 652,291
|
<p>I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.</p>
<p>ex: </p>
<pre><code>data = "data from database"
list = []
for x in data:
dict = {'title':title, 'date': x.created_on}
list.append(dict)
</code></pre>
<p>I want to sort the list in reverse order by value of 'date'</p>
| 15
|
2009-03-16T21:56:51Z
| 652,463
|
<pre><code>from operator import itemgetter
your_list.sort(key=itemgetter('date'), reverse=True)
</code></pre>
<h3>Related notes</h3>
<ul>
<li><p>don't use <code>list</code>, <code>dict</code> as variable names, they are builtin names in Python. It makes your code hard to read.</p></li>
<li><p>you might need to replace dictionary by <code>tuple</code> or <a href="http://docs.python.org/library/collections.html#collections.namedtuple"><code>collections.namedtuple</code></a> or custom struct-like class depending on the context</p>
<pre><code>from collections import namedtuple
from operator import itemgetter
Row = namedtuple('Row', 'title date')
rows = [Row(row.title, row.created_on) for row in data]
rows.sort(key=itemgetter(1), reverse=True)
</code></pre></li>
</ul>
<p>Example:</p>
<pre><code>>>> lst = [Row('a', 1), Row('b', 2)]
>>> lst.sort(key=itemgetter(1), reverse=True)
>>> lst
[Row(title='b', date=2), Row(title='a', date=1)]
</code></pre>
<p>Or </p>
<pre><code>>>> from operator import attrgetter
>>> lst = [Row('a', 1), Row('b', 2)]
>>> lst.sort(key=attrgetter('date'), reverse=True)
>>> lst
[Row(title='b', date=2), Row(title='a', date=1)]
</code></pre>
<p>Here's how <code>namedtuple</code> looks inside:</p>
<pre><code>>>> Row = namedtuple('Row', 'title date', verbose=True)
class Row(tuple):
'Row(title, date)'
__slots__ = ()
_fields = ('title', 'date')
def __new__(cls, title, date):
return tuple.__new__(cls, (title, date))
@classmethod
def _make(cls, iterable, new=tuple.__new__, len=len):
'Make a new Row object from a sequence or iterable'
result = new(cls, iterable)
if len(result) != 2:
raise TypeError('Expected 2 arguments, got %d' % len(result))
return result
def __repr__(self):
return 'Row(title=%r, date=%r)' % self
def _asdict(t):
'Return a new dict which maps field names to their values'
return {'title': t[0], 'date': t[1]}
def _replace(self, **kwds):
'Return a new Row object replacing specified fields with new values'
result = self._make(map(kwds.pop, ('title', 'date'), self))
if kwds:
raise ValueError('Got unexpected field names: %r' % kwds.keys())
return result
def __getnewargs__(self):
return tuple(self)
title = property(itemgetter(0))
date = property(itemgetter(1))
</code></pre>
| 13
|
2009-03-16T23:01:22Z
|
[
"python",
"django"
] |
sorting a list of dictionary values by date in python
| 652,291
|
<p>I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.</p>
<p>ex: </p>
<pre><code>data = "data from database"
list = []
for x in data:
dict = {'title':title, 'date': x.created_on}
list.append(dict)
</code></pre>
<p>I want to sort the list in reverse order by value of 'date'</p>
| 15
|
2009-03-16T21:56:51Z
| 652,478
|
<p>If you're into the whole brevity thing:</p>
<pre><code>data = "data from database"
sorted_data = sorted(
[{'title': x.title, 'date': x.created_on} for x in data],
key=operator.itemgetter('date'),
reverse=True)
</code></pre>
| 1
|
2009-03-16T23:06:37Z
|
[
"python",
"django"
] |
sorting a list of dictionary values by date in python
| 652,291
|
<p>I have a list and I am appending a dictionary to it as I loop through my data...and I would like to sort by one of the dictionary keys.</p>
<p>ex: </p>
<pre><code>data = "data from database"
list = []
for x in data:
dict = {'title':title, 'date': x.created_on}
list.append(dict)
</code></pre>
<p>I want to sort the list in reverse order by value of 'date'</p>
| 15
|
2009-03-16T21:56:51Z
| 654,675
|
<p>I actually had this almost exact question yesterday and <a href="http://stackoverflow.com/questions/72899/in-python-how-do-i-sort-a-list-of-dictionaries-by-values-of-the-dictionary">solved it using search</a>. The best answer applied to your question is this:</p>
<pre><code>from operator import itemgetter
list.sort(key=itemgetter('date'), reverse=True)
</code></pre>
| 2
|
2009-03-17T15:12:41Z
|
[
"python",
"django"
] |
Check if value exists in nested lists
| 652,423
|
<p>in my list:</p>
<pre><code>animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
<p>The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?</p>
| 2
|
2009-03-16T22:41:50Z
| 652,439
|
<p>You're using the wrong data type. Use a <code>dict</code> of <code>set</code>s instead:</p>
<pre><code>def add(key, value, userdict):
userdict.setdefault(key, set())
userdict[key].add(value)
</code></pre>
<p>Usage:</p>
<pre><code>animaldict = {}
add('bird', 'peck', animaldict)
add('bird', 'screech', animaldict)
add('turtle', 'hide', animaldict)
</code></pre>
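<p>Running the above with a deliberate duplicate shows why the <code>set</code> makes a separate "check before adding" step unnecessary:</p>

```python
def add(key, value, userdict):
    userdict.setdefault(key, set())
    userdict[key].add(value)

animaldict = {}
add('bird', 'peck', animaldict)
add('bird', 'peck', animaldict)     # duplicate: the set silently absorbs it
add('bird', 'screech', animaldict)
add('turtle', 'hide', animaldict)
```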
| 6
|
2009-03-16T22:49:44Z
|
[
"python"
] |
Check if value exists in nested lists
| 652,423
|
<p>in my list:</p>
<pre><code>animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
<p>The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?</p>
| 2
|
2009-03-16T22:41:50Z
| 652,444
|
<pre><code>animals_dict = dict(animals)
def add(key, action):
animals_dict.setdefault(key, [])
if action not in animals_dict[key]:
animals_dict[key].append(action)
</code></pre>
<p>(Updated to use <code>setdefault</code> - nice one @recursive)</p>
| 0
|
2009-03-16T22:52:15Z
|
[
"python"
] |
Check if value exists in nested lists
| 652,423
|
<p>in my list:</p>
<pre><code>animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
<p>The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?</p>
| 2
|
2009-03-16T22:41:50Z
| 652,445
|
<p>While it is possible to construct a generic function that finds the animal in the list using a.index or testing with "dog" in animals, you really want a dictionary here, otherwise the add function will scale abysmally as more animals are added:</p>
<pre><code>animals = {'dog':set(['bite']),
'cat':set(['bite', 'scratch'])}
</code></pre>
<p>You can then "one-shot" the add function using setdefault:</p>
<pre><code>animals.setdefault('dog', set()).add('bite')
</code></pre>
<p>It will create the 'dog' key if it doesn't exist, and since setdefault returns the set that either exists or was just created, you can then add the bite action. Sets ensure that there are no duplicates automatically.</p>
| 4
|
2009-03-16T22:53:08Z
|
[
"python"
] |
Check if value exists in nested lists
| 652,423
|
<p>in my list:</p>
<pre><code>animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
<p>The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?</p>
| 2
|
2009-03-16T22:41:50Z
| 652,446
|
<p>You really should use a dictionary for this purpose. Or alternatively a class <code>Animal</code>.</p>
<p>You could improve your code like this:</p>
<pre><code>if not any((animal[0] == "bird") for animal in animals):
# append "bird" to animals
</code></pre>
| 0
|
2009-03-16T22:54:02Z
|
[
"python"
] |
Check if value exists in nested lists
| 652,423
|
<p>in my list:</p>
<pre><code>animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
<p>The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?</p>
| 2
|
2009-03-16T22:41:50Z
| 652,453
|
<p>Based on recursive's solution, in Python 2.5 or newer you can use the <code>defaultdict</code> class, something like this:</p>
<pre><code>from collections import defaultdict
a = defaultdict(set)
def add(animal, behavior):
a[animal].add(behavior)
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
| 4
|
2009-03-16T22:58:03Z
|
[
"python"
] |
Check if value exists in nested lists
| 652,423
|
<p>in my list:</p>
<pre><code>animals = [ ['dog', ['bite'] ],
['cat', ['bite', 'scratch'] ],
['bird', ['peck', 'bite'] ], ]
add('bird', 'peck')
add('bird', 'screech')
add('turtle', 'hide')
</code></pre>
<p>The add function should check that the animal and action haven't been added before adding them to the list. Is there a way to accomplish this without nesting a loop for each step into the list?</p>
| 2
|
2009-03-16T22:41:50Z
| 652,803
|
<p>While I agree with the others re. your choice of data structure, here is an answer to your question:</p>
<pre><code>def add(name, action):
for animal in animals:
if animal[0] == name:
if action not in animal[1]:
animal[1].append(action)
return
else:
animals.append([name, [action]])
</code></pre>
<p>The <code>for</code> loop is an inevitable consequence of your data structure, which is why everyone is advising you to consider dictionaries instead.</p>
| 0
|
2009-03-17T02:05:44Z
|
[
"python"
] |
Separating Models and Request Handlers In Google App Engine
| 652,449
|
<p>I'd like to move my models to a separate directory, similar to the way it's done with Rails to cut down on code clutter. Is there any way to do this easily?</p>
<p>Thanks,
Collin</p>
| 2
|
2009-03-16T22:55:07Z
| 652,458
|
<p>I assume you're using the basic webapp framework and not Django or something fancy. In that case just create a subdirectory called <code>models</code>. Put any Python files you use for your models in there. Also create one blank file in this folder called <code>__init__.py</code>.</p>
<p>Then in your main.py or "controller" or what have you, put:</p>
<pre><code>import models
</code></pre>
<p>at the top.</p>
<p>You just created a <a href="http://docs.python.org/tutorial/modules.html#packages" rel="nofollow">python package</a>.</p>
| 7
|
2009-03-16T22:59:44Z
|
[
"python",
"google-app-engine",
"model"
] |
Separating Models and Request Handlers In Google App Engine
| 652,449
|
<p>I'd like to move my models to a separate directory, similar to the way it's done with Rails to cut down on code clutter. Is there any way to do this easily?</p>
<p>Thanks,
Collin</p>
| 2
|
2009-03-16T22:55:07Z
| 1,566,716
|
<p>Brandon's answer is what I do. Furthermore, I rather like Rails's convention of one model per file. I don't stick to it completely, but that is my basic pattern, especially since Python tends to encourage more-but-simpler lines of code than Ruby.</p>
<p>So what I do is I make models a package too:</p>
<pre><code>models/
models/__init__.py
models/user.py
models/item.py
models/blog_post.py
</code></pre>
<p>In the main .py files I put my basic class definition, plus perhaps some helper functions (Python's module system makes it much safer to keep quickie helper functions coupled to the class definition). And my <code>__init__.py</code> stitches them all together:</p>
<pre><code>"""The application models"""
from user import User
from item import Item
from blog_post import BlogPost
</code></pre>
<p>It's slightly redundant but I have lots of control of the namespace.</p>
| 1
|
2009-10-14T14:37:44Z
|
[
"python",
"google-app-engine",
"model"
] |
Writing a binary buffer to a file in python
| 652,535
|
<p>I have some python code that:</p>
<ol>
<li>Takes a BLOB from a database which is compressed.</li>
<li>Calls an uncompression routine in C that uncompresses the data.</li>
<li>Writes the uncompressed data to a file.</li>
</ol>
<p>It uses <a href="http://docs.python.org/library/ctypes.html" rel="nofollow">ctypes</a> to call the C routine, which is in a shared library.</p>
<p>This mostly works, except for the actual writing to the file. To uncompress, I get the data uncompressed into a python buffer, created using the ctypes <code>create_string_buffer</code> method:</p>
<p><code>c_uncompData_p = create_string_buffer(64000)</code></p>
<p>so the uncompression call is like this:</p>
<p><code>c_uncompSize = mylib.explodeCharBuffer (c_data_p, c_data_len, c_uncompData_p)</code></p>
<p>The size of the resulting uncompressed data is returned as the return value.</p>
<p>But... I have no idea how to force python on only write <code>c_uncompSize</code> bytes out - if I do:</p>
<p><code>myfile.write (c_uncompData_p.raw)</code></p>
<p>it writes the whole 64k buffer out (the data is binary - so it is not null terminated).</p>
<p>So, my question is - using Python 2.5 how do I get c_uncompSize bytes printed out, rather than the whole 64k?</p>
<p>Thanks
Jamie</p>
| 6
|
2009-03-16T23:30:36Z
| 652,565
|
<p>Slicing works for c_char_Arrays too:</p>
<pre><code>myfile.write(c_uncompData_p[:c_uncompSize])
</code></pre>
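<p>A self-contained sketch of the idea, with <code>ctypes.memmove</code> standing in for the C uncompression routine filling the buffer and a <code>BytesIO</code> standing in for the output file. (Under Python 3, slicing a <code>c_char</code> array returns <code>bytes</code>; under the question's Python 2.5 it returns <code>str</code>, but the slicing works the same way.)</p>

```python
import ctypes
import io
from ctypes import create_string_buffer

buf = create_string_buffer(64)              # stands in for c_uncompData_p
payload = b"\x00\x01binary\x00data"         # embedded NULs: reading .value would truncate
ctypes.memmove(buf, payload, len(payload))  # simulate the C routine filling the buffer

out = io.BytesIO()                          # stands in for the real output file
out.write(buf[:len(payload)])               # slice => write only the meaningful bytes
```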
| 6
|
2009-03-16T23:41:07Z
|
[
"python",
"binary",
"io"
] |
Writing a binary buffer to a file in python
| 652,535
|
<p>I have some python code that:</p>
<ol>
<li>Takes a BLOB from a database which is compressed.</li>
<li>Calls an uncompression routine in C that uncompresses the data.</li>
<li>Writes the uncompressed data to a file.</li>
</ol>
<p>It uses <a href="http://docs.python.org/library/ctypes.html" rel="nofollow">ctypes</a> to call the C routine, which is in a shared library.</p>
<p>This mostly works, except for the actual writing to the file. To uncompress, I get the data uncompressed into a python buffer, created using the ctypes <code>create_string_buffer</code> method:</p>
<p><code>c_uncompData_p = create_string_buffer(64000)</code></p>
<p>so the uncompression call is like this:</p>
<p><code>c_uncompSize = mylib.explodeCharBuffer (c_data_p, c_data_len, c_uncompData_p)</code></p>
<p>The size of the resulting uncompressed data is returned as the return value.</p>
<p>But... I have no idea how to force python on only write <code>c_uncompSize</code> bytes out - if I do:</p>
<p><code>myfile.write (c_uncompData_p.raw)</code></p>
<p>it writes the whole 64k buffer out (the data is binary - so it is not null terminated).</p>
<p>So, my question is - using Python 2.5 how do I get c_uncompSize bytes printed out, rather than the whole 64k?</p>
<p>Thanks
Jamie</p>
| 6
|
2009-03-16T23:30:36Z
| 652,987
|
<p><code>buffer()</code> might help to avoid unnecessary copying (caused by slicing as in <a href="http://stackoverflow.com/questions/652535/writing-a-binary-buffer-to-a-file-in-python/652565#652565">@elo80ka's answer</a>):</p>
<pre><code>myfile.write(buffer(c_uncompData_p.raw, 0, c_uncompSize))
</code></pre>
<p>In your example it doesn't matter (since <code>c_uncompData_p</code> is written only once and is small), but in general it could be useful.</p>
<p><hr /></p>
<p>Just for the sake of exercise here's the answer that uses C <code>stdio</code>'s <code>fwrite()</code>:</p>
<pre><code>from ctypes import *
# load C library
try: libc = cdll.msvcrt # Windows
except AttributeError:
libc = CDLL("libc.so.6") # Linux
# fopen()
libc.fopen.restype = c_void_p
def errcheck(res, func, args):
if not res: raise IOError
return res
libc.fopen.errcheck = errcheck
# errcheck() could be similarly defined for `fwrite`, `fclose`
# write data
file_p = libc.fopen("output.bin", "wb")
sizeof_item = 1 # bytes
nitems = libc.fwrite(c_uncompData_p, sizeof_item, c_uncompSize, file_p)
retcode = libc.fclose(file_p)
if nitems != c_uncompSize: # not all data were written
pass
if retcode != 0: # the file was NOT successfully closed
pass
</code></pre>
| 5
|
2009-03-17T04:01:25Z
|
[
"python",
"binary",
"io"
] |
How to unlock an sqlite3 db?
| 652,750
|
<p>OMG!</p>
<p>What an apparent problem... my django based scripts have locked my sqlite db...</p>
<p>Does anyone know how to fix?</p>
| -1
|
2009-03-17T01:33:18Z
| 652,758
|
<p>Your database is locked because you have a transaction running somewhere. </p>
<p>Stop all your Django apps. If necessary, reboot.</p>
<p>It's also remotely possible that you crashed a SQLite client in the middle of a transaction and the file lock was left in place.</p>
| 6
|
2009-03-17T01:37:21Z
|
[
"python",
"django"
] |
How to unlock an sqlite3 db?
| 652,750
|
<p>OMG!</p>
<p>What an apparent problem... my django based scripts have locked my sqlite db...</p>
<p>Does anyone know how to fix?</p>
| -1
|
2009-03-17T01:33:18Z
| 9,482,213
|
<p>Make sure the database file isn't read-only and that you don't have the database already open in another application. E.g.: </p>
<pre><code>$ ps aux | grep sqlite
martin 21118 0.0 0.3
$ kill -9 21118
</code></pre>
| 1
|
2012-02-28T12:49:00Z
|
[
"python",
"django"
] |
Effective way to iteratively append to a string in Python?
| 653,259
|
<p>I'm writing a Python function to split text into words, ignoring specified punctuation. Here is some working code. I'm not convinced that constructing strings out of lists (buf = [] in the code) is efficient though. Does anyone have a suggestion for a better way to do this?</p>
<pre><code>def getwords(text, splitchars=' \t|!?.;:"'):
"""
Generator to get words in text by splitting text along specified splitchars
and stripping out the splitchars::
>>> list(getwords('this is some text.'))
['this', 'is', 'some', 'text']
>>> list(getwords('and/or'))
['and', 'or']
>>> list(getwords('one||two'))
['one', 'two']
>>> list(getwords(u'hola unicode!'))
[u'hola', u'unicode']
"""
splitchars = set(splitchars)
buf = []
for char in text:
if char not in splitchars:
buf.append(char)
else:
if buf:
yield ''.join(buf)
buf = []
# All done. Yield last word.
if buf:
yield ''.join(buf)
</code></pre>
| 2
|
2009-03-17T07:04:05Z
| 653,268
|
<p>You don't want to use re.split?</p>
<pre><code>import re
re.split("[,; ]+", "coucou1 , coucou2;coucou3")
</code></pre>
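<p>One caveat worth knowing: if the string starts or ends with a delimiter, <code>re.split</code> yields empty strings at the edges, so filter them out if you only want the words:</p>

```python
import re

# Delimiters at the edges of the string produce empty strings in the result.
parts = re.split("[,; ]+", ";coucou1 , coucou2;coucou3;")
words = [p for p in parts if p]   # drop the empty edge strings
```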
| 4
|
2009-03-17T07:08:48Z
|
[
"python",
"string",
"split",
"append",
"generator"
] |
Effective way to iteratively append to a string in Python?
| 653,259
|
<p>I'm writing a Python function to split text into words, ignoring specified punctuation. Here is some working code. I'm not convinced that constructing strings out of lists (buf = [] in the code) is efficient though. Does anyone have a suggestion for a better way to do this?</p>
<pre><code>def getwords(text, splitchars=' \t|!?.;:"'):
"""
Generator to get words in text by splitting text along specified splitchars
and stripping out the splitchars::
>>> list(getwords('this is some text.'))
['this', 'is', 'some', 'text']
>>> list(getwords('and/or'))
['and', 'or']
>>> list(getwords('one||two'))
['one', 'two']
>>> list(getwords(u'hola unicode!'))
[u'hola', u'unicode']
"""
splitchars = set(splitchars)
buf = []
for char in text:
if char not in splitchars:
buf.append(char)
else:
if buf:
yield ''.join(buf)
buf = []
# All done. Yield last word.
if buf:
yield ''.join(buf)
</code></pre>
| 2
|
2009-03-17T07:04:05Z
| 653,269
|
<p><a href="http://www.skymind.com/~ocrow/python%5Fstring/">http://www.skymind.com/~ocrow/python_string/</a> talks about several ways of concatenating strings in Python and assesses their performance as well. </p>
| 5
|
2009-03-17T07:08:57Z
|
[
"python",
"string",
"split",
"append",
"generator"
] |
Effective way to iteratively append to a string in Python?
| 653,259
|
<p>I'm writing a Python function to split text into words, ignoring specified punctuation. Here is some working code. I'm not convinced that constructing strings out of lists (buf = [] in the code) is efficient though. Does anyone have a suggestion for a better way to do this?</p>
<pre><code>def getwords(text, splitchars=' \t|!?.;:"'):
"""
Generator to get words in text by splitting text along specified splitchars
and stripping out the splitchars::
>>> list(getwords('this is some text.'))
['this', 'is', 'some', 'text']
>>> list(getwords('and/or'))
['and', 'or']
>>> list(getwords('one||two'))
['one', 'two']
>>> list(getwords(u'hola unicode!'))
[u'hola', u'unicode']
"""
splitchars = set(splitchars)
buf = []
for char in text:
if char not in splitchars:
buf.append(char)
else:
if buf:
yield ''.join(buf)
buf = []
# All done. Yield last word.
if buf:
yield ''.join(buf)
</code></pre>
| 2
|
2009-03-17T07:04:05Z
| 653,287
|
<p>You can split the input using <a href="http://docs.python.org/library/re.html#re.split" rel="nofollow"><code>re.split()</code></a>:</p>
<pre><code>>>> splitchars=' \t|!?.;:"'
>>> re.split("[%s]" % splitchars, "one\ttwo|three?four")
['one', 'two', 'three', 'four']
>>>
</code></pre>
<p>EDIT: If your <code>splitchars</code> may contain special chars like <code>]</code> or <code>^</code>, you can use <code>re.escape()</code>:</p>
<pre><code>>>> re.escape(splitchars)
'\\ \\\t\\|\\!\\?\\.\\;\\:\\"'
>>> re.split("[%s]" % re.escape(splitchars), "one\ttwo|three?four")
['one', 'two', 'three', 'four']
>>>
</code></pre>
| 1
|
2009-03-17T07:25:01Z
|
[
"python",
"string",
"split",
"append",
"generator"
] |
Effective way to iteratively append to a string in Python?
| 653,259
|
<p>I'm writing a Python function to split text into words, ignoring specified punctuation. Here is some working code. I'm not convinced that constructing strings out of lists (buf = [] in the code) is efficient though. Does anyone have a suggestion for a better way to do this?</p>
<pre><code>def getwords(text, splitchars=' \t|!?.;:"'):
"""
Generator to get words in text by splitting text along specified splitchars
and stripping out the splitchars::
>>> list(getwords('this is some text.'))
['this', 'is', 'some', 'text']
>>> list(getwords('and/or'))
['and', 'or']
>>> list(getwords('one||two'))
['one', 'two']
>>> list(getwords(u'hola unicode!'))
[u'hola', u'unicode']
"""
splitchars = set(splitchars)
buf = []
for char in text:
if char not in splitchars:
buf.append(char)
else:
if buf:
yield ''.join(buf)
buf = []
# All done. Yield last word.
if buf:
yield ''.join(buf)
</code></pre>
| 2
|
2009-03-17T07:04:05Z
| 653,312
|
<p>You can use re.split</p>
<pre><code>re.split('[\s|!\?\.;:"]', text)
</code></pre>
<p>However if the text is very large the resulting array may be consuming too much memory. Then you may consider re.finditer:</p>
<pre><code>import re
def getwords(text, splitchars=' \t|!?.;:"'):
words_iter = re.finditer(
"([%s]+)" % "".join([("^" + c) for c in splitchars]),
text)
for word in words_iter:
yield word.group()
# a quick test
s = "a:b cc? def...a||"
words = [x for x in getwords(s)]
assert ["a", "b", "cc", "def", "a"] == words, words
</code></pre>
| 3
|
2009-03-17T07:36:10Z
|
[
"python",
"string",
"split",
"append",
"generator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 653,379
|
<p>Generally you can give default arguments in Python...</p>
<pre><code>def redirect_output(fn, output = stderr):
# whatever
</code></pre>
<p>Not sure if that works with decorators as well, though. I don't know of any reason why it wouldn't.</p>
| -2
|
2009-03-17T08:18:44Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 653,381
|
<p>Have you tried keyword arguments with default values? Something like</p>
<pre><code>def decorate_something(foo=bar, baz=quux):
pass
</code></pre>
| -1
|
2009-03-17T08:19:36Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 653,433
|
<p>Building on vartec's answer:</p>
<pre><code>import sys
def redirect_output(func, output=None):
if output is None:
output = sys.stderr
if isinstance(output, basestring):
output = open(output, 'w') # etc...
# everything else...
</code></pre>
| -2
|
2009-03-17T08:53:57Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 653,478
|
<p>Using keyword arguments with default values (as suggested by kquinn) is a good idea, but it will require you to include the parentheses:</p>
<pre><code>@redirect_output()
def foo():
...
</code></pre>
<p>If you would like a version that works without the parentheses on the decorator, you will have to account for both scenarios in your decorator code.</p>
<p>If you were using Python 3.0 you could use keyword only arguments for this:</p>
<pre><code>def redirect_output(fn=None,*,destination=None):
destination = sys.stderr if destination is None else destination
def wrapper(*args, **kwargs):
... # your code here
if fn is None:
def decorator(fn):
return functools.update_wrapper(wrapper, fn)
return decorator
else:
return functools.update_wrapper(wrapper, fn)
</code></pre>
<p>In Python 2.x this can be emulated with varargs tricks:</p>
<pre><code>def redirected_output(*fn,**options):
destination = options.pop('destination', sys.stderr)
if options:
raise TypeError("unsupported keyword arguments: %s" %
",".join(options.keys()))
def wrapper(*args, **kwargs):
... # your code here
if fn:
return functools.update_wrapper(wrapper, fn[0])
else:
def decorator(fn):
return functools.update_wrapper(wrapper, fn)
return decorator
</code></pre>
<p>Any of these versions would allow you to write code like this:</p>
<pre><code>@redirected_output
def foo():
...
@redirected_output(destination="somewhere.log")
def bar():
...
</code></pre>
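<p>Filling in the placeholder body, a complete runnable version of the varargs approach might look like this (it merely records the destination on the wrapper instead of actually redirecting output, to keep the sketch small):</p>

```python
import functools
import sys

def redirected_output(*fn, **options):
    """Usable bare (@redirected_output) or with keyword arguments
    (@redirected_output(destination=...))."""
    destination = options.pop('destination', sys.stderr)
    if options:
        raise TypeError("unsupported keyword arguments: %s" % ",".join(options))

    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            wrapper.last_destination = destination  # demo: record, don't redirect
            return f(*args, **kwargs)
        return wrapper

    if fn:                   # used without parentheses: fn == (the function,)
        return decorate(fn[0])
    return decorate          # used with keyword arguments: return a decorator

@redirected_output
def foo():
    return "foo"

@redirected_output(destination="somewhere.log")
def bar():
    return "bar"
```

<p>Note that <code>functools.wraps</code> keeps the decorated functions' names and docstrings intact in both usages.</p>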
| 27
|
2009-03-17T09:13:19Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 653,511
|
<p>You need to detect both cases, for example using the type of the first argument, and accordingly return either the wrapper (when used without parameter) or a decorator (when used with arguments).</p>
<pre><code>from functools import wraps
import inspect

def redirect_output(fn_or_output):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            # Redirect output here
            try:
                return fn(*args, **kwargs)
            finally:
                pass  # Restore output here
        return wrapper
    if inspect.isfunction(fn_or_output):
        # Called with no parameter
        return decorator(fn_or_output)
    else:
        # Called with a parameter
        return decorator
</code></pre>
<p>When using the <code>@redirect_output("output.log")</code> syntax, <code>redirect_output</code> is called with a single argument <code>"output.log"</code>, and it must return a decorator accepting the function to be decorated as an argument. When used as <code>@redirect_output</code>, it is called directly with the function to be decorated as an argument.</p>
<p>Or in other words: the <code>@</code> syntax must be followed by an expression whose result is a function accepting a function to be decorated as its sole argument, and returning the decorated function. The expression itself can be a function call, which is the case with <code>@redirect_output("output.log")</code>. Convoluted, but true :-)</p>
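To make the desugaring concrete, here is a small self-contained sketch (not from the answer above; the string-building behavior is invented purely for illustration):

```python
# `@redirect_output("somewhere.log")` first evaluates the call, then applies
# its result to the function below it, i.e. foo = redirect_output(...)(foo).
def redirect_output(destination):        # evaluated first, with the argument
    def decorator(fn):                   # then called with the function
        def wrapper(*args, **kwargs):
            return "%s -> %s" % (destination, fn(*args, **kwargs))
        return wrapper
    return decorator

@redirect_output("somewhere.log")
def foo():
    return "foo ran"

print(foo())  # → somewhere.log -> foo ran
```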
| 11
|
2009-03-17T09:25:42Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 653,553
|
<p>A Python decorator is called in a fundamentally different way depending on whether you give it arguments or not. The decoration is actually just a (syntactically restricted) expression.</p>
<p>In your first example:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>the function <code>redirect_output</code> is called with the
given argument, which is expected to return a decorator
function, which itself is called with <code>foo</code> as an argument,
which (finally!) is expected to return the final decorated function.</p>
<p>The equivalent code looks like this:</p>
<pre><code>def foo():
....
d = redirect_output("somewhere.log")
foo = d(foo)
</code></pre>
<p>The equivalent code for your second example looks like:</p>
<pre><code>def foo():
....
d = redirect_output
foo = d(foo)
</code></pre>
<p>So you <em>can</em> do what you'd like but not in a totally seamless way:</p>
<pre><code>import sys
import types

def redirect_output(arg):
    def decorator(file, f):
        def df(*args, **kwargs):
            print 'redirecting to ', file
            return f(*args, **kwargs)
        return df
    if type(arg) is types.FunctionType:
        return decorator(sys.stderr, arg)
    return lambda f: decorator(arg, f)
</code></pre>
<p>This should be ok unless you wish to use a function as an
argument to your decorator, in which case the decorator
will wrongly assume it has no arguments. It will also fail
if this decoration is applied to another decoration that
does not return a function type.</p>
<p>An alternative method is just to require that the
decorator function is always called, even if it is with no arguments.
In this case, your second example would look like this:</p>
<pre><code>@redirect_output()
def foo():
....
</code></pre>
<p>The decorator function code would look like this:</p>
<pre><code>import sys

def redirect_output(file=sys.stderr):
    def decorator(file, f):
        def df(*args, **kwargs):
            print 'redirecting to ', file
            return f(*args, **kwargs)
        return df
    return lambda f: decorator(file, f)
</code></pre>
| 8
|
2009-03-17T09:41:39Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 14,412,901
|
<p>I know this question is old, but some of the comments are new, and while all of the viable solutions are essentially the same, most of them aren't very clean or easy to read.</p>
<p>Like thobe's answer says, the only way to handle both cases is to check for both scenarios. The easiest way is simply to check whether there is a single argument and it is callable (NOTE: extra checks will be necessary if your decorator only takes 1 argument and it happens to be a callable object):</p>
<pre><code>def decorator(*args, **kwargs):
if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
# called as @decorator
else:
# called as @decorator(*args, **kwargs)
</code></pre>
<p>In the first case, you do what any normal decorator does, return a modified or wrapped version of the passed in function.</p>
<p>In the second case, you return a 'new' decorator that somehow uses the information passed in with *args, **kwargs.</p>
<p>This is fine and all, but having to write it out for every decorator you make can be pretty annoying and not as clean. Instead, it would be nice to be able to automagically modify our decorators without having to re-write them... but that's what decorators are for! </p>
<p>Using the following decorator decorator, we can decorate our decorators so that they can be used with or without arguments:</p>
<pre><code>def doublewrap(f):
'''
a decorator decorator, allowing the decorator to be used as:
@decorator(with, arguments, and=kwargs)
or
@decorator
'''
@wraps(f)
def new_dec(*args, **kwargs):
if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
# actual decorated function
return f(args[0])
else:
# decorator arguments
return lambda realf: f(realf, *args, **kwargs)
return new_dec
</code></pre>
<p>Now, we can decorate our decorators with @doublewrap, and they will work with and without arguments, with one caveat:</p>
<p>I noted it above but should repeat it here: the check in this decorator makes an assumption about the arguments that a decorator can receive (namely, that it can't receive a single, callable argument). Since we are making it applicable to any decorator now, this needs to be kept in mind, or the check modified, if it would be violated.</p>
<p>The following demonstrates its use:</p>
<pre><code>def test_doublewrap():
from util import doublewrap
from functools import wraps
@doublewrap
def mult(f, factor=2):
'''multiply a function's return value'''
@wraps(f)
def wrap(*args, **kwargs):
return factor*f(*args,**kwargs)
return wrap
# try normal
@mult
def f(x, y):
return x + y
# try args
@mult(3)
def f2(x, y):
return x*y
# try kwargs
@mult(factor=5)
def f3(x, y):
return x - y
assert f(2,3) == 10
assert f2(2,5) == 30
assert f3(8,1) == 5*7
</code></pre>
| 17
|
2013-01-19T09:24:01Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 19,017,908
|
<p>I know this is an old question, but I really don't like any of the techniques proposed so I wanted to add another method. I saw that django uses a really clean method in their <a href="https://github.com/django/django/blob/1.5.4/django/contrib/auth/decorators.py#L44" rel="nofollow"><code>login_required</code> decorator in <code>django.contrib.auth.decorators</code></a>. As you can see in the <a href="https://docs.djangoproject.com/en/1.5/topics/auth/default/#the-login-required-decorator" rel="nofollow">decorator's docs</a>, it can be used alone as <code>@login_required</code> or with arguments, <code>@login_required(redirect_field_name='my_redirect_field')</code>.</p>
<p>The way they do it is quite simple. They add a <code>kwarg</code> (<code>function=None</code>) before their decorator arguments. If the decorator is used alone, <code>function</code> will be the actual function it is decorating, whereas if it is called with arguments, <code>function</code> will be <code>None</code>.</p>
<p><strong>Example:</strong></p>
<pre><code>from functools import wraps
def custom_decorator(function=None, some_arg=None, some_other_arg=None):
def actual_decorator(f):
@wraps(f)
def wrapper(*args, **kwargs):
# Do stuff with args here...
if some_arg:
print some_arg
if some_other_arg:
print some_other_arg
return f(*args, **kwargs)
return wrapper
if function:
return actual_decorator(function)
return actual_decorator
</code></pre>
<hr>
<pre><code>@custom_decorator
def test1():
print 'test1'
>>> test1()
test1
</code></pre>
<hr>
<pre><code>@custom_decorator(some_arg='hello')
def test2():
print 'test2'
>>> test2()
hello
test2
</code></pre>
<hr>
<pre><code>@custom_decorator(some_arg='hello', some_other_arg='world')
def test3():
print 'test3'
>>> test3()
hello
world
test3
</code></pre>
<p>I find this approach that django uses to be more elegant and easier to understand than any of the other techniques proposed here.</p>
| 2
|
2013-09-26T01:35:52Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 38,414,401
|
<p>In fact, the caveat case in @bj0's solution can be checked easily:</p>
<pre><code>def meta_wrap(decor):
@functools.wraps(decor)
def new_decor(*args, **kwargs):
if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
# this is the double-decorated f.
# Its first argument should not be a callable
doubled_f = decor(args[0])
@functools.wraps(doubled_f)
def checked_doubled_f(*f_args, **f_kwargs):
if callable(f_args[0]):
raise ValueError('meta_wrap failure: '
'first positional argument cannot be callable.')
return doubled_f(*f_args, **f_kwargs)
return checked_doubled_f
else:
# decorator arguments
return lambda real_f: decor(real_f, *args, **kwargs)
return new_decor
</code></pre>
<p>Here are a few test cases for this fail-safe version of <code>meta_wrap</code>.</p>
<pre><code> @meta_wrap
def baddecor(f, caller=lambda x: -1*x):
@functools.wraps(f)
def _f(*args, **kwargs):
return caller(f(args[0]))
return _f
@baddecor # used without arg: no problem
def f_call1(x):
return x + 1
assert f_call1(5) == -6
@baddecor(lambda x : 2*x) # bad case
def f_call2(x):
return x + 1
f_call2(5) # raises ValueError
# explicit keyword: no problem
@baddecor(caller=lambda x : 100*x)
def f_call3(x):
return x + 1
assert f_call3(5) == 600
</code></pre>
| 0
|
2016-07-16T18:39:43Z
|
[
"python",
"decorator"
] |
How to create a Python decorator that can be used either with or without parameters?
| 653,368
|
<p>I'd like to create a Python decorator that can be used either with parameters:</p>
<pre><code>@redirect_output("somewhere.log")
def foo():
....
</code></pre>
<p>or without them (for instance to redirect the output to stderr by default):</p>
<pre><code>@redirect_output
def foo():
....
</code></pre>
<p>Is that at all possible?</p>
<p>Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.</p>
| 41
|
2009-03-17T08:10:19Z
| 39,335,652
|
<p>Several answers here already address your problem nicely. With respect to style, however, I prefer solving this decorator predicament using <code>functools.partial</code>, as suggested in David Beazley's <em>Python Cookbook 3</em>:</p>
<pre><code>from functools import partial, wraps
def decorator(func=None, foo='spam'):
if func is None:
return partial(decorator, foo=foo)
@wraps(func)
def wrapper(*args, **kwargs):
# do something with `func` and `foo`, if you're so inclined
pass
return wrapper
</code></pre>
<p>While yes, you can just do</p>
<pre><code>@decorator()
def f(*args, **kwargs):
pass
</code></pre>
<p>without funky workarounds, I find it strange looking, and I like having the option of simply decorating with <code>@decorator</code>.</p>
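Here is a runnable sketch of the <code>functools.partial</code> pattern above (the <code>tag</code>/<code>prefix</code> names are invented for illustration, standing in for real options):

```python
from functools import partial, wraps

def tag(func=None, prefix="[log]"):
    # Called as @tag(prefix=...): func is None, so return a partial of
    # ourselves that remembers the option and waits for the function.
    if func is None:
        return partial(tag, prefix=prefix)
    @wraps(func)
    def wrapper(*args, **kwargs):
        return "%s %s" % (prefix, func(*args, **kwargs))
    return wrapper

@tag                     # bare form
def hello():
    return "hello"

@tag(prefix="[dbg]")     # parameterized form
def world():
    return "world"

print(hello())  # → [log] hello
print(world())  # → [dbg] world
```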
<p>As for the secondary mission objective, redirecting a function's output is addressed in this <a href="https://stackoverflow.com/questions/4675728/redirect-stdout-to-a-file-in-python">Stack Overflow post</a>. </p>
<hr>
<p>If you want to dive deeper, check out Chapter 9 (Metaprogramming) in <em>Python Cookbook 3</em>, which is freely available to be <a href="http://chimera.labs.oreilly.com/books/1230000000393/ch09.html" rel="nofollow">read online</a>.</p>
<p>Some of that material is live demoed (plus more!) in Beazley's awesome YouTube video <a href="https://youtu.be/sPiWg5jSoZI" rel="nofollow">Python 3 Metaprogramming</a>.</p>
<p>Happy coding :)</p>
| 1
|
2016-09-05T17:59:30Z
|
[
"python",
"decorator"
] |
How can I profile a multithread program in Python?
| 653,419
|
<p>I'm developing an inherently multithreaded module in Python, and I'd like to find out where it's spending its time. cProfile only seems to profile the main thread. Is there any way of profiling all threads involved in the calculation?</p>
| 38
|
2009-03-17T08:45:24Z
| 653,484
|
<p>I don't know of any profiling application that supports such a thing for Python, but you could write a trace class that writes log files recording when an operation started, when it ended, and how much time it consumed.</p>
<p>It's a simple and quick solution for your problem.</p>
| 0
|
2009-03-17T09:16:20Z
|
[
"python",
"multithreading",
"profiling"
] |
How can I profile a multithread program in Python?
| 653,419
|
<p>I'm developing an inherently multithreaded module in Python, and I'd like to find out where it's spending its time. cProfile only seems to profile the main thread. Is there any way of profiling all threads involved in the calculation?</p>
| 38
|
2009-03-17T08:45:24Z
| 653,497
|
<p>Instead of running one <code>cProfile</code>, you could run a separate <code>cProfile</code> instance in each thread, then combine the stats. <code>Stats.add()</code> does this automatically.</p>
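A minimal sketch of that idea (standard library only; the worker function is invented for illustration):

```python
import cProfile
import io
import pstats
import threading

def busy_work():
    return sum(i * i for i in range(50000))

def profiled_worker(profiles, idx):
    prof = cProfile.Profile()
    prof.enable()           # profiling is per-thread, so this covers only us
    busy_work()
    prof.disable()
    profiles[idx] = prof

profiles = {}
threads = [threading.Thread(target=profiled_worker, args=(profiles, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Combine the per-thread profiles into a single report with Stats.add().
stream = io.StringIO()
stats = pstats.Stats(profiles[0], stream=stream)
stats.add(profiles[1])
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("function calls" in report)  # → True
```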
| 14
|
2009-03-17T09:21:43Z
|
[
"python",
"multithreading",
"profiling"
] |
How can I profile a multithread program in Python?
| 653,419
|
<p>I'm developing an inherently multithreaded module in Python, and I'd like to find out where it's spending its time. cProfile only seems to profile the main thread. Is there any way of profiling all threads involved in the calculation?</p>
| 38
|
2009-03-17T08:45:24Z
| 654,217
|
<p>If you're okay with doing a bit of extra work, you can write your own profiling class that implements <code>profile(self, frame, event, arg)</code>. That gets called whenever a function is called, and you can fairly easily set up a structure to gather statistics from that.</p>
<p>You can then use <a href="http://docs.python.org/2/library/threading.html#threading.setprofile" rel="nofollow"><code>threading.setprofile</code></a> to register that function on every thread. When the function is called you can use <code>threading.currentThread()</code> to see which it's running on. More information (and ready-to-run recipe) here:</p>
<p><a href="http://code.activestate.com/recipes/465831/" rel="nofollow">http://code.activestate.com/recipes/465831/</a></p>
<p><a href="http://docs.python.org/library/threading.html#threading.setprofile" rel="nofollow">http://docs.python.org/library/threading.html#threading.setprofile</a></p>
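For example, a tiny per-thread call counter along those lines (a sketch, not the ActiveState recipe itself):

```python
import threading
from collections import Counter

call_counts = Counter()
counts_lock = threading.Lock()

def profiler(frame, event, arg):
    # Invoked for every function entry/exit in threads started after
    # threading.setprofile(); we only tally 'call' events per thread name.
    if event == "call":
        with counts_lock:
            call_counts[threading.current_thread().name] += 1

def work():
    def inner():
        return 1
    for _ in range(100):
        inner()

threading.setprofile(profiler)   # applies to threads started from now on
t = threading.Thread(target=work, name="worker-1")
t.start()
t.join()
threading.setprofile(None)       # stop installing the profiler

print(call_counts["worker-1"] >= 100)  # → True
```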
| 4
|
2009-03-17T13:22:20Z
|
[
"python",
"multithreading",
"profiling"
] |
How can I profile a multithread program in Python?
| 653,419
|
<p>I'm developing an inherently multithreaded module in Python, and I'd like to find out where it's spending its time. cProfile only seems to profile the main thread. Is there any way of profiling all threads involved in the calculation?</p>
| 38
|
2009-03-17T08:45:24Z
| 656,057
|
<p>Given that your different threads' main functions differ, you can use the very helpful <code>profile_func()</code> decorator from <a href="https://translate.svn.sourceforge.net/svnroot/translate/src/trunk/virtaal/devsupport/profiling.py" rel="nofollow">here</a>.</p>
| 1
|
2009-03-17T21:19:20Z
|
[
"python",
"multithreading",
"profiling"
] |
How can I profile a multithread program in Python?
| 653,419
|
<p>I'm developing an inherently multithreaded module in Python, and I'd like to find out where it's spending its time. cProfile only seems to profile the main thread. Is there any way of profiling all threads involved in the calculation?</p>
| 38
|
2009-03-17T08:45:24Z
| 1,658,540
|
<p>Please see <a href="https://pypi.python.org/pypi/yappi/" rel="nofollow">yappi</a> (Yet Another Python Profiler).</p>
| 23
|
2009-11-01T22:09:51Z
|
[
"python",
"multithreading",
"profiling"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am working on now has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember correctly, multiplying M by M yields a matrix representing the graph that connects vertices reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with this part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know whether that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
                M[s,t]=1
                M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We shall use this one, since with the 19200x19200 square matrix it is not easy to see the clusters.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4)
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 653,539
|
<p>In SciPy you can use <a href="http://docs.scipy.org/doc/scipy/reference/sparse.html">sparse matrices</a>. Also note that there are more efficient ways of multiplying a matrix by itself. Anyway, what you're trying to do can be done by SVD decomposition. </p>
<p><a href="http://expertvoices.nsdl.org/cornell-cs322/2008/04/06/svd-and-graph-partitioning-can-life-get-more-exciting-than-this/">Introduction with useful links</a>. </p>
| 7
|
2009-03-17T09:37:00Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am working on now has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember correctly, multiplying M by M yields a matrix representing the graph that connects vertices reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with this part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know whether that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
                M[s,t]=1
                M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We shall use this one, since with the 19200x19200 square matrix it is not easy to see the clusters.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4)
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 653,724
|
<p>Why not use a real graph library, like <a href="http://code.google.com/p/python-graph/">Python-Graph</a>? It has a <a href="http://www.linux.ime.usp.br/~matiello/python-graph/docs/graph.algorithms.accessibility-module.html#connected_components">function to determine connected components</a> (though no example is provided). I'd imagine a dedicated library is going to be faster than whatever ad-hoc graph code you've cooked up.</p>
<p>EDIT: <a href="http://networkx.lanl.gov/index.html">NetworkX</a> seems like it might be a better choice than python-graph; its <a href="http://networkx.lanl.gov/reference/generated/networkx.algorithms.components.connected.connected_components.html#networkx.algorithms.components.connected.connected_components">documentation (here for the connected components function)</a> certainly is.</p>
| 12
|
2009-03-17T10:50:17Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am working on now has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember correctly, multiplying M by M yields a matrix representing the graph that connects vertices reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with this part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know whether that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
                M[s,t]=1
                M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We shall use this one, since with the 19200x19200 square matrix it is not easy to see the clusters.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4)
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 653,851
|
<p>Looks like there is a library <a href="http://mathema.tician.de/software/pymetis" rel="nofollow">PyMetis</a>, which will partition your graph for you, given a list of links. It should be fairly easy to extract the list of links from your graph by passing it your original list of linked nodes (not the matrix-multiply-derived one).</p>
<p>Repeatedly performing M' = MM will not be efficient for large orders of M. A full matrix-multiply for matrices of order N will cost N multiplications and N-1 additions per element, of which there are N<sup>2</sup>, that is O(N<sup>3</sup>) operations. If you are scaling that to "millions of nodes", that would be O(10<sup>18</sup>) operations per matrix-matrix multiplication, of which you want to do several. </p>
<p>In short, you don't want to do it this way. The <a href="http://stackoverflow.com/questions/653496/clustering-a-graph-in-python/653539#653539">SVD suggestion</a> from Vartec would be the only appropriate choice there. Your best option is just to use PyMetis, and not try to reinvent graph-partitioning. </p>
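For the connected-components part specifically, a plain breadth-first search is O(V+E) and needs no matrix products at all. A sketch (independent of PyMetis, pure standard library):

```python
from collections import deque

def connected_components(edges, nodes):
    # Build an adjacency list from the (undirected) edge pairs.
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:               # standard BFS over one component
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(sorted(comp))
    return components

# The 5x5 example from the question: edge (1,3); nodes 0, 2, 4 isolated.
print(connected_components([(1, 3)], [0, 1, 2, 3, 4]))
# → [[0], [1, 3], [2], [4]]
```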
| 0
|
2009-03-17T11:34:58Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am working on now has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember correctly, multiplying M by M yields a matrix representing the graph that connects vertices reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with this part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know whether that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
                M[s,t]=1
                M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We shall use this one, since with the 19200x19200 square matrix it is not easy to see the clusters.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4)
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 654,595
|
<p>Here's a naive implementation, which finds the connected components using <a href="http://en.wikipedia.org/wiki/Depth%5Ffirst%5Fsearch" rel="nofollow">depth first search</a>; I wrote it some time ago. Although it's very simple, it scales well to tens of thousands of vertices and edges...</p>
<pre><code>
import sys
from operator import gt, lt
class Graph(object):
def __init__(self):
self.nodes = set()
self.edges = {}
self.cluster_lookup = {}
self.no_link = {}
def add_edge(self, n1, n2, w):
self.nodes.add(n1)
self.nodes.add(n2)
self.edges.setdefault(n1, {}).update({n2: w})
self.edges.setdefault(n2, {}).update({n1: w})
def connected_components(self, threshold=0.9, op=lt):
nodes = set(self.nodes)
components, visited = [], set()
while len(nodes) > 0:
connected, visited = self.dfs(nodes.pop(), visited, threshold, op)
connected = set(connected)
for node in connected:
if node in nodes:
nodes.remove(node)
subgraph = Graph()
subgraph.nodes = connected
subgraph.no_link = self.no_link
for s in subgraph.nodes:
for k, v in self.edges.get(s, {}).iteritems():
if k in subgraph.nodes:
subgraph.edges.setdefault(s, {}).update({k: v})
if s in self.cluster_lookup:
subgraph.cluster_lookup[s] = self.cluster_lookup[s]
components.append(subgraph)
return components
def dfs(self, v, visited, threshold, op=lt, first=None):
aux = [v]
visited.add(v)
if first is None:
first = v
for i in (n for n, w in self.edges.get(v, {}).iteritems()
if op(w, threshold) and n not in visited):
x, y = self.dfs(i, visited, threshold, op, first)
aux.extend(x)
visited = visited.union(y)
return aux, visited
def main(args):
graph = Graph()
# first component
graph.add_edge(0, 1, 1.0)
graph.add_edge(1, 2, 1.0)
graph.add_edge(2, 0, 1.0)
# second component
graph.add_edge(3, 4, 1.0)
graph.add_edge(4, 5, 1.0)
graph.add_edge(5, 3, 1.0)
first, second = graph.connected_components(op=gt)
print first.nodes
print second.nodes
if __name__ == '__main__':
main(sys.argv)
</code></pre>
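One caveat: the recursive <code>dfs</code> above can hit Python's default recursion limit (about 1000 frames) on long chains of nodes. An iterative variant — a sketch, not part of the original answer — avoids that:

```python
def dfs_component(adj, start):
    """Iteratively collect all nodes reachable from start.

    adj maps each node to an iterable of neighbours; an explicit
    stack replaces the call stack, so graph depth is unbounded."""
    stack, seen = [start], {start}
    while stack:
        v = stack.pop()
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen
```

The weight-threshold test from the recursive version can be applied when building <code>adj</code>, so the traversal itself stays generic.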
| 1
|
2009-03-17T14:57:04Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am now working on has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember well, multiplying M by M gives a matrix that represents the graph connecting vertices that are reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with the next part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know if that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
M[s,t]=1
M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We use this small case because the clusters are not easy to see in the 19200x19200 square matrix.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4).
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 656,768
|
<p>The SVD algorithm is not applicable here, but otherwise Phil H is correct. </p>
| 0
|
2009-03-18T02:28:41Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am now working on has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember well, multiplying M by M gives a matrix that represents the graph connecting vertices that are reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with the next part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know if that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
M[s,t]=1
M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We use this small case because the clusters are not easy to see in the 19200x19200 square matrix.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4).
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 656,895
|
<p>As others have pointed out, no need to reinvent the wheel. A lot of thought has been put into optimal clustering techniques. <a href="http://www.micans.org/mcl/" rel="nofollow">Here</a> is one well-known clustering program.</p>
| 2
|
2009-03-18T03:39:18Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and a set of links. I need to find a fast way to partition the graph. The graph I am now working on has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember well, multiplying M by M gives a matrix that represents the graph connecting vertices that are reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with the next part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know if that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
M[s,t]=1
M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We use this small case because the clusters are not easy to see in the 19200x19200 square matrix.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4).
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 7,507,451
|
<p>Finding an optimal graph partition is an NP-hard problem, so whatever the algorithm, it is going to be an approximation or a heuristic. Not surprisingly, different clustering algorithms produce (wildly) different results. </p>
<p>Python implementation of Newman's modularity algorithm:
<a href="http://perso.crans.org/aynaud/communities/index.html" rel="nofollow">modularity</a></p>
<p>Also: <a href="http://www.micans.org/mcl/" rel="nofollow">MCL</a>, <a href="http://baderlab.org/Software/MCODE" rel="nofollow">MCODE</a>, <a href="http://www.cfinder.org/" rel="nofollow">CFinder</a>, <a href="http://chianti.ucsd.edu/cyto_web/plugins/displayplugininfo.php?name=NeMo" rel="nofollow">NeMo</a>, <a href="http://www.paccanarolab.org/software/cluster-one/index.html" rel="nofollow">clusterONE</a></p>
| 2
|
2011-09-21T22:25:38Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
How can I cluster a graph in Python?
| 653,496
|
<p>Let G be a graph. So G is a set of nodes and set of links. I need to find a fast way to partition the graph. The graph I am now working has only 120*160 nodes, but I might soon be working on an equivalent problem, in another context (not medicine, but website development), with millions of nodes. </p>
<p>So, what I did was to store all the links into a graph matrix:</p>
<pre><code>M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
</code></pre>
<p>Now M holds a 1 in position s,t, if node s is connected to node t. I make sure M is symmetrical M[s,t]=M[t,s] and each node links to itself M[s,s]=1.</p>
<p>If I remember well, multiplying M by M gives a matrix that represents the graph connecting vertices that are reachable within two steps.</p>
<p>So I keep multiplying M by itself until the number of zeros in the matrix no longer decreases. Now I have the list of the connected components.
And now I need to cluster this matrix.</p>
<p>Up to now I am pretty satisfied with the algorithm. I think it is easy, elegant, and reasonably fast. I am having trouble with the next part.</p>
<p>Essentially I need to split this graph into its connected components.</p>
<p>I can go through all the nodes and see what they are connected to.</p>
<p>But what about sorting the matrix by reordering its rows? I don't know if that is possible.</p>
<p>What follows is the code so far:</p>
<pre><code>def findzeros(M):
nZeros=0
for t in M.flat:
if not t:
nZeros+=1
return nZeros
M=numpy.mat(numpy.zeros((len(data.keys()),len(data.keys()))))
for s in data.keys():
    M[s,s]=1
    for t in data.keys():
        if t<s:
            if (scipy.corrcoef(data[t],data[s])[0,1])>threshold:
M[s,t]=1
M[t,s]=1
nZeros=findzeros(M)
M2=M*M
nZeros2=findzeros(M2)
while (nZeros-nZeros2):
nZeros=nZeros2
M=M2
M2=M*M
nZeros2=findzeros(M2)
</code></pre>
<p><hr /></p>
<h3>Edit:</h3>
<p>It has been suggested that I use SVD decomposition. Here is a simple example of the problem on a 5x5 graph. We use this small case because the clusters are not easy to see in the 19200x19200 square matrix.</p>
<pre><code>import numpy
import scipy
M=numpy.mat(numpy.zeros((5,5)))
M[1,3]=1
M[3,1]=1
M[1,1]=1
M[2,2]=1
M[3,3]=1
M[4,4]=1
M[0,0]=1
print M
u,s,vh = numpy.linalg.linalg.svd(M)
print u
print s
print vh
</code></pre>
<p>Essentially there are 4 clusters here: (0),(1,3),(2),(4).
But I still don't see how the SVD can help in this context.</p>
| 13
|
2009-03-17T09:21:07Z
| 29,882,547
|
<p>There's also <a href="http://graph-tool.skewed.de/" rel="nofollow">graph_tool</a> and <a href="http://networkit.iti.kit.edu/" rel="nofollow">networkit</a> that have efficient routines for connected components, and both store the network efficiently. If you're going to work with millions of nodes, networkx will likely not be sufficient (it's pure python afaik). Both those tools are written in C++ so can handle analysis of large graphs with reasonable run times.</p>
<p>As Phil points out, your method will have horribly long compute times for large graphs (we're talking days, weeks, months...), and a dense-matrix representation of a graph with a million nodes has 10^12 entries — on the order of terabytes of memory!</p>
| 2
|
2015-04-26T19:37:21Z
|
[
"python",
"sorting",
"graph",
"matrix",
"cluster-analysis"
] |
Django Forms, set an initial value to request.user
| 653,735
|
<p>Is there some way to make the following possible, or should it be done elsewhere?</p>
<pre><code>class JobRecordForm(forms.ModelForm):
supervisor = forms.ModelChoiceField(
queryset = User.objects.filter(groups__name='Supervisors'),
widget = forms.RadioSelect,
initial = request.user # is there some way to make this possible?
)
class Meta:
model = JobRecord
</code></pre>
| 13
|
2009-03-17T10:53:00Z
| 653,767
|
<p>You might want to handle this in your view function, since your view function must create the initial form and it knows the user.</p>
<pre><code>form = JobRecordForm( {'supervisor':request.user} )
</code></pre>
<p>This will trigger validation of this input, BTW, so you can't provide hint values this way.</p>
| 5
|
2009-03-17T11:04:13Z
|
[
"python",
"django-forms"
] |
Django Forms, set an initial value to request.user
| 653,735
|
<p>Is there some way to make the following possible, or should it be done elsewhere?</p>
<pre><code>class JobRecordForm(forms.ModelForm):
supervisor = forms.ModelChoiceField(
queryset = User.objects.filter(groups__name='Supervisors'),
widget = forms.RadioSelect,
initial = request.user # is there some way to make this possible?
)
class Meta:
model = JobRecord
</code></pre>
| 13
|
2009-03-17T10:53:00Z
| 1,188,305
|
<p>If you do this in your view.py instead:</p>
<pre><code>form = JobRecordForm( initial={'supervisor':request.user} )
</code></pre>
<p>Then you won't trigger the validation.</p>
<p>See <a href="http://docs.djangoproject.com/en/dev/ref/forms/api/#dynamic-initial-values">http://docs.djangoproject.com/en/dev/ref/forms/api/#dynamic-initial-values</a></p>
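A third common pattern is to pass the user into the form's <code>__init__</code> and set the initial value there, so the form stays self-contained. The sketch below shows the mechanics with minimal stand-in classes instead of <code>forms.ModelForm</code> (the stand-ins are illustrative only; in a real project <code>JobRecordForm</code> would subclass the Django form):

```python
class FakeField:
    """Stand-in for a Django form field; only .initial matters here."""
    def __init__(self):
        self.initial = None

class BaseForm:
    """Stand-in for forms.ModelForm: builds the field dict."""
    def __init__(self, *args, **kwargs):
        self.fields = {'supervisor': FakeField()}

class JobRecordForm(BaseForm):
    def __init__(self, *args, **kwargs):
        user = kwargs.pop('user', None)   # custom kwarg, removed before super()
        super().__init__(*args, **kwargs)
        if user is not None:
            self.fields['supervisor'].initial = user

form = JobRecordForm(user='alice')  # in a view: JobRecordForm(user=request.user)
print(form.fields['supervisor'].initial)  # -> alice
```

Popping the custom kwarg before calling <code>super().__init__</code> matters, because the base form would reject unexpected keyword arguments.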
| 29
|
2009-07-27T13:58:31Z
|
[
"python",
"django-forms"
] |
Django Forms, set an initial value to request.user
| 653,735
|
<p>Is there some way to make the following possible, or should it be done elsewhere?</p>
<pre><code>class JobRecordForm(forms.ModelForm):
supervisor = forms.ModelChoiceField(
queryset = User.objects.filter(groups__name='Supervisors'),
widget = forms.RadioSelect,
initial = request.user # is there some way to make this possible?
)
class Meta:
model = JobRecord
</code></pre>
| 13
|
2009-03-17T10:53:00Z
| 1,233,143
|
<p>Another solution, with middleware and a rewritten save():
with the middleware solution you can access the "request" everywhere.</p>
<p><hr /></p>
<p>""" Middleware """</p>
<pre><code> # coding: utf-8
from django.utils.thread_support import currentThread
_requests = {}
def get_request():
return _requests[currentThread()]
class GlobalRequestMiddleware(object):
def process_request(self, request):
_requests[currentThread()] = request
</code></pre>
<p><hr /></p>
<p>""" save() rewriting """</p>
<pre><code>class Production(models.Model):
creator = models.ForeignKey(User, related_name = "%(class)s_creator")
creation_date = models.DateTimeField(auto_now_add = True)
modification_date = models.DateTimeField(auto_now = True)
def save(self, force_insert = False, force_update = False):
self.creator = get_request().user
super(Production, self).save(force_insert = force_insert, force_update = force_update)
return
</code></pre>
| 5
|
2009-08-05T13:06:58Z
|
[
"python",
"django-forms"
] |
Django Forms, set an initial value to request.user
| 653,735
|
<p>Is there some way to make the following possible, or should it be done elsewhere?</p>
<pre><code>class JobRecordForm(forms.ModelForm):
supervisor = forms.ModelChoiceField(
queryset = User.objects.filter(groups__name='Supervisors'),
widget = forms.RadioSelect,
initial = request.user # is there some way to make this possible?
)
class Meta:
model = JobRecord
</code></pre>
| 13
|
2009-03-17T10:53:00Z
| 14,402,474
|
<p>For a complete answer, here's the CBV solution:</p>
<pre><code>class MyFormView(TemplateView, FormMixin):
def get_initial(self):
self.initial.update({'your_field': self.request.user})
return super(MyFormView, self).get_initial()
</code></pre>
| 4
|
2013-01-18T15:47:15Z
|
[
"python",
"django-forms"
] |
Python - Acquire value from dictionary depending on location/index in list
| 653,765
|
<p>From MySQL query I get data which I put into a dictionary "d":</p>
<blockquote>
<p>d = {0: (datetime.timedelta(0,
25200),), 1: (datetime.timedelta(0,
25500),), 2: (datetime.timedelta(0,
25800),), 3: (datetime.timedelta(0,
26100),), 4: (datetime.timedelta(0,
26400),), 5: (datetime.timedelta(0,
26700),)}</p>
</blockquote>
<p>I have a list "m" with numbers like: </p>
<blockquote>
<p>m = [3, 4, 1, 4, 7, 4]</p>
</blockquote>
<p>I'd like to test "m" and if there is number "4", I'd like to receive another list "h" with hours from "d" where index from list "m" would be corresponding with keys from dictionary "d", so: m[1], m[3], m[5] would get me hours assigned to d[1], d[3], d[5] in list "h":</p>
<blockquote>
<p>h = [7:05:00, 7:15:00, 7:25:00]</p>
</blockquote>
<p>I'll appreciate your input for that...</p>
| 1
|
2009-03-17T11:03:54Z
| 653,827
|
<p>Are you asking for</p>
<pre><code>def hms(td):
    h = td.seconds // 3600
    m = (td.seconds % 3600) // 60
    s = td.seconds % 60
    return h + td.days * 24, m, s

[hms(d[i][0]) for i, v in enumerate(m) if v == 4]
</code></pre>
<p>?</p>
| -1
|
2009-03-17T11:26:13Z
|
[
"python",
"list",
"dictionary"
] |
Python - Acquire value from dictionary depending on location/index in list
| 653,765
|
<p>From MySQL query I get data which I put into a dictionary "d":</p>
<blockquote>
<p>d = {0: (datetime.timedelta(0,
25200),), 1: (datetime.timedelta(0,
25500),), 2: (datetime.timedelta(0,
25800),), 3: (datetime.timedelta(0,
26100),), 4: (datetime.timedelta(0,
26400),), 5: (datetime.timedelta(0,
26700),)}</p>
</blockquote>
<p>I have a list "m" with numbers like: </p>
<blockquote>
<p>m = [3, 4, 1, 4, 7, 4]</p>
</blockquote>
<p>I'd like to test "m" and if there is number "4", I'd like to receive another list "h" with hours from "d" where index from list "m" would be corresponding with keys from dictionary "d", so: m[1], m[3], m[5] would get me hours assigned to d[1], d[3], d[5] in list "h":</p>
<blockquote>
<p>h = [7:05:00, 7:15:00, 7:25:00]</p>
</blockquote>
<p>I'll appreciate your input for that...</p>
| 1
|
2009-03-17T11:03:54Z
| 653,829
|
<p>I'm not entirely sure if this is what you're looking for, but I'll take a shot:</p>
<pre><code>>>> indices = [index for index, i in enumerate(m) if i == 4]
>>> h = [d[i][0] for i in indices]
</code></pre>
<p>Then you have to process the timedeltas as you want to.</p>
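Since <code>str()</code> on a <code>timedelta</code> already produces the <code>H:MM:SS</code> form the question asks for, both steps can be combined into one comprehension — a sketch using the question's data:

```python
from datetime import timedelta

d = {0: (timedelta(0, 25200),), 1: (timedelta(0, 25500),),
     2: (timedelta(0, 25800),), 3: (timedelta(0, 26100),),
     4: (timedelta(0, 26400),), 5: (timedelta(0, 26700),)}
m = [3, 4, 1, 4, 7, 4]

# indices where m holds a 4 are 1, 3, 5 -> look up d[1], d[3], d[5]
h = [str(d[i][0]) for i, v in enumerate(m) if v == 4]
print(h)  # -> ['7:05:00', '7:15:00', '7:25:00']
```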
| 2
|
2009-03-17T11:27:11Z
|
[
"python",
"list",
"dictionary"
] |
Python - Acquire value from dictionary depending on location/index in list
| 653,765
|
<p>From MySQL query I get data which I put into a dictionary "d":</p>
<blockquote>
<p>d = {0: (datetime.timedelta(0,
25200),), 1: (datetime.timedelta(0,
25500),), 2: (datetime.timedelta(0,
25800),), 3: (datetime.timedelta(0,
26100),), 4: (datetime.timedelta(0,
26400),), 5: (datetime.timedelta(0,
26700),)}</p>
</blockquote>
<p>I have a list "m" with numbers like: </p>
<blockquote>
<p>m = [3, 4, 1, 4, 7, 4]</p>
</blockquote>
<p>I'd like to test "m" and if there is number "4", I'd like to receive another list "h" with hours from "d" where index from list "m" would be corresponding with keys from dictionary "d", so: m[1], m[3], m[5] would get me hours assigned to d[1], d[3], d[5] in list "h":</p>
<blockquote>
<p>h = [7:05:00, 7:15:00, 7:25:00]</p>
</blockquote>
<p>I'll appreciate your input for that...</p>
| 1
|
2009-03-17T11:03:54Z
| 653,832
|
<pre><code>deltas = [str(d[i][0]) for i, j in enumerate(m) if j == 4]
</code></pre>
<p>produces list of delta representation as strings.</p>
| 0
|
2009-03-17T11:28:08Z
|
[
"python",
"list",
"dictionary"
] |
Python - Acquire value from dictionary depending on location/index in list
| 653,765
|
<p>From MySQL query I get data which I put into a dictionary "d":</p>
<blockquote>
<p>d = {0: (datetime.timedelta(0,
25200),), 1: (datetime.timedelta(0,
25500),), 2: (datetime.timedelta(0,
25800),), 3: (datetime.timedelta(0,
26100),), 4: (datetime.timedelta(0,
26400),), 5: (datetime.timedelta(0,
26700),)}</p>
</blockquote>
<p>I have a list "m" with numbers like: </p>
<blockquote>
<p>m = [3, 4, 1, 4, 7, 4]</p>
</blockquote>
<p>I'd like to test "m" and if there is number "4", I'd like to receive another list "h" with hours from "d" where index from list "m" would be corresponding with keys from dictionary "d", so: m[1], m[3], m[5] would get me hours assigned to d[1], d[3], d[5] in list "h":</p>
<blockquote>
<p>h = [7:05:00, 7:15:00, 7:25:00]</p>
</blockquote>
<p>I'll appreciate your input for that...</p>
| 1
|
2009-03-17T11:03:54Z
| 3,224,406
|
<p>So at each index there is an n-tuple of <code>timedelta</code>s, right? Assuming, from the code, that there could potentially be more than one <code>timedelta</code> at each index.</p>
<pre><code>import datetime
d = {0: (datetime.timedelta(0, 25200),), 1: (datetime.timedelta(0, 25500),), 2: (datetime.timedelta(0, 25800),), 3: (datetime.timedelta(0, 26100),), 4: (datetime.timedelta(0, 26400),), 5: (datetime.timedelta(0, 26700),)}
m = [3, 4, 1, 4, 7, 4]
def something(m, d):
h = {}
for index in m:
if index in d and index not in h:
            total = sum(dt.seconds for dt in d[index])
hours = total / 3600
minutes = (total - (3600 * hours)) / 60
seconds = (total - (3600 * hours) - (60 * minutes))
h[index] = "%d:%02d:%02d" % (hours, minutes, seconds)
return h.values()
print something(m, d) # returns exactly what you asked for
</code></pre>
| 0
|
2010-07-11T19:48:39Z
|
[
"python",
"list",
"dictionary"
] |
A simple freeze behavior decorator
| 653,783
|
<p>I'm trying to write a freeze decorator for Python.</p>
<p>The idea is as follows :</p>
<p><strong>(In response to the two comments)</strong></p>
<p>I might be wrong, but I think there are two main uses of
test cases.</p>
<ul>
<li><p>One is test-driven development:
Ideally, developers write test cases before writing the code.
This usually helps define the architecture, because the discipline
forces the real interfaces to be defined before development.
One may even consider that, in some cases, the person who
dispatches work among developers writes the test cases and
uses them to illustrate the specification he has in mind.
I don't have any experience of test cases being used like that.</p></li>
<li><p>The second is the idea that any project of a decent
size with several programmers suffers from broken code.
Something that used to work may get broken by a change
that looked like an innocent refactoring.
Though good architecture and loose coupling between components may
help fight this phenomenon, you will sleep better
at night if you have written some test cases to make sure
that nothing will break your program's behavior.</p></li>
</ul>
<p>HOWEVER,
nobody can deny the overhead of writing test cases. In the
first case, one may argue that the test cases actually guide
development and are therefore not to be considered overhead.</p>
<p>Frankly speaking, I'm a pretty young programmer, and if I were
you, my word on this subject would not be really valuable...
Anyway, I think that most companies/projects are not working
like that, and that unit tests are mainly used in the second
case...</p>
<p>In other words, rather than ensuring that the program is
working correctly, they aim at checking that it will
work the same in the future.</p>
<p>This need can be met without the cost of writing tests,
by using this freezing decorator.</p>
<p>Let's say you have a function</p>
<pre><code>def pow(n,k):
    if k == 0: return 1
else: return n * pow(n,k-1)
</code></pre>
<p>It is perfectly nice, and you want to rewrite it as an optimized version.
It is part of a big project. You want it to give back the same result
for a few values.
Rather than going through the pain of test cases, one could use some
kind of freeze decorator.</p>
<p>Something such that the first time the decorator is run,
it runs the function with the given args
and saves the result in a map ( f --> args --> result )</p>
<pre><code>@freeze(2,0)
@freeze(1,3)
@freeze(3,5)
@freeze(0,0)
def pow(n,k):
    if k == 0: return 1
else: return n * pow(n,k-1)
</code></pre>
<p>Next time the program is executed, the decorator will load this map and check
that the result of this function for these args has not changed.</p>
<p>I already quickly wrote the decorator (see below), but hit a few problems about
which I need your advice...</p>
<pre><code>from __future__ import with_statement
from collections import defaultdict
from types import GeneratorType
import cPickle
def __id_from_function(f):
return ".".join([f.__module__, f.__name__])
def generator_firsts(g, N=100):
try:
if N==0:
return []
else:
return [g.next()] + generator_firsts(g, N-1)
except StopIteration :
return []
def __post_process(v):
specialized_postprocess = [
(GeneratorType, generator_firsts),
(Exception, str),
]
try:
val_mro = v.__class__.mro()
for ( ancestor, specialized ) in specialized_postprocess:
if ancestor in val_mro:
return specialized(v)
        raise TypeError("cannot postprocess this value")
except:
print "Cannot accept this as a value"
return None
def __eval_function(f):
def aux(args, kargs):
try:
return ( True, __post_process( f(*args, **kargs) ) )
except Exception, e:
return ( False, __post_process(e) )
return aux
def __compare_behavior(f, past_records):
for (args, kargs, result) in past_records:
assert __eval_function(f)(args,kargs) == result
def __record_behavior(f, past_records, args, kargs):
registered_args = [ (a, k) for (a, k, r) in past_records ]
if (args, kargs) not in registered_args:
res = __eval_function(f)(args, kargs)
past_records.append( (args, kargs, res) )
def __open_frz():
try:
with open(".frz", "r") as __open_frz:
return cPickle.load(__open_frz)
except:
return defaultdict(list)
def __save_frz(past_records):
with open(".frz", "w") as __open_frz:
return cPickle.dump(past_records, __open_frz)
def freeze_behavior(*args, **kvargs):
def freeze_decorator(f):
past_records = __open_frz()
f_id = __id_from_function(f)
f_past_records = past_records[f_id]
__compare_behavior(f, f_past_records)
__record_behavior(f, f_past_records, args, kvargs)
__save_frz(past_records)
return f
return freeze_decorator
</code></pre>
<ul>
<li><p>Dumping and comparing results is not trivial for all types. Right now I'm thinking about using a function (I call it postprocess here) to solve this problem.
Basically instead of storing res I store postprocess(res) and I compare postprocess(res1)==postprocess(res2), instead of comparing res1 res2.
It is important to let the user overload the predefined postprocess function.
My first question is :
<strong>Do you know a way to check if an object is dumpable or not?</strong></p></li>
<li><p>Defining a key for the decorated function is a pain. In the following snippets
I am using the function's module and its name.
<strong>Can you think of a smarter way to do that?</strong></p></li>
<li><p>The snippet below kind of works, but it opens and closes the file both when testing and when recording. This is just a quick prototype... do you know a nice way to open the file, process the decorator for all functions, and close the file?</p></li>
<li><p>I intend to add some functionality to this. For instance, the possibility to define
an iterable to browse a set of arguments, to record arguments from real use, etc.
What would you expect from such a decorator?</p></li>
<li><p>In general, would you use such a feature, knowing its limitations... especially when trying to use it with OOP?</p></li>
</ul>
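On the first question — checking whether an object is dumpable — the usual approach is simply to attempt the dump and catch the failure; a minimal sketch:

```python
import pickle

def is_dumpable(obj):
    """Return True if obj can be serialized with pickle."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:  # pickling can raise PicklingError, TypeError, ...
        return False

print(is_dumpable([1, 2, {'a': 3}]))   # -> True
print(is_dumpable(lambda x: x))        # -> False (lambdas can't be pickled)
```

Catching broadly is deliberate here: depending on the object, a failed dump can surface as <code>pickle.PicklingError</code>, <code>TypeError</code>, or other exceptions raised by the object's own <code>__reduce__</code> machinery.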
| 2
|
2009-03-17T11:08:54Z
| 654,608
|
<p>Are you looking to implement <strong>invariants</strong> or post conditions?</p>
<p>You should specify the result explicitly; this will remove most of your problems.</p>
| 0
|
2009-03-17T14:59:44Z
|
[
"python",
"testing",
"decorator",
"freeze"
] |
A simple freeze behavior decorator
| 653,783
|
<p>I'm trying to write a freeze decorator for Python.</p>
<p>The idea is as follows :</p>
<p><strong>(In response to the two comments)</strong></p>
<p>I might be wrong, but I think there are two main uses of
test cases.</p>
<ul>
<li><p>One is test-driven development:
Ideally, developers write test cases before writing the code.
This usually helps define the architecture, because the discipline
forces the real interfaces to be defined before development.
One may even consider that, in some cases, the person who
dispatches work among developers writes the test cases and
uses them to illustrate the specification he has in mind.
I don't have any experience of test cases being used like that.</p></li>
<li><p>The second is the idea that any project of a decent
size with several programmers suffers from broken code.
Something that used to work may get broken by a change
that looked like an innocent refactoring.
Though good architecture and loose coupling between components may
help fight this phenomenon, you will sleep better
at night if you have written some test cases to make sure
that nothing will break your program's behavior.</p></li>
</ul>
<p>HOWEVER,
nobody can deny the overhead of writing test cases. In the
first case, one may argue that the test cases actually guide
development and are therefore not to be considered overhead.</p>
<p>Frankly speaking, I'm a pretty young programmer, and if I were
you, my word on this subject would not be really valuable...
Anyway, I think that most companies/projects are not working
like that, and that unit tests are mainly used in the second
case...</p>
<p>In other words, rather than ensuring that the program is
working correctly, they aim at checking that it will
work the same in the future.</p>
<p>This need can be met without the cost of writing tests,
by using this freezing decorator.</p>
<p>Let's say you have a function</p>
<pre><code>def pow(n,k):
    if k == 0: return 1
else: return n * pow(n,k-1)
</code></pre>
<p>It is perfectly nice, and you want to rewrite it as an optimized version.
It is part of a big project. You want it to give back the same result
for a few values.
Rather than going through the pain of test cases, one could use some
kind of freeze decorator.</p>
<p>Something such that the first time the decorator is run,
it runs the function with the given args
and saves the result in a map ( f --> args --> result )</p>
<pre><code>@freeze(2,0)
@freeze(1,3)
@freeze(3,5)
@freeze(0,0)
def pow(n,k):
    if k == 0: return 1
else: return n * pow(n,k-1)
</code></pre>
<p>Next time the program is executed, the decorator will load this map and check
that the result of this function for these args as not changed.</p>
<p>I already quickly wrote the decorator (see below), but hit a few problems
about which I need your advice...</p>
<pre><code>from __future__ import with_statement
from collections import defaultdict
from types import GeneratorType
import cPickle

def __id_from_function(f):
    return ".".join([f.__module__, f.__name__])

def generator_firsts(g, N=100):
    # Keep at most the first N items of a generator.
    try:
        if N == 0:
            return []
        else:
            return [g.next()] + generator_firsts(g, N-1)
    except StopIteration:
        return []

def __post_process(v):
    # Turn a raw result into something picklable and comparable;
    # values without a specialized handler are kept as-is.
    specialized_postprocess = [
        (GeneratorType, generator_firsts),
        (Exception, str),
    ]
    val_mro = v.__class__.mro()
    for (ancestor, specialized) in specialized_postprocess:
        if ancestor in val_mro:
            return specialized(v)
    return v

def __eval_function(f):
    def aux(args, kargs):
        try:
            return (True, __post_process(f(*args, **kargs)))
        except Exception, e:
            return (False, __post_process(e))
    return aux

def __compare_behavior(f, past_records):
    for (args, kargs, result) in past_records:
        assert __eval_function(f)(args, kargs) == result

def __record_behavior(f, past_records, args, kargs):
    registered_args = [(a, k) for (a, k, r) in past_records]
    if (args, kargs) not in registered_args:
        res = __eval_function(f)(args, kargs)
        past_records.append((args, kargs, res))

def __open_frz():
    try:
        with open(".frz", "r") as frz_file:
            return cPickle.load(frz_file)
    except (IOError, EOFError):
        # Missing or empty freeze file: start with a fresh record map.
        return defaultdict(list)

def __save_frz(past_records):
    with open(".frz", "w") as frz_file:
        cPickle.dump(past_records, frz_file)

def freeze_behavior(*args, **kvargs):
    def freeze_decorator(f):
        past_records = __open_frz()
        f_id = __id_from_function(f)
        f_past_records = past_records[f_id]
        __compare_behavior(f, f_past_records)
        __record_behavior(f, f_past_records, args, kvargs)
        __save_frz(past_records)
        return f
    return freeze_decorator
</code></pre>
<ul>
<li><p>Dumping and comparing results is not trivial for all types. Right now I'm thinking about using a function (I call it postprocess here) to solve this problem.
Basically, instead of storing res I store postprocess(res), and I compare postprocess(res1) == postprocess(res2) instead of comparing res1 and res2.
It is important to let the user overload the predefined postprocess function.
My first question is:
<strong>Do you know a way to check if an object is dumpable or not?</strong></p></li>
<li><p>Defining a key for the decorated function is a pain. In the snippet above
I am using the function's module and name.
<strong>Can you think of a smarter way to do that?</strong></p></li>
<li><p>The snippet above kind of works, but it opens and closes the file both when testing and when recording. This is just a rough prototype... but do you know a nice way to open the file once, process the decorator for all functions, and close the file?</p></li>
<li><p>I intend to add some functionality to this. For instance, the possibility to define
an iterable to browse a set of arguments, to record arguments from real use, etc.
What would you expect from such a decorator?</p></li>
<li><p>In general, would you use such a feature, knowing its limitations... especially when trying to use it with OOP?</p></li>
</ul>
| 2
|
2009-03-17T11:08:54Z
| 654,633
|
<p><strong>"In general, would you use such a feature, knowing its limitation...?"</strong></p>
<p>Frankly speaking -- never.</p>
<p>There are no circumstances under which I would "freeze" results of a function in this way.</p>
<p>The use case appears to be based on two wrong ideas: (1) that unit testing is either hard or complex or expensive; and (2) it could be simpler to write the code, "freeze" the results and somehow use the frozen results for refactoring. This isn't helpful. Indeed, the very real possibility of freezing wrong answers makes this a bad idea.</p>
<p>First, on "consistency vs. correctness". This is easier to preserve with a simple mapping than with a complex set of decorators.</p>
<p>Do this instead of writing a freeze decorator.</p>
<pre><code>print "frozen_f=", dict( (i,f(i)) for i in range(100) )
</code></pre>
<p>The dictionary object that's created will work perfectly as a frozen result set. No decorator. No complexity to speak of.</p>
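<p>For instance, such a frozen mapping can later serve as the fixture when checking a rewritten function. A sketch in modern Python 3 syntax; <code>fast_pow</code> is a hypothetical optimized rewrite, not something from the answer:</p>

```python
# A plain dict as the "frozen" result set (Python 3 syntax;
# fast_pow is a hypothetical optimized rewrite).

def pow_slow(n, k):
    # Trusted reference implementation.
    return n ** k

# Freeze the reference results for a handful of inputs.
frozen = {(n, k): pow_slow(n, k) for n in range(5) for k in range(5)}

def fast_pow(n, k):
    # Candidate rewrite: exponentiation by squaring.
    result = 1
    while k:
        if k & 1:
            result *= n
        n *= n
        k >>= 1
    return result

# The frozen dict plays the role of the decorator's saved records.
for (n, k), expected in frozen.items():
    assert fast_pow(n, k) == expected
```

No pickling, no decorator machinery: the dict literal can even be pasted into the test module by hand.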
<p>Second, on "unit testing".</p>
<p>The point of a unit test is <strong>not</strong> to "freeze" some random results. The point of a unit test is to compare real results with results developed another (simpler, more obvious, poorly-performing way). Usually unit tests compare hand-developed results. Other times unit tests use obvious but horribly slow algorithms to produce a few key results.</p>
<p>The point of having test data around is not that it's a "frozen" result. The point of having test data is that it is an independent result. Done differently -- sometimes by different people -- that confirms that the function works.</p>
<p>Sorry. This appears to me to be a bad idea; it looks like it subverts the intent of unit testing.</p>
<p><hr /></p>
<p><strong>"HOWEVER, nobody can deny the overhead of writing test cases"</strong></p>
<p>Actually, many folks would deny the "overhead". It isn't "overhead" in the sense of wasted time and effort. For some of us, unittests are essential. Without them, the code may work, but only by accident. With them, we have ample evidence that it actually works; and the specific cases for which it works.</p>
| 3
|
2009-03-17T15:04:36Z
|
[
"python",
"testing",
"decorator",
"freeze"
] |
Equivalent for LinkedHashMap in Python
| 653,887
|
<p><a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html">LinkedHashMap</a> is the Java implementation of a Hashtable like data structure (dict in Python) with predictable iteration order. That means that during a traversal over all keys, they are ordered by insertion. This is done by an additional linked list maintaining the insertion order.</p>
<p>Is there an equivalent to that in Python? </p>
| 11
|
2009-03-17T11:46:11Z
| 653,902
|
<p>Although you can do the same thing by maintaining a list to keep track of insertion order, <a href="https://docs.python.org/2/library/collections.html" rel="nofollow">Python 2.7</a> and <a href="https://docs.python.org/3/library/collections.html" rel="nofollow">Python >=3.1</a> have an OrderedDict class in the collections module.</p>
<p>Before 2.7, you can subclass <code>dict</code> <a href="http://code.activestate.com/recipes/576693/" rel="nofollow">following this recipe</a>.</p>
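<p>A quick sketch of the insertion-order behaviour (Python 2.7+ / 3.1+; shown here with Python 3 print syntax):</p>

```python
from collections import OrderedDict

d = OrderedDict()
d['banana'] = 3
d['apple'] = 1
d['cherry'] = 2

# Iteration follows insertion order, like Java's LinkedHashMap.
print(list(d.keys()))  # -> ['banana', 'apple', 'cherry']
```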
| 10
|
2009-03-17T11:50:44Z
|
[
"python"
] |
Equivalent for LinkedHashMap in Python
| 653,887
|
<p><a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html">LinkedHashMap</a> is the Java implementation of a Hashtable like data structure (dict in Python) with predictable iteration order. That means that during a traversal over all keys, they are ordered by insertion. This is done by an additional linked list maintaining the insertion order.</p>
<p>Is there an equivalent to that in Python? </p>
| 11
|
2009-03-17T11:46:11Z
| 653,905
|
<p>I don't think so; you'd have to use a dict plus a list. But you could pretty easily wrap that in a class, and define <code>keys</code>, <code>__getitem__</code>, <code>__setitem__</code>, etc. to make it work the way you want.</p>
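<p>A minimal sketch of such a wrapper (hypothetical class name; only a few of the dict methods are shown):</p>

```python
class OrderedMap(object):
    """A dict plus a list that remembers key insertion order."""

    def __init__(self):
        self._data = {}
        self._order = []           # keys, in insertion order

    def __setitem__(self, key, value):
        if key not in self._data:  # only record a key the first time
            self._order.append(key)
        self._data[key] = value

    def __getitem__(self, key):
        return self._data[key]

    def keys(self):
        return list(self._order)

m = OrderedMap()
m['b'] = 2
m['a'] = 1
print(m.keys())  # -> ['b', 'a']
```

A full replacement would also handle deletion, iteration, `__contains__`, etc., but the dict-plus-list idea is all there is to it.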
| 1
|
2009-03-17T11:51:35Z
|
[
"python"
] |
Equivalent for LinkedHashMap in Python
| 653,887
|
<p><a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html">LinkedHashMap</a> is the Java implementation of a Hashtable like data structure (dict in Python) with predictable iteration order. That means that during a traversal over all keys, they are ordered by insertion. This is done by an additional linked list maintaining the insertion order.</p>
<p>Is there an equivalent to that in Python? </p>
| 11
|
2009-03-17T11:46:11Z
| 653,910
|
<p>I am not sure whether this is what you are asking for:</p>
<pre><code>>>> dic = {1: 'one', 2: 'two'}
>>> for k, v in dic.iteritems():
... print k, v
</code></pre>
<p>You can keep the dict in insertion order using the <a href="http://www.xs4all.nl/~anthon/Python/ordereddict/" rel="nofollow">ordereddict</a> module:</p>
<pre><code>d = ordereddict(dic, relax=True)
</code></pre>
| 2
|
2009-03-17T11:53:21Z
|
[
"python"
] |
Equivalent for LinkedHashMap in Python
| 653,887
|
<p><a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/LinkedHashMap.html">LinkedHashMap</a> is the Java implementation of a Hashtable like data structure (dict in Python) with predictable iteration order. That means that during a traversal over all keys, they are ordered by insertion. This is done by an additional linked list maintaining the insertion order.</p>
<p>Is there an equivalent to that in Python? </p>
| 11
|
2009-03-17T11:46:11Z
| 653,987
|
<p>If you're on Python 2.7 or Python >=3.1 you can use <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="nofollow">collections.OrderedDict</a> in the standard library.</p>
<p><a href="http://stackoverflow.com/questions/60848/how-do-you-retrieve-items-from-a-dictionary-in-the-order-that-theyre-inserted/61031#61031">This answer</a> to the question <a href="http://stackoverflow.com/questions/60848/how-do-you-retrieve-items-from-a-dictionary-in-the-order-that-theyre-inserted">How do you retrieve items from a dictionary in the order that they're inserted?</a> contains an implementation of an ordered dict, in case you're not using Python 3.x and don't want to give yourself a dependency on the third-party <a href="http://www.xs4all.nl/~anthon/Python/ordereddict/" rel="nofollow">ordereddict module</a>.</p>
| 6
|
2009-03-17T12:21:59Z
|
[
"python"
] |
Python Hash Functions
| 654,128
|
<p>What is a good way of hashing a hierarchy (similar to a file structure) in python?</p>
<p>I could convert the whole hierarchy into a dotted string and then hash that, but is there a better (or more efficient) way of doing this without going back and forth all the time?</p>
<p>An example of a structure I might want to hash is:</p>
<pre><code>a -> b1 -> c -> 1 -> d
a -> b2 -> c -> 2 -> d
a -> c -> 1 -> d
</code></pre>
| 0
|
2009-03-17T13:00:41Z
| 654,184
|
<p>If you have access to your hierarchy components as a tuple, just hash it - tuples are hashable. You may not gain a lot over conversion to and from a delimited string, but it's a start.</p>
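<p>For instance, a path held as a tuple hashes directly and can index a dict without any string round-trip (a sketch):</p>

```python
# A hierarchy path as a tuple of hashable components.
path = ('a', 'b1', 'c', 1, 'd')

# Tuples of hashable items are themselves hashable...
h = hash(path)

# ...so the path can be used directly as a dict key.
times = {path: 0.25}
print(times[('a', 'b1', 'c', 1, 'd')])  # -> 0.25
```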
<p>If this doesn't help, perhaps you could provide more information about how you store the hierarchy/path information.</p>
| 8
|
2009-03-17T13:12:32Z
|
[
"python",
"hash"
] |
Python Hash Functions
| 654,128
|
<p>What is a good way of hashing a hierarchy (similar to a file structure) in python?</p>
<p>I could convert the whole hierarchy into a dotted string and then hash that, but is there a better (or more efficient) way of doing this without going back and forth all the time?</p>
<p>An example of a structure I might want to hash is:</p>
<pre><code>a -> b1 -> c -> 1 -> d
a -> b2 -> c -> 2 -> d
a -> c -> 1 -> d
</code></pre>
| 0
|
2009-03-17T13:00:41Z
| 654,570
|
<p>You can make any object hashable by implementing the <a href="http://docs.python.org/reference/datamodel.html#object.%5F%5Fhash%5F%5F" rel="nofollow"><code>__hash__()</code> method</a></p>
<p>So you can simply add a suitable <code>__hash__()</code> method to the objects storing your hierarchy, e.g. compute the hash recursively, etc.</p>
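<p>A sketch of what that could look like for a tree node (hypothetical <code>Node</code> class; <code>__eq__</code> is defined alongside <code>__hash__</code> so that equal nodes hash equally):</p>

```python
class Node(object):
    def __init__(self, name, children=()):
        self.name = name
        self.children = tuple(children)

    def __hash__(self):
        # Hashing the children tuple recursively invokes
        # Node.__hash__ on every child.
        return hash((self.name, self.children))

    def __eq__(self, other):
        return (self.name, self.children) == (other.name, other.children)

a = Node('a', [Node('b1', [Node('c', [Node('1', [Node('d')])])])])
b = Node('a', [Node('b1', [Node('c', [Node('1', [Node('d')])])])])
assert a == b and hash(a) == hash(b)
```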
| 1
|
2009-03-17T14:51:34Z
|
[
"python",
"hash"
] |
Python Hash Functions
| 654,128
|
<p>What is a good way of hashing a hierarchy (similar to a file structure) in python?</p>
<p>I could convert the whole hierarchy into a dotted string and then hash that, but is there a better (or more efficient) way of doing this without going back and forth all the time?</p>
<p>An example of a structure I might want to hash is:</p>
<pre><code>a -> b1 -> c -> 1 -> d
a -> b2 -> c -> 2 -> d
a -> c -> 1 -> d
</code></pre>
| 0
|
2009-03-17T13:00:41Z
| 655,085
|
<p>How do you want to access your hierarchy? </p>
<p>If you're always going to be checking for a full path, then as suggested, use a tuple,
e.g.:</p>
<pre><code>>>> d["a","b1","c",1,"d"] = value
</code></pre>
<p>However, if you're going to be doing things like "quickly find all the items below 'a -> b1'", it may make more sense to store it as a nested hashtable (otherwise you must iterate through all items to find those you're interested in).</p>
<p>For this, a defaultdict is probably the easiest way to store. For example:</p>
<pre><code>from collections import defaultdict
def new_dict(): return defaultdict(new_dict)
d = defaultdict(new_dict)
d["a"]["b1"]["c"][1]["d"] = "test"
d["a"]["b2"]["c"][2]["d"] = "test2"
d["a"]["c"][1]["d"] = "test3"
print d["a"]["c"][1]["d"] # Prints test3
print d["a"].keys() # Prints the keys, e.g. ["c", "b1", "b2"] (dict order is arbitrary)
</code></pre>
| 4
|
2009-03-17T16:48:32Z
|
[
"python",
"hash"
] |
Python file modes detail
| 654,499
|
<p>In Python, the following statements do not work:</p>
<pre><code>f = open("ftmp", "rw")
print >> f, "python"
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
</code></pre>
<p>But with the following code it works:</p>
<pre><code>g = open("ftmp", "r+")
print >> g, "python"
</code></pre>
<p>It looks like I need to revise the file modes. What are the deep intricacies of the file opening modes?</p>
| 12
|
2009-03-17T14:32:48Z
| 654,518
|
<p>Better yet, let the documentation do it for you: <a href="http://docs.python.org/library/functions.html#open">http://docs.python.org/library/functions.html#open</a>. Your issue in the question is that there is no "rw" mode... you probably want 'r+' as you wrote (or 'a+' if the file does not yet exist).</p>
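<p>A short sketch of <code>'r+'</code> in action, shown in Python 3 syntax where <code>print >> g</code> becomes <code>print(..., file=g)</code> (<code>'a+'</code> would additionally create the file if it did not exist):</p>

```python
import os
import tempfile

# Use a temp directory so the sketch is self-contained.
path = os.path.join(tempfile.mkdtemp(), 'ftmp')

with open(path, 'w') as f:        # create the file first
    f.write('first line\n')

# 'r+' opens an existing file for both reading and writing;
# there is no "rw" mode.
with open(path, 'r+') as g:
    g.read()                      # reading moves the position to EOF
    print('python', file=g)       # so this appends

with open(path) as h:
    print(h.read())               # file now holds both lines
```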
| 16
|
2009-03-17T14:38:28Z
|
[
"python",
"file-io"
] |
Python file modes detail
| 654,499
|
<p>In Python, the following statements do not work:</p>
<pre><code>f = open("ftmp", "rw")
print >> f, "python"
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
</code></pre>
<p>But with the following code it works:</p>
<pre><code>g = open("ftmp", "r+")
print >> g, "python"
</code></pre>
<p>It looks like I need to revise the file modes. What are the deep intricacies of the file opening modes?</p>
| 12
|
2009-03-17T14:32:48Z
| 655,524
|
<p>In fact, this is okay, but I found an "rw" mode used on a socket in the following code (for Python on <a href="http://en.wikipedia.org/wiki/S60_%28software_platform%29" rel="nofollow">S60</a>), at lines 42 and 45:</p>
<p><a href="http://www.mobilenin.com/mobilepythonbook/examples/057-btchat.html" rel="nofollow">http://www.mobilenin.com/mobilepythonbook/examples/057-btchat.html</a></p>
| 0
|
2009-03-17T18:31:35Z
|
[
"python",
"file-io"
] |
Python file modes detail
| 654,499
|
<p>In Python, the following statements do not work:</p>
<pre><code>f = open("ftmp", "rw")
print >> f, "python"
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 9] Bad file descriptor
</code></pre>
<p>But with the following code it works:</p>
<pre><code>g = open("ftmp", "r+")
print >> g, "python"
</code></pre>
<p>It looks like I need to revise the file modes. What are the deep intricacies of the file opening modes?</p>
| 12
|
2009-03-17T14:32:48Z
| 656,289
|
<p>As an addition to <a href="http://stackoverflow.com/questions/654499/python-file-modes-detail/654518#654518">@Jarret Hardie's answer</a>, here's how Python checks the file mode in the function <a href="http://hg.python.org/cpython/file/3.2/Modules/_io/fileio.c#l286">fileio_init()</a>:</p>
<pre><code>s = mode;
while (*s) {
switch (*s++) {
case 'r':
if (rwa) {
bad_mode:
PyErr_SetString(PyExc_ValueError,
"Must have exactly one of read/write/append mode");
goto error;
}
rwa = 1;
self->readable = 1;
break;
case 'w':
if (rwa)
goto bad_mode;
rwa = 1;
self->writable = 1;
flags |= O_CREAT | O_TRUNC;
break;
case 'a':
if (rwa)
goto bad_mode;
rwa = 1;
self->writable = 1;
flags |= O_CREAT;
append = 1;
break;
case 'b':
break;
case '+':
if (plus)
goto bad_mode;
self->readable = self->writable = 1;
plus = 1;
break;
default:
PyErr_Format(PyExc_ValueError,
"invalid mode: %.200s", mode);
goto error;
}
}
if (!rwa)
goto bad_mode;
</code></pre>
<p>That is: only <code>"rwab+"</code> characters are allowed; there must be exactly one of <code>"rwa"</code>, at most one <code>'+'</code>, and <code>'b'</code> is a no-op.</p>
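<p>In Python 3's <code>io</code> module this check surfaces as a <code>ValueError</code> at <code>open()</code> time, before any file is touched (a sketch):</p>

```python
# "rw" contains two of 'r'/'w'/'a', so open() rejects it outright;
# the file name is never even looked at.
try:
    open('does-not-matter', 'rw')
except ValueError as e:
    mode_error = str(e)
else:
    raise AssertionError('expected ValueError for mode "rw"')

print(mode_error)
```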
| 10
|
2009-03-17T22:30:32Z
|
[
"python",
"file-io"
] |
Is there any preferable way to get user/group information from an Active Directory domain in Python?
| 654,520
|
<p>For a Django application that I'm working on, I wanted to allow group membership to be determined by Active Directory group. After a while of digging through the pywin32 documentation, I came up with this:</p>
<pre><code>>>> import win32net
>>> win32net.NetUserGetGroups('domain_name.com', 'username')
[(u'Domain Users', 7), ...]
</code></pre>
<p>I spent a while googling before I figured this out though, and the examples I found almost exclusively used LDAP for this kind of thing. Is there any reason why that's to be preferred over this method? Bear a couple things in mind:</p>
<ol>
<li>I'm not using Active Directory to actually perform authentication, only permissions. Authentication is performed by another server.</li>
<li>While it would be nice to have some cross-platform capabilities, this will probably run almost exclusively on Windows.</li>
</ol>
| 4
|
2009-03-17T14:39:46Z
| 654,739
|
<p>AD's LDAP interface has quite a few 'quirks' that make it more difficult to use than it might appear on the surface, and it tends to lag significantly behind on features. When I worked with it, I mostly dealt with authentication, but it's probably the same no matter what you're doing. There's a lot of weirdness in terms of having to be bound as a certain user just to do simple searches that a normal LDAP server would let you do as anonymous.</p>
<p>Also, at least as of a year ago, when I worked on this, python-ldap was the only Python LDAP implementation to support anywhere close to the full feature set, since it's built on top of OpenLDAP. However, OpenLDAP is rather difficult to build on Windows (and in general), so most builds will be missing one or more features. Although you're not doing authentication, a lack of SASL/Kerberos support (which was missing at the time I used it) might make things complicated for you.</p>
<p>If you have something that works, and only need to run it on Windows, I would really recommend sticking to it; using AD via LDAP can turn into a big project.</p>
| 3
|
2009-03-17T15:27:17Z
|
[
"python",
"winapi",
"active-directory",
"ldap",
"pywin32"
] |
Is there any preferable way to get user/group information from an Active Directory domain in Python?
| 654,520
|
<p>For a Django application that I'm working on, I wanted to allow group membership to be determined by Active Directory group. After a while of digging through the pywin32 documentation, I came up with this:</p>
<pre><code>>>> import win32net
>>> win32net.NetUserGetGroups('domain_name.com', 'username')
[(u'Domain Users', 7), ...]
</code></pre>
<p>I spent a while googling before I figured this out though, and the examples I found almost exclusively used LDAP for this kind of thing. Is there any reason why that's to be preferred over this method? Bear a couple things in mind:</p>
<ol>
<li>I'm not using Active Directory to actually perform authentication, only permissions. Authentication is performed by another server.</li>
<li>While it would be nice to have some cross-platform capabilities, this will probably run almost exclusively on Windows.</li>
</ol>
| 4
|
2009-03-17T14:39:46Z
| 1,391,300
|
<p>Check out <a href="http://timgolden.me.uk/python/active%5Fdirectory.html" rel="nofollow">Tim Golden's Python Stuff</a>.</p>
<pre><code>import active_directory
user = active_directory.find_user(user_name)
groups = user.memberOf
</code></pre>
| 1
|
2009-09-07T23:27:43Z
|
[
"python",
"winapi",
"active-directory",
"ldap",
"pywin32"
] |
Is there any preferable way to get user/group information from an Active Directory domain in Python?
| 654,520
|
<p>For a Django application that I'm working on, I wanted to allow group membership to be determined by Active Directory group. After a while of digging through the pywin32 documentation, I came up with this:</p>
<pre><code>>>> import win32net
>>> win32net.NetUserGetGroups('domain_name.com', 'username')
[(u'Domain Users', 7), ...]
</code></pre>
<p>I spent a while googling before I figured this out though, and the examples I found almost exclusively used LDAP for this kind of thing. Is there any reason why that's to be preferred over this method? Bear a couple things in mind:</p>
<ol>
<li>I'm not using Active Directory to actually perform authentication, only permissions. Authentication is performed by another server.</li>
<li>While it would be nice to have some cross-platform capabilities, this will probably run almost exclusively on Windows.</li>
</ol>
| 4
|
2009-03-17T14:39:46Z
| 5,170,066
|
<pre><code>import wmi
oWMI = wmi.WMI(namespace="directory\ldap")
ADUsers = oWMI.query("select ds_name from ds_user")
for user in ADUsers:
print user.ds_name
</code></pre>
| 2
|
2011-03-02T15:59:21Z
|
[
"python",
"winapi",
"active-directory",
"ldap",
"pywin32"
] |
splitting a ManyToManyField over multiple form fields in a ModelForm
| 654,576
|
<p>So I have a model with a ManyToManyField called tournaments. I have a ModelForm with two tournament fields:</p>
<pre><code>pay_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().pay_tourns(),
widget=forms.CheckboxSelectMultiple())
rep_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().rep_tourns(),
widget=forms.CheckboxSelectMultiple())
</code></pre>
<p>The methods after all() there are from a subclassed QuerySet. When I'm saving the form in my view I do thus:</p>
<pre><code>post.tournaments = (post_form.cleaned_data.get('pay_tourns')
+ post_form.cleaned_data.get('rep_tourns'))
</code></pre>
<p>Anyway, this all works fine. What I can't figure out how to do is fill these form fields out when I'm loading an existing post. That is, when I pass instance=post to the form. Any ideas?</p>
| 1
|
2009-03-17T14:52:53Z
| 655,249
|
<p>You could do something like this to the ModelForm: </p>
<pre><code>def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
instance = kwargs.get('instance')
if instance:
        # filter() returns a new queryset, so assign it back
        self.fields['pay_tourns'].queryset = self.fields['pay_tourns'].queryset.filter(post=instance)
        self.fields['rep_tourns'].queryset = self.fields['rep_tourns'].queryset.filter(post=instance)
</code></pre>
<p>I don't see why that wouldn't work, but I'm going to test it just to make sure...</p>
<p><strong>EDIT:</strong> Tested and it works.</p>
| 2
|
2009-03-17T17:27:47Z
|
[
"python",
"django",
"django-forms",
"modelform"
] |
splitting a ManyToManyField over multiple form fields in a ModelForm
| 654,576
|
<p>So I have a model with a ManyToManyField called tournaments. I have a ModelForm with two tournament fields:</p>
<pre><code>pay_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().pay_tourns(),
widget=forms.CheckboxSelectMultiple())
rep_tourns = forms.ModelMultipleChoiceField(
queryset=Tourn.objects.all().active().rep_tourns(),
widget=forms.CheckboxSelectMultiple())
</code></pre>
<p>The methods after all() there are from a subclassed QuerySet. When I'm saving the form in my view I do thus:</p>
<pre><code>post.tournaments = (post_form.cleaned_data.get('pay_tourns')
+ post_form.cleaned_data.get('rep_tourns'))
</code></pre>
<p>Anyway, this all works fine. What I can't figure out how to do is fill these form fields out when I'm loading an existing post. That is, when I pass instance=post to the form. Any ideas?</p>
| 1
|
2009-03-17T14:52:53Z
| 655,544
|
<p>Paolo Bergantino was on the right track, and helped me find it. This was the solution:</p>
<pre><code>def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
instance = kwargs.get('instance')
if instance:
self.fields['pay_tourns'].initial = [ o.id for o in instance.tournaments.all().active().pay_tourns()]
self.fields['rep_tourns'].initial = [ o.id for o in instance.tournaments.all().active().rep_tourns()]
</code></pre>
| 1
|
2009-03-17T18:37:23Z
|
[
"python",
"django",
"django-forms",
"modelform"
] |
How do I create a unique value for each key using dict.fromkeys?
| 654,646
|
<p>First, I'm new to Python, so I apologize if I've overlooked something, but I would like to use <code>dict.fromkeys</code> (or something similar) to create a dictionary of lists, the keys of which are provided in another list. I'm performing some timing tests and I'd like for the key to be the input variable and the list to contain the times for the runs:</p>
<pre><code>def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict.fromkeys(inputs, [])
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
</code></pre>
<p>The problem I'm having is that all the keys in the dictionary appear to share the same list, and each run simply appends to it. Is there any way to generate a unique empty list for each key using <code>fromkeys</code>? If not, is there another way to do this without generating the resulting dictionary by hand?</p>
| 9
|
2009-03-17T15:06:46Z
| 654,686
|
<p>Check out <a href="http://docs.python.org/library/collections.html#collections.defaultdict">defaultdict</a> (requires Python 2.5 or greater).</p>
<pre><code>from collections import defaultdict
def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = defaultdict(list) # Creates a dict where the default value for any key is an empty list
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
</code></pre>
| 12
|
2009-03-17T15:15:24Z
|
[
"python",
"dictionary",
"fromkeys"
] |
How do I create a unique value for each key using dict.fromkeys?
| 654,646
|
<p>First, I'm new to Python, so I apologize if I've overlooked something, but I would like to use <code>dict.fromkeys</code> (or something similar) to create a dictionary of lists, the keys of which are provided in another list. I'm performing some timing tests and I'd like for the key to be the input variable and the list to contain the times for the runs:</p>
<pre><code>def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict.fromkeys(inputs, [])
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
</code></pre>
<p>The problem I'm having is that all the keys in the dictionary appear to share the same list, and each run simply appends to it. Is there any way to generate a unique empty list for each key using <code>fromkeys</code>? If not, is there another way to do this without generating the resulting dictionary by hand?</p>
| 9
|
2009-03-17T15:06:46Z
| 654,689
|
<p>The problem is that in </p>
<pre><code>results = dict.fromkeys(inputs, [])
</code></pre>
<p>[] is evaluated only once, right there. </p>
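<p>The sharing is easy to demonstrate (a sketch):</p>

```python
d = dict.fromkeys((1, 2, 3), [])

d[1].append('x')   # mutate "key 1's" list...

# ...and every key sees the change: all three values
# are the very same list object.
print(d)                     # -> {1: ['x'], 2: ['x'], 3: ['x']}
print(d[1] is d[2] is d[3])  # -> True
```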
<p>I'd rewrite this code like that:</p>
<pre><code>runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = {}
for run in range(runs):
for i in inputs:
results.setdefault(i,[]).append(benchmark(i))
</code></pre>
<p>Other option is:</p>
<pre><code>runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict([(i,[]) for i in inputs])
for run in range(runs):
for i in inputs:
results[i].append(benchmark(i))
</code></pre>
| 9
|
2009-03-17T15:16:09Z
|
[
"python",
"dictionary",
"fromkeys"
] |
How do I create a unique value for each key using dict.fromkeys?
| 654,646
|
<p>First, I'm new to Python, so I apologize if I've overlooked something, but I would like to use <code>dict.fromkeys</code> (or something similar) to create a dictionary of lists, the keys of which are provided in another list. I'm performing some timing tests and I'd like for the key to be the input variable and the list to contain the times for the runs:</p>
<pre><code>def benchmark(input):
...
return time_taken
runs = 10
inputs = (1, 2, 3, 5, 8, 13, 21, 34, 55)
results = dict.fromkeys(inputs, [])
for run in range(0, runs):
for i in inputs:
results[i].append(benchmark(i))
</code></pre>
<p>The problem I'm having is that all the keys in the dictionary appear to share the same list, and each run simply appends to it. Is there any way to generate a unique empty list for each key using <code>fromkeys</code>? If not, is there another way to do this without generating the resulting dictionary by hand?</p>
| 9
|
2009-03-17T15:06:46Z
| 654,726
|
<p>You can also do this if you don't want to learn anything new (although I recommend you do!). I'm curious as to which method is faster, though.</p>
<pre><code>results = dict.fromkeys(inputs)
for run in range(0, runs):
for i in inputs:
if not results[i]:
results[i] = []
results[i].append(benchmark(i))
</code></pre>
| 2
|
2009-03-17T15:24:09Z
|
[
"python",
"dictionary",
"fromkeys"
] |
Python process management
| 654,738
|
<p>Is there any way for Python, either natively or through some code available online (preferably under the GPL), to do process management? The goal is functionality similar to ps, but preferably returned as arrays, lists, and/or dicts.</p>
| 6
|
2009-03-17T15:26:44Z
| 654,767
|
<p>There is something out there called <a href="http://www.humanized.com/ProcessManager/" rel="nofollow">ProcessManager</a>, but I have not used it before.</p>
| 0
|
2009-03-17T15:32:38Z
|
[
"python",
"process",
"ps"
] |
Python process management
| 654,738
|
<p>Is there any way for Python, either natively or through some code available online (preferably under the GPL), to do process management? The goal is functionality similar to ps, but preferably returned as arrays, lists, and/or dicts.</p>
| 6
|
2009-03-17T15:26:44Z
| 654,813
|
<p><a href="http://pypi.python.org/pypi/PSI" rel="nofollow">http://pypi.python.org/pypi/PSI</a></p>
| 2
|
2009-03-17T15:41:29Z
|
[
"python",
"process",
"ps"
] |
Python process management
| 654,738
|
<p>Is there any way for Python, either natively or through some code available online (preferably under the GPL), to do process management? The goal is functionality similar to ps, but preferably returned as arrays, lists, and/or dicts.</p>
| 6
|
2009-03-17T15:26:44Z
| 10,543,775
|
<p>This one is cross-platform:
<a href="http://code.google.com/p/psutil/" rel="nofollow">http://code.google.com/p/psutil/</a></p>
| 2
|
2012-05-11T00:10:51Z
|
[
"python",
"process",
"ps"
] |
Python process management
| 654,738
|
<p>Is there any way for Python, either natively or through some code available online (preferably under the GPL), to do process management? The goal is functionality similar to ps, but preferably returned as arrays, lists, and/or dicts.</p>
| 6
|
2009-03-17T15:26:44Z
| 10,543,890
|
<p>If you're not interested in a module, you might also have a look at the <code>/proc</code> filesystem, procfs.</p>
| 0
|
2012-05-11T00:29:09Z
|
[
"python",
"process",
"ps"
] |