| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to write a function that takes a string and prints the letters in decreasing order of frequency?
| 665,942
|
<p>I got this far:</p>
<pre><code>def most_frequent(string):
    d = dict()
    for key in string:
        if key not in d:
            d[key] = 1
        else:
            d[key] += 1
    return d

print most_frequent('aabbbc')
</code></pre>
<p>Returning:</p>
<pre><code>{'a': 2, 'c': 1, 'b': 3}
</code></pre>
<p>Now I need to:</p>
<ol>
<li>reverse the pair</li>
<li>sort by number by decreasing order</li>
<li>only print the letters out</li>
</ol>
<p>Should I convert this dictionary to tuples or list?</p>
| 4
|
2009-03-20T12:32:19Z
| 673,064
|
<p>Here's the code for your <code>most_frequent</code> function:</p>
<pre><code>>>> a = 'aabbbc'
>>> {i: a.count(i) for i in set(a)}
{'a': 2, 'c': 1, 'b': 3}
</code></pre>
<p>This particular syntax is for py3k, but it's easy to write something similar using the syntax of earlier versions. It seems a bit more readable to me than yours.</p>
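<p>To go from those counts to the letters in decreasing order of frequency (which is what the question ultimately asks for), sorting the dictionary's keys by their counts is enough. A minimal sketch building on the snippet above, using only the built-in <code>sorted</code>:</p>
<pre><code>>>> d = {i: a.count(i) for i in set(a)}
>>> ''.join(sorted(d, key=d.get, reverse=True))
'bac'
</code></pre>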
| 0
|
2009-03-23T11:36:23Z
|
[
"python"
] |
How to write a function that takes a string and prints the letters in decreasing order of frequency?
| 665,942
|
<p>I got this far:</p>
<pre><code>def most_frequent(string):
    d = dict()
    for key in string:
        if key not in d:
            d[key] = 1
        else:
            d[key] += 1
    return d

print most_frequent('aabbbc')
</code></pre>
<p>Returning:</p>
<pre><code>{'a': 2, 'c': 1, 'b': 3}
</code></pre>
<p>Now I need to:</p>
<ol>
<li>reverse the pair</li>
<li>sort by number by decreasing order</li>
<li>only print the letters out</li>
</ol>
<p>Should I convert this dictionary to tuples or list?</p>
| 4
|
2009-03-20T12:32:19Z
| 8,117,915
|
<p>Python 2.7 supports this use case directly:</p>
<pre><code>>>> from collections import Counter
>>> Counter('abracadabra').most_common()
[('a', 5), ('r', 2), ('b', 2), ('c', 1), ('d', 1)]
</code></pre>
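<p>If only the letters are wanted, in order, they can be pulled out of the (letter, count) pairs. A small follow-up sketch based on the output above:</p>
<pre><code>>>> ''.join(letter for letter, count in Counter('abracadabra').most_common())
'arbcd'
</code></pre>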
| 3
|
2011-11-14T06:06:40Z
|
[
"python"
] |
What errors/exceptions do I need to handle with urllib2.Request / urlopen?
| 666,022
|
<p>I have the following code to do a postback to a remote URL:</p>
<pre><code>request = urllib2.Request('http://www.example.com', postBackData, { 'User-Agent' : 'My User Agent' })
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    checksLogger.error('HTTPError = ' + str(e.code))
except urllib2.URLError, e:
    checksLogger.error('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
    checksLogger.error('HTTPException')
</code></pre>
<p>The postBackData is created using a dictionary encoded using urllib.urlencode. checksLogger is a logger using <a href="http://docs.python.org/library/logging.html">logging</a>.</p>
<p>I have had a problem where this code runs when the remote server is down and the code exits (this is on customer servers so I don't know what the exit stack dump / error is at this time). I'm assuming this is because there is an exception and/or error that is not being handled. So are there any other exceptions that might be triggered that I'm not handling above?</p>
| 43
|
2009-03-20T12:56:21Z
| 666,037
|
<p>You can catch all exceptions and log what's get caught:</p>
<pre><code>import sys
import traceback

def formatExceptionInfo(maxTBlevel=5):
    cla, exc, trbk = sys.exc_info()
    excName = cla.__name__
    try:
        excArgs = exc.__dict__["args"]
    except KeyError:
        excArgs = "<no args>"
    excTb = traceback.format_tb(trbk, maxTBlevel)
    return (excName, excArgs, excTb)

try:
    x = x + 1
except:
    print formatExceptionInfo()
</code></pre>
<p>(Code from <a href="http://www.linuxjournal.com/article/5821" rel="nofollow">http://www.linuxjournal.com/article/5821</a>)</p>
<p>Also read <a href="http://docs.python.org/library/sys.html#sys.exc%5Finfo" rel="nofollow">documentation on sys.exc_info</a>.</p>
| 1
|
2009-03-20T13:00:38Z
|
[
"python"
] |
What errors/exceptions do I need to handle with urllib2.Request / urlopen?
| 666,022
|
<p>I have the following code to do a postback to a remote URL:</p>
<pre><code>request = urllib2.Request('http://www.example.com', postBackData, { 'User-Agent' : 'My User Agent' })
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    checksLogger.error('HTTPError = ' + str(e.code))
except urllib2.URLError, e:
    checksLogger.error('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
    checksLogger.error('HTTPException')
</code></pre>
<p>The postBackData is created using a dictionary encoded using urllib.urlencode. checksLogger is a logger using <a href="http://docs.python.org/library/logging.html">logging</a>.</p>
<p>I have had a problem where this code runs when the remote server is down and the code exits (this is on customer servers so I don't know what the exit stack dump / error is at this time). I'm assuming this is because there is an exception and/or error that is not being handled. So are there any other exceptions that might be triggered that I'm not handling above?</p>
| 43
|
2009-03-20T12:56:21Z
| 666,052
|
<p>From the <a href="http://docs.python.org/library/urllib2.html">docs page</a> <code>urlopen</code> entry, it looks like you just need to catch <a href="http://docs.python.org/library/urllib2.html#urllib2.URLError">URLError</a>. If you really want to hedge your bets against problems within the urllib code, you can also catch <code>Exception</code> as a fall-back. Do <strong>not</strong> just <code>except:</code>, since that will catch <code>SystemExit</code> and <code>KeyboardInterrupt</code> also.</p>
<p>Edit: What I mean to say is, you're catching the errors it's supposed to throw. If it's throwing something else, it's probably due to urllib code not catching something that it should have caught and wrapped in a <code>URLError</code>. Even the stdlib tends to miss simple things like <code>AttributeError</code>. Catching <code>Exception</code> as a fall-back (and logging what it caught) will help you figure out what's happening, without trapping <code>SystemExit</code> and <code>KeyboardInterrupt</code>.</p>
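<p>A minimal sketch of the structure described above (the <code>request</code> and <code>checksLogger</code> names are carried over from the question; <code>except ... as</code> requires Python 2.6+):</p>
<pre><code>try:
    response = urllib2.urlopen(request)
except urllib2.URLError as e:
    # HTTPError is a subclass of URLError, so this handles both
    checksLogger.error('URLError = %s' % e)
except Exception:
    # fall-back for anything urllib2 failed to wrap; log it, then re-raise
    checksLogger.error('unexpected error', exc_info=True)
    raise
</code></pre>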
| 12
|
2009-03-20T13:06:01Z
|
[
"python"
] |
What errors/exceptions do I need to handle with urllib2.Request / urlopen?
| 666,022
|
<p>I have the following code to do a postback to a remote URL:</p>
<pre><code>request = urllib2.Request('http://www.example.com', postBackData, { 'User-Agent' : 'My User Agent' })
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    checksLogger.error('HTTPError = ' + str(e.code))
except urllib2.URLError, e:
    checksLogger.error('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
    checksLogger.error('HTTPException')
</code></pre>
<p>The postBackData is created using a dictionary encoded using urllib.urlencode. checksLogger is a logger using <a href="http://docs.python.org/library/logging.html">logging</a>.</p>
<p>I have had a problem where this code runs when the remote server is down and the code exits (this is on customer servers so I don't know what the exit stack dump / error is at this time). I'm assuming this is because there is an exception and/or error that is not being handled. So are there any other exceptions that might be triggered that I'm not handling above?</p>
| 43
|
2009-03-20T12:56:21Z
| 666,071
|
<pre><code>$ grep "raise" /usr/lib64/python/urllib2.py
IOError); for HTTP errors, raises an HTTPError, which can also be
raise AttributeError, attr
raise ValueError, "unknown url type: %s" % self.__original
# XXX raise an exception if no one else should try to handle
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
perform the redirect. Otherwise, raise HTTPError if no-one
raise HTTPError(req.get_full_url(), code, msg, headers, fp)
raise HTTPError(req.get_full_url(), code,
raise HTTPError(req.get_full_url(), 401, "digest auth failed",
raise ValueError("AbstractDigestAuthHandler doesn't know "
raise URLError('no host given')
raise URLError('no host given')
raise URLError(err)
raise URLError('unknown url type: %s' % type)
raise URLError('file not on local host')
raise IOError, ('ftp error', 'no host given')
raise URLError(msg)
raise IOError, ('ftp error', msg), sys.exc_info()[2]
raise GopherError('no host given')
</code></pre>
<p>There is also the possibility of exceptions in urllib2 dependencies, or of exceptions caused by genuine bugs.</p>
<p>You are best off logging all uncaught exceptions in a file via a custom <a href="http://docs.python.org/library/sys.html#sys.excepthook">sys.excepthook</a>. <em>The key rule of thumb here is to <strong>never catch exceptions you aren't planning to correct</strong>, and <strong>logging is not a correction</strong>.</em> So don't catch them just to log them.</p>
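<p>A minimal sketch of such a hook (the logger name is an illustrative assumption):</p>
<pre><code>import sys
import logging

def log_uncaught(exc_type, exc_value, exc_tb):
    # log the full traceback, then defer to the default handler
    logging.getLogger('checks').error('uncaught exception',
                                      exc_info=(exc_type, exc_value, exc_tb))
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = log_uncaught
</code></pre>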
| 10
|
2009-03-20T13:11:27Z
|
[
"python"
] |
What errors/exceptions do I need to handle with urllib2.Request / urlopen?
| 666,022
|
<p>I have the following code to do a postback to a remote URL:</p>
<pre><code>request = urllib2.Request('http://www.example.com', postBackData, { 'User-Agent' : 'My User Agent' })
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    checksLogger.error('HTTPError = ' + str(e.code))
except urllib2.URLError, e:
    checksLogger.error('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
    checksLogger.error('HTTPException')
</code></pre>
<p>The postBackData is created using a dictionary encoded using urllib.urlencode. checksLogger is a logger using <a href="http://docs.python.org/library/logging.html">logging</a>.</p>
<p>I have had a problem where this code runs when the remote server is down and the code exits (this is on customer servers so I don't know what the exit stack dump / error is at this time). I'm assuming this is because there is an exception and/or error that is not being handled. So are there any other exceptions that might be triggered that I'm not handling above?</p>
| 43
|
2009-03-20T12:56:21Z
| 666,075
|
<p>Add generic exception handler:</p>
<pre><code>request = urllib2.Request('http://www.example.com', postBackData, { 'User-Agent' : 'My User Agent' })
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    checksLogger.error('HTTPError = ' + str(e.code))
except urllib2.URLError, e:
    checksLogger.error('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
    checksLogger.error('HTTPException')
except Exception:
    import traceback
    checksLogger.error('generic exception: ' + traceback.format_exc())
</code></pre>
| 40
|
2009-03-20T13:12:45Z
|
[
"python"
] |
What errors/exceptions do I need to handle with urllib2.Request / urlopen?
| 666,022
|
<p>I have the following code to do a postback to a remote URL:</p>
<pre><code>request = urllib2.Request('http://www.example.com', postBackData, { 'User-Agent' : 'My User Agent' })
try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError, e:
    checksLogger.error('HTTPError = ' + str(e.code))
except urllib2.URLError, e:
    checksLogger.error('URLError = ' + str(e.reason))
except httplib.HTTPException, e:
    checksLogger.error('HTTPException')
</code></pre>
<p>The postBackData is created using a dictionary encoded using urllib.urlencode. checksLogger is a logger using <a href="http://docs.python.org/library/logging.html">logging</a>.</p>
<p>I have had a problem where this code runs when the remote server is down and the code exits (this is on customer servers so I don't know what the exit stack dump / error is at this time). I'm assuming this is because there is an exception and/or error that is not being handled. So are there any other exceptions that might be triggered that I'm not handling above?</p>
| 43
|
2009-03-20T12:56:21Z
| 666,393
|
<p>I catch:</p>
<ul>
<li><code>httplib.HTTPException</code></li>
<li><code>urllib2.HTTPError</code></li>
<li><code>urllib2.URLError</code></li>
</ul>
<p>I believe this covers everything including socket errors.</p>
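<p>Since <code>HTTPError</code> is a subclass of <code>URLError</code>, listing them as separate <code>except</code> clauses only works if <code>HTTPError</code> comes first; catching them as one tuple sidesteps the ordering issue. A small sketch (names carried over from the question):</p>
<pre><code>try:
    response = urllib2.urlopen(request)
except (urllib2.HTTPError, urllib2.URLError, httplib.HTTPException), e:
    checksLogger.error('request failed: %r' % e)
</code></pre>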
| 0
|
2009-03-20T14:30:09Z
|
[
"python"
] |
PyObjC + Python 3.0 Questions
| 666,148
|
<p>By default, a Cocoa-Python application uses the default Python runtime which is version 2.5. How can I configure my Xcode project so that it would use the newer Python 3.0 runtime? I tried replacing the Python.framework included in the project with the newer version but it did not work.</p>
<p>And another thing, are PyObjc modules compatible with the new version of Python?</p>
| 10
|
2009-03-20T13:30:08Z
| 667,584
|
<p>PyObjC does not yet work with Python 3.0. According to Ronald Oussoren, a (the?) PyObjC developer, Python 3.0 support is possible, but not yet implemented:</p>
<blockquote>
<p>Support for Python 3.x is on my todo
list but is non-trivial to achieve.
PyObjC contains a large amount of
pretty low-level C code, getting the
details w.r.t. to the changes in 3.0
right is not easy. I have looked into
a Python 3.x port and this should be
fairly easy, but it's still a couple
of days work. I'm not planning to work
on that before the next release of
PyObjC, that's way too long overdue as
it is.</p>
</blockquote>
<p>I'm sure patches would be welcomed.</p>
| 9
|
2009-03-20T19:20:56Z
|
[
"python",
"cocoa",
"xcode",
"pyobjc"
] |
PyObjC + Python 3.0 Questions
| 666,148
|
<p>By default, a Cocoa-Python application uses the default Python runtime which is version 2.5. How can I configure my Xcode project so that it would use the newer Python 3.0 runtime? I tried replacing the Python.framework included in the project with the newer version but it did not work.</p>
<p>And another thing, are PyObjc modules compatible with the new version of Python?</p>
| 10
|
2009-03-20T13:30:08Z
| 4,464,084
|
<p>PyObjC 2.3 added initial support for Python 3.1:</p>
<blockquote>
<p>This version requires Python 2.6 or later, and also supports
Python 3.1 or later.</p>
</blockquote>
<p>but also</p>
<blockquote>
<p>NOTE: Python 3 support is pre-alpha at this time: the code compiles
but does not pass tests yet. The code also needs to be reviewed to
check for python3<->objc integration (dict.keys now returns a view,
NSDictionary.keys still returns a basic iterator, ...)</p>
</blockquote>
<p><a href="http://svn.red-bean.com/pyobjc/tags/pyobjc-2.3/pyobjc-core/NEWS.txt" rel="nofollow">http://svn.red-bean.com/pyobjc/tags/pyobjc-2.3/pyobjc-core/NEWS.txt</a></p>
| 2
|
2010-12-16T18:23:49Z
|
[
"python",
"cocoa",
"xcode",
"pyobjc"
] |
Decorator classes in Python
| 666,216
|
<p>I want to construct classes for use as decorators with the following principles intact:</p>
<ol>
<li>It should be possible to stack multiple such class decorators on top of one function.</li>
<li>The resulting function name pointer should be indistinguishable from the same function without a decorator, save maybe for just which type/class it is.</li>
<li>Ordering of the decorators should not be relevant unless actually mandated by the decorators, i.e. independent decorators could be applied in any order.</li>
</ol>
<p>This is for a Django project, and in the specific case I am working on now the method needs 2 decorators and must appear as a normal python function:</p>
<pre><code>@AccessCheck
@AutoTemplate
def view(request, item_id) {}
</code></pre>
<p>@AutoTemplate changes the function so that instead of returning a HttpResponse, it just returns a dictionary for use in the context. A RequestContext is used, and the template name is inferred from the method name and module.</p>
<p>@AccessCheck adds additional checks on the user based on the item_id.</p>
<p>I am guessing it's just to get the constructor right and copy the appropriate attributes, but which attributes are these?</p>
<p>The following decorator won't work as I describe:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
    def __call__ (self, *args):
        return self.func(*args)
</code></pre>
<p>As demonstrated by the following code:</p>
<pre><code>@NullDecl
@NullDecl
def decorated():
    pass

def pure():
    pass

# results in set(['func_closure', 'func_dict', '__get__', 'func_name',
# 'func_defaults', '__name__', 'func_code', 'func_doc', 'func_globals'])
print set(dir(pure)) - set(dir(decorated))
</code></pre>
<p>Additionally, try and add <code>print func.__name__</code> in the NullDecl constructor, and it will work for the first decorator, but not the second - as <code>__name__</code> will be missing.</p>
<p>Refined <strong>eduffy</strong>'s answer a bit, and it seems to work pretty well:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
        for n in set(dir(func)) - set(dir(self)):
            setattr(self, n, getattr(func, n))
    def __call__ (self, *args):
        return self.func(*args)
    def __repr__(self):
        return repr(self.func)
</code></pre>
| 29
|
2009-03-20T13:48:00Z
| 666,230
|
<p>A do-nothing decorator class would look like this:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
        for name in set(dir(func)) - set(dir(self)):
            setattr(self, name, getattr(func, name))
    def __call__ (self, *args):
        return self.func(*args)
</code></pre>
<p>And then you can apply it normally:</p>
<pre><code>@NullDecl
def myFunc (x,y,z):
    return (x+y)/z
</code></pre>
| 18
|
2009-03-20T13:52:14Z
|
[
"python",
"decorator"
] |
Decorator classes in Python
| 666,216
|
<p>I want to construct classes for use as decorators with the following principles intact:</p>
<ol>
<li>It should be possible to stack multiple such class decorators on top of one function.</li>
<li>The resulting function name pointer should be indistinguishable from the same function without a decorator, save maybe for just which type/class it is.</li>
<li>Ordering of the decorators should not be relevant unless actually mandated by the decorators, i.e. independent decorators could be applied in any order.</li>
</ol>
<p>This is for a Django project, and in the specific case I am working on now the method needs 2 decorators and must appear as a normal python function:</p>
<pre><code>@AccessCheck
@AutoTemplate
def view(request, item_id) {}
</code></pre>
<p>@AutoTemplate changes the function so that instead of returning a HttpResponse, it just returns a dictionary for use in the context. A RequestContext is used, and the template name is inferred from the method name and module.</p>
<p>@AccessCheck adds additional checks on the user based on the item_id.</p>
<p>I am guessing it's just to get the constructor right and copy the appropriate attributes, but which attributes are these?</p>
<p>The following decorator won't work as I describe:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
    def __call__ (self, *args):
        return self.func(*args)
</code></pre>
<p>As demonstrated by the following code:</p>
<pre><code>@NullDecl
@NullDecl
def decorated():
    pass

def pure():
    pass

# results in set(['func_closure', 'func_dict', '__get__', 'func_name',
# 'func_defaults', '__name__', 'func_code', 'func_doc', 'func_globals'])
print set(dir(pure)) - set(dir(decorated))
</code></pre>
<p>Additionally, try and add <code>print func.__name__</code> in the NullDecl constructor, and it will work for the first decorator, but not the second - as <code>__name__</code> will be missing.</p>
<p>Refined <strong>eduffy</strong>'s answer a bit, and it seems to work pretty well:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
        for n in set(dir(func)) - set(dir(self)):
            setattr(self, n, getattr(func, n))
    def __call__ (self, *args):
        return self.func(*args)
    def __repr__(self):
        return repr(self.func)
</code></pre>
| 29
|
2009-03-20T13:48:00Z
| 666,258
|
<p>The <a href="http://pypi.python.org/pypi/decorator">decorator module</a> helps you writing signature-preserving decorators.</p>
<p>And the <a href="http://wiki.python.org/moin/PythonDecoratorLibrary">PythonDecoratorLibrary</a> might provide useful examples for decorators.</p>
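<p>For reference, the canonical usage of the <code>decorator</code> module looks roughly like this (a sketch; <code>trace</code> is just an illustrative name):</p>
<pre><code>from decorator import decorator

@decorator
def trace(func, *args, **kwargs):
    # the caller receives the original function plus its call arguments,
    # and the resulting decorator preserves func's signature
    print('calling %s' % func.__name__)
    return func(*args, **kwargs)

@trace
def add(x, y):
    return x + y
</code></pre>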
| 10
|
2009-03-20T13:59:10Z
|
[
"python",
"decorator"
] |
Decorator classes in Python
| 666,216
|
<p>I want to construct classes for use as decorators with the following principles intact:</p>
<ol>
<li>It should be possible to stack multiple such class decorators on top of one function.</li>
<li>The resulting function name pointer should be indistinguishable from the same function without a decorator, save maybe for just which type/class it is.</li>
<li>Ordering of the decorators should not be relevant unless actually mandated by the decorators, i.e. independent decorators could be applied in any order.</li>
</ol>
<p>This is for a Django project, and in the specific case I am working on now the method needs 2 decorators and must appear as a normal python function:</p>
<pre><code>@AccessCheck
@AutoTemplate
def view(request, item_id) {}
</code></pre>
<p>@AutoTemplate changes the function so that instead of returning a HttpResponse, it just returns a dictionary for use in the context. A RequestContext is used, and the template name is inferred from the method name and module.</p>
<p>@AccessCheck adds additional checks on the user based on the item_id.</p>
<p>I am guessing it's just to get the constructor right and copy the appropriate attributes, but which attributes are these?</p>
<p>The following decorator won't work as I describe:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
    def __call__ (self, *args):
        return self.func(*args)
</code></pre>
<p>As demonstrated by the following code:</p>
<pre><code>@NullDecl
@NullDecl
def decorated():
    pass

def pure():
    pass

# results in set(['func_closure', 'func_dict', '__get__', 'func_name',
# 'func_defaults', '__name__', 'func_code', 'func_doc', 'func_globals'])
print set(dir(pure)) - set(dir(decorated))
</code></pre>
<p>Additionally, try and add <code>print func.__name__</code> in the NullDecl constructor, and it will work for the first decorator, but not the second - as <code>__name__</code> will be missing.</p>
<p>Refined <strong>eduffy</strong>'s answer a bit, and it seems to work pretty well:</p>
<pre><code>class NullDecl (object):
    def __init__ (self, func):
        self.func = func
        for n in set(dir(func)) - set(dir(self)):
            setattr(self, n, getattr(func, n))
    def __call__ (self, *args):
        return self.func(*args)
    def __repr__(self):
        return repr(self.func)
</code></pre>
| 29
|
2009-03-20T13:48:00Z
| 666,302
|
<p>To create a decorator that wraps functions in a manner that makes them indistinguishable from the original function, use <code>functools.wraps</code>.</p>
<p>Example:</p>
<pre><code>import functools

def mydecorator(func):
    @functools.wraps(func)
    def _mydecorator(*args, **kwargs):
        do_something()
        try:
            return func(*args, **kwargs)
        finally:
            clean_up()
    return _mydecorator

# ... and with parameters
def mydecorator(param1, param2):
    def _mydecorator(func):
        @functools.wraps(func)
        def __mydecorator(*args, **kwargs):
            do_something(param1, param2)
            try:
                return func(*args, **kwargs)
            finally:
                clean_up()
        return __mydecorator
    return _mydecorator
</code></pre>
<p>(my personal preference is to create decorators using functions, not classes)</p>
<p>The ordering of decorators is as follows:</p>
<pre><code>
@d1
@d2
def func():
    pass

# is equivalent to

def func():
    pass

func = d1(d2(func))
</code></pre>
| 7
|
2009-03-20T14:10:41Z
|
[
"python",
"decorator"
] |
Unable to decode unicode string in Python 2.4
| 666,417
|
<p>This is in python 2.4. Here is my situation. I pull a string from a database, and it contains an umlauted 'o' (\xf6). At this point if I run type(value) it returns str. I then attempt to run .decode('utf-8'), and I get an error ('utf8' codec can't decode bytes in position 1-4). </p>
<p>Really my goal here is just to successfully make type(value) return unicode. I found an <a href="http://stackoverflow.com/questions/447107/whats-the-difference-between-encode-decode-python-2-x">earlier question</a>
that had some useful information, but the example from the picked answer doesn't seem to run for me. Is there something I am doing wrong here?</p>
<p>Here is some code to reproduce:</p>
<pre><code>Name = 'w\xc3\xb6rner'.decode('utf-8')
file.write('Name: %s - %s\n' %(Name, type(Name)))
</code></pre>
<p>I never actually get to the write statement, because it fails on the first statement. </p>
<p>Thank you for your help.</p>
<p><strong>Edit:</strong></p>
<p>I verified that the DB's charset is utf8. So in my code to reproduce I changed '\xf6' to '\xc3\xb6', and the failure still occurs. Is there a difference between 'utf-8' and 'utf8'?</p>
<p>The tip on using codecs to write to a file is handy (I'll definitely use it), but in this scenario I am only writing to a log file for debugging purposes.</p>
| 3
|
2009-03-20T14:36:31Z
| 666,430
|
<p>You need to use "ISO-8859-1":</p>
<pre><code>Name = 'w\xf6rner'.decode('iso-8859-1')
file.write('Name: %s - %s\n' %(Name, type(Name)))
</code></pre>
<p>UTF-8 uses two (or more) bytes for anything outside ASCII, but here it's just one byte, so ISO-8859-1 is probably correct.</p>
| 2
|
2009-03-20T14:41:06Z
|
[
"python",
"unicode",
"decode"
] |
Unable to decode unicode string in Python 2.4
| 666,417
|
<p>This is in python 2.4. Here is my situation. I pull a string from a database, and it contains an umlauted 'o' (\xf6). At this point if I run type(value) it returns str. I then attempt to run .decode('utf-8'), and I get an error ('utf8' codec can't decode bytes in position 1-4). </p>
<p>Really my goal here is just to successfully make type(value) return unicode. I found an <a href="http://stackoverflow.com/questions/447107/whats-the-difference-between-encode-decode-python-2-x">earlier question</a>
that had some useful information, but the example from the picked answer doesn't seem to run for me. Is there something I am doing wrong here?</p>
<p>Here is some code to reproduce:</p>
<pre><code>Name = 'w\xc3\xb6rner'.decode('utf-8')
file.write('Name: %s - %s\n' %(Name, type(Name)))
</code></pre>
<p>I never actually get to the write statement, because it fails on the first statement. </p>
<p>Thank you for your help.</p>
<p><strong>Edit:</strong></p>
<p>I verified that the DB's charset is utf8. So in my code to reproduce I changed '\xf6' to '\xc3\xb6', and the failure still occurs. Is there a difference between 'utf-8' and 'utf8'?</p>
<p>The tip on using codecs to write to a file is handy (I'll definitely use it), but in this scenario I am only writing to a log file for debugging purposes.</p>
| 3
|
2009-03-20T14:36:31Z
| 666,440
|
<p>Your string <strong>is not</strong> in UTF8 encoding. If you want to 'decode' a string to unicode, the string must actually be in the encoding you specify as the parameter. I tried this and it works perfectly:</p>
<pre><code>print 'w\xf6rner'.decode('cp1250')
</code></pre>
<p><strong>EDIT</strong></p>
<p>For writing unicode strings to the file you can use codecs module:</p>
<pre><code>import codecs
f = codecs.open("yourfile.txt", "w", "utf8")
f.write( ... )
</code></pre>
<p>It is handy to specify the encoding of the input/output once and to use 'unicode' strings throughout your code without worrying about different encodings.</p>
| 7
|
2009-03-20T14:43:51Z
|
[
"python",
"unicode",
"decode"
] |
Unable to decode unicode string in Python 2.4
| 666,417
|
<p>This is in python 2.4. Here is my situation. I pull a string from a database, and it contains an umlauted 'o' (\xf6). At this point if I run type(value) it returns str. I then attempt to run .decode('utf-8'), and I get an error ('utf8' codec can't decode bytes in position 1-4). </p>
<p>Really my goal here is just to successfully make type(value) return unicode. I found an <a href="http://stackoverflow.com/questions/447107/whats-the-difference-between-encode-decode-python-2-x">earlier question</a>
that had some useful information, but the example from the picked answer doesn't seem to run for me. Is there something I am doing wrong here?</p>
<p>Here is some code to reproduce:</p>
<pre><code>Name = 'w\xc3\xb6rner'.decode('utf-8')
file.write('Name: %s - %s\n' %(Name, type(Name)))
</code></pre>
<p>I never actually get to the write statement, because it fails on the first statement. </p>
<p>Thank you for your help.</p>
<p><strong>Edit:</strong></p>
<p>I verified that the DB's charset is utf8. So in my code to reproduce I changed '\xf6' to '\xc3\xb6', and the failure still occurs. Is there a difference between 'utf-8' and 'utf8'?</p>
<p>The tip on using codecs to write to a file is handy (I'll definitely use it), but in this scenario I am only writing to a log file for debugging purposes.</p>
| 3
|
2009-03-20T14:36:31Z
| 666,489
|
<p>It's obviously a 1-byte encoding. 'ö' in UTF-8 is '\xc3\xb6'.</p>
<p>The encoding might be:</p>
<ul>
<li>ISO-8859-1</li>
<li>ISO-8859-2</li>
<li>ISO-8859-13</li>
<li>ISO-8859-15</li>
<li>Win-1250</li>
<li>Win-1252</li>
</ul>
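<p>Since most single-byte codecs will decode any byte without complaint, the practical way to narrow this down is to decode the sample with each candidate and inspect the results by eye. A minimal sketch:</p>
<pre><code>candidates = ['iso-8859-1', 'iso-8859-2', 'iso-8859-13', 'iso-8859-15',
              'cp1250', 'cp1252']
for enc in candidates:
    try:
        print('%-12s %r' % (enc, 'w\xf6rner'.decode(enc)))
    except UnicodeDecodeError:
        print('%-12s failed' % enc)
</code></pre>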
| 4
|
2009-03-20T14:55:11Z
|
[
"python",
"unicode",
"decode"
] |
Unable to decode unicode string in Python 2.4
| 666,417
|
<p>This is in python 2.4. Here is my situation. I pull a string from a database, and it contains an umlauted 'o' (\xf6). At this point if I run type(value) it returns str. I then attempt to run .decode('utf-8'), and I get an error ('utf8' codec can't decode bytes in position 1-4). </p>
<p>Really my goal here is just to successfully make type(value) return unicode. I found an <a href="http://stackoverflow.com/questions/447107/whats-the-difference-between-encode-decode-python-2-x">earlier question</a>
that had some useful information, but the example from the picked answer doesn't seem to run for me. Is there something I am doing wrong here?</p>
<p>Here is some code to reproduce:</p>
<pre><code>Name = 'w\xc3\xb6rner'.decode('utf-8')
file.write('Name: %s - %s\n' %(Name, type(Name)))
</code></pre>
<p>I never actually get to the write statement, because it fails on the first statement. </p>
<p>Thank you for your help.</p>
<p><strong>Edit:</strong></p>
<p>I verified that the DB's charset is utf8. So in my code to reproduce I changed '\xf6' to '\xc3\xb6', and the failure still occurs. Is there a difference between 'utf-8' and 'utf8'?</p>
<p>The tip on using codecs to write to a file is handy (I'll definitely use it), but in this scenario I am only writing to a log file for debugging purposes.</p>
| 3
|
2009-03-20T14:36:31Z
| 666,789
|
<blockquote>
<p>So in my code to reproduce I changed '\xf6' to '\xc3\xb6', and the failure still occurs</p>
</blockquote>
<p>Not in the first line it doesn't:</p>
<pre><code>>>> 'w\xc3\xb6rner'.decode('utf-8')
u'w\xf6rner'
</code></pre>
<p>The second line will error out though:</p>
<pre><code>>>> file.write('Name: %s - %s\n' %(Name, type(Name)))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 7: ordinal not in range(128)
</code></pre>
<p>Which is entirely what you'd expect, trying to write non-ASCII Unicode characters to a byte stream. If you use Jiri's suggestion of a codecs-wrapped stream you can write Unicode directly, otherwise you will have to re-encode the Unicode string into bytes manually.</p>
<p>Better, for logging purposes, would be simply to spit out a repr() of the variable. Then you don't have to worry about Unicode characters being in there, or newlines or other unwanted characters:</p>
<pre><code>name= 'w\xc3\xb6rner'.decode('utf-8')
file.write('Name: %r\n' % name)
Name: u'w\xf6rner'
</code></pre>
| 2
|
2009-03-20T16:01:34Z
|
[
"python",
"unicode",
"decode"
] |
qt design issue
| 666,712
|
<p>I'm trying to design an interface like this one:
<a href="http://www.softpedia.com/screenshots/FlashFXP_2.png" rel="nofollow">http://www.softpedia.com/screenshots/FlashFXP_2.png</a></p>
<p>I'm using Qt Designer and programming in Python.
On the left it's a treeWidget, but what is on the right side? Every time I change the selection in the tree,
all the widgets are replaced...</p>
<p>thanks :p</p>
| 2
|
2009-03-20T15:43:31Z
| 666,739
|
<p>Use <a href="http://doc.trolltech.com/4.5/qstackedwidget.html" rel="nofollow">QStackedWidget</a>. You insert several widgets which correspond to the <strong>pages</strong>. Changing the active item in tree should switch the active widget/page inside the stacked widget.</p>
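<p>A minimal PyQt4 sketch of that wiring (the page titles and labels are illustrative assumptions):</p>
<pre><code>from PyQt4 import QtGui

app = QtGui.QApplication([])
window = QtGui.QWidget()
tree = QtGui.QTreeWidget()
stack = QtGui.QStackedWidget()

# one tree item per page in the stacked widget
for title in ('General', 'Transfer', 'Proxy'):
    QtGui.QTreeWidgetItem(tree, [title])
    stack.addWidget(QtGui.QLabel('%s settings page' % title))

# switch the visible page whenever the tree selection changes
tree.currentItemChanged.connect(
    lambda current, previous: stack.setCurrentIndex(
        tree.indexOfTopLevelItem(current)))

layout = QtGui.QHBoxLayout(window)
layout.addWidget(tree)
layout.addWidget(stack)
window.show()
app.exec_()
</code></pre>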
| 8
|
2009-03-20T15:48:42Z
|
[
"python",
"user-interface",
"qt"
] |
How to get whole text of an Element in xml.minidom?
| 666,724
|
<p>I want to get the whole text of an Element to parse some xhtml:</p>
<pre><code><div id='asd'>
  <pre>skdsk</pre>
</div>
</code></pre>
<p>With E being the div element in the above example, I want to get</p>
<pre><code><pre>skdsk</pre>
</code></pre>
<p>How?</p>
| 0
|
2009-03-20T15:44:37Z
| 666,764
|
<p>Strictly speaking:</p>
<pre><code>from xml.dom.minidom import parse, parseString
tree = parseString("<div id='asd'><pre>skdsk</pre></div>")
root = tree.firstChild
node = root.childNodes[0]
print node.toxml()
</code></pre>
<p>In practice, though, I'd recommend looking at the <a href="http://www.crummy.com/software/BeautifulSoup/" rel="nofollow">http://www.crummy.com/software/BeautifulSoup/</a> library. Finding the right childNode in an xhtml document, and skipping "whitespace nodes" is a pain. BeautifulSoup is a robust html/xhtml parser with fantastic tree-search capacilities.</p>
<p>Edit: The example above compresses the HTML into one string. If you use the HTML as in the question, the line breaks and so-forth will generate "whitespace" nodes, so the node you want won't be at childNodes[0].</p>
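<p>One way to skip those whitespace nodes with minidom is to filter the children by node type. A small sketch:</p>
<pre><code>from xml.dom.minidom import parseString

tree = parseString("""<div id='asd'>
  <pre>skdsk</pre>
</div>""")
root = tree.documentElement
# keep only element children, ignoring whitespace-only text nodes
elements = [n for n in root.childNodes if n.nodeType == n.ELEMENT_NODE]
print(elements[0].toxml())
</code></pre>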
| 2
|
2009-03-20T15:54:51Z
|
[
"python",
"minidom"
] |
How can I blue-box an image?
| 667,016
|
<p>I have a scanned image which is basically black print on some weird (non-gray) background, say, green or yellow (think old paper).</p>
<p>How can I get rid of the green/yellow and receive a gray picture with as much of the gray structure of the original image intact? I.e. I want to keep the gray around the letters for the anti-aliasing effect or for gray areas but I want to turn anything which even is remotely green/yellow to become pure white?</p>
<p>Note that the background is by no means homogeneous; so the algorithm should be able to accept a color and an error margin or a color range.</p>
<p>For bonus points: How can I automatically determine the background color?</p>
<p>I'd like to use Python with the Imaging Library or maybe ImageMagick.</p>
<p>Note: I'm aware of packages like <a href="http://unpaper.berlios.de/" rel="nofollow">unpaper</a>. My problem with unpaper is that it produces B&W images which probably look good for an OCR software but not for the human eye.</p>
| 0
|
2009-03-20T16:54:22Z
| 667,108
|
<p>I am more of a C++ than a Python programmer, so I can't give you a code sample. But the general algorithm is something like this:</p>
<p>Finding the background color:
You make a histogram of the image. The histogram should have two peaks representing the background and foreground colors. Because you know that the background has higher intensity you choose the peak with higher intensity and that is the background color.
Now you have the RGB background <code>(R_bg, G_bg, B_bg)</code></p>
<p>Setting the background to white:
You loop over all pixels and calculate the distance to the background:</p>
<pre><code>distance = sqrt((R_bg - R_pixel) ^ 2 + (G_bg - G_pixel) ^ 2 + (B_bg - B_pixel) ^ 2)
</code></pre>
<p>If the distance is less than a threshold you set the pixel to white. You can experiment with different thresholds until you get a good result.</p>
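<p>A rough Python/PIL translation of that algorithm (the threshold value and the peak-picking shortcut are simplifications to experiment with):</p>
<pre><code>from PIL import Image
import math

im = Image.open('scan.jpg').convert('RGB')

# background colour: the most frequent value in each channel's histogram
hist = im.histogram()
r_bg, g_bg, b_bg = [hist[i*256:(i+1)*256].index(max(hist[i*256:(i+1)*256]))
                    for i in range(3)]

threshold = 60  # tune by experiment
pixels = im.load()
for y in range(im.size[1]):
    for x in range(im.size[0]):
        r, g, b = pixels[x, y]
        d = math.sqrt((r - r_bg)**2 + (g - g_bg)**2 + (b - b_bg)**2)
        if d < threshold:
            pixels[x, y] = (255, 255, 255)  # close to background: make it white
im.save('clean.png')
</code></pre>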
| 1
|
2009-03-20T17:18:03Z
|
[
"python",
"python-imaging-library"
] |
How can I blue-box an image?
| 667,016
|
<p>I have a scanned image which is basically black print on some weird (non-gray) background, say, green or yellow (think old paper).</p>
<p>How can I get rid of the green/yellow and receive a gray picture with as much of the gray structure of the original image intact? I.e. I want to keep the gray around the letters for the anti-aliasing effect or for gray areas but I want to turn anything which even is remotely green/yellow to become pure white?</p>
<p>Note that the background is by no means homogeneous; so the algorithm should be able to accept a color and an error margin or a color range.</p>
<p>For bonus points: How can I automatically determine the background color?</p>
<p>I'd like to use Python with the Imaging Library or maybe ImageMagick.</p>
<p>Note: I'm aware of packages like <a href="http://unpaper.berlios.de/" rel="nofollow">unpaper</a>. My problem with unpaper is that it produces B&W images which probably look good for an OCR software but not for the human eye.</p>
| 0
|
2009-03-20T16:54:22Z
| 930,580
|
<p>I was looking to make an arbitrary background color transparent a while ago and developed this script. It takes the most popular (background) color in an image and creates an alpha mask where the transparency is proportional to the distance from the background color. Taking RGB colorspace distances is an expensive process for large images so I've tried some optimization using numpy and a fast integer sqrt approximation operation. Converting to HSV first might be the right approach. If you havn't solved your problem yet, I hope this helps:</p>
<pre><code>from PIL import Image
import sys, time, numpy

fldr = r'C:\python_apps'
fp = fldr+'\\IMG_0377.jpg'
rz = 0 # 2 will halve the size of the image, etc..

# ----------------

im = Image.open(fp)
if rz:
    w,h = im.size
    im = im.resize((w/rz,h/rz))
w,h = im.size

h = im.histogram()
rgb = r0,g0,b0 = [b.index(max(b)) for b in [ h[i*256:(i+1)*256] for i in range(3) ]]

def isqrt(n):
    xn = 1
    xn1 = (xn + n/xn)/2
    while abs(xn1 - xn) > 1:
        xn = xn1
        xn1 = (xn + n/xn)/2
    while xn1*xn1 > n:
        xn1 -= 1
    return xn1

vsqrt = numpy.vectorize(isqrt)

def dist(image):
    imarr = numpy.asarray(image, dtype=numpy.int32) # dtype=numpy.int8
    d = (imarr[:,:,0]-r0)**2 + (imarr[:,:,1]-g0)**2 + (imarr[:,:,2]-b0)**2
    d = numpy.asarray((vsqrt(d)).clip(0,255), dtype=numpy.uint8)
    return Image.fromarray(d,'L')

im.putalpha(dist(im))
im.save(fldr+'\\test.png')
</code></pre>
| 1
|
2009-05-30T20:50:00Z
|
[
"python",
"python-imaging-library"
] |
How can I blue-box an image?
| 667,016
|
<p>I have a scanned image which is basically black print on some weird (non-gray) background, say, green or yellow (think old paper).</p>
<p>How can I get rid of the green/yellow and receive a gray picture with as much of the gray structure of the original image intact? I.e. I want to keep the gray around the letters for the anti-aliasing effect or for gray areas but I want to turn anything which even is remotely green/yellow to become pure white?</p>
<p>Note that the background is by no means homogeneous; so the algorithm should be able to accept a color and an error margin or a color range.</p>
<p>For bonus points: How can I automatically determine the background color?</p>
<p>I'd like to use Python with the Imaging Library or maybe ImageMagick.</p>
<p>Note: I'm aware of packages like <a href="http://unpaper.berlios.de/" rel="nofollow">unpaper</a>. My problem with unpaper is that it produces B&W images which probably look good for an OCR software but not for the human eye.</p>
| 0
|
2009-03-20T16:54:22Z
| 25,145,104
|
<p>I know the question is old, but I was playing around with ImageMagick trying to do something similar, and came up with this:</p>
<pre><code>convert text.jpg -fill white -fuzz 50% +opaque black out.jpg
</code></pre>
<p>which converts this:</p>
<p><img src="http://i.stack.imgur.com/yNBpO.jpg" alt="enter image description here"></p>
<p>into this:</p>
<p><img src="http://i.stack.imgur.com/sJtNe.jpg" alt="enter image description here"></p>
<p>As regards the "average" colour, I used this:</p>
<pre><code>convert text.jpg -colors 2 -colorspace RGB -format %c histogram:info:-
5894: ( 50, 49, 19) #323113 rgb(50,49,19)
19162: (186,187, 87) #BABB57 rgb(186,187,87) <- THIS ONE !
</code></pre>
<p>which is this colour:</p>
<p><img src="http://i.stack.imgur.com/PO0an.jpg" alt="enter image description here"></p>
<p>After some more experimentation, I can get this:</p>
<p><img src="http://i.stack.imgur.com/c2nVg.jpg" alt="enter image description here"></p>
<p>using this:</p>
<pre><code>convert text.jpg -fill black -fuzz 50% -opaque rgb\(50,50,10\) -fill white +opaque black out.jpg
</code></pre>
| 1
|
2014-08-05T17:42:54Z
|
[
"python",
"python-imaging-library"
] |
Finding the static attributes of a class in Python
| 667,166
|
<p>This is an unusual question, but I'd like to dynamically generate the <code>__slots__</code> attribute of the class based on whatever attributes I happened to have added to the class.</p>
<p>For example, if I have a class:</p>
<pre><code>class A(object):
    one = 1
    two = 2
    __slots__ = ['one', 'two']
</code></pre>
<p>I'd like to do this dynamically rather than specifying the arguments by hand, how would I do this?</p>
| 3
|
2009-03-20T17:33:12Z
| 667,195
|
<p>At the point you're trying to define <code>__slots__</code>, the class hasn't been built yet, so you cannot define it dynamically from within the A class.</p>
<p>To get the behaviour you want, use a metaclass to introspect the definition of A and add a <code>__slots__</code> attribute.</p>
<pre><code>class MakeSlots(type):
    def __new__(cls, name, bases, attrs):
        attrs['__slots__'] = attrs.keys()
        return super(MakeSlots, cls).__new__(cls, name, bases, attrs)

class A(object):
    one = 1
    two = 2
    __metaclass__ = MakeSlots
</code></pre>
| 3
|
2009-03-20T17:41:24Z
|
[
"python",
"class-design"
] |
Finding the static attributes of a class in Python
| 667,166
|
<p>This is an unusual question, but I'd like to dynamically generate the <code>__slots__</code> attribute of the class based on whatever attributes I happened to have added to the class.</p>
<p>For example, if I have a class:</p>
<pre><code>class A(object):
    one = 1
    two = 2
    __slots__ = ['one', 'two']
</code></pre>
<p>I'd like to do this dynamically rather than specifying the arguments by hand, how would I do this?</p>
| 3
|
2009-03-20T17:33:12Z
| 7,950,908
|
<p>One very important thing to be aware of -- if those attributes stay in the class, the <code>__slots__</code> generation will be useless... okay, maybe not <em>useless</em> -- it will make the class attributes read-only; probably not what you want.</p>
<p>The easy way is to say, "Okay, I'll initialize them to None, then let them disappear." Excellent! Here's one way to do that:</p>
<pre><code>class B(object):
    three = None
    four = None
    temp = vars()                  # get the local namespace as a dict()
    __slots__ = temp.keys()        # put their names into __slots__
    __slots__.remove('temp')       # remove non-__slots__ names
    __slots__.remove('__module__') # now remove the names from the local
    for name in __slots__:         # namespace so we don't get read-only
        del temp[name]             # class attributes
    del temp                       # and get rid of temp
</code></pre>
<p>If you want to keep those initial values it takes a bit more work... here's one possible solution:</p>
<pre><code>class B(object):
    three = 3
    four = 4
    def __init__(self):
        for key, value in self.__init__.defaults.items():
            setattr(self, key, value)
    temp = vars()
    __slots__ = temp.keys()
    __slots__.remove('temp')
    __slots__.remove('__module__')
    __slots__.remove('__init__')
    __init__.defaults = dict()
    for name in __slots__:
        __init__.defaults[name] = temp[name]
        del temp[name]
    del temp
</code></pre>
<p>As you can see, it is possible to do this without a metaclass -- but who wants all that boilerplate? A metaclass could definitely help us clean this up:</p>
<pre><code>class MakeSlots(type):
    def __new__(cls, name, bases, attrs):
        new_attrs = {}
        new_attrs['__slots__'] = slots = attrs.keys()
        slots.remove('__module__')
        slots.remove('__metaclass__')
        new_attrs['__weakref__'] = None
        new_attrs['__init__'] = init = new_init
        init.defaults = dict()
        for name in slots:
            init.defaults[name] = attrs[name]
        return super(MakeSlots, cls).__new__(cls, name, bases, new_attrs)

def new_init(self):
    for key, value in self.__init__.defaults.items():
        setattr(self, key, value)

class A(object):
    __metaclass__ = MakeSlots
    one = 1
    two = 2

class B(object):
    __metaclass__ = MakeSlots
    three = 3
    four = 4
</code></pre>
<p>Now all the tediousness is kept in the metaclass, and the actual class is easy to read and (hopefully!) understand.</p>
<p>If you need to have anything else in these classes besides attributes I strongly suggest you put whatever it is in a mixin class -- having them directly in the final class would complicate the metaclass even more.</p>
| 1
|
2011-10-31T07:25:51Z
|
[
"python",
"class-design"
] |
Crunching xml with python
| 667,359
|
<p>I need to remove white spaces between xml tags, e.g. if the original xml looks like:</p>
<pre><code><node1>
  <node2>
    <node3>foo</node3>
  </node2>
</node1>
</code></pre>
<p>I'd like the end-result to be <em>crunched</em> down to single line:</p>
<pre><code><node1><node2><node3>foo</node3></node2></node1>
</code></pre>
<p>Please note that I will not have control over the xml structure, so the solution should be generic enough to be able to handle any valid xml. Also the xml might contain CDATA blocks, which I'd need to exclude from this <em>crunching</em> and leave them as-is.</p>
<p>I have a couple of ideas so far: (1) parse the xml as text and look for the start and end of tags < and >; (2) load the xml document, go node-by-node, and print out a <em>new</em> document by concatenating the tags.</p>
<p>I think either method would work, but I'd rather not reinvent the wheel here, so maybe there is a Python library that already does something like this? If not, any issues/pitfalls to be aware of when rolling out my own <em>cruncher</em>? Any recommendations?</p>
<p><b>EDIT</b>
Thank you all for answers/suggestions, both Triptych's and Van Gale's solutions work for me and do exactly what I want. Wish I could accept both answers.</p>
| 5
|
2009-03-20T18:22:30Z
| 667,393
|
<p>I'd use XSLT:</p>
<pre><code><xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" encoding="UTF-8" omit-xml-declaration="yes"/>
  <xsl:strip-space elements="*"/>
  <xsl:template match="*">
    <xsl:copy>
      <xsl:copy-of select="@*" />
      <xsl:apply-templates />
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
</code></pre>
<p>That should do the trick.</p>
<p>In python you could use <a href="http://codespeak.net/lxml/xpathxslt.html#xslt" rel="nofollow">lxml (direct link to sample on homepage)</a> to transform it.</p>
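<p>A minimal lxml sketch applying this stylesheet (file names are placeholders):</p>
<pre><code>from lxml import etree

transform = etree.XSLT(etree.parse('test.xsl'))
result = transform(etree.parse('test.xml'))
print(str(result))
</code></pre>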
<p>For some tests, use <code>xsltproc</code>, sample:</p>
<pre><code>xsltproc test.xsl test.xml
</code></pre>
<p>where <code>test.xsl</code> is the file above and <code>test.xml</code> your XML file.</p>
| 5
|
2009-03-20T18:33:18Z
|
[
"python",
"xml"
] |
Crunching xml with python
| 667,359
|
<p>I need to remove white spaces between xml tags, e.g. if the original xml looks like:</p>
<pre><code><node1>
  <node2>
    <node3>foo</node3>
  </node2>
</node1>
</code></pre>
<p>I'd like the end-result to be <em>crunched</em> down to single line:</p>
<pre><code><node1><node2><node3>foo</node3></node2></node1>
</code></pre>
<p>Please note that I will not have control over the xml structure, so the solution should be generic enough to be able to handle any valid xml. Also the xml might contain CDATA blocks, which I'd need to exclude from this <em>crunching</em> and leave them as-is.</p>
<p>I have a couple of ideas so far: (1) parse the xml as text and look for the start and end of tags < and >; (2) load the xml document, go node-by-node, and print out a <em>new</em> document by concatenating the tags.</p>
<p>I think either method would work, but I'd rather not reinvent the wheel here, so maybe there is a Python library that already does something like this? If not, any issues/pitfalls to be aware of when rolling out my own <em>cruncher</em>? Any recommendations?</p>
<p><b>EDIT</b>
Thank you all for answers/suggestions, both Triptych's and Van Gale's solutions work for me and do exactly what I want. Wish I could accept both answers.</p>
| 5
|
2009-03-20T18:22:30Z
| 667,394
|
<p>Not a solution really but since you asked for recommendations: I'd advise against doing your own parsing (unless you want to learn how to write a complex parser) because, as you say, not all spaces should be removed. There are not only CDATA blocks but also elements with the "xml:space=preserve" attribute, which correspond to things like <code><pre></code> in XHTML (where the enclosed whitespaces actually have meaning), and writing a parser that is able to recognize those elements and leave the whitespace alone would be possible but unpleasant. </p>
<p>I would go with the parsing method, i.e. load the document and go node-by-node printing them out. That way you can easily identify which nodes you can strip the spaces out of and which you can't. There are some modules in the Python standard library, none of which I have ever used ;-) that could be useful to you... try <code>xml.dom</code>, or I'm not sure if you could do this with <code>xml.parsers.expat</code>.</p>
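<p>A small minidom sketch of that approach, which removes whitespace-only text nodes while leaving <code>xml:space="preserve"</code> subtrees (and CDATA sections, which are a different node type) alone:</p>
<pre><code>from xml.dom import minidom

def strip_whitespace(node):
    for child in list(node.childNodes):
        if child.nodeType == child.TEXT_NODE and not child.data.strip():
            node.removeChild(child)
        elif child.nodeType == child.ELEMENT_NODE:
            # skip subtrees that explicitly ask for whitespace preservation
            if child.getAttribute('xml:space') != 'preserve':
                strip_whitespace(child)

doc = minidom.parse('test.xml')
strip_whitespace(doc.documentElement)
print(doc.documentElement.toxml())
</code></pre>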
| 2
|
2009-03-20T18:34:09Z
|
[
"python",
"xml"
] |
Crunching xml with python
| 667,359
|
<p>I need to remove white spaces between xml tags, e.g. if the original xml looks like:</p>
<pre><code><node1>
  <node2>
    <node3>foo</node3>
  </node2>
</node1>
</code></pre>
<p>I'd like the end-result to be <em>crunched</em> down to single line:</p>
<pre><code><node1><node2><node3>foo</node3></node2></node1>
</code></pre>
<p>Please note that I will not have control over the xml structure, so the solution should be generic enough to be able to handle any valid xml. Also the xml might contain CDATA blocks, which I'd need to exclude from this <em>crunching</em> and leave them as-is.</p>
<p>I have a couple of ideas so far: (1) parse the xml as text and look for the start and end of tags < and >; (2) load the xml document, go node-by-node, and print out a <em>new</em> document by concatenating the tags.</p>
<p>I think either method would work, but I'd rather not reinvent the wheel here, so maybe there is a Python library that already does something like this? If not, any issues/pitfalls to be aware of when rolling out my own <em>cruncher</em>? Any recommendations?</p>
<p><b>EDIT</b>
Thank you all for answers/suggestions, both Triptych's and Van Gale's solutions work for me and do exactly what I want. Wish I could accept both answers.</p>
| 5
|
2009-03-20T18:22:30Z
| 667,466
|
<p>This is pretty easily handled with lxml (note: this particular feature isn't in ElementTree):</p>
<pre><code>from lxml import etree
parser = etree.XMLParser(remove_blank_text=True)
foo = """<node1>
  <node2>
    <node3>foo </node3>
  </node2>
</node1>"""
bar = etree.XML(foo, parser)
print etree.tostring(bar,pretty_print=False,with_tail=True)
</code></pre>
<p>Results in:</p>
<pre><code><node1><node2><node3>foo </node3></node2></node1>
</code></pre>
<p><strong>Edit:</strong> The answer by Triptych reminded me about the CDATA requirements, so the line creating the parser object should actually look like this:</p>
<pre><code>parser = etree.XMLParser(remove_blank_text=True, strip_cdata=False)
</code></pre>
| 8
|
2009-03-20T18:52:01Z
|
[
"python",
"xml"
] |
Crunching xml with python
| 667,359
|
<p>I need to remove white spaces between xml tags, e.g. if the original xml looks like:</p>
<pre><code><node1>
  <node2>
    <node3>foo</node3>
  </node2>
</node1>
</code></pre>
<p>I'd like the end-result to be <em>crunched</em> down to single line:</p>
<pre><code><node1><node2><node3>foo</node3></node2></node1>
</code></pre>
<p>Please note that I will not have control over the xml structure, so the solution should be generic enough to be able to handle any valid xml. Also the xml might contain CDATA blocks, which I'd need to exclude from this <em>crunching</em> and leave them as-is.</p>
<p>I have a couple of ideas so far: (1) parse the xml as text and look for the start and end of tags < and >; (2) load the xml document, go node-by-node, and print out a <em>new</em> document by concatenating the tags.</p>
<p>I think either method would work, but I'd rather not reinvent the wheel here, so maybe there is a Python library that already does something like this? If not, any issues/pitfalls to be aware of when rolling out my own <em>cruncher</em>? Any recommendations?</p>
<p><b>EDIT</b>
Thank you all for answers/suggestions, both Triptych's and Van Gale's solutions work for me and do exactly what I want. Wish I could accept both answers.</p>
| 5
|
2009-03-20T18:22:30Z
| 667,488
|
<p><strong>Pretty straightforward with BeautifulSoup.</strong></p>
<p>This solution assumes it is ok to strip whitespace from the tail ends of character data.<br />
Example: <code><foo> bar </foo></code> becomes <code><foo>bar</foo></code></p>
<p>It will correctly ignore comments and CDATA.</p>
<pre><code>import BeautifulSoup
s = """
<node1>
  <node2>
    <node3>foo</node3>
  </node2>
  <node3>
    <!-- I'm a comment! Leave me be! -->
  </node3>
  <node4>
    <![CDATA[
    I'm CDATA! Changing me would be bad!
    ]]>
  </node4>
</node1>
"""
soup = BeautifulSoup.BeautifulStoneSoup(s)
for t in soup.findAll(text=True):
    if type(t) is BeautifulSoup.NavigableString: # Ignores comments and CDATA
        t.replaceWith(t.strip())
print soup
</code></pre>
| 4
|
2009-03-20T18:59:55Z
|
[
"python",
"xml"
] |
How to debug a weird threaded open fifo issue?
| 667,500
|
<p>A web service is configured to expose some of its data when receiving a USR1 signal. The signal will be sent by a xinetd server when it receives a request from a remote client, e.g. nc myserver 50666. When the web server receives USR1 signal, it opens a dedicated fifo pipe, writes its data to the pipe, and then close the pipe. In the meantime, the xinetd server reads the pipe and feeds to the remote client.</p>
<p>Most of the time they work nicely, but occasionally, for some reason, the client will receive dup records. From the log, it seems like the pipe did not get closed properly and the cache is left over, so the next time it serves, both the previous and the current data are sent to the client. The problem is that it doesn't happen consistently; unluckily, I wasn't able to reproduce it even once.</p>
<p>The following are the simple snippets to demonstrate the process:</p>
<p>The web server: (webserver.py)</p>
<pre><code>def SendStream(data, pipe):
    try:
        for i in data:
            pipe.write(i + '\n')
            pipe.flush()
    finally:
        pipe.close()

def Serve():
    threading.Thread(target=SendStream, args=(data, pipe)).start()
</code></pre>
<p>The xinetd.d server: (spitter.py)</p>
<pre><code>def Serve():
    if not os.path.exists(PIPE_FILE):
        os.mkfifo(PIPE_FILE)
    os.kill(server_pid, signal.SIGUSR1)
    for i in open(PIPE_FILE):
        print i,
</code></pre>
<p>So what exactly happened to cause the dup? How do I trigger it? As a current fix, I unlink the pipe file and recreate it every time to avoid any leftovers, but I don't know if that's a proper solution.</p>
| 0
|
2009-03-20T19:01:14Z
| 667,670
|
<p>If you get two copies of spitter.py running at the same time, there will be trouble and almost anything that happens to you is legal. Try adding a process id value to webserver.py, i.e.:</p>
<pre><code>pipe.write(str(os.getpid()) + i + '\n')
</code></pre>
<p>That might be illuminating.</p>
| 0
|
2009-03-20T19:40:18Z
|
[
"python",
"multithreading",
"stream",
"fifo",
"inetd"
] |
How to debug a weird threaded open fifo issue?
| 667,500
|
<p>A web service is configured to expose some of its data when receiving a USR1 signal. The signal will be sent by a xinetd server when it receives a request from a remote client, e.g. nc myserver 50666. When the web server receives USR1 signal, it opens a dedicated fifo pipe, writes its data to the pipe, and then close the pipe. In the meantime, the xinetd server reads the pipe and feeds to the remote client.</p>
<p>Most of the time they work nicely, but occasionally, for some reason, the client will receive dup records. From the log, it seems like the pipe did not get closed properly and the cache is left over, so the next time it serves, both the previous and the current data are sent to the client. The problem is that it doesn't happen consistently; unluckily, I wasn't able to reproduce it even once.</p>
<p>The following are the simple snippets to demonstrate the process:</p>
<p>The web server: (webserver.py)</p>
<pre><code>def SendStream(data, pipe):
    try:
        for i in data:
            pipe.write(i + '\n')
            pipe.flush()
    finally:
        pipe.close()

def Serve():
    threading.Thread(target=SendStream, args=(data, pipe)).start()
</code></pre>
<p>The xinetd.d server: (spitter.py)</p>
<pre><code>def Serve():
    if not os.path.exists(PIPE_FILE):
        os.mkfifo(PIPE_FILE)
    os.kill(server_pid, signal.SIGUSR1)
    for i in open(PIPE_FILE):
        print i,
</code></pre>
<p>So what exactly happened to cause the dup? How do I trigger it? As a current fix, I unlink the pipe file and recreate it every time to avoid any leftovers, but I don't know if that's a proper solution.</p>
| 0
|
2009-03-20T19:01:14Z
| 699,049
|
<p>There isn't enough to debug here. You don't show how the server handles signals, or opens the pipe.</p>
<p>If at all possible I would recommend not using signals. They're hairy enough in C, nevermind with python's own peculiarities added on top.</p>
| 0
|
2009-03-30T21:13:25Z
|
[
"python",
"multithreading",
"stream",
"fifo",
"inetd"
] |
How to debug a weird threaded open fifo issue?
| 667,500
|
<p>A web service is configured to expose some of its data when receiving a USR1 signal. The signal will be sent by a xinetd server when it receives a request from a remote client, e.g. nc myserver 50666. When the web server receives the USR1 signal, it opens a dedicated fifo pipe, writes its data to the pipe, and then closes the pipe. In the meantime, the xinetd server reads the pipe and feeds it to the remote client.</p>
<p>Most of the time, they work nicely, but occasionally, for some reason, the client will receive duplicate records. From the log, it seems like the pipe did not get closed properly and the leftover data stays cached, so the next time it serves, both the previous and the current data are sent to the client. The problem is that it doesn't happen consistently; unluckily, I wasn't able to reproduce it even once.</p>
<p>The following are the simple snippets to demonstrate the process:</p>
<p>The web server: (webserver.py)</p>
<pre><code>def SendStream(data, pipe):
    try:
        for i in data:
            pipe.write(i + '\n')
            pipe.flush()
    finally:
        pipe.close()

def Serve():
    threading.Thread(target=SendStream, args=(data, pipe)).start()
</code></pre>
<p>The xinetd.d server: (spitter.py)</p>
<pre><code>def Serve():
    if not os.path.exists(PIPE_FILE):
        os.mkfifo(PIPE_FILE)
    os.kill(server_pid, signal.SIGUSR1)
    for i in open(PIPE_FILE):
        print i,
</code></pre>
<p>So what exactly happened to cause the dup? How can it be triggered? As a current fix, I unlink the pipe file and recreate it every time to avoid any leftovers, but I don't know if that's a proper solution.</p>
| 0
|
2009-03-20T19:01:14Z
| 860,215
|
<p>So the real problem is that multiple clients exist. The server has been queried/abused by other, unknown clients which weren't part of what was initially agreed with customers, and it will surely break under the current design. A fix has been deployed to address the issue. So Andy's suspicion is right. Thanks guys!</p>
| 0
|
2009-05-13T20:23:02Z
|
[
"python",
"multithreading",
"stream",
"fifo",
"inetd"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 667,520
|
<p>A Token Bucket is fairly simple to implement.</p>
<p>Start with a bucket with 5 tokens.</p>
<p>Every 5/8 seconds: If the bucket has fewer than 5 tokens, add one.</p>
<p>Each time you want to send a message: If the bucket has at least one token, take one token out and send the message. Otherwise, wait/drop the message/whatever.</p>
<p>(obviously, in actual code, you'd use an integer counter instead of real tokens and you can optimize out the every 5/8s step by storing timestamps)</p>
<p><hr /></p>
<p>Reading the question again, if the rate limit is fully reset each 8 seconds, then here is a modification:</p>
<p>Start with a timestamp, <code>last_send</code>, at a time long ago (e.g., at the epoch). Also, start with the same 5-token bucket.</p>
<p>Strike the every 5/8 seconds rule.</p>
<p>Each time you send a message: First, check if <code>last_send</code> is ≥ 8 seconds ago. If so, fill the bucket (set it to 5 tokens). Second, if there are tokens in the bucket, send the message (otherwise, drop/wait/etc.). Third, set <code>last_send</code> to now.</p>
<p>That should work for that scenario.</p>
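<p>A minimal Python sketch of that modified scheme (the class and names are my own; <code>send_message</code> is a placeholder for however the bot actually sends):</p>
<pre><code>import time

class BurstLimiter(object):
    """At most 5 messages per 8 seconds; the bucket is refilled
    whenever the last send attempt was 8 or more seconds ago."""

    def __init__(self, burst=5, period=8.0):
        self.burst = burst
        self.period = period
        self.tokens = burst
        self.last_send = 0.0   # "a time long ago"

    def try_send(self, msg):
        now = time.time()
        # First: refill the bucket if last_send is >= 8 seconds ago.
        if now - self.last_send >= self.period:
            self.tokens = self.burst
        # Second: send only if a token is available.
        ok = self.tokens >= 1
        if ok:
            self.tokens -= 1
            send_message(msg)   # placeholder for the bot's send call
        # Third: set last_send to now.
        self.last_send = now
        return ok
</code></pre>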
<p><hr /></p>
<p>I've actually written an IRC bot using a strategy like this (the first approach). It's in Perl, not Python, but here is some code to illustrate:</p>
<p>The first part here handles adding tokens to the bucket. You can see the optimization of adding tokens based on time (2nd to last line) and then the last line clamps bucket contents to the maximum (MESSAGE_BURST)</p>
<pre><code> my $start_time = time;
...
# Bucket handling
my $bucket = $conn->{fujiko_limit_bucket};
my $lasttx = $conn->{fujiko_limit_lasttx};
$bucket += ($start_time-$lasttx)/MESSAGE_INTERVAL;
($bucket <= MESSAGE_BURST) or $bucket = MESSAGE_BURST;
</code></pre>
<p>$conn is a data structure which is passed around. This is inside a method that runs routinely (it calculates the next time it'll have something to do, and sleeps either that long or until it gets network traffic). The next part of the method handles sending. It is rather complicated, because messages have priorities associated with them.</p>
<pre><code># Queue handling. Start with the ultimate queue.
my $queues = $conn->{fujiko_queues};
foreach my $entry (@{$queues->[PRIORITY_ULTIMATE]}) {
    # Ultimate is special. We run ultimate no matter what. Even if
    # it sends the bucket negative.
    --$bucket;
    $entry->{code}(@{$entry->{args}});
}
$queues->[PRIORITY_ULTIMATE] = [];
</code></pre>
<p>That's the first queue, which is run no matter what. Even if it gets our connection killed for flooding. Used for extremely important things, like responding to the server's PING. Next, the rest of the queues:</p>
<pre><code># Continue to the other queues, in order of priority.
QRUN: for (my $pri = PRIORITY_HIGH; $pri >= PRIORITY_JUNK; --$pri) {
    my $queue = $queues->[$pri];
    while (scalar(@$queue)) {
        if ($bucket < 1) {
            # continue later.
            $need_more_time = 1;
            last QRUN;
        } else {
            --$bucket;
            my $entry = shift @$queue;
            $entry->{code}(@{$entry->{args}});
        }
    }
}
</code></pre>
<p>Finally, the bucket status is saved back to the $conn data structure (actually a bit later in the method; it first calculates how soon it'll have more work)</p>
<pre><code> # Save status.
$conn->{fujiko_limit_bucket} = $bucket;
$conn->{fujiko_limit_lasttx} = $start_time;
</code></pre>
<p>As you can see, the actual bucket handling code is very small: about four lines. The rest of the code is priority queue handling. The bot has priority queues so that e.g., someone chatting with it can't prevent it from doing its important kick/ban duties.</p>
| 19
|
2009-03-20T19:04:30Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 667,528
|
<p>One solution is to attach a timestamp to each queue item and to discard the item after 8 seconds have passed. You can perform this check each time the queue is added to.</p>
<p>This only works if you limit the queue size to 5 and discard any additions whilst the queue is full.</p>
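<p>A minimal sketch of that idea (the names are my own; the 5-per-8-seconds numbers come from the question):</p>
<pre><code>import time

WINDOW = 8.0   # seconds
LIMIT = 5      # messages per window
queue = []     # (timestamp, message) pairs, oldest first

def try_enqueue(msg):
    now = time.time()
    # Each time the queue is added to, discard items older than 8 seconds.
    while queue and now - queue[0][0] > WINDOW:
        queue.pop(0)
    if len(queue) >= LIMIT:
        return False            # queue is full: discard the addition
    queue.append((now, msg))
    return True
</code></pre>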
| 1
|
2009-03-20T19:07:15Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 667,568
|
<p>Keep the time that the last five lines were sent. Hold the queued messages until the time the fifth-most-recent message (if it exists) is at least 8 seconds in the past (with last_five as an array of times):</p>
<pre><code>now = time.time()
# Send if fewer than five are recorded, or the fifth-most-recent
# (the oldest kept entry) is at least 8 seconds old.
if len(last_five) < 5 or (now - last_five[-1]) >= 8.0:
    last_five.insert(0, now)
    send_message(msg)
    if len(last_five) > 5:
        last_five.pop()
</code></pre>
| 2
|
2009-03-20T19:18:10Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 667,706
|
<p>Use this decorator @RateLimited(ratepersec) before your function that enqueues.</p>
<p>Basically, it checks whether 1/rate seconds have passed since the last time; if not, it waits out the remainder of that interval, otherwise it doesn't wait. This effectively limits you to rate/sec. The decorator can be applied to any function you want rate-limited.</p>
<p>In your case, if you want a maximum of 5 messages per 8 seconds, use @RateLimited(0.625) before your sendToQueue function.</p>
<pre><code>import time

def RateLimited(maxPerSecond):
    minInterval = 1.0 / float(maxPerSecond)
    def decorate(func):
        lastTimeCalled = [0.0]
        def rateLimitedFunction(*args, **kargs):
            # wall-clock time; time.clock() would measure CPU time on Unix
            elapsed = time.time() - lastTimeCalled[0]
            leftToWait = minInterval - elapsed
            if leftToWait > 0:
                time.sleep(leftToWait)
            ret = func(*args, **kargs)
            lastTimeCalled[0] = time.time()
            return ret
        return rateLimitedFunction
    return decorate

@RateLimited(2)  # 2 per second at most
def PrintNumber(num):
    print num

if __name__ == "__main__":
    print "This should print 1,2,3... at about 2 per second."
    for i in range(1, 100):
        PrintNumber(i)
</code></pre>
| 34
|
2009-03-20T19:51:44Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 668,327
|
<p>Here is the <a href="http://en.wikipedia.org/wiki/Token_bucket">simplest algorithm</a>, if you just want to drop messages when they arrive too quickly (instead of queuing them, which makes sense because the queue might get arbitrarily large):</p>
<pre><code>rate = 5.0;       // unit: messages
per = 8.0;        // unit: seconds
allowance = rate; // unit: messages
last_check = now(); // floating-point, e.g. usec accuracy. Unit: seconds

when (message_received):
    current = now();
    time_passed = current - last_check;
    last_check = current;
    allowance += time_passed * (rate / per);
    if (allowance > rate):
        allowance = rate; // throttle
    if (allowance < 1.0):
        discard_message();
    else:
        forward_message();
        allowance -= 1.0;
</code></pre>
<p>There are no data structures, timers, etc. in this solution, and it works cleanly :) To see this, note that 'allowance' grows at a rate of at most 5/8 units per second, i.e. at most five units per eight seconds. Every message that is forwarded deducts one unit, so you can't send more than five messages in every eight seconds.</p>
<p>Note that <code>rate</code> should be an integer, i.e. without a non-zero decimal part, or the algorithm won't work correctly (the actual rate will not be <code>rate/per</code>). E.g. <code>rate=0.5; per=1.0;</code> does not work because <code>allowance</code> will never grow to 1.0. But <code>rate=1.0; per=2.0;</code> works fine.</p>
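<p>For anyone who wants to paste this straight into a bot, here is a line-for-line Python transliteration of the pseudocode above (the two callbacks are placeholders):</p>
<pre><code>import time

rate = 5.0             # unit: messages
per = 8.0              # unit: seconds
allowance = rate       # unit: messages
last_check = time.time()

def on_message_received(forward_message, discard_message):
    global allowance, last_check
    current = time.time()
    time_passed = current - last_check
    last_check = current
    allowance += time_passed * (rate / per)
    if allowance > rate:
        allowance = rate       # throttle
    if allowance < 1.0:
        discard_message()
    else:
        forward_message()
        allowance -= 1.0
</code></pre>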
| 152
|
2009-03-20T23:15:27Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 937,220
|
<p>How about this:</p>
<pre><code>long check_time = System.currentTimeMillis();
int msgs_sent_count = 0;

private boolean isRateLimited(int msgs_per_sec) {
    // start a fresh one-second window once the current one has expired
    if (System.currentTimeMillis() - check_time > 1000) {
        check_time = System.currentTimeMillis();
        msgs_sent_count = 0;
    }
    if (msgs_sent_count > (msgs_per_sec - 1)) {
        return true;  // limit reached within this window
    } else {
        msgs_sent_count++;
    }
    return false;
}
</code></pre>
| 0
|
2009-06-01T22:58:49Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 6,415,181
|
<p>To block processing until the message can be sent, thus queuing up further messages, antti's beautiful solution may also be modified like this:</p>
<pre><code>rate = 5.0;       // unit: messages
per = 8.0;        // unit: seconds
allowance = rate; // unit: messages
last_check = now(); // floating-point, e.g. usec accuracy. Unit: seconds

when (message_received):
    current = now();
    time_passed = current - last_check;
    last_check = current;
    allowance += time_passed * (rate / per);
    if (allowance > rate):
        allowance = rate; // throttle
    if (allowance < 1.0):
        time.sleep( (1-allowance) * (per/rate))
        forward_message();
        allowance = 0.0;
    else:
        forward_message();
        allowance -= 1.0;
</code></pre>
<p>It just waits until enough allowance is there to send the message. To avoid starting with two times the rate, allowance may also be initialized with 0.</p>
| 8
|
2011-06-20T17:39:35Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 17,031,239
|
<p>If someone is still interested, I use this simple callable class in conjunction with a timed LRU key-value store to limit the request rate per IP. It uses a deque, but it can be rewritten to use a list instead.</p>
<pre><code>from collections import deque
import time

class RateLimiter:
    def __init__(self, maxRate=5, timeUnit=1):
        self.timeUnit = timeUnit
        self.deque = deque(maxlen=maxRate)

    def __call__(self):
        if self.deque.maxlen == len(self.deque):
            cTime = time.time()
            if cTime - self.deque[0] > self.timeUnit:
                self.deque.append(cTime)
                return False
            else:
                return True
        self.deque.append(time.time())
        return False

r = RateLimiter()
for i in range(0, 100):
    time.sleep(0.1)
    print(i, "block" if r() else "pass")
</code></pre>
| 1
|
2013-06-10T19:23:38Z
|
[
"python",
"algorithm",
"message-queue"
] |
What's a good rate limiting algorithm?
| 667,508
|
<p>I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger comes after the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8-second period has elapsed.</p>
| 93
|
2009-03-20T19:02:27Z
| 40,034,562
|
<p>I needed a variation in Scala. Here it is:</p>
<pre><code>case class Limiter[-A, +B](callsPerSecond: (Double, Double), f: A => B) extends (A => B) {
  import Thread.sleep
  private def now = System.currentTimeMillis / 1000.0
  private val (calls, sec) = callsPerSecond
  private var allowance = 1.0
  private var last = now

  def apply(a: A): B = {
    synchronized {
      val t = now
      val delta_t = t - last
      last = t
      allowance += delta_t * (calls / sec)
      if (allowance > calls)
        allowance = calls
      if (allowance < 1d) {
        sleep(((1 - allowance) * (sec / calls) * 1000d).toLong)
      }
      allowance -= 1
    }
    f(a)
  }
}
</code></pre>
<p>Here is how it can be used:</p>
<pre><code>val f = Limiter((5d, 8d), {
  _: Unit =>
    println(System.currentTimeMillis)
})
while (true) { f(()) }
</code></pre>
| 0
|
2016-10-14T03:39:42Z
|
[
"python",
"algorithm",
"message-queue"
] |
Is there a way to manually register a user with a py-transport server-side?
| 667,510
|
<p>I'm trying to write some scripts to migrate my users to ejabberd, but
the only way that's been suggested for me to register a user with a
transport is to have them use their client and discover the service.
Certainly there is a way, right? </p>
| 1
|
2009-03-20T19:02:48Z
| 1,703,042
|
<ol>
<li>Go through once for each transport and register yourself. Capture the XMPP packets.</li>
<li>Dump the transport registration data from your current system into a csv file, xml file, or something else whose structure you know.</li>
<li>Write a script using jabberpy, xmpppy, pyxmpp, or whatever, and emulate each of your users registering with the transports.</li>
</ol>
<p>One issue is that you may have to be connected to the Internet for the transports to come online. Then you're going live with someone else's account. If you can't get their current password data for your jabber server, set it all to a default and then migrate it back after your transport registration.</p>
| 0
|
2009-11-09T19:02:43Z
|
[
"python",
"xmpp",
"ejabberd"
] |
Full command line as it was typed
| 667,540
|
<p>I want to get the full command line as it was typed. </p>
<p>This: </p>
<p><code>" ".join(sys.argv[:])</code> </p>
<p>doesn't work here (deletes double quotes). Also I prefer not to rejoin something that was parsed and split.</p>
<p>Any ideas? </p>
<p>Thank you in advance.</p>
| 13
|
2009-03-20T19:11:34Z
| 667,554
|
<p>In a unix environment, this is not generally possible... the best you can hope for is the command line as passed to your process.</p>
<p>Because the shell (essentially <em>any</em> shell) may munge the typed command line in several ways before handing it to the OS for execution.</p>
| 11
|
2009-03-20T19:15:42Z
|
[
"python",
"command-line"
] |
Full command line as it was typed
| 667,540
|
<p>I want to get the full command line as it was typed. </p>
<p>This: </p>
<p><code>" ".join(sys.argv[:])</code> </p>
<p>doesn't work here (deletes double quotes). Also I prefer not to rejoin something that was parsed and split.</p>
<p>Any ideas? </p>
<p>Thank you in advance.</p>
| 13
|
2009-03-20T19:11:34Z
| 667,556
|
<p>You're too late. By the time that the typed command gets to Python your shell has already worked its magic. For example, quotes get consumed (as you've noticed), variables get interpolated, etc.</p>
| 22
|
2009-03-20T19:16:07Z
|
[
"python",
"command-line"
] |
Full command line as it was typed
| 667,540
|
<p>I want to get the full command line as it was typed. </p>
<p>This: </p>
<p><code>" ".join(sys.argv[:])</code> </p>
<p>doesn't work here (deletes double quotes). Also I prefer not to rejoin something that was parsed and split.</p>
<p>Any ideas? </p>
<p>Thank you in advance.</p>
| 13
|
2009-03-20T19:11:34Z
| 667,754
|
<h2>*nix</h2>
<p>Look at <a href="http://asm.sourceforge.net/articles/startup.html#st" rel="nofollow">the initial stack layout (Linux on i386) that provides access to command line and environment of a program</a>: the process sees only separate arguments.</p>
<p>You can't get the command-line <em>as it was typed</em> in the general case. On Unix, the shell parses the command-line into separate arguments, and eventually the <a href="http://pubs.opengroup.org/onlinepubs/9699919799/functions/exec.html" rel="nofollow"><code>execv(path, argv)</code> function that invokes the corresponding syscall</a> is called. <code>sys.argv</code> is derived from the <code>argv</code> parameter passed to the <code>execve()</code> function. You could get something equivalent using <code>" ".join(map(pipes.quote, argv))</code>, though you shouldn't need to; e.g., if you want to restart the script with slightly different command-line parameters, then <code>sys.argv</code> is enough, see <a href="http://stackoverflow.com/a/7527983/4279">Is it possible to set the python -O (optimize) flag within a script?</a></p>
<p>There are some creative (non-practical) solutions:</p>
<ul>
<li>attach the shell using gdb and interrogate it (most shells are capable of repeating the same command twice); you should be able to get almost the same command as it was typed. Or read its history file directly if it is updated before your process exits</li>
<li>use screen, script utilities to get the terminal session</li>
<li>use a keylogger, to get what was typed.</li>
</ul>
<h2>Windows</h2>
<p>On Windows the native <code>CreateProcess()</code> interface is a string, but python.exe still receives arguments as a list. <code>subprocess.list2cmdline(sys.argv)</code> might help to reverse the process. <code>list2cmdline</code> is designed for applications using <a href="http://www.daviddeley.com/autohotkey/parameters/parameters.htm#WINCRULESCHANGE" rel="nofollow">the same rules as the MS C runtime</a>; <code>python.exe</code> is one of them. <code>list2cmdline</code> doesn't return the command-line <em>as it was typed</em>, but it returns a functional equivalent in this case.</p>
<p>On Python 2, you might need <a href="http://stackoverflow.com/a/846931/4279"><code>GetCommandLineW()</code></a>, to get Unicode characters from the command line that can't be represented in Windows ANSI codepage (such as cp1252).</p>
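<p>As a quick sketch, the two reconstructions mentioned above side by side (both yield a functional equivalent, not the literal typed text):</p>
<pre><code>import sys
import pipes
import subprocess

# Unix: re-quote the arguments for a POSIX shell.
print " ".join(map(pipes.quote, sys.argv))

# Windows: the same idea using the MS C runtime quoting rules.
print subprocess.list2cmdline(sys.argv)
</code></pre>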
| 4
|
2009-03-20T20:04:36Z
|
[
"python",
"command-line"
] |
Full command line as it was typed
| 667,540
|
<p>I want to get the full command line as it was typed. </p>
<p>This: </p>
<p><code>" ".join(sys.argv[:])</code> </p>
<p>doesn't work here (deletes double quotes). Also I prefer not to rejoin something that was parsed and split.</p>
<p>Any ideas? </p>
<p>Thank you in advance.</p>
| 13
|
2009-03-20T19:11:34Z
| 667,889
|
<p>As mentioned, this probably cannot be done, at least not reliably. In a few cases, you might be able to find a history file for the shell (e.g. "bash", but not "tcsh") and get the user's typing from that. I don't know how much, if any, control you have over the user's environment.</p>
| 5
|
2009-03-20T20:40:46Z
|
[
"python",
"command-line"
] |
Full command line as it was typed
| 667,540
|
<p>I want to get the full command line as it was typed. </p>
<p>This: </p>
<p><code>" ".join(sys.argv[:])</code> </p>
<p>doesn't work here (deletes double quotes). Also I prefer not to rejoin something that was parsed and split.</p>
<p>Any ideas? </p>
<p>Thank you in advance.</p>
| 13
|
2009-03-20T19:11:34Z
| 668,016
|
<p>If you're on linux, I'd suggest monkeying with the ~/.bash_history or the shell history command, although I believe the command must finish executing before it's added to the shell history.</p>
<p>I started playing with:</p>
<pre><code>import popen2
x,y = popen2.popen4("tail ~/.bash_history")
print x.readlines()
</code></pre>
<p>but I'm getting weird behavior where the shell doesn't seem to be completely flushing to the .bash_history file.</p>
| 1
|
2009-03-20T21:10:35Z
|
[
"python",
"command-line"
] |
Full command line as it was typed
| 667,540
|
<p>I want to get the full command line as it was typed. </p>
<p>This: </p>
<p><code>" ".join(sys.argv[:])</code> </p>
<p>doesn't work here (deletes double quotes). Also I prefer not to rejoin something that was parsed and split.</p>
<p>Any ideas? </p>
<p>Thank you in advance.</p>
| 13
|
2009-03-20T19:11:34Z
| 668,518
|
<p>On Linux there is <code>/proc/<pid>/cmdline</code>, which is in the format of <code>argv[]</code> (i.e. there is a 0x00 between all the strings, and you can't really know how many strings there are since you don't get argc; though you will know it when the file runs out of data ;).</p>
<p>You can be sure that that command line is already munged too, since all escaping/variable filling is done and the parameters are nicely packaged (no extra spaces between parameters etc.).</p>
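<p>A quick sketch of reading it back (using <code>self</code> for the current process; any pid works the same way):</p>
<pre><code># Linux only: read this process's own argv back out of /proc.
f = open('/proc/self/cmdline', 'rb')
args = f.read().split('\x00')
f.close()
if args and args[-1] == '':
    args.pop()   # drop the empty string left by the trailing 0x00
print args
</code></pre>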
| 0
|
2009-03-21T01:09:36Z
|
[
"python",
"command-line"
] |
How to tell if a connection is dead in python
| 667,640
|
<p>I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?</p>
| 27
|
2009-03-20T19:31:50Z
| 667,666
|
<p>If I'm not mistaken this is usually handled via a <a href="http://docs.python.org/library/socket.html" rel="nofollow">timeout</a>.</p>
| 3
|
2009-03-20T19:39:31Z
|
[
"python",
"sockets"
] |
How to tell if a connection is dead in python
| 667,640
|
<p>I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?</p>
| 27
|
2009-03-20T19:31:50Z
| 667,702
|
<p>From the link Jweede posted:</p>
<blockquote>
<p>exception socket.timeout:</p>
<pre><code>This exception is raised when a timeout occurs on a socket
which has had timeouts enabled via a prior call to settimeout().
The accompanying value is a string whose value is currently
always 'timed out'.
</code></pre>
</blockquote>
<p>Here are the demo server and client programs for the socket module from the <a href="http://docs.python.org/library/socket.html#example" rel="nofollow">python docs</a></p>
<pre><code># Echo server program
import socket

HOST = ''        # Symbolic name meaning all available interfaces
PORT = 50007     # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()
</code></pre>
<p>And the client:</p>
<pre><code># Echo client program
import socket
HOST = 'daring.cwi.nl' # The remote host
PORT = 50007 # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
</code></pre>
<p>On the docs example page I pulled these from, there are more complex examples that employ this idea, but here is the simple answer:</p>
<p>Assuming you're writing the client program, just put all the code that uses the socket while it is at risk of being dropped inside a try block...</p>
<pre><code>try:
    s.connect((HOST, PORT))
    s.send("Hello, World!")
    ...
except socket.timeout:
    # whatever you need to do when the connection is dropped
    pass
</code></pre>
| 8
|
2009-03-20T19:50:20Z
|
[
"python",
"sockets"
] |
How to tell if a connection is dead in python
| 667,640
|
<p>I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?</p>
| 27
|
2009-03-20T19:31:50Z
| 667,710
|
<p>It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection, either through close() or the process terminating, you'll find out by reading end-of-file or getting a read error, usually with errno set to whatever 'connection reset by peer' is on your operating system. For Python, you'll read a zero-length string, or a socket.error will be thrown when you try to read or write from the socket.</p>
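<p>A minimal sketch of both cases (in real code you would of course keep processing the non-empty data):</p>
<pre><code>import socket

def read_or_detect_close(conn):
    # Returns the received data, or None if the peer is gone.
    try:
        data = conn.recv(1024)
    except socket.error:
        return None    # e.g. connection reset by peer
    if data == '':
        return None    # zero-length read: orderly close (EOF)
    return data
</code></pre>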
| 17
|
2009-03-20T19:52:41Z
|
[
"python",
"sockets"
] |
How to tell if a connection is dead in python
| 667,640
|
<p>I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?</p>
| 27
|
2009-03-20T19:31:50Z
| 15,175,067
|
<p>Short answer:</p>
<blockquote>
<p>use a non-blocking recv(), or a blocking recv() / select() with a very
short timeout.</p>
</blockquote>
<p>Long answer:</p>
<p>The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors.</p>
<p>TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close.</p>
<p>Of these, the timeout cannot really be detected: TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after.</p>
<p>Also remember that using shutdown() either you or your peer (the other end of the connection) may close only the incoming byte stream, and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running.</p>
<p>So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.</p>
<p>Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv().</p>
<p>Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered?" To find that out, you just have to read all data that is currently buffered.</p>
<p>I can see how "reading all buffered data", to get to the end of it, might be a problem for some people, that still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking".</p>
<p>In my opinion any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.</p>
<p>What you can do is:</p>
<ul>
<li>set the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full</li>
<li>stick to blocking mode but set a very short socket timeout. This will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do</li>
<li>use select() call or asyncore module with a very short timeout. Error reporting is still system-specific.</li>
</ul>
<p>For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel.</p>
<p>I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:</p>
<ul>
<li>receive a proper response on the same socket for the exact message that you sent. Basically you are using the higher level protocol to provide confirmation.</li>
<li>perform a successful shutdown() and close() on the socket</li>
</ul>
<p>The python socket howto says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send(), and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something; good luck with that :)</p>
<p>Also, I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB was not what you meant.</p>
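<p>A minimal sketch of the second option above (blocking mode with a very short timeout); the function name and return convention are my own:</p>
<pre><code>import socket

def probe(sock):
    # "Ping" the socket: drain any buffered data and report liveness.
    # Returns (alive, data); the caller processes data as usual.
    sock.settimeout(0.01)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return True, ''    # buffer empty, stream still open
    except socket.error:
        return False, ''   # reset or other hard error
    if data == '':
        return False, ''   # read stream closed, buffer drained
    return True, data
</code></pre>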
| 17
|
2013-03-02T13:35:25Z
|
[
"python",
"sockets"
] |
How do I create a D-Bus service that dynamically creates multiple objects?
| 667,760
|
<p>I'm new to D-Bus (and to Python, double whammy!) and I am trying to figure out the best way to do something that was discussed in the tutorial.</p>
<blockquote>
<p>However, a text editor application
could as easily own multiple bus names
(for example, org.kde.KWrite in
addition to generic TextEditor), have
multiple objects (maybe
/org/kde/documents/4352 where the
number changes according to the
document), and each object could
implement multiple interfaces, such as
org.freedesktop.DBus.Introspectable,
org.freedesktop.BasicTextField,
org.kde.RichTextDocument.</p>
</blockquote>
<p>For example, say I want to create a wrapper around <code>flickrapi</code> such that the service can expose a handful of Flickr API methods (say, <code>urls_lookupGroup()</code>). This is relatively straightforward if I want to assume that the service will always be specifying the same API key and that the auth information will be the same for everyone using the service.</p>
<p>Especially in the latter case, I cannot really assume this will be true.</p>
<p>Based on the documentation quoted above, I am assuming there should be something like this:</p>
<pre><code># Get the connection proxy object.
flickrConnectionService = bus.get_object("com.example.FlickrService",
                                         "/Connection")

# Ask the connection object to connect; the return value would be
# maybe something like "/connection/5512" ...
flickrObjectPath = flickrConnectionService.connect("MY_APP_API_KEY",
                                                   "MY_APP_API_SECRET",
                                                   flickrUsername)

# Get the service proxy object.
flickrService = bus.get_object("com.example.FlickrService",
                               flickrObjectPath)

# Ask the flickr service object to get group information.
groupInfo = flickrService.getFlickrGroupInfo('s3a-belltown')
</code></pre>
<p>So, my questions:</p>
<p>1) Is this how this should be handled?</p>
<p>2) If so, how will the service know when the client is done? Is there a way to detect if the current client has broken the connection, so that the service can clean up its dynamically created objects? Also, how would I create the individual objects in the first place?</p>
<p>3) If this is not how this should be handled, what are some other suggestions for accomplishing something similar?</p>
<p>I've read through a number of D-Bus tutorials and various documentation and about the closest I've come to seeing what I am looking for is what I quoted above. However, none of the examples look to actually do anything like this so I am not sure how to proceed.</p>
| 2
|
2009-03-20T20:05:24Z
| 671,553
|
<p>1) Mostly yes; I would only change one thing in the connect method, as I explain in 2).</p>
<p>2) D-Bus connections are not persistent; everything is done with request/response messages, and no connection state is stored unless you implement this in separate objects, as you do with your flickr object. The D-Bus objects in the Python bindings are mostly proxies that abstract the remote objects as if you were "connected" to them, but what they really do is build messages based on the information you give at D-Bus object instantiation (object path, interface and so on). So the service cannot know when the client is done unless the client announces it with another explicit call.</p>
<p>To handle unexpected client finalization, you can create a D-Bus object in the client and send its object path to the service when connecting; change your <code>connect</code> method to also accept an <code>ObjectPath</code> parameter. The service can listen to the <strong><code>NameOwnerChanged</code></strong> signal to know if a client has died.</p>
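<p>A minimal sketch of listening for that signal with the Python bindings (<code>cleanup_objects_for</code> is a hypothetical hook for releasing the per-client objects):</p>
<pre><code>import dbus
from dbus.mainloop.glib import DBusGMainLoop

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

def name_owner_changed(name, old_owner, new_owner):
    # An empty new_owner means the name vanished from the bus,
    # i.e. the client that owned it exited or disconnected.
    if new_owner == '':
        cleanup_objects_for(name)   # hypothetical cleanup hook

bus.add_signal_receiver(name_owner_changed,
                        signal_name='NameOwnerChanged',
                        dbus_interface='org.freedesktop.DBus')
</code></pre>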
<p>To create the individual objects, you only have to instantiate an object in the same service, as you do with your "/Connection", but you have to be sure that you are using a name that doesn't already exist. You could have a "/Connection/Manager", and various "/Connection/1", "/Connection/2"...</p>
<p>3) If you need to store the connection state, you have to do something like that.</p>
| 2
|
2009-03-22T20:17:20Z
|
[
"python",
"dbus"
] |
Can someone explain Gtk2 packing?
| 668,226
|
<p>I need to use Gtk2 for a project. I will be using python/ruby for it. The problem is that packing seems kind of mystical to me. I tried using a VBox so that I could have the following widgets in my window ( in the following order ):</p>
<ul>
<li>menubar</li>
<li>toolbar</li>
<li>text view/editor control</li>
</ul>
<p>I've managed to "guess" my way with <code>pack_start</code> and get the layout I need, but I'd like to be able to understand it. The documentation at <a href="http://ruby-gnome2.sourceforge.jp/" rel="nofollow">Ruby Gtk2</a> seems way too unintuitive (and so is the python one, since it's the same, only written for python), could you shed some light?</p>
<p>Also, <code>set_size_request</code> isn't always working when I add a component with <code>pack_start</code>. Why is that?</p>
| 2
|
2009-03-20T22:24:32Z
| 669,033
|
<p>Box packing is <em>really</em> simple, so perhaps your failure to understand it is because you imagine it is more complicated than it is.</p>
<p>Layout is either Vertical (like a pile of bricks) or horizontal (like a queue of people). Each element in that layout can expand or it can not expand.</p>
<p>Horizontal (HBox)</p>
<pre><code>[widget][widget][widget][widget]
</code></pre>
<p>Vertical (VBox)</p>
<pre><code>[widget]
[widget]
[widget]
[widget]
</code></pre>
<p>So for example, a Horizontal layout (HBox) with two buttons, which the code would be:</p>
<pre><code>import gtk
box = gtk.HBox()
b1 = gtk.Button('button1')
b2 = gtk.Button('button2')
box.pack_start(b1)
box.pack_start(b2)
</code></pre>
<p>Now since the default for packing is to have <code>expand=True</code>, both those buttons added to the box will expand and they will each occupy half the area. No matter what the size of the container is. I think of this as "stretchy".</p>
<p>Expanding widgets:</p>
<pre><code>[[ widget ][ widget ]]
</code></pre>
<p>So, if you want one of the buttons to not expand, you will pack it like this:</p>
<pre><code>box.pack_start(b1, expand=False)
</code></pre>
<p>Non-expanding widget:</p>
<pre><code>[[widget][ widget ]]
</code></pre>
<p>Then the button will only occupy the space it needs to draw itself: text + borders + shadows + images (if any) etc. And the other button will expand to fill the remaining area. Normally, buttons don't need to be expanded, so a more real-life situation is a TextArea which you would want to expand to fill the window.</p>
<p>The other parameter that can be passed to <code>pack_start</code> is the fill parameter, and normally this can be ignored. It is enough here to say that if <code>expand=False</code> then the <code>fill</code> parameter is entirely ignored (because it doesn't make sense in that situation).</p>
<p>The other thing you mentioned is <code>set_size_request</code>. I would generally say that this is not a good idea. I say generally, because there are situations where you will need to use it. But for someone beginning out with packing a GTK+ user interface, I would strongly recommend not using it. In general, let the boxes and other containers handle your layout for you. The <code>set_size_request</code> does not do exactly what you would expect it to do. It does not change the size of a widget, but merely how much space it will request. It may use more, and may even stretch to fill larger spaces. It will rarely become smaller than the request, but again, because it is just a "request", there is no guarantee that the request will be fulfilled.</p>
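<p>Applying all of this to the layout asked about (menubar, toolbar, text view), a minimal PyGTK sketch might look like this (the Ruby bindings pack the same way):</p>
<pre><code>import gtk

window = gtk.Window()
window.connect('destroy', gtk.main_quit)
window.set_default_size(400, 300)

box = gtk.VBox()
window.add(box)

# The menubar and toolbar keep their natural height...
box.pack_start(gtk.MenuBar(), expand=False)
box.pack_start(gtk.Toolbar(), expand=False)

# ...and the text view, in a scrolled window, takes the remaining space.
scrolled = gtk.ScrolledWindow()
scrolled.add(gtk.TextView())
box.pack_start(scrolled, expand=True)

window.show_all()
gtk.main()
</code></pre>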
| 14
|
2009-03-21T09:29:09Z
|
[
"python",
"ruby",
"pygtk",
"packing",
"gtk2"
] |
Alternatives to ffmpeg as a cli tools for video still extraction?
| 668,240
|
<p>I need to extract stills from video files. Currently I am using ffmpeg, but I am looking for a simpler tool and for a tool that my colleagues can just install. No need to compile it from an svn checkout.</p>
<p>Any hints? A python interface would be nice.</p>
| 1
|
2009-03-20T22:31:43Z
| 668,582
|
<p>Your requirements "cli tool" and "python interface" aren't entirely compatible. Which do you want?</p>
<p>The following media libraries all have Python bindings: <a href="http://www.gstreamer.org/" rel="nofollow">GStreamer</a>, <a href="http://www.videolan.org/" rel="nofollow">libVLC</a> (<a href="http://code.google.com/p/pyvlc/" rel="nofollow">pyvlc</a> provides w32 binaries), <a href="http://www.xine-project.org/" rel="nofollow">Xine</a> (via <a href="http://pyxine.sourceforge.net/" rel="nofollow">Pyxine</a>). I'm pretty sure none of them will be easier than using the <a href="http://www.ffmpeg.org/" rel="nofollow">ffmpeg</a> or <a href="http://www.mplayerhq.hu/" rel="nofollow">mplayer</a> command-line tools, though.</p>
<p>Regarding ffmpeg: why would more than one person need to compile from a svn checkout (or tarball, as they've recently had their <a href="http://www.ffmpeg.org/releases/ffmpeg-0.5.tar.bz2" rel="nofollow">0.5</a> release)? Grab or make a binary package and have everybody use it.</p>
| 1
|
2009-03-21T01:58:31Z
|
[
"python"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 668,486
|
<p>I'm not sure I'm understanding your question, so I'll give multiple partial answers to start with.</p>
<ul>
<li>If your concern is that web.py is having to download data from somewhere and analyze the results before responding, and you fear the request may time out before the results are ready, you could use ajax to split the work up. Return immediately with a container page (to hold the results) and a bit of javascript to poll the server for the results until the client has them all. Thus the client never waits for the server, though the user still has to wait for the results.</li>
<li>If your concern is tying up the server waiting for the client to get the results, I doubt that will actually be a problem. Your networking layers should not require you to wait-on-write.</li>
<li>If you are worrying about the server waiting while the client downloads static content from elsewhere, either ajax or clever use of redirects should solve your problem</li>
</ul>
| 0
|
2009-03-21T00:39:24Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 668,495
|
<p>I don't know if this will exactly work, but it looks like it might: <a href="http://code.google.com/p/evserver/" rel="nofollow">EvServer: Python Asynchronous WSGI Server</a> has a <a href="http://code.google.com/p/evserver/source/browse/trunk/evserver/examples/framework%5Fwebpy.py" rel="nofollow">web.py interface</a> and can do comet style push to the browser client.</p>
<p>If that isn't right, maybe you can use the <a href="http://opensource.hyves.org/concurrence/http.html" rel="nofollow">Concurrence HTTP client</a> for async download of the pages and figure out how to serve them to browser via ajax or comet.</p>
| 0
|
2009-03-21T00:53:59Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 668,665
|
<p>Along the lines of MarkusQ's answer, <a href="http://mochikit.com/" rel="nofollow">MochiKit</a> is a nice JavaScript library, with robust async methods inspired by Twisted.</p>
| 0
|
2009-03-21T03:08:26Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 668,723
|
<p>Actually, you can integrate twisted with web.py. I'm not really sure how, as I've only done it with django (used twisted with it).</p>
| 0
|
2009-03-21T04:09:41Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 668,765
|
<p>One option would be to post the work onto a queue of some sort (you could use something Enterprisey like <a href="http://activemq.apache.org/" rel="nofollow">ActiveMQ</a> with <a href="http://code.google.com/p/pyactivemq/" rel="nofollow">pyactivemq</a> or <a href="http://stomp.codehaus.org/" rel="nofollow">STOMP</a> as a connector, or you could use something lightweight like <a href="http://github.com/robey/kestrel/tree/master" rel="nofollow">Kestrel</a>, which is written in Scala and speaks the same protocol as memcache, so you can just use the python memcache client to talk to it).</p>
<p>Once you have the queueing mechanism set up, you can create as many or as few worker tasks subscribed to the queue doing the actual downloading work as you want. You can even have them live on other machines so they don't interfere with the speed of serving your website at all. When the workers are done, they post the results back to the database or another queue where the webserver can pick them up.</p>
<p>If you don't want to have to manage external worker processes then you could make the workers threads in the same python process that is running the webserver, but then obviously it will have greater potential to impact your web page serving performance.</p>
| 4
|
2009-03-21T05:02:48Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 668,772
|
<p>I'd just build a service in twisted that did that concurrent fetch and analysis and access that from web.py as a simple http request.</p>
| 2
|
2009-03-21T05:11:00Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 2,697,141
|
<p>Use the async http client which uses asynchat and asyncore.
<a href="http://sourceforge.net/projects/asynchttp/files/asynchttp-production/asynchttp.py-1.0/asynchttp.py/download" rel="nofollow">http://sourceforge.net/projects/asynchttp/files/asynchttp-production/asynchttp.py-1.0/asynchttp.py/download</a></p>
| 2
|
2010-04-23T08:24:40Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 2,697,314
|
<p>You might be able to use <a href="http://docs.python.org/library/urllib.html" rel="nofollow"><code>urllib</code></a> to download the files and the <a href="http://docs.python.org/library/queue.html" rel="nofollow"><code>Queue</code></a> module to manage a number of worker threads. e.g:</p>
<pre><code>import urllib
from threading import Thread
from Queue import Queue

NUM_WORKERS = 20

class Dnld:
    def __init__(self):
        self.Q = Queue()
        for i in xrange(NUM_WORKERS):
            t = Thread(target=self.worker)
            t.setDaemon(True)
            t.start()

    def worker(self):
        while 1:
            url, Q = self.Q.get()
            try:
                f = urllib.urlopen(url)
                Q.put(('ok', url, f.read()))
                f.close()
            except Exception, e:
                Q.put(('error', url, e))
                try: f.close()  # clean up
                except: pass

    def download_urls(self, L):
        Q = Queue()  # Create a second queue so the worker
                     # threads can send the data back again
        for url in L:
            # Add the URLs in `L` to be downloaded asynchronously
            self.Q.put((url, Q))

        rtn = []
        for i in xrange(len(L)):
            # Get the data as it arrives, raising
            # any exceptions if they occur
            status, url, data = Q.get()
            if status == 'ok':
                rtn.append((url, data))
            else:
                raise data
        return rtn

inst = Dnld()
for url, data in inst.download_urls(['http://www.google.com']*2):
    print url, data
</code></pre>
| 3
|
2010-04-23T08:56:24Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 4,483,862
|
<p>Here is an interesting piece of code. I didn't use it myself, but it looks nice ;)</p>
<p><a href="https://github.com/facebook/tornado/blob/master/tornado/httpclient.py">https://github.com/facebook/tornado/blob/master/tornado/httpclient.py</a></p>
<p>Low level AsyncHTTPClient:</p>
<p>"An non-blocking HTTP client backed with pycurl. Example usage:"</p>
<pre><code>from tornado import httpclient, ioloop

def handle_request(response):
    if response.error:
        print "Error:", response.error
    else:
        print response.body
    ioloop.IOLoop.instance().stop()

http_client = httpclient.AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_request)
ioloop.IOLoop.instance().start()
</code></pre>
<p>"fetch() can take a string URL or an HTTPRequest instance, which offers more options, like executing POST/PUT/DELETE requests.</p>
<p>The keyword argument max_clients to the AsyncHTTPClient constructor determines the maximum number of simultaneous fetch() operations that can execute in parallel on each IOLoop."</p>
<p>There is also new implementation in progress:
<a href="https://github.com/facebook/tornado/blob/master/tornado/simple_httpclient.py">https://github.com/facebook/tornado/blob/master/tornado/simple_httpclient.py</a>
"Non-blocking HTTP client with no external dependencies. ... This class is still in development and not yet recommended for production use."</p>
| 6
|
2010-12-19T16:31:56Z
|
[
"python",
"asynchronous"
] |
Python: simple async download of url content?
| 668,257
|
<p>I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. </p>
<p>Is there a simple way to set up an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.</p>
<p>The flow would look like:</p>
<p>User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results</p>
<p>I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py.</p>
| 9
|
2009-03-20T22:40:42Z
| 9,017,195
|
<p>Nowadays there are excellent Python libs you might want to use - <a href="http://urllib3.readthedocs.org/" rel="nofollow">urllib3</a> (uses thread pools) and <a href="http://docs.python-requests.org/" rel="nofollow">requests</a> (uses thread pools through urllib3 or non blocking IO through <a href="http://www.gevent.org/" rel="nofollow">gevent</a>)</p>
| 2
|
2012-01-26T11:05:13Z
|
[
"python",
"asynchronous"
] |
Programmatic login and use of non-api-supported Google services
| 668,350
|
<p>Google provides APIs for a number of their services and bindings for several languages. However, not everything is supported. So this question comes from my incomplete understanding of things like wget, curl, and the various web programming libraries.</p>
<ol>
<li><p>How can I authenticate programmatically to Google?</p></li>
<li><p>Is it possible to leverage the existing APIs to gain access to the unsupported parts of Google?</p></li>
</ol>
<p>Once I have authenticated, how do I use that to access my restricted pages? It seems like the API could be used to do the login and get a token, but I don't understand <i>what I'm supposed to do next</i> to fetch a restricted webpage.</p>
<p>Specifically, I am playing around with Android and want to write a script to grab my app usage stats from the Android Market once or twice a day so I can make pretty charts. My most likely target is python, but code in any language illustrating non-API use of Google's services would be helpful. Thanks folks.</p>
| 4
|
2009-03-20T23:25:41Z
| 668,381
|
<p>You can use something like mechanize, or even urllib, to achieve this sort of thing. As a tutorial, you can check out my article <a href="http://ssscripting.wordpress.com/2009/03/17/how-to-submit-a-form-programmatically/" rel="nofollow">here</a> about programmatically submitting a form.
Once you authenticate, you can use the cookie to access restricted pages.</p>
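<p>A rough sketch of the cookie-handling part using only the standard library (the login URL and form field names are hypothetical; take them from the actual login form):</p>
<pre><code>import urllib
import urllib2
import cookielib

# a cookie-aware opener: cookies set during login are replayed automatically
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

login_data = urllib.urlencode({'Email': 'you@example.com', 'Passwd': 'secret'})
opener.open('https://example.com/login', login_data)

# later requests through the same opener carry the session cookies
page = opener.open('https://example.com/restricted').read()
</code></pre>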
| 1
|
2009-03-20T23:46:27Z
|
[
"python",
"curl",
"gdata-api"
] |
Programmatic login and use of non-api-supported Google services
| 668,350
|
<p>Google provides APIs for a number of their services and bindings for several languages. However, not everything is supported. So this question comes from my incomplete understanding of things like wget, curl, and the various web programming libraries.</p>
<ol>
<li><p>How can I authenticate programmatically to Google?</p></li>
<li><p>Is it possible to leverage the existing APIs to gain access to the unsupported parts of Google?</p></li>
</ol>
<p>Once I have authenticated, how do I use that to access my restricted pages? It seems like the API could be used to do the login and get a token, but I don't understand <i>what I'm supposed to do next</i> to fetch a restricted webpage.</p>
<p>Specifically, I am playing around with Android and want to write a script to grab my app usage stats from the Android Market once or twice a day so I can make pretty charts. My most likely target is python, but code in any language illustrating non-API use of Google's services would be helpful. Thanks folks.</p>
| 4
|
2009-03-20T23:25:41Z
| 668,872
|
<p>You can get the auth tokens by authenticating a particular service against https://www.google.com/accounts/ClientLogin</p>
<p>E.g.</p>
<pre><code>curl -d "Email=youremail" -d "Passwd=yourpassword" -d "service=blogger" "https://www.google.com/accounts/ClientLogin"
</code></pre>
<p>Then you can just pass the auth tokens and cookies along when accessing the service. You can use Firebug or the Tamper Data Firefox plugin to find out the parameter names, etc.</p>
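<p>The same request from Python, as a sketch (ClientLogin historically answered with SID/LSID/Auth lines in the body, but verify against the response you actually get):</p>
<pre><code>import urllib
import urllib2

params = urllib.urlencode({
    'Email': 'youremail',
    'Passwd': 'yourpassword',
    'service': 'blogger',
})
response = urllib2.urlopen('https://www.google.com/accounts/ClientLogin', params)

# parse the KEY=value lines; 'Auth' is the token to pass on later requests
tokens = dict(line.split('=', 1) for line in response.read().splitlines() if '=' in line)
print tokens.get('Auth')
</code></pre>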
| 1
|
2009-03-21T06:44:20Z
|
[
"python",
"curl",
"gdata-api"
] |
Programmatic login and use of non-api-supported Google services
| 668,350
|
<p>Google provides APIs for a number of their services and bindings for several languages. However, not everything is supported. So this question comes from my incomplete understanding of things like wget, curl, and the various web programming libraries.</p>
<ol>
<li><p>How can I authenticate programmatically to Google?</p></li>
<li><p>Is it possible to leverage the existing APIs to gain access to the unsupported parts of Google?</p></li>
</ol>
<p>Once I have authenticated, how do I use that to access my restricted pages? It seems like the API could be used to do the login and get a token, but I don't understand <i>what I'm supposed to do next</i> to fetch a restricted webpage.</p>
<p>Specifically, I am playing around with Android and want to write a script to grab my app usage stats from the Android Market once or twice a day so I can make pretty charts. My most likely target is python, but code in any language illustrating non-API use of Google's services would be helpful. Thanks folks.</p>
| 4
|
2009-03-20T23:25:41Z
| 10,834,881
|
<p>ClientLogin is now deprecated: <a href="https://developers.google.com/accounts/docs/AuthForInstalledApps" rel="nofollow">https://developers.google.com/accounts/docs/AuthForInstalledApps</a></p>
<p>How can we authenticate programmatically to Google with OAuth2?</p>
<p>I can't find an example of a request with user and password parameters as in ClientLogin :(</p>
<p>Is there a solution?</p>
| 0
|
2012-05-31T13:43:46Z
|
[
"python",
"curl",
"gdata-api"
] |
Unicode fonts in PyGame
| 668,359
|
<p>How can I display Chinese characters in PyGame? And what's a good free/libre font to use for this purpose?</p>
| 8
|
2009-03-20T23:33:33Z
| 668,592
|
<p>As far as I'm aware (I haven't tried it myself), PyGame should Just Work when you pass it a Unicode string containing Chinese characters, eg. u'\u4e2d\u56fd'.</p>
<p>See "East Asia" under <a href="http://www.unifont.org/fontguide/" rel="nofollow">http://www.unifont.org/fontguide/</a> for quite a few suitable open-licence fonts.</p>
| 0
|
2009-03-21T02:08:56Z
|
[
"python",
"unicode",
"pygame",
"cjk"
] |
Unicode fonts in PyGame
| 668,359
|
<p>How can I display Chinese characters in PyGame? And what's a good free/libre font to use for this purpose?</p>
| 8
|
2009-03-20T23:33:33Z
| 668,596
|
<p>pygame uses SDL_ttf for rendering, so you should be in fine shape as far as rendering goes.</p>
<p><a href="http://www.unifont.org/fontguide/" rel="nofollow">unifont.org</a> appears to have some extensive resources on Open-Source fonts for a range of scripts.</p>
<p>I grabbed the Cyberbit pan-Unicode font and extracted the included ttf. The following 'worked on my machine', which is Windows Vista Home Basic with Python 2.6:</p>
<pre><code># -*- coding: utf-8 -*-
import pygame, sys

# the original literal was garbled in transit; any CJK string works here,
# e.g. these escaped characters (Chinese for "China")
unistr = u"\u4e2d\u56fd"
pygame.font.init()
srf = pygame.display.set_mode((640, 480))
f = pygame.font.Font("Cyberbit.ttf", 20)
srf.blit(f.render(unistr, True, (0, 0, 0)), (0, 0))
pygame.display.flip()
while True:
    srf.blit(f.render(unistr, True, (255, 255, 255)), (0, 0))
    pygame.display.flip()  # push the frame to the screen each iteration
    for e in pygame.event.get():
        if e.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
</code></pre>
<p>As long as you're just displaying unicode text, you should be in fantastic shape. If, however, you want to actually read unicode input from the user, the situation is much more bleak. Pygame has no input methods of any sort. </p>
| 7
|
2009-03-21T02:10:27Z
|
[
"python",
"unicode",
"pygame",
"cjk"
] |
Why is Beautiful Soup truncating this page?
| 668,591
|
<p>I am trying to pull a list of resource/database names and IDs from a listing of resources that my school library has subscriptions to. There are pages listing the different resources, and I can use urllib2 to get the pages, but when I pass the page to BeautifulSoup, it truncates its tree just before the end of the entry for the first resource in the list. The problem seems to be in the image link used to add the resource to a search set. This is where things get cut off; here's the HTML:</p>
<pre><code><a href="http://www2.lib.myschool.edu:7017/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45899?func=find-db-add-res&amp;resource=XYZ00618&amp;z122_key=000000000&amp;function-in=www_v_find_db_0" onclick='javascript:addToz122("XYZ00618","000000000","myImageXYZ00618","http://discover.lib.myschool.edu:8331/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45900");return false;'>
<img name="myImageXYZ00618" id="myImageXYZ00618" src="http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png" title="Add to My Sets" alt="Add to My Sets" border="0">
</a>
</code></pre>
<p>And here is my python code:</p>
<pre><code>import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://discover.lib.myschool.edu:8331/V?func=find-db-1-title&mode=titles&scan_start=latp&scan_utf=D&azlist=Y&restricted=all")
print BeautifulSoup(page).prettify
</code></pre>
<p>In BeautifulSoup's version, the opening <code><a href...></code> shows up, but the <code><img></code> doesn't, and the <code><a></code> is immediately closed, as are the rest of the open tags, all the way to <code></html></code>.</p>
<p>The only distinguishing trait I see for these "add to sets" images is that they are the only ones to have name and id attributes. I can't see why that would cause BeautifulSoup to stop parsing immediately, though.</p>
<p>Note: I am almost entirely new to Python, but seem to be understanding it all right.</p>
<p>Thank you for your help!</p>
| 2
|
2009-03-21T02:08:30Z
| 668,717
|
<p>If I remember correctly, BeautifulSoup uses "name" in its tree as the name of the tag. In this case "a" would be the "name" of the anchor tag.</p>
<p>That doesn't seem like it should break it though. What version of Python and BS are you using?</p>
| 0
|
2009-03-21T03:59:26Z
|
[
"python",
"screen-scraping",
"beautifulsoup"
] |
Why is Beautiful Soup truncating this page?
| 668,591
|
<p>I am trying to pull a list of resource/database names and IDs from a listing of resources that my school library has subscriptions to. There are pages listing the different resources, and I can use urllib2 to get the pages, but when I pass the page to BeautifulSoup, it truncates its tree just before the end of the entry for the first resource in the list. The problem seems to be in the image link used to add the resource to a search set. This is where things get cut off; here's the HTML:</p>
<pre><code><a href="http://www2.lib.myschool.edu:7017/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45899?func=find-db-add-res&amp;resource=XYZ00618&amp;z122_key=000000000&amp;function-in=www_v_find_db_0" onclick='javascript:addToz122("XYZ00618","000000000","myImageXYZ00618","http://discover.lib.myschool.edu:8331/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45900");return false;'>
<img name="myImageXYZ00618" id="myImageXYZ00618" src="http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png" title="Add to My Sets" alt="Add to My Sets" border="0">
</a>
</code></pre>
<p>And here is my python code:</p>
<pre><code>import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://discover.lib.myschool.edu:8331/V?func=find-db-1-title&mode=titles&scan_start=latp&scan_utf=D&azlist=Y&restricted=all")
print BeautifulSoup(page).prettify
</code></pre>
<p>In BeautifulSoup's version, the opening <code><a href...></code> shows up, but the <code><img></code> doesn't, and the <code><a></code> is immediately closed, as are the rest of the open tags, all the way to <code></html></code>.</p>
<p>The only distinguishing trait I see for these "add to sets" images is that they are the only ones to have name and id attributes. I can't see why that would cause BeautifulSoup to stop parsing immediately, though.</p>
<p>Note: I am almost entirely new to Python, but seem to be understanding it all right.</p>
<p>Thank you for your help!</p>
| 2
|
2009-03-21T02:08:30Z
| 668,867
|
<p>I was using Firefox's "view selection source", which apparently cleans up the HTML for me. When I viewed the original source, this is what I saw</p>
<pre><code><img name="myImageXYZ00618" id="myImageXYZ00618" src='http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png' alt='Add to My Sets' title='Add to My Sets' border="0"title="Add to clipboard PAIS International (CSA)" alt="Add to clipboard PAIS International (CSA)">
</code></pre>
<p>By putting a space after the <code>border="0"</code> attribute, I can get BS to parse the page.</p>
| 2
|
2009-03-21T06:34:15Z
|
[
"python",
"screen-scraping",
"beautifulsoup"
] |
Why is Beautiful Soup truncating this page?
| 668,591
|
<p>I am trying to pull a list of resource/database names and IDs from a listing of resources that my school library has subscriptions to. There are pages listing the different resources, and I can use urllib2 to get the pages, but when I pass the page to BeautifulSoup, it truncates its tree just before the end of the entry for the first resource in the list. The problem seems to be in the image link used to add the resource to a search set. This is where things get cut off; here's the HTML:</p>
<pre><code><a href="http://www2.lib.myschool.edu:7017/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45899?func=find-db-add-res&amp;resource=XYZ00618&amp;z122_key=000000000&amp;function-in=www_v_find_db_0" onclick='javascript:addToz122("XYZ00618","000000000","myImageXYZ00618","http://discover.lib.myschool.edu:8331/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45900");return false;'>
<img name="myImageXYZ00618" id="myImageXYZ00618" src="http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png" title="Add to My Sets" alt="Add to My Sets" border="0">
</a>
</code></pre>
<p>And here is my python code:</p>
<pre><code>import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://discover.lib.myschool.edu:8331/V?func=find-db-1-title&mode=titles&scan_start=latp&scan_utf=D&azlist=Y&restricted=all")
print BeautifulSoup(page).prettify
</code></pre>
<p>In BeautifulSoup's version, the opening <code><a href...></code> shows up, but the <code><img></code> doesn't, and the <code><a></code> is immediately closed, as are the rest of the open tags, all the way to <code></html></code>.</p>
<p>The only distinguishing trait I see for these "add to sets" images is that they are the only ones to have name and id attributes. I can't see why that would cause BeautifulSoup to stop parsing immediately, though.</p>
<p>Note: I am almost entirely new to Python, but seem to be understanding it all right.</p>
<p>Thank you for your help!</p>
| 2
|
2009-03-21T02:08:30Z
| 674,849
|
<p>I strongly recommend using html5lib + lxml instead of beautiful soup. It uses a real HTML parser (very similar to the one in Firefox) and lxml provides a very flexible way to query the resulting tree (css-selectors or xpath).</p>
<p>There are plenty of bugs and odd behaviors in BeautifulSoup which make it a poor choice for HTML markup you can't trust.</p>
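<p>For illustration, a sketch of the lxml side of that suggestion (using lxml's own lenient parser here; html5lib can be plugged in as a tree builder if you want strict HTML5 parsing rules; the URL is a placeholder):</p>
<pre><code>from lxml import html

doc = html.parse('http://example.com/page').getroot()

# css-selector query over the parsed tree
for img in doc.cssselect('img[name]'):
    print img.get('id'), img.get('src')
</code></pre>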
| 2
|
2009-03-23T19:19:48Z
|
[
"python",
"screen-scraping",
"beautifulsoup"
] |
Why is Beautiful Soup truncating this page?
| 668,591
|
<p>I am trying to pull a list of resource/database names and IDs from a listing of resources that my school library has subscriptions to. There are pages listing the different resources, and I can use urllib2 to get the pages, but when I pass the page to BeautifulSoup, it truncates its tree just before the end of the entry for the first resource in the list. The problem seems to be in the image link used to add the resource to a search set. This is where things get cut off; here's the HTML:</p>
<pre><code><a href="http://www2.lib.myschool.edu:7017/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45899?func=find-db-add-res&amp;resource=XYZ00618&amp;z122_key=000000000&amp;function-in=www_v_find_db_0" onclick='javascript:addToz122("XYZ00618","000000000","myImageXYZ00618","http://discover.lib.myschool.edu:8331/V/ACDYFUAMVRFJRN4PV8CIL7RUPC9QXMQT8SFV2DVDSBA5GBJCTT-45900");return false;'>
<img name="myImageXYZ00618" id="myImageXYZ00618" src="http://www2.lib.myschool.edu:7017/INS01/icon_eng/v-add_favorite.png" title="Add to My Sets" alt="Add to My Sets" border="0">
</a>
</code></pre>
<p>And here is my python code:</p>
<pre><code>import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("http://discover.lib.myschool.edu:8331/V?func=find-db-1-title&mode=titles&scan_start=latp&scan_utf=D&azlist=Y&restricted=all")
print BeautifulSoup(page).prettify
</code></pre>
<p>In BeautifulSoup's version, the opening <code><a href...></code> shows up, but the <code><img></code> doesn't, and the <code><a></code> is immediately closed, as are the rest of the open tags, all the way to <code></html></code>.</p>
<p>The only distinguishing trait I see for these "add to sets" images is that they are the only ones to have name and id attributes. I can't see why that would cause BeautifulSoup to stop parsing immediately, though.</p>
<p>Note: I am almost entirely new to Python, but seem to be understanding it all right.</p>
<p>Thank you for your help!</p>
| 2
|
2009-03-21T02:08:30Z
| 19,749,734
|
<p>You can try beautiful soup with html5lib rather than the built-in parser.</p>
<pre><code>BeautifulSoup(markup, "html5lib")
</code></pre>
<p>html5lib is more lenient and often parses pages that the built-in parser truncates. See the docs at <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-the-tree" rel="nofollow">http://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-the-tree</a></p>
| 2
|
2013-11-03T04:09:12Z
|
[
"python",
"screen-scraping",
"beautifulsoup"
] |
Problem Extending Python(Linking Error )?
| 668,971
|
<p>I have installed Python 3k(C:\Python30) and Visual Studio Professional Edition 2008.</p>
<p>I'm studying <strong><a href="http://en.wikibooks.org/wiki/Python%5FProgramming/Extending%5Fwith%5FC" rel="nofollow">this</a>.</strong></p>
<p>Here is a problem:</p>
<pre><code>C:\hello>dir
Volume in drive C has no label.
Volume Serial Number is 309E-14FB
Directory of C:\hello
03/21/2009 01:15 AM <DIR> .
03/21/2009 01:15 AM <DIR> ..
03/21/2009 01:14 AM 481 hellomodule.c
1 File(s) 481 bytes
2 Dir(s) 10,640,642,048 bytes free
C:\hello>cl /LD hellomodule.c /Ic:\Python30\include c:\Python30\libs\python30.lib /link/out:hello.
dll
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.21022.08 for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.
hellomodule.c
c:\hello\hellomodule.c(26) : warning C4716: 'inithello' : must return a value
Microsoft (R) Incremental Linker Version 9.00.21022.08
Copyright (C) Microsoft Corporation. All rights reserved.
/out:hellomodule.dll
/dll
/implib:hellomodule.lib
/out:hello.dll
hellomodule.obj
c:\Python30\libs\python30.lib
Creating library hellomodule.lib and object hellomodule.exp
hellomodule.obj : error LNK2019: unresolved external symbol _Py_InitModule referenced in function
_inithello
hello.dll : fatal error LNK1120: 1 unresolved externals
C:\hello>
</code></pre>
<p>What is the problem? Please, guide me.</p>
| 1
|
2009-03-21T08:23:00Z
| 668,974
|
<p>If Python is installed in <code>c:\python30</code>, why are you searching for the libraries in <code>c:\Python24\libs\python30</code>?</p>
<p>And now that you've changed the question to fix this :-),</p>
<p>I don't think <code>Py_InitModule</code> is available any more, you have to use <code>PyModule_Create</code> (this may have changed since the early betas of Py3k which is the last time I looked).</p>
<p>Based on your comments, David, I'd suggest you need to avoid the HowTo sites outside of the official Python docs (I suspect they're well out of date). There was a lot of work happening on the extension interface at the 3.0 level and the best place to look is either
at the <a href="http://docs.python.org/3.0/extending/extending.html" rel="nofollow">3.0 docs</a> or <a href="http://docs.python.org/dev/3.0/extending/extending.html" rel="nofollow">3.1 alpha docs</a>.</p>
<p>The specific Windows build instructions are <a href="http://docs.python.org/3.0/extending/windows.html" rel="nofollow">here</a>.</p>
| 2
|
2009-03-21T08:27:59Z
|
[
"python",
"c",
"visual-studio",
"visual-studio-2008",
"linker"
] |
Why GQL Query does not match?
| 669,043
|
<p>What I want to do is build a small CMS which holds pages, each with a URI.</p>
<p>The last route in my urls.py points to a function in my views.py, which checks the datastore for a page with the same uri as the current request and, if one exists, shows that page.</p>
<p>I have a model: </p>
<pre><code>class Page(db.Model):
    title = db.StringProperty(required=True)
    uri = db.TextProperty(required=True)
    created = db.DateTimeProperty(auto_now_add=True)
    modified = db.DateTimeProperty(auto_now=True)
    content = db.TextProperty()
</code></pre>
<p>In my view: </p>
<pre><code>def show(request):
    page = db.GqlQuery('SELECT * FROM Page WHERE uri=:uri', uri=request.path).get()
    if page is None:
        return http.HttpResponseNotFound()
    else:
        return respond(request, 'pages_show', {'content': request.path})
</code></pre>
<p>And I've added an entity with '/work' as uri to the datastore. </p>
<p>Even when request.path is exactly '/work', the query does not return a match. </p>
<p>Thanks for any advice you can give me!</p>
<p>And yes, I'm a Python noob; App Engine is perfect for finally learning the language.</p>
| 1
|
2009-03-21T09:37:24Z
| 669,062
|
<p>If you use named keyword arguments ("uri=:uri"), you have to explicitly bind your parameters to the named keyword. Instead of:</p>
<pre><code># incorrect named parameter
GqlQuery('SELECT * FROM Page WHERE uri=:uri', request.path).get()
</code></pre>
<p>you want</p>
<pre><code># correct named parameter
GqlQuery('SELECT * FROM Page WHERE uri=:uri', uri=request.path).get()
</code></pre>
<p>or you could just use a positional parameter:</p>
<pre><code># correct positional parameter
GqlQuery('SELECT * FROM Page WHERE uri=:1', request.path).get()
</code></pre>
| 2
|
2009-03-21T09:59:51Z
|
[
"python",
"google-app-engine",
"gae-datastore"
] |
Why GQL Query does not match?
| 669,043
|
<p>What I want to do is build a small CMS which holds pages, each with a URI.</p>
<p>The last route in my urls.py points to a function in my views.py, which checks the datastore for a page with the same uri as the current request and, if one exists, shows that page.</p>
<p>I have a model: </p>
<pre><code>class Page(db.Model):
    title = db.StringProperty(required=True)
    uri = db.TextProperty(required=True)
    created = db.DateTimeProperty(auto_now_add=True)
    modified = db.DateTimeProperty(auto_now=True)
    content = db.TextProperty()
</code></pre>
<p>In my view: </p>
<pre><code>def show(request):
    page = db.GqlQuery('SELECT * FROM Page WHERE uri=:uri', uri=request.path).get()
    if page is None:
        return http.HttpResponseNotFound()
    else:
        return respond(request, 'pages_show', {'content': request.path})
</code></pre>
<p>And I've added an entity with '/work' as uri to the datastore. </p>
<p>Even when request.path is exactly '/work', the query does not return a match. </p>
<p>Thanks for any advice you can give me!</p>
<p>And yes, I'm a Python noob; App Engine is perfect for finally learning the language.</p>
| 1
|
2009-03-21T09:37:24Z
| 669,704
|
<p>I've found the solution!</p>
<p>The problem lies in the model. </p>
<p>App Engine's datastore does not index a TextProperty. Using that type was wrong from the beginning, so I changed it to StringProperty, which does get indexed and can therefore be used in a WHERE clause.</p>
<p>Example of working model:</p>
<pre><code>class Page(db.Model):
    title = db.StringProperty(required=True)
    # now a StringProperty, which the datastore indexes
    uri = db.StringProperty(required=True)
    created = db.DateTimeProperty(auto_now_add=True)
    modified = db.DateTimeProperty(auto_now=True)
    content = db.TextProperty()
</code></pre>
| 4
|
2009-03-21T18:16:25Z
|
[
"python",
"google-app-engine",
"gae-datastore"
] |
How can I change the display text of a MenuItem in Gtk2?
| 669,152
|
<p>I need to change the display text of a MenuItem. Is there any way of doing this without removing the MenuItem and then adding another one with a different text?</p>
| 2
|
2009-03-21T11:28:47Z
| 669,426
|
<p>It somewhat depends how you created the menu item, since a MenuItem is a container that can contain anything. If you created it like:</p>
<pre><code>menuitem = gtk.MenuItem('This is the label')
</code></pre>
<p>Then you can access the label widget in the menu item with:</p>
<pre><code>label = menuitem.child
</code></pre>
<p>And can then treat that as a normal label:</p>
<pre><code>label.set_text('This is the new label')
</code></pre>
<p>However, unless you made the menu item yourself, you can't guarantee that the child widget will be a label like this, so you should take some care.</p>
| 3
|
2009-03-21T15:03:39Z
|
[
"python",
"gtk",
"pygtk"
] |
Encoding of string returned by GetUserName()
| 669,770
|
<p>How do I get the encoding that is used for the string returned by <code>GetUserName</code> from the win32 API? I'm using pywin32 and it returns an 8-bit string. On my German XP, this string is obviously encoded using Latin-1, but this might not be the case for other Windows installations.</p>
<p>I could use <code>GetUserNameW</code>, but I would have to wrap that myself using ctypes, which I'd like to avoid for now if there is a simpler solution.</p>
| 1
|
2009-03-21T18:51:08Z
| 669,789
|
<p>From the API docs, GetUserNameA will return the name in ANSI and GetUserNameW returns the name in Unicode. You will have to use GetUserNameW.</p>
| 0
|
2009-03-21T19:09:16Z
|
[
"python",
"winapi",
"encoding",
"pywin32"
] |
Encoding of string returned by GetUserName()
| 669,770
|
<p>How do I get the encoding that is used for the string returned by <code>GetUserName</code> from the win32 API? I'm using pywin32 and it returns an 8-bit string. On my German XP, this string is obviously encoded using Latin-1, but this might not be the case for other Windows installations.</p>
<p>I could use <code>GetUserNameW</code>, but I would have to wrap that myself using ctypes, which I'd like to avoid for now if there is a simpler solution.</p>
| 1
|
2009-03-21T18:51:08Z
| 669,790
|
<p>You can call <a href="http://msdn.microsoft.com/en-us/library/dd318070%28VS.85%29.aspx">GetACP</a> to find the current ANSI codepage, which is what non-Unicode APIs use. You can also use <a href="http://msdn.microsoft.com/en-us/library/bb202786.aspx">MultiByteToWideChar</a>, and pass zero as the codepage (CP_ACP is defined as zero in the Windows headers) to convert a codepage string to Unicode.</p>
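<p>From Python, a minimal sketch of that idea via ctypes (assuming pywin32's <code>GetUserName</code> for the ANSI string):</p>
<pre><code>import ctypes
import win32api

codepage = ctypes.windll.kernel32.GetACP()  # current ANSI codepage, e.g. 1252
username = win32api.GetUserName().decode('cp%d' % codepage)
</code></pre>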
| 5
|
2009-03-21T19:09:41Z
|
[
"python",
"winapi",
"encoding",
"pywin32"
] |
Encoding of string returned by GetUserName()
| 669,770
|
<p>How do I get the encoding that is used for the string returned by <code>GetUserName</code> from the win32 API? I'm using pywin32 and it returns an 8-bit string. On my German XP, this string is obviously encoded using Latin-1, but this might not be the case for other Windows installations.</p>
<p>I could use <code>GetUserNameW</code>, but I would have to wrap that myself using ctypes, which I'd like to avoid for now if there is a simpler solution.</p>
| 1
|
2009-03-21T18:51:08Z
| 669,791
|
<p>I realize this isn't answering your question directly, but I <em>strongly</em> recommend you go through the trouble of using the Unicode-clean <code>GetUserNameW</code> as you mentioned.</p>
<p>The non-wide commands work differently on different Windows editions (e.g. ME, although I admit that example is old!), so IMHO it's worth just getting it right.</p>
<p>Having done a lot of multi-lingual Windows development: although the wide API can mean an extra layer of translation or wrapping (as you suggest!), it's worth it.</p>
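<p>For what it's worth, the ctypes wrapper is only a few lines; a sketch:</p>
<pre><code>import ctypes

def get_user_name():
    GetUserNameW = ctypes.windll.advapi32.GetUserNameW
    size = ctypes.c_ulong(0)
    GetUserNameW(None, ctypes.byref(size))   # first call reports the needed buffer size
    buf = ctypes.create_unicode_buffer(size.value)
    if not GetUserNameW(buf, ctypes.byref(size)):
        raise ctypes.WinError()
    return buf.value                         # a proper unicode object
</code></pre>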
| 4
|
2009-03-21T19:09:53Z
|
[
"python",
"winapi",
"encoding",
"pywin32"
] |
Encoding of string returned by GetUserName()
| 669,770
|
<p>How do I get the encoding that is used for the string returned by <code>GetUserName</code> from the win32 API? I'm using pywin32 and it returns an 8-bit string. On my German XP, this string is obviously encoded using Latin-1, but this might not be the case for other Windows installations.</p>
<p>I could use <code>GetUserNameW</code>, but I would have to wrap that myself using ctypes, which I'd like to avoid for now if there is a simpler solution.</p>
| 1
|
2009-03-21T18:51:08Z
| 670,979
|
<p>Okay, here's what I'm using right now:</p>
<pre><code>>>> import win32api
>>> u = unicode(win32api.GetUserName(), "mbcs")
>>> type(u)
<type 'unicode'>
</code></pre>
<p><code>mbcs</code> is a special <a href="http://docs.python.org/dev/2.5/lib/standard-encodings.html" rel="nofollow">standard encoding</a> in Windows:</p>
<blockquote>
<p>Windows only: Encode operand according to the ANSI codepage (CP_ACP)</p>
</blockquote>
| 3
|
2009-03-22T13:05:37Z
|
[
"python",
"winapi",
"encoding",
"pywin32"
] |
Pagination of Date-Based Generic Views in Django
| 669,903
|
<p>I have a pretty simple question. I want to make some date-based generic views on a Django site, but I also want to paginate them. According to the documentation the object_list view has page and paginate_by arguments, but the archive_month view does not. What's the "right" way to do it?</p>
| 3
|
2009-03-21T20:08:56Z
| 670,763
|
<p>Date-based generic views don't have pagination, and it seems you can't add pagination by wrapping them either, since they return an already-rendered result.</p>
<p>I would simply write my own view in this case. You can check out the generic views' code as well, but most of it will probably be unneeded in your case.</p>
<p>Your question is a valid one; looking at the code, I wonder why they didn't decouple the queryset generation into separate functions. You could then just use those and render however you wish.</p>
| 1
|
2009-03-22T09:35:00Z
|
[
"python",
"django",
"pagination"
] |
Pagination of Date-Based Generic Views in Django
| 669,903
|
<p>I have a pretty simple question. I want to make some date-based generic views on a Django site, but I also want to paginate them. According to the documentation the object_list view has page and paginate_by arguments, but the archive_month view does not. What's the "right" way to do it?</p>
| 3
|
2009-03-21T20:08:56Z
| 905,480
|
<p>I created a template tag to do template-based pagination on collections passed to the templates that aren't already paginated. Copy the following code to an <code>app/templatetags/pagify.py</code> file.</p>
<pre><code>from django.template import Library, Node, Variable, TemplateSyntaxError
from django.core.paginator import Paginator

register = Library()

class PagifyNode(Node):
    def __init__(self, items, page_size, varname):
        self.items = Variable(items)
        self.page_size = int(page_size)
        self.varname = varname

    def render(self, context):
        pages = Paginator(self.items.resolve(context), self.page_size)
        # requires the request context processor so 'request' is in the context
        request = context['request']
        page_num = int(request.GET.get('page', 1))
        context[self.varname] = pages.page(page_num)
        return ''

@register.tag
def pagify(parser, token):
    """
    Usage:
    {% pagify items by page_size as varname %}
    """
    bits = token.contents.split()
    if len(bits) != 6:
        raise TemplateSyntaxError('pagify tag takes exactly 5 arguments')
    if bits[2] != 'by':
        raise TemplateSyntaxError('second argument to pagify tag must be "by"')
    if bits[4] != 'as':
        raise TemplateSyntaxError('fourth argument to pagify tag must be "as"')
    return PagifyNode(bits[1], bits[3], bits[5])
</code></pre>
<p>To use it in the templates (assume we've passed in an un-paginated list called <strong>items</strong>):</p>
<pre><code>{% load pagify %}
{% pagify items by 20 as page %}
{% for item in page %}
    {{ item }}
{% endfor %}
</code></pre>
<p>The page_size argument (the 20) can be a variable as well. The tag automatically detects <code>page=5</code> variables in the querystring. And if you ever need to get at the paginator that belongs to the page (for a page count, for example), you can simply call:</p>
<pre><code>{{ page.paginator.num_pages }}
</code></pre>
| 3
|
2009-05-25T05:50:15Z
|
[
"python",
"django",
"pagination"
] |
Pagination of Date-Based Generic Views in Django
| 669,903
|
<p>I have a pretty simple question. I want to make some date-based generic views on a Django site, but I also want to paginate them. According to the documentation the object_list view has page and paginate_by arguments, but the archive_month view does not. What's the "right" way to do it?</p>
| 3
|
2009-03-21T20:08:56Z
| 905,880
|
<p>I was working on a problem similar to this yesterday, and I found the best solution for me personally was to use the object_list generic view for all date-based pages, but pass a filtered queryset, as follows:</p>
<pre><code>import datetime, time

def post_archive_month(request, year, month, page=0, template_name='post_archive_month.html', **kwargs):
    # Convert date to numeric format
    date = datetime.date(*time.strptime('%s-%s' % (year, month), '%Y-%b')[:3])
    return list_detail.object_list(
        request,
        queryset = Post.objects.filter(publish__year=date.year, publish__month=date.month).order_by('-publish',),
        paginate_by = 5,
        page = page,
        template_name = template_name,
        **kwargs)
</code></pre>
<p>Where the urls.py reads something like:</p>
<pre><code>url(r'^blog/(?P<year>\d{4})/(?P<month>\w{3})/$',
view=path.to.generic_view,
name='archive_month'),
</code></pre>
<p>I found this the easiest way around the problem without resorting to hacking the other generic views or writing a custom view.</p>
| 1
|
2009-05-25T08:33:50Z
|
[
"python",
"django",
"pagination"
] |
Pagination of Date-Based Generic Views in Django
| 669,903
|
<p>I have a pretty simple question. I want to make some date-based generic views on a Django site, but I also want to paginate them. According to the documentation the object_list view has page and paginate_by arguments, but the archive_month view does not. What's the "right" way to do it?</p>
| 3
|
2009-03-21T20:08:56Z
| 1,416,244
|
<p>Django date-based generic views do not support pagination. There is an <a href="http://code.djangoproject.com/ticket/2367" rel="nofollow">open ticket from 2006</a> on this. If you want, you can try out the code patches supplied to implement this feature. I am not sure why the patches have not been applied to the codebase yet.</p>
| 0
|
2009-09-12T21:50:22Z
|
[
"python",
"django",
"pagination"
] |
Pagination of Date-Based Generic Views in Django
| 669,903
|
<p>I have a pretty simple question. I want to make some date-based generic views on a Django site, but I also want to paginate them. According to the documentation the object_list view has page and paginate_by arguments, but the archive_month view does not. What's the "right" way to do it?</p>
| 3
|
2009-03-21T20:08:56Z
| 2,798,212
|
<p>There is also excellent <a href="http://code.google.com/p/django-pagination/" rel="nofollow">django-pagination</a> add-on, which is completely independent of underlying view.</p>
| 1
|
2010-05-09T15:55:12Z
|
[
"python",
"django",
"pagination"
] |
How to create a class that doesn't re-create an object with identical input parameters
| 669,932
|
<p>I am trying to create a class that doesn't re-create an object with the same input parameters. When I try to instantiate a class with the same parameters that were used to create an already-existing object, I just want my new class to return a pointer to the already-created (expensively-created) object. This is what I have tried so far:</p>
<pre><code>class myobject0(object):
    # At first, I didn't realize that even already-instantiated
    # objects had their __init__ called again
    instances = {}
    def __new__(cls, x):
        if x not in cls.instances.keys():
            cls.instances[x] = object.__new__(cls, x)
        return cls.instances[x]
    def __init__(self, x):
        print 'doing something expensive'

class myobject1(object):
    # I tried to override the existing object's __init__
    # but it didn't work.
    instances = {}
    def __new__(cls, x):
        if x not in cls.instances.keys():
            cls.instances[x] = object.__new__(cls, x)
        else:
            cls.instances[x].__init__ = lambda x: None
        return cls.instances[x]
    def __init__(self, x):
        print 'doing something expensive'

class myobject2(object):
    # does what I want but is ugly
    instances = {}
    def __new__(cls, x):
        if x not in cls.instances.keys():
            cls.instances[x] = object.__new__(cls, x)
            cls.instances[x]._is_new = 1
        else:
            cls.instances[x]._is_new = 0
        return cls.instances[x]
    def __init__(self, x):
        if self._is_new:
            print 'doing something expensive'
</code></pre>
<p>This is my first venture into overriding <code>__new__</code> and I'm convinced I'm not going about it the right way. Set me straight, please.</p>
| 2
|
2009-03-21T20:24:02Z
| 669,941
|
<p>First, use Upper Case Class Names in Python.</p>
<p>Second, use a <strong>Factory</strong> design pattern to solve this problem.</p>
<pre><code>class MyObject( object ):
    def __init__( self, args ):
        pass # Something Expensive

class MyObjectFactory( object ):
    def __init__( self ):
        self.pool = {}
    def makeMyObject( self, args ):
        if args not in self.pool:
            self.pool[args] = MyObject( args )
        return self.pool[args]
</code></pre>
<p>This is much simpler than fooling around with <code>__new__</code> and having class-level pools of objects.</p>
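<p>Usage would look something like this (the key point is that <code>args</code> must be hashable, since it is used as a dict key):</p>
<pre><code>factory = MyObjectFactory()
a = factory.makeMyObject('x')
b = factory.makeMyObject('x')
assert a is b  # the second call returns the cached instance
</code></pre>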
| 8
|
2009-03-21T20:31:15Z
|
[
"python",
"multiton"
] |
How to create a class that doesn't re-create an object with identical input parameters
| 669,932
|
<p>I am trying to create a class that doesn't re-create an object with the same input parameters. When I try to instantiate a class with the same parameters that were used to create an already-existing object, I just want my new class to return a pointer to the already-created (expensively-created) object. This is what I have tried so far:</p>
<pre><code>class myobject0(object):
    # At first, I didn't realize that even already-instantiated
    # objects had their __init__ called again
    instances = {}
    def __new__(cls, x):
        if x not in cls.instances.keys():
            cls.instances[x] = object.__new__(cls, x)
        return cls.instances[x]
    def __init__(self, x):
        print 'doing something expensive'

class myobject1(object):
    # I tried to override the existing object's __init__
    # but it didn't work.
    instances = {}
    def __new__(cls, x):
        if x not in cls.instances.keys():
            cls.instances[x] = object.__new__(cls, x)
        else:
            cls.instances[x].__init__ = lambda x: None
        return cls.instances[x]
    def __init__(self, x):
        print 'doing something expensive'

class myobject2(object):
    # does what I want but is ugly
    instances = {}
    def __new__(cls, x):
        if x not in cls.instances.keys():
            cls.instances[x] = object.__new__(cls, x)
            cls.instances[x]._is_new = 1
        else:
            cls.instances[x]._is_new = 0
        return cls.instances[x]
    def __init__(self, x):
        if self._is_new:
            print 'doing something expensive'
</code></pre>
<p>This is my first venture into overriding <code>__new__</code> and I'm convinced I'm not going about it the right way. Set me straight, please.</p>
| 2
|
2009-03-21T20:24:02Z
| 7,493,603
|
<p>Here's a class decorator to make a class a multiton:</p>
<pre><code>def multiton(cls):
    instances = {}
    def getinstance(id):
        if id not in instances:
            instances[id] = cls(id)
        return instances[id]
    return getinstance
</code></pre>
<p>(This is a slight variant of the singleton decorator from PEP 318.)</p>
<p>Then, to make your class a multiton, use the decorator:</p>
<pre><code>@multiton
class MyObject( object ):
    def __init__( self, arg):
        self.id = arg
        # other expensive stuff
</code></pre>
<p>Now, if you instantiate MyObject with the same id, you get the same instance:</p>
<pre><code>a = MyObject(1)
b = MyObject(2)
c = MyObject(2)
a is b # False
b is c # True
</code></pre>
| 10
|
2011-09-21T01:27:16Z
|
[
"python",
"multiton"
] |
Partial Upload With storbinary in python
| 670,084
|
<p>I've written some python code to download an image using </p>
<pre><code>urllib.urlopen().read()
</code></pre>
<p>and then upload it to an FTP site using </p>
<pre><code>ftplib.FTP().storbinary()
</code></pre>
<p>but I'm having a problem. Sometimes the image file is only partially uploaded, so I get images with the bottom 20% or so cut off. I've checked the locally downloaded version and I have successfully downloaded the entire image, which leads me to believe that it is a problem with storbinary. I believe I am opening and closing all of the files correctly. Does anyone have any clues as to why I'm getting a partial upload with storbinary?</p>
<p><strong>Update:</strong>
When I run through the commands in the Python shell, the upload completes successfully; I don't know why it would be different when run as a script...</p>
| 1
|
2009-03-21T22:02:27Z
| 670,567
|
<p>It turns out I was not closing the downloaded file correctly. Let's all pretend this never happened.</p>
| 0
|
2009-03-22T04:52:53Z
|
[
"python",
"ftp",
"ftplib"
] |
Partial Upload With storbinary in python
| 670,084
|
<p>I've written some python code to download an image using </p>
<pre><code>urllib.urlopen().read()
</code></pre>
<p>and then upload it to an FTP site using </p>
<pre><code>ftplib.FTP().storbinary()
</code></pre>
<p>but I'm having a problem. Sometimes the image file is only partially uploaded, so I get images with the bottom 20% or so cut off. I've checked the locally downloaded version and I have successfully downloaded the entire image, which leads me to believe that it is a problem with storbinary. I believe I am opening and closing all of the files correctly. Does anyone have any clues as to why I'm getting a partial upload with storbinary?</p>
<p><strong>Update:</strong>
When I run through the commands in the Python shell, the upload completes successfully; I don't know why it would be different when run as a script...</p>
| 1
|
2009-03-21T22:02:27Z
| 4,589,462
|
<p>It's been a while since I looked at this code, but I remember the crux of it was that I was not closing the downloaded file correctly. I have the working code though, so just in case it was a problem with the upload and not the download, here are both snippets:</p>
<p>Here is the working code to download the image:</p>
<pre><code>socket = urllib.urlopen(TheURL)
FileContents = socket.read()
LocalFilename = LocalDir + FilenameOnly
LocalFile = open(LocalDir + FilenameOnly, 'wb')
LocalFile.write(FileContents)
LocalFile.close()
</code></pre>
<p>Where <code>TheURL</code> is the URL of the file I'm trying to download, <code>FilenameOnly</code> is just the filename portion of the path, and <code>LocalDir</code> is the local destination. <strong>I believe my problem was that I was not calling <code>LocalFile.close()</code>.</strong></p>
<p>Here is the working code to upload the image:</p>
<pre><code>FTPServer = ftplib.FTP(FTPServer, FTPUsername, FTPPassword)
UploadFile = open(Filename, "rb")
FTPServer.cwd(FTPSubDirectory)
# the actual upload; this call appears to have been lost from the original snippet
FTPServer.storbinary("STOR " + Filename, UploadFile)
UploadFile.close()
FTPServer.quit()
</code></pre>
<p><strong>The problem could also have been that I was not calling <code>FTPServer.quit()</code></strong></p>
<p>If anyone has any questions about this code, I'll happily reply in the comments; I feel really bad that I left any Googlers hanging!</p>
| 0
|
2011-01-03T23:47:05Z
|
[
"python",
"ftp",
"ftplib"
] |
Unable to find files/folders with permissions 777 by AWK/SED/Python
| 670,269
|
<p><strong>Problems</strong></p>
<ol>
<li><p>to get permissions
of each file in every folder</p></li>
<li><p>to find files
which have 777 permissions, and then
print the filenames with their paths
to a list</p></li>
</ol>
<p>We can get permissions for files in one folder by </p>
<pre><code>ls -ls
</code></pre>
<p>I do not know how to get the permissions of each file in every folder efficiently.</p>
<p><strong>How can you find files which have permissions 777 by AWK/SED/Python?</strong></p>
| 0
|
2009-03-21T23:55:39Z
| 670,283
|
<p>Are you looking for <code>find</code>?</p>
<pre><code>find /some/path -perm 0777
</code></pre>
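<p>Since the question asks for Python as well, here is a minimal sketch of the same search with <code>os.walk</code> (checking for permission bits of exactly 0777):</p>
<pre><code>import os
import stat

def find_world_writable(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # S_IMODE strips the file-type bits, leaving just the permissions
            if stat.S_IMODE(os.lstat(path).st_mode) == 0o777:
                print path

find_world_writable('/some/path')
</code></pre>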
| 6
|
2009-03-22T00:05:57Z
|
[
"python",
"sed",
"awk"
] |
Unable to find files/folders with permissions 777 by AWK/SED/Python
| 670,269
|
<p><strong>Problems</strong></p>
<ol>
<li><p>to get permissions
of each file in every folder</p></li>
<li><p>to find files
which have 777 permissions, and then
print the filenames with their paths
to a list</p></li>
</ol>
<p>We can get permissions for files in one folder by </p>
<pre><code>ls -ls
</code></pre>
<p>I do not know how to get the permissions of each file in every folder efficiently.</p>
<p><strong>How can you find files which have permissions 777 by AWK/SED/Python?</strong></p>
| 0
|
2009-03-21T23:55:39Z
| 671,191
|
<pre><code>find /some/path -perm 0777 -type f
</code></pre>
| 4
|
2009-03-22T15:43:47Z
|
[
"python",
"sed",
"awk"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 670,402
|
<p>This may help you:</p>
<p><a href="http://www.marcworrell.com/article-2990-en.html" rel="nofollow">http://www.marcworrell.com/article-2990-en.html</a></p>
<p>It's PHP, so it should be pretty straightforward to set up on your dev server.</p>
<p>I've tried this one once:</p>
<p><a href="http://term.ie/oauth/example/" rel="nofollow">http://term.ie/oauth/example/</a></p>
<p>It's pretty simple. You have a link to download the code at the bottom.</p>
| 1
|
2009-03-22T01:42:42Z
|
[
"python",
"oauth"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 670,608
|
<p>Two things:</p>
<ol>
<li><p>The OAuth Service Provider in question is violating the OAuth spec if it's giving you an error if you don't specify a callback URL. callback_url is <a href="http://oauth.net/core/1.0/#auth%5Fstep2">spec'd to be an OPTIONAL parameter</a>.</p></li>
<li><p>But, pedantry aside, you probably want to get a callback when the user's done just so you know you can redeem the Request Token for an Access Token. <a href="http://fireeagle.yahoo.net/developer/documentation/oauth%5Fbest%5Fpractice">Yahoo's FireEagle developer docs</a> have lots of great information on how to do this.</p></li>
</ol>
<p>Even in the second case, the callback URL doesn't actually have to be visible from the Internet at all. The OAuth Service Provider will redirect the browser that the user uses to provide his username/password to the callback URL.</p>
<p>The two common ways to do this are:</p>
<ol>
<li>Create a dumb web service from within your application that listens on some port (say, <a href="http://localhost:1234/">http://localhost:1234/</a>) for the completion callback, or</li>
<li>Register a protocol handler (you'll have to check with the documentation for your OS specifically on how to do such a thing, but it enables things like <code>skype:555-1212</code> links to work).</li>
</ol>
<p>(An example of the flow that I believe you're describing <a href="http://wiki.oauth.net/iPhoto-to-Flickr">lives here</a>.)</p>
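<p>For option 1, a minimal sketch of such a throwaway listener (the port number and query-parameter handling are assumptions to adapt):</p>
<pre><code>import BaseHTTPServer
import urlparse

class CallbackHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # the provider redirects the browser here with the verifier in the query string
        params = urlparse.parse_qs(urlparse.urlparse(self.path).query)
        self.server.verifier = params.get('oauth_verifier', [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write('You can close this window now.')

server = BaseHTTPServer.HTTPServer(('localhost', 1234), CallbackHandler)
server.handle_request()  # serve exactly one request, then fall through
print server.verifier
</code></pre>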
| 16
|
2009-03-22T06:15:23Z
|
[
"python",
"oauth"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 2,953,681
|
<p>localtunnel [port] and voila</p>
<p><a href="http://blogrium.wordpress.com/2010/05/11/making-a-local-web-server-public-with-localtunnel/" rel="nofollow">http://blogrium.wordpress.com/2010/05/11/making-a-local-web-server-public-with-localtunnel/</a></p>
<p><a href="http://github.com/progrium/localtunnel" rel="nofollow">http://github.com/progrium/localtunnel</a></p>
| 0
|
2010-06-01T21:59:59Z
|
[
"python",
"oauth"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 3,117,885
|
<p>You could create two applications: one for deployment and the other for testing.</p>
<p>Alternatively, you can also include an oauth_callback parameter when requesting a request token. Some providers will redirect to the url specified by oauth_callback (eg. Twitter, Google) but some will ignore this callback url and redirect to the one specified during configuration (eg. Yahoo).</p>
| 0
|
2010-06-25T12:17:48Z
|
[
"python",
"oauth"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 7,971,246
|
<p>This was with the Facebook OAuth - I actually <em>was</em> able to specify 'http://127.0.0.1:8080' as the Site URL and the callback URL. It took several minutes for the changes to the Facebook app to propagate, but then it worked.</p>
| 2
|
2011-11-01T18:40:58Z
|
[
"python",
"oauth"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 12,107,449
|
<p>In case you are using a *nix style system, create an alias like <code>127.0.0.1 mywebsite.dev</code> in <code>/etc/hosts</code> (you need a line similar to that in the file), then use <code>http://mywebsite.dev/callbackurl/for/app</code> as the callback URL during local testing.</p>
| 8
|
2012-08-24T10:16:18Z
|
[
"python",
"oauth"
] |
How do I develop against OAuth locally?
| 670,398
|
<p>I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?</p>
| 32
|
2009-03-22T01:37:13Z
| 15,788,200
|
<p>So how I solved this issue (using BitBucket's OAuth interface) was by setting the callback URL to localhost (or whatever the hell you want, really), and then following the authorisation URL with curl, with the twist of printing only the HTTP status and final URL. Example:</p>
<pre><code>curl --user BitbucketUsername:BitbucketPassword -sL -w "%{http_code} %{url_effective}\\n" "AUTH_URL" -o /dev/null
</code></pre>
<p>Insert your credentials and the authorisation URL (remember to escape the exclamation mark!).</p>
<p>What you should get is something like this:</p>
<pre><code>200 http://localhost?dump&oauth_verifier=OATH_VERIFIER&oauth_token=OATH_TOKEN
</code></pre>
<p>And you can scrape the oauth_verifier from this.</p>
<p>Doing the same in python:</p>
<pre><code>import pycurl
devnull = open('/dev/null', 'w')
c = pycurl.Curl()
c.setopt(pycurl.WRITEFUNCTION, devnull.write)
c.setopt(c.USERPWD, "BBUSERNAME:BBPASSWORD")
c.setopt(pycurl.URL, authorize_url)
c.setopt(pycurl.FOLLOWLOCATION, 1)
c.perform()
print c.getinfo(pycurl.HTTP_CODE), c.getinfo(pycurl.EFFECTIVE_URL)
</code></pre>
<p>I hope this is useful for someone!</p>
| 0
|
2013-04-03T13:06:46Z
|
[
"python",
"oauth"
] |
Asynchronous File Upload to Amazon S3 with Django
| 670,442
|
<p>I am using this file storage engine to store files to Amazon S3 when they are uploaded:</p>
<p><a href="http://code.welldev.org/django-storages/wiki/Home">http://code.welldev.org/django-storages/wiki/Home</a></p>
<p>It takes quite a long time to upload because the file must first be uploaded from client to web server, and then web server to Amazon S3 before a response is returned to the client.</p>
<p>I would like to make the process of sending the file to S3 asynchronous, so the response can be returned to the user much faster. What is the best way to do this with the file storage engine?</p>
<p>Thanks for your advice!</p>
| 32
|
2009-03-22T02:29:32Z
| 671,047
|
<p>You could decouple the process:</p>
<ul>
<li>the user selects a file to upload and sends it to your server. After this he sees a page: "Thank you for uploading foofile.txt, it is now stored in our storage backend"</li>
<li>When the user has uploaded the file, it is stored in a temporary directory on your server and, if needed, some metadata is stored in your database.</li>
<li>A background process on your server then uploads the file to S3. This is only possible if you have full access to your server, so you can create some kind of "daemon" to do this (or simply use a cronjob).*</li>
<li>The page that is displayed polls asynchronously and displays some kind of progress bar to the user (or a simple "please wait" message). This would only be needed if the user should be able to "use" the file (put it in a message, or something like that) directly after uploading.</li>
</ul>
<p>[*: In case you have only a shared hosting you could possibly build some solution which uses an hidden Iframe in the users browser to start a script which then uploads the file to S3]</p>
| 6
|
2009-03-22T14:01:38Z
|
[
"python",
"django",
"amazon-s3"
] |
Asynchronous File Upload to Amazon S3 with Django
| 670,442
|
<p>I am using this file storage engine to store files to Amazon S3 when they are uploaded:</p>
<p><a href="http://code.welldev.org/django-storages/wiki/Home">http://code.welldev.org/django-storages/wiki/Home</a></p>
<p>It takes quite a long time to upload because the file must first be uploaded from client to web server, and then web server to Amazon S3 before a response is returned to the client.</p>
<p>I would like to make the process of sending the file to S3 asynchronous, so the response can be returned to the user much faster. What is the best way to do this with the file storage engine?</p>
<p>Thanks for your advice!</p>
| 32
|
2009-03-22T02:29:32Z
| 671,255
|
<p>I've taken another approach to this problem.</p>
<p>My models have 2 file fields: one uses the standard file storage backend and the other one uses the S3 file storage backend. When the user uploads a file it gets stored locally.</p>
<p>I have a management command in my application that uploads all the locally stored files to S3 and updates the models.</p>
<p>So when a request comes in for the file, I check whether the model object uses the S3 storage field; if so, I send a redirect to the correct url on S3, and if not, I send a redirect so that nginx can serve the file from disk.</p>
<p>This management command can of course be triggered by any event, a cronjob or whatever.</p>
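<p>A sketch of what such a management command could look like (the model and field names here are hypothetical; it assumes a model with one locally-backed and one S3-backed FileField):</p>
<pre><code># myapp/management/commands/push_to_s3.py
from django.core.management.base import NoArgsCommand
from myapp.models import Upload

class Command(NoArgsCommand):
    help = 'Copy locally stored uploads over to the S3-backed field'

    def handle_noargs(self, **options):
        # an empty s3_file means the object has not been pushed yet
        for obj in Upload.objects.filter(s3_file=''):
            obj.local_file.open()
            # saving through the S3 storage backend performs the transfer
            obj.s3_file.save(obj.local_file.name, obj.local_file, save=True)
            obj.local_file.close()
</code></pre>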
| 23
|
2009-03-22T16:35:38Z
|
[
"python",
"django",
"amazon-s3"
] |