| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
python, "a in b" keyword, how about multiple a's?
| 763,944
|
<p>My adventures in Python continue and my favorite books are silent again. Python offers a built-in way to test if a variable is inside an iterable object, using the 'in' keyword:</p>
<pre><code>if "a" in "abrakadabra" :
print "it is definitely here"
</code></pre>
<p>But is it possible to test if more than one item is in the list (any one)?
Currently, I'm using the syntax below, but it is kinda long:</p>
<pre><code>if "// @in " in sTxt or "// @out " in sTxt or "// @ret " in sTxt or <10 more>
print "found."
</code></pre>
<p>Of course regexes can help, but using regexes takes a lot of verbose code and will
not be as clear as "a in b". Are there any other Pythonic ways?</p>
| 15
|
2009-04-18T18:53:11Z
| 763,963
|
<p>If you want <em>any</em> check then you would use this:</p>
<pre><code>inthere = False
checks = ('a', 'b')
for check in checks:
if check in 'abrakadabra':
inthere = True
break
</code></pre>
<p>If you want an <em>all</em> check, you could use this:</p>
<pre><code>inthere = True
checks = ('a', 'b')
for check in checks:
if check not in 'abrakadabra':
inthere = False
break
</code></pre>
<p>EDIT: Didn't know about the more Pythonic <code>any()</code>. It's probably better to use that.</p>
<p>EDIT2: Added break statements, and corrected the <em>all</em>-case.</p>
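<p>For comparison, a minimal sketch of the <code>any()</code>/<code>all()</code> versions mentioned in the edit, using the same example data:</p>

```python
text = "abrakadabra"
checks = ('a', 'b')

# "any" check: True if at least one of the substrings occurs in text
any_found = any(check in text for check in checks)

# "all" check: True only if every substring occurs in text
all_found = all(check in text for check in checks)

print(any_found, all_found)  # -> True True
```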
| 1
|
2009-04-18T19:01:07Z
|
[
"python"
] |
| 763,979
|
<p>You could also use <a href="http://docs.python.org/3.0/library/stdtypes.html#set-types-set-frozenset" rel="nofollow">set methods and operators</a>:</p>
<pre><code>not alternatives.isdisjoint(sTxt) # for "any"
(alternatives & sTxt) != set() # Again, the intersection is nonempty
alternatives <= sTxt # for "all"
</code></pre>
<p>I think these are easier to read than using <code>any</code> or <code>all</code>, but you have to convert your collections into sets. Since intersection and containment are what you care about, you might consider making them sets in the first place.</p>
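<p>A runnable sketch of the set-based checks. Note that with a string on one side these compare individual characters, not substrings:</p>

```python
alternatives = {'a', 'z'}
chars = set("abrakadabra")  # {'a', 'b', 'r', 'k', 'd'}

# "any": the two sets share at least one element
any_match = not alternatives.isdisjoint(chars)

# "all": every alternative occurs among the characters
all_match = alternatives <= chars

print(any_match, all_match)  # -> True False ('z' never appears)
```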
| 1
|
2009-04-18T19:12:29Z
|
[
"python"
] |
| 764,117
|
<p>If you're testing lots of lines for the same words, it may be faster to compile them as a regular expression. eg:</p>
<pre><code>import re
words = ["// @in ", "// @out ", "// @ret "] + ["// @test%s " % i for i in range(10)]
my_regex = re.compile("|".join(map(re.escape, words)))
for line in lines_to_search:
if my_regex.search(line): print "Found match"
</code></pre>
<p>Some quick timing shows that this is usually faster than the <code>any(word in theString for word in words)</code> approach. I've tested both approaches with varying text (short/long with/without matches). Here are the results:</p>
<pre><code> { No keywords } | {contain Keywords }
short long short long
regex : 0.214 27.214 0.147 0.149
any in : 0.579 81.341 0.295 0.300
</code></pre>
<p>If performance doesn't matter though, the <code>any()</code> approach is more readable.</p>
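<p>A Python 3 version of the compiled-regex approach, sketched with a couple of hypothetical input lines:</p>

```python
import re

words = ["// @in ", "// @out ", "// @ret "]
# re.escape protects any regex metacharacters in the search strings
my_regex = re.compile("|".join(map(re.escape, words)))

matching = my_regex.search("int x; // @in param") is not None
non_matching = my_regex.search("int y; // plain comment") is not None
print(matching, non_matching)  # -> True False
```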
| 6
|
2009-04-18T20:20:41Z
|
[
"python"
] |
Determine if a function is available in a Python module
| 763,971
|
<p>I am working on some Python socket code that's using the <a href="http://docs.python.org/library/socket.html#socket.fromfd"><code>socket.fromfd()</code></a> function.</p>
<p>However, this method is not available on all platforms, so I am writing some fallback code in the case that the method is not defined.</p>
<p><strong>What's the best way to determine if a method is defined at runtime?</strong> Is the following sufficient or is there a better idiom?</p>
<pre><code>if 'fromfd' in dir(socket):
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
</code></pre>
<p>I'm slightly concerned that the documentation for <a href="http://docs.python.org/library/functions.html#dir"><code>dir()</code></a> seems to discourage its use. Would <a href="http://docs.python.org/library/functions.html#getattr"><code>getattr()</code></a> be a better choice, as in:</p>
<pre><code>if getattr(socket, 'fromfd', None) is not None:
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
</code></pre>
<p>Thoughts?</p>
<p><strong>EDIT</strong> As <a href="http://stackoverflow.com/questions/763971/determine-if-a-function-is-available-in-a-python-module/763975#763975">Paolo</a> pointed out, this question is <a href="http://stackoverflow.com/questions/610883">nearly a duplicate</a> of a question about determining attribute presence. However, since the terminology used is disjoint (lk's <em>"object has an attribute"</em> vs my <em>"module has a function"</em>) it may be helpful to preserve this question for searchability unless the two can be combined.</p>
| 14
|
2009-04-18T19:06:00Z
| 763,975
|
<p><a href="http://docs.python.org/library/functions.html#hasattr"><code>hasattr()</code></a> is the best choice. Go with that. :)</p>
<pre><code>if hasattr(socket, 'fromfd'):
pass
else:
pass
</code></pre>
<p><strong>EDIT</strong>: Actually, according to the docs all hasattr is doing is calling getattr and catching the exception. So if you want to cut out the middle man you should go with <a href="#763977">marcog</a>'s answer.</p>
<p><strong>EDIT</strong>: I also just realized this question is actually a <a href="http://stackoverflow.com/questions/610883/how-to-know-if-an-object-has-an-attribute-in-python/610893">duplicate</a>. One of the answers there discusses the merits of the two options you have: catching the exception ("easier to ask for forgiveness than permission") or simply checking before hand ("look before you leap"). Honestly, I am more of the latter, but it seems like the Python community leans towards the former school of thought.</p>
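<p>A minimal sketch of the look-before-you-leap pattern with <code>hasattr()</code> (the fallback branch is illustrative):</p>

```python
import socket

# LBYL: check for the platform-specific function before using it
if hasattr(socket, 'fromfd'):
    constructor = socket.fromfd
else:
    constructor = socket.socket  # fallback for platforms without fromfd

print(callable(constructor))  # -> True
```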
| 17
|
2009-04-18T19:09:31Z
|
[
"python"
] |
| 763,977
|
<p>Or simply use a try..except block:</p>
<pre><code>try:
sock = socket.fromfd(...)
except AttributeError:
sock = socket.socket(...)
</code></pre>
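<p>Self-contained, the EAFP variant looks like this (the fallback is illustrative; which branch runs depends on the platform):</p>

```python
import socket

def pick_constructor():
    # EAFP: just try the attribute and fall back on AttributeError
    try:
        return socket.fromfd
    except AttributeError:
        return socket.socket

print(callable(pick_constructor()))  # -> True
```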
| 17
|
2009-04-18T19:11:40Z
|
[
"python"
] |
| 763,992
|
<p><code>hasattr(obj, 'attributename')</code> is probably a better choice. <code>hasattr</code> will try to access the attribute, and if it's not there, it'll return <code>False</code>.</p>
<p>It's possible to have dynamic methods in Python, i.e. methods that are created only when you try to access them. They would not show up in <code>dir(...)</code>, but <code>hasattr</code> would still find them.</p>
<pre><code>>>> class C(object):
... def __init__(self):
... pass
... def mymethod1(self):
... print "In #1"
... def __getattr__(self, name):
... if name == 'mymethod2':
... def func():
... print "In my super meta #2"
... return func
... else:
... raise AttributeError
...
>>> c = C()
>>> 'mymethod1' in dir(c)
True
>>> hasattr(c, 'mymethod1')
True
>>> c.mymethod1()
In #1
>>> 'mymethod2' in dir(c)
False
>>> hasattr(c, 'mymethod2')
True
>>> c.mymethod2()
In my super meta #2
</code></pre>
| 2
|
2009-04-18T19:20:28Z
|
[
"python"
] |
mod_wsgi/python sys.path.extend problems
| 764,081
|
<p>I'm working on a mod_wsgi script.. at the beginning is:</p>
<pre><code>sys.path.extend(map(os.path.abspath, ['/media/server/www/webroot/']))
</code></pre>
<p>But I've noticed, that every time I update the script the sys.path var keeps growing with duplicates of this extension:</p>
<pre><code>['/usr/lib64/python25.zip'
'/usr/lib64/python2.5'
'/usr/lib64/python2.5/plat-linux2'
'/usr/lib64/python2.5/lib-tk'
'/usr/lib64/python2.5/lib-dynload'
'/usr/lib64/python2.5/site-packages'
'/usr/lib64/python2.5/site-packages/Numeric'
'/usr/lib64/python2.5/site-packages/gtk-2.0'
'/usr/lib64/python2.5/site-packages/scim-0.1'
'/usr/lib/python2.5/site-packages'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot']
</code></pre>
<p>It resets every time I restart apache.. is there any way to make sure this doesn't happen? I want the module path to be loaded only once.. </p>
| 1
|
2009-04-18T20:02:05Z
| 764,148
|
<p>One fairly simple way to do this is to check whether the path has already been extended before extending it:</p>
<pre><code>path_extension = map(os.path.abspath,['/media/server/www/webroot/'])
if path_extension[0] not in sys.path:
sys.path.extend(path_extension)
</code></pre>
<p>This has the disadvantage, however, of always scanning through most of <code>sys.path</code> when checking to see if it's been extended. A faster, though more complex, version is below:</p>
<pre><code>path_extension = map(os.path.abspath,['/media/server/www/webroot/'])
if path_extension[-1] not in reversed(sys.path):
sys.path.extend(path_extension)
</code></pre>
<p>A better solution, however, is probably to either add the path extensions to your <code>PYTHONPATH</code> environment variable or put a <code>.pth</code> file into your <code>site-packages</code> directory:</p>
<p><a href="http://docs.python.org/install/index.html" rel="nofollow">http://docs.python.org/install/index.html</a></p>
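<p>A sketch of the first variant showing that a second run becomes a no-op (the path is the hypothetical one from the question):</p>

```python
import os
import sys

path_extension = [os.path.abspath(p) for p in ['/media/server/www/webroot/']]

if path_extension[0] not in sys.path:
    sys.path.extend(path_extension)

length_after_first = len(sys.path)

# extending again is skipped, so sys.path does not grow
if path_extension[0] not in sys.path:
    sys.path.extend(path_extension)

print(len(sys.path) == length_after_first)  # -> True
```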
| 3
|
2009-04-18T20:33:16Z
|
[
"python",
"apache",
"mod-wsgi"
] |
| 764,683
|
<p>No need to worry about checking or using abspath yourself. Use the <code>site</code> module's built-in <a href="http://docs.python.org/library/site.html#site.addsitedir">addsitedir</a> function. It will take care of these issues and others (e.g. <code>.pth</code> files) automatically:</p>
<pre><code>import site
site.addsitedir('/media/server/www/webroot/')
</code></pre>
<p>(This function is only documented in Python 2.6, but it has pretty much always existed.)</p>
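<p>The deduplication can be seen with a throwaway directory (using <code>tempfile</code> here so the sketch is self-contained):</p>

```python
import site
import sys
import tempfile

directory = tempfile.mkdtemp()
site.addsitedir(directory)
site.addsitedir(directory)  # second call is a no-op

print(sys.path.count(directory))  # -> 1
```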
| 7
|
2009-04-19T02:15:56Z
|
[
"python",
"apache",
"mod-wsgi"
] |
| 766,705
|
<p>The mod_wsgi <a href="http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode" rel="nofollow">documentation on code reloading</a> covers this.</p>
| 2
|
2009-04-20T02:51:33Z
|
[
"python",
"apache",
"mod-wsgi"
] |
Trying to get enum style choices= working for django but the whole tuplets are appearing in the drop down
| 764,177
|
<p>Im using appengine and the appenginepatch (so my issue could be related to that)</p>
<p>I have set up a model with a property that has several choices but when trying to display on a form or via admin interface I am getting an error:</p>
<blockquote>
<p>Property mode is 'o'; must be one of (('s', 'Single'), ('m', 'Multi'), ('o', 'Ordered')) </p>
</blockquote>
<p>This is my code:</p>
<pre><code>MODES = (
('s', 'Single'),
('m', 'Multi'),
('o', 'Ordered')
)
class X(search.SearchableModel):
mode = models.StringProperty( default='s', choices=MODES )
</code></pre>
<p>if I set it to use Integers (as below) the admin form (and my own ModelForm) shows each option for the property as the whole tuplet so that when I select and try to save I get the error that I'm not saving an Integer value</p>
<pre><code>MODES = (
(0, 'Single'),
(1, 'Multi'),
(2, 'Ordered')
)
class X(search.SearchableModel):
mode = models.IntegerProperty( default=0, choices=MODES )
</code></pre>
<p>Is there something special I have to do?</p>
| 1
|
2009-04-18T20:50:54Z
| 764,332
|
<p>It looks like this is an issue in Django/appengine support. It's documented <a href="http://code.google.com/p/google-app-engine-django/issues/detail?id=72" rel="nofollow">here</a> on the google-app-engine-django bug tracker, but it's closed as "wontfix" there. It is also documented <a href="http://code.google.com/p/googleappengine/issues/detail?id=350" rel="nofollow">here</a> on the googleappengine bug tracker and is closed as invalid.</p>
<p>According to the <a href="http://code.google.com/appengine/docs/python/datastore/propertyclass.html#Property" rel="nofollow">docs</a>, the appengine <code>choices</code> parameter works differently from the Django one. You do not appear to be able to do what you want without creating a custom widget. According to Guido's comment closing the googleappengine ticket:</p>
<blockquote>
<p>I realize that this may cause problems
when you're trying to create a form
from the model, but the solution is to
override the form field using a custom
widget and passing the list of desired
choices to the widget. (There's an
example of this in Rietveld, in
codereview/views.py, class
SettingForm.)</p>
</blockquote>
| 2
|
2009-04-18T22:11:45Z
|
[
"python",
"django",
"google-app-engine"
] |
Python: How do I get time from a datetime.timedelta object?
| 764,184
|
<p>A mysql database table has a column whose datatype is time ( <a href="http://dev.mysql.com/doc/refman/5.0/en/time.html">http://dev.mysql.com/doc/refman/5.0/en/time.html</a> ). When the table data is accessed, Python returns the value of this column as a datetime.timedelta object. How do I extract the time out of this? (I didn't really understand what timedelta is for from the python manuals).</p>
<p>E.g. The column in the table contains the value "18:00:00"
Python-MySQLdb returns this as datetime.timedelta(0, 64800)</p>
<p><hr /></p>
<p>Please ignore what is below (it does return a different value) - </p>
<p><em>Added: Irrespective of the time value in the table, python-MySQLdb seems to only return datetime.timedelta(0, 64800).</em></p>
<p>Note: I use Python 2.4</p>
| 16
|
2009-04-18T20:55:48Z
| 764,198
|
<p>It's strange that Python returns the value as a <code>datetime.timedelta</code>. It probably should return a <code>datetime.time</code>. Anyway, it looks like it's returning the elapsed time since midnight (assuming the column in the table is 6:00 PM). In order to convert to a <code>datetime.time</code>, you can do the following:</p>
<pre><code>value = datetime.timedelta(0, 64800)
(datetime.datetime.min + value).time()
</code></pre>
<p><code>datetime.datetime.min</code> and <code>datetime.time()</code> are, of course, documented as part of the <a href="http://docs.python.org/library/datetime.html">datetime</a> module if you want more information.</p>
<p>A <code>datetime.timedelta</code> is, by the way, a representation of the difference between two <code>datetime.datetime</code> values. So if you subtract one <code>datetime.datetime</code> from another, you will get a <code>datetime.timedelta</code>. And if you add a <code>datetime.datetime</code> with a <code>datetime.timedelta</code>, you'll get a <code>datetime.datetime</code>. That's how the code above works.</p>
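<p>As a self-contained sketch of the conversion:</p>

```python
import datetime

value = datetime.timedelta(0, 64800)  # 64800 seconds past midnight

# adding the delta to the earliest datetime, then taking .time()
t = (datetime.datetime.min + value).time()
print(t)  # -> 18:00:00
```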
| 22
|
2009-04-18T21:04:40Z
|
[
"python",
"mysql",
"timedelta"
] |
| 26,671,843
|
<p>It seems to me that the TIME type in MySQL is intended to represent time intervals as datetime.timedelta does in Python. From the docs you referenced:</p>
<blockquote>
<p>TIME values may range from '-838:59:59' to '838:59:59'. The hours part may be so large because the TIME type can be used not only to represent a time of day (which must be less than 24 hours), but also elapsed time or a time interval between two events (which may be much greater than 24 hours, or even negative). </p>
</blockquote>
<p>An alternative to converting from datetime.timedelta to datetime.time would be to change the column type to DATETIME and simply not use the date fields.</p>
<p>-Insert:</p>
<pre><code>tIn = datetime.datetime(
year=datetime.MINYEAR,
month=1,
day=1,
hour=10,
minute=52,
second=10
)
cursor.execute('INSERT INTO TableName (TimeColumn) VALUES (%s)', [tIn])
</code></pre>
<p>-Select:</p>
<pre><code> cursor.execute('SELECT TimeColumn FROM TableName')
result = cursor.fetchone()
if result is not None:
tOut = result[0].time()
print 'Selected time: {0}:{1}:{2}'.format(tOut.hour, tOut.minute, tOut.second)
</code></pre>
<p>datetime.time() is called on a datetime object to get a time object.</p>
| 1
|
2014-10-31T09:57:54Z
|
[
"python",
"mysql",
"timedelta"
] |
Dictionary to lowercase in Python
| 764,235
|
<p>I wish to do this but for a dictionary:</p>
<pre><code>"My string".lower()
</code></pre>
<p>Is there a built in function or should I use a loop?</p>
| 27
|
2009-04-18T21:23:14Z
| 764,244
|
<p>You will need to use either a loop or a list/generator comprehension. If you want to lowercase all the keys and values, you can do this:</p>
<pre><code>dict((k.lower(), v.lower()) for k,v in {'My Key':'My Value'}.iteritems())
</code></pre>
<p>If you want to lowercase just the keys, you can do this:</p>
<pre><code>dict((k.lower(), v) for k,v in {'My Key':'My Value'}.iteritems())
</code></pre>
<p><a href="http://www.python.org/dev/peps/pep-0289/">Generator expressions</a> (used above) are often useful in building dictionaries; I use them all the time. All the expressivity of a loop comprehension with none of the memory overhead.</p>
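<p>On Python 3 the same idea works with <code>items()</code> (<code>iteritems()</code> was removed):</p>

```python
original = {'My Key': 'My Value'}

# lower-case both keys and values
lowered = dict((k.lower(), v.lower()) for k, v in original.items())
print(lowered)  # -> {'my key': 'my value'}
```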
| 53
|
2009-04-18T21:31:02Z
|
[
"python"
] |
| 765,409
|
<p>The following is identical to Rick Copeland's answer, just written without using a generator expression:</p>
<pre><code>outdict = {}
for k, v in {'My Key': 'My Value'}.iteritems():
outdict[k.lower()] = v.lower()
</code></pre>
<p>Generator expressions, list comprehensions and (in Python 2.7 and higher) dict comprehensions are basically ways of rewriting loops.</p>
<p>In Python 2.7+, you can use a dictionary comprehension (it's a single line of code, but you can reformat them to make it more readable):</p>
<pre><code>{k.lower():v.lower()
for k, v in
{'My Key': 'My Value'}.items()
}
</code></pre>
<p>They are quite often tidier than the loop equivalent, as you don't have to initialise an empty dict/list/etc., but if you need to do anything more than a single function/method call they can quickly become messy.</p>
| 13
|
2009-04-19T13:25:52Z
|
[
"python"
] |
| 34,940,226
|
<p>If the supplied dictionary has multiple types of keys/values (numeric, string, etc.), use the following solution.</p>
<p>For example, if you have a dictionary named mydict as shown below:</p>
<pre><code>mydict = {"FName": "John", "LName": "Doe", 7: True}
</code></pre>
<p><strong>In Python 2.x</strong></p>
<pre><code>dict((k.lower() if isinstance(k, basestring) else k, v.lower() if isinstance(v, basestring) else v) for k, v in mydict.iteritems())
</code></pre>
<p><strong>In Python 3.x</strong></p>
<pre><code>dict((k.lower() if isinstance(k, str) else k, v.lower() if isinstance(v, str) else v) for k, v in mydict.items())
</code></pre>
<p><strong>Note:</strong> this works well on single-dimensional dictionaries</p>
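<p>A runnable Python 3 sketch of the mixed-type version (note <code>items()</code> rather than <code>iteritems()</code>, and that the sample literal must use valid syntax such as <code>7: True</code>):</p>

```python
mydict = {"FName": "John", "LName": "Doe", 7: True}

# lower-case only the entries that are actually strings
lowered = dict(
    (k.lower() if isinstance(k, str) else k,
     v.lower() if isinstance(v, str) else v)
    for k, v in mydict.items()
)
print(lowered)  # -> {'fname': 'john', 'lname': 'doe', 7: True}
```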
| 0
|
2016-01-22T06:18:25Z
|
[
"python"
] |
| 37,166,894
|
<pre><code>def convert_to_lower_case(data):
    # recursively lower-case string values (keys are left untouched)
    if type(data) is dict:
        for k, v in data.iteritems():
            if type(v) is str:
                data[k] = v.lower()
            elif type(v) is list:
                data[k] = [x.lower() for x in v]
            elif type(v) is dict:
                data[k] = convert_to_lower_case(v)
    return data
</code></pre>
| -1
|
2016-05-11T15:18:00Z
|
[
"python"
] |
| 38,572,808
|
<p><strong>Shorter way in python 3:</strong> <code>{k.lower(): v for k, v in my_dict.items()}</code></p>
| 1
|
2016-07-25T16:17:22Z
|
[
"python"
] |
How do I modify sys.path from .htaccess to allow mod_python to see Django?
| 764,312
|
<p>The host I'm considering for hosting a Django site has mod_python installed, but does not have Django. Django's INSTALL file indicates that I can simply copy the django directory to Python's site-packages directory to install Django, so I suspect that it might be possible to configure Python / mod_python to look for it elsewhere (namely my user space) by modifying sys.path, but I don't know how to change it from .htaccess or mod_python.</p>
<p><strong>How do I modify <code>sys.path</code> from <code>.htaccess</code> to allow mod_python to see Django?</strong></p>
<p>P.S. I can only access the site via FTP (i.e. no shell access). I realize that it sounds like I should just switch hosts, but there are compelling reasons for me to make this work so I'm at least going to try.</p>
| 4
|
2009-04-18T22:02:45Z
| 764,341
|
<p>Is the <a href="http://www.modpython.org/live/current/doc-html/dir-other-pp.html" rel="nofollow">PythonPath</a> setting what you are looking for? I haven't tried it with Django, but I would assume that it should do the job for you.</p>
| 1
|
2009-04-18T22:20:16Z
|
[
"python",
"django",
"apache",
".htaccess",
"mod-python"
] |
| 764,349
|
<p>According to <a href="http://code.djangoproject.com/ticket/2255" rel="nofollow">ticket #2255</a> for Django, you need admin access to httpd.conf in order to use Django with mod_python, and this is not going to change, so you may be dead in the water. To answer the basic question of how to modify <code>sys.path</code> from <code>.htaccess</code>, you can use the <a href="http://www.modpython.org/live/current/doc-html/dir-other-pp.html" rel="nofollow">PythonPath</a> directive in <code>.htaccess</code>.</p>
| 3
|
2009-04-18T22:22:26Z
|
[
"python",
"django",
"apache",
".htaccess",
"mod-python"
] |
| 764,769
|
<p>You're using mod_python wrong. It was never intended to serve python web applications. You should be using WSGI for this... or at least FastCGI.</p>
| 1
|
2009-04-19T03:15:58Z
|
[
"python",
"django",
"apache",
".htaccess",
"mod-python"
] |
A list of string replacements in Python
| 764,360
|
<p>Is there a far shorter way to write the following code?</p>
<pre><code>my_string = my_string.replace('A', '1')
my_string = my_string.replace('B', '2')
my_string = my_string.replace('C', '3')
my_string = my_string.replace('D', '4')
my_string = my_string.replace('E', '5')
</code></pre>
<p>Note that I don't need those exact values replaced; I'm simply looking for a way to turn 5+ lines into fewer than 5</p>
| 17
|
2009-04-18T22:28:06Z
| 764,368
|
<pre><code>replaceDict = {'A': '1', 'B': '2', 'C': '3', 'D': '4', 'E': '5'}
for key, replacement in replaceDict.items():
    my_string = my_string.replace(key, replacement)
</code></pre>
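<p>A self-contained Python 3 sketch of this loop, using the mapping from the question:</p>

```python
my_string = "ABCDE"
replace_dict = {'A': '1', 'B': '2', 'C': '3', 'D': '4', 'E': '5'}

# apply each single replacement in turn
for key, replacement in replace_dict.items():
    my_string = my_string.replace(key, replacement)

print(my_string)  # -> 12345
```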
| 6
|
2009-04-18T22:32:28Z
|
[
"python",
"string",
"replace"
] |
| 764,374
|
<p>Looks like a good opportunity to use a loop:</p>
<pre><code>mapping = { 'A':'1', 'B':'2', 'C':'3', 'D':'4', 'E':'5'}
for k, v in mapping.iteritems():
my_string = my_string.replace(k, v)
</code></pre>
<p>A faster approach if you don't mind the parentheses would be:</p>
<pre><code>mapping = [ ('A', '1'), ('B', '2'), ('C', '3'), ('D', '4'), ('E', '5') ]
for k, v in mapping:
my_string = my_string.replace(k, v)
</code></pre>
| 33
|
2009-04-18T22:36:06Z
|
[
"python",
"string",
"replace"
] |
A list of string replacements in Python
| 764,360
|
<p>Is there a far shorter way to write the following code?</p>
<pre><code>my_string = my_string.replace('A', '1')
my_string = my_string.replace('B', '2')
my_string = my_string.replace('C', '3')
my_string = my_string.replace('D', '4')
my_string = my_string.replace('E', '5')
</code></pre>
<p>Note that I don't need those exact values replaced; I'm simply looking for a way to turn 5+ lines into fewer than 5</p>
| 17
|
2009-04-18T22:28:06Z
| 764,385
|
<p>Also look into <a href="http://docs.python.org/library/stdtypes.html#str.translate" rel="nofollow"><code>str.translate()</code></a>. For Unicode strings it replaces characters according to a mapping you provide; for byte strings it takes a 256-character translation table telling it what to replace each character from chr(0) to chr(255) with.</p>
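<p>A quick demonstration in modern Python 3 syntax (an assumption about the reader's Python version; in Python 2, which this answer predates, the table-building function lived in the <code>string</code> module, as other answers here show):</p>

```python
# Build a character-to-character translation table and apply it.
# str.maketrans is a static method on str in Python 3.
table = str.maketrans("ABCDE", "12345")
print("ABCDE and ABBA".translate(table))  # prints: 12345 and 1221
```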
| 15
|
2009-04-18T22:43:25Z
|
[
"python",
"string",
"replace"
] |
A list of string replacements in Python
| 764,360
|
<p>Is there a far shorter way to write the following code?</p>
<pre><code>my_string = my_string.replace('A', '1')
my_string = my_string.replace('B', '2')
my_string = my_string.replace('C', '3')
my_string = my_string.replace('D', '4')
my_string = my_string.replace('E', '5')
</code></pre>
<p>Note that I don't need those exact values replaced; I'm simply looking for a way to turn 5+ lines into fewer than 5</p>
| 17
|
2009-04-18T22:28:06Z
| 764,591
|
<p>You can easily use string.maketrans() to create the mapping string to pass to str.translate():</p>
<pre><code>import string
trans = string.maketrans("ABCDE","12345")
my_string = my_string.translate(trans)
</code></pre>
| 29
|
2009-04-19T00:59:10Z
|
[
"python",
"string",
"replace"
] |
A list of string replacements in Python
| 764,360
|
<p>Is there a far shorter way to write the following code?</p>
<pre><code>my_string = my_string.replace('A', '1')
my_string = my_string.replace('B', '2')
my_string = my_string.replace('C', '3')
my_string = my_string.replace('D', '4')
my_string = my_string.replace('E', '5')
</code></pre>
<p>Note that I don't need those exact values replaced; I'm simply looking for a way to turn 5+ lines into fewer than 5</p>
| 17
|
2009-04-18T22:28:06Z
| 765,835
|
<p>If you want to get the wrong answer, slowly, then use string.replace in a loop. (Though it does work in this case of no overlap among the patterns and replacements.)</p>
<p>For the general case with possible overlaps or a long subject string, use re.sub:</p>
<pre><code>import re
def multisub(subs, subject):
"Simultaneously perform all substitutions on the subject string."
pattern = '|'.join('(%s)' % re.escape(p) for p, s in subs)
substs = [s for p, s in subs]
replace = lambda m: substs[m.lastindex - 1]
return re.sub(pattern, replace, subject)
>>> multisub([('hi', 'bye'), ('bye', 'hi')], 'hi and bye')
'bye and hi'
</code></pre>
<p>For the special case of 1-character patterns and 1- or 0-character replacements, use string.maketrans.</p>
| 9
|
2009-04-19T18:20:59Z
|
[
"python",
"string",
"replace"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,418
|
<p>Some uses cases I have seen before include making <a href="http://code.google.com/p/metapython/wiki/Tutorial" rel="nofollow">class factories</a> or <a href="http://blog.pythonisito.com/2009/03/announcing-metapython-macros-for-python.html" rel="nofollow">stripping logging</a> statements out of production code.</p>
| 3
|
2009-04-18T23:00:39Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,419
|
<p>There's a <a href="http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.html" rel="nofollow">mailing list posting</a> (<a href="http://web.archive.org/web/20080117093622/http%3A//lists.warhead.org.uk/pipermail/iwe/2005-July/000130.html" rel="nofollow">archive.org mirror</a>) which explains this rather well. The post is about Perl, but it applies to Python just as well.</p>
| 4
|
2009-04-18T23:01:42Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,425
|
<p>I believe that macros run counter to Python's culture. Macros in Lisp allow the <a href="http://en.wikipedia.org/wiki/Big%5Fball%5Fof%5Fmud">big ball of mud</a> approach; you get to redefine the language to become more suited to your problem domain. Conversely Pythonic code uses the most natural built in feature of Python to solve a problem, instead of solving it in a way that would be more natural in a different language. </p>
<p>Macros are inherently unpythonic. </p>
| 14
|
2009-04-18T23:05:46Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,459
|
<p>In lisp, macros are just another way to abstract ideas.</p>
<p>This is an example from an incomplete ray-tracer written in clojure:</p>
<pre><code>(defmacro per-pixel
"Macro.
  Executes body for every pixel. Binds i and j to the current pixel coord."
[i j & body]
`(dotimes [~i @width]
(dotimes [~j @height]
~@body)))
</code></pre>
<p>If you want to do something to every pixel with coordinates (i,j), say, draw a black pixel if i is even, you would write:</p>
<pre><code>(per-pixel i,j
(if (even? i)
(draw-black i,j)))
</code></pre>
<p>This is not possible to do without macros because @body can mean anything inside (per-pixel i j @body) </p>
<p>Something like this is possible in Python as well, using decorators.
You can't do everything you can do with Lisp macros, but decorators are still very powerful.</p>
<p>Check out this decorator tutorial:
<a href="http://www.artima.com/weblogs/viewpost.jsp?thread=240808" rel="nofollow">http://www.artima.com/weblogs/viewpost.jsp?thread=240808</a></p>
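<p>A rough Python analogue of the per-pixel idea using a decorator (a sketch; the names <code>per_pixel</code> and <code>collect</code> are illustrative, not from any real library):</p>

```python
def per_pixel(width, height):
    """Turn a function of (i, j) into one that runs over every pixel."""
    def decorator(fn):
        def run():
            results = []
            for i in range(width):
                for j in range(height):
                    results.append(fn(i, j))
            return results
        return run
    return decorator

@per_pixel(2, 2)
def collect(i, j):
    return (i, j)

print(collect())  # prints: [(0, 0), (0, 1), (1, 0), (1, 1)]
```

<p>Unlike the Lisp macro, the decorator works on a whole function rather than on an arbitrary body of code, which is the limitation alluded to above.</p>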
| 4
|
2009-04-18T23:22:39Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,466
|
<p>See also this question: <a href="http://stackoverflow.com/questions/454648/pythonic-macro-syntax">Pythonic macro syntax</a></p>
| 2
|
2009-04-18T23:24:12Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,906
|
<p>Some examples of lisp macros:</p>
<ul>
<li><a href="http://common-lisp.net/project/iterate">ITERATE</a> which is a funny and extensible loop facility</li>
<li><a href="http://www.pps.jussieu.fr/~jch/software/cl-yacc/">CL-YACC</a>/<a href="http://common-lisp.net/project/fucc">FUCC</a> that are parser generators that generate parsers at compile time</li>
<li><a href="http://www.weitz.de/cl-who/">CL-WHO</a> which allows specifying html documents with static and dynamic parts</li>
<li><a href="http://common-lisp.net/project/parenscript/">Parenscript</a> which is a javascript code generator</li>
<li>Various simple code-wrappers, e.g., error handlers (I have a with-gtk-error-message-handler that executes code and shows GtkMessageDialog if unhandled error occurs), executors (e.g., given a code, execute it in different thread; I have a within-main-thread macro that executes code in different threads; <a href="http://marijn.haverbeke.nl/pcall/">PCall</a> library uses macros to wrap code to be executed concurrently)</li>
<li>GUI builders with macros (e.g., specify widgets hierarchy and widgets' properties and have a macro generate code for creation of all widgets)</li>
<li>Code generators that use external resources during compilation time. E.g., a macro that processes C headers and generates FFI code or a macro that generates classes definitions based on database schema</li>
<li>Declarative FFI. E.g., specifying the foreign structures, functions, their argument types and having macros to generate corresponding lisp structures, functions with type mapping and marshaling code</li>
<li>Continuations-based web frameworks for Common Lisp use macros that transform the code into CPS (continuation passing style) form.</li>
</ul>
| 12
|
2009-04-19T05:45:58Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 764,990
|
<p>I don't think Python needs macros, because they are useful for 2 things:</p>
<ol>
<li><p>Creating a DSL or more eloquent syntax for something (Lisp LOOP macro is a nice example). In this case, Python philosophy decided against it deliberately. If there is some explicit notation you're missing, you can always ask for a PEP.</p></li>
<li><p>Making things faster by precomputing things at compile time. Python isn't oriented to speed, so you can always use a function instead.</p></li>
</ol>
<p>I am not saying macros are wrong, just that they don't fit Python philosophy. You can always do without them without much code duplication, because you have duck typing and operator overloading.</p>
<p>And as a side note, I would much rather see Lisp's restarts in Python than macros.</p>
| 2
|
2009-04-19T07:05:08Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 983,180
|
<p>Read "The Lambda Papers" to find out, generally, why one would take advantage of macros at all.</p>
<p>You should start with "AIM-353 Lambda: The Ultimate Imperative" and follow it with "AIM-443 Lambda: The Ultimate GOTO". Both may be found here:</p>
<p><a href="http://library.readscheme.org/page1.html" rel="nofollow">http://library.readscheme.org/page1.html</a></p>
| 1
|
2009-06-11T19:37:34Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 2,444,061
|
<p>Here's one real-world example I came across that would be trivial with macros or real metaprogramming support, but has to be done with CPython bytecode manipulation due to absence of both in Python:</p>
<p><a href="http://www.aminus.net/dejavu/chrome/common/doc/2.0a/html/intro.html#cpython">http://www.aminus.net/dejavu/chrome/common/doc/2.0a/html/intro.html#cpython</a></p>
<p>This is how the problem is solved in Common Lisp using a combination of regular macros, and read-macros to extend the syntax (it could have been done without the latter, but not the former):</p>
<p><a href="http://clsql.b9.com/manual/csql-find.html">http://clsql.b9.com/manual/csql-find.html</a></p>
<p>The same problem solved in Smalltalk using closures and metaprogramming (Smalltalk is one of the few single-dispatch OO languages that actually gets message passing right):</p>
<p><a href="http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg02096.html">http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg02096.html</a></p>
<p>Here I tried to implement the Smalltalk approach in Common Lisp, which is a good illustration of how metaprogramming is poorly supported in the latter:</p>
<p><a href="http://carcaddar.blogspot.com/2009/04/closure-oriented-metaprogramming-via.html">http://carcaddar.blogspot.com/2009/04/closure-oriented-metaprogramming-via.html</a></p>
| 6
|
2010-03-14T22:28:58Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 6,200,044
|
<p>Hi,
For my own use, I created a Python module (Espy) that allows macro definitions with arguments, loops, and conditional code generation:
you create a source.espy file, launch the appropriate function, and source.py is generated.</p>
<p>It allows syntaxes as following:</p>
<pre><code>macro repeat(arg1):
for i in range(%arg1%):
socket
print "stop"
...
repeat(5):
print "Hi everybody"
print "See you soon"
</code></pre>
<p>is equivalent to:</p>
<pre><code>...
for i in range(5):
print "Hi everybody"
print "See you soon"
print "stop"
</code></pre>
<p>Other syntax:</p>
<pre><code>macro doit(arg1):
for i in %arg1%:
socket suit(arg2):
socket
print %arg2%
socket check(arg3):
if %arg2%==%arg3%:
socket
...
#use
doit(range(10)):
suit(result):
result=i*i
check(result,25):
print "I knew that 5*5 == 25"
</code></pre>
<p>is equivalent to:</p>
<pre><code>for i in range(10):
result=i*i
print result
if result==25:
print "I knew that 5*5 == 25"
</code></pre>
<p>More, Espy has 2 functions: "macro for" and "macro if". An example:</p>
<pre><code>macro for v in [6,10,12,20,23]:
macro if 7<%v%<22:
True:
print "At %v%, I'm awake."
False:
print "At %v%, I'm sleeping."
</code></pre>
<p>is translated by Espy in:</p>
<pre><code>print "At 6, I'm sleeping."
print "At 10, I'm awake."
print "At 12, I'm awake."
print "At 20, I'm awake."
print "At 23, I'm sleeping."
</code></pre>
<p>Complete documentation and free download can be found here: <a href="http://elp.chronocv.fr" rel="nofollow">http://elp.chronocv.fr</a></p>
<p>I use this module in many cases. It permits more structured and shorter code. With it I generated 65000 lines of clear and efficient Python code from 1000 lines of Espy code for a new chess engine project (still in progress).</p>
<p>If Python included macros in a future release, it would become even more impressive.</p>
| 1
|
2011-06-01T10:57:08Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 8,869,336
|
<p>Possibly if you want the source code at runtime such as for debugging (say printf debugging an expression's value with the name of it so you don't have to write it twice).</p>
<p><a href="http://stackoverflow.com/questions/8573023/python-is-there-no-better-way-to-get-the-expression-in-a-debug-function">The only way I could think of to do it in python is to pass a string to eval.</a></p>
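<p>A sketch of that eval-based approach (the <code>debug</code> helper below is hypothetical, built on <code>eval</code> plus frame introspection, and only suitable for debugging, never for untrusted input):</p>

```python
import inspect

def debug(expr_string):
    """Print an expression together with its value, writing it only once."""
    # evaluate the expression in the caller's scope so its names resolve
    frame = inspect.currentframe().f_back
    value = eval(expr_string, frame.f_globals, frame.f_locals)
    print("%s = %r" % (expr_string, value))
    return value

answer = 42
debug("answer * 2")  # prints: answer * 2 = 84
```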
| 0
|
2012-01-15T11:37:54Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 10,083,737
|
<p>Well, I'd like instead of</p>
<pre><code>print >> sys.stderr, "abc"
</code></pre>
<p>to write</p>
<pre><code>err "abc"
</code></pre>
<p>in some scripts which have many debug printout statements.</p>
<p>I can do</p>
<pre><code>import sys
err = sys.stderr
</code></pre>
<p>and then </p>
<pre><code>print >> err, "abc"
</code></pre>
<p>which is shorter, but that still takes too many characters on the line.</p>
| 0
|
2012-04-10T05:51:02Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 14,507,882
|
<p>I want to use macros to embed SQL statements in Python code, e.g. <code>select * from table1</code>.</p>
| 0
|
2013-01-24T18:08:35Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 15,610,890
|
<p>I'd use it to wrap <code>yield</code> to enable me to build more powerful generator pipelines.</p>
| 0
|
2013-03-25T08:55:04Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 16,174,524
|
<p>Currently, the only way features can be added to Python the language is through a PEP (Python Enhancement Proposal). This can be slow, and doesn't help you in the cases when you want to add a feature to the language that is only useful for your use case.</p>
<p>For example, <a href="http://www.python.org/dev/peps/pep-0315/" rel="nofollow">there's a PEP to add a do-while loop</a>. This will probably be added to Python, but the PEP was created in 2003. I'd like to write <code>do-while</code> loops today, and I could do that if Python had macros.</p>
<p>Similarly, there was a <a href="http://www.python.org/dev/peps/pep-3136/" rel="nofollow">PEP to add labelled break and continue</a> but this was rejected. If labelled break statements would make my code clearer, I could implement them with a macro.</p>
<p>PEPs aside, I would also like an <code>unless</code> macro. Rather than writing:</p>
<pre><code>if not is_superman():
dodge_bullet()
</code></pre>
<p>I could write:</p>
<pre><code>unless is_superman():
dodge_bullet()
</code></pre>
<p>I'd like a <code>case</code> macro (often called <code>cond</code> in Lisp). Rather than writing:</p>
<pre><code>if x == FOO:
do_stuff_with_foos()
elif x == BAR:
do_stuff_with_bars()
elif x == BAZ:
do_stuff_with_bazs()
</code></pre>
<p>I could write:</p>
<pre><code>switch x:
case FOO:
do_stuff_with_foos()
case BAR:
do_stuff_with_bars()
case BAZ:
do_stuff_with_bazs()
</code></pre>
<p>These would be straightforward to implement as macros. More complex, useful macros would include:</p>
<ul>
<li>Ruby style string interpolation e.g. <code>"hello there {user}"</code> (probably best implemented as a reader macro)</li>
<li>Pattern matching</li>
</ul>
<p>Currently, these are only features in other languages. With macros, I could add them to Python. I could even write PEPs that included an example implementation. (Some PEPs already do this, but they are forced to modify the C source of the interpreter itself.)</p>
| 0
|
2013-04-23T16:27:33Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 16,493,714
|
<p>This is a somewhat late answer, but <a href="https://github.com/lihaoyi/macropy">MacroPy</a> is a new project of mine to bring macros to Python. We have a pretty substantial list of demos, all of which are use cases which require macros to implement, for example providing an extremely concise way of declaring classes:</p>
<pre><code>@case
class Point(x, y): pass
p = Point(1, 2)
print p.x # 1
print p # Point(1, 2)
</code></pre>
<p>MacroPy has been used to implement features such as:</p>
<ul>
<li>Case Classes, easy <a href="https://en.wikipedia.org/wiki/Algebraic_data_type">Algebraic Data Types</a> from Scala</li>
<li>Pattern Matching from the Functional Programming world</li>
<li>Tail-call Optimization</li>
<li>Quasiquotes, a quick way to manipulate fragments of a program</li>
<li>String Interpolation, a common feature in many languages, and Pyxl.</li>
<li>Tracing and Smart Asserts</li>
<li>PINQ to SQLAlchemy, a clone of LINQ to SQL from C#</li>
<li>Quick Lambdas from Scala and Groovy,</li>
<li>Parser Combinators, inspired by <a href="http://www.suryasuravarapu.com/2011/04/scala-parser-combinators-win.html">Scala's</a>.</li>
</ul>
<p>Check out the linked page to find out more; I think I can confidently say that the use cases we demonstrate far surpass anything anyone's suggested so far on this thread =D</p>
| 13
|
2013-05-11T04:19:22Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
Python Macros: Use Cases?
| 764,412
|
<p>If Python had a macro facility similar to Lisp/Scheme (something like <a href="https://code.google.com/p/metapython/" rel="nofollow">MetaPython</a>), how would you use it? </p>
<p>If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?</p>
| 20
|
2009-04-18T22:56:16Z
| 22,334,057
|
<p>Coming from a C-world, I'd really appreciate being able to do efficient logging of rich messages. Instead of writing</p>
<pre><code>if logger.isEnabledFor(logging.DEBUG):
    logger.debug('%s' % (an_expensive_call_that_gives_useful_info(),))
</code></pre>
<p>with macros, one could instead do something like</p>
<pre><code>DEBUG('%s' % (an_expensive_call_that_gives_useful_info(),))
</code></pre>
| 0
|
2014-03-11T18:53:32Z
|
[
"python",
"macros",
"lisp",
"scheme"
] |
How to hide console window in python?
| 764,631
|
<p>I am writing an IRC bot in Python. </p>
<p>I wish to make stand-alone binaries for Linux and Windows of it. And mainly I wish that when the bot initiates, the console window should hide and the user should not be able to see the window.</p>
<p>What can I do for that?</p>
| 33
|
2009-04-19T01:35:30Z
| 764,642
|
<p>On Linux, just run it; no problem. On Windows, you want to use the pythonw executable.</p>
<h3>Update</h3>
<p>Okay, if I understand the question in the comments, you're asking how to make the command window in which you've started the bot from the command line go away afterwards?</p>
<ul>
<li>UNIX (Linux)</li>
</ul>
<blockquote>
<p>$ nohup mypythonprog &</p>
</blockquote>
<ul>
<li>Windows</li>
</ul>
<blockquote>
<p>C:/> start pythonw mypythonprog</p>
</blockquote>
<p>I <em>think</em> that's right. In any case, now you can close the terminal.</p>
| 13
|
2009-04-19T01:44:08Z
|
[
"python",
"console",
"hide"
] |
How to hide console window in python?
| 764,631
|
<p>I am writing an IRC bot in Python. </p>
<p>I wish to make stand-alone binaries for Linux and Windows of it. And mainly I wish that when the bot initiates, the console window should hide and the user should not be able to see the window.</p>
<p>What can I do for that?</p>
| 33
|
2009-04-19T01:35:30Z
| 764,654
|
<p>Simply save it with a <code>.pyw</code> extension. This will prevent the console window from opening.</p>
<blockquote>
<p>On Windows systems, there is no notion of an âexecutable modeâ. The Python installer automatically associates .py files with python.exe so that a double-click on a Python file will run it as a script. <strong>The extension can also be .pyw, in that case, the console window that normally appears is suppressed.</strong></p>
</blockquote>
<p><a href="https://docs.python.org/2.6/tutorial/interpreter.html">Explanation at the bottom of section 2.2.2</a></p>
| 54
|
2009-04-19T01:51:12Z
|
[
"python",
"console",
"hide"
] |
How to hide console window in python?
| 764,631
|
<p>I am writing an IRC bot in Python. </p>
<p>I wish to make stand-alone binaries for Linux and Windows of it. And mainly I wish that when the bot initiates, the console window should hide and the user should not be able to see the window.</p>
<p>What can I do for that?</p>
| 33
|
2009-04-19T01:35:30Z
| 15,148,338
|
<p>Save it with a <code>.pyw</code> extension and it will open with <code>pythonw.exe</code>. Double-click it, and Python will start without a console window.</p>
<p>For example, if you have the following script name:</p>
<pre><code>foo.py
</code></pre>
<p>Rename it to:</p>
<pre><code>foo.pyw
</code></pre>
| 7
|
2013-03-01T00:30:37Z
|
[
"python",
"console",
"hide"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
<h2>Why is :memory: in sqlite so slow?</h2>
<p>I've been trying to see if there are any performance improvements gained by using in-memory sqlite vs. disk based sqlite. Basically I'd like to trade startup time and memory to get extremely rapid queries which do <em>not</em> hit disk during the course of the application. </p>
<p>However, the following benchmark gives me only a factor of 1.5X in improved speed. Here, I'm generating 1M rows of random data and loading it into both a disk and memory based version of the same table. I then run random queries on both dbs, returning sets of size approx 300k. I expected the memory based version to be considerably faster, but as mentioned I'm only getting 1.5X speedups. </p>
<p>I experimented with several other sizes of dbs and query sets; the advantage of :memory: <em>does</em> seem to go up as the number of rows in the db increases. I'm not sure why the advantage is so small, though I had a few hypotheses: </p>
<ul>
<li>the table used isn't big enough (in rows) to make :memory: a huge winner</li>
<li>more joins/tables would make the :memory: advantage more apparent</li>
<li>there is some kind of caching going on at the connection or OS level such that the previous results are accessible somehow, corrupting the benchmark</li>
<li>there is some kind of hidden disk access going on that I'm not seeing (I haven't tried lsof yet, but I did turn off the PRAGMAs for journaling)</li>
</ul>
<p>Am I doing something wrong here? Any thoughts on why :memory: isn't producing nearly instant lookups? Here's the benchmark: </p>
<pre><code>==> sqlite_memory_vs_disk_benchmark.py <==
#!/usr/bin/env python
"""Attempt to see whether :memory: offers significant performance benefits.
"""
import os
import time
import sqlite3
import numpy as np
def load_mat(conn,mat):
c = conn.cursor()
#Try to avoid hitting disk, trading safety for speed.
#http://stackoverflow.com/questions/304393
c.execute('PRAGMA temp_store=MEMORY;')
c.execute('PRAGMA journal_mode=MEMORY;')
# Make a demo table
c.execute('create table if not exists demo (id1 int, id2 int, val real);')
c.execute('create index id1_index on demo (id1);')
c.execute('create index id2_index on demo (id2);')
for row in mat:
c.execute('insert into demo values(?,?,?);', (row[0],row[1],row[2]))
conn.commit()
def querytime(conn,query):
start = time.time()
foo = conn.execute(query).fetchall()
diff = time.time() - start
return diff
#1) Build some fake data with 3 columns: int, int, float
nn = 1000000 #numrows
cmax = 700 #num uniques in 1st col
gmax = 5000 #num uniques in 2nd col
mat = np.zeros((nn,3),dtype='object')
mat[:,0] = np.random.randint(0,cmax,nn)
mat[:,1] = np.random.randint(0,gmax,nn)
mat[:,2] = np.random.uniform(0,1,nn)
#2) Load it into both dbs & build indices
try: os.unlink('foo.sqlite')
except OSError: pass
conn_mem = sqlite3.connect(":memory:")
conn_disk = sqlite3.connect('foo.sqlite')
load_mat(conn_mem,mat)
load_mat(conn_disk,mat)
del mat
#3) Execute a series of random queries and see how long it takes each of these
numqs = 10
numqrows = 300000 #max number of ids of each kind
results = np.zeros((numqs,3))
for qq in range(numqs):
qsize = np.random.randint(1,numqrows,1)
id1a = np.sort(np.random.permutation(np.arange(cmax))[0:qsize]) #ensure uniqueness of ids queried
id2a = np.sort(np.random.permutation(np.arange(gmax))[0:qsize])
id1s = ','.join([str(xx) for xx in id1a])
id2s = ','.join([str(xx) for xx in id2a])
query = 'select * from demo where id1 in (%s) AND id2 in (%s);' % (id1s,id2s)
results[qq,0] = round(querytime(conn_disk,query),4)
results[qq,1] = round(querytime(conn_mem,query),4)
results[qq,2] = int(qsize)
#4) Now look at the results
print " disk | memory | qsize"
print "-----------------------"
for row in results:
print "%.4f | %.4f | %d" % (row[0],row[1],row[2])
</code></pre>
<p>Here's the results. Note that disk takes about 1.5X as long as memory for a fairly wide range of query sizes. </p>
<pre><code>[ramanujan:~]$python -OO sqlite_memory_vs_disk_clean.py
disk | memory | qsize
-----------------------
9.0332 | 6.8100 | 12630
9.0905 | 6.6953 | 5894
9.0078 | 6.8384 | 17798
9.1179 | 6.7673 | 60850
9.0629 | 6.8355 | 94854
8.9688 | 6.8093 | 17940
9.0785 | 6.6993 | 58003
9.0309 | 6.8257 | 85663
9.1423 | 6.7411 | 66047
9.1814 | 6.9794 | 11345
</code></pre>
<p>Shouldn't RAM be almost instant relative to disk? What's going wrong here? </p>
<h2>Edit</h2>
<p>Some good suggestions here. </p>
<p>I guess the main take-home point for me is that <strong>there's probably no way to make :memory: <em>absolutely faster</em>, but there is a way to make disk access <em>relatively slower</em>.</strong></p>
<p>In other words, the benchmark is adequately measuring the realistic performance of memory, but not the realistic performance of disk (e.g. because the cache_size pragma is too big or because I'm not doing writes). I'll mess around with those parameters and post my findings when I get a chance. </p>
<p>That said, if there is anyone who thinks I can squeeze some more speed out of the in-memory db (other than by jacking up the cache_size and default_cache_size, which I will do), I'm all ears...</p>
| 37
|
2009-04-19T02:33:43Z
| 764,718
|
<p>Could it be that sqlite3 isn't actually flushing your data from cache to disk, which might explain why the numbers are similar?</p>
<p>It could also be that your OS is paging due to low available memory.</p>
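One way to probe the first suggestion is simply to check how much the database file grows after a commit; a rough sketch (the file name and row count here are arbitrary, not from the original post):

```python
import os
import sqlite3

# Rough probe: if the on-disk file grows well past a single header page
# after commit, the rows really were written out rather than held only
# in some cache layer.
path = "probe.sqlite"
if os.path.exists(path):
    os.unlink(path)
conn = sqlite3.connect(path)
conn.execute("create table t (x int)")
conn.executemany("insert into t values (?)", [(i,) for i in range(10000)])
conn.commit()
conn.close()
size = os.path.getsize(path)
print("db file size: %d bytes" % size)
```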
| 1
|
2009-04-19T02:40:29Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
<h2>Why is :memory: in sqlite so slow?</h2>
<p>I've been trying to see if there are any performance improvements gained by using in-memory sqlite vs. disk based sqlite. Basically I'd like to trade startup time and memory to get extremely rapid queries which do <em>not</em> hit disk during the course of the application. </p>
<p>However, the following benchmark gives me only a factor of 1.5X in improved speed. Here, I'm generating 1M rows of random data and loading it into both a disk and memory based version of the same table. I then run random queries on both dbs, returning sets of size approx 300k. I expected the memory based version to be considerably faster, but as mentioned I'm only getting 1.5X speedups. </p>
<p>I experimented with several other sizes of dbs and query sets; the advantage of :memory: <em>does</em> seem to go up as the number of rows in the db increases. I'm not sure why the advantage is so small, though I had a few hypotheses: </p>
<ul>
<li>the table used isn't big enough (in rows) to make :memory: a huge winner</li>
<li>more joins/tables would make the :memory: advantage more apparent</li>
<li>there is some kind of caching going on at the connection or OS level such that the previous results are accessible somehow, corrupting the benchmark</li>
<li>there is some kind of hidden disk access going on that I'm not seeing (I haven't tried lsof yet, but I did turn off the PRAGMAs for journaling)</li>
</ul>
<p>Am I doing something wrong here? Any thoughts on why :memory: isn't producing nearly instant lookups? Here's the benchmark: </p>
<pre><code>==> sqlite_memory_vs_disk_benchmark.py <==
#!/usr/bin/env python
"""Attempt to see whether :memory: offers significant performance benefits.
"""
import os
import time
import sqlite3
import numpy as np
def load_mat(conn,mat):
c = conn.cursor()
#Try to avoid hitting disk, trading safety for speed.
#http://stackoverflow.com/questions/304393
c.execute('PRAGMA temp_store=MEMORY;')
c.execute('PRAGMA journal_mode=MEMORY;')
# Make a demo table
c.execute('create table if not exists demo (id1 int, id2 int, val real);')
c.execute('create index id1_index on demo (id1);')
c.execute('create index id2_index on demo (id2);')
for row in mat:
c.execute('insert into demo values(?,?,?);', (row[0],row[1],row[2]))
conn.commit()
def querytime(conn,query):
start = time.time()
foo = conn.execute(query).fetchall()
diff = time.time() - start
return diff
#1) Build some fake data with 3 columns: int, int, float
nn = 1000000 #numrows
cmax = 700 #num uniques in 1st col
gmax = 5000 #num uniques in 2nd col
mat = np.zeros((nn,3),dtype='object')
mat[:,0] = np.random.randint(0,cmax,nn)
mat[:,1] = np.random.randint(0,gmax,nn)
mat[:,2] = np.random.uniform(0,1,nn)
#2) Load it into both dbs & build indices
try: os.unlink('foo.sqlite')
except OSError: pass
conn_mem = sqlite3.connect(":memory:")
conn_disk = sqlite3.connect('foo.sqlite')
load_mat(conn_mem,mat)
load_mat(conn_disk,mat)
del mat
#3) Execute a series of random queries and see how long it takes each of these
numqs = 10
numqrows = 300000 #max number of ids of each kind
results = np.zeros((numqs,3))
for qq in range(numqs):
qsize = np.random.randint(1,numqrows,1)
id1a = np.sort(np.random.permutation(np.arange(cmax))[0:qsize]) #ensure uniqueness of ids queried
id2a = np.sort(np.random.permutation(np.arange(gmax))[0:qsize])
id1s = ','.join([str(xx) for xx in id1a])
id2s = ','.join([str(xx) for xx in id2a])
query = 'select * from demo where id1 in (%s) AND id2 in (%s);' % (id1s,id2s)
results[qq,0] = round(querytime(conn_disk,query),4)
results[qq,1] = round(querytime(conn_mem,query),4)
results[qq,2] = int(qsize)
#4) Now look at the results
print " disk | memory | qsize"
print "-----------------------"
for row in results:
print "%.4f | %.4f | %d" % (row[0],row[1],row[2])
</code></pre>
<p>Here's the results. Note that disk takes about 1.5X as long as memory for a fairly wide range of query sizes. </p>
<pre><code>[ramanujan:~]$python -OO sqlite_memory_vs_disk_clean.py
disk | memory | qsize
-----------------------
9.0332 | 6.8100 | 12630
9.0905 | 6.6953 | 5894
9.0078 | 6.8384 | 17798
9.1179 | 6.7673 | 60850
9.0629 | 6.8355 | 94854
8.9688 | 6.8093 | 17940
9.0785 | 6.6993 | 58003
9.0309 | 6.8257 | 85663
9.1423 | 6.7411 | 66047
9.1814 | 6.9794 | 11345
</code></pre>
<p>Shouldn't RAM be almost instant relative to disk? What's going wrong here? </p>
<h2>Edit</h2>
<p>Some good suggestions here. </p>
<p>I guess the main take-home point for me is that <strong>there's probably no way to make :memory: <em>absolutely faster</em>, but there is a way to make disk access <em>relatively slower</em>.</strong></p>
<p>In other words, the benchmark is adequately measuring the realistic performance of memory, but not the realistic performance of disk (e.g. because the cache_size pragma is too big or because I'm not doing writes). I'll mess around with those parameters and post my findings when I get a chance. </p>
<p>That said, if there is anyone who thinks I can squeeze some more speed out of the in-memory db (other than by jacking up the cache_size and default_cache_size, which I will do), I'm all ears...</p>
| 37
|
2009-04-19T02:33:43Z
| 764,719
|
<p>I note you are focussing on queries that involve relatively large data sets to return. I wonder what effect you would see with smaller sets of data? To return a single row many times might require the disk to seek a lot - the random access time for memory might be much faster.</p>
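A sketch of such a point-lookup benchmark, run against a small stand-in table mirroring the post's schema (the table sizes here are assumptions, not the original 1M-row benchmark):

```python
import sqlite3
import time

def point_lookups(conn, n=1000):
    """Time n single-row lookups, where per-row access cost dominates
    instead of the cost of materializing a ~300k-row result set."""
    start = time.time()
    for i in range(n):
        conn.execute("select val from demo where id1 = ?", (i % 700,)).fetchone()
    return time.time() - start

# Small stand-in table mirroring the post's schema.
conn = sqlite3.connect(":memory:")
conn.execute("create table demo (id1 int, id2 int, val real)")
conn.execute("create index id1_index on demo (id1)")
conn.executemany("insert into demo values (?,?,?)",
                 [(i % 700, i % 5000, i / 1e6) for i in range(10000)])
conn.commit()
elapsed = point_lookups(conn)
print("%.4f s for 1000 point lookups" % elapsed)
```

Running the same function against a disk-backed connection (with a cold cache) would expose the seek cost that large scans amortize away.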
| 1
|
2009-04-19T02:40:59Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
| 37
|
2009-04-19T02:33:43Z
| 764,743
|
<p>It has to do with the fact that SQLite has a page cache. According to the <a href="http://www.sqlite.org/compile.html">documentation</a>, the default page cache is 2000 1 KB pages, or about 2 MB. Since this is about 75% to 90% of your data, it isn't surprising that the two numbers are very similar. My guess is that, in addition to the SQLite page cache, the rest of the data is still in the OS disk cache. If you got SQLite to flush the page cache (and the disk cache) you would see some really significant differences.</p>
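One way to test this hypothesis is to shrink SQLite's own page cache on the disk connection so it can no longer hold most of the table; a minimal sketch (the file name is hypothetical, and the OS disk cache is outside SQLite's control, so it would still need a separate flush at the OS level):

```python
import sqlite3

# Shrink the per-connection page cache from the default (2000 pages in
# the versions discussed here) down to 100 pages, then read the setting
# back to confirm it took effect.
conn = sqlite3.connect("cache_probe.sqlite")
conn.execute("PRAGMA cache_size = 100")
val = conn.execute("PRAGMA cache_size").fetchone()[0]
print("cache_size is now %d pages" % val)
```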
| 32
|
2009-04-19T02:57:03Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
| 37
|
2009-04-19T02:33:43Z
| 765,136
|
<p>You're only doing SELECTs, so you're really measuring the memory cache. Try interleaving SELECTs with UPDATEs.</p>
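The suggestion above can be sketched as follows; the table here is a small stand-in for the post's schema (an assumption), and the per-round commit is what would force the disk-backed version to pay journal and sync costs that pure SELECT loops never see:

```python
import sqlite3
import time

def mixed_workload(conn, rounds=50):
    """Interleave reads with committed writes; each commit makes a
    disk-backed connection touch the journal and sync to storage."""
    start = time.time()
    for i in range(rounds):
        conn.execute("select count(*) from demo where id1 = ?", (i % 10,)).fetchone()
        conn.execute("update demo set val = val + 1 where id1 = ?", (i % 10,))
        conn.commit()
    return time.time() - start

# Minimal table so the sketch runs stand-alone: 100 rows, ids 0..9.
conn = sqlite3.connect(":memory:")
conn.execute("create table demo (id1 int, val real)")
conn.executemany("insert into demo values (?, ?)",
                 [(i % 10, 0.0) for i in range(100)])
conn.commit()
elapsed = mixed_workload(conn)
print("%.4f s for mixed workload" % elapsed)
```

Comparing this timing between the :memory: and file connections should show a much larger gap than the SELECT-only benchmark.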
| 7
|
2009-04-19T09:38:08Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
| 37
|
2009-04-19T02:33:43Z
| 790,522
|
<p>A memory database in SQLite is really just a page cache that never touches the disk, so you should forget about using a memory db in SQLite for performance tweaks.</p>
<p>It's possible to turn off the journal, turn off sync mode, and set a large page cache, and then you will get almost the same performance on most operations, but durability will be lost.</p>
<p>From your code it's absolutely clear that you <strong>SHOULD REUSE the command</strong> and ONLY BIND parameters, because that took <strong>more than 90%</strong> of your test performance away.</p>
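In the Python sqlite3 module, the idiomatic way to follow this advice for the insert loop is <code>executemany</code>: one INSERT statement bound against many rows inside a single transaction, instead of re-issuing <code>execute()</code> once per row. A sketch (the schema follows the post; the row count here is a small stand-in):

```python
import sqlite3

# One INSERT statement, many parameter bindings, one transaction --
# this replaces the per-row c.execute(...) loop in the benchmark.
conn = sqlite3.connect(":memory:")
conn.execute("create table demo (id1 int, id2 int, val real)")
rows = [(i % 700, i % 5000, i / 1e6) for i in range(10000)]
conn.executemany("insert into demo values (?,?,?)", rows)
conn.commit()
count = conn.execute("select count(*) from demo").fetchone()[0]
print("inserted %d rows" % count)
```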
| 6
|
2009-04-26T09:07:26Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
| 37
|
2009-04-19T02:33:43Z
| 1,056,148
|
<p>My question to you is, What are you trying to benchmark?</p>
<p>As already mentioned, SQLite's :memory: DB is just the same as the disk-based one, i.e. paged, and the only difference is that the pages are never written to disk. So the only difference between the two are the disk writes :memory: doesn't need to do (it also doesn't need to do any disk reads either, when a disk page had to be offloaded from the cache).</p>
<p>But read/writes from the cache may represent only a fraction of the query processing time, depending on the query. Your query has a where clause with two large sets of ids the selected rows must be members of, which is expensive.</p>
<p>As Cary Millsap demonstrates in his blog on optimizing Oracle (here's a representative post: <a href="http://carymillsap.blogspot.com/2009/06/profiling-with-my-boy.html">http://carymillsap.blogspot.com/2009/06/profiling-with-my-boy.html</a>), you need to understand which parts of the query processing take time. Assuming the set membership tests represented 90% of the query time, and the disk-based IO 10%, going to :memory: saves only those 10%. That's an extreme example unlikely to be representative, but I hope that it illustrates that your particular query is slanting the results. Use a simpler query, and the IO parts of the query processing will increase, and thus the benefit of :memory:.</p>
<p>As a final note, we've experimented with SQLite's virtual tables, where you are in charge of the actual storage, and by using C++ containers, which are typed unlike SQLite's way of storing cell values, we could see a significant improvement in processing time over :memory:, but that's getting off topic a bit ;) --DD</p>
<p>PS: I don't have enough karma to comment on the most popular post of this thread, so I'm commenting here :) to say that recent SQLite versions don't use 1KB pages by default on Windows: <a href="http://www.sqlite.org/changes.html#version_3_6_12">http://www.sqlite.org/changes.html#version_3_6_12</a></p>
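<p>To see what your own build uses, the page size can be read back with a PRAGMA (a minimal sketch; the value can only be changed before the first table is created, or via VACUUM):</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# page_size is reported in bytes and is always a power of two
page_size = conn.execute('PRAGMA page_size').fetchone()[0]
print(page_size)
conn.close()
```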
| 17
|
2009-06-29T00:45:03Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
<h2>Why is :memory: in sqlite so slow?</h2>
<p>I've been trying to see if there are any performance improvements gained by using in-memory sqlite vs. disk based sqlite. Basically I'd like to trade startup time and memory to get extremely rapid queries which do <em>not</em> hit disk during the course of the application. </p>
<p>However, the following benchmark gives me only a factor of 1.5X in improved speed. Here, I'm generating 1M rows of random data and loading it into both a disk and memory based version of the same table. I then run random queries on both dbs, returning sets of size approx 300k. I expected the memory based version to be considerably faster, but as mentioned I'm only getting 1.5X speedups. </p>
<p>I experimented with several other sizes of dbs and query sets; the advantage of :memory: <em>does</em> seem to go up as the number of rows in the db increases. I'm not sure why the advantage is so small, though I had a few hypotheses: </p>
<ul>
<li>the table used isn't big enough (in rows) to make :memory: a huge winner</li>
<li>more joins/tables would make the :memory: advantage more apparent</li>
<li>there is some kind of caching going on at the connection or OS level such that the previous results are accessible somehow, corrupting the benchmark</li>
<li>there is some kind of hidden disk access going on that I'm not seeing (I haven't tried lsof yet, but I did turn off the PRAGMAs for journaling)</li>
</ul>
<p>Am I doing something wrong here? Any thoughts on why :memory: isn't producing nearly instant lookups? Here's the benchmark: </p>
<pre><code>==> sqlite_memory_vs_disk_benchmark.py <==
#!/usr/bin/env python
"""Attempt to see whether :memory: offers significant performance benefits.
"""
import os
import time
import sqlite3
import numpy as np
def load_mat(conn,mat):
c = conn.cursor()
#Try to avoid hitting disk, trading safety for speed.
#http://stackoverflow.com/questions/304393
c.execute('PRAGMA temp_store=MEMORY;')
c.execute('PRAGMA journal_mode=MEMORY;')
# Make a demo table
c.execute('create table if not exists demo (id1 int, id2 int, val real);')
c.execute('create index id1_index on demo (id1);')
c.execute('create index id2_index on demo (id2);')
for row in mat:
c.execute('insert into demo values(?,?,?);', (row[0],row[1],row[2]))
conn.commit()
def querytime(conn,query):
start = time.time()
foo = conn.execute(query).fetchall()
diff = time.time() - start
return diff
#1) Build some fake data with 3 columns: int, int, float
nn = 1000000 #numrows
cmax = 700 #num uniques in 1st col
gmax = 5000 #num uniques in 2nd col
mat = np.zeros((nn,3),dtype='object')
mat[:,0] = np.random.randint(0,cmax,nn)
mat[:,1] = np.random.randint(0,gmax,nn)
mat[:,2] = np.random.uniform(0,1,nn)
#2) Load it into both dbs & build indices
try: os.unlink('foo.sqlite')
except OSError: pass
conn_mem = sqlite3.connect(":memory:")
conn_disk = sqlite3.connect('foo.sqlite')
load_mat(conn_mem,mat)
load_mat(conn_disk,mat)
del mat
#3) Execute a series of random queries and see how long it takes each of these
numqs = 10
numqrows = 300000 #max number of ids of each kind
results = np.zeros((numqs,3))
for qq in range(numqs):
qsize = np.random.randint(1,numqrows,1)
id1a = np.sort(np.random.permutation(np.arange(cmax))[0:qsize]) #ensure uniqueness of ids queried
id2a = np.sort(np.random.permutation(np.arange(gmax))[0:qsize])
id1s = ','.join([str(xx) for xx in id1a])
id2s = ','.join([str(xx) for xx in id2a])
query = 'select * from demo where id1 in (%s) AND id2 in (%s);' % (id1s,id2s)
results[qq,0] = round(querytime(conn_disk,query),4)
results[qq,1] = round(querytime(conn_mem,query),4)
results[qq,2] = int(qsize)
#4) Now look at the results
print " disk | memory | qsize"
print "-----------------------"
for row in results:
print "%.4f | %.4f | %d" % (row[0],row[1],row[2])
</code></pre>
<p>Here's the results. Note that disk takes about 1.5X as long as memory for a fairly wide range of query sizes. </p>
<pre><code>[ramanujan:~]$python -OO sqlite_memory_vs_disk_clean.py
disk | memory | qsize
-----------------------
9.0332 | 6.8100 | 12630
9.0905 | 6.6953 | 5894
9.0078 | 6.8384 | 17798
9.1179 | 6.7673 | 60850
9.0629 | 6.8355 | 94854
8.9688 | 6.8093 | 17940
9.0785 | 6.6993 | 58003
9.0309 | 6.8257 | 85663
9.1423 | 6.7411 | 66047
9.1814 | 6.9794 | 11345
</code></pre>
<p>Shouldn't RAM be almost instant relative to disk? What's going wrong here? </p>
<h2>Edit</h2>
<p>Some good suggestions here. </p>
<p>I guess the main take-home point for me is that <strong>there's probably no way to make :memory: <em>absolutely faster</em>, but there is a way to make disk access <em>relatively slower</em></strong>.</p>
<p>In other words, the benchmark is adequately measuring the realistic performance of memory, but not the realistic performance of disk (e.g. because the cache_size pragma is too big or because I'm not doing writes). I'll mess around with those parameters and post my findings when I get a chance. </p>
<p>That said, if there is anyone who thinks I can squeeze some more speed out of the in-memory db (other than by jacking up the cache_size and default_cache_size, which I will do), I'm all ears...</p>
| 37
|
2009-04-19T02:33:43Z
| 7,851,376
|
<p>numpy arrays are slower than dict and tuple and other object sequences until you are dealing with 5 million or more objects in a sequence. You can significantly improve the speed of processing massive amounts of data by iterating over it and using generators to avoid creating and recreating temporary large objects.</p>
<p>numpy has become your limiting factor as it is designed to deliver linear performance. It is not a star with small or even large amounts of data. But numpy's performance does not turn into a curve as the data set grows. It remains a straight line.</p>
<p>Besides, SQLite is just a really fast database, faster even than most server databases. That raises the question of why anyone would use NoSQL databases when a lightweight, super-fast, fault-tolerant database that uses SQL has been around and put to the test in everything from browsers to mobile phones for years.</p>
| 0
|
2011-10-21T15:04:28Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
SQLite Performance Benchmark -- why is :memory: so slow...only 1.5X as fast as disk?
| 764,710
|
<h2>Why is :memory: in sqlite so slow?</h2>
<p>I've been trying to see if there are any performance improvements gained by using in-memory sqlite vs. disk based sqlite. Basically I'd like to trade startup time and memory to get extremely rapid queries which do <em>not</em> hit disk during the course of the application. </p>
<p>However, the following benchmark gives me only a factor of 1.5X in improved speed. Here, I'm generating 1M rows of random data and loading it into both a disk and memory based version of the same table. I then run random queries on both dbs, returning sets of size approx 300k. I expected the memory based version to be considerably faster, but as mentioned I'm only getting 1.5X speedups. </p>
<p>I experimented with several other sizes of dbs and query sets; the advantage of :memory: <em>does</em> seem to go up as the number of rows in the db increases. I'm not sure why the advantage is so small, though I had a few hypotheses: </p>
<ul>
<li>the table used isn't big enough (in rows) to make :memory: a huge winner</li>
<li>more joins/tables would make the :memory: advantage more apparent</li>
<li>there is some kind of caching going on at the connection or OS level such that the previous results are accessible somehow, corrupting the benchmark</li>
<li>there is some kind of hidden disk access going on that I'm not seeing (I haven't tried lsof yet, but I did turn off the PRAGMAs for journaling)</li>
</ul>
<p>Am I doing something wrong here? Any thoughts on why :memory: isn't producing nearly instant lookups? Here's the benchmark: </p>
<pre><code>==> sqlite_memory_vs_disk_benchmark.py <==
#!/usr/bin/env python
"""Attempt to see whether :memory: offers significant performance benefits.
"""
import os
import time
import sqlite3
import numpy as np
def load_mat(conn,mat):
c = conn.cursor()
#Try to avoid hitting disk, trading safety for speed.
#http://stackoverflow.com/questions/304393
c.execute('PRAGMA temp_store=MEMORY;')
c.execute('PRAGMA journal_mode=MEMORY;')
# Make a demo table
c.execute('create table if not exists demo (id1 int, id2 int, val real);')
c.execute('create index id1_index on demo (id1);')
c.execute('create index id2_index on demo (id2);')
for row in mat:
c.execute('insert into demo values(?,?,?);', (row[0],row[1],row[2]))
conn.commit()
def querytime(conn,query):
start = time.time()
foo = conn.execute(query).fetchall()
diff = time.time() - start
return diff
#1) Build some fake data with 3 columns: int, int, float
nn = 1000000 #numrows
cmax = 700 #num uniques in 1st col
gmax = 5000 #num uniques in 2nd col
mat = np.zeros((nn,3),dtype='object')
mat[:,0] = np.random.randint(0,cmax,nn)
mat[:,1] = np.random.randint(0,gmax,nn)
mat[:,2] = np.random.uniform(0,1,nn)
#2) Load it into both dbs & build indices
try: os.unlink('foo.sqlite')
except OSError: pass
conn_mem = sqlite3.connect(":memory:")
conn_disk = sqlite3.connect('foo.sqlite')
load_mat(conn_mem,mat)
load_mat(conn_disk,mat)
del mat
#3) Execute a series of random queries and see how long it takes each of these
numqs = 10
numqrows = 300000 #max number of ids of each kind
results = np.zeros((numqs,3))
for qq in range(numqs):
qsize = np.random.randint(1,numqrows,1)
id1a = np.sort(np.random.permutation(np.arange(cmax))[0:qsize]) #ensure uniqueness of ids queried
id2a = np.sort(np.random.permutation(np.arange(gmax))[0:qsize])
id1s = ','.join([str(xx) for xx in id1a])
id2s = ','.join([str(xx) for xx in id2a])
query = 'select * from demo where id1 in (%s) AND id2 in (%s);' % (id1s,id2s)
results[qq,0] = round(querytime(conn_disk,query),4)
results[qq,1] = round(querytime(conn_mem,query),4)
results[qq,2] = int(qsize)
#4) Now look at the results
print " disk | memory | qsize"
print "-----------------------"
for row in results:
print "%.4f | %.4f | %d" % (row[0],row[1],row[2])
</code></pre>
<p>Here's the results. Note that disk takes about 1.5X as long as memory for a fairly wide range of query sizes. </p>
<pre><code>[ramanujan:~]$python -OO sqlite_memory_vs_disk_clean.py
disk | memory | qsize
-----------------------
9.0332 | 6.8100 | 12630
9.0905 | 6.6953 | 5894
9.0078 | 6.8384 | 17798
9.1179 | 6.7673 | 60850
9.0629 | 6.8355 | 94854
8.9688 | 6.8093 | 17940
9.0785 | 6.6993 | 58003
9.0309 | 6.8257 | 85663
9.1423 | 6.7411 | 66047
9.1814 | 6.9794 | 11345
</code></pre>
<p>Shouldn't RAM be almost instant relative to disk? What's going wrong here? </p>
<h2>Edit</h2>
<p>Some good suggestions here. </p>
<p>I guess the main take-home point for me is that <strong>there's probably no way to make :memory: <em>absolutely faster</em>, but there is a way to make disk access <em>relatively slower</em></strong>.</p>
<p>In other words, the benchmark is adequately measuring the realistic performance of memory, but not the realistic performance of disk (e.g. because the cache_size pragma is too big or because I'm not doing writes). I'll mess around with those parameters and post my findings when I get a chance. </p>
<p>That said, if there is anyone who thinks I can squeeze some more speed out of the in-memory db (other than by jacking up the cache_size and default_cache_size, which I will do), I'm all ears...</p>
| 37
|
2009-04-19T02:33:43Z
| 10,404,068
|
<p>Thank you for the code. I have tested it on 2 x XEON 2690 with 192GB RAM with 4 SCSI 15k hard drives in RAID 5, and the results are:</p>
<pre><code> disk | memory | qsize
-----------------------
6.3590 | 2.3280 | 15713
6.6250 | 2.3690 | 8914
6.0040 | 2.3260 | 225168
6.0210 | 2.4080 | 132388
6.1400 | 2.4050 | 264038
</code></pre>
<p>The speed-increase in memory is significant.</p>
| 3
|
2012-05-01T20:43:36Z
|
[
"python",
"database",
"sqlite",
"memory",
"benchmarking"
] |
how to send mail in python ssmtp vs smtplib
| 764,778
|
<p>I need to send email on Debian Linux. How do I send it? I run my server on a 256 MB Linux box and I heard postfix and sendmail are overkill.</p>
<p>Recently I came across ssmtp, which seems to be an executable that needs to be run as a process and called from Python using the os module.</p>
<p>Alternatively, Python already provides smtplib, which is working fine for me.</p>
<p>What is the advantage of using ssmtp over python's smtplib?</p>
| 3
|
2009-04-19T03:24:33Z
| 765,044
|
<p>In a Python program, there is no advantage.</p>
<p>The only purpose of ssmtp is to wrap the SMTP protocol in the sendmail API. That is, it provides a program <code>/usr/sbin/sendmail</code> that accepts the same options, arguments, and inputs as the full-blown sendmail (though most of the options do nothing); but behind the scenes, instead of processing the email itself, it sends the message to an SMTP server. This is for systems that need to have a <code>sendmail</code> program present, perhaps because they don't understand SMTP - for example, I think older versions of PHP had this requirement, and even in recent versions it might still be easier to configure PHP to use the so-called sendmail interface (i.e. the program <code>sendmail</code>) than to use SMTP directly. (I haven't used PHP in a little while, I'm not sure about the current status) </p>
<p>However, in Python the situation is reversed: you have a builtin library that makes it easy to use SMTP directly, whereas using <code>sendmail</code> requires you to invoke the <code>subprocess</code> module which is somewhat clunky and also very dependent on things that are not part of Python. So basically there is no reason not to use <code>smtplib</code>.</p>
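<p>For completeness, a minimal <code>smtplib</code> sketch (the addresses are placeholders, and the send step assumes an MTA or smarthost is reachable on <code>localhost</code>):</p>

```python
import smtplib
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    # assemble a simple plain-text message
    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = recipient
    return msg

def send_message(msg, host='localhost'):
    # hand the message to an SMTP server; assumes one is listening on `host`
    server = smtplib.SMTP(host)
    try:
        server.sendmail(msg['From'], [msg['To']], msg.as_string())
    finally:
        server.quit()

msg = build_message('me@example.com', 'you@example.com', 'hi', 'hello world')
```

<p>Building the message is separated from sending so the formatting can be tested without a network; <code>send_message(msg)</code> then performs the actual delivery.</p>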
| 5
|
2009-04-19T08:03:55Z
|
[
"python",
"smtplib",
"ssmtp"
] |
how to send mail in python ssmtp vs smtplib
| 764,778
|
<p>I need to send email on Debian Linux. How do I send it? I run my server on a 256 MB Linux box and I heard postfix and sendmail are overkill.</p>
<p>Recently I came across ssmtp, which seems to be an executable that needs to be run as a process and called from Python using the os module.</p>
<p>Alternatively, Python already provides smtplib, which is working fine for me.</p>
<p>What is the advantage of using ssmtp over python's smtplib?</p>
| 3
|
2009-04-19T03:24:33Z
| 766,413
|
<p>Additionally, <em>postfix</em> is very easy to install in "satellite" mode, where all it does is queue and deliver email for you. Way easier than implementing your own email queue. Most decent package management systems will let you configure it this way.</p>
| 2
|
2009-04-19T23:42:01Z
|
[
"python",
"smtplib",
"ssmtp"
] |
how to send mail in python ssmtp vs smtplib
| 764,778
|
<p>I need to send email on Debian Linux. How do I send it? I run my server on a 256 MB Linux box and I heard postfix and sendmail are overkill.</p>
<p>Recently I came across ssmtp, which seems to be an executable that needs to be run as a process and called from Python using the os module.</p>
<p>Alternatively, Python already provides smtplib, which is working fine for me.</p>
<p>What is the advantage of using ssmtp over python's smtplib?</p>
| 3
|
2009-04-19T03:24:33Z
| 768,031
|
<p>There are other lightweight SMTP senders, such as <a href="http://msmtp.sourceforge.net/" rel="nofollow">msmtp</a>, the one I prefer.</p>
<p>But Postfix is fine for a 256 Mb machine. The good thing about a full MTA like Postfix is that it keeps the message and retries if the destination server is down. With smtplib and the server on a remote machine, your program now depends on the network...</p>
| 1
|
2009-04-20T12:22:44Z
|
[
"python",
"smtplib",
"ssmtp"
] |
Drawing a Dragons curve in Python
| 765,048
|
<p>I am trying to work out how to draw the dragon curve with Python's turtle, using an L-system (Lindenmayer system). I know the code is something like: </p>
<p>the Dragon curve; initial state = "F", replacement rule: replace "F" with "F+F-F", number of replacements = 8, length = 5, angle = 60</p>
<p>But I have no idea how to put that into code. </p>
| -1
|
2009-04-19T08:06:50Z
| 765,072
|
<p>First hit on Google for "dragons curve python": </p>
<p><a href="http://www.pynokio.org/dragon.py.htm" rel="nofollow">http://www.pynokio.org/dragon.py.htm</a></p>
<p>You can probably modify that to work with your plotting program of choice. I'd try matplotlib. </p>
| 3
|
2009-04-19T08:30:21Z
|
[
"python",
"fractals"
] |
Drawing a Dragons curve in Python
| 765,048
|
<p>I am trying to work out how to draw the dragon curve with Python's turtle, using an L-system (Lindenmayer system). I know the code is something like: </p>
<p>the Dragon curve; initial state = "F", replacement rule: replace "F" with "F+F-F", number of replacements = 8, length = 5, angle = 60</p>
<p>But I have no idea how to put that into code. </p>
| -1
|
2009-04-19T08:06:50Z
| 766,310
|
<p>Well, presumably, you could start by defining:</p>
<pre><code>def replace(s):
return s.replace('F', 'F+F-F')
</code></pre>
<p>Then you can generate your sequence as:</p>
<pre><code>code = 'F'
for i in range(8):
code = replace(code)
</code></pre>
<p>I'm not familiar with <code>turtle</code> so I can't help you there.</p>
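<p>A quick sanity check on that rewriting step, independent of any drawing: each pass turns every F into three Fs, so the command count should grow as a power of three.</p>

```python
def replace(s):
    return s.replace('F', 'F+F-F')

code = 'F'
for i in range(8):
    code = replace(code)

# every rewrite triples the number of F (draw) commands
assert code.count('F') == 3 ** 8
print(len(code))
```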
| 0
|
2009-04-19T22:56:20Z
|
[
"python",
"fractals"
] |
Drawing a Dragons curve in Python
| 765,048
|
<p>I am trying to work out how to draw the dragon curve with Python's turtle, using an L-system (Lindenmayer system). I know the code is something like: </p>
<p>the Dragon curve; initial state = "F", replacement rule: replace "F" with "F+F-F", number of replacements = 8, length = 5, angle = 60</p>
<p>But I have no idea how to put that into code. </p>
| -1
|
2009-04-19T08:06:50Z
| 1,059,019
|
<p>Draw the dragon curve using <code>turtle</code> module (suggested by <a href="http://stackoverflow.com/questions/765048/drawing-a-dragons-curve-in-python/766310#766310">@John Fouhy</a>):</p>
<pre><code>#!/usr/bin/env python
import turtle
from functools import partial
nreplacements = 8
angle = 60
step = 5
# generate command
cmd = 'f'
for _ in range(nreplacements):
cmd = cmd.replace('f', 'f+f-f')
# draw
t = turtle.Turtle()
i2c = {'f': partial(t.forward, step),
'+': partial(t.left, angle),
'-': partial(t.right, angle),
}
for c in cmd: i2c[c]()
</code></pre>
| 3
|
2009-06-29T15:46:15Z
|
[
"python",
"fractals"
] |
Standard Django way for letting users edit rich content
| 765,066
|
<p>I have a Django website in which I want site administrators to be able to edit rich content.
Suppose we're talking about an organizational info page, which might include some pictures and some links, where the page is not as structured as a news page (which updates with news pieces every few days), but still needs to be easily editable by site admins who do not necessarily want to mess with HTML (or rather, I do not want them to).</p>
<p>So where do I put this dynamic content? In the database? In which format? How do I make it accessible in the default Django admin?</p>
| 2
|
2009-04-19T08:24:27Z
| 765,070
|
<p><strong>Use one of the existing rich-text editors</strong></p>
<p>The lightest weight would be to use something at the js level like DojoEditor: </p>
<p><a href="http://code.djangoproject.com/wiki/AddDojoEditor" rel="nofollow">http://code.djangoproject.com/wiki/AddDojoEditor</a></p>
<p>See also this thread: </p>
<p><a href="http://stackoverflow.com/questions/329963/replace-textarea-with-rich-text-editor-in-django-admin">http://stackoverflow.com/questions/329963/replace-textarea-with-rich-text-editor-in-django-admin</a></p>
| 4
|
2009-04-19T08:28:12Z
|
[
"python",
"django",
"django-models",
"django-admin"
] |
Standard Django way for letting users edit rich content
| 765,066
|
<p>I have a Django website in which I want site administrators to be able to edit rich content.
Suppose we're talking about an organizational info page, which might include some pictures and some links, where the page is not as structured as a news page (which updates with news pieces every few days), but still needs to be easily editable by site admins who do not necessarily want to mess with HTML (or rather, I do not want them to).</p>
<p>So where do I put this dynamic content? In the database? In which format? How do I make it accessible in the default Django admin?</p>
| 2
|
2009-04-19T08:24:27Z
| 766,119
|
<p>For what you're describing I'd use <a href="http://docs.djangoproject.com/en/dev/ref/contrib/flatpages/" rel="nofollow">flatpages</a>, which is a django app that lets users create and edit pages in the admin panel.</p>
<p>As for formatting, I'd use <a href="http://tinymce.moxiecode.com/" rel="nofollow">TinyMCE</a>. Integrating it is pretty easy, <a href="http://code.djangoproject.com/wiki/AddWYSIWYGEditor" rel="nofollow">here is a walkthrough</a> (do steps 1 and 2 and jump to the bottom, "Using TinyMCE with flatpages (newforms)")</p>
| 1
|
2009-04-19T21:22:23Z
|
[
"python",
"django",
"django-models",
"django-admin"
] |
How's Python Multiprocessing Implemented on Windows?
| 765,129
|
<p>Given the absence of a Windows fork() call, how's the multiprocessing package in Python 2.6 implemented under Windows? On top of Win32 threads or some sort of fake fork or just compatibility on top of the existing multithreading?</p>
| 12
|
2009-04-19T09:32:46Z
| 765,207
|
<p>It's done using a subprocess call to sys.executable (i.e. start a new Python process) followed by serializing all of the globals, and sending those over the pipe. A poor man's cloning of the current process. This is the cause of the <a href="http://docs.python.org/library/multiprocessing.html#windows">extra restrictions</a> found when using multiprocessing on the Windows platform.</p>
<p>You may also be interested in viewing <a href="http://pycon.blip.tv/file/1947354/">Jesse Noller's talk from PyCon</a> about multiprocessing where he discusses its use.</p>
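<p>One practical consequence of this spawn-and-reimport approach: the child process re-imports your module, so on Windows any code that creates processes must sit behind an <code>if __name__ == '__main__'</code> guard. A minimal sketch:</p>

```python
import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    # without this guard, the re-import on Windows would recursively
    # try to start new worker processes
    pool = multiprocessing.Pool(2)
    print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
    pool.close()
    pool.join()
```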
| 29
|
2009-04-19T10:41:44Z
|
[
"python",
"multithreading",
"fork"
] |
Proxy Check in python
| 765,305
|
<p>I have written a script in Python that uses cookies and POST/GET. I also included proxy support in my script. However, when one enters a dead proxy, the script crashes. Is there any way to check if a proxy is dead/alive before running the rest of my script?</p>
<p>Furthermore, I noticed that some proxies don't handle cookies/POST headers properly. Is there any way to fix this?</p>
| 8
|
2009-04-19T12:01:51Z
| 765,436
|
<p>The simplest way is to simply catch the IOError exception from urllib:</p>
<pre><code>try:
urllib.urlopen(
"http://example.com",
proxies={'http':'http://example.com:8080'}
)
except IOError:
print "Connection error! (Check proxy)"
else:
print "All was fine"
</code></pre>
<p>Also, from <a href="http://love-python.blogspot.com/2008/07/check-status-proxy-address.html">this blog post - "check status proxy address"</a> (with some slight improvements):</p>
<pre><code>import urllib2
import socket
def is_bad_proxy(pip):
try:
proxy_handler = urllib2.ProxyHandler({'http': pip})
opener = urllib2.build_opener(proxy_handler)
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib2.install_opener(opener)
req=urllib2.Request('http://www.example.com') # change the URL to test here
sock=urllib2.urlopen(req)
except urllib2.HTTPError, e:
print 'Error code: ', e.code
return e.code
except Exception, detail:
print "ERROR:", detail
return True
return False
def main():
socket.setdefaulttimeout(120)
# two sample proxy IPs
proxyList = ['125.76.226.9:80', '213.55.87.162:6588']
for currentProxy in proxyList:
if is_bad_proxy(currentProxy):
print "Bad Proxy %s" % (currentProxy)
else:
print "%s is working" % (currentProxy)
if __name__ == '__main__':
main()
</code></pre>
<p>Remember this could double the time the script takes if the proxy is down (as you will have to wait for two connection timeouts). Unless you specifically have to know the proxy is at fault, handling the IOError is far cleaner, simpler and quicker.</p>
| 11
|
2009-04-19T13:41:40Z
|
[
"python",
"http",
"proxy"
] |
Proxy Check in python
| 765,305
|
<p>I have written a script in Python that uses cookies and POST/GET. I also included proxy support in my script. However, when one enters a dead proxy, the script crashes. Is there any way to check if a proxy is dead/alive before running the rest of my script?</p>
<p>Furthermore, I noticed that some proxies don't handle cookies/POST headers properly. Is there any way to fix this?</p>
| 8
|
2009-04-19T12:01:51Z
| 7,135,302
|
<p>I think the better approach is, as dbr said, handling the exception.</p>
<p>Another solution that could be better in some cases, is to use an external <strong><a href="http://proxyipchecker.com/" rel="nofollow">online proxy checker</a></strong> tool to check if a proxy server is alive and then continue using your script without any modification.</p>
| 1
|
2011-08-20T23:10:29Z
|
[
"python",
"http",
"proxy"
] |
Proxy Check in python
| 765,305
|
<p>I have written a script in Python that uses cookies and POST/GET. I also included proxy support in my script. However, when one enters a dead proxy, the script crashes. Is there any way to check if a proxy is dead/alive before running the rest of my script?</p>
<p>Furthermore, I noticed that some proxies don't handle cookies/POST headers properly. Is there any way to fix this?</p>
| 8
|
2009-04-19T12:01:51Z
| 16,950,568
|
<p>There is one nice package, <a href="http://docs.grablib.org/" rel="nofollow">Grab</a>.
So, if it's OK for you, you can write something like this (a simple valid-proxy checker/generator):</p>
<pre><code>from grab import Grab, GrabError
def get_valid_proxy(proxy_list): #format of items e.g. '128.2.198.188:3124'
g = Grab()
for proxy in proxy_list:
g.setup(proxy=proxy, proxy_type='http', connect_timeout=5, timeout=5)
try:
g.go('google.com')
except GrabError:
#logging.info("Test error")
pass
else:
yield proxy
</code></pre>
| 0
|
2013-06-05T21:55:17Z
|
[
"python",
"http",
"proxy"
] |
Using PIL to make all white pixels transparent?
| 765,736
|
<p>I'm trying to make all white pixels transparent using the Python Imaging Library (PIL). (I'm a C hacker trying to learn Python, so be gentle.)
I've got the conversion working (at least the pixel values look correct) but I can't figure out how to convert the list into a buffer to re-create the image. Here's the code</p>
<pre><code>img = Image.open('img.png')
imga = img.convert("RGBA")
datas = imga.getdata()
newData = list()
for item in datas:
if item[0] == 255 and item[1] == 255 and item[2] == 255:
newData.append([255, 255, 255, 0])
else:
newData.append(item)
imgb = Image.frombuffer("RGBA", imga.size, newData, "raw", "RGBA", 0, 1)
imgb.save("img2.png", "PNG")
</code></pre>
| 28
|
2009-04-19T17:13:27Z
| 765,774
|
<p>You need to make the following changes:</p>
<ul>
<li>append a tuple <code>(255, 255, 255, 0)</code> and not a list <code>[255, 255, 255, 0]</code></li>
<li>use <code>img.putdata(newData)</code></li>
</ul>
<p>This is the working code:</p>
<pre><code>from PIL import Image
img = Image.open('img.png')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
if item[0] == 255 and item[1] == 255 and item[2] == 255:
newData.append((255, 255, 255, 0))
else:
newData.append(item)
img.putdata(newData)
img.save("img2.png", "PNG")
</code></pre>
| 34
|
2009-04-19T17:38:26Z
|
[
"python",
"image",
"python-imaging-library"
] |
Using PIL to make all white pixels transparent?
| 765,736
|
<p>I'm trying to make all white pixels transparent using the Python Imaging Library (PIL). (I'm a C hacker trying to learn Python, so be gentle.)
I've got the conversion working (at least the pixel values look correct) but I can't figure out how to convert the list into a buffer to re-create the image. Here's the code</p>
<pre><code>img = Image.open('img.png')
imga = img.convert("RGBA")
datas = imga.getdata()
newData = list()
for item in datas:
if item[0] == 255 and item[1] == 255 and item[2] == 255:
newData.append([255, 255, 255, 0])
else:
newData.append(item)
imgb = Image.frombuffer("RGBA", imga.size, newData, "raw", "RGBA", 0, 1)
imgb.save("img2.png", "PNG")
</code></pre>
| 28
|
2009-04-19T17:13:27Z
| 765,829
|
<p>You can also use pixel access mode to modify the image in-place:</p>
<pre><code>from PIL import Image
img = Image.open('img.png')
img = img.convert("RGBA")
pixdata = img.load()
for y in xrange(img.size[1]):
for x in xrange(img.size[0]):
if pixdata[x, y] == (255, 255, 255, 255):
pixdata[x, y] = (255, 255, 255, 0)
img.save("img2.png", "PNG")
</code></pre>
| 24
|
2009-04-19T18:17:14Z
|
[
"python",
"image",
"python-imaging-library"
] |
Using PIL to make all white pixels transparent?
| 765,736
|
<p>I'm trying to make all white pixels transparent using the Python Imaging Library (PIL). (I'm a C hacker trying to learn Python, so be gentle.)
I've got the conversion working (at least the pixel values look correct) but I can't figure out how to convert the list into a buffer to re-create the image. Here's the code</p>
<pre><code>img = Image.open('img.png')
imga = img.convert("RGBA")
datas = imga.getdata()
newData = list()
for item in datas:
if item[0] == 255 and item[1] == 255 and item[2] == 255:
newData.append([255, 255, 255, 0])
else:
newData.append(item)
imgb = Image.frombuffer("RGBA", imga.size, newData, "raw", "RGBA", 0, 1)
imgb.save("img2.png", "PNG")
</code></pre>
| 28
|
2009-04-19T17:13:27Z
| 4,531,395
|
<pre><code>import Image
import ImageMath

def distance2(a, b):
    # squared Euclidean distance between two RGB triples
    return (a[0] - b[0]) * (a[0] - b[0]) + (a[1] - b[1]) * (a[1] - b[1]) + (a[2] - b[2]) * (a[2] - b[2])

def makeColorTransparent(image, color, thresh2=0):
    # zero the alpha of every pixel whose squared distance from `color`
    # is at most thresh2; all other pixels keep their original alpha
    image = image.convert("RGBA")
    red, green, blue, alpha = image.split()
    image.putalpha(ImageMath.eval("""convert(((((t - d(c, (r, g, b))) >> 31) + 1) ^ 1) * a, 'L')""",
        t=thresh2, d=distance2, c=color, r=red, g=green, b=blue, a=alpha))
    return image

if __name__ == '__main__':
    import sys
    makeColorTransparent(Image.open(sys.argv[1]), (255, 255, 255)).save(sys.argv[2])
</code></pre>
| 3
|
2010-12-25T19:34:34Z
|
[
"python",
"image",
"python-imaging-library"
] |
Testing time sensitive applications in Python
| 765,773
|
<p>I've written an auction system in Django. I want to write unit tests but the application is time sensitive (e.g. the amount advertisers are charged is a function of how long their ad has been active on a website). What's a good approach for testing this type of application?</p>
<p>Here's one possible solution: a <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=251755" rel="nofollow">DateFactory class</a> which provides some methods to generate a predictable date in testing and the realtime value in production. Do you have any thoughts on this approach, or have you tried something else in practice?</p>
| 3
|
2009-04-19T17:36:55Z
| 766,112
|
<p>In the link you provided, the author somewhat rejects the idea of adding additional parameters to your methods for the sake of unit testing, but in some cases I think you can justify this as just an extension of your business logic. In my opinion, it's a form of inversion of control that can make your model more flexible and possibly even more expressive. For example:</p>
<pre><code>def is_expired(self, check_date=None):
_check_date = check_date or datetime.utcnow()
return self.create_date + timedelta(days=15) < _check_date
</code></pre>
<p>Essentially this allows my unit test to supply its own date/time for the purpose of validating my logic. </p>
<p>The argument in the referenced blog seems to be that this mucks up the API. However, I have encountered situations in which <em>production</em> use cases called for supplanting current date/time with an alternate value. In other words, the inversion of control approach eventually became a necessary part of my application. </p>
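<p>As a concrete illustration (the <code>Ad</code> class here is a hypothetical stand-in for the real model), a test can pin "now" by passing <code>check_date</code>:</p>

```python
from datetime import datetime, timedelta

class Ad(object):
    def __init__(self, create_date):
        self.create_date = create_date

    def is_expired(self, check_date=None):
        # production callers omit check_date and get the real clock
        _check_date = check_date or datetime.utcnow()
        return self.create_date + timedelta(days=15) < _check_date

ad = Ad(create_date=datetime(2009, 1, 1))
assert ad.is_expired(check_date=datetime(2009, 2, 1))       # 31 days later
assert not ad.is_expired(check_date=datetime(2009, 1, 10))  # 9 days later
```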
| 3
|
2009-04-19T21:19:09Z
|
[
"python",
"django",
"unit-testing",
"datetime"
] |
Testing time sensitive applications in Python
| 765,773
|
<p>I've written an auction system in Django. I want to write unit tests but the application is time sensitive (e.g. the amount advertisers are charged is a function of how long their ad has been active on a website). What's a good approach for testing this type of application?</p>
<p>Here's one possible solution: a <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=251755" rel="nofollow">DateFactory class</a> which provides some methods to generate a predictable date in testing and the realtime value in production. Do you have any thoughts on this approach, or have you tried something else in practice?</p>
| 3
|
2009-04-19T17:36:55Z
| 942,062
|
<p>In general I try to make the production code take date objects as input (where the semantics allows). In many testing situations a DateFactory as you describe is what people do.</p>
<p>In Python you can also get away with replacing the time functions directly, e.g. <code>time.time</code> (note that attributes of the built-in <code>datetime.datetime</code> type itself cannot be reassigned, so for <code>datetime</code> you patch the reference held by the module under test). You need to be careful here to restore them in the teardown part of the test. This is particularly useful when you are unable to (or it is awkward to) change the class you are testing.</p>
<p>To do this you have</p>
<pre><code>def setUp(self):
    # `mymodule` is a placeholder for the module under test that holds
    # the reference being patched (e.g. its imported `now` function).
    self.old_now = mymodule.now
    mymodule.now = self._fakenow
    ...

def tearDown(self):
    mymodule.now = self.old_now
</code></pre>
<p>I do the substitutions last if there is the slightest possibility that the setup method will fail.</p>
<p>For many cases a custom DateFactory is safer to use, particularly if you have to worry about people forgetting the tearDown portion.</p>
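<p>On recent Python versions the same save/patch/restore dance is handled for you by <code>unittest.mock.patch</code> (not available when this answer was written); a sketch:</p>

```python
import datetime
from unittest import mock

FIXED = datetime.datetime(2009, 4, 19, 12, 0, 0)

# Replace the datetime class inside the datetime module for the duration
# of the with-block; the patch is undone automatically, even on error.
with mock.patch("datetime.datetime", wraps=datetime.datetime) as dt:
    dt.now.return_value = FIXED
    assert datetime.datetime.now() == FIXED

# Outside the block the real clock is back.
assert datetime.datetime.now().year != 2009
```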
| 1
|
2009-06-02T21:46:51Z
|
[
"python",
"django",
"unit-testing",
"datetime"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 765,804
|
<p>How exact do you need it to be? <code>td.days / 365.25</code> will get you pretty close, if you're worried about leap years.</p>
| 0
|
2009-04-19T18:04:03Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 765,812
|
<p>First off, at the most detailed level, the problem can't be solved exactly. Years vary in length, and there isn't a clear "right choice" for year length. </p>
<p>That said, get the difference in whatever units are "natural" (probably seconds) and divide by the ratio between that and years. E.g.</p>
<pre><code>delta_in_days / (365.25)
delta_in_seconds / (365.25*24*60*60)
</code></pre>
<p>...or whatever. Stay away from months, since they are even less well defined than years.</p>
| 7
|
2009-04-19T18:08:08Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 765,862
|
<p>If you're trying to check if someone is 18 years of age, using <code>timedelta</code> will not work correctly on some edge cases because of leap years. For example, someone born on January 1, 2000, will turn 18 exactly 6575 days later on January 1, 2018 (5 leap years included), but someone born on January 1, 2001, will turn 18 exactly 6574 days later on January 1, 2019 (4 leap years included). Thus, if someone is exactly 6574 days old, you can't determine whether they are 17 or 18 without knowing a little more information about their birthdate.</p>
<p>The correct way to do this is to calculate the age directly from the dates, by subtracting the two years, and then subtracting one if the current month/day precedes the birth month/day.</p>
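<p>A sketch of that calculation (the function name <code>age</code> is chosen here for illustration):</p>

```python
from datetime import date

def age(birthdate, on_date=None):
    if on_date is None:
        on_date = date.today()
    years = on_date.year - birthdate.year
    # Knock one off if this year's birthday hasn't happened yet.
    if (on_date.month, on_date.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

# The edge case from the answer: both people are exactly 6574 days old,
# yet one is 18 and the other is still 17.
assert (date(2019, 1, 1) - date(2001, 1, 1)).days == 6574
assert (date(2017, 12, 31) - date(2000, 1, 1)).days == 6574
assert age(date(2001, 1, 1), date(2019, 1, 1)) == 18
assert age(date(2000, 1, 1), date(2017, 12, 31)) == 17
```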
| 39
|
2009-04-19T18:36:03Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 765,990
|
<p>You need more than a <code>timedelta</code> to tell how many years have passed; you also need to know the beginning (or ending) date. (It's a leap year thing.) </p>
<p>Your best bet is to use the <code>dateutil.relativedelta</code> <a href="http://labix.org/python-dateutil">object</a>, but that's a 3rd party module. If you want to know the <code>datetime</code> that was <code>n</code> years from some date (defaulting to right now), you can do the following::</p>
<pre><code>from dateutil.relativedelta import relativedelta
def yearsago(years, from_date=None):
if from_date is None:
from_date = datetime.now()
return from_date - relativedelta(years=years)
</code></pre>
<p>If you'd rather stick with the standard library, the answer is a little more complex::</p>
<pre><code>from datetime import datetime
def yearsago(years, from_date=None):
if from_date is None:
from_date = datetime.now()
try:
return from_date.replace(year=from_date.year - years)
except ValueError:
# Must be 2/29!
assert from_date.month == 2 and from_date.day == 29 # can be removed
return from_date.replace(month=2, day=28,
year=from_date.year-years)
</code></pre>
<p>If it's 2/29, and 18 years ago there was no 2/29, this function will return 2/28. If you'd rather return 3/1, just change the last <code>return</code> statement to read::</p>
<pre><code> return from_date.replace(month=3, day=1,
year=from_date.year-years)
</code></pre>
<p>Your question originally said you wanted to know how many years it's been since some date. Assuming you want an integer number of years, you can guess based on 365.25 days per year and then check using either of the <code>yearsago</code> functions defined above::</p>
<pre><code>def num_years(begin, end=None):
if end is None:
end = datetime.now()
num_years = int((end - begin).days / 365.25)
if begin > yearsago(num_years, end):
return num_years - 1
else:
return num_years
</code></pre>
| 85
|
2009-04-19T20:05:15Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,078,826
|
<p>Even though this thread is already dead, might I suggest a working solution for this very same problem I was facing. Here it is (date is a string in the format dd-mm-yyyy):</p>
<pre><code>import datetime

def validatedate(date):
parts = date.strip().split('-')
if len(parts) == 3 and False not in [x.isdigit() for x in parts]:
birth = datetime.date(int(parts[2]), int(parts[1]), int(parts[0]))
today = datetime.date.today()
b = (birth.year * 10000) + (birth.month * 100) + (birth.day)
t = (today.year * 10000) + (today.month * 100) + (today.day)
if (t - 18 * 10000) >= b:
return True
return False
</code></pre>
| 0
|
2009-07-03T10:51:42Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,361,482
|
<p>This function returns the difference in years between two dates (taken as strings in ISO format, but it can easily be modified to take in any format):</p>
<pre><code>import time
def years(earlydateiso, laterdateiso):
"""difference in years between two dates in ISO format"""
ed = time.strptime(earlydateiso, "%Y-%m-%d")
ld = time.strptime(laterdateiso, "%Y-%m-%d")
#switch dates if needed
if ld < ed:
ld, ed = ed, ld
    res = ld[0] - ed[0]
    if res > 0:
        if ld[1] < ed[1]:
            res -= 1
        elif ld[1] == ed[1]:
            if ld[2] < ed[2]:
                res -= 1
return res
</code></pre>
| 0
|
2009-09-01T09:29:40Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,588,693
|
<p>Well, the question seems rather easy.
You need to check the number of 'full' years, and only if it is equal to 18 do you need to bother with months and days. The edge case is <code>endDate.year - startDate.year == 18</code>, and it splits into two cases: <code>startDate.month != endDate.month</code> and <code>startDate.month == endDate.month</code>, when you just have to check the days:</p>
<pre><code>def isOfAge(birthDate, age=18):
    endDate = date.today()
    years = endDate.year - birthDate.year
    if years == age:
        # <= so the person counts as of age on the birthday itself
        return (birthDate.month < endDate.month or
                (birthDate.month == endDate.month and birthDate.day <= endDate.day))
    return years > age
</code></pre>
<p>It's still more than one-liner-lambda, but it's still pretty short, and seems quick in execution.</p>
| -2
|
2009-10-19T13:33:56Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,817,519
|
<pre><code>def age(dob):
import datetime
today = datetime.date.today()
if today.month < dob.month or \
(today.month == dob.month and today.day < dob.day):
return today.year - dob.year - 1
else:
return today.year - dob.year
>>> import datetime
>>> datetime.date.today()
datetime.date(2009, 12, 1)
>>> age(datetime.date(2008, 11, 30))
1
>>> age(datetime.date(2008, 12, 1))
1
>>> age(datetime.date(2008, 12, 2))
0
</code></pre>
| 1
|
2009-11-30T01:51:16Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,824,065
|
<p>I'll suggest <a href="http://www.ferg.org/pyfdate/index.html" rel="nofollow">Pyfdate</a> </p>
<blockquote>
<p><strong>What is pyfdate?</strong></p>
<p>Given Python's goal to be a powerful and easy-to-use scripting
language, its features for working
with dates and times are not as
user-friendly as they should be. The
purpose of pyfdate is to remedy that
situation by providing features for
working with dates and times that are
as powerful and easy-to-use as the
rest of Python.</p>
</blockquote>
<p>the <a href="http://www.ferg.org/pyfdate/tutorial.html" rel="nofollow">tutorial</a></p>
| 0
|
2009-12-01T04:48:56Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,959,733
|
<p>Yet another 3rd-party lib not mentioned here, mxDateTime (predecessor of both Python's <code>datetime</code> and the 3rd-party <code>timeutil</code>), could be used for this task.</p>
<p>The aforementioned <code>yearsago</code> would be:</p>
<pre><code>from mx.DateTime import now, RelativeDateTime
def years_ago(years, from_date=None):
    if from_date is None:
from_date = now()
return from_date-RelativeDateTime(years=years)
</code></pre>
<p>First parameter is expected to be a <code>DateTime</code> instance.</p>
<p>To convert an ordinary <code>datetime</code> to a <code>DateTime</code> you could use this (for 1-second precision):</p>
<pre><code>def DT_from_dt_s(t):
return DT.DateTimeFromTicks(time.mktime(t.timetuple()))
</code></pre>
<p>or this for 1 microsecond precision:</p>
<pre><code>def DT_from_dt_u(t):
return DT.DateTime(t.year, t.month, t.day, t.hour,
t.minute, t.second + t.microsecond * 1e-6)
</code></pre>
<p>And yes, adding the dependency for this single task in question would definitely be overkill compared even with using timeutil (suggested by Rick Copeland).</p>
| 1
|
2009-12-24T20:52:12Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 1,959,782
|
<p>Get the number of days, then divide by 365.2425 (the mean Gregorian year) for years. Divide by 30.436875 (the mean Gregorian month) for months.</p>
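<p>Spelled out as a short sketch of that arithmetic:</p>

```python
from datetime import date

# Mean Gregorian year and month lengths from the answer above.
DAYS_PER_YEAR = 365.2425
DAYS_PER_MONTH = 30.436875

delta = date(2009, 4, 19) - date(2000, 1, 1)
years = delta.days / DAYS_PER_YEAR
months = delta.days / DAYS_PER_MONTH
```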
| 1
|
2009-12-24T21:13:42Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 3,260,826
|
<p>In the end what you have is a maths issue. If every 4 years we have an extra day, then divide the timedelta in days not by 365 but by 365*4 + 1; that gives you the number of 4-year periods. Then multiply that by 4:
timedelta_in_days * 4 / (365*4 + 1)</p>
| 1
|
2010-07-15T23:20:49Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 5,261,066
|
<p>This is the solution I worked out, I hope can help ;-)</p>
<pre><code>def menor_edad_legal(birthday):
""" returns true if aged<18 in days """
try:
today = time.localtime()
fa_divuit_anys=date(year=today.tm_year-18, month=today.tm_mon, day=today.tm_mday)
if birthday>fa_divuit_anys:
return True
else:
return False
except Exception, ex_edad:
logging.error('Error menor de edad: %s' % ex_edad)
return True
</code></pre>
| 1
|
2011-03-10T14:29:20Z
|
[
"python",
"datetime",
"timedelta"
] |
Python timedelta in years
| 765,797
|
<p>I need to check if some number of years have been since some date. Currently I've got <code>timedelta</code> from <code>datetime</code> module and I don't know how to convert it to years.</p>
| 79
|
2009-04-19T18:00:40Z
| 11,900,199
|
<p>Here's a updated DOB function, which calculates birthdays the same way humans do:</p>
<pre><code>import datetime
import locale
# Source: https://en.wikipedia.org/wiki/February_29
PRE = [
'US',
'TW',
]
POST = [
'GB',
'HK',
]
def get_country():
code, _ = locale.getlocale()
try:
return code.split('_')[1]
except IndexError:
raise Exception('Country cannot be ascertained from locale.')
def get_leap_birthday(year):
country = get_country()
if country in PRE:
return datetime.date(year, 2, 28)
elif country in POST:
return datetime.date(year, 3, 1)
else:
raise Exception('It is unknown whether your country treats leap year '
+ 'birthdays as being on the 28th of February or '
+ 'the 1st of March. Please consult your country\'s '
+ 'legal code for in order to ascertain an answer.')
def age(dob):
today = datetime.date.today()
years = today.year - dob.year
try:
birthday = datetime.date(today.year, dob.month, dob.day)
except ValueError as e:
if dob.month == 2 and dob.day == 29:
birthday = get_leap_birthday(today.year)
else:
raise e
if today < birthday:
years -= 1
return years
print(age(datetime.date(1988, 2, 29)))
</code></pre>
| 4
|
2012-08-10T10:53:50Z
|
[
"python",
"datetime",
"timedelta"
] |
Amazon S3 permissions
| 765,964
|
<p>Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?</p>
| 10
|
2009-04-19T19:51:07Z
| 766,004
|
<p>You might find some help in this <a href="http://stackoverflow.com/questions/629102/updating-permissions-on-amazon-s3-files-that-were-uploaded-via-jungledisk">SO question's</a> responses.</p>
| 1
|
2009-04-19T20:12:28Z
|
[
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
Amazon S3 permissions
| 765,964
|
<p>Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?</p>
| 10
|
2009-04-19T19:51:07Z
| 766,030
|
<p>You will have to build the whole access-control logic for S3 into your application.</p>
| 1
|
2009-04-19T20:24:55Z
|
[
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
Amazon S3 permissions
| 765,964
|
<p>Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?</p>
| 10
|
2009-04-19T19:51:07Z
| 768,050
|
<p>There are various ways to control access to the S3 objects:</p>
<ol>
<li><p>Use the query string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done.</p></li>
<li><p>Use the S3 ACLS - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for.</p></li>
<li><p>You proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box.</p></li>
<li><p>You can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and #3 could be minimal, but depends your particular situation.</p></li>
</ol>
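<p>For option 1, the signing happens entirely on your side, without contacting Amazon. A rough sketch of the legacy query-string signing scheme (AWS signature version 2, current when this was written) — in real code you would let an SDK such as boto do this, and modern deployments require signature version 4; the credential values below are placeholders:</p>

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presigned_url(bucket, key, access_key, secret_key, expires_in=300):
    # AWS signature v2: HMAC-SHA1 over "GET\n\n\n<expires>\n/<bucket>/<key>".
    expires = int(time.time()) + expires_in
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return ("https://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key, expires, signature))
```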
| 14
|
2009-04-20T12:28:03Z
|
[
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
Amazon S3 permissions
| 765,964
|
<p>Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?</p>
| 10
|
2009-04-19T19:51:07Z
| 768,090
|
<ol>
<li>Have the user hit your server</li>
<li>Have the server set up a query-string authentication with a short expiration (minutes, hours?)</li>
<li>Have your server redirect to #2</li>
</ol>
| 8
|
2009-04-20T12:43:17Z
|
[
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
Amazon S3 permissions
| 765,964
|
<p>Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?</p>
| 10
|
2009-04-19T19:51:07Z
| 814,738
|
<p>I've been dealing with this, too. Don, who wrote the <a href="http://undesigned.org.za/2007/10/22/amazon-s3-php-class" rel="nofollow">S3 PHP</a> class I'm using, pointed out you can use dirs inside buckets. So you can put your file in a dir with a random string and then redirect to that: mybucket.s3.amazonaws.com/wef49kfe4j409jf4f4f9jdfd/myfile.zip. While not at all secure, you can control access to it by changing permissions or creating and deleting it (keep the original securely in a different bucket) as necessary.</p>
| 0
|
2009-05-02T12:00:20Z
|
[
"python",
"django",
"amazon-web-services",
"amazon-s3"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,159
|
<p>It's called Slice Notation in Python and you can read a bit more of how it works here:</p>
<p><a href="http://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation">http://stackoverflow.com/questions/509211/good-primer-for-python-slice-notation</a></p>
| 2
|
2009-04-19T21:42:52Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,160
|
<p>It's the <a href="https://docs.python.org/whatsnew/2.3.html#extended-slices" rel="nofollow">extended slice notation</a>:</p>
<pre><code>sequence[start:end:step]
</code></pre>
<p>In this case, step -1 means backwards, and omitting the start and end means you want the whole string.</p>
| 2
|
2009-04-19T21:43:04Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,161
|
<p>The "-1" part represents the "step" part of the slicingâin this case, it goes through the string 1 character at a time, but backwards (a negative step means start from the end of the string). If you specify the step to be 2, for instance, you would get every other character of the string, starting with the first one. If you specify a step of -2, then you'd get every other character of the string, starting with the last character and working backwards.</p>
<p>So, in a nutshell, if <code>a = '12345'</code>:</p>
<ul>
<li><code>a[::2]</code> becomes <code>135</code></li>
<li><code>a[::-1]</code> becomes <code>54321</code></li>
<li><code>a[::-2]</code> becomes <code>531</code></li>
</ul>
| 6
|
2009-04-19T21:43:30Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,162
|
<p>It's using extended slicing - a string is a sequence in Python, and shares some methods with other sequences (namely lists and tuples). There are three parts to slicing - start, stop and step. All of them have default values - start defaults to 0, stop defaults to len(sequence), and step defaults to 1. By specifying [::-1] you're saying "all the elements in sequence a, starting from the beginning, to the end, going backward one at a time".</p>
<p>This feature was introduced in Python 2.3 for the built-in sequence types, and you can read more in the <a href="http://www.python.org/doc/2.3.5/whatsnew/section-slices.html" rel="nofollow">What's New docs</a>.</p>
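<p>A quick illustration of those defaults:</p>

```python
a = "abcde"

# All three parts omitted: a full copy.
assert a[::] == "abcde"
# Step -1: the whole string, walked backwards.
assert a[::-1] == "edcba"
# Explicit start/stop with a negative step: indices 4, 3, 2.
assert a[4:1:-1] == "edc"
```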
| 2
|
2009-04-19T21:43:36Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,163
|
<p>Sure, the <code>[::]</code> is the <a href="https://docs.python.org/whatsnew/2.3.html#extended-slices">extended slice</a> operator. It allows you to take substrings. Basically, it works by specifying which elements you want as [begin:end:step], and it works for all sequences. Two neat things about it:</p>
<ul>
<li>You can omit one or more of the elements and it does "the right thing"</li>
<li>Negative numbers for begin, end, and step have meaning</li>
</ul>
<p>For begin and end, if you give a negative number, it means to count from the end of the sequence. For instance, if I have a list:</p>
<pre><code>l = [1,2,3]
</code></pre>
<p>Then <code>l[-1]</code> is 3, <code>l[-2]</code> is 2, and <code>l[-3]</code> is 1. </p>
<p>For the <code>step</code> argument, a negative number means to work <em>backwards</em> through the sequence. So for a list::</p>
<pre><code>l = [1,2,3,4,5,6,7,8,9,10]
</code></pre>
<p>You could write <code>l[::-1]</code> which basically means to use a step size of -1 while reading through the list. Python will "do the right thing" when filling in the start and stop so it iterates through the list backwards and gives you <code>[10,9,8,7,6,5,4,3,2,1]</code>. </p>
<p>I've given the examples with lists, but strings are just another sequence and work the same way. So <code>a[::-1]</code> means to build a string by joining the characters you get by walking backwards through the string. </p>
| 59
|
2009-04-19T21:43:46Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,164
|
<p>[::-1] gives a slice of the string a. The full syntax is <code>a[begin:end:step]</code>,
which gives a[begin], a[begin+step], ... a[end-1]. When step is negative, you start at end and move to begin.</p>
<p>Finally, begin defaults to the beginning of the sequence, end to the end, and step to 1 (here the step is -1, so the slice runs backwards).</p>
| 1
|
2009-04-19T21:44:01Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,165
|
<p>a string is essentially a sequence of characters and so the slicing operation works on it. What you are doing is in fact:</p>
<p>-> get a slice of 'a' from start to end, in steps of 1, backward.</p>
| 1
|
2009-04-19T21:44:29Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 766,291
|
<p>I think the following makes a bit more sense for printing strings in reverse, but maybe that's just me:</p>
<pre><code>for char in reversed( myString ):
print( char, end = "" )
</code></pre>
| 5
|
2009-04-19T22:48:09Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 6,238,928
|
<p>I would do it like this:</p>
<pre><code>variable = "string"
message = ""
for b in variable:
message = b+message
print (message)
</code></pre>
<p>and it prints: gnirts</p>
| 0
|
2011-06-04T19:33:28Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 9,827,706
|
<p>Consider the list below</p>
<pre><code>l=[12,23,345,456,67,7,945,467]
</code></pre>
<p>Another trick for reversing a list may be :</p>
<pre><code>l[len(l):-len(l)-1:-1] [467, 945, 7, 67, 456, 345, 23, 12]
l[:-len(l)-1:-1] [467, 945, 7, 67, 456, 345, 23, 12]
l[len(l)::-1] [467, 945, 7, 67, 456, 345, 23, 12]
</code></pre>
| 0
|
2012-03-22T17:46:18Z
|
[
"python",
"string",
"slice"
] |
reverse a string in Python
| 766,141
|
<p>I was looking for a way to print a string backwards, and after a quick search on google, I found this method:</p>
<p>Suppose 'a' is a string variable. This will return the 'a' string backwards:</p>
<pre><code>a[::-1]
</code></pre>
<p>Can anyone explain how that works?</p>
| 25
|
2009-04-19T21:35:54Z
| 25,497,656
|
<p>We can use append and pop to do it:</p>
<pre><code>def rev(s):
    i = list(s)
    o = list()
    while len(i) > 0:
        o.append(i.pop())  # pop takes characters from the end of the input
    return ''.join(o)
</code></pre>
| 0
|
2014-08-26T03:39:09Z
|
[
"python",
"string",
"slice"
] |