| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Searching across multiple tables (best practices) | 1,059,253 | <p>I have a property management application consisting of the tables:</p>
<pre><code>tenants
landlords
units
properties
vendors-contacts
</code></pre>
<p>Basically I want one search field to search them all, rather than having to select which category I am searching. Would this be an acceptable solution, technology-wise?</p>
<p>Will searching across 5 tables be OK in the long run and not bog down the server? What's the best way of accomplishing this?</p>
<p>Using PostgreSQL</p>
| 2 | 2009-06-29T16:36:05Z | 1,059,488 | <p>I would suggest using a specialized full-text indexing tool like Lucene for this. It will probably be easier to get up and running, and the result is faster and more featureful too. Postgres full text indexes will be useful if you also need structured search capability on top of this or transactionality of your search index is important.</p>
<p>If you do want to implement this in the database, something like the following scheme might work, assuming you use surrogate keys:</p>
<ol>
<li>for each searchable table create a view that has the primary key column of that table, the name of the table and a concatenation of all the searchable fields in that table.</li>
<li>create a functional GIN or GiST index on the underlying table over the to_tsvector() of the exact same concatenation.</li>
<li>create a UNION ALL over all the views to create the searchable view.</li>
</ol>
<p>After that you can do the searches like this:</p>
<pre><code>SELECT id, table_name, ts_rank_cd(body, query) AS rank
FROM search_view, to_tsquery('search&words') query
WHERE query @@ body
ORDER BY rank DESC
LIMIT 10;
</code></pre>
| 3 | 2009-06-29T17:31:09Z | [
"python",
"sql",
"mysql",
"postgresql",
"pylons"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,059,596 | <p>A case where regular expressions are justified:</p>
<pre><code>import re
DATA = "Hey, you - what are you doing here!?"
print re.findall(r"[\w']+", DATA)
# Prints ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>[Edited to include <code>'</code> in the word characters - thanks, Danosaure.]</p>
| 271 | 2009-06-29T17:56:39Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,059,600 | <p>You want Python's RegEx module's <code>findall()</code> method:</p>
<p><a href="http://www.regular-expressions.info/python.html" rel="nofollow">http://www.regular-expressions.info/python.html</a></p>
<p><a href="http://stackoverflow.com/questions/234512/splitting-strings-in-python">Example</a></p>
| -2 | 2009-06-29T17:57:25Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,059,601 | <p><a href="http://docs.python.org/library/re.html#re.split">re.split()</a></p>
<blockquote>
<p>re.split(pattern, string[, maxsplit=0])</p>
<p>Split string by the occurrences of pattern. If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits occur, and the remainder of the string is returned as the final element of the list. (Incompatibility note: in the original Python 1.5 release, maxsplit was ignored. This has been fixed in later releases.)</p>
</blockquote>
<pre><code>>>> re.split('\W+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split('(\W+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split('\W+', 'Words, words, words.', 1)
['Words', 'words, words.']
</code></pre>
| 244 | 2009-06-29T17:57:49Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,059,612 | <p>Try this:</p>
<pre><code>import re
phrase = "Hey, you - what are you doing here!?"
matches = re.findall('\w+', phrase)
print matches
</code></pre>
<p>This will print <code>['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']</code></p>
| 4 | 2009-06-29T18:01:00Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,059,856 | <p>Another way to achieve this is to use the Natural Language Toolkit (<a href="http://nltk.org/doc/en/ch02.html" rel="nofollow">nltk</a>).</p>
<pre><code>import nltk
data= "Hey, you - what are you doing here!?"
word_tokens = nltk.tokenize.regexp_tokenize(data, r'\w+')
print word_tokens
</code></pre>
<p>This prints: <code>['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']</code></p>
<p>The biggest drawback of this method is that you need to <a href="http://pypi.python.org/pypi/nltk/0.9.9" rel="nofollow">install the nltk package</a>.</p>
<p>The benefits are that you can do <a href="http://nltk.googlecode.com/svn/trunk/doc/howto/index.html" rel="nofollow">a lot of fun stuff</a> with the rest of the nltk package once you get your tokens.</p>
| 1 | 2009-06-29T18:51:37Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,157,473 | <p>Use a list comprehension for this stuff; it seems easier.</p>
<pre><code>data = "Hey, you - what are you doing here!?"
# Drop the separator characters, then split on whitespace:
tokens = ''.join(c for c in data if c not in (',', '-', '!', '?')).split()
</code></pre>
<p>I find this easier to comprehend (read and maintain) than using a regexp, simply because I am not that good at regexps, which is the case for most of us :). Also, if you know what set of separators you might be using, you can keep them in a set. With a very large set this might be slower, but the 're' module is slow as well.</p>
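<p>A minimal sketch of the set-based idea mentioned above, collecting characters into whole words rather than keeping them as single characters (the function name and separator set are illustrative):</p>

```python
# Sketch: collect characters into words, using a set of separator
# characters for fast membership tests (the set below is illustrative).
def split_on_chars(data, seps):
    words, current = [], []
    for ch in data:
        if ch in seps:
            if current:  # a word just ended
                words.append(''.join(current))
                current = []
        else:
            current.append(ch)
    if current:  # flush the last word
        words.append(''.join(current))
    return words

print(split_on_chars("Hey, you - what are you doing here!?", set(' ,-!?')))
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```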
| -4 | 2009-07-21T05:49:02Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 1,157,498 | <p>Another way, without regex</p>
<pre><code>import string
punc = string.punctuation
thestring = "Hey, you - what are you doing here!?"
s = list(thestring)
''.join([o for o in s if not o in punc]).split()
</code></pre>
| 40 | 2009-07-21T06:02:03Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 2,911,664 | <p>Kinda late answer :), but I had a similar dilemma and didn't want to use the 're' module.</p>
<pre><code>def my_split(s, seps):
    res = [s]
    for sep in seps:
        s, res = res, []
        for seq in s:
            res += seq.split(sep)
    return res
print my_split('1111 2222 3333;4444,5555;6666', [' ', ';', ','])
['1111', '', '2222', '3333', '4444', '5555', '6666']
</code></pre>
| 20 | 2010-05-26T09:31:24Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 5,310,226 | <p>I had the same problem as @ooboo and found this topic;
@ghostdog74 inspired me. Maybe someone will find my solution useful.</p>
<pre><code>str1='adj:sg:nom:m1.m2.m3:pos'
splitat=':.'
''.join([ s if s not in splitat else ' ' for s in str1]).split()
</code></pre>
<p>Put something else in place of the space, and split on that same character, if you don't want to split at spaces.</p>
| 1 | 2011-03-15T10:12:20Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 5,894,804 | <pre><code>join = lambda x: sum(x,[]) # a.k.a. flatten1([[1],[2,3],[4]]) -> [1,2,3,4]
# ...alternatively...
join = lambda lists: [x for l in lists for x in l]
</code></pre>
<p>Then this becomes a three-liner:</p>
<pre><code>fragments = [text]
for token in tokens:
    fragments = join(f.split(token) for f in fragments)
</code></pre>
<hr>
<p><strong>Explanation</strong></p>
<p>This is what is known in Haskell as the list monad. The idea behind the monad is that once "in the monad" you "stay in the monad" until something takes you out. For example, in Haskell, say you map a Python-style <code>range(n) -> [0,1,...,n-1]</code> function over a list. Since each result is itself a list, the resulting lists are concatenated in place, so you get something like <code>concatMap range [3,4,1] -> [0,1,2,0,1,2,3,0]</code>. This is known as concatMap (the list monad's bind). The idea here is that you have an operation you are applying (splitting on a token), and whenever you do that, you join the result back into the list.</p>
<p>You can abstract this into a function and have <code>tokens=string.punctuation</code> by default. </p>
<p>Advantages of this approach:</p>
<ul>
<li>This approach (unlike naive regex-based approaches) can work with arbitrary-length tokens (which regex can also do with more advanced syntax).</li>
<li>You are not restricted to mere tokens; you could have arbitrary logic in place of each token, for example one of the "tokens" could be a function which splits according to how nested parentheses are.</li>
</ul>
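<p>As a sketch, the abstraction mentioned above might look like the following (the function name, and including the space character among the default separators, are my own assumptions):</p>

```python
import string

def split_on_tokens(text, tokens=string.punctuation + ' '):
    # Repeatedly split every fragment on each token and flatten,
    # which is the "stay in the list" idea described above.
    fragments = [text]
    for token in tokens:
        fragments = [piece for f in fragments for piece in f.split(token)]
    return [f for f in fragments if f]  # drop empty fragments

print(split_on_tokens("Hey, you - what are you doing here!?"))
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```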
| 8 | 2011-05-05T08:35:59Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 6,966,543 | <p>Here is my go at a split with multiple delimiters:</p>
<pre><code>def msplit(s, delims):
    w = ''
    for z in s:
        if z not in delims:
            w += z
        else:
            if len(w) > 0:
                yield w
            w = ''
    if len(w) > 0:
        yield w
</code></pre>
| 1 | 2011-08-06T11:38:15Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 7,215,696 | <p>Another quick way to do this without a regexp is to replace the characters first, as below:</p>
<pre><code>>>> 'a;bcd,ef g'.replace(';',' ').replace(',',' ').split()
['a', 'bcd', 'ef', 'g']
</code></pre>
| 147 | 2011-08-27T16:10:52Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 9,630,983 | <p>I think the following is the best answer to suit your needs:</p>
<p><code>\W+</code> may be suitable for this case, but may not be suitable for other cases.</p>
<pre><code>filter(None, re.compile('[ |,|\-|!|?]').split( "Hey, you - what are you doing here!?")
</code></pre>
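<p>To illustrate the caveat about <code>\W+</code>: it also splits on apostrophes, while an explicit separator class keeps them intact (the example string is my own):</p>

```python
import re

s = "Don't stop - it's here!"
# \W+ treats the apostrophe as a separator, splitting contractions:
split_w = [t for t in re.split(r'\W+', s) if t]
# An explicit class of separator characters leaves apostrophes alone:
split_class = [t for t in re.split(r'[ \-!]+', s) if t]
print(split_w)      # ['Don', 't', 'stop', 'it', 's', 'here']
print(split_class)  # ["Don't", 'stop', "it's", 'here']
```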
| 0 | 2012-03-09T08:30:11Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 9,943,927 | <p>Use replace two times:</p>
<pre><code>a = '11223FROM33344INTO33222FROM3344'
a.replace('FROM', ',,,').replace('INTO', ',,,').split(',,,')
</code></pre>
<p>results in: </p>
<pre><code>['11223', '33344', '33222', '3344']
</code></pre>
| 4 | 2012-03-30T13:27:30Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 10,250,343 | <p>I'm re-acquainting myself with Python and needed the same thing.
The findall solution may be better, but I came up with this:</p>
<pre><code>tokens = [x.strip() for x in data.split(',')]
</code></pre>
| 2 | 2012-04-20T16:53:46Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 12,189,710 | <p>Pro-Tip: Use <code>string.translate</code> for the fastest string operations Python has.</p>
<p>Some proof...</p>
<p>First, the slow way (sorry pprzemek):</p>
<pre><code>>>> import timeit
>>> S = 'Hey, you - what are you doing here!?'
>>> def my_split(s, seps):
...     res = [s]
...     for sep in seps:
...         s, res = res, []
...         for seq in s:
...             res += seq.split(sep)
...     return res
...
>>> timeit.Timer('my_split(S, punctuation)', 'from __main__ import S,my_split; from string import punctuation').timeit()
54.65477919578552
</code></pre>
<p>Next, we use <code>re.findall()</code> (as given by the suggested answer). MUCH faster:</p>
<pre><code>>>> timeit.Timer('findall(r"\w+", S)', 'from __main__ import S; from re import findall').timeit()
4.194725036621094
</code></pre>
<p>Finally, we use <code>translate</code>:</p>
<pre><code>>>> from string import translate,maketrans,punctuation
>>> T = maketrans(punctuation, ' '*len(punctuation))
>>> timeit.Timer('translate(S, T).split()', 'from __main__ import S,T,translate').timeit()
1.2835021018981934
</code></pre>
<p><strong>Explanation:</strong></p>
<p><code>string.translate</code> is implemented in C and, unlike many string manipulation routines written in Python, it does its work in a single pass with a simple one-for-one character lookup. That is about as fast as you can get for string substitution.</p>
<p>It's a bit awkward, though, as it needs a translation table in order to do this magic. You can make a translation table with the <code>maketrans()</code> convenience function. The objective here is to translate all unwanted characters to spaces: a one-for-one substitution with no per-character Python overhead. So this is <strong>fast</strong>!</p>
<p>Next, we use good old <code>split()</code>. <code>split()</code> by default operates on all whitespace characters, grouping consecutive ones together for the split. The result is the list of words that you want. And this approach is more than 3x faster than <code>re.findall()</code>!</p>
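<p>For reference, on Python 3 the same idea would be written with <code>str.maketrans</code> and the string's own <code>translate()</code> method (a sketch; the timings above were taken on Python 2):</p>

```python
import string

S = 'Hey, you - what are you doing here!?'
# Build a table mapping every punctuation character to a space,
# then translate and split on whitespace.
table = str.maketrans(string.punctuation, ' ' * len(string.punctuation))
words = S.translate(table).split()
print(words)
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```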
| 29 | 2012-08-30T04:05:54Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 16,271,773 | <p>Here's my take on it:</p>
<pre><code>def split_string(source, splitlist):
    splits = frozenset(splitlist)
    l = []
    s1 = ""
    for c in source:
        if c in splits:
            if s1:
                l.append(s1)
                s1 = ""
        else:
            s1 = s1 + c
    if s1:
        l.append(s1)
    return l
>>> out = split_string("First Name,Last Name,Street Address,City,State,Zip Code", ",")
>>> print out
['First Name', 'Last Name', 'Street Address', 'City', 'State', 'Zip Code']
</code></pre>
| 0 | 2013-04-29T05:32:04Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 19,211,729 | <p>I like <strong>re</strong>, but here is my solution without it:</p>
<pre><code>from itertools import groupby
sep = ' ,-!?'
s = "Hey, you - what are you doing here!?"
print [''.join(g) for k, g in groupby(s, sep.__contains__) if not k]
</code></pre>
<p><strong>sep.__contains__</strong> is the method used by the <code>in</code> operator. Basically it is the same as</p>
<pre><code>lambda ch: ch in sep
</code></pre>
<p>but is more convenient here.</p>
<p><strong>groupby</strong> gets our string and function. It splits the string into groups using that function: whenever the value of the function changes, a new group is generated. So <strong>sep.__contains__</strong> is exactly what we need.</p>
<p><strong>groupby</strong> returns a sequence of pairs, where pair[0] is the result of our function and pair[1] is the group. Using <strong>'if not k'</strong> we filter out the groups consisting of separators (because the result of <strong>sep.__contains__</strong> is True on separators). Well, that's all: now we have a sequence of groups, each one a word (a group is actually an iterable, so we use <strong>join</strong> to convert it to a string).</p>
<p>This solution is quite general, because it uses a function to separate the string (you can split by any condition you need). Also, it doesn't create intermediate strings/lists (you can remove <strong>join</strong> and the expression will become lazy, since each group is an iterator).</p>
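<p>A sketch of the lazy variant mentioned above, where each group stays an iterator of characters until it is actually consumed:</p>

```python
from itertools import groupby

sep = ' ,-!?'
s = "Hey, you - what are you doing here!?"
# Each group is itself an iterator over one word's characters;
# nothing is joined until we consume a group.
lazy_words = (g for k, g in groupby(s, sep.__contains__) if not k)
words = [''.join(g) for g in lazy_words]  # materialize on demand
print(words)
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```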
| 1 | 2013-10-06T17:30:05Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 20,753,993 | <pre><code>def get_words(s):
l = []
w = ''
for c in s.lower():
if c in '-!?,. ':
if w != '':
l.append(w)
w = ''
else:
w = w + c
if w != '':
l.append(w)
return l
</code></pre>
<p>Here is the usage:</p>
<pre><code>>>> s = "Hey, you - what are you doing here!?"
>>> print get_words(s)
['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
| 1 | 2013-12-24T02:17:13Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 21,639,500 | <p>I like the <code>replace()</code> way the best. The following procedure changes all separators defined in the string <code>splitlist</code> to the first separator in <code>splitlist</code> and then splits the text on that one separator. It also handles the case where <code>splitlist</code> happens to be an empty string. It returns a list of words, with no empty strings in it.</p>
<pre><code>def split_string(text, splitlist):
    for sep in splitlist:
        text = text.replace(sep, splitlist[0])
    return filter(None, text.split(splitlist[0])) if splitlist else [text]
</code></pre>
| 1 | 2014-02-07T23:15:39Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 23,720,594 | <p>So many answers, yet I can't find any solution that does efficiently what the <em>title</em> of the question literally asks for (splitting with multiple separators; instead, many answers remove anything that is not a word). So here is an answer to the question in the title ("string split with multiple separators") that relies on Python's standard and efficient <code>re</code> module:</p>
<pre><code>>>> import re
>>> # Splitting on: , <space> - ! ? :
>>> filter(None, re.split("[, \-!?:]+", "Hey, you - what are you doing here!?"))
['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>where:</p>
<ul>
<li>the <code>\-</code> in the regular expression is here to prevent the special interpretation of <code>-</code> as a character range indicator, and where</li>
<li><code>filter(None, ...)</code> removes the empty strings possibly created by leading and trailing separators (since empty strings have a false boolean value).</li>
</ul>
<p>This <code>re.split()</code> precisely "splits with multiple separators", as asked for in the question title. The <code>re</code> module is much more efficient than doing Python loops and tests "by hand".</p>
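<p>Note that on Python 3, <code>filter()</code> returns an iterator rather than a list; a list comprehension gives the same result on both versions (a sketch):</p>

```python
import re

# Same split as above, with the empty strings dropped by a comprehension
# instead of filter(None, ...), so it works identically on Python 2 and 3.
parts = [t for t in re.split(r"[, \-!?:]+", "Hey, you - what are you doing here!?") if t]
print(parts)
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```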
| 96 | 2014-05-18T09:43:54Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 26,743,214 | <p>First of all, I don't think that your intention is to actually use punctuation as delimiters in the split functions. Your description suggests that you simply want to eliminate punctuation from the resultant strings.</p>
<p>I come across this pretty frequently, and my usual solution doesn't require re.</p>
<h2>One-liner lambda function w/ list comprehension:</h2>
<p>(requires <code>import string</code>):</p>
<pre><code>split_without_punc = lambda text : [word.strip(string.punctuation) for word in
text.split() if word.strip(string.punctuation) != '']
# Call function
split_without_punc("Hey, you -- what are you doing?!")
# returns ['Hey', 'you', 'what', 'are', 'you', 'doing']
</code></pre>
<p><br></p>
<h2>Function (traditional)</h2>
<p>As a traditional function, this is still only two lines with a list comprehension (in addition to <code>import string</code>):</p>
<pre><code>def split_without_punctuation2(text, ignore=string.punctuation):
    # Split by whitespace
    words = text.split()
    # Strip punctuation from each word
    return [word.strip(ignore) for word in words if word.strip(ignore) != '']
split_without_punctuation2("Hey, you -- what are you doing?!")
# returns ['Hey', 'you', 'what', 'are', 'you', 'doing']
</code></pre>
<p>It will also naturally leave contractions and hyphenated words intact. You can always use <code>text.replace("-", " ")</code> to turn hyphens into spaces before the split.</p>
<h2>General Function w/o Lambda or List Comprehension</h2>
<p>For a more general solution (where you can specify the characters to eliminate), and without a list comprehension, you get:</p>
<pre><code>def split_without(text: str, ignore: str) -> list:
    # Split by whitespace
    split_string = text.split()
    # Strip any characters in the ignore string, and ignore empty strings
    words = []
    for word in split_string:
        word = word.strip(ignore)
        if word != '':
            words.append(word)
    return words
# Situation-specific call to general function
import string
final_text = split_without("Hey, you - what are you doing?!", string.punctuation)
# returns ['Hey', 'you', 'what', 'are', 'you', 'doing']
</code></pre>
<p>Of course, you can always generalize the lambda function to any specified string of characters as well.</p>
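<p>Such a generalized lambda might look like this sketch (the name and the default argument are my own):</p>

```python
import string

# Generalized version of the one-liner: the characters to strip
# are now a parameter, defaulting to string.punctuation.
split_without_chars = lambda text, chars=string.punctuation: [
    word.strip(chars) for word in text.split() if word.strip(chars) != '']

print(split_without_chars("Hey, you -- what are you doing?!"))
# ['Hey', 'you', 'what', 'are', 'you', 'doing']
```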
| 0 | 2014-11-04T19:17:37Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. </p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 30,589,540 | <p>First of all, if you are going to apply a regular expression repeatedly (for example, in a loop), compile it once with <code>re.compile()</code>; reusing a precompiled pattern is faster than calling the module-level functions each time.</p>
<p>So for your problem, first compile the pattern and then perform the action on it.</p>
<pre><code>import re
DATA = "Hey, you - what are you doing here!?"
reg_tok = re.compile("[\w']+")
print reg_tok.findall(DATA)
</code></pre>
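<p>The same approach in Python 3 (raw string for the pattern, <code>print()</code> as a function):</p>

```python
import re

DATA = "Hey, you - what are you doing here!?"
reg_tok = re.compile(r"[\w']+")  # runs of word characters and apostrophes
print(reg_tok.findall(DATA))
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```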
| 1 | 2015-06-02T07:06:45Z | [
"python",
"string",
"split"
] |
Python - Split Strings with Multiple Delimiters | 1,059,559 | <p>I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want a list of the words.</p>
<pre><code>"Hey, you - what are you doing here!?"
</code></pre>
<p>should be</p>
<pre><code>['hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
<p>But Python's <code>str.split()</code> only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?</p>
| 334 | 2009-06-29T17:49:35Z | 37,634,255 | <p>Here is the answer with some explanation.</p>
<pre><code>st = "Hey, you - what are you doing here!?"
# replace all the non alpha-numeric with space and then join.
new_string = ''.join([x.replace(x, ' ') if not x.isalnum() else x for x in st])
# output of new_string
'Hey you what are you doing here '
# str.split() will remove all the empty string if separator is not provided
new_list = new_string.split()
# output of new_list
['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
# we can join it to get a complete string without any non alpha-numeric character
' '.join(new_list)
# output
'Hey you what are you doing here'
</code></pre>
<p>or in one line, we can do like this:</p>
<pre><code>(''.join([x.replace(x, ' ') if not x.isalnum() else x for x in st])).split()
# output
['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
</code></pre>
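<p>Since <code>x.replace(x, ' ')</code> always evaluates to a single space when <code>x</code> is one character, the same idea can be written more directly with a conditional expression (a Python 3 sketch):</p>

```python
st = "Hey, you - what are you doing here!?"
# map every non-alphanumeric character to a space, then split
words = ''.join(c if c.isalnum() else ' ' for c in st).split()
print(words)
# ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here']
```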
<p>updated answer</p>
| 0 | 2016-06-04T19:35:58Z | [
"python",
"string",
"split"
] |
How do I install with distutils to a specific Python installation? | 1,059,594 | <p>I have a Windows machine with Python 2.3, 2.6 and 3.0 installed and 2.5 installed with Cygwin. I've downloaded the pexpect package but when I run "python setup.py install" it installs to the 2.6 installation.</p>
<p>How could I have it install to the Cygwin Python installation, or any other installation?</p>
| 1 | 2009-06-29T17:56:32Z | 1,059,606 | <p>call the specific python version that you want to install for. For example:</p>
<pre><code>$ python2.3 setup.py install
</code></pre>
<p>should install the package for python 2.3</p>
| 5 | 2009-06-29T17:59:00Z | [
"python",
"install",
"distutils"
] |
How do I install with distutils to a specific Python installation? | 1,059,594 | <p>I have a Windows machine with Python 2.3, 2.6 and 3.0 installed and 2.5 installed with Cygwin. I've downloaded the pexpect package but when I run "python setup.py install" it installs to the 2.6 installation.</p>
<p>How could I have it install to the Cygwin Python installation, or any other installation?</p>
| 1 | 2009-06-29T17:56:32Z | 1,085,582 | <p>Using "python2.3" can fail if another (default) installation has patched PATH to point only to itself.</p>
<p>In that case the task can be solved by:</p>
<ol>
<li>find the full path to the desired Python interpreter; for a default ActivePython installation of Python 2.6 it is C:\Python26</li>
<li>build the full path to the binary (in this case C:\Python26\python.exe)</li>
<li>run the install command from the unpacked module directory using that full path to the interpreter: <code>C:\Python26\python.exe setup.py install</code></li>
</ol>
| 0 | 2009-07-06T05:46:20Z | [
"python",
"install",
"distutils"
] |
Python Memory Model | 1,059,674 | <p>I have a very large list
Suppose I do that (yeah, I know the code is very unpythonic, but for the example's sake..):</p>
<pre><code>n = (2**32)**2
for i in xrange(10**7):
    li[i] = n
</code></pre>
<p>works fine. however:</p>
<pre><code>for i in xrange(10**7):
    li[i] = i**2
</code></pre>
<p>consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient...</p>
<p>Does anyone have an explanation for this?</p>
| 7 | 2009-06-29T18:12:55Z | 1,059,691 | <p>You have only one variable n, but you create many i**2.</p>
<p>What happens is that Python works with references. Each time you do <code>array[i] = n</code> you create a new reference to the value of <code>n</code>. Not to the variable, mind you, to the value. However, in the second case, when you do <code>array[i] = i**2</code> you create a new value, and reference this new value. This will of course use up much more memory.</p>
<p>In fact, Python will keep reusing the same value and just use references to it even if it's recalculated. So for example:</p>
<pre><code>l = []
x = 2
for i in xrange(1000000):
    l.append(x*2)
</code></pre>
<p>Will generally not use more memory than </p>
<pre><code>l = []
x = 2
for i in xrange(1000000):
    l.append(x)
</code></pre>
<p>However, in the case of</p>
<pre><code>l = []
x = 2
for i in xrange(1000000):
    l.append(i)
</code></pre>
<p>each value of i will get a reference and therefore be kept in memory, using up a lot of memory compared to the other examples.</p>
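<p>The sharing is easy to observe with <code>id()</code> (a CPython sketch in Python 3 syntax; <code>id()</code> values are an implementation detail):</p>

```python
n = (2**32)**2
li = [n] * 3
# every slot holds a reference to the very same object
assert all(id(item) == id(n) for item in li)

squares = [i**2 for i in range(10**5, 10**5 + 3)]
# here each slot references a distinct large-int object
assert len({id(item) for item in squares}) == 3
print("all checks passed")
```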
<p>(Alex pointed out some confusion in terminology. In Python there is a module called array. These types of arrays store integer values directly, instead of references to objects like Python's normal list objects, but otherwise behave the same. But since the first example uses a value that can't be stored in such an array, this is unlikely to be the case here.</p>
<p>Instead the question is most likely using the word array as it's used in many other languages, which is the same as Python's list type.)</p>
| 3 | 2009-06-29T18:17:37Z | [
"python",
"arrays",
"memory",
"model"
] |
Python Memory Model | 1,059,674 | <p>I have a very large list
Suppose I do that (yeah, I know the code is very unpythonic, but for the example's sake..):</p>
<pre><code>n = (2**32)**2
for i in xrange(10**7):
    li[i] = n
</code></pre>
<p>works fine. however:</p>
<pre><code>for i in xrange(10**7):
    li[i] = i**2
</code></pre>
<p>consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient...</p>
<p>Does anyone have an explanation for this?</p>
| 7 | 2009-06-29T18:12:55Z | 1,059,707 | <p>In your first example you are storing the same integer len(arr) times. So Python needs to store the integer just once in memory and refer to it len(arr) times.</p>
<p>In your second example, you are storing len(arr) different integers. Now Python must allocate storage for len(arr) integers and refer to them in each of the len(arr) slots.</p>
| 5 | 2009-06-29T18:21:42Z | [
"python",
"arrays",
"memory",
"model"
] |
Python Memory Model | 1,059,674 | <p>I have a very large list
Suppose I do that (yeah, I know the code is very unpythonic, but for the example's sake..):</p>
<pre><code>n = (2**32)**2
for i in xrange(10**7):
    li[i] = n
</code></pre>
<p>works fine. however:</p>
<pre><code>for i in xrange(10**7):
    li[i] = i**2
</code></pre>
<p>consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient...</p>
<p>Does anyone have an explanation for this?</p>
| 7 | 2009-06-29T18:12:55Z | 1,059,716 | <p>Java special-cases a few value types (including integers) so that they're stored by value (instead of, by object reference like everything else). Python doesn't special-case such types, so that assigning n to many entries <em>in a list</em> (or other normal Python container) doesn't have to make copies.</p>
<p>Edit: note that the references are always to <em>objects</em>, not "to variables" -- there's no such thing as "a reference to a variable" in Python (or Java). For example:</p>
<pre><code>>>> n = 23
>>> a = [n,n]
>>> print id(n), id(a[0]), id(a[1])
8402048 8402048 8402048
>>> n = 45
>>> print id(n), id(a[0]), id(a[1])
8401784 8402048 8402048
</code></pre>
<p>We see from the first print that both entries in list <code>a</code> refer to exactly the same object as <code>n</code> refers to -- but when <code>n</code> is reassigned, <strong>it</strong> now refers to a different object, while both entries in <code>a</code> still refer to the previous one.</p>
<p>An <code>array.array</code> (from the Python standard library module <a href="http://docs.python.org/library/array.html">array</a>) is very different from a list: it keeps compact copies of a homogeneous type, taking as few bits per item as are needed to store copies of values of that type. All normal containers keep references (internally implemented in the C-coded Python runtime as pointers to PyObject structures: each pointer, on a 32-bit build, takes 4 bytes, each PyObject at least 16 or so [including pointer to type, reference count, actual value, and malloc rounding up]), arrays don't (so they can't be heterogeneous, can't have items except from a few basic types, etc).</p>
<p>For example, a 1000-items container, with all items being different small integers (ones whose values can fit in 2 bytes each), would take about 2,000 bytes of data as an <code>array.array('h')</code>, but about 20,000 as a <code>list</code>. But if all items were the same number, the array would still take 2,000 bytes of data, the list would take only 20 or so [[in every one of these cases you have to add about another 16 or 32 bytes for the container-object proper, in addition to the memory for the data]].</p>
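<p>The difference is easy to check with <code>sys.getsizeof()</code> (exact numbers vary by platform and Python version; this is a sketch for a modern CPython, and note that <code>getsizeof()</code> on a list is shallow, i.e. it excludes the int objects themselves):</p>

```python
import sys
from array import array

values = list(range(1000))   # 1000 distinct small ints
arr = array('h', values)     # 'h' = signed short, 2 bytes per item

print(sys.getsizeof(arr))    # roughly 2000 bytes of data plus a small header
print(sys.getsizeof(values)) # 1000 pointers plus header (shallow size only)
assert sys.getsizeof(arr) < sys.getsizeof(values)
```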
<p>However, although the question says "array" (even in a tag), I doubt its <code>arr</code> is actually an array -- if it were, it could not store <code>(2**32)**2</code> (largest int values in an array are 32 bits) and the memory behavior reported in the question would not actually be observed. So, the question is probably in fact about a list, not an array.</p>
<p><strong>Edit</strong>: a comment by @ooboo asks lots of reasonable followup questions, and rather than trying to squish the detailed explanation in a comment I'm moving it here.</p>
<blockquote>
<p>It's weird, though - after all, how is
the reference to the integer stored?
id(variable) gives an integer, the
reference is an integer itself, isn't
it cheaper to use the integer?</p>
</blockquote>
<p>CPython stores references as pointers to PyObject (Jython and IronPython, written in Java and C#, use those language's implicit references; PyPy, written in Python, has a very flexible back-end and can use lots of different strategies)</p>
<p><code>id(v)</code> gives (on CPython only) the numeric value of the pointer (just as a handy way to uniquely identify the object). A list can be heterogeneous (some items may be integers, others objects of different types) so it's just not a sensible option to store some items as pointers to PyObject and others differently (each object also needs a type indication and, in CPython, a reference count, at least) -- <code>array.array</code> is homogeneous and limited so it can (and does) indeed store a copy of the items' values rather than references (this is often cheaper, but not for collections where the same item appears a LOT, such as a sparse array where the vast majority of items are 0).</p>
<p>A Python implementation would be fully allowed by the language specs to try subtler tricks for optimization, as long as it preserves semantics untouched, but as far as I know none currently does for this specific issue (you could try hacking a PyPy backend, but don't be surprised if the overhead of checking for int vs non-int overwhelms the hoped-for gains).</p>
<blockquote>
<p>Also, would it make a difference if I
assigned 2**64 to every slot instead
of assigning n, when n holds a
reference to 2**64? What happens when
I just write 1?</p>
</blockquote>
<p>These are examples of implementation choices that every implementation is fully allowed to make, as it's not hard to preserve the semantics (so hypothetically even, say, 3.1 and 3.2 could behave differently in this regard).</p>
<p>When you use an int literal (or any other literal of an immutable type), or other expression producing a result of such a type, it's up to the implementation to decide whether to make a new object of that type unconditionally, or spend some time checking among such objects to see if there's an existing one it can reuse.</p>
<p>In practice, CPython (and I believe the other implementations, but I'm less familiar with their internals) uses a single copy of sufficiently <em>small</em> integers (keeps a predefined C array of a few small integer values in PyObject form, ready to use or reuse at need) but doesn't go out of its way in general to look for other existing reusable objects.</p>
<p>But for example identical literal constants within the same function are easily and readily compiled as references to a single constant object in the function's table of constants, so that's an optimization that's very easily done, and I believe every current Python implementation does perform it.</p>
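<p>Both effects are observable directly (implementation details of CPython, not language guarantees):</p>

```python
a = 256
b = 256
assert a is b  # ints in -5..256 are pre-allocated singletons in CPython

# building the values at runtime bypasses compile-time constant merging
big1 = int('10000000000')
big2 = int('10000000000')
print(big1 is big2)  # two distinct objects in CPython, though equal in value
assert big1 == big2
```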
<p>It can sometimes be hard to remember than Python is <em>a language</em> and it has several implementations that may (legitimately and correctly) differ in a lot of such details -- everybody, including pedants like me, tends to say just "Python" rather than "CPython" when talking about the popular C-coded implementation (excepts in contexts like this one where drawing the distinction between language and implementation is paramount;-). Nevertheless, the distinction <em>is</em> quite important, and well worth repeating once in a while.</p>
| 14 | 2009-06-29T18:23:07Z | [
"python",
"arrays",
"memory",
"model"
] |
Python Memory Model | 1,059,674 | <p>I have a very large list
Suppose I do that (yeah, I know the code is very unpythonic, but for the example's sake..):</p>
<pre><code>n = (2**32)**2
for i in xrange(10**7):
    li[i] = n
</code></pre>
<p>works fine. however:</p>
<pre><code>for i in xrange(10**7):
    li[i] = i**2
</code></pre>
<p>consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient...</p>
<p>Does anyone have an explanation for this?</p>
| 7 | 2009-06-29T18:12:55Z | 1,059,761 | <p>In both examples <code>arr[i]</code> takes a reference to an object, whether that object is <code>n</code> or the result of <code>i**2</code>.</p>
<p>In the first example, <code>n</code> is already defined, so storing it only takes a reference; but in the second example, <code>i**2</code> has to be evaluated each time, the allocator has to find space for each new resulting object, and only then is its reference stored.</p>
| 0 | 2009-06-29T18:33:27Z | [
"python",
"arrays",
"memory",
"model"
] |
Strange behavior with ModelForm and saving | 1,059,831 | <p>This problem is very strange and I'm hoping someone can help me. For the sake of argument, I have an Author model with a ForeignKey relationship to the Book model. When I display an author, I would like to have a ChoiceField that ONLY displays the books associated with that author. As such, I override the AuthorForm <code>__init__()</code> method and create a list of choices (tuples) based upon a query that filters books by the author ID. The tuple is a composite of the book ID and the book name (i.e., (1, 'Moby Dick')). Those "choices" are then assigned to the ModelForm's choices attribute.</p>
<p>When the form renders in the template, the ChoiceField is properly displayed, listing only those books associated with that author.</p>
<p>This is where things get weird.</p>
<p>When I save the form, I receive a ValueError (Cannot assign "u'1'":Author.book" must be a Book instance). This error makes sense due to the FK relationship. However, if I add a "print" statement to the code, make no other changes, and then save the record, it works. The ValueError magically disappears. I've tried this a number of times, ensuring I haven't inadvertently made another change, and it works each time.</p>
<p>Does anyone know what's going on here?</p>
| 0 | 2009-06-29T18:45:24Z | 1,059,845 | <p>Not quite sure what you are doing wrong, but it is best to just modify the queryset:</p>
<pre><code>class ClientForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        self.affiliate = kwargs.pop('affiliate')
        super(ClientForm, self).__init__(*args, **kwargs)
        self.fields["referral"].queryset = Referral.objects.filter(affiliate=self.affiliate)

    class Meta:
        model = Client
</code></pre>
<p>The above is straight out of one of my projects and it works perfectly to only show the Referral objects related to the passed affiliate:</p>
<pre><code>form = ClientForm(affiliate=request.affiliate)
</code></pre>
| 2 | 2009-06-29T18:49:58Z | [
"python",
"django",
"modelform"
] |
How to delete all the items of a specific key in a list of dicts? | 1,059,924 | <p>I'm trying to remove some items of a dict based on their key, here is my code:</p>
<pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1, d2, d1, d2, d1, d2]
for i in range(len(l)):
    if l[i].has_key('b'):
        del l[i]['b']
print l
</code></pre>
<p>The output will be:</p>
<pre><code>[{'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}]
</code></pre>
<p>Is there a better way to do it?</p>
| 0 | 2009-06-29T19:05:35Z | 1,059,981 | <pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1, d2, d1, d2, d1, d2]
for d in l:
    d.pop('b', None)
print l
</code></pre>
| 16 | 2009-06-29T19:15:45Z | [
"python"
] |
How to delete all the items of a specific key in a list of dicts? | 1,059,924 | <p>I'm trying to remove some items of a dict based on their key, here is my code:</p>
<pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1, d2, d1, d2, d1, d2]
for i in range(len(l)):
    if l[i].has_key('b'):
        del l[i]['b']
print l
</code></pre>
<p>The output will be:</p>
<pre><code>[{'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}]
</code></pre>
<p>Is there a better way to do it?</p>
| 0 | 2009-06-29T19:05:35Z | 1,060,002 | <p>A slight simplification:</p>
<pre><code>for d in l:
    if d.has_key('b'):
        del d['b']
</code></pre>
<p>Some people might also do</p>
<pre><code>for d in l:
    try:
        del d['b']
    except KeyError:
        pass
</code></pre>
<p>Catching exceptions like this is not considered as expensive in Python as in other languages.</p>
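<p>Both variants behave identically on a list with and without the key — a quick check:</p>

```python
l = [{'a': 1, 'b': 2}, {'a': 1}]
for d in l:
    try:
        del d['b']
    except KeyError:
        pass              # key absent: nothing to delete
print(l)
# [{'a': 1}, {'a': 1}]
```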
| 3 | 2009-06-29T19:18:15Z | [
"python"
] |
How to delete all the items of a specific key in a list of dicts? | 1,059,924 | <p>I'm trying to remove some items of a dict based on their key, here is my code:</p>
<pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1, d2, d1, d2, d1, d2]
for i in range(len(l)):
    if l[i].has_key('b'):
        del l[i]['b']
print l
</code></pre>
<p>The output will be:</p>
<pre><code>[{'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}]
</code></pre>
<p>Is there a better way to do it?</p>
| 0 | 2009-06-29T19:05:35Z | 1,060,031 | <p>I like your way of doing it (except that you use a loop variable, but others pointed that out already); it's explicit and easy to understand. If you want something that minimizes typing then this works:</p>
<pre><code>[x.pop('b', None) for x in l]
</code></pre>
<p>Note though that effectively only one 'b' gets deleted per distinct dict, because your list l references the same dictionaries several times. So run your code above, and then print out d1, and you'll notice that you in fact deleted the b-key from d1 itself as well.</p>
<p>To avoid this you need to copy the dictionaries:</p>
<pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1.copy(), d2.copy(), d1.copy(), d2.copy(), d1.copy(), d2.copy()]
[b.pop('b', None) for b in l]
</code></pre>
<p>d1 will now retain the b key.</p>
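<p>A quick demonstration of the aliasing:</p>

```python
d1 = {'a': 1, 'b': 2}
l = [d1, d1]
del l[0]['b']                    # removes 'b' from the single shared dict
assert d1 == {'a': 1}

d2 = {'a': 1, 'b': 2}
copies = [d2.copy(), d2.copy()]  # independent shallow copies
copies[0].pop('b', None)
assert d2 == {'a': 1, 'b': 2}    # the original keeps its 'b' key
print("aliasing demonstrated")
```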
| 2 | 2009-06-29T19:25:26Z | [
"python"
] |
How to delete all the items of a specific key in a list of dicts? | 1,059,924 | <p>I'm trying to remove some items of a dict based on their key, here is my code:</p>
<pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1, d2, d1, d2, d1, d2]
for i in range(len(l)):
    if l[i].has_key('b'):
        del l[i]['b']
print l
</code></pre>
<p>The output will be:</p>
<pre><code>[{'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}]
</code></pre>
<p>Is there a better way to do it?</p>
| 0 | 2009-06-29T19:05:35Z | 1,060,086 | <pre><code>d1 = {'a': 1, 'b': 2}
d2 = {'a': 1}
l = [d1, d2, d1, d2, d1, d2]
for i in range(len(l)):
    if l[i].has_key('b'):
        del l[i]['b']
print l
</code></pre>
<p>Here is a little review of your code:</p>
<ol>
<li>iterating over a list is not done like in C. If you don't need the list index, it's better to use <code>for item in l</code> and then replace <code>l[i]</code> with <code>item</code>.</li>
<li>for the key existence test you can just write <code>if 'b' in l[i]</code></li>
</ol>
<p>So your code becomes:</p>
<pre><code>for item in l:
    if 'b' in item:
        del item['b']
</code></pre>
<p>One more thing to be careful about: the first iteration that calls del already removes the key from every list slot that holds d1, because they all reference the same mutable dict. You need to think of d1 as a reference to the dict, not the value itself (a bit like a pointer in C).</p>
<p>As Lennart Regebro mentioned, you can also use a list comprehension to shorten the code.</p>
| 0 | 2009-06-29T19:37:37Z | [
"python"
] |
using pyodbc on ubuntu to insert an image field on SQL Server | 1,060,035 | <p>I am using <strong>Ubuntu 9.04</strong></p>
<p>I have installed the following package versions:</p>
<pre><code>unixodbc and unixodbc-dev: 2.2.11-16build3
tdsodbc: 0.82-4
libsybdb5: 0.82-4
freetds-common and freetds-dev: 0.82-4
python2.6-dev
</code></pre>
<p>I have configured <code>/etc/unixodbc.ini</code> like this:</p>
<pre><code>[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 2
</code></pre>
<p>I have configured <code>/etc/freetds/freetds.conf</code> like this:</p>
<pre><code>[global]
tds version = 8.0
client charset = UTF-8
text size = 4294967295
</code></pre>
<p>I have grabbed pyodbc revision <code>31e2fae4adbf1b2af1726e5668a3414cf46b454f</code> from <code>http://github.com/mkleehammer/pyodbc</code> and installed it using "<code>python setup.py install</code>"</p>
<p>I have a windows machine with <em>Microsoft SQL Server 2000</em> installed on my local network, up and listening on the local ip address 10.32.42.69. I have an empty database created with name "Common". I have the user "sa" with password "secret" with full privileges.</p>
<p>I am using the following python code to setup the connection:</p>
<pre><code>import pyodbc
odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()
cur.execute("""
IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'testing')
DROP TABLE testing
""")
cur.execute('''
CREATE TABLE testing (
id INTEGER NOT NULL IDENTITY(1,1),
myimage IMAGE NULL,
PRIMARY KEY (id)
)
''')
con.commit()
</code></pre>
<p>Everything <strong>WORKS</strong> up to this point. I have used SQLServer's Enterprise Manager on the server and the new table is there.
Now I want to insert some data into the table.</p>
<pre><code>cur = con.cursor()
# using web data for exact reproduction of the error by all.
# I'm actually reading a local file in my real code.
url = 'http://www.forestwander.com/wp-content/original/2009_02/west-virginia-mountains.jpg'
data = urllib2.urlopen(url).read()
sql = "INSERT INTO testing (myimage) VALUES (?)"
</code></pre>
<p>Now here on my original question, I was having trouble using <code>cur.execute(sql, (data,))</code> but now I've edited the question, because following Vinay Sajip's answer below (THANKS), I have changed it to:</p>
<pre><code>cur.execute(sql, (pyodbc.Binary(data),))
con.commit()
</code></pre>
<p>And insertion <strong>is working perfectly</strong>. I can confirm the size of the inserted data using the following test code:</p>
<pre><code>cur.execute('SELECT DATALENGTH(myimage) FROM testing WHERE id = 1')
data_inside = cur.fetchone()[0]
assert data_inside == len(data)
</code></pre>
<p>Which passes <strong>perfectly</strong>!!!</p>
<p>Now the problem is on retrieval of the data back.</p>
<p>I am trying the common approach:</p>
<pre><code>cur.execute('SELECT myimage FROM testing WHERE id = 1')
result = cur.fetchone()
returned_data = str(result[0]) # transforming buffer object
print 'Original: %d; Returned: %d' % (len(data), len(returned_data))
assert data == returned_data
</code></pre>
<p>However that fails!!</p>
<pre><code>Original: 4744611; Returned: 4096
Traceback (most recent call last):
File "/home/nosklo/devel/teste_mssql_pyodbc_unicode.py", line 53, in <module>
assert data == returned_data
AssertionError
</code></pre>
<p>I've put all the code above in a single file <a href="http://paste.pocoo.org/show/125955/">here</a>, for easy testing of anyone that wants to help.</p>
<p>Now for the question:</p>
<p>I want python code to insert an image file into mssql. I want to query the image back and show it to the user.</p>
<p>I don't care about the column type in mssql. I am using the "<code>IMAGE</code>" column type on the example, but any binary/blob type would do, as long as I get the binary data for the file I inserted back unspoiled. Vinay Sajip said below that this is the preferred data type for this in SQL SERVER 2000.</p>
<p>The data is now being inserted without errors, however when I retrieve the data, only 4k are returned. (Data is truncated on 4096).</p>
<p>How can I make that work?</p>
<p><hr></p>
<p><strong>EDITS</strong>: Vinay Sajip's answer below gave me a hint to use pyodbc.Binary on the field. I have updated the question accordingly. Thanks Vinay Sajip!</p>
<p>Alex Martelli's comment gave me the idea of using the <code>DATALENGTH</code> MS SQL function to test if the data is fully loaded on the column. Thanks Alex Martelli !</p>
| 7 | 2009-06-29T19:26:37Z | 1,060,182 | <p>I think you should be using a <code>pyodbc.Binary</code> instance to wrap the data:</p>
<pre><code>cur.execute('INSERT INTO testing (myimage) VALUES (?)', (pyodbc.Binary(data),))
</code></pre>
<p>Retrieving should be</p>
<pre><code>cur.execute('SELECT myimage FROM testing')
print "image bytes: %r" % str(cur.fetchall()[0][0])
</code></pre>
<p><strong>UPDATE:</strong> The problem is in insertion. Change your insertion SQL to the following:</p>
<pre><code>"""DECLARE @txtptr varbinary(16)
INSERT INTO testing (myimage) VALUES ('')
SELECT @txtptr = TEXTPTR(myimage) FROM testing
WRITETEXT testing.myimage @txtptr ?
"""
</code></pre>
<p>I've also updated the mistake I made in using the value attribute in the retrieval code.</p>
<p>With this change, I'm able to insert and retrieve a 320K JPEG image into the database (retrieved data is identical to inserted data).</p>
<p>N.B. The <code>image</code> data type is deprecated, and is replaced by <code>varbinary(max)</code> in later versions of SQL Server. The same logic for insertion/retrieval should apply, however, for the newer column type.</p>
| 3 | 2009-06-29T19:58:17Z | [
"python",
"sql-server",
"image",
"pyodbc",
"freetds"
] |
using pyodbc on ubuntu to insert an image field on SQL Server | 1,060,035 | <p>I am using <strong>Ubuntu 9.04</strong></p>
<p>I have installed the following package versions:</p>
<pre><code>unixodbc and unixodbc-dev: 2.2.11-16build3
tdsodbc: 0.82-4
libsybdb5: 0.82-4
freetds-common and freetds-dev: 0.82-4
python2.6-dev
</code></pre>
<p>I have configured <code>/etc/unixodbc.ini</code> like this:</p>
<pre><code>[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 2
</code></pre>
<p>I have configured <code>/etc/freetds/freetds.conf</code> like this:</p>
<pre><code>[global]
tds version = 8.0
client charset = UTF-8
text size = 4294967295
</code></pre>
<p>I have grabbed pyodbc revision <code>31e2fae4adbf1b2af1726e5668a3414cf46b454f</code> from <code>http://github.com/mkleehammer/pyodbc</code> and installed it using "<code>python setup.py install</code>"</p>
<p>I have a windows machine with <em>Microsoft SQL Server 2000</em> installed on my local network, up and listening on the local ip address 10.32.42.69. I have an empty database created with name "Common". I have the user "sa" with password "secret" with full privileges.</p>
<p>I am using the following python code to setup the connection:</p>
<pre><code>import pyodbc
odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()
cur.execute("""
IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'testing')
DROP TABLE testing
""")
cur.execute('''
CREATE TABLE testing (
id INTEGER NOT NULL IDENTITY(1,1),
myimage IMAGE NULL,
PRIMARY KEY (id)
)
''')
con.commit()
</code></pre>
<p>Everything <strong>WORKS</strong> up to this point. I have used SQLServer's Enterprise Manager on the server and the new table is there.
Now I want to insert some data into the table.</p>
<pre><code>cur = con.cursor()
# using web data for exact reproduction of the error by all.
# I'm actually reading a local file in my real code.
url = 'http://www.forestwander.com/wp-content/original/2009_02/west-virginia-mountains.jpg'
data = urllib2.urlopen(url).read()
sql = "INSERT INTO testing (myimage) VALUES (?)"
</code></pre>
<p>Now here on my original question, I was having trouble using <code>cur.execute(sql, (data,))</code> but now I've edited the question, because following Vinay Sajip's answer below (THANKS), I have changed it to:</p>
<pre><code>cur.execute(sql, (pyodbc.Binary(data),))
con.commit()
</code></pre>
<p>And insertion <strong>is working perfectly</strong>. I can confirm the size of the inserted data using the following test code:</p>
<pre><code>cur.execute('SELECT DATALENGTH(myimage) FROM testing WHERE id = 1')
data_inside = cur.fetchone()[0]
assert data_inside == len(data)
</code></pre>
<p>Which passes <strong>perfectly</strong>!!!</p>
<p>Now the problem is on retrieval of the data back.</p>
<p>I am trying the common approach:</p>
<pre><code>cur.execute('SELECT myimage FROM testing WHERE id = 1')
result = cur.fetchone()
returned_data = str(result[0]) # transforming buffer object
print 'Original: %d; Returned: %d' % (len(data), len(returned_data))
assert data == returned_data
</code></pre>
<p>However that fails!!</p>
<pre><code>Original: 4744611; Returned: 4096
Traceback (most recent call last):
File "/home/nosklo/devel/teste_mssql_pyodbc_unicode.py", line 53, in <module>
assert data == returned_data
AssertionError
</code></pre>
<p>I've put all the code above in a single file <a href="http://paste.pocoo.org/show/125955/">here</a>, for easy testing of anyone that wants to help.</p>
<p>Now for the question:</p>
<p>I want python code to insert an image file into mssql. I want to query the image back and show it to the user.</p>
<p>I don't care about the column type in mssql. I am using the "<code>IMAGE</code>" column type on the example, but any binary/blob type would do, as long as I get the binary data for the file I inserted back unspoiled. Vinay Sajip said below that this is the preferred data type for this in SQL SERVER 2000.</p>
<p>The data is now being inserted without errors, however when I retrieve the data, only 4k are returned. (Data is truncated on 4096).</p>
<p>How can I make that work?</p>
<p><hr></p>
<p><strong>EDITS</strong>: Vinay Sajip's answer below gave me a hint to use pyodbc.Binary on the field. I have updated the question accordingly. Thanks Vinay Sajip!</p>
<p>Alex Martelli's comment gave me the idea of using the <code>DATALENGTH</code> MS SQL function to test if the data is fully loaded on the column. Thanks Alex Martelli !</p>
| 7 | 2009-06-29T19:26:37Z | 1,073,801 | <p>Huh, just after offering the bounty, I found the solution myself.</p>
<p>You have to use <code>SET TEXTSIZE 2147483647</code> in the query, in addition to the text size configuration option in <code>/etc/freetds/freetds.conf</code>.</p>
<p>I have used</p>
<pre><code>cur.execute('SET TEXTSIZE 2147483647 SELECT myimage FROM testing WHERE id = 1')
</code></pre>
<p>And everything worked fine.</p>
<p>Strangely, here is what the <a href="http://www.freetds.org/userguide/freetdsconf.htm">FreeTDS documentation</a> says about the text size configuration option:</p>
<blockquote>
<p><em>default value of <code>TEXTSIZE</code>, in bytes. For <code>text</code> and <code>image</code> datatypes, sets the maximum width of any returned column. Cf. <code>set TEXTSIZE</code> in the <code>T-SQL</code> documentation for your server.</em></p>
</blockquote>
<p>The documentation also says that the maximum value (and the default) is 4,294,967,295. However, when trying to use that value in the query I got an error; the maximum number I could use in the query is 2,147,483,647 (half of it).</p>
<p>From that explanation I thought that setting the configuration option alone would be enough. It turns out I was wrong; setting <code>TEXTSIZE</code> explicitly in the query fixed the issue.</p>
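<p>A plausible explanation for the halved limit (my assumption; the FreeTDS docs don't spell it out): the documented maximum is the largest <em>unsigned</em> 32-bit value, while <code>SET TEXTSIZE</code> apparently accepts only a <em>signed</em> 32-bit integer, whose maximum is exactly the value that worked. The arithmetic, as a quick Python check:</p>

```python
# Hypothesis (mine, not from the FreeTDS docs): 4,294,967,295 is the
# maximum *unsigned* 32-bit value, while SET TEXTSIZE only accepts a
# *signed* 32-bit integer, whose maximum is exactly the value that worked.
unsigned_32_max = 2**32 - 1  # 4294967295, the documented maximum
signed_32_max = 2**31 - 1    # 2147483647, the value the query accepted
print(unsigned_32_max, signed_32_max)
```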
<p>Below is the complete working code:</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pyodbc
import urllib2
odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()
cur.execute("""
IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'testing')
DROP TABLE testing
""")
cur.execute('''
CREATE TABLE testing (
id INTEGER NOT NULL IDENTITY(1,1),
myimage IMAGE NULL,
PRIMARY KEY (id)
)
''')
con.commit()
cur = con.cursor()
url = 'http://www.forestwander.com/wp-content/original/2009_02/west-virginia-mountains.jpg'
data = urllib2.urlopen(url).read()
sql = "INSERT INTO testing (myimage) VALUES (?)"
cur.execute(sql, (pyodbc.Binary(data),))
con.commit()
cur.execute('SELECT DATALENGTH(myimage) FROM testing WHERE id = 1')
data_inside = cur.fetchone()[0]
assert data_inside == len(data)
cur.execute('SET TEXTSIZE 2147483647 SELECT myimage FROM testing WHERE id = 1')
result = cur.fetchone()
returned_data = str(result[0])
print 'Original: %d; Returned: %d' % (len(data), len(returned_data))
assert data == returned_data
</code></pre>
| 5 | 2009-07-02T11:20:33Z | [
"python",
"sql-server",
"image",
"pyodbc",
"freetds"
] |
using pyodbc on ubuntu to insert a image field on SQL Server | 1,060,035 | <p>I am using <strong>Ubuntu 9.04</strong></p>
<p>I have installed the following package versions:</p>
<pre><code>unixodbc and unixodbc-dev: 2.2.11-16build3
tdsodbc: 0.82-4
libsybdb5: 0.82-4
freetds-common and freetds-dev: 0.82-4
python2.6-dev
</code></pre>
<p>I have configured <code>/etc/unixodbc.ini</code> like this:</p>
<pre><code>[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 2
</code></pre>
<p>I have configured <code>/etc/freetds/freetds.conf</code> like this:</p>
<pre><code>[global]
tds version = 8.0
client charset = UTF-8
text size = 4294967295
</code></pre>
<p>I have grabbed pyodbc revision <code>31e2fae4adbf1b2af1726e5668a3414cf46b454f</code> from <code>http://github.com/mkleehammer/pyodbc</code> and installed it using "<code>python setup.py install</code>"</p>
<p>I have a windows machine with <em>Microsoft SQL Server 2000</em> installed on my local network, up and listening on the local ip address 10.32.42.69. I have an empty database created with name "Common". I have the user "sa" with password "secret" with full privileges.</p>
<p>I am using the following python code to setup the connection:</p>
<pre><code>import pyodbc
odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()
cur.execute("""
IF EXISTS(SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'testing')
DROP TABLE testing
""")
cur.execute('''
CREATE TABLE testing (
id INTEGER NOT NULL IDENTITY(1,1),
myimage IMAGE NULL,
PRIMARY KEY (id)
)
''')
con.commit()
</code></pre>
<p>Everything <strong>WORKS</strong> up to this point. I have used SQL Server's Enterprise Manager on the server and the new table is there.
Now I want to insert some data into the table.</p>
<pre><code>import urllib2
cur = con.cursor()
# using web data for exact reproduction of the error by all.
# I'm actually reading a local file in my real code.
url = 'http://www.forestwander.com/wp-content/original/2009_02/west-virginia-mountains.jpg'
data = urllib2.urlopen(url).read()
sql = "INSERT INTO testing (myimage) VALUES (?)"
</code></pre>
<p>Now here on my original question, I was having trouble using <code>cur.execute(sql, (data,))</code> but now I've edited the question, because following Vinay Sajip's answer below (THANKS), I have changed it to:</p>
<pre><code>cur.execute(sql, (pyodbc.Binary(data),))
con.commit()
</code></pre>
<p>And insertion <strong>is working perfectly</strong>. I can confirm the size of the inserted data using the following test code:</p>
<pre><code>cur.execute('SELECT DATALENGTH(myimage) FROM testing WHERE id = 1')
data_inside = cur.fetchone()[0]
assert data_inside == len(data)
</code></pre>
<p>Which passes <strong>perfectly</strong>!!!</p>
<p>Now the problem is on retrieval of the data back.</p>
<p>I am trying the common approach:</p>
<pre><code>cur.execute('SELECT myimage FROM testing WHERE id = 1')
result = cur.fetchone()
returned_data = str(result[0]) # transforming buffer object
print 'Original: %d; Returned: %d' % (len(data), len(returned_data))
assert data == returned_data
</code></pre>
<p>However that fails!!</p>
<pre><code>Original: 4744611; Returned: 4096
Traceback (most recent call last):
File "/home/nosklo/devel/teste_mssql_pyodbc_unicode.py", line 53, in <module>
assert data == returned_data
AssertionError
</code></pre>
<p>I've put all the code above in a single file <a href="http://paste.pocoo.org/show/125955/">here</a>, for easy testing of anyone that wants to help.</p>
<p>Now for the question:</p>
<p>I want python code to insert an image file into mssql. I want to query the image back and show it to the user.</p>
<p>I don't care about the column type in mssql. I am using the "<code>IMAGE</code>" column type on the example, but any binary/blob type would do, as long as I get the binary data for the file I inserted back unspoiled. Vinay Sajip said below that this is the preferred data type for this in SQL SERVER 2000.</p>
<p>The data is now being inserted without errors, however when I retrieve the data, only 4k are returned. (Data is truncated on 4096).</p>
<p>How can I make that work?</p>
<p><hr></p>
<p><strong>EDITS</strong>: Vinay Sajip's answer below gave me a hint to use pyodbc.Binary on the field. I have updated the question accordingly. Thanks Vinay Sajip!</p>
<p>Alex Martelli's comment gave me the idea of using the <code>DATALENGTH</code> MS SQL function to test if the data is fully loaded on the column. Thanks Alex Martelli !</p>
| 7 | 2009-06-29T19:26:37Z | 6,943,683 | <p>I had a similar <code>4096</code> truncation issue on <code>TEXT</code> fields, which <code>SET TEXTSIZE 2147483647</code> fixed, but setting the TDS protocol version via an environment variable also fixed it for me:</p>
<pre><code>import os
os.environ['TDSVER'] = '8.0'
</code></pre>
| 1 | 2011-08-04T14:56:53Z | [
"python",
"sql-server",
"image",
"pyodbc",
"freetds"
] |
Python Decorator 3.0 and arguments to the decorator | 1,060,193 | <p>I'm excited to see the latest version of the <code>decorator</code> python module (3.0). It looks a lot cleaner (e.g. the syntax is more sugary than ever) than previous iterations.</p>
<p>However, it seems to have lousy support (e.g. "sour" syntax, to horribly stretch the metaphor) for decorators that take arguments themselves. Does anyone have a good example for how you'd cleanly do this using <code>decorator</code> 3.0?</p>
<pre><code> def substitute_args(fun, arg_sub_dict):
def wrapper(arg):
new_arg = arg_sub_dict.get(arg, arg)
return fun(new_arg)
# some magic happens here to make sure that type signature,
# __name__, __doc__, etc. of wrapper matches fun
return wrapper
</code></pre>
| 5 | 2009-06-29T20:00:53Z | 1,060,244 | <p>In this case, you need to make your function return the decorator. (Anything can be solved by another level of indirection...)</p>
<pre><code>from decorator import decorator
def substitute_args(arg_sub_dict):
@decorator
def wrapper(fun, arg):
new_arg = arg_sub_dict.get(arg, arg)
return fun(new_arg)
return wrapper
</code></pre>
<p>This means <code>substitute_args</code> isn't a decorator itself; it's a decorator <em>factory</em>. Here's the equivalent without the <code>decorator</code> module.</p>
<pre><code>def substitute_args(arg_sub_dict):
def my_decorator(fun):
def wrapper(arg):
new_arg = arg_sub_dict.get(arg, arg)
return fun(new_arg)
# magic to update __name__, etc.
return wrapper
return my_decorator
</code></pre>
<p>Three levels deep isn't very convenient, but remember two of them are when the function is defined:</p>
<pre><code>@substitute_args({}) # this function is called and return value is the decorator
def f(x):
return x
# that (anonymous) decorator is applied to f
</code></pre>
<p>Which is equivalent to:</p>
<pre><code>def f(x):
return x
f = substitute_args({})(f) # notice the double call
</code></pre>
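<p>For reference, here is the same three-level pattern as a self-contained sketch in modern Python, with the standard library's <code>functools.wraps</code> supplying the metadata-preserving "magic" instead of the <code>decorator</code> module:</p>

```python
import functools

def substitute_args(arg_sub_dict):
    """Decorator factory: returns a decorator that maps the argument
    through arg_sub_dict before calling the wrapped function."""
    def my_decorator(fun):
        @functools.wraps(fun)  # preserves __name__, __doc__, etc.
        def wrapper(arg):
            return fun(arg_sub_dict.get(arg, arg))
        return wrapper
    return my_decorator

@substitute_args({'a': 'b'})
def identity(x):
    """Return the argument unchanged."""
    return x

print(identity('a'))      # 'b' -- substituted
print(identity('z'))      # 'z' -- passed through unchanged
print(identity.__name__)  # 'identity' -- metadata preserved
```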
| 7 | 2009-06-29T20:11:15Z | [
"python",
"decorator"
] |
Python Decorator 3.0 and arguments to the decorator | 1,060,193 | <p>I'm excited to see the latest version of the <code>decorator</code> python module (3.0). It looks a lot cleaner (e.g. the syntax is more sugary than ever) than previous iterations.</p>
<p>However, it seems to have lousy support (e.g. "sour" syntax, to horribly stretch the metaphor) for decorators that take arguments themselves. Does anyone have a good example for how you'd cleanly do this using <code>decorator</code> 3.0?</p>
<pre><code> def substitute_args(fun, arg_sub_dict):
def wrapper(arg):
new_arg = arg_sub_dict.get(arg, arg)
return fun(new_arg)
# some magic happens here to make sure that type signature,
# __name__, __doc__, etc. of wrapper matches fun
return wrapper
</code></pre>
| 5 | 2009-06-29T20:00:53Z | 2,187,674 | <p>here is another way i have just discovered: check whether the first (and only) argument to your decorator is callable; if so, you are done and can return your behavior-modifying wrapper method (itself decorated with <code>functools.wraps</code> to preserve name and documentation string). </p>
<p>in the other case, one or more named or positional arguments should be present; you can collect those arguments and return a callable that accepts a callable as first argument and returns a wrapper methodâand since that description fits the description of the decorator method, return that very decorator method! iâve used <code>functools.partial</code> here to get a version of my decorator, <code>is_global_method</code> (which iâm working on right nowâits implementation is of course nonsense as shown below, this is only to demonstrate the decoration works).</p>
<p>this solution appears to work but sure needs more testing. if you quint our eyes, you can see that the trick is only three or four lines as a pattern to remember. now i wonder whether i can wrap that kind of functionality into another decorator? ah, the metaness of it! </p>
<pre><code>from functools import wraps
from functools import partial
_ = print
is_instance_of = isinstance
is_callable = lambda x: hasattr( x, '__call__' )
def is_global_method( x, *, name = None ):
if is_callable( x ):
@wraps( x )
def wrapper( *P, **Q ):
return { 'name': name, 'result': x( *P, **Q ), }
return wrapper
# assert is_instance_of( x, str ) # could do some sanity checks here
return partial( is_global_method, name = x )
@is_global_method
def f( x ):
"""This is method f."""
return x ** 2
@is_global_method( 'foobar' )
def g( x ):
"""This is method g."""
return x ** 2
_( f.__name__ )
_( f.__doc__ )
_( f( 42 ) )
_( g.__name__ )
_( g.__doc__ )
_( g( 42 ) )
</code></pre>
| -2 | 2010-02-02T21:01:47Z | [
"python",
"decorator"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 1,060,330 | <p>Why are there two nested iterations? For me it produces the same list of data with only one iteration:</p>
<pre><code>for single_date in (start_date + timedelta(n) for n in range(day_count)):
print ...
</code></pre>
<p>And no list gets stored, only one generator is iterated over. Also the "if" in the generator seems to be unnecessary.</p>
<p>After all, a linear sequence should only require one iterator, not two.</p>
<h2>Update after discussion with John Machin:</h2>
<p>Maybe the most elegant solution is using a generator function to completely hide/abstract the iteration over the range of dates:</p>
<pre><code>from datetime import timedelta, date
def daterange(start_date, end_date):
for n in range(int ((end_date - start_date).days)):
yield start_date + timedelta(n)
start_date = date(2013, 1, 1)
end_date = date(2015, 6, 2)
for single_date in daterange(start_date, end_date):
print single_date.strftime("%Y-%m-%d")
</code></pre>
<p>NB: For consistency with the built-in <code>range()</code> function this iteration stops <strong>before</strong> reaching the <code>end_date</code>. So for inclusive iteration use the next day, as you would with <code>range()</code>.</p>
| 235 | 2009-06-29T20:27:14Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 1,060,352 | <p>This might be more clear:</p>
<pre><code>d = start_date
delta = datetime.timedelta(days=1)
while d <= end_date:
print d.strftime("%Y-%m-%d")
d += delta
</code></pre>
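<p>The same loop is easy to wrap into a small helper that collects the dates instead of printing them (a Python 3 sketch):</p>

```python
import datetime

def dates_between(start_date, end_date):
    """Return every date from start_date through end_date, inclusive."""
    dates = []
    d = start_date
    while d <= end_date:
        dates.append(d)
        d += datetime.timedelta(days=1)
    return dates

result = dates_between(datetime.date(2009, 5, 30), datetime.date(2009, 6, 9))
print(len(result))  # 11
```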
| 99 | 2009-06-29T20:31:43Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 1,060,376 | <pre><code>import datetime
def daterange(start, stop, step=datetime.timedelta(days=1), inclusive=False):
# inclusive=False to behave like range by default
if step.days > 0:
while start < stop:
yield start
start = start + step
# not +=! don't modify object passed in if it's mutable
# since this function is not restricted to
# only types from datetime module
elif step.days < 0:
while start > stop:
yield start
start = start + step
if inclusive and start == stop:
yield start
# ...
for date in daterange(start_date, end_date, inclusive=True):
print strftime("%Y-%m-%d", date.timetuple())
</code></pre>
<p>This function does more than you strictly require, by supporting negative step, etc. As long as you factor out your range logic, then you don't need the separate <code>day_count</code> and most importantly the code becomes easier to read as you call the function from multiple places.</p>
| 12 | 2009-06-29T20:35:13Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 1,061,779 | <p>Use the <a href="http://labix.org/python-dateutil"><code>dateutil</code></a> library:</p>
<pre><code>from datetime import date
from dateutil.rrule import rrule, DAILY
a = date(2009, 5, 30)
b = date(2009, 6, 9)
for dt in rrule(DAILY, dtstart=a, until=b):
print dt.strftime("%Y-%m-%d")
</code></pre>
<p>This python library has many more advanced features, some very useful, like <code>relative delta</code>sâand is implemented as a single file (module) that's easily included into a project.</p>
| 91 | 2009-06-30T04:49:37Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 1,063,240 | <pre><code>import datetime
def daterange(start, stop, step_days=1):
current = start
step = datetime.timedelta(step_days)
if step_days > 0:
while current < stop:
yield current
current += step
elif step_days < 0:
while current > stop:
yield current
current += step
else:
raise ValueError("daterange() step_days argument must not be zero")
if __name__ == "__main__":
from pprint import pprint as pp
lo = datetime.date(2008, 12, 27)
hi = datetime.date(2009, 1, 5)
pp(list(daterange(lo, hi)))
pp(list(daterange(hi, lo, -1)))
pp(list(daterange(lo, hi, 7)))
pp(list(daterange(hi, lo, -7)))
assert not list(daterange(lo, hi, -1))
assert not list(daterange(hi, lo))
assert not list(daterange(lo, hi, -7))
assert not list(daterange(hi, lo, 7))
</code></pre>
| 4 | 2009-06-30T11:49:11Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 3,059,362 | <pre class="lang-py prettyprint-override"><code>for i in range(16):
print datetime.date.today() + datetime.timedelta(days=i)
</code></pre>
| 2 | 2010-06-17T06:26:49Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 6,673,470 | <p>What about the following for doing a range incremented by days:</p>
<pre><code>for d in map( lambda x: startDate+datetime.timedelta(days=x), xrange( (stopDate-startDate).days ) ):
# Do stuff here
</code></pre>
<ul>
<li>startDate and stopDate are datetime.date objects</li>
</ul>
<p>For a generic version:</p>
<pre><code>for d in map( lambda x: startTime+x*stepTime, xrange( int( (stopTime-startTime).total_seconds() / stepTime.total_seconds() ) ) ):
# Do stuff here
</code></pre>
<ul>
<li>startTime and stopTime are datetime.date or datetime.datetime objects (both should be the same type)</li>
<li>stepTime is a timedelta object</li>
</ul>
<p>Note that .total_seconds() is only supported from Python 2.7 on. If you are stuck with an earlier version you can write your own function:</p>
<pre><code>def total_seconds( td ):
return float(td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6
</code></pre>
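<p>A quick sanity check (Python 3 syntax) that this backport agrees with the built-in method:</p>

```python
from datetime import timedelta

def total_seconds(td):
    # manual equivalent of timedelta.total_seconds() for Python < 2.7
    return float(td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6

td = timedelta(days=2, seconds=3661, microseconds=500000)
print(total_seconds(td))   # 176461.5
print(td.total_seconds())  # 176461.5, same value as the built-in
```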
| 0 | 2011-07-13T02:35:14Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 15,969,361 | <p>Why not try:</p>
<pre><code>import datetime as dt
start_date = dt.datetime(2012, 12,1)
end_date = dt.datetime(2012, 12,5)
total_days = (end_date - start_date).days + 1 #inclusive 5 days
for day_number in range(total_days):
current_date = (start_date + dt.timedelta(days = day_number)).date()
print current_date
</code></pre>
| 8 | 2013-04-12T10:47:11Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 23,853,523 | <p>Pandas is great for time series in general, and has direct support for date ranges.</p>
<pre><code>import pandas as pd
daterange = pd.date_range(start_date, end_date)
</code></pre>
<p>You can then loop over the daterange to print the date:</p>
<pre><code>for single_date in daterange:
print (single_date.strftime("%Y-%m-%d"))
</code></pre>
<p>It also has lots of options to make life easier. For example if you only wanted weekdays, you would just swap in bdate_range. See <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#generating-ranges-of-timestamps">http://pandas.pydata.org/pandas-docs/stable/timeseries.html#generating-ranges-of-timestamps</a></p>
<p>The power of Pandas is really its dataframes, which support vectorized operations (much like numpy) that make operations across large quantities of data very fast and easy.</p>
<p>EDIT:
You could also completely skip the for loop and just print it directly, which is easier and more efficient:</p>
<pre><code>print(daterange)
</code></pre>
| 13 | 2014-05-25T08:44:46Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 24,977,763 | <p>This function has some extra features:</p>
<ul>
<li>can pass a string matching the DATE_FORMAT for start or end and it is converted to a date object</li>
<li>can pass a date object for start or end</li>
<li><p>error checking in case the end is older than the start</p>
<pre><code>import datetime
from datetime import timedelta
DATE_FORMAT = '%Y/%m/%d'
def daterange(start, end):
def convert(date):
try:
date = datetime.datetime.strptime(date, DATE_FORMAT)
return date.date()
except TypeError:
return date
def get_date(n):
return datetime.datetime.strftime(convert(start) + timedelta(days=n), DATE_FORMAT)
days = (convert(end) - convert(start)).days
if days <= 0:
raise ValueError('The start date must be before the end date.')
for n in range(0, days):
yield get_date(n)
start = '2014/12/1'
end = '2014/12/31'
print list(daterange(start, end))
start_ = datetime.date.today()
end = '2015/12/1'
print list(daterange(start, end))
</code></pre></li>
</ul>
| 0 | 2014-07-27T03:55:35Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 36,831,624 | <p><strong>Show the last n days from today:</strong></p>
<pre><code>import datetime
for i in range(0, 100):
    print (datetime.date.today() + datetime.timedelta(i)).isoformat()
</code></pre>
<p><strong>Output:</strong> </p>
<pre><code>2016-06-29
2016-06-30
2016-07-01
2016-07-02
2016-07-03
2016-07-04
</code></pre>
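<p>Note that the loop above moves <em>forward</em> from today. If what you want is the <em>preceding</em> n days ending with today, subtract the delta instead. A small sketch (the choice of <code>n = 5</code> is arbitrary):</p>

```python
import datetime

# The previous n days, most recent first, ending with today
n = 5
last_days = [(datetime.date.today() - datetime.timedelta(days=i)).isoformat()
             for i in range(n)]
print(last_days)
```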
| 0 | 2016-04-25T03:34:21Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
    print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 37,508,911 | <p>Numpy's <code>arange</code> function can be applied to dates:</p>
<pre><code>import numpy as np
from datetime import datetime, timedelta
d0 = datetime(2009, 1, 1)
d1 = datetime(2010, 1, 1)
dt = timedelta(days=1)
dates = np.arange(d0, d1, dt).astype(datetime)
</code></pre>
<p>The use of <code>astype</code> is to convert from <code>numpy.datetime64</code> to an array of <code>datetime.datetime</code> objects.</p>
| 1 | 2016-05-29T10:41:40Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
    print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 38,402,483 | <p>Here's code for a general date range function, similar to Ber's answer, but more flexible:</p>
<pre><code>import datetime
import functools

def count_timedelta(delta, step, seconds_in_interval):
    """Helper function for iterate. Finds the number of intervals in the timedelta."""
    return int(delta.total_seconds() / (seconds_in_interval * step))

def range_dt(start, end, step=1, interval='day'):
    """Iterate over datetimes or dates, similar to builtin range."""
    intervals = functools.partial(count_timedelta, (end - start), step)

    if interval == 'week':
        for i in range(intervals(3600 * 24 * 7)):
            yield start + datetime.timedelta(weeks=i) * step
    elif interval == 'day':
        for i in range(intervals(3600 * 24)):
            yield start + datetime.timedelta(days=i) * step
    elif interval == 'hour':
        for i in range(intervals(3600)):
            yield start + datetime.timedelta(hours=i) * step
    elif interval == 'minute':
        for i in range(intervals(60)):
            yield start + datetime.timedelta(minutes=i) * step
    elif interval == 'second':
        for i in range(intervals(1)):
            yield start + datetime.timedelta(seconds=i) * step
    elif interval == 'millisecond':
        for i in range(intervals(1e-3)):
            yield start + datetime.timedelta(milliseconds=i) * step
    elif interval == 'microsecond':
        for i in range(intervals(1e-6)):
            yield start + datetime.timedelta(microseconds=i) * step
    else:
        raise AttributeError("Interval must be 'week', 'day', 'hour', 'minute', "
                             "'second', 'millisecond' or 'microsecond'.")
</code></pre>
| 0 | 2016-07-15T17:59:27Z | [
"python",
"datetime",
"iteration"
] |
Iterating through a range of dates in Python | 1,060,279 | <p>I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. </p>
<pre class="lang-py prettyprint-override"><code>day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
    print strftime("%Y-%m-%d", single_date.timetuple())
</code></pre>
<h2>Notes</h2>
<ul>
<li>I'm not actually using this to print. That's just for demo purposes. </li>
<li>The <code>start_date</code> and <code>end_date</code> variables are <code>datetime.date</code> objects because I don't need the timestamps. (They're going to be used to generate a report).</li>
</ul>
<h2>Sample Output</h2>
<p>For a start date of <code>2009-05-30</code> and an end date of <code>2009-06-09</code>:</p>
<pre class="lang-py prettyprint-override"><code>2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
</code></pre>
| 172 | 2009-06-29T20:16:02Z | 40,023,824 | <p>This is the most human-readable solution I can think of.</p>
<pre><code>import datetime
def daterange(start, end, step=datetime.timedelta(1)):
    curr = start
    while curr < end:
        yield curr
        curr += step
</code></pre>
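<p>For instance, it can be exercised like this (the generator is repeated so the snippet runs standalone; the sample dates are arbitrary):</p>

```python
import datetime

def daterange(start, end, step=datetime.timedelta(1)):
    # Yield values from start (inclusive) to end (exclusive), advancing by step
    curr = start
    while curr < end:
        yield curr
        curr += step

days = list(daterange(datetime.date(2009, 5, 30), datetime.date(2009, 6, 2)))
print(days)  # [datetime.date(2009, 5, 30), datetime.date(2009, 5, 31), datetime.date(2009, 6, 1)]
```

<p>Because <code>step</code> can be any <code>timedelta</code>, the same generator also iterates over hours or weeks.</p>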
| 0 | 2016-10-13T14:28:08Z | [
"python",
"datetime",
"iteration"
] |
How do I output a dynamically generated web page to a .html page instead of a .py CGI page? | 1,060,289 | <p>Hi,
So I've just started learning Python on WAMP. I've got the results of an HTML form using CGI, and successfully performed a database search with MySQLdb. I can return the results to a page that ends with .py by using print statements in the Python CGI code, but I want to create a webpage that's .html and have that returned to the user, and/or keep them on the same web address when the database search results return.</p>
<p>Thanks,
Paul</p>
<p>Edit: to clarify, on my local machine I see /localhost/search.html in the address bar. I submit the HTML form and receive a results page at /localhost/cgi-bin/searchresults.py. I want to see the results on /localhost/results.html or /localhost/search.html. If this was on a public server, I'm ASSUMING it would return .../cgi-bin/searchresults.py; the last time I saw /cgi-bin/ directories in a URL was in the 90s. I've glanced at AddHandler, as David suggested, but I'm not sure if that's what I want.</p>
<p>Edit: thanks, all of you, for your input. Yep, without using frameworks, mod_rewrite seems the way to go, but having looked at that, I decided to save myself the trouble and go with Django with mod_wsgi, mainly because of the size of its user base and the amount of docs. I might switch to a lighter/more customisable framework once I've got the basics.</p>
| 1 | 2009-06-29T20:18:18Z | 1,060,342 | <p>First, I'd suggest that you remember that URLs are URLs and that file extensions don't matter, and that you should just leave it.</p>
<p>If that isn't enough, then remember that URLs are URLs and that file extensions don't matter — and configure Apache to use a different rule to determine what is a CGI program rather than a static file to be served up as is. You can use <a href="http://httpd.apache.org/docs/2.0/howto/cgi.html" rel="nofollow">AddHandler</a> to add a handler for files on the hard disk with a .html extension.</p>
<p>Alternatively, you could use <a href="http://www.workingwith.me.uk/articles/scripting/mod%5Frewrite" rel="nofollow">mod_rewrite</a> to tell Apache that …/foo.html means …/foo.py</p>
<p>Finally, I'd suggest that if you do muck around with what URLs look like, that you remove any sign of something that looks like a file extension (so that …/foo is requested rather then …/foo.anything).</p>
<p>As for keeping the user on the same address for results as for the request … that is just a matter of having the program output the basic page without results if it doesn't get the query string parameters that indicate a search term had been passed.</p>
| 3 | 2009-06-29T20:30:33Z | [
"python",
"html",
"webpage"
] |
USB - sync vs async vs semi-async | 1,060,305 | <p>Updates:</p>
<p>I wrote an asynchronous C version and it works as it should. </p>
<p>Turns out the speed issue was due to Python's GIL. There's a method to fine-tune its behavior:
<code>sys.setcheckinterval(interval)</code></p>
<p>Setting the interval to zero (the default is 100) fixes the slow speed issue. Now all that's left is to figure out what's causing the other issue (not all pixels are filled). This one doesn't make any sense. usbmon shows all the communications are going through. libusb's debug messaging shows nothing out of the ordinary. I guess I need to take usbmon's output and compare sync vs async. The data that usbmon shows seems to look correct at a glance (the first byte should be 0x96 or 0x95).</p>
<p>As said below in the original question, S. Lott, it's for a USB LCD controller. There are three different versions of drv_send, which is the outgoing endpoint method. I've explained the differences below. Maybe it'll help if I outline the asynchronous USB operations. Note that synchronous USB operations work the same way; it's just that they're done synchronously.</p>
<p>We can view asynchronous I/O as a 5 step process:</p>
<ol>
<li>Allocation: allocate a libusb_transfer (This is self.transfer)</li>
<li>Filling: populate the libusb_transfer instance with information about the transfer you wish to perform (libusb_fill_bulk_transfer)</li>
<li>Submission: ask libusb to submit the transfer (libusb_submit_transfer)</li>
<li>Completion handling: examine transfer results in the libusb_transfer structure (libusb_handle_events and libusb_handle_events_timeout)</li>
<li>Deallocation: clean up resources (Not shown below)</li>
</ol>
<p>Original question:</p>
<p>I have three different versions. One's entirely synchronous, one's semi-asynchronous, and the last is fully asynchronous. The difference is that the synchronous version fully populates the LCD display I'm controlling with the expected pixels, <s>and it's really fast</s>. The semi-asynchronous version only populates a portion of the display, <s>but it's still very fast</s>. The asynchronous version <s>is really slow and</s> only fills a portion of the display. I'm baffled why the pixels aren't fully populated, <s>and why the asynchronous version is really slow.</s> Any clues?</p>
<p>Here's the fully synchronous version:</p>
<pre><code>def drv_send(self, data):
    if not self.Connected():
        return
    self.drv_locked = True
    buffer = ''
    for c in data:
        buffer = buffer + chr(c)
    length = len(buffer)
    out_buffer = cast(buffer, POINTER(c_ubyte))
    libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0)
    lib.libusb_submit_transfer(self.transfer)
    while self.drv_locked:
        r = lib.libusb_handle_events(None)
        if r < 0:
            if r == LIBUSB_ERROR_INTERRUPTED:
                continue
            lib.libusb_cancel_transfer(transfer)
            while self.drv_locked:
                if lib.libusb_handle_events(None) < 0:
                    break
    self.count += 1
</code></pre>
<p>Here's the semi-asynchronous version:</p>
<pre><code>def drv_send(self, data):
    if not self.Connected():
        return
    def f(d):
        self.drv_locked = True
        buffer = ''
        for c in data:
            buffer = buffer + chr(c)
        length = len(buffer)
        out_buffer = cast(buffer, POINTER(c_ubyte))
        libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0)
        lib.libusb_submit_transfer(self.transfer)
        while self.drv_locked:
            r = lib.libusb_handle_events(None)
            if r < 0:
                if r == LIBUSB_ERROR_INTERRUPTED:
                    continue
                lib.libusb_cancel_transfer(transfer)
                while self.drv_locked:
                    if lib.libusb_handle_events(None) < 0:
                        break
        self.count += 1
    self.command_queue.put(Command(f, data))
</code></pre>
<p>Here's the fully asynchronous version. device_poll is in a thread by itself.</p>
<pre><code>def device_poll(self):
    while self.Connected():
        tv = TIMEVAL(1, 0)
        r = lib.libusb_handle_events_timeout(None, byref(tv))
        if r < 0:
            break

def drv_send(self, data):
    if not self.Connected():
        return
    def f(d):
        self.drv_locked = True
        buffer = ''
        for c in data:
            buffer = buffer + chr(c)
        length = len(buffer)
        out_buffer = cast(buffer, POINTER(c_ubyte))
        libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0)
        lib.libusb_submit_transfer(self.transfer)
        self.count += 1
    self.command_queue.put(Command(f, data))
</code></pre>
<p>And here's where the queue is emptied. It's the callback for a gobject timeout.</p>
<pre><code>def command_worker(self):
    if self.drv_locked:  # or time.time() - self.command_time < self.command_rate:
        return True
    try:
        tmp = self.command_queue.get_nowait()
    except Queue.Empty:
        return True
    tmp.func(*tmp.args)
    self.command_time = time.time()
    return True
</code></pre>
<p>Here's the transfer's callback. It just changes the locked state back to false, indicating the operation's finished.</p>
<pre><code>def cb_send_transfer(self, transfer):
    if transfer[0].status.value != LIBUSB_TRANSFER_COMPLETED:
        error("%s: transfer status %d" % (self.name, transfer.status))
    print "cb_send_transfer", self.count
    self.drv_locked = False
</code></pre>
| 4 | 2009-06-29T20:22:41Z | 1,082,707 | <p>OK, I don't know if I get you right. You have some device with an LCD, you have some firmware on it to handle USB requests, and on the PC side you are using PyUSB, which wraps libusb.</p>
<p>A couple of suggestions. If you are experiencing speed problems, try to limit the data you are transferring. Do not transfer the whole raw data; maybe only the pixels that changed.</p>
<p>Second, have you measured the speed of the transfers using some USB analyzer software? If you don't have money for a hardware USB analyzer, maybe try a software version. I have never used that kind of analyzer, but I think the data provided by them is not very reliable.</p>
<p>Thirdly, see what the device is really doing; maybe that is the bottleneck of your data transfers.</p>
<p>I don't have much time today to answer your question exactly, so I will get back to this later.</p>
<p>I have been watching this thread for some time, and there is dead silence around this, so I tried to spare some time and look deeper. Still not much time today; maybe later. Unfortunately I am no Python expert, but I know some stuff about C, C++, Windows and, most of all, USB. I think this may be an LCD device problem. What are you using? Because if the transfers work fine and the data was received by the device, that points to a device problem.</p>
<p>I looked at your code a little. Could you do some testing, sending only a 1-byte, an 8-byte, and an endpoint-size-length transfer, and see how it looks in usbmon?</p>
<p>The endpoint size is the size of the hardware buffer used by the picoLCD USB controller. I am not sure what it is for yours, but I am guessing that when you send an endpoint-size message, the next message should be 0 bytes in length. Maybe that is the problem.
Regarding the test, I assume you have seen the data which you programmed to send.
A second thing could be that the data gets overwritten, or not received fast enough. By overwritten I mean the LCD could not see the data end, and mixed one transfer with another.</p>
<p>I am not sure what usbmon is capable of showing, but according to the USB standard, after an endpoint-size-length packet there should be a 0-length packet sent, showing that it is the end of the transfer.</p>
| 1 | 2009-07-04T18:13:09Z | [
"python",
"usb",
"ctypes",
"libusb"
] |
payment processing - pylons/python | 1,060,334 | <p>I'm building an application that eventually needs to process cc #s. I'd like to handle it completely in my app, and then hand off the information securely to my payment gateway. Ideally the user would have no interaction with the payment gateway directly.</p>
<p>Any thoughts? Is there an easier way?</p>
| 2 | 2009-06-29T20:28:25Z | 1,060,758 | <p>That's a common thing to do. Please follow the instructions your payment gateway gives you on how to send info to them, and write the code. If you have an issue, feel free to ask a more specific question.</p>
| 1 | 2009-06-29T21:50:25Z | [
"python",
"pylons",
"payment-gateway",
"payment"
] |
payment processing - pylons/python | 1,060,334 | <p>I'm building an application that eventually needs to process cc #s. I'd like to handle it completely in my app, and then hand off the information securely to my payment gateway. Ideally the user would have no interaction with the payment gateway directly.</p>
<p>Any thoughts? Is there an easier way?</p>
| 2 | 2009-06-29T20:28:25Z | 1,061,221 | <p>Most payment gateways offer a few mechanisms for submitting CC payments:</p>
<p>1) A simple HTTPS POST where your application collects the customer's payment details (card number, expiry date, amount, optional CVV) and then submits this to the gateway. The payment parameters are sent through in the POST variables, and the gateway returns a HTTP response.</p>
<p>2) Via an API (often XML over HTTPS). In this case your application collects the customer's payment details, constructs an XML document encapsulating the payment details, and then posts this information to the gateway. The gateway response will be an XML document which your application then has to parse and interpret.</p>
<p>3) Some form of redirect to web pages hosted by the payment gateway. The payment gateway collects the customer's CC number and other details, processes the payment, and then redirects the customer back to a web page hosted by you.</p>
<p>Option 3 is usually the easiest solution but would require the customer to interact with pages hosted by the gateway (although this can usually be made to be almost transparent).
1 and 2 above would satisfy your requirements with 1 being the simplest of the two to implement. </p>
<p>Because your preference is to have your application collect the payment details, you may need to consider whether you need to acquire PCI DSS compliance, but there are many factors that affect this. There is a lot of information about <a href="https://www.pcisecuritystandards.org/" rel="nofollow">PCI DSS here</a> and on <a href="http://en.wikipedia.org/wiki/PCI%5FDSS" rel="nofollow">Wikipedia</a>.</p>
| 3 | 2009-06-30T00:28:13Z | [
"python",
"pylons",
"payment-gateway",
"payment"
] |
payment processing - pylons/python | 1,060,334 | <p>I'm building an application that eventually needs to process cc #s. I'd like to handle it completely in my app, and then hand off the information securely to my payment gateway. Ideally the user would have no interaction with the payment gateway directly.</p>
<p>Any thoughts? Is there an easier way?</p>
| 2 | 2009-06-29T20:28:25Z | 1,063,700 | <p>You will probably find that it's easier to just let the payment gateway handle it. It's best to leave PCI compliance to the experts.</p>
| 1 | 2009-06-30T13:29:06Z | [
"python",
"pylons",
"payment-gateway",
"payment"
] |
How do I include a PHP script in Python? | 1,060,436 | <p>I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page:</p>
<pre><code><?php
print("<h1>News and Updates</h1>");
include("news-generator.php");
print("</body>");
?>
</code></pre>
<p>(I cut down the example for simplicity.)</p>
<p>Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this:</p>
<pre><code>page_html = "<h1>News and Updates</h1>"
news_script_output = php("news-generator.php") //should return a string
print page_html + news_script_output
</code></pre>
| 6 | 2009-06-29T20:44:02Z | 1,060,509 | <p>PHP is a program. You can run any program with <a href="http://docs.python.org/library/subprocess.html" rel="nofollow">subprocess</a>.</p>
<p>The hard part is simulating the whole CGI environment that PHP expects. </p>
| 6 | 2009-06-29T20:57:03Z | [
"php",
"python",
"scripting",
"integration",
"execution"
] |
How do I include a PHP script in Python? | 1,060,436 | <p>I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page:</p>
<pre><code><?php
print("<h1>News and Updates</h1>");
include("news-generator.php");
print("</body>");
?>
</code></pre>
<p>(I cut down the example for simplicity.)</p>
<p>Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this:</p>
<pre><code>page_html = "<h1>News and Updates</h1>"
news_script_output = php("news-generator.php") //should return a string
print page_html + news_script_output
</code></pre>
| 6 | 2009-06-29T20:44:02Z | 1,060,515 | <pre><code>import subprocess
def php(script_path):
    p = subprocess.Popen(['php', script_path], stdout=subprocess.PIPE)
    result = p.communicate()[0]
    return result
# YOUR CODE BELOW:
page_html = "<h1>News and Updates</h1>"
news_script_output = php("news-generator.php")
print page_html + news_script_output
</code></pre>
| 9 | 2009-06-29T20:58:16Z | [
"php",
"python",
"scripting",
"integration",
"execution"
] |
How do I include a PHP script in Python? | 1,060,436 | <p>I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page:</p>
<pre><code><?php
print("<h1>News and Updates</h1>");
include("news-generator.php");
print("</body>");
?>
</code></pre>
<p>(I cut down the example for simplicity.)</p>
<p>Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this:</p>
<pre><code>page_html = "<h1>News and Updates</h1>"
news_script_output = php("news-generator.php") //should return a string
print page_html + news_script_output
</code></pre>
| 6 | 2009-06-29T20:44:02Z | 1,060,620 | <p>I think the best answer would be to have apache render both pages separately and then use javascript to load that page into a div. You have the slight slowdown of the ajax load but then you dont have to worry about it. </p>
<p>There is an open-source widget thing that will run multiple languages in 1 page but I cant remember what its called. </p>
| 0 | 2009-06-29T21:18:56Z | [
"php",
"python",
"scripting",
"integration",
"execution"
] |
How do I include a PHP script in Python? | 1,060,436 | <p>I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page:</p>
<pre><code><?php
print("<h1>News and Updates</h1>");
include("news-generator.php");
print("</body>");
?>
</code></pre>
<p>(I cut down the example for simplicity.)</p>
<p>Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this:</p>
<pre><code>page_html = "<h1>News and Updates</h1>"
news_script_output = php("news-generator.php") //should return a string
print page_html + news_script_output
</code></pre>
| 6 | 2009-06-29T20:44:02Z | 1,061,284 | <p>You could use urllib to get the page from the server (localhost) and execute it in the right environment for php. Not pretty, but it'll work. It may cause performance problems if you do it a lot.</p>
| 0 | 2009-06-30T00:55:49Z | [
"php",
"python",
"scripting",
"integration",
"execution"
] |
How do I include a PHP script in Python? | 1,060,436 | <p>I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page:</p>
<pre><code><?php
print("<h1>News and Updates</h1>");
include("news-generator.php");
print("</body>");
?>
</code></pre>
<p>(I cut down the example for simplicity.)</p>
<p>Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this:</p>
<pre><code>page_html = "<h1>News and Updates</h1>"
news_script_output = php("news-generator.php") //should return a string
print page_html + news_script_output
</code></pre>
| 6 | 2009-06-29T20:44:02Z | 1,065,081 | <p>maybe off topic, but if you want to do this in a way where you can access the vars and such created by the php script (eg. array of news items), your best best will be to do the exec of the php script, but return a json encoded array of items from php as a string, then json decode them on the python side, and do your html generation and iteration there.</p>
| 0 | 2009-06-30T18:00:02Z | [
"php",
"python",
"scripting",
"integration",
"execution"
] |
HTML Agility Pack or HTML Screen Scraping libraries for Java, Ruby, Python? | 1,060,484 | <p>I found the <a href="http://www.codeplex.com/htmlagilitypack" rel="nofollow">HTML Agility Pack</a> useful and easy to use for screen scraping web sites. What's the equivalent library for HTML screen scraping in Java, Ruby, Python?</p>
| 2 | 2009-06-29T20:53:56Z | 1,060,527 | <p><a href="http://www.crummy.com/software/BeautifulSoup/" rel="nofollow">BeautifulSoup</a> is the standard Python screen scraping tool.</p>
<p>Recently, however, I used the (incomplete at the moment) <a href="http://pyquery.org/" rel="nofollow">pyQuery</a>, which is more or less a rewrite of jQuery into python, and found it to be very useful.</p>
| 3 | 2009-06-29T20:59:59Z | [
"java",
"python",
"html",
"ruby",
"screen-scraping"
] |
HTML Agility Pack or HTML Screen Scraping libraries for Java, Ruby, Python? | 1,060,484 | <p>I found the <a href="http://www.codeplex.com/htmlagilitypack" rel="nofollow">HTML Agility Pack</a> useful and easy to use for screen scraping web sites. What's the equivalent library for HTML screen scraping in Java, Ruby, Python?</p>
| 2 | 2009-06-29T20:53:56Z | 1,060,595 | <p>Found what I was looking for:
<a href="http://stackoverflow.com/questions/2861/options-for-html-scraping">http://stackoverflow.com/questions/2861/options-for-html-scraping</a></p>
| 5 | 2009-06-29T21:13:47Z | [
"java",
"python",
"html",
"ruby",
"screen-scraping"
] |
Difference between type(obj) and obj.__class__ | 1,060,499 | <p>What is the difference between <code>type(obj)</code> and <code>obj.__class__</code>? Is there ever a possibility of <code>type(obj) is not obj.__class__</code>?</p>
<p>I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing?</p>
<pre><code>def f(a, b=None):
    if b is None:
        b = type(a)(1)       # #1
        b = a.__class__(1)   # #2
</code></pre>
| 33 | 2009-06-29T20:55:52Z | 1,060,537 | <p>Old-style classes are the problem, sigh:</p>
<pre><code>>>> class old: pass
...
>>> x=old()
>>> type(x)
<type 'instance'>
>>> x.__class__
<class __main__.old at 0x6a150>
>>>
</code></pre>
<p>Not a problem in Python 3 since all classes are new-style now;-).</p>
<p>In Python 2, a class is new-style only if it inherits from another new-style class (including <code>object</code> and the various built-in types such as <code>dict</code>, <code>list</code>, <code>set</code>, ...) or implicitly or explicitly sets <code>__metaclass__</code> to <code>type</code>.</p>
| 28 | 2009-06-29T21:02:10Z | [
"python",
"new-style-class"
] |
Difference between type(obj) and obj.__class__ | 1,060,499 | <p>What is the difference between <code>type(obj)</code> and <code>obj.__class__</code>? Is there ever a possibility of <code>type(obj) is not obj.__class__</code>?</p>
<p>I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing?</p>
<pre><code>def f(a, b=None):
    if b is None:
        b = type(a)(1)       # #1
        b = a.__class__(1)   # #2
</code></pre>
| 33 | 2009-06-29T20:55:52Z | 1,060,547 | <p><code>type(obj)</code> and <code>type.__class__</code> do not behave the same for old style classes:</p>
<pre><code>>>> class a(object):
... pass
...
>>> class b(a):
... pass
...
>>> class c:
... pass
...
>>> ai=a()
>>> bi=b()
>>> ci=c()
>>> type(ai) is ai.__class__
True
>>> type(bi) is bi.__class__
True
>>> type(ci) is ci.__class__
False
</code></pre>
| 12 | 2009-06-29T21:04:00Z | [
"python",
"new-style-class"
] |
Difference between type(obj) and obj.__class__ | 1,060,499 | <p>What is the difference between <code>type(obj)</code> and <code>obj.__class__</code>? Is there ever a possibility of <code>type(obj) is not obj.__class__</code>?</p>
<p>I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing?</p>
<pre><code>def f(a, b=None):
    if b is None:
        b = type(a)(1)       # #1
        b = a.__class__(1)   # #2
</code></pre>
| 33 | 2009-06-29T20:55:52Z | 10,633,356 | <p>This is an old question, but none of the answers seems to mention this: in the general case, it <strong>IS</strong> possible for a new-style class to have different values for <code>type(instance)</code> and <code>instance.__class__</code>:</p>
<pre><code>class ClassA(object):
    def display(self):
        print("ClassA")

class ClassB(object):
    __class__ = ClassA

    def display(self):
        print("ClassB")

instance = ClassB()
print(type(instance))
print(instance.__class__)
instance.display()
</code></pre>
<p>Output:</p>
<pre><code><class '__main__.ClassB'>
<class '__main__.ClassA'>
ClassB
</code></pre>
<p>The reason is that <code>ClassB</code> is overriding the <code>__class__</code> descriptor; however, the internal type field in the object is not changed. <code>type(instance)</code> reads directly from that type field, so it returns the correct value, whereas <code>instance.__class__</code> refers to the new descriptor replacing the original descriptor provided by Python, which reads the internal type field. Instead of reading that internal type field, it returns a hardcoded value.</p>
| 21 | 2012-05-17T09:50:40Z | [
"python",
"new-style-class"
] |
How to direct tkinter to look elsewhere for Tcl/Tk library (to dodge broken library without reinstalling) | 1,060,745 | <p>I've written a Python script that uses Tkinter. I want to deploy that script on a handful of computers that are on Mac OS 10.4.11. But that build of Mac OS X seems to have a broken Tcl/Tk install. Even loading the package gives me:</p>
<pre><code>Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: dlopen(/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/lib-dynload/_tkinter.so 2): Symbol not found: _tclStubsPtr
  Referenced from: /System/Library/Frameworks/Tk.framework/Versions/8.4/Tk
  Expected in: /System/Library/Frameworks/Tcl.framework/Versions/8.4/Tcl
</code></pre>
<p>Reinstalling Tcl/Tk isn't an option since we're in an office and we'd have to get IT to come to each computer, which would deter people from using the script.</p>
<p>Is there any easy way to direct Tkinter to look elsewhere for the Tcl/Tk framework? I've downloaded a standalone version of <a href="http://www.categorifiedcoder.info/tcltk/" rel="nofollow">Tcl/Tk Aqua</a>, but I don't know how to control which framework Tkinter uses...</p>
<p>Thanks for the help.</p>
<p>Adam</p>
| 4 | 2009-06-29T21:46:16Z | 1,081,118 | <p>You can change where your system looks for dynamic/shared libraries by altering <code>DYLD_LIBRARY_PATH</code> in your environment before launching Python. You can do this in Terminal like so:</p>
<pre><code>$ DYLD_LIBRARY_PATH=<insert path here>:$DYLD_LIBRARY_PATH python
</code></pre>
<p>... or create a wrapper:</p>
<pre><code>#!/bin/sh
export DYLD_LIBRARY_PATH=<insert path here>:$DYLD_LIBRARY_PATH
exec python "$@"
</code></pre>
<p>The documentation for <code>DYLD_LIBRARY_PATH</code> can be found on the <a href="http://developer.apple.com/documentation/Darwin/Reference/Manpages/man1/dyld.1.html" rel="nofollow"><code>dyld</code> man page</a>.</p>
<p>Do <em>not</em> set this in your <code>.bashrc</code> or any other profile- or system-wide setting, as it has the potential to cause some nasty problems.</p>
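<p>A related way to keep the setting scoped to one process (a sketch, not from the original answer; the Tcl/Tk path below is a placeholder): build the environment in Python and hand it to the interpreter you launch, so nothing leaks into your shell profile:</p>

```python
import os
import subprocess  # available from Python 2.4 on
import sys

env = dict(os.environ)
# prepend the standalone Tcl/Tk location (placeholder path) to the search path
env['DYLD_LIBRARY_PATH'] = os.pathsep.join(
    ['/Library/Frameworks/TclTk', env.get('DYLD_LIBRARY_PATH', '')])
# then relaunch the Tkinter script under that environment, e.g.:
# subprocess.call([sys.executable, 'myscript.py'], env=env)
print(env['DYLD_LIBRARY_PATH'].split(os.pathsep)[0])
```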
| 1 | 2009-07-03T23:21:24Z | [
"python",
"osx",
"tkinter"
] |
Callable modules | 1,060,796 | <p>Why doesn't Python allow modules to have a <code>__call__</code>? (Beyond the obvious that it wouldn't be easy to import directly.) Specifically, why doesn't using <code>a(b)</code> syntax find the <code>__call__</code> attribute like it does for functions, classes, and objects? (Is lookup just incompatibly different for modules?)</p>
<pre><code>>>> print open("mod_call.py").read()
def __call__():
return 42
>>> import mod_call
>>> mod_call()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
>>> mod_call.__call__()
42
</code></pre>
| 33 | 2009-06-29T22:01:09Z | 1,060,862 | <p>Special methods are only guaranteed to be called implicitly when they are defined on the type, not on the instance. (<code>__call__</code> is an attribute of the module instance <code>mod_call</code>, not of <code><type 'module'></code>.) You can't add methods to built-in types.</p>
<p><a href="http://docs.python.org/reference/datamodel.html#special-method-lookup-for-new-style-classes">http://docs.python.org/reference/datamodel.html#special-method-lookup-for-new-style-classes</a></p>
| 24 | 2009-06-29T22:18:56Z | [
"python",
"module"
] |
Callable modules | 1,060,796 | <p>Why doesn't Python allow modules to have a <code>__call__</code>? (Beyond the obvious that it wouldn't be easy to import directly.) Specifically, why doesn't using <code>a(b)</code> syntax find the <code>__call__</code> attribute like it does for functions, classes, and objects? (Is lookup just incompatibly different for modules?)</p>
<pre><code>>>> print open("mod_call.py").read()
def __call__():
return 42
>>> import mod_call
>>> mod_call()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
>>> mod_call.__call__()
42
</code></pre>
| 33 | 2009-06-29T22:01:09Z | 1,060,872 | <p>Python doesn't allow modules to override or add <em>any</em> magic method, because keeping module objects simple, regular and lightweight is just too advantageous considering how rarely strong use cases appear where you could use magic methods there.</p>
<p>When such use cases <em>do</em> appear, the solution is to make a class instance masquerade as a module. Specifically, code your <code>mod_call.py</code> as follows:</p>
<pre><code>import sys
class mod_call(object):
def __call__(self):
return 42
sys.modules[__name__] = mod_call()
</code></pre>
<p>Now your code importing and calling <code>mod_call</code> works fine.</p>
| 62 | 2009-06-29T22:20:06Z | [
"python",
"module"
] |
__lt__ instead of __cmp__ | 1,061,283 | <p>Python 2.x has two ways to overload comparison operators, <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__cmp__"><code>__cmp__</code></a> or the "rich comparison operators" such as <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__lt__"><code>__lt__</code></a>. <strong>The rich comparison overloads are said to be preferred, but why is this so?</strong></p>
<p>Rich comparison operators are simpler to implement each, but you must implement several of them with nearly identical logic. However, if you can use the builtin <code>cmp</code> and tuple ordering, then <code>__cmp__</code> gets quite simple and fulfills all the comparisons:</p>
<pre><code>class A(object):
def __init__(self, name, age, other):
self.name = name
self.age = age
self.other = other
def __cmp__(self, other):
assert isinstance(other, A) # assumption for this example
return cmp((self.name, self.age, self.other),
(other.name, other.age, other.other))
</code></pre>
<p>This simplicity seems to meet my needs much better than overloading all 6(!) of the rich comparisons. (However, you can get it down to "just" 4 if you rely on the "swapped argument"/reflected behavior, but that results in a net increase of complication, in my humble opinion.)</p>
<p><strong>Are there any unforeseen pitfalls I need to be made aware of if I only overload <code>__cmp__</code>?</strong></p>
<p>I understand the <code><</code>, <code><=</code>, <code>==</code>, etc. operators can be overloaded for other purposes, and can return any object they like. I am not asking about the merits of that approach, but only about differences when using these operators for comparisons in the same sense that they mean for numbers.</p>
<p><em>Update:</em> As Christopher <a href="http://stackoverflow.com/questions/1061283/style-why-lt-instead-of-cmp/1061323#1061323">pointed out</a>, <code>cmp</code> is disappearing in 3.x. <strong>Are there any alternatives that make implementing comparisons as easy as the above <code>__cmp__</code>?</strong></p>
| 73 | 2009-06-30T00:55:45Z | 1,061,323 | <p>This is covered by <a href="http://www.python.org/dev/peps/pep-0207/" rel="nofollow">PEP 207 - Rich Comparisons</a></p>
<p>Also, <code>__cmp__</code> goes away in Python 3.0. (Note that it is not present on <a href="http://docs.python.org/3.0/reference/datamodel.html" rel="nofollow">http://docs.python.org/3.0/reference/datamodel.html</a> but it IS on <a href="http://docs.python.org/2.7/reference/datamodel.html" rel="nofollow">http://docs.python.org/2.7/reference/datamodel.html</a>.)</p>
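<p>(Not in the original answer, but closely related:) for the common case of sorting with an old-style comparison function, Python 2.7 and 3.2 later added <code>functools.cmp_to_key</code> as a bridge:</p>

```python
from functools import cmp_to_key

def compare(a, b):
    # an old-style cmp function: negative, zero, or positive
    return (a > b) - (a < b)

words = ['pear', 'fig', 'apple']
print(sorted(words, key=cmp_to_key(compare)))  # ['apple', 'fig', 'pear']
```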
| 8 | 2009-06-30T01:13:02Z | [
"python",
"operator-overloading"
] |
__lt__ instead of __cmp__ | 1,061,283 | <p>Python 2.x has two ways to overload comparison operators, <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__cmp__"><code>__cmp__</code></a> or the "rich comparison operators" such as <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__lt__"><code>__lt__</code></a>. <strong>The rich comparison overloads are said to be preferred, but why is this so?</strong></p>
<p>Rich comparison operators are simpler to implement each, but you must implement several of them with nearly identical logic. However, if you can use the builtin <code>cmp</code> and tuple ordering, then <code>__cmp__</code> gets quite simple and fulfills all the comparisons:</p>
<pre><code>class A(object):
def __init__(self, name, age, other):
self.name = name
self.age = age
self.other = other
def __cmp__(self, other):
assert isinstance(other, A) # assumption for this example
return cmp((self.name, self.age, self.other),
(other.name, other.age, other.other))
</code></pre>
<p>This simplicity seems to meet my needs much better than overloading all 6(!) of the rich comparisons. (However, you can get it down to "just" 4 if you rely on the "swapped argument"/reflected behavior, but that results in a net increase of complication, in my humble opinion.)</p>
<p><strong>Are there any unforeseen pitfalls I need to be made aware of if I only overload <code>__cmp__</code>?</strong></p>
<p>I understand the <code><</code>, <code><=</code>, <code>==</code>, etc. operators can be overloaded for other purposes, and can return any object they like. I am not asking about the merits of that approach, but only about differences when using these operators for comparisons in the same sense that they mean for numbers.</p>
<p><em>Update:</em> As Christopher <a href="http://stackoverflow.com/questions/1061283/style-why-lt-instead-of-cmp/1061323#1061323">pointed out</a>, <code>cmp</code> is disappearing in 3.x. <strong>Are there any alternatives that make implementing comparisons as easy as the above <code>__cmp__</code>?</strong></p>
| 73 | 2009-06-30T00:55:45Z | 1,061,350 | <p>Yep, it's easy to implement everything in terms of e.g. <code>__lt__</code> with a mixin class (or a metaclass, or a class decorator if your taste runs that way).</p>
<p>For example:</p>
<pre><code>class ComparableMixin:
def __eq__(self, other):
return not self<other and not other<self
def __ne__(self, other):
return self<other or other<self
def __gt__(self, other):
return other<self
def __ge__(self, other):
return not self<other
def __le__(self, other):
return not other<self
</code></pre>
<p>Now your class can define just <code>__lt__</code> and multiply inherit from ComparableMixin (after whatever other bases it needs, if any). A class decorator would be quite similar, just inserting similar functions as attributes of the new class it's decorating (the result might be microscopically faster at runtime, at equally minute cost in terms of memory).</p>
<p>Of course, if your class has some particularly fast way to implement (e.g.) <code>__eq__</code> and <code>__ne__</code>, it should define them directly so the mixin's versions are not used (for example, that is the case for <code>dict</code>) -- in fact <code>__ne__</code> might well be defined to facilitate that as:</p>
<pre><code>def __ne__(self, other):
return not self == other
</code></pre>
<p>but in the code above I wanted to keep the pleasing symmetry of only using <code><</code>;-).
As to why <code>__cmp__</code> had to go, since we <em>did</em> have <code>__lt__</code> and friends, why keep another, different way to do exactly the same thing around? It's just so much dead-weight in every Python runtime (Classic, Jython, IronPython, PyPy, ...). The code that <strong>definitely</strong> won't have bugs is the code that isn't there -- whence Python's principle that there ought to be ideally one obvious way to perform a task (C has the same principle in the "Spirit of C" section of the ISO standard, btw).</p>
<p>This doesn't mean we go out of our way to prohibit things (e.g., near-equivalence between mixins and class decorators for some uses), but it definitely <strong>does</strong> mean that we don't like to carry around code in the compilers and/or runtimes that redundantly exists just to support multiple equivalent approaches to perform exactly the same task.</p>
<p>Further edit: there's actually an even better way to provide comparison AND hashing for many classes, including that in the question -- a <code>__key__</code> method, as I mentioned on my comment to the question. Since I never got around to writing the PEP for it, you must currently implement it with a Mixin (&c) if you like it:</p>
<pre><code>class KeyedMixin:
def __lt__(self, other):
return self.__key__() < other.__key__()
# and so on for other comparators, as above, plus:
def __hash__(self):
return hash(self.__key__())
</code></pre>
<p>It's a very common case for an instance's comparisons with other instances to boil down to comparing a tuple for each with a few fields -- and then, hashing should be implemented on exactly the same basis. The <code>__key__</code> special method addresses that need directly.</p>
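<p>To make the sketch concrete (the <code>Person</code> class is invented for illustration, and a minimal subset of the mixin is repeated so the example runs on its own), a class only needs to supply <code>__key__</code> and gets comparisons and hashing from the same tuple:</p>

```python
class KeyedMixin(object):
    # minimal subset of the mixin described above
    def __lt__(self, other):
        return self.__key__() < other.__key__()
    def __eq__(self, other):
        return self.__key__() == other.__key__()
    def __hash__(self):
        return hash(self.__key__())

class Person(KeyedMixin):
    def __init__(self, name, age):
        self.name, self.age = name, age
    def __key__(self):
        # comparisons and hashing both derive from this one tuple
        return (self.name, self.age)

assert Person("Ann", 30) == Person("Ann", 30)
assert Person("Ann", 30) < Person("Bob", 25)
assert len({Person("Ann", 30), Person("Ann", 30)}) == 1
```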
| 68 | 2009-06-30T01:28:10Z | [
"python",
"operator-overloading"
] |
__lt__ instead of __cmp__ | 1,061,283 | <p>Python 2.x has two ways to overload comparison operators, <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__cmp__"><code>__cmp__</code></a> or the "rich comparison operators" such as <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__lt__"><code>__lt__</code></a>. <strong>The rich comparison overloads are said to be preferred, but why is this so?</strong></p>
<p>Rich comparison operators are simpler to implement each, but you must implement several of them with nearly identical logic. However, if you can use the builtin <code>cmp</code> and tuple ordering, then <code>__cmp__</code> gets quite simple and fulfills all the comparisons:</p>
<pre><code>class A(object):
def __init__(self, name, age, other):
self.name = name
self.age = age
self.other = other
def __cmp__(self, other):
assert isinstance(other, A) # assumption for this example
return cmp((self.name, self.age, self.other),
(other.name, other.age, other.other))
</code></pre>
<p>This simplicity seems to meet my needs much better than overloading all 6(!) of the rich comparisons. (However, you can get it down to "just" 4 if you rely on the "swapped argument"/reflected behavior, but that results in a net increase of complication, in my humble opinion.)</p>
<p><strong>Are there any unforeseen pitfalls I need to be made aware of if I only overload <code>__cmp__</code>?</strong></p>
<p>I understand the <code><</code>, <code><=</code>, <code>==</code>, etc. operators can be overloaded for other purposes, and can return any object they like. I am not asking about the merits of that approach, but only about differences when using these operators for comparisons in the same sense that they mean for numbers.</p>
<p><em>Update:</em> As Christopher <a href="http://stackoverflow.com/questions/1061283/style-why-lt-instead-of-cmp/1061323#1061323">pointed out</a>, <code>cmp</code> is disappearing in 3.x. <strong>Are there any alternatives that make implementing comparisons as easy as the above <code>__cmp__</code>?</strong></p>
| 73 | 2009-06-30T00:55:45Z | 7,150,594 | <p>I tried out the comparable mixin answer above. I ran into trouble with "None". Here is a modified version that handles equality comparisons with "None". (I saw no reason to bother with inequality comparisons with None as lacking semantics):</p>
<pre><code>
class ComparableMixin(object):
    def __eq__(self, other):
        # equality comparison with None is meaningful: x == None is False
        if other is None:
            return False
        return not self < other and not other < self
    def __ne__(self, other):
        return not self.__eq__(other)
    def __gt__(self, other):
        return other < self
    def __ge__(self, other):
        return not self < other
    def __le__(self, other):
        return not other < self
</code></pre>
| 2 | 2011-08-22T16:35:10Z | [
"python",
"operator-overloading"
] |
__lt__ instead of __cmp__ | 1,061,283 | <p>Python 2.x has two ways to overload comparison operators, <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__cmp__"><code>__cmp__</code></a> or the "rich comparison operators" such as <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__lt__"><code>__lt__</code></a>. <strong>The rich comparison overloads are said to be preferred, but why is this so?</strong></p>
<p>Rich comparison operators are simpler to implement each, but you must implement several of them with nearly identical logic. However, if you can use the builtin <code>cmp</code> and tuple ordering, then <code>__cmp__</code> gets quite simple and fulfills all the comparisons:</p>
<pre><code>class A(object):
def __init__(self, name, age, other):
self.name = name
self.age = age
self.other = other
def __cmp__(self, other):
assert isinstance(other, A) # assumption for this example
return cmp((self.name, self.age, self.other),
(other.name, other.age, other.other))
</code></pre>
<p>This simplicity seems to meet my needs much better than overloading all 6(!) of the rich comparisons. (However, you can get it down to "just" 4 if you rely on the "swapped argument"/reflected behavior, but that results in a net increase of complication, in my humble opinion.)</p>
<p><strong>Are there any unforeseen pitfalls I need to be made aware of if I only overload <code>__cmp__</code>?</strong></p>
<p>I understand the <code><</code>, <code><=</code>, <code>==</code>, etc. operators can be overloaded for other purposes, and can return any object they like. I am not asking about the merits of that approach, but only about differences when using these operators for comparisons in the same sense that they mean for numbers.</p>
<p><em>Update:</em> As Christopher <a href="http://stackoverflow.com/questions/1061283/style-why-lt-instead-of-cmp/1061323#1061323">pointed out</a>, <code>cmp</code> is disappearing in 3.x. <strong>Are there any alternatives that make implementing comparisons as easy as the above <code>__cmp__</code>?</strong></p>
| 73 | 2009-06-30T00:55:45Z | 10,312,447 | <p>To simplify this case there's a class decorator in Python 2.7+/3.2+, <a href="http://docs.python.org/library/functools.html#functools.total_ordering">functools.total_ordering</a>, that can be used to implement what Alex suggests. Example from the docs:</p>
<pre><code>@total_ordering
class Student:
def __eq__(self, other):
return ((self.lastname.lower(), self.firstname.lower()) ==
(other.lastname.lower(), other.firstname.lower()))
def __lt__(self, other):
return ((self.lastname.lower(), self.firstname.lower()) <
(other.lastname.lower(), other.firstname.lower()))
</code></pre>
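<p>The <code>Student</code> example from the docs is not runnable as shown; here is a self-contained sketch (the <code>Version</code> class is invented for illustration) of the two methods you write and the comparisons you get for free:</p>

```python
from functools import total_ordering

@total_ordering
class Version(object):
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

assert Version(1, 2) < Version(1, 10)
assert Version(2, 0) >= Version(1, 9)   # supplied by total_ordering
assert Version(1, 2) != Version(1, 3)   # != follows from __eq__
```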
| 25 | 2012-04-25T09:01:11Z | [
"python",
"operator-overloading"
] |
__lt__ instead of __cmp__ | 1,061,283 | <p>Python 2.x has two ways to overload comparison operators, <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__cmp__"><code>__cmp__</code></a> or the "rich comparison operators" such as <a href="http://docs.python.org/2.6/reference/datamodel.html#object.__lt__"><code>__lt__</code></a>. <strong>The rich comparison overloads are said to be preferred, but why is this so?</strong></p>
<p>Rich comparison operators are simpler to implement each, but you must implement several of them with nearly identical logic. However, if you can use the builtin <code>cmp</code> and tuple ordering, then <code>__cmp__</code> gets quite simple and fulfills all the comparisons:</p>
<pre><code>class A(object):
def __init__(self, name, age, other):
self.name = name
self.age = age
self.other = other
def __cmp__(self, other):
assert isinstance(other, A) # assumption for this example
return cmp((self.name, self.age, self.other),
(other.name, other.age, other.other))
</code></pre>
<p>This simplicity seems to meet my needs much better than overloading all 6(!) of the rich comparisons. (However, you can get it down to "just" 4 if you rely on the "swapped argument"/reflected behavior, but that results in a net increase of complication, in my humble opinion.)</p>
<p><strong>Are there any unforeseen pitfalls I need to be made aware of if I only overload <code>__cmp__</code>?</strong></p>
<p>I understand the <code><</code>, <code><=</code>, <code>==</code>, etc. operators can be overloaded for other purposes, and can return any object they like. I am not asking about the merits of that approach, but only about differences when using these operators for comparisons in the same sense that they mean for numbers.</p>
<p><em>Update:</em> As Christopher <a href="http://stackoverflow.com/questions/1061283/style-why-lt-instead-of-cmp/1061323#1061323">pointed out</a>, <code>cmp</code> is disappearing in 3.x. <strong>Are there any alternatives that make implementing comparisons as easy as the above <code>__cmp__</code>?</strong></p>
| 73 | 2009-06-30T00:55:45Z | 12,188,881 | <p>Inspired by Alex Martelli's <code>ComparableMixin</code> & <code>KeyedMixin</code> answers, I came up with the following mixin.
It allows you to implement a single <code>_compare_to()</code> method, which uses key-based comparisons
similar to <code>KeyedMixin</code>, but allows your class to pick the most efficient comparison key based on the type of <code>other</code>. (Note that this mixin doesn't help much for objects which can be tested for equality but not order). </p>
<pre><code>class ComparableMixin(object):
"""mixin which implements rich comparison operators in terms of a single _compare_to() helper"""
def _compare_to(self, other):
"""return keys to compare self to other.
if self and other are comparable, this function
should return ``(self key, other key)``.
if they aren't, it should return ``None`` instead.
"""
raise NotImplementedError("_compare_to() must be implemented by subclass")
def __eq__(self, other):
keys = self._compare_to(other)
return keys[0] == keys[1] if keys else NotImplemented
def __ne__(self, other):
return not self == other
def __lt__(self, other):
keys = self._compare_to(other)
return keys[0] < keys[1] if keys else NotImplemented
def __le__(self, other):
keys = self._compare_to(other)
return keys[0] <= keys[1] if keys else NotImplemented
def __gt__(self, other):
keys = self._compare_to(other)
return keys[0] > keys[1] if keys else NotImplemented
def __ge__(self, other):
keys = self._compare_to(other)
return keys[0] >= keys[1] if keys else NotImplemented
</code></pre>
| 0 | 2012-08-30T01:52:08Z | [
"python",
"operator-overloading"
] |
Python-Hotshot error trying to profile a simple program | 1,061,361 | <p>I was trying to learn how to profile a simple python program using hotshot, but am facing a weird error,</p>
<pre><code>import sys
import hotshot
def main(argv):
for i in range(1,1000):
print i
if __name__ == "__main__":
prof = hotshot.Profile("hotshot_edi_stats")
b,c = prof.runcall(main(sys.argv))
prof.close()
</code></pre>
<p>and the output,</p>
<pre><code>.
.
995
996
997
998
999
Traceback (most recent call last):
File "t.py", line 9, in <module>
b, c = prof.runcall(main(sys.argv))
File "/usr/lib/python2.5/hotshot/__init__.py", line 76, in runcall
return self._prof.runcall(func, args, kw)
TypeError: 'NoneType' object is not callable
</code></pre>
<p>Would anyone know why this happens? It looks to me like a problem with the hotshot profiler itself. Alternatively, do people have suggestions on other methods to profile python programs?</p>
<p>Thanks!</p>
| 2 | 2009-06-30T01:40:02Z | 1,061,378 | <p>And I think I've figured out something I missed for over 2 hours.. </p>
<p>Turns out, runcall() should be called as,</p>
<pre><code>runcall(main, self.argv)
</code></pre>
<p>and this makes things work!</p>
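<p>For the record, the same calling convention — the function first, then its arguments — also applies to <code>cProfile</code>, which replaces hotshot in later Pythons. A sketch (the body of <code>main</code> is a stand-in for real work):</p>

```python
import cProfile
import pstats
import sys

def main(argv):
    return sum(range(1000))

prof = cProfile.Profile()
result = prof.runcall(main, sys.argv)   # function first, then its arguments
prof.create_stats()
pstats.Stats(prof).sort_stats('cumulative')
print(result)  # 499500
```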
| 3 | 2009-06-30T01:46:54Z | [
"python",
"profiler",
"profiling"
] |
Python-Hotshot error trying to profile a simple program | 1,061,361 | <p>I was trying to learn how to profile a simple python program using hotshot, but am facing a weird error,</p>
<pre><code>import sys
import hotshot
def main(argv):
for i in range(1,1000):
print i
if __name__ == "__main__":
prof = hotshot.Profile("hotshot_edi_stats")
b,c = prof.runcall(main(sys.argv))
prof.close()
</code></pre>
<p>and the output,</p>
<pre><code>.
.
995
996
997
998
999
Traceback (most recent call last):
File "t.py", line 9, in <module>
b, c = prof.runcall(main(sys.argv))
File "/usr/lib/python2.5/hotshot/__init__.py", line 76, in runcall
return self._prof.runcall(func, args, kw)
TypeError: 'NoneType' object is not callable
</code></pre>
<p>Would anyone know why this happens? It looks to me like a problem with the hotshot profiler itself. Alternatively, do people have suggestions on other methods to profile python programs?</p>
<p>Thanks!</p>
| 2 | 2009-06-30T01:40:02Z | 1,070,978 | <p>In general, if you have a way to randomly pause or interrupt the program and see the call stack, <a href="http://stackoverflow.com/questions/375913/what-can-i-use-to-profile-c-code-in-linux/378024#378024">this method always works</a>.</p>
| 1 | 2009-07-01T19:46:59Z | [
"python",
"profiler",
"profiling"
] |
Why isn't this a valid schema for Rx? | 1,061,482 | <p>I'm using YAML as a configuration file format for a Python project.</p>
<p>Recently I found <a href="http://rjbs.manxome.org/rx/" rel="nofollow">Rx</a> to be the only schema validator available for Python and YAML. :-/ <a href="http://www.kuwata-lab.com/kwalify/" rel="nofollow">Kwalify</a> works with YAML, but it's only for Ruby and Java. :(</p>
<p>I've been reading their lacking documentation all day and just can't seem to write a valid schema to represent my file structure. Help?</p>
<p>I have the following YAML config file:</p>
<pre><code>cmd:
exec: mycmd
aliases: [my, cmd]
filter:
sms: 'regex .*'
load:
exec: load
filter:
sms: 'load: .*$'
echo:
exec: echo %
</code></pre>
<p>I'm failing at representing a nested structure. What I want is for the outer-most item (cmd, load and echo, in this case) to be an arbitrary string that in turn contains other items. 'exec' is a fixed string and required item; 'aliases' and 'filter' are also fixed, but should be optional. Filter in turn has another set of required and optional items. How should I represent this with Rx?</p>
<p>So far I have the following schema (in YAML), which Rx fails to compile:</p>
<pre><code>type: //rec
required:
type: //rec
required:
exec: //str
optional:
aliases:
type: //arr
contents: //str
length: {min: 1, max: 10}
filter:
type: //rec
optional:
sms: //str
email: //str
all: //str
</code></pre>
<p>Testing this in IPython gives me this:</p>
<pre><code>/Rx.py in make_schema(self, schema)
68 raise Error('invalid schema argument to make_schema')
69
---> 70 uri = self.expand_uri(schema["type"])
71
72 if not self.type_registry.get(uri): raise "unknown type %s" % uri
KeyError: 'type'
</code></pre>
<p>Which leads me to believe I'm not specifying "type" somewhere. :-S</p>
<p>Any ideas?</p>
<p>I'm pretty tired fighting with this thing... Is there some other way I can write a schema and use it to validate my configuration files?</p>
<p>Thanks in advance,</p>
<p>Ivan</p>
| 4 | 2009-06-30T02:36:15Z | 1,063,888 | <p>Try this:</p>
<pre><code>type: //map
values:
type: //rec
required:
exec: //str
optional:
aliases:
type: //arr
contents: //str
length: {min: 1, max: 10}
filter:
type: //rec
optional:
sms: //str
email: //str
all: //str
</code></pre>
<p>A map can contain any string as a key, whereas a rec can only contain the keys specified in 'required' and 'optional'.</p>
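<p>If Rx keeps getting in the way, the map-of-records shape is simple enough to check by hand once the YAML has been parsed; a rough sketch (the dict below is what <code>yaml.safe_load</code> would return for the question's config file):</p>

```python
def validate(config):
    """Hand-rolled check of the map-of-records shape: arbitrary string
    keys at the top level, each value a record with required 'exec' and
    optional 'aliases'/'filter' entries."""
    FILTER_KEYS = {'sms', 'email', 'all'}
    for rec in config.values():
        if not isinstance(rec, dict) or 'exec' not in rec:
            return False
        if not set(rec) <= {'exec', 'aliases', 'filter'}:
            return False
        if 'aliases' in rec and not (isinstance(rec['aliases'], list)
                                     and 1 <= len(rec['aliases']) <= 10):
            return False
        if 'filter' in rec and not set(rec['filter']) <= FILTER_KEYS:
            return False
    return True

config = {   # parsed equivalent of the question's YAML file
    'cmd': {'exec': 'mycmd', 'aliases': ['my', 'cmd'],
            'filter': {'sms': 'regex .*'}},
    'load': {'exec': 'load', 'filter': {'sms': 'load: .*$'}},
    'echo': {'exec': 'echo %'},
}
print(validate(config))  # True
```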
| 3 | 2009-06-30T14:06:17Z | [
"python",
"schema",
"yaml"
] |
print statement in for loop only executes once | 1,061,534 | <p>I am teaching myself python. I was thinking of small programs, and came up with an idea to do a keno number generator. For any who don't know, you can pick 4-12 numbers, ranged 1-80, to match. So the first is part asks how many numbers, the second generates them. I came up with</p>
<pre><code>x = raw_input('How many numbers do you want to play?')
for i in x:
random.randrange(1,81)
print i
</code></pre>
<p>Which doesn't work, it prints x. So I am wondering the best way to do this. Make a random.randrange function? And how do i call it x times based on user input. </p>
<p>As always, thank you in advance for the help</p>
| 0 | 2009-06-30T03:06:03Z | 1,061,540 | <p>This should do what you want:</p>
<pre><code>x = raw_input('How many numbers do you want to play?')
for i in xrange(int(x)):
print random.randrange(1,81)
</code></pre>
<p>In Python indentation matters. It is the way it knows when you're in a specific block of code. So basically we use the <code>xrange</code> function to create a range to loop through (we call <code>int</code> on x because it expects an integer while <code>raw_input</code> returns a string). We then print the <code>randrange</code> return value inside the for block.</p>
| 5 | 2009-06-30T03:07:55Z | [
"python"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
    if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
If none found, (that is, either no step or single step of 1), process
done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,061,740 | <p>Sorry to say, your code is kind of long and not particularly readable. If you can try to summarize it somehow, maybe someone can help you write it more clearly.</p>
<p>As for the problem itself, my first thought would be to use recursion. (For all I know, you're already doing that. Sorry again for my inability to read your code.) Think of a way that you can reduce the problem to a smaller easier version of the same problem, repeatedly, until you have a trivial case with a very simple answer.</p>
<p>To be a bit more concrete, you have these three parameters, max_val, target_sum, and n_cells. Can you set one of those numbers to some particular value, in order to give you an extremely simple problem requiring no thought at all? Once you have that, can you reduce the slightly harder version of the problem to the already solved one?</p>
<p>EDIT: Here is my code. I don't like the way it does de-duplication. I'm sure there's a more Pythonic way. Also, it disallows using the same number twice in one combination. To undo this behavior, just take out the line <code>if n not in numlist:</code>. I'm not sure if this is completely correct, but it seems to work and is (IMHO) more readable. You could easily add memoization and that would probably speed it up quite a bit.</p>
<pre><code>def get_combos(max_val, target, n_cells):
if target <= 0:
return []
    if n_cells == 1:
if target > max_val:
return []
else:
return [[target]]
else:
combos = []
for n in range(1, max_val+1, 1):
for numlist in get_combos(max_val, target-n, n_cells-1):
if n not in numlist:
combos.append(numlist + [n])
return combos
def deduplicate(combos):
for numlist in combos:
numlist.sort()
answer = [tuple(numlist) for numlist in combos]
return set(answer)
def kenken(max_val, target, n_cells):
return deduplicate(get_combos(max_val, target, n_cells))
</code></pre>
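<p>Aside: the answer above says its de-duplication could be more Pythonic. One common variant (my sketch, not from the original answer) normalizes each combination with <code>sorted()</code> and builds the set directly, without mutating the input lists:</p>

```python
def deduplicate(combos):
    # Sort each combination into canonical (non-decreasing) order,
    # then let the set discard the repeats.
    return set(tuple(sorted(numlist)) for numlist in combos)
```

<p>For example, <code>deduplicate([[2, 1], [1, 2], [3, 1]])</code> gives <code>{(1, 2), (1, 3)}</code>.</p>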
| 1 | 2009-06-30T04:32:51Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
    if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
    If none is found (that is, no step, or only a step of 1), the process
    is done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,061,744 | <p>Your algorithm seems pretty good at first blush, and I don't think OO or another language would improve the code. I can't say if recursion would have helped but I admire the non-recursive approach. I bet it was harder to get working and it's harder to read but it likely is more efficient and it's definitely quite clever. To be honest I didn't analyze the algorithm in detail but it certainly looks like something that took a long while to get working correctly. I bet there were lots of off-by-1 errors and weird edge cases you had to think through, eh?</p>
<p>Given all that, basically all I tried to do was pretty up your code as best I could by replacing the numerous C-isms with more idiomatic Python-isms. Often, what requires a loop in C can be done in one line in Python. I also tried to rename things to follow Python naming conventions better and cleaned up the comments a bit. Hope I don't offend you with any of my changes. You can take what you want and leave the rest. :-)</p>
<p>Here are the notes I took as I worked:</p>
<ul>
<li>Changed the code that initializes <code>tmp</code> to a bunch of 1's to the more idiomatic <code>tmp = [1] * n_cells</code>.</li>
<li>Changed <code>for</code> loop that sums up <code>tmp_sum</code> to idiomatic <code>sum(tmp)</code>.</li>
<li>Then replaced all the loops with a <code>tmp = <list> + <list></code> one-liner.</li>
<li>Moved <code>raise doneException</code> to <code>init_tmp_new_ceiling</code> and got rid of the <code>succeeded</code> flag.</li>
<li>The check in <code>init_tmp_new_ceiling</code> actually seems unnecessary. Removing it, the only <code>raise</code>s left were in <code>make_combos_n_cells</code>, so I just changed those to regular returns and dropped <code>doneException</code> entirely.</li>
<li>Normalized mix of 4 spaces and 8 spaces for indentation.</li>
<li>Removed unnecessary parentheses around your <code>if</code> conditions.</li>
<li><code>tmp[p2] - tmp[p1] == 0</code> is the same thing as <code>tmp[p2] == tmp[p1]</code>.</li>
<li>Changed <code>while True: if new_ceiling_flag: break</code> to <code>while not new_ceiling_flag</code>.</li>
<li>You don't need to initialize variables to 0 at the top of your functions.</li>
<li>Removed <code>combos</code> list and changed function to <code>yield</code> its tuples as they are generated.</li>
<li>Renamed <code>tmp</code> to <code>combo</code>.</li>
<li>Renamed <code>new_ceiling_flag</code> to <code>ceiling_changed</code>.</li>
</ul>
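<p>To make the first two of those items concrete, here is a small before/after illustration (my example, not taken from either version of the code):</p>

```python
# C-style: build the list and the sum with explicit loops.
tmp = []
for _ in range(4):
    tmp.append(1)
tmp_sum = 0
for x in tmp:
    tmp_sum += x

# Idiomatic Python: the same two results as one-liners.
tmp2 = [1] * 4
tmp_sum2 = sum(tmp2)

assert tmp == tmp2 and tmp_sum == tmp_sum2
```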
<p>And here's the code for your perusal:</p>
<pre><code>def initial_combo(ceiling=5, target_sum=13, num_cells=4):
"""
Returns a list of possible addends, probably to be modified further.
Starts a new combo list, then, starting from left, fills items to ceiling
or intermediate between 1 and ceiling or just 1. E.g.:
Given ceiling = 5, target_sum = 13, num_cells = 4: creates [5,5,2,1].
"""
num_full_cells = (target_sum - num_cells) // (ceiling - 1)
combo = [ceiling] * num_full_cells \
+ [1] * (num_cells - num_full_cells)
if num_cells > num_full_cells:
combo[num_full_cells] += target_sum - sum(combo)
return combo
def all_combos(ceiling, target_sum, num_cells):
# p0 points at the rightmost item and moves left under some conditions
# p1 starts out at rightmost items and steps left
# p2 starts out immediately to the left of p1 and steps left as p1 does
# So, combo[p2] and combo[p1] always point at a pair of adjacent items.
# d combo[p2] - combo[p1]; immediate difference
# cd combo[p2] - combo[p0]; cumulative difference
# The ceiling decreases by 1 each iteration.
while True:
combo = initial_combo(ceiling, target_sum, num_cells)
yield tuple(combo)
ceiling_changed = False
# Generate all of the remaining combos with this ceiling.
while not ceiling_changed:
p2, p1, p0 = -2, -1, -1
while combo[p2] == combo[p1] and abs(p2) <= num_cells:
# 3,3,3,3
if abs(p2) == num_cells:
return
p2 -= 1
p1 -= 1
p0 -= 1
cd = 0
# slide_ptrs_left loop
while abs(p2) <= num_cells:
d = combo[p2] - combo[p1]
cd += d
# 5,5,3,3 or 5,5,4,3
if cd > 1:
if abs(p2) < num_cells:
# 5,5,3,3 --> 5,4,4,3
if d > 1:
combo[p2] -= 1
combo[p1] += 1
# d == 1; 5,5,4,3 --> 5,4,4,4
else:
combo[p2] -= 1
combo[p0] += 1
yield tuple(combo)
# abs(p2) == num_cells; 5,4,4,3
else:
ceiling -= 1
ceiling_changed = True
# Resume at make_combo_same_ceiling while
# and follow branch.
break
# 4,3,3,3 or 4,4,3,3
elif cd == 1:
if abs(p2) == num_cells:
return
p1 -= 1
p2 -= 1
if __name__ == '__main__':
print list(all_combos(ceiling=6, target_sum=12, num_cells=4))
</code></pre>
| 3 | 2009-06-30T04:34:45Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
    if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
    If none is found (that is, no step, or only a step of 1), the process
    is done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,061,814 | <p>Here's the simplest recursive solution that I can think of to "find all possible combinations of n numbers with values x such that 1 <= x <= max_val and x(1) + ... + x(n) = target". I'm developing it from scratch. Here's a version without any optimization at all, just for simplicity:</p>
<pre><code>def apcnx(n, max_val, target, xsofar=(), sumsofar=0):
if n==0:
if sumsofar==target:
yield xsofar
return
if xsofar:
minx = xsofar[-1] - 1
else:
minx = 0
for x in xrange(minx, max_val):
for xposs in apcnx(n-1, max_val, target, xsofar + (x+1,), sumsofar+x+1):
yield xposs
for xs in apcnx(4, 6, 12):
print xs
</code></pre>
<p>The base case <code>n==0</code> (where we can't yield any more numbers) either yields the tuple so far, if it satisfies the condition, or nothing, and then finishes (returns).</p>
<p>If we're supposed to yield longer tuples than we've built so far, the <code>if/else</code> makes sure we only yield non-decreasing tuples, to avoid repetition (you did say "combination" rather than "permutation").</p>
<p>The <code>for</code> tries all possibilities for "this" item and loops over whatever the next-lower-down level of recursion is still able to yield.</p>
<p>The output I see is:</p>
<pre><code>(1, 1, 4, 6)
(1, 1, 5, 5)
(1, 2, 3, 6)
(1, 2, 4, 5)
(1, 3, 3, 5)
(1, 3, 4, 4)
(2, 2, 2, 6)
(2, 2, 3, 5)
(2, 2, 4, 4)
(2, 3, 3, 4)
(3, 3, 3, 3)
</code></pre>
<p>which seems correct.</p>
<p>There are a bazillion possible optimizations, but, remember:</p>
<blockquote>
<p>First make it work, then make it fast</p>
</blockquote>
<p>I corresponded with Kent Beck to properly attribute this quote in "Python in a Nutshell", and he tells me he got it from his dad, whose job was actually unrelated to programming;-).</p>
<p>In this case, it seems to me that the key issue is <strong>understanding</strong> what's going on, and any optimization might interfere, so I'm going all out for "simple and understandable"; we can, if need be!, optimize the socks off it once the OP confirms they <strong>can</strong> understand what's going on in this sheer, unoptimized version!</p>
| 2 | 2009-06-30T04:58:43Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
    if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
    If none is found (that is, no step, or only a step of 1), the process
    is done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,061,899 | <p>First of all, I'd use variable names that mean something, so that the code becomes comprehensible. Then, once I understood the problem, it was clearly a recursive problem: once you have chosen one number, the question of finding the possible values for the rest of the squares is exactly the same problem, only with different values.</p>
<p>So I would do it like this:</p>
<pre><code>from __future__ import division
from math import ceil
def make_combos(max_val,target_sum,n_cells):
combos = []
# The highest possible value of the next cell is whatever is
# largest of the max_val, or the target_sum minus the number
# of remaining cells (as you can't enter 0).
highest = min(max_val, target_sum - n_cells + 1)
# The lowest is the lowest number you can have that will add upp to
# target_sum if you multiply it with n_cells.
lowest = int(ceil(target_sum/n_cells))
for x in range(highest, lowest-1, -1):
if n_cells == 1: # This is the last cell, no more recursion.
combos.append((x,))
break
# Recurse to get the next cell:
# Set the max to x (or we'll get duplicates like
# (6,3,2,1) and (6,2,3,1), which is pointless.
# Reduce the target_sum with x to keep the sum correct.
# Reduce the number of cells with 1.
for combo in make_combos(x, target_sum-x, n_cells-1):
combos.append((x,)+combo)
return combos
if __name__ == '__main__':
import pprint
# And by using pprint the output gets easier to read
pprint.pprint(make_combos( 6,12,4))
</code></pre>
<p>I also notice that your solution still seems buggy. For the values <code>max_val=8, target_sum=20 and n_cells=5</code> your code doesn't find the solution <code>(8,6,4,1,1,)</code>, as an example. I'm not sure if that means I've missed a rule in this or not, but as I understand the rules that should be a valid option.</p>
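<p>One way to double-check a suspected missing combination like <code>(8,6,4,1,1)</code> (my sketch, using only the standard library; not part of the original answer) is an exhaustive reference based on <code>itertools</code>:</p>

```python
from itertools import combinations_with_replacement

def brute_combos(max_val, target_sum, n_cells):
    # Every non-decreasing tuple of n_cells values in 1..max_val
    # whose elements sum to target_sum. Slow but obviously correct.
    return [c
            for c in combinations_with_replacement(range(1, max_val + 1), n_cells)
            if sum(c) == target_sum]

# The combination questioned above is indeed valid:
assert (1, 1, 4, 6, 8) in brute_combos(8, 20, 5)
```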
<p>Here's a version using generators. It saves a couple of lines, and memory if the values are really big, but, like recursion, generators can be tricky to "get". </p>
<pre><code>from __future__ import division
from math import ceil
def make_combos(max_val,target_sum,n_cells):
highest = min(max_val, target_sum - n_cells + 1)
lowest = int(ceil(target_sum/n_cells))
for x in xrange(highest, lowest-1, -1):
if n_cells == 1:
yield (x,)
break
for combo in make_combos(x, target_sum-x, n_cells-1):
yield (x,)+combo
if __name__ == '__main__':
import pprint
pprint.pprint(list(make_combos( 6,12,4)))
</code></pre>
| 2 | 2009-06-30T05:37:53Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
    if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
    If none is found (that is, no step, or only a step of 1), the process
    is done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,062,037 | <p>First of all, I am learning Python myself, so this solution won't be great, but this is just an attempt at solving it. I have tried to solve it recursively, and I think a recursive solution would be ideal for this kind of problem, although THAT recursive solution might not be this one:</p>
<pre><code>def GetFactors(maxVal, noOfCells, targetSum):
l = []
while(maxVal != 0):
remCells = noOfCells - 1
if(remCells > 2):
retList = GetFactors(maxVal, remCells, targetSum - maxVal)
#Append the returned List to the original List
#But first, add the maxVal to the start of every elem of returned list.
for i in retList:
i.insert(0, maxVal)
l.extend(retList)
else:
remTotal = targetSum - maxVal
for i in range(1, remTotal/2 + 1):
itemToInsert = remTotal - i;
if (i > maxVal or itemToInsert > maxVal):
continue
l.append([maxVal, i, remTotal - i])
maxVal -= 1
return l
if __name__ == "__main__":
l = GetFactors(5, 5, 15)
print l
</code></pre>
| 1 | 2009-06-30T06:31:16Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
If none found, (that is, either no step or single step of 1), process
done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
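<p>For comparison, here is a compact recursive generator (a sketch I added, not the routine above) that yields the combinations in the same descending style, starting from (8,8,2,1,1) and ending at (4,4,4,4,4) for the sample inputs:</p>

```python
def combos(max_val, n_cells, target_sum):
    """Yield descending tuples of n_cells values in 1..max_val summing to target_sum."""
    if n_cells == 1:
        if 1 <= target_sum <= max_val:
            yield (target_sum,)
        return
    # first cell can be at most max_val and must leave at least 1 per remaining cell
    for v in range(min(max_val, target_sum - (n_cells - 1)), 0, -1):
        for rest in combos(v, n_cells - 1, target_sum - v):
            yield (v,) + rest

result = list(combos(8, 5, 20))
```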
| 4 | 2009-06-30T03:33:36Z | 1,062,168 | <p>Here is a simple solution in C/C++:</p>
<pre><code>#include <stdio.h>

#define N_CELLS 4   /* number of cells in the cage */
const int max = 6;  /* largest value a cell may hold */
int sol[N_CELLS];

void report_solution(void) {  /* sol[0]..sol[N_CELLS-1] is a solution */
    for (int i = 0; i < N_CELLS; i++) printf("%d ", sol[i]);
    printf("\n");
}

void enum_solutions(int target, int n, int min) {
    if (target == 0 && n == 0) { report_solution(); return; }
    if (target <= 0 || n == 0) return; /* nothing further to explore */
    for (int i = min; i <= max; i++) {
        sol[n - 1] = i; /* remember this cell's value */
        enum_solutions(target - i, n - 1, i);
    }
}

int main(void) { enum_solutions(12, 4, 1); return 0; }
</code></pre>
| 1 | 2009-06-30T07:10:49Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
If none found, (that is, either no step or single step of 1), process
done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,062,810 | <p>Here is a naive, but succinct, solution using generators:</p>
<pre><code>import itertools

def descending(v):
"""Decide if a square contains values in descending order"""
return list(reversed(v)) == sorted(v)
def latinSquares(max_val, target_sum, n_cells):
"""Return all descending n_cells-dimensional squares,
no cell larger than max_val, sum equal to target_sum."""
possibilities = itertools.product(range(1,max_val+1),repeat=n_cells)
for square in possibilities:
if descending(square) and sum(square) == target_sum:
yield square
</code></pre>
<p>I could have optimized this code by directly enumerating the list of descending grids, but I find itertools.product much clearer for a first-pass solution. Finally, calling the function:</p>
<pre><code>for m in latinSquares(6, 12, 4):
print m
</code></pre>
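<p>A quick sanity check of the same brute-force idea, in Python 3 syntax (my sketch, reusing the names from the answer):</p>

```python
import itertools

def descending(v):
    """Decide if a tuple contains values in descending order."""
    return list(reversed(v)) == sorted(v)

# all descending 4-tuples of values 1..6 summing to 12
squares = [s for s in itertools.product(range(1, 7), repeat=4)
           if descending(s) and sum(s) == 12]
```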
| 1 | 2009-06-30T09:58:23Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
If none found, (that is, either no step or single step of 1), process
done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,062,908 | <p>And here is another recursive, generator-based solution, but this time using some simple math to calculate ranges at each step, avoiding needless recursion:</p>
<pre><code>def latinSquares(max_val, target_sum, n_cells):
if n_cells == 1:
assert(max_val >= target_sum >= 1)
return ((target_sum,),)
else:
lower_bound = max(-(-target_sum / n_cells), 1)
upper_bound = min(max_val, target_sum - n_cells + 1)
assert(lower_bound <= upper_bound)
return ((v,) + w for v in xrange(upper_bound, lower_bound - 1, -1)
for w in latinSquares(v, target_sum - v, n_cells - 1))
</code></pre>
<p>This code will fail with an AssertionError if you supply parameters that are impossible to satisfy; this is a side-effect of my "correctness criterion" that we never do an unnecessary recursion. If you don't want that side-effect, remove the assertions.</p>
<p>Note the use of -(-x/y) to round up after division. There may be a more pythonic way to write that. Note also I'm using <a href="http://www.python.org/dev/peps/pep-0289/" rel="nofollow" title="PEP 289">generator expressions</a> instead of yield.</p>
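<p>To illustrate the rounding trick (my note: in Python 3, integer division is <code>//</code>, so write <code>-(-x // y)</code>):</p>

```python
import math

# -(-x // y) is ceiling division for positive integers
checks = [(-(-x // y), math.ceil(x / y)) for x in range(1, 50) for y in range(1, 10)]
```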
<pre><code>for m in latinSquares(6,12,4):
print m
</code></pre>
| 1 | 2009-06-30T10:23:57Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm | 1,061,590 | <p>This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because:</p>
<ol>
<li><p>his routine is as fast as mine (+-5%), and</p></li>
<li><p>he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.</p></li>
</ol>
<p>chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.</p>
<p>A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.</p>
<p>Another remark: Lennart's and my routines generate <strong>the same answers in the same order</strong>. Are they in fact the same algorithm seen from different angles? I don't know.</p>
<p>Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)</p>
<p>Here's the Python routine:</p>
<pre><code>#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow
def initialize_combo( max_val, n_cells, target_sum):
"""returns combo
Starting from left, fills combo to max_val or an intermediate value from 1 up.
E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1].
"""
combo = []
#Put 1 in each cell.
combo += [1] * n_cells
need = target_sum - sum(combo)
#Fill as many cells as possible to max_val.
n_full_cells = need //(max_val - 1)
top_up = max_val - 1
for i in range( n_full_cells): combo[i] += top_up
need = target_sum - sum(combo)
# Then add the rest to next item.
if need > 0:
combo[n_full_cells] += need
return combo
#def initialize_combo()
def scrunch_left( combo):
"""returns (new_combo,done)
done Boolean; if True, ignore new_combo, all done;
if False, new_combo is valid.
Starts a new combo list. Scanning from right to left, looks for first
element at least 2 greater than right-end element.
If one is found, decrements it, then scrunches all available counts on its
right up against its right-hand side. Returns the modified combo.
If none found, (that is, either no step or single step of 1), process
done.
"""
new_combo = []
right_end = combo[-1]
length = len(combo)
c_range = range(length-1, -1, -1)
found_step_gt_1 = False
for index in c_range:
value = combo[index]
if (value - right_end) > 1:
found_step_gt_1 = True
break
if not found_step_gt_1:
return ( new_combo,True)
if index > 0:
new_combo += combo[:index]
ceil = combo[index] - 1
new_combo += [ceil]
new_combo += [1] * ((length - 1) - index)
need = sum(combo[index:]) - sum(new_combo[index:])
fill_height = ceil - 1
ndivf = need // fill_height
nmodf = need % fill_height
if ndivf > 0:
for j in range(index + 1, index + ndivf + 1):
new_combo[j] += fill_height
if nmodf > 0:
new_combo[index + ndivf + 1] += nmodf
return (new_combo, False)
#def scrunch_left()
def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum):
"""
Build combos, list of tuples of 2 or more addends.
"""
combo = initialize_combo( max_val, n_cells, target_sum)
combos.append( tuple( combo))
while True:
(combo, done) = scrunch_left( combo)
if done:
break
else:
combos.append( tuple( combo))
return combos
#def make_combos_n_cells_ge_two()
if __name__ == '__main__':
combos = []
max_val = 8
n_cells = 5
target_sum = 20
if n_cells == 1: combos.append( (target_sum,))
else:
combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum)
import pprint
pprint.pprint( combos)
</code></pre>
| 4 | 2009-06-30T03:33:36Z | 1,063,557 | <p>A little bit off-topic, but it might still help with programming KenKen.</p>
<p>I got good results using the DLX (dancing links) algorithm for solving Killer Sudoku (very similar to KenKen: it also has cages, but only sums). It took less than a second for most problems, and it was implemented in MATLAB.</p>
<p>Reference: this forum thread
<a href="http://www.setbb.com/phpbb/viewtopic.php?t=1274&highlight=&mforum=sudoku" rel="nofollow">http://www.setbb.com/phpbb/viewtopic.php?t=1274&highlight=&mforum=sudoku</a></p>
<p>For Killer Sudoku itself, see the Wikipedia article (can't post a second hyperlink).</p>
| 1 | 2009-06-30T13:03:02Z | [
"python",
"algorithm",
"statistics",
"puzzle",
"combinations"
] |
Adding Cookie to SOAPpy Request | 1,061,690 | <p>I'm trying to send a SOAP request using SOAPpy as the client. I've found some documentation stating how to add a cookie by extending SOAPpy.HTTPTransport, but I can't seem to get it to work.</p>
<p>I tried to use the example <a href="http://code.activestate.com/recipes/444758/" rel="nofollow">here</a>,
but the server I'm trying to send the request to started throwing 415 errors, so I'm trying to accomplish this without using ClientCookie, or by figuring out why the server is throwing 415's when I do use it. I suspect it might be because ClientCookie uses urllib2 & http/1.1, whereas SOAPpy uses urllib & http/1.0</p>
<p>Does anyone know how to make ClientCookie use HTTP/1.0, if that is even the problem, or a way to add a cookie to the SOAPpy headers without using ClientCookie? When I try this code with other services, it only seems to throw errors when sending requests to Microsoft servers.</p>
<p>I'm still finding my footing with python, so it could just be me doing something dumb.</p>
<pre><code>import sys, os, string
from SOAPpy import WSDL,HTTPTransport,Config,SOAPAddress,Types
import ClientCookie
Config.cookieJar = ClientCookie.MozillaCookieJar()
class CookieTransport(HTTPTransport):
def call(self, addr, data, namespace, soapaction = None, encoding = None,
http_proxy = None, config = Config):
if not isinstance(addr, SOAPAddress):
addr = SOAPAddress(addr, config)
cookie_cutter = ClientCookie.HTTPCookieProcessor(config.cookieJar)
hh = ClientCookie.HTTPHandler()
hh.set_http_debuglevel(1)
# TODO proxy support
opener = ClientCookie.build_opener(cookie_cutter, hh)
t = 'text/xml';
if encoding != None:
t += '; charset="%s"' % encoding
opener.addheaders = [("Content-Type", t),
("Cookie", "Username=foobar"), # ClientCookie should handle
("SOAPAction" , "%s" % (soapaction))]
response = opener.open(addr.proto + "://" + addr.host + addr.path, data)
data = response.read()
# get the new namespace
if namespace is None:
new_ns = None
else:
new_ns = self.getNS(namespace, data)
print '\n' * 4 , '-'*50
# return response payload
return data, new_ns
url = 'http://www.authorstream.com/Services/Test.asmx?WSDL'
proxy = WSDL.Proxy(url, transport=CookieTransport)
print proxy.GetList()
</code></pre>
| 0 | 2009-06-30T04:13:21Z | 1,061,806 | <p>Error 415 (Unsupported Media Type) is caused by an incorrect Content-Type header.</p>
<p>Install HttpFox for Firefox, or another tool (Wireshark, Charles, or Fiddler), to see which headers you are sending. Try Content-Type: application/xml.</p>
<pre><code>...
t = 'application/xml';
if encoding != None:
t += '; charset="%s"' % encoding
...
</code></pre>
<p>If you are trying to send a file to the web server, use Content-Type: application/x-www-form-urlencoded.</p>
| 0 | 2009-06-30T04:56:46Z | [
"python",
"web-services",
"soap",
"soappy"
] |
Adding Cookie to SOAPpy Request | 1,061,690 | <p>I'm trying to send a SOAP request using SOAPpy as the client. I've found some documentation stating how to add a cookie by extending SOAPpy.HTTPTransport, but I can't seem to get it to work.</p>
<p>I tried to use the example <a href="http://code.activestate.com/recipes/444758/" rel="nofollow">here</a>,
but the server I'm trying to send the request to started throwing 415 errors, so I'm trying to accomplish this without using ClientCookie, or by figuring out why the server is throwing 415's when I do use it. I suspect it might be because ClientCookie uses urllib2 & http/1.1, whereas SOAPpy uses urllib & http/1.0</p>
<p>Does anyone know how to make ClientCookie use HTTP/1.0, if that is even the problem, or a way to add a cookie to the SOAPpy headers without using ClientCookie? When I try this code with other services, it only seems to throw errors when sending requests to Microsoft servers.</p>
<p>I'm still finding my footing with python, so it could just be me doing something dumb.</p>
<pre><code>import sys, os, string
from SOAPpy import WSDL,HTTPTransport,Config,SOAPAddress,Types
import ClientCookie
Config.cookieJar = ClientCookie.MozillaCookieJar()
class CookieTransport(HTTPTransport):
def call(self, addr, data, namespace, soapaction = None, encoding = None,
http_proxy = None, config = Config):
if not isinstance(addr, SOAPAddress):
addr = SOAPAddress(addr, config)
cookie_cutter = ClientCookie.HTTPCookieProcessor(config.cookieJar)
hh = ClientCookie.HTTPHandler()
hh.set_http_debuglevel(1)
# TODO proxy support
opener = ClientCookie.build_opener(cookie_cutter, hh)
t = 'text/xml';
if encoding != None:
t += '; charset="%s"' % encoding
opener.addheaders = [("Content-Type", t),
("Cookie", "Username=foobar"), # ClientCookie should handle
("SOAPAction" , "%s" % (soapaction))]
response = opener.open(addr.proto + "://" + addr.host + addr.path, data)
data = response.read()
# get the new namespace
if namespace is None:
new_ns = None
else:
new_ns = self.getNS(namespace, data)
print '\n' * 4 , '-'*50
# return response payload
return data, new_ns
url = 'http://www.authorstream.com/Services/Test.asmx?WSDL'
proxy = WSDL.Proxy(url, transport=CookieTransport)
print proxy.GetList()
</code></pre>
| 0 | 2009-06-30T04:13:21Z | 16,461,253 | <p>A nice hack for using cookies with SOAPpy calls:
<a href="http://code.activestate.com/recipes/444758-how-to-add-cookiesheaders-to-soappy-calls/" rel="nofollow">Using Cookies with SOAPpy calls</a></p>
| 0 | 2013-05-09T11:58:43Z | [
"python",
"web-services",
"soap",
"soappy"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 1,061,702 | <p><a href="http://docs.python.org/library/cgi.html#cgi.escape"><code>cgi.escape</code></a> is fine. It escapes:</p>
<ul>
<li><code><</code> to <code>&lt;</code></li>
<li><code>></code> to <code>&gt;</code></li>
<li><code>&</code> to <code>&amp;</code></li>
</ul>
<p>That is enough for all HTML.</p>
<p>EDIT: If you have non-ascii chars you also want to escape, for inclusion in another encoded document that uses a different encoding, like <em>Craig</em> says, just use:</p>
<pre><code>data.encode('ascii', 'xmlcharrefreplace')
</code></pre>
<p>Don't forget to decode <code>data</code> to <code>unicode</code> first, using whatever encoding it was encoded.</p>
<p>However in my experience that kind of encoding is useless if you just work with <code>unicode</code> all the time from start. Just encode at the end to the encoding specified in the document header (<code>utf-8</code> for maximum compatibility).</p>
<p>Example:</p>
<pre><code>>>> cgi.escape(u'<a>bá</a>').encode('ascii', 'xmlcharrefreplace')
'&lt;a&gt;b&#225;&lt;/a&gt;'
</code></pre>
<p>Also worth noting (thanks Greg) is the extra <em><code>quote</code></em> parameter <code>cgi.escape</code> takes. With it set to <code>True</code>, <code>cgi.escape</code> also escapes double quote chars (<code>"</code>) so you can use the resulting value in an XML/HTML attribute.</p>
<p>EDIT: Note that cgi.escape has been deprecated in Python 3.2 in favor of <a href="http://docs.python.org/3/library/html.html#html.escape"><code>html.escape</code></a>, which does the same except that <em><code>quote</code></em> defaults to True.</p>
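<p>For illustration, the modern <code>html.escape</code> behaves the same way; note that its <em><code>quote</code></em> flag defaults to True (my sketch):</p>

```python
import html

# quote=True (the default) escapes double and single quotes as well
attr_safe = html.escape('<a href="x">')
# quote=False matches cgi.escape's historical default: only <, > and &
text_only = html.escape('<a href="x">', quote=False)
```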
| 133 | 2009-06-30T04:18:11Z | [
"python",
"html"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 1,061,747 | <p><a href="https://docs.python.org/2/library/cgi.html#cgi.escape" rel="nofollow"><code>cgi.escape</code></a> should be good to escape HTML in the limited sense of escaping the HTML tags and character entities.</p>
<p>But you might have to also consider encoding issues: if the HTML you want to quote has non-ASCII characters in a particular encoding, then you would also have to take care that you represent those sensibly when quoting. Perhaps you could convert them to entities. Otherwise you should ensure that the correct encoding translations are done between the "source" HTML and the page it's embedded in, to avoid corrupting the non-ASCII characters.</p>
| 7 | 2009-06-30T04:35:26Z | [
"python",
"html"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 5,072,031 | <p>In Python 3.2 a new <code>html</code> module was introduced, which is used for escaping reserved characters from HTML markup.</p>
<p>It has one function <code>escape()</code>:</p>
<pre><code>>>> import html
>>> html.escape('x > 2 && x < 7')
'x &gt; 2 &amp;&amp; x &lt; 7'
</code></pre>
| 55 | 2011-02-21T22:31:07Z | [
"python",
"html"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 18,094,938 | <p><strong>If you wish to escape HTML in a URL:</strong></p>
<p>This is probably NOT what the OP wanted (the question doesn't clearly indicate in which context the escaping is meant to be used), but Python's native library <a href="http://docs.python.org/2/library/urllib.html">urllib</a> has a method to escape HTML entities that need to be included in a URL safely.</p>
<p>The following is an example:</p>
<pre><code>#!/usr/bin/python
from urllib import quote
x = '+<>^&'
print quote(x)  # prints '%2B%3C%3E%5E%26'
</code></pre>
<p><a href="http://docs.python.org/2/library/urllib.html">Find docs here</a></p>
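<p>For reference, in Python 3 the same function lives in <code>urllib.parse</code>; a quick round-trip sketch:</p>

```python
from urllib.parse import quote, unquote

s = '+<>^&'
q = quote(s)  # percent-encode everything that is not URL-safe
```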
| 4 | 2013-08-07T04:53:27Z | [
"python",
"html"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 28,511,611 | <h2><code>cgi.escape</code> extended</h2>
<p>This version improves <code>cgi.escape</code>. It also preserves whitespace and newlines. Returns a <code>unicode</code> string.</p>
<pre><code>import cgi

def escape_html(text):
"""escape strings for display in HTML"""
return cgi.escape(text, quote=True).\
replace(u'\n', u'<br />').\
replace(u'\t', u'&emsp;').\
replace(u' ', u' &nbsp;')
</code></pre>
<h3>for example</h3>
<pre><code>>>> escape_html('<foo>\nfoo\t"bar"')
u'&lt;foo&gt;<br />foo&emsp;&quot;bar&quot;'
</code></pre>
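<p>Since <code>cgi.escape</code> was removed in Python 3.8, here is a hedged Python 3 port of the same idea using <code>html.escape</code>:</p>

```python
import html

def escape_html3(text):
    """Escape strings for display in HTML, preserving newlines, tabs and runs of spaces."""
    return (html.escape(text, quote=True)
            .replace('\n', '<br />')
            .replace('\t', '&emsp;')
            .replace('  ', ' &nbsp;'))
```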
| 1 | 2015-02-14T01:55:32Z | [
"python",
"html"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 28,827,075 | <p>Via <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">BeautifulSoup4</a>:</p>
<pre><code>>>> from bs4.dammit import EntitySubstitution
>>> esub = EntitySubstitution()
>>> esub.substitute_html("r&d")
'r&amp;d'
</code></pre>
| 0 | 2015-03-03T08:22:47Z | [
"python",
"html"
] |
What's the easiest way to escape HTML in Python? | 1,061,697 | <p>cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?</p>
| 93 | 2009-06-30T04:15:54Z | 32,446,224 | <p>Not the easiest way, but still straightforward. The main difference from the <strong>cgi.escape</strong> module is that it still works properly if you already have <code>&amp;</code> in your text, as the two versions below show:</p>
<p><em>cgi.escape version</em></p>
<pre><code>def escape(s, quote=None):
'''Replace special characters "&", "<" and ">" to HTML-safe sequences.
If the optional flag quote is true, the quotation mark character (")
is also translated.'''
s = s.replace("&", "&amp;") # Must be done first!
s = s.replace("<", "&lt;")
s = s.replace(">", "&gt;")
if quote:
s = s.replace('"', "&quot;")
return s
</code></pre>
<p><em>regex version</em></p>
<pre><code>QUOTE_PATTERN = r"""([&<>"'])(?!(amp|lt|gt|quot|#39);)"""
def escape(word):
"""
Replaces special characters <>&"' with HTML-safe sequences.
With attention to already escaped characters.
"""
replace_with = {
        '<': '&lt;',
        '>': '&gt;',
        '&': '&amp;',
        '"': '&quot;', # should be escaped in attributes
        "'": '&#39;' # should be escaped in attributes
}
quote_pattern = re.compile(QUOTE_PATTERN)
return re.sub(quote_pattern, lambda x: replace_with[x.group(0)], word)
</code></pre>
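<p>A quick check of the idempotency claim, with the corrected mapping (a standalone sketch of the function above):</p>

```python
import re

QUOTE_PATTERN = r"""([&<>"'])(?!(amp|lt|gt|quot|#39);)"""
REPLACE_WITH = {'<': '&lt;', '>': '&gt;', '&': '&amp;',
                '"': '&quot;', "'": '&#39;'}

def escape(word):
    # the negative lookahead skips characters that already start an escaped entity
    return re.sub(QUOTE_PATTERN, lambda m: REPLACE_WITH[m.group(1)], word)

once = escape('Tom & Jerry <"cartoon">')
```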
| 0 | 2015-09-07T21:25:18Z | [
"python",
"html"
] |