Dataset schema (one row per question/answer pair; ranges are the observed minimum and maximum):
title: string, length 10 to 172
question_id: int64, 469 to 40.1M
question_body: string, length 22 to 48.2k
question_score: int64, -44 to 5.52k
question_date: string, length 20
answer_id: int64, 497 to 40.1M
answer_body: string, length 18 to 33.9k
answer_score: int64, -38 to 8.38k
answer_date: string, length 20
tags: list
pycurl request exist in header function?
525,405
<p>In C I return -1 when I want to cancel the download in either the header or the write function. In pycurl I get this error</p> <pre><code>pycurl.error: invalid return value for write callback -1 17 </code></pre> <p>I don't know what the 17 means, but what am I not doing correctly?</p>
2
2009-02-08T08:32:58Z
525,481
<p>from pycurl.c: </p> <pre><code>else if (PyInt_Check(result)) {
    long obj_size = PyInt_AsLong(result);
    if (obj_size &lt; 0 || obj_size &gt; total_size) {
        PyErr_Format(ErrorObject,
            "invalid return value for write callback %ld %ld",
            (long)obj_size, (long)total_size);
        goto verbose_error;
    }
</code></pre> <p>This would mean 17 is the total_size (the length of the chunk libcurl just passed to your callback) - is that possible? And -1 (result) is what your callback is returning, which pycurl rejects. A minimal sketch of a callback that aborts without hitting this check follows this entry.</p>
3
2009-02-08T10:00:13Z
[ "python", "libcurl", "pycurl" ]
pycurl request exist in header function?
525,405
<p>In C I return -1 when I want to cancel the download in either the header or the write function. In pycurl I get this error</p> <pre><code>pycurl.error: invalid return value for write callback -1 17 </code></pre> <p>I don't know what the 17 means, but what am I not doing correctly?</p>
2
2009-02-08T08:32:58Z
2,304,840
<pre><code>import pycurl
import StringIO

c = pycurl.Curl()
s = StringIO.StringIO()
c.setopt(pycurl.URL, url)
c.setopt(pycurl.HEADER, True)    # include the headers in the output
c.setopt(pycurl.NOBODY, True)    # HEAD-style request: fetch headers only, no body
c.setopt(pycurl.WRITEFUNCTION, s.write)
c.perform()
print(s.getvalue())
</code></pre>
1
2010-02-21T05:05:28Z
[ "python", "libcurl", "pycurl" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,639
<p><code>\b</code> is a word boundary: a zero-width match at any position where a word character meets whitespace, punctuation, or the start or end of the string (<code>\bGBP\b</code>). A short demonstration follows this entry.</p>
8
2009-02-08T12:42:22Z
[ "python", "regex" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,642
<p>Yes, why not?</p> <pre><code>re.sub(u'^\W*GBP... </code></pre> <p>matches the start of the string, zero or more non-word characters, then GBP...</p> <p><em>edit</em>: Oh, I think you want alternation, use the <code>|</code>:</p> <pre><code>re.sub(u'(^|\W)GBP... </code></pre>
1
2009-02-08T12:43:25Z
[ "python", "regex" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,645
<p>You can always trim leading and trailing whitespace from the token before you search if it's not a matching/grouping situation that requires the full line.</p>
0
2009-02-08T12:44:29Z
[ "python", "regex" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,649
<p>This replaces GBP if it's preceded by the start of a string or a <a href="http://www.regular-expressions.info/wordboundaries.html" rel="nofollow">word boundary</a> (which the start of a string already is), and after GBP comes a numeric value or a word boundary: </p> <pre><code>re.sub(ur'\bGBP(?=\b|\d)', u'£', text) </code></pre> <p>Note the raw string: in an ordinary string literal <code>\b</code> is a backspace character, not a word boundary. This removes the need for any unnecessary backreferencing by using a <a href="http://www.regular-expressions.info/lookaround.html" rel="nofollow">lookahead</a>. Inclusive enough? A quick check against the sample text follows this entry.</p>
6
2009-02-08T12:46:39Z
[ "python", "regex" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,650
<p>Use the OR "<code>|</code>" operator:</p> <pre><code>&gt;&gt;&gt; re.sub(r'(^|\W)GBP([\W\d])', u'\g&lt;1&gt;£\g&lt;2&gt;', text)
u'\xa3 5 Off when you spend \xa375.00'
</code></pre>
30
2009-02-08T12:46:54Z
[ "python", "regex" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,651
<p>I think you're looking for <code>'(^|\W)GBP([\W\d])'</code></p>
2
2009-02-08T12:47:27Z
[ "python", "regex" ]
Regular expression: match start or whitespace
525,635
<p><strong>Can a regular expression match whitespace <em>or</em> the start of a string?</strong></p> <p>I'm trying to replace currency the abbreviation GBP with a £ symbol. I could just match anything starting GBP, but I'd like to be a bit more conservative, and look for certain delimiters around it.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; text = u'GBP 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches GBP with any prefix u'\xa3 5 Off when you spend \xa375.00' &gt;&gt;&gt; re.sub(ur'^GBP([\W\d])', ur'£\g&lt;1&gt;', text) # matches at start only u'\xa3 5 Off when you spend GBP75.00' &gt;&gt;&gt; re.sub(ur'(\W)GBP([\W\d])', ur'\g&lt;1&gt;£\g&lt;2&gt;', text) # matches whitespace prefix only u'GBP 5 Off when you spend \xa375.00' </code></pre> <p>Can I do both of the latter examples at the same time?</p>
25
2009-02-08T12:38:08Z
525,683
<p>It works in Perl:</p> <pre><code>$text = 'GBP 5 off when you spend GBP75';
$text =~ s/(\W|^)GBP([\W\d])/$1\$$2/g;
printf "$text\n";
</code></pre> <p>The output is:</p> <pre><code>$ 5 off when you spend $75 </code></pre> <p>Note that I stipulated that the match should be global, to get all occurrences.</p>
0
2009-02-08T13:10:56Z
[ "python", "regex" ]
Accept Cookies in Python
525,773
<p>How can I accept cookies in a python script?</p>
10
2009-02-08T14:09:04Z
525,781
<p>You might want to look at <a href="http://docs.python.org/library/cookielib.html" rel="nofollow">cookielib</a>.</p>
4
2009-02-08T14:15:20Z
[ "python", "cookies" ]
Accept Cookies in Python
525,773
<p>How can I accept cookies in a python script?</p>
10
2009-02-08T14:09:04Z
525,966
<p>I believe you mean having a Python script that tries to speak HTTP. I suggest you use a high-level library that handles cookies automatically: pycurl, mechanize, twill - you choose.</p> <p>For Nikhil Chelliah:</p> <p>I don't see what's not clear here.</p> <p><strong>Accepting</strong> a cookie happens client-side. The server can <strong>set</strong> a cookie.</p>
1
2009-02-08T16:14:19Z
[ "python", "cookies" ]
Accept Cookies in Python
525,773
<p>How can I accept cookies in a python script?</p>
10
2009-02-08T14:09:04Z
525,982
<p>There's the cookielib library. You can also implement your own cookie storage and policies: the cookies are found in the Set-Cookie header of the response (Set-Cookie: name=value), and you then send them back to the server in one or more Cookie headers in the request (Cookie: name=value). A bare-bones sketch of that manual approach follows this entry.</p>
0
2009-02-08T16:24:59Z
[ "python", "cookies" ]
Accept Cookies in Python
525,773
<p>How can I accept cookies in a python script?</p>
10
2009-02-08T14:09:04Z
526,013
<p>It's unclear whether you want a client-side or a server-side solution.</p> <p>For client-side, <a href="http://www.docs.python.org/library/cookielib.html" rel="nofollow">cookielib</a> will work fine. <a href="http://stackoverflow.com/questions/189555/how-to-use-python-to-login-to-a-webpage-and-retrieve-cookies-for-later-usage#answer-189580">This answer</a> and a few web tutorials offer more in-depth explanations.</p> <p>If this is a server-side problem, you should be using a framework that takes care of all the boilerplate. I really like how <a href="http://www.cherrypy.org/wiki/Cookies" rel="nofollow">CherryPy</a> and <a href="http://webpy.org/cookbook/cookies" rel="nofollow">web.py</a> handle them, but the API is pretty simple in any library.</p>
1
2009-02-08T16:49:40Z
[ "python", "cookies" ]
Accept Cookies in Python
525,773
<p>How can I accept cookies in a python script?</p>
10
2009-02-08T14:09:04Z
526,695
<p>Try this:</p> <pre><code>import urllib2
import cookielib

jar = cookielib.FileCookieJar("cookies")
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
print "Currently have %d cookies" % len(jar)
print "Getting page"
response = opener.open("http://google.com")
print response.headers
print "Got page"
print "Currently have %d cookies" % len(jar)
print jar
</code></pre> <p>It should print</p> <pre><code>Currently have 0 cookies
...
Currently have 2 cookies
</code></pre> <p>(Google always sets a cookie.) You don't really need this much unless you want to save your cookies to disk and use them later. You should find that</p> <pre><code>urllib2.build_opener(HTTPCookieProcessor).open(url) </code></pre> <p>takes care of most of what you want.</p> <p>More info here:</p> <ul> <li><a href="http://docs.python.org/library/urllib2.html#urllib2.HTTPCookieProcessor">HTTPCookieProcessor</a></li> <li><a href="http://docs.python.org/library/urllib2.html#urllib2.urlopen">build_opener</a></li> <li><a href="http://docs.python.org/library/cookielib.html#cookielib.FileCookieJar">FileCookieJar</a></li> <li><a href="http://www.voidspace.org.uk/python/articles/urllib2.shtml">Urllib2 - the missing manual</a></li> </ul>
18
2009-02-08T23:49:44Z
[ "python", "cookies" ]
Accept Cookies in Python
525,773
<p>How can I accept cookies in a python script?</p>
10
2009-02-08T14:09:04Z
13,911,060
<p><strong>The easiest way is to use the <a href="http://docs.python-requests.org/" rel="nofollow">requests</a> library.</strong></p> <pre><code>import requests

url = 'http://www.google.com/doodles/'
r = requests.get(url)
print r.cookies
</code></pre> <p>A small follow-up showing how a session keeps those cookies across requests appears after this entry.</p>
4
2012-12-17T09:02:21Z
[ "python", "cookies" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,262
<p>The simplest thing to do is to use random.choice (which uses a uniform distribution) and vary the frequency of occurrence of the object in the source collection.</p> <pre><code>&gt;&gt;&gt; random.choice([1, 2, 3, 4])
4
</code></pre> <p>... vs:</p> <pre><code>&gt;&gt;&gt; random.choice([1, 1, 1, 1, 2, 2, 2, 3, 3, 4])
2
</code></pre> <p>So your objects could have a base occurrence rate (n), and between 1 and n copies are added to the source collection as a function of the conviction rate. This method is really simple; however, it can have significant overhead if the number of distinct objects is large or the conviction rate needs to be very fine grained. A one-line sketch of building such a weighted source list follows this entry.</p> <p>Alternatively, if you generate more than one random number using a uniform distribution and sum them, numbers occurring near the mean are more probable than those occurring near the extremes (think of rolling two dice and the probability of getting 7 versus 12 or 2). You can then order the objects by conviction rate and generate a number using multiple die rolls which you use to calculate an index into the objects. Use numbers near the mean to index low conviction objects and numbers near the extremes to index high conviction items. You can vary the precise probability that a given object will be selected by changing the "number of sides" and number of your "dice" (it may be simpler to put the objects into buckets and use dice with a small number of sides rather than trying to associate each object with a specific result):</p> <pre><code>&gt;&gt;&gt; die = lambda sides : random.randint(1, sides)
&gt;&gt;&gt; die(6)
3
&gt;&gt;&gt; die(6) + die(6) + die(6)
10
</code></pre>
1
2009-02-08T20:01:37Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,277
<p>You want to give each object a weight. The bigger the weight, the more likely the object is to be chosen. More precisely, prob_x = weight_x / sum_all_weights.</p> <p>Then generate a random number in the range 0 to sum_all_weights and map it to an object.</p> <p>This code lets you generate a random index; the mapping is built once when the object is created, for speed. If all of your sets of objects have the same distribution then you can get by with only one RandomIndex object.</p> <pre><code>import random

class RandomIndex:
    def __init__(self, wlist):
        self._wi = []
        self._rsize = sum(wlist) - 1
        self._m = {}
        i = 0
        s = wlist[i]
        for n in range(self._rsize + 1):
            if n == s:
                i += 1
                s += wlist[i]
            self._m[n] = i

    def i(self):
        rn = random.randint(0, self._rsize)
        return self._m[rn]

sx = [1, 2, 3, 4]
wx = [1, 10, 100, 1000]  # weight list
ri = RandomIndex(wx)
cnt = [0, 0, 0, 0]
for i in range(1000):
    cnt[ri.i()] += 1  # keep track of how often each index was generated
print(cnt)
</code></pre>
2
2009-02-08T20:08:31Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,284
<p>I would use this <a href="http://code.activestate.com/recipes/117241/" rel="nofollow">recipe</a>. You will need to add a weight to each object, but that is just a simple ratio: put them in a list of tuples (object, conviction/(sum of convictions)). This should be easy to do using a list comprehension; a sketch follows this entry.</p>
2
2009-02-08T20:16:43Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,295
<p>I suggest you port <a href="http://w-shadow.com/blog/2008/12/10/fast-weighted-random-choice-in-php/" rel="nofollow">this PHP implementation of weighted random</a> to Python. In particular, the binary-search-based second algorithm helps address your speed concerns. A rough Python sketch of that approach follows this entry.</p>
2
2009-02-08T20:20:31Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,300
<p><a href="http://code.activestate.com/recipes/117241/">This activestate recipe</a> gives an easy-to-follow approach, specifically the version in the comments that doesn't require you to pre-normalize your weights:</p> <pre><code>import random def weighted_choice(items): """items is a list of tuples in the form (item, weight)""" weight_total = sum((item[1] for item in items)) n = random.uniform(0, weight_total) for item, weight in items: if n &lt; weight: return item n = n - weight return item </code></pre> <p>This will be slow if you have a large list of items. A binary search would probably be better in that case... but would also be more complicated to write, for little gain if you have a small sample size. <a href="http://code.activestate.com/recipes/498229/">Here's an example of the binary search approach in python</a> if you want to follow that route.</p> <p>(I'd recommend doing some quick performance testing of both methods on your dataset. The performance of different approaches to this sort of algorithm is often a bit unintuitive.)</p> <p><hr /></p> <p><strong>Edit:</strong> I took my own advice, since I was curious, and did a few tests.</p> <p>I compared four approaches:</p> <p>*The weighted_choice function above.*</p> <p><em>A binary-search choice function like so:</em></p> <pre><code>def weighted_choice_bisect(items): added_weights = [] last_sum = 0 for item, weight in items: last_sum += weight added_weights.append(last_sum) return items[bisect.bisect(added_weights, random.random() * last_sum)][0] </code></pre> <p><em>A compiling version of 1:</em></p> <pre><code>def weighted_choice_compile(items): """returns a function that fetches a random item from items items is a list of tuples in the form (item, weight)""" weight_total = sum((item[1] for item in items)) def choice(uniform = random.uniform): n = uniform(0, weight_total) for item, weight in items: if n &lt; weight: return item n = n - weight return item return choice </code></pre> <p><em>A compiling version of 2:</em></p> <pre><code>def weighted_choice_bisect_compile(items): """Returns a function that makes a weighted random choice from items.""" added_weights = [] last_sum = 0 for item, weight in items: last_sum += weight added_weights.append(last_sum) def choice(rnd=random.random, bis=bisect.bisect): return items[bis(added_weights, rnd() * last_sum)][0] return choice </code></pre> <p>I then built a big list of choices like so:</p> <pre><code>choices = [(random.choice("abcdefg"), random.uniform(0,50)) for i in xrange(2500)] </code></pre> <p>And an excessively simple profiling function:</p> <pre><code>def profiler(f, n, *args, **kwargs): start = time.time() for i in xrange(n): f(*args, **kwargs) return time.time() - start </code></pre> <p><strong>The results:</strong></p> <p>(Seconds taken for 1,000 calls to the function.)</p> <ul> <li>Simple uncompiled: 0.918624162674</li> <li>Binary uncompiled: 1.01497793198</li> <li>Simple compiled: 0.287325024605</li> <li>Binary compiled: 0.00327413797379</li> </ul> <p>The "compiled" results include the average time taken to compile the choice function once. (I timed 1,000 compiles, then divided that time by 1,000, and added the result to the choice function time.)</p> <p>So: if you have a list of items+weights which change very rarely, the binary compiled method is <em>by far</em> the fastest.</p>
25
2009-02-08T20:26:28Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,308
<p>Here is a classic way to do it, in pseudocode, where random.random() gives you a random float from 0 to 1.</p> <pre><code>let z = sum of all the weights let choice = random.random() * z iterate through your objects: choice = choice - the current object's weight if choice &lt;= 0, return this object return the last object </code></pre> <p>For an example: imagine you have two objects, one with weight 2, another with weight 4. You generate a number from 0 to 6. If <code>choice</code> is between 0 and 2, which will happen with 2/6 = 1/3 probability, then subtracting the first weight (2) brings it to 0 or below and the first object is chosen. If choice is between 2 and 6, which will happen with 4/6 = 2/3 probability, then the first subtraction still leaves choice > 0, and the second subtraction makes the second object get chosen.</p>
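<p>(A hedged Python rendering of the pseudocode above — a minimal sketch using only the standard library, with the same subtract-and-scan logic:)</p> <pre><code>import random

def weighted_choice(weighted_objects):
    """weighted_objects: a list of (obj, weight) pairs with positive weights."""
    total = sum(weight for obj, weight in weighted_objects)
    choice = random.random() * total
    for obj, weight in weighted_objects:
        choice -= weight
        if choice &lt;= 0:
            return obj
    return obj  # floating-point safety net: fall back to the last object
</code></pre> <p>For example, <code>weighted_choice([('one', 20), ('two', 2), ('three', 50)])</code> returns <code>'three'</code> roughly 50/72 of the time.</p>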
2
2009-02-08T20:30:14Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,319
<p>A very easy and simple way of doing this is to set weights for each of the values, and it wouldn't require much memory.</p> <p>You could probably use a hash/dictionary to do this.</p> <p>What you'll want to do is to draw a random number, <em>x</em>, between 0 and the sum of all the weights, and then walk through the set accumulating weights until the running total passes <em>x</em>.</p> <p>Pseudo-code:</p> <pre><code>objectSet = [(object1, weight1), ..., (objectN, weightN)] total = sum of all the weights x = random() * total running = 0 for obj, weight in objectSet: running = running + weight if x &lt; running: return obj return objectN # floating-point safety net </code></pre> <p><em>EDIT</em>: I just thought of how slow my code would be with very large sets (it's O(n)). The following pseudo-code is O(log(n)), and is basically using a binary search.</p> <pre><code>objectSet = [(object1, weight1), ..., (objectN, weightN)] cumulative = running sums of the weights x = random() * total weight binary-search cumulative for the first entry &gt; x return the object at that index </code></pre> <p>There are implementations of binary search in Python all over the 'net, so no need repeating here.</p>
1
2009-02-08T20:35:52Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
526,843
<p>In comments on the original post, Nicholas Leonard suggests that both the exchanging and the sampling need to be fast. Here's an idea for that case; I haven't tried it.</p> <p>If only sampling had to be fast, we could use an array of the values together with the running sum of their probabilities, and do a binary search on the running sum (with key being a uniform random number) -- an O(log(n)) operation. But an exchange would require updating all of the running-sum values appearing after the entries exchanged -- an O(n) operation. (Could you choose to exchange only items near the end of their lists? I'll assume not.)</p> <p>So let's aim for O(log(n)) in both operations. Instead of an array, keep a binary tree for each set to sample from. A leaf holds the sample value and its (unnormalized) probability. A branch node holds the total probability of its children.</p> <p>To sample, generate a uniform random number <code>x</code> between 0 and the total probability of the root, and descend the tree. At each branch, choose the left child if the left child has total probability <code>&lt;= x</code>. Else subtract the left child's probability from <code>x</code> and go right. Return the leaf value you reach.</p> <p>To exchange, remove the leaf from its tree and adjust the branches that lead down to it (decreasing their total probability, and cutting out any single-child branch nodes). Insert the leaf into the destination tree: you have a choice of where to put it, so keep it balanced. Picking a random child at each level is probably good enough -- that's where I'd start. Increase each parent node's probability, back up to the root.</p> <p>Now both sampling and exchange are O(log(n)) on average. (If you need guaranteed balance, a simple way is to add another field to the branch nodes holding the count of leaves in the whole subtree. When adding a leaf, at each level pick the child with fewer leaves. This leaves the possibility of a tree getting unbalanced solely by deletions; this can't be a problem if there's reasonably even traffic between the sets, but if it is, then choose rotations during deletion using the leaf-count information on each node in your traversal.)</p> <p><strong>Update:</strong> On request, here's a basic implementation. Haven't tuned it at all. Usage:</p> <pre><code>&gt;&gt;&gt; t1 = build_tree([('one', 20), ('two', 2), ('three', 50)]) &gt;&gt;&gt; t1 Branch(Leaf(20, 'one'), Branch(Leaf(2, 'two'), Leaf(50, 'three'))) &gt;&gt;&gt; t1.sample() Leaf(50, 'three') &gt;&gt;&gt; t1.sample() Leaf(20, 'one') &gt;&gt;&gt; t2 = build_tree([('four', 10), ('five', 30)]) &gt;&gt;&gt; t1a, t2a = transfer(t1, t2) &gt;&gt;&gt; t1a Branch(Leaf(20, 'one'), Leaf(2, 'two')) &gt;&gt;&gt; t2a Branch(Leaf(10, 'four'), Branch(Leaf(30, 'five'), Leaf(50, 'three'))) </code></pre> <p>Code:</p> <pre><code>import random def build_tree(pairs): tree = Empty() for value, weight in pairs: tree = tree.add(Leaf(weight, value)) return tree def transfer(from_tree, to_tree): """Given a nonempty tree and a target, move a leaf from the former to the latter. Return the two updated trees.""" leaf, from_tree1 = from_tree.extract() return from_tree1, to_tree.add(leaf) class Tree: def add(self, leaf): "Return a new tree holding my leaves plus the given leaf." abstract def sample(self): "Pick one of my leaves at random in proportion to its weight." 
return self.sampling(random.uniform(0, self.weight)) def extract(self): """Pick one of my leaves and return it along with a new tree holding my leaves minus that one leaf.""" return self.extracting(random.uniform(0, self.weight)) class Empty(Tree): weight = 0 def __repr__(self): return 'Empty()' def add(self, leaf): return leaf def sampling(self, weight): raise Exception("You can't sample an empty tree") def extracting(self, weight): raise Exception("You can't extract from an empty tree") class Leaf(Tree): def __init__(self, weight, value): self.weight = weight self.value = value def __repr__(self): return 'Leaf(%r, %r)' % (self.weight, self.value) def add(self, leaf): return Branch(self, leaf) def sampling(self, weight): return self def extracting(self, weight): return self, Empty() def combine(left, right): if isinstance(left, Empty): return right if isinstance(right, Empty): return left return Branch(left, right) class Branch(Tree): def __init__(self, left, right): self.weight = left.weight + right.weight self.left = left self.right = right def __repr__(self): return 'Branch(%r, %r)' % (self.left, self.right) def add(self, leaf): # Adding to a random branch as a clumsy way to keep an # approximately balanced tree. if random.random() &lt; 0.5: return combine(self.left.add(leaf), self.right) return combine(self.left, self.right.add(leaf)) def sampling(self, weight): if weight &lt; self.left.weight: return self.left.sampling(weight) return self.right.sampling(weight - self.left.weight) def extracting(self, weight): if weight &lt; self.left.weight: leaf, left1 = self.left.extracting(weight) return leaf, combine(left1, self.right) leaf, right1 = self.right.extracting(weight - self.left.weight) return leaf, combine(self.left, right1) </code></pre> <p><strong>Update 2:</strong> In <a href="http://stackoverflow.com/questions/2140787/select-random-k-elements-from-a-list-whose-elements-have-weights/2149533#2149533">answering another problem</a>, Jason Orendorff points out that the binary trees can be kept perfectly balanced by representing them in an array just like the classical heap structure. (This saves the space spent on pointers, too.) See my comments to that answer for how to adapt his code to this problem.</p>
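<p>(Update 2 mentions representing the tree as a flat array, heap-style. Purely as an illustration — my own rough sketch of that idea, not Jason Orendorff's code — here is what an array-backed version can look like; changing a leaf's weight and sampling are both O(log n):)</p> <pre><code>import random

class SumHeap:
    """Array-backed weighted sampler: internal nodes hold subtree weight sums."""
    def __init__(self, weights):
        self.n = n = len(weights)
        self.tree = [0.0] * n + list(map(float, weights))  # leaves live at tree[n:]
        for i in range(n - 1, 0, -1):                      # build internal sums bottom-up
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, weight):            # set leaf i's weight (0 effectively removes it)
        i += self.n
        delta = weight - self.tree[i]
        while i:                            # push the change up to the root
            self.tree[i] += delta
            i //= 2

    def sample(self):                       # return a leaf index, weight-proportional
        r = random.uniform(0, self.tree[1])
        i = 1
        while i &lt; self.n:
            i *= 2                          # left child
            if r &gt; self.tree[i]:            # go right, skipping the left subtree's mass
                r -= self.tree[i]
                i += 1
        return i - self.n
</code></pre> <p>For instance, <code>SumHeap([20, 2, 50]).sample()</code> returns index 2 about 50/72 of the time, and <code>update(i, 0)</code> effectively removes leaf i. Mapping the question's keys to leaf indices would have to be handled separately, which is the main thing this sketch glosses over.</p>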
6
2009-02-09T01:12:33Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
529,790
<p>Here's a better answer for a special probability distribution, the one <a href="http://stackoverflow.com/questions/526255/probability-distribution-in-python/526585#526585">Rex Logan's answer</a> seems to be geared toward. The distribution is like this: each object has an integer weight between 0 and 100, and its probability is in proportion to its weight. Since that's the currently accepted answer, I guess this is worth thinking about.</p> <p>So keep an array of 101 bins. Each bin holds a list of all of the objects with its particular weight. Each bin also knows the <em>total</em> weight of all its objects.</p> <p>To sample: pick a bin at random in proportion to its total weight. (Use one of the standard recipes for this -- linear or binary search.) Then pick an object from the bin uniformly at random.</p> <p>To transfer an object: remove it from its bin, put it in the corresponding bin in the target set, and update both bins' weights. (If you're using binary search for sampling, you must also update the running sums it uses. This is still reasonably fast since there aren't many bins.)</p>
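<p>(To make this concrete, a hedged sketch — the class name and methods are my own invention, not part of the answer — of the 101-bin scheme with a plain linear scan over the bins:)</p> <pre><code>import random

class BinnedSampler:
    def __init__(self):
        self.bins = [[] for w in range(101)]   # bins[w] holds the objects of weight w
        self.bin_weight = [0] * 101            # total weight sitting in each bin

    def add(self, obj, weight):
        self.bins[weight].append(obj)
        self.bin_weight[weight] += weight

    def remove(self, obj, weight):
        self.bins[weight].remove(obj)
        self.bin_weight[weight] -= weight

    def sample(self):
        r = random.uniform(0, sum(self.bin_weight))
        for w, bucket in enumerate(self.bins):
            r -= self.bin_weight[w]
            if r &lt;= 0 and bucket:
                return random.choice(bucket)   # uniform within the chosen bin
        raise ValueError("no objects with positive weight")
</code></pre> <p>A transfer between two sets is then just <code>a.remove(obj, w); b.add(obj, w)</code>. With only 101 bins the linear scan in <code>sample()</code> is already cheap; keeping running sums and bisecting them, as the answer suggests, shaves it further.</p>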
1
2009-02-09T20:31:49Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
1,053,635
<p>I needed faster functions, for not-very-large numbers. So here it is, in Visual C++:</p> <pre><code>#undef _DEBUG // disable linking with python25_d.dll #include &lt;Python.h&gt; #include &lt;malloc.h&gt; #include &lt;stdlib.h&gt; static PyObject* dieroll(PyObject *, PyObject *args) { PyObject *list; if (!PyArg_ParseTuple(args, "O:dieroll", &amp;list)) return NULL; if (!PyList_Check(list)) return PyErr_Format(PyExc_TypeError, "list of numbers expected ('%s' given)", list-&gt;ob_type-&gt;tp_name), NULL; int size = PyList_Size(list); if (size &lt; 1) return PyErr_Format(PyExc_TypeError, "got empty list"), NULL; long *array = (long*)alloca(size*sizeof(long)); long sum = 0; for (int i = 0; i &lt; size; i++) { PyObject *o = PyList_GetItem(list, i); if (!PyInt_Check(o)) return PyErr_Format(PyExc_TypeError, "list of ints expected ('%s' found)", o-&gt;ob_type-&gt;tp_name), NULL; long n = PyInt_AsLong(o); if (n == -1 &amp;&amp; PyErr_Occurred()) return NULL; if (n &lt; 0) return PyErr_Format(PyExc_TypeError, "list of positive ints expected (negative found)"), NULL; sum += n; //NOTE: integer overflow array[i] = sum; } if (sum &lt;= 0) return PyErr_Format(PyExc_TypeError, "sum of numbers is not positive"), NULL; int r = rand() * (sum-1) / RAND_MAX; //NOTE: rand() may be too small (0x7fff). rand() * sum may result in integer overflow. assert(array[size-1] == sum); assert(r &lt; sum &amp;&amp; r &lt; array[size-1]); for (int i = 0; i &lt; size; ++i) { if (r &lt; array[i]) return PyInt_FromLong(i); } return PyErr_Format(PyExc_TypeError, "internal error."), NULL; } static PyMethodDef module_methods[] = { {"dieroll", (PyCFunction)dieroll, METH_VARARGS, "random index, based on weights" }, {NULL} /* Sentinel */ }; PyMODINIT_FUNC initdieroll(void) { PyObject *module = Py_InitModule3("dieroll", module_methods, "dieroll"); if (module == NULL) return; } </code></pre>
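<p>(A hedged usage sketch — this assumes the extension above has been compiled and is importable as <code>dieroll</code>, which is what its <code>Py_InitModule3</code> call registers; the weights are just example numbers:)</p> <pre><code>import dieroll

weights = [20, 2, 50]                    # unnormalized positive integer weights
counts = [0] * len(weights)
for i in xrange(100000):
    counts[dieroll.dieroll(weights)] += 1   # returns a weighted random index
print counts                             # roughly proportional to 20 : 2 : 50
</code></pre>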
0
2009-06-27T21:12:33Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
2,051,148
<p>(A year later) <a href="http://code.activestate.com/recipes/576564" rel="nofollow">Walker's alias method for random objects with different probabilities</a> is very fast and very simple.</p>
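<p>For readers who don't want to follow the link, here is a hedged sketch of the idea (my own minimal version, not the recipe's code): build a table in O(n) once, then each sample costs O(1) — one uniform index plus one biased coin flip. The caveat for this question is that the table has to be rebuilt whenever the weights change.</p> <pre><code>import random

def build_alias_table(weights):
    """O(n) preprocessing for Walker's alias method (weights need not sum to 1)."""
    n = len(weights)
    total = float(sum(weights))
    prob = [w * n / total for w in weights]       # rescaled so the average is 1.0
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p &lt; 1.0]
    large = [i for i, p in enumerate(prob) if p &gt;= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                              # s keeps prob[s]; its deficit comes from l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] &lt; 1.0 else large).append(l)
    for i in small + large:                       # leftovers are 1.0 up to rounding error
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    """O(1) per draw: pick a column, then flip its biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() &lt; prob[i] else alias[i]
</code></pre> <p>E.g. <code>prob, alias = build_alias_table([20, 2, 50])</code> and then <code>alias_sample(prob, alias)</code> returns index 2 about 50/72 of the time.</p>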
1
2010-01-12T17:58:54Z
[ "python", "algorithm", "random", "distribution", "probability" ]
Probability distribution in Python
526,255
<p>I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.</p> <p>I have checked out the random module; it does not seem to provide this.</p> <p>I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited.</p> <p>Thx </p> <p>Update:</p> <p>Ok, I tried to consider your suggestions wisely, but time is so limited... </p> <p>I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient:</p> <pre><code>def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: break n = n - weight return key </code></pre> <p>I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx!</p> <p>Update2:</p> <p>I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:</p> <pre><code>def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: &gt;&gt;&gt; x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n &lt; weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) </code></pre> <p>I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!</p> <p>Update3:</p> <p>I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). 
The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: </p> <pre><code>import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx &gt;= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w &gt;= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w &gt;= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) &lt; probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() </code></pre> <p>Any comments are still welcome. 
@Darius: your binary trees are too complex and complicated for me; and I do not think their leaves can be removed efficiently... Thx all</p>
17
2009-02-08T19:52:22Z
21,211,673
<p><em><strong>About 3 years later...</em></strong></p> <p>If you use numpy, perhaps the simplest option is to use <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow"><code>np.random.choice</code></a>, which takes a list of possible values, and an optional sequence of probabilities associated with each value:</p> <pre><code>import numpy as np values = ('A', 'B', 'C', 'D') weights = (0.5, 0.1, 0.2, 0.2) print ''.join(np.random.choice(values, size=60, replace=True, p=weights)) # ACCADAACCDACDBACCADCAAAAAAADACCDCAADDDADAAACCAAACBAAADCADABA </code></pre>
1
2014-01-19T00:25:27Z
[ "python", "algorithm", "random", "distribution", "probability" ]
pycurl cancel a transfer and try & except
526,325
<p>How do I cancel a transfer in pycurl? I used to return -1 in libcurl, but pycurl doesn't seem to like that ("pycurl.error: invalid return value for write callback -1 17"). Returning 0 doesn't work either; I get "error: (23, 'Failed writing body')". Also, how do I do a try/except with pycurl? I don't see any examples online, nor in the pycurl examples from the site.</p>
1
2009-02-08T20:39:16Z
622,703
<p>Example code would help here. Judging from the error message, and grepping for it in the source code, you've set up a write callback. This is configured, I think, by CURLOPT_WRITEFUNCTION, and the documentation for that says:</p> <blockquote> <p>Return the number of bytes actually taken care of. If that amount differs from the amount passed to your function, it'll signal an error to the library and it will abort the transfer and return CURLE_WRITE_ERROR.</p> </blockquote> <p>The pycurl wrapper code checks that the value is between 0 and the number passed to it. That's why -1 failed, and why 0, triggers CURLE_WRITE_ERROR, raises the "failed writing body" exception. The pycurl code is:</p> <pre><code> /* run callback */ arglist = Py_BuildValue("(s#)", ptr, total_size); if (arglist == NULL) goto verbose_error; result = PyEval_CallObject(cb, arglist); Py_DECREF(arglist); if (result == NULL) goto verbose_error; /* handle result */ if (result == Py_None) { ret = total_size; /* None means success */ } else if (PyInt_Check(result)) { long obj_size = PyInt_AsLong(result); if (obj_size &lt; 0 || obj_size &gt; total_size) { PyErr_Format(ErrorObject, "invalid return value for write callback %ld %ld", (long)obj_size, (long)total_size); goto verbose_error; } ret = (size_t) obj_size; /* success */ } else if (PyLong_Check(result)) { ... identical code for Long ... } else { PyErr_SetString(ErrorObject, "write callback must return int or None"); goto verbose_error; } </code></pre> <p>I don't see any way in pycurl for this function to support another return value. There might be other ways, like setting up a progress callback, which does seem to allow aborts.</p> <p>The relevant code in curl itself is:</p> <pre><code>/* If the previous block of data ended with CR and this block of data is just a NL, then the length might be zero */ if(len) { wrote = data-&gt;set.fwrite_func(ptr, 1, len, data-&gt;set.out); } else { wrote = len; } if(CURL_WRITEFUNC_PAUSE == wrote) return pausewrite(data, type, ptr, len); if(wrote != len) { failf(data, "Failed writing body (%d != %d)", (int)wrote, (int)len); return CURLE_WRITE_ERROR; } </code></pre> <p>so you can see that pycurl does not support returning the CURL_WRITEFUNC_PAUSE which curl itself allows. You can also see that curl has no way to support aborts through the write callback function. You will have to use something else.</p>
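<p>For what it's worth, here is a rough sketch (untested; the URL and filename are placeholders) of aborting through a progress callback instead, which also shows the plain try/except pattern for pycurl errors that the question asked about:</p> <pre><code>import pycurl

want_abort = [False]   # flip this from elsewhere when you want to stop

def progress(download_t, download_d, upload_t, upload_d):
    # any non-zero return value makes libcurl abort the transfer
    return 1 if want_abort[0] else 0

c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://example.com/bigfile')
c.setopt(pycurl.WRITEFUNCTION, open('out.dat', 'wb').write)
c.setopt(pycurl.NOPROGRESS, 0)               # progress callbacks are off by default
c.setopt(pycurl.PROGRESSFUNCTION, progress)
try:
    c.perform()
except pycurl.error, e:
    # an aborted transfer surfaces as pycurl.error (code 42, CURLE_ABORTED_BY_CALLBACK)
    print "transfer stopped:", e
c.close()
</code></pre>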
3
2009-03-07T23:14:21Z
[ "python", "pycurl" ]
Django form fails validation on a unique field
526,457
<p>I have a simple model that is defined as:</p> <pre><code>class Article(models.Model): slug = models.SlugField(max_length=50, unique=True) title = models.CharField(max_length=100, unique=False) </code></pre> <p>and the form:</p> <pre><code>class ArticleForm(ModelForm): class Meta: model = Article </code></pre> <p>The validation here fails when I try to update an existing row:</p> <pre><code> if request.method == 'POST': form = ArticleForm(request.POST) if form.is_valid(): # POOF form.save() </code></pre> <p>Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes. </p> <p>The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:</p> <p>slug: "Article with this None already exists"</p> <p>So it looks like is_valid() fails on a unique value check, but all I want to do is <strong>update</strong> the row.</p> <p>I can't just do:</p> <pre><code>form.save(force_update=True) </code></pre> <p>... because the form will fail on validation.</p> <p>This looks like something very simple, but I just can't figure it out.</p> <p>I am running Django 1.0.2</p> <p>What croaks is BaseModelForm.validate_unique() which is called on form initialization.</p>
16
2009-02-08T21:36:23Z
526,477
<p>All I can guess is that you are getting an object to fill a form, and trying to save it again. </p> <p>Try using a ModelForm, and instantiate it with the desired object.</p>
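<p>Something along these lines (a sketch; <code>article_id</code> is just a placeholder for however you identify the row being edited):</p> <pre><code>article = Article.objects.get(pk=article_id)        # the existing row
form = ArticleForm(request.POST, instance=article)  # bind the form to that row
if form.is_valid():
    form.save()                                     # updates instead of inserting
</code></pre>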
1
2009-02-08T21:47:58Z
[ "python", "django" ]
Django form fails validation on a unique field
526,457
<p>I have a simple model that is defined as:</p> <pre><code>class Article(models.Model): slug = models.SlugField(max_length=50, unique=True) title = models.CharField(max_length=100, unique=False) </code></pre> <p>and the form:</p> <pre><code>class ArticleForm(ModelForm): class Meta: model = Article </code></pre> <p>The validation here fails when I try to update an existing row:</p> <pre><code> if request.method == 'POST': form = ArticleForm(request.POST) if form.is_valid(): # POOF form.save() </code></pre> <p>Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes. </p> <p>The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:</p> <p>slug: "Article with this None already exists"</p> <p>So it looks like is_valid() fails on a unique value check, but all I want to do is <strong>update</strong> the row.</p> <p>I can't just do:</p> <pre><code>form.save(force_update=True) </code></pre> <p>... because the form will fail on validation.</p> <p>This looks like something very simple, but I just can't figure it out.</p> <p>I am running Django 1.0.2</p> <p>What croaks is BaseModelForm.validate_unique() which is called on form initialization.</p>
16
2009-02-08T21:36:23Z
526,484
<p>It appears that your SlugField is returning None, and because a null/blank slug already exists somewhere in the database, it's giving an 'already exists' error. It seems like your slug field isn't saving correctly at all.</p>
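<p>If the slug is meant to be derived from the title, one way to make sure it always gets a value (just a sketch, not necessarily how the original model is intended to work) is to fill it in in the model's <code>save()</code>:</p> <pre><code>from django.template.defaultfilters import slugify

class Article(models.Model):
    slug = models.SlugField(max_length=50, unique=True)
    title = models.CharField(max_length=100, unique=False)

    def save(self, *args, **kwargs):
        if not self.slug:
            self.slug = slugify(self.title)
        super(Article, self).save(*args, **kwargs)
</code></pre>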
1
2009-02-08T21:49:46Z
[ "python", "django" ]
Django form fails validation on a unique field
526,457
<p>I have a simple model that is defined as:</p> <pre><code>class Article(models.Model): slug = models.SlugField(max_length=50, unique=True) title = models.CharField(max_length=100, unique=False) </code></pre> <p>and the form:</p> <pre><code>class ArticleForm(ModelForm): class Meta: model = Article </code></pre> <p>The validation here fails when I try to update an existing row:</p> <pre><code> if request.method == 'POST': form = ArticleForm(request.POST) if form.is_valid(): # POOF form.save() </code></pre> <p>Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes. </p> <p>The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:</p> <p>slug: "Article with this None already exists"</p> <p>So it looks like is_valid() fails on a unique value check, but all I want to do is <strong>update</strong> the row.</p> <p>I can't just do:</p> <pre><code>form.save(force_update=True) </code></pre> <p>... because the form will fail on validation.</p> <p>This looks like something very simple, but I just can't figure it out.</p> <p>I am running Django 1.0.2</p> <p>What croaks is BaseModelForm.validate_unique() which is called on form initialization.</p>
16
2009-02-08T21:36:23Z
526,656
<p>I don't think you are actually updating an existing article, but instead creating a new one, presumably with more or less the same content, especially the slug, and thus you will get an error. It is a bit strange that you don't get better error reporting, but also I do not know what the rest of your view looks like.</p> <p>What if you were to try something along these lines (I have included a bit more of a possible view function, change it to fit your needs); I haven't actually tested my code, so I am sure I've made at least one mistake, but you should at least get the general idea:</p> <pre><code>def article_update(request, id): article = get_object_or_404(Article, pk=id) if request.method == 'POST': form = ArticleForm(request.POST, instance=article) if form.is_valid(): form.save() return HttpResponseRedirect(to-some-suitable-url) else: form = ArticleForm(instance=article) return render_to_response('article_update.html', { 'form': form }) </code></pre> <p>The thing is, as taurean noted, you should instantiate your model form with the object you wish to update, otherwise you will get a new one.</p>
23
2009-02-08T23:33:29Z
[ "python", "django" ]
Django form fails validation on a unique field
526,457
<p>I have a simple model that is defined as:</p> <pre><code>class Article(models.Model): slug = models.SlugField(max_length=50, unique=True) title = models.CharField(max_length=100, unique=False) </code></pre> <p>and the form:</p> <pre><code>class ArticleForm(ModelForm): class Meta: model = Article </code></pre> <p>The validation here fails when I try to update an existing row:</p> <pre><code> if request.method == 'POST': form = ArticleForm(request.POST) if form.is_valid(): # POOF form.save() </code></pre> <p>Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes. </p> <p>The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:</p> <p>slug: "Article with this None already exists"</p> <p>So it looks like is_valid() fails on a unique value check, but all I want to do is <strong>update</strong> the row.</p> <p>I can't just do:</p> <pre><code>form.save(force_update=True) </code></pre> <p>... because the form will fail on validation.</p> <p>This looks like something very simple, but I just can't figure it out.</p> <p>I am running Django 1.0.2</p> <p>What croaks is BaseModelForm.validate_unique() which is called on form initialization.</p>
16
2009-02-08T21:36:23Z
658,765
<p>I was also searching for a way to update an existing record, and even tried <code>form.save(force_update=True)</code> but received errors. Finally, by trial &amp; error, I managed to update an existing record. The code below is tested and working. Hope this helps...</p> <h1>models.py from djangobook</h1> <pre><code>class Author(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=40) email = models.EmailField(blank=True, verbose_name='e-mail') objects = models.Manager() sel_objects=AuthorManager() def __unicode__(self): return self.first_name+' '+ self.last_name class AuthorForm(ModelForm): class Meta: model = Author # views.py # add new record def authorcontact(request): if request.method == 'POST': form = AuthorForm(request.POST) if form.is_valid(): form.save() return HttpResponseRedirect('/contact/created') else: form = AuthorForm() return render_to_response('author_form.html', {'form': form}) </code></pre> <h1>update existing record</h1> <pre><code>def authorcontactupd(request,id): if request.method == 'POST': a=Author.objects.get(pk=int(id)) form = AuthorForm(request.POST, instance=a) if form.is_valid(): form.save() return HttpResponseRedirect('/contact/created') else: a=Author.objects.get(pk=int(id)) form = AuthorForm(instance=a) return render_to_response('author_form.html', {'form': form}) </code></pre>
4
2009-03-18T15:27:47Z
[ "python", "django" ]
subprocess.Popen error
526,734
<p>I am running an MSI installer in silent mode and caching the logs in a specific file. The following is the command I need to execute.</p> <p><code>C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log"</code></p> <p>I used:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>to execute the command; however, it does not recognise the operation and gives an error about a wrong option being selected. I have cross-verified and found that the command only works this way.</p>
3
2009-02-09T00:10:49Z
526,771
<p>The problem is that you effectively supply Setup.exe with only one argument. Don't think in terms of the shell: the string you hand over as an argument is not split on spaces any more; that is now your job!</p> <p>So, if you are absolutely sure that "/qn /lv %TEMP%\log_silent.log" should be one argument, then use this:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '/qn /lv %TEMP%\log_silent.log'],stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>Otherwise (I guess this one will be correct), use this:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '/qn', '/lv', '%TEMP%\log_silent.log'],stdout=subprocess.PIPE).communicate()[0] </code></pre>
2
2009-02-09T00:35:39Z
[ "python", "subprocess", "popen" ]
subprocess.Popen error
526,734
<p>I am running an MSI installer in silent mode and caching the logs in a specific file. The following is the command I need to execute.</p> <p><code>C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log"</code></p> <p>I used:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>to execute the command; however, it does not recognise the operation and gives an error about a wrong option being selected. I have cross-verified and found that the command only works this way.</p>
3
2009-02-09T00:10:49Z
526,775
<p>Try putting each argument in its own string (reformatted for readability):</p> <pre><code>cmd = ['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '"/qn', '/lv', '%TEMP%\log_silent.log"'] subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>I have to say though, those double quotes do not look in the right places to me.</p>
0
2009-02-09T00:38:00Z
[ "python", "subprocess", "popen" ]
subprocess.Popen error
526,734
<p>I am running an MSI installer in silent mode and caching the logs in a specific file. The following is the command I need to execute.</p> <p><code>C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log"</code></p> <p>I used:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>to execute the command; however, it does not recognise the operation and gives an error about a wrong option being selected. I have cross-verified and found that the command only works this way.</p>
3
2009-02-09T00:10:49Z
526,781
<p>The problem is very subtle.</p> <p>You're executing the program directly. It gets:</p> <pre><code>argv[0] = "C:\Program Files\ My Installer\Setup.exe" argv[1] = /s /v "/qn /lv %TEMP%\log_silent.log" </code></pre> <p>Whereas it should be:</p> <pre><code>argv[1] = "/s" argv[2] = "/v" argv[3] = "/qn" argv[4] = "/lv %TEMP%\log_silent.log" </code></pre> <p>In other words, it should receive 5 arguments, not 2 arguments.</p> <p>Also, <code>%TEMP%</code> is directly unknown to the program!</p> <p>There are 2 ways to fix this problem:</p> <ol> <li><p><strong>Calling the shell.</strong></p> <pre><code>p = subprocess.Popen('C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log"', shell=True) output = p.communicate()[0] </code></pre></li> <li><p><strong>Calling the program directly (safer)</strong></p> <pre><code>s = ['C:\Program Files\ My Installer\Setup.exe', '/s', '/v', '/qn', '/lv %TEMP%\log_silent.log'] safes = [os.path.expandvars(a) for a in s] p = subprocess.Popen(safes) output = p.communicate()[0] </code></pre></li> </ol>
8
2009-02-09T00:40:21Z
[ "python", "subprocess", "popen" ]
subprocess.Popen error
526,734
<p>I am running an MSI installer in silent mode and caching the logs in a specific file. The following is the command I need to execute.</p> <p><code>C:\Program Files\ My Installer\Setup.exe /s /v "/qn /lv %TEMP%\log_silent.log"</code></p> <p>I used:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>to execute the command; however, it does not recognise the operation and gives an error about a wrong option being selected. I have cross-verified and found that the command only works this way.</p>
3
2009-02-09T00:10:49Z
526,830
<p>You said:</p> <pre><code>subprocess.Popen(['C:\Program Files\ My Installer\Setup.exe', '/s /v "/qn /lv %TEMP%\log_silent.log"'],stdout=subprocess.PIPE).communicate()[0] </code></pre> <p>Is the directory name really " My Installer" (with a leading space)?</p> <p>Also, as a general rule, you should use forward slashes in path specifications. Python should handle them seamlessly (even on Windows) and you avoid any problems with python interpreting backslashes as escape characters.</p> <p>(for example:</p> <pre><code>&gt;&gt;&gt; s = 'c:\program files\norton antivirus' &gt;&gt;&gt; print s c:\program files orton antivirus </code></pre> <p>)</p>
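<p>Two illustrative ways of writing the same path that avoid the escape-character problem:</p> <pre><code>path = r'c:\program files\norton antivirus'   # raw string: backslashes are kept literally
path = 'c:/program files/norton antivirus'    # forward slashes work with most Windows APIs too
</code></pre>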
0
2009-02-09T01:03:07Z
[ "python", "subprocess", "popen" ]
How to add custom fields to InlineFormsets?
526,795
<p>I'm trying to add custom fields to an InlineFormset using the following code, but the fields won't show up in the Django Admin. Is the InlineFormset too locked down to allow this? My print "ding" test fires as expected, I can print out the form.fields and see them all there, but the actual fields are never rendered in the admin.</p> <p><strong>admin.py</strong></p> <pre><code>from django.contrib import admin import models from django.forms.models import BaseInlineFormSet from django import forms from forms import ProgressForm from django.template.defaultfilters import slugify class ProgressInlineFormset(BaseInlineFormSet): def add_fields(self, form, index): print "ding" super(ProgressInlineFormset, self).add_fields(form, index) for criterion in models.Criterion.objects.all(): form.fields[slugify(criterion.name)] = forms.IntegerField(label=criterion.name) class ProgressInline(admin.TabularInline): model = models.Progress extra = 8 formset = ProgressInlineFormset class ReportAdmin(admin.ModelAdmin): list_display = ("name", "pdf_column",) search_fields = ["name",] inlines = (ProgressInline,) admin.site.register(models.Report, ReportAdmin) </code></pre>
6
2009-02-09T00:47:25Z
527,903
<pre><code>model = models.Progress </code></pre> <p>In the admin, only the fields defined in this <em>Progress</em> model will show up. You have no fields/fieldsets option overriding it.</p> <p>If you want to add the new ones, there are two options:</p> <ul> <li>In the model definition, add those new additional fields (make them optional!)</li> <li><p>In the admin model (<em>admin.TabularInline</em>), add something like:</p> <p>fields = ('newfield1', 'newfield2', 'newfield3')</p></li> </ul> <p>Take a look at <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/#fields" rel="nofollow">fields</a>, <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/#fieldsets" rel="nofollow">fieldsets</a>.</p>
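<p>Putting the second option together, a rough sketch (the extra field names here are made up; whichever names you use must actually exist on the <em>Progress</em> model):</p> <pre><code>class ProgressInline(admin.TabularInline):
    model = models.Progress
    extra = 8
    # hypothetical columns -- they have to be real model fields
    fields = ('criterion_score', 'criterion_note')
</code></pre>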
1
2009-02-09T12:12:45Z
[ "python", "django", "field", "formset", "inline-formset" ]
How to add custom fields to InlineFormsets?
526,795
<p>I'm trying to add custom fields to an InlineFormset using the following code, but the fields won't show up in the Django Admin. Is the InlineFormset too locked down to allow this? My print "ding" test fires as expected, I can print out the form.fields and see them all there, but the actual fields are never rendered in the admin.</p> <p><strong>admin.py</strong></p> <pre><code>from django.contrib import admin import models from django.forms.models import BaseInlineFormSet from django import forms from forms import ProgressForm from django.template.defaultfilters import slugify class ProgressInlineFormset(BaseInlineFormSet): def add_fields(self, form, index): print "ding" super(ProgressInlineFormset, self).add_fields(form, index) for criterion in models.Criterion.objects.all(): form.fields[slugify(criterion.name)] = forms.IntegerField(label=criterion.name) class ProgressInline(admin.TabularInline): model = models.Progress extra = 8 formset = ProgressInlineFormset class ReportAdmin(admin.ModelAdmin): list_display = ("name", "pdf_column",) search_fields = ["name",] inlines = (ProgressInline,) admin.site.register(models.Report, ReportAdmin) </code></pre>
6
2009-02-09T00:47:25Z
2,250,977
<p>I did it another way:</p> <p>forms.py:</p> <pre><code>from django import forms class ItemAddForm(forms.ModelForm): my_new_field = forms.IntegerField(initial=1, label='quantity') class Meta: model = Item </code></pre> <p>admin.py:</p> <pre><code>from django.contrib import admin from forms import * class ItemAddInline(admin.TabularInline): model = Item form = ItemAddForm </code></pre> <p>This works so far; I only need to somehow override the save method to handle this new field. See this: <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/#form" rel="nofollow">http://docs.djangoproject.com/en/dev/ref/contrib/admin/#form</a> . It says that by default Inlines use BaseModelForm, which is sent to formset_factory. That didn't work for me; subclassing BaseModelForm gave errors (no attribute '_meta'). So I use ModelForm instead.</p>
4
2010-02-12T10:06:37Z
[ "python", "django", "field", "formset", "inline-formset" ]
How to add custom fields to InlineFormsets?
526,795
<p>I'm trying to add custom fields to an InlineFormset using the following code, but the fields won't show up in the Django Admin. Is the InlineFormset too locked down to allow this? My print "ding" test fires as expected, I can print out the form.fields and see them all there, but the actual fields are never rendered in the admin.</p> <p><strong>admin.py</strong></p> <pre><code>from django.contrib import admin import models from django.forms.models import BaseInlineFormSet from django import forms from forms import ProgressForm from django.template.defaultfilters import slugify class ProgressInlineFormset(BaseInlineFormSet): def add_fields(self, form, index): print "ding" super(ProgressInlineFormset, self).add_fields(form, index) for criterion in models.Criterion.objects.all(): form.fields[slugify(criterion.name)] = forms.IntegerField(label=criterion.name) class ProgressInline(admin.TabularInline): model = models.Progress extra = 8 formset = ProgressInlineFormset class ReportAdmin(admin.ModelAdmin): list_display = ("name", "pdf_column",) search_fields = ["name",] inlines = (ProgressInline,) admin.site.register(models.Report, ReportAdmin) </code></pre>
6
2009-02-09T00:47:25Z
26,925,774
<p>You can do it by another way (Dynamic forms):</p> <p><strong>admin.py</strong></p> <pre><code>class ProgressInline(admin.TabularInline): model = models.Progress extra = 8 def get_formset(self, request, obj=None, **kwargs): extra_fields = {'my_field': forms.CharField()} kwargs['form'] = type('ProgressForm', (forms.ModelForm,), extra_fields) return super(ProgressInline, self).get_formset(request, obj, **kwargs) </code></pre>
1
2014-11-14T08:27:09Z
[ "python", "django", "field", "formset", "inline-formset" ]
python console interrupt? and cross platform threads
526,955
<p>I want my app to loop in Python but have a way to quit. Is there a way to get input from the console, scan it for the letter q, and quit when my app is ready to quit? In C I would just create a pthread that waits for cin, scans it, locks a global quit var, changes it, unlocks, and exits the thread, allowing my app to quit when it's done dumping a file or whatever it is doing. Do I do this the same way in Python, and will it be cross-platform? (I have seen a global single instance in Python that was Windows-specific.)</p>
0
2009-02-09T02:07:21Z
527,014
<p>Use the threading module to make a thread class.</p> <pre><code>import threading

class foo(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        # initialize anything else you need here

    def run(self):
        while True:
            line = raw_input("input something")
            # react to the input here, e.g. set a flag when 'q' is typed

class bar:
    def __init__(self):
        self.thread = foo()    # create the thread (foo) object and store it
        self.thread.start()    # this starts the loop (the run method) in the new thread
        if quit_requested:     # pseudocode: check whatever flag the thread sets
            pass               # quit
</code></pre>
1
2009-02-09T02:47:34Z
[ "python", "multithreading", "console", "quit" ]
python console interrupt? and cross platform threads
526,955
<p>I want my app to loop in Python but have a way to quit. Is there a way to get input from the console, scan it for the letter q, and quit when my app is ready to quit? In C I would just create a pthread that waits for cin, scans it, locks a global quit var, changes it, unlocks, and exits the thread, allowing my app to quit when it's done dumping a file or whatever it is doing. Do I do this the same way in Python, and will it be cross-platform? (I have seen a global single instance in Python that was Windows-specific.)</p>
0
2009-02-09T02:07:21Z
527,075
<p>Creating a new thread is easy enough – the threading module will help you out. You may want to make it daemonic (if you have other ways of exiting your program). I <em>think</em> you can change a variable without locking, too – python implements its own threads, and I'm fairly sure something like <code>self.running = False</code> will be atomic.</p> <p>The simplest way to kick off a new thread is with <code>threading.Thread(target=)</code>:</p> <pre><code># inside your class definition def signal_done(self): self.done = True def watcher(self): while True: if q_typed_in_console(): self.signal_done() return def start_watcher(self): t = threading.Thread(target=self.watcher) t.setDaemon(True) # Optional; means thread will exit when main thread does t.start() def main(self): while not self.done: # etc. </code></pre> <p>If you want your thread to be smarter, have its own state, etc. you can subclass <code>threading.Thread</code> yourself. The docs have more.</p> <p>[related to this: the python executable itself is single-threaded, even if you have multiple python threads]</p>
1
2009-02-09T03:43:15Z
[ "python", "multithreading", "console", "quit" ]
Dynamic data in postgresql
527,013
<p>I intend to have a Python script do many UPDATEs per second on 2,433,000 rows. I am currently trying to keep the dynamic column in Python as a value in a Python dict. Yet keeping my Python dict synchronized with changes in the other columns is becoming more and more difficult, or nonviable.</p> <p>I know I could put the autovacuum on overdrive, but I wonder if this would be enough to catch up with the sheer amount of UPDATEs. If only I could associate a Python variable with each row... </p> <p>I fear that the VACUUM and disk-write overhead will kill my server.</p> <p>Any suggestions on how to associate extremely dynamic variables to rows/keys?</p> <p>Thx </p>
1
2009-02-09T02:46:33Z
527,220
<p>PostgreSQL supports asynchronous notifications using the <a href="http://www.postgresql.org/docs/8.3/static/sql-listen.html" rel="nofollow">LISTEN</a> and <a href="http://www.postgresql.org/docs/8.1/static/sql-notify.html" rel="nofollow">NOTIFY</a> commands. An application (client) LISTENs for a notification using a notification name (e.g. "table_updated"). The database itself can be made to issue notifications either manually, i.e. in the code that performs the insertions or modifications (useful when a large number of updates are made, allowing for batch notifications), or automatically inside a row update <a href="http://www.postgresql.org/docs/8.3/static/sql-createtrigger.html" rel="nofollow">TRIGGER</a>.</p> <p>You could use such notifications to keep your data structures up to date.</p> <p>Alternatively (or you can use this in combination with the above), you can customize your Python dictionary by overriding the <code>__getitem__()</code>, <code>has_key()</code>, <code>__contains__()</code> methods and have them perform lookups as needed, allowing you to cache the results using timeouts etc.</p>
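<p>As a rough, untested sketch of the listening side (assuming the psycopg2 driver and a channel named "table_updated"; the exact API differs slightly between psycopg2 versions):</p> <pre><code>import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=mydb user=me")   # placeholder connection string
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN table_updated;")

while True:
    # block until the connection has something to report (5 second timeout)
    if select.select([conn], [], [], 5) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop()
        # refresh the affected entries of the Python-side cache here
        print "got notification:", notify
</code></pre>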
3
2009-02-09T06:04:10Z
[ "python", "postgresql", "performance", "dynamic-data", "vacuum" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,145
<p><a href="http://en.wikipedia.org/wiki/Mu_(negative" rel="nofollow">Mu</a></p>
3
2009-02-09T04:53:55Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,146
<p>Neither. Try Scheme. Or Haskell. or C. or a book.</p>
7
2009-02-09T04:54:20Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,149
<p>Neither <em>language</em> will expose a student to computer science concepts. That's up to the instructor (or motivated student) – where will they take the learning experience?</p> <p>[I'm assuming here that by "computer science", you mean algorithms and data structures (and related topics); if instead you mean things like pointer arithmetic and knowing the difference between a short, an int, and a long, then java will be closer than python.]</p>
7
2009-02-09T04:55:42Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,152
<p>It's not about the language, it's about what you do with it. You can learn about virtually any CS concept in either Python or Java (or anything else), although there are definitely some concepts that are much better suited for one or another - for example, functional programming (e.g. the <code>map</code> and <code>reduce</code> functions) and metaclasses in Python, or graphics programming in Java. Having worked with both, I do think Python would give you an easier learning curve to programming in general, so I'd start with that, keeping in mind that it will be to your advantage to get experience in both Python and Java (and other languages, like C and/or C++) in the long run.</p>
14
2009-02-09T04:57:32Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,161
<p>Computer Science is fairly language agnostic. Both Python and Java support multiple programming paradigms. Python supports object-oriented, imperative, and functional, while Java supports object-oriented, imperative, and structured. Python is dynamically typed while Java is statically typed.</p> <p>I could go on and on listing similarities and differences, but the bottom line is that if you want to learn a lot of CS concepts, you should learn multiple languages. Either Python or Java would be a good place to start. Python is probably an easier language to learn, so I would start with it.</p>
4
2009-02-09T05:05:48Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,170
<p>C &lt;-- </p>
2
2009-02-09T05:13:25Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,182
<p><a href="http://www.clojure.org/" rel="nofollow">Clojure</a> or <a href="http://www.haskell.org/" rel="nofollow">Haskell</a></p>
1
2009-02-09T05:23:15Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,235
<p>I learned C++ first and can't think of a better learning tool. If you want to learn CS rewrite the STL.</p>
2
2009-02-09T06:17:34Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
527,783
<p>You have mentioned computer science concepts, but that is too vague. IMHO you need to define what concepts you want to learn (say, is it algorithms or OO design) and then work with either Java or Python to strengthen those concepts. </p> <p>If you intend to learn design patterns then I would suggest Java (at least because there are a lot of good books and reference materials on the net in this regard). On the other hand, Python would be better when you want to quickly code an algorithm or if you want to try out your solution on a problem you came across. With Java you need to get familiar with a lot of APIs even before you get started writing moderately complex programs.</p>
1
2009-02-09T11:26:27Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
528,604
<p>Strictly speaking, neither language is going to help much with <em>computer science</em> concepts, if by this we mean algorithms, data structures and the like. Such things are largely independent of language. However, there are some <em>programming</em> concepts which are not language independent. Joel talks a fair bit about this in "<a href="http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html" rel="nofollow">The Perils of Java Schools</a>", and he at least isn't impressed with either of these languages for teaching. He thinks that understanding pointers and recursion is essential to making a good programmer (I'm with him at least part of the way) and neither of these languages forces you to do that.</p>
1
2009-02-09T15:26:32Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
528,919
<p>Just pick one language, and start learning by solving a problem most relevant to you. I don't think a debate is needed about which language you should learn first.</p>
1
2009-02-09T16:46:59Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
871,027
<p>Pick the one you like. The important thing is not to stop learning or give up, because every language you learn will make you a better coder. A lot of knowledge is common between them. Of course there are lots of differences, but do not mind them yet. No language is perfect, and no language is the solution for all problems. Lots of people like me find Python easier, so I would pick that. For starters, there is a free Python e-book at <a href="http://www.diveintopython.org/" rel="nofollow">http://www.diveintopython.org/</a>.<br/></p> <p>Furthermore, you can check the following link: <a href="http://www.hanselman.com/blog/ProgrammerIntentOrWhatYoureNotGettingAboutRubyAndWhyItsTheTits.aspx" rel="nofollow">http://www.hanselman.com/blog/ProgrammerIntentOrWhatYoureNotGettingAboutRubyAndWhyItsTheTits.aspx</a>.</p>
0
2009-05-15T21:56:55Z
[ "java", "python", "programming-languages" ]
Python or Java: which language will expose a self-taught programmer to more computer science concepts?
527,134
<p>Of the two, which one would expose someone just learning to program to more computer science concepts/problems?</p>
6
2009-02-09T04:48:10Z
15,336,592
<p><a href="http://mitpress.mit.edu/sicp/full-text/book/book.html" rel="nofollow">Scheme</a> This book will teach you all the fundamental concepts of computer programming (I wish I had read this book before :)</p>
1
2013-03-11T10:44:36Z
[ "java", "python", "programming-languages" ]
Intercepting stdout of a subprocess while it is running
527,197
<p>If this is my subprocess:</p> <pre><code>import time, sys for i in range(200): sys.stdout.write( 'reading %i\n'%i ) time.sleep(.02) </code></pre> <p>And this is the script controlling and modifying the output of the subprocess:</p> <pre><code>import subprocess, time, sys print 'starting' proc = subprocess.Popen( 'c:/test_apps/testcr.py', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE ) print 'process created' while True: #next_line = proc.communicate()[0] next_line = proc.stdout.readline() if next_line == '' and proc.poll() != None: break sys.stdout.write(next_line) sys.stdout.flush() print 'done' </code></pre> <p>Why is <code>readline</code> and <code>communicate</code> waiting until the process is done running? Is there a simple way to pass (and modify) the subprocess' stdout real-time?</p> <p>BTW, I've seen <a href="http://devlishgenius.blogspot.com/2008/10/logging-in-real-time-in-python.html">this</a>, but I don't need the logging features (and havn't bothered understand much of it).</p> <p>I'm on Windows XP.</p>
21
2009-02-09T05:43:35Z
527,202
<p>Process output is buffered. On more UNIXy operating systems (or Cygwin), the <A HREF="http://www.noah.org/wiki/Pexpect">pexpect</A> module is available, which recites all the necessary incantations to avoid buffering-related issues. However, these incantations require a working <A HREF="http://docs.python.org/library/pty.html">pty module</A>, which is not available on native (non-cygwin) win32 Python builds.</p> <p>In the example case where you control the subprocess, you can just have it call <code>sys.stdout.flush()</code> where necessary -- but for arbitrary subprocesses, that option isn't available.</p> <p>See also <A HREF="http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F">the question "Why not just use a pipe (popen())?"</A> in the pexpect FAQ.</p>
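<p>If the child process happens to be another Python script (as in the example here), one more thing worth trying is to start that interpreter unbuffered with the -u switch, so its stdout is not block-buffered when attached to a pipe. A rough sketch:</p> <pre><code>import subprocess

proc = subprocess.Popen(
    ['python', '-u', 'c:/test_apps/testcr.py'],   # -u forces unbuffered stdout/stderr
    stdout=subprocess.PIPE)
</code></pre>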
6
2009-02-09T05:49:20Z
[ "python", "process", "subprocess", "stdout", "popen" ]
Intercepting stdout of a subprocess while it is running
527,197
<p>If this is my subprocess:</p> <pre><code>import time, sys for i in range(200): sys.stdout.write( 'reading %i\n'%i ) time.sleep(.02) </code></pre> <p>And this is the script controlling and modifying the output of the subprocess:</p> <pre><code>import subprocess, time, sys print 'starting' proc = subprocess.Popen( 'c:/test_apps/testcr.py', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE ) print 'process created' while True: #next_line = proc.communicate()[0] next_line = proc.stdout.readline() if next_line == '' and proc.poll() != None: break sys.stdout.write(next_line) sys.stdout.flush() print 'done' </code></pre> <p>Why is <code>readline</code> and <code>communicate</code> waiting until the process is done running? Is there a simple way to pass (and modify) the subprocess' stdout real-time?</p> <p>BTW, I've seen <a href="http://devlishgenius.blogspot.com/2008/10/logging-in-real-time-in-python.html">this</a>, but I don't need the logging features (and havn't bothered understand much of it).</p> <p>I'm on Windows XP.</p>
21
2009-02-09T05:43:35Z
527,229
<p>As Charles already mentioned, the problem is buffering. I ran in to a similar problem when writing some modules for SNMPd, and solved it by replacing stdout with an auto-flushing version.</p> <p>I used the following code, inspired by some posts on ActiveState:</p> <pre><code>class FlushFile(object): """Write-only flushing wrapper for file-type objects.""" def __init__(self, f): self.f = f def write(self, x): self.f.write(x) self.f.flush() # Replace stdout with an automatically flushing version sys.stdout = FlushFile(sys.__stdout__) </code></pre>
14
2009-02-09T06:12:49Z
[ "python", "process", "subprocess", "stdout", "popen" ]
How to debug deadlock with python?
527,296
<p>I am developing a multi-threading application, which is deadlocking. </p> <p>I am using Visual C++ Express 2008 to trace the program. Once the deadlock occurs, I just pause the program and trace. I found that when the deadlock occurs, there are two threads that have called into Python from my C++ extension. </p> <p>Both of them use Queue in Python code, so I guess the deadlock might be caused by Queue. However, once the extension goes into Python code, I can't see anything but asm code and binary in the VC++ debugger.</p> <p>I would like to know whether there is any way to dump the call stack of the Python code after I have paused the program. And how can I find out which locks held by which threads caused the deadlock?</p>
4
2009-02-09T06:49:07Z
527,354
<p>If you can compile your extension module with gcc (for example, by using <a href="http://cygwin.com/">Cygwin</a>), you could use gdb and the <a href="http://wiki.python.org/moin/DebuggingWithGdb">pystack</a> gdb macro to get Python stacks in that situation. I don't know if it would be possible to do something equivalent to pystack in Visual C++ Express, but you might get some ideas from the pystack macro implementation anyway.</p> <p>Since you mention you only see asm/binary in the VC++ debugger, you should make sure you compile Python with debug symbols. If VC++ is still showing asm, it might be that you need to tell VC++ where the source files are (sorry, haven't used VC++ in years so I can't tell what exactly you might need to do if this was the case).</p> <p>You might also get some important information by adding lots of logging calls to your code, both Python side and your C++ extension.</p> <p>In any case, I am almost certain the deadlocks are not due to Queue, but your own code.</p>
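<p>If rebuilding with gcc is not practical, another option (only a sketch, and only useful if you can still get some Python thread to run once the deadlock has happened, e.g. a watchdog thread started beforehand) is to dump every thread's Python stack from inside the process with sys._current_frames(), available since Python 2.5:</p> <pre><code>import sys, traceback

def dump_all_thread_stacks():
    # print the current Python stack of every thread in the process
    for thread_id, frame in sys._current_frames().items():
        print "--- thread %d ---" % thread_id
        traceback.print_stack(frame)
</code></pre>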
5
2009-02-09T07:33:28Z
[ "python", "multithreading", "debugging", "deadlock" ]
How to modify existing panels in Maya using MEL or Python?
527,314
<p>I've been writing tools in Maya for years using MEL and Python. I'd consider myself an expert in custom window/gui design in Maya except for one area; modifying existing panels and editors.</p> <p>Typically, I'm building tools that need totally custom UIs, so its customary for me to build them from scratch. However, recently I've found myself wanting to add some additional functionality to the layer editor in Maya. I've seen tutorials that explain how to do this, but now that I want to do it, I can't find any.</p> <p>Links to tutorials or a brief code snippet to get me started would be great. I just need to know how to find the layer editor/panel and, say, add a button or text field to it.</p>
1
2009-02-09T07:05:27Z
551,109
<p>Have you tried searching for UI item names in the MEL files under the Maya installation directory? It should be one of the included MEL scripts, and from there you can just modify it.</p>
1
2009-02-15T16:03:11Z
[ "python", "user-interface", "maya", "panels", "mel" ]
How to modify existing panels in Maya using MEL or Python?
527,314
<p>I've been writing tools in Maya for years using MEL and Python. I'd consider myself an expert in custom window/gui design in Maya except for one area; modifying existing panels and editors.</p> <p>Typically, I'm building tools that need totally custom UIs, so its customary for me to build them from scratch. However, recently I've found myself wanting to add some additional functionality to the layer editor in Maya. I've seen tutorials that explain how to do this, but now that I want to do it, I can't find any.</p> <p>Links to tutorials or a brief code snippet to get me started would be great. I just need to know how to find the layer editor/panel and, say, add a button or text field to it.</p>
1
2009-02-09T07:05:27Z
1,045,794
<p>having just stumbled across your question, have you tried Digital Tutors Artists Guide to MEL? Chapters 19-22 describe step by step how to create your own custom GUI's and windows in Maya, here's the link: <a href="http://www.digitaltutors.com/store/home.php?cat=82" rel="nofollow">http://www.digitaltutors.com/store/home.php?cat=82</a></p> <p>Have fun.</p>
1
2009-06-25T19:33:59Z
[ "python", "user-interface", "maya", "panels", "mel" ]
How to modify existing panels in Maya using MEL or Python?
527,314
<p>I've been writing tools in Maya for years using MEL and Python. I'd consider myself an expert in custom window/gui design in Maya except for one area; modifying existing panels and editors.</p> <p>Typically, I'm building tools that need totally custom UIs, so its customary for me to build them from scratch. However, recently I've found myself wanting to add some additional functionality to the layer editor in Maya. I've seen tutorials that explain how to do this, but now that I want to do it, I can't find any.</p> <p>Links to tutorials or a brief code snippet to get me started would be great. I just need to know how to find the layer editor/panel and, say, add a button or text field to it.</p>
1
2009-02-09T07:05:27Z
3,135,835
<p>Old post, but maybe someone still wants to find out. </p> <p>I wrote this script at least 30 years ago: <a href="http://www.creativecrash.com/maya/downloads/scripts-plugins/interface-display/c/guihelper" rel="nofollow">http://www.creativecrash.com/maya/downloads/scripts-plugins/interface-display/c/guihelper</a></p> <p>It's horrible scripting-wise, but very useful for modifying Maya's GUI. You can add popup menus to all GUI items with the item's name, commands to print the hierarchy etc.; and you can load a window showing the GUI control hierarchy.</p> <p>Usually the trick is to identify a GUI control that has a specific name, for example TimeSliderForm, and then traverse the hierarchy to where you want to go, by querying the control's command, help text, label text etc.</p>
2
2010-06-28T20:18:21Z
[ "python", "user-interface", "maya", "panels", "mel" ]
How to modify existing panels in Maya using MEL or Python?
527,314
<p>I've been writing tools in Maya for years using MEL and Python. I'd consider myself an expert in custom window/gui design in Maya except for one area; modifying existing panels and editors.</p> <p>Typically, I'm building tools that need totally custom UIs, so its customary for me to build them from scratch. However, recently I've found myself wanting to add some additional functionality to the layer editor in Maya. I've seen tutorials that explain how to do this, but now that I want to do it, I can't find any.</p> <p>Links to tutorials or a brief code snippet to get me started would be great. I just need to know how to find the layer editor/panel and, say, add a button or text field to it.</p>
1
2009-02-09T07:05:27Z
10,439,595
<p>The easy way is to modify the existing Maya code and put it in your user scripts directory. You can use whatIs to get the script name. For example: </p> <pre><code>whatIs "layerEditor"; </code></pre> <p>The result is <code>./scripts/others/layerEditor.mel //</code>. But nowadays you can also wrap the UI elements as PyQt instances.</p>
3
2012-05-03T21:30:04Z
[ "python", "user-interface", "maya", "panels", "mel" ]
input and thread problem, python
527,420
<p>I am doing something like this in python</p> <pre><code>class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) #""" sys.exit() </code></pre> <p>What happens is my app locks up for 5 seconds printing "hey" (5 times) THEN i get the raw_input dialog. How can i have the dialog show up so i can quit anytime instead of when my loop runs out?</p>
0
2009-02-09T08:34:41Z
527,433
<p>I just tried the code to make sure, and it does do what it's supposed to... you can type q and Enter into the console and make the application quit before a=0 (so it says "hey" fewer than 5 times).</p> <p>I don't know what you mean by the raw_input dialog; raw_input normally just takes input from stdin.</p>
0
2009-02-09T08:49:10Z
[ "python", "multithreading" ]
input and thread problem, python
527,420
<p>I am doing something like this in python</p> <pre><code>class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) #""" sys.exit() </code></pre> <p>What happens is my app locks up for 5 seconds printing "hey" (5 times) THEN i get the raw_input dialog. How can i have the dialog show up so i can quit anytime instead of when my loop runs out?</p>
0
2009-02-09T08:34:41Z
527,437
<p>You mean the while loop runs before the thread? Well, you can't predict this unless you synchronize it. No one guarantees you that the thread will run before or after that while loop. But if it's being blocked for 5 seconds that's awkward; the thread should have been pre-empted by then.</p> <p>Also, since your first use of wantQuit is in the run() method, no one assures you that the thread has been started when you're checking for its wantQuit attribute in <code>while not myThread.wantQuit</code> .</p>
1
2009-02-09T08:51:08Z
[ "python", "multithreading" ]
input and thread problem, python
527,420
<p>I am doing something like this in python</p> <pre><code>class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) #""" sys.exit() </code></pre> <p>What happens is my app locks up for 5 seconds printing "hey" (5 times) THEN i get the raw_input dialog. How can i have the dialog show up so i can quit anytime instead of when my loop runs out?</p>
0
2009-02-09T08:34:41Z
527,445
<p>The behaviour here is not what you described. Look at those sample outputs I got:</p> <p>1st: pressing <code>q&lt;ENTER&gt;</code> as fast as possible:</p> <pre><code>hey q </code></pre> <p>2nd: wait a bit before pressing <code>q&lt;ENTER&gt;</code>:</p> <pre><code>hey hey hey q </code></pre> <p>3rd: Don't touch the keyboard:</p> <pre><code>hey hey hey hey hey hey # Application locks because main thread is over but # there are other threads running. add myThread.wantQuit = 1 # to prevent that if you want </code></pre>
1
2009-02-09T08:54:56Z
[ "python", "multithreading" ]
input and thread problem, python
527,420
<p>I am doing something like this in python</p> <pre><code>class MyThread ( threading.Thread ): def run (s): try: s.wantQuit = 0 while(not s.wantQuit): button = raw_input() if button == "q": s.wantQuit=1 except KeyboardInterrupt: s.wantQuit = 1 myThread = MyThread () myThread.start() a=5 while not myThread.wantQuit: print "hey" if (a == 0): break; a = a-1; time.sleep(1) #""" sys.exit() </code></pre> <p>What happens is my app locks up for 5 seconds printing "hey" (5 times) THEN i get the raw_input dialog. How can i have the dialog show up so i can quit anytime instead of when my loop runs out?</p>
0
2009-02-09T08:34:41Z
528,259
<p>huperboreean has your answer. The thread is still being started when the for loop is executed.</p> <p>You want to check that a thread is started before moving into your loop. You could simplify the thread to monitor raw_input, and return when a 'q' is entered. This will kill the thread.</p> <p>Your main for loop can check if the thread is alive.</p>
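<p>A minimal sketch of that last idea (assuming the thread's run() simply returns once the user types 'q'):</p> <pre><code>watcher = MyThread()
watcher.start()

a = 5
while watcher.isAlive() and a:   # stop when the input thread exits or a reaches 0
    print "hey"
    a = a - 1
    time.sleep(1)
</code></pre>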
0
2009-02-09T14:09:03Z
[ "python", "multithreading" ]
How to deploy a Python application with libraries as source with no further dependencies?
527,510
<p><strong>Background</strong>: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako)</p> <p><strong>The situation</strong>: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install.</p> <p><strong>The Problem</strong></p> <ul> <li>This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access).</li> <li>I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion.</li> </ul> <p><strong>Anecdote</strong>: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above). </p>
12
2009-02-09T09:20:17Z
527,531
<p>Just use <a href="http://pypi.python.org/pypi/virtualenv">virtualenv</a> - it is a tool to create isolated Python environments. You can create a set-up script and distribute the whole bunch if you want.</p>
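<p>For illustration only, a bootstrap script along these lines could be checked in next to the tool; it assumes <code>virtualenv</code> is already on the PATH, and the package names are placeholders for the real dependencies:</p> <pre><code>import os, subprocess, sys

DEPS = ["Mako", "ElementTree"]       # placeholders - list your real dependencies here

def main(env_dir="env"):
    # create an isolated environment, then install each dependency into it
    subprocess.check_call(["virtualenv", env_dir])
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    easy_install = os.path.join(env_dir, bin_dir, "easy_install")
    for dep in DEPS:
        subprocess.check_call([easy_install, dep])

if __name__ == "__main__":
    main()
</code></pre>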
9
2009-02-09T09:28:04Z
[ "python", "deployment", "layout", "bootstrapping" ]
How to deploy a Python application with libraries as source with no further dependencies?
527,510
<p><strong>Background</strong>: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako)</p> <p><strong>The situation</strong>: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install.</p> <p><strong>The Problem</strong></p> <ul> <li>This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access).</li> <li>I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion.</li> </ul> <p><strong>Anecdote</strong>: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above). </p>
12
2009-02-09T09:20:17Z
527,872
<p>"I dislike the fact that developers (or me starting on a clean new machine) have to jump through the distutils hoops of having to install the libraries locally before they can get started"</p> <p>Why?</p> <p>What -- specifically -- is wrong with this?</p> <p>You did it to create the project. Your project is so popular others want to do the same.</p> <p>I don't see a problem. Please update your question with specific problems you need solved. Disliking the way open source is distributed isn't a problem -- it's the way that open source works.</p> <p><strong>Edit</strong>. The "walled garden" doesn't matter very much. </p> <p>Choice 1. You could, BTW, build an "installer" that runs easy_install 6 times for them.</p> <p>Choice 2. You can save all of the installer kits that easy_install would have used. Then you can provide a script that does an unzip and a <code>python setup.py install</code> for all six.</p> <p>Choice 3. You can provide a zipped version of your <code>site-packages</code>. After they install Python, they unzip your site-packages directory into `C:\Python2.5\lib\site-packages``.</p> <p>Choice 4. You can build your own MSI installer kit for your Python environment.</p> <p>Choice 5. You can host your own pypi-like server and provide an easy_install that checks your server first.</p>
8
2009-02-09T12:01:16Z
[ "python", "deployment", "layout", "bootstrapping" ]
How to deploy a Python application with libraries as source with no further dependencies?
527,510
<p><strong>Background</strong>: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako)</p> <p><strong>The situation</strong>: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install.</p> <p><strong>The Problem</strong></p> <ul> <li>This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access).</li> <li>I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion.</li> </ul> <p><strong>Anecdote</strong>: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above). </p>
12
2009-02-09T09:20:17Z
527,934
<p>I agree with the answers by Nosklo and S.Lott. (+1 to both)</p> <p>Can I just add that what you want to do is actually a <strong>terrible idea</strong>.</p> <p>If you genuinely want people to hack on your code, they will need some understanding of the libraries involved, how they work, what they are, where they come from, the documentation for each etc. Sure provide them with a bootstrap script, but beyond that you will be molly-coddling to the point that they are clueless.</p> <p>Then there are specific issues such as "what if one user wants to install a different version or implementation of a library?", a glaring example here is ElementTree, as this has a number of implementations.</p>
0
2009-02-09T12:24:30Z
[ "python", "deployment", "layout", "bootstrapping" ]
How to deploy a Python application with libraries as source with no further dependencies?
527,510
<p><strong>Background</strong>: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako)</p> <p><strong>The situation</strong>: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install.</p> <p><strong>The Problem</strong></p> <ul> <li>This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access).</li> <li>I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion.</li> </ul> <p><strong>Anecdote</strong>: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above). </p>
12
2009-02-09T09:20:17Z
528,064
<p>I sometimes use the approach I describe below, for the exact same reason that @Boris states: I would prefer that the use of some code is as easy as a) svn checkout/update - b) go.</p> <p>But for the record:</p> <ul> <li>I use virtualenv/easy_install most of the time.</li> <li>I agree to a certain extent with the criticisms by @Ali A and @S.Lott</li> </ul> <p>Anyway, the approach I use depends on modifying sys.path, and works like this:</p> <ul> <li>Require python and setuptools (to enable loading code from eggs) on all computers that will use your software.</li> <li>Organize your directory structure like this:</li> </ul> <pre> project/ *.py scriptcustomize.py file.pth thirdparty/ eggs/ mako-vNNN.egg ... .egg code/ elementtree\ *.py ... </pre> <ul> <li>In your top-level script(s) include the following code at the top:</li> </ul> <pre> from scriptcustomize import apply_pth_files apply_pth_files(__file__) </pre> <ul> <li>Add scriptcustomize.py to your project folder:</li> </ul> <pre> import os from glob import glob import fileinput import sys def apply_pth_files(scriptfilename, at_beginning=False): """At the top of your script: from scriptcustomize import apply_pth_files apply_pth_files(__file__) """ directory = os.path.dirname(scriptfilename) files = glob(os.path.join(directory, '*.pth')) if not files: return for line in fileinput.input(files): line = line.strip() if line and line[0] != '#': path = os.path.join(directory, line) if at_beginning: sys.path.insert(0, path) else: sys.path.append(path) </pre> <ul> <li>Add one or more *.pth file(s) to your project folder. On each line, put a reference to a directory with packages. For instance:</li> </ul> <pre> # contents of *.pth file thirdparty/code thirdparty/eggs/mako-vNNN.egg </pre> <ul> <li>I "kind-of" like this approach. What I like: it is similar to how *.pth files work, but for individual programs instead of your entire site-packages. What I do not like: having to add the two lines at the beginning of the top-level scripts.</li> <li>Again: I use virtualenv most of the time. But I tend to use virtualenv for projects where I have tight control of the deployment scenario. In cases where I do not have tight control, I tend to use the approach I describe above. It makes it really easy to package a project as a zip and have the end user "install" it (by unzipping).</li> </ul>
8
2009-02-09T13:05:46Z
[ "python", "deployment", "layout", "bootstrapping" ]
How to deploy a Python application with libraries as source with no further dependencies?
527,510
<p><strong>Background</strong>: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako)</p> <p><strong>The situation</strong>: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install.</p> <p><strong>The Problem</strong></p> <ul> <li>This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access).</li> <li>I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion.</li> </ul> <p><strong>Anecdote</strong>: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above). </p>
12
2009-02-09T09:20:17Z
530,727
<p>I'm not suggesting that this is a great idea, but usually what I do in situations like these is that I have a Makefile, checked into subversion, which contains make rules to fetch all the dependent libraries and install them. The makefile can be smart enough to only apply the dependent libraries if they aren't present, so this can be relatively fast.</p> <p>A new developer on the project simply checks out from subversion and then types "make".</p> <p>This approach might work well for you, given that your audience is already used to the idea of using subversion checkouts as part of their fetch process. Also, it has the nice property that all knowledge about your program, including its external dependencies, are captured in the source code repository.</p>
0
2009-02-10T01:01:51Z
[ "python", "deployment", "layout", "bootstrapping" ]
How to embed p tag inside some text using Beautifulsoup?
527,629
<p>I wanted to embed <code>&lt;p</code>> tag where ever there is a \r\n\r\n.</p> <p>u"Finally Sri Lanka showed up, prevented their first 5-0 series whitewash, and stopped India at nine ODI wins in a row. \r\n\r\nFor 62 balls Yuvraj Singh played a dream knock, keeping India in the game despite wickets falling around him. \r\n\r\nPerhaps the toss played a big part. This was only the second time Mahela Jayawardene beat Mahendra Singh Dhoni with the coin in the last 11 occasions. \r\n\r\nIt was Jayasuriya who provided Sri Lanka with the springboard. \r\n\r\nThe pyrotechnics may have stopped upon Jayasuriya's dismissal, but the runs kept coming at a fair pace."</p> <p>I tried solving this using BeautifulSoup but couldn't find the way out of it. Can anyone through some light on this. Thanks in advance.</p>
1
2009-02-09T10:09:05Z
527,647
<pre><code>''.join('&lt;p&gt;%s&lt;/p&gt;' % line for line in text.split('\r\n\r\n')) # Results: u"&lt;p&gt;Finally Sri Lanka showed up, prevented their first 5-0 series whitewash, and stopped India at nine ODI wins in a row. &lt;/p&gt; &lt;p&gt;For 62 balls Yuvraj Singh played a dream knock, keeping India in the game despite wickets falling around him. &lt;/p&gt;&lt;p&gt;Perhaps the toss played a big part. This was only the second time Mahela Jayawardene beat Mahendra Singh Dhoni with the coin in the last 11 occasions. &lt;/p&gt; &lt;p&gt;It was Jayasuriya who provided Sri Lanka with the springboard. &lt;/p&gt; &lt;p&gt;The pyrotechnics may have stopped upon Jayasuriya's dismissal, but the runs kept coming at a fair pace.&lt;/p&gt;" </code></pre>
5
2009-02-09T10:15:15Z
[ "python", "beautifulsoup" ]
Can I instantiate a subclass object from the superclass
527,757
<p>I have the following example code:</p> <pre><code>class A(object): def __init__(self, id): self.myid = id def foo(self, x): print 'foo', self.myid*x class B(A): def __init__(self, id): self.myid = id self.mybid = id*2 def bar(self, x): print 'bar', self.myid, self.mybid, x </code></pre> <p>When used, the following could be generated:</p> <pre><code>&gt;&gt;&gt; a = A(2) &gt;&gt;&gt; a.foo(10) foo 20 &gt;&gt;&gt; &gt;&gt;&gt; b = B(3) &gt;&gt;&gt; b.foo(10) foo 30 &gt;&gt;&gt; b.bar(12) bar 3 6 12 </code></pre> <p>Now lets say I have some more subclasses <code>class C(A):</code> and <code>class D(A):</code>. I also know that the id will always fit in either B, C or D, but never in 2 of them at the same time.</p> <p>Now I would like to call A(23) and get an object of the correct subclass. Something like this:</p> <blockquote> <pre><code>&gt;&gt;&gt; type(A(2)) &lt;class '__main__.B'&gt; &gt;&gt;&gt; type(A(22)) &lt;class '__main__.D'&gt; &gt;&gt;&gt; type(A(31)) &lt;class '__main__.C'&gt; &gt;&gt;&gt; type(A(12)) &lt;class '__main__.B'&gt; </code></pre> </blockquote> <p>Is this impossible or is it possible but just bad design? How should problems like this be solved?</p>
4
2009-02-09T11:13:30Z
527,789
<p>You should rather implement the <a href="http://en.wikipedia.org/wiki/Abstract_factory_pattern">Abstract Factory pattern</a>, and your factory would then build any object you like, depending on the provided parameters. That way your code will remain clean and extensible.</p> <p>Any hack that lets you do it directly may stop working when you upgrade your interpreter version, since nobody promises backwards compatibility for such tricks. </p> <p>EDIT: After a while I'm not sure if you should use the Abstract Factory or the <a href="http://en.wikipedia.org/wiki/Factory_method_pattern">Factory Method pattern</a>. It depends on the details of your code, so pick whichever suits your needs.</p>
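<p>In its simplest form the factory is just a function that maps the id to the right subclass. A sketch using the classes from the question - <code>accepts()</code> is a made-up hook standing in for whatever rule actually decides which subclass an id belongs to:</p> <pre><code>def make_a(id):
    """Return the B, C or D instance appropriate for this id."""
    # accepts() is hypothetical; replace it with the real range check
    for cls in (B, C, D):
        if cls.accepts(id):
            return cls(id)
    raise ValueError("no subclass accepts id %r" % id)
</code></pre>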
6
2009-02-09T11:29:00Z
[ "python", "oop" ]
Can I instantiate a subclass object from the superclass
527,757
<p>I have the following example code:</p> <pre><code>class A(object): def __init__(self, id): self.myid = id def foo(self, x): print 'foo', self.myid*x class B(A): def __init__(self, id): self.myid = id self.mybid = id*2 def bar(self, x): print 'bar', self.myid, self.mybid, x </code></pre> <p>When used, the following could be generated:</p> <pre><code>&gt;&gt;&gt; a = A(2) &gt;&gt;&gt; a.foo(10) foo 20 &gt;&gt;&gt; &gt;&gt;&gt; b = B(3) &gt;&gt;&gt; b.foo(10) foo 30 &gt;&gt;&gt; b.bar(12) bar 3 6 12 </code></pre> <p>Now lets say I have some more subclasses <code>class C(A):</code> and <code>class D(A):</code>. I also know that the id will always fit in either B, C or D, but never in 2 of them at the same time.</p> <p>Now I would like to call A(23) and get an object of the correct subclass. Something like this:</p> <blockquote> <pre><code>&gt;&gt;&gt; type(A(2)) &lt;class '__main__.B'&gt; &gt;&gt;&gt; type(A(22)) &lt;class '__main__.D'&gt; &gt;&gt;&gt; type(A(31)) &lt;class '__main__.C'&gt; &gt;&gt;&gt; type(A(12)) &lt;class '__main__.B'&gt; </code></pre> </blockquote> <p>Is this impossible or is it possible but just bad design? How should problems like this be solved?</p>
4
2009-02-09T11:13:30Z
527,794
<p>Generally it's not such a good idea when a superclass has any knowledge of the subclasses.</p> <p>Think about what you want to do from an OO point of view.</p> <p>The superclass is providing common behaviour for all objects of that type, e.g. Animal. Then the subclass provides the specialisation of the behaviour, e.g. Dog.</p> <p>Think of it in terms of an "isa" relationship, i.e. a Dog is an Animal.</p> <p>"An Animal is a Dog" doesn't really make sense.</p> <p>HTH</p> <p>cheers,</p> <p>Rob</p>
2
2009-02-09T11:31:04Z
[ "python", "oop" ]
Can I instantiate a subclass object from the superclass
527,757
<p>I have the following example code:</p> <pre><code>class A(object): def __init__(self, id): self.myid = id def foo(self, x): print 'foo', self.myid*x class B(A): def __init__(self, id): self.myid = id self.mybid = id*2 def bar(self, x): print 'bar', self.myid, self.mybid, x </code></pre> <p>When used, the following could be generated:</p> <pre><code>&gt;&gt;&gt; a = A(2) &gt;&gt;&gt; a.foo(10) foo 20 &gt;&gt;&gt; &gt;&gt;&gt; b = B(3) &gt;&gt;&gt; b.foo(10) foo 30 &gt;&gt;&gt; b.bar(12) bar 3 6 12 </code></pre> <p>Now lets say I have some more subclasses <code>class C(A):</code> and <code>class D(A):</code>. I also know that the id will always fit in either B, C or D, but never in 2 of them at the same time.</p> <p>Now I would like to call A(23) and get an object of the correct subclass. Something like this:</p> <blockquote> <pre><code>&gt;&gt;&gt; type(A(2)) &lt;class '__main__.B'&gt; &gt;&gt;&gt; type(A(22)) &lt;class '__main__.D'&gt; &gt;&gt;&gt; type(A(31)) &lt;class '__main__.C'&gt; &gt;&gt;&gt; type(A(12)) &lt;class '__main__.B'&gt; </code></pre> </blockquote> <p>Is this impossible or is it possible but just bad design? How should problems like this be solved?</p>
4
2009-02-09T11:13:30Z
527,799
<p>I don't think you can change the type of the object, but you can create another class that will work like a factory for the subclasses. Something like this:</p> <pre><code>class LetterFactory(object): @staticmethod def getLetterObject(n): if n == 1: return A(n) elif n == 2: return B(n) else: return C(n) a = LetterFactory.getLetterObject(1) b = LetterFactory.getLetterObject(2) ... </code></pre>
2
2009-02-09T11:32:35Z
[ "python", "oop" ]
How to properly organize a package/module dependency tree?
527,919
<p>Good morning,</p> <p>I am currently writing a python library. At the moment, modules and classes are deployed in an unorganized way, with no reasoned design. As I approach a more official release, I would like to reorganize classes and modules so that they have a better overall design. I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level. Also, I was considering some modification to the classes so to reduce these dependencies.</p> <p>What is your strategy for a good overall design of a potentially complex and in-the-making python library? Do you have interesting suggestions ?</p> <p>Thanks</p> <p><b>Update:</b></p> <p>I was indeed looking for a rule of thumb. For example, suppose this case happens (<strong>init</strong>.py removed for clarity)</p> <pre><code>foo/bar/a.py foo/bar/b.py foo/hello/c.py foo/hello/d.py </code></pre> <p>now, if you happen to have d.py importing bar.b and a.py importing hello.c, I would consider this a bad setting. Another case would be</p> <pre><code>foo/bar/a.py foo/bar/baz/b.py foo/bar/baz/c.py </code></pre> <p>suppose that both a.py and b.py import c. you have three solutions: 1) b imports c, a import baz.c 2) you move c in foo/bar. a.py imports c, b.py imports .c 3) you move c somewhere else (say foo/cpackage/c.py) and then both a and b import cpackage.c</p> <p>I tend to prefer 3), but if c.py has no meaning as a standalone module, for example because you want to keep it "private" into the bar package, I would preferentially go for 1).</p> <p>There are many other similar cases. My rule of thumb is to reduce the number of dependencies and crossings at a minimum, so to prevent a highly branched, highly interweaved setup, but I could be wrong. </p>
2
2009-02-09T12:19:02Z
527,943
<p>The question is very vague.</p> <p>You can achieve this by having base/core things that import nothing from the remainder of the library, and concrete implementations importing from there. Apart from "don't have two modules importing from each other at import time", you should be fine.</p> <p>module1.py:</p> <pre><code>import module2 </code></pre> <p>module2.py:</p> <pre><code>import module1 </code></pre> <p>This won't work!</p>
0
2009-02-09T12:27:56Z
[ "python", "design" ]
How to properly organize a package/module dependency tree?
527,919
<p>Good morning,</p> <p>I am currently writing a python library. At the moment, modules and classes are deployed in an unorganized way, with no reasoned design. As I approach a more official release, I would like to reorganize classes and modules so that they have a better overall design. I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level. Also, I was considering some modification to the classes so to reduce these dependencies.</p> <p>What is your strategy for a good overall design of a potentially complex and in-the-making python library? Do you have interesting suggestions ?</p> <p>Thanks</p> <p><b>Update:</b></p> <p>I was indeed looking for a rule of thumb. For example, suppose this case happens (<strong>init</strong>.py removed for clarity)</p> <pre><code>foo/bar/a.py foo/bar/b.py foo/hello/c.py foo/hello/d.py </code></pre> <p>now, if you happen to have d.py importing bar.b and a.py importing hello.c, I would consider this a bad setting. Another case would be</p> <pre><code>foo/bar/a.py foo/bar/baz/b.py foo/bar/baz/c.py </code></pre> <p>suppose that both a.py and b.py import c. you have three solutions: 1) b imports c, a import baz.c 2) you move c in foo/bar. a.py imports c, b.py imports .c 3) you move c somewhere else (say foo/cpackage/c.py) and then both a and b import cpackage.c</p> <p>I tend to prefer 3), but if c.py has no meaning as a standalone module, for example because you want to keep it "private" into the bar package, I would preferentially go for 1).</p> <p>There are many other similar cases. My rule of thumb is to reduce the number of dependencies and crossings at a minimum, so to prevent a highly branched, highly interweaved setup, but I could be wrong. </p>
2
2009-02-09T12:19:02Z
527,964
<p><strong>"I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level."</strong></p> <p>Python must read like English (or any other natural language.)</p> <p>An import is a first-class statement that should have real meaning. Organizing things by "layer level" (whatever that is) should be clear, meaningful and obvious.</p> <p>Do not make arbitrary technical groupings of classes into modules and modules into packages. </p> <p>Make the modules and package obvious and logical so that the list of imports is obvious, simple and logical.</p> <p><strong>"Also, I was considering some modification to the classes so to reduce these dependencies."</strong></p> <p>Reducing the dependencies sounds technical and arbitrary. It may not be, but it sounds that way. Without actual examples, it's impossible to say.</p> <p>Your goal is clarity. </p> <p>Also, the module and package are the stand-alone units of reuse. (Not classes; a class, but itself isn't usually reusable.) Your dependency tree should reflect this. You're aiming for modules that can be imported neatly and cleanly into your application.</p> <p>If you have many closely-related modules (or alternative implementations) then packages can be used, but used sparingly. The Python libraries are relatively flat; and there's some wisdom in that.</p> <p><hr /></p> <p><strong>Edit</strong></p> <p>One-way dependency between layers is an essential feature. This is more about proper software design than it is about Python. You should (1) design in layers, (2) design so that the dependencies are very strict between the layers, and then (3) implement that in Python. </p> <p>The packages may not necessarily fit your layering precisely. The packages may physically be a flat list of directories with the dependencies expressed only via <code>import</code> statements.</p>
6
2009-02-09T12:34:30Z
[ "python", "design" ]
How to properly organize a package/module dependency tree?
527,919
<p>Good morning,</p> <p>I am currently writing a python library. At the moment, modules and classes are deployed in an unorganized way, with no reasoned design. As I approach a more official release, I would like to reorganize classes and modules so that they have a better overall design. I drew a diagram of the import dependencies, and I was planning to aggregate classes by layer level. Also, I was considering some modification to the classes so to reduce these dependencies.</p> <p>What is your strategy for a good overall design of a potentially complex and in-the-making python library? Do you have interesting suggestions ?</p> <p>Thanks</p> <p><b>Update:</b></p> <p>I was indeed looking for a rule of thumb. For example, suppose this case happens (<strong>init</strong>.py removed for clarity)</p> <pre><code>foo/bar/a.py foo/bar/b.py foo/hello/c.py foo/hello/d.py </code></pre> <p>now, if you happen to have d.py importing bar.b and a.py importing hello.c, I would consider this a bad setting. Another case would be</p> <pre><code>foo/bar/a.py foo/bar/baz/b.py foo/bar/baz/c.py </code></pre> <p>suppose that both a.py and b.py import c. you have three solutions: 1) b imports c, a import baz.c 2) you move c in foo/bar. a.py imports c, b.py imports .c 3) you move c somewhere else (say foo/cpackage/c.py) and then both a and b import cpackage.c</p> <p>I tend to prefer 3), but if c.py has no meaning as a standalone module, for example because you want to keep it "private" into the bar package, I would preferentially go for 1).</p> <p>There are many other similar cases. My rule of thumb is to reduce the number of dependencies and crossings at a minimum, so to prevent a highly branched, highly interweaved setup, but I could be wrong. </p>
2
2009-02-09T12:19:02Z
528,339
<p>It depends on the project, right? For example, if you are using a model-view-controller design, then your package would be structured in a way that makes the 3 groups of code independent.</p> <p>If you need some ideas, open up your site-packages directory, and look through some of the code in those modules to see how they are set up.</p> <p>There is no correct way without knowing more about the module; as Ali said, this is a vague question. You really just need to analyze what you have in front of you, and figure out what might work better.</p>
0
2009-02-09T14:30:34Z
[ "python", "design" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
528,134
<ol> <li>Provide some code. </li> <li><p>You probably call some function that should update the database, but the function does not return any data (like <code>cursor.execute()</code>). So code like this:</p> <pre><code>data = cursor.execute()</code></pre> <p>makes <code>data</code> a <code>None</code> object (of <code>NoneType</code>). But without code it's hard to point you to the exact cause of your error.</p></li> </ol>
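<p>The usual pattern - shown here with MySQLdb as an assumption, since the actual driver isn't stated, and with placeholder credentials and table names - is to call <code>execute()</code> for its side effect and fetch the rows separately:</p> <pre><code>import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user",
                       passwd="secret", db="test")    # placeholder credentials
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM some_table")     # the return value is not the result set
rows = cursor.fetchall()                               # this is what you iterate over
for row in rows:
    print row
</code></pre>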
10
2009-02-09T13:31:20Z
[ "python", "mysql" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
528,137
<p>It means that the object you are trying to iterate is actually None; maybe the query produced no results?<br /> Could you please post a code sample?</p>
6
2009-02-09T13:32:01Z
[ "python", "mysql" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
528,158
<p>The function you used to select all rows returned None. This "probably" (because you did not provide code, I am only assuming) means that the SQL query did not return any values.</p>
5
2009-02-09T13:35:36Z
[ "python", "mysql" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
528,299
<p>Try using the cursor.rowcount variable after you call cursor.execute(). (This code will not work as-is because I don't know what module you are using.)</p> <pre><code>db = mysqlmodule.connect("a connection string") curs = db.cursor() curs.execute("select top 10 * from tablename where fieldA &gt; 100") for i in range(curs.rowcount): row = curs.fetchone() print row </code></pre> <p>Alternatively, you can do this (if you know you want every result returned):</p> <pre><code>db = mysqlmodule.connect("a connection string") curs = db.cursor() curs.execute("select top 10 * from tablename where fieldA &gt; 100") results = curs.fetchall() if results: for r in results: print r </code></pre>
0
2009-02-09T14:20:03Z
[ "python", "mysql" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
528,454
<p>This error means that you are attempting to loop over a None object. This is like trying to loop over a Null array in C/C++. As Abgan, orsogufo, Dan mentioned, this is probably because the query did not return anything. I suggest that you check your query/database connection.</p> <p>A simple code fragment to reproduce this error is:</p> <pre><code>x = None
for i in x:
    pass  # do something
</code></pre>
2
2009-02-09T14:55:27Z
[ "python", "mysql" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
17,230,253
<p>This may occur when I try to let 'cursor.fetchone' execute twice. Like this:</p> <pre><code>import sqlite3 db_filename = 'test.db' with sqlite3.connect(db_filename) as conn: cursor = conn.cursor() cursor.execute(""" insert into test_table (id, username, password) values ('user_id', 'myname', 'passwd') """) cursor.execute(""" select username, password from test_table where id = 'user_id' """) if cursor.fetchone() is not None: username, password = cursor.fetchone() print username, password </code></pre> <p>I don't know much about the reason. But I modified it with try and except, like this:</p> <pre><code>import sqlite3 db_filename = 'test.db' with sqlite3.connect(db_filename) as conn: cursor = conn.cursor() cursor.execute(""" insert into test_table (id, username, password) values ('user_id', 'myname', 'passwd') """) cursor.execute(""" select username, password from test_table where id = 'user_id' """) try: username, password = cursor.fetchone() print username, password except: pass </code></pre> <p>I guess cursor.fetchone() can't be called twice like this: each call consumes a row, so when only one row matches, the second call returns None and the tuple unpacking fails.</p>
0
2013-06-21T07:54:29Z
[ "python", "mysql" ]
TypeError: 'NoneType' object is not iterable
528,116
<p>I need to process mysql data one row at a time and i have selected all rows put them in a tuple but i get the error above.</p> <p>what does this mean and how do I go about it?</p>
-2
2009-02-09T13:25:34Z
29,757,361
<p>I know it's an old question but I thought I'd add one more possibility. I was getting this error when calling a stored procedure, and adding SET NOCOUNT ON at the top of the stored procedure solved it. The issue is that earlier selects that are not the final select for the procedure make it look like you've got empty row sets.</p>
0
2015-04-20T20:06:04Z
[ "python", "mysql" ]
Is there a better way (besides COM) to remote-control Excel?
528,817
<p>I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks:</p> <p>For example, the slightest upset seems to be able to break the connection to the COM-Server, once severed there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object.</p> <p>The Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.</p> <p>Also when something does go wrong I seldom get any useful error-messages... at best I can except a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong.</p> <p>Finally, COM lacks the ability to control some of the most fundamental aspects of Excel? For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID.</p> <p><strong>So what if I were to completely abandon COM?</strong> Is there an alternative way to control Excel? </p> <p>All I want to do is run macros, open and close workbooks and read and write cell-ranges? Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM?</p>
1
2009-02-09T16:21:49Z
528,833
<p>There is no way that completely bypasses COM. You can use VSTO (Visual Studio Tools for Office), which has nice .NET wrappers on the COM objects, but it is still COM underneath. </p>
6
2009-02-09T16:25:35Z
[ "python", ".net", "excel", "com" ]
Is there a better way (besides COM) to remote-control Excel?
528,817
<p>I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks:</p> <p>For example, the slightest upset seems to be able to break the connection to the COM-Server, once severed there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object.</p> <p>The Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.</p> <p>Also when something does go wrong I seldom get any useful error-messages... at best I can except a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong.</p> <p>Finally, COM lacks the ability to control some of the most fundamental aspects of Excel? For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID.</p> <p><strong>So what if I were to completely abandon COM?</strong> Is there an alternative way to control Excel? </p> <p>All I want to do is run macros, open and close workbooks and read and write cell-ranges? Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM?</p>
1
2009-02-09T16:21:49Z
528,960
<p>It is also possible to run Excel as a server application and use it as a calculation engine. This allows non-IT users to specify business rules within Excel and call them through web services. I have not worked with this myself, but I know a coworker of mine used this once. <a href="http://msdn.microsoft.com/en-us/library/ms519100.aspx" rel="nofollow">Walkthrough: Developing a Custom Application Using Excel Web Services</a> could be a good starting point. At first glance, that page looks like it requires SharePoint, so this might not be suitable for every environment. </p>
1
2009-02-09T16:54:49Z
[ "python", ".net", "excel", "com" ]
Is there a better way (besides COM) to remote-control Excel?
528,817
<p>I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks:</p> <p>For example, the slightest upset seems to be able to break the connection to the COM-Server, once severed there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object.</p> <p>The Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.</p> <p>Also when something does go wrong I seldom get any useful error-messages... at best I can except a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong.</p> <p>Finally, COM lacks the ability to control some of the most fundamental aspects of Excel? For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID.</p> <p><strong>So what if I were to completely abandon COM?</strong> Is there an alternative way to control Excel? </p> <p>All I want to do is run macros, open and close workbooks and read and write cell-ranges? Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM?</p>
1
2009-02-09T16:21:49Z
529,039
<p>Have you looked at the <a href="http://pypi.python.org/pypi/xlrd" rel="nofollow">xlrd</a> and <a href="http://pypi.python.org/pypi/xlwt" rel="nofollow">xlwt</a> packages? I'm not in need of them any more, but I had good success with xlrd on my last project. Last I knew, they couldn't process macros, but could do basic reading and writing of spreadsheets. Also, they're platform independent (the program I wrote was targeted to run on Linux)!</p>
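<p>A minimal read/write sketch with those packages (the file names are placeholders; note that xlrd/xlwt work on .xls files directly rather than driving a running Excel, so they cannot run macros):</p> <pre><code>import xlrd, xlwt

# read a cell from an existing .xls file
book = xlrd.open_workbook("input.xls")
sheet = book.sheet_by_index(0)
print sheet.cell_value(0, 0)         # row 0, column 0

# write a new workbook
out = xlwt.Workbook()
ws = out.add_sheet("Results")
ws.write(0, 0, "hello")
out.save("output.xls")
</code></pre>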
1
2009-02-09T17:16:39Z
[ "python", ".net", "excel", "com" ]
Is there a better way (besides COM) to remote-control Excel?
528,817
<p>I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks:</p> <p>For example, the slightest upset seems to be able to break the connection to the COM-Server, once severed there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object.</p> <p>The Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.</p> <p>Also when something does go wrong I seldom get any useful error-messages... at best I can except a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong.</p> <p>Finally, COM lacks the ability to control some of the most fundamental aspects of Excel? For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID.</p> <p><strong>So what if I were to completely abandon COM?</strong> Is there an alternative way to control Excel? </p> <p>All I want to do is run macros, open and close workbooks and read and write cell-ranges? Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM?</p>
1
2009-02-09T16:21:49Z
529,100
<p>You could use Jython with the JExcelApi (<a href="http://jexcelapi.sourceforge.net/" rel="nofollow">http://jexcelapi.sourceforge.net/</a>) to control your Excel application. I've been considering implementing this solution with one of my PyQt projects, but haven't gotten around to trying it yet. I have effectively used the JExcelApi in Java applications before, but have not used Jython (though I know you can import Java classes).</p> <p>NOTE: JExcelApi is a pure-Java library that reads and writes the Excel file format directly, rather than using COM.</p>
1
2009-02-09T17:34:57Z
[ "python", ".net", "excel", "com" ]
Is there a better way (besides COM) to remote-control Excel?
528,817
<p>I'm working on a regression-testing tool that will validate a very large number of Excel spreadsheets. At the moment I control them via COM from a Python script using the latest version of the pywin32 product. Unfortunately COM seems to have a number of annoying drawbacks:</p> <p>For example, the slightest upset seems to be able to break the connection to the COM-Server, once severed there seems to be no safe way to re-connect to the Excel application. There's absolutely no safety built into the COM Application object.</p> <p>The Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.</p> <p>Also when something does go wrong I seldom get any useful error-messages... at best I can except a numerical error-code or a barely useful message such as "An Exception has occurred". It's almost impossible to know why something went wrong.</p> <p>Finally, COM lacks the ability to control some of the most fundamental aspects of Excel? For example there's no way to do a guaranteed close of just the Excel process that a COM client is connected to. You cannot even use COM to find Excel's PID.</p> <p><strong>So what if I were to completely abandon COM?</strong> Is there an alternative way to control Excel? </p> <p>All I want to do is run macros, open and close workbooks and read and write cell-ranges? Perhaps some .NET experts know a trick or two which have not yet bubbled into the Python community? What about you office-hackers? Could there be a better way to get at Excel's innards than COM?</p>
1
2009-02-09T16:21:49Z
546,465
<blockquote> <p>The Excel COM interface will not allow me to safely remote-control two seperate instances of the Excel application operating on the same workbook file, even if they are read-only.</p> </blockquote> <p>This is not a limitation of COM, this is a <a href="http://blogs.msdn.com/excel/archive/2009/01/07/why-can-t-i-open-two-files-with-the-same-name.aspx" rel="nofollow">limitation of Excel</a>. Excel will not even let you open two files with the same name at the same time if they exist in different directories. It is a fundamental limitation of the Excel program.</p> <p>To answer your other questions:</p> <p>If you check your python documentation, there should be a way to connect to an existing server if the connection is lost.</p> <p>The lack of useful error messages again may be to do with Python.</p> <blockquote> <p>You cannot even use COM to find Excel's PID.</p> </blockquote> <p>COM is an internal object model and exposes what it wishes. PIDs are available to outside processes just as they are to internal ones, so there is no real reason to expose them through a COM interface.</p>
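<p>Regarding re-connecting: with pywin32 you can usually attach to an already-running Excel instance instead of starting a new one. A sketch - with no guarantee about which instance you get if several are running:</p> <pre><code>import win32com.client
import pythoncom

def get_excel():
    try:
        # attach to an already-running Excel instance, if any
        return win32com.client.GetActiveObject("Excel.Application")
    except pythoncom.com_error:
        # otherwise start a fresh one
        return win32com.client.Dispatch("Excel.Application")

xl = get_excel()
xl.Visible = True
</code></pre> <p>GetActiveObject raises com_error when no Excel instance is registered in the running object table, which is why the fallback to Dispatch is there.</p>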
2
2009-02-13T15:57:15Z
[ "python", ".net", "excel", "com" ]
Python: Pass or Sleep for long running processes?
529,034
<p>I am writing an queue processing application which uses threads for waiting on and responding to queue messages to be delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:</p> <pre> while True: pass </pre> <p>or</p> <pre> while True: time.sleep(1) </pre> <p>Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?</p>
19
2009-02-09T17:15:42Z
529,048
<p>I've always seen/heard that using sleep is the better way to do it. Using sleep will keep your Python interpreter's CPU usage from going wild.</p>
4
2009-02-09T17:18:45Z
[ "python" ]
Python: Pass or Sleep for long running processes?
529,034
<p>I am writing an queue processing application which uses threads for waiting on and responding to queue messages to be delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:</p> <pre> while True: pass </pre> <p>or</p> <pre> while True: time.sleep(1) </pre> <p>Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?</p>
19
2009-02-09T17:15:42Z
529,052
<p>I would imagine <em>time.sleep()</em> will have less overhead on the system. Using <em>pass</em> will cause the loop to immediately re-evaluate and peg the CPU, whereas using time.sleep will allow the execution to be temporarily suspended.</p> <p><strong>EDIT</strong>: just to prove the point, if you launch the python interpreter and run this: </p> <pre><code>&gt;&gt;&gt; while True: ... pass ... </code></pre> <p>You can watch Python start eating up 90-100% CPU instantly, versus: </p> <pre><code>&gt;&gt;&gt; import time &gt;&gt;&gt; while True: ... time.sleep(1) ... </code></pre> <p>Which barely even registers on the Activity Monitor (using OS X here but it should be the same for every platform).</p>
38
2009-02-09T17:20:00Z
[ "python" ]
Python: Pass or Sleep for long running processes?
529,034
<p>I am writing an queue processing application which uses threads for waiting on and responding to queue messages to be delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:</p> <pre> while True: pass </pre> <p>or</p> <pre> while True: time.sleep(1) </pre> <p>Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?</p>
19
2009-02-09T17:15:42Z
529,073
<p>You don't give much context to what you are really doing, but maybe <a href="http://www.python.org/doc/2.5.2/lib/module-Queue.html" rel="nofollow"><code>Queue</code></a> could be used instead of an explicit busy-wait loop? If not, I would assume <code>sleep</code> would be preferable, as I believe it will consume less CPU (as others have already noted).</p> <p>[Edited according to additional information in comment below.]</p> <p>Maybe this is obvious, but anyway, what you <em>could</em> do in a case where you are reading information from blocking sockets is to have one thread read from the socket and post suitably formatted messages into a <code>Queue</code>, and then have the rest of your "worker" threads reading from that queue; the workers will then block on reading from the queue without the need for neither <code>pass</code>, nor <code>sleep</code>.</p>
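<p>A bare-bones version of that arrangement, using Python 2's <code>Queue</code> module; <code>some_socket</code> and <code>handle()</code> are placeholders for the real socket and the real processing:</p> <pre><code>import threading, Queue

messages = Queue.Queue()

def reader(sock):
    while True:
        data = sock.recv(4096)       # blocks until something arrives
        if not data:
            break
        messages.put(data)

def worker():
    while True:
        item = messages.get()        # blocks here - no pass, no sleep needed
        handle(item)                 # placeholder for the real processing

threading.Thread(target=reader, args=(some_socket,)).start()
for _ in range(4):
    threading.Thread(target=worker).start()
</code></pre>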
6
2009-02-09T17:27:08Z
[ "python" ]
Python: Pass or Sleep for long running processes?
529,034
<p>I am writing an queue processing application which uses threads for waiting on and responding to queue messages to be delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:</p> <pre> while True: pass </pre> <p>or</p> <pre> while True: time.sleep(1) </pre> <p>Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?</p>
19
2009-02-09T17:15:42Z
529,321
<p>Why sleep? You don't want to sleep, you want to wait for the threads to finish.</p> <p>So</p> <pre><code># store the threads you start in a your_threads list, then for a_thread in your_threads: a_thread.join() </code></pre> <p>See: <a href="http://www.python.org/doc/2.5.2/lib/thread-objects.html#l2h-3460">thread.join</a></p>
21
2009-02-09T18:41:46Z
[ "python" ]
Python Script: Print new line each time to shell rather than update existing line
529,395
<p>I am a noob when it comes to python. I have a python script which gives me output like this:</p> <pre><code>[last] ZVZX-W3vo9I: Downloading video webpage [last] ZVZX-W3vo9I: Extracting video information [download] Destination: myvideo.flv [download] 9.9% of 10.09M at 3.30M/s ETA 00:02 </code></pre> <p>The last line keeps getting updated with new values of progress. I want to change this. Instead of updating I want a new line to be printed each time. How can i do this? I think the part concerned is this bit:</p> <pre><code>def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" self.to_stdout(u'\r[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str), skip_eol=True) </code></pre> <p>If more code needs to be seen please let me know so that I can show you what is needed to solve this.</p> <p>Thank you very much for any help.</p>
3
2009-02-09T19:00:14Z
529,415
<p>If I understand your request properly, you should be able to change that function to this:</p> <pre><code>def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" print u'[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str) </code></pre> <p>That will print the output on a new line each time.</p>
3
2009-02-09T19:03:10Z
[ "python", "shell" ]
Python Script: Print new line each time to shell rather than update existing line
529,395
<p>I am a noob when it comes to python. I have a python script which gives me output like this:</p> <pre><code>[last] ZVZX-W3vo9I: Downloading video webpage [last] ZVZX-W3vo9I: Extracting video information [download] Destination: myvideo.flv [download] 9.9% of 10.09M at 3.30M/s ETA 00:02 </code></pre> <p>The last line keeps getting updated with new values of progress. I want to change this. Instead of updating I want a new line to be printed each time. How can i do this? I think the part concerned is this bit:</p> <pre><code>def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" self.to_stdout(u'\r[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str), skip_eol=True) </code></pre> <p>If more code needs to be seen please let me know so that I can show you what is needed to solve this.</p> <p>Thank you very much for any help.</p>
3
2009-02-09T19:00:14Z
529,417
<p>I'm thinking you may just need to change:</p> <pre><code>skip_eol=True </code></pre> <p>to:</p> <pre><code>skip_eol=False </code></pre> <p>and get rid of the "<code>\r</code>" to see what happens. I think you'll be pleasantly surprised :-)</p>
3
2009-02-09T19:03:25Z
[ "python", "shell" ]
Python Script: Print new line each time to shell rather than update existing line
529,395
<p>I am a noob when it comes to python. I have a python script which gives me output like this:</p> <pre><code>[last] ZVZX-W3vo9I: Downloading video webpage [last] ZVZX-W3vo9I: Extracting video information [download] Destination: myvideo.flv [download] 9.9% of 10.09M at 3.30M/s ETA 00:02 </code></pre> <p>The last line keeps getting updated with new values of progress. I want to change this. Instead of updating I want a new line to be printed each time. How can i do this? I think the part concerned is this bit:</p> <pre><code>def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" self.to_stdout(u'\r[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str), skip_eol=True) </code></pre> <p>If more code needs to be seen please let me know so that I can show you what is needed to solve this.</p> <p>Thank you very much for any help.</p>
3
2009-02-09T19:00:14Z
529,420
<p>You can take out the \r, which moves to the cursor back to the beginning of the line and take out the skip_eol=True probably. Perhaps:</p> <pre><code> self.to_stdout(u'[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str)) </code></pre>
0
2009-02-09T19:04:17Z
[ "python", "shell" ]
Python Script: Print new line each time to shell rather than update existing line
529,395
<p>I am a noob when it comes to python. I have a python script which gives me output like this:</p> <pre><code>[last] ZVZX-W3vo9I: Downloading video webpage [last] ZVZX-W3vo9I: Extracting video information [download] Destination: myvideo.flv [download] 9.9% of 10.09M at 3.30M/s ETA 00:02 </code></pre> <p>The last line keeps getting updated with new values of progress. I want to change this. Instead of updating I want a new line to be printed each time. How can i do this? I think the part concerned is this bit:</p> <pre><code>def report_progress(self, percent_str, data_len_str, speed_str, eta_str): """Report download progress.""" self.to_stdout(u'\r[download] %s of %s at %s ETA %s' % (percent_str, data_len_str, speed_str, eta_str), skip_eol=True) </code></pre> <p>If more code needs to be seen please let me know so that I can show you what is needed to solve this.</p> <p>Thank you very much for any help.</p>
3
2009-02-09T19:00:14Z
529,445
<p>The "update" effect is achieved by '\r'.</p> <p>Try this in a Python (2.x) shell:</p> <pre><code>print "00000000\r1111" </code></pre> <p>\r just returns the cursor to the beginning of the line.</p>
0
2009-02-09T19:08:28Z
[ "python", "shell" ]
Traverse a list in reverse order in Python
529,424
<p>So I can start from <code>len(collection)</code> and end in <code>collection[0]</code>.</p> <p>EDIT: Sorry, I forgot to mention I also want to be able to access the loop index.</p>
309
2009-02-09T19:04:58Z
529,427
<p>Use the <code>reversed()</code> built-in function:</p> <pre><code>&gt;&gt;&gt; a = ["foo", "bar", "baz"] &gt;&gt;&gt; for i in reversed(a): ... print i ... baz bar foo </code></pre> <p>To also access the original index:</p> <pre><code>&gt;&gt;&gt; for i, e in reversed(list(enumerate(a))): ... print i, e ... 2 baz 1 bar 0 foo </code></pre>
505
2009-02-09T19:05:54Z
[ "python", "loops" ]
Traverse a list in reverse order in Python
529,424
<p>So I can start from <code>len(collection)</code> and end in <code>collection[0]</code>.</p> <p>EDIT: Sorry, I forgot to mention I also want to be able to access the loop index.</p>
309
2009-02-09T19:04:58Z
529,439
<p>Use <code>list.reverse()</code> and then iterate as you normally would.</p> <p><a href="http://docs.python.org/tutorial/datastructures.html" rel="nofollow">http://docs.python.org/tutorial/datastructures.html</a></p>
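<p>A couple of details worth knowing: <code>reverse()</code> works in place and returns <code>None</code>, so copy the list first if the original order still matters; and since the question also asks for the loop index, a plain backwards <code>range()</code> does that too:</p> <pre><code>a = ["foo", "bar", "baz"]
b = a[:]             # copy first if the original order still matters
b.reverse()          # reverses in place and returns None
for item in b:
    print item

# if the original index is needed as well, a backwards range works:
for i in range(len(a) - 1, -1, -1):
    print i, a[i]
</code></pre>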
2
2009-02-09T19:07:13Z
[ "python", "loops" ]