Q: Using Python or Java, what would be the best way to create charts?

I've been searching and found jFreeChart, Python Google Chart and matplotlib. Searching here I also found CairoPlot. I've heard I might be able to use OpenOffice to do it too. Is the API easy to use, or would it be simpler to stick to one of those libraries?

I have more experience with Java, but I've read most of Dive Into Python 3 and done some mockup programs in Python for simple things. I'll probably have to spend more time doing it in Python, though I'm willing to as long as it isn't anything mind-blowing. I want to automate some tests to put into a thesis, so I'm mostly concerned with the end product.

So far I'm leaning towards matplotlib, simply because it's the only one that's had any recent updates, which leads me to assume there is more documentation thanks to continued support. I've used jFreeChart in the past for some testing, and it was OK, but I was hoping to find something better, or at least with more documentation and examples. Last time I couldn't customize the chart's appearance as I wanted (say, change the background in a line plot) due to the lack of examples and documentation.

A: I recommend matplotlib: it has high-quality backends and a wide range of plot types, it gives you full control over your plots, and Python is a very handy and easy language for automating tests, so it's practical for what you want to do. Matplotlib also has a large community that can help you, and plenty of documentation and examples. Just bear in mind that matplotlib has not been ported to Python 3.x yet; I don't know whether that matters for you.

What I absolutely don't recommend is CairoPlot: it is no longer maintained and is a toy project.

A: Google's Visualization API is fantastic, and much cleaner if you're working in a web environment, since you just output some JS with your HTML and don't have to call back and render an image.

JFreeChart also has Eastwood, a reimplementation of the Google Charts API, if you don't want to send your data to Google or need SSL. I don't think it's quite current, but it's a good subset.
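Since the asker's sticking point was customizing appearance (specifically, changing the background of a line plot), here is a minimal matplotlib sketch of exactly that. It is not from the answers above; the colors, data, and output file name are illustrative.

```python
# Minimal sketch: a line plot with customized figure and axes backgrounds,
# rendered non-interactively (suitable for automated thesis test runs).
import matplotlib
matplotlib.use("Agg")  # headless backend: render to a file, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
fig.patch.set_facecolor("lightgray")   # background of the whole figure
ax.set_facecolor("#f0f8ff")            # background of the plot area itself
ax.plot([1, 2, 3, 4], [10, 20, 15, 30], marker="o")
ax.set_title("Test results")           # illustrative title
fig.savefig("chart.png", facecolor=fig.get_facecolor())
```

The `facecolor=` argument to `savefig` is needed because, by default, saving uses its own background color rather than the figure's.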
Q: Determining the minimum of a list of n elements

I'm having some trouble developing an algorithm to determine the minimum of a list of n elements. It's not a case of finding the minimum of an array of length n; that's simple:

    min = A[0]
    for i in range(1, len(A)):
        if min > A[i]:
            min = A[i]
    print min

But my list contains objects:

    class Object:
        def __init__(self, somelist):
            self.classification = somelist[0]  # String
            self.type = somelist[1]            # String
            self.first = somelist[2]           # Integer
            self.last = somelist[3]            # Integer

For each 'classification | type' pair I have m objects, and I want to find the minimum element among the objects sharing that pair by comparing the difference between first and last. Example:

    obj1 = Object(['A', 'x', 4, 17])
    obj2 = Object(['A', 'y', 5, 20])
    obj3 = Object(['B', 'z', 10, 27])
    obj4 = Object(['B', 'z', 2, 15])
    obj5 = Object(['B', 'z', 20, 40])
    obj6 = Object(['A', 'x', 6, 10])
    obj7 = Object(['A', 'x', 2, 9])

    list = [obj1, obj2, obj3, obj4, obj5, obj6, obj7]

So I need an algorithm to determine the minimums of the list:

    A | x --> Object(['A', 'x', 6, 10])
    B | z --> Object(['B', 'z', 2, 15])
    A | y --> Object(['A', 'y', 5, 20])

Thanks!

A:

    filtered = [obj for obj in lst if obj.classification == 'A' and obj.type == 'x']
    min(filtered, key=lambda x: x.last - x.first)

Note: don't name your variable list; it shadows the built-in.

A: Here's a simple, understandable, procedural way of going about it:

    class Object:
        def __init__(self, somelist):
            self.classification = somelist[0]  # String
            self.type = somelist[1]            # String
            self.first = somelist[2]           # Integer
            self.last = somelist[3]            # Integer
        def weight(self):
            return self.last - self.first
        def __str__(self):
            return "Object(%r, %r, %r, %r)" % (
                self.classification, self.type, self.first, self.last)
        __repr__ = __str__

    obj1 = Object(['A', 'x', 4, 17])
    obj2 = Object(['A', 'y', 5, 20])
    obj3 = Object(['B', 'z', 10, 27])
    obj4 = Object(['B', 'z', 2, 15])
    obj5 = Object(['B', 'z', 20, 40])
    obj6 = Object(['A', 'x', 6, 10])
    obj7 = Object(['A', 'x', 2, 9])
    olist = [obj1, obj2, obj3, obj4, obj5, obj6, obj7]

    mindict = {}
    for o in olist:
        key = (o.classification, o.type)
        if key in mindict:
            if o.weight() >= mindict[key].weight():
                continue
        mindict[key] = o

    from pprint import pprint
    pprint(mindict)

and here's the output:

    {('A', 'x'): Object('A', 'x', 6, 10),
     ('A', 'y'): Object('A', 'y', 5, 20),
     ('B', 'z'): Object('B', 'z', 2, 15)}

Note: the __str__, __repr__ and pprint stuff is just there to get the fancy printout; it's not essential. Also, the above code runs unchanged on Python 2.2 to 2.7.

Running time: O(N), where N is the number of objects in the list. Solutions that sort the objects are O(N * log(N)) on average. Another solution appears to be O(K * N), where K <= N is the number of unique (classification, type) keys derived from the objects. Extra memory used: only O(K); other solutions appear to be O(N).

A:

    import itertools
    group_func = lambda o: (o.classification, o.type)
    map(lambda pair: (pair[0], min(pair[1], key=lambda o: o.last - o.first)),
        itertools.groupby(sorted(l, key=group_func), group_func))

group_func returns a tuple key containing the object's classification, then its type (e.g. ('A', 'x')). This is first used to sort the list l (the sorted call). We then call groupby on the sorted list, using group_func to group it into sublists: every time the key changes, we have a new sublist. Unlike SQL's GROUP BY, groupby requires the list to be pre-sorted on the same key. map takes the output of groupby: for each group it returns a tuple whose first element is pair[0], the key (e.g. ('A', 'x')), and whose second is the minimum of the group (pair[1]), as determined by the last - first key.
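The one-pass dictionary approach above can be restated in modern Python 3 as a self-contained sketch. The class and function names here (`Record`, `minima_by_group`, `span`) are hypothetical, chosen for illustration; the logic is the same: keep, per (classification, type) key, the object with the smallest last - first difference.

```python
class Record:
    """Illustrative stand-in for the question's Object class."""
    def __init__(self, classification, rtype, first, last):
        self.classification = classification
        self.type = rtype
        self.first = first
        self.last = last

    def span(self):
        # The quantity being minimized per group.
        return self.last - self.first

def minima_by_group(records):
    """Return {(classification, type): record with minimal last - first}."""
    best = {}
    for r in records:
        key = (r.classification, r.type)
        # Keep the first record for a new key, or a strictly smaller span.
        if key not in best or r.span() < best[key].span():
            best[key] = r
    return best

# The question's example data:
records = [
    Record('A', 'x', 4, 17), Record('A', 'y', 5, 20),
    Record('B', 'z', 10, 27), Record('B', 'z', 2, 15),
    Record('B', 'z', 20, 40), Record('A', 'x', 6, 10),
    Record('A', 'x', 2, 9),
]
result = minima_by_group(records)
```

This is a single O(N) pass with O(K) extra memory, matching the dictionary answer's complexity analysis.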
Q: How can I repeat a 3x9 texture in OpenGL's GLSL?

I have a texture with a 3x9 repeating section. I don't want to store the tessellated 1920x1080 image that I have for the texture; I'd much rather generate it in code so that it can be applied correctly at other resolutions. Any ideas on how I can do this? The original texture is here: http://img684.imageshack.us/img684/6282/matte1.png

I know that the texture is a non-power-of-2, so I have to do the repeating within a shader, which I do:

    uniform sampler2D tex;
    varying vec2 texCoord;
    void main() {
        gl_FragColor = texture2D(tex, mod(texCoord, vec2(3.0, 9.0)) * vec2(0.75, 0.5625));
    }

This is how I'm drawing the quad:

    glBegin(GL_QUADS)
    glColor4f(1.0, 1.0, 1.0, 1.0)
    glMultiTexCoord2f(GL_TEXTURE1, self.widgetPhysicalRect.topLeft().x(), self.widgetPhysicalRect.topLeft().y())
    glVertex2f(-1.0, 1.0)
    glMultiTexCoord2f(GL_TEXTURE1, self.widgetPhysicalRect.topRight().x(), self.widgetPhysicalRect.topRight().y())
    glVertex2f(1.0, 1.0)
    glMultiTexCoord2f(GL_TEXTURE1, self.widgetPhysicalRect.bottomRight().x(), self.widgetPhysicalRect.bottomRight().y())
    glVertex2f(1.0, -1.0)
    glMultiTexCoord2f(GL_TEXTURE1, self.widgetPhysicalRect.bottomLeft().x(), self.widgetPhysicalRect.bottomLeft().y())
    glVertex2f(-1.0, -1.0)
    glEnd()

Any ideas would be greatly appreciated. Thanks!

A: Almost like you'd do it with the fixed pipeline. Set your bound texture's wrap mode to repeat before setting the sampler uniform, and texture coordinates outside the 0-1 range will repeat the texture:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

In your case, the texture coordinates will be s = 0-640 and t = 0-120. The fragment shader doesn't need anything special; a plain texture2D(tex, texCoord) will do.
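To see why coordinates outside 0-1 tile the texture, it helps to model the arithmetic GL_REPEAT performs. This small sketch (not from the thread; the function name is illustrative) computes the same thing the shader's mod() was approximating: only the fractional part of each coordinate selects a position within the tile.

```python
import math

def repeat_wrap(coord):
    """Model GL_REPEAT for one texture coordinate: map it into [0, 1)."""
    return coord - math.floor(coord)

# For a 1920x1080 quad tiled with a 3x9 texture, s runs 0..640 and t 0..120
# (as the answer notes); each whole unit is one full repetition of the tile.
s = repeat_wrap(213.4)   # 213 full repeats, then 0.4 of the way into the next
t = repeat_wrap(-0.25)   # negative coordinates wrap around as well
```

The fragment shader needs no mod() at all once the wrap mode is GL_REPEAT, because the hardware applies exactly this mapping per sample.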
Q: How to use BeautifulSoup to extract from within an HTML paragraph?

I'm using BeautifulSoup to do some screen-scraping. My problem is this: I need to extract specific things out of a paragraph. An example:

    <p><b><a href="/name/abe">ABE</a></b> &nbsp; <font class="masc">m</font> &nbsp;
    <font class="info"><a href="/nmc/eng.php" class="usg">English</a>,
    <a href="/nmc/jew.php" class="usg">Hebrew</a></font><br />Short form of
    <a href="/name/abraham" class="nl">ABRAHAM</a>

Out of this paragraph, I'm able to extract the name ABE as follows:

    for pFound in soup.findAll('p'):
        print pFound
        # will get the names
        x = pFound.find('a').renderContents()
        print x

Now my problem is to extract the other name as well, in the same paragraph:

    Short form of <a href="/name/abraham" class="nl">ABRAHAM</a>

I need to extract this only if the a tag is preceded by the text "Short form of". Any ideas on how to do this? There are many such paragraphs in the HTML page, and not all of them have the text "Short form of"; they might contain some other text in that place. I think some combination of regex and findNext() may be useful, but I'm not familiar with BeautifulSoup and ended up wasting quite a lot of time. Any help would be appreciated. Thanks.

A: The following should work:

    htm = '''<p><b><a href="/name/abe">ABE</a></b> &nbsp; <font class="masc">m
    </font>&nbsp; <font class="info"><a href="/nmc/eng.php" class="usg">English
    </a>, <a href="/nmc/jew.php" class="usg">Hebrew</a></font><br />
    Short form of <a href="/name/abraham" class="nl">ABRAHAM</a>'''

    import BeautifulSoup

    soup = BeautifulSoup.BeautifulSoup(htm)

    for p in soup.findAll('p'):
        firsta = True
        shortf = False
        for c in p.recursiveChildGenerator():
            if isinstance(c, BeautifulSoup.NavigableString):
                if 'Short form of' in str(c):
                    shortf = True
            elif c.name == 'a':
                if firsta or shortf:
                    print c.renderContents()
                firsta = shortf = False

A: You can use pyparsing as a sort of "super-regex" for parsing through HTML. You can put together a simple matching pattern by assembling the various starting and ending tags, without tripping over the typical regex HTML-scraping pitfalls (unpredictable tag/attribute letter case, unpredictable attributes, attributes out of order, unpredictable whitespace). Then pattern.scanString returns a generator that scans through the HTML source and yields tuples of the matched tokens plus the starting and ending locations. Throw in the assignment of results names (similar to named fields in a regex), and accessing the individual fields of interest is simple.

    html = """<some leading html>
    <p><b><a href="/name/abe">ABE</a></b> &nbsp; <font class="masc">m</font> &nbsp;
    <font class="info"><a href="/nmc/eng.php" class="usg">English</a>,
    <a href="/nmc/jew.php" class="usg">Hebrew</a></font><br />Short form of
    <a href="/name/abraham" class="nl">ABRAHAM</a>
    <some trailing html>"""

    from pyparsing import makeHTMLTags, SkipTo, Optional

    pTag, pEnd = makeHTMLTags("P")
    bTag, bEnd = makeHTMLTags("B")
    aTag, aEnd = makeHTMLTags("A")
    fontTag, fontEnd = makeHTMLTags("FONT")
    brTag = makeHTMLTags("BR")[0]
    nbsp = "&nbsp;"

    nickEntry = (pTag + bTag + aTag + SkipTo(aEnd)("nickname") + aEnd + bEnd +
                 Optional(nbsp) + fontTag + SkipTo(fontEnd) + fontEnd +
                 Optional(nbsp) + fontTag + aTag + SkipTo(aEnd) + aEnd + "," +
                 aTag + SkipTo(aEnd) + aEnd + fontEnd + brTag +
                 "Short form of" + aTag + SkipTo(aEnd)("fullname") + aEnd)

    for match, _, _ in nickEntry.scanString(html):
        print match.nickname, "->", match.fullname

prints:

    ABE -> ABRAHAM
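For the narrow condition the asker states ("extract the name only if the a tag is preceded by 'Short form of'"), a standard-library regular expression is a third option not shown in the thread. This sketch is a simplification: it assumes the phrase and the link sit close together in the source, which holds for the sample but is fragile against arbitrary HTML.

```python
import re

# The question's sample paragraph.
htm = '''<p><b><a href="/name/abe">ABE</a></b> &nbsp; <font class="masc">m</font>
&nbsp; <font class="info"><a href="/nmc/eng.php" class="usg">English</a>,
<a href="/nmc/jew.php" class="usg">Hebrew</a></font><br />
Short form of <a href="/name/abraham" class="nl">ABRAHAM</a>'''

# Match "Short form of", optional whitespace, then an <a ...>NAME</a>;
# the capture group grabs the link text.
pattern = re.compile(r'Short form of\s*<a\b[^>]*>([^<]+)</a>')

full_names = pattern.findall(htm)
```

Paragraphs that carry some other text in place of "Short form of" simply produce no match, which is the filtering behavior the question asks for.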
Q: In Python, when using select.select on socket objects, how should I handle sockets that end up on the error list?

    read, write, error = select.select(sockets, sockets, sockets, 60.0)

What is recommended if something ends up in the error list?

A: On the operating systems I know, there's nothing you can do with sockets suffering "exceptional conditions" except try to close them (which may itself raise an exception, so be sure to wrap the attempt in a try/except). You know that the connections those sockets stood for have terminated abnormally, and you may want to write some log information about that, show the problem to the user, or the like. In some situations it may be appropriate to try to establish those connections again (this may of course fail, depending on what exceptional condition was encountered, so be prepared for that).
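The answer's advice (close each errored socket inside a try/except, then drop it from your bookkeeping) can be sketched as a small helper. The function name and the `sockets` list are illustrative assumptions, not from the question's code.

```python
import select
import socket

def handle_errored(errored, sockets):
    """Close sockets flagged by select()'s error list; return how many."""
    closed = 0
    for s in errored:
        try:
            s.close()
        except OSError:
            pass  # close() itself may fail on a broken connection
        if s in sockets:
            sockets.remove(s)  # stop selecting on a dead socket
        closed += 1
    return closed

# Usage sketch with a connected local pair (normally error comes back empty):
a, b = socket.socketpair()
sockets = [a, b]
read, write, error = select.select(sockets, sockets, sockets, 0)
n = handle_errored(error, sockets)
a.close()
b.close()
```

Removing the socket from the list you pass to select() matters: continuing to select on a closed descriptor raises an error on the next call.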
Q: I suspect I have multiple versions of Python 2.6 installed on Mac OS X 10.6.3; how do I set which one Terminal should launch?

When I enter python in Terminal, it loads Python 2.6.2. However, there are folders named Python 2.6 in different places on my drive. I'm not sure whether that's because Python 2.6 has been installed in different places or because Python just likes to have lots of folders in different places. If there are multiple installations, I could really do with being able to set which one should be used.

A: When you run python in a shell or command prompt, it executes the first matching executable found on your PATH environment variable. To find out which file is being executed, use which python (or where python on Windows).

A: Don't make it complicated. In your ~/.bash_aliases, put the following (assuming you are using bash):

    alias py26="/usr/bin/python-2.6.1"
    alias py30="/usr/bin/python-3.0.0"

Of course, I just made up those paths; put in whatever is correct for your system. If the ~/.bash_aliases file does not exist, create it. To use it, just type py26 at the command line and the appropriate interpreter starts up.

A: From the OS X Python man page (man python):

    CHANGING THE DEFAULT PYTHON
        Using

            % defaults write com.apple.versioner.python Version 2.5

        will make version 2.5 the user default when running both the python and
        pythonw commands (versioner is the internal name of the version-selection
        software used). To set a system-wide default, replace
        `com.apple.versioner.python' with
        `/Library/Preferences/com.apple.versioner.python' (admin privileges will
        be required).

        The environment variable VERSIONER_PYTHON_VERSION can also be used to set
        the python and pythonw version:

            % export VERSIONER_PYTHON_VERSION=2.5   # Bourne-like shells
        or
            % setenv VERSIONER_PYTHON_VERSION 2.5   # C-like shells
            % python ...

        This environment variable takes precedence over the preference file
        settings.
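A quick way to confirm which installation a given python command actually launches is to ask the interpreter itself. This stdlib-only sketch (not from the answers above) can be run under each suspect command to see exactly which binary and version you got.

```python
import sys

print(sys.executable)   # full path of the interpreter binary actually running
print(sys.version)      # its version string
print(sys.prefix)       # root directory of that interpreter's installation
```

Running this under `python`, then under each explicit path from `which -a python`, makes it obvious whether the folders on disk belong to separate installations or to one installation's support directories.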
Q: Google App Engine: UnicodeDecodeError in bulk data upload

I'm getting an odd error with Google App Engine devserver 1.3.5 and Python 2.5.4 on Windows. A sample row in the CSV:

    EQS,550,foobar,"<some><html><garbage /></html></some>",odp,Ti4=,http://url.com,success

The error:

    [ERROR   ] [Thread-1] WorkerThread:
    Traceback (most recent call last):
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\adaptive_thread_pool.py", line 150, in WorkOnItems
        status, instruction = item.PerformWork(self.__thread_pool)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 695, in PerformWork
        transfer_time = self._TransferItem(thread_pool)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 852, in _TransferItem
        self.request_manager.PostEntities(self.content)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1296, in PostEntities
        datastore.Put(entities)
      File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore.py", line 282, in Put
        req.entity_list().extend([e._ToPb() for e in entities])
      File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore.py", line 687, in _ToPb
        properties = datastore_types.ToPropertyPb(name, values)
      File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_types.py", line 1499, in ToPropertyPb
        pbvalue = pack_prop(name, v, pb.mutable_value())
      File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_types.py", line 1322, in PackString
        pbvalue.set_stringvalue(unicode(value).encode('utf-8'))
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 36: ordinal not in range(128)
    [INFO    ] Unexpected thread death: Thread-1
    [INFO    ] An error occurred. Shutting down...
    [ERROR   ] Error in Thread-1: 'ascii' codec can't decode byte 0xe8 in position 36: ordinal not in range(128)

Is the error being generated by an issue with a base64 string, of which there is one in every row?

    KGxwMAoobHAxCihTJ0JJT0VFJwpwMgpJMjYxMAp0cDMKYWEu
    KGxwMAoobHAxCihTJ01BVEgnCnAyCkkyOTQwCnRwMwphYS4=

The data loader:

    class CourseLoader(bulkloader.Loader):
        def __init__(self):
            bulkloader.Loader.__init__(self, 'Course',
                                       [('dept_code', str),
                                        ('number', int),
                                        ('title', str),
                                        ('full_description', str),
                                        ('unparsed_pre_reqs', str),
                                        ('pickled_pre_reqs', lambda x: base64.b64decode(x)),
                                        ('course_catalog_url', str),
                                        ('parse_succeeded', lambda x: x == 'success')
                                       ])

    loaders = [CourseLoader]

Is there a way to tell from the traceback which row caused the error?

UPDATE: It looks like there are two characters causing errors: è and ®. How can I get Google App Engine to handle them?

A: It looks like some row of the CSV has non-ASCII data (maybe a LATIN SMALL LETTER E WITH GRAVE; that's what 0xe8 would be in ISO-8859-1, for example), and yet you're mapping it to str (it should be unicode, and I believe the CSV should be in UTF-8). To find whether any row of a text file has non-ASCII data, a simple Python snippet will help, e.g.:

    >>> f = open('thefile.csv')
    >>> prob = []
    >>> for i, line in enumerate(f):
    ...     try: unicode(line)
    ...     except: prob.append(i)
    ...
    >>> print 'Problems in %d lines:' % len(prob)
    >>> print prob
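A Python 3 companion to the answer's scanning snippet (the function name and sample file are illustrative): read the file in binary so no decoding can fail, and report which lines contain bytes outside the ASCII range, so the offending CSV rows can be located.

```python
def non_ascii_lines(path):
    """Return 0-based indices of lines that contain non-ASCII bytes."""
    problems = []
    with open(path, 'rb') as f:   # binary mode: no decoding, no exceptions
        for i, line in enumerate(f):
            if any(b > 127 for b in line):
                problems.append(i)
    return problems

# Build a small sample file to scan; line 1 carries the 0xe8 byte from
# the traceback (LATIN SMALL LETTER E WITH GRAVE in ISO-8859-1).
with open('sample.csv', 'wb') as f:
    f.write(b'EQS,550,plain ascii row\n')
    f.write(b'EQS,551,caf\xe8 description\n')

bad = non_ascii_lines('sample.csv')
```

Once the bad rows are found, re-encoding the CSV as UTF-8 and mapping those columns to unicode (as the answer suggests) lets characters like è and ® pass through the bulkloader cleanly.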
Q: Google App Engine: "Cannot create a file when that file already exists" I'm running the Google App Engine devserver 1.3.3 on Windows 7. Usually, this method works fine, but this time it gave an error: def _deleteType(type): results = type.all().fetch(1000) while results: db.delete(results) results = type.all().fetch(1000) The error: File "src\modelutils.py", line 38, in _deleteType db.delete(results) File "C:\Program Files\Google\google_appengine\google\appengine\ext\db\__init__.py", line 1302, in delete datastore.Delete(keys, rpc=rpc) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore.py", line 386, in Delete 'datastore_v3', 'Delete', req, datastore_pb.DeleteResponse(), rpc) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore.py", line 186, in _MakeSyncCall rpc.check_success() File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 474, in check_success self.__rpc.CheckSuccess() File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 149, in _WaitImpl self.request, self.response) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 667, in MakeSyncCall response) File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub.py", line 80, in MakeSyncCall method(request, response) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 775, in _Dynamic_Delete self.__WriteDatastore() File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 610, in __WriteDatastore self.__WritePickled(encoded, self.__datastore_file) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 656, in __WritePickled os.rename(tmpfile.name, filename) WindowsError: [Error 183] Cannot create a file when that file already exists What am I doing wrong? 
How could this have failed this time, but usually it doesn't? UPDATE I restarted the devserver, and when it came back online, the datastore was empty. A: Unfortunately, 1.3.3 is too far back for me to look at its sources and try to diagnose your problem precisely - the SDK has no 1.3.3 release tag and I can't guess which revision of the datastore_filestub.py was in 1.3.3. Can you upgrade to the current version, 1.3.5, and try again? Running old versions (especially 2+ versions back) is not recommended since they'll be possibly a little out of sync with what's actually available on Google's actual servers, anyway (and/or have bugs that are fixed in later versions). Anyway... On Windows, os.rename doesn't work if the destination exists -- but the revisions I see are careful to catch the OSError that results (WindowsError derives from it), remove the existing file, and try renaming again. So I don't know what could explain your bug -- if the sources of the SDK you're running have that careful arrangement, and I think they do. Plus, I'd recommend to --use_sqlite (see Nick Johnson's blog announcing it here) in lieu of the file-stub for your SDK datastore - it just seems to make more sense!-) A: (disclaimer: i'm not answering your question but helping you optimize the code you're running) your code seems to be massively deleting objects. in the SDK/dev server, you can accomplish wiping out the datastore using this command as a quicker and more convenient alternative: $ dev_appserver.py -c helloworld now, that is, if you want to wipe your entire SDK datastore. if not, then of course, don't use it. :-) more importantly, you can make your code run faster and use less CPU on production if you change your query to be something like: results = type.all(keys_only=True).fetch(SIZE) this works the same as your's except it only fetches the keys as you don't need full entities retrieved from the datastore in order to delete them. 
also, your code is currently setting SIZE=1000, but you can make it larger than that, esp. if you have an idea of how many entities you have in your system... the 1000-result limit was lifted in 1.3.1: http://bit.ly/ahoLQp

one minor nit... try not to use type as a variable name... that's one of the most important objects and built-in/factory functions in Python. your code may act odd if you do this -- in your case, it's only fractionally better since you're inside a function/method, but that's not going to be true as a global variable. hope this helps!
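The fetch-and-delete loop with keys-only batches can be sketched in plain Python; the in-memory dict below is a hypothetical stand-in for the datastore and the db API, since the App Engine SDK isn't assumed here:

```python
BATCH_SIZE = 500  # a moderate batch size; tune to your entity count

# Hypothetical in-memory stand-in for the datastore: 1234 entities by key.
datastore = {key: object() for key in range(1234)}

def fetch_keys(limit):
    """Mimic Model.all(keys_only=True).fetch(limit): return up to `limit` keys."""
    return list(datastore)[:limit]

def delete(keys):
    """Mimic db.delete(keys): remove each keyed entity."""
    for key in keys:
        del datastore[key]

# The loop from the answer: fetch a batch of keys, delete, repeat until empty.
keys = fetch_keys(BATCH_SIZE)
while keys:
    delete(keys)
    keys = fetch_keys(BATCH_SIZE)

print(len(datastore))  # 0
```

The same shape maps directly onto the real SDK calls; only fetch_keys and delete change.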
Google App Engine: "Cannot create a file when that file already exists"
I'm running the Google App Engine devserver 1.3.3 on Windows 7. Usually, this method works fine, but this time it gave an error: def _deleteType(type): results = type.all().fetch(1000) while results: db.delete(results) results = type.all().fetch(1000) The error: File "src\modelutils.py", line 38, in _deleteType db.delete(results) File "C:\Program Files\Google\google_appengine\google\appengine\ext\db\__init__.py", line 1302, in delete datastore.Delete(keys, rpc=rpc) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore.py", line 386, in Delete 'datastore_v3', 'Delete', req, datastore_pb.DeleteResponse(), rpc) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore.py", line 186, in _MakeSyncCall rpc.check_success() File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 474, in check_success self.__rpc.CheckSuccess() File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 149, in _WaitImpl self.request, self.response) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 667, in MakeSyncCall response) File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub.py", line 80, in MakeSyncCall method(request, response) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 775, in _Dynamic_Delete self.__WriteDatastore() File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 610, in __WriteDatastore self.__WritePickled(encoded, self.__datastore_file) File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 656, in __WritePickled os.rename(tmpfile.name, filename) WindowsError: [Error 183] Cannot create a file when that file already exists What am I doing wrong? How could this have failed this time, but usually it doesn't? 
UPDATE I restarted the devserver, and when it came back online, the datastore was empty.
[ "Unfortunately, 1.3.3 is too far back for me to look at its sources and try to diagnose your problem precisely - the SDK has no 1.3.3 release tag and I can't guess which revision of the datastore_filestub.py was in 1.3.3. Can you upgrade to the current version, 1.3.5, and try again? Running old versions (especial...
[ 3, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0003162980_google_app_engine_python.txt
Q: Google application engine, maximum number of static files? I am developing an application in google application engine which would have a user profiles kind of feature. I was going through the Google App's online tutorial where I found that the maximum number of static files (app files and static files) should not exceed 3000. I am afraid that users won't be able to upload their images when the number of users increases. Is this limitation for the Free Quota only, or does it apply even after billing? In the document, it's mentioned as an additional limit beyond the Free Quota. Please suggest. Thanks in advance. A: Welcome to Stack Overflow! One of the limitations in App Engine is that you cannot write directly to the filesystem from your app. Static files would be things like HTML, CSS, javascript and images that are global to your application, and get uploaded manually when you deploy. They are uploaded to and served from different servers than the ones that handle dynamic content. Since you can't write to the filesystem from your app, files uploaded by users must be saved to the datastore as blobs. These are not considered static files. As others have mentioned, you can use S3 or the Blobstore API, however both of these require billing. With the free quotas, each entity can be up to 1MB, and each HTTP request and response can be up to 10MB. Using standard entities with a BlobProperty, you can easily store and serve dynamically uploaded files up to 1MB, or 10MB if you want to get fancy and store your blob in slices across multiple entities. A: There is a new service called the BlobStore that lets you store binary data in the database. Also you may want to look into Amazon S3 as storage for the data. If users are uploading images they cannot be stored as static files. Static files are files included in your GAE project like html and png/jpg/gif files. 
A: As others have mentioned, for more dynamic content such as user-uploaded files, those should go into the datastore as blobs, or if they're larger, as Blobstore objects (max size 2GB). 3000 static files is somewhat reasonable unless you have a lot of static assets (such as images, HTML, CSS, and JS files). For Python source, however, you have another workaround, and that is to throw all your .py files into a single ZIP so they don't hit that count so badly. Here's an article that describes how to do this: Using Django 1.0 on App Engine with Zipimport. Just be aware that this article talks about how to bundle Django's source with App Engine; however, that's unnecessary unless you're doing 1.3 or are using a fork. App Engine systems already have 0.96 or 1.2.5 available to you for free.

UPDATE (Mar 2011): In SDK 1.4.3, the App Engine team has released the Files API which allows you to read/write files/data programmatically with Blobstore. This applies to both Python and Java. More info can be found in the corresponding blogpost. In addition to Blobstore, the public roadmap shows a future feature integrating in Google Storage access.

UPDATE (Sep 2011): In SDK 1.5.4, the App Engine team removed the Blobstore filesize limitation from 2GB to allow files of unlimited size. You pay per GB of storage however.

UPDATE (Oct 2011): In SDK 1.5.5, the App Engine team extended the maximum number of files from 3000 to 10000, which is a great boost for users. Furthermore, the maximum individual filesize was bumped up from 10MB to 32MB. Another storage related improvement is that users can now write to Google's Cloud Storage directly from their App Engine app.
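The zip-bundling workaround relies on Python's built-in zipimport support: any zip archive placed on sys.path becomes importable. A minimal sketch (the module name greet is hypothetical):

```python
import os
import sys
import tempfile
import zipfile

# Build a zip containing one tiny module (stand-in for your bundled .py files).
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "lib.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("greet.py", "def hello():\n    return 'hello from zip'\n")

# Any zip on sys.path is importable thanks to the zipimport machinery.
sys.path.insert(0, zip_path)
import greet

print(greet.hello())  # hello from zip
```

On App Engine the archive would ship with your deployment and count as a single file toward the limit.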
Google application engine, maximum number of static files?
I am developing an application in google application engine which would have a user profiles kind of feature. I was going through the Google App's online tutorial where I found that the maximum number of static files (app files and static files) should not exceed 3000. I am afraid whether the user's would be able to upload their images when the number of users increase. Is this limitation for the Free Quota only or its even after billing. In the document, its mentioned as the additional limit than the Free Quota. Please suggest. Thanks in advance.
[ "Welcome to Stack Overflow!\nOne of the limitations in App Engine is that you cannot write directly to the filesystem from your app. Static files would be things like HTML, CSS, javascript and images that are global to your application, and get uploaded manually when you deploy. They are uploaded to and served from...
[ 5, 2, 0 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0003165753_django_google_app_engine_python.txt
Q: Cross-platform Audio Playback in Python Is there a cross-platform Python library for audio playback available? The operating systems I am targeting are (in order of importance) Windows, Linux, and Mac OSX. The file formats which need to be supported are (in order of importance) MP3, OGG, WAV, and FLAC. Does something like this exist? I have tried a few of the Python libraries available such as Snack, PyMedia, PyGame, etc. I couldn't get PyMedia to compile, Snack wouldn't play audio, and PyGame wouldn't play audio either. I'm on Linux: Ubuntu 9.10. A: gstreamer is multiplatform. It runs on Linux, PPC, ARM, Solaris on x86 and SPARC, MacOSX, Microsoft Windows, IBM OS/400 and Symbian OS. A: It's probably overkill for what you want, but I've had good experience with the PyAudiere library. I've had it working on Windows and Linux without trouble, but I haven't tested it on OSX yet. A: The music page at the Python wiki lists many possibilities -- indeed it's intended to be exhaustive (you can edit it to add something that it's missing;-). I don't have direct experience with the vast majority of these tools and library, but at least from the list it seems that many claim to support at least MP3 and OGG (fewer explicitly mention WAV or FLAC;-).
Cross-platform Audio Playback in Python
Is there a cross-platform Python library for audio playback available? The operating systems I am targeting are (in order of importance) Windows, Linux, and Mac OSX. The file formats which need to be supported are (in order of importance) MP3, OGG, WAV, and FLAC. Does something like this exist? I have tried a few of the Python libraries available such as Snack, PyMedia, PyGame, etc. I couldn't get PyMedia to compile, Snack wouldn't play audio, and PyGame wouldn't play audio either. I'm on Linux: Ubuntu 9.10.
[ "gstreamer is multiplatform. It runs on Linux, PPC, ARM, Solaris on x86 and SPARC, MacOSX, Microsoft Windows, IBM OS/400 and Symbian OS.\n", "It's probably overkill for what you want, but I've had good experience with the PyAudiere library. I've had it working on Windows and Linux without trouble, but I haven't ...
[ 2, 1, 1 ]
[]
[]
[ "audio", "linux", "macos", "python", "windows" ]
stackoverflow_0003169666_audio_linux_macos_python_windows.txt
Q: Python library for experimenting with compiler optimizations I want to learn about compilers and some optimization techniques, and I thought it would be helpful to do some quick implementations of the algorithms. Is there a library/framework for Python that can make things easier (like the Natural Language Toolkit) - generating the parse tree, manipulating loops, methods? I saw that Microsoft Research has a library named Phoenix, but it's intended for C++ and I would like to avoid writing prototypes in C++, it's too much work. Thanks in advance! A: As far as I know, there is no Python module to do what you want. But you can create structures by yourself in Python, or use PyPy and write your compiler with JIT enabled features in RPython. If you really want to test some algorithms, I highly recommend you to use LLVM; it is in C++, but it is currently the state-of-the-art platform for experimenting with the kinds of things you want to do. LLVM has a lot of optimizations (where you can learn a lot) and a nice tutorial on how you can implement your own; its API is very simple and clean. There are bindings for Python too if you want, but only for LLVM 2.6. Give LLVM a try, it's worth the time, and you'll learn a lot with tutorials like this.
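As a concrete starting point for the kind of quick prototyping the question describes, Python's own ast module is enough to experiment with a classic optimization. Here is a minimal constant-folding pass, a toy sketch rather than a production transform (assumes Python 3.9+ for ast.unparse):

```python
import ast
import operator

# Map AST operator node types to their Python semantics.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class ConstantFolder(ast.NodeTransformer):
    """Toy constant-folding pass: evaluate BinOps whose operands are literals."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, so folding is bottom-up
        op = OPS.get(type(node.op))
        if (op and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            folded = ast.Constant(op(node.left.value, node.right.value))
            return ast.copy_location(folded, node)
        return node

tree = ast.parse("x = 2 * (3 + 4)")
tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
result = ast.unparse(tree)
print(result)  # x = 14
```

The same NodeTransformer pattern extends to dead-code elimination, strength reduction, and similar tree-level rewrites.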
Python library for experimenting with compiler optimizations
I want to learn about compilers and some optimization techniques, and I thought it would be helpful to do some quick implementations of the algorithms. Is there a library/framework for Python that can make things easier (like the Natural Language Toolkit) - generating the parse tree, manipulating loops, methods? I saw that Microsoft Research has a library named Phoenix, but it's intended for C++ and I would like to avoid writing prototypes in C++, it's too much work. Thanks in advance!
[ "As far as I know, there is no Python module to do what you want. But you can create structures by yourself in Python, or use PyPy and write your compiler with JIT enabled features in RPython.\nIf you really want to test some algorithms, I highly recommend you to use LLVM, it is in C++, but is the currently state-o...
[ 4 ]
[]
[]
[ "compiler_construction", "optimization", "python" ]
stackoverflow_0003171538_compiler_construction_optimization_python.txt
Q: How can I make a T9 style on-screen keyboard for Windows? Sometimes at night, I like to watch movies in bed, or TV shows online. This is convenient since my computer is right beside my desk, so I just spin one of my monitors around, disable my other screen and pull my mouse over. My keyboard doesn't quite reach without re-routing the cable in a way that doesn't work when I move back to my desk the next day. Sometimes while I'm watching movies, my friends try to talk to me, and I would like to be able to talk back without jumping up, spinning the monitor around, moving the mouse back and sitting in the chair again. What I would like to do is make an on-screen keyboard to be used with the mouse -- but in a T9 phone-keypad style to (hopefully) minimize the number of clicks and amount of moving the mouse around, missing targets. I'd like to do this in Python since I'm already familiar with the language, but I'm not sure where to start. One thing I'm not sure of, is how to click the on-screen keyboard without stealing focus from the chat window. Can this be accomplished? Or can the application remember the last focused control in the last focused window and send keystrokes to it? Also, would I need an external library to do any of this window management and keystroke sending? Help is greatly appreciated, and if such a thing already exists (in any language), pointing me towards it would also be greatly appreciated. I'll definitely open source it and post a link to the project here if and when I develop it, in case anyone else would find this sort of thing useful :)

A: About 12 years ago, I wrote a program for Windows that sat in the tray and would send keystrokes to certain windows when they gained focus. I no longer have the code, and I've forgotten all the details. Still, the process will work something like this. For your GUI, if using Python, you probably want to use PyQT or wxPython. 
Both libraries make it easy to write GUI apps (though you can use the Windows APIs directly). If it were me, though, I would defer the GUI programming and use PythonWin. Use its GUI tools (lots of examples in the source) to create a simple dialog (also a Window) to do the event handling. There are probably a couple of approaches for your application to select a target window. The virtual keyboard window will probably have to steal focus (to receive mouse events), but it will then need to know to which window to send the keystrokes. You can have a drop-down control in the dialog that allows you to select a target window (you can easily grab the title of each window for target selection), or When your window gains focus (there's an event you can trap, something like WM_FOCUS), you can either query for the last window that had focus or you can keep tabs on which windows have focus and use the last one you noticed. In either case, once you have a handle to the target window, you can use SendMessage to send keystrokes to the target window. I suggest at first just relaying regular keystrokes, and worry later about capturing mouse clicks. Edit I was able to cobble this together for sending keystrokes to another window. 
import win32ui
import win32con
import time
from ctypes import *

PUL = POINTER(c_ulong)

class KeyBdInput(Structure):
    _fields_ = [("wVk", c_ushort),
                ("wScan", c_ushort),
                ("dwFlags", c_ulong),
                ("time", c_ulong),
                ("dwExtraInfo", PUL)]

class HardwareInput(Structure):
    _fields_ = [("uMsg", c_ulong),
                ("wParamL", c_short),
                ("wParamH", c_ushort)]

class MouseInput(Structure):
    _fields_ = [("dx", c_long),
                ("dy", c_long),
                ("mouseData", c_ulong),
                ("dwFlags", c_ulong),
                ("time", c_ulong),
                ("dwExtraInfo", PUL)]

class Input_I(Union):
    _fields_ = [("ki", KeyBdInput),
                ("mi", MouseInput),
                ("hi", HardwareInput)]

class Input(Structure):
    _fields_ = [("type", c_ulong),
                ("ii", Input_I)]

def send_char(char):
    FInputs = Input * 1
    extra = c_ulong(0)
    ii_ = Input_I()
    KEYEVENTF_UNICODE = 0x4
    ii_.ki = KeyBdInput(0, ord(char), KEYEVENTF_UNICODE, 0, pointer(extra))
    x = FInputs((1, ii_))
    windll.user32.SendInput(1, pointer(x), sizeof(x[0]))

if __name__ == '__main__':
    wnd = win32ui.FindWindow(None, '* Untitled - Notepad2 (Administrator)')
    type_this = 'jaraco'
    wnd.SetFocus()
    wnd.SetForegroundWindow()
    for char in type_this:
        send_char(char)

I found that the PostMessage technique did not work very well (I could not get it to work at all for me). I also found this article on identifying the last active window.
How can I make a T9 style on-screen keyboard for Windows?
Sometimes at night, I like to watch movies in bed, or TV shows online. This is convenient since my computer is right beside my desk, so I just spin one of my monitors around, disable my other screen and pull my mouse over. My keyboard doesn't quite reach without re-routing the cable in a way that doesn't work when I move back to my desk the next day. Sometimes while I'm watching movies, my friends try to talk to me, and I would like to be able to talk back without jumping up, spinning the monitor around, moving the mouse back and sitting in the chair again. What I would like to do is make an on-screen keyboard to be used with the mouse -- but in a T9 phone-keypad style to (hopefully) minimize the number of clicks and amount of moving the mouse around, missing targets. I'd like to do this in Python since I'm already familiar with the language, but I'm not sure where to start. One thing I'm not sure of, is how to click the on-screen keyboard without stealing focus from the chat window. Can this be accomplished? Or can the application remember the last focused control in the last focused window and send keystrokes to it? Also, would I need an external library to do any of this window management and keystroke sending? Help is greatly appreciated, and if such a thing already exists (in any language), pointing me towards it would also be greatly appreciated. I'll definitely open source it and post a link to the project here if and when I develop it, incase anyone else would find this sort of thing useful :)
[ "About 12 years ago, I wrote a program for Windows that sat in the tray and would send keystrokes to certain windows when they gained focus. I no longer have the code, and I've forgotten all the details.\nStill, the process will work something like this.\nFor your GUI, if using Python, you probably want to use PyQT...
[ 3 ]
[]
[]
[ "python", "soft_keyboard", "windows", "windows_7" ]
stackoverflow_0003171045_python_soft_keyboard_windows_windows_7.txt
Q: Passing strings in value field in pyscopg2 Sorry this is a very newbie question. When I'm trying to pass a tuple into an insert statement the quotations seem to disappear.

line = [0, 1, 3000248, 'G', 'T', 102, 102, 60, 25]
SNPinfo = tuple(line)
curs.execute("""INSERT INTO akr (code, chrID, chrLOC, refBase, conBase, \
consqual, SNPqual, maxMapqual, numbReadBases) \
VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)""", SNPinfo)

The Error I get is:

LINE 1: ...axMapqual, numbReadBases) VALUES (0,1,3000248,G,T,102,10..
psycopg2.ProgrammingError: column "g" does not exist

I think my insert statement is wrong somewhere. A: You are missing the single quotes around the varchars on your string formatting:

curs.execute("""INSERT INTO akr (code, chrID, chrLOC, refBase, conBase, \
consqual, SNPqual, maxMapqual, numbReadBases) \
VALUES (%s,%s,%s,'%s','%s',%s,%s,%s,%s)""", SNPinfo)

This would produce:

INSERT INTO akr (code, chrID, chrLOC, refBase, conBase, consqual, SNPqual, maxMapqual, numbReadBases) VALUES (0,1,3000248,'G','T',102,102,60,25)
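As a general note on DB-API parameter binding: when the values are passed as execute's second argument (rather than interpolated with %), the driver quotes string parameters itself. The sketch below uses the stdlib sqlite3 module (qmark placeholders) as a stand-in, since a live Postgres server isn't assumed here; psycopg2 follows the same convention with plain %s placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE akr (code INTEGER, refBase TEXT, conBase TEXT)")

# qmark placeholders; the driver quotes the strings 'G' and 'T' for us.
row = (0, 'G', 'T')
conn.execute("INSERT INTO akr (code, refBase, conBase) VALUES (?, ?, ?)", row)

print(conn.execute("SELECT refBase, conBase FROM akr").fetchone())  # ('G', 'T')
```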
Passing strings in value field in pyscopg2
Sorry this is a very newbie question. When I'm trying to pass a tuple into an insert statement the quotations seem to disappear. line=[0, 1, 3000248, 'G', 'T', 102, 102, 60, 25] SNPinfo = tuple(line) curs.execute("""INSERT INTO akr (code, chrID, chrLOC, refBase, conBase, \ consqual, SNPqual, maxMapqual, numbReadBases) \ VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)""", SNPinfo) The Error I get is: LINE 1: ...axMapqual, numbReadBases) VALUES (0,1,3000248,G,T,102,10.. psycopg2.ProgrammingError: column "g" does not exist I think my insert statement is wrong somewhere.
[ "You are missing the single quotes around the varchars on your string formatting:\ncurs.execute(\"\"\"INSERT INTO akr (code, chrID, chrLOC, refBase, conBase, \\\nconsqual, SNPqual, maxMapqual, numbReadBases) \\\nVALUES (%s,%s,%s,'%s','%s',%s,%s,%s,%s)\"\"\", SNPinfo) \n\nThis would produce:\nINSERT INTO akr (code, ...
[ 0 ]
[]
[]
[ "psycopg2", "python" ]
stackoverflow_0003170106_psycopg2_python.txt
Q: Python: how do you remember the order of `super`'s arguments? As the title says, how do you remember the order of super's arguments? Is there a mnemonic somewhere I've missed? After years of Python programming, I still have to look it up :( (for the record, it's super(Type, self))

A: Inheritance makes me think of a classification hierarchy. And the order of the arguments to super is hierarchical: first the class, then the instance. Another idea, inspired by the answer from ~unutbu:

class Fubb(object):
    def __init__(self, *args, **kw):
        # Crap, I can't remember how super() goes!?
        # Steps in building up a correct super() call.
        __init__(self, *args, **kw)                # Copy the original method signature.
        super(Fubb).__init__(self, *args, **kw)    # Add super(Type).
        super(Fubb, self).__init__(*args, **kw)    # Move 'self', but preserve order.

A: Simply remember that the self is optional - super(Type) gives access to unbound superclass methods - and optional arguments always come last. A: I don't. In Python 3 we can just write super().method(params) A: Typically, super is used inside of a class definition. There, (again typically), the first argument to super should always be the name of the class.

class Foo(object):
    def __init__(self, *args, **kw):
        super(Foo, self).__init__(*args, **kw)
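The argument-free Python 3 form mentioned in one of the answers can be exercised with a hypothetical class pair:

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # Python 3: no arguments needed; the compiler supplies the
        # enclosing class and the instance, so there is no order to remember.
        return "child of " + super().greet()

print(Child().greet())  # child of base
```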
Python: how do you remember the order of `super`'s arguments?
As the title says, how do you remember the order of super's arguments? Is there a mnemonic somewhere I've missed? After years of Python programming, I still have to look it up :( (for the record, it's super(Type, self))
[ "Inheritance makes me think of a classification hierarchy. And the order of the arguments to super is hierarchical: first the class, then the instance.\nAnother idea, inspired by the answer from ~unutbu:\nclass Fubb(object):\n def __init__(self, *args, **kw):\n # Crap, I can't remember how super() goes!?\...
[ 11, 10, 5, 2 ]
[]
[]
[ "python", "super" ]
stackoverflow_0003171824_python_super.txt
Q: Is it possible to put a toolbar button on the right side of it using wxpython? I'm making a toolbar using wxpython and I want to put the Quit button on the right side of it, I don't want to put them sequentially. Is it possible to define this position? Thanks in advance! A: If you add the quit button last, it will be on the right side.
Is it possible to put a toolbar button on the right side of it using wxpython?
I'm making a toolbar using wxpython and I want to put the Quit button on the right side of it, I don't want to put them sequentially. Is it possible to define this position? Thanks in advance!
[ "If you add the quit button last, it will be on the right side.\n" ]
[ 0 ]
[]
[]
[ "button", "position", "python", "toolbar", "wxpython" ]
stackoverflow_0002964108_button_position_python_toolbar_wxpython.txt
Q: Comparing if datetime.datetime exists or None I'm running a small app on Google App Engine with Python. In the model I have a property of type DateTimeProperty, which is datetime.datetime. When it's created there is no value (i.e. "None"). I want to compare if that datetime.datetime is None, but I can't.

if object.updated_date is None or object.updated_date >= past:
    object.updated_date = now

Both updated_date and past are datetime.datetime. I get the following error. TypeError: can't compare datetime.datetime to NoneType What is the correct way to do this? A: Given that the previous discussion seems to have established that either of the variables could be None, one approach would be (assuming you want to set object.updated_date when either of the variables is None):

if None in (past, object.updated_date) or object.updated_date >= past:
    object.updated_date = now

point being that the check None in (past, object.updated_date) may be handier than the semantically equivalent alternative (past is None or object.updated_date is None) (arguably an epsilon more readable thanks to its better compactness, but it is, of course, an arguable matter of style). As an aside, and a less-arguable matter of style;-), I strongly recommend against using built-ins' names as names for your own variables (and functions, etc) -- object is such a built-in name which in this context is clearly being used for your own purposes. Using obj instead is more concise, still readable (arguably more so;-), and has no downside. You're unlikely to be "bitten" in any given case by the iffy practice of "shadowing" built-ins' names with your own, but eventually it will happen (as you happen to need the normal meaning of the shadowed name during some later ordinary maintenance operation) and you may be in for a confusing debugging situation then; meanwhile, you risk confusing other readers / maintainers... and are getting absolutely no advantage in return for these disadvantages. 
I realize that many of Python's built-ins' names are an "attractive nuisance" in this sense... file, object, list, dict, set, min, max... all apparently attractive names for "a file", "an object", "a list", etc. But, it's worthwhile to learn to resist this particular temptation!-) A: Perhaps you mean:

if object.updated_date and object.updated_date >= past:

If it's truthy (which implies not null), we check that it's also >= past. This uses short-circuit evaluation, which means the second condition isn't checked if the first is falsy. A: You want and, not or. You may also want to use is None. EDIT: Since you've determined that object.updated_date isn't None, the only other possibility is that past is None.
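The None-membership check from the first answer can be exercised directly; Obj below is a hypothetical stand-in for the datastore model instance:

```python
import datetime

class Obj:  # hypothetical stand-in for the model entity
    updated_date = None

obj = Obj()
now = datetime.datetime(2010, 7, 2, 12, 0)
past = now - datetime.timedelta(days=1)

# The `or` short-circuits before the >= comparison can raise
# TypeError on a None operand.
if None in (past, obj.updated_date) or obj.updated_date >= past:
    obj.updated_date = now

print(obj.updated_date)  # 2010-07-02 12:00:00
```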
Comparing if datetime.datetime exists or None
I'm running a small app on Google App Engine with Python. In the model I have a property of type DateTimeProperty, which is datetime.datetime. When it's created there is no value (i.e. "None"). I want compare if that datetime.datetime is None, but I can't. if object.updated_date is None or object.updated_date >= past: object.updated_date = now Both updated_date and past is datetime.datetime. I get the following error. TypeError: can't compare datetime.datetime to NoneType What is the correct way to do this?
[ "Given that the previous discussion seems to have established that either of the variables could be None, one approach would be (assuming you want to set object.updated_date when either of the variables is None):\nif None in (past, object.updated_date) or object.updated_date >= past:\n object.updated_date = now\n\...
[ 7, 6, 3 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0003170640_google_app_engine_python.txt
Q: Is there any way to store cookies in django which is independent to browser? Is there any way in django to store cookies which is independent to browser ? is there any technique just like what flash SharedObject does ..? A: A Django view receives an instance of HttpRequest as its first argument. That object has an attribute COOKIES which is, and I quote, A standard Python dictionary containing all cookies. Keys and values are strings. You can of course save that dictionary in any way you like (e.g., pickle it into a blob of bytes and save that blob as a suitable attribute of a suitable entity, etc, etc). Conversely, when you instantiate an HttpResponse to return as the view's result, you can call its set_cookie method one or more times to set on it any cookies you may want to set. A: As it's explain in the link you give in your comment, Shared Objects are not stored as browser cookies, they are completely managed by the Flash Player. That's why they are independent of the browser. So the answer is no, you can't store a cookie that is independent of the browser with Django (or any other web framework). A possible solution is, if your visitor need to log to your site, you can store the info on the server, probably in a database. But if you don't required users to be log in, it will not work. Your other solution is to use Flash only to store the cookies. A: There is no way to set cookie, so that it would be available in all browsers on computer. You can do it in Flash, because it is external library (one for all browsers).
Is there any way to store cookies in django which is independent to browser?
Is there any way in django to store cookies which is independent to browser ? is there any technique just like what flash SharedObject does ..?
[ "A Django view receives an instance of HttpRequest as its first argument. That object has an attribute COOKIES which is, and I quote,\n\nA standard Python dictionary\n containing all cookies. Keys and\n values are strings.\n\nYou can of course save that dictionary in any way you like (e.g., pickle it into a blob...
[ 2, 1, 0 ]
[]
[]
[ "cookies", "django", "flash", "python" ]
stackoverflow_0003171404_cookies_django_flash_python.txt
Q: Most Efficient way to calculate Frequency of values in a Python list? I am looking for a fast and efficient way to calculate the frequency of list items in python: list = ['a','b','a','b', ......] I want a frequency counter which would give me an output like this: [ ('a', 10),('b', 8) ...] The items should be arranged in descending order of frequency as shown above. A: Python2.7+

>>> from collections import Counter
>>> L=['a','b','a','b']
>>> print(Counter(L))
Counter({'a': 2, 'b': 2})
>>> print(Counter(L).items())
dict_items([('a', 2), ('b', 2)])

python2.5/2.6

>>> from collections import defaultdict
>>> L=['a','b','a','b']
>>> d=defaultdict(int)
>>> for item in L:
...     d[item] += 1
...
>>> print d
defaultdict(<type 'int'>, {'a': 2, 'b': 2})
>>> print d.items()
[('a', 2), ('b', 2)]
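For the descending-frequency ordering the question asks for, Counter (Python 2.7+) also provides most_common(), which returns (value, count) pairs sorted from most to least frequent:

```python
from collections import Counter

items = ['a'] * 10 + ['b'] * 8 + ['c'] * 3

# most_common() sorts by descending count; an optional argument
# limits the result to the top n entries.
print(Counter(items).most_common())  # [('a', 10), ('b', 8), ('c', 3)]
```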
Most Efficient way to calculate Frequency of values in a Python list?
I am looking for a fast and efficient way to calculate the frequency of list items in python: list = ['a','b','a','b', ......] I want a frequency counter which would give me an output like this: [ ('a', 10),('b', 8) ...] The items should be arranged in descending order of frequency as shown above.
[ "Python2.7+\n>>> from collections import Counter\n>>> L=['a','b','a','b']\n>>> print(Counter(L))\nCounter({'a': 2, 'b': 2})\n>>> print(Counter(L).items())\ndict_items([('a', 2), ('b', 2)])\n\npython2.5/2.6\n>>> from collections import defaultdict\n>>> L=['a','b','a','b']\n>>> d=defaultdict(int)\n>>> for item in L:\...
[ 33 ]
[]
[]
[ "frequency", "list", "python" ]
stackoverflow_0003172173_frequency_list_python.txt
Q: wxPython segmentation fault with Editors I have created a wx.grid.Grid with a wx.grid.PyGridTableBase derived class to provide its data. I also want to control the editors used on the table. Towards that end I defined the following method:

def GetAttr(self, row, col, kind):
    attr = wx.grid.GridCellAttr()
    if col == 0:
        attr.SetEditor( wx.grid.GridCellChoiceEditor() )
    return attr

However, this causes a segmentation fault whenever I attempt to create the editor in the grid. I did try creating the editor beforehand and passing it in as a parameter but received the error: TypeError: in method 'GridCellAttr_SetEditor', expected argument 2 of type 'wxGridCellEditor *' I suspect the second error is caused by the GridCellAttr taking ownership of and then destroying my editor. I also tried using the SetDefaultEditor method on the wx.grid.Grid and that works, but naturally does not allow me to have a column specific editing strategy. See Full Example of crashing program: http://pastebin.com/SEbhvaKf A: I figured out the problem: The wxWidgets code assumes that the same Editor will be consistently returned from GetCellAttr. Returning a different editor each time as I was doing caused the segmentation faults. In order to return the same editor multiple times I also need to call IncRef() on the editor to keep it alive. 
For anybody else hitting the same problem in future, see my working code: import wx.grid app = wx.PySimpleApp() class Source(wx.grid.PyGridTableBase): def __init__(self): super(Source, self).__init__() self._editor = wx.grid.GridCellChoiceEditor() def IsEmptyCell(self, row, col): return False def GetValue(self, row, col): return repr( (row, col) ) def SetValue(self, row, col, value): pass def GetNumberRows(self): return 5 def GetNumberCols(self): return 5 def GetAttr(self, row, col, kind): attr = wx.grid.GridCellAttr() self._editor.IncRef() attr.SetEditor( self._editor ) return attr frame = wx.Frame(None) grid = wx.grid.Grid(frame) grid.SetTable( Source() ) frame.Show() app.MainLoop() A: This should solve it: import wx import wx.grid as gridlib and change: def GetAttr(self, row, col, kind): attr = gridlib.GridCellAttr() if col == 0: attr.SetEditor( gridlib.GridCellChoiceEditor() ) return attr Obs: I have no idea why you need to do it this way because: >>> import wx >>> attr = wx.grid.GridCellAttr() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'grid' Do not work but: import wx.grid as gridlib attr = gridlib.GridCellAttr() works... but: print attr <wx.grid.GridCellAttr; proxy of <wx.grid.GridCellAttr; proxy of <Swig Object of type 'wxGridCellAttr *' at 0x97cb398> > > It says: <wx.grid.GridCellAttr; proxy of <wx.grid.GridCellAttr>...> ! Obs2: if you use the ChoiceEditor on the whole column 0, you could also define it only one time before showing the grid with: attr.SetEditor( gridlib.GridCellChoiceEditor() ) yourGrid.SetColAttr(0, attr) and you could remove all the code from the GetAttr method (I think it should be faster - but i never timed that).
wxPython segmentation fault with Editors
I have created a wx.grid.Grid with a wx.grid.PyGridTableBase derived class to provide its data. I also want to control the editors used on the table. Towards that end I defined the following method def GetAttr(self, row, col, kind): attr = wx.grid.GridCellAttr() if col == 0: attr.SetEditor( wx.grid.GridCellChoiceEditor() ) return attr However, this causes a segmentation fault whenever I attempt to create the editor in the grid. I did try creating the editor beforehand and passing it in as a parameter but received the error: TypeError: in method 'GridCellAttr_SetEditor', expected argument 2 of type 'wxGridCellEditor *' I suspect the second error is caused by the GridCellAttr taking ownership of my editor and then destroying it. I also tried using the SetDefaultEditor method on the wx.grid.Grid and that works, but naturally does not allow me to have a column-specific editing strategy. See full example of crashing program: http://pastebin.com/SEbhvaKf
[ "I figured out the problem:\nThe wxWidgets code assumes that the same Editor will be consistently returned from GetCellAttr. Returning a different editor each time as I was doing caused the segmentation faults.\nIn order to return the same editor multiple times I also need to call IncRef() on the editor to keep it ...
[ 4, 0 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0003168971_python_wxpython.txt
Q: Splitting words in running text using Python? I am writing a piece of code which will extract words from running text. This text can contain delimiters like \r, \n etc. which might be present in the text. I want to discard all these delimiters and only extract full words. How can I do this with Python? Is any library available for crunching text in Python? A: Assuming your definition of "word" agrees with that of the regular expression module (re), that is, letters, digits and underscores, it's easy: import re fullwords = re.findall(r'\w+', thetext) where thetext is the string in question (e.g., coming from an f.read() of a file object f open for reading, if that's where you get your text from). If you define words differently (e.g. you want to include apostrophes so for example "it's" will be considered "one word"), it isn't much harder -- just use as the first argument of findall the appropriate pattern, e.g. r"[\w']+" for the apostrophe case. If you need to be very, very sophisticated (e.g., deal with languages that use no breaks between words), then the problem suddenly becomes much harder and you'll need some third-party package like nltk. A: Assuming your delimiters are whitespace characters (like space, \r and \n), then basic str.split() does what you want: >>> "asdf\nfoo\r\nbar too\tbaz".split() ['asdf', 'foo', 'bar', 'too', 'baz']
Splitting words in running text using Python?
I am writing a piece of code which will extract words from running text. This text can contain delimiters like \r, \n etc. which might be present in the text. I want to discard all these delimiters and only extract full words. How can I do this with Python? Is any library available for crunching text in Python?
[ "Assuming your definition of \"word\" agrees with that of the regular expression module (re), that is, letters, digits and underscores, it's easy:\nimport re\nfullwords = re.findall(r'\\w+', thetext)\n\nwhere thetext is the string in question (e.g., coming from an f.read() of a file object f open for reading, if th...
[ 5, 1 ]
[]
[]
[ "parsing", "python", "text_processing" ]
stackoverflow_0003172236_parsing_python_text_processing.txt
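Both answers above can be combined into one small helper: `re.findall` with `\w+` for the plain case, and the suggested `[\w']+` pattern when apostrophes should count as part of a word (the function name here is just for illustration):

```python
import re

def words(text, keep_apostrophes=False):
    """Extract words from running text, discarding whitespace and delimiters."""
    pattern = r"[\w']+" if keep_apostrophes else r"\w+"
    return re.findall(pattern, text)

print(words("asdf\nfoo\r\nbar too\tbaz"))         # ['asdf', 'foo', 'bar', 'too', 'baz']
print(words("it's fine", keep_apostrophes=True))  # ["it's", 'fine']
```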
Q: Un/bound methods in Cheetah Is there a way to declare static methods in cheetah? IE snippets.tmpl #def address($address, $title) <div class="address"> <b>$title</h1></b> #if $address.title $address.title <br/> #end if $address.line1 <br/> #if $address.line2 $address.line2 <br/> #end if $address.town, $address.state $address.zipcode </div> #end def .... other snippets other.tmpl #from snippets import * $snippets.address($home_address, "home address") This code reports this error: NotFound: cannot find 'address'. Cheetah is compiling it as a bound method, natch: snippets.py class snippets(Template): ... def address(self, address, title, **KWS): Is there a way to declare static methods? If not, what are some alternative ways to implement something like this (a snippets library)? A: This page seems to have some relevant information, but I'm not in a position to try it out myself right now, sorry. Specifically, you should just be able to do: #@staticmethod #def address($address, $title) ...and have it work. (If you didn't know, staticmethod is a built-in function that creates a... static method :) It's most commonly used as a decorator. So I found that page by Googling "cheetah staticmethod".)
Un/bound methods in Cheetah
Is there a way to declare static methods in cheetah? IE snippets.tmpl #def address($address, $title) <div class="address"> <b>$title</h1></b> #if $address.title $address.title <br/> #end if $address.line1 <br/> #if $address.line2 $address.line2 <br/> #end if $address.town, $address.state $address.zipcode </div> #end def .... other snippets other.tmpl #from snippets import * $snippets.address($home_address, "home address") This code reports this error: NotFound: cannot find 'address'. Cheetah is compiling it as a bound method, natch: snippets.py class snippets(Template): ... def address(self, address, title, **KWS): Is there a way to declare static methods? If not, what are some alternative ways to implement something like this (a snippets library)?
[ "This page seems to have some relevant information, but I'm not in a position to try it out myself right now, sorry.\nSpecifically, you should just be able to do:\n#@staticmethod\n#def address($address, $title)\n\n...and have it work.\n(If you didn't know, staticmethod is a built-in function that creates a... stati...
[ 0 ]
[]
[]
[ "cheetah", "python" ]
stackoverflow_0003172279_cheetah_python.txt
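The `#@staticmethod` directive compiles down to Python's ordinary `staticmethod` decorator. In plain Python the difference looks like this (the `Snippets` class below is a hypothetical illustration, not Cheetah's actual generated code):

```python
class Snippets(object):
    def bound_address(self, address, title):
        # Ordinary method: needs an instance, so a bare class-level
        # call like $snippets.address(...) would fail.
        return "<div>%s: %s</div>" % (title, address)

    @staticmethod
    def address(address, title):
        # No 'self': callable directly on the class without an instance.
        return "<div>%s: %s</div>" % (title, address)

print(Snippets.address("123 Main St", "home"))  # <div>home: 123 Main St</div>
```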
Q: python curses.newwin not working I'm learning curses for the first time, and I decided to do it in python because it would be easier than constantly recompiling. However, I've hit a hitch. When I try to update a second window, I get no output. Here's a code snippet: import curses win = curses.initscr() curses.noecho() curses.cbreak() curses.curs_set(0) field = curses.newwin(1, 20, 1, 1) field.addstr(0, 0, "Hello, world!", curses.A_REVERSE) field.refresh() The normal win window initialized with initscr() works, but the field window doesn't show up. Any help? Edit: Here's the new, revised code, which still doesn't work. import curses ex = None def main(stdscr): global ex try: curses.curs_set(0) except Exception, e: ex = e field = curses.newwin(25, 25, 6, 6) field.border() cont = True x, y = 0, 0 while cont: stdscr.clear() field.clear() coords = "%d, %d" % (x, y) stdscr.addstr(5, 5, coords, curses.A_REVERSE) field.addstr(y+2, x+2, "@", curses.A_BOLD) chr = stdscr.getkey() if chr == 'h': if x > 0: x -= 1 if chr == 'l': if x < 20: x += 1 if chr == 'j': if y > 0: y -= 1 if chr == 'k': if y < 20: y += 1 if chr == 'q': cont = False stdscr.clear() field.clear() stdscr.noutrefresh() field.noutrefresh() curses.doupdate() curses.wrapper(main) if ex is not None: print 'got %s (%s)' % (type(ex).__name__, ex)
(Remember the wrapper does initscr and sets noecho and cbreak -- and more importantly resets it when done, which is why I always use it;-). BTW, I'm using Py 2.6.4 on a Mac (OSx 10.5.8) and Terminal.App. Your platform...? A: Ah, found the problem. When I use stdscr.clear(), it's clearing the entire terminal, including the new window. All I needed to do is make two windows, one for each separate display. Oh, and thanks to above for curses.wrapper tip. Saying here because I can't post a comment.
python curses.newwin not working
I'm learning curses for the first time, and I decided to do it in python because it would be easier than constantly recompiling. However, I've hit a hitch. When I try to update a second window, I get no output. Here's a code snippet: import curses win = curses.initscr() curses.noecho() curses.cbreak() curses.curs_set(0) field = curses.newwin(1, 20, 1, 1) field.addstr(0, 0, "Hello, world!", curses.A_REVERSE) field.refresh() The normal win window initialized with initscr() works, but the field window doesn't show up. Any help? Edit: Here's the new, revised code, which still doesn't work. import curses ex = None def main(stdscr): global ex try: curses.curs_set(0) except Exception, e: ex = e field = curses.newwin(25, 25, 6, 6) field.border() cont = True x, y = 0, 0 while cont: stdscr.clear() field.clear() coords = "%d, %d" % (x, y) stdscr.addstr(5, 5, coords, curses.A_REVERSE) field.addstr(y+2, x+2, "@", curses.A_BOLD) chr = stdscr.getkey() if chr == 'h': if x > 0: x -= 1 if chr == 'l': if x < 20: x += 1 if chr == 'j': if y > 0: y -= 1 if chr == 'k': if y < 20: y += 1 if chr == 'q': cont = False stdscr.clear() field.clear() stdscr.noutrefresh() field.noutrefresh() curses.doupdate() curses.wrapper(main) if ex is not None: print 'got %s (%s)' % (type(ex).__name__, ex)
[ "Seems OK to me -- I always use curses.wrapper and my terminal doesn't support cursor visibility of 0, so this is what I have...:\nimport curses\n\nex = None\n\ndef main(stdscr):\n global ex\n try:\n curses.curs_set(0)\n except Exception, e:\n ex = e\n\n field = curses.newwin(1, 20, 1, 1)\...
[ 4, 2 ]
[]
[]
[ "curses", "python", "window", "windows" ]
stackoverflow_0003170406_curses_python_window_windows.txt
Q: Access Gmail atom feed using OAuth I'm trying to grab the Gmail atom feed from a python application using OAuth. I have a working application that downloads the Google Reader feed, and I think it should simply be a matter of changing the scope and feed URLs. After replacing the URLs I can still successfully get Request and Access tokens, but when I try to grab the feed using the Access token I get a "401 Unauthorized" error. Here's my simple test program: import urlparse import oauth2 as oauth scope = "https://mail.google.com/mail/feed/atom/" sub_url = scope + "unread" request_token_url = "https://www.google.com/accounts/OAuthGetRequestToken?scope=%s&xoauth_displayname=%s" % (scope, "Test Application") authorize_url = 'https://www.google.com/accounts/OAuthAuthorizeToken' access_token_url = 'https://www.google.com/accounts/OAuthGetAccessToken' oauth_key = "anonymous" oauth_secret = "anonymous" consumer = oauth.Consumer(oauth_key, oauth_secret) client = oauth.Client(consumer) # Get a request token. resp, content = client.request(request_token_url, "GET") request_token = dict(urlparse.parse_qsl(content)) print "Request Token:" print " - oauth_token = %s" % request_token['oauth_token'] print " - oauth_token_secret = %s" % request_token['oauth_token_secret'] print # Step 2: Link to web page where the user can approve the request token. 
print "Go to the following link in your browser:" print "%s?oauth_token=%s" % (authorize_url, request_token['oauth_token']) print raw_input('Press enter after authorizing.') # Step 3: Get access token using approved request token token = oauth.Token(request_token['oauth_token'], request_token['oauth_token_secret']) client = oauth.Client(consumer, token) resp, content = client.request(access_token_url, "POST") access_token = dict(urlparse.parse_qsl(content)) print "Access Token:" print " - oauth_token = %s" % access_token['oauth_token'] print " - oauth_token_secret = %s" % access_token['oauth_token_secret'] print # Access content using access token token = oauth.Token(access_token['oauth_token'], access_token['oauth_token_secret']) client = oauth.Client(consumer, token) resp, content = client.request(sub_url, 'GET') print content You'll notice that I'm using 'anonymous/anonymous' as my OAuth key/secret, as mentioned in the Google documents for unregistered applications. This works fine for google reader, so I don't see any reason why it shouldn't work for Gmail. Does anyone have any idea on why this might not work, or how I could go about troubleshooting it? Thanks. A: You might want to try accessing Google's IMAP servers with OAuth instead of using the ATOM feed. After a little googling I found this: "Gmail supports OAuth over IMAP and SMTP via a standard they call XOAUTH. This allows you to authenticate against Gmail's IMAP and SMTP servers using an OAuth token and secret. It also has the added benefit of allowing you to use vanilla SMTP and IMAP libraries. The python-oauth2 package provides both IMAP and SMTP libraries that implement XOAUTH and wrap imaplib.IMAP4_SSL and smtplib.SMTP. This allows you to connect to Gmail with OAuth credentials using standard Python libraries." from http://github.com/simplegeo/python-oauth2
Access Gmail atom feed using OAuth
I'm trying to grab the Gmail atom feed from a python application using OAuth. I have a working application that downloads the Google Reader feed, and I think it should simply be a matter of changing the scope and feed URLs. After replacing the URLs I can still successfully get Request and Access tokens, but when I try to grab the feed using the Access token I get a "401 Unauthorized" error. Here's my simple test program: import urlparse import oauth2 as oauth scope = "https://mail.google.com/mail/feed/atom/" sub_url = scope + "unread" request_token_url = "https://www.google.com/accounts/OAuthGetRequestToken?scope=%s&xoauth_displayname=%s" % (scope, "Test Application") authorize_url = 'https://www.google.com/accounts/OAuthAuthorizeToken' access_token_url = 'https://www.google.com/accounts/OAuthGetAccessToken' oauth_key = "anonymous" oauth_secret = "anonymous" consumer = oauth.Consumer(oauth_key, oauth_secret) client = oauth.Client(consumer) # Get a request token. resp, content = client.request(request_token_url, "GET") request_token = dict(urlparse.parse_qsl(content)) print "Request Token:" print " - oauth_token = %s" % request_token['oauth_token'] print " - oauth_token_secret = %s" % request_token['oauth_token_secret'] print # Step 2: Link to web page where the user can approve the request token. 
print "Go to the following link in your browser:" print "%s?oauth_token=%s" % (authorize_url, request_token['oauth_token']) print raw_input('Press enter after authorizing.') # Step 3: Get access token using approved request token token = oauth.Token(request_token['oauth_token'], request_token['oauth_token_secret']) client = oauth.Client(consumer, token) resp, content = client.request(access_token_url, "POST") access_token = dict(urlparse.parse_qsl(content)) print "Access Token:" print " - oauth_token = %s" % access_token['oauth_token'] print " - oauth_token_secret = %s" % access_token['oauth_token_secret'] print # Access content using access token token = oauth.Token(access_token['oauth_token'], access_token['oauth_token_secret']) client = oauth.Client(consumer, token) resp, content = client.request(sub_url, 'GET') print content You'll notice that I'm using 'anonymous/anonymous' as my OAuth key/secret, as mentioned in the Google documents for unregistered applications. This works fine for google reader, so I don't see any reason why it shouldn't work for Gmail. Does anyone have any idea on why this might not work, or how I could go about troubleshooting it? Thanks.
[ "You might want to try accessing Google's IMAP servers with OAuth instead of using the ATOM feed. After a little googling I found this:\n\n\"Gmail supports OAuth over IMAP and\n SMTP via a standard they call XOAUTH.\n This allows you to authenticate\n against Gmail's IMAP and SMTP servers\n using an OAuth token...
[ 3 ]
[]
[]
[ "gdata", "gmail", "oauth", "python" ]
stackoverflow_0003170347_gdata_gmail_oauth_python.txt
Q: How to draw a spherical triangle on a sphere in 3D? Suppose you know the three vertices for a spherical triangle. Then how do you draw the sides on a sphere in 3D? I need some Python code to use in the Blender 3D modelling software. I already have the sphere done in 3D in Blender. Thanks & happy blendering. Note 1: I have the 3 points / vertices (p1, p2, p3) on the sphere for a spherical triangle, but I need to trace the edges on the sphere in 3D. So what would be the equations needed to determine all vertices between each pair of points of the triangle on the sphere, for the 3 edges from p1 to p2, p2 to p3 and p3 to p1? I know it has something to do with the great circle for a geodesic on a sphere, but I cannot find the proper equations to do the calculations in spherical coordinates! Thanks. Great circles: it would have been interesting to see a solution with great circles, and see the solution in spherical coordinates directly! But it is still interesting to do it in Euclidean space. Thanks. OK, I used this idea of a line segment between 2 points, but did not do it as indicated; instead I used an alternative method - Bezier line interpolation. I parametrize the line with a Bezier line, then subdivide it and calculate, as shown earlier, the ratio and angle for each of the subdivided Bezier points on the chord, and it works very well and is very precise. But it would be interesting to see how it is done with the earlier method; I'm just not certain how to do the iteration loop. How do you load up the Python code here - just paste it with Ctrl-V? Thanks and happy 2.5. I do use the Blender forum, but there's no guarantee of getting a clear answer all the time! 
That's why I tried here - took a chance. I did the first edge, and it seems to work now; I've got to make a loop to get multiple segments for the first edge and then do the other edges also. 2 - other subject: I opened a post here on Bezier triangle patches. I know it's not a useful tool, but just to show how it is done - have you seen a Python script to do these triangle patches? I did ask this question on Blender's forum with no answer, and also on Python IRC, which seems to be dead right now; probably the guys are too busy finishing the 2.5 Beta version, which should come out in a week or 2. Hey, thanks a lot for this math discussion; if I have a problem I'll be back tomorrow. Happy math and 2.5. A: Create Sine Mesh Python code to create a sine wave mesh in Blender: import math import Blender from Blender import NMesh x = -1 * math.pi mesh = NMesh.GetRaw() vNew = NMesh.Vert( x, math.sin( x ), 0 ) mesh.verts.append( vNew ) while x < math.pi: x += 0.1 vOld = vNew vNew = NMesh.Vert( x, math.sin( x ), 0 ) mesh.verts.append( vNew ) mesh.addEdge( vOld, vNew ) NMesh.PutRaw( mesh, "SineWave", 1 ) Blender.Redraw() The code's explanation is at: http://davidjarvis.ca/blender/tutorial-04.shtml Algorithm to Plot Edges Drawing one line segment is the same as drawing three, so the problem can be restated as: How do you draw an arc on a sphere, given two end points? 
In other words, draw an arc between the following two points on a sphere: P1 = (x1, y1, z1) P2 = (x2, y2, z2) Solve this by plotting many mid-points along the arc P1P2 as follows: Calculate the radius of the sphere: R = sqrt( x1^2 + y1^2 + z1^2 ) Calculate the mid-point (m) of the line between P1 and P2: Pm = (xm, ym, zm) xm = (x1 + x2) / 2 ym = (y1 + y2) / 2 zm = (z1 + z2) / 2 Calculate the length to the mid-point of the line between P1 and P2: Lm = sqrt( xm^2 + ym^2 + zm^2 ) Calculate the ratio of the sphere's radius to the length of the mid-point: k = R / Lm Calculate the mid-point along the arc: Am = k * Pm = (k * xm, k * ym, k * zm) For P1 to P2, create two edges: P1 to Am Am to P2 The two edges will cut through the sphere. To solve this, calculate the mid-points between P1Am and AmP2. The more mid-points, the more closely the line segments will follow the sphere's surface. As Blender is rather precise with its calculations, the resulting arc will likely be (asymptotically) hidden by the sphere. Once you have created the triangular mesh, move it away from the sphere by a few units (like 0.01 or so). Use a Spline Another solution is to create a spline from the following: sphere's radius (calculated as above) P1 Am P2 The resulting splines must be moved in front of the sphere. Blender Artists Forums The Blender experts will also have great ideas on how to solve this; try asking them. See Also http://www.mathnews.uwaterloo.ca/Issues/mn11106/DotProduct.php http://cr4.globalspec.com/thread/27311/Urgent-Midpoint-of-Arc-formula A: One cheap and easy method for doing this would be to create the triangle and subdivide the faces down to the level of detail you want, then normalize all the vertices to the radius you want.
How to draw a spherical triangle on a sphere in 3D?
Suppose you know the three vertices for a spherical triangle. Then how do you draw the sides on a sphere in 3D? I need some Python code to use in the Blender 3D modelling software. I already have the sphere done in 3D in Blender. Thanks & happy blendering. Note 1: I have the 3 points / vertices (p1, p2, p3) on the sphere for a spherical triangle, but I need to trace the edges on the sphere in 3D. So what would be the equations needed to determine all vertices between each pair of points of the triangle on the sphere, for the 3 edges from p1 to p2, p2 to p3 and p3 to p1? I know it has something to do with the great circle for a geodesic on a sphere, but I cannot find the proper equations to do the calculations in spherical coordinates! Thanks. Great circles: it would have been interesting to see a solution with great circles, and see the solution in spherical coordinates directly! But it is still interesting to do it in Euclidean space. Thanks. OK, I used this idea of a line segment between 2 points, but did not do it as indicated; instead I used an alternative method - Bezier line interpolation. I parametrize the line with a Bezier line, then subdivide it and calculate, as shown earlier, the ratio and angle for each of the subdivided Bezier points on the chord, and it works very well and is very precise. But it would be interesting to see how it is done with the earlier method; I'm just not certain how to do the iteration loop. How do you load up the Python code here - just paste it with Ctrl-V? Thanks and happy 2.5. I do use the Blender forum, but there's no guarantee of getting a clear answer all the time! 
That's why I tried here - took a chance. I did the first edge, and it seems to work now; I've got to make a loop to get multiple segments for the first edge and then do the other edges also. 2 - other subject: I opened a post here on Bezier triangle patches. I know it's not a useful tool, but just to show how it is done - have you seen a Python script to do these triangle patches? I did ask this question on Blender's forum with no answer, and also on Python IRC, which seems to be dead right now; probably the guys are too busy finishing the 2.5 Beta version, which should come out in a week or 2. Hey, thanks a lot for this math discussion; if I have a problem I'll be back tomorrow. Happy math and 2.5.
[ "Create Sine Mesh\nPython code to create a sine wave mesh in Blender:\nimport math\nimport Blender\nfrom Blender import NMesh\n\nx = -1 * math.pi\n\nmesh = NMesh.GetRaw()\nvNew = NMesh.Vert( x, math.sin( x ), 0 )\nmesh.verts.append( vNew )\n\nwhile x < math.pi:\n x += 0.1\n vOld = vNew\n vNew = NMesh.Vert( x, math....
[ 4, 0 ]
[]
[]
[ "blender", "python" ]
stackoverflow_0003172535_blender_python.txt
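The midpoint-projection recipe from the answer is easy to express outside Blender. This pure-Python sketch (no Blender API; function names are mine) recursively bisects the chord and scales each midpoint out to the sphere's radius, Am = (R / Lm) * Pm, assuming the sphere is centred at the origin:

```python
import math

def project(p, radius):
    """Scale point p outward so it lies on the sphere: Am = (R / Lm) * Pm."""
    length = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    k = radius / length
    return (k * p[0], k * p[1], k * p[2])

def arc_points(p1, p2, depth=3):
    """Vertices along the great-circle arc p1->p2, by repeated bisection."""
    radius = math.sqrt(p1[0] ** 2 + p1[1] ** 2 + p1[2] ** 2)
    def bisect(a, b, d):
        if d == 0:
            return [a]
        mid = project(((a[0] + b[0]) / 2.0,
                       (a[1] + b[1]) / 2.0,
                       (a[2] + b[2]) / 2.0), radius)
        return bisect(a, mid, d - 1) + bisect(mid, b, d - 1)
    return bisect(p1, p2, depth) + [p2]

# depth=3 gives 2**3 segments, hence 9 vertices including both end points.
pts = arc_points((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(len(pts))  # 9
```

Feeding these vertices to the mesh-building loop shown in the sine-wave example (one edge per consecutive pair) draws one arc; repeat for the triangle's three vertex pairs.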
Q: How to show Chinese characters, not Unicode escapes This is my code: from whoosh.analysis import RegexAnalyzer rex = RegexAnalyzer(re.compile(ur"([\u4e00-\u9fa5])|(\w+(\.?\w+)*)")) a=[(token.text) for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")] self.render_template('index.html',{'a':a}) and it shows this on the web page: [u'hi', u'\u4e2d', u'000', u'\u4e2d', u'\u6587', u'\u6d4b', u'\u8bd5', u'\u4e2d', u'\u6587', u'there', u'3.141', u'big', u'time', u'under_score'] but I want to show the Chinese characters, so I changed it to this: a=[(token.text).encode('utf-8') for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")] and it shows: ['hi', '\xe4\xb8\xad', '000', '\xe4\xb8\xad', '\xe6\x96\x87', '\xe6\xb5\x8b', '\xe8\xaf\x95', '\xe4\xb8\xad', '\xe6\x96\x87', 'there', '3.141', 'big', 'time', 'under_score'] So how do I show the Chinese characters in my code? Thanks. A: By default, printing a larger built-in structure gives the repr() of each of the elements. If you want the str()/unicode() instead then you need to iterate over the sequence yourself. a = u"['" + u"', '".join(token.text for token in ...) + u"']" print a
How to show Chinese characters, not Unicode escapes
This is my code: from whoosh.analysis import RegexAnalyzer rex = RegexAnalyzer(re.compile(ur"([\u4e00-\u9fa5])|(\w+(\.?\w+)*)")) a=[(token.text) for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")] self.render_template('index.html',{'a':a}) and it shows this on the web page: [u'hi', u'\u4e2d', u'000', u'\u4e2d', u'\u6587', u'\u6d4b', u'\u8bd5', u'\u4e2d', u'\u6587', u'there', u'3.141', u'big', u'time', u'under_score'] but I want to show the Chinese characters, so I changed it to this: a=[(token.text).encode('utf-8') for token in rex(u"hi 中 000 中文测试中文 there 3.141 big-time under_score")] and it shows: ['hi', '\xe4\xb8\xad', '000', '\xe4\xb8\xad', '\xe6\x96\x87', '\xe6\xb5\x8b', '\xe8\xaf\x95', '\xe4\xb8\xad', '\xe6\x96\x87', 'there', '3.141', 'big', 'time', 'under_score'] So how do I show the Chinese characters in my code? Thanks.
[ "By default, printing a larger built-in structure gives the repr() of each of the elements. If you want the str()/unicode() instead then you need to iterate over the sequence yourself.\na = u\"['\" + u\"', '\".join(token.text for token in ...) + u\"']\"\nprint a\n\n" ]
[ 3 ]
[]
[]
[ "list", "python", "string", "utf_8" ]
stackoverflow_0003172741_list_python_string_utf_8.txt
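The point of the answer is that printing a list shows each element's `repr()`, while joining the strings yourself renders the actual characters. A minimal sketch of the suggested join:

```python
tokens = [u'hi', u'\u4e2d', u'\u6587']

# Joining the strings yourself renders the characters themselves,
# instead of the list's element-by-element repr():
rendered = u"['" + u"', '".join(tokens) + u"']"
print(rendered)  # ['hi', '中', '文']
```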
Q: Open File with Python I am writing a tkinter program that is like a portfolio and opens up other programs also written in Python. So for example I have FILE_1 and FILE_2, and I want to write a program that, once a certain button is clicked, opens either FILE_1 or FILE_2. I don't need help with the look, like with buttons, just how to write a function that opens a program. This is the code I used: from Tkinter import * import subprocess master = Tk() def z(): p=subprocess.Popen('test1.py') p.communicate() b = Button(master, text="OK", command=z) b.pack() mainloop() A: Hook the button up to a callback which calls subprocess.Popen: import subprocess p=subprocess.Popen('FILE_1.py') p.communicate() This will try to run FILE_1.py as a separate process. p.communicate() will cause your main program to wait until FILE_1.py exits.
Open File with Python
I am writing a tkinter program that is like a portfolio and opens up other programs also written in Python. So for example I have FILE_1 and FILE_2, and I want to write a program that, once a certain button is clicked, opens either FILE_1 or FILE_2. I don't need help with the look, like with buttons, just how to write a function that opens a program. This is the code I used: from Tkinter import * import subprocess master = Tk() def z(): p=subprocess.Popen('test1.py') p.communicate() b = Button(master, text="OK", command=z) b.pack() mainloop()
[ "Hook the button up a callback which calls subprocess.Popen:\nimport subprocess\np=subprocess.Popen('FILE_1.py')\np.communicate()\n\nThis will try to run FILE_1.py as a separate process. \np.communicate() will cause your main program to wait until FILE_1.py exits.\n" ]
[ 3 ]
[]
[]
[ "file_io", "popen", "python", "tkinter" ]
stackoverflow_0003172787_file_io_popen_python_tkinter.txt
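A self-contained variant of that callback, using `sys.executable` so the child script runs under the same interpreter (passing `'test1.py'` alone to `Popen` is not portable, since the script file itself may not be executable). The temporary script here is a stand-in for the real `test1.py`:

```python
import os
import subprocess
import sys
import tempfile

# Write a stand-in child script (in the real program this is test1.py).
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("print('child ran')\n")

def launch(script):
    """Run a Python script as a child process and wait for it to exit."""
    p = subprocess.Popen([sys.executable, script], stdout=subprocess.PIPE)
    out, _ = p.communicate()  # blocks until the child exits
    return p.returncode, out

code, out = launch(path)
print(code, out.strip())
os.remove(path)
```

In the Tkinter program, `command=lambda: launch('test1.py')` would wire this to the button; note that `communicate()` blocks the GUI until the child exits.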
Q: Python: Google App Engine source uses tab depth 2 Looking through the Google App Engine source, I noticed that the tab depth is 2 spaces instead of the conventional 4. Is there some wisdom behind this, or is it just someone's preference? (Maybe it's trivial, or maybe Google knows something that isn't immediately obvious.) UPDATE I wasn't suggesting that it ran differently based on the tab depth. But perhaps there's a good reason for their style. A: The Google Python Style Guide is published here, and, besides being generally vaster than PEP 8, it also differs from it in some aspects. However, the published version of the guide does mandate 4-space indents (like PEP 8 and like just about everybody else does). Within Google, however, the actual rule is two-space indents (and you'll often catch me posting 2-space indented code because (a) it's habit by now, and (b) it's how my editors are set up;-). This was historically derived from the Google C++ style guide (Google used both C++ and Python essentially from day one, but I think C++ got its formal style guide first), which says Spaces vs. Tabs ▽ Use only spaces, and indent 2 spaces at a time. Lots of googlers code in both C++ and Python all the time, so I guess keeping minor aspects of the two style guides in sync, where feasible, is considered a productivity enhancement. A: Obviously how many spaces you use doesn't matter at all in how it runs. It's their convention: http://code.google.com/p/soc/wiki/PythonStyleGuide#Indentation http://groups.google.com/group/django-developers/msg/f6a86d135fb2968f
Python: Google App Engine source uses tab depth 2
Looking through the Google App Engine source, I noticed that the tab depth is 2 spaces instead of the conventional 4. Is there some wisdom behind this, or is it just someone's preference? (Maybe it's trivial, or maybe Google knows something that isn't immediately obvious.) UPDATE I wasn't suggesting that it ran differently based on the tab depth. But perhaps there's a good reason for their style.
[ "The Google Python Style Guide is published here, and, besides being generally vaster than \nPEP 8, it also differs from it in some aspects. However, the published version of the guide does mandate 4-space indents (like PEP 8 and like just about everybody else does).\nWithin Google, however, the actual rule is two...
[ 5, 3 ]
[ "It's miserably bad style. 2-space indentation is simply unreadable. Don't copy it. Never use less than 4 spaces to indent in any language.\n(Don't assume that something is good simply because Google source is doing it. If you've ever spent some time looking through the Android source you'd know that there's as...
[ -6 ]
[ "google_app_engine", "python" ]
stackoverflow_0003172893_google_app_engine_python.txt
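Since Python only requires that indentation be consistent within each block, the 2-space vs. 4-space choice is purely cosmetic — a minimal sketch showing the same function written at both widths, behaving identically:

```python
# Two functionally identical functions, indented at 2 and 4 spaces.
# Python accepts any consistent indent width within a block.

def total_2space(values):
  result = 0
  for v in values:
    result += v
  return result

def total_4space(values):
    result = 0
    for v in values:
        result += v
    return result

print(total_2space([1, 2, 3]) == total_4space([1, 2, 3]))  # True
```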
Q: Flattening mixed lists in Python (containing iterables and noniterables) Possible Duplicate: Flatten (an irregular) list of lists in Python How would I go about flattening a list in Python that contains both iterables and noniterables, such as [1, [2, 3, 4], 5, [6]]? The result should be [1,2,3,4,5,6], and lists of lists of lists (etc.) are certain never to occur. I have tried using itertools.chain, etc., but they seem only to work on lists of lists. Help! A: So you want to flatten only 1 or 2 levels, not recursively to further depths; and only within lists, not other iterables such as strings, tuples, arrays... did I get your specs right? OK, if so, then...: def flat2gen(alist): for item in alist: if isinstance(item, list): for subitem in item: yield subitem else: yield item If you want a list result, list(flat2gen(mylist)) will produce it. Hope this is trivially easy for you to adapt if your actual specs are minutely different!
Flattening mixed lists in Python (containing iterables and noniterables)
Possible Duplicate: Flatten (an irregular) list of lists in Python How would I go about flattening a list in Python that contains both iterables and noniterables, such as [1, [2, 3, 4], 5, [6]]? The result should be [1,2,3,4,5,6], and lists of lists of lists (etc.) are certain never to occur. I have tried using itertools.chain, etc., but they seem only to work on lists of lists. Help!
[ "So you want to flatten only 1 or 2 levels, not recursively to further depts; and only within lists, not other iterables such as strings, tuples, arrays... did I get your specs right? OK, if so, then...:\ndef flat2gen(alist):\n for item in alist:\n if isinstance(item, list):\n for subitem in item: yield s...
[ 3 ]
[]
[]
[ "arrays", "list", "python" ]
stackoverflow_0003172930_arrays_list_python.txt
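The generator from the answer, reformatted and run on the question's own input, produces the expected flat list:

```python
def flat2gen(alist):
    # Yield items from a one-level-nested list: sublists are expanded,
    # but non-list items (including strings) pass through untouched.
    for item in alist:
        if isinstance(item, list):
            for subitem in item:
                yield subitem
        else:
            yield item

print(list(flat2gen([1, [2, 3, 4], 5, [6]])))  # [1, 2, 3, 4, 5, 6]
```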
Q: Google App Engine: "Error: Server Error" but nothing in the logs I deployed an app to Google App Engine. When I navigate to it, I get this: Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it. All the pages do this. appcfg.py upload_data doesn't work either. I'm not sure why. Am I doing something wrong here? As an aside, it feels like sometimes I spend more time wrestling with GAE than I do actually writing code. (Although it's possible that my frustration is entirely my own fault.) A: If you build your application object (assuming you're using the webapp microframework that comes with App Engine, since you haven't mentioned that crucial detail;-) in the following way...: application = webapp.WSGIApplication(url_to_handlers, debug=True) the debug=True means you should be seeing a traceback in the browser for your exceptions, making debugging far easier. (The url_to_handlers is a list of pairs (url, handler), and of course I'm assuming you've imported webapp appropriately, etc etc). Just about every framework has debugging facilities that are at least equivalent (or better, e.g. showing a page in which by suitable clicking you can expand nested frames and examine each frame's local variables) so this should not be hard to do whatever other framework you may be using (and it's advisable until you feel your app is really solid -- after that, once you open it up beyond a small alpha-test layer of friends, you probably don't want to bother your users with full tracebacks... though I've seen many web pages do that, in all kinds of different languages and frameworks, it's still not the best user experience;-). 
Most any exception that occurs only when deploying to production but isn't reproduced in the local SDK in strict mode means a weakness in the SDK, btw, and it's a good idea (once you know exactly what happened) to post a bug in the SDK's tracker. (Some things that may reasonably considered exceptions to this general rule include timeout problems, since the exact performance and workload of the deployment servers at any given time are of course essentially impossible to reproduce with any accuracy on your local SDK, so the SDK basically doesn't even try;-).
Google App Engine: "Error: Server Error" but nothing in the logs
I deployed an app to Google App Engine. When I navigate to it, I get this: Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it. All the pages do this. appcfg.py upload_data doesn't work either. I'm not sure why. Am I doing something wrong here? As an aside, it feels like sometimes I spend more time wrestling with GAE than I do actually writing code. (Although it's possible that my frustration is entirely my own fault.)
[ "If you build your application abject (assuming you're using the webapp microframework that comes with App Engine, since you haven't mentioned that crucial detail;-) in the following way...:\napplication = webapp.WSGIApplication(url_to_handlers, debug=True)\n\nthe debug=True means you should be seeing a traceback i...
[ 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0003172856_google_app_engine_python.txt
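App Engine's environment aside, the debug=True idea can be sketched in plain WSGI terms: a hypothetical middleware (not webapp's actual implementation) that catches unhandled exceptions and returns the traceback instead of an opaque "Server Error" page:

```python
import traceback

def debug_middleware(app):
    """Wrap a WSGI app so unhandled exceptions come back as a plain-text
    traceback page instead of a bare 500 (illustrative sketch only)."""
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            start_response('500 Internal Server Error',
                           [('Content-Type', 'text/plain')])
            return [traceback.format_exc().encode('utf-8')]
    return wrapped

def broken_app(environ, start_response):
    # Simulates a handler that blows up before producing a response.
    raise ValueError('boom')

def fake_start_response(status, headers):
    pass

body = b''.join(debug_middleware(broken_app)({}, fake_start_response))
print(b'ValueError' in body)  # True
```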
Q: pyparsing ambiguity I'm trying to parse some text using PyParser. The problem is that I have names that can contain white spaces. So my input might look like this. First, a list of names: Joe bob Jimmy X grjiaer-rreaijgr Y Then, things they do: Joe A bob B Jimmy X C the problem of course is that a thing they do can be the same as the end of the name: Jimmy X X grjiaer-rreaijgr Y Y How can I create a parser for the action lines? The output of parsing Joe A should be [Joe, A]. The output of parsing Jimmy X C should be [Jimmy X, C], of Jimmy X X - [Jimmy X, X]. That is, [name, action] pairs. If I create my name parser naively, meaning something like OneOrMore(RegEx("\S*")), then it will match the entire line giving me [Jimmy X X] followed by a parsing error for not seeing an action (since it was already consumed by the name parser). NOTE: Sorry for the ambiguous phrasing earlier that made this look like an NLP question. A: You pretty much need more than a simple parser. Parsers use the symbols in a string to define which pieces of the string represent different elements of a grammar. This is why FM asked for some clue to indicate how you know what part is the name and what part is the rest of the sentence. If you could say that names are made up of one or more capitalized words, then the parser would know when the name stops and the rest of the sentence starts. But a name like "jimmy foo decides"? How can the parser know just by looking at the symbols in "decides" whether "decides" is or is not part of the name? Even a human reading your "jimmy foo decides decides to eat" sentence would have some trouble determining where the name starts or stops, and whether this was some sort of typo. If your input is really this unpredictable, then you need to use a tool such as the NLTK (Natural Language Toolkit). 
I've not used it myself, but it approaches this problem from the standpoint of parsing sentences in a language, as opposed to trying to parse structured data or mathematical formats. I would not recommend pyparsing for this kind of language interpretation. A: Have fun: from pyparsing import Regex, oneOf THE_NAMES = \ """Joe bob Jimmy X grjiaer-rreaijgr Y """ THE_THINGS_THEY_DO = \ """Joe A bob B Jimmy X C Jimmy X X grjiaer-rreaijgr Y Y """ ACTION = Regex('.*') NAMES = THE_NAMES.splitlines() print NAMES GRAMMAR = oneOf(NAMES) + ACTION for line in THE_THINGS_THEY_DO.splitlines(): print GRAMMAR.parseString(line) A: Looks like you need nltk, not pyparsing. Looks like you need a tractable problem to work on. How do YOU know how to parse 'jimmy foo decides decides to eat'? What rules do YOU use to deduce (contrary to what most people would assume) that "decides decides" is not a typo? Re "names that can contain whitespaces": Firstly, I'd hope that you'd normalise that into one space. Secondly: this is unexpected?? Thirdly: names can contain apostrophes and hyphens (O'Brien, Montagu-Douglas-Scott) and may have components that aren't capitalised (e.g. Georg von und zu Hohenlohe) and we won't mention Unicode.
pyparsing ambiguity
I'm trying to parse some text using PyParser. The problem is that I have names that can contain white spaces. So my input might look like this. First, a list of names: Joe bob Jimmy X grjiaer-rreaijgr Y Then, things they do: Joe A bob B Jimmy X C the problem of course is that a thing they do can be the same as the end of the name: Jimmy X X grjiaer-rreaijgr Y Y How can I create a parser for the action lines? The output of parsing Joe A should be [Joe, A]. The output of parsing Jimmy X C should be [Jimmy X, C], of Jimmy X X - [Jimmy X, X]. That is, [name, action] pairs. If I create my name parser naively, meaning something like OneOrMore(RegEx("\S*")), then it will match the entire line giving me [Jimmy X X] followed by a parsing error for not seeing an action (since it was already consumed by the name parser). NOTE: Sorry for the ambiguous phrasing earlier that made this look like an NLP question.
[ "You pretty much need more than a simple parser. Parsers use the symbols in a string to define which pieces of the string represent different elements of a grammar. This is why FM asked for some clue to indicate how you know what part is the name and what part is the rest of the sentence. If you could say that n...
[ 2, 1, 0 ]
[]
[]
[ "parsing", "pyparsing", "python" ]
stackoverflow_0002982219_parsing_pyparsing_python.txt
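The oneOf answer works because pyparsing tries the known names as fixed alternatives rather than as open-ended regexes, so "Jimmy X" can win over a shorter prefix. The same longest-known-prefix idea can be sketched without pyparsing at all (the helper name here is made up for illustration):

```python
def split_name_action(line, names):
    # Try known names longest-first so that 'Jimmy X' is preferred
    # over any shorter name when parsing a line like 'Jimmy X X'.
    for name in sorted(names, key=len, reverse=True):
        if line.startswith(name + ' '):
            return name, line[len(name) + 1:]
    raise ValueError('no known name in: %r' % line)

names = ['Joe', 'bob', 'Jimmy X', 'grjiaer-rreaijgr Y']
print(split_name_action('Jimmy X X', names))  # ('Jimmy X', 'X')
print(split_name_action('Joe A', names))      # ('Joe', 'A')
```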
Q: In interactive Python, how to unambiguously import a module In interactive python I'd like to import a module that is in, say, C:\Modules\Module1\module.py What I've been able to do is to create an empty C:\Modules\Module1\__init__.py and then do: >>> import sys >>> sys.path.append(r'C:\Modules\Module1') >>> import module And that works, but I'm having to append to sys.path, and if there was another file called module.py that is in the sys.path as well, how to unambiguously resolve to the one that I really want to import? Is there another way to import that doesn't involve appending to sys.path? A: EDIT: Here's something I'd forgotten about: Is this correct way to import python scripts residing in arbitrary folders? I'll leave the rest of my answer here for reference. There is, but you'd basically wind up writing your own importer which manually creates a new module object and uses execfile to run the module's code in that object's "namespace". If you want to do that, take a look at the mod_python importer for an example. For a simpler solution, you could just add the directory of the file you want to import to the beginning of sys.path, not the end, like so: >>> import sys >>> sys.path.insert(0, r'C:\Modules\Module1') >>> import module You shouldn't need to create the __init__.py file, not unless you're importing from within a package (so, if you were doing import package.module then you'd need __init__.py). A: inserting in sys.path (at the very first place) works better: >>> import sys >>> sys.path.insert(0, 'C:/Modules/Module1') >>> import module >>> del sys.path[0] # if you don't want that directory in the path append to a list puts the item in the last place, so it's quite possible that other previous entries in the path take precedence; putting the directory in the first place is therefore a sounder approach.
In interactive Python, how to unambiguously import a module
In interactive python I'd like to import a module that is in, say, C:\Modules\Module1\module.py What I've been able to do is to create an empty C:\Modules\Module1\__init__.py and then do: >>> import sys >>> sys.path.append(r'C:\Modules\Module1') >>> import module And that works, but I'm having to append to sys.path, and if there was another file called module.py that is in the sys.path as well, how to unambiguously resolve to the one that I really want to import? Is there another way to import that doesn't involve appending to sys.path?
[ "EDIT: Here's something I'd forgotten about: Is this correct way to import python scripts residing in arbitrary folders? I'll leave the rest of my answer here for reference.\n\nThere is, but you'd basically wind up writing your own importer which manually creates a new module object and uses execfile to run the mod...
[ 4, 1 ]
[]
[]
[ "import", "path", "python" ]
stackoverflow_0003173229_import_path_python.txt
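The insert-at-front behaviour is easy to verify end to end with a throwaway module written to a temporary directory (the module name shadow_demo is made up for illustration):

```python
import os
import sys
import tempfile

# Write a throwaway module into a temp directory, put that directory at
# the FRONT of sys.path, and confirm the import resolves to it.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'shadow_demo.py'), 'w') as f:
    f.write('MARKER = "from tmpdir"\n')

sys.path.insert(0, tmpdir)
try:
    import shadow_demo
    print(shadow_demo.MARKER)                       # from tmpdir
    print(shadow_demo.__file__.startswith(tmpdir))  # True
finally:
    del sys.path[0]  # restore sys.path once the import is done
```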
Q: Python: Passing functions with arguments to a built-in function? like this question I want to pass a function with arguments. But I want to pass it to built-in functions. Example: files = [ 'hey.txt', 'hello.txt', 'goodbye.jpg', 'howdy.gif' ] def filterex(path, ex): pat = r'.+\.(' + ex + ')$' match = re.search(pat, path) return match and match.group(1) == ex I could use that code with a for loop and an if statement but it's shorter and maybe more readable to use filter(func, seq). But if I understand correctly the function you use with filter only takes one argument which is the item from the sequence. So I was wondering if it's possible to pass more arguments?
Python: Passing functions with arguments to a built-in function?
like this question I want to pass a function with arguments. But I want to pass it to built-in functions. Example: files = [ 'hey.txt', 'hello.txt', 'goodbye.jpg', 'howdy.gif' ] def filterex(path, ex): pat = r'.+\.(' + ex + ')$' match = re.search(pat, path) return match and match.group(1) == ex I could use that code with a for loop and an if statement but it's shorter and maybe more readable to use filter(func, seq). But if I understand correctly the function you use with filter only takes one argument which is the item from the sequence. So I was wondering if it's possible to pass more arguments?
[ "def make_filter(ex):\n def do_filter(path):\n pat = r'.+\\.(' + ex + ')$'\n match = re.search(pat, path)\n return match and match.group(1) == ex\n return do_filter\n\nfilter(make_filter('txt'), files)\n\nOr if you don't want to modify filterex:\nfilter(lambda path: filterex(path, 'txt'),...
[ 8, 3 ]
[]
[]
[ "python", "refactoring" ]
stackoverflow_0003173355_python_refactoring.txt
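Besides the closure and the lambda shown in the answers, functools.partial can also fix the extra argument, as long as it is bound by keyword so the remaining positional slot is the item that filter supplies — a sketch:

```python
import re
from functools import partial

files = ['hey.txt', 'hello.txt', 'goodbye.jpg', 'howdy.gif']

def filterex(path, ex):
    pat = r'.+\.(' + ex + r')$'
    match = re.search(pat, path)
    return bool(match) and match.group(1) == ex

# partial fixes the 'ex' keyword argument, leaving a one-argument
# callable suitable for filter().
txt_only = list(filter(partial(filterex, ex='txt'), files))
print(txt_only)  # ['hey.txt', 'hello.txt']
```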
Q: Python event handler method not returning value in conditional I'm completely new to python (and it's been a while since I've coded much). I'm trying to call a method which acts as an event handler in a little "hello world" type game, but it's not working at all. I'm using the pygame 1.9.1 lib with python 2.6.1 on OSX 10.6.3. So this is in a while loop: self.exitCheck() if self.controlUpdate == True: self.playerX += 1 And the two methods in question are: def exitCheck(self): for event in pygame.event.get(): if event.type == KEYDOWN: if event.key == pygame.K_ESCAPE: print "Escape pressed" pygame.quit() exit() def controlUpdate(self): for event in pygame.event.get(): if event.type == KEYDOWN: if event.key == pygame.K_SPACE: print "Spacebar pressed" return True elif event.type == KEYUP: if event.key == pygame.K_SPACE: print "Spacebar not pressed" return False else: return False exitCheck is always called, but controlUpdate never seems to get called while in the conditional. I feel like I've either missed something from the python book I've been going through (the oreilly Learning Python) or I've just taken too long a break from proper coding, but yeah. Would much appreciate the help.
Python event handler method not returning value in conditional
I'm completely new to python (and it's been a while since I've coded much). I'm trying to call a method which acts as an event handler in a little "hello world" type game, but it's not working at all. I'm using the pygame 1.9.1 lib with python 2.6.1 on OSX 10.6.3. So this is in a while loop: self.exitCheck() if self.controlUpdate == True: self.playerX += 1 And the two methods in question are: def exitCheck(self): for event in pygame.event.get(): if event.type == KEYDOWN: if event.key == pygame.K_ESCAPE: print "Escape pressed" pygame.quit() exit() def controlUpdate(self): for event in pygame.event.get(): if event.type == KEYDOWN: if event.key == pygame.K_SPACE: print "Spacebar pressed" return True elif event.type == KEYUP: if event.key == pygame.K_SPACE: print "Spacebar not pressed" return False else: return False exitCheck is always called, but controlUpdate never seems to get called while in the conditional. I feel like I've either missed something from the python book I've been going through (the oreilly Learning Python) or I've just taken too long a break from proper coding, but yeah. Would much appreciate the help.
[ "Aren't you missing some parentheses there? \nif self.controlUpdate() == True:\n", "Edit: The problem is that pygame.event.get() both returns and removes all events from the event queue. This means that every time you call controlUpdate(), the event queue will be empty and nothing inside the for loop will be exe...
[ 3, 1 ]
[]
[]
[ "conditional", "pygame", "python" ]
stackoverflow_0003173385_conditional_pygame_python.txt
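The queue-draining point from the second answer can be reproduced without pygame at all — a toy stand-in for pygame.event.get() shows why a second consumer in the same frame sees nothing:

```python
# A stand-in for pygame's event queue: get_events() both returns all
# pending events AND clears the queue, mirroring the destructive
# behaviour of pygame.event.get().
queue = []

def get_events():
    pending = queue[:]
    del queue[:]
    return pending

queue.extend(['KEYDOWN', 'KEYUP'])

first_pass = get_events()   # an exitCheck()-style consumer drains it
second_pass = get_events()  # a controlUpdate()-style consumer gets nothing

print(first_pass)   # ['KEYDOWN', 'KEYUP']
print(second_pass)  # []
```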
Q: Problem creating a virtualenv using virtualenv with OS X So I'm not sure what my problem is. Trying to configure a virtualenv this is the error I get: 20:59:51 $ virtualenv test -p /usr/local/bin/python Running virtualenv with interpreter /usr/local/bin/python New python executable in test/bin/python Please make sure you remove any previous custom paths from your /Users/nlang/.pydistutils.cfg file. Overwriting test/lib/python2.6/distutils/__init__.py with new content Installing setuptools...............done. Complete output from command /Users/nlang/Code/Python/venvs...ython /Users/nlang/Code/Python/venvs...stall /Library/Python/2.6/site-packa...ar.gz: /Users/nlang/Code/Python/venvs/test/bin/python: can't open file '/Users/nlang/Code/Python/venvs/test/bin/easy_install': [Errno 2] No such file or directory ---------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 1489, in <module> main() File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 526, in main use_distribute=options.use_distribute) File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 618, in create_environment install_pip(py_executable) File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 390, in install_pip filter_stdout=_filter_setup) File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 587, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /Users/nlang/Code/Python/venvs...ython /Users/nlang/Code/Python/venvs...stall /Library/Python/2.6/site-packa...ar.gz failed with error code 2 I've poked around various forums and google and have been unsuccessful in getting a virtualenv created. 
I'm stuck :( Thanks A: It's hard to tell exactly what's going on from the truncated output provided but it appears you are trying to update an existing virtual environment (note the Overwriting message). First, verify that the python at /usr/local/bin/python works correctly on its own. Then, try creating a virtualenv in a new (non-existing) directory (something other than test). If that doesn't work, edit your question to add the output from both.
Problem creating a virtualenv using virtualenv with OS X
So I'm not sure what my problem is. Trying to configure a virtualenv this is the error I get: 20:59:51 $ virtualenv test -p /usr/local/bin/python Running virtualenv with interpreter /usr/local/bin/python New python executable in test/bin/python Please make sure you remove any previous custom paths from your /Users/nlang/.pydistutils.cfg file. Overwriting test/lib/python2.6/distutils/__init__.py with new content Installing setuptools...............done. Complete output from command /Users/nlang/Code/Python/venvs...ython /Users/nlang/Code/Python/venvs...stall /Library/Python/2.6/site-packa...ar.gz: /Users/nlang/Code/Python/venvs/test/bin/python: can't open file '/Users/nlang/Code/Python/venvs/test/bin/easy_install': [Errno 2] No such file or directory ---------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 1489, in <module> main() File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 526, in main use_distribute=options.use_distribute) File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 618, in create_environment install_pip(py_executable) File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 390, in install_pip filter_stdout=_filter_setup) File "/Library/Python/2.6/site-packages/virtualenv-1.4.9-py2.6.egg/virtualenv.py", line 587, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /Users/nlang/Code/Python/venvs...ython /Users/nlang/Code/Python/venvs...stall /Library/Python/2.6/site-packa...ar.gz failed with error code 2 I've poked around various forums and google and have been unsuccessful in getting a virtualenv created. I'm stuck :( Thanks
[ "It's hard to tell exactly what's going on from the truncated output provided but it appears you are trying to update an existing virtual environment (note the Overwriting message). First, verify that the python at /usr/local/bin/python works correctly on its own. Then, try creating a virtualenv in a new (non-exi...
[ 0 ]
[]
[]
[ "python", "virtualenv" ]
stackoverflow_0003173482_python_virtualenv.txt
Q: Word Frequency in text using Python but disregard stop words This gives me a frequency of words in a text: fullWords = re.findall(r'\w+', allText) d = defaultdict(int) for word in fullWords : d[word] += 1 finalFreq = sorted(d.iteritems(), key = operator.itemgetter(1), reverse=True) self.response.out.write(finalFreq) This also gives me useless words like "the" "an" "a" My question is, is there a stop words library available in python which can remove all these common words? I want to run this on google app engine A: You can download lists of stopwords as files in various formats, e.g. from here -- all Python needs to do is to read the file (and these are in csv format, easily read with the csv module), make a set, and use membership in that set (probably with some normalization, e.g., lowercasing) to exclude words from the count. A: There's an easy way to handle this by slightly modifying the code you have (edited to reflect John's comment): stopWords = set(['a', 'an', 'the', ...]) fullWords = re.findall(r'\w+', allText) d = defaultdict(int) for word in fullWords: if word not in stopWords: d[word] += 1 finalFreq = sorted(d.iteritems(), key=lambda t: t[1], reverse=True) self.response.out.write(finalFreq) This approach constructs the sorted list in two steps: first it filters out any words in your desired list of "stop words" (which has been converted to a set for efficiency), then it sorts the remaining entries. A: I know that NLTK has a package with a corpus and the stopwords for many languages, including English, see here for more information. NLTK has also a word frequency counter, it's a nice module for natural language processing that you should consider to use. A: stopwords = set(['an', 'a', 'the']) # etc... finalFreq = sorted((k,v) for k,v in d.iteritems() if k not in stopwords, key = operator.itemgetter(1), reverse=True) This will filter out any keys which are in the stopwords set.
Word Frequency in text using Python but disregard stop words
This gives me a frequency of words in a text: fullWords = re.findall(r'\w+', allText) d = defaultdict(int) for word in fullWords : d[word] += 1 finalFreq = sorted(d.iteritems(), key = operator.itemgetter(1), reverse=True) self.response.out.write(finalFreq) This also gives me useless words like "the" "an" "a" My question is, is there a stop words library available in python which can remove all these common words? I want to run this on google app engine
[ "You can download lists of stopwords as files in various formats, e.g. from here -- all Python needs to do is to read the file (and these are in csv format, easily read with the csv module), make a set, and use membership in that set (probably with some normalization, e.g., lowercasing) to exclude words from the co...
[ 5, 3, 2, 0 ]
[]
[]
[ "frequency_analysis", "google_app_engine", "python", "word_frequency" ]
stackoverflow_0003173592_frequency_analysis_google_app_engine_python_word_frequency.txt
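Putting the pieces from the answers together — a small illustrative stop-word set (not a downloaded list), set membership for the test, and lowercasing so "The" and "the" collapse into one count:

```python
import re
from collections import defaultdict

stop_words = set(['the', 'an', 'a', 'of', 'in'])  # tiny illustrative list

def word_freq(text, stop_words):
    # Lower-case before both the stop-word test and the count so that
    # differently-cased forms of the same word share one entry.
    counts = defaultdict(int)
    for word in re.findall(r'\w+', text.lower()):
        if word not in stop_words:
            counts[word] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

freq = word_freq('the cat sat on the mat with a cat', stop_words)
print(freq[0])  # ('cat', 2)
```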
Q: difficulty with Python def myfunc(x): y = x y.append('How do I stop Python from modifying x here?') return y x = [] z = myfunc(x) print(x) A: You do: y = x[:] to make a copy of list x. A: You need to copy X before you modify it, def myfunc(x): y = list(x) y.append('How do I stop Python from modifying x here?') return y x = [] z = myfunc(x) print(x)
difficulty with Python
def myfunc(x): y = x y.append('How do I stop Python from modifying x here?') return y x = [] z = myfunc(x) print(x)
[ "You do:\ny = x[:]\n\nto make a copy of list x.\n", "You need to copy X before you modify it, \ndef myfunc(x):\n y = list(x)\n y.append('How do I stop Python from modifying x here?')\n return y\n\nx = []\nz = myfunc(x)\nprint(x)\n\n" ]
[ 11, 1 ]
[]
[]
[ "python" ]
stackoverflow_0003173660_python.txt
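The behaviour in the question comes from aliasing: y = x binds a second name to the same list object, while y = x[:] (or list(x)) copies it first — side by side:

```python
def myfunc_aliasing(x):
    y = x            # y is the SAME list object as x
    y.append('modified')
    return y

def myfunc_copy(x):
    y = x[:]         # y is a new list with the same contents
    y.append('modified')
    return y

a = []
myfunc_aliasing(a)
print(a)  # ['modified']  -- the caller's list changed

b = []
myfunc_copy(b)
print(b)  # []            -- the caller's list is untouched
```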
Q: Python: finding files with matching extensions or extensions with matching names in a list Suppose I have a list of filenames: [exia.gundam, dynames.gundam, kyrios.gundam, virtue.gundam], or [exia.frame, exia.head, exia.swords, exia.legs, exia.arms, exia.pilot, exia.gn_drive, lockon_stratos.data, tieria_erde.data, ribbons_almark.data, otherstuff.dada]. In one iteration, I'd like to have all the *.gundam or *.data files, whereas on the other I'd like to group the exia.* files. What's the easiest way of doing this, besides iterating through the list and putting each element in a dictionary? Here's what I had in mind: def matching_names(files): ''' extracts files with repeated names from a list Keyword arguments: files - list of filenames Returns: Dictionary ''' nameDict = {} for file in files: filename = file.partition('.') if filename[0] not in nameDict: nameDict[filename[0]] = [] nameDict[filename[0]].append(filename[2]) matchingDict = {} for key in nameDict.keys(): if len(nameDict[key]) > 1: matchingDict[key] = nameDict[key] return matchingDict Well, assuming I have to use that, is there a simple way to invert it and have the file extension as key instead of the name? A: In my first version, it looks like I misinterpreted your question. So if I've got this correct, you're trying to process a list of files so that you can easily access all the filenames with a given extension, or all the filenames with a given base ("base" being the part before the period)? If that's the case, I would recommend this way: from itertools import groupby def group_by_name(filenames): '''Puts the filenames in the given iterable into a dictionary where the key is the first component of the filename and the value is a list of the filenames with that component.''' keyfunc = lambda f: f.split('.', 1)[0] return dict( (k, list(g)) for k,g in groupby( sorted(filenames, key=keyfunc), key=keyfunc ) ) For instance, given the list >>> test_data = [ ... 
'exia.frame', 'exia.head', 'exia.swords', 'exia.legs', ... 'exia.arms', 'exia.pilot', 'exia.gn_drive', 'lockon_stratos.data', ... 'tieria_erde.data', 'ribbons_almark.data', 'otherstuff.dada' ... ] that function would produce >>> group_by_name(test_data) {'exia': ['exia.arms', 'exia.frame', 'exia.gn_drive', 'exia.head', 'exia.legs', 'exia.pilot', 'exia.swords'], 'lockon_stratos': ['lockon_stratos.data'], 'otherstuff': ['otherstuff.dada'], 'ribbons_almark': ['ribbons_almark.data'], 'tieria_erde': ['tieria_erde.data']} If you wanted to index the filenames by extension instead, a slight modification will do that for you: def group_by_extension(filenames): '''Puts the filenames in the given iterable into a dictionary where the key is the last component of the filename and the value is a list of the filenames with that extension.''' keyfunc = lambda f: f.split('.', 1)[1] return dict( (k, list(g)) for k,g in groupby( sorted(filenames, key=keyfunc), key=keyfunc ) ) The only difference is in the keyfunc = ... line, where I changed the key from 0 to 1. Example: >>> group_by_extension(test_data) {'arms': ['exia.arms'], 'dada': ['otherstuff.dada'], 'data': ['lockon_stratos.data', 'ribbons_almark.data', 'tieria_erde.data'], 'frame': ['exia.frame'], 'gn_drive': ['exia.gn_drive'], 'head': ['exia.head'], 'legs': ['exia.legs'], 'pilot': ['exia.pilot'], 'swords': ['exia.swords']} If you want to get both those groupings at the same time, though, I think it'd be better to avoid a list comprehension, because that can only process them one way or another, it can't construct two different dictionaries at once. from collections import defaultdict def group_by_both(filenames): '''Puts the filenames in the given iterable into two dictionaries, where in the first, the key is the first component of the filename, and in the second, the key is the last component of the filename. 
The values in each dictionary are lists of the filenames with that base or extension.''' by_name = defaultdict(list) by_ext = defaultdict(list) for f in filenames: name, ext = f.split('.', 1) by_name[name] += [f] by_ext[ext] += [f] return by_name, by_ext A: I'm not sure if I entirely get what you're looking to do, but if I understand it correctly something like this might work: from collections import defaultdict files_by_extension = defaultdict(list) for f in files: files_by_extension[ f.split('.')[1] ].append(f) This is creating a hash keyed by file extension and filling it by iterating through the list in a single pass. A: Suppose for example that you want as the result a list of lists of filenames, grouped by either extension or rootname: import os.path import itertools as it def files_grouped_by(filenames, use_extension=True): def ky(fn): return os.path.splitext(fn)[use_extension] return [list(g) for _, g in it.groupby(sorted(filenames, key=ky), ky)] Now files_grouped_by(filenames, False) will return the list of lists grouping by rootname, while if the second argument is True or absent the grouping will be by extension. If you want instead a dict, the keys being either rootnames or extensions, and the values the corresponding lists of filenames, the approach is quite similar: import os.path import itertools as it def dict_files_grouped_by(filenames, use_extension=True): def ky(fn): return os.path.splitext(fn)[use_extension] return dict((k, list(g)) for k, g in it.groupby(sorted(filenames, key=ky), ky))
Python: finding files with matching extensions or extensions with matching names in a list
Suppose I have a list of filenames: [exia.gundam, dynames.gundam, kyrios.gundam, virtue.gundam], or [exia.frame, exia.head, exia.swords, exia.legs, exia.arms, exia.pilot, exia.gn_drive, lockon_stratos.data, tieria_erde.data, ribbons_almark.data, otherstuff.dada]. In one iteration, I'd like to have all the *.gundam or *.data files, whereas on the other I'd like to group the exia.* files. What's the easiest way of doing this, besides iterating through the list and putting each element in a dictionary? Here's what I had in mind: def matching_names(files): ''' extracts files with repeated names from a list Keyword arguments: files - list of filenames Returns: Dictionary ''' nameDict = {} for file in files: filename = file.partition('.') if filename[0] not in nameDict: nameDict[filename[0]] = [] nameDict[filename[0]].append(filename[2]) matchingDict = {} for key in nameDict.keys(): if len(nameDict[key]) > 1: matchingDict[key] = nameDict[key] return matchingDict Well, assuming I have to use that, is there a simple way to invert it and have the file extension as key instead of the name?
[ "In my first version, it looks like I misinterpreted your question. So if I've got this correct, you're trying to process a list of files so that you can easily access all the filenames with a given extension, or all the filenames with a given base (\"base\" being the part before the period)?\nIf that's the case, I...
[ 2, 0, 0 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0003173652_python_regex_string.txt
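For completeness, here is a Python 3 rendition of the groupby idea from the last answer, returning a dict directly (the helper name follows the answer's; the sample filenames come from the question):

```python
import itertools
import os.path

def files_grouped_by(filenames, use_extension=True):
    # splitext('exia.frame') -> ('exia', '.frame'); indexing with the
    # boolean picks the root (False -> 0) or the extension (True -> 1).
    def key(fn):
        return os.path.splitext(fn)[use_extension]
    # groupby only merges adjacent equal keys, so sort first.
    grouped = itertools.groupby(sorted(filenames, key=key), key)
    return {k: list(g) for k, g in grouped}

files = ['exia.frame', 'exia.head', 'lockon_stratos.data', 'tieria_erde.data']
by_name = files_grouped_by(files, use_extension=False)
by_ext = files_grouped_by(files, use_extension=True)
```

The same helper answers both of the question's groupings: by_name collects the exia.* files together, by_ext collects the *.data files.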
Q: WxPython custom styled controls I'd need to make a new style for buttons and text entry controls. It should look something like Is there a way to do this? A: For a start, try to look up the wx.Frame style property wx.FRAME_SHAPED here: http://docs.wxwidgets.org/stable/wx_wxframe.html#wxframe I think it only applies to wx.Frame but maybe you can bind an event to mouse clicks inside the custom-shaped frame.
WxPython custom styled controls
I'd need to make a new style for buttons and text entry controls. It should look something like Is there a way to do this?
[ "For a start, try to look up the wx.Frame style property wx.FRAME_SHAPED here: http://docs.wxwidgets.org/stable/wx_wxframe.html#wxframe\nI think it only applies to wx.Frame but maybe you can bind an event to mouse clicks inside the custom-shaped frame.\n" ]
[ 1 ]
[]
[]
[ "controls", "python", "wxpython" ]
stackoverflow_0003171044_controls_python_wxpython.txt
Q: VIM: Preview height I am new to vim so I was trying to edit an existing script for the vimrc file. The script will take the content of the current buffer and copy it into a new window and then run Python. The script works but the preview window is always 50% of the current window. This is the script: " Preview window for python fu! DoRunPyBuffer2() pclose! " force preview window closed setlocal ft=python " copy the buffer into a new window, then run that buffer through python sil %y a | below new | sil put a | sil %!python - " indicate the output window as the current previewwindow setlocal previewwindow ro nomodifiable nomodified " back into the original window winc p endfu command! RunPyBuffer call DoRunPyBuffer2() map <f5> :RunPyBuffer<CR> I tried set lines, set previewheight, set pvh, winc 10 -, ... but nothing seems to work. So does anybody know how I can change the height of the preview window? A: You could try changing the window height before setting it to 'previewwindow': " copy the buffer into a new window, then run that buffer through python sil %y a | below new | sil put a | sil %!python - " indicate the output window as the current previewwindow setlocal winheight=20 setlocal previewwindow ro nomodifiable nomodified Update: I think the problem is that the window height is already set when you change the settings. Setting 'winheight' does not change the window's height, it only sets the minimum height allowed for the window. A better solution is to specify the window height when you create it with :new: :below 10 new "create a window of height 10
VIM: Preview height
I am new to vim so I was trying to edit an existing script for the vimrc file. The script will take the content of the current buffer and copy it into a new window and then run Python. The script works but the preview window is always 50% of the current window. This is the script: " Preview window for python fu! DoRunPyBuffer2() pclose! " force preview window closed setlocal ft=python " copy the buffer into a new window, then run that buffer through python sil %y a | below new | sil put a | sil %!python - " indicate the output window as the current previewwindow setlocal previewwindow ro nomodifiable nomodified " back into the original window winc p endfu command! RunPyBuffer call DoRunPyBuffer2() map <f5> :RunPyBuffer<CR> I tried set lines, set previewheight, set pvh, winc 10 -, ... but nothing seems to work. So does anybody know how I can change the height of the preview window?
[ "You could try changing the window height before setting it to 'previewwindow':\n\" copy the buffer into a new window, then run that buffer through python\nsil %y a | below new | sil put a | sil %!python -\n\" indicate the output window as the current previewwindow\nsetlocal winheight 20\nsetlocal previewwindow ro ...
[ 4 ]
[]
[]
[ "python", "vim" ]
stackoverflow_0003173950_python_vim.txt
Q: Py-appscript: How to configure mail created by reply() I'm trying to reply to a mail in Mail.app with py-appscript. I tried the code below, from appscript import * mailapp = app('Mail') # get mail to be replied msg = mailapp.accounts.first.mailboxes.first.messages.first # create reply mail reply_msg = mailapp.reply(msg) # set mail (got error) reply_msg.visible.set(True) reply_msg.subject.set('replied message') reply_msg.content.set('some content') but got the following error; it failed to set the subject (setting the visible property succeeded): CommandError: Command failed: OSERROR: -10000 MESSAGE: Apple event handler failed. COMMAND: app(u'/Applications/Mail.app').outgoing_messages.ID(465702416).subject.set('replied message') It works when I use "make" instead of "reply" to create a new message. # create new mail msg = mailapp.make(new=k.outgoing_message) # set mail (works fine) msg.visible.set(True) msg.subject.set('new mail') msg.content.set('some content') Can you please tell me what this error is and how to fix it? A: Works fine on 10.6 but there's a bug in Mail on 10.5 (and probably earlier) that causes outgoing messages created by the reply command not to work correctly. If you have to support 10.5, I think your only option is to build a new outgoing message from scratch, copying the relevant information from the message you're replying to yourself.
Py-appscript: How to configure mail created by reply()
I'm trying to reply to a mail in Mail.app with py-appscript. I tried the code below, from appscript import * mailapp = app('Mail') # get mail to be replied msg = mailapp.accounts.first.mailboxes.first.messages.first # create reply mail reply_msg = mailapp.reply(msg) # set mail (got error) reply_msg.visible.set(True) reply_msg.subject.set('replied message') reply_msg.content.set('some content') but got the following error; it failed to set the subject (setting the visible property succeeded): CommandError: Command failed: OSERROR: -10000 MESSAGE: Apple event handler failed. COMMAND: app(u'/Applications/Mail.app').outgoing_messages.ID(465702416).subject.set('replied message') It works when I use "make" instead of "reply" to create a new message. # create new mail msg = mailapp.make(new=k.outgoing_message) # set mail (works fine) msg.visible.set(True) msg.subject.set('new mail') msg.content.set('some content') Can you please tell me what this error is and how to fix it?
[ "Works fine on 10.6 but there's a bug in Mail on 10.5 (and probably earlier) that causes outgoing messages created by the reply command not to work correctly. \nIf you have to support 10.5, I think your only option is to build a new outgoing message from scratch, copying the relevant information from the message yo...
[ 0 ]
[]
[]
[ "applescript", "python" ]
stackoverflow_0003173361_applescript_python.txt
Q: How to spider a password protected site in python? Currently I have a spider written in Java that logs into a supplier website and spiders the website. (using htmlunit) It keeps the session (cookie) and even lets me enable/disable javascript etc. I also use htmlparser (java) to help parse the html and extract the relevant information. Does python have something similar to do this? A: Python has urllib2 to crawl pages, which supports password authentication and cookies. There is also an HTMLParser for extracting html, but some people prefer the more feature-full BeautifulSoup. A: Scrapy API uses urllib2 plus wires up some different parsers and helper routines.
How to spider a password protected site in python?
Currently I have a spider written in Java that logs into a supplier website and spiders the website. (using htmlunit) It keeps the session (cookie) and even lets me enable/disable javascript etc. I also use htmlparser (java) to help parse the html and extract the relevant information. Does python have something similar to do this?
[ "Python has urllib2 to crawl pages, which supports password authentication and cookies.\nThere is also a HTMLParser for extracting html, but some people prefer the more feature-full BeatifulSoup.\n", "Scrapy API uses urllib2 plus adds wires up some different parsers and helper routines.\n" ]
[ 4, 1 ]
[]
[]
[ "python", "web_crawler" ]
stackoverflow_0003173433_python_web_crawler.txt
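For readers on Python 3, urllib2's pieces now live in urllib.request. A minimal sketch of an opener that keeps a cookie session and answers HTTP basic-auth challenges (the URL and credentials are placeholders):

```python
import http.cookiejar
import urllib.request

def make_authenticated_opener(url, username, password):
    # The cookie jar preserves the session across requests.
    jar = http.cookiejar.CookieJar()
    # The password manager answers basic-auth challenges for this URL.
    pw_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    pw_mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar),
        urllib.request.HTTPBasicAuthHandler(pw_mgr),
    )
    return opener, jar

opener, jar = make_authenticated_opener('http://supplier.example.com/',
                                        'user', 'secret')
# opener.open(url) would now send credentials and persist cookies;
# feed the returned HTML to BeautifulSoup (or HTMLParser) for extraction.
```

Sites with form-based logins instead of HTTP auth need a POST to the login form first; the cookie jar then carries the resulting session cookie.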
Q: HTML Tag Cloud creation using Python? Is there a library which can take a python dict with word freq = { 'abc' : 25, .... } and convert this into an HTML-based Tag Cloud? A: There are numerous examples for this on the web, e.g. here: http://sujitpal.blogspot.com/2007/04/building-tag-cloud-with-python.html http://snipplr.com/view/8875/tag-cloud/ http://pypi.python.org/pypi/cs.tags/0.1.1 and more ...
HTML Tag Cloud creation using Python?
Is there a library which can take a python dict with word freq = { 'abc' : 25, .... } and convert this into an HTML-based Tag Cloud?
[ "There a numerous examples for this on the web, e.g. here:\n\nhttp://sujitpal.blogspot.com/2007/04/building-tag-cloud-with-python.html\nhttp://snipplr.com/view/8875/tag-cloud/\nhttp://pypi.python.org/pypi/cs.tags/0.1.1\nand more ...\n\n" ]
[ 3 ]
[]
[]
[ "dictionary", "python", "tag_cloud" ]
stackoverflow_0003173734_dictionary_python_tag_cloud.txt
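If a library feels like overkill, rolling one by hand takes only a few lines. A sketch that linearly maps frequencies onto a font-size range (the freq dict mirrors the question's):

```python
def tag_cloud_html(freq, min_px=10, max_px=40):
    # Scale each word's font size linearly between min_px and max_px.
    lo, hi = min(freq.values()), max(freq.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts equal
    parts = []
    for word in sorted(freq):
        size = min_px + (freq[word] - lo) * (max_px - min_px) // span
        parts.append('<span style="font-size:%dpx">%s</span>' % (size, word))
    return ' '.join(parts)

cloud = tag_cloud_html({'abc': 25, 'xyz': 5, 'foo': 15})
```

Real input should be HTML-escaped (html.escape) before interpolation; a logarithmic scale often looks better than a linear one when counts vary by orders of magnitude.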
Q: How do I stop a Python function from modifying its inputs? I've asked almost this just before now, but the fix doesn't work for x = [[]], which I'm guessing is because it is a nested list, which is what I will be working with. def myfunc(w): y = w[:] y[0].append('What do I need to do to get this to work here?') y[0].append('When I search for the manual, I get pointed to python.org, but I can\'t find the answer there.') return y x = [[]] z = myfunc(x) print(x) A: Here's how you could fix your problem: def myfunc(w): y = [el[:] for el in w] y[0].append('What do I need to do to get this to work here?') y[0].append('When I search for the manual, I get pointed to python.org, but I can\'t find the answer there.') return y x = [[]] z = myfunc(x) print(x) The thing about [:] is that it's a shallow copy. You could also import deepcopy from the copy module to achieve the correct result. A: Use the copy module and to create deep copies of the inputs with the deepcopy function. Modify the copies and not the original inputs. import copy def myfunc(w): y = copy.deepcopy(w) y[0].append('What do I need to do to get this to work here?') y[0].append('When I search for the manual, I get pointed to python.org, but I can\'t find the answer there.') return y x = [[]] z = myfunc(x) print(x) Before you use this method, read up about the problems with deepcopy (check the links above) and make sure it's safe.
How do I stop a Python function from modifying its inputs?
I've asked almost this same question before, but the fix doesn't work for x = [[]], which I'm guessing is because it is a nested list, which is what I will be working with. def myfunc(w): y = w[:] y[0].append('What do I need to do to get this to work here?') y[0].append('When I search for the manual, I get pointed to python.org, but I can\'t find the answer there.') return y x = [[]] z = myfunc(x) print(x)
[ "Here's how you could fix your problem:\ndef myfunc(w):\n y = [el[:] for el in w]\n y[0].append('What do I need to do to get this to work here?')\n y[0].append('When I search for the manual, I get pointed to python.org, but I can\\'t find the answer there.')\n return y\n\nx = [[]]\nz = myfunc(x)\nprint(...
[ 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0003173875_python.txt
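To see why the question's w[:] fails for nested lists: the slice makes a new outer list, but its single element is still the very same inner list object. A side-by-side sketch of shallow versus deep copying:

```python
import copy

x = [[]]
shallow = x[:]            # new outer list, but shallow[0] is x[0]
deep = copy.deepcopy(x)   # new outer list AND a new inner list

shallow[0].append('leaks into x')   # mutates the shared inner list
deep[0].append('stays private')     # x is untouched
```

The identity checks below are the whole story: the shallow copy shares the inner list with x, the deep copy does not.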
Q: decomposing a string into known patterns Here's the python list of strings: patterns = [ "KBKKB", "BBBK", "BKB", "KBBB", "KBB", "BKBB", "BBKB", "KKBKB", "BKBK", "KBKB", "KBKBK", "BBK", "BB", "BKKB", "BBB", "KBBK", "BKKBK", "KB", "KBKBK", "KKBKKB", "KBK", "BBKBK", "BBBB", "BK", "KKBKBK", "KBBKB", "BBKKB", "KKKKBB", "KKB" ] I have an input string that consist of K and B only of arbitrary length. I want to know all the possible complete decompositions of the input string. Just an example a string of 8 B: BBBBBBBB Here are possible decompositions BB BB BB BB BB BBBB BBBB BBBB BB BB BB BB BBBB BBB BBB BB BB BBB BBB Can anyone guide me how to go about it? I am not much concerned about efficiency right now. A: Here's one way using recursion: def getPossibleDecompositions(s): if s == '': yield [] else: for pattern in patterns: if s.startswith(pattern): for x in getPossibleDecompositions(s[len(pattern):]): yield [pattern] + x for x in getPossibleDecompositions('BBBBBBBB'): print x
decomposing a string into known patterns
Here's the python list of strings: patterns = [ "KBKKB", "BBBK", "BKB", "KBBB", "KBB", "BKBB", "BBKB", "KKBKB", "BKBK", "KBKB", "KBKBK", "BBK", "BB", "BKKB", "BBB", "KBBK", "BKKBK", "KB", "KBKBK", "KKBKKB", "KBK", "BBKBK", "BBBB", "BK", "KKBKBK", "KBBKB", "BBKKB", "KKKKBB", "KKB" ] I have an input string that consist of K and B only of arbitrary length. I want to know all the possible complete decompositions of the input string. Just an example a string of 8 B: BBBBBBBB Here are possible decompositions BB BB BB BB BB BBBB BBBB BBBB BB BB BB BB BBBB BBB BBB BB BB BBB BBB Can anyone guide me how to go about it? I am not much concerned about efficiency right now.
[ "Here's one way using recursion:\ndef getPossibleDecompositions(s):\n if s == '':\n yield []\n else:\n for pattern in patterns:\n if s.startswith(pattern):\n for x in getPossibleDecompositions(s[len(pattern):]):\n yield [pattern] + x\n\nfor x in getPo...
[ 5 ]
[]
[]
[ "pattern_matching", "python", "string" ]
stackoverflow_0003174586_pattern_matching_python_string.txt
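A note on cost: the recursive generator revisits the same suffixes many times. Since the number of decompositions of a suffix depends only on that suffix, memoizing makes pure counting cheap. A sketch using a small subset of the question's pattern list:

```python
import functools

# Subset of the question's patterns, enough to exercise the idea.
PATTERNS = ("BB", "BBB", "BBBB", "BK", "KB", "BKB")

@functools.lru_cache(maxsize=None)
def count_decompositions(s):
    if not s:
        return 1  # the empty decomposition of the empty string
    # Try every pattern that is a prefix of s and recurse on the rest.
    return sum(count_decompositions(s[len(p):])
               for p in PATTERNS if s.startswith(p))

n = count_decompositions('B' * 8)
```

With only BB, BBB and BBBB applicable to an all-B string, this counts the compositions of 8 into parts of size 2, 3 and 4. Note the question's pattern list contains a duplicate ("KBKBK" appears twice); deduplicate it first or every decomposition using that pattern is counted twice.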
Q: Does ruby have something similar to buildout or virtualenv? I was wondering: In python, canon says to use buildout or virtualenv, to avoid installing into the system packages. It's second nature now, I no longer see anything ludicrously bizarre to the practice. It makes a kind of sense. In Ruby, is there something similar? How does ruby deal with this problem? Does ruby have this problem? A: There are several projects trying to handle this issue: rip bundler rvm via gemsets sandbox
Does ruby have something similar to buildout or virtualenv?
I was wondering: In python, canon says to use buildout or virtualenv, to avoid installing into the system packages. It's second nature now, I no longer see anything ludicrously bizarre to the practice. It makes a kind of sense. In Ruby, is there something similar? How does ruby deal with this problem? Does ruby have this problem?
[ "There are several projects trying to handle this issue:\n\nrip\nbundler\nrvm via gemsets\nsandbox\n\n" ]
[ 7 ]
[]
[]
[ "buildout", "python", "ruby", "virtualenv" ]
stackoverflow_0003173792_buildout_python_ruby_virtualenv.txt
Q: How to make a dynamic number of horizontal BoxSizers? I have a function that calculates the number of images that can be displayed on the screen, if there are more images than the ones that can be put on screen, I resize the images till they all can appear. Then, I want to display them with one vertical box sizer and several horizontal box sizers! The number of horizontal box sizers is dynamic; it can be one or more depending on the number of images. How can I define several box sizers and add them to the vertical box sizer? A: Why not simply make the horizontal sizers in a loop, .Adding them to the same vertical sizer? E.g. def HorzInVert(n): vert = wx.BoxSizer(wx.VERTICAL) horizontals = [] for i in range(n): horz = wx.BoxSizer(wx.HORIZONTAL) vert.Add(horz,1, wx.ALL, 0) horizontals.append(horz) return vert, horizontals You can call this simple function from anywhere, it returns the vertical sizer and the list of n horizontal sizers in it -- then the caller adds stuff suitably to the horizontal sizers, an appropriate SetSizer with the vertical sizer as the argument, and the vertical sizer's .Fit. Of course you can make it as fancy as you want, with all sorts of arguments to control exactly how the Adds are performed. A: wx.GridSizer is the answer!
How to make a dynamic number of horizontal BoxSizers?
I have a function that calculates the number of images that can be displayed on the screen, if there are more images than the ones that can be put on screen, I resize the images till they all can appear. Then, I want to display them with one vertical box sizer and several horizontal box sizers! The number of horizontal box sizers is dynamic; it can be one or more depending on the number of images. How can I define several box sizers and add them to the vertical box sizer?
[ "Why not simply make the horizontal sizers in a loop, .Adding them to the same vertical sizer? E.g.\ndef HorzInVert(n):\n vert = wx.BoxSizer(wx.VERTICAL)\n horizontals = []\n for i in range(n):\n horz = wx.BoxSizer(wx.HORIZONTAL)\n vert.Add(horz,1, wx.ALL, 0)\n horizontals.append(horz)\n return vert, ...
[ 3, 0 ]
[]
[]
[ "python", "sizer", "wxpython" ]
stackoverflow_0003171256_python_sizer_wxpython.txt
Q: How to create a Cocoa library and use it in python I've been making a game and the Python library I used is terrible (Pyglet). I want to try using Cocoa for the OSX version. I'll be able to figure out using the objects from classes like NSWindow and NSOpenGLView and then put these objects in my own class for the game loop. I have no idea how I can use PyObjC to load a dynamic Objective-C library I can make and then use the class I will make in python to set up the game which I suppose can be looped by NSTimer. However, the loop method will also need to call a python method from one of many python classes. My game consists of many python classes which are used for different sections of the game (Mapmaker,GameSession,AnacondaGame etc.). The game loop will need to call a loop method in any of these classes depending on the current section and pass event information. PyObjC is "bi-directional" apparently so how is that done? Alternatively I could create two methods to be called by python and I add the python code in-between, where the loop is controlled by python. The "documentation" on the PyObjC website only seems to explain how to use Cocoa in python and nothing else. What I can't do is make a fixed GUI with the interface builder because the library will need to create windows based on the python input to an initialisation method of my class. Knowing the syntax of Objective-C isn't a big problem and I can refer to the Cocoa documentation to make the objects I require. Thank you for any help. It will be appreciated very much. I'm sick of using broken libraries like pygame and pyglet, using the platform-specific OS APIs seems to be the best method to ensure quality. A: PyObjC bridges Python to the Objective-C runtime, so if you create NSObject subclasses in Python, they'll be accessible from Objective-C code running in the same process. What this means is that you'll need to encapsulate all of your Python functionality in a subclass of NSObject that you can access over the bridge. The way I'd do this is by having a singleton controller class on the Objective-C side that has a method like -(void)pythonReady:(PythonClass *)pythonObject, and also handles the loading of the Python code (which ensures that the controller class exists when your Python code is loaded). Then, in your Python code, after creating an instance of your PythonClass, you can call pythonReady: on your controller singleton. Then, in pythonReady: on the Objective-C side, you can call whatever methods you need on pythonObject, which will run the code on the Python side. To load the Python code from your controller class, you can do something like this: #import <Python/Python.h> @implementation PythonController (Loading) - (void)loadPython { NSString *pathToScript = @"/some/path/to/script.py"; setenv("PYTHONPATH", [@"/some/path/to/" UTF8String], 1); Py_SetProgramName("/usr/bin/python"); Py_Initialize(); FILE *pyScript = fopen([pathToScript UTF8String], "r"); int result = PyRun_SimpleFile(pyScript, [[pathToScript lastPathComponent] UTF8String]); if (result != 0) { NSLog(@"Loading Python Failed!"); } } @end Basically, we simply use the Python C API to run the script inside the current process. The script itself starts the bridge to the runtime in the current process, where you can then use the Cocoa API to access your controller singleton.
How to create a Cocoa library and use it in python
I've been making a game and the Python library I used is terrible (Pyglet). I want to try using Cocoa for the OSX version. I'll be able to figure out using the objects from classes like NSWindow and NSOpenGLView and then put these objects in my own class for the game loop. I have no idea how I can use PyObjC to load a dynamic Objective-C library I can make and then use the class I will make in python to set up the game which I suppose can be looped by NSTimer. However, the loop method will also need to call a python method from one of many python classes. My game consists of many python classes which are used for different sections of the game (Mapmaker,GameSession,AnacondaGame etc.). The game loop will need to call a loop method in any of these classes depending on the current section and pass event information. PyObjC is "bi-directional" apparently so how is that done? Alternatively I could create two methods to be called by python and I add the python code in-between, where the loop is controlled by python. The "documentation" on the PyObjC website only seems to explain how to use Cocoa in python and nothing else. What I can't do is make a fixed GUI with the interface builder because the library will need to create windows based on the python input to an initialisation method of my class. Knowing the syntax of Objective-C isn't a big problem and I can refer to the Cocoa documentation to make the objects I require. Thank you for any help. It will be appreciated very much. I'm sick of using broken libraries like pygame and pyglet, using the platform-specific OS APIs seems to be the best method to ensure quality.
[ "PyObjC bridges Python to the Objective-C runtime, so if you create NSObject subclasses in Python, they'll be accessible from Objective-C code running in the same process. What this means is that you'll need to encapsulate all of your Python functionality in a subclass of NSObject that you can access over the bridg...
[ 2 ]
[]
[]
[ "cocoa", "objective_c", "pyobjc", "python" ]
stackoverflow_0003171650_cocoa_objective_c_pyobjc_python.txt
Q: Need help making a program remember settings, cPickle How do I make this code remember the last position of the scale, upon reopening? import Tkinter import cPickle root = Tkinter.Tk() root.sclX = Tkinter.Scale(root, from_=0, to=1500, orient='horizontal', resolution=1) root.sclX.pack(ipadx=75) root.resizable(False,False) root.title('Scale') with open('myconfig.pk', 'wb') as f: cPickle.dump(root.config(), f, -1); cPickle.dump(root.sclX.config(), f, -1); root.mainloop() A: You need many changes and fixes to make your code work as intended: import Tkinter import cPickle root = Tkinter.Tk() place = 0 root.place = Tkinter.IntVar() root.sclX = Tkinter.Scale(root, from_=0, to=1500, orient='horizontal', resolution=1, variable=root.place) root.sclX.pack(ipadx=75) try: with open('myconfig.pk', 'rb') as f: place = cPickle.load(f) except IOError: pass else: root.place.set(place) def tracer(*a): global place place = root.place.get() root.place.trace('w', tracer) root.resizable(False, False) root.title('Scale') root.mainloop() with open('myconfig.pk', 'wb') as f: cPickle.dump(place, f, -1); Let's look at the changes from the top. I've introduced a Tkinter variable root.place so that the position of the scale can be tracked at all times (via later function tracer) in global variable place (it would be more elegant to use OOP and avoid global variables, but I'm trying to keep things simple for you;-). Then, the try/except/else statement changes the setting of place iff the .pk file is readable. You were never trying to read back what you had saved. Last but not least, I've moved the save operation to after mainloop exits, and simplified it (you don't need all of the config dictionaries -- which you can't access at that point anyway -- just the place global). You were saving before mainloop started, therefore "saving" the initial values (not those changed by the main loop execution). The tracer function and .trace method calls ensure that global variable place always records the scale's latest position -- so that it can be recovered, and saved, after mainloop has exited (after mainloop exits, all Tkinter objects, both GUI ones and variables, become inaccessible).
Need help making a program remember settings, cPickle
How do I make this code remember the last position of the scale, upon reopening? import Tkinter import cPickle root = Tkinter.Tk() root.sclX = Tkinter.Scale(root, from_=0, to=1500, orient='horizontal', resolution=1) root.sclX.pack(ipadx=75) root.resizable(False,False) root.title('Scale') with open('myconfig.pk', 'wb') as f: cPickle.dump(root.config(), f, -1); cPickle.dump(root.sclX.config(), f, -1); root.mainloop()
[ "You need many changes and fixes to make your code work as intended:\nimport Tkinter\nimport cPickle\n\nroot = Tkinter.Tk()\nplace = 0\nroot.place = Tkinter.IntVar()\nroot.sclX = Tkinter.Scale(root, from_=0, to=1500, orient='horizontal', resolution=1,\n variable=root.place)\nroot.sclX.pack(...
[ 2 ]
[]
[]
[ "pickle", "python", "tkinter" ]
stackoverflow_0003173897_pickle_python_tkinter.txt
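The save/restore core of the answer is independent of Tkinter and worth isolating: pickle the bare value, not the widget's config dict. A sketch (the filename matches the question's; the helper names are hypothetical):

```python
import os
import pickle

CONFIG = 'myconfig.pk'

def load_place(default=0):
    # A missing file on first run just falls back to the default position.
    try:
        with open(CONFIG, 'rb') as f:
            return pickle.load(f)
    except OSError:
        return default

def save_place(place):
    with open(CONFIG, 'wb') as f:
        pickle.dump(place, f, -1)  # -1: highest pickle protocol

save_place(750)          # e.g. the scale's position at exit
restored = load_place()  # what the next run would see
os.remove(CONFIG)        # tidy up after the demo
```

In the GUI program, call save_place after mainloop returns and feed load_place's result into the IntVar before the window appears.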
Q: how to submit the query to http://www.ratsit.se/BC/Search.aspx ? I wrote a script but something seems wrong with the "Click" button import urllib2, cookielib import ClientForm from BeautifulSoup import BeautifulSoup first_name = "Mona" last_name = "Sahlin" url = 'http://www.ratsit.se/BC/Search.aspx' cookiejar = cookielib.LWPCookieJar() cookiejar = urllib2.HTTPCookieProcessor(cookiejar) opener = urllib2.build_opener(cookiejar) urllib2.install_opener(opener) response = urllib2.urlopen(url) forms = ClientForm.ParseResponse(response, backwards_compat=False) #Use to print out forms if website design changes for x in forms: print x ''' forms print result: <aspnetForm POST http://www.ratsit.se/BC/Search.aspx application/x-www-form-urlencoded <HiddenControl(__VIEWSTATE=/wEPDwULLTExMzU2NTM0MzcPZBYCZg9kFgICAxBkZBYGAgoPDxYCHghJbWFnZVVy....E1haW4kZ3J2U2VhcmNoUmVzdWx0D2dkBRdjdGwwMCRtdndVc2VyTG9naW5MZXZlbA8PZGZkle2yQ/dc9eIGMaQPJ/EEJs899xE=) (readonly)> <TextControl(ctl00$cphMain$txtFirstName=)> <TextControl(ctl00$cphMain$txtLastName=)> <TextControl(ctl00$cphMain$txtBirthDate=)> <TextControl(ctl00$cphMain$txtAddress=)> <TextControl(ctl00$cphMain$txtZipCode=)> <TextControl(ctl00$cphMain$txtCity=)> <TextControl(ctl00$cphMain$txtKommun=)> <CheckboxControl(ctl00$cphMain$chkExaktStavning=[on])> <ImageControl(ctl00$cphMain$cmdButton=)> > ''' #Confirm correct form form = forms[0] print form.__dict__ #print form.__dict__.get('controls') controls = form.__dict__.get('controls') print "------------------------------------------------------------" try: controls[1] = first_name controls[2] = last_name page = urllib2.urlopen(form.click('ctl00$cphMain$cmdButton')).read() ''' give error here: The following error occured: "'str' object has no attribute 'name'" ''' # print controls[9] print '----------here-------' soup = BeautifulSoup(''.join(page)) soup = soup.prettify() A: Here's a working version: import urllib2, cookielib import ClientForm from BeautifulSoup import BeautifulSoup first_name
how to submit the query to http://www.ratsit.se/BC/Search.aspx ? I wrote a script but something seems wrong with the "Click" button
import urllib2, cookielib import ClientForm from BeautifulSoup import BeautifulSoup first_name = "Mona" last_name = "Sahlin" url = 'http://www.ratsit.se/BC/Search.aspx' cookiejar = cookielib.LWPCookieJar() cookiejar = urllib2.HTTPCookieProcessor(cookiejar) opener = urllib2.build_opener(cookiejar) urllib2.install_opener(opener) response = urllib2.urlopen(url) forms = ClientForm.ParseResponse(response, backwards_compat=False) #Use to print out forms if website design changes for x in forms: print x ''' forms print result: <aspnetForm POST http://www.ratsit.se/BC/Search.aspx application/x-www-form-urlencoded <HiddenControl(__VIEWSTATE=/wEPDwULLTExMzU2NTM0MzcPZBYCZg9kFgICAxBkZBYGAgoPDxYCHghJbWFnZVVy....E1haW4kZ3J2U2VhcmNoUmVzdWx0D2dkBRdjdGwwMCRtdndVc2VyTG9naW5MZXZlbA8PZGZkle2yQ/dc9eIGMaQPJ/EEJs899xE=) (readonly)> <TextControl(ctl00$cphMain$txtFirstName=)> <TextControl(ctl00$cphMain$txtLastName=)> <TextControl(ctl00$cphMain$txtBirthDate=)> <TextControl(ctl00$cphMain$txtAddress=)> <TextControl(ctl00$cphMain$txtZipCode=)> <TextControl(ctl00$cphMain$txtCity=)> <TextControl(ctl00$cphMain$txtKommun=)> <CheckboxControl(ctl00$cphMain$chkExaktStavning=[on])> <ImageControl(ctl00$cphMain$cmdButton=)> > ''' #Confirm correct form form = forms[0] print form.__dict__ #print form.__dict__.get('controls') controls = form.__dict__.get('controls') print "------------------------------------------------------------" try: controls[1] = first_name controls[2] = last_name page = urllib2.urlopen(form.click('ctl00$cphMain$cmdButton')).read() ''' give error here: The following error occured: "'str' object has no attribute 'name'" ''' # print controls[9] print '----------here-------' soup = BeautifulSoup(''.join(page)) soup = soup.prettify()
[ "Here's a working version:\nimport urllib2, cookielib\nimport ClientForm\nfrom BeautifulSoup import BeautifulSoup\n\nfirst_name = \"Mona\"\nlast_name = \"Sahlin\"\nurl = 'http://www.ratsit.se/BC/Search.aspx'\ncookiejar = cookielib.LWPCookieJar()\ncookiejar = urllib2.HTTPCookieProcessor(cookiejar)\n\nopener = urllib...
[ 1 ]
[]
[]
[ "asp.net", "clientform", "python" ]
stackoverflow_0003174924_asp.net_clientform_python.txt
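The root cause generalizes beyond ClientForm: controls[1] = first_name rebinds a list slot to a plain string, so later code reading .name off each control blows up, while controls[1].value = first_name mutates the control in place. A library-free illustration with a hypothetical stand-in class:

```python
class Control:
    # Hypothetical stand-in for a ClientForm text control.
    def __init__(self, name, value=''):
        self.name = name
        self.value = value

controls = [Control('ctl00$cphMain$txtFirstName'),
            Control('ctl00$cphMain$txtLastName')]

broken = list(controls)
broken[0] = 'Mona'            # buggy: the slot now holds a str, not a Control

controls[0].value = 'Mona'    # correct: the object keeps its .name
controls[1].value = 'Sahlin'
```

Any code that later iterates the buggy list and touches control.name hits exactly the "'str' object has no attribute 'name'" error from the question.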
Q: bezier triangle patch in 3D I would like a Python script to draw a 3D Bézier triangle patch. This is an old problem and there must be some old script available to do this somewhere! Thanks for any help A: OpenGL RedBook, "Chapter 12 Evaluators and NURBS". C examples are here. But I really don't think you'll be able to use it on triangles, only on quads. If you want to go through the trouble of tessellating the triangle, pick the spline formula from Wikipedia, and try to implement it yourself.
bezier triangle patch in 3D
I would like a Python script to draw a bezier triangle patch in 3D. This is an old problem and there must be some old script available to do this somewhere! Thanks for any help
[ "OpenGL RedBook, \"Chapter 12 Evaluators and NURBS\". C examples are here. But I really don't think you'll be able to use it on triangles, only on quads. If you want to go through the trouble of tessellating the triangle, pick the spline formula from Wikipedia and try to implement it yourself.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0003174603_python.txt
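For the asker's benefit, a Bézier triangle needs no OpenGL evaluators at all: a degree-n patch is the Bernstein sum P(u,v,w) = Σ_{i+j+k=n} n!/(i!j!k!) u^i v^j w^k P_ijk over barycentric coordinates u+v+w=1. A minimal pure-Python evaluator (the quadratic control points below are made up for the demo):

```python
from math import factorial

def bezier_triangle_point(control_points, u, v, n):
    """Evaluate a degree-n Bezier triangle at barycentric coords (u, v, 1-u-v)."""
    w = 1.0 - u - v
    point = [0.0, 0.0, 0.0]
    for (i, j, k), p in control_points.items():
        # Bernstein weight: n!/(i! j! k!) * u^i * v^j * w^k
        b = (factorial(n) / (factorial(i) * factorial(j) * factorial(k))
             * u**i * v**j * w**k)
        for axis in range(3):
            point[axis] += b * p[axis]
    return tuple(point)

# Quadratic (n=2) patch: six control points indexed by (i, j, k) with i+j+k = 2.
cps = {
    (2, 0, 0): (0.0, 0.0, 0.0),
    (0, 2, 0): (1.0, 0.0, 0.0),
    (0, 0, 2): (0.0, 1.0, 0.0),
    (1, 1, 0): (0.5, 0.0, 0.5),
    (1, 0, 1): (0.0, 0.5, 0.5),
    (0, 1, 1): (0.5, 0.5, 0.5),
}
print(bezier_triangle_point(cps, 1.0, 0.0, 2))  # corner: (0.0, 0.0, 0.0)
```

Sampling (u, v) over the unit triangle and handing the points to, say, matplotlib's plot_trisurf would then produce the drawing.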
Q: How to design a twisted solution to download a file by reading on certain portion? How do I download a remote file into several chunks using twisted? Let's say the file is 100 bytes: I want to spawn 10 connections which will read 10 bytes each, but in no particular order, and then later on merge them all. I was able to do this using threads in Python but I don't have any idea how to use twisted's reactor + manager + protocol to achieve the same. Any advice as to how I should design this? A: I don't think this really provides the direction the user requires - the question seems to be clear in how to use Twisted to achieve this - the answer implies reasonable knowledge of Twisted.
How to design a twisted solution to download a file by reading on certain portion?
How do I download a remote file into several chunks using twisted? Let's say the file is 100 bytes: I want to spawn 10 connections which will read 10 bytes each, but in no particular order, and then later on merge them all. I was able to do this using threads in Python but I don't have any idea how to use twisted's reactor + manager + protocol to achieve the same. Any advice as to how I should design this?
[ "I don't think this really provides the direction the user requires - the question seems to be clear in how to use Twisted to achieve this - the answer implies reasonable knowledge of Twisted.\n" ]
[ 0 ]
[]
[]
[ "python", "twisted" ]
stackoverflow_0003174374_python_twisted.txt
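Whatever drives the I/O — threads or Twisted's reactor — the chunking arithmetic and the out-of-order merge are framework-neutral. A sketch of just that part (in Twisted one would then issue one HTTP request per range, e.g. with a Range header, and gather the results with a DeferredList; those specifics are assumptions, not from the question):

```python
def chunk_ranges(total_size, n_chunks):
    """Split [0, total_size) into contiguous byte ranges (start, end inclusive)."""
    base, extra = divmod(total_size, n_chunks)
    ranges, start = [], 0
    for i in range(n_chunks):
        size = base + (1 if i < extra else 0)  # spread any remainder evenly
        ranges.append((start, start + size - 1))
        start += size
    return ranges

def merge_chunks(results):
    """results maps chunk index -> bytes; chunks may have arrived in any order."""
    return b"".join(results[i] for i in sorted(results))

print(chunk_ranges(100, 10)[:2])  # [(0, 9), (10, 19)]
```

Each callback stores its chunk under its index; once all ten have fired, merge_chunks rebuilds the file regardless of completion order.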
Q: Delimiting choices in ModelChoiceField I'm new to Python and Django so this question will probably be easy to solve but I can't get it to work. Basically I have a model which contains two foreign keys of User type. I'm building a form in which I want to remove one of the choices of a ModelChoiceField based on another field. I want the user to be unable to select himself in the form. My code. Models.py from django.db import models from django.contrib.auth.models import User class Deuda(models.Model): ESTADOS = (("S", "Sin pagar"), ("P", "Pagada"), ("PC", "Pagada (Confirmada)")) propietario = models.ForeignKey(User, verbose_name="Propietario", related_name="propietario_deuda") adeudado = models.ForeignKey(User, verbose_name="Adeudado", related_name="adeudado_deuda") cantidad = models.FloatField("Cantidad") descripcion = models.TextField("Descripcion") estado = models.CharField(max_length=2, choices=ESTADOS) def __str__(self): return self.descripcion forms.py from django import forms from housemanager.iou.models import Deuda from django.contrib.auth.models import User class DeudaForm(forms.ModelForm): class Meta: model = Deuda exclude = ('propietario',) def __init__(self, propietario): d = Deuda() d.propietario = propietario forms.ModelForm.__init__(self, instance=d) adeudado = forms.ModelChoiceField(queryset=User.objects.exclude(pk=d.propietario.pk)) extract of views.py [...] form = DeudaForm(request.user) So my idea is to pass the user to the form so it can define a ModelChoiceField that does not include it. My code doesn't work because d is not in scope while setting 'adeudado'. I'm sure there are many ways of achieving this goal, so feel free to change anything in my design. 
Final code: from django import forms from housemanager.iou.models import Deuda from django.contrib.auth.models import User class DeudaForm(forms.ModelForm): class Meta: model = Deuda exclude = ('propietario',) def __init__(self, propietario): d = Deuda() d.propietario = propietario forms.ModelForm.__init__(self, instance=d) self.fields['adeudado'].queryset = User.objects.exclude(pk=propietario.pk) A: Try to put it in the form's __init__: class DeudaForm(forms.ModelForm): class Meta: model = Deuda exclude = ('propietario',) def __init__(self, propietario): forms.ModelForm.__init__(self) self.fields['adeudado'].queryset = User.objects.exclude(pk=propietario.pk)
Delimiting choices in ModelChoiceField
I'm new to Python and Django so this question will probably be easy to solve but I can't get it to work. Basically I have a model which contains two foreign keys of User type. I'm building a form in which I want to remove one of the choices of a ModelChoiceField based on another field. I want the user to be unable to select himself in the form. My code. Models.py from django.db import models from django.contrib.auth.models import User class Deuda(models.Model): ESTADOS = (("S", "Sin pagar"), ("P", "Pagada"), ("PC", "Pagada (Confirmada)")) propietario = models.ForeignKey(User, verbose_name="Propietario", related_name="propietario_deuda") adeudado = models.ForeignKey(User, verbose_name="Adeudado", related_name="adeudado_deuda") cantidad = models.FloatField("Cantidad") descripcion = models.TextField("Descripcion") estado = models.CharField(max_length=2, choices=ESTADOS) def __str__(self): return self.descripcion forms.py from django import forms from housemanager.iou.models import Deuda from django.contrib.auth.models import User class DeudaForm(forms.ModelForm): class Meta: model = Deuda exclude = ('propietario',) def __init__(self, propietario): d = Deuda() d.propietario = propietario forms.ModelForm.__init__(self, instance=d) adeudado = forms.ModelChoiceField(queryset=User.objects.exclude(pk=d.propietario.pk)) extract of views.py [...] form = DeudaForm(request.user) So my idea is to pass the user to the form so it can define a ModelChoiceField that does not include it. My code doesn't work because d is not in scope while setting 'adeudado'. I'm sure there are many ways of achieving this goal, so feel free to change anything in my design. 
Final code: from django import forms from housemanager.iou.models import Deuda from django.contrib.auth.models import User class DeudaForm(forms.ModelForm): class Meta: model = Deuda exclude = ('propietario',) def __init__(self, propietario): d = Deuda() d.propietario = propietario forms.ModelForm.__init__(self, instance=d) self.fields['adeudado'].queryset = User.objects.exclude(pk=propietario.pk)
[ "Try to put it in the form's __init__:\nclass DeudaForm(forms.ModelForm):\n\n class Meta:\n model = Deuda\n exclude = ('propietario',)\n\n def __init__(self, propietario):\n forms.ModelForm.__init__(self)\n self.fields['adeudado'].queryset = User.objects.exclude(pk=d.propietario.pk...
[ 1 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0003175443_django_django_forms_python.txt
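The underlying scope rule is worth seeing outside Django: a class body runs once, when the class statement executes, so a name bound inside __init__ (like d above) simply does not exist there yet. A framework-free sketch (the fields dict below merely mimics the shape of a form's self.fields; it is not Django's API):

```python
# A class body executes at class-definition time, before any instance exists,
# so __init__ locals are invisible there:
try:
    class Broken:
        def __init__(self):
            d = "only exists inside __init__"
        field = d  # NameError: name 'd' is not defined
except NameError as exc:
    print("class body failed:", exc)

# Working pattern: define the field normally, then adjust it per instance.
class Works:
    base_fields = {"adeudado": {"exclude_pk": None}}

    def __init__(self, owner_pk):
        self.fields = {k: dict(v) for k, v in Works.base_fields.items()}
        self.fields["adeudado"]["exclude_pk"] = owner_pk

print(Works(owner_pk=42).fields["adeudado"]["exclude_pk"])  # 42
```

This is exactly why the accepted fix mutates self.fields['adeudado'].queryset inside __init__, where propietario is available, instead of at class level.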
Q: Using optparse to read in a list from command line options I am calling a python script with the following command line: myscript.py --myopt="[(5.,5.),(-5.,-5.)]" The question is -- how to convert myopt to a list variable. My solution was to use optparse, treating myopt as a string, and using (options, args) = parser.parse_args() myopt = eval(options.myopt) Now, because I used eval() I feel a bit like Dobby the house elf, having knowingly transgressed the commandments of great (coding) wizards, and wanting to self-flagellate myself in punishment. But are there better options for parsing lists or tuples or lists of tuples from the command line?.. I've seen solutions that use split(), but this won't work here since this isn't a simple list. Keep in mind, also, that this is being done in the context of mostly one-off scientific computing with no security concerns -- so perhaps eval() isn't as evil here?.. A: ast.literal_eval(node_or_string): Safely evaluate an expression node or a string containing a Python expression. The string or node provided may only consist of the following Python literal structures: strings, numbers, tuples, lists, dicts, booleans, and None. This can be used for safely evaluating strings containing Python expressions from untrusted sources without the need to parse the values oneself. So you can do import ast (options, args) = parser.parse_args() myopt = ast.literal_eval(options.myopt) A: Try JSON instead. The syntax is not exactly Python, but close enough. >>> import json >>> json.loads("[[5.0,5.0],[-5.0,-5.0]]") [[5.0, 5.0], [-5.0, -5.0]] >>> [tuple(p) for p in json.loads("[[5.0,5.0],[-5.0,-5.0]]")] [(5.0, 5.0), (-5.0, -5.0)] >>>
Using optparse to read in a list from command line options
I am calling a python script with the following command line: myscript.py --myopt="[(5.,5.),(-5.,-5.)]" The question is -- how to convert myopt to a list variable. My solution was to use optparse, treating myopt as a string, and using (options, args) = parser.parse_args() myopt = eval(options.myopt) Now, because I used eval() I feel a bit like Dobby the house elf, having knowingly transgressed the commandments of great (coding) wizards, and wanting to self-flagellate myself in punishment. But are there better options for parsing lists or tuples or lists of tuples from the command line?.. I've seen solutions that use split(), but this won't work here since this isn't a simple list. Keep in mind, also, that this is being done in the context of mostly one-off scientific computing with no security concerns -- so perhaps eval() isn't as evil here?..
[ "ast.literal_eval(node_or_string):\n\nSafely evaluate an expression node or\n a string containing a Python\n expression. The string or node\n provided may only consist of the\n following Python literal structures:\n strings, numbers, tuples, lists,\n dicts, booleans, and None.\nThis can be used for safely eva...
[ 3, 1 ]
[]
[]
[ "eval", "parsing", "python" ]
stackoverflow_0003175606_eval_parsing_python.txt
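Concretely, for the option string from the question:

```python
import ast

raw = "[(5.,5.),(-5.,-5.)]"
points = ast.literal_eval(raw)
print(points)  # [(5.0, 5.0), (-5.0, -5.0)]

# Unlike eval(), anything that is not a plain literal is rejected outright:
try:
    ast.literal_eval("__import__('os').getcwd()")
except ValueError as exc:
    print("rejected:", exc)
```

The JSON route works too, as the second answer shows, at the cost of rewriting the input ("5." and parentheses are not valid JSON) and converting the inner lists back to tuples.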
Q: OOPs paradigm in Python Here is something I've been having a doubt about. Consider the following snippet. class A(object): def check(self): super(A, self).check() print "inside a" class B(object): def check(self): print "inside b" class C(A, B): pass c = C() c.check() Now this gives the output, inside b inside a Stepping through this with pdb, I see that on reaching A.check(), B.check() is being called. However, the call from A is to the check method of its superclass; as it does not exist, the call moves from that point to B.check(). Could someone explain or point me to a document which explains how this works internally? I couldn't find any. Could someone show me a similar implementation in C++/Java? I think comparing it with other languages would help me understand the problem at hand better. Many thanks. A: The algorithm is explained in this excellent article. In short, super(A,self) looks in self.__class__.__mro__ for the next class after A. In your case, self is c, so self.__class__ is C. C.__mro__ is [C,A,B,object]. So the next class in the MRO after A happens to be B. So super(A,self) returns a super object which behaves like B as far as attribute lookup is concerned. super(A, self).check() thus calls B.check(). The C3 algorithm Python uses to generate the MRO (Method Resolution Order) is also described in a little more detail in this essay by Michele Simionato.
OOPs paradigm in Python
Here is something I've been having a doubt about. Consider the following snippet. class A(object): def check(self): super(A, self).check() print "inside a" class B(object): def check(self): print "inside b" class C(A, B): pass c = C() c.check() Now this gives the output, inside b inside a Stepping through this with pdb, I see that on reaching A.check(), B.check() is being called. However, the call from A is to the check method of its superclass; as it does not exist, the call moves from that point to B.check(). Could someone explain or point me to a document which explains how this works internally? I couldn't find any. Could someone show me a similar implementation in C++/Java? I think comparing it with other languages would help me understand the problem at hand better. Many thanks.
[ "The algorithm is explained in this excellent article.\nIn short, \nsuper(A,self) looks in self.__class__.__mro__ for the next class after A.\nIn your case, self is c, so self.__class__ is C.\nC.__mro__ is [C,A,B,object]. So the next class in the MRO after A happens to be B. \nSo super(A,self) returns a super objec...
[ 9 ]
[]
[]
[ "c++", "java", "oop", "python" ]
stackoverflow_0003175714_c++_java_oop_python.txt
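The MRO described in the answer can be inspected directly. A runnable variant of the snippet (with the setup/check mismatch corrected, and print written as a function so it runs the same on Python 2 and 3):

```python
class A(object):
    def check(self):
        super(A, self).check()  # resolved against type(self).__mro__, not A's bases
        print("inside a")

class B(object):
    def check(self):
        print("inside b")

class C(A, B):
    pass

print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']
C().check()  # prints "inside b" then "inside a"
```

Java's super is lexically fixed to the single superclass, so the closest equivalents are explicit delegation (Java) or virtual inheritance (C++); Python's super is dynamic, which is what lets A hand off to a sibling class it has never heard of.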
Q: Set minimum column width to header width in PyQt4 QTableWidget I'm working with the QTableWidget component in PyQt4 and I can't seem to get columns to size correctly, according to their respective header lengths. Here's what the table layout should look like (sans pipes, obviously): Index | Long_Header | Longer_Header 1 | 102402 | 100 2 | 123123 | 2 3 | 454689 | 18 The code I'm working with looks something like this: import sys from PyQt4.QtCore import QStringList, QString from PyQt4.QtGui import QApplication, QMainWindow, QSizePolicy from PyQt4.QtGui import QTableWidget, QTableWidgetItem def createTable(): table = QTableWidget(5, 3) table.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding) headers = QStringList() headers.append(QString("Index")) headers.append(QString("Long_Header")) headers.append(QString("Longer_Header")) table.setHorizontalHeaderLabels(headers) table.horizontalHeader().setStretchLastSection(True) # ignore crappy names -- this is just an example :) cell1 = QTableWidgetItem(QString("1")) cell2 = QTableWidgetItem(QString("102402")) cell3 = QTableWidgetItem(QString("100")) cell4 = QTableWidgetItem(QString("2")) cell5 = QTableWidgetItem(QString("123123")) cell6 = QTableWidgetItem(QString("2")) cell7 = QTableWidgetItem(QString("3")) cell8 = QTableWidgetItem(QString("454689")) cell9 = QTableWidgetItem(QString("18")) table.setItem(0, 0, cell1) table.setItem(0, 1, cell2) table.setItem(0, 2, cell3) table.setItem(1, 0, cell4) table.setItem(1, 1, cell5) table.setItem(1, 2, cell6) table.setItem(2, 0, cell7) table.setItem(2, 1, cell8) table.setItem(2, 2, cell9) return table if __name__ == '__main__': app = QApplication(sys.argv) mainW = QMainWindow() mainW.setMinimumWidth(300) mainW.setCentralWidget(createTable()) mainW.show() app.exec_() When the application executes, the first column is quite wide while the other columns are somewhat compressed. 
Is there a way to force the table to size according to the header widths, rather than the data itself? Better yet, is there a way to force each column width to be the maximum width of the data and header values? Update: I've tried calling resizeColumnsToContents() on the table, but the view becomes horribly mangled: Python Table http://img514.imageshack.us/img514/8633/tablef.png Update 2: resizeColumnsToContents() works just fine as long as it's called after all cells and headers have been inserted into the table.
Set minimum column width to header width in PyQt4 QTableWidget
I'm working with the QTableWidget component in PyQt4 and I can't seem to get columns to size correctly, according to their respective header lengths. Here's what the table layout should look like (sans pipes, obviously): Index | Long_Header | Longer_Header 1 | 102402 | 100 2 | 123123 | 2 3 | 454689 | 18 The code I'm working with looks something like this: import sys from PyQt4.QtCore import QStringList, QString from PyQt4.QtGui import QApplication, QMainWindow, QSizePolicy from PyQt4.QtGui import QTableWidget, QTableWidgetItem def createTable(): table = QTableWidget(5, 3) table.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding) headers = QStringList() headers.append(QString("Index")) headers.append(QString("Long_Header")) headers.append(QString("Longer_Header")) table.setHorizontalHeaderLabels(headers) table.horizontalHeader().setStretchLastSection(True) # ignore crappy names -- this is just an example :) cell1 = QTableWidgetItem(QString("1")) cell2 = QTableWidgetItem(QString("102402")) cell3 = QTableWidgetItem(QString("100")) cell4 = QTableWidgetItem(QString("2")) cell5 = QTableWidgetItem(QString("123123")) cell6 = QTableWidgetItem(QString("2")) cell7 = QTableWidgetItem(QString("3")) cell8 = QTableWidgetItem(QString("454689")) cell9 = QTableWidgetItem(QString("18")) table.setItem(0, 0, cell1) table.setItem(0, 1, cell2) table.setItem(0, 2, cell3) table.setItem(1, 0, cell4) table.setItem(1, 1, cell5) table.setItem(1, 2, cell6) table.setItem(2, 0, cell7) table.setItem(2, 1, cell8) table.setItem(2, 2, cell9) return table if __name__ == '__main__': app = QApplication(sys.argv) mainW = QMainWindow() mainW.setMinimumWidth(300) mainW.setCentralWidget(createTable()) mainW.show() app.exec_() When the application executes, the first column is quite wide while the other columns are somewhat compressed. Is there a way to force the table to size according to the header widths, rather than the data itself? 
Better yet, is there a way to force each column width to be the maximum width of the data and header values? Update: I've tried calling resizeColumnsToContents() on the table, but the view becomes horribly mangled: Python Table http://img514.imageshack.us/img514/8633/tablef.png Update 2: resizeColumnsToContents() works just fine as long as it's called after all cells and headers have been inserted into the table.
[ "table.resizeColumnsToContents()\n\nshould do the trick for this specific example.\nBe sure to bookmark the PyQt documentation if you haven't done so already (handy when you're looking for a specific function).\n" ]
[ 9 ]
[]
[]
[ "pyqt4", "python" ]
stackoverflow_0003175665_pyqt4_python.txt
Q: Are there any implementations of slashdot style moderation in python? Are there any implementations of slashdot style moderation in python? A: On slash ports site (http://www.slashcode.com/slashalikes.shtml) a Python version isn't listed. You may try to use slash perl code as a guide and develop your own version, though.
Are there any implementations of slashdot style moderation in python?
Are there any implementations of slashdot style moderation in python?
[ "On slash ports site (http://www.slashcode.com/slashalikes.shtml) a Python version isn't listed. You may try to use slash perl code as a guide and develop your own version, though.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0003176048_python.txt
Q: XPath and lxml syntax I have an XML file with the structure as shown below: <x> <y/> <y/> . . </x> The number of <y> tags is arbitrary. I want to get the text of the <y> tags and for this I decided to use XPath. I have figured out the syntax, say for the first y: (Assume root as x) textFirst = root.xpath('y[1]/text()') This works as expected. However my problem is that I won't know the number of <y> tags beforehand, so to fix that, I did this: >>> count = 0 >>> for number in root.getiterator('y'): ... count += 1 So now I know that there are count number of y in x. (Is there a better way to get the number of tags? If yes, please suggest) However, if I do this: >>> def try_it(x): ... return root.xpath('y[x]/text()') ... >>> try_it(1) [] It returns an empty list. So my question is: not knowing the arbitrary number of tags, how do I build an XPath expression for it using lxml? Sorry if something is not clear, I tried my best to explain the problem. A: What about 'y[%i]/text()' % x ? Now do you see where you made a mistake? :) ( .. note that you can capture all y elements together with xpath 'y' or '//y' ) A: To count the number of y nodes, you can use the XPath expression 'count(/x/y)'. Also, I think the problem with your expression in the try_it function is that you appear to be using the literal value x instead of concatenating the input parameter into the XPath expression. Maybe something like this would work: >>> def try_it(x): ... return root.xpath('y[' + str(x) + ']/text()') Hope this helps!
XPath and lxml syntax
I have an XML file with the structure as shown below: <x> <y/> <y/> . . </x> The number of <y> tags is arbitrary. I want to get the text of the <y> tags and for this I decided to use XPath. I have figured out the syntax, say for the first y: (Assume root as x) textFirst = root.xpath('y[1]/text()') This works as expected. However my problem is that I won't know the number of <y> tags beforehand, so to fix that, I did this: >>> count = 0 >>> for number in root.getiterator('y'): ... count += 1 So now I know that there are count number of y in x. (Is there a better way to get the number of tags? If yes, please suggest) However, if I do this: >>> def try_it(x): ... return root.xpath('y[x]/text()') ... >>> try_it(1) [] It returns an empty list. So my question is: not knowing the arbitrary number of tags, how do I build an XPath expression for it using lxml? Sorry if something is not clear, I tried my best to explain the problem.
[ "what about 'y[%i]/text()' % x ?\nnow you see where you did a mistake? :)\n( .. note that you can capture all y elements together with xpath 'y' or '//y' )\n", "To count the number of y nodes, you can use the XPath expression 'count(/x/y)'.\nAlso, I think the problem with your expression in the try_it function is...
[ 1, 1 ]
[]
[]
[ "lxml", "python", "xpath" ]
stackoverflow_0003176105_lxml_python_xpath.txt
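Both answers are easy to check even without lxml installed: the stdlib's xml.etree.ElementTree accepts the same tag-plus-position subset of XPath used here (it lacks text() and count(), so .text and len() stand in). The key point is the same in either library — 'y[x]' is taken literally, so the index has to be interpolated into the expression string:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<x><y>one</y><y>two</y><y>three</y></x>")

# Counting the y tags without a manual iterator loop:
count = len(root.findall("y"))
print(count)  # 3

# XPath positions are 1-based; interpolate the index into the expression:
for i in range(1, count + 1):
    print(root.findall("y[%d]" % i)[0].text)
```

With lxml the same formatted expression works in root.xpath(), and count can come straight from root.xpath('count(/x/y)').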
Q: Compare two audio files Basically, I have a lot of audio files representing the same song. However, some of them are worse quality than the original, and some are edited to where they do not match the original song anymore. What I'd like to do is programmatically compare these audio files to the original and see which ones match up with that song, regardless of quality. A direct comparison would obviously not work because the quality of the files varies. I believe this could be done by analyzing the structure of the songs and comparing to the original, but I know nothing about audio engineering so that doesn't help me much. All the songs are of the same format (MP3). Also, I'm using Python, so if there are bindings for it, that would be fantastic; if not, something for the JVM or even a native library would be fine as well, as long as it runs on Linux and I can figure out how to use it. A: This is actually not a trivial task. I do not think any off-the-shelf library can do it. Here is a possible approach: Decode mp3 to PCM. Ensure that PCM data has a specific sample rate, which you choose beforehand (e.g. 16KHz). You'll need to resample songs that have a different sample rate. A high sample rate is not required since you need a fuzzy comparison anyway, but too low a sample rate will lose too many details. Normalize PCM data (i.e. find maximum sample value and rescale all samples so that the sample with largest amplitude uses the entire dynamic range of the data format, e.g. if the sample format is signed 16 bit, then after normalization the max. amplitude sample should have value 32767 or -32767). Split audio data into frames of a fixed number of samples (e.g.: 1000 samples per frame). Convert each frame to spectrum domain (FFT). Calculate correlation between sequences of frames representing two songs. If correlation is greater than a certain threshold, assume the songs are the same. 
Python libraries: PyMedia (for step 1) NumPy (for data processing) -- also see this article for some introductory info An additional complication. Your songs may have a different length of silence at the beginning. So to avoid false negatives, you may need an additional step: 3.1. Scan PCM data from the beginning, until sound energy exceeds predefined threshold. (E.g. calculate RMS with a sliding window of 10 samples and stop when it exceeds 1% of dynamic range). Then discard all data until this point. A: First, you will have to change your domain of comparison. Analyzing raw samples from the uncompressed files will get you nowhere. Your distance measure will be based on one or more features that you extract from the audio samples. Wikipedia lists the following features as commonly used for Acoustic Fingerprinting: Perceptual characteristics often exploited by audio fingerprints include average zero crossing rate, estimated tempo, average spectrum, spectral flatness, prominent tones across a set of bands, and bandwidth. I don't have programmatic solutions for you but here's an interesting attempt at reverse engineering the YouTube Audio ID system. It is used for copyright infringement detection, a similar problem. A: Copying from that answer: The exact same question that people at the old AudioScrobbler and currently at MusicBrainz have worked on since long ago. For the time being, the Python project that can aid in your quest, is Picard, which will tag audio files (not only MPEG 1 Layer 3 files) with a GUID (actually, several of them), and from then on, matching the tags is quite simple. If you prefer to do it as a project of your own, libofa might be of help. The documentation for the Python wrapper perhaps will help you the most.
Compare two audio files
Basically, I have a lot of audio files representing the same song. However, some of them are worse quality than the original, and some are edited to where they do not match the original song anymore. What I'd like to do is programmatically compare these audio files to the original and see which ones match up with that song, regardless of quality. A direct comparison would obviously not work because the quality of the files varies. I believe this could be done by analyzing the structure of the songs and comparing to the original, but I know nothing about audio engineering so that doesn't help me much. All the songs are of the same format (MP3). Also, I'm using Python, so if there are bindings for it, that would be fantastic; if not, something for the JVM or even a native library would be fine as well, as long as it runs on Linux and I can figure out how to use it.
[ "This is actually not a trivial task. I do not think any off-the-shelf library can do it. Here is a possible approach:\n\nDecode mp3 to PCM.\nEnsure that PCM data has specific sample rate, which you choose beforehand (e.g. 16KHz). You'll need to resample songs that have different sample rate. High sample rate is no...
[ 21, 6, 5 ]
[]
[]
[ "audio", "mp3", "python" ]
stackoverflow_0003172911_audio_mp3_python.txt
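A pure-Python sketch of steps 3–6 above, with one substitution flagged up front: per-frame RMS energy stands in for the FFT spectra so the example needs no NumPy or MP3 decoding (a real matcher should correlate spectra, as the answer says). The synthetic "song" and its degraded copy are made-up data:

```python
import math

def normalize(samples):
    """Rescale so the loudest sample uses the full [-1, 1] range."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

def frame_energies(samples, frame_size=1000):
    """RMS energy of each fixed-size frame."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

def correlation(a, b):
    """Pearson correlation of two sequences (truncated to equal length)."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a) *
                    sum((y - mean_b) ** 2 for y in b)) or 1.0
    return num / den

# A lower-quality copy (rescaled, lightly noisy) of the same "song":
song = [math.sin(i / 50.0) * math.sin(i / 997.0) for i in range(20000)]
copy = [0.3 * s + 0.01 * math.sin(i * 7.0) for i, s in enumerate(song)]
r = correlation(frame_energies(normalize(song)),
                frame_energies(normalize(copy)))
print("similarity:", round(r, 3))
```

Despite the rescaling and noise, r comes out well above a 0.9 threshold here, while unrelated audio would hover near 0 — which is where the answer's threshold step comes in.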
Q: Help with HTML parsing and sending requests to a web server I'm working on a small project and I've run into a small problem. The script I have needs to fetch a website and find a specific value in the source HTML file. The value is like this: id='elementID'> <fieldset> <input type='hidden' name='hash' value='e46c945fe32a3' /> </fieldset> Now I've been trying to use the ElementTree library to parse the HTML document to find the value but I haven't been very successful. I'm really new to Python so I don't really know what to do next. I've been using httplib and urllib/urllib2 to connect to the website and POST my login details and things like that but I really don't know how to get that value from the page. I thought I could send a request for the input named 'hash' but I have no idea how to do that.
Help with HTML parsing and sending requests to a web server
I'm working on a small project and I've run into a small problem. The script I have needs to fetch a website and find a specific value in the source HTML file. The value is like this: id='elementID'> <fieldset> <input type='hidden' name='hash' value='e46c945fe32a3' /> </fieldset> Now I've been trying to use the ElementTree library to parse the HTML document to find the value but I haven't been very successful. I'm really new to Python so I don't really know what to do next. I've been using httplib and urllib/urllib2 to connect to the website and POST my login details and things like that but I really don't know how to get that value from the page. I thought I could send a request for the input named 'hash' but I have no idea how to do that.
[ "You might consider looking at the BeautifulSoup library - it's designed to be quick and easy to use.\n" ]
[ 2 ]
[]
[]
[ "httplib", "python", "urllib", "urllib2" ]
stackoverflow_0003176663_httplib_python_urllib_urllib2.txt
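BeautifulSoup is the comfortable choice, but the value can also be pulled with nothing beyond the stdlib. A sketch using html.parser (on the Python 2 in the question the module is spelled HTMLParser; the markup below is the fragment from the question):

```python
from html.parser import HTMLParser

class HashFinder(HTMLParser):
    """Grab the value attribute of <input name='hash' ...>."""
    def __init__(self):
        super().__init__()
        self.hash_value = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("name") == "hash":
            self.hash_value = attrs.get("value")

page = """<fieldset>
<input type='hidden' name='hash' value='e46c945fe32a3' />
</fieldset>"""
finder = HashFinder()
finder.feed(page)
print(finder.hash_value)  # e46c945fe32a3
```

finder.feed() takes whatever urllib2.urlopen(...).read() returned, and the captured token can then go straight into the POST data for the next request.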
Q: Python: Install 2.5.5? Here is the page I found to get 2.5.5: http://www.python.org/download/releases/2.5.5/ (I need it for Google App Engine.) All I see is source files, not an installer. I'm not entirely sure how to build them on my windows machine. What do I do? (Open in Visual Studio, build there?) Or is there an installer I can use? A: 2.5.5 is unfortunately only available as source, but you can get 2.5.4 installers here. If you're just debugging on your local machine for GAE, the differences between 2.5.4 and 2.5.5 won't matter to you. You can find build instructions for Windows here if you want to build from source. While they're intended for developers building debug builds, they'll work for you to make a release build as well. A: I would recommend using 2.5.2 instead of 2.5.4. I started out with 2.5.4 but found that the SDK would fail when sending email using the --smtp_host option. I downgraded to 2.5.2 after finding this issue, and sending email then worked. Also, 2.5.2 is the exact version which is used by App Engine in production.
Python: Install 2.5.5?
Here is the page I found to get 2.5.5: http://www.python.org/download/releases/2.5.5/ (I need it for Google App Engine.) All I see is source files, not an installer. I'm not entirely sure how to build them on my windows machine. What do I do? (Open in Visual Studio, build there?) Or is there an installer I can use?
[ "2.5.5 is unfortunately only available as source, but you can get 2.5.4 installers here. If you're just debugging on your local machine for GAE, the differences between 2.5.4 and 2.5.5 won't matter to you.\nYou can find build instructions for Windows here if you want to build from source. While they're intended f...
[ 2, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0003170080_google_app_engine_python.txt
Q: Why doesn't finite repetition in lookbehind work in some flavors? I want to parse the 2 digits in the middle from a date in dd/mm/yy format but also allowing single digits for day and month. This is what I came up with: (?<=^[\d]{1,2}\/)[\d]{1,2} I want a 1 or 2 digit number [\d]{1,2} with a 1 or 2 digit number and slash ^[\d]{1,2}\/ before it. This doesn't work on many combinations, I have tested 10/10/10, 11/12/13, etc... But to my surprise (?<=^\d\d\/)[\d]{1,2} worked. But the [\d]{1,2} should also match if \d\d did, or am I wrong? A: On lookbehind support Support for lookbehind varies across major regex flavors; some impose certain restrictions, and some don't support it at all. Javascript: not supported Python: fixed length only Java: finite length only .NET: no restriction References regular-expressions.info/Flavor comparison Python In Python, where only fixed-length lookbehind is supported, your original pattern raises an error because \d{1,2} obviously does not have a fixed length. You can "fix" this by alternating on two different fixed-length lookbehinds, e.g. something like this: (?<=^\d\/)\d{1,2}|(?<=^\d\d\/)\d{1,2} Or perhaps you can put both lookbehinds as alternates of a non-capturing group: (?:(?<=^\d\/)|(?<=^\d\d\/))\d{1,2} (note that you can just use \d without the brackets). That said, it's probably much simpler to use a capturing group instead: ^\d{1,2}\/(\d{1,2}) Note that findall returns what group 1 captures if you only have one group. Capturing groups are more widely supported than lookbehind, and often lead to a more readable pattern (such as in this case). 
This snippet illustrates all of the above points: p = re.compile(r'(?:(?<=^\d\/)|(?<=^\d\d\/))\d{1,2}') print(p.findall("12/34/56")) # "[34]" print(p.findall("1/23/45")) # "[23]" p = re.compile(r'^\d{1,2}\/(\d{1,2})') print(p.findall("12/34/56")) # "[34]" print(p.findall("1/23/45")) # "[23]" p = re.compile(r'(?<=^\d{1,2}\/)\d{1,2}') # raise error("look-behind requires fixed-width pattern") References regular-expressions.info/Lookarounds, Character classes, Alternation, Capturing groups Java Java supports only finite-length lookbehind, so you can use \d{1,2} like in the original pattern. This is demonstrated by the following snippet: String text = "12/34/56 date\n" + "1/23/45 another date\n"; Pattern p = Pattern.compile("(?m)(?<=^\\d{1,2}/)\\d{1,2}"); Matcher m = p.matcher(text); while (m.find()) { System.out.println(m.group()); } // "34", "23" Note that (?m) is the embedded Pattern.MULTILINE so that ^ matches the start of every line. Note also that since \ is an escape character for string literals, you must write "\\" to get one backslash in Java. C-Sharp C# supports full regex on lookbehind. The following snippet shows how you can use + repetition on a lookbehind: var text = @" 1/23/45 12/34/56 123/45/67 1234/56/78 "; Regex r = new Regex(@"(?m)(?<=^\d+/)\d{1,2}"); foreach (Match m in r.Matches(text)) { Console.WriteLine(m); } // "23", "34", "45", "56" Note that unlike Java, in C# you can use @-quoted string so that you don't have to escape \. For completeness, here's how you'd use the capturing group option in C#: Regex r = new Regex(@"(?m)^\d+/(\d{1,2})"); foreach (Match m in r.Matches(text)) { Console.WriteLine("Matched [" + m + "]; month = " + m.Groups[1]); } Given the previous text, this prints: Matched [1/23]; month = 23 Matched [12/34]; month = 34 Matched [123/45]; month = 45 Matched [1234/56]; month = 56 Related questions How can I match on, but exclude a regex pattern? 
A: Unless there's a specific reason for using the lookbehind that isn't noted in the question, how about simply matching the whole thing and only capturing the bit you're interested in instead? JavaScript example:

>>> /^\d{1,2}\/(\d{1,2})\/\d{1,2}$/.exec("12/12/12")[1]
"12"

A: To quote regular-expressions.info:

The bad news is that most regex flavors do not allow you to use just any regex inside a lookbehind, because they cannot apply a regular expression backwards. Therefore, the regular expression engine needs to be able to figure out how many steps to step back before checking the lookbehind. Therefore, many regex flavors, including those used by Perl and Python, only allow fixed-length strings. You can use any regex of which the length of the match can be predetermined. This means you can use literal text and character classes. You cannot use repetition or optional items. You can use alternation, but only if all options in the alternation have the same length.

In other words, your regex does not work because you're using a variable-width expression inside a lookbehind and your regex engine does not support that.

A: In addition to those listed by @polygenelubricants, there are two more exceptions to the "fixed length only" rule. In PCRE (the regex engine for PHP, Apache, et al.) and Oniguruma (Ruby 1.9, Textmate), a lookbehind may consist of an alternation in which each alternative may match a different number of characters, as long as the length of each alternative is fixed. For example:

(?<=\b\d\d/|\b\d/)\d{1,2}(?=/\d{2}\b)

Note that the alternation has to be at the top level of the lookbehind subexpression. You might, like me, be tempted to factor out the common elements, like this:

(?<=\b(?:\d\d|\d)/)\d{1,2}(?=/\d{2}\b)

...but it wouldn't work; at the top level, the subexpression now consists of a single alternative with a non-fixed length. The second exception is much more useful: \K, supported by Perl and PCRE.
It effectively means "pretend the match really started here." Whatever appears before it in the regex is treated as a positive lookbehind. As with .NET lookbehinds, there are no restrictions; whatever can appear in a normal regex can be used before the \K.

\b\d{1,2}/\K\d{1,2}(?=/\d{2}\b)

But most of the time, when someone has a problem with lookbehinds, it turns out they shouldn't even be using them. As @insin pointed out, this problem can be solved much more easily by using a capturing group.

EDIT: Almost forgot JGSoft, the regex flavor used by EditPad Pro and PowerGrep; like .NET, it has completely unrestricted lookbehinds, positive and negative.
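For Python specifically, the capturing-group approach recommended above can be sketched as follows (a minimal example of my own; the helper name and test dates are made up):

```python
import re

# Capture the middle field of a d/m/yy or dd/mm/yy date without lookbehind.
# findall/group(1) return the month because it is the only capturing group.
pattern = re.compile(r'^\d{1,2}/(\d{1,2})/\d{2}$')

def middle_digits(date_string):
    match = pattern.match(date_string)
    return match.group(1) if match else None

print(middle_digits("12/34/56"))   # -> 34
print(middle_digits("1/23/45"))    # -> 23
print(middle_digits("123/45/67"))  # -> None (day field too long)
```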
Q: How can I improve this code?

# max_list = [83, 1350, 1, 100]
for i in range(len(max_list)):
    new_value = 1
    while new_value < max_list[i]:
        new_value *= 10
    max_list = new_value

What I'm doing is rounding numbers up to the closest, uhm, zero filled value? I'm not sure what it would be called. But basically, I want 83 -> 100, 1 -> 1, 1350 -> 10000, 100 -> 100. I tried using the round() function but couldn't get it to do what I wanted. This does it, but I thought it could be written in fewer lines.

A: I'd do it mathematically:

from math import ceil, log10
int(pow(10, ceil(log10(abs(x or 0.1)))))

A: def nextPowerOfTen(x):
    if x in [0, 1]:
        return x
    elif x < 1:
        return -nextPowerOfTen(abs(x))
    else:
        return 10**len(str(int(x) - 1))

>>> nextPowerOfTen(83)
100
>>> nextPowerOfTen(1350)
10000
>>> nextPowerOfTen(1)
1
>>> nextPowerOfTen(100)
100
>>> nextPowerOfTen(0)
0
>>> nextPowerOfTen(-1)
-1
>>> nextPowerOfTen(-2)
-10

It does something sensible with negatives; not sure if that is the behaviour you want or not.

A: i need it to be 1350 / 10000 = 0.135 so it's in the [0, 1] range.

Why didn't you say so initially?

new_val = float("0." + str(old_val))

Unless you need the numbers for something else as well?

A: >>> x = 12345.678
>>> y = round(x)
>>> round(10 * y, -len(str(y)))
100000

A: Pseudocode:

div = input != 1 ? power(10, truncate(log10(abs(input))) + 1) : 1;
percent = input / div;

A: Your original code was close, and more easily read than some terse expression. The problem with your code is a couple of minor errors: initializing new_value each time in the initial scan, rather than only once; and replacing the max_list with a calculated scalar while looping over it as a list. On the final line, you must have intended:

max_list[i] = float(max_list[i]) / new_value

but you dropped the array index, which would replace the list with a single value. On the second iteration of the loop, your Python would raise an exception due to the invalid index into a non-list.
Because your code develops greater and greater values of new_value as it progresses, I recommend you not replace the list items during the first scan. Make a second scan once you have calculated a final value for new_value:

max_list = [83, 1350, 1, 100]

# Calculate the required "normalizing" power-of-ten
new_value = 1.0
for i in range(len(max_list)):
    while new_value < max_list[i]:
        new_value *= 10

# Convert the values to fractions in [0.0, 1.0]
for i in range(len(max_list)):
    max_list[i] = max_list[i] / new_value

print max_list
# "[0.0083000000000000001, 0.13500000000000001, 0.0001, 0.01]"

Notice that I was required to initialize new_value as if it were a floating-point value, in order that it would result in floating-point quotients. There are alternative ways to do this, such as using float(max_list[i]) to retrieve the value for normalizing. The original calculation of new_value was starting over with each element, so your example would return new_value == 100 because this was based off the final element in the input list, which is 100.

A: from math import ceil, log10

# works for floats, too.
x = [83, 1350, 1, 100, 12.75]
y = [10**ceil(log10(el)) for el in x]

# alt list-comprehension if integers needed
# y = [int(10**ceil(log10(el))) for el in x]
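The ceil/log10 one-liner from the first answer can be checked directly against the mappings the question asks for (a quick sketch of my own; the function name is illustrative):

```python
from math import ceil, log10

def next_power_of_ten(x):
    # Round a positive number up to the nearest power of ten:
    # 83 -> 100, 1350 -> 10000, exact powers map to themselves.
    return int(10 ** ceil(log10(x)))

for value, expected in [(83, 100), (1350, 10000), (1, 1), (100, 100)]:
    print(value, "->", next_power_of_ten(value))  # matches expected
```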
Q: Scraping sites that require login with Python

I use several ad networks for my sites, and to see how much money I made I need to log in to each daily to add up the values. I was thinking of making a Python script that would do this for me to get a quick total. I know I need to do a POST request to log in, then store the cookies that I get back and then GET request the report page while passing in those cookies. What's the most convenient way to replicate in Python what I'm doing when I browse the sites manually?

A: See if this works for you: http://stockrt.github.com/p/emulating-a-browser-in-python-with-mechanize/

A: cookielib does client-side cookie handling, and mechanize enhances it in several ways -- including a way to initialize a cookie jar by reading the cookies from an Internet Explorer cache (so, if you can log in manually once to each site on Windows, you can then use cookielib or mechanize for future logins based on exactly the same cookies -- until they expire, of course).

A: Not a Python solution, but consider using a browser automation tool like Chickenfoot.
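In modern Python, the same flow (POST the login form through a cookie jar, then GET the report with the session cookie) can be sketched with the standard library alone. The URLs, field names, and credentials below are placeholders, not from any real ad network:

```python
import http.cookiejar
import urllib.parse
import urllib.request

# One cookie jar shared by every request made through this opener, so the
# session cookie set by the login POST is sent automatically on later GETs.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Placeholder form fields -- adapt to the actual login form of each site.
login_data = urllib.parse.urlencode(
    {"username": "me", "password": "secret"}).encode()

def fetch_report(login_url, report_url):
    # POST: the server responds with a session cookie, stored in `jar`.
    opener.open(login_url, login_data)
    # GET: the cookie is attached automatically by the HTTPCookieProcessor.
    return opener.open(report_url).read()
```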
Q: best python lib to make the textarea safe in the web page when user submit

I want to clean out some tags like <script> and others -- what Python lib are you using to do this? Thanks.

A: You can try Web2py; it is a web framework for Python with support for wiki markup, Markdown, and many other things, plus server-side validation and XSS prevention. If you want to use only its lib, you can use gluon (the library of the web2py framework).
www.web2py.com

A: BeautifulSoup

This function should work for you: http://djangosnippets.org/snippets/1655/
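If the goal is simply to make submitted text safe to display (rather than to allow a whitelisted subset of HTML), the standard library's html.escape is enough -- a minimal sketch:

```python
import html

def make_safe(user_text):
    # Escape <, >, & and quotes so any submitted markup (including
    # <script> tags) renders as inert plain text instead of executing.
    return html.escape(user_text)

print(make_safe('<script>alert("x")</script>'))
# -> &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```

For allowing some tags while stripping dangerous ones, a dedicated sanitizing library is the safer route, as the answers suggest.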
Q: What's wrong with my regular expression?

I'm expecting a string NOT to match a regular expression, but it is!

>>> re.compile('^P|([LA]E?)$').match('PE').group()
'P'

This seems like a bug, because I see no way for the $ to match. On the other hand, it seems unlikely that Python's re lib would not be able to handle this simple case. Am I missing something here? btw, Python prints this out when I start it:

Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.

A: ^P|([LA]E?)$ becomes

^P
|
([LA]E?)$

A: You wrote "P or ([LA]E?)". "P" matches. If you meant for the anchors to apply to both cases, then perhaps you meant this:

^(?:P|([LA]E?))$

A: Two other points worth mentioning: ^ is redundant when you use re.match(), and if you want to match the end of the string, use r"\Z", not "$".

Why $ is evil: suppose we want to check that the whole of some string s matches some pattern foo. We don't want to call it a match if there is anything after the pattern. Let's see what happens if there is a newline after the pattern.

>>> import re
>>> s = 'foo\n'
>>> re.match('foo$', s)
<_sre.SRE_Match object at 0x00BAFE90> # match -- FAIL!
>>> re.match('foo\Z', s) # no match -- OK
>>>

Here are excerpts from the docs:

Docs for $: Matches the end of the string or just before the newline at the end of the string ... this is using the default modes, and has nothing to do with MULTILINE mode.

Docs for \Z: Matches only at the end of the string. Nothing to do with modes, and not spooked by newlines.

A: It will match either ^P or ([LA]E?)$. Did you mean ^(?:P|([LA]E?))$ instead?

A: It prints the same output on Mac OS X and on Linux (Debian testing). I would be surprised to see a bug behave the same on three major platforms. Besides, 'PE' has 'P' in the beginning, so I don't see a problem in it matching the regex. Similarly, 'AE' matches the regex too. On both Debian and on Mac OS X.
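The precedence point the answers make is easy to verify in Python: the ungrouped pattern matches 'PE' via its ^P branch, while the grouped version makes both anchors apply to every branch (variable names here are my own):

```python
import re

# '|' has the lowest precedence, so this reads as (^P) | ([LA]E?$).
loose = re.compile(r'^P|([LA]E?)$')
# Grouping the alternation makes ^ and $ apply to every branch.
strict = re.compile(r'^(?:P|[LA]E?)$')

print(loose.match('PE').group())   # 'P' -- the surprising match
print(strict.match('PE'))          # None -- rejected as intended
print(strict.match('P').group())   # 'P'
print(strict.match('LE').group())  # 'LE'
```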
Q: Python checking daytime

Basically, I want my script to pause between 4 and 5 AM. The only way to do this I've come up with so far is this:

seconds_into_day = time.time() % (60*60*24)
if 60*60*4 < seconds_into_day < 60*60*5:
    sleep(time_left_till_5am)

Any "proper" way to do this? I.e., some built-in function/lib for calculating time, rather than just using seconds all the time?

A: You want datetime

The datetime module supplies classes for manipulating dates and times in both simple and complex ways

If you use date.hour from datetime.now() you'll get the current hour:

datetimenow = datetime.now()
if datetimenow.hour in range(4, 5):
    sleep(time_left_till_5am)

You can calculate time_left_till_5am by taking 60 - datetimenow.minute, multiplying by 60, and adding 60 - datetimenow.second.

A: Python has a built-in datetime library: http://docs.python.org/library/datetime.html

This should probably get you what you're after:

import datetime as dt
from time import sleep

now = dt.datetime.now()
if now.hour >= 4 and now.hour < 5:
    sleep((60 - now.minute)*60 + (60 - now.second))

OK, the above works, but here's the purer, less error-prone solution (and what I was originally thinking of but suddenly forgot how to do):

import datetime as dt
from time import sleep

now = dt.datetime.now()
pause = dt.datetime(now.year, now.month, now.day, 4)
start = dt.datetime(now.year, now.month, now.day, 5)
if now >= pause and now < start:
    sleep((start - now).seconds)

That's where my original "timedelta" comment came from -- what you get from subtracting two datetime objects is a timedelta object (which in this case we pull the 'seconds' attribute from).

A: The following code covers the more general case where a script needs to pause during any fixed window of less than 24 hours' duration. Example: must sleep between 11:00 PM and 01:00 AM.
import datetime as dt

def sleep_duration(sleep_from, sleep_to, now=None):
    # sleep_* are datetime.time objects
    # now is a datetime.datetime object
    if now is None:
        now = dt.datetime.now()
    duration = 0
    lo = dt.datetime.combine(now, sleep_from)
    hi = dt.datetime.combine(now, sleep_to)
    if lo <= now < hi:
        duration = (hi - now).seconds
    elif hi < lo:
        if now >= lo:
            duration = (hi + dt.timedelta(hours=24) - now).seconds
        elif now < hi:
            duration = (hi - now).seconds
    return duration

tests = [
    (4, 5, 3, 30),
    (4, 5, 4, 0),
    (4, 5, 4, 30),
    (4, 5, 5, 0),
    (4, 5, 5, 30),
    (23, 1, 0, 0),
    (23, 1, 0, 30),
    (23, 1, 0, 59),
    (23, 1, 1, 0),
    (23, 1, 1, 30),
    (23, 1, 22, 30),
    (23, 1, 22, 59),
    (23, 1, 23, 0),
    (23, 1, 23, 1),
    (23, 1, 23, 59),
]

for hfrom, hto, hnow, mnow in tests:
    sfrom = dt.time(hfrom)
    sto = dt.time(hto)
    dnow = dt.datetime(2010, 7, 5, hnow, mnow)
    print sfrom, sto, dnow, sleep_duration(sfrom, sto, dnow)

and here's the output:

04:00:00 05:00:00 2010-07-05 03:30:00 0
04:00:00 05:00:00 2010-07-05 04:00:00 3600
04:00:00 05:00:00 2010-07-05 04:30:00 1800
04:00:00 05:00:00 2010-07-05 05:00:00 0
04:00:00 05:00:00 2010-07-05 05:30:00 0
23:00:00 01:00:00 2010-07-05 00:00:00 3600
23:00:00 01:00:00 2010-07-05 00:30:00 1800
23:00:00 01:00:00 2010-07-05 00:59:00 60
23:00:00 01:00:00 2010-07-05 01:00:00 0
23:00:00 01:00:00 2010-07-05 01:30:00 0
23:00:00 01:00:00 2010-07-05 22:30:00 0
23:00:00 01:00:00 2010-07-05 22:59:00 0
23:00:00 01:00:00 2010-07-05 23:00:00 7200
23:00:00 01:00:00 2010-07-05 23:01:00 7140
23:00:00 01:00:00 2010-07-05 23:59:00 3660

A: When dealing with dates and times in Python I still prefer mxDateTime over Python's datetime module, as although the built-in one has improved greatly over the years it's still rather awkward and lacking in comparison. So if interested go here: mxDateTime

It's free to download and use. Makes life much easier when dealing with datetime math.
import mx.DateTime as dt
from time import sleep

now = dt.now()
if 4 <= now.hour < 5:
    stop = dt.RelativeDateTime(hour=5, minute=0, second=0)
    secs_remaining = ((now + stop) - now).seconds
    sleep(secs_remaining)
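The timedelta arithmetic from the "purer" datetime answer can be checked with a fixed clock instead of datetime.now() (a small sketch of my own; the helper name and sample times are illustrative):

```python
import datetime as dt

def seconds_until_start(now, pause_hour=4, start_hour=5):
    # Mirror the "purer" approach: build full datetimes for the window
    # edges on the same day, then subtract to get the remaining gap.
    pause = dt.datetime(now.year, now.month, now.day, pause_hour)
    start = dt.datetime(now.year, now.month, now.day, start_hour)
    if pause <= now < start:
        return (start - now).seconds
    return 0

print(seconds_until_start(dt.datetime(2010, 7, 5, 4, 30)))  # 1800
print(seconds_until_start(dt.datetime(2010, 7, 5, 3, 59)))  # 0
```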
Q: Unstructured Text to Structured Data

I am looking for references (tutorials, books, academic literature) concerning structuring unstructured text in a manner similar to the Google Calendar quick add button. I understand this may come under the NLP category, but I am interested only in the process of going from something like "Levi jeans size 32 A0b293" to:

Brand: Levi, Size: 32, Category: Jeans, code: A0b293

I imagine it would be some combination of lexical parsing and machine learning techniques. I am rather language agnostic, but if pushed would prefer Python, Matlab or C++ references. Thanks

A: You need to provide more information about the source of the text (the web? user input?), the domain (is it just clothes?), the potential formatting and vocabulary...

Assuming the worst-case scenario, you need to start learning NLP. A very good free book is the documentation of NLTK: http://www.nltk.org/book . It is also a very good introduction to Python and the SW is free (for various usages). Be warned: NLP is hard. It doesn't always work. It is not fun at times. The state of the art is nowhere near where you imagine it is.

Assuming a better scenario (your text is semi-structured) - a good free tool is pyparsing. There is a book, plenty of examples and the resulting code is extremely attractive.

I hope this helps...

A: Possibly look at "Collective Intelligence" by Toby Segaran. I seem to remember it addressing the basics of this in one chapter.
A: After some research I have found that this problem is commonly referred to as Information Extraction, and I have amassed a few papers and stored them in a Mendeley collection: http://www.mendeley.com/research-papers/collections/3237331/Information-Extraction/

Also, as Tai Weiss noted, NLTK for Python is a good starting point, and this chapter of the book looks specifically at information extraction.

A: If you are only working with cases like the example you cited, you are better off using a manual rule-based approach that is 100% predictable and covers 90% of the cases it might encounter in production. You could enumerate lists of all possible brands and categories and detect which is which in an input string, because there's usually very little intersection between these two lists. The other two could easily be detected and extracted using regular expressions (1-3 digit numbers are always sizes, etc.).

Your problem domain doesn't seem big enough to warrant a more heavy-duty approach such as statistical learning.
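The rule-based approach from the last answer can be sketched for the example in the question. The vocabularies, field names, and regexes below are illustrative stand-ins, not from any real system:

```python
import re

# Tiny hand-built vocabularies -- in practice these would be much larger.
BRANDS = {"levi", "wrangler", "diesel"}
CATEGORIES = {"jeans", "shirt", "jacket"}

def extract(text):
    record = {}
    for token in text.split():
        low = token.lower()
        if low in BRANDS:
            record["brand"] = token
        elif low in CATEGORIES:
            record["category"] = token
        elif re.fullmatch(r"\d{1,3}", token):
            # per the answer: 1-3 digit numbers are always sizes
            record["size"] = int(token)
        elif re.fullmatch(r"[A-Za-z]\d\w+", token):
            # letter-then-digit tokens treated as product codes
            record["code"] = token
    return record

print(extract("Levi jeans size 32 A0b293"))
# {'brand': 'Levi', 'category': 'jeans', 'size': 32, 'code': 'A0b293'}
```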
Q: Advice on preparing/presenting a Python Master Class?

I am preparing a master class to present to a group of Technical Artists# at work. Everyone in the group has previously programmed in C/C++/MEL/MAXScript/Python. The purpose of the class is to collectively bring everyone's skill levels and technical understanding on a variety of Computer Science topics to a common level.

I would like some advice, as this is the first time I've delivered such a class. I am planning on structuring the course as two 2-hour sessions, with 30-minute blocks of content interspersed with Q&A, code review, and individual assistance. I know this is a multi-part question, so don't feel you need to be able to answer everything; just contribute what you can. Any links to articles, SO questions, or reflections on personal learning experiences are greatly appreciated.

Questions/Advice/Links to Further Reading
- What CS topics should I attempt to cover?
- Examples of other Python training courses?
- What do you wish someone taught you when you first started programming?
- Python programming best-practices
- Tips for delivering technical content to a creative/artist audience?

Using Dive Into Python as a textbook and referencing the MIT OpenCourseWare Introduction to Computer Science on Academic Earth. I have also been given a 2-minute overview on adult training (Malcolm Knowles), i.e., working the students through the cycle of: identify the problem, determine the cause, research a solution, and apply.

# Technical Artists: write tools, create/script rigs, and manage the creation of data in DCC packages (Adobe Photoshop, Autodesk Maya and 3ds Max).

A: Just some quick comments/thoughts from my experience:

I think your time allotment is tight, so I would focus on a handful of key topics to drive home. Certainly spend some time on basic list, tuple and dictionary usage and manipulation.
I like to put together a cheat sheet of libraries and select methods/examples for the students, so they get a sense of the "Python universe" and how much is available.

"Dive into Python" is a decent book, but I think you can find more succinct and effective tutorials online, or more complete references (Learning Python, by Lutz?).

Pick good example problems to illustrate points, and personalize them if possible. Have little "extra credit" teasers available for ambitious students.

Make sure you gauge your audience's preparedness well, and especially consider the spread in their abilities. Is this a class to prepare them with Python skills for specific tasks, or an introduction to Python (where they will learn more on their own later)?
Q: Why does Django use a BaseForm?

I think I finally figured out why they need to use this DeclarativeFieldsMetaclass (to turn the class fields into instance variables and maintain their order with an ordered/sorted dict). However, I'm still not quite sure why they opted to use a BaseForm rather than implementing everything directly within the Form class? They left a comment:

class Form(BaseForm):
    "A collection of Fields, plus their associated data."
    # This is a separate class from BaseForm in order to abstract the way
    # self.fields is specified. This class (Form) is the one that does the
    # fancy metaclass stuff purely for the semantic sugar -- it allows one
    # to define a form using declarative syntax.
    # BaseForm itself has no way of designating self.fields.

But I don't really understand it. "In order to abstract the way self.fields is specified" -- but Python calls DeclarativeFieldsMetaclass.__new__ before Form.__init__, so they could have taken full advantage of self.fields inside Form.__init__ as is; why do they need an extra layer of abstraction?

A: I think the reason is simple: with BaseForm alone you can't define fields using a declarative syntax, i.e.

class MyForm(Form):
    field_xxx = form.TextField(...)
    field_nnn = form.IntegerField(...)

For such a thing to work, the form must have the metaclass DeclarativeFieldsMetaclass, which is set on Form only. They did that because "This is a separate class from BaseForm in order to abstract the way self.fields is specified", so now you can write a WeirdForm class in which fields are defined in some other way, e.g. by passing params to the class object; the point is that all the API is in BaseForm, and the Form class just provides an easy way to define fields.

Summary: IMO Django preferred to introduce another layer so that, if needed, a different type of field declaration can be implemented; at least it keeps the non-core functionality of forms separate.
A: Source:

class MetaForm(type):
    def __new__(cls, name, bases, attrs):
        print "%s: %s" % (name, attrs)
        return type.__new__(cls, name, bases, attrs)

class BaseForm(object):
    my_attr = 1
    def __init__(self):
        print "BaseForm.__init__"

class Form(BaseForm):
    __metaclass__ = MetaForm
    def __init__(self):
        print "Form.__init__"

class CustomForm(Form):
    my_field = 2
    def __init__(self):
        print "CustomForm.__init__"

f = CustomForm()

Output:

Form: {'__module__': '__main__', '__metaclass__': <class '__main__.MetaForm'>, '__init__': <function __init__ at 0x0227E0F0>}
CustomForm: {'__module__': '__main__', 'my_field': 2, '__init__': <function __init__ at 0x0227E170>}
CustomForm.__init__

Looks like MetaForm.__new__ is called twice: once for Form and once for CustomForm, but never for BaseForm. By having a clean (empty) Form class, there won't be any extraneous attributes to loop over. It also means that you can define Fields inside the BaseForm that could be used for internal use, but avoid rendering.
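The "fancy metaclass stuff" boils down to harvesting Field class attributes into a fields dict before __init__ ever runs, while BaseForm only ever consumes that dict. A stripped-down sketch of the idea in Python 3 syntax -- this is not Django's actual code (Django also preserves declaration order via a creation counter; here modern dict insertion order stands in for that):

```python
class Field:
    # Stand-in for a django.forms field.
    def __init__(self, label):
        self.label = label

class DeclarativeFieldsMetaclass(type):
    def __new__(mcs, name, bases, attrs):
        # Pull declared Field attributes off the class body...
        fields = {k: v for k, v in attrs.items() if isinstance(v, Field)}
        for k in fields:
            attrs.pop(k)
        cls = super().__new__(mcs, name, bases, attrs)
        # ...and stash them where BaseForm expects to find them.
        cls.base_fields = fields
        return cls

class BaseForm:
    # Knows only how to work with self.fields, not how they were declared.
    def __init__(self):
        self.fields = dict(self.base_fields)

class Form(BaseForm, metaclass=DeclarativeFieldsMetaclass):
    pass  # exists purely to attach the declarative-syntax metaclass

class MyForm(Form):
    name = Field("Name")
    age = Field("Age")

f = MyForm()
print(list(f.fields))  # ['name', 'age']
```

A hypothetical WeirdForm could subclass BaseForm directly and populate base_fields by any other means, which is exactly the abstraction the comment describes.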
Q: How to convert generator or iterator to list recursively

I want to convert a generator or iterator to a list recursively. I wrote the code below, but it looks naive and ugly, and may have dropped cases in the doctest.

Q1. Help me with a better version.
Q2. How do I specify whether an object is immutable or not?

import itertools

def isiterable(datum):
    return hasattr(datum, '__iter__')

def issubscriptable(datum):
    return hasattr(datum, "__getitem__")

def eagerlize(obj):
    """ Convert generator or iterator to list recursively.
    Return an eagerlized object of the given obj.
    This works, but instead of returning a new object it breaks the given one.

    test 1.0 iterator
    >>> q = itertools.permutations('AB', 2)
    >>> eagerlize(q)
    [('A', 'B'), ('B', 'A')]
    >>>

    test 2.0 generator in list
    >>> q = [(2**x for x in range(3))]
    >>> eagerlize(q)
    [[1, 2, 4]]
    >>>

    test 2.1 generator in tuple
    >>> q = ((2**x for x in range(3)),)
    >>> eagerlize(q)
    ([1, 2, 4],)
    >>>

    test 2.2 generator in tuple in generator
    >>> q = (((x, (y for y in range(x, x+1))) for x in range(3)),)
    >>> eagerlize(q)
    ([(0, [0]), (1, [1]), (2, [2])],)
    >>>

    test 3.0 complex test
    >>> def test(r):
    ...     for x in range(3):
    ...         r.update({'k%s'%x:x})
    ...         yield (n for n in range(1))
    >>>
    >>> def creator():
    ...     r = {}
    ...     t = test(r)
return r, t >>> >>> a, b = creator() >>> q = {'b' : a, 'a' : b} >>> eagerlize(q) {'a': [[0], [0], [0]], 'b': {'k2': 2, 'k1': 1, 'k0': 0}} >>> test 3.1 complex test (other dict order) >>> a, b = creator() >>> q = {'b' : b, 'a' : a} >>> eagerlize(q) {'a': {'k2': 2, 'k1': 1, 'k0': 0}, 'b': [[0], [0], [0]]} >>> test 4.0 complex test with tuple >>> a, b = creator() >>> q = {'b' : (b, 10), 'a' : (a, 10)} >>> eagerlize(q) {'a': ({'k2': 2, 'k1': 1, 'k0': 0}, 10), 'b': ([[0], [0], [0]], 10)} >>> test 4.1 complex test with tuple (other dict order) >>> a, b = creator() >>> q = {'b' : (b, 10), 'a' : (a, 10)} >>> eagerlize(q) {'a': ({'k2': 2, 'k1': 1, 'k0': 0}, 10), 'b': ([[0], [0], [0]], 10)} >>> """ def loop(obj): if isiterable(obj): for k, v in obj.iteritems() if isinstance(obj, dict) \ else enumerate(obj): if isinstance(v, tuple): # immutable and iterable object must be recreate, # but realy only tuple? obj[k] = tuple(eagerlize(list(obj[k]))) elif issubscriptable(v): loop(v) elif isiterable(v): obj[k] = list(v) loop(obj[k]) b = [obj] loop(b) return b[0] def _test(): import doctest doctest.testmod() if __name__=="__main__": _test() A: To avoid badly affecting the original object, you basically need a variant of copy.deepcopy... subtly tweaked because you need to turn generators and iterators into lists (deepcopy wouldn't deep-copy generators anyway). Note that some effect on the original object is unfortunately inevitable, because generators and iterators are "exhausted" as a side effect of iterating all the way on them (be it to turn them into lists or for any other purpose) -- therefore, there is simply no way you can both leave the original object alone and have that generator or other iterator turned into a list in the "variant-deepcopied" result. 
The copy module is unfortunately not written to be customized, so the alternatives are either copy-paste-edit, or a subtle (sigh) monkey-patch hinging on (double-sigh) the private module variable _deepcopy_dispatch (which means your patched version might not survive a Python version upgrade, say from 2.6 to 2.7, hypothetically). Plus, the monkey-patch would have to be uninstalled after each use of your eagerize (to avoid affecting other uses of deepcopy). So, let's assume we pick the copy-paste-edit route instead. Say we start with the most recent version, the one that's online here. You need to rename the module, of course; rename the externally visible function deepcopy to eagerize at line 145; the substantial change is at lines 161-165, which in said version, annotated, are: 161 : copier = _deepcopy_dispatch.get(cls) 162 : if copier: 163 : y = copier(x, memo) 164 : else: 165 : tim_one 18729 try: We need to insert between line 163 and 164 the logic "otherwise if it's iterable, expand it to a list (i.e., use the function _deepcopy_list as the copier)". So these lines become: 161 : copier = _deepcopy_dispatch.get(cls) 162 : if copier: 163 : y = copier(x, memo) elif hasattr(cls, '__iter__'): y = _deepcopy_list(x, memo) 164 : else: 165 : tim_one 18729 try: That's all: just these two added lines. Note that I've left the original line numbers alone to make it perfectly clear where exactly these two lines need to be inserted, and not numbered the two new lines. You also need to rename other instances of the identifier deepcopy (indirect recursive calls) to eagerize. You should also remove lines 66-144 (the shallow-copy functionality that you don't care about) and appropriately tweak lines 1-65 (docstrings, imports, __all__, etc). Of course, you want to work off a copy of the plaintext version of copy.py, here, not the annotated version I've been referring to (I used the annotated version just to clarify exactly where the changes were needed!-).
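As a lighter-weight alternative to editing a copy of copy.py, the core recursion can be sketched as a standalone function. This is not the deepcopy-based variant the answer describes — it handles only plain dicts, lists, tuples and arbitrary iterables, written in Python 3 syntax, and (as the answer notes is unavoidable) it still exhausts any generators it meets:

```python
def eagerize(obj):
    """Recursively expand generators/iterators into lists (minimal sketch).

    Returns a new structure: dicts, lists and tuples are rebuilt, strings
    are left alone, and any other iterable (generator, itertools object,
    ...) is expanded into a list.  Exotic containers are not handled.
    """
    if isinstance(obj, dict):
        return {k: eagerize(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(eagerize(v) for v in obj)
    if isinstance(obj, str):
        return obj  # strings are iterable but should stay strings
    if hasattr(obj, '__iter__'):
        return [eagerize(v) for v in obj]
    return obj

g = (x * x for x in range(3))
print(eagerize({'a': (g, 10)}))  # {'a': ([0, 1, 4], 10)}
```

Because each container is rebuilt rather than mutated, this sidesteps the original code's special-casing of tuples as "immutable and iterable objects that must be recreated".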
How to convert generator or iterator to list recursively
I want to convert generator or iterator to list recursively. I wrote a code in below, but it looks naive and ugly, and may be dropped case in doctest. Q1. Help me good version. Q2. How to specify object is immutable or not? import itertools def isiterable(datum): return hasattr(datum, '__iter__') def issubscriptable(datum): return hasattr(datum, "__getitem__") def eagerlize(obj): """ Convert generator or iterator to list recursively. return a eagalized object of given obj. This works but, whether it return a new object, break given one. test 1.0 iterator >>> q = itertools.permutations('AB', 2) >>> eagerlize(q) [('A', 'B'), ('B', 'A')] >>> test 2.0 generator in list >>> q = [(2**x for x in range(3))] >>> eagerlize(q) [[1, 2, 4]] >>> test 2.1 generator in tuple >>> q = ((2**x for x in range(3)),) >>> eagerlize(q) ([1, 2, 4],) >>> test 2.2 generator in tuple in generator >>> q = (((x, (y for y in range(x, x+1))) for x in range(3)),) >>> eagerlize(q) ([(0, [0]), (1, [1]), (2, [2])],) >>> test 3.0 complex test >>> def test(r): ... for x in range(3): ... r.update({'k%s'%x:x}) ... yield (n for n in range(1)) >>> >>> def creator(): ... r = {} ... t = test(r) ... 
return r, t >>> >>> a, b = creator() >>> q = {'b' : a, 'a' : b} >>> eagerlize(q) {'a': [[0], [0], [0]], 'b': {'k2': 2, 'k1': 1, 'k0': 0}} >>> test 3.1 complex test (other dict order) >>> a, b = creator() >>> q = {'b' : b, 'a' : a} >>> eagerlize(q) {'a': {'k2': 2, 'k1': 1, 'k0': 0}, 'b': [[0], [0], [0]]} >>> test 4.0 complex test with tuple >>> a, b = creator() >>> q = {'b' : (b, 10), 'a' : (a, 10)} >>> eagerlize(q) {'a': ({'k2': 2, 'k1': 1, 'k0': 0}, 10), 'b': ([[0], [0], [0]], 10)} >>> test 4.1 complex test with tuple (other dict order) >>> a, b = creator() >>> q = {'b' : (b, 10), 'a' : (a, 10)} >>> eagerlize(q) {'a': ({'k2': 2, 'k1': 1, 'k0': 0}, 10), 'b': ([[0], [0], [0]], 10)} >>> """ def loop(obj): if isiterable(obj): for k, v in obj.iteritems() if isinstance(obj, dict) \ else enumerate(obj): if isinstance(v, tuple): # immutable and iterable object must be recreate, # but realy only tuple? obj[k] = tuple(eagerlize(list(obj[k]))) elif issubscriptable(v): loop(v) elif isiterable(v): obj[k] = list(v) loop(obj[k]) b = [obj] loop(b) return b[0] def _test(): import doctest doctest.testmod() if __name__=="__main__": _test()
[ "To avoid badly affecting the original object, you basically need a variant of copy.deepcopy... subtly tweaked because you need to turn generators and iterators into lists (deepcopy wouldn't deep-copy generators anyway). Note that some effect on the original object is unfortunately inevitable, because generators a...
[ 5 ]
[]
[]
[ "generator", "immutability", "iterator", "python", "recursion" ]
stackoverflow_0003177442_generator_immutability_iterator_python_recursion.txt
Q: Django Templates - Printing Comma-separated ManyToManyField, sorting results list into dict? I have a Django project for managing a list of journal articles. The main model is Article. This has various fields to store things like title of the article, publication date, subject, as well as list of companies mentioned in the article. (company is it's own model). I want a template that prints out a list of the articles, sorted by category, and also listing the companies mentioned. However, I'm hitting two issues. Firstly, the company field is a ManyToMany field. I'm printing this successfully now, using the all iterable, thanks to this SO question =). (Curious though, where is this all iterable documented in the Django documentation?) listing objects from ManyToManyField However, I'd like to print ", " (comma followed by space) after each item, except the last item. So the output would be: Joe Bob Company, Sarah Jane Company, Tool Company and not: Joe Bob Company, Sarah Jane Company, Tool Company, How do you achieve this with Django's templating system? Secondly, each Article has a CharField, called category, that stores the category for the article. I would like the articles sorted by Categories, if possible. So I use QuerySet, and get a nice list of relevant articles in article_list. I then use the regroup template tag to sort this into categories and print each one. { 'tennis': ('article_4', 'article_5') 'cricket': ('article_2', 'article_3') 'ping pong': ('article_1') } However, I need to make sure that my input list is sorted, before I pass it to regroup. My question is, is it better to use the dictsort template-tag to sort this inside the template, or should I use QuerySet's order_by call instead? And I assume it's better to use regroup, rather than trying to code this myself in Python inside the view? 
Cheers, Victor A: first question Use the Python-like join filter {{ article.company.all|join:", " }} http://docs.djangoproject.com/en/dev/ref/templates/builtins/#join second question My question is, is it better to use the dictsort template-tag to sort this inside the template, or should I use QuerySet's order_by call instead? I would use QuerySet's order_by. I like doing such stuff in the DB, because with a huge dataset you can use database indexes. And I assume it's better to use regroup, rather than trying to code this myself in Python inside the view? regroup. It is definitely better to use Python native functions. A: Try forloop.last for your first question {% for company in article.company.all %} {{ company.name }}{% if not forloop.last %}, {% endif %} {% endfor %}
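The regroup tag, like Python's itertools.groupby, only merges *consecutive* items with equal keys — which is exactly why the queryset must be sorted (ideally via order_by) before regrouping. A hedged pure-Python sketch of the same idea, with made-up article dicts standing in for the model instances:

```python
from itertools import groupby

# Hypothetical stand-in data; in the real project these would be Article instances.
articles = [
    {'title': 'article_1', 'category': 'ping pong'},
    {'title': 'article_2', 'category': 'cricket'},
    {'title': 'article_4', 'category': 'tennis'},
    {'title': 'article_3', 'category': 'cricket'},
]

# Sort first -- the pure-Python analogue of Article.objects.order_by('category').
ordered = sorted(articles, key=lambda a: a['category'])

# groupby then yields one (category, items) pair per run of equal keys.
by_category = {cat: [a['title'] for a in group]
               for cat, group in groupby(ordered, key=lambda a: a['category'])}

print(by_category['cricket'])  # ['article_2', 'article_3']
```

Skipping the sort would split 'cricket' into two separate groups, which is the same failure mode regroup exhibits on unsorted input.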
Django Templates - Printing Comma-separated ManyToManyField, sorting results list into dict?
I have a Django project for managing a list of journal articles. The main model is Article. This has various fields to store things like title of the article, publication date, subject, as well as list of companies mentioned in the article. (company is it's own model). I want a template that prints out a list of the articles, sorted by category, and also listing the companies mentioned. However, I'm hitting two issues. Firstly, the company field is a ManyToMany field. I'm printing this successfully now, using the all iterable, thanks to this SO question =). (Curious though, where is this all iterable documented in the Django documentation?) listing objects from ManyToManyField However, I'd like to print ", " (comma followed by space) after each item, except the last item. So the output would be: Joe Bob Company, Sarah Jane Company, Tool Company and not: Joe Bob Company, Sarah Jane Company, Tool Company, How do you achieve this with Django's templating system? Secondly, each Article has a CharField, called category, that stores the category for the article. I would like the articles sorted by Categories, if possible. So I use QuerySet, and get a nice list of relevant articles in article_list. I then use the regroup template tag to sort this into categories and print each one. { 'tennis': ('article_4', 'article_5') 'cricket': ('article_2', 'article_3') 'ping pong': ('article_1') } However, I need to make sure that my input list is sorted, before I pass it to regroup. My question is, is it better to use the dictsort template-tag to sort this inside the template, or should I use QuerySet's order_by call instead? And I assume it's better to use regroup, rather than trying to code this myself in Python inside the view? Cheers, Victor
[ "first question \nUse the python like join filter\n{{ article.company.all|join:\", \" }}\n\nhttp://docs.djangoproject.com/en/dev/ref/templates/builtins/#join\nsecond question \n\nMy question is, is it better to use\n the dictsort template-tag to sort this\n inside the template, or should I use\n QuerySet's order...
[ 18, 17 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0003177461_django_django_templates_python.txt
Q: python path django How can I add something to my "Pythonpath"? Where exactly are the files located that I have to change to add to my pythonpath? What exactly do I add to my Pythonpath? If Python calls: /Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/ But I want it to call /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages What do I need to add to make it work? What's strange is that I already used django-admin.py for a startproject command, but now it does not find it. Is there a way to clean up ALL my Python and Django so I can restart with a fresh version? A: >>> import sys >>> sys.path sys.path is the list of search paths for modules. If you want a module to be loaded from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages instead of /Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/ you just need to make sure that the site-packages search path comes before the Contents one in sys.path. You can set the Python path using the PYTHONPATH environment variable, e.g. (on a Linux system) export PYTHONPATH=/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages:$PYTHONPATH
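The same precedence rule can be applied from inside a script by putting the directory at the front of sys.path. A sketch — the path shown is the one from the question and is assumed to exist on the asker's machine, not necessarily yours:

```python
import sys

# Directory from the question -- adjust for your own installation.
site_packages = ('/Library/Frameworks/Python.framework/Versions/2.6'
                 '/lib/python2.6/site-packages')

# Prepending makes this directory win over anything later in sys.path,
# mirroring what the PYTHONPATH export in the answer does.
if site_packages not in sys.path:
    sys.path.insert(0, site_packages)

print(sys.path[0])  # the site-packages directory now comes first
```

This only affects the running process; PYTHONPATH is the way to make the change stick across invocations.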
python path django
How can I add something to my "Pythonpath"? Where exactly are the files located that I have to change to add to my pythonpath? What exactly do I add to my Pythonpath? If Python calls: /Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/ But I want it to call /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages What do I need to add to make it work? What's strange is that I already used django-admin.py for a startproject command, but now it does not find it. Is there a way to clean up ALL my Python and Django so I can restart with a fresh version?
[ ">>> import sys\n>>> sys.path\n\nsys.path is the list of search path for modules.\nif you want a module to be loaded from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages instead of /Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/\nyou just need to make s...
[ 7 ]
[ "This tutorial will probably work for you if you want to remove an old version:\nhttp://docs.djangoproject.com/en/1.2/topics/install/#removing-old-versions-of-django\n" ]
[ -4 ]
[ "django", "path", "python" ]
stackoverflow_0003177715_django_path_python.txt
Q: Help with urllib + proxy in Python My program isn't running properly as it should be... I'm only getting the error message (except part) of the urlopen with the proxy... why? At least one of the proxies was tested and works correctly... please, someone take a look at the code here: http://pastebin.com/cBfv5H8J edit: the code doesn't work on the first try part, this one try: h = urllib.urlopen(website, proxies = {'http': proxylist}) break and it always returns me the except: print '['+time.strftime('%Y/%m/%d %H:%M:%S')+'] '+'ERROR. Trying again... (%s)' % proxy time.sleep(1) A: At least one error: h = urllib.urlopen(website, proxies = {'http': proxylist}) Should be h = urllib.urlopen(website, proxies = {'http': proxy})
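The bug the answer points at — passing the whole proxylist where a single proxy belongs — can be illustrated with a self-contained retry loop. `fetch` below is a hypothetical stand-in that just simulates a dead first proxy; the real code would call urllib.urlopen there:

```python
proxylist = ['10.0.0.1:8080', '10.0.0.2:3128']  # made-up addresses

def fetch(url, proxies):
    # Hypothetical stand-in for urllib.urlopen(url, proxies=...):
    # pretend the first proxy is dead and the second one works.
    if proxies['http'] == '10.0.0.1:8080':
        raise IOError('proxy refused connection')
    return 'page contents'

result = None
for proxy in proxylist:
    try:
        # Pass the *single* proxy from this loop iteration,
        # not the whole proxylist.
        result = fetch('http://example.com', proxies={'http': proxy})
        break
    except IOError:
        continue  # try the next proxy

print(result)  # 'page contents'
```

Passing `proxylist` instead of `proxy` makes every attempt fail identically, which matches the symptom of always landing in the except branch.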
Help with urllib + proxy in Python
My program isn't running properly as it should be... I'm only getting the error message (except part) of the urlopen with the proxy... why? At least one of the proxies was tested and works correctly... please, someone take a look at the code here: http://pastebin.com/cBfv5H8J edit: the code doesn't work on the first try part, this one try: h = urllib.urlopen(website, proxies = {'http': proxylist}) break and it always returns me the except: print '['+time.strftime('%Y/%m/%d %H:%M:%S')+'] '+'ERROR. Trying again... (%s)' % proxy time.sleep(1)
[ "At least one error:\nh = urllib.urlopen(website, proxies = {'http': proxylist})\n\nShould be\nh = urllib.urlopen(website, proxies = {'http': proxy})\n\n" ]
[ 0 ]
[]
[]
[ "proxy", "python", "urllib", "windows" ]
stackoverflow_0003178038_proxy_python_urllib_windows.txt
Q: Getting an external list of ip and turning in to variable dic in Python How can I do this: go to this website (http://www.samair.ru/proxy/time-01.htm) and get the list of IP addresses and turn it into a dictionary variable? With this code in particular, I can only get the first IP of the website ip = urllib.urlopen('http://www.samair.ru/proxy/time-01.htm').read() clientIp = re.search("(\d+\.\d+\.\d+\.\d+)", ip).group() print clientIp A: Use findall instead of search: ip = urllib.urlopen('http://www.samair.ru/proxy/time-01.htm').read() clientIp = re.findall(r"\d+\.\d+\.\d+\.\d+", ip) Note the “raw” string r"…" that prevents interpretation of the backslashes as escape characters. This gives you a list of strings containing the IP addresses. To turn it into a dictionary you need key–value pairs. A: Use re.findall() instead of re.search()
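As the answer notes, a dictionary needs key-value pairs. One natural pairing on a proxy-list page is address-to-port, which a two-group pattern yields directly. A sketch with made-up sample HTML (the real page's markup may differ):

```python
import re

# Sample markup standing in for the downloaded page.
html = "<td>203.0.113.5:8080</td><td>198.51.100.7:3128</td>"

# With two capture groups, findall returns (ip, port) tuples --
# ready-made key-value pairs for dict().
pairs = re.findall(r"(\d+\.\d+\.\d+\.\d+):(\d+)", html)
proxies = dict(pairs)

print(proxies)  # {'203.0.113.5': '8080', '198.51.100.7': '3128'}
```

If the page lists bare addresses without ports, the keys would have to come from somewhere else (e.g. `dict(enumerate(ips))` for simple numbering).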
Getting an external list of ip and turning in to variable dic in Python
How can I do this: Enter on this website (http://www.samair.ru/proxy/time-01.htm) and get the list of the ip address and turn it to a dictionary variable? whit these code in particular, I only can get the first ip of the website ip = urllib.urlopen('http://www.samair.ru/proxy/time-01.htm').read() clientIp = re.search("(\d+\.\d+\.\d+\.\d+)", ip).group() print clientIp
[ "Use findall instead of search:\nip = urllib.urlopen('http://www.samair.ru/proxy/time-01.htm').read()\nclientIp = re.findall(r\"\\d+\\.\\d+\\.\\d+\\.\\d+\", ip)\n\nNote the “raw” string r\"…\" that prevents interpretation of the backslashes as escape character.\nThis gives you a list of strings containing the IP ad...
[ 1, 1 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0003178101_python_windows.txt
Q: Print output in a single line I have the following code: >>> x = 0 >>> y = 3 >>> while x < y: ... print '{0} / {1}, '.format(x+1, y) ... x += 1 Output: 1 / 3, 2 / 3, 3 / 3, I want my output like: 1 / 3, 2 / 3, 3 / 3 I searched and found that the way to do this in a single line would be: sys.stdout.write('{0} / {1}, '.format(x+1, y)) Is there another way of doing it? I don't exactly feel comfortable with sys.stdout.write() since I have no idea how it is different from print. A: you can use print "something", (with a trailing comma, to not insert a newline), so try this ... print '{0} / {1}, '.format(x+1, y), #<= with a , A: I think that sys.stdout.write() would be fine, but the standard way in Python 2 is print with a trailing comma, as mb14 suggested. If you are using Python 2.6+ and want to be upward-compatible to Python 3, you can use the new print function which offers a more readable syntax: from __future__ import print_function print("Hello World", end="") A: No need for write. If you put a trailing comma after the print statement, you'll get what you need. Caveats: You will need to add a blank print statement at the end if you want the next text to continue on a new line. May be different in Python 3.x There will always be at least one space added as a separator. IN this case, that is okay, because you want a space separating it anyway. A: >>> while x < y: ... print '{0} / {1}, '.format(x+1, y), ... x += 1 ... 1 / 3, 2 / 3, 3 / 3, Notice the additional comma. A: You can use , in the end of print statement. while x<y: print '{0} / {1}, '.format(x+1, y) , x += 1 You can further read this.
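Beyond the trailing-comma trick, the exact desired output — with no trailing ", " after the last item — can be produced by building the pieces first and joining them, which sidesteps print's separator behaviour entirely:

```python
y = 3
# Build each "n / y" piece, then let join place the separator
# only *between* items, never after the last one.
parts = ['{0} / {1}'.format(x + 1, y) for x in range(y)]
line = ', '.join(parts)
print(line)  # 1 / 3, 2 / 3, 3 / 3
```

The trade-off is that nothing appears until the loop is finished, whereas the trailing-comma form prints incrementally.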
Print output in a single line
I have the following code: >>> x = 0 >>> y = 3 >>> while x < y: ... print '{0} / {1}, '.format(x+1, y) ... x += 1 Output: 1 / 3, 2 / 3, 3 / 3, I want my output like: 1 / 3, 2 / 3, 3 / 3 I searched and found that the way to do this in a single line would be: sys.stdout.write('{0} / {1}, '.format(x+1, y)) Is there another way of doing it? I don't exactly feel comfortable with sys.stdout.write() since I have no idea how it is different from print.
[ "you can use\n\nprint \"something\",\n\n(with a trailing comma, to not insert a newline), so\ntry this\n... print '{0} / {1}, '.format(x+1, y), #<= with a ,\n\n", "I think that sys.stdout.write() would be fine, but the standard way in Python 2 is print with a trailing comma, as mb14 suggested. If you are using Py...
[ 6, 3, 2, 2, 2 ]
[ "Here is a way to achieve what you want using itertools. This will also work ok for Python3 where print becomes a function\nfrom itertools import count, takewhile\ny=3\nprint(\", \".join(\"{0} / {1}\".format(x,y) for x in takewhile(lambda x: x<=y,count(1))))\n\nYou may find the following approach is easier to foll...
[ -1 ]
[ "printing", "python" ]
stackoverflow_0003178026_printing_python.txt
Q: How do I create a data structure that will be serialized to this JSON format in Python? I have a function that accepts a list of date objects and should output the following dictionary in JSON: { "2010":{ "1":{ "id":1, "title":"foo", "postContent":"bar" }, "7":{ "id":2, "title":"foo again", "postContent":"bar baz boo" } }, "2009":{ "6":{ "id":3, "title":"foo", "postContent":"bar" }, "8":{ "id":4, "title":"foo again", "postContent":"bar baz boo" } } } Basically I would like to access my objects by year and month number. What code can convert a list to this format in Python that can be serialized to the dictionary above in JSON? A: Something along the lines of this should work: from collections import defaultdict import json d = defaultdict(dict) for date in dates: d[date.year][date.month] = info_for_date(date) json.dumps(d) Where info_for_date is a function that returns a dict like those in your question.
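Fleshing out the answer's fragment into something runnable — `info_for_date` below is a hypothetical stand-in for whatever per-article lookup the real project uses, and the keys are stringified explicitly because JSON object keys are always strings (matching the quoted target format):

```python
import json
from collections import defaultdict
from datetime import date

def info_for_date(d):
    # Hypothetical stand-in for the real per-article lookup.
    return {'id': d.toordinal(), 'title': 'foo', 'postContent': 'bar'}

dates = [date(2010, 1, 5), date(2010, 7, 2), date(2009, 6, 9)]

d = defaultdict(dict)
for dt in dates:
    # defaultdict(dict) creates the per-year dict on first access.
    d[str(dt.year)][str(dt.month)] = info_for_date(dt)

# defaultdict is a dict subclass, so json.dumps handles it directly.
payload = json.dumps(d)
print(sorted(d))  # ['2009', '2010']
```

(If the keys were left as ints, json.dumps would stringify them anyway, but doing it up front keeps the in-memory structure identical to the JSON one.)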
How do I create a data structure that will be serialized to this JSON format in Python?
I have a function that accepts a list of date objects and should output the following dictionary in JSON: { "2010":{ "1":{ "id":1, "title":"foo", "postContent":"bar" }, "7":{ "id":2, "title":"foo again", "postContent":"bar baz boo" } }, "2009":{ "6":{ "id":3, "title":"foo", "postContent":"bar" }, "8":{ "id":4, "title":"foo again", "postContent":"bar baz boo" } } } Basically I would like to access my objects by year and month number. What code can convert a list to this format in python that can be serialized to the dictionary above in json?
[ "Something along the lines of this should work:\nfrom collections import defaultdict\nimport json\n\nd = defaultdict(dict)\nfor date in dates:\n d[date.year][date.month] = info_for_date(date)\njson.dumps(d)\n\nWhere info_for_date is a function that returns a dict like those in your question.\n" ]
[ 4 ]
[]
[]
[ "dictionary", "django", "json", "list", "python" ]
stackoverflow_0003178028_dictionary_django_json_list_python.txt
Q: Django Python Delete Project App Library If I want to delete a Django app or project, is there a way to cleanly delete it? Or a library in Python? How can I delete and reinstall libraries so I am SURE that nothing is left of that library? A: If you want to delete some Python library, go to /site-packages or /dist-packages, find this module (single file), package (directory) or egg file (look at the extension) and delete it. If you want to delete an app and you have it inside your project, simply delete the app directory, remove it from settings and remove all references to objects from this app.
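Before deleting anything by hand, it helps to confirm where a library actually lives: a module's `__file__` attribute points at the file (or package directory) you would be removing. A sketch using the stdlib json package as a harmless example — for a real third-party library you would import that library instead:

```python
import os
import json  # any importable package works as the example

# For a package, __file__ points at its __init__ module, so the
# containing directory is what you would delete; for a single-file
# module, the file itself is the target.
location = os.path.dirname(os.path.abspath(json.__file__))
print(location)  # e.g. .../lib/pythonX.Y/json
```

Running this for the library in question removes any doubt about which of several installed copies (site-packages vs. a framework path, say) is actually being imported.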
Django Python Delete Project App Library
If I want to delete a Django app or project, is there a way to cleanly delete it? Or a library in Python? How can I delete and reinstall libraries so I am SURE that nothing is left of that library?
[ "If you want to delete some python library go to /site-packages or /dist-packages, find this module(single file) or package(directory) or egg file (look at the extension) and delete this.\nIf you want to delete an app and you have it inside your project, simply delete app directory, remove it from settings and remo...
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0003178259_django_python.txt
Q: converting wav to mp3 (and vice versa) using GStreamer I am using Python bindings for Gstreamer and am using the following pipeline to convert a wav file to mp3. I used one of the suggestions in this question , with some modifications (as I was getting some errors when original syntax was used) gst.parse_launch("filesrc location=C:\\music.wav ! decodebin ! audioconvert ! lame ! filesink location=C:\\music.mp3") When I run this code in Python, I get no errors. However, it doesn't generate music.mp3 file. What else do I need to do so that it creates a new file music.mp3 A: your pipeline is correct - or more specifically, your choice of elements and properties is correct. the problem is most likely in another part of your code. have you set the pipeline to gst.STATE_PLAYING? pipeline = gst.parse_launch("filesrc location=C:\\music.wav ! decodebin ! audioconvert ! lame ! filesink location=C:\\music.mp3") pipeline.set_state(gst.STATE_PLAYING) there are numerous other common mistakes that can be made- posting your entire source code would be a great help! A: If you didn't get this working I suggest using ffmpeg to convert your files, it's very efficient and opensource, you can find a compiled windows version with WinFF which you can manipulate through the command line.
converting wav to mp3 (and vice versa) using GStreamer
I am using Python bindings for Gstreamer and am using the following pipeline to convert a wav file to mp3. I used one of the suggestions in this question , with some modifications (as I was getting some errors when original syntax was used) gst.parse_launch("filesrc location=C:\\music.wav ! decodebin ! audioconvert ! lame ! filesink location=C:\\music.mp3") When I run this code in Python, I get no errors. However, it doesn't generate music.mp3 file. What else do I need to do so that it creates a new file music.mp3
[ "your pipeline is correct - or more specifically, your choice of elements and properties is correct.\nthe problem is most likely in another part of your code. have you set the pipeline to gst.STATE_PLAYING?\npipeline = gst.parse_launch(\"filesrc location=C:\\\\music.wav ! decodebin ! audioconvert ! lame ! filesink...
[ 1, 0 ]
[]
[]
[ "gstreamer", "python" ]
stackoverflow_0002172000_gstreamer_python.txt
Q: How can I alter a file and write only the changes to disk - basically, sed (python)? Let's say I have a file /etc/conf1 it's contents are along the lines of option = banana name = monkey operation = eat and let's say I want to replace "monkey" with "ostrich". How can I do that without reading the file to memory, altering it and then just writing it all back? Basically, how can I modify the file "in place"? A: You can't. "ostrich" is one letter more than "monkey", so you'll have to rewrite the file at least from that point onwards. File systems do not support "shifting" file contents upwards or downwards. If it's just a small file, there's no reason to bother with even this, and you might as well rewrite the whole file. If it's a really large file, you'll need to reconsider the internal design of the file's contents, for example, with a block-based approach. A: You should look at the fileinput module: http://docs.python.org/library/fileinput.html There's an option to perform inplace editing via the input method: http://docs.python.org/library/fileinput.html#fileinput.input UPDATE - example code: import fileinput import re import sys for line in fileinput.input(inplace=True): sys.stdout.write(re.sub(r'monkey', 'ostrich', line)) Using sys.stdout.write so as not to add any extra newlines in. A: It depends on what you mean by "in place". How can you do it if you want to replace monkey with supercalifragilisticexpialidocious? Do you want to overwrite the remaining file? If not, you are going to have to read ahead and shift subsequent contents of the file forwards. A: CPU instructions operate on data which come from memory. The portion of the file you wish to read must be resident in memory before you can read it; before you write anything to disk, that information must be in memory. The whole file doesn't have to be there at once, but to do a search-replace on an entire file, every character of the file will pass through RAM at some point. 
What you're probably looking for is something like the mmap() system call. The above fileinput module sounds like a plausible thing to use. A: In-place modifications are only easy if you don't alter the size of the file or only append to it. The following example replaces the first byte of the file by an "a" character: import os fd = os.open("...", os.O_WRONLY | os.O_CREAT) os.write(fd, "a") os.close(fd) Note that Python's file objects don't support this, you have to use the low-level functions. For appending, open the file with the open() function in "a" mode. A: sed -i.bak 's/monkey/ostrich/' file
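When the replacement happens to be the same length as the original (say "monkey" -> "donkey"), a genuine in-place patch is possible and the rest of the file never moves — which is the same-size constraint the answers describe. A sketch of the mmap approach mentioned above, using a temporary file so it is self-contained:

```python
import mmap
import os
import tempfile

# Build a sample file to patch.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"name = monkey\n")
tmp.close()

with open(tmp.name, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    idx = mm.find(b"monkey")
    # Slice assignment on an mmap must keep the length identical --
    # exactly the "don't alter the size" rule from the answers.
    mm[idx:idx + len(b"monkey")] = b"donkey"
    mm.flush()
    mm.close()

with open(tmp.name, "rb") as f:
    patched = f.read()
os.remove(tmp.name)

print(patched)  # b'name = donkey\n'
```

For a different-length replacement ("ostrich" is one byte longer than "monkey"), this raises an error, and rewriting the file from the change point onward becomes unavoidable.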
How can I alter a file and write only the changes to disk - basically, sed (python)?
Let's say I have a file /etc/conf1 its contents are along the lines of option = banana name = monkey operation = eat and let's say I want to replace "monkey" with "ostrich". How can I do that without reading the file to memory, altering it and then just writing it all back? Basically, how can I modify the file "in place"?
[ "You can't. \"ostrich\" is one letter more than \"monkey\", so you'll have to rewrite the file at least from that point onwards. File systems do not support \"shifting\" file contents upwards or downwards.\nIf it's just a small file, there's no reason to bother with even this, and you might as well rewrite the whol...
[ 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "file", "file_manipulation", "fopen", "python", "sed" ]
stackoverflow_0003178135_file_file_manipulation_fopen_python_sed.txt
Q: Changing property names in Google application engine Is there any way to alter the Property names in the Google application engine for a Kind, or in other words is there a way to alter the column names of a table in Google application Engine (though it follows a different way to handle the data)? I am using python. Please suggest. Thanks in advance. A: Refactoring on Google AppEngine involves you having to either modify all of the records in your datastore as you make the change, or write the code so that it will still read the old value if the new value doesn't exist. Removing a column from the datastore is possible but not easy. More information can be found here.
Changing property names in Google application engine
Is there any way to alter the Property names in the Google application engine for a Kind, or in other words is there a way to alter the column names of a table in Google application Engine (though it follows a different way to handle the data)? I am using python. Please suggest. Thanks in advance.
[ "Refactoring on Google AppEngine involves you having to either modify all of the records in your datastore as you make the change, or write the code so that it will still read the old value if the new value doesn't exist.\nRemoving a column from the datastore is possible but not easy. More information can be found ...
[ 4 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0003178190_google_app_engine_python.txt
Q: I want to make this program remember settings I have tried unsuccessfully several times to get programs to remember settings after they've been destroyed. A large reason for that is because I don't have an example code to work off of. Below I have a simple program I wrote. I'd like it so that it both remembers the position of the scale, and the contents of the text widget upon restarting the program. I hate having to ask someone to write code for me, but I'm honestly stuck. I'm using Python 2.6.5 on Windows 7, BTW. Code: import Tkinter class simpleapp_tk(Tkinter.Tk): def __init__(self,parent): Tkinter.Tk.__init__(self,parent) self.parent = parent self.initialize() def initialize(self): self.sclX = Tkinter.Scale(self, from_=0, to=100, orient='horizontal',resolution=1,command=self.A) self.sclX.pack(ipadx=75) self.labelVar = Tkinter.StringVar() self.label = Tkinter.Label(self,textvariable=self.labelVar) self.label.pack(ipadx=75) self.frame = Tkinter.Frame(self,relief='ridge',borderwidth=4) self.frame.pack() self.LVariable = Tkinter.StringVar() self.s = Tkinter.Scrollbar(self.frame) self.L = Tkinter.Text(self.frame,borderwidth=0,font=('Arial', 10),width=30, height=15) self.s.config(command=self.L.yview,elementborderwidth=1) self.L.grid(column=0,row=0,sticky='EW') self.s.grid(column=1,row=0,sticky='NSEW') def A(self, event): self.labelVar.set(100 - self.sclX.get()) if __name__ == "__main__": app = simpleapp_tk(None) app.mainloop() A: It's not a matter of asking people to write code for you but knowing what to look for, you could write your own code after that! I think remembering settings like you describe is commonly done in two ways: Config file Registry entries You can then read in the stored value from either the config file/registry whenever your program loads, and adjust parameters to match. So now you go look up how you read/write files/registry entries and you are set! 
A: Depending on how you prefer to store settings you can also look into things like Shelve and Pickle/cPickle. I personally prefer Shelve because I tend to use dictionaries as settings containers and Shelve lets me store those as is. Full documentation available here: http://docs.python.org/library/shelve.html A: Your previous question where you had difficulty saving state using cPickle was a good start. I have added a couple of methods to your code and it will now save and load the data in the Scale and Text widgets using the pickle module. I've never used Shelve - that sounds like it would be easier based on what g.d.d.c says in his(?) answer. I store the widget values in a dictionary, then pickle the dictionary. import Tkinter import pickle class simpleapp_tk(Tkinter.Tk): def __init__(self, parent=None): Tkinter.Tk.__init__(self, parent) self.parent = parent self.initialize() self.load_data() self.protocol("WM_DELETE_WINDOW", self.save_data) def initialize(self): self.sclX = Tkinter.Scale(self, from_=0, to=100, orient='horizontal', resolution=1,command=self.update_label) self.sclX.pack(ipadx=75) self.labelVar = Tkinter.StringVar() self.label = Tkinter.Label(self,textvariable=self.labelVar) self.label.pack(ipadx=75) self.frame = Tkinter.Frame(self,relief='ridge',borderwidth=4) self.frame.pack() #self.LVariable = Tkinter.StringVar() self.s = Tkinter.Scrollbar(self.frame) self.L = Tkinter.Text(self.frame, borderwidth=0, font=('Arial', 10), width=30, height=15) self.s.config(command=self.L.yview, elementborderwidth=1) self.L.grid(column=0, row=0, sticky='EW') self.s.grid(column=1, row=0, sticky='NSEW') def update_label(self, event): self.labelVar.set(100 - self.sclX.get()) def save_data(self): data = {'scale': self.sclX.get(), 'text': self.L.get('1.0', 'end')} with file('config.data', 'wb') as f: pickle.dump(data, f) self.destroy() def load_data(self): try: with file('config.data', 'rb') as f: data = pickle.load(f) self.sclX.set(data['scale']) self.L.insert("end", 
data['text']) except IOError: # no config file exists pass if __name__ == "__main__": app = simpleapp_tk() app.mainloop()
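A minimal sketch of the Shelve alternative mentioned above (my own illustration, not code from either answer; the helper names and the demo file path are made up):

```python
import os
import shelve
import tempfile

def save_settings(path, settings):
    """Persist a dict of settings to disk; shelve handles the serialization."""
    db = shelve.open(path)
    try:
        for key, value in settings.items():
            db[key] = value
    finally:
        db.close()

def load_settings(path, defaults):
    """Return the defaults overridden by whatever was previously saved."""
    result = dict(defaults)
    db = shelve.open(path)
    try:
        result.update(db)  # Shelf is a mapping, so dict.update accepts it
    finally:
        db.close()
    return result

# Demo in a throwaway directory
_dir = tempfile.mkdtemp()
_path = os.path.join(_dir, "config")
save_settings(_path, {"scale": 42, "text": "hello"})
restored = load_settings(_path, {"scale": 0, "text": "", "extra": True})
```

Because the shelf behaves like a dictionary, the widget values from the Tkinter example could be stored under keys such as `"scale"` and `"text"` without any manual pickling.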
I want to make this program remember settings
I have tried unsuccessfully several times to get programs to remember settings after they've been destroyed. A large reason for that is because I don't have an example code to work off of. Below I have a simple program I wrote. I'd like it so that it both remembers the position of the scale, and the contents of the text widget upon restarting the program. I hate having to ask someone to write code for me, but I'm honestly stuck. I'm using Python 2.6.5 on Windows 7, BTW. Code: import Tkinter class simpleapp_tk(Tkinter.Tk): def __init__(self,parent): Tkinter.Tk.__init__(self,parent) self.parent = parent self.initialize() def initialize(self): self.sclX = Tkinter.Scale(self, from_=0, to=100, orient='horizontal',resolution=1,command=self.A) self.sclX.pack(ipadx=75) self.labelVar = Tkinter.StringVar() self.label = Tkinter.Label(self,textvariable=self.labelVar) self.label.pack(ipadx=75) self.frame = Tkinter.Frame(self,relief='ridge',borderwidth=4) self.frame.pack() self.LVariable = Tkinter.StringVar() self.s = Tkinter.Scrollbar(self.frame) self.L = Tkinter.Text(self.frame,borderwidth=0,font=('Arial', 10),width=30, height=15) self.s.config(command=self.L.yview,elementborderwidth=1) self.L.grid(column=0,row=0,sticky='EW') self.s.grid(column=1,row=0,sticky='NSEW') def A(self, event): self.labelVar.set(100 - self.sclX.get()) if __name__ == "__main__": app = simpleapp_tk(None) app.mainloop()
[ "It's not a matter of asking people to write code for you but knowing what to look for, you could write your own code after that!\nI think remembering settings like you describe is commonly done in two ways:\n\nConfig file\nRegistry entries\n\nYou can then read in the stored value from either the config file/regist...
[ 3, 2, 1 ]
[]
[]
[ "memory", "python", "tkinter" ]
stackoverflow_0003176984_memory_python_tkinter.txt
Q: How do I print some python function output to console (for debugging purposes, while using manage.py runserver) from within a django template I am working on a custom Django form field and accompanying widget. While rendering the template, I would like to inspect the form.field as a python object. How do I do that, because anything in a Django template outside of template tags and filters is rendered as text. A: You'll need to write and install a custom tag (or filter... though that might be considered somewhat bizarre, it may help you fit in more places) that, as a side effect, performs the logging calls you desire (or print>>sys.stderr or whatever). A: You could put pdb on the form field render surely? If you really want to spit the field value out to some sort of log then I recommend using logging and putting the logger into the form field's unicode method (or str depending on what version of django).
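A sketch of the custom-filter idea from the first answer (my own illustration; the filter name is made up). Only the plain-Python body is shown so it stands alone; in a real project it would live in a `templatetags` module and be registered on a `template.Library()` instance with `@register.filter`:

```python
import sys

# In a real Django app you would add, above this function:
#   from django import template
#   register = template.Library()
# and decorate it with @register.filter so templates can use
# {{ form.field|inspect_var }}.
def inspect_var(value):
    """Print a template variable's repr and type to the console running
    `manage.py runserver`, then return the value unchanged so the
    template still renders normally."""
    sys.stderr.write("TEMPLATE DEBUG: %r (%s)\n" % (value, type(value).__name__))
    return value
```

Returning the value unchanged is what makes the filter safe to drop into an existing template purely for debugging.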
How do I print some python function output to console (for debugging purposes, while using manage.py runserver) from within a django template
I am working on a custom Django form field and accompanying widget. While rendering the template, I would like to inspect the form.field as a python object. How do I do that, because anything in a Django template outside of template tags and filters is rendered as text.
[ "You'll need to write and install a custom tag (or filter... though that might considered be somewhat bizarre, it may help you fit in more places) that, as a side effect, performs the logging calls you desire (or print>>sys.stderr or whatever).\n", "You could put pdb on the form field render surely?\nIf you reall...
[ 1, 0 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0003177759_django_django_templates_python.txt
Q: Convert XML to python objects using lxml I'm trying to use the lxml library to parse an XML file...what I want is to use XML as the datasource, but still maintain the normal Django way of interacting with the resulting objects...from the docs, I can see that lxml.objectify is what I'm supposed to use, but I don't know how to proceed after: list = objectify.parse('myfile.xml') Any help will be very much appreciated. Thanks. A sample of the file (has about 100+ records) is this: <store> <book> <publisher>Hodder &...</publisher> <isbn>345123890</isbn> <author>King</author> <comments> <comment rank='1'>Interesting</comment> <comments> <pages>200</pages> </book> <book> <publisher>Penguin Books</publisher> <isbn>9011238XX</isbn> <author>Armstrong</author> <comments /> <pages>150</pages> </book> </store> From this, I want to do the following (something just as easy to write as Books.objects.all() and Books.object.get_object_or_404(isbn=selected) is most preferred ): Display a list of all books with their respective attributes Enable viewing of further details of a book by selecting it from the list A: Firstly, "list" isn't a very good variable name because it "shadows" the built-in type "list." Now, say you have this xml: <root> <node1 val="foo">derp</node1> <node2 val="bar" /> </root> Now, you could do this: root = objectify.parse("myfile.xml").getroot() print root.node1.get("val") # prints "foo" print root.node1.text # prints "derp" print root.node2.get("val") # prints "bar" Another tip: when you have lots of nodes with the same name, you can loop over them. >>> xml = """<root> <node val="foo">derp</node> <node val="bar" /> </root>""" >>> root = objectify.fromstring(xml) >>> for node in root.node: print node.get("val") foo bar Edit You should be able to simply set your django context to the books object, and use that from your templates. 
context = dict(books = root.book, # other stuff ) And then you'll be able to iterate through the books in the template, and access each book object's attributes.
Convert XML to python objects using lxml
I'm trying to use the lxml library to parse an XML file...what I want is to use XML as the datasource, but still maintain the normal Django way of interacting with the resulting objects...from the docs, I can see that lxml.objectify is what I'm supposed to use, but I don't know how to proceed after: list = objectify.parse('myfile.xml') Any help will be very much appreciated. Thanks. A sample of the file (has about 100+ records) is this: <store> <book> <publisher>Hodder &...</publisher> <isbn>345123890</isbn> <author>King</author> <comments> <comment rank='1'>Interesting</comment> <comments> <pages>200</pages> </book> <book> <publisher>Penguin Books</publisher> <isbn>9011238XX</isbn> <author>Armstrong</author> <comments /> <pages>150</pages> </book> </store> From this, I want to do the following (something just as easy to write as Books.objects.all() and Books.object.get_object_or_404(isbn=selected) is most preferred ): Display a list of all books with their respective attributes Enable viewing of further details of a book by selecting it from the list
[ "Firstly, \"list\" isn't a very good variable because it \"shadows\" the built-in type \"list.\"\nNow, say you have this xml:\n<root>\n<node1 val=\"foo\">derp</node1>\n<node2 val=\"bar\" />\n</root>\n\nNow, you could do this:\nroot = objectify.parse(\"myfile.xml\")\nprint root.node1.get(\"val\") # prints \"foo\"\np...
[ 1 ]
[]
[]
[ "lxml", "python", "xml_parsing" ]
stackoverflow_0003178863_lxml_python_xml_parsing.txt
Q: SQLAlchemy many-to-many relationship on declarative tables I have the following tables defined declaratively (very simplified version): class Profile(Base): __tablename__ = 'profile' id = Column(Integer, primary_key = True) name = Column(String(65), nullable = False) def __init__(self, name): self.name = name class Question(Base): __tablename__ = 'question' id = Column(Integer, primary_key = True) description = Column(String(255), nullable = False) number = Column(Integer, nullable = False, unique = True) def __init__(self, description, number): self.description = description self.number = number class Answer(Base): __tablename__ = 'answer' profile_id = Column(Integer, ForeignKey('profile.id'), primary_key = True) question_id = Column(Integer, ForeignKey('question.id'), primary_key = True) value = Column(Integer, nullable = False) def __init__(self, profile_id, question_id, value): self.profile_id = profile_id self.question_id = question_id self.value = value Profile is linked to Question via a many-to-many relationship. In the linking table (Answer) I need to store a value for the answer. The documentation says I need to use an association object to do this but it's confusing me and I can't get it to work. How do I define the many-to-many relationship for the Profile and Question tables using Answer as the intermediary table? A: The documentation says I need to use an association object to do this but it's confusing me and I can't get it to work. That's right. And the Answer class is your association object as it maps to the association table 'answer'. How do I define the many-to-many relationship for the Profile and Question tables using Answer as the intermediary table? The code you've presented in your question is correct. It only needs additional information about relationships on the ORM level: from sqlalchemy.orm import relationship ... class Profile(Base): __tablename__ = 'profile' ... answers = relationship("Answer", backref="profile") ... 
class Question(Base): __tablename__ = 'question' ... answers = relationship("Answer", backref="question") ... Also, you shouldn't set up values for profile_id and question_id in your Answer's init function, because it's the ORM that's responsible for setting them accordingly based on your assignments to relationship attributes of your objects. You may be interested in reading documentation for declarative, especially the part about configuring relationships. Reading about working with related objects may be helpful as well.
SQLAlchemy many-to-many relationship on declarative tables
I have the following tables defined declaratively (very simplified version): class Profile(Base): __tablename__ = 'profile' id = Column(Integer, primary_key = True) name = Column(String(65), nullable = False) def __init__(self, name): self.name = name class Question(Base): __tablename__ = 'question' id = Column(Integer, primary_key = True) description = Column(String(255), nullable = False) number = Column(Integer, nullable = False, unique = True) def __init__(self, description, number): self.description = description self.number = number class Answer(Base): __tablename__ = 'answer' profile_id = Column(Integer, ForeignKey('profile.id'), primary_key = True) question_id = Column(Integer, ForeignKey('question.id'), primary_key = True) value = Column(Integer, nullable = False) def __init__(self, profile_id, question_id, value): self.profile_id = profile_id self.question_id = question_id self.value = value Profile is linked to Question via a many-to-many relationship. In the linking table (Answer) I need to store a value for the answer. The documentation says I need to use an association object to do this but it's confusing me and I can't get it to work. How do I define the many-to-many relationship for the Profile and Question tables using Answer as the intermediary table?
[ "\nThe documentation says I need to use\n an association object to do this but\n it's confusing me and I can't get it\n to work.\n\nThat's right. And the Answer class is your association object as it maps to the association table 'answer'.\n\nHow do I define the many-to-many\n relationship for the Profile and\n...
[ 13 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0003174979_python_sqlalchemy.txt
Q: Create a structure in a flat xml file I have an xml file made like this: <car>Ferrari</car> <color>red</color> <speed>300</speed> <car>Porsche</car> <color>black</color> <speed>310</speed> I need to have it in this form: <car name="Ferrari"> <color>red</color> <speed>300</speed> </car> <car name="Porsche"> <color>black</color> <speed>310</speed> </car> How can I do this? I'm struggling because I can't think of a way to create the structure I need from the flat list of tags in the original xml file. My language of choice is Python, but any suggestion is welcome. A: XSLT is the perfect tool for transforming one XML structure into another. <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <!-- copy the root element and handle its <car> children --> <xsl:template match="/root"> <xsl:copy> <xsl:apply-templates select="car" /> </xsl:copy> </xsl:template> <!-- car elements become a container for their properties --> <xsl:template match="car"> <car name="{normalize-space()}"> <!-- ** see 1) --> <xsl:copy-of select="following-sibling::color[1]" /> <xsl:copy-of select="following-sibling::speed[1]" /> </car> </xsl:template> </xsl:stylesheet> 1) For this to work, your XML has to have a <color> and a <speed> for every <car>. If that's not guaranteed, or number and kind of properties is generally variable, replace the two lines with the generic form of the copy statement: <!-- any following-sibling element that "belongs" to the same <car> --> <xsl:copy-of select="following-sibling::*[ generate-id(preceding-sibling::car[1]) = generate-id(current()) ]" /> Applied to your XML (I implied a document element named <root>), this would be the result <root> <car name="Ferrari"> <color>red</color> <speed>300</speed> </car> <car name="Porsche"> <color>black</color> <speed>310</speed> </car> </root> Sample code that applies XSLT to XML in Python should be really easy to find, so I omit that here. It'll be hardly more than four or five lines of Python code. 
A: I don't know about python, but presuming you had an XML parser that gave you hierarchical access to the nodes in an XML document, the semantics you'd want would be something like the following (warning, I tend to use PHP). Basically, store any non-"car" tags, and then when you encounter a new "car" tag treat it as a delimiting field and create the assembled XML node: // Create an input and output handle input_handle = parse_xml_document(); output_handle = new_xml_document(); // Assuming the <car>, <color> etc. nodes are // the children of some, get them as an array list_of_nodes = input_handle.get_list_child_nodes(); // These are empty variables for storing our data as we parse it var car, color, speed = NULL foreach(list_of_nodes as node) { if(node.tag_name() == "speed") { speed = node.value(); // etc for each type of non-delimiting field } if(node.tag_name() == "car") { // If there's already a car specified, take its data, // insert it into the output xml structure and then if(car != NULL) { // Add a new child node to the output document node = output_handle.append_child_node("car"); // Set the attribute on this new output node node.set_attribute("name", node.value()); // Add the stored child attributes node.add_child("color", color); node.add_child("speed", speed); } // Replace the value of car afterwards. This allows the // first iteration to happen when there is no stored value // for "car". 
car = node.value(); } } A: IF your real life data is as simple as your example and there are no errors in it, you can use a regular expression substitution to do it in one hit: import re guff = """ <car>Ferrari</car> <color>red</color> <speed>300</speed> <car>Porsche</car> <color>black</color> <speed>310</speed> """ pattern = r""" <car>([^<]+)</car>\s* <color>([^<]+)</color>\s* <speed>([^<]+)</speed>\s* """ repl = r"""<car name="\1"> <color>\2</color> <speed>\3</speed> </car> """ regex = re.compile(pattern, re.VERBOSE) output = regex.sub(repl, guff) print output Otherwise you had better read it 3 lines at a time, do some validations, and write it out one "car" element at a time, either using string processing or ElementTree. A: Assuming the first element within the root is a car element, and all non-car elements "belong" to the last car: import xml.etree.cElementTree as etree root = etree.XML('''<root> <car>Ferrari</car> <color>red</color> <speed>300</speed> <car>Porsche</car> <color>black</color> <speed>310</speed> </root>''') new_root = etree.Element('root') for elem in root: if elem.tag == 'car': car = etree.SubElement(new_root, 'car', name=elem.text) else: car.append(elem) new_root would be: <root><car name="Ferrari"><color>red</color> <speed>300</speed> </car><car name="Porsche"><color>black</color> <speed>310</speed> </car></root> (I've assumed that the pretty whitespace was not important)
Create a structure in a flat xml file
I have an xml file made like this: <car>Ferrari</car> <color>red</color> <speed>300</speed> <car>Porsche</car> <color>black</color> <speed>310</speed> I need to have it in this form: <car name="Ferrari"> <color>red</color> <speed>300</speed> </car> <car name="Porsche"> <color>black</color> <speed>310</speed> </car> How can I do this? I'm struggling because I can't think of a way to create the structure I need from the flat list of tags in the original xml file. My language of choice is Python, but any suggestion is welcome.
[ "XSLT is the perfect tool for transforming one XML structure into another.\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n\n <!-- copy the root element and handle its <car> children -->\n <xsl:template match=\"/root\">\n <xsl:copy>\n <xsl:apply-templates select=\"car...
[ 8, 1, 0, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0003178584_python_xml.txt
Q: Accessing static properties in Python I am relatively new to Python and was hoping someone could explain the following to me: class MyClass: Property1 = 1 Property2 = 2 print MyClass.Property1 # 1 mc = MyClass() print mc.Property1 # 1 Why can I access Property1 both statically and through a MyClass instance? A: The code class MyClass: Property1 = 1 creates a class MyClass which has a dict: >>> MyClass.__dict__ {'Property1': 1, '__doc__': None, '__module__': '__main__'} Notice the key-value pair 'Property1': 1. When you say MyClass.Property1, Python looks in the dict MyClass.__dict__ for the key Property1 and if it finds it, returns the associated value 1. >>> MyClass.Property1 1 When you create an instance of the class, >>> mc = MyClass() a dict for the instance is also created: >>> mc.__dict__ {} Notice this dict is empty. When you say mc.Property1, Python first looks in mc.__dict__ for the 'Property1' key. Since it does not find it there, it looks in the dict of mc's class, that is, MyClass.__dict__. >>> mc.Property1 1 Note that there is much more to the story of Python attribute access. (I haven't mentioned the important rules concerning descriptors, for instance.) But the above tells you the rule for most common cases of attribute access.
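The lookup order described in the answer can be verified directly: assigning through the instance creates an entry in the instance's own dict that shadows the class attribute, and deleting it makes lookup fall back to the class again.

```python
class MyClass:
    Property1 = 1

mc = MyClass()
found_on_class = mc.Property1    # mc.__dict__ is empty, so the class dict is used
mc.Property1 = 99                # creates 'Property1' in mc.__dict__
shadowed = mc.Property1          # the instance dict now wins
class_value = MyClass.Property1  # the class attribute itself is untouched
del mc.Property1                 # removes only the instance entry
fallback = mc.Property1          # lookup falls back to the class once more
```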
Accessing static properties in Python
I am relatively new to Python and was hoping someone could explain the following to me: class MyClass: Property1 = 1 Property2 = 2 print MyClass.Property1 # 1 mc = MyClass() print mc.Property1 # 1 Why can I access Property1 both statically and through a MyClass instance?
[ "The code \nclass MyClass:\n Property1 = 1\n\ncreates a class MyClass which has a dict:\n>>> MyClass.__dict__\n{'Property1': 1, '__doc__': None, '__module__': '__main__'}\n\nNotice the key-value pair 'Property1': 1. \nWhen you say MyClass.Property1, Python looks in the dict MyClass.__dict__ for the key Property1 a...
[ 30 ]
[]
[]
[ "python" ]
stackoverflow_0003179474_python.txt
Q: How to detect source code in a text? Is it possible to detect a programming language source code (primarily Java and C# ) in a text? For example I want to know whether there is any source code part in this text. .. text text text text text text text text text text text text text text text text text text text text text text text text text text text public static Person createInstance() { return new Person();} text text text text text text text text text text text text text text text text text text text text text text text text text text text .. I have been searching this for a while and I couldn't find anything. A solution with Python would be wonderful. Regards. A: There are some syntax highlighters around (pygments, google-code-prettify) and they've solved code detection and classification. Studying their sources could give an impression how it is done. (now that I looked at pygments again - I don't know if they can autodetect the programming language. But google-code-prettify definitly can do it) A: You would need a database of keywords with characteristics of those keywords (definition, control structures, etc.), as well as a list of operators, special characters that would be used throughout the languages structure (eg (},*,||), and a list of regex patterns. The best bet, to reduce iterations, would be to search on the keywords/operators/characters. Using a spacial/frequency formula, only start at text that may be a language, based on the value of the returned formula. Then it's off to identifying what language it is and where it ends. Because many languages have similar code, this might be hard. Which language is the following? for(i=0;i<10;i++){ // for loop } Without the comment it could be many different types of languages. With the comment, you could at least throw out Perl, since it uses # as the comment character, but it could still be JavaScript, C/C++, etc. 
Basically, you will need to do a lot of recursive lookups to identify proper code, which means that if you want something quick, you'll need a beast of a computer, or cluster of computers. Additionally, the search formula and identification formula will need to be well refined, for each language. Code identification without proper library calls or includes may be impossible, unless listing that it could belong to many languages, which you'll need a syntax library for.
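As a concrete starting point for the keyword/operator scoring both answers hint at, here is a toy heuristic of my own (the token list and threshold are arbitrary illustrations, and it will misfire on code-heavy prose):

```python
import re

# Tokens that are common in Java/C# source but rare in ordinary prose.
CODE_TOKENS = re.compile(
    r"[{};]"                                                   # braces, semicolons
    r"|\b(?:public|static|void|class|return|new|int|for|while)\b"  # keywords
    r"|==|\+\+|->"                                             # operators
)

def looks_like_code(text, threshold=3):
    """Guess whether `text` contains source code by counting code-ish tokens."""
    return len(CODE_TOKENS.findall(text)) >= threshold
```

A real detector would refine this per language, as the answer notes, and track where a matching region starts and ends rather than scoring the whole text at once.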
How to detect source code in a text?
Is it possible to detect a programming language source code (primarily Java and C# ) in a text? For example I want to know whether there is any source code part in this text. .. text text text text text text text text text text text text text text text text text text text text text text text text text text text public static Person createInstance() { return new Person();} text text text text text text text text text text text text text text text text text text text text text text text text text text text .. I have been searching this for a while and I couldn't find anything. A solution with Python would be wonderful. Regards.
[ "There are some syntax highlighters around (pygments, google-code-prettify) and they've solved code detection and classification. Studying their sources could give an impression how it is done.\n(now that I looked at pygments again - I don't know if they can autodetect the programming language. But google-code-pret...
[ 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003179439_python.txt
Q: using Pygsl with GCC 4.0 in Python I am trying to install pygsl using latest version of GCC, i.e.: $ gcc --version i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659) I get the error: $ sudo python setup.py build numpy Building testing ufuncs! running build running build_py running build_ext building 'errno' extension C compiler: gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-DSWIG_COBJECT_TYPES=1 -DGSL_RANGE_CHECK=1 -DDEBUG=1 -DNUMERIC=0 -DPYGSL_GSL_MAJOR_VERSION=1 -DPYGSL_GSL_MINOR_VERSION=9 -UNDEBUG -I/usr/local/include -IInclude -I. -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy-2.0.0.dev-py2.6-macosx-10.6-universal.egg/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c' gcc-4.0: src/init/errorno.c In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory lipo: can't figure out the architecture type of: /var/tmp//ccMNNq87.out In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory In file included from 
/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory lipo: can't figure out the architecture type of: /var/tmp//ccMNNq87.out error: Command "gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DSWIG_COBJECT_TYPES=1 -DGSL_RANGE_CHECK=1 -DDEBUG=1 -DNUMERIC=0 -DPYGSL_GSL_MAJOR_VERSION=1 -DPYGSL_GSL_MINOR_VERSION=9 -UNDEBUG -I/usr/local/include -IInclude -I. -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy-2.0.0.dev-py2.6-macosx-10.6-universal.egg/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c src/init/errorno.c -o build/temp.macosx-10.3-fat-2.6/src/init/errorno.o" failed with exit status 1 Any idea what might be causing this? thanks. A: Although gcc-4.2 is installed, you see that the build is using gcc-4.0 -- confusing. Where's the "gcc-4.0" coming from? Maybe the setup.py, or ~/.pydistutils.cfg, or export CC=gcc-4.0 or ... just guessing, I don't have pygsl. Can you get gcc-4.0 out of the way, as described in SO setting-gcc-4-2-as-the-default-compiler-on-mac-os-x-leopard?
using Pygsl with GCC 4.0 in Python
I am trying to install pygsl using latest version of GCC, i.e.: $ gcc --version i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659) I get the error: $ sudo python setup.py build numpy Building testing ufuncs! running build running build_py running build_ext building 'errno' extension C compiler: gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-DSWIG_COBJECT_TYPES=1 -DGSL_RANGE_CHECK=1 -DDEBUG=1 -DNUMERIC=0 -DPYGSL_GSL_MAJOR_VERSION=1 -DPYGSL_GSL_MINOR_VERSION=9 -UNDEBUG -I/usr/local/include -IInclude -I. -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy-2.0.0.dev-py2.6-macosx-10.6-universal.egg/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c' gcc-4.0: src/init/errorno.c In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory lipo: can't figure out the architecture type of: /var/tmp//ccMNNq87.out In file included from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory In file included from 
/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/unicodeobject.h:4, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:85, from src/init/errorno.c:5: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory lipo: can't figure out the architecture type of: /var/tmp//ccMNNq87.out error: Command "gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DSWIG_COBJECT_TYPES=1 -DGSL_RANGE_CHECK=1 -DDEBUG=1 -DNUMERIC=0 -DPYGSL_GSL_MAJOR_VERSION=1 -DPYGSL_GSL_MINOR_VERSION=9 -UNDEBUG -I/usr/local/include -IInclude -I. -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy-2.0.0.dev-py2.6-macosx-10.6-universal.egg/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c src/init/errorno.c -o build/temp.macosx-10.3-fat-2.6/src/init/errorno.o" failed with exit status 1 Any idea what might be causing this? thanks.
[ "Although gcc-4.2 is installed, you see that the build is using gcc-4.0 -- confusing.\nWhere's the \"gcc-4.0\" coming from ? Maybe the setup.py,\nor ~/.pydistutils.cfg, or export CC-gcc-4.0 or ... just guessing, I don't have pygsl.\nCan you get gcc-4.0 out of the way, as described in\nSO setting-gcc-4-2-as-the-defa...
[ 1 ]
[]
[]
[ "gsl", "numpy", "pygsl", "python", "scipy" ]
stackoverflow_0003172513_gsl_numpy_pygsl_python_scipy.txt
Q: Problem unpacking list of lists in a for loop? I have a list of lists that I want to unpack in for loops, but I'm running into an issue. >>> a_list = [(date(2010, 7, 5), ['item 1', 'item 2']), (date(2010, 7, 6), ['item 1'])] >>> >>> for set in a_list: ... a, b = set ... print a, b ... 2010-07-05 ['item 1', 'item 2'] 2010-07-06 ['item 1'] >>> >>> for set in a_list: ... for a, b in set: ... print a, b ... Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: 'datetime.date' object is not iterable How come the first one works, but the second one doesn't? A: I think you're looking for something like this: >>> for a, b in a_list: print(a, b) 2010-07-05 ['item 1', 'item 2'] 2010-07-06 ['item 1'] Also, note, set is a bad name for a variable as it shadows built-in. A: Mostly because they are completely different: In the first loop, set is (date(2010, 7, 5), ['item 1', 'item 2']) and you unpack it. a,b and set have the same length so this works. In the 2nd you loop over set (a tuple with 2 elements, that's why you can loop over it) and try to unpack the first element: The first iteration of the loop does tmp = set[0] which is date(2010, 7, 5), then you try a,b = tmp which fails with the given error message. A: for a,b in set is equivalent to a,b = set[0] ... loop ... a,b = set[1] ... loop ... So Python tried to unpack the first element in set into the tuple a,b which doesn't work.
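The difference between the two loops, in miniature: iterating over one (date, items) pair visits the pair's two elements, and `for a, b in pair` then tries to unpack each element on its own. A string stands in for the date here so the example is self-contained:

```python
pair = ("2010-07-05", ["item 1", "item 2"])

a, b = pair  # unpacking the pair itself: fine

# `for a, b in pair:` would instead unpack each element individually;
# the 10-character string raises ValueError here (a datetime.date object,
# which is not iterable at all, raises TypeError as in the question),
# while the two-item list happens to unpack successfully.
errors = []
for elem in pair:
    try:
        x, y = elem
    except (ValueError, TypeError) as exc:
        errors.append(type(exc).__name__)
```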
Problem unpacking list of lists in a for loop?
I have a list of lists that I want to unpack in for loops, but I'm running into an issue. >>> a_list = [(date(2010, 7, 5), ['item 1', 'item 2']), (date(2010, 7, 6), ['item 1'])] >>> >>> for set in a_list: ... a, b = set ... print a, b ... 2010-07-05 ['item 1', 'item 2'] 2010-07-06 ['item 1'] >>> >>> for set in a_list: ... for a, b in set: ... print a, b ... Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: 'datetime.date' object is not iterable How come the first one works, but the second one doesn't?
[ "I think you're looking for something like this:\n>>> for a, b in a_list:\n print(a, b)\n\n\n2010-07-05 ['item 1', 'item 2']\n2010-07-06 ['item 1']\n\nAlso, note, set is a bad name for a variable as it shadows built-in.\n", "Mostly because they are completely different:\nIn the first loop, set is (date(2010, 7...
[ 3, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0003180132_python.txt
Q: Python: **kargs instead of overloading? I have a conceptual Python design dilemma. Say I have a City class, which represents a city in the database. The City object can be initialized in two ways: An integer (actually, an ID of an existing city in a database) A list of properties (name, country, population, ...), which will generate a new city in the database, and retrieve its ID. This means that the City object will always have an ID - either the initialized ID or a newly-created ID derived from the database. The classic Java approach would overload the constructor - One constructor would get a single int parameter, and the other would get numerous strongly-typed parameters. I've failed to find an elegant way to do it in Python: I can create a base class with a single method get_city_id, and derive CityFromID and CityFromNewData from it, but that's a lot of effort to work around this language lacuna. Using class methods seems awkward. Using a constructor with a long list of parameters is also awkward: I'd put both city id and the alternatives, and verify within the method that only a specific subset have values. Using **kargs seems very inelegant, because the signature of the constructor does not clearly state the required input parameters, and docstrings just ain't enough: class City(object): def __init__(self, city_id=None, *args, **kargs): try: if city_id==None: self.city_id=city_id else: self.city_name=kargs['name'] except: error="A city object must be instanciated with a city id or with"+\ " full city details." raise NameError(error) Is there a Pythonic, elegant solution to constructor overloading? Adam A: How about: class City(object): def __init__(self, name, description, country, populations): self.city_name = name # etc.
@classmethod def from_id(cls, city_id): # initialise from DB Then you can do normal object creation: >>> c = City('Hollowberg', '', 'Densin', 3) >>> c.id 1233L >>> c2 = City.from_id(1233) Also you might want to check out SQLAlchemy (and Elixir) for nicer ways to do these things A: There is a design pattern called Data Access Object that is usually used in your case. According to it you should separate fetching and creation of data objects in two classes City and CityDAO: class City: def __init__(self, name, country): self.name = name self.country = country class CityDAO: def fetch(self, id): return query(...) def insert(self, city): query(...) A: I think that the class (factory) method is the best one because already the method name states explicitly what is done. Two free-standing functions would also be fine: def load_existing_city(id): ... def create_new_city(name, population, ...): ...
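A runnable sketch of the classmethod-factory approach from the first answer; the in-memory `_DB` dict and the sample row are made-up stand-ins for a real database lookup:

```python
class City(object):
    # _DB is a made-up in-memory stand-in for a real database table.
    _DB = {1233: ('Hollowberg', 'Densin')}

    def __init__(self, name, country):
        self.name = name
        self.country = country

    @classmethod
    def from_id(cls, city_id):
        # Look the row up "in the database" and build a normal instance.
        name, country = cls._DB[city_id]
        return cls(name, country)

c = City.from_id(1233)
print(c.name, c.country)
```

The alternate constructor is just an ordinary method that ends by calling `cls(...)`, so subclasses inherit it for free.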
Python: **kargs instead of overloading?
I have a conceptual Python design dilemma. Say I have a City class, which represents a city in the database. The City object can be initialized in two ways: An integer (actually, an ID of an existing city in a database) A list of properties (name, country, population, ...), which will generate a new city in the database, and retrieve its ID. This means that the City object will always have an ID - either the initialized ID or a newly-created ID derived from the database. The classic Java approach would overload the constructor - One constructor would get a single intparameter, and the other would get numerous strongly-typed parameters. I've failed to find an elegant way to do it in Python: I can create a base class with a single method get_city_id, and derive CityFromID and CityFromNewData from it, but that's a lot of effort to work around this language lacuna. Using class methods seems awkward. Using a constructor with a long list of parameters is also awkward: I'd put both city id and the alternatives, and verify within the method that that only a specific subset have values. Using **kargs seems very inelegant, because the signature of the constructor does not clearly state the required input parameters, and docstrings just ain't enough: class City(object): def __init__(self, city_id=None, *args, **kargs): try: if city_id==None: self.city_id=city_id else: self.city_name=kargs['name'] except: error="A city object must be instanciated with a city id or with"+\ " full city details." raise NameError(error) Is there a Pythonic, elegant solution to constructor overloading? Adam
[ "How about:\nclass City(object):\n def __init__(self, name, description, country, populations):\n self.city_name = name\n # etc.\n\n @classmethod\n def from_id(cls, city_id):\n # initialise from DB \n\nThen you can do normal object creation:\n >>> c = City('Hollowberg', '', 'Densin', 3)\n >>> ...
[ 7, 4, 2 ]
[]
[]
[ "design_patterns", "overloading", "python" ]
stackoverflow_0003179460_design_patterns_overloading_python.txt
Q: Python, ConfigParser: What is 'magical interpolation' The documentation for ConfigParser in Python talks a lot about the so-called "magical interpolation" feature, but never explains what it actually does. I've tried searching for it, but haven't found any answers. A: bad_subj below would be parsed into 'Notify [failure]' bad_subj: %(subj)s [failure] subj: Notify
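The answer's two-line config can be exercised directly (a Python 3 sketch; the `[mail]` section name is invented for the example, since options must live inside a section):

```python
from configparser import ConfigParser  # named ConfigParser module in Python 2

cfg = ConfigParser()
cfg.read_string("""
[mail]
subj = Notify
bad_subj = %(subj)s [failure]
""")

# %(subj)s is interpolated from the subj option in the same section.
print(cfg.get('mail', 'bad_subj'))
```

"Magical interpolation" just means `%(name)s` references inside values are expanded from other options when you call `get()`.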
Python, ConfigParser: What is 'magical interpolation'
The documentation for ConfigParser in Python talks a lot about the so-called "magical interpolation" feature, but never explains what it actually does. I've tried searching for it, but haven't found any answers.
[ "bad_subj below would be parsed into 'Notify [failure]'\nbad_subj: %(subj)s [failure]\nsubj: Notify\n\n" ]
[ 6 ]
[]
[]
[ "configparser", "python" ]
stackoverflow_0003180489_configparser_python.txt
Q: Does it make sense to check for identity in __eq__? When implementing a custom equality function for a class, does it make sense to check for identity first? An example: def __eq__(self, other): return (self is other) or (other criteria) This is interesting for cases when the other criteria may be more expensive (e.g. comparing some long strings). A: It may be a perfectly reasonable shortcut to check for identity first, and in equality methods good shortcuts (for both equality and non equality) are what you should be looking for so that you can return as soon as possible. But, on the other hand, it could also be a completely superfluous check if your test for equality is otherwise cheap and you are unlikely in practice to be comparing an object with itself. For example, if equality between objects can be gauged by comparing one or two integers then this should be quicker than the identity test, so in less than the time it would take to compare ids you've got the whole answer. And remember that if you check the identities and the objects don't have the same id (which is likely in most scenarios) then you've not gained anything as you've still got to do the full check. So if full equality checking is not cheap and it's possible that an object could be compared against itself, then checking identity first can be a good idea. Note that another reason the check isn't done by default is that it is quite reasonable (though rare) for objects with equal identities to compare as non equal, for example: >>> s = float('nan') >>> s == s False A: necessary: no does it make sense: sure, why not? No such check is done by default, as you can see here: class bad(object): def __eq__(self, other): return False x = bad() print x is x, x==x # True, False A: When you implement custom equality in a class, you can decide for yourself whether to check for identity first. It's entirely up to you.
Note that in Python, it's also perfectly valid to decide that __eq__ and __ne__ will return the same value for a given argument; so it's possible to define equality such that identity isn't a shortcut. It's certainly a speed improvement, although how much of one depends on the complexity of the method. I generally don't bother in my custom classes, but I don't have a lot of speed-critical code (and where I do, object comparisons aren't the hotspot). For most of my objects, the equality method looks like: def __eq__(self, o): try: return self.x == o.x and self.y == o.y except AttributeError: return False I could easily add an if self is o: return True check at the beginning of the method. Also remember to override __hash__ if you override __eq__, or you'll get odd behaviors in sets and dicts. A: I asked a similar question on comp.lang.python a few years ago - here is the thread. The conclusions at that time were that the up-front identity test was worth it if you did many tests for equality of objects with themselves, or if your other equality testing logic was slow. A: This is only done for performance reasons. At one programming job I worked on, in Java, this was always done, although it does not change any functionality.
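Putting the identity shortcut together with the __hash__ advice from the answers, a minimal sketch (the Point class and its fields are invented for illustration):

```python
class Point(object):
    """Made-up example: identity check as a cheap shortcut before the
    field-by-field comparison."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if self is other:           # the shortcut under discussion
            return True
        try:
            return self.x == other.x and self.y == other.y
        except AttributeError:
            return NotImplemented   # let the other operand try

    def __hash__(self):
        # Keep hash consistent with __eq__, per the answer's advice.
        return hash((self.x, self.y))

p = Point(1, 2)
print(p == p, p == Point(1, 2), p == Point(3, 4))
```

Note the NaN caveat from the first answer still applies: with the shortcut, `Point(float('nan'), 0)` compares equal to itself even though its fields do not.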
Does it make sense to check for identity in __eq__?
When implementing a custom equality function for a class, does it make sense to check for identity first? An example: def __eq__(self, other): return (self is other) or (other criteria) This is interesting for cases when the other criteria may be more expensive (e.g. comparing some long strings).
[ "It may be a perfectly reasonable shortcut to check for identity first, and in equality methods good shortcuts (for both equality and non equality) are what you should be looking for so that you can return as soon as possible.\nBut, on the other hand, it could also be a completely superfluous check if your test for...
[ 7, 3, 2, 1, 0 ]
[]
[]
[ "equality", "python" ]
stackoverflow_0003180004_equality_python.txt
Q: What should I use for the backend of a 'social' website? My two main requirements for the site are related to degrees of separation and graph matching (given two graphs, return some kind of similarity score). My first thought was to use MySql to do it, which would probably work out okay for storing how I want to manage 'friends' (similar to Twitter), but I'm thinking if I want to show users results which will make use of graphing algorithms (like shortest path between two people) maybe it isn't the way to go for that. My language of choice for the front end would be Python using something like Pylons but I haven't committed to anything specific yet and would be willing to budge if it fitted well with a good backend solution. I'm thinking of using MySQL for storing user profile data, neo4j for the graph information of relations between users and then have a Python application talk to both of them. Maybe there is a simpler/more efficient way to do this kind of thing. At the moment for me it's more getting a suitable prototype done than worrying about scalability but I'm willing to invest some time learning something new if it'll save me time rewriting/porting in the future. PS: I'm more of a programmer than a database designer, so I'd prefer having to rewrite the frontend later rather than say porting over the database, which is the main reason I'm looking for advice. A: Python/Django provides with Pinax a good framework for social websites. A: MySQL is really your best choice for the database unless you want to go proprietary. As for the actual language, pick whatever you are familiar with. While Youtube and Reddit are written in python, many of the other large sites use Ruby (Hulu, Twitter, Techcrunch) or C++ (Google) or PHP (Facebook, Yahoo, etc).
A: I realize this is probably not exactly the answer you are looking for, but I think it benefits mentioning for viewers/seekers of the question overall: BuddyPress is a well-honed "out of the box" social networking platform, built on top of WordPress (php, MySQL). It's a great system if you're not trying to do advanced things, and if you are trying to do advanced things you can still get pretty far without having to overload into some other deeper language like Python. http://buddypress.org/ A: I'd use a combination of Graph Database plus MySQL. The Graph database should only be used to efficiently store user information and relations. For everything else like user pages etc. I'd use MySQL.
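Before committing to neo4j, the degrees-of-separation requirement from the question can be prototyped with a plain breadth-first search over an adjacency dict; the friend data below is made up for illustration and stands in for whatever the graph store would return:

```python
from collections import deque

# Made-up friendship graph (adjacency dict); in the real system this
# data would live in MySQL or neo4j as discussed above.
friends = {
    'alice': ['bob'],
    'bob': ['alice', 'carol'],
    'carol': ['bob', 'dave'],
    'dave': ['carol'],
}

def degrees_of_separation(graph, start, goal):
    """Shortest friend-path length from start to goal, or None."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for friend in graph.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None

print(degrees_of_separation(friends, 'alice', 'dave'))  # 3
```

BFS gives shortest paths in an unweighted graph, which is exactly the "degrees of separation" measure; a graph database earns its keep once the graph no longer fits comfortably in memory.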
What should I use for the backend of a 'social' website?
My two main requirements for the site are related to degrees of separation and graph matching (given two graphs, return some kind of similarity score). My first thought was to use MySql to do it, which would probably work out okay for storing how I want to manage 'friends' (similar to Twitter), but I'm thinking if I want to show users results which will make use of graphing algorithms (like shortest path between two people) maybe it isn't the way to go for that. My language of choice for the front end would be Python using something like Pylons but I haven't committed to anything specific yet and would be willing to budge if it fitted well with a good backend solution. I'm thinking of using MySQL for storing user profile data, neo4j for the graph information of relations between users and then have a Python application talk to both of them. Maybe there is a simpler/more efficient way to do this kind of thing. At the moment for me it's more getting a suitable prototype done than worrying about scalability but I'm willing to invest some time learning something new if it'll save me time rewriting/porting in the future. PS: I'm more of a programmer than a database designer, so I'd prefer having to rewrite the frontend later rather than say porting over the database, which is the main reason I'm looking for advice.
[ "Python/Django provides with Pinax a good framework for social websites.\n", "MySQL is really your best choice for the database unless you want to go proprietary.\nAs for the actual language, pick whatever you are familiar with. While Youtube and Reddit are written in python, many of the other large sites use Rub...
[ 3, 2, 0, 0 ]
[]
[]
[ "database", "mysql", "python", "sql" ]
stackoverflow_0003126155_database_mysql_python_sql.txt
Q: Should I use Python or Assembly for a super fast copy program As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine. I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on what will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid.
(Yes, sometimes ignorance is bliss) The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power. Thanks for the early responses but I don't think it is I/O limitations. I am not going over a network, the drive is plugged into my motherboard with a sata connection and my Drobo is plugged into a Firewire port, my thinking is that both connections should allow faster transfer. Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk. I am bound by the hardware, I can't afford 10K rpm 2 terabyte drives (if they even make them). A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first, I have too many little files so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay off line until the new one is built. So I need to copy from one of the three back up single drive copies I have that I use with the LogicCube. At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives. I rent some cloud space from someone where my data is also stored as the deepest backup but it is expensive to pull it off of there.
A: Copying files is an I/O bound process. It is unlikely that you will see any speed up from rewriting it in assembly, and even multithreading may just cause things to go slower as different threads requesting different files at the same time will result in more disk seeks. Using a standard tool is probably the best way to go here. If there is anything to optimize, you might want to consider changing your file system or your hardware. A: There are 2 places for slowdown: Per-file copy is MUCH slower than a disk copy (where you literally clone 100% of each sector's data). Especially for 20mm files. You can't fix that one with the most tuned assembly, unless you switch from cloning files to cloning raw disk data. In the latter case, yes, Assembly is indeed your ticket (or C). Simply storing 20mm files and recursively finding them may be less efficient in Python. But that's more likely a function of finding better algorithm and is not likely to be significantly improved by Assembly. Plus, that will NOT be the main contributor to 50 hrs In summary - Assembly WILL help if you do raw disk sector copy, but will NOT help if you do filesystem level copy. A: As the other answers mention (+1 to mark), when copying files, disk i/o is the bottleneck. The language you use won't make much of a difference. How you've laid out your files will make a difference, how you're transferring data will make a difference. You mentioned copying to a DROBO. How is your DROBO connected? Check out this graph of connection speeds. Let's look at the max copy rates you can get over certain wire types: USB = 97 days (1.5 TB / 1.5 Mbps). Lame, at least your performance is not this bad. USB2.0 = ~7hrs (1.5 TB / 480 Mbps). Maybe LogicCube? Fast SCSI = ~40hrs (1.5 TB / 80 Mbps). Maybe your hard drive speed? 100 Mbps ethernet = 1.4 days (1.5 TB / 100 Mbps). So, depending on the constraints of your problem, it's possible you can't do better. 
But you may want to start doing a raw disk copy (like Unix's dd), which should be much faster than a file-system level copy (it's faster because there are no random disk seeks for directory walks or fragmented files). To use dd, you could live boot linux onto your machine (or maybe use cygwin?). See this page for reference or this one about backing up from windows using a live-boot of Ubuntu. If you were to organize your 1.5 TB data on a RAID, you could probably speed up the copy (because the disks will be reading in parallel), and (depending on the configuration) it'll have the added benefit of protecting you from drive failures. A: I don't think writing it in assembly will help you. Writing a routine in assembly could help you if you are processor-bound and think you can do something smarter than your compiler. But in a network copy, you will be IO bound, so shaving a cycle here or there almost certainly will not make a difference. I think the general rule here is that it's always best to profile your process to see where you are spending the time before thinking about optimizations.
In other words, copy it fast to another local disk, then slow from there across the net. You can also stage it in a different way. If you run a nightly (or even weekly) process to keep the copy up to date (only copying changed files) rather than three times a year, you won't find yourself in a situation where you have to copy a massive amount. Also if you're using the network, run it on the box where the repository is. You don't want to copy all the data from a remote disk to another PC then back to yet another remote disk. You may also want to be careful with Python. I may be mistaken (and no doubt the Pythonistas will set me straight if I'm mistaken on this count) but I have a vague recollection that its threading may not fully utilise multi-core CPUs. In that case, you'd be better off with another solution. You may well be better off sticking with your current solution. I suspect a specialised copy program will already be optimised as much as possible since that's what they do. A: There's no reason at all to write a copy program in assembly. The problem is with the amount of IO involved not CPU. Also, the copy function in python is already written in C by experts and you won't eke out any more speed writing one yourself in assembler. Lastly, threading won't help either, especially in python. Go with either Twisted or just use the new multiprocessing module in Python 2.6 and kick off a pool of processes to do the copies. Save yourself a lot of torment while getting the job done. A: Before you question the copying app, you should most likely question the data path. What are the theoretical limits and what are you achieving? What are the potential bottlenecks? If there is a single data path, you are probably not going to get a significant boost by parallelizing storage tasks. You may even exacerbate it. Most of the benefits you'll get with asynchronous I/O come at the block level - a level lower than the file system.
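The pool-of-workers suggestion above can be sketched without touching real data. This uses multiprocessing.dummy (a thread-backed Pool with the same API as multiprocessing.Pool, a reasonable stand-in here since file copying is I/O-bound), and all the paths are throwaway temp files invented for the demo:

```python
import os
import shutil
import tempfile
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool

def copy_one(pair):
    """Copy a single (src, dst) pair and return the destination path."""
    src, dst = pair
    shutil.copy2(src, dst)
    return dst

# Build a throwaway source tree for the demo.
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
jobs = []
for i in range(4):
    src = os.path.join(src_dir, 'file%d.txt' % i)
    with open(src, 'w') as f:
        f.write('data %d' % i)
    jobs.append((src, os.path.join(dst_dir, 'file%d.txt' % i)))

pool = Pool(4)                  # four concurrent copy workers
copied = pool.map(copy_one, jobs)
pool.close()
pool.join()
print(len(copied))
```

Swapping in a real multiprocessing.Pool only needs the usual `if __name__ == '__main__':` guard; whether parallelism helps at all depends on the single-disk seek bottleneck discussed in the answers.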
One thing you could do to boost I/O is decouple the fetch from source and store to destination portions. Assuming that the source and destination are separate entities, you could theoretically halve the amount of time for the process. But are the standard tools already doing this?? Oh - and on Python and the GIL - with I/O-bound execution, the GIL is really not quite that bad of a penalty. A: RICHCOPY is already copying files in parallel, and I expect the only way to beat it is to get in bed with the filesystem so that you minimize disk I/O, especially seeking. I suggest you try ntfsclone to see if it meets your needs. If not, my next suggestion would be to parallelize ntfsclone. In any case, working directly with filesystem layout on disk is going to be easiest in C, not Python and certainly not assembly. Especially since you can get started by using the C code from the NTFS 3G project. This code is designed for reliability and ease of porting, not performance, but it's still probably the easiest way to get started. My time is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. No. Or more accurately, at your current level of mastery of systems programming, achieving significant improvements in speed will be prohibitively expensive. What you're asking for requires very specialized expertise. Although I myself have prior experience in implementing filesystems (much simpler ones than NTFS, XFS, or ext2), I would not tackle this job; I would hire it done. Footnote: if you have access to a Linux box, find out what raw write bandwidth you can get to the target drive: time dd if=/dev/zero of=/dev/sdc bs=1024k count=100 will give you the time to write 100MB sequentially in the fastest possible way. That will give you an absolute limit on what is possible with your hardware. Don't try this without understanding the man page for dd! dd stands for "destroy data".
(Actually it stands for "copy and convert", but cc was taken.) A Windows programmer can probably point you to an equivalent test for Windows. A: Right, here the bottleneck is not in the execution of the copying software itself but rather the disk access. Going lower level does not mean that you will have better performance. Take a simple example of the open() and fopen() APIs, where open() is much lower level and more direct, and fopen() is a library wrapper for the system open() function. But in reality fopen has better performance because it adds buffering and optimizes a lot of stuff that is not done in the raw open() function. Implementing optimizations at assembly level is much harder and less efficient than in python. A: 1.5 TB in approximately 50 hours gives a throughput of (1.5 * 1024^2) MB / (50 * 60^2) s = 8.7 MB/s. A theoretical 100 mbit/s bandwidth should give you 12.5 MB/s. It seems to me that your firewire connection is a problem. You should look at upgrading drivers, or upgrading to a better firewire/esata/usb interface. That said, rather than the python/assembly question, you should look at acquiring a file syncing solution. It shouldn't be necessary to copy that data over and over again. A: As already said, it is not the language here to make the difference; assembly could be cool or fast for computations, but when the processor has to "speak" to peripherals, the limit is given by these. In this case the speed is given by your hard disk speed, and this is a limit you hardly can change without changing your hd and waiting for better hd in future, but also by the way data are organized on the disk, i.e. by the filesystem. AFAIK, most used filesystems are not optimized to quickly handle tons of "small" files, rather they are optimized to hold "few" huge files. So, changing the filesystem you're using could increase your copy speed, as far as it is more suitable to your case (and of course hd limits still apply!).
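The throughput arithmetic in that answer can be re-derived in a couple of lines (pure arithmetic, no assumptions beyond 1 TB = 1024^2 MB):

```python
# Re-deriving the answer's numbers.
total_mb = 1.5 * 1024 ** 2      # 1.5 TB expressed in MB
seconds = 50 * 60 ** 2          # 50 hours in seconds
actual = total_mb / seconds     # observed throughput in MB/s
theoretical = 100 / 8.0         # 100 mbit/s expressed in MB/s
print(round(actual, 1), theoretical)
```

So the observed ~8.7 MB/s is well under even a 100 mbit/s link's 12.5 MB/s, which is what points the answer at the interface rather than the software.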
If you want to "taste" the real limit of your hd, you should try a copy "sector by sector", replicating the exact image of your source hd to the dest hd. (But this option has some points to be aware of) A: Since I posted the question I have been playing around with some things and I think first off, not to be argumentative but those of you that have been posting the response that I am i/o bound are only partially correct. It is seek time that is the constraint. Long story short: to test various options I built a new machine with an I-7 processor and a reasonably powerful/functional motherboard and then using the same two drives I was working with before I noted a fairly significant increase in speed. I also noted that when I am moving big files (one gigabyte or so) I get sustained transfer speeds of in excess of 50 mb/s and the speed drops significantly when moving small files. I think the speed difference is due to an unordered disk relative to the way the copy program reads the directory structure to determine the files to copy. What I think needs to be done is to 1: Read the MFT and sort by sector working from the outside to the inside of the platter (it means I have to figure out how multi-platter disks work) 2: Analyze and separate all contiguous versus non-contiguous files. I would handle the contiguous files first and go back to handle the non-contiguous files 3: start copying the contiguous files from the outside to inside 4: When finished copy the non-contiguous files, by default they will end up on the inner rings of the platter(s) and they will be contiguous. (I want to note that I do regularly defragment and have less than 1% of my files/directories fragmented) but 1% of 20 million is still 200K Why is this better than just running a copy program? When running a copy program the program is going to use some internal ordering mechanism to determine the copy order.
Windows uses alphabetic (more or less) I imagine others do something similar but that order may not (in my case probably does not) conform to the way the files were initially laid on the disk which is what I believe is the biggest factor that affects copy speed. The problem with a sector-copy is it does not fix anything and so when I migrate across disk sizes and add data I end up with new problems to handle. If I do this right I should be able to check file headers and the eof record and do some housekeeping. CHKDSK is a great program but kind of dumb. When I do get file/folder corruption it is really hard to identify what was lost, by building my own copy program I could include a maintenance cycle that I could invoke when I want to run some tests on the files during copying. This might slow it up some but I don't think very much because the CPU is going to move the files much faster than they can get pulled or written. And even if it slows it up some when being run, at least I get some control (maybe understanding is a better word) of the problems that will invariably crop up in an imperfect world. I may not have to do this in A, I have been looking around for ways to play (read) the MFT and there are even Python tools for this; see http://www.integriography.com A: Neither. If you want to take advantage of OS features to speed up I/O, you'll need to use some specialized system calls that are most easily accessed in C (or C++). You don't need to know a lot of C to write such a program, but you really need to know the system call interfaces. In all likelihood, you can solve the problem without writing any code by using an existing tool or tuning the operating system, but if you really do need to write a tool, C is the most straightforward way to do it.
Should I use Python or Assembly for a super fast copy program
As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest, but I do not believe I am getting close to the limits of the capabilities of my XP machine. I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss) The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over.
For example, last week the water main broke under our building and shorted out the power. Thanks for the early responses, but I don't think it is I/O limitations. I am not going over a network, the drive is plugged into my motherboard with a SATA connection and my Drobo is plugged into a Firewire port; my thinking is that both connections should allow faster transfer. Actually I can't use a sector copy except going from a single disk to the Drobo. It won't work the other way since the Drobo file structure is a mystery. My unscientific observation is that the copy from one internal disk to another is no faster than a copy to or from the Drobo to an internal disk. I am bound by the hardware; I can't afford 10K rpm 2 terabyte drives (if they even make them). A number of you are suggesting a file synching solution. But that does not solve my problem. First off, the file synching solutions I have played with build a map (for want of a better term) of the data first; I have too many little files, so they choke. One of the reasons I use RICHCOPY is that it starts copying immediately, it does not use memory to build a map. Second, I had one of my three Drobo backups fail a couple of weeks ago. My rule is if I have a backup failure the other two have to stay offline until the new one is built. So I need to copy from one of the three backup single-drive copies I have that I use with the LogicCube. At the end of the day I have to have a good copy on a single drive because that is what I deliver to my clients. Because my clients have diverse systems I deliver to them on SATA drives. I rent some cloud space from someone where my data is also stored as the deepest backup, but it is expensive to pull it off of there.
[ "Copying files is an I/O bound process. It is unlikely that you will see any speed up from rewriting it in assembly, and even multithreading may just cause things to go slower as different threads requesting different files at the same time will result in more disk seeks.\nUsing a standard tool is probably the best...
[ 42, 8, 8, 5, 4, 2, 2, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "assembly", "python" ]
stackoverflow_0002982829_assembly_python.txt
Q: Microsoft Powerpoint Python Parser I am looking for a Python-based Microsoft Office parser - specifically PowerPoint. I want to be able to parse PPT in Python and extract things like text and images from the PowerPoint file. Is there a library available? A: I don't think there is such a library. What you can do is use the pywin32 package to access PowerPoint's COM. Here is a very nice introduction to using the win32com module to automate tasks in PowerPoint that someone has written: http://www.s-anand.net/blog/automating-powerpoint-with-python/ A: You might find such a beast, but I'd bet against it; you're looking for two rare properties together. You might consider instead using the Open Office SDK, which already has vast amounts of machinery to read PowerPoint files, and abusing it for your purposes. This is all Java, not Python, but my guess is the learning curve to learn Java is much smaller than the learning curve to figure out how to read PowerPoint files.
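The win32com route from the first answer can be sketched roughly as follows. It is Windows-only, needs PowerPoint and pywin32 installed, and the attribute names come from PowerPoint's COM object model, so treat this as a hedged starting point rather than a tested recipe.

```python
def extract_ppt_text(path):
    """Open a presentation via PowerPoint's COM automation interface
    and return the text of every shape that has a text frame.
    Windows-only: requires PowerPoint and the pywin32 package."""
    import win32com.client  # imported lazily: only available on Windows

    app = win32com.client.Dispatch("PowerPoint.Application")
    texts = []
    pres = app.Presentations.Open(path, WithWindow=False)
    try:
        for slide in pres.Slides:
            for shape in slide.Shapes:
                if shape.HasTextFrame and shape.TextFrame.HasText:
                    texts.append(shape.TextFrame.TextRange.Text)
    finally:
        pres.Close()
        app.Quit()
    return texts
```

Extracting images is messier over COM; shape.Export is one possible avenue, but the approach above only covers text.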
Microsoft Powerpoint Python Parser
I am looking for a Python-based Microsoft Office parser - specifically PowerPoint. I want to be able to parse PPT in Python and extract things like text and images from the PowerPoint file. Is there a library available?
[ "I don't think there is such a library.\nWhat you can do is use pywin32 package to access PowerPoint's COM.\nHere is a very nice introduction to using the win32com module to automate tasks in PowerPoint someone has written:\nhttp://www.s-anand.net/blog/automating-powerpoint-with-python/\n", "You might find such a...
[ 5, 3 ]
[]
[]
[ "parsing", "powerpoint", "python" ]
stackoverflow_0003181286_parsing_powerpoint_python.txt
Q: Parsing arbitrary number of arguments in Python OptParser How can I define an option with an arbitrary number of arguments in Python's OptParser? I'd like something like: python my_program.py --my-option X,Y # one argument passed, "X,Y" python my_program.py --my-option X,Y Z,W # two arguments passed, "X,Y" and "Z,W" the nargs= option of OptParser limits me to a defined number. How can I do something like this? parser.add_option("--my-options", dest="my_options", action="append", nargs="*") which will simply take whatever's after --my-option and put it into a list? E.g. for case 1, it should be ["X,Y"], for case 2 it should be ["X,Y", "Z,W"]. What's a way to do this with OptParser? thanks. A: The optparse module is deprecated in Python 2.7 (which has just been released!). If you can upgrade, then you can use its replacement, the argparse module. I think that has what you want. It supports a '*' value for nargs. http://docs.python.org/library/argparse.html#nargs A: Have you tried omitting the nargs bit? The examples in the docs suggest you don't need it.
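For what it's worth, the argparse suggestion in the first answer would look something like this sketch (argparse turns --my-option into the attribute my_option):

```python
import argparse

# nargs='*' collects every following token into one list, which is
# what the question's action="append" / nargs="*" combination was after.
parser = argparse.ArgumentParser()
parser.add_argument('--my-option', nargs='*', default=[])

args = parser.parse_args(['--my-option', 'X,Y', 'Z,W'])
print(args.my_option)  # ['X,Y', 'Z,W']
```

With a single value, parser.parse_args(['--my-option', 'X,Y']) yields ['X,Y'], matching case 1 from the question.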
Parsing arbitrary number of arguments in Python OptParser
How can I define an option with an arbitrary number of arguments in Python's OptParser? I'd like something like: python my_program.py --my-option X,Y # one argument passed, "X,Y" python my_program.py --my-option X,Y Z,W # two arguments passed, "X,Y" and "Z,W" the nargs= option of OptParser limits me to a defined number. How can I do something like this? parser.add_option("--my-options", dest="my_options", action="append", nargs="*") which will simply take whatever's after --my-option and put it into a list? E.g. for case 1, it should be ["X,Y"], for case 2 it should be ["X,Y", "Z,W"]. What's a way to do this with OptParser? thanks.
[ "The optarse module is deprecated in python 2.7 (which has just been released!). If you can upgrade, then you can use its replacement the argparse module. I think that has what you want. It supports a '*' value for nargs.\nhttp://docs.python.org/library/argparse.html#nargs\n", "Have you tried ommitting the n_args...
[ 3, 0 ]
[]
[]
[ "command_line", "optparse", "python" ]
stackoverflow_0003181360_command_line_optparse_python.txt
Q: Multiple application entry points Recently I was trying to add unit tests to an existing binary by creating an extra (DllMain) entry point in an application that already has a main entry point (it is a console exe). The application seemed to compile correctly, although I was unable to use it as a DLL from my Python unit test framework; all attempts to use the exe as a dll failed. Has anyone any ideas or experience in adding an extra application entry point, with any input as to why this would or wouldn't work? A: There are some problems which you should solve to implement what you want: The exe must have a relocation table (use linker switch /FIXED:NO) The exe must export at least one function - it's clear how to do this. I recommend using DUMPBIN.EXE with a few switches (/headers, /exports, and with no switches at all) to examine the exe headers. You can compare the structure of your application with Winword.exe or outlook.exe, which export some functions. If all this does not help, I'll try to write a test EXE application which can be loaded as an exe and post the code here. UPDATED: I have just verified my suggestion. It works. The file Loadable.c looks like the following: #include <windows.h> #include <stdio.h> EXTERN_C int __declspec(dllexport) WINAPI Sum (int x, int y); EXTERN_C int __declspec(dllexport) WINAPI Sum (int x, int y) { return x + y; } int main() { printf ("2+3=%d\n", Sum(2,3)); } The only important linker switch is /FIXED:NO, which one can find in the advanced part of the linker settings. The program can run and produces the output "2+3=5". Another EXE loads the EXE as a DLL and calls the Sum function: #include <windows.h> #include <stdio.h> typedef int (WINAPI *PFN_SUM) (int x, int y); int main() { HMODULE hModule = LoadLibrary (TEXT("C:\\Oleg\\ExeAsDll\\Loadable.exe")); PFN_SUM fnSum = (PFN_SUM) GetProcAddress (hModule, "_Sum@8"); int res = fnSum (5,4); printf ("5+4=%d\n", res); return 0; } This program also runs and produces the output "5+4=9".
A: I don't know for sure, but I would guess that Windows simply refuses to load an EXE in-process and a DLL as a new process, plain and simple. These questions appear to contain more detail: Can the DllMain of an .exe be called? DllMain in an exe? The simplest way to get both behaviours in one executable image is to design it as a DLL, then use rundll32.exe to execute it standalone. There's no need to write your own wrapper.
Multiple application entry points
Recently I was trying to add unit tests to an existing binary by creating an extra (DllMain) entry point in an application that already has a main entry point (it is a console exe). The application seemed to compile correctly, although I was unable to use it as a DLL from my Python unit test framework; all attempts to use the exe as a dll failed. Has anyone any ideas or experience in adding an extra application entry point, with any input as to why this would or wouldn't work?
[ "There are some problems which you should solve to implement what you want:\n\nThe exe must have relocation table (use linker switch /FIXED:NO)\nThe exe must exports at least one function - it's clear how to do this.\n\nI recommend use DUMPBIN.EXE with no some switches (/headers, /exports and without switches) to e...
[ 3, 1 ]
[]
[]
[ "c++", "dll", "python", "unit_testing", "windows" ]
stackoverflow_0003178877_c++_dll_python_unit_testing_windows.txt
Q: HTML Tag Cloud in Python I am looking for a simple library which can be given a set of item:value pairs and which can generate a tag cloud as output. The library should preferably be in Python A: Define font sizes in your CSS file. Use classes from size-0{ font-size: 11px; } size-1{ font-size: 12px; } etc. up to the font size you need. And then simply use this snippet: CSS_SIZES = range(1, 7) # 1,2...6 for use in your css-file size-1, size-2, etc. TAGS = { 'python' : 28059, 'html' : 19160, 'tag-cloud' : 40, } MAX = max(TAGS.values()) # Needed to calculate the steps for the font-size STEP = MAX / len(CSS_SIZES) for tag, count in TAGS.items(): css = count / STEP print '<a href="%s" class="size-%s">%s</a>' % (tag, css, tag), That's all. No need for a library ;-)
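A slightly more defensive version of the snippet above, as a hedged sketch: with the straight integer division shown there, a small count can land in a size-0 bucket the CSS_SIZES comment excludes, and for some maxima the top count can overflow past the last class, so this variant clamps every tag into size-1 .. size-n (the class names follow the CSS above):

```python
def tag_cloud(tags, n_sizes=6):
    """Map each tag's count onto a CSS class size-1 .. size-<n_sizes>
    with simple linear scaling; min() keeps the biggest tag inside
    the last bucket and the +1 keeps the smallest out of size-0."""
    max_count = max(tags.values())
    html = []
    for tag, count in sorted(tags.items()):
        size = min(n_sizes, 1 + count * n_sizes // (max_count + 1))
        html.append('<a href="%s" class="size-%d">%s</a>' % (tag, size, tag))
    return ' '.join(html)

print(tag_cloud({'python': 28059, 'html': 19160, 'tag-cloud': 40}))
```

Logarithmic scaling is another common choice when counts span several orders of magnitude, but linear matches the original snippet.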
HTML Tag Cloud in Python
I am looking for a simple library which can be given a set of item:value pairs and which can generate a tag cloud as output. The library should preferably be in Python
[ "Define font-sizes in your css-file. Use classes from \nsize-0{\n font-size: 11px;\n}\n\nsize-1{\n font-size: 12px;\n}\n\netc. up to the font-size you need. \nAnd then simply use this snippet:\nCSS_SIZES = range(1, 7) # 1,2...6 for use in your css-file size-1, size-2, etc.\n\nTAGS = {\n 'python' : 28059,\n ...
[ 5 ]
[]
[]
[ "html", "python", "tag_cloud" ]
stackoverflow_0003180779_html_python_tag_cloud.txt
Q: sqlalchemy's create_all doesn't create sequences automatically I am using SQLAlchemy 0.4.8 with Postgres in order to manage my datastore. Until now, it's been fairly easy to automatically deploy my database: I was using metadata.create_all(bind=engine) and everything worked just fine. But now I am trying to create a sequence that is not being used by any table, so create_all() doesn't create it, even though it's defined correctly: Sequence('my_seq', metadata=myMetadata). Any thoughts on how I could make this work? P.S. It's not possible at the moment to upgrade to a newer version of SQLAlchemy. A: You could create it by calling its own Sequence.create method: my_seq = Sequence('my_seq', metadata=myMetadata) # ... metadata.create_all(bind=engine) # @note: create unused objects explicitly my_seq.create(bind=engine) # ...
sqlalchemy's create_all doesn't create sequences automatically
I am using SQLAlchemy 0.4.8 with Postgres in order to manage my datastore. Until now, it's been fairly easy to automatically deploy my database: I was using metadata.create_all(bind=engine) and everything worked just fine. But now I am trying to create a sequence that is not being used by any table, so create_all() doesn't create it, even though it's defined correctly: Sequence('my_seq', metadata=myMetadata). Any thoughts on how I could make this work? P.S. It's not possible at the moment to upgrade to a newer version of SQLAlchemy.
[ "Could you call the create it by using its own Sequence.create method:\nmy_seq = Sequence('my_seq', metadata=myMetadata)\n# ...\nmetadata.create_all(bind=engine)\n# @note: create unused objects explicitly\nmy_seq.create(bind=engine)\n# ...\n\n" ]
[ 6 ]
[]
[]
[ "postgresql", "python", "sqlalchemy" ]
stackoverflow_0003175028_postgresql_python_sqlalchemy.txt
Q: Python boolean expression and or In python if you write something like foo==bar and spam or eggs python appears to return spam if the boolean statement is true and eggs otherwise. Could someone explain this behaviour? Why is the expression not being evaluated like one long boolean? Edit: Specifically, I'm trying to figure out the mechanism why 'spam' or 'eggs' is being returned as the result of the expression. A: The operators and and or are short-circuiting which means that if the result of the expression can be deduced from evaluating only the first operand, the second is not evaluated. For example if you have the expression a or b and a evaluates to true then it doesn't matter what b is, the result of the expression is true so b is not evaluated. They actually work as follows: a and b: If a is falsey, b is not evaluated and a is returned, otherwise b is returned. a or b: If a is truthy, b is not evaluated and a is returned, otherwise b is returned. Falsey and truthy refer to values that evaluate to false or true in a boolean context. However this and/or idiom was useful back in the days when there was no better alternative, but now there is a better way: spam if foo==bar else eggs The problem with the and/or idiom (apart from it being confusing to beginners) is that it gives the wrong result if the condition is true but spam evaluates to a falsey value (e.g. the empty string). For this reason you should avoid it. A: This is how the Python boolean operators work. From the documentation (the last paragraph explains why it is a good idea that the operators work the way they do): In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: False, None, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. 
(See the __nonzero__() special method for a way to change this.) The operator not yields True if its argument is false, False otherwise. The expression x and y first evaluates x; if x is false, its value is returned; otherwise, y is evaluated and the resulting value is returned. The expression x or y first evaluates x; if x is true, its value is returned; otherwise, y is evaluated and the resulting value is returned. (Note that neither and nor or restrict the value and type they return to False and True, but rather return the last evaluated argument. This is sometimes useful, e.g., if s is a string that should be replaced by a default value if it is empty, the expression s or 'foo' yields the desired value. Because not has to invent a value anyway, it does not bother to return a value of the same type as its argument, so e.g., not 'foo' yields False, not ''.) A: The reason is that Python evaluates boolean expressions using the actual values of the variables involved, instead of restricting them to True and False values. The following values are considered to be false: None False 0 of any numeric type empty sequence or set ('', (), [], {}) user-defined types with a __nonzero__() or __len__() method that returns 0 or False See the Truth Value Testing section of the Python documentation for more information. In particular: Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. (Important exception: the Boolean operations or and and always return one of their operands.) A: Try using parentheses to make the expression unambiguous. The way it is, you're getting: (foo == bar and spam) or eggs
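The operand-returning behaviour described in these answers, and the pitfall that makes the conditional expression preferable, can be seen directly:

```python
# and/or return one of their operands, not True/False
assert ('a' == 'a' and 'spam' or 'eggs') == 'spam'
assert ('a' == 'b' and 'spam' or 'eggs') == 'eggs'

# the pitfall: a falsey "true branch" falls through to the or
assert ('a' == 'a' and '' or 'eggs') == 'eggs'   # wanted '', got 'eggs'

# the conditional expression does not have this problem
assert ('' if 'a' == 'a' else 'eggs') == ''

# short-circuiting also makes "x or default" a common idiom
name = '' or 'anonymous'
print(name)  # anonymous
```

The asserts pass because and returns its second operand when the first is truthy, and or returns its first truthy operand.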
Python boolean expression and or
In python if you write something like foo==bar and spam or eggs python appears to return spam if the boolean statement is true and eggs otherwise. Could someone explain this behaviour? Why is the expression not being evaluated like one long boolean? Edit: Specifically, I'm trying to figure out the mechanism why 'spam' or 'eggs' is being returned as the result of the expression.
[ "The operators and and or are short-circuiting which means that if the result of the expression can be deduced from evaluating only the first operand, the second is not evaluated. For example if you have the expression a or b and a evaluates to true then it doesn't matter what b is, the result of the expression is ...
[ 20, 6, 3, 2 ]
[]
[]
[ "boolean_expression", "python", "syntax" ]
stackoverflow_0003181901_boolean_expression_python_syntax.txt